I am running a very basic blogging app using Flask. It runs fine when I run it with Docker, i.e. docker run -it -d -p 5000:5000 app.
* Serving Flask app 'app' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://10.138.0.96:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 144-234-816
This runs on my localhost:5000 just fine.
But when I deploy this in Minikube, it says
This site can’t be reached 34.105.79.215 refused to connect.
I use this workflow in Kubernetes
$ eval $(minikube docker-env)
$ docker build -t app:latest .
$ kubectl apply -f deployment.yaml (contains deployment & service)
kubectl logs app-7bf8f865cc-gb9fl returns
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://172.17.0.3:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 713-503-298
Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . /app
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install python3 && apt-get -y install python3-pip
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py"]
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
    - protocol: "TCP"
      port: 5000
      targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: app:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
Also, I noticed that when the app runs as a plain Docker container, docker ps shows PORTS as
0.0.0.0:5000->5000/tcp, but for the Minikube container it shows
127.0.0.1:32792->22/tcp, 127.0.0.1:32791->2376/tcp, 127.0.0.1:32790->5000/tcp, 127.0.0.1:32789->8443/tcp, 127.0.0.1:32788->32443/tcp
The port: on a Service only controls the internal port, the one that's part of the ClusterIP service. By default the node port is randomly assigned from the available range. This is because, while the port value only has to be unique within the Service itself (you couldn't have the same port go to two places, that would make no sense), node ports are a global resource and have to be globally unique. You can override it via nodePort: whatever in the Service definition, but I wouldn't recommend it.
Minikube includes a helper to manage this for you: run minikube service app-service and it will open the URL in your browser, mapped through the correct node port.
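For illustration, here is a rough sketch of how you could inspect the assigned node port yourself and get a browsable URL; the service name comes from the manifest above, while the port and IP values are placeholders, not real output:
# Show which node port Kubernetes picked for the Service
kubectl get service app-service
# NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
# app-service   LoadBalancer   10.96.123.45   <pending>     5000:31537/TCP   2m
# Ask minikube for a URL that already combines the node IP and the node port
minikube service app-service --url
# http://192.168.49.2:31537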
Related
What do I need to do in order to get my local browser to request a resource from a web service running inside a Minikube instance running locally on my machine?
I am getting a Connection refused when trying to kubectl port-forward.
My workflow is:
Create a Dockerfile with the web service in it
Start Minikube in Docker
Build the Docker image
Load the image locally into Minikube
Create a deployment with one container and a NodePort service
Apply the deployment/service
Run kubectl port-forward (to hopefully forward requests to my container)
Open a browser at 127.0.0.1:31000
Port Configuration Summary
Dockerfile:
Expose: 80
uvicorn: 80
Deployment
NodePort Service:
Port: 80
Target Port: 80
Node Port: 31000
Kubectl Command: 8500:31000
Browser: 127.0.0.1:8500
Setup and run through
dev.dockerfile (Step 1)
# Some Debian Python image... I built my own
FROM python:3.11-buster
COPY ../sources/api/ /app/
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
ENV PYTHONPATH=/app/
EXPOSE 80
CMD ["uvicorn", "app.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "80"]
Build Sequence (Steps 2 to 4)
# 2 - start minikube
minikube start --bootstrapper=kubeadm --vm-driver=docker
minikube docker-env
## 3 - build image
docker build -f ../../service1/deploy/dev.dockerfile ../../service1 -t acme-app.service1:latest
## 4 - load image into minikube
minikube image load acme-app.service1:latest
Deployment (Step 5 and 6)
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acme-service-1-deployment
  namespace: acme-app-dev
  labels:
    app: service-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-1
  template:
    metadata:
      labels:
        app: service-1
    spec:
      containers:
        - name: service1-container
          image: docker.io/library/acme-app.service1:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-1-service
  namespace: acme-app-dev
spec:
  type: NodePort
  selector:
    app: service-1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
Deploy
kubectl apply -f deployment.yaml
kubectl port-forward (Step 7)
Find Pod
kubectl get pods -n acme-app-dev
NAME READY STATUS RESTARTS AGE
acme-service-1-deployment-76748d7ff6-llcsr 1/1 Running 0 11s
Port Forward to pod
kubectl port-forward acme-service-1-deployment-76748d7ff6-llcsr 8500:31000 -n acme-app-dev
Forwarding from 127.0.0.1:8500 -> 31000
Forwarding from [::1]:8500 -> 31000
Test in Browser (Step 8)
Open your favorite browser and navigate to 127.0.0.1:8500.
The console running the port forward now outputs:
E0123 14:54:16.208010 25932 portforward.go:406] an error occurred forwarding 8500 -> 31000: error forwarding port 31000 to pod d4c0fa6cb16ce02335a05cad904fbf2ab7818e2073d7c7ded8ad05f193aa37e7, uid : exit status 1: 2023/01/23 14:54:16 socat[39370] E connect(5, AF=2 127.0.0.1:31000, 16): Connection refused
E0123 14:54:16.213268 25932 portforward.go:234] lost connection to pod
What have I looked at?
I've tried looking through the docs on kubernetes website as well as issues on here (yes there are similar). This is pretty similar - although no marked answer and still an issue by the looks of it. I couldn't see a solution for my issue here.
NodePort exposed Port connection refused
I am running Minikube on Windows and I'm just setting out on a kubernetes journey.
The image itself works in docker from a docker compose. I can see the pod is up and running in minikube from the logs (minikube dashboard).
You got your wires crossed:
The pod is listening on port 80
The NodePort service is listening on port 31000 on the node, but its underlying ClusterIP service is listening on port 80 as well.
You are trying to port-forward to port 31000 on the Pod. This will not work.
Call one of the following instead:
kubectl port-forward -n acme-app-dev deploy/acme-service-1-deployment 8500:80
or kubectl port-forward -n acme-app-dev service/service-1-service 8500:80
or use minikube service -n acme-app-dev service-1-service and use the provided URL.
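As a quick, hedged sanity check (names come from the manifests above; the output shown is a placeholder, not captured from a real cluster), you can confirm the Service wiring and then hit the forwarded port locally:
# The Service maps node port 31000 on the node to port 80 on the pod
kubectl get service -n acme-app-dev service-1-service
# NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# service-1-service   NodePort   10.96.200.10   <none>        80:31000/TCP   5m
# With one of the port-forwards above running, the app should answer here
curl http://127.0.0.1:8500/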
I have created a Docker image with Debian + Python/Django that runs on port 8000. But after deploying it to Azure AKS, the URL path is not working on port 8000. The important details are below.
Step 1:
Dockerfile :
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Step 2:
After building docker image, pushing it to azure registry.
Step 3:
myfile.yaml : this is to deploy azure registry file into aks cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myops
  template:
    metadata:
      labels:
        app: myops
    spec:
      containers:
        - name: myops
          image: quantumregistry.azurecr.io/myops:v1.0
          ports:
            - containerPort: 8000
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: myops-python
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8888
  selector:
    app: myops
# [END service]
Deploy into aks : kubectl apply -f myops.yaml
Step 4: check the service
kubectl get service myops-python --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myops-python LoadBalancer <cluster-ip> <external-ip> 8000:30778/TCP 37m
Note: I have masked the IPs so as not to expose them publicly.
Step 5: I see the container is running fine.
kubectl get pods
NAME READY STATUS RESTARTS AGE
myops-5bbd459745-cz2vc 1/1 Running 0 19m
Step 6: The container log shows that Python is serving on host 0.0.0.0, port 8000.
kubectl logs -f myops-5bbd459745-cz2vc
Watching for file changes with StatReloader
Performing system checks...
WARNING:param.main: pandas could not register all extension types imports failed with the following error: cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic' (/usr/local/lib/python3.9/site-packages/pandas/core/dtypes/generic.py)
System check identified no issues (0 silenced).
September 19, 2021 - 06:47:57
Django version 3.2.5, using settings 'myops_project.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
The issue is that when I open http://<external-ip>:8000/myops_app in the browser, it is not working and times out.
The Service myops-python is set up to receive requests on port 8000 but then it will send the request to the pod on target port 8888.
ports:
  - port: 8000
    targetPort: 8888
The container myops in the Pod myops, however, is not listening on port 8888. Rather it is listening on port 8000.
Dockerfile:
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Please set spec.ports[0].targetPort to 8000 manually or remove targetPort from spec.ports[0] in the Service myops-python. By default and for convenience, the targetPort is set to the same value as the port field. For more information please see Defining a Service.
Tip: You can use kubectl edit service <service-name> -n <namespace> to edit your Service manifest.
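As a rough verification after the change (the pod IP below is just a placeholder), the Service's endpoints should now point at port 8000 on the pod:
kubectl get endpoints myops-python
# NAME           ENDPOINTS          AGE
# myops-python   10.244.1.23:8000   40m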
I saw the example for docker healthcheck of RabbitMQ at docker-library/healthcheck.
I would like to apply a similar mechanism to my Kubernetes deployment to wait for RabbitMQ readiness. I'm doing a similar thing with MongoDB, using a container that busy-waits for Mongo with a ping command.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      initContainers:
        - name: wait-for-mongo
          image: gcr.io/app-1/tools/mongo-ping
      containers:
        - name: app-1-service
          image: gcr.io/app-1/service
          ...
However, when I tried to construct such an init container, I couldn't find any solution for querying the health of RabbitMQ from outside its cluster.
The following works without any extra images/scripts, but requires you to enable the Management Plugin, e.g. by using the rabbitmq:3.8-management image instead of rabbitmq:3.8.
initContainers:
  - name: check-rabbitmq-ready
    image: busybox
    command: ['sh', '-c',
      'until wget http://guest:guest@rabbitmq:15672/api/aliveness-test/%2F;
      do echo waiting for rabbitmq; sleep 2; done;']
Specifically, this is waiting until the HTTP Management API is available, and then checking that the default vhost is running healthily. The %2F refers to the default / vhost, which has to be urlencoded. If using your own vhost, enter that instead.
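To get a feel for what the init container is polling, you can hit the same endpoint by hand; this sketch assumes the default guest credentials and a Service named rabbitmq, as above, and a healthy vhost should answer with a small JSON status:
curl -u guest:guest http://rabbitmq:15672/api/aliveness-test/%2F
# {"status":"ok"}
# For a vhost named myvhost the path would end in /api/aliveness-test/myvhost instead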
Adapted from this example, as suggested by @Hanx:
Dockerfile
FROM python:3-alpine
ENV RABBIT_HOST="my-rabbit"
ENV RABBIT_VHOST="vhost"
ENV RABBIT_USERNAME="root"
RUN pip install pika
COPY check_rabbitmq_connection.py /check_rabbitmq_connection.py
RUN chmod +x /check_rabbitmq_connection.py
CMD ["sh", "-c", "python /check_rabbitmq_connection.py --host $RABBIT_HOST --username $RABBIT_USERNAME --password $RABBIT_PASSWORD --virtual_host $RABBIT_VHOST"]
check_rabbitmq_connection.py
#!/usr/bin/env python3
# Check connection to the RabbitMQ server
# Source: https://blog.sleeplessbeastie.eu/2017/07/10/how-to-check-connection-to-the-rabbitmq-message-broker/
import argparse
import time
import pika
# define and parse command-line options
parser = argparse.ArgumentParser(description='Check connection to RabbitMQ server')
parser.add_argument('--host', required=True, help='Define RabbitMQ server hostname')
parser.add_argument('--virtual_host', default='/', help='Define virtual host')
parser.add_argument('--port', type=int, default=5672, help='Define port (default: %(default)s)')
parser.add_argument('--username', default='guest', help='Define username (default: %(default)s)')
parser.add_argument('--password', default='guest', help='Define password (default: %(default)s)')
args = vars(parser.parse_args())
print(args)
# set amqp credentials
credentials = pika.PlainCredentials(args['username'], args['password'])
# set amqp connection parameters
parameters = pika.ConnectionParameters(host=args['host'], port=args['port'], virtual_host=args['virtual_host'], credentials=credentials)
# try to establish connection and check its status
while True:
    try:
        connection = pika.BlockingConnection(parameters)
        if connection.is_open:
            print('OK')
            connection.close()
            exit(0)
    except Exception as error:
        print('No connection yet:', error.__class__.__name__)
        time.sleep(5)
Build and run:
docker build -t rabbit-ping .
docker run --rm -it \
--name rabbit-ping \
--net=my-net \
-e RABBIT_PASSWORD="<rabbit password>" \
rabbit-ping
and in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      initContainers:
        - name: wait-for-rabbit
          image: gcr.io/my-org/rabbit-ping
          env:
            - name: RABBIT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rabbit
                  key: rabbit-password
      containers:
        ...
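As a sketch of what to expect (the pod name and timings are placeholders), while RabbitMQ is still starting the application pod should sit in the Init phase, then flip to Running once the check in the init container succeeds:
kubectl get pods
# NAME                    READY   STATUS     RESTARTS   AGE
# app-1-6c9f9d7b9-x2k4q   0/1     Init:0/1   0          30s
# ...and a little later...
# app-1-6c9f9d7b9-x2k4q   1/1     Running    0          90s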
I'm running a Kubernetes cluster with minikube and my deployment (or individual Pods) won't stay running, even though I specify in the Dockerfile that it should leave a terminal open (I've also tried it with sh). They keep getting restarted and sometimes they get stuck in a CrashLoopBackOff status before restarting again:
FROM ubuntu
EXPOSE 8080
CMD /bin/bash
My deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleeper-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: sleeper-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: sleeper-world
    spec:
      containers:
        - name: sleeper-pod
          image: kubelab
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
All in all, my workflow is as follows (deploy.sh):
#!/bin/bash
# Cleaning
kubectl delete deployments --all
kubectl delete pods --all
# Building the Image
sudo docker build \
-t kubelab \
.
# Deploying
kubectl apply -f sleeper_deployment.yml
By the way, I've tested the Docker Container solo using sudo docker run -dt kubelab and it does stay up. Why doesn't it stay up within Kubernetes? Is there a parameter (in the YAML file) or a flag I should be using for this special case?
1. Original Answer (but edited...)
If you are familiar with Docker, check this.
If you are looking for an equivalent of docker run -dt kubelab, try kubectl run -it kubelab --restart=Never --image=ubuntu /bin/bash. In your case, the Docker -t flag allocates a pseudo-TTY, and that is why your Docker container stays up.
Try:
kubectl run kubelab \
--image=ubuntu \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
Or:
kubectl run kubelab \
--image=ubuntu \
--dry-run -oyaml \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
2. Explaining what's going on (Added by Philippe Fanaro):
As stated by @David Maze, the bash process is going to exit immediately because the artificial terminal won't have anything going into it, a slightly different behavior from Docker.
If you change the restartPolicy, the container will still terminate; the difference is that the Pod won't be regenerated or restarted.
One way of doing it is the following (pay attention to the indentation of restartPolicy; it sits at the Pod spec level, not under the container):
apiVersion: v1
kind: Pod
metadata:
  name: kubelab-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
    - name: kubelab-ctr
      image: kubelab
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8080
  restartPolicy: Never
However, this will not work if it is specified inside a Deployment YAML, because Deployments force regeneration, always trying to reach the desired state. This is confirmed in the Deployment documentation:
Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
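So if you do set restartPolicy: Never inside the Deployment above, the API server rejects the manifest; the error looks roughly like this (paraphrased, not captured output):
kubectl apply -f sleeper_deployment.yml
# The Deployment "sleeper-deploy" is invalid: spec.template.spec.restartPolicy:
# Unsupported value: "Never": supported values: "Always"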
3. If you really wish to force the Docker container to keep running
In this case, you will need something that doesn't exit. A server-like process is one example. But you can also try something mentioned in this StackOverflow answer:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
This will keep your container alive until it is told to stop. Using trap and wait makes your container react immediately to a stop request. Without trap/wait, stopping will take a few seconds.
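As a rough local check, assuming you rebuild the kubelab image from the question with the CMD above, the container should both stay up and stop quickly when asked:
docker build -t kubelab .
docker run -d --name sleeper kubelab
docker ps --filter name=sleeper   # should list the container as Up
docker stop sleeper               # returns almost immediately thanks to trap/wait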
I'm porting a node/react/webpack app to k8s, and am trying to configure a development environment that makes use of the hot-reloading feature of webpack. I'm hitting an error when running this with a shared volume on minikube:
ERROR in ./~/css-loader!./~/sass-loader/lib/loader.js?{"data":"$primary: #f9427f;$secondary: #171735;$navbar-back-rotation: 0;$navbar-link-rotation: 0;$login-background: url('/images/login-background.jpg');$secondary-background: url('/images/secondary-bg.jpg');"}!./src/sass/style.sass
Module build failed: Error: Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime (67)
For more information on which environments are supported please see:
Running the code in the container by itself (mostly) works--it starts up without errors and serves the page via docker run -it --rm --name=frontend --publish=3000:3000 <container hash>
#Dockerfile
FROM node:latest
RUN mkdir /code
ADD . /code/
WORKDIR /code/
RUN yarn cache clean && yarn install --non-interactive && npm rebuild node-sass
CMD npm run dev-docker
where dev-docker in package.json is NODE_ENV=development npm run -- webpack --progress --hot --watch
In the following, commenting out the volumeMounts key eliminates the error.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-container
  template:
    metadata:
      labels:
        app: frontend-container
    spec:
      volumes:
        - name: frontend-repo
          hostPath:
            path: /Users/me/Projects/code/frontend
      containers:
        - name: web-container
          image: localhost:5000/react:dev
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          volumeMounts:
            - name: frontend-repo
              mountPath: /code
          env:
            ... # redacted for simplicity, assume works
Based on what I've found elsewhere, I believe that the OS-native bindings used by node-sass are interfering between host and container when the shared volume is introduced. That is, the image build process creates bindings that work for the container, but those are overwritten when the shared volume is mounted.
Is this understanding correct? How do I best structure things so that a developer can work on their local repo and see those changes automatically reflected in the cluster instance, without rebuilding images?
My hypothesis was borne out: the node modules were being built for the container, but overwritten by the volumeMount. The approach that worked best at this point was to do the dependency installation as the entrypoint of the container, so that it runs when the container starts up, rather than only at build time.
# Dockerfile
CMD yarn cache clean && yarn install --non-interactive --force && npm run dev-docker
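To pick up the new CMD in the cluster, a plausible sequence (the image tag, registry, namespace, and deployment name are the ones used earlier in this post) is to rebuild, push, and restart the rollout:
docker build -t localhost:5000/react:dev .
docker push localhost:5000/react:dev
kubectl -n dev rollout restart deployment/web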