Let's say I have some Python code on my local machine that listens on localhost port 8000, as below:
import hug
import waitress
from hug.middleware import CORSMiddleware

app = hug.API(__name__)
app.http.add_middleware(CORSMiddleware(app))
waitress.serve(__hug_wsgi__, host='127.0.0.1', port=8000)
This code accepts requests on 127.0.0.1:8000 and sends back a response.
Now I want to move this application (with two more related apps) into Docker, and use Kubernetes to orchestrate the communication between them.
But for simplicity, I will take this Python node (app) only.
First I built the docker image using:
docker build -t gcr.io/${PROJECT_ID}/python-app:v1 .
Then I pushed it using gcloud docker (I am using Google Cloud, not Docker Hub):
gcloud docker -- push gcr.io/${PROJECT_ID}/python-app:v1
Now I created the container cluster:
gcloud container clusters create my-cluster
Deployed the app to Kubernetes:
kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000
And finally exposed it to internet via:
kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000
Now the output of the command kubectl get services is:
OK, my question is: I want to send a request to this app from another application (let's say a Node.js app).
How can I do that externally, i.e. from any machine?
How can I do that internally, i.e. from another container/pod?
How can I make my Python app use those IP addresses and listen on them?
This is the Dockerfile of the Python app:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./app.py" ]
Thanks in advance!
Externally
By running:
kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000
You are specifying that your pod listens on port 8000.
By running:
kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000
You are specifying that your service listens on port 80, and the service sends traffic to targetPort 8000 (the port the pod listens on).
In summary, with your configuration traffic follows this path:
traffic (port 80) > Load Balancer > Service (Port 80) > TargetPort/Pod (port 8000)
By using a service of type LoadBalancer (rather than the alternative, an Ingress, which creates a NodePort service and an HTTP(S) load balancer instead of a TCP load balancer), you are specifying that traffic targeting the pods should arrive at the load balancer on port 80, from where the service directs it to port 8000 on your app. So if you want to reach your app from an external source, based on the addresses in your screenshot, you would send traffic to 35.197.94.202:80.
Internally
As others have pointed out in the comments, the cluster IP can be used to target pods internally. The port you specified as the service port (80 in your case, although it could be any number you choose for the service) is used together with the cluster IP to reach the pods targeted by the service. For example, you could target:
10.3.254.16:80
However, to target a specific pod, you can use the pod's IP address and the port the pod listens on. You can discover these by running a describe command on the pod:
kubectl describe pod <pod-name>
Or by running:
kubectl get endpoints
which lists the IP and port each pod is listening on.
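For reference, the kubectl expose command above is roughly equivalent to this Service manifest (the run=python-app selector is an assumption based on how kubectl run labels its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python-app
spec:
  type: LoadBalancer
  selector:
    run: python-app        # kubectl run labels the pods with run=<name>
  ports:
  - port: 80               # the service port
    targetPort: 8000       # the port the pod listens on
```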
Related
In Docker we all know how to expose ports: the EXPOSE instruction documents them and the -p or -P options publish them at runtime. We can use "docker inspect" or "docker port" to see the port mappings, and these configs are pulled from /var/lib/docker/containers/container-id/config.v2.json.
My question is: when we expose a port, how does Docker actually change the port inside the container, say for Apache or Nginx? The installation could be anywhere in the OS or file path, so how does Docker find the correct conf file (Apache's /etc/httpd/conf/httpd.conf) to change? I suppose Docker does this on the "Listen 80" or "Listen 443" line in httpd.conf. Or is my whole understanding of Docker at stake? :)
Any help is appreciated.
"docker" does not change anything in the internal configuation of the container (or the services it provides).
There are three different points where you can configure ports
the service itself (for instance nginx) inside the image/container
EXPOSE xxxx in the Dockerfile (i.e. at build time of the image)
docker run -p 80:80 (or the respective equivalent for docker compose) (i.e. at the runtime of the container)
All three are (in principle) independent of each other, i.e. you can have completely different values in each of them. But in practice, you will have to align them with each other to get a working system.
As we know, EXPOSE xxxx in the Dockerfile doesn't actually publish any port at runtime; it just tells the Docker service that this specific container will listen on port xxxx at runtime. You can see it as a sort of documentation for the image, so it's your responsibility as the creator of the Dockerfile to provide the correct value here, because anyone using that image will probably rely on it.
But regardless of what port you have EXPOSEd (or not; EXPOSE is completely optional), you still have to publish that port when you run the container (for instance via -p aaaa:xxxx with docker run).
Now let us assume you have an nginx image in which the nginx service is configured to listen on port 8000. Regardless of what you define with EXPOSE or -p aaaa:xxxx, that nginx service will always listen on port 8000 only and nothing else.
So if you now run your container with docker run -p 80:80, the runtime will bind port 80 of the host to port 80 of the container. But as there is no service listening on port 80 within the container, you simply won't be able to contact your nginx service on port 80. And you also won't be able to connect to nginx on port 8000, because it hasn't been published.
So in a typical setup, if the service in your container is configured to listen on port 8000, you should also EXPOSE 8000 in your Dockerfile and use docker run -p aaaa:8000 to bind port aaaa of your host machine to port 8000 of the container, so that you can reach the nginx service via http://hostmachine:aaaa
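A minimal sketch of that typical setup (the image name and nginx.conf are illustrative; the config file is assumed to contain a "listen 8000;" directive):

```dockerfile
FROM nginx:alpine
# nginx.conf is assumed to configure nginx with "listen 8000;"
COPY nginx.conf /etc/nginx/conf.d/default.conf
# document the port the service inside the container listens on
EXPOSE 8000
```

Built and run with docker run -p 8080:8000 <image>, the server would then be reachable at http://localhost:8080.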
I've created a Dockerfile that successfully runs my Laravel 8 application locally. This is the content of the Dockerfile, located in my Laravel project root:
FROM webdevops/php-nginx:8.0-alpine
WORKDIR /app
COPY . .
RUN chmod 777 -R ./storage
ENV WEB_DOCUMENT_ROOT=/app/public
RUN composer install
I built the image and ran it locally:
docker build . -t gcr.io/my-project/my-image
docker run -p 5000:80 gcr.io/my-project/my-image
The container starts and the application runs as expected. No problems. If I shell into the container, I can see the ports that nginx is listening on:
> netstat -nlp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 49/nginx -g daemon
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 49/nginx -g daemon
As you can see, the container is listening on port 80 on all network interfaces (0.0.0.0). This conforms with the Cloud Run documentation and their troubleshooting guide:
https://cloud.google.com/run/docs/troubleshooting#port
A common issue is to forget to listen for incoming requests, or to
listen for incoming requests on the wrong port.
As documented in the container runtime contract, your container must
listen for incoming requests on the port that is defined by Cloud Run
and provided in the PORT environment variable.
If your container fails to listen on the expected port, the revision
health check will fail, the revision will be in an error state and the
traffic will not be routed to it.
https://cloud.google.com/run/docs/troubleshooting#listen_address
A common reason for Cloud Run services failing to start is that the
server process inside the container is configured to listen on the
localhost (127.0.0.1) address. This refers to the loopback network
interface, which is not accessible from outside the container and
therefore Cloud Run health check cannot be performed, causing the
service deployment failure.
To solve this, configure your application to start the HTTP server to
listen on all network interfaces, commonly denoted as 0.0.0.0.
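Following that advice, here is a minimal sketch of a server that honors PORT and binds all interfaces (a pure-Python stand-in for illustration, not the Laravel/nginx setup from the question):

```python
import os
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Tiny WSGI app standing in for the real service."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

# Cloud Run injects the listening port via the PORT environment variable;
# binding 0.0.0.0 (all interfaces) rather than 127.0.0.1 is what lets the
# health check reach the server from outside the container.
port = int(os.environ.get("PORT", "0"))  # 0 = pick a free port for this demo
server = make_server("0.0.0.0", port, app)

# Serve a single request in the background and probe it over loopback.
threading.Thread(target=server.handle_request, daemon=True).start()
reply = urllib.request.urlopen(
    "http://127.0.0.1:%d/" % server.server_port).read()
print(reply)  # b'ok\n'
```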
From what I can tell, I have a working docker container, listening on the correct port. When I deploy to Google Cloud Run, I receive an error:
> gcloud run deploy my-service --image gcr.io/my-project/my-image --project my-project --port 80
...
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
In the Cloud Run console I can see that the service is configured with a PORT of 80, as you would expect from the --port 80 included in the deployment command. I am having trouble figuring out why this isn't working; it seems like I've done everything right.
Does anybody have any idea what might be going wrong here?
This is what I see in the deployment log on Google Cloud:
Maybe the issue is related to the third line that says ln -f -s /var/lib/nginx/logs /var/log/nginx?
It looks like I'm not the only person to have this issue with this base image:
https://github.com/webdevops/Dockerfile/issues/358
I still don't know what the problem is, but it seems to affect other people trying to use this image specifically with Cloud Run.
I am trying to figure out the networking in Kubernetes, and especially the handling of multicontainer pods. In my simple scenario, I have a total of 3 pods. One has two containers in it, and the other has only one container, which wants to communicate with a specific container in the multicontainer pod. I want to figure out how Kubernetes handles communication between such containers.
For this purpose I have a simple multicontainer pod in a "sidecar architecture"; the YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the sidecar container; sleep 300"]
    ports:
    - containerPort: 5000
What I want to achieve with this YAML file is to have, in the pod "nginx", two containers: one running nginx and listening on port 80 of that pod, the other running a simple curl image (anything different from nginx, to avoid violating Kubernetes' one-container-per-pod convention) and able to receive communication on the pod's port 5000.
Then I have another YAML file, again running an nginx image. This container is going to try to communicate with the nginx and curl containers in the other pod. The YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-simple
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
After deploying the pods I expose the nginx pod simply using the following command:
kubectl expose pods/nginx
Then I start a terminal inside the nginx-simple container (the pod with one container). When I curl the IP address I get from kubectl get svc (the service generated by my previous expose command) on port 80, I can easily get the welcome message of nginx. However, the problem starts when I try to communicate with the curl container: when I curl the same IP address but this time on port 5000 (the containerPort I set in the YAML file), I get a connection refused error. What am I doing wrong here?
Thanks in advance.
P.S: I would also be more than happy to hear your learning material suggestions for this topic. Thank you so much.
curl is a command-line tool. It is not a server listening on a port, but a client tool that can be used to access servers.
This container does not contain a server that listens on a port:
- name: sidecar
  image: curlimages/curl
  command: ["/bin/sh"]
  args: ["-c", "echo Hello from the sidecar container; sleep 300"]
  ports:
  - containerPort: 5000
Services deployed on Kubernetes are typically containers running some form of web server, but they might be other kinds of services as well.
Shouldn't I at least be able to ping the curl container?
Nope, containers are not virtual machines. A container typically contains only a single process and can only do what that process does. On Kubernetes these processes are typically web servers listening e.g. on port 8080, so commonly you can only check whether they are alive by sending them an HTTP request. See e.g. Configure Liveness, Readiness and Startup Probes.
When I run telnet pod-ip 5000 I cannot reach this curl container.
The curl binary is not a process that listens on any port, e.g. it cannot respond to ICMP. You can typically ping nodes but not containers. curl is an HTTP client that is typically used to send an HTTP request, wait for the HTTP response, and then terminate. You can probably see this by inspecting the pod: the curl container has terminated.
I am trying to figure out how communication is handled in a multicontainer pod. Each pod has its own unique IP address, and containers in the pod can use localhost. I get it, but how can a container in a different pod target a specific container in a multicontainer pod?
I suggest that you add two web servers (e.g. two nginx containers) to a pod, but have them listen on different ports, e.g. port 8080 and port 8081. A client can choose which container it wants to interact with by using the pod IP and the container port, <Pod IP>:<containerPort>. E.g. add two nginx containers, configure them to listen on different ports, and let them serve different content.
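A sketch of such a pod (the names are made up, and each container would additionally need its nginx config changed to listen on the stated port, e.g. via a mounted ConfigMap):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-nginx
spec:
  containers:
  - name: web-a
    image: nginx            # assumed reconfigured to listen on 8080
    ports:
    - containerPort: 8080
  - name: web-b
    image: nginx            # assumed reconfigured to listen on 8081
    ports:
    - containerPort: 8081
```

A client in another pod could then reach them as <Pod IP>:8080 and <Pod IP>:8081 respectively.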
I am using the local Kubernetes cluster from Docker Desktop on Windows 10. No virtual machines, no minikubes.
I need to expose a port on my localhost for some service.
For example, I take the kubernetes-bootcamp image from the official tutorial:
docker pull jocatalin/kubernetes-bootcamp:v2
Put it in the local registry:
docker tag jocatalin/kubernetes-bootcamp:v2 localhost:5000/kubernetes-bootcamp
docker push localhost:5000/kubernetes-bootcamp
Then create a deployment with this image:
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
Then let's expose a port for accessing our deployment:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services
kubernetes-bootcamp NodePort 10.102.167.98 <none> 8080:32645/TCP 8s
So we found out that the exposed port for our deployment is 32645. Let's try to request it:
curl localhost:32645
Failed to connect to localhost port 32645: Connection refused
And nothing works.
But if I try port-forward, everything works:
kubectl port-forward deployment/kubernetes-bootcamp 7000:8080
Forwarding from 127.0.0.1:7000 -> 8080
Forwarding from [::1]:7000 -> 8080
Handling connection for 7000
Another console:
curl localhost:7000
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7b5598d7b5-8qf9j | v=2
What am I doing wrong? I have found several posts like mine, but none of them helped me.
Try running this command:
kubectl get svc | grep kubernetes-bootcamp
After this, expose the pod to your network using:
kubectl expose pod <pod-name> --type=NodePort
After that, you can check the URL with, for example:
minikube service <service-name> --url
So I have found the root of the problem: the local Kubernetes cluster was somehow misbehaving.
How I solved the problem:
Remove C:\ProgramData\DockerDesktop\pki
Recreate all pods, services, deployments
Now the same script I use before works great.
I am having trouble understanding how ports work when using Kubernetes. There are three ports in question:
Port that my app is listening on inside the docker container
Port mentioned in kubernetes config file as containerPort
LoadBalancer port when the deployment is exposed as a service
What is the relationship between the above three ports? In my current setup I mention EXPOSE 8000 in my Dockerfile and containerPort: 8000 in kubernetes config file. My app is listening on port 8000 inside the docker container. When I expose this deployment using kubectl expose deployment myapp --type="LoadBalancer", it results in the following service -
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp 10.59.248.232 <some-ip> 8000:32417/TCP 16s
But my curl fails as shown below -
$ curl http://<some-ip>:8000/status/ -i
curl: (52) Empty reply from server
Can someone please explain how the above three ports work together and what their values should be for a successful 'exposure' of my app?
The issue was with my Django server, not Kubernetes or Docker. I was starting my server with python manage.py runserver instead of python manage.py runserver 0.0.0.0:8080, which caused it to return empty replies because the requests were not coming from localhost.
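The difference is visible at the socket level; this sketch (unrelated to Django itself) shows a listener bound to the wildcard address accepting a loopback connection:

```python
import socket

# Bind to 0.0.0.0 (all interfaces); port 0 lets the OS pick a free port.
listener = socket.socket()
listener.bind(("0.0.0.0", 0))
listener.listen(1)
port = listener.getsockname()[1]

# A connection over loopback succeeds, because the wildcard address
# covers every interface, loopback included.
client = socket.socket()
client.connect(("127.0.0.1", port))
conn, _ = listener.accept()
print("connected on port", port)

# Had the listener bound ("127.0.0.1", 0) instead, only loopback clients
# could connect; traffic arriving on the container's external interface
# (as Kubernetes-forwarded traffic does) would never reach it, which is
# what `runserver` does by default.
conn.close()
client.close()
listener.close()
```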