Container in one pod communicating with one in a multicontainer pod - docker

I am trying to figure out networking in Kubernetes, and especially the handling of multicontainer pods. In my simple scenario, I have two pods with three containers in total. One pod has two containers in it and the other has only one container, which wants to communicate with a specific container in the multicontainer pod. I want to figure out how Kubernetes handles communication between such containers.
For this purpose I have a simple multicontainer pod in a "sidecar architecture"; the YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the sidecar container; sleep 300"]
    ports:
    - containerPort: 5000
What I want to achieve with this YAML file is to have two containers in the pod "nginx": one running nginx and listening on port 80 of that pod, and the other running a simple curl image (anything different from nginx, so as not to violate the one-container-per-pod convention of Kubernetes) that can listen for communication on the pod's port 5000.
Then I have another YAML file, again running an nginx image. This container is going to try to communicate with the nginx and curl containers in the other pod. The YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-simple
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
After deploying the pods I expose the nginx pod simply using the following command:
kubectl expose pods/nginx
Then I start a terminal inside the nginx-simple container (the pod with one container). When I curl the IP address I get from kubectl get svc (the service generated by my previous expose command) on port 80, I can easily get the welcome message of nginx. However, the problem starts when I try to communicate with the curl container. When I curl the same IP address, but this time on port 5000 (the containerPort I set in the YAML file), I get a connection refused error. What am I doing wrong here?
Thanks in advance.
P.S: I would also be more than happy to hear your learning material suggestions for this topic. Thank you so much.

curl is a command line tool. It is not a server listening on a port, but a client tool used to access servers.
This container does not contain a server that listens on a port:
- name: sidecar
  image: curlimages/curl
  command: ["/bin/sh"]
  args: ["-c", "echo Hello from the sidecar container; sleep 300"]
  ports:
  - containerPort: 5000
Services deployed on Kubernetes are typically containers running some form of webserver, but they might be other kinds of servers as well.
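If you want the sidecar to actually answer on port 5000, it has to run a process that listens there. A minimal sketch, swapping the curl image for Python's built-in http.server (the image choice here is an assumption; anything that binds 0.0.0.0:5000 would do):

  - name: sidecar
    image: python:3-alpine
    # http.server binds all interfaces by default, so it is reachable via the pod IP
    command: ["python", "-m", "http.server", "5000"]
    ports:
    - containerPort: 5000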
Shouldn't I at least be able to ping the curl container?
Nope, containers are not virtual machines. A container typically contains only a single process, and a container can only do what that process does. On Kubernetes these processes are typically webservers listening, e.g., on port 8080, so commonly you can only check whether they are alive by sending them an HTTP request. See e.g. Configure Liveness, Readiness and Startup Probes.
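Such an HTTP aliveness check looks roughly like this in a container spec (a minimal sketch; path and port are illustrative):

    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10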
When I run telnet pod-ip 5000 I cannot reach this curl container.
The curl binary is not a process that listens on any port, and it cannot, e.g., respond to ICMP. You can typically ping nodes but not containers. curl is an HTTP client that is typically used to send an HTTP request, wait for the HTTP response, and then terminate. You can probably see by inspecting the Pod that the curl container has terminated.
I am trying to figure out how communication is handled in a multicontainer pod. Each pod has its own unique IP address, and containers in the pod can use localhost. I get that, but how can a container in a different pod target a specific container in a multicontainer pod?
I suggest that you add two webservers (e.g. two nginx containers) to a pod. They have to listen on different ports, e.g. port 8080 and port 8081. A client can choose which container it wants to interact with by using the pod IP and the container port, <Pod IP>:<containerPort>. E.g. add two nginx containers, configure them to listen on different ports, and let them serve different content, as in the sketch below.
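A minimal sketch of such a pod (untested; the pod name and the sed rewrite of the default config are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pair
spec:
  containers:
  - name: web-a
    image: nginx   # listens on the pod's port 80
    ports:
    - containerPort: 80
  - name: web-b
    image: nginx
    # Both containers share the pod's network namespace, so the second
    # nginx must be reconfigured to listen on a different port (8081 here).
    command: ["/bin/sh", "-c"]
    args: ["sed -i 's/80;/8081;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
    ports:
    - containerPort: 8081

From another pod you then pick the container by port: curl <pod-ip>:80 reaches web-a, curl <pod-ip>:8081 reaches web-b.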

Related

Ports exposed even though they are not defined in the kubelet manifest.yaml file

There are two kubelet nodes, and each node runs several containers, including a server with WildFly. Even though I do not define a containerPort <> hostPort mapping for it, the management console can be reached on port 9990 from outside. I do not have any clue why.
- name: server
  image: registry/server:develop-latest
  ports:
  - name: server-https
    containerPort: 8443
    hostPort: 8443
In docker container inspect <container-id> I see:
"ExposedPorts": {
"9990/tcp": {},
...
So:
Why is container port 9990 exposed?
Why is containerPort 9990 mapped to the hostPort, letting me reach port 9990 from outside?
You can expose a port in two places: when you run the container, and when you build the image. Typically you only do the latter, since exposing a port is documentation of what ports are likely listening for connections inside the container (it doesn't have any effect on networking).
To see if the port was exposed at build time, you can run:
docker image inspect registry/server:develop-latest
And if that port wasn't exposed in your build, then it was likely exposed in your base image.
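To print just the exposed ports rather than the full inspect output, you can use a Go template (assuming a reasonably recent Docker CLI):

docker image inspect --format '{{ json .Config.ExposedPorts }}' registry/server:develop-latest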
...I can reach the port 9990 from outside?
Presuming "outside" here means the host network: hostNetwork: true in your pod spec would allow that in this case.
Otherwise, please post the complete spec and describe the url/endpoint you used to "reach the port 9990" in your question.
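For reference, host networking is switched on at the pod level, roughly like this (a minimal sketch reusing the container from the question):

spec:
  hostNetwork: true   # containers bind directly on the node's network
  containers:
  - name: server
    image: registry/server:develop-latest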

Unable to connect to redis cluster on kubernetes from my golang application deployed within the same cluster

I deployed a redis cluster on Kubernetes with bitnami helm charts (https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster).
I can successfully connect to the Redis cluster from within the Kubernetes cluster by running the below commands:
kubectl run my-redis-release-client --rm -it --image docker.io/bitnami/redis:4.0.11-debian-9 -- bash
redis-cli -h redis-cluster-0.redis-cluster-headless.redis
But I am unable to connect to redis cluster from my golang application deployed within the same cluster.
The redis connection string uri I used on my golang application is "redis://redis-cluster-0.redis-cluster-headless.redis:6379". This is following the "redis-pod-name.redis-service-name.namespace" convention.
NOTE: I want to be able to access the redis cluster from only within the Kubernetes cluster. I don’t want to grant external access. Please help...
A headless service is for when you don't need load-balancing and a single service IP. A headless service is not what limits access to within the Kubernetes cluster.
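For contrast, a headless service is simply one with clusterIP: None (a minimal sketch; names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-headless
spec:
  clusterIP: None   # no virtual IP; DNS resolves directly to the pod IPs
  selector:
    app: redis
  ports:
  - port: 6379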
You can create a service to expose redis. Below is an example that creates a ClusterIP type, which only lets you connect to it from within the cluster and not from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
The redis pod or deployment needs to have the matching label app: redis.
Then you can connect from your Golang app using redis.default.svc.cluster.local:6379.
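To verify the service before touching the Go code, you can run a throwaway client pod against the new name (same pattern as the command in the question):

kubectl run redis-test --rm -it --image docker.io/bitnami/redis:4.0.11-debian-9 -- redis-cli -h redis.default.svc.cluster.local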

Can't expose port on local Kubernetes cluster (via Docker Desktop)

I am using the local Kubernetes cluster from Docker Desktop on Windows 10. No virtual machines, no minikubes.
I need to expose a port on my localhost for some service.
For example, I take the kubernetes-bootcamp image from the official tutorial:
docker pull jocatalin/kubernetes-bootcamp:v2
Put it in the local registry:
docker tag jocatalin/kubernetes-bootcamp:v2 localhost:5000/kubernetes-bootcamp
docker push localhost:5000/kubernetes-bootcamp
Then create a deployment with this image:
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
Then let's expose a port for accessing our deployment:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services
kubernetes-bootcamp NodePort 10.102.167.98 <none> 8080:32645/TCP 8s
So we found out that the exposed port for our deployment is 32645. Let's try to request it:
curl localhost:32645
Failed to connect to localhost port 32645: Connection refused
And nothing works.
But if I try port-forwarding, everything works:
kubectl port-forward deployment/kubernetes-bootcamp 7000:8080
Forwarding from 127.0.0.1:7000 -> 8080
Forwarding from [::1]:7000 -> 8080
Handling connection for 7000
Another console:
curl localhost:7000
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7b5598d7b5-8qf9j | v=2
What am I doing wrong? I have found several posts like mine, but none of them helped me.
Try running this command:
kubectl get svc | grep kubernetes-bootcamp
After this, expose the pod to your network using:
kubectl expose pod <podname> --type=NodePort
After that, you can check the URL with, for example:
minikube service <service-name> --url
So I found the root of the problem: the local Kubernetes cluster was somehow working in an inappropriate way.
How I solved the problem:
Remove C:\ProgramData\DockerDesktop\pki
Recreate all pods, services, deployments
Now the same script I used before works great.

Kubernetes Pods intercommunication with internal/external IP addresses

Let's say I have Python code on my local machine that listens on localhost and port 8000, as below:
import hug
import waitress
from hug.middleware import CORSMiddleware

app = hug.API(__name__)
app.http.add_middleware(CORSMiddleware(app))
waitress.serve(__hug_wsgi__, host='127.0.0.1', port=8000)
This code accepts requests on 127.0.0.1:8000 and sends back some response.
Now I want to move this application (with two more related apps) into Docker, and use Kubernetes to orchestrate the communication between them.
But for simplicity, I will take this Python node (app) only.
First I built the docker image using:
docker build -t gcr.io/${PROJECT_ID}/python-app:v1 .
Then I pushed it to the gcloud Docker registry (I am using Google Cloud, not Docker Hub):
gcloud docker -- push gcr.io/${PROJECT_ID}/python-app:v1
Now I created the container cluster:
gcloud container clusters create my-cluster
Deployed the app to Kubernetes:
kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000
And finally exposed it to the internet via:
kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000
Now the output of the command kubectl get services is:
[screenshot of the kubectl get services output, showing the python-app service with cluster IP 10.3.254.16 and external IP 35.197.94.202]
OK, my question is: I want to send a request from another application (let's say a Node.js app).
How can I do that externally, i.e. from any machine?
How can I do that internally, i.e. from another container/pod?
How can I let my Python app use those IP addresses and listen on them?
This is the Dockerfile of the Python app:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./app.py" ]
Thanks in advance!
Externally
By running:
kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000
You are specifying that your pod listens on port 8000.
By running:
kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000
You are specifying that your service listens on port 80, and that the service sends traffic to targetPort 8000 (the port the pod listens on).
So, to summarise, with your configuration traffic follows this path:
traffic (port 80) > Load Balancer > Service (Port 80) > TargetPort/Pod (port 8000)
By using a service of type LoadBalancer (rather than the alternative, an Ingress, which creates a service of type NodePort and an HTTP(S) load balancer rather than a TCP load balancer), you are specifying that traffic targeting the pods should arrive at the load balancer on port 80; the service then directs this traffic to port 8000 on your app. So if you want to direct traffic to your app from an external source, based on the addresses in your screenshot, you would send traffic to 35.197.94.202:80.
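For example, from any machine (using the external IP from the screenshot):

curl http://35.197.94.202/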
Internally
As others have pointed out in the comments, the cluster IP can be used to target pods internally. The port you specify as the service port (in your case 80, although this could be any number you choose for the service) can be used alongside the cluster IP to reach the pods targeted by the service. For example, you could target:
10.3.254.16:80
However, to target specific pods, you can use the pod IP address and the port the pod listens on. You can discover these either by running a describe command on the pod:
kubectl describe pod
Or by running:
kubectl get endpoints
which shows the pod IP and the port it is listening on.
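The output has roughly this shape (values are illustrative, not taken from the asker's cluster):

NAME         ENDPOINTS        AGE
python-app   10.0.1.7:8000    5m

As for the third question (letting the Python app "use those IP addresses"): the app never binds the cluster or external IPs itself. It should bind all interfaces inside the container, and Kubernetes routes the service and load-balancer addresses to it, e.g.:

waitress.serve(__hug_wsgi__, host='0.0.0.0', port=8000)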

Understanding kubernetes deployment, service, and docker image ports

I am having trouble understanding how ports work when using kubernetes. There are three ports in question:
1. The port that my app is listening on inside the docker container
2. The port mentioned in the kubernetes config file as containerPort
3. The LoadBalancer port when the deployment is exposed as a service
What is the relationship between the above three ports? In my current setup I mention EXPOSE 8000 in my Dockerfile and containerPort: 8000 in kubernetes config file. My app is listening on port 8000 inside the docker container. When I expose this deployment using kubectl expose deployment myapp --type="LoadBalancer", it results in the following service -
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp 10.59.248.232 <some-ip> 8000:32417/TCP 16s
But my curl fails as shown below -
$ curl http://<some-ip>:8000/status/ -i
curl: (52) Empty reply from server
Can someone please explain me how the above three ports work together and what should be their values for successful 'exposure' of my app?
The issue was with my Django server, not Kubernetes or Docker. I was starting my server with python manage.py runserver instead of python manage.py runserver 0.0.0.0:8000, so it was only listening on localhost inside the container and requests arriving on the pod's IP got empty replies.
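To tie the three ports together, here is a sketch of a consistent setup (names and values are illustrative, not the asker's exact manifests):

# Inside the container the app listens on 8000:
#   python manage.py runserver 0.0.0.0:8000
# The Dockerfile's EXPOSE 8000 is documentation only.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8000        # the port the service / load balancer listens on
    targetPort: 8000  # must match the containerPort / the app's listen port
  selector:
    app: myapp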
