Kubernetes - container communication within a pod using names instead of 'localhost'?

From the kubernetes docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost.
Is it possible to use container-specific names instead of localhost?
For example, with docker-compose up, you use the name of the service to communicate. [docs]
So, if my docker-compose.yml file is
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  srv:
    build: .
    ports:
      - "3000:3000"
Then I access srv from within web by calling http://srv:3000/, not http://localhost:3000
How can I achieve the same behaviour in kubernetes? Any way to specify what name to use in pods' yaml configuration?

localhost is just a name for the network loopback device (usually 127.0.0.1 for IPv4 and ::1 for IPv6). This is usually specified in your /etc/hosts file.
A pod has its own IP, which all of its containers share. If the containers should be independent (i.e. don't need to be co-located), they should each run in their own pod. Then you can define a Service for each, which allows DNS lookups as either "$SERVICENAME" from pods in the same namespace, or "$SERVICENAME.$NAMESPACE" from pods in different namespaces.
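For example, a minimal Service for the srv pod from the question might look like the sketch below; the port comes from the question, while the app: srv label selector is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: srv
spec:
  selector:
    app: srv            # assumed label on the srv pod
  ports:
  - port: 3000
    targetPort: 3000
Pods in the same namespace can then call http://srv:3000/, mirroring the docker-compose behaviour from the question.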

docker-compose deploys individual containers, linking them together so they know each other's name and IP.
A Pod in Kubernetes is similar, but the purpose of a Pod is not to hold multiple independent services and link them together.
A Pod is for containers that must run on the same host and interact only among themselves. The containers communicate internally via localhost.
Most Pods are in fact a single container.
A Pod communicates with the outside using Services. In essence, a Pod appears as if it were just one container.
Under the hood, a Pod is at least two containers: the pause container manages the IP of the Pod, and your containers attach to it. This lets a container crash, restart, and be relinked into the Pod without changing IP, so container crashes are handled without involving the scheduler, and the Pod stays on a single node during its lifetime, which makes restarts fast.
If containers were rescheduled each time they crashed, they could end up on a different host, routing would have to be updated, and so on.

Generally, containers running inside a pod share the pod's IP and port space, and communication between them happens over localhost by default. To communicate between containers by name (via DNS), the containers should run in independent Pods, each exposed as a Service to the rest of the application.

Related

How do I scale two services in docker-compose (one being a network) and keep them connected to each other's network_mode?

In a scenario where one service connects to the other one's network (as in the example .yml below), is there a way to use --scale to scale both and make them connect to the correct network?
As in: app1 uses vpn1's network, app2 uses vpn2's network, etc.
services:
  vpn:
    image: myvpn
  app:
    image: myapp
    depends_on:
      - vpn
    network_mode: service:vpn
I want to be able to run the container with docker-compose up -d --scale app=5 --scale vpn=5
The issue is that if I scale both containers the way it's currently set up (let's say with two instances of each service, to simplify), both "app" services connect to the first "vpn" service.
I can confirm it by inspecting both app-1 and app-2. In "HostConfig" they both show "NetworkMode": "container:ef37426bec3dbd9c182187d87faf5fe8c92c1e1fa26066f57d163f301af2574e", which is the first vpn container.
I understand this is the expected behavior, as the .yml file indicates network_mode: service:vpn and not something like network_mode: service:vpn-${container ID here}
I want to find a way to set app-1 to use vpn-1 as NetworkMode, app-2 to use vpn-2 as NetworkMode, etc.
Out of the box docker-compose allows for two communication methods:
VIP - one virtual IP and DNS record, load balancing between the replicas using Docker's own configuration
DNSRR - one DNS record resolving to multiple IPs (one per replica), letting you implement your own load balancer
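For illustration, a minimal sketch of selecting between the two, assuming Swarm mode (docker stack deploy); the service name and image are hypothetical:
services:
  backend:
    image: example/backend
    deploy:
      replicas: 2
      # dnsrr: one DNS record, one IP per replica
      endpoint_mode: dnsrr
      # vip (the default): one virtual IP, load balanced by Docker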
As far as I managed to research, a configuration like the one you're asking about can be achieved with docker-compose in only one way: in each client service, set the backend DNS name using the container name.
For example, if I want to connect to the second instance, I'll connect to docker_backend_2.
Another way, requiring a more hands-on approach, would be for the services to talk among themselves and decide who connects to which of the IPs returned by nslookup or some other probing method.
If this connectivity configuration is important to you, you should look into Kubernetes, where your basic working unit is a pod and you can run multiple container images per pod; there you would run one instance of each service inside the pod and connect to them directly over the pod's internal network.

How to coordinate ports between docker containers

I have installed docker to host several containers on a server, using the host network - so ports are shared amongst all containers. If one container uses port 8000, no other ones can. Is there a tool - perhaps not so complex as k8s, though I've no idea whether that can do it - to assist me with selecting ports for each container? As the number of services on the host network grows, managing the list of available ports becomes unwieldy.
I remain confused as to why certain containers list no ports at all when I run docker ps. It would be easier if the full list of ports were readily available, but I have two containers with a sizable list of exposed ports that show no ports at all. I suppose this is a separate, less important question.
Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is a Pod’s name. Because containers share the same IP address and port space, you should use different ports in containers for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost:
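A minimal sketch of such a ConfigMap; the name mc3-nginx-conf is an assumption, chosen to match the pod name mc3 used in Step 3:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # forward incoming requests to the web app in the sibling container
          proxy_pass http://localhost:5000;
        }
      }
    }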
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
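A sketch of such a Pod, assuming a simple web app image that listens on port 5000 (the image name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp       # hypothetical app serving on localhost:5000
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80          # only port 80 is declared for the Pod
    volumeMounts:
    - name: nginx-proxy-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-proxy-config
    configMap:
      name: mc3-nginx-conf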
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Now you can use your browser (or curl) to navigate to your node’s port to access the web application.
It's quite common for several containers in a Pod to listen on different ports, all of which need to be exposed. To make this happen, you can either create a single Service with multiple exposed ports, or create a separate Service for every port you're trying to expose.
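A sketch of the first option, a single Service exposing two ports (the second port is illustrative); note that when a Service has more than one port, each port must be named:
apiVersion: v1
kind: Service
metadata:
  name: mc3
spec:
  selector:
    app: mc3
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: admin
    port: 8081
    targetPort: 8081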

How to access local machine from a pod

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
That script will update /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different deployments.
I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
Is there a way where i can update the /etc/hosts of one pod from another pod?
As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.
Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
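For example, from inside one pod you could reach the other pod's Service like this (the service name and port are hypothetical):
# the short name works within the same namespace
wget -qO- http://first-service-name:8080/
# the fully qualified name works from any namespace
wget -qO- http://first-service-name.default.svc.cluster.local:8080/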
(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)
As an addition to David's answer - you can copy a script from your host to a pod using kubectl cp:
kubectl cp [file-path] [pod-name]:/[path]
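For example (the pod name and paths are hypothetical):
kubectl cp ./myscript.sh mypod:/tmp/myscript.sh
kubectl exec mypod -- sh /tmp/myscript.sh
Note that kubectl cp requires a tar binary inside the container.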
About your question in the comment: you can do it by exposing a deployment:
kubectl expose deployment/name
This results in a Service being created. Thus, even after a specific Pod terminates, you can still reach the new Pods through the same Service and port; the Kubernetes documentation on Services has more practical examples and details.
In the example from the documentation you can see that an nginx Pod has been created with container port 80, and the expose command has the following effect:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on; port: is the abstracted Service port, which can be any port other pods use to access the Service). View the Service API object to see the list of supported fields in the service definition.
Other than that, David has provided a really good explanation here; it is also worth reading more about FQDNs and DNS, which ties back into Services.

Kubernetes - Deploying Multiple Images into a single Pod

I'm having an issue because the application was originally configured to run with docker-compose.
I managed to port and rewrite the .yaml deployment files to Kubernetes; however, the issue lies in the communication between the pods.
The frontend communicates with the backend to access its services, and I assume that, as they should be on the same network, the frontend calls the services via localhost.
I don't have access to the code; it is a proprietary application developed by a company, it does not support Kubernetes, and modifying the code is out of the question.
I believe the main reason is that the frontend and backend are running in different pods, with different IPs.
When the frontend tries to call the APIs, it does not find the service and returns an error.
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Unfortunately, I do not know how to make a yaml file to create both containers within a single pod.
Is it possible to have both frontend and backend containers running on the same pod, or would there be another way to make the containers communicate (maybe a proxy)?
Yes, you just add entries to the containers section in your yaml file. For example:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: nginx-container
    image: nginx
  - name: debian-container
    image: debian
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Although the accepted answer already tackles an example of running multiple containers in the same pod, I'd like to point out a few details:
Containers should be in the same pod only if they scale together (not if you want them to communicate over a cluster IP). Your frontend/backend split doesn't really look like a good candidate for cramming them into one pod.
If you opt to put the containers in the same pod, they can communicate over localhost: they see each other as if they were two processes running on the same host (except that their file systems are different), and for the same reason they can't both bind the same port. Using the cluster IP, by contrast, is as if two processes on the same host were communicating over an external IP.
The more Kubernetes-philosophy approach here would be to (see the sketch after this list):
Create deployment for backend
Create service for backend (exposing necessary ports)
Create deployment for frontend
Communicate from frontend to backend using backend service name (kube-dns resolves this to cluster ip of backend service) and designated backend ports.
Optionally (for this example) create a service for the frontend for external access or whatever goes outside. Note that here you can allocate the same port as the backend service, since they are not living in the same pod (host)...
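A minimal sketch of the backend part of this layout; the names, image, and ports are illustrative assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: example/backend:latest   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
The frontend can then reach the backend at http://backend:8080, with kube-dns resolving the name backend to the Service's cluster IP.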
Some of the benefits of this approach: you can isolate the backend better (backend-frontend communication stays within the cluster and is not exposed to the outside world), you can schedule the pods independently on nodes, you can scale them independently (say you need more backend power while the frontend is handling traffic fine, or vice versa), you can replace either of them independently, etc.

Deploying a network of containers in a single K8s pod

I have an app running in an EC2 instance. It starts multiple containers on a network:
docker run --name purple \
  --net=treeOfPlums \
  --net-alias=purple \
  -d treeofplums/purple
docker run --name shiny \
  --net=treeOfPlums \
  --net-alias=shiny \
  -d treeofplums/shiny
In this case the containers purple and shiny can communicate because they are both on treeOfPlums with aliases purple and shiny respectively.
I want to deploy this app on K8s. I am using minikube for development. I do not want to use docker-in-docker here, where my main app is a container and it spins up the rest. Instead, I would like to make them all siblings.
My question is, how do I specify the network name and container aliases on that network in a K8s pod?
Using the keywords network and network-alias in the deployment yaml won't work. As I understand, containers in a single pod are on one network anyway, so setting an alias will be sufficient too. I am thinking of something like:
spec:
  containers:
  - name: purple
    image: purple
    network: treeOfPlums
    net-alias: purple
    ...
The point of pods is that containers within one can talk to each other as though they were on localhost. You don't need any additional network tweaks between containers in a pod (see the sketch after the quoted docs).
From the docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
The hostname is set to the pod’s Name for the application containers within the pod.
In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
http://kubernetes.io/docs/user-guide/pods/#motivation-for-pods
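A sketch of the two docker run commands above collapsed into a single Pod; no network name or alias is needed, since both containers share the pod's network namespace:
apiVersion: v1
kind: Pod
metadata:
  name: tree-of-plums
spec:
  containers:
  - name: purple
    image: treeofplums/purple
  - name: shiny
    image: treeofplums/shiny
Inside purple, whatever port shiny listens on is reachable at localhost:<port>, and vice versa, so the --net-alias lookups can simply become localhost.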
