Deploying a network of containers in a single K8s pod - docker

I have an app running in an EC2 instance. It starts multiple containers on a network:
docker run --name purple \
--net=treeOfPlums \
--net-alias=purple \
-d treeofplums/purple
docker run --name shiny \
--net=treeOfPlums \
--net-alias=shiny \
-d treeofplums/shiny
In this case the containers purple and shiny can communicate because they are both on treeOfPlums with aliases purple and shiny respectively.
I want to deploy this app on K8s. I am using minikube for development. I do not want to use docker-in-docker here, where my main app is a container and it spins up the rest. Instead, I would like to make them all siblings.
My question is, how do I specify the network name and container aliases on that network in a K8s pod?
Using the keywords network and network-alias in the deployment YAML won't work. As I understand it, containers in a single pod share one network anyway, so setting an alias alone should be sufficient. I am thinking of something like:
spec:
  containers:
  - name: purple
    image: purple
    network: treeOfPlums
    net-alias: purple
  ...

The point of pods is that the containers inside one can talk to each other as though they were on localhost. You don't need any additional network tweaks between containers in a pod.
From the docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
The hostname is set to the pod’s Name for the application containers within the pod.
In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
http://kubernetes.io/docs/user-guide/pods/#motivation-for-pods
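For example, here is a minimal sketch of the setup from the question as a single pod (the image names come from the question; everything else about the apps is assumed):

apiVersion: v1
kind: Pod
metadata:
  name: tree-of-plums
spec:
  containers:
  - name: purple
    image: treeofplums/purple    # reaches shiny at localhost:<shiny's port>
  - name: shiny
    image: treeofplums/shiny     # reaches purple at localhost:<purple's port>

There is no network or net-alias key because none is needed: the containers share one network namespace, so the only coordination required is that purple and shiny listen on different ports.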

Related

How to differentiate Docker containers and Kubernetes pods running on the same host

I was handed a Kubernetes cluster to manage. But on the same node, I can see running Docker containers (via docker ps) that I was not able to find or relate to any pods/deployments (via kubectl get pods/deployments).
I have tried kubectl describe and docker inspect but could not pick out any differentiating parameters.
How to differentiate which is which?
There will be many. At a minimum you'll see all the pod sandbox pause containers, which are normally not visible, plus possibly anything you run directly, such as the control plane if it is not using static pods.
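Assuming the cluster uses the Docker runtime, one practical way to tell them apart: containers created by the kubelet are named with a k8s_ prefix and carry io.kubernetes.* labels, so you can filter on those.

# containers managed by the kubelet carry io.kubernetes.* labels
docker ps --filter "label=io.kubernetes.pod.name"

Anything in the plain docker ps output that is missing from the filtered list was started outside Kubernetes.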

Docker networks in Kubernetes/Rancher

I've been trying to convert my SimpleLogin Docker containers to Kubernetes using Rancher. However one of the steps requires me to create a network.
sudo docker network create -d bridge \
--subnet=240.0.0.0/24 \
--gateway=240.0.0.1 \
sl-network
I couldn't really find a way to do this on Kubernetes/Rancher.
How do I set up an equivalent network like the above command in Kubernetes?
If you want more information about what this network should do you can find it here.
You don't. Kubernetes has its own network ecosystem, which mostly acts as though every Pod and Service is on the same network. You can't create separate subnets within it, and there is no way to create a separate network per logical application. You also can't control the IP range of networks in Kubernetes (that shouldn't usually be necessary in Docker either).
Generally you can communicate between Kubernetes Pods by putting a Service in front of each, and then using the Service's DNS name as a host name. If all of the parts were running in the same Namespace, and the Service in front of the database were named sl-db, then the webapp Pod could use sl-db as the host name part of the DB_URI setting.
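A sketch of what that could look like, assuming the database Pod carries the label app: sl-db and the database listens on Postgres's default port (both assumptions):

apiVersion: v1
kind: Service
metadata:
  name: sl-db              # this name becomes the DNS host name
spec:
  selector:
    app: sl-db             # assumed label on the database Pod
  ports:
  - port: 5432             # assumed database port

The webapp Pod would then use a DB_URI along the lines of postgres://user:password@sl-db:5432/simplelogin.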
Reading through the documentation you link to, you will probably need to do some extra work to get the Postfix MTA set up. Note that it looks like it runs outside of Docker in this setup; either you will have to port the setup to run inside Kubernetes or configure its mynetworks settings to include the network that contains the Kubernetes nodes. You will also need to set up Kubernetes ConfigMaps and Secrets to hold the various configuration files and certificates this setup needs.

How to coordinate ports between docker containers

I have installed docker to host several containers on a server, using the host network - so ports are shared amongst all containers. If one container uses port 8000, no other ones can. Is there a tool - perhaps not so complex as k8s, though I've no idea whether that can do it - to assist me with selecting ports for each container? As the number of services on the host network grows, managing the list of available ports becomes unwieldy.
I remain confused as to why, when I run docker ps, certain containers list no ports at all. It would be easier if the full list of ports were readily available, but I have two containers with a sizable list of exposed ports that show no ports at all. I suppose that is a separate and less important question.
Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is the Pod’s name. Because containers share the same IP address and port space, you should use different ports in each container for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost.
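A sketch of such a ConfigMap (the object name and the exact nginx directives are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf          # assumed name, referenced by the Pod in step 2
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # forward everything to the web app in the sibling container
          proxy_pass http://localhost:5000;
        }
      }
    }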
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
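A sketch of that Pod (the web-app image is a placeholder for any HTTP server listening on 5000):

apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp      # placeholder: a simple app listening on 5000
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80         # only port 80 is declared for the Pod
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: mc3-nginx-conf      # the ConfigMap from step 1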
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Now you can use your browser (or curl) to navigate to your node’s port to access the web application.
It’s quite common for several containers in a Pod to listen on different ports, all of which need to be exposed. To make this happen, you can either create a single service with multiple exposed ports, or create a separate service for every port you’re trying to expose.
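A sketch of the first option, a single Service exposing two ports (the service name and the second port are assumptions; note that a multi-port Service must give each port a name):

apiVersion: v1
kind: Service
metadata:
  name: mc3-multi
spec:
  selector:
    app: mc3
  ports:
  - name: http             # each port needs a name in a multi-port Service
    port: 80
  - name: admin            # assumed second port
    port: 8080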

RabbitMQ cluster by docker-compose on different hosts and different projects

I have 3 projects that deploy on different hosts. Every project has its own RabbitMQ container, but I need to build a cluster from these 3 hosts, using the same vhost and a different user/login pair for each.
I tried Swarm and overlay networks, but Swarm is aimed at running standalone containers and doesn't work with Compose. I also tried docker-compose bundle, but that did not work as expected :(
I assumed it would work something like this:
1) On the manager node I create an overlay network.
2) In every compose file I extend the networks config for the RabbitMQ container with my overlay network.
3) Everything works as expected and I don't have to publish the RabbitMQ port to the Internet.
Any idea how I can do this?
Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, and each network will have its own RabbitMQ service (just one container by default; use --replicas to run more than one). Within its network, other services can reach the message queue by the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside the Docker network.

Kubernetes - container communication within a pod using names instead of 'localhost'?

From the kubernetes docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost.
Is it possible to use container-specific names instead of localhost?
For example, with docker-compose up, you use name of the service to communicate. [docs]
So, if my docker-compose.yml file is
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  srv:
    build: .
    ports:
      - "3000:3000"
Then I access srv from within web by calling http://srv:3000/, not http://localhost:3000
How can I achieve the same behaviour in kubernetes? Any way to specify what name to use in pods' yaml configuration?
localhost is just a name for the network loopback device (usually 127.0.0.1 for IPv4 and ::1 for IPv6). This is usually specified in your /etc/hosts file.
A pod has its own IP, so each container inside shares that IP. If these containers should be independent (i.e. don't need to be collocated), they should each be in their own pod. Then, you can define a service for each that allows DNS lookups as either "$SERVICENAME" from pods in the same namespace, or "$SERVICENAME.$NAMESPACE" from pods in different namespaces.
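A sketch, assuming the srv application from the compose file above moves into its own pod (the image name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: srv
  labels:
    app: srv
spec:
  containers:
  - name: srv
    image: example/srv        # placeholder image that listens on 3000
    ports:
    - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: srv                   # this becomes the DNS name
spec:
  selector:
    app: srv
  ports:
  - port: 3000

With web in its own pod and fronted the same way, http://srv:3000/ works exactly as it did under docker-compose.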
docker-compose deploys individual containers, linking them together so they know each other's name and IP.
A Pod in Kubernetes is similar, but the purpose of a Pod is not to hold multiple independent services and link them together.
A Pod is for containers that must be running on the same host, and interact among themselves only. The containers communicate internally via localhost.
Most Pods are in fact a single container.
A Pod communicates with the outside using Services. In essence a Pod appears as if it was just one container.
Under the hood, a Pod is at least two containers: the pause container, which holds the Pod's IP, plus your application container. This lets your container crash and restart within the Pod without changing IP, so container crashes can be handled without involving the scheduler, and the Pod stays on a single node during its lifetime, which makes restarts fast.
If containers were rescheduled each time they crashed, they could end up on a different host, routing would have to be updated, and so on...
Generally, containers running inside a pod share the pod's IP and port space, and by default they communicate with each other through localhost. To address containers by name (DNS-style), each container should run in its own pod, exposed as a Service to the rest of the application.
