How to coordinate ports between docker containers - docker

I have installed docker to host several containers on a server, using the host network - so ports are shared amongst all containers. If one container uses port 8000, no other ones can. Is there a tool - perhaps not so complex as k8s, though I've no idea whether that can do it - to assist me with selecting ports for each container? As the number of services on the host network grows, managing the list of available ports becomes unwieldy.
I'm also confused about why, when I run docker ps, certain containers list no ports at all. It would be easier if the full list of ports were readily visible, but I have two containers with a sizable list of exposed ports that show no ports at all. I suppose this is a separate and less important question.

Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is a Pod’s name. Because containers share the same IP address and port space, you should use different ports in containers for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost.
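A minimal sketch of such a ConfigMap, assuming the name mc3-nginx-conf (the name and the exact nginx.conf contents are illustrative, not taken from the original steps):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf        # assumed name, referenced by the Pod in Step 2
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # forward incoming requests to the web app on localhost:5000
          proxy_pass http://127.0.0.1:5000;
        }
      }
    }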
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
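A sketch of such a Pod manifest, reusing the ConfigMap above; the web app image is a placeholder, and only the nginx container declares a port:

apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: my-simple-webapp:latest   # placeholder image; listens on 5000, never declared on the Pod
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80              # only nginx's port 80 is defined for the Pod
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: mc3-nginx-conf           # assumed ConfigMap name from Step 1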
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Now you can use your browser (or curl) to navigate to your node’s port to access the web application.
It's quite common for several containers in a Pod to listen on different ports, all of which need to be exposed. To make this happen, you can either create a single service with multiple exposed ports, or create a separate service for every port you're trying to expose.
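For the first option, a single Service can declare several ports; a rough sketch (the service name and the second port are illustrative, not from the example above):

apiVersion: v1
kind: Service
metadata:
  name: mc3-multi           # illustrative name
spec:
  type: NodePort
  selector:
    app: mc3
  ports:
  - name: http              # each entry needs a unique name when more than one port is listed
    port: 80
    targetPort: 80
  - name: metrics           # a second container port exposed through the same Service
    port: 9090
    targetPort: 9090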

Related

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add a link to another container. This is a legacy feature of Docker and may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is then exposed to all the networks that container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address and setting the port in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple thing in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_IP:HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
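For reference, the host-IP binding I mentioned looks like this in Compose (loopback here is just an example); it only restricts which host interface the port is published on, not which Docker network can reach the container:

services:
  container1:
    ports:
      - "127.0.0.1:80:80"   # HOST_IP:HOST_PORT:CONTAINER_PORT - publishes only on the host's loopback
    networks:
      - internet
      - email
      - database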
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container to container networking in docker is one-size-fits-many. When two containers are on the same network, and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI plugin, where various vendors support network policies. The implementation may be iptables rules, eBPF code, some kind of sidecar proxy, etc. But it has to be done as the container networking is set up, and docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created. The application could also be configured to listen on the specific IP address of the network it trusts, but this requires injecting the trusted subnet and then looking up your container IP in your entrypoint; that's non-trivial to script up, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components which need to be on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
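For illustration only, a Kubernetes NetworkPolicy that limits a database port to one set of pods might look roughly like this (the labels and port are placeholders, not taken from the question):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-port-from-app-only     # illustrative name
spec:
  podSelector:
    matchLabels:
      role: database              # applies to pods labelled as the database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: app               # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 5432                  # and only on this port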
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this for me was to remove the -p option, as that binds the container's port to the host and makes it available outside the host.
If you don't specify -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems that -p is what publishes the container's port on the host, binding it to the port specified.
In your example, if you don't use -p when starting "container1", "container1" will be available to the networks internet, email and database with all its ports, but not outside the host.

What is the container port in a Kubernetes YAML file

Since we expose the container port in the Dockerfile itself, what is the use of the container port in the Kubernetes YAML? What does it actually do? Is it mandatory to mention the container port in the YAML file, or can we leave it out when we expose it in the Dockerfile?
Anyway, we will be using targetPort to map the container port to the Pod:
ports:
- containerPort: 80
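To illustrate the targetPort mapping mentioned above, a sketch of a Service that forwards its own port to that containerPort (the names and labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: web-svc            # illustrative name
spec:
  selector:
    app: web               # assumes the Pod template carries this label
  ports:
  - port: 8080             # port the Service listens on inside the cluster
    targetPort: 80         # forwarded to containerPort 80 in the Pod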
ports: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
container-core
So it is exactly the same as the docker EXPOSE instruction. Both are informational. If you don't configure ports in the Kubernetes deployment, you can still access the ports using the Pod IP inside the cluster. You can also create a service to access the ports externally without configuring ports in the deployment. But it is good to configure them: it helps you and others understand the deployment configuration better.
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.
.docker-reference-builder

Multiple docker containers with same container port connected to the same network

I have an app that depends on multiple docker containers. I use docker compose so that all of them are in the same network for inter-container communication. But two of my containers listen on the same port, 8080, inside their respective containers, and are mapped to different ports on the host: 8072 and 8073. Since inter-container communication uses the container's port, will this cause problems?
Constraints:
I need both the containers for my app to run. Thus I cannot isolate the other container with the same internal port onto a different network.
All containers should run on the same host.
I am new to docker and I am not sure how to solve this.
Thanks
IIUC see the documentation here:
https://docs.docker.com/compose/networking
You need not expose each of the service's ports on the host unless you wish to access them from the host, i.e. outside of the docker-compose created network.
Ports must be unique per host but each service in your docker-compose created network can use the same port with impunity and is referenced by <service-name>:<port>.
In the Docker example, there could be 2 Postgres services. Each would need a unique name, db1 and db2, but both could use the same port, 5432, and be uniquely addressable from the service called web (and from each other) as db1:5432 and db2:5432.
Each service corresponds effectively to a different host. So, as long as the ports are unique for each service|host, you're good. And, as long as any ports you publish on the host are unique, you're good too.
Extending the example, db1 could publish port 9432:5432, but then db2 would need to find a different host port to use, perhaps 9433:5432.
Within the docker-compose created network, you would access db1 as db1:5432 and db2 as db2:5432.
From the host (outside the docker-compose created network), you would access db1 as localhost:9432 and db2 as localhost:9433.
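A compose sketch of that layout, with placeholder images, might look like this:

services:
  web:
    image: my-web-app        # placeholder image
    ports:
      - "8080:8080"          # only web is published on the host in this sketch
  db1:
    image: postgres:15
    ports:
      - "9432:5432"          # host 9432 -> container 5432
  db2:
    image: postgres:15
    ports:
      - "9433:5432"          # host 9433 -> container 5432; host ports must differ

Inside the compose network, web still reaches them as db1:5432 and db2:5432 regardless of the host ports.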
NB It's likely a good practice to only publish service ports to the host when those services must be accessible from outside (e.g. web probably must be exposed but dbX probably need not be exposed). You may wish to be more liberal in exposing service ports while debugging.

Kubernetes - container communication within a pod using names instead of 'localhost'?

From the kubernetes docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost.
Is it possible to use some container-specific names instead of localhost?
For example, with docker-compose up, you use the name of the service to communicate. [docs]
So, if my docker-compose.yml file is
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  srv:
    build: .
    ports:
      - "3000:3000"
Then I access srv from within web by calling http://srv:3000/, not http://localhost:3000
How can I achieve the same behaviour in kubernetes? Any way to specify what name to use in pods' yaml configuration?
localhost is just a name for the network loopback device (usually 127.0.0.1 for IPv4 and ::1 for IPv6). This is usually specified in your /etc/hosts file.
A pod has its own IP, so each container inside shares that IP. If these containers should be independent (i.e. don't need to be collocated), they should each be in their own pod. Then, you can define a service for each that allows DNS lookups as either "$SERVICENAME" from pods in the same namespace, or "$SERVICENAME.$NAMESPACE" from pods in different namespaces.
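A minimal sketch of that approach for the srv piece, using a placeholder image (web would get its own Pod and Service in the same way):

apiVersion: v1
kind: Pod
metadata:
  name: srv
  labels:
    app: srv
spec:
  containers:
  - name: srv
    image: my-srv-image          # placeholder image listening on 3000
    ports:
    - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: srv                      # this name becomes the DNS name other pods use
spec:
  selector:
    app: srv
  ports:
  - port: 3000
    targetPort: 3000

With this in place, web can call http://srv:3000 just as it did with docker-compose (or http://srv.<namespace>:3000 from another namespace).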
docker-compose deploys individual containers, linking them together so they know each other's name and IP.
A Pod in Kubernetes is similar, but it is not the purpose of a Pod to hold multiple external services and link them together.
A Pod is for containers that must be running on the same host, and interact among themselves only. The containers communicate internally via localhost.
Most Pods are in fact a single container.
A Pod communicates with the outside using Services. In essence a Pod appears as if it was just one container.
Under the hood, a Pod is at least 2 containers: the pause container manages the IP of the Pod, and then there is your attached container. This allows your container to crash, restart, and be relinked in the Pod without changing IP, so container crashes can be handled without involving the scheduler, and the Pod stays on a single node during its lifetime, which makes restarts fast.
If containers were rescheduled each time they crashed, they would potentially end up on a different host, routing would have to be updated, etc.
Generally, containers running inside a pod share the pod's IP and port space. Communication between the containers happens through localhost by default. To communicate between containers by name (like DNS), the containers should run in independent Pods, each exposed as a service to the rest of the application world.

How can I expose kubernetes services running within docker?

What I want to do is run kubernetes within docker and expose the kubernetes services externally. I followed the docs on getting kubernetes running within docker. As long as I connect from the localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a docker image directly, then I can access it. Only things running within kubernetes aren't exposed. Is this possible?
Ensure your nodes have externally reachable IP addresses.
Then create a service of type NodePort:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And direct traffic to nodes at the allocated port.
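As a rough sketch, the manifest for such a NodePort service might look like this (the selector label and nodePort value are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app              # assumed label on the pods to expose
  ports:
  - port: 80                 # cluster-internal port of the service
    targetPort: 80           # port the container listens on
    nodePort: 30080          # optional; must fall in the node-port range (default 30000-32767)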
