Can the same Docker port be used for two different applications?

If we have two applications, app1.py and app2.py, both running in Docker containers as Flask services with the following commands:
docker run -p 5000:5002 app1.py
docker run -p 9000:5002 app2.py
Is it possible to keep the same docker port 5002 for both containers?
Secondly, if I use app.run(host='0.0.0.0', port=5000, debug=True) in the Flask endpoint.py file used for building the image, is port=5000 the Docker port inside the container or the port available externally on the host?

Yes. Each container runs in an isolated network namespace, so every one of them can listen on the same port and they won't conflict. Inside your application code, the port you listen on is the container-internal port; unless you are told in some other way (such as via an HTTP Host: header), you have no way of knowing which port it has been remapped to externally.
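For example, both images could contain the same minimal Flask app bound to the container-internal port 5002; the two docker run commands from the question then map it to host ports 5000 and 9000. This is only a sketch (the file name endpoint.py is taken from the question):

# endpoint.py -- identical in both images; each container has its own
# network namespace, so both can bind port 5002 without conflict
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello from this container"

if __name__ == "__main__":
    # bind 0.0.0.0 so Docker's port mapping can reach the server;
    # 5002 is the container-internal port referenced by -p 5000:5002
    app.run(host="0.0.0.0", port=5002)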

Is it possible to keep the same docker port 5002 for both containers?
Yes, of course. Every container runs in an isolated network namespace, which means one container's ports cannot conflict with another's unless they are configured to share a namespace. What may be confusing you is that containers nevertheless communicate with each other just fine by default; for that you can thank Docker's default network settings. You can read more about the Container Network Model and network namespaces here.
Is port=5000 the port inside the container or the port valid externally on the host?
It is the port inside the container, without a doubt. Note that it is a user-defined argument to Flask's run() function; since the Flask application runs inside the container, 5000 is the port the Flask app listens on there.
If you want to access port 5000 from the host (outside the container), you have to map it out; the -p flag does exactly that.
Hope this helps~

Related

How to expose the Docker container IP to the external network?

I want to expose the container IP to the external network where the host is running, so that I can ping the Docker container's IP directly from an external machine.
If I ping the container IP from an external machine that is on the same network as the machine hosting Docker, I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80, and other machines on the same network will also be able to reach the web server in this container (depending on firewall settings etc., of course).
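For instance, from another machine on the same network (a sketch; 192.168.1.20 is a placeholder for the Docker host's LAN IP):
curl http://192.168.1.20/
# prints the Apache default page ("It works!") served from the container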
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
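A minimal way to create such a Service is kubectl expose (a sketch; the deployment name my-app is hypothetical):
kubectl expose deployment my-app --port=80 --target-port=80 --type=NodePort
The app is then reachable on the allocated NodePort via any node's IP address.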
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes; I suspect it isn't. Don't worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers use a bridge network of 172.17.0.0/16, you add a static route for the 172.17.0.0/16 network, with your Docker host's physical LAN IP as the gateway. You might also need to allow this forwarding in your Docker host's OS firewall configuration.
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped port), you can remove the port mapping from the container entirely.
You need to create a new bridge Docker network and attach the container to it. You should be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10/16 my-new-bridge-network container-name
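You can verify the result with (same hypothetical names as above):
docker network inspect my-new-bridge-network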
If the problem persists, just reload the Docker daemon and restart the service; it is a known issue.

Can we have two or more containers running on Docker at the same time?

I have not done any practical work with Docker and containers, but as per my knowledge:
The documents available online did not give me the details about running two or more containers at the same time.
Docker allows a container to map its ports to ports on the host machine.
Now, the question is: can we run multiple containers at the same time on Docker? If yes, and two containers are mapped to the same port number, how is the port handled in this case?
Also, out of curiosity, can two containers on Docker communicate with each other?
Yes, you can run multiple containers on a single host; Docker is designed for exactly that.
You cannot publish two containers on the same host port number; you get an error response if you try. However, if your containers run the same image (e.g. 2 instances of a webapp) you could run them as a service, and have them exposed on the same port; Docker will load-balance the requests. You can read more about services here or follow the Get Started (Part 3, services) here.
Yes, the containers on a single host can communicate with each other by container name. For example, if you have one container running MongoDB called mongo, and another one running Node.js called webserver, the webserver container can connect to the database by using the name mongo, e.g. db.Connect("mongodb://mongo:27017/testdb").
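The same lookup works from any language; for instance, a Python client in the webserver container could connect like this (a sketch assuming the pymongo package and that both containers are attached to the same Docker network):

from pymongo import MongoClient

# "mongo" resolves via Docker's built-in DNS to the database container
client = MongoClient("mongodb://mongo:27017/")
db = client["testdb"]
print(db.list_collection_names())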
We can run more than one container at a time on a host, but yes, we will hit the limitation of binding the same host port twice. To resolve this, we need to bind a different host port for each container: if you are running mongo-db, its default port is 27017, so we can run three instances as -p 27017:27017 for container D1, -p 27018:27017 for container D2, and -p 5000:27017 for container D3. Like this you can bind different host ports that all map to 27017, the mongo-db port, in their containers. Now, if your question is how to manage these ports from the host, I would recommend using nginx for port management on the host machine.
Coming to your next question: all containers are connected to the default docker0 bridge network, so we can connect to any container attached to that default bridge. If I am right, they come up with IP addresses in a 172.x.x.x network. Get inside a container and run 'ip addr' to see the IP address assigned to it, and you can test the connection by running the ping command.
Yes, two containers can run at the same time, and they can also communicate with each other; you can define your own network and they can communicate over it. If two containers have their own private ports, those are internal ports, and one container's port does not collide with another container's port. If you want to expose a port to the host, then you have to publish the port(s).

How to obtain the published ports from within a docker container?

I wrote a simple peer-to-peer system, in which, when starting a node
the node looks for a free port and then makes its service accessible on that port,
it registers its URL including the port to a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that docker is useful for, so I thought of making a container for these peers (my previous experience with docker has only been to write a hello world container).
Ideally I would map (publish) my exposed port to a host port from within the container using the same code that I am running now, but I could imagine that is simply not possible, and I could get around that by starting the image using a script that checks for availability of ports and then runs the container on an appropriate free host port. If the first is possible however, that would be even better. To be explicit, I do something like the following in Python
port = 5001
while not port_is_free(port):
port += 1
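port_is_free is not shown in the question; a minimal sketch of such a helper, assuming a plain TCP bind test is good enough:

import socket

def port_is_free(port):
    # if the bind succeeds, nothing else is currently listening on the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False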
The second part really has to be taken care of from within the container. Assume that it has been started with the command docker run -p 5005:80 p2p-node then I need to find out the published port 5005 that the exposed port 80 is mapped to from within the container.
Searching this site and the internet it looks like more people are interested in doing the same, but I couldn't find a solution, nor an affirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea:
Start container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from container
echo $HOST_PORT
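Since the peer code is in Python, the same value can be read there as well (a sketch; HOST_PORT is just the variable name chosen above):

import os

# the published host port is invisible inside the container, so it is
# handed in explicitly at `docker run` time via the environment
host_port = int(os.environ["HOST_PORT"])
print(host_port)  # 5005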
There is also docker-py, a Python client library for Docker.
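A sketch of how it could be used to look up the mapping from inside the container, assuming the Docker socket has been mounted in (-v /var/run/docker.sock:/var/run/docker.sock) and that the container's hostname is its ID (Docker's default):

import socket
import docker

client = docker.from_env()
container = client.containers.get(socket.gethostname())
# e.g. {"80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "5005"}]}
ports = container.attrs["NetworkSettings"]["Ports"]
print(ports["80/tcp"][0]["HostPort"])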
It is not clear why you want the host port. In Docker, containers can communicate with each other without having to expose ports on host machines.
As long as the peer apps are containerized, you don't need to expose the port at all. The containers can be connected via a Docker network, and the internal port can be used for communication between the containers.

Docker: Map host ports to several docker containers

I haven't fully understood the way port forwarding works with docker.
My scenario looks like this:
I have a Dockerfile that exposes a port (in my case it's 8000)
I have built an image using this Dockerfile (by using "docker build -t test_docker .")
Now I created several containers by using "docker run -p 808X:8000 -d test_docker"
The host responds when I call its IP with the different ports I assigned in "docker run".
What exactly does this EXPOSE command in the Dockerfile do? I understood that the Docker daemon itself handles the network connections, and when calling "docker run" I also tell it which image should be used...
Thanks
OK, I think I understood the reason.
If your application listens on a port, you need to expose exactly that port. E.g.
HttpServer.bind('0.0.0.0', 8000).then((server) { ... });
will need "EXPOSE 8000". (Note that inside a container the server has to bind to 0.0.0.0 rather than 127.0.0.1; otherwise the mapped port cannot reach it from outside the container.) Like this you can listen on several ports in your app, but then you need to expose them all.
Am I right?
Exposing ports in your Dockerfile allows you to spin up a container using the -P flag (see here) on the docker run command.
A simple use case might be that you have nginx sitting on port 80 on a load-balancing server, and it load-balances that traffic across a few Docker containers sitting on a CoreOS Docker server. Since each of your apps uses the same port, 8000, you wouldn't be able to reach them individually. So Docker maps each container's port to a high, random, non-conflicting port on the host. So when you hit 49805, it goes to container 1's port 8000, and when you hit 49807, it goes to container 2's port 8000.
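For example (a sketch; the image and container names are hypothetical):
docker run -d -P --name web1 test_docker
docker run -d -P --name web2 test_docker
docker port web1 8000   # e.g. 0.0.0.0:49805
docker port web2 8000   # e.g. 0.0.0.0:49807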

Docker container linking via port forwarding?

It seems that the preferred way to expose services to other Docker containers is container linking, which sets some environment variables that you then have to use in your application code to look up host names and port numbers:
psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT
Is there a reason this is not done via port forwarding in a way that is transparent to the application? So that in the same way that I can just run my web server inside the container on standard port 80 and have Docker figure out what actual port to use, I could just be doing
psql -h 0.0.0.0 # no -p necessary, we use the default port
The port forwarding would be set up when I start docker, just like with server ports.
This is possible! It has actually been proposed by the CoreOS team; you can read more in the following blog post:
http://coreos.com/blog/Jumpers-and-the-software-defined-localhost/
Docker will soon allow you to start a container sharing the network namespace of another container; it will help with those scenarios (and in the short term, it will make what you suggest very easy to do).
Project Atomic is also following this approach:
http://www.projectatomic.io/docs/inter-container-networking/
Geard uses iptables to enable containers to connect to each other. Network namespaces allow adding iptables rules to the network namespace of a container. The basic idea is to make remote endpoints appear as if they were local to a container. For example, the database container could be made to appear to be running locally inside the application container.
