I haven't fully understood the way port forwarding works with docker.
My scenario looks like this:
I have a Dockerfile that exposes a port (in my case it's 8000)
I have built an image using this Dockerfile (by using "docker build -t test_docker .")
Now I created several containers by using "docker run -p 808X:8000 -d test_docker"
The host responds when I call its IP with the different ports I assigned at "docker run".
What exactly does this EXPOSE command do in the Dockerfile? I understood that the docker daemon itself handles the network connections, and when calling "docker run" I also tell it which image should be used...
Thanks
OK, I think I understood the reason.
If your application listens on a port, you need to expose exactly that port. E.g.
HttpServer.bind('127.0.0.1', 8000).then((server) {...});
will need "EXPOSE 8000". This way you can listen on several ports in your app, but then you need to expose them all.
Am I right?
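The same pattern in Python, as a minimal sketch (the Dart example above hard-codes 8000; here the port is a parameter so the sketch runs anywhere): every port the app listens on is one that EXPOSE should declare.

```python
import socket

def listen(port: int) -> socket.socket:
    """Bind and listen on the given port -- the Python analogue of the
    Dart HttpServer.bind call; this is the port EXPOSE should document."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    return s
```

If the app called listen(8000), the matching Dockerfile line would be EXPOSE 8000; a second listener would need its own EXPOSE line.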
Exposing ports in your Dockerfile allows you to spin up a container using the -P flag on the docker run command.
A simple use case might be that you have nginx sitting on port 80 on a load-balancing server, and it's going to load-balance that traffic across a few Docker containers sitting on a CoreOS Docker server. Since each of your apps uses the same port, 8000, you wouldn't be able to reach them individually. So Docker maps each container to a high, random, non-conflicting port on the host. So when you hit 49805 it goes to container 1's port 8000, and when you hit 49807 it goes to container 2's port 8000.
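The load-balancing idea can be sketched without nginx: a trivial round-robin selector over the published host ports (the port numbers are made up for illustration, matching the example above).

```python
from itertools import cycle

# Hypothetical host ports that Docker published for the containers'
# internal port 8000 (values for illustration only).
backends = cycle([49805, 49806, 49807])

def next_backend() -> int:
    """Return the next published host port, round-robin, the way an
    nginx upstream block would rotate through its servers."""
    return next(backends)
```

Each value returned is a host port that forwards to one container's port 8000.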
Related
I'm new to the Docker world. I'm trying to import two Docker models downloaded from Peltarion on Ubuntu 20.04.
I downloaded Docker, set up the two images and created the first container with the command
docker run -p 8000:8000 model-export
When I call xxx.xx.xx.xx:8000 with an HTTP request, I receive an answer from it.
My problem is that when I create the second container
docker run -p 8090:8090 model-export
I get the following port configuration:
(screenshot of the container's port configuration)
At this point I try to call xxx.xx.xx.xx:8090, but there is no response.
How can I configure the container so that I can call it with an HTTP request?
Thanks in advance to anyone who helps me.
TLDR: your parameter value should be -p 8090:8000.
The order of -p/--publish parameter values should be:
published_port:container_exposed_port
published_port is the port used by the host (your machine) where Docker is installed, and it can be reached from outside as a publicly available service: for example, it can be visible to other computers in your local network, or from the Internet.
container_exposed_port is the port that is exposed from the Docker container to the Docker host by the image (for example by the EXPOSE 8000 instruction in the image's Dockerfile). It's visible only to your Docker host, and to other Docker containers too (Docker networking).
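The ordering is the whole trick, so here is a tiny sketch that splits a -p value into its two halves (parse_publish is a hypothetical helper for illustration; real Docker accepts richer forms, such as IP prefixes, port ranges, and /udp suffixes, that this sketch ignores).

```python
def parse_publish(spec: str) -> tuple:
    """Split a -p/--publish value "published:container" into
    (published_port, container_exposed_port)."""
    published, container = spec.split(":")
    return int(published), int(container)
```

For the question above: parse_publish("8090:8000") gives host port 8090 forwarding to the container's exposed port 8000.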
I have a container in which I expose a port to access a service running inside the container. I am not exposing the port outside the container, i.e. to the host (using the host network on Mac). After getting inside the container using docker exec -t and running a curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE instruction in my Dockerfile and do not want to publish ports to my host. My service is also up and running inside the container. I also have this property set within the config:
"ExposedPorts": {"19999/tcp": {}}
(obtained through `docker inspect <container id/name>`). Any idea why this is not working? Using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another weird issue: on disabling my proxies it would run a very lightweight command for my custom service, and then wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port an application listens on: each container has its own IP, so you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
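The "Connection refused" symptom can be reproduced without Docker at all: it simply means nothing is listening on that port. A small probe sketch:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something is accepting TCP connections on host:port;
    a refused or timed-out connection returns False, exactly the condition
    curl reports as "Connection refused"."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unreachable hosts
        return False
```

If can_connect("localhost", 19999) is False from inside the container, the service is not bound to 19999, no matter what EXPOSE says.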
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See the Known limitations, use cases, and workarounds in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
If we have two applications app1.py and app2.py, both running in Docker containers as Flask services started with the following commands:
docker run -p 5000:5002 app1.py
docker run -p 9000:5002 app2.py
Is it possible to keep the same docker port 5002 for both containers?
Secondly, if I use app.run(host='0.0.0.0', port=5000, debug=True) in the Flask endpoint.py file which is used for image building, is port=5000 the Docker port in the container or the port available externally on the host?
Yes: each container runs in an isolated network namespace, so each one can listen on the same port and they won't conflict. Inside your application code, the port you listen on is the container-internal port, and unless you are told in some other way (like an HTTP Host: header) you have no way of knowing which ports you've been remapped to externally.
Is it possible to keep the same docker port 5002 for both containers?
Yes, of course. Each container runs in an isolated network namespace, which means containers cannot communicate with each other unless they are configured to do so. What may confuse you is that containers on the default network can communicate with each other out of the box, thanks to Docker's default network settings. There are still other use cases; you may read more about the Container Network Model and network namespaces here.
Is port=5000 the port in container or the port valid externally on host?
It's the port in the container, without a doubt. It is a user-defined argument to Flask's run() function, and since the Flask application runs in the container, 5000 is the port the Flask app listens on inside the container.
If you want to access port 5000 from the host (outside the container), you need to map it out; the -p flag does that.
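As a sketch of that distinction, here is the same shape of server using only the standard library's http.server in place of Flask (an assumption for portability: port 0 stands in for 5000 so the OS picks a free port, but in the Flask example the bound port would be the container-internal 5000).

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the container port\n")

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Flask's app.run(host='0.0.0.0', port=5000) binds the container-internal
# port; port 0 here just asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), Hello)
container_port = server.server_port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A request to the bound (container-internal) port succeeds; reaching it
# from the host would additionally require -p <host_port>:<container_port>.
body = urlopen(f"http://127.0.0.1:{container_port}/").read()
```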
Hope this helps~
I wrote a simple peer to peer system, in which, when starting a node
the node looks for a free port and then makes its service accessible on that port,
it registers its URL including the port to a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that docker is useful for, so I thought of making a container for these peers (my previous experience with docker has only been to write a hello world container).
Ideally I would map (publish) my exposed port to a host port from within the container, using the same code that I am running now. I can imagine that is simply not possible, in which case I could work around it by starting the image with a script that checks for available host ports and then runs the container on an appropriate free one. If the first option is possible, however, that would be even better. To be explicit, I do something like the following in Python:
port = 5001
while not port_is_free(port):
    port += 1
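For completeness, port_is_free can be implemented with one bind attempt (a sketch of what the loop above assumes; the author's actual helper may differ):

```python
import socket

def port_is_free(port: int) -> bool:
    """Try to bind the port; if the bind succeeds, nothing else owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:  # EADDRINUSE and friends
            return False
```

Note the result is only a snapshot: another process can grab the port between the check and the later bind, so the real listener should still handle a bind failure.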
The second part really has to be taken care of from within the container. Assume it has been started with the command docker run -p 5005:80 p2p-node; then I need to find out, from within the container, the published port 5005 that the exposed port 80 is mapped to.
Searching this site and the internet it looks like more people are interested in doing the same, but I couldn't find a solution, nor an affirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea.
Start container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from container
echo $HOST_PORT
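From application code, the same variable can be read at startup; a small Python sketch (the variable name HOST_PORT comes from the docker run line above, and the fallback default is an assumption for illustration):

```python
import os

def published_port(default: int = 0) -> int:
    """Read the host port injected at `docker run` time via
    -e HOST_PORT=<port>; fall back to `default` when the variable
    is absent (e.g. when the container was started without -e)."""
    return int(os.environ.get("HOST_PORT", default))
```

The node could then register a URL built from its public hostname and published_port() with the central server.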
There is also docker-py, a Python client library for Docker.
It is not clear why you want the host port. In docker, containers can communicate with each other without having to expose ports on host machines.
As long as the peer apps are containerized, you don't need the exposed port. The containers can be connected via a Docker network, and the internal port can be used for communication between the containers.
To give the background: I have a deployment workflow which concurrently downloads and installs an application onto multiple systems/servers. To test this workflow, I need to verify the concurrent deployment on 500 systems. I am not in a position to create 500 VMs to test this, so I took the approach of using Docker containers. Now the challenge: if I start a container with a public/static IP address and install ssh inside the container, I can log in to this container via ssh. But I cannot start another container with the same configuration, because port 22 is already used by container #1 on the host, and I cannot use different ports because my deployment workflow internally uses only port 22.
I think this can be achieved using port forwarding/NAT: maybe whenever a request comes to IP #1, use port 22, and whenever a request comes to IP #2, use port 27. But I am not sure if this is possible.
Any pointers on this will be very helpful.
First, docker maps container ports to different host ports -- launch containers with -p.
docker run -p 10001:22 mycontainer
docker run -p 10002:22 mycontainer
docker run -p 10003:22 mycontainer
etc.
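Generating those commands for all 500 test containers is easy to script; a Python sketch that only builds the command strings (the image name mycontainer is taken from the example above, and the base port 10000 is an assumption):

```python
def ssh_publish_commands(image: str, count: int, base_port: int = 10000):
    """Build one `docker run` command per container, publishing host port
    base_port+i onto the container's sshd port 22. Note the options come
    before the image name, as `docker run` requires."""
    return [
        f"docker run -p {base_port + i}:22 {image}"
        for i in range(1, count + 1)
    ]
```

Feeding the resulting list to a shell (or to subprocess.run, one entry at a time) would start the whole fleet, each reachable on its own host port.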
From the docker run reference:
-p=[] : Publish a container's port or a range of ports to the host.
Edit: I think I misread your use of ssh.
Does the deployment workflow connect to each container via ssh (push), or is the workflow contacted (pull)? If push, just push out to the 500 clients, e.g. :10001 through :10500. If pull, all clients will be calling on 22 anyway.