I'm using Docker 1.9.1's remote API to create a container.
One thing I'm trying to accomplish is that, among all the exposed ports of an image, I only want to publish a few of them (in other words, give them host port mappings). At the same time I don't want to manage the host ports myself: I want Docker to pick random, available ones.
For example, if an image exposes ports 80, 443, and 22, what I want is something like this in docker run flavor (I know this is not possible through the command line, though):
docker run -p {a random available port}:80 image
Can I achieve something like this through the remote API? Right now I can only set PublishAllPorts = true, but that publishes all ports and wastes too many host ports.
The Docker REST API call for creating/starting a container allows you to define port bindings. For a random mapping to a host port, use "PortBindings": { "80/tcp": [{ "HostPort": "" }] }.
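A minimal sketch with curl, assuming the daemon listens on the default Unix socket (API version 1.21 matches Docker 1.9.1; adjust the endpoint if your daemon is exposed over TCP, and substitute the returned container Id for <id>):
curl --unix-socket /var/run/docker.sock -X POST \
  -H "Content-Type: application/json" \
  -d '{"Image": "image", "HostConfig": {"PortBindings": {"80/tcp": [{"HostPort": ""}]}}}' \
  http://localhost/v1.21/containers/create
curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.21/containers/<id>/start
docker port <id> 80   # prints the random host port Docker picked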
One thing I am trying to accomplish is that I have a single port exposed from the container (8001). I want to map this container port to a host port, but I want the host port to be randomly selected from a given port range (8081-8089). Below is the syntax:
ports:
  - "8081-8089:8001"
If I use docker-compose v1.29.2, it works as expected (a single random port is selected from the given range). But when I try docker-compose v2, it does not map a single port; instead the container is published on all ports of the given range (8081-8089).
I haven't been able to find a way to do this in the Docker documentation. Is there a flag that enables this functionality in docker-compose v2, or is it simply not supported?
There's no way to do this in Linux in general. You can ask the OS to use a specific port or you can ask the OS to pick a port for you, but you can't constrain the automatically-chosen port.
In a Compose context you can omit the host port to let the OS choose the port:
ports:
  - "8001" # container port only
and you will need to use docker-compose port to find out what the port number actually is. It will be any free port number; there is no way to limit it to a specific range.
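For example (web is a placeholder service name):
docker-compose port web 8001
which prints the mapping Docker chose, e.g. 0.0.0.0:32768.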
One of the machines where we need to deploy Docker containers has its eth0 IP set within the Docker IP range (172.17.0.1/16).
The problem is that when we try to access this server through NAT from outside (SSH etc.), everything "hangs". I guess the packets get misdirected by the Docker iptables rules.
What is the recommendation in this case if we cannot change the eth0 IP?
Docker should avoid subnet collisions if it sees all of the in-use subnets when it creates its networks. However, if you change networks (e.g. on a laptop), then you want to set up address pools for Docker to use. Steps for this are in my slides here:
https://sudo-bmitch.github.io/presentations/dc2018eu/tips-and-tricks-of-the-captains.html#19
The important detail is to set up an /etc/docker/daemon.json file containing:
{
  "bip": "10.15.0.0/24",
  "default-address-pools": [
    {"base": "10.20.0.0/16", "size": 24},
    {"base": "10.40.0.0/16", "size": 24}
  ]
}
Adjust the IP ranges as needed. Stop all containers in the bad networks, delete the containers, delete any user-created networks, restart the Docker engine, and then recreate the user-created networks and containers. Often those last steps just involve removing and redeploying a compose project or swarm stack, roughly as sketched below.
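For a compose project, the sequence is roughly (a sketch; adjust to your deployment):
docker-compose down            # remove the containers and the project's networks
sudo systemctl restart docker  # reload the engine with the new daemon.json
docker-compose up -d           # recreate networks from the new address pools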
Note: it wasn't clear whether you were attempting to connect to your host or to a container. You should not be connecting directly to a container IP externally (with very few exceptions). Instead, you publish the desired ports that you need to access externally, and you connect to the host IP on the published port to reach the container. E.g.
docker run -d -p 8080:80 nginx
will start nginx with its normal port 80 inside the container, which you normally cannot reach externally. Publishing host port 8080 (it could just as easily be 80, matching the container port) maps connections to the container's port 80.
One important prerequisite is that the application inside the container must listen on all interfaces, not just 127.0.0.1, to be reachable from outside that container's network namespace.
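For instance, this sketch publishes a stock Python http.server and explicitly binds all interfaces inside the container:
docker run -d -p 8080:8000 python:3 python -m http.server --bind 0.0.0.0 8000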
I am using a docker-compose file, and I need to scale specific containers using the command
docker-compose scale taskcontainer=3
Unfortunately this is not possible as-is, because I need to publish a specific port from all the internal containers, and I need a port range on the host to make that happen. For example, consider the following snippet:
taskcontainer:
  image: custom-container
  ports:
    - "9251:9249"
When I scale to 3 containers, I need port 9249 from the internal containers published to 3 ports on the host, e.g. 9251-9253 (to avoid a port clash on the host), and I need that functionality to be dynamic because I do not know in advance how many containers will be used when scaling.
Is there a way to achieve that functionality?
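For reference, a minimal sketch of the nearest built-in behavior, following the container-port-only mapping shown in the answer above (Docker then assigns a free host port to each replica, though not from a chosen range):
taskcontainer:
  image: custom-container
  ports:
    - "9249" # container port only; Docker picks a free host port per replica
After docker-compose scale taskcontainer=3, each replica's assigned port can be read with docker-compose port --index=2 taskcontainer 9249 (the --index flag selects the replica).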
I wrote a simple peer-to-peer system in which, when starting a node,
the node looks for a free port and then makes its service accessible on that port, and
it registers its URL, including the port, with a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that Docker is useful for, so I thought of making a container for these peers (my previous experience with Docker has only been writing a hello-world container).
Ideally I would map (publish) my exposed port to a host port from within the container, using the same code that I am running now. I can imagine that is simply not possible, and I could get around it by starting the image via a script that checks for the availability of ports and then runs the container on an appropriate free host port. If the first option is possible, however, that would be even better. To be explicit, I do something like the following in Python:
port = 5001
while not port_is_free(port):  # port_is_free is my own availability check
    port += 1
The second part really has to be taken care of from within the container. Assume it has been started with the command docker run -p 5005:80 p2p-node; I then need to find out, from within the container, the published port (5005) that the exposed port (80) is mapped to.
Searching this site and the internet, it looks like more people are interested in doing the same, but I couldn't find a solution, nor a confirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea:
Start container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from inside the container:
echo $HOST_PORT
There is also docker-py, a Python library for the Docker API.
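A sketch using docker-py, assuming the container can reach the Docker socket (e.g. it was started with -v /var/run/docker.sock:/var/run/docker.sock) and knows its own container name:
import docker

client = docker.from_env()
container = client.containers.get("p2p-node")  # hypothetical container name
# e.g. {'80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '5005'}]}
print(container.ports)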
It is not clear why you want the host port. In docker, containers can communicate with each other without having to expose ports on host machines.
As long as the peer apps are containerized, you don't need the exposed port. The containers can be connected via a Docker network, and the internal port can be used for communication between the containers.
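A minimal sketch (the network name is a placeholder):
docker network create p2p-net
docker run -d --network p2p-net --name node1 p2p-node
docker run -d --network p2p-net --name node2 p2p-node
# node2 can now reach node1 at node1:80, with no ports published on the host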
I haven't fully understood how port forwarding works with Docker.
My scenario looks like this:
I have a Dockerfile that exposes a port (in my case it's 8000)
I have built an image from this Dockerfile (using "docker build -t test_docker .")
Then I created several containers using "docker run -p 808X:8000 -d test_docker"
The host responds when I call its IP with the different ports I assigned on "docker run"
What exactly does this EXPOSE command do in the Dockerfile? I understood that the Docker daemon itself handles the network connections, and that when calling "docker run" I also tell it which image should be used...
Thanks
OK, I think I understood the reason.
If you are listening on a port within your application, you need to expose exactly that port. E.g.
HttpServer.bind('127.0.0.1', 8000).then((server) { ... });
will need "EXPOSE 8000". This way you can listen on several ports in your app, but then you need to expose them all. (Note that, inside a container, the server should bind 0.0.0.0 rather than 127.0.0.1, or it won't be reachable from outside the container.)
Am I right?
Exposing ports in your Dockerfile allows you to spin up a container using the -P flag on the docker run command.
A simple use case might be that you have nginx sitting on port 80 on a load-balancing server, load-balancing traffic across a few Docker containers on a CoreOS Docker host. Since each of your apps uses the same port, 8000, you wouldn't be able to reach them individually. So Docker maps each container to a random high, non-conflicting port on the host. Then when you hit 49805 it goes to container 1's port 8000, and when you hit 49807 it goes to container 2's port 8000.
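A sketch of that setup with the image from the question:
docker run -d -P --name app1 test_docker
docker run -d -P --name app2 test_docker
docker port app1 8000   # e.g. 0.0.0.0:49805
docker port app2 8000   # e.g. 0.0.0.0:49807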