I have a use case where there will be multiple Docker containers, each running a web server. I can't bind port 80 for all containers. I am looking for a solution where I can bind the containers' dynamic ports to port 80 on the host. Is this possible with Traefik? If so, how?
I have to implement it for GitLab's review apps. If anyone has done it before, please guide me.
If I understood your question, you can do this at the most primitive stage, when starting the container. The command below publishes the container's port 80 on a dynamic (random) port of the host:
docker run --name <container-name> -d -p 80 <image-name>
If you are talking about discovering those dynamic ports, you need a service discovery tool, which in turn talks to the Docker API and extracts that information for you.
N.B.: I don't have much experience with Traefik, but the above are the usual ways to achieve what is asked.
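Since the question asks specifically about Traefik: its Docker provider can route on hostnames instead of ports, so each review app keeps listening on its own internal port while Traefik alone binds host port 80. Below is a minimal sketch assuming Traefik v2 label syntax; the image name, hostname, and internal port 8080 are placeholders, not values from the question:

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"                # the only port bound on the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  review-app:                  # one such service per review app
    image: registry.example.com/review-app:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.review.rule=Host(`review-123.example.com`)
      - traefik.http.services.review.loadbalancer.server.port=8080
```

Traefik watches the Docker API, so containers started or stopped by the review-app pipeline are picked up automatically without restarting the proxy.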
Related
I want to run 5 Docker containers with the same app; it uses port 50505, and I want to expose this port to the internet.
My server runs Ubuntu 18.04
Assigned IPs
204.12.240.210-214
So it looks like I got 4 IP addresses.
I can ssh to any of them and it works.
Now, I am still a bit fresh with Docker and learning it.
Could anyone give me the commands to create a network with these IPs and then start instances bound to them?
I believe it's possible.
Normally I start the app like this:
docker run -d -P --net=host -v /mnt/chaindata:/go-matrix/chaindata --name matrix --restart always disarmm/matrix
But when you start a 2nd instance it will crash, because the ports are already used by the first one.
So I could fix that with the IPs.
This doesn't really seem to be a Docker question. If I were you, I would set up Nginx as a proxy/load balancer and pass traffic to the backend services, whether they run in Docker or elsewhere.
You can also run Nginx itself in Docker.
The next step is deciding how to assign the IPs and which port Nginx listens on.
To be honest, I don't know how to handle it with --net=host.
But one possible solution is: don't use the host network; use the default bridge network (docker0) instead. You need to know which ports your application uses and publish them yourself. When publishing the ports, you can specify the IP.
Something like the following:
docker run -d -p 204.12.240.210:80:80 -p 204.12.240.210:8080:8080 disarmm/matrix
docker run -d -p 204.12.240.211:80:80 -p 204.12.240.211:8080:8080 disarmm/matrix
Then you can see that the host has ports 80/8080 open twice on the same machine, bound to different IPs.
As mentioned above, the limitation is: you have to know exactly which ports your application uses.
See this for format.
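To make this concrete for the app in the question (the image name and IPs are taken from the question; this assumes the app only needs port 50505 and drops --net=host, as described above), you can start two instances and confirm the bindings with docker port:

```shell
# Start two instances of the same image, each published on its own host IP
docker run -d --name matrix-a -p 204.12.240.210:50505:50505 disarmm/matrix
docker run -d --name matrix-b -p 204.12.240.211:50505:50505 disarmm/matrix

# Confirm each binding; docker port prints e.g. "204.12.240.210:50505"
docker port matrix-a 50505
docker port matrix-b 50505
```

Repeat with the remaining IPs for the other instances; each container sees the same internal port 50505, so the app itself needs no changes.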
I have a process running on the host that needs to communicate with a Docker container, and I want this done via some parameter that can't change (like the container name or hostname), unlike an IP address (I'd prefer not to give the container a static IP or install extra containers for this).
I'm aware that containers can resolve each other by name on a user-defined network, and that's what I want, but not between containers: between a process running on the host and a container.
I couldn't find a solution. Can it be done?
Edit:
I'm not allowed to use the host network or to open additional ports on the host, for security reasons.
You're welcome to choose whichever approach fits your needs better.
Option 1. Use the host's networking. In this case Docker does not create a separate network for the container, and you connect to the container's services as if they were running directly on your host:
docker run --network=host <image_name>
The drawback of this approach is low isolation, and thus weaker security. You don't need to expose any ports here: if the service listens on 8080, just open localhost:8080 and enjoy.
The second approach is more correct: you expose the container's internal ports and map them onto ports on the host.
docker run -p 8080:80 <image_name>
This maps port 80 of the container to port 8080 on the host. As in the previous example, you still connect using localhost, e.g. localhost:8080.
If I start a container using -p 80, for example, Docker will assign a random host port.
Every time Docker assigns a port, it also adds an iptables rule opening this port to the world. Is it possible to prevent this behaviour?
Note: I am using an nginx load balancer to fetch the content; I really don't need my application to be reachable on two different ports.
You can specify both interface and port as follows:
-p ip:hostPort:containerPort
or
-p ip::containerPort
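For the nginx load-balancer setup described in the question, binding the published port to the loopback interface keeps it reachable for a local nginx while hiding it from the outside world (the port numbers and image name here are illustrative):

```shell
# Publish container port 80 on the loopback interface only:
# nginx on the host can proxy to 127.0.0.1:8080, but the port
# is not reachable from other machines.
docker run -d -p 127.0.0.1:8080:80 <image-name>
```

Because the binding is on 127.0.0.1, the iptables rule Docker adds no longer exposes the port publicly, without having to disable Docker's iptables management entirely.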
Another solution is to run nginx inside a container and use container linking, without exposing the other services at all.
The iptables feature is a startup parameter of the Docker daemon. Look for the Docker daemon config file in your installation and add --iptables=false, and Docker will never touch your iptables rules.
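On current installs the daemon option usually goes in /etc/docker/daemon.json rather than on the command line (path assumed for Linux package installs; restart the daemon afterwards). A minimal sketch:

```json
{
  "iptables": false
}
```

Note that with iptables management disabled, Docker no longer sets up NAT for published ports either, so you become responsible for all firewall and forwarding rules yourself.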
I understand that to expose ports in a docker container, you can use the -p flag (e.g. -p 1-100:1-100). But is there a nice way to expose a large percentage of possible ports from the container to the host machine? For instance if I am running a router of sorts in a container that lives in a VM, and I would like to expose all ports in the container from 32768 upwards to 65535, is there a nice way to do this? As it stands I've tried using the -p flag and it complains about memory allocation errors.
Nvm, I figured out my misunderstanding. -P is what I want; I want to expose the ports, not explicitly map them.
tl;dr
docker run --net=host ...
Docker offers different networking modes for containers. By default the networking mode is bridge which implies the need to expose ports.
If you run a container with networking mode host then you won't need to expose/forward ports as both the docker host and the container will share the very same network interface.
In the container, localhost will refer to the docker host. Any port opened in the container is in fact opened on the docker host network interface.
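The contrast between the two modes can be sketched as follows (the nginx image is just an example, not from the question):

```shell
# Bridge mode (default): the container has its own network namespace,
# so container port 80 must be published to be reachable from the host.
docker run -d -p 8080:80 nginx
curl localhost:8080

# Host mode: no separate namespace; the container's nginx listens
# directly on the host's port 80, nothing to publish.
docker run -d --net=host nginx
curl localhost:80
```

Note that --net=host means only one such container can claim a given port, which is exactly the limitation earlier questions in this thread ran into.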
Let's say I start a container, publishing its port 80 on a random host port, like this: docker run -d -p 80 --name my_container username/container.
Is there any way to tell my_container on which host port its port 80 was published?
Edit: My situation:
I'm running Apache to serve some static HTML files, plus a Go API server, in this container, and I'm exposing both services. The static pages request data from the API server via JavaScript in the user's browser, but to do that, the clients need to know on which port the API server is reachable. Is this the appropriate way to do this?
I don't think there is an easy way to tell, from inside the container, the host port on which its port 80 was published, but I also believe there is a good reason for that: making the container depend on this would make it dependent on its containing environment, which goes against Docker's logic.
If you really need this, you could pass the host port as an environment variable to the container using the -e flag (assuming the host port is fixed), or rely on a hack such as mounting the Docker socket in the container (-v /var/run/docker.sock:/var/run/docker.sock) and having it "inspect itself" (which is similar to what progrium/ambassadord does to implement its omni mode).
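The socket-mounting hack can look like this. Assumptions not in the original: the container was started with -p 80 and the socket mounted, its hostname is still the default short container ID (no -h flag), and jq is installed in the image:

```shell
# From inside the container: ask the Docker API about ourselves and
# read out the host port that was mapped to container port 80.
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$(hostname)/json" |
  jq -r '.NetworkSettings.Ports["80/tcp"][0].HostPort'
```

Keep in mind that mounting the Docker socket effectively gives the container root-equivalent control of the host, so this is only acceptable for trusted images.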
Maybe you should clarify why you need this information in the first place and perhaps there's a simpler solution that can help you achieve that.
You can run docker ps, which will show the ports; for example:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
containerid ubuntu:14.04 /bin/bash 14 seconds ago Up 13 seconds 0.0.0.0:49153->80/tcp my_container
In this instance it is 49153.
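If a script needs just the number, the PORTS column can be parsed; a small sketch using sed, run here against the sample value shown above:

```shell
# Extract the host port that maps to container port 80/tcp from a
# docker ps PORTS value such as "0.0.0.0:49153->80/tcp".
ports='0.0.0.0:49153->80/tcp'
host_port=$(printf '%s\n' "$ports" | sed -n 's/.*:\([0-9]*\)->80\/tcp.*/\1/p')
echo "$host_port"   # 49153
```

In practice you would feed the variable from `docker ps --format '{{.Ports}}' --filter name=my_container` instead of a literal string.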
Also docker inspect will tell you lots about your container, including the port mappings
$ docker inspect my_container | grep HostPort
"HostPort": "49153"
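Two more targeted variants of the same lookup (these avoid grepping through the whole inspect output; the container name matches the question's example):

```shell
# Ask Docker directly for the mapping of container port 80
# (prints something like 0.0.0.0:49153):
docker port my_container 80

# Or pull just the host port out of docker inspect with a Go template:
docker inspect \
  --format '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' \
  my_container
```

The --format variant is handy in scripts, since it prints only the bare port number with no surrounding JSON.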