Consider:
docker run -p 5000:5000 -v /host/:/host appimage
it forwards host port 5000 to container port 5000.
The same works with multiple ports:
docker run -p 5000:5000 -p 5001:5001 -v /host/:/host appimage
What I want to know is:
docker run -p allports:allports
Is there any option that forwards all ports of a container? In my case I am running a Flask app, and for testing purposes I want to run multiple Flask instances, each on a different port. Automatic multi-port forwarding would help with this.
You can expose a range of ports using the -p option, for example:
docker run -p 2000-5000:2000-5000 -v /host/:/host appimage
See the docker run reference documentation for more details.
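For the Flask scenario in the question, a hedged sketch of how this is typically used (assuming each Flask instance inside the container binds one port in the published range):
docker run -d -p 5000-5002:5000-5002 -v /host/:/host appimage
curl http://localhost:5000   # reaches the Flask instance listening on 5000
curl http://localhost:5001   # reaches the instance listening on 5001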
You might get a working set-up by using docker run --net host ..., in which case the host's network is directly exposed to the container and all port bindings are "public". I haven't tested this with multiple containers simultaneously, but it might work just fine.
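A minimal sketch of that approach (assuming the Flask instances inside the image bind their own ports, e.g. 5000 and 5001; note that -p is ignored together with host networking):
docker run --net host appimage
# every port the app binds inside the container is directly reachable on the host:
curl http://localhost:5000
curl http://localhost:5001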
Related
I have 3 Docker applications (containers), in which one container communicates with the other two. If I run the containers using the commands below, container 3 is able to access containers 1 and 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network; if I remove the --network="host" option, then I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host and container ports the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the above commands I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can access container 2 because there I am exposing 8080 as both the host and container port, but I can't expose host port 8080 again for container 3.
How can I resolve this issue?
Ultimately, my goal is for this application to be accessible in a browser without using the host network; it should use the bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands.
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and with the internet.
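To verify the name-based discovery, one hedged check (assuming you also pass --name app1 ... --name app3 to the run commands above, that container 1's app listens on 8001 inside the container as its -p mapping suggests, and that curl is available in img3):
docker exec app3 curl http://app1:8001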
In this case, when you use --network=host, you are telling Docker not to isolate the network but to use the host's network, so all the containers are on the same network and can communicate with each other without any issues. However, when you remove --network=host, Docker isolates the containers' networks, thereby preventing container 3 from communicating with container 1.
You will need some sort of orchestration service, like Docker Compose or Docker Swarm.
I created a Docker image named sample with nginx installed, listening on port 80 and serving a simple index.html.
Then I used the commands below to run two containers:
docker run -it -p 80:80 --name sample1 sample
docker run -it -p 81:80 --name sample2 sample
From the main OS I can successfully see index.html from both containers, but when I go inside container sample1 I cannot see the index.html of sample2, and it does not work the other way around either.
The -p option is short for --publish. When you use -p 80:80 you are binding the container's port 80 to the host's port 80.
So sample1 and sample2 are merely binding their respective port 80 to the host's ports 80 and 81; there is no direct linkage between them.
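For completeness, the host-side bindings created by the two -p flags can be checked from the main OS; each should return the respective container's page:
curl http://localhost:80   # served by sample1
curl http://localhost:81   # served by sample2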
To make the containers visible to each other, you first have to use the --link option and then use --expose to allow the containers to see each other through the exposed port.
Example:
docker run -it -p 80:80 --name sample1 sample
docker run -it -p 81:80 --link=sample1 --expose="80" --name sample2 sample
Essentially, --link allows the container to see the container named in the link value, and
--expose means the linked containers are able to communicate through that exposed port.
Note: linking the containers is not sufficient; you also need to expose ports for them to communicate.
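As a hedged check of the linked set-up above (assuming curl is available inside the sample image), sample2 should be able to reach sample1 by its link alias:
docker exec -it sample2 curl http://sample1:80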
You might want to refer to the docker-compose documentation for more details:
https://docs.docker.com/compose/compose-file/
While the documentation is for docker-compose, the options are pretty much the same as for the raw docker binary, and everything is nicely put on one page. That's why I prefer looking there.
In Docker you can bind a container's port to a port on the Docker machine (the machine where Docker is installed) using
docker run -it -p 80:80 image
Then you can use the Docker machine's IP and port inside another container.
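A short hedged sketch of that, assuming a docker-machine set-up named default and curl available in the other container:
docker-machine ip default            # prints e.g. 192.168.99.100
# then, from inside the other container:
curl http://192.168.99.100:80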
OK, I am pretty new to the Docker world, so this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which creates some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OS X).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to the host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command; this maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
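A quick hedged check from the host that both mapped ports are reachable (assuming nc is installed):
nc -vz localhost 5672    # AMQP listener
nc -vz localhost 15672   # management UI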
If you're running this container on your local OS X system, then you should find your default docker-machine IP address by running:
docker-machine ip default
Then you can change your Python script to point to that address and the mapped port, <your_docker_machine_ip>:5672.
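For example, reusing the script invocation from the question (the 192.168.99.100 below is only the usual docker-machine default; substitute whatever IP the command above printed):
python ~/Documents/myscripts/migrate_data.py amqp://192.168.99.100:5672/ ~/Documents/queue/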
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a host name for the rabbitmq container. To do this, you'd need to create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name sets the container name, and --hostname obviously sets the host name.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
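One hedged way to make that name resolvable from the host as well (on a Linux host, where bridge-network IPs are routable) is a hosts entry pointing at the fixed IP chosen above:
echo "172.18.0.11 rmq1" | sudo tee -a /etc/hosts
# the management UI is then at http://rmq1:15672/, or at http://localhost:15673/ via the -p mapping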
You said that you have never used docker-machine before, so I assume you are using Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link with rmq:rabbit, because you named your container rmq but use rabbit in the script.
Alternatively, execute your Python script on your host machine, using localhost:5672:
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
I have found a similar thread but failed to get it to work. So, the use case is:
I start a container on my Linux host:
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Once in that container, I want to start another one with the GAE SDK devserver running.
In addition, I need to access the running app from the host system's browser.
When I start a container inside that container, as in
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I can start the app and curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host points at port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the image jpetazzo/dind. Here is the test I did, which worked (as an example):
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then, inside the container, I pulled the nginx image and ran it with
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).
Docker's shipyard project has a prebuilt container to simplify running its components. It's simply a run script that launches and links several other containers.
However, I find their usage of the port-publish parameter (-p) confusing in two of the run commands:
sudo docker run -i -t -d -p 80 --link shipyard_redis:redis --name shipyard_router shipyard/router
sudo docker run -i -t -d -p 80:80 --link shipyard_redis:redis --link shipyard_router:app_router --name shipyard_lb shipyard/lb
The first command passes a single value to "-p", which doesn't seem legal, since every documented usage is supposed to have at least two colon-separated parts:
-p, --publish=[] Publish a container's port to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort
(use 'docker port' to see the actual mapping)
The second command is confusing because it seems like this would cause a port collision with the container started in the first command.
Can someone clarify?
When you specify -p with only a single port number, Docker automatically assigns a random host port (historically starting from port 49153) and maps it to the single port exposed in the container, i.e. 80.
What this means is: say you run Apache 2 on port 80 inside your container. You will then have to point your browser to localhost:<assigned port> (use docker port to see the actual mapping) to access your Apache web server.
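A hedged sketch of checking that assignment (nginx here is just an example image serving on port 80; the printed port will vary):
docker run -d -p 80 --name web nginx
docker port web 80        # prints the host side of the mapping, e.g. 0.0.0.0:49153
curl http://localhost:49153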