Could I expose different Docker containers' ports on the same HTTP port on the host?
Example
docker container run --publish 80:80 -d -it --name wp wordpress
docker container run --publish 90:80 -d -it --name ci jenkins
docker container run --publish 100:80 -d -it --name gitlab gitlab/gitlab-ce
With those commands you are not using the same port on the host. The syntax for -p is "hostPort:containerPort", so you are mapping container port 80 from each of them to your host at ports 80, 90 and 100 respectively. So there is no conflict at all.
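If you want to double-check which host port a container ended up on, docker port prints the mapping. A quick sketch using the wp container from the example above (the output line is illustrative and may vary by Docker version):
docker port wp
# 80/tcp -> 0.0.0.0:80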
Anyway, to answer your question about a possible conflict: for that to happen, your commands would have to be:
docker container run --publish 80:80 -d -it --name wp wordpress
docker container run --publish 80:80 -d -it --name ci jenkins
docker container run --publish 80:80 -d -it --name gitlab gitlab/gitlab-ce
You can run those commands, but you'll get an error saying Bind for 0.0.0.0:80 failed: port is already allocated.
Anyway, in the hypothetical case of Docker allowing that without an error, the first one you map is the one that would work. The docker run command creates iptables rules to forward the host port to the container, and iptables rules are evaluated in "first match wins" order. So you would have three iptables rules in this case, but only the first one would take effect.
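If you want to see the forwarding rules Docker sets up, you can list its NAT chain. This is just a sketch; the exact contents depend on your Docker version and the ports you have published:
sudo iptables -t nat -L DOCKER -n --line-numbers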
Related
I have created three networks A, B and C using Docker on an Ubuntu VM. Each network contains three containers: 2 busybox and 1 nginx. Each nginx container is in a different network, and I have port-forwarded them on 80, 81 and 82 respectively using the commands below:
sudo docker run -itd --rm -p 82:82 --network C --name web3 nginx
sudo docker run -itd --rm -p 81:81 --network B --name web2 nginx
sudo docker run -itd --rm -p 80:80 --network A --name web1 nginx
But when I try to access a container from my host machine using the IP address of my VM along with the port, e.g. https://192.168.18.240:82, it does not give access to the container in that network. Using just the IP address with port 80 I am able to reach nginx, but not on ports 82 and 81. I have cleared the cache and the browsing history, but all in vain.
All of the docker nginx containers listen on port 80. You are mapping B and C to the wrong container port.
sudo docker run -itd --rm -p 82:80 --network C --name web3 nginx
sudo docker run -itd --rm -p 81:80 --network B --name web2 nginx
sudo docker run -itd --rm -p 80:80 --network A --name web1 nginx
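As a quick check from the host, you can curl each published port. This is a sketch using the VM address from the question, and it uses plain http since the nginx image serves HTTP by default:
curl http://192.168.18.240:80
curl http://192.168.18.240:81
curl http://192.168.18.240:82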
I installed a Docker local registry as below
docker pull registry
and then
docker run -d -p 5001:5001 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
because port 5000 is used by another application.
But I can't reach
http://localhost:5001/v2/_catalog
The first part of the -p value is the host port and the second part is the port within the container.
This command runs the registry on host port 5001:
docker run -d -p 5001:5000 --name hub.local registry
If you want to change the port the registry listens on within the container, you must use this command instead:
docker run -d -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 -p 5001:5001 --name hub.local registry
Keep the internal port the same and change only your host port:
docker run -d -p 5001:5000 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
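To verify the registry is reachable on the new host port, you can hit the catalog endpoint from the question; for an empty registry the response should look roughly like the comment below:
curl http://localhost:5001/v2/_catalog
# {"repositories":[]}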
I want to run Docker inside another Docker container. My main container is running in a VirtualBox VM with Ubuntu 18.04 on my Windows 10 machine. When I try to run docker inside it, I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How can I resolve this issue?
Yes, you can do this. Check the dind (Docker in Docker) image on Docker Hub for how to achieve it: https://hub.docker.com/_/docker
Your error indicates that either dockerd is not running in the top-level container, or you didn't mount docker.sock into the dependent container so it can communicate with the dockerd running in your top-level container.
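A minimal sketch of the socket-mount approach, using the official docker image from the link above (the image tag and the interactive shell here are assumptions, not part of your setup):
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest sh
# inside this shell, `docker ps` talks to the host's dockerd through the mounted socket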
I am running electric-flow in a docker container in my Ubuntu VirtualBox VM using this docker command: docker run --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -i -t ecdocker/eflow-ce. Inside this docker container, I want to install and run Docker so that my CI/CD pipeline in electric-flow can access and use docker commands.
From your description, ecdocker/eflow-ce is your CI/CD solution container, and you just want to use the docker command inside this container, so you do not need the dind solution. You can simply access the host's docker daemon from within the container.
Something like the following:
docker run --privileged --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -i -t ecdocker/eflow-ce
Compared to your old command:
Add --privileged.
Add -v $(which docker):/usr/bin/docker, so you can use the docker client inside the container.
Add -v /var/run/docker.sock:/var/run/docker.sock, so the client inside the container can talk to the host's docker daemon (see the quick check below).
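A quick check once the container is up (a sketch; efserver is the container name from the command above):
docker exec -it efserver docker ps
# should list the containers running on the host, because the client inside efserver uses the mounted socket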
I am building a docker image and running it with the following command:
docker run --name myjenkins -u root -d -p 8080:8080 -p 50000:50000 -v jenkins-volume:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock --net=host vm31
The docker container is up and running. When I do docker ps, the output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22a92a3b7875 vm31 "/sbin/tini -- /usr/…" 4 seconds ago Up 3 seconds
Why does it not show the ports this container is published on, and why can I not reach Jenkins on localhost:8080?
You are using two conflicting things together:
--net=host
-p 8080:8080 -p 50000:50000
The first tells the container to use the network stack of the host; the second is the way to bind container ports to host ports. I believe you only want to use the second one.
Try again after removing the --net=host option.
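A sketch of the corrected command, which is simply the one from the question with --net=host removed:
docker run --name myjenkins -u root -d -p 8080:8080 -p 50000:50000 -v jenkins-volume:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock vm31
docker ps should then show 8080 and 50000 in the PORTS column, and Jenkins should be reachable on localhost:8080.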
I need to connect my db container to my server container. I just read about the legacy parameter --link, which works perfectly:
$> docker run -d -P --name rethinkdb1 rethinkdb
$> docker run -d --link rethinkdb:db my-server
But if this parameter will eventually be dropped, how would I do something like the above?
The docs say to use the docker network command instead (available since Docker 1.9.0, released 2015-11-03).
Instead of
$> docker run -d -P --name rethinkdb rethinkdb
$> docker run -d --link rethinkdb:rethinkdb my-server
you will now use
$> docker network create my-network
$> docker run -d -P --name rethinkdb1 --net=my-network rethinkdb
$> docker run -d --net=my-network my-server
Note that in the new form, container names are used, while before you were able to define an alias.
When two containers are part of the same user-defined network, Docker's embedded DNS resolves their container names, so you can use the container names instead of their IP addresses.
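A quick way to confirm that name resolution works is a throwaway container on the same network (a sketch; the busybox image is an assumption here):
$> docker run --rm --net=my-network busybox ping -c 1 rethinkdb1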