I need to run a Docker container (hosting nginx) such that the container gets a static IP address on the host's network. Example:
Suppose the host has IP 172.18.0.2/16; I would then like to give 172.18.0.3/16 to the Docker container running on that host. I'd like the other physical machines on the host's network to be able to connect to the container at 172.18.0.3/16.
I have tried the solution described at https://qiita.com/kojiwell/items/f16757c1f0cc86ff225b (without Vagrant), but it didn't help. I'm not sure about the --subnet option that needs to be supplied to the docker network create command.
As suggested in this post, I was trying to do:
docker network create \
--driver bridge \
--subnet=<WHAT TO SUPPLY HERE?> \
--gateway=<WHAT TO SUPPLY HERE?> \
--opt "com.docker.network.bridge.name"="docker1" \
shared_nw
# Add my host NIC to the bridge
brctl addif docker1 eth1
Then start the container as:
docker run --name myApp --net shared_nw --ip 172.18.0.3 -dt ubuntu
Somehow it did not work. I would appreciate it if someone could point me in the right direction on how to set such a thing up. Grateful!
For your use case, the ipvlan Docker network driver could work.
Using your assumptions about the host IP address and mask, you can create the network like this:
docker network create -d ipvlan --subnet=172.18.0.0/16 \
-o ipvlan_mode=l2 my_network
Then run your docker container within that network and assign an IP address:
docker run --name myApp --net my_network --ip 172.18.0.3 -dt ubuntu
Note that any exposed port of that container will be available at the 172.18.0.3 IP address, but other services on your host will not be reachable at that IP address.
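To verify, make the request from another physical machine on the LAN (a quick check, assuming nginx serves on the container's port 80):
curl http://172.18.0.3/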
You can find more info on ipvlan in the official Docker documentation.
The docker run -p option optionally accepts a bind-address part, which specifies a specific host IP address that will accept inbound connections. If your host is already configured with the alternate IP address, you can just run
docker run -p 172.18.0.3:80:8080 ...
and http://172.18.0.3/ (on the default HTTP port 80) will forward to port 8080 in the container.
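If the host is not yet configured with that alternate address, it can be added as a secondary IP on the host NIC first; a minimal sketch, assuming the NIC is eth1 as in the question:
# add 172.18.0.3 as a secondary address on the host NIC (assumption: eth1)
sudo ip addr add 172.18.0.3/16 dev eth1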
Docker has a separate internal IP address space for containers, that you can almost totally ignore. You almost never need the docker network create --subnet option and you really never need the docker run --ip option. If you ran ifconfig inside this container you'd see a totally different IP address, and that would be fine; the container doesn't know what host ports or IP addresses (if any) it's associated with.
I'm trying to understand the "macvlan" network driver in Docker. I create a new network:
docker network create -d macvlan \
--subnet=192.168.2.0/24 \
--gateway=192.168.2.1 \
-o parent=eno1 \
pub_net
And start a new container on the new network:
docker run --rm -d --net=pub_net --ip=192.168.2.74 --name=whoami -t jwilder/whoami
When I try to access the service in the container or ping it, I get:
curl: (7) Failed to connect to 192.168.2.74 port 8000: no route to host
Tested with Ubuntu 16.04, Ubuntu 18.04 & CentOS 7.
Neither the Docker host itself nor other clients on the network can reach the container.
I followed the example from the Docker site: https://docs.docker.com/network/network-tutorial-macvlan/#bridge-example
What am I missing?
I read here (Bind address in Docker macvlan) that I should execute these commands (no clue what they do):
sudo ip link add pub_net link eno1 type macvlan mode bridge
sudo ip addr add 192.168.2.22/24 dev pub_net
But this does nothing on my machine(s).
I believe it is by design that a host cannot reach its own containers through a macvlan network. I leave it to others to explain why exactly this is so, but to verify that this is where your problem lies, you can try to ping your container at 192.168.2.74 from another host on the network, or even from another container or VM on the same host. If you can reach the container from other machines but not from the host, everything is working as it should.
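For example, using the whoami container from the question (which listens on port 8000), run this from a different machine on the 192.168.2.0/24 network:
ping -c1 192.168.2.74
curl http://192.168.2.74:8000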
According to this blog post, you can nevertheless allow host-to-container communication by creating a macvlan interface on the host, attached to the same parent interface, so that the host can reach the macvlan network the container is in.
I have not tried this myself yet and I'm not sure about the exact consequences, so I quote the instructions from the blog post here so that others can add to it where necessary:
Create a macvlan interface on a host sub-interface:
docker network create -d macvlan \
--subnet=192.168.0.0/16 \
--ip-range=192.168.2.0/24 \
-o macvlan_mode=bridge \
-o parent=eth2.70 macvlan70
Create a container on that macvlan interface:
docker run -d --net=macvlan70 --name nginx nginx
Find the IP address of the container:
docker inspect nginx | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "192.168.2.1",
At this point, we cannot ping the container IP "192.168.2.1" from the host machine.
Now, let's create a macvlan interface on the host with address "192.168.2.10" in the same network.
sudo ip link add mymacvlan70 link eth2.70 type macvlan mode bridge
sudo ip addr add 192.168.2.10/24 dev mymacvlan70
sudo ifconfig mymacvlan70 up
Now, we should be able to ping the container IP as well as access the "nginx" container from the host machine.
$ ping -c1 192.168.2.1
PING 192.168.2.1 (192.168.2.1): 56 data bytes
64 bytes from 192.168.2.1: seq=0 ttl=64 time=0.112 ms
--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.112/0.112/0.112 ms
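Translated to the setup in the question (parent interface eno1, subnet 192.168.2.0/24), a hedged equivalent would be (the interface name mymacvlan0 is arbitrary):
sudo ip link add mymacvlan0 link eno1 type macvlan mode bridge
sudo ip addr add 192.168.2.22/24 dev mymacvlan0
# bring the interface up -- this step is missing from the commands quoted in the question
sudo ip link set mymacvlan0 up
ping -c1 192.168.2.74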
I have JupyterHub running in a container with network_mode: host due to a requirement.
However, after setting network_mode to host in my docker-compose file, I can't access JupyterHub from an external host at host-ip:8000.
My understanding from this is:
If you use the host network mode for a container, that container’s
network stack is not isolated from the Docker host (the container
shares the host’s networking namespace), and the container does not
get its own IP-address allocated. For instance, if you run a container
which binds to port 80 and you use host networking, the container’s
application is available on port 80 on the host’s IP address.
Is there anything I am missing?
EDIT:
To simplify, I followed the instructions here:
docker run --rm -d --network host --name my_nginx nginx
I can access the nginx welcome page by doing:
$ curl localhost:80
but if I try to curl from another host, I get:
$ curl 10.230.0.123:80
curl: (7) Failed connect to 10.230.0.123:80; No route to host
This issue can happen when the firewall on your system is active and blocking access to the port. You can allow access to the port as shown below:
# in centos7, by updating iptables rules
iptables -I INPUT 5 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# in ubuntu
sudo ufw allow 80/tcp
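If the host uses firewalld (the CentOS 7 default) rather than raw iptables rules, a hedged equivalent would be:
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --reload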
If I run this command on the host (Ubuntu):
echo "PD.file.processing:1|c" | nc -w 1 -u localhost 8125
It sends the UDP packet fine, and the dogstatsd agent running on port 8125 picks it up and I can see it.
But when I run the following command in the Docker container on the same host, it doesn't hit the host and is not captured by the dogstatsd agent listening on 8125:
echo "MD.file.returned.success:1|c" | nc -w 1 -u 172.17.0.1 8125
Here are the port mappings of the container when I do a docker ps:
8125/udp, 0.0.0.0:20019->8080/tcp, 0.0.0.0:20018->8443/tcp, 0.0.0.0:20017->11400/tcp, 0.0.0.0:20016->11401/tcp, 0.0.0.0:20015->11402/tcp
Here is the EXPOSE line in the Dockerfile:
EXPOSE 8125/udp
Am I doing something wrong?
EXPOSE doesn't publish container ports to the host; it's used more for documenting intent and is considered good practice. You'd usually then need to publish the ports too (e.g. --publish=8125:8125).
However, you want to achieve the inverse -- IIUC -- and make the host's port accessible to the container. One way you can do this is to run the container with --net=host. Your container can then access the host's port 8125.
And, if you did want to access any of the container's ports, you'd be able to do so without using publish.
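A minimal sketch of that approach (the image name my-app is a placeholder, not from the question); with --net=host, localhost inside the container is the host itself:
docker run --net=host my-app
# from inside the container, this now reaches the host's dogstatsd agent:
echo "MD.file.returned.success:1|c" | nc -w 1 -u localhost 8125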
I'm trying to create a Docker container that acts like a full-on virtual machine. I know I can use the EXPOSE instruction inside a Dockerfile to expose a port, and I can use the -p flag with docker run to assign ports, but once a container is actually running, is there a command to open/map additional ports live?
For example, let's say I have a Docker container that is running sshd. Someone else using the container SSHes in and installs httpd. Is there a way to expose port 80 on the container and map it to port 8080 on the host, so that people can visit the web server running in the container, without restarting it?
You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.
If you have a container with something running on its port 8000, you can run
wget http://container_ip:8000
To get the container's IP address, run these two commands:
docker ps
docker inspect container_name | grep IPAddress
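The two steps can also be collapsed into one with an inspect format template (a sketch; container_name and port 8000 are the placeholders from above):
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name)
wget http://$CONTAINER_IP:8000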
Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.
To expose the container's port 8000 on your localhost's port 8001:
iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000
One way you can work this out is to set up another container with the port mapping you want, and compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the docker proxy).
NOTE: this is subverting docker, so should be done with the awareness that it may well create blue smoke.
OR
Another alternative is to look at the (new? post-0.6.6?) -P option, which will use random host ports and then wire those up.
OR
With 0.6.5, you could use the links feature to bring up a new container that talks to the existing one, with some additional relaying to that container's -p flags? (I have not used links yet.)
OR
With Docker 0.11(?) you can use docker run --net host ... to attach your container directly to the host's network interfaces (i.e., net is not namespaced), and thus all ports you open in the container are exposed.
Here's what I would do:
Commit the live container.
Run the container again with the new image, with ports open (I'd recommend mounting a shared volume and opening the ssh port as well):
sudo docker ps
sudo docker commit <containerid> <foo/live>
sudo docker run -i -p 22 -p 8000:80 -v /data:/data -t <foo/live> /bin/bash
While you cannot expose a new port of an existing container, you can start a new container in the same Docker network and get it to forward traffic to the original container.
# docker run \
--rm \
-p $PORT:1234 \
verb/socat \
TCP-LISTEN:1234,fork \
TCP-CONNECT:$TARGET_CONTAINER_IP:$TARGET_CONTAINER_PORT
Worked Example
Launch a web-service that listens on port 80, but do not expose its internal port 80 (oops!):
# docker run -ti mkodockx/docker-pastebin # Forgot to expose PORT 80!
Find its Docker network IP:
# docker inspect 63256f72142a | grep IPAddress
"IPAddress": "172.17.0.2",
Launch verb/socat with port 8080 exposed, and get it to forward TCP traffic to that IP's port 80:
# docker run --rm -p 8080:1234 verb/socat TCP-LISTEN:1234,fork TCP-CONNECT:172.17.0.2:80
You can now access the pastebin at http://localhost:8080/; your requests go to socat:1234, which forwards them to pastebin:80, and the response travels the same path in reverse.
IPtables hacks don't work, at least on Docker 1.4.1.
The best way would be to run another container with the exposed port and relay with socat. This is what I've done to (temporarily) connect to the database with SQLPlus:
docker run -d --name sqlplus --link db:db -p 1521:1521 sqlplus
Dockerfile:
FROM debian:7
RUN apt-get update && \
apt-get -y install socat && \
apt-get clean
USER nobody
CMD socat -dddd TCP-LISTEN:1521,reuseaddr,fork TCP:db:1521
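The answer doesn't show the build step; assuming the Dockerfile above is in the current directory, the sqlplus image referenced by the docker run command would be built with:
docker build -t sqlplus .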
Here's another idea: use SSH to do the port forwarding. This has the benefit of also working on OS X (and probably Windows) when your Docker host is a VM.
docker exec -it <containerid> ssh -R5432:localhost:5432 <user>@<hostip>
To add to the accepted answer's iptables solution, I had to run two more commands on the host to open it to the outside world.
HOST> iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https
Note: I was opening the https port (443); my Docker-internal IP was 172.17.0.2.
Note 2: These rules are temporary and will only last until the container is restarted.
I had to deal with this same issue and was able to solve it without stopping any of my running containers. This solution is up to date as of February 2016, using Docker 1.9.1. Anyway, this answer is a detailed version of @ricardo-branco's answer, but in more depth for new users.
In my scenario, I wanted to temporarily connect to MySQL running in a container, and since other application containers are linked to it, stopping, reconfiguring, and re-running the database container was a non-starter.
Since I'd like to access the MySQL database externally (from Sequel Pro via SSH tunneling), I'm going to use port 33306 on the host machine. (Not 3306, just in case there is an outer MySQL instance running.)
About an hour of tweaking iptables proved fruitless.
Step by step, here's what I did:
mkdir db-expose-33306
cd db-expose-33306
vim Dockerfile
Edit the Dockerfile, placing this inside:
# Exposes port 3306 on linked "db" container, to be accessible at host:33306
# (Recommended: use the same base image as the DB container)
FROM ubuntu:latest
RUN apt-get update && \
apt-get -y install socat && \
apt-get clean
USER nobody
EXPOSE 33306
CMD socat -dddd TCP-LISTEN:33306,reuseaddr,fork TCP:db:3306
Then build the image:
docker build -t your-namespace/db-expose-33306 .
Then run it, linking to your running container. (Use -d instead of --rm to keep it in the background until explicitly stopped and removed. I only want it running temporarily in this case.)
docker run -it --rm --name=db-33306 --link the_live_db_container:db -p 33306:33306 your-namespace/db-expose-33306
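With the relay running, you can then connect from the host through port 33306 (a sketch; the credentials are whatever your db container uses):
mysql -h 127.0.0.1 -P 33306 -u root -p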
You can use SSH to create a tunnel and expose your container on your host.
You can do it both ways: from the container to the host and from the host to the container. But you need an SSH tool like OpenSSH in both (a client in one and a server in the other).
For example, in the container, you can do
$ yum install -y openssh openssh-server.x86_64
service sshd restart
Stopping sshd: [FAILED]
Generating SSH2 RSA host key: [ OK ]
Generating SSH1 RSA host key: [ OK ]
Generating SSH2 DSA host key: [ OK ]
Starting sshd: [ OK ]
$ passwd # You need to set a root password..
You can find the container's IP address with this line (run inside the container):
$ ifconfig eth0 | grep "inet addr" | sed 's/^[^:]*:\([^ ]*\).*/\1/g'
172.17.0.2
Then, on the host, you can just do:
sudo ssh -NfL 80:0.0.0.0:80 root@172.17.0.2
Based on Robm's answer I have created a Docker image and a Bash script called portcat.
Using portcat, you can easily map multiple ports to an existing Docker container. An example using the (optional) Bash script:
curl -sL https://raw.githubusercontent.com/archan937/portcat/master/script/install | sudo bash
portcat my-awesome-container 3456 4444:8080
And there you go! Portcat is mapping:
port 3456 to my-awesome-container:3456
port 4444 to my-awesome-container:8080
Please note that the Bash script is optional; it essentially wraps the following commands:
ipAddress=$(docker inspect my-awesome-container | grep IPAddress | grep -o '[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}' | head -n 1)
docker run -p 3456:3456 -p 4444:4444 --name=alpine-portcat -it pmelegend/portcat:latest $ipAddress 3456 4444:8080
I hope portcat will come in handy for you guys. Cheers!
There is a handy HAProxy wrapper.
docker run -it -p LOCALPORT:PROXYPORT --rm --link TARGET_CONTAINER:EZNAME -e "BACKEND_HOST=EZNAME" -e "BACKEND_PORT=PROXYPORT" demandbase/docker-tcp-proxy
This creates an HAProxy proxy to the target container. Easy peasy.
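For example, to reach port 80 of an existing container named web via host port 8080 (the names here are placeholders, following the template above):
docker run -it -p 8080:80 --rm --link web:web -e "BACKEND_HOST=web" -e "BACKEND_PORT=80" demandbase/docker-tcp-proxy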
Here are some solutions:
https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/12
A solution for mapping a port while running the container:
docker run -d --net=host myvnc
That will expose and map the port to your host automatically.
In case no answer is working for someone: check whether your target container is already running in a user-defined Docker network:
CONTAINER=my-target-container
docker inspect $CONTAINER | grep NetworkMode
"NetworkMode": "my-network-name",
Save it for later in the variable $NET_NAME:
NET_NAME=$(docker inspect --format '{{.HostConfig.NetworkMode}}' $CONTAINER)
If so, run the proxy container in the same network.
Next look up the alias for the container:
docker inspect $CONTAINER | grep -A2 Aliases
"Aliases": [
"my-alias",
"23ea4ea42e34a"
Save it for later in the variable $ALIAS:
ALIAS=$(docker inspect --format '{{index .NetworkSettings.Networks "'$NET_NAME'" "Aliases" 0}}' $CONTAINER)
Now run socat in a container in the network $NET_NAME to bridge to the $ALIASed container's exposed (but not published) port:
docker run \
--detach --name my-new-proxy \
--net $NET_NAME \
--publish 8080:1234 \
alpine/socat TCP-LISTEN:1234,fork TCP-CONNECT:$ALIAS:80
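Requests to the host's port 8080 now reach the target container's exposed port 80:
curl http://localhost:8080/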
You can use an overlay network like Weave Net, which will assign a unique IP address to each container and implicitly expose all the ports to every container that is part of the network.
Weave also provides host network integration. It is disabled by default, but if you want to also access the container IP addresses (and all their ports) from the host, you can simply run weave expose.
Full disclosure: I work at Weaveworks.
It's not possible to do live port mapping, but there are multiple ways you can give a Docker container what amounts to a real interface, like a virtual machine would have.
Macvlan Interfaces
Docker now includes a macvlan network driver. This attaches a Docker network to a "real world" interface and allows you to assign that network's addresses directly to the container (like a virtual machine's bridged mode).
docker network create \
-d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
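A container can then be attached to that network with a static address from the subnet, much like the earlier examples (the address here is an arbitrary pick from the example range):
docker run --rm -dt --net=pub_net --ip=172.16.86.10 nginx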
pipework can also map a real interface into a container, or set up a sub-interface in older versions of Docker.
Routing IPs
If you have control of the network, you can route additional subnets to your Docker host for use by the containers.
Then you assign that network to the containers and set up your Docker host to route the packets via the Docker network.
Shared host interface
The --net host option allows the host interface to be shared into a container, but this is probably not a good setup for running multiple containers on one host due to the shared nature.
Read Ricardo's response first. This worked for me.
However, there is a scenario where it won't work: if the running container was started with docker-compose. This is because docker-compose (I'm running docker 1.17) creates a new network. The way to address this scenario is to first find the network name:
docker network ls
Then use that network name when starting the relay container (note that --net must come before the image name):
docker run -d --name sqlplus --net network_name --link db:db -p 1521:1521 sqlplus
docker run -i --expose=22 b5593e60c33b bash
ref: https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/5
This may help you.