I am currently trying to assign an IP to a container so that it can be accessed by any other machine on the network. The network my Docker host machine sits on is 9.158.143.0/24, with gateway 9.158.143.254.
I tried setting an IP (any free one) within the subnet 9.158.143.0/24 with the network type as bridge. It did not work (I was not even able to ping the container from the Docker host machine).
Then I created a user-defined network with subnet 9.10.10.0/24 and the bridge network driver. The container created is pingable, but only from within the Docker host machine.
Is there any way to make this container accessible from all the machines on the network (not just the Docker host machine)?
PS: I do not want to expose the port. (Is changing routes helpful? I know very little networking.)
I also tried a user-defined network with the macvlan driver. In this case I am able to ping the container from other machines, but not from the Docker host machine where the container is present.
To do this, you need to:
1- dedicate a new address for this container, on your local network;
2- configure this address as a secondary address on your docker engine host;
3- enable routing between local interfaces on your docker engine host;
4- use iptables to redirect the connections to this new address to your container.
Here is an example:
We suppose:
your container instance is named mycontainer
your LAN network interface is named eth0
you have chosen the available network address 192.168.1.178 on your LAN
So, you need to do the following:
start your container (in this example, we start a web server):
docker run --rm -t -i --name=mycontainer httpd:2.2-alpine
enable local IP routing:
sysctl -w net.ipv4.ip_forward=1
add a secondary address for your container:
ip addr add 192.168.1.178/24 dev eth0
redirect connections for this address to your container:
First, find your container local address:
% docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
172.17.0.2
Now, use this local address with iptables:
iptables -t nat -A PREROUTING -i eth0 -d 192.168.1.178/32 -j DNAT --to 172.17.0.2
Finally, you can check it works by accessing http://192.168.1.178/ from any other host on your LAN.
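Before step 2, it is worth sanity-checking that the address you picked is actually a usable host address inside your LAN subnet (not the network or broadcast address). A quick Python check, using the example's addresses:

```python
import ipaddress

def is_usable_host(candidate: str, lan_cidr: str) -> bool:
    """True if candidate lies inside the subnet and is neither the
    network address nor the broadcast address."""
    net = ipaddress.ip_network(lan_cidr, strict=False)
    addr = ipaddress.ip_address(candidate)
    return addr in net and addr not in (net.network_address, net.broadcast_address)

print(is_usable_host("192.168.1.178", "192.168.1.0/24"))  # True
print(is_usable_host("192.168.1.255", "192.168.1.0/24"))  # broadcast -> False
```

Whether the address is actually *free* on your LAN still has to be checked separately (e.g. with ping or arping).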
Related
I have multiple private ipv4 addresses on my machine (each one bound to a separate public IP address)
10.0.0.4
10.0.0.5
10.0.0.6
10.0.0.7
10.0.0.8
When I run my application which uses each IP address to perform some requests everything works fine and as expected. However, when I try to run it in docker my application claims that it failed to bind to the IP address. I believe this is because docker networking is isolated.
I'm wondering how I can "expose" these ipv4 addresses to my service via a docker-compose.yml file.
You're right that Docker's network isolation is involved: your application will see a single unpredictable IP address, and Docker provides a NAT layer that translates the host's network addresses to this.
The most common way to set this up is to set your application to bind to 0.0.0.0, "all interfaces". The Compose ports: setting takes an optional IP address part, which also defaults to 0.0.0.0. You can have multiple ports: targeting the same container port, so long as the host IP and port pairs don't conflict with other bound ports or non-Docker services.
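The "bind to 0.0.0.0" behaviour is plain socket semantics, independent of Docker: one socket bound to the wildcard address accepts connections arriving on any of the machine's addresses. A minimal Python sketch:

```python
import socket

# A socket bound to 0.0.0.0 accepts connections arriving on any
# local address (demonstrated here via loopback).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
conn.sendall(b"ok")
data = cli.recv(2)
for s in (conn, cli, srv):
    s.close()
print(data)  # b'ok'
```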
As a hypothetical example:
version: '3.8'
services:
  app:
    image: registry.example.com/app
    environment:
      # Tell the application to listen on all interfaces, port 8080
      BIND_ADDR: '0.0.0.0:8080'
    ports:
      # As the default HTTP service on the first IP address
      - '10.0.0.4:80:8080'
      # On its own port on the last IP address
      - '10.0.0.8:8080:8080'
      # And not on any of the other IP addresses at all
An alternative is to disable Docker's networking stack with network_mode: host. In this mode your application will see all of the host interfaces directly, and if it has specific logic to selectively bind to them, that will work just as if the program wasn't running in a container. However, this also disables all other Docker networking functionality: you cannot hide or remap ports, and you cannot communicate with other containers by hostname, only via their published ports. I'd generally discourage host networking, but it might be a reasonable approach to this particular scenario.
You can configure your docker container to use multiple IP addresses, at least in two ways:
Add additional IP addresses inside the container manually:
container # ip address add 172.17.1.4/32 dev eth0
container # ip address add 172.17.1.5/32 dev eth0
...
Note: These addresses probably need to belong to the container's subnet, not sure. docker network inspect bridge prints the default bridge network's subnet, 172.17.0.0/16 for me.
(source: Multiple ip on same interface in Docker container)
or
Create multiple bridge networks, each with a different subnet (IP range), then attach your container to these multiple networks.
For details, see
https://docs.docker.com/engine/reference/commandline/network_create/#specify-advanced-options
https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks
Then you can configure your docker host to route (packets from) these different container IP addresses via your different host IP addresses:
host # iptables -t nat -I POSTROUTING -p all -s 172.17.1.4/32 -j SNAT --to-source 10.0.0.4
host # iptables -t nat -I POSTROUTING -p all -s 172.17.1.5/32 -j SNAT --to-source 10.0.0.5
...
(source: https://serverfault.com/a/686107)
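With five address pairs, the repeated rules can be generated rather than typed by hand. A small Python sketch, assuming (hypothetically) that the container addresses run consecutively from 172.17.1.4 and pair up with the host addresses in order:

```python
import ipaddress

host_ips = [f"10.0.0.{i}" for i in range(4, 9)]      # the five host addresses
base = ipaddress.ip_address("172.17.1.4")            # first container address
container_ips = [str(base + i) for i in range(len(host_ips))]

# One SNAT rule per (container IP, host IP) pair
rules = [
    f"iptables -t nat -I POSTROUTING -p all -s {c}/32 -j SNAT --to-source {h}"
    for c, h in zip(container_ips, host_ips)
]
for rule in rules:
    print(rule)
```

The printed lines can then be run (as root) on the host, or dropped into a provisioning script.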
The end result is that traffic outgoing from your container via the different container IPs is routed via the different host IPs. You can confirm this, e.g., with:
container # curl -v --interface 172.17.1.4 <some destination that will show which host IP is used>
Regarding docker compose, I don't know enough about it to answer that part of your question.
I need to run a docker container (hosting nginx), such that the container gets a static IP address on the host network. Example:
Suppose the host has IP 172.18.0.2/16 then I would like to give 172.18.0.3/16 to the docker container running on the host. I'd like the other physical machines in the host's network to be able to connect to the container at 172.18.0.3/16.
I have tried the solution described by https://qiita.com/kojiwell/items/f16757c1f0cc86ff225b (without Vagrant), but it didn't help. I'm not sure about the --subnet option that needs to be supplied to the docker network create command.
As suggested in this post, I was trying to do:
docker network create \
--driver bridge \
--subnet=<WHAT TO SUPPLY HERE?> \
--gateway=<WHAT TO SUPPLY HERE?> \
--opt "com.docker.network.bridge.name"="docker1" \
shared_nw
# Add my host NIC to the bridge
brctl addif docker1 eth1
Then start the container as:
docker run --name myApp --net shared_nw --ip 172.18.0.3 -dt ubuntu
Somehow it did not work. I will appreciate if someone could point me to the right direction about how to set such a thing up. Grateful!
For your use case, the ipvlan Docker network driver could work.
Using your assumptions about the host IP address and mask, you could create the network like this (replace eth0 with the name of your host's LAN interface; without a parent interface the ipvlan network is isolated from the LAN):
docker network create -d ipvlan --subnet=172.18.0.0/16 \
  -o ipvlan_mode=l2 -o parent=eth0 my_network
Then run your docker container within that network and assign an IP address:
docker run --name myApp --net my_network --ip 172.18.0.3 -dt ubuntu
Note that any exposed port of that container will be available on the 172.18.0.3 ip address, but any other services on your host will not be reachable with that IP address.
You can find more info on ipvlan at the official docker documentation
The docker run -p option optionally accepts a bind-address part, which specifies a specific host IP address that will accept inbound connections. If your host is already configured with the alternate IP address, you can just run
docker run -p 172.18.0.3:80:8080 ...
and http://172.18.0.3/ (on the default HTTP port 80) will forward to port 8080 in the container.
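The shape of a -p spec can be illustrated with a tiny parser (a simplified, hypothetical helper; the real option also accepts port ranges and a /protocol suffix):

```python
def parse_publish(spec: str):
    """Split a docker-style -p spec into (bind IP, host port, container port).
    Forms handled: ip:hostPort:containerPort | hostPort:containerPort | containerPort
    """
    parts = spec.split(":")
    if len(parts) == 3:
        ip, host_port, container_port = parts
    elif len(parts) == 2:
        ip, host_port, container_port = "0.0.0.0", parts[0], parts[1]
    else:
        ip, host_port, container_port = "0.0.0.0", parts[0], parts[0]
    return ip, int(host_port), int(container_port)

print(parse_publish("172.18.0.3:80:8080"))  # ('172.18.0.3', 80, 8080)
```

When the IP part is omitted, Docker binds to 0.0.0.0, i.e. every host address, which is exactly what the explicit bind address avoids.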
Docker has a separate internal IP address space for containers, that you can almost totally ignore. You almost never need the docker network create --subnet option and you really never need the docker run --ip option. If you ran ifconfig inside this container you'd see a totally different IP address, and that would be fine; the container doesn't know what host ports or IP addresses (if any) it's associated with.
I got confused between these two IP addresses:
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.3
and:
$ docker inspect aa769fbe3a3a | grep IPAddress | cut -d '"' -f 4
172.17.0.2
I looked in the docker doc, but can't find an answer.
Can someone explain to me what the ip 192.168.99.100 is used for ?
And what the other ip 172.17.0.2 used for ?
The first one is the IP of the Linux host which runs the Docker daemon (here, the boot2docker VM created by docker-machine).
The second one is the IP of the container aa769fbe3a3a on the bridge network, i.e. the container's eth0, which is attached to docker0.
See for instance "Docker Networking", and also "Concerning Containers' Connections: on Docker Networking":
When the Docker service dæmon starts, it configures a virtual bridge, docker0, on the host system (Figure below).
Docker picks a subnet not in use on the host and assigns a free IP address to the bridge. The first try is 172.17.42.1/16, but that could be different if there are conflicts.
This virtual bridge handles all host-containers communications.
When Docker starts a container, by default, it creates a virtual interface on the host with a unique name, such as veth220960a, and an address within the same subnet.
This new interface will be connected to the eth0 interface on the container itself.
In order to allow connections, iptables rules are added, using a DOCKER-named chain. Network address translation (NAT) is used to forward traffic to external hosts, and the host machine must be set up to forward IP packets.
I wish to make two of my containers available outside of the VM host on their separate, specific IP addresses (192.168.0.222, 192.168.0.227), without port mapping. That means I wish to access any port directly on the containers by using its IP. I already have machines running in the network outside of the VM host in the range 192.168.0.1–192.168.0.221.
Is this now possible with Docker 1.10.0, and if so, how?
I'm on OS X 10.11 with docker version 1.10.0, build 590d5108 and docker-machine version 0.6.0, build e27fb87, using boot2docker/VirtualBox driver.
I have been trying to figure this out for some while, without luck, and I've read the following questions and answers:
How to assign static public IP to docker container
How to expose docker container's ip and port to outside docker host without port mapping?
How can I make other machines on my network access my Docker containers (using port mapping)?
According to Jessie Frazelle, this should now be possible.
See "IPs for all the Things"
This is so cool I can hardly stand it.
In Docker 1.10, the awesome libnetwork team added the ability to specify a specific IP for a container. If you want to see the pull request, it's here: docker/docker#19001.
# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
# BOOM golden
That illustrates the new docker run --ip option, which is also available for docker network connect.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
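The relationship between the subnet, the dynamic --ip-range, and the hand-picked static address can be verified with Python's ipaddress module:

```python
import ipaddress

subnet = ipaddress.ip_network("172.20.0.0/16")
dynamic_range = ipaddress.ip_network("172.20.240.0/20")   # Docker allocates from here
static_ip = ipaddress.ip_address("172.20.128.2")          # chosen by hand

# The static address must be inside the subnet...
print(static_ip in subnet)          # True
# ...but outside the range Docker hands out automatically,
# so no dynamically started container can grab it.
print(static_ip in dynamic_range)   # False
```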
The "making accessible" part would involve, as usual, port forwarding.
I'm trying to expose a docker container to the outside world, not just the host machine. When I created the image from a base CentOS image it looks like this:
# install openssh server and ssh client
RUN yum install -y openssh-server
RUN yum install -y openssh-clients
RUN echo 'root:password' | chpasswd
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN sed -ri 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I run this image like so:
sudo docker run -d -P crystal/ssh
When I try to look at the container with sudo docker ps, I see Ports:
0.0.0.0:49154->22/tcp
If I ifconfig on the host machine (ubuntu), I see docker0 inet addr:172.17.42.1. I can ping this from my host machine, but not from any other machine. What am I doing wrong in setting up the container to look at the outside world? Thanks.
Edit:
I have tried inspecting the IPAddress of the container and I see IPAddress: 172.17.0.28, but I cannot ping that either...
If I try nmap, that seems to return the ports. So does that mean it is open and I should be able to ssh into it if I have ssh set up? Thanks.
nmap -p 49154 10.211.55.1 shows that the port is open with an unknown service.
I tried to ssh in by ssh -l root -p 49154 10.211.55.1 and I get
Read from socket failed: Connection reset by peer.
UPDATE
Your Dockerfile is wrong. Your sshd is not properly configured; it does not start properly, and that's why the container does not respond correctly on port 22. See the errors:
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
You need to generate host keys. This line will do the magic:
RUN ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key
PREVIOUS ANSWER
You probably need to look up IP address of eth0 interface (that is accessible from network) and you need to connect to your container via this IP address. Traffic from/to docker0 bridge should be forwarded by default to your eth interfaces.
Also, you'd better check that you have IP forwarding enabled:
cat /proc/sys/net/ipv4/ip_forward
This command should return 1; otherwise you should execute:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
(note that sudo echo 1 > /proc/sys/net/ipv4/ip_forward would not work, because the redirection is performed by your unprivileged shell, not by sudo)
Q: Why can you connect to the container this way?
If you have IP forwarding enabled, packets incoming on the eth0 interface are forwarded to the virtual docker0 interface. Magic happens and the packet is received at the correct container. See Docker Advanced Networking for more details:
But docker0 is no ordinary interface. It is a virtual Ethernet bridge
that automatically forwards packets between any other network
interfaces that are attached to it. This lets containers communicate
both with the host machine and with each other. Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received
on the other. It gives one of the peers to the container to become its
eth0 interface and keeps the other peer, with a unique name like
vethAQI2QT, out in the namespace of the host machine. By binding every
veth* interface to the docker0 bridge, Docker creates a virtual subnet
shared between the host machine and every Docker container.
You can't ping 172.17.42.1 from outside your host because it is a private IP, so it can only be accessed from the private network it belongs to: the one formed by the host running the Docker container, the virtual bridge docker0, and the container itself, which is attached to docker0 through a virtual interface.
Moreover, 172.17.42.1 is the IP of the bridge docker0, not the IP of your Docker instance. If you want to know the IP of the Docker instance, you have to run ifconfig inside it, or you can use docker inspect.
I'm not an expert on port mapping, but as I understand it, to reach the container on port 22 you have to connect to port 49154 of the host, and all the traffic will be forwarded.