I have installed docker on a private cloud VM (RHEL 7.2) with a floating IP, say 10.135.118.6.
I also have a Java Play application which talks to third-party database servers. The database servers have white-listed the floating IP 10.135.118.6 so that my Java Play app can make a connection to them.
Now I wish to dockerize this Java Play app, but the IP addresses assigned to the docker containers come from the default docker bridge, so they end up in a range like 172.17.0.2 (dynamic IPs).
This is creating a problem for me, as the new IP is not white-listed on my database server, which eventually stops the container.
Is there any way I can assign the VM floating IP to my docker
container instead of the docker bridge network IP?
To achieve this:
First, you can create your own docker network with a custom subnet (e.g. JavaPlay_net):
docker network create --subnet=172.32.0.0/16 JavaPlay_net
then simply run the image (for example, the ubuntu image):
docker run --net JavaPlay_net --ip 172.32.0.22 -it ubuntu bash
then, in the Ubuntu shell, check the assigned IP:
hostname -i
Additionally, you could use:
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
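For example, a combined sketch (the hostname and the extra /etc/hosts entry below are hypothetical):
docker run --net JavaPlay_net --ip 172.32.0.22 \
  --hostname playapp.local \
  --add-host dbserver.example.com:10.135.118.20 \
  -it ubuntu bash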
Reference for creating a Docker network:
https://docs.docker.com/engine/reference/commandline/network_create/#options
I have a basic question about Docker that is probably due to lack of knowledge on my part about networking. The Docker container networking documentation states:
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
It sounds like, when you run a container on your computer without mapping any ports from the container to the host machine, the container should not be able to access the internet. However, for example, I pull the Ubuntu image with:
docker pull ubuntu
Then I enter the container's command line with:
docker run -ti ubuntu bash
At that point, I can run apt-get update and the container starts pulling information from the internet without mapping any ports (e.g. -p 80:80). How is this possible?
Publishing a port allows machines external to the Docker host to access the container (inbound connectivity). By default, containers already have outbound connectivity to the network.
To restrict a container from accessing the network, you can either run the container with no network (note: this still creates a loopback interface, and you can later connect it to another network):
docker run --net none ...
Or you can create a network with the --internal option and run containers on that network:
docker network create --internal internal
docker run --net internal ...
The internal network is created without a gateway interface on the bridge network.
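A quick way to see the difference (a sketch, assuming the internal network created above; alpine's busybox ping is used only as a connectivity probe):
# container on the internal network: no outbound connectivity
docker run --rm --net internal alpine ping -c 1 -W 2 8.8.8.8
# container on the default bridge: outbound works
docker run --rm --net bridge alpine ping -c 1 -W 2 8.8.8.8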
When the documentation talks about publishing ports, it means inbound ports.
Outbound connections work by default, depending on your network type; see here for more:
https://docs.docker.com/network/
I really don't understand what's going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine, and I have a docker host machine with two containers. CONT_A listens on and publishes a web service on port 3000.
DEV-MACHINE

HOST (Public IP = 111.222.333.444)
    CONT_A (Publish 3000)
    CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I curl from inside CONT_B
it's not possible, just a timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net

# start container A
docker run -d --name=A --network=my-net producer:latest

# start container B
docker run -d --name=B --network=my-net consumer:latest

# magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start scaling, etc.).
If you're not keen on using Docker swarm, you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links & overlay networks altogether... use Kubernetes :-) The initial setup is a bit more difficult, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the docker containers behave as described in the question:
Docker is there to provide lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
Nor do they explain how to fix the problem without custom docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
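Applied to the question's setup, a minimal sketch (the image name is hypothetical and is assumed to have curl installed; requires Docker 20.10+ on Linux):
# curl the port that CONT_A publishes on the host, via host.docker.internal
docker run --rm --add-host=host.docker.internal:host-gateway some-image:latest \
    curl http://host.docker.internal:3000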
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some php script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got host unreachable after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. It worked, but this is a hack, and as far as I know such manual updates are not encouraged. You should use extra_hosts in docker compose, which requires an explicit IP address rather than a container name.
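For reference, a minimal docker compose sketch of that extra_hosts approach (the service name, image, and IP here are hypothetical):
services:
  cron:
    image: my-cron-image
    extra_hosts:
      - "app1.example.com:1.2.3.4"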
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do this, I promise to update this answer.
Finally, I used curl's ability to target the server directly (here by its container name, which resolves to its IP) and passed the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it works.
(here web is the name of my nginx container)
I want to be able to access a docker container via its IP, e.g. the one I can see when I do docker container inspect foo.
The reason is that I am using zookeeper inside a docker container to manage two other docker containers running solr. My code (not in docker, and at this stage I don't want it to be) calls zookeeper to get the URLs of the solr servers, which zookeeper reports as the docker containers' IPs. My code then falls over because calling the docker containers' IPs from the host fails; it should be calling localhost instead.
So how can I allow a call to the docker container's IP from the host to be routed correctly? (I am using Docker native for Mac.)
I'm not using Docker for Mac, so I'm not sure whether the newest version of Docker for Mac is still based on Docker Machine (which is based on VirtualBox) or not.
If you can confirm your Docker for Mac is based on VirtualBox, then you could probably get the inet IP of the vboxnet0 network interface via the ifconfig command. This IP should be used as your calling IP.
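For example (a sketch; vboxnet0 is the default VirtualBox host-only adapter name):
ifconfig vboxnet0 | grep 'inet '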
Besides, you should know the port number of your Zookeeper container. Normally a container's port is published (mapped to a host port) in the docker run command, for example:
docker run -p 5000:5001 -i -t ubuntu /bin/bash
where -p indicates the host:container port mapping of the container.
I need to create some docker containers that must be accessed by other computers at the same network.
Problem is that when I create the container, Docker gets IP addresses valid only within the host machine.
I already took a look at Docker documentation (Networking) but nothing has worked.
If I run ifconfig on my machine, my IP address is 172.21.46.149. When I go inside the container (Ubuntu) and run ifconfig, the IP address is 172.17.0.2. I need the container to get, for example, 172.21.46.150.
How can I do it?
You have to create a bridge on your host and assign that bridge to the container. This may help you: https://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
Multi-host access involves an overlay network with service discovery.
See docker/networking:
An overlay network requires a key-value store. The store maintains information about the network state which includes discovery, networks, endpoints, IP Addresses, and more.
The Docker Engine currently supports the Consul, etcd, ZooKeeper (Distributed store), and BoltDB (Local store) key-value stores.
This example uses Consul.
If your nodes (the other computers on the same network) run their docker daemon with a reference to that key-value store, they will be able to communicate with containers from other nodes.
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://<NODE-0-PRIVATE-IP>:8500/network --cluster-advertise=eth0:2375"
You just need to create an overlay network:
docker network create -d overlay --subnet=10.10.10.0/24 RED
(it will be available in all computers because of the key-value store)
And run your containers on that network:
docker run -itd --name container1 --net RED busybox
Docker containers can easily be accessed by other network nodes when a container:port is published through a host:port.
This is done using the -p docker run option. Here is the summary from the man page (man docker-run gives more details and examples that I won't copy/paste):
-p, --publish=[]
Publish a container's port, or range of ports, to the host.
See the doc online. This question/answer could be interesting to read too.
Basically:
docker run -it --rm -p 8085:8080 my_netcat nc -l -p 8080
would allow LAN nodes to connect to docker-host-ip:8085 and talk to the netcat command.
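For example, from another machine on the LAN (the docker host IP below is hypothetical):
nc 192.168.1.50 8085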
I wish to make two of my containers available outside of the VM host on their own separate, specific IP addresses (192.168.0.222, 192.168.0.227), without port mapping. That means I wish to access any port directly on the containers by using their IPs. I already have machines running on the network outside of the VM host in the range 192.168.0.1–192.168.0.221.
Is this now possible with Docker 1.10.0, and if so, how?
I'm on OS X 10.11 with docker version 1.10.0, build 590d5108 and docker-machine version 0.6.0, build e27fb87, using boot2docker/VirtualBox driver.
I have been trying to figure this out for some while, without luck, and I've read the following questions and answers:
How to assign static public IP to docker container
How to expose docker container's ip and port to outside docker host without port mapping?
How can I make other machines on my network access my Docker containers (using port mapping)?
According to Jessie Frazelle, this should now be possible.
See "IPs for all the Things"
This is so cool I can hardly stand it.
In Docker 1.10, the awesome libnetwork team added the ability to specify a specific IP for a container. If you want to see the pull request, it's here: docker/docker#19001.
# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
# BOOM golden
That illustrates the new docker run --ip option, which you can also see in docker network connect.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
The "making accessible" part would involve, as usual, port forwarding.