I am trying to connect from an application container to a database container in two situations, one succeeds, one doesn't.
There are two containers on my dockerhost:
a mysql container, with container port 3306 published as port 3356 on the dockerhost
an application container
At work, the dockerhost has IP address 10.0.2.15; at home, the dockerhost has IP address 192.168.8.11 (hostname -I).
In both situations, I want to connect to the database container from the app container using host 10.0.2.15/192.168.8.11 and port 3356.
When I do this at work (Windows network, Vagrant/Virtualbox dockerhost), this is no problem. I can 'telnet 10.0.2.15 3356' from the app container and connect to the db container.
When I do this at home (Ubuntu), it is impossible to connect. The only way is to use the docker ip address of the db container (172.17.0.2) with port 3306. However, I can ping 192.168.8.11.
The scripts to start the containers are identical; I do not use --add-host, so the dockerhost IP address is not in /etc/hosts.
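Something along these lines is assumed for the start script (container names and the password are hypothetical; only the 3356:3306 mapping comes from the question):
docker run --name mysqldb -e MYSQL_ROOT_PASSWORD=changeme -p 3356:3306 -d mysql
docker run --name app -d my-application-image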
Any suggestions?
OK, let's use docker to run three database instances:
docker run --name mysqldb1 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb2 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb3 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
Each one will have a different IP address on the Docker bridge network of my host machine:
$ for i in mysqldb1 mysqldb2 mysqldb3
> do
> docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $i
> done
172.17.0.2
172.17.0.3
172.17.0.4
Repeat this on your machine and you'll very likely have different IP addresses.
So how is this problem fixed?
The older approach (deprecated in docker 1.9) is to use links. The following commands show how environment variables are set within your linked application container (the one using the database):
$ docker run -it --rm --link mysqldb1:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
$ docker run -it --rm --link mysqldb2:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.3
$ docker run -it --rm --link mysqldb3:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.4
And the following demonstrates how links are also created:
$ docker run -it --rm --link mysqldb1:mysql mysql grep mysql /etc/hosts
172.17.0.2 mysql 2a12644351a0 mysqldb1
$ docker run -it --rm --link mysqldb2:mysql mysql grep mysql /etc/hosts
172.17.0.3 mysql 89140cbf68c7 mysqldb2
$ docker run -it --rm --link mysqldb3:mysql mysql grep mysql /etc/hosts
172.17.0.4 mysql 27535e8848ef mysqldb3
So you can just refer to the other container using the "mysql" hostname or the "MYSQL_PORT_3306_TCP_ADDR" environment variable.
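For example, following the linking convention documented for the official mysql image, a client container could connect like this (a sketch; the variable names come from the link mechanism shown above):
docker run -it --rm --link mysqldb1:mysql mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'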
In Docker 1.9 there is a more powerful networking feature that enables containers to be linked across hosts.
http://docs.docker.com/engine/userguide/networking/dockernetworks/
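As a minimal single-host sketch with a user-defined network (dbnet is a hypothetical network name, the container is the one created above; on such a network Docker's embedded DNS resolves container names):
docker network create dbnet
docker network connect dbnet mysqldb1
docker run -it --rm --net dbnet mysql sh -c 'exec mysql -hmysqldb1 -uroot -pchangeme'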
You can use my container, which acts as a NAT gateway to the dockerhost, without any manual setup: https://github.com/qoomon/docker-host
Related
Description: I am trying to connect to the MySQL metastore of Hive from a Python script on my local machine. I am using the Cloudera quickstart docker image for the Hadoop package. I am even exposing the MySQL port (9083) while running the Cloudera container, but it still does not work. Am I exposing the port correctly, or do I need to do something else too?
I am running the Cloudera container with the command below:
docker run --hostname=quickstart.cloudera --privileged=true --publish-all=true --expose 9083 -t -i -p 8888:8888 -p 80:80 -p 7180:7180 -p 9083:9083 cloudera/quickstart /usr/bin/docker-quickstart
I am using the Python code below to connect to the MySQL db of the Cloudera docker image:
db = pymysql.connect(host="127.0.0.1",
port="9083",# your host, usually localhost
user="hive", # your username
passwd="cloudera",
db="metastore"
) # name of the data base
I am getting the error below:
Operational Error: (2003, "Can't connect to MySQL server on '127.0.0.1' ([Win Error 10061] No connection could be made because the target machine actively refused it)")
Expected Result:
Python script should get connected to the mysql metastore db of hive
You have to specify the host IP address in the MySQL connection string if you are trying to connect to an external MySQL. Here 127.0.0.1 points to the container's localhost, not the HOST.
Change your docker command and set the HOST IP in the container:
docker run --add-host hostdb:HOST_IP --hostname=quickstart.cloudera --privileged=true --publish-all=true --expose 9083 -t -i -p 8888:8888 -p 80:80 -p 7180:7180 -p 9083:9083 cloudera/quickstart /usr/bin/docker-quickstart
Now update the MySQL connection string and set the host to hostdb
db = pymysql.connect(host="hostdb", # this will point to IP that we set in docker run command
port="9083",# your host, usually localhost
user="hive", # your username
passwd="cloudera",
db="metastore"
)
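To double-check that the entry was added, you can inspect /etc/hosts inside the running container (substitute your own container ID or name):
docker exec -it <container_id_or_name> grep hostdb /etc/hosts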
Managing /etc/hosts
Your container will have lines in /etc/hosts which define the hostname
of the container itself as well as localhost and a few other common
things. The --add-host flag can be used to add additional lines to
/etc/hosts.
docker run reference
I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
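For example, with either of the first two publish variants above, a quick check from the host might look like this (assuming redis-cli is installed on the host); a PONG reply confirms the published port is reachable:
redis-cli -h 127.0.0.1 -p 6379 ping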
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping
To make development easier for a project, I've put a couple of services it depends on in docker containers. This makes 'localhost' in the project's config mean something different when it is passed to one of the containers.
edit
To be clear, I'm trying to forward one of the container's ports to the host so when a process running in the container tries to access localhost:5432, it connects to the host's port 5432.
endedit
I'm currently using
HOST_IP=`ip route | grep default | awk '{ printf "%s",$3 }'`
cat /etc/hosts | sed "s/127.0.0.1/$HOST_IP/" > /tmp/etc_hosts
cp /tmp/etc_hosts /etc/hosts
to redirect anything targeting 'localhost' to the container's host. It works in this situation, but I'd prefer to find a way to do this only for the needed port as I expect it won't work in other situations.
Here's what I came up with to do that, but it's not working; when a connection in the container is to localhost:5432, it tries to connect to the container's 5432 instead of the host's:
# --- These are the things that should make redirecting port 5432 to the host machine
# work, provided the container is run in privileged mode.
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.route_localnet=1
iptables -t nat -A PREROUTING -p tcp --dport 5432 -j DNAT --to 172.19.0.1:5432
iptables -A FORWARD -d 172.19.0.1 -p tcp --dport 5432 -j ACCEPT
iptables -t nat -A POSTROUTING -j MASQUERADE
If I understand correctly, for development you'd want localhost to resolve to a specific container, including when it's called from another container.
Host forwarding
Rewriting your hosts file is, as you mentioned, not a good idea, since many services can experience issues if you define localhost as something other than, well... your local host.
But you can consider a few solutions.
Docker Toolbox
If you run docker with Docker Toolbox, or by yourself on a virtual machine with VirtualBox, the intermediate virtual machine is visible, so localhost will represent it. You'll have to run the container exposing this port, and then set up port forwarding in VirtualBox. Using Wordpress as an example:
docker run -p 80:80 --name website -d wordpress
Virtual Box -> your docker VM (usually called default) -> Network -> Adapter 1 -> port forwarding -> create a mapping from host 8080 to guest 80
It will make Wordpress available at http://localhost:8080. Please note that under macOS, the kernel restricts non-privileged processes from forwarding privileged ports (ports under 1024).
This port forwarding can be created in command line, if you want to put it in a script:
VBoxManage modifyvm "default" --natpf1 "app,tcp,,8080,,80"
Docker for Windows/Docker for Mac
If you run docker through Docker for Windows/Docker for Mac (or directly under Linux), rather than Docker Toolbox, you can run the container with the -p parameter, as specified in Scott's post, and your service will be available on localhost at this port (because the intermediate virtual machine is transparent, or there is no VM at all under Linux):
docker run -p 5432:5432 --name myapp -d myimage will make myapp available at localhost:5432.
socat (or iptables)
You can run socat on your host this way to forward communication on a specific port to your container:
socat TCP-LISTEN:5432,fork,reuseaddr,user=node,group=node,mode=777 TCP:172.19.0.1:5432 &
(where 172.19.0.1 is your container IP)
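If you don't know the container's IP, it can be looked up the same way as earlier in this thread (the container name here is hypothetical):
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer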
Container forwarding
--network
Your containers have their own hosts file, that you can see by issuing such a command:
docker run ubuntu cat /etc/hosts
You can add entries to hosts with the --add-host parameter:
docker run --add-host domain:1.2.3.4 --add-host domain2:5.6.7.8 ubuntu cat /etc/hosts
However this solution will be useless for localhost, because it won't remove the previous localhost associations. What you're looking for (and what is cleaner) is the parameter --network=host which allows the container to share the network interfaces of the host:
docker run --network=host ubuntu
This way, your container will be able to call the other containers' services on localhost using their ports.
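A minimal sketch of that setup on Linux (image, names, and password are illustrative, not from the question): publish the dependency's port to the host, then run the consumer with the host's network stack so localhost:5432 reaches it:
docker run -d --name pg -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres
docker run --rm --network=host -e PGPASSWORD=secret postgres psql -h localhost -U postgres -c 'SELECT 1'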
The right way
Of course, the right way to achieve what you want would be to link your containers together and use their link names rather than localhost.
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=password mariadb
docker run -d --name="wordpress" -p 8080:80 -e WORDPRESS_DB_PASSWORD=password --link mariadb:mysql wordpress
In this case, the Wordpress container will have a mysql entry in its hosts file, pointing to the mariadb container's IP address. To see it, open a bash session in the Wordpress container and check for yourself:
docker exec -ti wordpress bash
# cat /etc/hosts
Show us how you are launching your container.
Port mapping happens in your docker run command: -p hostport:containerport
as in
docker run -p 5432:5432 --name mycontainer -d myimage
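Once it is running, you can confirm the mapping with docker port (container name as above):
docker port mycontainer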
I am really confused about this problem. I have two computers on our internal network. Both computers can ping internal servers.
Both computers have same docker version.
I run a simple docker container with the docker run -it --rm --name cont1 --net=host java:8 command on both computers. Then I ssh into the containers and try to ping an internal server. One of the containers can ping an internal server, but the other one can't reach any internal server.
How can this be possible? Do you have any idea?
Thank you
Connecting a container to other systems in the same network is done by port mapping.
For that you need to run the docker container with port mapping,
like: docker run -it --rm --name cont1 -p host_ip:host_port:container_port java:8
e.g., docker run -it --rm --name cont1 -p 192.168.134.122:1234:1500 java:8
NOTE: the container port given in docker run should be exposed in the Dockerfile.
Now, for example, the container IP will be 172.17.0.2 and the port given in the run command is 1500.
A request sent to host_ip (192.168.134.122) and host_port (1234) is then redirected to the container with IP 172.17.0.2 on port 1500.
See the binding details in iptables -L -n -t nat
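From another machine on the same network, a quick reachability check could then be (IP and port taken from the example above):
telnet 192.168.134.122 1234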
Thanks
Ok, I am pretty new to Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command. That maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
If you're running this container in your local OSX system then you should find your default docker-machine ip address by running:
docker-machine ip default
Then you can change your python script to point to that address and mapped port on <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
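Concretely, assuming the script accepts a full AMQP URL as in the question, that might look like:
python ~/Documents/myscripts/migrate_data.py amqp://$(docker-machine ip default):5672/ ~/Documents/queue/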
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a host name for the rabbitmq container. To do this, you'd need to create your own docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that start the container like so
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name is for the container name, and --hostname obviously for the host name.
So now, from your host you can connect to rmq1 rabbitmq server.
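To verify name resolution on that network, a quick check from another container could be (names taken from the command above):
docker run --rm --net mynet123 busybox ping -c 1 rmq_container_name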
You said that you have never used docker-machine before, so I assume you are using Docker Beta for Mac (you should see the docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you name your container rmq but use rabbit in the script.
Execute your python script on your host machine and use localhost:5672
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/