Connect to a RabbitMQ container from another container - docker

I created a RabbitMQ container by running the command below
docker run -d --hostname My-rabbit --name test-rabbit -p 15672:15672 rabbitmq:3-management
I created a user called userrabbit and gave it permissions as below
rabbitmqctl add_user userrabbit password
rabbitmqctl set_user_tags userrabbit administrator
rabbitmqctl set_permissions -p / userrabbit ".*" ".*" ".*"
The IP of this container (test-rabbit) is 172.17.0.3.
I created another container (172.17.0.4) in which my application runs, and I need to provide it with the RabbitMQ URL. I've provided the URL as below
transport_url = rabbit://userrabbit:password@172.17.0.3:15672/
The logs of the container (172.17.0.4) show
AMQP server 172.17.0.3:15672 closed the connection. Check login credentials: Socket closed
But I"m able to ping the RabbitMq from the container(172.17.0.4) with the same credentials as shown below
curl -i -u userrabbit:password http://172.17.0.3:15672/api/whoami
HTTP/1.1 200 OK
vary: Accept-Encoding, origin
Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
Date: Tue, 14 Feb 2017 17:06:39 GMT
Content-Type: application/json
Content-Length: 45
Cache-Control: no-cache
{"name":"userrabbit","tags":"administrator"}

Two things...
First: it's port 5672 for the transport_url.
The 15672 port you listed is the web admin console.
Second: you need to network your containers together via Docker networking.
The easiest way is with the --link option, provided to the second container at run time.
docker run --link test-rabbit:test-rabbit (... other options here)
By adding the --link test-rabbit:test-rabbit option, your application will be able to resolve test-rabbit as a valid network name with an IP address.
Updating your transport URL
With these two fixes, your transport_url then becomes:
transport_url = rabbit://userrabbit:password@test-rabbit:5672/
Other networking options
Using --link is the easiest way to start, but it isn't very scalable.
docker-compose makes it really easy to add links between containers,
and you can also create custom Docker networks via command-line tools or docker-compose to connect containers together. That's more work, but better long-term.
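For reference, here is a minimal sketch of that custom-network route using the names from the question (rabbit-net and my-app-image are placeholder names for your own network and application image):
docker network create rabbit-net
docker run -d --hostname My-rabbit --name test-rabbit --net rabbit-net -p 15672:15672 rabbitmq:3-management
docker run -d --name my-app --net rabbit-net my-app-image
On a user-defined network, Docker's embedded DNS resolves container names, so the same transport_url = rabbit://userrabbit:password@test-rabbit:5672/ works without --link.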

You need to specify a hostname for each Docker container with the --hostname option and add /etc/hosts entries for all the other containers. You can do this with the --add-host option or by manually editing the /etc/hosts file.
First, create a network so that you can assign static IPs:
docker network create --subnet=172.18.0.0/16 mynet1
Then run the containers:
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host rab2:172.18.0.12 --name rab1con -p 15672:15672 rabbitmq:3-management
and the second one:
docker run -d --net mynet1 --ip 172.18.0.12 --hostname rab2 --add-host rab1:172.18.0.11 --name rab2con -p 15673:15672 rabbitmq:3-management
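As a quick sanity check that the manual host entries are in place, you could test name resolution from inside each container (getent should be available in the Debian-based rabbitmq image):
docker exec rab1con getent hosts rab2
docker exec rab2con getent hosts rab1
Each command should print the IP address you passed via --add-host.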

Create a Docker network so that the RabbitMQ client can connect to the RabbitMQ server, both running as Docker containers.
For example:
docker network create sdk-net
Then run the RabbitMQ container on that network and give it a name:
docker run -d --rm --name demo-rabbit --net sdk-net -p 5672:5672 -p 15672:15672 rabbitmq:3.6.15-management-alpine
Run your client like this (note the network name sdk-net in the run command):
docker run --rm -p 8090:8090 --net sdk-net pythontest
From your client, the Docker container name is reachable, so the AMQP connection string becomes:
amqp_url ='amqp://demo-rabbit:5672/'
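To confirm that both containers are attached to the network (and can therefore resolve each other by name), you could inspect it:
docker network inspect sdk-net
The Containers section of the output should list both demo-rabbit and your client container.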

Related

I can't access a container in Docker

I am running several services on my CentOS 7 Linux server. Nginx and netdata are being run as root and are working well.
I started Portainer as a Docker container:
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer
I can connect to the Portainer port locally with telnet localhost 9000. But when I try telnet <server IP> 9000 from an external client PC on the same network, it doesn't connect.
The Linux server does not have a firewall. Nginx, netdata, and my app, which are not running in Docker, work fine. In short, all other services on the server are accessible, but the service running inside the Docker container is not.
What do I need to change to be able to reach the container?
You have to disable your IPv6.
Add these lines to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
Then apply them with:
sysctl -p
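As a quick check that the setting took effect, and to see which addresses the published ports are bound to (assuming ss is available on the server):
sysctl net.ipv6.conf.all.disable_ipv6
ss -lnt | grep -E ':(8000|9000)'
The sysctl should print 1, and the second command shows the listeners for the Portainer ports.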

Running Toxiproxy in Docker containers on macOS

Very basic question about proxying inside docker containers in OSX: I am running a toxiproxy docker container with:
docker run --name proxy -p 8474:8474 -p 19200:19200 shopify/toxiproxy
and an Elasticsearch container:
docker run --name es -p 9200:9200 elasticsearch:6.8.6
I want toxiproxy to proxy traffic to the Elasticsearch container (port 9200) via localhost:19200. I configure toxiproxy with:
curl -XPOST "localhost:8474/proxies" -d "{ \"name\": \"proxy_es\", \"listen\": \"0.0.0.0:19200\", \"upstream\": \"localhost:9200\", \"enabled\": true}"
Now, I would expect that:
curl -XGET localhost:19200/_cat
would point me to the Elasticsearch endpoint. But I get:
curl: (52) Empty reply from server
Any idea why this is wrong? How can I fix it?
From inside the toxiproxy container, localhost:9200 does not resolve to the es container.
This is because, by default, these containers are attached to the default bridge network. On that network, localhost refers to the container's own loopback interface; it does not resolve to the localhost of the host machine (where docker-machine is running).
You can make this work by using the host network (add --net=host). A better approach is to create a new network and run all containers in that network.
docker run --name proxy -p 8474:8474 -p 19200:19200 --net host shopify/toxiproxy
docker run --name es -p 9200:9200 --net host elasticsearch:6.8.6
Your localhost will then be resolvable in both directions.
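A rough sketch of that second approach, keeping the image and container names from the question (the network name toxi-net is arbitrary): run both containers on a user-defined network and point the proxy's upstream at the es container name instead of localhost.
docker network create toxi-net
docker run -d --name es --net toxi-net elasticsearch:6.8.6
docker run -d --name proxy --net toxi-net -p 8474:8474 -p 19200:19200 shopify/toxiproxy
curl -XPOST "localhost:8474/proxies" -d "{ \"name\": \"proxy_es\", \"listen\": \"0.0.0.0:19200\", \"upstream\": \"es:9200\", \"enabled\": true}"
curl localhost:19200/_cat/health
With the upstream set to es:9200, toxiproxy reaches Elasticsearch over Docker's internal DNS, and only the proxy's ports need to be published.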

Expected exposed port on Redis container isn't reachable, even after binding the port

I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
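For example, after running one of the two publish commands above, a quick check from the host (assuming redis-cli is installed there):
redis-cli -h 127.0.0.1 -p 6379 ping
which should return PONG.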
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping

Docker Consul Multiple Containers in single VM setup

Docker commands that I have used to spin up the Consul containers:
Created a network for container 1 so I can assign a static IP:
docker network create --subnet=172.18.0.0/16 C1
Run a Consul container with that IP:
docker run -d --net C1 --ip 172.18.0.10 -p 48301:8301/tcp -p 48400:8400/tcp -p 48600:8600/tcp -p 48300:8300/tcp -p 48302:8302/tcp -p 48302:8302/udp -p 48500:8500/tcp -p 48600:8600/udp -p 48301:8301/udp --name=test1 consul agent -client=172.18.0.10 -bind=172.18.0.10 -server -bootstrap -ui
Similarly, created a second network for container 2:
docker network create --subnet=172.19.0.0/16 C2
docker run -d --net C2 --ip 172.19.0.10 -p 58301:8301/tcp -p 58400:8400/tcp -p 58600:8600/tcp -p 58300:8300/tcp -p 58302:8302/tcp -p 58302:8302/udp -p 58500:8500/tcp -p 58600:8600/udp -p 58301:8301/udp --name=test2 consul agent -client=172.19.0.10 -bind=172.19.0.10 -server -bootstrap -ui -join 192.168.99.100:48301
The Consul container test2 at 172.19.0.10:8301 is not able to gossip with 172.18.0.10:8301. I get the "No Acknowledgement received" message.
I also tried --link to link both containers, but that didn't work.
Can anyone let me know if I am doing everything correct?
When you create a user-defined network on the Docker daemon, there are some properties of these networks that you have to be aware of.
Each container in the network can immediately communicate with other containers in the network. Though, the network itself isolates the containers from external networks. (Docker documentation)
That effectively describes what you are experiencing: the containers cannot talk to each other because they are isolated from each other (they reside in different networks).
On the point of --link, it is not supported in user-defined networks.
Within a user-defined bridge network, linking is not supported. (Docker documentation)
The solution would be to simply put both containers in the same network. I don't see an apparent need to use two different networks from your description. Just use a different --ip for the second one.
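A rough sketch of that, reusing the flags from the question but with a single network (consul-net is an arbitrary name; only the first node bootstraps, and -client=0.0.0.0 lets the published UI port work):
docker network create --subnet=172.18.0.0/16 consul-net
docker run -d --net consul-net --ip 172.18.0.10 -p 48500:8500 --name test1 consul agent -bind=172.18.0.10 -client=0.0.0.0 -server -bootstrap -ui
docker run -d --net consul-net --ip 172.18.0.11 -p 58500:8500 --name test2 consul agent -bind=172.18.0.11 -client=0.0.0.0 -server -ui -join 172.18.0.10
docker exec test1 consul members
Because both containers share the network, the gossip ports (8300-8302) no longer need to be published, and consul members should list both nodes once they have joined.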

docker: connect to database container via dockerhost

I am trying to connect from an application container to a database container in two situations, one succeeds, one doesn't.
There are two containers on my dockerhost:
a mysql container with port 3306 published to port 3356 on the dockerhost
an application container
At work, dockerhost has IP-address 10.0.2.15, at home, dockerhost has IP-address 192.168.8.11 (hostname -I).
In both situations, I want to connect to the database container from the app container with host 10.0.2.15/192.168.8.11 and port 3356.
When I do this at work (Windows network, Vagrant/VirtualBox dockerhost), there is no problem. I can 'telnet 10.0.2.15 3356' from the app container and connect to the db container.
When I do this at home (Ubuntu), it is impossible to connect. The only way is to use the docker ip address of the db container (172.17.0.2) with port 3306. However, I can ping 192.168.8.11.
The scripts to start the containers are identical; I do not use --add-host, so the dockerhost IP-address is not in /etc/hosts.
Any suggestions?
OK, use Docker to run 3 database instances:
docker run --name mysqldb1 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb2 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run --name mysqldb3 -e MYSQL_ROOT_PASSWORD=changeme -d mysql
Each one will have a different IP address on my host machine:
$ for i in mysqldb1 mysqldb2 mysqldb3
> do
> docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $i
> done
172.17.0.2
172.17.0.3
172.17.0.4
Repeat this on your machine and you'll very likely have different IP addresses.
So how is this problem fixed?
The older approach (deprecated in Docker 1.9) is to use links. The following commands show how environment variables are set within your linked application container (the one using the database):
$ docker run -it --rm --link mysqldb1:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
$ docker run -it --rm --link mysqldb2:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.3
$ docker run -it --rm --link mysqldb3:mysql mysql env
..
MYSQL_PORT_3306_TCP_ADDR=172.17.0.4
And the following demonstrates how the /etc/hosts entries are also created:
$ docker run -it --rm --link mysqldb1:mysql mysql grep mysql /etc/hosts
172.17.0.2 mysql 2a12644351a0 mysqldb1
$ docker run -it --rm --link mysqldb2:mysql mysql grep mysql /etc/hosts
172.17.0.3 mysql 89140cbf68c7 mysqldb2
$ docker run -it --rm --link mysqldb3:mysql mysql grep mysql /etc/hosts
172.17.0.4 mysql 27535e8848ef mysqldb3
So you can just refer to the other container using the "mysql" hostname or the "MYSQL_PORT_3306_TCP_ADDR" environment variable.
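For example, a connection through the link using the mysql client from the same image (the password matches the MYSQL_ROOT_PASSWORD used above):
docker run -it --rm --link mysqldb1:mysql mysql mysql -hmysql -uroot -pchangeme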
In Docker 1.9 there is a more powerful networking feature that enables containers to be linked across hosts.
http://docs.docker.com/engine/userguide/networking/dockernetworks/
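A minimal sketch of that approach (db-net is an arbitrary name, and the database container is recreated on that network); containers on the same user-defined network can reach each other by container name via Docker's embedded DNS:
docker network create db-net
docker run --name mysqldb1 --net db-net -e MYSQL_ROOT_PASSWORD=changeme -d mysql
docker run -it --rm --net db-net mysql mysql -hmysqldb1 -uroot -pchangeme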
You can use my container, which acts as a NAT gateway to the dockerhost, without any manual setup: https://github.com/qoomon/docker-host
