How to connect an outside command to a RabbitMQ Docker container?

I'm doing self-study using Docker. I'm trying an experiment where my RabbitMQ runs inside Docker and my eclipse-mosquitto client runs outside Docker. I know there is an image for eclipse-mosquitto, but I just want to know whether I can connect to RabbitMQ from outside Docker.
This is the code in my docker-compose.yml:
version: "3.2"
networks:
  test_network:
services:
  rabbitmq:
    container_name: rabbitmq-test
    build:
      context: ./docker/rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
      - ./docker/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - test_network
I then installed mosquitto on my machine using Homebrew and tried this command:
mosquitto_pub -h 172.20.0.5 -t test.hello -m "hello world" -u guest -P guest -p 1883 -d
I always get Error: Operation timed out. I tried using the RabbitMQ container name as the host, but that doesn't work either. I just want to know whether this is possible.
I also tried this command:
mosquitto_pub -h 0.0.0.0 -t rh.mumbai -m "miyatx" -u guest -P guest -p 1883 -d
I get this error:
Client null sending CONNECT
Error: The connection was lost.
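A note on the timeout: container IPs such as 172.20.0.5 are not routable from the host on Docker Desktop, only published ports are, and the compose file above publishes 5672/15672 but not the MQTT port 1883; RabbitMQ also serves MQTT only once the rabbitmq_mqtt plugin is enabled. A minimal sketch of the missing pieces (the Dockerfile change for the custom ./docker/rabbitmq build is an assumption):
# in ./docker/rabbitmq/Dockerfile (assumed location), enable the MQTT plugin:
#   RUN rabbitmq-plugins enable --offline rabbitmq_mqtt
# in docker-compose.yml, publish the MQTT port alongside the others:
    ports:
      - 5672:5672
      - 15672:15672
      - 1883:1883   # MQTT listener
# then connect from the host via the published port, not the container IP:
mosquitto_pub -h localhost -p 1883 -t test.hello -m "hello world" -u guest -P guest -d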

Related

Cannot connect to docker container (redis) in host mode

This is probably related to WSL in general, but Redis is my use case.
This works fine and I can connect like:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get:
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error as when the container isn't running, so I'm not sure whether it's a Docker issue or a WSL one.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more the same as the above
According to the provided docker-compose.yml file, the container ports are not published, so they are unreachable from the outside (your Windows/WSL host); see the official Compose file reference for details on ports.
As an example, for the redis-1 service you should add the following to the definition:
...
  redis-1:
    ports:
      - 7001:7001
    ...
...
The docker exec ... command works because it runs inside the container, where the port is reachable.
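As a quick sanity check from the Windows side, PowerShell's built-in Test-NetConnection can tell you whether anything is listening on the mapped port at all (a diagnostic sketch; adjust host and port to your setup):
# succeeds (TcpTestSucceeded : True) only if the port is published and the container is up
Test-NetConnection -ComputerName localhost -Port 7001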

My Docker cannot connect to the internet - not even with a proper DNS

I am running a Docker container on CentOS Linux release 8.2.2004. The CentOS server itself has a stable internet connection. Ultimately I am trying to start a container with the following docker-compose.yaml:
version: '3'
services:
  postgres:
    restart: always
    image: postgres:12.1
    environment:
      POSTGRES_PASSWORD: myusername
      POSTGRES_USER: mypass
    networks:
      - myname
    volumes:
      - /home/user/data/myname:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    restart: always
    build: .
    environment:
      JPDA_ADDRESS: 8001
      JPDA_TRANSPORT: dt_socket
    networks:
      - myname
    depends_on:
      - postgres
    ports:
      - 80:8080
      - 8001:8001
    volumes:
      - /home/user/data/images:/data/images
networks:
  myname:
    driver: bridge
But upon docker-compose build, the Maven repositories cannot be reached (due to a missing internet connection from within the container). Adding dns rules to the YAML doesn't change anything, and neither does setting network_mode: "host".
When I try to execute
docker run --dns 8.8.8.8 busybox nslookup google.com
;; connection timed out; no servers could be reached
Upon trying a normal ping, the connection fails too
docker run -it busybox ping -c 1 8.8.8.8
1 packets transmitted, 0 packets received, 100% packet loss
However
docker run --rm -it busybox ping 172.17.0.1
seems to work just fine.
docker run --net=host -it busybox ping -c 1 8.8.8.8
works too.
How can I get Docker to connect to the internet?
This looks like a known issue with busybox; check this thread: nslookup can not get service ip on latest busybox.
In short, you must use a busybox version before 1.28.4.
I just ran the following command on CentOS 7 with Docker 19 and it worked fine:
# docker run --dns 8.8.8.8 busybox:1.28.0 nslookup google.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: google.com
Address 1: 2a00:1450:4016:807::200e muc11s04-in-x0e.1e100.net
Address 2: 216.58.207.174 muc11s04-in-f14.1e100.net
I had to rewrite the build section in the docker-compose.yaml as follows in order to fix the problem:
build:
  context: .
  network: host
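For reference, the same thing can be done for a one-off build outside Compose with docker build's --network flag:
# use the host's network stack during the image build, e.g. so Maven can resolve repositories
docker build --network=host .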

Is it possible to curl across a Docker network between two docker-compose.yaml files?

I have two applications running on different networks, each with its own docker-compose.yaml. I am trying to make a request from app A to app B, but it doesn't work.
docker exec -it app_a_running curl http://localhost:8012/user/1
I got this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8011:8011
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-a
    command: sleep 72000
    networks:
      - app-a-network
networks:
  app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
  app:
    build: go/
    restart: always
    ports:
      - 8012:8012
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-b
    command: sleep 72000
    networks:
      - app-b-network
networks:
  app-b-network:
Questions:
Is it possible to do this?
If it is, please suggest how :)
You can use curl against Docker containers. The reason your curl command didn't work is probably that you did not publish your container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This forwards port 8080 of your machine to port 8080 of the container.
If you have a shell inside a container, you can use the service name or the container name to curl another container on your Docker network, provided the target is on the same network.
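For the cross-compose setup in the question, publishing ports is one option; another is to attach both apps to a shared, pre-created network. A sketch, assuming a network named shared-network (the name is arbitrary):
# create the network once, outside both compose files
docker network create shared-network
Then declare it as external in both docker-compose-app-a.yaml and docker-compose-app-b.yaml and add it to each service's networks: list:
networks:
  shared-network:
    external: true
After docker-compose up, app A can reach app B by container name and internal port:
docker exec -it app-a curl http://app-b:8012/user/1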

Redis connection refused between Vagrant and Docker

I have a docker-compose file like this:
version: '3.5'
services:
  RedisServerA:
    container_name: RedisServerA
    image: redis:3.2.11
    command: "redis-server --port 26379"
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:26379
    expose:
      - 26379
  RedisServerB:
    container_name: RedisServerB
    image: redis:3.2.11
    command: "redis-server --port 6379"
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
    expose:
      - 6379
Now I vagrant ssh into the machine and run:
ping RedisServerA
ping RedisServerB
They both work.
Now I try to connect to the redis server:
redis-cli -h RedisServerB
Works fine
Then I try to connect to the other
redis-cli -h RedisServerA -p 26739
It says:
Could not connect to Redis at RedisServerA:26739: Connection refused
Could not connect to Redis at RedisServerA:26739: Connection refused
Twice.
What am I missing here?
Typically in this setup you'd let each container run on its "natural" port. For connections from outside Docker you need the ports: mapping, and you'd access a container via its published port on the host's IP address. For connections between Docker containers (assuming they're on the same network; with bare docker run you'd have created that network manually), you use the container name and the container's internal port number.
We can clean up the docker-compose.yml file by removing some unnecessary lines (container_name: and expose: have no practical effect here) and letting the image run its default command: on its default port, remapping only with ports:. We get:
version: '3.5'
services:
  RedisServerA:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:6379
  RedisServerB:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
Between containers, you'd use the default port
redis-cli -h RedisServerA
redis-cli -h RedisServerB
From outside Docker you'd use the server's host name and the published ports:
redis-cli -h server.example.com -p 26379
redis-cli -h server.example.com
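If you're unsure what actually got published, docker-compose can report the host-side mapping for a service's container port:
# prints the host address/port mapped to the container's 6379, e.g. 0.0.0.0:26379
docker-compose port RedisServerA 6379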

Does anyone have an idea how to start the Consul web UI with the Docker image on Windows?

I have downloaded the Docker Consul image and it is running, but I am not able to access its web UI. Does anyone have an idea how to get started? I am running this on my local machine in developer mode.
I am running:
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
See documentation:
The Web UI can be enabled by adding the -ui-dir flag:
$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui
We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of node1. Setting the container hostname is the intended way to name the Consul Agent node.
You can try to activate the UI by setting the -ui-dir flag.
First, set experimental to true in Docker Desktop if you're using Windows containers.
The command below will work because it publishes port 8500:
docker run -d -e CONSUL_BIND_INTERFACE=eth0 -p 8500:8500 consul
You will be able to access Consul at http://localhost:8500/
You could also use a compose file like this:
version: "3.7"
services:
  consul:
    image: consul
    ports:
      - "8500:8500"
    environment:
      - CONSUL_BIND_INTERFACE=eth0
    networks:
      nat:
        aliases:
          - consul
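Once the container is up, a quick way to verify the HTTP interface from the host is Consul's standard status endpoint:
# returns the address of the current leader if the agent is healthy
curl http://localhost:8500/v1/status/leader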
