I'm working through an example in the Clojure Programming Cookbook that involves running RabbitMQ locally in Docker. I start it up using
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
and I see log output that includes the line
started TCP listener on [::]:5672.
When I try to connect to it using
(langohr.core/connect {:host "172.17.0.2"})
I get the error "Operation timed out (Connection timed out)". Not sure if it's relevant, but I'm on macOS 11.4.
Also,
docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-rabbit
returns
172.17.0.2.
Any ideas?
Try mapping the ports:
docker run -it --rm --name my-rabbitmq \
-p 5672:5672 -p 15672:15672 rabbitmq:3-management
Then you can reach the management UI at http://localhost:15672 and connect over AMQP at localhost:5672.
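If you'd rather not retype the port flags every time, the same setup can be written as a Compose file (a sketch; the service name is arbitrary):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: my-rabbit
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
```

After `docker compose up -d`, the connect call from the question should work with `(langohr.core/connect {:host "localhost"})`.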
I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elasticsearch works when I use http://localhost:9200 or http://internal-ip:9200,
but when I use http://myes:9200 the container name just can't be resolved.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
it fails to start because it cannot resolve myes:9200.
I also tried replacing "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question is really: how do I make container name resolution (DNS) work?
How are you resolving 'myes'?
Is it mapped in the hosts file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not getting resolved.
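To illustrate what "mapped in the hosts file" means: Docker's embedded DNS (or `--add-host`) effectively gives each container entries like the ones below. A toy lookup over hosts-file-style content (the addresses here are made up for illustration):

```python
# Sample /etc/hosts-style content, as Docker's embedded DNS or
# --add-host would effectively provide it (addresses are made up).
sample_hosts = """\
127.0.0.1   localhost
172.18.0.2  myes
"""

def lookup(name, hosts):
    """Return the first address whose aliases include `name`, else None."""
    for line in hosts.splitlines():
        fields = line.split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

print(lookup("myes", sample_hosts))  # -> 172.18.0.2
```

If `lookup` returns nothing for a container name, that is exactly the "can't resolve" symptom described above.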
It seems this problem doesn't arise from DNS. Both the Elasticsearch and Kibana containers should use the fixed name "elasticsearch", so the commands will be:
$docker network create esnetwork
$sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
If the terminals running the installations end automatically, just close them and restart the containers from Docker Desktop. Then everything should go smoothly.
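The same two-container setup can also be sketched as a Compose file. Compose puts both services on one default network where the service names resolve, which is exactly what the manual esnetwork steps achieve (image versions taken from the question):

```yaml
services:
  elasticsearch:
    image: elasticsearch:7.9.3
    environment:
      - discovery.type=single-node
    ports:
      - "127.0.0.1:9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.3
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

Here Kibana reaches Elasticsearch by the service name `elasticsearch`, with no manual `docker network create` needed.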
My environment is Fedora 36, docker 20.10.18
I want to install Elasticsearch and Kibana as Docker containers on my host machine:
$sudo docker run -dit --name elasticsearch -h elasticsearch --net host -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
WARNING: Published ports are discarded when using host network mode
$sudo docker run -dit --name kibana -h kibana --net host -p 5601:5601 kibana:6.6.1
WARNING: Published ports are discarded when using host network mode
and I get the following errors on kibana:
log [14:32:26.655] [warning][admin][elasticsearch] Unable to revive connection: http://elasticsearch:9200/
log [14:32:26.656] [warning][admin][elasticsearch] No living connections
But if I don't use the host network:
sudo docker network create mynetwork
sudo docker run -dit --name elasticsearch -h elasticsearch --net mynetwork -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
sudo docker run -dit --name kibana -h kibana --net mynetwork -p 5601:5601 kibana:6.6.1
everything works fine. What is the problem?
--net host disables most of the Docker networking stack. Basic features like communicating between containers using their container name as a host name don’t work. Except in very unusual circumstances it’s never necessary.
Your second setup that uses standard Docker networking and publishes selected ports through the host is a best practice.
I'm following the Docker documentation on https://docs.docker.com/samples/library/rabbitmq but when I get to port forwarding, I get the following error: C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint some-rabbit11 (c8065d91c990ad498501160011a7f264522ddb5f5a1188db934c47853f833fa2): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8080:tcp:172.17.0.2:15672: input/output error.
The command I'm trying to run from the terminal is docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 rabbitmq:3.6-management
From what I can find online, the command appears to be correct so I'm not sure what is the root cause.
Find out if any containers use rabbitmq:
docker ps -a
Remove any containers using rabbitmq:
docker rm <CONTAINER ID>
Restart docker with the system tray app
Restart the rabbitmq container:
docker run -d -p 15672:15672 -p 5672:5672 --name some-rabbit rabbitmq:3.6-management
The management console is exposed on port 15672 and rabbitmq on port 5672
Ensure the new instance is running:
docker ps
Use the Firefox web browser (this does not work in Google Chrome) and browse to 127.0.0.1:5672.
The garbled response shows that the RabbitMQ AMQP listener is working.
Go to 127.0.0.1:15672 to view the management plugin in action.
The credentials are the defaults (guest/guest).
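For reference, the login the management UI (or `curl -u guest:guest`) performs is plain HTTP Basic auth; a small Python sketch of the header it sends, using the image's default guest/guest account:

```python
import base64

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header value that a
    `curl -u user:password` request sends under the hood."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# guest/guest is the default account shipped with the rabbitmq image.
print(basic_auth_header("guest", "guest"))  # -> Basic Z3Vlc3Q6Z3Vlc3Q=
```

This is only the wire format of the login; changing the actual credentials is done with `rabbitmqctl add_user` as shown later in this thread.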
Goal: Connect to Redis via an app from a remote server.
Problem: I don't know the exact syntax of a Redis container creation.
You have to expose ports from docker to the world.
docker run --name some-redis -d -p 6379:6379 redis
But you need to be careful if you're doing this on a public IP,
so it's better to attach a config file with security enabled.
docker run --name some-redis -d -p 6379:6379 \
-v /path/redis.conf:/usr/local/etc/redis/redis.conf \
redis redis-server /usr/local/etc/redis/redis.conf
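A minimal sketch of such a config file, assuming you only need password auth (these are standard redis.conf directives; the password is a placeholder):

```conf
# /path/redis.conf
bind 0.0.0.0                      # listen on all interfaces inside the container
protected-mode yes
requirepass use-a-strong-password-here
```

Clients then authenticate with `AUTH use-a-strong-password-here` before issuing commands.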
Bind Redis container on host port & connect from the remote server using "REDIS_HOST:REDIS_HOST_PORT".
docker run -d --name redis -v <data-dir>:/data -p 6379:6379 redis
You should be able to connect to redis now from remote app server on REDIS_HOST and port 6379.
PS - The DNS/IP address of the Redis host should not change.
Ref - https://docs.docker.com/config/containers/container-networking/#published-ports
I created rabbitmq container by running the below command
docker run -d --hostname My-rabbit --name test-rabbit -p 15672:15672 rabbitmq:3-management
I created a user called userrabbit and gave it permissions as below:
rabbitmqctl add_user userrabbit password
rabbitmqctl set_user_tags userrabbit administrator
rabbitmqctl set_permissions -p / userrabbit ".*" ".*" ".*"
The IP of this container (test-rabbit) is 172.17.0.3.
I created one more container (172.17.0.4) in which my application runs; it needs the RabbitMQ URL, which I've provided as below:
transport_url = rabbit://userrabbit:password@172.17.0.3:15672/
In the logs of container(172.17.0.4), it's showing as
AMQP server 172.17.0.3:15672 closed the connection. Check login credentials: Socket closed
But I'm able to reach RabbitMQ from the container (172.17.0.4) with the same credentials, as shown below:
curl -i -u userrabbit:password http://172.17.0.3:15672/api/whoami
HTTP/1.1 200 OK
vary: Accept-Encoding, origin
Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
Date: Tue, 14 Feb 2017 17:06:39 GMT
Content-Type: application/json
Content-Length: 45
Cache-Control: no-cache
{"name":"userrabbit","tags":"administrator"}
2 things...
first thing:
it's port 5672 for the transport_url
the 15672 port you listed is the web admin console
and second thing:
you need to network your containers together via docker networking.
easiest way is with the --link option, provided to the second container at run time.
docker run --link test-rabbit:test-rabbit (... other options here)
by adding the --link test-rabbit:test-rabbit option, your application will be able to see test-rabbit as a valid network name with ip address.
updating your transport url
with these two fixes, your transport_url then becomes this
transport_url = rabbit://userrabbit:password@test-rabbit:5672/
other networking options
using --link is the easiest way to start, but isn't very scalable.
docker-compose makes it really easy to add links between containers
and you can also create custom docker networks via command-line tools and docker-compose, to connect containers together. that's more work, but better long-term.
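As a quick sanity check, the corrected URL can be pulled apart with Python's standard `urlsplit` (the credentials and host name are the ones from this thread):

```python
from urllib.parse import urlsplit

# The corrected transport URL: container name as host, AMQP port 5672.
url = "rabbit://userrabbit:password@test-rabbit:5672/"
parts = urlsplit(url)

print(parts.hostname)  # test-rabbit  (resolvable thanks to --link)
print(parts.port)      # 5672         (AMQP, not the 15672 web console)
print(parts.username)  # userrabbit
```

Printing the pieces like this makes the two fixes visible at a glance: the host is the linked container name and the port is the broker port, not the admin console.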
You need to specify a hostname for each Docker container with the --hostname option and add /etc/hosts entries for all the other containers; you can do this with the --add-host option or by manually editing the /etc/hosts file.
First create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1
Then run the containers:
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host rab2:172.18.0.12 --name rab1con -p 15672:15672 rabbitmq:3-management
and the second one
docker run -d --net mynet1 --ip 172.18.0.12 --hostname rab2 --add-host rab1:172.18.0.11 --name rab2con -p 15673:15672 rabbitmq:3-management
Create a Docker network so that the RabbitMQ client can connect to the RabbitMQ server, both running as Docker containers. For example:
docker network create sdk-net
Then run the RabbitMQ container on that network and give it a name:
docker run -d --rm --name demo-rabbit --net sdk-net -p 5672:5672 -p 15672:15672 rabbitmq:3.6.15-management-alpine
Run your client similarly (note the network name sdk-net in the run command):
docker run --rm -p 8090:8090 --net sdk-net pythontest
From your client, the Docker container's name is reachable, so the AMQP connection string becomes:
amqp_url ='amqp://demo-rabbit:5672/'
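The same wiring can be sketched in Compose form; Compose's default network gives the same name resolution as the manual sdk-net, and `pythontest` is the client image from this answer:

```yaml
services:
  demo-rabbit:
    image: rabbitmq:3.6.15-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
  pythontest:
    image: pythontest
    ports:
      - "8090:8090"
    depends_on:
      - demo-rabbit
```

The client still uses `amqp://demo-rabbit:5672/`, since the service name doubles as the hostname.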