Docker with rabbitmq networking

I have trouble understanding how docker port mapping works. I have a docker-compose file with a couple of containers, one of which is a rabbitmq service.
The docker-compose file is:
version: "3.9"
volumes:
test:
external: true
services:
rabbitmq3:
container_name: "rabbitmq"
image: rabbitmq:3.8-management-alpine
environment:
- RABBITMQ_DEFAULT_USER=myuser
- RABBITMQ_DEFAULT_PASS=mypassword
ports:
# AMQP protocol port
- '5671:5672'
# HTTP management UI
- '15671:15672'
So the container runs with docker compose up, no problem. But when I access the RabbitMQ management plugin using container_ip:15671 or container_ip:15672, I don't get anything, whereas when I access it using 127.0.0.1:15672, I can reach the management plugin.
It's probably a stupid question, but how can I access the container service using localhost?

The port mapping semantics are <HOST_PORT>:<CONTAINER_PORT>. So -p 15671:15672 means that container port 15672 is mapped to port 15671 on your machine.
Based on your docker-compose file, ports 5671 and 15671 are published on your machine.
The management portal can be accessed at http://localhost:15671, and the RabbitMQ AMQP service at localhost:5671.
The IP 127.0.0.1 is localhost.
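To double-check which ports are actually published, you can ask Docker from the host; the management HTTP API also answers on the published port (container name rabbitmq and credentials taken from the compose file above, output shown roughly):
docker port rabbitmq
# 5672/tcp -> 0.0.0.0:5671
# 15672/tcp -> 0.0.0.0:15671
curl -u myuser:mypassword http://localhost:15671/api/overview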

Related

Why is that I am able to access container outside the bridge network?

I started mysqldb from a docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought docker containers started on the bridge network wouldn't be available outside that network. How is the mysql CLI able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
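For comparison, the ports: entry in compose does the same thing as the -p flag on docker run; for example, this command (image tag taken from the compose file above) publishes container port 3306 on host port 3306:
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root mysql:8.0.27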
The critical section in your config is shown below: you have added a ports key to your service. This is Compose's way of publishing ports. The left part is the host port it is published to; the right part is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start a Compose stack, a default network is created that joins all containers in the stack. That's why these containers can find each other, with the service name and/or container name as hostname.
You don't need to publish the port(s) for the containers to be able to communicate; I guess that's why you did it. You can, and probably should, remove any port mapping from internal services where possible. This adds extra security to your setup, because then it behaves the way you describe: only containers on the same network can find each other.
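For example, a minimal sketch of the same stack with the database port unpublished (the Spring service still reaches MySQL by service name over the default Compose network; unrelated keys omitted):
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    # no ports: section -- reachable only from containers on the same network
    environment:
      - MYSQL_ROOT_PASSWORD=root
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083" # still published, since the host needs to reach it
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev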

resolve host machine hostname inside docker container

I have one application running on http://home.local:8180 in container A, and another container B running on http://data.local:9010. Container B uses container A to hit the API. If I specify container A's hostname as http://host.docker.internal:8180 in container B, it works. What would I have to do to use the hostname as-is (home.local:8180)?
Following is the docker-compose file:
home_app:
  hostname: "home.local"
  image: "home-app"
  ports:
    - "8180:8080"
  environment:
data_app:
  hostname: "data.local"
  image: "data-app"
  links:
    - "home_app"
  ports:
    - "9010:9010"
Just use "home.local:8080". 8180 is just on the host machine and forwards to 8080 on the container, whereas based on your docker-compose, 8080 is the port of your application on home_app container, so within the docker-compose network, other containers should be able to access it via hostname (home.local) and the actual ports (8080).
You need to configure your application to use the Compose service name home_app as a host name, and the port number that the process inside the container is using. Neither hostname: nor ports: has any effect on connections between containers. You don't need to (and can't) specify a custom DNS suffix. See Networking in Compose in the Docker documentation for additional details.
So I might specify:
version: '3.8'
services:
  home_app:
    image: "home-app"
    ports:
      - "8180:8080" # optional, only for access from outside Docker
  data_app:
    image: "data-app"
    ports:
      - "9010:9010"
    environment:
      HOME_APP_URL: 'http://home_app:8080'
You don't need hostname:, which only affects what a container thinks its own hostname is and has no effect on anything outside the container; and you don't need links:, which is an obsolete option from first-generation Docker networking.
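To verify the container-to-container connectivity, you can curl the service from inside data_app (assuming curl is present in the data-app image):
docker compose exec data_app curl http://home_app:8080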

Connect java application in docker container to rabbitmq

I have a Java application running in a Docker container and rabbitmq in another container.
How can I connect the containers to use rabbitmq in my Java application?
You have to set up a network and attach the running containers to the network.
Then you have to set your app's connection URL to the rabbitmq container's name on that Docker network.
The easiest way is to create a docker-compose file, because it will create the network and attach the containers automatically.
Either create a network and connect the running containers to it with the docker CLI (a sketch is below), or use a docker-compose file.
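The manual route looks roughly like this (network and container names are placeholders for your own):
# create a user-defined bridge network
docker network create app-network
# attach the already-running containers to it
docker network connect app-network rabbitmq
docker network connect app-network myapp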
Example of docker-compose.yml
version: '3.7'
services:
  yourapp:
    # use "build: ./myapp_folder_below_this_where_is_the_Dockerfile"
    # instead of image: to build the container from scratch
    image: image_from_dockerhub_or_local
    hostname: myapp
    ports:
      - 8080:8080
  rabbitmq:
    image: rabbitmq:3.8.3-management-alpine
    hostname: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - 5672:5672
      - 15672:15672
You can run it with the docker-compose up command.
Then in your connection URL use host rabbitmq and port 5672.
Note that you don't have to publish the ports if you don't want to reach rabbitmq from your host machine.
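With the compose file above, the connection settings from inside the yourapp container would look like this as an AMQP URI (a sketch; the exact form depends on your client library):
amqp://user:pass@rabbitmq:5672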

Connecting to a dockerized REST JaxRS end point from within another container locally

I am attempting to connect to a REST endpoint of a JaxRS Liferay portlet.
If I connect through Postman using http://localhost:8078/engine-rest/process-definition, it works (200 OK).
But when I attempt to connect to the same endpoint from within another docker container that is part of the same docker network, using localhost, I receive the error:
java.net.ConnectException: Connection refused (Connection refused)
I have also tried http://wasp-engine:8078 (wasp-engine is the container's docker name) and still receive the same error.
Here are the two containers in my compose file:
wasp-engine:
  image: in/digicor-engine:test
  container_name: wasp-engine
  ports:
    - "8078:8080"
  depends_on:
    mysql:
      condition: service_healthy
wasp:
  image: in/wasp:local2
  container_name: Wasp
  volumes:
    - liferay-document-library:/opt/liferay/data
  environment:
    - camundaEndPoint=http://wasp-engine:8078
  ports:
    - "8079:8080"
  depends_on:
    mysql:
      condition: service_healthy
They are both connecting to mysql fine, which is part of the same docker network and is referenced via:
jdbc.default.url=jdbc:mysql://mysql/liferay_test
tl;dr
Use http://wasp-engine:8080
The why
In your docker-compose file, the
ports:
  - "8078:8080"
field on wasp-engine publishes port 8080 of the docker container to your host computer on port 8078. This is what allows Postman to connect to the container over localhost. However, once inside a docker container, localhost refers to the container itself; the port forwarding no longer applies.
With docker-compose you can use the name of a container to target that specific container. You mentioned you tried this with the URI http://wasp-engine:8078. When you access the container this way, its original port is used, not the port forwarded to the host machine. This means the request should target port 8080.
Putting it all together, the final URI should be http://wasp-engine:8080.
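A quick way to confirm this from inside the wasp container (assuming a shell and curl are available in the image; container name Wasp taken from the compose file above):
docker exec -it Wasp curl http://wasp-engine:8080/engine-rest/process-definition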

docker-compose microservice inter-container api communication with nginx proxy

I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating within the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port mappings to each API call throughout the various code bases just so my services can talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to successfully call each other in Docker without the need of a proxy port map?
version: '3'
services:
  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy
  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test
  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test
# networks
networks:
  test_network:
http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
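For example, site1 could receive its peer's URL as configuration (SITE2_URL is an illustrative variable name, not something the stack above already reads):
site1.test:
  environment:
    - "VIRTUAL_HOST=site1.test"
    - "SITE2_URL=http://site2.test:9000"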
