How to configure DNS entries for Docker Compose

I am setting up a Spring application to run using Compose. The application needs to establish a connection either to ActiveMQ running locally for developers or to existing instances for staging/production.
I set up the following, which is working great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"

legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ fires up locally and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances are running on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case the application configuration entry broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml which points an amq DNS entry to the associated staging or production queues.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!

Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
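If the broker's IP address differs between staging and production, the same idea can be parameterized with Compose variable substitution. A minimal sketch, assuming a variable named AMQ_HOST_IP that you supply per environment (the name is just an illustration):
# Hypothetical sketch: AMQ_HOST_IP is an assumed variable name, supplied
# via the shell environment or an .env file for each environment.
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:${AMQ_HOST_IP}"
You would then run something like AMQ_HOST_IP=1.2.3.4 docker-compose up, or keep the value in a per-environment .env file.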
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616

legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"

Related

Docker Compose network_mode - adding argument causes local testing to fail

I'm trying to build an application that is able to use local integration testing via Docker Compose with Google Cloud emulator containers, while also being able to run that same Docker Compose configuration on a Docker-based CI/CD tool (Google Cloud Build).
The kind of docker-compose.yml configuration I'm using is:
version: '3.7'
services:
  main-application:
    build:
      context: .
      target: dev
    image: main-app-dev
    container_name: main-app-dev
    network_mode: $DOCKER_NETWORK
    environment:
      - MY_ENV=my_env
    command: ["sh", "-c", "PYTHONPATH=./ python app/main.py"]
    volumes:
      - ~/.config:/home/appuser/.config
      - ./app:/home/appuser/app
      - ./tests:/home/appuser/tests
    depends_on:
      - firestore
  firestore:
    image: google/cloud-sdk
    container_name: firestore
    network_mode: $DOCKER_NETWORK
    environment:
      - GCP_PROJECT_ID=dummy-project
    command: ["sh", "-c", "gcloud beta emulators firestore start --project=$$GCP_PROJECT_ID --host-port=0.0.0.0:9000"]
    ports:
      - "9000:9000"
I added the network_mode arguments so the configuration can use the "cloudbuild" network available on the CI/CD pipeline, which is currently working perfectly. However, this network is not available to local Docker, which is why I've tried to use environment variables to switch between the local and Cloud Build CI/CD environments.
Before I added these network_mode params/args for the CI/CD, the local testing was working just fine. However since I added them, my application either can't run, or can't connect to its accompanying services, like the firestore one specified in the YAML above.
I have tried the following valid Docker network modes with no success:
"bridge" - runs the service, but doesn't allow connection between containers
"host" - doesn't allow the service to run because of not being compatible with assigning ports
"none" - doesn't allow the service to connect externally
"service" - doesn't allow the service to run due to invalid mode/service
Anyone able to provide advice on what I'm missing here?
I would assume one of these network modes would be what Docker Compose would be using if the network_mode is not assigned, so I'm not sure why all of them are failing.
I want to avoid having a separate cloud build file for the remote and local configurations, and would also like to avoid the hassle of setting up my own Docker network locally. Ideally, if there were some way of applying network_mode only remotely, that would work best in my case, I think.
TL;DR:
Specifying network_mode does not give me the same result as not specifying it when running docker-compose up locally.
Due to running the configuration in the cloud I can't avoid specifying it.
Found a solution thanks to this thread and the comment by David Maze.
As far as I understand it, when no network_mode is specified, Docker Compose creates its own private default network for the project, named (by default) after the folder in which the docker-compose.yml file lives.
Specifying a network mode like the default "bridge" network, instead of this custom network created by Docker Compose, means container discovery between services isn't possible; main-application couldn't find the firestore:9000 container.
Basically all I had to do was set the network_mode variable to myapplication_default (given a folder called "MyApplication" containing the docker-compose.yml), forcing all the containers onto the same custom network that docker-compose up sets up.
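A minimal sketch of how that can be combined with the CI/CD requirement, using Compose's ${VAR:-default} fallback syntax; it assumes the project folder is called MyApplication (so the default network is myapplication_default) and that DOCKER_NETWORK is only set to "cloudbuild" in the pipeline:
# Hypothetical sketch: DOCKER_NETWORK is set to "cloudbuild" in CI and left
# unset locally, so network_mode falls back to the project's default network.
version: '3.7'
services:
  main-application:
    # ...rest of the service definition as above...
    network_mode: ${DOCKER_NETWORK:-myapplication_default}
  firestore:
    # ...rest of the service definition as above...
    network_mode: ${DOCKER_NETWORK:-myapplication_default}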

How to configure RabbitMQ for message persistence in Docker swarm?

How can I configure RabbitMQ to retain messages on node restart in docker swarm?
I've marked the queues as durable and I'm setting the message's delivery mode to 2. I'm mounting /var/lib/rabbitmq/mnesia to a persistent volume. I've docker exec'd to verify that rabbitmq is indeed creating files in said folder, and all seems well. Everything works in my local machine using docker-compose.
However, when the container crashes, docker swarm creates a new one, and this one seems to initialize a new Mnesia database instead of using the old one. The database's name seems to be related to the container's id. It's just a single node, I'm not configuring any clustering.
I haven't changed anything in rabbitmq.conf, except for the cluster_name, since it seemed to be related to the folder created, but that didn't solve it.
Relevant section of the docker swarm configuration:
rabbitmq:
  image: rabbitmq:3.9.11-management-alpine
  networks:
    - default
  environment:
    - RABBITMQ_DEFAULT_PASS=password
    - RABBITMQ_ERLANG_COOKIE=cookie
    - RABBITMQ_NODENAME=rabbit
  volumes:
    - rabbitmq:/var/lib/rabbitmq/mnesia
    - rabbitmq-conf:/etc/rabbitmq
  deploy:
    placement:
      constraints:
        - node.hostname==foomachine
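A hedged guess at the cause: RabbitMQ keeps its data in a directory named after the node (rabbit@<hostname>), and each new swarm task gets a fresh container hostname, so a recreated container writes to a new directory inside the same volume. A minimal sketch of pinning the hostname so the node name stays stable (the hostname value "rabbitmq" is an assumption; the rest mirrors the question's configuration):
rabbitmq:
  image: rabbitmq:3.9.11-management-alpine
  # Assumed fix: a fixed hostname keeps the node name (rabbit@rabbitmq), and
  # therefore the Mnesia directory, identical across container restarts.
  hostname: rabbitmq
  environment:
    - RABBITMQ_DEFAULT_PASS=password
    - RABBITMQ_ERLANG_COOKIE=cookie
    - RABBITMQ_NODENAME=rabbit
  volumes:
    - rabbitmq:/var/lib/rabbitmq/mnesia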

Docker: Multiple Compositions

I've seen many examples of Docker Compose and that makes perfect sense to me, but they all bundle their frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. Not going to get into implementation details but because containers in the same composition exist on the same network I can refer to the DB in the source code using the alias in the file. That is one environment with 2 containers.
On my frontend and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate with each other?
I feel like I'm probably missing the mark on how the Docker architecture works at large but to my knowledge there is a default network and Docker will execute the composition and run it on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There's a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like this (first the backend stack, then the frontend stack):
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
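(The external network name backend_default assumes the backend stack's Compose project is named backend, for example because its docker-compose.yml lives in a directory called backend or it is started with docker-compose -p backend up; docker network ls shows the actual name. Bring the backend stack up first so its network exists before the frontend stack starts.)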

Share connection details with container and host

My docker-compose.yml contains application container and database container
app:
  links:
    - db
db:
  image: postgres
  ports:
    - 5003:5432
Let's say that during development I want to start only the db container and connect to it using localhost:5003.
In production I want to start both containers, one with the application and one with the database. But then I need to use db:5432 in the application container to connect to the db.
Is it possible to modify docker-compose configuration file to be able to use same database uri in both cases?
I would suggest creating multiple docker-compose files for the different environments.
You can create a base docker-compose file and add overrides for the different environments as described here: https://docs.docker.com/compose/extends/#different-environments
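A minimal sketch of that approach, assuming the application reads its connection string from an environment variable (DATABASE_URL and the image/database names below are just placeholders):
# docker-compose.yml (base file; local development runs only: docker-compose up db)
db:
  image: postgres
  ports:
    - 5003:5432

# docker-compose.prod.yml (production override; adds the application container)
app:
  image: my/app                              # assumed image name
  links:
    - db
  environment:
    - DATABASE_URL=postgres://db:5432/mydb   # assumed variable and database name
Production then runs docker-compose -f docker-compose.yml -f docker-compose.prod.yml up, while development runs docker-compose up db and the locally running application connects to localhost:5003.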

How to setup hostnames using docker-compose?

I have set up a few Docker containers with docker-compose.
When I start them via docker-compose up I can access them via their exposed ports, e.g. localhost:9080 and localhost:9180.
I really would like to access them via hostnames: localhost:9180 should be accessible on my localhost via api.local and localhost:9080 via webservice.local.
How can I achieve that? Is that something that docker-compose can do or do I have to use a reverse proxy on my localhost?
Currently my docker-compose.yml looks like this:
api:
  build: .
  ports:
    - "9180:80"
    - "9543:443"
  external_links:
    - mysql_mysql_1:mysql
  links:
    - booking-api

webservice:
  ports:
    - "9080:80"
    - "9443:433"
  image: registry.foo.bar:5000/webservice:latest
  volumes:
    - ~/.docker-history:/.bash_history
    - ~/.docker-bashrc:/.bashrc
    - ./:/var/www/virtual/webservice/current
No, you can't do this with docker-compose alone.
The /etc/hosts file maps hostnames to IP addresses only, so it can resolve api.local to 127.0.0.1, but it cannot carry a port. An entry like
api.local 127.0.0.1:9180
won't work.
The only thing you can do is set up a reverse proxy (like nginx) on your host that listens for api.local and forwards the requests to localhost:9180.
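A hedged sketch of how this can still live inside docker-compose, using the jwilder/nginx-proxy image (which routes by each container's VIRTUAL_HOST environment variable); you still need /etc/hosts entries pointing api.local and webservice.local at 127.0.0.1:
# Sketch only: jwilder/nginx-proxy watches the Docker socket and proxies
# requests to containers based on their VIRTUAL_HOST environment variable.
proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

api:
  build: .
  environment:
    - VIRTUAL_HOST=api.local
    - VIRTUAL_PORT=80

webservice:
  image: registry.foo.bar:5000/webservice:latest
  environment:
    - VIRTUAL_HOST=webservice.local
With that in place, http://api.local and http://webservice.local reach the respective containers on port 80.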
You should check out the dory project. By adding a VIRTUAL_HOST environment variable, the container becomes accessible by domain name. For example, if you set VIRTUAL_HOST=web.docker, you can reach the container at http://web.docker.
The project home page has more info. It's a young project but under active development. Support for macOS is also planned now that Docker for Mac and dlite have emerged/matured.
