I am trying to bring up my container using Docker Compose. This container depends on two other containers named 'rabbitmq' and 'mysqldb'.
My scenario is that these dependent containers were already created with those names, because of some additional configuration I needed for MySQL.
How can I link those two containers to this container using docker-compose, such that bringing up this myservice container also starts my named containers?
Would appreciate any help or direction.
myservice:
  image: myaccount/myservice
  ports:
    - "8081:8081"
  restart: always
  depends_on:
    - rabbitmq
    - mysqldb
  environment:
    SPRING_DATASOURCE_URL: 'jdbc:mysql://mysqldb:3306/myservice'
    SPRING_PROFILES_ACTIVE: 'mysql'
    SPRING_RABBITMQ_HOST: 'rabbitmq'
  healthcheck:
    test: ["CMD", "curl", "-f", "http://192.168.99.100:8081/actuator/health"]
    interval: 1m30s
    timeout: 10s
    retries: 3
UPDATE:
I was able to resolve this using external_links and the default bridge.
version: '3'
services:
  myservice:
    image: myaccount/myservice
    ports:
      - "8081:8081"
    restart: always
    external_links:
      - rabbitmq
      - mysqldb
    network_mode: bridge
    environment:
      SPRING_DATASOURCE_URL: 'jdbc:mysql://mysqldb:3306/myservice'
      SPRING_PROFILES_ACTIVE: 'mysql'
      SPRING_RABBITMQ_HOST: 'rabbitmq'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.99.100:8081/actuator/health"]
      interval: 1m30s
      timeout: 10s
      retries: 3
Any other alternative is appreciated. The problem with this approach is that the dependent containers already need to be running; in case they are down, I want Compose to bring those same containers up. Any ideas?
docker-compose up doesn't run already created containers. It creates the containers from an image and then runs them. So I'll presume that you already have images (or Dockerfiles) for these containers. That said, you can use --no-recreate with docker-compose up to re-use already built containers, which could be a workaround if the regular usage gives you problems.
Under services in your docker-compose.yml you simply need to define a service for each of your other two images as well. Example below.
services:
  service1:
    image: images/service1image
    depends_on:
      - service2
      - service3
  service2:
    image: images/service2image
  service3:
    build: docker/service3
You don't need to define a network, as terrywb suggested, if you define them this way: docker-compose automatically creates a bridge network for general use and connects all defined services to it. However, if you don't define them as services, you would likely need to define a network to connect them, and in that case you won't be able to start them automatically with docker-compose, which is the whole issue you're trying to solve. If you really don't want them as services, I can only suggest you create a "startup.sh" shell script to handle this, since you would then be doing something outside the scope of what docker-compose provides.
I believe that you need to define a network in each of your docker-compose files.
networks:
  mynet:
Next, add that network to each container definition
myservice:
  networks:
    - mynet
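Putting those fragments together, here is a minimal sketch. Note that, as the answers further down this page explain, a network declared separately in two compose files is not actually shared (Docker prefixes it with each project name), so the network is pre-created and marked external here:

# created once with: docker network create mynet
version: '3'
services:
  myservice:
    image: myaccount/myservice
    networks:
      - mynet
networks:
  mynet:
    external: true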
Related
This question already has answers at: Communication between multiple docker-compose projects.
I have a project in which RabbitMQ is launched:
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net

networks:
  rabbitmq_go_net:
    driver: bridge
Then I made another project that connects to this queue. Without Docker, I just used localhost as the host and port 5672.
I wanted to run this other project in Docker together with a database:
version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    networks:
      - rabbitmq_go_net
    ports:
      - '5432:5432'
    volumes:
      - ./db_data:/var/lib/postgresql/data
      - ./app/internal/config/dbconfig/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
    env_file:
      - ./app/internal/config/dbconfig/.env
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "devdb", "-U", "postgres" ]
      timeout: 45s
      interval: 10s
      retries: 10
  app:
    build: app
    networks:
      - rabbitmq_go_net
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  db_data:

networks:
  rabbitmq_go_net:
    driver: bridge
And now I can't connect to RabbitMQ. I tried making a different network, and also one with the same name, but every time I get the same error:
FATA[2022-07-31T13:24:09Z]ipstack/internal.(*app).startConsume() app.go:43 failed to connect to RabbitMQ due dial tcp 127.0.0.1:5672: connect: connection refused
Connect:
addr := fmt.Sprintf("amqp://%s:%s@%s:%s/", cfg.Username, cfg.Password, cfg.Host, cfg.Port)
Where host is the container name, rabbitmq.
Is it possible to do this, or do the programs' containers need to be connected in some other way? I will be glad for any help.
I think the issue is that your Docker networks aren't actually the same, despite using the same name in the two different compose files. Docker prefixes networks declared inside compose files with the project name to avoid collisions.
Everything you're looking for can be found in this SO response and the corresponding thread. It shows how to use the external flag on a network so that the second compose file doesn't create a new network, and it also explains how Docker uses the project name as a prefix for the network name, so you can predict what the generated network name will be.
Alternatively, you can create the network in advance with docker network create and use the external flag in both compose files, so you don't need to worry about Docker's naming conventions.
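As a sketch of that alternative: create the network once on the host, then reference it as external in both compose files so neither project creates (or prefixes) its own copy:

# run once, outside compose: docker network create rabbitmq_go_net
# then, in BOTH compose files, declare the network as external;
# the services keep their existing "networks: - rabbitmq_go_net" entries
networks:
  rabbitmq_go_net:
    external: true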
I have to deploy a web app with a Jetty server. This app needs a database, running on MariaDB. Here is the docker-compose file used to deploy the app:
version: '2.1'
services:
  jetty:
    build:
      context: .
      dockerfile: docker/jetty/Dockerfile
    container_name: app-jetty
    ports:
      - "8080:8080"
    depends_on:
      mariadb:
        condition: service_healthy
    networks:
      - app
    links:
      - "mariadb:mariadb"
  mariadb:
    image: mariadb:10.7
    container_name: app-mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: myPassword
      MARIADB_DATABASE: APPDB
    ports:
      - "3307:3306"
    healthcheck:
      test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping", "-u root", "-pmyPassword" ]
      interval: 10s
      timeout: 3m
      retries: 10
    volumes:
      - datavolume:/var/lib/mysql
    networks:
      - app

networks:
  app:
    driver: bridge

volumes:
  datavolume:
I use a volume to keep MariaDB's data even after docker-compose down. In my Jetty app, the data is stored into the database when the contextDestroyed function runs (when the container is stopped).
But I have another problem: when I execute docker-compose down, all the containers are stopped and deleted. Although the MariaDB container is the last one stopped (that's what the terminal says), the save in contextDestroyed is interrupted and I lose some information, because the MariaDB container stops while the Jetty container is still saving data. I tested stopping every container except MariaDB, and my data was successfully saved without loss, so the problem is clearly the MariaDB container stopping too early.
How can I tell the mariadb container to wait for all other containers to stop before stopping itself?
According to the depends_on documentation, your dependency will force the following shutdown order:
jetty
mariadb
You might want to look into what happens during the shutdown of these containers and add some custom logic to guarantee that your data is consistent.
You can influence what happens during the shutdown of a container in two main ways:
adding a custom script as an entrypoint
handling the SIGTERM signal yourself
Here's the relevant documentation on this.
Maybe the simplest, though not necessarily the smartest, way would be to add a sleep(5) to the db shutdown, so your app has enough time to flush its writes.
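A compose-level option worth knowing here, offered as an additional suggestion rather than part of the answer above, is stop_grace_period, which lengthens the window between SIGTERM and SIGKILL so the app has more time to flush its writes:

services:
  jetty:
    # ... rest of the jetty service as in the question ...
    # give contextDestroyed more time to finish before Docker escalates
    # from SIGTERM to SIGKILL (the default is 10s; 2m is illustrative)
    stop_grace_period: 2m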
This question already has answers at: How to stop all containers when one container stops with docker-compose?
I have this docker-compose.yml, which runs a Node script that depends on Redis.
version: "3.9"
services:
redis:
image: "redis:alpine"
# restart: always
ports:
- "127.0.0.1:6379:6379"
volumes:
- ./docker/redis:/data
node:
image: "node:17-alpine"
user: "node"
depends_on:
- redis
environment:
- NODE_ENV=production
- REDIS_HOST_ENV=redis
volumes:
- ./docker/node/src:/home/node/app
- ./docker/node/log:/home/node/log
expose:
- "8081"
working_dir: /home/node/app
command: "npm start"
When starting this with docker compose up, both services start. However, when the node service is finished, the redis service keeps running. Is there a way to define that the redis service should stop when the node service is done?
I have examined the documentation for the Compose Spec, but I have not found anything that allows you to stop a container immediately based on the state of another one. Perhaps there really is a way, but you can always control the behaviour of the redis service by using a healthcheck:
services:
  redis:
    image: "redis:alpine"
    # restart: always
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - ./docker/redis:/data
    healthcheck:
      test: ping -c 2 mynode || kill 1
      interval: 5s
      retries: 1
      start_period: 20s
  node:
    image: "node:17-alpine"
    container_name: mynode
    user: "node"
    depends_on:
      - redis
    environment:
      - NODE_ENV=production
      - REDIS_HOST_ENV=redis
    volumes:
      - ./docker/node/src:/home/node/app
      - ./docker/node/log:/home/node/log
    expose:
      - "8081"
    working_dir: /home/node/app
    command: "npm start"
As for the node service, I have added container_name: mynode, which the redis service needs in order to contact it. The container name also becomes the hostname, unless one is specified with the hostname property.
The redis service has a healthcheck that pings the node container every 5 seconds, starting 20 seconds after the container starts. If the ping is successful, the container is labeled as healthy; otherwise it is killed.
This solution might work in your case, but it has some downsides:
The healthcheck feature is abused here; besides, what if you needed another healthcheck?
You cannot always kill the init process, because it is protected by default. There are some discussions about this, and it seems the most popular decision is to use tini as the init process. Fortunately, this is possible in the image you are using.
The redis service contacts the node service via its hostname, which means they must be on the same network. Here that is the default bridge network, which should be avoided most of the time; I suggest you declare a custom bridge network, as sketched after this list.
This solution is based on polling the node container, which is not very elegant, firstly because you have to hope that the time-based parameters in the healthcheck section are "good enough".
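For the third point, a minimal sketch of a custom bridge network for the two services (the name appnet is illustrative):

services:
  redis:
    # ... as above ...
    networks:
      - appnet
  node:
    # ... as above ...
    networks:
      - appnet
networks:
  appnet:
    driver: bridge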
I have a docker-compose file which spins up a local Airflow instance, as below:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.6
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ${HOME}/.aws:/usr/local/airflow/.aws
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
I want to add some Airflow variables which the underlying DAG uses, e.g. CONFIG_BUCKET.
I have added AIRFLOW_VAR_CONFIG_BUCKET=s3://foo-bucket
to the environment section of the webserver, but it does not seem to work. Any ideas how I can achieve this?
You should not add the variables to the webserver but to the scheduler. If you are using LocalExecutor, tasks run in the context of the scheduler.
Actually, what you should really do is set all env variables to be the same for all the containers (this is explained at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html):
Use the same configuration across all the Airflow components. While each component does not require all of them, some configurations need to be the same; otherwise they will not work as expected. A good example is secret_key, which should be the same on the Webserver and Worker to allow the Webserver to fetch logs from the Worker.
There are a number of ways to do it; just read the docker-compose documentation on that: https://docs.docker.com/compose/environment-variables . You can also see the "quick start" docker compose from the Airflow docs, where we used anchors, which is a slightly more sophisticated way (see the sketch below): https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
Just note that the "quick start" should be just an inspiration; it is nowhere near a production setup, and if you want to make your own docker compose you really need a deeper understanding of docker compose, as warned in the note in our docs.
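As a sketch of the anchor technique with the values from the question (the x-airflow-common key is just a naming convention, and splitting the scheduler into its own service is an assumption, not something the question's compose file already does):

x-airflow-common: &airflow-common
  image: puckel/docker-airflow:1.10.6
  environment:
    - LOAD_EX=n
    - EXECUTOR=Local
    - AIRFLOW_VAR_CONFIG_BUCKET=s3://foo-bucket

services:
  webserver:
    # the merge key pulls in the shared image and environment,
    # so every Airflow component sees the same variables
    <<: *airflow-common
    command: webserver
  scheduler:
    <<: *airflow-common
    command: scheduler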
If you add an environment variable named AIRFLOW_VAR_CONFIG_BUCKET to the list under environment:, it should be accessible by Airflow. It sounds like you're doing that correctly.
Two things to note:
Variables (and connections) set via environment variables are not visible in the Airflow UI. You can test whether they exist by executing Variable.get("config_bucket") in code.
The Airflow scheduler/worker (depending on the Airflow executor) requires access to the variable while running a task. Adding the variable to the webserver is not required.
Please explain why a Docker network is needed. I have read some documentation, but in a practical example I don't understand why we need to manipulate the network. Here I have a docker-compose file in which everything works fine with and without a network. Please explain what the practical benefit would be of uncommenting the compose file in the marked places. Right now my containers interact perfectly, the ORM migrations reach the database, so why do I need networks?
version: '3.4'
services:
  main:
    container_name: main
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - ${PORT}:${PORT}
    command: npm run start:dev
    env_file:
      - .env
    # networks:
    #   - webnet
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:12
    # networks:
    #   - webnet
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 1m30s
      timeout: 10s
      retries: 3

# networks:
#   webnet:

volumes:
  pgdata:
If no networks are defined, docker-compose creates a default network with a generated name. Otherwise, you can specify the network and its name manually in the compose file.
You can read more at Networking in Compose.
Docker networking is explained in the Networking overview, and here are the tutorials:
Macvlan network tutorial,
Overlay networking tutorial,
Host networking tutorial,
Bridge network tutorial.
Without any network configuration, all container IPs come from one default range (e.g. 172.17.0.0/16); on a compose-created (user-defined) network, containers can also reach each other by service or container name through Docker's internal DNS.
A simple practical use of Docker networking: when you want certain containers isolated in a specific network range, you must define Docker networks, as in the sketch below.
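For example, a sketch of isolating a database from a public-facing proxy with two user-defined networks (all names are illustrative):

version: '3.4'
services:
  proxy:
    image: nginx
    networks:
      - frontnet        # can reach main, but not postgres
  main:
    image: myapp
    networks:
      - frontnet
      - backnet
  postgres:
    image: postgres:12
    networks:
      - backnet         # reachable only from main
networks:
  frontnet:
  backnet: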