I have a docker-compose file which looks like the following. I want to run the httpd service first and, after some time, run tomcat. Is it possible to do that with only the given docker-compose file, without using two compose files?
version: '2'
services:
  tomcat:
    expose:
      - "8009"
    ports:
      - 8080:8080
    image: 192.168.56.1:5000/tomcat
  httpd:
    volumes:
      - ./logs:/var/log/apache2
    ports:
      - 80:80
    image: 192.168.56.1:5000/apache
You can specify a dependency between the services by using the depends_on key, e.g.:
services:
  tomcat:
    ...
    depends_on:
      - httpd
  httpd:
    ...
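Note that depends_on only controls start order; it does not add a delay or wait for httpd to actually be ready. If tomcat should only start once httpd is serving requests, one option is a healthcheck combined with the long form of depends_on. A minimal sketch, assuming the 2.1 file format and that curl is available inside the apache image:

version: '2.1'
services:
  httpd:
    image: 192.168.56.1:5000/apache
    ports:
      - 80:80
    healthcheck:
      # mark httpd healthy once it answers HTTP requests
      test: ["CMD", "curl", "-f", "http://localhost:80/"]
      interval: 10s
      retries: 6
  tomcat:
    image: 192.168.56.1:5000/tomcat
    ports:
      - 8080:8080
    depends_on:
      httpd:
        condition: service_healthy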
Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in that one file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
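With both files in place, a typical sequence would look like this (the compose file paths are only illustrative):

docker network create my-net
docker-compose -f mongo/docker-compose.yml up -d
docker-compose -f ui/docker-compose.yml up -d

Because both stacks join my-net, the ui container can then reach MongoDB by its container name, e.g. mongodb://root:password@mongo:27017.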
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases);
- the service locations are configurable via environment variables;
- you don't mind communicating via the host.
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]
# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On MacOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
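On Linux there is no host.docker.internal by default; with Docker 20.10 or newer you can map it yourself. A minimal sketch, added under the client service in its compose file:

services:
  service:
    extra_hosts:
      # map host.docker.internal to the host's gateway address
      - "host.docker.internal:host-gateway"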
If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
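For the multi-host case the only change is where the variable points, for example (the hostname is illustrative):

export RAILS_APP_URL=http://rails-host.example.com:4000
docker-compose -f ./go/docker-compose.yml up -d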
I have a Docker image in a GitLab registry.
When I run the following (after logging in on a target machine):
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working. When I enter the container, everything looks fine.
But I don't have any services running. So I had the idea to create a YAML file, docker-compose-gitlab.yml, to set things up with docker-compose:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands in my YAML like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper and executed via php.)
When I call docker-compose with -d and then do docker ps, I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left a volume directive in place which overwrites my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the networking of the containers by listing the networks with docker network ls,
then inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If they are in fact in the same network, try using the container name instead of localhost to reach the other service.
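For example, a typical check might look like this (the network name and ID come from docker network ls):

docker network ls
docker inspect <ComposeNetworkID>
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up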
Currently I have a RabbitMQ message broker and multiple Celery workers that need to be containerized. My problem is: how can I fire up containers using different docker-compose.yml files? My goal is to start RabbitMQ once and for all, and never touch it again.
Currently I have a docker-compose.yml for RabbitMQ:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
    expose:
      - "5672"
And another docker-compose.yml for celery workers:
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    links:
      - rabbit
However, when I do docker-compose up for celery workers, I keep getting the following error:
[ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbit:5672//: failed to resolve broker hostname.
Can anyone take a look if there is anything wrong with my code? Thanks.
The hostname rabbit in your second docker-compose.yml file does not resolve because there is no service with that name in that file.
As stated in the comments, one solution is to put both the rabbit service and the worker service in the same docker-compose.yml file. In such a setup, all containers started for those services would join the same Docker network, and those service names could be resolved to the IP addresses of their containers.
Since having a single docker-compose.yml file is not convenient in your case, you have to find another way to have the containers originating from different docker-compose.yml files join the same Docker network.
To do so, create a dedicated Docker network for that purpose:
docker network create rabbitNetwork
Then, in each docker-compose.yml file, you need to refer to this network in the service definitions:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    # ports:
    #   - "5672:5672" # there is no need to publish ports on the docker host anymore
    expose:
      - "5672"
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
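With the network created up front, each stack can then be started and restarted independently (the file paths are only illustrative):

docker network create rabbitNetwork
docker-compose -f rabbit/docker-compose.yml up -d
docker-compose -f worker/docker-compose.yml up -d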
You can use any file as the Compose file.
docker-compose.yml is the default file name, but any other name can be passed using the -f argument.
docker-compose -f rabbit-compose.yml COMMAND
I have a compose file as follows:
redis:
  image: redis
  ports:
    - "6379:6379"
php:
  build: .
  image: php:fpm
  volumes:
    - ./code:/var/www/html
  links:
    - redis:redis
  networks:
    - code-network
I'm entering the php container with the following command:
docker exec -it php_id /bin/bash
but I can't run the redis-cli command in this container. What do I need to do to be able to run it?
I added the links parameter to the compose file, but it didn't help.
You are putting the php-fpm container in a network of its own. Here is a fixed compose file:
version: "3"
services:
redis:
image: redis
ports:
- "6379:6379"
php:
build: .
image: php:fpm
volumes:
- ./code:/var/www/html
networks:
- code-network
- default
networks:
code-network:
See this for more info on compose networking.
About the redis-cli issue: you'd need to add the appropriate repository in the php-fpm container and then install the package. As you are using the php:fpm image, you probably want to use Redis with some PHP application, so you don't need Debian's redis-cli package but rather the PHP extension.
See this post for more info.
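For example, a minimal Dockerfile for the php service could install the phpredis extension via PECL; this is a sketch to adapt to your existing Dockerfile:

FROM php:fpm
# install and enable the phpredis extension so PHP code can reach the redis service
RUN pecl install redis \
    && docker-php-ext-enable redis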
I have two services defined in a docker-compose.yml file and expect two containers to be up and running as a result; however, the containers are stopped immediately. When I use the same compose file on Linux, it creates and keeps two containers up and running. Is there any known issue with Windows?
I have tried using docker-compose up as well as docker-compose run -dT <servicename>, with no luck.
My docker-compose.yml file is:
version: '3'
networks:
  default:
    external:
      name: nat
services:
  awi-service:
    env_file:
      - awi-box.env
    image: awi-box:12.0.0
    ports:
      - 8080:8080
    depends_on:
      - ae-service
  ae-service:
    env_file:
      - ae-box.env
    image: ar-box:12.0.0
    ports:
      - 2217:2217
      - 2218:2218