I have looked at the Docker docs for the answer to this question and I don't see it laid out simply anywhere. I want to start my app in a container using docker-compose.yml, and I want that docker-compose.yml to start up other containers defined in another docker-compose.yml file in a different project.
version: '3.4'

# we need a network if we want the services to talk to each other
networks:
  services:
    driver: bridge

services:
  jc:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        - PORT=8080
      network: host
    networks:
      - services
    image: jc
    container_name: jc
    ports:
      - 8080:8080
How can I edit this file so that running it also brings up another docker-compose.yml file located at a different path?
After trying to use the extends option of the docker-compose file in several variations, I consistently received an error indicating that extends is not supported, even after updating to the latest Docker version. I did, however, solve the problem by running this command:
docker-compose -f /path/to/other/docker-compose-file/docker-compose.yml up
I was sure that I had to add something to the docker-compose file itself, so I overlooked this in the docs. You can read more about it here:
docker-compose docs
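If you want a single command that brings up this project's stack and the other project's stack together, a small wrapper script is one option. This is only a sketch: the path is the same placeholder as in the command above, and the commands are echoed rather than executed (the eval lines are commented out) so it can be dry-run first.

```shell
#!/bin/sh
# Dry-run wrapper sketch: OTHER is the placeholder path from the
# answer above; point it at the real location of the second project.
OTHER=/path/to/other/docker-compose-file/docker-compose.yml

UP_LOCAL="docker-compose up -d"
UP_OTHER="docker-compose -f $OTHER up -d"

# Show what would run; uncomment the eval lines to actually run it.
echo "$UP_LOCAL"
echo "$UP_OTHER"
# eval "$UP_LOCAL"
# eval "$UP_OTHER"
```

Each project keeps its own project name (derived from its directory), so the two stacks stay independent.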
This is kind of a hack, but you can add another container that starts Docker Compose with another docker-compose file.
For example:
Docker file for the starter container:
FROM ubuntu:bionic
RUN mkdir -p /compose
WORKDIR /compose
CMD ["docker-compose", "up", "-d"]
The main docker-compose file (starts a Redis server and the starter container). Note that the compose binary, the Docker socket, and the other docker-compose.yml file are mounted into the starter container:
version: '2.1'
services:
  from-main-compose:
    image: redis:3

  starter:
    image: starter:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose:ro
      - /home/shay/source/composeFromContainer/another-compose/docker-compose.yml:/compose/docker-compose.yml:ro
Second docker compose file:
version: '2.1'
services:
  redis-from-container:
    image: redis:3
The result is this:
b1faa975df49 redis:3 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 6379/tcp compose_redis-from-container_1
7edca79d3d99 redis:3 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 6379/tcp composefromcontainer_from-main-compose_1
Note that if you use this hack as-is, the services will be placed on different networks, so it might need to be tweaked a bit.
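One way to tweak that: have the second compose file join the main project's network instead of creating its own. A sketch, assuming the main project's default network ends up named composefromcontainer_default (derived from the project directory name visible in the container names above):

```yaml
# Second docker-compose.yml, attached to the main project's network.
version: '2.1'
services:
  redis-from-container:
    image: redis:3
    networks:
      - mainnet
networks:
  mainnet:
    external:
      name: composefromcontainer_default
```

With an external network, compose will not create or remove the network itself; it must already exist when this file is brought up.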
Related
I want to bring up several similar instances of my simple docker-compose mini-project.
docker-compose file:
version: '3.1'
services:
  postgres:
    build:
      context: .
      dockerfile: postgres/Dockerfile
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - "${PORT_NUMBER}:5432"
Dockerfile:
FROM postgres:9.6
In the ./config/.env.dev file I set the required unique port number (e.g. 5435 for the first instance, 5436 for the second, etc.).
When I bring up the first instance with the command:
docker-compose -p instance1 --env-file ./config/.env.dev up
it's OK and I see one new container and one new network, instance1_default.
But when I try to bring up a second instance with the command:
docker-compose -p instance2 --env-file ./config/.env.dev up
Docker gets stuck at this:
Creating network "instance2_default" with the default driver
... and nothing happens. Yes, I changed the port number in the env file before running the new instance.
What's wrong with creating new network?
Docker version 20.10.8, build 3967b7d
docker-compose version 1.29.2, build 5becea4c
I have the following docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres:${PG_VERSION}
    ports:
      - "${DB_PORT}:5432"
    environment:
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_PASS}
      - POSTGRES_DB=${DB_NAME}
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_PASS}
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - DB_ANON_ROLE=${DB_ANON_ROLE}
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d

  # PostgREST
  postgrest:
    image: postgrest/postgrest
    ports:
      - "${API_PORT}:3000"
    links:
      - db:db
    environment:
      - PGRST_DB_URI=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}
      - PGRST_DB_SCHEMA=${DB_SCHEMA}
      - PGRST_DB_ANON_ROLE=${DB_ANON_ROLE}
      - PGRST_JWT_SECRET=${JWT_SECRET}
    depends_on:
      - db

  swagger:
    image: swaggerapi/swagger-ui
    ports:
      - "${SWAGGER_PORT}:8080"
    environment:
      API_URL: ${SWAGGER_API_URL:-http://localhost:${API_PORT}/}
And another file, docker-compose.prod.yml:
version: '3'
services:
  db:
    volumes:
      - ./initdb/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./var/postgres-data:/var/lib/postgresql/data
      - ./var/log/postgresql:/var/log/postgresql
      - ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf

  nginx:
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./var/log/nginx:/var/log/nginx
    depends_on:
      - postgrest
As you can see I am adding a few volumes to the db service, but importantly I have also added a new nginx service.
The reason I am adding it in this file is because nginx is not needed during development.
However, what is strange is that when I issue the docker-compose up command as follows:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
And then list the processes with
docker-compose ps
I get the following output
Name Command State Ports
-----------------------------------------------------------------------------------------
api_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
api_postgrest_1 /bin/sh -c exec postgrest ... Up 0.0.0.0:3000->3000/tcp
api_swagger_1 /docker-entrypoint.sh sh / ... Up 80/tcp, 0.0.0.0:8080->8080/tcp
Notice that nginx is not listed here. However, it is actually running; when I issue:
docker ps
I get the following output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba281fd80743 nginx "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp api_nginx_1
d0028fdaecf5 postgrest/postgrest "/bin/sh -c 'exec po…" 8 minutes ago Up 8 minutes 0.0.0.0:3000->3000/tcp api_postgrest_1
1d6e3d689210 postgres:11.2 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:5432->5432/tcp api_db_1
ed5fa7a71848 swaggerapi/swagger-ui "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 80/tcp, 0.0.0.0:8080->8080/tcp api_swagger_1
So my question is, why is docker-compose not seeing nginx as part of the group of services?
NOTE: The reason I am using this override approach, and not using extends, is that extends does not support services with links and depends_on properties. My understanding is that combining files like this is the recommended approach. However, I do not understand why it is not possible to add new services in a secondary file.
For example, see https://docs.docker.com/compose/extends/#example-use-case: there the docs add a new dbadmin service using this method, but there is no mention that the service won't be included in the output of docker-compose ps, or that there will be warnings about orphans, for example:
$ docker-compose down
Stopping api_postgrest_1 ... done
Stopping api_db_1 ... done
Stopping api_swagger_1 ... done
WARNING: Found orphan containers (api_nginx_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing api_postgrest_1 ... done
Removing api_db_1 ... done
Removing api_swagger_1 ... done
Removing network api_default
Tested on:
Docker version 20.10.4, build d3cb89e
Docker version 19.03.12-ce, build 48a66213fe
and:
docker-compose version 1.27.0, build unknown
docker-compose version 1.29.2, build 5becea4c
So I literally figured it out as I was typing, after noticing a related question.
The trick is in this answer: https://stackoverflow.com/a/45515734/2685895
The reason my new nginx service was not visible is that docker-compose ps by default only looks at the docker-compose.yml file.
In order to get the expected output, one needs to specify both files.
In my case:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
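As an aside, if typing both -f flags for every command gets tedious, the COMPOSE_FILE environment variable is the documented equivalent: a colon-separated list of files that every subsequent compose command will merge. A sketch (the compose commands are left commented so this is a dry run):

```shell
# COMPOSE_FILE is read by docker-compose in place of repeated -f
# flags; entries are separated by ":" on Linux/macOS.
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
echo "$COMPOSE_FILE"

# With it set, plain commands see the merged configuration:
# docker-compose ps      # now lists nginx as well
# docker-compose down    # also removes nginx, with no orphan warning
```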
Scenario:
With the following docker-compose.yaml
version: '3'
services:
  helloworld:
    image: hello-world
    container_name: hello-world

  whoami:
    image: containous/whoami
    container_name: whoami
containers are started with docker-compose up
docker-compose.yaml is then edited to expose a port
version: '3'
services:
  helloworld:
    image: hello-world
    container_name: hello-world

  whoami:
    image: containous/whoami
    container_name: whoami
    ports:
      - 10000:80
whoami is restarted via docker-compose restart whoami
Problem: the port is not exposed.
My question: what is the correct command to restart a container (previously started as part of a docker-compose up) so that its modified definition in docker-compose.yaml is taken into account?
Note: restarting everything with docker-compose down && docker-compose up correctly exposes the port. What I want to avoid is to interfere with other running containers when modifying a single one.
Only another docker-compose up seems to work.
According to docker-compose up documentation:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
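The recreate can be scoped to a single service. A sketch, assuming the service name whoami from the files above: up (unlike restart) re-reads the compose file, and --no-deps keeps compose from also recreating linked services. The live command is left commented so this runs as a dry run.

```shell
# Build the command for recreating just the one edited service.
SVC=whoami
CMD="docker-compose up -d --no-deps $SVC"
echo "$CMD"
# eval "$CMD"   # uncomment to run against the live stack
```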
I'm trying to migrate my working Docker config files (Dockerfile and docker-compose.yml) so they deploy my working local Docker configuration to Docker Hub.
I've tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or will talk to each other via the "db" and the database "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify them so I get the same behavior on docker hub. Being able to have working local containers is necessary for development, but others need to use these containers on docker hub so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db

  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect to end up with these containers available on Docker Hub, but I cannot see how the files need to be modified to achieve that. Thanks.
Add an image attribute.
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8089:8080"
  image: docker-hub-username/app
Replace "docker-hub-username" with your Docker Hub username, then build and push the image with docker-compose build app and docker-compose push app.
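Once the image is on Docker Hub, other people don't need the build section at all. A sketch of what a consumer's compose file might look like (docker-hub-username and the port mapping are taken from the files above):

```yaml
version: '3'
services:
  app:
    image: docker-hub-username/app
    ports:
      - "8089:8080"
```

docker-compose up on a machine that lacks the image will then pull it from Docker Hub instead of building it.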
I tried to run nginx as follows:
docker-compose.yml
version: '3'
services:
  web:
    image: nginx
    ports:
      - "3011:80"
After I run docker-compose up, nginx works successfully at 127.0.0.1:3011.
But if I copy nginx's Dockerfile from Docker Hub
and change the docker-compose.yml as follows:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3011:80"
then nginx is no longer working at 127.0.0.1:3011.
Why is that?
If you changed your Dockerfile, you should run
docker-compose up --build -d
so your Docker image is rebuilt before docker-compose runs it.
I tested this using the Dockerfile linked above and the compose file for it, and it worked perfectly fine on my side. Perhaps you need to run docker-compose build and then docker-compose up --force-recreate; it may be that you are using a stale container that is broken.