This question already has answers here:
Docker compose in another directory affects other containers (2 answers)
docker-compose containers uses wrong container with multiple projects (3 answers)
Closed 1 year ago.
There are two docker-compose projects on one server whose containers have different container_name and hostname values. When I cd into one project's directory and run docker-compose -f docker-compose.dev.yml stop && docker-compose -f docker-compose.dev.yml up -d, the other project's container is also stopped and recreated.
Container One's docker-compose.dev.yml:
version: '3.7'
networks:
  default:
    external: true
    name: sycamore
services:
  backend:
    container_name: sycamore-research-backend-dev
    hostname: sycamore-research-dev
    build:
      context: ./
      dockerfile: backend.dockerfile
      args:
        env: dev
    env_file:
      - backend.dev.env
    ports:
      - '9989:80'
    environment:
      ACCESS_LOG: ./logs/access.log
      ERROR_LOG: ./logs/error.log
    volumes:
      - './app:/app'
      - './upload:/upload'
    command: bash /start-reload.sh
    networks:
      - default
Container Two's docker-compose.dev.yml:
version: '3.8'
networks:
  default:
    external: true
    name: sycamore
services:
  backend:
    container_name: sycamore-jsincubator-backend-dev
    hostname: sycamore-jsincubator-dev
    build:
      context: ./
      dockerfile: backend.dockerfile
      args:
        env: dev
    env_file:
      - backend.dev.env
    ports:
      - '9500:80'
    environment:
      ACCESS_LOG: /mnt/development/mount/SycamoreJSIncubator/logs/access.log
      ERROR_LOG: /mnt/development/mount/SycamoreJSIncubator/logs/error.log
    volumes:
      - './app:/app'
    command: bash /start-reload.sh
    networks:
      - default
I have no idea why this happens. The two compose files use different container_name and hostname values; the only thing they share is the network.
Versions:
docker: Docker version 19.03.11, build 42e35e61f3
docker-compose: docker-compose version 1.26.0, build unknown
System: CentOS 7
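For context (a likely cause, not confirmed in this thread): Compose tracks containers by project name + service name, not by container_name. Both files define a service called backend, so if the two projects resolve to the same project name, up treats the other project's container as a stale copy of the same service and recreates it. A sketch of how docker-compose 1.x derives the default project name, using hypothetical directory paths:

```shell
# docker-compose 1.x derives the default project name from the basename
# of the directory holding the compose file, lowercased with characters
# outside [a-z0-9_-] stripped. Hypothetical paths for illustration:
for dir in /srv/research/Sycamore-Dev /opt/jsincubator/Sycamore-Dev; do
  basename "$dir" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9_\n-'
done
# both directories yield the same project name, so the two deployments
# collide unless -p/--project-name is passed
```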
Related
I am deploying an API on 3 different environments (test, stage & production).
I usually deploy with docker-compose, so I wrote 2 services (one for my API and one for a database) as follows:
# file docker-compose.yml
version: '3.3'
services:
  api:
    build:
      context: ..
      dockerfile: Dockerfile
    image: my_api:${TAG}
    ports:
      - "${API_PORT_FROM_ENV}:8000"
    env_file: .env
    depends_on:
      - db
  db:
    image: whatever:v0.0.0
    ports:
      - "${DB_PORT_FROM_ENV}:5000"
    env_file:
      - .env
In the file above, you can find the parent services.
Then, I wrote 2 files that describe my strategy for deploying my containers to the same virtual machine:
-> staging environment below
# docker-compose.stage.yml
version: "3.3"
services:
  api:
    container_name: api_stage
    environment:
      - environment="staging"
  db:
    container_name: db_stage
    environment:
      - environment="staging"
    volumes:
      - /I/Mount/a/local/volume/stage:/container/volume
-> production environment below
# docker-compose.prod.yml
version: "3.3"
services:
  api:
    container_name: api_prod
    environment:
      - environment="production"
  db:
    container_name: db_prod
    environment:
      - environment="production"
    volumes:
      - /I/Mount/a/local/volume/prod:/container/volume
My problem:
The production is actually running.
I deploy my containers with the following command:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build
I want to deploy a staging environment on the same virtual machine. I want my api_prod + db_prod running in parallel with api_stage + db_stage.
Unfortunately, when I run the command:
docker-compose -f docker-compose.yml -f docker-compose.stage.yml up --build
my containers api_prod and db_prod stop. Why?
I found the solution:
I need to specify the --project-name option, which lets me run the stage and production containers side by side without interference.
Below are the 2 commands:
# Stage
docker-compose --project-name stage -f docker-compose.yml -f docker-compose.stage.yml up --build
# Production
docker-compose --project-name prod -f docker-compose.yml -f docker-compose.prod.yml up --build
I am also open to other solutions.
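One other option (my assumption, not from this thread): Compose also reads the project name from the COMPOSE_PROJECT_NAME environment variable, so exporting it once per deploy shell is equivalent to passing --project-name on every invocation:

```shell
# COMPOSE_PROJECT_NAME is equivalent to --project-name / -p
export COMPOSE_PROJECT_NAME=stage
echo "deploying project: ${COMPOSE_PROJECT_NAME}"
# docker-compose -f docker-compose.yml -f docker-compose.stage.yml up --build
```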
You need to specify different port bindings as well:
# docker-compose.stage.yml
version: "3.3"
services:
  api:
    container_name: api_stage
    ports:
      - "8001:8000"
    environment:
      - environment="staging"
  db:
    container_name: db_stage
    ports:
      - "xxxY:xxxx"
    environment:
      - environment="staging"
    volumes:
      - /I/Mount/a/local/volume/stage:/container/volume
Using the docker compose files below, I am unable to bring up my app correctly. Docker says my LAPIS_ENV environment variable is not set, but I am setting it in my second compose file, which I expect to be merged into the first one. I have tried including them in reverse order, to no avail.
version: '2.4'
services:
  backend:
    mem_limit: 50mb
    memswap_limit: 50mb
    build:
      context: ./backend
      dockerfile: Dockerfile
    depends_on:
      - postgres
    volumes:
      - ./backend:/var/www
      - ./data:/var/data
    restart: unless-stopped
    command: bash -c "/usr/local/bin/docker-entrypoint.sh ${LAPIS_ENV}"
  postgres:
    build:
      context: ./postgres
      dockerfile: Dockerfile
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - postgres:/var/lib/postgresql/data
      - ./postgres/pg_hba.conf:/var/lib/postgres/data/pg_hba.conf
      - ./data/backup:/pgbackup
    restart: unless-stopped
volumes:
  postgres:
version: '2.4'
services:
  backend:
    environment:
      LAPIS_ENV: development
    ports:
      - 8080:80
#!/usr/bin/env bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
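For context (an explanation, not an answer from this thread): `${LAPIS_ENV}` in `command:` is substituted by Compose while parsing the files, from the invoking shell or a `.env` file. The `environment:` key in the override file only affects the container at runtime, after substitution has already happened. The distinction in plain shell terms:

```shell
# Compose expands ${VAR} at parse time, exactly like shell expansion:
unset LAPIS_ENV
echo "command: docker-entrypoint.sh ${LAPIS_ENV}"
# expands to an empty value, which triggers Compose's "not set" warning

export LAPIS_ENV=development
echo "command: docker-entrypoint.sh ${LAPIS_ENV}"
# now expands to "development"

# So the wrapper script needs the variable set before `up`, e.g.:
# LAPIS_ENV=development docker compose -f docker-compose.yml -f docker-compose.dev.yml up
```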
I have a docker compose file defined with 2 services.
I need the 1st to start with the --dev command line option,
but I cannot find this in the file format spec: https://docs.docker.com/compose/compose-file/compose-file-v3/
version: "3.9"
services:
  polkadot:
    image: parity/polkadot:latest
    command: --dev
    ports:
      - "9944:9944"
  sidecar:
    image: parity/substrate-api-sidecar:latest
    ports:
      - "8080:8080"
I run it with docker-compose up.
For comparison, when running plain docker, adding --dev is straightforward:
docker run --rm -it -p 9944:9944 parity/polkadot:latest --dev
But how do I do the same within a docker-compose file?
command: is the right way to go.
It is also possible to pass many arguments, as in:
version: "3.9"
services:
  polkadot:
    container_name: polkadotdev
    image: parity/polkadot:latest
    ports:
      #- 30333:30333 # p2p port
      - 9933:9933 # rpc port
      - 9944:9944 # ws port
    command: [
      "--dev",
      "--name", "polkadotdevnode",
      "--ws-external",
      "--rpc-external",
      "--rpc-cors", "all"
    ]
  sidecar:
    container_name: sidecardev
    image: parity/substrate-api-sidecar:latest
    ports:
      - "8080:8080"
    environment:
      SAS_SUBSTRATE_WS_URL: ws://polkadot:9944
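For reference (an equivalent form, not from this thread): Compose accepts command: either as a YAML list, as above, or as a single string:

```yaml
# Equivalent single-string form of the list command above
command: --dev --name polkadotdevnode --ws-external --rpc-external --rpc-cors all
```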
I have 2 docker-compose files, each of which builds a Dockerfile, and I want to join them.
So, I created another docker-compose file that brings up these 2 images:
version: "3.4"
services:
  frontend:
    image: frontend-image
    depends_on:
      - backend
    ports:
      - "3000:80"
    networks:
      - test-network
  backend:
    image: backend-image
    ports:
      - "5001:80"
    networks:
      - test-network
networks:
  test-network:
    driver: bridge
But this docker-compose file does not build the images,
so I created a bash command that builds them:
bash -c "docker-compose -f ./frontend/docker/docker-compose.yml build
&& docker-compose -f ./backend/docker/docker-compose.yml build"
I want to run this script before the containers come up, by just typing docker-compose up.
I assume that you have 2 Dockerfiles, one for the frontend and the other for the backend, each residing in the corresponding folder from your post, that is:
frontend/docker/Dockerfile
backend/docker/Dockerfile
Then you can leverage docker-compose to build and run your images. All you have to do is tell docker-compose where the Dockerfiles are, which you can do via the build configuration.
version: "3.4"
services:
  frontend:
    image: frontend-image
    build: ./frontend/docker
    depends_on:
      - backend
    ports:
      - "3000:80"
    networks:
      - test-network
  backend:
    image: backend-image
    build: ./backend/docker
    ports:
      - "5001:80"
    networks:
      - test-network
networks:
  test-network:
    driver: bridge
Then running docker-compose up frontend will build the docker images (if they do not exist) and then start them.
I want to run an application using docker-compose on a Linux server that already has the images stored locally.
The application consists of two services. Running docker images on the server indicates that the images do in fact exist:
REPOSITORY TAG IMAGE ID CREATED SIZE
app_nginx latest b8362b71f3da About an hour ago 107MB
app_dash_alert_app latest 432f03c01dc6 About an hour ago 1.67GB
Here is my docker-compose.yml:
version: '3'
services:
dash_alert_app:
container_name: dash_alert_app
restart: always
build: ./dash_alert_app
ports:
- "8000:8000"
command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
nginx:
container_name: nginx
restart: always
build: ./nginx
ports:
- "80:80"
depends_on:
- dash_alert_app
When I run docker-compose pull, it seems to be able to see the images and pulls them in:
$ sudo docker-compose pull
Pulling dash_alert_app ... done
Pulling nginx ... done
But when I try to spin up the containers I get the following suggesting that the images still need to be built:
$ docker-compose up -d --no-build
ERROR: Service 'dash_alert_app' needs to be built, but --no-build was passed.
Note that I've configured docker to store images in /mnt/data/docker - here is my /etc/docker/daemon.json file:
{
  "graph": "/mnt/data/docker",
  "storage-driver": "overlay",
  "bip": "192.168.0.1/24"
}
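As an aside (not the cause of the build error, and an assumption about your Docker version on my part): newer Docker releases deprecate the graph key in favor of data-root; the equivalent daemon.json would be:

```json
{
  "data-root": "/mnt/data/docker",
  "storage-driver": "overlay",
  "bip": "192.168.0.1/24"
}
```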
Here is my folder structure:
.
│ docker-compose.yml
└───dash_alert_app
└───nginx
Why is docker-compose not using the images that exist locally?
Looks like you forgot to specify the image key. Also, do you really need to build the images again with docker-compose build, or are the existing ones sufficient? If they are, please try this:
version: '3'
services:
  dash_alert_app:
    image: app_dash_alert_app
    container_name: dash_alert_app
    restart: always
    ports:
      - "8000:8000"
    command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
  nginx:
    image: app_nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - dash_alert_app
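A variant worth noting (my assumption, not part of the answer above): keeping both image: and build: on a service lets the same file serve both workflows, since docker-compose build tags the built image with the given name and docker-compose up --no-build then reuses it:

```yaml
# Excerpt: build once, then reuse the tagged image
dash_alert_app:
  image: app_dash_alert_app
  build: ./dash_alert_app
```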