I have three Compose override files, for the dev, QA, and staging environments.
I have one server on which I have to run containers for both the QA and Staging environments, completely separately: separate containers, networks, and volumes.
Each override file uses different volume names, network names, image names, and container names, all controlled by environment-specific .env files.
When I run docker-compose -f "docker-compose.yml" -f "docker-compose.qa.yml" up -d, it builds the QA images and runs containers with "QA" in their names.
When I run docker-compose -f "docker-compose.yml" -f "docker-compose.staging.yml" up -d, it builds the Staging images and runs containers with "Staging" in their names.
However, I am not able to run both simultaneously.
Port bindings are also controlled by .env files and are different for each environment.
(I am able to specify which .env file to use when running the docker-compose up command.)
services:
  service1:
    networks:
      - dev
    volumes:
      - "vol_service1:/some/path/to/container"
  service2:
    networks:
      - dev
    volumes:
      - "vol_service2:/some/path/to/container"
  service3:
    networks:
      - dev
    volumes:
      - "vol_service3:/some/path/to/container"
  service4:
    networks:
      - dev
    volumes:
      - "vol_service4:/some/path/to/container"

networks:
  dev:
    driver: bridge

volumes:
  vol_service1:
  vol_service2:
  vol_service3:
  vol_service4:
I am using Docker for Windows; here are the details:
Client: Docker Engine - Community
Version: 18.09.2
API version: 1.39
Go version: go1.10.8
Git commit: 6247962
Built: Sun Feb 10 04:12:31 2019
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:13:06 2019
OS/Arch: linux/amd64
Experimental: false
That was really silly on my part; I missed an important point in the docker-compose documentation.
You need to set the COMPOSE_PROJECT_NAME environment variable; if it is not set, Compose uses the name of the folder containing your compose file as the project name.
Just give this variable a different value for each environment and you are good to go.
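For example, a minimal sketch (the -p flag sets the project name, equivalent to setting COMPOSE_PROJECT_NAME; the file names are taken from the question):

# QA: the "qa" project name prefixes its containers, networks, and volumes
docker-compose -p qa -f docker-compose.yml -f docker-compose.qa.yml up -d

# Staging: the "staging" project name gets a completely separate set of resources
docker-compose -p staging -f docker-compose.yml -f docker-compose.staging.yml up -d

With distinct project names, both environments can run side by side on the same server.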
Related
I am using Maven to interpolate a docker-compose file, in order to map the working directory on both Linux and Windows. Interpolation works as intended on both OSs.
In my local Windows environment, when running "docker compose up" I get both containers with the mapped volume (which already exists on the host machine), without specifying "volumes:" at the top level, only at the service level.
However, if I try to run the same setup in Linux-based TeamCity, I get the following message: "service "job_controller" refers to undefined volume path/to/target/classes: invalid compose project".
After checking other answers here, I understood that I also have to specify "volumes:" at the top level, which I did at the bottom of the compose file.
Now I am prompted with "volumes Additional property /opt/buildagent/work/9857567c5e342350/path/to/target/classes is not allowed".
name: Distributed
services:
  create_database:
    container_name: create_database
    command:
      - ./script.sh
      - deployer
      - -f
      - ../config/product-mssql-v11.manifest.yaml
      - drop-create-database-properties
    image: alpine-3-corretto-11-wildfly-11.11.0-SNAPSHOT
    networks:
      - deploy
    volumes:
      - C:\\SourceCode\\Path\\to\\target/classes:/opt/product/config
    healthcheck:
      test: ["CMD", "/opt/product/script.sh", "deployer", "-f", "/opt/product/config/product-mssql-v11.manifest.yaml", "healthy"]
      interval: 20s
      timeout: 60s
      retries: 5
  job_controller:
    container_name: job_controller
    environment:
      DEPLOYMENT_MANIFEST: /opt/product/config/main.manifest.yaml
      PROPERTIES_FILE_NAME: /opt/product/config/risk-wildfly.properties
      JAVA_OPTS: "-Xms1g -Xmx4g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=1g -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true"
    ports:
      - 8080:8080
    image: alpine-3-corretto-11-wildfly-11.11.0-SNAPSHOT
    volumes:
      - C:\\SourceCode\\Path\\to\\target/classes:/opt/product/config
    networks:
      - deploy
    depends_on:
      create_database:
        condition: service_completed_successfully
    restart: on-failure
    healthcheck:
      test: ["CMD", "/opt/product/script.sh", "health-check", "--context-path", "product"]
      interval: 20s
      timeout: 60s
      retries: 5
networks:
  deploy:
    name: deploy
    external: true
volumes:
  C:\\SourceCode\\Path\\to\\target/classes:
    external: true
Now, locally, if I try to run "docker compose up" with the "volumes:" specified at the bottom, I also get the same "volumes Additional property C:\SourceCode\Path\to\target/classes is not allowed".
If, instead of

volumes:
  C:\\SourceCode\\Path\\to\\target/classes:
    external: true

I use

volumes:

I get "volumes must be a mapping".
So neither of these works.
C:\>docker compose version
Docker Compose version v2.10.2
C:\>docker-compose version
docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.9.0
OpenSSL version: OpenSSL 1.1.1g 21 Apr 2020
C:\>docker version
Client:
Cloud integration: v1.0.29
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c701
Built: Mon Jun 6 23:09:02 2022
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.12.0 (85629)
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b842
Built: Mon Jun 6 23:01:23 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.8
GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
How can I run this successfully on both OSs, considering the volume mapping?
If you want to map a specific folder from the host into a Docker container, you don't need the top-level

volumes:

section at all.
That section is used to create named volumes managed by Docker, which you then reference by name in the volumes: section of a service definition (and across multiple docker-compose files if the external flag is set).
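A minimal sketch of the difference, with illustrative names (./target/classes and app_data are assumptions, not from the original file); relative bind-mount paths are resolved against the compose file's directory, which makes them portable between Windows and Linux:

services:
  job_controller:
    image: alpine-3-corretto-11-wildfly-11.11.0-SNAPSHOT
    volumes:
      # bind mount: host path to container path; needs no top-level declaration
      - ./target/classes:/opt/product/config
      # named volume: must be declared under the top-level volumes: key
      - app_data:/opt/product/data

volumes:
  app_data:

Only the named volume appears at the top level; the bind mount never does.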
Hello everyone. I am just beginning with Docker and really appreciate the whole container concept, but one thing I can't seem to figure out. When I use Docker Compose to define my services, I have to give a network as well, and all containers connect to that network and hence are reachable to one another. But what if one of my services needs to connect to a database service hosted on localhost? How will my service be able to reach the database? I have searched for the host networking mode and the net: host option in the compose file, but they don't seem to work. My Docker version info is as follows.
Client: Docker Engine - Community
Version: 18.09.2
API version: 1.39
Built: Sun Feb 10 04:12:31 2019
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Built: Sun Feb 10 04:13:06 2019
OS/Arch: linux/amd64
Experimental: false
My docker-compose.yml file:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: dbaccessserviceimage
network_mode: "host"
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
I really need help, as I am designing the architecture to move all of our production into containers. A docker-compose file with an appropriate version and network option would be highly appreciated.
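One point worth noting: network_mode: "host" is not supported on Docker for Windows, because the containers run inside a Linux VM rather than on the Windows host itself. On Docker for Windows 18.03 and later, the special DNS name host.docker.internal resolves to the host machine. A sketch (the environment variable names and port are assumptions for illustration):

version: "3"
services:
  web:
    image: dbaccessserviceimage
    ports:
      - "4000:80"
    environment:
      # hypothetical settings pointing the service at a database on the Windows host
      DB_HOST: host.docker.internal
      DB_PORT: "1433"

The service then connects to host.docker.internal instead of localhost, since inside a container localhost refers to the container itself.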
I have a simple yaml file which starts two containers: JBoss and Postgres. When I run:
docker-compose -f compose-application.yaml up -d
a new network is created - this is what I expect. However, when I stop the containers with:
docker-compose -f compose-application.yaml down
and start them once again, the network gets a new subnet (incremented by 1). When the restart is repeated a few times, the assigned subnet conflicts with an already existing one (causing problems with routing, etc.).
I know I can specify the subnet to use inside the yaml. However, I tried to run this on a different machine (Docker for Windows 7), and there the network gets the same subnet each time.
I am using docker version:
docker version
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:03 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:25:29 2018
OS/Arch: linux/amd64
Experimental: false
and docker compose:
docker-compose version
docker-compose version 1.23.1, build b02f1306
docker-py version: 3.5.0
CPython version: 3.6.7
OpenSSL version: OpenSSL 1.1.0f 25 May 2017
I don't know why it works differently on Windows 7, but I imagine that's because of Docker Machine itself.
I think the best solution is really to define a network in the docker-compose.yml file, something like this:
networks:
  network_name:
    name: NETWORK_NAME
    driver: bridge
    ipam:
      config:
        - subnet: SUBNET
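For instance, with illustrative values filled in (the name and subnet are assumptions; pick any private range that doesn't clash with your existing networks):

networks:
  app_net:
    name: jboss_postgres_net
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

Pinning the subnet this way keeps it stable across repeated down/up cycles.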
--- EDITED: NO LONGER WORKING. PLEASE HELP ---
Maybe something has changed in the latest neo4j image (SEE MY ANSWER BELOW FOR MORE DETAILS).
I'm trying to run neo4j with docker-compose by means of this GitHub repo (which contains the docker-compose.yml):
https://github.com/GraphRM/workshop-neo4j-docker
The docker-compose file contained in this repo is nothing more than a plain neo4j Docker image with some data already attached (you can try it yourself; the image is really small).
Running docker-compose up -d (from the folder where the docker-compose.yml file is), it seems that all went well (no errors are shown and the console output is Starting workshopneo4jdocker_neo4j_1 ... done),
but in the browser nothing shows up at the following addresses:
localhost:7474
0.0.0.0:7474
127.0.0.1:7474
<dockermachine ip>:7474 (got this address with `docker-machine ip`)
I suppose it is a network problem (a wrong IP address or something related), and I've noted that the docker-compose.yml file is missing the network_mode element:
docker-compose.yml

version: '3'
services:
  neo4j:
    image: neo4j:latest
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_dbms_shell_enabled=true
    volumes:
      - ./plugins:/plugins
      - ./data:/data
      - ./import:/import
I'd like to modify this file, adding network_mode: "bridge", or test other values (host, none, service:[service name], container:[container name/id]),
but the question now is:
how do I modify this file if the nano editor is not installed in the neo4j Docker image, and I can't even install it because apt-get is not available either?
(It is a really minimal image.)
Moreover, I'm not a Linux user, so I don't know how to modify this file.
Can you suggest a way to modify a file on an image that doesn't have these tools, without using vim?
I'm not very experienced with Linux, but I need to run the docker-compose.yml file provided with the above GitHub repo.
MY ENVIRONMENT
Docker Toolbox for Windows
`docker version`
Client:
Version: 18.01.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: 03596f51b1
Built: Thu Jan 11 22:29:41 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.01.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: 03596f5
Built: Wed Jan 10 20:13:12 2018
OS/Arch: linux/amd64
Experimental: false
PS: do you think the problem is not related to the IP address?
>>>>>EDITED<<<<<
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38e06d1020d8 neo4j:latest "/docker-entrypoint.…" 30 hours ago Up 29 minutes 0.0.0.0:7474->7474/tcp, 7473/tcp, 0.0.0.0:7687->7687/tcp workshopneo4jdocker_neo4j_1
After adding network_mode: "bridge" to the docker-compose.yml file and browsing to the docker-machine IP, the image works correctly.
docker-compose.yml

version: '3'
services:
  neo4j:
    image: neo4j:latest
    network_mode: "bridge"
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_dbms_shell_enabled=true
    volumes:
      - ./plugins:/plugins
      - ./data:/data
      - ./import:/import
The yml file below works fine for me. It is not very fast, though: you have to wait 2-3 minutes for it to come up and become available to the browser at http://localhost:7474/browser.
version: '3'
services:
  neo4j:
    image: neo4j:4.3.3-community # 4.3.3-community / latest
    container_name: neo4j
    ports:
      - "7474:7474"
      - "7687:7687"
    networks:
      - ecosystem_network
    environment:
      - NEO4J_AUTH=neo4j/eco_system
      - NEO4J_dbms_memory_pagecache_size=512M
    volumes:
      - ${HOME}/neo4j/data:/data
      - ${HOME}/neo4j/logs:/logs
      - ${HOME}/neo4j/import:/var/lib/neo4j/import
      - ${HOME}/neo4j/plugins:/plugins
networks:
  ecosystem_network:
    driver: bridge
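To bring it up and check it (the credentials come from the NEO4J_AUTH line above):

docker compose up -d
# wait a couple of minutes, then open http://localhost:7474/browser
# and log in with user neo4j and password eco_system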
I am doing this on an AWS Linux instance.
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
Whenever I try to run docker-compose up -d with the following docker-compose.yml (this is an excerpt):
elasticsearch:
  image: elasticsearch:2.1.1
  command: elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
  volumes:
    - ./elk/es/staging.yml:/etc/elasticsearch/elasticsearch.yml
  links:
    - kibana
  volumes_from:
    - es-data

es-data:
  image: elasticsearch:2.1.1
  command: "true"
  volumes:
    - /usr/share/elasticsearch/
    - /etc/default/elasticsearch
    - /var/lib/elasticsearch/
    - /var/log/elasticsearch
I'll receive this error:
Recreating es-data_1
ERROR: cannot mount volume over existing file, file exists /var/lib/docker/devicemapper/mnt/33111596cd765c4220c65ea7cc35df4a6c295a860e4ad94294c7ee985da6ea7c/rootfs/etc/default/elasticsearch
I've tried bypassing this error by removing the volumes from es-data, but that is not ideal, as I'm using it as a data-only container.
I've tried removing all the volumes available on the system using docker volume rm $(docker volume ls -q), but that removes all of the volumes except one, which I am not sure how to force-remove.
The ideal situation would be to let me reuse that volume (which I assume is orphaned at this point). The second-best would be to be able to remove the volume and create a new one.
If I were doing this on my personal machine, I would take the easier approach of recreating the docker-machine I'm using to get a clean slate, but I don't want to do that on my AWS instance.
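A possible cleanup path, sketched with illustrative names (a volume can only be removed once no container, running or stopped, still references it):

# find stopped containers that may still reference the stuck volume
docker ps -a

# remove the compose-managed containers, including stopped ones
docker-compose rm -f es-data elasticsearch

# list volumes no longer referenced by any container, then remove the orphan
docker volume ls -f dangling=true
docker volume rm <volume_id>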