This should be something simple. I have a docker-compose file that starts a DB2 container and a Java application container that does work against the database. The purpose of this stack is solely for test verification, so the values in the database should not persist. But I need the Java container to be able to make a JDBC connection to DB2, and right now the connection is refused and I'm not sure why. I created a common network for both of them (I thought).
My docker-compose.yml
version: "3.2"
services:
ssc-file-generator-db2-test:
container_name: "ssc-file-generator-db2-test"
image: ibmcom/db2:latest
hostname: db2server
privileged: true
ports:
- 50100:50000
- 55100:55000
networks:
- back-tier
restart: "no"
volumes:
- setup-sql:/setup-sql
- db2-shell-scripts:/var/custom
- host-dirs:/host-dirs
env_file:
- acceptance-run.environment
ssc-file-generator:
container_name: "ssc-file-generator_testing"
image: "ssc-file-generator:latest"
depends_on: ["ssc-file-generator-db2-test"]
entrypoint: ["sh", "/ssc-file-generator/bin/wait-for-db2.sh"]
env_file: ["acceptance-run.environment"]
networks:
- back-tier
restart: "no"
volumes:
- setup-sql:/setup-sql
- db2-shell-scripts:/var/custom
- host-dirs:/host-dirs
networks:
back-tier: {}
volumes:
setup-sql:
driver: local
driver_opts:
o: bind
type: none
device: ./setup-sql
db2-shell-scripts:
driver: local
driver_opts:
o: bind
type: none
device: ./db2-shell-scripts
host-dirs:
driver: local
driver_opts:
o: bind
type: none
device: ./host-dirs
Within the Compose network, the database should be reachable from the other container at the service name and the container port:
ssc-file-generator-db2-test:50000
ssc-file-generator-db2-test:55000
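From the application container, the JDBC URL would then look something like the following (testdb is a placeholder for whatever database your setup SQL creates; note that container-to-container traffic uses the container port 50000, not the 50100 published to the host):
jdbc:db2://ssc-file-generator-db2-test:50000/testdb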
Also, try adding a healthcheck to the db service so that ssc-file-generator starts only after ssc-file-generator-db2-test has successfully started and is ready for connections:
https://docs.docker.com/compose/compose-file/compose-file-v2/#depends_on
ssc-file-generator-db2-test:
...
healthcheck:
test: some command that checks db2 is working
ssc-file-generator:
...
depends_on:
ssc-file-generator-db2-test:
condition: service_healthy
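A minimal sketch of what that healthcheck could look like, assuming the stock ibmcom/db2 image (the db2inst1 instance user and the testdb database name are placeholders for whatever your environment file actually configures):
healthcheck:
  # succeeds only once the instance accepts connections to the test database
  test: ["CMD-SHELL", "su - db2inst1 -c 'db2 connect to testdb' || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 10
  start_period: 120s
DB2 takes a while to initialize on first start, so a generous start_period keeps the early failed probes from marking the container unhealthy.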
I'm trying to run two Docker containers attached to a single Docker network using Docker Compose.
I'm running into the following error when I run the containers:
Error response from daemon: failed to add interface veth5b3bcc5 to sandbox:
error setting interface "veth5b3bcc5" IP to 172.19.0.2/16:
cannot program address 172.19.0.2/16 in sandbox
interface because it conflicts with existing
route {Ifindex: 10 Dst: 172.19.0.0/16 Src: 172.19.0.1 Gw: <nil> Flags: [] Table: 254}
My docker-compose.yml looks like this:
version: '3'
volumes:
dsn-redis-data:
driver: local
dsn-redis-conf:
driver: local
networks:
dsn-net:
driver: bridge
services:
duty-students-notifier:
image: duty-students-notifier:latest
network_mode: host
container_name: duty-students-notifier
build:
context: ../
dockerfile: ./docker/Dockerfile
env_file: ../.env
volumes:
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
- dsn-net
restart: always
dsn-redis:
image: redis:latest
expose:
- 5432
volumes:
- dsn-redis-data:/var/lib/redis
- dsn-redis-conf:/usr/local/etc/redis/redis.conf
networks:
- dsn-net
restart: always
Thanks!
The network_mode: host setting generally disables Docker networking and can interfere with other options. In your case it looks like Docker might be trying to apply the networks: configuration to the host system's network layer.
network_mode: host is almost never necessary, and deleting it may resolve this issue.
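In other words, something like this for that service (the same configuration, just without network_mode: host, so the networks: block takes effect):
duty-students-notifier:
  image: duty-students-notifier:latest
  container_name: duty-students-notifier
  build:
    context: ../
    dockerfile: ./docker/Dockerfile
  env_file: ../.env
  volumes:
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  networks:
    - dsn-net
  restart: always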
What I am trying to do is create a container that is otherwise isolated but has one port open for access from outside. I'd like to keep it so that the container can't access the internet.
I have an internal network and a container with a single port open for accessing the service.
example docker-compose.yml:
version: '3.8'
networks:
vaultwarden:
driver: default
internal: true
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: always
ports:
- 8050:80
stdin_open: true
tty: true
volumes:
- /home/user/password_test:/data/
environment:
- WEBSOCKET_ENABLED=true
- ROCKET_WORKERS=8
networks:
- vaultwarden
It seems to work: the service is accessible at localhost:8050, and from the container I can't access the internet.
Still, I am wondering, is this the right way to do it?
EDIT: I'm using podman-compose, where this works, but with docker-compose I have to put bridge instead of default. And it seems that with Docker this solution does not work at all.
A solution of sorts was to create a reverse proxy and attach it both to the internal network and to a driver: bridge network. Now the traffic to the vaultwarden app goes through the second network, and vaultwarden itself can't access the internet.
networks:
vaultwarden_net_internal:
internal: true
vaultwarden_net_outside:
driver: bridge
services:
vaultwarden:
image: vaultwarden/server:latest
restart: always
stdin_open: true
tty: true
volumes:
- /home/user/password_test:/data/
environment:
- WEBSOCKET_ENABLED=true
- ROCKET_WORKERS=8
networks:
- vaultwarden_net_internal
proxy:
build:
context: ./
dockerfile: Dockerfile
restart: always
stdin_open: true
tty: true
networks:
- vaultwarden_net_internal
- vaultwarden_net_outside
ports:
- 8051:80
depends_on:
- vaultwarden
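For reference, the proxy can be as simple as an nginx image whose Dockerfile copies in a config that forwards to the vaultwarden service name. A sketch, assuming nginx is used (the upstream hostname vaultwarden resolves through the internal network's service DNS; the file itself is illustrative):
# nginx.conf, copied into the image by the proxy's Dockerfile
events {}
http {
  server {
    listen 80;
    location / {
      proxy_pass http://vaultwarden:80;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
With this in place the app is reached through the proxy at localhost:8051, while vaultwarden itself stays on the internal network only.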
I am trying to docker compose up to Azure Container Instances, but nothing shows up and no Docker container is created, as shown below:
CCSU_ACA_COMP+tn3877@CCSU-ND-909264 MSYS ~/source/cab/cab-deployment (master)
$ docker compose up
CCSU_ACA_COMP+tn3877@CCSU-ND-909264 MSYS ~/source/cab/cab-deployment (master)
$ docker ps
CONTAINER ID IMAGE COMMAND STATUS PORTS
Following is my docker-compose.yaml file
version: "3.8"
services:
cassandra:
image: cassandra:4.0.0
ports:
- "9042:9042"
restart: unless-stopped
volumes:
- hi:/home/cassandra:/var/lib/cassandra
- hi:/home/cassandra/cassandra.yaml:/etc/cassandra/cassandra.yaml
networks:
- internal
cassandra-init-data:
image: cassandra:4.0.0
depends_on:
- cassandra
volumes:
- hi:/home/cassandra/schema.cql:/schema.cql
command: /bin/bash -c "sleep 60 && echo importing default data && cqlsh --username cassandra --password cassandra cassandra -f /schema.cql"
networks:
- internal
postgres:
image: postgres:13.3
ports:
- "5432:5432"
restart: unless-stopped
volumes:
- hi:/home/postgres:/var/lib/postgresql/data
networks:
- internal
rabbitmq:
image: rabbitmq:3-management-alpine
ports:
- "15672:15672"
- "5672:5672"
restart: unless-stopped
networks:
- internal
volumes:
hi:
driver: azure_file
driver_opts:
share_name: docker-fileshare
storage_account_name: cs210033fffa9b41a40
networks:
internal:
name: cabvn
I have an Azure account and the file share already created.
I am suspecting the volume mount is the problem. Could anyone help me please?
The problem is in the "volumes" object inside the YAML config.
Make sure to use indentation to represent the object hierarchy in YAML. This is a very common problem with YAML, and most of the time the error messages fail to point it out, or they are not informative at all.
Previous solution with wrong indentation
volumes:
  hi:
  driver: azure_file
  driver_opts:
    share_name: docker-fileshare
    storage_account_name: cs210033fffa9b41a40
Correct solution
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: docker-fileshare
      storage_account_name: cs210033fffa9b41a40
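One way to catch this kind of indentation mistake early is to validate the file before deploying; docker compose config parses the file and prints the fully resolved configuration (or a parse error) without starting anything:
docker compose config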
Based on this Node-RED tutorial, I'm trying to mount an external volume with the Node-RED files outside the Docker machine. I'm using the following docker-compose file:
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
environment:
- TZ=Europe/Amsterdam
ports:
- "2000:1880"
networks:
- node-red-net
volumes:
- node-red-data:/home/user/node-red1
volumes:
node-red-data:
networks:
node-red-net:
However, even though this file works fine when I run docker-compose up, the volume exists only inside the docker machine. I've tried adding the line external: true in volumes but I get the following error:
ERROR: In file './docker-compose.yml', volume 'external' must be a mapping not a boolean.
What am I missing? How do I mount an external volume using docker-compose files?
I ended up finding a related question with this answer. There are multiple answers that didn't work for me there (also there's no accepted answer). The syntax that worked was:
node-red-data:
driver: local
driver_opts:
o: bind
type: none
device: /path/external/folder
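As an aside, the external: true error quoted above happens because external has to be nested under the volume's name rather than listed at the top level of volumes, and an external volume must already exist (for example, created with docker volume create). For reference:
volumes:
  node-red-data:
    external: true
The bind-mount driver_opts shown above avoid that requirement entirely.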
So the final docker-compose file that works after running docker-compose up is:
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
environment:
- TZ=Europe/Amsterdam
ports:
- "2000:1880"
networks:
- node-red-net
volumes:
- node-red-data:/data
container_name: node-red
volumes:
node-red-data:
driver: local
driver_opts:
o: bind
type: none
device: "/home/user/node-red1"
networks:
node-red-net:
Update
If we don't mind forgoing the named volume, the following solution (a plain bind mount) also works fine:
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
environment:
- TZ=Europe/Amsterdam
ports:
- "2000:1880"
volumes:
- /home/user/node-red1:/data
container_name: node-red
I extended my service, which should now launch 130 Docker containers instead of the previous 40.
However, after about 50 containers, docker-compose just stops launching more and does not show an error message (with both the 2.4 and 3.7 compose file versions).
I found a workaround by splitting the setup into several docker-compose files with 50 services each, but this is not very elegant.
Is there a setting or another way to solve this and launch all 130 containers from a single docker-compose file?
Here is the compose file, showing 2 of the sample services.
docker-compose version 1.23.1, build b02f1306
docker-compose.yml:
version: "2.4"
networks:
proxy-tier:
external:
name: nginx-proxy
volumes:
data:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/home/geoFrontend2'
logs:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/home/logs_geoFrontend'
ancestors:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/home/ancestors'
services:
myapp_4:
extends:
file: utils.yml
service: shiny-server
ports:
- "3004:3838"
environment:
- "VIRTUAL_PORT=3004"
- "VIRTUAL_HOST=myapp4.mydomain.com"
myapp_5:
extends:
file: utils.yml
service: shiny-server
ports:
- "3005:3838"
environment:
- "VIRTUAL_PORT=3005"
- "VIRTUAL_HOST=myapp5.mydomain.com"
utils.yml
version: "2.4"
networks:
proxy-tier:
external:
name: nginx-proxy
services:
shiny-server:
image: shiny:latest
environment:
- "VIRTUAL_NETWORK=nginx-proxy"
volumes:
- data:/srv/shiny-server/
- logs:/var/log/shiny-server/
- ancestors:/srv/shiny-server/www/ancestors/
networks:
- proxy-tier
restart: always
mem_limit: 500m
mem_reservation: 100m
The solution was to increase the Compose HTTP timeout before bringing the stack up:
export COMPOSE_HTTP_TIMEOUT=300000
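If the variable should apply on every run, it can also be placed in a .env file next to docker-compose.yml, which docker-compose reads automatically:
# .env in the project directory
COMPOSE_HTTP_TIMEOUT=300000
Then bring the stack up as usual with docker-compose up -d.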