docker compose up with customized volumes on Azure Container Instances

I am trying to docker compose up to Azure Container Instances, but nothing shows up and no Docker container is created, as shown below:
CCSU_ACA_COMP+tn3877@CCSU-ND-909264 MSYS ~/source/cab/cab-deployment (master)
$ docker compose up

CCSU_ACA_COMP+tn3877@CCSU-ND-909264 MSYS ~/source/cab/cab-deployment (master)
$ docker ps
CONTAINER ID   IMAGE   COMMAND   STATUS   PORTS
Following is my docker-compose.yaml file:
version: "3.8"
services:
  cassandra:
    image: cassandra:4.0.0
    ports:
      - "9042:9042"
    restart: unless-stopped
    volumes:
      - hi:/home/cassandra:/var/lib/cassandra
      - hi:/home/cassandra/cassandra.yaml:/etc/cassandra/cassandra.yaml
    networks:
      - internal
  cassandra-init-data:
    image: cassandra:4.0.0
    depends_on:
      - cassandra
    volumes:
      - hi:/home/cassandra/schema.cql:/schema.cql
    command: /bin/bash -c "sleep 60 && echo importing default data && cqlsh --username cassandra --password cassandra cassandra -f /schema.cql"
    networks:
      - internal
  postgres:
    image: postgres:13.3
    ports:
      - "5432:5432"
    restart: unless-stopped
    volumes:
      - hi:/home/postgres:/var/lib/postgresql/data
    networks:
      - internal
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "15672:15672"
      - "5672:5672"
    restart: unless-stopped
    networks:
      - internal
volumes:
  hi:
  driver: azure_file
  driver_opts:
    share_name: docker-fileshare
    storage_account_name: cs210033fffa9b41a40
networks:
  internal:
    name: cabvn
I have an Azure account and a file share as below.
I suspect the volume mount is the problem. Could anyone help me, please?

The problem is in the "volumes" object inside the YAML config.
Make sure to use indentation to represent the object hierarchy in YAML. This is a very common problem with YAML, and most of the time the error messages fail to point it out or are simply uninformative.
Previous solution with wrong indentation
volumes:
  hi:
  driver: azure_file
  driver_opts:
    share_name: docker-fileshare
    storage_account_name: cs210033fffa9b41a40
Correct solution
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: docker-fileshare
      storage_account_name: cs210033fffa9b41a40
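To see concretely what the indentation changes, here is a small illustrative Python sketch (not a real YAML parser; it assumes plain space indentation, and `option_nested_under` is a hypothetical helper of my own) that checks whether an option line is nested under a volume name:

```python
# Illustrative check only: an option such as "driver:" belongs to the
# "hi:" volume only if it is indented deeper than "hi:" itself.

def _indent(line: str) -> int:
    return len(line) - len(line.lstrip(" "))

def option_nested_under(yaml_text: str, name: str, option: str) -> bool:
    lines = yaml_text.splitlines()
    name_indent = next(_indent(l) for l in lines if l.strip() == name + ":")
    opt_indent = next(_indent(l) for l in lines if l.strip().startswith(option + ":"))
    return opt_indent > name_indent

wrong = "volumes:\n  hi:\n  driver: azure_file\n"
right = "volumes:\n  hi:\n    driver: azure_file\n"

print(option_nested_under(wrong, "hi", "driver"))  # False: driver is a sibling of hi
print(option_nested_under(right, "hi", "driver"))  # True: driver is an option of hi
```

In the wrong version, driver: sits at the same depth as hi:, so YAML reads it as a sibling key rather than as an option of the hi volume, and the azure_file driver configuration is silently lost.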

Related

docker-compose: service "gateway" refers to undefined volume ${PWD}/config/gateway/gateway-configuration.ini: invalid compose project

My goal: generate a docker-compose.yaml from docker-compose.yaml and docker-compose.override.yaml and keep the variables as they are, i.e. without interpolation.
I've tried to run
docker compose -f docker-compose.yaml -f docker-compose.override.yaml convert --no-interpolate > new-docker-compose.yaml
Here is my docker-compose.yaml:
version: "3.5"
services:
  redis-db:
    image: redislabs/rejson:2.0.11
    container_name: redis-db
    restart: unless-stopped
    volumes:
      - redis-storage-vol:/data
      - ${PWD}/config/redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
    ports:
      - 6379:6379
  runner:
    image: "${REPO}/runner/${RUNNER_CPU_IMAGE}:${RUNNER_CPU_TAG}"
    container_name: runner
    restart: unless-stopped
    volumes:
      - data-storage-vol:/data
      - ${PWD}/config/runner/runner-configuration.ini:/configuration.ini:ro
      - "${PWD}/solutions/${ALGO}:/home/scripts/algorithmic_solutions_list.txt:ro"
    depends_on:
      - "redis-db"
  gateway:
    image: "${REPO}/gateway/gateway-server:${GATEWAY_TAG}"
    container_name: gateway
    restart: unless-stopped
    volumes:
      - data-storage-vol:/data
      - ${PWD}/config/gateway/gateway-configuration.ini:/configuration.ini:ro
    ports:
      - 8000:8000
    depends_on:
      - "redis-db"
volumes:
  data-storage-vol:
    driver_opts:
      type: "tmpfs"
      device: "tmpfs"
      o: "size=5g,uid=1000"
  redis-storage-vol:
    driver: local
docker-compose.override.yaml
version: "3.5"
services:
  runner:
    image: "${REPO}/runner/${RUNNER_GPU_IMAGE}:${RUNNER_GPU_TAG}"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [ GPU ]
What I've tried:
Running docker compose convert without the --no-interpolate flag works, but the variables get interpolated.
Running it as in the example above fails with this error:
service "gateway" refers to undefined volume ${PWD}/config/gateway/gateway-configuration.ini: invalid compose project
I want to keep using docker compose commands and not edit files after they are created.
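For context on why the error appears: Compose decides whether a volume source is a bind mount or a named volume from the shape of the string. A simplified sketch of that heuristic (my own illustration, not Compose's actual code):

```python
def mount_kind(source: str) -> str:
    """Simplified version of how Compose classifies a volume source.

    Sources that look like filesystem paths become bind mounts; anything
    else is treated as a named volume, which must be declared under volumes:.
    """
    if source.startswith(("/", "./", "../", "~")):
        return "bind"
    return "volume"

# With --no-interpolate, "${PWD}/..." is left verbatim, so it no longer
# looks like a path and is treated as an (undeclared) named volume:
print(mount_kind("./config/gateway/gateway-configuration.ini"))       # bind
print(mount_kind("${PWD}/config/gateway/gateway-configuration.ini"))  # volume
print(mount_kind("data-storage-vol"))                                 # volume
```

That is why gateway's ${PWD}/... source is reported as an undefined volume once interpolation is disabled: the literal string starts with $, not with a path prefix.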

Docker does not support storing secrets on Windows home system using Docker toolbox

Using Docker Toolbox on Windows 10 Home (Docker version 19.03), we created a docker-compose.yml and added a secrets file as JSON. It runs fine on a Mac system, but it is unable to run on Windows 10 Home.
Error after running docker-compose up:
ERROR: for orthancserver Cannot create container for service orthanc: invalid mount config for type
"bind": invalid mount path: 'C:/Users/ABC/Desktop/Project/orthanc.json' mount path must be absolute
docker-compose.yml:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
    secrets:
      - orthanc.json
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
secrets:
  orthanc.json:
    file: orthanc.json
The orthanc.json file is kept next to docker-compose.yml.
I found an alternative solution for Windows 10 Home with Docker Toolbox. As commented by @Schwarz54, file sharing works well with a Docker volume for the Dockerized Orthanc server.
Add a shared folder:
Open the Oracle VM manager
Go to the settings of the default VM
Click Shared Folders
Add the C:\ drive to the list
Then edit docker-compose.yml to pass the config file to Orthanc via a volume:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
      - /c/Users/ABCUser/Desktop/Project/orthanc.json:/etc/orthanc/orthanc.json:ro
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
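The /c/Users/... form in the bind mount above is the POSIX-style path the Toolbox VM expects for a Windows path. A small sketch of the conversion (toolbox_path is my own illustrative helper, not part of Docker):

```python
import re

def toolbox_path(win_path: str) -> str:
    """Convert C:\\Users\\... to the /c/Users/... form used inside the
    Docker Toolbox VM. Illustrative only; assumes a simple drive-letter path.
    """
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", win_path)
    if not m:
        return win_path  # already POSIX-style or not a drive path
    drive, rest = m.groups()
    return "/" + drive.lower() + "/" + rest.replace("\\", "/")

print(toolbox_path(r"C:\Users\ABCUser\Desktop\Project\orthanc.json"))
# -> /c/Users/ABCUser/Desktop/Project/orthanc.json
```

This only works for paths under a folder that is actually shared with the VirtualBox VM, which is why the shared-folder step above is needed first.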

docker compose issue on windows 10

I have an opensource project cloned, and it has docker-compose.yml.
I execute
docker-compose up
But I see error:
ERROR: could not find plugin bridge in v1 plugin registry: plugin not found
I even tried a commonly mentioned solution on SO:
docker network create --driver nat network-name
But the issue still persists.
I understand that docker-compose is part of the Docker Desktop install on Windows.
How do I solve it?
Content of the file:
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    ports: ["9200:9200"]
    networks: ["sandbox"]
    environment: ["discovery.type=single-node"]
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.0
    ports: ["5601:5601"]
    networks: ["sandbox"]
    depends_on: ["elasticsearch"]
  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.0
    volumes:
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    networks: ["sandbox"]
    ports: ["5000:5000/udp"]
    depends_on: ["elasticsearch"]
  grafana:
    image: grafana/grafana:6.0.2
    volumes: ["./grafana/plugins/cinnamon-elasticsearch-app:/var/lib/grafana/plugins/cinnamon-elasticsearch-app"]
    ports: ["3000:3000"]
    networks: ["sandbox"]
    depends_on: ["elasticsearch"]
networks:
  sandbox:
    driver: bridge
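If the commonly suggested nat workaround is the route you take, the compose file's network driver has to match it; a sketch of that change (assuming the daemon is running Windows containers, where the Linux bridge driver is unavailable):

```yaml
networks:
  sandbox:
    driver: nat
```

Alternatively, switching Docker Desktop back to Linux containers lets the original bridge driver work unchanged.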

Nginx container won't build after running docker-compose up (volume device issue)

EDIT: For those working on a Mac, if you get the message below, it's because you're not pointing to the correct file path on your host machine. I was working on making sure everything built and ran locally, but eventually this will be hosted on a Linux box on DigitalOcean and I will have to change the file path.
Initially, I didn't understand the functionality behind Volumes but this video really cleared things up: https://www.youtube.com/watch?v=p2PH_YPCsis
I'm currently writing a Docker Compose YAML file to run services such as a Node.js app, MongoDB, and Nginx. While the app and db services build, the webserver gives me the error:
ERROR: for webserver Cannot start service webserver: b'Mounts denied: \r\nThe path /User/alan/test\r\nis not shared from OS X and is not known to Docker.\r\nYou can configure shared paths from Docker -> Preferences... -> File S
I'm not too sure why it's giving me this error, but I know it's related to the web-root volume's driver_opts device setting.
Can anyone point me in the right direction?
docker-compose.yml:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "81:8080"
    depends_on:
      - db
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon index.js
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
    ports:
      - "27017:27017"
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
      - db
    networks:
      - app-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email boyce.alan15@gmail.com --agree-tos --no-eff-email --staging -d bittap.io -d www.bittap.io
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /User/alan/test
      o: bind
  dbdata:
  node_modules:
networks:
  app-network:
    driver: bridge
Based on your error message, you are running macOS and Docker for Mac.
As David Maze said, you made a mistake in the device: attribute of the volumes section.
Update the volumes section like this:
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /Users/alan/test
      o: bind
Clear everything and retry it.
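A quick way to catch this class of mistake before running Compose is to check the device path on the host; validate_device below is my own illustrative helper, not part of Docker:

```python
import os

def validate_device(path: str) -> bool:
    """A bind-type volume (type: none, o: bind) needs an absolute host path
    that actually exists. /User/alan/test fails on macOS because the real
    home directories live under /Users, not /User."""
    return os.path.isabs(path) and os.path.isdir(path)

print(validate_device("/Users/alan/test"))  # True only if that directory exists
print(validate_device("relative/path"))     # False: not an absolute path
```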

How to set volume when using `docker stack deploy` swarm mode

I'm trying to persist data using a volume, but the data is not restored when I docker stack deploy. How do I set up the volume?
I'm running docker stack deploy -c compose-db.yml db.
This is my compose file.
compose-db.yml
version: '3'
services:
  redis:
    image: 172.16.12.154:5000/redis
    networks:
      pitbull-overlay:
        aliases:
          - redis
    volumes:
      - redis-volume:/data
    ports:
      - 6379:6379
  mongodb:
    image: 172.16.12.154:5000/mongodb
    networks:
      pitbull-overlay:
        aliases:
          - mongodb
    volumes:
      - mongodb-volume:/data/db
    ports:
      - 27017:27017
networks:
  pitbull-overlay:
    external:
      name: pitbull-overlay
volumes:
  mongodb-volume:
  redis-volume:
