Why is my Portainer Docker Compose creating a new volume?

I installed Portainer via Docker Compose. I followed the basic directions, where I first created a portainer_data Docker volume:
docker volume create portainer_data
Then I used the following Docker Compose file to set up Portainer and the Portainer agent:
version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
But when I look at the volumes inside Portainer, I see a newly created volume with a prefixed name instead of the one I made. What is wrong in my configuration that it creates a new volume and ignores the one I set up?
Thanks.

Docker Compose by default prepends the project name to volume names; this is expected behaviour (see https://forums.docker.com/t/docker-compose-prepends-directory-name-to-named-volumes/32835). To avoid it you have two options: set the project name when you run docker-compose up, or update the docker-compose.yml file to use the external volume you created:
version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
    external: true
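For the first option, the project name can be pinned so the prefix is at least predictable, either per invocation with docker-compose -p portainer up -d, or via an .env file next to the compose file. A minimal sketch (the value "portainer" is illustrative):

```
COMPOSE_PROJECT_NAME=portainer
```

Note that this only fixes the prefix (the volume becomes portainer_portainer_data); only the external-volume route reuses the exact volume created earlier with docker volume create.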

Related

Pi-hole docker Portainer Stack volume config

I need help with the correct config for my Pi-hole Docker/Portainer Stack...
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - ./etc-pihole:/opt/pihole/etc/pihole
      - ./etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
volumes:
  ./etc-pihole:
  ./etc-dnsmasq.d:
I think my error is that I should be mapping absolute paths for the two volumes. Can someone please educate me on what the "./" references from inside the Docker container?
For reference, here is where I got the Pi-hole Docker Compose stack config from: https://hub.docker.com/r/pihole/pihole
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - ./etc-pihole:/opt/pihole/etc/pihole
      - ./etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
It's not necessary to make these named volumes listed under the top-level volumes key. But if you do want it that way, you could try this:
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - etc-pihole:/opt/pihole/etc/pihole
      - etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
volumes:
  etc-pihole:
  etc-dnsmasq.d:
Check the docs if I didn't get your point correctly, or for a more detailed explanation.
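As to the "./" question: a relative host path in a bind mount resolves relative to the directory containing the compose file on the host, not to anything inside the container (and for a stack deployed through Portainer, that directory is Portainer's own stack storage), which is why absolute host paths are usually safer there. A sketch with absolute paths; the host-side paths are illustrative, and the container-side paths follow the pihole Docker Hub example:

```yaml
services:
  pihole:
    volumes:
      # host side: absolute paths on the machine running Docker (illustrative)
      # container side: paths the pihole image documents on its Docker Hub page
      - /home/pi/pihole/etc-pihole:/etc/pihole
      - /home/pi/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
```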

Docker uses an undefined network

Hi Stack Overflow fellows,
I am facing an issue while running docker-compose up, which runs Jenkins locally. The complete docker-compose file is as follows.
version: '2.3'
services:
  jenkins:
    container_name: jenkins
    build: ./master
    image: jenkins_casc
    environment:
      - CASC_JENKINS_CONFIG=/var/jenkins_casc/jenkins.yaml
      - SECRETS=/var/jenkins_casc/secrets
    ports:
      - "8080:8080"
    volumes:
      - jenkins_master_home:/var/jenkins_home
  jenkins_slave_docker:
    container_name: jenkins_agent_docker
    build: ./agent
    image: jenkins_agent_docker
    init: true
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0xJ5n9MY0PFBR/aCHSb8JBQgbIUo0C/bPlaxM9v0uCT2CQJvNyrHUfJKaM9wJsdT7wdKBUIvhODdfoE7kc59j0WpO5TQ5Q2MeG7fpQAalM0ATwv/o7hCTvWev5gpJPSsIg9N/+VusO2R4V1H7LpZm65hHL/0lt9SmvtZzQBR+lt5IhrliEMZpo1UdNql/ueR6Em3mFW/tJvprBD445xTa0kxACGXdMI3nF2+SF49oXhTPjNFKSJilWDsoWzf9swyIf1vbH6zr3slMm7jUvOSCC3gGcqNrSG9Y3wkBzqUDe20CjbeAHMq490xlkGQeg9BAByTvn9uOU7ym3mMUnkKR
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_TLS_VERIFY=1
    restart: on-failure
    depends_on:
      - jenkins
    volumes:
      - jenkins-docker-certs:/certs/client:ro
      - jenkins_slave_docker_workdir:/home/jenkins:z
      - jenkins_slave_docker:/home/jenkins/.jenkins
  docker:
    container_name: docker
    networks:
      - harbor
    image: docker:dind
    command: ["--insecure-registry=proxy:8080"]
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - jenkins-docker-certs:/certs/client
      - jenkins_slave_docker_workdir:/home/jenkins:z
    privileged: true
volumes:
  jenkins_master_home:
  jenkins_slave_docker:
  jenkins-docker-certs:
  jenkins_slave_docker_workdir:
The error is as follows:
ERROR: Service "docker" uses an undefined network "harbor"
Everything is correct!
You need to define the harbor network in your docker-compose file. It may be just a "simple" bridge network that docker-compose creates automatically on your behalf, or you can define it as an "external" network in case it already exists.
networks:
  harbor:
    external:
      name: harbor
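If the harbor network does not already exist, the simpler route is to let Compose create it by declaring it as a plain bridge network. A minimal sketch:

```yaml
networks:
  harbor:
    driver: bridge
```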

Docker does not support storing secrets on Windows home system using Docker toolbox

Using Docker Toolbox on Windows 10 Home (Docker version 19.03), we created a docker-compose.yml and added a secrets file as JSON. It runs fine on a Mac system, but we are unable to run the same on Windows 10 Home.
Error after running docker-compose up:
ERROR: for orthancserver Cannot create container for service orthanc: invalid mount config for type
"bind": invalid mount path: 'C:/Users/ABC/Desktop/Project/orthanc.json' mount path must be absolute
docker-compose.yml:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
    secrets:
      - orthanc.json
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
secrets:
  orthanc.json:
    file: orthanc.json
The orthanc.json file is kept next to docker-compose.yml.
I found an alternative solution for Windows 10 Home with Docker Toolbox. As commented by @Schwarz54, file sharing works well with a Docker volume for the Dockerized Orthanc server.
Add a shared folder:
1. Open the Oracle VM manager
2. Go to the settings of the default VM
3. Click Shared Folders
4. Add the C:\ drive to the list
Then edit docker-compose.yml to pass the config file to Orthanc via a volume:
version: "3.7"
services:
  orthanc:
    image: jodogne/orthanc-plugins:1.6.1
    command: /run/secrets/
    container_name: orthancserver
    restart: always
    ports:
      - "4242:4242"
      - "8042:8042"
    networks:
      - mynetwork
    volumes:
      - /tmp/orthanc-db/:/var/lib/orthanc/db/
      - /c/Users/ABCUser/Desktop/Project/orthanc.json:/etc/orthanc/orthanc.json:ro
  dcserver:
    build: ./dc_node_server
    depends_on:
      - orthanc
    container_name: dcserver
    restart: always
    ports:
      - "5001:5001"
    networks:
      - mynetwork
    volumes:
      - localdb:/database
volumes:
  localdb:
    external: true
networks:
  mynetwork:
    external: true
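One caveat, as an assumption rather than something stated in the post: the service's command: still points at /run/secrets/, so with the bind mount in place it may need to point at the mounted file instead, along these lines:

```yaml
services:
  orthanc:
    # point the server at the bind-mounted config instead of the secrets dir
    command: /etc/orthanc/orthanc.json
```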

New containers accessing volume on preexisting container

I have a 'master' container that should already be running when starting all the others.
In it I have a conf/ directory that this service is monitoring, applying the relevant changes.
How can I have each new container drop a file in this directory?
Real scenario:
Given my docker-compose.yml below, I want each service (portainer, whoami, apache) to drop a .yml file in the "./traefik/conf/:/etc/traefik/conf/" path mapping of the traefik service.
docker-compose.yml
version: "3.5"
services:
  traefik:
    image: traefik
    env_file: ./traefik/env
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/conf/:/etc/traefik/conf/
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml
  portainer:
    image: portainer/portainer
    depends_on: [traefik]
    command: --no-auth -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  whoami:
    image: containous/whoami
    depends_on: [traefik]
portainer.traefik.yml
http:
  routers:
    portainer:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: portainer-preauth#docker
      rule: Host(`portainer.docker.mydomain`)
whoami.traefik.yml
http:
  routers:
    whoami:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: whoami-preauth#docker
      rule: Host(`whoami.docker.mydomain`)
Where are the files portainer.traefik.yml and whoami.traefik.yml located? If they are on the host machine, you can directly copy them to ./traefik/conf/. – Shashank V
The thing is, I can't have all the files in traefik/conf.
That would require manually dropping a file there every time I create a new image.
I believe every service should be responsible for its own files.
Also, when traefik starts and finds files of other services that haven't started yet, it logs lots of errors.
To avoid this behavior, I would like to put the file there only when the container is started.
Below is the project file structure.
You can use a volume across all services. Just define it in your docker-compose.yml and assign it to each service:
version: "3.5"
services:
  traefik:
    image: traefik
    env_file: ./traefik/env
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/conf/:/etc/traefik/conf/
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml
      - foo:/path/to/share/
  portainer:
    image: portainer/portainer
    depends_on: [traefik]
    command: --no-auth -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - foo:/another/path/to/share/
  whoami:
    image: containous/whoami
    depends_on: [traefik]
    volumes:
      - foo:/and/another/path/
volumes:
  foo:
    driver: local
This is the closest equivalent to the --volumes-from feature of "plain" Docker.
Your master container would then have to use the same volume. If that container doesn't run within the same Docker Compose context, you have to define the volume as external beforehand.
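In that external case, the volume can be created once up front with docker volume create foo and then referenced from the compose file. A sketch, reusing the illustrative name foo from above:

```yaml
volumes:
  foo:
    external: true
```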

docker-compose - networks - /etc/hosts is not updated

I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have some services including, for example, elasticsearch, rabbitmq and a webapp.
My problem is that a service cannot access another service by its hostname, because docker-compose does not put all service hosts in the /etc/hosts file. I don't know their IPs because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all service hosts into each service's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml;
version: '2'
services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp
  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp
  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp
  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp
  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp
networks:
  webapp:
    driver: bridge
This is how I understand that they can't communicate with each other:
I get this error on elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when it starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
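A minimal sketch of such an entrypoint script, assuming a POSIX shell and getent available in the image (the hostnames and retry limits are illustrative):

```shell
#!/bin/sh
# Wait until each given hostname resolves in DNS, polling with a retry cap,
# so the real process can be exec'd only once its peers are reachable.
wait_for_hosts() {
  for host in "$@"; do
    attempts=0
    # getent hosts succeeds once Docker's embedded DNS knows the container
    until getent hosts "$host" >/dev/null 2>&1; do
      attempts=$((attempts + 1))
      if [ "$attempts" -ge 30 ]; then
        echo "timed out waiting for $host" >&2
        return 1
      fi
      sleep 2
    done
  done
}

# Illustrative usage at the end of the entrypoint:
# wait_for_hosts elasticsearch1 elasticsearch2 elasticsearch3
# exec elasticsearch "$@"
```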