I need help with the correct config for my Pi-hole Docker/Portainer Stack...
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - ./etc-pihole:/opt/pihole/etc/pihole
      - ./etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
volumes:
  ./etc-pihole:
  ./etc-dnsmasq.d:
I think my error is that I should be mapping absolute paths for the two volumes. Can someone explain what "./" would refer to from inside the Docker container?
For reference, this is where I got the Pi-hole docker-compose stack config: https://hub.docker.com/r/pihole/pihole
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - ./etc-pihole:/opt/pihole/etc/pihole
      - ./etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
It's not necessary to use named volumes declared under the top-level volumes key. But if you do want it that way, you could try this (note that the top-level volumes section is a mapping, not a list, so the entries take no leading dashes):
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 80:80/tcp
    volumes:
      - etc-pihole:/opt/pihole/etc/pihole
      - etc-dnsmasq.d:/opt/pihole/etc/dnsmasq.d
volumes:
  etc-pihole:
  etc-dnsmasq.d:
Check the docs if I didn't get your point correctly, or for a more detailed explanation.
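To the original question about "./": a relative path in a bind mount is resolved on the host, relative to the directory containing the compose file (the project directory), never inside the container. A quick way to see what it expands to (a sketch; the example path is illustrative, and note that in a Portainer stack the "project directory" is wherever Portainer stores the stack file, which is why absolute paths are often safer there):

# Run from the directory containing docker-compose.yml.
# 'docker compose config' prints the fully resolved file, with
# relative bind-mount sources expanded to absolute host paths:
docker compose config

# e.g. ./etc-pihole would show up as something like
# /home/you/pihole-stack/etc-pihole  (host side), mapped into the container.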
Related
I installed Portainer via Docker Compose. I followed basic directions where I created a portainer_data Docker volume:
docker volume create portainer_data
Then I used the following Docker Compose file to setup portainer and portainer agent:
version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
But then I see this inside Portainer when I look at the volumes:
What is wrong in my configuration that it creates a new volume and ignores the one I set up?
Thanks.
Docker by default prepends the project name to the volume name; this is expected behaviour (https://forums.docker.com/t/docker-compose-prepends-directory-name-to-named-volumes/32835). To avoid it you have two options: set the project name when you run docker-compose up, or update the docker-compose.yml file to give the volume an explicit name:
version: '3.4'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
    name: portainer_data
    external: false
I want to make my NiFi data and configuration persist: even if I delete the container and run docker-compose up again, I would like to keep what I have built so far in NiFi. I tried mounting volumes as follows in the volumes section of my compose file, but it doesn't work and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
  nifi:
    image: koroslak/nifi:latest
    container_name: nifi
    restart: always
    environment:
      - NIFI_HOME=/opt/nifi/nifi-current
      - NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
      - NIFI_PID_DIR=/opt/nifi/nifi-current/run
      - NIFI_BASE_DIR=/opt/nifi
      - NIFI_WEB_HTTP_PORT=8080
    ports:
      - 9000:8080
    depends_on:
      - openldap
    volumes:
      - ./volume/nifi-current/state:/opt/nifi/nifi-current/state
      - ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
      - ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
      - ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
      - ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
      - ./volume/log:/opt/nifi/nifi-current/logs
      #- ./volume/conf:/opt/nifi/nifi-current/conf
  postgres:
    image: koroslak/postgres:latest
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=secret123
    ports:
      - 6000:5432
    volumes:
      - postgres:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:4.18
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - 8090:80
  metabase:
    container_name: metabase
    image: metabase/metabase:v0.34.2
    restart: always
    environment:
      MB_DB_TYPE: postgres
      MB_DB_DBNAME: metabase
      MB_DB_PORT: 5432
      MB_DB_USER: metabase_admin
      MB_DB_PASS: secret123
      MB_DB_HOST: postgres
    ports:
      - 3000:3000
    depends_on:
      - postgres
  openldap:
    image: osixia/openldap:1.3.0
    container_name: openldap
    restart: always
    ports:
      - 38999:389
  # Mocked source systems
  jira-api:
    image: danielgtaylor/apisprout:latest
    container_name: jira-api
    restart: always
    ports:
      - 8000:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
  pipedrive-api:
    image: danielgtaylor/apisprout:latest
    container_name: pipedrive-api
    restart: always
    ports:
      - 8100:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
  restcountries-api:
    image: danielgtaylor/apisprout:latest
    container_name: restcountries-api
    restart: always
    ports:
      - 8200:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
  postgres:
  nifi:
  openldap:
  metabase:
  pgadmin:
Using NiFi Registry you can ensure that all changes you make in your NiFi are committed to git. I.e., if you change some processor configuration, it will be reflected in your git repo.
As for flow files, you may need to fix your volume mappings.
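On the volume mappings: the commented-out conf mount is a likely culprit, since NiFi 1.x stores the flow itself (your processors) in conf/flow.xml.gz, so without persisting conf the canvas is lost on container removal. A sketch using named volumes instead of bind mounts (volume names are illustrative; container paths are the ones from the question):

services:
  nifi:
    image: koroslak/nifi:latest
    volumes:
      - nifi_state:/opt/nifi/nifi-current/state
      - nifi_flowfile:/opt/nifi/nifi-current/repositories/flowfile_repository
      - nifi_content:/opt/nifi/nifi-current/repositories/content_repository
      # conf holds flow.xml.gz, where the processor layout lives:
      - nifi_conf:/opt/nifi/nifi-current/conf

volumes:
  nifi_state:
  nifi_flowfile:
  nifi_content:
  nifi_conf:

Note that named volumes referenced by a service must also be declared under the top-level volumes key, as above; declaring them there alone (as in the question, where only postgres is actually used) has no effect.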
For the past 3 days I have been trying to connect two Docker containers generated by two separate docker-compose files.
I have tried a lot of approaches, but none seem to be working. Currently, after adding a network to the compose files, the service with the added network doesn't start. My goal is to access an endpoint from another container.
Endpoint-API compose file:
version: "3.1"
networks:
  test:
services:
  mariadb:
    image: mariadb:10.1
    container_name: list-rest-api-mariadb
    working_dir: /application
    volumes:
      - .:/application
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=list-api
      - MYSQL_USER=list-api
      - MYSQL_PASSWORD=root
    ports:
      - "8003:3306"
  webserver:
    image: nginx:stable
    container_name: list-rest-api-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8005:80"
    networks:
      - test
  php-fpm:
    build: docker/php-fpm
    container_name: list-rest-api-php-fpm
    working_dir: /application
    volumes:
      - .:/application
      - ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Consumer compose file:
version: "3.1"
networks:
  test:
services:
  consumer-mariadb:
    image: mariadb:10.1
    container_name: consumer-service-mariadb
    working_dir: /consumer
    volumes:
      - .:/consumer
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=consumer
      - MYSQL_USER=consumer
      - MYSQL_PASSWORD=root
    ports:
      - "8006:3306"
  consumer-webserver:
    image: nginx:stable
    container_name: consumer-service-webserver
    working_dir: /consumer
    volumes:
      - .:/consumer
      - ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8011:80"
    networks:
      - test
  consumer-php-fpm:
    build: docker/php-fpm
    container_name: consumer-service-php-fpm
    working_dir: /consumer
    volumes:
      - .:/consumer
      - ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Could someone tell me the best way of accessing the API container from within the consumer? I'm losing my mind on this one.
Your two "networks: test:" definitions don't refer to the same network; each compose file creates its own independent network, and the two don't talk to each other.
What you could do is have an external, pre-existing network that both compose files share. You can create it with docker network create, and then refer to it in each compose file with:
networks:
  default:
    external:
      name: some-external-network-here
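The full sequence, sketched as commands (the network name matches the snippet above; the compose file paths are illustrative):

# Create the shared network once, outside of compose:
docker network create some-external-network-here

# Then bring up each stack; with the "networks: default: external:" snippet
# in both files, every service in both stacks joins the same network:
docker-compose -f endpoint-api/docker-compose.yml up -d
docker-compose -f consumer/docker-compose.yml up -d

# Containers can now reach each other by service name across the two
# stacks, e.g. http://webserver:80 from inside a consumer container.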
I need to use an nginx reverse proxy, so I'm using jwilder/nginx-proxy.
I'm also running GitLab as a Docker container.
I came up with this docker-compose file, but accessing ci.server.com gives me a `502 Bad Gateway` error.
I need some help to set up the correct ports for this container.
version: '3.3'
services:
  nginx:
    container_name: 'nginx'
    image: jwilder/nginx-proxy:alpine
    restart: 'always'
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
    environment:
      VIRTUAL_HOST: ci.server.com
      VIRTUAL_PORT: 50022
Before I switched to the nginx reverse proxy I used the docker-compose setup below, which was working, and I don't see the difference or the mistake I made in converting it.
old
version: '3.3'
services:
  nginx:
    container_name: 'nginx'
    image: 'nginx:1.13.5'
    restart: 'always'
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/opt/nginx/conf.d:/etc/nginx/conf.d:ro'
      - '/opt/nginx/conf/nginx.conf:/etc/nginx/nginx.conf:ro'
      - '/etc/letsencrypt:/etc/letsencrypt'
    links:
      - 'gitlab'
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
You should set VIRTUAL_PORT: 80 in your environment.
With VIRTUAL_PORT: 50022 the proxy is trying to forward HTTP traffic to the SSH port, hence the 502.
To use SSL with jwilder/nginx-proxy you can look here.
For example, I use this:
version: '3.3'
services:
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
    environment:
      - VIRTUAL_HOST=ci.server.com
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=ci.server.com
      - LETSENCRYPT_EMAIL=youremail
I have a docker-compose.yml file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper:3.4
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka:latest
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:latest
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
Running docker-compose up -d creates containers for all the images.
Please note that I have already built all the images locally, so none of them are pulled from a registry when I run docker-compose.
All the containers start successfully, but they cannot communicate with each other, even though I used links in my docker-compose.yml to connect them. The links option does not seem to be working for me: Kafka is not able to communicate with ZooKeeper (I used links to link the two).
In short, why is the links option not working?
Or am I going wrong somewhere?
Please point me in the right direction.
Note: all the containers work separately, but they are not able to communicate with each other.
The issue is that you are linking your containers improperly. To link to a container in another service, either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name. See the Docker Compose documentation for further information. Corrected compose file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
You can also specify the link with an alias like so:
...
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:mysqldb
  environment:
    runMode: dev
...
Links are to the service name, not to the image name:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
So, for example, you can point to zookeeper from kafka like this:
zookeeper:2181
PS: You don't need to publish ports if you only use inter-container connections (as in the example above). You publish ports when you need to reach a service port from your localhost.
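As a side note, links is a legacy feature: in version 2+ compose files, all services in a file join a default network and already resolve each other by service name, so the links entries can usually be dropped entirely. A minimal sketch of the zookeeper/kafka pair in that style (same images as above; depends_on only controls start order, not readiness):

version: "2"
services:
  zookeeper:
    image: zookeeper:3.4
    ports:
      - 2181:2181
  kafka:
    image: ches/kafka:latest
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    # inside this container, ZooKeeper is reachable as zookeeper:2181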