When running docker-compose up -d, I expect 2 databases to be created.
docker-compose.yml:
version: '3.4'
volumes:
  db_data:
services:
  postgres:
    image: postgres:alpine
    environment:
      - POSTGRES_PASSWORD=Password123
      - POSTGRES_DB=database1
    ports:
      - "5432:5432"
  platform:
    image: image1/platform:${TAG:-latest}
    build:
      context: .
      dockerfile: PlatformApi/Dockerfile
    restart: on-failure
    environment:
      - ASPNETCORE_ENVIRONMENT=Local
      - ConnectionStrings__DefaultConnection=Server=postgres;Port=5432;Uid=postgres;Pwd=Password123;Database=database1
    ports:
      - "5001:80"
    depends_on:
      - postgres
    volumes:
      - .docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
      - db_data:/var/lib/mysql
  identity:
    image: image2/identity:${TAG:-latest}
    build:
      context: .
      dockerfile: Identity/Dockerfile
    restart: on-failure
    environment:
      - ASPNETCORE_ENVIRONMENT=Local
      - ConnectionStrings__DefaultConnection=Server=postgres;Port=5432;Uid=postgres;Pwd=Password123;Database=database2
    ports:
      - "5002:80"
    depends_on:
      - postgres
    volumes:
      - .docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
      - db_data:/var/lib/mysql
This is my setup.sql file, which is located inside the .docker folder:
CREATE DATABASE IF NOT EXISTS database1;
CREATE USER postgres IDENTIFIED BY Password123;
GRANT CREATE, ALTER, INDEX, LOCK TABLES, REFERENCES, UPDATE, DELETE, DROP, SELECT, INSERT ON database1.* TO postgres;
CREATE DATABASE IF NOT EXISTS database2;
CREATE USER postgres IDENTIFIED BY Password123;
GRANT CREATE, ALTER, INDEX, LOCK TABLES, REFERENCES, UPDATE, DELETE, DROP, SELECT, INSERT ON database2.* TO postgres;
FLUSH PRIVILEGES;
When I run docker-compose up -d, three containers are created, but one of them exits with the error database "database2" does not exist.
What did I do wrong? Did the setup.sql file not execute, or is its content incorrect?
In the postgres service you initialize the database with the following config:
postgres:
  image: postgres:alpine
  environment:
    - POSTGRES_PASSWORD=Password123
    - POSTGRES_DB=database1
  ports:
    - "5432:5432"
As a result, a single database named database1 is created.
In the following services you then try to connect to that database.
In the platform service:
- ConnectionStrings__DefaultConnection=...;Database=database1
Here there is no issue, since database1 exists.
But in the identity service:
- ConnectionStrings__DefaultConnection=...;Database=database2
you try to connect to database2, which does not exist.
The reason it does not exist is that your setup.sql never runs: it is mounted into the platform and identity containers, but only the postgres image's entrypoint executes scripts from /docker-entrypoint-initdb.d, and only on the first initialization of an empty data directory. The script is also written in MySQL syntax (CREATE DATABASE IF NOT EXISTS, IDENTIFIED BY, FLUSH PRIVILEGES), which Postgres would reject anyway. So only database1, created via POSTGRES_DB, ever exists; the setup.sql and db_data:/var/lib/mysql mounts in the application services are ineffective leftovers and can simply be removed.
To tackle this, you could add a postgres2 service that creates database2:
postgres2:
  image: postgres:alpine
  environment:
    - POSTGRES_PASSWORD=Password123
    - POSTGRES_DB=database2
  ports:
    - "5433:5432"
identity:
  image: image2/identity:${TAG:-latest}
  build:
    context: .
    dockerfile: Identity/Dockerfile
  restart: on-failure
  environment:
    - ASPNETCORE_ENVIRONMENT=Local
    - ConnectionStrings__DefaultConnection=Server=postgres2;Port=5432;Uid=postgres;Pwd=Password123;Database=database2
  ports:
    - "5002:80"
  depends_on:
    - postgres2
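As an alternative sketch (not part of the answer above), you could keep a single Postgres container and let its init mechanism create the second database. That means mounting the script into the postgres service rather than into the application services, and rewriting it in Postgres dialect, since Postgres supports neither CREATE DATABASE IF NOT EXISTS nor FLUSH PRIVILEGES:
postgres:
  image: postgres:alpine
  environment:
    - POSTGRES_PASSWORD=Password123
    - POSTGRES_DB=database1
  ports:
    - "5432:5432"
  volumes:
    # Only the postgres image's entrypoint executes scripts from this
    # directory, and only on first initialization of an empty data volume.
    - ./.docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
with setup.sql reduced to:
-- database1 already exists via POSTGRES_DB; only the second one is needed.
CREATE DATABASE database2;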
I want my NiFi data volume and configuration to persist: even if I delete the container and run docker-compose up again, I would like to keep what I have built so far in NiFi. I tried to mount volumes as follows in the volumes section of my docker-compose file; nevertheless it doesn't work and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
nifi:
image: koroslak/nifi:latest
container_name: nifi
restart: always
environment:
- NIFI_HOME=/opt/nifi/nifi-current
- NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
- NIFI_PID_DIR=/opt/nifi/nifi-current/run
- NIFI_BASE_DIR=/opt/nifi
- NIFI_WEB_HTTP_PORT=8080
ports:
- 9000:8080
depends_on:
- openldap
volumes:
- ./volume/nifi-current/state:/opt/nifi/nifi-current/state
- ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
- ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
- ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
- ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
- ./volume/log:/opt/nifi/nifi-current/logs
#- ./volume/conf:/opt/nifi/nifi-current/conf
postgres:
image: koroslak/postgres:latest
container_name: postgres
restart: always
environment:
- POSTGRES_PASSWORD=secret123
ports:
- 6000:5432
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4:4.18
restart: always
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- 8090:80
metabase:
container_name: metabase
image: metabase/metabase:v0.34.2
restart: always
environment:
MB_DB_TYPE: postgres
MB_DB_DBNAME: metabase
MB_DB_PORT: 5432
MB_DB_USER: metabase_admin
MB_DB_PASS: secret123
MB_DB_HOST: postgres
ports:
- 3000:3000
depends_on:
- postgres
openldap:
image: osixia/openldap:1.3.0
container_name: openldap
restart: always
ports:
- 38999:389
# Mocked source systems
jira-api:
image: danielgtaylor/apisprout:latest
container_name: jira-api
restart: always
ports:
- 8000:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
pipedrive-api:
image: danielgtaylor/apisprout:latest
container_name: pipedrive-api
restart: always
ports:
- 8100:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
restcountries-api:
image: danielgtaylor/apisprout:latest
container_name: restcountries-api
restart: always
ports:
- 8200:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
postgres:
nifi:
openldap:
metabase:
pgadmin:
Using NiFi Registry you can ensure that all changes you make in NiFi are committed to git, i.e. if you change some processor configuration, it will be reflected in your git repo.
As for the flow itself, you may need to fix your volume mappings; see the sketch below.
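A sketch of what fixed mappings could look like, assuming the image follows the stock apache/nifi layout (the repositories live directly under NIFI_HOME, and the flow you build in the UI is stored in conf/flow.xml.gz; verify these paths for koroslak/nifi):
nifi:
  volumes:
    # The processors you configure are saved in conf/flow.xml.gz,
    # so conf/ must be persisted; this mount was commented out above.
    - ./volume/conf:/opt/nifi/nifi-current/conf
    # In the stock layout the repositories sit directly under
    # NIFI_HOME, not under a repositories/ subdirectory.
    - ./volume/database_repository:/opt/nifi/nifi-current/database_repository
    - ./volume/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
    - ./volume/content_repository:/opt/nifi/nifi-current/content_repository
    - ./volume/provenance_repository:/opt/nifi/nifi-current/provenance_repository
    - ./volume/state:/opt/nifi/nifi-current/state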
I have a docker-compose file which contains a bunch of services, and I now want to add another service to it. The other service's files (including its .env) are stored in another folder. I tried to build it as shown below, but it isn't working. Where do I go wrong?
The docker-compose.yml is contained in the directory nft-trading-server; the other Dockerfile, which I am trying to include in this docker-compose.yml, is in its own folder, nft-asset-updater.
So the structure looks like this:
root/nft-trading-server (holding docker-compose.yml)
root/nft-asset-updater (holding its own Dockerfile and .env)
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
asset_update_service:
env_file:
- .env
build:
context: ../nft-asset-updater
dockerfile: .
ports:
- '9000:9000'
depends_on:
- postgres
networks:
- postgres
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/var/lib/postgresql/data
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
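Two details in this file are worth checking. The dockerfile key must name a file relative to the build context, so dockerfile: . points at a directory instead of a Dockerfile. And env_file paths resolve relative to the directory holding the docker-compose.yml, so - .env reads nft-trading-server/.env rather than the updater's file. A sketch of the service with both adjusted:
asset_update_service:
  env_file:
    - ../nft-asset-updater/.env  # the sibling project's .env, not the local one
  build:
    context: ../nft-asset-updater
    dockerfile: Dockerfile       # a file name relative to the context, not "."
  ports:
    - '9000:9000'
  depends_on:
    - postgres
  networks:
    - postgres
  extra_hosts:
    - "host.docker.internal:host-gateway"
  restart: always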
I know that this question has a lot of answers on Stack Overflow, but I didn't find a solution for my case!
I am moving a Laravel app into containers.
I CAN CONNECT TO THE MARIADB INSTANCE FROM OUTSIDE THE DOCKER NETWORK, BUT NOT FROM INSIDE!
(I can connect via MySQL Workbench and locally via docker exec; I can restore the dump from the container console and access the DB data from outside.)
What's wrong?
Why does the app not work (PHP has no access to mariadb via the internal app_network), while at the same time I can access the DB from outside and from inside the container itself?
OS: CentOS 7.9.2009
Docker: 20.10.12 (e91ed57)
Docker-compose: 1.29.2 (5becea4c)
The same configs work fine on Windows 10.
DOCKER COMPOSE CONFIG:
version: '3.9'
networks:
  app_network:
    driver: bridge
    name: ${NETWORK_NAME}
volumes:
  app:
    name: ${APP_VOLUME_NAME}
  mysql_database:
    name: ${MYSQL_DATABASE_VOLUME_NAME}
  mysql_dumps:
    name: ${MYSQL_DATABASE_DUMPS_VOLUME_NAME}
services:
  mariadb:
    image: mariadb
    env_file:
      - ./.env
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - ${MYSQL_EXTERNAL_PORT}:3306
    volumes:
      - mysql_database:/var/lib/mysql
      - mysql_dumps:/var/mysqldump
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      app_network:
        aliases:
          - mariadb
    profiles:
      - dev
      - prod
  php:
    restart: always
    env_file:
      - ./.env
    build:
      context: ../../
      dockerfile: ./.environment/cs/php/Dockerfile
      args:
        - USER_ID=${PHP_USER_ID}
        - GROUP_ID=${PHP_GROUP_ID}
        - DEFAULT_CONFIG_FILE=${PHP_DEFAULT_CONFIG_FILE}
        - CUSTOM_CONFIG_FILE=${PHP_CUSTOM_CONFIG_FILE}
        - PROJECT_FOLDER=${PHP_PROJECT_FOLDER}
    volumes:
      - ./php/logs:/var/log
      - ../../:${PHP_PROJECT_FOLDER}
    networks:
      app_network:
        aliases:
          - php
    depends_on:
      - memcached
      - mariadb
    profiles:
      - dev
      - prod
  nginx:
    restart: always
    env_file:
      - ./.env
    build:
      context: ../../
      dockerfile: ./.environment/cs/nginx/Dockerfile
      args:
        - CONFIG_FILE=${WEB_CONFIG_FILE}
        - PROJECT_FOLDER=${WEB_PROJECT_FOLDER}
    ports:
      - ${WEB_EXTERNAL_PORT}:80
    volumes:
      - ./nginx/logs:/var/log/nginx
      - ../../public:${WEB_PROJECT_FOLDER}:cached
    networks:
      app_network:
        aliases:
          - nginx
    depends_on:
      - php
    profiles:
      - dev
      - prod
Docker .ENV
NETWORK_NAME=CS
APP_VOLUME_NAME=CS_APP_STORAGE
MYSQL_DATABASE_VOLUME_NAME=CS_DATABASE
MYSQL_DATABASE_DUMPS_VOLUME_NAME=CS_DATABASE_DUMPS
MYSQL_EXTERNAL_PORT=3317
MYSQL_ROOT_PASSWORD=root
MYSQL_USER=client
MYSQL_PASSWORD=client
PHP_USER_ID=1000
PHP_GROUP_ID=1000
PHP_DEFAULT_CONFIG_FILE=php.ini-production
PHP_CUSTOM_CONFIG_FILE=./.environment/cs/php/custom.prod.ini
PHP_PROJECT_FOLDER=/var/www/app
WEB_EXTERNAL_PORT=127.0.0.1:8091
WEB_CONFIG_FILE=./.environment/cs/nginx/nginx.dev.conf
WEB_PROJECT_FOLDER=/var/www/app/public
Laravel .ENV
DB_CONNECTION=mysql
DB_HOST=mariadb
DB_PORT=3306
DB_DATABASE=client
DB_USERNAME=client
DB_PASSWORD=client
Try adding the expose config key to the mariadb service:
mariadb:
  # ...
  expose:
    - 3306
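Note that expose is mostly documentation: on a user-defined bridge network, containers can already reach any port their peers listen on. To test whether php actually reaches mariadb over app_network, a quick check (a sketch using PHP's built-in fsockopen, since the php image may not ship nc):
# Opens a TCP connection to mariadb:3306 from inside the php container.
docker-compose exec php php -r "var_dump(fsockopen('mariadb', 3306, \$code, \$msg, 3));"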
This turned out to be an issue with the MariaDB container itself. I connected to MariaDB via MySQL Workbench, fully removed the user, created a new one, and granted the schema privileges.
After that, everything works fine.
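In SQL, the fix described above could look roughly like this (a sketch; user, host, and database names follow the .env files shown earlier). The host part ('%') is often the culprit: a user limited to 'localhost' can log in from inside the container but not from other containers on the network:
-- Run as root against the MariaDB instance.
DROP USER IF EXISTS 'client'@'%';
CREATE USER 'client'@'%' IDENTIFIED BY 'client';
GRANT ALL PRIVILEGES ON client.* TO 'client'@'%';
FLUSH PRIVILEGES;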
Whenever I ran docker-compose up -d --build to start working on my project, it started up my environment just fine, up until yesterday.
Now, upon running docker-compose up -d --build, I get this annoying error: ERROR: Service 'app' depends on service 'db' which is undefined.
I'm not sure how this is happening out of nowhere, as I've made absolutely no changes whatsoever to the docker-compose.yml file. I've tried troubleshooting this extensively, but to no avail.
What's wrong with my file?
Here's my docker-compose.yml file:
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile.dev
working_dir: /var/www/
environment:
- COMPOSER_MEMORY_LIMIT=-1
depends_on:
- db
volumes:
- ./:/var/www
networks:
- lahmi
myApp:
image: mysql:5.7
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql
- ./docker-compose/mysql/my.cnf:/etc/mysql/my.cnf
- ./docker-compose/mysql/init:/docker-entrypoint-initdb.d
ports:
- 3307:3306
networks:
- lahmi
nginx:
image: nginx:alpine
ports:
- 8005:80
depends_on:
- db
- app
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- lahmi
networks:
lahmi:
driver: bridge
volumes:
dbdata:
driver: local
There is no service named db in your docker-compose.yml. Changing db to myApp (the database service) may work.
If the app service references the database as db, you must either use the links configuration to alias myApp as db, or rename the service myApp to db.
https://docs.docker.com/compose/compose-file/compose-file-v3/#links
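For instance, the rename variant (a sketch; only the service key changes, so the depends_on entries in app and nginx resolve again):
db:                # renamed from myApp, matching depends_on: - db
  image: mysql:5.7
  # ... rest of the original myApp service definition unchanged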
docker-compose.yml
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres:11.7
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
      PGDATA: /data/postgres
    # ./init.sql (for unix systems)
    # //docker/init.sql:/docker-entrypoint-initdb.d/init.sql - for Windows
    volumes:
      - postgres:/data/postgres
      - //docker/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    depends_on:
      - postgres
    restart: unless-stopped
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
When the container is brought up, the script should run.
init.sql
CREATE DATABASE example;
CREATE DATABASE test;
But no databases are created; I have to create them manually, through the console.
Does anyone have an idea why this is the case and how to fix it? (The script is confirmed to be mounted inside the container.)
Solution
I stopped and deleted all the containers.
Then deleted the volumes.
After that, I started docker-compose.yml again.
The databases were created.
Perhaps the first launch failed after the volumes had been created, and when I corrected the file, the database creation script was not executed on the second launch, since the volumes already existed for the current container. Thanks for the tip.
From the update, it appears that a previous startup of the database had been done. Once that happens, the volume gets initialized. And once the volume has data, the entrypoint for the database will not perform the initialization step again.
The solution is to stop the database, delete the volume with the bad database data, and then restart the database.
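In commands, that recovery could look like this (a sketch; down --volumes removes the named volumes declared in the compose file):
docker-compose down --volumes   # stop the containers and delete the volume holding the bad data
docker-compose up -d            # fresh start; the entrypoint runs init.sql against an empty volume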