I have a docker-compose.yml file that includes a container for API mocks, as well as phpmyadmin and mongo-express containers, none of which should be started in my production environment.
I already have separate .env files for production and development. Is it possible to use variables from the active .env file to disable a container?
Here is my docker-compose.yml:
version: "3"
services:
  mysql:
    build: ./docker/mysql
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    entrypoint: sh -c "/usr/local/bin/docker-entrypoint.sh --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci"
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_ONETIME_PASSWORD=yes
    ports:
      - 3306:3306
    restart: unless-stopped
    volumes:
      - ./data/mysql:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    environment:
      - PMA_HOST=mysql
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
  mongo:
    build: ./docker/mongo
    container_name: mongo
    environment:
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
    ports:
      - 27017:27017
    restart: unless-stopped
    volumes:
      - ./data/mongo:/data/db
  mongo-express:
    build: ./docker/mongo-express
    container_name: mongo-express
    depends_on:
      - mongo
    environment:
      - ME_CONFIG_BASICAUTH_PASSWORD=redacted
      - ME_CONFIG_BASICAUTH_USERNAME=username
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USERNAME}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASSWORD}
    ports:
      - 8081:8081
  redis:
    build: ./docker/redis
    container_name: redis
    ports:
      - 6379:6379
    restart: unless-stopped
    volumes:
      - ./data/redis:/data
  mock-apis:
    build: ./docker/mock-apis
    container_name: mock-apis
    command: >
      /initscript.bash
    ports:
      - 81:80
    volumes:
      - ./mock-apis:/home/nodejs
  php-fpm:
    build:
      context: ./docker/php-fpm
      args:
        HOST_UID: ${HOST_UID}
    command: >
      /initscript.bash
    container_name: php-fpm
    restart: unless-stopped
    depends_on:
      - mongo
      - mysql
      - redis
    volumes:
      - ./laravel:/var/www/
  nginx:
    build: ./docker/nginx
    container_name: nginx
    depends_on:
      - php-fpm
    ports:
      - 80:80
    restart: unless-stopped
    volumes:
      - ./laravel:/var/www/
I'm using profiles to scope my services. If I want to use phpMyAdmin only in dev, I add this profile to the service:
phpmyadmin:
  container_name: phpmyadmin
  environment:
    - PMA_HOST=mysql
  image: phpmyadmin/phpmyadmin
  ports:
    - 8080:80
  profiles: ["dev"]
So now I have to tell docker-compose which profile I want to use, otherwise those services will not start.
You can use one of these commands (this way you have to type --profile your_profile for each profile):
$ docker-compose --profile dev up -d
$ docker-compose --profile dev --profile profile2 up -d   # for multiple profiles
Or, the cleaner way, you can list the profiles separated by commas:
$ COMPOSE_PROFILES=dev,profile2 docker-compose up -d
Services without a profiles attribute will always be enabled.
Take care: when you stop your services you have to specify the profile too:
$ COMPOSE_PROFILES=dev docker-compose down
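Since you already have separate .env files per environment, you can also let the .env file itself pick the profiles: docker-compose reads COMPOSE_PROFILES from the .env file in the project directory (available since 1.28, where profiles were introduced). A minimal sketch, assuming you copy or symlink the environment-specific file to .env:

# .env on development machines
COMPOSE_PROFILES=dev

# .env on production machines: omit COMPOSE_PROFILES (or leave it empty),
# so only services without a profiles attribute are started

With that in place, a plain docker-compose up -d or docker-compose down starts and stops the right set of services in each environment.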
Related
I want to make my NiFi data volume and configuration persist, meaning that even if I delete the container and run docker-compose up again, I would like to keep what I have built so far in NiFi. I tried to mount volumes in the volumes section of my docker-compose file as follows, but it doesn't work and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
nifi:
image: koroslak/nifi:latest
container_name: nifi
restart: always
environment:
- NIFI_HOME=/opt/nifi/nifi-current
- NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
- NIFI_PID_DIR=/opt/nifi/nifi-current/run
- NIFI_BASE_DIR=/opt/nifi
- NIFI_WEB_HTTP_PORT=8080
ports:
- 9000:8080
depends_on:
- openldap
volumes:
- ./volume/nifi-current/state:/opt/nifi/nifi-current/state
- ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
- ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
- ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
- ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
- ./volume/log:/opt/nifi/nifi-current/logs
#- ./volume/conf:/opt/nifi/nifi-current/conf
postgres:
image: koroslak/postgres:latest
container_name: postgres
restart: always
environment:
- POSTGRES_PASSWORD=secret123
ports:
- 6000:5432
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4:4.18
restart: always
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- 8090:80
metabase:
container_name: metabase
image: metabase/metabase:v0.34.2
restart: always
environment:
MB_DB_TYPE: postgres
MB_DB_DBNAME: metabase
MB_DB_PORT: 5432
MB_DB_USER: metabase_admin
MB_DB_PASS: secret123
MB_DB_HOST: postgres
ports:
- 3000:3000
depends_on:
- postgres
openldap:
image: osixia/openldap:1.3.0
container_name: openldap
restart: always
ports:
- 38999:389
# Mocked source systems
jira-api:
image: danielgtaylor/apisprout:latest
container_name: jira-api
restart: always
ports:
- 8000:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
pipedrive-api:
image: danielgtaylor/apisprout:latest
container_name: pipedrive-api
restart: always
ports:
- 8100:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
restcountries-api:
image: danielgtaylor/apisprout:latest
container_name: restcountries-api
restart: always
ports:
- 8200:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
postgres:
nifi:
openldap:
metabase:
pgadmin:
Using NiFi Registry you can ensure that all the changes you make in NiFi are committed to git, i.e. if you change some processor configuration, it will be reflected in your git repo.
As for flow files, you may need to fix your volume mappings.
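A minimal sketch of what the mappings could look like, assuming the image follows the stock Apache NiFi layout, where the repositories live directly under /opt/nifi/nifi-current (not under a repositories/ subdirectory) and the flow itself (your processors) is stored in conf/flow.xml.gz:

  nifi:
    volumes:
      - ./volume/conf:/opt/nifi/nifi-current/conf
      - ./volume/state:/opt/nifi/nifi-current/state
      - ./volume/database_repository:/opt/nifi/nifi-current/database_repository
      - ./volume/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - ./volume/content_repository:/opt/nifi/nifi-current/content_repository
      - ./volume/provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - ./volume/logs:/opt/nifi/nifi-current/logs

Note the conf mount in particular (the one commented out above): without it the flow definition is lost when the container is removed, which would explain the processors not being saved.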
I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend these can only be applied during the build!
      # (they are baked in when the TS is compiled)
      # You have to rebuild manually without cache if one of these
      # changes, at least for the prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix it?
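One thing that stands out in the compose file (an observation based on how Docker networking works, not a verified fix for this exact setup): on a user-defined network, containers reach each other on their container ports, not on the host-published ones. backend publishes '8000:80', so from inside test-network it is reachable at backend:80; port 8000 only exists on the host. The WAIT_HOSTS value may also be sensitive to the stray spaces around the = and after the comma. A sketch of the adjusted environment block for the testing service:

    environment:
      - WAIT_HOSTS=frontend:3000,backend:80
      - WAIT_TIMEOUT=3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300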
I was trying to create a PHP development environment with PHP and MariaDB, and a tutorial suggested using Adminer for database management. So I generated my docker-compose.yml file like this:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mariadb-data:/var/lib/mysql
  adminer:
    image: adminer
    environment:
      ADMINER_DEFAULT_SERVER: db
    restart: always
    ports:
      - 8080:8080
But when I set the volumes for MariaDB, I get an error on the Adminer login page; when I don't set them, it seems to work well. I also tried it with explicit links:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mariadb-data:/var/lib/mysql
  adminer:
    image: adminer
    environment:
      ADMINER_DEFAULT_SERVER: db
    restart: always
    ports:
      - 8080:8080
    links:
      - php
      - db
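One likely cause worth checking (an assumption based on how the official mariadb image initializes, not something visible in the compose file itself): MYSQL_ROOT_PASSWORD and the other MYSQL_* variables are only applied on the very first startup, when the data directory is empty. If ./mariadb-data already contains a database from an earlier run, the credentials you pass now are ignored and the Adminer login fails. A quick way to test this, assuming the existing data is disposable:

$ docker-compose down
$ rm -rf ./mariadb-data   # careful: this deletes the existing database
$ docker-compose up -d    # re-initializes with MYSQL_ROOT_PASSWORD=example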
I have a problem with a Docker container.
This is my docker-compose file with 5 services:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8088:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - mysql
      - php
    networks:
      - laravel
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    environment:
      MYSQL_DATABASE: homestead
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel
  redis:
    image: redis:5.0.0-alpine
    restart: always
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - laravel
  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    tty: true
    working_dir: /var/www/html
    networks:
      - laravel
Then I run
docker-compose up -d
and then
docker-compose ps
to see my containers, and the composer container is always down with exit code 0.
Can someone explain why I can't keep this container up? Thanks a lot.
composer isn't a program that stays alive; it's a program that does some specific work and then exits.
There's not much purpose in keeping it up, since it won't do anything ongoing the way the other services do (nginx accepts web traffic and writes responses, mysql accepts database connections and reads/writes a database, php serves web content, redis can be connected to as a cache).
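If you only need composer occasionally, a common pattern (a sketch, not specific to this setup) is to drop the always-on container and run it as a one-off command instead; docker-compose run creates the container, runs the given command, and with --rm removes it again:

$ docker-compose run --rm composer install
$ docker-compose run --rm composer require some/package

The exit code 0 you are seeing is actually composer reporting success: it started, ran, and finished normally.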
I'm trying to set up a docker-compose file for my application(s), including a service based on the nginx image. I want to be able to easily access the config from my host, but when I mount the volume with

    volumes:
      - ./nginxConf:/etc/nginx

this volume is empty and the container crashes.
Full docker-compose.yml
version: '3'
services:
  frontend:
    image: myFrontend
    restart: always
    environment:
      - API_URL=http://localhost:3000/api/v1
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - "api"
    volumes:
      - ./nginxConf:/etc/nginx
  api:
    image: myApi
    restart: always
    command: bash -c "npm run build && npm run start"
    ports:
      - "3000:3000"
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"