Docker Compose with multiple Dockerfiles

I'm currently working on a Maven project where I would like to have two Docker containers: one that runs all the tests and another that compiles the project if all the tests succeed.
The problem is that both containers launch from the prod Dockerfile.
So my question is: why, after pointing each service at its own Dockerfile in the docker-compose file, do they both start from the prod one?
docker-compose.yml:
version: '3.8'
services:
  db:
    container_name: db
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: cuisine
      MYSQL_ROOT_PASSWORD: example
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    ports:
      - "3306:3306"
    volumes:
      - ./docker/mysql-dump/cuisine:/docker-entrypoint-initdb.d
      - mysql:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  test:
    container_name: java-test
    image: spring-boot
    build:
      context: .
      dockerfile: docker/test/Dockerfile
    ports:
      - "8081:8081"
      - "5005:5005"
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ${APPLICATION_ROOT_FOLDER}:/usr/src/mymaven
      - ${MAVEN_SETTINGS_FOLDER}:/root/.m2
  java:
    container_name: java
    image: spring-boot
    build:
      context: .
      dockerfile: docker/prod/Dockerfile
    ports:
      - "8082:8082"
      - "5006:5006"
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ${APPLICATION_ROOT_FOLDER}:/usr/src/mymaven
      - ${MAVEN_SETTINGS_FOLDER}:/root/.m2
volumes:
  mysql:

When a service has both build: and image:, the built image is tagged with the image: name. Both of your services use the same image: name (spring-boot), so whichever is built last 'wins' and that image is run for both services. Remove the image: name from both services, or give each one a distinct tag.
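For example, a sketch of the two services with distinct tags (the spring-boot-test and spring-boot-prod names are illustrative, not taken from the question) so the builds no longer overwrite each other:

  test:
    build:
      context: .
      dockerfile: docker/test/Dockerfile
    image: spring-boot-test   # or drop the image: line entirely
  java:
    build:
      context: .
      dockerfile: docker/prod/Dockerfile
    image: spring-boot-prod

With distinct (or no) image: names, docker-compose build produces two separate images and each service runs the one built from its own Dockerfile.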

Related

Make a call from one Docker container to another

So I have two different applications: one is WordPress and the other is an API. Both run in Docker containers and have their own configurations. These are their docker-compose settings:
version: "3.8"
services:
app:
container_name: ${APP_NAME}_app
build:
context: .
dockerfile: ./.docker/php/Dockerfile
expose:
- 9000
volumes:
- .:/usr/src/app
- ./public:/usr/src/app/public
depends_on:
- db
networks:
- app_network
nginx:
container_name: ${APP_NAME}_nginx
build:
context: .
dockerfile: ./.docker/nginx/Dockerfile
volumes:
- ./public:/usr/src/app/public
ports:
- "8081:8081"
expose:
- 8081
environment:
NGINX_FPM_HOST: app
NGINX_ROOT: /usr/src/app/public
depends_on:
- app
networks:
- app_network
db:
container_name: ${APP_NAME}_db
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
ports:
- "3307:3306"
environment:
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
networks:
- app_network
networks:
app_network:
driver: bridge
volumes:
db_data:
driver: local
And this is my wordpress configuration:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      interval: 1s
      timeout: 3s
      retries: 30
    networks:
      - app_network
  wordpress:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      mysql:
        condition: service_healthy
    volumes:
      - .:/var/www/html/wp-content/plugins/name
    ports:
      - "80:80"
    restart: always
    environment:
      - WORDPRESS_URL=http://localhost
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
And when I try to make a request to the API at http://localhost:8081, nothing happens. Everything works fine locally, but in Docker it doesn't.
I would appreciate some help to make this work :)
If you make a request to http://localhost:8081 from inside a Docker container, you won't reach your host PC, but the container itself.
In docker-compose you need to replace localhost with the service name:
For example, to reach port 8081 of the nginx service, connect to http://nginx:8081
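Note that service-name DNS only resolves when both containers share a Docker network. The API stack and the WordPress stack above are separate compose projects, so each gets its own app_network. A minimal sketch of wiring them together with a shared, externally named network (api_shared is an illustrative name, not one from the question):

In the API project's compose file (creates the network):
  networks:
    app_network:
      driver: bridge
      name: api_shared

In the WordPress project's compose file (joins the existing network):
  networks:
    app_network:
      external: true
      name: api_shared

With both stacks attached to the same network, the WordPress container should be able to reach the API via http://nginx:8081 (the port published by the nginx service).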

Request to MX server from a docker container

Context
Our solution sends email campaigns, and some email providers temporarily blacklist us because we used nonexistent email addresses (they may be created/imported by any user).
Solution
I am trying to implement an email checker based on https://www.codexworld.com/verify-email-address-check-if-real-exists-domain-php/
Problem
From my local computer, this script works (when my IP is not temporarily blacklisted), but from my Docker container stream_socket_client or even telnet always times out.
As far as I can see, MX servers always time out unverified requesters, but how can I make this work from my Docker container?
I have no specific Docker configuration for port 25.
Thank you
docker-compose.yml
# Base docker-compose file to run required services: Adminer, MySQL, and Redis
version: '3.7'
services:
  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
  database:
    image: mysql:8.0
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    cap_add: [ SYS_NICE ] # https://github.com/docker-library/mysql/issues/303
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_general_ci', '--lower_case_table_names=1']
  redis:
    image: redis:alpine
    expose:
      - "6379"
  redisinsight:
    image: redislabs/redisinsight
    ports:
      - "8081:8001"
  reverse-proxy:
    image: nginx:alpine
    depends_on:
      - backend
      - frontend
    volumes:
      - ./../../../backend/infra/etc/nginx/dev.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
  backend:
    build:
      context: ./../..
      dockerfile: container/dev/backend.dockerfile
    expose:
      - "80"
    volumes:
      - {...}
    depends_on:
      - database
  frontend:
    build:
      context: ./../..
      dockerfile: container/dev/frontend.dockerfile
    expose:
      - "3000"
    volumes:
      - {...}
    tty: true # required to keep yarn running (https://stackoverflow.com/a/61050994)
volumes:
  # Contains the database's data
  db_data: {}
backend.dockerfile is an Ubuntu image with git, mysql-client, and PHP.

Docker image names are changed to sha256 after subsequent runs

I am using Docker 20.10.7 on macOS and docker-compose to run multiple Docker containers.
When I start them for the first time, all the Docker images are properly labeled with their image names.
However, after subsequent runs (docker-compose up, docker-compose down), all the image names are suddenly replaced with sha256 digests.
Please advise how to avoid this behavior. Thank you.
UPDATE #1
This is the docker-compose file I use to start the containers.
Initially they were displayed with properly labeled image names.
However, even if I run a docker system prune command, they continue to be labeled as sha256:...
version: '3.8'
services:
  influxdb:
    image: influxdb:1.8
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
      - "2003:2003"
    env_file:
      - 'env.influxdb.properties'
    volumes:
      - /Users/user1/Docker/influxdb/data:/var/lib/influxdb
    restart: unless-stopped
  telegraf:
    image: telegraf:latest
    container_name: telegraf
    links:
      - db
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    env_file:
      - 'env.grafana.properties'
    links:
      - influxdb
    volumes:
      - /Users/user1/Docker/grafana/data:/var/lib/grafana
    restart: unless-stopped
  db:
    image: mysql
    container_name: db-container
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: P#ssw0rd
      MYSQL_USER: root
      MYSQL_PASSWORD: P#ssw0rd
      MYSQL_DATABASE: db1
    volumes:
      - /Users/user1/Docker/mysql/data:/var/lib/mysql
      - "../sql/schema.sql:/docker-entrypoint-initdb.d/1.sql"
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=P#ssw0rd --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
    restart: always
  adminer:
    image: adminer
    container_name: adminer
    restart: always
    ports:
      - 8081:8080
  redis:
    image: bitnami/redis
    container_name: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      #- REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - '/Users/user1/Docker/redis/data:/bitnami/redis/data'
      - ./redis.conf:/opt/bitnami/redis/mounted-etc/redis.conf

"Unable to find template" with Docker

I created a Symfony environment with Docker. I then included this file in my web project (skeleton website). But when I try to access my base.html.twig template, located in a main folder, from the controller, I get this error:
Unable to find template "main/base.html.twig" (looked into: /var/www/templates, /var/www/vendor/symfony/twig-bridge/Resources/views/Form).
How can I solve this problem? I am using Symfony 5.
Here is the content of my docker-compose file:
version: '3'
services:
  php:
    container_name: "php-fpm"
    build:
      context: ./php
    environment:
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
    volumes:
      - ${APP_FOLDER}:/var/www
    networks:
      - dev
  nginx:
    container_name: "nginx"
    build:
      context: ./nginx
    volumes:
      - ${APP_FOLDER}:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./logs:/var/log/nginx/
    depends_on:
      - php
    ports:
      - "80:80"
    networks:
      - dev
  db:
    image: mysql
    container_name: "db"
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    networks:
      - dev
  phpmyadmin:
    image: phpmyadmin
    container_name: "phpmyadmin"
    restart: always
    depends_on:
      - db
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
    networks:
      - dev
networks:
  dev:
volumes:
  db-data:
Your Docker Compose file needs to map your local project folder to the corresponding folder inside the container, so the templates are visible there.
# docker-compose example
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - .:/var/www # Map the local folder into the container
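Applied to the compose file above, that means ${APP_FOLDER} has to point at the Symfony project root (the directory containing templates/), so the bind mount exposes /var/www/templates inside the php container, which is exactly where the error says Twig is looking. A minimal sketch, assuming the project lives in ./symfony next to the compose file (an illustrative path, not one from the question):

  php:
    build:
      context: ./php
    volumes:
      - ./symfony:/var/www   # /var/www/templates/main/base.html.twig becomes visible to Twig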

How can I connect an Adminer Docker container to a MariaDB Docker container?

I was trying to create a PHP development environment with PHP and MariaDB, and a tutorial suggested using Adminer for database management. So I generated my docker-compose.yml file like this:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mariadb-data:/var/lib/mysql
  adminer:
    image: adminer
    environment:
      ADMINER_DEFAULT_SERVER: db
    restart: always
    ports:
      - 8080:8080
But when I set the volumes for MariaDB, I get an error on the Adminer login page. When I don't set them, it seems to work well.
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mariadb-data:/var/lib/mysql
  adminer:
    image: adminer
    environment:
      ADMINER_DEFAULT_SERVER: db
    restart: always
    ports:
      - 8080:8080
    links:
      - php
      - db
