I know this question has a lot of answers on Stack Overflow, but I didn't find a solution for my case.
I'm moving a Laravel app into containers.
I CAN CONNECT TO THE MARIADB INSTANCE FROM OUTSIDE THE DOCKER NETWORK, BUT NOT FROM INSIDE IT!
(I can connect via MySQL Workbench, connect locally via docker exec, restore the dump from the container console, and access the DB data from outside.)
What's wrong?
Why is the app not working (PHP has no access to mariadb via the internal app_network), while at the same time I can access the DB from outside and from inside the container itself?
OS: CentOS 7.9.2009
Docker: 20.10.12 (e91ed57)
Docker-compose: 1.29.2 (5becea4c)
The same configs work fine on Windows 10.
DOCKER COMPOSE CONFIG:
version: '3.9'
networks:
app_network:
driver: bridge
name: ${NETWORK_NAME}
volumes:
app:
name: ${APP_VOLUME_NAME}
mysql_database:
name: ${MYSQL_DATABASE_VOLUME_NAME}
mysql_dumps:
name: ${MYSQL_DATABASE_DUMPS_VOLUME_NAME}
services:
mariadb:
image: mariadb
env_file:
- ./.env
command: --default-authentication-plugin=mysql_native_password
ports:
- ${MYSQL_EXTERNAL_PORT}:3306
volumes:
- mysql_database:/var/lib/mysql
- mysql_dumps:/var/mysqldump
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
networks:
app_network:
aliases:
- mariadb
profiles:
- dev
- prod
php:
restart: always
env_file:
- ./.env
build:
context: ../../
dockerfile: ./.environment/cs/php/Dockerfile
args:
- USER_ID=${PHP_USER_ID}
- GROUP_ID=${PHP_GROUP_ID}
- DEFAULT_CONFIG_FILE=${PHP_DEFAULT_CONFIG_FILE}
- CUSTOM_CONFIG_FILE=${PHP_CUSTOM_CONFIG_FILE}
- PROJECT_FOLDER=${PHP_PROJECT_FOLDER}
volumes:
- ./php/logs:/var/log
- ../../:${PHP_PROJECT_FOLDER}
networks:
app_network:
aliases:
- php
depends_on:
- memcached
- mariadb
profiles:
- dev
- prod
nginx:
restart: always
env_file:
- ./.env
build:
context: ../../
dockerfile: ./.environment/cs/nginx/Dockerfile
args:
- CONFIG_FILE=${WEB_CONFIG_FILE}
- PROJECT_FOLDER=${WEB_PROJECT_FOLDER}
ports:
- ${WEB_EXTERNAL_PORT}:80
volumes:
- ./nginx/logs:/var/log/nginx
- ../../public:${WEB_PROJECT_FOLDER}:cached
networks:
app_network:
aliases:
- nginx
depends_on:
- php
profiles:
- dev
- prod
Docker .ENV
NETWORK_NAME=CS
APP_VOLUME_NAME=CS_APP_STORAGE
MYSQL_DATABASE_VOLUME_NAME=CS_DATABASE
MYSQL_DATABASE_DUMPS_VOLUME_NAME=CS_DATABASE_DUMPS
MYSQL_EXTERNAL_PORT=3317
MYSQL_ROOT_PASSWORD=root
MYSQL_USER=client
MYSQL_PASSWORD=client
PHP_USER_ID=1000
PHP_GROUP_ID=1000
PHP_DEFAULT_CONFIG_FILE=php.ini-production
PHP_CUSTOM_CONFIG_FILE=./.environment/cs/php/custom.prod.ini
PHP_PROJECT_FOLDER=/var/www/app
WEB_EXTERNAL_PORT=127.0.0.1:8091
WEB_CONFIG_FILE=./.environment/cs/nginx/nginx.dev.conf
WEB_PROJECT_FOLDER=/var/www/app/public
Laravel .ENV
DB_CONNECTION=mysql
DB_HOST=mariadb
DB_PORT=3306
DB_DATABASE=client
DB_USERNAME=client
DB_PASSWORD=client
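A quick way to narrow this down is to try the same connection from inside the php container over app_network; a minimal sketch, assuming the PHP image has the mysqli extension enabled and getent available:
# Check that the service alias resolves on app_network
docker-compose exec php getent hosts mariadb
# Try an actual MySQL connection with the Laravel credentials
docker-compose exec php php -r \
  "var_dump(mysqli_connect('mariadb', 'client', 'client', 'client', 3306));"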
Try adding the expose config key to the mariadb service:
mariadb:
  # ...
  expose:
    - 3306
This turned out to be an issue with the MariaDB container itself. I connected to MariaDB via MySQL Workbench, completely removed the user, created a new one, and granted it privileges on the schema.
After that, everything worked fine.
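For reference, a rough command-line equivalent of that Workbench fix; this is only a sketch, using the root password and the client database/user names from the .env files above:
# Hypothetical commands, run on the Docker host from the compose directory.
# The '%' host part is the important bit: it lets the user connect from
# other containers on app_network, not only from localhost.
docker-compose exec mariadb mysql -uroot -proot -e "
  DROP USER IF EXISTS 'client'@'%';
  CREATE USER 'client'@'%' IDENTIFIED BY 'client';
  GRANT ALL PRIVILEGES ON client.* TO 'client'@'%';
  FLUSH PRIVILEGES;"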
Related
I've installed Docker (Windows 10) with WSL2 (Ubuntu distro) and added my docker-compose.yml:
version: '3'
services:
web:
image: nginx:1.20.1
container_name: web
restart: always
ports:
- "80:80"
volumes:
- ./nginx.d.conf:/etc/nginx/conf.d/nginx.conf
- ./nginx.conf:/etc/nginx/nginx.conf
- ./www/my-app:/app
php:
build:
context: .
dockerfile: myphp.dockerFile
container_name: php
restart: always
depends_on:
- web
volumes:
- ./www/my-app:/app
mysql:
image: mariadb:10.3.28
container_name: mysql
restart: always
depends_on:
- php
environment:
MYSQL_ROOT_PASSWORD: '******'
MYSQL_USER: 'root'
MYSQL_PASSWORD: '******'
MYSQL_DATABASE: 'my-database'
command: ["--default-authentication-plugin=mysql_native_password"]
volumes:
- mysqldata:/var/lib/mysql
- ./my.cnf:/etc/mysql/my.cnf
ports:
- 3306:3306
cache:
image: redis:5.0.3
container_name: cache
restart: always
ports:
- 6379:6379
networks:
- my-network
volumes:
- ./cache:/cache
volumes:
mysqldata: {}
networks:
my-network:
driver: "bridge"
So my Symfony code is in the /www/my-app Windows folder. This includes /www/my-app/vendor too.
My application is running extremely slowly (50-70 seconds). If I'm correct, it's because the vendor folder is huge (~80 MB) and Docker creates an image of it every time. Other discussions mentioned that the vendor folder should be moved into a separate volume, and that's where I'm stuck. How do I move and mount it in this case, and what should the docker-compose.yml look like afterwards?
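No answer is quoted here, but the approach those discussions refer to usually looks something like the following sketch: a named volume (vendor_data is a made-up name) mounted over the vendor path so dependencies live on a Docker-managed volume instead of the slow Windows bind mount. Service and paths are taken from the compose file above:
services:
  php:
    build:
      context: .
      dockerfile: myphp.dockerFile
    volumes:
      - ./www/my-app:/app
      # hypothetical named volume that masks the bind-mounted vendor dir
      - vendor_data:/app/vendor

volumes:
  vendor_data: {}
With this layout you would typically run composer install once inside the container, since the host's vendor folder is no longer visible at /app/vendor.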
Whenever I ran docker-compose up -d --build to start working on my project, it brought my environment up just fine, until yesterday.
Upon running docker-compose up -d --build, I get this annoying error that says: ERROR: Service 'app' depends on service 'db' which is undefined.
I'm not sure how this is happening out of nowhere, as I've made absolutely no changes whatsoever to the docker-compose.yml file. I've tried troubleshooting this extensively, but to no avail.
What's wrong with my file?
Here's my docker-compose.yml file:
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile.dev
working_dir: /var/www/
environment:
- COMPOSER_MEMORY_LIMIT=-1
depends_on:
- db
volumes:
- ./:/var/www
networks:
- lahmi
myApp:
image: mysql:5.7
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql
- ./docker-compose/mysql/my.cnf:/etc/mysql/my.cnf
- ./docker-compose/mysql/init:/docker-entrypoint-initdb.d
ports:
- 3307:3306
networks:
- lahmi
nginx:
image: nginx:alpine
ports:
- 8005:80
depends_on:
- db
- app
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- lahmi
networks:
lahmi:
driver: bridge
volumes:
dbdata:
driver: local
There is no service named db in docker-compose.yml. Changing db to myApp (the database service) may work.
If you want to keep referencing the database as db from the app service, you must either rename the myApp service to db, or use the links configuration so that myApp is reachable under the name db.
https://docs.docker.com/compose/compose-file/compose-file-v3/#links
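In other words, the minimal change is to rename the service so its name matches what app and nginx depend on; a sketch based on the file above:
services:
  app:
    # ...
    depends_on:
      - db        # now resolves to the service below
  db:             # was: myApp
    image: mysql:5.7
    # ... rest of the former myApp definition unchanged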
I want to know how to configure the backend endpoint correctly.
I have a Docker Compose setup that runs several containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed, I can do so successfully on the local machine where Docker is running.
However, the frontend is not able to resolve the endpoint "host.docker.internal" when I make a request from another machine. Please note that I am able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" is the port where the frontend is exposed.
Obviously, I can successfully make the requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served from Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here it is my docker compose:
version: "3.8"
services:
db: #mysqldb
image: mysql:5.7
container_name: ${DB_SERVICE_NAME}
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- backend
mrmfrontend:
build:
context: ./mrmfrontend
args:
- REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
- REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
- REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
- REACT_APP_NAME=$REACT_APP_NAME
- REACT_APP_OWNER=""
ports:
- $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
networks:
- frontend
volumes:
- ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
app:
build:
args:
user: admin
uid: 1000
context: ./MRMBackend
dockerfile: Dockerfile
image: backend
container_name: backend-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./MRMBackend:/var/www
networks:
- backend
nginx:
image: nginx:alpine
container_name: backend-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./MRMBackend:/var/www
- ./docker-compose/nginx/backend:/etc/nginx/conf.d/
networks:
- backend
- frontend
volumes:
db:
networks:
frontend:
driver: bridge
backend:
driver: bridge
The endpoint is configured in this way in the .env:
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
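The React app runs in the visitor's browser, not inside the frontend container, so the endpoint has to be a name the visitor's machine can resolve; host.docker.internal only resolves on the Docker host and inside its containers. A sketch of the .env using the LAN IP already mentioned in the question (192.168.1.11 and the published backend port 8000 are taken from above; a real deployment would more likely use a DNS name):
# Hypothetical value: points browsers on the LAN at the backend nginx
# service published as 8000:80 on the Docker host.
REACT_APP_BACKEND_ENDPOINT="http://192.168.1.11:8000"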
Note: Thanks @Ferran Buireu for the suggestion. I'm quite sure I'll get downvotes because I'm very new to Docker and am moving from the networking world into systems and programming.
After deploying GatsbyJS, I found the socket.io error "net::ERR_CONNECTION_REFUSED".
Even though every page works properly when I browse to it, I think something is not running correctly.
How can I solve this error? (the error capture is below)
I implement and deploy these services on Ubuntu 20.04.2 with Docker 20.10.6; please see the docker-compose.yml below:
version: "3"
services:
frontendapp01:
working_dir: /frontendapp01
build:
context: ./frontendapp01
dockerfile: Dockerfile
depends_on:
- backendsrv01
- mongoserver
volumes:
- ./sentric01:/srv/front
ports:
- "8001:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv01:1337
networks:
- vpsnetwork
frontendapp02:
working_dir: /frontendapp02
build:
context: ./frontendapp02
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric02:/srv/front
ports:
- "8002:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
frontendapp03:
working_dir: /frontendapp03
build:
context: ./frontendapp03
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric03:/srv/front
ports:
- "8003:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
backendsrv01:
image: strapi/strapi
container_name: backendsrv01
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: essential
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app01:/srv/app
ports:
- "1337:1337"
backendsrv02:
image: strapi/strapi
container_name: backendsrv02
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: solven
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app02:/srv/app
ports:
- "1338:1337"
mongoserver:
image: mongo
container_name: mongoserver
restart: unless-stopped
networks:
- vpsnetwork
volumes:
- vpsappdata:/data/db
ports:
- "27017:27017"
networks:
vpsnetwork:
driver: bridge
volumes:
vpsappdata:
The socket connection only appears during the development stage (gatsby develop); it's intended to refresh and update the browser on each save via hot-reloading, without losing component state. This feature is known as fast refresh.
As I said, and for obvious reasons, this only applies to gatsby develop. Under gatsby build, there is no connection socket. If your Docker development environment shares ports 8000 and 8001 (according to your docker-compose.yml setup), then once the site is built the socket can break, because the scope of the project has changed.
In short, you don't have to worry about it: your project seems to build properly, and the log only appears because the port is shared between the two environments.
Further readings:
https://www.gatsbyjs.com/docs/conceptual/overview-of-the-gatsby-build-process/
https://www.gatsbyjs.com/docs/reference/local-development/fast-refresh/
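As a rough illustration only (these commands are not from the original setup), the difference comes down to which Gatsby command the frontend containers run:
# Development: gatsby develop serves the site and opens the fast-refresh
# websocket -- the connection that logs net::ERR_CONNECTION_REFUSED when
# it can no longer be reached.
gatsby develop --host 0.0.0.0 --port 8000

# Production: a static build served over plain HTTP, no websocket at all.
gatsby build
gatsby serve --host 0.0.0.0 --port 8000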
I have a problem with a Docker container.
Here is my docker-compose file with 5 services:
version: '3'
networks:
laravel:
services:
nginx:
image: nginx:stable-alpine
container_name: nginx
ports:
- "8088:80"
volumes:
- ./src:/var/www/html
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- mysql
- php
networks:
- laravel
mysql:
image: mysql:5.7.22
container_name: mysql
restart: unless-stopped
tty: true
ports:
- "4306:3306"
environment:
MYSQL_DATABASE: homestead
MYSQL_USER: homestead
MYSQL_PASSWORD: secret
MYSQL_ROOT_PASSWORD: secret
SERVICE_TAGS: dev
SERVICE_NAME: mysql
networks:
- laravel
php:
build:
context: .
dockerfile: Dockerfile
container_name: php
volumes:
- ./src:/var/www/html
ports:
- "9000:9000"
networks:
- laravel
redis:
image: redis:5.0.0-alpine
restart: always
container_name: redis
ports:
- "6379:6379"
networks:
- laravel
composer:
image: composer:latest
container_name: composer
volumes:
- ./src:/var/www/html
tty: true
working_dir: /var/www/html
networks:
- laravel
Then I run
docker-compose up -d
and then
docker-compose ps
to see my containers, and the composer container is always down with exit code 0 (see the screenshot).
Can someone explain why I can't keep this container up? Thanks a lot.
composer isn't a program that stays alive. It's a program that does some specific work and then exits.
There's not much purpose in keeping it "up", since it's not going to do anything long-running like the other services do (nginx intercepts web traffic and writes responses, mysql accepts database connections and reads/writes a database, php serves web content, redis can be connected to as a cache).
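If the goal is simply to have Composer available, a common pattern is to run it as a one-off command instead of keeping the service up; a sketch, assuming the compose file above:
# Runs `composer install` in a throwaway container and removes it when done;
# thanks to the ./src bind mount, the resulting vendor/ directory lands on the host.
docker-compose run --rm composer install
Exiting with code 0 after docker-compose up is therefore expected behaviour rather than a failure.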