I deploy my project using Docker Compose. At the moment the project is built locally, from local project folders. The stack consists of a Laravel service, a VueJS service, an NGINX service, a MySQL service and a Redis service.
I set up Docker Compose following a DigitalOcean article. The article had me create volumes that bind-mount my application's files on the host into the containers. But what should I do if I want to build my project from Docker Hub? After all, I will no longer have the files stored locally.
I tried removing these volumes from the backend and webserver services; the build then completes without errors, but I cannot reach the server via localhost.
Do I just need to push the image right away with the volume contents baked in? Here is my compose.yml:
version: '3'
services:
  #VueJS Service
  frontend:
    build: ./frontend
    container_name: frontend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: frontend
    working_dir: /var/www/frontend
    ports:
      - "3000:80"
    volumes:
      - ./frontend/:/var/www/frontend
      - ./frontend/node_modules:/var/www/frontend/node_modules
    networks:
      - backend-network

  #PHP Service
  backend:
    build: ./backend
    container_name: backend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: backend
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./backend/:/var/www
      - ./backend/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - backend-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "443:443"
      - "8080:8080"
    volumes:
      - ./backend/:/var/www
      - ./backend/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - backend-network

  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "33062:3306"
    environment:
      MYSQL_DATABASE: qcortex
      MYSQL_ROOT_PASSWORD: kc3wcfjk5
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql
      - ./backend/mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - backend-network

  #Redis
  redis:
    image: caster977/redis
    restart: unless-stopped
    container_name: redis
    networks:
      - backend-network

#Docker Networks
networks:
  backend-network:
    driver: bridge

#Volumes
volumes:
  dbdata:
    driver: local
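For context, my understanding is that building from Docker Hub means baking the code into the image at build time instead of bind-mounting it at run time. A minimal sketch of what I imagine for the backend (the image name youruser/backend, the php:7.4-fpm base and the COPY layout are my assumptions, not from the article):

# backend/Dockerfile (sketch): copy the application code in at build time
FROM php:7.4-fpm
WORKDIR /var/www
COPY . /var/www
COPY php/local.ini /usr/local/etc/php/conf.d/local.ini

# compose.yml (sketch): consume the pushed image; no build: and no code volumes
backend:
  image: youruser/backend:latest
  restart: unless-stopped
  working_dir: /var/www
  networks:
    - backend-network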
I've installed Docker (Windows 10) with WSL2 (Ubuntu distro) and added my docker-compose.yml:
version: '3'
services:
  web:
    image: nginx:1.20.1
    container_name: web
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./nginx.d.conf:/etc/nginx/conf.d/nginx.conf
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./www/my-app:/app
  php:
    build:
      context: .
      dockerfile: myphp.dockerFile
    container_name: php
    restart: always
    depends_on:
      - web
    volumes:
      - ./www/my-app:/app
  mysql:
    image: mariadb:10.3.28
    container_name: mysql
    restart: always
    depends_on:
      - php
    environment:
      MYSQL_ROOT_PASSWORD: '******'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: '******'
      MYSQL_DATABASE: 'my-database'
    command: ["--default-authentication-plugin=mysql_native_password"]
    volumes:
      - mysqldata:/var/lib/mysql
      - ./my.cnf:/etc/mysql/my.cnf
    ports:
      - 3306:3306
  cache:
    image: redis:5.0.3
    container_name: cache
    restart: always
    ports:
      - 6379:6379
    networks:
      - my-network
    volumes:
      - ./cache:/cache
volumes:
  mysqldata: {}
networks:
  my-network:
    driver: "bridge"
So my Symfony code is in the /www/my-app Windows folder, including /www/my-app/vendor.
My application is running extremely slowly (50-70 seconds). If I'm right, that's because the vendor folder is huge (80 MB) and Docker has to sync it every time. Other discussions mentioned that the vendor folder should be moved into a separate volume, and that's where I'm stuck. How do I move and mount it in this case, and what should the docker-compose.yml look like afterwards?
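One sketch of what I'm after (the volume name vendordata is my placeholder): keep the bind mount for the code, but overlay the vendor path with a named volume so it lives on the Linux side instead of in the Windows folder:

# Sketch: a named volume mounted over /app/vendor shadows the Windows-hosted folder
services:
  php:
    volumes:
      - ./www/my-app:/app
      - vendordata:/app/vendor   # vendor now lives in the Docker/WSL2 filesystem
  web:
    volumes:
      - ./www/my-app:/app
      - vendordata:/app/vendor   # keep web's view of /app consistent with php's
volumes:
  mysqldata: {}
  vendordata: {}

The overlay volume starts out empty, so vendor then has to be populated inside the container, e.g. by running composer install in the php container.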
I want to know how to correctly configure the backend endpoint.
I have a Docker setup that runs several containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed I can successfully do it on the local machine where Docker is running on.
However, the frontend is not able to resolve the endpoint "host.docker.internal" when I make a request from another machine. Please note that I am able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" it's the port where the frontend is.
Obviously I can successfully make the requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served by Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here it is my docker compose:
version: "3.8"
services:
  db: #mysqldb
    image: mysql:5.7
    container_name: ${DB_SERVICE_NAME}
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - ./docker-compose/mysql:/docker-entrypoint-initdb.d
    networks:
      - backend
  mrmfrontend:
    build:
      context: ./mrmfrontend
      args:
        - REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
        - REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
        - REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
        - REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
        - REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
        - REACT_APP_NAME=$REACT_APP_NAME
        - REACT_APP_OWNER=""
    ports:
      - $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
    networks:
      - frontend
    volumes:
      - ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
  app:
    build:
      args:
        user: admin
        uid: 1000
      context: ./MRMBackend
      dockerfile: Dockerfile
    image: backend
    container_name: backend-app
    restart: unless-stopped
    working_dir: /var/www/
    volumes:
      - ./MRMBackend:/var/www
    networks:
      - backend
  nginx:
    image: nginx:alpine
    container_name: backend-nginx
    restart: unless-stopped
    ports:
      - 8000:80
    volumes:
      - ./MRMBackend:/var/www
      - ./docker-compose/nginx/backend:/etc/nginx/conf.d/
    networks:
      - backend
      - frontend
volumes:
  db:
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
The endpoint is configured in the .env like this:
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
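For completeness, the only configuration that works for me from other machines (a sketch using the example IP from above) points the endpoint at an address the visitor's browser can resolve, since the React code ultimately runs in the browser, not in a container:

# .env (sketch)
REACT_APP_BACKEND_ENDPOINT="http://192.168.1.11:8000"
# host.docker.internal only resolves on the Docker host itself,
# so it cannot work from a browser on another machine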
I have a problem with a Docker container.
This is my docker-compose file with 5 services:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8088:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - mysql
      - php
    networks:
      - laravel
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    environment:
      MYSQL_DATABASE: homestead
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel
  redis:
    image: redis:5.0.0-alpine
    restart: always
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - laravel
  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    tty: true
    working_dir: /var/www/html
    networks:
      - laravel
Then I run
docker-compose up -d
and then
docker-compose ps
to see my containers, and the composer container is always down with exit code 0.
Can someone explain why I can't bring this container up? Thanks a lot.
composer isn't a program that stays alive. It's a program that does some specific work and then exits.
There's not much purpose in keeping it "up", since it doesn't serve anything the way the other processes do (nginx accepts web traffic and writes responses, mysql accepts database connections and reads from/writes to a database, php serves web content, redis acts as a cache that can be connected to).
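A sketch of how a one-shot service like this is typically used (the composer service name comes from the compose file above):

# Run composer on demand in a throwaway container; it exits when done
docker-compose run --rm composer install

# After `docker-compose up -d`, state "Exit 0" just means composer finished successfully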
I have one docker-compose file that has nginx, mysql and phpmyadmin.
phpmyadmin is open on port 8080, at the address mydomain.com:8080.
How can I convert this address to http://phpmyadmin.mydomain.com or http://pma.mydomain.com on my server?
I have used this docker-compose file:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: .
      dockerfile: ./Dockerfile/Dockerfile
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./Beh:/var/www
      - ./Config/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      # - "443:443"
    volumes:
      - ./Beh:/var/www
      - ./Config/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network

  database:
    image: mariadb:latest
    container_name: database
    environment:
      - "MYSQL_USERNAME=root"
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
      - "3306:3306"
    networks:
      - app-network

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      - "MYSQL_USERNAME=root"
      - "MYSQL_ROOT_PASSWORD=secret"
      - "PMA_HOST=database"
    links:
      - database
    ports:
      - "8080:80"
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge

#Volumes
volumes:
  dbdata:
    driver: local
I don't know how to configure the nginx file to achieve this goal. Is it something like the sketch below?
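Something like this is what I have in mind (a guess on my part; the server names and proxy target are assumptions):

# Config/nginx/conf.d/pma.conf (sketch): route the subdomain to the phpmyadmin container
server {
    listen 80;
    server_name pma.mydomain.com phpmyadmin.mydomain.com;

    location / {
        # "phpmyadmin" resolves through Docker's embedded DNS on app-network
        proxy_pass http://phpmyadmin:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}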
You can use this repo -> https://github.com/jwilder/nginx-proxy
This is what I'm using to create a new local domain name for each project.
You have to add "VIRTUAL_HOST" to the environment, expose the port ("expose: 80"), and also add the new domain to your /etc/hosts.
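Concretely, that might look like this for the phpmyadmin service (a sketch; it assumes the nginx-proxy container is already running and attached to the same network):

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    - "PMA_HOST=database"
    - "VIRTUAL_HOST=pma.mydomain.com"   # nginx-proxy picks this up and routes by hostname
  expose:
    - 80                                # reachable by the proxy, not published on the host
  networks:
    - app-network

# And for local testing, in /etc/hosts:
# 127.0.0.1   pma.mydomain.com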
I'm using jwilder/nginx-proxy to host multiple (web) apps from a single server. This works great, except that all services can communicate with each other, because they are all on the same network, which is required for the proxy to work.
Proxy docker-compose.yaml
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nginx-proxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data/certs:/etc/nginx/certs:ro
      - ./data/nginx/vhost.d:/etc/nginx/vhost.d
      - ./data/share/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  letsencrypt-proxy:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - ./data/nginx/vhost.d:/etc/nginx/vhost.d
      - ./data/share/nginx/html:/usr/share/nginx/html
      - ./data/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
App 1 docker-compose.yaml
version: "3"
services:
  app:
    image: nginx:latest
    depends_on:
      - db
      - cache
    expose:
      - 80
    volumes:
      - ./application:/var/www/html
    restart: always
    working_dir: /var/www/html
    environment:
      VIRTUAL_HOST: app1.example.com
      LETSENCRYPT_HOST: app1.example.com
      LETSENCRYPT_EMAIL: user@example.com
  cache:
    image: redis:alpine
    restart: always
    volumes:
      - cachedata:/data
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: rootpasswd
      MYSQL_DATABASE: database_name
      MYSQL_USER: database_user
      MYSQL_PASSWORD: database_passwd
    volumes:
      - dbdata:/var/lib/mysql
networks:
  default:
    external:
      name: nginx-proxy
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
App 2 docker-compose.yaml
version: "3"
services:
  app:
    image: nginx:latest
    depends_on:
      - db
      - cache
    expose:
      - 80
    volumes:
      - ./application:/var/www/html
    restart: always
    working_dir: /var/www/html
    environment:
      VIRTUAL_HOST: app2.example.com
      LETSENCRYPT_HOST: app2.example.com
      LETSENCRYPT_EMAIL: user@example.com
  cache:
    image: redis:alpine
    restart: always
    volumes:
      - cachedata:/data
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: rootpasswd
      MYSQL_DATABASE: database_name
      MYSQL_USER: database_user
      MYSQL_PASSWORD: database_passwd
    volumes:
      - dbdata:/var/lib/mysql
networks:
  default:
    external:
      name: nginx-proxy
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
With this setup both applications will use the db and cache instances of App 1. The only way to solve that is to give those services unique names like app_1_db and app_2_db. But then App 1 is still able to connect to app_2_db, which I would like to prevent.
Is there a way to isolate all services within their docker-compose.yaml file and still use the nginx proxy?
Docker version 18.09.0, build 4d60db4
docker-compose version 1.21.2, build a133471
You can connect only the app (nginx) container from your apps to the nginx-proxy network. The only edit needed should be in each app's docker-compose:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
That way the app service will be connected to the nginx-proxy and default networks at the same time. (If you omit the networks key, a service is always connected to the default network.)
Resolving service names to container IPs then works as expected, as long as no container can see (across all the networks it's connected to) two containers with the same service name.
If you want even more isolation, you can create a separate nginx-proxy network for every app.
So in your nginx-proxy docker-compose you will have:
version: "3"
services:
  nginx-proxy:
    networks:
      - default
      - nginx-proxy_app1
      - nginx-proxy_app2
  # letsencrypt-proxy service doesn't have to have networks key
networks:
  nginx-proxy_app1:
    external: true
  nginx-proxy_app2:
    external: true
and in your apps:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app1
networks:
  nginx-proxy_app1:
    external: true
and
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app2
networks:
  nginx-proxy_app2:
    external: true
That way, in every "proxy" network there is only one app container (assuming you are not using docker-compose scaling) plus the nginx-proxy container.
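Note that external networks are not created by docker-compose; they have to exist up front. A one-time setup sketch:

# External networks must be created before `docker-compose up`
docker network create nginx-proxy_app1
docker network create nginx-proxy_app2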
More reading:
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/overlay/#operations-for-standalone-containers-on-overlay-networks