Sharing volumes in networked Docker containers with Docker Compose fails - docker

I have two docker-compose.yml files.
The first one is a global one that I am using to configure the nginx webserver; the other one holds the application code. Below are their configurations.
First one, with the nginx configuration:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: globaldocker
    container_name: app
    restart: unless-stopped
    tty: true
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./dockerconfig/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - common_network
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/:/etc/nginx/conf.d/
    networks:
      - webserver_network
      - common_network
networks:
  common_network:
    external: false
  webserver_network:
    external: false
The above creates two networks: global_docker_common_network and global_docker_webserver_network.
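As a quick check, you can list the networks Compose created; the prefix comes from the compose project name (by default the directory name), so the exact names may differ:
# List all Docker networks and look for the compose-created ones
docker network ls

# Inspect one to see which containers are attached to it
docker network inspect globaldocker_webserver_network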
In the dockerconfig folder there is an nginx configuration like:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    # other nginx configuration for pos.test
}
On the docker-compose configuration with the PHP file
Now, for the one holding the source code for pos.test, I have the following configuration:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
    - ./dockerconfig/nginx/:/etc/nginx/conf.d/
  networks:
    - globaldocker_webserver_network
networks:
  globaldocker_webserver_network:
    external: true
Here I have added the external network.
When I try accessing nginx at pos.test it doesn't display the application but only shows the default nginx page.
I have tried opening a bash shell in the first configuration's nginx container and checked the /var/www/pos folder, but I can't see the files from the second docker config (the source code).
How do I share volumes with my nginx container so that when I access docker via the exposed port 80 I am able to access my site pos.test?
What am I missing to make this work?
UPDATE
The two docker configuration files are located in different folders on my host machine.
UPDATE ON THE QUESTION
This is my nginx config file
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/pos_error.log;
    access_log /var/log/nginx/pos_access.log;
    root /var/www/pos/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

You are mounting the current directory of each docker-compose file, so each container only has the source code that resides in its own directory. You need a common directory.
First File
volumes:
  - /path_to_sc/common:/var/www
Second File
volumes:
  - /path_to_sc/common:/var/www/pos
When I try accessing Nginx pos.test it doesn't display the application
but only shows the default Nginx page
Probably your first file is not picking up the correct configuration. Double-check ./dockerconfig/nginx/:/etc/nginx/conf.d/ or run a command inside the container to verify the configuration file:
docker exec webserver bash -c "cat /etc/nginx/conf.d/filename.conf"
I have tried opening a bash shell in the first configuration's nginx
container and checked the /var/www/pos folder, but I can't see the files
from the second docker config (the source code).
Mount the common directory so that it is accessible from both containers.
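A quick way to verify the shared mount from both sides (container names taken from the question's compose files):
# Both listings should show the same application files:
# the webserver mounts the common directory at /var/www,
# the app container mounts it at /var/www/pos
docker exec webserver ls /var/www
docker exec envanto_pos ls /var/www/pos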
Update:
From your comment, it seems there is a syntax error in your docker-compose file. Take a look at this example:
web:
  image: nginx
  volumes:
    - ./data:/var/www/html/
  ports:
    - 80:80
  command: [nginx-debug, '-g', 'daemon off;']
web2:
  image: nginx
  volumes:
    - ./data:/var/www/html
  command: [nginx-debug, '-g', 'daemon off;']
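With that layout both services mount the same ./data directory; a quick sanity check could look like this (using the service names from the example above):
docker-compose up -d

# Create a file through one service and list it through the other;
# both containers should see the same ./data contents
docker-compose exec web touch /var/www/html/hello.txt
docker-compose exec web2 ls /var/www/html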

Related

How do I configure/reconfigure an existing NGINX server to proxy to a docker container?

I have an existing NGINX server hosting 2 websites, one standard and one on a Node server. I want to run 3 Docker containers on it as well.
All of the tutorials suggest running NGINX in a container; however, this would conflict with my existing setup.
nodejs server, ports 3030:3030
mysql, ports 3360:3360
phpmyadmin, ports 8080:80
They run fine on localhost on my local machine, but I can't get NGINX on the remote server to host them.
I want to be able to access the node server at http://publicIP:3030
I have tried to follow this answer, but NGINX gives me a 404 error when I try to access it.
My nginx config is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;
    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /paragon/ {
        proxy_pass http://localhost:3030/;
        # proxy_set_header X-SRV paragon;
    }

    location /phpmyadmin {
        proxy_pass http://localhost:8080/;
        # proxy_set_header X-SRV phpmyadmin;
    }

    location /mysql {
        proxy_pass http://localhost:3360/;
        # proxy_set_header X-SRV mysql;
    }
}
I have tried it with the X-SRV headers uncommented as well.
My docker-compose.yml config is:
services:
  web:
    container_name: paragon_web
    build: .
    command: npm run
    depends_on:
      - db
    volumes:
      - ./:/app
      - /node_modules
    networks:
      - paragon_net
    ports:
      - "3030:3030"
  db:
    container_name: paragon_db
    image: mysql:8.0
    command:
      --default-authentication-plugin=mysql_native_password
      --init-file ./src/data/db_init.sql
    restart: unless-stopped
    volumes:
      - ./src/data/db_init.sql:/docker-entrypoint-initdb.d/
      - mysql-data:/var/lib/mysql
    ports:
      - "3360:3306"
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: paragon
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: admin
      MYSQL_PASSWORD: paragon99
      SERVICE_TAG: dev
      SERVICE_NAME: paragon_db
    networks:
      - paragon_net
  # volumes:
  phpmyadmin:
    container_name: sql_admin
    image: phpmyadmin:5.2.0-apache
    restart: always
    depends_on:
      - db
    ports:
      - "8090:80"
    networks:
      - paragon_net
networks:
  paragon_net:
    driver: bridge
The location of the new site on the server is /var/www/newsite.
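A debugging sketch that may help narrow down the 404: test each upstream directly on the server before testing through NGINX (ports taken from the compose file above; note that it publishes phpMyAdmin on 8090 while the NGINX config proxies to 8080):
# If these direct requests fail, the containers, not NGINX, are the problem
curl -I http://localhost:3030/
curl -I http://localhost:8090/

# Then test the same services through NGINX
curl -I http://localhost/paragon/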

Can't call my Laravel API from the Node.js container but I can call it from Postman

I have the following 4 containers:
php-fpm for Laravel API backend
node.js container for the Next.js frontend
nginx container
mysql container
I am able to call the Laravel API and get the correct JSON data from Postman using http://localhost:8088/api/products, but when I try from the Node container, I get FetchError: request to http://localhost:8088/api/ failed, reason: connect ECONNREFUSED 10.0.238.3:8088. I am not sure whether it's an nginx configuration issue or a docker-compose.yml configuration issue.
I also tried to call the API from the node container using several other options (none of them worked):
http://php:8088/api/products
http://localhost:8088/api/products
http://php:9000/api/products - gives a different error: FetchError: request to http://php:9000/api/products/ failed, reason: read ECONNRESET
This is the docker-compose.yml:
networks:
laravel:
driver: bridge
services:
nginx:
image: nginx:stable-alpine
container_name: nginx
ports:
- "8088:80"
volumes:
- ./laravel-app:/var/www/html
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
- mysql
- node
networks:
- laravel
mysql:
image: mysql
container_name: mysql
restart: unless-stopped
tty: true
ports:
- "4306:3306"
volumes:
- ./mysql:/var/lib/mysql
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: password
SERVICE_TAGS: dev
SERVICE_NAME: mysql
networks:
- laravel
php:
build:
context: .
dockerfile: Dockerfile
container_name: php
volumes:
- ./laravel-app:/var/www/html
ports:
- "9000:9000"
networks:
- laravel
node:
build:
context: ./nextjs
dockerfile: Dockerfile
container_name: next
volumes:
- ./nextjs:/var/www/html
ports:
- "3000:3000"
- "49153:49153"
networks:
- laravel
And this is the nginx default.conf file:
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
If you want to reach your Laravel API from the node service, you must use http://nginx/your_api_endpoint.
Nginx acts as a reverse proxy here: it takes the traffic on port 80 (listen 80;) and redirects it to your Laravel container (fastcgi_pass php:9000;).
So your target is not the Laravel container itself, it is nginx.
http://nginx/api/products/ should work if everything else is OK.
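For example, from the host (a sketch; it assumes curl is available inside the node image, whose container_name is next):
# The service name "nginx" resolves over the shared "laravel" network
docker exec next curl -s http://nginx/api/products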

Traefik: level=error msg="field not found, node: mywebsite" providerName=docker

I am building a static website using Gatsby, and I am using Nginx to serve the static files.
I am also setting up Docker for deploying the application to production, with Traefik as the reverse proxy in the Docker setup.
Traefik runs in one container, while the Gatsby application and Nginx run together in another container.
However, when I run the application in production I get this error:
level=error msg="field not found, node: mywebsite" providerName=docker container=web-my-website
Here's my code:
Nginx's default.conf
server {
    listen 3008;
    add_header Cache-Control no-cache;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
Gatsby application's docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
expose:
- "3004"
labels:
- traefik.enable=true
- traefik.http.routers.mywebsite.rule=Host(`mywebsite.com`)
- traefik.http.services.educollectwebsite.loadbalancer.server.port=3004
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
Traefik's docker-compose.yml
version: "3"
services:
reverse-proxy:
# The official v2 Traefik docker image
image: traefik:v2.2
# Enables the web UI and tells Traefik to listen to docker
command:
- --api.insecure=true
- --entrypoints.web.address=:80
- --providers.docker=true
- --providers.docker.exposedbydefault=false
ports:
# The HTTP port
- "88:80"
# The Web UI (enabled by --api.insecure=true)
- "8088:8080"
restart: always
volumes:
# So that Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
networks:
default:
external:
name: traefik-proxy
I can't seem to figure out what the issue is here. Any form of help will be appreciated.
I was finally able to resolve it after some hours of working with my Line Manager.
The issue was that I defined port 3008 in the Nginx default.conf file and port 3004 in the Gatsby application's docker-compose.yml file. Since the two ports were different, this did not allow traffic into the application from the Traefik reverse proxy.
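You can confirm a mismatch like this before changing anything (the container name is taken from the error message; nginx -T dumps the effective configuration):
# Check which port nginx inside the container actually listens on
docker exec web-my-website nginx -T | grep listen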
Solution 1:
Simply defining the same port, 3008, in the Nginx default.conf and in the Gatsby application's docker-compose.yml file fixed it:
Nginx's default.conf
server {
    listen 3008;
    add_header Cache-Control no-cache;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Gatsby application's docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
expose:
- "3004"
labels:
- traefik.enable=true
- traefik.http.routers.mywebsite.rule=Host(`mywebsite.com`)
- traefik.http.services.educollectwebsite.loadbalancer.server.port=3008
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
Solution 2:
Defining Traefik's default port, which is port 80, in the Nginx default.conf and in the Gatsby application's docker-compose.yml file also fixed it. This is preferable when deploying static applications, since it assumes a reasonable default for the application.
Nginx's default.conf
server {
    listen 80;
    add_header Cache-Control no-cache;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Gatsby application's docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
expose:
- "80"
labels:
- traefik.enable=true
- traefik.http.routers.mywebsite.rule=Host(`mywebsite.com`)
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
Note: Using the same port as Traefik's default, port 80, in the application removes the need for the Traefik loadbalancer service label:
- traefik.http.services.educollectwebsite.loadbalancer.server.port=80
That's all.
I hope this helps
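As a quick end-to-end check (a sketch; Traefik's web entrypoint is published on port 88 in the compose file above):
# Request the site through Traefik with the Host header it routes on
curl -H "Host: mywebsite.com" http://localhost:88/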

Linking two docker compose configurations fails with a bridge network

I have set up a global Docker configuration which I expect to handle the nginx and database configuration. It has the following configuration:
webserver:
  image: nginx
  container_name: webserver
  restart: unless-stopped
  tty: true
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./:/var/www
    - ./dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
  networks:
    - common
networks:
  common:
    driver: bridge
In the folder dockerconfig/nginx/conf.d I have a file pos.test with the following nginx config:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
On the second Docker configuration file I have the following:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
  networks:
    - common
networks:
  common:
    driver: bridge
Now, after running both Docker files via docker-compose up -d, they both run without any issue, but nginx cannot serve the domain pos.test whose app code is run by the second Docker file.
The idea behind this is to have one docker-compose configuration file handle the nginx server while the other configuration handles the application files.
How can I make both Docker configurations work? Including the network part with a bridge fails to work. What am I missing?
UPDATE
I know one way to solve this would be a single docker config file, but I want to split the config across different configuration files.
You are most likely running the docker-compose command from different directories, and not overriding the compose project name. Docker compose will prefix objects created, like containers, volumes, and networks, with the project name, to allow different instances to be run in isolation from each other.
To solve this, you need a known name of the network, and you'll want to define it as external to at least one of your compose files. When the network is defined as external, compose will not try to create it, but will require that it was already created externally, either by a docker network create command or by the other compose file.
To create the network with a known name, you can specify the name value in newer versions of the docker compose file.
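Alternatively, you can create the network manually before starting either stack; a sketch:
# Create the shared network once, up front
docker network create common

# Later, check which containers have joined it
docker network inspect common --format '{{range .Containers}}{{.Name}} {{end}}'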
Here is the first compose file that would create the network with a known name:
version: '3.7'
services:
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - common
networks:
  common:
    external: false
    name: common
And the second compose file that would use the already created network:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: posapp/php
    container_name: envanto_pos
    restart: unless-stopped
    tty: true
    working_dir: /var/www/pos
    volumes:
      - ./:/var/www/pos
    networks:
      - common
networks:
  common:
    external: true
    name: common
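Start the file that creates the network before the one that declares it external; roughly (the directory names here are illustrative):
# The first stack creates the "common" network
cd /path/to/webserver-project && docker-compose up -d

# The second stack then joins it as an external network
cd /path/to/app-project && docker-compose up -d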
The problem is that you're redefining the common network.
You can define the 2 docker-compose.yml files as follows:
docker-compose.yml
networks:
  common:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: posapp/php
    container_name: envanto_pos
    restart: unless-stopped
    tty: true
    working_dir: /var/www/pos
    volumes:
      - ./${AppPath}/:/var/www/pos
    networks:
      - common
and docker-compose.webserver.yml
services:
webserver:
image: nginx
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
volumes:
- ./${WebserverPath}/:/var/www
- ./${WebserverPath}/dockerconfig/nginx/conf.d/:/etc/nginx/conf.d/
networks:
- common
and start the application with:
docker-compose -f docker-compose.yml -f docker-compose.webserver.yml up -d

Docker-compose and composer for laravel project

I'm trying to use Docker (first time ever) to build a development environment for my Laravel projects.
I have read the documentation, and it looks like a docker-compose.yml file is the way to go, at least in my case.
I'm trying to create a LEMP environment, and this is my compose file:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: 01dev-memcached
mysql:
image: mysql:8.0
container_name: 01dev-mysql
working_dir: /application
volumes:
- ./:/application
environment:
- MYSQL_ROOT_PASSWORD=laravel
- MYSQL_DATABASE=laravel
- MYSQL_USER=laravel
- MYSQL_PASSWORD=laravel
ports:
- "8082:3306"
webserver:
image: nginx:alpine
container_name: 01dev-webserver
working_dir: /application
volumes:
- ./:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8080:80"
php-fpm:
build: phpdocker/php-fpm
container_name: 01dev-php-fpm
working_dir: /application
volumes:
- ./:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
My nginx.conf file:
server {
    listen 80 default;
    client_max_body_size 108M;
    access_log /var/log/nginx/application.access.log;
    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
Since it is a Laravel application, I need to set the public folder as root, so I have set it in my nginx.conf.
If I manually create the public folder with an index.php containing just echo "hello";, I'm able to connect and see the string at http://localhost:8080.
Now I need to use Composer to install and manage my project, so I found a Composer image and added it under services in docker-compose.yml:
composer:
  restart: 'no'
  container_name: 01dev-composer
  image: "composer"
  command: install
  volumes:
    - ./:/application
As far as I know, with volumes I bind my host directory to the container path, so it should point outside the public folder, am I right?
If I run docker-compose up I see that the composer container exits with code 1:
01dev-composer | Composer could not find a composer.json file in /app
01dev-composer | To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
01dev-composer exited with code 1
And I'm not able to connect to it:
docker-compose exec composer install
ERROR: No container found for composer_1
How can I use Composer in my project? Is there a better way to do this?
See the composer image's Dockerfile:
WORKDIR /app
This image's working directory is /app, and the examples in the guide all mount the code directory to the /app folder in the container.
Besides, the error also tells you that:
Composer could not find a composer.json file in /app
So what you need to do is mount your code directory to /app and make sure this service starts first; it will then generate the vendor folder in your source code directory:
volumes:
  - ./:/app
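With that mount in place, a one-off run through compose should work; a sketch using the service name from your file:
# Run composer install in a throwaway container; the vendor/ directory
# is written into your host source tree through the bind mount
docker-compose run --rm composer install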
