First, let me explain what I'm trying to do. I have two websites, a frontend and a backend. The frontend is plain HTML and Vue, and it uses the backend (a PHP API) to store information.
Websites:
- erp.test (frontend)
- api.erp.test (backend; php, api)
docker-compose.yml
version: '3'
services:
  # web
  frontend:
    build:
      context: .
      dockerfile: ./environment/nginx/Dockerfile
    container_name: frontend
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./environment/nginx/sites-enabled:/etc/nginx/sites-enabled
      - ./frontend/public:/usr/share/nginx/html/frontend
      - ./api:/usr/share/nginx/html/api
    links:
      - php
  php:
    build:
      context: .
      args:
        version: 7.3.0-fpm
      dockerfile: ./environment/php/Dockerfile
    container_name: php_backend
    restart: always
    depends_on:
      - mysql
  mysql:
    build:
      context: .
      args:
        version: 5.7
      dockerfile: ./environment/mysql/Dockerfile
    restart: always
    volumes:
      - ./environment/mysql/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: laravel
      MYSQL_DATABASE: laravel
    ports:
      - 13306:3306
  command:
    build:
      context: .
      dockerfile: ./environment/command/Dockerfile
    container_name: command
    restart: always
    command: "tail -f /dev/null"
    volumes:
      - ./frontend:/frontend
It uses the following files in sites-enabled.
My dockerfile for the nginx environment is the following:
FROM nginx
Config files for the websites:
etc/nginx/sites-enabled/api.erp.test
server {
    listen 80;
    listen [::]:80;
    server_name api.erp.test;
    root /usr/share/nginx/html/backend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.3.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
etc/nginx/sites-enabled/erp.test
server {
    listen 80;
    listen [::]:80;
    server_name erp.test;
    root /usr/share/nginx/html/frontend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html;
    charset utf-8;

    location / {
        try_files $uri $uri/ =404;
    }
}
Both of those files should (I assume) be enabled and working. I checked the container and the files are in the right place, and I've even added the container's IP address to the hosts file on my machine like so:
172.18.0.3 erp.test
172.18.0.3 api.erp.test
Whenever I visit those URLs, I just get the default nginx page rather than the specific websites. Any idea what I am doing wrong?
I believe that for nginx in Docker, the virtual-host files need to go into /etc/nginx/conf.d, not /etc/nginx/sites-enabled.
So in your docker-compose.yml change
./environment/nginx/sites-enabled:/etc/nginx/sites-enabled
to
./environment/nginx/sites-enabled:/etc/nginx/conf.d
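For context, the stock nginx image's main config only pulls in conf.d; a paraphrased excerpt (the exact file varies by image version, so treat this as an illustration):

```nginx
# /etc/nginx/nginx.conf (excerpt, paraphrased from the official image)
http {
    # ...defaults omitted...
    include /etc/nginx/conf.d/*.conf;
    # note: there is no "include /etc/nginx/sites-enabled/*;" line here
}
```

Also note that the glob only matches files ending in .conf, so after remounting you would likely need to rename erp.test and api.erp.test to erp.test.conf and api.erp.test.conf for them to be picked up.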
Related
I'm currently using Ubuntu on WSL2 with Docker Desktop for Windows (WSL integration).
docker-compose.yml file
version: '3.9'
services:
  wordpress:
    # default port 9000 (FastCGI)
    image: wordpress:6.1.1-fpm
    container_name: wp-wordpress
    env_file:
      - .env
    restart: unless-stopped
    networks:
      - wordpress
    depends_on:
      - database
    volumes:
      - ${WORDPRESS_LOCAL_HOME}:/var/www/html
      - ${WORDPRESS_UPLOADS_CONFIG}:/usr/local/etc/php/conf.d/uploads.ini
      # - /path/to/repo/myTheme/:/var/www/html/wp-content/themes/myTheme
    environment:
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST}
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD}
  database:
    # default port 3306
    image: mysql:latest
    container_name: wp-database
    env_file:
      - .env
    restart: unless-stopped
    networks:
      - wordpress
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    volumes:
      - ${MYSQL_LOCAL_HOME}:/var/lib/mysql
    command:
      - '--default-authentication-plugin=mysql_native_password'
  nginx:
    # default ports 80, 443 - expose mapping as needed to host
    image: nginx:latest
    container_name: wp-nginx
    env_file:
      - .env
    restart: unless-stopped
    networks:
      - wordpress
    depends_on:
      - wordpress
    ports:
      - 8080:80 # http
      - 8443:443 # https
    volumes:
      - ${WORDPRESS_LOCAL_HOME}:/var/www/html
      - ${NGINX_CONF}:/etc/nginx/conf.d/default.conf
      - ${NGINX_SSL_CERTS}:/etc/nginx/certs
      - ${NGINX_LOGS}:/var/log/nginx
  adminer:
    # default port 8080
    image: adminer:latest
    container_name: wp-adminer
    restart: unless-stopped
    networks:
      - wordpress
    depends_on:
      - database
    ports:
      - "9000:8080"
networks:
  wordpress:
    name: wp-wordpress
    driver: bridge
I'm just starting out with Docker development. The files on local storage (in the Linux file system) were initially owned by www-data, so I changed ownership to my Linux username with sudo chown -R username:username wordpress/ because they weren't writeable. But after doing this, I can no longer upload files (from the WordPress interface) or write to files inside the nginx container unless ownership is changed back to www-data:www-data.
Things I've tried:
- Starting a bash session inside the nginx container with docker exec -it <cname> bash, adding my user with adduser username, and changing ownership of the uploads directory so I could write files as my username.
- Changing the nginx user inside the container to my username with the user username username directive.
I don't know what else to try except sudo chmod -R a+rwx on the main directory.
default.conf:
# default.conf
# redirect to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name wordpress-docker.test;

    location / {
        # update port as needed for host mapped https
        rewrite ^ https://wordpress-docker.test:8443$request_uri? permanent;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name wordpress-docker.test;
    index index.php index.html index.htm;
    root /var/www/html;
    server_tokens off;
    client_max_body_size 75M;

    # update ssl files as required by your deployment
    ssl_certificate /etc/nginx/certs/localhost+2.pem;
    ssl_certificate_key /etc/nginx/certs/localhost+2-key.pem;

    # logging
    access_log /var/log/nginx/wordpress.access.log;
    error_log /var/log/nginx/wordpress.error.log;

    # some security headers (optional)
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /favicon.svg {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
Folder structure:
|-config
|--uploads.ini
|-dbdata
|-logs
|-nginx
|--certs
|--default.conf
|-wordpress
|-.env
|-docker-compose.yml
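As an aside, the uploads.ini mounted into the wordpress container typically just raises PHP's upload limits to match nginx's client_max_body_size (75M above); a sketch with assumed values:

```ini
; config/uploads.ini -- illustrative values; keep them in sync with
; client_max_body_size in default.conf
file_uploads = On
upload_max_filesize = 75M
post_max_size = 75M
memory_limit = 256M
max_execution_time = 600
```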
Referring to this answer, this is how I resolved my issue:
Add your user to the www-data group:
sudo usermod -a -G www-data username
Give rw permissions to the www-data group (the -type f filter applies the permissions only to files and leaves directories untouched):
sudo find wordpress -type f -exec chmod g+rw {} +
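To keep newly uploaded files group-accessible as well, it may also help to give directories group rwx plus the setgid bit, so files WordPress creates inherit the directory's group. A sketch on a scratch tree (the wordpress-demo path is made up; substitute your real wordpress/ directory):

```shell
# demo tree standing in for the real wordpress/ directory
mkdir -p wordpress-demo/wp-content/uploads
touch wordpress-demo/wp-content/uploads/image.jpg

# files: group read/write
find wordpress-demo -type f -exec chmod g+rw {} +
# directories: group rwx, plus setgid so new files inherit the group
find wordpress-demo -type d -exec chmod g+rwxs {} +

stat -c '%A' wordpress-demo/wp-content/uploads
# with the default umask this prints drwxrwsr-x (note the "s")
```

The same two find commands, run against the real tree after the chown/usermod steps above, keep both your user and www-data able to write.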
Upgrading the nginx Docker image to the nginx:latest tag stops PHP files from executing and gives direct access to the web directory!
Upgrading docker-compose.yml from nginx:1.18.0 to nginx:latest seems to cause a major issue:
the nginx container no longer executes PHP files, and instead serves the contents of the web directory directly.
Details:
Extract of docker-compose.yml (full reproducible example below):
  webserver:
    #image: nginx:1.18.0
    image: nginx:latest
and then docker-compose up -d raises the issue.
Effect:
nginx no longer executes PHP files (via php7.4-fpm) and serves the web content directly;
e.g. domain.com/index.php can then be downloaded as plain text!
First elements:
- image nginx:latest or image nginx produce the same effect
- image nginx:1.18.0 (or any other explicit x.y.z tag) does not produce this issue
Troubling facts:
- the nginx:mainline tag pulls # nginx version: nginx/1.21.5
- the nginx:latest tag pulls a 1.8.0 version # nginx version: nginx/1.8.0
Probable cause:
the image tagged nginx:latest has the following in /etc/nginx/nginx.conf (extract):
http {
    (...)
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*; # THIS LINE IS NEW - it instantiates a default site
}
I don't know whether this point has been noticed before.
Is a Dockerfile with a rm -rf /etc/nginx/sites-enabled command an acceptable workaround, or a prerequisite?
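If removal does turn out to be necessary, the workaround could be a small custom image; a sketch, assuming the stock image layout:

```dockerfile
FROM nginx:latest

# remove sites-enabled so its default site cannot shadow the conf.d vhosts
RUN rm -rf /etc/nginx/sites-enabled
```

Pinning an explicit version tag (as the comments in the compose file below do) avoids the problem without a custom image.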
Reproducible example
docker-compose.yml
version: "3"
services:
  cms_php:
    image: php:7.4-fpm
    container_name: cms_php
    restart: unless-stopped
    networks:
      - internal
      - external
    volumes:
      - ./src:/var/www/html
  webserver:
    # image: nginx:1.18.0 # OK
    # image: nginx:1.17.0 # OK
    # image: nginx:mainline # OK
    image: nginx:latest # NOK
    # image: nginx # NOK
    container_name: webserver
    depends_on:
      - cms_php
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d/
    networks:
      - external
networks:
  external:
    driver: bridge
  internal:
    driver: bridge
nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
src/index.php
<?php echo "Hi..."; ?>
With the setup below, I am able to get the desired data, and I didn't have to change your files, so you may have an issue with your paths/setup. Try to imitate my setup. I am using nginx:latest.
$ curl localhost:80
Hi...
Running docker processes in this setup
$ docker-compose ps
  Name                 Command               State        Ports
-----------------------------------------------------------------------
cms_php     docker-php-entrypoint php-fpm    Up      9000/tcp
webserver   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:80->80/tcp
Folder structure
$ tree
.
├── docker-compose.yaml
├── nginx-conf
│ └── nginx.conf
└── src
└── index.php
2 directories, 3 files
src/index.php
$ cat src/index.php
<?php echo "Hi..."; ?>
docker-compose.yaml
$ cat docker-compose.yaml
version: "3"
services:
  cms_php:
    image: php:7.4-fpm
    container_name: cms_php
    restart: unless-stopped
    networks:
      - internal
      - external
    volumes:
      - ./src:/var/www/html
  webserver:
    image: nginx:latest
    container_name: webserver
    depends_on:
      - cms_php
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d/
    networks:
      - external
networks:
  external:
    driver: bridge
  internal:
    driver: bridge
nginx-conf/nginx.conf
$ cat nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
I'm trying to host multiple sites with Docker on one VPS. I want each site to have its own nginx server and its own PHP container, with all sites sharing one common MySQL database.
This is how the containers look:
- mysql_container (port: 3306)
- main_webserver (nginx container) (port 80)
- site_1 (site.com): nginx container (81:80), php container
- site_2 (site2.com): another nginx container (82:80), another php container
main_server.conf
server {
    listen 80;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    server_name site.com;

    location / {
        proxy_pass http://<site_container_ip_address>:82/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
site1.conf
server {
    listen 80;
    index index.php index.html;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
If I access site.com:82 directly, it works fine, but site.com:80 returns a 504 error. I run the containers with docker-compose:
version: '3.5'
services:
  # PHP service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php
    container_name: testapp
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: testapp
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - testapp-network
  # Nginx service
  webserver:
    image: nginx:alpine
    container_name: testwebserver
    restart: unless-stopped
    tty: true
    ports:
      - "82:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - testapp-network
# Docker networks
networks:
  testapp-network:
    driver: bridge
    name: testapp_network
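A 504 from the front proxy usually means it cannot reach the upstream at all. A sketch of an alternative main_server config, assuming the main webserver joins each site's Docker network (the name site1_webserver is made up; substitute the real container or service name): address the site's nginx by name on its container port (80), not the host-mapped port (82):

```nginx
server {
    listen 80;
    server_name site.com;

    location / {
        # reach the site container by name on the shared Docker network,
        # using the container port (80), not the host-mapped port (82)
        proxy_pass http://site1_webserver:80/;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```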
I have two apps: app1 runs on localhost/app1, and app2, which exposes an API, runs on localhost/app2. Both apps use an nginx server. When app1 makes an HTTP GET request to app2, it throws cURL error 7: Failed to connect.
But when containerizing both of these apps (and running them on the same network), what url should app1 be sending to fetch the api details?
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    ports:
      - "83:80"
      - "443:443"
    links:
      - php
    volumes:
      - /code:/var/www/html
  php:
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - /code:/var/www/html
The code directory has two folders, app1 and app2, where app1 is the application code base and app2 is the API code base.
Virtual host entry mounted inside the web container:
server {
    listen 80;
    server_name default;
    root /var/www/html;
    index index.html index.php;

    client_max_body_size 100M;
    fastcgi_read_timeout 1800;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
        access_log off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass lamp_php_1:9000;
    }
}
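As for the URL to use: on the compose network, containers resolve each other by service name, so app1's code (running in the php container) would call app2 through the nginx service web on its container port, not through localhost or the host-mapped port 83. A sketch of building that base URL (the /app2/index.php path is hypothetical):

```shell
# "web" is the nginx service name in docker-compose.yml; inside the network
# it listens on its container port 80 (the 83:80 mapping is host-side only)
APP2_BASE="http://web"
ENDPOINT="$APP2_BASE/app2/index.php"
echo "$ENDPOINT"
```

From inside the php container, curl "$ENDPOINT" should then connect, whereas http://localhost/app2 fails because localhost refers to the calling container itself.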
I have one node.js application (web-app) and two lumen applications (api, customer-api) that are load balanced by an nginx container listening on port 80.
My docker-compose.yml file:
version: '2'
services:
  nginx:
    build:
      context: ../
      dockerfile: posbytz-docker/nginx/dockerfile
    volumes:
      - api
      - customer-api
    ports:
      - "80:80"
    networks:
      - network
    depends_on:
      - web-app
      - api
      - customer-api
  web-app:
    build:
      context: ../
      dockerfile: posbytz-docker/web-app-dockerfile
    volumes:
      - ../web-app:/posbytz/web-app
      - /posbytz/web-app/node_modules
    ports:
      - "3004:3004"
    networks:
      - network
  api:
    build:
      context: ../
      dockerfile: posbytz-docker/api-dockerfile
    volumes:
      - ../api:/var/www/api
    networks:
      - network
  customer-api:
    build:
      context: ../
      dockerfile: posbytz-docker/customer-api-dockerfile
    volumes:
      - ../customer-api:/var/www/customer-api
    networks:
      - network
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - network
  memcached:
    image: memcached
    ports:
      - "11211:11211"
    networks:
      - network
  mysql:
    image: mysql:5.7
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - network
  adminer:
    image: adminer
    restart: always
    ports:
      - "9001:8080"
    networks:
      - network
networks:
  network:
    driver: bridge
Since I am using a bridged network, each container can reach the others by container name. What I want instead is to access the containers using the server_name from their nginx configurations.
Below are the nginx configuration of each application,
web-app.conf:
server {
    listen 80;
    server_name posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        proxy_pass http://web-app:3004;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
api.conf:
server {
    listen 80;
    index index.php index.html;
    root /var/www/api/public;
    server_name api.posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
customer-api.conf:
server {
    listen 80;
    index index.php index.html;
    root /var/www/customer-api/public;
    server_name customer-api.posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass customer-api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
The problem
I want to access both the api and customer-api containers from the web-app container. The problem is that when I try curl http://nginx, I only get a response from the api container. Is there any way to reach the customer-api container through the nginx container?
What I tried
When I manually mapped the IP of the nginx container (172.21.0.9) to the respective server_names in the /etc/hosts file of the web-app container, it worked.
What I added on /etc/hosts file on web-app container:
172.21.0.9 api.posbytz.local
172.21.0.9 customer-api.posbytz.local
Is there any other way to achieve this without manual intervention?
I finally made it work by changing the nginx configuration in customer-api.conf to listen on port 81, i.e. listen 80; to listen 81;. Now http://nginx resolves to http://api:9000 and http://nginx:81 resolves to http://customer-api:9000.
You can use aliases:
networks:
  some-network:
    aliases:
      - api.posbytz.local
      - customer-api.posbytz.local
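To sketch where that fragment lives: in compose it goes under the nginx service's own networks key (network here is the network name from the compose file above; the rest of the service definition is elided):

```yaml
services:
  nginx:
    # build, ports, depends_on as before
    networks:
      network:
        aliases:
          - api.posbytz.local
          - customer-api.posbytz.local
```

With the aliases in place, curl http://customer-api.posbytz.local from the web-app container resolves to the nginx container, and nginx then picks the matching server block via the Host header, with no /etc/hosts edits needed.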