Docker fake localhost access for any LAN device

After many failed attempts I am asking here. Not a programming question per se, but I believe it is relevant to the community.
I have been trying to access my docker-compose development website from my phone on the LAN, e.g. https://lan_server_ip:8000 (to check the CSS), but I am running into all sorts of issues and would like to know if there is a simple proxy-type solution that would let me reach my development server from the LAN.
The main issue is that my Symfony app only recognizes a single trusted IP, 127.0.0.1, set within the app itself.
I need a solution that does not require modifying the Symfony app, since adding IPs for individual devices is impractical, especially with dynamic IPs.
The idea would be to have an interface to connect to, instead of the main nginx, that makes the website believe all requests are coming from the host machine (127.0.0.1).
Or should I modify my nginx configuration with a new server or location block? I tried rewriting headers, but no success so far.
At the moment I am using the responsive design mode in browsers, but it is far from accurate. My only real alternative is to find issues on the production website.
I believe adding a new container could take care of this, but how?
my docker-compose.yaml
version: "3"
services:
nginx:
container_name: nginx
image: "${NGINX_IMAGE}"
build: build/nginx
restart: always
env_file: .env
ports:
- "8000:443"
volumes:
- "${APP_HOST_NGINX_CONF}:${APP_CONTAINER_NGINX_CONF}:ro"
- "${APP_HOST_CERTS}:${APP_CONTAINER_CERTS}"
- "${APP_HOST_DIR}/public:${APP_CONTAINER_DIR}/public:ro"
- "/etc/localtime:/etc/localtime:ro"
networks:
app_network:
depends_on:
- app
app:
container_name: app
image: "${APP_IMAGE}"
restart: always
build: build/app
env_file: .env
networks:
app_network:
volumes:
- type: bind
source: ${APP_HOST_DIR}
target: ${APP_CONTAINER_DIR}
- type: bind
source: ${PHP_INI}
target: /usr/local/etc/php/php.ini
- type: bind
source: /etc/localtime
target: /etc/localtime:ro
depends_on:
- database
database:
container_name: mariadb
image: "mariadb:${MARIADB_VERSION}"
restart: always
env_file: .env
volumes:
- "${SQL_INIT}:/docker-entrypoint-initdb.d"
- type: bind
source: ${MARIADB_DATA_DIR}
target: /var/lib/mysql
- type: bind
source: ${MARIADB_LOG_DIR}
target: /var/logs/mysql
- type: bind
source: ${MARIADB_CERTS_DIR}
target: /etc/certs/
- type: bind
source: /etc/localtime
target: /etc/localtime:ro
ports:
- "3306:3306"
networks:
app_network:
command: [
"mysqld",
"--character-set-server=utf8mb4",
"--collation-server=utf8mb4_general_ci",
"--require_secure_transport=OFF",
"--bind-address=0.0.0.0",
"--ssl-key=/etc/certs/server-key.pem",
"--ssl-cert=/etc/certs/server-cert.pem",
"--ssl-ca=/etc/certs/ca-cert.pem",
]
networks:
app_network:
nginx.conf
#./images/nginx/build/default.conf
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name 127.0.0.1;

    ssl_certificate /etc/nginx/certs/dev.crt;
    ssl_certificate_key /etc/nginx/certs/dev.key;

    index index.php index.html;
    root /var/www/app/public;
    client_max_body_size 128M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    error_log /var/log/nginx/app.error.log;
    access_log /var/log/nginx/app.access.log;
}

I am not an expert in Symfony or the trusted-hosts mechanism you described in your question comments, but if the actual problem is simulating that your requests are coming from 127.0.0.1, you could try configuring nginx to provide that value in the Host header when proxying to your application.
Since you are using FastCGI, you could obtain the desired result by including the following line within the location ~ \.php$ block of your nginx configuration:
fastcgi_param HTTP_HOST 127.0.0.1;
For example:
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass app:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param HTTP_HOST 127.0.0.1;
}
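If Symfony is actually validating the client IP (the single trusted IP 127.0.0.1 mentioned in the question) rather than the Host header, the same trick could be attempted with the REMOTE_ADDR FastCGI parameter. This is only a hedged suggestion, untested against your app:
fastcgi_param REMOTE_ADDR 127.0.0.1;  # hypothetical: make PHP see every request as coming from localhost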
Please take my words with caution: as I said, I don't fully understand Symfony and I don't want to cause you any kind of problem.

Related

Docker nginx with tag nginx:latest seems to cause a major issue - direct access to web directory

Upgrading the nginx Docker image to the tag nginx:latest stops PHP files from being executed and gives direct access to the web directory!
Upgrading docker-compose.yml from nginx:1.18.0 to nginx:latest seems to cause a major issue.
The nginx container no longer executes PHP files and gives direct access to all content of the web root.
Details:
Extract of docker-compose.yml (full reproducible example below):
webserver:
  #image: nginx:1.8.0
  image: nginx:latest
and then "docker-composer up -d"
raises the issue.
Effect:
Nginx 1.18.0 not executing PHP files (using php7.4-fpm) and give direct access to web contains
eg: domain.com/index.php can then be directly downloaded!
First elements:
image nginx:latest or image nginx produce the same effect
image nginx:1.8.0 (nor any explicit x.y.z tag) does not produce this issue
Troubling facts:
the nginx image with tag nginx:mainline downloads # nginx version: nginx/1.21.5
the nginx image with tag nginx:latest downloads a 1.8.0 version # nginx version: nginx/1.8.0
Probable cause:
the image nginx:latest has the following file (extract):
/etc/nginx/nginx.conf
http {
    (...)
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*; # THIS LINE IS NEW - instantiates a default site
}
I don't know whether this point has already been noticed.
Is a Dockerfile with an "rm /etc/nginx/sites-enabled/" command an acceptable workaround, or a prerequisite?
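For reference, the workaround asked about above would be a one-line addition on top of the official image. This is only an untested sketch (if the directory does not exist in a given image, the rm -rf is simply a no-op):
FROM nginx:latest
# hypothetical workaround: drop any distro-style vhost directory so only conf.d/*.conf is loaded
RUN rm -rf /etc/nginx/sites-enabled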
Reproducible example
docker-compose.yml
version: "3"
services:
cms_php:
image: php:7.4-fpm
container_name: cms_php
restart: unless-stopped
networks:
- internal
- external
volumes:
- ./src:/var/www/html
webserver:
# image: nginx:1.18.0 # OK
# image: nginx:1.17.0 # OK
# image: nginx:mainline # OK
image: nginx:latest # NOK
# image: nginx # NOK
container_name: webserver
depends_on:
- cms_php
restart: unless-stopped
ports:
- 80:80
volumes:
- ./src:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d/
networks:
- external
networks:
external:
driver: bridge
internal:
driver: bridge
nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
src/index.php
<?php echo "Hi..."; ?>
With the setup below, I am able to get the desired result. I didn't have to make any changes to your files, so you may have an issue with your paths/setup. Try to imitate my setup; I am using nginx:latest.
$ curl localhost:80
Hi...
Running docker processes in this setup
$ docker-compose ps
  Name                 Command               State        Ports
------------------------------------------------------------------------
cms_php     docker-php-entrypoint php-fpm    Up      9000/tcp
webserver   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:80->80/tcp
Folder structure
$ tree
.
├── docker-compose.yaml
├── nginx-conf
│   └── nginx.conf
└── src
    └── index.php
2 directories, 3 files
src/index.php
$ cat src/index.php
<?php echo "Hi..."; ?>
docker-compose.yaml
$ cat docker-compose.yaml
version: "3"
services:
cms_php:
image: php:7.4-fpm
container_name: cms_php
restart: unless-stopped
networks:
- internal
- external
volumes:
- ./src:/var/www/html
webserver:
image: nginx:latest
container_name: webserver
depends_on:
- cms_php
restart: unless-stopped
ports:
- 80:80
volumes:
- ./src:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d/
networks:
- external
networks:
external:
driver: bridge
internal:
driver: bridge
nginx-conf/nginx.conf
$ cat nginx-conf/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    index index.php index.html index.htm;
    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cms_php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
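As a side note, if you want to confirm which nginx version a given tag actually resolves to on your machine (rather than relying on the tag name), you can ask the image directly; a minimal check, assuming plain Docker CLI access:
$ docker pull nginx:latest
$ docker run --rm nginx:latest nginx -v
The second command prints the exact nginx version string baked into the image you just pulled.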

Docker: Nginx Reverse Proxy returns error 504 when trying to host multiple sites on 1 VPS

I'm trying to host multiple sites using Docker on one VPS. I want each site to have one nginx server and one PHP container, and all the sites to share one common MySQL database.
This is what the containers look like:
mysql_container (port: 3306)
main_webserver (nginx container) (port 80)
site_1 (site.com)
- nginx container (81:80), php container
site_2 (site2.com)
- another nginx container (82:80), another php container
main_server.conf
server {
    listen 80;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    server_name site.com;

    location / {
        proxy_pass http://<site_container_ip_address>:82/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
site1.conf
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
If I access site.com:82, it works fine, but site.com:80 returns error 504.
I run the containers using docker-compose:
version: '3.5'
services:
  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php
    container_name: testapp
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: testapp
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - testapp-network

  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: testwebserver
    restart: unless-stopped
    tty: true
    ports:
      - "82:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - testapp-network

#Docker Networks
networks:
  testapp-network:
    driver: bridge
    name: testapp_network
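One hedged observation (not from the original post): inside the Docker network, the site's nginx container listens on its container port (80), while 82 is only the published host port, so proxying to <container_ip>:82 from the main proxy container is unlikely to reach anything. Assuming main_webserver is attached to the same testapp_network, the location block would typically target the service name and container port instead, roughly:
location / {
    # hypothetical: reach the site container by name on the shared Docker network
    proxy_pass http://testwebserver:80/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
}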

Docker Communication between 2 apps inside same container

I have two apps: app1 runs on localhost/app1, and app2, which exposes an API, runs on localhost/app2. Both apps use the same nginx server. app1 makes an HTTP GET request to app2, which throws cURL error 7: Failed to connect.
Now that both of these apps are containerized (and running on the same network), what URL should app1 use to fetch the API details?
Docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    ports:
      - "83:80"
      - "443:443"
    links:
      - php
    volumes:
      - /code:/var/www/html
  php:
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - /code:/var/www/html
The code directory has two folders, app1 and app2, where app1 is the application code base and app2 is the API code base.
Virtual host entry mounted inside the web container:
server {
    listen 80;
    server_name default;
    root /var/www/html;
    index index.html index.php;
    client_max_body_size 100M;
    fastcgi_read_timeout 1800;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
        access_log off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass lamp_php_1:9000;
    }
}
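A quick way to test this from inside the stack (a hedged sketch, assuming the compose file above, the default compose network where services resolve each other by service name, and that curl is available in the php image) is to call the nginx service by its service name, since both app1 and app2 are served by the web container:
$ docker-compose exec php curl -I http://web/app2/
If that returns a response, the same http://web/app2/... base URL is what app1 can use for its GET requests instead of localhost/app2.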

Multiple nginx websites with one container

So, first let me explain what I am trying to do. I have two websites, a frontend and a backend. The frontend is just HTML and Vue, and it uses the backend (an API) to store information.
Websites:
- erp.test (frontend)
- api.erp.test (backend; php, api)
docker-compose.yml
version: '3'
services:
  #web
  frontend:
    build:
      context: .
      dockerfile: ./environment/nginx/Dockerfile
    container_name: frontend
    restart: always
    ports:
      - 80:80
      - 442:442
    volumes:
      - ./environment/nginx/sites-enabled:/etc/nginx/sites-enabled
      - ./frontend/public:/usr/share/nginx/html/frontend
      - ./api:/usr/share/nginx/html/api
    links:
      - php
  php:
    build:
      context: .
      args:
        version: 7.3.0-fpm
      dockerfile: ./environment/php/Dockerfile
    container_name: php_backend
    restart: always
    depends_on:
      - mysql
  mysql:
    build:
      context: .
      args:
        version: 5.7
      dockerfile: ./environment/mysql/Dockerfile
    restart: always
    volumes:
      - ./environment/mysql/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: laravel
      MYSQL_DATABASE: laravel
    ports:
      - 13306:3306
  command:
    build:
      context: .
      dockerfile: ./environment/command/Dockerfile
    container_name: command
    restart: always
    command: "tail -f /dev/null"
    volumes:
      - ./frontend:/frontend
This uses the following files for sites-enabled.
My Dockerfile for the nginx environment is the following:
FROM nginx
Config files for the websites:
etc/nginx/sites-enabled/api.erp.test
server {
    listen 80;
    listen [::]:80;
    server_name api.erp.test;
    root /usr/share/nginx/html/backend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.3.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
etc/nginx/sites-enabled/erp.test
server {
    listen 80;
    listen [::]:80;
    server_name erp.test;
    root /usr/share/nginx/html/frontend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    charset utf-8;
}
Both of these files should (I assume) be enabled and working. I checked the container and the files are in the correct place, and I've even added the IP address of the container to the hosts file on my machine like so:
172.18.0.3 erp.test
172.18.0.3 api.erp.test
Whenever I visit these URLs, I just get the default nginx page instead of the specific websites. Any idea what I am doing wrong?
I believe that for nginx in Docker the virtual host files need to go into /etc/nginx/conf.d, not /etc/nginx/sites-enabled.
So in your docker-compose.yml, change
./environment/nginx/sites-enabled:/etc/nginx/sites-enabled
to
./environment/nginx/sites-enabled:/etc/nginx/conf.d
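To verify which server blocks nginx actually loads after that change, dumping the effective configuration from inside the container is a quick check (assuming the container_name frontend from the compose file above):
$ docker exec frontend nginx -T | grep server_name
If erp.test and api.erp.test show up there, the vhosts are being included; if not, the files are still not in a directory that the image's nginx.conf includes.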

Docker - Access nginx container using nginx server_name from another container

I have one Node.js application (web-app) and two Lumen applications (api, customer-api) that are load balanced by an nginx container listening on port 80.
My docker-compose.yml file:
version: '2'
services:
  nginx:
    build:
      context: ../
      dockerfile: posbytz-docker/nginx/dockerfile
    volumes:
      - api
      - customer-api
    ports:
      - "80:80"
    networks:
      - network
    depends_on:
      - web-app
      - api
      - customer-api
  web-app:
    build:
      context: ../
      dockerfile: posbytz-docker/web-app-dockerfile
    volumes:
      - ../web-app:/posbytz/web-app
      - /posbytz/web-app/node_modules
    ports:
      - "3004:3004"
    networks:
      - network
  api:
    build:
      context: ../
      dockerfile: posbytz-docker/api-dockerfile
    volumes:
      - ../api:/var/www/api
    networks:
      - network
  customer-api:
    build:
      context: ../
      dockerfile: posbytz-docker/customer-api-dockerfile
    volumes:
      - ../customer-api:/var/www/customer-api
    networks:
      - network
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - network
  memcached:
    image: memcached
    ports:
      - "11211:11211"
    networks:
      - network
  mysql:
    image: mysql:5.7
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - network
  adminer:
    image: adminer
    restart: always
    ports:
      - "9001:8080"
    networks:
      - network
networks:
  network:
    driver: bridge
Since I am using a bridge network, I am able to access each container from another container using the container names. But what I want instead is to access the containers using the server_name from their nginx configuration.
Below are the nginx configurations for each application.
web-app.conf:
server {
    listen 80;
    server_name posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        proxy_pass http://web-app:3004;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
api.conf:
server {
    listen 80;
    index index.php index.html;
    root /var/www/api/public;
    server_name api.posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
customer-api.conf
server {
    listen 80;
    index index.php index.html;
    root /var/www/customer-api/public;
    server_name customer-api.posbytz.local;
    resolver 127.0.0.11 valid=10s;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass customer-api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
The problem
I want to access both the api and customer-api containers from the web-app container. The problem is that when I try curl http://nginx I am only getting a response from the api container. Is there any way to access the customer-api container through the nginx container?
What I tried
When I manually mapped the IP of the nginx container (172.21.0.9) to the respective server_names in the /etc/hosts file of the web-app container, it worked.
What I added to the /etc/hosts file of the web-app container:
172.21.0.9 api.posbytz.local
172.21.0.9 customer-api.posbytz.local
Is there any other way to achieve this without manual intervention?
I finally made it work by changing the nginx configuration in customer-api.conf to listen on port 81, i.e. listen 80; became listen 81;. Now http://nginx resolves to http://api:9000 and http://nginx:81 resolves to http://customer-api:9000.
You can use aliases:
networks:
  some-network:
    aliases:
      - api.posbytz.local
      - customer-api.posbytz.local
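Applied to the compose file in the question (a sketch, assuming the network is the one literally named network), the aliases would sit on the nginx service; web-app can then call the APIs by host name, and nginx selects the matching server_name block from the Host header:
nginx:
  networks:
    network:
      aliases:
        - api.posbytz.local
        - customer-api.posbytz.local
For example, curl http://api.posbytz.local and curl http://customer-api.posbytz.local from the web-app container should then hit the api and customer-api upstreams respectively.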
