ownCloud behind NetScaler and nginx proxy: PR_CONNECT_RESET_ERROR (Docker)

I'm trying to install a dockerized ownCloud instance behind an nginx server (on the same virtual machine). The whole VM (RHEL 8) sits behind a NetScaler (type unknown, managed by my company) which handles the SSL handshake with a wildcard certificate. The nginx server is only reachable over plain HTTP on port 80.
My nginx config:
server {
    listen 80;
    server_name cloud.mydomain.com;

    location / {
        client_max_body_size 16000m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8082/;
    }
}
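Since TLS is terminated at the NetScaler and nginx only ever sees plain HTTP, a proxy block in front of ownCloud is often written with the original protocol forwarded explicitly. A minimal sketch of such a variant (whether the ownCloud container honors X-Forwarded-Proto in addition to the OWNCLOUD_OVERWRITE_* settings in the compose file below, and whether the NetScaler preserves the Host header, are assumptions):
server {
    listen 80;
    server_name cloud.mydomain.com;

    # Allow large uploads for every location in this server block.
    client_max_body_size 16000m;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Tell the backend that the client originally connected over HTTPS,
        # since the TLS handshake already happened at the NetScaler.
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://localhost:8082/;
    }
}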
The docker-compose.yml in use:
version: "3"
services:
  owncloud:
    image: owncloud/server
    container_name: owncloud_server
    restart: unless-stopped
    depends_on:
      - mariadb
      - redis
    ports:
      - "8082:8080"
    environment:
      - OWNCLOUD_DOMAIN=cloud.mydomain.com
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=mariadb
      - OWNCLOUD_ADMIN_USERNAME=admin
      - OWNCLOUD_ADMIN_PASSWORD=********
      - OWNCLOUD_MYSQL_UTF8MB4=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
      - OWNCLOUD_OVERWRITE_CLI_URL=https://cloud.mydomain.com
      - OWNCLOUD_OVERWRITE_PROTOCOL=https
      - OWNCLOUD_OVERWRITE_HOST=cloud.mydomain.com
      - OWNCLOUD_TRUSTED_PROXIES=0.0.0.0/16
      - OWNCLOUD_DEFAULT_LANGUAGE=de
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - ./files:/mnt/data
    networks:
      - default
  mariadb:
    image: mariadb:10.6
    container_name: owncloud_mariadb
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=owncloud
      - MYSQL_USER=owncloud
      - MYSQL_PASSWORD=owncloud
      - MYSQL_DATABASE=owncloud
    command: ["--max-allowed-packet=128M", "--innodb-log-file-size=64M"]
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-u", "root", "--password=owncloud"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - default
  redis:
    image: redis:6
    container_name: owncloud_redis
    restart: unless-stopped
    command: ["--databases", "1"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - ./redis:/data
    networks:
      - default
Here is what happens: a request to https://cloud.mydomain.com returns a correct redirect to https://cloud.mydomain.com/login, but that page produces ERR_CONNECTION_RESET in Chrome and PR_CONNECT_RESET_ERROR in Firefox. The ownCloud server log shows multiple "GET /login HTTP/1.0" 200 entries.
I have been playing with the configuration for days and have run out of ideas. Any help is appreciated. Thanks.
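To narrow down where the reset happens, it can help to log what nginx actually returned for /login, since the ownCloud log already shows a 200. A minimal debugging variant of the server block above, assuming the conf.d file is included inside the http context as usual (all variables are standard nginx ones):
log_format upstream_debug '$remote_addr "$request" status=$status '
                          'upstream_status=$upstream_status bytes_sent=$bytes_sent '
                          'request_time=$request_time';

server {
    listen 80;
    server_name cloud.mydomain.com;

    # Logs the status received from ownCloud and the bytes actually sent to the
    # client, so a 200 that never reaches the browser in full shows up here.
    access_log /var/log/nginx/cloud_debug.log upstream_debug;

    location / {
        client_max_body_size 16000m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://localhost:8082/;
    }
}
If bytes_sent matches the full response size, the reset most likely happens on the NetScaler side rather than between nginx and the container.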

Related

Communication Refused between 2 Docker Express Microservices

I am trying to deploy a full-stack application with Docker containers. Currently I have various Express microservice images and a React/nginx image, all proxied by nginx. When React makes a call to one of my microservices, that service has to make a synchronous request to another one, but I am getting connection refused and I'm not sure why. I have tried both relative and absolute URLs, to no avail.
Here is my docker compose:
networks:
  my_network:
    driver: bridge
services:
  nginx-service:
    image: sahilhakimi/topshelf-nginx:latest
    ports:
      - 80:80
    depends_on:
      - identity-service
      - client-service
      - product-service
      - cart-service
    networks:
      - my_network
  client-service:
    image: sahilhakimi/topshelf-frontend:latest
    networks:
      - my_network
  identity-service:
    image: sahilhakimi/topshelf-identity:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      AWS_ACCESS_KEY_ID: AKIATQAAJCDKIHEBIK5B
      AWS_SECRET_ACCESS_KEY: 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
      MONGO_URI: mongodb://root:rootpassword@mongodb_container:27017/Users?authSource=admin
      REDIS_PASSWORD: eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
      REDIS_HOST: cache
      REDIS_PORT: 6379
      clientUrl: http://client-service:5173
    depends_on:
      - mongodb_container
      - cache
    networks:
      - my_network
  product-service:
    image: sahilhakimi/topshelf-product:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      AWS_ACCESS_KEY_ID: AKIATQAAJCDKIHEBIK5B
      AWS_SECRET_ACCESS_KEY: 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
      MONGO_URI: mongodb://root:rootpassword@mongodb_container:27017/Products?authSource=admin
      KAFKA_HOST: kafka:9092
    depends_on:
      - mongodb_container
      - kafka
    networks:
      - my_network
  cart-service:
    image: sahilhakimi/topshelf-cart:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      REDIS_PASSWORD: eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
      REDIS_HOST: cache
      REDIS_PORT: 6379
      KAFKA_HOST: kafka:9092
    depends_on:
      - cache
      - kafka
    networks:
      - my_network
  mongodb_container:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: rootpassword
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
    networks:
      - my_network
  cache:
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning --requirepass eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
    volumes:
      - cache:/data
    networks:
      - my_network
  zookeeper:
    image: zookeeper:latest
    networks:
      - my_network
  kafka:
    image: wurstmeister/kafka:latest
    networks:
      - my_network
    ports:
      - 29092:29092
    environment:
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CREATE_TOPICS: "addToCart:1:1,invalidCartProduct:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_HEAP_OPTS: "-Xmx256M -Xms128M"
      KAFKA_JVM_PERFORMANCE_OPTS: " -Xmx256m -Xms256m"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
volumes:
  mongodb_data_container:
  cache:
    driver: local
The request that is giving me trouble is in the cart service and currently looks like this:
const response = await axios.post(
  "http://product-service:3001/api/product/checkout",
  {
    cart: JSON.parse(cart),
    token: req.body.token,
  },
  {
    headers: {
      "Content-Type": "application/json",
      Cookie: `accessToken=${cookies.accessToken}; refreshToken=${cookies.refreshToken}`,
    },
  }
);
and here is my nginx configuration as well:
upstream client {
    server client-service;
}
upstream identity {
    server identity-service:3000;
}
upstream product {
    server product-service:3001;
}
upstream cart {
    server cart-service:3002;
}
server {
    listen 80;

    location / {
        client_max_body_size 100M;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://client;
    }
    location /api/user/ {
        client_max_body_size 100M;
        proxy_pass http://identity;
    }
    location /api/product/ {
        client_max_body_size 100M;
        proxy_pass http://product;
    }
    location /api/cart/ {
        client_max_body_size 100M;
        proxy_pass http://cart;
    }
}
The error log in my Docker container is:
cause: Error: connect ECONNREFUSED 127.0.0.1:3001
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 3001
}
}
I tried the above and expected my request to reach the product service without a connection refused error. I know that the request is initially being proxied to the cart service by nginx as intended.
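For what it's worth, connect ECONNREFUSED 127.0.0.1:3001 means the call went to localhost inside the cart container, not to product-service over my_network (nginx plays no role in container-to-container calls). That usually means the image that is actually running still points at localhost, or an environment fallback kicked in. One option is to pass the target from the environment; a rough sketch, where PRODUCT_SERVICE_URL is a hypothetical variable name that the cart code would have to read instead of a hard-coded address:
  cart-service:
    image: sahilhakimi/topshelf-cart:latest
    environment:
      # Hypothetical variable: the cart service would need to read this
      # (process.env.PRODUCT_SERVICE_URL) instead of a hard-coded URL.
      PRODUCT_SERVICE_URL: http://product-service:3001
    networks:
      - my_network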

How to setup phpmyadmin docker container Vultr

I'm trying to get phpMyAdmin to work on my live server on Vultr. I have a full-stack React app for the front end, an Express/Node.js server for the back end, MySQL as the database, and phpMyAdmin to create tables and the like. Both the React app and the Express server work, but phpMyAdmin doesn't.
Below is my docker-compose file:
version: '3.7'
services:
  mysql_db:
    image: mysql
    container_name: mysql_container
    restart: always
    cap_add:
      - SYS_NICE
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_HOST: "${MYSQL_HOST}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
      MYSQL_USER: "${MYSQL_USER}"
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
    networks:
      - react_network
  phpmyadmin:
    depends_on:
      - mysql_db
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_container
    restart: always
    ports:
      - "8080:80"
    env_file:
      - .env
    environment:
      - PMA_HOST=mysql_db
      - PMA_PORT=3306
      - PMA_ABSOLUTE_URI=https://my-site.com/admin
      - PMA_ARBITRARY=1
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      - react_network
  api:
    restart: always
    image: mycustomimage
    ports:
      - "3001:80"
    container_name: server_container
    env_file:
      - .env
    depends_on:
      - mysql_db
    environment:
      MYSQL_HOST_IP: mysql_db
    networks:
      - react_network
  client:
    image: mycustomimage
    ports:
      - "3000:80"
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    container_name: client_container
    networks:
      - react_network
  nginx:
    depends_on:
      - api
      - client
    build: ./nginx
    container_name: nginx_container
    restart: always
    ports:
      - "443:443"
      - "80"
    volumes:
      - ./nginx/conf/certificate.crt:/etc/ssl/certificate.crt:ro
      - ./nginx/certs/private.key:/etc/ssl/private.key:ro
      - ./nginx/html:/usr/share/nginx/html
    networks:
      - react_network
volumes:
  data:
  conf:
  certs:
  webconf:
  html:
networks:
  react_network:
Below is my nginx configuration file:
upstream client {
    server client:3000;
}
upstream api {
    server api:3001;
}
server {
    listen 443 ssl http2;
    server_name my-site.com;

    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;

    location / {
        proxy_pass http://client;
    }
    location /admin {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://phpmyadmin:8080;
    }
    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
server {
    listen 80;
    server_name my-site.com www.my-site.com;
    return 301 https://my-site.com$request_uri;
}
I honestly don't know what I'm missing here: I get a 502 Bad Gateway error. If anyone can help me, please do!
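One hedged observation on the 502: inside react_network the phpmyadmin container listens on its own port 80; the 8080 in "8080:80" only exists as a published port on the host, so nginx cannot reach phpmyadmin:8080 from inside the network. A sketch of the /admin block targeting the container port (this only addresses the upstream connection; how phpMyAdmin handles the /admin prefix is a separate question, since PMA_ABSOLUTE_URI only affects generated links):
location /admin {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    # Container-to-container traffic uses the container port (80),
    # not the "8080:80" host mapping from docker-compose.
    proxy_pass http://phpmyadmin:80;
}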

How to solve nginx 53 access forbidden by rule error?

I am running docker-compose on an Ubuntu droplet on DigitalOcean. I have a registered domain, but I am unable to reach my phpMyAdmin.
My nginx config for phpMyAdmin:
location /nothinhere {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    rewrite ^/nothinhere(/.*)$ $1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://DOCKERIP:8183/;
}
My docker-compose.yml:
services:
  mysql:
    build: .
    image: ghcr.io/userName/mysql:1
    command:
      - "--default-authentication-plugin=mysql_native_password"
    container_name: dbcontainer
    cap_add:
      - SYS_NICE
    ports:
      - 3307:3306
    restart: always
    networks:
      - mynetwork
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 1s
      retries: 120
  phpmyadmin:
    build: .
    image: ghcr.io/userName/php:1
    container_name: dev_pma
    networks:
      - mynetwork
    environment:
      PMA_HOST: dbcontainer
      PMA_PORT: 3307
      PMA_ARBITRARY: 1
      PMA_ABSOLUTE_URI: https://www.example.com/nothinhere
    restart: always
    ports:
      - 8183:80
  server:
    container_name: server
    build: .
    image: ghcr.io/userName/server:1
    networks:
      - mynetwork
    ports:
      - 4000:4000
  client:
    build: .
    image: ghcr.io/userName/client:1
    container_name: FE
    networks:
      - mynetwork
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
    tty: true
volumes:
  db_data:
networks:
  mynetwork:
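One port detail stands out in this compose file: on mynetwork the dbcontainer still listens on 3306; the 3307 side of "3307:3306" only exists on the host. A sketch of just the phpmyadmin service with the container port, everything else unchanged (this concerns the database connection, not the nginx errors shown below):
  phpmyadmin:
    build: .
    image: ghcr.io/userName/php:1
    container_name: dev_pma
    networks:
      - mynetwork
    environment:
      PMA_HOST: dbcontainer
      # MySQL's container port; 3307 is only the published host port.
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      PMA_ABSOLUTE_URI: https://www.example.com/nothinhere
    restart: always
    ports:
      - 8183:80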
The image for phpMyAdmin is phpmyadmin/phpmyadmin. All my containers are running on the server, but I am unable to get to the admin site.
(Screenshots of the running containers and the phpMyAdmin container omitted.)
nginx logs:
2022/04/22 20:55:00 [error] 783603#783603: send() failed (111: Connection refused) while resolving, resolver: 172.17.0.1:53
2022/04/22 20:55:25 [error] 783603#783603: *1 dev_pma could not be resolved (110: Operation timed out)
2022/04/22 21:07:33 [error] 795471#795471: *5 access forbidden by rule,
2022/04/22 21:07:52 [alert] 795471#795471: *2 open socket #11 left in connection 4
2022/04/22 21:07:52 [alert] 795471#795471: aborting
2022/04/22 21:14:40 [error] 796363#796363: *53 access forbidden by rule
I don't have any deny/allow rules in my config.
EDIT:
I changed my nginx config to:
location /nothinhere {
    proxy_pass http://DOCKERIP:8183/;
}
and now I'm getting an error:1408F10B:SSL error, which might indicate that my SSL setup is blocking something.
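Two hedged notes on those errors: the resolver lines suggest nginx on the droplet is trying to resolve a container name (dev_pma) through a DNS server at 172.17.0.1:53 that is not answering, and container names are only resolvable from inside Docker's networks, not from the host. error:1408F10B is typically OpenSSL's "wrong version number", i.e. a TLS handshake sent to something that speaks plain HTTP. Since the compose file publishes phpmyadmin on host port 8183, a host-level nginx can simply target the loopback address over plain HTTP; a rough sketch, assuming nginx runs on the droplet itself rather than in a container:
location /nothinhere {
    rewrite ^/nothinhere(/.*)$ $1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    # 8183 is published to the host by "8183:80", so the host's own nginx
    # can reach it on loopback over plain HTTP (no TLS on this hop).
    proxy_pass http://127.0.0.1:8183/;
}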

Communication between services in docker containers with traefik

I'm trying to dockerize my microservices, and I'm quite new to Traefik.
I have multiple microservices that connect to each other across projects, and I made a common project to handle the shared containers.
# docker/docker-compose.yml
version: '3'
services:
  mysql:
    container_name: common-mysql
    build:
      context: ./mysql
    restart: on-failure
    networks:
      - web
    environment:
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
  mongo:
    container_name: common-mongo
    build:
      context: ./mongo
    restart: on-failure
    networks:
      - web
    ports:
      - "27017:27017"
  memcached:
    container_name: common-memcached
    build:
      context: ./memcached
    restart: on-failure
  rabbitmq:
    container_name: common-rabbitmq
    build:
      context: ./rabbitmq
    restart: on-failure
    networks:
      - web
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
  elasticsearch:
    container_name: common-elasticsearch
    build:
      context: ./elasticsearch
    restart: on-failure
    networks:
      - web
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  traefik:
    container_name: "traefik"
    build:
      context: ./traefik
    restart: on-failure
    networks:
      - web
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./traefik/config/static.yml:/etc/traefik/traefik.yml"
      - "./traefik/config/dynamic.yml:/etc/traefik/dynamic.yml"
      - "./traefik/certs:/etc/certs"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.web.local`)"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.routers.api.service=api@internal"
  blackfire:
    container_name: common-blackfire
    image: blackfire/blackfire
    networks:
      - web
    restart: always
    ports: [ "8707" ]
    environment:
      BLACKFIRE_SERVER_ID: ${BLACKFIRE_SERVER_ID}
      BLACKFIRE_SERVER_TOKEN: ${BLACKFIRE_SERVER_TOKEN}
networks:
  web:
    external: true
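One aside on the Traefik labels above: the usual pattern from the Traefik docs attaches the built-in api@internal service to the same router that carries the Host rule, whereas here the "traefik" router has a rule but no service and the "api" router has a service but no rule. A sketch of the label block under that assumption (with --api.insecure=true the dashboard also stays reachable on port 8080 either way):
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.web.local`)"
      - "traefik.http.routers.traefik.tls=true"
      # Attach Traefik's built-in API/dashboard service to this router.
      - "traefik.http.routers.traefik.service=api@internal"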
Then I created docker-compose files and nginx configs for the microservices.
For project 1:
version: "3.1"
services:
  app1-nginx:
    container_name: app1-nginx
    build: docker/nginx
    working_dir: /application
    restart: on-failure
    networks:
      - web
    volumes:
      - ./var/nginx_logs:/var/log/nginx
      - ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - "./docker/certs:/etc/certs"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1-nginx.rule=Host(`app1.web.local`)"
      - "traefik.http.routers.app1-nginx.tls=true"
    depends_on:
      - app1-fpm
  app1-fpm:
    container_name: app1-fpm
    build: docker/php-fpm
    working_dir: /application
    restart: on-failure
    networks:
      - web
    volumes:
      - .:/application:cached
      - ./var/log:/application/var/log
      - ./docker/php-fpm/php-ini-overrides.ini:/usr/local/etc/php/conf.d/php-overrides.ini
      - ./docker/php-fpm/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ~/.ssh/:/root/.ssh/
    labels:
      - traefik.enable=false
    environment:
      BLACKFIRE_CLIENT_ID: ${BLACKFIRE_CLIENT_ID}
      BLACKFIRE_CLIENT_TOKEN: ${BLACKFIRE_CLIENT_TOKEN}
networks:
  web:
    external: true

server {
    server_name app1.web.local;
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/certs/local-cert.pem;
    ssl_certificate_key /etc/certs/local-key.pem;

    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass app1-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
For project 2:
version: "3.1"
services:
  app2-nginx:
    container_name: app2-nginx
    build: docker/nginx
    working_dir: /application
    restart: on-failure
    networks:
      - web
    volumes:
      - ./var/nginx_logs:/var/log/nginx
      - ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - "./docker/certs:/etc/certs"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2-nginx.rule=Host(`app2.web.local`)"
      - "traefik.http.routers.app2-nginx.tls=true"
    depends_on:
      - app2-fpm
  app2-fpm:
    container_name: app2-fpm
    build: docker/php-fpm
    working_dir: /application
    restart: on-failure
    networks:
      - web
    volumes:
      - .:/application:cached
      - ./var/log:/application/var/log
      - ./docker/php-fpm/php-ini-overrides.ini:/usr/local/etc/php/conf.d/php-overrides.ini
      - ./docker/php-fpm/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ~/.ssh/:/root/.ssh/
    labels:
      - traefik.enable=false
    environment:
      BLACKFIRE_CLIENT_ID: ${BLACKFIRE_CLIENT_ID}
      BLACKFIRE_CLIENT_TOKEN: ${BLACKFIRE_CLIENT_TOKEN}
networks:
  web:
    external: true

server {
    server_name app2.web.local;
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/certs/local-cert.pem;
    ssl_certificate_key /etc/certs/local-key.pem;

    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass app2-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
When I docker compose up both of them, the services work great. But my second project depends on the first one and sends requests to it. When I send a request from the second project to the first as 'https://app1.web.local', it fails with a connection refused on port 443. If I change the URL to the container IP it works, and it also works if I add the hostname with the IP inside the container, but the IP changes every time the container comes up. How can I solve this? Thanks.
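A hedged sketch of one common fix: the name app1.web.local only exists on the host (local DNS or /etc/hosts), so inside the app2 containers it either does not resolve or points at something with nothing listening on 443. Giving the traefik container network aliases for the public hostnames on the shared web network lets other containers resolve those names to Traefik, which then routes as usual. The alias list below is an assumption about which hostnames are needed, and the requesting container still has to trust the local certificate:
  traefik:
    container_name: "traefik"
    build:
      context: ./traefik
    restart: on-failure
    networks:
      web:
        aliases:
          # Containers on the "web" network resolve these names to Traefik.
          - app1.web.local
          - app2.web.local
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"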

wordpress docker image and nginx reverse proxy

I'm trying to use docker-compose to create a dynamic and fast development environment, and I want to use nginx to route all the services. This is my configuration:
docker-compose.yml
version: '3.1'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
  wordpress:
    image: wordpress
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./wordpress:/var/www/html
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - ./db:/var/lib/mysql
nginx conf.d
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://wordpress:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
But it doesn't work; it always tries to redirect from http://localhost to http://localhost:8080. What should I do?
Here are the main issues to address in your sample code:
- Both the nginx and wordpress Docker images listen on port 80 by default, so you should map wordpress to a different host port, for example 8080.
- The images will not be able to see each other unless you set up a network for them.
- Update the nginx configuration to remove the port for wordpress. Being on the same network, the containers see each other by their host names alone (here the service names).
- Change the way the volumes used by the wordpress and mysql images are declared.
So this is what I suggest:
docker-compose
version: '3.1'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
    networks:
      - backend
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html
    networks:
      - backend
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
volumes:
  wordpress:
  db:
networks:
  backend:
    driver: bridge
nginx.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://wordpress/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
You can check more details about networking in Docker Compose in the documentation.
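One addition that may be worth trying if the redirect to :8080 persists after these changes: WordPress builds its redirects from the site URL it has stored, not from the proxy, so pinning the URLs from the environment keeps it on the front-end address. A sketch using WORDPRESS_CONFIG_EXTRA, which the official wordpress image appends to wp-config.php (the localhost values are an assumption for this local setup):
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
      # Appended verbatim to wp-config.php by the official image entrypoint.
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_HOME', 'http://localhost');
        define('WP_SITEURL', 'http://localhost');
    volumes:
      - wordpress:/var/www/html
    networks:
      - backend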
