Connection refused between two Docker Express microservices

I am trying to deploy a full-stack application with Docker containers. I currently have several Express microservice images and a React/nginx image, all proxied behind nginx. When React makes a call to one of my microservices, that service has to make a synchronous request to another microservice, but I am getting connection refused and I'm not sure why. I have tried both relative and absolute URLs to no avail.
Here is my docker compose:
networks:
  my_network:
    driver: bridge

services:
  nginx-service:
    image: sahilhakimi/topshelf-nginx:latest
    ports:
      - 80:80
    depends_on:
      - identity-service
      - client-service
      - product-service
      - cart-service
    networks:
      - my_network
  client-service:
    image: sahilhakimi/topshelf-frontend:latest
    networks:
      - my_network
  identity-service:
    image: sahilhakimi/topshelf-identity:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      AWS_ACCESS_KEY_ID: AKIATQAAJCDKIHEBIK5B
      AWS_SECRET_ACCESS_KEY: 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
      MONGO_URI: mongodb://root:rootpassword#mongodb_container:27017/Users?authSource=admin
      REDIS_PASSWORD: eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
      REDIS_HOST: cache
      REDIS_PORT: 6379
      clientUrl: http://client-service:5173
    depends_on:
      - mongodb_container
      - cache
    networks:
      - my_network
  product-service:
    image: sahilhakimi/topshelf-product:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      AWS_ACCESS_KEY_ID: AKIATQAAJCDKIHEBIK5B
      AWS_SECRET_ACCESS_KEY: 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
      MONGO_URI: mongodb://root:rootpassword#mongodb_container:27017/Products?authSource=admin
      KAFKA_HOST: kafka:9092
    depends_on:
      - mongodb_container
      - kafka
    networks:
      - my_network
  cart-service:
    image: sahilhakimi/topshelf-cart:latest
    environment:
      TOKEN_KEY: 7e44d0deaa035296b0cb10b3ea46ac627707e51c277908b609c0739d1adafc22e7e543faf9d43028ae64cccdf506e7994a1b0075880aae61a6cdb87d087bc082759a12c398d7e3291fd54539f1b260c7a5f6c79e2c4d586a180ed42b93866583c8da85acf261939d2171ef4b18765a205ee35f8cd1b989a8473589642176f6d1
      REFRESH_TOKEN_KEY: b7fb9c5209f55dc4b50d0faef5b4a0b3bdaa81db1e813779ef6c064de0a934e0c2e027fe5e2a9ab7b6d6c06a5d7099ccd837f7698ba9b5a8bee5e1acd86823c0eb29945d4965fe0db59d95c2defb60dbebb1b034dd22132115f14a50e175bd44bc919a91b7b6731e25908fc3b8800d06e90c6b1288b42c8b2378c2a619931e33
      REDIS_PASSWORD: eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
      REDIS_HOST: cache
      REDIS_PORT: 6379
      KAFKA_HOST: kafka:9092
    depends_on:
      - cache
      - kafka
    networks:
      - my_network
  mongodb_container:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: rootpassword
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
    networks:
      - my_network
  cache:
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning --requirepass eYVX7EwVmmxKPCDmwMtyKVge8oLd2t81
    volumes:
      - cache:/data
    networks:
      - my_network
  zookeeper:
    image: zookeeper:latest
    networks:
      - my_network
  kafka:
    image: wurstmeister/kafka:latest
    networks:
      - my_network
    ports:
      - 29092:29092
    environment:
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CREATE_TOPICS: "addToCart:1:1,invalidCartProduct:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_HEAP_OPTS: "-Xmx256M -Xms128M"
      KAFKA_JVM_PERFORMANCE_OPTS: " -Xmx256m -Xms256m"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper

volumes:
  mongodb_data_container:
  cache:
    driver: local
The request that is giving me trouble is in the cart service and currently looks like this:
const response = await axios.post(
  "http://product-service:3001/api/product/checkout",
  {
    cart: JSON.parse(cart),
    token: req.body.token,
  },
  {
    headers: {
      "Content-Type": "application/json",
      Cookie: `accessToken=${cookies.accessToken}; refreshToken=${cookies.refreshToken}`,
    },
  }
);
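For this call to succeed, something has to be listening on port 3001 on all interfaces inside the product-service container. The product service's own code isn't shown in the question, so the following is only a sketch of a target this call could reach; the route path comes from the URL above, and everything else (handler body, log message) is an assumption:

// Minimal sketch of the call's target, NOT the actual product-service code
// (which isn't shown in the question). Express listens on all interfaces by
// default, so "product-service:3001" is reachable from other containers on the
// same Docker network as long as nothing pins the listener to 127.0.0.1.
const express = require("express");

const app = express();
app.use(express.json());

app.post("/api/product/checkout", (req, res) => {
  // Placeholder handler; the real service would validate the cart here.
  res.json({ received: req.body.cart?.length ?? 0 });
});

// Binding explicitly to 0.0.0.0 (rather than "localhost") keeps the port
// reachable from other containers.
app.listen(3001, "0.0.0.0", () => {
  console.log("product-service listening on 3001");
});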
Here is my nginx configuration as well:
upstream client {
    server client-service;
}
upstream identity {
    server identity-service:3000;
}
upstream product {
    server product-service:3001;
}
upstream cart {
    server cart-service:3002;
}

server {
    listen 80;

    location / {
        client_max_body_size 100M;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://client;
    }

    location /api/user/ {
        client_max_body_size 100M;
        proxy_pass http://identity;
    }

    location /api/product/ {
        client_max_body_size 100M;
        proxy_pass http://product;
    }

    location /api/cart/ {
        client_max_body_size 100M;
        proxy_pass http://cart;
    }
}
The error log in my Docker container is:
cause: Error: connect ECONNREFUSED 127.0.0.1:3001
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 3001
}
}
I tried the above and expected the request to reach the product service without a connection refusal. I know that the request is initially being proxied to the cart service by nginx as intended.
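One detail worth noting: the error reports 127.0.0.1:3001 rather than the product-service container's IP, which usually means the request that actually failed was aimed at localhost instead of the service name (for example, an older image build or an environment-driven base URL falling back to localhost). A quick way to separate DNS problems from listener problems is a small check run from inside the cart container. This is only a sketch, not part of the original services, and it assumes Node is available in the container (which it is for an Express service):

// connectivity-check.js - a diagnostic sketch, not part of the original stack.
// Could be run inside the cart container, e.g.:
//   docker compose exec cart-service node connectivity-check.js
// It first resolves the compose service name on the Docker network, then opens
// a TCP connection to port 3001, so "name does not resolve" and "nothing is
// listening" show up as different errors.
const dns = require("dns").promises;
const net = require("net");

const HOST = "product-service"; // compose service name from the file above
const PORT = 3001;

async function main() {
  const { address } = await dns.lookup(HOST);
  console.log(`${HOST} resolves to ${address}`);

  await new Promise((resolve, reject) => {
    const socket = net.connect({ host: HOST, port: PORT }, () => {
      console.log(`TCP connection to ${HOST}:${PORT} succeeded`);
      socket.end();
      resolve();
    });
    socket.on("error", reject);
  });
}

main().catch((err) => {
  console.error("check failed:", err.message);
  process.exit(1);
});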

Related

How to set up a phpMyAdmin Docker container on Vultr

I'm trying to get phpMyAdmin to work on my live server on Vultr. I have a full-stack React app for the front-end and Express/Node.js for the back-end, as well as MySQL for the database and phpMyAdmin to create tables and such. Both the React app and the Express server work, but phpMyAdmin doesn't.
Below is my docker-compose file:
version: '3.7'
services:
  mysql_db:
    image: mysql
    container_name: mysql_container
    restart: always
    cap_add:
      - SYS_NICE
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_HOST: "${MYSQL_HOST}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
      MYSQL_USER: "${MYSQL_USER}"
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
    networks:
      - react_network
  phpmyadmin:
    depends_on:
      - mysql_db
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_container
    restart: always
    ports:
      - "8080:80"
    env_file:
      - .env
    environment:
      - PMA_HOST=mysql_db
      - PMA_PORT=3306
      - PMA_ABSOLUTE_URI=https://my-site.com/admin
      - PMA_ARBITRARY=1
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      - react_network
  api:
    restart: always
    image: mycustomimage
    ports:
      - "3001:80"
    container_name: server_container
    env_file:
      - .env
    depends_on:
      - mysql_db
    environment:
      MYSQL_HOST_IP: mysql_db
    networks:
      - react_network
  client:
    image: mycustomimage
    ports:
      - "3000:80"
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    container_name: client_container
    networks:
      - react_network
  nginx:
    depends_on:
      - api
      - client
    build: ./nginx
    container_name: nginx_container
    restart: always
    ports:
      - "443:443"
      - "80"
    volumes:
      - ./nginx/conf/certificate.crt:/etc/ssl/certificate.crt:ro
      - ./nginx/certs/private.key:/etc/ssl/private.key:ro
      - ./nginx/html:/usr/share/nginx/html
    networks:
      - react_network

volumes:
  data:
  conf:
  certs:
  webconf:
  html:

networks:
  react_network:
Below is my nginx configuration file:
upstream client {
    server client:3000;
}
upstream api {
    server api:3001;
}

server {
    listen 443 ssl http2;
    server_name my-site.com;

    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;

    location / {
        proxy_pass http://client;
    }

    location /admin {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://phpmyadmin:8080;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}

server {
    listen 80;
    server_name my-site.com www.my-site.com;
    return 301 https://my-site.com$request_uri;
}
I honestly don't know what I'm missing here! If anyone can help me please!!
I get a 502 Bad Gateway error!

How to solve nginx 53 access forbidden by rule error?

I am running a docker-compose setup on an Ubuntu droplet from DigitalOcean.
I have a registered domain but I am unable to reach my phpMyAdmin.
My nginx config for phpMyAdmin:
location /nothinhere {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    rewrite ^/nothinhere(/.*)$ $1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://DOCKERIP:8183/;
}
My docker-compose.yml:
services:
  mysql:
    build: .
    image: ghcr.io/userName/mysql:1
    command:
      - "--default-authentication-plugin=mysql_native_password"
    container_name: dbcontainer
    cap_add:
      - SYS_NICE
    ports:
      - 3307:3306
    restart: always
    networks:
      - mynetwork
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 1s
      retries: 120
  phpmyadmin:
    build: .
    image: ghcr.io/userName/php:1
    container_name: dev_pma
    networks:
      - mynetwork
    environment:
      PMA_HOST: dbcontainer
      PMA_PORT: 3307
      PMA_ARBITRARY: 1
      PMA_ABSOLUTE_URI: https://www.example.com/nothinhere
    restart: always
    ports:
      - 8183:80
  server:
    container_name: server
    build: .
    image: ghcr.io/userName/server:1
    networks:
      - mynetwork
    ports:
      - 4000:4000
  client:
    build: .
    image: ghcr.io/userName/client:1
    container_name: FE
    networks:
      - mynetwork
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
    tty: true

volumes:
  db_data:

networks:
  mynetwork:
The image for phpMyAdmin is phpmyadmin/phpmyadmin.
All my containers are running on the server but I am unable to get to the admin site.
nginx logs:
2022/04/22 20:55:00 [error] 783603#783603: send() failed (111: Connection refused) while resolving, resolver: 172.17.0.1:53
2022/04/22 20:55:25 [error] 783603#783603: *1 dev_pma could not be resolved (110: Operation timed out)
2022/04/22 21:07:33 [error] 795471#795471: *5 access forbidden by rule,
2022/04/22 21:07:52 [alert] 795471#795471: *2 open socket #11 left in connection 4
2022/04/22 21:07:52 [alert] 795471#795471: aborting
2022/04/22 21:14:40 [error] 796363#796363: *53 access forbidden by rule
I don't have any deny/allow rules in my config.
EDIT:
I changed my nginx config to:
location /nothinhere {
    proxy_pass http://DOCKERIP:8183/;
}
and now I'm getting an error:1408F10B:SSL error, which might indicate that my SSL setup is blocking something.

Nextcloud in Docker with Caddy proxy

I’m trying to install Nextcloud on my server with Docker using a Caddy reverse proxy. Caddy is working for other services so I will just copy the Caddyfile here.
There are 3 ways I tried accessing it on the Docker host machine:
- localhost:8080 - working
- IP of host machine - it says it is not a trusted domain
- domain - 502 Bad Gateway
Please help; I've already tried multiple configurations but cannot get it working.
Caddyfile:
{domain} {
    tls {email}
    tls {
        dns godaddy
    }

    # Enable basic compression
    gzip

    # Service discovery via well-known
    redir /.well-known/carddav /remote.php/carddav 301
    redir /.well-known/caldav /remote.php/caldav 301

    proxy / http://nextcloud:8080 {
        # X-Forwarded-For, etc...
        transparent

        # Nextcloud best practices and security
        header_downstream Strict-Transport-Security "max-age=15552000;"
        header_downstream Referrer-Policy "strict-origin-when-cross-origin"
        header_downstream X-XSS-Protection "1; mode=block"
        header_downstream X-Content-Type-Options "nosniff"
        header_downstream X-Frame-Options "SAMEORIGIN"
    }
}
docker-compose file:
version: '3.7'
services:
  db:
    container_name: nextcloud-db
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - ./nextcloud/config/db.env
    environment:
      - MYSQL_ROOT_PASSWORD={pw}
    networks:
      - db
  app:
    container_name: nextcloud
    image: nextcloud
    ports:
      - 8080:80
    volumes:
      - nextcloud:/var/www/html
    env_file:
      - ./nextcloud/config/db.env
    environment:
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS="localhost {host ip} {domain}"
    restart: always
    networks:
      - proxy
      - db
    depends_on:
      - db

volumes:
  db:
  nextcloud:

networks:
  db:
Figured it out.
In the Caddyfile, the Nextcloud port should be 80 instead of 8080, since Caddy reaches the container over the inner Docker network.
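The same rule applies to anything else that talks to Nextcloud from inside the Docker network: use the service name with the container port (80), and keep the published port (8080) for access from the host. As a rough illustration only (the service name and ports come from the compose file above; /status.php is just a convenient Nextcloud endpoint, and whether each request succeeds depends on where the script runs), a small Node check would target different URLs depending on its vantage point:

// Sketch only: container port vs. published port for the compose file above.
// From another container on the same Docker network, use the service name and
// the *container* port (80); the published port (8080) only exists on the host.
const http = require("http");

function check(label, url) {
  http
    .get(url, (res) => console.log(`${label}: HTTP ${res.statusCode}`))
    .on("error", (err) => console.log(`${label}: ${err.message}`));
}

// Inside the Docker network (e.g. run from a container attached to "proxy"):
check("in-network", "http://nextcloud:80/status.php");

// From the host machine, via the published port mapping 8080:80:
check("from-host", "http://localhost:8080/status.php");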

Nginx-proxy doesn't forward to container exposing port 3001 and rewrites URL to static IP

I have a web application running on Ruby on Rails with SOLR in docker-compose. It exposes port 3001, and I want to use a subdomain URL that my university owns (I have access to a configuration panel where I can only specify the "target", which I guess is the IP of the local server the web application is running on).
I first tried to do this redirection without nginx, but the URL data.chembiosys.de was just redirected to http://static.ip:3001
The app is running though, and is accessible.
So I wanted to try to use nginx as a reverse proxy, but the effect is basically the same:
- I need to specify the port number and the IP of my server in the configuration panel of the domain name of interest
- when I type "data.chembiosys.de" in the browser, it shows the IP and the port number
What I do is that I first create a nginx-proxy network:
sudo docker network create nginx-proxy
Then I start nginx-proxy with docker-compose.yml:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /home/myhome/Projects/nginx-proxy/conf/my_conf.conf:/etc/nginx/conf.d/my_proxy.conf:ro
whoami:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=whoami.local
networks:
default:
external:
name: nginx-proxy
Via the second volume, I mount the following config file into the nginx-proxy container:
server {
    listen 80;
    server_name http://mystaticip:3001;
    client_max_body_size 2G;
    return 301 http://data.chembiosys.de$request_uri;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host data.chembiosys.de;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mystaticip:3001;
    }
}
And finally, I run the Rails app's docker-compose.yml:
version: '3'
services:
  db:
    image: mysql:5.7
    container_name: seek-mysql_cbs
    restart: always
    env_file:
      - docker/db.env
    volumes:
      - seek-mysql-db_cbs:/var/lib/mysql
  seek: # The SEEK application
    #build: .
    image: fairdom/seek:1.7
    container_name: seek_cbs
    command: docker/entrypoint.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
      NO_ENTRYPOINT_WORKERS: 1
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    ports:
      - "3001:3000"
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  seek_workers: # The SEEK delayed job workers
    #build: .
    image: fairdom/seek:1.7
    container_name: seek-workers_cbs
    command: docker/start_workers.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  solr:
    image: fairdom/seek-solr
    container_name: seek-solr_cbs
    volumes:
      - seek-solr-data_cbs:/opt/solr/server/solr/seek/data
    restart: always

volumes:
  seek-filestore_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-filestore_cbs
  seek-mysql-db_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-mysql-db_cbs
  seek-solr-data_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-solr-data_cbs
  seek-cache_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-cache_cbs

networks:
  default:
    external:
      name: nginx-proxy
I have the feeling that nginx-proxy is simply failing to connect the URL to the app. What am I doing wrong, and how do I connect the app to the URL with nginx? Also, how can I avoid the rewrite of the URL to IP:port?
P.S. The static IP I got from the sysadmins is alphanumerical (a hostname), and I see the following warning when the nginx-proxy docker-compose runs:
nginx-proxy | [warn] 30#30: server name "http://pc08.ian.uni-jena.de:3001" has suspicious symbols in /etc/nginx/conf.d/my_proxy.conf:3

Error 502 accessing nextcloud via docker with nginx

Heyo!
Update: I figured it out and added my answer.
I'm currently in the process of learning Docker and I've written a docker-compose file that should launch nginx, Gitea, and Nextcloud and route them all by domain name through a reverse proxy.
All is going well except for Nextcloud. I can access it via localhost:3001 but not via the nginx reverse proxy. Gitea is fine; it works both ways.
The error I'm getting is:
nginx_proxy | 2018/08/10 00:17:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: cloud.example.ca, request: "GET / HTTP/1.1", upstream: "http://172.19.0.4:3001/", host: "cloud.example.ca"
docker-compose.yml:
version: '3.1'
services:
  nginx:
    container_name: nginx_proxy
    image: nginx:latest
    restart: always
    volumes:
      # Here I'm swapping out my default.conf for the container's by mounting my directory over theirs.
      - ./nginx-conf:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    networks:
      - proxy
  nextcloud_db:
    container_name: nextcloud_db
    image: mariadb:latest
    restart: always
    volumes:
      - nextcloud_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: devcloud
      MYSQL_USER: devcloud
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  gitea_db:
    container_name: gitea_db
    image: mariadb:latest
    restart: always
    volumes:
      - gitea_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: gitea
      MYSQL_USER: gitea
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    ports:
      - 3001:80
    volumes:
      - nextcloud:/var/www/html
    restart: always
    networks:
      - proxy
      - database
  gitea:
    container_name: gitea
    image: gitea/gitea:latest
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    volumes:
      - gitea:/data
    ports:
      - 3000:3000
      - 22:22
    networks:
      - proxy
      - database

volumes:
  nextcloud:
  nextcloud_db:
  gitea:
  gitea_db:

networks:
  proxy:
  database:

secrets:
  cloud_db_pass:
    file: cloud_db_pass.txt
  cloud_db_root:
    file: cloud_db_root.txt
My default.conf, which gets mounted into /etc/nginx/conf.d/default.conf:
upstream nextcloud {
    server nextcloud:3001;
}
upstream gitea {
    server gitea:3000;
}

server {
    listen 80;
    listen [::]:80;
    server_name cloud.example.ca;

    location / {
        proxy_pass http://nextcloud;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name git.example.ca;

    location / {
        proxy_pass http://gitea;
    }
}
I of course have my hosts file set up to route the domains to localhost. I've done a bit of googling but nothing I've found so far seems to align with what I'm running into. Thanks in advance!
Long story short, one does not simply reverse proxy to port 80 with nextcloud. It's just not allowed. I have it deployed and working great with a certificate over 443! :)
