Nginx Reverse Proxy Problem: Using Docker-compose and Rundeck

I'm setting up my Rundeck application in a Docker container and using nginx as a reverse proxy. I presume my problem originates from how the proxied responses are routed back to the server.
When I access the desired URL (https://vmName.Domain.corp/rundeck) I can see the login page, although it loads without any styling or UI assets. Once I enter the default admin:admin credentials I am directed to a 404 page. I've pasted below one of the error logs from docker-compose logs; you'll notice nginx is looking under /etc/nginx for one of Rundeck's assets.
I can't determine whether the problem is in my docker-compose file or in my nginx config.
Example of error log:
production_nginx | 2021-02-04T08:17:50.770544192Z 2021/02/04 08:17:50 [error] 29#29: *8 open() "/etc/nginx/html/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js" failed (2: No such file or directory), client: 10.243.5.116, server: vmName.Domain.corp, request: "GET /assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js HTTP/1.1", host: "vmName.Domain.corp", referrer: "https://vmName.Domain.corp/rundeck/user/login"
If curious, I can access that same asset if I go to https://vmName.Domain.corp/rundeck/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js
Here's more information on my set-up
/nginx/sites-enabled/docker-compose.yml (main machine)
rundeck:
  image: ${RUNDECK_IMAGE:-jordan/rundeck:latest}
  container_name: production_rundeck
  ports:
    - 4440:4440
  environment:
    RUNDECK_GRAILS_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_GRAILS_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_SERVER_FORWARDED: "true"
    RDECK_JVM_SETTINGS: "-Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dfile.encoding=UTF-8 -Drundeck.jetty.connector.forwarded=true -Dserver.contextPath=/rundeck -Dserver.https.port:4440"
    #RUNDECK_SERVER_CONTEXTPATH: "https://vmName.Domain.corp/rundeck"
    RUNDECK_MAIL_FROM: "rundeck@vmName.Domain.corp"
    EXTERNAL_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    SERVER_URL: "https://vmName.Domain.corp/rundeck"
  volumes:
    - /etc/rundeck:/etc/rundeck
    - /var/rundeck
    - /var/lib/mysql
    - /var/log/rundeck
    - /opt/rundeck-plugins
nginx:
  image: nginx:latest
  container_name: production_nginx
  links:
    - rundeck
  volumes:
    - /etc/nginx/sites-enabled:/etc/nginx/conf.d
  depends_on:
    - rundeck
  ports:
    - 80:80
    - 443:443
  restart: always
networks:
  default:
    external:
      name: vmName
nginx/sites-enabled/default.conf (main machine)
# Route all HTTP traffic through HTTPS
# ====================================
server {
    listen 80;
    server_name vmName;
    return 301 https://vmName$request_uri;
}

server {
    listen 443 ssl;
    server_name vmName;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
    return 301 https://vmName.Domain.corp$request_uri;
}

# ====================================
# Main webserver route configuration
# ====================================
server {
    listen 443 ssl;
    server_name vmName.Domain.corp;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;

    #===========================================================================#
    ## MAIN PAGE
    location /example-app {
        rewrite ^/example-app(.*) /$1 break;
        proxy_pass http://example-app:5000/;
        proxy_set_header Host $host/example-app;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # #Rundeck
    location /rundeck/ {
        # rewrite ^/rundeck(.*) /$1 break;
        proxy_pass http://rundeck:4440/;
        proxy_set_header Host $host/rundeck;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
[image container] /etc/rundeck/rundeck-config.properties:
# change hostname here
grails.serverURL=https://vmName.Domain.corp/rundeck
grails.mail.default.from = rundeck@vmName.Domain.corp
server.useForwardHeaders = true
[image container] /etc/rundeck/framework.properties:
framework.server.name = vmName.Domain.corp
framework.server.hostname = vmName.Domain.corp
framework.server.port = 443
framework.server.url = https://vmName.Domain.corp/rundeck

It seems related to a problem with the Rundeck image / Docker network. I put together a working example with the official image; take a look:
nginx.conf (located in the config folder; see the volumes section of the docker-compose file):
server {
    listen 80 default_server;
    server_name rundeck-cl;

    location / {
        proxy_pass http://rundeck:4440;
    }
}
docker-compose:
version: "3.7"
services:
rundeck:
build:
context: .
args:
IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.9}
container_name: rundeck-nginx
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://localhost
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
Dockerfile:
ARG IMAGE
FROM ${IMAGE}
Build with docker-compose build and run with docker-compose up.
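Spelled out, and assuming you run them from the directory that contains the Dockerfile and docker-compose.yml, the commands are:
docker-compose build
docker-compose up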
rundeck-config.properties content:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
# Bind address and server URL
server.address=0.0.0.0
server.servlet.context-path=/
grails.serverURL=http://localhost
server.servlet.session.timeout=3600
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/server/data/grailsdb;MVCC=true
dataSource.username =
dataSource.password =
#Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.api.tokens.duration.max=30d
rundeck.log4j.config.file=/home/rundeck/server/config/log4j.properties
rundeck.gui.startpage=projectHome
rundeck.clusterMode.enabled=true
rundeck.security.httpHeaders.enabled=true
rundeck.security.httpHeaders.provider.xcto.enabled=true
rundeck.security.httpHeaders.provider.xxssp.enabled=true
rundeck.security.httpHeaders.provider.xfo.enabled=true
rundeck.security.httpHeaders.provider.csp.enabled=true
rundeck.security.httpHeaders.provider.csp.config.include-xcsp-header=false
rundeck.security.httpHeaders.provider.csp.config.include-xwkcsp-header=false
rundeck.storage.provider.1.type=db
rundeck.storage.provider.1.path=keys
rundeck.projectsStorageType=db
framework.properties file content:
# framework.properties -
#
# ----------------------------------------------------------------
# Server connection information
# ----------------------------------------------------------------
framework.server.name = 85845cd30fe9
framework.server.hostname = 85845cd30fe9
framework.server.port = 4440
framework.server.url = http://localhost
# ----------------------------------------------------------------
# Installation locations
# ----------------------------------------------------------------
rdeck.base=/home/rundeck
framework.projects.dir=/home/rundeck/projects
framework.etc.dir=/home/rundeck/etc
framework.var.dir=/home/rundeck/var
framework.tmp.dir=/home/rundeck/var/tmp
framework.logs.dir=/home/rundeck/var/logs
framework.libext.dir=/home/rundeck/libext
# ----------------------------------------------------------------
# SSH defaults for node executor and file copier
# ----------------------------------------------------------------
framework.ssh.keypath = /home/rundeck/.ssh/id_rsa
framework.ssh.user = rundeck
# ssh connection timeout after a specified number of milliseconds.
# "0" value means wait forever.
framework.ssh.timeout = 0
# ----------------------------------------------------------------
# System-wide global variables.
# ----------------------------------------------------------------
# Expands to ${globals.var1}
#framework.globals.var1 = value1
# Expands to ${globals.var2}
#framework.globals.var2 = value2
rundeck.server.uuid = a14bc3e6-75e8-4fe4-a90d-a16dcc976bf6

Related

How to reverse proxy a docker image at domain root and another image at a subdomain with swag?

I'm using "linuxserver"'s swag image to reverse-proxy my docker-compose images. I want to serve one of my images at my domain root and the other one at a subdomain (e.g. root-image @ mysite.com and subdomain-image @ staging.mysite.com). Here are the steps I went through:
Redirected my domain and subdomain names to my server in Cloudflare (pinging them shows my server's IP, so this step is OK!)
Configured the Cloudflare DNS settings for swag (this is working OK!)
Configured docker-compose file:
swag:
  image: linuxserver/swag:version-1.14.0
  container_name: swag
  cap_add:
    - NET_ADMIN
  environment:
    - PUID=1000
    - PGID=1000
    - URL=mysite.com
    - SUBDOMAINS=www,staging
    - VALIDATION=dns
    - DNSPLUGIN=cloudflare #optional
    - EMAIL=me@mail.com #optional
    - ONLY_SUBDOMAINS=false #optional
    - STAGING=false #optional
  volumes:
    - /docker-confs/swag/config:/config
  ports:
    - 443:443
    - 80:80 #optional
  restart: unless-stopped
root-image:
  image: ghcr.io/me/root-image
  container_name: root-image
  restart: unless-stopped
subdomain-image:
  image: ghcr.io/me/subdomain-image
  container_name: subdomain-image
  restart: unless-stopped
Defined a proxy conf for my root-image (at swag/config/nginx/proxy-confs/root-image.subfolder.conf)
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app root-image;
    set $upstream_port 3000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
Commented out nginx's default location / {} block (at swag/config/nginx/site-confs/default)
Defined a proxy conf for my subdomain-image (at swag/config/nginx/proxy-confs/subdomain-image.subdomain.conf)
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name staging.*;
    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app subdomain-image;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
Both images expose port 3000. Now my root image is working fine (OK!), but I'm getting a 502 error for my subdomain image. I checked the nginx error log, but it doesn't show anything meaningful to me:
*35 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxxx.xxx, server: staging.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://xxx.xxx.xxx:443/favicon.ico", host: "staging.mysite.com", referrer: "https://staging.mysite.com"
Docker logs for all three containers are also fine (no warnings or errors).
Which step am I getting wrong, or is there anything I missed? Thanks for helping.

gitea in docker behind jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion

I am stuck deploying docker image gitea/gitea:1 behind a reverse proxy jwilder/nginx-proxy with jrcs/letsencrypt-nginx-proxy-companion for automatic certificate updates.
Gitea is running and I can connect via the HTTP address on port 3000.
The proxy is also running, as I have multiple apps and services (e.g. SonarQube) working well behind it.
This is my docker-compose.yml:
version: "2"
services:
server:
image: gitea/gitea:1
environment:
- USER_UID=998
- USER_GID=997
- DB_TYPE=mysql
- DB_HOST=172.17.0.1:3306
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=mysqlpassword
- ROOT_URL=https://gitea.myhost.de
- DOMAIN=gitea.myhost.de
- VIRTUAL_HOST=gitea.myhost.de
- LETSENCRYPT_HOST=gitea.myhost.de
- LETSENCRYPT_EMAIL=me#web.de
restart: always
ports:
- "3000:3000"
- "222:22"
expose:
- "3000"
- "22"
networks:
- frontproxy_default
volumes:
- /mnt/storagespace/gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
frontproxy_default:
external: true
default:
When I call https://gitea.myhost.de the result is
502 Bad Gateway (nginx/1.17.6)
This is the log entry:
2020/09/13 09:57:30 [error] 14323#14323: *15465 no live upstreams while connecting to upstream, client: 77.20.122.169, server: gitea.myhost.de, request: "GET / HTTP/2.0", upstream: "http://gitea.myhost.de/", host: "gitea.myhost.de"
and this is the relevant entry in nginx/conf/default.conf:
# gitea.myhost.de
upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    # gitea_server_1
    server 172.23.0.10 down;
}
server {
    server_name gitea.myhost.de;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;

    # Do not HTTPS redirect Let'sEncrypt ACME challenge
    location /.well-known/acme-challenge/ {
        auth_basic off;
        allow all;
        root /usr/share/nginx/html;
        try_files $uri =404;
        break;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    server_name gitea.myhost.de;
    listen 443 ssl http2 ;
    access_log /var/log/nginx/access.log vhost;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/gitea.myhost.de.crt;
    ssl_certificate_key /etc/nginx/certs/gitea.myhost.de.key;
    ssl_dhparam /etc/nginx/certs/gitea.myhost.de.dhparam.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/certs/gitea.myhost.de.chain.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    include /etc/nginx/vhost.d/default;

    location / {
        proxy_pass http://gitea.myhost.de;
    }
}
Maybe it's a problem that I used a Gitea backup for this container, as suggested in https://docs.gitea.io/en-us/backup-and-restore/
What can I do to get this running? I have read https://docs.gitea.io/en-us/reverse-proxies/ but maybe I missed something. The main point is to have letsencrypt-nginx-proxy-companion automatically manage the certificates.
Any help and tips are highly appreciated.
I believe all you are missing is the VIRTUAL_PORT setting in your gitea container's environment. It tells the reverse proxy container which port to connect to when routing incoming requests for your VIRTUAL_HOST domain, effectively appending something like ":3000" to the upstream server in the generated nginx conf. This applies even when your containers are all on the same host. By default the reverse proxy only connects to port 80 on that service, but since the Gitea container listens on 3000 by default, you essentially need to tell the reverse proxy that. See the snippet from your compose file below.
services:
  server:
    image: gitea/gitea:1
    environment:
      - USER_UID=998
      - USER_GID=997
      - DB_TYPE=mysql
      - DB_HOST=172.17.0.1:3306
      - DB_NAME=gitea
      - DB_USER=gitea
      - DB_PASSWD=mysqlpassword
      - ROOT_URL=https://gitea.myhost.de
      - DOMAIN=gitea.myhost.de
      - VIRTUAL_HOST=gitea.myhost.de
      - VIRTUAL_PORT=3000   # <--- *** Add this line ***
      - LETSENCRYPT_HOST=gitea.myhost.de
      - LETSENCRYPT_EMAIL=me@web.de
    restart: always
    ports:
      - "3000:3000"
      - "222:22"
    expose:
      - "3000"
      - "22"
    networks:
      - frontproxy_default
    volumes:
      - /mnt/storagespace/gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
networks:
  frontproxy_default:
    external: true
  default:
P.S.: It is not required to expose the ports if all containers are on the same host, and there was no reason for it here other than trying to get this to work.
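For illustration only (this is a sketch of what jwilder/nginx-proxy should generate, not output taken from this setup): once VIRTUAL_PORT=3000 is picked up, the upstream block in nginx/conf/default.conf should roughly change from the "down" entry in the question to something like:
upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    # gitea_server_1
    server 172.23.0.10:3000;
}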

I cannot deploy the project. Problem with docker-compose and nginx

There is a working yii2 project, but I can't deploy it.
Docker builds the project and everything is fine, but if I substitute the nginx config from Docker into nginx/sites-available/default, then an error appears:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/sites-enabled/default:111
I read on the forums that I need to add the depends_on directive:
depends_on:
  - app
but in this case, errors start to appear when running:
docker-compose -f docker-compose.yml up -d
I tried different versions, from "1" to "3.4", and errors still appear. Here's the last one:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.environment: 'XDEBUG_CONFIG'
Invalid top-level property "environment". Valid top-level sections for this Compose file are: services, version, networks, volumes, and extensions starting with "x-".
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
The project only works on php-7.0
Here is the original (from project) nginx config:
## FRONTEND ##
server {
    listen 80 default;
    root /app/frontend/web;
    index index.php index.html;
    server_name yii2-starter-kit.dev;
    charset utf-8;

    # location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
    #     access_log off;
    #     expires max;
    # }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    client_max_body_size 32m;

    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off
    # sendfile off;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;

        ## Cache
        # fastcgi_pass_header Cookie; # fill cookie valiables, $cookie_phpsessid for exmaple
        # fastcgi_ignore_headers Cache-Control Expires Set-Cookie; # Use it with caution because it is cause SEO problems
        # fastcgi_cache_key "$request_method|$server_addr:$server_port$request_uri|$cookie_phpsessid"; # generating unique key
        # fastcgi_cache fastcgi_cache; # use fastcgi_cache keys_zone
        # fastcgi_cache_path /tmp/nginx/ levels=1:2 keys_zone=fastcgi_cache:16m max_size=256m inactive=1d;
        # fastcgi_temp_path /tmp/nginx/temp 1 2; # temp files folder
        # fastcgi_cache_use_stale updating error timeout invalid_header http_500; # show cached page if error (even if it is outdated)
        # fastcgi_cache_valid 200 404 10s; # cache lifetime for 200 404;
        # or fastcgi_cache_valid any 10s; # use it if you want to cache any responses
    }
}

## BACKEND ##
server {
    listen 80;
    root /app/backend/web;
    index index.php index.html;
    server_name backend.yii2-starter-kit.dev;
    charset utf-8;
    client_max_body_size 16m;

    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off on Vagrant based setup
    # sendfile off;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
    #     access_log off;
    #     expires max;
    # }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

## STORAGE ##
server {
    listen 80;
    server_name storage.yii2-starter-kit.dev;
    root /app/storage/web;
    index index.html;
    # expires max;

    # There is a VirtualBox bug related to sendfile that can lead to
    # corrupted files, if not turned-off
    # sendfile off;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

## PHP-FPM Servers ##
upstream php-fpm {
    server app:9000;
}

## MISC ##
### WWW Redirect ###
#server {
#    listen 80;
#    server_name www.yii2-starter-kit.dev;
#    return 301 http://yii2-starter-kit.dev$request_uri;
#}
Original (from project) docker-compose.yml
data:
  image: busybox:latest
  volumes:
    - ./:/app
  entrypoint: tail -f /dev/null
app:
  build: docker/php
  working_dir: /app
  volumes_from:
    - data
  expose:
    - 9000
  links:
    - db
    - mailcatcher
  environment:
    XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./:/app
    - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/vhost.conf
  links:
    - app
mailcatcher:
  image: schickling/mailcatcher:latest
  ports:
    - "1080:1080"
db:
  image: mysql:5.7
  volumes:
    - /var/lib/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: yii2-starter-kit
    MYSQL_USER: ysk_dbu
    MYSQL_PASSWORD: ysk_pass
Please help me make correct configs for nginx and docker-compose. I'm a yii2 newbie.
I would be very grateful for any help and advice. Thank you in advance!
A Compose file without a version: key at the top level is a version 1 Compose file. This is a very old version of the Compose YAML file format that doesn't support networks, volumes, or other modern Compose features. You should select either version 2 or 3. (Version 3 is a little more oriented towards Docker's Swarm orchestrator, so for some specific options in a single-host setup you may need to specify version 2.) You need to specify a top-level version: key, and then the services you have need to go under a top-level services: key.
version: '3.8'
services:
  app: { ... }
  nginx: { ... }
  mailcatcher: { ... }
  db: { ... }
This will actually directly address your immediate issue. As discussed in Networking in Compose, Compose (with a version 2 or 3 config file) will create a default network for you and register containers so that their service names (like app) are usable as host names. You do not need links: or other configuration.
There are also a number of other unnecessary options in the Compose file you show. You don't need to repeat an image's WORKDIR as a container's working_dir:; you don't need to expose: ports (as distinct from publishing ports: out to the host); it's not really great practice to overwrite the code that gets COPYed into an image with volumes: from the host.
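As a small illustration of the ports:/expose: distinction (a sketch, not taken from the original project's files):
services:
  app:
    build: docker/php
    # expose: only documents a container-to-container port; on the default
    # Compose network, other services can already reach app:9000 without it.
    expose:
      - "9000"
  nginx:
    image: nginx:latest
    # ports: actually publishes the container port on the host.
    ports:
      - "80:80"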
In modern Docker you also tend to not use data-volume containers. Instead, newer versions of Compose have a top-level volumes: key that can declare named volumes. You'd use this, for example, for your backing database storage.
The net result of all of this would be a Compose file like:
# Specify a current Compose version.
version: '3.8'

# Declare that we'll need a named volume for the database storage.
volumes:
  mysql_data:

# The actual Compose-managed services.
services:
  app:
    build:
      # If the application is in ./app, then the build context
      # directory must be the current directory (to be able to
      #   COPY app ./
      # ).
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      XDEBUG_CONFIG: "idekey=PHPSTORM remote_enable=On remote_connect_back=On"
  nginx:
    # Build a separate image that will also
    #   FROM nginx
    #   COPY app /usr/share/nginx/html
    #   COPY docker/nginx/vhost.conf /etc/nginx/conf.d
    build:
      context: .
      dockerfile: docker/nginx/Dockerfile
    ports:
      - "80:80"
  mailcatcher:
    image: schickling/mailcatcher:latest
    ports:
      - "1080:1080"
  db:
    image: mysql:5.7
    volumes:
      # Use the named volume we declared above
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: yii2-starter-kit
      MYSQL_USER: ysk_dbu
      MYSQL_PASSWORD: ysk_pass

"host not found in upstream ..." when using 'kubectl apply -f' but works in 'docker-compose up'

I have four images in Docker making up two services, a frontend, and a reverse proxy.
When I use docker-compose up, it works and the services run.
But when I use kubectl apply -f to run them in pods, the logs show [emerg] 1#1: host not found in upstream "backend-user:8080" in /etc/nginx/nginx.conf:11. I have no idea about this.
Here are my images:
bmjlearntocode/reverseproxy latest
bmjlearntocode/udacity-frontend local
bmjlearntocode/udacity-restapi-user latest
bmjlearntocode/udacity-restapi-feed latest
Here is the docker-compose.yaml
version: "3"
services:
reverseproxy:
image: bmjlearntocode/reverseproxy
ports:
- 8080:8080
restart: always
depends_on:
- backend-user
- backend-feed
backend-user:
image: bmjlearntocode/udacity-restapi-user
volumes:
- $HOME/.aws:/root/.aws
environment:
POSTGRESS_USERNAME: $POSTGRESS_USERNAME
POSTGRESS_PASSWORD: $POSTGRESS_PASSWORD
POSTGRESS_DB: $POSTGRESS_DB
POSTGRESS_HOST: $POSTGRESS_HOST
AWS_REGION: $AWS_REGION
AWS_PROFILE: $AWS_PROFILE
AWS_BUCKET: $AWS_BUCKET
JWT_SECRET: $JWT_SECRET
URL: "http://localhost:8100"
backend-feed:
image: bmjlearntocode/udacity-restapi-feed
volumes:
- $HOME/.aws:/root/.aws
environment:
POSTGRESS_USERNAME: $POSTGRESS_USERNAME
POSTGRESS_PASSWORD: $POSTGRESS_PASSWORD
POSTGRESS_DB: $POSTGRESS_DB
POSTGRESS_HOST: $POSTGRESS_HOST
AWS_REGION: $AWS_REGION
AWS_PROFILE: $AWS_PROFILE
AWS_BUCKET: $AWS_BUCKET
JWT_SECRET: $JWT_SECRET
URL: "http://localhost:8100"
frontend:
image: bmjlearntocode/udacity-frontend:local
ports:
- "8100:80"
Here is the nginx.conf
worker_processes 1;

events { worker_connections 1024; }

error_log /dev/stdout debug;

http {
    sendfile on;

    upstream user {
        server backend-user:8080;
    }
    upstream feed {
        server backend-feed:8080;
    }

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;

    server {
        listen 8080;
        location /api/v0/feed {
            proxy_pass http://feed;
        }
        location /api/v0/users {
            proxy_pass http://user;
        }
    }
}
When I use docker-compose up, it works.
Here is the reverseproxy-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: reverseproxy
  name: reverseproxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        service: reverseproxy
    spec:
      containers:
        - image: bmjlearntocode/reverseproxy
          name: reverseproxy
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "1024Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
      restartPolicy: Always
I am not sure why nginx cannot find the 'back-end' when the pod starts, but it works with docker-compose up.
I think it's because backend-user is only meaningful to the Docker network. When you define a service in a docker-compose file, you can access it by its name. Obviously you can't access backend-user:8080 within the Kubernetes cluster because it's undefined there.
In Kubernetes, you need a Service resource for that kind of access. Also, you need an Ingress resource, or a Service of type NodePort, for accessing apps from outside the Kubernetes cluster (e.g. from your browser).
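For example, a minimal Service manifest sketch that would make the name backend-user resolvable in-cluster (the label selector and file name here are assumptions, not taken from your manifests; the selector must match the pod labels in your backend-user Deployment):
# backend-user-service.yaml (hypothetical file name)
apiVersion: v1
kind: Service
metadata:
  name: backend-user          # must match the host name used in nginx.conf
spec:
  selector:
    service: backend-user     # assumption: match your Deployment's pod labels
  ports:
    - port: 8080              # port nginx's upstream targets
      targetPort: 8080        # port the container listens on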
You can check these resources; they are well written:
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
Also, you can put your nginx and back-end service in the same pod and use localhost instead of the hostname/service name, so nginx can find the upstream server.
I think this was solved either by using backend-user and backend-feed as the Service names instead of backend-user-svc and backend-feed-svc, OR by modifying nginx.conf and setting the upstreams to use backend-user-svc and backend-feed-svc instead of backend-user and backend-feed.

NGINX reverse proxy not working for .NET core webAPI running in Docker

I have a simple example webAPI in .NET Core, running in a docker container. I'm running Nginx, also in a docker container, as a reverse proxy for HTTPS redirection. The webAPI is accessible over HTTP, but when accessing the HTTPS URL, the API does not respond.
I have tried many different configurations in the nginx.conf file. I've tried using localhost, 0.0.0.0, and 127.0.0.1. I've tried using several different ports such as 5000, 8000, and 80. I've tried using upstream and also specifying the url on the proxy_pass line directly.
docker-compose.yml:
version: '3.4'
networks:
  blogapi-dev:
    driver: bridge
services:
  blogapi:
    image: blogapi:latest
    depends_on:
      - "postgres_image"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:80"
    expose:
      - "8000"
    environment:
      DB_CONNECTION_STRING: "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
      ASPNETCORE_ENVIRONMENT: development
      #REMOTE_DEBUGGING: ${REMOTE_DEBUGGING}
    networks:
      - blogapi-dev
    tty: true
    stdin_open: true
  postgres_image:
    image: postgres:latest
    ports:
      - "5000:80"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
      - ./BlogApi/dbscripts/seed.sql:/docker-entrypoint-initdb.d/seed.sql
    environment:
      POSTGRES_USER: "bloguser"
      POSTGRES_PASSWORD: "bloguser"
      POSTGRES_DB: blogdb
    networks:
      - blogapi-dev
  nginx-proxy:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    networks:
      - blogapi-dev
    depends_on:
      - "blogapi"
    volumes:
      - ./nginx-proxy/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx-proxy/error.log:/etc/nginx/error_log.log
      - ./nginx-proxy/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./:/etc/nginx/
networks:
  blogapi-dev:
    driver: bridge
volumes:
  db_volume:
nginx.conf:
events {}
http {
    upstream backend {
        server 127.0.0.1:8000;
    }

    server {
        server_name local.website.dev;
        rewrite ^(.*) https://local.website.dev$1 permanent;
    }

    server {
        listen 443 ssl;
        ssl_certificate localhost.crt;
        ssl_certificate_key localhost.key;
        ssl_ciphers HIGH:!aNULL:!MD5;
        server_name local.website.dev;

        location / {
            proxy_pass http://backend;
        }
    }
}
Startup.cs:
namespace BlogApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            var connectionString = Environment.GetEnvironmentVariable("DB_CONNECTION_STRING");
            services.AddDbContext<BlogContext>(options =>
                options.UseNpgsql(
                    connectionString));
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            app.UseMvc();
        }
    }
}
When I go to http://127.0.0.1:8000/api/blog, the browser returns the JSON response from my API. This tells me that the app is up and running on port 8000, although it should not be accessible via HTTP:
[{"id":1,"title":"Title 1","body":"Body 1","timeStamp":"1999-01-08T04:05:06"},{"id":2,"title":"Title 2","body":"Body 2","timeStamp":"2000-01-08T04:05:06"},{"id":3,"title":"Title 3","body":"Body 3","timeStamp":"2001-01-08T04:05:06"},{"id":4,"title":"Title 4","body":"Body 4","timeStamp":"2002-01-08T04:05:06"}]
When I go to 127.0.0.1, the browser properly redirects to https://local.website.dev/ but I get no response from the API, just the Chrome error local.website.dev refused to connect. ERR_CONNECTION_REFUSED. I get the same response when I go to https://local.website.dev/api/blog.
Also, this is the output from docker-compose up:
Starting blog_postgres_image_1 ... done
Starting blog_blogapi_1 ... done
Starting nginx-proxy ... done
Attaching to blog_postgres_image_1, blog_blogapi_1, nginx-proxy
blogapi_1 | Hosting environment: development
blogapi_1 | Content root path: /app
blogapi_1 | Now listening on: http://[::]:80
blogapi_1 | Application started. Press Ctrl+C to shut down.
postgres_image_1 | 2019-06-27 11:20:49.441 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_image_1 | 2019-06-27 11:20:49.441 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_image_1 | 2019-06-27 11:20:49.577 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_image_1 | 2019-06-27 11:20:49.826 UTC [25] LOG: database system was shut down at 2019-06-27 10:26:12 UTC
postgres_image_1 | 2019-06-27 11:20:49.893 UTC [1] LOG: database system is ready to accept connections
I got it working. There were a couple of issues. First, I was missing some boilerplate at the top of the nginx.conf file. Second, I needed to set the proxy_pass to the name of the docker container housing the service that I wanted to route to, in my case http://blogapi/, instead of localhost.
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    proxy_set_header Host $host;
    proxy_pass_request_headers on;
    gzip on;
    gzip_proxied any;

    map $sent_http_content_type $expires {
        default off;
        ~image/ 1M;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        return 301 https://172.24.0.1$request_uri;
    }

    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate localhost.crt;
        ssl_certificate_key localhost.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location / {
            proxy_pass http://blogapi/;
        }
    }
}
With the above configuration, I can access the webAPI at https://172.24.0.1/api/blog/. If http://localhost/api/blog is entered, the browser redirects to https://172.24.0.1/api/blog/. The IP address is the address of the blogapi-dev bridge network gateway, as shown below.
docker inspect 20b
"Networks": {
"blog_blogapi-dev": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"20bd90d1a80a",
"blogapi"
],
"NetworkID": "1edd39000ac3545f9a738a5df33b4ea90e082a2be86752e7aa6cd9744b72d6f0",
"EndpointID": "9201d8a1131a4179c7e0198701db2835e3a15f4cbfdac2a4a4af18804573fea9",
"Gateway": "172.24.0.1",
"IPAddress": "172.24.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:18:00:03",
"DriverOpts": null
}
}
