One nginx config for multiple dockerized ShinyR apps under different locations

We're trying to host a bunch of dockerized shinyR applications on a server. Each shinyR app is served via docker-compose on a specific port, and each app should be available under its own location, e.g. https://example.com/app1 and https://example.com/app2.
Requests under location /app1 shall be directed to http://localhost:3040 (the port from the compose file below).
This is an excerpt of the nginx config file we're using:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    root /var/www/html;

    location /app1 {
        proxy_pass http://127.0.0.1:3040/;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
    ...
}
What I receive is the bare shinyR application, which is good, but the client requests the assets (various JS libraries, images, etc.) under the wrong location (/ instead of /app1/), as can be seen in my browser's dev console:
GET https://example.com/shiny-javascript-1.7.4/shiny.min.js net::ERR_ABORTED 404
I can request this asset via https://example.com/app1/shiny-javascript-1.7.4/shiny.min.js, so it's served properly somehow.
My first question is whether this is a shinyR problem or an nginx problem. My guess is that shiny is telling the browser to request the assets from the wrong location and that we should start digging there, but since I'm not an expert on either shinyR or nginx, it's hard to tell where to start debugging.
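One way to narrow that down (a debugging sketch of mine, not from the original post): fetch the app's HTML directly from the container, bypassing nginx, and look at how the asset URLs are written:

curl -s http://127.0.0.1:3040/ | grep shiny-javascript

If the script tags contain relative paths (src="shiny-javascript-1.7.4/shiny.min.js"), Shiny is behaving correctly and the browser resolves them against / only because the page URL /app1 lacks a trailing slash; absolute paths (src="/shiny-javascript-...") would instead point at a rewrite problem to be solved in nginx or in the app.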
For the sake of completeness, here's the compose file running the shinyR apps:
version: '3'
services:
  app1:
    image: "registry.gitlab.com/dezim/somefoo/bar:latest"
    container_name: "app1"
    command: R -e "shiny::runApp(appDir='/home/app', port=3040, host='0.0.0.0')"
    restart: unless-stopped
    ports:
      - '127.0.0.1:3040:3040'
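Assuming the missing trailing slash is indeed the culprit (the working https://example.com/app1/... request hints at that), a minimal sketch of the commonly suggested nginx arrangement, untested here, would be:

# Redirect the slash-less form so relative asset URLs resolve under /app1/
location = /app1 {
    return 301 /app1/;
}

location /app1/ {
    proxy_pass http://127.0.0.1:3040/;
    proxy_http_version 1.1; # required for the WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}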

Related

Unable to deploy Growthbook service (A/B Testing) together with Nginx sidecar container for handling SSL/TLS encryption into Amazon ECS

I want to deploy a Growthbook (A/B testing tool) container along with an Nginx reverse proxy for handling SSL/TLS encryption (i.e. SSL termination) on AWS ECS. I was trying to deploy using a Docker Compose file (i.e. the Docker ECS context). The problem is, when I ran docker compose --project-name growthbook up, it created all the necessary resources (Network Load Balancers, Target Groups, ECS task definitions, etc.), then abruptly failed while creating the ECS service and deleted all the resources it had created. The reason it gives is "Nginx sidecar container exited".
Here is my docker-compose.yml file:
# docker-compose.yml
version: "3"
x-aws-vpc: "vpc-*************"
services:
  growthbook:
    image: "growthbook/growthbook:latest"
    ports:
      - 3000:3000
      - 3100:3100
    environment:
      - MONGODB_URI=<mongo_db_connection_string>
      - JWT_SECRET=<jwt_secret>
    volumes:
      - uploads:/usr/local/src/app/packages/back-end/uploads
  nginx-tls-sidecar:
    image: <nginx_sidecar_image>
    ports:
      - 443:443
    links:
      - growthbook
volumes:
  uploads:
And here is the Dockerfile used to build the nginx sidecar image:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl.key /etc/nginx/ssl.key
COPY ssl.crt /etc/nginx/ssl.crt
In the above Dockerfile, the SSL key and certificate are self-signed, generated using openssl, and are in order.
And here is my nginx.conf file:
# nginx Configuration File
# https://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    # Redirect to https, using 307 instead of 301 to preserve post data
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name localhost;

        # Protect against the BEAST attack by not using SSLv3 at all. If you need to support
        # older browsers (IE6) you may need to add SSLv3 to the list of protocols below.
        ssl_protocols TLSv1.2;

        # Ciphers set to best allow protection from Beast, while providing forwarding secrecy,
        # as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;

        # Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
        # The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
        # By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
        # Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
        ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
        ssl_session_timeout 24h;

        # Use a higher keepalive timeout to reduce the need for repeated handshakes
        keepalive_timeout 300; # up from 75 secs default

        # Remember the certificate for a year and automatically connect to HTTPS
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://localhost:3000; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location / {
            proxy_pass http://localhost:3100; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Basically, Growthbook exposes its service on http://localhost:3000 and http://localhost:3100, and the Nginx sidecar container only listens on port 443. I need to be able to proxy from Nginx port 443 to both endpoints exposed by Growthbook.
Help is much appreciated if you find any mistakes in my configuration :)
By default, the Growthbook service does not provide TLS encryption, so I used Nginx as a sidecar handling SSL termination. As an end result, I need to be able to run the Growthbook service with TLS encryption on AWS ECS.
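One concrete mistake stands out: the nginx.conf above declares location / twice in the same server block, and nginx refuses to start on that (nginx -t reports duplicate location "/"), which on its own would explain "Nginx sidecar container exited". A minimal sketch of one way to expose both endpoints, where the extra listen port 3443 is my assumption (it would also need to be published by the sidecar in the compose file):

server {
    listen 443 ssl;
    # ... same ssl_* and header settings as above ...
    location / {
        proxy_pass http://localhost:3000; # Growthbook app
    }
}

server {
    listen 3443 ssl;
    # ... same ssl_* and header settings as above ...
    location / {
        proxy_pass http://localhost:3100; # Growthbook API
    }
}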

Running Container Registry in Gitlab with relative url

I'm running gitlab and nginx in docker. I had to use a standalone (non-bundled) version of nginx because there are other applications running and I can only publish ports 80 and 443. So far I've managed to access gitlab at my.domain.com/gitlab/ and can also log in, upload projects, etc.
I wanted to use the container registry for uploading and storing images from my projects, which is why I've uncommented gitlab_rails['registry_enabled'] = true in gitlab.rb. Now the container registry is visible for my projects; however, I get a Docker Connection Error when trying to access the page.
My question is: are there any other settings I have to tweak to get the built-in container registry running, or did I already mess things up in the way I've set this up? Especially since I need gitlab to run on a relative URL, and my project's URL for cloning/pulling is something like https://<server.ip.address>/gitlab/group-name/test-project, even though browser tabs show https://my.domain.com/gitlab/group-name/test-project.
So far, this is my setup:
nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name my.domain.com www.my.domain.com;
    server_tokens off;

    ssl_certificate /etc/nginx/ssl/live/my.domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/my.domain.com/privkey.pem;

    ##############
    ##  GITLAB  ##
    ##############
    location /gitlab/ {
        root /home/git/gitlab/public;
        proxy_http_version 1.1;
        proxy_pass http://<server.ip.address>:8929/gitlab/;
        gzip off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
gitlab docker-compose.yml
services:
  git:
    image: gitlab/gitlab-ce:14.10.0-ce.0
    restart: unless-stopped
    container_name: gitlab
    hostname: 'my.domain.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
        external_url 'http://<server.ip.address>:8929/gitlab/'
        gitlab_rails['registry_enabled'] = true
    ports:
      - "8929:8929"
      - "2224:2224"

Dockerize pgAdmin - The CSRF tokens do not match

I've been trying to fix an issue: when I try to log in to pgAdmin (in a docker container) behind an Nginx proxy, I get an error that the CSRF tokens do not match.
See https://en.wikipedia.org/wiki/Cross-site_request_forgery
Frankly, I'm not sure whether the problem lies with nginx or not, but the configuration files are below:
Docker Swarm service:
pgAdmin:
  image: dpage/pgadmin4
  networks:
    - my-network
  ports:
    - 9102:80
  environment:
    - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
    - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    - PGADMIN_CONFIG_SERVER_MODE=True
  volumes:
    - /home/docker-container/pgadmin/persist-data:/var/lib/pgadmin
    - /home/docker-container/pgadmin/persist-data/servers.json:/pgadmin4/servers.json
  deploy:
    placement:
      constraints: [node.hostname == my-host-name]
Nginx Configuration:
server {
    listen 443 ssl;
    server_name my-server-name;

    location / {
        proxy_pass http://pgAdmin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-CSRF-Token $http_x_pga_csrftoken;
    }

    ssl_certificate /home/nginx/ssl/certificate.crt;
    ssl_certificate_key /home/nginx/ssl/private.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
}

server {
    listen 80;
    server_name my-server-name;
    return 301 https://my-server-name$request_uri;
}
I can access pgAdmin in two ways:
The first way is via the direct host IP, like 172.23.53.2:9102.
The second way is via the Nginx proxy.
When I access pgAdmin via the direct host IP there is no error, but when I access it via DNS (like my-server.pgadmin.com) I get an error once I log into the pgAdmin dashboard.
The error is:
Bad Request. The CSRF tokens do not match.
My first thought was that nginx does not pass the CSRF token header to pgAdmin. For this reason I've changed the nginx configuration file many times, but I'm still getting this error.
What could be the source of this error, and how can I solve it?
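For comparison, pgAdmin's server-deployment docs take a different approach than forwarding a CSRF header: they have nginx pass the external scheme (and, for sub-path deployments, an X-Script-Name header) so pgAdmin issues its session and CSRF cookies for the right origin. A hedged sketch adapted to the setup above; the header names come from the pgAdmin docs, the rest mirrors the existing config:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Tell pgAdmin it is reached over https even though the upstream hop is http:
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://pgAdmin/;
}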
Try using the default ports "5050:80"; that solved the same issue on my side. Using strings for the port mapping is also recommended.
Cf: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
I used pgAdmin 4 deployed behind Apache httpd; the deployment method is similar and I had the same problem. My solution was to make sure Apache httpd loaded the APR / APR-util / PCRE libraries, since Apache httpd uses them for token handling.

Reverse Proxy with Hostnames with Docker compose

I have 2 APIs and a front end that run in docker containers. They can all be created via docker-compose, and I have an Nginx reverse proxy that should route to them using their hostnames, i.e. api-1.my-project.localhost. In fact, I have had this process working, but every so often, like this morning, I get "Non-existent domain" or "We can't find this site" errors when visiting them using these hostnames.
I don't believe the docker-compose.yml is all that important but I have included it here:
version: "3.2"
services:
api-1:
# Runs on port 5000
image: my-companies-docker-registry.api-1
api-2:
# Runs on port 5000
image: my-companies-docker-registry.api-2
main-front-end:
# Runs on port 3000
image: my-companies-docker-registry.main-front-end
my-project:
image: nginx
depends_on:
- api-1
- api-2
- main-front-end
ports:
- 80:80
volumes:
- type: bind
read_only: true
source: ./nginx
target: /etc/nginx
My nginx config looks like so:
events {}

http {
    # Api 1
    upstream api-1 {
        server api-1:5000;
    }

    server {
        listen 80;
        server_name api-1.my-project.localhost;

        location / {
            proxy_pass http://api-1;
        }
    }

    # Api 2
    upstream api-2 {
        server api-2:5000;
    }

    server {
        listen 80;
        server_name api-2.my-project.localhost;

        location / {
            proxy_pass http://api-2;
        }
    }

    # Main front end
    upstream main-front-end {
        server main-front-end:3000;
    }

    server {
        listen 80 default_server deferred;
        server_name my-project.localhost www.my-project.localhost;

        location / {
            proxy_pass http://main-front-end;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
The aim is that I can access the site at http://my-project.localhost and the APIs at api-1.my-project.localhost and api-2.my-project.localhost. Because I have listen 80 default_server deferred, visiting http://localhost hits nginx and shows my front-end site, so docker-compose has bound the ports correctly.
At some point in the past I was able to access the sites using the my-project.localhost suffix, but now this is no longer the case. This site suggests adding a host entry for every site; I don't remember doing this before, and since there are a lot of sites it would have taken a while, but it is possible I did and they have since been removed. I am also aware that nginx is not Docker, so I have no idea how those hostnames would have been extracted and added to my machine's host resolution process.
How could this have worked before and not now? And how can I get my setup to work without making manual host file changes?
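A hedged note on why this can be intermittent: browsers such as Chrome and Firefox resolve *.localhost names to the loopback address internally, while the operating system resolver generally does not, so the same URL can work in one browser and fail in another tool. The hosts file has no wildcard support, so the manual workaround really is one entry per name:

127.0.0.1 my-project.localhost
127.0.0.1 api-1.my-project.localhost
127.0.0.1 api-2.my-project.localhost

To avoid per-name entries, a local resolver like dnsmasq can map the whole suffix with address=/my-project.localhost/127.0.0.1.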

Couldn't start nginx-proxy docker image with error "Contents of /etc/nginx/conf.d/default.conf did not change"

I am playing with "nginx-proxy": I pulled the image "jwilder/nginx-proxy:latest" to my local host and tried to start it, but got this error: "Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'". And when I tried to go to the server on port 80, it returned a 503 error:
Here is my docker-compose file:
version: '3.0'
services:
  proxy:
    image: jwilder/nginx-proxy:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /docker/certs:/etc/nginx/certs:ro
    network_mode: "bridge"
and the error I got:
WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2018/07/18 04:14:14 Generated '/etc/nginx/conf.d/default.conf' from 1 containers
dockergen.1 | 2018/07/18 04:14:14 Running 'nginx -s reload'
dockergen.1 | 2018/07/18 04:14:14 Watching docker events
dockergen.1 | 2018/07/18 04:14:14 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
Any idea is much appreciated. Cheers
It's the default behavior for this image; you can see it in /etc/nginx/conf.d/default.conf:
server {
    server_name _;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
So when you visit, it returns a 503 error.
nginx-proxy is a service-discovery proxy, so you need to register your services with it.
See the official example:
docker-compose.yaml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
If you use docker-compose up to start it, then have a look at /etc/nginx/conf.d/default.conf again, you will see:
server {
    server_name _;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# whoami.local
upstream whoami.local {
    ## Can be connected with "a_default" network
    # a_whoami_1
    server 172.20.0.2:8000;
}

server {
    server_name whoami.local;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://whoami.local;
    }
}
Here, jwilder/nginx-proxy watches Docker events and adds the corresponding entries to the nginx reverse-proxy settings.
So if you execute curl -H "Host: whoami.local" localhost on your host machine, it will print I'm 5b129ab83266.
server 172.20.0.2 in the nginx settings is your application container's IP. It changes every time you start a new container, so with this method you are freed from tracking your application container's IP; you just use the reverse proxy through nginx.
Many services offer the same function, such as marathon-lb, a component of Marathon (a Mesos framework), and Kubernetes has equivalents. Anyway, you need to understand the principle behind this image; a useful doc for reference: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
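A quick way to verify what was generated, assuming the container_name nginx-proxy from the compose file above, is to dump the live config from the running container:

docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

If a service is missing from the output, a commonly forgotten piece is its VIRTUAL_HOST environment variable, as the next comment notes.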
Hung Vu, was that the entirety of your docker-compose file, or just the section for jwilder/nginx-proxy? I ask because I experienced this error when I had simply forgotten to add the VIRTUAL_HOST environment variable for my other services after dropping in the service definition for nginx-proxy. Doh! :-) See its docs, or atline's example here.
I ran into this issue in 2021, but I was able to resolve it. Documenting as this thread is at the top of Google searches.
tl;dr: Run the containers without a custom NGINX config at least once
This happened to me as I was migrating from one host to another. I migrated all of my files over and then started running my containers one by one.
For me, I had a custom NGINX config file that proxied a path to a separate docker container that was not created yet.
- "~/gotti/volumes/nginx-configs:/etc/nginx/vhost.d"
In this mount, I had a mydomain.com config file with the following contents:
# reverse proxy
location /help {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://ghost:2368; # <----- Problem: this container didn't exist yet.
    proxy_redirect off;
}
This invalid reference prevented NGINX from proxying my app and thus prevented an SSL cert from being issued.
