Docker nginx reverse proxy CSS and image rendering issue

I have an environment with two Tomcat containers, say dev and test, exposed on ports 8080 and 8081 respectively. I can access the Tomcat instances with host and port combinations as below:
http://<ip>:8080
http://<ip>:8081
Now I am trying to set up an nginx container as a proxy that sends all /dev requests to the dev (8080) container and all /test requests to the test (8081) container.
Below is my docker-compose.yml
version: "3.5"
services:
web1:
image: "tomcat:latest"
container_name: "web1"
ports:
- "8080:8080"
web2:
image: "tomcat:latest"
container_name: "web2"
ports:
- "8081:8080"
nginx:
image: "nginx:latest"
container_name: "nginx"
ports:
- "8000:80"
volumes:
- "./nginx.conf:/etc/nginx/nginx.conf"
#- "./default.conf:/etc/nginx/conf.d/default.conf"
Below is my nginx.conf file
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;

        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Scheme $scheme;
        client_max_body_size 0;

        location / {
        }

        location /dev {
            proxy_pass http://35.239.73.252:8080/;
        }

        location /test {
            proxy_pass http://35.239.73.252:8081/;
        }
    }
}
Now the problem: when I load the Tomcat containers directly they work fine, but when they are accessed through nginx via the /dev and /test paths the pages are broken and the images and CSS are not loaded.
What could the issue be, and how can I fix it?

I believe you need a closing "/" after both your location path and your target. Here's a working example from a project of mine:
location ^~ /ll/ {
    proxy_pass http://werther:8080/;
}
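Applied to the question's config, the fix would look roughly like the block below. This is a sketch rather than the original answer's code: it assumes the nginx container reaches the Tomcat containers through the compose network by their service names (web1 and web2) instead of the public host IP; note that web2 listens on 8080 inside the network even though it is published as 8081 on the host. With both trailing slashes in place, a request for /dev/css/app.css is forwarded upstream as /css/app.css.
location ^~ /dev/ {
    # proxy to the dev Tomcat container by compose service name
    proxy_pass http://web1:8080/;
}
location ^~ /test/ {
    # web2's internal port is 8080; 8081 is only the host mapping
    proxy_pass http://web2:8080/;
}
One caveat: if the Tomcat pages emit absolute links such as /css/app.css, the browser still requests them without the /dev prefix, so the app needs relative asset paths (or a matching context path in Tomcat) for a prefix proxy like this to render correctly.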

Related

Run QGIS server with nginx in a Docker container (docker-compose)

I need to run QGIS Server together with NGINX, and I have to set up the environment using docker-compose. I am using the docker-compose file as mentioned in the comment, and the nginx.conf below:
events {
    worker_connections 4096;
}

http {
    # error_log /etc/nginx/error/error.log warn; #./nginx/error.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;
        server_name xx.xx.xx.xxx;
        # return 301 https://localhost:80$request_uri;
        return 301 https://$server_name$request_uri;
        # return 301 https://localhost:8008;
    }

    server {
        listen 443 ssl http2;
        server_name xx.xx.xx.xxx; # localhost;

        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        #ssl_certificate /etc/nginx/ssl.crt;
        #ssl_certificate_key /etc/nginx/ssl.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:!MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 70;

        location /qgis/ {
            proxy_pass http://qgis:8080;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
        }
    }
}
After docker-compose up, the nginx container is always in a restarting state. The docker-compose logs are as below:
web_server_1 | 2021/05/12 16:53:45 [emerg] 1#1: host not found in upstream "qgis" in /etc/nginx/nginx.conf:40
web_server_1 | nginx: [emerg] host not found in upstream "qgis" in /etc/nginx/nginx.conf:40
Thanks in advance!!
Use something like this as docker-compose.yml
services:
  web_server:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./mime.types:/etc/nginx/conf/mime.types
      - ./public:/data/www
      - ./tile_cache:/tile_cache
      - ./logs:/logs
    ports:
      - "80:80"
      - "443:443"
    restart: always
    networks:
      tile_network:
        aliases:
          - webserver
  qgis_server:
    image: camptocamp/qgis-server
    volumes:
      - ./qgisserver:/etc/qgisserver/
    restart: always
    environment:
      - QGIS_PROJECT_FILE=/etc/qgisserver/project.qgs
    networks:
      tile_network:
        aliases:
          - qgis
networks:
  # top-level definition for the network both services join
  tile_network:
Add the following location in your nginx.conf:
location /qgis/ {
    proxy_pass http://qgis/;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
}
So you have a web server that hides the QGIS Server and exposes it at the URL localhost/qgis.
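If the alias is wired up correctly, nginx can resolve the upstream name at startup through Docker's embedded DNS, which is exactly what the 'host not found in upstream "qgis"' error was complaining about. A quick way to verify, assuming the service names above and the Debian-based nginx image (where getent is available):
# start the stack in the background
docker-compose up -d
# resolve the network alias from inside the nginx container
docker-compose exec web_server getent hosts qgis
# confirm the configuration now parses cleanly
docker-compose exec web_server nginx -t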

What am I doing wrong with this REST API and nginx front-end reverse proxy?

I am trying to get the front end and back end of the Spring Boot Pet Clinic app working together. I have already run ng --prod on a Windows PC and then used GitHub to transfer my code to a VM. I had it working once, but only on IE, and now it doesn't work again and I don't know what's wrong. Please help; this has done my head in for a few weeks.
nginx.conf file:
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    server {
        #listen 8443 ssl;
        listen 4200;
        #server_name localhost;
        #ssl_certificate localhost.crt;
        #ssl_certificate_key localhost.key;

        location / {
            root /AngularApp/dist;
            index index.html;
        }

        location /api/ {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 20;
            proxy_read_timeout 20;
            proxy_pass http://springcommunity:9966/petclinic;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Dockerfile for the front end:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
COPY /nginx.conf /etc/nginx/nginx.conf
COPY /AngularApp /AngularApp
WORKDIR /AngularApp
docker-compose file:
version: '3.7'
services:
  springcommunity:
    image: springcommunity/spring-petclinic-rest
    ports:
      - "9966:9966"
  web:
    container_name: nginx
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - springcommunity
    links:
      - springcommunity
    restart: always
environment.prod.ts and environment.ts file before ng --prod (production)
export const environment = {
  production: true,
  REST_API_URL: 'http://localhost:9966/petclinic/'
};
Things I have tried that failed:
export const environment = {
  production: true,
  REST_API_URL: 'http://springcommunity:9966/petclinic'
};
Exposing 4200 in the Dockerfile for the front end.
Port mappings in docker-compose, for example:
4200:9966
9966:4200
Exposing 9966 as well in the compose file.
The front end and back end work, just not together, only individually. I have a feeling that one container (the front end) needs to be delayed, but after some Google searching I can't find a viable option. I have no idea how to do it; please help.
Update 5/06/2020
I am currently running a wait-for.sh script so the back end starts before the web container, but now nginx exits with error code 0. I am also trying to see the nginx error logs but can't get to them; could someone please shed some light on this?
If your front end can't reach the back end on that VM, it may be that your Docker containers are not on the same network.
You can try REST_API_URL: 'http://0.0.0.0:9966/petclinic'.
Or you can specify custom networks in the docker-compose file and use REST_API_URL: 'http://springcommunity:9966/petclinic'. See:
https://docs.docker.com/compose/networking/#specify-custom-networks
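A minimal sketch of the custom-network variant (the network name petclinic-net is illustrative, not from the original answer):
version: '3.7'
services:
  springcommunity:
    image: springcommunity/spring-petclinic-rest
    networks:
      - petclinic-net
  web:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - springcommunity
    networks:
      - petclinic-net
networks:
  petclinic-net:
With both services on one network, the proxy_pass http://springcommunity:9966/petclinic in the /api/ location can resolve the back end. Keep in mind that a name like springcommunity only resolves inside the Docker network; code running in the user's browser cannot resolve compose service names, so a browser-facing REST_API_URL is safest when it points at the nginx proxy path.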

jwilder/nginx-proxy with Cloudflare SSL doesn't work

I'm having a problem using jwilder/nginx-proxy with Cloudflare SSL (origin key, Full SSL mode).
Everything works fine (over HTTP) until I activate Cloudflare's DNS proxy, at which point the server returns 521 (Web Server Down).
Here's my docker-compose.yaml
version: "2"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
ports:
- 80:80
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./certs:/etc/nginx/certs
network_mode: bridge
saraswati-global:
image: asia.gcr.io/ordent-production/ordent/saraswati-global
ports:
- 3000:3000
environment:
- VIRTUAL_HOST=beta.saraswati.global
- VIRTUAL_PORT=3000
- VIRTUAL_PROTO=https
network_mode: bridge
api-healed-id:
image: asia.gcr.io/ordent-production/ordent/api.healed.id
ports:
- 4001:4001
environment:
- VIRTUAL_HOST=dev.healed.id
- VIRTUAL_PORT=4001
- VIRTUAL_PROTO=https
network_mode: bridge
Maybe you could help me with the configuration. Here's the nginx configuration generated by the above config:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:D
ssl_prefer_server_ciphers off;

resolver 172.26.0.2;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# beta.saraswati.global
upstream beta.saraswati.global {
    ## Can be connected with "bridge" network
    # ordent-production-host_saraswati-global_1
    server 172.17.0.3:3000;
}

server {
    server_name beta.saraswati.global;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass https://beta.saraswati.global;
    }
}

# dev.healed.id
upstream dev.healed.id {
    ## Can be connected with "bridge" network
    # ordent-production-host_api-healed-id_1
    server 172.17.0.4:4001;
}

server {
    server_name dev.healed.id;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass https://dev.healed.id;
    }
}
The issue is caused by how you define the ports of the nginx-proxy service:
ports:
  - 80:80
As you enabled SSL on Cloudflare, the default port will be 443, not 80.
So nginx-proxy needs to listen on port 443, and the correct mapping is:
ports:
  - 443:443
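Since Cloudflare's Full mode connects to the origin over HTTPS, the origin certificates also have to be in place. A sketch of the relevant service under that assumption, with the Cloudflare origin certificates dropped into ./certs and named after each VIRTUAL_HOST, which is the naming convention jwilder/nginx-proxy uses to pick them up:
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - 80:80
    - 443:443
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    # expects e.g. beta.saraswati.global.crt and beta.saraswati.global.key
    - ./certs:/etc/nginx/certs
  network_mode: bridge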

Using Nginx as a proxy for Jenkins using Docker

I'm using Nginx as a proxy for a Jenkins server; both run in Docker containers.
The idea is that Jenkins runs on port 8080, with that port exposed, and Nginx listens on port 80 and forwards traffic to Jenkins on port 8080. Trying to access port 8080 directly should refuse the connection.
Please see docker-compose.yml file:
version: '3.7'
services:
  master:
    build: ./jenkins-master
    networks:
      - jenkins-net
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
  nginx:
    build: ./jenkins-nginx
    ports:
      - "80:80"
    networks:
      - jenkins-net
networks:
  jenkins-net:
volumes:
  jenkins-log:
  jenkins-data:
Jenkins-master Dockerfile:
FROM jenkins/jenkins:alpine
LABEL maintainer=''
USER root
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
USER jenkins
ENV JAVA_OPTS='-Xmx8192m'
ENV JENKINS_OPTS=' --handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war'
This is the nginx.conf file:
server {
listen 80;
server_name localhost;
access_log off;
location / {
proxy_pass http://master:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_max_temp_file_size 0;
proxy_connect_timeout 150;
proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_buffer_size 8k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
This is my jenkins-nginx Dockerfile:
FROM nginx:mainline-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY jenkins.conf /etc/nginx/conf.d/jenkins.conf
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]
jenkins.conf file:
daemon off;
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    accept_mutex off;
}

http {
    include /etc/nginx/mime.types;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request"'
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" ';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 300m;
    client_body_buffer_size 128k;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
    gzip_disable 'MSIE [1-6]\.';
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}
The problem is that both work independently, but as soon as I try to connect them on one network they crash.
Both services throw a "localhost refused to connect" error.
You need to expose port 8080 in your docker-compose:
ports:
  - 8080
  - 50000:50000
This may help in nginx (jenkins.conf):
proxy_redirect http://master:8080/;
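In context, that directive would sit inside the proxied location. A sketch (note that proxy_redirect normally takes two arguments, the upstream URL to match and its replacement, so the form below rewrites Location headers issued as http://master:8080/... back to the public root):
location / {
    proxy_pass http://master:8080;
    # rewrite upstream redirects back to the public URL
    proxy_redirect http://master:8080/ /;
}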
First of all, you publish both master and nginx on port 80, which is more than you need: just publish port 80 on nginx. The other ports on master are not needed unless you want to bind port 50000 to a local address and port.
Containers on the same network can resolve each other's names and reach each other's ports without those ports being published. Keep in mind a container cannot call localhost to reach your host; that would just resolve to the container itself. Use the container names inside the configurations.
UPDATE:
I've set up my configuration like the following; this worked for me.
docker-compose.yaml:
version: '3.7'
services:
  master:
    image: jenkins/jenkins:alpine
    networks:
      - jenkins-net
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/conf.d/default.conf
    networks:
      - jenkins-net
networks:
  jenkins-net:
volumes:
  jenkins-log:
  jenkins-data:
nginx.conf:
server {
    listen 80;
    server_name localhost;
    access_log off;

    location / {
        proxy_pass http://master:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Then I was able to call http://localhost and it worked as expected. I hope it works for you too and you can adapt it to your needs.
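For a quick headless check from the Docker host (an illustrative test, not part of the original answer), curl -I http://localhost should return Jenkins response headers through nginx, typically a 403 or a redirect to /login until the initial setup is finished.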
This is the configuration that worked in my case:
docker-compose.yml:
version: '3.7'
services:
  master:
    build: ./jenkins-master
    networks:
      - jenkins-net
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
  nginx:
    build: ./jenkins-nginx
    ports:
      - "80:80"
    volumes:
      - type: bind
        source: ./jenkins-nginx/nginx.conf
        target: /etc/nginx/conf.d/default.conf
    networks:
      - jenkins-net
networks:
  jenkins-net:
volumes:
  jenkins-log:
  jenkins-data:
Nginx-Dockerfile:
FROM nginx:mainline-alpine
COPY ./jenkins.conf /etc/nginx/conf.d/jenkins.conf
COPY ./nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]
Jenkins-Dockerfile:
FROM jenkins/jenkins:alpine
LABEL maintainer=''
USER root
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
USER jenkins
ENV JAVA_OPTS='-Xmx8192m'
ENV JENKINS_OPTS=' --handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war'
nginx.conf:
server {
    listen 80;
    server_name localhost;
    access_log off;

    location / {
        proxy_pass http://master:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
jenkins.conf:
daemon off;
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    accept_mutex off;
}

http {
    include /etc/nginx/mime.types;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request"'
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" ';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 300m;
    client_body_buffer_size 128k;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
    gzip_disable 'MSIE [1-6]\.';
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}
For me it worked after I swapped the contents of jenkins.conf and nginx.conf. I also used this git repo and it worked fine: https://github.com/lucasp90/jenkins-nginx
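The swap matters because nginx expects the top-level context (daemon/user/events/http) to live at /etc/nginx/nginx.conf, while files pulled in via include /etc/nginx/conf.d/*.conf; may only contain http-level directives such as a server block; in the original layout the two were reversed. A quick way to confirm the layout, assuming the compose setup from this thread:
docker-compose up -d
# parse the configuration inside the running container, where "master" resolves
docker-compose exec nginx nginx -t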

Passing environment variables from Docker to Nginx configuration files not working

I am really stumped and could use help figuring out why my environment variables aren't being substituted from Docker into the nginx config files.
I have a docker-compose.yml
nginx:
  image: nginx
  container_name: proxier
  volumes:
    - ./conf/nginx.conf:/etc/nginx/nginx.conf
    - ./conf/server.nginx.conf.tpl:/etc/nginx/server.nginx.conf.tpl
    - ./build/web:/srv/static:ro
    - ./docker/proxier:/tmp/docker
  ports:
    - "80:80"
    - "443:443"
  environment:
    - HOST_EXTERNAL_IP=localhost
    - DEVSERVER_PORT=8000
    - DEVSERVICE_PORT=5000
  command: /bin/bash -c "env && envsubst '$$HOST_EXTERNAL_IP $$DEVSERVER_PORT $$DEVSERVICE_PORT' < /etc/nginx/server.nginx.conf.tpl > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
I have an nginx.conf file
user nginx;
worker_processes 1;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 100g;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    sendfile off;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_tokens off;

    upstream app {
        server myapp:8000 fail_timeout=0;
    }

    include /etc/nginx/server.nginx.conf.tpl;
}
I have a server.nginx.conf.tpl file
server {
    listen 80;
    listen 443 ssl http2 default_server;
    server_name localhost;
    index index.html;

    location ^~ /services/ {
        proxy_pass https://myurl.com;
        proxy_set_header USER_DN $ssl_client_s_dn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
        proxy_pass http://${HOST_EXTERNAL_IP}:${DEVSERVER_PORT}; # Won't read environment variables here
    }
}
When I run this, however, I get the error:
nginx: [emerg] unknown "host_external_ip" variable
As far as I can tell, I am using envsubst correctly to pass the environment variables from Docker, per the docs.
Do not copy nginx.conf directly. Instead, create a shell script that generates the nginx file, e.g.:
echo 'your nginx conf goes here with $envVariable' > location/to/conf/folder/nginx.conf
and run that script inside the container. When the script runs, it will replace the environment variables you set with their actual values in nginx.conf.
Do not forget to escape the $ of nginx's own variables.
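A sketch of that idea as an entrypoint script, reusing the template and variable names from the question (the script name is illustrative). One further observation from the posted config: nginx.conf also includes the raw template directly (include /etc/nginx/server.nginx.conf.tpl;), so nginx itself parses the unsubstituted ${HOST_EXTERNAL_IP} as one of its own variables, which matches the "unknown variable" error; that include should point at the rendered file instead.
#!/bin/sh
# docker-entrypoint.sh: render the template, then start nginx in the foreground
set -e
# list only the shell variables to substitute so nginx's own $variables survive
envsubst '${HOST_EXTERNAL_IP} ${DEVSERVER_PORT} ${DEVSERVICE_PORT}' \
  < /etc/nginx/server.nginx.conf.tpl \
  > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'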
