I would like to proxy_pass to the related service conditionally, based on an environment variable. That is, the proxy_pass address should change based on the NODE_ENV variable.
What is the best approach for doing this? Can I use an if statement like the one below for proxy_pass? If yes, how should I do it? Apart from this, I tried to create a shell script (below) to pass an environment variable to nginx, but could not manage to set and pass $NGINX_BACKEND_ADDRESS into the nginx conf. Any help would be appreciated.
if ($NODE_ENV = "development") {
proxy_pass http://myservice-dev;
}
nginx.conf
server {
listen 3000;
location / {
root /usr/src/app;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /csrf/token {
proxy_pass ${NGINX_BACKEND_ADDRESS}/csrf/token;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /export/apis {
proxy_pass ${NGINX_BACKEND_ADDRESS}/export/apis;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
nginx-conf.sh
#!/usr/bin/env sh
set -eu
export NODE_ENV=development
if [ "$NODE_ENV" = "development" ]
then
export NGINX_BACKEND_ADDRESS=http://backend-dev
elif [ "$NODE_ENV" = "stage" ]
then
export NGINX_BACKEND_ADDRESS=http://backend-stage
elif [ "$NODE_ENV" = "preprod" ]
then
export NGINX_BACKEND_ADDRESS=http://backend-preprod
elif [ "$NODE_ENV" = "production" ]
then
export NGINX_BACKEND_ADDRESS=http://backend
else
echo "Error reading environment variable in nginx-conf.sh."
fi
echo "Will proxy requests to ${NGINX_BACKEND_ADDRESS}*"
exec "$@"
Dockerfile
FROM nginx:alpine AS production-build
WORKDIR /usr/src/app
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf.template /etc/nginx/conf.d/default.conf.template
COPY nginx-conf.sh /
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx /var/run/nginx.pid && \
chmod -R 775 /var/cache/nginx /var/run /var/log/nginx /var/run/nginx.pid
USER nginx
COPY --from=builder /usr/src/app/dist .
ENTRYPOINT ["/nginx-conf.sh"]
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
The Docker Hub nginx image (as of nginx:1.19) has a facility to do environment-variable replacement in configuration files:
[...] this image has a function, which will extract environment variables before nginx starts. [...] this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.
So your first step is to rename your configuration file as is (including proxy_pass ${NGINX_BACKEND_ADDRESS}/...) to something like default.conf.template and put it in the required directory.
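Concretely, the pieces would look something like this; the location block is just the question's own config carried over, and only the destination directory is prescribed by the image:

```nginx
# nginx/default.conf.template
# Copied into /etc/nginx/templates/ in the image; the stock entrypoint
# runs envsubst over it and writes /etc/nginx/conf.d/default.conf
# before nginx starts.
server {
    listen 3000;
    location /csrf/token {
        proxy_pass ${NGINX_BACKEND_ADDRESS}/csrf/token;
        proxy_http_version 1.1;
    }
}
```

At runtime, setting the variable is all that is left, e.g. `docker run -e NGINX_BACKEND_ADDRESS=http://backend-dev ...`.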
I would directly pass that address in your deploy-time configuration. I would not include it in the image in any way. (Imagine setups like "a developer is trying to run this stack on their local desktop system" where none of the URLs in the entrypoint script are right.) That also lets you get rid of pretty much all the code here; you would just have
# Dockerfile
FROM ... AS builder
...
FROM nginx:1.21-alpine
COPY nginx/nginx.conf.template /etc/nginx/templates/default.conf.template
COPY --from=builder /usr/src/app/dist /usr/share/nginx/html
# Permissions, filesystem layout, _etc._ are fine in the base image
# Use the base image's ENTRYPOINT/CMD
# docker-compose.yml
version: '3.8'
services:
proxy:
build: .
ports: ['8000:80']
environment:
- NGINX_BACKEND_ADDRESS=https://backend-prod.example.com
If you are in fact using Compose, you can use multiple docker-compose.yml files to provide settings for specific environments.
# docker-compose.local.yml
# Run the backend service locally too in development mode
version: '3.8'
services:
backend: # not in docker-compose.yml
build: backend
# and other settings as required
nginx: # overrides docker-compose.yml settings
environment:
- NGINX_BACKEND_ADDRESS=http://backend
# no other settings
docker-compose -f docker-compose.yml -f docker-compose.local.yml up
If you want to run an if statement in your Dockerfile, you can use the RUN command, for example using bash:
RUN if [[ -z "$arg" ]] ; then echo "Argument not provided" ; else echo "Argument is $arg" ; fi
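For that to work the variable also has to be declared with ARG, and note that `[[ ]]` is a bashism that fails in images whose /bin/sh is not bash (e.g. Alpine-based ones). A minimal self-contained sketch, with illustrative names:

```dockerfile
FROM alpine:3.19
# Build-time variable, set with: docker build --build-arg arg=hello .
ARG arg
# POSIX [ ] works in any /bin/sh, unlike bash's [[ ]]
RUN if [ -z "$arg" ] ; then echo "Argument not provided" ; else echo "Argument is $arg" ; fi
```

Keep in mind that RUN branches execute once at build time; a decision that depends on runtime environment variables (like NODE_ENV above) belongs in an entrypoint script instead.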
The way I normally do this is to have a generic nginx proxy and then just pass in the URL and protocol as env vars.
proxy.conf
server {
listen 80 default_server;
resolver 127.0.0.11 valid=1s;
set $protocol $PROXY_PROTOCOL;
set $upstream $PROXY_UPSTREAM;
location / {
proxy_pass $protocol://$upstream$request_uri;
proxy_pass_header Authorization;
proxy_http_version 1.1;
proxy_ssl_server_name on;
proxy_set_header Host $upstream;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Connection "";
proxy_buffering off;
proxy_read_timeout 5s;
proxy_redirect off;
proxy_ssl_verify off;
client_max_body_size 0;
}
}
Dockerfile
FROM nginx:1.13.8
ENV PROXY_PROTOCOL=http PROXY_UPSTREAM=example.com
COPY proxy.conf /etc/nginx/conf.d/default.template
COPY start.sh /
CMD ["/start.sh"]
I then have a start script that will substitute the env vars into my proxy config.
start.sh
#!/usr/bin/env bash
envsubst '$PROXY_PROTOCOL,$PROXY_UPSTREAM' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'
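The explicit variable list passed to envsubst matters: without it, envsubst would also rewrite nginx's own runtime variables such as $request_uri to empty strings. A rough stand-in for what that substitution step does (sed is used here only so the sketch runs without the gettext package that provides envsubst):

```shell
#!/bin/sh
# Two lines from the proxy.conf template above: substitution must replace
# $PROXY_PROTOCOL / $PROXY_UPSTREAM but leave nginx's own $protocol and
# $upstream variables alone -- hence envsubst's explicit variable list.
template='set $protocol $PROXY_PROTOCOL;
set $upstream $PROXY_UPSTREAM;'

PROXY_PROTOCOL=https
PROXY_UPSTREAM=backend.example.com

# Replace only the two deployment variables, by exact name
rendered=$(printf '%s\n' "$template" \
  | sed -e "s|\$PROXY_PROTOCOL|$PROXY_PROTOCOL|" \
        -e "s|\$PROXY_UPSTREAM|$PROXY_UPSTREAM|")

# First line becomes: set $protocol https;
printf '%s\n' "$rendered"
```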
Related
I am working on a migration pipeline where I have added two different Dockerfiles, one each for Celery and Flower. But my running Celery tasks do not appear in Flower even though they run fine, and the Flower UI is not visible either.
This is my Dockerfile for Celery.
FROM python:3.10
WORKDIR /app
COPY requirements ./requirements
RUN pip install --no-cache-dir -r ./requirements/production.txt
COPY . .
ENTRYPOINT celery -A app worker -l INFO
My Dockerfile for Flower, and the nginx file.
FROM mher/flower:0.9.7
EXPOSE 5555
ENTRYPOINT flower --broker=redis://0.0.0.0:6379/1
events {}
http {
server {
listen 80;
# server_name your.server.url;
charset utf-8;
location / {
proxy_pass http://localhost:5555;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
Thanks for any ideas that I can throw at this.
I know there are a bunch of these questions, but I can't find any that answer mine, so here is another one. I am trying to create a simple nginx proxy that just proxies my existing site to localhost. The existing site is jackiergleason.com. I set up my nginx config like this...
server {
listen 443 ssl;
server_name localhost;
ssl_certificate /usr/src/app/host.cert;
ssl_certificate_key /usr/src/app/host.key;
location / {
proxy_set_header Host $host;
proxy_ssl_name $host;
proxy_ssl_server_name on;
proxy_ssl_session_reuse off;
proxy_pass https://jackiergleason.com;
}
}
I use Docker to run it locally like this...
FROM nginx:latest
WORKDIR /usr/src/app
RUN pwd
COPY host.cert /usr/src/app
COPY host.key /usr/src/app
RUN ls -al /usr/src/app
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Then I run these 2 commands...
docker build . -t jrg/proxy:latest
docker run -it --rm -d -p 8080:443 --name web jrg/proxy:latest
But when I try to access https://localhost:8080 and accept the security warning I get...
What am I missing?
I also tried using port 443 but I get the same result
Replacing
proxy_set_header Host $host;
proxy_ssl_name $host;
With
proxy_set_header Host jackiergleason.com;
proxy_ssl_name jackiergleason.com;
Worked
As I deploy my first ever Rails app to a server, I keep getting this error, starting right from the home URL.
The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
My configuration:
Dockerfile
FROM ruby:3.0.0-alpine3.13
RUN apk add --no-cache --no-cache --update alpine-sdk nodejs postgresql-dev yarn tzdata
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 4000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0", "-e", "production", "-p", "4000"]
entrypoint.sh
#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
docker-compose.yml
central:
build:
context: ./central
dockerfile: Dockerfile
command: sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 4000 -e production -b '0.0.0.0'"
volumes:
- ./central:/usr/src/app
env_file:
- ./central/.env.prod
stdin_open: true
tty: true
depends_on:
- centraldb
centraldb:
image: postgres:12.0-alpine
volumes:
- centraldb:/var/lib/postgresql/data/
env_file:
- ./central/.env.prod.db
nginx:
image: nginx:1.19.0-alpine
volumes:
- ./nginx/prod/certbot/www:/var/www/certbot
- ./central/public/:/home/apps/central/public/:ro
ports:
- 80:80
- 443:443
depends_on:
- central
links:
- central
restart: unless-stopped
command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
nginx.conf
upstream theapp {
server central:4000;
}
server {
listen 80;
server_name thedomain.com;
server_tokens off;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name thedomain.com;
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
# hidden ssl config
root /home/apps/central/public;
index index.html;
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
try_files $uri #rails;
access_log off;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
location ~ /\. {
deny all;
}
location ~* ^.+\.(rb|log)$ {
deny all;
}
location / {
try_files $uri #rails;
}
location #rails {
proxy_pass http://theapp;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
client_max_body_size 4G;
keepalive_timeout 10;
}
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /home/apps/central/public;
}
error_log /var/log/nginx/central_error.log;
access_log /var/log/nginx/central_access.log;
}
I checked the nginx log files but they don't show anything.
The app works well locally, and even in production with the ports config (if I go to thedomain.com:4000), but I need to serve it with nginx in production at thedomain.com, so I need a solution for this.
I'm running a docker container and a .NET Core service on CentOS 7. I use the following Dockerfile to generate my container (Angular app):
FROM node:11-alpine as builder
COPY package.json ./
RUN npm i && mkdir /ng-app && cp -R ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN $(npm bin)/ng build --prod --build-optimizer=false
FROM nginx:1.13.3-alpine
RUN rm -f /etc/nginx/conf.d/default.conf
COPY ./docker/nginx.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
I build the image with:
docker build -t myservice .
And run container with:
docker run -d -it --name=msrv --net=host myservice
netstat -plntu run on the server has the following two lines which are of interest here (when container is running):
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1469/nginx: master
tcp6 0 0 :::5000 :::* LISTEN 2936/dotnet
The .NET Core service is accessible via the public address of the server. With the --net=host option I thought that the app running in my container should be able to connect to the service using the address http://localhost:5000/api/webtoken (to get the security token on login), but I get net::ERR_CONNECTION_REFUSED when I try to access it. I can get the token just fine when I use Postman.
Everything seems to be in order, but I can't connect from container to the dotnet service. What am I doing wrong?
In case it might help, here is the nginx.conf file:
server {
listen 80;
set $backend_addr http://localhost:5000;
location /api/ {
proxy_pass $backend_addr;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri$args $uri$args/ $uri $uri/ /index.html =404;
}
}
I am deploying my Next.js / Nginx docker image from Container Registry to Compute Engine
Once deployed, the application is running as expected, but it's running on port 3000 instead of 80, i.e. I want to access it at <ip_address> but I can only access it on <ip_address>:3000. I have set up a reverse proxy in nginx to forward port 3000 to 80, but it does not seem to be working.
When I run docker-compose up locally, the app is accessible on localhost (rather than localhost:3000).
Dockerfile
FROM node:alpine as react-build
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install --global pm2
COPY . /usr/src/app
RUN npm run build
# EXPOSE 3000
EXPOSE 80
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
docker-compose.yml
version: '3'
services:
nextjs:
build: ./
nginx:
build: ./nginx
ports:
- 80:80
./nginx/Dockerfile
FROM nginx:alpine
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy config files
# *.conf files in "conf.d/" dir get included in main config
COPY ./default.conf /etc/nginx/conf.d/
./nginx/default.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;
server {
listen 80;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# BUILT ASSETS (E.G. JS BUNDLES)
# Browser cache - max cache headers from Next.js as build id in url
# Server cache - valid forever (cleared after cache "inactive" period)
location /_next/static {
#proxy_cache STATIC;
proxy_pass http://localhost:3000;
}
# STATIC ASSETS (E.G. IMAGES)
# Browser cache - "no-cache" headers from Next.js as no build id in url
# Server cache - refresh regularly in case of changes
location /static {
#proxy_cache STATIC;
proxy_ignore_headers Cache-Control;
proxy_cache_valid 60m;
proxy_pass http://localhost:3000;
}
# DYNAMIC ASSETS - NO CACHE
location / {
proxy_pass http://localhost:3000;
}
}