I am working on a static website, using Gatsby for development and Nginx for serving the static files.
I am also using Docker for deployment to test and production, and Traefik for routing traffic to the application's Docker container.
I have an environment variable that I reference in the application code; its value comes from a .env file in the root folder of the application.
However, when the application reads that environment variable, it comes back as:
undefined
Here's the code:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
environment:
GATSBY_API_URL: ${GATSBY_API_URL}
expose:
- "80"
labels:
- traefik.enable=true
- traefik.http.routers.my-website.rule=Host(`my-website.com`)
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
index.js
const onSubmit = async (values) => {
try {
const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
// console.log(res, 'verify endpoint');
if( res.data.requestSuccessful === true ) {
dispatchVerifyData({
type : 'UPDATE_VERIFY_DATA',
verifyData: {
res: res.data.responseData,
loanType: values.loanType
}
})
handleNext()
} else {
setIsSuccessful({
status: false,
message: res.data.message
})
}
} catch (error) {
//error state Unsuccessful
console.log(error, 'error')
setIsSuccessful({
status: false,
})
}
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
.env
.env.production
Nginx default.conf
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I still can't tell what causes the application to get undefined whenever the environment variable is read. Any help would be highly appreciated.
I finally figured it out after some long hours of debugging with my colleagues.
Here are a few things I learnt:
Firstly, by default, Gatsby supports 2 environments:
Development. If you run gatsby develop, then you will be in the development environment.
Production. If you run gatsby build or gatsby serve, then you will be in the production environment.
Note, however, that we are running npm run build in our Dockerfile, which is equivalent to gatsby build, so the application is built in the production environment.
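A quick way to check what npm run build actually maps to (a sketch, assuming a standard Gatsby package.json; adjust if your scripts differ):
grep '"build"' package.json
# typically prints:  "build": "gatsby build",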
Secondly, defining Environment Variables for Client-side JavaScript
For Project Env Vars that you want to access in client-side browser JavaScript, you can define an environment config file, .env.development and/or .env.production, in your root folder. Depending on your active environment, the correct one will be found and its values embedded as environment variables in the browser JavaScript.
In other words, we need to rename our environment config file from .env to .env.production so that Gatsby picks it up in our production environment.
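For example, from the project root (a minimal sketch; keep a separate .env.development with development values if you also need the variable under gatsby develop):
mv .env .env.production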
Thirdly, defining Environment Variables using prefixes
In addition to these Project Environment Variables defined in .env.* files, you could also define OS Env Vars. OS Env Vars which are prefixed with GATSBY_ will become available in browser JavaScript.
Note that we are already doing this in our .env config file (GATSBY_API_URL=https://myapi.mywebsite.com), so we have no issues there.
Fourthly, removing the env config files from .dockerignore
If we look closely at how environment variable values are embedded into the client-side JavaScript, we can see that it happens at build time, not at run time.
Therefore, we need to remove the .env.* config files from .dockerignore so they are available inside the image at build time, and we can also drop the environment option from docker-compose.yml, since the values are no longer injected at run time.
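A quick sanity check after the change (a sketch, assuming the image is tagged my-website as in the compose file): rebuild and confirm the API URL was actually baked into the generated bundle.
docker build -t my-website .
docker run --rm my-website grep -r "myapi.mywebsite.com" /usr/share/nginx/html | head -n 1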
So our code will look like this now:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env.production
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
expose:
- "80"
labels:
- traefik.enable=true
- traefik.http.routers.my-website.rule=Host(`my-website.com`)
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
index.js
const onSubmit = async (values) => {
try {
const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
// console.log(res, 'verify endpoint');
if( res.data.requestSuccessful === true ) {
dispatchVerifyData({
type : 'UPDATE_VERIFY_DATA',
verifyData: {
res: res.data.responseData,
loanType: values.loanType
}
})
handleNext()
} else {
setIsSuccessful({
status: false,
message: res.data.message
})
}
} catch (error) {
//error state Unsuccessful
console.log(error, 'error')
setIsSuccessful({
status: false,
})
}
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
Nginx default.conf
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
That's all.
I hope this helps
Related
After testing that my website can successfully be deployed locally with Docker, I'm trying to run a Docker container directly on my GCP virtual instance. Inside my cloudbuild.yaml file is the following:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '--build']
timeout: '1600s'
When I run gcloud builds submit . --config=cloudbuild.yaml --timeout=1h, I get the following error at the end:
ERROR
Creating jkl-api ... done
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "docker/compose:1.26.2" failed: context deadline exceeded
ERROR: (gcloud.builds.submit) build 9712fc75-9b47-43a7-a84d-a208897fe00d completed with status "FAILURE"
Why am I getting this error?
Edit:
As per @Samantha Létourneau's comment, I decided to directly build the images for my project and run them, instead of using docker-compose. I was able to successfully build and push a Docker image to the Container Registry with this cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/lawma-project-356604/lawma-image', '.']
  # Docker Push
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/lawma-project-356604/lawma-image']
But when I try to deploy a container I get the following error:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable.
and this error [1]:
nginx: [emerg] host not found in upstream "lawma-api" in /etc/nginx/conf.d/default.conf:20
Here's my nginx.conf file:
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
error_page 500 502 503 504 /50x.html;
location / {
try_files $uri /index.html;
add_header Cache-Control "no-cache";
}
location /static {
expires 1y;
add_header Cache-Control "public";
}
location /api {
proxy_pass http://lawma-api:8000;
}
}
and my Dockerfile:
#Build step #1: build the React frontend
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY lawmaapp/package.json ./
COPY lawmaapp/public ./public
COPY lawmaapp/src ./src
EXPOSE 80
RUN npm install
RUN npm run build
#Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
Why am I getting error [1]?
My problem is:
I made a simple web page with Next.js. I fetch some JSON content from /pages/api/ endpoints and show it in pages and components. Running it locally (npm run dev or npm run start) works fine, it also works when I run it in Docker on Windows 10, and it works on Heroku and Vercel. The trouble started when I gave the project to the DevOps team: they ran it in Docker and pointed a domain at it, but the API endpoints inexplicably try to reach a URL containing an ID. In Chrome DevTools it looks like this:
http://013cfdde4910:3000/api/en/brands
I get the following errors as console message.
Mixed Content: The page at 'https://beta..com/technologies' was loaded over HTTPS, but requested an insecure resource 'http://013cfdde4910:3000/api/en/menu'. This request has been blocked; the content must be served over HTTPS.
Actually it should be http://localhost:3000/api/en/brands. I don't understand where 013cfdde4910 is coming from, or why.
I checked whether it appears anywhere in the code, and looked at the source of the live project (the one published by the DevOps team) in Chrome DevTools:
_next/static > chunk > pages > _app-eac17e226e00cc01a313.js
Searching there, I found the string 013cfdde4910 in 3 places. How did it end up there at build time?
For example, the minified code continues as follows:
....return(0,u.useEffect)((function(){fetch("".concat("http://013cfdde4910:3000","/api/").concat(t.locale,"/brands")).....
In my local build, and when I run it in Docker on Windows 10 and look there, I see localhost instead. So what is causing the localhost -> 013cfdde4910 conversion, and how can I solve it?
Dockerfile I use:
FROM node:current-alpine as base
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
FROM base AS build
ENV NODE_ENV=production
WORKDIR /build
COPY --from=base /app ./
RUN npm run build
FROM node:current-alpine AS production
ENV NODE_ENV=production
WORKDIR /app
COPY --from=build /build/package*.json ./
COPY --from=build /build/.next ./.next
COPY --from=build /build/public ./public
RUN npm install
EXPOSE 3000
CMD npm run start
I use docker-compose.yml
version: "3"
services:
ui:
container_name: web
restart: always
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- ./:/app
- /app/node_modules
- /app/.next
env_file:
- .env
And this is next.config.js
module.exports = {
webpackDevMiddleware: (config) => {
config.watchOptions = {
poll: 1000,
aggregateTimeout: 300,
};
return config;
},
async headers() {
return [
{
source: "/api/:path*",
headers: [
{ key: "Access-Control-Allow-Credentials", value: "true" },
{ key: "Access-Control-Allow-Origin", value: "*" },
{
key: "Access-Control-Allow-Methods",
value: "GET,OPTIONS,PATCH,DELETE,POST,PUT",
},
{
key: "Access-Control-Allow-Headers",
value:
"X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version",
},
],
},
];
},
reactStrictMode: true,
generateEtags: false,
i18n: {
locales: ["en", "tr"],
defaultLocale: "en",
localeDetection: false,
},
env: {
VERSION: process.env.VERSION,
MODE: process.env.MODE,
APP_NAME: process.env.APP_NAME,
HOST: process.env.HOST,
HOSTNAME: process.env.HOSTNAME,
PORT: process.env.PORT,
GA_TRACKING_ID: process.env.GA_TRACKING_ID,
},
generateBuildId: async () => {
return new Date().toDateString();
},
eslint: {
ignoreDuringBuilds: false,
},
};
Thanks for posting the config; it appears the hostname is being picked up as the Docker container ID. You did not post the actual fetch code, but my guess is that this environment variable is being used:
`HOSTNAME: process.env.HOSTNAME`
Inside a container, HOSTNAME is either whatever was passed in as an environment variable or it defaults to the container ID. You need a way to end up with a real host, e.g. domain:3000/api/brands, which could be handled as part of the Docker deployment, perhaps with an nginx proxy in front.
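You can verify this quickly (a sketch, assuming the container is named web as in the compose file above); inside a container, HOSTNAME defaults to the container ID unless it is overridden:
docker exec web sh -c 'echo $HOSTNAME'
# prints something like 013cfdde4910 unless a hostname/env override is set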
My Problem:
I am using Ubuntu 18.04 and a docker-compose based solution with two Docker images, one for Python/uWSGI and one for my NGINX reverse proxy. No matter what I change, it always seems like uWSGI is unable to serve my default application: whenever I run docker-compose up and navigate to localhost:5000, I get the default "Welcome to nginx!" splash page.
The complete program appears to work on our CentOS 7 machines. However, when I try to execute it on my Ubuntu test machine, I can only get the "Welcome to NGINX!" page.
Directory Structure:
/app
- app.conf
- app.ini
- app.py
- docker-compose.yml
- Dockerfile-flask
- Dockerfile-nginx
- requirements.txt
/templates
(All code snippets have been simplified to help isolate the problem)
Here is an example of my docker traceback:
clocker_flask_1
[uWSGI] getting INI configuration from app.ini
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
uwsgi socket 0 bound to TCP address 0.0.0.0:5000 fd 3
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x558072010e70 pid: 1 (default app)
clocker_nginx_1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Here is my docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  flask:
    image: webapp-flask
    build:
      context: .
      dockerfile: Dockerfile-flask
    volumes:
      - "./:/app:z"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
  nginx:
    image: webapp-nginx
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - 5000:80
    depends_on:
      - flask
Dockerfile-flask:
FROM python:3
ENV APP /app
RUN mkdir $APP
WORKDIR $APP
EXPOSE 5000
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD [ "uwsgi", "--ini", "app.ini" ]
Dockerfile-nginx
FROM nginx:latest
EXPOSE 80
COPY app.conf /etc/nginx/conf.d
app.conf
server {
listen 80;
root /usr/share/nginx/html;
location / { try_files $uri @app; }
location @app {
include uwsgi_params;
uwsgi_pass flask:5000;
}
}
app.py
from flask import Flask, render_template

# app.ini (below) expects module "app" exposing a callable named "application"
application = Flask(__name__)

# Home bit
@application.route('/')
@application.route('/home', methods=["GET", "POST"])
def home():
return render_template(
'index.html',
er = er
)
if __name__ == "__main__":
application.run(host='0.0.0.0')
app.ini
[uwsgi]
protocol = uwsgi
module = app
callable = application
master = true
processes = 2
threads = 2
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
max-requests = 1000
The nginx image comes with a main configuration file, /etc/nginx/nginx.conf, which loads every conf file in the conf.d folder -- including your nemesis in this case, a stock /etc/nginx/conf.d/default.conf. It reads as follows (trimmed a bit for concision):
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}
So your app.conf and this configuration are both active. The reason the default one wins, though, is the server_name directive it has (and yours lacks): when you hit localhost:5000, nginx matches the request's Host header (localhost) against server_name and sends your request to that stock server block.
To fix this easily, you can just remove that file in your Dockerfile-nginx:
RUN rm /etc/nginx/conf.d/default.conf
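If you want to confirm which server blocks nginx actually loaded, you can dump the effective configuration from the running container (a sketch, assuming the compose service is named nginx):
docker-compose exec nginx nginx -T | grep -A 2 server_name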
I have a functional fullstack application running through docker-compose. Works like a charm. Only problem is that the team has to rebuild the entire application to reflect changes. That means bringing the entire thing down with docker-compose down.
I'm looking for help updating the file(s) below to allow either hot reloads OR simply letting a browser refresh pick up UI changes.
NOTES:
I have "dev" and "prod" npm scripts. Both currently behave as if they were prod (they produce a static build folder and point Nginx at it)
Any help would be greatly appreciated :)
package.json
{
"name": "politicore",
"version": "1.0.1",
"description": "Redacted",
"repository": "Redacted",
"author": "Redacted",
"license": "LicenseRef-LICENSE.MD",
"private": true,
"engines": {
"node": "10.16.3",
"yarn": "YARN NO LONGER USED - use npm instead."
},
"scripts": {
"dev": "docker-compose up",
"dev-force": "docker-compose up --build --force-recreate",
"dev-force-d": "docker-compose up --build --force-recreate -d",
"prod-up": "docker-compose -f docker-compose-prod.yml up",
"prod-up-force": "docker-compose -f docker-compose-prod.yml up --build --force-recreate",
"prod-up-force-d": "docker-compose -f docker-compose-prod.yml up --build --force-recreate -d",
"dev-down": "docker-compose down",
"dev-down-remove": "docker-compose down --remove-orphans",
"prod-down": "docker-compose down",
"prod-down-remove": "docker-compose down --remove-orphans"
}
}
nginx dev config file
server {
listen 80;
listen 443;
server_name MyUrl.com www.MyUrl.com;
server_tokens off;
proxy_hide_header X-Powered-By;
proxy_hide_header Server;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Permitted-Cross-Domain-Policies master-only;
add_header Referrer-Policy same-origin;
add_header Expect-CT 'max-age=60';
add_header Feature-Policy "accelerometer none; ambient-light-sensor none; battery none; camera none; gyroscope none;";
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /graphql {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_pass http://api:5000;
proxy_redirect default;
}
}
docker-compose dev file
version: '3.6'
services:
  api:
    build:
      context: ./services/api
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/api:/usr/src/app'
      - '/usr/src/app/node_modules'
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    env_file:
      - common/.env
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    restart: always
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 80:80
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - api
    stdin_open: true
Client Service dockerfile
FROM node:10 as builder
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
COPY nginx/dev.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
API dockerfile (dev & prod)
FROM node:10
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
CMD ["npm", "start"]
Filetree Picture
As I understand it, your nginx file defines 2 areas to serve: location / and location /graphql.
The first (location /) is serving up static files from /usr/share/nginx/html inside the container. Those files are created during your docker build. Since those are produced in a multi-stage docker build, you will need to change your strategy up. Here are several options that may help guide you.
Option 1
One option is to build local and mount a volume.
Perform npm run build on your box (perhaps even with a filewatcher to perform builds any time *.js files change)
Add - ./build:/usr/share/nginx/html to list of volumes for client service
The trade-off here is that you have to forego a fully dockerized build (if that's something that matters heavily to you and your team).
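A minimal sketch of Option 1 (assuming the client build lands in services/client/build and the volume addition mentioned above):
cd services/client
npm run build                      # regenerate the static files on the host
# with  - ./services/client/build:/usr/share/nginx/html  added to the client service's
# volumes, a browser refresh picks up the new files without rebuilding the image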
Option 2
Utilize a hot-reloading node server for local development and build a docker image for production environments. It's hard to tell from the files whether the client is react, angular, vuejs, etc., but these frameworks typically have a standard pattern for running a local dev server.
The trade-off here is that you run locally differently than running in production.
Option 3
Combine nginx and nodejs into one docker image with hot reloading inside.
Build a local docker image that contains nodejs and nginx
(You already have a volume mount into client of your app src files)
Set up the image to run npm run build inside the container every time a file changes in that mounted volume
The trade-off here is that you may have more than 1 process running in a docker container (a big no-no).
Option 4
A variation of option 3 where you run 2 docker containers.
Declare a top-level volume client_build
volumes:
- client_build:
Create a docker service in docker-compose with 2 volumes
- ./services/client:/usr/src/app
- client_build:/usr/src/app/build
Add the build volume to your client service: - client_build:/usr/share/nginx/html
Make sure nginx hot-reloads when that dir changes
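Note that nginx reads static files from disk on each request, so new build output in the shared volume shows up on the next browser refresh; an explicit reload is only needed when the nginx configuration itself changes (a sketch, assuming the nginx container runs as the client service):
docker-compose exec client nginx -s reload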
I am trying to start an NGINX server within a docker container configured through docker-compose. The catch is, however, that I would like to substitute an environment variable inside of the http section, specifically within the "upstream" block.
It would be awesome to have this working, because I have several other containers that are all configured through environment variables, and I have about 5 environments that need to be running at any given time. I have tried using "envsubst" (as suggested by the official NGINX docs), perl_set, and set_by_lua, however none of them appear to be working.
Below is the NGINX config, as it is after my most recent trial
user nginx;
worker_processes 1;
env NGINXPROXY;
load_module modules/ngx_http_perl_module.so;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
perl_set $nginxproxy 'sub { return $ENV{"NGINXPROXY"}; }';
upstream api-upstream {
server ${nginxproxy};
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Below is the NGINX dockerfile
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Below is the section of the docker-compose.yml for the NGINX server (with names and IPs changed). The envsubst command is intentionally commented out at this point in my troubleshooting.
front-end:
  environment:
    - NGINXPROXY=172.31.67.100:9300
  build: http://gitaccount:password@gitserver.com/group/front-end.git#develop
  container_name: qa_front_end
  image: qa-front-end
  restart: always
  networks:
    qa_network:
      ipv4_address: 172.28.0.215
  ports:
    - "9080:80"
  # command: /bin/bash -c "envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
What appears to be happening is when I reference the $nginxproxy variable in the upstream block (right after "server"), I get output that makes it look like it's referencing the string literal "$nginxproxy" rather than substituting the value of the variable.
qa3_front_end | 2019/06/18 12:35:36 [emerg] 1#1: host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end | nginx: [emerg] host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end exited with code 1
When I attempt to use envsubst, I get an error that makes it sound like the command messed with the format of the nginx.conf file
qa3_front_end | 2019/06/18 12:49:02 [emerg] 1#1: no "events" section in configuration
qa3_front_end | nginx: [emerg] no "events" section in configuration
qa3_front_end exited with code 1
I'm pretty stuck, so thanks in advance for your help.
Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
server ${API_HOST};
}
# file: docker-compose.yml
services:
  nginx:
    image: nginx:1.19-alpine
    volumes:
      - "./docker/nginx/templates:/etc/nginx/templates/"
    environment:
      NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
      API_HOST: api.example.com
I'm going off script a little from the example in the documentation. Note the extra .conf extension on the template file - this is not a typo. In the docs for the nginx image it is suggested to name the file, for example, default.conf.template. Upon startup, a script will take that file, substitute the environment variables, and then output the file to /etc/nginx/conf.d/ with the original file name, dropping the .template suffix.
By default that suffix is .template, but this breaks syntax highlighting unless you configure your editor. Instead, I specified .conf as the template suffix. If you only name your file default.conf the result will be a file named /etc/nginx/conf.d/default and your site won't be served as expected.
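To confirm the substitution worked, you can inspect the rendered file inside the running container (a sketch, assuming the service is named nginx as above):
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf
# the ${API_HOST} placeholder should now read api.example.com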
You can avoid some of the hassles with Compose interpreting environment variables by defining your own entrypoint. See this simple example:
entrypoint.sh (make sure this file is executable)
#!/bin/sh
export NGINXPROXY
envsubst '${NGINXPROXY}' < /config.template > /etc/nginx/nginx.conf
exec "$#"
docker-compose.yml
version: "3.7"
services:
front-end:
image: nginx
environment:
- NGINXPROXY=172.31.67.100:9300
ports:
- 80:80
volumes:
- ./config:/config.template
- ./entrypoint.sh:/entrypoint.sh
entrypoint: ["/entrypoint.sh"]
command: ["nginx", "-g", "daemon off;"]
My config file has the same content as your nginx.conf, aside from the fact that I had to comment out the lines using the Perl module.
Note that I mount my config file to a different location and write the substituted output to /etc/nginx/nginx.conf. If you read from and redirect back into the same file, the shell truncates the file before envsubst ever reads it, so you end up with an empty config; writing the output to a different path avoids that.
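The effect is easy to reproduce outside Docker (a minimal sketch):
echo 'upstream ${NGINXPROXY};' > demo.conf
envsubst < demo.conf > demo.out     # fine: the output goes to a different file
envsubst < demo.conf > demo.conf    # demo.conf is now empty; '>' truncated it before envsubst read it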
EDIT
For completeness: to change your setup as little as possible, you just have to make sure that you export your environment variable and avoid redirecting envsubst's output back into the very file it reads from (that truncation is what produces an empty config and the no "events" section error you saw). Adapt your command like this:
command: ["/bin/bash", "-c", "export NGINXPROXY && envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /tmp/nginx.conf && cp /tmp/nginx.conf /etc/nginx/nginx.conf && nginx -g 'daemon off;'"]
...and you should be good to go. I would always recommend the "cleaner" way with defining your own entrypoint, though.
So after some wrestling with this issue, I managed to get it working similarly to the answer provided by bellackn. I am going to post my exact solution here, in case anybody else needs to reference a complete solution.
Step 1: Write your nginx.conf or default.conf the way you would normally write it. Save the file as "nginx.conf.template" or "default.conf.template", depending on which one you are substituting variables into.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
upstream api-upstream {
server 192.168.25.254;
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Step 2: Substitute a variable in the format ${VARNAME} for whatever value(s) you want to replace with an environment variable:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
upstream api-upstream {
server ${SERVER_NAME};
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Step 3: In your Dockerfile, copy your nginx configuration files (your nginx.conf.template or default.conf.template) into the container at the appropriate location:
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
# Changed lines: copy the nginx config and the template into the image
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf.template /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Step 4: Set your environment variable in your docker-compose.yml file using the "environment" section. Make sure the environment variable name matches whatever variable name you chose within your nginx config file. Use the "envsubst" command within your docker container to substitute the variable values into your nginx.conf.template and write the output to a file named nginx.conf in the correct location. This can be done within the docker-compose.yml file by using the "command" section:
version: '2.0'
services:
  front-end:
    environment:
      - SERVER_NAME=172.31.67.100:9100
    build: http://git-account:git-password@git-server.com/project-group/repository-name.git#branch-name
    container_name: qa_front_end
    image: qa-front-end-vue
    restart: always
    networks:
      qa_network:
        ipv4_address: 172.28.0.215
    ports:
      - "9080:80"
    command: >
      /bin/sh -c
      "envsubst '
      $${SERVER_NAME}
      '< /etc/nginx/nginx.conf.template
      > /etc/nginx/nginx.conf
      && nginx -g 'daemon off;'"
Step 5: Run your stack with docker-compose up (with whatever additional switches you need) and your nginx server should now start with whatever value you supplied in the "environment" section of your docker-compose.yml
As mentioned in the solution above, you can also define your own entrypoint; however, this solution has also proven to work pretty well, and it keeps everything contained in a single configuration file, giving me the ability to run a stack of services directly from git with nothing but a docker-compose.yml file.
A big thank you to everybody who took the time to read through this, and to bellackn for taking the time to help me solve the issue.
As already explained in Jody's answer, the official Nginx Docker image nowadays supports parsing templates. This uses envsubst, and its handling makes sure not to mess with Nginx variables such as $host. Nice. However, envsubst does not support default values the way a regular shell and Docker Compose do with ${MY_VAR:-My Default}, so this built-in templating always needs the full set of variables to be defined, even when you only want the defaults.
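The difference is easy to see on the command line (a minimal sketch):
unset API_HOST
echo "shell:    ${API_HOST:-http://host.docker.internal:8000/api/}"   # the shell falls back to the default
echo 'server ${API_HOST};' | envsubst                                 # envsubst just leaves it empty: "server ;"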
To define defaults in the image itself, one can use a custom entry point to first set the defaults and then simply delegate to the original entrypoint. Like a docker-defaults.sh:
#!/usr/bin/env sh
set -eu
# As of version 1.19, the official Nginx Docker image supports templates with
# variable substitution. But that uses `envsubst`, which does not allow for
# defaults for missing variables. Here, first use the regular command shell
# to set the defaults:
export PROXY_API_DEST=${PROXY_API_DEST:-http://host.docker.internal:8000/api/}
# Due to `set -u` this would fail if not defined and no default was set above
echo "Will proxy requests for /api/* to ${PROXY_API_DEST}*"
# Finally, let the original Nginx entry point do its work, passing whatever is
# set for CMD. Use `exec` to replace the current process, to trap any signals
# (like Ctrl+C) that Docker may send it:
exec /docker-entrypoint.sh "$@"
Along with, say, some docker-nginx-default.conf:
# After variable substitution, this will replace /etc/nginx/conf.d/default.conf
server {
listen 80;
listen [::]:80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /api/ {
# Proxy API calls to another destination; the default for the variable is
# set in docker-defaults.sh
proxy_pass $PROXY_API_DEST;
}
}
In the Dockerfile copy the template into /etc/nginx/templates/default.conf.template and set the custom entry point:
FROM nginx:stable-alpine
...
# Each time Nginx is started it will perform variable substition in all template
# files found in `/etc/nginx/templates/*.template`, and copy the results (without
# the `.template` suffix) into `/etc/nginx/conf.d/`. Below, this will replace the
# original `/etc/nginx/conf.d/default.conf`; see https://hub.docker.com/_/nginx
COPY docker-nginx-default.conf /etc/nginx/templates/default.conf.template
COPY docker-defaults.sh /
# Just in case the file mode was not properly set in Git
RUN chmod +x /docker-defaults.sh
# This will delegate to the original Nginx `docker-entrypoint.sh`
ENTRYPOINT ["/docker-defaults.sh"]
# The default parameters to ENTRYPOINT (unless overruled on the command line)
CMD ["nginx", "-g", "daemon off;"]
Now using, e.g., docker run --env PROXY_API_DEST=https://example.com/api/ ... will set a value, which in this example will default to http://host.docker.internal:8000/api/ if not set (which is actually http://localhost:8000/api/ on the local machine).
According to official documentation https://hub.docker.com/_/nginx
section "Using environment variables in nginx configuration (new in 1.19)"
you can use environment variables.
But it does not work, due to a bug in a script inside the Docker container:
https://github.com/nginxinc/docker-nginx/blob/master/entrypoint/20-envsubst-on-templates.sh#L25
Running this script always fails with this error:
/docker-entrypoint.d/20-envsubst-on-templates.sh: line 25: 3: Bad file descriptor
I created issue https://github.com/nginxinc/docker-nginx/issues/645
and pull request https://github.com/nginxinc/docker-nginx/pull/646
As a workaround, for now I copied this script and changed it locally.
You could switch to a more advanced nginx Docker image. For example nginx4docker: it implements a bunch of basic env variables that can be set through Docker, and you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker could also be extended with your custom env variables; just mount a file that lists all your env variables into the container, e.g. ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
My solution is copying an entrypoint sh file into the /docker-entrypoint.d directory of the nginx container. As mentioned above, you need to copy a .template file, but you don't need to create two separate files.
Copy the config file under a temporary name in the Dockerfile. It's important not to use the ENTRYPOINT command in the Dockerfile:
FROM nginx
...
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
Create an sh file named 05-docker-entrypoint.sh in your project directory (on the host) and put the following code into it, as mentioned above:
#!/usr/bin/env sh
set -eu
envsubst '${MY_VARIABLE}' < /etc/nginx/conf.d/default.conf.temp > /etc/nginx/conf.d/default.conf
exec "$#"
Mount 05-docker-entrypoint.sh into the /docker-entrypoint.d directory of the nginx container using the docker-compose.yml file, or copy it in using the Dockerfile. These two options look like this:
Option 1 (I prefer this): mounting the file using the compose file:
web:
  expose:
    - 80
  environment:
    - MY_VARIABLE=blabala
  volumes:
    - ./05-docker-entrypoint.sh:/docker-entrypoint.d/05-docker-entrypoint.sh
  ....
Option 2: use the Dockerfile to copy the files into the container.
The final Dockerfile with Option 2 looks like this:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
COPY ./05-docker-entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh