I have a functional fullstack application running through docker-compose. Works like a charm. Only problem is that the team has to rebuild the entire application to reflect changes. That means bringing the entire thing down with docker-compose down.
I'm looking for help updating the file(s) below to allow either hot reloads OR simply enable browser refreshes to pick up UI changes.
NOTES:
I have "dev" and "prod" npm scripts. Both behave as they were prod (currently produce a static build folder and point to it)
Any help would be greatly appreciated :)
package.json
{
"name": "politicore",
"version": "1.0.1",
"description": "Redacted",
"repository": "Redacted",
"author": "Redacted",
"license": "LicenseRef-LICENSE.MD",
"private": true,
"engines": {
"node": "10.16.3",
"yarn": "YARN NO LONGER USED - use npm instead."
},
"scripts": {
"dev": "docker-compose up",
"dev-force": "docker-compose up --build --force-recreate",
"dev-force-d": "docker-compose up --build --force-recreate -d",
"prod-up": "docker-compose -f docker-compose-prod.yml up",
"prod-up-force": "docker-compose -f docker-compose-prod.yml up --build --force-recreate",
"prod-up-force-d": "docker-compose -f docker-compose-prod.yml up --build --force-recreate -d",
"dev-down": "docker-compose down",
"dev-down-remove": "docker-compose down --remove-orphans",
"prod-down": "docker-compose down",
"prod-down-remove": "docker-compose down --remove-orphans"
}
}
nginx dev config file
server {
listen 80;
listen 443;
server_name MyUrl.com www.MyUrl.com;
server_tokens off;
proxy_hide_header X-Powered-By;
proxy_hide_header Server;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Permitted-Cross-Domain-Policies master-only;
add_header Referrer-Policy same-origin;
add_header Expect-CT 'max-age=60';
add_header Feature-Policy "accelerometer none; ambient-light-sensor none; battery none; camera none; gyroscope none;";
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /graphql {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_pass http://api:5000;
proxy_redirect default;
}
}
docker-compose dev file
version: '3.6'
services:
api:
build:
context: ./services/api
dockerfile: Dockerfile-dev
restart: always
volumes:
- './services/api:/usr/src/app'
- '/usr/src/app/node_modules'
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
env_file:
- common/.env
client:
build:
context: ./services/client
dockerfile: Dockerfile-dev
restart: always
volumes:
- './services/client:/usr/src/app'
- '/usr/src/app/node_modules'
ports:
- 80:80
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
depends_on:
- api
stdin_open: true
Client Service dockerfile
FROM node:10 as builder
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
COPY nginx/dev.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
API dockerfile (dev & prod)
FROM node:10
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
CMD ["npm", "start"]
(file tree picture omitted)
As I understand it, your nginx file defines 2 areas to serve: location / and location /graphql.
The first (location /) serves static files from /usr/share/nginx/html inside the container. Those files are created during your docker build. Since they are produced in a multi-stage docker build, you will need to change up your strategy. Here are several options that may help guide you.
Option 1
One option is to build local and mount a volume.
Perform npm run build on your box (perhaps even with a file watcher to rebuild any time *.js files change)
Add - ./build:/usr/share/nginx/html to the list of volumes for the client service
The trade-off here is that you have to forego a fully dockerized build (if that's something that matters heavily to you and your team).
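A minimal sketch of what that could look like in the dev compose file (the ./services/client/build path is an assumption; point it at wherever your build lands):
client:
  build:
    context: ./services/client
    dockerfile: Dockerfile-dev
  volumes:
    # the output of `npm run build` on the host shadows the files baked into the image
    - './services/client/build:/usr/share/nginx/html'
  ports:
    - 80:80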
Option 2
Utilize a hot-reloading node server for local development and build a docker image for production environments. It's hard to tell from the files whether the client is React, Angular, Vue, etc., but typically they all have a pattern for running local dev servers.
The trade-off here is that you run locally differently than running in production.
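For example, if the client is a create-react-app style project, a sketch of a dev-only Dockerfile-dev could be (assuming npm start launches a watching dev server on port 3000):
FROM node:10
WORKDIR /usr/src/app
COPY package.json /usr/src/app/package.json
RUN npm install
# the source is volume-mounted by docker-compose; the dev server watches it
CMD ["npm", "start"]
Your dev compose file already mounts ./services/client and sets CHOKIDAR_USEPOLLING=true, which is exactly what such a dev server needs; you would just map the dev server's port instead of nginx's, e.g. ports: - 80:3000.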
Option 3
Combine nginx and nodejs into one docker image with hot reloading inside.
Build a local docker image that contains nodejs and nginx
(You already have a volume mount into client of your app src files)
Set up the image to run npm run build inside the container every time a file changes in that mounted volume
The trade-off here is that you may have more than 1 process running in a docker container (a big no-no).
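A rough sketch of such an image, assuming chokidar-cli (an npm file-watcher CLI) on a Debian-based node image; the watch pattern and the copy step are assumptions:
FROM node:10
RUN apt-get update && apt-get install -y nginx
WORKDIR /usr/src/app
COPY package.json .
RUN npm install && npm install chokidar-cli
COPY nginx/dev.conf /etc/nginx/conf.d/default.conf
# nginx daemonizes into the background; the watcher rebuilds into its web root on every change
CMD sh -c "mkdir -p /usr/share/nginx/html && nginx && npx chokidar 'src/**/*' -c 'npm run build && cp -r build/. /usr/share/nginx/html/'"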
Option 4
A variation of option 3 where you run 2 docker containers.
Declare a top-level volume client_build
volumes:
  client_build:
Create a docker service in docker-compose with 2 volumes
- ./services/client:/usr/src/app
- client_build:/usr/src/app/build
Add the build volume to your client service: - client_build:/usr/share/nginx/html
Confirm nginx picks up changes in that dir (for static content a browser refresh is enough, since nginx reads the files from disk on each request); a compose sketch follows below
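Putting option 4 together, a hedged compose sketch (the builder service, its watcher command, and chokidar-cli are assumptions, not your exact setup):
volumes:
  client_build:
services:
  client-builder:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - 'client_build:/usr/src/app/build'
    # rerun the production build whenever the source changes
    command: npx chokidar 'src/**/*' -c 'npm run build'
  client:
    image: nginx:alpine
    volumes:
      - 'client_build:/usr/share/nginx/html'
      - './services/client/nginx/dev.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - 80:80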
Related
After testing that my website can successfully be deployed locally with docker, I'm trying to run a docker container directly on my GCP virtual instance. Inside my cloudbuild.yaml file is the following:
steps:
# running docker-compose
- name: 'docker/compose:1.26.2'
args: ['up', '--build']
timeout: '1600s'
When running gcloud builds submit . --config=cloudbuild.yaml --timeout=1h, I get the following error at the end of it:
ERROR
Creating jkl-api ... done
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "docker/compose:1.26.2" failed: context deadline exceeded
ERROR: (gcloud.builds.submit) build 9712fc75-9b47-43a7-a84d-a208897fe00d completed with status "FAILURE"
Why am I getting this error?
Edit:
As per @Samantha Létourneau's comment, I decided I want to instead directly build the images for my project and then run them, instead of using docker-compose. I was able to successfully build and push a docker image to the Container Registry with this cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/lawma-project-356604/lawma-image', '.']
# Docker Push
- name: 'gcr.io/cloud-builders/docker'
args: ['push',
'gcr.io/lawma-project-356604/lawma-image']
But when I try to deploy a container I get the following error:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable.
and this error [1]:
nginx: [emerg] host not found in upstream "lawma-api" in /etc/nginx/conf.d/default.conf:20
heres my nginx.conf file:
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
error_page 500 502 503 504 /50x.html;
location / {
try_files $uri /index.html;
add_header Cache-Control "no-cache";
}
location /static {
expires 1y;
add_header Cache-Control "public";
}
location /api {
proxy_pass http://lawma-api:8000;
}
}
and my Dockerfile:
#Build step #1: build the React frontend
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY lawmaapp/package.json ./
COPY lawmaapp/public ./public
COPY lawmaapp/src ./src
EXPOSE 80
RUN npm install
RUN npm run build
# Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
Why am I getting error [1]?
I am working on a static website using Gatsby for the development and Nginx for serving the static files.
I am also using Docker for the deployment to test and production and Traefik for routing traffic to the docker container of the application.
I have an environment variable that is defined in a .env file in the root folder of the application and read by the application code.
However, when that environment variable is used in the application, its value comes back as:
undefined
Here's the code:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
environment:
GATSBY_API_URL: ${GATSBY_API_URL}
expose:
- "80"
labels:
- traefik.enable=true
- traefik.http.routers.my-website.rule=Host(`my-website.com`)
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
index.js
const onSubmit = async (values) => {
try {
const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
// console.log(res, 'verify endpoint');
if( res.data.requestSuccessful === true ) {
dispatchVerifyData({
type : 'UPDATE_VERIFY_DATA',
verifyData: {
res: res.data.responseData,
loanType: values.loanType
}
})
handleNext()
} else {
setIsSuccessful({
status: false,
message: res.data.message
})
}
} catch (error) {
//error state Unsuccessful
console.log(error, 'error')
setIsSuccessful({
status: false,
})
}
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
.env
.env.production
Nginx default.conf
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I still can't seem to tell what causes the application to return undefined whenever the environment variable is used. Any form of help will be highly appreciated.
I finally figured it out after some long hours of debugging with my colleagues.
Here are a few things I learnt:
Firstly, by default, Gatsby supports 2 environments:
Development. If you run gatsby develop, then you will be in the development environment.
Production. If you run gatsby build or gatsby serve, then you will be in the production environment.
If you note, however, we are running npm run build in our Dockerfile, which is equivalent to gatsby build, so this informs the application that we are running in the production environment.
Secondly, defining Environment Variables for Client-side JavaScript
For Project Env Vars that you want to access in client-side browser JavaScript, you can define an environment config file, .env.development and/or .env.production, in your root folder. Depending on your active environment, the correct one will be found and its values embedded as environment variables in the browser JavaScript.
In other words, we need to rename our environment config file from .env to .env.production so that the Gatsby application recognizes it in our production environment.
Thirdly, defining Environment Variables using prefixes
In addition to these Project Environment Variables defined in .env.* files, you could also define OS Env Vars. OS Env Vars which are prefixed with GATSBY_ will become available in browser JavaScript.
Note that we are already doing this in our .env config file - GATSBY_API_URL=https://myapi.mywebsite.com - so we have no issues with that.
Fourthly, removing the env config files from .dockerignore
If we look closely at how the values of environment variables get embedded in the browser JavaScript for client-side code, we see that it happens at build time, not at run time.
Therefore, we need to remove the .env.* config files from .dockerignore, and also remove the environment option in the docker-compose.yml file, since it is no longer necessary: the values of the environment variables are not embedded at run time.
So our code will look like this now:
Dockerfile
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
.env.production
GATSBY_API_URL=https://myapi.mywebsite.com
docker-compose.yml
version: "3"
services:
web:
image: my-website
build:
context: .
dockerfile: Dockerfile
expose:
- "80"
labels:
- traefik.enable=true
- traefik.http.routers.my-website.rule=Host(`my-website.com`)
restart: always
volumes:
- .:/app
networks:
default:
external:
name: traefik-proxy
index.js
const onSubmit = async (values) => {
try {
const res = await axios.post(`${process.env.GATSBY_API_URL}/api/EmployeeDetail/verify`, values)
// console.log(res, 'verify endpoint');
if( res.data.requestSuccessful === true ) {
dispatchVerifyData({
type : 'UPDATE_VERIFY_DATA',
verifyData: {
res: res.data.responseData,
loanType: values.loanType
}
})
handleNext()
} else {
setIsSuccessful({
status: false,
message: res.data.message
})
}
} catch (error) {
//error state Unsuccessful
console.log(error, 'error')
setIsSuccessful({
status: false,
})
}
}
.dockerignore
node_modules
npm-debug.log
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
yarn-error.log
coverage/
Nginx default.conf
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
That's all.
I hope this helps
I have a set of docker containers that are generated from yaml files. These containers work ok - I can access localhost, ping http://web from the nginx container, and list the port mapping (see snippet1)
I now want to deploy them to another machine, so I used docker commit, save, load, and run to create images, copy them over, and deploy new containers (see snippet2).
But after I deploy the containers, they don't run properly (I cannot access localhost, cannot ping http://web from the nginx container, and port mapping is empty - see snippet3)
The .yml file is in snippet4
and the nginx .conf files are in snippet5
What can be the problem?
Thanks,
Avner
EDIT:
From the responses below, I understand that instead of using "docker commit", I should build the container on the remote host using one of 2 options:
option1 - copy the code to the remote host and build from source using a modified docker-compose
option2 - create an image on the local machine, push it to a docker registry, then pull it from there, using a modified docker-compose
I'm trying to follow option1 (as a start), but still have problems.
I filed a new post here that describes the problem
END EDIT:
snippet1 - original containers work ok
# the original containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26ba325e737d webserver_nginx "nginx -g 'daemon of…" 3 hours ago Up 43 minutes 0.0.0.0:80->80/tcp, 443/tcp webserver_nginx_1
08ef8a443658 webserver_web "flask run --host=0.…" 3 hours ago Up 43 minutes 0.0.0.0:8000->8000/tcp webserver_web_1
33c13a308139 webserver_postgres "docker-entrypoint.s…" 3 hours ago Up 43 minutes 0.0.0.0:5432->5432/tcp webserver_postgres_1
# can access localhost
curl http://localhost:80
<!DOCTYPE html>
...
# can ping web container from the nginx container
docker exec -it webserver_nginx_1 bash
root@26ba325e737d:/# ping web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.138 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.123 ms
...
# List port mappings for the container
docker port webserver_nginx_1
80/tcp -> 0.0.0.0:80
snippet2 - deploy the containers (currently still using the deployed containers on localhost)
# create new docker images from the containers
docker commit webserver_nginx_1 webserver_nginx_image2
docker commit webserver_postgres_1 webserver_postgres_image2
docker commit webserver_web_1 webserver_web_image2
# save the docker images into .tar files
docker save webserver_nginx_image2 > /tmp/webserver_nginx_image2.tar
docker save webserver_postgres_image2 > /tmp/webserver_postgres_image2.tar
docker save webserver_web_image2 > /tmp/webserver_web_image2.tar
# load the docker images from tar files
cat /tmp/webserver_nginx_image2.tar | docker load
cat /tmp/webserver_postgres_image2.tar | docker load
cat /tmp/webserver_web_image2.tar | docker load
# Create containers from the new images
docker run -d --name webserver_web_2 webserver_web_image2
docker run -d --name webserver_postgres_2 webserver_postgres_image2
docker run -d --name webserver_nginx_2 webserver_nginx_image2
# stop the original containers and start the deployed containers
docker stop webserver_web_1 webserver_nginx_1 webserver_postgres_1
docker stop webserver_web_2 webserver_nginx_2 webserver_postgres_2
docker start webserver_web_2 webserver_nginx_2 webserver_postgres_2
snippet3 - deployed containers don't work
# the deployed containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ef8bfc0ceb webserver_nginx_image2 "nginx -g 'daemon of…" 3 hours ago Up 4 seconds 80/tcp, 443/tcp webserver_nginx_2
d6d228599f81 webserver_postgres_image2 "docker-entrypoint.s…" 3 hours ago Up 3 seconds 5432/tcp webserver_postgres_2
a8aac280ea01 webserver_web_image2 "flask run --host=0.…" 3 hours ago Up 4 seconds 8000/tcp webserver_web_2
# can NOT access localhost
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
# can NOT ping web container from the nginx container
docker exec -it webserver_nginx_2 bash
root@15ef8bfc0ceb:/# ping web
ping: unknown host
# List port mappings for the container
docker port webserver_nginx_2
# nothing is being shown
snippet4 - the .yml files
cat /home/user/webServer/docker-compose.yml
version: '3'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/user/webServer/web:/home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -t 3600 -b :8000 project:app
depends_on:
- postgres
stdin_open: true
tty: true
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /home/user/webServer/web:/home/flask/app/web
depends_on:
- web
postgres:
restart: always
build: ./postgresql
volumes:
- data1:/var/lib/postgresql
expose:
- "5432"
volumes:
data1:
cat /home/user/webServer/docker-compose.override.yml
version: '3'
services:
web:
build: ./web
ports:
- "8000:8000"
environment:
- PYTHONUNBUFFERED=1
- FLASK_APP=run.py
- FLASK_DEBUG=1
volumes:
- /home/user/webServer/web:/usr/src/app/web
- /home/user/webClient/:/usr/src/app/web/V1
command: flask run --host=0.0.0.0 --port 8000
nginx:
volumes:
- /home/user/webServer/web:/usr/src/app/web
- /home/user/webClient/:/usr/src/app/web/V1
depends_on:
- web
postgres:
ports:
- "5432:5432"
snippet5 - the nginx .conf files
cat /home/user/webServer/nginx/nginx.conf
# Define the user that will own and run the Nginx server
user nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
worker_processes 1;
# Define the location on the file system of the error log, plus the minimum
# severity to log messages for
error_log /var/log/nginx/error.log warn;
# Define the file that will store the process ID of the main NGINX process
pid /var/run/nginx.pid;
# events block defines the parameters that affect connection processing.
events {
# Define the maximum number of simultaneous connections that can be opened by a worker process
worker_connections 1024;
}
# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
# Include the file defining the list of file types that are supported by NGINX
include /etc/nginx/mime.types;
# Define the default file type that is returned to the user
default_type text/html;
# Define the format of log messages.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Define the location of the log of access attempts to NGINX
access_log /var/log/nginx/access.log main;
# Define the parameters to optimize the delivery of static content
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Define the timeout value for keep-alive connections with the client
keepalive_timeout 65;
# Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
#gzip on;
# Include additional parameters for virtual host(s)/server(s)
include /etc/nginx/conf.d/*.conf;
}
cat /home/user/webServer/nginx/myapp.conf
# Define the parameters for a specific virtual host/server
server {
# Define the server name, IP address, and/or port of the server
listen 80;
# Define the specified charset to the “Content-Type” response header field
charset utf-8;
# Configure NGINX to deliver static content from the specified folder
location /static {
alias /home/flask/app/web/instance;
}
location /foo {
root /usr/src/app/web;
index index5.html;
}
location /V1 {
root /usr/src/app/web;
index index.html;
}
# Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
location / {
root /;
index index1.html;
resolver 127.0.0.11;
set $example "web:8000";
proxy_pass http://$example;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 10M;
client_body_buffer_size 10M;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
}
}
You need to copy the docker-compose.yml file to the target machine (with some edits) and re-run it there. If you're not also rebuilding the images there, you need to modify the file to not include volumes: referencing a local source tree and change the build: block to reference some image:. Never run docker commit.
The docker-compose.yml file mostly consists of options equivalent to docker run options, but in a more convenient syntax. For example, when docker-compose.yml says
services:
nginx:
ports:
- "80:80"
That's equivalent to a docker run -p 80:80 option, but when you later do
docker run -d --name webserver_nginx_2 webserver_nginx_image2
That option is missing. Docker Compose also implicitly creates a Docker network for you and without the equivalent docker run --net option, connectivity between containers doesn't work.
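To make that concrete, what Compose sets up implicitly looks roughly like this when spelled out as docker run commands (the names here are illustrative):
docker network create webserver_default
docker run -d --net webserver_default --name web webserver_web
docker run -d --net webserver_default --name nginx -p 80:80 webserver_nginx
On a user-defined network like this, containers can resolve each other by name, which is why ping web works under Compose but not from your manually run containers.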
All of these options must be specified every time you docker run the container. docker commit does not persist them. In general you should never run docker commit, especially if you already have Dockerfiles for your images; in the scenario you're describing here the image that comes out of docker commit won't be any different from what you could docker build yourself, except that it will lose some details like the default CMD to run.
As the commenters suggest, the best way to run the same setup on a different machine is to set up a Docker registry (or use a public service like Docker Hub), docker push your built images there, copy only the docker-compose.yml file to the new machine, and run docker-compose up there. (Think of it like the run.sh script that launches your containers.) You have to make a couple of changes: replace the build: block with the relevant image: and delete the volumes: that reference local source code. If you need the database data this needs to be copied separately.
services:
web:
restart: always
image: myapp/web
depends_on:
- postgres
ports:
- "8000:8000"
# expose:, command:, and core environment: should be in Dockerfile
# stdin_open: and tty: shouldn't matter for a Flask app
# volumes: replace code in the image and you're not copying that
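The registry round trip itself is only a few commands; a sketch with placeholder names (registry.example.com stands in for whatever registry you use):
# on the build machine
docker build -t registry.example.com/myapp/web ./web
docker push registry.example.com/myapp/web
# on the target machine, where docker-compose.yml says image: registry.example.com/myapp/web
docker-compose pull
docker-compose up -d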
The way you've tried to deploy your app on another server is not how Docker recommends you do it. You need to transfer all the files required to get the application running, along with a Dockerfile to build the image and a docker-compose.yml to define how each of the services of your application gets deployed. Docker has a nice document to get started with.
If your intention was to deploy the application on another server for redundancy and not to create a replica, consider using docker with swarm mode.
I am trying to set up a new docker-compose file.
version: '3'
services:
webserver:
image: nginx:latest
container_name: redux-webserver
# working_dir: /application
volumes:
- ./www:/var/www/
- ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
ports:
- "7007:80"
Currently it is very simple. But I copy the following config:
# Default server configuration
#
server {
listen 7007 default_server;
listen [::]:7007 default_server;
root /var/www;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name example;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location /redux {
alias /var/www/Redux/src;
try_files $uri @redux;
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
}
location @redux {
rewrite /redux/(.*)$ /redux/index.php?/$1 last;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
#fastcgi_split_path_info ^(.+\.php)(/.+)$;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
#fastcgi_index index.php;
# With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
But now when I try to start it with docker-compose run webserver I get the following error:
2019/07/20 08:55:09 [emerg] 1#1: open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
nginx: [emerg] open() "/etc/nginx/snippets/fastcgi-php.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:59
I understand it does not find the file fastcgi-php.conf. But why is this? Shouldn't that file be included in the standard nginx installation?
/etc/nginx/snippets/fastcgi-php.conf is in the nginx-full package, but the nginx:latest image you used does not install the nginx-full package.
To have it, you need to write your own Dockerfile based on nginx:latest and install nginx-full in it:
Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-full
docker-compose.yaml:
version: '3'
services:
webserver:
build: .
image: mynginx:latest
Put Dockerfile & docker-compose.yaml in the same folder then up it.
Additionally, if you don't mind using someone else's repo (i.e. not official), you can just search for one on dockerhub, e.g. one I found (schleyk/nginx-full):
docker run -it --rm schleyk/nginx-full ls -alh /etc/nginx/snippets/fastcgi-php.conf
-rw-r--r-- 1 root root 422 Apr 6 2018 /etc/nginx/snippets/fastcgi-php.conf
You are trying to use a docker compose config that does not account for the fastcgi / PHP specific options you are trying to load.
You can use another image and link it to your web server like:
web:
  image: nginx
  volumes:
    - ./code:/code
    - ./site.conf:/etc/nginx/conf.d/site.conf
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./code:/code
Source, with a more thorough explanation: http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/
I am trying to start an NGINX server within a docker container configured through docker-compose. The catch is, however, that I would like to substitute an environment variable inside of the http section, specifically within the "upstream" block.
It would be awesome to have this working, because I have several other containers that are all configured through environment variables, and I have about 5 environments that need to be running at any given time. I have tried using "envsubst" (as suggested by the official NGINX docs), perl_set, and set_by_lua, however none of them appear to be working.
Below is the NGINX config, as it is after my most recent trial
user nginx;
worker_processes 1;
env NGINXPROXY;
load_module modules/ngx_http_perl_module.so;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
perl_set $nginxproxy 'sub { return $ENV{"NGINXPROXY"}; }';
upstream api-upstream {
server ${nginxproxy};
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Below is the NGINX dockerfile
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Below is the section of the docker-compose.yml for the NGINX server (with names and IPs changed). The envsubst command is intentionally commented out at this point in my troubleshooting.
front-end:
environment:
- NGINXPROXY=172.31.67.100:9300
build: http://gitaccount:password@gitserver.com/group/front-end.git#develop
container_name: qa_front_end
image: qa-front-end
restart: always
networks:
qa_network:
ipv4_address: 172.28.0.215
ports:
- "9080:80"
# command: /bin/bash -c "envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
What appears to be happening is when I reference the $nginxproxy variable in the upstream block (right after "server"), I get output that makes it look like it's referencing the string literal "$nginxproxy" rather than substituting the value of the variable.
qa3_front_end | 2019/06/18 12:35:36 [emerg] 1#1: host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end | nginx: [emerg] host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end exited with code 1
When I attempt to use envsubst, I get an error that makes it sound like the command messed with the format of the nginx.conf file
qa3_front_end | 2019/06/18 12:49:02 [emerg] 1#1: no "events" section in configuration
qa3_front_end | nginx: [emerg] no "events" section in configuration
qa3_front_end exited with code 1
I'm pretty stuck, so thanks in advance for your help.
Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
server ${API_HOST};
}
# file: docker-compose.yml
services:
nginx:
image: nginx:1.19-alpine
volumes:
- "./docker/nginx/templates:/etc/nginx/templates/"
environment:
NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
API_HOST: api.example.com
I'm going off script a little from the example in the documentation. Note the extra .conf extension on the template file - this is not a typo. In the docs for the nginx image it is suggested to name the file, for example, default.conf.template. Upon startup, a script will take that file, substitute the environment variables, and then output the file to /etc/nginx/conf.d/ with the original file name, dropping the .template suffix.
By default that suffix is .template, but this breaks syntax highlighting unless you configure your editor. Instead, I specified .conf as the template suffix. If you only name your file default.conf the result will be a file named /etc/nginx/conf.d/default and your site won't be served as expected.
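To check that the substitution actually happened, you can print the generated file from inside the running container (the service name nginx matches the compose file above):
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf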
You can avoid some of the hassles with Compose interpreting environment variables by defining your own entrypoint. See this simple example:
entrypoint.sh (make sure this file is executable)
#!/bin/sh
export NGINXPROXY
envsubst '${NGINXPROXY}' < /config.template > /etc/nginx/nginx.conf
exec "$#"
docker-compose.yml
version: "3.7"
services:
front-end:
image: nginx
environment:
- NGINXPROXY=172.31.67.100:9300
ports:
- 80:80
volumes:
- ./config:/config.template
- ./entrypoint.sh:/entrypoint.sh
entrypoint: ["/entrypoint.sh"]
command: ["nginx", "-g", "daemon off;"]
My config file has the same content as your nginx.conf, aside from the fact that I had to comment out the lines that use the Perl module.
Note that I had to mount my config file to another location before I could envsubst it. I encountered some strange behaviour where the file ends up empty after the substitution, which can be avoided by this approach. It shouldn't be a problem in your specific case, because you already embed it in your image at build time.
EDIT
For completeness, to change your setup as little as possible, you just have to make sure that you export your environment variable. Adapt your command like this:
command: ["/bin/bash", "-c", "export NGINXPROXY && envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"]
...and you should be good to go. I would always recommend the "cleaner" way with defining your own entrypoint, though.
So after some wrestling with this issue, I managed to get it working similarly to the answer provided by bellackn. I am going to post my exact solution here, in case anybody else needs to reference a complete solution.
Step 1: Write your nginx.conf or default.conf as you would normally write it. Save the file as "nginx.conf.template" or "default.conf.template", depending on which one you are substituting variables into.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
upstream api-upstream {
server 192.168.25.254;
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Step 2: Substitute a variable in the format ${VARNAME} for whatever value(s) you want to replace with an environment variable:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
upstream api-upstream {
server ${SERVER_NAME};
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Step 3: In your Dockerfile, copy your nginx configuration files (your nginx.conf.template or default.conf.template) into your container at the appropriate location:
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
# note the .template file being copied in here
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf.template /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Step 4: Set your environment variable in your docker-compose.yml file using the "environment" section label. Make sure your environment variable name matches whatever variable name you chose within your nginx config file. Use the "envsubst" command within your docker container to substitute your variable values in for your variables within your nginx.conf.template, and write the output to a file named nginx.conf in the correct location. This can be done within the docker-compose.yml file by using the "command" section label:
version: '2.0'
services:
front-end:
environment:
- SERVER_NAME=172.31.67.100:9100
build: http://git-account:git-password@git-server.com/project-group/repository-name.git#branch-name
container_name: qa_front_end
image: qa-front-end-vue
restart: always
networks:
qa_network:
ipv4_address: 172.28.0.215
ports:
- "9080:80"
command: >
/bin/sh -c
"envsubst '
$${SERVER_NAME}
'< /etc/nginx/nginx.conf.template
> /etc/nginx/nginx.conf
&& nginx -g 'daemon off;'"
Step 5: Run your stack with docker-compose up (with whatever additional switches you need) and your nginx server should now start with whatever value you supplied in the "environment" section of your docker-compose.yml
As mentioned in the solution above, you can also define your own entry point; however, this solution has also proven to work pretty well, and it keeps everything contained in a single configuration file, giving me the ability to run a stack of services directly from git with nothing but a docker-compose.yml file.
A big thank you to everybody who took the time to read through this, and to bellackn for taking the time to help me solve the issue.
As already explained in Jody's answer, nowadays the official Nginx Docker image supports parsing templates. This uses envsubst, and its handling ensures it does not mess with Nginx variables such as $host and all. Nice. However, envsubst does not support default values the way a regular shell and Docker Compose do when using ${MY_VAR:-My Default}. So this built-in templating would always need a full setup of all variables, even when using the defaults.
To define defaults in the image itself, one can use a custom entry point to first set the defaults and then simply delegate to the original entrypoint. Like a docker-defaults.sh:
#!/usr/bin/env sh
set -eu
# As of version 1.19, the official Nginx Docker image supports templates with
# variable substitution. But that uses `envsubst`, which does not allow for
# defaults for missing variables. Here, first use the regular command shell
# to set the defaults:
export PROXY_API_DEST=${PROXY_API_DEST:-http://host.docker.internal:8000/api/}
# Due to `set -u` this would fail if not defined and no default was set above
echo "Will proxy requests for /api/* to ${PROXY_API_DEST}*"
# Finally, let the original Nginx entry point do its work, passing whatever is
# set for CMD. Use `exec` to replace the current process, to trap any signals
# (like Ctrl+C) that Docker may send it:
exec /docker-entrypoint.sh "$#"
Along with, say, some docker-nginx-default.conf:
# After variable substitution, this will replace /etc/nginx/conf.d/default.conf
server {
listen 80;
listen [::]:80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /api/ {
# Proxy API calls to another destination; the default for the variable is
# set in docker-defaults.sh
proxy_pass $PROXY_API_DEST;
}
}
In the Dockerfile copy the template into /etc/nginx/templates/default.conf.template and set the custom entry point:
FROM nginx:stable-alpine
...
# Each time Nginx is started it will perform variable substition in all template
# files found in `/etc/nginx/templates/*.template`, and copy the results (without
# the `.template` suffix) into `/etc/nginx/conf.d/`. Below, this will replace the
# original `/etc/nginx/conf.d/default.conf`; see https://hub.docker.com/_/nginx
COPY docker-nginx-default.conf /etc/nginx/templates/default.conf.template
COPY docker-defaults.sh /
# Just in case the file mode was not properly set in Git
RUN chmod +x /docker-defaults.sh
# This will delegate to the original Nginx `docker-entrypoint.sh`
ENTRYPOINT ["/docker-defaults.sh"]
# The default parameters to ENTRYPOINT (unless overruled on the command line)
CMD ["nginx", "-g", "daemon off;"]
Now using, e.g., docker run --env PROXY_API_DEST=https://example.com/api/ ... will set a value, which in this example will default to http://host.docker.internal:8000/api/ if not set (which is actually http://localhost:8000/api/ on the local machine).
According to the official documentation https://hub.docker.com/_/nginx, section "Using environment variables in nginx configuration (new in 1.19)", you can use environment variables.
But it does not work, due to a bug in a script inside the docker container:
https://github.com/nginxinc/docker-nginx/blob/master/entrypoint/20-envsubst-on-templates.sh#L25
running this script always fails with error:
/docker-entrypoint.d/20-envsubst-on-templates.sh: line 25: 3: Bad file descriptor
I created issue https://github.com/nginxinc/docker-nginx/issues/645
and pull request https://github.com/nginxinc/docker-nginx/pull/646
As a workaround for now, I copied this script and changed it locally.
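In compose terms, that workaround can be sketched like this (the patched local copy of the script shadows the stock one; the paths match the official image's layout):
services:
  nginx:
    image: nginx:1.19-alpine
    volumes:
      # patched copy of 20-envsubst-on-templates.sh with the fix applied
      - ./20-envsubst-on-templates.sh:/docker-entrypoint.d/20-envsubst-on-templates.sh:ro
      - ./templates:/etc/nginx/templates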
You could switch to a more advanced nginx docker image. For example nginx4docker: it implements a bunch of basic env variables that can be set through docker, and you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker could also be extended with your custom env variables. Just mount a file that lists all your env variables into the container: ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
My solution is copying an entrypoint sh file into the /docker-entrypoint.d directory of the nginx container. As mentioned above, you need to copy a .template file, but you don't need to create two separate files.
Copy the config file under a temporary name in the Dockerfile. It's important not to use the ENTRYPOINT command in the Dockerfile:
FROM nginx
...
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
Create an sh file named 05-docker-entrypoint.sh in your project directory (on the host) and put the following code into it, as mentioned above:
#!/usr/bin/env sh
set -eu
envsubst '${MY_VARIABLE}' < /etc/nginx/conf.d/default.conf.temp > /etc/nginx/conf.d/default.conf
exec "$#"
Mount 05-docker-entrypoint.sh into the /docker-entrypoint.d directory of the nginx container using the docker-compose.yml file, or copy it in using the Dockerfile. The two options look like this:
Option 1 (I prefer this): mounting the file using the compose file:
web:
expose:
- 80
environment:
- MY_VARIABLE=blabala
volumes:
- ./05-docker-entrypoint.sh:/docker-entrypoint.d/05-docker-entrypoint.sh
....
Option 2: use the Dockerfile to copy the file into the container.
The final Dockerfile with Option 2 looks like this:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
COPY ./05-docker-entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh