What is the role of nginx when dockerizing an app?

I am new to docker and I've been working on dockerizing and deploying my app to an internal server at work.
The structure is that I have one Dockerfile for my React + nginx frontend and another for my Flask backend.
Then I use docker-compose to build and run the containers from these Dockerfiles together.
I've been following the format that other people at my work have written previously, so I am not fully grasping all aspects.
I am especially confused about the role of nginx.
The Dockerfile that contains both react and nginx looks like this:
FROM node:latest as building
RUN npm config set proxy <proxy for my company>
RUN npm config set https-proxy <proxy for my company>
WORKDIR /app
ENV PATH /node_modules/.bin:$PATH
COPY package.json /app/
COPY ./ /app/
RUN npm install
RUN npm install react-scripts@3.0.1 -g
RUN npm run build
FROM nginx
RUN rm -rf /etc/nginx/conf.d
COPY deployment/nginx.conf /etc/nginx/nginx.conf
COPY --from=building /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and my customized nginx.conf looks like
user root;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server {
server_name <internal_server_box>;
listen [::]:80;
listen 80;
root /usr/share/nginx/html;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
location /v1 {
proxy_pass http://<backend_container>:5000;
}
}
client_max_body_size 2M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
include /etc/nginx/conf.d/*.conf;
}
I am not sure what nginx does here, because I can still make this app accessible from the outside just by putting up the React app without nginx. I read somewhere that it could function as some kind of gateway, but that wasn't clear to me.
It would be great if someone could explain why we need nginx to serve the app, when we can apparently make it accessible outside the internal server box without it.

In the general case, there is no need or requirement to install nginx if you want to dockerize something.
If what you are dockerizing is a web app of some sort (in the broadest sense: something that people communicate with using their browsers or an HTTP API) and it can only handle a single client connection at a time, the benefit of a web server in between is that it provides support for multiple concurrent clients.
Many web frameworks allow you to serve a single user or a small number of users without a web server, but this does not scale to production use with as many concurrent clients as the hardware can handle. When you deploy, you add a web server in between to take care of spawning as many instances of your server-side client-handling code as necessary to keep up, as well as to handle normal web server tasks like resource limits, access permissions, redirection, logging, SSL negotiation for HTTPS, etc.
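With the Flask backend from the question, for example, that usually means running the app under a WSGI server instead of the built-in development server; a minimal sketch, assuming gunicorn and an app.py module that exposes a Flask object named app:
# hypothetical CMD for the backend's Dockerfile; "app:app" is module:callable and must match your code
CMD ["gunicorn", "--workers", "4", "--bind", "0.0.0.0:5000", "app:app"]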

Nginx has two important roles here. (Neither is specific to Docker.) As you say it's not strictly required, but this seems like a sound setup to me.
The standard Javascript build tooling (Webpack, for instance) ultimately compiles to a set of static files that get sent across to the browser. If you have a "frontend" container, it never actually runs your React code; it just serves it up. While most frameworks have a built-in development server, they also tend to come with a big "not for production use" disclaimer. You can see this in your Dockerfile: the first-stage build compiles the application, and the second stage just copies in the built artifacts.
There are some practical issues that are solved if the browser can see the Javascript code and the underlying API on the same service. (The browser code can just include links to /v1/... without needing to know a hostname; you don't have to do tricks to work around CORS restrictions.) That's what the proxy_pass line in the nginx configuration does.
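Concretely, with the /v1 location above, code running in the browser can call the API with a relative URL and nginx forwards it to the Flask container (the endpoint name below is made up):
// runs in the browser; nginx's "location /v1" block proxies this to the backend container
fetch('/v1/items').then(res => res.json()).then(console.log);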
I consider the overall pattern of this Dockerfile to be a very standard Docker setup. First it COPYs some code in; it compiles or packages it; and then it sets up a minimal runtime package that contains only what's needed to run or serve the application. Your build tools and local HTTP proxy settings don't appear in the final image. You can run the resulting image without any sort of attached volumes. This matches the sort of Docker setups I've built for other languages.
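For context, the docker-compose.yml is not shown in the question, but for this kind of setup it typically looks something like the sketch below; the service names and build paths are assumptions:
version: "3.8"
services:
  frontend:
    build: .            # the React + nginx image from the Dockerfile above
    ports:
      - "80:80"         # only nginx is published to the outside world
  backend:
    build: ./backend    # the Flask API image
    expose:
      - "5000"          # reachable from nginx by service name, not published on the host
With compose, each service name becomes a hostname on the shared network, which is what the <backend_container> placeholder in proxy_pass resolves to.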

Related

Vultr Docker Setup With SSL

I'm trying to spin up a dockerized website (React JS) being hosted on my Vultr server. I used the one click install feature that Vultr provides to install Docker (Ubuntu 20.04).
I can get my website started over HTTP on port 8080. But what I'm looking to accomplish is the following:
1. How to add SSL to my website with docker (since my website is dockerized). PS: I already have a domain name.
2. How to get rid of the port number, as it doesn't look very professional.
For number 2, I did try adding a reverse proxy, but I'm not sure if I did it correctly.
Also, not sure if this was the right approach, but I did install letsencrypt on my host Vultr machine (nginx was needed too for some reason). I navigated to my domain name and, sure enough, it is served securely (https), but with the "Welcome to Nginx" landing page. But again, is this the correct way? If so, how do I display my React website, secured, instead of the default "Welcome to Nginx" landing page?
Or should I not have installed nginx or letsencrypt on the host machine at all?
As you can see, I'm an absolute beginner in docker!
This is my Dockerfile-prod
FROM node as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn cache clean && yarn --update-checksums
COPY . ./
RUN yarn && yarn build
# Stage - Production
FROM nginx
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80 443
# Make sites-available directory
RUN mkdir /etc/nginx/sites-available
# Make sites-enabled directory
RUN mkdir /etc/nginx/sites-enabled
# add nginx live config
ADD config/*****.com /etc/nginx/sites-available/*****.com
# create symlinks
RUN ln -s /etc/nginx/sites-available/*****.com /etc/nginx/sites-enabled/*****
# make certs dir as volume
VOLUME ["/etc/letsencrypt"]
CMD ["nginx", "-g", "daemon off;"]
I have two configuration files. PS: I was trying to follow this guy's repo.
I feel like everything is there in front of me to get it to work, but I just can't figure it out. If you guys can help me out, I'd really appreciate it! Thanks in advance!
config 1 *****-live.com
server {
listen 80;
listen [::]:80;
server_name *****.com www.*****.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
}
server {
listen 443 ssl;
server_name *****.com www.*****.com;
ssl_certificate /etc/letsencrypt/live/*****.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/*****.com/privkey.pem;
location / {
proxy_pass http://172.17.0.2:8080;
}
}
config 2 *****-staging.com
server {
listen 80;
listen [::]:80;
server_name *****.com www.*****.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
}
This is my host machine's letsencrypt/live directory (screenshot omitted).

What could be the cause of this error "could not be resolved (110: Operation timed out)"?

I am working at a company and, to improve SEO, I am trying to set up our Angular 10 web app with prerender.io to send rendered HTML to crawlers visiting our website.
The app is dockerized and exposed using an nginx server. To avoid conflicts with the existing nginx conf (after a few tries using it), I (re)started the configuration from the .conf file provided in the prerender.io documentation (https://gist.github.com/thoop/8165802), but I cannot get any response from the prerender service.
I am always facing:
"502: Bad Gateway" (client side) and
"could not be resolved (110: Operation timed out)" (server side) when I send a request with Googlebot as the User-Agent.
After building and running my docker image, the website is correctly exposed on port 80. It is fully accessible when I use a web browser, but the error occurs when I try a request as a bot (using curl -A Googlebot http://localhost:80).
To verify that the prerender service correctly receives my request when needed, I tried to use a URL generated on pipedream.com, but the request never arrives.
I tried using different resolvers (8.8.8.8 and 1.1.1.1) but nothing changed.
I tried increasing resolver_timeout to allow more time, but I still get the same error.
I tried installing curl in the container (my image is based on an alpine image); curl was successfully installed but nothing changed.
Here is my nginx conf file:
server {
listen 80 default_server;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri @prerender;
}
location @prerender {
proxy_set_header X-Prerender-Token TOKEN_HERE;
set $prerender 0;
if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest\/0\.|pinterestbot|slackbot|vkShare|W3C_Validator|whatsapp") {
set $prerender 1;
}
if ($args ~ "_escaped_fragment_") {
set $prerender 1;
}
if ($http_user_agent ~ "Prerender") {
set $prerender 0;
}
if ($uri ~* "\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)") {
set $prerender 0;
}
#resolve using Google's DNS server to force DNS resolution and prevent caching of IPs
resolver 8.8.8.8;
resolver_timeout 60s;
if ($prerender = 1) {
#setting prerender as a variable forces DNS resolution since nginx caches IPs and doesnt play well with load balancing
set $prerender "service.prerender.io";
rewrite .* /$scheme://$host$request_uri? break;
proxy_pass http://$prerender;
}
if ($prerender = 0) {
rewrite .* /index.html break;
}
}
}
And here is my Dockerfile:
FROM node:12.7-alpine AS build
ARG environment=production
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build -- --configuration $environment
# Two stage build because we do not need node-related things
FROM nginx:1.17.1-alpine
RUN apk add --no-cache curl
COPY --from=build /usr/src/app/dist/app /usr/share/nginx/html
COPY prerender-nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
I hope you have a lead or an idea that could help me.
The erroneous part would be:
curl -A Googlebot http://localhost:80
The way prerender works is that it fetches the URL that was sent to the original web server (the rewrite in the @prerender block builds that URL from $host), so localhost:80 will not be reachable from the prerender service.
Try passing a proper hostname, something like:
curl -H "Host: accessiblefrom.public.websitefqdn:80" http://localhost:80
Check out the example at
https://github.com/Voronenko/self-hosted-prerender
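Separately, since the server-side error is a resolver timeout, it can be worth confirming that the container can actually reach the DNS server named in the resolver directive; a quick check (the container name here is a placeholder):
# run from the host; replace "web" with your container's name or ID
docker exec -it web nslookup service.prerender.io 8.8.8.8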

Showing updates in browser without hitting refresh when using Docker to deploy Vue.js

I am new to docker and following the tutorial on https://cli.vuejs.org/guide/deployment.html#docker-nginx.
I have managed to get it to work, but I am wondering if there is a way to get the browser to update without having to hit the refresh button. Here are the steps I take:
1. Change content.
2. Stop container.
3. Remove container.
4. Build container.
5. Run container.
6. Refresh browser.
I am wondering if there is a way to avoid step 6 to see the new content in the browser.
Here is my Dockerfile
FROM node:latest as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
and here is my nginx.conf file
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root /app;
index index.html;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
Problem
Hot reloading is generally implemented using file system specific event APIs to determine when and what to “reload”. Since you’re using Vue in a docker container running a different OS, Vue has no way of communicating with your file system via API (and shouldn’t, for obvious security reasons).
Solution
Ensure Hot Reloading Is Enabled
You'll need to ensure hot reloading is enabled in Vue.js and force Vue.js to use polling rather than sockets to determine when to reload.
Ensure hot reloading is enabled: disable minification in development mode, make sure the webpack target is not "node", and make sure NODE_ENV is not "production".
Use Polling
Update webpack config to include the following:
watchOptions: {
poll: 1000 // Check for changes every second
}
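If the project uses Vue CLI, that snippet would typically live in vue.config.js under configureWebpack (this is an assumption about your setup; adjust if you configure webpack directly):
// vue.config.js
module.exports = {
  configureWebpack: {
    watchOptions: {
      poll: 1000 // check for changes every second instead of relying on file-system events
    }
  }
};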
or
Set the environment variable CHOKIDAR_USEPOLLING to true in your development Dockerfile or docker-compose file.
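A sketch of what that could look like in a development docker-compose.yml; the service name, mount path, and port are assumptions:
version: "3.8"
services:
  web:
    build: .
    environment:
      - CHOKIDAR_USEPOLLING=true   # make the file watcher poll instead of waiting for fs events
    volumes:
      - .:/app                     # mount the source so edits on the host are visible in the container
    ports:
      - "8080:8080"                # default port of vue-cli-service serve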
References
Disabling Hot Reload in Vue.js: https://vue-loader.vuejs.org/guide/hot-reload.html#usage
Watch Options: https://webpack.js.org/configuration/watch/#watchoptionspoll

How to log errors in VueJS project, deployed on nginx docker image?

I have a VueJS project, which is deployed using an nginx docker image.
My Dockerfile looks like:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm i npm@latest -g && npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And my nginx.conf looks like:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /usr/share/nginx/html;
index index.html;
location / {
# Support the HTML5 History mode of the vue-router.
# https://router.vuejs.org/en/essentials/history-mode.html
try_files $uri $uri/ /index.html;
}
}
Now I'm looking for a way to set up some kind of logger, so that any errors that happen in the application are written to a .log file on the server. Since the application is already in production, I would like to be able to open this file and see any warnings / errors that occur in the application and that may harm the normal flow for users.
Since VueJS is a client-side JavaScript framework, I'm not sure how to put all of this together or on which side I need to add the logger, which is why I shared the code from the dockerizing flow. At the moment all I'm doing is adding console.log messages, but that won't help me when I want to review the potential errors.
I have used a 3rd-party logging service like Loggly to do this. Create an account, and then you can use https://www.npmjs.com/package/loggly to set up a connection:
var loggly = require('loggly');
var client = loggly.createClient({
token: "your-really-long-input-token",
subdomain: "your-subdomain",
auth: {
username: "your-username",
password: "your-password"
}
});
And then you simply log like this:
client.log('127.0.0.1 - oops i did it again');
You then log into their service to see the log and a bunch of tools for analysis.
There are other logging services like Logzio, etc.
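To actually capture errors from the Vue app and hand them to whichever logging client you pick, one place to hook in is Vue's global error handler; a sketch, assuming Vue 2 (Vue 3 exposes the same hook as app.config.errorHandler) and the client object created above:
// main.js: forward uncaught component errors to the remote logger
Vue.config.errorHandler = function (err, vm, info) {
  client.log('[' + info + '] ' + err.message); // "client" is the Loggly client from the snippet above
  console.error(err); // keep local visibility as well
};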

LetsEncrypt in a Docker (docker-compose) app container not working

I'm using docker-compose for a Rails app, so that I have an app container and a db container. In order to test some app functionality I need SSL...so I'm going with Let's Encrypt rather than self-signed certificates.
The app uses nginx, and the server is Ubuntu 14.04 LTS, with the Phusion Passenger docker image (lightweight Debian) as the base image.
Normally with LetsEncrypt, I run the usual ./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com
My server runs nginx (proxy passing the app to the container), so I've hopped into the container to run the certbot command without issue.
However, when I try to go to https://test-app.example.com it doesn't work. I can't figure out why.
Error on site (Chrome):
This site can’t be reached
The connection was reset.
Curl gives a bit better error:
curl: (35) Unknown SSL protocol error in connection to test-app.example.com
Server nginx app.conf
upstream test_app { server localhost:4200; }
server {
listen 80;
listen 443 default ssl;
server_name test-app.example.com;
# for SSL
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-blahblahblah-SHA';
location / {
proxy_set_header Host $http_host;
proxy_pass http://test_app;
}
}
Container's nginx app.conf
server {
server_name _;
root /home/app/test/public;
ssl_certificate /etc/letsencrypt/live/test-app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test-app.example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-blahblah-SHA';
passenger_enabled on;
passenger_user app;
passenger_ruby /usr/bin/ruby2.3;
passenger_app_env staging;
location /app_test/assets/ {
passenger_enabled off;
alias /home/app/test/public/assets/;
gzip_static on;
expires +7d;
add_header Cache-Control public;
break;
}
}
In my Dockerfile, I have:
# expose port
EXPOSE 80
EXPOSE 443
In my docker-compose.yml file I have:
test_app_app:
  build: "."
  env_file: config/test_app-application.env
  links:
    - test_app_db:postgres
  environment:
    app_url: https://test-app.example.com
  ports:
    - 4200:80
And with docker ps it shows up as:
Up About an hour 443/tcp, 0.0.0.0:4200->80/tcp
I am now suspecting it's because the server's nginx - the "front-facing" server - doesn't have the certs, but I can't run the LetsEncrypt command without an app location.
I tried running the manual LetsEncrypt command on the server, but because I presumably have port 80 exposed, I get this: socket.error: [Errno 98] Address already in use. Did I miss something here?
What do I do?
Fun one.
I would tend to agree that it's likely due to not getting the certs.
First and foremost, read my disclaimer at the end. I would try to use DNS authentication; IMHO it's a better method for something like Docker. A few ideas come to mind. The easiest one that answers your question would be a docker entrypoint script that gets the certs first and then starts nginx:
#!/bin/bash
set -ea
# get the cert
./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com
# start nginx in the foreground so the container keeps running
exec nginx -g "daemon off;"
This is "okay" solution, IMHO, but is not really "automated" (which is part of the lets encrypt goals). It doesn't really address renewing the certificate down the road. If that's not a concern of yours, then there you go.
You could get really involved and create an entrypoint script that detects when the cert expires and then rerun the command to renew it and then reloads nginx.
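A rough sketch of that idea, with the interval and reload step as assumptions, would be a loop run alongside nginx from the entrypoint:
#!/bin/bash
# background this from the entrypoint, next to nginx
while true; do
  sleep 12h
  ./certbot-auto renew --quiet && nginx -s reload   # reload picks up renewed certs without downtime
done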
A much more complicated (but also more scalable) solution would be to create a docker image whose sole purpose in life is to handle Let's Encrypt certificates and renewals, and then provide a way of distributing those certificates to other containers, e.g. NFS (or shared docker volumes if you are really careful).
For anyone in the future reading this: this was written before compose hooks was an available feature, which would be by far the best way of handling something like this.
Please read this disclaimer:
Docker is not really the best solution for this, IMHO. Docker images should be static data. Because Let's Encrypt certificates expire after 3 months, that means your container should have a shelf life of three months or less (or, like I said above, account for renewing). "That's fine!" I hear you say. But that would also mean you are constantly getting a new certificate issued each time you start the container (with the entrypoint method). At the very least, that means the previous certificate gets revoked every time. I don't know what the ramifications of doing this with Let's Encrypt are. They may only give you so many revokes before they think something fishy is going on.
What I tend to do most often is actually use configuration management and use nginx as the "front" on the host system. Or rely on some other mechanism to handle SSL termination. But that doesn't answer your question of how to get Lets Encrypt to work with docker. :-)
I hope that helps or points you in a better direction. :-)
I knew I was missing one small thing. As stated in the question, since the nginx on the server is the 'front-facing' nginx, with the container's nginx specifically for the app, the server's nginx needed to know about the SSL.
The answer was super simple. Copy the certs over! (Kudos to my client's ops lead)
I cat'd the fullchain.pem and privkey.pem inside the docker container and created the associated files in /etc/ssl on the server.
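In practice that copy step can be done with docker cp; the paths come from the configs above, and the container name is whatever docker ps reports:
# "test_app_app" is a placeholder; use the container name or ID from docker ps
docker cp test_app_app:/etc/letsencrypt/live/test-app.example.com/fullchain.pem /etc/ssl/test-app-fullchain.pem
docker cp test_app_app:/etc/letsencrypt/live/test-app.example.com/privkey.pem /etc/ssl/test-app-privkey.pem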
On the server's /etc/nginx/sites-enabled/app.conf I added:
ssl_certificate /etc/ssl/test-app-fullchain.pem;
ssl_certificate_key /etc/ssl/test-app-privkey.pem;
Checked configuration and restarted nginx. Boom! Worked like a charm. :)
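For completeness, that check-and-restart step was roughly the following (assuming the stock nginx service on the host):
sudo nginx -t               # validate the configuration
sudo service nginx restart  # Ubuntu 14.04 uses the service wrapper rather than systemctl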
