As I deploy my first ever Rails app to a server, I keep getting this error right from the home URL:
The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
My configuration:
Dockerfile
FROM ruby:3.0.0-alpine3.13
RUN apk add --no-cache --update alpine-sdk nodejs postgresql-dev yarn tzdata
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 4000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0", "-e", "production", "-p", "4000"]
entrypoint.sh
#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
docker-compose.yml
central:
  build:
    context: ./central
    dockerfile: Dockerfile
  command: sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 4000 -e production -b '0.0.0.0'"
  volumes:
    - ./central:/usr/src/app
  env_file:
    - ./central/.env.prod
  stdin_open: true
  tty: true
  depends_on:
    - centraldb
centraldb:
  image: postgres:12.0-alpine
  volumes:
    - centraldb:/var/lib/postgresql/data/
  env_file:
    - ./central/.env.prod.db
nginx:
  image: nginx:1.19.0-alpine
  volumes:
    - ./nginx/prod/certbot/www:/var/www/certbot
    - ./central/public/:/home/apps/central/public/:ro
  ports:
    - 80:80
    - 443:443
  depends_on:
    - central
  links:
    - central
  restart: unless-stopped
  command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
nginx.conf
upstream theapp {
server central:4000;
}
server {
listen 80;
server_name thedomain.com;
server_tokens off;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name thedomain.com;
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
# hidden ssl config
root /home/apps/central/public;
index index.html;
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
try_files $uri @rails;
access_log off;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
location ~ /\. {
deny all;
}
location ~* ^.+\.(rb|log)$ {
deny all;
}
location / {
try_files $uri @rails;
}
location @rails {
proxy_pass http://theapp;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
client_max_body_size 4G;
keepalive_timeout 10;
}
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /home/apps/central/public;
}
error_log /var/log/nginx/central_error.log;
access_log /var/log/nginx/central_access.log;
}
When I check the nginx log files, they don't show anything.
The app works well locally, and even in production with the ports config (if I go to thedomain.com:4000), but I need to serve it through Nginx in production at thedomain.com, so I need a solution for this.
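For anyone debugging a similar setup, a quick way to narrow down where the request dies is to test each hop from inside the compose network. A minimal sketch, using the service names from the files above (adjust to your project):
# is the nginx config valid inside the running container?
docker-compose exec nginx nginx -t
# can nginx reach the Rails upstream by its compose service name?
docker-compose exec nginx wget -qO- http://central:4000/
# is Rails answering at all, and what does it log for the request?
docker-compose logs --tail=50 central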
Related
I am going crazy not figuring this out.
I have a Docker network named isolated-network, and I have an nginx server and an API created inside that network. Nginx on port 80 is exposed to the host so I can access it, but the API isn't. I know I could expose the API to the host too, but I'd like it to stay isolated and only be accessible through nginx, say at /api.
I have configured nginx at /api to route to http://my-api:8000, however I get 404 Not Found in return when accessing http://localhost/api. If I do 'docker exec -it nginx sh' and curl the same route, http://my-api:8000, I get the expected response.
Is what I'm trying even possible? I have not found any example doing the same. If I can't route to http://my-api:8000, can I at least forward the API request to it and receive the response?
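What you are trying is definitely possible. A small debugging sketch, assuming the container names from your description (nginx, my-api): check whether the 404 is produced by nginx or by the API, and which path actually reaches the API, keeping in mind that nginx forwards the original /api prefix upstream unless proxy_pass ends with a URI part (such as a trailing slash).
# the official nginx image logs to stdout/stderr, so the access/error log shows who answered 404
docker logs --tail 20 nginx
# compare the API's response for the prefixed and the stripped path
docker exec -it nginx sh -c 'curl -s -o /dev/null -w "%{http_code}\n" http://my-api:8000/api; curl -s -o /dev/null -w "%{http_code}\n" http://my-api:8000/'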
Below is an example in which nginx routes traffic to a php-fpm container.
Dockerfile:
FROM php:8.1-fpm
USER root
RUN mkdir -p /var/www/ && mkdir -p /home/www/.composer && chown www:www /var/www/html && chown www:www /home/www/.composer
#groupadd -g 1000 www && useradd -ms /bin/bash -G www -g 1000 www &&
RUN usermod -aG sudo www
RUN chown -R www /var/www
USER root
EXPOSE 9000
CMD ["php-fpm"]
docker compose file:
version: "3"
services:
webserver:
image: nginx:1.21.6-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./configfiles/nginx/conf-fpm:/etc/nginx/conf.d
php-fpm:
image: php:8.1-fpm
working_dir: /var/www
stdin_open: true
tty: true
build:
context: ./configfiles/php-fpm
dockerfile: Dockerfile
volumes:
- ./src:/var/www
nginx config file
server {
listen 80;
index index.php index.html;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
root /var/www/public;
proxy_read_timeout 3600;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php-fpm:9000;
fastcgi_read_timeout 3600;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
}
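To try this example, assuming the app in ./src has a public/index.php front controller (that assumption is what root /var/www/public above relies on):
docker-compose up -d --build
# any URI ending in .php is handed to the php-fpm container over FastCGI on port 9000
curl -I http://localhost/index.php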
I am using docker-compose to serve up a front-end (Vue.js), a back-end, and an nginx reverse proxy.
When I navigate to a route and hit refresh, I get a 404 nginx error.
Here is part of my docker-compose file; I omitted a few lines for brevity.
version: '3'
services:
  # Proxies requests to internal services
  dc-reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy_demo
    depends_on:
      - dc-front-end
      - dc-back-end
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 5004:80
  dc-front-end:
    ..
    container_name: dc-front-end
    ports:
      - 8080:80
  # API
  dc-back-end:
    container_name: dc-back-end
    ports:
      - 5001:5001
Here is the nginx.conf that belongs to the reverse proxy service:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name 127.0.0.1;
location / {
proxy_pass http://dc-front-end:80;
proxy_set_header X-Forwarded-For $remote_addr;
}
location /dc-back-end/ {
proxy_pass http://dc-back-end:5001/;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
And this is the nginx.conf for the front-end:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
root /app;
#root /usr/share/nginx/html;
location / {
index index.html;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
And finally, the Dockerfile for the front-end service:
# build stage
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I have tried using try_files $uri $uri/ /index.html; in both nginx files, but it still gives a 404 on page refresh or if I navigate to the page directly in the browser (rather than clicking a link).
As usual, the laws of Stack Overflow dictate that you only solve your own question once you post it.
The Dockerfile was wrong. It threw me, as everything else worked:
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx:stable-alpine as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
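A quick way to confirm the rebuilt image serves deep links (the route below is only a placeholder; 5004 is the host port mapped to the reverse proxy in the compose file above):
docker-compose build dc-front-end
docker-compose up -d
# refreshing a client-side route should now return the SPA's index.html instead of a 404
curl -I http://localhost:5004/some/client/route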
I am trying to deploy a simple Django REST Framework app to a production server using Docker. My aim is to set up Nginx as a proxy and Certbot for a regular Let's Encrypt SSL certificate at the same time. I manage my dependencies in Dockerfiles and docker-compose.
So the folder structure has the following view:
app
    DockerFile
nginx
    DockerFile
    init-letsencrypt.sh
    nginx.conf
docker-compose.yml
My idea is to hold all the configs in app/docker-compose.yml and start many different instances from the same source. But I do not have any nginx or certbot config in app/DockerFile - that's only for Django Rest Framework and that works well. But in docker-compose.yml I have the following code:
version: '3'
'services':
  app:
    container_name: djangoserver
    command: gunicorn prototyp.wsgi:application --env DJANGO_SETTINGS_MODULE=prototyp.prod_settings --bind 0.0.0.0:8000 --workers=2 --threads=4 --worker-class=gthread
    build:
      context: ./api
      dockerfile: Dockerfile
    restart: always
    ports:
      - "8000:8000"
    depends_on:
      - otherserver
  otherserver:
    container_name: otherserver
    build:
      context: ./otherserver
      dockerfile: Dockerfile
    restart: always
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - app
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
This lets me build "app", "otherserver", "nginx" and "certbot".
The most important parts are in the "nginx" folder.
I used this manual and cloned the file "init-letsencrypt.sh" from the source just the way it was described. Then I tried to run it with bash:
nginx/DockerFile:
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
RUN mkdir -p /usr/src/app
COPY init-letsencrypt.sh /usr/src/app
WORKDIR /usr/src/app
RUN chmod +x init-letsencrypt.sh
ENTRYPOINT ["/usr/src/app/init-letsencrypt.sh"]
In nginx/nginx.conf I have the following code:
upstream django {
server app:8000;
}
server {
listen 80;
server_name app.com www.app.com;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name app.com www.app.com;
access_log /var/log/nginx-access.log;
error_log /var/log/nginx-error.log;
ssl_certificate /etc/letsencrypt/live/app.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location ^/static/rest_framework/((img/|css/|js/|fonts).*)$ {
autoindex on;
access_log off;
alias /usr/src/app/static/rest_framework/$1;
}
location / {
proxy_pass http://django;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_body_buffer_size 256k;
proxy_connect_timeout 120;
proxy_send_timeout 120;
proxy_read_timeout 120;
proxy_buffer_size 64k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
client_max_body_size 100M;
}
}
So, with this configuration, when I do "docker-compose build", the build works without any errors and everything is built successfully. But as soon as I do "docker-compose up", certbot and nginx do not connect, and the app only works when I use http://app.com:8000 instead of https://app.com.
In the console I do not have any errors.
What do I do wrong? What have I missed? Any help will be appreciated.
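A few checks that usually narrow this down, using the service names from the compose file above; in particular, whether certbot ever obtained a certificate and which host ports nginx actually publishes:
# did certbot manage to issue a certificate?
docker-compose logs certbot
# is there a live certificate directory visible to nginx? (fails if nothing was issued or mounted)
docker-compose exec nginx ls /etc/letsencrypt/live/
# which ports are published on the host?
docker-compose ps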
I see that in your setup you try to run Let's Encrypt from within the nginx container, but I believe there are two better ways, which I describe in detail here and here.
The idea behind the first method is to have a docker-compose file to initiate the letsencrypt certificate, and another docker-compose file to run the system and renew the certificate.
So without further ado, here is the file structure and content that is working really well for me (you still need to adapt the files to suit your needs)
./setup.sh
./docker-compose-initiate.yaml
./docker-compose.yaml
./etc/nginx/templates/default.conf.template
./etc/nginx/templates-initiate/default.conf.template
The setup in 2 phases:
In the first phase, "the initiation phase", we will run an nginx container and a certbot container just to obtain the SSL certificate for the first time and store it in the host's ./etc/letsencrypt folder.
In the second phase, "the operation phase", we run all the necessary services for the app, including nginx, which this time uses the letsencrypt folder to serve HTTPS on port 443; a certbot container also runs (on demand) to renew the certificate, and we can add a cron job for that. The setup.sh script is a simple convenience script that runs the commands one after another:
#!/bin/bash
# the script expects two arguments:
# - the domain name for which we are obtaining the ssl certificate
# - the Email address associated with the ssl certificate
echo DOMAIN=$1 >> .env
echo EMAIL=$2 >> .env
# Phase 1 "Initiation"
docker-compose -f ./docker-compose-initiate.yaml up -d nginx
docker-compose -f ./docker-compose-initiate.yaml up certbot
docker-compose -f ./docker-compose-initiate.yaml down
# Phase 2 "Operation"
crontab ./etc/crontab
docker-compose -f ./docker-compose.yaml up -d
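A hypothetical invocation, given that the script expects the domain and the email address as its two arguments:
chmod +x setup.sh
./setup.sh example.com admin@example.com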
Phase 1: The ssl certificate initiation phase:
./docker-compose-initiate.yaml
version: "3"
services:
nginx:
container_name: nginx
image: nginx:latest
environment:
- DOMAIN
ports:
- 80:80
volumes:
- ./etc/nginx/templates-initiate:/etc/nginx/templates:ro
- ./etc/letsencrypt:/etc/letsencrypt:ro
- ./certbot/data:/var/www/certbot
certbot:
container_name: certbot
image: certbot/certbot:latest
depends_on:
- nginx
command: >-
certonly --reinstall --webroot --webroot-path=/var/www/certbot
--email ${EMAIL} --agree-tos --no-eff-email
-d ${DOMAIN}
volumes:
- ./etc/letsencrypt:/etc/letsencrypt
- ./certbot/data:/var/www/certbot
./etc/nginx/templates-initiate/default.conf.template
server {
listen [::]:80;
listen 80;
server_name $DOMAIN;
location ~/.well-known/acme-challenge {
allow all;
root /var/www/certbot;
}
}
Phase 2: The operation phase
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configuraitons}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/templates:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
      - /var/log/nginx:/var/log/nginx
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates/default.conf.template
server {
listen [::]:80;
listen 80;
server_name $DOMAIN;
return 301 https://$host$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name $DOMAIN;
ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://app:80;
}
}
The second method uses two Docker images, http-proxy and http-proxy-acme-companion, that were developed specifically for this purpose. I suggest looking at the blog post for further details.
As I see it, you have not exposed port 443 for the nginx container:
nginx:
  build: ./nginx
  ports:
    - 80:80
    - 443:443
  depends_on:
Add the 443 port as well.
I'm trying to set up a Docker environment using docker-compose with images of Rails (running Puma), nginx, MySQL, and Elasticsearch.
But when I try to call it using HTTParty.get('http://lvh.me:8888'), it fails and I get this error message:
Errno::ECONNREFUSED (Failed to open TCP connection to lvh.me:8888
(Connection refused - connect(2) for "lvh.me" port 8888))
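A couple of checks help to pin down where the refusal happens (service names are from the compose file below); note that lvh.me simply resolves to 127.0.0.1, so the request only works from the machine that actually publishes port 8888:
# is the nginx container up, and is host port 8888 really published?
docker-compose ps beecomnginx
# does the same request work from the host?
curl -v http://lvh.me:8888/
# what do nginx and the app log when the request arrives (or doesn't)?
docker-compose logs --tail=50 beecomnginx beecom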
My Docker-Compose file:
version: '3.3'
services:
  beecomredis:
    image: redis:4.0.8
    ports:
      - "6379:6379"
  beecomdb:
    image: mysql:5.7.21
    volumes:
      - ./mysql_data/mysql:/var/lib/mysql
    ports:
      - "6603:3306"
  beecomec:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    #container_name: elasticsearch
    environment:
      - http.cors.enabled=true
      - http.cors.allow-origin="*"
      - node.master=true
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elastic_data/elasticsearch/data:/usr/share/elasticsearch/data
      - ./elastic_data/elasticsearch/logs:/usr/share/elasticsearch/logs
    ports:
      - '9200:9200'
      - '9300:9300'
  beecomnginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8888:80"
  beecom:
    build: .
    command: foreman start
    volumes:
      - .:/beecom
    expose:
      - "3000"
    depends_on:
      - beecomdb
      - beecomredis
      - beecomec
      - beecomnginx
My nginx.conf file:
upstream rails_app {
server beecom:3000;
}
server {
# define your domain
server_name www.example.com;
# define the public application root
root $RAILS_ROOT/public;
index index.html;
# define where Nginx should write its logs
access_log $RAILS_ROOT/log/nginx.access.log;
error_log $RAILS_ROOT/log/nginx.error.log;
# deny requests for files that should never be accessed
location ~ /\. {
deny all;
}
location ~* ^.+\.(rb|log)$ {
deny all;
}
# serve static (compiled) assets directly if they exist (for rails production)
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
try_files $uri @rails;
access_log off;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
# send non-static file requests to the app server
location / {
try_files $uri @rails;
}
location @rails {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://rails_app;
}
}
My DockerFile:
FROM ruby:2.5.0-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
default-libmysqlclient-dev \
mysql-client \
libmagickwand-dev \
imagemagick \
curl \
git \
gnupg2 \
/sources.list.d/yarn.list \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -y --no-install-recommends nodejs
RUN gem update --system
RUN mkdir -p /beecom
WORKDIR /beecom
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
COPY package.json package.json
ENV RAILS_ENV development
ENV RACK_ENV development
RUN bundle install
RUN set :environment, 'development'
COPY config/puma.rb config/puma.rb
COPY . /beecom
EXPOSE 3000
CMD [ "foreman", "start" ]
And finally my Dockerfile-nginx:
# Base image:
FROM nginx
# Install dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# establish where Nginx should look for files
ENV RAILS_ROOT /beecom
# Set our working directory inside the image
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
# create log directory
RUN mkdir log
# copy over static assets
COPY ./public public/
# Copy Nginx config template
COPY ./config/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
RUN rm -rf /etc/nginx/sites-available/default
ADD config/nginx.conf /etc/nginx/sites-enabled/nginx.conf
EXPOSE 80
# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "nginx", "-g", "daemon off;" ]
Oh and I forgot to post my Puma config:
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum, this matches the default thread size of Active Record.
#
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count
# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 3 }
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!
# The code in the `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run, if you are using `preload_app!`
# option you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, Ruby
# cannot share connections between processes.
#
# on_worker_boot do
# ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
# end
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
Finally I found that adding a binding to my config/puma.rb file:
bind 'tcp://0.0.0.0:8888'
resolved my problem.
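A quick sanity check with Puma bound as above and the stack restarted:
docker-compose up -d
curl -I http://lvh.me:8888/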
UPDATED:
Finally, the best way I found to fix this was to change my nginx.conf like so:
upstream MyAppUpstream_website {
server website:3000;
}
server {
listen 443 ssl;
listen 80;
server_name lvh.me;
keepalive_timeout 900;
ssl_certificate /[key];
ssl_certificate_key /[key];
# define the public application root
root $RAILS_ROOT/public;
index index.html;
# define where Nginx should write its logs
access_log $RAILS_ROOT/log/nginx.access.log;
error_log $RAILS_ROOT/log/nginx.error.log;
# deny requests for files that should never be accessed
location ~ /\. {
deny all;
}
location ~* ^.+\.(rb|log)$ {
deny all;
}
# serve static (compiled) assets directly if they exist (for rails production)
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
try_files $uri @rails;
access_log off;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
# send non-static file requests to the app server
location / {
try_files $uri @rails;
}
location @rails {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://MyAppUpstream_website;
}
}
I am attempting to use an NGINX container to host a static web application. This container should also redirect certain requests (i.e. www.example.com/api/) to another container on the same network.
I am getting the "host not found in upstream" issue when calling docker-compose build, even though I am enforcing that the NGINX container is the last to be built.
I have tried the following solutions:
Enforcing a network name and aliases (as per Docker: proxy_pass to another container - nginx: host not found in upstream)
Adding a "resolver" directive (as per Docker Networking - nginx: [emerg] host not found in upstream and others), both for 8.8.8.8 and 127.0.0.11.
Rewriting the nginx.conf file to have the upstream definition before the location that will redirect to it, or after it.
I am running on a Docker for Windows machine that is using a mobylinux VM to run the relevant container(s). Is there something I am missing? It isn't obvious to me that the "http://webapi" address should resolve correctly, as the images are built but not running when you are calling docker-compose.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
upstream docker-webapi {
server webapi:80;
}
server {
listen 80;
server_name localhost;
location / {
root /wwwroot/;
try_files $uri $uri/ /index.html;
}
location /api {
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://docker-webapi;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
docker-compose:
version: '3'
services:
  webapi:
    image: webapi
    build:
      context: ./src/Api/WebApi
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61219:80"
  model.api:
    image: model.api
    build:
      context: ./src/Services/Model/Model.API
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61218:80"
  webapp:
    image: webapp
    build:
      context: ./src/Web/WebApp/
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - webapi
Dockerfile:
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80
RUN service nginx start
Your issue is this line:
RUN service nginx start
You never run the service command inside Docker, because there is no init system. Also, RUN commands are executed at build time.
The original nginx image has everything you need for nginx to start fine, so just remove that line and it will work.
By default the nginx image has the CMD instruction below:
CMD ["nginx" "-g" "daemon off;"]
You can easily find that out by running the below command
docker history --no-trunc nginx | grep CMD
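After removing the RUN service nginx start line, a rebuild and a short smoke test should show the container staying up with the image's default CMD (service name and port taken from the compose file above):
docker-compose build webapp
docker-compose up -d webapp
docker-compose ps webapp
curl -I http://localhost/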