I'm very new to Docker and Nginx, and this might be a stupid question, but how can I point my nginx container at the files in Rails' public/ directory? Basically, I have an nginx container and an application container, and I would like to know where to put those files so that the nginx container can read them.
version: "3"
services:
api:
build: "./api"
env_file:
- .env-dev
ports:
- "3000:3000"
depends_on:
- db
volumes:
- .:/app/api
command: rails server -b "0.0.0.0"
nginx:
build: ./nginx
env_file: .env-dev
volumes:
- .:/app/nginx
depends_on:
- api
links:
- api
ports:
- "80:80"
...
API Dockerfile:
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get install -qq -y \
    build-essential \
    libmysqlclient-dev \
    nodejs \
    --fix-missing \
    --no-install-recommends
ENV INSTALL_PATH /api
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile $INSTALL_PATH
RUN bundle install
COPY . .
EXPOSE 3000
Nginx Dockerfile:
FROM nginx
ENV INSTALL_PATH /nginx
RUN mkdir -p $INSTALL_PATH
COPY nginx.conf /etc/nginx/nginx.conf
# COPY ?
EXPOSE 80
nginx config (this is correctly being copied over)
daemon off;
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    # Rails API
    upstream api {
        # upstream "server" entries take host:port, not a URL
        server api:3000;
    }

    # Configuration for the server
    server {
        # Running port
        listen 80;

        # Proxy connections to the Rails API
        location /api {
            proxy_pass http://api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /50x.html;
        error_page 404 /404.html;

        location = /50x.html {
            root /api/public;
        }
        location = /404.html {
            root /api/public;
        }
    }
}
Now, when I go to localhost:80 it shows the generic nginx page. However, I'm unsure how to link the public dir of the Rails app (api/public/) to the nginx container.
Can I just COPY path/to/rails/public path/nginx? Where is nginx expecting to find those files?
Edit
I believe I should be putting them in /var/www/app_name, correct?
I think what you want to achieve can be done by mounting a shared volume into both container 'api' and container 'nginx', something like this:
version: "3"
services:
api:
image: apiimg
volumes:
- apivol:/path/to/statics
nginx:
image: nginximg
volumes:
- apivol:/var/www/statics
volumes:
apivol: {}
So there's a shared volume declared for all containers, apivol, which is mapped to /path/to/statics on your Rails container, and to /var/www/statics in your nginx container. This way you don't need to copy anything manually into the nginx container.
The default location for static content in the official nginx image is /usr/share/nginx/html, but you could put it in /var/www/app_name as long as you remember to add
root /var/www/app_name;
in the corresponding location block for your static content.
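For example, a minimal location block for the nginx container above (a sketch only; the /statics/ URL prefix is an assumption, adjust it to your app):

location /statics/ {
    # a request for /statics/foo.css is served from /var/www/statics/foo.css
    root /var/www;
}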
Related
I'm confused about making a connection from one nginx alpine container to another nginx alpine container; both run Laravel 9.
On the host I can access both using http://localhost:8080
and http://localhost:5001,
but when I try to use Guzzle in the frontend like this:
$response = $http->get('http://dashboard:5001');
the result is:
cURL error 7: Failed to connect to dashboard_service port 5001 after 0 ms:
Connection refused
When I curl from the frontend container to the dashboard container, the result is also connection refused. I can ping it, but curl does not work.
This is my docker-compose.yml:
version: "3.8"
networks:
networkname:
services:
frontend:
build:
context: .
dockerfile: ./file/Dockerfile
container_name: frontend
ports:
- 8080:80
volumes:
- ./frontend:/code
- ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
- ./.docker/php.ini-production:/etc/php8/php.ini
- ./.docker/nginx.conf:/etc/nginx/nginx.conf
- ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
networks:
- networkname
dashboard:
build:
context: .
dockerfile: ./file/Dockerfile
container_name: dashboard
ports:
- 5001:80
volumes:
- ./dashboard:/code
- ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
- ./.docker/php.ini-production:/etc/php8/php.ini
- ./.docker/nginx.conf:/etc/nginx/nginx.conf
- ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
networks:
- networkname
This is my Dockerfile:
FROM alpine:latest
WORKDIR /var/www/html/

# Essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor

# Installing PHP (note: no trailing backslash after the last package,
# or the following comment gets swallowed by the line continuation)
RUN apk add --no-cache php8 \
    php8-common \
    php8-fpm

# Installing composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php

# Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini

# Configure PHP
RUN mkdir -p /run/php/
RUN mkdir -p /test
RUN touch /run/php/php8.0-fpm.pid

CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
This is my nginx conf:
server {
    listen 80;
    server_name localhost;
    root /code/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
I'm not sure whether this has to be set up in Docker, in nginx, or in Alpine Linux.
Thanks.
If you make the request from inside the container's PHP, you don't need to use the port.
From PHP, use:
$response = $http->get('http://dashboard');
But if you make the request from outside the containers, use ip:port:
$response = $http->get('http://127.0.0.1:5001');
If you are connecting to a docker container within the same network, then use the internal port. If not, use the external port.
In your case, you are trying to connect to the dashboard container from within the networkname network, so try http://dashboard instead of http://dashboard:5001.
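As a quick illustration, a minimal Guzzle call for that setup (a sketch; it assumes Guzzle is installed via Composer, and that port 80 is the container-internal port implied by the 5001:80 mapping, so no port is needed):

<?php
require 'vendor/autoload.php';

// From inside the frontend container, address the service by its compose name.
$http = new \GuzzleHttp\Client();
$response = $http->get('http://dashboard');
echo $response->getStatusCode(); // expect 200 if the dashboard app is up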
I'm trying to run a platform built on Django, Docker, Nginx and Gunicorn on my Ubuntu server.
Before asking here, I read about my problem and added this to my nginx.conf:
location / {
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
...
}
Then, in my Gunicorn settings:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "-t", "90", "config.wsgi:application"]
The problem persists and the server always returns: 502 Bad Gateway. When I try to access
http://34.69.240.210:8000/admin/
from the browser, the server redirects to
http://34.69.240.210:8000/admin/login/?next=/admin/
but shows the error:
My Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "-t 90", "config.wsgi:application"]
My docker-compose.yml:
version: "3.8"
services:
django_app:
build: .
volumes:
- static:/code/static
- .:/code
nginx:
image: nginx:1.15
ports:
- 8000:8000
volumes:
- ./config/nginx/conf.d:/etc/nginx/conf.d
- static:/code/static
depends_on:
- django_app
volumes:
.:
static:
My Nginx file:
upstream django_server {
    server django_app:8000 fail_timeout=0;
}

server {
    listen 8000;
    server_name 34.69.240.210;
    keepalive_timeout 5;
    client_max_body_size 4G;

    location / {
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_pass http://django_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}
Any idea what I can do to fix it?
Thank you.
Well, for the record.
My problem was the database connection: my Docker container couldn't connect to the local Postgres database.
So I added this line to postgresql.conf:
listen_addresses = '*'
Then I added this line to pg_hba.conf:
host all all 0.0.0.0/0 md5
and restarted Postgres:
sudo service postgresql restart
My host's Postgres IP (the Docker bridge gateway):
172.17.0.1
Test from inside docker:
psql -U myuser -d databasename -h 172.17.0.1 -W
Done! :)
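For completeness, a minimal sketch of the matching Django settings.py database block (the database name, user, and password are placeholders, not values from the post):

# settings.py - point Django at the host's Postgres through the bridge gateway
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "databasename",  # placeholder
        "USER": "myuser",        # placeholder
        "PASSWORD": "secret",    # placeholder
        "HOST": "172.17.0.1",    # the Docker bridge gateway, as seen from the container
        "PORT": "5432",
    }
}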
I am deploying my Next.js / Nginx Docker image from Container Registry to Compute Engine.
Once deployed, the application is running as expected, but it's running on port 3000 instead of 80 - i.e. I want to access it at <ip_address> but I can only access it at <ip_address>:3000. I have set up a reverse proxy in Nginx to forward port 3000 to 80, but it does not seem to be working.
When I run docker-compose up locally, the app is accessible on localhost (rather than localhost:3000).
Dockerfile
FROM node:alpine as react-build
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install --global pm2
COPY . /usr/src/app
RUN npm run build
# EXPOSE 3000
EXPOSE 80
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
docker-compose.yml
version: '3'
services:
  nextjs:
    build: ./
  nginx:
    build: ./nginx
    ports:
      - 80:80
./nginx/Dockerfile
FROM nginx:alpine
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy config files
# *.conf files in "conf.d/" dir get included in main config
COPY ./default.conf /etc/nginx/conf.d/
./nginx/default.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;

server {
    listen 80;

    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    # BUILT ASSETS (E.G. JS BUNDLES)
    # Browser cache - max cache headers from Next.js as build id in url
    # Server cache - valid forever (cleared after cache "inactive" period)
    location /_next/static {
        #proxy_cache STATIC;
        proxy_pass http://localhost:3000;
    }

    # STATIC ASSETS (E.G. IMAGES)
    # Browser cache - "no-cache" headers from Next.js as no build id in url
    # Server cache - refresh regularly in case of changes
    location /static {
        #proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://localhost:3000;
    }

    # DYNAMIC ASSETS - NO CACHE
    location / {
        proxy_pass http://localhost:3000;
    }
}
I found it very strange: I had set up a Rails app, a Postgres db, and an nginx server for production only, but nginx is only able to start if I type
docker-compose -f docker-compose.yml -f production.yml up --build
but not with the pre-built images:
docker-compose -f docker-compose.yml -f production.yml build
then
docker-compose up
The Rails app and db start just fine; it's just that nginx is not started and the port reverts back to 3000 instead of 80, which I find very strange. Aren't they doing the same thing?
nginx.conf
# This is a template. Referenced variables (e.g. $INSTALL_PATH) need
# to be rewritten with real values in order for this file to work.
upstream rails_app {
    server unix:///webapp/tmp/sockets/puma.sock;
}

server {
    listen 80;

    # define your domain
    server_name 127.0.0.1 localhost www.example.com;

    # define the public application root
    root /providre_api/public;

    # define where Nginx should write its logs
    access_log /providre_api/log/nginx.access.log;
    error_log /providre_api/log/nginx.error.log;

    # deny requests for files that should never be accessed
    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(rb|log)$ {
        deny all;
    }

    # serve static (compiled) assets directly if they exist (for rails production)
    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on; # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    # send non-static file requests to the app server
    location / {
        try_files $uri @rails;
    }

    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails_app;
    }
}
web.Dockerfile
# Base image:
FROM nginx
# Install dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# establish where Nginx should look for files
ENV INSTALL_PATH /providre_api
# Set our working directory inside the image
WORKDIR $INSTALL_PATH
# create log directory
RUN mkdir log
# copy over static assets
COPY public public/
# Copy Nginx config template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$INSTALL_PATH' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "nginx", "-g", "daemon off;" ]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    restart: always
    ports:
      - "5433:5432"
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: ''
  app:
    command: bundle exec puma -C config/puma.rb
    ports:
      - "3000"
    depends_on:
      - db
docker-compose.override.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      - .:/providre_api
    ports:
      - "3000:3000"
production.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/prod.Dockerfile
    volumes:
      - .:/providre_api
    ports:
      - "3000"
  nginx:
    container_name: web
    build:
      context: .
      dockerfile: ./docker/web/web.Dockerfile
    depends_on:
      - app
    volumes:
      - ./docker/web/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80
Sorry, my bad: I didn't test it fully. A plain docker-compose up only uses the normal docker-compose.yml (plus docker-compose.override.yml), so I have to repeat the flags and use docker-compose -f docker-compose.yml -f production.yml up instead.
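One way to confirm which files are being merged is to render the effective configuration (a quick check, not part of the original workflow):

# shows the merged config that "up" would use with both files
docker-compose -f docker-compose.yml -f production.yml config

# without -f flags, only docker-compose.yml plus docker-compose.override.yml apply
docker-compose config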
I'm having problems with nginx reverse proxying as a Docker container. My question is: how do I correctly set up nginx proxy passing on a default Docker network?
Here's my docker-compose.yml (unnecessary details omitted for brevity)
version: '3'
networks:
  nginx_default:
    external: true
services:
  db:
    image: postgres:10.2
    ports:
      - "5432:5432"
    environment: ...
  postgrest:
    image: postgrest/postgrest
    ports:
      - "3000:3000"
    environment: ...
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/sites-enabled/ruler
    command: [nginx-debug, '-g', 'daemon off;']
  webapp:
    build:
      context: "./"
      dockerfile: Dockerfile.dev
    volumes: ...
    ports:
      - "3001:3001"
    environment: ...
Here's my nginx.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name _;

    gzip on;
    gzip_proxied any;
    gzip_types text/plain text/xml text/css application/x-javascript;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    location / {
        try_files $uri @node;
    }

    location /api/ {
        try_files $uri @postgrest;
    }

    location @node {
        proxy_pass http://webapp:3001;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_cache_control;
        add_header X-Proxy-Cache $upstream_cache_status;
    }

    location @postgrest {
        proxy_pass http://postgrest:3000;
        proxy_http_version 1.1;
        default_type application/json;
        proxy_set_header Connection "";
        proxy_hide_header Content-Location;
        add_header Content-Location /api/$upstream_http_content_location;
    }
}
And my Dockerfile.dev
FROM node:8.9
WORKDIR /client
CMD npm run dev -- -p 3001
When I do $ docker-compose up -d everything starts without an error. After that I can successfully do $ curl http://127.0.0.1:3001/ (webapp) and $ curl http://127.0.0.1:3000 (postgrest).
But when I try $ curl http://127.0.0.1:8080/ (where nginx should handle the proxying) I get the default nginx welcome page. Also $ curl http://127.0.0.1:8080/api/ is not hitting the API :/
What may be the cause? Using $ docker inspect I see that every container is on the same default network.
Edit: judging by $ docker-compose logs, the nginx_default network is not used at all O_o
docker-compose logs
WARNING: Some networks were defined but are not used by any service: nginx_default
Attaching to ruler_webapp_1, ruler_nginx_1
webapp_1 |
webapp_1 | > ruler# dev /client
webapp_1 | > next "-p" "3001"
webapp_1 |
webapp_1 | > Using external babel configuration
webapp_1 | > Location: "/client/.babelrc"
webapp_1 | DONE Compiled successfully in 1741ms09:04:49
webapp_1 |
My guess is you mapped your local nginx configuration file to the wrong file on the container side. The default configuration file for the nginx image is located at /etc/nginx/conf.d/default.conf so the volume of the nginx container should be:
./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
You can check your configuration file is used correctly by executing:
docker-compose exec nginx nginx -T
Side notes:
Never use the latest tag, because at some point you may face compatibility breakage. Use a fixed version tag (1, 1.13, etc.) instead.
You don't need to publish ports everywhere, e.g. 3000:3000, 3001:3001. Those ports are already accessible internally between containers, as sketched below.
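For instance, a trimmed sketch of that idea (only the relevant keys shown; everything else stays as in your compose file):

services:
  nginx:
    ports:
      - "8080:80"   # only nginx needs a host-published port
  webapp:
    build: .
    # no "ports:" entry needed; nginx still reaches the container
    # at http://webapp:3001 over the shared network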
Your config is a partial config, not a complete nginx config, so it needs to go inside conf.d in the container, not to nginx.conf or sites-enabled. So change
volumes:
  - ./nginx/nginx.conf:/etc/nginx/sites-enabled/ruler
to
volumes:
  - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
And now it should start working.
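If it still serves the welcome page after the change, recreate the container so the new mount is picked up, then dump the loaded config (a suggested check, not from the original answer):

docker-compose up -d --force-recreate nginx
docker-compose exec nginx nginx -T   # prints the configuration nginx actually loaded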