Below is my nginx.conf
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        access_log /var/log/nginx/access.log main;
        location /beta/ {
            proxy_pass http://localhost:9001;
        }
        location /qa/ {
            proxy_pass http://localhost:9002;
        }
        location /alpha/ {
            proxy_pass http://localhost:9003;
        }
        location / {
            proxy_pass http://www.google.com;
        }
    }
}
and below is my docker-compose.yml
version: '3'
services:
  Reverse-proxy:
    image: nginx
    ports:
      - 80:80
    volumes:
      - /nginx.conf:/etc/nginx/nginx.conf
    restart: always
  GQLbeta:
    image: gql-beta
    ports:
      - 9001:80
    restart: always
  GQLqa:
    image: gql-qa
    ports:
      - 9002:80
    restart: always
  GQLalpha:
    image: gql-alpha
    ports:
      - 9003:80
    restart: always
When I run docker-compose up -d, all the containers run fine.
Then I went to localhost:80 in my browser, but it did not show the Google page I expected.
And when I went to localhost/beta, it showed
502 Bad Gateway
where I expected it to go to localhost:9001.
Why did this happen? Am I missing something in my setup?
localhost inside a Docker container is the container itself, so you should give names to your app containers and reference them as upstreams; that will fix your 502. For the default location, try this:
location / {
    return 301 http://google.com;
}
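For example, a minimal sketch of the idea; the lowercase service names here are assumptions, and any valid names work as long as proxy_pass matches them:

services:
  reverse-proxy:
    image: nginx
    ports:
      - 80:80
    volumes:
      # note: /nginx.conf in the original file points at the root of the
      # host filesystem; ./nginx.conf (relative to the compose file) is
      # probably what was intended
      - ./nginx.conf:/etc/nginx/nginx.conf
  gql-beta:
    image: gql-beta

And in nginx.conf, proxy to the service name, which Docker's embedded DNS resolves on the default compose network:

location /beta/ {
    proxy_pass http://gql-beta:80/;
}

With this setup the app containers no longer need to publish 9001-9003 to the host; nginx reaches them on their container port 80 over the shared network.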
Related
I use nginx in a Docker container to connect my two WordPress websites, which are dockerized too.
I can set up one website with the following settings:
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/html
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In my NGINX conf file located in /nginx/conf.d
server {
    [...]
    root /var/www/html/;
    [...]
}
However, when I try to add a 2nd website by changing the root, the websites return a 404.
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/web_ndnb_prod
    - ./web_ndnb_test/src:/var/www/web_ndnb_test
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In one of the 2 NGINX conf files
server {
    [...]
    root /var/www/web_ndnb_prod/;
    [...]
}
If I execute
sudo docker exec -ti nginx ls /var/www/web_ndnb_prod
it outputs the WordPress files correctly.
Why does Nginx not find them?
Edit 1
The main nginx.conf file is
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
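For what it's worth, when two conf.d files each declare a server on the same port, nginx routes requests by server_name and falls back to the first declared block for unmatched hosts, so the wrong root can be served even though the files are mounted correctly. A sketch of two blocks that keep the sites apart; the server_name values here are hypothetical:

server {
    listen 80;
    server_name prod.example.com;   # hypothetical host name
    root /var/www/web_ndnb_prod;
    index index.php index.html;
}

server {
    listen 80;
    server_name test.example.com;   # hypothetical host name
    root /var/www/web_ndnb_test;
    index index.php index.html;
}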
I am currently running an application with the following components, all in Docker containers on an AWS server:
AngularJS
NodeJS
nginx (using the jwilder/nginx-proxy image)
letsencrypt (using the jrcs/letsencrypt-nginx-proxy-companion:v1.12 image)
The application allows me to upload files from the frontend, which sends them to an API endpoint in the nodeJs backend. Multiple files can be uploaded together, and these are all base64 encoded and sent in the same request.
When the files are small (up to about 5Mb total) this works perfectly fine, but recently I've tried slightly larger files (still less than 10Mb total) and I am experiencing the following error in my Chrome browser:
{"message":"request entity too large","additionalMessage":null,"dictionaryKey":"server_errors.general_error","hiddenNotification":false,"handledError":false}
Inspecting the traffic, I realised that the request was never making it to my backend, and thus assume that this error is caused by nginx blocking the request, presumably via the client_max_body_size property in my nginx config. Checking the logs of the nginx docker container, it doesn't actually show me any errors, but does warn that a tmp file is used (note I have masked IPs and URLs):
nginx.1 | 2021/11/12 03:19:15 [warn] 389#389: *9 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001, client: 100.00.00.000, server: my-host-url.com, request: "POST /api/docs/upload HTTP/2.0", host: "my-host-url.com", referrer: "https://my-host-url.com/"
After some googling, I found and followed this article https://learn.coderslang.com/0018-how-to-fix-error-413-request-entity-too-large-in-nginx/ which explains the issue pretty clearly and even references the same docker image that I use. Unfortunately this has not fixed the issue. I also read the nginx docs which show that this property can be applied at
http, server, location
level, and so I updated my nginx config accordingly and restarted nginx on its own, and also shut down and restarted the containers. Still no luck :(
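One way to verify which limit actually applies is to dump the effective configuration from inside the running proxy (a sketch; substitute the real container name, since compose may generate one such as myapp_nginx-proxy_1):

# nginx -T prints the full merged config, including any generated
# vhost files, so every client_max_body_size in effect shows up
docker exec nginx-proxy nginx -T | grep -n client_max_body_size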
My docker/nginx config is as follows, noting that I am now using client_max_body_size 0; to completely disable the check instead of just increasing the size.
Docker compose
version: '2.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    environment:
      # DEBUG: "true"
      DEFAULT_HOST: my-host-url.com
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:ro
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/nginx.conf
    sysctls:
      - net.core.somaxconn=65536
    mem_limit: 200m
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    networks:
      - common
    restart: 'always'
    logging:
      options:
        max-size: 100m
        max-file: '5'
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:v1.12
    depends_on:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:rw
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
    # environment:
    #   DEBUG: "true"
    networks:
      - common
    restart: 'always'
nginx.conf copied to container
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    client_max_body_size 0;
    proxy_connect_timeout 301;
    proxy_send_timeout 301;
    proxy_read_timeout 301;
    send_timeout 301;
    include /etc/nginx/conf.d/*.conf;
    server {
        client_max_body_size 0;
        location / {
            client_max_body_size 0;
        }
    }
}
daemon off;
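One thing worth checking, though it is not from the original post: jwilder/nginx-proxy generates per-vhost server blocks from its templates, so a limit set only in the hand-mounted nginx.conf can be overridden by the generated config. Its README documents two drop-in points, /etc/nginx/conf.d/ for proxy-wide settings and /etc/nginx/vhost.d/<virtual host> for per-host settings. A minimal proxy-wide sketch:

# my_proxy.conf: applies to every generated vhost;
# 0 disables the body-size check entirely
client_max_body_size 0;

mounted in the compose file alongside the existing volumes:

- ./my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro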
I have a very simple PHP app built with Docker Compose using NGINX and BusyBox. It's working fine locally, but I can't get it to start up properly in its container on AWS ECS. The app structure is as follows:
Root
--conf
    nginx.conf
    php.ini
    supervisord.conf
--logs
--sites
    default.vhost
--www
    --default
        index.php
aws-compose.yml
docker-compose.yml
Dockerfile
The important files are:
aws-compose.yml
version: '2'
services:
  web:
    image: nginx:alpine
    cpu_shares: 100
    mem_limit: 262144000
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - php
    volumes:
      - /sites:/etc/nginx/conf.d
      - /conf/nginx.conf:/etc/nginx/nginx.conf
    volumes_from:
      - code
  php:
    image: '195367337191.dkr.ecr.us-east-1.amazonaws.com/myapp-php-dev:rv1'
    cpu_shares: 100
    mem_limit: 262144000
    build: .
    restart: always
    working_dir: /var/www
    volumes_from:
      - code
  code:
    image: busybox
    tty: true
    volumes:
      - /www:/var/www
Dockerfile:
FROM php:7.1-fpm
RUN pecl install xdebug
RUN docker-php-ext-enable xdebug
COPY conf/php.ini /etc/php/7.1/fpm/conf.d/40-custom.ini
Conf/nginx.conf:
user root;
worker_processes 1;
# daemon off;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    include /etc/nginx/conf.d/*;
}
Conf/php.ini:
; Enable XDebug
zend_extension = xdebug.so
; XDebug configuration
xdebug.remote_enable = 1
xdebug.renite_enable = 1
xdebug.max_nesting_level = 1000
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "/var/log"
date.timezone = "America/New_York"
; Show PHP errors
display_errors = 1
Conf/supervisord.conf:
[supervisord]
nodaemon=true
[program:nginx]
command = /usr/sbin/nginx
user = root
autostart = true
[program:php7-fpm]
command = /usr/sbin/php-fpm7.0 -FR
user = root
autostart = true
Sites/default.vhost:
server {
    server_name default;
    root /var/www/default;
    index index.html index.php;
    client_max_body_size 100M;
    fastcgi_read_timeout 1800;
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
        access_log off;
    }
    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;
    }
}
The issue seems to be with the references in the volumes section of the "web" service in aws-compose.yml, as when I remove them completely the container starts, but shows the default Nginx "needs configuration" holding page. With them in situ as below:
volumes:
  - /sites:/etc/nginx/conf.d
  - /conf/nginx.conf:/etc/nginx/nginx.conf
I get the following error in the tasks section of AWS ECS under "stopped":
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\"/conf/nginx.conf\\" to rootfs \\"
For info, in my local docker-compose.yml file the references are as follows:
- ./sites:/etc/nginx/conf.d
- ./conf/nginx.conf:/etc/nginx/nginx.conf
But this errors when I spin it up on AWS, stating the syntax doesn't conform to the required regex pattern.
Can anybody help?
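For reference (not from the original post): with the classic ECS compose support, host volume paths must be absolute paths on the container instance, so /conf/nginx.conf is looked up at the root of the ECS host, where it does not exist; Docker then creates a directory of that name and fails to mount a directory over a file, which matches the OCI runtime error above. A sketch of one workaround, assuming the config can be baked into a custom image instead of bind-mounted:

# Dockerfile.web (hypothetical name): build and push this image,
# then reference it from aws-compose.yml instead of nginx:alpine
FROM nginx:alpine
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY sites/ /etc/nginx/conf.d/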
I am having a hard time figuring out the configuration to load a locally running dockerized web app on a domain.
Below are:
docker-compose.yml
version: "3"
services:
ui:
build: ./ui
volumes:
- ./ui:/app/sr
container_name: ui
ports:
- "4200:4200"
networks:
- webnet
links:
- api
api:
build: ./api
ports:
- "0.0.0.0:5000:5000"
volumes:
- ./api:/app
container_name: api
networks:
- webnet
networks:
webnet:
nginx/conf.d/ui.example.conf
server {
    listen 80;
    #listen [::]:80;
    server_name ui.example.de www.ui.example.de;
    location / {
        proxy_pass http://ui:4200/;
        #proxy_buffering off;
        #proxy_set_header X-Real-IP $remote_addr;
    }
}
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
It runs on the local machine under the IP
http://138.246.XXX.XX:4200 as well as http://138.246.XXX.XX
But when I try to access it through the web with http://ui.example.com, it gives Error 503.
I also tried the IP from the docker network and my machine IP, i.e. http://138.246.XXX.XX:4200, in the proxy_pass in ui.example.conf.
[NOTE]: I removed default from nginx/sites-enabled. That is now empty, as I am only trying a reverse proxy with nginx.
Does anyone have any idea what I am missing here?
Try checking the ELB health check status by connecting to the container.
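If the nginx.conf above belongs to an nginx installed on the host (the www-data user and the sites-enabled layout suggest a stock Debian/Ubuntu install), then the name ui only resolves inside the compose network, not on the host, and the proxy cannot reach the app. A sketch of ui.example.conf pointed at the published port instead, assuming the "4200:4200" mapping stays as above:

server {
    listen 80;
    server_name ui.example.de www.ui.example.de;
    location / {
        # the host can always reach the container through the port
        # published by docker-compose
        proxy_pass http://127.0.0.1:4200/;
        proxy_set_header Host $host;
    }
}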
I'm using Ansible to deploy nginx-vts, Prometheus, an exporter, and PHP. Everything works fine, but nginx keeps stopping and exiting. If I run the same images with docker-compose they run normally. The nginx image is based on Alpine.
This is my playbook:
- hosts: localhost
  connection: local
  tasks:
    - name: x-vts
      docker_container:
        name: x-nginx
        image: x:latest
        state: started
        volumes:
          - ./php:/var/www/html/x.com
          - ./site.conf:/etc/nginx/conf.d/x.com.conf:ro
        ports:
          - 80:80
    - name: php
      docker_container:
        name: x-php
        image: php:fpm
        state: started
        volumes:
          - ./php:/var/www/html/x.com
    - name: nginx-vts-exporter
      docker_container:
        name: x-Exporter
        image: sophos/nginx-vts-exporter:latest
        state: started
        ports:
          - 9913:9913
        command:
          - NGINX_HOST=http://nginx:80
    - name: prom
      docker_container:
        name: x-prometheus
        image: prom/prometheus:latest
        state: started
        ports:
          - 9090:9090
        volumes:
          - ./monitor/prometheus.yml:/etc/prometheus/prometheus.yml
This is my nginx config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    vhost_traffic_status_zone;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json_combined escape=json '{"time_local":"$time_local", '
        '"proxy_addr":"$remote_addr", '
        '"remote_addr":"$http_x_forwarded_for", '
        '"remote_user":"$remote_user", '
        '"request":"$request", '
        '"status":"$status", '
        '"body_bytes_sent":"$body_bytes_sent", '
        '"request_time":"$request_time", '
        '"upstream_connect_time":"$upstream_connect_time", '
        '"upstream_header_time":"$upstream_header_time", '
        '"upstream_response_time":"$upstream_response_time", '
        '"http_referrer":"$http_referer", '
        '"http_user_agent":"$http_user_agent"}';
    access_log /dev/stdout json_combined;
    error_log /dev/stderr info;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 11050;
        server_name nginx_vts_status
        access_log off;
        location /status {
            vhost_traffic_status_bypass_limit on;
            vhost_traffic_status_bypass_stats on;
            vhost_traffic_status_display;
            vhost_traffic_status_display_format json;
        }
    }
}
IMPORTANT NOTE:
If I delete the line
include /etc/nginx/conf.d/*.conf;
the image launches, but it does not work correctly. Is there anything wrong?
It looks like you have a syntax error in your nginx configuration. Consult the docker logs (e.g. docker logs nginx) to see nginx's stdout; it usually shows exactly which line fails. Note also that server_name nginx_vts_status in the status server block is missing its trailing semicolon, so nginx will parse the following access_log and off as additional server names instead of disabling access logging.
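A quick way to check, as a sketch; the container name x-nginx and image x:latest are taken from the playbook above, and this assumes the image keeps the stock nginx entrypoint:

# show nginx's startup output from the exited container
docker logs x-nginx

# validate the configuration without starting the whole stack
docker run --rm \
  -v "$(pwd)/site.conf:/etc/nginx/conf.d/x.com.conf:ro" \
  x:latest nginx -t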