I'm currently working on a Next.js project over an SSH connection (I need to work over SSH because of cookie issues with my API requests).
I also use Docker to build an image for React, plus a web service because I'm using an nginx server. When I start my services, the app loads, I have access to it, and when I make a change, it works. BUT I have to reload the browser tab to see the change. Apparently my web service doesn't like webpack's HMR; I get this log from it:
web_1 | 192.168.10.1 - - [25/Mar/2022:08:45:03 +0000] "GET /_next/webpack-hmr HTTP/1.1" 404 936 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.82 Safari/537.36"
Here is my docker-compose.yml:
version: '3'

services:
  web:
    networks:
      - webgateway
      - default
    build: ./docker/web
    depends_on:
      - react
    volumes:
      - $PWD/docker/web/etc/nginx.conf:/etc/nginx/nginx.conf
      - $PWD/docker/web/etc/default.conf:/etc/nginx/conf.d/default.conf
    labels:
      traefik.enable: true
      traefik.http.routers.test.tls: false

  react:
    networks:
      - default
    build: ./frontend
    environment:
      HOST_LOCAL: $HOST_LOCAL
      COMPOSE_PROJECT_NAME: $COMPOSE_PROJECT_NAME
    env_file:
      - .local
    volumes:
      - ./frontend:/opt/services/react

networks:
  webgateway:
    external: true
Here is the configuration for my web service:
docker/web/Dockerfile :
FROM nginx:1.13-alpine
RUN apk update && apk add bash
docker/web/etc/default.conf :
upstream app {
    server react:3000;
}

server {
    listen 80;
    charset utf-8;
    client_max_body_size 20M;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location /api/v {
        # checks for static file, if not found proxy to api
        try_files $uri @proxy_to_api;
    }

    location @proxy_to_app {
        proxy_connect_timeout 600s;
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
docker/web/etc/nginx.conf :
user root;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Thanks in advance for your time.
I've figured it out: it's a Next.js/webpack HMR configuration issue, nothing to do with the Docker or nginx config.
Using a middleware to refresh the modules fixed my issue.
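(For anyone whose HMR problem really is on the proxy side: /_next/webpack-hmr is a websocket endpoint, and nginx will not forward the protocol upgrade unless told to. A hedged sketch of the extra location block, reusing the upstream app name from the config above:)

```nginx
# Hypothetical addition to default.conf: forward HMR websocket traffic.
location /_next/webpack-hmr {
    proxy_pass http://app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
}
```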
Related
I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine - the connection always times out. (For example, I tried doing a curl https://<mypublicdomain>.com)
Referring to the containers by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work OK.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"

networks:
  mainnet:
    driver: bridge

services:
  my-gateway:
    container_name: my-gateway
    image: aturok/manuwor_gateway:latest
    restart: always
    networks:
      - mainnet
    ports:
      - 80:80
      - 443:443
    expose:
      - "443"
    volumes:
      - /var/stuff:/var/www
      - /var/certs:/certsdir
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant - I would, for example, spin up a nettools container not connected to the mainnet network and still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
nginx.conf (normally it comes with a bunch of env vars; here they are replaced, and the irrelevant backend section removed):
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name mydomain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;

        server_tokens off;

        ssl_buffer_size 8k;
        ssl_dhparam /dhparam-2048.pem;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        root /var/www/;
        index index.html;

        location / {
            root /var/www;
            try_files $uri /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere
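A common workaround for this kind of hairpin problem is to give the nginx container a network alias equal to the public domain, so that Docker's internal DNS resolves the name straight to the container instead of routing out through the host's public IP. A sketch, not tested against this exact setup (the alias value mydomain.com is assumed from the config above):

```yaml
# Hypothetical fragment: containers on mainnet now resolve
# mydomain.com to the gateway container itself.
services:
  my-gateway:
    networks:
      mainnet:
        aliases:
          - mydomain.com
```

Note that in-network clients would then hit the container's ports directly, bypassing any host-level port mapping or firewall rules, which may or may not be acceptable for a monitoring use case that is meant to see the server "from the outside".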
Trying to deploy a React app with nginx in Docker, and I can't get subfolders working. I have read all the suggestions for very similar cases here and still have no result. I have a docker-compose container running nginx with a custom config and the port mapping 9999:80. On attempting to visit any subfolder directly, I get a 404 from nginx. Attaching my nginx config.
What is in the log of nginx container on attempt to get /statistics subfolder:
frontend_1 | 172.21.0.1 - - [18/Sep/2019:14:01:27 +0000] "GET /statistics HTTP/1.1" 404 556 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" "-"
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        root /usr/share/nginx/html;

        location / {
            try_files $uri /index.html;
        }
    }
}
In my case it helped to change the listen port from the default 80 to 8080:
server {
    listen 8080;
    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
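Note that this config also adds $uri/ to the fallback chain, which is likely doing part of the work. Spelled out with comments (nginx treats # to end of line as a comment, so the directive can be annotated inline):

```nginx
location / {
    try_files $uri          # 1. serve the exact file if it exists (e.g. /main.js)
              $uri/         # 2. otherwise try it as a directory (serves its index)
              /index.html;  # 3. otherwise hand the route to the SPA entry point
}
```

With only $uri /index.html, a request for /statistics falls straight through to index.html; that alone should fix the 404, so the $uri/ element is a refinement rather than the fix itself.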
I am really stumped and could use help figuring out why my environment variables aren't making it from Docker into my nginx config files.
I have a docker-compose.yml
nginx:
  image: nginx
  container_name: proxier
  volumes:
    - ./conf/nginx.conf:/etc/nginx/nginx.conf
    - ./conf/server.nginx.conf.tpl:/etc/nginx/server.nginx.conf.tpl
    - ./build/web:/srv/static:ro
    - ./docker/proxier:/tmp/docker
  ports:
    - "80:80"
    - "443:443"
  environment:
    - HOST_EXTERNAL_IP=localhost
    - DEVSERVER_PORT=8000
    - DEVSERVICE_PORT=5000
  command: /bin/bash -c "env && envsubst '$$HOST_EXTERNAL_IP $$DEVSERVER_PORT $$DEVSERVICE_PORT' < /etc/nginx/server.nginx.conf.tpl > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
I have an nginx.conf file
user nginx;
worker_processes 1;

error_log /dev/stdout warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 100g;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    sendfile off;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_tokens off;

    upstream app {
        server myapp:8000 fail_timeout=0;
    }

    include /etc/nginx/server.nginx.conf.tpl;
}
I have a server.nginx.conf.tpl file
server {
    listen 80;
    listen 443 ssl http2 default_server;
    server_name localhost;
    index index.html;

    location ^~ /services/ {
        proxy_pass https://myurl.com;
        proxy_set_header USER_DN $ssl_client_s_dn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
        proxy_pass http://${HOST_EXTERNAL_IP}:${DEVSERVER_PORT}; # Won't read environment variables here
    }
}
When I run this, however, I get the error:
nginx: [emerg] unknown "host_external_ip" variable
I am using envsubst correctly to pass the environment variables from Docker, per the docs.
Do not copy nginx.conf directly. Instead, create a shell file that generates the nginx config, e.g.
echo 'your nginx conf goes here with $envVariable' > location/to/conf/folder/nginx.conf
and run that file inside the container. When that shell file runs, it will replace the environment variables you set with their actual values in nginx.conf.
Do not forget to escape the $ of nginx's own variables.
I have an environment where two Tomcat containers are exposed, say dev and test, on ports 8080 and 8081 respectively. I am able to access the Tomcat instances with the host and port combinations below:
http://<ip>:8080
http://<ip>:8081
Now I am trying to set up an nginx container as a proxy to send all /dev requests to the dev (8080) container and all /test requests to the test (8081) container.
Below is my docker-compose.yml
version: "3.5"
services:
  web1:
    image: "tomcat:latest"
    container_name: "web1"
    ports:
      - "8080:8080"
  web2:
    image: "tomcat:latest"
    container_name: "web2"
    ports:
      - "8081:8080"
  nginx:
    image: "nginx:latest"
    container_name: "nginx"
    ports:
      - "8000:80"
    volumes:
      - "./nginx.conf:/etc/nginx/nginx.conf"
      #- "./default.conf:/etc/nginx/conf.d/default.conf"
Below is my nginx.conf file
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;

        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Scheme $scheme;

        client_max_body_size 0;

        location / {
        }

        location /dev {
            proxy_pass http://35.239.73.252:8080/;
        }

        location /test {
            proxy_pass http://35.239.73.252:8081/;
        }
    }
}
Now the problem: when I load my Tomcat containers directly, they work fine. But when they are accessed through nginx via the URI paths /dev and /test, the pages are broken and images and CSS are not loaded.
What could the issue be, and how do I fix it?
I believe you need a trailing "/" after both your location path and your target. Here's a working example from a project of mine:
location ^~ /ll/ {
    proxy_pass http://werther:8080/;
}
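Applied to the question's config, that would look like the sketch below. One caveat: this only strips the prefix on the way in; if the Tomcat apps emit absolute links such as /css/app.css in their HTML, those links still bypass the /dev/ prefix on the way back, and no amount of slash-placement will fix that.

```nginx
# Trailing slash on both sides: a request for /dev/foo is
# forwarded upstream as /foo (the location prefix is replaced
# by the proxy_pass URI).
location /dev/ {
    proxy_pass http://35.239.73.252:8080/;
}

location /test/ {
    proxy_pass http://35.239.73.252:8081/;
}
```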
I have two Docker containers on the same network. One of them is a Spring Boot server app, the other is a React client app. I'm trying to get the client to make AJAX calls to the server. When I run them both locally on my machine, outside of Docker, everything works. When I run them with my Docker configuration and an nginx proxy, I get 502 Bad Gateway errors.
Here is my docker-compose configuration:
version: '3'
services:
  video-server:
    build:
      context: .
      dockerfile: video-server_Dockerfile
    container_name: video-server
    networks:
      - videoManagerNetwork
    environment:
      - VIDEO_MANAGER_DIR=/opt/videos
    volumes:
      - ${VIDEO_MANAGER_DIR_PROD}:/opt/videos
  video-client:
    build:
      context: .
      dockerfile: video-client_Dockerfile
    container_name: video-client
    networks:
      - videoManagerNetwork
    ports:
      - 9000:80
networks:
  videoManagerNetwork:
As you can see, both containers are given explicit names and are on the same network. video-client is the Nginx React app, video-server is the Spring Boot app.
Here is my Nginx config:
worker_processes auto;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $upstream_addr '
                           '"$http_referer" "$http_user_agent"';

    include /etc/nginx/mime.types;
    default_type text/plain;

    server {
        listen 80;

        # TODO make sure the log is written to a docker volume
        access_log /var/log/nginx/access.log compression;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            proxy_set_header Host $http_host;
            proxy_pass http://video-server:8080/api/;
        }

        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
            expires 1M;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~* \.(?:css|js)$ {
            try_files $uri =404;
            expires 1y;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~ ^.+\..+$ {
            try_files $uri =404;
        }
    }
}
As you can see, I'm proxying all calls to /api/ to my video-server container. This should be working. I even shelled into the video-client container (docker exec -it video-client bash), installed curl, and was able to successfully make calls to the other container, e.g. http://video-server:8080/api/categories.
I'm looking for suggestions about what the problem with my configuration could be. I'm not particularly experienced with Nginx, so I'm assuming I'm doing something wrong there.
Edit
I finally figured out what was necessary to make this work. I would still be interested to understand why this helps.
I added the following lines to the "http" section of the Nginx config, and the problem was solved:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
So it looks like this changed the buffer and timeout settings. Why did this help?