I'm writing APK upload functionality in my React app. Every time I try to upload an .apk file bigger than 10 MB, I get back an error: 413 Request Entity Too Large. I've already used the client_max_body_size 888M; directive. Can anybody explain to me what I'm doing wrong?
Here is my nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/errors.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/accesses.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        listen [::]:80;
        server_name up-app.unpluggedsystems.app;
        # return 301 http://$server_name$request_uri;
        root /usr/share/nginx/html;
        index index.html;
        client_max_body_size 888M;

        location /api/7/apps/search {
            proxy_pass http://ws75.aptoide.com/api/7/apps/search;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location ~ \.(apk)$ {
            proxy_pass https://pool.apk.aptoide.com;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /api {
            proxy_pass https://up-app.unpluggedsystems.app;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
And the Dockerfile that I use (maybe something is wrong here?):
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY create-env-file.sh ./create-env-file.sh
RUN npm install
RUN npm install react-scripts@3.4.1 -g
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY fullchain.crt /etc/ssl/fullchain.crt
COPY unpluggedapp.key.pem /etc/ssl/unpluggedapp.key.pem
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I'm a junior front-end dev, and I'm weak in these things. Maybe I put client_max_body_size in the wrong place?
Move client_max_body_size under the http {} section:
http {
    # some code here
    sendfile on;
    client_max_body_size 888M;
    # ...
}
Make sure to reload or restart nginx after modifying the configuration file.
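Since nginx runs inside a container here, you can validate and reload the configuration without rebuilding the image (a sketch; replace the container name with yours):

# check the configuration syntax, then reload the running server
docker exec <your-nginx-container> nginx -t
docker exec <your-nginx-container> nginx -s reload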
You can also try
client_max_body_size 0;
(no limit, but this is not recommended for a production environment).
Moreover, remember that if you have SSL, you need to set the limit for the SSL server {} and location {} blocks in nginx.conf too. If your client (browser) tries to upload over http and you expect it to get 301'd to https, the nginx server will actually drop the connection before the redirect, because the file is too large for the http server, so the limit has to be set for both http and https.
The trick is to put it in both http {} and server {} (for the vhost):
http {
    # some code
    client_max_body_size 0;
}

server {
    # some code
    client_max_body_size 0;
}
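Once the limit is set in both places and nginx has been reloaded, you can confirm the 413 is gone with a large multipart upload (a sketch; the /api/upload path and app.apk file name are hypothetical placeholders for your actual route and file):

# POST a >10 MB file and watch the response status in the verbose output
curl -v -F "file=@app.apk" http://up-app.unpluggedsystems.app/api/upload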
Related
I am using Docker Compose to serve a front-end (Vue.js), a back-end, and an nginx reverse proxy.
When I navigate to a route and hit refresh, I get a 404 nginx error.
Here is part of my docker-compose file (a few lines omitted for brevity):
version: '3'
services:
  # Proxies requests to internal services
  dc-reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy_demo
    depends_on:
      - dc-front-end
      - dc-back-end
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 5004:80
  dc-front-end:
    ..
    container_name: dc-front-end
    ports:
      - 8080:80
  # API
  dc-back-end:
    container_name: dc-back-end
    ports:
      - 5001:5001
Here is the nginx.conf that belongs to the reverse-proxy service:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://dc-front-end:80;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /dc-back-end/ {
            proxy_pass http://dc-back-end:5001/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
And this is the nginx.conf for the front-end:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /app;
        #root /usr/share/nginx/html;

        location / {
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
And finally, the Dockerfile for the front-end service:
# build stage
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I have tried using try_files $uri $uri/ /index.html; in both nginx files, but it still gives a 404 on page refresh or when I navigate to the page directly in the browser (rather than clicking a link).
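A useful check while debugging this kind of 404 is to dump the configuration nginx actually loaded inside the container (a sketch, using the container name from the compose file above):

# print the complete configuration the running nginx is using
docker exec dc-front-end nginx -T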
As usual, the laws of Stack Overflow dictate that you only solve your own question once you've posted it.
The Dockerfile was wrong, which threw me, as everything else worked. The production stage never copied the custom nginx.conf into the image, so nginx ran with its default configuration, which has no try_files fallback (and serves from /usr/share/nginx/html rather than the /app root my config expects). The fixed version copies both the built files and the config into place:
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx:stable-alpine as production-stage
RUN mkdir /app
# put the built assets where nginx.conf's "root /app;" expects them
COPY --from=build-stage /app/dist /app
# the crucial missing step: ship the custom nginx.conf with its try_files rule
COPY nginx.conf /etc/nginx/nginx.conf
# no CMD needed: the base image already starts nginx in the foreground
EXPOSE 80
I am trying to get the front end and back end working together for the Spring Boot PetClinic app. I have already run ng build --prod on a Windows PC and then used GitHub to transfer my code to a VM. I had it working once, but only in IE, and now it doesn't work again and I don't know what's wrong. Please help; it's been doing my head in for weeks.
nginx.conf file:
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    server {
        #listen 8443 ssl;
        listen 4200;
        #server_name localhost;
        #ssl_certificate localhost.crt;
        #ssl_certificate_key localhost.key;

        location / {
            root /AngularApp/dist;
            index index.html;
        }

        location /api/ {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 20;
            proxy_read_timeout 20;
            proxy_pass http://springcommunity:9966/petclinic;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Dockerfile for the front end:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
COPY /nginx.conf /etc/nginx/nginx.conf
COPY /AngularApp /AngularApp
WORKDIR /AngularApp
docker-compose file:
version: '3.7'
services:
  springcommunity:
    image: springcommunity/spring-petclinic-rest
    ports:
      - "9966:9966"
  web:
    container_name: nginx
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - springcommunity
    links:
      - springcommunity
    restart: always
environment.prod.ts and environment.ts files before ng build --prod (production):
export const environment = {
    production: true,
    REST_API_URL: 'http://localhost:9966/petclinic/'
};
Things I have tried that failed:
export const environment = {
    production: true,
    REST_API_URL: 'http://springcommunity:9966/petclinic'
};
Exposing 4200 in the Dockerfile for the front end.
I have tried port mapping in docker compose:
example:
4200:9966
9966:4200
Exposing 9966 as well in the compose file.
The front end and back end work, but just not together, only individually. I have a feeling that one container (the front end) needs to be delayed, and I have done some Google searching but can't find a viable option. I have no idea how to do it; please help.
Update 5/06/2020
I am currently running a wait-for.sh script so the back end starts before the web container, but now nginx exits with error code 0. I am also trying to see the nginx error logs, but I can't get to them. Could someone please shed some light on this?
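For what it's worth, a clean exit (code 0) often means the wait script completed without starting nginx afterwards: if wait-for.sh replaces the image's CMD, it has to hand off to nginx once the wait succeeds. A sketch of that wiring in the front-end Dockerfile, assuming an eficode-style wait-for script (and its netcat dependency) is available in the image:

# copy the wait script and make it the container command;
# everything after "--" runs once springcommunity:9966 accepts connections
COPY wait-for.sh /wait-for.sh
RUN chmod +x /wait-for.sh
CMD ["/wait-for.sh", "springcommunity:9966", "--", "nginx", "-g", "daemon off;"]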
If your front end can't reach the back end on that VM, it may be that your Docker containers are not on the same network.
You can try REST_API_URL:'http://0.0.0.0:9966/petclinic'
Or you can specify a custom network in the docker-compose file and use REST_API_URL:'http://springcommunity:9966/petclinic'
https://docs.docker.com/compose/networking/#specify-custom-networks
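A minimal sketch of the custom-network variant, reusing the service names from the question (the network name petclinic-net is an arbitrary choice):

version: '3.7'
services:
  springcommunity:
    image: springcommunity/spring-petclinic-rest
    networks:
      - petclinic-net
  web:
    build: .
    networks:
      - petclinic-net
networks:
  # user-defined bridge network; containers on it resolve each other by service name
  petclinic-net:
    driver: bridge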
I am really stumped and could use help figuring out why my environment variables aren't being transferred from Docker to my nginx config files.
I have a docker-compose.yml:
nginx:
  image: nginx
  container_name: proxier
  volumes:
    - ./conf/nginx.conf:/etc/nginx/nginx.conf
    - ./conf/server.nginx.conf.tpl:/etc/nginx/server.nginx.conf.tpl
    - ./build/web:/srv/static:ro
    - ./docker/proxier:/tmp/docker
  ports:
    - "80:80"
    - "443:443"
  environment:
    - HOST_EXTERNAL_IP=localhost
    - DEVSERVER_PORT=8000
    - DEVSERVICE_PORT=5000
  command: /bin/bash -c "env && envsubst '$$HOST_EXTERNAL_IP $$DEVSERVER_PORT $$DEVSERVICE_PORT' < /etc/nginx/server.nginx.conf.tpl > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
I have an nginx.conf file
user nginx;
worker_processes 1;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 100g;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;
    sendfile off;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_tokens off;

    upstream app {
        server myapp:8000 fail_timeout=0;
    }

    include /etc/nginx/server.nginx.conf.tpl;
}
I have a server.nginx.conf.tpl file
server {
    listen 80;
    listen 443 ssl http2 default_server;
    server_name localhost;
    index index.html;

    location ^~ /services/ {
        proxy_pass https://myurl.com;
        proxy_set_header USER_DN $ssl_client_s_dn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
        proxy_pass http://${HOST_EXTERNAL_IP}:${DEVSERVER_PORT}; # Won't read environment variables here
    }
}
When I run this, however, I get the error:
nginx: [emerg] unknown "host_external_ip" variable
I am using envsubst correctly to pass the environment variables from Docker per the docs.
Do not copy nginx.conf directly. Instead, create a shell script that generates the nginx config, e.g.
echo 'your nginx conf goes here with $envVariable' > location/to/conf/folder/nginx.conf
and run that script inside the container. When the script runs, it will replace the environment variables you set with their actual values in nginx.conf.
Do not forget to escape the $ of nginx's own variables.
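A minimal sketch of such a script, reusing the template and variable names from the question (the script name is hypothetical). Note that the question's nginx.conf also includes the raw template directly (include /etc/nginx/server.nginx.conf.tpl;), which by itself explains the unknown variable error: nginx tries to parse ${HOST_EXTERNAL_IP} before any substitution happens, so make sure nginx only ever includes the generated file:

#!/bin/sh
# generate-config.sh: substitute only the whitelisted variables,
# leaving nginx's own $host, $remote_addr, etc. untouched
envsubst '${HOST_EXTERNAL_IP} ${DEVSERVER_PORT} ${DEVSERVICE_PORT}' \
  < /etc/nginx/server.nginx.conf.tpl \
  > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'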
I'm struggling to set up nginx inside a Docker container. I basically have two containers:
a php:7-apache container that serves a dynamic website, including its static contents.
an nginx container, with a volume mounted inside it as a /home/www-data/static-content folder (I do this in my docker-compose.yml), to try to serve a static website (unrelated to the one served by the Apache container).
I want to use the domain dynamic.localhost to serve my dynamic website, and static.localhost to serve my static website, which is made up only of static files.
I have the following Dockerfile for my nginx container:
########## BASE ##########
FROM nginx:stable
########## CONFIGURATION ##########
ARG DEBIAN_FRONTEND=noninteractive
ENV user www-data
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./site.conf /etc/nginx/conf.d/default.conf
RUN touch /var/run/nginx.pid && \
    chown -R ${user}:${user} /var/run/nginx.pid && \
    chown -R www-data:www-data /var/cache/nginx
RUN chown -R ${user} /home/${user}/ && \
    chgrp -R ${user} /home/${user}/
USER ${user}
As you can see, I'm using two configuration files for nginx: nginx.conf and site.conf.
Here is nginx.conf (it's not important because there is nothing special in it, but if I'm doing something wrong just let me know):
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}
And here is the file site.conf that I have been failing miserably at writing correctly for days now:
server {
    listen 8080;
    server_name static.localhost;
    root /home/www-data/static-content;

    location / {
        try_files $uri =404;
    }
}

server {
    listen 8080;
    server_name dynamic.localhost;

    location / {
        proxy_pass http://dynamic;
        proxy_redirect off;
        proxy_set_header Host $host:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port 8080;
        proxy_set_header X-Forwarded-Host $host:8080;
    }
}
(http://dynamic passes the request to the apache container that I name "dynamic").
So basically I keep getting 404s for whatever file I try to access in my static-content directory. E.g.:
static.localhost:8080/index.html should serve /home/www-data/static-content/index.html, but I get a 404 instead.
static.localhost:8080/css/style.css should serve /home/www-data/static-content/css/style.css, but I get a 404 too.
I tried various things, like writing try_files /home/www-data/static-content/$uri, but didn't get anywhere. I read parts of the nginx documentation and searched on Stack Overflow, but nothing I found helped. If I made a stupid mistake I apologize, but the only thing I care about now is getting this to work and understanding what I'm doing wrong.
Thanks
I solved my problem by simply not using a volume for the static files and copying them into the container instead. I suspect it's a permissions problem with the way the volume is mounted by docker-compose while the nginx process runs as a non-root user.
It's not a perfect solution, since I have to give up on using a volume, but it'll do.
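A sketch of the copy-instead-of-volume approach in the Dockerfile from the question, assuming the static files live in a ./static-content folder next to it (the lines must come before USER ${user} so the chown still runs as root):

# bake the static site into the image instead of bind-mounting it,
# and give ownership to the non-root nginx user
COPY ./static-content /home/${user}/static-content
RUN chown -R ${user}:${user} /home/${user}/static-content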
I had the same issue and had to change the permissions on the mounted volume's files for the user group.
In the Dockerfile:
RUN useradd -G root,www-data -u 1000 ${user}
and change the permissions on the host files:
chmod 775 /home/www-data/static-content -R
I'm having trouble trying to get the following to work in Docker.
What I want is that when the user requests http://localhost/api, NGINX reverse-proxies to my .NET Core API running in another container.
Container Host: Windows
Container 1: NGINX
dockerfile
FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        location /api1 {
            proxy_pass http://api1;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Container 2: .Net Core API
Dead simple - API exposed on port 80 in the container
Then there is the docker-compose.yml
docker-compose.yml
version: '3'
services:
  api1:
    image: api1
    build:
      context: ./Api1
      dockerfile: Dockerfile
    ports:
      - "5010:80"
  nginx:
    image: vc-nginx
    build:
      context: ./infra/nginx
      dockerfile: Dockerfile
    ports:
      - "5000:80"
Reading the Docker documentation, it states:
Links allow you to define extra aliases by which a service is
reachable from another service. They are not required to enable
services to communicate - by default, any service can reach any other
service at that service’s name.
So, as my API service is called api1, I've simply referenced this in the nginx.conf file as part of the reverse-proxy configuration:
proxy_pass http://api1;
Something is wrong, though: when I enter http://localhost/api I get a 404 error.
Is there a way to fix this?
The problem is the nginx location configuration.
The 404 error is correct, because your configuration proxies requests from http://localhost/api/some-resource to a missing resource: your mapping is for the /api1 path, but you're asking for /api.
So you only need to change the location to /api and it will work.
Keep in mind that requests to http://localhost/api will be proxied to http://api1/api (the path is kept). If your back end is configured to expose its API under an /api prefix this is fine; otherwise you will receive another 404 (this time from your service).
To avoid this, you should rewrite the path before proxying the request with a rule like this:
# transform /api/some-resource/1 to /some-resource/1
rewrite /api/(.*) /$1 break;
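Putting it together, the adjusted block might look like this (a sketch; it keeps the api1 service name from the compose file and assumes the API serves its routes from the root path):

location /api/ {
    # strip the /api prefix: /api/some-resource/1 -> /some-resource/1
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api1;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}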