Docker push error "413 Request Entity Too Large" - docker-registry

I set up a registry v2 with nginx in front of it as a reverse proxy. When I push an image to the registry, the push fails with 413 Request Entity Too Large.
I have already set client_max_body_size to 20 MB in nginx.conf, but the push still fails:
client_max_body_size 20M;
How large does the request body of a docker push get, and how should I configure the limit?

The Docker documentation recommends turning the limit off entirely:
http {
    ...
    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;
    ...
}

For anyone getting this error in Kubernetes: you need to add this annotation to the registry's Ingress resource:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
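For context, here is a minimal sketch of where that annotation lives in an Ingress manifest; the names, host, and service port are hypothetical placeholders, not taken from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry                # hypothetical
  annotations:
    # "0" disables the nginx ingress controller's body-size limit
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: registry.example.com       # hypothetical
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry  # hypothetical
                port:
                  number: 5000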

Alternatively, raise the allowed request body size to something large enough for your image layers, e.g. 300 MB:
client_max_body_size 300M;

Related

NGINX vs OpenResty Cache Performance

I have a simple NGINX proxy configured, with some simple caching, and its performance behaves oddly in OpenResty compared to vanilla NGINX.
Under load testing (300 rpm) vanilla NGINX works just fine; however, the moment I switch from NGINX to OpenResty, a portion of requests suddenly hang, unresponsive, taking 20+ seconds to return.
My nginx.conf looks as follows:
events {
    worker_connections 1024;
}

http {
    proxy_cache_path /var/cache keys_zone=pagecache:10m;

    server {
        listen 80;
        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        ssl_certificate /etc/ssl/mycert.pem;
        ssl_certificate_key /etc/ssl/mycert.key;

        location / {
            proxy_cache pagecache;
            proxy_cache_key $host$request_uri;
            proxy_cache_lock on;
            proxy_pass http://ssl-proxy-test.s3-website-eu-west-1.amazonaws.com/;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
My Dockerfile for NGINX looks like this:
FROM nginx
COPY certificates /etc/ssl
COPY nginx.conf /etc/nginx/nginx.conf
And the one for OpenResty looks like this:
FROM openresty/openresty:buster
COPY certificates /etc/ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
I've tried this on several OpenResty builds (buster, bionic, xenial), and get the same results on each.
The slow requests do, however, return 304 with an X-Cache-Status: HIT header, and don't appear to make it through to the upstream server, which makes me think the bottleneck must be in reading the cached data from memory/disk rather than in the upstream itself.
I'm new to OpenResty, so I'm not entirely sure how its cache behaviour differs from vanilla NGINX's.
Any advice on where to start debugging this? Or what might be the cause?
After trying my load tests on some different infrastructure, I found that this problem only seemed to occur on AWS Elastic Container Service.
Switching to a Docker image based on CentOS/Amazon Linux got things working much more consistently.
I'm still a little unsure as to the real cause, but at least I have something working.
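A minimal sketch of what that switch might look like, assuming the centos tag of the openresty/openresty image (the config path is the same as in the buster-based image):
# CentOS-based OpenResty instead of the Debian buster variant
FROM openresty/openresty:centos
COPY certificates /etc/ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf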

How do I map a location to an upstream server in Nginx?

I've got several Docker containers acting as web servers on a bridge network. I want to use Nginx as a proxy that exposes a service (web) outside the bridge network and embeds content from other services (i.e. wiki) using server side includes.
Long story short, I'm trying to use the configuration below, but my locations aren't working properly. The / location works fine, but when I add another location (e.g. /wiki) or change / to something more specific (e.g. /web) I get a message from Nginx saying that it "Can't get /wiki" or "Can't get /web" respectively:
events {
    worker_connections 1024;
}

http {
    upstream wiki {
        server wiki:3000;
    }

    upstream web {
        server web:3000;
    }

    server {
        ssi on;

        location = /wiki {
            proxy_pass http://wiki;
        }

        location = / {
            proxy_pass http://web;
        }
    }
}
I've attached to the Nginx container and verified that I can reach the other containers using curl; they appear to be working properly.
I've also read the Nginx pitfalls and know that using hostnames (wiki, web) isn't ideal, but I don't know the IP addresses ahead of time and have tried to counter any DNS issues by telling docker-compose that the nginx container depends on web and wiki.
Any ideas?
You need to change proxy_pass http://wiki; to proxy_pass http://wiki/;.
Nginx handles proxy_pass in two different ways depending on whether the target has a URI part (here, the trailing slash). You can find more details about the proxy_pass directive on nginx.org.
In your case the trailing slash (/) is essential, because it is the URI that gets passed to the upstream server. The "Can't get /wiki" error you're seeing actually comes from the wiki:3000 server, not from Nginx: that server has no /wiki route.
Getting to know how proxy_pass behaves with and without a URI will help you a lot here.
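As a rough illustration of the difference, reusing the wiki upstream from the question (the comments describe nginx's documented prefix-replacement behaviour; treat the exact paths as a sketch):
# Without a URI part, the original request path is forwarded unchanged:
# GET /wiki/page is sent upstream as /wiki/page.
location /wiki {
    proxy_pass http://wiki;
}

# With a URI part (here just "/"), the part of the path that matched the
# location prefix is replaced: GET /wiki/page is sent upstream as /page.
location /wiki/ {
    proxy_pass http://wiki/;
}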
I hope this helps.

iOS 11 devices fail to access nginx HTTPS site secured with LetsEncrypt (Protocol error)

For a couple of days now, users who just updated to iOS 11 have not been able to access my website.
It's hosted via a nginx reverse proxy that is using LetsEncrypt to provide SSL.
The client experience is that when you click a link, the Safari window usually just disappears or shows a generic error.
Using the debugger, there's an error: [Error] Failed to load resource: The operation couldn't be completed. Protocol error
This only happens with iOS devices since the update to iOS 11.
My server is running on DigitalOcean with the Docker image jwilder/nginx-proxy.
OK, I actually found the issue to be related to an improper implementation of HTTP/2 in iOS 11.
This post shed some light on the situation:
http://www.essential.exchange/2017/09/18/ios-11-about-to-release-things-to-be-aware-of/
The jwilder/nginx-proxy Docker image uses http2 by default, and as far as I can see you can't change that either.
Now, to solve the issue, remove the http2 keyword from your server configuration for now.
This:
server {
    listen x.x.x.x:443 ssl http2;
    server_name xxxx;
    [...]
}
Becomes:
server {
    listen x.x.x.x:443 ssl;
    server_name xxxx;
    [...]
}
If you're running jwilder/nginx-proxy you will have to change /app/nginx.tmpl too; otherwise the config file will be regenerated with http2 at some point.
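A rough, hypothetical sketch of that change, assuming the template spells the directive as "... ssl http2" and that http2 appears nowhere else in the file; the container name and run flags are placeholders, not taken from the original setup:
# copy the template out of a running container named nginx-proxy
docker cp nginx-proxy:/app/nginx.tmpl nginx.tmpl
# strip the http2 keyword from the listen directives
sed -i 's/ http2//g' nginx.tmpl
# restart the proxy with the edited template mounted over the built-in one
docker run -d -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v "$(pwd)/nginx.tmpl:/app/nginx.tmpl:ro" \
  jwilder/nginx-proxy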
Hope this answer helps some people struggling with the same problem.
If you find another solution to fix this, please add it below. I haven't had too much time to look for solutions as it took me forever to find this one.

How to make S3 serve the same file using http and https in a rails app?

I have a Rails application running on Amazon EC2, with files served from S3.
My problem: the whole application currently runs over HTTP, and I'd like to move it to HTTPS. But it's a prerequisite that the same file responds to both HTTP and HTTPS.
For example: if I have a file at http://domain.s3.amazon.com/file.js, it should respond at https://domain.s3.amazon.com/file.js as well.
My scripts will be used by other customers in both HTTP and HTTPS environments, so it's mandatory that they are served over both; otherwise the browser gives this message:
[blocked] The page at 'https://mycustomerurl' was loaded over HTTPS, but ran insecure content from 'http://mydomain.com/myfile.js': this content should also be loaded over HTTPS.
How can I do that?
Thanks
PS: I've seen some examples, but in those the whole app goes to HTTPS, and I have this specific requirement.
As long as the domain is the same, the easiest way to do this is to drop the protocol at the beginning of the URL.
Just make the request for //domain.s3.amazon.com/file.js
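For instance, in an HTML page the reference would look like the snippet below; the browser reuses whichever scheme the page itself was loaded over:
<script src="//domain.s3.amazon.com/file.js"></script>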
I finally found a solution.
In the end, it was not an issue to be solved at the application level but at the server configuration level.
I bought a certificate, installed it on my server, and then configured nginx with:
worker_processes 1;

events {
    # required by nginx even if left empty
}

http {
    server {
        listen 443;
        ssl on;
        ssl_certificate /usr/local/nginx/conf/cert.pem;
        ssl_certificate_key /usr/local/nginx/conf/cert.key;
        keepalive_timeout 70;
    }
}

104: Connection reset by peer: nginx + rainbows + over 1 mb uploads

I am running Rainbows! with ThreadPool behind nginx (over a unix socket).
On large file uploads I am getting the following in nginx error log (nothing in the application log):
readv() failed (104: Connection reset by peer) while reading upstream
The browser receives response:
413 Request Entity Too Large
Why does this happen?
"client_max_body_size 80M;" is set at both the http and server level (just in case) in nginx.
nginx communicates with Rainbows over a unix socket (an upstream pointing at the socket plus a location with proxy_pass, sketched at the end of this question).
I don't see anything in the other logs. I have checked:
rainbows log
foreman log
application log
dmesg and /var/log/messages
This happens when uploading a file larger than roughly 1 MB.
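For reference, the nginx wiring described above looks roughly like this; the socket path and upstream name are placeholders, not my actual config:
upstream rainbows {
    # Rainbows! listening on a unix socket (hypothetical path)
    server unix:/tmp/rainbows.sock fail_timeout=0;
}

server {
    listen 80;
    client_max_body_size 80M;

    location / {
        proxy_pass http://rainbows;
    }
}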
The ECONNRESET (Connection reset by peer) error means that the connection was uncleanly closed by the backend application. This usually happens when the backend application dies, e.g. due to a segmentation fault, or is killed by the OOM killer. To find out the exact reason you have to examine your backend logs (if any) and/or the system logs.
Maybe you have client_max_body_size set in your nginx.conf, limiting the body size to 1 MB, e.g.
client_max_body_size 1M;
In that case you'd need to raise or remove it to allow uploading files larger than 1 MB.
Turns out Rainbows! has a configuration option called client_max_body_size that defaults to 1 MB.
The option is documented in the Rainbows! documentation.
If this option is set, Rainbows! silently responds 413 to requests that are too large. You might not even know it's breaking unless you run something in front of it.
Rainbows! do
  # let nginx handle the max body size
  client_max_body_size nil
end
