I've deployed an on-prem instance of Nexus OSS that is reached behind an Nginx reverse proxy.
On any attempt to push Docker images to a repo created on the Nexus registry, I'm bumping into a
413 Request Entity Too Large in the middle of the push.
The nginx.conf file looks like this:
http {
    client_max_body_size 0;

    upstream nexus_docker {
        server nexus:1800;
    }

    server {
        server_name nexus.services.loc;

        location / {
            proxy_pass http://nexus_docker/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Nginx is deployed using Docker, and I've successfully logged in to the registry using docker login.
I've tried multiple other directives, such as chunked_transfer_encoding, but nothing seems to work.
That's because client_max_body_size defaults to 1 MB when unset, so your server block is still enforcing that limit.
To resolve this, add the following line to your server block:
# Unlimit large file uploads to avoid "413 Request Entity Too Large" error
client_max_body_size 0;
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
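For placement, a minimal sketch of the relevant server block (names taken from your config above):

server {
    server_name nexus.services.loc;
    # 0 disables request-body size checking entirely
    client_max_body_size 0;

    location / {
        proxy_pass http://nexus_docker/;
    }
}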
As it turns out, the Linux distro hosting the containerized nginx server was itself running its own nginx instance in front of all incoming requests.
Once we set client_max_body_size to 0 in the configuration file of the nginx instance the OS was running, it worked.
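If you suspect a second nginx in front, a quick check is to dump the effective configuration of each instance and look for the limit (assuming shell access on the host, and a container named nginx; adjust to your setup):

# on the host OS
nginx -T | grep client_max_body_size
# inside the container (container name assumed)
docker exec nginx nginx -T | grep client_max_body_size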
Related
I deployed an nginx:1.22.1 instance alongside a static React app server on a worker node in a Docker swarm. This is Docker swarm mode, not classic swarm.
The advertise address I listed when joining the swarm is internal to the data center; I don't know if that matters, because I can still access these services with the public addresses.
Both containers are pinned to the same worker node and communicate over a user-created overlay network.
I can retrieve the full bundle directly from the react app server over the public network.
I cannot retrieve the full bundle through the nginx reverse-proxy server over the public network.
When I attempt to fetch the bundle using the Chrome browser as the user agent, I get this error:
net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 (OK)
The app bundle is cut off mid JS function, as if a chunk of data was not transmitted.
Rarely, the upstream server will send HTML and not a JS bundle, but I receive that whole response body, and it is not truncated like the JS bundle.
I have played with all kinds of configuration and cannot get it to work.
This is the most relevant part of my configuration, under /etc/nginx/conf.d/default.conf:
resolver 127.0.0.11 valid=10s;
error_log /dev/stdout info;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certs/nginx.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    client_max_body_size 100M;
    proxy_buffers 8 1024k;
    proxy_buffer_size 1024k;
    proxy_max_temp_file_size 1024m;

    location / {
        set $reactapp reacthost;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://$reactapp:3000/;
        proxy_redirect off;
    }

    root /usr/share/nginx/html;
    error_page 500 502 503 504 /50x.html;
}
I use the variable $reactapp for service discovery after the nginx server starts; see the NGINX blog post on DNS-based service discovery.
Note that the nginx:1.22.1 instance runs as user nginx after it is deployed to the stack. I only see the message below when I deploy via docker stack; if I start the container directly using the Docker engine, I do not see it.
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
However, I can exec into the container as the nginx user, access /var/cache/nginx/, and create a directory.
I do not know whether:
my server / location configuration is plain bad,
the NGINX server cannot write to a part of the container it needs to when the service is deployed in stack mode, or
I cannot access the server properly over the public network via the overlay network.
Prior to using docker stack I was able to use this reverse proxy.
The two containers were on the same host without swarm mode running.
The containers communicated over a bridge network.
The reverse proxy server port was published on the public interface of the server it was deployed on.
The NGINX server started after the upstream server.
Because no depends_on key is honored in stack mode, I have to rely on DNS service discovery after the NGINX server starts up. Placing the services on an overlay network gives me more flexibility in how I do my deployments, but this has become a bit muddled: there are enough differences between the two environments that it is difficult to get the stack to behave as I expect.
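For reference, the discovery pattern distilled from my config above: Docker's embedded DNS listens on 127.0.0.11, and putting the upstream name in a variable defers the lookup to request time instead of nginx startup:

resolver 127.0.0.11 valid=10s;          # Docker's embedded DNS
location / {
    set $reactapp reacthost;            # variable forces per-request resolution
    proxy_pass http://$reactapp:3000/;  # no lookup at startup, so nginx can start first
}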
In the past I tried setting up JFrog Artifactory OSS and was able to get it through my reverse proxy, exposed outside my home network. I was able to push to it via my computer's local CLI and through Drone CI, but it took an abnormal amount of time (roughly 5 min) to push to my own registry, when pushing to Docker Hub or GitLab took a matter of seconds.
My container is really small (think MBs) and I never have any issues pushing it to any other remote registry. Until now, I always thought it might have been the registry and the fact it was running on an old machine.
I recently discovered that my Git solution, Gitea, has a registry built in, so I did the same: I got everything set up and mapped, and once again it took an abnormal amount of time (roughly 5 min) to push to my own registry (this time backed by Gitea).
This leads me to think my issue is Nginx Proxy Manager related. I found some documentation online, but it was really general and vague. I have my current proxy config below, and it still has the issue. Could anyone point me in the right direction? I also included a few other posts related to this issue.
server {
    set $forward_scheme http;
    set $server "192.168.X.XX";
    set $port 3000;

    listen 8080;
    #listen [::]:8080;
    listen 4443 ssl http2;
    #listen [::]:4443;
    server_name my.domain.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-47/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-47/privkey.pem;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-10_access.log proxy;
    error_log /data/logs/proxy-host-10_error.log warn;

    # Additional fields I added on top of the default Nginx Proxy Manager config
    proxy_buffering off;
    proxy_ignore_headers "X-Accel-Buffering";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
I also checked the live logs for Gitea, and I see the requests coming in real time and being processed really fast, but there is always a significant delay before it receives the next request, which makes me think Nginx Proxy Manager is not correctly forwarding the requests, or there is some setting that I missed. Any help would be greatly appreciated!
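Based on what I've read so far, these are the buffering- and size-related directives I plan to try next in the custom include (a sketch only; the path comes from the include line in the config above):

# e.g. in /data/nginx/custom/server_proxy.conf
client_max_body_size 0;        # don't cap image layer uploads
proxy_request_buffering off;   # stream uploads to the upstream instead of spooling them
proxy_read_timeout 900;        # give slow layer uploads time to finish
proxy_send_timeout 900;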
Some of the settings I tried were from the sources below:
Another registry
Another stack overflow suggestion
I've got several Rails websites running in Docker dev containers. Docker is running in WSL (Ubuntu 20.04) on Windows 11. Nginx is running in Ubuntu as a reverse proxy; IIS is turned off in Windows. The Ubuntu /etc/hosts file is automatically populated from the hosts file in Windows. It is set up like this because others on the team are running Linux on Macs, while I switch between Rails and .NET development.
An example website is mysite1.localhost which is exposed on port 8081 on Docker and there is an entry of '127.0.0.1 mysite1.localhost' in both hosts files.
The problem I have is that browsing to localhost:8081 (Chrome on Windows) returns 200 from the website, great, but using the hostname mysite1.localhost returns 502 Bad Gateway.
I am assuming Nginx doesn't know about Docker or something like that?
Here is the mysite1.conf for Nginx:
server {
    listen 80;
    listen [::]:80;
    server_name mysite1.localhost;
    resolver 127.0.0.1;

    location ~* "^/shared-nav" {
        proxy_set_header Accept-Encoding "";
        proxy_pass http://localhost:3000/stuff$is_args$args;
    }

    location / {
        ssi on;
        ssi_silent_errors off;
        log_subrequest on;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8081;
        add_header Cache-Control "no-cache";

        if ($request_filename ~* ^.*?/([^/]*?)$) {
            set $filename $1;
        }
        if ($filename ~* \.(eot|ttf|woff|woff2)$) {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
I can see two problems in nginx/error.log:
2023/02/02 08:50:10 [warn] 2841#2841: conflicting server name "mysite1.localhost" on 0.0.0.0:80, ignored
2023/02/02 08:50:12 [error] 2845#2845: *52 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: mysite1.localhost, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "mysite1.localhost"
It doesn't seem to matter whether or not docker is running.
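A sanity check I can run from inside WSL (not the Windows shell) to see whether anything is listening on 8081 on the Linux side at all:

# from the Ubuntu/WSL shell
curl -v http://127.0.0.1:8081/
ss -tlnp | grep 8081   # list Linux-side listeners on 8081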
For the conflicting server name warning, I've tried looking for temporary files that need deleting, but cannot find anything.
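To hunt for the duplicate server_name, something like this over the whole config tree should work (assuming the default /etc/nginx layout):

grep -rn "server_name mysite1.localhost" /etc/nginx/
nginx -T | grep -n "server_name"   # dump the config nginx actually loads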
Most of the other questions I've looked at involve solving problems with containerized Nginx, whereas this one is sitting in WSL.
Please let me know if I can better explain the problem, thanks for any help.
One thing I didn't take into consideration is that using VS Code to develop in the containers means the ports were forwarded to Windows: although the containers are running in WSL, VS Code is running in Windows.
So, I could either turn IIS back on and use it as a reverse proxy, or try Nginx for Windows. I've opted for the latter, as it means I can share the same config files as the Linux guys, and I will see how it works out; for now I can browse the websites by hostname.
If anyone else needs to work with this set up, I'm happy to pass on my experiences.
I'm receiving the error Authentication required after I log in to the Wildfly 13 Management Console.
If I type a user or password wrong, it asks again, but if I type correctly it shows the page with the error message (so I assume the user and password are correct, but something else after that gives the error).
I'm using Docker to run an nginx container and a Wildfly container.
Nginx listens externally on port 9991 and proxies requests to the Wildfly container, but it shows the error described before.
It only happens with the Wildfly Console; every other proxied request, even requests proxied to a websocket or to Wildfly on port 8080, completes successfully.
The Wildfly container listens externally on port 9990, and I can access the console successfully on that port. If in Docker I map the port "9992:9990", I can still access the console successfully through port 9992.
So it seems that this is not related to Docker, but to the Wildfly Console itself: probably some kind of authentication that is not happening successfully when a reverse proxy sits in the middle.
I have a demo Docker project at https://github.com/lucasbasquerotto/pod/tree/0.0.6; you can download the tag 0.0.6, which has everything set up to work with Wildfly 13 and nginx and to simulate this error:
git clone -b 0.0.6 --single-branch --depth 1 https://github.com/lucasbasquerotto/pod.git
cd pod
docker-compose up -d
Then, if you access the container directly in http://localhost:9990 with user monitor and password Monitor#70365 everything works.
But if you access http://localhost:9991 with the same credentials, through the nginx reverse proxy, you receive the error.
My nginx.conf file:
upstream docker-wildfly {
    server wildfly:9990;
}

location / {
    proxy_pass http://docker-wildfly;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
I've also tried with:
proxy_set_header X-Forwarded-Proto $scheme;
And also with the Authorization header (just the 2nd line and also with both):
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
And also defining the host header with the port (instead of just $host):
proxy_set_header Host $server_addr:$server_port;
I've tried the above configurations isolated and combined together. All to no avail.
Any suggestions?
Has anyone successfully accessed the Wildfly Console through a reverse proxy?
Update (2018-09-22)
It seems Wildfly uses digest authentication (instead of basic).
I see the header in the console like the following:
Authorization: Digest username="monitor", realm="ManagementRealm", nonce="AAAAAQAAAStPzpEGR3LxjJcd+HqIX2eJ+W8JuzRHejXPcGH++43AGWSVYTA=", uri="/console/index.html", algorithm=MD5, response="8d5b2b26adce452555d13598e77c0f63", opaque="00000000000000000000000000000000", qop=auth, nc=00000005, cnonce="fe0e31dd57f83948"
I don't see much documentation about using nginx to proxy requests with digest headers (but I think it should be transparent).
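One thing worth noting from RFC 2617: the client's response hash covers the uri field, so a proxy that rewrites the request URI (or drops the Authorization header) would break the digest. A sketch of keeping the request untouched, based on my config above:

location / {
    # no URI part after the upstream name, so the path reaches
    # Wildfly exactly as the client signed it
    proxy_pass http://docker-wildfly;
    # nginx forwards the Authorization request header by default,
    # so nothing needs to be added for it
    proxy_set_header Host $host;
}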
One related question I saw is https://serverfault.com/questions/750213/http-digest-authentication-on-proxied-server, but it has no answer so far.
I saw that there is the unofficial nginx module https://www.nginx.com/resources/wiki/modules/auth_digest/, but in the GitHub repository (https://github.com/atomx/nginx-http-auth-digest) it says:
The ngx_http_auth_digest module supplements Nginx's built-in Basic
Authentication module by providing support for RFC 2617 Digest
Authentication. The module is currently functional but has only been
tested and reviewed by its author. And given that this is security
code, one set of eyes is almost certainly insufficient to guarantee
that it's 100% correct. Until a few bug reports come in and some of
the ‘unknown unknowns’ in the code are flushed out, consider this
module an ‘alpha’ and treat it with the appropriate amount of
skepticism.
Also, it doesn't seem right to me to hardcode the user and password in a file to be used by nginx (the authentication should be transparent to the reverse proxy in this case).
In any case, I tried it, and it correctly asks me to authenticate, even when the final destination does not use digest authentication. When connecting to the Wildfly site (not the console), nginx asks for credentials before proxying the request and then forwards successfully to the destination; in the case of the Wildfly Console, though, it keeps asking me to authenticate forever.
So I think this is not the solution. The problem seems to be in what nginx passes to the Wildfly Console.
I had the same problem with the HAL management console v3.3 and v3.2.
I could not get nginx HTTPS working due to authentication errors, even though the page prompted for HTTP basic auth user and pass.
This was tested in standalone mode on the same server.
My setup was:
outside (https) -> nginx -> http://halServer:9990/
This resulted in working HTTPS, but with HAL authentication errors (seen in the browser's console); the webpage was blank.
At first access the webpage would ask for HTTP basic auth credentials normally, but then almost all HTTPS requests would return an authentication error.
I managed to make it work correctly by first enabling HTTPS on the HAL console with a self-signed certificate, and then configuring nginx to proxy pass to the HAL HTTPS listener.
Working setup is:
outside (https) -> nginx (https) -> https://halServer:9993/
Here is the nginx configuration:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    # security
    include nginxconfig.io/security.conf;

    # logging
    access_log /var/log/nginx/halconsole.mywebsite.com.access.log;
    error_log /var/log/nginx/halconsole.mywebsite.com.error.log warn;

    # reverse proxy
    location / {
        # or use static ip, or nginx upstream
        proxy_pass https://halServer:9993;
        include nginxconfig.io/proxy.conf;
    }

    # additional config
    include nginxconfig.io/general.conf;
    include nginxconfig.io/letsencrypt.conf;
}

# subdomains redirect
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name *.halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    return 301 https://halconsole.mywebsite.com$request_uri;
}
proxy.conf
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
# Proxy headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Forwarded $proxy_add_forwarded;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-By $server_addr;
# Proxy timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
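Note that $connection_upgrade and $proxy_add_forwarded are not built-in nginx variables; the nginxconfig.io bundle defines them with map blocks elsewhere in the generated files. Roughly (a sketch; check your generated nginx.conf):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}
# $proxy_add_forwarded is built the same way from $http_forwarded plus
# the current client address, via a longer map in the generated config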
The easiest way to enable the HTTPS console is by using the console itself:
generate a java JKS keystore using either the command line keytool or a GUI program
I like GUIs, so I used Key Store Explorer https://github.com/kaikramer/keystore-explorer
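If you prefer the command line instead, a keytool invocation along these lines produces an equivalent JKS keystore (alias, validity, and DN are examples, not taken from the setup above):

# your values might differ, don't copy paste
keytool -genkeypair -alias management -keyalg RSA -keysize 2048 \
  -validity 3650 -storetype JKS -keystore managementKS -dname "CN=halServer"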
copy the keystore file onto the halServer server somewhere it has read access (no need to keep it secret); I placed mine inside the wildfly data dir in a "keystore" directory.
# your file paths might differ, don't copy paste
cp /home/someUser/sftp_uploads/managementKS /opt/wildfly/standalone/data/keystore/managementKS
set permissions
# your file paths might differ, don't copy paste
chown --recursive -H wildfly:wildfly /opt/wildfly/standalone/data/keystore
(use a VPN) log in to the cleartext console http://halServer:9990/
add keystore : navigate :
configuration -> subsystems -> security (elytron) -> other settings (click view button)
stores -> keystore -> add
...
Name = managementKS
Type = JKS
Path = keystore/managementKS
Relative to = jboss.server.data.dir
Credential Reference Clear Text = keystore-password click Add
result in standalone.xml
<key-store name="managementKS">
<credential-reference clear-text="keystore-password"/>
<implementation type="JKS"/>
<file path="keystore/managementKS" relative-to="jboss.server.data.dir"/>
</key-store>
add key manager : navigate :
ssl -> key manager -> add
...
Name = managementKM
Credential Reference Clear Text = keystore-password
Key Store = managementKS
result in standalone.xml
<key-manager name="managementKM" key-store="managementKS">
<credential-reference clear-text="keystore-password"/>
</key-manager>
add ssl context : navigate :
ssl -> server ssl context -> add
...
Name = managementSSC
Key Manager = managementKM
...
Edit added : Protocols = TLSv1.2
save
result in standalone.xml
<server-ssl-contexts>
<server-ssl-context name="managementSSC" protocols="TLSv1.2" key-manager="managementKM"/>
</server-ssl-contexts>
go back
runtime -> server (click view button)
http management interface (edit)
set secure socket binding = management-https
set ssl context = managementSSC
save
restart wildfly
systemctl restart wildfly
I am using Artifactory for storing Docker images. The Artifactory setup uses a v1 repository to store images. When working from one of the Linux machines, I am able to pull and push images from Artifactory. But when working on my Windows laptop, if I try to pull the image from Artifactory it gives me the below error:
akash#AKASH-WS01 MINGW64 ~
$ docker pull mydocker.abc.com:5903/ubuntu
Using default tag: latest
Error response from daemon: unknown: Unsupported docker v2 repository request for 'demo-docker'
I am using the .dockercfg file for authentication and have the information stored in it. "demo-docker" is a user.
Why is the docker pull command using a v2 repository when mydocker.abc.com:5903/ubuntu is on v1? Is there any way to make docker pull use v1?
I had the same problem; I adjusted my nginx config to resolve the issue:
Artifactory Version: 4.15.0
Docker Version: 1.12.0
Stop the Nginx service (service nginx stop)
Open your conf file in nginx (/etc/nginx/sites-enabled/default.conf) and change the following line in it:
rewrite ^/(v1|v2)/(.*) /api/docker/build-images/$1/$2;
to
rewrite ^/(v2)/(.*) /api/docker/build-images/$1/$2;
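To make the effect concrete, the rewrite maps Docker registry API paths onto Artifactory's Docker API endpoint for the build-images repository, for example:

# /v2/ubuntu/manifests/latest
#   -> /api/docker/build-images/v2/ubuntu/manifests/latest
rewrite ^/(v2)/(.*) /api/docker/build-images/$1/$2;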
Example below:
server {
    listen 8000 ssl;
    server_name artifactory.corpintra.net;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    access_log /var/log/nginx/build-docker-access.log;
    error_log /var/log/nginx/build-docker-error.log;

    rewrite ^/(v2)/(.*) /api/docker/build-images/$1/$2;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://localhost:8081/artifactory/;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Restart Nginx (service nginx restart)