In the past I tried setting up JFrog Artifactory OSS and managed to expose it outside my home network through my reverse proxy. I could push to it via my local CLI and through Drone CI, but a push took an abnormally long time (roughly 5 minutes), while pushing the same image to Docker Hub or GitLab took a matter of seconds.
My container is really small (think MBs) and I never have any issues pushing it to other remote registries. I always assumed the problem was the registry itself and the fact that it was running on an old machine, until now.
I recently discovered that my Git solution, Gitea, has a registry built in, so I set that up and mapped everything, and once again a push took an abnormally long time (roughly 5 minutes), this time backed by Gitea.
This leads me to think my issue is Nginx Proxy Manager related. I found some documentation online, but it was really general and vague. My current proxy config is below and it still has the issue. Could anyone point me in the right direction? I have also included a few other posts related to this issue.
server {
  set $forward_scheme http;
  set $server "192.168.X.XX";
  set $port 3000;

  listen 8080;
  #listen [::]:8080;
  listen 4443 ssl http2;
  #listen [::]:4443;

  server_name my.domain.com;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-47/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-47/privkey.pem;

  # Force SSL
  include conf.d/include/force-ssl.conf;

  access_log /data/logs/proxy-host-10_access.log proxy;
  error_log /data/logs/proxy-host-10_error.log warn;

  # Additional fields I added on top of the default Nginx Proxy Manager config
  proxy_buffering off;
  proxy_ignore_headers "X-Accel-Buffering";
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
I also checked the live Gitea logs: I can see the requests arriving in real time and being processed really fast, but there is always a significant delay before the next request arrives. This makes me think Nginx Proxy Manager is not forwarding the requests correctly, or that there is some setting I missed. Any help would be greatly appreciated!
Some of the settings I tried came from the sources below:
Another registry
Another stack overflow suggestion
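One thing worth checking (an assumption based on common reports about slow docker push through nginx proxies, not something confirmed in this thread): proxy_buffering off only disables response buffering, while nginx still buffers request bodies (the image layers being uploaded) to disk by default. Adding something like this to the proxy host's custom config may help:

```nginx
# Inside the proxy host's server/location config:
client_max_body_size 0;        # don't reject large image layers with HTTP 413
proxy_request_buffering off;   # stream layer uploads straight to the backend
proxy_http_version 1.1;        # recommended when streaming request bodies
```

With request buffering on, nginx spools each layer to disk before forwarding it, which would match the symptom of Gitea seeing each request arrive quickly but with long gaps in between.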
I have hit a wall trying to fix a problem with my OpenProject installation. I installed it following the instructions in this guide. Then I added an A record for my public IP and subdomain using world4you. I also created SSL certificates with Let's Encrypt:
mkdir /var/www/certbot/openproject.invert.at
certbot certonly --webroot -w /var/www/certbot/openproject.invert.at -d openproject.invert.at
Then I created and modified a file named /etc/nginx/sites-enabled/openproject.eeg_invert.de as follows:
server {
  listen 80;
  listen [::]:80;
  listen 443 ssl;
  listen [::]:443 ssl;

  # ssl_certificate /etc/letsencrypt/live/openproject.eeg_invert.de/fullchain.pem;
  # ssl_certificate_key /etc/letsencrypt/live/openproject.eeg_invert.de/privkey.pem;

  access_log /var/log/nginx/access_openproject.eeg_invert.de.log;
  error_log /var/log/nginx/error_openproject.eeg_invert.de.log;

  server_name openproject.eeg_invert.de;

  if ($http_user_agent ~* ".*SemrushBot.*") {return 403;}

  location '/.well-known/acme-challenge' {
    root /var/www/certbot/openproject.eeg_invert.de;
  }

  location / {
    if ($scheme = http) {
      return 301 https://$server_name$request_uri;
    }
    proxy_set_header X-Script-Name /;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Remote-User $remote_user;
    proxy_set_header Host $http_host;
    # proxy_redirect http:// https://;
    proxy_pass http://localhost:8080;
  }
}
I reloaded nginx and everything worked just fine. However, after updating the application with cd /docker/openproject/compose && docker-compose pull && docker-compose up -d and reloading nginx, I now get this message in Chrome:
This page isn’t working openproject.eeg_invert.de redirected you too many times.
Try clearing your cookies.
ERR_TOO_MANY_REDIRECTS
I backed up all relevant Docker volumes and the entire project folder (where the compose file is located) before updating. I am in no way an IT expert, so what I did after updating was run docker-compose down, restore the project folder, and run docker-compose up -d again.
The problem is that I am still getting the same error. I looked at the nginx error log files, but nothing comes up. I tried disabling some options in the nginx files at random to see if that changes anything, but the result is always the same.
I have hit a wall now and I would very much appreciate your help! Thanks in advance for any suggestions or ideas you may have.
Did you try clearing cookies in your browser? There are many reasons why this issue can occur:
Issues with the browser's cache/cookies. The browser may be caching faulty data that leads to the redirection error.
The browser extensions. Sometimes a browser extension can cause a redirection error.
The website's URL. A misconfiguration in URL settings can cause the redirection error.
WordPress cache. The website cache could be causing a redirect loop.
SSL certificate. A misconfigured security protocol (SSL certificate) can cause a redirect loop.
Third-party services and plugins. A faulty WordPress plugin could be causing the redirection loop.
The site's .htaccess file. This is the user-level configuration file WordPress uses to rewrite URLs to index.php; the website URL itself is stored as a value in the database, and a mismatch between the two can loop.
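Since the nginx config above redirects based on $scheme but proxies to OpenProject over plain HTTP, another likely cause (an assumption, not confirmed in this thread) is that the updated OpenProject container now issues its own HTTPS redirect because it never learns that the original request was already HTTPS. Forwarding the protocol to the backend usually breaks such loops:

```nginx
location / {
  if ($scheme = http) {
    return 301 https://$server_name$request_uri;
  }
  # Tell OpenProject the client connection is already HTTPS,
  # so it does not redirect to https:// again itself.
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header Host $http_host;
  proxy_pass http://localhost:8080;
}
```

If OpenProject is configured to require HTTPS, it typically trusts this header to decide whether the request already arrived encrypted.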
I need your help setting up Laradock (with Docker) using Nginx and a "fake" (self-signed) SSL certificate on my local machine.
I have no idea how to set it up. Could you please help me?
Thanks
To enable SSL with the current version of Laradock (as of Nov 2019) with a self-signed certificate, you must enable it in the nginx settings. Inside the folder nginx/sites, uncomment the lines below "# For https" (line 6):
# For https
listen 443 ssl default_server;
listen [::]:443 ssl default_server ipv6only=on;
ssl_certificate /etc/nginx/ssl/default.crt;
ssl_certificate_key /etc/nginx/ssl/default.key;
Then restart nginx: docker-compose restart nginx
and you're ready.
If google-chrome complains you can enable the flag at chrome://flags/#allow-insecure-localhost to allow even invalid certificates.
The solution given only allows for https://localhost; however, you might need to generate your own certificate when using a custom domain pointing to localhost, e.g. https://testing.dev
I've written a gist to this — https://gist.github.com/r0lodex/0fe03fc8d22241d79cba65107b30868b
Hopefully this will help those who are still searching.
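For reference, here is a minimal sketch of generating such a certificate with openssl (the domain testing.dev and file names are examples; Laradock expects the files under nginx/ssl/, following the default.crt/default.key convention shown above):

```shell
# Self-signed certificate for a local custom domain (example: testing.dev).
# The SAN extension matters: modern browsers ignore the CN alone.
# (-addext requires OpenSSL 1.1.1 or newer.)
DOMAIN="testing.dev"
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=${DOMAIN}" \
  -addext "subjectAltName=DNS:${DOMAIN}" \
  -keyout "${DOMAIN}.key" -out "${DOMAIN}.crt"

# Sanity check: print the certificate subject.
openssl x509 -in "${DOMAIN}.crt" -noout -subject
```

You would then point the ssl_certificate / ssl_certificate_key directives in nginx/sites at the generated files and restart the nginx container as above.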
I run ONLYOFFICE with Docker (docker run -i -t -d -p 80:80 onlyoffice/documentserver) and an nginx load balancer which provides SSL encryption.
My question is: how can I add authentication without touching the load balancer?
The problem is that anybody can use the server.
"The Problem is, everybody can use the server."
We would recommend enabling JWT on the Document Server. It is supported by the NC (Nextcloud) connector.
http basic auth works, tested with nextcloud integration:
root@e54c225ab8aa:/# cat /etc/nginx/conf.d/onlyoffice-documentserver.conf
include /etc/nginx/includes/onlyoffice-http.conf;
server {
  listen 0.0.0.0:80;
  listen [::]:80 default_server;
  server_tokens off;
  include /etc/nginx/includes/onlyoffice-documentserver-*.conf;
}
root@e54c225ab8aa:/#
Insert, e.g. (note the straight ASCII quotes; curly quotes will break the nginx config):
auth_basic "Administrator's Area";
auth_basic_user_file /etc/nginx/.htpasswd;
and restart nginx: /etc/init.d/nginx restart
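For completeness, a sketch of creating the .htpasswd file referenced above; the username admin and password s3cret are placeholders, and this uses openssl's Apache-compatible apr1 hash so the apache2-utils htpasswd tool is not required:

```shell
# Create an htpasswd entry without apache2-utils, using openssl's
# Apache-compatible MD5 (apr1) scheme. "admin"/"s3cret" are placeholders.
USER="admin"
PASS="s3cret"
HASH="$(openssl passwd -apr1 "$PASS")"
printf '%s:%s\n' "$USER" "$HASH" > .htpasswd
cat .htpasswd   # line looks like admin:$apr1$<salt>$<hash>
```

Copy the resulting file to /etc/nginx/.htpasswd inside the container (or bind-mount it) before restarting nginx.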
I'm hosting my own docker-registry in a docker container. It's fronted by nginx running in a separate container to add basic auth. Checking the _ping routes I can see that nginx is routing appropriately. When calling docker login from boot2docker (on Mac OSX) I get this error:
FATA[0003] Error response from daemon: Invalid registry endpoint https://www.example.com:8080/v1/: Get https://www.example.com:8080/v1/_ping: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry www.example.com:8080 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/www.example.com:8080/ca.crt
Which is odd, because it's a valid CA-signed SSL cert. I've tried adding --insecure-registry in EXTRA_ARGS as per these instructions: https://github.com/boot2docker/boot2docker#insecure-registry but the 'profile' file doesn't exist initially. If I create it and add
EXTRA_ARGS="--insecure-registry www.example.com:8080"
I see no improvement. To isolate the problem, I tried docker login from an Ubuntu VM (not boot2docker). Now I get a different error:
Error response from daemon:
The docker registry is run directly from the public hub, e.g.
docker run -d -p 5000:5000 registry
(Note that nginx routes from 8080 to 5000). Any help and/or resources to help debug this would be much appreciated.
UPDATE
I was looking for a guide to help comprehensively solve this problem. Specifically, I want to:
Create a private registry
Secure the registry with basic Auth
Use the registry from boot2docker
I have created the registry and tested locally, it works. I have secured the registry with nginx adding basic auth.
The trouble is now actually using the registry from two types of client:
1) Non boot2docker client.
One of the answers below helped with this. I added --insecure-registry flag to options in /etc/default/docker and now I can talk to my remote docker registry.
However, this isn't compatible with auth as docker login gets an error:
2015/01/15 21:33:57 HTTP code 401, Docker will not send auth headers over HTTP.
So, if I want to use auth I'll need to use HTTPS. I already have this server serving over HTTPS, but that doesn't work if I set --insecure-registry. There appears to be a certificate trust issue, which I'm confident I can solve on non-boot2docker, but...
2) For a boot2docker client, I can't get --insecure-registry to work or the certificate to be trusted.
UPDATE 2
Following this Stack Exchange question I managed to add the CA to my Ubuntu VM, and I can now use the registry from a non-boot2docker client. However, there is still a lot of odd behavior.
Even though my current user is a member of the docker group (so I shouldn't have to use sudo), I now have to use sudo or I get the following error when trying to log in to or pull from my private registry:
user@ubuntu:~$ docker login example.com:8080
WARNING: open /home/parallels/.dockercfg: permission denied
parallels@ubuntu:~$ docker pull example.com:8080/hw:1
WARNING: open /home/parallels/.dockercfg: permission denied
And when running containers pulled from my private registry for the first time, I have to specify them by image ID, not by name.
Edit the Docker defaults file:
sudo vim /etc/default/docker
Add the DOCKER_OPTS:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=www.example.com:8080"
Restart the Docker service:
sudo service docker restart
Run the following command:
boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry <YOUR INSECURE HOST>\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Docker versions > 1.3.1 communicate over HTTPS by default when connecting to a docker registry.
If you are using Nginx to proxy_pass to port 5000 where the docker registry is listening, you will need to terminate the docker client's SSL connection at the webserver/LB (Nginx in this case). To verify that Nginx is terminating the SSL connection correctly, use curl https://www.example.com:8081/something, where 8081 is another port set up for testing the SSL cert.
If you don't care whether your docker client connects to the registry over HTTP rather than HTTPS, add
OPTIONS="--insecure-registry www.example.com:8080"
in /etc/sysconfig/docker (or the equivalent in other distros) and restart the docker service.
Hope it helps.
As of Docker version 1.3.1, if your registry doesn't support HTTPS, you must add it as an insecure registry. For boot2docker, this is a bit more complicated than usual. See: https://github.com/boot2docker/boot2docker#insecure-registry
The relevant commands are:
$ boot2docker init
$ boot2docker up
$ boot2docker ssh
$ echo 'EXTRA_ARGS="--insecure-registry <YOUR INSECURE HOST>"' | sudo tee -a /var/lib/boot2docker/profile
$ sudo /etc/init.d/docker restart
If you want to add SSL certificates to the boot2docker instance, it's going to be something similar (boot2docker ssh followed by sudo).
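If you go the certificate route instead, the daemon error message quoted in the question already names the expected location. A sketch of staging a CA cert there follows; the registry host:port is an example, the demo directory stands in for /etc/docker/certs.d so the sketch can run without root, and on a real host the copy needs sudo plus a daemon restart:

```shell
# Docker trusts a private registry's CA when the cert is placed at
#   /etc/docker/certs.d/<host:port>/ca.crt
# This sketch uses a local demo directory as a stand-in for
# /etc/docker/certs.d and a throwaway self-signed CA.
REGISTRY="www.example.com:8080"      # example registry host:port
CERTS_D="./certs.d-demo"             # stand-in for /etc/docker/certs.d

openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -subj "/CN=example-ca" \
  -keyout ca.key -out ca.crt

mkdir -p "${CERTS_D}/${REGISTRY}"
cp ca.crt "${CERTS_D}/${REGISTRY}/ca.crt"

# On a real host: sudo mkdir/cp under /etc/docker/certs.d,
# then restart the docker daemon.
```

For boot2docker, the same layout would be created inside the VM via boot2docker ssh, as noted above.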
For Ubuntu, modify the file /etc/default/docker:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=10.27.19.230:5000"
For RHEL, modify the file /etc/sysconfig/docker:
other_args="--insecure-registry 10.27.19.230:5000"
Register an SSL certificate from https://letsencrypt.org/. If you need more instructions, refer to this link.
Enable SSL for nginx. Pay attention to the SSL part of the config below: after registering you have fullchain.pem, privkey.pem, and dhparam.pem, which nginx uses to enable SSL.
server {
  listen 443;
  server_name docker.mydomain.com;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/fullchain.pem;
  ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/nginx/conf.d/dhparam.pem;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security max-age=15768000;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass http://docker-registry;
    proxy_set_header Host $http_host;                         # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;                  # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}
This resolved my problem; I hope it helps you too.
Try running the daemon with the args:
docker -d --insecure-registry="www.example.com:8080"
instead of setting EXTRA_ARGS
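On newer Docker releases the docker -d form no longer exists (the daemon binary became dockerd), and the usual place for this setting is the daemon config file /etc/docker/daemon.json:

```json
{
  "insecure-registries": ["www.example.com:8080"]
}
```

After restarting the daemon (e.g. sudo systemctl restart docker), docker info should list the host under "Insecure Registries".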