When I try to pull an image from my local mirror, it works:
$ docker login -u docker -p mypassword nexus3.pleiade.mycomp.fr:5000
$ docker pull nexus3.pleiade.mycomp.fr:5000/hello-world
Using default tag: latest
latest: Pulling from hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for nexus3.pleiade.mycomp.fr:5000/hello-world:latest
But when I configure this registry as a mirror, it is simply ignored: images are always pulled from the public Docker Hub, not from my local mirror:
$ ps -ef | grep docker
/usr/bin/dockerd -H fd:// --storage-driver=overlay2 --registry-mirror=https://nexus3.pleiade.mycomp.fr:5000
$ docker info
Registry Mirrors:
https://nexus3.pleiade.mycomp.fr:5000/
$ docker rmi nexus3.pleiade.mycomp.fr:5000/hello-world
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest
I know for sure it doesn't use my mirror, because when I unset the proxy settings, it cannot reach the hello-world image.
Is this a Docker bug, or am I missing something?
Docker info (short):
Server Version: 1.13.1
Storage Driver: overlay2
(...)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.8.0-37-generic
Operating System: Ubuntu 16.10
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 15.67 GiB
(...)
Registry Mirrors:
https://nexus3.pleiade.mycomp.fr:5000/
UPDATE:
Running "journalctl -xe", I can see some useful information:
level=error msg="Attempting next endpoint for pull after error: Get https://nexus3.pleiade.mycomp.fr:5000/v2/library/hello-world/manifests/latest: no basic auth credentials"
It looks related to https://github.com/docker/docker/issues/20097, but the workaround is not working: when I replace --registry-mirror=https://nexus3.pleiade.mycomp.fr:5000 with --registry-mirror=https://docker:password@nexus3.pleiade.mycomp.fr:5000,
I get exactly the same error.
If it matters, Nexus is using a self-signed certificate, which has been copied to /etc/docker/certs.d/nexus3.pleiade.mycomp.fr:5000/ca.crt; this allowed me to log in via "docker login".
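For reference, here is a minimal sketch of how that certificate was put in place; the source filename nexus3-ca.crt is a placeholder, and the target directory must match the registry host:port exactly:
# create the per-registry trust directory for the Docker daemon
sudo mkdir -p "/etc/docker/certs.d/nexus3.pleiade.mycomp.fr:5000"
# copy the self-signed Nexus CA certificate there as ca.crt
sudo cp nexus3-ca.crt "/etc/docker/certs.d/nexus3.pleiade.mycomp.fr:5000/ca.crt"
# verify the login still works with the certificate in place
docker login -u docker -p mypassword nexus3.pleiade.mycomp.fr:5000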
It's a Docker bug: https://github.com/docker/docker/issues/30880
The workaround is to set up an HTTPS reverse proxy that sets a hard-coded authentication header.
Here is an example config from Felipe C.:
In the nginx config in front of the Docker registry, add:
proxy_set_header Authorization "Basic a2luZzppc25ha2Vk";
Full example:
server {
    listen *:443 ssl http2;
    server_name docker.domain.blah.net;
    ssl on;
    include ssl/domain.blah.net.conf;
    # allow large uploads of files - refer to nginx documentation
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        proxy_pass http://127.0.0.1:8083/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Authorization "Basic YWRtaW46YWRtaW4xMjM=";
        #proxy_set_header X-Forwarded-Proto "https";
    }
}
server {
    listen *:80;
    server_name docker.domain.blah.net;
    return 301 https://$server_name$request_uri;
}
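For reference, the string after "Basic" is just the base64 encoding of user:password, so you can generate your own value like this (the credentials below are placeholders):
# produce the value for the hard-coded Authorization header
echo -n 'docker:mypassword' | base64
# -> ZG9ja2VyOm15cGFzc3dvcmQ=
# then use it in nginx as: proxy_set_header Authorization "Basic ZG9ja2VyOm15cGFzc3dvcmQ=";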
Another way is to docker logout from other servers,
and to enable the registry option "Allow anonymous docker pull" (the Docker Bearer Token Realm is required).
It worked for me to add /etc/docker/daemon.json:
{
    "registry-mirrors": [ "https://nexus3.pleiade.mycomp.fr:5000" ],
    "max-concurrent-downloads": 20
}
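A quick way to apply and verify this, assuming a systemd-based host like the Ubuntu machine above:
# reload the daemon so it picks up /etc/docker/daemon.json, then confirm the mirror is registered
sudo systemctl restart docker
docker info | grep -A 2 'Registry Mirrors'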
I may be late to the party but I hope this helps someone. I was facing the same issue and getting the auth error in the Nexus logs.
It turns out I had to enable anonymous docker pull in my Nexus repository settings.
Also, after doing so, check under Security -> Realms that the Docker Bearer Token Realm is active and given high priority.
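A simple way to confirm the change took effect is to pull without any stored credentials; the registry host and image below are the ones from the question:
# drop any cached credentials for the registry, then pull anonymously
docker logout nexus3.pleiade.mycomp.fr:5000
docker pull nexus3.pleiade.mycomp.fr:5000/hello-world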
You can add basic auth in the URL, and it works for me. Something like:
https://username:password@nexus3.pleiade.mycomp.fr:5000
Related
I am currently learning to set up nginx but I am already having an issue. Gitlab and Nextcloud are running on my VPS and both are accessible on the right port. Therefore I created an nginx config with a simple proxy_pass directive, but I always receive 502 Bad Gateway.
Nextcloud, Gitlab and NGINX are Docker containers; NGINX has port 80 open. The other two containers have ports 3000 and 3100 open.
/etc/nginx/conf.d/gitlab.domain.com.conf
upstream gitlab {
    server x.x.x.x:3000;
}
server {
    listen 80;
    server_name gitlab.domain.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://gitlab/;
    }
}
/var/logs/error.log
2018/04/12 08:10:41 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET / HTTP/1.1", upstream: "http://xxx.249.7.15:3000/", host: "gitlab.domain.com"
2018/04/12 08:10:42 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://xxx.249.7.15:3000/favicon.ico", host: "gitlab.domain.com", referrer: "http://gitlab.domain.com/
What is wrong with my configuration?
I think you could get away with a config way simpler than that.
Maybe something like this:
http {
    ...
    server {
        listen 80;
        charset utf-8;
        ...
        location / {
            proxy_pass http://gitlab:3000;
        }
    }
}
I assume you are using Docker's internal DNS for accessing the containers, i.e. gitlab points to the gitlab container's internal IP. If that is the case, you can open up a container and try to ping the gitlab container from the other container.
For example you can ping the gitlab container from the nginx container like this:
$ docker ps (use this to get the container id)
Now do:
$ docker exec -it <container_id_for_nginx_container> bash
# apt-get update -y
# apt-get install iputils-ping -y
# ping -c 2 gitlab
If you can't ping it, it means the containers have trouble communicating with each other. Are you using docker-compose? If you are, then I would suggest looking at the "links" keyword, which is used to link containers that should be able to communicate with each other. So for example you would probably link the gitlab container to postgresql.
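As a related sketch, a user-defined Docker network gives you the same name-based resolution without legacy links; the image names and network name below are placeholders:
# containers on the same user-defined network can resolve each other by name
docker network create web
docker run -d --name gitlab --network web gitlab/gitlab-ce
docker run -d --name nginx --network web -p 80:80 nginx
# from inside the nginx container, "gitlab" should now resolve via Docker's embedded DNS
docker exec nginx getent hosts gitlab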
Let me know if this helps.
Another option, which takes advantage of the fact that your Docker containers are just processes in their own isolated control group, is to bind each process (container) to a port on the host network (instead of an isolated network group). This bypasses Docker routing, so beware of the caveat that ports may not overlap on the host machine (no different from any normal process sharing the same host network).
You mentioned running Nginx and Nextcloud (I assume you are using the nextcloud fpm image because of FastCGI support). In this case, I had to do the following on my Arch Linux machine:
/usr/share/webapps/nextcloud is bind-mounted into the container at /var/www/html.
The UID of both the host and container process must be the same (in my case, host user http and container user www-data are both UID=33).
The 443 server block in nginx.conf must set root to the host's Nextcloud path: root /usr/share/webapps/nextcloud;.
The FastCGI script path for each server block that calls php-fpm over FastCGI must be adjusted to refer to the Docker container's Nextcloud base path: fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;. In other words, you cannot use $document_root as you normally would, because it points to the host's Nextcloud root path.
Optional: adjust the paths to the database and Redis in config.php to use the hostname of the host machine rather than localhost; localhost seems to resolve to the container itself despite the container being bound to the host machine's network.
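A hypothetical docker run line matching the points above; the container name is an assumption, the image tag is the official fpm variant, and the bind mount and host networking are as described:
# run the fpm-based Nextcloud container on the host network with the host's nextcloud tree bind-mounted in
docker run -d --name nextcloud-fpm \
  --network host \
  -v /usr/share/webapps/nextcloud:/var/www/html \
  nextcloud:fpm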
I want to serve a Flask app that uses an embedded bokeh serve from a server on my local network. To illustrate, I made an example using the bokeh serve example and a Docker image to replicate the server. The Docker image runs Nginx and Gunicorn. I think there is a problem with my nginx configuration routing the requests to the /bkapp URI.
I have detailed the problem and provided all source code in the following git repo.
I have started a discussion on the bokeh Google group.
Single Container
In order to reduce the complexity of running nginx in its own container, I built this image, which runs nginx in the same container as the web app.
Installation
NOTE: I am using Docker version 17.09.0-ce
Download or clone the repo and navigate to this directory (single_container).
# as root
docker build -f Dockerfile -t single_container .
Start a terminal session in a new container:
# as root
docker run -ti single_container:latest
In the new container, start nginx:
nginx
Now start gunicorn:
gunicorn -w 1 -b :8000 flask_gunicorn_embed:app
In a separate terminal (on the host machine), find the IP address of the single_container container you are running:
# as root
docker ps
# then copy the CONTAINER ID and inspect it
docker inspect [CONTAINER ID] | grep IPAddress
PROBLEM
Using the IP found above (with the container running), check it out in Firefox with the inspector.
As you can see in the screenshot above (see the screenshots folder, "single_container_broken.png", for the raw version), the GET request just hangs.
I can verify that nginx is serving the static files, though, by navigating to /bkapp/static/ (see bokeh_recipe/single_container/nginx/bokeh_app.conf for the config).
Another oddity is that when I try to hit the embedded bokeh server directly (with /bkapp/), I end up with a 400 (denied?).
Note about the app
To reduce the complexity of dynamically assigning available ports to Tornado workers, I hard-coded port 46518 for talking to bokeh serve.
nginx config
I know you could just look at bokeh_recipe/single_container/nginx/bokeh_app.conf, but I want to show it here.
I think I need to configure nginx to make explicit that the "request" to bkapp at 127.0.0.1:46518 originates FROM the server, not the client.
## Define the parameters for a specific virtual host/server
server {
    # define what port to listen on
    listen 80;
    # Define the specified charset to the "Content-Type" response header field
    charset utf-8;
    # Configure NGINX to deliver static content from the specified folder
    # NOTE this should be a docker volume shared from the bokehrecipe_web container (css, js for bokeh serve)
    location /bkapp/static/ {
        alias /home/flask/app/web/static/;
        autoindex on;
    }
    # Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_buffering off;
    }
    # deal with the http://127.0.0.1/bkapp/autoload.js (note hard coded port for now)
    location /bkapp/ {
        proxy_pass http://127.0.0.1:46518;
    }
}
I have set up my own Docker Registry, but I did not want it on the root URL, so when I created the service I used the REGISTRY_HTTP_PREFIX environment variable and set it to /registry/; thus the URL to the registry is https://tools.example.com/registry. This is being proxied by Nginx, which has Basic Auth set up on it.
I tested access to the registry using a browser and I was able to get it to show that there are no repositories by going to http://tools.example.com/registry/v2/_catalog.
This led me to think that it was working. However, when I try to log in to the registry using the Docker command line, I get the Basic Auth challenge but then the login fails because the URL is incorrect, e.g.
docker login -u russells -p xxxxxxxx https://tools.example.com/registry/
Error response from daemon: login attempt to https://tools.example.com/v2/ failed with status: 404 Not Found
As can be seen from the error, the prefix is not being added properly. So how can I log in to the registry so I can push images? Is there an environment variable or something that I am missing to make docker login work properly?
Update - 2017-08-12 2253 BST
I have been playing around with the configuration a bit, but I am still not getting very far.
As requested here are my configuration files.
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    keepalive_timeout 65;
    upstream docker-registry {
        server registry:5000;
    }
    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
    }
    server {
        listen 15000;
        server_name tools.example.com;
        # disable any limits to avoid HTTP 413 for large image uploads
        client_max_body_size 0;
        # required to avoid HTTP 411
        chunked_transfer_encoding on;
        location /registry/ {
            # Do not allow connections from docker 1.5. and earlier
            # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
            if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$") {
                return 404;
            }
            auth_basic "Docker Registry";
            auth_basic_user_file /etc/nginx/.htpasswd;
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
            proxy_pass http://docker-registry/registry/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 900;
        }
    }
}
My Docker Registry service is deployed as registry and is running on the default port of 5000. Looking at this now I think I have got things confused. I do not need the registry to answer on the prefix itself, just Nginx.
For example if I leave the location set to / then I can login, but if I change this to /registry/ then I am not able to. I am beginning to think that the two are conflicting each other.
Registry
I have not set a configuration for the Registry other than the one environment variable, REGISTRY_HTTP_PREFIX, which may be surplus to requirements in this setup.
Update - 2017-08-15 1100 BST
In order to test the prefix for the registry I created a registry container with the following configuration file:
version: 0.1
auth:
  htpasswd:
    realm: Docker Registry
    path: /auth/etc/htpasswd
storage:
  filesystem:
    rootdirectory: /var/lib/registry
    maxthreads: 100
http:
  addr: 0.0.0.0:5000
  prefix: /registry/
  tls:
    certificate: /auth/ssl/certs/registry.cert
    key: /auth/ssl/private/registry.key
As this is using self-signed certificates, I updated my Docker engine by placing the certificate in /etc/docker/certs.d/host-lin-01:5000.
I then created the container with the following command:
docker run -it --rm -p 5000:5000 --name registry_test -v ~/workspaces/docker/registry/etc/registry.yml:/etc/docker/registry/config.yml -v ~/workspaces/docker/registry:/auth registry:2
If I try and login to the registry with the command:
docker login -u russells -p xxxxxx https://host-lin-01:5000/registry
I get the following error:
Error response from daemon: login attempt to https://host-lin-01:5000/v2/ failed with status: 404 Not Found
Now if I remove the prefix: /registry/ line from the registry YAML file, restart the container, and then log in, all is well:
docker login -u russells -p xxxxxx https://turtle-host-03:5000/
Login Succeeded
What is strange, however, is that the login works for any prefix I put on the end of the login URL, e.g.
docker login -u russells -p xxxxxx https://turtle-host-03:5000/registry/fred/34
Login Succeeded
I do not understand this. I must be misunderstanding what the prefix setting does.
Your issue is the application of your Basic Auth. You have Nginx with Basic Auth, which is backed by a plain registry.
You are able to authenticate URLs and see the blank _catalog JSON, and that makes you feel it is working. But technically what is happening is that your Nginx asks for the username/password, gets it, and then passes the request on to your Docker registry, which in turn has no authentication.
Now when you use docker login you are expecting an authenticated registry, but you have an authenticated Nginx and a non-authenticated registry. So you need to ditch the lines below from your nginx config:
auth_basic "Docker Registry";
auth_basic_user_file /etc/nginx/.htpasswd;
Also when launching your registry you need to define the below environment variables
REGISTRY_AUTH: htpasswd
REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
Make sure you map /auth/htpasswd from your host into the registry container. Do this and the setup should work. Also make sure to set up the server certificates on your Docker client system.
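A minimal sketch of launching the registry that way; the host-side /opt/registry/auth path is a placeholder, and the htpasswd file must use bcrypt:
# create a bcrypt htpasswd file (requires apache2-utils) and start the registry with built-in auth
sudo mkdir -p /opt/registry/auth
htpasswd -Bbn russells mypassword | sudo tee /opt/registry/auth/htpasswd
docker run -d -p 5000:5000 --name registry \
  -v /opt/registry/auth:/auth \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  registry:2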
Optional Changes
The next part of this answer is optional. Since you are using both Nginx and the Registry, I would suggest you ditch REGISTRY_HTTP_PREFIX from your registry and change the proxy_pass to:
proxy_pass http://docker-registry/;
I am reading a lot these days about how to set up and run a Docker stack. But one of the things I keep missing is how to set things up so that particular containers respond to access through their domain name, and not just their container name via Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; the request should go through to the microservice container which is handling the users API.
Then when I try to access customer-list.mycompany.com, it should go through to the microservice container which is handling the customer lists.
Of course, using Docker DNS I can access them and link them into a Docker network, but this only really works for container-to-container access, not internet-to-container.
Does anybody know how I should do that, or the best way to set that up?
So, you need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. Using this, you can set up a simple proxy_pass from an Nginx that will redirect users.mycompany.com to myhost:1337 (assuming that you published your port to 1337).
So, if you want to do this, you'll need to set up your container to expose a certain port using:
docker run -d -p 5000:5000 training/webapp # publish image port 5000 to host port 5000
You can then curl localhost:5000 from your host to access the container.
curl -X GET localhost:5000
If you want to set up a domain name in front, you'll need to have a webserver instance that allows you to proxy_pass your hostname to your container.
i.e. in Nginx:
server {
    listen 80;
    server_name users.mycompany.com;
    location / {
        proxy_pass http://localhost:5000;
    }
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. In essence you need to deploy a DNS on your host that will distinguish the containers and resolve their domain names to their dynamic IPs. So you could give a try to the following:
1. Deploy one of the Docker-aware DNS solutions (I suggest SkyDNSv1 / SkyDock);
2. Configure your host to work with this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
3. Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside a container too; it will discover the web application container by hostname.
Here is one solution with nginx and docker-compose:
users.mycompany.com is served by the nginx container on port 8097
customer-list.mycompany.com is served by the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com
    ...
}
server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com
    ...
}
server {
    listen 0.0.0.0:80;
    server_name users.mycompany.com;
    location / {
        proxy_pass http://0.0.0.0:8097;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
server {
    listen 0.0.0.0:80;
    server_name customer-list.mycompany.com;
    location / {
        proxy_pass http://0.0.0.0:8098;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Docker Compose configuration:
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx
    ports:
      - '80:80'
    ...
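To try it out, you can bring the nginx service up and hit it with the two hostnames; the curl Host-header trick below simply avoids needing real DNS entries:
# start the proxy and request each virtual host by name
docker-compose up -d nginx
curl -H 'Host: users.mycompany.com' http://localhost/
curl -H 'Host: customer-list.mycompany.com' http://localhost/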
I'm hosting my own docker-registry in a docker container. It's fronted by nginx running in a separate container to add basic auth. Checking the _ping routes I can see that nginx is routing appropriately. When calling docker login from boot2docker (on Mac OSX) I get this error:
FATA[0003] Error response from daemon: Invalid registry endpoint https://www.example.com:8080/v1/: Get https://www.example.com:8080/v1/_ping: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry www.example.com:8080 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/www.example.com:8080/ca.crt
Which is odd, because it's a valid CA SSL cert. I've tried adding --insecure-registry in EXTRA_ARGS as per these instructions: https://github.com/boot2docker/boot2docker#insecure-registry, but initially the 'profile' file doesn't exist. If I create it and add
EXTRA_ARGS="--insecure-registry www.example.com:8080"
I see no improvement. I wanted to isolate the example and so tried docker login from an ubuntu VM (not boot2docker). Now I get a different error:
Error response from daemon:
The docker registry is run directly from the public hub, e.g.
docker run -d -p 5000:5000 registry
(Note that nginx routes from 8080 to 5000). Any help and/or resources to help debug this would be much appreciated.
UPDATE
I was looking for a guide to help comprehensively solve this problem. Specifically:
Create a private registry
Secure the registry with basic Auth
Use the registry from boot2docker
I have created the registry and tested it locally; it works. I have secured the registry with nginx, adding basic auth.
The trouble is now actually using the registry from two types of client:
1) Non-boot2docker client.
One of the answers below helped with this. I added the --insecure-registry flag to the options in /etc/default/docker and now I can talk to my remote docker registry.
However, this isn't compatible with auth, as docker login gets an error:
2015/01/15 21:33:57 HTTP code 401, Docker will not send auth headers over HTTP.
So, if I want to use auth I'll need to use HTTPS. I already have this server serving over HTTPS, but that doesn't work if I set --insecure-registry. There appears to be a certificate trust issue, which I'm confident I can solve on non-boot2docker, but...
2) For a boot2docker client, I can't get --insecure-registry to work or certificates to be trusted.
UPDATE 2
Following this Stack Exchange question I managed to add the CA to my Ubuntu VM and I can now use the registry from a non-boot2docker client. However, there is still a lot of odd behavior.
Even though my current user is a member of the docker group (so I don't have to use sudo), I now have to use sudo or I get the following error when trying to log in or pull from my private registry:
user#ubuntu:~$ docker login example.com:8080
WARNING: open /home/parallels/.dockercfg: permission denied
parallels#ubuntu:~$ docker pull example.com:8080/hw:1
WARNING: open /home/parallels/.dockercfg: permission denied
And when running containers pulled from my private registry for the first time, I have to specify them by image ID, not their name.
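The permission warnings above usually mean ~/.dockercfg was created by root (for example via an earlier sudo docker login); if so, fixing its ownership is a likely remedy (this is a guess, not part of the original question):
# give the login credentials file back to the regular user
sudo chown "$USER":"$USER" ~/.dockercfg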
Edit the Docker defaults file:
sudo vim /etc/default/docker
Add the DOCKER_OPTS line:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=www.example.com:8080"
Restart the docker service:
sudo service docker restart
Run the following command:
boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry <YOUR INSECURE HOST>\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Docker versions > 1.3.1 communicate over HTTPS by default when connecting to a docker registry.
If you are using Nginx to proxy_pass to port 5000, where the docker registry is listening, you will need to terminate the docker client's SSL connection to the docker registry at the webserver/LB (Nginx in this case). To verify that Nginx is terminating the SSL connection properly, use cURL against https://www.example.com:8081/something, where 8081 is another port set up for testing the SSL cert.
If you don't care whether your docker client connects to the registry over HTTP rather than HTTPS, add
OPTIONS="--insecure-registry www.example.com:8080"
in /etc/sysconfig/docker (or the equivalent in other distros) and restart the docker service.
Hope it helps.
As of Docker version 1.3.1, if your registry doesn't support HTTPS, you must add it as an insecure registry. For boot2docker, this is a bit more complicated than usual. See: https://github.com/boot2docker/boot2docker#insecure-registry
The relevant commands are:
$ boot2docker init
$ boot2docker up
$ boot2docker ssh
$ echo 'EXTRA_ARGS="--insecure-registry <YOUR INSECURE HOST>"' | sudo tee -a /var/lib/boot2docker/profile
$ sudo /etc/init.d/docker restart
If you want to add SSL certificates to the boot2docker instance, it's going to be something similar (boot2docker ssh followed by sudo).
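A hedged sketch of what that might look like for a CA certificate; the ca.crt path on the host is a placeholder, and note that files outside /var/lib/boot2docker may not survive a VM restart:
# copy the registry CA into the boot2docker VM and restart its docker daemon
boot2docker ssh "sudo mkdir -p '/etc/docker/certs.d/www.example.com:8080'"
boot2docker ssh "sudo tee '/etc/docker/certs.d/www.example.com:8080/ca.crt'" < ca.crt
boot2docker ssh "sudo /etc/init.d/docker restart"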
For Ubuntu, please modify the file /etc/default/docker:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=10.27.19.230:5000"
For RHEL, please modify the file /etc/sysconfig/docker:
other_args="--insecure-registry 10.27.19.230:5000"
Register an SSL key from https://letsencrypt.org/. If you need more instructions, refer to this link.
Enable SSL for nginx. Pay attention to the SSL part in the code below: after registering the SSL key you have fullchain.pem, privkey.pem and dhparam.pem; use them in nginx to enable SSL.
server {
    listen 443;
    server_name docker.mydomain.com;
    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/fullchain.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/conf.d/dhparam.pem;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;
    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;
    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }
        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
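Note that proxy_pass http://docker-registry; in the block above assumes an upstream named docker-registry is defined elsewhere in the nginx configuration; a minimal sketch, with the backend address being an assumption:
# append an upstream pointing at the local registry container, then reload nginx
sudo tee /etc/nginx/conf.d/docker-registry-upstream.conf > /dev/null <<'EOF'
upstream docker-registry {
    server 127.0.0.1:5000;
}
EOF
sudo nginx -t && sudo nginx -s reload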
This resolved my problem; I hope it helps you.
Try running the daemon with the args:
docker -d --insecure-registry="www.example.com:8080"
instead of setting EXTRA_ARGS.