Nexus3: Push to Docker Group Repo

I have Nexus v3.6 and created a Docker repo docker-repo (type: hosted) and a Docker group docker-group (type: group).
For both I enabled an HTTPS connector:
docker-repo on port 8101 and docker-group on port 8102.
I added docker-repo to my docker-group.
Now I am able to push/pull an image to/from docker-repo directly, like:

    docker push myhost.com:8101/mymimage:latest

But when I try to push to the group like this:

    docker push myhost.com:8102/docker-repo/mymimage:latest

I get an error saying: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value
Any ideas what's the problem here?

Update: I solved this problem with NGINX as follows.
In the following example, "repository/docker" is a group that combines docker-proxy and docker-hosted repositories.
All HEAD* and GET requests are proxied to the docker repository group (hosted + proxy).
All "changing" requests are proxied to the docker-hosted repository directly.
*One exception: HEAD /v2/.../blobs/ should be proxied to the hosted repo, because it is called before pushing blobs to the hosted repo and we have to check for the blob's existence there. Otherwise we get the error: blob unknown: blob unknown to registry
    server {
        listen *:443 default_server ssl;
        .........................

        location ~ ^/(v1|v2)/[^/]+/?[^/]+/blobs/ {
            if ($request_method ~* (POST|PUT|DELETE|PATCH|HEAD)) {
                rewrite ^/(.*)$ /repository/docker-hosted/$1 last;
            }
            rewrite ^/(.*)$ /repository/docker/$1 last;
        }

        location ~ ^/(v1|v2)/ {
            if ($request_method ~* (POST|PUT|DELETE|PATCH)) {
                rewrite ^/(.*)$ /repository/docker-hosted/$1 last;
            }
            rewrite ^/(.*)$ /repository/docker/$1 last;
        }

        location / {
            proxy_pass http://nexus:8081/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
You can verify the settings by running:

    # pull via the proxy
    docker pull nexus.your.domain/ubuntu
    # push to the hosted repository
    docker push nexus.your.domain/ubuntu

According to the official documentation about repository groups for docker:
A repository group is the recommended way to expose all your
repositories for read access to your users.
and, from the documentation about pushing images in private registries
You can not push to a repository group or a proxy repository.
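
In practice this means a plain group connector stays read-only: pulls can go through the group's port, while pushes must target the hosted repository's port directly, unless you put a rewriting proxy like the one above in front of Nexus. Using the ports from the question:

    # read through the group connector
    docker pull myhost.com:8102/mymimage:latest

    # writes must go to the hosted repo's connector
    docker push myhost.com:8101/mymimage:latest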

Related

NGINX reverse proxy with registered domain name not working (cannot call API)

Nginx version 1.14.0
nginx.conf file (the "fake" domain below is officially registered):
    http {
        # default stuff...
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        server {
            listen 80;
            server_name mydomain.registered.com;

            location / {
                proxy_pass http://127.0.0.1:8001/;
                # I also tried with the server's actual IP,
                # and with or without the slash at the end
            }
        }
        # default stuff...
    }
Assuming a "fake" IP of the server being 10.70.0.65, my APIs are:
http://10.70.0.65:8001/auth/login
http://10.70.0.65:8001/auth/logout
etc...
My above API server is a running docker container with port mapping as below:
0.0.0.0:8000-8001->8000-8001/tcp, :::8000-8001->8000-8001/tcp
When calling the APIs directly using the server's IP, e.g. http://10.70.0.65:8001/auth/login, it works. But calling them using the domain name, e.g. http://mydomain.registered.com/auth/login, does not work.
How can I make it work?
Try passing the Host: header to your backend:
    location / {
        proxy_pass http://127.0.0.1:8001/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
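
If passing the Host header alone doesn't help, it's also worth verifying that the registered domain actually resolves to the server and that nginx, not the backend, is answering on port 80. A quick check (hypothetical commands, assuming dig and curl are available):

    # does the registered domain point at the server?
    dig +short mydomain.registered.com

    # hit the backend directly, then go through nginx with an explicit Host header
    curl -i http://10.70.0.65:8001/auth/login
    curl -i -H "Host: mydomain.registered.com" http://10.70.0.65/auth/login

If the last request fails while the first succeeds, the problem is in DNS or the nginx server block rather than in the API itself.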

NGinx and multiple docker containers but one subdomain

My Goal
I want to use a single NGinx docker container as a proxy.
I want it to respond to traffic on my domain: "sub.domain.com" and listen on port 80 and 443.
When traffic comes in on /admin I want it to direct all traffic to one docker container (say... admin_container:6000).
When traffic comes in on /api I want it to direct all traffic to another docker container (say... api_container:5500).
When traffic comes in on any other path (/anything_else) I want it to direct all traffic to another docker container (say... website_container:5000).
Some Helpful Context
Real quick, let me provide some context in case it's helpful. I have a NodeJS website running in a docker container. I'd like to also have an Admin section, created with ASP.NET Core that runs in a second docker container. I'd like both of these websites to share and make use of a single ASP.NET Core Web Api project, running in a third docker container. So, one NodeJS project and two ASP.NET Core projects, that all live on a single subdomain:
sub.domain.com/
Serves the main website
sub.domain.com/admin
Serves the Admin website
sub.domain.com/api
Serves API endpoints and handles Database connectivity
What I have So Far
So far I have the NGinX reverse proxy set up and a single docker container for the NodeJS application. Currently all traffic on :80 is redirected to 443. All traffic on 443 is directed to the NodeJS docker container, running privately on :5000. I'll admit I'm not great with NGinx and don't fully understand how this works.
The NGinx.conf file
    worker_processes 2;

    events { worker_connections 1024; }

    http {
        sendfile on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

        upstream docker-nodejs {
            server nodejs_prod:5000;
        }

        server {
            listen 80;
            server_name sub.domain.com;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443 ssl;
            server_name sub.domain.com;
            ssl_certificate /etc/nginx/ssl/combined.crt;
            ssl_certificate_key /etc/nginx/ssl/mysecretkeyfile.key;

            location / {
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-Ssl on;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_pass http://docker-nodejs;
            }
        }
    }
Quick note: in the above code, "nodejs_prod:5000" refers to the container whose name is "nodejs_prod" and which listens on port 5000. I don't understand how that works, but it is working: somehow Docker creates DNS entries in the private network, one for each container name. I'm actually using docker-compose.
My Actual Question: How will this NGinx.conf file look when I have 2 more websites (each a docker container). It's important that /admin and /api are sent to the correct docker containers, and not handled by the "catch all" location. I'm imagining that I'll have a "catch all" location which captures all traffic that DOESN'T START WITH /admin OR /api.
Thank you!
In your nginx config you want to add location rules, such as
    location /admin {
        proxy_pass http://admin_container:6000/;
    }

    location /api {
        proxy_pass http://api_container:5500/;
    }
This sends /admin to admin_container on port 6000, and /api to api_container on port 5500. Note that the trailing slash in proxy_pass makes nginx replace the matched location prefix before forwarding, so the upstream sees the path without /admin or /api; drop the trailing slash if the upstream apps expect the full original path.
docker-compose creates a network between all of its containers, which is why http://api_container:5500/ resolves to the container named api_container on port 5500. This can be used for communication between containers; you can read more about it at https://docs.docker.com/compose/networking/. A minimal sketch of the overall layout follows.
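
For completeness, here is a minimal docker-compose sketch of this layout. The service names come from the question; the image names are placeholders you would replace with your own builds:

    version: "3"
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro   # the config above
      website_container:
        image: my-nodejs-site      # placeholder image
      admin_container:
        image: my-aspnet-admin     # placeholder image
      api_container:
        image: my-aspnet-api       # placeholder image

All four services share the default compose network, so nginx can reach the other three by service name, exactly as in the location rules above.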

Docker push nexus private repo fail, 413 Request Entity Too Large

I've deployed an on-prem instance of Nexus OSS, which is reached behind an Nginx reverse proxy.
On any attempt to push Docker images to a repo created on the Nexus registry, I'm bumping into a
413 Request Entity Too Large in the middle of the push.
The nginx.conf file is looking like so:
    http {
        client_max_body_size 0;

        upstream nexus_docker {
            server nexus:1800;
        }

        server {
            server_name nexus.services.loc;

            location / {
                proxy_pass http://nexus_docker/;
                proxy_set_header Host $http_host;  # note: the original had $http_post, a typo for $http_host
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
    }
The nginx is deployed using Docker, and I've successfully logged in to it using docker login.
I've tried multiple other directives, such as chunked transfer encoding and the like, but nothing seems to work.
That's because client_max_body_size defaults to roughly 1 MB when unset.
To resolve this, you will need to add the following line to your server block:

    # Allow arbitrarily large uploads to avoid "413 Request Entity Too Large" errors
    client_max_body_size 0;
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
As it turns out, the Linux distro running the containerized nginx server was itself running another nginx instance in front of any incoming request.
Once we set client_max_body_size to 0 in the nginx configuration file that the OS ran, it worked.
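
If you suspect the same kind of stacked-proxy setup, it is worth checking whether the host OS runs its own nginx in front of the container. A quick check (hypothetical commands, assuming shell access to the host):

    # is there an nginx on the host itself, besides the container?
    ps aux | grep [n]ginx

    # does any host-level config cap the request body size?
    grep -R "client_max_body_size" /etc/nginx/

Every proxy in the chain needs client_max_body_size raised, not just the innermost one.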

Error when accessing the Wildfly Management Console - Authentication required

I'm receiving the error Authentication required after I login in the Wildfly 13 Management Console.
If I type the user or password wrong, it asks again, but if I type them correctly it shows the page with the error message (so I assume the user and password are correct, but something after that causes the error).
I'm using docker to run a nginx container and a wildfly container.
The nginx listens externally on port 9991 and proxy pass the request to the wildfly container, but it shows the error described before.
It just happens with the Wildfly Console, every other request proxied, even request proxied to a websocket or to Wildfly on port 8080, are done successfully.
The Wildfly container listens externally on port 9990 and I can access the console successfully in this port. If on docker I map the port "9992:9990" I still can access the console successfully through port 9992.
So, it seems that this is not related to docker, but to the Wildfly Console itself. Probably some kind of authentication that is not happening successfully when using a reverse proxy in the middle.
I have a demo docker project at https://github.com/lucasbasquerotto/pod/tree/0.0.6; you can download the tag 0.0.6, which has everything set up to work with Wildfly 13 and nginx and to simulate this error.
    git clone -b 0.0.6 --single-branch --depth 1 https://github.com/lucasbasquerotto/pod.git
    cd pod
    docker-compose up -d
Then, if you access the container directly in http://localhost:9990 with user monitor and password Monitor#70365 everything works.
But if you access http://localhost:9991 with the same credentials, through the nginx reverse proxy, you receive the error.
My nginx.conf file:
    upstream docker-wildfly {
        server wildfly:9990;
    }

    location / {
        proxy_pass http://docker-wildfly;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
I've also tried with:

    proxy_set_header X-Forwarded-Proto $scheme;

And also with the Authorization header (just the 2nd line, and also with both):

    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;

And also defining the Host header with the port (instead of just $host):

    proxy_set_header Host $server_addr:$server_port;
I've tried the above configurations both in isolation and combined, all to no avail.
Any suggestions?
Has anyone successfully accessed the Wildfly Console through a reverse proxy?
Update (2018-09-22)
It seems Wildfly uses digest authentication (instead of basic).
I see the header in the console like the following:
Authorization: Digest username="monitor", realm="ManagementRealm", nonce="AAAAAQAAAStPzpEGR3LxjJcd+HqIX2eJ+W8JuzRHejXPcGH++43AGWSVYTA=", uri="/console/index.html", algorithm=MD5, response="8d5b2b26adce452555d13598e77c0f63", opaque="00000000000000000000000000000000", qop=auth, nc=00000005, cnonce="fe0e31dd57f83948"
I don't see much documentation about using nginx to proxy pass requests with digest headers (but I think it should be transparent).
One question I saw here in SO is https://serverfault.com/questions/750213/http-digest-authentication-on-proxied-server, but there is no answer so far.
I saw that there is the unofficial nginx module https://www.nginx.com/resources/wiki/modules/auth_digest/, but in the GitHub repository (https://github.com/atomx/nginx-http-auth-digest) it says:
The ngx_http_auth_digest module supplements Nginx's built-in Basic
Authentication module by providing support for RFC 2617 Digest
Authentication. The module is currently functional but has only been
tested and reviewed by its author. And given that this is security
code, one set of eyes is almost certainly insufficient to guarantee
that it's 100% correct. Until a few bug reports come in and some of
the ‘unknown unknowns’ in the code are flushed out, consider this
module an ‘alpha’ and treat it with the appropriate amount of
skepticism.
Also, it doesn't seem right to me to hardcode the user and password in a file for nginx to use (the authentication should be transparent to the reverse proxy in this case).
In any case, I tried it, and it correctly asks me to authenticate even when the final destination does not use digest authentication: when connecting to the Wildfly site (not the console), nginx asks for credentials before proxying the request and then forwards it successfully to the destination. In the case of the Wildfly console, however, it keeps asking me to authenticate forever.
So I think this is not the solution. The problem seems to be in what nginx is passing to the Wildfly Console.
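
One way to narrow this down (a hypothetical check, assuming WildFly's standard /management HTTP API endpoint and the credentials above) is to exercise digest auth with curl, bypassing the browser console:

    # direct to WildFly: should authenticate successfully
    curl -i --digest -u 'monitor:Monitor#70365' http://localhost:9990/management

    # through nginx: a 401 here, but not above, points at the proxy breaking the digest exchange
    curl -i --digest -u 'monitor:Monitor#70365' http://localhost:9991/management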
I had the same problem with the HAL management console v3.3 and v3.2.
I could not get nginx HTTPS working due to authentication errors, even though the page prompted for the HTTP basic auth user and pass.
This was tested in standalone mode on the same server.
My setup was:

    outside (https) -> nginx -> http://halServer:9990/

This resulted in working HTTPS but with HAL authentication errors (seen in the browser's console); the webpage was blank.
On first access the webpage would ask for HTTP basic auth credentials normally, but then almost all HTTPS requests would return an authentication error.
I managed to make it work correctly by first enabling HTTPS on the HAL console with a self-signed certificate and then configuring nginx to proxy pass to the HAL HTTPS listener.
Working setup is:

    outside (https) -> nginx (https) -> https://halServer:9993/

Here is the nginx configuration:
    server {
        listen 80;
        listen [::]:80;
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name halconsole.mywebsite.com;

        # SSL
        ssl_certificate /keys/hal_fullchain.pem;
        ssl_certificate_key /keys/hal_privkey.pem;
        ssl_trusted_certificate /keys/hal_chain.pem;

        # security
        include nginxconfig.io/security.conf;

        # logging
        access_log /var/log/nginx/halconsole.mywebsite.com.access.log;
        error_log /var/log/nginx/halconsole.mywebsite.com.error.log warn;

        # reverse proxy
        location / {
            # or use a static IP, or an nginx upstream
            proxy_pass https://halServer:9993;
            include nginxconfig.io/proxy.conf;
        }

        # additional config
        include nginxconfig.io/general.conf;
        include nginxconfig.io/letsencrypt.conf;
    }

    # subdomains redirect
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name *.halconsole.mywebsite.com;

        # SSL
        ssl_certificate /keys/hal_fullchain.pem;
        ssl_certificate_key /keys/hal_privkey.pem;
        ssl_trusted_certificate /keys/hal_chain.pem;

        return 301 https://halconsole.mywebsite.com$request_uri;
    }
proxy.conf:

    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;

    # Proxy headers
    # Note: $connection_upgrade and $proxy_add_forwarded are not built-in nginx variables;
    # they come from map blocks in the nginxconfig.io generated includes.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Forwarded $proxy_add_forwarded;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-By $server_addr;

    # Proxy timeouts
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
The easiest way to enable the HTTPS console is by using the console itself:
generate a Java JKS keystore, using either the command-line keytool or a GUI program.
I like GUIs, so I used KeyStore Explorer: https://github.com/kaikramer/keystore-explorer
copy the keystore file onto the halServer server where it has read access (no need to keep it secret). I placed mine inside the wildfly data dir, in a "keystore" directory:

    # your file paths might differ, don't copy paste
    cp /home/someUser/sftp_uploads/managementKS /opt/wildfly/standalone/data/keystore/managementKS

set permissions:

    # your file paths might differ, don't copy paste
    chown --recursive -H wildfly:wildfly /opt/wildfly/standalone/data/keystore

(use a VPN) log in to the cleartext console at http://halServer:9990/
add keystore: navigate:

    configuration -> subsystems -> security (elytron) -> other settings (click view button)
    stores -> keystore -> add
    ...
    Name = managementKS
    Type = JKS
    Path = keystore/managementKS
    Relative to = jboss.server.data.dir
    Credential Reference Clear Text = keystore-password
    click Add

result in standalone.xml:

    <key-store name="managementKS">
        <credential-reference clear-text="keystore-password"/>
        <implementation type="JKS"/>
        <file path="keystore/managementKS" relative-to="jboss.server.data.dir"/>
    </key-store>
add key manager: navigate:

    ssl -> key manager -> add
    ...
    Name = managementKM
    Credential Reference Clear Text = keystore-password
    Key Store = managementKS

result in standalone.xml:

    <key-manager name="managementKM" key-store="managementKS">
        <credential-reference clear-text="keystore-password"/>
    </key-manager>
add ssl context: navigate:

    ssl -> server ssl context -> add
    ...
    Name = managementSSC
    Key Manager = managementKM
    ...
    Edit added: Protocols = TLSv1.2
    save

result in standalone.xml:

    <server-ssl-contexts>
        <server-ssl-context name="managementSSC" protocols="TLSv1.2" key-manager="managementKM"/>
    </server-ssl-contexts>
go back:

    runtime -> server (click view button)
    http management interface (edit)
    set secure socket binding = management-https
    set ssl context = managementSSC
    save

restart wildfly:

    systemctl restart wildfly
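
For reference, the same configuration can likely be scripted with jboss-cli instead of clicking through the console. This is an untested sketch; verify the resource addresses and attribute names against your WildFly version before relying on it:

    # run via: /opt/wildfly/bin/jboss-cli.sh --connect
    /subsystem=elytron/key-store=managementKS:add(path=keystore/managementKS, relative-to=jboss.server.data.dir, type=JKS, credential-reference={clear-text=keystore-password})
    /subsystem=elytron/key-manager=managementKM:add(key-store=managementKS, credential-reference={clear-text=keystore-password})
    /subsystem=elytron/server-ssl-context=managementSSC:add(key-manager=managementKM, protocols=[TLSv1.2])
    /core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=managementSSC)
    /core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)
    reload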

How to use docker v1 repository

I am using Artifactory to store Docker images. The Artifactory setup uses a v1 repository to store the images. When working from one of the Linux machines I am able to pull and push images from/to Artifactory, but when working on my Windows laptop, trying to pull an image from Artifactory gives me the error below:
    akash@AKASH-WS01 MINGW64 ~
    $ docker pull mydocker.abc.com:5903/ubuntu
    Using default tag: latest
    Error response from daemon: unknown: Unsupported docker v2 repository request for 'demo-docker'
I am using a .dockercfg file for authentication, with the credentials stored in it. "demo-docker" is a user.
Why is the docker pull command using the v2 repository when mydocker.abc.com:5903/ubuntu is on v1? Is there any way to make docker pull use v1?
I had the same problem; I adjusted my nginx config to resolve the issue:
Artifactory Version: 4.15.0
Docker Version: 1.12.0
Stop the Nginx service (service nginx stop).
Open your conf file in nginx (/etc/nginx/sites-enabled/default.conf) and change the following line in it:

    rewrite ^/(v1|v2)/(.*) /api/docker/build-images/$1/$2;

to

    rewrite ^/(v2)/(.*) /api/docker/build-images/$1/$2;
Example below:
    server {
        listen 8000 ssl;
        server_name artifactory.corpintra.net;

        if ($http_x_forwarded_proto = '') {
            set $http_x_forwarded_proto $scheme;
        }

        ## Application specific logs
        access_log /var/log/nginx/build-docker-access.log;
        error_log /var/log/nginx/build-docker-error.log;

        rewrite ^/(v2)/(.*) /api/docker/build-images/$1/$2;

        client_max_body_size 0;
        chunked_transfer_encoding on;

        location / {
            proxy_read_timeout 900;
            proxy_pass_header Server;
            proxy_cookie_path ~*^/.* /;
            proxy_pass http://localhost:8081/artifactory/;
            proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
Restart Nginx (service nginx restart)
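
To confirm that the rewrite now lands on Artifactory's Docker registry API rather than an HTML error page, a quick check (assuming the server name and port from the config above):

    # the registry ping endpoint should answer with a registry response (e.g. 401 or 200), not HTML
    curl -ik https://artifactory.corpintra.net:8000/v2/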
