Both-ends authenticated Docker Registry Proxy using Nexus

I am running Sonatype Nexus as a private Docker registry with LDAP-based auth (the relevant part: every user/server has its own credentials).
I want to set up a second Nexus server that acts as a Docker registry proxy (cache/forward), to be used with --registry-mirror and mirroring the private registry described above.
What I tried
I configured a Docker registry proxy:
with the private registry as backend
and authentication towards the backend (is that actually the right assumption?)
I also configured an SSL offloader as usual, from https://proxy.domain.tld to the Nexus docker-proxy port (10090).
Then I configured the Docker engine with --registry-mirror=https://proxy.mydomain.tld
With docker login https://proxy.mydomain.tld my LDAP credentials were accepted, but since the backend and the proxy share the same LDAP server, I am not sure which of the two I actually authenticated against.
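For reference, a minimal sketch of the equivalent Docker daemon configuration instead of the command-line flag (assuming the proxy is reachable at https://proxy.mydomain.tld; the daemon needs a restart afterwards):
/etc/docker/daemon.json:
{
    "registry-mirrors": ["https://proxy.mydomain.tld"]
}
sudo systemctl restart docker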
Problems/Questions:
a) I need the forwarding registry proxy to authenticate users individually ("per user").
b) Can the proxy access the private registry with its own authentication (using a service account)?
Does docker login in the case above actually authenticate against the proxy or against the underlying private registry?
Does this setup work at all? Did I make a conceptual mistake?

It seems you're asking: if you have a scenario like user -> Nexus A -> Nexus B, can Nexus A forward the user's credentials on to Nexus B?
If so, the answer is no. Nexus A would have its own credentials used to authenticate to Nexus B. Since Nexus A potentially has to represent the full contents of what is available in Nexus B, it may require elevated privileges to fetch content to satisfy the demands of all the users of A.
You should be able to structure A and B to serve the same content based on permissions, though: just allow A to fetch everything from B.
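A rough sketch of what that looks like in practice: users run docker login against Nexus A, while the docker proxy repository on A carries its own service-account credentials for B in its HTTP authentication settings. On reasonably recent Nexus 3.x versions this can also be scripted via the repositories REST API; the hostnames, admin credentials, and service-account name below are placeholders, and exact field names may differ between versions:
curl -u admin:admin123 -X POST "https://nexus-a.example.com/service/rest/v1/repositories/docker/proxy" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "registry-b-proxy",
    "online": true,
    "storage": {"blobStoreName": "default", "strictContentTypeValidation": true},
    "proxy": {"remoteUrl": "https://registry-b.example.com", "contentMaxAge": 1440, "metadataMaxAge": 1440},
    "negativeCache": {"enabled": true, "timeToLive": 1440},
    "httpClient": {
      "blocked": false,
      "autoBlock": true,
      "authentication": {"type": "username", "username": "svc-registry-proxy", "password": "SERVICE_ACCOUNT_PASSWORD"}
    },
    "docker": {"v1Enabled": false, "forceBasicAuth": true, "httpPort": 10090},
    "dockerProxy": {"indexType": "REGISTRY"}
  }'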

Related

Multiple user authentication for Docker Private Registry running inside Kubernetes

I'm running a docker private registry inside a kubernetes cluster using the standard registry:2 image. The image has basic functionality to provide user authentication using the Apache htpasswd utility.
In my case multiple users need to access the repository, so I need to set up usernames and passwords for several different users. What would be the best approach to implement this?
I got the single-user htpasswd-based authentication working, but I can't seem to find a way to enable auth for multiple users, i.e. proper access control.
The registry is SSL-enabled (TLS at the ingress level).
There are multiple ways this could be done. First of all, it is possible to have multiple users in the htpasswd file. It was not working with Docker because Docker requires the passwords to be hashed with the bcrypt algorithm.
Use the -B flag while creating the htpasswd file.
sudo htpasswd -c -B /etc/apache2/.htpasswd <username1>
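The file simply holds one bcrypt entry per user; a short sketch (usernames are placeholders):
# create the file with the first user (-c) and bcrypt hashing (-B)
sudo htpasswd -c -B /etc/apache2/.htpasswd user1
# append further users without -c so the existing entries are kept
sudo htpasswd -B /etc/apache2/.htpasswd user2
sudo htpasswd -B /etc/apache2/.htpasswd user3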
Another way this could be done is by using nginx authentication annotations.
nginx.ingress.kubernetes.io/auth-url: "url to auth service"
If the auth service returns 200, nginx forwards the request; otherwise it returns an authentication error response. This allows a lot of custom logic, since you create and manage the authentication server yourself.
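A minimal sketch of how that annotation sits on the registry's Ingress (hostnames, TLS secret, service name, and the auth endpoint URL are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry
  annotations:
    # every request is first sent to this endpoint; a 2xx response lets it through, anything else is rejected
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
spec:
  tls:
    - hosts: ["registry.example.com"]
      secretName: registry-tls
  rules:
    - host: registry.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000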

Nexus with Private Google container Repo

I'm trying to proxy a private Google Container Registry with Nexus 3 Repository OSS.
Not sure how to do the authentication bit.
I found a suggestion for Artifactory:
Create a new Docker remote registry repository
Uncheck the Enable Token Authentication flag
Set the URL as https://gcr.io
Under the advanced tab, set the username as _json_key
Under the advanced tab, set the password to the contents of the JSON Key File
Did not work with Nexus.
Any advice, please?
You need to use a service account with an API key in order to authenticate.
Take a look at this blog post, which shows how to set up a private registry with Google Container Registry and Nexus OSS.
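As a sanity check, the same _json_key credentials can be tested directly against GCR with the Docker CLI before wiring them into a proxy repository (keyfile.json and the image path are placeholders):
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io
docker pull gcr.io/my-project/my-image:latest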

docker registry - multiple auth methods

I have the following problem: I am hosting a docker registry and I need to use it with 2 different "clients".
One of those clients is GitLab, where I use the token auth mechanism. On the other side I have a different client which only supports basic auth, so if I configure the registry to allow basic auth I cannot use GitLab and vice versa.
Is there a best/common practice for this type of scenarios? Is it even possible to have 2 auth mechanisms for the same registry?
Thanks!

Nexus Docker Registry - Failing anonymous pull

I'm using Sonatype Nexus as a Private Docker Registry.
While it works for authenticated users, pulling images as the anonymous user doesn't work. This happens only with the Docker client.
Using the Nexus UI (not logged in) I'm able to browse the images in my repo, but trying to pull them I get an 'Unauthorized' error.
The following is a capture of the communication between the Docker client and the Nexus repository:
Wireshark packet capture
This is strange, as anonymous access is enabled, and according to the docs I can have a Docker hosted registry (with RW access through an HTTPS port) and a Docker group registry pointing to that hosted registry with RO/anonymous access.
This feature was added in Nexus 3.6. According to the documentation:
Under Security > Realms, enable the “Docker Bearer Token Realm”
Uncheck “Force basic authentication” in the repository configuration
Nexus caused me quite some headache until I found a rather obscure Sonatype post that states not to change the anonymous realm.
So these are the steps I followed to get this working (tested on Nexus 3.19.1 to 3.38.1):
1. Same as the answer by @andrewdotn: enable the Docker Bearer Token Realm in the Security > Realms section.
2. Enable anonymous access FOR the Local Authorizing Realm (as stated in the above-mentioned link).
3. Create the docker (proxy) repository (in this example, to proxy hub.docker.com).
3.1. Enable the HTTP/HTTPS endpoint (depending on whether you terminate SSL at Nexus or use a reverse proxy).
3.2. Enable "Allow anonymous docker pull (Docker Bearer Token Realm required)".
3.3. Enter "https://registry-1.docker.io" as the "Location of the remote repository" (for Docker Hub).
3.4. Set the "Docker Index" to use the Docker Hub index (i.e. "Use Docker Hub").
3.5. Save.
4. Make sure your anonymous user has the right to read the new repository (the default anonymous role allows read access to quite a bit more, but should already allow anonymous pull).
4.1. (OPTIONAL) If you want to restrict the anonymous user as much as possible (i.e. to only allow docker pull), create a role "nx-docker_read" (or similar) and give it the "nx-repository-view-docker-*-read" privilege. (This will allow any user with the role to pull images from any Docker repository that allows anonymous pull, but not see anything in the web UI.)
4.2. (If you did 4.1) All that's left is to assign the anonymous user your new role (in my example "nx-docker_read") and remove it from "nx-anonymous" => anonymous users can no longer browse Nexus in the web UI but can still pull images.
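To verify the result, an anonymous pull through the new proxy repository should then work without docker login; the hostname and port below are placeholders for your Nexus instance, and official Docker Hub images live under the library/ prefix:
docker logout nexus.example.com:10090
docker pull nexus.example.com:10090/library/alpine:latest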
The Docker Registry API requires authentication for registry access, even for pull operations, and so does Nexus 3.
Docker Hub always requires an access token, even for pulls.
The reason you can pull anonymously from Docker Hub is that it uses a token server which automatically hands out access tokens to anonymous users.
This mechanism is not available at the moment with Nexus 3.0.1.
Perhaps it will be implemented (https://issues.sonatype.org/browse/NEXUS-10813).
So for the moment, Nexus 3 will always require you to be logged in before pulling an image (possibly as the anonymous user, if your rights are set up that way).
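For illustration, this is the mechanism on the Docker Hub side: its token endpoint hands out a pull token even without credentials (the repository name here is just an example):
curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull"
# returns a JSON document with a "token" field, used by the client as: Authorization: Bearer <token>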

Secure gateway between Bluemix CF apps and containers

Can I use Secure Gateway between my Cloud Foundry apps on Bluemix and my Bluemix Docker container database (MongoDB)? It does not work for me.
Here are the steps I have followed:
Upload the Secure Gateway client Docker image to Bluemix:
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
Run the image with the token as a parameter:
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
When I look at the logs of the secure-gateway container, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
The Secure Gateway dashboard also shows that it is connected.
But then, when I try to add the MongoDB database (also running on my Bluemix at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway service dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is this just not a supported use case?
1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your Docker container on Bluemix?
2) From a technical point of view the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the Docker container with your MongoDB. The running SG client has a console to type in commands; you could use something like allow 134.168.18.50:27017 as the command to add the rule.
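A sketch of what that might look like in the client's interactive console (the exact syntax can vary between client versions, some of which prefix the command with acl; the address is the one from the question):
allow 134.168.18.50:27017
# permits the gateway to forward connections to that host:port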
BTW: There is a demo using the Secure Gateway to connect to a MySQL instance running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, just use HTTPS instead of HTTP. It is enabled automatically.
