Minimum Privileges for Nexus Private Docker Registry

I have set up a private Docker registry on Nexus over HTTP, and I am able to pull and push with no issues when I 'docker login' using the Nexus administrator account. To follow security best practices, I do not want to use an administrator account simply to pull and push images, so I want to create another account for that purpose.
I have created a role in Nexus and granted it 'add' privileges, but these seem to be insufficient for the docker login command. Which other privileges are required?

It took a bit of trial and error, but it seems that at a minimum you need the following:
Add
Edit
Read
In my case specifically, I used the pre-built repository-view type add, edit and read privileges for my repository.
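For reference, a sketch of what those pre-built privilege names look like, assuming Nexus 3's nx-repository-view-<format>-<repository>-<action> naming scheme and a hosted Docker repository named docker-hosted (substitute your own repository name):
nx-repository-view-docker-docker-hosted-add
nx-repository-view-docker-docker-hosted-edit
nx-repository-view-docker-docker-hosted-read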

You'll need to grant the nx-repository-view-docker-<repository>-* privilege for the Docker repository.

Related

Failing to Push docker image from Jenkins on GCE instance to google container registry

I am trying to push a Docker image from Jenkins, which is configured on a Compute Engine instance with the default service account, but it is failing with this error:
[Docker] ERROR: failed to push image gcr.io/project-id/sms-impl:work
ERROR: Build step failed with exception
com.github.dockerjava.api.exception.DockerClientException: Could not push image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
What do I need to do?
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You need to run this command only once to authenticate to Container Registry. We strongly recommend that you use this method when possible; it provides secure, short-lived access to your project resources. Follow the steps at the documentation link in the error message above.
At the bottom of the page linked above, you will see a further link to Using GCR with GCP; in particular, that section describes what you need to do.
To summarize, the service account needs permission to write to the storage bucket backing GCR. Since you mentioned you are using the default service account, the instance will also need the appropriate access scopes set. The default grants only 'read' unless you have specified all scopes.
A few ways to do this:
When you create the instance using gcloud, specify --scopes https://www.googleapis.com/auth/devstorage.read_write (see the sketch after this list)
In the console, select the scope specifically or select "all scopes", e.g.:
(... many lines of scopes omitted ...)
You can also add the scopes after the fact, if needed, by editing the instance while it is stopped.
Note that the first push for a project may additionally require "admin" rights, in order to create the bucket.
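For example, a minimal sketch of the gcloud option above; the instance name and zone are placeholders:
gcloud compute instances create jenkins-builder \
    --zone us-central1-a \
    --scopes https://www.googleapis.com/auth/devstorage.read_write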

IoT Edge: device can't download my module from Azure Container Registry, but it can from Docker Hub

I followed this Azure example to develop my module connectedbarmodule in Python for Azure IoT Edge. Then, I followed this link to deploy my module to my device (a Raspberry Pi 3). However, my module can't be downloaded, so I ran the following command on my device:
sudo docker logs -f edgeAgent
I get the following error:
Error calling Create module ConnectedBarModule:
Get https://iotedgeregistery.azurecr.io/v2/connectedbarmodule/manifests/0.0.1-amd64:
unauthorized: authentication required
This is the URL of my Azure Container Registry, where the image of my module is stored. I don't know how to supply the credentials IoT Edge needs to download my module.
I tested putting the image not in the Azure Container Registry but in my Docker Hub account, and that works; my device can download the module.
If someone has an idea, that would be very kind.
Thank you in advance.
Your Azure Container Registry is private, so you need to add its credentials in order for the edgeAgent to download images from private registries:
Through the Azure Portal: In the first step of "Set Modules"
When done through deployments in Visual Studio Code:
"In the VS Code explorer, open the .env file. Update the fields with
the username and password values that you copied from your Azure
container registry." (https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module#add-your-registry-credentials)
For your issue, you can use the command docker login -u <ACR username> -p <ACR password> <ACR login server>, which is shown in the example you posted. For authenticating to Azure Container Registry, there are two options you can choose from.
One is to use the admin username and password shown for your ACR in the Azure portal.
The other is to use an Azure service principal, for which you can set specific permissions. Follow the document Azure Container Registry authentication with service principals. I would recommend this way over the first because it is safer.
This is just advice; I hope it helps, and if you need more help, let me know.
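As a concrete sketch of the service principal route, assuming the Azure CLI is available; the service principal name iotedge-pull is a placeholder, and acrpull is the built-in pull-only role:
az acr show --name iotedgeregistery --query id --output tsv
az ad sp create-for-rbac --name iotedge-pull --role acrpull --scopes <registry resource ID from the previous command>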

GitLab Docker Registry Push Failed - Access Denied

I'm having trouble pushing to GitLab Container Registry.
I can login successfully using my username and a personal access token but when I try to push the image to the registry, I get the following error:
$ docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon
The push refers to a repository
[registry.gitlab.com/[groupname]/dockerfiles/nodemon]
15d2ea6e1aeb: Preparing
2260f979a949: Preparing
f8e848bb8c20: Preparing
740a5345706a: Preparing
5bef08742407: Preparing
denied: requested access to the resource is denied
I assume the issue is not with authentication, because when I run docker login registry.gitlab.com, I get a Login Succeeded message.
Where is the problem?
How should I push my images to GitLab Container Registry?
I got it working by including the api scope in my personal access token.
The docs state that the minimal scope needed is read_registry, but that probably applies only to read access.
Reference: https://gitlab.com/gitlab-com/support-forum/issues/2370#note_44796408
In my case it was really dumb, maybe even a GitLab bug:
I had renamed the GitLab project after the container registry was created, so the container registry URL still used the old name.
The project name in GitLab had its typo corrected, but the registry link did not, which led to this error.
I had a similar issue; it was caused by the URL used for tagging and pushing the image. It should be:
docker push registry.gitlab.com/[account or group-name]/[reponame]/imagename
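For example, with the group and project from the question; the local image name nodemon:latest is a placeholder:
docker tag nodemon:latest registry.gitlab.com/[groupname]/dockerfiles/nodemon
docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon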
It was previously correct to say that the personal access token needs to include the api permission, and several answers on this page say exactly that.
Recently, GitLab appears to have improved the granularity of its permission system, so if you want to push container images to the GitLab Docker registry, you can create a token with only the read_registry and write_registry permissions. This is likely to be a lot safer than granting full api access.
I have tested this successfully today.
Enable the personal access token by adding the api scope, as per these guidelines. After creating the token, use your username and the token as credentials for logging in to Docker and pushing.
Deploy tokens created under the CI/CD setup are not sufficient for pushing an image to the Docker registry.
I had the same issue.
In my case, the issue was that I had AutoDevOps enabled previously, which seems to generate a deploy token automatically.
Deploy tokens are basically just API keys for deployment.
But GitLab has special handling for a token named gitlab-deploy-token, which you can then access via $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD as predefined variables.
However, I had not double-checked the default token: in my case it only had read_registry, though of course it also needs write_registry permission.
If you do this, then you can follow the official documentation.
Alternatively, you can apparently also switch to $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD, which are ephemeral, however.
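For example, a sketch of a CI job script that logs in with those predefined registry variables and pushes; the image name nodemon is a placeholder:
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE/nodemon" .
docker push "$CI_REGISTRY_IMAGE/nodemon"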

Can access to the Bluemix container registry be controlled?

I want only the CI tools or dedicated users to have write access to the Bluemix Docker registry. Developer or cloud admin accounts should not have write access to the registry. How can this be done?
You can now issue read-only or read-write tokens for IBM Bluemix Container Registry using the container-registry plugin for the bx command.
Tokens can either be non-expiring (unless revoked) or expire after 24 hours.
The use case of automating access is well covered by the documentation.
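For example, a sketch of issuing a non-expiring read-write token with that plugin; the description text is a placeholder:
bx cr token-add --description "CI push token" --non-expiring --readwrite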
At this time, it is not possible to have different image access levels for users in the same Cloud Foundry org.
Security settings for clusters and deployments in the IBM Bluemix Container Service are documented here: https://console.bluemix.net/docs/containers/cs_security.html#cs_security
It may help with your requirement.

How to use private quay.io images with fleet and CoreOS

I've been trying to deploy containers with fleet on a CoreOS cluster. However, some of the Docker images are stored privately on quay.io and require a login.
Now I could add a docker login as a precondition to every relevant unit file, but that doesn't seem right. I'm sure there must be a way to store the registry credentials somewhere Docker can find them when pulling the image.
Any ideas?
The best way to do this is with a Quay "robot account", which is a separate set of credentials from your regular account. This is helpful for two reasons:
they can be revoked if needed
they can be limited to a subset of your repositories
When you make a new robot account, if you click "view credentials", you will get the credentials pre-formatted for common use-cases, such as Docker and Kubernetes.
In this case, you want "Docker Configuration", which is placed at ~/.docker/config.json on the server(s). Docker will automatically use this to authenticate with Quay.io.
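For example, a sketch of generating that file on a host with a hypothetical robot account named myorg+deploybot (the token is a placeholder); docker login writes the auth entry into ~/.docker/config.json:
docker login -u 'myorg+deploybot' -p '<robot token>' quay.io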
