Use multiple logins in the same docker-compose.yml file - docker

I am trying to pull images from the same Artifactory repo using 2 different access tokens. This is because one image is available to one user, and another one is accessible by another user.
I tried using docker login, but I can only log in to a repository once. Is there a way to specify in the docker-compose.yml file a user and token that Compose should use to pull the image?

The docker-compose file specification does not support providing credentials per service or image.
But putting this technicality aside, the described use case clearly indicates there is one user who needs access to both images...
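If the two tokens really must stay separate, one possible workaround (a sketch only, not part of the answer above; the registry host, user names, image names, and token variables are placeholders) is to pre-pull each image under the matching login before starting Compose, since docker keeps only one stored credential per registry host:

    # log in as the first user and pull the image that user can see
    echo "$TOKEN_A" | docker login artifactory.example.com -u user-a --password-stdin
    docker pull artifactory.example.com/repo/image-a:latest

    # switch credentials (this overwrites the stored login for the host) and pull the second image
    echo "$TOKEN_B" | docker login artifactory.example.com -u user-b --password-stdin
    docker pull artifactory.example.com/repo/image-b:latest

    # both images are now in the local cache, so Compose does not need to pull them
    docker compose up --no-build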

Related

Nexus 3 Docker Content Selector selects too many images

I am using Nexus 3 as a docker repository and want to create a user that has only read-only access to a specific docker image (and its related tags).
For this, I created a Content Selector with the following query (the name of the image is test for demonstration purposes):
format == "docker" and path =~ "^(/v2/|/v2/library/)?(test(/.*)?)?$".
Then I created a Privilege with the action read, bound that to a role and added it to the user.
All is well so far: when I use the limited user I can fetch the image but not push.
However, I can still pull images I should not be able to pull.
Consider the following: I create an image called testaaa:1 on the docker registry. Afterwards I docker login to the registry using my user with read-only access. I am suddenly able to run docker pull hub.my-registry.com/testaaa:1 even though, according to the query, I should not be able to.
I tested the query in a Java regex tester and it would not select testaaa. Am I missing something? I am having a hard time finding clues on this topic.
EDIT: Some more testing reveals that my user is actually able to pull all images from this registry. The Content Selector query I used is exactly the one suggested by the Sonatype documentation: Content Selectors and Docker - REST API vs Docker Client.
I have figured it out. The issue was not the Content Selector query, but a capability that I had previously added. The capability granted any authenticated user the nx-anonymous role, which lets anyone view any repository in Nexus. This meant that any authenticated user was allowed to read/pull any image from the repository.
This error was entirely on my part. In case anyone has similar issues, go have a look at Nexus Settings -> System -> Capabilities and check whether any capabilities give your users unwanted roles.
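For reference, the content selector expression from the question can be sanity-checked locally against sample request paths (a quick sketch assuming GNU grep with PCRE support; the paths are illustrative):

    PATTERN='^(/v2/|/v2/library/)?(test(/.*)?)?$'

    # should match: a request path for the image named "test"
    echo '/v2/test/manifests/1' | grep -P "$PATTERN"

    # should NOT match: a different image such as "testaaa",
    # so if a pull of testaaa still succeeds, the access is coming from
    # somewhere else, e.g. a capability granting nx-anonymous as described above
    echo '/v2/testaaa/manifests/1' | grep -P "$PATTERN"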

Unable to push docker image to google container registry

I'm trying to push my first image to GCR (Google Container Registry) from my local bash, but somehow I couldn't do it even though I added my current user as 'owner' to the project. The last link that gave me an error returned the following:
{"errors":[{"code":"UNAUTHORIZED","message":"Unauthorized access."}]}
Also, the IP of the Ubuntu distribution I use on WSL2 was banned by Google on the grounds that I tried too many times. This is the second problem I need to solve.
I ran into the first problem through PowerShell on my local computer as well.
What should I do in this case?
The refusal to connect to GCP might be related to the IP ban that you mentioned; was there any specified length to the ban? Usually, an email is sent with more details about the ban. Otherwise, there is specific documentation dealing with authenticating to Container Registry. The documentation lists several authentication methods:
gcloud credential helper
Standalone credential helper
Access token
JSON key file
Which of these methods are you having issues with? The documentation lists the procedure to authenticate properly with each of these methods. Is the correct account configured? It could be that a different account or a service account is being used instead.
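For example, the gcloud credential helper route looks roughly like this (a sketch; PROJECT_ID and my-image are placeholders):

    # authenticate as the correct user (or activate a service account instead)
    gcloud auth login
    gcloud config set project PROJECT_ID

    # register gcloud as a Docker credential helper for gcr.io hosts
    gcloud auth configure-docker

    # tag and push the image
    docker tag my-image gcr.io/PROJECT_ID/my-image:latest
    docker push gcr.io/PROJECT_ID/my-image:latest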

How can I output all network requests to pull a Docker image?

In order to request access to a docker image on a public container registry from within a corporate network I need to obtain a list of all the URLs that will be requested during the pull. From what I can see, the initial call returns a json manifest and subsequent requests will be needed.
How can I get visibility of all the URLs requested when invoking docker pull my-image?
The registry API docker uses is publicly documented and clarifies each possible API call. What you should see is:
A GET to the /v2/ API to check authorization settings
A query to the auth server to get a token if using an external auth server
A GET for the image manifest
A GET for the image config
A series of GET requests, one for each layer
The digests for the config and each layer will change with each image pushed, so it is best to whitelist the entire repository path for GET requests.
Note that many will take a different approach to this, setting up a local registry that all nodes in the network can pull from, and pushes to update that registry are done from a controlled node that performs all the security checks before ingesting new images. This handles the security needs, controlling what enters the network, without needing to whitelist individual URLs to an external resource.
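As a concrete illustration of that request sequence, here is a sketch against Docker Hub for the public library/alpine image (the auth and registry hostnames differ for other registries, and jq is assumed to be installed):

    # 1. GET /v2/ returns 401 with a WWW-Authenticate header pointing at the auth server
    curl -i https://registry-1.docker.io/v2/

    # 2. fetch a pull token from the auth server
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)

    # 3. GET the image manifest
    curl -s -H "Authorization: Bearer $TOKEN" \
         -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
         https://registry-1.docker.io/v2/library/alpine/manifests/latest

    # 4./5. the config and each layer are then fetched by digest from
    #       /v2/library/alpine/blobs/<digest>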

Transferring data from gcloud vm instance in one google account to GCS in a different google account

The title says it all. I have a VM instance set up in my google cloud for generating some model data. A friend of mine also has a new account. We're both basically using the free credits Google provides. We're trying to figure out if there is a way that I can generate the data in my VM instance and then transfer it to my friend's GCS Bucket. He hasn't set up any buckets yet, so we're also open to suggestions on the type of storage that would help us do this task.
I realize I can set up a persistent disk and mount it to my own VM instance. But that isn't our goal right now. We just need to know if there is a way to transfer data from one Google account to another. Any input is appreciated.
There is a way to do this: have your friend create the bucket and then give your email permission to access the bucket. Then from your VM you can use the gsutil command to copy the files to the bucket.
1) Have your friend create the bucket in the console.
2) In the permissions section, he will Add Member, add your email, and grant you the Storage Object Creator role.
3) Then SSH into your VM and use the gsutil command to copy the files, for example: gsutil cp testfile.txt gs://friend_bucket
4) If you get a 403 error, you probably have to run gcloud auth login first.
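Put together, the commands on the VM would look something like this (friend_bucket and testfile.txt are the example names from above; the recursive copy of a model_output directory is just an illustration):

    # authenticate as the Google account your friend granted access to
    gcloud auth login

    # copy a single generated file into your friend's bucket
    gsutil cp testfile.txt gs://friend_bucket

    # or copy a whole directory of model output recursively
    gsutil cp -r ./model_output gs://friend_bucket/model_output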

Keycloak and Docker - Cannot set two types of URLs

I use the standalone version of Keycloak in a docker-based application.
Since Keycloak 1.9.2, the auth-server-url-for-backend-requests attribute has been removed from the Keycloak properties.
I used this field to point at the internal IP address of the auth server (inside the docker network).
The external one (auth-server-url) is used for redirection purposes.
My question is: how do I replace the former auth-server-url-for-backend-requests to solve the problem of having different network addresses inside docker and outside of it?
According to the following links, it appears you can use the same DNS for external requests as you would for internal. See these:
keycloak issue
http://keycloak.github.io/docs/userguide/keycloak-server/html_single/index.html#d4e4114
You should set the KEYCLOAK_FRONTEND_URL parameter in the Dockerfile or docker-compose.yml (if you use them). Otherwise, you should set this parameter in the Keycloak General settings UI.
It is quite tricky because you shouldn't set the front-end's real URL; rather, you should set the URL which is used by the front-end. I had the same problem, so you can see some examples in my SO question/answer.
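For example, with the jboss/keycloak image the parameter can be passed as an environment variable (a sketch; auth.example.com stands in for the externally visible hostname, and the exact variable handling depends on the image version):

    # the externally reachable URL is what gets advertised to clients for redirects,
    # while other containers keep talking to Keycloak over the internal docker network
    docker run -d --name keycloak \
      -e KEYCLOAK_FRONTEND_URL=https://auth.example.com/auth \
      -p 8080:8080 \
      jboss/keycloak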
