Nexus 3 Docker Content Selector selects too many images

I am using Nexus 3 as a Docker repository and want to create a user that has read-only access to a specific Docker image (and its related tags).
For this I created a Content Selector with the following query (the name of the image is test for demonstration purposes):
format == "docker" and path =~ "^(/v2/|/v2/library/)?(test(/.*)?)?$".
Then I created a Privilege with the action read, bound that to a role and added it to the user.
So far, so good: when I use the limited user I can pull the image but not push.
However, I can still pull images I should not be able to pull.
Consider the following: I create an image called testaaa:1 on the Docker registry. Afterwards I docker login to the registry using my read-only user. I am then able to run docker pull hub.my-registry.com/testaaa:1 successfully, even though according to the query I should not be able to.
I tested the query in a Java regex tester, and it does not match testaaa. Am I missing something? I am having a hard time finding clues on this topic.
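A quick way to sanity-check the pattern outside a Java regex tester is a shell one-liner; for this particular pattern, grep's extended regex syntax behaves the same way (the paths below are illustrative):

echo "/v2/test/manifests/1"    | grep -E '^(/v2/|/v2/library/)?(test(/.*)?)?$'    # prints the line: match
echo "/v2/testaaa/manifests/1" | grep -E '^(/v2/|/v2/library/)?(test(/.*)?)?$'    # prints nothing: no match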
EDIT: Some more testing reveals that my user is actually able to pull all images from this registry. The Content Selector query I used is exactly the one suggested by the Sonatype documentation, "Content Selectors and Docker - REST API vs Docker Client".

I have figured it out. The issue was not the Content Selector query, but a capability that I had previously added. The capability granted any authenticated user the role nx-anonymous, which lets anyone view any repository in Nexus. This meant that any authenticated user was allowed to read/pull any image from the repository.
This error was entirely on my part. In case anyone has similar issues, have a look at Nexus Settings -> System -> Capabilities and check whether any capabilities grant your users unwanted roles.

Related

Use multiple logins in the same docker-compose.yml file

I am trying to pull images from the same Artifactory repo using 2 different access tokens. This is because one image is available to one user, and another one is accessible by another user.
I tried using docker login, but I can log in only once to a repo. Is there a way to specify in the docker-compose.yml file a user and token that Compose should use in order to pull the image?
The docker-compose file specification does not support providing credentials per service / image.
But putting this technicality aside, the described use case clearly indicates there is a user who needs access to both images...
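For background on why docker login only keeps one credential per registry: Docker stores credentials in ~/.docker/config.json keyed by registry host, so a second login to the same host simply overwrites the first. A minimal sketch of that file (the host and the base64 user:token value are placeholders):

{
  "auths": {
    "myorg.jfrog.io": { "auth": "dXNlcjp0b2tlbg==" }
  }
}

A second docker login to myorg.jfrog.io with different credentials replaces that single auths entry, which is why two tokens for the same host cannot coexist.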

Unable to push Docker image to Google Container Registry

I'm trying to push my first image to GCR (Google Container Registry) from my local bash, but I can't manage it, even though I added my current user as 'owner' of the project. The last URL in the failing request returned the following:
{"errors":[{"code":"UNAUTHORIZED","message":"Unauthorized access."}]}
Also, the IP of the Ubuntu distribution I use on WSL2 was banned by Google on the grounds that I tried too many times. This is the second problem I need to solve.
I ran into the first problem through PowerShell on my local computer as well.
What should I do in this case?
The refusal to connect to GCP might be related to the IP ban you mentioned. Was there any specified length to the ban? Usually an email is sent with more details about it. Otherwise, there is specific documentation dealing with authenticating to Container Registry, which lists several authentication methods:
gcloud credential helper
Standalone credential helper
Access token
JSON key file
Which of these methods are you having issues with? The documentation lists the procedure for authenticating properly with each of them. Is the correct account configured? It could be that a different account or a service account is being used instead.
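If it is the gcloud credential helper you are having trouble with, the usual flow looks roughly like this (PROJECT-ID and my-image are placeholders):

gcloud auth login                  # authenticate your user account in the browser
gcloud auth configure-docker       # register gcloud as a Docker credential helper for gcr.io
docker tag my-image gcr.io/PROJECT-ID/my-image:v1
docker push gcr.io/PROJECT-ID/my-image:v1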

How can I output all network requests to pull a Docker image?

In order to request access to a Docker image on a public container registry from within a corporate network, I need to obtain a list of all the URLs that will be requested during the pull. From what I can see, the initial call returns a JSON manifest and subsequent requests will be needed.
How can I get visibility of all the URLs requested when invoking docker pull my-image?
The registry API that docker uses is publicly documented, and the documentation clarifies each possible API call. What you should see is:
A GET to the /v2/ API to check authorization settings
A query to the auth server to get a token if using an external auth server
A GET for the image manifest
A GET for the image config
A series of GET requests, one for each layer
The digests for the config and each layer change with every image pushed, so it is best to whitelist the entire repository path for GET requests.
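As a rough sketch, you can reproduce the sequence by hand with curl against Docker Hub; registry-1.docker.io and auth.docker.io are Hub-specific hosts, and library/alpine is just an example image:

# 1. Check authorization: an anonymous GET /v2/ returns 401 plus a WWW-Authenticate challenge
curl -si https://registry-1.docker.io/v2/ | head -n 1
# 2. Get a pull token from the auth server named in the challenge
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)
# 3. Get the manifest, which lists the config and layer digests
curl -s -H "Authorization: Bearer $TOKEN" \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     https://registry-1.docker.io/v2/library/alpine/manifests/latest
# 4. One GET per blob (the config and each layer), e.g.:
#    curl -L -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/library/alpine/blobs/<digest>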
Note that many take a different approach to this: setting up a local registry that all nodes in the network can pull from, where pushes to update that registry are done from a controlled node that performs all the security checks before ingesting new images. This handles the security need of controlling what enters the network without whitelisting individual URLs to an external resource.

How to generate the URL which shows the live logs in mesos for a job

I am developing a UI in which I need to show the live logs (stdout and stderr) of jobs running on a Mesos slave. I am trying to find a way to generate a URL that points to the Mesos logs for a job. Is there a way to do this? Basically, I need to know the slave ID, executor ID, master ID, etc. in order to build the URL. Is there a way to find this information?
The sandbox URL is of the form http://$slave_url:5050/read.json?path=$work_dir/work/slaves/$slave_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id/stdout, and you can even use the browse.json endpoint to browse around within the sandbox.
Alternatively, you can use the mesos tail $task_id CLI command to access these logs.
For more details, see the following mailing list thread: http://search-hadoop.com/m/RFt15skyLE/Accessing+stdout%252Fstderr+of+a+task+programmattically
How about using the reverse approach? You need to present live logs from stderr and stdout; how about storing them outside the Mesos slave, e.g., in Elasticsearch? You would get near-live updates, old logs would remain available afterwards, and you get nice search options.
Since version 0.27.0, Mesos supports the ContainerLogger module. You can write your own ContainerLogger implementation that pushes logs to a central log repository (Graylog, Logstash, etc.) and then expose them in your UI.
Mesos offers a REST interface where you can get the information you want. Visit http://<MESOS_MASTER_IP>:5050/help with your browser (using the default port) to check the query options you have (for example, you can get the information you need from http://<MESOS_MASTER_IP>:5050/master/state.json). Check this link to see an example using it.
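As a sketch of that REST approach, the IDs needed for the sandbox URL can be pulled out of state.json with jq; the master host and the task name my-job are placeholders:

MASTER=http://mesos-master.example.com:5050
curl -s "$MASTER/master/state.json" | jq '
  .frameworks[] as $f
  | $f.tasks[]
  | select(.name == "my-job")
  | {framework_id: $f.id, executor_id: .executor_id, slave_id: .slave_id}'
# Note: with the built-in command executor, executor_id can be empty;
# in that case the task ID doubles as the executor ID in the sandbox path.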

changing gerrit's canonical web url

I have had an issue setting up my Gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up Git and Gerrit as a way to manage source code and code review.
I require internal and external access to it. I set up a DNS entry that works externally. However, during the initial setup I left the canonicalWebUrl at its default value, which is taken from the machine's hostname (in this case vmserver).
The issue I was running into is exactly as explained here: https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server, where after trying to sign in/register an account with OpenID, it said the URL was not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.config file found in the etc directory of the Gerrit site. After restarting the server, the Git project files were still present as they should be, but the administrator account seemed to no longer be registered and none of the projects were visible through Gerrit.
I was wondering: is there a special procedure for changing the canonical web URL in Gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information on them.
EDIT: Looking deeper, I found some information that is way over my head regarding "submodules". I do not understand whether this is what I am looking for or not.
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web URL must be set, and it sounds like you have done that correctly.
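For reference, the setting lives in the [gerrit] section of etc/gerrit.config (the hostname below is a placeholder):

[gerrit]
    canonicalWebUrl = https://gerrit.example.com/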
I suspect the issue you are seeing is caused by changing the canonical web URL: some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy measure and cannot be changed. So previous users will now show up as new users and won't be in their old groups (the Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand. You can modify the database to map the new user ID to the old user account.
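On Gerrit 2.x, where accounts still live in the SQL ReviewDb, that mapping sits in the account_external_ids table. A rough sketch of the remap, assuming that schema (back up the database first; the account ID and external ID values below are illustrative):

-- Re-point the OpenID identity that now resolves to a new user
-- at the original account (1000001 is the old account's ID):
UPDATE account_external_ids
SET account_id = 1000001
WHERE external_id = 'https://www.google.com/accounts/o8/id?id=<new-identity>';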
