Pushing into a different Nexus repository always lands in one repo - docker

I am trying to push an image into a Nexus repo (myrepo):
docker push myreposerver/myrepo/httpd:2.4.28-alpine
But I see that it lands in another repo as:
myreposerver/otherrepo/myrepo/httpd:2.4.28-alpine
This is wrong, but I can't find the config settings in the Nexus UI that are responsible for this behavior.

From the Sonatype documentation (https://help.sonatype.com/display/NXRM3/Private+Registry+for+Docker):

The docker client does not allow a context as part of the path to a registry, as the namespace and image name are embedded in the URLs it uses. This is why requests to repositories on the repository manager are served on a specific and separate port from the rest of the application, instead of how most other repositories serve content via a path i.e. //
So, in a nutshell, if you want to set up separate Docker repositories in Nexus, each one has to be exposed on its own port, exactly as described here: http://www.sonatype.org/nexus/2017/02/16/using-nexus-3-as-your-repository-part-3-docker-images/
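A minimal sketch of what the push looks like once each repository has its own HTTP connector port (the port numbers below are assumptions, not values from the question):
# Assume "myrepo" is exposed on connector port 8083 and "otherrepo" on 8084 (hypothetical ports)
docker login myreposerver:8083
docker tag httpd:2.4.28-alpine myreposerver:8083/httpd:2.4.28-alpine
docker push myreposerver:8083/httpd:2.4.28-alpine   # the port, not a path prefix, selects the repository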

Related

Deploy an image to Kubernetes without storing the image in Docker Hub

I'm trying to migrate from docker-maven-plugin to kubernetes-maven-plugin for a test setup used in local development and Jenkins builds. The point of the setup is to eliminate differences between local development and the Jenkins server. Since docker built the image, the image is stored in the local repository and doesn't have to be uploaded to a central server where the base images are located. So we can basically verify our build without uploading anything to the server, and the image is discarded after the task (running integration tests) is done.
Is there a similar way to trick Kubernetes into storing the image in the local repository without having to take the round trip to a central repository? E.g., behave as if the image has already been downloaded? Note that I still need to fetch the base image from the central repository.
If you don't want to use any Docker repo (public or private), you can use what are called pre-pulled images.
This is a bit annoying, as you need to make sure all the Kubernetes nodes have the images present and also set imagePullPolicy to Never in every Kubernetes manifest.
In your case, if what you call the local repository is some private Docker registry, you just need to store the credentials to the private registry in a Kubernetes secret and either patch your default service account with imagePullSecrets or add them to your actual deployment/pod manifest. More details about that: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
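A rough sketch of the private-registry variant, assuming a registry at registry.example.com and a secret named regcred (both hypothetical):
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password='mypassword'
# Attach the secret to the default service account so pods can pull without editing every manifest
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'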

Setting up Docker repository using subdomain method

docker login - how to log in only once for all docker repositories
I set up on-premise Artifactory to host some Docker repositories, using the subdomain approach, i.e. repo1.mycompany.com, repo2.mycompany.com, etc. Everything is working fine. My question is: it looks like I need to do 'docker login repo1.mycompany.com' for each repository. Is there a way to log in only once for all the repositories, so that when pulling/pushing images from/to any repository there's no need to log in again?
No code to show here. This is all about setup.
No need to log in for each repo.
With the subdomain method, each Docker repository is considered a separate Docker registry by the client, which is why you need to log in to each one you want to use.
To pull from any repository with a single login, you can use a virtual repository that aggregates all your local repositories. You then only need to log in to the virtual repository to be able to pull from any of them (through the virtual). Pushes, however, are limited to the default deployment target repository defined in the virtual repository; a sketch of this is shown after the repo-path example below.
Another alternative is to use the repository path method instead of subdomains. With this approach you log in to Artifactory once and can use all repositories:
docker login mycompany.com
docker pull/push mycompany.com/repo1/imageName
docker pull/push mycompany.com/repo2/imageName
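For completeness, the virtual-repository variant mentioned above might look like this, assuming a virtual repository named docker-virtual that aggregates repo1 and repo2 (the names are hypothetical):
docker login docker-virtual.mycompany.com
docker pull docker-virtual.mycompany.com/imageName    # resolved from any aggregated repository
docker push docker-virtual.mycompany.com/imageName    # lands in the virtual repo's default deployment target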

Nexus repository configuration with dockerization

Is it possible to configure Nexus repository manager (3.9.0) in a way which is suitable for a Docker based containerized environment?
We need a customized Docker image which contains basic configuration for the Nexus repository manager, like project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that there is a REST interface offered by Nexus for handling configuration from 3rd parties, but we found no configuration exporter/importer capabilities besides backup (directory servers have LDIF, application servers have command line scripts, etc.).
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back to the file system in the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up it will pick up the backup files and Nexus will be configured the way we need. However, it would be much better if there were a way that allowed us to handle this configuration via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory that keeps all that data. After you have preconfigured that instance, you can distribute your customized Docker image together with that Docker volume containing the Nexus data. Or, if you used a host directory, you can simply copy over all that data in a similar fashion to what you do now, but use the /nexus-data directory instead.
You can find more information at DockerHub under Persistent Data.
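A minimal sketch of that approach (the volume name is an assumption):
# Keep Nexus state (including the configuration in OrientDB) on a named volume
docker volume create nexus-data
docker run -d --name nexus -p 8081:8081 -v nexus-data:/nexus-data sonatype/nexus3:latest
# Host-directory variant: replace the -v argument with e.g. -v /opt/nexus-data:/nexus-data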

How to list images in a docker registry from the registry server?

Currently I'm pushing images from one machine to another. I can determine whether the push succeeded based on the HTTP status on the pushing machine or on the logs of the registry server. At this point I want to see what is really in the registry on my server. All I have found so far are API calls made from outside, and even then you have to know the exact name of the image and how it is tagged. In my case, I just want to list what images are currently in my registry when I have direct access to it. I did not find any related command.
The docker CLI doesn't have functionality to search a registry, but you can use the registry's REST API. Assuming you're using the registry:2 image, then you can list all the repositories using the catalog endpoint:
curl https://my-registry:5000/v2/_catalog
{"repositories":["busybox","redis","ubuntu"]}
And then you can query the tags for a repository:
curl https://my-registry:5000/v2/busybox/tags/list
{"name":"busybox","tags":["2","latest"]}
Here's the full Registry API spec.
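Building on the two endpoints above, a small loop can enumerate every repository and its tags (a sketch that assumes jq is installed and reuses the my-registry:5000 address from the examples):
for repo in $(curl -s https://my-registry:5000/v2/_catalog | jq -r '.repositories[]'); do
  echo "== $repo =="
  curl -s "https://my-registry:5000/v2/$repo/tags/list" | jq -r '.tags[]'
done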

How to disallow push to docker repository

I am currently setting up a local cluster at my work using Docker. Basically everything works fine; the only thing I worry about is that other devs who use my setup may eventually push the local builds to a remote repository.
Since this would be a catastrophe, because we are not allowed to upload the company's artefacts anywhere other than internal servers: is there a way to prevent other users from pushing to a remote Docker repo?
docker repo == docker registry?
Not sure I get the full picture of your desired workflow, but here are two options:
Use registry authentication and make sure that only authorised people can push (a sketch of this is shown below)
Configure networking / DNS / hosts to resolve to the correct registry - e.g. docker-registry.mycompany.com resolves to the local registry for devs and to the remote registry for others.
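For the first option, a self-hosted registry can be locked down with basic authentication so that only people with credentials can push or pull; a sketch using the stock registry:2 image (the paths, certificates and credentials are placeholders):
# Expects an htpasswd file with the allowed users at ./auth/htpasswd and TLS certs at ./certs
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)/auth:/auth" \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2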

Resources