Change default registry for docker push commands - docker

There is an option to set registry-mirrors in Docker's daemon.json to configure the default registry used when pulling images.
Is there a similar way to push docker images to a local registry by default (without specifying the local registry URL)?
To give more detail: we have 2 Nexus repositories, one acting as a proxy of the default registry (so we pull images through it), and one hosted repository.
We want all developers to push their images to our hosted registry, not Docker Hub.

registry-mirrors configures mirrors of Docker Hub, so the design is to push to the authoritative source, and if a pull from one of the Hub mirrors fails, Docker also falls back to Hub.
It looks like what you are trying to do is merge multiple namespaces into one: the namespace for Hub and the namespace for your local Nexus repositories. Doing that is dangerous, because deploying the same image reference on different machines, or when the Nexus instance is unavailable for any reason, can result in a dependency confusion attack.
Image references are designed so that you always specify the registry when you don't want to use Docker Hub. This avoids the dependency confusion attacks seen with other package repositories.
If you're worried about developers pushing images to Hub that they shouldn't be, then I'd recommend setting up an HTTP proxy that rejects POST and PUT requests to registry-1.docker.io (a pull uses GET and HEAD requests), and make sure all developers use that proxy (typically via a network policy that doesn't allow direct access without the proxy).
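To make the distinction concrete, here is a minimal sketch assuming hypothetical Nexus URLs (nexus.example.com:8082 for the proxy, :8083 for the hosted repository): pulls can be mirrored through the proxy via daemon.json, while pushes always name the hosted registry explicitly in the image reference.
# point pulls at the Nexus pull-through proxy (assumed URL)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://nexus.example.com:8082"]
}
EOF
sudo systemctl restart docker
# pushes always name the hosted registry explicitly (assumed URL)
docker tag myapp:1.0 nexus.example.com:8083/myteam/myapp:1.0
docker push nexus.example.com:8083/myteam/myapp:1.0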

Related

What is the best way to deliver docker containers to a remote host?

I'm new to docker and docker-compose. I'm using a docker-compose file with several services. I have containers and images on the local machine when I work with docker-compose, and my task is to deliver them to a remote host.
I found several solutions:
I could build my images and push them to some registry, then pull them on the production server. But for this option I need a private registry, and as I see it, the registry is an unnecessary element; I want to run the containers directly.
Save the docker image to a tar file and load it on the remote host. I saw the post Moving docker-compose containersets around between hosts, but in this case I need shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I'm losing the benefits of docker-compose.
Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can deploy only from one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). Again, not a simple solution for me.
Use docker-compose with the host parameter (-H). But with this option I need to build images on the remote host. Is it possible to build an image on the local machine and push it to the remote host?
I could use docker-compose push (https://docs.docker.com/compose/reference/push/) to a remote host, but for this I need to create a registry on the remote host, and I need to pass the hostname as a parameter to docker-compose every time.
What is the best practice to deliver docker containers to a remote host?
Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here. There is no way to transfer these within the pure Docker ecosystem.
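As a rough sketch of the two approaches above (the hostnames, image names, and registry address are illustrative assumptions, not a prescribed setup):
# option 1: push to a registry, then pull and run on the remote host
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
ssh deploy@remote-host 'docker pull registry.example.com/myapp:1.0 && docker-compose up -d'
# fallback without a registry: stream the image over SSH, copy the compose file separately
docker save myapp:1.0 | ssh deploy@remote-host docker load
scp docker-compose.yml deploy@remote-host:~/app/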

how to pull docker images from localhost docker private registry to GKE?

I have my own private docker registry created on my host machine [localhost], and I intend to make use of this localhost private registry to pull images in Google Kubernetes Engine.
How do I make it happen?
You won't be able to use either your locally built docker images (which can be listed by running docker images on your local machine) or your locally set up docker private registry (unless you make it available under some public IP, which doesn't make much sense if it's your home computer). Those images can be used by your local kubernetes cluster but not by GKE.
With GKE we generally use GCR (Google Container Registry) for storing the images used by the Kubernetes Engine. You can build them directly from code (e.g. pulled from your github account) on a Cloud Shell VM (simply click the Cloud Shell icon in your GCP Console) and push them to your GCR from there.
Alternatively, if you build your images "locally" on the nodes where kubernetes is installed (so in case of GKE they would need to be present on every worker node), you can also use them without pulling them from any external registry. The only requirement is that they are available on all kubernetes worker nodes. You can force kubernetes to always use the local images present on your nodes, instead of trying to pull them from a registry, by specifying:
imagePullPolicy: Never
in your Pod or Deployment specification. You can find more details on that in this answer.
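A minimal sketch of the GCR route from Cloud Shell (the project ID and image name are placeholders):
# authenticate the docker client against GCR (gcloud is preinstalled in Cloud Shell)
gcloud auth configure-docker
# build, tag with your GCP project ID, and push
docker build -t gcr.io/my-project-id/my-app:1.0 .
docker push gcr.io/my-project-id/my-app:1.0
# then reference gcr.io/my-project-id/my-app:1.0 in your Pod/Deployment spec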

Is it possible to run a docker registry for gitlab externally on a remote host?

I'm just curious if there's a way to move the gitlab registry out to a different host. I can see that it is possible to move the storage to another place, but what about the service itself?
I just want to run the registry somewhere else but still make use of the authentication and the UI features of gitlab to manage it. Maybe this is not possible, but please, someone shed some light on it.
Yup, you can disable the bundled registry and run a registry yourself, while still using GitLab's authentication and UI.
GitLab's built-in registry is basically just a deployment of the Docker Registry. You can run it, or another compatible registry, and then configure GitLab to use it as described here: Disable Container Registry but use GitLab as an auth endpoint.
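As a rough sketch, the external registry is just the standard registry image configured to trust GitLab as its token-auth endpoint. The hostnames, paths, and issuer value below are assumptions for illustration; the exact values that match your GitLab instance come from the linked documentation.
# run the plain Docker Registry on the external host, delegating auth to GitLab
docker run -d -p 5000:5000 --name gitlab-registry \
  -e REGISTRY_AUTH_TOKEN_REALM=https://gitlab.example.com/jwt/auth \
  -e REGISTRY_AUTH_TOKEN_SERVICE=container_registry \
  -e REGISTRY_AUTH_TOKEN_ISSUER=gitlab-issuer \
  -e REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/certs/gitlab-registry.crt \
  -v /srv/registry/certs:/certs \
  -v /srv/registry/data:/var/lib/registry \
  registry:2
# then point GitLab at it (registry_external_url in gitlab.rb) and disable the bundled registry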

Google Cloud Kubernetes accessing private Docker Hub hosted images

Is it possible to pull private images from Docker Hub to a Google Cloud Kubernetes cluster?
Is this recommended, or do I need to push my private images also to Google Cloud?
I read the documentation, but I found nothing that explains this clearly. It seems that it is possible, but I don't know if it's recommended.
There is no restriction on using any registry you want. If you just use the image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry with the tag assumed to be :latest.
As mentioned in the Kubernetes documentation:
The image property of a container supports the same syntax as the
docker command does, including private registries and tags. Private
registries may require keys to read images from them.
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR), when running on Google
Compute Engine (GCE). If you are running your cluster on GCE or Google
Kubernetes Engine, simply use the full image name (e.g.
gcr.io/my_project/image:tag). All pods in a cluster will have read
access to images in this registry.
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances.
Simply use the full image name (e.g.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod
definition. All users of the cluster who can create pods will be able
to run pods that use any of the images in the ECR registry.
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a
service principal. In either case, authentication is done via standard
Docker authentication. These instructions assume the azure-cli command
line tool.
You first need to create a registry and generate credentials, complete
documentation for this can be found in the Azure container registry
documentation.
Configuring Nodes to Authenticate to a Private Repository
Here are the recommended steps to configuring your nodes to use a private
registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to the home directory of root on each node.
for example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
Use cases:
There are a number of solutions for configuring private registries.
Here are some common use cases and suggested solutions.
Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
Use public images on the Docker hub.
No configuration required.
On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
Use a hosted private Docker registry.
It may be hosted on the Docker Hub, or elsewhere.
Manually configure .docker/config.json on each node as described above.
Or, run an internal private registry behind your firewall with open read access.
No Kubernetes configuration is required.
Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
It will work better with cluster autoscaling than manual node configuration.
Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets.
Cluster with proprietary images, a few of which require stricter access control.
Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
Move sensitive data into a “Secret” resource, instead of packaging it in an image.
A multi-tenant cluster where each tenant needs its own private registry.
Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all
images.
Run a private registry with authorization required.
Generate a registry credential for each tenant, put it into a secret, and populate the secret into each tenant's namespace.
The tenant adds that secret to imagePullSecrets of each namespace.
Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
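A minimal sketch of the imagePullSecrets route for a private Docker Hub image (the credentials and secret name here are placeholders):
# create a docker-registry secret holding your Docker Hub credentials
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=my-user \
  --docker-password=my-password \
  --docker-email=me@example.com
# attach it to the default service account so new Pods in the namespace use it
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'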
There are 3 types of registries:
Public (Docker Hub, Docker Cloud, Quay, etc.)
Private: This would be a registry running on your local network. An example would be to run a docker container with a registry image.
Restricted: a registry that needs credentials to authenticate. Google Container Registry (GCR) is an example.
As you say, in a public registry such as Docker Hub you can have private images.
Private and restricted registries are obviously more secure, as one of them is (ideally) not even exposed to the internet, and the other one needs credentials.
I guess you can achieve an acceptable security level with any of them, so it is a matter of choice. If you feel your application is critical, and you don't want to run any risk, you should have it in GCR or in a private registry.
If you feel it is important but not critical, you could keep it in any public registry as a private image. That still gives you a layer of security.
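For the "private registry on your local network" case, a minimal sketch using the standard registry image (the hostname and port are illustrative):
# run a plain private registry on a machine inside your network
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag and push an image into it
docker tag nginx:latest my-internal-host:5000/nginx:latest
docker push my-internal-host:5000/nginx:latest
# note: non-TLS registries must be listed under insecure-registries in daemon.json on the clients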

Artifactory: how many repositories can I create?

I have an Artifactory Pro license.
I want to use Artifactory as a docker repository.
As you know, docker repositories support user namespaces like this:
example.com/username/imagename:tag
but Artifactory uses the repository name instead of the username.
I want to use user namespaces and apply permissions per user for their repository.
So, how many repositories are supported?
Using Artifactory Pro myself, I can confirm a docker registry supports as many namespaces (not just usernames) as you need.
All I need to do is:
login
docker login my-registry
tag
docker tag my_tag my-registry/my_label/my_tag
push
docker push my-registry/my_label/my_tag
With "my-registry" being the name of the server referencing your artifactory docker registry, as configured by "Configuring Artifactory / Configuring a Reverse Proxy / Configuring NGINX "
That is because Docker requires the URL of any repository it connects to conform to a specific format (http(s)://<host>:<port>/v1), and Artifactory requires a specific URL format (http://<host>:<port>/artifactory/api/docker/<docker_repository>).
Hence the need for a reverse proxy.
But: there is no notion of username, only namespace.
As mentioned in Artifactory Docker Registry:
With the fine-grained access control provided by built-in security features, Artifactory offers secure Docker push and pull with local Docker repositories as fully functional, secure, private Docker registries.
But those built-in security features are for user authentication to Artifactory in general, not specific to a docker registry, which has no notion of username: if a user has permission to push to a docker registry, they can push to any part of it.
What I want is to perform ACLs on a namespace basis.
As far as I know, this would not be supported.
You might configure NGiNX to filter that for you, but Artifactory itself does not provide docker registry namespace-based ACL.
So I want to create a repository for each user and grant permissions on that repository, using Artifactory like Docker Hub. So I'm wondering how many repositories I can create in Artifactory.
That implies two things:
different local docker repositories: there is no official limit to the number of repositories, only local storage quota limits.
different NGiNX reverse proxied domain names: each separate registry needs to have its own domain name.
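As a sketch of what that looks like from the client side (the domain names below are hypothetical, each one proxied by NGiNX to a different Artifactory docker repository):
# each per-user registry is reached through its own domain name
docker login alice-docker.example.com
docker tag my-image alice-docker.example.com/my-image:1.0
docker push alice-docker.example.com/my-image:1.0
docker login bob-docker.example.com
docker tag my-image bob-docker.example.com/my-image:1.0
docker push bob-docker.example.com/my-image:1.0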
