Where is the Docker default registry URL configured?

I am referring to this link - Docker pull.
By default, docker pull pulls images from Docker Hub (https://hub.docker.com).
I would like to know where this URL is configured on my local machine setup.
I am using Docker on Windows 10.

You cannot change the default registry domain for a Docker image. This is by design:
A Docker installation with a "private registry defined in the config file" would be incompatible with every other Docker installation out there. Running docker pull debian needs to pull from the same place on every Docker install.
A developer using Docker on their box will use the debian, centos and ubuntu official images. Your hacked-up Docker install would just serve your own versions of those images (if they're present) and this would break things.
You should identify your image through the full URL:
<your-private-registry>/<repository>/<image>:<tag>
The default domain docker.io (Docker Hub) is hardcoded in Docker's source code, for example here:
https://github.com/docker/distribution/blob/master/reference/normalize.go
Check the function splitDockerDomain, which sets docker.io as the registry if the user does not provide one.
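You can see the effect of that normalization from the command line; a minimal sketch (registry.example.com is a made-up private registry host):
# these two commands resolve to exactly the same image reference
docker pull debian
docker pull docker.io/library/debian

# a private registry, by contrast, must be named explicitly in the reference
docker pull registry.example.com/myteam/debian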

Related

"sudo docker push" fails with a seemingly bogus error message

Here is my terminal log (Ubuntu 22.04.1, Docker version 20.10.22, build 3a2c30b):
paul@desktop:~/work/arc/code$ docker images
REPOSITORY        TAG      IMAGE ID       CREATED          SIZE
pauljurczak/arc   latest   4f3f22791983   35 minutes ago   880MB
sub-1             latest   4f3f22791983   35 minutes ago   880MB
neo-1             latest   3dcf55bb7458   3 days ago       891MB
arc-tut           latest   3a9aee91689b   4 days ago       230MB
paul@desktop:~/work/arc/code$ sudo docker push pauljurczak/arc
Using default tag: latest
The push refers to repository [docker.io/pauljurczak/arc]
An image does not exist locally with the tag: pauljurczak/arc
paul@desktop:~/work/arc/code$ sudo docker push pauljurczak/arc:latest
The push refers to repository [docker.io/pauljurczak/arc]
An image does not exist locally with the tag: pauljurczak/arc
paul@desktop:~/work/arc/code$ sudo docker push latest
Using default tag: latest
The push refers to repository [docker.io/library/latest]
An image does not exist locally with the tag: latest
I created the pauljurczak/arc:latest image as shown there. I'm trying to push it to my pauljurczak/arc repository. The error messages don't make any sense to me. Why is pauljurczak/arc in my push command considered a tag? Why does it say An image does not exist locally with the tag: latest, while several images with that tag exist? What is happening here? I'm following the push command description at https://docs.docker.com/docker-hub/repos/#pushing-a-docker-container-image-to-docker-hub.
When viewing my repository with Chrome, I see this info: [screenshot of the push instructions shown on Docker Hub]. That's exactly what I was doing.
It seems that skipping sudo makes it work. Why is that?
It looks like you have both Docker Desktop and the standalone Docker Engine installed. This means you have two different Docker daemons running. The Docker Engine one is accessible via /var/run/docker.sock, given appropriate permissions; Docker Desktop runs a hidden Linux virtual machine (even on native Linux) and makes a similar socket file available in your home directory.
Docker Desktop uses the Docker CLI "context" feature to point docker at the socket in your home directory. That configuration is also stored in a file in your home directory.
This is where sudo makes a difference. When you run sudo docker ..., and it reads $HOME/.docker/contexts/, that home directory is now root's home directory, probably /root. That doesn't have any of the Docker Desktop related configuration in it, and so you use the default /var/run/docker.sock Docker Engine socket instead.
As you note, just consistently not using sudo will resolve this. (You could potentially need sudo to access the Docker Engine socket, which can all but trivially be used to root the host; the Docker Desktop VM is a little more isolated.) Uninstalling Docker Desktop and making sure to only use the standalone Docker Engine on native Linux also would resolve this.
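If you want to see the two daemons side by side, the CLI's context list makes the difference visible; a quick check (names and output vary by installation):
# as the normal user: a Docker Desktop context is typically active
docker context ls

# as root: only the default context pointing at /var/run/docker.sock
sudo docker context ls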

Docker: get list of all the registries configured on a host

Can docker be connected to more than one registry at a time, and how do I figure out which registries it is currently connected to?
$ docker help | fgrep registr
login Log in to a Docker registry
logout Log out from a Docker registry
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
As you can see, there is no option to list the registries. I did find a way by running:
$ docker system info | fgrep -i registr
Registry: https://index.docker.io/v1/
So... one registry at a time only? It is not like apt, where one can point to more than one source? Can anybody point me to some good documentation about docker and registries?
Oddly, I searched the web to no avail.
Aside from docker login, Docker isn't "connected to a registry" per se. Registry names are part of the image name, and Docker will connect to a registry server if it needs to pull an image.
As a specific example, the official Docker image for Elasticsearch is on a non-default registry run by Elastic. The example in that documentation is
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
# ^^^^^^^^^^^^^^^^^
# registry host name
You don't need to otherwise configure your system to connect to that registry, download an index, or anything else. In fact, you don't even need this docker pull command; if you directly docker run the image, Docker will download it if it doesn't have a copy locally.
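For example, running the image directly triggers the pull (the discovery.type setting is only there so this particular Elasticsearch image will start outside a cluster):
# no separate "docker pull" needed; Docker fetches the image on first use
docker run --rm -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0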
The default registry is Docker Hub, docker.io, and this cannot be changed.
There are several alternate registries out there. The various public-cloud providers each have their own, and there are also several free-standing image registries. Each has its own instructions on how to set it up. You always need to include the registry name as part of the image name. The Google Container Registry, for example, has a simple name syntax, so if you use GCR you can:
# build an image locally, labeled to be stored in GCR
# (this step does not contact or use GCR at all)
docker build -t gcr.io/my-name/my-image:tag .
# authenticate to the registry
# (normally GCR has a Google-specific login sequence)
docker login https://gcr.io
# push the image
docker push gcr.io/my-name/my-image:tag
# run the image, pulling it if not present
docker run ... gcr.io/my-name/my-image:tag

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance with this image. At first I wanted to try using the public anaconda3 image from Docker Hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
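To double-check that the push from the last step landed, the gcloud CLI can list what is in the registry; a sketch, assuming the same project ID:
# list repositories under the registry host
gcloud container images list --repository=eu.gcr.io/my-project-id

# list the tags pushed for the anaconda3 image
gcloud container images list-tags eu.gcr.io/my-project-id/anaconda3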
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, Numpy, Jupyter notebook, etc.).
I tried to find people with the same problem as me without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR My problem is that I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find the anaconda libraries installed directly on the GCE instance.
When you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
then you should see the anaconda libraries, which exist only inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
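To make that concrete, here is roughly what you'd see over SSH on the COS instance (the container name is whatever docker ps reports, typically matching the instance name):
# on the COS host itself: this path doesn't exist
ls /opt/conda            # ls: cannot access '/opt/conda': No such file or directory

# find the container started from your image
docker ps

# the same path does exist inside the container
docker exec -it <container_name> ls /opt/conda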

Make the Nginx image available in a local/private repository for production safety in Kubernetes

How can we make the nginx image available in my local/private repository for Kubernetes?
Let's say I am using nginx image tag version x.x. I have tested it in my dev and test environments and want to move it to prod.
What if the image is no longer present in the nginx repository?
Is there a way to pull the x.x version of nginx into our local/private repository?
There is a high risk if the image is not available, so it would be helpful if anyone could guide me on how to handle this.
If you have docker installed on your machine, pull the docker image:
$ docker pull nginx:x.x
Now, you can't use this local docker image inside Kubernetes directly. You need to do one additional thing:
Push this image to your docker registry in the cloud.
$ docker tag nginx:x.x <your-registry>/nginx:x.x
$ docker push <your-registry>/nginx:x.x
And then use <your-registry>/nginx:x.x from your registry.
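From Kubernetes you then reference it like any other image; a minimal sketch with kubectl (the deployment and secret names are arbitrary):
# create a deployment that pulls from your private registry
kubectl create deployment nginx --image=<your-registry>/nginx:x.x

# if the registry requires authentication, create a pull secret
# and reference it via imagePullSecrets in the pod spec
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<user> --docker-password=<password>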

How do I use a local image in a swarm cluster?

A colleague found out about Docker and wants to use it for our project. I started using Docker for testing. After reading an article about Docker Swarm, I wanted to test it.
I have installed 3 VMs (Ubuntu Server 14.04) with docker and swarm. I followed some how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works. I can launch, for example, a basic Apache container (the image was pulled from Docker Hub), but I want to use my own image (an Apache server with my web site).
I tried to load an image (after saving it to a .tar), but this option isn't supported in clustering mode; same thing with the import option.
So my question is: can I use my own image without pushing it to Docker Hub, and how do I do this?
If your own image is built from a Dockerfile, you can execute the build command on your project while targeting the swarm.
However, if the image wasn't built from a Dockerfile but created manually, you need a registry in between that you can push to: either Docker Hub or some other registry solution like https://github.com/docker/docker-registry
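A common middle ground is running your own registry as a container on one of the nodes; a sketch using the official registry image (my-node and my-apache are placeholder names, and a plain-HTTP registry must be whitelisted with the daemon's insecure-registries option):
# start a local registry on one node (5000 is its default port)
docker run -d -p 5000:5000 --name registry registry:2

# tag your image with the registry's address and push it
docker tag my-apache:latest my-node:5000/my-apache:latest
docker push my-node:5000/my-apache:latest

# every node in the swarm can now pull it by the same name
docker pull my-node:5000/my-apache:latest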
