What Docker command can I use after logging in to a Docker registry? - docker

I am new to Docker. I know the default registry is Docker Hub, and there are tutorials on navigating Docker Hub, e.g. searching for images. But those kinds of operations are performed in the Docker Hub UI via the web.
I was granted a private Docker registry. After I log in using a command like docker login someremotehost:8080, I do not know what commands to use to navigate around inside the registry. I do not know what images are available or what their tags are.
Could anyone share some info/links on what commands to use to explore a private remote registry after login?
Also, to use images from the private registry, the name I need to use becomes something like 'my.registry.address:port/repositoryname'.
Is there a way to change the configuration of my Docker client so that my.registry becomes the default registry, and I can just use repositoryname without specifying the registry name in every docker command?

There are no standard CLI commands to interact with remote registries beyond docker pull and docker push. The registry itself might provide some sort of UI (for example, Amazon ECR can list images through the standard AWS console), or your local development team might have a wiki that lists out what's generally available.
You can't change the default Docker registry. You have a pretty strong expectation that e.g. ubuntu is really docker.io/library/ubuntu and not something else.
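That said, if your private registry implements the standard Registry HTTP API v2 (most self-hosted registries do; the host, credentials and repository name below are placeholders), you can explore it with curl:
# list the repositories the registry knows about
curl -u myuser:mypassword https://someremotehost:8080/v2/_catalog
# list the tags of one repository
curl -u myuser:mypassword https://someremotehost:8080/v2/repositoryname/tags/list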

Docker itself has only two commands for talking to a registry:
docker pull and docker push
As for a private registry: there is no setting in Docker to pull only from one specific registry, and the reason is how image names work. An official image has a plain name like centos, but a registry also holds images created by non-official organisations or people, and those names include the publisher, like pivotaldata/centos. This naming convention is how Docker finds an image, whether in the public registry or in a private one (after login).
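For illustration (my.registry.address:8080 and repositoryname are placeholders), these commands show how the publisher and the registry host become part of the image name:
docker pull centos                                    # official image, short name
docker pull docker.io/library/centos                  # the same image, fully qualified
docker pull pivotaldata/centos                        # non-official image, publisher in the name
docker pull my.registry.address:8080/repositoryname   # image on a private registry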
If you want to interact with a private repo beyond that, you can write your own batch or bash script. For example, I have created a batch script that pulls an image and, if the user gives a wrong tag, lists all available tags for that repository:
@echo off
setlocal enabledelayedexpansion
docker login --username=xxxx --password=xxxx
docker pull %1:%2
IF NOT %ERRORLEVEL%==0 (
    echo "Specified version was not found"
    echo "Available versions for this image are:"
    REM fetch a Docker Hub JWT, then list every tag of the repository
    for /f %%i in (' curl -s -H "Content-Type: application/json" -X POST -d "{\"username\":\"user\",\"password\":\"password\"}" https://hub.docker.com/v2/users/login ^| jq -r .token ') do set TOKEN=%%i
    curl -sH "Authorization: JWT !TOKEN!" "https://hub.docker.com/v2/repositories/%1/tags/" | jq .results[].name
)
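A rough bash equivalent of the same idea (the hard-coded Docker Hub credentials, repository and tag arguments are placeholders, and jq must be installed):
#!/bin/sh
# Usage: ./pull-or-list-tags.sh <repository> <tag>
REPO="$1"
TAG="$2"
if ! docker pull "$REPO:$TAG"; then
    echo "Specified version was not found. Available tags:"
    # fetch a Docker Hub JWT, then list every tag of the repository
    TOKEN=$(curl -s -H "Content-Type: application/json" -X POST \
        -d '{"username":"user","password":"password"}' \
        https://hub.docker.com/v2/users/login | jq -r .token)
    curl -s -H "Authorization: JWT $TOKEN" \
        "https://hub.docker.com/v2/repositories/$REPO/tags/" | jq -r '.results[].name'
fi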

Related

Docker: get list of all the registries configured on a host

Can docker be connected to more than one registry at a time, and how do I figure out which registries it is currently connected to?
$ docker help | fgrep registr
login Log in to a Docker registry
logout Log out from a Docker registry
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
As you can see, there is no option to list the registries. I did find
a way by running:
$ docker system info | fgrep -i registr
Registry: https://index.docker.io/v1/
So... one registry at a time only? Is it not like apt, where one can point to more than one source? Can anybody point me to some good documentation about Docker and registries?
Oddly, I searched the web to no avail.
Aside from docker login, Docker isn't "connected to a registry" per se. Registry names are part of the image name, and Docker will connect to a registry server if it needs to pull an image.
As a specific example, the official Docker image for Elasticsearch is on a non-default registry run by Elastic. The example in that documentation is
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
# ^^^^^^^^^^^^^^^^^
# registry host name
You don't need to otherwise configure your system to connect to that registry, download an index, or anything else. In fact, you don't even need this docker pull command; if you directly docker run the image, Docker will download it if it doesn't have a copy locally.
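For example, a plain docker run with that fully qualified name pulls the image on demand (the port mappings and single-node discovery setting are just the usual development options from Elastic's documentation):
docker run --rm -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch:7.17.0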
The default registry is Docker Hub, docker.io, and this cannot be changed.
There are several alternate registries out there. The various public-cloud providers each have their own, and there are also several free-standing image registries. Each has its own instructions on how to set it up. You always need to include the registry name as part of the image name. The Google Container Registry has a simple name syntax, for example, so if you use GCR then you can
# build an image locally, labeled to be stored in GCR
# (this step does not contact or use GCR at all)
docker build -t gcr.io/my-name/my-image:tag .
# authenticate to the registry
# (normally GCR has a Google-specific login sequence)
docker login https://gcr.io
# push the image
docker push gcr.io/my-name/my-image:tag
# run the image, pulling it if not present
docker run ... gcr.io/my-name/my-image:tag

Can I query an OpenShift Docker repository whether a given image _exists_ without pulling it?

I have a situation where I need to wait for an image to show up on a given docker registry (this is an OpenShift external registry) before continuing my script with additional oc-commands.
Before that happens, the external docker registry has no knowledge whatsoever of this image. Afterwards it is available as :latest.
Can this be done programmatically? (Preferably without actually downloading the image.)
My order of preference:
oc command
docker command
A REST api (using oc or docker credentials)
Assuming the OpenShift registry works similarly to Docker Hub, you can do this:
curl --silent -f -lSL https://index.docker.io/v1/repositories/$1/tags/$2 > /dev/null
source
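If that v1-style endpoint does not work for your registry, here is a sketch against the standard Registry HTTP API v2, which most Docker registries expose; the host, repository, tag and token are placeholders, and the request succeeds only once the manifest exists, without downloading any layers:
# poll until the tag's manifest exists in the registry
until curl -fsS -o /dev/null \
    -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "https://registry.example.com/v2/myproject/myimage/manifests/latest"; do
    echo "image not available yet, retrying in 10 seconds..."
    sleep 10
done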

How to access the locally built docker-image on the docker-swarm manager?

While trying to create a service on the docker-machine manager node I got an "image doesn't exist" error. When I ran the docker images command on the manager node, no image was there, as expected. But on the host Docker side I do have those images. I want to access these images on the manager node. I've read a few articles which mentioned that I may have to upload the image to Docker Hub and then pull it from there, but I want to access it locally. Is there any way to do this, as I'm a newbie to Docker?
This is the command what I tried on my manager machine:
docker@manager:~$ docker service create --name "api-client" -p 4200:4200 api_client
This is my docker images output:
REPOSITORY TAG IMAGE ID CREATED SIZE
api_client latest 097b19c4deb8 27 hours ago 1.15GB
But in the terminal on docker@manager, the Docker image list is empty.
The problem is that there is no registry holding the image. The image needs to be pulled from a registry onto each node in the Swarm before it can run. In general you need to do the following:
Set up a registry. If you want a local registry there is a guide here, but it is some hassle to get it up and running in an "insecure http" configuration. An easier way is to get yourself a free Docker Hub account and put your image there.
Tag your local image with the registry name. How to do this is shown in the guide above.
docker tag <local image> <repository>/<image:tag>
Log in to the registry (if it is hosted in the cloud) and push your image to it
docker login
docker push <repository>/<image>:<tag>
To run the image (your command)
docker service create --name "api-client" -p 4200:4200 <repository>/<image>:<tag>
You can also pull an image into the local cache of a node using
docker pull <repository>/<image>:<tag>
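A minimal sketch of the local-registry route, where <registry-host> is a placeholder for a name or IP that every Swarm node can reach (a plain-HTTP registry also has to be listed under insecure-registries in each node's daemon configuration):
# run a simple registry container
docker run -d -p 5000:5000 --name registry registry:2
# tag the locally built image for that registry and push it
docker tag api_client:latest <registry-host>:5000/api_client:latest
docker push <registry-host>:5000/api_client:latest
# create the service using the registry-qualified image name
docker service create --name "api-client" -p 4200:4200 <registry-host>:5000/api_client:latest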

Docker pull from private registry first and Docker Hub if image not found

I am trying to find a solution so that docker pull pulls my image from my private registry first, and then from Docker Hub if it is not found in my private registry.
Currently I can pull from my private registry like this: docker pull #hostname_private_registery/#image_name
I don't want to use #hostname_private_registery in the command, because that would cause a lot of trouble for the dev team.
As of now, neither docker pull nor the Dockerfile FROM instruction includes a fallback-on-fail option. You could, however, check the availability of your private registry beforehand in some kind of script, then use string replacement on your Dockerfile ARG values to choose the active registry.
You can use the following shell script to achieve this.
if docker pull #hostname_private_registery/#image_name ; then
    echo "Image pulled from local registry"
else
    docker pull #image_repo/#image_name
    echo "Image pulled from DockerHub"
fi
You can replace the echo with whatever you need to do after the pull.

How to run a container on a remote Docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
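Putting those two commands together with scp, a sketch of the whole transfer (host name, user, ports and image name are placeholders):
# on Server A: save the image and copy the archive to Server B
docker image save -o myapp.tar myapp:latest
scp myapp.tar user@server-b:/tmp/myapp.tar
# on Server B: load the archive and run the container
docker image load -i /tmp/myapp.tar
docker run -d --name myapp -p 80:8080 myapp:latest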
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to, and pull them from, that registry. This doc describes how to deploy a container registry, or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using a Docker registry, you only push/pull the layers that have changed.
You can use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
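Once DOCKER_HOST is set, every docker command in that shell talks to the remote daemon on Server B; for example (the image name is a placeholder):
export DOCKER_HOST="tcp://your-remote-server.org:2375"
docker info                                  # now reports Server B's daemon
docker run -d --name api-client -p 4200:4200 <repository>/<image>:<tag>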
Another method is to use the SSH Agent Plugin in Jenkins.
