How to pull from multiple private registries with docker-compose? - docker

I'm attempting to pull two images from two different projects' registries (GitLab Container Registry), all from a single docker-compose.yml file.
How can I configure my .gitlab-ci.yml or set variables (whatever works) so that both images pull properly without access problems?
I have found a solution using docker login with a deploy token for read-only access to my project registry. The problem is that it only works when there is a single image to pull: How to build, push and pull multiple docker containers with gitlab ci?

You can run docker login multiple times before running docker-compose, once for each registry, and the credentials will stack.
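For example, a minimal sketch (the registry hosts and deploy-token credentials below are placeholders, not values from the question):
# Log in to each registry; Docker stores one credential per registry host
docker login registry.gitlab.com -u <deploy-token-username-1> -p <deploy-token-1>
docker login registry.other-instance.example.com -u <deploy-token-username-2> -p <deploy-token-2>
# Both sets of credentials are now available when compose pulls
docker-compose pull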

Thanks to @CCH; just to complete the answer:
In my case I push to the registry server with tag v3.0, but my docker-compose.yml on the production server tries to pull tag v3, so I run:
docker tag registry.server.tdl/my-username/my-project/my-registry-name:v3.0 registry.server.tdl/my-username/my-project/my-registry-name:v3
to add the tag used in my docker-compose.yml.

Related

Use cached docker image for gitlab-ci

I was wondering, is it possible to use cached Docker images from the GitLab registry for gitlab-ci?
For example, I want to use the node:16.3.0-alpine Docker image. Can I cache it in my GitLab registry and pull it from there to speed up my GitLab CI, instead of pulling it from Docker Hub?
Yes, GitLab's Dependency Proxy feature allows you to configure GitLab as a "pull-through cache". This is also beneficial for working around the rate limits of upstream sources like Docker Hub.
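As a hedged sketch, pulling through the proxy looks like this (the instance host and group name are placeholders; the dependency_proxy/containers path is GitLab's documented URL scheme):
# Authenticate, then pull node:16.3.0-alpine through the group-level dependency proxy
docker login gitlab.example.com -u <username> -p <personal-access-token>
docker pull gitlab.example.com/mygroup/dependency_proxy/containers/node:16.3.0-alpine
Inside CI jobs, the same prefix is available via the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable, so you don't have to hard-code the host and group.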
Using the dependency proxy should be faster in most cases, but not necessarily: Docker Hub can be more performant than a small self-hosted server, for example. GitLab runners are also remote with respect to the registry and not necessarily any "closer" to the GitLab registry than to any other registry over the internet. So keep that in mind.
As a side note, the absolute fastest way to retrieve cached images is to self-host your GitLab runners and keep the images directly on the host. That way, if an image already exists on the host when a job starts, the job starts immediately because it does not need to pull the image (depending on your runner's pull policy, and assuming you're using the image in your job's image: declaration).
I'm using a corporate GitLab instance where, for some reason, the Dependency Proxy feature has been disabled. The other option you have is to create a new Docker image on your local machine, then push it to the Container Registry of your personal GitLab project.
# First create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
docker pull node:16.3.0-alpine                                # fetch the upstream image from Docker Hub
docker build . -t registry.example.com/group/project/image   # rebuild it under your own registry's name
docker login registry.example.com -u <username> -p <token>
docker push registry.example.com/group/project/image         # push it to your project's Container Registry
where the image name should be constructed based on the example given on your project's private Container Registry page.
Now, in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials; see Settings -> Repository) in the before_script section. I think newer versions of GitLab may have the runner authenticate to the private Container Registry using system credentials, but that can vary depending on how it's configured.
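A minimal sketch of that before_script login, assuming you rely on GitLab's predefined CI variables rather than a hand-made deploy token:
# Authenticate the job to the project's own registry using predefined variables
docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"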

Push Docker image to Gitlab Registry when repository has two Docker images (client and server)

I have a GitLab repository that hosts a web app made with React / NodeJS, so I have the client and server in the same repo.
The app is working, and I want to push my two Docker images (client and server) to the GitLab Registry.
Thing is my repository has the name: gitlab.com/group/project
And it is expecting a Docker image with the same name.
Instead, I have two Docker images:
registry.gitlab.com/group/project_api
registry.gitlab.com/group/project_client
So, it won't let me push my images. I get:
denied: requested access to the resource is denied
How can I do it? I don't want to make two repositories.
I could solve it using:
docker push registry.gitlab.com/group/project/api
docker push registry.gitlab.com/group/project/client
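For completeness, a hedged sketch of how the two images might be built and tagged under those names (the ./server and ./client build contexts are assumptions, not from the question):
# Build each half of the app under a sub-path of the project's registry
docker build -t registry.gitlab.com/group/project/api ./server
docker build -t registry.gitlab.com/group/project/client ./client
docker push registry.gitlab.com/group/project/api
docker push registry.gitlab.com/group/project/client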
As specified in the relevant GitLab documentation chapter, you can use up to three levels for your image names:
registry.gitlab.com/group/project:tag
registry.gitlab.com/group/project/image1:tag
registry.gitlab.com/group/project/module1/image1:tag

URL of docker hub image for usage in azure service fabric

I have created a Docker Hub repo and also created and pushed a Docker image of a Python application to it.
However, I cannot find the correct URL of the image that I have to provide to the other services which will use this image, e.g. Azure Service Fabric or Kubernetes.
How can I find the exact URL, through PowerShell or through the browser?
You don't usually download images by URL. Instead, you use the Docker CLI with the repository and image name.
If it's a private repo, log in first by using docker login
more about login
Use docker pull {reponame/imagename:tag} to download an image to your machine.
more about pull
Replace {reponame} with the repository name.
Replace {imagename} with the name you used with docker push.
Replace {tag} with the tag you put on the image (or latest).
For example, I use this line to get my docker hub image:
docker pull loekd/nanoserver:2.0

How to pull docker images from public registry and push it to private openshift?

I need to pull all images referenced in an OpenShift template file, in my case openwhisk.
I'm trying to deploy this project on a private network, so I don't have access to Docker's official registry from there and thus need to push the images myself.
I was hoping there is a script/tool to automate this process.
There is no such tool/script available, but you can write a small shell script to do it.
If the public Docker Hub registry is not allowed, then either use a separate private registry,
or
pull the image on your local laptop, then tag it and push it to the OpenShift registry.
After pushing all the images to OpenShift, import your OpenShift template to deploy your application.
Below are the steps for a single image; you can define a list of images and loop over it (see the sketch after the commands).
docker pull imagename                                    # pull from the public registry
oc login https://127.0.0.1:8443 --token=<hidden_token>   # copy from https://your_openshift_server:port/console/command-line
oc project test
oc create imagestream imagename                          # create an image stream to hold the image
docker login -u `oc whoami` -p `oc whoami --show-token` your_openshift_server:port
docker tag imagename your_openshift_server:port/openshift_projectname/imagename:tag
docker push your_openshift_server:port/openshift_projectname/imagename:tag
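A hedged sketch of the loop (the image names are placeholders; adjust the registry host, project, and tag to your environment):
# Mirror a list of public images into the OpenShift registry
for img in imagename1 imagename2; do
  docker pull "$img"
  oc create imagestream "$img"
  docker tag "$img" your_openshift_server:port/openshift_projectname/"$img":latest
  docker push your_openshift_server:port/openshift_projectname/"$img":latest
done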
You can get more details on the page suggested by Graham Dumpleton.
Graham Dumpleton's book talks about this. You create a list (JSON) of all the images used and import it into the openshift namespace. Since your OpenShift is offline/disconnected, you'll also change any remote registry reference to the URL of the internal, hosted registry.
Example that imports all JBoss images: https://github.com/projectatomic/adb-utils/blob/master/services/openshift/templates/common/jboss-image-streams.json
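A minimal sketch of importing such a list, assuming the JSON file has already been edited to point at your internal registry:
# Load the image-stream definitions into the shared openshift namespace
oc create -f jboss-image-streams.json -n openshift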

Docker - is it necessary to push images to remote server?

I have successfully built some Docker images.
Now I would like to start my microservices with docker-compose; unfortunately, I am unable to pull those images, i.e. repository callista/discovery-server not found: does not exist or no pull access. I solved this error by logging into my DockerHub account and pushing those images to the remote server. But it seems like overkill to send such large images (which are likely to change pretty soon) over the Internet again and again, twice each time (push & pull).
Is it possible to configure Docker to install those images locally and not to pull from remote server?
I use Docker 1.8 and work on Windows 10.
Do you need to run these images on a server different from the one you build them on?
If so, you have some alternatives:
As @engineer-dollery said, you can run a registry inside your network; then you would not need to send images over the internet, only within your network. Docs: https://docs.docker.com/registry/deploying/
You could use docker save and docker load to move them around too (see the sketch below). Docs: https://docs.docker.com/engine/reference/commandline/save/
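A minimal sketch of that second option, using the image name from the question (the tarball name is an assumption):
# Export the image to a tarball on the build machine
docker save callista/discovery-server -o discovery-server.tar
# Copy the tarball to the target host (scp, shared drive, etc.), then:
docker load -i discovery-server.tar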
But if the server where you run the images is the same one where you build them...
...then you could just add the image option to your docker-compose services and run docker-compose build, as @lauri said. With the image option, docker-compose will create an image with that name after the build, and you can then run it with docker run. Or run docker-compose up --build so it always rebuilds if something changes in the Dockerfile.
If you define the build option in docker-compose.yml, you should be able to build images locally with Docker Compose, and it will then use those images without pulling. By default, Docker Compose builds images if they are not found locally. If you want to rebuild images, just add the --build option to the docker-compose up command: docker-compose up --build
Docker Compose build reference:
https://docs.docker.com/compose/compose-file/#build
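A minimal sketch of such a service definition, using the image name from the question (the build path is an assumption):
# docker-compose.yml fragment (YAML), shown here as a reference:
#   services:
#     discovery-server:
#       build: ./discovery-server
#       image: callista/discovery-server
# Then build and start everything locally, without pulling:
docker-compose up --build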
