OpenshiftV3 Adding Docker Images from external Repository - docker-registry

I am new to OpenShift V3 and I am wondering whether it's possible to add all images, or to sync OpenShift with the images, from an external docker-registry.
Example : All docker images found in https://registry.somehost.com can be visible in my openshift project.

It is not possible today to synchronize an entire registry - you'll need to do it repo by repo using "oc import-image foo --all --confirm"
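If the external registry exposes a known set of repositories, those per-repo imports can be scripted; a minimal sketch, assuming the registry host and repo names below are placeholders for your own:

```shell
# Import each repository from the external registry, one at a time.
# --all imports every tag; --confirm creates the image stream if it's missing.
for repo in app1 app2 app3; do
  oc import-image "$repo" \
    --from="registry.somehost.com/$repo" \
    --all --confirm
done
```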

Related

Create Docker Image with pre-Compiled Files

We are developing an application for our customer. The customer must not see the code, since we do not offer the source code to them; our offer covers only setup, maintenance, and running the application.
So, we have the source code here in our private Git. We compile it with the Dockerfile and build a Docker image out of it.
Since we have no remote access to the customer's container registry, we cannot simply push a new release version to it.
Is there a way to get new release versions into the customer's registry, without copying the release code to the customer?
Maybe pre-compile, then copy the compiled code to the customer?
Greetings and thanks in advance!
A docker image can be saved as a tar file using
docker save -o <filename.tar> <image_name>
You can send that file to your customer, and they can load that file as an image using
docker load -i <filename.tar>
Now they can push that image to their private repository.
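Putting those steps together, the full hand-off might look like this; the image name, version, and the customer's registry host are placeholders:

```shell
# On your side: export the release image (it contains the compiled
# artifacts only, not the source code) to a tarball
docker save -o myapp-1.2.0.tar myapp:1.2.0

# On the customer's side, after transferring the file: load the image,
# re-tag it for their private registry, and push it
docker load -i myapp-1.2.0.tar
docker tag myapp:1.2.0 registry.customer.example/myapp:1.2.0
docker push registry.customer.example/myapp:1.2.0
```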
One approach is to push the Docker image to your privately hosted Docker registry. Then, at the customer's site, you can use a tool like Nexus (check here) and configure a proxy Docker repository that pulls the images from your private Docker registry. This way, you are giving the customer not your code but the Docker image, and they can pull it.
For proxy repository information, check here

Can't push the docker image to gcp-cluster

So I did a tutorial based on TensorFlow Serving and Kubernetes. All steps work fine except pushing the Docker image to the cluster.
This is the tutorial that I followed:
https://www.tensorflow.org/tfx/serving/serving_kubernetes
When I try to push the Docker image, it fails with an error.
I have also tried creating the cluster with scopes, but the result is the same as above.
The command I use to create a cluster with scopes:
gcloud container clusters create resnet-serving-cluster --num-nodes 5 --scopes=storage-rw
So what is wrong here? Have I done something wrong?
OK, found the answer: my project ID and the registry name did not match. I re-tagged the Docker image with a new registry name containing my project ID and pushed it. It works.
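A sketch of that re-tag and push, using a hypothetical project ID and image name:

```shell
# For gcr.io, the registry path must include the GCP project ID
docker tag resnet_serving gcr.io/my-gcp-project/resnet_serving:v1
docker push gcr.io/my-gcp-project/resnet_serving:v1
```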
There may be a variety of reasons.
1) I'd recommend starting by checking whether full API access has been granted.
2) Update gcloud components: gcloud components update
3) Use gsutil to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You are trying to push your image into your private registry on gcloud. Please verify that you can access your private registry:
gcloud container images list-tags gcr.io/"your-project"/"image"
All information about the gcloud private registry can be found here:
Additional helpful information can be found here
Please notice that:
By default, project Owners and Editors have push and pull permissions
for that project's Container Registry bucket.
Project Viewers have pull permission only.
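If the permissions look right but docker push still fails, it's also worth checking that Docker is configured to authenticate via gcloud at all. This one-time setup registers gcloud as a Docker credential helper for gcr.io hosts:

```shell
gcloud auth configure-docker
```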

Pulling Docker Images - Manifest not found

I'm trying to download a tagged docker image
docker pull clkao/postgres-plv8:10-2
and, in a compose file,
postgres:
image: clkao/postgres-plv8:10-2
But receive a manifest not found exception.
Unless I'm mistaken, that tag exists on Docker Hub; however, I notice that it doesn't appear in the tags list.
Am I doing something wrong? Or is this perhaps an issue with Docker Hub or the way that repo has been set up?
If it isn't 'my fault', what's a recommendation to move forward? Create my own Dockerfile perhaps?
You might also try
docker pull -a <image>.
The -a will pull all versions of that image, which at least lets you know what is there.
(This is less useful if you really need a specific version, but helped me when I tried to pull an image that for some reason did not have a 'latest' tag.)
Edit: This is actually a really bad idea, since it will pull down the entire history, which for many repositories could be many GB. Better to go look at the repository site and see what tag you want. Note to self: don't post answers when you are tired. :-(
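Rather than pulling every version, you can also list a repository's tags over Docker Hub's HTTP API; a sketch using curl (the URL shape is specific to Docker Hub and may change):

```shell
# List up to 100 tags for the repository, returned as JSON
curl -s 'https://hub.docker.com/v2/repositories/clkao/postgres-plv8/tags/?page_size=100'
```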
You get the error message because no tag "10-2" exists.
You can try to figure out why and contact the repository owner, or you can try to build your own.
I just got past this "manifest for / not found: manifest unknown: The named manifest is not known to the registry" error by logging in first:
docker login <repo>
Also check the Docker image name, not only that the tag exists. I was trying to run Flyway version 5.0.1 from the flyway/flyway image, which seemed to exist only as flyway/flyway:latest; version 5.0.1 did exist, and I pulled it, but from a different repository name, boxfuse/flyway.
Regarding the 'docker manifest unknown' error: when you use docker pull without a tag, it defaults to the tag :latest. Make sure that when building an image you add the latest tag, or pull the image by appending its tag name to the image name with a colon.
I think you are trying to tag your image as v8.10.2. Make sure that when tagging the image locally you use the same tag that you want to pull later. The steps would look like this:
docker build -t clkao/postgres-pl:v8.10.2 .
docker push clkao/postgres-pl:v8.10.2
docker pull clkao/postgres-pl:v8.10.2
If this is from GitHub via docker.pkg.github.com, then you need to switch to ghcr.io. The former is deprecated and does not support the manifest endpoint, so some Docker clients fail with this error message when they attempt to download various resources. If you instead publish your image to GHCR (GitHub Container Registry), the Docker image pull should complete successfully.
cd <dir with Dockerfile in it>
docker build -f Dockerfile -t ghcr.io/<org_id>/<project_id>:<version> .
docker push ghcr.io/<org_id>/<project_id>:<version>
More info here: https://docs.github.com/en/packages/working-with-a-github-packages-registry/migrating-to-the-container-registry-from-the-docker-registry
Note: The Container registry is currently in public beta and subject
to change. During the beta, storage and bandwidth are free. To use the
Container registry, you must enable the feature preview. For more
information, see "Introduction to GitHub Packages" and "Enabling
improved container support with the Container registry."

How to use a git url in a Dockerfile as base image?

I have tried things like:
FROM https://github.com/someone/somerepo.git#master:/
But it doesn't work. Any suggestion?
The doc
https://docs.docker.com/engine/reference/builder/#from
says
The image can be any valid image – it is especially easy to start by pulling an image from the Public Repositories.
so either your FROM references an image available on your host, which you can see with
docker images
or it references an image on the Docker Hub
https://hub.docker.com/
for example
https://hub.docker.com/_/debian/
https://hub.docker.com/_/ubuntu/
https://hub.docker.com/_/alpine/
or any other.
So it seems that, at the moment, you can't use a git repo as a base image.
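A common workaround is to build an image from the git repository first (docker build accepts a git URL as its build context) and then reference the resulting tag in your own Dockerfile. The image name below is a placeholder:

```shell
# Build the repository into a local image...
docker build -t mybase:latest https://github.com/someone/somerepo.git#master
```

...and then start your own Dockerfile with FROM mybase:latest.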

Docker CD workflow - making docker hosts pull new images and deploy them

I'm setting up a CI/CD workflow for my organization but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code a trigger is sent to jenkins to pull the new code and build a new docker image which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running container automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed onto it. This is what I want to automatically react to by having my docker hosts then pull the new image and stop, destroy and restart the service using that new version (with the same env flags, labels, etc etc), but I'm not finding any solution to this that fits my use case.
I've found v2tec/watchtower but it's not swarm-aware nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined writing my own solution than I am adopting a fee-based service for this.
One option is to SSH to the swarm master from Jenkins (using the SSH plugin), pull the new image, and update the service whenever a new image is pushed to the registry.
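What Jenkins would then run on the manager could be as simple as a service update; the service name, registry host, and image tag here are placeholders:

```shell
# --with-registry-auth forwards registry credentials to the swarm nodes
# so they can pull from the private registry
docker service update \
  --with-registry-auth \
  --image registry.example.com/myapp:1.2.3 \
  myapp_service
```

docker service update performs a rolling restart of the service's tasks with the new image while keeping the existing env vars, labels, and other settings in place.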
