Cannot pull Docker image out of OpenShift cluster

I would like to pull a Docker image that was built inside an OpenShift Container Platform 3.9 cluster out of that cluster. To this end I try the following:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u "$username" -p "$api_token" my-cluster:443
image=$(oc get is/my-is -o jsonpath='{.status.tags[0].items[0].dockerImageReference}')
docker pull "$image"
Now docker login works, but docker pull produces the error message
lookup docker-registry.default.svc on 1.2.3.4: no such host
where 1.2.3.4 is a placeholder for my local nameserver according to /etc/resolv.conf, and $image is of the form docker-registry.default.svc:5000/registry/my-is@sha256:my-id.
Am I doing something wrong, or could it be that the cluster administrator must first expose the registry (but shouldn't it be exposed by default)? If I try oc get svc -n default as suggested here, I get this error message:
User "my-user" cannot list services in project "default"
So what steps are needed (preferably without intervention by the cluster's administrator) for me successfully pulling out that image? Would the situation change if the pull occurred in a container also executing inside the OpenShift cluster?

The lead provided in a comment was the right one. (Thanks!). The following script now does work; no intervention by a cluster admin was required:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u "$username" -p "$api_token" my-cluster:443
docker pull my-cluster:443/my-project/my-is
docker images
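If the image reference must still come from the jsonpath query above, the internal registry host can be swapped for the externally reachable route with plain shell string manipulation. A sketch; my-cluster:443 and the project/digest values below are placeholders:

```shell
# Hypothetical internal reference, as returned by the jsonpath query above
internal_ref="docker-registry.default.svc:5000/my-project/my-is@sha256:0123abc"

# Externally reachable registry route (assumption: same host used for docker login)
external_registry="my-cluster:443"

# Replace the internal service host with the external route
external_ref="${external_registry}/${internal_ref#*/}"
echo "$external_ref"   # my-cluster:443/my-project/my-is@sha256:0123abc
```

The resulting reference can then be passed to docker pull once docker login against the route has succeeded.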

Related

Pulling an image from local Docker registry

I installed a Docker registry on my server like below:
docker run -d -p 5000:5000 --name registry registry:2
After that I pushed the Alpine image to that registry:
docker pull alpine
docker image tag alpine localhost:5000/alpinetest
docker push localhost:5000/alpinetest
The problem is that I want to access this image from another server.
I can reach the registry server from the client:
user@clientserver ~
$ curl 10.10.2.18:5000/v2/_catalog
{"repositories":["alpinetest"]}
So how can I pull this "alpinetest" image from the client server?
For example, the command below is not working:
user@clientserver ~
$ docker pull 10.10.2.18:5000/alpinetest:latest
Using default tag: latest
Error response from daemon: Get "https://10.10.2.18:5000/v2/": http: server gave HTTP response to HTTPS client
Thanks!
On the machine that wants to pull the image, create or edit /etc/docker/daemon.json and enter this:
{
"insecure-registries": ["10.10.2.18:5000"]
}
and then run:
sudo systemctl restart docker
Just be aware that the registry is, just like it says, insecure. This setup shouldn't be used when the registry is accessed over the internet or in any other environment that you don't have full control over. But it's definitely nice for local tests.
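The daemon.json step above can be sketched as a small script. It writes to a scratch path here rather than /etc/docker/daemon.json, so it is safe to dry-run; on the real host the target path needs root:

```shell
# Sketch of the daemon.json step; scratch path used here for illustration.
# On the real host the target is /etc/docker/daemon.json (needs root),
# followed by: sudo systemctl restart docker
conf=./daemon.json

cat > "$conf" <<'EOF'
{
  "insecure-registries": ["10.10.2.18:5000"]
}
EOF

# Sanity check that the registry address made it into the file
grep -q '10.10.2.18:5000' "$conf" && echo "registry configured"
```

Note that if an /etc/docker/daemon.json already exists, the new key should be merged into it rather than overwriting the file.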

Docker-outside-of-Docker: login works but image pull from Artifactory gives an "authentication is required" error

On the Docker host I create .docker/config.json with:
docker login https://project-docker.artifactory.company.com -u User -p Password
On the parent Docker host, then:
docker pull project-docker.artifactory.company.com/MyImage:1.0.0
works fine; the image is downloaded from the company Artifactory registry.
And I do:
docker rmi project-docker.artifactory.company.com/MyImage:1.0.0
to remove it from the local repo.
Then I run Docker-outside-of-Docker with:
docker run -v /var/run/docker.sock:/var/run/docker.sock AnotherImage
Inside the child (or sibling) container, I do:
docker login https://project-docker.artifactory.company.com -u User -p Password
to create the /docker/config.json authentication file; it works fine.
Then when I do:
docker pull project-docker.artifactory.company.com/MyImage:1.0.0
I get the error message:
Error response from daemon: Get https://project-docker.artifactory.company.com/v2/MyImage/manifests/1.0.0: unknown: Authentication is required
When I do:
curl -uUser:Password https://project-docker.artifactory.company.com/v2/MyImage/manifests/1.0.0
curl manages to download that response.
So curl works from inside the sibling container, but the Docker daemon fails with the docker pull made from the sibling.
I know we use virtual sub-repositories, because project-docker.artifactory.company.com points to the same IP as project2-docker.artifactory.company.com and docker.artifactory.company.com.
How can I get more precise information about where it goes wrong?
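One way to narrow this down is to check, inside the sibling container, which config file the docker CLI actually reads. A diagnostic sketch, assuming standard CLI behaviour ($DOCKER_CONFIG when set, otherwise $HOME/.docker):

```shell
# Diagnostic sketch: which config.json does the docker CLI read here?
# (assumption: the CLI uses $DOCKER_CONFIG when set, else $HOME/.docker)
unset DOCKER_CONFIG
config_dir="${DOCKER_CONFIG:-$HOME/.docker}"
echo "docker CLI reads: $config_dir/config.json"

# If login actually wrote /docker/config.json, HOME may be unset or "/" in the
# container; pointing the CLI at that directory explicitly is one way to test it:
# DOCKER_CONFIG=/docker docker pull project-docker.artifactory.company.com/MyImage:1.0.0
```

If the pull succeeds with DOCKER_CONFIG set explicitly, the login and the pull were simply using different config files.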

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image).
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances with the option "Deploy a container image to this VM instance" enabled, providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can I not pull Docker images from my private container registry on an Ubuntu or Debian VM, even though Docker works very well pulling images from other registries (Docker Hub)?
I did this yesterday. First run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract and install it:
tar xzf "./docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" && chmod +x docker-credential-gcr && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands consistently, either both with sudo or both without it.
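To see why the user matters, it helps to look at what gcloud auth configure-docker writes: a credHelpers entry in the per-user ~/.docker/config.json. The exact entries below are an assumption based on the GCR docs, written to a scratch file for illustration:

```shell
# Illustration of what gcloud auth configure-docker records: a credential
# helper mapping in the per-user Docker config. Entries are an assumption
# based on the GCR docs; written to a scratch file, not ~/.docker.
cat > ./config.json <<'EOF'
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud"
  }
}
EOF

# Because this file lives under one user's home directory, `docker pull` and
# `sudo docker pull` see different configs; authenticate and pull as the same user.
grep -q '"gcr.io": "gcloud"' ./config.json && echo "helper registered"
```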

How to run an AWS Lambda Layer in a Docker container?

I would like to run a Docker container to see what is in a public Lambda Layer.
Following the AWS SAM layers docs, using a SAM app with only the PyTorch layer, I produced the Docker tag; then I tried pulling the Docker image, which fails with "pull access denied / repo may require auth".
I did try aws ecr get-login --no-include-email to authenticate correctly, though I still couldn't access the image.
So I think the issue may be that I am not authorised to pull the image of the Lambda Layer, or that the image doesn't exist; it is not clear to me which.
Alternatively, it would be good to download the public Lambda Layer; then I could use https://github.com/lambci/docker-lambda to inspect it.
More context about what I tried
So the Lambda Layer I would like to investigate is:
arn:aws:lambda:eu-west-1:934676248949:layer:pytorchv1-py36:1
The Docker tag I produced is:
python3.6-0ffbca5374c4d95e8e10dbba8
Then I tried running the Docker image with:
docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
docker run -it --entrypoint=/bin/bash <aws_account_id>.dkr.ecr.<region>.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
Which both failed with the error:
docker: Error response from daemon: pull access denied for samcli/lambda, repository does not exist or may require 'docker login'.
Just a quick potential answer (I've not read the links you provided as I am not at my computer): given you mentioned aws ecr get-login --no-include-email, I am assuming you are trying to pull a Docker image from AWS's Docker repository service (ECR).
The line docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i will, with default config, look at Docker Hub's repositories. If you are trying to pull an image from AWS, I would expect something more like docker run -it --entrypoint=/bin/bash aws_account_id.dkr.ecr.region.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i (again, not saying that command will work, but something like it, to go along with your AWS repo sign-in command).
Since https://hub.docker.com/samcli/lambda is a 404, I suspect this is one of those occasions where the error message is exactly right: the repo does not exist.
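As a side note, the full ECR registry hostname is derived from the account id and region, and newer AWS CLI versions replace the deprecated get-login with get-login-password. A sketch with placeholder values:

```shell
# Placeholder account id and region
account_id=123456789012
region=eu-west-1
registry="${account_id}.dkr.ecr.${region}.amazonaws.com"

# Newer AWS CLI auth (commented out; requires real AWS credentials):
# aws ecr get-login-password --region "$region" | \
#   docker login --username AWS --password-stdin "$registry"
echo "$registry"   # 123456789012.dkr.ecr.eu-west-1.amazonaws.com
```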

Openshift & docker - which registry can I use for Minishift?

It is easy to work with OpenShift as a Container-as-a-Service; see the detailed steps. So, via the Docker client, I can work with OpenShift.
I would like to work on my laptop with Minishift. That's the local version of OpenShift on your laptop.
Which Docker registry should I use in combination with Minishift? Minishift doesn't have its own registry, I guess.
So, I would like to do:
$ mvn clean install -- building the application
$ oc login to your minishift environment
$ docker build -t myproject/mynewapplication:latest
$ docker tag -- ?? normally to an OpenShift Docker registry entry
$ docker push -- ?? to a local Docker registry?
$ on 1st time: $ oc new-app mynewapplication
$ on updates: $ oc rollout latest dc/mynewapplication -n myproject
I use just Docker and oc cluster up, which is very similar. The internal registry that is deployed has an address in the 172.30.0.0/16 space (i.e. the default service network).
$ oc login -u system:admin
$ oc get svc -n default | grep registry
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 14m
Now, this service IP is internal to the cluster, but it can be exposed on the router:
$ oc expose svc docker-registry -n default
$ oc get route -n default | grep registry
docker-registry docker-registry-default.127.0.0.1.nip.io docker-registry 5000-tcp None
In my example, the route was docker-registry-default.127.0.0.1.nip.io
With this route, you can log in with your developer account and your token
$ oc login -u developer
$ docker login docker-registry-default.127.0.0.1.nip.io -p $(oc whoami -t) -u developer
Login Succeeded
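With the route and login in place, the remaining tag-and-push steps from the question can be sketched like this. The route and image names below are placeholders taken from the examples above, and the docker commands are commented out so the snippet is safe to dry-run:

```shell
# Placeholder route and image names taken from the examples above
registry=docker-registry-default.127.0.0.1.nip.io
image=myproject/mynewapplication

# The actual tag/push commands (commented out; require a running registry):
# docker tag "$image:latest" "$registry/$image:latest"
# docker push "$registry/$image:latest"
echo "$registry/$image:latest"   # docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest
```

Note that the push target's first path segment must match an OpenShift project your user can push to.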
Note: oc cluster up is ephemeral by default; the docs can provide instructions on how to make this setup persistent.
One additional note: if you want OpenShift to try to use some of its native builders, you can simply run oc new-app . --name <appname> from within your source code directory.
$ cat Dockerfile
FROM centos:latest
$ oc new-app . --name=app1
--> Found Docker image 49f7960 (5 days old) from Docker Hub for "centos:latest"
* An image stream will be created as "centos:latest" that will track the source image
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream "app1:latest"
* A binary build was created, use 'start-build --from-dir' to trigger a new build
* This image will be deployed in deployment config "app1"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/app1 --port=[port]' later
* WARNING: Image "centos:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "centos" created
imagestream "app1" created
buildconfig "app1" created
deploymentconfig "app1" created
--> Success
Build scheduled, use 'oc logs -f bc/app1' to track its progress.
Run 'oc status' to view your app.
There is an internal image registry. You log in to it and push images just like you suggest. You just need to know the address and what credentials you need. For details see:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
