I am using minikube to develop my Kubernetes application. I have a private Azure registry where my images are saved. Whenever I start the app, Kubernetes starts to pull the image and throws the following error:
Failed to pull image "myregistry.azurecr.io/myapp:mytag": rpc error: code = Unknown desc = Error response from daemon: Get https://myregistry.azurecr.io/v2/myapp/manifests/mytag: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I am configuring my minikube using this documentation, where first I log in to ACR using the command below:
az acr login --name myregistry.azurecr.io --expose-token
And then, using the token provided by the above command, I log in to my private Docker registry with the command below inside minikube ssh.
docker login myregistry.azurecr.io -u 00000000-0000-0000-0000-000000000000
After that, as mentioned in the document, I copy .docker/config.json to /var/lib/kubelet/config.json inside minikube ssh. Still, I am facing the above error.
If I manually pull the image using docker pull, it works. I also tried with an imagePullSecret and that works as well. But with the method above I get the authentication error. Am I missing a step here? Can you please help me?
Thanks...
It seems all the steps are right. Maybe you can check whether you really copied the config file to all of the minikube nodes. By default, the command minikube ssh connects to the control plane, so check that the node IP addresses are right when you copy the config file to them.
But in my opinion, this is not a good approach. It's better and more convenient to use an imagePullSecret together with a service account.
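A minimal sketch of that approach, using the registry and username from your question and an illustrative secret name acr-secret (replace <access-token> with the token returned by az acr login --expose-token):
kubectl create secret docker-registry acr-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=00000000-0000-0000-0000-000000000000 \
  --docker-password=<access-token>
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "acr-secret"}]}'
Pods using the default service account in that namespace will then pull from the registry without any kubelet-level configuration.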
The documentation says I have to use jfrog.io and not jfrog.com. I also tried to log in to jfrog.com, which did not work.
So it looks like acme.jfrog.io/acme is the right way to access my Docker registry.
Note: the hostname was also missing in the description. I was only able to upload when specifying the full name and setting the registry as insecure in my Docker configuration.
Is this a known issue? Or limitation of the free offering?
sudo docker login jfrog.io/acme
Username: admin
Password:
Error response from daemon: Get https://jfrog.io/v2/: x509: certificate is valid for jfrog.com, *.jfrog.com, not jfrog.io
Indeed, you should be using my-account.jfrog.io and not my-account.jfrog.com.
The docker login command you are running is wrong. It is missing your account name (as a subdomain), so instead of calling jfrog.io you should be calling my-account.jfrog.io (for example daniel.jfrog.io).
The reason for the certificate error is that when you perform docker login directly against jfrog.io (without the subdomain), Docker tries to access an invalid URL, jfrog.io/v2. As a result it gets redirected to a 403 error page on jfrog.com, which does not match the jfrog.io certificate.
To test your Docker repository, please follow these steps:
Log in to your repository with the docker login command. Make sure to use your account name instead of my-account. Note that you do not need the repository name for the login command.
docker login my-account.jfrog.io
Pull the hello-world image from Docker Hub
docker pull hello-world
Tag the hello-world image so it can be pushed to your repository (assuming it is a local repository and you have the permissions to push an image). Make sure to use your account name and repo name
docker tag hello-world my-account.jfrog.io/my-repo/hello-world
Push the tagged image to your repository
docker push my-account.jfrog.io/my-repo/hello-world
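As an optional sanity check (not part of the steps above), you can pull the image back to confirm the repository serves it:
docker pull my-account.jfrog.io/my-repo/hello-world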
I have a spring boot application running locally on docker on my machine.
The image name is springio/spring-rest-hello-world and the tag is latest. I want to push this to openshift online and get it running.
I think the easiest way to do that is to push the image up (correct me if I'm wrong).
Here is my attempt.
oc login https://console-openshift-console.apps.us-east-1.starter.openshift-online.com:6443
oc project playpen
docker login -u myUser -p myToken default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com
docker tag springio/spring-rest-hello-world default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com:5000/springio/spring-rest-hello-world
docker push default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com:5000/springio/spring-rest-hello-world
With the error being...
Get https://default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com:5000/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers
I am a newbie at this stuff, so I am pretty sure I have something wrong. I'd appreciate it if someone could give me a hand with the steps required.
thanks
Thanks to the comment provided: there is no need to include the Docker port when tagging the image.
I also needed to include the OCP project name in the tag.
docker tag springio/spring-rest-hello-world default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com/playpen/spring-rest-hello-world:latest
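The push then uses the same name (sketched here with the project and image names from above):
docker push default-route-openshift-image-registry.apps.us-east-1.starter.openshift-online.com/playpen/spring-rest-hello-world:latest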
I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.), but when I try to push to my remote (GitLab) registry I get denied access. Yes, I did do a docker login and was able to log in successfully.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in Base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
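For reference, a minimal .gitlab-ci.yml sketch of that setup, assuming GitLab's predefined CI registry variables and a runner that can reach a Docker daemon (the job name is illustrative):
build-image:
  image: docker:stable
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"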
So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and, on that VM, use Docker to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
Please see the attached image. I can list my images if I specify "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and therefore cannot access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a way to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with Docker?
A VM basically cannot be "on .gcr.io"; it may run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a Cloud Storage bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
and check whether the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope needs to be in place to read from the registry:
https://www.googleapis.com/auth/devstorage.read_only
If the VM doesn't have that scope but does have gcloud configured, you can use gcloud as a Docker credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
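After that, a plain docker pull against the regional registry should work, for example with the image from the question:
docker pull eu.gcr.io/my-project-name/example001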
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository for use within my VM.
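For example, with the image name from the question:
gcloud docker -- pull eu.gcr.io/my-project-name/example001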
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the Google Cloud SDK, run gcloud init to initialize it: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper. To do so, run the command:
gcloud auth configure-docker
After that you can pull or push images to your registry using gcloud together with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
I'm working on getting a development environment set up in Minikube and ran across an issue pulling images from the https://quay.io/v2/ registry.
I have run the command:
eval $(minikube docker-env)
which allows me to build my local Dockerfile in Minikube; it does a great job with that, and deployments work great with local images.
I then used helm to install:
helm install stable/mssql-linux
which worked fine, and its image points to microsoft/mssql-server-linux:2017-CU3.
I am also working with redis-ha and installed like so:
helm install stable/redis-ha --set="rbac.create=false"
The rbac.create=false setting seems to allow it to install in Minikube without causing all sorts of issues. However, despite creating deployments and services, the deployments ultimately fail because it can't pull the image.
I get the following error:
Failed to pull image "quay.io/smile/redis:4.0.8r0": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The deployments point to this registry image: quay.io/smile/redis:4.0.8r0
I have changed my DNS pretty much everywhere I could to point to 8.8.8.8, as it seems like it can't resolve the URL. It could also just be that I need to add the registry someplace? I kind of feel that it's registry specific, since the Minikube Docker daemon appears to be able to pull from Docker Hub but not quay.io.
If I use a terminal that is not running eval $(minikube docker-env) and use the Docker daemon on my host computer, I can pull the quay.io/smile/redis:4.0.8r0 image just fine. If I ssh into minikube and try, it can't pull.
Minikube version
minikube version: v0.25.0
Docker for Mac
Version 17.12.0-ce-mac55 (23011)
as it seems like it can't resolve the URL
What led you to believe that, when the error clearly states Client.Timeout exceeded while awaiting headers? It resolved the registry to an IP address, and even apparently opened a network connection to what it thinks is the registry's IP and port. But after that, the networking stack in minikube did not, in fact, allow the traffic out. Observe that the error wasn't DNS, and it wasn't connection refused; it was connection timed out. That is almost always firewall-esque behavior.
That smells very, very much like a corporate HTTP proxy, since your machine can interact with the Internet but minikube cannot.
There are a ton of troubleshooting steps one could go through, however, if you are interested in a very quick win, you can, from your working host computer, run docker save quay.io/smile/redis:4.0.8r0 | ssh-into-minikube "docker load" and treat minikube as if it were airgapped.
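Spelled out with minikube's own helpers (a sketch; minikube ssh-key and minikube ip print the VM's key path and address, and docker is the default VM user), that could look like:
docker save quay.io/smile/redis:4.0.8r0 | ssh -o StrictHostKeyChecking=no -i $(minikube ssh-key) docker@$(minikube ip) docker load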
I don't know what the underlying reason was... perhaps Minikube was just being fragile, but I ended up:
Removing minikube
rm -rf ~/.minikube
Running start again
minikube start --vm-driver=hyperkit
Re-running Helm init
helm init
Now everything is pulling as it should....