Current Situation
I have a Kubernetes cluster created on DigitalOcean. I want to deploy a Docker image that is hosted in a private repository, which in turn belongs to an organization on Docker Hub.
Docker Hub organization name (sample): myorg
Docker Hub repository name (sample): myorg/mo-server
So in order to push a new image I use docker push myorg/mo-server
(Note: the sample image name contains a dash (-), just like the real name does.)
Problem
When I try to deploy that docker image to kubernetes using kubectl the deployment always ends up in status ErrImagePull. Error message:
Failed to pull image "index.docker.io/myorg/mo-server": rpc error: code = Unknown desc = Error response from daemon: pull access denied for myorg/mo-server, repository does not exist or may require 'docker login'
What I tried so far
Because it is a private repository, I create a secret beforehand, using my own username and e-mail address.
set DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
set DOCKER_USER=sarensw
set DOCKER_EMAIL=stephan@myorg.com
set DOCKER_PASSWORD=...
The credentials are the same as when I use docker login. Then I create a secret using:
kubectl create secret docker-registry regcred ^
  --docker-server=%DOCKER_REGISTRY_SERVER% ^
  --docker-username=%DOCKER_USER% ^
  --docker-password=%DOCKER_PASSWORD% ^
  --docker-email=%DOCKER_EMAIL%
Then, I use kubectl create to create a new deployment.
kubectl create -f ci\helper\kub-deploy-staging.yaml
kub-deploy-staging.yaml looks as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mo-server
  template:
    metadata:
      labels:
        app: mo-server
    spec:
      containers:
      - name: mo-server
        image: index.docker.io/myorg/mo-server
        imagePullPolicy: Always
        command: [ "echo", "SUCCESS" ]
      imagePullSecrets:
      - name: regcred
The result is ErrImagePull as described above.
I'm pretty sure that image: index.docker.io/myorg/mo-server is the culprit, because it is an organization image that I am trying to use with a normal account, and the tutorials for accessing a private image do not take organizations into account.
So what am I doing wrong?
(one of many similar) references: https://gist.github.com/rkuzsma/b9a0e342c56479f5e58d654b1341f01e
I suspect this happens because of the registry URL with which you created your secret. Please try substituting index.docker.io with registry.hub.docker.com, as this is the official Docker Hub registry URL. If you are using Google Cloud you can also try docker.io.
I also see you are setting your variables with the set command; please try export instead, as in the gist you followed:
export DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
export DOCKER_USER=<your Docker Hub username, same as for `docker login`>
export DOCKER_EMAIL=<your Docker Hub e-mail, same as for `docker login`>
export DOCKER_PASSWORD=<your Docker Hub password, same as for `docker login`>
kubectl create secret docker-registry myregistrykey \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
--docker-email=$DOCKER_EMAIL
Then try again and let us know the result please.
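To see why the registry URL matters, note what kubectl actually stores in a docker-registry secret: a .dockerconfigjson whose auths map is keyed by the --docker-server value, with the credentials base64-encoded inside. A minimal local sketch (the password here is a placeholder, not from the question):

```shell
# Reproduce locally what `kubectl create secret docker-registry` stores:
# an auths map keyed by the registry URL, with base64("user:password").
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=sarensw
DOCKER_PASSWORD=examplepassword   # placeholder

AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64)
cat <<EOF
{"auths":{"$DOCKER_REGISTRY_SERVER":{"auth":"$AUTH"}}}
EOF

# Decoding the auth field gives back user:password, which is what the
# kubelet presents to the registry on pull.
printf '%s' "$AUTH" | base64 -d   # -> sarensw:examplepassword
```

If the key in auths does not match the registry the image is pulled from, the kubelet falls back to an anonymous pull, which is exactly the "pull access denied" symptom.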
Related
I have a problem connecting my k3s cluster to the GitLab Docker Registry.
On the cluster I created a secret in the default namespace like this:
kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN
Then I included this secret in my Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app.kubernetes.io/name: "app"
    app.kubernetes.io/version: "1.0"
  namespace: default
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - image: gitlab.domain.tld:5050/group/appproject:1.0
        name: app
        imagePullPolicy: Always
        ports:
        - containerPort: 80
But the created pod is still unable to pull this image.
The error message is still:
failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Can you help me figure out where the error may be?
If I use the same credentials from the secret above with local Docker, everything works fine: docker login succeeds and I can pull the image.
Thanks
To pull from a private container registry on GitLab you must first create a Deploy Token, similar to how a pipeline or other service would access it. Go to the repository, then to Settings -> Repository -> Deploy Tokens.
Give the deploy token a name and a username (it says optional, but we'll be able to use this custom username with the token) and make sure it has read_registry access. That is all it needs to pull from the registry. If you later need to push, you will also need write_registry. Once you click Create deploy token it will show you the token; be sure to copy it, as you won't see it again.
Now just recreate your secret in your k8s cluster.
kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token>
Make sure to apply the secret to the same namespace as your deployment that is pulling the image.
See the docs: https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token
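One more thing worth double-checking (a hedged sketch, values taken from the question): the --docker-server in the secret must cover the registry prefix of the image, otherwise the kubelet attempts an anonymous pull, which produces exactly this 403:

```shell
# The secret's registry must match the image's registry prefix;
# otherwise the kubelet pulls anonymously and gets 403 Forbidden.
REGISTRY=gitlab.domain.tld:5050
IMAGE=gitlab.domain.tld:5050/group/appproject:1.0

case "$IMAGE" in
  "$REGISTRY"/*) echo "secret matches image registry" ;;
  *)             echo "MISMATCH: pulls will be anonymous" ;;
esac
```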
My story:
1. I created a Spring Boot project with a Dockerfile inside.
2. I successfully built the Docker image LOCALLY with that Dockerfile.
3. I have a local Kubernetes cluster built with minikube.
4. However, when I try to apply the k8s.yaml, it tells me there is no such Docker image. Apparently my deployment searches the public Docker Hub, so what can I do?
Below is my dockerfile
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
Below is my k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pkslow-springboot-deployment
spec:
  selector:
    matchLabels:
      app: springboot
  replicas: 2
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers:
      - name: springboot
        image: cicdstudy/apptodocker:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: springboot
  name: pkslow-springboot-service
spec:
  ports:
  - port: 8080
    name: springboot-service
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: springboot
  type: NodePort
Kubernetes has no centralized built-in container image registry.
Depending on the container runtime on your cluster nodes, images are typically searched for on Docker Hub first.
Since anonymous pulls are now rate-limited by Docker Hub, it is suggested to create an account for development purposes. You get one private repository and unlimited public repositories, which means anybody can access whatever you push to a public repository.
If intellectual property is not much of a concern, you can keep using that free account for development. But when going to production you should replace it with a service/robot account.
Create an Account on DockerHub https://id.docker.com/login/
Log in to your Docker Hub account locally on the machine where you are building your container image (newer Docker versions no longer accept an --email flag):
docker login --username=yourhubusername
Build, re-tag and push your image once more (from the folder where the Dockerfile resides):
docker build -t mysuperimage:v1 .
docker tag mysuperimage:v1 yourhubusername/mysuperimage:v1
docker push yourhubusername/mysuperimage:v1
Create a secret for image registry credentials
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your username> --docker-password=<your password> --docker-email=<your e-mail>
Create a service account for deployment
kubectl create serviceaccount yoursupersa
Attach the secret (regcred, created above) to the service account named "yoursupersa":
kubectl patch serviceaccount yoursupersa -p '{"imagePullSecrets": [{"name": "regcred"}]}'
Now create your application as deployment resource object in K8S
kubectl create deployment mysuperapp --image=yourhubusername/mysuperimage:v1 --port=8080
Then patch your deployment with the service account that has the registry credentials attached (this will trigger a re-deployment):
kubectl patch deployment mysuperapp -p '{"spec":{"template":{"spec":{"serviceAccountName":"yoursupersa"}}}}'
The last step is to expose your service:
kubectl expose deployment/mysuperapp
Then everything is awesome! :)
If you just want to be able to pull images from your local computer with minikube, you can use eval $(minikube docker-env). This makes all Docker commands in that shell use your minikube cluster's Docker daemon, so a pull will first look at the host's local images instead of Docker Hub.
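What eval $(minikube docker-env) does under the hood is just set a few DOCKER_* variables in the current shell so the docker CLI talks to the daemon inside minikube. A sketch with illustrative values (the real IP and cert path come from your own minikube install):

```shell
# Illustrative: `minikube docker-env` prints export lines like these,
# and `eval` applies them to the current shell. Values are placeholders.
docker_env='
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
'
eval "$docker_env"

# From now on, docker build/push in this shell targets the minikube daemon,
# so locally built images are visible to the cluster without any registry.
echo "docker now talks to: $DOCKER_HOST"
```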
We've just bought a docker hub pro user so that we don't have to worry about pull rate limits.
Now I'm having a problem trying to set the Docker Hub pro user. Is there a way to set the credentials for hub.docker.com globally?
In the kubernetes docs I found following article: Kubernetes | Configure nodes for private registry
On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.
I've copied the config.json to the following places:
/var/lib/kubelet/config.json
/var/lib/kubelet/.dockercfg
/root/.docker/config.json
/.docker/config.json
There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of StatefulSets, Deployments and DaemonSets, so it would be great to set the Docker user globally.
Here's the config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.13 (linux)"
  }
}
To check whether it actually logs in with that user, I created an access token in my account; there I can see the last login with said token. The last login was when I executed the docker login command, so the images that I try to pull aren't using those credentials.
Any ideas?
Thank you!
Kubernetes implements this using image pull secrets. The official documentation walks through the process.
Using the Docker config.json:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
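Before creating the secret from a config.json, it can be worth checking which registry keys the file actually contains, since credentials are only used when a key matches the registry of the image being pulled. An illustrative check (the file content is a stand-in, the path is an assumption):

```shell
# Write a stand-in config.json and list the registry URLs under "auths".
cat > /tmp/config.json <<'EOF'
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  }
}
EOF

# Docker Hub images resolve to the https://index.docker.io/v1/ key;
# if your file only has some other registry, Hub pulls stay anonymous.
grep -oE 'https://[^"]+' /tmp/config.json
```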
Or you can pass the settings directly:
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
Then use those secrets in your pod definitions:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
Or, to use the secret at a user level, add the image pull secret to a service account:
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Open the sa.yaml file, delete the line with the resourceVersion key, add the imagePullSecrets: lines, and save.
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-11-22T21:41:53Z"
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: afad07eb-f58e-4012-9ccf-0ac9762981d5
secrets:
- name: default-token-gkmp7
imagePullSecrets:
- name: regcred
Finally, replace the service account with the updated sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
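Instead of round-tripping through sa.yaml, the same imagePullSecrets entry can be added with a single kubectl patch. A sketch; the patch payload can be sanity-checked locally before touching the cluster (the secret name regcred matches the steps above):

```shell
# JSON merge patch that adds the pull secret to the default service account.
PATCH='{"imagePullSecrets":[{"name":"regcred"}]}'

# Sanity-check that the payload parses as JSON before applying it.
printf '%s' "$PATCH" | python3 -m json.tool

# Against a real cluster you would then run:
# kubectl patch serviceaccount default -p "$PATCH"
```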
We use docker-registry as a pull-through proxy cache in our Kubernetes clusters; Docker Hub credentials can be set in its configuration. The Docker daemons on the Kubernetes nodes are configured to use the proxy by setting registry-mirrors in /etc/docker/daemon.json.
This way, you do not need to modify any Kubernetes manifest to include pull secrets. Our complete setup is described in a blog post.
I ran into the same problem as the OP. It turns out that placing Docker credential files for the kubelet works from Kubernetes 1.18 onward. I have tested this and can confirm that kubelet 1.18 picks up the config.json placed in /var/lib/kubelet correctly and authenticates against the Docker registry.
I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME                                   READY   STATUS             RESTARTS   AGE
myapp-<a random alphanumeric string>   0/1     ImagePullBackOff   0          <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. It boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    imagePullPolicy <string>

DESCRIPTION:
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info:
     https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you use :latest, Kubernetes will try to find it in whatever remote registry is configured (Docker Hub by default) rather than using your local one. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
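The defaulting rule quoted above can be sketched as a tiny shell function (illustrative only; the real logic lives in the Kubernetes API server's defaulting code):

```shell
# Sketch of how the default imagePullPolicy is derived from the image tag:
# Always for :latest (or no tag at all), IfNotPresent otherwise.
default_pull_policy() {
  image=$1
  tag=${image##*:}
  if [ "$tag" = "$image" ] || [ "$tag" = "latest" ]; then
    echo Always
  else
    echo IfNotPresent
  fi
}

default_pull_policy myimage:latest   # Always
default_pull_policy myimage:v1.2     # IfNotPresent
default_pull_policy myimage          # Always (no tag implies :latest)
```

This is why a locally built :latest image triggers a remote pull unless you override the policy as shown.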
I had this same ImagePullBackOff error while running a pod deployment from a YAML file, also on Docker Desktop.
For anyone else who finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (three lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using: kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node-web-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means that the kubelet cannot pull the image because either:
- You don't have networking set up in your Docker VM that can reach your Docker registry (Docker Hub?).
- myimage:latest doesn't exist in your registry or is misspelled.
- myimage:latest requires credentials (you are pulling from a private registry); see the documentation on configuring registry credentials in a Pod.
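When triaging these cases, the pod events usually name the cause. A hedged sketch of the triage (the events string is a canned example; against a real cluster you would feed in the output of kubectl describe pod <name>):

```shell
# Canned example of a pull-failure event; in practice capture it with:
#   events=$(kubectl describe pod <name>)
events='Failed to pull image "myimage:latest": pull access denied,
repository does not exist or may require authorization'

case "$events" in
  *"pull access denied"*) echo "private image: add imagePullSecrets or fix credentials" ;;
  *"not found"*)          echo "image name or tag is misspelled or missing" ;;
  *)                      echo "check networking from the node to the registry" ;;
esac
```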
Maybe I'm not getting something right, but my ImageStream returns "! error: Import failed (Unauthorized): you may not have access to the Docker image "my_registry:5000/project/my_image:latest"".
I have set up all the needed steps to connect to the external registry (created a secret and added it to the current project's serviceaccount/default and serviceaccount/builder). All DeploymentConfigs with image: my_registry:5000/project/my_image:latest specified work great; the node can successfully pull the image and create a pod.
But when I create an image stream with:
from:
  kind: DockerImage
  name: my_registry:5000/project/my_image:latest
I get an error that I am not authorized.
So what am I doing wrong? Is there any additional account I should grant pull rights to?
oc describe sa/builder
Name:                builder
Namespace:           nginx
Labels:              <none>
Image pull secrets:  builder-dockercfg-8ogvt
                     my_registry
Mountable secrets:   builder-token-v6w8q
                     builder-dockercfg-8ogvt
                     my_registry
Tokens:              builder-token-0j8p5
                     builder-token-v6w8q
and
oc describe sa/default
Name:                default
Namespace:           nginx
Labels:              <none>
Image pull secrets:  default-dockercfg-wmm1h
                     my_registry
Mountable secrets:   default-token-st7k9
                     default-dockercfg-wmm1h
Tokens:              default-token-m2aoq
                     default-token-st7k9
The solution depends on your particular infrastructure configuration, but here are some pointers which worked for me:
- Assuming your private external registry uses certificates, check that those certificates are properly imported; if they are not, add the registry as insecure.
- Docker pulls, build configs, and image stream pulls all work in different ways.
- It is also recommended that the pull secret name be the same as the hostname of the registry authentication endpoint (if not using an insecure registry). For example: <registry FQDN>:5000/yourapp:latest (certificates need this to work properly).
- Make sure the secret is linked for pulls:
oc secrets link default <pull_secret_name> --for=pull
I ran into the same problem when I was trying to import an image from a Docker registry hosted in another OpenShift cluster. After some debugging I found the cause: Unable to find a secret to match https://docker-dev.xxxx.com:443/openshift/token (docker-dev.xxxx.com:443/openshift/token)
The OpenShift Docker registry uses OpenShift's OAuth, so you have to create a secret whose --docker-server points at the /openshift/token endpoint, e.g.:
oc secrets new-dockercfg registry.example.com \
--docker-server=https://registry.example.com:443/openshift/token \
--docker-username=default/puller-sa \
--docker-password=<token> \
  --docker-email=someone@example.com