My pod can't be created because of the following problem:
Failed to pull image "europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://europe-west3-docker.pkg.dev/v2/<PROJECT_ID>/<REPO_NAME>/my-app/manifests/1.0.0: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<PROJECT_ID>/locations/europe-west3/repositories/<REPO_NAME>" (or it may not exist)
I've never experienced anything like it. Maybe someone can help me out.
Here is what I did:
I set up a standard Kubernetes cluster on Google Cloud in the zone europe-west3-a
I started to follow the steps described here https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
I built the Docker image and pushed it to the Artifact Registry repository
I can confirm the repo and the image are present, both in the Google Cloud Console and by pulling the image with docker
Now I want to deploy my app, here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
The pod fails to create due to the error mentioned above.
What am I missing?
I encountered the same problem, and was able to get it working by executing:
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${EMAIL} \
  --role=roles/artifactregistry.reader
with ${PROJECT} = the project ID and ${EMAIL} = the default service account, e.g. something like 123456789012-compute@developer.gserviceaccount.com.
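As a sketch (the project number 123456789012 below is a made-up example), the default Compute Engine service account email can be assembled from the project number, which `gcloud projects describe` can report:

```shell
# Hypothetical project number; in practice fetch it with:
#   PROJECT_NUMBER=$(gcloud projects describe "${PROJECT}" --format='value(projectNumber)')
PROJECT_NUMBER=123456789012
# The default Compute Engine service account follows this naming scheme:
EMAIL="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
echo "${EMAIL}"
```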
I suspect I may have removed some "excess permissions" too eagerly in the past.
I think the tutorial is in error.
I was able to get this working by:
Creating a Service Account and key
Assigning the account Artifact Registry permissions
Creating a Kubernetes secret representing the Service Account
Using imagePullSecrets
PROJECT=[[YOUR-PROJECT]]
REPO=[[YOUR-REPO]]
LOCATION=[[YOUR-LOCATION]]
NAMESPACE=[[YOUR-NAMESPACE]]
# Service Account and Kubernetes Secret name
ACCOUNT="artifact-registry" # Or ...
# Email address of the Service Account
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
  --display-name="Read Artifact Registry" \
  --description="Used by GKE to read Artifact Registry repos" \
  --project=${PROJECT}
# Create Service Account key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
  --iam-account=${EMAIL} \
  --project=${PROJECT}
# Grant the Service Account the Artifact Registry reader role
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${EMAIL} \
  --role=roles/artifactregistry.reader
# Create a Kubernetes Secret representing the Service Account
kubectl create secret docker-registry ${ACCOUNT} \
  --docker-server=https://${LOCATION}-docker.pkg.dev \
  --docker-username=_json_key \
  --docker-password="$(cat ${PWD}/${ACCOUNT}.json)" \
  --docker-email=${EMAIL} \
  --namespace=${NAMESPACE}
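For reference, a sketch of what `kubectl create secret docker-registry` stores under the hood (the key-file content here is a dummy placeholder, not a real key): the secret's `.dockerconfigjson` maps the registry host to `base64("<username>:<password>")`, where for Artifact Registry the username is the literal `_json_key` and the password is the full JSON key-file contents.

```shell
# Dummy stand-in for the real service account key file contents:
KEY_JSON='{"type":"service_account"}'
# kubectl encodes "<username>:<password>" as the auth value:
AUTH=$(printf '%s:%s' '_json_key' "$KEY_JSON" | base64 | tr -d '\n')
# The resulting .dockerconfigjson, keyed by the registry host:
printf '{"auths":{"europe-west3-docker.pkg.dev":{"auth":"%s"}}}\n' "$AUTH"
```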
Then:
IMAGE="${LOCATION}-docker.pkg.dev/${PROJECT}/${REPO}/my-app:1.0.0"
echo "
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: ${ACCOUNT}
      containers:
        - name: my-app
          image: ${IMAGE}
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
" | kubectl apply --filename=- --namespace=${NAMESPACE}
NOTE There are other ways to achieve this.
You could use the cluster's default (Compute Engine) Service Account instead of a special-purpose Service Account as done here, but the default Service Account is used by many other things, and granting it extra permissions may be overly broad.
You could add the imagePullSecrets to the GKE namespace's default service account. This would give any deployment in that namespace the ability to pull from the repository and that may also be too broad.
I think there's a GKE-specific way to grant a cluster service account GCP (!) roles.
Related
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I ran the following to ensure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is an issue. This does not happen when using a public image from Docker Hub. Logically, this would be a permissions issue, but apparently it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says: "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use a secret for ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account hosting the ECR registry. Change the placeholders to match your setup. Ensure that the machine has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute this script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=$(/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2)
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
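To illustrate the `base64 -d | cut -d: -f2` step (with a dummy token, not a real ECR one): ECR's authorization token is base64 of `AWS:<password>`, and only the password half belongs in `--docker-password`:

```shell
# Dummy token built the same way ECR builds real ones: base64("AWS:<password>")
DUMMY_TOKEN=$(printf '%s' 'AWS:s3cr3t-ecr-password' | base64 | tr -d '\n')
# The same pipeline the script uses extracts just the password part:
PASSWORD=$(printf '%s' "$DUMMY_TOKEN" | base64 -d | cut -d: -f2)
echo "$PASSWORD"   # s3cr3t-ecr-password
```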
Change the deployment and add an imagePullSecrets section referencing the secret your pods will use while downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
If it succeeds, note that the secret will still expire after 12 hours. To work around that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, reusing the script given above.
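As a sketch, assuming the refresh script above is saved as /opt/scripts/refresh-ecr-secret.sh (a hypothetical path) and that kubectl and the AWS CLI are configured for the cron user, a crontab entry refreshing the secret every 8 hours (well inside the 12-hour token lifetime) could look like:

```
# m h dom mon dow  command
0 */8 * * * /opt/scripts/refresh-ecr-secret.sh >> /var/log/ecr-secret-refresh.log 2>&1
```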
For the 12-hour problem: if you are using Kubernetes 1.20 or later, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version where this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
I have a single private repository on Docker. It contains a simple ASP.Net project. The full URL is https://hub.docker.com/repository/docker/MYUSERNAME/testrepo. I can push an image to it using these commands:
$ docker tag myImage MYUSERNAME/testrepo
$ docker push MYUSERNAME/testrepo
I have created this secret in Kubernetes:
$ kubectl create secret docker-registry mysecret --docker-server="MYUSERNAME/testrepo" --docker-username=MY_USERNAME --docker-password="MY_DOCKER_PASSWORD" --docker-email=MY_EMAIL
Which successfully creates a secret in Kubernetes with my username and password. Next, I apply a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-deployment
  labels:
    app: weather
spec:
  replicas: 3
  selector:
    matchLabels:
      app: weather
  template:
    metadata:
      labels:
        app: weather
    spec:
      containers:
        - name: weather
          image: MYUSERNAME/testrepo:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: mysecret
The deployment fails with this message:
$ Failed to pull image "MYUSERNAME/testrepo:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for MYUSERNAME/testrepo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
What am I doing wrong?
You provided an incorrect registry URL: --docker-server="MYUSERNAME/testrepo" is not a registry server, it is an image name. The value should be your private registry's URL; if you use Docker Hub, it should be --docker-server="https://index.docker.io/v1/". From this document:
<your-registry-server> is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub)
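A sketch of the distinction, with the placeholder values from above: the registry endpoint is fixed for Docker Hub, while MYUSERNAME/testrepo only ever appears in the image reference.

```shell
# What goes where (Docker Hub): the secret gets the registry endpoint,
# the pod spec gets the image name.
SERVER="https://index.docker.io/v1/"   # --docker-server for the secret
IMAGE="MYUSERNAME/testrepo:latest"     # image: field in the pod spec
echo "server=${SERVER} image=${IMAGE}"
```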
I've been trying to create a deployment of docker image to Kubernetes cluster without luck, my deployment.yaml looks like:
apiVersion: v1
kind: Pod
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  serviceAccountName: gitlab
  automountServiceAccountToken: false
  containers:
    - name: application
      image: example.org:port1/foo/bar:latest
      ports:
        - containerPort: port2
  volumes:
    - name: foo
      secret:
        secretName: regcred
But it fails to get the image.
Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://example.org:port1/v2/foo/bar/manifests/latest: denied: access forbidden
The secret used in deployment.yaml, was created like this:
kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_REGISTRY_USER} --docker-password=${CI_REGISTRY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
Attempt #1: adding imagePullSecrets
...
imagePullSecrets:
  - name: regcred
results in:
Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://example.org:port1/v2/foo/bar/manifests/latest: unauthorized: HTTP Basic: Access denied
Solution:
I've created deploy token under Settings > Repository > Deploy Tokens > (created one with read_registry scope)
And added given values to environment variables and an appropriate line now looks like:
kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_DEPLOY_USER} --docker-password=${CI_DEPLOY_PASSWORD}
I had taken the problematic line from tutorials & the GitLab docs, which describe deploy tokens but then use the problematic line in their examples.
I reproduced your issue, and the problem is the password you used while creating the repository's secret. When creating a secret for a GitLab repository, you have to use a personal access token created in GitLab instead of your password.
You can create a token by going to Settings -> Access Tokens. Then pick a name for your token, an expiration date, and the token's scope.
Then create a secret as previously by running
kubectl create secret docker-registry regcred --docker-server=$docker_server --docker-username=$docker_username --docker-password=$personal_token
While creating a pod you have to include
imagePullSecrets:
  - name: regcred
You need to add the imagePullSecrets to your deployment, so your pod will be:
apiVersion: v1
kind: Pod
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  serviceAccountName: gitlab
  automountServiceAccountToken: false
  containers:
    - name: application
      image: example.org:port1/foo/bar:latest
      ports:
        - containerPort: port2
  imagePullSecrets:
    - name: regcred
Be sure that the secret and the pod are in the same namespace.
Also make sure that the image you are pulling exists, with the right tag.
I notice you are trying to run the command in a GitLab CI pipeline; after running the create-secret command, check that the secret is correct (with the variables replaced).
You can also verify that you can log in to the registry and pull the image manually on another Linux machine, to be sure the credentials are right.
Creating a secret didn't work for me at first; I had to specify the namespace for the secret, and then it worked:
kubectl delete secret -n ${NAMESPACE} regcred --ignore-not-found
kubectl create secret -n ${NAMESPACE} docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_DEPLOY_USERNAME} --docker-password=${CI_DEPLOY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
I've deployed my first app on my Kubernetes prod cluster a month ago.
I could deploy my 2 services (front / back) from gitlab registry.
Now, I pushed a new docker image to gitlab registry and would like to redeploy it in prod:
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: espace-client-client
  name: espace-client-client
  namespace: espace-client
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: espace-client-client
    spec:
      containers:
        - envFrom:
            - secretRef:
                name: espace-client-client-env
          image: registry.gitlab.com/xxx/espace_client/client:latest
          name: espace-client-client
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry
I have no clue what is inside gitlab-registry. I didn't do it myself, and the people who did it left the crew :( Nevertheless, I have all the permissions, so, I only need to know what to put in the secret, and maybe delete it and recreate it.
It seems that secret is based on my .docker/config.json
➜ espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry
Name: gitlab-registry
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 174 bytes
I tried deleting the existing secret and logging out with:
docker logout registry.gitlab.com
kubectl delete secret gitlab-registry
Then login again:
docker login registry.gitlab.com -u myGitlabUser
Password:
Login Succeeded
and pull image with:
docker pull registry.gitlab.com/xxx/espace_client/client:latest
which worked.
The file ~/.docker/config.json looks weird:
{
"auths": {
"registry.gitlab.com": {}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.09.6 (linux)"
},
"credsStore": "secretservice"
}
It doesn't seem to contain any credentials...
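That is expected: "credsStore": "secretservice" means Docker delegates credential storage to a desktop keyring helper, so config.json itself carries no auth entries and is useless as a Kubernetes pull secret. A quick sketch (using a sample config mirroring the one above) of how to spot this:

```shell
# Sample config.json content matching the one shown above:
CONFIG='{"auths":{"registry.gitlab.com":{}},"credsStore":"secretservice"}'
# If a credsStore key is present, the auths entries will be empty and the
# file cannot serve as a dockerconfigjson secret.
if printf '%s' "$CONFIG" | grep -q '"credsStore"'; then
  echo "credentials are in a helper; use 'kubectl create secret docker-registry' instead"
fi
```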
Then I recreate my secret
kubectl create secret generic gitlab-registry \
--from-file=.dockerconfigjson=/home/julien/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
I also tried to do :
kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
and deploy again:
kubectl rollout restart deployment/espace-client-client -n espace-client
but I still get the same error:
Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image
You have to update the gitlab-registry secret, because this item is what lets the kubelet pull the protected image using credentials.
Please delete the old secret with kubectl -n yournamespace delete secret gitlab-registry and recreate it, providing the credentials:
kubectl -n yournamespace create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD[ --docker-email=DOCKER_EMAIL]
where:
- DOCKER_REGISTRY_SERVER is the GitLab Docker registry instance
- DOCKER_USER is the username of the robot account to pull images
- DOCKER_PASSWORD is the password attached to the robot account
You can omit docker-email since it's not mandatory (note the square brackets).
After following the link below, I can successfully pull my private images in Docker Hub from my Pods: Pull from Private repo
However, attempting to pull a Docker Store image doesn't seem to work.
I am able to pull this store image locally on my desktop using docker pull store/oracle/database-instantclient:12.2.0.1 and the same credentials that have been stored in Kubernetes as a secret.
What is the correct way to pull a Docker Store image from Kubernetes Pods?
Working pod config for my private repo/image:
image: index.docker.io/<privaterepo>/<privateimage>
I have tried the following in my pod config, none work:
image: store/oracle/database-instantclient:12.2.0.1
image: oracle/database-instantclient:12.2.0.1
image: index.docker.io/oracle/database-instantclient:12.2.0.1
image: index.docker.io/store/oracle/database-instantclient:12.2.0.1
All of the above attempts return the same error (with different image paths):
Failed to pull image "store/oracle/database-instantclient:12.2.0.1": rpc error: code = Unknown desc = Error response from daemon: repository store/oracle/database-instantclient not found: does not exist or no pull access
I managed to run this in minikube by setting up a secret with my docker login:
kubectl create secret docker-registry dockerstore \
--docker-server=index.docker.io/v1/ \
--docker-username={docker store username} \
--docker-password={docker store password} \
--docker-email={your email}
Then run kubectl create -f testreplicaset.yaml on:
# testreplicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: oracle-instantclient
  labels:
    app: oracle-instantclient
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracle-instantclient
  template:
    metadata:
      labels:
        app: oracle-instantclient
    spec:
      containers:
        - name: oracle-instantclient-container
          image: store/oracle/database-instantclient:12.2.0.1
          env:
          ports:
      imagePullSecrets:
        - name: dockerstore
I can't tell exactly why it doesn't work for you, but it might give more clues if you SSH into your Kubernetes node and try docker pull there.