I have a local Kubernetes cluster up and running using k3s. It works like a charm so far.
On it I'm running a custom Docker registry from which I want to pull images for other deployments.
The registry is exposed to the host by means of a NodePort service. Internally it listens on port 5000; externally it's on port 31320.
I can push Docker images to the registry from the host by tagging them as myhostname:31320/myimage:latest. This works great too.
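Concretely, the push from the host looks like this (the local tag myimage:latest is an assumption; the registry tag is taken from above):
$ docker tag myimage:latest myhostname:31320/myimage:latest
$ docker push myhostname:31320/myimage:latest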
Now I want to use this image in a basic Job. I'm using the full tag myhostname:31320/myimage:latest as the container image entry, like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
      - name: hello-world
        image: myhostname:31320/myimage:latest
      restartPolicy: Never
Unfortunately, I keep getting a 400 BadRequest error stating: image can't be pulled. If I try using the internal service name of the registry and the internal port instead, as in private-registry:5000/myimage:latest, I get the same error.
I suppose I cannot use private-registry:5000/myimage:latest because that's simply not the tag of the image. And I cannot push the image to private-registry:5000/myimage:latest, because the host private-registry is only known inside the cluster and port 5000 is not exposed to the host.
So... I'm stuck. What can I do about this? How do I push images from the host to the registry and allow them to be pulled from inside the cluster?
Kubernetes has rich documentation on how to configure access to multiple registries so that deployments/pods can pull from public or even private registries. To do so, you can create an image pull secret K8s resource (docs). You can either create it by running this command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword>
or by deploying this resource in your cluster:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  # Make sure you base64-encode the whole file!
  # base64 -w 0 registry.json
  .dockerconfigjson: <base64-encoded registry.json>
type: kubernetes.io/dockerconfigjson
Example registry.json:
{
  "auths": {
    "your.private.registry.example.com": {
      "username": "janedoe",
      "password": "xxxxxxxxxxx",
      "email": "jdoe@example.com",
      "auth": "c3R...zE2"
    }
  }
}
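Note that the auth field is just the base64 encoding of username:password, so with the example credentials above you can generate it like this:
$ echo -n 'janedoe:xxxxxxxxxxx' | base64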
And now you can simply attach this image pull secret to your deployment:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: hello-world
        image: myhostname:31320/myimage:latest
      restartPolicy: Never
PS: You might also consider adding your registry to the Docker daemon as an insecure registry if you encounter other issues; you can check this SO question.
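For example, a minimal /etc/docker/daemon.json sketch (the host and port come from your setup above; restart the Docker daemon after editing):
{
  "insecure-registries": ["myhostname:31320"]
}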
Related
I've created my own image, just called v2, but when I do kubectl get pods it keeps erroring out with: Failed to pull image "v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for v2, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I'm using minikube, by the way.
This is my deployment file, also called v2.yaml:
apiVersion: v1
kind: Service
metadata:
  name: v2
spec:
  selector:
    name: v2
  ports:
  - port: 8080
    targetPort: 80
---
# ... Deployment YAML definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: v2
  template:
    metadata:
      labels:
        name: v2
    spec:
      containers:
      - name: v2
        image: v2
        ports:
        - containerPort: 80
        imagePullPolicy: IfNotPresent
---
# ... Ingress YAML definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /second
        pathType: Prefix
        backend:
          service:
            name: v2
            port:
              number: 8080
Any help gratefully received.
My suspicion is that you have built your container image against your local Docker daemon, rather than minikube's. Hence, because your imagePullPolicy is set to IfNotPresent, the node will try pulling it from Docker Hub (the default container registry).
You can run minikube ssh to open a shell and then run docker image ls to verify the image is not present in the minikube Docker daemon.
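For example:
$ minikube ssh
$ docker image ls | grep v2   # no output means the image is missing here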
The solution here is to first run the following command from your local shell (i.e. not the one in minikube):
$ eval $(minikube -p minikube docker-env)
It sets up your current shell to use minikube's Docker daemon. After that, in the same shell, rebuild your image. Now when minikube tries pulling the image, it should find it and bring up the pod successfully.
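For example (assuming your Dockerfile is in the current directory and v2 is the image name your deployment references):
$ eval $(minikube -p minikube docker-env)
$ docker build -t v2 .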
As the error message indicates (v2, repository does not exist), this is caused by image: v2. There is no image available on Docker Hub with the name v2. If it is in your own repository on Docker Hub, then reference it in the form <reponame>/v2.
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30002
    protocol: TCP
  selector:
    app: hello
On the master node, I ran these commands to ensure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from Docker Hub. Logically, this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup. Make sure the machine on which you execute this script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=$(aws ecr get-authorization-token --region "$REGION" --profile <AWS_PROFILE> --output text --query 'authorizationData[].authorizationToken' | base64 -d | cut -d: -f2)
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section, which your pods will use while downloading the image from ECR:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: SECRET_NAME
Create the pods and service.
Even if this succeeds, the secret will still expire after 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, using the same script given above.
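As an illustration, a crontab entry refreshing the secret every 8 hours (the path /opt/scripts/refresh-ecr-secret.sh is a hypothetical name for the script above; any interval under 12 hours works):
0 */8 * * * /opt/scripts/refresh-ecr-secret.sh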
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version and this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
I am trying to build CI/CD locally with Jenkins and minikube.
I run minikube on my machine (host) with the Docker driver, and run Jenkins in a container too.
Both are on the same Docker network.
To run kubectl commands inside a Jenkins pipeline, I need to access minikube from the container that is running Jenkins.
I've tried to use the container name as a host, but it didn't work.
I'm out of ideas; can someone help me?
Ran into the same issue: I could not access $(minikube ip) from an external Docker container, while access from the host machine was fine.
Running the Docker container with the --network host option solved the issue.
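For example, a sketch using the stock Jenkins image (the image and container name are assumptions; with --network host the container shares the host's network, so no -p port mappings are needed):
$ docker run -d --network host --name jenkins jenkins/jenkins:lts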
Running kubectl commands from a pod (container) is possible and simple to achieve, although it's more practical and recommended to use the Kubernetes API instead.
For both of them, you are required to give the right permissions to your pods so they can authenticate and make K8s API calls (kubectl is just an application that talks to your cluster through the API).
Here is a good example by mster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: yourrepo/yourcontainer
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
Here we are giving cluster-admin rights to the Deployment's pods. Consider this a bad example, as it's dangerous: it exposes your whole cluster.
Next you have to prepare your containers to have kubectl built in, either by:
Downloading & building kubectl inside the container
Building your application image with kubectl copied into it (see the Dockerfile sketch below)
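A minimal Dockerfile sketch for the second option, assuming a static app binary; the base image, kubectl version, and app name are placeholders:
FROM alpine:3.19
# Fetch a pinned kubectl release (pin whichever version matches your cluster)
RUN apk add --no-cache curl \
 && curl -fsSLo /usr/local/bin/kubectl https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl
# Hypothetical application binary built beforehand
COPY ./yourapp /usr/local/bin/yourapp
CMD ["yourapp"]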
Voilà! kubectl provides a rich CLI for managing your Kubernetes cluster.
If you prefer to talk directly to the API, you don't need to do anything else. Just go to the documentation to understand how to make calls, and also check Access Clusters Using the Kubernetes API.
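For example, from inside a pod you can call the API directly using the service account credentials mounted at the standard paths:
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/default/pods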
I have a single private repository on Docker Hub. It contains a simple ASP.NET project. The full URL is https://hub.docker.com/repository/docker/MYUSERNAME/testrepo. I can push an image to it using these commands:
$ docker tag myImage MYUSERNAME/testrepo
$ docker push MYUSERNAME/testrepo
I have created this secret in Kubernetes:
$ kubectl create secret docker-registry mysecret --docker-server="MYUSERNAME/testrepo" --docker-username=MY_USERNAME --docker-password="MY_DOCKER_PASSWORD" --docker-email=MY_EMAIL
Which successfully creates a secret in Kubernetes with my username and password. Next, I apply a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-deployment
  labels:
    app: weather
spec:
  replicas: 3
  selector:
    matchLabels:
      app: weather
  template:
    metadata:
      labels:
        app: weather
    spec:
      containers:
      - name: weather
        image: MYUSERNAME/testrepo:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
The deployment fails with this message:
$ Failed to pull image "MYUSERNAME/testrepo:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for MYUSERNAME/testrepo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
What am I doing wrong?
You should provide the correct registry URL: --docker-server="MYUSERNAME/testrepo" is not a registry URL, it is a Docker image name. It should be your private registry URL; if you use Docker Hub, the value should be --docker-server="https://index.docker.io/v1/". From this document:
<your-registry-server> is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub)
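So the corrected secret creation from the question would look like this (a sketch reusing the question's own placeholders):
$ kubectl create secret docker-registry mysecret --docker-server="https://index.docker.io/v1/" --docker-username=MY_USERNAME --docker-password="MY_DOCKER_PASSWORD" --docker-email=MY_EMAIL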
I have this Kubernetes Job instance:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
      - name: job
        image: 172.30.34.145:5000/myproj/app:latest
        command: ["/bin/sh", "-c", "$(COMMAND)"]
      serviceAccount: default
      serviceAccountName: default
      restartPolicy: Never
How can I write the image name so it always pulls from within my own namespace?
I'd like to set it like this:
image: app:latest
But it fails, saying it's unable to pull the image.
To pull from a repository other than Docker Hub, you need to specify the host:port part in the image name. As far as I am aware, at this point there is no option to change the location of the default registry in the Docker daemon.
If you are very set on the idea, you could fiddle with DNS so it resolves to your image registry instead of Docker's, but that would cut you off from Docker Hub completely.