How to change our k8s cluster from docker.io to our private registry so we don't have to mention the docker registry host on every image?
You could set up a MutatingAdmissionWebhook to modify the pod's spec.containers[].image values.
I wouldn't recommend doing this though.
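For illustration only (and with the same caveat), the mutation itself would be a JSON Patch that the webhook returns base64-encoded in its AdmissionReview response; the registry host below is made up:
# Sketch of the JSON Patch a mutating webhook could return
# (base64-encoded in response.patch); registry host is illustrative.
- op: replace
  path: /spec/containers/0/image
  value: registry.example.com/library/nginx:1.21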
As you can read in this section of the official Kubernetes documentation, it can be configured at the container runtime level (in this case Docker) by creating a Secret based on existing Docker credentials; you can then refer to that alternative configuration in your Pod specification as follows:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets: # 👈
    - name: regcred
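The regcred Secret referenced above is the "Secret based on existing Docker credentials" mentioned earlier; as a sketch, it can be created like this (all values are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>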
I have a problem connecting my k3s cluster to the GitLab Docker Registry.
On the cluster I created a secret in the default namespace like this:
kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN
Then I included this secret in the Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app.kubernetes.io/name: "app"
    app.kubernetes.io/version: "1.0"
  namespace: default
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - image: gitlab.domain.tld:5050/group/appproject:1.0
          name: app
          imagePullPolicy: Always
          ports:
            - containerPort: 80
But the created pod is still unable to pull this image.
I still get this error message:
failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Can you help me figure out where the error may be?
If I try to connect to this GitLab registry with the same credentials on local Docker, it works fine: docker login succeeds and pulling this image works too.
Thanks
To pull from a private container registry on GitLab you must first create a Deploy Token, similar to how a pipeline or other "service" would access it. Go to the repository, then go to Settings -> Repository -> Deploy Tokens.
Give the deploy token a name and a username (it says optional, but we'll be able to use this custom username with the token) and make sure it has read_registry access. That is all it needs to pull from the registry; if you later need to push, you would also need write_registry. Once you click create deploy token it will show you the token; be sure to copy it, as you won't see it again.
Now just recreate your secret in your k8s cluster.
kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token>
Make sure to apply the secret to the same namespace as your deployment that is pulling the image.
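If the Deployment lives in a namespace other than default, create the secret in that namespace too, for example (the namespace name is a placeholder):
kubectl create secret docker-registry regcred \
  --docker-server=gitlab.domain.tld:5050 \
  --docker-username=<deploy token username> \
  --docker-password=<deploy token> \
  --namespace=<deployment-namespace>   # must match the Deployment's namespace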
See the docs: https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token
I am trying to avoid having to create three different images for separate deployment environments.
Some context on our current CI/CD pipeline:
For the CI portion, we build our app into a docker container and then submit that container to a security scan. Once the security scan is successful, the container gets put into a private container repository.
For the CD portion, using helm charts, we pull the container from the repository and then deploy to a company managed Kubernetes cluster.
There was a request, and the solution was to use a piece of software in the container. For some reason (I'm the DevOps person, not the software engineer) the software needs environment variables, specific to the deployment environment, passed to it when it starts. How would we be able to start this software and pass environment variables to it at deployment?
I could just create three different images with the environment variables baked in, but I feel like that is an anti-pattern; it takes away the flexibility of having one image that can be deployed to different environments.
Can anyone point me to resources on starting an application with specific environment variables using Helm? I've looked, but did not find a solution or anything that pointed me in the right direction. As a plan B, I'll just create three different images, but I want to make sure there is not a better way.
Depending on the container orchestration, you can pass environment variables in different ways:
Plain docker:
docker run -e MY_VAR=MY_VAL <image>
Docker compose:
version: '3'
services:
  app:
    image: '<image>'
    environment:
      - MY_VAR=my-value
See the docker-compose docs.
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: "my value"
See the Kubernetes docs.
Helm:
Add the values in your values.yaml:
myKey: myValue
Then reference it in your helm template:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: {{ .Values.myKey }}
Check out the helm docs.
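This is also how you keep a single image across environments: install the same chart with a different value per environment, for example via --set or per-environment values files (release, chart and file names below are illustrative):
# Override per environment at install/upgrade time:
helm upgrade --install app ./chart --set myKey=staging-value
# Or keep one values file per environment:
helm upgrade --install app ./chart -f values-prod.yaml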
We've just bought a Docker Hub Pro user so that we don't have to worry about pull rate limits.
Now I'm having a problem trying to set the Docker Hub Pro user. Is there a way to set the credentials for hub.docker.com globally?
In the Kubernetes docs I found the following article: Kubernetes | Configure nodes for private registry
On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.
I've copied the config.json to the following places:
/var/lib/kubelet/config.json
/var/lib/kubelet/.dockercfg
/root/.docker/config.json
/.docker/config.json
There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of statefulsets, deployments and daemonsets. So it would be great to set the Docker user globally.
Here's the config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.13 (linux)"
  }
}
To check if it actually logs in with the user I've created an access token in my account. There I can see the last login with said token. The last login was when I executed the docker login command. So the images that I try to pull aren't using those credentials.
Any ideas?
Thank you!
Kubernetes implements this using image pull secrets. This doc does a better job at walking through the process.
Using the Docker config.json:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Or you can pass the settings directly:
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
Then use those secrets in your pod definitions:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
Or, to use the secret at the user level, add the image pull secret to the service account:
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Open the sa.yaml file, delete the line with the resourceVersion key, add the imagePullSecrets: lines, and save.
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-11-22T21:41:53Z"
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: afad07eb-f58e-4012-9ccf-0ac9762981d5
secrets:
  - name: default-token-gkmp7
imagePullSecrets:
  - name: regcred
Finally, replace the serviceaccount with the updated sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
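Alternatively, the same change can be made in one step with kubectl patch, avoiding the export/edit/replace cycle:
# Adds regcred to the default service account's imagePullSecrets
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'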
We use docker-registry as a proxy cache in our Kubernetes clusters; Docker Hub credentials can be set in its configuration. The Docker daemons on the Kubernetes nodes are configured to use the proxy by setting registry-mirrors in /etc/docker/daemon.json.
This way, you do not need to modify any Kubernetes manifest to include pull secrets. Our complete setup is described in a blog post.
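As a rough sketch of the node-level part, /etc/docker/daemon.json points the daemon at the cache (the mirror URL is a placeholder; restart the Docker daemon after changing it):
{
  "registry-mirrors": ["https://<your-registry-cache>:5000"]
}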
I ran into the same problem as the OP. It turns out that placing the Docker credential file for the kubelet works for Kubernetes version 1.18 or higher. I have tested this and can confirm that kubelet 1.18 picks up the config.json placed in /var/lib/kubelet correctly and authenticates against the Docker registry.
I have a multi-node Kubernetes setup. I am trying to allocate a Persistent Volume dynamically using storage classes with the NFS volume plugin.
I found storage class examples for glusterfs, aws-ebs, etc., but I didn't find any example for NFS.
If I create only a PV and PVC, then NFS works very well (without a storage class).
I tried to write a storage class file for NFS by referring to the other plugins; please see it below.
nfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/nfs
parameters:
  path: /nfsfileshare
  server: <nfs-server-ip>
nfs-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
It didn't work. So my question is: can we write a storage class for NFS? Does it support dynamic provisioning?
As of August 2020, here's how things look for NFS persistence on Kubernetes:
You can
Put an NFS volume on a Pod directly:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      nfs:
        path: /foo/bar
        server: wherever.dns
Manually create a Persistent Volume backed by NFS, and mount it with a Persistent Volume Claim (PV spec shown below):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
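The claim that mounts such a PV could look roughly like this (a sketch; the claim name is illustrative, and the storageClassName and size are chosen to match the PV above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim   # name is illustrative
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi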
Use the (now deprecated) NFS PV provisioner from external-storage. This was last updated two years ago, and has been officially EOL'd, so good luck. With this route, you can make a Storage Class such as the one below to fulfill PVCs from your NFS server.
Update: There is a new incarnation of this provisioner as kubernetes-sigs/nfs-subdir-external-provisioner! It seems to work in a similar way to the old nfs-client provisioner, but is much more up-to-date. Huzzah!
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
Evidently, CSI is the future, and there is an NFS CSI driver. However, it doesn't support dynamic provisioning yet, so it's not really terribly useful.
Update (December 2020): Dynamic provisioning is apparently in the works (on master, but not released) for the CSI driver.
You might be able to replace external-storage's NFS provisioner with something from the community, or something you write. In researching this problem, I stumbled on a provisioner written by someone on GitHub, for example. Whether such provisioners perform well, are secure, or work at all is beyond me, but they do exist.
I'm looking into doing the same thing. I found https://github.com/kubernetes-incubator/external-storage/tree/master/nfs, which I think you based your provisioner on?
I think an nfs provider would need to create a unique directory under the path defined. I'm not really sure how this could be done.
Maybe this is better off as a GitHub issue on the Kubernetes repo.
Dynamic storage provisioning using NFS doesn't work; better to use glusterfs. There's a good tutorial with fixes for common problems encountered while setting it up.
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
I also tried to enable the NFS provisioner on my kubernetes cluster and at first it didn't work, because the quickstart guide does not mention that you need to apply the rbac.yaml as well (I opened a PR to fix this).
The nfs provisioner works fine for me if I follow these steps on my cluster:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs#quickstart
$ kubectl create -f deploy/kubernetes/deployment.yaml
$ kubectl create -f deploy/kubernetes/rbac.yaml
$ kubectl create -f deploy/kubernetes/class.yaml
Then you should be able to create PVCs like this:
$ kubectl create -f deploy/kubernetes/claim.yaml
You might want to change the folders used for the volume mounts in deployment.yaml to match it with your cluster.
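To sanity-check the result, you can confirm that the provisioner's StorageClass exists and that the example claim ends up Bound (these are just standard kubectl queries):
# Verify the StorageClass and the test claim
kubectl get storageclass
kubectl get pvc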
The purpose of a StorageClass is to create storage, e.g. from cloud providers (or a "Provisioner", as they call it in the Kubernetes docs). In the case of NFS you only want to get access to existing storage and there is no creation involved. Thus you don't need a StorageClass. Please refer to this blog.
If you are using AWS, I believe you can use this image to create an NFS server:
https://hub.docker.com/r/alphayax/docker-volume-nfs
I am running a kubeadm alpha version to set up my Kubernetes cluster.
From Kubernetes, I am trying to pull Docker images which are hosted in a Nexus repository.
Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this?
Details for this are present in https://github.com/kubernetes/kubernetes/issues/41536
Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    name: test
spec:
  containers:
    - image: 123.456.789.0:9595/test
      name: test
      ports:
        - containerPort: 8443
  imagePullSecrets:
    - name: my-secret
You need to refer to the secret you have just created from the Pod definition.
When you create the secret with kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 ... the server must exactly match what's in your Pod definition - including the port number (and if it's a secure one then it also must match up with the docker command line in systemd).
Also, the secret must be in the same namespace where you are creating your Pod, but that seems to be in order.
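Putting that together for the registry used in the Pod above, the secret would be created roughly like this (username, password and email are placeholders; note the port is included in --docker-server):
kubectl create secret docker-registry my-secret \
  --docker-server=123.456.789.0:9595 \
  --docker-username=<nexus-username> \
  --docker-password=<nexus-password> \
  --docker-email=<email> \
  --namespace=default   # same namespace as the Pod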
I received a similar error while launching containers from the Amazon ECR registry. The issue was that I didn't mention the exact "Image URI" location in the deployment file.