Adding docker cert to new K8s nodes

I am very new to Kubernetes and am working with an EKS cluster. I am trying to pull images, and I have added a cert to /etc/docker/certs.d// and am able to pull fine after logging in. However, when I create a deployment to deploy apps to my pods, it seems like I have to manually SSH into my EKS nodes and copy over the cert; otherwise I am left with an x509 certificate error. Additionally, if I terminate a node and new nodes are created, those new nodes obviously don't have the cert anymore, so I have to copy it over again. Is there a way to configure a Secret or ConfigMap so that new nodes automatically have this cert? I know you can mount a ConfigMap, but it seems like this only works for pods.
Also, what is the best way to replace these certs when they expire (e.g. when pulling images from ECR)?

You can store the cert as a Kubernetes Secret and use it for pulling images, but you are right: that only works at the pod level. There is no way to manage or inject a cert at the node level through Kubernetes itself.
The only option you are left with is to create a custom AMI and use it for the EKS node group, so that every node has the cert by default as the group scales up or down.
https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/
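If maintaining a full custom AMI feels heavy, the same idea can be done with user data in the node group's launch template. A minimal sketch only, assuming the aws CLI is present on the node image; the registry host and S3 bucket below are placeholders, not values from the question:

#!/bin/bash
# Hypothetical launch-template user data: install the registry CA on every
# new node before it starts pulling images.
mkdir -p /etc/docker/certs.d/registry.example.com:5000
aws s3 cp s3://my-bucket/registry-ca.crt \
    /etc/docker/certs.d/registry.example.com:5000/ca.crt
systemctl restart docker

Either way, the cert survives node termination and scale-up because it is applied at boot rather than copied over by hand.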

Related

How to update Kubernetes certificates with Docker Desktop on macOS

I am using Kubernetes with Docker Desktop on macOS Monterey.
I have a problem starting Kubernetes: a year has passed and my Kubernetes certificates are now invalid.
How can I renew them?
Error message:
Error: Kubernetes cluster unreachable: Get "https://kubernetes.docker.internal:6443/version": EOF
I tried to install kubeadm, but I think it is only suitable if I use minikube.
Edit:
I am using a Mac with an M1 chip.
To update the certificates Docker Desktop for macOS uses, you will need to generate a new set of certificates and keys and add them to the Kubernetes configuration file. First create a certificate signing request (CSR), then use the CSR to issue the new certificates and keys. Once they are in place in the appropriate directory structure, update the Kubernetes configuration file to point at them. Finally, restart your Kubernetes cluster so the new certificates and keys take effect.
If you are using minikube instead: the first step in updating the certificates is to delete the existing cluster with the minikube delete command. Then create a new cluster, which will be issued fresh certificates, with the minikube start command. Finally, save the cluster configuration file with the new certificates using the minikube get-kube-config command.
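In other words, assuming the default minikube profile:

minikube delete   # remove the cluster along with its expired certificates
minikube start    # recreate it; fresh certificates are generated on start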
Also check your Kubernetes version; if you are running an older one, upgrade to the latest. The Kubernetes version can be upgraded after a Docker Desktop update, but when a new Kubernetes version is added to Docker Desktop, you need to reset the current cluster in order to use it.
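Separately, a quick way to confirm that an expired certificate really is the cause, with the host and port taken from the error message above:

# Print the validity window of the API server's serving certificate.
echo | openssl s_client -connect kubernetes.docker.internal:6443 2>/dev/null \
  | openssl x509 -noout -dates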

What is the best approach to having the certs inside a container in kubernetes?

I see that normally a new image is created, that is, via a Dockerfile, but is it good practice to pass the cert through an environment variable, with a script that picks it up and places it inside the container?
Another approach I see is to mount the certificate on a volume.
What would be the best approach to having a single image for all environments, just like what happens with other software artifacts?
Creating a new image for each environment or each cert renewal seems tedious to me, but if it has to be like this...
Definitely do not bake certificates into the image.
Because you tagged your question with azure-aks, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.
See the plugin project page on GitHub
See also this doc Getting Certificates and Keys using Azure Key Vault Provider
This doc is better and more thorough, and worth going through even if you're not using the NGINX ingress controller: Enable NGINX Ingress Controller with TLS
And so for different environments, you'd pull different certificates from one or more key vaults and mount them into your cluster. Please also remember to use different credentials/identities to fetch those certs.
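A rough sketch of what that looks like with the Azure Key Vault provider; the vault name, tenant ID, and object name here are placeholders, not values from the question:

# Hypothetical SecretProviderClass pulling a TLS cert from Key Vault.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-tls
spec:
  provider: azure
  parameters:
    useVMManagedIdentity: "true"
    keyvaultName: "my-keyvault"          # per-environment vault
    tenantId: "00000000-0000-0000-0000-000000000000"
    objects: |
      array:
        - |
          objectName: my-cert
          objectType: secret             # exports the cert + private key as PEM

Pods then reference it through a csi volume (driver: secrets-store.csi.k8s.io), so the same image works in every environment and only the mounted cert changes.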
The cloud native approach would be to not have your application handle the certificates but to externalize that completely to a different mechanism.
You could have a look at service meshes. They mostly work with the sidecar pattern, where a sidecar container runs in the pod and handles encryption/decryption of your traffic. The iptables rules inside the pod are configured so that all traffic must go through the sidecar.
Depending on your requirements, you can check out Istio and Linkerd as service mesh solutions.
If a service mesh is not an option, I would recommend storing your certs in a Secret and mounting that as a volume into your container, as sketched below.
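A minimal sketch of that last option; all names and paths here are placeholders:

# Create the Secret from PEM files, then mount it read-only into the pod:
#   kubectl create secret tls app-tls --cert=server.crt --key=server.key
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls         # app reads tls.crt / tls.key from here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: app-tls

Renewing the cert then means updating the Secret rather than rebuilding the image; the kubelet propagates the change into the mounted volume.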

Pulling images from private repository in kubernetes without using imagePullSecrets

I am new to Kubernetes deployments, so I wanted to know: is it possible to pull images from a private repo without using imagePullSecrets in the deployment YAML files, or is it mandatory to create a docker-registry secret and pass it in imagePullSecrets?
I also looked at adding imagePullSecrets to a service account, but that is not the requirement. I would love to know whether, if I set up creds in variables, Kubernetes can use them to pull those images.
I would also like to know how this can be achieved; a reference to a document would help.
Thanks in advance.
As long as you're using Docker on your Kubernetes nodes (please note that Docker support has itself recently been deprecated in Kubernetes), you can authenticate the Docker engine on your nodes itself against your private registry.
Essentially, this boils down to running docker login on your machine and then copying the resulting credentials JSON file directly onto your nodes. This, of course, only works if you have direct control over your node configuration.
See the documentation for more information:
If you run Docker on your nodes, you can configure the Docker container runtime to authenticate to a private container registry.
This approach is suitable if you can control node configuration.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put the same file in the search paths list below, kubelet uses it as the credential provider when pulling images.
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in the environment of the kubelet process.
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json on your PC.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes; for example:
if you want the names: nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )
if you want to get the IP addresses: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )
Copy your local .docker/config.json to one of the search paths list above.
for example, to test this out: for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done
Note: For production clusters, use a configuration management tool so that you can apply this setting to all the nodes where you need it.
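For reference, the credentials file those steps copy around has this shape; the registry host is a placeholder, and the auth value is base64 of username:password:

{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcm5hbWU6cGFzc3dvcmQ="
    }
  }
}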
If the Kubernetes cluster is private, you can deploy your own, private (and free) JFrog Container Registry using its Helm Chart in the same cluster.
Once it's running, you should allow anonymous access to the registry to avoid the need for a login in order to pull images.
If you prevent external access, you can still access the internal k8s service created and use it as your "private registry".
Read through the documentation and see the various options.
Another benefit is that JCR (JFrog Container Registry) is also a Helm repository and a generic file repository, so it can be used for more than just Docker images.
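Installation is a couple of Helm commands; the repo URL and chart name below are the ones JFrog publishes, but treat this as a sketch and check the chart's docs for required values:

helm repo add jfrog https://charts.jfrog.io
helm repo update
# Install JFrog Container Registry into its own namespace.
helm install jcr jfrog/artifactory-jcr --namespace jcr --create-namespace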

Can a self-signed cert secure multiple CNs / FQDNs?

This is a bit of a silly setup, but here's what I'm looking at right now:
I'm learning Kubernetes
I want to push custom code to my Kubernetes cluster, which means the code must be available as a Docker image available from some Docker repository (default is Docker Hub)
While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service. Sudden rate limits, security breaches, sudden ToS changes, etc
To this end, I'm running my own Docker registry within my Kubernetes cluster
I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like nginx I may pull from Docker Hub instead of hosting locally) then I don't want to be vulnerable to MITM attacks swapping out the image
Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. Then all nodes pulling from the registry live within the cluster. Since the registry never needs to receive images from sources outside of the cluster or deliver them to sources outside of the cluster, the registry does not need a NodePort service but can instead be a ClusterIP service... ultimately
Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)
Because I don't plan on making the registry accessible from the outside world (eventually), I can't use Let's Encrypt to generate valid certs for it (and even if I were making my Docker registry available to the outside world, I couldn't use Let's Encrypt anyway without writing some extra code to integrate certbot or something)
My plan is to follow the example in this StackOverflow post: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.
Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to either access it via a node IP address or a domain name pointing at a node or a load balancer.
When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:
> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
I can generate a certificate for example.com instead of for docker-registry, however I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.
This is why I'm wondering if I can just say that my self-signed cert applies to both example.com and docker-registry. If not, two other acceptable solutions would be:
Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?
Can I tell the Docker registry to deliver one of two different certificates based on the host name used to access it?
If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.
Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention in your post (that it would open you up to security risks later) probably won't apply, as the feature works by listing the specific IP addresses or host names to trust.
For example, you could configure something like this in the daemon's /etc/docker/daemon.json:
{
  "insecure-registries" : [ "10.10.10.10:5000" ]
}
and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.
If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the Subject Alternative Name (SAN) field in a cert (indeed, Kubernetes uses that feature quite a bit).
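Concretely, one self-signed cert can cover both names from the question via SANs. A sketch using openssl 1.1.1+ (which supports -addext):

# One key pair, one cert, two DNS names (internal and external).
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:example.com"

Docker validates against the SAN list, so the same cert is accepted whether clients reach the registry as docker-registry inside the cluster or as example.com from outside.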

How to use a customised Ubuntu image for nodes when creating a GKE cluster?

I have GKE and I need to use a customised Ubuntu image for the GKE nodes. I am planning to enable autoscaling, so I need TLS certificates installed on each node so that it trusts the private Docker registry hosted on my premises. This is possible to do manually for existing nodes, but when I enable autoscaling, the cluster will spin up new nodes, and image pulls on those nodes will fail because Docker cannot trust the private registry.
I have created a customised Ubuntu image and uploaded it as an image in GCP. I then tried to create a GKE cluster and set the nodes' OS image to the image I created.
Do you know how to create a GKE cluster with a customised Ubuntu image? Has anyone dealt with a situation like this?
Node pools in GKE are based on GCE instance templates and can't be modified. That means you aren't allowed to set metadata such as startup scripts or base them on custom images.
However, an alternative approach might be deploying a privileged DaemonSet that manipulates the underlying OS settings and resources, as sketched below.
It is important to mention that granting privileges to resources in Kubernetes must be done carefully.
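A rough sketch of that DaemonSet approach, copying a registry CA from a ConfigMap onto each node's filesystem; the ConfigMap name, registry host, and image are placeholders:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca-installer
spec:
  selector:
    matchLabels:
      app: registry-ca-installer
  template:
    metadata:
      labels:
        app: registry-ca-installer
    spec:
      containers:
        - name: installer
          image: busybox:1.36
          securityContext:
            privileged: true             # privileged, per the caveat above
          command: ["/bin/sh", "-c"]
          # Copy the CA onto the host, then keep the pod alive.
          args: ["cp /ca/ca.crt /host-certs/ca.crt && sleep 2147483647"]
          volumeMounts:
            - name: ca
              mountPath: /ca
            - name: host-certs
              mountPath: /host-certs
      volumes:
        - name: ca
          configMap:
            name: registry-ca            # holds the ca.crt key
        - name: host-certs
          hostPath:
            path: /etc/docker/certs.d/registry.example.com:5000
            type: DirectoryOrCreate

Because the DaemonSet schedules onto every node, including ones the autoscaler adds later, new nodes pick up the cert without any manual steps.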
You can add a custom node pool where the image is Ubuntu, making sure to add the special GCE instance metadata startup-script, and put your customization in it.
But my advice is to put the URL of a shell script stored in a bucket in the same project (the startup-script-url metadata key); GCE will download the script every time a new node is created and execute it at startup as root.
https://cloud.google.com/compute/docs/startupscript#cloud-storage
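The script itself could be as small as this; the bucket and registry host are placeholders, and it assumes gsutil is available on the node image:

#!/bin/bash
# Runs as root on each new node: install the on-premise registry CA.
mkdir -p /etc/docker/certs.d/registry.example.com:5000
gsutil cp gs://my-bucket/registry-ca.crt \
    /etc/docker/certs.d/registry.example.com:5000/ca.crt
systemctl restart docker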
