How to update Kubernetes certificates with Docker Desktop on macOS

I am using Kubernetes with Docker Desktop on macOS Monterey.
I have a problem starting Kubernetes because a year has passed and my Kubernetes certificates have expired.
How can I renew them?
Error message:
Error: Kubernetes cluster unreachable: Get "https://kubernetes.docker.internal:6443/version": EOF
I tried to install kubeadm but I think it is only suitable if I use minikube.
Edit:
I am using a Mac with an M1 chip.

To update the certificates used by Docker Desktop's Kubernetes on macOS, you will need to generate a new set of certificates and keys and then point the Kubernetes configuration file at them. Create a certificate signing request (CSR) first, then use the CSR to issue the new certificates and keys. Once the new files are in place in the appropriate directory structure, update the Kubernetes configuration file to reference them. Finally, restart your Kubernetes cluster so the new certificates and keys take effect.
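A minimal sketch of that flow with openssl and kubectl, assuming you have access to the cluster's CA pair (ca.crt/ca.key); the file names, CN, and user name below are placeholders, not necessarily the exact ones Docker Desktop uses:
# Generate a new client key and a CSR (names/CN are placeholders)
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=docker-for-desktop" -out client.csr
# Sign the CSR with the cluster CA to get a fresh client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
# Point the kubeconfig user at the new key pair, then restart the cluster
kubectl config set-credentials docker-desktop --client-certificate=client.crt --client-key=client.key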
Using the minikube command-line tool: the first step in updating the certificates is to delete the existing cluster with the minikube delete command. After the cluster has been deleted, the minikube start command creates a new cluster with fresh certificates. Finally, save the cluster configuration file with the new certificates using the minikube get-kube-config command.
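As a sketch, the delete/recreate part of this approach boils down to the following; note that it discards all existing cluster state:
minikube delete
minikube start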
Check your Kubernetes version and, if you are on an older one, upgrade to the latest. The Kubernetes version can be upgraded after a Docker Desktop update; however, when a new Kubernetes version is added to Docker Desktop, you need to reset the current cluster in order to use it.

Related

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check if there is one and how to correlate its digest with the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to container imaging with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which Docker images your App Engine Flex instance uses, SSH into the instance. You can do that from the Instances tab by choosing the correct service and version and clicking the SSH button, or by running this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSH'd into your instance, run the docker images command to list your Docker images.
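From there, a rough way to correlate what is running with the published GTM image is to compare digests; this is only a sketch, and the image path is the one from the question:
# On the Flex instance: show local images with their digests
docker images --digests
# Locally or in Cloud Shell: list the digests/tags published for the public GTM image
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image --limit=10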

Problems setting up the OTA Community Edition workspace

I'm having trouble setting up my advancedtelematic/ota-community-edition workspace, found at https://github.com/advancedtelematic/ota-community-edition
I have installed all the applications listed (mostly via Chocolatey). When running make start with the Docker configuration on my Windows machine, I end up with the following error:
Can't open /proc/1204/fd/63 for reading, No such file or directory
20036:error:02001003:system library:fopen:No such process:../openssl-1.1.1k/crypto/bio/bss_file.c:69:fopen('/proc/1204/fd/63','r')
20036:error:2006D080:BIO routines:BIO_new_file:no such file:../openssl-1.1.1k/crypto/bio/bss_file.c:76:
make: *** [Makefile:34: start_start-all] Error 1
I added some logging, and it's happening when openssl req is used to generate keys inside the new_server() method.
The full log for the process is below:
make start
* The control plane node must be running for this command
To start a cluster, run: "minikube start"
* minikube v1.24.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
* Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.18.3 on Docker 20.10.8 ...
* Verifying Kubernetes components...
- Using image kubernetesui/dashboard:v2.3.1
- Using image kubernetesui/metrics-scraper:v1.0.7
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, dashboard, default-storageclass
! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.21.2, which may have incompatibilites with Kubernetes 1.18.3.
- Want kubectl v1.18.3? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
WARNING: version difference between client (1.21) and server (1.18) exceeds the supported minor version skew of +/-1
serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
daemonset.apps/weave-net configured
read EC key
writing EC key
read EC key
writing EC key
sending request to cert
Can't open /proc/1204/fd/63 for reading, No such file or directory
20036:error:02001003:system library:fopen:No such process:../openssl-1.1.1k/crypto/bio/bss_file.c:69:fopen('/proc/1204/fd/63','r')
20036:error:2006D080:BIO routines:BIO_new_file:no such file:../openssl-1.1.1k/crypto/bio/bss_file.c:76:
make: *** [Makefile:34: start_start-all] Error 1
You should use the new repo: https://github.com/uptane/ota-community-edition
The problem is that you are using Windows instead of Linux. Use Ubuntu 20.04, for example.
From messing around with this ota-community-edition solution, I think it's just broken all around: missing APIs, unknown APIs, and very bad usage instructions.
At some point the original creators of the repo were bought by another company, and from what it looks like, ota-community-edition is just not usable and not maintained.
Update:
For the device you should use https://github.com/advancedtelematic/aktualizr/tree/5336fd20bb59ebfcc4ef0285128dece7e0412867; newer versions are broken. It might be fixable (by you, by messing around somewhere): after you use the Download API call, newer versions will try to execute an event:8443 call, which will fail.
(https://github.com/advancedtelematic/libaktualizr-demo-app)
Use https://github.com/simao/ota-cli for server interaction.
You'll have to edit the .../ota-community-edition/templates/services .toml files of campaigner, director and registry to export the host; you can see app.tmpl.yaml for an example of how to do it.

What is the best alternative for docker-machine with digitalocean droplets

I've been using docker-machine to create and connect to DigitalOcean droplets. It was deprecated in August 2021, and with the latest macOS Monterey I started to get more errors. I can find some ways to work around them, but I don't want to spend more time on a deprecated library.
docker-machine was really helpful for creating DigitalOcean droplets. A single command created the droplet, installed Docker on it, and configured the local certs and machine folders:
docker-machine create --driver digitalocean --digitalocean-image ubuntu-20-04-x64 --digitalocean-size "s-2vcpu-2gb" --digitalocean-region=lon1 --digitalocean-access-token <acces-token> droplet-name
A couple of months ago I started adding this bit; otherwise it was not able to create the droplet:
--engine-install-url "https://releases.rancher.com/install-docker/19.03.14.sh"
After creating the droplet, connecting to it is as easy as eval $(docker-machine env droplet-name).
I would really like to know what other alternatives people use. According to DigitalOcean it requires a bunch of steps:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
There should be some other way to create and connect to remote droplets. Please share any methods you are using for Docker operations on remote hosts.
I found that using docker context is the easiest solution for those coming from docker-machine. I couldn't manage to run docker-machine after upgrading my operating system to macOS Monterey.
The closest docker-machine experience is creating a context with
docker context create remote --docker "host=ssh://user@remotemachine"
The remote host is named 'remote' here.
You can switch between hosts with docker context use remote.
The best part of using docker-machine or docker context is that I don't need to copy my docker-compose files to the remote host. It provides a local-terminal experience, which is very helpful when there are multiple hosts to deploy to.
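As a rough end-to-end sketch (the context name and SSH target are placeholders, and docker compose assumes the Compose v2 plugin):
docker context create remote --docker "host=ssh://user@remotemachine"
docker context use remote     # subsequent docker commands target the remote engine
docker compose up -d          # deploys the local compose file against the remote host
docker context use default    # switch back to the local engine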
For more information, you can see the official documentation below
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/

Adding docker cert to new K8s nodes

I am very new to Kubernetes and am working with an EKS cluster. I am trying to pull images, and I have added a cert to /etc/docker/certs.d// and am able to pull fine after logging in. However, when I create a deployment to deploy apps to my pods, it seems like I have to manually SSH into my EKS nodes and copy over the cert; otherwise I am left with an x509 certificate error. Additionally, if I terminate a node and new nodes are created, those new nodes obviously don't have the cert anymore, so I have to copy the cert over again. Is there a way to configure a Secret or ConfigMap so that new nodes will automatically have this cert? I know you can add a mount for a ConfigMap, but it seems like this only works for pods.
Also, what is the best way to replace these certs when they expire (e.g. when pulling images from ECR)?
You can use a Secret for pulling from the registry and store the cert at the Kubernetes level, but yes, you are right, that only works at the Pod level. There is no way to manage or inject it at the node level.
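For the Pod-level route, a minimal sketch of an image pull secret looks like this; the registry address, credentials, and secret name are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
# Reference it from the Pod/Deployment spec under spec.imagePullSecrets,
# or attach it to the namespace's default service account:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'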
The only option you are left with is to create a custom AMI and use it for the EKS node group, so that by default every node already has the cert when you scale up or down.
https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/
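What gets baked into that custom AMI (or run from node user data) is essentially the manual step from the question; a rough sketch, with the registry name as a placeholder:
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp ca.crt /etc/docker/certs.d/registry.example.com/ca.crt
sudo systemctl restart docker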

How to use SSL certificates for Kubernetes Pods with the client role, not server role

I have an application running in Kubernetes Pods. It needs to make HTTPS calls to an external service, and this external service requires an SSL certificate. I understand that without Kubernetes, the certs can be downloaded somewhere on a Linux machine and applications on the machine refer to those certs by path. But how do I install certs for a Kubernetes cluster? What's the best practice?
Here is some info that I've collected by searching online (correct me if I'm wrong), but I just couldn't find answers to my question:
- Lots of guides/blogs/documentation talk about signing certificates for other clients when providing services from Kubernetes, but my case is the other way around: calling external services with SSL certificates from Kubernetes.
- I thought of installing the SSL certificate up front on the cluster machines, but 1) can the installation be done automatically in a secured way? and 2) it looks like pods sharing files from the host machine is an anti-pattern and may not even be supported by Kubernetes, especially in Google Kubernetes Engine (we use GKE).
So... any insights or suggestions will be very much appreciated. Thanks!
I think your question is not about Kubernetes, but rather how to add certificates to the way you're packaging your container images (likely with Docker).
You can easily install CA certificates to your container images by running the following commands:
If you're using a debian/ubuntu-based image:
apt-get install -qqy ca-certificates
If you're on alpine-based images:
apk add --no-cache ca-certificates
These will make sure your container has up-to-date root CA certificates, which are used to verify the validity of TLS certificates presented by public websites/services.
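If the external service instead uses a private or self-signed CA rather than a public one, a common sketch on Debian/Ubuntu-based images is to add it to the system trust store (my-ca.crt is a placeholder):
cp my-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates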
There are 3 ways you can do it; choose whichever is most convenient:
1. A few lines of code change in the application - this puts the burden on the developer.
2. A few additional lines in the Dockerfile while building the container image - this puts the responsibility on either the developer or DevOps, depending on who owns container image creation.
3. (Best way) Only Kubernetes Deployment YAML/Helm chart changes while deploying the Pod, by putting the cert in a ConfigMap mapped to the Pod's CA root location - this is a one-time activity, puts the entire responsibility on DevOps, and automatically applies to all future apps. A sketch of this approach follows below.
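A minimal sketch of option 3 using only kubectl; the deployment, container, ConfigMap, and certificate names are placeholders, and the mount path assumes a Debian-style CA location:
# Put the CA certificate into a ConfigMap
kubectl create configmap ca-pemstore --from-file=my-ca.crt
# Mount it into the existing Deployment at the Pod's CA root location
kubectl patch deployment my-app --patch '
spec:
  template:
    spec:
      containers:
      - name: my-app
        volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-ca.crt
          subPath: my-ca.crt
          readOnly: true
      volumes:
      - name: ca-pemstore
        configMap:
          name: ca-pemstore
'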
Detailed steps for each of these, with pros and cons and examples, are here:
https://medium.com/@paraspatidar/add-self-signed-or-ca-root-certificate-in-kubernetes-pod-ca-root-certificate-store-cb7863cb3f87
