While looking for a Kubernetes equivalent of the Watchtower container from my docker-compose setup, I stumbled upon Renovate. It seems to be a universal tool for updating Docker tags, dependencies and more.
They also have an example of how to run the service itself inside Kubernetes, and I found this blog post on how to set Renovate up to check Kubernetes manifests for updates.
Now the puzzle piece that I'm missing is some super basic working example that updates a single pod's image tag, and then figuring out how to deploy that in a Kubernetes cluster. I feel like there needs to be an example out there somewhere but I can't find it for the life of me.
To explain Watchtower:
It monitors all containers running in a Docker Compose setup and pulls new versions of their images as they become available, recreating the containers in the process.
I found one, Keel, which looks like Watchtower:
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
Alternatively, there is Diun:
Docker Image Update Notifier is a CLI application written in Go and delivered as a single executable (and a Docker image) to receive notifications when a Docker image is updated on a Docker registry.
The Kubernetes provider allows you to analyze the pods of your Kubernetes cluster to extract images found and check for updates on the registry.
I think there is some confusion regarding what Renovate does.
Renovate updates files inside Git repositories, not on the Kubernetes API server.
The Kubernetes manager, which you are probably referencing, updates K8s manifests, Helm charts and so on inside a Git repository.
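For the missing "super basic working example": a minimal sketch of a Renovate config that enables the Kubernetes manager (the fileMatch pattern is an assumption; the manager has no default and must be pointed at your manifest paths):

// renovate.json5: hypothetical path pattern, adjust to wherever your manifests live
{
  kubernetes: {
    fileMatch: ['^k8s/.+\\.ya?ml$'],
  },
}

With that in place, Renovate scans the matched manifests in the Git repository and opens pull requests that bump image tags (e.g. image: nginx:1.25.0 to a newer tag); the cluster itself only changes once the merged manifest is applied or deployed.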
Related
Quick question: do I need to run docker compose up on Airflow when I amend a secret with kubectl?
I've changed a password using the command line and kubectl in VS Code, and I just want to know whether it is necessary to run docker compose up now that it has been changed.
If you've installed your Airflow system using Helm charts directly on k8s, then you don't have to do anything. Secrets mounted as volumes are automatically refreshed inside pods by the kubelet (environment variables built from Secrets are not; those need a pod restart). And you don't have to manipulate Docker directly when you already have k8s installed and are interacting with it using kubectl. That's the whole point of having k8s.
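For illustration, a minimal sketch of the volume-mounted case the kubelet does refresh (all names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: airflow-web
spec:
  containers:
  - name: web
    image: apache/airflow:2.9.0
    volumeMounts:
    - name: db-creds
      mountPath: /etc/db-creds   # files here are rewritten when the Secret changes
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: airflow-db-password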
If you're using both, you shouldn't, really. Just interact with k8s and forget about Docker. You will almost never have to think about Docker unless you are debugging some serious problem with the k8s system itself.
Nah. Docker Compose has nothing to do with it. You probably just need to restart your Pods somehow. I always just do a "Redeploy" through our Rancher interface. I'm sure there is a way to do that with kubectl as well. You just need to get the secret into the Pods; the image itself is unchanged.
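There is; assuming the Pods belong to a Deployment (name hypothetical), kubectl can do the same "Redeploy":

# restart the Pods so they pick up the changed Secret; the image stays the same
kubectl rollout restart deployment/airflow-web
kubectl rollout status deployment/airflow-web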
I have Docker images hosted externally on Docker Hub which get updates every week.
Currently I do:
docker pull
update some config files inside the container
docker commit
docker push
then manually change the image tag in the Kubernetes Deployment YAML file.
What is the best practice for me to automate this? Can this be initiated from Kubernetes?
K8s doesn't support such functionality (yet, at least!),
but you can use GitOps tools like Flux to automate this procedure.
You could also use scheduled Jobs in k8s combined with bash or Python scripts to automate the task; see the sketch after the link below.
You should check out this post too:
Auto Update Container Image When New Build Released on Kubernetes
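A rough sketch of that scripted approach (registry path, image and Deployment names are all hypothetical; you would run it from a CronJob with a service account allowed to patch Deployments):

#!/bin/sh
# Hypothetical: look up the most recently pushed tag on Docker Hub...
LATEST=$(curl -s 'https://hub.docker.com/v2/repositories/myorg/my-app/tags?page_size=1' | jq -r '.results[0].name')
# ...and roll the Deployment to it; kubectl set image triggers a rolling update
kubectl set image deployment/my-app my-app="myorg/my-app:${LATEST}"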
How can I use Helm to deploy a docker image from my local file system?
What's the syntax in the chart for that?
I'm not looking to pull from the local docker daemon/repository, but from the file system proper.
file:///mumble/whatever as produced by docker build -o type=local,dest=mumble/whatever
The combined Helm/Kubernetes/Docker stack doesn't really work that way. You must build your image locally and push it to some sort of registry before you can run it (or be using a purely local environment like minikube). It's unusual to have "an image in the filesystem" at all.
Helm knows nothing about building images; for that matter, Helm on its own doesn't really know much about images at all. It knows how to apply the Go templating language to text files to (hopefully) produce YAML, and to send those to the Kubernetes cluster, but Helm has pretty limited awareness of what those mean. In a Helm chart you'd typically write something like
image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
but that doesn't "mean" anything to Helm.
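For context, those values come from the chart's values.yaml (or --set flags); a sketch with hypothetical keys matching the template above:

# values.yaml (hypothetical keys feeding the image: template above)
registry: docker.io/myorg
image: my-app
tag: "1.2.3"

Running helm install my-release ./chart --set tag=1.2.4 would render image: docker.io/myorg/my-app:1.2.4, but Helm never checks that such an image actually exists anywhere.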
Kubernetes does understand image: with its normal meaning, but again that generally refers to an image that lives in Docker Hub or a similar image registry, not something "on a filesystem". The important thing for Kubernetes is that it expects to have many nodes, so if an image isn't already on a node, it knows how to pull it from the registry. A very typical setup in fact is for nodes to be automatically created and destroyed (via the cluster autoscaler) and you'd almost never have anything of importance on the node outside of a pod.
This means, in practical use, you basically must have a container registry. You can use Docker Hub, or run one locally, or use something provided by your cloud provider. If you're using a desktop-oriented Kubernetes setup, both minikube and kind have instructions for running a registry as part of that setup.
Basically the one exception to this is that, in minikube, you can use minikube's Docker daemon directly by running
eval $(minikube docker-env)
docker build -t my-image .
This builds the image normally, keeping it "inside Docker space", except here it's inside the minikube VM.
A Docker image in a file isn't especially useful. About the only thing you can do with it is docker load it; even plain Docker without Kubernetes can't directly run it. If you happened to have the file, you could docker load it into minikube the same way we built the image above, or minikube image load it. If it's at all an option, though, using the standard Docker registry system will generally be easier than trying to replicate it with plain files.
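A sketch of that file-based round trip, with a hypothetical image name (note that docker save, not docker build -o, produces the kind of tarball minikube image load accepts):

docker build -t my-image:1.0 .
docker save my-image:1.0 -o my-image.tar
# copy the tarball into minikube's container runtime so Pods can reference my-image:1.0
minikube image load my-image.tar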
I have an image to which I need to add a dependency, so I tried to change the image while it was running in a container and create a new image from it.
I followed this article and ran the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the container's shell, I installed the dependency using pip install NAME_OF_DEPENDENCY.
Then I exited the container's shell, and as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit on Google Kubernetes Engine?
As of K8s version 1.8, there is no way to hot-fix changes directly into images, for example by committing a new image from a running container. If you change or add something by using exec, it will only last as long as the container is running. It's not best practice in the K8s ecosystem anyway.
The recommended way is to use a Dockerfile and customise the image according to your necessities and requirements. After that, you can push the image to a registry (public or private) and deploy it with a K8s manifest file.
Solution to your issue:
Create a Dockerfile for your image.
Build the image from the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the k8s cluster.
Now, if you want to change or modify something, you just need to change/modify the Dockerfile and repeat the remaining steps.
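A minimal sketch of that loop, reusing the names from the question (the registry path comes from the question; the Deployment name is an assumption):

# Dockerfile: bake the dependency in at build time instead of pip-installing into a live pod
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_DEPENDENCY

# build, push, and roll out the new tag (assumes my-app is managed by a Deployment)
docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2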
As you know, containers are short-lived creatures which do not persist changed behaviour (modified configuration, filesystem changes). Therefore, it's better to introduce new behaviour or modifications in the Dockerfile.
Kubernetes Mantra
Kubernetes is a Cloud Native product, which means it does not matter whether you are using Google Cloud, AWS or Azure; it needs to have consistent behaviour on each cloud provider.
People using Docker have probably used Dockerfiles as master templates for their containers.
Does Kubernetes allow re-use of existing Dockerfiles? Or will people need to port them to Kubernetes .yaml-style templates?
I'm not aware of tools for doing so, or of people who have tried this.
Dockerfiles and the Kubernetes resource manifests (the yaml files) are somewhat orthogonal. While you could pull some information from the Dockerfile to prepopulate the Kubernetes manifest, it'd only be able to fill in a very small subset of the options available.
You can think of Dockerfiles as describing what is packaged into your container image, while the Kubernetes manifests specify how your container image is deployed -- which ports are exposed, environment variables are added, volumes are mounted, services made available to it; how it should be scheduled, health checked, restarted; what its resource requirements are; etc.
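To make that split concrete, a hedged sketch (image and names are hypothetical); everything here except the image: line is deployment-time configuration that no Dockerfile could express:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myorg/my-app:1.0   # the only part the Dockerfile produced
        ports:
        - containerPort: 8080
        env:
        - name: LOG_LEVEL
          value: info
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080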
I think what you are referring to are your docker-compose files. Those are responsible for orchestrating your 'service'. If you have docker-compose files, there is a tool that can help convert them to k8s manifests.
https://github.com/kubernetes/kompose
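Typical usage is a sketch like this (the compose file name and output directory are assumptions):

# generate Kubernetes manifests from the compose file, then apply them
kompose convert -f docker-compose.yaml -o k8s/
kubectl apply -f k8s/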