Automate docker pull, commit and Kubernetes deployment update

I have Docker images hosted externally on Docker Hub that get updated every week.
Currently I do the following by hand:
docker pull
update some config files inside the container
docker commit
docker push
Then I manually change the image name in the Kubernetes deployment YAML file.
What is the best practice for automating this? Can it be initiated from Kubernetes?

Kubernetes doesn't support such functionality natively (yet, at least!),
but you can use GitOps tools like Flux to automate this procedure.
You could also use Kubernetes scheduled jobs (CronJobs) combined with bash or Python scripts to automate the task.
You'd better check out this post too:
Auto Update Container Image When New Build Released on Kubernetes
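The manual last step in the question (editing the deployment YAML by hand) is the easiest part to script. As a minimal sketch, assuming a manifest file deployment.yaml and an image myrepo/myapp (both hypothetical names), the end of a weekly pull/commit/push script could rewrite the tag and re-apply the manifest:

```shell
#!/bin/sh
# Sketch only -- file and image names (deployment.yaml, myrepo/myapp) are placeholders.
NEW_TAG="week42"

# In practice this manifest already exists in your repo; it is created here
# only so the snippet is self-contained.
cat > deployment.yaml <<'EOF'
spec:
  containers:
  - name: myapp
    image: myrepo/myapp:week41
EOF

# Point the deployment at the freshly pushed tag.
sed -i.bak "s|image: myrepo/myapp:.*|image: myrepo/myapp:${NEW_TAG}|" deployment.yaml
grep 'image:' deployment.yaml

# Then apply it (requires cluster access):
# kubectl apply -f deployment.yaml
# or skip the file edit entirely and patch the live deployment:
# kubectl set image deployment/myapp myapp=myrepo/myapp:${NEW_TAG}
```

If the manifest lives in Git, a GitOps tool like Flux can watch the registry and make this commit for you instead of a cron script.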

Related

How to build a docker image inside Airflow

I want to deploy an application to Airflow that accepts a config file as a parameter, pulls the git repository specified by said config, then transforms it into a Docker image, then uploads that image to GCP's Artifact Registry. What is the best practice for building a docker image inside an Airflow DAG?
I have tried orchestrating a manually-triggered cloud build run via Airflow - I have not been able to pass the necessary substitutions into the cloudbuild.yaml file using the CloudBuildCreateBuildOperator, nor have I been able to specify the workspace.
I have also created a docker image that itself can create new docker images (when the docker.sock file is mounted as a volume). However, using a KubernetesPodOperator to call this seems to go against the design philosophy of Airflow, since this task would be affecting the host machine by building new docker images directly on it.
It's not the responsibility of Airflow to handle this kind of use case.
Airflow is a pipeline and task orchestrator based on DAGs (directed acyclic graphs).
Your need corresponds to usual CI/CD pipelines.
It's better to delegate this work to a tool like Cloud Build or GitLab CI, for example.
From Cloud Build, you can apply and automate all the actions specified in your question.
Once your image is built in the CI/CD part, your Airflow DAG can then use it, if needed, with the KubernetesPodOperator.
This is more coherent, because each concern is put in the right place and handled by the right tool.
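As a sketch of the Cloud Build side (project, repository and image names below are all placeholders), a cloudbuild.yaml along these lines builds and pushes the image to Artifact Registry, with a user-defined substitution that can be overridden per trigger or per run:

```yaml
# cloudbuild.yaml -- all names are hypothetical
substitutions:
  _IMAGE: "europe-docker.pkg.dev/my-project/my-repo/my-app"
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "${_IMAGE}:${SHORT_SHA}", "."]
images:
- "${_IMAGE}:${SHORT_SHA}"
```

SHORT_SHA is one of Cloud Build's built-in substitutions; user-defined ones must start with an underscore, which is also what you would override from the Airflow operator or the gcloud CLI.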

Test Docker image on Kubernetes, without Docker Daemon

I am new in containers, so I will try to explain my issue as detailed as I can.
I run a Jenkins flow on a Kubernetes agent that builds a Docker image and pushes it to a repository. I want to modify the Jenkins flow so that the image is tested (some functional tests) before being pushed to the repository. I found this project on GitHub https://github.com/GoogleContainerTools/container-structure-test that is convenient for testing, but unfortunately it requires a Docker daemon, which is not available on my Kubernetes agent.
Has anyone tried this before? Or does anyone know any workaround? Thanks!
I tried to include a docker container in the pod I use for the Kubernetes agent, create a separate testing file and use this container to run the tests for the image (without the use of the Github project). However, the absence of Docker Daemon is the problem in this case as well.
For running containers inside Jenkins on Kubernetes agents, you can either use a Jenkins Docker-in-Docker agent or a Jenkins Podman agent, which is a daemonless Docker alternative with the same CLI.
Then, encapsulate the tests in a container image and run them inside either of the above agents.
Disclaimer: I wrote the above posts.
Also note that there's an option not to use a Docker daemon for the project you mentioned: use its tar driver instead.
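If I read the project's docs correctly, the tar driver runs the tests against a saved image tarball (e.g. produced with docker save, or by a daemonless builder like kaniko via --tarPath) instead of talking to a daemon. A minimal, hypothetical test config might look like this:

```yaml
# tests.yaml -- minimal container-structure-test config; the path below is a placeholder
schemaVersion: "2.0.0"
fileExistenceTests:
- name: "app binary present"
  path: "/usr/local/bin/app"
  shouldExist: true
```

The invocation would then be along the lines of `container-structure-test test --driver tar --image image.tar --config tests.yaml`, with image.tar being the saved tarball of the image under test.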

Using Renovate in Kubernetes like Docker-Compose's Watchtower

While looking for a kubernetes equivalent of the docker-compose watchtower container, I stumbled upon renovate. It seems to be a universal tool to update docker tags, dependencies and more.
They also have an example of how to run the service itself inside Kubernetes, and I found this blog post on how to set Renovate up to check Kubernetes manifests for updates (?).
Now the puzzle piece that I'm missing is some super basic working example that updates a single pod's image tag, and then figuring out how to deploy that in a kubernetes cluster. I feel like there needs to be an example out there somewhere but I can't find it for the life of me.
To explain watchtower:
It monitors all containers running in a docker compose setup and pulls new versions of images once they are available, updating the containers in the process.
I found keel, which looks like watchtower:
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
Alternatively, there is Diun:
Docker Image Update Notifier is a CLI application written in Go and delivered as a single executable (and a Docker image) to receive notifications when a Docker image is updated on a Docker registry.
The Kubernetes provider allows you to analyze the pods of your Kubernetes cluster to extract the images found and check for updates on the registry.
I think there is some confusion regarding what Renovate does.
Renovate updates files inside Git repositories, not on the Kubernetes API server.
The Kubernetes manager, which you are probably referencing, updates Kubernetes manifests, Helm charts and so on inside a Git repository.
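For a concrete starting point (the file path pattern below is a placeholder): enabling Renovate's Kubernetes manager is a matter of pointing it at the manifest files in the Git repository, e.g. in renovate.json:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "kubernetes": {
    "fileMatch": ["^k8s/.+\\.yaml$"]
  }
}
```

Renovate then opens merge/pull requests that bump the image tags inside those manifests; something else (Flux, Argo CD, or a CD pipeline) still has to apply the merged changes to the cluster.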

How to start a Docker container from Jenkins?

I am trying to start a container using Jenkins and a Dockerfile in my SCM.
Jenkins uses the Dockerfile from my SCM repository and builds the image on a remote server. This is done using the "CloudBees Docker Build and Publish" plugin.
When I SSH into the server, I see that the image has been built with the tags I had defined in Jenkins:
# docker image ls
What I am not able to do is run a container for the image that has been built. How do I get the image ID and start the container? Shouldn't this be very simple, given that so many plugins are provided?
Could your problem be related to how to refer to the recently created image in order to run it? Can you provide an extract of your pipeline and how you are trying to achieve this?
If that is the case, there are different solutions, one being to specify a tag during the image build, so you can then refer to it to run the container.
As for how to work with image IDs: the docker build process returns the ID of the image it creates. You can capture that ID and then use it to run the container.
Start the container yourself on the VM by using a standard docker run command.
Use software like watchtower to restart the container with an updated version when one becomes available.
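To make the "capture the image ID" suggestion concrete, here is a minimal sketch of a Jenkins shell step (image name and tag are placeholders, and it requires the Docker CLI and daemon on the agent):

```shell
# docker build -q prints only the ID of the final image
IMAGE_ID=$(docker build -q -t myrepo/myapp:build-42 .)
# Run a container from exactly that image
docker run -d --name myapp "$IMAGE_ID"
```

Using the explicit tag (myrepo/myapp:build-42) instead of the raw ID works just as well and is easier to read in logs.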

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project, with its sources in GitLab.
I created a GitLab CI/CD pipeline to build a Docker image and put the image into the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after pushing the image to the GitLab Docker registry?
*By "update" I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; a lot of open-source options are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we hit the hook URL over HTTP, and the daemon pulls the updated image. One disadvantage of this is that you need a daemon to watch your hook task, so that it doesn't crash or go down. My suggestion is therefore to run this hook task as a Docker container with its restart policy set to always.
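As a sketch of the triggering side (the hook URL, job and stage names are all hypothetical), the GitLab pipeline would just hit the hook after the build stage:

```yaml
# .gitlab-ci.yml fragment -- the hook URL is a placeholder
notify-prod:
  stage: deploy
  script:
    - curl -fsS -X POST "https://prod.example.com/hooks/pull-latest"
  only:
    - main
```

The daemon behind that URL then performs the docker-compose down / pull / up sequence on the production host, keeping registry credentials and host access out of the pipeline itself.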
