How can I mount /run/containerd/containerd.sock into a k8s pod?

I am using a DigitalOcean Kubernetes cluster with the Jenkins Helm chart running inside it. DigitalOcean uses containerd://1.4.13 as the container runtime, and to run containerized tests inside a Jenkins pipeline I need to download the sources from the repository, build a Dockerfile with unit tests and a docker-compose.yml with integration tests (in addition to the application there is also a database), and run it all. The problem is that I can't mount /run/containerd/containerd.sock from the node that runs this pod into the pod itself, because I couldn't find any functionality in Kubernetes similar to mounting a specific file into a container in Docker. How can I solve this problem while keeping the tests containerized? Thank you in advance!
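For reference, Kubernetes can bind-mount a single file or socket from the node through a hostPath volume; a minimal sketch, where the pod, container, and volume names are all hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: test-runner
spec:
  containers:
  - name: runner
    image: alpine:3.15
    command: ["sleep", "3600"]
    volumeMounts:
    - name: containerd-sock
      mountPath: /run/containerd/containerd.sock
  volumes:
  - name: containerd-sock
    hostPath:
      path: /run/containerd/containerd.sock
      type: Socket   # fail fast if the socket does not exist on the node

Bear in mind that the containerd socket speaks the containerd API rather than the Docker API, so docker and docker-compose clients will not talk to it directly.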

Related

Move Jenkins config from one Kubernetes cluster to another

I have inherited a Jenkins installation which is used by multiple remote teams and runs on an Amazon EKS cluster. For reasons that are not really relevant, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is bound to an Amazon EBS volume, and the persistence of the new deployment will be as well. The mount point will be /var/jenkins_home
I'm trying to find a simple way of migrating everything from the current jenkins installation and configuration to the new one. This includes mainly:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins Home. Could I, in theory, just dump out the current Jenkins Home folder and import it into the new running Jenkins container using kubectl cp or something like that? Is there an easier way? Is this unsafe in some way?
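A sketch of what that dump-and-import could look like with kubectl cp (the contexts, namespace, and pod names are hypothetical):

# Copy the old Jenkins home out of the source cluster
kubectl --context old-cluster -n jenkins cp jenkins-0:/var/jenkins_home ./jenkins_home

# Copy it into the pod in the new cluster, then restart Jenkins
kubectl --context new-cluster -n jenkins cp ./jenkins_home jenkins-0:/var/jenkins_home

kubectl cp streams everything through tar, so for a large home directory it can be slow; snapshotting the EBS volume and restoring it into the new cluster is a common alternative.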

Jenkins configuration using command line

I am trying to move the complete ecosystem of our SaaS product to Kubernetes (and use Docker containers).
I am supposed to provide a bash script which will set up everything. The only manual intervention should be setting up the Kubernetes cluster and mounting the Persistent Volumes.
We were using Jenkins for code deployment and cron jobs. I am able to create the Jenkins service, but I cannot find a way to configure it from the command line. I tried finding ways online but could not find any good documentation.
First, welcome to Kubernetes. Second, there are a lot of tools and templates out there; I would recommend you check out Helm.
This is the Jenkins chart, if you want to take a look:
https://github.com/helm/charts/tree/master/stable/jenkins
There is also a "fork" of Jenkins for containerized environments that I like; you can read more about Jenkins X here.
You can use the Helm package manager and simply install the stable Jenkins chart.
Before using Helm you have to set up Tiller on the Kubernetes cluster.
$ helm install --name my-release stable/jenkins
This installs the stable version of Jenkins using Helm.
https://github.com/helm/charts/tree/master/stable/jenkins
I can add that you can store the Jenkins home folder, as well as the plugins and artifacts folders, on a persistent volume and mount that volume into the Jenkins pod as part of the Helm installation. You can also make daily snapshots/backups of the Jenkins disk. This way the Jenkins deployment becomes very smooth, quick, and reliable.
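For example, the persistence block of a values.yaml for the stable/jenkins chart could look roughly like this (key names have changed between chart versions, so verify against the chart's own values.yaml; the storage class here is hypothetical):

# values.yaml, used with: helm install -f values.yaml --name my-release stable/jenkins
persistence:
  enabled: true
  size: 8Gi
  storageClass: standard    # hypothetical; use whatever your cluster offers
  existingClaim: ""         # or point this at a pre-created PVC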

How to update k8s pods in another namespace using Jenkins

I have created a cluster using minikube which has 2 namespaces, dev and infra. dev contains my UI and backend apps, while infra contains my Jenkins StatefulSet. I set up Jenkins and added the Kubernetes plugin (v 1.1.3). Now I want to create a Jenkins job so that I can redeploy the services in my dev namespace.
However, when my Jenkins job runs, I can see that it spins up a new pod in the infra namespace for the build, as expected, but this pod does not have access to a kubeconfig or the kubectl command. How do I promote builds in this case?
Here is my Kubernetes Cloud Configuration
And here is the console output of a sample job
The sample job above does nothing; I was just testing to make sure that it spins up a pod of its own every time it is run.
How can I use these Jenkins jobs now to redeploy my services/pods in the dev namespace?
"this pod does not have access to the kubeconfig or the kubectl command"
You need to use a Jenkins agent Docker image that has those commands.
You also need that agent pod to use a service account that has permission to access the dev namespace if you want to change things there.
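A sketch of that RBAC side, assuming the agent pod runs as a service account named jenkins-agent in the infra namespace (all names here are hypothetical):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-agent
  namespace: infra
---
# Allow deployments in dev to be read and updated
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
---
# Bind that role to the agent's service account from infra
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer
  namespace: dev
subjects:
- kind: ServiceAccount
  name: jenkins-agent
  namespace: infra
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io

With that in place, a build step can run something like kubectl -n dev set image deployment/ui ui=registry.example.com/ui:$BUILD_NUMBER to roll out a new image.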

Jenkins pipeline using docker on existing slaves

We have the following Jenkins setup:
Jenkins master
Jenkins Slave1
Jenkins Slave2
Jenkins Slave3
Those are all virtual machines, and the slaves always exist; they are not spun up and torn down automatically.
Now we have builds which need a lot of tools (Maven, Python, the AWS CLI, ...). We can install every tool on every slave and everything will work fine.
But we want to move to a Docker-based approach.
Nearly all the tutorials I've seen use slaves in Docker: they use some orchestration tool like Kubernetes, create slaves in Docker, do their work, and delete the pod again.
We don't have the possibility to do this.
Question: Is it a decent approach to use an 'old' Jenkins setup with real VM slaves on which we use Docker?
What I'm thinking about is writing a pipeline in which each stage uses a Docker container:
start the build (it will choose a slave, e.g. Slave1)
the pipeline will start
stage 1: spin up e.g. a Python container; git clone and execute Python commands. Mount a volume to the workspace??
stage 2: spin up e.g. an AWS container, mount the content of the workspace, and execute new commands etc.
Can someone evaluate this approach?
This is a very good approach. In fact, the way to do it is documented in the Jenkins docs under the "Using multiple containers" section.
In each stage you basically spin up a container with the necessary tools available, and you can use a volume to persist output from the stage into the workspace so that other stages can use it.
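A declarative Jenkinsfile sketch along the lines of that documentation section (the image names are illustrative, and reuseNode keeps every stage on the same slave and in the same workspace):

pipeline {
    agent any   // allocate a workspace on one of the static VM slaves
    stages {
        stage('Test') {
            agent {
                docker {
                    image 'python:3'
                    reuseNode true   // reuse the node and workspace chosen above
                }
            }
            steps {
                sh 'python -m pytest'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'amazon/aws-cli'
                    args '--entrypoint='   // this image has a fixed ENTRYPOINT
                    reuseNode true
                }
            }
            steps {
                sh 'aws --version'
            }
        }
    }
}

Because the same workspace is mounted into each container, whatever the Test stage checks out or produces is visible to the Deploy stage without extra copying.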

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, and Ansible that could be used, but my question really is: how do I integrate deployment into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools, like Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (then there is no blue-green :) ). If you just use docker run, you have to check for the running container yourself, stop and remove it if it is running, and start your container again with the newer image version.
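A sketch of both update paths (the service, container, and image names are hypothetical):

# Swarm path: a service update rolls the service over to the new image
docker service update --image registry.example.com/myapp:v2 myapp

# Plain `docker run` path: replace the running container by hand
docker pull registry.example.com/myapp:v2
docker stop myapp || true   # ignore the error if nothing is running
docker rm myapp || true
docker run -d --name myapp registry.example.com/myapp:v2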
It depends on how your application is configured to run. In my case, I have a call to docker run in a systemd unit that is configured to simply restart the container if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a docker pull (my Jenkins agent is running on the same box that the application runs on) and then a docker stop. That causes the application to exit and then restart, which makes it pick up the new version that was just pulled, so now it's running the new version.
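Those two steps as they might appear at the end of the Jenkinsfile (the image and container names are hypothetical):

# After the image is pushed: refresh the local copy, then bounce the container
docker pull registry.example.com/report-site:latest
docker stop report-site   # systemd restarts it with the freshly pulled image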
