Share a volume between agent blocks in Jenkins on Kubernetes

We have a Jenkins pipeline that looks roughly like this:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'default'
            yamlFile 'pipeline_agent.yaml'
        }
    }
    stages {
        stage('build backend') { ... }
        stage('build frontend') { ... }
        stage('combine to jar file') { ... }
        stage('test') { ... }
    }
}
Our pipeline_agent.yaml then contains containers for all the different stages: one for building with Gradle, one with Node for building the frontend, a separate custom one for testing, and so on.
This "works" and the output is great, but the issue is that we are provisioning a lot of resources that are not used for the entire duration of the pipeline.
For example, it would be great to release all unused containers and then increase the resource requests for the test stage.
But I am not entirely sure how to do that.
I could add an agent block to each stage, which would let me provision containers as needed, but then I don't think I can share data/build output between them.
Is there any "standard" or good way of doing this sort of dynamic provisioning?

I think the best way is to have an agent block in each stage, freeing up the containers after each stage completes. As for sharing data between the containers, you might want to use a shared persistent volume mounted in each of them. This can be achieved with the NFS server provisioner.
The NFS server provisioner creates a PersistentVolume (a local one should work in your case) and an NFS StorageClass. Your agent pods can then mount and share a volume provisioned from this StorageClass.
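For illustration, a per-stage layout could look roughly like the sketch below. It is only a sketch under assumptions not stated in the answer: a ReadWriteMany PersistentVolumeClaim named shared-workspace already exists (for example, provisioned from the NFS StorageClass), and the container images, resource requests, and shell commands are placeholders.
pipeline {
    agent none
    stages {
        stage('build backend') {
            agent {
                kubernetes {
                    yaml '''
                        apiVersion: v1
                        kind: Pod
                        spec:
                          containers:
                          - name: gradle
                            image: gradle:8-jdk17          # placeholder image
                            command: ['sleep', '9999999']
                            volumeMounts:
                            - name: shared
                              mountPath: /shared
                          volumes:
                          - name: shared
                            persistentVolumeClaim:
                              claimName: shared-workspace
                    '''
                    defaultContainer 'gradle'
                }
            }
            steps {
                sh './gradlew build && cp build/libs/*.jar /shared/'   // placeholder build command
            }
        }
        stage('test') {
            agent {
                kubernetes {
                    yaml '''
                        apiVersion: v1
                        kind: Pod
                        spec:
                          containers:
                          - name: test
                            image: my-test-image:latest    # placeholder image
                            command: ['sleep', '9999999']
                            resources:
                              requests:                    # larger requests only for this stage
                                cpu: '4'
                                memory: 8Gi
                            volumeMounts:
                            - name: shared
                              mountPath: /shared
                          volumes:
                          - name: shared
                            persistentVolumeClaim:
                              claimName: shared-workspace
                    '''
                    defaultContainer 'test'
                }
            }
            steps {
                sh 'run-tests /shared/*.jar'   // placeholder test command
            }
        }
    }
}
Each stage gets its own pod that is torn down when the stage finishes, only the test stage requests the larger resources, and the build output survives between stages on the shared NFS-backed volume.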

The best and simplest way to temporarily share a volume between containers in the same pod is to use an emptyDir volume:
Communicate Between Containers in the Same Pod Using a Shared Volume
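As a minimal sketch, an emptyDir volume declared in the agent pod spec (e.g. in the pipeline_agent.yaml above) can be mounted into every container, so the Gradle and Node containers in the same pod see the same files; the container names and images here are placeholders:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gradle
    image: gradle:8-jdk17          # placeholder image
    command: ['sleep', '9999999']
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: node
    image: node:20-alpine          # placeholder image
    command: ['sleep', '9999999']
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}
Note, too, that the kubernetes plugin already mounts the Jenkins workspace into every container of the agent pod, so files written to the workspace in one container are visible in the others; an emptyDir is mainly useful for sharing paths outside the workspace, and it disappears when the pod does.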

Related

I can't mix ECS and k8s slaves

My top level pipeline directive looks like this:
pipeline {
    agent {
        kubernetes {
            defaultContainer "mycontainer"
            yaml 'my full podspec'
        }
    }
Then later in a stage I want to use ECS like this:
    stage('my stage') {
        agent { node { label 'my-ecs-cluster'; customWorkspace 'myworkspace'; } }
I get this error:
Also: java.lang.ClassCastException: com.cloudbees.jenkins.plugins.amazonecs.ECSSlave cannot be cast to org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave
If I put the ECS agent in the top-level pipeline directive and then switch to k8s slaves in subsequent stages, I don't get an error, so this is only an issue one way. Very frustrating. I only have a few stages that need the ECS slave, so having to define it as the default in the pipeline definition is a pain.
The bigger issue is that if I define the ECS slaves in the pipeline definition, they are provisioned every time, and in some cases I don't want them provisioned at all because they aren't needed.
To add more info, I don't think setting agent any or agent none will work either, because when I specify the agent in multiple stages, each stage uses a different container. I want to be able to set the ECS agent at the top-level pipeline when required, because then all the stages run in the same container and use the same workspace.
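For reference, the agent none / per-stage layout discussed above would look roughly like the sketch below, assembled from the snippets in the post; as noted, each stage then gets its own agent and workspace, which is exactly the limitation being described.
pipeline {
    agent none
    stages {
        stage('k8s stage') {
            agent {
                kubernetes {
                    defaultContainer "mycontainer"
                    yaml 'my full podspec'
                }
            }
            steps { sh 'echo running on a k8s slave' }
        }
        stage('my stage') {
            agent { node { label 'my-ecs-cluster'; customWorkspace 'myworkspace' } }
            steps { sh 'echo running on an ECS slave' }
        }
    }
}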

Building Docker images on Jenkins to use in next stage

Using the kubernetes-plugin how does one build an image in a prior stage for use in a subsequent stage?
Looking at the podTemplate API it feels like I have to declare all my containers and images up front.
In semi-pseudo code, this is what I'm trying to achieve.
pod {
    container('image1') {
        stage1 {
            $ pull/build/push 'image2'
        }
    }
    container('image2') {
        stage2 {
            $ do things
        }
    }
}
The Jenkins Kubernetes Pipeline Plugin initializes all slave pods during pipeline startup. This also means that all container images used within the pipeline need to be available in some registry at that point. Perhaps you can give us more context about what you are trying to achieve; maybe there are other solutions to your problem.
There are certainly ways to dynamically create a pod from a build container and connect it as a slave at build time, but I already feel that this approach is not solid and will bring complications.
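One workaround that stays within this limitation is to give the later stage its own per-stage agent, so its pod is only created after the image has been built and pushed by an earlier stage. A rough sketch, using Kaniko purely as an example of an in-cluster image builder (any build-and-push mechanism works); the registry, image names, and commands are placeholders, and the registry credentials Kaniko needs are omitted:
pipeline {
    agent none
    stages {
        stage('build image2') {
            agent {
                kubernetes {
                    yaml '''
                        apiVersion: v1
                        kind: Pod
                        spec:
                          containers:
                          - name: kaniko
                            image: gcr.io/kaniko-project/executor:debug
                            command: ['/busybox/cat']
                            tty: true
                    '''
                    defaultContainer 'kaniko'
                }
            }
            steps {
                // Build image2 from the checked-out Dockerfile and push it to a registry
                // so that the next stage's pod can pull it.
                sh '/kaniko/executor --context . --destination my-registry/image2:latest'
            }
        }
        stage('use image2') {
            agent {
                kubernetes {
                    yaml '''
                        apiVersion: v1
                        kind: Pod
                        spec:
                          containers:
                          - name: image2
                            image: my-registry/image2:latest
                            command: ['sleep', '9999999']
                    '''
                    defaultContainer 'image2'
                }
            }
            steps {
                sh 'echo do things'   // placeholder
            }
        }
    }
}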

How to run unit tests in Jenkins in separate Docker containers?

In our codebase, we have multiple neural networks (classification, object detection, etc.) for which we have written some unit tests which we want to run in Jenkins at some specified point (the specific point is not relevant, e.g. whenever we merge some feature branch in the master branch).
The issue is that, due to external constraints, each neural net needs a different version of keras/tensorflow and a few other packages, so we cannot run them all in the same Jenkins environment. The obvious solution to this is Docker containers (we have specialized Docker images for each one), and ideally we would want to tell Jenkins to execute each unit test in a Docker container that we specify beforehand.
Does anyone know how to do that with Jenkins? I searched online, but the solutions I have found seem a bit hacky to me.
It looks like a candidate for Jenkins pipelines, especially docker agents
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
This lets the actual Jenkins agent spin up a Docker container to do the work. You say you already have images, so you're most of the way there.
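Adapted to the question, each test suite could get its own stage with its own specialized image; the image names and test commands below are placeholders:
pipeline {
    agent none
    stages {
        stage('Classification tests') {
            agent {
                docker { image 'my-registry/classification-tests:latest' }   // placeholder image
            }
            steps {
                sh 'python -m pytest tests/classification'   // placeholder test command
            }
        }
        stage('Object detection tests') {
            agent {
                docker { image 'my-registry/detection-tests:latest' }   // placeholder image
            }
            steps {
                sh 'python -m pytest tests/detection'   // placeholder test command
            }
        }
    }
}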

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a docker image from a private registry. I would like to be able to load the docker specific information into the pipelines from a file, so that I don’t have to modify all of my Jenkinsfiles when the docker label or credentials change. I attempted to do this using the example jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node, dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but scripted pipeline is meant either to abstract some logic into a shared library, or to give you a way to be a "hard core master ninja" and write your own fully custom pipeline using scripted pipeline and none of the declarative sugar.
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off installing a plugin and using agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline only needs a label to get the agent it requires.
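If the goal is just to avoid repeating the registry details in every Jenkinsfile, the shared-library route mentioned above is usually the cleanest fit. A minimal sketch, assuming a configured shared library (called my-shared-lib here) and the Docker Pipeline plugin; the registry URL, credentials ID, image name, and label are placeholders:
// vars/fetchImage.groovy in the shared library
def call(String image = 'registry-url/image:latest') {
    // docker.withRegistry and docker.image come from the Docker Pipeline plugin
    docker.withRegistry('https://registry-url', 'registry-credentials-id') {
        docker.image(image).pull()
    }
}
// Jenkinsfile
@Library('my-shared-lib') _
pipeline {
    agent { label 'target_node' }   // a node must be allocated before the docker steps run
    options { timestamps() }
    stages {
        stage('Image fetch') {
            steps {
                script { fetchImage() }
            }
        }
    }
}
Changing the registry or credentials then means editing one file in the library instead of every Jenkinsfile.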

Jenkins on Kubernetes Load Testing

We are using Jenkins slaves in containers and have Kubernetes as our orchestrator; the Jenkins master is on a standalone instance. For 5-6 parallel builds the setup works just fine. However, we want to do some load testing to check how many parallel builds, i.e. how many containers, we can spin up in this setup.
Is there any tool out there for such testing? Any recommended way?
There is no dedicated tool for this. Kubernetes spawns pods until resources are depleted; when there are not enough resources, Kubernetes waits for resources to be freed.
So you can try increasing the number of parallel builds until all resources are used, and monitor resource usage in your monitoring system.
I used the Job DSL plugin for my use case. It creates as many jobs as you want, based on the repo you give it, and queues a build for each one once it is created.
// Seed job script: creates 101 pipeline jobs pointing at the same repo
// and queues a build for each of them.
for (int i = 0; i < 101; i++) {
    createJob(i)
}

def createJob(int i) {
    pipelineJob("PerfTest-${i}") {
        def repo = 'https://<bitbucket-url>'
        description("Pipeline for $repo")
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url(repo)
                            credentials('mycreds')
                        }
                        branches('refs/heads/perfTest')
                        scriptPath('Jenkinsfile')
                        extensions { }
                    }
                }
            }
        }
        queue("PerfTest-${i}")
    }
}
