Parallel Jenkins agents on Kubernetes with the Kubernetes plugin - Jenkins

I'm using Jenkins version 2.190.2 and Kubernetes plugin 1.19.0.
I have this Jenkins running as master on a Kubernetes cluster on AWS.
This Jenkins has the Kubernetes plugin configured and it's running fine.
I have some pod templates and containers configured and running.
I'm able to run declarative pipelines specifying the agent and container.
My problem is that I'm unable to run jobs in parallel.
When more than one job is executed at the same time, the first job starts, its pod is created and it executes its steps. The second job waits for the first job to end, even if they use different agents.
EXAMPLE:
Pipeline 1
pipeline {
    agent { label "bash" }
    stages {
        stage('init') {
            steps {
                container('bash') {
                    echo 'bash'
                    sleep 300
                }
            }
        }
    }
}
Pipeline 2
pipeline {
    agent { label "bash2" }
    stages {
        stage('init') {
            steps {
                container('bash2') {
                    echo 'bash2'
                    sleep 300
                }
            }
        }
    }
}
This is the org.csanchez.jenkins.plugins.kubernetes log. I've uploaded it to WeTransfer -> we.tl/t-ZiSbftKZrK
I've read a lot about this problem and I've configured Jenkins to start with these JAVA_OPTS, but the problem is not solved:
-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
The Kubernetes plugin is configured with:
Kubernetes cloud / Concurrency Limit = 50. I've also tried leaving it empty, but the problem still occurs.
Kubernetes cloud / Pod retention = Never
Pod template / Concurrency Limit = empty. I've also tried setting it to 10, but the problem still occurs.
Pod template / Pod retention = Default
What configuration am I missing, or what am I doing wrong?

Finally I solved my problem; it turned out to be caused by another issue.
We started to get errors when creating regular pods because our Kubernetes nodes on AWS didn't have enough free IPs. Once we scaled up our nodes, Jenkins pipelines were able to run in parallel with different pods and containers.

Your pods are created in parallel:
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash-4wjrk
...
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash2-3rxck
but your bash2 pod is failing with
Caused by: java.net.UnknownHostException: jenkins-jnlp.default.svc.cluster.local

You should use Parallel Stages, which are described in the Jenkins documentation for pipeline syntax.
Stages in Declarative Pipeline may declare a number of nested stages within a parallel block, which will be executed in parallel. Note that a stage must have one and only one of steps, stages, or parallel. The nested stages cannot contain further parallel stages themselves, but otherwise behave the same as any other stage, including a list of sequential stages within stages. Any stage containing parallel cannot contain agent or tools, since those are not relevant without steps.
In addition, you can force all of your parallel stages to be aborted when one of them fails by adding failFast true to the stage containing the parallel block. Another option is to apply failFast to the whole pipeline by adding an option to the pipeline definition: parallelsAlwaysFailFast()
An example pipeline might look like this:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Run pod') {
            parallel {
                stage('bash') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
                stage('bash2') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
            }
        }
    }
}
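If you also want the failFast behaviour mentioned above, here is a minimal sketch of the same example with the option added (parallelsAlwaysFailFast() applies it to every parallel block in the pipeline; the commented failFast true form scopes it to a single stage):
pipeline {
    agent none
    options {
        // abort every remaining parallel branch as soon as one branch fails
        parallelsAlwaysFailFast()
    }
    stages {
        stage('Run pod') {
            // alternative to the option above, scoped to this stage only:
            // failFast true
            parallel {
                stage('bash') {
                    agent { label "init" }
                    steps {
                        container('bash') {
                            echo 'bash'
                        }
                    }
                }
                stage('bash2') {
                    agent { label "init" }
                    steps {
                        container('bash') {
                            echo 'bash2'
                        }
                    }
                }
            }
        }
    }
}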

Related

Building Jenkins job through slave machine which has more disk space

I have a main Jenkins pipeline job which calls multiple other sub-jobs during the build.
I also have 2 Jenkins slave machines, where slave1 has 100GB of space left and slave2 has 30GB left.
But during the build Jenkins uses slave2 instead of slave1, which has more space than slave2.
How can I configure Jenkins so that it will use the slave machine which has more space?
In the Jenkins pipeline you can specify where you want your job to run, like below:
Scripted Pipeline
node('labelName') {
    stage('...') {
        ...
    }
}
Declarative Pipeline
pipeline {
    agent {
        label 'agentLabelName'
    }
    stages {
        stage('...') {
            steps {
                .....
            }
        }
    }
}
More information can be found here.

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline which should execute two stages in a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start, because "Waiting for next available executor" suddenly appears in the logs. Evidently the node is waiting for itself, and I cannot use another machine. It looks like Docker is blocking something and Jenkins thinks the node is busy, so it waits forever.
I'm looking for a solution that will allow me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it has only been available since version 1.3 of the Declarative Pipeline plugin, and my installation has 1.2.9.
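For what it's worth, a minimal sketch of what that nested-stages approach could look like after upgrading; this assumes the sequential-stages feature (Declarative Pipeline 1.3+) plus the docker agent's reuseNode option, so treat it as untested:
pipeline {
    // allocate jenkins-worker-1 once for the whole run
    agent { label 'jenkins-worker-1' }
    stages {
        stage('Inside container') {
            // this stage-level docker agent applies to the nested stages below
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    // reuse the already-allocated node instead of waiting for a second executor
                    reuseNode true
                }
            }
            stages {
                stage('Test') {
                    steps {
                        sh 'mvn test'
                    }
                }
                stage('Package') {
                    steps {
                        sh 'mvn package'
                        sh 'mv target workdir/'
                    }
                }
            }
        }
        stage('Upload') {
            // runs on jenkins-worker-1 directly, outside the container
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}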

Jenkins declarative pipeline: input with conditional steps without blocking the executor

I'm trying to get the following features to work in Jenkins' Declarative Pipeline syntax:
Conditional execution of certain stages only on the master branch
input to ask for user confirmation to deploy to a staging environment
While waiting for confirmation, it doesn't block an executor
Here's what I've ended up with:
pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when {
                branch 'master'
            }
            steps {
                echo "build & push docker image"
            }
        }
        stage('3. Deploy to stage') {
            when {
                branch 'master'
            }
            input {
                message "Deploy to stage?"
                ok "Deploy"
            }
            agent any
            steps {
                echo 'Deploy to stage'
            }
        }
    }
}
The problem is that stage 2 needs the output from stage 1, but this is not available when it runs. If I replace the various agent directives with a global agent any, then the output is available, but the executor is blocked while waiting for user input at stage 3. And if I try to combine stages 1 and 2 into a single stage, then I lose the ability to conditionally run some steps only on master.
Is there any way to achieve all of the behaviour I'm looking for?
You need to use the stash command at the end of your first stage and then unstash the files when you need them.
I think these are available in the Snippet Generator.
As per the documentation:
Saves a set of files for use later in the same build, generally on another node/workspace. Stashed files are not otherwise available and are generally discarded at the end of the build. Note that the stash and unstash steps are designed for use with small files. For large data transfers, use the External Workspace Manager plugin, or use an external repository manager such as Nexus or Artifactory.
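A minimal sketch of that stash/unstash approach applied to the pipeline above; the stash name and includes pattern are illustrative and depend on what stage 1 actually produces:
pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
                // save the build output so later stages can retrieve it on another node
                stash name: 'build-output', includes: 'target/**'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when {
                branch 'master'
            }
            steps {
                // restore the stashed files into this stage's workspace
                unstash 'build-output'
                echo "build & push docker image"
            }
        }
    }
}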

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices for a Jenkins pipeline.
I have created a declarative pipeline which does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block. Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins' distributed builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or do I have to use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that is assigned to 4 agents. My whole pipeline is always executed on one agent (which is what I want), but I would have expected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds, which are activated by using either the agent or the node block.
agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step with node pointing at the same agent, there won't be any effect except that a new executor will be allocated to the step encapsulated with node.
There is no obvious benefit in doing so; you would simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
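As an illustration, here is a minimal sketch of the declarative alternative, using a stage-level agent instead of a node block to send a single stage to a different machine (the other-label label is a placeholder):
pipeline {
    // the whole pipeline runs on a single node carrying this label
    agent { label 'xxx' }
    stages {
        stage('stage1') {
            steps {
                echo 'runs on the pipeline-level agent'
            }
        }
        stage('stage2') {
            // only this stage is dispatched to another agent
            agent { label 'other-label' }
            steps {
                echo 'runs on whichever node matches other-label'
            }
        }
    }
}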

Jenkins workflow job: limit where it can run

I have two Jenkins workflow jobs that start the same job with different parameters, namely the branch they build. The latter job builds the project on several platforms. The "head" job, that is, the workflow job, may start on different machines. Also, there are two Linux machines in the setup.
Sometimes it happens that one of the workflow jobs (say, master) starts on one of the Linux machines and the other starts on the other. Both of them have to build a target on a Linux machine, and since both machines are busy, both jobs stall.
With regular jobs one can limit where they can run; however, I couldn't find how to limit where a workflow job can run. Obviously it should be done using the Groovy script, but how exactly escapes me.
Is there a solution to that?
Here's a Jenkinsfile that does it globally (this tells Jenkins the entire pipeline must run on a slave with these three labels):
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'building stuff'
            }
        }
    }
}
You can also select a certain slave or certain capabilities via the node step, for any stage or part of a stage:
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                // this overrides the top-level agent requirements
                node('linux_with_zsh') {
                    echo 'building stuff'
                }
            }
        }
    }
}
