Building a Jenkins job on the slave machine which has more disk space

I have a main Jenkins pipeline job which calls multiple other sub-jobs during the build.
I also have 2 Jenkins slave machines: slave1 has 100GB of disk space left, while slave2 has 30GB left.
But during the build Jenkins uses slave2 instead of slave1, even though slave1 has more space.
How do I configure Jenkins so that it uses the slave machine with more space?

In the Jenkins pipeline you can specify where you want to run your job, like below:
Scripted Pipeline
node('labelName') {
    stage('...') {
        ...
    }
}
Declarative Pipeline
pipeline {
    agent {
        label 'agentLabelName'
    }
    stages {
        stage('...') {
            steps {
                ...
            }
        }
    }
}
More information can be found here.
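Note that Jenkins does not route builds by free disk space on its own; labels are the mechanism for steering work. As a sketch for the question above (the label name 'big-disk' is an assumption chosen for illustration), you could assign such a label to slave1 in its node configuration and pin the pipeline to it:

```groovy
pipeline {
    // 'big-disk' is a hypothetical label assigned to slave1 under
    // Manage Jenkins -> Nodes -> slave1 -> Labels
    agent { label 'big-disk' }
    stages {
        stage('build') {
            steps {
                // runs only on nodes carrying the 'big-disk' label
                echo 'building on the node with more disk space'
            }
        }
    }
}
```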

Related

Jenkins declarative pipeline, run the same job on multiple agents

I have three slave nodes and they all have the label "general-slave".
Right now this pipeline selects one slave at random and runs the job:
pipeline {
    agent { label 'general-slave' }
    stages {
        ...
    }
}
How do I run the job on all three slaves? The stages are long, so I am trying to avoid repeating them.
This might sound straightforward, but I can't seem to find a good answer. Thanks!
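One common approach to this is a scripted pipeline that builds a map of branches, one per node, and hands it to the parallel step. The sketch below assumes the node names 'slave-a', 'slave-b' and 'slave-c'; in practice you would substitute your own agents:

```groovy
// Scripted Pipeline sketch: run the same stage logic on every node.
// Node names are placeholders for your actual agents.
def nodes = ['slave-a', 'slave-b', 'slave-c']
def branches = [:]
for (n in nodes) {
    def nodeName = n  // capture the loop variable for the closure
    branches[nodeName] = {
        node(nodeName) {
            stage("build on ${nodeName}") {
                // shared stage logic goes here, written once
                echo "running on ${nodeName}"
            }
        }
    }
}
parallel branches
```

Because the stage body lives in one closure, the long stages are not repeated per slave.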

Parallel jenkins agents at kubernetes with kubernetes plugin

I'm using Jenkins version 2.190.2 and Kubernetes plugin 1.19.0.
This Jenkins runs as master in a Kubernetes cluster on AWS.
The Kubernetes plugin is configured and running fine: I have some pod templates and containers configured, and I'm able to run declarative pipelines specifying an agent and a container.
My problem is that I'm unable to run jobs in parallel.
When more than one job is executed at the same time, the first job starts, its pod is created and it runs; the second job waits for the first job to end, even though they use different agents.
EXAMPLE:
Pipeline 1
pipeline {
    agent { label "bash" }
    stages {
        stage('init') {
            steps {
                container('bash') {
                    echo 'bash'
                    sleep 300
                }
            }
        }
    }
}
Pipeline 2
pipeline {
    agent { label "bash2" }
    stages {
        stage('init') {
            steps {
                container('bash2') {
                    echo 'bash2'
                    sleep 300
                }
            }
        }
    }
}
This is the org.csanchez.jenkins.plugins.kubernetes log. I've uploaded to wetransfer -> we.tl/t-ZiSbftKZrK
I've read a lot about this problem and I've configured Jenkins to start with these JAVA_OPTS, but the problem is not solved:
-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
The Kubernetes plugin is configured with:
Kubernetes cloud / Concurrency Limit = 50. I've also tried leaving it empty, but the problem still occurs.
Kubernetes cloud / Pod retention = Never
Pod template / Concurrency Limit left empty. I've also tried 10, but the problem still occurs.
Pod template / Pod retention = Default
What configuration am I missing, or what am I doing wrong?
I finally solved my problem, which turned out to be caused by another issue.
We started getting errors creating ordinary pods because our Kubernetes nodes at AWS didn't have enough free IPs. Once we scaled our nodes, Jenkins pipelines could run in parallel with different pods and containers.
Your pods are created in parallel:
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash-4wjrk
...
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash2-3rxck
but your bash2 pod is failing with:
Caused by: java.net.UnknownHostException: jenkins-jnlp.default.svc.cluster.local
You should use parallel stages, which are described in the Jenkins documentation for Pipeline syntax:
Stages in Declarative Pipeline may declare a number of nested stages within a parallel block, which will be executed in parallel. Note that a stage must have one and only one of steps, stages, or parallel. The nested stages cannot contain further parallel stages themselves, but otherwise behave the same as any other stage, including a list of sequential stages within stages. Any stage containing parallel cannot contain agent or tools, since those are not relevant without steps.
In addition, you can force all of your parallel stages to be aborted when one of them fails by adding failFast true to the stage containing the parallel block. Another way to get the same failFast behaviour is to add the parallelsAlwaysFailFast() option to the pipeline definition.
An example pipeline might look like this:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Run pod') {
            parallel {
                stage('bash') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
                stage('bash2') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash2') {
                            echo 'bash2'
                            sleep 300
                        }
                    }
                }
            }
        }
    }
}
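As a sketch of the failFast behaviour mentioned above, either of the two forms below aborts the sibling parallel branches as soon as one of them fails (stage names and labels here are illustrative):

```groovy
pipeline {
    agent none
    // Pipeline-wide alternative: abort all branches of every
    // parallel block when one branch fails.
    options { parallelsAlwaysFailFast() }
    stages {
        stage('Run pods') {
            failFast true   // per-stage variant of the same behaviour
            parallel {
                stage('bash') {
                    steps { echo 'bash' }
                }
                stage('bash2') {
                    steps { echo 'bash2' }
                }
            }
        }
    }
}
```

In practice you would use only one of the two mechanisms; declaring both, as shown here, is harmless but redundant.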

In Jenkins why do we see two different executors performing the same stage?

I have a Jenkins declarative pipeline (Jenkins version 2.138.3).
On the Jenkins screen, why do I see the same stage, say 'compile', being executed on two different executors of the same agent?
Image attached.
Example: a pipeline named 'multi-branch-pipeline-1' running on an agent called 'agent-1' with three executors; here the stage 'stage-promotion' is being executed on two different executors (2 and 3).
pipeline {
    agent { label 'agent-1' }
    stages {
        stage('compile') {
            agent { label 'agent-1' }
        }
        stage('stage-promotion') {
            agent { label 'agent-1' }
        }
    }
}
AFAIK, nested agent declarations work that way: the outer agent keeps its executor allocated and usable while the inner one is running.
Since you use the same label (or rather address the agent directly by name), this amounts to a second executor on the same agent. There seems to be no special logic for this rather unusual case, since you could simply omit the inner agent declarations that resolve to the same label.
The following will do the same as yours:
pipeline {
    agent { label 'agent-1' }
    stages {
        stage('compile') {
            // runs on agent-1
        }
        stage('stage-promotion') {
            // runs on agent-1
        }
    }
}
Nested agents are very useful when you want to temporarily switch machines in your pipeline:
pipeline {
    agent { label 'A' }
    stages {
        stage('start server') {
            // runs on machine x with label A
        }
        stage('test') {
            agent { label 'B' }
            // runs on machine y with label B
        }
        stage('stop server and archive logs') {
            // runs on **the same machine as in stage 'start server'**, same workspace etc.
        }
    }
}
The important part is that in the last stage we can be sure to be on the same machine, in the same workspace, with no wait time (i.e. no executor contention) as in the first stage. If you used agent declarations inside the stages only, you could end up on a different machine than in the first stage whenever more than one agent with label A is connected.

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices for a Jenkins pipeline.
I have created a declarative pipeline which does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins' distributed-builds capability and run on an agent node.
I suspect this applies to scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I just use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that is attached to 4 agents, yet my whole pipeline is always executed on one agent (which is what I want). I would have expected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds, which are activated by using either the agent or the node block.
agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step in a node block targeting the same agent, there is no effect except that a new executor is allocated to the step encapsulated by node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
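To illustrate the distinction (the labels 'linux' and 'deploy-box' below are assumptions chosen for the sketch): a per-stage agent is the declarative way to move one stage to another node, while a node block inside steps merely allocates an extra executor on top of the one the pipeline already holds:

```groovy
pipeline {
    agent { label 'linux' }              // hypothetical label
    stages {
        stage('build') {
            steps { echo 'on the pipeline agent' }
        }
        stage('deploy') {
            agent { label 'deploy-box' } // declarative way to switch nodes
            steps { echo 'on a different agent' }
        }
        stage('redundant') {
            steps {
                // node here grabs a *second* executor while the outer
                // one stays allocated -- usually not what you want
                node('linux') {
                    echo 'extra executor on the same label'
                }
            }
        }
    }
}
```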

Jenkins Pipeline: Are agents required to utilize Jenkinsfile?

I am investigating the use of Jenkins Pipeline (specifically using a Jenkinsfile). The context of my implementation is that I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs, which will pull job configurations (Jenkinsfiles) from source control, to automate the creation of our build jobs via Chef.
I've read the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems to me that in order to use Jenkins Pipeline, agents must be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use a Jenkinsfile with Jenkins Pipeline? This specific passage in the Jenkinsfile documentation leads me to believe this is true:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!
The agent part of the pipeline is required; however, this does not mean that you must have an external agent in addition to your master. If all you have is the master, the pipeline will execute on the master. If you have additional agents available, the pipeline will execute on whichever agent happens to be available when you run it.
If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that 'Master' itself is treated as one of the default nodes. In the declarative format, agent any indicates any available agent, which can include 'Master' as well, depending on its node configuration.
If you configure a new node, it can then be used as an agent in the pipeline: agent any can be replaced with agent 'Node_Name'.
You may refer to this link, which briefly explains Agent, Node and Slave.
