I can't mix ECS and k8s slaves - Jenkins

My top level pipeline directive looks like this:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'mycontainer'
            yaml 'my full podspec'
        }
    }
Then later in a stage I want to use ECS like this:
stage('my stage') {
    agent {
        node {
            label 'my-ecs-cluster'
            customWorkspace 'myworkspace'
        }
    }
}
I get this error:
Also: java.lang.ClassCastException: com.cloudbees.jenkins.plugins.amazonecs.ECSSlave cannot be cast to org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave
If I put the ECS agent in the top-level pipeline directive and then switch to k8s slaves in subsequent stages, I don't get an error, so this is only an issue one way. Very frustrating. Only a few stages need the ECS slave, so having to define it as the default in the pipeline definition is a pain.
The bigger issue is that if I define the ECS slaves in the pipeline definition, they are provisioned every time, and in some cases I don't want them provisioned at all because they aren't needed.
To add more info: I don't think setting agent any or none will work either, because when I specify the agent in multiple stages, each stage uses a different container. I want to be able to set the ECS agent at the top-level pipeline when required, because then all the stages run with the same container and use the same workspace.
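For reference, the direction that does work looks roughly like this (a minimal sketch reusing the labels from above; the pod spec stays a placeholder):

pipeline {
    // ECS slave as the pipeline-level default: this direction works
    agent { node { label 'my-ecs-cluster' } }
    stages {
        stage('k8s stage') {
            // switching to a Kubernetes slave per stage does not throw
            agent {
                kubernetes {
                    defaultContainer 'mycontainer'
                    yaml 'my full podspec'
                }
            }
            steps {
                echo 'runs on the k8s slave'
            }
        }
    }
}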

Related

Run multiple stages of Jenkins declarative pipe line in same docker slave

Objective
The objective here is to migrate a scripted Jenkins pipeline to a declarative one. The scripted pipeline runs on a Docker slave managed by Kubernetes, and the working syntax is as below:
slave = 'dtr#tes.com/namespace/image:1.0'
dockerNode(image: slave) {
    stage('1') { echo "1" }
    stage('2') { echo "2" }
}
The scripted pipeline works perfectly.
Concerns
We are trying to use dockerNode in the declarative pipeline, but in declarative syntax dockerNode is only allowed inside the steps of a stage, e.g.:
pipeline {
    agent any
    stages {
        stage('1and2') {
            steps {
                dockerNode(image: slave) {
                    echo "1"
                    echo "2"
                }
            }
        }
    }
}
This forces us to club bulky steps into one stage rather than splitting them across multiple stages. We would like your help to understand how we can better structure this and have multiple stages that always run in the same container. The container images are managed by Kubernetes (a kube pod with Docker images).
To use one container for all steps, you need to specify it in the agent section:
pipeline {
    agent {
        label 'docker-agent-label'
    }
}
To use it like this, you need to configure a pod template under 'Manage Jenkins' -> 'Manage Nodes and Clouds' -> 'Configure Clouds' -> 'Add a new cloud', or use an existing one. The cloud must be Kubernetes if your Jenkins host is integrated with k8s.
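Putting it together, a declarative equivalent of the scripted example above might look roughly like this; a sketch assuming the Kubernetes plugin, with my-registry/namespace/image:1.0 as a stand-in for the real image:

pipeline {
    agent {
        kubernetes {
            defaultContainer 'build'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: my-registry/namespace/image:1.0  # placeholder for the real image
    command: ['sleep']
    args: ['infinity']                      # keep the container alive for the build
'''
        }
    }
    stages {
        // both stages run in the same pod and container and share one workspace
        stage('1') { steps { echo "1" } }
        stage('2') { steps { echo "2" } }
    }
}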

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to be able to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node, dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here.
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
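Applied to the question, that means declaring the docker block as a per-stage agent instead of loading it from a file; a minimal sketch reusing the placeholder values from the question:

pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            // the docker agent block lives in the stage, not in a loaded file
            agent {
                docker {
                    label 'target_node'
                    image 'registry-url/image:latest'
                    alwaysPull true
                    registryUrl 'https://registry-url'
                    registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
                }
            }
            steps {
                echo 'running inside the freshly pulled image'
            }
        }
    }
}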
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but scripted pipeline is meant to either abstract some logic into a shared library, or to provide you a way to be a "hard core master ninja" and write your own fully custom pipeline using the scripted pipeline and none of the declarative sugar.
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off installing a plugin and using agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline will only use a label to get the agent it needs.

What does the agent mean in Jenkins?

I am trying to use Jenkins, but when reading the Declarative Pipeline Syntax documentation I was confused by the agent term.
https://jenkins.io/doc/book/pipeline/syntax/#scripted-pipeline
What does the agent stand for?
Does that mean I can set the pipeline runtime folder path?
How do I create an agent?
How do I set a label for an agent?
I can feel you :-D.
Here are the answers:
The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment depending on where the agent section is placed. The section must be defined at the top-level inside the pipeline block, but stage-level usage is optional. - Content copied from the agent section
NO, this has nothing to do with the pipeline runtime folder path.
You can, for example, create an agent/node by following this tutorial:
How to Setup Jenkins Agent/Slave Using Password and SSH Keys.
But there are many other ways to create an agent, e.g. using a Docker container (...).
You can set a label under the Configuration of the Node.
You can use a label in your pipeline like:
pipeline {
    agent { label 'labelName' }
    (...)
}
While #adbo covered the questions asked, the Jenkins glossary describes an agent really well:
typically a machine, or container, which connects to a Jenkins controller and executes tasks when directed by the controller.
You can choose to run the entire pipeline on any available agent (agent any at the top of the pipeline) or run a specific stage on an agent of your choice, e.g. run the build stage in a specific environment by overriding agent in that stage:
agent { docker { image 'my image' } }
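Put together, that looks roughly like this (a sketch; 'my-image' is a placeholder):

pipeline {
    agent any                                      // rest of the pipeline: any available agent
    stages {
        stage('build') {
            agent { docker { image 'my-image' } }  // this stage only: run in a container
            steps {
                echo 'runs inside the my-image container'
            }
        }
    }
}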

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline which does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins distributed
builds capability and run an agent node.
I suspect this is for scripted pipelines.
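In scripted terms, that advice amounts to wrapping the material work in a node block; a sketch with a placeholder label and build command:

node('xxx') {
    // cloning and compiling run on the agent,
    // not on the master's lightweight executor
    checkout scm
    sh './build.sh'
}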
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline defines a label that matches 4 agents, yet my whole pipeline always executes on a single agent (which is what I want); I would have suspected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step with node using the same agent, there won't be any effect except that a new executor will be allocated to the step encapsulated with node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
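For illustration, this is the redundant pattern the answer warns against (a sketch using the question's 'xxx' label):

pipeline {
    agent { label 'xxx' }
    stages {
        stage('stage1') {
            steps {
                // allocates a second executor on the same label: no benefit,
                // the steps would already run on the top-level agent
                node('xxx') {
                    echo 'running on an extra executor'
                }
            }
        }
    }
}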

Jenkins Pipeline: Are agents required to utilize Jenkinsfile?

I am investigating the use of Jenkins Pipeline (specifically using Jenkinsfile). The context of my implementation is that I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs, which will pull job configurations from source control (Jenkinsfile), to automate creation of our build jobs via Chef.
I've investigated the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems to me that in order to use Jenkins Pipeline, agents are required to be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use Jenkins Pipeline's Jenkinsfile? This specific line in the Jenkinsfile documentation leads me to believe this to be true:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!
The 'agent' part of the pipeline is required; however, this does not mean that you are required to have an external agent in addition to your master. If all you have is the master, the pipeline will execute on the master. If you have additional agents available, the pipeline will execute on whichever agent happens to be available when you run the pipeline.
If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that 'Master' itself is treated as one of the default nodes. In the declarative format, agent any indicates any available agent, including 'Master' (as shown in the node configuration).
If you configure a new node, it can then be treated as a new agent in the pipeline, and agent any can be replaced by an agent targeting 'Node_Name'.
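In current declarative syntax that is typically written with a label block; a sketch, where 'Node_Name' is whatever name or label you gave the node:

pipeline {
    agent { label 'Node_Name' }   // pin the whole pipeline to the configured node
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
}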
You may refer to this LINK, which briefly explains Agent, Node and Slave.
