I have a simple Jenkins pipeline script like this:
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
When running the job, it starts correctly on the node "agent2", but it runs as 'jenkins' (the OS shell user of the master server where Jenkins is installed) instead of the OS SSH shell user of the node.
The node has its own credentials assigned to it, but they are not used.
When I run the job "doSomething" on its own and set "Restrict where this project can be run" to "node1", everything is fine. The job is then run by the correct user.
This behaviour can be easily recreated:
1. create a new OS user (pw: testrpm)
sudo useradd -m testrpm
sudo passwd testrpm
2. create a new node and use these settings:
3. create a freestyle job (called 'doSomething') with a shell step which does 'whoami'
doSomething Job XML
4. create a pipeline job and paste this code into the pipeline script
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
test-pipeline Job XML
5. run the pipeline and check the console output of the 'doSomething' job. The output of 'whoami' is not 'testrpm' but 'jenkins'
Can somebody explain this behaviour and tell me where the error is?
If I change the "Usage" of the built-in node to "Only build jobs with label expressions matching this node", then it works as expected and the "doSomething" job is called with the user testrpm.
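A common alternative (not part of the original post) is to pass the target node to the downstream job explicitly. A sketch, assuming the NodeLabel Parameter plugin is installed and 'doSomething' defines a node parameter named 'node':
build job: 'doSomething',
    parameters: [[$class: 'NodeParameterValue',
                  name: 'node',                // name of the node parameter on 'doSomething' (assumed)
                  labels: ['agent2'],          // pin the downstream build to agent2
                  nodeEligibility: [$class: 'AllNodeEligibility']]]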
I've created a Jenkinsfile in my Git repository that is defined like this:
pipeline {
    //The 'none' parameter in the agent section means that no global agent will be allocated for the entire
    //Pipeline's execution and that each stage directive must specify its own agent section.
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    //This image parameter (of the agent section's docker parameter) downloads the python:3.8.3
                    //Docker image and runs this image as a separate container. The Python container becomes
                    //the agent that Jenkins uses to run the Build stage of the Pipeline project.
                    image 'python:3.8.3'
                }
            }
            steps {
                //This sh step runs pip to install the application's dependencies
                sh 'pip install -r requirements.txt'
            }
        }
    }
}
When I tried to run the job with this Pipeline, I got the following error:
I also tried to use the image python:latest, but that option didn't work either.
Can someone explain this to me? :)
Go to Computer Management -> Local Users and Groups and make sure the user used by Jenkins is added to the docker-users group.
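On a Linux agent the equivalent fix, assuming the agent process runs as the user 'jenkins', is to add that user to the docker group:
sudo usermod -aG docker jenkins
# log out and back in (or restart the agent) so the new group membership takes effect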
I'm currently attempting to run Groovy code using Jenkins Pipeline, but I'm finding that my scripts run the Groovy part of the code on the Master rather than the Slaves, despite my specifying the agent.
I have a Repo in Git named JenkinsFiles containing the below JenkinsFile, as well as a repo named CommonLibrary, which contains the code run in the Stages.
Here's my JenkinsFile :
@Library(['CommonLibrary']) _
pipeline {
    agent { label 'Slave' }
    stages {
        stage("Preparation") {
            agent { label 'Slave && Windows' }
            steps {
                Preparation()
            }
        }
    }
}
Here's the Preparation.groovy file :
def call() {
    println("Running on : " + InetAddress.localHost.canonicalHostName)
}
Unfortunately, I always seem to get the Master returned when I run the Pipeline in Jenkins. I've tried manually installing Groovy on the Slave, and have also removed the Executors on the Master. Any PowerShell that gets run triggers correctly on the Slave, and the $NODE_NAME value returns as the Slave, but it's just the Groovy commands that seem to run on the Master.
Any help would be greatly appreciated. Thanks! - Tor
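For context, this matches documented Pipeline behaviour: the Groovy code of a pipeline and its shared libraries always executes on the controller, and only steps such as bat, sh or powershell are sent to the allocated agent. A minimal sketch of a Preparation.groovy that makes the difference visible (the hostname check is illustrative, not from the original post):
def call() {
    // Groovy runs inside the controller JVM, so this prints the controller's host name
    println("Groovy says: " + InetAddress.localHost.canonicalHostName)
    // The bat step is forwarded to the allocated (Windows) agent, so this prints the agent's host name
    bat 'hostname'
}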
In my project I have a Jenkins pipeline which should execute two stages on a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here goes my problem: while the first two stages work perfectly fine, the third stage cannot start, because "Waiting for next available executor" suddenly appears in the logs. It's obvious the node is waiting for itself, and I cannot use another machine. It looks like Docker is blocking something and Jenkins thinks the node is busy, so it waits eternally.
I'm looking for a solution that will allow me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it's only available since version 1.3 of the Declarative Pipeline plugin, and my node has 1.2.9.
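For reference, a rough sketch of what the nested-stages approach could look like on Declarative Pipeline 1.3+, combined with the docker agent's reuseNode option so that every stage runs on the same machine (stage names and commands taken from the pipeline above):
pipeline {
    agent { label 'jenkins-worker-1' }
    stages {
        stage('Inside container') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    reuseNode true   // reuse the node allocated at the top level instead of picking a new one
                }
            }
            stages {
                stage('Test') {
                    steps {
                        sh 'mvn test'
                    }
                }
                stage('Package') {
                    steps {
                        sh 'mvn package'
                        sh 'mv target workdir/'
                    }
                }
            }
        }
        stage('Upload') {
            // runs directly on jenkins-worker-1, outside the container
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}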
I'm trying to get the following features to work in Jenkins' Declarative Pipeline syntax:
Conditional execution of certain stages only on the master branch
input to ask for user confirmation to deploy to a staging environment
While waiting for confirmation, it doesn't block an executor
Here's what I've ended up with:
pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when {
                branch 'master'
            }
            steps {
                echo "build & push docker image"
            }
        }
        stage('3. Deploy to stage') {
            when {
                branch 'master'
            }
            input {
                message "Deploy to stage?"
                ok "Deploy"
            }
            agent any
            steps {
                echo 'Deploy to stage'
            }
        }
    }
}
The problem is that stage 2 needs the output from stage 1, but this is not available when it runs. If I replace the various agent directives with a global agent any, then the output is available, but the executor is blocked waiting for user input at stage 3. And if I try to combine stages 1 & 2 into a single stage, then I lose the ability to conditionally run some steps only on master.
Is there any way to achieve all the behaviour I'm looking for?
You need to use the stash command at the end of your first stage and then unstash when you need the files.
I think these are available in the snippet generator.
As per the documentation:
Saves a set of files for use later in the same build, generally on another node/workspace. Stashed files are not otherwise available and are generally discarded at the end of the build. Note that the stash and unstash steps are designed for use with small files. For large data transfers, use the External Workspace Manager plugin, or use an external repository manager such as Nexus or Artifactory.
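Applied to the pipeline above, a minimal sketch could look like this (the target/ path is an assumption about where the compile output lands, not something from the question):
stage('1. Compile') {
    agent any
    steps {
        echo 'compile'
        stash name: 'build-output', includes: 'target/**'   // save the compile output for later stages
    }
}
stage('2. Build & push Docker image') {
    agent any
    when {
        branch 'master'
    }
    steps {
        unstash 'build-output'   // restore the stashed files on whichever node runs this stage
        echo "build & push docker image"
    }
}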
At the pipeline level I specify the agent and node (with both the label and a custom workspace). When the pipeline kicks off it runs on the specified node, but when it hits the 'build job' step it picks the first available node. I tried playing with the NodeLabel plugin, but that didn't work either.
This is my Jenkinsfile:
pipeline {
    agent {
        node {
            label "Make Build Server"
            customWorkspace "$Workspace"
        }
    }
    options {
        skipDefaultCheckout()
    }
    stages {
        stage('PreBuild') {
            steps {
                input 'Did you authenticate the server through all the firewalls?'
            }
        }
        stage('Housekeeping') {
            steps {
                build job: 'Housekeeping'
            }
        }
    }
}
When you use the build instruction in a Jenkinsfile, you're telling Jenkins you want to build a completely separate job. It is that other job that will need to specify on what agent it will build. If it's a job based on a Jenkinsfile, then that other Jenkinsfile will indicate the agent. If it is a freestyle job, likewise. So the thing you were expecting (that the other job would build on the agent you specified in the "parent Jenkinsfile") is reasonable, but it is not the way it works.
Hope this helps!
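In other words, the place to pin the node is the downstream job itself. A sketch, assuming 'Housekeeping' is also a Pipeline job (its stage content here is made up):
// Jenkinsfile of the downstream 'Housekeeping' job
pipeline {
    agent {
        label "Make Build Server"   // the downstream job chooses its own node
    }
    stages {
        stage('Housekeeping') {
            steps {
                echo 'housekeeping work runs here'   // placeholder step
            }
        }
    }
}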