From my understanding, during a pipeline run all Groovy is executed on the controller. Because of this, it's recommended to delegate network calls to bash or some other scripting/programming language so that the network requests are executed on the agent.
Do inline scripts execute on the agent or on the controller?
For example:
Do
sh "curl some-url.com"
and
sh "./script-which-calls-curl.sh"
behave the same?
Whether a script runs on the controller is independent of whether it is inline; it depends on the context of the sh step. The sh step requires an explicit agent, and that agent determines where the script actually runs, which can be the controller.
This minimal pipeline will error due to missing agent/node:
sh "echo hello"
Instead, you will need to wrap it in a node block (scripted pipeline) or select an appropriate agent (declarative pipeline):
// without any arguments any node/agent is taken
// in my setups this normally selects the controller
node {
    sh "echo hello"
}
pipeline {
    agent any
    stages {
        stage('My Stage') {
            steps {
                sh "echo hello"
            }
        }
    }
}
Related
I have a simple Jenkins pipeline script like this:
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
When running the job, it starts correctly on the node "agent2", but it runs as 'jenkins' (the OS shell user of the master server where Jenkins is installed) instead of the OS SSH shell user of the node.
The node has its own credentials assigned to it, but they are not used.
When I run the job "doSomething" on its own and set "Restrict where this project can be run" to "node1", everything is fine. The job is then run by the correct user.
This behaviour can be easily recreated:
1. create a new OS user (pw: testrpm)
sudo useradd -m testrpm
sudo passwd testrpm
2. create a new node and use the following settings:
3. create a freestyle job (called 'doSomething') with a shell step that runs 'whoami'
doSomething Job XML
4. create a pipeline job and paste this code into the pipeline script
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
test-pipeline Job XML
5. run the pipeline and check the console output of the 'doSomething' job. The output of 'whoami' is not 'testrpm' but 'jenkins'.
Can somebody explain this behaviour and tell me where the error is?
If I change the "Usage" of the built-in node to "Only build jobs with label expressions matching this node", then it works as expected and the "doSomething" job runs as the user testrpm:
I need to SSH to a server from a simple Jenkins pipeline and do a deploy, which is simply moving to a directory and doing a git fetch and some other commands (npm install among others). The thing is that when the Jenkins job SSHs to the remote server, it connects fine, but then it gets stuck and I have to stop it. I have now simplified the script to just SSH to the server and run a pwd command, but it still connects and then hangs until I abort. What am I missing? Here is the simple pipeline script and the output in a screenshot.
pipeline {
    agent any
    stages {
        stage('Connect to server') {
            steps {
                sh "ssh -t -t jenkins@10.x.x.xx"
                sh "pwd"
            }
        }
        stage('branch status') {
            steps {
                sh "git status"
            }
        }
    }
}
Jenkins executes each "sh" step as a separate shell script. The content is written to a temporary file on the Jenkins node and only then executed. Each command is executed in a separate session and is not aware of the previous one, so neither the ssh session nor changes to environment variables will persist between the two steps.
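You can see this outside Jenkins too. Each sh step behaves like a fresh `sh -c` invocation, so nothing set in one survives into the next (a minimal illustration, not pipeline code):

```shell
# Two independent shells: the variable and the directory change made in
# the first are invisible to the second, just like two consecutive sh steps.
sh -c 'FOO=bar; cd /tmp'
sh -c 'echo "${FOO:-unset}"'   # prints "unset"
```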
More importantly, though, you are forcing pseudo-terminal allocation with the -t flag. This is pretty much the opposite of what you want to achieve, i.e. running shell commands non-interactively. Simply
sh "ssh jenkins@10.x.x.xx pwd"
is enough for your example to work. Placing the commands on separate lines would not work with a regular shell script either, regardless of Jenkins. However, you still need the private key available on the node; otherwise the job will hang, waiting for you to provide the password interactively. Normally you will want to use the SSH Agent Plugin to provide the private key at runtime.
script {
    sshagent(["your-ssh-credentials"]) {
        sh "..."
    }
}
For running longer command sequences, see What is the cleanest way to ssh and run multiple commands in Bash?
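One common pattern from that discussion is to feed several commands to a single remote shell over stdin via a heredoc. A sketch, shown here against a local /bin/sh -s so it can run anywhere; over SSH you would replace "/bin/sh -s" with something like "ssh jenkins@10.x.x.xx /bin/sh -s" (host from the question):

```shell
# One shell, several commands, delivered on stdin. All commands run in the
# same session, so the cd affects the pwd that follows.
/bin/sh -s <<'EOF'
cd /tmp
pwd
echo done
EOF
```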
I've set up a home based CI server for working with a personal project. Below you can see what happens for the branch "staging". It works fine, however the problems with such a pipeline config are:
1) The only way to stop the instance seems to be to abort the build in Jenkins, which leads to exit code 143 and the build being marked red instead of green
2) If the machine reboots, I have to trigger the build manually
3) I suppose there should be a better way of handling this?
Thanks
stage('Staging') {
    when {
        branch 'staging'
    }
    environment {
        NODE_ENV = 'production'
    }
    steps {
        sh 'docker-compose -f docker-compose/staging.yml build'
        sh 'docker-compose -f docker-compose/staging.yml up --abort-on-container-exit'
    }
    post {
        always {
            sh 'docker-compose -f docker-compose/staging.yml rm -f -s'
            sh 'docker-compose -f docker-compose/staging.yml down --rmi local --remove-orphans'
        }
    }
}
So, what's the goal here? Are you trying to deploy to staging? If so, what do you mean by that? If Jenkins is to launch a long-running process (say, a docker container running a webserver), then the shell command line must be able to start it and have its exit status tell the Jenkins pipeline whether the start was successful.
One option is to wrap the docker-compose invocation in a script that executes it, checks the result, and exits with the appropriate exit code. Another is to bring in yet another automation tool to help (e.g. Ansible).
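A sketch of that wrapper idea. Here "sleep 60" stands in for the real long-running service (e.g. a detached "docker-compose up -d"); the point is only that the script's exit code tells Jenkins whether the start succeeded:

```shell
# Start a long-running service in the background, then verify it is alive
# and report success or failure through the exit code.
start_service() {
  sleep 60 &                 # stand-in for the real detached service
  echo $! > service.pid
}

start_service
if kill -0 "$(cat service.pid)" 2>/dev/null; then
  echo "service started"
  kill "$(cat service.pid)"  # cleanup for this demo only; a real wrapper leaves it running
  rm -f service.pid
else
  echo "service failed to start" >&2
  exit 1
fi
```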
The first question remains: what are you trying to get Jenkins to do, and how would that work on the command line? If you can model the command line, then you can encapsulate it in a script file and have Jenkins start it.
Jenkins pipeline code looks like Groovy and is much like Groovy. This can make us believe that adding complex logic to the pipeline is a good idea, but that turns Jenkins into our IDE, which is hard to debug and a trap into which I've fallen several times.
A somewhat easier approach is to have some other tool that you can easily test on the command line, and then have Jenkins build the environment in which to run that command-line process. Jenkins then handles what it is good at:
scheduling jobs
determining on which nodes jobs run
running steps in parallel
making the output pretty or easily understood by us carbon-based life forms.
I am using parallel stages.
Here is a minimum example:
pipeline {
    agent any
    options {
        parallelsAlwaysFailFast() // https://stackoverflow.com/q/54698697/4480139
    }
    stages {
        stage('Parallel') {
            parallel {
                stage('docker-compose up') {
                    steps {
                        sh 'docker-compose up'
                    }
                }
                stage('test') {
                    steps {
                        sh 'sleep 10'
                        sh 'docker-compose down --remove-orphans'
                    }
                }
            }
        }
    }
    post {
        always {
            sh 'docker-compose down --remove-orphans'
        }
    }
}
I am currently working on a basic deployment pipeline in Jenkins (with Pipeline). I am looking for the best way of doing the following:
When the developer pushes to the development branch, all stages but deploy are executed.
When the developer pushes to the master branch, all stages including deploy are executed.
I have read about branch-matching patterns you can use, but I am not sure whether this is the right approach, as the information I found was dated.
My Jenkins pipeline file
node {
    stage('Preparation') {
        git 'git@bitbucket.org:foo/bar.git'
    }
    stage('Build') {
        sh 'mkdir -p app/cache app/logs web/media/cache web/uploads'
        sh 'composer install'
    }
    stage('Test') {
        sh 'codecept run'
    }
    stage('Deploy') {
        sh 'mage deploy to:prod'
    }
}
There's no magic here. This is just Groovy code. The branch in scope will be available as a parameter in some way. Inside the "stage" block, add an "if" check to compare the branch name with whatever logic you need, and either execute the body or not, depending on what branch is in scope.
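A minimal sketch of that check against the scripted pipeline above, assuming a multibranch job where `env.BRANCH_NAME` is populated by the branch source:

```groovy
node {
    // ... Preparation, Build, and Test stages as before ...
    stage('Deploy') {
        // Plain Groovy: only run the deploy body on the master branch.
        if (env.BRANCH_NAME == 'master') {
            sh 'mage deploy to:prod'
        } else {
            echo "Skipping deploy on branch ${env.BRANCH_NAME}"
        }
    }
}
```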
Is it possible to somehow interact with the Jenkins pipeline from the script (bat or sh)? E.g. to initiate a new stage?
echo This is a batch file in no stage
pipeline.stage(build)
echo This is a batch file in build stage
I have a pretty compact build job written in PowerShell. My team is now experimenting with the Jenkins Pipeline feature, and it would be great to split our PowerShell build code into stages (build core, build modules, test, coverage, and so on). We could easily do it by creating a function for every stage, but that would be inefficient (loading ps modules...)
I would propose another way: you can define your different steps as CmdLets in your PowerShell script:
function step_1()
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param ()
    Write-Verbose "step 1"
}

function step_2()
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param ()
    Write-Verbose "step 2"
}
Then, you can define the PowerShell method, as I describe here: To call a PowerScript from the Groovy-Script
In your Groovy pipeline-script:
node ('MyWindowsSlave') {
    stage ('Stage 1') {
        PowerShell(". '.\\all-stages.ps1'; step_1 -Verbose")
    }
    stage ('Stage 2') {
        PowerShell(". '.\\all-stages.ps1'; step_2 -Verbose")
    }
}
You may also consider splitting your Groovy script into several files, letting the PowerShell developers use their own Groovy sub-script.
Regards.
You can use the input step (https://jenkins.io/doc/pipeline/steps/pipeline-input-step/) to wait for a user response before continuing the pipeline:
A simple 'yes' or 'no', a multiple choice, a password confirmation...
You can control who can respond to this input event, using LDAP groups for example.
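For example, a declarative sketch of such a gate (the stage name and the `submitter` group are hypothetical):

```groovy
stage('Approve deploy') {
    steps {
        // Pauses the pipeline until an allowed user clicks "Deploy" or aborts.
        input message: 'Deploy to production?',
              ok: 'Deploy',
              submitter: 'deploy-approvers'   // hypothetical LDAP group
    }
}
```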