I'm having some trouble with the dockerFingerprintFrom step in a Jenkins Pipeline.
The code is as follows:
node {
    stage("First stage") {
        dockerFingerprintFrom([dockerfile: "."])
    }
}
It does not work, and I'm not even sure this is correct. Could anyone show me a practical example?
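For what it's worth, here is a minimal sketch of how this step is usually invoked: it expects the path to the Dockerfile itself rather than a directory, plus the image that was built from it (the image name below is a placeholder assumption, not something from the question):

node {
    stage("First stage") {
        // Record the fingerprint of the base image used in FROM.
        // Assumes 'myorg/myapp:latest' (a placeholder) was already built
        // from the Dockerfile at the workspace root.
        dockerFingerprintFrom dockerfile: 'Dockerfile', image: 'myorg/myapp:latest'
    }
}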
I'm currently working with a Jenkinsfile I can't directly add code to, as it is not developed by my team. However, I was thinking there might be some way to get the owners of the Jenkinsfile (just another team in my company) to allow us to add "pre" and "post" type variables to it, through which we could then pass in stages and logic.
A sample Jenkinsfile today might look like
pipeline {
    stages {
        stage('Clean-Up WS') {
            steps {
                cleanWs()
            }
        }
        stage('Do more....
And the desired Jenkinsfile might look like
def x = stage('Clean-Up WS') {
    steps {
        cleanWs()
    }
}
pipeline {
    stages {
        x()
        stage('Do more....
Here x in the above example could be passed in via a Jenkins parameter. I've played around with the above and tried similar syntax, but nothing seems to work. Does anyone know if anything like this is possible in a Jenkinsfile?
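For what it's worth, this pattern does work in a scripted (as opposed to declarative) pipeline, where stages are ordinary method calls and closures can be passed around. A hedged sketch, not a drop-in fix for the declarative syntax above:

// Scripted pipeline: a "pre" closure injected before the main stages.
// In scripted syntax there is no steps {} wrapper; steps are called directly.
def pre = {
    stage('Clean-Up WS') {
        cleanWs()
    }
}

node {
    pre()                 // run the injected "pre" logic
    stage('Do more') {
        echo 'main build logic here'
    }
}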
I need to get the result of the current build in a postBuildScripts shell call in my Jenkins Job DSL script, like ${currentBuild.currentResult} in Jenkins Pipelines, with the values SUCCESS, UNSTABLE, or FAILURE.
I've searched through the Job DSL documentation but haven't found a solution for this.
My code is something like this:
postBuildScripts {
    steps {
        shell("""echo \$CURRENT_BUILD_STATUS""")
    }
}
So what is the easiest way to get this $CURRENT_BUILD_STATUS?
Unfortunately the documentation of the PostBuildScript plugin is missing some interesting parts...
job('example') {
    publishers {
        postBuildScripts {
            steps {
                shell('echo $BUILD_RESULT')
            }
        }
    }
}
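For comparison, in a Pipeline job the same information is available directly through the currentBuild global variable the question already mentions. A minimal sketch:

node {
    // ... build steps ...
    // currentResult is SUCCESS, UNSTABLE, or FAILURE at this point in the build
    echo "Result so far: ${currentBuild.currentResult}"
}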
When I load another Groovy file in my Jenkinsfile, it shows me the following error:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
I made a Groovy file containing a function, and I want to call it in my declarative Jenkinsfile, but it shows an error.
My Jenkinsfile:
def myfun = load 'testfun.groovy'
pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('calling function') {
            steps {
                script {
                    myfun('${REPO_PATH}','${APP_NAME}')
                }
            }
        }
    }
}
Result:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
Please suggest the right way to do this.
You either need to use a scripted pipeline and put the load instruction inside a node block (see this question), or, if you are already using a declarative pipeline (which seems to be the case), you can include it in the environment section:
environment {
    REPO_PATH='/home/manish/Desktop'
    APP_NAME='test'
    MY_FUN = load 'testfun.groovy'
}
We have to wrap the code with node {} so that it runs on a Jenkins executor. If we would like to execute on a specific agent node, we can write node('agent name') {}. Example:
node {
    def myfun = load 'testfun.groovy'
    pipeline {
        agent any
        environment {
            REPO_PATH='/home/manish/Desktop'
            APP_NAME='test'
        }
        stages {
            stage('calling function') {
                steps {
                    script {
                        myfun('${REPO_PATH}','${APP_NAME}')
                    }
                }
            }
        }
    }
}
Loading the function in an initial script block inside the pipeline worked for me. Something like this:
def myfun
pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('load function') {
            steps {
                script {
                    myfun = load 'testfun.groovy'
                }
            }
        }
        stage('calling function') {
            steps {
                script {
                    myfun("${REPO_PATH}","${APP_NAME}")
                }
            }
        }
    }
}
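For completeness, a hedged sketch of what testfun.groovy could contain for the load calls above to work: the file must end with return this so the calling Jenkinsfile gets back an object whose methods it can invoke (the function body is an assumption, since the question does not show the file):

// testfun.groovy (assumed contents)
def call(String repoPath, String appName) {
    echo "Running for ${appName} in ${repoPath}"
}
return this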
I got this error message when I was calling a sh script that does not exist in the repository / file system. Look in the stack trace for the following line:
at WorkflowScript.run(WorkflowScript:135)
The 135 marks the line in the Jenkinsfile where the missing script or error occurs.
Another possibility is that, due to earlier or underlying errors, the context has been removed, for example by multiple executor machines. This happens when you are missing a node (e.g. script) block, especially in the post always block. You can use the if check in other places as well. Once you guard against the missing context like this, you will see the other error that was actually causing this message.
post {
    always {
        script {
            // skip the step if context is missing
            if (getContext(hudson.FilePath)) {
                echo "It works"
            }
        }
    }
}
See https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/troubleshooting-guides/how-to-troubleshoot-hudson-filepath-is-missing-in-pipeline-run
What is modern best practice for multi-configuration builds (with Jenkins)?
I want to support multiple branches and multiple configurations.
For example, for each version V1, V2 of the software I want builds targeting platforms P1 and P2.
We have managed to set up multi-branch declarative pipelines. Each build has its own Docker image, so it's easy to support multiple platforms.
pipeline {
    agent none
    stages {
        stage('Build, test and deploy for P1') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P1.Dockerfile'
                }
            }
            steps {
                sh buildit...
            }
        }
        stage('Build, test and deploy for P2') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P2.Dockerfile'
                }
            }
            steps {
                sh buildit...
            }
        }
    }
}
This gives one job covering multiple platforms, but there is no separate red/blue status for each platform.
There is a good argument that this does not matter, as you should not release unless the build works on all platforms.
However, I would like a separate status indicator for each configuration. This suggests I should use a multi-configuration build which triggers a parameterised build for each configuration, as below (and as in the linked question):
pipeline {
    parameters {
        choice(name: 'Platform', choices: ['P1', 'P2'], description: 'Target OS platform')
    }
    agent {
        filename someMagicToGetDockerfilePathFromPlatform()
    }
    stages {
        stage('Build, test and deploy for P1') {
            steps {
                sh buildit...
            }
        }
    }
}
There are several problems with this:
- A declarative pipeline has more constraints on how it is scripted.
- Multi-configuration builds cannot trigger declarative pipelines (even with the parameterized triggers plugin I get "project is not buildable").

This also raises the question of what use parameters are in declarative pipelines.
Is there a strategy that gives the best of both worlds, i.e.:
- pipeline as code
- separate status indicators
- limited repetition?
This is a partial answer. I think others with better experience will be able to improve on it.
This is currently untested. I may be barking up the wrong tree.
Please comment or add a better answer.
- Do not use pipeline parameters except where you need user input.
- Use a hybrid of a scripted and declarative pipeline (see also https://stackoverflow.com/a/46675227/1569204).
- Have a function which declares a pipeline based on parameters (see also https://jenkins.io/doc/book/pipeline/shared-libraries/).
- Use nodes to create visible indicators in the pipeline (at least in Blue Ocean).

So something like the following:
def build(String platform) {
    def dockerFile
    def indicator
    switch (platform) {
        case 'P1':
            dockerFile = 'foo'
            indicator = 'build for foo'
            break
        case 'P2':
            dockerFile = 'bar'
            indicator = 'build for bar'
            break
    }
    pipeline {
        agent {
            dockerfile {
                filename "$dockerFile"
                label "$indicator"
            }
        }
        stages {
            stage("$indicator") {
                steps {
                    echo "build it"
                }
            }
        }
    }
}
The relevant code could be moved to a shared library (even if you don't actually need to share it).
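As a hedged illustration of that shared-library idea (the file and library names are assumptions for the example), the function above could live in vars/buildPlatform.groovy; it is renamed here because build is already the name of a built-in Pipeline step:

// vars/buildPlatform.groovy in the shared library
def call(String platform) {
    // ...the body of the build(...) function shown above...
}

Each platform job's Jenkinsfile would then reduce to:

// Jenkinsfile ('my-shared-library' is an assumed library name)
@Library('my-shared-library') _
buildPlatform('P1')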
I think the cleanest approach is to have all of this in a pipeline similar to the first one you presented. The only modification I would make is to run the platform stages in parallel, so that you actually build and test for both platforms.
To reuse the previous stage's workspace you can add reuseNode true.
Something similar to this flow:
pipeline {
    agent { label 'docker' }
    stages {
        stage('Common pre') { ... }
        stage('Build all platforms') {
            parallel {
                stage('Build, test and deploy for P1') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P1.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh buildit...
                    }
                }
                stage('Build, test and deploy for P2') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P2.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh buildit...
                    }
                }
            }
        }
        stage('Common post parallel') { ... }
    }
}
I'm looking into moving our scripted pipelines to declarative pipelines.
I'm using the when keyword to skip stages:
stage('test') {
    // Only do anything if we are on the master branch
    when { branch 'master' }
    //...
}
This works; however, the skipped stage is shown as green. I would prefer it to be shown as gray in the pipeline overview. Is there a way to achieve this?
As you mentioned in your comment, I suggest you use Jenkins Blue Ocean when working with pipelines.
It provides a more modern and user-friendly view of your pipeline projects; even the pipeline itself is displayed in a much more convenient way.
If the stage is appearing green for you then it's likely still actually running. A skipped stage should look like this in the Jenkins classic stage view. Consider the following code sample, which has three stages, the middle stage being skipped conditionally with the when directive.
pipeline {
    agent any
    stages {
        stage('Always run 1') {
            steps { echo "hello world" }
        }
        stage('Conditionally run') {
            when {
                expression { return false }
            }
            steps { echo "doesn't get printed" }
        }
        stage("Always run 2") {
            steps { echo "hello world again" }
        }
    }
}
This should produce the following line in your build log:
Stage "Conditionally run" skipped due to when conditional
Another answerer of this question mentioned Blue Ocean, which definitely presents a beautiful view of the stages. Here is an image of how a skipped stage looks in the Blue Ocean stage view. Note that Blue Ocean is a UI, and your job's underlying pipeline code will be the same regardless of which UI you choose to use.