I am currently automating a project in Jenkins. I am using a pipeline that reads and executes a Jenkinsfile from a source code management tool (Git in my case). To make this happen, I give the Git URL, supply credentials with the Jenkins Credentials Provider, and execute the build. It reads the Jenkinsfile and checks out the code, but fails at the next stage:
pipeline {
    agent any
    stages {
        ...
        stage('Cloning GIT Repo') {
            steps {
                echo 'Cloning the repository'
                git url: 'http://test.git'
            }
        }
        ...
It gives the error:
No credentials specified
Is there a way for me to use the global credentials I specified in the Jenkins UI earlier?
You can use the credentialsId parameter:
git(
    url: 'http://test.git',
    credentialsId: 'jenkins-credentials',
    branch: "${branch}"
)
https://jenkins.io/doc/book/pipeline/jenkinsfile/#optional-step-arguments
https://jenkins.io/doc/pipeline/examples/#push-git-repo
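Putting the pieces together, a minimal declarative stage might look like this (a sketch; 'jenkins-credentials' stands for whatever id you gave the entry in the Jenkins credentials store):

```groovy
pipeline {
    agent any
    stages {
        stage('Cloning GIT Repo') {
            steps {
                echo 'Cloning the repository'
                // credentialsId must match the id of an entry configured
                // under Manage Jenkins > Credentials
                git url: 'http://test.git', credentialsId: 'jenkins-credentials'
            }
        }
    }
}
```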
Related
I have a question about using a pipeline with Git SCM. Currently I keep all my Jenkinsfile scripts in one Git repository on the master branch. The problem is that when I modify one Jenkinsfile, any other pipeline job that is triggered afterwards also shows that change in its changelog. This is frustrating when I only want to check the changes relevant to that build.
For example:
I configure a pipeline with Git SCM (git: xxx/jenkinsJob, branch: master, script: a.jenkinsfile)
# a.jenkinsfile
stage('Checkout external proj') {
    steps {
        git branch: 'my_specific_branch',
            credentialsId: 'my_cred_id',
            url: 'ssh://git@test.com/proj/test_proj.git'
    }
}
After I modify b.jenkinsfile in git://xxx/jenkinsJob, the next time the A pipeline job is triggered,
the A job shows git changes for both "xxx/jenkinsJob" and "git@test.com/proj/test_proj",
like:
# changes
b.jenkinsfile change message 1
b.jenkinsfile change message 2
b.jenkinsfile change message 3
a.jenkinsfile change message 2
..
test_proj change message
I know how to disable the changelog in a Jenkinsfile:
git changelog: false, branch: 'my_specific_branch', url: 'ssh://git@test.com/proj/test_proj.git'
But on the Jenkins job configuration page, I cannot find any way to do that.
https://plugins.jenkins.io/git/
Is there any way to do this without having to disable the changelog in the pipeline script for Git SCM, so that only the test_proj changes are shown?
Thanks!
Currently I clear the changelog this way, using currentBuild.getChangeSets().clear():
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                script {
                    currentBuild.getChangeSets().clear()
                }
                git branch: 'master', url: 'ssh://xxx/test.git'
            }
        }
    }
}
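The changelog option shown in the question also works with the lower-level checkout step; a sketch, reusing the repository URL from the snippet above:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // changelog: false keeps this checkout's commits out of the
                // build's "Changes" page, so only other checkouts contribute
                checkout changelog: false,
                         scm: [$class: 'GitSCM',
                               branches: [[name: 'master']],
                               userRemoteConfigs: [[url: 'ssh://xxx/test.git']]]
            }
        }
    }
}
```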
I am starting to work with Jenkinsfiles. The Jenkinsfile contains an echo message (i.e. Hello world).
This is my case:
I have Jenkins (ver. 2.190.1) installed on a Windows PC (the master agent).
My slave agent is a Linux PC.
I put my Jenkinsfile in my SCM.
I have successfully configured the pipeline to run the Jenkinsfile, and it completes successfully.
Jenkins checks out the repository on the *master agent* and not on the *slave agent* (which is what I want), and the *"Lightweight checkout"* option is checked.
I want this behaviour because my pipeline must work on the slave agent, and I don't want my repository on the master agent.
I searched the net for a possible solution but found nothing.
Could you give me a suggestion on how to check out my repository directly on the slave agent?
It can be done this way. In the examples below, just replace agentLabelName with your agent's label name.
Scripted Pipeline
node('agentLabelName') {
    stage('stageName') {
        echo "${env.WORKSPACE}"
        //checkout scm // If the Jenkinsfile is available in your SCM
        git url: 'https://github.com/samitkumarpatel/test0.git', branch: 'main'
    }
}
Declarative Pipeline
pipeline {
    agent {
        label 'agentLabelName'
    }
    stages {
        stage('stageName') {
            steps {
                echo "Hello World"
                echo "${env.WORKSPACE}"
                //checkout scm // If the Jenkinsfile is available in your SCM
                git url: 'https://github.com/samitkumarpatel/test0.git', branch: 'main'
            }
        }
    }
}
I have several projects that use a Jenkinsfile which is practically the same; the only difference is the Git project that it has to check out. This forces me to have one Jenkinsfile per project, although they could share the same one:
node {
    def mvnHome = tool 'M3'
    def artifactId
    def pomVersion
    stage('Commit Stage') {
        echo 'Downloading from Git...'
        git branch: 'develop', credentialsId: 'xxx', url: 'https://bitbucket.org/xxx/yyy.git'
        echo 'Building project and generating Docker image...'
        sh "${mvnHome}/bin/mvn clean install docker:build -DskipTests"
        ...
Is there a way to preconfigure the git location as a variable during the job creation so I can reuse the same Jenkinsfile?
...
stage('Commit Stage') {
    echo 'Downloading from Git...'
    git branch: 'develop', credentialsId: 'xxx', url: env.GIT_REPO_LOCATION
...
I know I can set it up this way:
This project is parameterized -> String Parameter -> GIT_REPO_LOCATION, default= http://xxxx, and access it with env.GIT_REPO_LOCATION.
The downside is that the user is prompted to start the build with the default value or change it. I would need it to be transparent to the user. Is there a way to do it?
You can use the Pipeline Shared Groovy Library plugin to have a library that all your projects share in a git repository. In the documentation you can read about it in detail.
If you have a lot of Pipelines that are mostly similar, the global variable mechanism provides a handy tool to build a higher-level DSL that captures the similarity. For example, all Jenkins plugins are built and tested in the same way, so we might write a step named buildPlugin:
// vars/buildPlugin.groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    // now build, based on the configuration provided
    node {
        git url: "https://github.com/jenkinsci/${config.name}-plugin.git"
        sh "mvn install"
        mail to: "...", subject: "${config.name} plugin build", body: "..."
    }
}
Assuming the script has either been loaded as a Global Shared Library
or as a Folder-level Shared Library the resulting Jenkinsfile will be
dramatically simpler:
Jenkinsfile (Scripted Pipeline)
buildPlugin {
    name = 'git'
}
The example shows how a Jenkinsfile passes name = 'git' to the library.
I currently use a similar setup and am very happy with it.
Instead of having a Jenkinsfile in each Git repository, you can have an additional git repository from where you get the common Jenkinsfile - this works when using Pipeline type Job and selecting the option Pipeline script from SCM. This way Jenkins checks out the repo where you have the common Jenkinsfile before checking out the user repo.
In case the job can be triggered automatically, you can create a post-receive hook in each git repo that calls the Jenkins Pipeline with the repo as a parameter, so that the user does not have to manually run the job entering the repo as a parameter (GIT_REPO_LOCATION).
In case the job cannot be triggered automatically, the least annoying method I can think of is having a Choice parameter with a list of repositories instead of a String parameter.
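A sketch of what the shared Jenkinsfile could look like with this approach (the parameter name GIT_REPO_LOCATION is taken from the question; the credentials id and tool name are placeholders):

```groovy
// Common Jenkinsfile kept in its own repository ("Pipeline script from SCM").
// GIT_REPO_LOCATION is supplied by the post-receive hook, or picked from the
// Choice parameter, when the job is triggered.
node {
    def mvnHome = tool 'M3'
    stage('Commit Stage') {
        echo "Downloading from ${env.GIT_REPO_LOCATION}..."
        git branch: 'develop', credentialsId: 'xxx', url: env.GIT_REPO_LOCATION
        sh "${mvnHome}/bin/mvn clean install docker:build -DskipTests"
    }
}
```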
I'm trying to set up the credentials for github in a jenkins pipeline job. I have the following in my pipeline script:
pipeline {
agent any
git([url: 'ssh://git@github.com/user/repname/', branch: 'master', credentialsId: 'xxx-xxx-xxx'])
Where does the credentialsId come from? Is this created elsewhere in Jenkins?
Update:
I pulled the credentials id from this page:
But now I am seeing this error:
Started by user anonymous
org.codehaus.groovy.control.MultipleCompilationErrorsException:
startup failed: WorkflowScript: 3: Undefined section "git" @ line 3,
column 5.
As you found yourself, it's the credentials id provided in the credentials view.
Now, as for your second problem, you're using declarative pipeline, it requires that you have the following structure:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                git([url: 'ssh://git@github.com/user/repname/', branch: 'master', credentialsId: 'xxx-xxx-xxx'])
            }
        }
    }
}
I.e. you need to put the git step inside the stages, stage and steps clauses (documentation on this can be found here).
Alternatively you can use scripted pipeline, then it becomes:
node {
    git([url: 'ssh://git@github.com/user/repname/', branch: 'master', credentialsId: 'xxx-xxx-xxx'])
}
However, when you're creating simple pipelines, declarative pipelines provide a lot of nice-to-have features. See my answer here for a comparison between declarative and scripted pipelines.
How should the pipeline/Jenkinsfile syntax look so that it can be triggered by a parameterized trigger, for example via curl?
I have a pipeline that starts with:
pipeline {
    parameters {
        string(name: 'mycommitid', defaultValue: 'nocommit', description: 'my parameterized build')
    }
    properties([
        parameters([
            string(defaultValue: 'nocommit', description: 'fas', name: 'mycommitid')
        ])
    ])
    node {...}
}
With this in the code, my pipeline won't be triggered; it only works if I set the parameter manually in the build triggers section in Jenkins. But the goal is to use it in multibranch pipelines and Jenkinsfiles.
The output I'm getting is (the hash here is a random number I typed as an example):
git rev-parse 4364576456fgds467734ss344c^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Finished: FAILURE
And once the commit is passed, how would you advise I build only that single revision?
When defining parameters for a Groovy pipeline script, you should define them before the Groovy script runs, i.e. in the job configuration.
Once you have created your parameters in the job configuration, you can access them in the Groovy script with the syntax ${env.PARAM}.
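For the second part of the question (building only the revision that was passed in), a scripted-pipeline sketch follows; the repository URL is a placeholder, and it assumes the commit id passed as mycommitid exists on the remote:

```groovy
// Register the parameter from the script (it takes effect from the second
// run onwards) and check out exactly the commit that was passed in.
properties([
    parameters([
        string(name: 'mycommitid', defaultValue: 'nocommit', description: 'commit to build')
    ])
])

node {
    stage('Checkout') {
        checkout([$class: 'GitSCM',
                  // build the single revision passed via the trigger
                  branches: [[name: params.mycommitid]],
                  userRemoteConfigs: [[url: 'ssh://git@example.com/proj/test.git']]])
    }
}
```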