How to use a pipeline for checking in DevOps data
Because "+" is not a safe character in a query string, URLSearchParams decodes a literal "+" as a space; to pass a real plus sign, it must be percent-encoded as %2B.
Let's assume the URL is www.somewebsite.com?name=%2bsomename
const params = new URLSearchParams(props.location.search);
console.log(params.get('name')); // "+somename"
The logged value is "+somename", as intended.
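For completeness, here is a small standalone sketch of both directions; the example values are made up for illustration:
// Parsing: a raw "+" decodes to a space, while %2B decodes to "+".
new URLSearchParams('name=+somename').get('name');   // " somename"
new URLSearchParams('name=%2bsomename').get('name'); // "+somename"
// Serializing: URLSearchParams percent-encodes "+" for you.
const qs = new URLSearchParams({ name: '+somename' });
console.log(qs.toString()); // "name=%2Bsomename"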
I am working on a multibranch pipeline Jenkins setup where builds are triggered by a webhook in Git. I have selected the branch source as Git.
When I push any change to Git, the webhook sends a request body with all the push event details. How can I parse the "git_http_url" value from it (which holds my Git repo URL)? I could then use this value as ${myrepourl} in the Jenkins console. Basically, I want to avoid hardcoding the repo URL; it should be taken dynamically from this parameter.
Please guide me.
[![webhook request body screenshots][2]][2]
[![attached my jenkins console branch source][1]][1]
[1]: https://i.stack.imgur.com/sdb0l.png
[2]: https://i.stack.imgur.com/icPP9.png
This looks like a not-very-good idea to begin with. I will start by explaining why, outline the alternatives, and end by suggesting a solution that might still work if you insist on doing this.
When you configure your pipeline, you need to provide it with a Jenkinsfile. It can either be pasted into the configuration ("Pipeline script"), or you can provide a path to it so Jenkins can perform a checkout ("Pipeline script from SCM"). With the former, you have exactly one Jenkinsfile, so different branches can't alter it (which defeats the point of having a multibranch setup). With the latter, even if you parametrize the Git repo, you still need to provide the path (it won't arrive in the GitHub notification).
In addition, I could trigger your build with my repo, but chances are your pipeline won't be able to properly build my repo anyway. So your pipeline can only build your repo, at which point it's a bit unclear why you insist on not providing your specific pipeline with the path to the specific repo it knows how to build.
Most people who need multibranch pipelines over GitHub use one of the plugins written specifically for this purpose, e.g. GitHub Multibranch or GitHub Organization. These plugins do all the work themselves: they sign up for notifications, process them, and start builds. They also update the build status in GitHub for you.
Finally, if you insist on processing GitHub notifications yourself, you can use the Generic Webhook Trigger plugin, which lets you trigger a job by POST-ing to a specified URL with a token. It may look like this:
pipeline {
    agent { node { label 'master' } }
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [
                [key: 'DATA', value: '$'], // JSONPath expression meaning "everything"
                [key: 'GITHUB_URL', value: '$.project.git_http_url']
            ],
            printContributedVariables: false, // change to true to see all the available variables in the console
            printPostContent: false,          // change to true to see the dump of the POST data
            silentResponse: false,
            token: 'my_token')
    }
    stages {
        stage('Build') {
            steps {
                // Minimal body so the pipeline is valid; GITHUB_URL is
                // contributed by the trigger above.
                echo "Notified for repo: ${GITHUB_URL}"
            }
        }
    }
}
As per the first configuration line, any JSON POSTed will get flattened and turned into pipeline variables, with a prefix you define (in this case, "DATA_"). E.g. the field git_http_url inside the field project in the GitHub payload will be defined in the pipeline and available to you as DATA_project_git_http_url. As per the second configuration line, it will also be available as GITHUB_URL.
You can test your pipeline with, e.g.:
curl -XPOST -H "Content-Type: application/json" 'http://<jenkins>/generic-webhook-trigger/invoke?token=my_token' --data '{"hello": "world"}'
In this case, the contributed variable will be DATA_hello and it will have the value of world. (The GITHUB_URL variable, naturally, won't be defined.)
If you want to turn this into a real GitHub webhook processor, you need to make sure the GitHub notifications arrive at <jenkins>/generic-webhook-trigger/invoke?token=my_token. We use nginx for that, but there are many other options.
On Mercurial, I've implemented a hook in my hgrc file that fires when a change (i.e. tagging or committing) occurs in the repository. Here is my hook code:
curl -X POST http://tokenusername:115d59a462df750d4f12347975b3d691cf@127.0.0.1:8080/job/pipelinejob/buildWithParameters/mercurial/notifyCommit?url=http://127.0.0.1:85/hg/experimentrepoistory?token=1247
So there's no issue with my hook notifying Jenkins that a change has occurred, and the pipeline executes, but for some reason I am having trouble getting the commit ID or the name of the author who made the commit, etc. I went to the script console in Jenkins and wrote the following Groovy code to see if the changeset data from Mercurial transferred over to Jenkins. (All the libraries are imported.)
def job = hudson.model.Hudson.instance.getItem("pipelinejob")
def builds = job.getBuilds()
def thisBuild = builds[0] // the most recent build
println('Lets test Mercurial fields ' + thisBuild.getEnvironment()['MERCURIAL_REVISION'])
// prints: Lets test Mercurial fields null
It makes me think that MERCURIAL_REVISION was for some reason not defined, even though I provided a job that has the changeset info. I was reading this documentation, https://javadoc.jenkins.io/plugin/mercurial/hudson/plugins/mercurial/MercurialChangeSet.html#MercurialChangeSet--, which lists a lot of functions like getCommitId(), getNode(), etc. that return the information I need. The problem is I'm not entirely sure how to get at the MercurialChangeSet for the Jenkins job pipelinejob, which in theory should have the Mercurial commit ID information. That's why I wanted to know if I perhaps missed something obvious regarding accessing MERCURIAL_REVISION.
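For reference, rather than instantiating MercurialChangeSet directly, the entries can be read off a build's change sets; here is a minimal script console sketch, assuming the build actually recorded a changelog:
def job = hudson.model.Hudson.instance.getItem("pipelinejob")
def build = job.getBuilds()[0] // the most recent build
// Each change set is a ChangeLogSet; for Mercurial its entries are
// MercurialChangeSet instances, so getCommitId()/getAuthor()/getMsg() apply.
build.getChangeSets().each { changeSet ->
    changeSet.each { entry ->
        println "commit: ${entry.commitId}, author: ${entry.author}, message: ${entry.msg}"
    }
}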
So I found out that I need to enable "Pipeline script from SCM" and put the Jenkinsfile with the pipeline code inside my workspace directory in order to get the changeset information. I am not entirely sure why this works, since I would have thought the Jenkinsfile needed to be in the repo directory of the SCM.
I am automating the configuration of Jenkins masters to get to a one-click instantiation. We have 6 standard jobs we create for each instance, and I'd like to be able to create them via groovy.init.d scripts, but I haven't found examples for this type of job.
We use the CloudBees Bitbucket Team/Project plugin, which ends up creating jobs of type WorkflowMultibranchProject with additional configuration to connect to our on-prem Bitbucket instance.
Does anyone have samples of creating such a job via Groovy? Or am I better off trying to use Job DSL to create the job (I am doing that already for a Mother Seed job)?
[UPDATE]: with the help of the answer below, I came up with a full sample creating an entire Bitbucket Team/Project job: https://github.com/redfive/jenkins-init/blob/master/init.groovy.d/core-jobs.groovy
Having used Job DSL, I'm 50/50 on whether it is easier compared to plain Groovy (as Job DSL lacks support for some of the config options).
An example for the similar OrganizationFolder can be found in coderanger's article at https://coderanger.net/jenkins/:
// Imports needed when running this standalone (e.g. from init.groovy.d);
// jenkins, githubOrg and cred are defined earlier in the linked article.
import jenkins.branch.OrganizationFolder
import org.jenkinsci.plugins.github_branch_source.BranchDiscoveryTrait
import org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator
import org.jenkinsci.plugins.github_branch_source.OriginPullRequestDiscoveryTrait

// Create the top-level item if it doesn't exist already.
def folder = jenkins.items.isEmpty() ? jenkins.createProject(OrganizationFolder, 'MyName') : jenkins.items[0]
// Set up the GitHub source.
def navigator = new GitHubSCMNavigator(githubOrg)
navigator.credentialsId = cred.id // Loaded above in the GitHub section.
navigator.traits = [
    // Too many repos to scan everything. This trims to a svelte 265 repos at the time of writing.
    new jenkins.scm.impl.trait.WildcardSCMSourceFilterTrait('*-cookbook', ''),
    // We have a ton of old branches, so limit to just master and PRs for now.
    new jenkins.scm.impl.trait.RegexSCMHeadFilterTrait('^(master|PR-.*)'),
    new BranchDiscoveryTrait(1),            // Exclude branches that are also filed as PRs.
    new OriginPullRequestDiscoveryTrait(1), // Merge the pull request with the current target branch revision.
]
folder.navigators.replace(navigator)
The next time I set up an instance, I'd likely give that a try.
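For the Job DSL route, a minimal multibranch pipeline job might look like the sketch below; the job name, repo URL, and credentials ID are hypothetical, and the Bitbucket-specific branch source would need the corresponding plugin's DSL or a configure block:
multibranchPipelineJob('team/my-repo') {
    branchSources {
        git {
            id('my-repo-source') // stable identifier for the branch source
            remote('https://bitbucket.example.com/scm/proj/my-repo.git') // hypothetical URL
            credentialsId('bitbucket-creds') // hypothetical credentials ID
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(10) // prune jobs for branches deleted from the repo
        }
    }
}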
I am using the Jenkins multibranch pipeline pull request feature. When I print env.BRANCH_NAME, it prints PR-<pull request number>. Instead of that, I want to get the branch name. How could I get it?
I was able to get the branch name from env.CHANGE_BRANCH in the case of a pull request.
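A common pattern is to fall back to BRANCH_NAME when the build is not a pull request; a minimal sketch:
// CHANGE_BRANCH is only set for change-request (PR) builds;
// BRANCH_NAME is "PR-<n>" for PRs and the plain branch name otherwise.
def branch = env.CHANGE_BRANCH ?: env.BRANCH_NAME
echo "Building branch: ${branch}"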
You will get the exact branch name only after merging the code; the PR-<number> name is just there to differentiate pull request builds from branch builds.
Is there any Jenkins plugin that helps with the following:
if a directory <XXX> is present in the SVN folder <GoRoCo>, then the <GoRoCo>_<XXX> Jenkins job is called?
Example:
In job "TEST", I specify parameters like directory names (A, B, C) and a folder name (G1R2); job "TEST" should then trigger the jobs "G1R2_A", "G1R2_B" and "G1R2_C".
Use the Parameterized Trigger Plugin. When specifying the jobs to call in the plugin, you can use tokens, as in JOB_${PARAM1}_${PARAM2}.
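In pipeline syntax, the same idea might look like this; the parameter and job names are hypothetical, mirroring the example above:
// Trigger G1R2_A, G1R2_B and G1R2_C from the "TEST" job.
def goRoCo = params.GOROCO // e.g. "G1R2"; hypothetical job parameter
['A', 'B', 'C'].each { dir ->
    build job: "${goRoCo}_${dir}", wait: false
}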
Take a look at this plugin; I think it does exactly what you are looking for:
https://wiki.jenkins-ci.org/display/JENKINS/Files+Found+Trigger
Use the Build Flow plugin.
With the help of this plugin you can run as many jobs as you need, with or without parameters.
Use scripts to create a property file with the required parameters for each of the modified projects and place the files in the workspace directory. Then you can use the Parameterized Trigger plugin to kick off the downstream projects, as sketched below.
Note: you might also have to delete those property files after triggering the downstream projects.
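A rough Build Flow DSL sketch of the idea; the job names and the PARAM_FILE parameter are hypothetical:
// Build Flow DSL: trigger each downstream job, passing the name of the
// property file that an earlier step dropped into the workspace.
["G1R2_A", "G1R2_B", "G1R2_C"].each { jobName ->
    build(jobName, PARAM_FILE: "${jobName}.properties")
}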