How can I edit a Jenkins job's parameters by updating the job's config.xml using curl?
You can use:
curl -X POST 'http://my-cool-jenkins.com:8080/createItem?name=mycooljob' -u username:password --data-binary @config.xml -H "Content-Type: text/xml"
Update:
That URL is for creating a job; for updating an existing job, use:
curl -X POST 'http://my-cool-jenkins.com:8080/job/mycooljob/config.xml' -u username:password --data-binary @config.xml -H "Content-Type: text/xml"
Just updating the contents of the config.xml file is probably not enough to change the in-memory state of the Jenkins job. You still need to reload the configuration from disk, which can be done either in the GUI at jenkins/manage/, with a Groovy script, or simply by restarting the server. After that, your example should work.
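If you go the Groovy-script route for that reload step, a minimal script-console sketch (assuming admin permissions) looks like this:
import jenkins.model.Jenkins

// Reload all job configurations from disk so edits made to config.xml take effect
Jenkins.instance.reload()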
This really comes down to the fact that Jenkins config.xml files are XStream-serialized Java objects, not actual configuration files, so changing job parameters by manually editing XML files is likely not the best solution. Instead, you could change the job configuration using the Jenkins script console. For example, to change the default value of a String parameter, you can run the script below in the script console (e.g. http://localhost:8080/jenkins/script):
import hudson.model.ParametersDefinitionProperty
import jenkins.model.Jenkins

def jobName = "job_name"
def paramName = "param_to_be_changed"
def newParamValue = "param_new_value"

// Look up the job, find the parameter definition by name and update its default value
def job = Jenkins.instance.getItem(jobName)
def params = job.getAction(ParametersDefinitionProperty)
def paramToModify = params.getParameterDefinitions().find { param -> param.getName() == paramName }
paramToModify.setDefaultValue(newParamValue)
job.save()
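To double-check that the change took effect, here is a small follow-up sketch for the same script console session (reusing the variables defined above):
// Re-read the parameter definition and print its current default value
def updated = Jenkins.instance.getItem(jobName)
        .getAction(ParametersDefinitionProperty)
        .getParameterDefinition(paramName)
println updated.getDefaultParameterValue()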
If the job is inside a folder or organization, you need to go one level deeper, e.g.:
def folderName = "folder_name"
def job = Jenkins.instance.getItem(folderName).getItem(jobName)
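Alternatively (a small sketch, not part of the original answer), Jenkins can resolve the full path in a single call:
// Resolves nested folders as well, e.g. "folder_name/subfolder/job_name"
def job = Jenkins.instance.getItemByFullName("folder_name/job_name")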
After that, the job state will be persisted to the job's config.xml file. You can also execute the script remotely using curl. Assuming you saved the above script to a file called script.groovy:
# Get CSRF crumb from Jenkins
curl -u <username>:<password> 'http://localhost:8080/jenkins/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)'
# Send script to Jenkins console
curl -X POST -u <username>:<password> -H 'Jenkins-Crumb: <crumb>' --data-urlencode "script=$(< script.groovy)" http://localhost:8080/jenkins/scriptText
More details on the parameters API can be found in the javadoc.
Here is a link to a script that I've been using to modify a job's pipeline from the shell: https://raw.githubusercontent.com/iocanel/presentations/382074b5012d6c3ed87042298114e688424eeaed/workspace/editor/jenkins-run-pipeline
I have a question about how to use a File Spec in an API call in JFrog.
I used the Jenkins Artifactory Plugin to upload and download artifacts to JFrog, and I am trying to rewrite the function using the JFrog API (GET/PUT) to do the same thing.
But now I have a problem: for some artifacts I used a file spec to set some properties, and then I upload this file spec:
"files": [
{
"pattern": "${file}",
"target": "${target}" """
if (runID) {
uploadSpec += """,
"props": "artifactId=${runID}"
"""
}
uploadSpec += """
}
]
As you can see, the spec attaches this artifactId property.
In this case, when I use the JFrog API to upload artifacts, how should I set the properties?
sh """
curl -sSf -u user:pw -X PUT -T ${zipFile} 'https://${config.artifactory.name}.xxxx:443/artifactory/${path}'
"""
How can I call the PUT API and also set "props": "artifactId=${runID}"?
Any solutions?
First, if you can use the JFrog CLI, you should use it, because it makes things simpler and provides some advanced features out of the box, such as batch parallel uploads/downloads, file specs, attaching properties, build info, authentication, etc.
If you still want to use the Artifactory API directly for setting properties, which is indeed a viable option, you can do one of the following:
Option 1: Add the properties as matrix parameters as part of the upload (deploy) API call.
In your case, it should be something like:
sh """
curl -sSf -u user:pw -X PUT -T ${zipFile} 'https://${config.artifactory.name}.xxxx:443/artifactory/${path};artifactId=${runID}'
"""
Note the ;key=value at the end of the URL.
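If you need more than one property, they can be chained as additional ;key=value segments. A hedged sketch reusing the same placeholders (the extra buildName property here is purely illustrative):
sh """
    # Multiple matrix properties, each appended as ;key=value (URL-encode values containing special characters)
    curl -sSf -u user:pw -X PUT -T ${zipFile} 'https://${config.artifactory.name}.xxxx:443/artifactory/${path};artifactId=${runID};buildName=my-build'
"""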
Option 2: Do a second call, after the upload, to set the item properties.
In your case, it should be something like:
Using the set item properties API -
sh """
curl -sSf -u user:pw -X PUT 'https://${config.artifactory.name}.xxxx:443/artifactory/api/storage/${path}?properties=artifactId=${runID}'
"""
or, using the update item properties API -
sh """
curl -sSf -u user:pw -X PATCH 'https://${config.artifactory.name}.xxxx:443/artifactory/api/metadata/${path}' -d '{ "props": { "artifactId" : "${runID}" } }'
"""
For more information, see:
Working with JFrog Properties
Using Properties in Deployment and Resolution
Artifactory REST API - Item Properties
I have configured a GitHub webhook with the settings below:
Payload URL: https:///github-webhook/
Content Type: application/x-www-form-urlencoded
Events : Pushes, Pull Requests
The Jenkins job that I have is a pipeline job with the following enabled:
Build Trigger: GitHub hook trigger for GITScm polling
With the above configuration, I see that in response to an event (i.e. a push/PR) in GitHub, the Jenkins job gets triggered successfully. In GitHub, under Recent Deliveries for the webhook, I see the details of the payload and a successful response of 200.
I am trying to get the payload in the Jenkins pipeline for further processing. I need some details, e.g. PR URL/PR number, ref type, branch name, etc. for conditional processing in the Jenkins pipeline.
I tried accessing the "payload" variable (as mentioned in other Stack Overflow posts and the documentation available around this) and printing it as part of the pipeline, but I have had no luck yet.
So my question is: how can I get the payload from the GitHub webhook trigger in my Jenkins pipeline?
You need to select Content type: application/json in your webhook in GitHub. Then you would be able to access any variable from the payload GitHub sends, for example $.pull_request.url for the PR URL.
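One common way to get that kind of JSONPath access (an assumption on my part, since the answer above does not name a plugin) is the Generic Webhook Trigger plugin, which maps JSONPath expressions from the payload to environment variables. A minimal, hypothetical sketch:
// Token and variable names are hypothetical; point the GitHub webhook at
// <jenkins-url>/generic-webhook-trigger/invoke?token=my-webhook-token
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [key: 'PR_URL',    value: '$.pull_request.html_url'],
                [key: 'PR_NUMBER', value: '$.pull_request.number'],
                [key: 'REF',       value: '$.ref']
            ],
            token: 'my-webhook-token',
            printPostContent: true
        )
    }
    stages {
        stage('Use payload') {
            steps {
                echo "PR URL: ${env.PR_URL}, PR number: ${env.PR_NUMBER}, ref: ${env.REF}"
            }
        }
    }
}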
Unsure if this is possible.
With the GitHub plugin we use (Pipeline GitHub), the PR number is stored in the variable CHANGE_ID.
The PR URL is pretty easy to generate given the PR number. The branch name is stored in the variable BRANCH_NAME. In the case of pull requests, the global variable pullRequest is populated with lots of data.
Missing information can be obtained from GitHub using their API. Here's an example of checking whether a PR is "behind"; you can adapt it to your specific requirements:
def checkPrIsNotBehind(String repo) {
    withCredentials([usernamePassword(credentialsId: "<...>",
                                      passwordVariable: 'TOKEN',
                                      usernameVariable: 'USER')]) {
        def headers = ' -H "Content-Type: application/json" -H "Authorization: token $TOKEN" '
        def url = "https://api.github.com/repos/<...>/<...>/pulls/${env.CHANGE_ID}"
        def head_sha = sh (label: "Check PR head SHA",
                           returnStdout: true,
                           script: "curl -s ${url} ${headers} | jq -r .head.sha").trim().toUpperCase()
        println "PR head sha is ${head_sha}"
        headers = ' -H "Accept: application/vnd.github.v3+json" -H "Authorization: token $TOKEN" '
        url = "https://api.github.com/repos/<...>/${repo}/compare/${pullRequest.base}...${head_sha}"
        def behind_by = sh (label: "Check PR commits behind",
                            returnStdout: true,
                            script: "curl -s ${url} ${headers} | jq -r .behind_by").trim().toUpperCase()
        if (behind_by != '0') {
            currentBuild.result = "ABORTED"
            currentBuild.displayName = "#${env.BUILD_NUMBER}-Out of date"
            error("The head ref is out of date. Please update your branch.")
        }
    }
}
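A hypothetical call site, assuming this runs in a multibranch PR build where env.CHANGE_ID and the pullRequest global variable are populated:
stage('Check PR is up to date') {
    steps {
        script {
            // 'my-repo' is a placeholder repository name
            checkPrIsNotBehind('my-repo')
        }
    }
}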
I am using the Docker image of Jenkins and have deployed it on a Kubernetes cluster.
I have written a Groovy script to run a curl command on a dynamically created slave in Jenkins, and have also configured the slave to run curl commands, but I am getting the above-mentioned error in my Jenkins console. I have also checked whether curl is installed on my slave node using where curl, which returns /usr/bin/curl.
I have tried running just the curl command on my slave node and it works. But when I call the Groovy script file from Jenkins, it gives the error java.io.IOException: Cannot run program "curl": error=2, No such file or directory.
I would guess Groovy cannot find curl; try calling curl with the full path, as in:
def process = ['/usr/bin/curl', 'https://someurl'].execute()
process.consumeProcessOutput(System.out, System.err)
process.waitFor()
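Another option, if the script actually runs as a pipeline step on that agent (an assumption about your setup), is to prepend the directory containing curl to PATH and shell out instead:
// Hedged sketch: extend PATH for this block only and run curl via the sh step
withEnv(['PATH+CURL=/usr/bin']) {
    sh 'curl -fsS https://someurl'
}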
As an alternative, if you just need to do an HTTP GET request to some URL, you can do this in plain Groovy without the dependency on curl:
def response = 'https://someurl'.toURL().text
< edit after comments >
You could also do a POST request using pure Groovy with something like the following (untested):
def url = 'http://api.duckduckgo.com'.toURL()
def body = 'some data'
url.openConnection().with {
    doOutput = true
    requestMethod = 'POST'
    // set header (must happen before the body is written, i.e. before the connection is opened)
    setRequestProperty "Content-Type", "application/x-www-form-urlencoded"
    // send post body
    outputStream.withWriter { writer ->
        writer << body
    }
    // print response
    println content.text
}
I need some help fetching the GitHub payload into the Jenkinsfile without installing any plugin.
If anyone can provide a Jenkinsfile code snippet to access the GitHub payload from the webhook, it would be of great help.
I am able to trigger the Jenkins job from the GitHub webhook, but I need the payload as well for further processing.
Any help would be appreciated. Thanks.
Please find the Groovy script below:
stage('Pull Request Info') {
    agent {
        docker {
            image 'maven'
            args '-v $HOME/.m2:/home/jenkins/.m2 -ti -u 496 -e MAVEN_CONFIG=/home/jenkins/.m2 -e MAVEN_OPTS=-Xmx2048m'
        }
    }
    steps {
        script {
            withCredentials([usernameColonPassword(credentialsId: "${env.STASH_CREDENTIAL_ID}",
                                                   variable: 'USERPASS')]) {
                def hasChanges = maven.updateDependencies()
                if (hasChanges != '') {
                    def pr1 = sh(
                        script: "curl -s -u ${"$USERPASS"} -H 'Content-Type: application/json' https://xx.example/rest/some/endpoint",
                        returnStdout: true
                    )
                    def pr = readJSON(text: pr1)
                    println pr.fromRef
                }
            }
        }
    }
}
The above script uses curl to fetch the details about the pull request. I have stored the credentials in Jenkins and created an environment variable for the credential ID.
You can replace the URL with your endpoint. You can also modify the script to use jq, if you have jq installed on the machine.
In this case I'm using readJSON to parse the JSON, which is part of the Pipeline Utility Steps plugin. My question would be: why not use the plugin, given that it provides the needed functionality?
If you still don't want to use the plugin, take a look at JSON parsing with Groovy.
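For example, here is a minimal plain-Groovy sketch of parsing the same response without the plugin (JsonSlurperClassic is generally the safer choice in pipelines because its results are serializable):
import groovy.json.JsonSlurperClassic

// pr1 is the raw JSON string returned by the curl call in the script above
def pr = new JsonSlurperClassic().parseText(pr1)
println pr.fromRef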
Using the Jenkins DSL I can create and publish build info using Artifactory.newBuildInfo, but I am looking for the complementary method to read the BuildInfo JSON data that is generated on Artifactory. I have trawled through many resources. Any suggestions would be appreciated.
From the Artifactory REST API it sure looks like you can retrieve build info. I'd expect this to be exposed by the Jenkins plugin as well.
Build Info
Description: Build Info
Since: 2.2.0
Security: Requires a privileged user with deploy permissions (can be anonymous)
Usage: GET /api/build/{buildName}/{buildNumber}
Produces: application/vnd.org.jfrog.build.BuildInfo+json
...
JFrog's project examples on GitHub are a fabulous resource, as is their Jenkins plugin.
From a quick search, it looks like you'd define a download spec and then use the server.download method (see Working with Pipeline Jobs in Jenkins):
def buildInfo1 = server.download downloadSpec
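A hedged sketch of what that could look like, where the repository path is a placeholder and server is assumed to be an Artifactory server instance already configured earlier in the pipeline:
// Placeholder repository/path; `server` would come from e.g. Artifactory.server('<server-id>')
def downloadSpec = """{
    "files": [
        {
            "pattern": "libs-release-local/com/example/app/1.0.0/app-1.0.0.zip",
            "target": "downloads/"
        }
    ]
}"""
def buildInfo1 = server.download spec: downloadSpec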
The previous answer creates a new buildInfo object; it does not download the original buildInfo. I've been trying for days to figure out how to do what the original poster wants to do. The best I've managed is downloading the build info into a hashtable, working with that, and then uploading the changes with REST calls.
def curlstr = "curl -H 'X-JFrog-Art-Api:${password}' ${arturl}api/build/${buildName}/${buildNumber}"
def buildInfoString = sh(
script: curlstr,
returnStdout: true
).trim()
buildInfo = (new JsonSlurperClassic().parseText(buildInfoString))
sh("echo '${JsonOutput.toJson(buildInfo)}'|curl -XPUT -H 'X-JFrog-Art-Api:${password}' -H 'Content-Type: application/json' ${arturl}api/build --upload-file - ")
I was able to modify the buildInfo in the Artifactory repository using this technique. It's not as clean as I would like. I've been unable to get the JFrog CLI to modify existing buildInfo files either.
For whatever it's worth, the intent of what I'm trying to do is to promote a Docker artifact and change its name while doing so. I have found no way to express this to Artifactory that doesn't involve pulling the artifact into Docker and then pushing it again. I'd love it if someone from JFrog could clue me in on how to do it.
UPDATE: Attention! I got the question wrong. This is how you get the local BuildInfo object in a declarative pipeline script.
I managed this by using an internal API from the jenkins-artifactory-plugin.
// found in org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils
/**
 * Get build info as defined in previous rtBuildInfo{...} scope.
 *
 * @param rootWs - Step's root workspace.
 * @param build - Step's build.
 * @param customBuildName - Step's custom build name if exist.
 * @param customBuildNumber - Step's custom build number if exist.
 * @return build info object as defined in previous rtBuildInfo{...} scope or a new build info.
 */
public static BuildInfo getBuildInfo(FilePath rootWs, Run<?, ?> build, String customBuildName, String customBuildNumber, String project) throws IOException, InterruptedException {
    ...
}
With this code you can fetch the BuildInfo inside a declarative pipeline script step.
def buildInfo = org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils.getBuildInfo(new hudson.FilePath(new java.io.File(env.WORKSPACE)), currentBuild.rawBuild, null, null, null);
UPDATE: Beware of custom build names and numbers. If you have defined a custom build name and/or build number, you have to provide it in the getBuildInfo call.