Tried searching a few sites, including "Parameterized remote job is triggered but console says failure".
I am attempting to migrate a token-based job from the existing method of calling the remote job (using curl) to a plugin-based call, as follows:
Remote Jenkins Setup: (myserver:8080) Job: MyPipelineFirstJob
Under Job configuration: Build Triggers --> "Trigger builds remotely (e.g., from scripts)" --> Authentication Token --> 108801
Existing job on the local Jenkins:
curl -v --silent -X POST http://myserver:8080/job/MyPipelineFirstJob/buildWithParameters --data token=108801 --data RELEASE=9.2 --data ARCHITECTURE=ppc64le --data IP=9.99.999.99
New job on local Jenkins:
Now I need to translate the above to use the parameterized-remote-trigger-plugin. So, apart from the Remote Host etc., I have chosen the Auth type as follows in the global configuration ("Parameterized Remote Trigger Configuration"):
"Enable 'build token root' support" is unchecked -- I don't know what this means.
Authentication --> Bearer Token Authentication
I see a WARNING message: "Address looks good, but a connection could not be established."
I am calling the function below to trigger the remote job:
def handle = triggerRemoteJob(remoteJenkinsName: 'Perf_Jenkins_Server', job: 'MyPipelineFirstJob/buildByToken/buildWithParameters', auth: "108801", parameters: 'RELEASE=HMC9.2.951.2,ARCHITECTURE=ppc64le,HMC_MACHINE=9.99.999.9998')
I have passed the string "108801" based on this site https://www.jenkins.io/doc/pipeline/steps/Parameterized-Remote-Trigger/ which says:
BearerTokenAuth
token (optional)
Type: String
Build failure: With the above configuration, when I build the job, I get this error:
22:07:12 java.lang.ClassCastException: class org.jenkinsci.plugins.ParameterizedRemoteTrigger.pipeline.RemoteBuildPipelineStep.setAuth() expects class org.jenkinsci.plugins.ParameterizedRemoteTrigger.auth2.Auth2 but received class java.lang.String
22:07:12 at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:492)
22:07:12 at org.jenkinsci.plugins.structs.describable.DescribableModel.injectSetters(DescribableModel.java:429)
22:07:12 at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:331)
22:07:12 at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:269)
22:07:12 at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:179)
22:07:12 at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
22:07:12 at sun.reflect.GeneratedMethodAccessor493.invoke(Unknown Source)
22:07:12 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
22:07:12 at java.lang.reflect.Method.invoke(Method.java:508)
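The ClassCastException says that setAuth() expects an Auth2 object, so a plain String cannot be passed as auth. For illustration, a minimal sketch of a call that passes an Auth2 instance instead, using the CredentialsAuth symbol from the plugin's pipeline documentation ('remote-jenkins-creds' is a placeholder credentials ID, not something from this setup):
// Sketch only: auth takes an Auth2 instance such as CredentialsAuth or TokenAuth,
// not a plain String. 'remote-jenkins-creds' is a hypothetical credentials ID.
def handle = triggerRemoteJob(
    remoteJenkinsName: 'Perf_Jenkins_Server',
    job: 'MyPipelineFirstJob',
    auth: CredentialsAuth(credentials: 'remote-jenkins-creds')
)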
So I tried removing the auth field and passing the token as part of the parameters instead:
def handle = triggerRemoteJob(remoteJenkinsName: 'Perf_Jenkins_Server', job: 'MyPipelineFirstJob/buildByToken/buildWithParameters', parameters: 'token="108801",RELEASE="HMC9.2.951.2",ARCHITECTURE=ppc64le,HMC_MACHINE="9.99.999.9998"')
Note: I have also attempted to add double quotes around the parameter values. Having made these changes and attempted a build, I get the following error:
22:19:12 ################################################################################################################
22:19:12 Parameterized Remote Trigger Configuration:
22:19:12 - job: MyPipelineFirstJob/buildByToken/buildWithParameters
22:19:12 - remoteJenkinsName: Perf_Jenkins_Server
22:19:12 - parameters: [token="108801",RELEASE="HMC9.2.951.2",ARCHITECTURE=ppc64le,HMC_MACHINE="9.99.999.998"]
22:19:12 - blockBuildUntilComplete: true
22:19:12 - connectionRetryLimit: 5
22:19:12 - trustAllCertificates: false
22:19:12 ################################################################################################################
22:19:12 Connection to remote server failed [404], waiting to retry - 10 seconds until next attempt. URL: http://myserver:8080/job/MyPipelineFirstJob/job/buildByToken/job/buildWithParameters/api/json, parameters:
22:19:22 Retry attempt #1 out of 5
22:19:22 Connection to remote server failed [404], waiting to retry - 10 seconds until next attempt. URL: http://myserver:8080/job/MyPipelineFirstJob/job/buildByToken/job/buildWithParameters/api/json, parameters:
22:19:32 Retry attempt #2 out of 5
Did you notice the additional "job" segment, "buildByToken/job/buildWithParameters", in the above output? Not sure why! (Judging by the URL, the plugin treats each '/' in the job value as a folder separator and inserts '/job/' between segments, so the buildByToken endpoint path should not be part of the job name.)
Questions:
Is the authentication type "Bearer Token Authentication" the correct option to match the existing token-based method?
Have I passed the parameters correctly?
How do I overcome the failures seen above?
Found the solution: The parameters need to be separated by newlines, not commas or spaces. So I added a '\n' character between each parameter as shown below, and it worked!
def handle = triggerRemoteJob(remoteJenkinsName: 'Perf_Jenkins_Server', job: 'MyPipelineFirstJob', parameters: 'token=108801\nRELEASE=9.2.951.2\nARCHITECTURE=x86_64\nMACHINE_IP=9.99.999.998')
Ref: The below link has an example that uses "\n" as parameter separator.
https://github.com/jenkinsci/parameterized-remote-trigger-plugin/blob/master/README_PipelineConfiguration.md
Note: The above link refers to the Snippet Generator. However, that generator doesn't support "triggerRemoteJob" yet! Maybe I would have solved my issue faster if it did.
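For readability, the newline-separated parameter string from the working call above can also be assembled from a list; a small sketch based on the same values:
// Same call as the solution above, with the parameters joined by '\n' from a list.
def params = [
    'token=108801',
    'RELEASE=9.2.951.2',
    'ARCHITECTURE=x86_64',
    'MACHINE_IP=9.99.999.998'
].join('\n')
def handle = triggerRemoteJob(remoteJenkinsName: 'Perf_Jenkins_Server', job: 'MyPipelineFirstJob', parameters: params)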
Jenkins Version: Jenkins 2.249.1
Parameterized Remote Trigger Plugin Version: 3.1.5.1
Related
I'm using the code below to trigger a remote job in a Jenkins pipeline. After the job is triggered, I need to parse the logs and retrieve some information. The code uses the handle.lastLog() function, but it's returning null. Is there a way to get the logs from triggerRemoteJob?
def handle = triggerRemoteJob (
    auth: TokenAuth(apiToken:....'),...,
    job: build_job,
    parameters: parameters1,
    useCrumbCache: true,
    useJobInfoCache: true,
    overrideTrustAllCertificates: false,
    trustAllCertificates: true
)
def status = handle.getBuildStatus()
echo "--------------------------------------------------------------------------------"
echo "Log: " + handle.lastLog()
echo "--------------------------------------------------------------------------------"
There are several ways to parse the remote job's console log by saving the output to a file:
One way is to use the handle object to get the remote build URL:
def remoteBuildOutput = handle.getBuildUrl().toString()+"consoleText"
sh "curl -o remote_build_output.txt ${remoteBuildOutput} && cat remote_build_output.txt"
Another way is to output the last build log:
sh "curl -o remote_last_build_output.txt ${env.JENKINS_URL}/job/build_job/lastBuild/consoleText && cat remote_last_build_output.txt"
You can use env.JENKINS_URL if it's set; if not, replace it with your Jenkins URL.
build_job is the name of your remote job.
Note that if you're using the /lastBuild/ API call, you can't guarantee that the last build is the same build that was triggered by your job (build_job might be triggered by another system or person at the same time).
One last way is to enable the Parameterized Remote Trigger Plugin logging via the enhancedLogging option.
If set to true, the console output of the remote job is also logged.
def handle = triggerRemoteJob (
    job: build_job,
    enhancedLogging: true,
    ...
)
You can then save the current job's output to a file and parse whatever you want:
sh "curl -o current_build_output.txt ${env.JENKINS_URL}job/${env.JOB_NAME}/${currentBuild.number}/consoleText && cat current_build_output.txt"
${currentBuild.number} can also be replaced by the ${env.BUILD_NUMBER} variable
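Once the output file exists, parsing it is plain pipeline Groovy. A minimal sketch ('SOME MARKER' is a placeholder for whatever text you are looking for):
// Read the saved console log and keep only the lines of interest.
def output = readFile('current_build_output.txt')
def interesting = output.readLines().findAll { it.contains('SOME MARKER') }
echo "Matched lines: ${interesting}"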
I am using the Jenkins Docker image, deployed on a Kubernetes cluster.
I have written a Groovy script to run a curl command on a dynamically created slave, and have also configured the slave to run curl commands, but I am getting the above-mentioned error in my Jenkins console. I have also checked whether curl is installed on my slave node using which curl; it returns /usr/bin/curl.
I have tried running just the curl command on my slave node, and it works. But when I call the Groovy script file from Jenkins, it gives the error java.io.IOException: Cannot run program "curl": error=2, No such file or directory.
I would guess Groovy cannot find curl; try calling curl with the full path, as in:
def process = ['/usr/bin/curl', 'https://someurl'].execute()
process.consumeProcessOutput(System.out, System.err)
process.waitFor()
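If you would rather capture the output as a string than stream it to stdout/stderr, a variant of the same approach:
// Capture stdout and stderr into buffers instead of printing them directly.
def out = new StringBuffer()
def err = new StringBuffer()
def proc = ['/usr/bin/curl', '-s', 'https://someurl'].execute()
proc.consumeProcessOutput(out, err)
proc.waitFor()
println "curl output: ${out}"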
As an alternative, if you just need to do an HTTP GET request to some URL, you can do this in plain Groovy without the dependency on curl:
def response = 'https://someurl'.toURL().text
Edit after comments:
You could also do a POST request using pure Groovy and something like the following (untested):
def url = 'http://api.duckduckgo.com'.toURL()
def body = 'some data'
url.openConnection().with {
    doOutput = true
    requestMethod = 'POST'
    // set the header first: with HttpURLConnection, headers can no longer
    // be set once the output stream has been written to
    setRequestProperty "Content-Type", "application/x-www-form-urlencoded"
    // send post body
    outputStream.withWriter { writer ->
        writer << body
    }
    // print response
    println content.text
}
I created a pipeline which should trigger a job on a different Jenkins server.
I use the Remote Trigger Plug-in and am able to trigger the job with the following statement (currently the only statement in my pipeline):
triggerRemoteJob enhancedLogging: true, job: 'myJob', maxConn: 1, remoteJenkinsName: 'MyJenkins'
But after the job is triggered, the pipeline tries to connect to the remote job via localhost, which obviously fails.
I tried disabling some options and found that it works if I disable blockBuildUntilComplete.
From the log I got the following with the option enabled:
################################################################################################################
Parameterized Remote Trigger Configuration:
- job: myJob
- remoteJenkinsName: myJenkins
- parameters:
- blockBuildUntilComplete: true
- connectionRetryLimit: 5
################################################################################################################
Triggering non-parameterized remote job 'http://x.x.x.x:8080/job/myJob'
Using globally defined 'Credentials Authentication' as user 'myUser' (Credentials ID 'myCredentials')
Triggering remote job now.
CSRF protection is disabled on the remote server.
Remote job queue number: 47
Remote build started!
Remote build URL: http://localhost:8080/job/myJob /8/
Remote build number: 8
Blocking local job until remote job completes.
calling remote without locking...
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #1 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #2 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #3 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #4 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #5 out of 5
Max number of connection retries have been exeeded.
(I changed the names and the IP address of my Jenkins server.)
I must perform some steps after the remote job finishes which depend on its results, so I must wait until the job is done.
Is there a way to do this without the block option, or what must I do to get the option working?
I checked the releases of the plugin and found an improvement in release 3.0.8 called "Extend POST timeout & avoid re-POST after timeout".
I reviewed the change, and since it looked relevant to our problem, I updated our plugin (v3.0.7) to the current version.
Now the error no longer appears.
In my company, I'm running a pipeline-as-code project in which my Jenkinsfile gets a dynamic IP from a shell script and injects it into a PrivateIP environment variable. The next step invokes a custom (in-house developed) plugin that accepts a "servers" argument as IP(s), but apparently does not parse it correctly, because the error output indicates an unresolvable host.
I've echoed the PrivateIP variable immediately above the plugin step, and it definitely outputs the correct value.
The plugin works if given a hard-coded value for the IP, but fails if given anything dynamic. Built-ins such as dir don't give similar problems. I haven't been able to get hold of the plugin developer to report the issue, nor have I gotten any responses to my issue. Is this typical for custom plugins? I've seen some documentation in the plugin developer docs suggesting that only the initial environment stage is respected in pipeline plugins, and that otherwise a @StepContextParameter is needed to get a contextual environment.
stage('Provision') {
    environment {
        PrivateIP = """${sh(
            returnStdout: true,
            script: '${WORKSPACE}/cicd/parse-ip.sh'
        )}"""
    }
    steps {
        echo "Calling Playbook. PrivateIP: ${PrivateIP}"
        customPluginName env: 'AWS',
            os: 'Linux',
            parameter: '',
            password: '',
            playbook: 'provision.yaml',
            servers: PrivateIP,
            gitBranch: '{my branch}',
            gitUrl: '{URL}',
            username: '{custom user}'
    }
}
I'd expect the variable to be respected and the Ansible playbook to execute successfully.
Error
>>> fatal: [ansible_ssh_user={custom user}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_ssh_user={custom user}: Name or service not known\r\n", "unreachable": true}
If this is in fact the default behavior of custom plugins (not necessarily a bug), what are good workarounds?
Using the Jenkins DSL I can create and publish build info using Artifactory.newBuildInfo, but I am looking for the complementary method to read the BuildInfo JSON data that is generated on Artifactory. I have trawled through many resources. Any suggestions would be appreciated.
From the Artifactory REST API it sure looks like you can retrieve buildInfo. I'd expect this to be exposed by the Jenkins plugin as well.
Build Info
Description: Build Info
Since: 2.2.0
Security: Requires a privileged user with deploy permissions (can be anonymous)
Usage: GET /api/build/{buildName}/{buildNumber}
Produces: application/vnd.org.jfrog.build.BuildInfo+json
...
JFrog's project examples on GitHub are a fabulous resource, as is their Jenkins plugin.
From a quick search it looks like you'd define a download spec and then use the server.download method (see "Working with Pipeline Jobs in Jenkins"):
def buildInfo1 = server.download downloadSpec
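For context, the download spec is a JFrog file spec in JSON form; a minimal sketch, where the pattern and target values are placeholders to adapt to your repository layout:
// Hypothetical file spec: adjust pattern/target to your repository.
def downloadSpec = """{
  "files": [
    {
      "pattern": "my-repo/path/to/artifact-*.zip",
      "target": "downloaded/"
    }
  ]
}"""
def buildInfo1 = server.download downloadSpec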
The previous answer creates a new buildInfo; it does not download the original buildInfo. I've been trying for days to figure out how to do what the original poster wants to do. The best I've managed is downloading the buildInfo into a hashtable, working with that, then uploading the changes via REST calls.
import groovy.json.JsonOutput
import groovy.json.JsonSlurperClassic

def curlstr = "curl -H 'X-JFrog-Art-Api:${password}' ${arturl}api/build/${buildName}/${buildNumber}"
def buildInfoString = sh(
script: curlstr,
returnStdout: true
).trim()
buildInfo = (new JsonSlurperClassic().parseText(buildInfoString))
sh("echo '${JsonOutput.toJson(buildInfo)}'|curl -XPUT -H 'X-JFrog-Art-Api:${password}' -H 'Content-Type: application/json' ${arturl}api/build --upload-file - ")
I was able to modify the buildInfo in the Artifactory repository using this technique. It's not as clean as I would like. I've been unable to get the JFrog CLI to modify existing buildInfo files either.
For whatever it's worth, the intent of what I'm trying to do is promote a Docker artifact and change its name while doing so. I've found no way to express this to Artifactory that doesn't involve pulling the artifact into Docker and then pushing it again. I'd love it if someone from JFrog could clue me in on how to do it.
UPDATE: Attention! I got the question wrong. This is how you get the local BuildInfo object in a declarative pipeline script.
I managed this by using an internal API from the jenkins-artifactory-plugin.
// found in org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils
/**
 * Get build info as defined in previous rtBuildInfo{...} scope.
 *
 * @param rootWs - Step's root workspace.
 * @param build - Step's build.
 * @param customBuildName - Step's custom build name if exist.
 * @param customBuildNumber - Step's custom build number if exist.
 * @return build info object as defined in previous rtBuildInfo{...} scope or a new build info.
 */
public static BuildInfo getBuildInfo(FilePath rootWs, Run<?, ?> build, String customBuildName, String customBuildNumber, String project) throws IOException, InterruptedException {
...
}
With this code you can fetch the BuildInfo inside a declarative pipeline script step.
def buildInfo = org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils.getBuildInfo(new hudson.FilePath(new java.io.File(env.WORKSPACE)), currentBuild.rawBuild, null, null, null);
UPDATE: Beware of custom build names and numbers. If you have defined a custom build name and/or build number, you have to provide it in the getBuildInfo call.
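Based on the signature quoted above, a call with custom values might look like this (the build name and number here are placeholders):
// Sketch: supplying customBuildName and customBuildNumber per the signature above.
def buildInfo = org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils.getBuildInfo(
    new hudson.FilePath(new java.io.File(env.WORKSPACE)),
    currentBuild.rawBuild,
    'my-custom-build-name',  // customBuildName (placeholder)
    '42',                    // customBuildNumber (placeholder)
    null                     // project
)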