Problem using XrayExportBuilder in Jenkins pipeline

I am trying to integrate Xray into my (declarative) Jenkins pipeline, so I added a step like this:
stage('Export features from Xray') {
    steps {
        catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
            step([$class: 'XrayExportBuilder',
                  filePath: 'repoName/src/test/resources/io/cucumber/suite',
                  filter: '21082',
                  serverInstance: '770abc84-96c5-499c-b0d4-84baec112730'])
        }
    }
}
In the end all features are exported into a single file, which is a problem for the subsequent test execution, because only the first scenario gets processed.
Here is the console log:
Starting XRAY: Cucumber Features Export Task...
##########################################################
#### Xray is exporting the feature files ####
##########################################################
Filter: 21082
Will save the feature files in: repoName/src/test/resources/io/cucumber/suite
###################### Unzipping file ####################
###################### Unzipped file #####################
Successfully exported the Cucumber features
XRAY_RAW_RESPONSE:
XRAY_TEST_EXECS:
XRAY_ISSUES_MODIFIED:
XRAY_IS_REQUEST_SUCCESSFUL: true
XRAY_TESTS:
As far as I know, XrayExportBuilder puts the features into separate files when it is configured through the Jenkins UI as part of a regular (non-pipeline) job.
Is there any way to get the same per-file export in a pipeline job?
Update: it seems the business logic of the Xray cloud version does not support exporting Xray test cases into separate feature files when those test cases are not linked to any Jira task/requirement/story.
The problem appears when one test case is linked to several tickets: the export builder then creates a feature file for each linked Jira issue and, as a result, duplicates the test case in every feature file.

In fact, Xray generates one feature file per requirement, as stated in the documentation: https://docs.getxray.app/display/XRAYCLOUD/Generate+Cucumber+Features
These rules govern all exports of Cucumber features.
If you have a suggestion, please submit it through the support portal: https://jira.getxray.app/servicedesk/customer/portal/2/user/login?destination=portal%2F2.

Related

Best way to store the deployment path in jenkins

I am creating Jenkins pipelines for all our applications, where I want to build and deploy. I am able to achieve that, but all the deployment paths are hard-coded in the pipeline script.
We have around 8 applications and 5 environments, which means I would need to specify 40 different deployment paths in the pipeline scripts.
I would like to know whether there is a better way to store the deployment paths. I thought about storing them in XML and reading that while doing the build, but I am not sure about the implementation.
Looking for some ideas.
script {
    def msbuild = tool name: 'MSBuild', type: 'msbuild'
    def action = "${msbuild}\\msbuild.exe"
    def rootPath = "${WORKSPACE}\\test\\test"
    def slnPath = "${rootPath}\\test.sln"
    def binPath = "${rootPath}\\test\\bin"
    bat "nuget restore \"${slnPath}\""
    bat "\"${action}\" \"${slnPath}\""
    robocopy("\"${binPath}\" \"\\\\t.test.com\\test\" /MIR /xF")
}
What I would do is use a config repository, configured this way:
Each application gets its own config repository (for example: app_config)
Each environment is a different file
The environment file for a given environment has the same name in every repository
Each environment file is YAML (key: value pairs)
Then, in the Jenkins pipeline, I would fetch the repo, read the YAML with readYaml (check the exact step name and usage; it's been a while since I used it), and load it into a map.
Then you use the variables from the map, and that should do it.
The tricky part is how to match the code repositories to the config repositories. As mentioned above, I would use the same name with "_config" appended.
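A minimal sketch of that setup, assuming a config repo per application named "<app>_config", one YAML file per environment, and a hypothetical deployPath key:
stage('Load deployment config') {
    steps {
        dir('config') {
            // Hypothetical config repo URL following the "_config" naming convention
            git url: "https://git.example.com/${params.APP_NAME}_config.git"
        }
        script {
            // readYaml is provided by the Pipeline Utility Steps plugin
            def cfg = readYaml file: "config/${params.ENVIRONMENT}.yaml"
            env.DEPLOY_PATH = cfg.deployPath
        }
    }
}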

How to send a .deb file from jenkins Pipeline to Spinnaker Pipeline?

I have a Jenkins pipeline that produces a build artifact, which is a .deb file.
I was planning to use that .deb file to fill in the Package option in the Bake Configuration stage.
It doesn't work and results in a failure.
How should I go about doing this?
I guess I need to transfer the artifact from the Spinnaker pipeline's build trigger (the Jenkins pipeline), but I don't understand how to do that.
Is this of any use? I can't wrap my head around what they want me to do in order to send the file.
Any help is appreciated. Thanks.
What you have done so far looks correct, so we need to debug why it fails.
Look at the source of the execution that fails (press the "source" link). This JSON structure is the execution context (you probably want to install a JSON-formatter plugin in your browser to read it). Navigate to trigger.artifacts and look for the artifact there. If you find it, please post the result here.
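For reference, the undecorated artifact entries in that list have the same shape as shown in the decorator primer below, roughly like this (the file name is a hypothetical example):
"artifacts": [
  {
    "fileName": "myapp_1.0-0_amd64.deb",
    "displayPath": "myapp_1.0-0_amd64.deb",
    "relativePath": "myapp_1.0-0_amd64.deb"
  }
]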
I think the issue may be the name of the .deb file; it should be named something like simplenodeappinstaller_1.0-0_amd64.deb. A simple fix is to add the Jenkins build number or a timestamp as the release number.
You can also try activating the artifact decorator (set artifact.decorator.enabled: true in igor-local.yml). This makes Igor parse deb and rpm files in a more sensible way, IMHO. See below for more info.
Another thing: Jenkins sometimes puts all test results and other files into the artifacts list, and I believe the default maximum number of artifacts returned is 20. This is configurable under the key BuildArtifactFilter.maxArtifacts (see https://github.com/spinnaker/igor/blob/master/igor-web/src/main/java/com/netflix/spinnaker/igor/build/BuildArtifactFilter.java). Reading the code, though, deb files should already be prioritized over other kinds of artifacts, so I don't really think this is the issue.
Decorator primer
Enabling the artifact decorator will convert this artifact:
fileName: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
displayPath: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
relativePath: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
into
fileName: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
displayPath: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
relativePath: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
reference: "openmotif22-libs-2.2.4-192.1.3.x86_64.rpm"
name: "openmotif22-libs"
type: "rpm"
version: "2.2.4-192.1.3.x86_64"
decorated: "true"
The built-in decorator supports deb and rpm, but it can be extended with config like this:
artifact:
  # This is a feature toggle for decoration of artifacts.
  decorator:
    enabled: true
    fileDecorators:
      - type: docker/image
        decoratorRegex: '(.+):(.+)'
        identifierRegex: '(.+\/.+:.+)'

Jenkins/Groovy move variables out to a config file

I've been asked to move some variables from a Groovy script out into a configuration file. I'm fine using something like:
readFile('../xx-software.cfg').split('\n').each { fileName ->
    sh "wget ${theURL}${fileName}"
}
However, even though I have added xx-software.cfg to the same directory as my Groovy script, it does not become available for use within that Groovy script.
I hope this makes sense!?
How can I move my variables out into a config file to make it easier for the application support team to make future edits without changing the code?
There are a few approaches you could use.
Firstly, the file format for the configuration and how to read the data into variables: you could use Java Properties format, YAML, or JSON, all of which are handled by the Pipeline Utility Steps plugin. You can read the file with these steps:
readProperties
readYaml
readJSON
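For example, a minimal sketch with readProperties, assuming the config were a simple key=value file (the file and key names below are hypothetical):
node {
    // Assumes a file like my-settings.cfg containing lines such as: theURL=https://example.com/downloads/
    def props = readProperties file: 'my-settings.cfg'
    echo "Base URL: ${props['theURL']}"
}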
The next problem is how to make the file available to your pipeline so it can be read from the workspace with these steps. Possibilities are:
In source control with your pipeline code; it is then fetched along with the pipeline.
In a separate source control repository for configuration; your pipeline will need a step to fetch it.
Via the Jenkins Config File Provider plugin, which has a step to provide a config file managed in Jenkins.
As a Custom Tool zipped archive on a binary server like Artifactory; the custom tool pipeline steps can make it available to the pipeline.
The Config File Provider option might provide an easy way to have a file that can be updated, but there won't be any version control of it.
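A sketch of the Config File Provider route (the fileId below is a placeholder for a config file you create under Manage Jenkins > Managed files):
node {
    // configFileProvider exposes the managed file to the build for the duration of the block
    configFileProvider([configFile(fileId: 'xx-software-cfg', variable: 'SOFTWARE_CFG')]) {
        def props = readProperties file: env.SOFTWARE_CFG
        echo "Base URL: ${props['theURL']}"
    }
}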

Can I use step() to create any build-step from any plugin?

I am currently trying to transform my former GUI build steps into a pipeline Groovy script. I previously had a step from the Valgrind plugin that published the results of a Valgrind run.
I found the "step: General Build Step" function in the Pipeline Syntax snippet generator and tried to use it to create the Valgrind publish-results step with the following code:
// file pipeline.groovy
import org.jenkinsci.plugins.valgrind.*;
...
node('Publish Valgrind results') {
    step([$class: 'ValgrindPublisher',
          ValgrindPublisherConfig: [$class: 'ValgrindPublisherConfig',
                                    pattern: 'CppCodeBase/Generated/ValgrindOutput/**']])
}
...
When I run this, Jenkins complains:
java.lang.UnsupportedOperationException: no known implementation of interface jenkins.tasks.SimpleBuildStep is named ValgrindPublisher
So I am not sure whether the problem is that ValgrindPublisher only derives from BuildStep and not from SimpleBuildStep, or whether my import is faulty.
The more general question would be:
Is it possible to run any build step from a plugin in a pipeline script, and if so, where can I find examples?
No, you cannot. You can only use steps from pipeline-compatible plugins, and it appears that your Valgrind plugin is not (yet) pipeline-compatible.
You can check this answer for similar information.
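For comparison, the same step() pattern does work with publishers that implement SimpleBuildStep, for example the JUnit plugin's result archiver:
node {
    // Works because JUnitResultArchiver implements jenkins.tasks.SimpleBuildStep
    step([$class: 'JUnitResultArchiver', testResults: '**/build/test-results/*.xml'])
}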

What's the best way to bulk update Jenkins projects?

We have hundreds of Jenkins projects (mostly created from a few templates), and we often need to make the same change to all of them. For example, today I need to add a post-build step that deletes the workspace at the end. Next, I will need to change the step that copies the build result to a shared drive so it publishes to a Nexus repository instead.
What's the best way to apply this kind of bulk change to Jenkins projects?
You could use the Configuration Slicing Plugin, which is designed for exactly this purpose.
It supports many configuration options.
The REST API is quite powerful. The following sequence worked for me.
In a loop over all relevant projects (the list of projects is available via e.g. /api/xml?tree=jobs[name]):
download config.xml via /job/{name}/config.xml
edit it using your favorite scriptable XML editor (mine was xmlstarlet)
upload the new config.xml via /job/{name}/config.xml
Some random notes:
make a *BACKUP* before doing anything
I could post a bash script example if anyone is interested
Good luck!
EDIT> Example bash script:
#!/bin/bash
jenkinsUrlBase='http://user:token@jenkins'

callJenkins() { # funcPath
  curl --silent --show-error -g "${jenkinsUrlBase}${1}"
}

postJenkinsFile() { # funcPath fileName
  curl --silent --show-error -g -d "@${2}" "${jenkinsUrlBase}${1}"
}

callJenkins '/api/xml?tree=jobs[name]' | xmlstarlet sel -t -v '//hudson/job/name' | while read projectName ; do
  echo "Processing ${projectName}..."
  origFile="${projectName}_old.xml"
  newFile="${projectName}_new.xml"
  callJenkins "/job/${projectName}/config.xml" > "$origFile"
  echo " - Updating artifactory url..."
  cat "$origFile" \
    | xmlstarlet ed -P -u '//maven2-moduleset/publishers/org.jfrog.hudson.ArtifactoryRedeployPublisher/details/artifactoryUrl' -v "http://newServer/artifactory" \
    > "${newFile}"
  if false ; then
    echo " - Committing new config file..."
    postJenkinsFile "/job/${projectName}/config.xml" "$newFile"
  else
    echo " - Dry run: not committing new config file"
  fi
done
Groovy is by far the best way to bulk-update jobs. You may have to do a little digging into the Jenkins and plugin APIs to figure out which calls to make, but the script console (http://yourJenkinsUrl/script) provides an easy way to play around with the code until you get it right.
To get you started, you can add or remove post-build steps by calling the getPublishersList() method on a job and then calling its add and remove methods.
def publishersList = Jenkins.instance.getJob("JobName").getPublishersList()
publishersList.removeAll { it.class == whatever.plugin.class }
publishersList.add(new PluginConstructor())
If you're not sure which publisher class you need in order to delete the workspace, I would suggest manually adding the desired configuration to one job and then running getPublishersList() from the script console against that job. You will see the class you are working with in the list, and you can then look at the API to see what is required to construct it.
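A sketch of that inspection in the script console (replace "JobName" with one of your jobs):
// Print the class of every configured publisher so you know what to construct
Jenkins.instance.getJob("JobName").getPublishersList().each { publisher ->
    println publisher.class.name
}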
You can then iterate through all your jobs and add the publisher doing something like this:
Jenkins.instance.getView("All Jobs").items.each { job ->
    // Maybe some logic here to filter out specific jobs
    job.getPublishersList().add(new PluginConstructor())
}
Alternatively, you can use the Jenkins CLI or the REST API, but in order to update post-build actions you will have to modify the project configuration XML (which isn't trivial to do programmatically) and then overwrite the job configuration with the new configuration file.
You can edit the config.xml files with your favorite text-processing tool (I use Python) and then reload the Jenkins configuration.
In my setup the jobs are stored in ~/.jenkins/jobs/*/config.xml.
See: https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins
Here is a small example that updates foo to bar in a parameter description:
</com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition>
<hudson.model.StringParameterDefinition>
  <name>additional_requirements</name>
  <description>foo</description>
  ...
Script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, unicode_literals, print_function

import sys
from lxml import etree

def change_parameter_description(config_xml_path, parameter_name, new_description):
    tree = etree.parse(config_xml_path)
    for tag in tree.findall('.//hudson.model.StringParameterDefinition'):
        name_tag = tag.find('./name')
        if not name_tag.text == parameter_name:
            continue
        description = tag.find('./description')
        description.text = new_description
    tree.write(config_xml_path)

for config_xml_path in sys.argv[1:]:
    change_parameter_description(config_xml_path, 'additional_requirements', 'bar')
In this small example a regex would work too, but when things span several lines it is better to work with XML tools :-)
The other answers are great, but if you use pipelines, I'd suggest Pipeline Shared Libraries.
We keep all our jobs in a Git repository. To develop a new feature we try it in a branch, since it is possible to point a single job at a specific branch. When we need to update the jobs, we just merge into master. The jobs are treated as code, with a proper release process.
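A minimal sketch of what an individual job's Jenkinsfile can shrink to once the logic lives in a shared library (the library name and standardBuild step are hypothetical, defined in the library's vars/standardBuild.groovy):
// Jenkinsfile
@Library('build-pipelines@master') _
standardBuild(app: 'myApp', environment: 'prod')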
