For data mining purposes, I want to get the range of build numbers of a Jenkins job that were built on a particular day. Is there a plugin that accomplishes this, or any other possible way?
Thanks,
Nick
The built-in REST JSON API will give you a list of the builds for a particular job: http://jenkins:8080/job/JOB_NAME/api/json?tree=builds[fullDisplayName,id,number,timestamp]&pretty=true
Produces something like:
{
  "builds" : [
    {
      "fullDisplayName" : "JOB_NAME #113",
      "id" : "2014-10-31_23-05-20",
      "number" : 113,
      "timestamp" : 1414821920808
    },
    {
      "fullDisplayName" : "JOB_NAME #112",
      "id" : "2014-10-31_17-26-39",
      "number" : 112,
      "timestamp" : 1414801599000
    },
    ....
If your build ids are the basic date-stamp (as above), you can do a little string processing to filter the results. Otherwise, you can convert the timestamp (milliseconds since the epoch) to the appropriate date and filter on that.
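For example, a minimal Python sketch that collects the build numbers for one day from the API above (the host and job name are placeholders from the example URL):

import datetime
import requests

JENKINS = "http://jenkins:8080"  # placeholder host from the URL above
JOB = "JOB_NAME"                 # placeholder job name
day = datetime.date(2014, 10, 31)

url = f"{JENKINS}/job/{JOB}/api/json?tree=builds[number,timestamp]"
builds = requests.get(url).json()["builds"]

# Jenkins timestamps are milliseconds since the epoch
numbers = [b["number"] for b in builds
           if datetime.date.fromtimestamp(b["timestamp"] / 1000) == day]
print(numbers)  # e.g. [113, 112]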
Most Jenkins pages have a REST API link at the bottom that provides more documentation, though you often need to experiment with the API to figure out what details it can provide.
Update: As @Nick discovered, the builds result is limited to the latest 100 elements by default. According to this Jenkins issue, you can use the hidden allBuilds element to retrieve "all builds". So if you need all builds, use: http://jenkins:8080/job/JOB_NAME/api/json?tree=allBuilds[fullDisplayName,id,number,timestamp]&pretty=true
Jenkins 1.568 also introduced pagination in the API results, so it's possible to retrieve results by range. The Jenkins REST API link describes the syntax if your Jenkins version supports it.
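If I recall the tree syntax correctly, appending a {M,N} range selector to the array element slices the results; check your server's REST API page to confirm it is supported:

http://jenkins:8080/job/JOB_NAME/api/json?tree=allBuilds[number,timestamp]{0,100}&pretty=true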
There is also the Global Stats Plugin, which has a JSON REST API.
I developed automated tests in Java. The XML test report is generated with JUnit 5 and xray-junit-extension. This XML is currently being integrated into Jira/Xray, but unfortunately the labels are not being added to the issue.
I believe the labels could be integrated in two different ways: 1) through this XML test report, or, alternatively, 2) through the Jenkins pipeline itself.
My XML contains the following property:
Click here to see the screenshot
This is similar to what is described in the Xray documentation:
Click here to see the screenshot of the documentation
https://docs.getxray.app/display/XRAY/Taking+advantage+of+JU...
The only difference is that in the Xray documentation there is a wrapper around the tags property; in my XML I do not have that wrapper.
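From memory, the wrapped form in the documentation looks roughly like this (a sketch; the wrapper is presumably the standard JUnit <properties> element, and the tag values here are made up):

<properties>
  <property name="tags" value="label1,label2"/>
</properties>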
Do you happen to have any idea on why the label is not being added in Jira/Xray?
The second approach would be using the XrayImportBuilder to add a label, using importInfo
step([$class: 'XrayImportBuilder',
      endpointName: '/junit',
      importFilePath: '/reports/*.xml',
      projectKey: 'P34AMA',
      importToSameExecution: 'false',
      //testExecKey: 'TSTLKS-753',
      serverInstance: '3146a388-d399-4e55-ae28-8c65404d6f9d',
      credentialId: '55287529-194d-4e91-9964-7d740d8d2f61',
      // importInfo must be a single, valid JSON string; single quotes avoid escaping
      importInfo: '{"fields": {"labels": ["label"]}}',
      //importInfo: '{"fields": {"labels": ["EOD"]}}'
])
Problem using XrayImportBuilder in Jenkins
But when adding importInfo to my pipeline, it fails with an error:
Click here to see the Jenkins logs
Click here to see the Jenkins Import Step
Is anyone aware of any other way to add a label to Jira automatically without using the hudson.plugins.jira.JiraIssueUpdater?
Thank you very much for your help!
Using the REST API I can easily obtain an object corresponding to a given build. But at the time that build was queued, it was given build variables: some were set at queue time, some were inherited from the build definition.
So far I have failed to figure out how I can access these build variables. The build object does not seem to contain them.
I can clearly see the top-level template parameters in the parameters property. But where are the build variables?
Edit 1
We use the on-prem version of Azure DevOps 2020. The highest API version it supports is 6.1-preview, and "Runs - Get" in that version does not seem to return much. Please observe:
C:\> Invoke-RestMethod "$TfsInstanceUrl/DFDevOps/_apis/pipelines/$($b.definition.id)/runs/$($b.id)?api-version=6.1-preview" -UseDefaultCredentials
_links : #{self=; web=; pipeline.web=; pipeline=}
pipeline : #{url=https://tdc1tfsapp01.dayforce.com/tfs/DefaultCollection/d85566a3-9e95-4891-9fd8-42750a0bc250/_apis/pipelines/8781?revision=19; id=8781; revision=19;
name=PRBuild Stress Test; folder=\}
state : completed
result : succeeded
createdDate : 2022-02-11T15:57:07.4271331Z
finishedDate : 2022-02-12T00:46:22.6090285Z
url : https://tdc1tfsapp01.dayforce.com/tfs/DefaultCollection/d85566a3-9e95-4891-9fd8-42750a0bc250/_apis/pipelines/8781/runs/1399783
resources : #{repositories=}
id : 1399783
name : 20220211.2
C:\>
Does it mean the variables are only available in Azure DevOps Services?
You can try using the "Runs - Get" API to get the build.
Normally, in the response body of this API, you will see the pipeline variables listed in the variables object.
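For example, a minimal Python sketch of that call (the organization URL, project, and credentials are placeholders; the api-version is an assumption, and as the question observes, older on-prem versions may omit the variables object entirely):

import requests

ORG_URL = "https://dev.azure.com/my-org"  # placeholder organization URL
PROJECT = "my-project"                    # placeholder project
PIPELINE_ID = 8781                        # pipeline id, as in the question's output
RUN_ID = 1399783                          # run id, as in the question's output

url = f"{ORG_URL}/{PROJECT}/_apis/pipelines/{PIPELINE_ID}/runs/{RUN_ID}"
run = requests.get(url,
                   params={"api-version": "6.0-preview.1"},
                   auth=("", "MY_PAT")).json()  # basic auth with a personal access token

# When the server returns them, queue-time variables show up here:
print(run.get("variables"))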
I'm trying to build a CLI app to show a list of Dart versions and allow the user to select one to install, and then switch between them.
Note: there is a Flutter tool (fvm) that can switch between Flutter versions (and the embedded Dart tooling), but my tool is specifically for Dart and needs versions as well as channels.
The fvm tool uses the following endpoint:
https://storage.googleapis.com/flutter_infra/releases/releases_linux.json
But I can't find an equivalent.
Is there any method of obtaining a list of versions for each of the Dart channels?
I've found:
https://storage.googleapis.com/dart-archive/channels
but you need to know the full URL, as I can't find any endpoint that provides a list.
I'm hoping to avoid screen scraping.
You can look at how the Dart archive page retrieves all the information and use the same endpoints.
The endpoint returns data in a format such as:
{
  "kind": "storage#objects",
  "prefixes": [
    "channels/<stable|beta|dev>/release/1.11.0/",
    ...,
    "channels/<stable|beta|dev>/release/2.9.3/",
    "channels/<stable|beta|dev>/release/29803/", // You might need to filter out results such as this
    ...,
    "channels/<stable|beta|dev>/release/latest/"
  ]
}
Note: The results are not ordered in any way
URL:
https://www.googleapis.com/storage/v1/b/dart-archive/o?delimiter=%2F&prefix=channels%2F<stable|beta|dev>%2Frelease%2F&alt=json
Replace <stable|beta|dev> with the channel you want the info for.
If you need to collect info about a version you can use:
https://storage.googleapis.com/dart-archive/channels/<stable|beta|dev>/release/< VERSION NUMBER | latest>/VERSION
which will return a JSON object like:
{
  "date": "2020-11-11",
  "version": "2.10.4",
  "revision": "7c148d029de32590a8d0d332bf807d25929f080e"
}
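Putting the two endpoints together, a minimal Python sketch that lists the versions for a channel and fetches the details of one release (the semver filter is one way to drop entries like the 29803 noted above):

import re
import requests

CHANNEL = "stable"  # or "beta" / "dev"

list_url = ("https://www.googleapis.com/storage/v1/b/dart-archive/o"
            f"?delimiter=%2F&prefix=channels%2F{CHANNEL}%2Frelease%2F&alt=json")
prefixes = requests.get(list_url).json()["prefixes"]

# Each prefix looks like "channels/stable/release/2.10.4/"
versions = [p.split("/")[-2] for p in prefixes]
# Keep x.y.z-style entries; drops "latest" and raw build numbers like "29803"
versions = [v for v in versions if re.fullmatch(r"\d+\.\d+\.\d+(-.+)?", v)]

info = requests.get("https://storage.googleapis.com/dart-archive/channels/"
                    f"{CHANNEL}/release/latest/VERSION").json()
print(len(versions), info["version"], info["date"])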
The tags on the GitHub repository for the SDK (https://github.com/dart-lang/sdk/tags) appear to have the releases tagged reasonably usefully. The downside is that the repository weighs in at 1.3 GB, and there's no easy way to get a workable shallow clone of it.
I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply, as it uses a UDF and also runs in batch mode, overwriting any existing BigQuery table rather than appending to it.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file), but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time

from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials

PROJECT = "[project]"  # placeholder project id


def dummy(event, context):
    pass


def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()
    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{"location1":"[my bucket]/{filename}"}}'.format(filename=file_name),
            "outputLocations": '{"location1":"[project]:[dataset].[table]", [... other locations]}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])
    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example indicates an invalid field ("location1"), which I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells but in general if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various pipeline options available, including the location parameters needed [1].
Specifically, in your code snippet, the following does not follow the correct syntax:
"inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)
In addition to document [1], you should also review the available pipeline options and their correct syntax [2].
Please use the links; they are the official documentation links from Google. These links will never go stale or be removed, as they are actively monitored and maintained by a dedicated team.
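One way to sidestep the quoting problem entirely is to let json.dumps build the parameter strings instead of escaping them by hand. A minimal sketch, assuming the template expects the inputLocations/outputLocations names that appear in the question's metadata file (bucket, project, dataset, and table are placeholders):

import json

file_name = "myfile.txt"  # placeholder; taken from the Cloud Storage event in practice

parameters = {
    "inputLocations": json.dumps({"location1": "gs://my-bucket/" + file_name}),
    "outputLocations": json.dumps({"location1": "my-project:my_dataset.my_table"}),
    "customGcsTempLocation": "gs://my-bucket/dataflow",
}
print(parameters["inputLocations"])  # {"location1": "gs://my-bucket/myfile.txt"}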
I am making following REST API call to my JIRA instance.
I am getting the total result as 1, but I am not getting any values inside issues: [].
JQL:
http://myjira:8080/rest/api/2/search?startAt=1&maxResults=50&fields=project,status&jql=project=C00195 and key=C00195-2210
But I'm getting an empty response:
{"startAt":1,"maxResults":50,"total":1,"issues":[]}
The above JQL does not work in the browser either.
If we remove the key filter, it works as expected.
Working JQL:
http://myjira:8080/rest/api/2/search?startAt=1&maxResults=50&jql=project=C00095
Response:
{"expand":"schema,names","startAt":1,"maxResults":50,"total":2175,"issues":[{"expand":"operations,versionedRepresentations,editmeta,changelog,renderedFields","id":"12560","self":"http://myjira:8080/rest/api/2/issue/12560","key":"C00095-2215","fields":{"parent":{"id":"12559","key":"C00095-2214","self":"http://myjira:8080/rest/api/2/issue/12559","fields":{"summary":"Task for tagging testing","status":
You need to change startAt from 1 to 0. This resource counts from zero, so by setting startAt to one you actually skip the single issue that was found.
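For example, the failing query above with the index fixed:

http://myjira:8080/rest/api/2/search?startAt=0&maxResults=50&fields=project,status&jql=project=C00195 and key=C00195-2210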