Cancel job within CircleCI workflow when another workflow with that job is triggered

Let's say we have a workflow called Workflow1 which contains jobs A, B, C and D.
First developer pushes a change and triggers Workflow1.
Second developer also pushes a change and triggers Workflow1.
Is there a way to ensure that when job C starts in the second developer's workflow, it automatically cancels only job C in the first developer's workflow, without affecting any of the other jobs?

You could implement something using the CircleCI API v2 and some jq wizardry. Note: you'll need to create a personal API token, and store it in an environment variable (let's call it MyToken)
I'm suggesting the below approach, but there could be another (maybe simpler ¯\_(ツ)_/¯) way.
Get the IDs of pipelines in the project that have the created state:
PIPE_IDS=$(curl --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/project/gh/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/pipeline?branch=$CIRCLE_BRANCH" | jq -r '.items[] | select(.state == "created") | .id')
Get the IDs of currently running/on_hold workflows with the name Workflow1 excluding the current workflow ID:
if [ ! -z "$PIPE_IDS" ]; then
  for PIPE_ID in $PIPE_IDS; do
    curl --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/pipeline/${PIPE_ID}/workflow" \
      | jq -r --arg CIRCLE_WORKFLOW_ID "$CIRCLE_WORKFLOW_ID" '.items[] | select(.status == "on_hold" or .status == "running") | select(.name == "Workflow1") | select(.id != $CIRCLE_WORKFLOW_ID) | .id' >> currently_running_Workflow1s.txt
  done
fi
Then (sorry, I'm getting a bit lazy here, and you also need to do some of the work :p), use the currently_running_Workflow1s.txt file generated above together with the "Get a workflow's jobs" endpoint to get the job number of each job in those running Workflow1 workflows whose name matches job C and whose status is running.
Finally, use the "Cancel job" endpoint to cancel each of these jobs.
Note that there might be a slight delay between the cancel API call and the job actually being cancelled, so you might want to add a short sleep, or better, a while loop that checks those jobs' respective statuses before moving further.
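For illustration, here is a minimal sketch of those two remaining steps (assuming the job is literally named "C", as in the question, and that MyToken and the project variables are set as above):
while read -r WF_ID; do
  # "Get a workflow's jobs": find the running job named C in this workflow
  JOB_NUMS=$(curl -s --header "Circle-Token: $MyToken" \
    "https://circleci.com/api/v2/workflow/${WF_ID}/job" \
    | jq -r '.items[] | select(.name == "C" and .status == "running") | .job_number')
  for JOB_NUM in $JOB_NUMS; do
    # "Cancel job": cancel it by its job number
    curl -s --header "Circle-Token: $MyToken" --request POST \
      "https://circleci.com/api/v2/project/gh/${CIRCLE_PROJECT_USERNAME}/${CIRCLE_PROJECT_REPONAME}/job/${JOB_NUM}/cancel"
    # Poll the job's status until it is actually cancelled before moving further
    until curl -s --header "Circle-Token: $MyToken" \
      "https://circleci.com/api/v2/project/gh/${CIRCLE_PROJECT_USERNAME}/${CIRCLE_PROJECT_REPONAME}/job/${JOB_NUM}" \
      | jq -e '.status == "canceled"' > /dev/null; do
      sleep 2
    done
  done
done < currently_running_Workflow1s.txt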
I hope this helps.

Related

Self-delete Jenkins builds after they're finished

tl;dr: I'd like to delete builds from within their execution, or rather, in a post statement (though the specifics shouldn't matter).
Background: In a project I'm working on, there is a "gateway" job of sorts that aggregates all new job triggers into one launch as long as certain other jobs are still running. For this purpose, this job aborts itself such that there is only ever one instance running (which is often not the latest build).
Unfortunately, this means that in the job preview, the job is often shown as aborted, which is undesirable (ending the job as "successful" or some other status wouldn't improve anything). Thus, I have two options:
Change the abort logic so the newest build survives and older ones are aborted. This is technically possible, but has other drawbacks due to some internal logic, which is why I'd like to avoid this solution.
Delete the aborted builds once they're finished
However, this is apparently not as easy as just calling the "doDelete" REST API inside the job, and the build discarder can't be set to store 0 builds (it needs to be a positive integer). This is what I tried code-wise (MWE):
steps {
    script {
        // Mark the build as aborted and stop it
        currentBuild.result = 'ABORTED'
        error("abort")
    }
}
post {
    always {
        withCredentials([string(credentialsId: 'x', variable: 'TOKEN')]) {
            sh "curl -X POST \"https://jenkins.example.com/etc/job/jobname/${env.BUILD_NUMBER}/doDelete\" -i -H 'Authorization: Bearer $TOKEN'"
        }
    }
}
This code deletes some job information (for instance, the console log is empty), but not the build itself. Thus, my question remains:
How can I make a job delete itself?

Pull request build validation with Jenkins and on-prem Azure DevOps

First off, the setup in question:
A Jenkins instance with several build nodes, and an on-prem Azure DevOps Server containing the Git repositories.
The repo in question is too large to build on every push for all branches and all devs, so a small workaround was done:
The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not help with quality).
All other branches have their automated builds suppressed. Devs can still start them manually for builds/deployments/unit tests if they so choose.
The Jenkinsfile is parameterized for which platforms to build; on prod* branches all the platforms are true, on all other branches false.
This helps because otherwise the initial build of a feature branch would always build/deploy all platforms, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build validation .yml. This basically works: when I call the source branch of the pull request with the merge commit ID, I pass a parameter
isPullRequestBuild which contains the ID of the PR.
Snippet of the .yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if (params.pullRequestID?.trim()) {
    isPullRequest = true
    // do stuff to change how the pipeline should react
}
In the Jenkinsfile I check whether the parameter is non-empty; if so, I reset the platforms to build to basically all of them and run the unit tests.
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, nothing is built, and the job returns 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to get information from the remote call whether this was the build with ID 1?
The only other option would be to call Jenkins via the web API and check for the last successful build, but in that case I would have to store the token somewhere in source control.
Am I missing something obvious here? I don't want the feature-branch builds to do nothing more than once, because devs could lose useful information about their started builds/deployments.
Any ideas appreciated
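For what it's worth, the web-API check described above would look roughly like this (host, job, and credential names are placeholders, and this is exactly the variant that needs a stored token):
curl -sg -u "$JENKINS_USER:$JENKINS_TOKEN" \
  "https://jenkins.example.com/job/MyJob/job/${BRANCH_NAME}/api/json?tree=builds[number]" \
  | jq '.builds | length'   # 0 means this branch job has never been built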
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called via a user that is only used for automated builds. So, if this user triggered the build, I set everything up to run a pull-request validation, even if it is the first build. Along the lines of:
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null) {
    def buildCauses = readJSON text: currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').toString()
    buildCauses.each { buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser") {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be approved by a Jenkins admin (script approval) for this to work.

How to pass pipeline variables to post build gerrit message?

I have Pylint running in a Jenkins pipeline. To implement it, I used the Gerrit Trigger plugin and the Next Generation Warnings plugin. Everything is working as expected: Jenkins joins the review, checks the change with pylint, and generates a report.
Now I'd like to post the pylint score in a custom "Build successful" message. I wanted to pass the pylint score to an environment variable and use it in the dedicated message window of the Gerrit plugin.
Unfortunately, no matter what I try, I cannot pass any "new" variable to the message. Passing parameters embedded in the pipeline works (e.g. patchset number).
I created a new environment variable in the Configure Jenkins menu, tried exporting to shell, and tried writing to it (via $VAR and env. syntax), but nothing works; the build message displays a raw string like $VAR instead of the variable's contents.
What should I do to pass the local pylint score (distinct for every pipeline occurrence) to the custom build message for Gerrit?
I don't think the custom message can be used for this; it is just supposed to be a static message.
The way I do this is to use the SSH command to perform the review. You can also achieve the same using the REST API.
First I run my linting and whitespace-checking script, which generates a JSON file with the information I would like to pass to Gerrit. Next I send it to Gerrit using SSH. See below my pipeline script and an example JSON file.
As a bonus I have added robot comments. These will show up in your review as a remark from Jenkins that line 8 of my Jenkinsfile has a trailing whitespace. You can easily replace this with your lint result if you like, or just ignore it and only send the message. Using a JSON file makes it easier to create multi-line messages.
node('master') {
    sh """
        cat lint_change.json | ssh -p ${env.GERRIT_PORT} ${env.GERRIT_HOST} gerrit review ${env.GERRIT_PATCHSET_REVISION} --json
    """
}
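If you would rather use the REST API mentioned above, the same JSON file can be posted as a ReviewInput to the review endpoint; a rough sketch (host, credentials, and the change ID are placeholders):
curl -X POST -H "Content-Type: application/json" \
  --user "jenkins:$HTTP_PASSWORD" \
  -d @lint_change.json \
  "https://gerrit.example.com/a/changes/${CHANGE_ID}/revisions/${GERRIT_PATCHSET_REVISION}/review"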
Example json file:
{
    "labels": {
        "Code-Style": "-1"
    },
    "message": "Lint Bot Review\nLint Results:\n Errors: 0\n Warnings: 0\n\nWhitespace results:\n Errors: 1",
    "robot_comments": {
        "Jenkinsfile": [
            {
                "robot_id": "lint-bot",
                "line": "8",
                "message": "trailing whitespace."
            }
        ]
    }
}
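To get the pylint score from the question into that message, a small sketch like the following could generate the file (the package name is a placeholder, and it assumes pylint's standard "rated at X/10" summary line):
# Capture the score from pylint's summary line
SCORE=$(pylint mypackage | sed -n 's/.*rated at \([-0-9.]*\)\/10.*/\1/p')
# Build the review JSON; $'...' produces real newlines, which jq then
# serialises as \n for Gerrit
jq -n --arg msg $'Lint Bot Review\nPylint score: '"${SCORE}/10" \
  '{message: $msg}' > lint_change.json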
Alternatively, you may want to look at the new gerrit-code-review-plugin, which should make these things even easier. However, I have not tried it yet.

Unable to get the result from the Google Cloud Machine Learning REST API

I am trying to run a job using the Google Cloud Machine Learning REST API method ml.projects.jobs.create.
The latest job that I submitted has job id 'drivermonitoring20180109335'. On completion of the job, the message 'job completed successfully' is displayed, but I cannot see any desired output file in the specified location. Output logs can be seen in fig. 1.
I would also like to share a few observations from running this job id:
i) Running the job took much less time than any other job I executed before.
ii) Earlier, every job was executed via two different tasks, viz. a) master-replica-0 and b) service (refer fig. 2), but this job didn't have a master-replica-0 task (refer fig. 3). I tried to Google the issue but was unable to find any related solution.
So I can infer that the task I was trying to run is being scheduled, but the Python script I am trying to run is never scheduled for execution.
Kindly let me know if you require more screenshots or if you want to have a look at the project structure to help with the issue.
Thanks in advance.
EDIT 1: Added JSON while making API call
POST https://ml.googleapis.com/v1/projects/drivermonitoringsystem/jobs?key={YOUR_API_KEY}
{
    "trainingInput": {
        "pythonModule": "trainer.retrain",
        "args": [
            "--bottleneck_dir=ModelTraining/tf_files/bottlenecks \
             --model_dir=ModelTraining/tf_files/models/ \
             --architecture=mobilenet_0.50_224 \
             --output_graph=gs://<BUCKET_NAME>/tf_files/retrained_graph.pb \
             --output_labels=gs://<BUCKET_NAME>/tf_files/retrained_labels.txt \
             --image_dir=gs://<BUCKET_NAME>/dataset224x224/"
        ],
        "region": "us-central1",
        "packageUris": [
            "gs://<BUCKET_NAME>/ModelTraining4.tar.gz"
        ],
        "jobDir": "gs://<BUCKET_NAME>/tf_files/",
        "runtimeVersion": "1.4"
    },
    "jobId": "job_id201801101535"
}
I have just run some sample jobs myself using both the gcloud command and the REST API, and everything worked fine in both cases. It looks like, in your case, the job was never executed, as no cluster was created for processing the job itself (that is why master-replica-0 is missing).
Were the jobs that you had run previously, and which worked, also launched using the REST API, or instead with gcloud or a client library?
Here I share an example JSON I used when making the API call to ml.projects.jobs.create through the API Explorer link you shared. I suggest you try adapting it to your requirements and check whether any field is missing:
POST https://ml.googleapis.com/v1/projects/<YOUR_PROJECT>/jobs?key={YOUR_API_KEY}
{
    "jobId": "<JOB_ID>",
    "trainingInput": {
        "jobDir": "gs://<LOCATION_TO_STORE_OUTPUTS>",
        "runtimeVersion": "1.4",
        "region": "<REGION>",
        "packageUris": [
            "gs://<PATH_TO_YOUR_TRAINER>/trainer-0.0.0.tar.gz"
        ],
        "pythonModule": "<PYTHON_MODULE_TO_RUN>",
        "args": [
            "--train-files",
            "gs://<PATH_TO_YOUR_TRAINING_DATA>/data.csv",
            "--eval-files",
            "gs://<PATH_TO_YOUR_TEST_DATA>/test.csv",
            "--train-steps",
            "100",
            "--eval-steps",
            "10",
            "--verbosity",
            "DEBUG"
        ]
    }
}
Change TrainingInput to PredictionInput (and the appropriate child fields) if you are trying to run a prediction job instead of a training one, as in this example.
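For completeness, submitting the same job with the gcloud CLI (the other method mentioned above) would look roughly like this; the flags below are adapted from the question's JSON and should be treated as a sketch rather than a verified command:
gcloud ml-engine jobs submit training job_id201801101535 \
  --packages gs://<BUCKET_NAME>/ModelTraining4.tar.gz \
  --module-name trainer.retrain \
  --region us-central1 \
  --runtime-version 1.4 \
  --job-dir gs://<BUCKET_NAME>/tf_files/ \
  -- \
  --image_dir=gs://<BUCKET_NAME>/dataset224x224/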

Jenkins/Gerrit: Multiple builds with different labels from one gerrit event

I created two labels in one of our projects that requires builds on both Windows and Linux, so the project.config for that project now looks as follows:
[label "Verified"]
function = NoBlock
[label "Verified-Windows"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
[label "Verified-Unix"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
This works as intended: submits require that one successful build reports Verified-Windows and another reports Verified-Unix [1].
However, the two builds are now triggered by the same Gerrit event (from 'different' servers, see note), and when they report back, only one of the two labels 'survives'.
It seems as though the plugin collates the two messages that arrive into one comment and only accepts whichever label was set first.
Is this by design or a bug? Can I work around this?
This is using the older version of the trigger: 2.11.1.
[1] I got this to work by adding more than one server and then reconfiguring the messages that are sent back via SSH to Gerrit. This is cumbersome and quite non-trivial. I think jobs should be able to override the label that a successful build sets on Gerrit.
This can be addressed by using more than one user name, so the verdicts on the labels don't get mixed up. However, this is only partially satisfactory, since multiple server connections for the same server also duplicate events from the event stream.
I am currently working on a patch for the Gerrit Trigger plugin for Jenkins to address this issue and make using different labels more efficient.
Maybe you can solve this challenge by using a post-build Groovy script.
I provided an example in another topic: https://stackoverflow.com/a/32825278
To be more specific, as mentioned by arman1991:
Install the Groovy Postbuild Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
Use the following example script as a post-build action in each of your jobs. Modify it to your needs for the Linux verification.
It will do for you:
collect necessary environment variables and status of the job
build feedback message
build ssh command
execute ssh command -> send feedback to gerrit
//Collect all environment variables of the current build job
def env = manager.build.getEnvironment(manager.listener)
//Get Gerrit change number
def change = env['GERRIT_CHANGE_NUMBER']
//Get Gerrit patchset number
def patch = env['GERRIT_PATCHSET_NUMBER']
//Get URL of the current job
def buildUrl = env['BUILD_URL']
//Build URL to the console output
def buildConsoleUrl = buildUrl + "/console"

//Verification is set to succeeded (+1) and a feedback message is generated...
def result = +1
def message = "\\\"Verification for Windows succeeded - ${buildUrl}\\\""

//...except when the job failed (-1)...
if (manager.build.result.isWorseThan(hudson.model.Result.SUCCESS)) {
    result = -1
    message = "\\\"Verification for Windows failed - ${buildUrl}\\\""
}

//...or the job was aborted (0)
if (manager.build.result == hudson.model.Result.ABORTED) {
    result = 0
    message = "\\\"Verification for Windows aborted - ${buildConsoleUrl}\\\""
}

//Send feedback to Gerrit via ssh
//-i - path to the private ssh key
def ssh_message = "ssh -i /path/to/jenkins/.ssh/key -p 29418 user@gerrit-host gerrit review ${change},${patch} --label=verified-Windows=${result} --message=${message}"
manager.listener.logger.println(new ProcessBuilder('bash','-c',"${ssh_message}").redirectErrorStream(true).start().text)
I hope this will help you to solve your challenge without using the Gerrit Trigger Plugin to report the results.