Context
I’m running 3 E2E tests in parallel, and my goal is to send a notification to Slack when one of the tests fails. I tried two approaches.
1. After e2e is finished, send the notification → impossible, because a job cannot be executed when a required job (the e2e test job) has failed.
2. When one of the parallel tests fails, send the notification → duplicate notifications are sent, so each test would need to know, through an environment variable, whether one of the other tests has already sent a notification.
To implement the second approach, I need to be able to share an environment variable between the parallel tests (jobs). But I couldn't find any clues in CircleCI's docs/discussions/support. How can I do this? And if it's impossible, is there a hacky way?
Example
Parallel Tests: A, B, C.
A finished earlier than B and C, so A dynamically set the environment variable isTestFailed to true and sent a Slack notification message. B and C finished afterwards and checked isTestFailed; because it was true, they didn't send a notification and simply finished their tests. Therefore, only one notification message exists in Slack.
Diagram
build ------> e2e test A ------> Send notification to Slack and set boolean flag
      ------> e2e test B ------> Check boolean flag is true, don't send
      ------> e2e test C ------> Check boolean flag is true, don't send
Thanks.
You might not need a shared variable.
Couldn't you postpone the notification to a dedicated step that is triggered only on failure?
You run your tests without sending a notification, then you add a step like this:
- run:
    name: 'send notification if one or more tests failed'
    background: false
    when: on_fail
    command: |
      # Do stuff
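For instance, the command could post to a Slack incoming webhook (a minimal sketch, assuming the webhook URL is stored in a SLACK_WEBHOOK environment variable):
- run:
    name: 'send notification if one or more tests failed'
    when: on_fail
    command: |
      # SLACK_WEBHOOK is an assumed env var holding an incoming-webhook URL
      curl -X POST -H 'Content-type: application/json' \
        --data '{"text":"E2E tests failed: '"$CIRCLE_BUILD_URL"'"}' \
        "$SLACK_WEBHOOK"
Since when: on_fail only fires in jobs whose earlier steps failed, the passing jobs stay silent.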
Hope this helps!
Related
In my company I have a pipeline that runs several jobs. I want to get the result of each job, write each of these results to a file or variable, and later email them to myself. Is there such a possibility? Note: I don't want the result of the pipeline, but the result of each of the jobs inside it.
I even tried making requests via the API, but then every pipeline would need its own code, which is not feasible at all from a maintenance standpoint.
When you trigger a job inside a pipeline, you use the build job step.
This step has a property called propagate that:
If enabled (default state), then the result of this step is that of the downstream build (e.g., success, unstable, failure, not built, or aborted). If disabled, then this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
You can write a wrapper for calling jobs that stores the result of each job (and maybe other data useful for debugging, like the build URL), so you can use it later to construct the contents of an email.
E.g.
// declared without 'def' so the map lives in the script binding and is
// visible inside the function below
jobResults = [:]

def buildJobAndStoreResult(jobName, jobParams) {
    def run = build job: jobName, parameters: jobParams, propagate: false
    jobResults[jobName] = [
        result: run.result
    ]
}
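Hypothetical usage, with made-up job names and parameters:
buildJobAndStoreResult('build-app', [string(name: 'TARGET', value: 'release')])
buildJobAndStoreResult('integration-tests', [])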
Then you can construct the body of an email by iterating through the map, e.g.
emailBody = "SUMMARY\n\n"
jobResults.each() { it ->
str += "${it.key}: ${it.value.result}\n"
}
And use the mail step to send out a report.
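A minimal sketch of that last step (the recipient address is a placeholder):
mail to: 'you@example.com',
     subject: "Job results summary: ${currentBuild.fullDisplayName}",
     body: emailBody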
It's worth thinking about whether you want your pipeline to fail after sending the email if any of the called jobs failed, and about adding links from your email report to the failed jobs and the caller pipeline.
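If you do want that, a sketch along these lines could run after the email is sent:
def failedJobs = jobResults.findAll { it.value.result != 'SUCCESS' }
if (failedJobs) {
    // fail the pipeline, listing the downstream jobs that did not succeed
    error "Failed jobs: ${failedJobs.keySet().join(', ')}"
}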
EDIT: By the way, I use this plugin to schedule runs: https://plugins.jenkins.io/parameterized-scheduler/
I have set up a Slack notification that is automatically sent when the timer (scheduler) starts the execution.
When the auto-build starts, it posts something like this in the Slack channel:
(/ci/master-15): Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
And when the run is complete, a message similar to this would be posted to the channel:
(/ci/master-15): The job is now complete. Here are the execution summary
Groovy:
BUILD_TRIGGERED_BY = currentBuild.getBuildCauses().shortDescription[0]
SEND_SLACK_NOTIF(BUILD_TRIGGERED_BY)
Now, if a human rebuilds one of these timer jobs after it is unsuccessful, I would expect it to say "Started by user andrea-hyong@gmail.com", but instead:
In the pipeline status it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
Started by user andrea-hyong@gmail.com
Rebuilds build #14
In the slack message it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
How can I make it send the username instead of the timer?
The behavior you are seeing is a result of the plugins you are using. I assume the rebuild plugin picks up both triggers when rebuilding the job. Since the behavior is consistent when rebuilding, you can improve your Groovy code to something like this:
BUILD_TRIGGERED_BY = currentBuild.getBuildCauses().shortDescription.size() > 1 ?
    currentBuild.getBuildCauses().shortDescription[1] :
    currentBuild.getBuildCauses().shortDescription[0]
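Alternatively (a sketch, not verified against your plugin set), you could look specifically for a user cause and fall back to the generic description only when none exists:
def userCauses = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
BUILD_TRIGGERED_BY = userCauses ?
    userCauses.shortDescription[0] :
    currentBuild.getBuildCauses().shortDescription[0]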
Let's say we have a workflow called Workflow1, which contains jobs A, B, C and D.
The first developer pushes a change and triggers Workflow1.
A second developer also pushes a change and triggers Workflow1.
Is there a way to ensure that when job C starts in the second developer's workflow, it automatically cancels only job C in the first developer's workflow, without affecting any of the other jobs?
You could implement something using the CircleCI API v2 and some jq wizardry. Note: you'll need to create a personal API token and store it in an environment variable (let's call it MyToken).
I'm suggesting the below approach, but there could be another (maybe simpler ¯\_(ツ)_/¯) way.
Get the IDs of pipelines in the project that are in the created state:
PIPE_IDS=$(curl --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/project/gh/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/pipeline?branch=$CIRCLE_BRANCH" | jq -r '.items[]|select(.state == "created")|.id')
Get the IDs of currently running/on_hold workflows with the name Workflow1 excluding the current workflow ID:
if [ ! -z "$PIPE_IDS" ]; then
  for PIPE_ID in $PIPE_IDS
  do
    curl --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/pipeline/${PIPE_ID}/workflow" | jq -r --arg CIRCLE_WORKFLOW_ID "$CIRCLE_WORKFLOW_ID" '.items[]|select(.status == "on_hold" or .status == "running")|select(.name == "Workflow1")|select(.id != $CIRCLE_WORKFLOW_ID)|.id' >> currently_running_Workflow1s.txt
  done
fi
Then (sorry, I'm getting a bit lazy here, and you also need to do some of the work :p), use the currently_running_Workflow1s.txt file generated above and the "Get a workflow's jobs" endpoint to get the job number of each running job named C in those Workflow1 workflows.
Finally, use the "Cancel job" endpoint to cancel each of these jobs.
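A rough sketch of those two steps (assuming the job in question is literally named C):
while read -r WF_ID; do
  # job numbers of the running jobs named C in this workflow
  JOB_NUMBERS=$(curl -s --header "Circle-Token: $MyToken" --request GET "https://circleci.com/api/v2/workflow/${WF_ID}/job" | jq -r '.items[]|select(.name == "C" and .status == "running")|.job_number')
  for JOB_NUM in $JOB_NUMBERS; do
    curl -s --header "Circle-Token: $MyToken" --request POST "https://circleci.com/api/v2/project/gh/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/job/${JOB_NUM}/cancel"
  done
done < currently_running_Workflow1s.txt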
Note that there might be a slight delay between the "cancel API call" and the job actually being cancelled, so you might want to add a short sleep, or even better a while loop that checks those jobs' respective statuses, before moving further.
I hope this helps.
First off, the setup in question:
A Jenkins instance with several build nodes, and an on-prem Azure DevOps Server containing the Git repositories.
The repo in question is too large to build on every push for all branches and all devs, so a small workaround was done:
The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not help with quality).
All other branches have their automated builds suppressed. Devs can still start builds/deployments/unit tests manually if they so choose.
The Jenkinsfile is parameterized for which platforms to build: on prod* branches all the platforms are true, on all other branches false.
This helps because otherwise the initial build of a feature branch would always build/deploy all platforms locally, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build validation .yml. This basically works: when I call the source branch of the pull request with the merge commit ID, I pass a parameter, pullRequestID, which contains the ID of the PR.
Snippet of the .yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if (params.pullRequestID?.trim()) {
    isPullRequest = true
    // do stuff to change how the pipeline should react
}
In the Jenkinsfile I check whether the parameter is non-empty; if so, I reset the platforms to build to basically all of them and run the unit tests.
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, nothing is built, and the build returns 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to tell from the remote call whether this was the build with ID 1?
The only other option would be to call Jenkins via the web API and check for the last successful build, but then I would have to store the token somewhere in source control.
Am I missing something obvious here? I don't want the feature-branch builds to do nothing more than once, because devs could lose useful information about the builds/deployments they started.
Any ideas appreciated.
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called by a user that is only used for automated builds. So whenever that user triggered the build, I set everything up to run a pull request validation, even if it is the first build. Along the lines of:
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null) {
    def buildCauses = readJSON text: causes.toString()
    buildCauses.each { buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser") {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be approved by a Jenkins admin (in-process script approval) for this to work.
I have created a pipeline where there is one job for the build and another job for the mail notification. In the mail notification job, I need to get the build job's latest BUILD_NUMBER (whether success or failure), and I should publish the build job's latest URL in the mail notification about the build status.
How can I achieve this?
I think there are multiple ways that you could achieve this. Let's call your two builds A (build) and B (mail notification).
1. Call B from A; pass whatever information you need in B as a parameter.
2. Use a combination of this script and the Job API.
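For #1, a minimal sketch (the job name and parameter names are made up; adjust to your setup):
// in Job A, after the build finishes, trigger Job B with the details it needs
build job: 'B', wait: false, parameters: [
    string(name: 'buildNumber', value: "${currentBuild.number}"),
    string(name: 'buildUrl', value: "${env.BUILD_URL}")
]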
For #2, I'm thinking you would have something like this in your B job:
def jobname = "<name of Job A>"
def job = Jenkins.instance.getItemByFullName(jobname)
def build = job.getLastBuild()          // latest build, whether success or failure
def buildNumber = build.getNumber()     // its BUILD_NUMBER
def buildUrl = build.getAbsoluteUrl()   // its URL, for the mail body