I need to parse and analyse test logs for tests running in a Semaphore pipeline, something similar to the Jenkins Log Parser plugin.
The idea is to scan the test logs in an after-pipeline stage of the Semaphore pipeline.
Is there a feasible way of doing this?
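As a rough sketch, Semaphore's after_pipeline stanza can run a job once all blocks of the pipeline have finished, which is a natural place to pull and scan logs. The artifact name and grep pattern below are hypothetical; adapt them to whatever your test jobs actually upload:

```yaml
# Sketch: trailing section of a Semaphore pipeline YAML (assumed layout)
after_pipeline:
  task:
    jobs:
      - name: Analyse test logs
        commands:
          # hypothetical: test jobs pushed their logs as a workflow artifact
          - artifact pull workflow test-logs
          # flag failure-looking lines; '|| true' keeps the scan job green
          - grep -En "FAIL|ERROR" test-logs/*.log || true
```

Note that after_pipeline jobs run regardless of whether the pipeline passed or failed, which is what you want for log analysis.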
Is it possible to check, in a Jenkins pipeline script, which executor is executing the job?
Suppose there are two executors (Executor-1 and Executor-2) and JOB-A is triggered. As Executor-1 is free, execution of JOB-A is started by Executor-1.
I would like to find out, in the pipeline script of JOB-A, that the execution is currently running on Executor-1.
Is there a way to find this?
I found the answer by going through this issue.
The environment variable EXECUTOR_NUMBER can be used to find out which executor the job is running on.
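For example, in a declarative pipeline you can read it from the env map (a minimal sketch; the stage name is arbitrary):

```groovy
pipeline {
    agent any
    stages {
        stage('Report executor') {
            steps {
                // EXECUTOR_NUMBER identifies the executor slot on the current node,
                // NODE_NAME identifies the node itself
                echo "Running on executor ${env.EXECUTOR_NUMBER} of node ${env.NODE_NAME}"
            }
        }
    }
}
```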
Can I block the run of my current job if job B is currently running?
There is a Jenkins plugin (Blocking Job) to do this, but I'm not sure how to do this in a Pipeline using a Jenkinsfile.
Yes, you can use Lockable Resources: https://plugins.jenkins.io/lockable-resources/
Job B grabs the lock, and Job A cannot run until the lock is freed and Job A can grab it.
stage("Run Post-Deployment Test") {
    options {
        lock(resource: "deploy-env")
    }
    steps {
        // post-deployment test steps go here
    }
}
This will hold the "deploy-env" lock until the end of the stage, meaning that if another deploy tried to happen, it couldn't grab the lock until the test stage had finished.
Note that if a lock doesn't exist, Jenkins will create it for you.
I have a pipeline that runs an e2e test after deploying to the uat environment; the e2e test runs on GitLab. What I want to do is let GitLab use a webhook to trigger Jenkins to decide whether the build can go to production or not. So the pipeline looks as below.
After deploying to uat, I send a webhook to trigger the e2e test on GitLab, and GitLab then sends a webhook back to Jenkins. However, although the pipeline job e2e receives the webhook and builds successfully, it does not change the pipeline job status, so we cannot proceed to deploy to production unless we manually click the trigger on the e2e job. I have already tried using currentBuild.currentResult and currentBuild.result, which doesn't seem to work.
What should I do to solve this?
Update
I am not using the new Pipeline way with Groovy; instead, I'm doing it the old way, like this. But I suppose this is not related to the issue I'm trying to solve here.
Let me provide more details about the problem. After the e2e test is done, I send the request below from GitLab to Jenkins:
curl "SERVER_URL/job/MY_XXX_PROJECT/view/Pipeline/job/X.X%20E2E%20my_e2e_test_result/buildWithParameters?token=xxxxxxxxxxxxxx&PARENT_BUILD_NUMBER=123&currentBuild.currentResult=SUCCESS&currentBuild.Result=SUCCESS"
And here is the pipeline. If I check the job at SERVER_URL/job/MY_XXX_PROJECT/view/Pipeline/job/X.X%20E2E%20my_e2e_test_result/, the job is actually built SUCCESS.
However, the status shows as if nothing happened, and we cannot proceed to deploy to production without manually clicking the trigger button on the same (e2e) job again.
If you're just looking for a way for the job to update its own state, use the unstable and error steps.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'This job is now a success. All jobs are a success until marked otherwise.'
                // success 'This will fail. There is no such thing as a success step'
                unstable 'This stage is now unstable'
                error 'This job is now failed'
                unstable 'This job is still failed. Build results can only get worse; they can never be improved.'
            }
        }
    }
}
Notice that there's no such thing as a success step, probably because all jobs are a success until told otherwise, and the overall result of a job cannot be improved. See the documentation for catchError here.
Note that the build result can only get worse, so you cannot change the result to SUCCESS if the current result is UNSTABLE or worse.
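If you need a step to go red without aborting the rest of the pipeline, catchError can downgrade the build or stage result instead of throwing. A minimal sketch (the failing command is hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Flaky step') {
            steps {
                // Mark the build UNSTABLE and this stage FAILURE, but keep going
                catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
                    sh 'exit 1' // hypothetical failing command
                }
            }
        }
        stage('Still runs') {
            steps {
                echo 'Later stages still execute.'
            }
        }
    }
}
```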
We have a Jenkins scripted pipeline with 6 stages. I need to call a REST API at the end of each stage to push the status of the stage to a Cassandra DB. Is there an efficient way of doing this in a Jenkins pipeline?
Currently, I am calling a function at the end of each stage with the status. I have to write this piece of code in every stage, whether the stage succeeds or fails.
Did you try using a shared library function, so you can write your code once and call it throughout the pipeline, or even in different pipelines? Here is the link to the docs: https://jenkins.io/doc/book/pipeline/shared-libraries
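One way to avoid repeating the call is a shared-library step that wraps each stage body and reports the status from a finally block. The helper name withStatusReport and the REST endpoint below are hypothetical:

```groovy
// vars/withStatusReport.groovy in the shared library
def call(String stageName, Closure body) {
    def status = 'SUCCESS'
    try {
        body()
    } catch (err) {
        status = 'FAILURE'
        throw err // re-throw so the pipeline still fails
    } finally {
        // push the stage status to the (hypothetical) REST endpoint
        sh """curl -X POST -H 'Content-Type: application/json' \
              -d '{"stage": "${stageName}", "status": "${status}"}' \
              http://status-api.example.com/stages"""
    }
}
```

A stage body then becomes withStatusReport('build') { sh 'make' } instead of repeating the REST call inline in all six stages.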
Thanks for looking into my concern.
I have 3 Jenkins jobs: Job A, B & C.
Job A starts at 10 PM at night.
Job B is a downstream job of Job A and runs only if Job A succeeds.
Job C is a downstream job of Job B.
Now I want Job C to be triggered after successful completion of Job B, or at a scheduled time. The problem is that if I schedule Job C both as a downstream job and with a timer, it runs twice.
But it should run only once.
Please help me achieve this.
Did you try the "Conditional BuildStep" plugin? You can execute a downstream job (or a script) based on the build cause.
You can add more than one "single" condition for each build cause.
You'll then need to decide when to run the job: as a timer, or as a downstream job.
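In a pipeline, you can also inspect the build cause yourself and skip the work when it does not match. currentBuild.getBuildCauses() returns a list of cause descriptors; a sketch of keeping only upstream-triggered runs (the actual de-duplication policy, e.g. letting the timer run only if no upstream run happened, is up to you):

```groovy
// Scripted pipeline for Job C: run only when triggered by an upstream job
node {
    def causes = currentBuild.getBuildCauses()
    def fromUpstream = causes.any { it._class?.contains('UpstreamCause') }
    if (fromUpstream) {
        echo 'Triggered by upstream Job B; running.'
        // ... actual work for Job C goes here ...
    } else {
        echo 'Triggered by timer or manually; skipping to avoid the double run.'
    }
}
```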
You can use the Jenkins Pipeline plugin to create a pipeline job with stages. A pipeline will only proceed to the next stage if the previous stage is successful. Refer to the documentation for more details on Pipeline.
Pipeline comes with a lot of flexibility in how you define the flow. You can use either a declarative pipeline or a scripted pipeline. A good number of examples can be found here.
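As a minimal sketch, the three jobs could become stages of one declarative pipeline, so later stages run only if the earlier ones succeed and there is a single schedule for the whole flow (the cron spec and stage bodies are placeholders):

```groovy
pipeline {
    agent any
    triggers {
        cron('0 22 * * *') // hypothetical: start the whole flow at 10 PM
    }
    stages {
        stage('Job A work') {
            steps {
                echo 'Runs first'
            }
        }
        stage('Job B work') {
            steps {
                echo 'Runs only if Job A work succeeded'
            }
        }
        stage('Job C work') {
            steps {
                echo 'Runs exactly once, only if Job B work succeeded'
            }
        }
    }
}
```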