I am running a Jenkins pipeline which calls a script.
In the following example, the script generates an output file named issue_{datetime}.txt.
I want to send this file as an email attachment. However, because the file name is generated with a different datetime stamp on every run, I have not found a way to attach it.
e.g.
pipeline {
    agent any
    stages {
        stage('Start Build') {
            steps {
                sh 'python run.py'
            }
        }
    }
    post {
        always {
            echo "${email_id}"
            emailext from: 'xyz@yahoo.com',
                     attachmentsPattern: 'resources/*.xlsx',
                     to: "${email_id}"
        }
    }
}
You can do it like this:
emailext attachmentsPattern: '**/resources/*.xlsx', body: 'Find attachments', subject: 'Attachment', to: 'test@ab.org'
Or, if you want to attach only the most recent file, resolve its name first (the trim() removes the trailing newline from the shell output):
def OUTPUT_FILE = sh(script: 'ls -t resources/issue*.xlsx | head -1', returnStdout: true).trim()
emailext from: 'test@yahoo.com', attachmentsPattern: "${OUTPUT_FILE}"
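Putting those together, a minimal sketch of the post section, assuming email_id is defined elsewhere and the reports land under resources/ (the subject and body below are only illustrative):
post {
    always {
        script {
            // pick the most recently modified report; assumes at least one match exists
            def outputFile = sh(script: 'ls -t resources/issue*.xlsx | head -1', returnStdout: true).trim()
            emailext from: 'xyz@yahoo.com',
                     to: "${email_id}",
                     subject: "Build #${env.BUILD_NUMBER} report",
                     body: 'Please find the latest report attached.',
                     attachmentsPattern: outputFile
        }
    }
}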
I'm trying to create a vault deployment using Jenkins. Here's a link to my repo.
When running the script I get the following error:
"Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods write java.io.File java.lang.String. Administrators can decide whether to approve or reject this signature."
I got this issue after adding the stage "Generate Vars".
If I remove this stage, the other stages work, but they don't complete the job, because it needs to get a token for the vault deployment, and it needs to get it from a .tfvars file.
It's not a good idea to share my variables on GitHub, which is why I'm trying to create vault.tfvars through Jenkins and provide the token before running a pipeline job.
Does anyone know how to fix this?
If some part is not clear, please feel free to ask questions!
If I find the solution to this issue I will share it here with a link to my GitHub.
Thanks
Here is my code, Jenkinsfile.groovy:
node('master') {
    properties([parameters([
        string(defaultValue: 'plan', description: 'Please provide what action you want? (plan,apply,destroy)', name: 'terraformPlan', trim: true),
        string(defaultValue: 'default_token_add_here', description: 'Please provide a token for vault', name: 'vault_token', trim: true)
    ])])
    checkout scm
    stage('Generate Vars') {
        def file = new File("${WORKSPACE}/vaultDeployment/vault.tfvars")
        file.write """
vault_token = "${vault_token}"
"""
    }
    stage("Terraform init") {
        dir("${workspace}/vaultDeployment/") {
            sh 'ls'
            sh 'pwd'
            sh "terraform init"
        }
    }
    stage("Terraform Plan/Apply/Destroy") {
        if (params.terraformPlan.toLowerCase() == 'plan') {
            dir("${workspace}/vaultDeployment/") {
                sh "terraform plan -var-file=variables.tfvars"
            }
        }
        if (params.terraformPlan.toLowerCase() == 'apply') {
            dir("${workspace}/vaultDeployment/") {
                sh "terraform apply --auto-approve"
            }
        }
        if (params.terraformPlan.toLowerCase() == 'destroy') {
            dir("${workspace}/vaultDeployment/") {
                sh "terraform destroy --auto-approve"
            }
        }
    }
}
Generally, we choose to execute pipelines in the Groovy sandbox, which restricts some operations for security reasons, like using the new keyword or calling static methods.
You would need a Jenkins admin to whitelist the signature under Manage Jenkins > In-process Script Approval.
To write a file, Jenkins Pipeline supplies the alternative writeFile step, which has no such restriction:
writeFile file: '<file path>', text: """
vault_token = "${vault_token}"
"""
As @yong already pointed out, the right way to achieve this, and to avoid such restrictions in environments where we don't have admin control, is to use writeFile, i.e.:
writeFile file: 'tmp/query.sql', text: "SELECT * FROM table"
The advantage of this is that migrating from a fully managed to a restricted environment will be painless.
Subfolders, like 'tmp' in the example, are created automatically, and the code itself is pretty self-explanatory.
I'm using the Jenkins pipeline. My use case is that the developers use a simple *.ini file that is parsed by a Python script to add or remove stages within the Jenkinsfile whenever they want. I don't want them to manually edit the Jenkinsfile because they won't know how it works.
Expected behaviour is:
When a build is triggered, I would like to first execute a Python script which might write into the Jenkinsfile to add/remove stages according to the *.ini file.
As far as I understand, when an event triggers a Jenkins build, the first thing it does is open the Jenkinsfile. However, I would like to know if it's possible to run some prebuild script before that.
Thanks
Edit: here's a simple view of a run of the pipeline (Blue Ocean UI).
The ini file might, for example, remove the step Building Plan C from the stage Compilation by removing the Groovy code for it in the Jenkinsfile.
Here is an example for reference:
node {
    git url: '', branch: '', credentialsId: ''
    def parseStr = sh(script: 'python parser.py xxx.ini', returnStdout: true).trim()
    // the python parser is expected to return a JSON string like:
    // {"run_stage1": false, "run_stage2": true}
    def parseObj = readJSON text: parseStr
    stage('stage 1') {
        if (parseObj.run_stage1) {
            echo 'stage1'
            // ...
        }
    }
    stage('stage 2') {
        if (parseObj.run_stage2) {
            echo 'stage2'
            // ...
        }
    }
}
Jenkins Pipeline supplies the steps readJSON, readYaml, and readProperties to read JSON, YAML, and properties files.
If you choose one of them to replace the ini file, you can drop the Python parser and make your pipeline simpler.
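For example, a minimal sketch using readYaml (from the Pipeline Utility Steps plugin), assuming a hypothetical stages.yaml checked in at the repo root:
// stages.yaml contains, e.g.:
//   run_stage1: true
//   run_stage2: false
node {
    checkout scm
    def cfg = readYaml file: 'stages.yaml'
    stage('stage 1') {
        if (cfg.run_stage1) {
            echo 'stage1'
        }
    }
    stage('stage 2') {
        if (cfg.run_stage2) {
            echo 'stage2'
        }
    }
}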
We have a project on GitHub which has two Jenkins Multibranch Pipeline jobs - one builds the project and the other runs tests. The only difference between the two pipelines is that they have different Jenkinsfiles.
I have two problems that I suspect are related to one another:
In the GitHub status check section I only see one check with the following title:
continuous-integration/jenkins/pr-merge — This commit looks good,
which directs me to the test Jenkins pipeline. This means that our build pipeline is not being picked up by GitHub even though it is visible on Jenkins. I suspect this is because both the checks have the same name (i.e. continuous-integration/jenkins/pr-merge).
I have not been able to figure out how to rename the status check message for each Jenkins job (i.e. test and build). I've been through this similar question, but its solution wasn't applicable to us, as Build Triggers aren't available in Multibranch Pipelines.
If anyone knows how to change this message on a per-job basis for Jenkins Multibranch Pipelines that'd be super helpful. Thanks!
Edit (just some more info):
We've setup GitHub/Jenkins webhooks on the repository and builds do get started for both our build and test jobs, it's just that the status check/message doesn't get displayed on GitHub for both (only for test it seems).
Here is our Jenkinsfile for the build job:
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    stage('Initialize') {
        echo 'Initializing...'
        def node = tool name: 'node-lts', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
        env.PATH = "${node}/bin:${env.PATH}"
    }
    stage('Checkout') {
        echo 'Getting our source code...'
        checkout scm
    }
    stage('Install Dependencies') {
        echo 'Retrieving tooling versions...'
        sh 'node --version'
        sh 'npm --version'
        sh 'yarn --version'
        echo 'Installing node dependencies...'
        sh 'yarn install'
    }
    stage('Build') {
        echo 'Running build...'
        sh 'npm run build'
    }
    stage('Build Image and Deploy') {
        echo 'Building and deploying image across pods...'
        echo "This is the build number: ${env.BUILD_NUMBER}"
        // sh './build-openshift.sh'
    }
    stage('Upload to s3') {
        if (env.BRANCH_NAME == "master") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity()
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*')
                cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*'])
            }
        }
        if (env.BRANCH_NAME == "PRODUCTION") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity()
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*')
                cfInvalidate(distribution: 'E6JRLLPORMHNH', paths: ['/*'])
            }
        }
    }
}
Try to use GitHubCommitStatusSetter (see this answer for declarative pipeline syntax). You're using scripted pipeline syntax, so in your case it will be something like this (note: this is just a prototype and it definitely must be changed to match your project specifics):
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    // ...
    // define a unique status-check context for this job (value is illustrative)
    def context = "continuous-integration/jenkins/pr-merge/build"
    stage('Upload to s3') {
        try {
            setBuildStatus(context, "In progress...", "PENDING")
            if (env.BRANCH_NAME == "master") {
                withAWS(region: 'eu-west-1', credentials: '****') {
                    def identity = awsIdentity()
                    s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*')
                    cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*'])
                }
            }
            // ...
            setBuildStatus(context, "Success", "SUCCESS")
        } catch (Exception e) {
            setBuildStatus(context, "Failure", "FAILURE")
            throw e
        }
    }
}
void setBuildStatus(context, message, state) {
step([
$class: "GitHubCommitStatusSetter",
contextSource: [$class: "ManuallyEnteredCommitContextSource", context: context],
reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/my-org/my-repo"],
errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
statusResultSource: [ $class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: message, state: state]] ]
]);
}
Please check this and this for more details.
You can use the Github Custom Notification Context SCM Behaviour plugin https://plugins.jenkins.io/github-scm-trait-notification-context/
After installing it, go to the job configuration. Under "Branch sources" -> "GitHub" -> "Behaviors", click "Add" and select "Custom Github Notification Context" from the dropdown menu. Then type your custom context name into the "Label" field.
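If you manage the job through Job DSL instead of the UI, the same behavior can be added as a branch-source trait. A sketch, assuming the plugin exposes the notificationContextTrait symbol (verify the exact symbol and fields against the plugin documentation for your version; the job name, owner, repository, and label values are illustrative):
multibranchPipelineJob('my-repo-build') {
    branchSources {
        branchSource {
            source {
                github {
                    repoOwner('my-org')      // hypothetical
                    repository('my-repo')    // hypothetical
                    traits {
                        // assumed symbol from the github-scm-trait-notification-context plugin
                        notificationContextTrait {
                            contextLabel('continuous-integration/jenkins/build')
                            typeSuffix(false)
                        }
                    }
                }
            }
        }
    }
}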
This answer is pretty much like @biruk1230's answer. But if you don't want to downgrade your GitHub plugin to work around the bug, you could call the API directly.
void setBuildStatus(String message, String state)
{
env.COMMIT_JOB_NAME = "continuous-integration/jenkins/pr-merge/sanity-test"
withCredentials([string(credentialsId: 'github-token', variable: 'TOKEN')])
{
// 'set -x' for debugging. Don't worry the access token won't be actually logged
// Also, the sh command actually executed is not properly logged, it will be further escaped when written to the log
sh """
set -x
curl \"https://api.github.com/repos/thanhlelgg/brain-and-brawn/statuses/$GIT_COMMIT?access_token=$TOKEN\" \
-H \"Content-Type: application/json\" \
-X POST \
-d \"{\\\"description\\\": \\\"$message\\\", \\\"state\\\": \\\"$state\\\", \
\\\"context\\\": \\\"${env.COMMIT_JOB_NAME}\\\", \\\"target_url\\\": \\\"$BUILD_URL\\\"}\"
"""
}
}
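It can then be wired into the pipeline like this (a sketch; the test command is hypothetical, and note that the GitHub statuses API expects lowercase state values):
node {
    checkout scm
    setBuildStatus("Sanity test in progress", "pending")
    try {
        sh './run-sanity-tests.sh'   // hypothetical test entry point
        setBuildStatus("Sanity test passed", "success")
    } catch (e) {
        setBuildStatus("Sanity test failed", "failure")
        throw e
    }
}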
The problem with both methods is that continuous-integration/jenkins/pr-merge will be displayed no matter what.
This will be helpful alongside @biruk1230's answer.
You can remove Jenkins' own status check named continuous-integration/jenkins/something and add a custom status check with GitHubCommitStatusSetter. That has a similar effect to renaming the context of the status check.
Install the Disable GitHub Multibranch Status plugin on Jenkins.
It can be applied by setting the behavior option of the Multibranch Pipeline job on Jenkins.
Thanks for your question and other answers!
Since there's a limitation in Jenkins Pipeline that you cannot add a manual build step without hanging the build (see for example this Stack Overflow question), I'm experimenting with a combination of Jenkins Pipeline and the Build Pipeline Plugin using the Job DSL plugin.
My plan was to create a Job DSL script that first executes the Jenkins Pipeline (defined in a Jenkinsfile) and then creates a downstream job that deploys to production (this is the manual step). I've created this Job DSL script as a test:
pipelineJob("${REPO_NAME} jobs") {
    logRotator(-1, 10)
    def repo = "https://path-to-repo/${REPO_NAME}.git"
    triggers {
        scm('* * * * *')
    }
    description("Pipeline for $repo")
    definition {
        cpsScm {
            scm {
                git {
                    remote { url(repo) }
                    branches('master')
                    scriptPath('Jenkinsfile')
                    extensions { } // required as otherwise it may try to tag the repo, which you may not want
                }
            }
        }
    }
    publishers {
        buildPipelineTrigger("${REPO_NAME} deploy to prod") {
            parameters {
                currentBuild()
            }
        }
    }
}
freeStyleJob("${REPO_NAME} deploy to prod") {
}
buildPipelineView("$REPO_NAME Build Pipeline") {
    selectedJob("${REPO_NAME} jobs")
}
where REPO_NAME is defined as an environment variable. The Jenkinsfile looks like this:
node {
    stage('build') {
        echo "building"
    }
    stage('run tests') {
        echo "running tests"
    }
    stage('package Docker') {
        echo "packaging"
    }
    stage('Deploy to Test') {
        echo "Deploying to Test"
    }
}
The problem is that the selectedJob points to "${REPO_NAME} jobs" which doesn't seem to be a valid option as "Initial Job" in the Build Pipeline Plugin view (you can't select it manually either).
Is there a workaround for this? I.e. how can I use a Jenkins Pipeline as the "Initial Job" for the Build Pipeline Plugin?
From the documentation at yourDomain.com/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.views.NestedViewsContext.envDashboardView, it appears that buildPipelineView can only be used within a View block, which is inside a Folder block:
Folder {
    View {
        buildPipelineView {
        }
    }
}
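In concrete Job DSL terms that could look like the following (a sketch; the folder name is illustrative):
folder('pipelines') {
    views {
        buildPipelineView("$REPO_NAME Build Pipeline") {
            selectedJob("${REPO_NAME} jobs")
        }
    }
}
With the jobs created under the same folder, the view should then be able to offer "${REPO_NAME} jobs" as its initial job.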