I have a Jenkins pipeline as follows, and it is failing with the error below. I suspect something funky is going on with Java HashMaps, but I am not sure at all. Can someone help me with that?
pipeline {
    agent { label 'master' }
    parameters {
        string(defaultValue: '123456789.ngrok.io/app-name:v1', description: '', name: 'docker_image', trim: true)
        password(defaultValue: 'app_db_password', description: '', name: 'app_db_password')
    }
    environment {
        .....
    }
    stages {
        stage('DeployAWS') {
            steps {
                script {
                    withEnv(["ENV_APP_DB_PASSWORD=${params.app_db_password}"]) {
                        env.artifacts = sh(
                            returnStdout: true,
                            script: """
                                set +x
                                python3 some_script.py --app_db_password='${ENV_APP_DB_PASSWORD}'
                                set -x
                            """
                        )
                        def encrypted_key_value_map = readJSON text: env.artifacts
                        ansiblePlaybook credentialsId: 'dev-server', disableHostKeyChecking: true, "-e \"docker_image=${env.docker_image} fernet_key=${encrypted_key_value_map["fernet_key"]} app_db_password=${encrypted_key_value_map["app_db_password"]}\"", inventory: 'playbooks/dvmt30/dev.inv', playbook: "playbooks/dvmt30/deploy-docker.yml"
                    }
                }
            }
        }
    }
}
ERROR
java.lang.IllegalArgumentException: Expected named arguments but got [{credentialsId=dev-server, disableHostKeyChecking=true, inventory=playbooks/dvmt30/dev.inv, playbook=playbooks/dvmt30/deploy-docker.yml}, -e "docker_image=123456789.ngrok.io/app-name:v1 fernet_key=$$$$$$$$$ app_db_password=*********"]
See the plugin documentation: the ansiblePlaybook step receives named arguments (a map), and the call above is malformed because the extras string in the middle has no key. (There is nothing wrong with Java HashMaps here; the IllegalArgumentException simply reports a positional argument mixed in with the named ones.)
To fix the issue, pass that value with the extras key:
ansiblePlaybook credentialsId: 'dev-server',
    disableHostKeyChecking: true,
    extras: "-e \"docker_image=${env.docker_image} fernet_key=${encrypted_key_value_map["fernet_key"]} app_db_password=${encrypted_key_value_map["app_db_password"]}\"",
    inventory: 'playbooks/dvmt30/dev.inv',
    playbook: "playbooks/dvmt30/deploy-docker.yml"
Related
Within Jenkins, I would like to parse the ansible playbook "Play Recap" output section for the failing hostname(s). I want to put the information into an email or other notification. This could also be used to fire off another Jenkins job.
I'm currently submitting an ansible-playbook run as a Jenkins job to deploy software across a number of systems. I'm using a Jenkins Pipeline script, which was necessary for sshagent to be applied correctly.
pipeline {
    agent any
    options {
        ansiColor('xterm')
    }
    stages {
        stage("setup environment") {
            steps {
                deleteDir()
            } //steps
        } //stage - setup environment
        stage("clone the repo") {
            environment {
                GIT_SSH_COMMAND = "ssh -o StrictHostKeyChecking=no"
            } //environment
            steps {
                sshagent(['my_git']) {
                    sh "git clone ssh://git@github.com/~usr/ansible.git"
                } //sshagent
            } //steps
        } //stage - clone the repo
        stage("run ansible playbook") {
            steps {
                sshagent (credentials: ['apps']) {
                    withEnv(['ANSIBLE_CONFIG=ansible.cfg']) {
                        dir('ansible') {
                            ansiblePlaybook(
                                becomeUser: null,
                                colorized: true,
                                credentialsId: 'apps',
                                disableHostKeyChecking: true,
                                forks: 50,
                                hostKeyChecking: false,
                                inventory: 'hosts',
                                limit: 'production:&*generic',
                                playbook: 'demo_play.yml',
                                sudoUser: null,
                                extras: '-vvvvv'
                            ) //ansiblePlaybook
                        } //dir
                    } //withEnv
                } //sshagent
            } //steps
        } //stage - run ansible playbook
    } //stages
    post {
        failure {
            emailext body: "Please go to ${env.BUILD_URL}/consoleText for more details.",
                recipientProviders: [[$class: 'DevelopersRecipientProvider'], [$class: 'RequesterRecipientProvider']],
                subject: "${env.JOB_NAME}",
                to: 'our.dev.team@gmail.com',
                attachLog: true
            office365ConnectorSend message: "A production system appears to be unreachable.",
                status: "Failed",
                color: "f00000",
                factDefinitions: [[name: "Credentials ID", template: "apps"],
                    [name: "Build Duration", template: "${currentBuild.durationString}"],
                    [name: "Full Name", template: "${currentBuild.fullDisplayName}"]],
                webhookUrl: 'https://outlook.office.com/webhook/[really long alphanumeric key]/IncomingWebhook/[another super-long alphanumeric key]'
        } //failure
    } //post
} //pipeline
There are several Jenkins plug-ins for parsing the console output, but none of them lets me capture and reuse the text. I have looked at log-parser and text-finder.
The only lead I have is using Groovy to script this:
https://devops.stackexchange.com/questions/5363/jenkins-groovy-to-parse-console-output-and-mark-build-failure
An example of "Play Recap" within the console output is:
PLAY RECAP **************************************************************************************************************************************************
some.host.name : ok=25 changed=2 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
some.ip.address : ok=22 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
I am trying to get either a list or a delimited string of each host that is failing (although, in the case of a list, I would need to figure out how to send multiple notifications).
If anyone could help me with the full solution, I would very much appreciate your help.
Q: "Parse the ansible playbook 'Play Recap' output section."
A: Use the JSON stdout callback and parse the output with jq. For example:
shell> ANSIBLE_STDOUT_CALLBACK=json ansible-playbook pb.yml | jq .stats
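To pull out just the failing hosts, the same stats object can be filtered; a sketch, assuming the per-host counters the JSON callback emits (failures, unreachable, etc.):
shell> ANSIBLE_STDOUT_CALLBACK=json ansible-playbook pb.yml | jq -r '.stats | to_entries[] | select(.value.failures > 0 or .value.unreachable > 0) | .key'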
There are a few 'gotchas' that I came across as I solved this problem.
The only successful way I could access the output of the ansible plugin was through pulling the raw log file: def log = currentBuild.rawBuild.getLog(100). In this case I only pulled the last 100 lines, as I'm only looking for the Play Recap box. This method requires special permissions; the console log will display the error and provide a link where the method signatures can be approved.
The ansible output should not be colorized (colorized: false). Colorized output is quite difficult to parse. The console log doesn't show you the colorized markup, but if you look at the consoleText you will see it.
When using regex, you will most likely have a matcher object, which is non-serializable. To use it in Jenkins, it may need to be placed in a function tagged @NonCPS, which stops Jenkins from trying to serialize the object. I had mixed results with needing this, so I don't exhaustively understand where it's required.
The regex statement was one of the harder parts for me. I came up with a generic statement that can easily be modified for different scenarios, e.g. failed or unreachable. I also had more luck using the 'slashy-style' regex in Groovy, which places a forward slash on either end of the statement with no need for quotes of any kind. You'll note the 'failed' portion is different, failed=([1-9]|[1-9][0-9]), so that it only matches a statement where the failure count is non-zero.
/([0-9a-zA-Z\.\-]+)(?=[ ]*:[ ]*ok=([0-9]|[1-9][0-9])[ ]*changed=([0-9]|[1-9][0-9])[ ]*unreachable=([0-9]|[1-9][0-9])[ ]*failed=([1-9]|[1-9][0-9]))/
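Before wiring it into a pipeline, the pattern can be sanity-checked against the sample recap above in a plain Groovy console; a quick sketch:

// Two recap lines: one failing host, one healthy host
def recap = '''some.host.name : ok=25 changed=2 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
some.ip.address : ok=22 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0'''
def matches = recap =~ /([0-9a-zA-Z\.\-]+)(?=[ ]*:[ ]*ok=([0-9]|[1-9][0-9])[ ]*changed=([0-9]|[1-9][0-9])[ ]*unreachable=([0-9]|[1-9][0-9])[ ]*failed=([1-9]|[1-9][0-9]))/
// Only the host with failed=1 matches; group 0 of the match is the hostname itself
assert matches.size() == 1
assert matches[0][0] == 'some.host.name'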
Here's the full pipeline code that I came up with.
pipeline {
    agent any
    options {
        ansiColor('xterm')
    }
    stages {
        stage("setup environment") {
            steps {
                deleteDir()
            } //steps
        } //stage - setup environment
        stage("clone the repo") {
            environment {
                GIT_SSH_COMMAND = "ssh -o StrictHostKeyChecking=no"
            } //environment
            steps {
                sshagent(['my_git']) {
                    sh "git clone ssh://git@github.com/~usr/ansible.git"
                } //sshagent
            } //steps
        } //stage - clone the repo
        stage("run ansible playbook") {
            steps {
                sshagent (credentials: ['apps']) {
                    withEnv(['ANSIBLE_CONFIG=ansible.cfg']) {
                        dir('ansible') {
                            ansiblePlaybook(
                                becomeUser: null,
                                colorized: false,
                                credentialsId: 'apps',
                                disableHostKeyChecking: true,
                                forks: 50,
                                hostKeyChecking: false,
                                inventory: 'hosts',
                                limit: 'production:&*generic',
                                playbook: 'demo_play.yml',
                                sudoUser: null,
                                extras: '-vvvvv'
                            ) //ansiblePlaybook
                        } //dir
                    } //withEnv
                } //sshagent
            } //steps
        } //stage - run ansible playbook
    } //stages
    post {
        failure {
            script {
                problem_hosts = get_the_hostnames()
            }
            emailext body: "${problem_hosts} has failed. Please go to ${env.BUILD_URL}/consoleText for more details.",
                recipientProviders: [[$class: 'DevelopersRecipientProvider'], [$class: 'RequesterRecipientProvider']],
                subject: "${env.JOB_NAME}",
                to: 'our.dev.team@gmail.com',
                attachLog: true
            office365ConnectorSend message: "${problem_hosts} has failed.",
                status: "Failed",
                color: "f00000",
                factDefinitions: [[name: "Credentials ID", template: "apps"],
                    [name: "Build Duration", template: "${currentBuild.durationString}"],
                    [name: "Full Name", template: "${currentBuild.fullDisplayName}"]],
                webhookUrl: 'https://outlook.office.com/webhook/[really long alphanumeric key]/IncomingWebhook/[another super-long alphanumeric key]'
        } //failure
    } //post
} //pipeline
//@NonCPS
def get_the_hostnames() {
    // Get the last 100 lines of the log as a single string
    def log = currentBuild.rawBuild.getLog(100).join('\n')
    print log
    // Grep the log for the failed hostnames
    def matches = log =~ /([0-9a-zA-Z\.\-]+)(?=[ ]*:[ ]*ok=([0-9]|[1-9][0-9])[ ]*changed=([0-9]|[1-9][0-9])[ ]*unreachable=([0-9]|[1-9][0-9])[ ]*failed=([1-9]|[1-9][0-9]))/
    def hostnames = null
    // if any matches occurred
    if (matches) {
        // iterate over the matches
        for (int i = 0; i < matches.size(); i++) {
            // if there is already a name, concatenate the next one;
            // else populate it (matches[i][0] is the full match, i.e. the hostname)
            if (hostnames?.trim()) {
                hostnames = hostnames + " " + matches[i][0]
            } else {
                hostnames = matches[i][0]
            } // if/else
        } // for
    } // if
    if (!hostnames?.trim()) {
        hostnames = "No hostnames identified."
    }
    return hostnames
}
I have created a credential test_cred of type "secret text" to store a password, which should be passed to an Ansible playbook.
I am passing this parameter as an extra variable root_pass to Ansible, but root_pass evaluates to the string test_cred instead of the secret text it contains. Can somebody please help me get the value of the credential test_cred so that I can pass it to Ansible?
stages {
    stage('Execution') {
        steps {
            withCredentials([string(credentialsId: 'test_cred', variable: 'test')]) {
            }
            ansiblePlaybook(
                installation: 'ansible',
                inventory: "inventory/hosts",
                playbook: "${PLAYBOOK}",
                extraVars: [
                    server: "${params.Server}",
                    client: "${params.Client}",
                    root_pass: "${test}"
                ]
            )
        }
    }
}
Thank you Zeitounator. The ansiblePlaybook call had to sit inside the withCredentials block so that the test variable is actually bound when the step runs. The working code is:
stages {
    stage('Execution') {
        steps {
            withCredentials([string(credentialsId: 'test_cred', variable: 'test')]) {
                ansiblePlaybook(
                    installation: 'ansible',
                    inventory: "inventory/hosts",
                    playbook: "${PLAYBOOK}",
                    extraVars: [
                        server: "${params.Server}",
                        client: "${params.Client}",
                        root_pass: "${test}"
                    ]
                )
            }
        }
    }
}
I think I don't get how matrix builds work. When I set a variable in some stage depending on which node I run on, then in the rest of the stages that variable is sometimes set as it should be and sometimes gets values from other nodes (axes). In the example below, a job that runs on ub18-1 sometimes has VARIABLE1='Linux node' and sometimes VARIABLE1='Windows node'. Likewise, gitmethod is sometimes created from LinuxGitInfo and sometimes from WindowsGitInfo.
The source I based this on:
https://jenkins.io/doc/book/pipeline/syntax/#declarative-matrix
The script is almost exactly the same as the real one:
@Library('firstlibrary') _

import mylib.shared.*

pipeline {
    parameters {
        booleanParam name: 'AUTO', defaultValue: true, description: 'Auto mode sets some parameters for every slave separately'
        choice(name: 'SLAVE_NAME', choices: ['all', 'ub18-1', 'win10'], description: 'Run on specific platform')
        string(name: 'BRANCH', defaultValue: 'master', description: 'Preferably common label for entire group')
        booleanParam name: 'SONAR', defaultValue: false, description: 'Scan and gateway'
        booleanParam name: 'DEPLOY', defaultValue: false, description: 'Deploy to Artifactory'
    }
    agent none
    stages {
        stage('BuildAndTest') {
            matrix {
                agent {
                    label "$NODE"
                }
                when { anyOf {
                    expression { params.SLAVE_NAME == 'all' }
                    expression { params.SLAVE_NAME == env.NODE }
                }}
                axes {
                    axis {
                        name 'NODE'
                        values 'ub18-1', 'win10'
                    }
                }
                stages {
                    stage('auto mode') {
                        when {
                            expression { return params.AUTO }
                        }
                        steps {
                            echo "Setting parameters for each slave"
                            script {
                                nodeLabelsList = env.NODE_LABELS.split()
                                if (nodeLabelsList.contains('ub18-1')) {
                                    println("Setting params for ub18-1")
                                    VARIABLE1 = 'Linux node'
                                }
                                if (nodeLabelsList.contains('win10')) {
                                    println("Setting params for Win10")
                                    VARIABLE1 = 'Windows node'
                                }
                                if (isUnix()) {
                                    gitmethod = new LinuxGitInfo(this, env)
                                } else {
                                    gitmethod = new WindowsGitInfo(this, env)
                                }
                            }
                        }
                    }
                    stage('GIT') {
                        steps {
                            checkout scm
                        }
                    }
                    stage('Info') {
                        steps {
                            script {
                                sh 'printenv'
                                echo "branch: " + env.BRANCH_NAME
                                echo "SLAVE_NAME: " + env.NODE_NAME
                                echo VARIABLE1
                                gitinfo = new GitInfo(gitmethod)
                                gitinfo.init()
                                echo gitinfo.author
                                echo gitinfo.id
                                echo gitinfo.msg
                                echo gitinfo.buildinfo
                            }
                        }
                    }
                    stage('install') {
                        steps {
                            sh 'make install'
                        }
                    }
                    stage('test') {
                        steps {
                            sh 'make test'
                        }
                    }
                }
            }
        }
    }
}
OK, I solved the problem by defining variable maps with the node/slave names as keys. A friend even suggested defining the variables in a YAML/JSON file in the repository and parsing them; maybe I will, but so far this works well.
Example:
Before the pipeline block:
def DEPLOYmap = [
    'ub18-1': false,
    'win10': true
]
In stages:
when {
    equals expected: true, actual: DEPLOYmap[NODE]
}
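The same map pattern covers VARIABLE1 from the question; a minimal sketch (VARIABLE1map is a name made up for illustration):

// before the pipeline block, next to DEPLOYmap
def VARIABLE1map = [
    'ub18-1': 'Linux node',
    'win10' : 'Windows node'
]

and inside any matrix stage, look the value up by the NODE axis instead of mutating a shared global:

steps {
    echo "VARIABLE1: ${VARIABLE1map[env.NODE]}"
}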
We've set up a nightly build using a simple pipeline job that runs periodically every day, but the developers are not getting email notifications for it.
We're using the emailext plugin for sending those emails and Kubernetes agents as nodes.
The job is started by a timer because it's a periodic build, which runs the following pipeline configuration (you can ignore the agent's container definition, as it's not relevant, IMO):
pipeline {
    agent {
        kubernetes {
            yaml """
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: agent
    image: python:3.7
    command:
    - cat
    tty: true
"""
        }
    }
    options {
        timestamps()
    }
    stages {
        stage('SCM') {
            steps {
                checkout(
                    changelog: false,
                    poll: false,
                    scm: [$class: 'GitSCM',
                          userRemoteConfigs: [[credentialsId: 'Git SSH Key', url: 'git@bitbucket.org:company/repository.git']],
                          branches: [[name: 'master']],
                          doGenerateSubmoduleConfigurations: false,
                          extensions: [[$class: 'CloneOption', depth: 1, noTags: false, reference: '', shallow: true],
                                       [$class: 'PruneStaleBranch'],
                                       [$class: 'GitLFSPull'],
                                       [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true, recursiveSubmodules: true, reference: '', trackingSubmodules: false]],
                          submoduleCfg: []]
                )
            }
        }
        stage('Build') {
            steps {
                container('agent') {
                    echo '-> install tox'
                    sh 'pip install tox'
                    sh 'python --version'
                    sh 'pip --version'
                }
            }
        }
        stage('Test') {
            steps {
                container('agent') {
                    sh 'tox -c ./tox.ini'
                }
            }
            post {
                always {
                    echo '-> collecting artifacts'
                    archiveArtifacts allowEmptyArchive: true, artifacts: '*.txt'
                    echo '-> collecting test results'
                    junit allowEmptyResults: true, testResults: 'output/pytest.xml'
                }
            }
        }
    }
    post {
        changed {
            emailext(
                subject: '$DEFAULT_SUBJECT',
                body: '$DEFAULT_CONTENT',
                recipientProviders: [culprits(),
                                     developers(),
                                     requestor(),
                                     brokenBuildSuspects(),
                                     brokenTestsSuspects(),
                                     upstreamDevelopers()]
            )
        }
    }
}
The above does work when there is a manual start of the job (the starting developer gets the relevant email); however, when the job is triggered by the periodic build (cron), the recipients list is always empty:
An attempt to send an e-mail to an empty list of recipients, ignored.
What might be the problem?
Another way is to obtain the committers using git:
// Get the parents of the latest merge commit in an array
def gitCommits = sh(returnStdout: true, script: 'git log --merges -1 --format=%p').trim().split(' ')
// Get committer emails (double quotes so that ${it} is interpolated by Groovy):
def emails = ""
gitCommits.each {
    emails = emails + sh(returnStdout: true, script: "git --no-pager show -s --format=%ae ${it}").trim() + ","
}
echo "${emails}"
Obviously all the recipient providers return an empty list:
requestor() returns an empty list because a timer-triggered build has no requestor, and upstreamDevelopers() returns an empty list because there is no upstream build.
Check the source code to figure out why culprits(), developers(), brokenBuildSuspects() and brokenTestsSuspects() return empty lists; one likely culprit is that the checkout step above passes changelog: false, so the build has no change sets from which committers could be computed.
pipeline {
    agent any
    stages {
        stage("foo") {
            steps {
                script {
                    env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
                        parameters: [choice(name: 'RELEASE_SCOPE', choices: 'patch\nminor\nmajor',
                            description: 'What is the release scope?')]
                }
                echo "${env.RELEASE_SCOPE}"
            }
        }
    }
}
In the above code, the choices are hardcoded (patch\nminor\nmajor); my requirement is to supply the choice values in the dropdown dynamically.
I get the values by calling an API: artifact (.zip) file names from Artifactory.
Also, in the above example the input is requested during the build, but I want a "Build with parameters" instead.
Please suggest/help.
It depends on how you get the data from the API; there will be different options. For example, let's imagine that you get the data as a list of strings (let's call it releaseScope); in that case your code would be the following:
...
script {
    def releaseScopeChoices = ''
    releaseScope.each {
        releaseScopeChoices += it + '\n'
    }
    parameters: [choice(name: 'RELEASE_SCOPE', choices: releaseScopeChoices, description: 'What is the release scope?')]
}
...
Hope it helps.
This is a cut-down version of what we use. We separate stuff into shared libraries, but I have consolidated it a bit to make it easier.
The Jenkinsfile looks something like this:
#!groovy
@Library('shared') _

def imageList = pipelineChoices.artifactoryArtifactSearchList(repoName, env.BRANCH_NAME)
imageList.add(0, 'build')

properties([
    buildDiscarder(logRotator(numToKeepStr: '20')),
    parameters([
        choice(name: 'ARTIFACT_NAME', choices: imageList.join('\n'), description: '')
    ])
])
The shared library that looks at Artifactory is pretty simple: essentially, make a GET request (providing auth creds on it), then filter/split the result to whittle it down to the desired values, and return the list to the Jenkinsfile.
import com.cloudbees.groovy.cps.NonCPS
import groovy.json.JsonSlurper
import java.util.regex.Pattern

List artifactoryArtifactSearchList(String repoKey, String artifact_name, String artifact_archive, String branchName) {
    // URL components
    String baseUrl = "https://org.jfrog.io/org/api/search/artifact"
    String url = baseUrl + "?name=${artifact_name}&repos=${repoKey}"
    Object responseJson = getRequest(url)
    String regexPattern = "(.+)${artifact_name}-(\\d+).(\\d+).(\\d+).${artifact_archive}\$"
    Pattern regex = ~regexPattern
    List<String> outlist = responseJson.results.findAll({ it['uri'].matches(regex) })
    List<String> artifactlist = []
    for (i in outlist) {
        artifactlist.add(i['uri'].tokenize('/')[-1])
    }
    return artifactlist.reverse()
}

// Artifactory GET request - consume in other methods.
// @NonCPS goes on the method itself so the non-serializable
// URLConnection/JsonSlurper objects never cross a CPS boundary.
@NonCPS
Object getRequest(String url_string) {
    URL url = url_string.toURL()
    // Open connection
    URLConnection connection = url.openConnection()
    connection.setRequestProperty("Authorization", basicAuthString())
    // Open input stream and parse the JSON response
    InputStream inputStream = connection.getInputStream()
    def json_data = new JsonSlurper().parseText(inputStream.text)
    // Close the stream
    inputStream.close()
    return json_data
}

// Build the basic-auth header - consume in other methods
@NonCPS
Object basicAuthString() {
    // Retrieve password
    String username = "artifactoryMachineUsername"
    String credid = "artifactoryApiKey"
    def apiKey = null
    def credentials_store = jenkins.model.Jenkins.instance.getExtensionList(
        'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
    )
    credentials_store[0].credentials.each { it ->
        if (it instanceof org.jenkinsci.plugins.plaincredentials.StringCredentials) {
            if (it.getId() == credid) {
                apiKey = it.getSecret()
            }
        }
    }
    // Create authorization header format using Base64 encoding
    String userpass = username + ":" + apiKey
    String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes())
    return basicAuth
}
I could achieve it without any plugin:
With Jenkins 2.249.2, using a declarative pipeline, the following pattern prompts the user with a dynamic dropdown menu (for them to choose a branch).
(The surrounding withCredentials block is optional, required only if your script and Jenkins configuration do use credentials.)
node {
    withCredentials([[$class: 'UsernamePasswordMultiBinding',
                      credentialsId: 'user-credential-in-gitlab',
                      usernameVariable: 'GIT_USERNAME',
                      passwordVariable: 'GITLAB_ACCESS_TOKEN']]) {
        BRANCH_NAMES = sh (script: 'git ls-remote -h https://${GIT_USERNAME}:${GITLAB_ACCESS_TOKEN}@dns.name/gitlab/PROJS/PROJ.git | sed \'s/\\(.*\\)\\/\\(.*\\)/\\2/\' ', returnStdout: true).trim()
    }
}

pipeline {
    agent any
    parameters {
        choice(
            name: 'BranchName',
            choices: "${BRANCH_NAMES}",
            description: 'to refresh the list, go to configure, disable "this build has parameters", launch build (without parameters) to reload the list and stop it, then launch it again (with parameters)'
        )
    }
    stages {
        stage("Run Tests") {
            steps {
                sh "echo SUCCESS on ${BranchName}"
            }
        }
    }
}
The drawback is that one should refresh the Jenkins configuration and use a blank run for the list to be refreshed using the script...
Solution (not from me): this limitation can be made less annoying by using an additional parameter used specifically to refresh the values:
parameters {
    booleanParam(name: 'REFRESH_BRANCHES', defaultValue: false, description: 'refresh BRANCH_NAMES branch list and launch no step')
}
then within a stage:
stage('a stage') {
    when {
        expression {
            return !params.REFRESH_BRANCHES.toBoolean()
        }
    }
    ...
}
This is my solution:
def envList
def dockerId

node {
    envList = "defaultValue\n" + sh (script: 'kubectl get namespaces --no-headers -o custom-columns=":metadata.name"', returnStdout: true).trim()
}

pipeline {
    agent any
    parameters {
        choice(choices: "${envList}", name: 'DEPLOYMENT_ENVIRONMENT', description: 'please choose the environment you want to deploy?')
        booleanParam(name: 'SECURITY_SCAN', defaultValue: false, description: 'container vulnerability scan')
    }
    // ... stages follow in the full pipeline ...
}
The example Jenkinsfile below contains an AWS CLI command to get the list of Docker images from AWS ECR dynamically, but it can be replaced with your own command. The Active Choices plug-in is required.
Note! You need to approve the script specified in the parameters after the first run in "Manage Jenkins" -> "In-process Script Approval", or open the job configuration and save it to approve it automatically (might require administrator permissions).
properties([
    parameters([[
        $class: 'ChoiceParameter',
        choiceType: 'PT_SINGLE_SELECT',
        name: 'image',
        description: 'Docker image',
        filterLength: 1,
        filterable: false,
        script: [
            $class: 'GroovyScript',
            fallbackScript: [classpath: [], sandbox: false, script: 'return ["none"]'],
            script: [
                classpath: [],
                sandbox: false,
                script: '''\
                    def repository = "frontend"
                    def aws_ecr_cmd = "aws ecr list-images" +
                                      " --repository-name ${repository}" +
                                      " --filter tagStatus=TAGGED" +
                                      " --query imageIds[*].[imageTag]" +
                                      " --region us-east-1 --output text"
                    def aws_ecr_out = aws_ecr_cmd.execute() | "sort -V".execute()
                    def images = aws_ecr_out.text.tokenize().reverse()
                    return images
                '''.stripIndent()
            ]
        ]
    ]])
])

pipeline {
    agent any
    stages {
        stage('First stage') {
            steps {
                sh 'echo "${image}"'
            }
        }
    }
}
choiceArray = ["patch", "minor", "major"]
properties([
    parameters([
        choice(choices: choiceArray.join('\n'),
            description: '',
            name: 'SOME_CHOICE')
    ])
])
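On recent Jenkins versions the choice parameter also accepts a list directly, which avoids building a delimited string at all; a minimal sketch of the same parameter:

properties([
    parameters([
        choice(choices: choiceArray, description: '', name: 'SOME_CHOICE')
    ])
])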