Variable zero in groovy script - jenkins

I'm facing a really weird problem:
// Update the service
stage "Update Service"
def SERVICE_NAME = "currency-converter-search-srv"
def TASK_FAMILY = "currency-converter-search"
def TASK_REVISION = sh "aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search | jq .taskDefinition.revision"
def DESIRED_COUNT = sh "aws --region us-east-1 ecs describe-services --services ${SERVICE_NAME} | jq .services[0].desiredCount"
if (DESIRED_COUNT == 0) {
    DESIRED_COUNT = 1
}
sh "aws --region us-east-1 ecs update-service --cluster default --service ${SERVICE_NAME} --task-definition ${TASK_FAMILY}:${TASK_REVISION} --desired-count ${DESIRED_COUNT}"
This script fails; here is the log:
[Pipeline] stage (Update Service)
Entering stage Update Service
Proceeding
[Pipeline] sh
[workspace] Running shell script
+ jq .taskDefinition.revision
+ aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search
13
[Pipeline] sh
[workspace] Running shell script
+ jq .services[0].desiredCount
+ aws --region us-east-1 ecs describe-services --services currency-converter-search-srv
0
[Pipeline] sh
[workspace] Running shell script
+ aws --region us-east-1 ecs update-service --cluster default --service currency-converter-search-srv --task-definition currency-converter-search:0 --desired-count 1
An error occurred (InvalidParameterException) when calling the UpdateService operation: revision must be between 1 and 2147483647
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
The reason is that the TASK_REVISION variable is 0, although according to the command output above it should be 13. Do you know why this weird behaviour occurs?

You can't assign the result of sh to a variable like that.
sh doesn't return anything meaningful... I think there's an open issue for this, but there's no fix yet.
The workaround seems to be to redirect the result to a file, then read that file.
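A minimal sketch of that workaround, reusing the question's first command (the file name is illustrative):

// Redirect the command output to a file, then read it back with readFile
sh "aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search | jq .taskDefinition.revision > task_revision.txt"
def TASK_REVISION = readFile('task_revision.txt').trim()

Newer Pipeline versions also support sh(script: "...", returnStdout: true), as used in the related question below.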

Related

In a Jenkins pipeline, why is this not showing the return string?

I have the following scripted pipeline that adds a tag to an existing ECR image in AWS:
node("linux") {
stage("test") {
docker.withRegistry("https://0123456789.dkr.ecr.us-east-1.amazonaws.com", "myCreds") {
String rc = null
sh """
aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
cat manifest.json
"""
try {
rc = sh(script: """
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json
""",
returnStdout: true).trim()
}
catch(err) {
println "rc=$rc"
}
}
}
}
When I run the pipeline, I get this in the console output.
+ aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://./manifest.json
An error occurred (ImageAlreadyExistsException) when calling the PutImage operation: Image with digest 'sha256:ff44828207c7c7df75a8be2e08057da438b4b7f3063acab34ea7ebbcb7dd50a6' and tag 'qa-1.0.286' already exists in the repository with name 'repo-name' in registry with id '0123456789'
[Pipeline] echo
rc=null
Why is rc=null instead of the "An error occurred..." string shown above it in the console output? I've used this approach to capture shell script output before, so why doesn't it work here? What's the proper way to do it in this case?
The problem is that the sh step captures standard output, while the aws client prints the message to standard error.
You can forward stderr into stdout by appending 2>&1 to your command, for example:
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1
But the other problem is that when the command inside sh fails, the step throws an exception and never assigns a value to the variable, so you need to make sure the command always succeeds, for example by appending || : (execute an empty command if the previous command fails).
The downside is that you then need to inspect the output variable to detect whether the command failed.
The snippet could then look like this:
String rc = null
sh """
    aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
    cat manifest.json
"""
rc = sh(script: """
    aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1 || :
    """, returnStdout: true).trim()
if (rc.contains("error occurred")) {
    // the command invocation failed
}
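A hedged alternative, not from the original answer: the sh step also accepts returnStatus: true, which suppresses the exception and returns the exit code instead, so you can test the status directly rather than grepping the output. A minimal sketch, with the output redirected to a file (put-image.log is an illustrative name) so it can still be read back:

// returnStatus: true makes sh return the exit code instead of throwing
int status = sh(script: "aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json > put-image.log 2>&1",
    returnStatus: true)
String out = readFile('put-image.log').trim()
if (status != 0) {
    // the command invocation failed; out holds the error message
    println out
}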

Terraform ignores parallelism flag when running through Jenkins

I am running a Terraform job using a Jenkins pipeline. Terraform refresh is taking too long (~10m); with -parallelism=60, a local terraform run is much faster (~2.5m).
When running the same config through a Jenkins slave with parallelism, I don't see any improvement in running time.
Jenkins ver. 2.154
Jenkins Docker plugin 1.1.6
SSH Agent plugin 1.19
Flow: Jenkins master creates job -> Jenkins slave running Docker image of terraform
def terraformRun(String envName, String terraformAction, String dirName = 'env') {
    sshagent(['xxxxxxx-xxx-xxx-xxx-xxxxxxxx']) {
        withEnv(["ENV_NAME=${envName}", "TERRAFORM_ACTION=${terraformAction}", "DIR_NAME=${dirName}"]) {
            sh '''
                #!/bin/bash
                set -e
                ssh-keyscan -H "bitbucket.org" >> ~/.ssh/known_hosts
                AUTO_APPROVE=""
                echo terraform "${TERRAFORM_ACTION}" on "${ENV_NAME}"
                cd "${DIR_NAME}"
                export TF_WORKSPACE="${ENV_NAME}"
                echo "terraform init"
                terraform init -input=false
                echo "terraform refresh"
                terraform apply -refresh-only -auto-approve -parallelism=60 -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars # Refresh is working but it seems to ignore parallelism
                echo "terraform ${TERRAFORM_ACTION}"
                if [ "${TERRAFORM_ACTION}" = "apply" ]; then
                    AUTO_APPROVE="-auto-approve"
                fi
                terraform ${TERRAFORM_ACTION} -refresh=false -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars ${AUTO_APPROVE}
                echo "terraform ${TERRAFORM_ACTION} on ${ENV_NAME} finished successfully."
            '''
        }
    }
}

Persist shell across multiple sh calls in a Groovy Jenkins pipeline

I have two Groovy functions within a Jenkins pipeline that together log into an ECR repo and build a Docker container. It looks like this:
def login() {
    sh "aws ecr get-login --registry-ids <id> --region <region> --no-include-email"
    sh "aws ecr get-login --region <region> --no-include-email"
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}
However, when I run this, I get pull access denied, as if I had never logged in. I surmise this is because the aws ecr login commands ran in their own shells, and the build command ran in another. Ideally, I'd like to keep this kind of functional decomposition and other features of Groovy, but run the shell commands in a single process/shell. Is this possible? How can I accomplish this?
The issue here is that aws ecr get-login returns a string containing the command to log in to your registry. You need to execute the result of that command, by assigning its output to a variable and then running it, like so:
def login() {
    loginRegistry = sh(script: "aws ecr get-login --registry-ids <id> --region <region> --no-include-email", returnStdout: true)
    sh loginRegistry
    loginRegion = sh(script: "aws ecr get-login --region <region> --no-include-email", returnStdout: true)
    sh loginRegion
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}
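Note that this applies to AWS CLI v1; v2 removed get-login in favor of get-login-password. A sketch of the equivalent login under v2, with the same <id> and <region> placeholders as above:

def login() {
    // AWS CLI v2: pipe the registry password straight into docker login
    sh 'aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <id>.dkr.ecr.<region>.amazonaws.com'
}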

Running ssh-agent within docker on jenkins doesn't work

I am trying to use a container within my Jenkins pipeline, however I can't get ssh-agent to work inside it. I am on v1.19 of the plugin; when I run the code below I get:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
However, if I run the code outside the image it works perfectly, proving that the user has the correct permissions.
node('nodeName') {
    cleanWs()
    ws("short") {
        withDockerRegistry([credentialsId: 'token', url: "https://private.repo.com"]) {
            docker.image("img:1.0.0").inside("-u root:root --network=host") {
                sshagent(credentials: ["bitbucket_token"]) {
                    sh "mkdir ~/.ssh"
                    sh 'ssh-keyscan bitbucket.company.com >> ~/.ssh/known_hosts'
                    sh 'git clone ssh://git@bitbucket.company.com:PORT/repo.git'
                }
            }
        }
    }
}
Here is the output:
[Pipeline] sshagent
[ssh-agent] Using credentials jenkins (bitbucket_token)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ docker exec abcdef123456 ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-qwertyu/agent.15
SSH_AGENT_PID=22
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/short@tmp/private_key_8675309.key (/home/jenkins/short@tmp/private_key_8675309.key)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ mkdir /root/.ssh
[Pipeline] sh
+ ssh-keyscan bitbucket.company.com
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
[Pipeline] sh
+ git clone ssh://git@bitbucket.company.com:PORT/repo.git
Cloning into 'repo'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[Pipeline] }
$ docker exec --env ******** --env ******** abcdef123456 ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 22 killed;
[ssh-agent] Stopped.
[Pipeline] // sshagent
I'm completely stumped by this.

aws ecs ec2 continuous deployment with jenkins

I am using Jenkins for continuous deployment from GitLab into an AWS ECS EC2 container instance, driven by a Jenkinsfile. For registering the task definition on each push, I have placed the task definition JSON file in an aws folder in GitLab. Is it possible to keep the task definition JSON file in Jenkins, so that only the Jenkinsfile needs to stay in GitLab?
There is a workspace folder in Jenkins (/var/lib/jenkins/workspace/jobname) which is created after the first build. Can we place the task definition there?
My Jenkinsfile is pasted below:
stage 'Checkout'
git 'git@gitlab.xxxx.com/repo.git'

stage ("Docker build") {
    sh "docker build --no-cache -t xxxx:${BUILD_NUMBER} ."
}

stage("Docker push") {
    docker.withRegistry('https://xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com', 'ecr:regopm:ecr-credentials') {
        docker.image("xxxxx:${BUILD_NUMBER}").push(remoteImageTag)
    }
}

stage ("Deploy") {
    sh "sed -e 's;BUILD_TAG;${BUILD_NUMBER};g' aws/task-definition.json > aws/task-definition-${remoteImageTag}.json"
    sh " \
        aws ecs register-task-definition --family ${taskFamily} \
        --cli-input-json ${taskDefile} \
    "
    def taskRevision = sh(
        returnStdout: true,
        script: "aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' ' | awk '{print \$2}'"
    ).trim()
    sh " \
        aws ecs update-service --cluster ${clusterName} \
        --service ${serviceName} \
        --task-definition ${taskFamily}:${taskRevision} \
        --desired-count 1 \
    "
}
Along the very same approach, but putting together some reusable logic, we just open-sourced our "gluing" tool, which we're using from Jenkins as well (please see the Extra section for templates for Jenkins pipelines):
https://github.com/GuccioGucci/yoke
Use ecs-cli as an alternative in case you do not want to use a task definition: install ecs-cli on the Jenkins node and run it there, but that still needs a docker-compose file in the Git repo.
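A rough sketch of that ecs-cli route (assuming ecs-cli is installed on the node and a docker-compose.yml is checked in; the cluster and project names are illustrative):

sh '''
    ecs-cli configure --cluster default --region ap-southeast-1
    ecs-cli compose --project-name myservice --file docker-compose.yml service up
'''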