Merge DSL job shells into just one - Jenkins

I have this step
def createJob(def jobName, def branchName) {
    job(jobName) {
        steps {
            shell('export AWS_DEFAULT_REGION=eu-west-1')
            shell('$(aws ecr get-login --region eu-west-1)')
            shell('docker build -t builder -f ./images/' + branchName + '/Dockerfile .')
            shell('docker tag -f ' + branchName + ':latest *******.dkr.ecr.eu-west-1.amazonaws.com/' + branchName + ':latest')
            shell('docker push *********.dkr.ecr.eu-west-1.amazonaws.com/' + branchName + ':latest')
        }
    }
}
How can I combine all of those into a single shell step?
I tried this:
shell( '''
export AWS_DEFAULT_REGION=eu-west-1
$(aws ecr get-login --region eu-west-1)
docker build -t builder -f ./images/'+branchName+'/Dockerfile .
''')
But then the variable branchName is treated as a literal string.
Regards.

Use double quotes instead; they support interpolation (single quotes and triple single quotes do not). Then you can use ${} to insert variables into the string. Note that the shell's own $(...) command substitution needs its dollar sign escaped as \$, otherwise Groovy rejects it:
shell("""
    export AWS_DEFAULT_REGION=eu-west-1
    \$(aws ecr get-login --region eu-west-1)
    docker build -t builder -f ./images/${branchName}/Dockerfile .
""")
For more information, see the Groovy documentation on string interpolation.
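Putting it together, the createJob step from the question could then look like this (a sketch; the registry ID stays redacted as in the question):
def createJob(def jobName, def branchName) {
    job(jobName) {
        steps {
            // One shell step: branchName is interpolated by Groovy, while the
            // shell's $(...) substitution is escaped so Groovy leaves it alone.
            shell("""
                export AWS_DEFAULT_REGION=eu-west-1
                \$(aws ecr get-login --region eu-west-1)
                docker build -t builder -f ./images/${branchName}/Dockerfile .
                docker tag -f ${branchName}:latest *******.dkr.ecr.eu-west-1.amazonaws.com/${branchName}:latest
                docker push *******.dkr.ecr.eu-west-1.amazonaws.com/${branchName}:latest
            """)
        }
    }
}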

Related

In a Jenkins pipeline, why is this not showing the return string?

I have the following scripted pipeline that adds a tag to an existing ECR image in AWS:
node("linux") {
stage("test") {
docker.withRegistry("https://0123456789.dkr.ecr.us-east-1.amazonaws.com", "myCreds") {
String rc = null
sh """
aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
cat manifest.json
"""
try {
rc = sh(script: """
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json
""",
returnStdout: true).trim()
}
catch(err) {
println "rc=$rc"
}
}
}
}
When I run the pipeline, I get this in the console output.
+ aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://./manifest.json
An error occurred (ImageAlreadyExistsException) when calling the PutImage operation: Image with digest 'sha256:ff44828207c7c7df75a8be2e08057da438b4b7f3063acab34ea7ebbcb7dd50a6' and tag 'qa-1.0.286' already exists in the repository with name 'repo-name' in registry with id '0123456789'
[Pipeline] echo
rc=null
Why is rc=null instead of the "An error occurred..." string shown above it in the console output? I've used this approach to capture shell script output before, so why doesn't it work here? What's the proper way to do it in this case?
The problem is that the sh step captures standard output, while the aws client prints the error message to standard error.
You can redirect stderr to stdout by adding 2>&1 at the end of your command, for example:
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1
The other problem is that when the command inside sh fails, the step throws an exception and never assigns a value to the variable, so you need to make sure the command always succeeds, for example by appending || : (execute an empty command if the previous command fails).
The downside is that you then need to inspect the output variable to determine whether the command failed.
The snippet could then look like this:
String rc = null
sh """
aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
cat manifest.json
"""
rc = sh(script:
"""
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1 || :
""", returnStdout: true).trim()
if (rc.contains("error occurred")) {
// the command invocation failed
}
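If you only need to know whether put-image failed, and not its output, a simpler sketch (using the same placeholder repository name and tag) is to use returnStatus, which returns the exit code instead of throwing:
// returnStatus makes sh return the exit code rather than raise an exception,
// so no || : or output parsing is needed.
int status = sh(
    script: "aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json",
    returnStatus: true
)
if (status != 0) {
    echo "put-image failed with exit code ${status}"
}
Note that with returnStatus the standard output is not captured, so this only fits when the exit code is all you need.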

Add image tag to kubernetes manifest with Jenkins

Currently, I choose which image to push to the registry, and I use a "complex" method to set the image tag in the manifest before pushing the files to the Git repo.
This is my code
stage("Push to Repo "){
steps {
script {
def filename = 'Path/to/file/deploy.yaml'
def data = readYaml file: filename
data.spec[0].template.spec.containers[0].image = "XXXXXXXXXXXXXXXXXXXXX:${PROJECT_VERSION}"
sh "rm $filename"
writeYaml file: filename, data: data
sh "sed -ie 's/- apiVersion/ apiVersion/g' Path/to/file/deploy.yaml "
sh "sed -i '/^ - c.*/a ---' Path/to/file/deploy.yaml "
sh ''' cd Path/to/file/
git add .
git commit -m "[0000] [update] update manifest to version: ${PROJECT_VERSION} "
git push -u origin HEAD:branche_name '''
}}}
I'm looking for another way to write the image tag directly into the manifest.
Is there a Jenkins plugin to do that?
I use the yq tool to do this; it is available as a Docker image for editing YAML files.
Example (just docker run):
docker run --rm --user="root" \
    -e TAG=dev-123456 \
    -v "${PWD}":/workspace \
    -w /workspace mikefarah/yq \
    eval '.spec.spec.containers.image.tag = strenv(TAG)' \
    -i values.yaml
This replaces the current tag in the deployment with dev-123456.
I wrote it across multiple lines to make it easier to read; you can put it on one line if you want.
Link for details:
https://hub.docker.com/r/mikefarah/yq
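If you prefer to keep this inside the pipeline rather than running docker by hand, a sketch of wrapping the same image in an sh step might look like the following (the YAML path matches the structure from the question and the image prefix is the question's redacted placeholder):
// Sketch: run the mikefarah/yq image from the pipeline to set the image tag
// directly in the manifest, instead of readYaml/writeYaml plus sed.
sh """
    docker run --rm --user=root \\
        -e TAG=${PROJECT_VERSION} \\
        -v "\$PWD":/workspace -w /workspace mikefarah/yq \\
        eval '.spec[0].template.spec.containers[0].image = "XXXXXXXXXXXXXXXXXXXXX:" + strenv(TAG)' \\
        -i Path/to/file/deploy.yaml
"""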

How to pass jenkins credentials into docker build command?

My Jenkins pipeline code successfully checks out my private git repo from bitbucket using
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
In the same software.git I have a Dockerfile that I want to use to build various build targets from software.git on Kubernetes, and I am trying the approach below to pass Jenkins credentials into a docker container that I want to build and run.
So, in the same Jenkins pipeline where I checked out software.git (code above), I try the following to get the docker container built:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But from my Docker container I am still not able to check out my private repo(s). What am I missing? Any comments or suggestions? Thanks.
Please read about Groovy String Interpolation.
In your expression
sh "cd ${WORKSPACE} && docker build -t ${some-name} \
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
you use double quotes, so Groovy interpolates all the variables in the string. This includes $FILE, so Groovy replaces it with the value of a Groovy variable named FILE. You don't have a Groovy variable with that name (only a shell variable, which is a different thing), so it gets replaced with an empty string.
To prevent this, hint Groovy not to interpolate that particular variable by escaping its $ with \:
sh "cd ${WORKSPACE} && docker build -t ${some-name}\
--build-arg USERNAME=cicd-user \
--build-arg PRIV_KEY_FILE=\$FILE --network=host \
-f software/tools/jenkins/${some-name}/Dockerfile ."
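An equivalent sketch that avoids Groovy interpolation of the credential entirely is to use single quotes and let the shell resolve the variables from its environment (here my-image stands in for the redacted ${some-name} placeholder):
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    // Single quotes: Groovy does no interpolation; the shell resolves $WORKSPACE
    // and $FILE from the environment set up by the node context and withCredentials.
    sh 'cd $WORKSPACE && docker build -t my-image --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=$FILE --network=host -f software/tools/jenkins/my-image/Dockerfile .'
}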

Persist shell across multiple sh calls in a Groovy Jenkins pipeline

I have two Groovy functions within a Jenkins pipeline that together log into an ECR repo and build a docker container. They look like this:
def login() {
    sh "aws ecr get-login --registry-ids <id> --region <region> --no-include-email"
    sh "aws ecr get-login --region <region> --no-include-email"
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}
However, when I run this, I get pull access denied, as if I had never logged in. I surmise this is because the aws ecr login commands run in their own shell, and the build command runs in another. Ideally, I'd like to keep this kind of functional decomposition and other Groovy features, but run the shell commands in one process/shell. Is this possible? How can I accomplish this?
The issue here is that aws ecr get-login returns a string containing the command to log in to your registry. You need to execute that returned command: assign the output to a variable and then run it with sh, like so:
def login() {
    loginRegistry = sh(script: "aws ecr get-login --registry-ids <id> --region <region> --no-include-email", returnStdout: true)
    sh loginRegistry
    loginRegion = sh(script: "aws ecr get-login --region <region> --no-include-email", returnStdout: true)
    sh loginRegion
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}
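Alternatively, if you really do want the login and the build to share one shell, a sketch is to combine them into a single sh step and evaluate the get-login output inline (<id> and <region> are the placeholders from the question):
def loginAndBuild(project, tag) {
    // One sh step: the $(...) evaluation and docker build run in the same shell
    // process, so the login is in effect for the build.
    sh """
        \$(aws ecr get-login --registry-ids <id> --region <region> --no-include-email)
        docker build -t ${project}:${tag} .
    """
}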

AWS ECS EC2 continuous deployment with Jenkins

I am using Jenkins for continuous deployment from GitLab into an AWS ECS EC2 container instance, using a Jenkinsfile. To register the task definition on each push, I have placed the task definition JSON file in an aws folder in GitLab. Is it possible to put the task definition JSON file in Jenkins, so that only the Jenkinsfile needs to be kept in GitLab?
There is a workspace folder in Jenkins, /var/lib/jenkins/workspace/jobname, which is created after the first build. Can we place the task definition there?
My Jenkinsfile is pasted below:
stage 'Checkout'
git 'git@gitlab.xxxx.com/repo.git'
stage ("Docker build") {
    sh "docker build --no-cache -t xxxx:${BUILD_NUMBER} ."
}
stage ("Docker push") {
    docker.withRegistry('https://xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com', 'ecr:regopm:ecr-credentials') {
        docker.image("xxxxx:${BUILD_NUMBER}").push(remoteImageTag)
    }
}
stage ("Deploy") {
    sh "sed -e 's;BUILD_TAG;${BUILD_NUMBER};g' aws/task-definition.json > aws/task-definition-${remoteImageTag}.json"
    sh " \
        aws ecs register-task-definition --family ${taskFamily} \
        --cli-input-json ${taskDefile} \
    "
    def taskRevision = sh(
        returnStdout: true,
        script: " aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' '| awk '{print \$2}' "
    ).trim()
    sh " \
        aws ecs update-service --cluster ${clusterName} \
        --service ${serviceName} \
        --task-definition ${taskFamily}:${taskRevision} \
        --desired-count 1 \
    "
}
Following the very same approach, but putting together some reusable logic, we just open-sourced our "gluing" tool, which we are using from Jenkins as well (please see the Extra section for templates for Jenkins pipelines):
https://github.com/GuccioGucci/yoke
Use ecs-cli as an alternative in case you do not want to use a task definition: install ecs-cli on the Jenkins node and run it, but that still needs a docker-compose file in the Git repo.
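If the goal is to keep only the Jenkinsfile in GitLab, one further sketch (not from either answer) is to embed the task definition template in the Jenkinsfile itself and write it to the workspace with writeFile before registering it; the family comes from the question's ${taskFamily} variable, while the container values below are illustrative placeholders:
stage ("Register task definition") {
    // Sketch: the JSON template lives in the Jenkinsfile, so the repo does not
    // need an aws/ folder. Image name and memory are placeholder values.
    def taskDef = """
    {
        "family": "${taskFamily}",
        "containerDefinitions": [
            {
                "name": "app",
                "image": "xxxx:${BUILD_NUMBER}",
                "memory": 512,
                "essential": true
            }
        ]
    }
    """
    writeFile file: "task-definition-${BUILD_NUMBER}.json", text: taskDef
    sh "aws ecs register-task-definition --cli-input-json file://task-definition-${BUILD_NUMBER}.json"
}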
