Persist shell across multiple sh calls in a Groovy Jenkins pipeline - jenkins

I have two Groovy functions within a Jenkins pipeline that together log into an ECR repository and build a Docker image. It looks like this:
def login() {
    sh "aws ecr get-login --registry-ids <id> --region <region> --no-include-email"
    sh "aws ecr get-login --region <region> --no-include-email"
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}
However, when I run this, I get pull access denied, as if I had never logged in. I surmise this is because the aws ecr get-login commands ran in their own shells and the build command ran in another. Ideally, I'd like to keep this kind of functional decomposition and other features of Groovy, but run the shell commands in one process/shell. Is this possible? How can I accomplish this?

The issue here is that aws ecr get-login does not log you in; it prints the docker login command for your registry. You need to capture that output in a variable and then execute it as a second step, like so:
def login() {
    def loginRegistry = sh(script: "aws ecr get-login --registry-ids <id> --region <region> --no-include-email", returnStdout: true)
    sh loginRegistry
    def loginRegion = sh(script: "aws ecr get-login --region <region> --no-include-email", returnStdout: true)
    sh loginRegion
}
def build(project, tag) {
    login()
    sh "docker build -t ${project}:${tag} ."
}

Related

In a Jenkins pipeline, why is this not showing the return string?

I have the following scripted pipeline that adds a tag to an existing ECR image in AWS:
node("linux") {
stage("test") {
docker.withRegistry("https://0123456789.dkr.ecr.us-east-1.amazonaws.com", "myCreds") {
String rc = null
sh """
aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
cat manifest.json
"""
try {
rc = sh(script: """
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json
""",
returnStdout: true).trim()
}
catch(err) {
println "rc=$rc"
}
}
}
}
When I run the pipeline, I get this in the console output.
+ aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://./manifest.json
An error occurred (ImageAlreadyExistsException) when calling the PutImage operation: Image with digest 'sha256:ff44828207c7c7df75a8be2e08057da438b4b7f3063acab34ea7ebbcb7dd50a6' and tag 'qa-1.0.286' already exists in the repository with name 'repo-name' in registry with id '0123456789'
[Pipeline] echo
rc=null
Why is rc=null instead of the An error occurred... string above it in the console output? I've used this approach to capture shell script output before, so why doesn't it work here? What's the proper way to do it in this case?
The problem is that the sh step captures standard output, while the AWS CLI prints the error message to standard error.
You can redirect stderr into stdout with 2>&1 at the end of your command, for example:
aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1
But there is a second problem: when the command inside sh fails, the step throws an exception and never assigns a value to the variable. You therefore need to make sure the command always succeeds, for example by appending || : (execute the null command if the previous command fails).
The downside is that you then have to inspect the output variable to tell whether the command failed.
The snippet could then look like this:
String rc = null
sh """
    aws ecr batch-get-image --repository-name repo-name --image-ids imageTag=1.0.286 --query images[].imageManifest --output text > manifest.json
    cat manifest.json
"""
rc = sh(script: """
    aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json 2>&1 || :
""", returnStdout: true).trim()
if (rc.contains("error occurred")) {
    // the command invocation failed
}
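If you would rather not scan the output for error text, an alternative sketch uses the sh step's returnStatus option together with the readFile step (both standard Pipeline steps), which keeps the exit code and the output separate:
// returnStatus makes sh return the exit code instead of throwing on failure
int status = sh(
    script: "aws ecr put-image --repository-name repo-name --image-tag qa-1.0.286 --image-manifest file://manifest.json > put-image.log 2>&1",
    returnStatus: true
)
String output = readFile('put-image.log').trim()
if (status != 0) {
    echo "put-image failed: ${output}"
}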

Using a docker command as a variable in a Jenkins sh script?

I am working on a CI/CD pipeline where I have DEV, TEST and PROD servers. With a Jenkins pipeline I deploy my newest image onto my DEV server. Now I want to take the image from my DEV server by reading out its sha256 ID and deploy it on my TEST server.
I have a Jenkins Pipeline for that:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [environment variables omitted] --restart always angular:latest'
                }
            }
        }
    }
}
As you can see, I currently use the :latest tag, but I want something like this:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [environment variables omitted] --restart always $imageofDev'
                }
            }
        }
    }
}
$imageofDev = docker inspect mycontainerdev | grep -o 'sha256:[^"]*' // this command works and gives me back the raw sha256 number
So that it uses the actual sha256 number of my dev image.
I don't know how I can define this variable and later use its value in this Jenkins pipeline. How can I do this?
When you build the image, choose an image name and tag that you can remember later. Jenkins provides several environment variables that you can use to construct this, such as BUILD_NUMBER; in a multibranch pipeline you also have access to BRANCH_NAME and CHANGE_ID; or you can directly run git in your pipeline code.
def shortCommitID = sh script: 'git rev-parse --short HEAD', returnStdout: true
def dockerImage = "project:${shortCommitID.trim()}"
def registry = 'registry.example.com'
def fullDockerImage = "${registry}/${dockerImage}"
Now that you know the Docker image name you're going to use everywhere, you can just use it; you never need to go off and look up the image ID. Using the scripted pipeline Docker integration, for example:
docker.withRegistry("https://${registry}") {
    def image = docker.build(dockerImage)
    image.push()
}
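The push method also accepts a tag argument, so inside the same withRegistry block you could publish a second tag as well (a sketch; :latest here is just an example):
image.push()          // push project:<shortCommitID>
image.push('latest')  // push the same image again, tagged :latest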
Since you know the registry/image:tag name, you can just use it in other Jenkins directives too:
def containerName = 'container'
// stop an old container, if any
sh "docker stop ${containerName} || true"
sh "docker rm ${containerName} || true"
// start a new container that outlives this pipeline
sh "docker run -d --name ${containerName} ${fullDockerImage}"

How to pull & run a Docker image on a remote server through a Jenkins pipeline

I have two AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which builds a Docker image, runs a container on 1st-server, and pushes the image to a Docker Hub repository. That's working fine.
I want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs into 1st-server instead, even though the ssh credential ('my-ssh-key') belongs to 2nd-server. I'm confused about how it is logging into 1st-server; I checked with touch commands and the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one "
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log into the 2nd server and pull the Docker image onto it through the Jenkins pipeline script? Help me out with where I'm going wrong.
This is more of an alternative than a solution. You can pass the remote commands as an argument to ssh; ssh will execute the command on the server and then disconnect.
ssh name@ip "ls -la /home/ubuntu/"
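Applied to the pipeline above, a sketch of the sshagent block (keeping the question's my-ssh-key credential and using 2ndserver as a placeholder host) would pass the whole remote sequence as one quoted ssh argument:
sshagent(['my-ssh-key']) {
    // everything inside the double quotes runs on 2ndserver, not on the Jenkins agent
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver "cd /home/ubuntu && docker pull docker-user/docker-repo:flask-app-push-test"'
}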

Variable zero in Groovy script

I'm facing a really weird problem.
// Update the service
stage "Update Service"
def SERVICE_NAME = "currency-converter-search-srv"
def TASK_FAMILY = "currency-converter-search"
def TASK_REVISION = sh "aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search | jq .taskDefinition.revision"
def DESIRED_COUNT = sh "aws --region us-east-1 ecs describe-services --services ${SERVICE_NAME} | jq .services[0].desiredCount"
if (DESIRED_COUNT == 0) {
    DESIRED_COUNT = 1
}
sh "aws --region us-east-1 ecs update-service --cluster default --service ${SERVICE_NAME} --task-definition ${TASK_FAMILY}:${TASK_REVISION} --desired-count ${DESIRED_COUNT}"
This script fails; here is the log:
[Pipeline] stage (Update Service)
Entering stage Update Service
Proceeding
[Pipeline] sh
[workspace] Running shell script
+ jq .taskDefinition.revision
+ aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search
13
[Pipeline] sh
[workspace] Running shell script
+ jq .services[0].desiredCount
+ aws --region us-east-1 ecs describe-services --services currency-converter-search-srv
0
[Pipeline] sh
[workspace] Running shell script
+ aws --region us-east-1 ecs update-service --cluster default --service currency-converter-search-srv --task-definition currency-converter-search:0 --desired-count 1
An error occurred (InvalidParameterException) when calling the UpdateService operation: revision must be between 1 and 2147483647
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
The reason this fails is that the TASK_REVISION variable is 0, even though the shell output above shows the command printed 13. Do you know why this weird behaviour occurs?
You can't assign the result of a bare sh call to a variable; called that way, sh doesn't return the command's output (here the variable ends up as 0, which is what gets interpolated into the update-service call).
The fix is to pass returnStdout: true to the sh step.
The older workaround is to redirect the output to a file and then read that file back.
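A minimal sketch with returnStdout, reusing the SERVICE_NAME variable from the question (note the step returns a String, so convert it before comparing with numbers):
def TASK_REVISION = sh(
    script: "aws --region us-east-1 ecs describe-task-definition --task-definition currency-converter-search | jq .taskDefinition.revision",
    returnStdout: true
).trim()
// returnStdout yields a String, so convert before the numeric comparison
def DESIRED_COUNT = sh(
    script: "aws --region us-east-1 ecs describe-services --services ${SERVICE_NAME} | jq '.services[0].desiredCount'",
    returnStdout: true
).trim().toInteger()
if (DESIRED_COUNT == 0) {
    DESIRED_COUNT = 1
}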

Merge DSL job shells into just one

I have this step
def createJob(def jobName,
              def branchName) {
    job(jobName) {
        steps {
            shell('export AWS_DEFAULT_REGION=eu-west-1')
            shell('$(aws ecr get-login --region eu-west-1)')
            shell('docker build -t builder -f ./images/' + branchName + '/Dockerfile .')
            shell('docker tag -f ' + branchName + ':latest *******.dkr.ecr.eu-west-1.amazonaws.com/' + branchName + ':latest')
            shell('docker push *********.dkr.ecr.eu-west-1.amazonaws.com/' + branchName + ':latest')
        }
    }
}
How can I combine all of those into just one shell step?
I tried this way:
shell('''
    export AWS_DEFAULT_REGION=eu-west-1
    $(aws ecr get-login --region eu-west-1)
    docker build -t builder -f ./images/'+branchName+'/Dockerfile .
''')
But then the variable branchName is treated as part of the string rather than interpolated.
Regards.
Use double quotes instead, which support interpolation (single quotes and triple single quotes do not); then you can use ${} to insert variables into the string. Note that inside a double-quoted Groovy string you must escape the shell's own \$(...) command substitution, otherwise Groovy rejects the bare $( as an illegal interpolation:
shell( """
export AWS_DEFAULT_REGION=eu-west-1
$(aws ecr get-login --region eu-west-1)
docker build -t builder -f ./images/${branchName}/Dockerfile .
""")
For more information, see the Groovy documentation on string interpolation.
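For completeness, a hypothetical invocation of the factory above (the job name and branch are made-up values):
createJob('builder-master', 'master')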
