Jenkins Pipeline docker.withRegistry() push leads to endless loop

I managed to set up Jenkins on Kubernetes and GitBucket on Kubernetes. Now I am trying to create my first Dockerfile and upload the resulting image to Docker Hub. Unfortunately the upload fails: the build succeeds, but I can't manage to push the image to Docker Hub (private repository).
Jenkinsfile
def label = "${BUILD_TAG}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'docker', image: 'docker:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
    node(label) {
        def app
        def myRepo = checkout scm
        def gitCommit = myRepo.GIT_COMMIT
        def gitBranch = myRepo.GIT_BRANCH
        def shortGitCommit = "${gitCommit[0..10]}"
        def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)

        stage('Decommission Infrastructure') {
            container('kubectl') {
                echo "Decommission..."
            }
        }
        stage('Build application') {
            container('docker') {
                app = docker.build("fasautomation/recon", ".")
            }
        }
        stage('Run unit tests') {
            container('docker') {
                app.inside {
                    sh 'echo "Test passed"'
                }
            }
        }
        stage('Docker publish') {
            container('docker') {
                docker.withRegistry('https://registry.hub.docker.com', '<<jenkins store-credentials>>') {
                    echo "Pushing 1..."
                    // Push tagged version
                    app.push("${env.BUILD_NUMBER}")
                    echo "Pushing 2..."
                    // Push latest-tagged version
                    app.push("latest")
                    echo "Pushed!"
                }
            }
        }
        stage('Deployment') {
            container('docker') {
                // Deploy to Kubernetes
                echo 'Deploying'
            }
        }
        stage('Provision Infrastructure') {
            container('kubectl') {
                echo 'Provision...'
            }
        }
    }
}
Jenkins Logs
[...]
[Pipeline] stage
[Pipeline] { (Docker publish)
[Pipeline] container
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Executing sh script inside container docker of pod jenkins-recon-master-116-0ksw8-f7779
Executing command: "docker" "login" "-u" "*****" "-p" ******** "https://index.docker.io/v1/"
exit
<<endless loading symbol>>
Does anyone have a clue how to debug this? The credentials work. I'm not sure why the exit appears in the log without any push output afterwards... :-(
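One way to narrow this down (a sketch, not a verified fix) is to bypass app.push() and run the equivalent docker CLI commands by hand in the same container, so you can see which command actually hangs. The credential ID 'dockerhub-creds' below is a placeholder; the image name matches the pipeline above.

stage('Docker publish (debug)') {
    container('docker') {
        // 'dockerhub-creds' is a hypothetical username/password credential ID
        withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
            sh '''
                # Log in and push manually to see whether the hang comes from
                # the Docker Pipeline plugin or from the daemon/registry itself
                echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
                docker push fasautomation/recon:latest
            '''
        }
    }
}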

Related

When running a simple Jenkins pipeline I am running into a DOCKER-ENDPOINT error

I am creating a simple Jenkins pipeline and am running into the error below:
Started by user XYZ
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/XYZ1
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . ubuntu:latest
unable to resolve docker endpoint: open C:/Program Files/Git/certs/client/ca.pem: no such file or directory
My pipeline is as follows:
pipeline {
    agent {
        docker { image 'ubuntu:latest' }
    }
    environment {
        dockerImage = ''
        registry = 'https://github.com/XYZS/XYZ'
        registryCredential = 'd34d387c-0abe-4e39-9260-588e5ad529aa'
    }
    stages {
        stage("Set Up") {
            steps {
                git branch: 'main', url: 'https://github.com/XYZS/XYZ'
            }
        }
        stage("Build docker image") {
            steps {
                script {
                    dockerImage = docker.build registry
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'git') {
                        app.push("${env.BUILD_NUMBER}")
                        app.push("latest")
                    }
                }
            }
        }
        stage('Push to Docker') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
So far, as advised on other forums, I have deleted and re-installed Docker; however, the certs folder in the Git directory does not actually exist.
Has anyone come across this problem and how did you resolve it?
Many thanks in advance,
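For what it's worth, that kind of path in the error usually comes from the Docker client's environment (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY) rather than from Jenkins itself. A first diagnostic step could be the sketch below, which only assumes a generic agent and prints the Docker-related environment the pipeline actually sees:

pipeline {
    agent any
    stages {
        stage('Inspect Docker client environment') {
            steps {
                // DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY control where
                // the docker CLI looks for its endpoint and TLS certificates
                sh 'env | grep -i docker || true'
                sh 'docker version || true'
            }
        }
    }
}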

Enabling Jenkins to build on a separate Docker in Docker container

I have a Docker-in-Docker setup that builds Docker images and is not on the same node as the Jenkins node. When I try to build using the Jenkins node I receive:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
To fix this, I can build a Docker image using the following within my Jenkinsfile:
stage('Docker Build') {
    agent any
    steps {
        script {
            withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
                withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
                    def image = docker.build("ron/reactive")
                    image.push()
                }
            }
        }
    }
}
This works as expected; I can use the above Jenkins pipeline config to build and push a Docker image.
I'm now attempting to use the Docker server running at tcp://10.44.10.8:2375 to package a Java Maven project in a new container running on that Docker host. I've defined the pipeline build as:
pipeline {
    agent any
    stages {
        stage('Maven package') {
            agent {
                docker {
                    image 'maven:3-alpine'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            stages {
                stage('Build') {
                    steps {
                        sh 'mvn -B -DskipTests clean package'
                    }
                }
            }
        }
    }
}
And I receive this message from Jenkins with no further output:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Maven package)
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘dockerserverlabel’
I've configured the Docker label in Jenkins (screenshot omitted), which matches the 'Docker Build' settings from the Jenkinsfile above.
But it seems I've not included some other config within Jenkins and/or the Jenkinsfile to enable the Docker image to be built on tcp://10.44.10.8:2375?
I'm working through https://www.jenkins.io/doc/tutorials/build-a-java-app-with-maven/ which describes a pipeline for building a maven project on Docker:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
But how to configure the build on a separate Docker container is not described.
Can this Jenkins config:
stage('Docker Build') {
    agent any
    steps {
        script {
            withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
                withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
                    def image = docker.build("ron/reactive")
                    image.push()
                }
            }
        }
    }
}
be used with
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
?
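One direction that might work (a sketch only, assuming the workspace and the /root/.m2 path are reachable from the remote daemon) is to keep the scripted withDockerServer block that already works and run the Maven build inside a container started on that daemon, instead of using the declarative docker agent:

stage('Maven package') {
    agent any
    steps {
        script {
            // Point the Docker Pipeline plugin at the remote daemon, then run the
            // Maven build inside a maven:3-alpine container started on it
            withDockerServer([uri: 'tcp://10.44.10.8:2375']) {
                docker.image('maven:3-alpine').inside('-v /root/.m2:/root/.m2') {
                    sh 'mvn -B -DskipTests clean package'
                }
            }
        }
    }
}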

jenkins passing AmazonWebServicesCredentialsBinding to slave node

I have a Jenkins pipeline which needs to run on a slave node. I currently have issues with passing variables set by the withCredentials plugin: when I try to use them on the slave node they are empty, but they work on the master.
Here is the pipeline snippet.
#!groovy
@Library('sharedPipelineLib@master') _

pipeline {
    agent {
        node { label 'jenkins-slave-docker' }
    }
    options {
        skipDefaultCheckout(true)
    }
    environment {
        sonar = credentials('SONAR')
    }
    stages {
        stage('Checkout') {
            steps {
                cleanWs()
                script {
                    checkout scm
                }
            }
        }
        stage('Deploy backend') {
            steps {
                script {
                    withCredentials([
                        [
                            $class           : 'AmazonWebServicesCredentialsBinding',
                            credentialsId    : 'AWS_ACCOUNT_ID_DEV',
                            accessKeyVariable: 'AWS_ACCESS_KEY_ID_DEV',
                            secretKeyVariable: 'AWS_SECRET_ACCESS_KEY_DEV'
                        ],
                        [
                            $class           : 'AmazonWebServicesCredentialsBinding',
                            credentialsId    : 'AWS_ACCOUNT_ID_DNS',
                            accessKeyVariable: 'AWS_ACCESS_KEY_ID_DNS',
                            secretKeyVariable: 'AWS_SECRET_ACCESS_KEY_DNS'
                        ]
                    ]) {
                        sh '''
                            echo "$AWS_ACCESS_KEY_ID_DEV\\n$AWS_SECRET_ACCESS_KEY_DEV\\n\\n" | aws configure --profile profile_705229686812
                            echo "$AWS_ACCESS_KEY_ID_DNS\\n$AWS_SECRET_ACCESS_KEY_DNS\\n\\n" | aws configure --profile profile_417752960097
                        '''
                    }
                }
            }
        }
    }
}
And the log
[Pipeline] withCredentials
Masking supported pattern matches of $AWS_ACCESS_KEY_ID_DEV or $AWS_SECRET_ACCESS_KEY_DEV or $AWS_SECRET_ACCESS_KEY_DNS or $AWS_ACCESS_KEY_ID_DNS
[Pipeline] {
[Pipeline] sh
echo '\n\n\n'
aws configure --profile profile_705229686812
AWS Access Key ID [None]: AWS Secret Access Key [None]:
EOF when reading a line
The issue was again the echo command. I had to use printf instead, because echo adds a newline, which causes the configure prompts to fail.
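For reference, a minimal sketch of that printf-based variant, assuming the same profiles and variable names as above (printf interprets the \n escapes itself and does not append a trailing newline):

sh '''
    # Feed access key, secret key, and empty region/output answers to `aws configure`
    printf '%s\\n%s\\n\\n\\n' "$AWS_ACCESS_KEY_ID_DEV" "$AWS_SECRET_ACCESS_KEY_DEV" | aws configure --profile profile_705229686812
    printf '%s\\n%s\\n\\n\\n' "$AWS_ACCESS_KEY_ID_DNS" "$AWS_SECRET_ACCESS_KEY_DNS" | aws configure --profile profile_417752960097
'''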

How to use Terraform Plan and Apply in different Jenkins pipeline stages

I am working on a declarative Jenkins pipeline for Terraform deployments. I want to have the terraform init / select workspace / plan in one stage, ask for approval in another stage, and then do the apply in a third stage. I have the agent at the top set to none and then use a Kubernetes agent for a Docker image we created that has the packages we need for the stages; I declare that image in each stage. When I execute the pipeline, I get an error that I need to reinitialize Terraform in the apply stage, even though I initialized in the init/plan stage. I figure this is due to the stages running on different nodes.
I have it working by doing init / plan and stashing the plan. In the apply stage, it unstashes the plan, calls init / select workspace again, and then finally applies the unstashed plan.
I realize I could set the agent at the top, but according to Jenkins documentation, that is bad practice, as waiting for user input will block the execution.
I feel like there has to be a way to do this more elegantly. Any suggestions?
Here's my code:
def repositoryURL = env.gitlabSourceRepoHttpUrl != null && env.gitlabSourceRepoHttpUrl != "" ? env.gitlabSourceRepoHttpUrl : env.RepoURL
def repositoryBranch = env.gitlabTargetBranch != null && env.gitlabTargetBranch != "" ? env.gitlabTargetBranch : env.RepoBranch
def notificationEmail = env.gitlabUserEmail != null && env.gitlabUserEmail != "" ? env.gitlabUserEmail : env.Email
def projectName = env.ProjectName
def deployAccountId = env.AccountId

pipeline {
    agent none
    stages {
        stage("Checkout") {
            agent any
            steps {
                git branch: "${repositoryBranch}", credentialsId: '...', url: "${repositoryURL}"
                stash name: 'tf', useDefaultExcludes: false
            }
        }
        stage("Terraform Plan") {
            agent {
                kubernetes {
                    label 'myagent'
                    containerTemplate {
                        name 'cis'
                        image 'docker-local.myrepo.com/my-image:v2'
                        ttyEnabled true
                        command 'cat'
                    }
                }
            }
            steps {
                container('cis') {
                    unstash 'tf'
                    script {
                        sh "terraform init"
                        try {
                            sh "terraform workspace select ${deployAccountId}_${projectName}_${repositoryBranch}"
                        } catch (Exception e) {
                            sh "terraform workspace new ${deployAccountId}_${projectName}_${repositoryBranch}"
                        }
                        sh "terraform plan -out=${deployAccountId}_${projectName}_${repositoryBranch}_plan.tfplan -input=false"
                        stash includes: "*.tfplan", name: "tf-plan", useDefaultExcludes: false
                    }
                }
            }
            post {
                success {
                    echo "Terraform init complete"
                }
                failure {
                    echo "Terraform init failed"
                }
            }
        }
        stage("Terraform Plan Approval") {
            agent none
            steps {
                script {
                    def userInput = input(id: 'confirm', message: 'Apply Terraform?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm']])
                }
            }
        }
        stage("Terraform Apply") {
            agent {
                kubernetes {
                    label 'myagent'
                    containerTemplate {
                        name 'cis'
                        image 'docker-local.myrepo.com/my-image:v2'
                        ttyEnabled true
                        command 'cat'
                    }
                }
            }
            steps {
                container("cis") {
                    withCredentials([[
                        $class: 'AmazonWebServicesCredentialsBinding',
                        credentialsId: 'my-creds',
                        accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                        secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
                    ]]) {
                        script {
                            unstash "tf"
                            unstash "tf-plan"
                            sh "terraform init"
                            try {
                                sh "terraform workspace select ${deployAccountId}_${projectName}_${repositoryBranch}"
                            } catch (Exception e) {
                                sh "terraform workspace new ${deployAccountId}_${projectName}_${repositoryBranch}"
                            }
                            sh """
                                set +x
                                temp_role="\$(aws sts assume-role --role-arn arn:aws:iam::000000000000:role/myrole --role-session-name jenkinzassume)" > /dev/null 2>&1
                                export AWS_ACCESS_KEY_ID=\$(echo \$temp_role | jq .Credentials.AccessKeyId | xargs) > /dev/null 2>&1
                                export AWS_SECRET_ACCESS_KEY=\$(echo \$temp_role | jq .Credentials.SecretAccessKey | xargs) > /dev/null 2>&1
                                export AWS_SESSION_TOKEN=\$(echo \$temp_role | jq .Credentials.SessionToken | xargs) > /dev/null 2>&1
                                set -x
                                terraform apply ${deployAccountId}_${projectName}_${repositoryBranch}_plan.tfplan
                            """
                        }
                    }
                }
            }
        }
    }
}
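On the "more elegant" question: one small refinement (a sketch only, not specific to this repo) is to put a stage-level timeout on the approval stage, so the paused input cannot hold the build open indefinitely while agent none still keeps any executor free:

stage("Terraform Plan Approval") {
    agent none
    options {
        // Abort the build if nobody answers within an hour
        timeout(time: 1, unit: 'HOURS')
    }
    steps {
        input id: 'confirm', message: 'Apply Terraform?'
    }
}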

jenkins pipeline passing variables

I have a pipeline in which I'm building my image through a Docker container, and it outputs the image tag. I want to pass that image tag to the next stage. When I echo it in the next stage it prints out, but when I use it in a shell it comes up empty. Here is my pipeline:
pipeline {
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxx#bitbucket.org/xxx/xxx.git',
                     credentialsId: 'xxx',
                     branch: 'master')
            }
        }
        stage('Building Image') {
            steps {
                script {
                    env.IMAGE_TAG = sh script: "docker run -e REPO_APP_BRANCH=master -e REPO_APP_NAME=exampleservice -e DOCKER_HUB_REPO_NAME=exampleservice --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/build", returnStdout: true
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${env.IMAGE_TAG}"
                    sh """
                        helm upgrade exampleservice charts/exampleservice --set image.tag=${env.IMAGE_TAG}
                    """
                    sh "sleep 5"
                }
            }
        }
    }
}
pipeline output
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Integration)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
passed:
b79c3bf-b6eec4f
[Pipeline] sh
[test101] Running shell script
+ helm upgrade exampleservice charts/exampleservice --set image.tag=
I'm getting an empty image tag.
You should pass this through env and quote it when you use it in the shell step.
Replace your code with this one:
pipeline {
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxx#bitbucket.org/xxx/xxx.git',
                     credentialsId: 'xxx',
                     branch: 'master')
            }
        }
        stage('Building Image') {
            steps {
                script {
                    env.IMAGE_TAG = sh script: "docker run -e REPO_APP_BRANCH=master -e REPO_APP_NAME=exampleservice -e DOCKER_HUB_REPO_NAME=exampleservice --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/build", returnStdout: true
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${env.IMAGE_TAG}"
                    sh """
                        helm upgrade exampleservice charts/exampleservice \
                            --set image.tag="${env.IMAGE_TAG}"
                    """
                    sh "sleep 5"
                }
            }
        }
    }
}
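Note that sh(returnStdout: true) returns the command output including its trailing newline, which is visible in the echo above where the tag prints on its own line after "passed:". So trimming the value when capturing it is likely also needed; a minimal sketch, reusing the same build command:

stage('Building Image') {
    steps {
        script {
            // .trim() strips the trailing newline so image.tag receives a clean value
            env.IMAGE_TAG = sh(
                script: "docker run -e REPO_APP_BRANCH=master -e REPO_APP_NAME=exampleservice -e DOCKER_HUB_REPO_NAME=exampleservice --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/build",
                returnStdout: true
            ).trim()
        }
    }
}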
