Unable to push docker image to registry using Jenkinsfile - docker

I am trying to push my image to a repository through a Jenkinsfile, but when I do that I am facing the error below.
Error response from daemon: Get https://mydockerregistryurl/v1/users/: x509: certificate signed by unknown authority
I have found many articles about this but did not understand any of them.
Can anyone help me?
Below is my Jenkinsfile.
#!groovy
pipeline {
    agent {
        node {
            label 'otd-agent'
        }
    }
    stages {
        stage('Test Stage') {
            steps {
                sh 'mvn clean test'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('otd-sonar') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Package Stage') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Building Docker image') {
            steps {
                script {
                    sh 'docker build . -t jagathe-spike'
                }
            }
        }
        stage('Deploy Docker Image') {
            steps {
                script {
                    sh 'docker login -u username -p password docker-registry-default'
                    sh 'docker push docker-registry-default/otd-agathe'
                }
            }
        }
    }
}

If the target registry docker-registry-default is running on OpenShift, you should deploy the OCP CA certificate, which you can download from OCP, on your Jenkins host.
Refer to Installing a certificate authority certificate for external registries for more details.
For instance:
Download the CA certificate from your OCP master.
jenkins ~# scp root@master1.ocp.example.com:/etc/origin/master/ca.crt \
/etc/pki/ca-trust/source/anchors/ocp-ca.crt
Execute update-ca-trust to register the CA.
jenkins ~# update-ca-trust extract
Copy the CA into a registry-specific directory under /etc/docker/certs.d as follows. (${} is a placeholder; replace it with your own information. Docker expects the file inside that directory to be named ca.crt.)
jenkins ~# cp /etc/pki/ca-trust/source/anchors/ocp-ca.crt \
/etc/docker/certs.d/${docker-registry-default}:${PORT}/ca.crt
Restart the docker service to reload the certificates.
jenkins ~# systemctl restart docker.service
I hope it helps you.
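To check that the registry endpoint is now trusted, a quick test (reusing the ${} placeholders above, and assuming openssl is available on the Jenkins host) could be:
jenkins ~# openssl s_client -connect ${docker-registry-default}:${PORT} \
-CAfile /etc/pki/ca-trust/source/anchors/ocp-ca.crt < /dev/null
Look for "Verify return code: 0 (ok)" in the output before retrying docker login.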

Related

Proper way to run docker container using Jenkinsfile

In my Jenkinsfile, I have a step that runs a Docker image pulled from my Docker Hub.
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker run -d -p 9090:3000 <tag>
        '''
    }
}
This step works fine the first time I run the script. However, if I run it a second time, I get this error:
Login Succeeded
+ docker run -d -p 9090:3000 <tag>
669955464d74f9b5186b437b7127ca0a24f6ea366f3a903c673489bec741cf78
docker: Error response from daemon: driver failed programming external connectivity on endpoint distracted_driscoll (db16abd899cf0cbd4f26cf712b1eee4ace5b491e061e2e31795c2669296068eb): Bind for 0.0.0.0:9090 failed: port is already allocated.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 125
Finished: FAILURE
Obviously, port 9090 is already allocated, so the execution failed.
Question:
What is the correct way to upgrade an app inside a Docker container?
I can stop the container before running docker run, but I can't find a proper way to do that in the Jenkinsfile steps.
Any suggestion?
Thanks
Jenkins has really good Docker support for running your build inside a Docker container; a good example can be found here.
One declarative example for a Maven build:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp:/tmp'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage("01") {
            steps {
                sh "mvn -v"
            }
        }
        stage("02") {
            steps {
                sh "mvn --help"
            }
        }
    }
}
In a scripted pipeline, it would be
node {
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        docker.image('node:14-alpine').inside("-v /tmp:/tmp") {
            stage('Test') {
                sh 'node --version'
            }
        }
    }
}
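Coming back to the original port conflict: if you want to keep the plain docker run approach, a common pattern is to give the container a fixed name and remove any previous instance before starting the new one. A minimal sketch, using the hypothetical container name flask-app:
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            # remove the previous container, if any, so port 9090 is free again
            # (|| true keeps the step from failing when no such container exists)
            docker rm -f flask-app || true
            docker run -d --name flask-app -p 9090:3000 <tag>
        '''
    }
}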

How to pull & run docker image on remote server through jenkins pipeline

I have two AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which builds a Docker image, runs a container on 1st-server, and pushes the image to my Docker Hub repo. That's working fine.
I want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs in to 1st-server instead, even though the ssh credential ('my-ssh-key') belongs to 2nd-server. I'm confused about how it is logging in to 1st-server; I checked with touch commands and the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one"
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the Docker image there through the Jenkins pipeline script? Help me out, where am I going wrong?
This is more of an alternative than a solution. You can execute the remote commands as part of ssh. This will execute the command on the server and disconnect.
ssh name@ip "ls -la /home/ubuntu/"
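Applied to the pipeline above, the reason everything happened on 1st-server is that ssh ubuntu@2ndserver && cd ... chains cd and the following commands on the local machine after the ssh session exits; quoting the remote command keeps it on 2nd-server. A sketch of the corrected step (the pull command is taken from the question; what you run after it is up to you):
sshagent(['my-ssh-key']) {
    // everything inside the quotes runs on 2nd-server;
    // without the quotes, only "ssh ubuntu@2ndserver" ran remotely
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver "cd /home/ubuntu && sudo docker pull docker-user/docker-repo:flask-app-push-test"'
}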

How to use helm commands in jenkins pipeline script

I have been trying to deploy the image built by Docker on Jenkins to Helm charts. I have referred to a couple of documents, https://dev.to/sword-health/seamless-ci-cd-with-jenkins-helm-and-kubernetes-5e00
and https://cloudcompilerr.wordpress.com/2018/06/03/docker-jenkins-kubernetes-run-jenkins-on-kubernetes-cluster/, and managed to get to the point where the Docker image gets pushed to Docker Hub, but I get stuck at Helm.
I can't figure out what exactly the error is.
JENKINS ERROR
+ helm list
/var/lib/jenkins/workspace/01@tmp/durable-68e91f76/script.sh: 1: /var/lib/jenkins/workspace/01@tmp/durable-68e91f76/script.sh: helm: not found
PIPELINESCRIPT
pipeline {
    environment {
        registry = "hemanthpeddi/springboot"
        registryCredential = 'dockerhub'
    }
    agent any
    tools { maven "maven" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/hrmanth/game-of-life.git'
            }
        }
        stage('Build') {
            steps {
                sh script: 'mvn clean package'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
        stage('Run Helm') {
            steps {
                script {
                    container('helm') {
                        sh "helm ls"
                    }
                }
            }
        }
    }
}
Is there any specific configuration I'm missing before I can use helm in Jenkins? I have configured my Kubernetes IP in the cloud configuration in Jenkins. Please help.
Plugins Installed
Kubernetes Plugin
Docker Plugin
You need Helm; it is not available by default. You could add Helm as a tool in Jenkins and use it.
https://www.jenkins.io/doc/book/pipeline/syntax/#tools
You can install helm in the container itself by adding an extra stage:
stage("install helm"){
steps{
sh 'wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz'
sh 'ls -a'
sh 'tar -xvzf helm-v3.6.1-linux-amd64.tar.gz'
sh 'sudo cp linux-amd64/helm /usr/bin'
sh 'helm version'
}
}
I am not so familiar with that, but when you use the container('helm') step, I think it refers to the Kubernetes Plugin.
So, reading its docs, I think the podTemplate is missing from your configuration.
Thus, what you need to do is configure a Helm container in the podTemplate and give it the name "helm". You can try, for example, the "alpine/helm" image.
See you later.
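For reference, a minimal scripted-pipeline sketch of such a podTemplate, assuming the Kubernetes plugin is set up against your cluster (the image tag and the helm command are illustrative):
podTemplate(containers: [
    // keep the container alive with "cat" so pipeline steps can exec into it
    containerTemplate(name: 'helm', image: 'alpine/helm:3.6.1', command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        stage('Run Helm') {
            container('helm') {
                sh 'helm ls'
            }
        }
    }
}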

Getting error Jenkins pipeline docker: command not found

Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn clean compile'
                }
            }
        }
        stage('unit test and Package') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker build') {
            steps {
                sh 'docker build -t dockerId/cakemanager .'
            }
        }
    }
}
docker build -t dockerId/cakemanager .
/Users/Shared/Jenkins/Home/workspace/CDCI-Cake-Manager_master@tmp/durable-e630df16/script.sh:
line 1: docker: command not found
First install the Docker plugin from Manage Jenkins >> Manage Plugins >> click on Available, search for Docker, and install it.
Then configure it under Manage Jenkins >> Global Tool Configuration.
You need to manually install Docker on your Jenkins master, or on the agents if you're running builds on them.
Here's the doc for installing Docker on OS X: https://docs.docker.com/docker-for-mac/install/
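If Docker is installed but the Jenkins service still cannot find it (on macOS, Jenkins is often launched with a minimal PATH that misses /usr/local/bin), one workaround is to prepend the Docker CLI location to PATH in the pipeline. A sketch, assuming Docker Desktop placed the binary in /usr/local/bin:
pipeline {
    agent any
    environment {
        // assumption: the docker CLI lives in /usr/local/bin on this machine
        PATH = "/usr/local/bin:${env.PATH}"
    }
    stages {
        stage('Check docker') {
            steps {
                sh 'docker version'
            }
        }
    }
}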

How to change the Agent label in Jenkins depending on Branch Name

I am creating a Jenkins pipeline for the tasks below.
Pull the latest code from VSTS
Build the code and create a .jar file out of it
Create a Docker image on the basis of the jar
Tag the image
Push the image to the Docker registry
For this, I have written the Jenkinsfile below.
pipeline {
    agent {
        label "master"
    }
    stages {
        stage('Build') {
            steps {
                echo '..........................Building Jar..........................'
                sh 'npm install'
            }
        }
        stage('Build-Image') {
            steps {
                echo '..........................Building Image..........................'
                sh 'sudo docker build -t some-org/admin-portal:v0.1 --build-arg PORT=9007 --build-arg ENVIRONMENT=develop .'
            }
        }
        stage('Tag-Image') {
            steps {
                echo '..........................Taging Image..........................'
                sh 'sudo docker login some-repo -u username001 -p password'
                sh 'sudo docker tag some-org/admin-portal:v0.1 some.dtr.io/some-org/admin-portal:v0.1'
            }
        }
        stage('Push-Image') {
            steps {
                echo '..........................Pushing Image..........................'
                sh 'sudo docker push some.dtr.io/some-org/admin-portal:v0.1'
            }
        }
    }
}
Below is a snapshot of the Jenkins job configuration for the pipeline.
My question is: how can I change the agent label depending on the branch name or some other condition?
e.g. if the branch is develop I want to use the slave1 node, and if the branch is production I want to use master.
Any help will be appreciated.
Thanks in advance.
You can assign agent labels inside each stage, so that the stages execute on the required agents.
e.g.:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                label "master"
            }
            steps {
                echo '..........................Building Jar..........................'
                sh 'npm install'
            }
        }
        stage('Build-Image') {
            agent {
                label "master"
            }
            steps {
                echo '..........................Building Image..........................'
                sh 'sudo docker build -t some-org/admin-portal:v0.1 --build-arg PORT=9007 --build-arg ENVIRONMENT=develop .'
            }
        }
        stage('Tag-Image') {
            agent {
                label "slave1"
            }
            steps {
                echo '..........................Taging Image..........................'
                sh 'sudo docker login some-repo -u username001 -p password'
                sh 'sudo docker tag some-org/admin-portal:v0.1 some.dtr.io/some-org/admin-portal:v0.1'
            }
        }
        stage('Push-Image') {
            agent {
                label "slave1"
            }
            steps {
                echo '..........................Pushing Image..........................'
                sh 'sudo docker push some.dtr.io/some-org/admin-portal:v0.1'
            }
        }
    }
}
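To answer the branch-dependent part directly: in a multibranch pipeline you can compute the label from env.BRANCH_NAME with a Groovy expression. A sketch using the develop/production mapping from the question (BRANCH_NAME is only set automatically for multibranch jobs):
pipeline {
    agent {
        // production builds go to master, everything else to slave1
        label "${env.BRANCH_NAME == 'production' ? 'master' : 'slave1'}"
    }
    stages {
        stage('Build') {
            steps {
                echo "Running on ${env.NODE_NAME} for branch ${env.BRANCH_NAME}"
            }
        }
    }
}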
