Error when transferring Docker image to Docker Hub using Jenkins Pipeline

Below is a portion of a Jenkins Pipeline script that I am using to build a Docker image and deploy it to Docker Hub. The problem I am having is that after executing the Pipeline, the Docker image is not transferred to Docker Hub, and the local Docker image (created during the process) is not erased.
pipeline {
    environment {
        registry = "<my_docker_hub userid>/object"
        registryCredential = 'dockerhub'
    }
    agent { label 'ubuntu16.04-slave-two' }
    stages {
        stage('Cloning Git') {
            steps {
                ...
            }
        }
        stage('Building image') {
            steps {
                sh "/usr/local/bin/docker-compose -p $registry:$BUILD_NUMBER build "
            }
        }
        stage('Deploy Image') {
            steps {
                sh "docker push $registry:$BUILD_NUMBER"
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
Even though I get a SUCCESS message when building the image:
Successfully built fe86784636c2
Successfully tagged <docker_hub_id>object44_website:latest
The image is not transferred to Docker Hub.
Below is the log I got when running the Pipeline code:
Started by user Jenkins Admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on ubuntu16.04-slave-two in /var/jenkins/workspace/oracle-client-with-flask
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Cloning Git)
[... snip ...]
Building authorize-service
Step 1/11 : FROM oraclelinux:7
---> a803b2474b20
[... snip ...]
Step 3/4 : COPY . /var/www/html/
---> Using cache
---> e0b4cd5713c0
Step 4/4 : EXPOSE 80
---> Using cache
---> fe86784636c2
Successfully built fe86784636c2
Successfully tagged <docker_hub_id>object44_website:latest
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy Image)
[Pipeline] sh
+ docker push <docker_hub_id>/object:44
The push refers to a repository [docker.io/<docker_hub_id>/object]
An image does not exist locally with the tag: <docker_hub_id>/object
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Remove Unused docker image)
Stage "Remove Unused docker image" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
How can I get the image pushed successfully to Docker Hub?
TIA
EDIT: Here is the docker-compose.yml file
version: '3'
services:
  authorize-service:
    build: ./authorizations
    container_name: authorize-service
    environment:
      - DB_IP=XXX.XX.XX.XX
      - DB_PORT=1521
      - DB_SID=TEST
      - DB_USER=xxxxxxx
      - DB_PASS=xxxxxxx
    ports:
      - 2700:5000
    networks:
      - kdcnetwork
  testtab-service:
    build: ./testtab
    container_name: testtab-service
    environment:
      - DB_IP=XXX.XX.XX.XX
      - DB_PORT=1521
      - DB_SID=TEST
      - DB_USER=xxxxx
      - DB_PASS=xxxxx
    ports:
      - 2800:5000
    networks:
      - kdcnetwork
  website:
    build: ./website
    container_name: testtab-website
    links:
      - testtab-service
    volumes:
      - ./website:/var/www/html
    ports:
      - 5000:80
    networks:
      - kdcnetwork
networks:
  kdcnetwork:
    driver: bridge

You didn't provide the docker-compose file at first, so I can't give a very precise answer, but I can see that what got tagged:
Successfully tagged <docker_hub_id>object44_website:latest
differs from what you are trying to push:
docker push <docker_hub_id>/object:44
Those two names must be the same.
Edit:
Now that the compose file is posted: you need to change the website section of your docker-compose.yml to the following
website:
  build: ./website
  image: <docker_hub_id>/object:44
so docker-compose will build the image as <docker_hub_id>/object:44, and your docker push command will be able to push it.
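Worth noting: the -p flag in your build step sets the Compose project name, not an image tag, which is likely why the built image came out as <docker_hub_id>object44_website (project name plus service name). As a variation on the fix above, here is a rough, untested sketch, assuming your Compose file format supports environment-variable substitution, that keeps the build number out of the compose file and lets Jenkins supply the tag:
// in docker-compose.yml, the website service would carry:
//   image: "${IMAGE_NAME}:${IMAGE_TAG}"
stage('Building image') {
    steps {
        // Compose substitutes IMAGE_NAME/IMAGE_TAG from the shell environment
        sh "IMAGE_NAME=$registry IMAGE_TAG=$BUILD_NUMBER /usr/local/bin/docker-compose build website"
    }
}
stage('Deploy Image') {
    steps {
        // the tag Compose just built now matches the tag being pushed
        sh "docker push $registry:$BUILD_NUMBER"
    }
}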

Related

terraform plan fails when run from Jenkins docker container

I don't know what possessed me to do this. I am just a tinkerer, so I started following this tutorial: https://www.jenkins.io/doc/book/pipeline/docker/ and got it to work. Then I wanted to create a pipeline that would download a Terraform docker container and use that to create a docker container. Here's my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World from Github.'
            }
        }
        stage('Test') {
            steps {
                sh 'ls -al'
                sh 'terraform --version'
                sh 'terraform init'
                sh 'terraform plan'
                sh 'ls -al'
            }
        }
    }
}
Here is my main.tf:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.13.0"
    }
  }
}

// This didn't work either:
// provider "docker" {}
provider "docker" {
  host = "tcp://127.0.0.1:2375/"
}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "tutorial"
  ports {
    internal = 80
    external = 8000
  }
}
Here is the error message from my Jenkins console output:
[Pipeline] sh
+ terraform plan
╷
│ Error: Error pinging Docker server: Cannot connect to the Docker daemon
│ at tcp://127.0.0.1:2375/. Is the docker daemon running?
│
│   with provider["registry.terraform.io/kreuzwerker/docker"],
│   on main.tf line 10, in provider "docker":
│   10: provider "docker" {
│
╵
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Jenkins is running in a docker container on my Mac laptop:
red@Reds-MacBook-Pro ~ % docker ps | grep jenk
8401359dae3e myjenkins-blueocean:2.346.2-1 "/usr/bin/tini -- /u…" 19 hours ago Up 19 hours 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp jenkins-blueocean
507eb2212eb6 docker:dind "dockerd-entrypoint.…" 19 hours ago Up 19 hours 2375/tcp, 0.0.0.0:2376->2376/tcp jenkins-docker
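One detail stands out from that docker ps output: the Docker daemon runs in the separate jenkins-docker (docker:dind) container, so 127.0.0.1 inside the Terraform container is not where the daemon lives. A speculative sketch, assuming the network and TLS names from the jenkins.io tutorial being followed (network jenkins, daemon reachable as docker:2376, client certs in the jenkins-docker-certs volume):
agent {
    docker {
        image 'hashicorp/terraform:light'
        // every name below is an assumption carried over from the jenkins.io
        // Docker tutorial; adjust to your actual network/volume names
        args '-i --entrypoint= --network jenkins ' +
             '-e DOCKER_HOST=tcp://docker:2376 ' +
             '-e DOCKER_TLS_VERIFY=1 -e DOCKER_CERT_PATH=/certs/client ' +
             '-v jenkins-docker-certs:/certs/client:ro'
    }
}
With DOCKER_HOST and DOCKER_CERT_PATH exported, the kreuzwerker provider block can be left empty (provider "docker" {}), since as far as I know it falls back to those environment variables.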

Unable to login to Docker from Jenkins environment

I have the script below. With legacy Jenkins and just the Docker plugin installed, I was able to fetch the node image without credentials.
pipeline {
    agent {
        docker { image 'node:12.22.1' }
    }
}
Now it fails miserably with the error below:
$ docker login -u user@gmail.com -p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: docker login failed
Finished: FAILURE
I tried setting up the environment with Docker credentials like below:
pipeline {
    agent {
        docker { image 'node:12.22.1' }
    }
    environment {
        registryCredential: 'dockerhub_id' // created in global credentials
    }
}
Still no luck. A good example of how to get this working so I can dockerize my login app would help unblock me.
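For what it's worth: registryCredential: doesn't look like valid environment syntax (declarative expects =, as far as I can tell), and the docker agent doesn't read that variable anyway. The declarative docker agent does accept registry options directly, so the plugin performs the docker login with a stored credential before pulling. A minimal sketch, assuming a valid Docker Hub credential with ID dockerhub_id exists in Jenkins; note the log above shows the login itself was rejected, so the stored username/password are worth double-checking first:
pipeline {
    agent {
        docker {
            image 'node:12.22.1'
            // the plugin runs `docker login` against this registry using
            // the Jenkins credential before pulling the image
            registryUrl 'https://index.docker.io/v1/'
            registryCredentialsId 'dockerhub_id'
        }
    }
    stages {
        stage('Check') {
            steps {
                sh 'node --version'
            }
        }
    }
}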

Trouble mounting volume in docker within Jenkins pipeline

I'm running flyway within my Jenkins pipeline. The docker image works and flyway runs fine. I can call flyway baseline to initialize the schema and that's about as far as I can get.
I'm attempting to mount the directory "Database/migrations" into the Docker container using image.withRun('-v /Database/migrations:/migrations'..., as shown in the segment below, but I'm not having any luck.
// git clone
stage("Checkout") {
    checkout scm
}
// db migration
stage('Apply DB changes') {
    sh "ls Database/migrations"
    def flyway = docker.image('flyway/flyway')
    flyway.withRun('-v /Database/migrations:/migrations',
        '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -X -locations="filesystem:/migrations" migrate') { c ->
        sh "docker exec ${c.id} ls flyway"
        sh "docker logs --follow ${c.id}"
    }
}
Below is the debug output from Jenkins for that stage (cleaned up for simplicity); notice there is nothing under /migrations.
[Pipeline] { (Apply DB changes)
[Pipeline] sh
+ ls Database/migrations
V2__create_temp_table.sql
[Pipeline] isUnix
[Pipeline] sh
+ docker run -d -v /Database/migrations:/migrations flyway/flyway -url=jdbc:mysql://****:3306/**** -user=**** '-password=****' -X -locations=filesystem:/migrations migrate
[Pipeline] sh
+ docker exec 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675 ls /migrations
[Pipeline] sh
+ docker logs --follow 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675
DEBUG: Loading config file: /flyway/conf/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Using configuration:
DEBUG: flyway.locations -> filesystem:/migrations
Flyway Community Edition 7.5.3 by Redgate
DEBUG: Scanning for filesystem resources at '/migrations'
DEBUG: Scanning for resources in path: /migrations (/migrations)
DEBUG: Driver : MySQL Connector/J mysql-connector-java-8.0.20 (Revision: afc0a13cd3c5a0bf57eaa809ee0ee6df1fd5ac9b)
DEBUG: Validating migrations ...
Successfully validated 1 migration (execution time 00:00.033s)
Current version of schema `****`: 1
Schema `****` is up to date. No migration necessary.
Any and all advice is greatly appreciated! Thanks in advance!
Database/migrations (relative, inside the workspace) is different from /Database/migrations (absolute, at the root of the filesystem).
My $WORKSPACE var actually points to /var/lib/jenkins/workspace/..., so I needed to update the mount path to $WORKSPACE/Database/migrations:/migrations 🤦🏻‍♂️
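Put together, the corrected stage would look something like this (an untested sketch of the fix described above):
stage('Apply DB changes') {
    def flyway = docker.image('flyway/flyway')
    // mount the workspace copy of the migrations, not a root-level /Database path
    flyway.withRun("-v ${env.WORKSPACE}/Database/migrations:/migrations",
        '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -locations="filesystem:/migrations" migrate') { c ->
        sh "docker logs --follow ${c.id}"
    }
}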

How to Access the Application after the kubernetes deployment

I'm new to Kubernetes. I'm trying to deploy an Angular application using Docker + Kubernetes; here is the Jenkins script below.
stage('Deploy') {
    container('kubectl') {
        withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
            def kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo"
            echo 'deployment to PRERELEASE!'
            sh "kubectl config get-contexts"
            sh "kubectl -n demo get pods"
            sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
        }
    }
}
Please find the Jenkins output below:
/home/jenkins/agent/workspace/DevOps-CI_future-master-fix
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] echo
deploy to deployment!!
[Pipeline] echo
deploy to PRERELEASE!
[Pipeline] sh
+ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* demo kubernetes kubernetes-admin demo
kubernetes-admin@kubernetes kubernetes kubernetes-admin
[Pipeline] sh
+ kubectl -n demo get pods
NAME READY STATUS RESTARTS AGE
worker-f99adee3-dedd-46ca-bc0d-6b24391e5865-qkd47-mwl3v 5/5 Running 0 26s
[Pipeline] sh
+ kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
service/frontend unchanged
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Now the question: after the deployment, I am not able to see the pods and deployments on the master machine using the commands below. Can someone please help me figure out how to access the application after the successful deployment?
kubectl get pods
kubectl get services
kubectl get deployments
You're setting the namespace to pre-release when running "${kubectl} apply -f ./environment/pre-release -n=pre-release".
To get pods in this namespace, use: kubectl get pods -n pre-release.
Namespaces are a way to separate different virtual clusters inside your single physical Kubernetes cluster. See https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more detail.
You are creating the resources in a namespace called pre-release, via the -n option, when you run the following command:
kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
You need to list the resources in that same namespace:
kubectl get pods -n pre-release
kubectl get services -n pre-release
kubectl get deployments -n pre-release
By default, kubectl performs the requested operation in the default namespace. If you want to set your current namespace to pre-release, so that you need not append -n pre-release to every kubectl command, run the following:
kubectl config set-context --current --namespace=pre-release
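To verify this from inside the pipeline itself, the same namespace-aware checks can follow the apply step; a small sketch reusing the kubectl variable from the question (frontend is the service name visible in the apply output):
sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
// inspect the namespace the resources were actually created in
sh "${kubectl} get pods -n pre-release"
sh "${kubectl} get deployments -n pre-release"
sh "${kubectl} get service frontend -n pre-release"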

Not able to push docker image to artifactory registry

I am not able to push a Docker image to the Artifactory registry; I get the error below.
Login and pulling work fine.
92bd1433d7c5: Layer already exists
b31411566900: Layer already exists
f0ed7f14cbd1: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
Jenkinsfile:
node('lnp6xxxxxxb003') {
    def app
    def server = Artifactory.server 'maven-qa'
    server.bypassProxy = true

    stage('Clone repository') {
        /* Let's make sure we have the repository cloned to our workspace */
        checkout scm
    }
    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */
        app = docker.build("devteam/maven")
    }
    stage('Test image') {
        /* Ideally, we would run a test framework against our image. */
        app.inside {
            sh 'mvn --version'
            sh 'echo "Tests passed"'
        }
    }
    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, the incremental build number from Jenkins
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        docker.withRegistry('https://docker.maven-qa.xxx.partners', 'docker-credentials') {
            app.push("${env.BUILD_NUMBER}")
            /* app.push("latest") */
        }
    }
}
Dockerfile:
# Dockerfile
FROM maven
ENV MAVEN_VERSION 3.3.9
ENV MAVEN_HOME /usr/share/maven
VOLUME /root/.m2
CMD ["mvn"]
Not sure what is wrong there. I am able to push an image manually on the Jenkins slave node, but through Jenkins it fails.
Logs of my build job:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker build -t devteam/maven .
Sending build context to Docker daemon 231.9 kB
Step 1 : FROM maven
---> 1f858e89a584
Step 2 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> c5ff64f9ff9f
Step 3 : ENV MAVEN_HOME /usr/share/maven
---> Using cache
---> 2a2028d6fdbc
Step 4 : VOLUME /root/.m2
---> Using cache
---> a50223412b56
Step 5 : CMD mvn
---> Using cache
---> 2d32a26dde10
Successfully built 2d32a26dde10
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image)
[Pipeline] withDockerRegistry
Wrote authentication to /usr/share/tomcat6/.docker/config.json
[Pipeline] {
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker tag --force=true devteam/maven devteam/maven:84
unknown flag: --force
See 'docker tag --help'.
+ docker tag devteam/maven devteam/maven:84
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker push devteam/maven:84
The push refers to a repository [docker.maven-qa.XXXXX.partners/devteam/maven]
e13738d640c2: Preparing
ef91149a34fb: Preparing
3332503b7bd2: Preparing
875b1eafb4d0: Preparing
7ce1a454660d: Preparing
d3b195003fcc: Preparing
92bd1433d7c5: Preparing
f0ed7f14cbd1: Preparing
b31411566900: Preparing
06f4de5fefea: Preparing
851f3e348c69: Preparing
e27a10675c56: Preparing
92bd1433d7c5: Waiting
f0ed7f14cbd1: Waiting
b31411566900: Waiting
06f4de5fefea: Waiting
851f3e348c69: Waiting
e27a10675c56: Waiting
d3b195003fcc: Waiting
e13738d640c2: Layer already exists
3332503b7bd2: Layer already exists
7ce1a454660d: Layer already exists
875b1eafb4d0: Layer already exists
ef91149a34fb: Layer already exists
d3b195003fcc: Layer already exists
f0ed7f14cbd1: Layer already exists
b31411566900: Layer already exists
92bd1433d7c5: Layer already exists
06f4de5fefea: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
This is what I have in my build logs.
I am using nginx as a reverse proxy in front of Artifactory, which sits behind a load balancer. I removed the lines below from the nginx config and it worked:
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host/artifactory;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
I am still not sure why these headers were causing the issue.
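A guess at the mechanism (speculation on my part): X-Artifactory-Override-Base-Url is built from $http_x_forwarded_proto, i.e. the X-Forwarded-Proto header received from upstream. If the load balancer does not send that header, the override base URL evaluates to ://<host>/artifactory, which could derail the multi-request push flow and surface as the bare EOF. A sketch that pins the scheme instead of trusting an upstream header, assuming TLS terminates at the load balancer:
# assumption: clients always reach Artifactory over https via the load balancer
proxy_set_header X-Artifactory-Override-Base-Url https://$host/artifactory;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;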
I also faced the same issue; after I enabled the Docker Pipeline plugin, it started working. I think it may help you: https://plugins.jenkins.io/docker-workflow/
