Jenkins Pipeline Docker Network not found

So I have this setup:
stage('Build') {
    steps {
        sh """ docker-compose -f docker-compose.yml up -d """
        sh """ docker-compose -f docker-compose.yml exec -T app buildApp """
    }
}
stage('Start UI server') {
    steps {
        script { env.NETWORK_ID = get network id with some script }
        sh """ docker-compose -f docker-compose.yml exec -d -T app startUiServer """
    }
}
stage('UI Smoke Testing') {
    agent {
        docker {
            alwaysPull true
            image 'some custom image'
            registryUrl 'some custom registry'
            registryCredentialsId 'some credentials'
            args "-u root --network ${env.NETWORK_ID}"
        }
    }
    steps { sh """ run the tests """ }
}
And for some reason the pipeline fails with this error, most of the time but not every run:
java.io.IOException: Failed to run image 'my image'. Error: docker: Error response from daemon: network 3c5b5b45ca0e not found.
So the network ID is the right one. I've checked.
Any ideas why this is failing?
I really appreciate any help.
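For reference, a minimal sketch of what the "get network id with some script" step might look like; the project name "app" (and hence the network name "app_default") is an assumption, not something from the question:
script {
    // Hypothetical lookup: docker-compose names its default network
    // "<project>_default", so "app_default" assumes a compose project named "app"
    env.NETWORK_ID = sh(
        script: "docker network ls --filter name=app_default --format '{{.ID}}'",
        returnStdout: true
    ).trim()
}
Note that docker's --network argument accepts the network name as well as the ID, and the name survives a docker-compose down/up cycle while the ID does not; an ID captured before the stack gets recreated is a plausible source of an intermittent "network not found".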

Related

Docker-compose error in Jenkins "docker-compose: No such file or directory"

I am making a CI/CD pipeline for an application with a React front-end and a Java Spring Boot back-end. Every time I run the build it fails with an error. I hit this error both with Jenkins running on the server and with Jenkins running on my local machine.
Error with Jenkins running locally:
+ /usr/bin/docker-compose up --build -d
/var/root/.jenkins/workspace/flight-test-pipeline#tmp/durable-3512619f/script.sh: line 1: /usr/bin/docker-compose: No such file or directory
Error with Jenkins running on the server:
+ docker-compose build
/var/lib/jenkins/workspace/Build-pipeline#tmp/durable-94a5213e/script.sh: 1: /var/lib/jenkins/workspace/Build-pipeline#tmp/durable-94a5213e/script.sh: docker-compose: not found
Jenkins script is here:
pipeline {
    environment {
        PATH = "$PATH:/usr/local/bin/docker-compose"
    }
    agent any
    stages {
        stage('Start container') {
            steps {
                sh "/usr/bin/docker-compose up --build -d"
            }
        }
        stage('Build') {
            steps {
                sh 'Docker build -t registry.has.de/jenk1:v1 .'
            }
        }
        stage('Login') {
            steps {
                sh 'echo docker login registry.has.de --username=furqan.iqbal --password=123...
            }
        }
        stage('Push to Has registry') {
            steps {
                sh '''
                Docker push registry.has.de/jenk1:v1
                '''
            }
        }
    }
}
If I recall correctly, Jenkins doesn't reliably expand '$PATH' inside the environment block. Two other things to fix while you're at it: PATH entries must be directories, so append /usr/local/bin rather than the docker-compose binary itself, and the capitalized Docker build / Docker push commands should be lowercase docker. For extending the path, something like this works:
environment {
    PATH = "${env.PATH}:/usr/local/bin"
}
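Alternatively, a minimal sketch using Jenkins' documented "PATH+SUFFIX" form, which prepends a directory to PATH for the wrapped steps (COMPOSE is an arbitrary label of my choosing):
steps {
    // 'PATH+COMPOSE=/usr/local/bin' prepends /usr/local/bin to PATH;
    // the text after 'PATH+' is an arbitrary label
    withEnv(['PATH+COMPOSE=/usr/local/bin']) {
        sh 'docker-compose up --build -d'
    }
}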

Jenkins start same docker container with different compose files

I'm new to Jenkins, and I have a project that I need a few instances of with different configurations; that is, they run different docker-compose files due to different mounts/ports, but the rest of the project is the same.
I could not find any information about an issue like this.
In case it helps, here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        PATH = "$PATH:/usr/local/bin"
    }
    stages {
        stage("build docker image") {
            steps {
                sh """
                docker build . -t application:development --pull=false
                """
            }
        }
        stage("run compose") {
            steps {
                sh """
                docker-compose up -d
                """
            }
        }
    }
}
Yes! This is possible.
You need to create two docker-compose files with different configurations, e.g.:
docker-compose-a.yml
docker-compose-b.yml
Then (note that the -f flag belongs before the up subcommand):
pipeline {
    agent any
    environment {
        PATH = "$PATH:/usr/local/bin"
    }
    stages {
        stage("build docker image") {
            steps {
                sh """
                docker build . -t application:development --pull=false
                """
            }
        }
        stage("run compose") {
            steps {
                sh """
                docker-compose -f docker-compose-a.yml up -d
                docker-compose -f docker-compose-b.yml up -d
                """
            }
        }
    }
}
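Because both compose files live in the same directory, compose derives the same default project name for both stacks; giving each run its own -p project name keeps their containers and networks from colliding. The names app-a and app-b here are illustrative:
docker-compose -p app-a -f docker-compose-a.yml up -d
docker-compose -p app-b -f docker-compose-b.yml up -d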

Jenkins - Mark build as success

I use Jenkins to build my Maven Java app, then create a Docker image and push it. After all of that I have a try-catch where I try to stop and remove the container if it's already running; if not, it should just skip that and run the new image. It works, but it always marks the build as failed. I tried to change the build status, but apparently that is not possible.
Here is my pipeline:
node {
    stage('Clone repository') {
        git branch: 'main', credentialsId: 'realsnack-git', url: 'https://github.com/Realsnack/Java-rest-api.git'
    }
    stage('Build maven project') {
        sh './mvnw clean package'
    }
    stage('Build docker image') {
        sh 'docker build -t 192.168.1.27:49153/java-restapi:latest .'
    }
    stage('Push image') {
        sh 'docker push 192.168.1.27:49153/java-restapi:latest'
    }
    try {
        stage('Remove old container') {
            sh 'docker stop java-rest_api && docker rm java-rest_api'
        }
    } catch (all) {
        echo 'No container to remove - running it anyway'
    } finally {
        stage('Run image') {
            sh 'docker run -d --name java-rest_api -p 8081:8081 192.168.1.27:49153/java-restapi:latest'
        }
    }
}
docker stop will fail if there is no container with that name to stop.
You can solve the issue in one of the two following ways:
Check that there is a running container before attempting to stop it:
sh "if docker ps -a | grep -q java-rest_api; then docker stop java-rest_api; fi"
Ignore the docker error:
sh "docker stop java-rest_api || true"

How to pull & run docker image on remote server through jenkins pipeline

I have 2 AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which creates a Docker image, runs the container on 1st-server, and pushes the image to my Docker Hub repo. That part works fine.
I now want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs in to 1st-server instead, even though the ssh credential ('my-ssh-key') belongs to 2nd-server. I'm confused how it's logging in to 1st-server; I checked with touch commands and the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one"
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the Docker image there through the Jenkins pipeline script? Help me out; where am I going wrong?
This is more of an alternative than a solution. You can pass the remote commands as arguments to ssh; they then execute on the remote server, and the connection closes afterwards. As written, your ssh ... && cd ... chain gives ssh no command at all, so the remote session just opens and exits, and the cd, touch, and docker pull all run locally on 1st-server.
ssh name@ip "ls -la /home/ubuntu/"
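Applied to the pipeline above, a minimal sketch; the container name flask-app and the port mapping are illustrative, and 2ndserver stands for the second instance's actual address:
sshagent(['my-ssh-key']) {
    // Everything inside the quotes after the host runs on the remote server
    sh '''
        ssh -o StrictHostKeyChecking=no ubuntu@2ndserver "
            docker pull docker-user/docker-repo:flask-app-push-test &&
            docker run -d -p 5000:5000 --name flask-app docker-user/docker-repo:flask-app-push-test
        "
    '''
}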

Declarative jenkins pipeline

I am using a declarative Jenkins pipeline.
I am a newbie with Jenkins, so I don't understand something: I don't know how to handle this error, and can someone tell me the best way to do these builds?
I have a bash script which builds, tags, and pushes Docker images to a repository.
This is the relevant part of my Jenkinsfile:
pipeline {
    agent {
        kubernetes {
            label 'bmf-worker'
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: service-reader
  containers:
  - name: docker
    image: docker
    command:
    - cat
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    command:
    - cat
    tty: true
  - name: gcloud
    image: google/cloud-sdk
    command:
    - cat
    tty: true
"""
        }
    }
    stages {
        stage('Code Checkout and Setup') {
            steps {
                echo 'Code Checkout and Setup'
            }
        }
        stage('Build') {
            parallel {
                stage('Build') {
                    steps {
                        echo 'Start building Frontend and Backend Docker images'
                    }
                }
                stage('Build BMF Frontend') {
                    steps {
                        container('gcloud') {
                            echo 'Building Bmf Frontend Image'
                            sh 'chmod +x build.sh'
                            sh './build.sh --build_bmf_frontend'
                        }
                    }
                }
                stage('Tag BMF Frontend') {
                    steps {
                        container('gcloud') {
                            echo 'Tagging Bmf Frontend Image'
                            sh 'chmod +x build.sh'
                            sh './build.sh --tag_frontend'
                        }
                    }
                }
                stage('Build BMF Backend') {
                    steps {
                        container('gcloud') {
                            echo 'Building Bmf Backend Images'
                            sh 'chmod +x build.sh'
                            sh './build.sh --build_bmf_backend'
                        }
                    }
                }
                stage('Tag BMF Backend') {
                    steps {
                        container('gcloud') {
                            echo 'Tagging Bmf Backend Image'
                            sh 'chmod +x build.sh'
                            sh './build.sh --tag_frontend'
                        }
                    }
                }
            }
        }
    }
}
How do I use podTemplate to execute my steps? When I use the docker container for the 'Build BMF Backend' stage, I get these errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
/home/jenkins/workspace/BMF/bmf-web#tmp/durable-c146e810/script.sh: line 1: ./build.sh: not found
With the gcloud container defined in the podTemplate:
time="2019-03-12T13:40:56Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory"
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
And how should I tag the Docker images? I need docker for tagging and git for the tag itself (the tag is the short commit hash), but when I use the docker container there is no git.
My Jenkins master is on Google Cloud Kubernetes.
Can someone explain a better way to execute these jobs?
The first issue you are facing is related to Docker, not Jenkins.
Docker commands can only be run by root or by users in the docker group.
If you want the Jenkins user to be able to execute Docker commands, you can run the following command as root to add Jenkins to the docker group:
usermod -aG docker jenkins
This is documented in the Docker docs.
Be aware that giving a user access to Docker effectively grants them root access, so be cautious about which users you add to this group.
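For completeness, a sketch of the full sequence on the host where the Docker commands actually run; the systemctl line assumes a systemd-managed Jenkins and may differ on your install:
# run as root
usermod -aG docker jenkins
systemctl restart jenkins    # group membership only takes effect on a new session
su - jenkins -c 'docker ps'  # verify the jenkins user can reach the daemon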
