Docker: not found when running cmds in jenkinsfile - docker

I am new to Docker and CI. I am trying to create a Jenkinsfile that builds and tests my application, then builds a Docker image with the Dockerfile I've composed, and then pushes it to AWS ECR. The step I am stuck on is building the image with Docker; I receive the error message docker: not found. I downloaded the Docker plug-in and configured it in the Global Tool Configuration tab. Am I not adding it into tools correctly?
There was another post where you could use Kubernetes to do that; however, Kubernetes no longer supports Docker.
Screenshot of how I configured Docker in Global Tool Configuration: global tool config
Error:
/var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: 1: /var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: docker: not found
Error with permission to docker.sock:
def gv
containerVersion = "1.0"
appName = "foodcore"
imageName = appName + ":" + containerVersion

pipeline {
    agent any
    environment {
        CI = 'true'
    }
    tools {
        nodejs "node"
        docker "docker"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                }
            }
        }
        stage("test") {
            when {
                expression {
                    script {
                        CODE_CHANGES == false
                    }
                }
            }
            steps {
                dir("client") {
                    sh 'npm test'
                }
            }
        }
        stage("build docker image") {
            when {
                expression {
                    script {
                        env.BRANCH_NAME.toString().equals('Main') && CODE_CHANGES == false
                    }
                }
            }
            steps {
                sh "docker build -t ${imageName} ."
            }
        }
        stage("push docker image") {
            when {
                expression {
                    env.BRANCH_NAME.toString().equals('Main')
                }
            }
            steps {
                sh 'aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin repoURI'
                sh 'docker tag foodcore:latest ...repoURI'
                sh 'docker push repoURI'
            }
        }
    }
}


Docker should be installed on the server Jenkins is running on. The Docker plugin provided by Jenkins is just a tool to generate snippets for the pipeline scripts; installing and configuring the plugin doesn't install a Docker daemon. Please check whether Docker is installed on the OS or not.
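If you want to see what the build agent can actually find before debugging further, a minimal sketch like the one below (the stage name and commands are illustrative, not from the original answer) fails fast with a clearer message when the docker CLI is missing from the agent's PATH:
pipeline {
    agent any
    stages {
        stage('check docker') {
            steps {
                // Fails the build early if the docker CLI is not installed on this agent
                sh 'command -v docker || { echo "docker CLI is not installed on this agent"; exit 1; }'
                // Also verifies the daemon is reachable from this agent
                sh 'docker version'
            }
        }
    }
}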

As we can see in the thread, you have started getting permission denied on docker.sock.
The docker.sock permissions will be lost if you restart the system or the Docker service.
To make the change persistent, set up a cron job to change the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
And whenever you restart Docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can also put it in a cron job that executes every 5 minutes.

To use Docker inside a Jenkins build, there are two methods:
Use the Jenkins Docker plugins as described in the solution above.
Or install Docker itself in the Jenkins container and mount the docker.sock file.
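As a rough sketch of the second method (the image tag, ports, and volume names here are assumptions, not from the answer), the Jenkins container itself can be started with the host's Docker socket and CLI mounted in:
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    jenkins/jenkins:lts
Mounting the host's docker binary only works when it is a static binary; otherwise install the docker CLI inside the image. Either way, the jenkins user inside the container still needs permission on the mounted socket, which is exactly the permission-denied issue described above.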

Related

Docker-compose error in Jenkins "docker-compose: No such file or directory"

I am making a CI/CD pipeline for an application with a React JS front-end and a Java Spring Boot backend. Every time I run the build it fails and I get an error. I face this error both with Jenkins running on the server and with Jenkins running on my local machine.
Error with Jenkins running locally:
+ /usr/bin/docker-compose up --build -d
/var/root/.jenkins/workspace/flight-test-pipeline#tmp/durable-3512619f/script.sh: line 1: /usr/bin/docker-compose: No such file or directory
Error with Jenkins running on the server:
+ docker-compose build
/var/lib/jenkins/workspace/Build-pipeline#tmp/durable-94a5213e/script.sh: 1: /var/lib/jenkins/workspace/Build-pipeline#tmp/durable-94a5213e/script.sh: docker-compose: not found
The Jenkins script is here:
pipeline {
    environment {
        PATH = "$PATH:/usr/local/bin/docker-compose"
    }
    agent any
    stages {
        stage('Start container') {
            steps {
                sh "/usr/bin/docker-compose up --build -d"
            }
        }
        stage('Build') {
            steps {
                sh 'Docker build -t registry.has.de/jenk1:v1 .'
            }
        }
        stage('Login') {
            steps {
                sh 'echo docker login registry.has.de --username=furqan.iqbal --password=123...'
            }
        }
        stage('Push to Has registry') {
            steps {
                sh '''
                    Docker push registry.has.de/jenk1:v1
                '''
            }
        }
    }
}
If I recall correctly, Jenkins can't understand what '$PATH' is, so you need to do the following:
environment {
    p = sh 'echo $PATH'
    PATH = p + ':/usr/local/bin/docker-compose'
}
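Note that PATH entries should be directories, not the path to the docker-compose binary itself. A minimal declarative sketch, assuming docker-compose lives in /usr/local/bin, would be:
environment {
    // Prepend the directory that contains docker-compose, not the binary path itself
    PATH = "/usr/local/bin:${env.PATH}"
}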

Jenkins pipeline docker agent: start docker container from Dockerfile in privileged mode

In my Jenkins pipeline, the pipeline code and Dockerfile are available on GitLab:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}
From the Dockerfile the image gets created and the docker container gets started, but it is missing some privileges.
I cannot even create a directory.
I need to start the docker container with privileges so that I can perform chmod, mkdir, etc.
agent { dockerfile ... } supports arguments. See the docs:
agent {
    // Equivalent to "docker build -f Dockerfile.build ."
    dockerfile {
        filename 'Dockerfile.build'
        args '--privileged'
    }
}
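Applied to the pipeline from this question, that could look like the sketch below (using the default Dockerfile name, since the question used dockerfile true):
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            // Passed to docker run, so the container gets the extra privileges
            args '--privileged'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}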

ssh-agent not working on jenkins pipeline

I am a newbie trying to implement CI/CD for my hello-world reactive Spring project. After releasing the image to the Docker repo, the next step is to connect to AWS EC2 and run the created image. I have already installed the SSH Agent plugin and tested positive on the SSH connection configured in Manage Jenkins -> Configure System -> SSH client.
Also, my system environment variables have path=C:\Windows\System32\OpenSSH\ssh-agent.exe
In the last step I am getting:
Could not find ssh-agent: IOException: Cannot run program "ssh-agent": CreateProcess error=2, The system cannot find the file specified
Check if ssh-agent is installed and in PATH
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
My pipeline code:
pipeline {
    agent any
    tools {
        maven 'maven'
        jdk 'jdk1.8'
    }
    environment {
        registry = "my-registry"
        registryCredential = credentials('docker-credentials')
    }
    stages {
        stage('SCM') {
            steps {
                git branch: 'master',
                    credentialsId: 'JenkinsGitlab',
                    url: 'https://www.gitlab.com/my-repo/panda-app'
            }
        }
        stage('Build') {
            steps {
                bat 'mvn clean package spring-boot:repackage'
            }
        }
        stage('Dockerize') {
            steps {
                bat "docker build -t ${registry}:${BUILD_NUMBER} ."
            }
        }
        stage('Docker Login') {
            steps {
                bat "docker login -u ${registryCredential_USR} -p ${registryCredential_PSW}"
            }
        }
        stage('Release to Docker hub') {
            steps {
                bat "docker push ${registry}:${BUILD_NUMBER}"
            }
        }
        stage('Deploy to AWS') {
            steps {
                sshagent(['panda-ec2']) {
                    bat "ssh -o StrictHostKeyChecking=no ubuntu@my-aws-host sudo docker run -p 8080:8080 ${registry}:${BUILD_NUMBER}"
                }
            }
        }
    }
}
The built-in SSH agent of Windows is incompatible with the Jenkins SSH Agent plugin.
I'm using the SSH agent from the Git installation. Make sure to insert the directory(!) path of Git's ssh-agent.exe before any other path, to prevent the use of the Windows SSH agent.
With a default Git for Windows installation, you can set the PATH environment variable like this:
path=C:\Program Files\Git\usr\bin;%path%
For me it didn't work to set the env var from within the Jenkins UI. I added it through the Settings app. When doing so, make sure to insert it before "%SystemRoot%\system32\OpenSSH".
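If you would rather keep the override next to the job, one hedged alternative is to prepend Git's usr\bin directory in the pipeline's environment block (the path assumes a default Git for Windows install, and as noted above results with PATH tweaks on Windows can vary):
environment {
    // Put Git for Windows' ssh-agent.exe directory ahead of the built-in OpenSSH one
    PATH = "C:\\Program Files\\Git\\usr\\bin;${env.PATH}"
}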

Accessing parent daemon from container in Jenkins

We have a bunch of nodes running our jobs in Jenkins. I have the need to build two images from a Jenkins job. To do this, I've read that you should share the unix socket using bind mounting, and I've done that like this:
agent {
    docker {
        image 'custom-alpine-with-docker'
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}
I then want to use it as follows:
stage('Build and push image(s)') {
    steps {
        dir("${WORKING_DIRECTORY}") {
            script {
                echo 'Building amd64 image'
                amd64image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-amd64", "-f ./Dockerfile.amd64 .")
                echo 'Building arm32v7 image'
                arm32v7image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-arm32v7", "-f ./Dockerfile.arm32v7 .")
            }
            script {
                docker.withRegistry("${DOCKER_REGISTRY_URL}", "${REPOSITORY_CREDENTIALS}") {
                    amd64image.push()
                    arm32v7image.push()
                }
            }
        }
    }
}
However, as soon as the build command is issued in the jenkins job, I get the following error:
time="2019-01-16T16:55:33Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
17:56:59 Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
So a simple search shows the source of this error is that the user trying to access the daemon is not in the docker group, but I don't understand how these group memberships work when sharing a daemon like this.
If I go to the node that failed the build, and check the users in the docker group, I get the following:
$ getent group docker
docker:x:126:inst,jenkins
So how do I allow the user running in the container on that host to access the same daemon?
Small update
Just did it locally using docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker, and when I write docker ps in the container and on my host I see the same containers running.
Getting all the users in the docker group on my development machine, it looks like this:
docker:x:999:overlord
So I'm guessing I need some special Jenkins solution for this to work.
I think I've solved this in a satisfactory way. Here's a step-by-step guide:
Ensure docker is installed in the container that needs to run it.
Create the docker group and jenkins user:
CMD DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
    groupmod --gid ${DOCKER_GID} ${DOCKER_GROUP} && \
    usermod -a -G ${DOCKER_GROUP} ${JENKINS_USER} && \
    gosu jenkins sh
It is important to note that here I fetch the group Id of the underlying system that runs docker. As I had already installed docker in my container and the group already existed, I modify the existing group to match the group id of the system. Finally I add the jenkins user to the docker group. Your /etc/group should look something like this in the container after it's run:
docker:x:999:jenkins
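For reference, here is a minimal sketch of the kind of build-image Dockerfile that CMD could sit in; the base image, package names, and the DOCKER_GROUP/JENKINS_USER values are assumptions, not taken from the answer:
FROM openjdk:8-jdk
# The docker CLI and gosu must exist in the image for the CMD below to work
RUN apt-get update && \
    apt-get install -y --no-install-recommends docker.io gosu && \
    rm -rf /var/lib/apt/lists/*
# Illustrative defaults only
ENV DOCKER_GROUP=docker JENKINS_USER=jenkins
RUN groupadd -f ${DOCKER_GROUP} && \
    useradd -m -G ${DOCKER_GROUP} ${JENKINS_USER}
# At start-up, re-align the docker group with the GID of the mounted docker.sock,
# then drop to the jenkins user, as described in the steps above
CMD DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
    groupmod --gid ${DOCKER_GID} ${DOCKER_GROUP} && \
    usermod -a -G ${DOCKER_GROUP} ${JENKINS_USER} && \
    gosu jenkins sh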
In your pipeline, start the agent as follows:
agent {
    docker {
        image 'storemanager-build'
        args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
    }
}
By supplying the -u root flag, you override the user jenkins that jenkins forces on you when you use the declarative pipeline. You have to use root for the CMD command to work and to be able to create the group.
When the image is running, the command will switch to a jenkins user that is allowed to access the underlying unix socket.
Here's an excerpt from my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'build-image'
            args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build jar') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        if (isUnix()) {
                            sh './mvnw --batch-mode clean install'
                        } else {
                            bat 'mvnw.cmd --batch-mode clean install'
                        }
                    }
                }
            }
        }
        stage('Build and push image(s)') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        amd64image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.amd64 .")
                        arm32v7image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.arm32v7 .")
                    }
                    script {
                        docker.withRegistry("${DOCKER_REGISTRY_URL}", "${STOREMANAGER_REPOSITORY_CREDENTIALS}") {
                            amd64image.push("${BUILD_NUMBER}-amd64")
                            arm32v7image.push("${BUILD_NUMBER}-arm32v7")
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            sh "chmod -R 777 ." // Jenkins can't clean built resources without this as we run the container as root
            cleanWs()
        }
    }
}
And the resources that helped me:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
https://github.com/jenkinsci/docker/issues/196
Hope this helps.

Jenkins Pipeline Docker -- Container is Not Running

I have Jenkins running on an EC2 instance. I have the EC2 plugin configured in a peered VPC, and when a job is tagged 'support_ubuntu_docker' it will spin up a Jenkins slave with Docker pre-installed.
I am able to follow the examples and get my job to connect to the local Docker running on the slave, and run commands inside the container.
Working: https://pastebin.com/UANvjhnA
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Not Working https://pastebin.com/MsSZaPha
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
I have tried with the ansible/ansible:default image, as well as an image I created myself.
FROM alpine:3.7
RUN apk add --no-cache terraform
RUN apk add --no-cache ansible
ENTRYPOINT ["/bin/ash"]
This image behaves as expected locally:
$ docker exec -it 3843653932c8 ash
/ # terraform --version
Terraform v0.11.0
/ # ansible --version
ansible 2.4.6.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Aug 22 2018, 13:24:18) [GCC 6.4.0]
/ #
I really just want to be able to clone my terraform git repo, and use the terraform in the container to run my init/plan/applies.
The error I'm getting for all of these is:
java.io.IOException: Failed to run top 'c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3'. Error: Error response from daemon: Container c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3 is not running
The question really should have been a Docker question; what's the difference between node:7-alpine and hashicorp/terraform:light?
hashicorp/terraform:light has an ENTRYPOINT entry, pointing to /bin/terraform.
Basically that means you run it this way:
docker run hashicorp/terraform:light --version
And it will exit right away, i.e., it's not interactive.
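You can confirm this yourself (a quick check, not part of the original answer) by inspecting the image's configured entrypoint:
docker inspect --format '{{.Config.Entrypoint}}' hashicorp/terraform:light
For this image it should print the terraform entrypoint described above.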
So if you want an interactive shell within that Docker container, you'll have to override the ENTRYPOINT to point at a shell, say, /bin/bash and also tell Docker to run interactively:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-it --entrypoint=/bin/bash'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
In a scripted pipeline you can do this:
docker.image(dockerImage).inside("--entrypoint=''") {
    // code to run on container
}
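For example, with the image from this question that becomes:
docker.image('hashicorp/terraform:light').inside("--entrypoint=''") {
    sh 'terraform --version'
}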
If you are creating the image to use in Jenkins from a base image that already has an ENTRYPOINT instruction, you can override it by adding this line to the end of your own Dockerfile:
ENTRYPOINT []
Then the whole --entrypoint is no longer needed.
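For instance, a tiny wrapper image (a sketch assuming you base it on the terraform image from this question) would be:
FROM hashicorp/terraform:light
# Clear the inherited ENTRYPOINT so Jenkins can start the container with its own command
ENTRYPOINT []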
I had to change the entrypoint to empty to get it working. With the following script it worked like a charm:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
