Jenkins Pipeline Docker -- Container is Not Running

I have Jenkins running on an EC2 instance. I have the EC2 Plugin configured in a peered VPC, and when a job is tagged 'support_ubuntu_docker' it will spin up a Jenkins slave with Docker pre-installed.
I am able to follow the examples and get my job to connect to the local Docker daemon running on the slave and run commands inside the container.
Working: https://pastebin.com/UANvjhnA
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Not working: https://pastebin.com/MsSZaPha
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
I have tried with the ansible/ansible:default image, as well as an image I created myself.
FROM alpine:3.7
RUN apk add --no-cache terraform
RUN apk add --no-cache ansible
ENTRYPOINT ["/bin/ash"]
This image behaves as expected locally.
[jenkins_test] docker exec -it 3843653932c8 ash 10:56:42 ☁ master ☂ ⚡ ✭
/ # terraform --version
Terraform v0.11.0
/ # ansible --version
ansible 2.4.6.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Aug 22 2018, 13:24:18) [GCC 6.4.0]
/ #
I really just want to be able to clone my Terraform Git repo and use the terraform binary in the container to run my init/plan/applies.
The error I'm getting for all of these is:
java.io.IOException: Failed to run top 'c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3'. Error: Error response from daemon: Container c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3 is not running

The question really should have been a Docker question; what's the difference between node:7-alpine and hashicorp/terraform:light?
hashicorp/terraform:light has an ENTRYPOINT entry, pointing to /bin/terraform.
Basically that means you run it this way:
docker run hashicorp/terraform:light --version
And it will exit right away, i.e., it's not interactive.
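If you want to check this yourself (assuming the image is already pulled locally), docker inspect shows the configured entrypoint:
docker inspect --format '{{.Config.Entrypoint}}' hashicorp/terraform:light
For hashicorp/terraform:light this should print something like [/bin/terraform].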
So if you want an interactive shell within that Docker container, you'll have to override the ENTRYPOINT to point at a shell, say, /bin/bash and also tell Docker to run interactively:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-it --entrypoint=/bin/bash'
            label 'support_ubuntu_docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}

In a scripted pipeline you can do this:
docker.image(dockerImage).inside("--entrypoint=''") {
    // code to run inside the container
}
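For example, with the image from this question, a sketch in a scripted Jenkinsfile (reusing the label from above) could be:
node('support_ubuntu_docker') {
    docker.image('hashicorp/terraform:light').inside("--entrypoint=''") {
        sh 'terraform --version'
    }
}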
If you are creating the image to use in Jenkins from a base image that already has an ENTRYPOINT instruction, you can override it by adding this line to the end of your own Dockerfile:
ENTRYPOINT []
Then the --entrypoint override is no longer needed.
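For example, a minimal wrapper Dockerfile (a sketch; tag the resulting image however you like when building it) would be:
FROM hashicorp/terraform:light
ENTRYPOINT []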

I had to change the entrypoint to empty to get it working. With the following script it worked like a charm:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}

Related

Jenkins pipeline docker agent, start Docker container from Dockerfile with privileged mode

In my Jenkins pipeline, the pipeline code and Dockerfile are available in GitLab.
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}
From the Dockerfile the image gets created and the Docker container gets started, but it is missing some privileges.
I cannot even create a directory.
I need to start the Docker container with privileges so that I can run chmod, mkdir, etc.
agent { dockerfile ... } supports arguments. See the docs:
agent {
    // Equivalent to "docker build -f Dockerfile.build"
    dockerfile {
        filename 'Dockerfile.build'
        args '--privileged'
    }
}
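Applied to the pipeline above, a sketch would look like this (--privileged gives the container broad access to the host, so only use it if you actually need it):
pipeline {
    agent {
        dockerfile {
            args '--privileged'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}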

Jenkins pipeline and Docker multi-stage builds howto

Question
I have to configure CI/CD for a number of Git repositories with the help of Jenkins (and Docker Hub as the CD target). I did that with the help of Docker multi-stage builds (see Considerations). I'm afraid I may have misunderstood or overcomplicated a simple idea.
Is Jenkins + Docker multi-stage builds a good/best practice? Am I applying the idea in the correct way?
Considerations
From this presentation I assume using Docker inside Jenkins is a good idea. After reading the article Using Multi-Stage Builds to Simplify and Standardize Build Processes, Docker multi-stage builds look to be the next step in using Jenkins + Docker.
Answers to a similar question also say Docker multi-stage builds make sense, but they don't provide an example implementation.
Implementation
Jenkins creates the pipeline from the SCM repository.
Git repository:
Dockerfile
Jenkinsfile
project-folder
    |-src
    |-pom.xml
Dockerfile
FROM alpine as source
RUN apk --update --no-cache add git
COPY project-folder repo
FROM maven:3.6.3-jdk-8 as test
COPY --from=source repo repo
WORKDIR repo
RUN mvn clean test
FROM maven:3.6.3-jdk-8 as build
COPY --from=test repo repo
WORKDIR repo
RUN mvn clean package
FROM openjdk:8 as final
MAINTAINER xxx <xxx#gmail.com>
LABEL owner="xxx"
COPY --from=build repo/target/some-lib-1.8.jar /usr/local/some-lib.jar
ENTRYPOINT ["java", "-jar", "/usr/local/some-lib.jar"]
Jenkinsfile
I used docker build --target for more granularity in the Jenkins UI.
#!/usr/bin/env groovy
def imageId = "use-name/image-name:1.$BUILD_NUMBER"
pipeline {
    agent {
        label 'docker' // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
    }
    stages {
        stage('Test') {
            steps {
                script {
                    sh "docker build --no-cache --target test -t ${imageId} ."
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "docker build --target build -t ${imageId} ."
                }
            }
        }
        stage('Image') {
            steps {
                script {
                    sh "docker build --target final -t ${imageId} ."
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('', 'dockerhub') {
                        dockerImage = docker.build("${imageId}")
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
Following taleodor's answer, I would suggest the following Jenkinsfile:
pipeline {
    agent {
        label 'docker' // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
    }
    environment {
        imageId = 'use-name/image-name:1.$BUILD_NUMBER'
        docker_registry = 'your_docker_registry'
        docker_creds = credentials('your_docker_registry_creds')
    }
    stages {
        stage('Docker build') {
            steps {
                sh "docker build --no-cache --force-rm -t ${imageId} ."
            }
        }
        stage('Docker push') {
            steps {
                sh '''
                    docker login $docker_registry --username $docker_creds_USR --password $docker_creds_PSW
                    docker push $imageId
                    docker logout
                '''
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}

Docker: not found when running commands in a Jenkinsfile

I am new to Docker and CI. I am trying to create a Jenkinsfile that would build and test my application, then build a Docker image with the Dockerfile I've composed and push it to AWS ECR. The step I am stuck on is building the image with Docker; I receive the error message docker: not found. I downloaded the Docker plug-in and configured it in the Global Tool Configuration tab. Am I not adding it to tools correctly?
There was another post where you could use Kubernetes to do that; however, Kubernetes no longer supports Docker.
How I configured Docker in Global Tool Configuration (screenshot: global tool config).
The error:
/var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: 1: /var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: docker: not found
I also get a permission error on docker.sock (screenshot).
def gv
containerVersion = "1.0"
appName = "foodcore"
imageName = appName + ":" + containerVersion
pipeline {
    agent any
    environment {
        CI = 'true'
    }
    tools {
        nodejs "node"
        docker "docker"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                }
            }
        }
        stage("test") {
            when {
                expression {
                    script {
                        CODE_CHANGES == false
                    }
                }
            }
            steps {
                dir("client") {
                    sh 'npm test'
                }
            }
        }
        stage("build docker image") {
            when {
                expression {
                    script {
                        env.BRANCH_NAME.toString().equals('Main') && CODE_CHANGES == false
                    }
                }
            }
            steps {
                sh "docker build -t ${imageName} ."
            }
        }
        stage("push docker image") {
            when {
                expression {
                    env.BRANCH_NAME.toString().equals('Main')
                }
            }
            steps {
                sh 'aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin repoURI'
                sh 'docker tag foodcore:latest ...repoURI'
                sh 'docker push repoURI'
            }
        }
    }
}
Use echo hello world to make...
Docker should be installed on the server where Jenkins is running. The Docker plugin provided by Jenkins is just a tool to generate snippets for the pipeline scripts; installing and configuring the tool doesn't install a Docker daemon. Please check whether Docker is installed on the OS or not.
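For example, a quick check on the agent (the same commands can be run from a pipeline sh step):
which docker || echo 'docker CLI not found on PATH'
docker version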
As we can see in the thread, you then start getting permission denied on docker.sock.
The docker.sock permissions will be lost if you restart the system or the Docker service.
To make the fix persistent, set up a cron job to change the ownership after each reboot:
@reboot chmod 777 /var/run/docker.sock
And when you restart Docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can also put it in a cron job that executes every 5 minutes.
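For example, a crontab entry along these lines (an illustrative sketch) runs the fix every 5 minutes:
*/5 * * * * chmod 777 /var/run/docker.sock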
To use Docker inside a Jenkins build, there are two methods:
Use the Jenkins Docker plugins as described in the solution above.
Or install Docker itself in the Jenkins container and mount the docker.sock file (a sketch follows).
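A sketch of the second method, assuming the official jenkins/jenkins image (the docker CLI still has to be installed inside the container):
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts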

Copy build artifacts from inside Docker to the host

This is my jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The Docker image is built and then run; after this I would like to extract the hex file from the container into the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards,
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/*.hex .'
sh 'docker rm builder'
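Also note that docker cp does not expand wildcards such as *.hex, so if the exact file name is not known it is easier to copy the whole build directory and pick out the file on the host (a sketch):
sh 'docker cp builder:/usr/src/myCppProject/build ./build'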

How do I specify docker run args when using a dockerfile agent in jenkins

I am setting up a simple Jenkins pipeline with a dockerfile agent. The Jenkinsfile is as follows:
pipeline {
    agent {
        dockerfile {
            dir 'docker'
            args '-v yarn_cache:usr/local/share/.cache/yarn'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'yarn install'
                sh 'yarn run build'
            }
        }
        stage('Test') {
            steps {
                sh 'yarn run test'
            }
        }
    }
}
I would like the yarn cache to persist in a volume, so I want the container to be started with '-v yarn_cache:usr/local/share/.cache/yarn'.
With the given Jenkinsfile, Jenkins stalls after creating the image.
The args parameter is not actually documented for the dockerfile agent, only for the docker agent.
Do I really have to use a predefined (and uploaded) image just to be able to use parameters?
Cheers Thomas
OK, figured it out:
It actually works just like I configured it; I had only forgotten the leading / in the volume path. So with
args '-v yarn_cache:/usr/local/share/.cache/yarn'
it works just fine.
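So the complete agent block from the question becomes:
agent {
    dockerfile {
        dir 'docker'
        args '-v yarn_cache:/usr/local/share/.cache/yarn'
    }
}
The named volume yarn_cache is created by Docker on first use and persists between builds on that agent.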
