Jenkins scripted pipeline with sidecar MySQL container for testing - docker

I have the pipeline below, which runs the application container alongside a MySQL container in order to run tests.
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    def mysql = docker.image('mysql:5.6').run('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes')
    docker.build("rds-test", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test')
    rds_test_image.inside("--link ${mysql.id}:mysql") {
        sh 'echo "Inside Container"'
    }
}
And I am stuck with the below error:
Successfully tagged rds-test:latest
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . rds-test
.
[Pipeline] withDockerContainer
Jenkins seems to be running inside container d4e0934157d5eb6a9edadef31413d0da44e0e3eaacbb1719fc8d47fbf0a60a2b
$ docker run -t -d -u 1000:1000 --link d14340adbef9c95483d0369857dd000edf1b986e9df452b8faaf907fe9e89bf2:mysql -w /var/jenkins_home/workspace/test-jenkinsfile-s3-rds-backup --volumes-from d4e0934157d5eb6a9edadef31413d0da44e0e3eaacbb1719fc8d47fbf0a60a2b -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** rds-test cat
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Failed to run image 'rds-test'. Error: docker: Error response from daemon: Cannot link to a non running container: /sharp_sanderson AS /fervent_lewin/mysql.
Just in case you want to look at the rds-test Dockerfile: https://github.com/epynic/rds-mysql-s3-backup/tree/feature

The id of the running container is not captured in the return of the run method; rather, it is available through the closure parameter of a withRun block. To leverage this capability, modify your code as follows:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    docker.build("rds-test", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test')
    docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { container ->
        rds_test_image.inside("--link ${container.id}:mysql") {
            sh 'echo "Inside Container"'
        }
    }
}
As you can see above, running your second container within the code block of the other container's withRun makes the container id accessible through the id property of the closure parameter (named container here for convenience). withRun also keeps the MySQL container running for the duration of the block, so --link no longer targets a stopped container.
Note that you can also do a slight code cleanup here by assigning rds_test_image directly to the return value of docker.build("rds-test", "-f ${dockerfile} .") instead of adding another line that reassigns it from docker.image('rds-test'). The new code is also more robust, since it no longer depends on a separate image lookup by name.
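A minimal sketch of that cleanup (behavior unchanged; docker.build returns the built Image object):
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    // build and capture the Image in one step
    def rds_test_image = docker.build("rds-test", "-f ${dockerfile} .")
    docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { container ->
        rds_test_image.inside("--link ${container.id}:mysql") {
            sh 'echo "Inside Container"'
        }
    }
}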

The above failure happened because the mysql container was not running when --link was evaluated. With Matt Schuchard's suggestion I have updated the answer:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    docker.build("rds-test:latest", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test:latest')
    docker.image('mysql:5.6').withRun('-e MYSQL_ROOT_PASSWORD=admin --name=mysql_server -p 3306:3306') { container ->
        docker.image('mysql:5.6').inside("--link ${container.id}:mysql") {
            /* Wait until the mysql service is up */
            sh 'while ! mysqladmin ping -hmysql --silent; do sleep 1; done'
        }
        rds_test_image.inside("--link ${container.id}:mysql -e MYSQL_HOST=mysql -e MYSQL_PWD=admin -e USER=root") {
            sh 'bash scripts/test_script.sh'
        }
    }
}
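As an aside, --link is deprecated in Docker, so a user-defined bridge network is worth considering. Below is a hedged sketch of the same stage on a network; the network name test-net is an assumption, and this is not part of the answer above:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    def rds_test_image = docker.build("rds-test:latest", "-f ${dockerfile} .")
    sh 'docker network create test-net || true'   // reuse the network if it already exists
    try {
        docker.image('mysql:5.6').withRun('-e MYSQL_ROOT_PASSWORD=admin --network test-net --network-alias mysql') { container ->
            docker.image('mysql:5.6').inside('--network test-net') {
                /* wait until the mysql service is reachable via the network alias */
                sh 'while ! mysqladmin ping -hmysql --silent; do sleep 1; done'
            }
            rds_test_image.inside('--network test-net -e MYSQL_HOST=mysql -e MYSQL_PWD=admin -e USER=root') {
                sh 'bash scripts/test_script.sh'
            }
        }
    } finally {
        sh 'docker network rm test-net || true'
    }
}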

Related

Official espressif image container with entrypoint not starting correctly in jenkins as a continuous integration tool

I'm working with a project that uses Espressif, and to build it on my machine the Docker way I do the following:
docker run --rm -v $PWD:/project -w /project espressif/idf:v4.2.2 idf.py build
I would like to write a declarative pipeline that executes the equivalent of the command above. I implemented it based on other examples that worked; the resulting log is below.
I don't understand why passing the 'idf.py build' arguments in the 'steps' block is not working. Does anyone have any ideas?
Reading the log and doing some Google searches, I believe the Jenkins plugin can't handle the command because the image uses the ENTRYPOINT feature.
My pipeline:
pipeline {
    agent any
    environment {
        PROJ_NAME = 'test'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'ssh://git@bitbucket.org/john/iot-project.git'
            }
        }
        stage('Build') {
            agent {
                docker {
                    image 'espressif/idf:v4.2.2'
                    args '--rm -v $PWD:/project -w /project'
                    reuseNode true
                }
            }
            steps {
                sh 'idf.py build'
            }
        }
    }
}
Error snippet:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1000:1000 --rm -v $PWD:/project -w /project -w /var/lib/jenkins/workspace/iot-project-TEST -v /var/lib/jenkins/workspace/iot-project-TEST:/var/lib/jenkins/workspace/iot-project-TEST:rw,z -v /var/lib/jenkins/workspace/iot-project-TEST@tmp:/var/lib/jenkins/workspace/iot-project-TEST@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** espressif/idf:v4.2.2 cat
$ docker top 81920a1146eabe9bf5c08339a682d81ac23777de0421895e1184d2a8ef27fc8c -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
[Pipeline] {
[Pipeline] sh
+ idf.py build
/var/lib/jenkins/workspace/iot-project-TEST@tmp/durable-b8bf6ce0/script.sh: 1: /var/lib/jenkins/workspace/iot-project-TEST@tmp/durable-b8bf6ce0/script.sh: idf.py: not found
[Pipeline] }
$ docker stop --time=1 81920a1146eabe9bf5c08339a682d81ac23777de0421895e1184d2a8ef27fc8c
$ docker rm -f 81920a1146eabe9bf5c08339a682d81ac23777de0421895e1184d2a8ef27fc8c
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
UPDATE1:
The project build with the official espressif image works when I run the command directly, for example:
pipeline {
    agent any
    environment {
        PROJ_NAME = 'test'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'ssh://git@bitbucket.org/john/iot-project.git'
            }
        }
        stage('Build') {
            steps {
                sh 'docker run --rm -v $WORKSPACE/ESPComm:/project -w /project espressif/idf:v4.2.2 idf.py build'
            }
        }
    }
}
UPDATE2:
Without the --entrypoint='' argument an error is always thrown, so I keep that argument. Below I present the log of the ls and pwd commands after docker runs. Note: cat and docker top are Jenkins' own tricks to keep the container alive so that the commands inside the steps block can be executed.
pipeline {
    agent any
    environment {
        PROJ_NAME = 'test'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'ssh://git@bitbucket.org/john/iot-project.git'
            }
        }
        stage('Build') {
            agent {
                docker {
                    image 'espressif/idf:v4.2.2'
                    args '''--rm -v $PWD:/project -w /project --entrypoint='' '''
                    reuseNode true
                }
            }
            steps {
                /*sh '''
                source /opt/esp/idf/export.sh
                idf.py build
                '''*/
                sh 'ls'
                sh 'pwd'
            }
        }
    }
}
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1000:1000 --rm -v $PWD:/project -w /project --entrypoint= -w /var/lib/jenkins/workspace/iot-project-TEST -v /var/lib/jenkins/workspace/iot-project-TEST:/var/lib/jenkins/workspace/iot-project-TEST:rw,z -v /var/lib/jenkins/workspace/iot-project-TEST@tmp:/var/lib/jenkins/workspace/iot-project-TEST@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** espressif/idf:v4.2.2 cat
$ docker top 69d5a4450c2463a6d8153582248796a15fa94ed04ef3d45c76c9a2358b8740cd -eo pid,comm
[Pipeline] {
[Pipeline] sh
+ ls
ESPComm
ESPComm@tmp
Grafana
README.md
xctu_template.xml
[Pipeline] sh
+ pwd
/var/lib/jenkins/workspace/iot-project-TEST
[Pipeline] }
$ docker stop --time=1 69d5a4450c2463a6d8153582248796a15fa94ed04ef3d45c76c9a2358b8740cd
$ docker rm -f 69d5a4450c2463a6d8153582248796a15fa94ed04ef3d45c76c9a2358b8740cd
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
UPDATE3:
Now running without the --entrypoint argument. Note the error in the log:
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
pipeline {
    agent any
    environment {
        PROJ_NAME = 'test'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'ssh://git@bitbucket.org/john/iot-project.git'
            }
        }
        stage('Build') {
            agent {
                docker {
                    image 'espressif/idf:v4.2.2'
                    args '''--rm -v $PWD:/project -w /project '''
                    reuseNode true
                }
            }
            steps {
                sh '''
                pwd
                ls
                #source /opt/esp/idf/export.sh
                . $IDF_PATH/export.sh
                idf.py build
                '''
            }
        }
    }
}
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1000:1000 --rm -v $PWD:/project -w /project -w /var/lib/jenkins/workspace/iot-project-TEST -v /var/lib/jenkins/workspace/iot-project-TEST:/var/lib/jenkins/workspace/iot-project-TEST:rw,z -v /var/lib/jenkins/workspace/iot-project-TEST@tmp:/var/lib/jenkins/workspace/iot-project-TEST@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** espressif/idf:v4.2.2 cat
$ docker top 217167975bdf63d861215e12f5e3b2ef35d21681fe77bb7608a6c1cc0d03c237 -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
[Pipeline] {
[Pipeline] sh
+ pwd
/var/lib/jenkins/workspace/iot-project-TEST
+ ls
ESPComm
ESPComm@tmp
Grafana
README.md
xctu_template.xml
+ . /opt/esp/idf/export.sh
+ idf_export_main
+ [ -n ]
+ [ -z /opt/esp/idf ]
+ [ ! -d /opt/esp/idf ]
+ [ ! -f /opt/esp/idf/tools/idf.py ]
+ [ ! -f /opt/esp/idf/tools/idf_tools.py ]
+ export IDF_PATH=/opt/esp/idf
+ old_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ echo Detecting the Python interpreter
Detecting the Python interpreter
+ . /opt/esp/idf/tools/detect_python.sh
+ ESP_PYTHON=python
+ echo Checking "python" ...
Checking "python" ...
+ python -c import sys; print(sys.version_info.major)
+ [ 3 = 3 ]
+ ESP_PYTHON=python
+ break
+ python --version
Python 3.6.9
+ echo "python" has been detected
"python" has been detected
+ echo Adding ESP-IDF tools to PATH...
Adding ESP-IDF tools to PATH...
+ export IDF_TOOLS_EXPORT_CMD=/opt/esp/idf/export.sh
+ export IDF_TOOLS_INSTALL_CMD=/opt/esp/idf/install.sh
+ python /opt/esp/idf/tools/idf_tools.py export
+ idf_exports=export OPENOCD_SCRIPTS="/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/share/openocd/scripts";export IDF_PYTHON_ENV_PATH="/opt/esp/python_env/idf4.2_py3.6_env";export PATH="/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:$PATH"
+ eval export OPENOCD_SCRIPTS="/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/share/openocd/scripts";export IDF_PYTHON_ENV_PATH="/opt/esp/python_env/idf4.2_py3.6_env";export PATH="/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:$PATH"
+ export OPENOCD_SCRIPTS=/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/share/openocd/scripts
+ export IDF_PYTHON_ENV_PATH=/opt/esp/python_env/idf4.2_py3.6_env
+ export PATH=/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ which python
+ echo Using Python interpreter in /opt/esp/python_env/idf4.2_py3.6_env/bin/python
Using Python interpreter in /opt/esp/python_env/idf4.2_py3.6_env/bin/python
+ echo Checking if Python packages are up to date...
Checking if Python packages are up to date...
+ python /opt/esp/idf/tools/check_python_dependencies.py
Python requirements from /opt/esp/idf/requirements.txt are satisfied.
+ IDF_ADD_PATHS_EXTRAS=/opt/esp/idf/components/esptool_py/esptool
+ IDF_ADD_PATHS_EXTRAS=/opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump
+ IDF_ADD_PATHS_EXTRAS=/opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump:/opt/esp/idf/components/partition_table
+ IDF_ADD_PATHS_EXTRAS=/opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump:/opt/esp/idf/components/partition_table:/opt/esp/idf/components/app_update
+ export PATH=/opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump:/opt/esp/idf/components/partition_table:/opt/esp/idf/components/app_update:/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ [ -n ]
+ echo Updated PATH variable:
Updated PATH variable:
+ echo /opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump:/opt/esp/idf/components/partition_table:/opt/esp/idf/components/app_update:/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/opt/esp/idf/components/esptool_py/esptool:/opt/esp/idf/components/espcoredump:/opt/esp/idf/components/partition_table:/opt/esp/idf/components/app_update:/opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin:/opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin:/opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin:/opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin:/opt/esp/tools/cmake/3.16.4/bin:/opt/esp/tools/openocd-esp32/v0.10.0-esp32-20200709/openocd-esp32/bin:/opt/esp/python_env/idf4.2_py3.6_env/bin:/opt/esp/idf/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ unset old_path
+ unset paths
+ unset path_prefix
+ unset path_entry
+ unset IDF_ADD_PATHS_EXTRAS
+ unset idf_exports
+ unset ESP_PYTHON
+ echo Done! You can now compile ESP-IDF projects.
Done! You can now compile ESP-IDF projects.
+ echo Go to the project directory and run:
Go to the project directory and run:
+ echo
+ echo idf.py build
idf.py build
+ echo
+ unset realpath_int
+ unset idf_export_main
+ idf.py build
Executing action: all (aliases: build)
CMakeLists.txt not found in project directory /var/lib/jenkins/workspace/iot-project-TEST
Your environment is not configured to handle unicode filenames outside of ASCII range. Environment variable LC_ALL is temporary set to C.UTF-8 for unicode support.
[Pipeline] }
$ docker stop --time=1 217167975bdf63d861215e12f5e3b2ef35d21681fe77bb7608a6c1cc0d03c237
$ docker rm -f 217167975bdf63d861215e12f5e3b2ef35d21681fe77bb7608a6c1cc0d03c237
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
Finished: FAILURE
Here is an example with an image I created; it uses Ubuntu as a base but has no entrypoint. With it I've already managed to run an Eclipse headless build successfully, as follows:
stage('Build') {
    agent {
        docker {
            image 'tool/stm32-cubeide-image:1.0'
            reuseNode true
        }
    }
    steps {
        sh '/opt/stm32cubeide/headless-build.sh -importAll $WORKSPACE -data $WORKSPACE -cleanBuild $DIR/$OPT_BUILD'
    }
}
You can do something like the below: before executing the build command, try sourcing /opt/esp/idf/export.sh, which sets up the environment so you can execute the build command.
sh '''
    source /opt/esp/idf/export.sh
    idf.py build
'''
Here is your full pipeline with the necessary changes.
pipeline {
    agent any
    environment {
        PROJ_NAME = 'test'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'ssh://git@bitbucket.org/john/iot-project.git'
            }
        }
        stage('Build') {
            agent {
                docker {
                    image 'espressif/idf:v4.2.2'
                    args '--rm -v $PWD:/project -w /project'
                    reuseNode true
                }
            }
            steps {
                sh '''
                #source /opt/esp/idf/export.sh
                . $IDF_PATH/export.sh
                idf.py build
                '''
            }
        }
    }
}
Update
The following is the content of the image's entrypoint script.
#!/usr/bin/env bash
set -e
. $IDF_PATH/export.sh
exec "$@"
So executing the build in either of the following ways works for me:
sh '''
    . $IDF_PATH/export.sh
    idf.py build
'''
or
sh '''
    sh /opt/esp/entrypoint.sh idf.py build
'''
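For context, a rough sketch of what the plugin is doing around your sh steps (reconstructed from the logs above; the exec form is an approximation, not the plugin's exact code):
docker run -t -d -u 1000:1000 ... espressif/idf:v4.2.2 cat   # 'cat' keeps the container alive
docker top <container-id> -eo pid,comm                       # plugin verifies the keep-alive process started
docker exec <container-id> sh -c '. $IDF_PATH/export.sh && idf.py build'   # roughly how a sh step executes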

How to build and run a Docker image in a scripted Jenkinsfile pipeline?

I'm trying to build and run a Docker image on a specific build node from a scripted Jenkinsfile. Switching to declarative syntax is something I would rather avoid.
My code is quite close to the example from the documentation. The image builds as expected, but running the container fails: Jenkins complains that the physical machine of the node is not running inside a container, and the echo and make commands from the innermost block, which I would expect to run inside the container, are not executed and do not appear in the log.
As far as I understand, Jenkins considers containers to be build nodes of their own, and nesting of node statements is not allowed. At the same time, a node is required to build and run the Docker image.
What am I missing to build and run the image? As I'm quite new to Jenkins as well as to Docker, any hints or recommendations are appreciated.
The code:
node('BuildMachine1') {
    withEnv(envList) {
        dir("/some/path") {
            docker.build("build-image:${env.BUILD_ID}", "-f ${env.WORKSPACE}/build/Dockerfile .").inside {
                echo "Echo from Docker"
                sh script: 'make'
            }
        }
    }
}
The log:
Successfully built 8c57cad188ed
Successfully tagged build-image:79
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . build-image:79
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
BuildMachine1 does not seem to be running inside a container
$ docker run -t -d -u 1004:1005 -w /data/Jenkins_Node/workspace/myFeature/buildaarch64Release -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release:rw,z -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release@tmp:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** build-image:87 cat
$ docker top 2242078968bc1ee5ddfd08c73a2e1551eda36c2595f0e4c9fb6e9b3b0af15b8b -eo pid,comm
[Pipeline] // withDockerContainer
It looks like the entrypoint of the container was configured in a way that worked for manual usage in a terminal but not inside the Jenkins pipeline.
It was set as
ENTRYPOINT ["/usr/bin/env", "bash"]
After changing it to
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
the resulting container is used by the Jenkinsfile as intended.
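A hedged sketch of why this matters: the Docker Pipeline plugin starts the container as docker run ... <image> cat, so whatever the entrypoint is receives cat as its argument.
/usr/bin/env bash cat   # old entrypoint: bash treats 'cat' as a script file to read, so the keep-alive command never runs
/bin/bash -l -c cat     # new entrypoint: bash -c executes the cat command, keeping the container alive for Jenkins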

Using docker agent inside a docker-swarm slave that is running inside Docker as well

We have a Jenkinsfile that looks as follows:
pipeline {
    agent {
        node {
            label 'slave-test'
        }
    }
    stages {
        stage('test docker run') {
            agent {
                docker {
                    image 'node:14.4.0-slim'
                    args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
                    reuseNode true
                }
            }
            steps {
                sh 'PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm ci'
                sh 'npm run test:ci'
                sh 'npm run patternlab:build'
            }
        }
    }
}
The node labelled slave-test is a Docker Swarm client running as a Docker container based on debian-buster. Inside this slave we want to start the node:14.4.0-slim image to run some tests and package some frontend assets.
We use reuseNode true to reuse the workspace of the agent declared at the beginning of the pipeline. But Jenkins tells us:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test docker run)
[Pipeline] getContext
[Pipeline] isUnix
[Pipeline] sh
13:24:07 + docker inspect -f . node:14.4.0-slim
13:24:07 .
[Pipeline] withDockerContainer
13:24:07 hofladen-slave01-20d7912d seems to be running inside container 23d34522985b2e7ec99327337cd2b20bee22018562886c9930a4ba777cda11ca
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262@2 could not be found among [/var/run/docker.sock]
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262@2@tmp could not be found among [/var/run/docker.sock]
13:24:07 $ docker run -t -d -u 10000:10000 -u root -v /var/run/docker.sock:/var/run/docker.sock -w /home/****/workspace/ttern-library_feature_BWEBHM-262@2 -v /home/****/workspace/ttern-library_feature_BWEBHM-262@2:/home/****/workspace/ttern-library_feature_BWEBHM-262@2:rw,z -v /home/****/workspace/ttern-library_feature_BWEBHM-262@2@tmp:/home/****/workspace/ttern-library_feature_BWEBHM-262@2@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:14.4.0-slim cat
13:24:08 $ docker top 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683 -eo pid,comm
[Pipeline] {
[Pipeline] sh
13:29:15 process apparently never started in /home/****/workspace/ttern-library_feature_BWEBHM-262@2@tmp/durable-504ce105
13:29:15 (running Jenkins temporarily with -Dorg.****ci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
13:29:15 $ docker stop --time=1 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
13:29:17 $ docker rm -f 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Cancel running builds if exist)
Stage "Cancel running builds if exist" skipped due to earlier failure(s)
We need to run these commands all in the same Jenkins workspace in order to perform the later steps.
Does anybody have an idea how to achieve this? We know that the pipeline runs fine if it is not running on an agent that is itself a container.
Fixed by starting the Jenkins slave as a container with a named volume to share the workspace and with access to /var/run/docker.sock:
#!/bin/bash
volume_name=myfinevolume-slave01-workspace
docker volume create ${volume_name}
docker run -d --name jenkins-agent-for-myfineproject \
-v /var/run/docker.sock:/var/run/docker.sock:rw \
--mount source=${volume_name},target=/home/jenkins/workspace \
--memory=8G \
--memory-swap=16G \
--restart unless-stopped \
myfineregistry.domain.lala/acme/jenkins-swarm-client:3.17.0_buster

Jenkins pipeline alpine agent "apk update ERROR: Unable to lock database: Permission denied"

I'm using the Alpine Docker image as a Jenkins pipeline agent, but I keep getting a permission denied error while running apk update or apk add <package>. I'm seeing a similar error with Ubuntu images while running apt update or apt install.
Here's my Jenkinsfile:
pipeline {
    agent none
    stages {
        stage('Initialization') {
            agent any
            steps {
                checkout scm
            }
        }
        stage('Git Clone') {
            agent { docker { image 'alpine:3.12.0' } }
            steps {
                sh '''
                apk update;
                apk add --no-cache git;
                apk add --no-cache openssh;
                git --version;
                '''
            }
        }
    }
}
and here's the Jenkins output:
+ docker inspect -f . alpine:3.12.0
WARNING: Error loading config file: /root/.docker/config.json: stat /root/.docker/config.json: permission denied
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1001:0 -w "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend" -v "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend:/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend:rw,z" -v "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend@tmp:/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend@tmp:rw,z" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** alpine:3.12.0 cat
$ docker top 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa -eo pid,comm
[Pipeline] {
[Pipeline] sh
+ apk update
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
[Pipeline] }
$ docker stop --time=1 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa
$ docker rm -f 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
ERROR: script returned exit code 99
Finished: FAILURE
Can someone help me figure out the issue?
Modify the docker block in your Jenkins pipeline like this:
docker {
    image 'alpine:3.12.0'
    args '-u root:root'
}
I believe the problem is that Jenkins is running the container as a non-root user, hence the Permission denied error.
Try changing your pipeline like so:
agent {
    docker {
        image 'alpine:3.12.0'
        args '-u root'
    }
}
See this answer.
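If running as root is undesirable, a hedged alternative is to bake the packages into a small image at build time and point the agent at it; the filename Dockerfile.ci and its contents are assumptions, not from the answers above:
/* Dockerfile.ci (hypothetical):
     FROM alpine:3.12.0
     RUN apk add --no-cache git openssh
*/
agent {
    dockerfile {
        filename 'Dockerfile.ci'
    }
}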

Docker agent volume mount in Jenkins pipeline not working as expected

The following snippet with a volume mount creates the Maven dependencies under $JENKINS_HOME/workspace/<project-name>/? (question mark) instead of under $HOME/.m2/.
Note that settings.xml mirrors to our internal repository, and the instructions on how to mount were taken directly from jenkins.io.
Does anyone have any clue why this is happening?
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp/jenkins/.m2:/root/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s settings.xml'
            }
        }
    }
}
This is not as simple as using Docker standalone. I created the /var/jenkins/.m2 directory on the Jenkins slave where the build runs, ensured the new directory has 775 permissions (although that may not be required), and also changed the owner to match that of /var/opt/slave/workspace/pipeline_test (I got this path from the logs below).
$ docker login -u dcr-login -p ******** https://nexus.corp.zenmonics.com:8449
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . nexus.corp.zenmonics.com:8449/maven:3-alpine
.
[Pipeline] withDockerContainer
cucj1sb3 does not seem to be running inside a container
$ docker run -t -d -u 1002:1002 -v /tmp/jenkins/.m2:/root/.m2:rw,z -w /var/opt/slave/workspace/pipeline_test -v /var/opt/slave/workspace/pipeline_test:/var/opt/slave/workspace/pipeline_test:rw,z -v /var/opt/slave/workspace/pipeline_test@tmp:/var/opt/slave/workspace/pipeline_test@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** nexus.corp.zenmonics.com:8449/maven:3-alpine cat
$ docker top c7282468dbb6952aadbe4bb495757e7047122b179c81516645ba23759b78c366 -eo pid,comm
This statement on the official Maven image page at Docker Hub (https://hub.docker.com/_/maven) suggests where the volume mount should point:
To create a pre-packaged repository, create a pom.xml with the dependencies you need and use this in your Dockerfile. /usr/share/maven/ref/settings-docker.xml is a settings file that changes the local repository to /usr/share/maven/ref/repository, but you can use your own settings file as long as it uses /usr/share/maven/ref/repository as the local repo.
The documentation at https://jenkins.io/doc/book/pipeline/docker/ is misleading, and a waste of time, when it comes to volume mounting.
When the Docker container is created, it runs as user 1002 and group 1002. User 1002 doesn't have access to /root/.m2 and only has access to the working directory injected into the container.
Dockerfile
FROM maven:3-alpine
COPY --chown=1002:1002 repository/ /usr/share/maven/ref/repository/
RUN chmod -R 775 /usr/share/maven/ref/repository
COPY settings.xml /usr/share/maven/ref/
settings.xml
<localRepository>/usr/share/maven/ref/repository</localRepository>
Docker commands
docker build -t <server>:<port>/<image-name>:<image-tag> .
docker push <server>:<port>/<image-name>:<image-tag>
docker volume create maven-repo
Jenkinsfile
pipeline {
    agent {
        docker {
            label '<slave-label-here>'
            image '<image-name>:<image-tag>'
            registryUrl 'https://<server>:<port>'
            registryCredentialsId '<jenkins-credentials-for-docker-login>'
            args '-v maven-repo:/usr/share/maven/ref/repository/'
        }
    }
    parameters {
        booleanParam(name: 'SONAR', defaultValue: false, description: 'Select this option to run SONAR Analysis')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s /usr/share/maven/ref/settings.xml -f pom.xml'
            }
        }
    }
}
As @masseyb mentions in the comments, Jenkins treats $HOME as the current build context.
There are two workarounds:
a) use a Jenkins plugin to set the env variable
You can use the Envinject Plugin to set environment variables in Jenkins; an in-Jenkinsfile sketch of the same idea follows below.
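For reference, a hedged in-Jenkinsfile sketch of (a), pointing Maven's local repository away from $HOME entirely (the path /var/opt/m2repo is an assumption):
environment {
    // maven.repo.local is read by Maven as a system property; the path here is hypothetical
    MAVEN_OPTS = '-Dmaven.repo.local=/var/opt/m2repo'
}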
b) specify an absolute path instead of $HOME/.m2
You can specify an absolute path for .m2, e.g.:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /home/samir-shail/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}
Note: please check that Jenkins has access to your $HOME/.m2/ directory.
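A quick sanity check on the agent host (a sketch; adjust the paths to your setup):
ls -ld $HOME/.m2          # must exist and be readable by the user the container runs as
ls -ld /tmp/jenkins/.m2   # the host side of the bind mount from the question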
