Docker volume mounts are not working in my Azure DevOps Pipeline; please find my code below.
I tried two approaches to run my Docker container in the pipeline (both shown below), and in both cases the volume comes up empty: the mount simply does not happen. I'm not sure what mistake I'm making here, and I'd really appreciate any help fixing this.
I would like to mount run.sh, test.sh and test.txt under /test.
In entrypoint.sh I list all the files inside the container, but the listing comes back empty: run.sh, test.sh and test.txt are not mounted.
I've been struggling with this for the last two days without finding a resolution.
This is my folder structure:
my-app/
├─ test/
│ ├─ run.sh
│ ├─ test.sh
│ ├─ test.txt
├─ azure-pipeline.yml
test.sh
#!/bin/bash
rootPath=$1
echo "Root path: $rootPath"
./run.sh $rootPath
run.sh
#!/bin/bash
echo "starting run script"
NAME="testApp"
IMAGE="sample/test-app:1.1"
ROOTPATH=$1
echo "$ROOTPATH"
# Finally run
docker stop $NAME > /dev/null 2>&1
docker rm $NAME > /dev/null 2>&1
docker run --name $NAME -i -v $ROOTPATH:/test -w /test $IMAGE
azure-pipeline.yml (Approach 1)
trigger:
- none

jobs:
- job: test
  pool:
    name: my-Linux-agents
  displayName: Run tests
  steps:
  - task: Bash@3
    displayName: Docker Prune
    inputs:
      targetType: inline
      script: |
        docker system prune -f -a
  - task: Docker@2
    displayName: Docker Login
    inputs:
      containerRegistry: myRegistry w/ asdf
      command: login
  - task: Bash@3
    displayName: Execute Sample Java
    inputs:
      targetType: filePath
      filePath: 'test/test.sh'
      arguments: '$PWD'
      workingDirectory: test
azure-pipeline.yml (Approach 2)
trigger:
- none

jobs:
- job: test
  pool:
    name: my-Linux-agents
  displayName: Run tests
  steps:
  - task: Bash@3
    displayName: Docker Prune
    inputs:
      targetType: inline
      script: |
        docker system prune -f -a
  - task: Docker@2
    displayName: Docker Login
    inputs:
      containerRegistry: myRegistry w/ asdf
      command: login
  - bash: |
      echo "Executing docker run command"
      echo $(Build.SourcesDirectory)
      echo $PWD
      docker run --name testApp -i -v $(Build.SourcesDirectory):/test -w /test sample/test-app:1.1
My Docker Image - files
Dockerfile
FROM alpine:3.12
COPY entrypoint.sh /
RUN echo "hello"
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "START Running Docker"
echo "Listing Files"
ls -la
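For reference, a minimal local sanity check of the same setup would look roughly like this (a sketch only; it assumes Docker is available on the workstation, the commands are run from the my-app/ root, and the build-context path of the image is a placeholder):

# Build the image shown above (the build-context path is a placeholder) and
# bind-mount the local test folder; the entrypoint's "ls -la" should then list
# run.sh, test.sh and test.txt.
docker build -t sample/test-app:1.1 path/to/image-context
docker run --rm -i -v "$PWD/test":/test -w /test sample/test-app:1.1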
TL;DR
Docker volume mounts work in Azure DevOps Pipelines (at least on the Microsoft Hosted Agents).
As can be seen below, both of the approaches described in the OP work with the ubuntu-latest agent pool. If self-hosted agents are used in my-Linux-agents, the problem is likely with them rather than with the Dockerfile or pipeline config shared in the original post.
Please see the full working demo with git repo and Pipeline
EDIT: If the self-hosted pipeline agents themselves run as Docker containers, the problem may be that the inner container references a path that exists only in the outer (agent) container, but not on the host. For more details on how to mount volumes when launching a Docker container from another Docker container, please see the section on mounting volumes using Docker within a Docker container in the Azure Pipelines docs, or this answer to Mounting docker run in Azure Pipeline job.
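To make the Docker-in-Docker pitfall concrete, here is a rough sketch; the agent start-up command, the my-agent-image name and the /azp/_work path are assumptions for illustration, not taken from the original post:

# Assume the self-hosted agent runs in a container that was started with the
# host's Docker socket and its work directory mounted at an identical path:
#   docker run -v /var/run/docker.sock:/var/run/docker.sock \
#              -v /azp/_work:/azp/_work my-agent-image
# The -v source below is resolved by the *host* Docker daemon, so mounting
# $(Build.SourcesDirectory) only works if that exact path also exists on the
# host, which mounting the work directory at an identical path guarantees.
docker run --name testApp -i \
  -v "$(Build.SourcesDirectory)":/test -w /test sample/test-app:1.1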
Test setup
I have modified the azure-pipelines.yml to create a fully self-contained example.
The following changes were made to the azure-pipelines.yml from the OP:
Setting the agent pool to ubuntu-latest rather than my-Linux-agents.
Changing the Docker login step to a Docker build step that actually builds the image as part of the pipeline (not strictly necessary, but it makes the pipeline independent of a custom image registry).
Adding a step that lists all the files in the repo recursively, together with their permissions (so that we can easily verify that all files in the /test folder are readable by all).
Adding the steps that run the containers from both approaches to the same pipeline, so that both are demonstrated in a single run.
The azure-pipelines.yml now looks like this:
trigger:
- none

jobs:
- job: test
  pool:
    vmImage: 'ubuntu-latest'
  displayName: Run tests
  steps:
  - task: Docker@2
    displayName: Build Docker Image
    inputs:
      repository: sample/test-app
      command: build
      Dockerfile: image/Dockerfile
      tags: '1.1'
  - bash: |
      echo $(Build.SourcesDirectory)
      ls -lrtR
    displayName: List files
  - bash: |
      echo "Executing docker run command"
      docker run --name testApp -i -v $(Build.SourcesDirectory)/test:/test -w /test sample/test-app:1.1
    displayName: Run docker inline in pipeline
  - task: Bash@3
    displayName: Run test.sh
    inputs:
      targetType: filePath
      filePath: 'test/test.sh'
      arguments: '$PWD'
      workingDirectory: test
The rest of the files look the same; the full test setup can be found here
Test Results
When running the pipeline, the following output is obtained (the full pipeline run can be found here):
List Files
/home/vsts/work/1/s
.:
total 16
drwxr-xr-x 2 vsts docker 4096 Sep 18 16:51 test
drwxr-xr-x 2 vsts docker 4096 Sep 18 16:51 image
-rw-r--r-- 1 vsts docker 849 Sep 18 16:51 azure-pipelines.yml
-rw-r--r-- 1 vsts docker 198 Sep 18 16:51 README.md
./test:
total 8
-rw-r--r-- 1 vsts docker 0 Sep 18 16:51 test.txt
-rwxr-xr-x 1 vsts docker 72 Sep 18 16:51 test.sh
-rwxr-xr-x 1 vsts docker 258 Sep 18 16:51 run.sh
./image:
total 8
-rwxr-xr-x 1 vsts docker 67 Sep 18 16:51 entrypoint.sh
-rwxr-xr-x 1 vsts docker 87 Sep 18 16:51 Dockerfile
Run docker inline in pipeline
Executing docker run command
START Running Docker
Listing Files
total 16
drwxr-xr-x 2 1001 121 4096 Sep 18 16:51 .
drwxr-xr-x 1 root root 4096 Sep 18 16:51 ..
-rwxr-xr-x 1 1001 121 258 Sep 18 16:51 run.sh
-rwxr-xr-x 1 1001 121 72 Sep 18 16:51 test.sh
-rw-r--r-- 1 1001 121 0 Sep 18 16:51 test.txt
Run test.sh
Root path: /home/vsts/work/1/s/test
starting run script
/home/vsts/work/1/s/test
START Running Docker
Listing Files
total 16
drwxr-xr-x 2 1001 121 4096 Sep 18 16:51 .
drwxr-xr-x 1 root root 4096 Sep 18 16:51 ..
-rwxr-xr-x 1 1001 121 258 Sep 18 16:51 run.sh
-rwxr-xr-x 1 1001 121 72 Sep 18 16:51 test.sh
-rw-r--r-- 1 1001 121 0 Sep 18 16:51 test.txt
I believe the solution is relatively simple; the idea is derived from here.
Give this a try:
In the pipeline:
pool:
vmImage: 'ubuntu-latest'
In the Dockerfile:
VOLUME ["/test"]
RUN ls -la >> my.log
and then check the log.
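To inspect that log afterwards, something along these lines should work (the image tag is reused from the question; the /my.log path assumes the Dockerfile's default working directory):

# Rebuild with the two lines above added to the Dockerfile, then print the log
# the RUN step wrote during the build; --entrypoint bypasses entrypoint.sh.
docker build -t sample/test-app:1.1 .
docker run --rm --entrypoint cat sample/test-app:1.1 /my.log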
Related
I don't understand why my entrypoint can't execute my command. My entrypoint looks like this:
#!/bin/bash
...
exec "$#"
My script exists; I can run it when I go inside my container:
drwxrwxrwx 1 root root 512 mars 25 09:07 .
drwxrwxrwx 1 root root 512 mars 25 09:07 ..
-rwxrwxrwx 1 root root 128 mars 25 10:05 entrypoint.sh
-rwxrwxrwx 1 root root 481 mars 25 09:07 init-dev.sh
-rwxrwxrwx 1 root root 419 mars 25 10:02 migration.sh
root@0c0062fbf916:/app/scripts# pwd
/app/scripts
And when I run my container : docker run my_container "scripts/migration.sh"
I got this error:
scripts/entrypoint.sh: line 8: /app/scripts/migration.sh: No such file or directory
I have the same error if I just run ls -all
docker run my_container "ls -all"
exec: ls -all: not found
I switch between Linux and Windows, so I checked whether changing the line endings between LF and CRLF made a difference, but nothing changed.
Your first command doesn't work because your scripts are in /app/scripts (note the plural), but you're trying to run script/migration.sh. Additionally, it's not clear what the current working directory is in your container: even if you wrote scripts/migration.sh, that would only work if either (a) your Dockerfile contains a WORKDIR /app, or (b) your docker run command line includes -w /app. You would be better off using a fully qualified path:
docker run mycontainer /app/scripts/migration.sh
Your second example (docker run my_container "ls -all") is over-quoted and would never work. You need to write docker run my_container ls -all, except that -all isn't actually an option that ls accepts, although it will work by virtue of being the combination of the -a and -l options.
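For reference, a minimal entrypoint built around the pattern discussed above might look like this (a sketch, not the asker's exact file):

#!/bin/bash
# Minimal entrypoint sketch: do any setup work, then hand control to whatever
# command was passed to "docker run" ("$@" expands to exactly those arguments).
set -e
# ... setup steps would go here ...
exec "$@"

With that in place, docker run my_container /app/scripts/migration.sh hands the script path to exec, and docker run my_container ls -al lists the working directory.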
Following is the Dockerfile for the image,
FROM jenkins/jenkins:lts-jdk11
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.2 http_request" && ls -la /var/jenkins_home
When this is built using docker build -t ireshmm/jenkins:lts-jdk11 ., the following is the output:
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM jenkins/jenkins:lts-jdk11
---> 9aee0d53624f
Step 2/3 : USER jenkins
---> Using cache
---> 49d657d24299
Step 3/3 : RUN jenkins-plugin-cli --plugins "blueocean:1.25.2 http_request" && ls -la /var/jenkins_home
---> Running in b459c4c48e3e
Done
total 20
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:49 .
drwxr-xr-x 1 root root 4096 Jan 12 15:46 ..
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:49 .cache
-rw-rw-r-- 1 jenkins root 7152 Jan 12 15:42 tini_pub.gpg
Removing intermediate container b459c4c48e3e
---> 5fd5ba428f1a
Successfully built 5fd5ba428f1a
Successfully tagged ireshmm/jenkins:lts-jdk11
When I create a container and list the files with docker run -it --rm ireshmm/jenkins:lts-jdk11 ls -la /var/jenkins_home, the following is the output:
total 40
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:51 .
drwxr-xr-x 1 root root 4096 Jan 12 15:46 ..
-rw-r--r-- 1 jenkins jenkins 4683 Jan 22 16:51 copy_reference_file.log
drwxr-xr-x 2 jenkins jenkins 16384 Jan 22 16:51 plugins
-rw-rw-r-- 1 jenkins root 7152 Jan 12 15:42 tini_pub.gpg
Question: Why do the contents of /var/jenkins_home differ between building the image and inside a container created from it, given that no command is run after listing the files during the build? How can that happen?
The jenkins/jenkins:lts-jdk11 image has an ENTRYPOINT that runs /usr/local/bin/jenkins.sh, which, among other things, creates the copy_reference_file.log file:
$ grep -i copy_reference /usr/local/bin/jenkins.sh
: "${COPY_REFERENCE_FILE_LOG:="${JENKINS_HOME}/copy_reference_file.log"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find "${REF}" \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
The ENTRYPOINT script runs whenever you start a container from that image, before any command you've provided on the command line.
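To see the filesystem exactly as it was at the end of the build, i.e. without the entrypoint running first, you can override the entrypoint (an illustrative command using the tag from the question):

# Override the entrypoint so jenkins.sh never runs; everything after the image
# name then becomes arguments to "ls" rather than to the entrypoint script.
docker run -it --rm --entrypoint ls ireshmm/jenkins:lts-jdk11 -la /var/jenkins_home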
I have a volume which uses bind to share a local directory. Sometimes this directory doesn't exist, and then everything breaks. How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
Volume example:
- type: bind
read_only: true
source: /srv/share/
target: /srv/share/
How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
As far as I am aware you can't do conditional logic to mount a volume, but I am getting around it in a project of mine like this:
version: "2.1"
services:
elixir:
image: elixir:alpine
volumes:
- ${VOLUME_SOURCE:-/dev/null}:${VOLUME_TARGET:-/.devnull}:ro
Here I am using /dev/null as the fallback, but in my real project I just use an empty file to do the mapping.
The ${VOLUME_SOURCE:-/dev/null} syntax is the standard shell mechanism for providing a default value when a variable is not set, and Docker Compose supports the same expansion.
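The same expansion can be tried directly in a shell, which makes the fallback behaviour easy to see:

# Plain-shell illustration of the ":-" default expansion that Compose reuses:
# when the variable is unset (or empty), the value after ":-" is substituted.
unset VOLUME_SOURCE
echo "${VOLUME_SOURCE:-/dev/null}"   # prints /dev/null
VOLUME_SOURCE=./testing
echo "${VOLUME_SOURCE:-/dev/null}"   # prints ./testing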
Testing it without setting the env vars
$ sudo docker-compose run --rm elixir sh
/ # ls -al /.devnull
crw-rw-rw- 1 root root 1, 3 May 21 12:27 /.devnull
Testing it with the env vars set
Creating the .env file:
$ printf "VOLUME_SOURCE=./testing \nVOLUME_TARGET=/testing\n" > .env && cat .env
VOLUME_SOURCE=./testing
VOLUME_TARGET=/testing
Creating the volume for test purposes:
$ mkdir testing && touch testing/test.txt && ls -al testing
total 8
drwxr-xr-x 2 exadra37 exadra37 4096 May 22 13:12 .
drwxr-xr-x 3 exadra37 exadra37 4096 May 22 13:12 ..
-rw-r--r-- 1 exadra37 exadra37 0 May 22 13:12 test.txt
Running the container:
$ sudo docker-compose run --rm elixir sh
/ # ls -al /testing/
total 8
drwxr-xr-x 2 1000 1000 4096 May 22 12:01 .
drwxr-xr-x 1 root root 4096 May 22 12:07 ..
-rw-r--r-- 1 1000 1000 0 May 22 12:01 test.txt
/ #
I don't think there is an easy way to do that with the docker-compose syntax yet. Here is how I went about it; note that the container will not start at all if the volume is missing.
Check the launch command with docker inspect on the unpatched container.
Change your command to something like this (here using egorive/seafile-mc:8.0.7-rpi on a Raspberry Pi, where the data is on an external disk that might not always be plugged in):
volumes:
  - '/data/seafile-data:/shared:Z'
command: sh -c "( [ -f /shared/.docker-volume-check ] || ( echo volume not mounted, not starting; sleep 60; exit 1 )) && exec /sbin/my_init -- /scripts/start.py"
restart: always
Run touch .docker-volume-check in the root of your volume.
That way you have a restartable container that fails and waits if the volume is not mounted. It also handles volumes in a generic way: for instance, when you create a new container whose volume has not yet been initialized by a first setup, it will still boot, because you are checking for a file that you created yourself.
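Written out as a standalone script, the guard in that command amounts to the following (a sketch; the paths are the ones from the compose snippet above):

#!/bin/sh
# Guard sketch: the marker file is created once in the root of the real volume,
# so its absence means the bind mount did not happen for this container start.
if [ -f /shared/.docker-volume-check ]; then
  exec /sbin/my_init -- /scripts/start.py
else
  echo "volume not mounted, not starting"
  sleep 60    # give "restart: always" a pause before the next attempt
  exit 1
fi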
I'm trying to deploy to Docker on my VPS every time a new commit is made to my project in GitLab, but I'm having an issue doing that.
I tried to install sshpass and then scp the folder and files, but it says:
sshpass: Failed to run command: No such file or directory
I'm trying to take the folder and files from the build stage so I don't have to build my app again.
Here's my .gitlab-ci.yml file:
image: node:9.6.1

cache:
  paths:
    - node_modules/
    - build/
    - docker-compose.yml
    - Dockerfile
    - nginx.conf

stages:
  - build
  - test
  - dockerize

build-stage:
  stage: build
  script:
    - npm install
    - CI=false npm run build
  artifacts:
    paths:
      - build/
      - docker-compose.yml
      - nginx.conf

test-stage:
  stage: test
  script:
    - npm install
    - CI=false npm run test

dockerize-stage:
  stage: dockerize
  image: tmaier/docker-compose:latest
  services:
    - docker:dind
  dependencies:
    - build-stage
  tags:
    - docker
  script:
    - apk update && apk add sshpass
    - sshpass -V
    - export SSHPASS=$USER_PASS
    - ls -la
    - sshpass -e ssh -o stricthostkeychecking=no root@ip:/home mkdir new-super-viva
    - sshpass -e scp -o stricthostkeychecking=no -r build user@ip:/home/new-folder
    - sshpass -e scp -o stricthostkeychecking=no -r docker-compose.yml user@ip:/home/new-folder
    - sshpass -e scp -o stricthostkeychecking=no -r nginx.conf user@ip:/home/new-folder
    - sshpass -e ssh -o stricthostkeychecking=no user@ip:/home/new-folder docker-compose up --build
Here's the actual output from Gitlab CI:
$ sshpass -V
sshpass 1.06
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.
Using "assword" as the default password prompt indicator.
$ export SSHPASS=$USER_PASS
$ ls -la
total 912
drwxrwxrwx 7 root root 4096 Apr 2 13:24 .
drwxrwxrwx 4 root root 4096 Apr 2 13:24 ..
-rw-rw-rw- 1 root root 327 Apr 2 13:24 .env
drwxrwxrwx 6 root root 4096 Apr 2 13:24 .git
-rw-rw-rw- 1 root root 329 Apr 2 13:24 .gitignore
-rw-rw-rw- 1 root root 1251 Apr 2 13:24 .gitlab-ci.yml
-rw-rw-rw- 1 root root 311 Apr 2 11:57 Dockerfile
-rw-rw-rw- 1 root root 2881 Apr 2 13:24 README.md
drwxrwxrwx 5 root root 4096 Apr 2 13:20 build
-rw-rw-rw- 1 root root 340 Apr 2 13:24 build.sh
-rw-rw-rw- 1 root root 282 Apr 2 11:57 docker-compose.yml
-rw-rw-rw- 1 root root 1385 Apr 2 11:57 nginx.conf
drwxr-xr-x 1191 root root 36864 Apr 2 13:22 node_modules
-rw-rw-rw- 1 root root 765929 Apr 2 13:24 package-lock.json
-rw-rw-rw- 1 root root 1738 Apr 2 13:24 package.json
drwxrwxrwx 4 root root 4096 Apr 2 13:24 public
drwxrwxrwx 10 root root 4096 Apr 2 13:24 src
$ sshpass -e ssh -o stringhostkeychecking=no user@ip:/home mkdir new-folder
sshpass: Failed to run command: No such file or directory
ERROR: Job failed: exit code 3
Is there any way that I can copy the build folder, docker-compose.yml and nginx.conf from build-stage to dockerize-stage and then send them to the VPS with sshpass? Even creating the folder with mkdir doesn't work. I also tried creating the folder manually and removing that command from .gitlab-ci.yml, but I still get the same output.
Just so you know, I added USER_PASS as an environment variable at
https://gitlab.com/user/project/settings/ci_cd and left it non-protected.
I am learning Docker, which is completely new to me. I was already able to create a jboss/wildfly image and then start JBoss with my application using this Dockerfile:
FROM jboss/wildfly
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
ADD mywebapp-web/target/mywebapp-1.0.war /opt/jboss/wildfly/standalone/deployments/mywebapp-1.0.war
Now I would like to add support for a MySQL database by adding a datasource to the standalone configuration, along with the MySQL connector. For that I am following this example:
https://github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7
Following are my Dockerfile and my execute.sh script.
Dockerfile:
FROM jboss/wildfly:latest
ADD customization /opt/jboss/wildfly/customization/
CMD ["/opt/jboss/wildfly/customization/execute.sh"]
execute script code:
#!/bin/bash
# Usage: execute.sh [WildFly mode] [configuration file]
#
# The default mode is 'standalone' and default configuration is based on the
# mode. It can be 'standalone.xml' or 'domain.xml'.
echo "=> Executing Customization script"
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_server() {
  until `$JBOSS_CLI -c ":read-attribute(name=server-state)" 2> /dev/null | grep -q running`; do
    sleep 1
  done
}
echo "=> Starting WildFly server"
echo "JBOSS_HOME : " $JBOSS_HOME
echo "JBOSS_CLI : " $JBOSS_CLI
echo "JBOSS_MODE : " $JBOSS_MODE
echo "JBOSS_CONFIG: " $JBOSS_CONFIG
echo $JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
echo "=> Waiting for the server to boot"
wait_for_server
echo "=> Executing the commands"
$JBOSS_CLI -c --file=`dirname "$0"`/commands.cli
# Add MySQL module
module add --name=com.mysql --resources=/opt/jboss/wildfly/customization/mysql-connector-java-5.1.39-bin.jar --dependencies=javax.api,javax.transaction.api
# Add MySQL driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
# Deploy the WAR
#cp /opt/jboss/wildfly/customization/leadservice-1.0.war $JBOSS_HOME/$JBOSS_MODE/deployments/leadservice-1.0.war
echo "=> Shutting down WildFly"
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
echo "=> Restarting WildFly"
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG
But I get an error when I run the image, complaining that a file or directory is not found:
Building Image
$ docker build -t mpssantos/leadservice:latest .
Sending build context to Docker daemon 19.37 MB
Step 1 : FROM jboss/wildfly:latest
---> b8279b641e82
Step 2 : ADD customization /opt/jboss/wildfly/customization/
---> aea03d4f2819
Removing intermediate container 0920e2cd97fd
Step 3 : CMD /opt/jboss/wildfly/customization/execute.sh
---> Running in 8a0dbcb01855
---> 10335320b89d
Removing intermediate container 8a0dbcb01855
Successfully built 10335320b89d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Running image
$ docker run mpssantos/leadservice
no such file or directory
Error response from daemon: Cannot start container 5d3357ba17afa36e81d8794f2b0cd45cc00dde955b2b2054282c4ef17dd4f265: [8] System error: no such file or directory
Can someone let me know how I can access the filesystem so that I can check which file or directory it is complaining about? Is there a better way to debug this?
I believe it is something related to the bash referenced on the first line of the script, because the following echo is never printed.
Thank you so much
I managed to ssh into the container to check what's inside.
1) SSH to the Docker machine: docker-machine ssh default
2) Checked the container id with: docker ps -a
3) Entered the container with: sudo docker exec -i -t 665b4a1e17b6 /bin/bash
4) Confirmed that the "/opt/jboss/wildfly/customization/" directory exists with the expected files
The customization dir has the following permissions and is listed like this:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 customization
drwxr-xr-x 10 jboss jboss 4096 Jun 14 00:15 standalone
and the files inside the customization dir
drwxr-xr-x 2 root root 4096 Jun 12 23:44 .
drwxr-xr-x 12 jboss jboss 4096 Jun 14 00:15 ..
-rwxr-xr-x 1 root root 1755 Jun 12 20:06 execute.sh
-rwxr-xr-x 1 root root 989497 May 4 11:11 mysql-connector-java-5.1.39-bin.jar
If I try to execute the file, I get this error:
[jboss@d68190e4f0d8 customization]$ ./execute.sh
bash: ./execute.sh: /bin/bash^M: bad interpreter: No such file or directory
Does this shed light on anything?
Thank you so much again
I found the issue: the execute.sh file had Windows line endings. I converted it to Unix line endings and it started to work.
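For anyone hitting the same thing, one way to do that conversion (an illustration; run it on the file before building the image):

# Strip the carriage returns that Windows editors add, so that the shebang
# "#!/bin/bash" no longer resolves to "/bin/bash^M".
sed -i 's/\r$//' execute.sh
# Equivalent, if the dos2unix tool is installed:
dos2unix execute.sh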
I believe execute.sh is not found. You can verify this by running the following and seeing that the result is an empty directory:
docker run mpssantos/leadservice ls -al /opt/jboss/wildfly/customization/
The reason for this is that you are doing your build on a different (virtual) machine than your local system, so it's pulling the "customization" folder from that VM. I'd run the build within the VM and place the files you want to import on that VM, where the build can find them.
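A rough sketch of that workflow (the project path inside the VM is a placeholder, not something from the original post):

# Run the build inside the docker-machine VM so that the "customization" folder
# is read from the VM's own filesystem; /path/to/project is a placeholder.
docker-machine ssh default "cd /path/to/project && docker build -t mpssantos/leadservice:latest ."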