Unable to modify files in container from Dockerfile - docker

I am attempting to build an image by modifying some of the files in an existing image. However, the files are not changed by the RUN commands. My Dockerfile is:
FROM vromero/activemq-artemis
ADD . .
RUN ls
RUN whoami
# Overwrite existing password file. The existing file is invulnerable, and
# cannot be modified by docker. I have no idea why.
RUN rm -f /var/lib/artemis/etc/artemis-users.properties
RUN ls -l /var/lib/artemis/etc
RUN mv passwords.txt /var/lib/artemis/etc/artemis-users.properties
RUN cat /var/lib/artemis/etc/artemis-users.properties
RUN touch /var/lib/artemis/etc/touch-test
# Add the predefined queues
RUN sed -i.bak '/<core/r queues.xml' /var/lib/artemis/etc/broker.xml
# EOF
The base image is from the public docker repository. When I run it, I get the following output
$ docker build .
Sending build context to Docker daemon 4.608 kB
Step 0 : FROM vromero/activemq-artemis
---> 4e0f54c798cc
Step 1 : ADD . .
---> 3efde5a1fdea
Removing intermediate container c8621adc900b
Step 2 : RUN ls
---> Running in 5c5dca9449da
Dockerfile
artemis
artemis-service
passwords.txt
queues.xml
---> 22c541f98339
Removing intermediate container 5c5dca9449da
Step 3 : RUN whoami
---> Running in f11fcd2e2c5b
root
---> 15ee9aeb4c15
Removing intermediate container f11fcd2e2c5b
Step 4 : RUN rm -f /var/lib/artemis/etc/artemis-users.properties
---> Running in ab4383f0bb91
---> 10877bdb08ee
Removing intermediate container ab4383f0bb91
Step 5 : RUN ls -l /var/lib/artemis/etc
---> Running in a5669c8808e8
total 24
-rw-r--r-- 1 artemis artemis 959 Oct 4 05:40 artemis-roles.properties
-rw-r--r-- 1 artemis artemis 968 Oct 4 05:40 artemis-users.properties
-rwxrwxr-x 1 artemis artemis 1342 Oct 4 05:40 artemis.profile
-rw-r--r-- 1 artemis artemis 1302 Oct 4 05:40 bootstrap.xml
-rw-r--r-- 1 artemis artemis 4000 Oct 4 05:40 broker.xml
-rw-r--r-- 1 artemis artemis 2203 Oct 4 05:40 logging.properties
---> 02e3acc58653
Removing intermediate container a5669c8808e8
Step 6 : RUN mv passwords.txt /var/lib/artemis/etc/artemis-users.properties
---> Running in 68000aa34f6b
---> ec057d5adc67
Removing intermediate container 68000aa34f6b
Step 7 : RUN cat /var/lib/artemis/etc/artemis-users.properties
---> Running in 934a36d8c4d1
## ---------------------------------------------------------------------------
## Licensed to the Apache Software Foundation (ASF) under one or more
## contributor license agreements. See the NOTICE file distributed with
## this work for additional information regarding copyright ownership.
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
## ---------------------------------------------------------------------------
apollo=ollopaehcapa
---> ca1bad8a8903
Removing intermediate container 934a36d8c4d1
Step 8 : RUN touch /var/lib/artemis/etc/touch-test
---> Running in cb931c5cfcd1
---> 6961b4fcde75
Removing intermediate container cb931c5cfcd1
Step 9 : RUN sed -i.bak '/<core/r queues.xml' /var/lib/artemis/etc/broker.xml
---> Running in a829642b29ab
---> effd394fc02f
Removing intermediate container a829642b29ab
Successfully built effd394fc02f
The ADD . . has worked, as passwords.txt and queues.xml both show up in the ls output. whoami shows that the current user is root, so there should be no permission problems.
However, the existing files are not changed. If I run the image with bash as the start command (see below), none of the files have a current date, although the file that was mv'ed over an existing file is gone. If I paste the sed command into the shell, it does update the file.
$ docker run -it effd394fc02f bash
root@51d1cc0a94cb:/var/lib/artemis/bin# ls -l
total 16
-rw-r--r-- 1 root root 543 Oct 21 22:12 Dockerfile
-rwxrwxr-x 1 artemis artemis 3416 Oct 4 05:40 artemis
-rwxrwxr-x 1 artemis artemis 3103 Oct 4 05:40 artemis-service
-rw-r--r-- 1 root root 329 Oct 21 22:18 queues.xml
root@51d1cc0a94cb:/var/lib/artemis/bin# cd ../etc
root@51d1cc0a94cb:/var/lib/artemis/etc# ls -l
total 24
-rw-r--r-- 1 artemis artemis 959 Oct 4 05:40 artemis-roles.properties
-rw-r--r-- 1 artemis artemis 968 Oct 4 05:40 artemis-users.properties
-rwxrwxr-x 1 artemis artemis 1342 Oct 4 05:40 artemis.profile
-rw-r--r-- 1 artemis artemis 1302 Oct 4 05:40 bootstrap.xml
-rw-r--r-- 1 artemis artemis 4000 Oct 4 05:40 broker.xml
-rw-r--r-- 1 artemis artemis 2203 Oct 4 05:40 logging.properties
Why are these files not being changed by the RUN commands?

The actual problem was related to how the base image was built. If you run docker history --no-trunc vromero/activemq-artemis, you see these commands (among others):
<id> 6 weeks ago /bin/sh -c #(nop) VOLUME [/var/lib/artemis/etc] 0 B
<id> 6 weeks ago /bin/sh -c #(nop) VOLUME [/var/lib/artemis/tmp] 0 B
<id> 6 weeks ago /bin/sh -c #(nop) VOLUME [/var/lib/artemis/data] 0 B
The Dockerfile documentation for VOLUME states:
Note: If any build steps change the data within the volume after it
has been declared, those changes will be discarded.
This means that the configuration in the base image is locked.
I solved my problem by creating my own Dockerfile based on the output of the history command, without the VOLUME lines.
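If you want to check a base image for declared volumes before building on it, docker inspect shows them directly. A quick sketch, using the image from the question:
docker inspect --format '{{json .Config.Volumes}}' vromero/activemq-artemis
# prints something like {"/var/lib/artemis/data":{},"/var/lib/artemis/etc":{},"/var/lib/artemis/tmp":{}}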

Not a complete answer, but at least a clue: you don't change the entrypoint of the built image.
That means your image will still execute the one from vromero/activemq-artemis, which, according to its Dockerfile, is:
ENTRYPOINT ["/docker-entrypoint.sh"]
And docker-entrypoint.sh might reset some of your changes on docker run.
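You can confirm what will run with a quick inspect (a sketch; standard docker CLI):
docker inspect --format '{{json .Config.Entrypoint}}' vromero/activemq-artemis
# ["/docker-entrypoint.sh"]
Note that docker run <image> bash only replaces the CMD; the ENTRYPOINT still runs first, with bash handed to it as an argument.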

There are two forms of the RUN instruction in a Dockerfile.
You are using this one: RUN <command> (the command is run in a shell, /bin/sh -c; this is the shell form).
The other one is this: RUN ["executable", "param1", "param2"] (exec form)
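A quick sketch of the practical difference (an illustration, not from the question):
# shell form: /bin/sh -c expands variables and wildcards
RUN echo $HOME
# exec form: no shell is involved, so $HOME is passed literally
RUN ["echo", "$HOME"]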
Try this:
RUN ["rm", "-f", "/var/lib/artemis/etc/artemis-users.properties"]
RUN ["ls", "-l", "/var/lib/artemis/etc"]
RUN ["mv", "passwords.txt", "/var/lib/artemis/etc/artemis-users.properties"]
RUN ["cat", "/var/lib/artemis/etc/artemis-users.properties"]
RUN ["touch", "/var/lib/artemis/etc/touch-test"]
# Add the predefined queues
RUN ["sed", "-i.bak", "'/<core/r queues.xml'", "/var/lib/artemis/etc/broker.xml"]

Related

Why does docker image content differ from the container created from it?

Following is the Dockerfile for the image:
FROM jenkins/jenkins:lts-jdk11
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.2 http_request" && ls -la /var/jenkins_home
When this is built using docker build -t ireshmm/jenkins:lts-jdk11 ., the following is the output:
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM jenkins/jenkins:lts-jdk11
---> 9aee0d53624f
Step 2/3 : USER jenkins
---> Using cache
---> 49d657d24299
Step 3/3 : RUN jenkins-plugin-cli --plugins "blueocean:1.25.2 http_request" && ls -la /var/jenkins_home
---> Running in b459c4c48e3e
Done
total 20
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:49 .
drwxr-xr-x 1 root root 4096 Jan 12 15:46 ..
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:49 .cache
-rw-rw-r-- 1 jenkins root 7152 Jan 12 15:42 tini_pub.gpg
Removing intermediate container b459c4c48e3e
---> 5fd5ba428f1a
Successfully built 5fd5ba428f1a
Successfully tagged ireshmm/jenkins:lts-jdk11
When a container is created and the files listed with docker run -it --rm ireshmm/jenkins:lts-jdk11 ls -la /var/jenkins_home, the following is the output:
total 40
drwxr-xr-x 3 jenkins jenkins 4096 Jan 22 16:51 .
drwxr-xr-x 1 root root 4096 Jan 12 15:46 ..
-rw-r--r-- 1 jenkins jenkins 4683 Jan 22 16:51 copy_reference_file.log
drwxr-xr-x 2 jenkins jenkins 16384 Jan 22 16:51 plugins
-rw-rw-r-- 1 jenkins root 7152 Jan 12 15:42 tini_pub.gpg
Question: Why do the contents of /var/jenkins_home differ while building the image and inside a container created from it, given that no command is run after listing the files while building the image? How can that happen?
The jenkins/jenkins:lts-jdk11 image has an ENTRYPOINT that runs /usr/local/bin/jenkins.sh, which among other things creates the copy_reference_file.log file:
$ grep -i copy_reference /usr/local/bin/jenkins.sh
: "${COPY_REFERENCE_FILE_LOG:="${JENKINS_HOME}/copy_reference_file.log"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find "${REF}" \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
The ENTRYPOINT script runs whenever you start a container from that image (before any command you've provided on the command line).
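To verify this, you can bypass the ENTRYPOINT so jenkins.sh never runs (a sketch, using the image tag from the question):
docker run --rm -it --entrypoint ls ireshmm/jenkins:lts-jdk11 -la /var/jenkins_home
# with jenkins.sh bypassed, the listing should match what you saw at build time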

docker volume permission issue

I am trying to launch an app, deployed using WildFly 18 in a Docker container, which internally connects to the PostgreSQL database installed on my host. During container creation, I also map the container's WildFly log directory to a local "host" directory via a named volume, created using the docker volume create command.
The issue is, I get a "permission denied" error when the app runs and the container tries to create log files inside the mapped volume.
My Dockerfile contents are as below:
FROM jboss/base-jdk:8
ENV WILDFLY_VERSION 18.0.1.Final
ENV WILDFLY_SHA1=ef0372589a0f08c36b15360fe7291721a7e3f7d9
ENV JBOSS_HOME /opt/jboss/wildfly
USER root
RUN cd $HOME \
&& curl -O https://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz \
&& sha1sum wildfly-$WILDFLY_VERSION.tar.gz | grep $WILDFLY_SHA1 \
&& tar xf wildfly-$WILDFLY_VERSION.tar.gz \
&& mv $HOME/wildfly-$WILDFLY_VERSION $JBOSS_HOME \
&& rm wildfly-$WILDFLY_VERSION.tar.gz
COPY ./bin $JBOSS_HOME/bin
COPY ./standalone/configuration/* $JBOSS_HOME/standalone/configuration/
COPY ./modules/com $JBOSS_HOME/modules/com
COPY ./modules/system/layers/base/org/ $JBOSS_HOME/modules/system/layers/base/org/
COPY ./standalone/waffle_resource $JBOSS_HOME/standalone/waffle_resource
COPY ./standalone/waffle_resource/waffle.ear $JBOSS_HOME/standalone/deployments/
COPY ./standalone/waffle_resource/waffle-war.ear $JBOSS_HOME/standalone/deployments/
RUN chown -R jboss:jboss ${JBOSS_HOME} && chmod -R g+rw ${JBOSS_HOME}
ENV LAUNCH_JBOSS_IN_BACKGROUND true
USER jboss
EXPOSE 8989 9990
WORKDIR $JBOSS_HOME/bin
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
As you can see above, I am using the jboss user inside the container to kick off WildFly.
The commands used to create the image, run the container, and create the volume are as below:
docker image build -t viaduct/wildfly .
docker volume create viaduct-wildfly-logs
docker run -d -v viaduct-wildfly-logs:/opt/jboss/wildfly/standalone/log --network=host \
-e "DB_DBNAME=dbname" \
-e "DB_PORT=5432" \
-e "DB_USER=xyz" \
-e "DB_PASS=" \
-e "DB_HOST=127.0.0.1" \
--name petes viaduct/wildfly
I verified the permissions within the container and in the local "host" directory created by the docker volume create command. Also, it's worth noting that I am running WildFly as the jboss user.
The container's permissions are as below:
[jboss@localhost /]$ ll /opt/jboss/wildfly/standalone/
total 4
drwxrwxr-x 1 jboss jboss 62 Sep 18 00:24 configuration
drwxr-xr-x 6 jboss jboss 84 Sep 18 00:23 data
drwxrwxr-x 1 jboss jboss 64 Sep 18 00:24 deployments
drwxrwxr-x 1 jboss jboss 17 Nov 15 2019 lib
drwxr-xr-x 2 root root 6 Sep 17 23:48 log    <-- owned by root, not jboss
drwxrwxr-x 1 jboss jboss 4096 Sep 18 00:24 tmp
drwxrwxr-x 1 jboss jboss 98 Sep 18 00:23 waffle_resource
[jboss@localhost /]$ exit
and the local volume permissions are as below:
[root@localhost xyz]# cd /var/lib/docker/volumes/
[root@localhost volumes]# ll
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
The docker volume create command creates a directory on my local machine as below:
/var/lib/docker/volumes/viaduct-wildfly-logs/_data
and the permissions for each subdirectory by default are as follows, which is presumably maintained this way for security reasons:
drwx--x--x 14 root root 182 Sep 14 09:32 docker
drwx------ 7 root root 285 Sep 18 11:48 volumes
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
To start with, please let me know whether my strategy is correct. Secondly, what is the best way to fix the permission issue?
You need to create a user with the same UID/GID, or give that UID/GID permission on the host folder backing this volume. The server is run as the jboss user, which has its UID/GID set to 1000 (see the image documentation).
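A minimal sketch of that fix, assuming the volume path shown in the question and the default jboss UID/GID of 1000:
# on the host, as root: hand the volume's backing directory to uid/gid 1000,
# which is the jboss user inside the container
chown -R 1000:1000 /var/lib/docker/volumes/viaduct-wildfly-logs/_data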

Using the same Docker image file permissions differ from machine to machine

I have a problem that I cannot grasp at all. I'm running my Jenkins pipeline in a Docker container on the master node. Now I have added another node and want to run the pipeline there as well.
However, using the same image I get different file permissions in the container:
### master
> docker image ls node:10.20.1-stretch
REPOSITORY TAG IMAGE ID CREATED SIZE
node 10.20.1-stretch c5f1efe092a0 13 days ago 912MB
> docker run --rm -ti -u 1000:1000 node:10.20.1-stretch ls -la /home/node
total 20
drwxr-xr-x 2 1000 1000 4096 May 15 20:31 .
drwxr-xr-x 3 0 0 4096 May 15 20:31 ..
-rw-r--r-- 1 1000 1000 220 May 15 2017 .bash_logout
-rw-r--r-- 1 1000 1000 3526 May 15 2017 .bashrc
-rw-r--r-- 1 1000 1000 675 May 15 2017 .profile
### node 1
> docker image ls node:10.20.1-stretch
REPOSITORY TAG IMAGE ID CREATED SIZE
node 10.20.1-stretch c5f1efe092a0 13 days ago 912MB
> docker run --rm -ti -u 1000:1000 node:10.20.1-stretch ls -la /home/node
total 20
drwxr-xr-x 2 0 0 4096 May 26 05:42 .
drwxr-xr-x 1 0 0 4096 May 26 05:42 ..
-rw-r--r-- 1 0 0 220 May 26 05:42 .bash_logout
-rw-r--r-- 1 0 0 3526 May 26 05:42 .bashrc
-rw-r--r-- 1 0 0 675 May 26 05:42 .profile
I observed a similar behavior for the /tmp directory, which has chmod 1777 on master and 1755 on node 1.
# master
> docker -v
Docker version 19.03.9, build 9d988398e7
> dockerd -v
Docker version 19.03.9, build 9d988398e7
# node 1
> docker -v
Docker version 19.03.10, build 9424aeaee9
> dockerd -v
Docker version 19.03.10, build 9424aeaee9
I assume the wrong behavior is on node 1, as the /home/node directory and all of its children are owned by root:root there, while the same directory is owned by node:node on the master. However, I already upgraded the Docker version on node 1 from 19.03.8 to 19.03.10 and nothing changed.
Is there anything I don't understand about Docker containers? I have been working with them for a while, but have never observed such behavior.
I changed the storage driver from overlay2 to aufs. Now I have the correct permissions.
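To compare storage drivers between machines, the docker CLI reports the active one directly:
docker info --format '{{.Driver}}'
# prints the storage driver in use, e.g. overlay2 or aufs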

Dockerized terraform and tfstate

I run Terraform from a Docker container. The image is just a copy of the official Terraform image, nothing fancy:
FROM golang:alpine
MAINTAINER "HashiCorp Terraform Team <terraform#hashicorp.com>"
ENV TERRAFORM_VERSION=0.11.8
RUN apk add --update git bash openssh
ENV TF_DEV=true
ENV TF_RELEASE=true
WORKDIR $GOPATH/src/github.com/hashicorp/terraform
RUN git clone https://github.com/hashicorp/terraform.git ./ && \
git checkout v${TERRAFORM_VERSION} && \
/bin/bash scripts/build.sh
RUN rm -rf /var/lib/apt/lists/*
WORKDIR $GOPATH
ENTRYPOINT ["terraform"]
I invoke it via this alias:
alias terraform='docker run -i -t -v ~/.aws:/root/.aws:ro -v $(pwd):/app -w /app/ rubendob/terraform:0.11.8'
Then I have the following folder structure, and everything was working fine until, oops, I decided to run some Terraform commands in the dev folder.
ls -ls tf
total 0
0 drwxr-xr-x 3 ruben.ortiz staff 96 15 sep 23:43 dev
0 drwxr-xr-x 6 ruben.ortiz staff 192 11 sep 19:53 modules
0 drwxr-xr-x 4 ruben.ortiz staff 128 15 sep 12:39 prod
I ran the container like:
terraform plan tf/prod/
It worked fine, but the container then created the .terraform folder with the tfstate and other files.
So if I want to run the same command against the dev environment, I simply cannot, because it detects the previous .terraform folder:
ls -lisah tf/.terraform/
total 8
901814 0 drwxr-xr-x 5 ruben.ortiz staff 160B 15 sep 12:38 .
885805 0 drwxr-xr-x 6 ruben.ortiz staff 192B 15 sep 23:54 ..
901815 0 drwxr-xr-x 15 ruben.ortiz staff 480B 16 sep 00:05 modules
901821 0 drwxr-xr-x 3 ruben.ortiz staff 96B 10 sep 23:02 plugins
901819 8 -rw-r--r-- 1 ruben.ortiz staff 567B 16 sep 18:43 terraform.tfstate
And if I cd into the dev folder, the container cannot see the shared modules folder, since I only mount the current directory as a volume.
How do you work around this?
Thanks!
I have to agree with the comments here. I would encourage you to re-evaluate the benefits you are gaining from this process.
That being said, the reason it's causing conflicts is that you are trying to invoke two different workspaces from a common directory. You can avoid this by overriding the working directory when you enter the container (see https://docs.docker.com/engine/reference/run/#workdir) or by simply changing directory to the correct context.
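For example, a sketch based on the alias from the question (the tf/prod and tf/dev paths come from the directory listing above):
docker run -i -t -v ~/.aws:/root/.aws:ro -v $(pwd):/app -w /app/tf/prod rubendob/terraform:0.11.8 plan
docker run -i -t -v ~/.aws:/root/.aws:ro -v $(pwd):/app -w /app/tf/dev rubendob/terraform:0.11.8 plan
Each environment then gets its own working directory and its own .terraform directory, while the shared modules folder is still visible inside the /app mount.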
I would also suggest you try an alternative to managing environments using different workspaces.
Don’t use folders to manage your IaC environments. This leads to drift as there’s no common template for your infrastructure.
Do use a single workspace and variables to control environment specifications.
Example: Write your modules so that when you change the environment variable (var.stage is popular) the plan alters to fit your requirements. Typically the environments should vary as little as possible with quantity, exposure and capacity usually being the variable configurations. Dev might deploy 1 VM with 1 core and 1GB RAM in private topology but production may be 3 VMs with 2 cores and 4GB RAM with additional public topology. You can of course have more variation: dev may run database process on the same server as the application to save cost but production may have a dedicated DB instance. All of this can be managed by changing a single variable, ternary statements and interpolation.
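As a sketch of that workflow (the variable name stage and the values are illustrative):
# one workspace, one configuration; only the variable value changes
terraform plan -var 'stage=dev'
terraform plan -var 'stage=prod'
Inside the configuration, counts, sizes and topology are then derived from var.stage with ternary expressions.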

mkdir .ssh in a Dockerfile, folder is not there?

I want my Dockerfile to mkdir .ssh/, but the directory is not there afterwards. Why not?
FROM jenkinsci/jnlp-slave
MAINTAINER Johnny5 isAlive <johnny5@hotmail.com>
USER root
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update
RUN apt-get install unzip git curl vim -y
USER jenkins
RUN mkdir -p /home/jenkins/.ssh && touch /home/jenkins/.ssh/aFile
...building...
Looks fine?
Step 12 : RUN mkdir -p /home/jenkins/.ssh && touch /home/jenkins/.ssh/aFile
---> Running in ca19a679580d
---> 5980df7db482
Removing intermediate container ca19a679580d
Successfully built 5980df7db482
Running the image and looking around, the .ssh/ folder and the aFile inside it are not there:
$ docker run -it -u 0 --entrypoint /bin/bash 5980df7db482
root@4aa40a18baf2:~# pwd
/home/jenkins
root@4aa40a18baf2:~# ls -al
total 24
drwxr-xr-x 3 jenkins jenkins 4096 Oct 17 23:17 .
drwxr-xr-x 4 root root 4096 Sep 14 08:50 ..
-rw-r--r-- 1 jenkins jenkins 220 Nov 12 2014 .bash_logout
-rw-r--r-- 1 jenkins jenkins 3515 Nov 12 2014 .bashrc
-rw-r--r-- 1 jenkins jenkins 675 Nov 12 2014 .profile
drwxr-xr-x 2 jenkins jenkins 4096 Sep 14 08:50 .tmp
root@4aa40a18baf2:~#
If I pull the parent image, jenkinsci/jnlp-slave, and inspect it with docker inspect jenkinsci/jnlp-slave, I can see that it already has a volume defined at /home/jenkins:
[
{
...
"ContainerConfig": {
...
"Volumes": {
"/home/jenkins": {}
},
...
}
]
This means that during each build step, any changes you make to that location won't be committed to your new layer.
Here's a simplified version of your Dockerfile to highlight what's going on:
FROM jenkinsci/jnlp-slave
RUN mkdir -p /home/jenkins/.ssh
Now, let's build it with the following command: docker build --no-cache --rm=false -t jns .
Sending build context to Docker daemon 2.56 kB
Step 1 : FROM jenkinsci/jnlp-slave
---> d7731d944ad7
Step 2 : RUN mkdir -p /home/jenkins/.ssh
---> Running in 520a8e2f7cae
---> 962189878d5e
Successfully built 962189878d5e
The --no-cache option makes the command easier to work with on repeat invocations. The --rm=false will cause the builder to not remove the containers created for each step.
In this case, the builder ran Step 2 in container 520a8e2f7cae on my system. I can now do a docker inspect 520a8e2f7cae and see the actual container used for this step. Specifically, I'm curious about the Mounts section:
[
{
...
"Mounts": [
{
"Name": "e34fd82bd190f21dbd63b5cf70167a16674cd00d95fdc6159314c25c6d08e10e",
"Source": "/var/lib/docker/volumes/e34fd82bd190f21dbd63b5cf70167a16674cd00d95fdc6159314c25c6d08e10e/_data",
"Destination": "/home/jenkins",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
...
}
]
I see that there's an anonymous volume with id e34fd82bd190f21dbd63b5cf70167a16674cd00d95fdc6159314c25c6d08e10e for /home/jenkins.
I can inspect the contents of that volume like this:
$ docker run --rm -v e34fd82bd190f21dbd63b5cf70167a16674cd00d95fdc6159314c25c6d08e10e:/volume alpine ls -lah /volume
total 28
drwxr-xr-x 4 10000 10000 4.0K Oct 18 02:49 .
drwxr-xr-x 25 root root 4.0K Oct 18 02:55 ..
-rw-r--r-- 1 10000 10000 220 Nov 12 2014 .bash_logout
-rw-r--r-- 1 10000 10000 3.4K Nov 12 2014 .bashrc
-rw-r--r-- 1 10000 10000 675 Nov 12 2014 .profile
drwxr-xr-x 2 10000 10000 4.0K Oct 18 02:49 .ssh
drwxr-xr-x 2 10000 10000 4.0K Sep 14 08:50 .tmp
The .ssh directory created in the RUN step is in this volume. Since volumes aren't part of the container's write layer, it won't get committed. I can confirm this by doing a docker diff on this container:
docker diff 520a8e2f7cae
There is no output, showing no changes to the container's filesystem, which is why it doesn't come forward into this layer of the image.
The other contents at this location are files in the parent image that were committed before the VOLUME directive that made /home/jenkins into a volume.
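One common workaround is to create the directory at container start, after the anonymous volume has been mounted, instead of at build time. A sketch; it assumes the parent image's original entrypoint is jenkins-slave, which you should verify with docker inspect as above:
FROM jenkinsci/jnlp-slave
USER jenkins
# hypothetical wrapper: create .ssh at runtime, then hand off to the original entrypoint
ENTRYPOINT ["/bin/sh", "-c", "mkdir -p /home/jenkins/.ssh && exec jenkins-slave \"$@\"", "--"]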
