Understanding "VOLUME" instruction in DockerFile - docker

Below is the content of my "Dockerfile"
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
# Change working dir to /usr/src/app
WORKDIR /usr/src/app
VOLUME . /usr/src/app
RUN npm install
EXPOSE 8080
CMD ["node" , "server" ]
In this file, I am expecting the VOLUME . /usr/src/app instruction to mount the contents of the present working directory on the host onto the /usr/src/app folder of the container.
Please let me know if this is the correct way to do it.

In short: No, your VOLUME instruction is not correct.
A Dockerfile's VOLUME instruction specifies one or more volumes, given container-side paths, but it does not allow the image author to specify a host path. On the host side, the volumes are created with a very long ID-like name inside the Docker root; on my machine that is /var/lib/docker/volumes.
Note: Because the autogenerated name is extremely long and makes no sense from a human's perspective, these volumes are often referred to as "unnamed" or "anonymous".
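If you want to see where such a volume lives on your own machine, docker volume inspect reports the host path (the volume name below is a placeholder for one of the long IDs shown by docker volume ls):
docker volume inspect --format '{{ .Mountpoint }}' <volume-id>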
Your example, which uses a '.' character, will not even run on my machine, no matter whether I make the dot the first or the second argument. I get this error message:
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused "open /dev/ptmx: no such file or directory"".
I know that what has been said so far is probably not very valuable to someone trying to understand VOLUME and -v, and it certainly does not provide a solution for what you are trying to accomplish. So, hopefully, the following examples will shed some more light on these issues.
Minitutorial: Specifying volumes
Given this Dockerfile:
FROM openjdk:8u131-jdk-alpine
VOLUME vol1 vol2
(For the outcome of this minitutorial, it makes no difference if we specify vol1 vol2 or /vol1 /vol2 — this is because the default working directory within a Dockerfile is /)
Build it:
docker build -t my-openjdk .
Run:
docker run --rm -it my-openjdk
Inside the container, run ls in the command line and you'll notice two directories exist: /vol1 and /vol2.
Running the container also creates two directories, or "volumes", on the host-side.
While having the container running, execute docker volume ls on the host machine and you'll see something like this (I have replaced the middle part of the name with three dots for brevity):
DRIVER VOLUME NAME
local c984...e4fc
local f670...49f0
Back in the container, execute touch /vol1/weird-ass-file (creates a blank file at said location).
This file is now available on the host machine, in one of the unnamed volumes. It took me two tries: I first looked in the first listed volume, but eventually found my file in the second one, using this command on the host machine:
sudo ls /var/lib/docker/volumes/f670...49f0/_data
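To avoid that trial and error, you can ask Docker which volume backs which container path while the container is running (the container name/ID below is a placeholder):
docker inspect -f '{{ json .Mounts }}' <container-id>
The Name and Destination fields in the output tell you which anonymous volume is mounted where.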
Similarly, you can try to delete this file on the host and it will be deleted in the container as well.
Note: The _data folder is also referred to as a "mount point".
Exit out from the container and list the volumes on the host. They are gone. We used the --rm flag when running the container and this option effectively wipes out not just the container on exit, but also the volumes.
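If you forget --rm and end up with leftover anonymous volumes, you can list and clean them up afterwards (prune asks for confirmation, so check what it will delete):
# list leftover volumes not referenced by any container
docker volume ls -qf dangling=true
# remove unused volumes
docker volume prune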
Run a new container, but specify a volume using -v:
docker run --rm -it -v /vol3 my-openjdk
This adds a third volume and the whole system ends up having three unnamed volumes. The command would have crashed had we specified only -v vol3. The argument must be an absolute path inside the container. On the host-side, the new third volume is anonymous and resides together with the other two volumes in /var/lib/docker/volumes/.
It was stated earlier that the Dockerfile cannot map to a host path, which poses a problem when trying to bring files in from the host to the container at runtime. A different -v syntax solves this problem.
Imagine I have a subfolder in my project directory ./src that I wish to sync to /src inside the container. This command does the trick:
docker run -it -v $(pwd)/src:/src my-openjdk
Both sides of the : character expect an absolute path: the left side is an absolute path on the host machine, the right side an absolute path inside the container. pwd is a command that prints the current working directory. Wrapping the command in $() runs it in a subshell and yields the absolute path to our project directory.
Putting it all together, assume we have ./src/Hello.java in our project folder on the host machine with the following contents:
public class Hello {
    public static void main(String... ignored) {
        System.out.println("Hello, World!");
    }
}
We build this Dockerfile:
FROM openjdk:8u131-jdk-alpine
WORKDIR /src
ENTRYPOINT javac Hello.java && java Hello
We run this command:
docker run -v $(pwd)/src:/src my-openjdk
This prints "Hello, World!".
The best part is that we're completely free to modify the .java file with a new message for another output on a second run - without having to rebuild the image =)
Final remarks
I am quite new to Docker, and the aforementioned "tutorial" reflects information I gathered from a 3-day command line hackathon. I am almost ashamed I haven't been able to provide links to clear English-like documentation backing up my statements, but I honestly think this is due to a lack of documentation and not personal effort. I do know the examples work as advertised using my current setup which is "Windows 10 -> Vagrant 2.0.0 -> Docker 17.09.0-ce".
The tutorial does not solve the problem "how do we specify the container's path in the Dockerfile and let the run command only specify the host path". There might be a way, I just haven't found it.
Finally, I have a gut feeling that specifying VOLUME in the Dockerfile is not just uncommon; it is probably a best practice never to use VOLUME, for two reasons. The first reason we have already identified: we cannot specify the host path - which is fine, because Dockerfiles should be agnostic to the specifics of a host machine. The second reason is that people might forget to use the --rm option when running the container: one might remember to remove the container but forget to remove the volume. And even with the best of memories, it can be a daunting task to figure out which of the anonymous volumes are safe to remove.

The official docker tutorial says:
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
In Dockerfile you can specify only the destination of a volume inside a container. e.g. /usr/src/app.
When you run a container, e.g. docker run --volume=/opt:/usr/src/app my_image, you may, but do not have to, specify its mount point (/opt) on the host machine. If you do not specify the --volume argument, then the mount point is chosen automatically, usually under /var/lib/docker/volumes/.
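For completeness, you can also use a named volume, which gives the automatically managed volume a human-readable name (my_volume and my_image are placeholder names):
docker volume create my_volume
docker run --volume=my_volume:/usr/src/app my_image
docker volume inspect my_volume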

Specifying a VOLUME line in a Dockerfile configures a bit of metadata on your image, but how that metadata is used is important.
First, what did these two lines do:
WORKDIR /usr/src/app
VOLUME . /usr/src/app
The WORKDIR line creates the directory if it doesn't exist and updates the image metadata so that relative paths, and the current directory for commands like RUN, resolve to that location. The VOLUME line specifies two volumes: one is the relative path . and the other is /usr/src/app, and both happen to be the same directory. Most often the VOLUME line contains only a single directory, but it can contain multiple, as you've done, or it can be a JSON-formatted array.
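For reference, both notations are valid Dockerfile syntax; the paths here are only placeholders:
# plain form: one or more space-separated container paths
VOLUME /usr/src/app /var/log/app
# JSON (array) form, equivalent for a single path
VOLUME ["/usr/src/app"]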
You cannot specify a volume source in the Dockerfile: A common source of confusion when specifying volumes in a Dockerfile is trying to match the runtime syntax of a source and destination at image build time; this will not work. The Dockerfile can only specify the destination of the volume. It would be a trivial security exploit if image authors could define the source of a volume: someone could update a common image on Docker Hub to mount the root directory into the container, then launch a background process from the entrypoint that adds logins to /etc/passwd, configures systemd to launch a bitcoin miner on the next reboot, or searches the filesystem for credit cards, SSNs, and private keys to send off to a remote site.
What does the VOLUME line do? As mentioned, it sets some image metadata to say a directory inside the image is a volume. How is this metadata used? Every time you create a container from this image, docker will force that directory to be a volume. If you do not provide a volume in your run command, or compose file, the only option for docker is to create an anonymous volume. This is a local named volume with a long unique id for the name and no other indication of why it was created or what data it contains (anonymous volumes are where data goes to get lost). If you override the volume, pointing to a named or host volume, your data will go there instead.
VOLUME breaks things: You cannot disable a volume once it is defined in a Dockerfile. More importantly, with the classic builder, the RUN command in docker is implemented with temporary containers. Those temporary containers get a temporary anonymous volume, which is initialized with the contents of your image. Any writes inside the container from your RUN command are made to that volume. When the RUN command finishes, changes to the image are saved, and changes to the anonymous volume are discarded. Because of this, I strongly recommend against defining a VOLUME inside the Dockerfile. It results in unexpected behavior for downstream users of your image who wish to extend the image with initial data in the volume location.
How should you specify a volume? To specify where you want to include volumes with your image, provide a docker-compose.yml. Users can modify that to adjust the volume location to their local environment, and it captures other runtime settings like publishing ports and networking.
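A minimal sketch of such a compose file (the image name, port, and volume name are assumptions, not taken from the question):
version: "3.8"
services:
  app:
    image: my-image
    ports:
      - "8080:8080"
    volumes:
      # named volume; a user can swap this for a bind mount like ./src:/usr/src/app
      - app-data:/usr/src/app
volumes:
  app-data: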
Someone should document this! They have. Docker includes warnings on the VOLUME usage in their documentation on the Dockerfile along with advice to specify the source at runtime:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
...
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
The behavior of defining a VOLUME followed by RUN steps in a Dockerfile has changed with the introduction of buildkit. Here are two examples. First the Dockerfile:
$ cat df.vol-run
FROM busybox
WORKDIR /test
VOLUME /test
RUN echo "hello" >/test/hello.txt \
&& chown -R nobody:nobody /test
Next, building without buildkit. Note how the changes from the RUN step are lost:
$ DOCKER_BUILDKIT=0 docker build -t test-vol-run -f df.vol-run .
Sending build context to Docker daemon 23.04kB
Step 1/4 : FROM busybox
---> beae173ccac6
Step 2/4 : WORKDIR /test
---> Running in aaf2c2920ebd
Removing intermediate container aaf2c2920ebd
---> 7960bec5b546
Step 3/4 : VOLUME /test
---> Running in 9e2fbe3e594b
Removing intermediate container 9e2fbe3e594b
---> 5895ddaede1f
Step 4/4 : RUN echo "hello" >/test/hello.txt && chown -R nobody:nobody /test
---> Running in 2c6adff98c70
Removing intermediate container 2c6adff98c70
---> ef2c30f207b6
Successfully built ef2c30f207b6
Successfully tagged test-vol-run:latest
$ docker run -it test-vol-run /bin/sh
/test # ls -al
total 8
drwxr-xr-x 2 root root 4096 Mar 6 14:35 .
drwxr-xr-x 1 root root 4096 Mar 6 14:35 ..
/test # exit
And then building with buildkit. Note how the changes from the RUN step are preserved:
$ docker build -t test-vol-run -f df.vol-run .
[+] Building 0.5s (7/7) FINISHED
=> [internal] load build definition from df.vol-run 0.0s
=> => transferring dockerfile: 154B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> CACHED [1/3] FROM docker.io/library/busybox 0.0s
=> [2/3] WORKDIR /test 0.0s
=> [3/3] RUN echo "hello" >/test/hello.txt && chown -R nobody:nobody /test 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8cb3220e3593b033778f47e7a3cb7581235e4c6fa921c5d8ce1ab329ebd446b6 0.0s
=> => naming to docker.io/library/test-vol-run 0.0s
$ docker run -it test-vol-run /bin/sh
/test # ls -al
total 12
drwxr-xr-x 2 nobody nobody 4096 Mar 6 14:34 .
drwxr-xr-x 1 root root 4096 Mar 6 14:34 ..
-rw-r--r-- 1 nobody nobody 6 Mar 6 14:34 hello.txt
/test # exit

To better understand the VOLUME instruction in a Dockerfile, let's look at the typical volume usage in the official MySQL Dockerfile.
VOLUME /var/lib/mysql
Reference:
https://github.com/docker-library/mysql/blob/3362baccb4352bcf0022014f67c1ec7e6808b8c5/8.0/Dockerfile
/var/lib/mysql is the default location where MySQL stores its data files.
When you run a container for test purposes only, you may choose not to specify a mount point, e.g.
docker run mysql:8
then the mysql container instance will use the default mount path, which is specified by the VOLUME instruction in the Dockerfile. The volume is created with a very long ID-like name inside the Docker root; this is called an "unnamed" or "anonymous" volume, and it lives on the underlying host system under /var/lib/docker/volumes, e.g.
/var/lib/docker/volumes/320752e0e70d1590e905b02d484c22689e69adcbd764a69e39b17bc330b984e4
This is very convenient for quick tests, since there is no need to specify a mount point, yet you still get the best performance by using a volume for the data store instead of the container layer.
For formal use, you will need to specify the mount path by using a named volume or a bind mount, e.g.
docker run -v /my/own/datadir:/var/lib/mysql mysql:8
The command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container. The data directory /my/own/datadir won't be automatically deleted, even if the container is deleted.
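If you prefer a named volume over a bind mount, the syntax is the same; mysql-data is an arbitrary name, and the root-password environment variable is just the example value the image's usage docs use:
docker run -e MYSQL_ROOT_PASSWORD=my-secret-pw -v mysql-data:/var/lib/mysql mysql:8
docker volume inspect mysql-data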
Usage of the mysql official image (Please check the "Where to Store Data" section):
Reference:
https://hub.docker.com/_/mysql/

The VOLUME command in a Dockerfile is quite legit, totally conventional, absolutely fine to use, and not deprecated in any way. You just need to understand it.
We use it to point at directories that the app in the container will write to a lot. We don't use VOLUME just because we want to share something between host and container, like a config file.
The command simply needs one param: a path to a folder (relative to WORKDIR, if set) within the container. Docker will then create a volume in its graph (/var/lib/docker) and mount it to that folder in the container. Now the container has somewhere to write to with high performance. Without the VOLUME command, writes to the specified folder would be much slower, because the container would be using its copy-on-write strategy in the container layer itself. The copy-on-write strategy is a main reason why volumes exist.
If you mount over the folder specified by the VOLUME command, that VOLUME never takes effect, because VOLUME is only applied when the container starts, kind of like ENV.
Basically, with the VOLUME command you get performance without externally mounting any volumes. Data will also persist across container runs without any external mounts. Then, when you are ready, simply mount something over it.
Some good example use cases:
- logs
- temp folders
Some bad use cases:
- static files
- configs
- code
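A sketch of that pattern; the base image, paths, and start command are assumptions for illustration, not something from the question:
FROM node:18-alpine
WORKDIR /usr/src/app
# code and config are baked into the image layers
COPY . .
RUN npm install
# heavy-write locations get volumes for write performance and persistence
VOLUME /usr/src/app/logs /tmp/appcache
CMD ["node", "server.js"]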

I don't consider the use of VOLUME good in any case, except if you are creating an image for yourself and no one else is going to use it.
I was negatively impacted by VOLUME declarations in base images that I extended, and only learned about the problem after the image was already running. For example, wordpress declares the /var/www/html folder as a VOLUME, which means any files added or changed during the build stage aren't taken into account, and live changes persist even if you aren't aware of it. There is an ugly workaround of defining the web directory somewhere else, but that's just a bad solution to a problem with a much simpler fix: just remove the VOLUME directive.
You can achieve the intent of VOLUME easily using the -v option; this not only makes it clear what the volumes of the container will be (without having to look at the Dockerfile and its parent Dockerfiles), but it also gives the consumer the option to use the volume or not.
Using VOLUME is also bad for the following reasons, as stated in this answer:
However, the VOLUME instruction does come at a cost.
Users might not be aware of the unnamed volumes being created, and continuing to take up storage space on their Docker host after containers are removed.
There is no way to remove a volume declared in a Dockerfile. Downstream images cannot add data to paths where volumes exist.
The latter issue results in problems like these.
How to “undeclare” volumes in docker image?
GitLab on Docker: how to persist user data between deployments?
Having the option to undeclare a volume would help, but only if you know the volumes defined in the dockerfile that generated the image (and the parent dockerfiles!). Furthermore, a VOLUME could be added in newer versions of a Dockerfile and break things unexpectedly for the consumers of the image.
Another good explanation (about the oracle image having VOLUME, which was removed): https://github.com/oracle/docker-images/issues/640#issuecomment-412647328
More cases in which VOLUME broke stuff for people:
https://github.com/datastax/docker-images/issues/31
https://github.com/docker-library/wordpress/issues/232
https://github.com/docker-library/ghost/issues/195
https://github.com/samos123/docker-drupal/issues/10
A pull request to add options to reset properties of the parent image (including VOLUME) was closed and is being discussed here (you can see several cases of people adversely affected by volumes defined in Dockerfiles); it has a comment with a good explanation against VOLUME:
Using VOLUME in the Dockerfile is worthless. If a user needs persistence, they will be sure to provide a volume mapping when running the specified container. It was very hard to track down that my issue of not being able to set a directory's ownership (/var/lib/influxdb) was due to the VOLUME declaration in InfluxDB's Dockerfile. Without an UNVOLUME type of option, or getting rid of it altogether, I am unable to change anything related to the specified folder. This is less than ideal, especially when you are security-aware and desire to specify a certain UID the image should be run as, in order to avoid a random user, with more permissions than necessary, running software on your host.
The only good thing I can see about VOLUME is its documentation value, and I would consider it good if that were all it did (without any side effects).
Update (2021-10-19)
One more related issue with the mysql official image: https://github.com/docker-library/mysql/issues/255
Update (2022-01-26)
I found a good article explaining about the issues with VOLUME. It's already several years old, but the same issues remain:
https://boxboat.com/2017/01/23/volumes-and-dockerfiles-dont-mix/
TL;DR
I consider that the best use of VOLUME is to be deprecated.

Although this is a very old post, I still suggest checking out the latest official Docker docs if you are confused about volumes vs. bind mounts.
Bind mounts have been around since the early days of Docker, but they are not a perfect design either (for example, "Bind mounts allow access to sensitive files"), and from the docs you can see that Docker prefers you to use volumes rather than bind mounts.
You can get good use cases for volumes from here
Reference to
docker volume docs
docker storage overview

Related

Combining VOLUME + docker run -v

I was looking for an explanation on the VOLUME entry when writing a Dockerfile and came across this statement
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it on runtime docker run -v [host-dir:]container-dir.
combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
But I'm having a hard time understanding this line:
...combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
For example, let's say I have a config file on my host machine and I run a container based off the image I made with the Dockerfile I wrote. Will it copy the config file into the volume I declared in my VOLUME entry?
Would it be something like (pseudocode)
#dockerfile
From Ubuntu
Run apt-get update
Run apt-get install mysql
Volume . /etc/mysql/conf.d
Cmd systemcl start MySQL
And when I run it
docker run -it -v /path/to/config/file: ubuntu_based_image
Is this what they mean?
You probably don't want VOLUME in your Dockerfile. It's not necessary to mount files or directories at runtime, and it has confusing side effects like making subsequent RUN commands silently lose state.
If an image does have a VOLUME, and you don't mount anything else there when you start the container, Docker will create an anonymous volume and mount it for you. This can result in space leaks if you don't clean these volumes up.
You can use a docker run -v option on any container directory regardless of whether or not it's declared as a VOLUME.
If you docker run -v /host/path:/container/path, the two directories are actually the same; nothing is copied, and writes to one are (supposed to be) immediately visible on the other.
docker run -v /host/path:/container/path bind mounts aren't visible in /var/lib/docker at all.
You shouldn't usually be looking at content in /var/lib/docker (and can't if you're not on a native-Linux host). If you need to access the volume file content directly, use a bind mount rather than a named or anonymous volume.
Bind mounts like you've shown are appropriate for injecting config files into containers, and for reading log files back out. Named volumes are appropriate for stateful applications' storage, like the data for a MySQL database. Neither type of volume is appropriate for code or libraries; build these directly into Docker images instead.
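Putting those guidelines together in a single (hypothetical) command: the config file is bind-mounted read-only, the database data lives in a named volume, and the code stays inside the image; the file name, volume name, and password value are placeholders:
docker run -d \
  -v "$PWD/my.cnf:/etc/mysql/conf.d/my.cnf:ro" \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:8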

Docker volume creates malfunction in container behaviour

I am trying to create a volume for a directory in a Docker container (Confluence).
https://hub.docker.com/r/atlassian/confluence-server/
To fix a bug with Postgres, I have to add driver files manually to the container. The location inside the container is:
/opt/atlassian/confluence/confluence/WEB-INF/lib
After creating the volume, I wanted to add the newer driver to the directory. So, inside my docker-compose.yaml, I mapped a volume to the directory:
- ./data/driverfiles:/opt/atlassian/confluence/confluence/WEB-INF/lib
Volume and directory get created after calling docker-compose up and everything seems fine.
The problem is that the volume remains empty, and when starting an interactive shell inside the container, the directory, once filled with thousands of files, is empty too. When removing the volume from the docker-compose.yaml, the directory is full of files again.
Objectively, it looks like mapping the volume to this directory somehow prevents the container from populating it with files. What is going on here?
If you're mounting a host directory over a container directory, at container startup time, this is always a one-way operation: whatever is in the host directory (if anything) completely replaces whatever might have been in the image. Content from a container will never get copied into a host directory unless the image startup code explicitly does this for you.
If you need to modify a configuration file in the container, you need to first copy it out of the image; for example
# with the volumes: mount deleted
docker-compose run confluence \
  sh -c 'cd /opt/atlassian/confluence/confluence/WEB-INF/lib && tar cf - .' \
  | tar xf - .
That particular invocation will copy the entire directory out to the host, where you can mount it in again.
Note that if there's an updated image that changes the contents of this lib directory, the content you have on the host will always take precedence; this hides any changes that might be made in the image.
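An alternative that avoids the tar pipeline is docker cp against a temporary, never-started container; the container name tmp-confluence is arbitrary, and you may need to use the same image tag your compose file uses:
mkdir -p ./data/driverfiles
docker create --name tmp-confluence atlassian/confluence-server
docker cp tmp-confluence:/opt/atlassian/confluence/confluence/WEB-INF/lib/. ./data/driverfiles
docker rm tmp-confluence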
You might find it more reliable to build a custom image that adds the driver files you need:
FROM ...confluence:...
COPY ... /opt/atlassian/confluence/confluence/WEB-INF/lib
# Use CMD and all other options from the original image
Specify build: . (and no image:) in the docker-compose.yml file to use this Dockerfile.

How to copy from volume mapped opt to image opt folder in docker?

Assuming I have a docker image
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile ../imagefile
I can create my docker image with docker build -t my-image . and have my local localfile in the image as ../imagefile.
I can also do this interactively by
Run docker run -it --name my-container --volume=$(pwd):/opt --workdir=/opt openjdk:8-jdk-slim
Then cp localfile ../imagefile
Then exit
Then create the image by running docker commit my-container my-image
Both produce the equivalent my-image.
However, if I change my Dockerfile to below
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile imagefile
I can build the image using docker build -t my-image . However, I cannot cp localfile imagefile, because cp will only copy the file to the original host folder mapped to /opt, and not to the image's actual /opt folder.
Is there a way to still copy my file to the image's /opt folder (and not the local one), when my /opt is mapped to the local folder?
Or, to ask it another way, is there an equivalent of the COPY command that I can use while running the container interactively to create the image?
There are two important details around this question:
If you mount something into a Docker container (or for that matter on your Linux host), it hides whatever was in that directory before; you can't directly see or access it.
In Docker in particular, you can't change mounts without deleting and recreating the container, and thereby removing the container filesystem.
So on the one hand, you can't copy from a mounted volume to the container filesystem at the same location; on the other, even if you could, there's (almost) no way to see the contents of the filesystem.
(As you note, docker commit creates an image from the container filesystem, ignoring volumes, so anything written into the mounted /opt will not be captured. Using docker commit isn't really a best practice, though; building images via Dockerfiles as you've shown and using docker build is almost always preferred.)
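If you want the interactive commit workflow anyway, one workaround (a sketch, reusing the names from your question; the mount point /host-src is hypothetical) is to mount the host directory somewhere other than /opt, so the image's /opt stays visible and writable:
docker run -it --name my-container --volume=$(pwd):/host-src --workdir=/opt openjdk:8-jdk-slim
# inside the container (workdir is /opt, which is now part of the container filesystem):
cp /host-src/localfile imagefile
exit
# back on the host:
docker commit my-container my-image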
In general I've found volume mounts useful for three things, where hiding the container content is acceptable or even expected. You can use mounts to inject config files into containers (where you want to overwrite the image's copy). You can use mounts to read log files back out (where you don't care what the image started with). If you have a stateful workload like a database, you can use a mount to hold the data that needs to be persisted (so it outlives the container filesystem).

What happens when a volume links an existing populated host and container dir

I've searched the docs but nothing came up so time to test it. But for a quick future reference...
Is the host folder populated with the container folder contents?
Is it the opposite?
Are both folder contents merged? (In that case: What happens when a file with the same name is in both folders?)
Or does it produce an error? Is the error thrown on launch or is it thrown when you try to build an image with a VOLUME pointing to an existing populated folder on the container?
Also, another thing that isn't in the docs: Do I have to define the container path as a VOLUME in the Dockerfile in order to use -v against it when launching the container or can I create volumes on the fly?
When you run a container and mount a volume from the host, all you see in the container is what is on the host - the volume mount points at the host directory, so if there was anything in the directory in the image it gets bypassed.
With an image from this Dockerfile:
FROM ubuntu
WORKDIR /vol
RUN touch /vol/from-container
VOLUME /vol
When you run it without a host mount, the image contents get copied into the volume:
> docker run vol-test ls /vol
from-container 
But mount the volume from the host and you only see the host's content:
> ls $(pwd)/host
from-host
> docker run -v $(pwd)/host:/vol vol-test ls /vol
from-host
And no, you don't need the VOLUME instruction. The behaviour is the same without it.
Whenever a Docker container is created with a volume mounted on the host, e.g.:
docker run -v /path/on/host:/data container-image
Any contents that are already in /data due to the image build process are always completely discarded, and whatever is currently at /path/on/host is used in its place. (If /path/on/host does not exist, it is created as an empty directory, though I think some aspect of that behavior may currently be deprecated.)
Pre-defining a volume in the Dockerfile with VOLUME is not necessary; all VOLUME does is cause any containers run from the image to have an implicit -v /volume/path (Note lack of host mount path) argument added to their docker run command which is ignored if an explicit -v /host/path:/volume/path is used.

Is there a way to replicate pwd in a volume mount for docker in a boot2docker context?

So currently I can do docker run -v .:/usr/src/app, or even specify it in my docker-compose.yml:
web:
  volumes:
    - .:/usr/src/app
But when I attempt to define this in my Dockerfile:
VOLUME .:/usr/src/app
It doesn't mount anything.
Now I understand the complexities in that I'm using OSX and so I have to virtualize the environment to run Docker via boot2docker, and that boot2docker solves the copy issue by mounting /User to the linux machine running Docker.
The documentation wants me to be explicit, but since my explicitness would require me to name my user (in this case /User/krainboltgreene/code/krainboltgreene/blankrails) it seems non-idiomatic, as that obviously doesn't work on other people's environments.
What's the solution for this? I mean, I can technically get this all working without (as noted above the CLI and compose works fine), but it means not being able to do project specific provisioning (bower install, npm install, vulcanize, etc).
You can't specify a host directory for a volume inside a Dockerfile, because of the portability reasons you mention (not everyone will have the same directories and there are security issues regarding mounting sensitive files).
If you instead do:
VOLUME /usr/src/app
Docker will automatically set up a volume at run-time for the folder, which will be mapped to a directory under /var/lib/docker/volumes.
If you want to be able to quickly make changes during development, I would suggest using COPY in the Dockerfile, but mounting local changes over the top with a volume at run-time. This has the disadvantage that if you volume mount a folder, all the contents of that directory in the container will be hidden (rather than merged).
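A sketch of that COPY-plus-runtime-mount approach, reusing the image and paths from the Dockerfile at the top of this page (the build and run commands are assumptions):
FROM node:boron
WORKDIR /usr/src/app
# bake the code into the image so it works without any mount
COPY . /usr/src/app
RUN npm install
EXPOSE 8080
CMD ["node", "server"]
Build it with docker build -t my-app . and, during development only, run it with docker run -it -p 8080:8080 -v $(pwd):/usr/src/app my-app so local edits show up immediately; remember the caveat above that the mount hides whatever the image has at /usr/src/app, including the node_modules that npm install created there.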
The docker run -v .:/usr/src/app ... command, as well as the docker-compose definitions, are executed at runtime, whereas the Dockerfile instructions are executed at build time.
By the way the instruction in your Dockerfile is syntactically incorrect. It should be VOLUME /usr/src/app instead.
The VOLUME keyword only declares that, later at runtime, this location will be stored on a volume. So all files that you add to that location by further Dockerfile instructions or manual commits are ignored and not added to the resulting image.
At runtime, when you do not specify a volume, Docker will generate an anonymous volume for you, which is empty by default.
To have your docker-compose setup work for your colleagues, you could simply make the docker-compose configuration file part of your blankrails project folder. Everybody then runs docker-compose from within that directory, and your provided configuration will work.
EDIT:
I do not know exactly what you mean by project-specific provisioning, but if your aim is to provide default contents for the defined volume, you could do something like the following:
- Add all required project files during the Dockerfile build to a /bootstrap folder in the image.
- Instead of executing your app directly, use a start shell script for CMD.
- In that start script, check whether the volume mounted at /usr/src/app is empty or not. If it is empty, copy all of the /bootstrap contents into it.
- Afterwards, start your app from within that script in the foreground.
With that approach you can easily provide a default file set for mounted volumes. And when you re-use that volume, e.g. after a container restart, the container just works with the files that are on the volume without touching them again during startup, so modified files are persisted.
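A minimal sketch of such a start script; the file name, paths, and the final command are assumptions:
#!/bin/sh
# start.sh: seed an empty volume with default content, then run the app in the foreground
set -e
if [ -z "$(ls -A /usr/src/app 2>/dev/null)" ]; then
  cp -R /bootstrap/. /usr/src/app/
fi
exec node server.js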
