Assets from wwwroot cannot be found in Docker image - docker

I have got a Linux VM Docker image up and running, but I have encountered one difficulty.
All assets that were in my wwwroot folder cannot be found:
Failed to load resource: the server responded with a status of 404 (Not Found)
I have included
"webroot": "wwwroot"
In the project.json file, but that doesn't fix the problem. One more thing: when running from VS 2015 (on IIS Express) everything works - is there something that I should include in the Dockerfile as well?
EDIT:
I added VOLUME to the Dockerfile but that did not help:
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]
VOLUME ["/wwwroot"]
EXPOSE 5004
ENTRYPOINT ["k", "kestrel"]

Are you working through the example here: asp? I don't know much about ASP, but I think you are pretty close. First, I don't think you need to modify the Dockerfile. You can always mount a volume; the VOLUME keyword just declares it as necessary. But you do need to modify your project.json file like you have shown, with one difference:
"webroot": "/webroot"
I am assuming that the name is "webroot" and the directory to look in (for the project) is "/webroot". Then, build it, like the example shows:
docker build -t myapp .
So, when you run this, do:
docker run -t -v $(pwd)/webroot:/webroot -d -p 80:5004 myapp
What this docker run command does is take your webroot directory from the current directory ($(pwd)) and mount it in the container, calling that mount /webroot. In other words, your container must reference /webroot (not webroot, which would be relative to WORKDIR, I think).
I think the bottom line is that there are two things going on here. The first is building the image; the second is running it. When you run it, you provide the volume that you want mounted. As long as your application respects the project.json file's "webroot" value as the place to look for the web pages, this will work.

Dockerfile with custom parent image

How can I use existing images as the FROM parameter in a dockerfile?
I'm trying to dockerize a VueJS application, but wanted pierrezemb/gostatic to be the base image -- it's a tiny http server that, in principle, is able to host files and directories. However, when running the completed image and checking the exposed port in the browser, the index.html file loads but all other resources in subfolders fail with a 404:
The resource from “http://localhost:8043/js/app.545bfbc1.js” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff). Curling the resource returns just the 404.
This is likely because the gostatic base image is created to be very much standalone, not to be included as the FROM parameter in a Dockerfile. When I build the code myself and use gostatic to host the directory, everything is fine. When I build with a Dockerfile, build succeeds but I get the aforementioned errors when trying to get resources not in the main directory.
Ideal, standalone use case:
docker run -d -p 80:8043 -v path/to/website:/srv/http --name goStatic pierrezemb/gostatic
Current Dockerfile
FROM pierrezemb/gostatic AS deployment
COPY ./dist/* /srv/http/
EXPOSE 8043
# Note, gostatic calls: ENTRYPOINT ["/goStatic"]
# Therefore CMD need only be goStatic parameters
CMD ["-enable-health", "-enable-logging"]
Note, the dist folder is built and functioning. Also notably, the health endpoint doesn't work, and there is no logging (which the flags are set for). It's clear I'm handling the parent image wrong.
I'm building and running with the following commands:
docker build -t tweet-dash .
docker run -d -p 8043:8043 --name dash tweet-dash
Dockerfile for goStatic is here
This is actually almost exactly the way you're supposed to use existing images: everything here is being done correctly.
For those coming after: pay attention to the parent image's Dockerfile -- build your own with it open next to you. Figure out how to use the image by itself as a standalone first, and then see if you can add on to it.
The Dockerfile is slightly incorrect in this case: with COPY ./dist/* /srv/http, the glob expands to each entry inside dist, and when a source is a directory, Docker copies that directory's contents rather than the directory itself. As a result, the top-level folders under dist (such as js) are not preserved in /srv/http.
This can be fixed by doing COPY ./dist /srv/http.
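For reference, a corrected Dockerfile would then look like this (only the COPY line changes from the one in the question):
FROM pierrezemb/gostatic AS deployment
# Copying the directory itself (no glob) preserves subfolders such as dist/js
COPY ./dist /srv/http/
EXPOSE 8043
CMD ["-enable-health", "-enable-logging"]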

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy over some file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ directory?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
No, it is not possible to go up out of the build directory. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you see, this dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only these files in your Dockerfile. Of course, you might think you can be clever and reference / (the root path) so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see happen is that the docker client freezes. Or does it really? Well, it's not really freezing; it's sending the entire / directory to the Docker daemon, and that can take ages, or (what's more probable) you may run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file in that root directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need for runtime, better to pass it in via environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
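If you only need the secret at build time, a sketch of the BuildKit secret-mount approach from that article might look like this (the id app_secrets and the gradle property are illustrative assumptions, not a known project convention):
# syntax=docker/dockerfile:1
FROM gradle:latest
WORKDIR /build
COPY . .
# The secret is mounted at /run/secrets/app_secrets for this step only and never lands in an image layer
RUN --mount=type=secret,id=app_secrets \
    gradle build -DsecretsFile=/run/secrets/app_secrets
Then build with the file supplied from outside the build context:
DOCKER_BUILDKIT=1 docker build --secret id=app_secrets,src=$HOME/.super/secrets.yaml -t myimage .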

Docker: When running with volume (-v) flag Error response from daemon: OCI runtime create failed

I am trying to dockerize my first Go Project (Although the question has nothing to do with Go, I guess!).
Short summary (of what the code is doing) - It simply checks whether a .cache folder is present and creates it if it doesn't exist.
After dockerizing the project, my goal is to mount the path within the container where .cache is created to a host path
Here's my Dockerfile (Multistaged):
FROM golang as builder
ENV GO111MODULE=on
WORKDIR /proj
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
RUN ls
FROM alpine
COPY --from=builder /proj/project /proj/
RUN chmod a+x /proj/project
ENTRYPOINT [ "/proj/project" ]
EDIT: If I run something like this (as @Jan Garaj mentioned in the comments):
docker run --rm -v "`pwd`/data/.cache:/proj/.cache/" project-image:latest
it doesn't throw an error, but it creates an empty data/.cache folder on the host with none of the actual files and folders from the container's .cache directory. The executable inside the container is, however, able to create the .cache directory and its subsequent files and folders.
I know variations of this problem have been asked a lot of times, but trust me, I've tried out all those solutions. The following are some of the questions:
Error response from daemon: OCI runtime create failed: container_linux.go:296
A GitHub issue which looked familiar - Still doesn't have an answer and is open.
Another GitHub issue - Probably the best link so far, but I still couldn't get it to work.
The fact that removing the volume flag makes the run command work is confusing me a lot.
Can someone please explain what's going on in this case and point me in the right direction?
P.S. - Also, I'm running Docker on macOS (macOS High Sierra, to be specific) and I had to enable file sharing in Docker -> Preferences -> File Sharing with the host mount path (just an extra piece of information!).
Needless to say, I have also tried overriding ENTRYPOINT by running something like /bin/sh /proj/project, which also didn't work (it couldn't find the executable project even after mentioning the full path from the root). I read somewhere that the alpine image only has sh and doesn't have bash. I am also changing the permissions of my executable project to a+x while building the image, which doesn't help either.
Please do let me know if any part of the question is unclear. I've also checked in my code here in GitHub if anyone wants to reproduce the error.
When you mount your working directory's subdirectory data onto the /proj directory inside the container, the entire folder, including the binary you've compiled and copied in there, will no longer be available. Instead, the contents of your data directory will be available inside your container at /proj. Essentially, you are 'hiding' the container image's version of the directory and replacing it with a directory from outside the container.
This is because the -v flag, with the argument you've given it, creates a bind mount and uses the second parameter (/proj) as the mount target.
To solve the problem, either copy the binary to a different directory (and change the ENTRYPOINT instruction correspondingly), or choose a different target for the bind mount.
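A minimal sketch of the first option, keeping the paths from the question (the /app location for the binary is an arbitrary choice):
FROM alpine
# The binary now lives outside /proj, so a bind mount on /proj or /proj/.cache cannot hide it
COPY --from=builder /proj/project /app/project
RUN chmod a+x /app/project
ENTRYPOINT [ "/app/project" ]
With that layout, the run command from the edit should behave as intended:
docker run --rm -v "`pwd`/data/.cache:/proj/.cache/" project-image:latest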

Docker, copy files in production and use volume in development

I'm new to using docker for development but wanted to try it in my latest project and have ran into a couple of questions.
I have a scenario where I want to link the current project directory as a volume to a running Docker container in development mode, so that file changes can be made locally without restarting the container each time. To do this, I have the following command:
docker run --name app_instance -p 3100:80 -v $(pwd):/app appimage
In contrast, in production I want to copy files from the current project directory.
E.g. in the Dockerfile I have ADD . /app (with a .dockerignore file to ignore certain folders). Also, I would like to mount a volume for persistent storage. For this scenario, I have the following command:
docker run --name app_instance -p 80:80 -v ./filestore:/app/filestore appimage
My problem is that with only one Dockerfile, the development command will mount a volume at /app while files are also copied there with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
My question is, what is the best practice to handle such a situation?
Solutions I have thought of:
Mount the project folder to a different path than /app during development and ignore the /app directory created in the container by the Dockerfile.
Have two Dockerfiles, one that copies the current project and one that does not.
My problem is that with only one Dockerfile, the development command will mount a volume at /app while files are also copied there with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
For this scenario, the behavior is as follows:
a) At docker build time, ADD . /app copies your code from the host into the /app folder of the image.
b) At docker run time, mounting your local project over /app means the container always sees your latest development code.
The mount overrides the contents you added in the Dockerfile, so this actually meets your requirements. You should try it; there is no need for any complex solution.
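As a sketch, the same image can then be run in both modes (names and paths taken from the question; note that docker run -v needs an absolute host path, hence $(pwd)):
# Development: the bind mount hides the files baked in at build time
docker run --name app_instance -p 3100:80 -v "$(pwd):/app" appimage
# Production: use the baked-in files and mount only the persistent store
docker run --name app_instance -p 80:80 -v "$(pwd)/filestore:/app/filestore" appimage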

Understanding "VOLUME" instruction in DockerFile

Below is the content of my "Dockerfile"
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
# Change working dir to /usr/src/app
WORKDIR /usr/src/app
VOLUME . /usr/src/app
RUN npm install
EXPOSE 8080
CMD ["node" , "server" ]
In this file I am expecting the VOLUME . /usr/src/app instruction to mount the contents of the present working directory on the host onto the /usr/src/app folder of the container.
Please let me know if this is the correct way?
In short: No, your VOLUME instruction is not correct.
The Dockerfile's VOLUME instruction specifies one or more volumes by container-side paths, but it does not allow the image author to specify a host path. On the host side, the volumes are created with a very long ID-like name inside the Docker root. On my machine this is /var/lib/docker/volumes.
Note: Because the autogenerated name is extremely long and makes no sense from a human's perspective, these volumes are often referred to as "unnamed" or "anonymous".
Your example that uses a '.' character will not even run on my machine, whether I make the dot the first or the second argument. I get this error message:
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused "open /dev/ptmx: no such file or directory"".
I know that what has been said to this point is probably not very valuable to someone trying to understand VOLUME and -v and it certainly does not provide a solution for what you try to accomplish. So, hopefully, the following examples will shed some more light on these issues.
Minitutorial: Specifying volumes
Given this Dockerfile:
FROM openjdk:8u131-jdk-alpine
VOLUME vol1 vol2
(For the outcome of this minitutorial, it makes no difference if we specify vol1 vol2 or /vol1 /vol2 — this is because the default working directory within a Dockerfile is /)
Build it:
docker build -t my-openjdk .
Run:
docker run --rm -it my-openjdk
Inside the container, run ls in the command line and you'll notice two directories exist: /vol1 and /vol2.
Running the container also creates two directories, or "volumes", on the host-side.
While having the container running, execute docker volume ls on the host machine and you'll see something like this (I have replaced the middle part of the name with three dots for brevity):
DRIVER VOLUME NAME
local c984...e4fc
local f670...49f0
Back in the container, execute touch /vol1/weird-ass-file (creates a blank file at said location).
This file is now available on the host machine, in one of the unnamed volumes. It took me two tries because I first tried the first listed volume, but eventually I found my file in the second listed volume, using this command on the host machine:
sudo ls /var/lib/docker/volumes/f670...49f0/_data
Similarly, you can try to delete this file on the host and it will be deleted in the container as well.
Note: The _data folder is also referred to as a "mount point".
Exit out from the container and list the volumes on the host. They are gone. We used the --rm flag when running the container and this option effectively wipes out not just the container on exit, but also the volumes.
Run a new container, but specify a volume using -v:
docker run --rm -it -v /vol3 my-openjdk
This adds a third volume and the whole system ends up having three unnamed volumes. The command would have crashed had we specified only -v vol3. The argument must be an absolute path inside the container. On the host-side, the new third volume is anonymous and resides together with the other two volumes in /var/lib/docker/volumes/.
It was stated earlier that the Dockerfile cannot map to a host path, which sort of poses a problem for us when trying to bring files in from the host to the container during runtime. A different -v syntax solves this problem.
Imagine I have a subfolder in my project directory ./src that I wish to sync to /src inside the container. This command does the trick:
docker run -it -v $(pwd)/src:/src my-openjdk
Both sides of the : character expect an absolute path. The left side is an absolute path on the host machine, the right side is an absolute path inside the container. pwd is a command that prints the current/working directory. Putting the command in $() runs it in a subshell and yields back the absolute path to our project directory.
Putting it all together, assume we have ./src/Hello.java in our project folder on the host machine with the following contents:
public class Hello {
    public static void main(String... ignored) {
        System.out.println("Hello, World!");
    }
}
We build this Dockerfile:
FROM openjdk:8u131-jdk-alpine
WORKDIR /src
ENTRYPOINT javac Hello.java && java Hello
We run this command:
docker run -v $(pwd)/src:/src my-openjdk
This prints "Hello, World!".
The best part is that we're completely free to modify the .java file with a new message for another output on a second run - without having to rebuild the image =)
Final remarks
I am quite new to Docker, and the aforementioned "tutorial" reflects information I gathered from a 3-day command line hackathon. I am almost ashamed I haven't been able to provide links to clear English-like documentation backing up my statements, but I honestly think this is due to a lack of documentation and not personal effort. I do know the examples work as advertised using my current setup which is "Windows 10 -> Vagrant 2.0.0 -> Docker 17.09.0-ce".
The tutorial does not solve the problem "how do we specify the container's path in the Dockerfile and let the run command only specify the host path". There might be a way, I just haven't found it.
Finally, I have a gut feeling that specifying VOLUME in the Dockerfile is not just uncommon; it is probably best practice to never use VOLUME, for two reasons. The first reason we have already identified: we cannot specify the host path - which is a good thing, because Dockerfiles should be very agnostic to the specifics of a host machine. The second reason is that people might forget to use the --rm option when running the container. One might remember to remove the container but forget to remove the volume. Plus, even with the best of human memory, it might be a daunting task to figure out which of all the anonymous volumes are safe to remove.
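For what it's worth, Docker does ship commands to list and bulk-remove volumes that no container references anymore - they just cannot tell you which anonymous volume held which data:
docker volume ls -f dangling=true   # volumes not referenced by any container
docker volume prune                 # remove all of them (prompts for confirmation)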
The official docker tutorial says:
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:
- Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
- Data volumes can be shared and reused among containers.
- Changes to a data volume are made directly.
- Changes to a data volume will not be included when you update an image.
- Data volumes persist even if the container itself is deleted.
In a Dockerfile you can specify only the destination of a volume inside a container, e.g. /usr/src/app.
When you run a container, e.g. docker run --volume=/opt:/usr/src/app my_image, you may, but do not have to, specify its mount point (/opt) on the host machine. If you do not specify the --volume argument, the mount point will be chosen automatically, usually under /var/lib/docker/volumes/.
Specifying a VOLUME line in a Dockerfile configures a bit of metadata on your image, but how that metadata is used is important.
First, what did these two lines do:
WORKDIR /usr/src/app
VOLUME . /usr/src/app
The WORKDIR line there creates the directory if it doesn't exist and updates some image metadata so that all relative paths, along with the current directory for commands like RUN, will be in that location. The VOLUME line there specifies two volumes: one is the relative path . and the other is /usr/src/app, which here happen to be the same directory. Most often the VOLUME line contains only a single directory, but it can contain multiple, as you've done, or it can be a JSON-formatted array.
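As a side note, you can view that metadata on a built image yourself (the image name node-app here is a placeholder):
docker image inspect --format '{{ json .Config.Volumes }}' node-app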
You cannot specify a volume source in the Dockerfile: a common source of confusion when specifying volumes in a Dockerfile is trying to match the runtime syntax of a source and destination at image build time; this will not work. The Dockerfile can only specify the destination of the volume. It would be a trivial security exploit if someone could define the source of a volume, since they could update a common image on Docker Hub to mount the root directory into the container and then launch a background process inside the container, as part of an entrypoint, that adds logins to /etc/passwd, configures systemd to launch a bitcoin miner on next reboot, or searches the filesystem for credit cards, SSNs, and private keys to send off to a remote site.
What does the VOLUME line do? As mentioned, it sets some image metadata saying that a directory inside the image is a volume. How is this metadata used? Every time you create a container from this image, Docker will force that directory to be a volume. If you do not provide a volume in your run command, or compose file, the only option for Docker is to create an anonymous volume. This is a local named volume with a long unique id for the name and no other indication of why it was created or what data it contains (anonymous volumes are where data goes to get lost). If you override the volume, pointing to a named or host volume, your data will go there instead.
VOLUME breaks things: you cannot disable a volume once it is defined in a Dockerfile. More importantly, the RUN command in Docker is implemented with temporary containers with the classic builder. Those temporary containers get a temporary anonymous volume. That anonymous volume is initialized with the contents of your image. Any writes inside the container from your RUN command are made to that volume. When the RUN command finishes, changes to the image are saved and changes to the anonymous volume are discarded. Because of this, I strongly recommend against defining a VOLUME inside the Dockerfile. It results in unexpected behavior for downstream users of your image who wish to extend the image with initial data in the volume location.
How should you specify a volume? To specify where you want to include volumes with your image, provide a docker-compose.yml. Users can modify that to adjust the volume location to their local environment, and it captures other runtime settings like publishing ports and networking.
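A minimal compose sketch of that idea (service, image, and volume names are placeholders, not from the question):
services:
  app:
    image: my-app
    ports:
      - "8080:8080"
    volumes:
      - app-data:/usr/src/app
volumes:
  app-data:
A user who prefers a bind mount can simply replace app-data with a host path in their local copy of the file.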
Someone should document this! They have. Docker includes warnings about VOLUME usage in their Dockerfile documentation, along with advice to specify the source at runtime:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
...
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can't be guaranteed to be available on all hosts. For this reason, you can't mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
The behavior of defining a VOLUME followed by RUN steps in a Dockerfile has changed with the introduction of buildkit. Here are two examples. First the Dockerfile:
$ cat df.vol-run
FROM busybox
WORKDIR /test
VOLUME /test
RUN echo "hello" >/test/hello.txt \
&& chown -R nobody:nobody /test
Next, building without buildkit. Note how the changes from the RUN step are lost:
$ DOCKER_BUILDKIT=0 docker build -t test-vol-run -f df.vol-run .
Sending build context to Docker daemon 23.04kB
Step 1/4 : FROM busybox
---> beae173ccac6
Step 2/4 : WORKDIR /test
---> Running in aaf2c2920ebd
Removing intermediate container aaf2c2920ebd
---> 7960bec5b546
Step 3/4 : VOLUME /test
---> Running in 9e2fbe3e594b
Removing intermediate container 9e2fbe3e594b
---> 5895ddaede1f
Step 4/4 : RUN echo "hello" >/test/hello.txt && chown -R nobody:nobody /test
---> Running in 2c6adff98c70
Removing intermediate container 2c6adff98c70
---> ef2c30f207b6
Successfully built ef2c30f207b6
Successfully tagged test-vol-run:latest
$ docker run -it test-vol-run /bin/sh
/test # ls -al
total 8
drwxr-xr-x 2 root root 4096 Mar 6 14:35 .
drwxr-xr-x 1 root root 4096 Mar 6 14:35 ..
/test # exit
And then building with buildkit. Note how the changes from the RUN step are preserved:
$ docker build -t test-vol-run -f df.vol-run .
[+] Building 0.5s (7/7) FINISHED
=> [internal] load build definition from df.vol-run 0.0s
=> => transferring dockerfile: 154B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> CACHED [1/3] FROM docker.io/library/busybox 0.0s
=> [2/3] WORKDIR /test 0.0s
=> [3/3] RUN echo "hello" >/test/hello.txt && chown -R nobody:nobody /test 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8cb3220e3593b033778f47e7a3cb7581235e4c6fa921c5d8ce1ab329ebd446b6 0.0s
=> => naming to docker.io/library/test-vol-run 0.0s
$ docker run -it test-vol-run /bin/sh
/test # ls -al
total 12
drwxr-xr-x 2 nobody nobody 4096 Mar 6 14:34 .
drwxr-xr-x 1 root root 4096 Mar 6 14:34 ..
-rw-r--r-- 1 nobody nobody 6 Mar 6 14:34 hello.txt
/test # exit
To better understand the VOLUME instruction in a Dockerfile, let us look at the typical volume usage in the official MySQL image's Dockerfile:
VOLUME /var/lib/mysql
Reference:
https://github.com/docker-library/mysql/blob/3362baccb4352bcf0022014f67c1ec7e6808b8c5/8.0/Dockerfile
/var/lib/mysql is the default location where MySQL stores its data files.
When you run a container for test purposes only, you may omit the mount point, e.g.:
docker run mysql:8
Then the MySQL container instance will use the default mount path specified by the VOLUME instruction in the Dockerfile. The volume is created with a very long ID-like name inside the Docker root; this is called an "unnamed" or "anonymous" volume, and it lives in the underlying host system's /var/lib/docker/volumes folder:
/var/lib/docker/volumes/320752e0e70d1590e905b02d484c22689e69adcbd764a69e39b17bc330b984e4
This is very convenient for quick tests, without the need to specify a mount point, while still getting the best performance by using a volume for the data store rather than the container layer.
For formal use, you will need to specify the mount path by using a named volume or a bind mount, e.g.:
docker run -v /my/own/datadir:/var/lib/mysql mysql:8
The command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container. The data directory /my/own/datadir won't be automatically deleted, even if the container is deleted.
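For completeness, the named-volume variant (the name mysql-data is arbitrary) would be:
docker run -v mysql-data:/var/lib/mysql mysql:8
Docker creates the mysql-data volume on first use, and it likewise survives removal of the container.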
Usage of the mysql official image (Please check the "Where to Store Data" section):
Reference:
https://hub.docker.com/_/mysql/
The VOLUME command in a Dockerfile is quite legit, totally conventional, absolutely fine to use, and it is not deprecated in any way. You just need to understand it.
We use it to point to any directories which the app in the container will write to a lot. We don't use VOLUME just because we want to share between host and container, like for a config file.
The command simply needs one parameter: a path to a folder (relative to WORKDIR if set) from within the container. Then Docker will create a volume in its graph (/var/lib/docker) and mount it to the folder in the container. Now the container has somewhere to write to with high performance. Without the VOLUME command, writes to the specified folder would be much slower, because the container would be using its copy-on-write strategy in the container itself. The copy-on-write strategy is a main reason why volumes exist.
If you mount something over the folder specified by the VOLUME command, the command is never run, because VOLUME only takes effect when the container starts, kind of like ENV.
Basically, with the VOLUME command you get performance without externally mounting any volumes. Data will also persist across container runs without any external mounts. Then, when ready, simply mount something over it (a sketch of this pattern follows the lists below).
Some good example use cases:
- logs
- temp folders
Some bad use cases:
- static files
- configs
- code
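A minimal sketch of that pattern, loosely based on the Dockerfile from the question (the logs and /tmp/work paths are assumptions for illustration):
FROM node:boron
WORKDIR /usr/src/app
COPY . .
RUN npm install
# Write-heavy paths get anonymous volumes, bypassing the copy-on-write layer
VOLUME ["/usr/src/app/logs", "/tmp/work"]
EXPOSE 8080
CMD ["node", "server"]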
I don't consider the use of VOLUME good in any case, except if you are creating an image for yourself and no one else is going to use it.
I was impacted negatively by VOLUME being exposed in base images that I extended, and only found out about the problem after the image was already running. For example, wordpress declares the /var/www/html folder as a VOLUME, which means that any files added or changed during the build stage aren't considered, while live changes persist, even if you don't know it. There is an ugly workaround, defining the web directory in another place, but this is just a bad solution when a much simpler one exists: just remove the VOLUME directive.
You can achieve the intent of a volume easily using the -v option. This not only makes it clear what the volumes of the container will be (without having to take a look at the Dockerfile and parent Dockerfiles), but it also gives the consumer the option to use the volume or not.
It's also bad to use VOLUME for the following reasons, as stated by this answer:
However, the VOLUME instruction does come at a cost.
Users might not be aware of the unnamed volumes being created, and continuing to take up storage space on their Docker host after containers are removed.
There is no way to remove a volume declared in a Dockerfile. Downstream images cannot add data to paths where volumes exist.
The latter issue results in problems like these.
How to “undeclare” volumes in docker image?
GitLab on Docker: how to persist user data between deployments?
Having the option to undeclare a volume would help, but only if you know the volumes defined in the dockerfile that generated the image (and the parent dockerfiles!). Furthermore, a VOLUME could be added in newer versions of a Dockerfile and break things unexpectedly for the consumers of the image.
Another good explanation (about the oracle image having VOLUME, which was removed): https://github.com/oracle/docker-images/issues/640#issuecomment-412647328
More cases in which VOLUME broke stuff for people:
https://github.com/datastax/docker-images/issues/31
https://github.com/docker-library/wordpress/issues/232
https://github.com/docker-library/ghost/issues/195
https://github.com/samos123/docker-drupal/issues/10
A pull request to add options to reset properties of the parent image (including VOLUME) was closed and is being discussed here (you can see several cases of people adversely affected by volumes defined in Dockerfiles); it has a comment with a good explanation against VOLUME:
Using VOLUME in the Dockerfile is worthless. If a user needs persistence, they will be sure to provide a volume mapping when running the specified container. It was very hard to track down that my issue of not being able to set a directory's ownership (/var/lib/influxdb) was due to the VOLUME declaration in InfluxDB's Dockerfile. Without an UNVOLUME type of option, or getting rid of it altogether, I am unable to change anything related to the specified folder. This is less than ideal, especially when you are security-aware and desire to specify a certain UID the image should be ran as, in order to avoid a random user, with more permissions than necessary, running software on your host.
The only good thing I can see about VOLUME is about documentation, and I would consider it good if it only did that (without any side effects).
Update (2021-10-19)
One more related issue with the mysql official image: https://github.com/docker-library/mysql/issues/255
Update (2022-01-26)
I found a good article explaining about the issues with VOLUME. It's already several years old, but the same issues remain:
https://boxboat.com/2017/01/23/volumes-and-dockerfiles-dont-mix/
TL;DR
In my view, the best thing that could happen to VOLUME is for it to be deprecated.
Although this is a very old post, I still suggest you check out the latest official Docker docs if you have some confusion between volumes and bind mounts.
Bind mounts have been around since the early days of Docker, but they are not a perfect design either - e.g. "Bind mounts allow access to sensitive files" - and from the docs you can see that Docker officially prefers that you use volumes rather than bind mounts.
You can find good use cases for volumes here.
References:
docker volume docs
docker storage overview
