Docker container bind host directory

I wish I could update the settings of the application by changing the local profile.
I use "volume" to bind a local directory, for example:
docker run -v D:\test:/app
But when the container is running, all files in /app are emptied, because D:\test does not have any files.
Is there any way I can achieve my goal?

Your question is a bit unclear. I guess your problem is the following: you want to bind mount your app directory, but it is initially empty and will stay empty, since the bind mount shadows everything that was put into /app during the build.
I usually use one of two approaches:
Put your profile into the host directory D:\test (if applicable). This is also a viable strategy for, e.g., the source code of Node.js apps.
During build, put your profile into /app_temp. Then create an entry point which moves /app_temp into /app. If you want to persist the profile through multiple build/run phases, it has to be inside the build context (which is likely not D:\test) on your host.
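A minimal sketch of that second approach, assuming a single profile file and a small entrypoint script (the names profile.json and entrypoint.sh are placeholders, not from the question):
Dockerfile (sketch):
COPY profile.json /app_temp/profile.json
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh (sketch):
#!/bin/sh
# copy the baked-in defaults into the (possibly empty) bind mount, then hand off
cp -rn /app_temp/. /app/
exec "$@"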

You need to change the way your application is organized a bit: put all the settings in their own directory and have the application read them from there. Then you can map only the settings folder to the host one (see the sketch below).
Another option is to map the host folder to a temporary folder inside the container and have the ENTRYPOINT script update your files (by copying them over) and then run your application.
Docker was not meant to be used for the workflow you are trying to set up, which is why you need to do some extra work.
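For the first option above, a rough sketch of the narrower mapping (the settings subfolder and the image name are assumptions about how the app could be reorganized):
docker run -v D:\test\settings:/app/settings myimage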

because D:\test does not have any files.
That's the way it works. The volume type you use is a bind mount, i.e. you mount a file system, using a mount point mapped to a host directory.
According to documentation:
With bind mounts, we control the exact mountpoint on the host. We can use this to persist data, but it’s often used to provide additional data into containers.
You have two options here (both imply host data should exist in advance):
Bind to a folder containing the configuration data, as you showed.
It is also possible to bind only a single file:
docker run -v D:\test\config.json:/app/config.json
When binding to a file, if it does not exist beforehand, the Docker daemon assumes it is a directory and creates one, both in the container and on the host.
you mount file system, using mount point mapped to a host directory
Hence, if the host directory is empty, the mounted file system will also be empty.
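To avoid the single-file pitfall, the file can be created on the Windows host before the first run; a sketch (myimage is a placeholder):
type NUL > D:\test\config.json
docker run -v D:\test\config.json:/app/config.json myimage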

Related

Where are Linux container volumes stored under Windows?

I should state that I'm quite new to Docker. I need to run an OTRS Docker image that keeps its files inside a volume, but it's not clear to me where those files are stored on Windows.
I need to know this since I wish to back up the files...
The compose is here
I've seen that there's a C:\ProgramData\Docker\volumes directory, but it seems empty...
Thanks in advance
For this example the volumes are created and stored in the current directory, as defined in this line: https://github.com/juanluisbaptiste/docker-otrs/blob/master/docker-compose-prod.yml#L22
That line means: in the current directory, if a volumes folder exists it is used; otherwise Docker creates an empty directory and mounts it into the container.
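As an illustration of how a relative host path behaves in a compose file (this is only a sketch, not the actual OTRS compose file; the service name and paths are made up):
# docker-compose.yml (sketch)
# a host path starting with ./ is resolved relative to the compose file,
# so this creates ./volumes/mysql next to it on first run
services:
  db:
    image: mariadb
    volumes:
      - ./volumes/mysql:/var/lib/mysql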

A safe directory that can be used in Docker and a development environment

I have a webapp which needs to store temporary files wherever it runs.
Since I want the app to execute both in Docker and in a development environment - I need a safe directory that can be created in the development environment (Mac OS usually) and in the Docker container.
I used /usr/temp in the container, but on a Mac this directory is inaccessible.
What would be the best, safest directory to use?
Thank you
If the environment variable $TMPDIR is set, it's a standard place for temporary files, and if it's not set, it usually defaults to /tmp. (On MacOS it points to a per-user directory that quickly gets filled with clutter.) You don't mention what language you're using, but most have a specific function or module to create a file "in the usual temporary directory", which is this one.
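As a small illustration of the $TMPDIR convention in shell terms (the myapp prefix is just a placeholder):
# fall back to /tmp when $TMPDIR is unset, and create a private working directory there
workdir=$(mktemp -d "${TMPDIR:-/tmp}/myapp.XXXXXX")
echo "temporary files go in $workdir"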
In general, environment variables are a good way to encapsulate differences between your development and various deployment environments, and it makes sense to use one here.
Also remember, on the one hand, that anything in Docker filesystem space you don't explicitly persist will be lost when the container exits, and on the other, that if the container stays running for a long time, there isn't any sort of automated /tmp cleaner, so you'll need to properly manage the lifecycle of these files. Finally, remember that you have near-complete control over the container's filesystem layout: if you need some specific directory to exist, you can RUN mkdir it in your Dockerfile.
Docker provides the volume concept to help you sync data between the host and the container.
In your case, let's say you want to sync /home/user/data on your host with /usr/temp in the container.
You can do so like this:
docker run -itd -v /home/user/data:/usr/temp --name mycontainer imagename
Once the container is up and running, add some files to the data folder and they will be available inside the container in the temp folder, and the other way around.
I ended up with the following in my Dockerfile:
ENV HOME /usr
WORKDIR $HOME/app
RUN mkdir -p $HOME/temp/abc \
             $HOME/temp/xyz
And in my configuration files, I used /temp/abc or /temp/xyz to point to the destination folders.
Finally, in my application code I made sure to prepend any path resolution with process.env['HOME'] (NodeJS).
The above works well both in a development environment, since $HOME is set by default on a Mac, and in production, which runs the Docker image above.
Thanks everyone!

How to update a Docker container image but keep the files generated by the container app

What are the best practices for updating a container in the following scenario?
I have images built from my web app project, and I publish new images based on updated source code about once a month.
But my web app generates new files, or updates existing ones, over time while running in the container. For example, the app creates new XML files under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files, without losing them, when I run the new, updated image.
/bin/
    /first.dll
    /second.dll
/other-soruces/
    /some.cs
    /other.cs
/user/
    /user-1.xml
    /user-2.xml
/uploads/
    /images
        /image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there another strategy?
Short answer, yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application access and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount). A named volume has a particular feature that improves portability: it will be initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and can be used to seed your volume with an initial state.
The host volume (bind mount) is just a directory mounted from the docker host into the container; you get exactly what was on the host, including the uid/gid of the files/directories, with no initialization procedure. The host volume is very easy to access for developers, but lacks portability if you move to a multi-node swarm cluster, and suffers from host uid/gid values mapping to different users inside the container, since usernames inside the container can differ for the same IDs.
Any files you write inside the container that are not written to a volume should be considered disposable and will be lost when you recreate the container to update to a new image. And any directories you define as a volume should be considered owned by that volume and will not receive updates from the image when you replace the container.
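For instance, a hedged sketch of the named-volume variant for the two directories above (the volume and image names are made up):
# named volumes are created on first use and seeded from the image contents at /user and /uploads
docker volume create userdata
docker volume create uploads
docker run -d -v userdata:/user -v uploads:/uploads mywebapp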
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything that needs to be separated to allow the same image to be reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images. The configuration is injected with environment variables, command line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file that is now stored in docker's swarm rather than locally on a single host. In your situation, the /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
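A rough sketch of such a compose file, assuming the layout from the question (the image and volume names are made up, and the config is injected as a read-only bind mount for the single-node case):
# docker-compose.yml (sketch)
services:
  web:
    image: mywebapp:2018-06           # part 1: code and binaries, rebuilt monthly
    volumes:
      - userdata:/user                # part 2: persistent data survives image updates
      - uploads:/uploads
      - ./web.config:/web.config:ro   # part 3: configuration injected at run time
volumes:
  userdata:
  uploads: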

Dockerized executable read/write on host filesystem

I just dockerized an executable that reads from a file and creates a new file in the very directory that file came from.
I want to use Docker in that setup, so that I avoid installing numerous third-party libraries in the production environment.
My problem now: I have file /this/is/a.file on my underlying (host) file system and my executable is supposed to create /this/is/b.file.
As far as I see it, the only chance to get this done is by mapping a volume that points to /this/is and then letting the executable know where I mounted it inside the Docker container.
Am I right? Or is there a way that I just pass docker run mydockerizedstuff /this/is/a.file without using Docker volumes?
You're correct, you need to pass in /this/is as a volume and the executable will write to that location.
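Something along these lines, assuming the executable looks for its input under /data inside the container (the mount point name is arbitrary):
docker run -v /this/is:/data mydockerizedstuff /data/a.file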
If you want to constrain the thing even more, you can pass /this/is/b.file as a volume. You need to create it (simply via touch) beforehand, otherwise Docker will consider it a directory and create it as such for you, but you'll know that the thing won't be able to create /this/is/c.file or any other thing.
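A sketch of that more constrained variant; b.file must exist before the run, otherwise Docker creates a directory in its place:
touch /this/is/b.file
docker run -v /this/is/a.file:/data/a.file:ro -v /this/is/b.file:/data/b.file mydockerizedstuff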

How to place files on shared volume from within Dockerfile?

I have a Dockerfile which builds an image that provides a complicated tool-chain environment for compiling a project on a volume mounted from the host machine's file system. Another reason for this setup is that I don't have a lot of space in the image.
The Dockerfile builds my tool-chain in the OS image, and then prepares the source by downloading packages to be placed on the host's shared volume. Normally, from there I'd log into the image and execute commands to build. And this is the problem: I can download the source in the Dockerfile, but how would I then get it onto the shared volume?
Basically I have ...
ADD http://.../file mydir
VOLUME /home/me/mydir
But then, of course, I get the error "cannot mount volume over existing file ..."
Am I going about this wrong?
You're going about it wrong, but you already suspected that.
If you want the source files to reside on the host filesystem, get rid of the VOLUME directive in your Dockerfile, and don't try to download the source files at build time. This is something you want to do at run time. You probably want to provision your image with a pair of scripts:
One that downloads the files to a specific location, say /build.
Another that actually runs the build process.
With these in place, you could first download the source files to a location on the host filesystem, as in:
docker run -v /path/on/my/host:/build myimage fetch-sources
And then you can build them by running:
docker run -v /path/on/my/host:/build myimage build-sources
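A hedged Dockerfile sketch of how such an image might be provisioned (the base image and install location are assumptions; fetch-sources and build-sources are the two scripts described above):
FROM debian
COPY fetch-sources build-sources /usr/local/bin/
RUN chmod +x /usr/local/bin/fetch-sources /usr/local/bin/build-sources
# note: no VOLUME directive, and no ADD of the source files at build time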
With this model:
You're no longer trying to muck about with volumes during the image build process. That is almost never what you want, since data stored in a volume is explicitly excluded from the image, and the build process doesn't permit you to conveniently mount host directories inside the container.
You are able to download the files into a persistent location on the host, where they will be available to you for editing, or re-building, or whatever.
You can run the build process multiple times without needing to re-download the source files every time.
I think this will do pretty much what you want, but if it doesn't meet your needs, or if something is unclear, let me know.

Resources