Let me start by saying that I'm quite new to Docker. I need to run an OTRS Docker image that keeps its files inside a volume, but it's not clear to me where those files are stored on Windows.
I need to know this since I wish to perform a backup of the files...
The compose file is here:
I've seen that there's a C:\ProgramData\Docker\volumes directory, but it seems empty...
Thanks in advance
For this example the volumes will be created and stored in the current directory, as written in this line: https://github.com/juanluisbaptiste/docker-otrs/blob/master/docker-compose-prod.yml#L22
That line means: if there is a volumes folder in the current directory, use it; otherwise, create an empty directory and mount it into the container.
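If instead the image used named volumes, you could ask Docker directly where the data lives. A quick check (the volume name below is a placeholder; use whatever docker volume ls reports):

docker volume ls
docker volume inspect some_volume_name

and look at the "Mountpoint" field. Note that with Docker Desktop on Windows the mountpoint is a path inside Docker's Linux VM, not a regular Windows path, which is why C:\ProgramData\Docker\volumes can look empty.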
I would like to update the application's settings by changing a local profile.
I use a volume to bind a local directory, for example:
docker run -v D:\test:/app <image>
But when the container is running, all files in /app disappear, because D:\test does not contain any files.
Is there any way I can achieve my goal?
Your question is a bit unclear. I guess your problem is the following: you want to bind mount your app directory, but it is initially empty and will stay empty, since the bind mount shadows everything put into /app during the build.
I usually use one of two approaches:
Put your profile into the host directory D:\test (if applicable). This is also a viable strategy for, e.g., the source code of Node.js apps.
During build, put your profile into /app_temp. Then create an entrypoint which moves /app_temp into /app, as sketched below. If you want to persist the profile through multiple build/run phases, it has to be inside the build context (which is likely not D:\test) on your host.
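A minimal sketch of the second approach (the names /app_temp and docker-entrypoint.sh are illustrative):

#!/bin/sh
# docker-entrypoint.sh: copy the profile baked into the image at /app_temp
# into /app, which may be an initially empty bind mount from the host.
# -n (no-clobber, supported by GNU and busybox cp) keeps files already
# present on the host from being overwritten on later runs.
cp -rn /app_temp/. /app/
# Hand control to whatever command the container was asked to run.
exec "$@"

In the Dockerfile you'd COPY this script in and set it as the ENTRYPOINT, with your normal start command as CMD.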
You need to change a bit the way your application is organized: put all the settings in their own directory and have the application read them from there. Then you can map only the settings folder to the host one.
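For example, with the settings split out into their own folder (paths and image name illustrative):

docker run -v D:\test\settings:/app/settings <image>

The application code in /app stays as built into the image; only /app/settings is shadowed by the host directory.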
Another option is to map the host folder to a temporary folder inside the container and have the ENTRYPOINT script update your files (by copying them over) and then run your application.
Docker was not meant to be used for the workflow you are trying to set up, and for this reason you need to do some extra work.
because D:\test does not contain any files.
That's the way it works. The volume type you're using is a bind mount, i.e. you mount a file system using a mount point mapped to a host directory.
According to the documentation:
With bind mounts, we control the exact mountpoint on the host. We can use this to persist data, but it’s often used to provide additional data into containers.
You have two options here (both imply the host data should exist in advance):
Bind to a folder containing the configuration data, as you showed.
It is also possible to bind a single file:
docker run -v D:\test\config.json:/app/config.json <image>
When binding a file that does not exist beforehand, the Docker daemon assumes it is a directory and creates one at that path, both in the container and on the host.
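So, to make sure the file is bound as a file, create it on the host first. For example (cmd syntax; in PowerShell use New-Item instead of type nul, and the image name is a placeholder):

type nul > D:\test\config.json
docker run -v D:\test\config.json:/app/config.json <image>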
you mount a file system using a mount point mapped to a host directory
Hence, if the host directory is empty, the mounted file system will also be empty.
I have pulled a few Windows images from Docker Hub, which are stored on my C: drive by default (C:\ProgramData\Docker).
Please explain how I can move those to a different drive like D.
The simplest solution is to move the directory to the intended location, and then create a directory junction from the old location to the new one:
move C:\ProgramData\Docker D:\mypath\Docker
mklink /j C:\ProgramData\Docker D:\mypath\Docker
This causes Docker to believe that the data is still at C:\ProgramData\Docker, even though it isn't, and it will not take up any space on C:.
You can find a few other solutions at https://github.com/docker/for-win/issues/185, but it appears that they don't work 100%.
The procedure described in the following article worked for me in a Windows Server 2019 environment running Docker client and server version 20.10.7:
https://www.ntweekly.com/2019/09/20/how-to-change-docker-storage-data-folder-on-windows-server-2016/
Stop the Docker service:
stop-service docker
Edit the following file, creating it if it's not already there:
C:\ProgramData\Docker\config\daemon.json
Add a data-root element to the JSON:
{
"data-root": "e:\\DockerData"
}
Restart the service:
restart-service docker
Beware: your images and containers aren't automatically moved to the new location; you need to relocate them manually.
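Once the service is back up, you can check that Docker picked up the new location by looking for "Docker Root Dir" in the daemon info (PowerShell):

docker info | Select-String 'Root Dir'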
What is the best practice for updating containers in the following scenario?
I have images built from my web app project, and I publish new images based on the updated source code about once a month.
But my web app generates new files, or updates existing ones, while running in the container. For example, the app creates a new XML file under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files when I run the new, updated image, without losing anything.
/bin/
/first.dll
/second.dll
/other-sources/
/some.cs
/other.cs
/user/
/user-1.xml
/user-2.xml
/uploads/
/images/
/image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there any other strategy?
Short answer: yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application access and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount).

A named volume has a particular feature that improves portability: it will be initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and can be used to seed your volume with an initial state.

The host volume (bind mount) is just a directory mounted from the docker host into the container: you get exactly what was on the host, including the uid/gid of the files/directories, with no initialization procedure. The host volume is very easy for developers to access, but lacks portability if you move to a multi-node swarm cluster, and suffers from uid/gid values on the host mapping to different users inside the container, since usernames inside the container can differ for the same IDs.

Any files you write inside the container that are not written to a volume should be considered disposable and will be lost when you recreate the container to update to a new image. And any directories you define as a volume should be considered owned by that volume and will not receive updates from the image when you replace the container.
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything else that needs to be separated out so that the same image is reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images.

The configuration is injected with environment variables, command-line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file, now stored in docker's swarm rather than locally on a single host. In your situation, /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or a swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
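Expressed as a plain docker run for brevity (image name, volume names, and host config path are placeholders; the same mappings carry over directly to a compose file):

docker run -d --name webapp \
  -v user-data:/user \
  -v upload-data:/uploads \
  -v /path/on/host/web.config:/web.config:ro \
  mywebapp:latest

Here user-data and upload-data are named volumes, so on first use they are seeded from the image's /user and /uploads contents and afterwards survive every container replacement, while web.config is bind mounted read-only from the host so the same image can run unchanged in each environment.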
I just dockerized an executable that reads from a file and creates a new file in the very directory that file came from.
I want to use Docker in that setup, so that I avoid installing numerous third-party libraries in the production environment.
My problem now: I have file /this/is/a.file on my underlying (host) file system and my executable is supposed to create /this/is/b.file.
As far as I can see, the only way to get this done is by mapping a volume that points to /this/is and then letting the executable know where I mounted it inside the Docker container.
Am I right? Or is there a way to just run docker run mydockerizedstuff /this/is/a.file without using Docker volumes?
You're correct, you need to pass in /this/is as a volume and the executable will write to that location.
If you want to constrain things even more, you can pass /this/is/b.file as a volume. You need to create it (simply via touch) beforehand, otherwise Docker will consider it a directory and create it as one for you; the upside is that you'll know the tool won't be able to create /this/is/c.file or anything else.
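One way to avoid translating paths is to mount the host directory at the same path inside the container, so the executable can take the argument literally (a sketch, not the only option):

docker run -v /this/is:/this/is mydockerizedstuff /this/is/a.file

Or, for the more constrained variant:

touch /this/is/b.file
docker run -v /this/is/a.file:/this/is/a.file:ro -v /this/is/b.file:/this/is/b.file mydockerizedstuff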
I have a Dockerfile which builds an image that provides a complicated tool-chain environment for compiling a project on a volume mounted from the host machine's file system. Another reason for this setup is that I don't have a lot of space in the image.
The Dockerfile builds my tool-chain into the OS image, and then prepares the sources by downloading packages to be placed on the host's shared volume. Normally, from there, I'd log into the container and execute commands to build. And this is the problem: I can download the sources in the Dockerfile, but how would I then get them onto the shared volume?
Basically I have ...
ADD http://.../file mydir
VOLUME /home/me/mydir
But then, of course, I get the error "cannot mount volume over existing file ...".
Am I going about this wrong?
You're going about it wrong, but you already suspected that.
If you want the source files to reside on the host filesystem, get rid of the VOLUME directive in your Dockerfile, and don't try to download the source files at build time. This is something you want to do at run time. You probably want to provision your image with a pair of scripts:
One that downloads the files to a specific location, say /build.
Another that actually runs the build process.
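For illustration, the fetch script might look something like this (the URL, archive name, and use of wget are placeholders for whatever your sources actually are):

#!/bin/sh
# fetch-sources: download and unpack the sources into /build,
# which is expected to be a bind-mounted host directory at run time.
wget -O /build/sources.tar.gz http://example.com/sources.tar.gz
tar -xzf /build/sources.tar.gz -C /build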
With these in place, you could first download the source files to a location on the host filesystem, as in:
docker run -v /path/on/my/host:/build myimage fetch-sources
And then you can build them by running:
docker run -v /path/on/my/host:/build myimage build-sources
Your original approach tried to muck about with volumes during the image build process. This is almost never what you want, since data stored in a volume is explicitly excluded from the image, and the build process doesn't permit you to conveniently mount host directories inside the container.

With this model:
You are able to download the files into a persistent location on the host, where they will be available to you for editing, or re-building, or whatever.
You can run the build process multiple times without needing to re-download the source files every time.
I think this will do pretty much what you want, but if it doesn't meet your needs, or if something is unclear, let me know.