Writing and reading files to/from the host system on Docker

Context:
I have a Java Spring Boot application which has been deployed to run in a Docker container. I am using Docker Toolbox, to be precise.
The application exposes a few REST APIs to upload and download files. The application works fine on Docker, i.e. I'm able to upload and download files using the API.
Questions:
In the application I have hard-coded the path as something like "C:\SomeFolder". Where is this stored on the Docker container?
How do I force the application, when running on Docker, to use the host file system instead of Docker's file system?

This is all done with Docker volumes.
Read more about that in the Docker documentation:
https://docs.docker.com/storage/volumes/

In the application I have hard-coded the path as something like "C:\SomeFolder". Where is this stored on the Docker container?
c:\SomeFolder, assuming you have a Windows container. This is the sort of parameter you'd generally set via a command-line option or environment variable, though.
How do I force the application, when running on Docker, to use the host file system instead of Docker's file system?
Use the docker run -v option or an equivalent option to mount some directory from the host at that location. Whatever the contents of that directory are on the host will replace what's in the container at startup time, and after that, changes on the host should be reflected in the container and vice versa.
If you have an opportunity to rethink this design, there are a number of lurking issues around file ownership and the like. The easiest way to circumvent these issues is to store data somewhere like a database (which may or may not itself be running in Docker), use network I/O to send data to and from the container, and store as little as possible in the container filesystem. docker run -v is an excellent way to inject configuration files and get log files out in a typical server-oriented setup.
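As a minimal sketch (assuming a Windows container, and using C:\HostData as a placeholder host folder), mounting a host directory over the path the application already expects would look something like:
docker run -v C:\HostData:C:\SomeFolder my-spring-boot-image
Anything the application writes to C:\SomeFolder then lands in C:\HostData on the host.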

Related

Blazor server app multi-platform and Docker data persistence

I am creating a core library for Blazor server apps that creates a core DB automatically at runtime.
Until now, I have created the database in Environment.SpecialFolder.LocalApplicationData, which I got working on multiple platforms (OSX, Ubuntu and Windows).
As I just discovered Docker's simplicity for deploying images, I am trying to make my library compatible with it.
So I face two issues:
1. Determine if the app is hosted in a Docker image or not.
2. Persist data on a different volume that is NOT on the host if running in Docker.
Of course, if in Docker, I shall not use Environment.SpecialFolder.LocalApplicationData as this is not a persistent location on the image itself. I can mount a volume when starting the image as described here.
So my natural idea is to assume users will mount a volume with a specific path when starting the image, say
docker volume create MyAppDB
and run it with
docker run -dp 3000:3000 -v MyAppDB:/app/Data/MyAppDB myBuildDockerImage
Issue 1. can be verified by testing the existence of the folder /app/Data/MyAppDB, and once 1. is verified, 2. becomes trivial.
If the folder does not exist, I am for sure on a non-Docker image... Well, am I? What if users forgot to mount the volume? Or misspelled it? Maybe the folder does not exist because I am running in a non-Docker environment!
Is there a way to tweak my Docker image when building it to force the mount volumes, i.e. created by ME and not the end user? That seems safest... Alternatively, if that is not possible, can I add some specific element to the Docker image so I can tell for sure whether I am running on the Docker image I built or not?
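For illustration only, the existence check described above could be run from the host like this (the container ID is a placeholder; the path comes from the question's own example):
docker exec <container_id> sh -c '[ -d /app/Data/MyAppDB ] && echo mounted || echo missing'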

How to take a backup of Docker volumes?

I'm using named volumes to persist data on the host machine in the cloud.
I want to take a backup of these volumes present in the Docker environment so that I can reuse them in case of critical incidents.
I have almost decided to write a Python script to compress the specified directory on the host machine and push it to AWS S3.
But I would like to know if there are any other approaches to this problem.
docker-volume-backup may be helpful. It allows you to back up your Docker volumes to an external location or to S3 storage.
Why use a Docker container to back up a Docker volume instead of writing your own Python script? Ideally you don't want to make backups while the volume is being used, so having a container in your docker-compose setup that can properly stop your application container before taking backups lets it copy the data without affecting application performance or backup integrity.
There's also this alternative: volume-backup
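If you do end up scripting it yourself, a common pattern (a minimal sketch, with my_volume and the current directory as placeholders) is to mount the volume into a throwaway container and tar its contents:
docker run --rm -v my_volume:/data -v "$(pwd)":/backup alpine tar czf /backup/my_volume.tar.gz -C /data .
Restoring is the same idea in reverse: mount an empty volume and extract the archive into it.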

Access Host Folder from Docker Container without run -v command

I want to share access between my host (Ubuntu), or an NFS server, and a container or image (Ubuntu). I can't use the -v option, since the container is started by a program that only accepts the container name and runs it itself. Copying is not possible since the folder is big and the content might change regularly.
The NFS mount inside of the container throws the error "Protocol not supported" (done the same way as on the host).
So far I have gathered that a "hardcoded" mount is not possible for images and that NFS mounts might not work with Docker.
I'd be open to some "hacky" solutions as well if Docker doesn't support this.
Bind mounts (the docker run -v option) are the only way to do this. It's considered a major design goal and security feature of Docker that containers can't generally access the host filesystem, so it'd be a major bug if there was some way to bypass this isolation.
You need to change the calling code to include the -v option, or rebuild your image to embed the data you need (if it's read-only).
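A minimal sketch of the first option (the paths and image name here are placeholders, not from the question):
docker run -v /srv/shared-data:/data my-image
For read-only data, the second option would be a line like COPY shared-data/ /data/ in the Dockerfile, followed by rebuilding the image.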

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system, and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which will create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it come up with the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
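For example (my_container is a placeholder name), you can also pull individual settings out of the inspect output instead of reading the whole JSON blob:
docker container inspect --format '{{json .Mounts}}' my_container
docker container inspect --format '{{json .Config.Env}}' my_container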
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
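A minimal sketch of that approach (container and image names are placeholders):
docker export my_container > my_container.tar
docker import my_container.tar myimage:restored
Note again that this captures the container's filesystem, not its volumes or its run-time settings.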

Docker Storage - Getting a Layman's answer

I am just discovering Docker. I am finding so much information, but I can't seem to get a straight answer on this. If someone could give me a clear explanation based on the understanding I have of it so far, it would be appreciated.
I am downloading a Docker image locally, say the default one from Microsoft, microsoft/dotnet-samples:dotnetapp-nanoserver. I am lost as to where this is downloaded to. Is it downloaded and installed as a program on the host machine, with an isolated script that controls the container? The download is about 1.3 GB because it includes .NET Core.
In another example, if I download apache2 to run as a web server, does it get installed in the default paths on the host system, with every container I want to use tapping into that, or does every container contain its own isolated version of apache2?
I ask this because I can't find files that mimic the file size of these programs.
I know they are not complete VMs, but where can I find the files associated with a container?
I am using Windows Server 2016 and a Mac since I want to do some trials with containers.
An image is a filesystem
Docker images are encapsulated filesystems. The software and files inside are not being directly installed onto your system.
You can think of a Docker image sort of the way you think of a .zip file. You can download a .zip file from somewhere, and it is a single file. Contained inside it might be one file, or dozens of files, or a nested tree of directories and files. But on your disk, it exists as one file.
A Docker image is similar (conceptually, at least... the details are more complicated).
Image storage
Where images are stored varies by platform. On a Linux system, they are usually under /var/lib/docker. I don't know where they are stored on Windows, but this is a more or less opaque store. Poking around inside will not reveal very much to you anyway.
To see what you have, you should use the docker images command. It will show you the images you have stored locally.
Each image may consist of multiple layers. By default, that command will only show you the top-level images, which are the ones you'll care about for running containers. Technically there are intermediate layers as well, and you can see all of them using docker images -a.
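If you're curious where that opaque store lives on your machine, docker info can report it (shown here with a format filter; not something you normally need to poke at):
docker info --format '{{.DockerRootDir}}'
On Linux this typically prints /var/lib/docker.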
Where is the software installed?
When you download an Apache image, nothing is installed on your system at all. The image file(s) are downloaded and stored. Hiding inside is Apache and everything Apache needs in order to run, but Apache is not installed onto your Windows OS anywhere.
When you want to use Apache, you would run a container. Docker takes the Apache image and, using it as a starting template, creates a running process container, inside of which Apache is running. This is isolated from your operating system. Apache is only running inside of the container.
If you run a second container from the Apache image, you now have two completely separate Apache instances running, each in their own isolated filesystem environment.
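For instance (using the same placeholder image name as the example below), running the image twice gives you two independent containers:
docker run -d --name apache1 apache:latest
docker run -d --name apache2 apache:latest
Each container gets its own writable filesystem layered on top of the shared, read-only image.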
Where can I find the files?
If you just want to poke around in the container filesystem, you can start the container in interactive mode, and run a shell instead of whatever it normally runs (like Apache). For instance, if you have an image apache:latest, you can do this:
docker run --rm -it apache:latest bash
This will run an instance of apache:latest, but instead of launching Apache, it will run a bash shell and drop you into it.
The --rm flag is convenient for cases like this. It tells Docker to remove the running container when its process exits. That way for a "just looking at something" container like this one, it cleans up after itself.
The -it is actually two flags. -i is interactive mode, and -t allocates a terminal. These are common flags to pass when you want to directly interact with the container.
Once inside, you can use the usual commands to look at files and directory listings. Note that many containers are stripped-down, though. You don't always have all of the tools you are used to having. Things like ls in Linux are typically there, but a lot of things will not be.
Simply type exit when you're done looking around.
Looking around while the process is running
You can also look at the container while Apache is running. First start it normally.
docker run -d apache:latest
This will return a container ID. You can also get the ID from docker ps. Then you can attach to the container with that ID by executing a shell.
docker exec -it <container_id> bash
Now you're in the container in a shell, but Apache is in there running.
