Copy data from host to container efficiently - docker

The idea is a Docker container that trains machine learning models for computer vision.
The training data needs to be uploaded to the container, consumed, and deleted afterwards.
Is there any way to use a volume and transport data efficiently between the host and the container?
When I searched the web, most sources mentioned manual transport via bash or something similar, while my application needs to do this in an automated and repeating way for different datasets.
The host machine is Windows and the container is Linux.
EDIT: there is a main application running on the machine which is responsible for managing the process:
send data to the container (somehow?)
trigger the training process
the training process runs asynchronously so it does not block the REST API
Any ideas?

You can use bind mounts when running your Docker container.
See https://docs.docker.com/storage/bind-mounts/
You specify a directory inside your container which is mounted to a local directory on your host; this way, all container write operations land directly in your host directory.
I'd recommend doing it with docker-compose, for example below:
the container directory /opt/Projects/01_MyProject will be mounted to the host directory /Volumes/Partition_2/Docker-Volumes/Volume_1.
# docker-compose.yml
...
    volumes:
      - PERSIST_DATA:/opt/Projects/01_MyProject

volumes:
  PERSIST_DATA:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/Partition_2/Docker-Volumes/Volume_1
This way, you can copy anything to /Volumes/Partition_2/Docker-Volumes/Volume_1 on the host, and it will show up under /opt/Projects/01_MyProject inside the container.
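For the automated, repeating workflow in the question, the main application can drop each dataset into the mounted host folder and then kick off training without blocking. A minimal sketch, assuming a running container named trainer and a train.py script (both hypothetical names; paths follow the example above):

# Host side: copy the next dataset into the bind-mounted folder.
cp -r ./datasets/batch_042 /Volumes/Partition_2/Docker-Volumes/Volume_1/batch_042
# Trigger training inside the running container; docker exec -d detaches
# immediately, so the calling REST API is not blocked.
docker exec -d trainer python train.py --data /opt/Projects/01_MyProject/batch_042

The training script itself can delete the dataset folder when it finishes, which covers the consume-and-delete requirement.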

Related

How to organize data into separate vhdx disks using Docker (on WSL2)?

I am running Docker on Windows 11 with WSL2 integration enabled. If I create a volume named "my_awesome_volume" in Docker Desktop then the following folder is created:
\\wsl.localhost\docker-desktop-data\data\docker\volumes\my_awesome_volume\_data
I understand that this data lives in the ext4.vhdx for docker-desktop-data. I don't want to store data for all of my containers here. I'd like to isolate some of my data into separate vhdx disks for portability and organization purposes. So, I created a new my_storage.vhdx for data storage, formatted it as ext4, and mounted it in WSL. This was successful, and I can access/read/write this storage from any distro on my system using the following path from within the distro:
/mnt/wsl/my_storage_folder
Or from Windows (any distro works but my example uses docker-desktop-data):
\\wsl.localhost\docker-desktop-data\mnt\wsl\my_storage_folder
I am unable to access this storage using a Docker volume, though.
I understand how to create a volume with access to the host file system, like this:
volumes:
  - /c/my_host_folder/config:/config
Of course, performance is better if files aren't read from the Windows host, but if I do this:
volumes:
  - my_awesome_volume:/config
My data ends up in the docker-desktop-data vhdx.
Is it possible to create a Docker volume (in Windows w/ WSL2) that points to a folder in the my_storage.vhdx I created? How?
I tried to follow a couple of examples using the local driver options, but I couldn't get anything to work.
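For reference, the local-driver pattern those examples use is the same one shown in the first answer on this page; a sketch pointed at the /mnt/wsl mount from the question (untested, and whether Docker Desktop's WSL2 backend resolves /mnt/wsl at mount time is exactly the open question):

volumes:
  my_storage_volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/wsl/my_storage_folder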

How to copy files to a docker container via docker compose, without binding to host?

I/O is very slow in Docker for Mac due to the overhead of syncing files between the host and the container. I want to be able to use docker-compose to copy my application files to a certain directory in the container, but WITHOUT any kind of syncing/mounting. If I modify my application, I would just run docker-compose restart to create a new container using the updated files. It seems that all Docker volume functionality necessitates some kind of syncing to/from the host, which I don't want. How do I accomplish this?
My current volume config, which DOES do syncing, looks like this:
volumes:
  - ./application:/var/www/application:cached
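One common way to get files into the container with no host syncing at all is to bake them into the image and rebuild on change. A sketch, with the base image as an assumption:

# Dockerfile (base image is an assumption)
FROM php:fpm-alpine
# Files are copied into the image at build time; nothing is synced at runtime.
COPY ./application /var/www/application

With build: . on the service and the volume entry removed, docker-compose up -d --build then replaces docker-compose restart after each change.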

docker volume container strategy

Let's say you are trying to dockerise a database (couchdb for example).
Then there are at least two assets you consider volumes for:
database files
log files
Let's further say you want to keep the db-files private but want to expose the log-files for later processing.
As far as I understand the documentation, you have two options:
First option
define managed volumes for both, log- and db-files within the db-image
import these in a second container (you will get both) and work with the logs
Second option
create data container with a managed volume for the logs
create the db-image with a managed volume for the db-files only
import logs-volume from data container when running db-image
Two questions:
Are both options really valid/possible?
What is the better way to do it?
br volker
The answer to question 1 is that, yes both are valid and possible.
My answer to question 2 is that I would consider a different approach entirely; which one to choose depends on whether this is a mission-critical system where data loss must be avoided.
Mission critical
If you absolutely cannot lose your data, then I would recommend that you bind mount a reliable disk into your database container. Bind mounting is essentially mounting a part of the Docker Host filesystem into the container.
So taking the database files as an example, you could imagine these steps:
Create a reliable disk e.g. NFS that is backed-up on a regular basis
Attach this disk to your Docker host
Bind mount this disk into your database container, which then writes database files to this disk.
So following the above example, let's say I have created a reliable disk that is shared over NFS and mounted on my Docker Host at /reliable/disk. To use that with my database I would run the following Docker command:
docker run -d -v /reliable/disk:/data/db my-database-image
This way I know that the database files are written to reliable storage. Even if I lose my Docker Host, I will still have the database files and can easily recover by running my database container on another host that can access the NFS share.
You can do exactly the same thing for the database logs:
docker run -d -v /reliable/disk/data/db:/data/db -v /reliable/disk/logs/db:/logs/db my-database-image
Additionally you can easily bind mount these volumes into other containers for separate tasks. You may want to consider bind mounting them as read-only into other containers to protect your data:
docker run -d -v /reliable/disk/logs/db:/logs/db:ro my-log-processor
This would be my recommended approach if this is a mission critical system.
Not mission critical
If the system is not mission critical and you can tolerate a higher potential for data loss, then I would look at the Docker volume API, which exists precisely for what you want to do: managing and creating volumes for data that should live beyond the lifecycle of a container.
The nice thing about the docker volume command is that it lets you create named volumes, and if you name them well it can be quite obvious to people what they are used for:
docker volume create db-data
docker volume create db-logs
You can then mount these volumes into your container from the command line:
docker run -d -v db-data:/db/data -v db-logs:/logs/db my-database-image
These volumes will survive beyond the lifecycle of your container and are stored on the filesystem of your Docker host. You can use:
docker volume inspect db-data
to find out where the data is being stored, and back up that location if you want to.
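For a local volume, the output looks roughly like this (abridged; fields vary by Docker version); Mountpoint is the directory to back up:

$ docker volume inspect db-data
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/db-data/_data",
        "Name": "db-data",
        "Scope": "local"
    }
]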
You may also want to look at something like Docker Compose which will allow you to declare all of this in one file and then create your entire environment through a single command.
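A sketch of the same setup in a docker-compose file, reusing the image and volume names from the commands above:

services:
  db:
    image: my-database-image
    volumes:
      - db-data:/db/data
      - db-logs:/logs/db

volumes:
  db-data:
  db-logs: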

docker data volume vs mounted host directory

We can have a data volume in docker:
$ docker run -v /path/to/data/in/container --name test_container debian
$ docker inspect test_container
...
Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/path/to/data/in/container",
"Driver": "local",
"Mode": "",
"RW": true
}
]
...
But if the data volume lives in /var/lib/docker/volumes/fac362...80535/_data, is it any different from having the data in a folder mounted using -v /path/to/data/in/container:/home/user/a_good_place_to_have_data?
Although using volumes and bind mounts feels the same (with the only change being the location of the directory), there are differences in behavior.
Volumes vs Bind Mounts
With Bind Mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine.
With Volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's content.
Volumes advantages over bind mounts:
Volumes are easier to back up or migrate than bind mounts.
You can manage volumes using Docker CLI commands or the Docker API.
Volumes work on both Linux and Windows containers.
Volumes can be more safely shared among multiple containers.
Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
A new volume’s contents can be pre-populated by a container.
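To illustrate that last point, a quick sketch: mounting a fresh named volume over a path that already contains data in the image copies that data into the volume:

docker volume create webroot
# nginx's default html files are copied into the empty volume on first mount
docker run -d -v webroot:/usr/share/nginx/html nginx:alpine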
EDIT (9.9.2019):
As Sebi2020 notes in a comment, bind mounts are much easier to back up. Docker doesn't provide any command to back up volumes; you have to use a temporary container with a bind mount to create backups.
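That temporary-container trick looks like this (the volume name is an assumption): mount the volume alongside a host bind mount, then tar the volume's contents out to the host:

docker run --rm -v db-data:/data -v "$(pwd)":/backup ubuntu \
    tar cvf /backup/db-data.tar /data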
Volumes
Created and managed by Docker. You can create a volume explicitly
using the docker volume create command, or Docker can create a volume
during container or service creation.
When you create a volume, it is stored within a directory on the
Docker host. When you mount the volume into a container, this
directory is what is mounted into the container. This is similar to
the way that bind mounts work, except that volumes are managed by
Docker and are isolated from the core functionality of the host
machine.
A given volume can be mounted into multiple containers simultaneously.
When no running container is using a volume, the volume is still
available to Docker and is not removed automatically. You can remove
unused volumes using docker volume prune.
When you mount a volume, it may be named or anonymous. Anonymous
volumes are not given an explicit name when they are first mounted
into a container, so Docker gives them a random name that is
guaranteed to be unique within a given Docker host. Besides the name,
named and anonymous volumes behave in the same ways.
Volumes also support the use of volume drivers, which allow you to
store your data on remote hosts or cloud providers, among other
possibilities.
Bind mounts
Available since the early days of Docker. Bind mounts have limited
functionality compared to volumes. When you use a bind mount, a file
or directory on the host machine is mounted into a container. The file
or directory is referenced by its full path on the host machine. The
file or directory does not need to exist on the Docker host already.
It is created on demand if it does not yet exist. Bind mounts are very
performant, but they rely on the host machine’s filesystem having a
specific directory structure available. If you are developing new
Docker applications, consider using named volumes instead. You can’t
use Docker CLI commands to directly manage bind mounts.
There are also tmpfs mounts.
tmpfs mounts
A tmpfs mount is not persisted on disk, either on the Docker host or
within a container. It can be used by a container during the lifetime
of the container, to store non-persistent state or sensitive
information. For instance, internally, swarm services use tmpfs mounts
to mount secrets into a service’s containers.
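A minimal example (the mount point and size are illustrative):

# /scratch exists only in memory and vanishes when the container stops
docker run -d --tmpfs /scratch:rw,size=64m my-image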
Reference:
https://docs.docker.com/storage/
is it any different from having the data in a folder mounted using -v /path/to/data/in/container:/home/user/a_good_place_to_have_data?
It is because, as mentioned in "Mount a host directory as a data volume"
The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile because built images should be portable. A host directory wouldn’t be available on all potential hosts.
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
You can combine both approaches:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
Here we’ve launched a new container and mounted the volume from the dbdata container.
We’ve then mounted a local host directory as /backup.
Finally, we've passed a command that uses tar to back up the contents of the dbdata volume to a backup.tar file inside our /backup directory. When the command completes and the container stops, we'll be left with a backup of our dbdata volume.
Yes, this is quite different from a few perspectives. Like you wrote in the question's title, it is about understanding why we need data volumes vs a bind-mounted host directory.
Part 1 - Basic scenarios with examples
Lets take 2 scenarios.
Case 1: Web server
We want to provide our web server a configuration file that might change frequently. For example: exposing ports according to the current environment.
We could rebuild the image each time with the relevant setup, or create two different images, one per environment. Neither of these solutions is very efficient.
With Bind mounts Docker mounts the given source directory into a location inside the container.
(The original directory / file in the read-only layer inside the union file system will simply be overridden).
For example - binding a dynamic port to nginx:
version: "3.7"
services:
web:
image: nginx:alpine
volumes:
- type: bind #<-----Notice the type
source: ./mysite.template
target: /etc/nginx/conf.d/mysite.template
ports:
- "9090:8080"
environment:
- PORT=8080
command: /bin/sh -c "envsubst < /etc/nginx/conf.d/mysite.template >
/etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
(*) Notice that this example could also be solved using Volumes.
Case 2 : Databases
Docker containers do not store persistent data: any data written to the writable layer in the container's union file system is lost once the container is removed.
But what if we have a database running in a container, and the container is removed - does that mean all the data is lost?
Volumes to the rescue.
Those are named file system trees which are managed for us by Docker.
For example - persisting Postgres SQL data:
services:
  db:
    image: postgres:latest
    volumes:
      - "dbdata:/var/lib/postgresql/data"
      # or, in the long syntax:
      # - type: volume # <----- notice the type
      #   source: dbdata
      #   target: /var/lib/postgresql/data

volumes:
  dbdata:
Notice that in this case, for named volumes, the source is the name of the volume
(for anonymous volumes, this field is omitted).
Part 2 - Comparison
Differences in management and isolation on the host
Bind mounts exist on the host file system and are managed by the host maintainer. Applications and processes outside of Docker can also modify them.
Volumes are also implemented on the host, but Docker manages them for us, and they cannot be accessed outside of Docker.
Volumes are a much wider solution
Although both solutions help us to separate the data lifecycle from containers,
by using Volumes you gain much more power and flexibility over your system.
With Volumes we can design our data effectively and decouple it from other parts of the system by storing it in dedicated remote locations (e.g., in the cloud) and integrate it with external services like backups, monitoring, encryption and hardware management.
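For example, the built-in local driver can already place a named volume on NFS (the server address and export path are assumptions):

docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.0.5,rw \
    --opt device=:/exports/dbdata \
    nfs-dbdata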
The difference between a host directory and a data volume is that Docker manages the latter by placing it into the $DOCKER-DATA-DIR/volumes directory and attaching a reference to it (a name or a randomly generated id). That is, you get a little bit of convenience.
Both host directories and data volumes are directories on the host, and both are host-dependent. You can't reference either of them in a Dockerfile: the VOLUME directive creates a new nameless volume (with a randomly generated id) every time you launch a new container, and it cannot reference an existing volume.
* $DOCKER-DATA-DIR is /var/lib/docker here unless you changed the defaults.
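The VOLUME behavior described above, as a two-line illustration:

FROM debian
# every docker run of this image gets a fresh anonymous volume at /data,
# unless the run command overrides it with -v
VOLUME /data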

Appropriate use of Volumes - to push files into container?

I was reading Project Atomic's guidance for images, which states that the two main use cases for a volume are:
sharing data between containers
when writing large files to disk
I have neither of these use cases in my example using an Nginx image. I intended to mount a host directory as a volume in the path of the Nginx docroot in the container. This is so that I can push changes to a website's contents into the host rather than addressing the container. I feel it is easier to use this approach since I can - for example - just add my ssh key once to the host.
My question is, is this an appropriate use of a data volume and if not can anyone suggest an alternative approach to updating data inside a container?
One of the primary reasons for using Docker is to isolate your app from the server. This means you can run your container anywhere and get the same result. This is my main use case for it.
If you look at it from that point of view, having your container depend on files on the host machine for a deployed environment is counterproductive: running the same container on a different machine may result in different output.
If you do NOT care about that, and are just using docker to simplify the installation of nginx, then yes you can just use a volume from the host system.
Think about this though...
#Dockerfile
FROM nginx
ADD . /myfiles
# docker-compose.yml
web:
  build: .
You could then use docker-machine to connect to your remote server and deploy a new version of your software with easy commands
docker-compose build
docker-compose up -d
even better, you could do
docker build -t me/myapp .
docker push me/myapp
and then deploy with
docker pull me/myapp
docker run me/myapp
There are a number of ways to update data in containers. Host volumes are a valid approach and probably the simplest way to make your data available.
You can also copy files into and out of a container from the host. You may need to commit afterwards if you are stopping and removing the running web-host container.
docker cp /src/www webserver:/www
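If the change does need to survive removal of the container, the commit step mentioned above persists it as a new image (the tag is illustrative):

docker commit webserver webserver:updated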
You can copy files into a Docker image at build time from your Dockerfile, which is the same process as above (copy and commit). Then restart the webserver container from the new image.
COPY /src/www /www
But I think the host volume is a good choice.
docker run -v /src/www:/www webserver command
Docker data containers are also an option for mounted volumes but they don't solve your immediate problem of copying data into your data container.
If you ever find yourself thinking "I need to ssh into this container", you are probably doing it wrong.
Not sure if I fully understand your request, but why do you need to push files into the Nginx container that way?
Manage the volume in a separate Docker container; that's my suggestion, and it is recommended by Docker.io.
Data volumes
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
refer: Manage data in containers
As said, one of the main reasons to use Docker is to always achieve the same result. A best practice is to use a data-only container.
With docker inspect <container_name> you can find the path of the volume on the host and update the data manually, but this is not recommended;
or you can retrieve data from an external source, like a git repository
