Too many docker mounts - docker

I have a web application inside a Docker image. The web application is a bit complex, so every time I create a new component inside my app I have to mount another directory. The problem is that I end up with a command that has too many mounts:
docker run -v ... -v ... -v ... ... myimage
Is there a better solution for this?

The main idea of dockerization is that you have immutable containers which you can run anywhere with the same result (stateless). If your container has state, that may point to an architectural problem in your application. Maybe you should split your application in two: the first part would be stateless, and the other would manage the first part's storage. Alternatively, you can create all your new directories inside a single volume:
-v "$(pwd)/app_state:/app_state"
with the following app_state directory structure:
app_state
|__ subvolume_1
|__ subvolume_2
|   ...
|__ subvolume_n
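With that layout a single bind mount covers every component, and the application creates its own subdirectories inside it. A minimal sketch (the path and image name are illustrative):
# One mount for all component state; the app manages subvolume_1..subvolume_n inside it
docker run -v "$(pwd)/app_state:/app_state" myimage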

If the problem is that the command becomes too long to type in a terminal, you can use Docker Compose or a custom script. Then you'll be able to mount as many volumes as you want without rewriting the whole thing every time you launch a container.
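For example, a small wrapper script keeps all the mounts in one place (a sketch; the paths and image name are illustrative):
#!/bin/bash
# run_app.sh -- launch the container with every component mount defined once
docker run \
  -v "$(pwd)/app_state:/app_state" \
  -v "$(pwd)/uploads:/opt/app/uploads" \
  -v "$(pwd)/config:/opt/app/config" \
  myimage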

OK, so I suppose your web application stores, somewhere in a database, a list of the projects and the filesystem path where each one is stored. If you can modify the source of the web application, maybe you can add a procedure that creates a file mapping each project to its path. Then create a script that starts your container and mounts each project listed in that file (by parsing it with awk). If you can't modify the web app, I'm sure you can at least read the project list from your database and do the parsing directly in your container's launch script.
So your web app creates a file like this:
Project1 /opt/project1
Project2 /opt/project2
and your container's launch script looks like this:
#!/bin/bash
# Build one "-v host_path:/home/project_name" option per line of projects.txt
VOLUMES=$(awk '{print "-v "$2":/home/"$1}' projects.txt)
docker run $VOLUMES myimage
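If any project paths contain spaces, building the options as a bash array is more robust than relying on word splitting (a sketch against the same projects.txt format):
#!/bin/bash
# Read "name path" pairs and collect one -v option per project
VOLUMES=()
while read -r name path; do
  VOLUMES+=(-v "$path:/home/$name")
done < projects.txt
docker run "${VOLUMES[@]}" myimage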

Related

Docker Desktop on Hyper-V - bind mount do not propagate inotify on file copy

I have Docker Desktop installed on my dev machine, with WSL 2 disabled. I have shared my entire C:/ drive.
Then I have a container with a .NET 6 (Core) application inside that uses a FileSystemWatcher to observe one directory and read any file that is pasted into it.
I read in several articles on the internet that WSL 2 does not propagate notifications from the Windows file system to the underlying Linux distribution that Docker runs on, so there is no way to bind the directory I have to "watch" to the app in the container. So I switched to the old Hyper-V backend of Docker.
I run the container with the following command:
docker run `
--name mlc-importer `
-v C:/temp/DZBank:/opt/docker/mlc_importer/dfs/DZBank `
-v C:\temp\appsettings.json:/app/appsettings.json `
-v C:\temp\log4net.config:/app/log4net.config `
mlc-importer
The container starts and begins "watching" for new files. The strange thing is that when I cut a file and paste it into the directory, the app in the container registers the new file and reads it, but when I copy the file and paste it into the directory, the app in the container does not register or read it.
Can someone help me? I can't figure out where the problem might come from.
Thanks in advance,
Julian
I managed to solve my problem, and I'll post it here in case somebody encounters the same thing.
The problem was in the file itself. I found this out when I started a new container with only Debian, installed inotify-tools, and bound the same path. When I copied the file and pasted it into the bound directory, the output was:
Three MODIFY events.
When I cut the file and pasted it into the new directory, the events were one CREATE and two MODIFY.
So with copy: three MODIFY events; with cut: one CREATE and two MODIFY.
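For anyone who wants to repeat this check, a throwaway container with inotify-tools is enough (a sketch; the bind path is the one from the question and /watch is an arbitrary mount point):
# Disposable Debian container with the same Windows folder bind-mounted
docker run -it --rm -v C:/temp/DZBank:/watch debian bash
# Inside the container: install inotify-tools and watch the folder,
# then copy vs. cut a file into it from Windows and compare the events
apt-get update && apt-get install -y inotify-tools
inotifywait -m /watch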
Then I inspected the copied file and saw this:
When I checked the checkbox and hit OK, everything was fine. And since the app in the container (from the post) only hooks into the "File created" callback, it does not trigger when the file is merely modified.
Hope this helps someone with a similar problem.

Can I run scripts from a docker build context without a copy?

I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context and then have the build context swept away at the end, so I don't have a needless copy of the setup.exe source files embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build-context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but haven't found anything.
It seems like the alternative is to create a private file-server and perform a RUN that can download them from that private server and unpack them, run the install, and remove them (all as 1 docker step). I understand this would make it more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change as the container will build patches for a legacy application. Just seems like a lot to host files on a private, public-facing server for something that will get called once every couple years if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
In your Dockerfile, use a COPY command, run the installer, then RUN del ... to remove the installation files
Build your image docker build -t my-large-image:latest .
Run your image docker run --name my-large-container my-large-image:latest
Stop the container
Export your container filesystem docker export my-large-container > my-large-container.tar
Import the filesystem to a new image cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
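One more caveat: docker export/import flattens the filesystem but drops image metadata such as CMD and ENTRYPOINT, so you may need to re-apply them on import (a sketch reusing the names above; the CMD shown is only an example):
# Re-apply a start command while importing the flattened filesystem
docker export my-large-container | docker import --change 'CMD ["powershell"]' - my-small-image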
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while it's being built.
I didn't find a satisfactory answer to this. Docker seems designed for only the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found that I eventually went with was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.

`docker service update` not working to update config

Firstly, every step seems to complete successfully (no errors are reported). But based on what I understand about how to check whether a new config has been applied, the config update seems to have failed.
Suppose I have a config file with a simple content like this:
Well done
I created a config (the first version) like this:
echo 'Well done' | docker config create my-config -
Now I have a local file named my-config.txt (on the host machine) with the content described above; it's used as a template (source) to copy over the target on the Docker container. On the Docker container there is already a config file with the same (original) content. Now I change the content of my-config.txt (on the host machine) to something like this:
Well done !!!
And next I update the existing service (created earlier) using docker service update to apply the new config, like this:
# first, create another version of the config
docker config create my-config-2 /home/my_user/my-config.txt
docker service update \
--config-add source=my-config-2,target=my-config.txt \
--config-rm my-config \
my-service
As I said, it seems to execute successfully. But when I open the my-config.txt file on the Docker container, its content is unchanged:
docker exec [container_id] cat my-config.txt
It still shows Well done whereas the expected content is Well done !!!. Isn't that what should happen? Am I doing something wrong here? Or could you suggest a way to diagnose this issue, or a different approach from what I've done, even without solving the issue itself?
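Two checks that may help narrow this down (a sketch, using the service name from the question): confirm which config objects the service actually references after the update, and make sure docker exec targets a container created after the update, since docker service update replaces the running containers.
# Show the configs currently attached to the service spec
docker service inspect my-service --format '{{ json .Spec.TaskTemplate.ContainerSpec.Configs }}'
# List the service's tasks; exec into the newest container, not the pre-update one
docker service ps my-service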

How to specify different .dockerignore files for different builds in the same project?

I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. This flag can be used with docker compose since 1.25.0-rc3 by also specifying COMPOSE_DOCKER_CLI_BUILD=1.
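For example (a sketch; the Dockerfile name Dockerfile.foo is taken from above and the rest is illustrative):
# Enable BuildKit so the per-Dockerfile ignore file is honoured
DOCKER_BUILDKIT=1 docker build -f Dockerfile.foo .
# Enable it for docker-compose builds as well
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build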
From Mugen's comment, please note:
the custom dockerignore should be in the same directory as the Dockerfile, not in the root context directory like the original .dockerignore.
I.e., when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use - please see here.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and runs the Docker build there (see the sketch below).
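A minimal sketch of that second option, with hypothetical paths and image names:
#!/bin/bash
# Assemble a throwaway build context containing only what the test build needs
BUILD_DIR=$(mktemp -d)
cp -r src tests Dockerfile.tests "$BUILD_DIR"
docker build -f "$BUILD_DIR/Dockerfile.tests" -t myapp-tests "$BUILD_DIR"
rm -rf "$BUILD_DIR"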
Adding the removed tests back as a volume mount to the container could be an option here. After you build the image, if you are running it for testing, mount the source code containing the tests on top of the cleaned-up code.
services:
tests:
image: my-clean-image
volumes:
- '../app:/opt/app' # Add removed tests
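The tests can then be run through that service (a sketch; how the tests are actually invoked depends on the image's entrypoint):
# Start the tests service with the full source, including tests, mounted over the clean image
docker compose run --rm tests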
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I create an intermediate tar by using the -T flag, which takes a text file listing the files to be included in the tar, so it's not so different from a whitelist-style .dockerignore.
I pipe this tar into the docker build command and specify my Dockerfile, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
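For reference, files-to-include.txt is just a plain list of the paths to pack into the build context, one per line (the entries here are purely illustrative):
src
tests
Dockerfile
package.json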
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
Sadly it isn't currently possible to point to a specific file to use for .dockerignore, so we generate it in our build script based on the target/platform/image. As a docker enthusiast it's a sad and embarrassing workaround.

How to set volume for dokku-persistent-storage

I am trying to use dokku-persistent-storage so the uploads for my Rails app stay on the server, but I don't quite understand how to build the path, since I am new to Dokku and Docker.
(I am running this on an Ubuntu droplet on Digital Ocean)
I'm not sure if it should be something like this:
[SERVER IP ADDRESS]/home/dokku/myapp/public_folder
or
/home/dokku/myapp/public_folder
or if I'm way off and it should be something completely different.
This is what the GitHub page says about it:
In your applications folder (/home/dokku/app_name) create a file called PERSISTENT_STORAGE.
Inside this file list one volume-map/volume per line to mount. For example:
/host/path:/container/path
/another/container/path
The above example will result in the following arguments being passed to docker during deploy and docker run:
-v /host/path:/container/path -v /another/container/path
More information on Docker volumes can be found here: http://docs.docker.io/en/latest/use/working_with_volumes/
I am not into Ruby or Dokku, but if I understood correctly, you want your Docker container to have persistent storage on the host machine.
The PERSISTENT_STORAGE file, according to the documentation you've quoted, contains mappings from host filesystem directories to your container's filesystem directories (translated to -v arguments of the CLI).
Therefore, you should map the directory your uploads are saved to in the container to the desired directory on the host.
For example, if your app's uploads are saved to this dir (inside the docker container):
/home/dokku/myapp/public_folder
and you'd like them to be kept in your host at:
/home/some/dir
then, as I understand, the content of PERSISTENT_STORAGE file should be:
/home/some/dir:/home/dokku/myapp/public_folder
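Concretely, creating that file on the Dokku host could look like this (a sketch; the app folder follows the /home/dokku/app_name convention quoted above, and the paths are the ones from the example):
# One host_path:container_path mapping per line
echo "/home/some/dir:/home/dokku/myapp/public_folder" > /home/dokku/myapp/PERSISTENT_STORAGE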
I hope I got you right.
Use Dokku's storage:mount option.
You'll need to SSH into your dokku host:
ssh dokku@host
Run the following command to link the storage directory for that app to the app/public/uploads folder, for example:
storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
The Dokku docs cover this well at: http://dokku.viewdocs.io/dokku/advanced-usage/persistent-storage/
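After mounting, the app typically needs a restart or redeploy before the new volume is picked up (a sketch, keeping the placeholder app name; run it over the same dokku@host SSH channel as above, or as dokku ps:restart <app> directly on the host):
# Restart the app so the newly mounted storage takes effect
ssh dokku@host ps:restart <app>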
