Sharing /tmp folder between host and container - docker

I am currently splitting my single application with multiple services into multiple Docker containers, one per service. My current problem is the following: these services originally share some files located in the tmp/ folder, so the host's tmp/ folder would need to be mounted and shared among the containers.
I already tried the most obvious solution, which is mounting the folders the way it would normally be done:

services:
  first:
    image: first_image
    build:
      context: .
      dockerfile: first/Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./tmp/:/tmp

This does share the folders as intended, but it has the side effect of deleting all the contents of the containers' tmp/ folders, which breaks the application.
Is there any way to properly do what I am trying to do? Or at least, is there a way to mount the files before my service inside the container starts? That way the temp files would be created afterwards and not be deleted. This thread was the closest I found to my problem, but it didn't address the files being deleted:
Sharing /tmp between two containers
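One pattern worth sketching here (my own assumption, not from the linked thread): share a named volume between the services instead of bind-mounting the host's tmp/. When an empty named volume is first mounted, Docker copies the mount point's existing contents from the image into it, so the containers' pre-existing tmp/ files are preserved rather than hidden. The second service is illustrative:

services:
  first:
    image: first_image
    build:
      context: .
      dockerfile: first/Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - shared_tmp:/tmp   # named volume instead of a host bind mount
  second:                 # hypothetical second service sharing the same /tmp
    image: second_image
    volumes:
      - shared_tmp:/tmp

volumes:
  shared_tmp:

The trade-off: the files live in a Docker-managed volume, not in the host's ./tmp/ directory.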

Related

Exclude sub-folder when mounting host to volume docker [duplicate]

Suppose I have a Docker container and a folder on my host, /hostFolder. Now if I want to add this folder to the Docker container as a volume, I can do this either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (whether read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without having them reflected on the host as well.
Is this possible? If so, how?
Using docker-compose I'm able to use node_modules locally but ignore it in the docker container, using the following syntax in the docker-compose.yml:

volumes:
  - './angularApp:/opt/app'
  - /opt/app/node_modules/

So everything in ./angularApp is mapped to /opt/app, and then I create another mounted volume at /opt/app/node_modules/, which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
If you want subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:

volumes:
  node_modules:

services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules

This will mount your current directory as a shared volume, but mount a persistent Docker volume in place of your local node_modules directory. This is similar to the answer by @kernix, but it allows node_modules to persist between docker-compose up runs, which is likely the desired behavior.
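To check that the named volume actually survives between runs (the volume name is prefixed with the compose project name; the grep pattern below is illustrative):

docker-compose up -d
docker-compose down          # without -v, named volumes are kept
docker volume ls | grep node_modules

Running docker-compose down -v instead would delete the volume along with the containers.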
For those trying to get a nice workflow going where node_modules isn't overridden by the local directory, this might help.
Change your docker-compose file to mount an anonymous persistent volume at node_modules to prevent your local directory from overriding it. This has been outlined in this thread a few times.

services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules

This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without it, if you added a new package and rebuilt your image, the containers would still be using the node_modules from your initial docker-compose launch.

-V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                           data from the previous containers.
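For example, after adding a package (a minimal sketch; the service name server comes from the snippet above):

docker-compose build server
docker-compose up -d -V   # recreate containers and renew anonymous volumes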
To exclude a file, use the following:

volumes:
  - /hostFolder:/folder
  - /dev/null:/folder/fileToBeExcluded
With the docker command line:

docker run \
    --mount type=bind,src=/hostFolder,dst=/containerFolder \
    --mount type=volume,dst=/containerFolder/subFolder \
    ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes; you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you can COPY all the files and then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to the image with COPY or ADD must be inside the build context, i.e. in or below the directory you run docker build from.
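A minimal sketch of both options (all paths are illustrative):

# .dockerignore — keeps the sub-folder out of the build context entirely
hostFolder/subFolder

# Dockerfile — alternative: copy everything, then remove the sub-folder
FROM alpine:3.19
COPY hostFolder/ /containerFolder/
RUN rm -rf /containerFolder/subFolder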
For the people who also had the issue that the node_modules folder would still be overwritten from the local system (and the other way around):

volumes:
  node_modules:

services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/

This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me).
Creating an empty folder and mapping the target folder to it helped, though:

volumes:
  - ./angularApp:/opt/app
  - .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create a named volume called "exclude" in the Docker volume manager. The volume name "exclude" is arbitrary, so you can use a custom name instead.

services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the volume below prevents our host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/

volumes:
  exclude:

You can see more info about volumes in the official docs - Use a volume with docker compose.
To exclude a mounted file contained in the volume of your machine, you will have to overwrite it by allocating a volume to that same file.
In your config file:

services:
  server:
    build: ./Dockerfile
    volumes:
      - .:/app

An example in your Dockerfile:

# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

how to dynamically get new folders with files in volumes in Docker to my host

I have a folder called 'Transfer'. During the execution of my program, new folders with files can be created in the 'Transfer' folder.
How do I dynamically transfer all newly created folders and files in Docker to my PC?
I tried to do something like this, but it doesn't work:
In the docker-compose.yml file for my transfer service, I added a volume called files:

transfer:
  build: ./transfer
  ports:
    - 6666:6666
  volumes:
    - ./:/files

Here ./ is a folder on the host, next to docker-compose.yml; the new folders with files from my Docker volume called files should appear there, and /files is the volume path inside Docker.
At the end of docker-compose.yml I created this files volume:

volumes:
  files:
You say at the start of your post that the folder in the container is called 'Transfer'. That's the folder you need to map to a folder on your host machine.
If the Transfer folder is at the root of the file system, i.e. /Transfer, you can do
transfer:
  build: ./transfer
  ports:
    - 6666:6666
  volumes:
    - ./:/Transfer
Then the . directory on the host and the /Transfer directory in the container will in effect be the same. Any changes done to one of them will be visible in the other.
You don't need the volume definition you have at the bottom of your post.
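A quick way to verify the mapping works both ways (assuming the service is named transfer, as above):

docker compose exec transfer sh -c 'mkdir -p /Transfer/demo && touch /Transfer/demo/file.txt'
ls ./demo   # file.txt should now be visible on the host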

Mapping the docker compose volume from container to host is not working

I have a very simple Next.js application with two folders which I'd like to map to the host (developer system) while I deploy this application inside Docker (I use Docker Desktop):
Data folder (it has some JSON files and also some nested folders and files)
Public folder (it has nested folders too, but it contains images)
I have tested locally and also inside the docker container (without volumes and all that) - it's all functioning.
As a next step, I want to use volumes with my docker-compose file so that I can bind these directories inside the container to the source (and, going forward, to AKS file storage options).
I have tried multiple approaches (and also checked some of the answers on Stack Overflow) but none of them helped me achieve the same result.
Here is my docker-compose file for your reference.
version: '3.4'

services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules
      # anonymous volume only for node_modules
      - portfolio_data:/app/data
      # named volume inside which the nextjs app writes content to the file

volumes:
  portfolio_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /APP-03/clientapp/data
      # I have tried here to give a full path like /mnt/c/work/.../APP-03/clientapp/data but that also is not working.
Using Docker Desktop I can see the volume is indeed created for the container, and it has all the files. However, it does not get reflected in my source if anything is updated inside that volume (e.g. when I add some content to that file through the Next.js application, the change is not reflected in the running container either).
In case someone wants to know my folder hierarchy and where I am running the docker-compose file, here is that reference image.
I had a similar problem installing Gitea on my NAS until someone (thankfully) told me a way to compromise (i.e. your data will be persistent, but not in the location of your choosing).
version: '3.4'

volumes:
  portfolio_data: {}

services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules
      # anonymous volume only for node_modules
      - portfolio_data:/app/data
In my particular case, I had to access my NAS using a terminal, go to the location where the container image is stored, and search from there.
Hope it helps you.
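As a side note (my own assumption, not part of the original answer): a bind-backed named volume with driver_opts only works if device: is an absolute path that already exists on the Docker host; a path like /APP-03/clientapp/data that does not exist there will make the volume fail to mount. A sketch with an illustrative absolute path:

volumes:
  portfolio_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      # must be an absolute, pre-existing path on the Docker host
      device: /home/me/work/APP-03/clientapp/data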

Docker compose Volume - Uploaded files

I have a basic application running inside a docker container. The application is a page where users can upload files. The uploaded files are stored inside app/myApp/UploadedFiles (the app folder is where the container installs my application).
If I restart the container, I lose all files stored inside the folder app/myApp/UploadedFiles.
What is the best approach to persist the uploaded files even if I restart the container?
I tried to use volumes inside my docker compose file:

volumes:
  - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles

This creates a folder at home > var > myFiles, but if I upload files I never see them in this directory.
How can I do that?
My goal is to persist the files and be able to access them, for example to download them.
Thanks
EDIT:
I created an App Service in Azure using Container Registry and this docker compose file:
version: '2.0'

services:
  myWebSite:
    image: myWebSite.azurecr.io/myWebSite:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    ports:
      - 5001:443
If I upload a file on the web site, the file goes to /myApp/UploadedFiles.
Using BASH I can go to /home/var/myFiles, but there aren't any files inside.
I don't know if this is the correct approach. I can have the same problem with my application logs; I don't know how to read those either.
Besides declaring the volume in the service you are using, you need to create another top-level section that declares the named volume itself, linking it in both directions (from the container to the machine and back).
Like this, in the service:

volumes:
  - database:/var/lib/postgresql/data

and at the top level of the file:

volumes:
  database:
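Applied to the question's setup, a minimal sketch (the named volume myFiles is illustrative; the service name and container path come from the question):

version: '2.0'

services:
  myWebSite:
    image: myWebSite.azurecr.io/myWebSite:latest
    volumes:
      - myFiles:/myApp/UploadedFiles
    ports:
      - 5001:443

volumes:
  myFiles:

With a named volume the uploads survive container restarts, though they live in Docker's volume storage rather than at a host path of your choosing.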

Make a directory from one container available to another one while keeping files from the original one

Let's say you have an image with a Rails application containing assets, and you want to serve them from another container running Nginx.
From what I gather, mounting a volume makes the contents of a directory disappear. So, if you mount one volume into two containers, like

volumes:
  assets:

services:
  app:
    volumes:
      - assets:/app/public/assets
  nginx:
    volumes:
      - assets:/assets

they will both see an empty folder. You can very well fill it up by hand, but if you were to deploy a newer version of the Rails app image, those two won't see the changes.
Am I missing something? Is there a way to handle files without proxying them to the Rails app or copying them from container to container?
UPD: the first container with a non-empty directory that gets the volume mounted determines its initial content.
You can add the following lines to your Rails image's Dockerfile (in the CMD or ENTRYPOINT), to run before starting Rails:

rm -r /assets/*
cp -r /app/public/assets/* /assets

And mount the volume at /assets in both services.
This way, every time your container restarts (on docker stack deploy when it has changed), the volume is refilled with fresh assets that are visible to the nginx container.
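One way that could look as an entrypoint script (the file name and the final exec line are illustrative, not from the original answer):

#!/bin/sh
# docker-entrypoint.sh — refresh the shared assets volume on every start
set -e
rm -rf /assets/*
cp -r /app/public/assets/* /assets/
exec "$@"   # hand off to the image's main command (e.g. the Rails server)

And in the Dockerfile:

COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["rails", "server", "-b", "0.0.0.0"]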
