I have a basic application running inside a Docker container. The application is a page where users can upload files. The uploaded files are stored inside app/myApp/UploadedFiles (the app folder is where the container installs my application).
If I restart the container I lose all the files stored inside app/myApp/UploadedFiles.
What is the best approach to persisting the uploaded files across container restarts?
I tried to use volumes inside my docker-compose file:
volumes:
  - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles
This creates a folder at home > var > myFiles, but when I upload files I never see them in that directory.
How can I do that?
My goal is to persist the files and be able to access them, for example to download them.
Thanks
EDIT:
I created an App Service in Azure using Container Registry and this docker compose:
version: '2.0'
services:
  myWebSite:
    image: myWebSite.azurecr.io/myWebSite:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    ports:
      - 5001:443
If I upload a file in the web site the file goes to /myApp/UploadedFiles
Using Bash I can go to /home/var/myFiles, but there aren't any files inside.
I don't know if this is the correct approach. I could have the same problem with my application logs; I don't know how to read them either.
Besides declaring the volume in the service that uses it, you need a top-level volumes section that declares the named volume itself, so that Docker links it in both directions (container to host and host to container).
Like this:
volumes:
  - database:/var/lib/postgresql/data
volumes:
  database:
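Put together, a minimal compose file using this pattern might look like the following (the service and volume names here are illustrative):

```yaml
version: '3.8'
services:
  db:
    image: postgres:15
    volumes:
      # service-level entry: mount the named volume into the container
      - database:/var/lib/postgresql/data

# top-level section: declares the named volume so Docker creates and manages it
volumes:
  database:
```

With this in place, `docker compose down` followed by `docker compose up` reuses the same volume, so the data survives container restarts.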
Related
I have a very simple Next.js application with two folders that I'd like to map to the host (the developer's system) while deploying the application inside Docker (I use Docker Desktop).
Data folder (it has some JSON files and also some nested folders and files)
Public folder (it has nested folders too, but it contains images)
I have tested locally and also inside the Docker container (without any volumes) - everything works.
As a next step I want to use volumes in my docker-compose file so that I can bind these directories inside the container to the source (and, going forward, to AKS file-storage options).
I have tried multiple approaches (and checked several answers on Stack Overflow), but none of them achieves this result.
Here is my docker-compose file for your reference.
version: '3.4'
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      # anonymous volume only for node_modules
      - /app/node_modules
      # named volume into which the nextjs app writes content
      - portfolio_data:/app/data
volumes:
  portfolio_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /APP-03/clientapp/data
      # I have tried giving a full path here like /mnt/c/work/.../APP-03/clientapp/data but that also does not work.
Using Docker Desktop I can see that the volume is indeed created for the container and that it has all the files. However, updates are not reflected in my source: if I add some content to a file through the Next.js application, the change does not show up on the host.
In case someone wants to know my folder hierarchy and where I am running the docker-compose file, here is a reference image.
I had a similar problem installing Gitea on my NAS until someone (thankfully) showed me a compromise (i.e. your data will be persistent, but not in the location of your choosing).
version: '3.4'
volumes:
  portfolio_data: {}
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      # anonymous volume only for node_modules
      - /app/node_modules
      - portfolio_data:/app/data
In my particular case, I had to access my NAS over a terminal, go to the location where the container's volume data is stored, and search from there.
Hope it helps.
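If you want to find where Docker actually keeps a named volume's data on the host, one way (assuming you can run the Docker CLI on that machine, and using the volume name from the compose file above) is:

```shell
# list volumes and find the one created by compose
# (compose usually prefixes it with the project name)
docker volume ls

# print the host path where the volume's files live
docker volume inspect portfolio_data --format '{{ .Mountpoint }}'
```

On a typical Linux host this prints a path like /var/lib/docker/volumes/portfolio_data/_data; on Docker Desktop the path is inside the desktop VM rather than on your native filesystem, which is why you may not see it when browsing normally.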
I have an app developed in .NET Core, and I use Docker to deploy it on a Linux VPS.
The app has a feature that lets users upload files, and I store them in wwwroot. I have used Docker volumes to externalize the folder.
But every time I do a build I lose all the files that users uploaded, which is expected...
Update: this is how I'm declaring the volume:
app:
  image: app
  depends_on:
    - "postgres_image"
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "5000:5000"
  volumes:
    - app_wwwroot:/wwwroot
My question is: what is the best approach to making changes to the app (building the source code and shipping a new release) without losing the uploaded files?
Thanks.
It would've been better if you had shown how you are using Docker volumes to persist the wwwroot data.
If you want to persist your data you can use either bind mounts or volumes, in the docker run command or in docker-compose.
I usually use bind mounts instead of volumes when I want to persist data:
docker run -v './path/on/your/machine:/path/inside/the/container' image_name
or In docker compose
version: '3.8'
services:
  app:
    image: image_name
    volumes:
      - './path/on/your/machine:/path/inside/the/container'
As you can see, you mount ./path/on/your/machine from your host machine onto /path/inside/the/container, which in your case holds the wwwroot data.
Any change made to a mapped directory or file, whether inside the container or on the host, affects both.
Rebuilding the image won't affect the mounted directories or files.
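If you'd rather let Docker manage the storage location instead of choosing a host path, the same idea with a named volume would look roughly like this (the volume name is illustrative, reusing the mount target from your compose file):

```yaml
version: '3.8'
services:
  app:
    image: app
    volumes:
      # named volume: survives image rebuilds and container re-creation
      - app_wwwroot:/wwwroot

# top-level declaration so Docker creates and keeps the volume
volumes:
  app_wwwroot:
```

Either way, `docker-compose build && docker-compose up -d` replaces the container but leaves the mounted data untouched.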
My docker-compose file looks like:
app:
  image: app
  restart: always
  ports:
    - 127.0.0.1:8080:8080
As far as I know, Docker stores logs on a virtual disk, so how can I copy the logs from there and store them on my host machine?
In fact, I tried adding
volumes:
  - ./logs:/home/logs
but only the logs directory gets created; there are no log files inside. What am I doing wrong?
I have a suspicion that the target folder in the docker container is wrong. You specify /home/logs - which seems like an odd place. That would mean that the logs are stored in the home folder of a user named 'logs'.
Are you sure that is the path where logs are stored in the docker container?
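Two things worth checking. If the app writes its logs to stdout/stderr, they are available via `docker logs <container>` without any volume at all. If it writes to a file, first confirm the actual path inside the container and bind-mount exactly that directory; assuming for illustration that the app logs to /app/logs, the service would look like:

```yaml
app:
  image: app
  restart: always
  ports:
    - 127.0.0.1:8080:8080
  volumes:
    # host ./logs receives whatever the app writes to /app/logs in the container
    - ./logs:/app/logs
```

The path /app/logs here is an assumption; the mount only captures files if it matches the directory the application actually writes to.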
I am currently working on transforming my single application with multiple services into multiple Docker containers, each running one of those services. My current problem is the following: these services originally share some files located in the /tmp folder, so the host's /tmp folder needs to be mounted and shared among the containers.
I already tried the most obvious solution, which is mounting the folders in the usual way:
This does share the folders as intended, but it has the side effect of deleting all the contents of the containers' /tmp folders, which breaks the application.
services:
  first:
    image: first_image
    build:
      context: .
      dockerfile: first/Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./tmp/:/tmp
Is there any way to properly do what I am trying to do? Or at least, is there a way to mount the files before my service inside the container starts? That way the temp files would be created afterwards and not be deleted. This thread was the closest I found to my problem, but it doesn't address the files being deleted:
Sharing /tmp between two containers
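One behavior worth knowing here: a bind mount always hides whatever the image already has at the mount point, but an empty named volume is initialized with the image's existing content at that path the first time a container mounts it. So a sketch that shares /tmp between the services through a named volume, rather than a host directory, might look like this (the volume name and second service are illustrative):

```yaml
services:
  first:
    image: first_image
    volumes:
      - shared_tmp:/tmp
  second:
    image: second_image
    volumes:
      - shared_tmp:/tmp

volumes:
  shared_tmp:
```

The copy from the image only happens once, while the volume is still empty; after that both containers see the same live contents of /tmp.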
I have an app written with the ASP.NET Core 3.1 framework. I am trying to create a Docker image that lets me run the app on a Linux system.
My app allows the user to upload files to the server, so it writes each uploaded file into a folder called "Storage" in the root folder of my project.
I want to create permanent storage on the Linux machine so it is not destroyed when the image is removed.
I created the following docker-compose.yml file with instructions on how to create the volume:
version: '3.4'
services:
  myproject:
    image: ${DOCKER_REGISTRY-}myproject
    build:
      context: .
      dockerfile: MyProject/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://localhost;http://localhost
      - ASPNETCORE_HTTPS_PORT=44344
    ports:
      - 51736:80
      - 44344:443
    volumes:
      - photos:/app/Storage
volumes:
  photos:
According to my understanding, the volumes entry under the myproject service maps the volume called photos to /app/Storage.
However, I am not sure what command I would use to create the volume on the server so it is not deleted.
How can I correctly create a volume and point the image to use it?
You don't need anything else. You have everything in there already. docker-compose will automatically create the photos volume for you and retain it across container restarts. Check docker volume ls after you start this stack for the first time and you'll see the volume listed.
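To sketch what that looks like in practice (assuming the compose project is named myproject, so compose prefixes the volume name accordingly):

```shell
docker-compose up -d
docker volume ls        # shows something like myproject_photos
docker-compose down     # stops and removes containers, but keeps named volumes
docker-compose up -d    # /app/Storage still contains the uploaded files
docker-compose down -v  # only this -v flag would actually delete the volume
```

So the volume persists until you explicitly remove it; a rebuild or restart never touches it.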