H2O in docker-compose - which folders to mount to persist all data? - docker

Trying to use H2O with docker-compose. Their website has instructions on running with Docker, which I'm using as a basis.
I can't work out how to persist the appropriate folders to keep the models accessible in H2O Flow. Which folders do I need to persist locally for this?
I've used the Dockerfile here and the docker-compose.yaml below. I'm able to store models locally by mounting the /tmp folder, but which other folders do I need to mount?
version: '3.1'
services:
  h2o-svc:
    build:
      context: .
      dockerfile: Dockerfile
    image: h2o:latest
    restart: always
    volumes:
      - ./app/h2o_models:/tmp
    ports:
      - 54321:54321

H2O-3 has an in-memory architecture.
It does not write anything to disk unless you ask it to, and the location it persists to (when saving a model, for example) is the location you manually give it.
I suggest you try this without Docker first, to get a feel for what to expect when H2O-3 restarts.
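As a sketch of what that looks like (the /data mount and the ./h2o_data host path are assumptions, not from the original post), you can mount a dedicated host directory and save models to it explicitly instead of relying on /tmp:

version: '3.1'
services:
  h2o-svc:
    build:
      context: .
      dockerfile: Dockerfile
    image: h2o:latest
    restart: always
    ports:
      - 54321:54321
    volumes:
      # a dedicated, explicitly chosen persistence location (assumed path)
      - ./h2o_data:/data

In Flow you would then export models to a path under /data (Flow's exportModel / "Export Model" action), or from the Python client with h2o.save_model(model, path="/data"), and re-import them from the same path after a restart.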

Related

Map the docker compose volume from container to host is not working

I have a very simple nextjs application with two folders that I'd like to map to the host (developer system) while I deploy this application inside docker (I use docker-desktop).
Data folder (it has some json files and also some nested folders and files)
Public folder (it has nested folders too, but it contains images)
I have tested locally and also inside the docker container (without volumes and all) - it's all functioning.
As a next step, I want to use volumes in my docker-compose file so that I can bind these directories inside the container with the source (and, going forward, with AKS file storage options).
I have tried multiple approaches (and also checked some of the answers on Stack Overflow) but none of them achieves the desired result.
Here is my docker-compose file for your reference.
version: '3.4'
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules        # anonymous volume only for node_modules
      - portfolio_data:/app/data # named volume the nextjs app writes content to
volumes:
  portfolio_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /APP-03/clientapp/data
      # I have also tried a full path like /mnt/c/work/.../APP-03/clientapp/data, but that is not working either.
Using docker-desktop I can see the volume is indeed created for the container, and it has all the files. However, nothing gets reflected in my source if anything is updated inside that volume (e.g. when I add some content to a file through the nextjs application, it does not get reflected on the host).
In case someone wants to know my folder hierarchy and where I am running the docker-compose file, here is that reference image.
I had a similar problem installing Gitea on my NAS until someone (thankfully) told me a way to compromise (i.e. your data will be persistent, but not in the location of your choosing).
version: '3.4'
volumes:
  portfolio_data: {}
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules        # anonymous volume only for node_modules
      - portfolio_data:/app/data
In my particular case, I had to access my NAS using the terminal, go to the location where the container's data is stored, and search from there.
Hope it helps you
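If you do need the files in a directory of your choosing, a plain short-syntax bind mount usually works where driver_opts does not: device: must be an absolute path that already exists on the host, while the short syntax accepts paths relative to the compose file. A minimal sketch, with paths assumed from the question:

services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules                  # anonymous volume for node_modules
      - ./APP-03/clientapp/data:/app/data  # bind mount, path relative to the compose file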

Docker-compose volume permission

I am new to Docker and still learning.
I have a problem now which has been driving me crazy, as I have been unable to figure out a clean way to solve it for quite some time.
So I have a very simple and common stack: nginx + php + mariadb + redis.
The idea is to have a shared volume between the php and nginx containers, which will contain the app, and to run the php and nginx images as a non-root user, say with uid 1001.
Here is the docker-compose.yml that I have come up with:
version: '3.8'
volumes:
  app-data:
    driver: local
    driver_opts:
      type: bind
      o: uid=1001
      device: ./app
services:
  web:
    image: nginx:1.20
    user: "1001:1001"
    volumes:
      - ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - app-data:/usr/share/nginx/html
    depends_on:
      - php
  php:
    build:
      context: ./
      dockerfile: ./php/Dockerfile
    user: "1001:1001"
    volumes:
      - app-data:/usr/share/nginx/html
    depends_on:
      - db
      - redis
I have omitted mariadb and redis, as they are not relevant to my question. The Dockerfile for the php image is irrelevant as well, as it is only used to install a couple of modules that are not included in the default image. If I had a choice, I would avoid having any custom Dockerfiles at all.
So this isn't working, because apparently uid is not recognized as a valid option, although the documentation CLEARLY STATES that the local driver with bind takes the SAME OPTIONS as the mount command.
My goal here is to have a docker-compose file which will:
boot the necessary services, i.e. db, php, nginx and redis
have a volume created from a local directory which stores the app
have that volume shared between the php and nginx images
have the php and nginx images run as non-root, with the same uid, so that they can access the app directory
have no custom Dockerfiles
Could you please help me achieve this goal? I would also appreciate any links to relevant documentation and/or best practices.
Thank you!
Edit:
I would also like to understand clearly whether docker/docker-compose best practices assume that the user has custom Dockerfiles for the services with the needed adjustments, or whether stock images are supposed to be used with all configuration done in the docker-compose file.
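One possible direction, offered as a sketch rather than a definitive answer: uid= is a mount option for filesystems like tmpfs and vfat, not for bind mounts, which is why the local driver rejects it; bind mounts simply expose the host directory with whatever ownership it already has. A common workaround is to chown the host directory once and keep the bind-style volume without uid (the chown step and the ${PWD} substitution are assumptions about your host setup; the uid and paths are from the question):

# one-time on the host (assumed): chown -R 1001:1001 ./app
version: '3.8'
volumes:
  app-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/app   # the local driver needs an absolute host path
services:
  web:
    image: nginx:1.20
    user: "1001:1001"
    volumes:
      - app-data:/usr/share/nginx/html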

.NET Core + Docker - How to Persist uploaded files in wwwroot after new build

I have an app developed in .NET Core and I use Docker to deploy it to a Linux VPS.
In the app, I have a feature that consists of uploading files, which I store in wwwroot. I have used docker volumes to externalize the folder.
But every time I do a build I lose all the files that users uploaded. Which is normal...
Update: this is how I'm declaring the volume:
app:
  image: app
  depends_on:
    - "postgres_image"
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "5000:5000"
  volumes:
    - app_wwwroot:/wwwroot
My question is: what is the best approach to making changes to the app (building the source code and getting a new release) without losing the uploaded files?
Thanks.
It would've been better if you had shown how you are using docker volumes to persist the wwwroot data.
If you want to persist your data you can use either bind mounts or volumes, within the docker run command or in docker-compose.
I usually use bind mounts instead of volumes when I want to persist data:
docker run -v './path/on/your/machine:/path/inside/the/container' image_name
or in docker-compose:
version: '3.8'
services:
  app:
    image: image_name
    volumes:
      - './path/on/your/machine:/path/inside/the/container'
As you can see, you will be mounting ./path/on/your/machine from your host machine onto /path/inside/the/container, which in your case holds the wwwroot data.
Any change made to a dir/file mapped this way affects both the container and your host machine.
Building again wouldn't affect the dir/file.
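Applied to the compose file from the question, that probably means mounting over the app's real wwwroot path inside the container. A minimal sketch; the /app/wwwroot container path and ./uploads host folder are assumptions, since published .NET Core apps commonly run from /app:

services:
  app:
    image: app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      # assumed container path; check the WORKDIR/publish path in your Dockerfile
      - ./uploads:/app/wwwroot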

Populating and versioning docker named volume, used as persistent storage with balena.io

I have a collection of binary files which need to be shared by two services specified in docker-compose. I need the data to persist on the host OS, but I also need to update it once in a while with a new version. Using named volumes I can achieve this with:
volumes:
  data:
services:
  myapp:
    image: myimage:latest
    volumes:
      - "data:/opt/data"
  anotherapp:
    image: anotherapp:latest
    volumes:
      - "data:/opt/data"
I want to put my binary blobs in a separate container, which can be versioned and pulled from the hub, with the files then placed inside the data volume. I considered using a scratch docker image, or alpine, but a scratch container cannot be started by docker-compose, as expected.
How can I achieve this configuration, and should I even use so-called data-only containers, which are advised against? Perhaps myapp itself should download these binary files and place them in persistent storage, but in that case I'd have to implement my own versioning/updating logic.
This app runs on balena.io and I'm using docker volume API and balena docs as a reference: https://www.balena.io/docs/learn/develop/multicontainer/#named-volumes
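One pattern that fits what the question describes, sketched under assumptions (the myregistry/mydata image, its tag, and the /blobs path are hypothetical): publish the blobs in a versioned image based on something with a shell (e.g. alpine, which the question already considers), whose only job is to copy them into the shared volume on start.

volumes:
  data:
services:
  data-seed:
    image: myregistry/mydata:1.2.0   # hypothetical versioned image with blobs baked in at /blobs
    volumes:
      - "data:/opt/data"
    # copy the baked-in blobs into the named volume, then exit
    command: sh -c "cp -r /blobs/. /opt/data/"
  myapp:
    image: myimage:latest
    depends_on:
      - data-seed
    volumes:
      - "data:/opt/data"

Note that depends_on only orders startup; if myapp must not start before the copy finishes, it needs its own wait logic.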

How to create docker volume using docker-compose file so the app can write files to when running?

I have an app written using the ASP.NET Core 3.1 framework. I am trying to create a docker image that allows me to run the app on a Linux system.
My app allows the user to upload files to the server, so it writes the uploaded files to a folder called "Storage" located in the root folder of my project.
I want to create permanent storage on the Linux machine so it is not destroyed when the image is removed.
I created the following docker-compose.yml file with instructions on how to create the volumes:
version: '3.4'
services:
  myproject:
    image: ${DOCKER_REGISTRY-}myproject
    build:
      context: .
      dockerfile: MyProject/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://localhost;http://localhost
      - ASPNETCORE_HTTPS_PORT=44344
    ports:
      - 51736:80
      - 44344:443
    volumes:
      - photos:/app/Storage
volumes:
  photos:
According to my understanding, the volumes entry under the myproject service maps the volume called photos to /app/Storage.
However, I am not sure what command I would use to create the volume on the server so that it is not deleted.
How can I correctly create a volume and point the image to use it?
You don't need anything else. You have everything in there already. docker-compose will automatically create the photos volume for you and retain it across container restarts. Check docker volume ls after you start this stack for the first time and you'll see the volume listed.
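For instance (the myproject_photos name is an assumption: compose prefixes volume names with the project name, which defaults to the directory holding the compose file):

docker compose up -d
docker volume ls                        # the photos volume appears here, prefixed
docker volume inspect myproject_photos  # shows where the data lives on the host

Removing and rebuilding the container leaves the volume in place; it is only deleted if you explicitly run docker volume rm or docker compose down -v.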
