How to create and mount a volume to a Grafana Docker image and use it to store Grafana dashboards and user information - docker

I am running a Grafana Docker image on AWS Fargate using Amazon ECS. The web app works well. However, I lose dashboards and user information any time I restart the app. From my reading, this is because the Grafana image has no persistent storage to keep that information.
Following this link, https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/, I have added an EFS volume to the container.
Volume configuration: Root directory = / (the default).
On the container, under the STORAGE AND LOGGING section:
Mount points: I have added the volume name.
Container path: /usr/app.
However, I still lose all dashboards and user information on container restart.
Is there something that I am missing?
Thank you for your support

Instead of giving the container persistent storage, one alternative could be a custom entrypoint script that downloads the dashboards (from S3, for example), puts them in /etc/grafana/provisioning/dashboards/, and then runs /run.sh. This way you keep your container stateless and don't add any storage to it.
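A minimal sketch of such an entrypoint, assuming the AWS CLI is available in the image and using a placeholder bucket name (the official Grafana image's stock entrypoint is /run.sh):

    #!/bin/sh
    # Hypothetical entrypoint.sh: pull provisioning files from S3, then start Grafana.
    set -e

    # Placeholder bucket/prefix; the ECS task role needs read access to it.
    aws s3 cp s3://my-grafana-config/provisioning/ /etc/grafana/provisioning/ --recursive

    # Hand off to the stock Grafana entrypoint so normal startup proceeds.
    exec /run.sh "$@"

Note that dashboard JSON files are only picked up if a provider .yaml in the dashboards provisioning directory points at them.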

Configure a custom database (e.g. MySQL or PostgreSQL on AWS RDS/Aurora) instead of the default file-based SQLite DB. Relational databases handle concurrent access better, so you can scale out more easily.
AWS also offers options for backups (e.g. automated snapshots), so I would say a managed DB is a better solution than filesystem volumes (plus you avoid problems with filesystem permissions).
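For reference, Grafana reads its [database] settings from GF_DATABASE_* environment variables, so a sketch with a placeholder RDS endpoint and credentials (in ECS these would go into the task definition's environment or secrets) might look like:

    docker run -d -p 3000:3000 \
      -e GF_DATABASE_TYPE=mysql \
      -e GF_DATABASE_HOST=my-rds-endpoint.rds.amazonaws.com:3306 \
      -e GF_DATABASE_NAME=grafana \
      -e GF_DATABASE_USER=grafana \
      -e GF_DATABASE_PASSWORD=change-me \
      grafana/grafana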

Related

How to take backup of docker volumes?

I'm using named volumes to persist data on the host machine in the cloud.
I want to take backups of these volumes in the Docker environment so that I can restore them after critical incidents.
I had almost decided to write a Python script to compress the specified directory on the host machine and push it to AWS S3.
But I would like to know if there are any other approaches to this problem?
docker-volume-backup may be helpful. It allows you to back up your Docker volumes to an external location or to S3 storage.
Why use a Docker container to back up a Docker volume instead of writing your own Python script? Ideally you don't want to make backups while the volume is in use, so having a container in your docker-compose setup that can properly stop your application container before taking the backup lets you copy the data without affecting application performance or backup integrity.
There's also this alternative: volume-backup
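For comparison, the bare docker-plus-tar approach that these tools wrap looks roughly like this (volume and bucket names are placeholders, and the AWS CLI is assumed to be configured on the host):

    # Archive the named volume from a throwaway container, then push to S3.
    docker run --rm \
      -v appdata:/data:ro \
      -v "$(pwd)":/backup \
      alpine tar czf /backup/appdata-$(date +%F).tar.gz -C /data .

    aws s3 cp "appdata-$(date +%F).tar.gz" s3://my-backup-bucket/docker-volumes/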

How does the data in the HOME directory persist on Cloud Shell?

Do they use environment/config variables to link the persistent storage to the project-related Docker image?
So that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns, so here goes. Cloud Shell has 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, this is what Cloud Run Button does to deploy a Cloud Run service automatically.
The volume dedicated to the current user, which is mounted in the Cloud Shell container.
From this you can deduce that anything you store outside the /home/<user> directory is stateless and does not persist. The /tmp directory, Docker images (pulled or built), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in the ~/.ssh/ directory and use all the other tricks you can do on Linux in your /home directory.
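A quick illustration of what persists and what doesn't, under the description above:

    # Everything under /home/<user> survives VM reassignment; everything else does not.
    echo 'alias k=kubectl' >> ~/.bashrc          # persists
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519   # keys under ~/.ssh persist too
    sudo apt-get install -y tree                 # installed outside /home, gone on the next VM
    docker pull nginx                            # the image cache is also lost on a new VM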

Many Docker volumes, each for a different user (swarm mode)

I have users that will each have a directory for storing arbitrary PHP code. Each user can execute their code in a Docker container - this means I don't want user1 to be able to see the directory for user2.
I'm not sure how to set this up.
I've read about bind mounts vs. named volumes. I'm using swarm mode, so I don't know on which host a particular container will run. This means I'm not sure how to connect the container to the right volume mount and subdirectory.
Any ideas?
Have you considered having an external server for storage and mounting it on each Docker host?
If you need the data to persist and you don't want to mount external storage, you can look into something like Gluster for syncing files across multiple hosts.
As for not wanting users to share directories, you can simply set the permissions on each folder.
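As a sketch of the external-storage route (the NFS server name and export paths are placeholders), you can define one NFS-backed named volume per user and mount it only into that user's service:

    # Create the volume on each node (or define it via --mount volume-opts on the
    # service so whichever node runs the task creates it identically).
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=nfs.example.com,rw \
      --opt device=:/exports/php/user1 \
      user1_code

    # Only user1's service mounts user1_code, so user2's container never sees it.
    docker service create --name user1-php \
      --mount type=volume,source=user1_code,target=/var/www/code \
      php:8-apache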

Deploy a docker app using volume create

I have a Python app using a SQLite database (it's a data collector that runs daily by cron). I want to deploy it, probably on AWS or Google Container Engine, using Docker. I see three main steps:
1. Containerize and test the app locally.
2. Deploy and run the app on AWS or GCE.
3. Back up the DB periodically and download it to a local archive.
Recent posts (on Docker, StackOverflow and elsewhere) say that since 1.9, volumes are now the recommended way to handle persisted data, rather than the "data container" pattern. For future compatibility, I always like to use the preferred, idiomatic method; however, volumes seem to be much more of a challenge than data containers. Am I missing something?
Following the "data container" pattern, I can easily:
Build a base image with all the static program and config files.
From that image create a data container image and copy my DB and backup directory into it (simple COPY in the Dockerfile).
Push both images to Docker Hub.
Pull them down to AWS.
Run the data and base images, using "--volumes-from" to refer to the data (roughly as sketched below).
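For context, a minimal sketch of those last two steps with placeholder image names:

    # Data-container pattern: the data image only exists to hold the volume.
    docker create --name app-data myuser/app-data-image
    docker run -d --name app --volumes-from app-data myuser/app-image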
Using "docker volume create":
I'm unclear how to copy my DB into the volume.
I'm very unclear how to get that volume (containing the DB) up to AWS or GCE... you can't PUSH/PULL a volume.
Am I missing something regarding Volumes?
Is there a good overview of using Volumes to do what I want to do?
Is there a recommended, idiomatic way to backup and download data (either using the data container pattern or volumes) as per my step 3?
When you first use an empty named volume, it will receive a copy of the image's volume data where it's first used (unlike a host-based volume, which completely overlays the mount point with the host directory). So you can initialize the volume contents in your main image as a volume, upload that image to your registry, and pull that image down to your target host. Then create a named volume on that host and point your image at that named volume (using docker-compose makes the last two steps easy; it's really 2 commands at most: docker volume create <vol-name> and docker run -v <vol-name>:/mnt <image>), and it will be populated with your initial data.
Retrieving the data from a container-based volume or a named volume is an identical process: you mount the volume in a container and run an export/backup to your outside location. The only difference is on the command line; instead of --volumes-from <container-id> you use -v <vol-name>:/mnt. You can use this same process to import data into the volume as well, removing the need to initialize the app image with data in its volume.
The biggest advantage of the new process is that it clearly separates data from containers. You can purge all the containers on the system without fear of losing data, and any volumes listed on the system have clear names rather than randomly assigned ones. Lastly, named volumes can be mounted anywhere on the target, and you can pick and choose which of the volumes you'd like to mount if you have multiple data sources (e.g. config files vs. databases).
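A minimal sketch of that workflow on the target host, with placeholder names and paths:

    # First use of the empty named volume copies in the image's volume contents.
    docker volume create app-data
    docker run -d --name app -v app-data:/data myuser/app-image

    # Importing (or exporting) data is just a throwaway container with the volume
    # and a host directory both mounted; here we unpack a local archive into it.
    docker run --rm -v app-data:/mnt -v "$(pwd)":/backup alpine \
      tar xzf /backup/app-data.tar.gz -C /mnt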

What's the correct way to take automatic backups of docker volume containers?

I am in the process of building a simple web application using NodeJS that persists data to a MySQL database and saves images that have been uploaded to it. With my current setup, I have 4 Docker containers - 1 for the NodeJS application, 1 for the MySQL server, 1 volume container for the MySQL data and 1 volume container for the uploaded files.
What I would like to do is come up with a mechanism where I can periodically take backups of both volume containers automatically without stopping the web application.
Is it possible to do this and if so, what's the best way?
I have looked at the Docker Documentation on Volume management that covers backing up and restoring volumes, but I'm not sure that would work while the application is still writing data to the database or saving uploaded files.
To back up your database I suggest using mysqldump, which is safer than a simple file copy.
For the volume backup you can also run a container, mount the volume, and tar the contents together.
In both cases you can use additional containers, or inject the process into the running container via docker exec.
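To make that concrete, a sketch with placeholder container names, database name, and paths, assuming MYSQL_ROOT_PASSWORD is set in the MySQL container's environment (as with the official mysql image):

    # Consistent SQL dump via docker exec, without stopping the app.
    docker exec mysql-container sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" appdb' \
      > appdb-$(date +%F).sql

    # Uploads: mount the volume container and tar its contents out to the host.
    docker run --rm --volumes-from uploads-data -v "$(pwd)":/backup alpine \
      tar czf /backup/uploads-$(date +%F).tar.gz -C /uploads .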
