I want to create multiple data tables with Telegraf, Prometheus, and Grafana, but I can't reach the services defined in my docker-compose.yml from the outside network. What do I need to add to open them up?
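For what it's worth, a minimal sketch of the usual fix, assuming the stack defines prometheus and grafana services on their default ports (service names, images, and port numbers are examples to adapt): publishing host ports with the ports: mapping in docker-compose.yml makes the services reachable from outside the Docker network.

```yaml
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"   # host port 9090 -> Prometheus inside the container
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"   # Grafana UI reachable from outside on host port 3000
```

If the host itself sits behind a firewall or cloud security group, those ports also have to be opened there.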
Related
I have deployed a Grafana Docker image on AWS Fargate using AWS ECS. The web app works well. However, I lose dashboards and user information anytime I restart the app. From my reading, this is because the Grafana image has no storage to keep that information.
Following this link, https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/, I have added an EFS volume to the container.
Volume configuration: Root directory = / (the default one).
On the container, under the STORAGE AND LOGGING section:
Mount points: I have added the volume name.
Container path: /usr/app.
However, I still lose all dashboards and user information on container restart.
Is there something that I am missing?
Thank you for your support
Instead of giving the container persistent storage, one alternative could be a custom entrypoint script that downloads the dashboards (for example from S3) and puts them in /etc/grafana/provisioning/dashboards/, then runs /run.sh. This way you keep your container stateless and don't add any storage to it.
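A minimal sketch of such an entrypoint, assuming the AWS CLI is available in the image and the dashboards live under a hypothetical s3://my-bucket/dashboards/ prefix:

```sh
#!/bin/sh
set -e

# Pull the dashboard JSON files into Grafana's provisioning directory at startup.
# The bucket/prefix is a placeholder; the AWS CLI must be present in the image.
aws s3 cp s3://my-bucket/dashboards/ /etc/grafana/provisioning/dashboards/ --recursive

# Hand over to the stock Grafana entrypoint.
exec /run.sh
```

Note that Grafana also expects a dashboard provider config (a small .yaml file) in that provisioning directory telling it where to load dashboards from, so that file would need to be included in the download or baked into the image.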
Configure a custom database (e.g. MySQL or PostgreSQL on AWS RDS/Aurora) instead of the default file-based SQLite DB. Relational databases handle concurrent access better, so you can scale out better.
AWS also offers options for backups (e.g. automated snapshots), so I would say a DB is a better solution than FS volumes (plus it avoids problems with FS permissions).
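As a concrete illustration, Grafana reads its database settings from GF_DATABASE_* environment variables; the host, database name, and credentials below are placeholders for your own RDS/Aurora instance, and in ECS you would put them in the task definition rather than a docker run command:

```sh
# Point Grafana at an external MySQL/PostgreSQL database instead of SQLite.
# All values below are placeholders; in ECS these go into the task definition.
docker run -d \
  -e GF_DATABASE_TYPE=mysql \
  -e GF_DATABASE_HOST=my-rds-endpoint.eu-west-1.rds.amazonaws.com:3306 \
  -e GF_DATABASE_NAME=grafana \
  -e GF_DATABASE_USER=grafana \
  -e GF_DATABASE_PASSWORD=change-me \
  grafana/grafana
```

With the database externalized, dashboards and users survive container restarts even though the container itself stays stateless.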
Is it possible to output container logs to a file per container using fluentd?
I installed fluentd (by running the official fluentd image) and am running multiple application containers on the host.
I was able to output all of the containers' logs to one file, but I'd like to create a log file per container.
I'm thinking about using the "match" directive, but I have no idea how.
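One way this could work, assuming the application containers use Docker's fluentd logging driver and are started with a per-container tag (e.g. --log-opt tag="docker.{{.Name}}"): key the output buffer on the tag so that ${tag} in the file path expands per container. A rough sketch of the fluentd config (paths are placeholders):

```conf
# Receive logs forwarded by the Docker fluentd logging driver.
<source>
  @type forward
  port 24224
</source>

# One output file per container: because the buffer is keyed on "tag",
# ${tag} in the path expands per container tag (e.g. docker.web, docker.db).
<match docker.**>
  @type file
  path /fluentd/log/${tag}
  append true
  <buffer tag>
    @type file
    path /fluentd/buffer
    flush_interval 10s
  </buffer>
</match>
```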
As the title states, I am looking to send a file from container A to container B. Both containers have their own volumes and are on the same network. Is this possible without temporarily storing the file in the host file system?
I have been reading around and found this solution; however, it requires that the file I wish to send is temporarily stored on the host:
https://medium.com/@gchudnov/copying-data-between-docker-containers-26890935da3f
Container A has its own volume to which a file is written. I want Container A to send this file to a volume that Container B is attached to. Container B then reads this file.
Thanks
If they are Linux containers, you can use scp:
scp file root@172.17.x.x:/path
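To spell that out a little, here is a hedged sketch assuming container B runs an SSH server reachable on the shared Docker network; the IP address, container names, and paths are placeholders:

```sh
# Run from inside container A (assumes an scp client is installed there and
# container B runs sshd; the address and paths below are placeholders).
scp /data/report.txt root@172.17.0.3:/data/

# Or drive it from the host, without the file ever touching the host filesystem:
# "containerB" resolves by name on a user-defined network; otherwise use its IP.
docker exec containerA scp /data/report.txt root@containerB:/data/
```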
I have a docker container with an application running inside of it. I am trying to export a text file from the docker container to the host. The problem is the application keeps on writing data into the text file at regular intervals.
Is there a way to store the file directly on the host while the application inside the Docker container keeps writing data to the text file?
Take a look at bind mounts or volumes. They are used to achieve exactly what you asked.
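For example, with a bind mount the file the application writes to lives directly on the host; the host path, container path, and image name below are placeholders for your actual setup:

```sh
# Bind-mount a host directory over the directory the application writes to.
# /home/user/app-output, /app/output, and my-app-image are placeholders.
docker run -d \
  -v /home/user/app-output:/app/output \
  my-app-image

# The application keeps appending to /app/output/data.txt inside the container,
# and the same file is immediately visible at /home/user/app-output/data.txt on the host.
```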
I have users that will each have a directory for storing arbitrary PHP code. Each user can execute their code in a Docker container; this means I don't want user1 to be able to see the directory for user2.
I'm not sure how to set this up.
I've read about bind mounts vs. named volumes. I'm using swarm mode, so I don't know on which host a particular container will run. This means I'm not sure how to connect the container to the volume mount and subdirectory.
Any ideas?
Have you considered having an external server for storage and mounting it on each Docker host?
If you need the data to persist and you don't want to mount external storage, you can look into something like Gluster for syncing files across multiple hosts.
As for not wanting users to share directories, you can just set the permissions on the folder.
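Putting those pieces together, a rough sketch: assume a shared filesystem (NFS, Gluster, etc.) is mounted at /mnt/userdata on every swarm node, with one subdirectory per user; the service then bind-mounts only that user's subdirectory, so the container never sees anyone else's code. All names, paths, and the image are placeholders:

```sh
# Each node has the shared storage mounted at /mnt/userdata (placeholder path),
# so the bind mount works no matter where swarm schedules the task.
# php:8-cli and "sleep infinity" stand in for whatever actually runs the user's code.
docker service create \
  --name php-runner-user1 \
  --mount type=bind,source=/mnt/userdata/user1,target=/code \
  php:8-cli sleep infinity
```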