Unable to run docker containers on the mounted storage in gce vm - docker

We are planning to move from on-prem to GCP, but the design process is taking some time. We want to deploy our Docker containers on GCE VMs. We don't want to manually increase the storage every time the containers fill up the disk; we know this can be handled with autoscaling, but we don't have enough expertise there yet. So we found out that a GCS bucket can be mounted on the VM and we can run our containers on the mounted path, but when we try touching files or running containers we get a permission denied error. Can anyone help us resolve this? We have tried many docs and tutorials, but they are a little confusing.
We used gcsfuse to mount the bucket on the GCE VM.
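For context, a minimal sketch of the kind of mount command involved (the bucket name, mount point, uid/gid and modes below are placeholders, not our actual values); the ownership and mode flags are what decide who may write to the mounted path:

    # Sketch only: mount the bucket so the user that runs docker owns the files
    sudo mkdir -p /mnt/gcs
    sudo gcsfuse \
        --uid=1000 --gid=1000 \
        --file-mode=664 --dir-mode=775 \
        -o allow_other \
        my-bucket /mnt/gcs
    # -o allow_other needs "user_allow_other" enabled in /etc/fuse.conf
    # when gcsfuse is started as a non-root user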

Related

Share volume in docker swarm for many nodes

I'm facing a big challenge. I'm trying to run my app on 2 VPSs in Docker Swarm. Containers that use volumes should use a volume shared between the nodes.
The solutions I have considered are:
Use the GlusterFS plugin and mount the volume on every node using NFS. NFS introduces a single point of failure, so when something goes wrong my data is gone. (That doesn't look good; maybe I'm wrong.)
Use Azure Storage and store the data as blobs (Azure Data Lake Storage Gen2). But my main problem is: how can I connect to Azure Storage from docker-compose.yaml? I would have to declare the volume in every service that uses it and again in the volumes section, and I have no idea how to do that.
The Docker documentation about this is gone. It should be here: https://docs.docker.com/docker-for-azure/persistent-data-volumes/.
Another option is https://hub.docker.com/r/docker4x/cloudstor/tags?page=1&ordering=last_updated, but the last update was 2 years ago, so it's probably not supported anymore.
Do I have any other options, and which way of sharing a volume between nodes is the best solution?
There are a number of ways of dealing with creating persistent volumes in docker swarm, none of them particularly satisfactory:
First up, a simple approach is to use NFS, GlusterFS, iSCSI, or VMware to multi-mount the same SAN storage volume onto each Docker Swarm node. Services then just bind-mount volumes as /mnt/volumes/my-sql-workload.
On the one hand it's really simple; on the other hand there is literally no access control, and you can easily end up with services accidentally pointing at each other's data.
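As an illustration (a sketch only, with a hypothetical service name, image and container path), a service consuming such a multi-mounted host path could be created along these lines:

    # /mnt/volumes/my-sql-workload is assumed to be the same SAN-backed mount on every node
    docker service create \
        --name my-sql \
        --mount type=bind,source=/mnt/volumes/my-sql-workload,target=/var/lib/mysql \
        mysql:8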
Next, commercial docker volume plugins for SANs. If you are lucky and possess a Pure Storage, NetApp or other such SAN array, some of them still offer docker volume plugins. Trident, for example, if you have a NetApp.
Third, if you are in the cloud, the legacy Swarm offerings on Azure and AWS included a built-in "cloudstor" volume driver, but you need to dig really deep to find it in their legacy offerings.
Fourth, there are a number of open-source or free volume plugins that will mount volumes from NFS, GlusterFS or other sources, but most are abandoned or very quiet. The most active I know of is marcelo-ochoa/docker-volume-plugins.
I wasn't particularly happy with how those plugins mounted pre-existing volumes but made operations like docker volume create hard, so I made my own. But really:
Swarm cluster volume support with CSI plugins is hopefully going to land in 2021¹, which would hopefully be a solid answer to all the problems above.
¹ It's now 2022 and the next version of Docker has not yet gone live with CSI support. Still we wait.
In my opinion, a good solution could be to create a GlusterFS cluster, configure a single volume and mount it on every Docker Swarm node (e.g. at /mnt/swarm-storage).
Then, for every Container that needs persistent storage, bind-mount a subdirectory of the GlusterFS volume inside the container.
Example:
services:
  my-container:
    ...
    volumes:
      - type: bind
        source: /mnt/swarm-storage/my-container
        target: /a/path/inside/the/container
This way, every node shares the same storage, so a given container can be scheduled interchangeably on any cluster node.
You don't need any Docker plugin for a particular storage driver, because the distributed storage is transparent to the Swarm cluster.
Lastly, GlusterFS is a distributed filesystem designed to avoid a single point of failure, and you can cluster it across as many nodes as you like (unlike NFS).
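For reference, a rough sketch of the GlusterFS side of this setup (hostnames, brick paths and the replica count are assumptions; adjust them to your cluster):

    # On one node, after the others have joined the trusted pool:
    gluster peer probe node2
    gluster peer probe node3
    # Create a replicated volume with one brick per node (add "force" if the brick
    # directories sit on the root filesystem)
    gluster volume create swarm-storage replica 3 \
        node1:/data/glusterfs/brick node2:/data/glusterfs/brick node3:/data/glusterfs/brick
    gluster volume start swarm-storage
    # On every Swarm node, mount the volume at the path used in the compose file
    mkdir -p /mnt/swarm-storage
    mount -t glusterfs node1:/swarm-storage /mnt/swarm-storage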

GitHub Actions Runner Container on Google Cloud Run that can build docker image

I have built a docker image that, when run, registers itself as a GitHub Runner. This runner will, amongst other things, be used to build and push images to GitHub Container Registry. I don't want to deploy the containers to GKE or Compute, as I don't want the overhead of managing those resources. I would prefer to deploy the containers to Google Cloud Run. I've scoured the docs for help but I can't seem to find the answers to the following questions:
Can I run 'docker in docker' when the container is deployed to GCP Cloud Run?
How do I specify the volume mount required when deploying the container to Google Cloud Run, i.e. the usual mapping with docker run would be:
-v /var/run/docker.sock:/var/run/docker.sock
I've never tested it, but it's possible that the current Cloud Run sandbox prevents this kind of use. And I don't really know the use case for this!
You can't mount a volume in Cloud Run; it's stateless. You only have an in-memory file system in the /tmp directory (and because it's in memory, size your Cloud Run instance's memory accordingly). You can connect your instance to third-party products, such as Google Cloud Storage or databases, but there is no mountable volume on Cloud Run (for now).
If you have these requirements, you can maybe have a look at GKE Autopilot and deploy your container directly on fully managed Kubernetes.

Backup docker volume on a broken system. Volume recovery

I have a server that's been running in docker on coreos. For some reason containerd has stopped running and the docker daemon has stopped working correctly. My efforts to debug haven't gotten far. I'd like to just boot a new instance and migrate, but I'm not sure I can backup my volume without a working docker service. Is it possible to backup my volume without using docker?
Most search results assume a running docker system, and don't work in this case.
By default, docker volumes are stored in /var/lib/docker/volumes. Since you don't have a working docker setup, you might have to dive into the subfolders to figure out which volume you're concerned with, but that should at least give you a start. If it's helpful, in a working docker environment you can inspect volumes with docker volume inspect, which gives you all the information you would need to carry this out.
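A minimal sketch of what that backup could look like (the volume name "mydata" is hypothetical; the _data subdirectory is where the volume's files actually live):

    # List candidate volumes without the docker daemon
    sudo ls /var/lib/docker/volumes
    # Archive the contents of one volume
    sudo tar czf /tmp/mydata-backup.tar.gz -C /var/lib/docker/volumes/mydata/_data .
    # Copy the archive to the new instance and extract it into the new volume's
    # _data directory, or restore it via a temporary container once docker is healthy there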

Persistent storage solution for Docker on AWS EC2

I want to deploy a Node-RED server on my AWS EC2 cluster. I got the docker image up and running without problems. Node-RED stores the user flows in a folder named /data. Now when the container is destroyed, that data is lost. I have read about several solutions where you mount a local folder into a volume. What is a good way to deal with persistent data on AWS EC2?
My initial thoughts are to use an S3 volume or to mount a volume in the task definition.
It is possible to use a volume driver plugin with docker that supports mapping EBS volumes.
Flocker was one of the first volume managers; it supports EBS and has evolved to support a lot of different back ends.
Cloudstor is Docker's volume plugin (it comes with Docker for AWS/Azure).
Blocker is an EBS-only volume driver.
S3 doesn't work well for all file system operations as you can't update a section of an object, so updating 1 byte of a file means you have to write the entire object again. It's also not immediately consistent so a write then read might give you odd/old results.
An EBS volume can only be attached to one instance, which means you can only run your docker containers on one EC2 instance. Assuming you would like to scale your solution in the future with many containers running in an ECS cluster, you need to look into EFS. It's a shared file system from AWS. The only issue is that EFS performance is lower than that of EBS.
The easiest way (and the most common approach) is to run your docker container with the -v /path/to/host_folder:/path/to/container_folder option, so the container will refer to the host folder and the data will survive restarts and recreation. See the Docker documentation on volumes for the details.
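For this particular case that would look roughly like the following (the host path is a placeholder; the image name and the /data folder come from the question):

    # Persist Node-RED flows by bind-mounting a host directory over /data
    sudo mkdir -p /srv/node-red-data   # make sure the container user can write here
    docker run -d -p 1880:1880 \
        -v /srv/node-red-data:/data \
        --name node-red nodered/node-red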
I would use AWS EFS. It is like a NAS in that you can have it mounted to multiple instances at the same time.
If you are using ECS for your docker host the following guide may be helpful http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html
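If EFS is the route taken, the host side is roughly as follows (the filesystem ID, region and mount point are placeholders); containers then bind-mount subdirectories of the mount just like in the -v example above:

    # Mount the EFS filesystem on the EC2 host over NFSv4
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 \
        fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs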

Combining Chef And Docker

I am having a hard time figuring out how I should combine Chef and Docker to get the best of both.
Right now I am using Chef to automatically pull a docker image and create a container.
But things get messy when I want to change the configuration inside the container.
I read about knife container, but I didn't understand how one can bootstrap a container and a new VM (on Amazon, for example) all together.
I would suggest that if all you want to do is manage Docker images/containers, that you don't really need Chef.
Docker provides tools like:
Fig (http://www.fig.sh/), which brings up multiple containers as one logical unit.
Swarm (https://github.com/docker/swarm/), which allows you to abstract away the machines you have for deployments. For example, "My app needs 2GB of RAM, 1 CPU, 10GB of HD, which machine has available resources?"
Machine (https://github.com/docker/machine), which allows you to create VMs in the cloud in pretty much any provider.
A REST API (https://docs.docker.com/reference/api/docker_remote_api/), which allows you to remotely start/stop containers etc.
In my opinion, that suite of tools replaces the need for Chef if all you're going to do is manage Docker images and containers.
As someone already noted, don't change configs after a container has started. Better to make a new image or restart the container. You could also mount the configs external to the container and modify them there, then restart the container.
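A minimal sketch of that last approach (the image name, host path and container config path are all hypothetical):

    # Keep the config on the host so it can be edited without rebuilding the image
    docker run -d --name my-app \
        -v /srv/my-app/config:/etc/my-app:ro \
        my-app:latest
    # After editing the files under /srv/my-app/config, pick up the changes with:
    docker restart my-app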
