Suppose I use a Cloud SDK Docker container, which lets me run various gcloud commands, including gcloud compute disks create, which creates a Google persistent disk. However, I cannot then attach this disk within the container, since gcloud compute instances attach-disk only works on GCE instances and not Docker containers.
Is there a way for the container to attach or even access the persistent disk? Can I in fact attach persistent disks to arbitrary Linux machines, not just GCE instances?
I know I can use either Docker or Kubernetes to attach persistent disks that are fixed and determined before the container is launched, but what I need is for the container itself to attach arbitrary persistent disks as determined by the container's code.
Can I in fact attach persistent disks to arbitrary Linux machines, not just GCE instances?
No, you can only attach GCE persistent disks to GCE VMs.
I cannot then attach this disk within the container, since gcloud compute instances attach-disk only works on GCE instances and not Docker containers.
If the container is running inside a GCE VM, you should be able to attach the persistent disk to the VM that hosts the container.
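For example, something like the following should work (a sketch; the disk, instance, and zone names are placeholders):

    # Create the persistent disk, then attach it to the VM hosting the container.
    gcloud compute disks create my-data-disk --size=100GB --zone=us-central1-a
    gcloud compute instances attach-disk my-vm --disk=my-data-disk --zone=us-central1-a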
What I need is for the container itself to attach arbitrary persistent disks as determined by the container's code.
If you run your container in privileged mode, then you should be able to run the appropriate mount commands to mount the disk after you've attached it to the VM. You can try mapping an initially empty volume into the container and then mounting the PD at that path, but I'm not sure whether it will work.
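As a rough, untested sketch of the privileged-mode approach (the disk name is a placeholder; GCE normally exposes attached disks on the host under /dev/disk/by-id/google-<device-name>):

    # Run the container privileged, with the host's /dev visible.
    docker run --privileged -it -v /dev:/dev ubuntu bash

    # Inside the container, once the disk is attached to the host VM:
    mkfs.ext4 /dev/disk/by-id/google-my-data-disk   # first use only; this erases the disk
    mkdir -p /mnt/pd
    mount /dev/disk/by-id/google-my-data-disk /mnt/pd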
Related
I have currently deployed OpenStack (Yoga release) using Kolla Ansible.
I am able to create a Cinder volume and mount it to an OpenStack Zun container. But after stopping the container and recreating it with the same Cinder volume, I am facing issues.
Sometimes the container mounts the Cinder volume successfully and sometimes it fails. And even when the mount succeeds, there is no data in the container's mounted disk.
Also, I can see the data mounted on the compute node at /var/lib/zun/mnt/, so why is it not being reused when we mount the volume for other containers?
Can anyone help me debug this issue?
Thanks.
I am running Docker Swarm with 3 master and 3 worker nodes.
On this Swarm, I have an Elasticsearch container which reads data from multiple log files and writes the data into a directory. Later it reads data from this directory and shows me the logs in a UI.
Now the problem is that I am running only 1 instance of this Elasticsearch container, and if for some reason it goes down, Docker Swarm starts it on another machine. Since I have 6 machines, I have created the particular directory on all of them, but whenever I start the Docker stack, the ES container starts reading/writing the directory on whichever machine it happens to be running on.
Is there a way that we can
Force docker swarm to run a container on a particular machine
or
Map volume to shared/network drive
Both are available.
Force docker swarm to run a container on a particular machine
Add the --constraint flag when executing docker service create. Some introduction.
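For instance (a sketch; the service name, node hostname, and image tag are made up):

    # Pin the service to a specific node by hostname (node labels work too).
    docker service create \
      --name elasticsearch \
      --constraint 'node.hostname == worker1' \
      elasticsearch:7.10.1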
Map volume to shared/network drive
Use a docker volume with a driver that supports writing files to an external storage system like NFS or Amazon S3. More introduction.
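A minimal sketch using the built-in local driver's NFS support (the server address and export path are assumptions):

    # Create a named volume backed by an NFS export; whichever node runs the
    # task mounts the same share, so the data follows the container around.
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=10.0.0.10,rw \
      --opt device=:/exports/es-data \
      es-data

The service can then mount es-data like any other volume.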
I'm trying to build a Container-Optimized VM in Google Cloud to host a Docker container. This Docker container needs storage, but the Container-Optimized VM images have almost no writable storage. I then created a persistent disk to attach to the VM and mount in the container, but the VM's /etc is also read-only, so I'm unable to write to fstab OR mount the disk anywhere in the filesystem. How is this supposed to be accomplished in a VM that is designed specifically to host Docker containers?
The storage space in an instance is independent of the image used.
You can change the boot disk size at creation time or later. This will give you more storage space in the instance.
If you want to use Kubernetes Engine, it is also possible to change the boot disk size at creation time.
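For example (a sketch; the instance name, sizes, and zone are placeholders):

    # Create a Container-Optimized OS VM with a larger boot disk...
    gcloud compute instances create my-cos-vm \
      --image-family=cos-stable \
      --image-project=cos-cloud \
      --boot-disk-size=200GB \
      --zone=us-central1-a

    # ...or grow the boot disk of an existing instance later.
    gcloud compute disks resize my-cos-vm --size=200GB --zone=us-central1-a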
How can I achieve persistent storage for a WebDAV server running on several/any swarm nodes?
It's part of a docker-compose app running on my own vSphere infrastructure.
I was thinking about mounting an external NFS share from inside the containers (at the OS level, not as Docker volumes), but then how would that be better than having WebDAV outside the swarm cluster?
I can think of 2 options:
GlusterFS
This option is vSphere-independent. You can create replicated bricks and store your volumes on them, exposing the same volume to multiple Docker hosts. So in case of a node failure the container gets restarted on another node and keeps its persistent storage with it. You can also mount the persistent data in multiple containers.
There is one catch: the same disk space is consumed on multiple nodes.
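A rough sketch of the setup (hostnames and brick paths are made up; the volume commands run from one node after peering):

    # Join the nodes into a trusted pool (run from node1).
    gluster peer probe node2
    gluster peer probe node3

    # Create and start a 3-way replicated volume, one brick per node.
    gluster volume create swarm-vol replica 3 \
      node1:/bricks/swarm node2:/bricks/swarm node3:/bricks/swarm
    gluster volume start swarm-vol

    # On each Docker host, mount the volume where your containers expect data.
    mount -t glusterfs node1:/swarm-vol /mnt/swarm-vol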
Docker-Volume-vSphere
This option requires vSphere hosts. You can create Docker volumes on VMFS datastores; they will be shared between Docker hosts (virtual machines). So in case of failure the container restarts on another node and has its persistent data available. Multiple containers can share a single volume.
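Usage looks roughly like this (assuming the vSphere Docker Volume Service plugin is installed on the ESXi hosts and in the VMs; the driver name, volume name, image, and mount path are assumptions based on that project's docs):

    # Create a volume backed by a VMDK on a shared datastore...
    docker volume create --driver=vsphere --name=webdav-data -o size=10gb

    # ...and mount it into the WebDAV container on whichever node runs it.
    docker run -d -v webdav-data:/var/lib/dav my-webdav-image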
I'm building a Mesosphere infrastructure on AWS instances with 3 master servers (running ZooKeeper, mesos-master, Marathon and HAProxy) and N slaves (running mesos-slave and Docker).
If I run the same container on different slaves, Marathon downloads the same image on each slave. I would like to share a single NFS export (say, on master1) and mount it on every slave in order to have a single shared storage for the images.
I'm using Ubuntu on the EC2 instances, so the storage driver used by default is devicemapper. I set up the slaves to mount /var/lib/docker/devicemapper and /var/lib/docker/graph, but it ends up with this error: "stale NFS file handle".
What I would like to understand is:
Is there a way to do it using a different storage driver?
In any case, is the docker daemon taking some lock on the files in this directory?
Is my approach wrong or possibly leading to "concurrency access" issues?
Instead of using NFS to expose the backing file system, I think it would be easier to set up a docker-registry (with a volume on master1, so the data is persisted there) and have the other nodes pull images via the Docker protocol, e.g. docker pull master1:5000/image:latest
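A minimal sketch (image and path names are placeholders; unless you add TLS, each slave's Docker daemon may need master1:5000 listed under --insecure-registry):

    # On master1: run a registry, persisting its data on the host.
    docker run -d -p 5000:5000 --restart=always \
      -v /srv/registry:/var/lib/registry \
      --name registry registry:2

    # Push an image to it once...
    docker tag my-image master1:5000/my-image:latest
    docker push master1:5000/my-image:latest

    # ...then each slave pulls over the Docker protocol instead of sharing NFS.
    docker pull master1:5000/my-image:latest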