I have two EC2 servers and I wanted to create a volume from AWS EBS that should be available to both servers. So I used the REX-Ray plugin for this.
Steps I did:
Install the plugin:
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*
Create the volume:
docker volume create -d rexray/ebs --name mongo_vol -o=volumeType=io1 -o=size=100 -o=iops=100
When I run docker volume ls on the first EC2 server, it shows this:
DRIVER VOLUME NAME
rexray/ebs:latest External MongoDB Data
rexray/ebs:latest MySQL
rexray/ebs:latest Private MongoDB
rexray/ebs:latest mongo_vol
But when I run docker volume ls on my second server, it shows this:
DRIVER VOLUME NAME
local mongo_vol
I have not changed any driver, yet the volume name shows up on both servers under different drivers.
I could not find anything related to this on the internet while researching.
Can anyone give me an idea of how to solve this?
I had an issue like this. REX-Ray makes the EBS volume accessible to both servers, but I think you have installed REX-Ray on only one server.
Install REX-Ray on your other server as well.
That alone won't fix your issue. Next,
remove the local-driver volume on your other server.
Before removing the volume, make a backup or snapshot of it just in case.
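A rough sketch of those two steps on the second server (the access keys and volume name come from your own setup; adjust as needed):
# install the same REX-Ray EBS plugin on the second server
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*
# remove the volume that was created there with the local driver (snapshot it first)
docker volume rm mongo_vol
After that, docker volume ls on the second server should list mongo_vol under the rexray/ebs driver.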
EBS volumes can only be attached to one EC2 instance at a time. If you need storage that is accessible to both servers simultaneously, you can use EFS and the REX-Ray EFS driver.
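If you go that route, a minimal sketch might look like this (the plugin settings and the region/security-group values are assumptions; check the REX-Ray EFS documentation for your environment):
docker plugin install rexray/efs EFS_ACCESSKEY=* EFS_SECRETKEY=* EFS_REGION=us-east-1 EFS_SECURITYGROUPS="sg-12345678"
docker volume create -d rexray/efs --name mongo_vol_shared
An EFS-backed volume created this way can then be mounted by containers on both instances at the same time.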
I'm fairly new to the Docker container world and I'm trying to move my Nextcloud server into a container.
I can deploy it successfully in a test environment, but I'm trying to map an external HDD that will eventually contain all of the data (profiles/pics/data/etc.) as it does on my current server.
My current setup is an Ubuntu Server 20.04.1 and Nextcloud 18 with an external HDD mounted for storage.
So far I haven't been able to map the external drive.
Can anyone provide any insights?
Regards!
To help you specifically, more information is required, like which Docker image you are using and how you are deploying your container. Also, this might be a question for https://serverfault.com/
The general concepts of "mounting" parts of a filesystem into a container are described at Docker Volumes and Bind Mounts.
Suppose your hard drive is mounted at /mnt/usb on the host; you could then access it within a Docker container at /opt/usb when started like this:
docker run -i -t -v /mnt/usb:/opt/usb ubuntu /bin/bash
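Applied to your Nextcloud case, a sketch might look like this (this assumes the official nextcloud image, which keeps its data under /var/www/html, and that your HDD is mounted at /mnt/external; adjust both paths to your setup):
docker run -d -p 8080:80 -v /mnt/external/nextcloud:/var/www/html nextcloud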
I am trying to make a centralized location for PostgreSQL data and use it for multiple containers on the same Docker network. For this I have to use a shared location as the volume, for example something like
docker -v 0.0.0.0\data:/var/lib/postgrsql/data
How can I specify the shared location as the volume's host path and link it to the folder bound inside the container?
Environment details:
Ubuntu 17.10
Docker 17
Any help or guidance to achieve this would be appreciated.
In Swarm you want to use a volume driver. This way (1) you don't have manual NFS mounts on each node's host OS, and (2) you can ensure that the volume you want for a specific service is connected to the host it is scheduled to run on.
The Docker Store has a list of volume plugin drivers for various storage solutions.
If you're using cloud storage or simple NFS, REX-Ray is likely the plugin you want.
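As an illustration, a sketch of a Swarm service using a REX-Ray-backed volume might look like this (the service name, volume name, and image here are placeholders):
docker service create --name postgres \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data,volume-driver=rexray/ebs \
  postgres
The volume driver then takes care of creating and attaching the backing storage on whichever node the task lands on.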
I want to deploy a Node-RED server on my AWS EC2 cluster. I got the Docker image up and running without problems. Node-RED stores the user flows in a folder named /data. Now when the container is destroyed, the data is lost. I have read about several solutions where you mount a local folder into a volume. What is a good way to deal with persistent data on AWS EC2?
My initial thoughts are to use an S3 volume or to mount a volume in the task definition.
It is possible to use a volume driver plugin with docker that supports mapping EBS volumes.
Flocker was one of the first volume managers; it supports EBS and has evolved to support a lot of different back ends.
Cloudstor is Docker's volume plugin (it comes with Docker for AWS/Azure).
Blocker is an EBS-only volume driver.
S3 doesn't work well for all file system operations, as you can't update a section of an object: updating 1 byte of a file means you have to write the entire object again. It's also not immediately consistent, so a write followed by a read might give you odd/old results.
An EBS volume can only be attached to one instance, which means you can only run your Docker containers on one EC2 instance. Assuming you would like to scale your solution in the future with many containers running in an ECS cluster, you need to look into EFS. It's a shared file system from AWS. The only issue is the performance degradation of EFS compared to EBS.
The easiest way (and the most common approach) is to run your container with the -v /path/to/host_folder:/path/to/container_folder option, so the container will refer to the host folder and the information will survive restarts or recreation. See the Docker documentation for the details of the volume system.
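For the Node-RED case that might look like this (the host path is just an example; /data is where Node-RED keeps its flows, and nodered/node-red is assumed to be the image in use):
docker run -d -p 1880:1880 -v /home/ec2-user/node-red-data:/data nodered/node-red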
I would use AWS EFS. It is like a NAS in that you can have it mounted to multiple instances at the same time.
If you are using ECS for your Docker host, the following guide may be helpful: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html
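Roughly, you mount the EFS file system on each instance and bind-mount a directory from it into the container (the file system ID and DNS name below are placeholders):
# mount the EFS file system on the EC2 host
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# bind-mount a directory from it into the container
docker run -d -v /mnt/efs/node-red:/data nodered/node-red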
I have an existing AWS EC2 instance with Docker already provisioned on it. I would like to import this existing host so that Docker Machine can manage it locally.
To do this, so far I have been using the generic driver. But as you can see in the documentation, it re-provisions Docker every time, thereby bringing down my running containers. The AWS driver does not seem to have an option for this either.
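For reference, the generic-driver command looks roughly like this (the IP address, SSH user, and key path are placeholders), and it is this create step that re-provisions Docker on the host:
docker-machine create --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user ubuntu \
  --generic-ssh-key ~/.ssh/mykey.pem \
  existing-aws-host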
So how can I add an existing host locally without re-provisioning docker or bringing down my containers?
I'm building a Mesosphere infrastructure on AWS instances with 3 master servers (running ZooKeeper, mesos-master, Marathon and HAProxy) and N slaves (running mesos-slave and Docker).
If I run the same container on different slaves, Marathon downloads the same image on each slave. I would like to share a single NFS export (say on master1) and mount it on every slave in order to have a single shared storage for the images.
I'm using Ubuntu on the EC2 instances, so the storage driver used by default is devicemapper. I set up the slaves to mount /var/lib/docker/devicemapper and /var/lib/docker/graph over NFS, but it ends up with this error: "stale NFS file handle".
What I would like to understand is:
Is there a way to do it using a different storage driver?
In any case, is the Docker daemon doing some kind of locking on the files in this directory?
Is my approach wrong, or is it possibly leading to concurrent-access issues?
Instead of using NFS to expose the backing file system, I think it would be easier to set up a docker-registry (with a volume on master1, so the data is persisted there) and have the other nodes pull images via the Docker protocol, e.g. docker pull master1:5000/image:latest
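A minimal sketch of that setup (the host path for the registry data is just an example, and the daemons may need master1:5000 added to insecure-registries unless TLS is configured):
# on master1: run a private registry backed by a host directory
docker run -d -p 5000:5000 --name registry -v /opt/registry-data:/var/lib/registry registry:2
# tag and push an image into it
docker tag myimage:latest master1:5000/myimage:latest
docker push master1:5000/myimage:latest
# on any slave: pull from the registry instead of sharing /var/lib/docker over NFS
docker pull master1:5000/myimage:latest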