How to mount an OpenStack container in a Docker container

I am new to OpenStack. I saw there is a feature called containers in OpenStack. I think those containers are not the same thing as Docker containers. As far as I understand, OpenStack containers are just file storage (volumes?). Right or wrong?
But is there a way to mount an OpenStack container in a Docker container?
I want to have a Docker container which contains only "system files" (/bin, /usr, apache, mysql) and to put all my configuration files and PHP files in an OpenStack container.

Containers in OpenStack belong to the Object Storage service (OpenStack Swift). Swift is the OpenStack counterpart of AWS S3: what S3 calls a "bucket" is a "container" in Swift.
Nevertheless, OpenStack does include Docker-container support through two projects:
Nova-Docker (a Compute driver in Nova, working at the "hypervisor" level, but with Docker instead of KVM/QEMU/libvirt)
OpenStack Magnum: a Docker-container orchestration solution for OpenStack.
You can find those projects at:
Magnum: https://wiki.openstack.org/wiki/Magnum
Nova-docker: https://github.com/openstack/nova-docker
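If the goal is just to get your configuration and PHP files from a Swift container into a Docker container, a common pattern is to download the objects at startup with the Swift client rather than mounting anything. A rough sketch, assuming python-swiftclient and Keystone v3; every name and credential below is a placeholder:

    # Populate a named volume from a Swift container at startup (placeholder values).
    docker volume create app-config
    docker run --rm \
      -e OS_AUTH_URL=https://keystone.example.com:5000/v3 \
      -e OS_AUTH_VERSION=3 \
      -e OS_PROJECT_NAME=myproject \
      -e OS_USERNAME=myuser \
      -e OS_PASSWORD=secret \
      -v app-config:/config \
      python:3 \
      sh -c "pip install python-swiftclient python-keystoneclient && \
             swift download my-swift-container --output-dir /config"
    # Then mount the populated volume into the "system files" container:
    docker run -d -v app-config:/var/www/html my-php-image

Depending on your Keystone setup you may also need domain-related OS_* variables; check the python-swiftclient documentation.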

Related

Jenkins Pipelining using Containers

I'm trying to set up pipelining using Jenkins. However, my Jenkins instance itself is a container. My goal is to run each layer of my application (frontend, backend, database) using Docker, but I don't want to run Docker within Docker.
Does it make sense to convert Jenkins from a container to a VM? Or is there a way to overcome the docker within docker inception problem?
Any thoughts would be greatly appreciated.
You should use Docker outside of Docker rather than Docker in Docker; there's a great article about that by one of Docker's creators here: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/.
This is what I am using and it works pretty well.
There is a gotcha with this: your bind mounts are resolved against the host filesystem, not the Jenkins container's filesystem. I therefore recommend making jenkins_home a bind mount rather than a named volume, and mounting it at the same path on the host and in the container, because Jenkins generates paths to files relative to the workspace (which usually lives inside jenkins_home).
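For reference, a minimal Docker-outside-of-Docker launch along those lines might look like this (image tag and paths are only examples):

    # Mount the host's Docker socket so Jenkins starts sibling containers on the
    # host instead of nesting Docker inside Docker. jenkins_home is a bind mount
    # at the SAME path on the host and in the container, per the gotcha above.
    docker run -d --name jenkins \
      -p 8080:8080 -p 50000:50000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/jenkins_home:/var/jenkins_home \
      jenkins/jenkins:lts
    # The official image does not ship the docker CLI; add it in a derived image
    # (or on the agent) so pipeline steps can talk to the mounted socket.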
Or is there a way to overcome the docker within docker inception problem?
You can use a container orchestration tool such as Kubernetes or Mesos.

Docker volumes vs nfs

I would like to know whether it makes sense to use a redundant NFS/GFS share for web content instead of using Docker volumes.
I'm trying to build an HA Docker environment with the least amount of additional tooling. I would like to stick to 3 servers, each a Docker Swarm node.
Currently I'm looking into storage: an NFS/GFS filesystem cluster would require additional tooling for a small environment (100 GB max storage). I would like to use only natively supported Docker configurations, so I would prefer to use volumes and share those across containers. However, as far as I know, those volumes are not synchronized to other swarm nodes by default, so if the swarm node that hosts the data volume goes down, it will be unavailable to every container across the swarm.
A few things that, together, should answer your question:
Volumes use a driver, and the default driver for docker run and Swarm services is the built-in "local" driver, which only supports file paths that exist on that host. To use shared storage with Swarm services, you'll want a third-party plugin driver, like REX-Ray. An official list is at store.docker.com.
What you want to look for in a volume driver is one that is "swarm aware" and will re-attach volumes to the new task when the old Swarm service task is killed or updated. Tools like REX-Ray are almost like a "persistent data orchestrator" that ensures volumes are attached to the proper node where they are needed (a rough example follows below).
I'm not sure what web content you're talking about, but if it's code or templates, it should be built into the image. If you're talking about user-uploaded content that needs to be backed up, then yes, a volume sounds like the right way.
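As an illustration only (plugin name, credentials and options are examples; check your driver's documentation), a swarm-aware volume with REX-Ray might be wired up like this:

    # Install the REX-Ray EBS plugin on each Swarm node (placeholder credentials).
    docker plugin install rexray/ebs EBS_ACCESSKEY=AKIA... EBS_SECRETKEY=...
    # Create a service whose volume is managed by the plugin; if the task gets
    # rescheduled, the driver re-attaches the volume on the new node.
    docker service create --name web \
      --mount type=volume,source=webdata,target=/data,volume-driver=rexray/ebs \
      nginx:alpine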

Persistent storage solution for Docker on AWS EC2

I want to deploy a node-red server on my AWS EC2 cluster. I got the Docker image up and running without problems. Node-red stores the user flows in a folder named /data. Now when the container is destroyed, the data is lost. I have read about several solutions where you can mount a local folder into a volume. What is a good way to deal with persistent data in AWS EC2?
My initial thoughts are to use a S3 volume or mount a volume in the task definition.
It is possible to use a volume driver plugin with Docker that supports mapping EBS volumes.
Flocker was one of the first volume managers; it supports EBS and has evolved to support a lot of different back ends.
Cloudstor is Docker's own volume plugin (it comes with Docker for AWS/Azure); see the sketch at the end of this answer.
Blocker is an EBS-only volume driver.
S3 doesn't work well for all file system operations, as you can't update a section of an object, so updating 1 byte of a file means you have to write the entire object again. It's also not immediately consistent, so a write followed by a read might give you odd/old results.
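For example, on Docker for AWS, where Cloudstor is preinstalled, an EBS-backed volume could be created roughly like this (option names are from memory and may vary by plugin version):

    # Create an EBS-backed volume via the Cloudstor plugin (illustrative options).
    docker volume create -d "cloudstor:aws" \
      --opt backing=relocatable --opt size=10 --opt ebstype=gp2 \
      nodered-data
    # The volume can then be attached to the container with -v nodered-data:/data.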
An EBS volume can only be attached to one instance, which means that you can only run your Docker containers on one EC2 instance. Assuming that you would like to scale your solution in the future with many containers running in an ECS cluster, you need to look into EFS. It's a shared file system from AWS. The only issue is that EFS performance is lower than EBS.
The easiest way (and the most common approach) is to run your container with the -v /path/to/host_folder:/path/to/container_folder option, so the container refers to the host folder and the data survives restarts and re-creation of the container. The Docker documentation has detailed information about the volume system.
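Applied to node-red, that might look like this (host path and image name are just examples):

    # Persist node-red's /data folder to a directory on the EC2 host.
    mkdir -p /home/ec2-user/node-red-data
    docker run -d -p 1880:1880 \
      -v /home/ec2-user/node-red-data:/data \
      nodered/node-red
    # Destroying and recreating the container keeps the flows on the host.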
I would use AWS EFS. It is like a NAS in that you can have it mounted to multiple instances at the same time.
If you are using ECS as your Docker host, the following guide may be helpful: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html
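For instance (the filesystem ID and region are placeholders), the EFS filesystem can be mounted on each EC2 instance and then bind-mounted into the container:

    # Mount the shared EFS filesystem on every instance (placeholder fs ID/region).
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
    # Point node-red's /data at a directory on EFS.
    docker run -d -p 1880:1880 -v /mnt/efs/node-red:/data nodered/node-red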

Possibility of using a Gluster volume in k8s if the GlusterFS client package is in a container

Kubernetes supports several types of Volumes, including GlusterFS. GlusterFS can also back Persistent Volumes in k8s.
https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/glusterfs/
To use GlusterFS volumes in k8s, one of the prerequisites is to "install the GlusterFS client package on the Kubernetes nodes". But what if everything is expected to be in containers? Is it possible to put the Gluster client in a container (e.g. have a DaemonSet deploy the Gluster client on each k8s node first) while k8s can still use GlusterFS as in the example above?
Will k8s support such a scenario?
Nothing is officially supported in the way you describe at the moment, but take a look at this blog post; it outlines exactly what you describe: https://huaminchen.wordpress.com/2016/03/22/yet-another-containerized-mounter-for-kubernetes/

Share docker images between hosts with NFS

I'm building a Mesosphere infrastructure on AWS instances with 3 master servers (running ZooKeeper, mesos-master, Marathon and HAProxy) and N slaves (running mesos-slave and Docker).
If I run the same container on different slaves, Marathon downloads the same image on each slave. I would like to share a single NFS export (say, on master1) and mount it on every slave in order to have a unique storage location for the images.
I'm using Ubuntu on the EC2 instances, so the storage driver used by default is devicemapper. I set up the slaves to mount /var/lib/docker/devicemapper and /var/lib/docker/graph, but it ends up with this error: "stale NFS file handle"
What I would like to understand is:
Is there a way to do it using a different storage driver?
In any case, is the Docker daemon doing some locking on the files in this directory?
Is my approach wrong, or could it lead to concurrency/access issues?
Instead of using NFS to expose the backing file system, I think it would be easier to set up a docker-registry (with a volume on master1, so the data is persisted there) and have the other nodes pull images via the Docker protocol, e.g. docker pull master1:5000/image:latest
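A rough sketch of that setup (ports, paths and image names are only examples):

    # On master1: run a private registry backed by a local directory.
    docker run -d -p 5000:5000 --restart=always --name registry \
      -v /opt/registry:/var/lib/registry \
      registry:2
    # Push an image to it once...
    docker tag myapp:latest master1:5000/myapp:latest
    docker push master1:5000/myapp:latest
    # ...and each slave pulls from the shared registry instead of sharing
    # /var/lib/docker over NFS (you may need to allow master1:5000 as an
    # insecure registry in the daemon config unless TLS is set up).
    docker pull master1:5000/myapp:latest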
