How to install docker daemon when resizing data center cluster size in Mesosphere?

We're thinking about using Mesos and Mesosphere to host our Docker containers. The docs state that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave
node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
Is this a good way to solve it or does Mesosphere/DCOS or any of Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) instance once it has booted? Or is this something we should avoid, instead just using a "pre-baked image"?

In your own datacenter, using your favorite configuration management tool such as Ansible or Salt is probably a good choice.
On the cloud it might be easier to use virtual machine images that provide Docker; for example, DC/OS on AWS uses CoreOS, which comes with Docker out of the box. It shouldn't be too difficult with Ubuntu either...
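If you do go the configuration-management route, a minimal Ansible sketch could look like the following; the inventory file name, group name, and the use of Docker's convenience install script are assumptions to adapt to your environment:

# Hypothetical inventory file (hosts.ini):
#   [mesos-slaves]
#   slave1.example.com
#   slave2.example.com
# Ad-hoc run: become root on each slave, install Docker via Docker's
# convenience script, then verify the version meets the >= 1.0.0 prerequisite.
ansible mesos-slaves -i hosts.ini -b -m shell -a "curl -fsSL https://get.docker.com | sh"
ansible mesos-slaves -i hosts.ini -b -a "docker --version"

The same two steps can be wrapped in a playbook once the node list stabilises.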

Related

Building a docker image on EC2 for web application with many dependencies

I am very new to Docker and have some very basic questions. I was unable to get my doubts clarified elsewhere and hence am posting them here. Pardon me if the queries are very obvious. I know I lack some basic understanding regarding images, but I had a hard time finding an easy-to-understand explanation of the whole thing.
Problem at hand:
I have my application running on an EC2 node (r4.xlarge). It is a web application with a LOT of dependencies (system dependencies + other libraries, etc.). I would like to create a Docker image of my machine so that I can easily run it whenever I launch a new EC2 instance.
Questions:
Do I need to build the Docker image from scratch, or can I use some base image?
If I can use a base image, which one do I select? (It is hard to know the OS version on the EC2 machine, and hence I am not sure which base image to start from.)
I referred to this documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker
But it builds from an Ubuntu base image.
The above example has instructions on installing Apache (and other things needed for the application). Let's say my application needs server X to be installed, plus 20 system dependencies and 10 other libraries.
Ex:
yum install gcc
yum install gfortran
wget <abc>
When I create a Dockerfile, do I need to specify all the installation instructions like those above? I thought creating an image was like taking a copy of your existing machine. What is the Dockerfile supposed to contain in this case?
Pointers to some good documentation on building a Docker image on EC2 for a web app with many dependencies would be very useful too.
Thanks in advance.
First, if you want to move toward Docker then I suggest using AWS ECS, which is specifically designed for Docker containers and has auto-scaling and load-balancing features.
As far as your questions are concerned:
You need a Dockerfile that installs all the packages and the application already present on your EC2 instance. As far as the base image is concerned, I recommend Alpine; many official Docker images use it as their default base.
Why Alpine?
Alpine describes itself as:
Small. Simple. Secure. Alpine Linux is a security-oriented,
lightweight Linux distribution based on musl libc and busybox.
https://nickjanetakis.com/blog/the-3-biggest-wins-when-using-alpine-as-a-base-docker-image
https://hub.docker.com/_/alpine/
Let's say my application needs server X to be installed + 20 system
dependencies + 10 other libraries.
So you need to write a Dockerfile that installs everything you mentioned.
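A minimal sketch of such a Dockerfile, with package names, paths, and the start command as placeholders (a CentOS base is used here because your commands use yum; on Alpine you would use apk add instead):

FROM centos:7
# System dependencies (mirrors the yum install steps run on the EC2 box)
RUN yum install -y gcc gfortran && yum clean all
# Stand-in for "wget <abc>": fetch any extra artifacts the app needs
# RUN wget -O /opt/abc.tar.gz https://example.com/abc.tar.gz
# Copy the application code into the image and set the working directory
COPY . /opt/app
WORKDIR /opt/app
# Placeholder start command for "server X"
CMD ["./start-server.sh"]

Each instruction becomes a layer in the image, so the image records how the environment is built rather than being a byte-for-byte copy of your machine.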
Again, I suggest ECS as the best fit for a Docker-based application, because it is ECS, not EC2, that is designed for Docker.
CONTAINERIZE EVERYTHING
Amazon ECS lets you easily build all types of
containerized applications, from long-running applications and
microservices to batch jobs and machine learning applications. You can
migrate legacy Linux or Windows applications from on-premises to the
cloud and run them as containerized applications using Amazon ECS.
https://aws.amazon.com/ecs/
https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/
https://caylent.com/containers-kubernetes-docker-swarm-amazon-ecs/
You can use a base image; you specify it with the first line of your Dockerfile, with FROM.
The base OS of the EC2 instance doesn't matter for the container. That's the point of containers: you can run Linux on Windows, Arch on Debian, whatever you want.
Yes, dependencies that don't exist in your base image will need to be specified and installed. (Depending on the default package manager for the base image you are working from, you might use dpkg, yum, or apt-get.)
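To make those three points concrete, a tiny hypothetical example (the image and package are arbitrary):

# FROM picks the base image; everything else runs on top of it
FROM ubuntu:18.04
# Install a missing dependency with the base image's package manager
RUN apt-get update && apt-get install -y curl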

How to run puppet/systemctl inside docker container centos7

My question revolves around the following problem/error.
Service/Service[jenkins]: Provider redhat is not functional on this host. Or, directly, that D-Bus is not available.
Let's say, for instance, I'm running Packer, which invokes a puppet-masterless provisioner on a Docker builder.
The Puppet code base and contrib modules will, for the most part, attempt to manage the 'service' of the installed module. Take Jenkins as an example: the Jenkins Puppet module, although good, will fail on Packer builds against a CentOS 7 + Puppet Docker host, as systemctl will not be available.
At this moment in time I'm confused as to how this would ever work for Puppet/Ansible code bases that attempt to manage the service, without considerable extra effort in the codebase.
I have considered running the container with /sbin/init, but that still feels a bit hacky.
Can anyone shed any light on this issue for me?
I use Ansible code to provision both real machines and Docker containers. To get around the systemd/D-Bus requirement, I created the docker-systemctl-replacement.
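For illustration, a sketch of how such a replacement is typically baked into an image; the script name and location are assumptions, so check the project's README for the exact path:

FROM centos:7
# The replacement script is written in Python
RUN yum install -y python
# Overlay the real systemctl so puppet/ansible 'service' resources
# work without systemd or D-Bus inside the container
COPY systemctl.py /usr/bin/systemctl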

Does Hyperledger Fabric need Docker?

This may be a stupid question.
Does Hyperledger Fabric require Docker for its operations?
I'm just wondering whether Docker is needed only if we run the Fabric peer, orderer, or CouchDB as virtualized instances on the same physical machine. I think Docker might not be necessary if we install that software (peer, orderer, CouchDB, etc.) natively, whether on the same server or separate ones.
Thank you.
Just so this point does not go unnoticed, while you do not need to run the peer in a Docker container, endorsing peers (the ones which run chaincode) need access to a Docker daemon (ideally on the same host). Chaincode is currently only deployed via Docker containers.
The question as to whether Docker is required to run a peer, orderer, fabric-ca, etc. depends on what effort you are willing to expend.
The Hyperledger Fabric community publishes stable, tested Docker images for X86, PowerPC and s390 (mainframe) architectures for each of its releases. These images are based on Ubuntu.
To use the published Hyperledger Fabric release images, you need Docker and some form of orchestration support. For sample use cases, we provide some simple Docker Compose definitions. Hyperledger Cello and other provisioning platforms, such as the IBM sandbox, provide Kubernetes Helm charts.
It is possible to build the binaries outside of their Docker images without modifying the source. However, if you wish to build for an alternative OS (e.g. Windows, RHEL, or CentOS) then you will need to modify the build process. It can and has been done, though. I suggest you reach out to the hyperledger-fabric@lists.hyperledger.org mailing list to see whether anyone in the community who has built for an alternative deployment will share their work.
Starting with HLF 2.0, things have changed: according to the documentation, chaincode can also run in 'external containers'.
https://hyperledger-fabric.readthedocs.io/en/release-2.0/cc_launcher.html
Yes, it is the second heading on the prerequisites page at http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html
Docker and Docker Compose
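A quick way to confirm both prerequisites are present (minimum versions vary by Fabric release, so check the prereqs page for your version):

docker --version
docker-compose --version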

Why run Docker under Vagrant?

I've read multiple articles on how to do this, but I can't figure out what the benefits are under macOS.
From my point of view, you can run Docker natively on macOS using Docker Community Edition (boot2docker + Kitematic). What does running it from Vagrant give me? Mobility?
My standard day-to-day development work is carried out in Docker for Mac/Windows, as they cover about 95% of what I need to do with Docker. Since they replaced Docker Toolbox/boot2docker and made the integration with the OS pretty seamless, I have found very few reasons to move over to another virtual machine. The two main reasons I see for using Vagrant or standalone VMs now are VM customisation and clustering.
VM Customisation
The virtual machines supplied by Docker Toolbox and Docker for Mac/Windows are pre-packaged, cut-down Linux distros (TinyCore and Alpine) that are largely ephemeral, except for the Docker configuration, so you don't get much say in how they work.
Networking
I deal with a number of custom network configurations that just aren't possible in the pre-packaged VMs, largely around having containers connected to routable networks rather than using mapped ports.
Version Control
Occasionally you need to replicate server environments that run old versions of the Docker daemon, or RHEL servers using devicemapper. A VM lets you choose the packages to install.
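For example, inside a Vagrant-managed Ubuntu VM you could pin an older engine release; the exact version string below is a placeholder:

apt-cache madison docker-ce   # list the engine versions available
sudo apt-get install -y docker-ce=<version-from-the-list>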
Clustering
Building a swarm, or branching out into Mesosphere/Kubernetes, will require multiple VMs. I tend to find these easier to manage and build with Vagrant rather than Docker Machine, and again they require custom config inside the VM.

docker is great for run-anywhere but what about the machines to host docker?

I am wondering how we can make the machines that host Docker easily replaceable. I would like something like a Dockerfile that contains instructions on how to set up the machine that will host Docker. Is there a way to do that?
The naive solution would be to create an official "docker host" binary image to install on new machines, but I would like something that is reproducible and transparent, like a Dockerfile.
It seems like tools like Vagrant, Puppet, or Chef may be useful, but they appear to be for virtual machine procurement, and they all seem to require setting up some sort of "master node" server. I am not going to be spinning machines up and tearing them down regularly, so a master server would be a waste of a server; I just want something reproducible in the event I need to set up or replace a machine.
This is basically what docker-machine does for you: https://docs.docker.com/machine/overview/
Other "orchestration" systems will make this automated and easier as well.
There are lots of solutions to this, with no real one-size-fits-all answer.
Chef and Puppet are the popular configuration management tools that typically use a centralized server. Ansible is another option that typically runs without a server and just connects over SSH to configure the host. All three of these work very similarly, so if your concern is simply managing the CM server, Ansible may be the best option for you.
For VMs, Vagrant is the typical solution, and it can be combined with other tools like Ansible to provision the VM after creating it.
In the cloud space, there are tools like Terraform, or vendor-specific tools like CloudFormation.
Docker is working on a project called Infrakit to deploy infrastructure the way compose deploys containers. It includes hooks for several of the above tools, including Terraform and Vagrant. For your own requirements, this may be overkill.
Lastly, for designing VM images, Docker recently open-sourced their Moby project, which creates a VM image containing a minimal container OS, the same one used under the covers in Docker for Windows, Docker for Mac, and possibly some of the cloud hosting providers.
We automate Docker installation on hosts using Ansible + Jenkins. Given the proper SSH access, provisioning new Docker hosts is a matter of triggering a Jenkins job.
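As a sketch, triggering such a job can be a single POST against the Jenkins remote-access API; the URL, job name, and credentials here are hypothetical:

# Kick off the provisioning job for a new host
curl -X POST "https://jenkins.example.com/job/provision-docker-host/build" --user deploy-bot:API_TOKEN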
