Why run Docker under Vagrant?

I've read multiple articles on how to do this, but I can't figure out what the benefits are under macOS.
From my point of view, you can run Docker natively on macOS using Docker Community Edition (boot2docker + Kitematic). What does running it from Vagrant give me? Mobility?

My standard day-to-day development work is carried out in Docker for Mac/Windows, as they cover about 95% of what I need to do with Docker. Since they replaced Docker Toolbox/boot2docker and made the integration with the OS pretty seamless, I have found very few reasons to move over to another virtual machine. The two main reasons I see for using Vagrant or standalone VMs now are VM customisation and clustering.
VM Customisation
The virtual machines supplied by Docker Toolbox and Docker for Mac/Windows are pre-packaged, cut-down Linux distros (TinyCore and Alpine) that are largely ephemeral except for the Docker configuration, so you don't get much say in how they work.
Networking
I deal with a number of custom network configurations that just aren't possible in the pre-packaged VMs, largely around having containers connected to routable networks rather than using mapped ports.
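As a rough sketch of what that looks like on a current Docker engine, a macvlan network attaches containers directly to a routable subnet instead of publishing host ports (the subnet, gateway and parent interface below are made-up values for illustration):

```sh
# Hypothetical setup: attach containers to the routable 10.0.20.0/24 LAN via
# the VM's eth1 interface instead of publishing ports on the host.
docker network create -d macvlan \
  --subnet=10.0.20.0/24 \
  --gateway=10.0.20.1 \
  -o parent=eth1 \
  routable_lan

# The container gets its own address on the LAN and is reachable directly.
docker run -d --network routable_lan --ip 10.0.20.50 --name web nginx
```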
Version Control
Occasionally you need to replicate server environments that run old versions of the Docker daemon, or RHEL servers using devicemapper. A VM lets you choose the packages to install.
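For example, a Vagrant shell provisioner can pin an older engine and force the devicemapper storage driver; the package version, repo and paths here are purely illustrative:

```sh
# Illustrative provisioner body for a CentOS/RHEL-style box: install a pinned
# old engine version and force the devicemapper storage driver.
yum install -y docker-1.12.6
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF
systemctl enable docker
systemctl start docker
```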
Clustering
Building a swarm, or branching out into Mesosphere/Kubernetes, will require multiple VMs. I tend to find these easier to manage and build with Vagrant rather than Docker Machine, and again they require custom config inside the VM.
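A minimal multi-machine Vagrantfile sketch along those lines, with placeholder box name and private-network IPs, might look like this:

```ruby
# Minimal multi-machine sketch: one manager and two workers for a swarm lab.
# Box name and private-network IPs are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.define "manager" do |node|
    node.vm.hostname = "manager"
    node.vm.network "private_network", ip: "192.168.50.10"
  end

  (1..2).each do |i|
    config.vm.define "worker#{i}" do |node|
      node.vm.hostname = "worker#{i}"
      node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
    end
  end
end
```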

Is Docker an infrastructure as code technology because it virtualizes an OS to handle multiple workloads on a single OS instance?

I have come across the term IaC many times while learning DevOps, and when I googled it I found that it is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. So is Docker also an infrastructure as code technology because it virtualizes an OS to handle multiple workloads on a single OS instance? Thanks in advance.
I'm not sure exactly what you are asking, but Docker provides infrastructure as code because Docker's behaviour is defined via Dockerfiles and shell scripts. You don't install a list of programs manually when defining an image, and you don't configure anything with a GUI in order to create an environment when you pull an image from Docker Hub or deploy your own image.
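For instance, a minimal Dockerfile captures the environment as code rather than as manual steps (the base image and commands below are just an illustration):

```dockerfile
# The whole environment is declared here; rebuilding the image reproduces it.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```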
And as said in another answer, Docker is not virtualization: everything actually runs in your Linux kernel, but with limited resources in its own namespace. You can see a container's process via htop on the host machine, for instance. There's no hypervisor. There's no overhead.
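You can check this yourself; in this hypothetical example the container's nginx processes appear as ordinary PIDs in the host's process table:

```sh
# Start a container, then look at it from both sides. The image is arbitrary.
docker run -d --name demo nginx
docker top demo            # the processes as the container engine reports them
ps aux | grep [n]ginx      # the very same processes in the host's process table
```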
I think you misunderstood the concept: Docker is not a hypervisor, and containers are not VMs.
From this page: https://www.docker.com/resources/what-container
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker containers - images become containers when they run on Docker Engine.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.

Why is Docker virtualization faster than a VM?

From what I understand, VMs use hardware virtualization, whereas Docker uses software virtualization and therefore has better performance (in a case where, let's say, I am running a Dockerized Linux on a Windows machine). But what exactly is the reason that OS virtualization is faster than hardware virtualization?
Docker doesn't do virtualization. It uses kernel namespaces to achieve a chroot-like effect, not just for the root filesystem but for process information (PID namespace), mount points, networking, IPC (shared memory), UTS information (hostname) and user IDs.
The containers share the kernel with the host. For security, Docker uses AppArmor/SELinux, Linux capabilities and seccomp to filter system calls. Control groups (known as cgroups) are used for process accounting and for imposing limits on resources.
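A small illustration of cgroups at work, assuming a reasonably recent Docker CLI; the flags passed to docker run become kernel-enforced resource limits on the container's processes:

```sh
# Cap the container at 256 MB of RAM and half a CPU; the daemon translates
# these flags into cgroup settings that the kernel enforces.
docker run -d --name capped --memory 256m --cpus 0.5 nginx

# Confirm the limits recorded for the container (bytes and nano-CPUs).
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' capped
```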
Docker is not about virtualization. It's about containerization (how to run a process in an isolated environment).
This means that you can't run a Linux container on Windows or a Windows container on Linux without using some kind of virtualization (VirtualBox, Hyper-V, ...). It's fine to do this on your laptop while developing, but in production you would choose the appropriate architecture for your containers.
What is a container?
from A sysadmin's guide to containers:
Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using:
resource constraints (control groups [cgroups]),
Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and
namespaces (PID, network, mount, etc.).
Setting all of these up manually (network namespaces, iptables rules, etc.) with Linux commands would be tricky, so it's the Docker daemon's job to do it when you type docker ... commands; things happen under the hood.
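To get a feel for what the daemon automates, here is a rough sketch: unshare (from util-linux) drops a shell into fresh namespaces by hand, which is essentially the raw material Docker builds on:

```sh
# By hand: unshare starts a shell inside fresh PID, network, mount and UTS
# namespaces, the basic ingredients of a container.
sudo unshare --fork --pid --net --mount --uts /bin/bash

# The same idea, with the Docker daemon doing the namespace, cgroup and
# network plumbing for you:
docker run -it --rm alpine /bin/sh
```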
About speed...
First of all, containers can be slower than running a process directly on the host, because of the complexity introduced by the container networking stack. See for example this: Performance issues running nginx in a docker container
But they will offer you speed. How?
containers are not full OSes (base images have a small size)
containers follow the concept of microservices and "do one thing and do it well". This means that you don't put everything in a container the way you would with a VM. This is called separation of concerns, and it results in more lightweight app components. It also gives developers speed, because different teams can work on their components separately (others also describe this as developer velocity), with different programming languages and frameworks.
image layers: Docker has an internal way of splitting an image into layers, and when you build a new image, layers can be reused. This gives you good deployment speed (consider how useful this is in case of a rollback); a short illustration follows this list.
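A quick sketch, using an arbitrary image tag, of how layers show up in practice:

```sh
# Each Dockerfile instruction becomes a layer. Rebuilding after a small source
# change reuses every cached layer above the first changed instruction, so
# only the tail of the image is rebuilt and shipped.
docker build -t example/app:v2 .
docker history example/app:v2   # lists the individual layers and their sizes
```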
About Windows Containers
Containers were a "Linux" thing, but this wave of containerization has also had an effect on the Windows world. In the beginning, docker-toolbox used VirtualBox to run containers on a Linux VM. Later, docker-for-windows was introduced, which gives the option to run containers directly on the host or on Hyper-V. If you visit Windows Container Types you can find out more.

Bitnami and Docker

How are Bitnami and Docker different from each other when it comes to container-based deployments?
I have been learning about microservices recently. I used Docker images to run my apps as containers, and I noticed that Bitnami does something similar when it creates a virtual image on a cloud from its launchpad.
From whatever links I could find on the Internet, I could not work out how these two, Docker and Bitnami, are different from each other.
Docker
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
Containers and virtual machines have similar resource isolation and allocation benefits -- but a different architectural approach allows containers to be more portable and efficient.
Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system -- all of which can amount to tens of GBs. Docker containers include the application and all of its dependencies -- but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.
Bitnami
Bitnami is an app library for server software. You can install your favorite applications on your own servers or run them in the cloud.
One of the platforms these applications can be deployed on is Docker containers; virtual machines are another technology on which they can be deployed.
Bitnami containers give you the latest stable versions of your application stacks, allowing you to focus on coding rather than updating dependencies or outdated libraries. They are available as development containers and turnkey application and infrastructure containers, or you can build your own custom container using Stacksmith.

What is the difference between a Vagrant Provider and a Vagrant Provisioner?

I think the words "Provider" and "Provisioner" sound very similar, which may lead to confusion, especially among beginners confronted with documentation where both terms are mixed up or used synonymously (already seen on the net). It gets even more confusing when beginners see Docker as a Provider and Docker as a Provisioner mentioned on Vagrant's website.
So this question is actually about three things:
What is a Vagrant Provider?
What is a Vagrant Provisioner?
How does Docker fit in here?
What could be a typical use case for Docker as Vagrant Provider?
What could be a typical use case for Docker as Vagrant Provisioner?
I appreciate explanations, examples and links for further reading which illustrate things clearly (even for noobs).
The underlying virtualization solutions are called providers. To work with Vagrant, you have to install at least one provider (e.g. VirtualBox, VMware).
Provisioning in Vagrant is the process of automatically installing and configuring the system inside the machine during vagrant up, and the tools that perform this operation are called provisioners (e.g. shell scripts, Chef, Puppet).
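A minimal Vagrantfile sketch showing both concepts side by side (box name, resources and the inline script are placeholders):

```ruby
# Provider = how the machine is created; Provisioner = what happens inside it.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  # Provider: VirtualBox builds and runs the VM itself.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus   = 2
  end

  # Provisioner: a shell script configures the system during `vagrant up`.
  config.vm.provision "shell", inline: "apt-get update && apt-get install -y nginx"
end
```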
Provider vs Provisioner
Vagrant uses Providers such as hypervisors (e.g. VirtualBox, Hyper-V) or Docker to create and run virtual environments. Vagrant uses Provisioners (e.g. Ansible, Puppet, Chef) as configuration tools to customize these environments, e.g. carrying out installs and starting apps.
How does Docker fit in?
If a hypervisor is used as a Provider, the environment that is created is a virtual machine based on a self-contained image of an operating system environment as provided by a “Vagrantbox” (aka “box”). The box is utilized by Vagrant to create a dedicated kernel and set of operating system processes for the virtual machine.
If Docker is used as a Provider and Docker is available on the host system, Vagrant manages and runs containers directly on the host system. Here Vagrant is not actually building and managing a virtual machine but rather is working with the Docker engine running on the host to manage and build Docker containers.
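A rough sketch of Docker as a Provider, using an arbitrary public image; Vagrant here drives a container rather than a VM:

```ruby
# Docker as the Provider: Vagrant talks to the Docker engine on the host and
# manages a container instead of a VM. The image and port mapping are examples.
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx:alpine"
    d.ports = ["8080:80"]
    d.name  = "vagrant-nginx"
  end
end
```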

Linking containers together on production deploys

I want to migrate my current deployment to Docker. It relies on a MongoDB service, a Redis service, a PostgreSQL server and a Rails app; I have already created a Docker container for each, but I have doubts when it comes to starting and linking them. In development I'm using fig, but I think it was not meant to be used in production. To take my deployment to production level, what mechanism should I use to auto-start and link containers together? My deployment uses a single Docker host that already runs Ubuntu, so I can't use CoreOS.
Linking containers in production is a tricky thing. It hardwires the IP addresses of the dependent containers, so if you ever need to restart a container or launch a replacement (like upgrading the version of MongoDB), your Rails app will not work out of the box with the new container and its new IP address.
This other answer explains some available alternatives to linking.
Regarding starting the containers, you can use any deployment tool to run the required docker commands (Capistrano can easily do that). After that, Docker will restart the running containers after a reboot.
You might need a watcher process to restart containers if they die, just as you would have one for a normal Rails app.
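One possible sketch of that setup on a newer Docker engine: a user-defined network (whose embedded DNS replaces links) plus restart policies, with illustrative container and image names:

```sh
# Illustrative only: a user-defined network lets containers reach each other
# by name (no hardwired IPs), and restart policies bring them back after a
# daemon restart or host reboot.
docker network create appnet
docker run -d --restart=always --network appnet --name mongo mongo
docker run -d --restart=always --network appnet --name redis redis
docker run -d --restart=always --network appnet --name pg    postgres
docker run -d --restart=always --network appnet --name web   myorg/rails-app
```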
Services like Tutum and Dockerize.it can make this simpler. As far as I know, Tutum will not deploy to your servers. Dockerize.it will, but it is very rough (disclaimer: I'm part of the team building it).
You can convert your fig configuration to CoreOS-formatted systemd configuration files with fig2coreos. Google Compute Engine supports CoreOS, or you can run CoreOS on AWS or your cloud provider of choice. fig2coreos also supports deploying to CoreOS in Vagrant for local development.
CenturyLink (fig2coreos authors) have an example blog post here:
This blog post will show you how to bridge the gap between building complex multi-container apps using Fig and deploying those applications into a production CoreOS system.
EDIT: If you are constrained to an existing host OS you can use QEMU ("a generic and open source machine emulator and virtualizer") to host a CoreOS instance. Instructions are available from the CoreOS team.
