Linking containers together on production deploys - ruby-on-rails

I want to migrate my current deploy to Docker. It relies on a MongoDB service, a Redis service, a Postgres server, and a Rails app, and I have already created a Docker container for each, but I have doubts when it comes to starting and linking them. In development I'm using fig, but I don't think it was meant to be used in production. To take my deployment to production level, what mechanism should I use to auto-start and link containers together? My deploy uses a single Docker host that already runs Ubuntu, so I can't use CoreOS.

Linking containers in production is a tricky thing. It will hardwire the IP addresses of the dependent containers, so if you ever need to restart a container or launch a replacement (like upgrading the version of mongodb), your Rails app will not work out of the box with the new container and its new IP address.
This other answer explains some available alternatives to linking.
Regarding starting the containers, you can use any deployment tool to run the required docker commands (Capistrano can easily do that). After that, Docker will restart the containers that were running after a reboot.
You might need a watcher process to restart containers if they die, just as you would have one for a normal rails app.
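As an illustration, here is a minimal sketch of those docker commands (image tags and names are assumptions); the --restart=always flag tells Docker to bring a container back up after a crash or reboot:

$ docker run -d --name mongo --restart=always mongo
$ docker run -d --name redis --restart=always redis
$ docker run -d --name pg --restart=always postgres
$ docker run -d --name rails --restart=always \
    --link mongo:mongo --link redis:redis --link pg:pg \
    -p 80:3000 yourorg/rails-app

Keep the caveat above in mind: the --link entries are resolved once at startup, so replacing a dependency means recreating the rails container too.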
Services like Tutum and Dockerize.it can make this simpler. As far as I know, Tutum will not deploy to your servers. Dockerize.it will, but is very rough (disclaimer: I'm part of the team building it).

You can convert your fig configuration to CoreOS-formatted systemd configuration files with fig2coreos. Google Compute Engine supports CoreOS, or you can run CoreOS on AWS or your cloud provider of choice. fig2coreos also supports deploying to CoreOS in Vagrant for local development.
CenturyLink (the fig2coreos authors) have an example blog post here:
"This blog post will show you how to bridge the gap between building complex multi-container apps using Fig and deploying those applications into a production CoreOS system."
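As a rough sketch of the workflow (the exact arguments are from memory of the fig2coreos README, so treat them as assumptions):

$ gem install fig2coreos
$ fig2coreos myapp fig.yml coreos-units/
$ fleetctl start coreos-units/*.service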
EDIT: If you are constrained to an existing host OS you can use QEMU ("a generic and open source machine emulator and virtualizer") to host a CoreOS instance. Instructions are available from the CoreOS team.

Related

How to deploy docker containers on premise

I work for a company where we are developing a web application of about 20 microservices across FE and BE. The company wants to deploy the containers in its local infrastructure based on VMware. Given that we expect at most 40-50 connected users at the same time, how do you suggest we deploy the containers? In which environment? We looked into the container functionality of VMware, but to use it we would have to change some network configuration of all the active VMs in production, and the person in charge is not confident doing that.
Security-wise, it is good to have your server on a separate virtual machine. In this case you retain snapshot and migration functionality along with host isolation.
Inside the guest virtual machine you can use Docker containers. This allows you to deploy and maintain your application with relatively small effort. As a platform I'd use Ubuntu Server or RHEL. On Ubuntu it is better to use the latest Docker repository, so it will have the containerd management daemon.
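For example, a minimal way to install Docker from its latest repository on Ubuntu is the official convenience script (inspect the script before running it; this is a sketch, not a hardened setup):

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo docker run hello-world   # verify the installation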
It is hard to give more accurate instructions without knowledge of your network topology, but you might consider routing so that you do not need to change your network configuration.

Feasibility of choosing EC2 + Docker as a production deployment option

I am trying to deploy my microservice on an EC2 machine. I have already launched an EC2 machine with the Ubuntu 16.04 LTS AMI, and I found that I can install Docker and run containers on it. I also tried a sample service deployment using Docker on my Ubuntu instance, and successfully ran images in the background using the -d option.
Can I choose EC2 + Docker for deploying my microservices in an actual production environment? Then I could deploy all my Spring Boot microservices this way.
I know that ECS is another option. To be frank, I am trying to avoid ECR, the ECS-optimized AMI, and their burdens; I am looking for a machine with full control that belongs only to me.
But I still need to know about the feasibility of choosing EC2 + Docker on my Ubuntu machine. I am also planning to deploy my Angular 2 app. I don't need to install, deploy, and manage any application server for either Spring Boot or Angular, since this gives me something like a serverless production environment.
What you are describing is a "traditional" single-server environment and does not have much in common with a microservices deployment. However, keep in mind that this may be OK if it is only you, or a small team, working on the whole application. The microservices architectural style was introduced to handle huge, complex applications with large development teams that need to scale out immensely due to fast business growth. Here is an example story from Uber.
Please read this for more information about how and why the microservices architectural style was introduced as well as the benefits/drawbacks. Now about your question:
"Can I choose this EC2 + Docker for deployment of my microservice for actual production environment? "
Your question can be simply answered: You can, but it is probably not a good idea assuming you have a large enough project to require a microservices architecture.
You would have to implement all of the following deployment aspects yourself, which are typically covered by an orchestration system like kubernetes (a minimal manifest sketch follows the list):
Service Discovery and Load Balancing
Horizontal Scaling
Multi-Container Application Deployment
Container Health-Management / Self-Healing
Virtual Networking
Rolling Updates
Storage Orchestration
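For illustration, here is a minimal manifest sketch (all names, the image, and the ports are hypothetical) showing how kubernetes covers several of these aspects at once: replicas give you horizontal scaling and self-healing, the Service gives you discovery and load balancing, and the RollingUpdate strategy gives you rolling updates:

# my-service.yaml -- hypothetical names, image, and ports
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                # horizontal scaling + self-healing
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: RollingUpdate      # rolling updates
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: yourrepo/my-service:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service                # service discovery + load balancing
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080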
"Since It will gives me about a serverless production environment to
me."
EC2 is by definition not serverless, of course. You will have to maintain your EC2 instances, including OS updates, security patches etc. And if you only have a single server you will have service outages because of it.
You can do it. I have had Docker running on standard EC2 instances without problems. By "my microservice" you mean a single microservice, right?
You don't need service discovery or routing rules?
Can I choose this EC2 + Docker for deployment of my microservice for actual production environment?
Yes, this is totally possible, although I suggest using kubernetes as the container-orchestrator as it manages the lifecycle of the containers for you:
Running Kubernetes on AWS EC2
Amazon Elastic Container Service for Kubernetes
Manage Kubernetes Clusters on AWS Using Kops
Amazon EKS

Kubernetes for a Development Environment

Good day
We have a development environment that consists of 6 virtual machines. Currently we are using Vagrant and Ansible with VirtualBox.
As you can imagine, hosting this environment is a maintenance nightmare, particularly as versions of software and OS change. Not to mention the resource load on developer machines.
We have started migrating some virtual machines to docker. But this itself poses problems around orchestration, correct configurations, communication etc. This led me to Kubernetes.
Would someone be so kind as to provide some reasoning as to whether Kubernetes would or wouldn't be the right tool for the job? That is managing and orchestrating 'development' docker containers.
Thanks
This is a quite complex topic, and many things have to be considered when deciding whether it's worth using k8s as a local dev environment. In particular, I used it when I wanted my local developer environment to be very close to the production one, which was running on Kubernetes. This helped avoid many configuration bugs.
In my opinion, Kubernetes (k8s) will provide everything you need for a development environment.
It gives you a lot of flexibility and does much of the configuration itself. A few examples:
An easy way to deploy a new version into the local kubernetes stack:
You prepare k8s replication controller files for each of your application modules (keep in mind that they need to be stateless modules).
In the replication controller you specify the docker image, and that's it.
Using this approach you can push new docker images to the local docker registry and then control the lifecycle of your application using kubectl (a minimal sketch follows).
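For example, a minimal replication controller file might look like this (module name, registry address, and port are assumptions):

# your-module-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: your-module
spec:
  replicas: 1
  selector:
    app: your-module
  template:
    metadata:
      labels:
        app: your-module
    spec:
      containers:
      - name: your-module
        image: localhost:5000/your-module:latest   # image in the local registry
        ports:
        - containerPort: 8080

$ kubectl create -f your-module-rc.yaml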
Easy way to scale your application modules
For example:
kubectl scale rc your_application_service --replicas=3
This way k8s will check how many pods you have running for your service, and if it recognises that the number is smaller than the replicas value, it will create new ones to satisfy the replica count.
It's an endless topic and many other things come to mind, but I would suggest you try it out.
There is a project for running a k8s cluster in Vagrant: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/developer-guides/vagrant.md
Of course you have to remember that if you have many services, all of them have to be pushed to the local repository and run by k8s. This will require some time, but if you automate the local deploy with some custom scripts, you won't regret it.
As wsl mentioned before, it is a quite complex topic. But I'm doing this as well at the moment, so let me summarize some things for you:
With Kubernetes (k8s) you're going to orchestrate your SaaS application. Ideally, it is a cloud-native application. The properties/requirements for a cloud-native application are formulated by the Cloud Native Computing Foundation (CNCF), which was basically formed around k8s after Google donated it to the Linux Foundation.
So the properties/requirements for a cloud-native application are: container packaged, dynamically managed, and micro-services oriented (cncf.io/about/charter). You will benefit most from k8s if your applications are micro-service based and every service has a separate container.
With micro-service based applications, every service can be developed independently. The developer only needs to follow the 12Factor method (12factor.net), for example using env vars instead of hard-coded IP addresses, etc.
In the next step, the developer builds the container for a service and pushes it to a container registry. For a local development environment, you may need to run a container registry inside the cluster as well, so the developer can push and test his code locally.
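A common way to run such a registry locally is the official registry image (a sketch; 5000 is the conventional port, and the image name is hypothetical):

$ docker run -d -p 5000:5000 --name registry registry:2
$ docker tag my-service:latest localhost:5000/my-service:latest
$ docker push localhost:5000/my-service:latest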
Then you're able to define your k8s replication controllers, services, PetSets, etc., with ports, port mappings, env vars, container images... and create and run them inside the cluster.
The k8s documentation recommends Minikube for running k8s locally (kubernetes.io/docs/getting-started-guides/minikube/). With Minikube you get features like DNS, NodePorts, ConfigMaps and Secrets, and Dashboards.
But I chose the multi-node CoreOS Kubernetes with Vagrant cluster for my development environment, as Puja Abbassi mentioned in the blog post "Finding The Right Local Kubernetes Development Environment" (https://deis.com/blog/2016/local-kubernetes-development-environment/), because it is closer to my production environment (12Factor: 10 - Dev/prod parity).
With the Vagrant Environment you got features like:
Networking with flannel
Service Discovery with etcd
DNS names for a set of containers with SkyDNS
internal load balancing
If you want to know how everything works, look inside this GitHub repo: github.com/coreos/coreos-kubernetes/tree/master/multi-node (vagrant and generic folders).
So you have to ask yourself whether you or your developers really need to run a complete "cloud environment" locally. In many cases a developer can develop a service (based on micro-services and containers) independently.
But sometimes it is necessary to have multiple or all services running on your local machine as a dev environment.

How to create a local development environment for Kubernetes?

Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).
During development you want to be as close as possible to production environment with some important changes:
Deployed locally (or at least somewhere where you and only you can access)
Use the latest source code on page refresh (supposing it's a website; ideally the page auto-refreshes on local file save, which can be done if you mount the source code and use some stuff like Yeoman).
Similarly one may want a non-public environment to do continuous integration.
Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
Update (2016-07-15)
With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.
You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
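A minimal sketch of such a pod (the image and paths are assumptions); changes to files under the hostPath are visible inside the container immediately:

# dev-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html
  volumes:
  - name: src
    hostPath:
      path: /home/you/project/public   # your local source checkout

$ kubectl create -f dev-web.yaml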
We've been working on a tool to do this. The basic idea is that you have a remote Kubernetes cluster, effectively a staging environment, and you run code locally that gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to the remote environment, but with your code running locally and under your full control.
So you can do live development, so to speak. Docs at http://telepresence.io
The sort of "hot reload" is something we have plans to add, but is not as easy as it could be today. However, if you're feeling adventurous you can use rsync with docker exec, kubectl exec, or osc exec (all do the same thing roughly) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:
# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
I've just started with Skaffold.
It's really useful for applying code changes automatically to a local cluster.
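A minimal skaffold.yaml sketch (the apiVersion and names are assumptions; check skaffold's docs for the current schema):

# skaffold.yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: my-app            # hypothetical image name
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml

$ skaffold dev   # rebuilds and redeploys on every source change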
To deploy a local cluster, the best way is Minikube, or just Docker for Mac and Windows, both of which include a Kubernetes interface.
EDIT 2022: By now there are obviously dozens of ways to provision k8s, unlike in 2015 when we started using it: kubeadm, microk8s, k3s, kube-spray, etc.
My advice (if your cluster can't fit on your workstation/laptop): rent a Hetzner server for 40 euro a month, and run WSL2 if on Windows.
Set up k8s cluster on the remote machine (with any of the above, I prefer microk8s these days). Set up Docker and Telepresence on your local Linux/Mac/WSL2 env. Install kubectl and connect it to the remote cluster.
Telepresence will let you replace a remote pod with a local docker pod, with access to local files (hopefully the same git repo that's used to build the pod you're developing/replacing), and possibly nodemon (or other language-specific auto-source-code-reload system).
Write bash functions. I cannot stress this enough; this will save you hundreds of hours. If replacing the pod and starting to develop isn't one line / two words, then you're not doing it well enough.
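As an illustration, a hypothetical one-word helper built on telepresence 1.x's --swap-deployment mode (all names are assumptions):

# dev <deployment>: swap the remote pod for a local container mounting $PWD
dev() {
  telepresence --swap-deployment "$1" --docker-run --rm -it \
    -v "$(pwd):/app" "registry.example.com/$1:dev"
}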
2016 answer below:
Another great starting point is this Vagrant setup, especially if your host OS is Windows. The obvious advantages are:
quick and painless setup
easy to destroy / recreate the machine
implicit limit on resources
ability to test horizontal scaling by creating multiple nodes
The disadvantages: you need a lot of RAM, and VirtualBox is VirtualBox... for better or worse.
A mixed advantage/disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions: one that just downloads a docker image of our application servers; the other with 7 extra lines that set up file mapping from host OS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod, overwriting the source code from the Docker image.
The downside of this is the NFS file cache: with it, it's problematic; without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of the caching problems completely, but it works most of the time. Some Gulp tasks run in a container can take 5 minutes when they take 8 seconds on the host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.
As for automatic code reload, that's language specific, but we're happy with Django's devserver for Python, and Nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp+browserSync+watch, but for many developers it's not difficult to serve from Apache and just do traditional hard refresh.
We keep 4 sets of yaml files for Kubernetes: dev, "devstable", stage, and prod. The differences between them are:
env variables explicitly setting the environment (dev/stage/prod)
number of replicas
devstable, stage, and prod use docker images
dev uses docker images and maps the NFS folder with source code over them.
It's very useful to create a lot of bash aliases and autocompletions. I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest docker images, importing the latest db dump from the staging env and cleaning up old Docker files to save space.
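As a sketch, the function behind such a shortcut can be as small as this (the paths are assumptions):

# rec <name>: recreate one service from its dev yaml
rec() {
  kubectl delete -f "k8s/dev/$1.yaml"
  kubectl create -f "k8s/dev/$1.yaml"
}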
See https://github.com/kubernetes/kubernetes/issues/12278 for how to mount a volume from the host machine, the equivalent of:
docker run -v hostPath:ContainerPath
Having a nice local development feedback loop is a topic of rapid development in the Kubernetes ecosystem.
Breaking this question down, there are a few tools that I believe support this goal well.
Docker for Mac Kubernetes
Docker for Mac Kubernetes (Docker Desktop is the generic cross platform name) provides an excellent option for local development. For virtualization, it uses HyperKit which is built on the native Hypervisor framework in macOS instead of VirtualBox.
The Kubernetes feature was first released as beta on the edge channel in January 2018 and has come a long way since, becoming a certified Kubernetes in April 2018, and graduating to the stable channel in July 2018.
In my experience, it's much easier to work with than Minikube, particularly on macOS, and especially when it comes to issues like RBAC, Helm, hypervisor, private registry, etc.
Helm
As far as distributing your code and pulling updates locally, Helm is one of the most popular options. You can publish your applications via CI/CD as Helm charts (and also the underlying Docker images which they reference). Then you can pull these charts from your Helm chart registry locally and upgrade on your local cluster.
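The local loop might look like this (the repo URL and chart name are hypothetical):

$ helm repo add mycompany https://charts.example.com
$ helm repo update
$ helm upgrade --install my-app mycompany/my-app --namespace dev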
Azure Draft
You can also use a tool like Azure Draft to do simple local deploys and generate basic Helm charts from common language templates, sort of like buildpacks, to automate that piece of the puzzle.
Skaffold
Skaffold is like Azure Draft but more mature, much broader in scope, and made by Google. It has a very pluggable architecture. I think in the future more people will use it for local app development for Kubernetes.
If you have used React, I think of Skaffold as "Create React App for Kubernetes".
Kompose or Compose on Kubernetes
Docker Compose, while unrelated to Kubernetes, is one alternative that some companies use to provide a simple, easy, and portable local development environment analogous to the Kubernetes environment that they run in production. However, going this route means diverging your production and local development setups.
Kompose is a Docker Compose to Kubernetes converter. This could be a useful path for someone already running their applications as collections of containers locally.
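A sketch of that path, assuming the default compose file name:

$ kompose convert -f docker-compose.yml -o k8s/   # emits Kubernetes yaml per compose service
$ kubectl apply -f k8s/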
Compose on Kubernetes is a recently open sourced (December 2018) offering from Docker which allows deploying Docker Compose files directly to a Kubernetes cluster via a custom controller.
Kubespray is helpful for setting up local clusters. Mostly, I used a Vagrant-based cluster on my local machine.
Kubespray configuration
You could tweak these variables to have the desired kubernetes version.
The disadvantage of using minikube is that it spawns another virtual machine on top of your machine. Also, the latest minikube version requires a minimum of 2 CPUs and 2GB of RAM from your system, which makes it pretty heavy if you do not have a system with enough resources.
This is the reason I switched to microk8s for development on kubernetes, and I love it. microk8s supports DNS, local-storage, dashboard, istio, ingress and many more add-ons; everything you need to test your microservices.
It is designed to be a fast and lightweight upstream Kubernetes installation isolated from your local environment. This isolation is achieved by packaging all the binaries for Kubernetes, Docker.io, iptables, and CNI in a single snap package.
A single node kubernetes cluster can be installed within a minute with a single command:
snap install microk8s --classic
Make sure your system doesn't have any docker or kubelet service running. Microk8s will install all the required services automatically.
Please have a look at the following link to enable other add-ons in microk8s.
https://github.com/ubuntu/microk8s
You can check the status using:
velotio#velotio-ThinkPad-E470:~/PycharmProjects/k8sClient$ microk8s.status
microk8s is running
addons:
ingress: disabled
dns: disabled
metrics-server: disabled
istio: disabled
gpu: disabled
storage: disabled
dashboard: disabled
registry: disabled
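Individual add-ons from that list can then be switched on with a single command, for example:

$ microk8s.enable dns dashboard storage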
Have a look at https://github.com/okteto/okteto and Okteto Cloud.
The value proposition is to have the classical development experience of working locally, prior to docker, where you can have hot reloads, incremental builds, debuggers... but with all your local changes immediately synchronized to a remote container. Remote containers give you access to the speed of the cloud, allow a new level of collaboration, and integrate development into a production-like environment. It also eliminates the burden of local installations.
As specified before by Robert, minikube is the way to go.
Here is a quick guide to get started with minikube. The general steps are (condensed example commands follow the list):
Install minikube
Create a minikube cluster (in a virtual machine, which can be VirtualBox or Docker for Mac, or Hyper-V in the case of Windows)
Create a Docker image of your application (by using a Dockerfile)
Run the image by creating a Deployment
Create a service which exposes your application so that you can access it.
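Condensed into commands, the steps above might look like this (the image name and port are assumptions):

$ minikube start
$ eval $(minikube docker-env)   # build against minikube's docker daemon
$ docker build -t my-app:0.1 .
$ kubectl create deployment my-app --image=my-app:0.1
$ kubectl expose deployment my-app --type=NodePort --port=8080
$ minikube service my-app       # open the exposed service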
Here is the way I did a local setup for Kubernetes on Windows 10:
Use Docker Desktop
Enable Kubernetes in the settings option of Docker Desktop
In Docker Desktop the default memory allocation is 2GB, so to use Kubernetes with Docker Desktop, increase the memory
Install kubectl as a client to talk to Kubernetes cluster
Run the command kubectl config get-contexts to list the available clusters
Run the command kubectl config use-context docker-desktop to use the docker-desktop context
Build a docker image of your application
Write a YAML file (the descriptive method of creating your deployment in Kubernetes) pointing to the image created in the step above
Expose a service of type NodePort for each of your deployments to make them available to the outside world (a sketch of such a manifest follows)
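A hypothetical manifest covering the last two steps (names, image, and ports are assumptions):

# my-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:0.1        # the image built above
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort               # reachable from outside the cluster
  selector:
    app: my-app
  ports:
  - port: 8080
    nodePort: 30080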

Running and Deploying Rails to Docker Container

I am a total noob to Linux containers and have been spending some time learning about Docker, so forgive my confusion throughout this question. Currently, I have a Rails app in production deployed via Capistrano. My cloud servers are maintained with Opscode Chef on the Debian Wheezy distribution. For development, I have a Vagrant VM preinstalled with the app and services.
If I were to employ Docker, where would my app sit? The container or the host? How would I deploy (production) and share directories (development)? Can I run all my additional services, i.e. memcache, redis, postgresql, etc., on the same server using Docker? I can maybe envision the potential of Docker but I'm having trouble seeing its practical use.
Seems like containers are part of the future. Any guidance for someone making the switch from virtualization?
If I were to employ Docker, where would my app sit?
It could sit inside the container, or it could sit on the host (you can use docker build to copy the app into the container).
How would I deploy (production) and share directories (development)?
Deploying your app would mean committing your local container into an image, publishing it, and running a container from the published image on your servers. I have not tried sharing directories between host and container, but you can try this: https://gist.github.com/jpetazzo/5668338. You can also write a Dockerfile which can copy a directory to a target in the container. Docker's docs on building images will help you there.
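For a Rails app, such a Dockerfile might look like this minimal sketch (the ruby version and commands are assumptions, not a canonical setup):

# Dockerfile
FROM ruby:2.6
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]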
Can I run all my additional services ie memcache, redis, postgresql, etc on the same server using docker?
Yes. You will be running multiple containers on the same server.
I'm no expert and I haven't even used docker myself, but as I understand it, your app sits inside a docker container. Ideally you would deploy a whole container with your own ruby version installed, and so on.
The big benefit is that you can test exactly the same container in your staging system that you're going to ship to production. So you're able to test the complete system with all installed C extensions, the exact same ls command, and so on.
