Is anyone using Docker Swarm in production?

I'm just curious how reliable Docker Swarm is, because I'm deciding whether to replace our current physical production infrastructure with Docker Swarm, but I'm not quite sure.
Please give me any suggestions about Docker Swarm, or any URL with instructions for running Docker Swarm as a production environment.
Thanks.

There are two versions of Swarm.
The original Docker Swarm introduced in late 2014. It requires external components like Consul, Etcd, Registrator, load balancer, etc. It is being used in production. It is still supported but will probably be supplanted eventually by "swarm mode" (my guess).
The new "Swarm Mode" introduced in June 2016. This is a much easier version as it doesn't require external services. It is however very new and still maturing. It is starting to be used in production but you need to be keenly aware of its limitations.
Overall, both Swarm versions are currently being used in production, but because they are relatively new, especially Swarm Mode, they are not as mature or as extensive as alternatives like Kubernetes or Mesos. But then it's really a matter of what type of production system you need. Do you need a simple 3-node high-availability system, or a fully scalable hundred-node system? "Production system" is becoming too generic a term.
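For reference, bringing up a swarm-mode cluster really is just a couple of commands; here is a minimal sketch (the IP address, service name and image are placeholders, not from the question):
  # on the first manager node
  docker swarm init --advertise-addr 192.168.1.10
  # on each additional node, run the "docker swarm join --token ..." command printed above
  # then create a replicated service; no Consul/Etcd/Registrator is needed
  docker service create --name web --replicas 3 -p 80:80 nginx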

Related

Vagrant Box vs Docker?

Vagrant Box:
Boxes are the package format for Vagrant environments. A box can be used by anyone on any platform that Vagrant supports to bring up an identical working environment.
Docker
Docker is a tool that packages, provisions and runs containers independently of the OS. A container packages an application, service or function together with all of the libraries, configuration files, dependencies and other parts it needs to operate.
Question :
How are Docker and a Vagrant box different from each other?
What freedom do they provide for development and production?
How can a developer make use of Vagrant, and how should they think about the differences between Docker and Vagrant?
Vagrant : Vagrant is a project that helps with spawning virtual machines. It started as a command-line wrapper around VirtualBox, something similar to a Gemfile for VMs. You can choose the base image to start from, the network, IP and shared folders, and put it all in a file that anyone can reuse to spawn the same configured machine. Vagrant has different extensions, provisioning options and VM providers. You can run VirtualBox or VMware, and it is extensible enough to create instances on EC2.
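As a rough illustration of "put it all in a file", a minimal setup might look like this (the box name and IP are just examples):
  # write a minimal Vagrantfile; anyone with Vagrant can reproduce the same VM
  cat > Vagrantfile <<'EOF'
  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/focal64"                          # base image
    config.vm.network "private_network", ip: "192.168.56.10"  # fixed private IP
    config.vm.synced_folder ".", "/vagrant"                   # shared project folder
  end
  EOF
  vagrant up    # create and boot the VM
  vagrant ssh   # log into it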
Docker : Docker allows you to package an application with all of its dependencies into a standardized unit of software. This reduces friction between development, QA and testing. The idea is to share the Linux kernel, so you can change your application dynamically, add new capabilities every single day, and scale services out quickly where the problem areas are. Docker is putting itself in an exciting place as the interface to PaaS, be it networking or service discovery, with applications not having to care about the underlying infrastructure. The industry now benefits from a standardized container workflow and an ecosystem of helpful tools, services and a vibrant community around it.
Following are a few points that ease development and production deployments:
ACCELERATE DEVELOPERS : Your development environment is the first and foremost thing in IT. Whatever you need (different tools, databases, instances, networks, etc.) you can easily create with Docker using simple commands, either building an image from a Dockerfile or pulling one from Docker Hub (a minimal sketch follows at the end of this answer). You can go from 0 to 100 with docker-machine in seconds, and as a developer you can focus more on your application.
EMPOWER CREATIVITY : With this loosely coupled architecture, every instance, i.e. every container, is completely isolated from the others. So there is no conflict between tools, software versions, etc., and developers can use the system in more creative ways.
ELIMINATE ENVIRONMENT INCONSISTENCIES : Docker containers are responsible for actually running the applications and include the operating system, user files and metadata. The Docker image is the same across environments, so your build will move seamlessly from dev to QA, staging and production.
In a production environment you want zero downtime along with automated deployments. You should take care of things like service discovery, logging and monitoring, scaling, and vulnerability scanning of Docker images. All of these accelerate the deployment process and help you serve the production environment better. You don't need to log into the production server for a configuration change, logging or monitoring; the Docker tooling does it for you. Developers must understand that Docker is just a tool and is nothing without the other components, but it will definitely reduce a huge deployment from hours to minutes. Hope this clears things up. Thank you.
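Here is the minimal sketch promised above: build one image from a Dockerfile and run that same image in every environment (the Node.js base image, file names and ports are assumptions for illustration):
  cat > Dockerfile <<'EOF'
  FROM node:18-alpine
  WORKDIR /app
  COPY package.json .
  RUN npm install
  COPY . .
  CMD ["node", "server.js"]
  EOF
  docker build -t myapp:1.0 .            # build once
  docker run -d -p 3000:3000 myapp:1.0   # run the same image in dev, QA, staging and production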
Docker relies on containerization, while Vagrant utilizes virtualization.

Does it make sense to manage Docker containers on a single host (or a few hosts) with Kubernetes?

I'm using Docker on a bare metal server. I'm pretty happy with docker-compose to configure and set up applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future; I'm already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare metal servers).
I hope the question's scope is not too broad; otherwise please use the comment section and help me narrow down the question.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's go through them one by one.
I have the feeling that it's too much focused on Cloud platforms where Machines are created on the fly (since I only have at most few bare metal Servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions that can help you do it. It is also easy to deploy on your local PC using Minikube, which will prepare a local cluster for you inside a VM or right in your OS (Linux only).
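For example, once Minikube and kubectl are installed (installation itself is platform-specific), getting a local cluster is roughly:
  minikube start        # creates a local single-node cluster
  kubectl get nodes     # the minikube node should show as Ready
  minikube dashboard    # opens the Kubernetes dashboard in a browser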
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) that can be attached to your containers. You can read more about configuration management in the Kubernetes documentation.
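As a small illustration (the names and values are made up), configuration and secrets are ordinary API objects that containers can consume as environment variables or mounted files:
  kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
  kubectl create secret generic db-credentials --from-literal=password=s3cr3t
  # in the pod spec, containers can then reference them, e.g.:
  #   envFrom:
  #   - configMapRef: { name: app-config }
  #   - secretRef:    { name: db-credentials }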
Moreover, tools like Helm can give you more automation and a range of preconfigured applications which you can install with a single command. And you can prepare your own charts for it.
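With a recent Helm, installing a preconfigured application is a single command; the repository and chart below are only examples:
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install my-nginx bitnami/nginx   # install a chart as a release
  helm create mychart                   # scaffold your own chart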
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can see many kinds of information: current application status, configuration, statistics and more. Kubernetes also integrates well with Heapster, which can be used with Grafana for powerful visualization of almost anything.
Remote container control (SSH is okay with only one server)
Kubernetes' control tool, kubectl, can fetch logs and connect to containers in the cluster without any problems. For example, to connect to a container in a pod named "myapp" you just call kubectl exec -it myapp -- sh, and you get an sh session inside the container. You can also reach any application inside your cluster with kubectl port-forward (or kubectl proxy), which forwards the port you need to your PC.
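A minimal sketch of those commands (the pod name "myapp" is just the example from above):
  kubectl exec -it myapp -- sh            # interactive shell inside the container
  kubectl logs -f myapp                   # stream the container's logs
  kubectl port-forward pod/myapp 8080:80  # reach the app on localhost:8080, no SSH needed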
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future; already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one; it is your choice. Independent of cluster size, you get production-grade networking, service discovery and load balancing.
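Scaling out later is a one-liner per workload; for example (the deployment name and ports here are hypothetical):
  kubectl scale deployment myapp --replicas=5                    # run five copies across the cluster
  kubectl expose deployment myapp --port=80 --target-port=3000   # stable DNS name plus load balancing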
So don't be afraid; just try it locally with Minikube. It will make many operational tasks simpler, not more complex.

Kubernetes vs Docker Swarm

I am evaluating Kubernetes (with Docker containers, not Kubernetes) and Docker Swarm and could use your input.
If I'm looking at three nines (about 8.76 hours of downtime per year) or four nines (about 52 minutes per year) of reliability in a server farm that is < 100 servers, would Kubernetes be overkill due to its complexity? Would Docker Swarm suffice?
Docker Swarm will be able to meet your requirements. I recommend you start with Docker Swarm, as it is robust and very straightforward to use for anyone who has used Docker before.
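For the small high-availability case in your question, a rough sketch would be an odd number of manager nodes so the cluster keeps a quorum if one fails (the addresses and service below are placeholders):
  docker swarm init --advertise-addr 10.0.0.1   # first manager
  docker swarm join-token manager               # prints the join command for additional managers
  # run the printed "docker swarm join ..." command on the other two nodes
  docker node ls                                # all three managers should be listed
  docker service create --name api --replicas 6 nginx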
For a Docker user, there are many new concepts to learn before you can use Kubernetes. Moreover, setting up Kubernetes on premises without a preconfigured cloud platform is not straightforward.
On the other hand, Kubernetes is more flexible and extensible. Kubernetes is older than Docker Swarm, and the Kubernetes community is really big.
It really depends on your actual needs; orchestrators like Kubernetes and Swarm are not silver bullets. To take real advantage of container technology, the applications have to be properly designed. A design guideline for these cloud-native apps is the Twelve-Factor App set of principles from Heroku.
If you want to scale globally, Kubernetes is a great framework for running distributed apps at scale. If you mostly have a lot of Java apps, perhaps containerized traditional applications, then Swarm is a good option.
The business requirements can drive you to make the right choice.
Hope this helps!

What are the limitations of docker swarm over kubernetes?

I need to make a decision on container orchestration, and I need help finding limitations that can occur in real-world scenarios when using Docker Swarm rather than Kubernetes. If anyone has ever faced such a limitation, please share.
The container cluster may reach roughly 50-100 containers.
Docker Swarm is young, and a lot of features have been introduced relatively quickly. This, however, causes more issues and open "serious" bugs. For a production system that should be up 100% of the time, that might be a problem. I personally experienced a bug that made it impossible to start new containers because they were assigned an IP address that was already taken. This forced me to shut down my swarm (it's a dev system, so I didn't mind too much).
I suggest having a look at the most-commented Swarm bugs/issues on GitHub.

How does CoreOS compare to Triton?

Recently, some alternatives for running Docker containers, or even the app container format, have been developed.
I know that there is rkt from CoreOS (https://coreos.com/blog/rocket/) and Triton from Joyent (https://www.joyent.com/).
How do these two approaches compare?
Edit
Maybe I should re-phrase my question after these good comments from @Lakatos Gyula:
How does Triton compare to CoreOS or Kubernetes for running Docker containers at scale?
So in a way, this is an apples to oranges to grapes comparison. CoreOS is an operating system, Kubernetes is open source container orchestration software, and Triton is a PaaS.
So CoreOS: it's a minimal operating system with a focus on security. I've been using it in production for several months now at work and haven't found a reason to dislike it yet. It does not have a package manager, but it comes preinstalled with both rkt and Docker, and you can run both just fine. It also comes with etcd, a distributed key-value store, which Kubernetes happens to be backed by. It also comes with Flannel, a networking program for networking between containers and machines in your cluster. CoreOS also ships with fleet, which you can think of as a distributed version of systemd (systemd being CoreOS' init system). And as of recently, CoreOS ships with Kubernetes itself.
Kubernetes is container orchestration software made up of a few main components. There are masters, which use the API server, controller manager and scheduler to manage the cluster, and there are nodes, which run the kubelet and kube-proxy. Through these components, Kubernetes schedules and manages where your containers run on your cluster. As of v1.1, Kubernetes can also auto-scale your containers. I have been using it in production for as long as I have been using CoreOS, and the two go together very well.
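The auto-scaling mentioned above is exposed through kubectl; a small sketch (the deployment name and thresholds are made up):
  kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
  kubectl get hpa    # shows the HorizontalPodAutoscaler that was created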
Triton is Joyent's PaaS for Docker. Think of it like Joyent's traditional service, but instead of BSD jails (a similar concept to Linux containers) or, at one point, Solaris Zones (I could be wrong on that one; it was just something I heard by word of mouth), you're using Docker containers. This abstracts away a lot of the work you'd have to do setting up CoreOS and Kubernetes; that said, there are services that do the same and use Kubernetes under the hood. Now, I haven't used Triton the way I have used Kubernetes and CoreOS, but it definitely seems to be quite well engineered.
Ultimately, I'd say it's about your needs. If you need flexibility and visibility, then something like CoreOS makes sense, particularly with Kubernetes. If you want that abstracted away and these things handled for you, I'd say Triton makes sense.

Resources