We have two clusters with eight nodes each, and we are looking for a good cluster framework that lets us launch jobs, has a built-in scheduler with different scheduling policies, and provides a monitoring system with a web front end. Each node runs Ubuntu 11.04. Both commercial and open-source products are OK.
Some of the options I have seen are:
TORQUE and Maui (not sure whether this combination has a web front end for monitoring)
SLURM and Maui
GEXEC and Ganglia (no scheduler)
Which product(s) would you recommend? Also, is there any advantage to using a cluster operating system like MOSIX instead of these tools?
The paid version of Maui is called Moab (it normally uses TORQUE as the resource manager). It can also be sold with monitoring tools. I think Moab is a really good product, but I am strongly biased towards it (I work for the company that develops TORQUE/Moab).
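For reference, launching a job on a TORQUE/Maui (or Moab) cluster comes down to handing qsub a PBS script. A minimal sketch driven from Python, where the script body and the nodes=1:ppn=8 resource request are just example values:

    # Sketch: submit a PBS job to TORQUE via qsub; the script body and
    # the resource request below are example values only
    import subprocess

    job_script = """#!/bin/bash
    #PBS -N hello_job
    #PBS -l nodes=1:ppn=8
    echo "Hello from $(hostname)"
    """

    # With no script argument, qsub reads the job from stdin and
    # prints the new job id
    result = subprocess.run(["qsub"], input=job_script,
                            capture_output=True, text=True, check=True)
    print("submitted:", result.stdout.strip())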
Our company is developing an application that runs in three separate Kubernetes clusters for different environments (production, staging, testing).
We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mail server.
So basically we have three environments with different versions of our application, plus some shared services that just need to run and that we do not care much about:
Monitoring: We need to install InfluxDB and Grafana. Every cluster has a pre-installed Heapster that needs to send data to our tools.
Logging: We haven't decided yet.
Mailserver (https://github.com/tomav/docker-mailserver)
Independent services: Sentry, GitLab
I am not sure where to run these external shared services. I found these options:
1. Inside each cluster
We install the tools three times, once for each of the three environments.
Cons:
We don't have one central point from which to analyze our systems.
If the whole cluster is down, we cannot look at anything.
Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional Kubernetes cluster.
Cons:
Cost of an additional cluster.
It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).
3. Use an additional root server
We run Docker containers on an old-school root server.
Cons:
It feels contradictory to use a root server instead of cutting-edge Kubernetes.
Single point of failure.
We need to manage the Docker containers manually (or attach the machine to Rancher).
I tried to google the problem, but I cannot find anything on this topic. Can anyone give me a hint or some links?
Or is the possibility that a cluster might go down simply not a relevant problem?
To me, the second option sounds less evil, but I cannot yet estimate how hard it is to transfer data from one cluster to another.
The important questions are:
Is it a problem to keep the monitoring data in a cluster, given that one cannot see that data if the cluster is offline?
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Thanks for your hints,
Marius
That is a complex and rather philosophical topic, but I will give you my view on it and some facts to support it.
I think the best way is the second one, creating an additional cluster, and here is why:
You need a point that is accessible from any of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments without affecting your current workload.
Yes, you pay a bit more. However, you need resources to run your shared applications anyway, and the overhead of a Kubernetes infrastructure is small compared to the applications themselves.
With a separate cluster, you can set up a real HA solution, which you might not need for the staging and development clusters, so you will not pay for it multiple times.
Technically it is fine as well. Heapster can collect data from multiple clusters, and almost any logging solution can work with multiple clusters too. All the other applications can simply run on the separate cluster; that is all you need to do with them.
Now, about your questions:
Is it a problem to keep the monitoring data in a cluster, given that one cannot see that data if the cluster is offline?
No, it is not a problem with a separate cluster.
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
I think so. At least I have done it several times, and I know some other projects with a similar architecture.
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Yes, nothing complex there. Usually, it does not depend on the platform.
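For illustration, pushing a metric from inside one cluster to an InfluxDB that lives in another is just an HTTP write to a reachable endpoint. A minimal sketch with the influxdb Python client, where the host name, database, and measurement are hypothetical placeholders:

    # pip install influxdb -- sketch only; host name, database and
    # measurement below are hypothetical placeholders
    from influxdb import InfluxDBClient

    # Endpoint of the shared monitoring cluster, reachable from all clusters
    client = InfluxDBClient(host="influxdb.shared.example.com", port=8086,
                            database="k8s")

    client.write_points([{
        "measurement": "cluster_heartbeat",
        "tags": {"cluster": "staging"},
        "fields": {"alive": 1},
    }])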
I am developing a distributed application consisting of several small ZeroMQ applications. Is there a recommended way to monitor and configure all of them? I am thinking of configuring ports and network addresses, starting the applications, monitoring, etc. I know that systems like Nagios exist, but I wonder whether there is an easy, recommended way for ZeroMQ.
This is far from easy, because it depends on your system, its architecture, and your requirements, e.g. on availability. If I assume that you provide a service that should be restarted when it becomes unavailable, there are a bunch of solutions:
monit: http://mmonit.com/monit/
runit: http://smarden.org/runit/
daemontools
some more alternatives: http://packages.debian.org/jessie/daemontools
None of these has configuration or deployment of services as a goal. Also, monitoring your services independently instead of fully controlling them is still a good idea, so keep your Nagios install.
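One lightweight pattern that combines well with these tools is to give each ZeroMQ application a small heartbeat socket that monit or Nagios can probe. A minimal sketch with pyzmq, where the port number is an arbitrary choice:

    # pip install pyzmq -- sketch of a liveness endpoint inside a
    # ZeroMQ application; port 5555 is an arbitrary choice
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5555")

    while True:
        sock.recv()         # any request counts as a health probe
        sock.send(b"pong")  # a timely reply proves the process is alive

A monitoring check then connects a REQ socket, sends a ping, and treats a missing or late reply as a failure.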
Some background:
I'm building a pretty involved website (as far as the stack used is concerned). Components, among some other smaller pieces, include:
Elasticsearch
Redis
ZeroMQ
Couchbase
RethinkDB
traffic through Nginx -> Node
The intention is to have a highly available website that stays pretty lean (and low cost) at the same time.
The current topology I'm considering:
2 web servers in an active/active config with DNS load balancing (Nginx, static asset serving, etc.), plus load balancing to the second tier:
2 app servers in active/active. Most of the components, like Elasticsearch, can do sharding/replication themselves, so this should not be too hard to set up (fingers crossed).
Session handling in replicated Redis.
Naturally I want monitoring and alerting when something is wrong, and ideally the system should handle failures automatically: promote Redis from slave to master, or even initialize a new EC2 instance (if I were on EC2, that is).
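As an aside, the Redis promotion step itself is trivial to script. A minimal sketch with the redis-py client, where the replica host name is a made-up placeholder:

    # pip install redis -- sketch of a manual failover step; the
    # replica host name is a made-up placeholder
    import redis

    replica = redis.StrictRedis(host="redis-replica.internal", port=6379)
    replica.slaveof()  # SLAVEOF NO ONE: stop replicating, become master

The hard part is not this command but deciding reliably when to run it, which is what tools like Redis Sentinel automate.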
However, I want to stay independent of any particular hosting provider, which I believe (please correct me if I'm wrong) is where OpenStack comes in.
Is it correct that:
- OpenStack allows me to control the entire lifecycle of my website stack (covering multiple boxes/virtual machines)?
- Does it allow me (with configuration work, of course) to spin up instances, monitor, alert when something goes wrong, take appropriate action in those scenarios, etc.?
Or is OpenStack entirely the wrong tool for the job? Is there anything else that would fit better as a sort of "management layer" on top of my entire website?
Thanks
OpenStack isn't VMware ESX. It's not a very good straight-up, simple virtual-machine hosting environment. If all you want is a way to easily manage virtual machines, I might suggest Ganeti; it even has HA failover of virtual machines. In a two-physical-host environment, that is probably the way to go.
What OpenStack gives you that Ganeti won't is RESTful APIs. It has AWS-compatible APIs, but its native OpenStack APIs are even better. If you want to automate elasticity or self-healing, this is huge. Being able to link up in Python using the existing client APIs and just write scripts that spin up instances as needed is what DevOps is all about.
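To make that concrete, here is a minimal sketch of such a script using the openstacksdk Python client; the cloud, image, flavor, and network names are placeholders for your own:

    # pip install openstacksdk -- sketch only; "mycloud" and the image,
    # flavor and network names are hypothetical placeholders
    import openstack

    # Credentials for the named cloud are read from clouds.yaml
    conn = openstack.connect(cloud="mycloud")

    # Boot one more instance and wait until it is ACTIVE
    server = conn.create_server(
        "app-worker-1",
        image="ubuntu-16.04",
        flavor="m1.small",
        network="private",
        wait=True,
    )
    print(server.status)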
So I guess it comes down to your level of commitment and what you need. For 2 physical machines, OpenStack probably isn't the best solution. But down the line, when you've got more apps and more VMs than you can manage manually, OpenStack will be there to help you write code that makes your datacenter dance to your melodic tunes.
I have a CouchDB instance running on one machine, and thus with its own Erlang VM process. If I have another separate Erlang application running on that machine too, is it better to share the same VM between CouchDB and my application, or is it recommended to start a new Erlang node?
While many would recommend decoupling these subsystems, I would take the opposite approach. Erlang has a built-in strategy for running many applications in the same release. If your applications talk to each other directly, it might make sense to bundle them together into one release; this makes calls between the applications faster. Some will argue that all your applications then share the same fate should you need to take the system down for an upgrade that only one of them needs. That is a moot point with Erlang, where you distribute your applications across many nodes, and most upgrades can be done with hot code loading anyway.
There is no problem with running several VMs on the same machine (at least with recent OTP releases); however, it is quite handy to have all your applications on one Erlang node. Easier communication, dependency management, supervision, fault tolerance: you get all of that for free in this case, not to mention maintaining only one 'node' in the source control system.
The problem starts with CouchDB. It does not have a decent build system that would let you use it as just another application on an independent Erlang node, so in this case you need at least two VMs (one acting as the CouchDB daemon, the other hosting your application).
I'm really impressed with the power of cloud computing when it comes to scaling your facilities up and down depending on your load.
How can I shift my paradigm and learn to write my applications that way? "Write it once and forget" (no matter what the future load is) would be the ideal.
How can I practice my skills in that area?
Should I set up a virtualization environment where I can add VMs to a private cloud (via the command line?) based on some smart algorithm that foresees the load for some period of time?
Ideally I want to practice this without buying actual cloud computing services, just on my own hardware.
The only thing I want to practice here is scaling app/web roles and/or message-queue systems when the current workers have too many jobs in the queue. So let's rule database scaling out of the question's scope as too big a topic.
One option I will throw out is to use a native cloud execution framework. You might look at the CloudIQ Platform; one component is the CloudIQ Engine. It allows you to develop cloud-native apps in C/C++, Java, and .NET. You get scale-up capability by simply adding workers to your cloud: the framework automatically distributes your applications to the new machine(s) and, once they are installed, begins sending work to them as requests come in. So in effect the cloud handles the queueing issue for you.
Check out the Download and Community links for more information.
You should try AWS: Amazon offers a free tier that gives you storage, messaging, and micro instances (Linux only), so you can start developing small try-outs without paying. Writing an application that scales isn't that hard: try to break your flow into small, concurrent tasks. Client-server applications are even easier; use a load balancer to start and terminate servers on demand.
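To make the "scale on queue depth" idea concrete, here is a minimal sketch of the control loop using boto3 against SQS; the queue URL is a placeholder, and launch_worker() is a hypothetical hook for whatever starts a new instance or process:

    # pip install boto3 -- sketch of queue-depth-driven scaling; the
    # queue URL and launch_worker() are hypothetical placeholders
    import time
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"
    BACKLOG_LIMIT = 100  # jobs waiting before we add a worker

    def launch_worker():
        # Placeholder: start an EC2 instance, container or process here
        print("scale out: starting one more worker")

    while True:
        attrs = sqs.get_queue_attributes(
            QueueUrl=QUEUE_URL,
            AttributeNames=["ApproximateNumberOfMessages"],
        )
        backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
        if backlog > BACKLOG_LIMIT:
            launch_worker()
        time.sleep(30)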