Optimal min hardware configuration for an Orleans silo - orleans

What is the optimal minimum or recommended hardware (mostly cores/RAM) for an Orleans silo, for applications that have both CPU-bound and IO-bound tasks?
And by what criteria does Orleans decide to scale, adding more nodes in the cloud?

We recommend machines with at least 4 cores; 8 cores is even better. In terms of memory, it mostly depends on your application usage. Orleans itself is pretty modest with its internal memory usage. The general guideline is to prefer fewer, larger machines over more, smaller machines.
Orleans does not automatically add new nodes. This should be done outside Orleans, via the mechanisms provided by the cloud provider. Once new nodes are added, Orleans will automatically join them to the cluster and start utilizing them.

Related

Temporal workflow hardware constraints for low latency

I have recently started reading about Temporal for use in my company's project. I have built an in-house orchestrator for short-lived tasks that is able to support 100 TPS with an 8 GB heap and 4 cores. I am running on an Oracle database with 16 GB and 4 cores.
My in-house orchestrator uses Kafka as an event source and a trie to model a DSL that decides the next service to invoke. Comparing my simple orchestrator with Temporal, I see that while Temporal has more features for resiliency and workflow instrumentation, it seems to require a lot of hardware and does not seem to focus on low latency. Is my observation correct? Should I look for a combination of my low-latency orchestrator with Temporal, or am I missing something here?

Why would one choose many smaller machine types instead of fewer big machine types?

In a clustered high-performance computing framework such as Google Cloud Dataflow (or, for that matter, Apache Spark, Kubernetes clusters, etc.), I would think it's far more performant to have fewer really BIG machine types rather than many small machine types, right? As in, it's more performant to have 10 n1-highcpu-96 rather than, say, 120 n1-highcpu-8 machine types, because
the CPUs can use shared memory, which is way faster than network communication
if a single thread needs access to lots of memory for a single-threaded operation (e.g. a sort), it has access to that greater memory on a BIG machine rather than a smaller one
And since the price is the same (e.g. 10 n1-highcpu-96 machines give 960 vCPUs in total, just like 120 n1-highcpu-8 machines, so with per-core pricing they cost the same), why would anyone opt for the smaller machine types?
As well, I have a hunch that with the n1-highcpu-96 machine type we'd occupy the whole host, so we don't need to worry about competing demands on the host from another Google Cloud customer's VM (e.g. contention in the CPU caches or motherboard bandwidth, etc.), right?
Finally, although I don't think the Google Compute Engine VMs report the "true" CPU topology of the host system, if we do choose the n1-highcpu-96 machine type, the reported CPU topology is presumably a touch closer to the truth because the VM is using up the whole host, so any program running on that VM that attempts to take advantage of the topology (e.g. the NUMA-aware option in Java?) has a better chance of making the "right decisions".
Whether you want to choose many instances of a smaller machine type or a few instances of big machine types will depend on many factors.
The VM sizes differ not only in the number of cores and RAM, but also in network I/O performance.
Instances with small machine types are limited in CPU and I/O power and are inadequate for heavy workloads.
Also, if you are planning to grow and scale, it is better to design and develop your application to run across several instances. Having small VMs gives you a better chance of having them distributed across physical servers in the datacenter that have the best resource situation at the time the machines are provisioned.
Having many small instances also helps to isolate fault domains. If one of your small nodes crashes, that only affects a small number of processes. If a large node crashes, multiple processes go down.
It also depends on the application you are running on your cluster and on the workload. I would also recommend going through this link to see the sizing recommendations for an instance.

Mirrored queue performance factors

We operate two dual-node brokers, each broker having quite different queues and workloads. Each box has 24 cores (H/T) worth of Xeon E5645 @ 2.4GHz with 48GB RAM, connected by Gigabit LAN with ~150μs latency, running RHEL 5.6, RabbitMQ 3.1, Erlang R16B with HiPE off. We've tried with HiPE on, but it made no noticeable performance impact and was very crashy.
We appear to have hit a ceiling for our message rates of between 1,000/s and 1,400/s both in and out. This is broker-wide, not per-queue. Adding more consumers doesn't improve throughput overall, just gives that particular queue a bigger slice of this apparent "pool" of resource.
Every queue is mirrored across the two nodes that make up the broker. Our publishers and consumers connect equally to both nodes in a persistent way. We notice an ADSL-like asymmetry in the rates too: if we manage to publish at a high rate, the delivery rate drops to high double digits. Testing with an un-mirrored queue gives much higher throughput, as expected. Queues and exchanges are durable; messages are not persistent.
We'd like to know what we can do to improve the situation. The CPU on the box is fine: beam takes a core and a half for one process, then another 80% each of two cores for another couple of processes. The rest of the box is essentially idle. We are using ~20GB of RAM in userland, with the system cache filling the rest. IO rates are fine. Network is fine.
Is there any Erlang/OTP tuning we can do? delegate_count is the default 16, could someone explain what this does in a bit more detail please?
This is difficult to answer without knowing more about how your producers and consumers are configured, which client library you're using, and so on. As discussed on IRC (http://dev.rabbitmq.com/irclog/index.php?date=2013-05-22) a minute ago, I'd suggest you attempt to reproduce the topology using the MulticastMain Java load-test tool that ships with the RabbitMQ Java client. You can configure multiple producers/consumers, message sizes and so on. I can certainly get 5k messages/s out of a two-node cluster with HA on my desktop, so this may be a client (or application code) related issue.
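If it helps to see what such a load test involves, below is a stripped-down sketch of the same idea written directly against the RabbitMQ Java client: a durable queue, non-persistent messages, and a manually-acking consumer, with throughput counted on the consuming side. MulticastMain does all of this (and more) for you; the host name, queue name, prefetch and message count here are placeholders, not values taken from the discussion above.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import com.rabbitmq.client.MessageProperties;

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

public class ThroughputSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit-node-1");            // placeholder broker host
        Connection conn = factory.newConnection();

        // Durable queue; mirroring is applied by broker-side policy, not by the client.
        Channel pubCh = conn.createChannel();
        pubCh.queueDeclare("perf.test", true, false, false, null);

        Channel subCh = conn.createChannel();
        subCh.basicQos(200);                         // prefetch; worth varying when mirrored
        final AtomicLong delivered = new AtomicLong();
        subCh.basicConsume("perf.test", false, new DefaultConsumer(subCh) {
            @Override
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props, byte[] body) throws IOException {
                delivered.incrementAndGet();
                getChannel().basicAck(env.getDeliveryTag(), false);
            }
        });

        byte[] payload = new byte[1024];             // message size is another variable to sweep
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100_000; i++) {
            // MessageProperties.TEXT_PLAIN is delivery-mode 1, i.e. non-persistent.
            pubCh.basicPublish("", "perf.test", MessageProperties.TEXT_PLAIN, payload);
        }
        Thread.sleep(5_000);                         // crude drain window for the consumer
        long elapsed = System.currentTimeMillis() - start;
        System.out.printf("published 100000, delivered %d in %d ms%n", delivered.get(), elapsed);
        conn.close();
    }
}
```

Sweeping the prefetch, the message size and the number of publishing/consuming channels is usually where mirrored-queue ceilings show up, so those are the knobs worth varying first.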

Defining Scaling Threshold for Azure Web Roles

Azure embraces the notion of elastic scaling, and I've been able to achieve this with my Worker Roles. However, when it comes to my Web Roles (e.g. MVC apps) I am not sure what to monitor (or how) to determine when it's a good time to increase (or decrease) the number of running instances. I'm assuming I need to monitor one or many performance counters, but I'm not sure where to start.
Can anyone recommend a best practice for assessing an MVC Web Role instance's load relative to scaling decisions?
This question is a bit open-ended, as monitoring is typically app-specific. Having said that:
Start with simple measurements that you'd look at on a local server, representing KPIs for your app; for instance, network utilization. This TechNet article describes performance counters collected by System Center for Windows Azure, such as:
ASP.NET Applications Requests/sec
Network Interface Bytes Received/sec
Network Interface Bytes Sent/sec
Processor % Processor Time Total
LogicalDisk Free Megabytes
LogicalDisk % Free Space
Memory Available Megabytes
You may also want to watch # of requests queued and request wait time.
Network utilization is interesting, since your NIC provides approx. 100Mbps per core and could end up being a bottleneck even when CPU and other resources are underutilized. You may need to scale out to more instances to handle high-bandwidth scenarios.
Also: I tend to give less importance to CPU utilization, even though it's so easy to measure (and shows up so frequently in examples). Running a CPU at near capacity is a good thing usually, since you're paying for it and might as well use as much as possible.
As far as decreasing: this needs to be handled a bit more carefully. Windows Azure compute is billed by the hour. If, say, you scale out to an extra instance at 11:50 and scale in again at 12:10, you've just incurred two CPU-hours. Also: you don't want to scale out, then take new measurements and decide you can now scale back again (effectively creating a constant pulse of adding and removing instances). To make things easier, consider the Autoscaling Application Block (WASABi), found in the Enterprise Library. This has all the scale rules baked in (such as the ones I just mentioned) and is very straightforward to use.
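The "constant pulse" problem described above is essentially solved by adding hysteresis and cooldown windows to whichever counters you pick, which is roughly what WASABi's stabilizer/cooldown settings handle for you. As an illustration of the decision logic only, here is a minimal sketch; MetricSource and RoleScaler are hypothetical stand-ins for your diagnostics feed and management API, and the thresholds are arbitrary.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical hooks: a diagnostics feed and a management API for the role.
interface MetricSource { double requestsQueued(); }
interface RoleScaler   { int instanceCount(); void setInstanceCount(int n); }

public class ScaleController {
    private final MetricSource metrics;
    private final RoleScaler scaler;
    private Instant lastChange = Instant.EPOCH;

    // Scale out quickly, scale in slowly, and never flap inside a cooldown window.
    private static final double OUT_THRESHOLD = 50;                     // queued requests
    private static final double IN_THRESHOLD  = 5;
    private static final Duration OUT_COOLDOWN = Duration.ofMinutes(10);
    private static final Duration IN_COOLDOWN  = Duration.ofMinutes(45); // respect hourly billing

    public ScaleController(MetricSource metrics, RoleScaler scaler) {
        this.metrics = metrics;
        this.scaler = scaler;
    }

    public void evaluate() {
        Instant now = Instant.now();
        double queued = metrics.requestsQueued();
        int count = scaler.instanceCount();

        if (queued > OUT_THRESHOLD
                && Duration.between(lastChange, now).compareTo(OUT_COOLDOWN) >= 0) {
            scaler.setInstanceCount(count + 1);   // add one instance and start a new cooldown
            lastChange = now;
        } else if (queued < IN_THRESHOLD && count > 2   // keep a minimum of 2 instances
                && Duration.between(lastChange, now).compareTo(IN_COOLDOWN) >= 0) {
            scaler.setInstanceCount(count - 1);   // remove one instance, again with a cooldown
            lastChange = now;
        }
    }
}
```

The asymmetric cooldowns are the important part: scale-in decisions are deliberately slower than scale-out decisions, so a brief dip in load doesn't immediately undo an instance you just paid the hour for.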

Berkeley DB replication: upper limit on number of replicants?

I'm considering using Berkeley DB to cache some data on an application cluster. What's a reasonable upper limit on the number of nodes I can plan on Berkeley DB handling? Writing to the database will be from a single node.
Mark,
Most of our customers are using replication groups of 5-20 nodes, although we have some large customers running with much larger replication groups. There is no inherent limit built into Berkeley DB.
The real world limit will depend on your read/write workload mix, how you configure your replication system and the amount of CPU cycles available on the master system. Basically, the master needs to communicate with each replica (send log records, process acknowledgements, respond to requests, etc.). Each replica that communicates with the master adds a small amount of overhead. For a mostly read/occasionally write workload, the master doesn't have to communicate that often and communicating with the replicas requires minimal processing. On a predominantly write workload, the master is actively communicating with the replicas and incurs a more significant workload per replica. You can reduce the workload on the master by funneling read operations to the replicas and by utilizing the Berkeley DB HA client-to-client synchronization feature.
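To make the read-funneling idea concrete, here is a minimal sketch of the routing pattern: writes go to the single master, reads are spread round-robin across the replicas. Store is a hypothetical wrapper around whatever Berkeley DB handles you actually open; none of the names here are Berkeley DB API.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical wrapper around a node's Berkeley DB handle; only the routing matters here.
interface Store {
    byte[] get(byte[] key);
    void put(byte[] key, byte[] value);
}

public class ReadFunnelingRouter implements Store {
    private final Store master;            // the one node that accepts writes
    private final List<Store> replicas;    // read-only replicas
    private final AtomicInteger next = new AtomicInteger();

    public ReadFunnelingRouter(Store master, List<Store> replicas) {
        this.master = master;
        this.replicas = replicas;
    }

    @Override
    public byte[] get(byte[] key) {
        // Round-robin reads across replicas to keep load off the master.
        Store replica = replicas.get(Math.floorMod(next.getAndIncrement(), replicas.size()));
        return replica.get(key);
    }

    @Override
    public void put(byte[] key, byte[] value) {
        // All writes go to the master, which then ships log records to the replicas.
        master.put(key, value);
    }
}
```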
Your mileage will vary, so the best approach is to test a prototype of your application and evaluate the balance of throughput, application requirements and available CPU cycles. Do you have a sense of how many nodes you expect to need in your replication groups?
Regards,
Dave
PS: The Getting Started with Replication Guide is a good place to start.

Resources