Design of an HA, consistent, and responsive counter [closed]

Let's say Flipkart launches an exclusive Redmi sale at 12 PM with 10K units in stock, but far more people will access it at the same time. There are advantages and disadvantages to keeping the counter on a single machine versus distributing it. If we keep it in an in-memory data store on a single machine, that machine becomes a bottleneck, since many app servers will hit it simultaneously, and we have to account for the memory and CPU needed to queue those requests. If the counter is distributed across nodes and machines access different nodes, we eliminate the bottleneck, but an update on one node has to be made consistent across all nodes, which hurts response time. What would be a good design choice here?

Yes, a single-machine counter will be a real performance bottleneck under intensive load, and a single point of failure as well. I would suggest a sharded counter implementation.
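A minimal sketch of that sharded-counter idea, assuming Redis as the in-memory store; the shard count, key names, and Lua script are my own illustrative choices, not part of the answer. The stock is split across N shard keys, so concurrent decrements spread across shards instead of contending on a single key:

```python
import random
import redis  # third-party redis-py package; assumes a reachable Redis server

NUM_SHARDS = 16
SHARD_KEY = "redmi_stock:shard:{}"  # hypothetical key-naming scheme

def init_stock(client: redis.Redis, total: int) -> None:
    """Split the total stock evenly across the shard keys."""
    base, remainder = divmod(total, NUM_SHARDS)
    for i in range(NUM_SHARDS):
        client.set(SHARD_KEY.format(i), base + (1 if i < remainder else 0))

# Decrement only if the shard still has stock; done atomically in a Lua
# script so two app servers can't both take the last unit of a shard.
TAKE_ONE = """
local v = tonumber(redis.call('GET', KEYS[1]) or '0')
if v > 0 then
    redis.call('DECR', KEYS[1])
    return 1
end
return 0
"""

def try_buy(client: redis.Redis) -> bool:
    """Attempt to claim one unit, probing shards in random order."""
    take = client.register_script(TAKE_ONE)
    shards = list(range(NUM_SHARDS))
    random.shuffle(shards)  # spread contention across shards
    for i in shards:
        if take(keys=[SHARD_KEY.format(i)]) == 1:
            return True
    return False  # every shard is empty: sold out
```

Reading the approximate remaining total is then an MGET plus a sum over the shard keys, trading a slightly stale read for write scalability.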

Related

Sync time across multiple iOS devices to milliseconds level? [closed]

Is it (really) possible to sync time across multiple (not inter-connected) iOS devices to within a few milliseconds of accuracy? The only possible solution I (and others, according to Stack Overflow) can think of is to sync the devices with a time server over NTP.
Multiple sources state:
NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
Can NTP really achieve accuracy at the few-milliseconds level?
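If the real question is what accuracy is achievable on a given network, one can measure it directly. A minimal sketch using the third-party ntplib package (the server choice and sample count are illustrative, not from the question):

```python
import statistics
import ntplib  # third-party package: pip install ntplib

def sample_ntp_offsets(server: str = "pool.ntp.org", n: int = 8) -> None:
    """Query an NTP server several times and report the clock-offset spread."""
    client = ntplib.NTPClient()
    offsets = []
    for _ in range(n):
        response = client.request(server, version=3)
        offsets.append(response.offset)  # estimated local-clock offset, in seconds
    print(f"mean offset: {statistics.mean(offsets) * 1000:.2f} ms")
    print(f"spread (stdev): {statistics.stdev(offsets) * 1000:.2f} ms")

sample_ntp_offsets()
```

The spread of the sampled offsets, driven largely by variable and asymmetric network delay, gives a rough sense of whether few-millisecond agreement is realistic on that path.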

Aurora vs MemSQL for low throughput but low latency purpose [closed]

We are changing our current user-side MySQL database. We have a website and a mobile app through which users around the US query our database. The relevant data is contained in three tables, and a join query across the three tables is needed to send the relevant results to the users.
The results sent back to users are small (<6 KB). If our objective is low latency and throughput is a low priority, which of the two following databases would perform better:
MemSQL or AWS Aurora?
They both have the same starting hardware cost (~$0.28/hr). We are only considering these two databases at this stage so that we can keep using our in-house MySQL knowledge.
I like that I can outsource the DB headache to Aurora. But surely MemSQL's ability to read and write from memory makes it the lower-latency solution?
Nothing beats in-memory for speed, and that is what MemSQL is built for. It stores tables (in rowstore mode) in memory and uses a custom query engine that compiles queries into an intermediate language and caches them so execution is as fast as possible. Aurora is more like a classic disk-based MySQL instance, but with lots of infrastructure changes and optimizations to make the most of Amazon's services.
Before deciding, though, you need to figure out what "low latency" means: seconds or milliseconds?
MemSQL will be faster, most likely in milliseconds, depending on your query. Aurora will be slower but can probably deliver sub-second responses, again depending on your query, the resources allocated, and how the data is structured.
Without more details, the answer is to decide what your performance tolerance is and then experiment.
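Since the practical answer is to experiment, here is a minimal latency-probe sketch that works against any MySQL-compatible endpoint (both Aurora and MemSQL speak the MySQL wire protocol); the table names and query below are hypothetical placeholders for your own three-table join:

```python
import statistics
import time
import pymysql  # third-party driver: pip install pymysql

# Hypothetical three-table join standing in for your real query.
QUERY = """
SELECT u.id, p.name, o.total
FROM users u
JOIN profiles p ON p.user_id = u.id
JOIN orders o ON o.user_id = u.id
WHERE u.id = 42
"""

def measure_latency(host, user, password, database, runs=50):
    """Time the same query repeatedly and report median/p95 latency in ms."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           database=database)
    timings = []
    with conn.cursor() as cur:
        cur.execute(QUERY)   # warm-up run, not timed
        cur.fetchall()
        for _ in range(runs):
            start = time.perf_counter()
            cur.execute(QUERY)
            cur.fetchall()
            timings.append((time.perf_counter() - start) * 1000)
    conn.close()
    timings.sort()
    print(f"p50: {statistics.median(timings):.1f} ms, "
          f"p95: {timings[int(0.95 * len(timings))]:.1f} ms")
```

Run the same probe from the same client region against both endpoints; at <6 KB result sizes, the network round trip to the instance often dominates the total.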

MIPS/Pipeline regarding unique Data & Instruction memory [closed]

How does having a single, unified memory for data and instructions affect the standard 5-stage pipeline? What about with and without forwarding?
What's the advantage of having a different memory for each?
Regardless of forwarding, if you only have one port to access memory (i.e., a unified data-and-instruction memory bus) and, to simplify, no cache in the system (so every memory access goes through the memory unit), then every instruction whose MEM stage needs the memory bus generates a structural hazard: the CPU cannot perform the FETCH and MEM stages in parallel because both need to access memory.
If instead you have two ports to access memory (e.g., one for instructions and another for data), the structural hazard noted above is avoided, as each memory-accessing stage uses its own bus and memory.
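To make the cost concrete, here is a toy cycle-count model (my own illustration, not part of the original answer) of that structural hazard: with a single memory port, every load/store's MEM stage steals the port from that cycle's instruction fetch and inserts a one-cycle bubble.

```python
def pipeline_cycles(instructions, dual_port):
    """Count cycles for a 5-stage in-order pipeline, modeling only the
    structural hazard between IF and MEM on a shared memory port.
    `instructions` is a list of opcodes; 'lw'/'sw' use memory in MEM."""
    cycles = len(instructions) + 4  # ideal: 1 per instruction + 4 to drain
    if not dual_port:
        # each memory-using MEM stage blocks that cycle's fetch,
        # stalling the front end for one cycle
        cycles += sum(1 for op in instructions if op in ("lw", "sw"))
    return cycles

prog = ["lw", "add", "sw", "add", "lw", "sub"]  # hypothetical instruction mix
print(pipeline_cycles(prog, dual_port=False))   # 13 cycles: 3 structural stalls
print(pipeline_cycles(prog, dual_port=True))    # 10 cycles: no stalls
```

This ignores data hazards entirely; it only isolates how much the shared port costs on a memory-heavy instruction mix.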

Would doubling speed of CPU allow system to handle twice as many processes? [closed]

If the speed of the CPU is doubled, would the system be able to handle twice as many processes, assuming context switches are ignored?
No. CPU speed is rarely the bottleneck anymore. Also, doubling the clock speed would require changes in both your OS's scheduler and your compiler (both of which make assumptions about the speed of the CPU relative to the data buses).
It would make things better, but it's not a linear improvement.
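One way to see why the improvement isn't linear is Amdahl's law: only the CPU-bound fraction of the work benefits from a faster clock. A back-of-the-envelope sketch (the 60% CPU-bound fraction below is a made-up example):

```python
def amdahl_speedup(cpu_fraction, cpu_speedup=2.0):
    """Overall speedup when only the CPU-bound fraction of the work
    benefits from a faster clock (Amdahl's law)."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

# If 60% of a process's time is CPU-bound and the rest waits on memory
# or I/O, doubling CPU speed gives only ~1.43x overall throughput:
print(f"{amdahl_speedup(0.6):.2f}x")
```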

In Erlang, what is the best way to upgrade a distributed system? [closed]

If I have multiple web servers written in Erlang running (load balanced) and Mnesia is used for the backend database, what is the best way to upgrade the whole system to a newer version?
Even though Erlang supports hot code loading, it is not something one must use. In your case, it seems easier to remove one node at a time from the load balancer, restart it running the new code, and put it back in the load balancer.
Taking down nodes is also something you must be prepared to do if you want to hot-upgrade to new Erlang/OTP releases in your live system.
But the real issues that can bubble up for you come from Mnesia. I think you should ask a new question with the specifics of what you want Mnesia to do: whether there are no schema or table changes and you just want to take one node down and add it back later, or whether you are actually introducing new tables or new columns in existing tables. Mnesia provides the ability to add and remove nodes with table replicas, and it also, rather uniquely, supports changing table definitions online through mnesia:transform_table/3,4.
If you're just doing code upgrades, I wrote an article about Erlang release handling with fab. With that setup, you can do real-time code loading without having to restart nodes. Transforming the database, though, should be done from a single node, triggered separately from the release upgrade.
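Since the answer mentions orchestrating this with fab, here is a minimal rolling-restart sketch using Python's Fabric library; the host names, load-balancer CLI, service name, and health-check command are all hypothetical stand-ins for your own tooling:

```python
from fabric import Connection  # third-party: pip install fabric

# Hypothetical node list; substitute your own hosts and commands.
NODES = ["erl1.example.com", "erl2.example.com", "erl3.example.com"]

def rolling_upgrade(release_tarball: str) -> None:
    """Upgrade one node at a time so the cluster keeps serving traffic."""
    for host in NODES:
        c = Connection(host)
        # 1. drain: remove the node from the load balancer (hypothetical CLI)
        c.run(f"lb-ctl remove {host}")
        # 2. deploy the new release and restart the Erlang node
        c.put(release_tarball, "/opt/app/releases/")
        c.run("systemctl restart myapp")
        # 3. wait until the node reports healthy, then re-add it
        c.run("myapp-healthcheck --wait")
        c.run(f"lb-ctl add {host}")
```

Any Mnesia schema transformation would then run once, from a single node, before or after this loop rather than inside it.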
