Would doubling speed of CPU allow system to handle twice as many processes? [closed] - cpu-speed

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
If the speed of the CPU is doubled, would the system be able to handle twice as many processes? Assume context-switch overhead is ignored.

No. CPU speed is rarely the bottleneck anymore. Also, doubling the clock speed would require changes in both your OS's scheduler and your compiler (both of which make assumptions about the speed of the CPU relative to the data buses).
It would make things better, but it's not a linear improvement.
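The non-linearity can be made concrete with Amdahl's law: doubling CPU speed only halves the CPU-bound portion of each process's service time, so throughput does not double when I/O or memory stalls dominate. A small sketch with hypothetical workload numbers:

```python
def speedup(cpu_fraction: float, cpu_speedup: float) -> float:
    """Amdahl's-law speedup when only the CPU-bound fraction accelerates."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

# A workload that is 40% CPU-bound and 60% I/O-bound (assumed numbers):
print(speedup(0.4, 2.0))   # 1.25 -- far from 2x
# Even a fully CPU-bound workload tops out at exactly 2x:
print(speedup(1.0, 2.0))   # 2.0
```

So twice the clock rate buys twice the process throughput only in the limiting case where every process is purely CPU-bound.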

Related

Sync time across multiple iOS devices to milliseconds level? [closed]

Closed 2 years ago.
Is it (really) possible to sync time across multiple (not inter-connected) iOS devices to within a few milliseconds of accuracy? The only solution I (and others, according to Stack Overflow) can think of is to sync the devices with one or more time servers over NTP.
Multiple sources state:
NTP is intended to synchronize all participating computers to within a
few milliseconds of Coordinated Universal Time (UTC). It uses the
intersection algorithm, a modified version of Marzullo's algorithm, to
select accurate time servers and is designed to mitigate the effects
of variable network latency. NTP can usually maintain time to within
tens of milliseconds over the public Internet, and can achieve better
than one millisecond accuracy in local area networks under ideal
conditions. Asymmetric routes and network congestion can cause errors
of 100 ms or more.
Can NTP really achieve accuracy at the few-milliseconds level?
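For intuition, the core of NTP's estimate is simple arithmetic over four timestamps (per RFC 5905): t0 = client send, t1 = server receive, t2 = server send, t3 = client receive. A sketch with hypothetical numbers — the key point is that the offset estimate is exact only when the two network legs are symmetric, which is why a few milliseconds is realistic on a LAN but not guaranteed over the public Internet:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP clock-offset and round-trip-delay estimates."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)            # round-trip network delay
    return offset, delay

# Hypothetical exchange: symmetric 5 ms legs, client clock 100 ms slow.
offset, delay = ntp_offset_delay(0.000, 0.105, 0.106, 0.011)
print(offset, delay)   # ~0.100 s offset, ~0.010 s round-trip delay
```

Any asymmetry between the outbound and return legs goes directly into the offset estimate as error (up to half the delay difference), matching the quoted "asymmetric routes ... can cause errors of 100 ms or more".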

Is it possible to predict the lottery numbers (not the most accurate)? [closed]

Closed 3 years ago.
I am looking for the correct machine-learning approach for predicting lottery numbers; it does not have to be the most accurate, but it should at least produce some predicted output. I am implementing regression-based and neural-network models for this. Is there any specific approach that fits this?
It is impossible. The lottery numbers are random; to be more specific, the system is chaotic. You would need the initial configuration (positions, etc.) to insane (possibly infinite) precision to be able to make any predictions. Basically, don't even try it.
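A toy demonstration of why chaotic systems defeat prediction: the logistic map at r = 4 is a standard chaotic system, and two trajectories starting a mere 1e-12 apart decorrelate completely within a few dozen iterations. (The lottery machine itself is far more complex; this just illustrates the sensitivity-to-initial-conditions argument.)

```python
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list:
    """Iterate the logistic map x -> r*x*(1-x), collecting each state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 60)
b = logistic_trajectory(0.3 + 1e-12, 60)
# The initial 1e-12 measurement error is amplified roughly twofold per
# step, so it grows by many orders of magnitude over the trajectory:
print(max(abs(x - y) for x, y in zip(a, b)))
```

Errors in the initial state grow exponentially, so any measurement imprecision at all eventually swamps the prediction.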

Design of HA, consistent and responsive counter [closed]

Closed 4 years ago.
Let's say Flipkart launches an exclusive Redmi sale at 12 PM; the stock is 10K units, but many more people will access it at the same time. There are advantages and disadvantages to keeping the counter on a single machine versus distributing it. If we keep it in an in-memory data store on a single machine, that machine becomes a bottleneck because many application servers will query it simultaneously, and we have to budget memory and CPU for queueing those requests. If it is distributed across nodes, with machines accessing different nodes, we eliminate the bottleneck, but an update on one node has to be made consistent across all nodes, which also affects response time. What would be a good design choice here?
Yes, a single-machine counter really will be a performance bottleneck under intensive load, and a single point of failure as well. I would suggest going with a sharded-counter implementation.

Can I customize the amount of memory on an Azure Virtual Machine? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I was looking at the full Azure calculator and did not see a slider for adjusting the memory. Do they allow customized memory sizes, or am I stuck with these silly pre-configurations? I want at minimum 16GB, with the ability to go much higher, like 300GB+.
There are five sizes (extra small, small, medium, large, and extra large), and they're fixed. Extra large is a 16GB VM, but after overhead of the OS, you only get 14GB.
Other sizes may appear over time, but it's unlikely you'll see 300GB any time soon. Windows Azure uses commodity servers, and they don't have nearly that much RAM.

Quad-port ram from single or double port ram? [closed]

Closed 10 years ago.
In a design I am currently working on, I need quad-port RAM. However, implementing it in lookup tables uses a massive amount of area, and I can't reach the needed performance with that setup. Since my FPGA has hardware blocks for single- and dual-port RAM, is there any way I can combine them to make a quad-port memory?
You could consider double-clocking the block RAM, although this will have implications for timing, etc.
See e.g. http://www.xilinx.com/support/documentation/application_notes/xapp228.pdf.
If you only need quad read access, then you just need two dual-port block RAMs, both connected to the same write-enable and write data.
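The mirrored-write trick above can be sketched behaviorally (a Python model, not HDL — in real hardware the write shares ports and cycles with the reads, and this model ignores all timing). Because every write fans out to both block RAMs, the two RAMs always hold identical contents, so their read ports can serve four independent read addresses:

```python
class DualPortRAM:
    """Behavioral model of one block RAM with two ports."""
    def __init__(self, depth: int):
        self.mem = [0] * depth

    def write(self, addr: int, data: int) -> None:
        self.mem[addr] = data

    def read(self, addr: int) -> int:
        return self.mem[addr]

class QuadReadRAM:
    """One write port, four read ports, from two mirrored dual-port RAMs."""
    def __init__(self, depth: int):
        self.ram0 = DualPortRAM(depth)
        self.ram1 = DualPortRAM(depth)

    def write(self, addr: int, data: int) -> None:
        # Same write-enable, address, and data fan out to both RAMs,
        # keeping their contents identical.
        self.ram0.write(addr, data)
        self.ram1.write(addr, data)

    def read4(self, a0: int, a1: int, a2: int, a3: int) -> tuple:
        # Four simultaneous reads: two served by each physical RAM.
        return (self.ram0.read(a0), self.ram0.read(a1),
                self.ram1.read(a2), self.ram1.read(a3))

ram = QuadReadRAM(256)
ram.write(7, 0xAB)
ram.write(9, 0xCD)
print(ram.read4(7, 9, 7, 9))   # (171, 205, 171, 205)
```

The cost is doubled block-RAM usage for the same logical depth, which is usually a far better trade than building the memory out of LUTs.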
