Is it possible to speed up boot time? - android-things

On a Raspberry Pi 3 running R8, it takes over 30 seconds to boot up.
Is it possible to speed this up at all?
I'd like to get it under 10 seconds.

Related

Understanding Threading in Google Cloud DataFlow Workers

I made a simple program which waits for 60 seconds. I have 300 input elements to process.
Number of threads: Batch - 1, Streaming - 300, per this document:
https://cloud.google.com/dataflow/docs/resources/faq#beam-java-sdk
In streaming mode, with 1 worker and 300 threads, the job should complete in 2 to 3 minutes, allowing for the overhead of spawning workers etc. My understanding is that there will be one thread for each of the 300 input elements, all of them sleep for 60 seconds, and the job should be done. However, the job takes more time to complete.
Similarly, in batch mode with 1 worker (1 thread) and 300 input elements, it should take 300 minutes to complete.
Can someone clarify how this happens at the worker level?
There is considerable overhead in starting up and tearing down worker VMs, so it's hard to generalize from a short experiment such as this. In addition, there's no promise that there will be a given number of threads for streaming or batch, as this is an implementation-dependent parameter that may change at any time for any runner (and indeed may even be chosen dynamically).
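The idealized timing model behind the question can be checked with a scaled-down local sketch, using plain Python threads in place of Dataflow worker threads (this is not the Beam runtime, just the arithmetic: wall-clock time ≈ ceil(elements / threads) × per-element time):

```python
import math
import time
from concurrent.futures import ThreadPoolExecutor

def simulate(num_elements, num_threads, task_seconds):
    """Run num_elements sleeping tasks on a fixed-size thread pool
    and return the measured wall-clock time."""
    def task(_):
        time.sleep(task_seconds)
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(task, range(num_elements)))
    return time.monotonic() - start

# Scaled-down experiment: 300 elements, 0.01 s of "work" each.
# "Streaming-like": 300 threads, so all elements sleep concurrently.
# "Batch-like": 1 thread, so the 300 elements run sequentially.
streaming_like = simulate(300, 300, 0.01)
batch_like = simulate(300, 1, 0.01)

# Ideal model is a lower bound; real time adds scheduling overhead.
assert streaming_like >= math.ceil(300 / 300) * 0.01
assert batch_like >= math.ceil(300 / 1) * 0.01
```

Even locally, the 300-thread run takes noticeably longer than the ideal 0.01 s because of thread spawning and scheduling; on Dataflow that overhead also includes VM startup, which dominates a 60-second job.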

gearman works very slowly when out of memory

gearmand version 1.1.18 (latest) becomes very slow when free RAM runs out. I have 64 GB of RAM on the server, and once gearmand has used all free memory and dipped about 1 MB into swap, it starts working very, very slowly. Adding any task to a background queue can take up to 10 seconds.
Removing a task from the queue can take up to 30 seconds.
The queue holds more than 10,000,000 jobs.

Scaling Rails Needs A Lot Of Nginx Instances

We have built a Rails app and are trying to support around 6k concurrent users making on average 6 requests each per minute, with an average Rails web transaction response time of 700 ms.
We did the maths, and it works out that we would need around 420 nginx/Passenger instances running (we are not running in multithreaded mode because a legacy codebase may not be threadsafe). That seems like an awful lot of nginx instances to support this kind of load.
We are running 20 nginx/passenger instances per server at the moment, so we need about 20 servers to get to the 420 instances of nginx/passenger required to serve that traffic.
Here is the math:
6k users × 6 RPM per user = 36k total RPM
36k × 0.7 seconds (avg response time) = 25,200 seconds of processing
25,200 / 60 = 420 instances (divide by 60 to fit all that processing into 1 minute)
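The arithmetic above can be packaged as a one-line capacity model (a hypothetical helper, assuming each single-threaded instance handles one request at a time and so supplies at most 60 seconds of processing per minute):

```python
import math

def required_instances(users, requests_per_user_per_min, avg_response_s):
    """Instances needed when each instance handles one request at a time,
    i.e. supplies at most 60 s of processing per minute."""
    total_rpm = users * requests_per_user_per_min       # 6000 * 6 = 36000
    work_seconds_per_min = total_rpm * avg_response_s   # 36000 * 0.7 = 25200
    return math.ceil(work_seconds_per_min / 60)         # 25200 / 60 = 420

print(required_instances(6000, 6, 0.7))  # -> 420
```

The same model shows where the leverage is: cutting the average response time to 350 ms would halve the requirement to 210 instances, which is usually cheaper than doubling the server count.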
Does anybody have any experience around this that could help us out?
Is this just how it has to be, with this number of servers we must run?
thanks

Neo4j Query response time is too high

I have around 4 million nodes on a server with 3.5 GB of memory. Every time I fire a query, CPU usage goes to 100% and the response takes at least 10 seconds! What should I do to improve response time? A bigger server?

Current request spikes every 7 minutes

On our web apps on an IIS 7.5 web server there is an increase in active requests on a 7-minute cycle: for 5 minutes everything is quite normal, then active requests rise for 2 minutes.
While active requests increase, ntoskrnl.exe also shows an increase in CPU load.
Can anybody give me any clues as to what I should look for?
One thing we did notice was that the garbage collector was going nuts every 5 minutes.
After a server restart everything is fine again.
