Bandwidth Speed Test Slower Than Expected - dart

I'm trying to create a bandwidth test in Dart using a LibreSpeed server, but for some reason it's reporting a much slower speed than expected. When I test using the LibreSpeed web client I get around 30mb/s download bandwidth; however, with my Dart program I'm only getting around 4mb/s.
The code for the speed test is as follows:
Future<void> start() async {
  var rand = Random();
  var req = await client.getUrl(Uri.http(_serverAddress, '/garbage.php',
      {'r': rand.nextDouble().toString(), 'ckSize': '20'}));
  var resp = await req.close();
  var bytesDownloaded = 0;
  var start = DateTime.now();
  await for (var bytes in resp) {
    bytesDownloaded += bytes.length;
  }
  var timeTaken = DateTime.now().difference(start).inSeconds;
  var mbsDownloaded = bytesDownloaded / 1000000;
  print(
      '$mbsDownloaded megabytes downloaded in $timeTaken seconds at a rate of ${mbsDownloaded / timeTaken} mbs per second');
}
I think I'm probably missing a crucial reason why it appears to be so slow compared to the web client. Can anyone give me any ideas as to what the bottleneck might be?

The problem is confusion between the units used when measuring internet speed. In general terms, there are two ways we can measure the speed:
Mbps (megabits per second)
MB/s (megabytes per second)
To understand your problem we need to notice that 1 byte = 8 bits, and that the unit LibreSpeed (and most internet providers) uses is Mbps.
The unit your current program is measuring in is MB/s, since you are summing the length of each received list of bytes (each element of the list is 1 byte = 8 bits):
await for (var bytes in resp) {
  bytesDownloaded += bytes.length;
}
and never multiply that number by 8 later in your code. You are then comparing this number against the one from LibreSpeed, which uses Mbps (as most internet providers do), which means your number is 8 times smaller than expected.
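As a minimal sketch, reusing the names from your code (untested, and using milliseconds here only to avoid the whole-second truncation of inSeconds), converting to megabits before dividing by the elapsed time gives a figure directly comparable to LibreSpeed:
var elapsedMs = DateTime.now().difference(start).inMilliseconds;
var megabitsDownloaded = bytesDownloaded * 8 / 1000000;  // bytes -> bits -> megabits
var mbps = megabitsDownloaded / (elapsedMs / 1000);      // megabits per second (Mbps)
print('${bytesDownloaded / 1000000} MB downloaded in ${elapsedMs / 1000} s '
    'at $mbps Mbps');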

Related

Emgucv cannot train more than 210 images using EigenFaceRecognizer, it stops writing new details or data

Good day everyone. I am relatively new to OpenCV via the .NET wrapper Emgucv. My program does simple face detection and recognition: first I train user faces, at least 20 images of 100x100 pixels per user, and write the data (via EigenFaceRecognizer) to yml files, then load these files (user images and data in yml) before running real-time recognition or comparison. It worked perfectly fine with 9 users (9x20 = 180 images). However, when I try to register or train another user I notice that the EigenFaceRecognizer stops writing the data to the yml. How do we solve this? The format of my data file with the yml extension is below:
opencv_eigenfaces:
   threshold: .Inf
   num_components: 10
   mean: !!opencv-matrix
      rows: 1
      cols: 4096
      dt: d
      data: []
The trainingData.yml https://www.dropbox.com/s/itm58o24lka9wa3/trainingData.yml?dl=0
I figured it out: the problem is just not enough time for writing the data, so I need to increase the delay.
private async Task LoadData()
{
    outputBox.Clear();
    var i = 0;
    var itemData = Directory.EnumerateFiles("trainingset/", "*.bmp");
    var enumerable = itemData as IList<string> ?? itemData.ToList();
    var total = enumerable.Count();
    _arrayNumber = new int[total];
    var listMat = new List<Mat>();
    foreach (var file in enumerable)
    {
        var inputImg = Image.FromFile(file);
        _inputEmGuImage = new Image<Bgr, byte>(new Bitmap(inputImg));
        var imgGray = _inputEmGuImage.Convert<Gray, byte>();
        listMat.Add(imgGray.Mat);
        var number = file.Split('/')[1].ToString().Split('_')[0];
        if (number != "")
        {
            _arrayNumber[i] = int.Parse(number);
        }
        i++;
        processImg.Image = _inputEmGuImage.ToBitmap();
        outputBox.AppendText($"Person Id: {number} {Environment.NewLine}");
        if (total == i)
        {
            fisherFaceRecognizer.Train(listMat.ToArray(), _arrayNumber);
            fisherFaceRecognizer.Write(YlmPath);
            // FaceRecognition.Train(listMat.ToArray(), _arrayNumber);
            // FaceRecognition.Write(YlmPath);
            MessageBox.Show(@"Total of " + _arrayNumber.Length + @" successfully loaded");
        }
        await Task.Delay(10);
    }
}

Script fails when running normally but in debug it's fine

I'm developing a Google spreadsheet that automatically requests information from a site; the code is below. The variable 'tokens' is an array consisting of about 60 different 3-letter unique identifiers. The problem I keep getting is that the code fails to request all the information from the site. Instead it falls back (at random) to the validation part and fills the array up with "ERROR!" strings. Sometimes it's row 5, then 10-12, then 3, then multiple rows, etc. When I run it in debug mode everything is fine; I can't seem to reproduce the problem.
I already tried placing a sleep (100 ms), but that fixed nothing. I also looked at the amount of traffic the API accepts (10 requests per second, 1,200 per minute, 100,000 per day); it shouldn't be a problem.
Runtime is limited, so I need it to be as efficient as possible. I'm thinking it is an issue of computational power after I push all the values from the JSON response into the 'tokens' array. Is there a way to let the script wait as long as necessary for the changes to be committed?
function newGetOrders() {
  var starttime = new Date().getTime().toString();
  var refreshTime = new Date();
  var tokens = retrieveTopBin();
  var sheet = SpreadsheetApp.openById('aaafFzbXXRzSi-eXBu9Xh81Ne2r09vM8rLFkA4fY').getSheetByName("Sheet37");
  sheet.getRange('A2:OL101').clear();
  for (var i = 0; i < tokens.length; i++) {
    var request = UrlFetchApp.fetch("https://api.binance.com/api/v1/depth?symbol=" + tokens[i][0] + "BTC", {muteHttpExceptions: true});
    var json = JSON.parse(request.getContentText());
    tokens[i].push(refreshTime);
    Utilities.sleep(100);
    for (var k in json.bids) {
      tokens[i].push(json.bids[k][0]);
      tokens[i].push(json.bids[k][1]);
    }
    for (var k in json.asks) {
      tokens[i].push(json.asks[k][0]);
      tokens[i].push(json.asks[k][1]);
    }
    if (tokens[i].length < 402) {
      for (var x = tokens[i].length; x < 402; x++) {
        tokens[i].push("ERROR!");
      }
    }
  }
  sheet.getRange(2, 1, tokens.length, 402).setValues(tokens);
}

How can I maximize throughput in Docker and Akka HTTP?

I am building a specific jig for performance measurement. I have a load generator, boom (https://github.com/rakyll/boom). With this I can generate a pretty decent amount of load.
I also have a Docker image containing nginx as a load balancer, and two Akka-HTTP based REST servers. These do nothing except count hits (they always just return 200).
Running one of these servers stand-alone (outside Docker) I have been able to get 1000 hits/second. Not sure if that's good or not. In this Docker configuration that figure drops to about 220 hits/second. I was kinda expecting, well... 2000 hits/second or thereabouts. Higher would be even better. I'd be happy if I could find a way to get 3-4K hits/sec with this arrangement.
I often get an error message like this:
[9549] Get http://192.168.99.100:9090/dispatcher?reply_to=foo: dial tcp 192.168.99.100:9090: socket: too many open files
Tried running my Docker with --ulimit nofile=2048, but that didn't help. My application.conf for Akka is merely:
akka {
  loglevel = "ERROR"
  stdout-loglevel = "ERROR"
  http.host-connection-pool.max-open-requests = 512
}
The server code:
object Main extends App {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  println(":: Starting Simulator on port " + args(0))
  Http().bindAndHandle(route, java.net.InetAddress.getLoopbackAddress.getHostAddress, args(0).toInt)
  var hits = 0
  var isTiming = false
  var numSec = 1
  lazy val route =
    get {
      path("dispatcher") {
        if (isTiming) hits += 1
        complete(StatusCodes.OK)
      } ~
      path("startTiming" / IntNumber) { sec =>
        isTiming = true
        hits = 0
        numSec = sec
        val timeUnit = FiniteDuration(sec, SECONDS)
        system.scheduler.scheduleOnce(timeUnit) { isTiming = false }
        complete(StatusCodes.OK)
      } ~
      path("tps") {
        val tps = hits / numSec * 2
        complete(s"""${args(0)}: TPS-$tps\n""")
      }
    }
}
Theory of operation: start the traffic flowing, then call the /startTiming/10 endpoint (for a 10-second capture on one of the 2 servers). After 10 seconds, call /tps a couple of times and the timing node will return approx. hits/second (x2).
Any idea how I can get more performance out of this?

Can iOS boot time drift?

I'm using this code to determine when my iOS device last rebooted:
int mib[MIB_SIZE];
size_t size;
struct timeval boottime;
mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);
if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1) {
    return boottime.tv_sec;
}
return 0;
I'm seeing some anomalies with this time. In particular, I save the long, and days and weeks later I check the saved long against the value returned by the above code.
I'm not sure, but I think I'm seeing some drift. This doesn't make any sense to me. I'm not converting to NSDate, to prevent drift. I would think that the boot time is recorded by the kernel when it boots and isn't computed again, just stored. But could iOS be saving the boot time as an NSDate, with whatever inherent drift problems that brings?
While the iOS Kernel is closed-source, it's reasonable to assume most of it is the same as the OSX Kernel, which is open-source.
Within osfmk/kern/clock.c there is the function:
/*
 * clock_get_boottime_nanotime:
 *
 * Return the boottime, used by sysctl.
 */
void
clock_get_boottime_nanotime(
    clock_sec_t  *secs,
    clock_nsec_t *nanosecs)
{
    spl_t s;

    s = splclock();
    clock_lock();
    *secs = (clock_sec_t)clock_boottime;
    *nanosecs = 0;
    clock_unlock();
    splx(s);
}
and clock_boottime is declared as:
static uint64_t clock_boottime; /* Seconds boottime epoch */
and finally the comment on this function shows that it can, indeed, change:
/*
 * clock_set_calendar_microtime:
 *
 * Sets the current calendar value by
 * recalculating the epoch and offset
 * from the system clock.
 *
 * Also adjusts the boottime to keep the
 * value consistent, writes the new
 * calendar value to the platform clock,
 * and sends calendar change notifications.
 */
void
clock_set_calendar_microtime(
    clock_sec_t  secs,
    clock_usec_t microsecs)
{
    ...
Update to answer query from OP
I am not certain how often clock_set_calendar_microtime() is called, as I am not familiar with the inner workings of the kernel; however, it adjusts the clock_boottime value, and clock_boottime is initialized in clock_initialize_calendar(), so I would say it can be called more than once. I have been unable to find any call to it using:
$ find . -type f -exec grep -l clock_set_calendar_microtime {} \;
RE my comment above...
"to my understanding, when the user goes into settings and changes the
time manually, the boot time is changed by the delta to the new time
to keep the interval between boot time and system time, equal. but it
does not "drift" as it is a timestamp, only the system clock itself
drifts."
I'm running NTP in my iOS app and speaking with Google's time servers.
I feed NTP the uptime since boot (which doesn't pause, and is correctly adjusted if some nefarious user starts messing with the system time... which is the whole point of this in the first place), and then add the offset between the uptime since boot and epoch time to my uptime.
inline static struct timeval uptime(void) {
    struct timeval before_now, now, after_now;
    after_now = since_boot();
    do {
        before_now = after_now;
        gettimeofday(&now, NULL);
        after_now = since_boot();
    } while (after_now.tv_sec != before_now.tv_sec && after_now.tv_usec != before_now.tv_usec);

    struct timeval systemUptime;
    systemUptime.tv_sec = now.tv_sec - before_now.tv_sec;
    systemUptime.tv_usec = now.tv_usec - before_now.tv_usec;
    return systemUptime;
}
I sync with the time servers once every 15 minutes, and calculate the offset drift (aka on system clock drift) every time.
static void calculateOffsetDrift(void) {
    static dispatch_queue_t offsetDriftQueue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT);
    static double lastOffset;
    dispatch_barrier_sync(offsetDriftQueue, ^{
        double newOffset = networkOffset();
        if (lastOffset != 0.0f) printf("offset difference = %f \n", lastOffset - newOffset);
        lastOffset = newOffset;
    });
}
On my iPhone Xs Max the system clock usually runs around 30ms behind over 15 minutes.
Here are some figures from a test I just ran using LTE in NYC:
+47.381592 ms
+43.325684 ms
-67.654541 ms
+24.860107 ms
+5.940674 ms
+25.395264 ms
-34.969971 ms

Console Print Speed

I’ve been looking at a few example programs in order to find better ways to code with Dart.
Not that this example (below) is of any particular importance; however, it is taken from rosettacode.org, with alterations by me to (hopefully) bring it up to date.
The point of this posting concerns benchmarks, and the way the speed of printing to the console, compared to other languages, may be detrimental to Dart's results in some of them. I don't know what the comparison with other languages is; however, in Dart the console output (at least on Windows) appears to be quite slow, even using StringBuffer.
As an aside, in my test, if n1 is allowed to grow to 11, the total recursion count is over 238 million, and it takes (on my laptop) c. 2.9 seconds to run Example 1.
In addition, and of possible interest, if the String assignment is altered to int, without printing, no time is recorded as elapsed (Example 2).
Typical times on my low-spec laptop (run from the Console - Windows).
Elapsed Microseconds (Print) = 26002
Elapsed Microseconds (StringBuffer) = 9000
Elapsed Microseconds (no Printing) = 3000
Obviously in this case, console print times are a significant factor relative to computation etc. times.
So, can anyone advise how this compares to, e.g., Java times for console output? That would at least be an indication of whether Dart is particularly slow in this area, which may be relevant to some benchmarks. Incidentally, times when running in the Dart Editor incur a negligible penalty for printing.
// Example 1. The base code for the test (Ackermann).
main() {
  for (int m1 = 0; m1 <= 3; ++m1) {
    for (int n1 = 0; n1 <= 4; ++n1) {
      print("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
    }
  }
}

int fAcker(int m2, int n2) => m2 == 0 ? n2 + 1 : n2 == 0 ?
    fAcker(m2 - 1, 1) : fAcker(m2 - 1, fAcker(m2, n2 - 1));
The altered code for the test.
// Example 2 //
main() {
  fRunAcker(1); // print
  fRunAcker(2); // StringBuffer
  fRunAcker(3); // no printing
}

void fRunAcker(int iType) {
  String sResult;
  StringBuffer sb1;
  Stopwatch oStopwatch = new Stopwatch();
  oStopwatch.start();
  List lType = ["Print", "StringBuffer", "no Printing"];
  if (iType == 2) // Use StringBuffer
    sb1 = new StringBuffer();
  for (int m1 = 0; m1 <= 3; ++m1) {
    for (int n1 = 0; n1 <= 4; ++n1) {
      if (iType == 1) // print
        print("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
      if (iType == 2) // StringBuffer
        sb1.write("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n");
      if (iType == 3) // no printing
        sResult = "Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n";
    }
  }
  if (iType == 2)
    print(sb1.toString());
  oStopwatch.stop();
  print("Elapsed Microseconds (${lType[iType - 1]}) = " +
      "${oStopwatch.elapsedMicroseconds}");
}

int fAcker(int m2, int n2) => m2 == 0 ? n2 + 1 : n2 == 0 ?
    fAcker(m2 - 1, 1) : fAcker(m2 - 1, fAcker(m2, n2 - 1));
//Typical times on my low-spec laptop (run from the console).
// Elapsed Microseconds (Print) = 26002
// Elapsed Microseconds (StringBuffer) = 9000
// Elapsed Microseconds (no Printing) = 3000
I tested using Java, which was an interesting exercise.
The results from this small test indicate that Dart takes about 60% longer for the console output than Java, using the results from the fastest for each. I really need to do a larger test with more terminal output, which I will do.
In terms of "computational" speed with no output, using this test and m = 3, and n = 10, the comparison is consistently around 530 milliseconds for Java compared to 580 milliseconds for Dart. That is 59.5 million calls. Java bombs with n = 11 (238 million calls), which I presume is stack overflow. I'm not saying that is a definitive benchmark of much, but it is an indication of something. Dart appears to be very close in the computational time which is pleasing to see. I altered the Dart code from using the "question mark operator" to use "if" statements the same as Java, and that appears to be a bit faster c. 10% or more, and that appeared to be consistently the case.
I ran a further test for console printing, as shown below (Example 1 – Dart, Example 2 – Java).
The best times for each are as follows (100,000 iterations) :
Dart 47 seconds.
Java 22 seconds.
Dart Editor 2.3 seconds.
While it is not earth-shattering, it does appear to illustrate that, for some reason, (a) Dart is slow with console output, (b) the Dart Editor is extremely fast with console output, and (c) this needs to be taken into account when evaluating any performance test that involves console output, which is what initially drew my attention to it.
Perhaps when they have time :) the Dart team could look at this if it is considered worthwhile.
Example 1 - Dart
// Dart - Test 100,000 iterations of console output //
Stopwatch oTimer = new Stopwatch();
main() {
// "warm-up"
for (int i1=0; i1 < 20000; i1++) {
print ("The quick brown fox chased ...");
}
oTimer.reset();
oTimer.start();
for (int i2=0; i2 < 100000; i2++) {
print ("The quick brown fox chased ....");
}
oTimer.stop();
print ("Elapsed time = ${oTimer.elapsedMicroseconds/1000} milliseconds");
}
Example 2 - Java
public class console001
{
    // Java - Test 100,000 iterations of console output
    public static void main(String[] args)
    {
        // warm-up
        for (int i1 = 0; i1 < 20000; i1++)
        {
            System.out.println("The quick brown fox jumped ....");
        }
        long tmStart = System.nanoTime();
        for (int i2 = 0; i2 < 100000; i2++)
        {
            System.out.println("The quick brown fox jumped ....");
        }
        long tmEnd = System.nanoTime() - tmStart;
        System.out.println("Time elapsed in microseconds = " + (tmEnd / 1000));
    }
}
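As a rough sketch along the same lines as the StringBuffer timing earlier (using dart:io's stdout; I have not benchmarked this variant as carefully as the runs above), batching the 100,000 lines into a single write avoids paying the per-print cost on every line:
// Dart - sketch: build the output once, then write it to stdout in one call
import 'dart:io';

main() {
  Stopwatch oTimer = new Stopwatch()..start();
  StringBuffer sb = new StringBuffer();
  for (int i = 0; i < 100000; i++) {
    sb.writeln("The quick brown fox chased ....");
  }
  stdout.write(sb.toString()); // one write instead of 100,000 print calls
  oTimer.stop();
  print("Elapsed time = ${oTimer.elapsedMicroseconds / 1000} milliseconds");
}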
