HHVM not compiling JWT library as expected

When I run code like this:
$before = microtime(true);
$amount = 90000000;
$sum = 0;
for ($i = 0; $i < $amount; $i++) {
    $sum += $i;
}
echo $sum;
$after = microtime(true);
echo '<br>'.($after-$before);
it runs much slower than when I wrap the whole thing inside a function, because HHVM does not compile for loops in the global scope.
echo "loop in function <br /><br />";
function run_loop($amount, $sum) {
    for ($i = 0; $i < $amount; $i++) {
        $sum += $i;
    }
    return $sum;
}
echo run_loop($amount, $sum);
The second version runs 10x faster than the first, even though the first still runs faster than stock PHP 5.6.
But when I try to do something similar using the JWT library from Firebase (https://github.com/firebase/php-jwt) and test it like this:
require_once("JWT.php");
$before = microtime(true);
$amount = 100000;
for ($i = 0; $i < $amount; $i++) {
    $jwt = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ';
    $decoded = JWT::decode($jwt, 'secret', array('HS256'));
}
echo $decoded->name;
then it runs just as fast as when I wrap it inside a function:
echo "loop in function <br /><br />";
function run_loop($amount) {
    $jwt = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ';
    for ($i = 0; $i < $amount; $i++) {
        $decoded = JWT::decode($jwt, 'secret', array('HS256'));
    }
    return $decoded->sub;
}
echo run_loop($amount);
This leads me to believe that HHVM does not compile the JWT library, because the code runs just as fast whether the for loop is in the global scope or wrapped in a function. So my questions are: How can I check which function or method inside the JWT library is not supported by HHVM? Is there a way for me to find all functions supported by HHVM? Do you think the JWT library is the cause?

HHVM can't optimize code running at the top level (i.e., outside a function) very well.
In the first example, most of the work is happening outside a function -- the "work" is the $sum += $i. Moving that into a function, as in your second example, lets HHVM optimize it quite well, since it's really trivial. (Though it's not a good benchmark, because it's so unrealistic!)
In your third and fourth examples, very little work is happening in the code you showed. I suspect that JWT::decode is responsible for most of the work being done -- and it's already in a function. Moving the calling code out of the top level, while a good idea, doesn't help as much as before, because that code is doing less of the relative work. The heavy lifting was already happening inside a function, namely JWT::decode.
As to how to allow HHVM to optimize JWT::decode better, that's a very complicated question. If you're interested, you can build HHVM from source yourself, use different profiling tools (such as the Linux perf command) to see where HHVM is spending its time, and optimize that.

Related

Qt foreach() only iterates once, ignoring the rest of the items

I'm trying to add some features to an older Qt 4 application, and I'm new to Qt. The application uses the foreach keyword, which I believe is implemented by Qt. However, all foreach loops in the application run only once, regardless of the number of items in the container.
I added this sanity check to the application:
QString test("1234");
int i = 0;
foreach (QChar c, test) {
    i++;
}
int stl = 0;
for (QString::iterator j = test.begin(); j != test.end(); j++) {
    stl++;
}
qDebug()
    << "string:" << test
    << "size:" << test.size()
    << "foreach:" << i
    << "stl:" << stl
    ;
It always shows this message:
string: "1234" size: 4 foreach: 1 stl: 4
I've tested it with the above QString and with a QModelIndexList, and each time the loop runs only once, even when the container reports more than one item. In both cases the STL-style loop works fine; it's only the foreach that exits early.
What am I doing wrong? The application is built against Qt 4.8.7.
For the record, it turns out this is a change in behaviour in GCC 9 (see the bug report) to do with where break; statements may appear and what they do.
It seems GCC versions before 9 did the wrong thing, but Qt 4's foreach macro was written around that behaviour, so once GCC 9 fixed it, Qt's foreach looping broke.
It has been addressed in recent Qt versions, but unfortunately not in Qt 4.

Can I use two sets of variables in one foreach loop?

Is it possible to construct a single foreach loop that iterates over two separate sets of variables?
Below is a simplified example of what I'm trying to do - except this example lists two separate loops whereas I would like to set them up in one single loop.
$Sites = @("https://www.google.com", "https://duckduckgo.com")
$Site_names = @("Google", "DuckDuckGO")
foreach ($element in $Sites) {
    Write-Host "`n`n"
    $element
    Write-Host "`n`n"
}
foreach ($name in $Site_names) {
    Write-Host "`n`n"
    $name
    Write-Host "`n`n"
}
There is other code to run inside the loop, so it needs to allow for multiple lines of code in the block; a one-line solution, if there is one, isn't what I'm after. Also, I didn't think using the pipeline would be workable (but I could certainly be wrong on that).
Two sets of variables: $Sites and $Site_names.
I would like one foreach loop that runs through and lists the site address and the site name with both values changing each time the loop is run.
First run: reference the URL "https://www.google.com" and the site name "Google".
Second run: reference the URL "https://duckduckgo.com" and the site name "DuckDuckGo".
Is this possible?
If you have two arrays of the same size you can simply use a for loop like this:
for ($i = 0; $i -lt $Sites.Count; $i++) {
    "{0}`t{1}" -f $Site_names[$i], $Sites[$i]
}
However, if the elements of your two arrays are correlated anyway, it would be better to use a hashtable instead:
$Sites = @{
    'Google'     = 'https://www.google.com'
    'DuckDuckGo' = 'https://duckduckgo.com'
}
foreach ($name in $Sites.Keys) {
    "{0}`t{1}" -f $name, $Sites[$name]
}

Memory Not Given Back Until Files Deleted?

We ran into a strange problem with some of our in-house developed applications and thought it was something deep in the code, but then we wrote a quick sample to test it and experienced the same issue.
Here's the code for the sample:
#include <stdio.h>

int main(void)
{
    int i;
    int j;
    int COUNT = 750000;
    double x[100];
    double y[100];
    FILE *OutputFile1;
    FILE *OutputFile2;
    FILE *OutputFile3;
    FILE *OutputFile4;
    FILE *OutputFile5;
    FILE *OutputFile6;
    FILE *OutputFile7;
    FILE *OutputFile8;
    FILE *OutputFile9;

    OutputFile1 = fopen("Output_file_1.dat", "w");
    OutputFile2 = fopen("Output_file_2.dat", "w");
    OutputFile3 = fopen("Output_file_3.dat", "w");
    OutputFile4 = fopen("Output_file_4.dat", "w");
    OutputFile5 = fopen("Output_file_5.dat", "w");
    OutputFile6 = fopen("Output_file_6.dat", "w");
    OutputFile7 = fopen("Output_file_7.dat", "w");
    OutputFile8 = fopen("Output_file_8.dat", "w");
    OutputFile9 = fopen("Output_file_9.dat", "w");

    /* Do stuff in here */
    /* Initialize the arrays */
    for (i = 0; i < 100; i++)
    {
        x[i] = 2.50 * (double)i;
        y[i] = 10.0 * (double)i;
    }
    printf("Initialized the x and y arrays\n");

    /* Write junk to files */
    for (i = 0; i < COUNT; i++)
    {
        printf("Outer loop %d\n", i);
        for (j = 0; j < 100; j++)
        {
            fprintf(OutputFile1, " %e", x[j]);
            fprintf(OutputFile2, " %e", x[j]);
            fprintf(OutputFile3, " %e", x[j]);
            fprintf(OutputFile4, " %e", x[j]);
            fprintf(OutputFile5, " %e", x[j]);
            fprintf(OutputFile6, " %e", y[j]);
            fprintf(OutputFile7, " %e", y[j]);
            fprintf(OutputFile8, " %e", y[j]);
            fprintf(OutputFile9, " %e", y[j]);
        }
        fprintf(OutputFile1, "\n");
        fprintf(OutputFile2, "\n");
        fprintf(OutputFile3, "\n");
        fprintf(OutputFile4, "\n");
        fprintf(OutputFile5, "\n");
        fprintf(OutputFile6, "\n");
        fprintf(OutputFile7, "\n");
        fprintf(OutputFile8, "\n");
        fprintf(OutputFile9, "\n");
    }
    /* End doing stuff here */

    fflush(OutputFile1);
    fclose(OutputFile1);
    fflush(OutputFile2);
    fclose(OutputFile2);
    fflush(OutputFile3);
    fclose(OutputFile3);
    fflush(OutputFile4);
    fclose(OutputFile4);
    fflush(OutputFile5);
    fclose(OutputFile5);
    fflush(OutputFile6);
    fclose(OutputFile6);
    fflush(OutputFile7);
    fclose(OutputFile7);
    fflush(OutputFile8);
    fclose(OutputFile8);
    fflush(OutputFile9);
    fclose(OutputFile9);
    return 0;
}
So, here's what happens when you run this. If you run it in one terminal window and run top in another while it's going, you'll notice your memory being eaten away. It takes about 8 minutes to run, and when it's finished, the system doesn't give the memory back until the files are deleted. Once the files are deleted, all of the memory is released back to the system.
It's plain C compiled with the latest gcc, on CentOS 6.3.
Are we missing something?
Thanks!
"The system doesn't give the memory back" -- how do you know? There is a difference between "memory reported as free by top" and "memory you can use". Disk I/O is cached in memory when possible; that way, if you need the same file again, the data is already in memory and faster to access. Once you delete a file, it's no longer useful to keep in cache, so the cache is dropped.
Another way to look at this is to use the free command. Doing this on my Linux box, I see the following:
             total       used       free     shared    buffers     cached
Mem:      66005544   65559292     446252          0     199832   60332160
-/+ buffers/cache:    5027300   60978244
Swap:      1044216       1884    1042332
The key line is the one that says "-/+ buffers/cache". The first line tells me "446M free" -- not a lot on a 64G machine. But the second line says "only joking, you have 60G free". That is the real free memory.
See whether that line "gives back memory" without having to delete the files. I think you will find it does.
The system caches these files for quick access next time. Since the memory isn't needed by an application, it's used for caching. The cache is freed when the files are deleted or when another application needs more memory to run.
See: Linux Memory Management

Can I use fastcgi_finish_request() like register_shutdown_function?

This simple method for caching dynamic content uses register_shutdown_function() to push the output buffer to a file on disk after the script exits. However, I'm using PHP-FPM, with which this doesn't work: a 5-second sleep added to the function indeed causes a 5-second delay in executing the script from the browser. A commenter in the PHP docs notes that there's a special function for PHP-FPM users, namely fastcgi_finish_request(). There's not much documentation for this particular function, however.
The point of fastcgi_finish_request() seems to be to flush all data and proceed with other tasks, but what I want to achieve, as would normally work with register_shutdown_function(), is basically to put the contents of the output buffer into a file without the user having to wait for this to finish.
Is there any way to achieve this under PHP-FPM, with fastcgi_finish_request() or another function?
$timeout = 3600; // cache time-out
$file = '/home/example.com/public_html/cache/' . md5($_SERVER['REQUEST_URI']); // unique id for this page
if (file_exists($file) && (filemtime($file) + $timeout) > time()) {
    readfile($file);
    exit();
} else {
    ob_start();
    register_shutdown_function(function () use ($file) {
        // sleep(5);
        $content = ob_get_flush();
        file_put_contents($file, $content);
    });
}
Yes, it's possible to use fastcgi_finish_request for that. You can save this file and see that it works:
<?php
$timeout = 3600; // cache time-out
$file = '/home/galymzhan/www/ps/' . md5($_SERVER['REQUEST_URI']); // unique id for this page
if (file_exists($file) && (filemtime($file) + $timeout) > time()) {
    echo "Got this from cache<br>";
    readfile($file);
    exit();
} else {
    ob_start();
    echo "Content to be cached<br>";
    $content = ob_get_flush();
    fastcgi_finish_request();
    // sleep(5);
    file_put_contents($file, $content);
}
Even if you uncomment the sleep(5) line, you'll see that the page still opens instantly, because fastcgi_finish_request sends the data back to the browser and then proceeds with whatever code is written after it.
First of all, if you cache dynamic content this way, you're doing it wrong. It can be made to work, but the approach itself is crippled.
If you want to cache and serve content efficiently, create one class and wrap all the caching functions in it.
Yes, you can use fastcgi_finish_request() like register_shutdown_function(). The only difference is that fastcgi_finish_request() sends the output (if any) and DOES NOT terminate the script, while a register_shutdown_function() callback is invoked on script termination.
This is an old question, but I don't think any of the answers correctly answer the problem.
As I understand it, the problem is that none of the output is sent to the client until the ob_get_flush() call that happens in the shutdown function.
To fix that issue, you need to pass a function and a chunk size to ob_start() in order to handle the output in chunks.
Something like this for your else clause:
$content = '';
$write_buffer_func = function ($buffer, $phase) use ($file, &$content) {
    $content .= $buffer;
    if ($phase & PHP_OUTPUT_HANDLER_FINAL) {
        file_put_contents($file, $content);
    }
    return $buffer;
};
ob_start($write_buffer_func, 1024);

How to make an input from command line in JScript?

How can I read input from the command line in JScript, similar to Pascal's readln?
It sounds like you're asking about Windows Script Host. If you're using cscript.exe to run your scripts, you can work with WScript.StdIn:
WScript.Echo("Enter something");
WScript.Echo("You entered " + WScript.StdIn.ReadLine());
http://msdn.microsoft.com/en-us/library/skwz6sz4(v=VS.85).aspx
Assuming cscript the.js a1 a2 ..., you can:
var args = WScript.Arguments;
for (var i = 0; i < args.length; i++) {
    WScript.Echo(args(i));
}
It has been forever since I last looked at Pascal, so I'm not quite sure what ReadLn() does exactly. If you just want to get a line from the user on the command line, you can use the WScript.StdIn.ReadLine() method as described here.
But if you want to read from a file, then you may try:
var myFileSysObj = new ActiveXObject("Scripting.FileSystemObject");
var myInputTextStream = myFileSysObj.OpenTextFile("c:\\temp\\test.txt", 1, true);
var myString = myInputTextStream.ReadLine();
myInputTextStream.Close();
from here.