I'm looking for a way to optimize our website's Speed Index metric in Lighthouse.
I found this helpful article that describes the Speed Index metric very well and helped me understand how Speed Index is calculated:
https://calendar.perfplanet.com/2016/speed-index-tips-and-tricks/
But there is one key concept the article doesn't describe clearly, and I've searched a lot of other Speed Index related blogs without finding the answer.
What is the 100% visual completeness frame?
We all know the first frame is 0% VC because it's blank, and VC keeps increasing during the page load, so which frame will be considered 100% visually complete?
The definition of the 100% VC frame is important because it's the baseline for calculating every other frame's visual completeness.
If I have a page that simply prints the numbers 1 to 100 at 100 ms intervals, just enough to fill the viewport, will the 100% VC frame be the frame in which the number 100 is printed?
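For concreteness, a hypothetical test page like this (plain JavaScript, one number appended every 100 ms) is what I have in mind:

// Hypothetical test page: print the numbers 1 to 100, one every 100 ms,
// so the last paint happens roughly 10 seconds after the first.
let n = 1;
const timer = setInterval(() => {
  const el = document.createElement('div');
  el.textContent = n;
  document.body.appendChild(el);
  if (++n > 100) clearInterval(timer);
}, 100);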
Lighthouse
According to Google's description of the Lighthouse "Speed Index" audit:
Lighthouse uses a node module called Speedline to generate the Speed Index score.
sends Speedline
Speedline's GitHub readme says:
The Speed Index, introduced by WebpageTest.org, aims to solve this issue. It measures how fast the page content is visually displayed. The current implementation is based on the Visual Progress from Video Capture calculation method described on the Speed Index page. The visual progress is calculated by comparing the distance between the histogram of the current frame and the final frame.
(Italics mine.)
a timeline of paints
The Speed Index page goes into painful detail about how visual progress is calculated. Here's a snippet:
In the case of Webkit-based browsers, we collect the timeline data which includes paint rects as well as other useful events.
I believe "timeline data" refers to a JSON object retrieved via the Performance Timeline API.
It seems Lighthouse passes the JSON timeline to Speedline, which then extracts an array of "frames," describing the page load's paint events:
/**
 * @param {string|Array<TraceEvent>|{traceEvents: Array<TraceEvent>}} timeline
 * @param {Options} opts
 */
function extractFramesFromTimeline(timeline, opts) {
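(As an aside, Speedline can also be run on a saved trace directly. This is only a sketch based on my reading of its README, so treat the exact result field names as an assumption:)

// Sketch only (API as I understand it from the Speedline README): load a saved
// DevTools trace file and log the computed Speed Index and completeness timings.
const speedline = require('speedline');

speedline('./timeline.json').then(results => {
  console.log('Speed Index:', results.speedIndex);
  console.log('First visual change (ms):', results.first);
  console.log('Visually complete (ms):', results.complete);
});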
which calculates histograms
Speedline converts the image data from each paint event to an image histogram, interestingly excluding pixels that are "close enough" to pass as white:
/**
 * @param {number} i
 * @param {number} j
 * @param {ImageData} img
 */
function isWhitePixel(i, j, img) {
  return getPixel(i, j, 0, img.width, img.data) >= 249 &&
    getPixel(i, j, 1, img.width, img.data) >= 249 &&
    getPixel(i, j, 2, img.width, img.data) >= 249;
}
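getPixel isn't shown in that snippet, but it presumably just indexes into the flat RGBA pixel buffer, something along these lines (my guess at the helper, not the verbatim source):

// Assumed helper: read one channel (0 = R, 1 = G, 2 = B) of the pixel at (i, j)
// from a flat RGBA buffer of the given width.
function getPixel(i, j, channel, width, data) {
  return data[(j * width + i) * 4 + channel];
}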
A lot of math goes into calculating and comparing histograms. The project maintainer is the right person to ask about that. But this is where the eventual determination of the "visually complete" frame happens:
// find visually complete
for (let i = 0; i < frames.length && !visuallyCompleteTs; i++) {
  if (frames[i][progressToUse]() >= 100) {
    visuallyCompleteTs = frames[i].getTimeStamp();
  }
}
and infers "progress",
The "progress" of a given frame seems to be calculated by this function:
/**
 * @param {Frame} current
 * @param {Frame} initial
 * @param {Frame} target
 */
function calculateFrameProgress(current, initial, target) {
  let total = 0;
  let match = 0;
  const currentHist = current.getHistogram();
  const initialHist = initial.getHistogram();
  const targetHist = target.getHistogram();
  for (let channel = 0; channel < 3; channel++) {
    for (let pixelVal = 0; pixelVal < 256; pixelVal++) {
      const currentCount = currentHist[channel][pixelVal];
      const initialCount = initialHist[channel][pixelVal];
      const targetCount = targetHist[channel][pixelVal];
      const currentDiff = Math.abs(currentCount - initialCount);
      const targetDiff = Math.abs(targetCount - initialCount);
      match += Math.min(currentDiff, targetDiff);
      total += targetDiff;
    }
  }
  let progress;
  if (match === 0 && total === 0) { // All images are the same
    progress = 100;
  } else { // When images differs
    progress = Math.floor(match / total * 100);
  }
  return progress;
}
and "visually complete" is the first frame with 100% progress.
Without fully auditing the code, my interpretation is that the "visually complete frame" is the first frame calculated to have the same total difference from the initial frame as the final frame (which is determined by which frames Lighthouse chooses to send to Speedline).
Or, in other words, it's complicated.
Visually complete is when the page in the viewport stops changing, i.e. the visuals are no longer changing.
It is calculated by taking screenshots throughout the load and comparing them to each other and to the final end state. So yes, in your example, when all the numbers 1-100 are printed and the page stops changing, you are "visually complete".
So if a page loads the data in view quickly but renders "below the fold" content (e.g. off-screen images) more slowly, then you will get a quick visually complete time, even if the overall page load time is still long.
Similarly if most of the on screen content is drawn early on but one small part is drawn later (perhaps a “click to chat” option) you will get mostly visually complete early on and so a good speed index, even if not as good as the above example.
On the other hand if you load fonts, or perhaps a large hero image, last and it redraws large parts of the page in view you will get a slow visual complete time and also a slow speed index score.
More details here: https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
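Speed Index itself is then essentially the area above that visual-progress curve: you sum (1 - visual completeness) over each interval until the page is visually complete. A rough sketch of the idea (not WebPageTest's or Speedline's exact code):

// Rough sketch: Speed Index from (timestamp in ms, visual completeness in %) samples.
// A page that reaches 100% sooner accumulates less "incomplete" area, so lower is better.
function speedIndex(samples) {
  let si = 0;
  for (let k = 1; k < samples.length; k++) {
    const interval = samples[k].ts - samples[k - 1].ts;
    si += interval * (1 - samples[k - 1].progress / 100);
  }
  return si;
}

// Example: blank until 500 ms, 80% complete at 500 ms, fully complete at 1000 ms
// -> 500 * 1.0 + 500 * 0.2 = 600
console.log(speedIndex([
  { ts: 0, progress: 0 },
  { ts: 500, progress: 80 },
  { ts: 1000, progress: 100 }
]));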
I just got the answer from a Lighthouse repo contributor; please check this link:
https://github.com/GoogleChrome/lighthouse/issues/8148
Related
I want to "translate" a Pine-Script to MQL4 but I get the wrong output in MQL4 compared to the Pine-Script in Trading-view.
I wrote the Indicator in Pine-Script since it seems fairly easy to do so.
After I got the result that I was looking for I shortened the Pine-Script.
Here the working Pine-Script:
// Pinescript - whole Code to recreate the Indicator
study( "Volume RSI", shorttitle = "VoRSI" )
periode = input( 3, title = "Periode", minval = 1 )
VoRSI = rsi( volume, periode )
plot( VoRSI, color = #000000, linewidth = 2 )
Now I want to translate that code to MQL4 but I keep getting different outputs.
Here is the MQL4 code I wrote so far:
// MQL4 Code
input int InpRSIPeriod = 3; // RSI Period
double sumn = 0.0;
double sump = 0.0;
double VoRSI = 0.0;
int i = 0;
void OnTick() {
   for ( i; i < InpRSIPeriod; i++ ) {
      // Check if the Volume is buy or sell
      double close = iClose( Symbol(), 0, i );
      double old_close = iClose( Symbol(), 0, i + 1 );
      if ( close - old_close < 0 )
      {
         // If the Volume is positive, add it up to the positive sum "sump"
         sump = sump + iVolume( Symbol(), 0, i + 1 );
      }
      else
      {
         // If the Volume is negative, add it up to the negative sum "sumn"
         sumn = sumn + iVolume( Symbol(), 0, i + 1 );
      }
   }
   // Get the MA of the sump and sumn for the Input Period
   double Volume_p = sump / InpRSIPeriod;
   double Volume_n = sumn / InpRSIPeriod;
   // Calculate the RSI for the Volume
   VoRSI = 100 - 100 / ( 1 + Volume_p / Volume_n );
   // Print Volume RSI for comparison with Tradingview
   Print( VoRSI );
   // Reset the Variables for the next "OnTick" Event
   i = 0;
   sumn = 0;
   sump = 0;
}
I already checked that the period, symbol and timeframe are the same, and I also have a screenshot of the different outputs.
I already tried to follow the Pine Script documentation for the rsi, max, rma and sma functions, but I can't get any results that are even halfway correct.
I expect to translate the Pine Script into MQL4.
I do not want to draw the whole Volume RSI as an indicator on the chart.
I just want to calculate the value of the Volume RSI of the last whole period (when a new candle opens) to check whether it reaches higher than 80.
After that I want to check when it comes back below 80 again and use that as a threshold for whether a trade should be opened or not.
I want a simple function that gets the period as an input and takes the current pair and timeframe to return the desired value between 0 and 100.
So far my translation keeps producing the wrong output value.
What am I missing in the calculation? Can someone tell me the right way to calculate my TradingView indicator with MQL4?
Q : Can someone tell me the right way to calculate my TradingView indicator with MQL4?
Your main miss is that you put the code into the wrong type of MQL4 program. The MetaTrader Terminal places an indicator on a chart via a Custom Indicator type of MQL4 code.
There you have to declare so-called IndicatorBuffer(s), which contain the pre-computed values of the indicator; these buffers are separately mapped onto indicator lines (depending on the type of GUI presentation style: lines, area between lines, etc.).
In case you insist on having a Custom-Indicator-less indicator, which is perfectly legal and needed in some use cases, then you need to implement your own "mechanisation" of drawing lines into a separate sub-window of the GUI in the Expert Advisor code, where you manage all the settings and plotting "manually" as you wish, segment by segment (we use this for many reasons during prototyping, so as to avoid all the Custom Indicator dependencies and calling-interface nitty-gritty during complex trading ecosystem integration, so I am quite sure about the doability and about the performance benefits and costs of going this way).
The decision is yours, MQL4 can do it either way.
Q : What am I missing in the calculation?
BONUS PART : A hidden gem for improving the performance ...
Whichever way you go, Custom-Indicator-type MQL4 code or Expert-Advisor-type MQL4 code, it is possible to avoid recalculating the whole "depth" of the RSI on every QUOTE arrival. The indicator line has a frozen part and one hot end; performance-wise it is more than wise to keep static records of the "old", frozen data and only update the "live" hot end of the indicator line. That saves a lot of the response latency your GUI adds to any real-time response loop...
I need to get a dynamic bar count using moving averages to find the swing low and swing high. Please check the screenshot for a better understanding. Thanks in advance.
https://imgur.com/OQGy239
double mafast = iMA(NULL,0,15,0,MODE_SMA,PRICE_CLOSE,1);
double maslow = iMA(NULL,0,30,0,MODE_SMA,PRICE_CLOSE,1);
int i=1;
for (i = 1; i<=Bars; i++)
{
if (mafast<maslow || mafast>maslow);
}
int low = iLowest(NULL,0,MODE_LOW,i-1,0);
int high =iHighest(NULL,0,MODE_HIGH,i-1,0);
Print(Low[low]);
Print(High[high]);
}
Currently the 'Bars' function gives the value for the entire chart, but I need that count limited based on where the moving averages cross (price close above or below) with a strong move.
Background: I'm a dev who knows JS but is relatively new to Three.js. I've done a few small projects that involve static scenes with basic repeating animation.
I'm currently working on a modified version of Google's Globe project http://workshop.chromeexperiments.com/globe/. Looking back, I probably should have just started from scratch, but it was a good tool to see the approach their dev took. I just wish I could now update ThreeJS w/o the whole thing falling apart (too many unsupported methods and some bugs I never could fix, at least not in the hour I attempted it).
In the original, they are merging all of the geometric points into one object to speed up FPS. For my purposes, I'm updating the points on the globe using JSON, and there will never be more than 100 (probably no more than 60 actually), so they need to remain individual. I've removed the "combine" phase so I can now individually assign data to the points and then TWEEN the height change animation.
My question is, how do I manually select a single point (which is a Cube Geometry) so that I can modify the height value? I've looked through Stack Overflow and Three JS on GitHub and I'm not sure I understand the process. I'm assigning an ID to make it directly relate to the data that is being passed into it (I know WebGL adds an individual name/ID for particles, but I need something that is more directly related to what I'm doing for the sake of simplicity). That seems to work fine. But again, as a JS dev I've tried .getElementById(id) and $('#'+id) in jQuery, and neither works. I realize that Geometry objects don't behave the same way as HTML DOM objects, so I guess that's where I'm having struggles.
Code to add a point of data to the globe:
function addPoint(lat, lng, size, color, server) {
  geometry = new THREE.Cube(0.75, 0.75, 1, 1, 1, 1, null, false, { px: true,
    nx: true, py: true, ny: true, pz: false, nz: true});
  for (var i = 0; i < geometry.vertices.length; i++) {
    var vertex = geometry.vertices[i];
    vertex.position.z += 0.5;
  }
  var point = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({
    vertexColors: THREE.FaceColors
  }));
  var phi = (90 - lat) * Math.PI / 180;
  var theta = (180 - lng) * Math.PI / 180;
  point.position.x = 200 * Math.sin(phi) * Math.cos(theta);
  point.position.y = 200 * Math.cos(phi);
  point.position.z = 200 * Math.sin(phi) * Math.sin(theta);
  if ($('#'+server).length > 0) {
    server = server+'b';
  }
  point.id = server;
  point.lookAt(mesh.position);
  point.scale.z = -size;
  point.updateMatrix();
  for (var i = 0; i < point.geometry.faces.length; i++) {
    point.geometry.faces[i].color = color;
  }
  console.log(point.id);
  scene.addObject(point);
}
So now, to go back: I know I can't use point.id because that reference only exists inside the function. But I've tried 'Globe.id', 'Globe.object.id', 'object.id', and nothing seems to work. I know it is possible, I just can't seem to find a method that works.
Okay, I found a method that works for this by playing with the structure.
Essentially, the scene is labeled "globe" and all objects are its children. So treating the scene's children as an array, we can successfully pass an object into a var using the following structure:
Globe > Scene > Children > [Object]
Using a matching function, we loop through each item and find the desired geometric object and assign it to a temporary var for animation/adjustment:
function updatePoints(server) {
  var p, lineObject;
  $.getJSON('/JSON/'+server+'.json', function(serverdata) {
    /* script that sets p to either 0 or 1 depending on dataset */
    var pointId = server+p;
    // Cycle through all of the child objects and find a match
    for (var t = 3; t < globe.scene.children.length; t++) {
      if (globe.scene.children[t].name === pointId) {
        // set temp var "lineObject" to the matched object
        lineObject = globe.scene.children[t];
      }
    }
    /* Manipulation based on data here, using lineObject */
  });
}
I don't know if this is something that anyone else has had questions on, but I hope it helps someone else! :)
EDIT: Just realized this isn't a keyed array so I can use .length to get total # of objects
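As a side note for anyone finding this later: depending on your Three.js version you may not need the manual loop at all. Object3D has a lookup by name (getObjectByName in current versions; older builds had getChildByName), so something like the following should work, assuming each point was created with point.name = server rather than point.id (that part is an assumption on my side):

// Hypothetical alternative to the manual loop above, assuming the point was
// created with point.name = server; getObjectByName searches descendants for a
// matching .name and returns the first hit (or undefined).
var lineObject = globe.scene.getObjectByName(pointId);
if (lineObject) {
  // manipulate lineObject here
}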
Hi, SWT gurus, I have a pretty weird situation. I am trying to print a Gantt chart in my Eclipse RCP application. The Gantt chart is quite long and sometimes tall as well. Its dimensions are the following: height = 3008 px (2 vertical pages), width > 20000 px. My print area can fit something like 1400x2000 px. First of all, I create an image of my chart (the image is OK; I can save it separately and see that everything is there). However, in order to print it on paper, I print it piece by piece (moving the source X and Y positions respectively). This algorithm was working fine for some time, but now something strange happens:
When the chart is not too tall and fits on 1 vertical page, it is printed normally, but when it spans 2 vertical pages, only the second vertical page is printed and the first one is left out. There are no errors, nor anything else that could help me. I thought maybe there was not enough heap memory, so I allocated -Xmx1014m to my application, but it didn't help. So I am really lost and cannot find any solution, or even an explanation for this problem. I also tried to simply print the image with gc.drawImage(image, x, y), but that also printed only the second half of it. I also print some text after every attempt to print an image piece, and that text is printed.
The code responsible for printing the image is the following:
for (int verticalPageNumber = 0; verticalPageNumber <= pageCount.vGanttChartPagesCount; verticalPageNumber++) {
  // horizontal position needs to be reset to 0 before printing next bunch of horizontal pages
  int imgPieceSrcX = 0;
  for (int horizontalPageNumber = 0; horizontalPageNumber <= pageCount.hGanttChartPagesCount; horizontalPageNumber++) {
    // Calculate bounds for the next page
    final Rectangle printBounds = PrintingUtils.calculatePrintBounds(printerClientArea, scaleFactor, verticalPageNumber, horizontalPageNumber);
    if (shouldPrint(printer.getPrinterData(), currentPageNr)
        && nextPageHasSomethingToPrint(imgPieceSrcX, imgPieceSrcY, totalGanttChartArea.width, totalGanttChartArea.height)) {
      printer.startPage();
      final Transform printerTransform = PrintingUtils.setUpTransform(printer, printerClientArea, scaleFactor, gc, printBounds);
      printHeader(gc, currentPageNr, printBounds);
      imgPieceSrcY = printBounds.y;
      final int imgPieceSrcHeight =
          imgPieceSrcY + printBounds.height < ganttChartImage.getBounds().height ? printBounds.height : ganttChartImage.getBounds().height
              - imgPieceSrcY;
      final int imgPieceSrcWidth =
          imgPieceSrcX + printBounds.width < ganttChartImage.getBounds().width ? printBounds.width : ganttChartImage.getBounds().width
              - imgPieceSrcX;
      if (imgPieceSrcHeight > 0 && imgPieceSrcWidth > 0) {
        // gantt chart is printed as image, piece by piece
        gc.drawImage(ganttChartImage,
            imgPieceSrcX, imgPieceSrcY,
            imgPieceSrcWidth, imgPieceSrcHeight,
            printBounds.x, printBounds.y,
            imgPieceSrcWidth, imgPieceSrcHeight); // destination width and height equal to source width and height
                                                  // to prevent stretching/shrinking of image
        gc.drawText("Text " + currentPageNr, imgPieceSrcX, imgPieceSrcY);
        // move x to print next image piece
        imgPieceSrcX += printBounds.width;
      }
      currentPageNr++;
      printer.endPage();
      printerTransform.dispose();
    }
  }
}
Thanks in advance.
I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am looking into some image/video processing apps in OpenCV to understand it better.
I would like to know whether the OpenCV library has any algorithm/class for removing flicker from captured videos. If so, what documentation or code should I look into?
If OpenCV does not have one, are there standard implementations in some other video processing library/SDK/MATLAB that provide algorithms for flicker removal from video sequences?
Any pointers would be useful
Thank you.
-AD.
I don't know any standard way to deflicker a video.
But VirtualDub is video processing software that has a filter for deflickering video. You can find its filter source and documentation (probably including a description of the algorithm) here.
I wrote my own deflicker C++ function; here it is. You can cut and paste this code as is: no headers are needed other than the usual OpenCV ones.
Mat deflicker(Mat, int);
Mat prevdeflicker;

Mat deflicker(Mat Mat1, int strengthcutoff = 20) { // deflicker - compares each pixel of the frame to a previously stored frame, and throttles small changes in pixels (flicker)
  if (prevdeflicker.rows) { // check if we stored a previous frame. If not, there's nothing we can do: clone and exit
    int i, j;
    uchar* p;
    uchar* prevp;
    for (i = 0; i < Mat1.rows; ++i)
    {
      p = Mat1.ptr<uchar>(i);
      prevp = prevdeflicker.ptr<uchar>(i);
      for (j = 0; j < Mat1.cols; ++j) {
        Scalar previntensity = prevp[j];
        Scalar intensity = p[j];
        int strength = abs(intensity.val[0] - previntensity.val[0]);
        if (strength < strengthcutoff) { // the strength of the stimulus must be greater than a certain point, else we do not want to allow the change
          // value 25 works well for medium+ light. anything higher creates too much blur around moving objects.
          // in low light however this makes it worse, since low light seems to increase contrasts in flicker - some flickers go from 0 to 255 and back. :(
          // I need to write a way to track large group movements vs small pixels, and only filter out the small pixel stuff. maybe blur first?
          if (intensity.val[0] > previntensity.val[0]) { // use the previous frame's value. Change it by +1 - slow enough to not be noticeable flicker
            p[j] = previntensity.val[0] + 1;
          } else {
            p[j] = previntensity.val[0] - 1;
          }
        }
      }
    } // end for
  }
  prevdeflicker = Mat1.clone(); // clone the current one as the old one.
  return Mat1;
}
Call it as Mat = deflicker(Mat). It needs a loop and a greyscale image, like so:
for (;;) {
  cap >> frame; // get a new frame from camera
  cvtColor(frame, src_grey, CV_RGB2GRAY); // convert to greyscale - simplifies everything
  src_grey = deflicker(src_grey); // this is the function call
  imshow("grey video", src_grey);
  if (waitKey(30) >= 0) break;
}