I am facing an issue rendering around 9,000 points in a column chart with 'stacking: normal': loading the data takes too long in IE and Chrome (around 5-12 seconds each time I request data from the server in paging mode). When I comment out 'stacking: normal' it works fine and renders in under 2 seconds, but my requirement is to use 'stacking: normal'.
Any suggestions on how I can resolve this issue and optimize the performance?
Please check the fiddle below, where you can find the data format.
live demo
I checked the performance in Chrome and found the result below. I will wait for a reply.
See the Chrome performance results screenshot.
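For reference, here is a minimal sketch of the kind of configuration being described (Highcharts assumed; the container id and the seriesA/seriesB variables are illustrative placeholders, not the actual server payload):

Highcharts.chart('container', {
  chart: { type: 'column' },
  plotOptions: {
    series: {
      stacking: 'normal'  // commenting this line out is the fast case described above
    }
  },
  series: [
    { name: 'Series A', data: seriesA },  // the series together hold roughly 9,000 points
    { name: 'Series B', data: seriesB }
  ]
});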
Related
I'm working on boosting my website's performance and I'm testing the page speed with Google PSI. I've made as many corrections as I can and I'm getting a 90+ score in PSI. All lab data attributes are green, but the field data is still not being updated. I'm wondering how long Google PageSpeed Insights takes to update the field data. Also, if the field data does update, will it be the same as the lab data?
PageSpeed Insights screenshot
The data displayed is aggregated daily, so the data should change day to day.
However, the reason you do not see the results of any improvements instantly is that the data is taken over a rolling 28-day period.
After about 7 days you should start to see your score improve; after 28 days the report data will fully reflect the changes you made today.
Because of this delay I would recommend taking your own metrics using the Web Vitals Library so you can see the results in real time. This also lets you verify that the changes you made work at all screen sizes and across browsers.
Either that or you could query the CrUX data set yourself on shorter timescales.
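If you go the Web Vitals Library route, a minimal sketch might look like the following (this assumes the onLCP/onCLS/onINP exports of recent versions of the web-vitals npm package; the '/analytics' endpoint is a placeholder for wherever you collect your own metrics):

import { onLCP, onCLS, onINP } from 'web-vitals';

function report(metric) {
  // Ship each metric to your own endpoint so you can watch improvements in real time.
  navigator.sendBeacon('/analytics', JSON.stringify({ name: metric.name, value: metric.value }));
}

onLCP(report);
onCLS(report);
onINP(report);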
NewRelic is showing Avg. CPU usage as 11200% for my app. What could be the issue? My app seems to work fine on my iPhone and no user has ever reported any battery degradation because of my app. Is anyone else facing the same issue? How do I debug it?
The Infrastructure Hosts page calculates the CPU average using several attributes.
CPU percentage is not collected directly by New Relic, but derived from several other metrics. Specifically, the cpuPercent attribute is an aggregation of cpuUserPercent, cpuSystemPercent, cpuIOWaitPercent and cpuStealPercent.
Probably at least one of them is reporting an incorrect value.
You can ask this question (or report a bug) on the New Relic Infrastructure discussion forum.
Or write an NRQL query in Insights to check these attributes using SystemSample.
To query Infrastructure event data, use the NRQL syntax with the Insights Data Explorer:
Go to insights.newrelic.com > Data Explorer.
From the query command line, use FROM before the event type.
SELECT cpuUserPercent, cpuSystemPercent, cpuIOWaitPercent, cpuStealPercent FROM SystemSample
A value above 100% means there is more than one CPU or core: the per-core percentages are summed, so a host with many cores can legitimately report totals far above 100%.
Newrelic-Docs
Thanks.
I'm generating JSON for 65,000 users to populate a typeahead. The query is quick; it turns out building the JSON was the bottleneck. I'm trying to cache the result, but what happens when the cache expires? Does it rebuild automatically, or does it wait until someone triggers the call, resulting in a 9-second page load once every 12 hours?
def user_json
  Rails.cache.fetch("users", expires_in: 12.hours) do
    User.all.to_json
  end
end
If you do not want to hit the database each time, you could look into a solution such as Elasticsearch or Sphinx, which are designed to perform the kind of quick searching you're describing.
I was listening to JavaScript Jabber this morning and they were saying that the average web page is now a shade under 2 MB including images and CSS. Your request doubles that size. While that may be fine for North Americans, your page is likely to feel much slower in internet backwaters such as Australia.
It's also worth noting that older browsers such as IE don't handle iteration in JavaScript very well. I would expect your application to crash in any IE before version 9.
For these reasons I would avoid pushing JSON containing 65,000 rows over the wire and into the browser. If the query is quick, why not make a trip back to the server each time the user changes the input? Many small trips back to the server based on the input would be quicker than sending all 65,000 records, and it removes the entire class of problems described above. Your original problem also goes away, since you no longer have to cache any responses.
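As a rough illustration of that approach, here is a minimal sketch of querying the server as the user types instead of shipping everything up front (the '/users/search' endpoint, the 'q' parameter and the renderSuggestions function are assumptions; adjust them to whatever your routes and typeahead widget actually expose):

let timer;
const input = document.querySelector('#user-search');

input.addEventListener('input', () => {
  clearTimeout(timer);
  // Debounce so the server is not hit on every single keystroke.
  timer = setTimeout(async () => {
    const q = encodeURIComponent(input.value);
    const res = await fetch(`/users/search?q=${q}`);
    const users = await res.json();
    renderSuggestions(users);  // hand the small result set to the typeahead
  }, 250);
});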
I have a query that gets a list of users from a table, sorted by the time each row was created. I got the following timing diagram from the Chrome developer tools.
You can see that TTFB (time to first byte) is too high.
I am not sure whether it is because of the SQL sort. If that is the reason, how can I reduce this time?
Or is the TTFB itself the problem? I saw blogs which say that TTFB should be low (< 1 sec), but for me it shows > 1 sec. Is it because of my query or something else?
I am not sure how I can reduce this time.
I am using Angular. Should I sort the table in Angular instead of in SQL? (Many posts say that shouldn't be the issue.)
What I want to know is how I can reduce TTFB. I am new to this; it is a task given to me by my team members. I have read many posts but am not able to understand them properly. What exactly is TTFB? Is it the time taken by the server?
The TTFB is not the time to first byte of the body of the response (i.e., the useful data, such as JSON or XML), but rather the time to first byte of the response received from the server. That byte is the start of the response headers.
For example, if the server sends the headers before doing the hard work (like heavy SQL), you will get a very low TTFB, but it isn't a "true" one.
In your case, TTFB represents the time you spend processing data on the server.
To reduce the TTFB, you need to do the server-side work faster.
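If you want to measure it yourself rather than reading it off a waterfall chart, here is a quick sketch using the standard Navigation Timing API from the browser console (no library needed):

const [nav] = performance.getEntriesByType('navigation');
// responseStart is when the first byte of the response headers arrived.
const ttfb = nav.responseStart - nav.requestStart;
console.log('TTFB (ms):', ttfb);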
I ran into the same problem. My project runs on a local server, so I checked my PHP code.
$db = mysqli_connect('localhost', 'root', 'root', 'smart');
I use localhost to connect to my local database. That may be the cause of the problem you're describing. You can modify your HOSTS file by adding the line
127.0.0.1 localhost
TTFB is something that happens behind the scenes. Your browser knows nothing about what happens behind the scenes.
You need to look into what queries are being run and how the website connects to the server.
This article might help understand TTFB, but otherwise you need to dig deeper into your application.
If you are using PHP, try calling <?php flush(); ?> after </head> and before </body>, or after whatever section you want to output quickly (like the header or content). It will send the markup produced so far without waiting for PHP to finish. Don't use this function everywhere, or the speed increase won't be noticeable.
More info
I would suggest you read this article and focus more on how to optimize the overall response to the user's request (a page, a search result, etc.).
A good argument for this is the example they give about using gzip to compress the page. Even though TTFB is faster when you do not compress, the overall experience for the user is worse because it takes longer to download content that is not zipped.
I'm considering using SignalR to keep persistent (COMET) connections with my .NET server in a project where I need to update a client-side graph. I'm considering Flot for the graphing portion, but am curious how feasible it is to display a "live graph" in this manner. Is Flot a good choice for this? I would like the server to be able to push new data to the graph and have it appended to the existing data as it becomes available.
I haven't found any examples of doing this, so am wondering if there is some difficulty in doing this that I am not anticipating.
Flot and Highcharts, the two I'm most familiar with, let you redraw the data as long as the axes and grid stay the same. They are pretty efficient in that case.
To use Flot to append data to a continuous graph, you will end up just redrawing the whole graph all the time. In any modern browser (heck, even IE7), as long as you keep the number of points reasonable, the performance will be totally acceptable. I have pages with 4-6 Flot graphs, updating every second, each having ~3-5 datapoints per second, with up to 5 minutes of data (so ~1000 datapoints per graph, 4000 points in total on the page). This is achieved with no lag, even on a low-powered machine.
I have not seen any libraries for managing this type of thing on top of Flot, so I ended up doing my own caching.
I think the only "gotcha" you'll run into is making sure you don't let your memory usage run out of control. The first couple attempts I made at this, if you left the graph running overnight, you would come back to 4GB of memory used. Make sure you properly remove old data, and don't keep references to replaced graphs and AJAX requests.
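A rough sketch of that redraw-everything approach, with old points dropped so memory stays bounded (the placeholder id, the point cap, and the addSample helper are illustrative, not part of any library):

const MAX_POINTS = 1000;  // roughly 5 minutes of data at a few points per second
let points = [];
const plot = $.plot('#placeholder', [points], { xaxis: { mode: 'time' } });

// Call this from whatever delivers new samples, e.g. a SignalR callback.
function addSample(timestamp, value) {
  points.push([timestamp, value]);
  if (points.length > MAX_POINTS) {
    points = points.slice(-MAX_POINTS);  // discard old data so memory use stays flat
  }
  plot.setData([points]);
  plot.setupGrid();  // recompute the axes in case the range changed
  plot.draw();
}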