Is there any way I can get a performance summary (Scripting, Rendering, Idle time)? - Lighthouse

I want to capture the Chrome performance summary (Scripting, Rendering, Loading, Idle, and Other time) programmatically. I want to know whether there are any metrics Lighthouse generates that I can use.
I have tried https://github.com/axemclion/browser-perf, but it doesn't calculate Other and Idle time. I looked at the Chromium source code (TimeLineUtil.js) to see if I could use the same logic to capture these details, but it seems a bit complex, so I thought I'd ask this question here in case anyone has already implemented the same.
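To show the kind of summary I'm after, here is a rough sketch of what I've been experimenting with: bucketing the events of a trace.json exported from the DevTools Performance panel. The event-name-to-category map is my own guess and only approximates DevTools' logic, and nested events aren't de-duplicated, so the totals are rough:

import json

# Rough mapping from trace event names to summary categories; this is a
# guess that only approximates what the DevTools timeline code does.
CATEGORY_BY_EVENT = {
    "EvaluateScript": "Scripting",
    "FunctionCall": "Scripting",
    "Layout": "Rendering",
    "UpdateLayerTree": "Rendering",
    "Paint": "Painting",
    "CompositeLayers": "Painting",
    "ParseHTML": "Loading",
}

with open("trace.json") as f:
    data = json.load(f)

# Trace files are either a bare list of events or {"traceEvents": [...]}.
events = data["traceEvents"] if isinstance(data, dict) else data

totals = {}
for event in events:
    # Only "complete" events (ph == "X") carry a duration, in microseconds.
    if event.get("ph") != "X" or "dur" not in event:
        continue
    category = CATEGORY_BY_EVENT.get(event.get("name"), "Other")
    totals[category] = totals.get(category, 0) + event["dur"] / 1000.0  # ms

print({name: round(ms, 1) for name, ms in totals.items()})

Idle would then be whatever is left of the recording's wall-clock time after subtracting the busy categories.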

You are probably looking for the PageSpeed Insights API: https://developers.google.com/speed/docs/insights/v5/reference/pagespeedapi/runpagespeed#response. Or is that insufficient?
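A rough sketch of reading a breakdown from that endpoint, using only the standard library. I'm assuming the v5 response shape from the linked reference, where the Lighthouse "mainthread-work-breakdown" audit is the closest match to the DevTools summary; verify the field names against the docs:

import json
import urllib.parse
import urllib.request

page_url = "https://www.example.com"  # page to analyze
api_url = ("https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url="
           + urllib.parse.quote(page_url, safe=""))

# Heavy use requires an API key; see the linked reference.
with urllib.request.urlopen(api_url) as response:
    result = json.load(response)

# The mainthread-work-breakdown audit reports main-thread time per
# category (Script Evaluation, Style & Layout, Rendering, Parsing, ...).
audit = result["lighthouseResult"]["audits"]["mainthread-work-breakdown"]
for item in audit["details"]["items"]:
    print(item["groupLabel"], round(item["duration"], 1), "ms")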

Related

Lidars in Drake

I want to simulate lidars. I saw that a class DepthSensor was mentioned in the documentation, but I have not found its actual implementation. For now, I am planning to use the RgbdSensor class and keep only the rows I need from the depth point cloud I receive to simulate my lidars.
Just to get your input on that, maybe I missed something, but is there a specific class for lidars, and how would you go about adding lidars to a simulation?
Thanks in advance,
Arnaud
You've discovered an anachronism in the code. There had previously been a lidar-like sensor (called DepthSensor); the extant documentation refers to that class. The class's removal should've been accompanied by a cleanup of the documentation.
The approach you are taking is the expected approach given Drake's current state.
There has always been an intention to re-introduce a lidar-like sensor in Drake's current architecture. It simply hasn't been a high priority.
I'd recommend you proceed with what you're currently doing (lidar from depth images) but, at the same time, post an issue requesting a lidar-like query, with a specific focus on the minimum lidar properties that you require. A discussion of how that would differ from what you can actually get from the depth images would better inform us of your unique needs and how to prioritize the work. (You can also indicate more advanced features that are less critical but would be good to have, of course.)
As for the question: how would you go about adding lidars?
That's problematic. Ideally, what you would need is ray-casting ability. The intent is for QueryObject to support such a query, but it hasn't happened yet. (It's certainly the underlying technology we'd have used to implement a LidarSensor.) In the absence of that kind of functionality, you'd essentially have to do it yourself in the most horrible, tedious way imaginable. I'd go so far as to suggest that it's not feasible with the current API.
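If you do go the depth-image route, a minimal numpy sketch of the row-to-ranges conversion might look like the following. It assumes a pinhole camera; fx and cx stand in for whatever intrinsics your RgbdSensor's camera is configured with, and the depth values are z-depths in meters (as in Drake's ImageDepth32F):

import numpy as np

def depth_row_to_lidar(depth_image, row, fx, cx):
    """Convert one row of a z-depth image into planar-lidar-style ranges.

    depth_image holds z-depths in meters (e.g. the data of Drake's
    ImageDepth32F).  For a pinhole camera, the range along the ray
    through pixel u is depth * sqrt(1 + ((u - cx) / fx)**2); pick the
    row nearest the vertical optical center so the rays' vertical
    offset is negligible.
    """
    depths = np.asarray(depth_image, dtype=float)[row, :]
    u = np.arange(depths.shape[0])
    return depths * np.sqrt(1.0 + ((u - cx) / fx) ** 2)

# The corresponding beam angles, measured from the optical axis:
#   theta(u) = arctan((u - cx) / fx)
# Illustration with a fabricated image and made-up intrinsics:
image = np.full((480, 640), 2.0)  # every pixel 2 m away along camera z
ranges = depth_row_to_lidar(image, row=240, fx=500.0, cx=319.5)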

Is Marshal.ReleaseComObject really necessary when using Microsoft.Office.Interop.Excel?

I have a mid-sized code library (several thousand lines) that uses Excel Interop (Microsoft.Office.Interop.Excel).
The program keeps a workbook open for hours at a time and performs manipulations like adding/editing text and shapes and calling macros.
I have not once seen a call to Marshal.ReleaseComObject. Yet the users don't report any problems.
In all cases, the objects go out of scope within several seconds.
So, is this a problem? How? If yes, how do I justify to management that it needs cleanup? If not, why is it recommended in the first place?
It's been a while, but I did a lot of Excel automation from .NET. I never used Marshal.ReleaseComObject either. Never saw a problem.

Crackling during playback of a libPd patch - esp. related to keyboard presentation

I've integrated a libPd patch in iOS.
When entering a text field and presenting the keyboard, there are some crackling sounds.
How would I go about debugging this?
NB: I've tagged this question with Objective-C and iOS; however, this question may require knowledge of all four tags, libPd and Pure Data as well:
What is Pure Data?
Pure Data is a powerful programming language for manipulating audio from core mathematical concepts. It's widely used in games as well as in DJ and other music-focused applications. Some example apps that are built with Pure Data and libPd are the Rj Voyager app from RjDj and the Inception app from Warner Brothers.
libPd is a way of embedding Pure Data patches (developed using the visual interface) within an iOS app. Controlling the Pd patch is done via a publish/subscribe message interface similar to OSC or MIDI.
The GitHub page for libPd is here: https://github.com/libpd
What help am I looking for?
I'm not sure where to start debugging this. Someone who has integrated and used libPd on iOS could surely share their experience. It could be related to the following:
How threading works, and how it interacts with the main queue
What sample rates work best given the target devices
What debugging tools are available.
Other advice earned through deep experience.
I don't know anything about PD, but it seems likely that the presentation of the keyboard is causing you to be CPU-starved for some reason. You might try:
verifying this still happens in a release build when not attached to a debugger (log messages cause long delays when attached to the debugger, which alone can cause hiccups like this),
profiling your code using Instruments to see if you're inadvertently using a whole lot of CPU at once, or
increasing buffer sizes so Pd doesn't need the CPU as often.
I was experiencing the same symptoms in an app I'm working on, and I did manage to ascertain a couple of things early on. My recent changes involved sending a lot of messages to Pd during app init. I noticed when debugging that when I reduced the number of messages sent, the sound improved. Also, I didn't see this in the simulator, only on the device.
The libpd example PolyPatch was pretty useful in this case, if you increase the number of patches that can be generated. I found that the sound was breaking up with many patches open, in exactly the same way as in my app. This is quite simply where the overhead of using libpd takes its toll on performance. What's also clear is that simplifying a patch (so it contains fewer objects) helps performance. But by far the biggest hit is creating a new, separate patch, so you won't want to be creating huge numbers of patches. Debugging does of course take a toll too.
44.1 kHz works pretty much everywhere as far as sample rates go (it's the Pd standard too). And there's nothing to stop you debugging the libpd code right there in Xcode; I've done that a few times. Other than that, there is the issue of debugging patches. You can either set up your patch with test versions of your objects directly in Pd, or you can set up libpd to show in the console the same output you would normally see in Pd's main window (you just need to make sure that you have something like this
[PdBase setDelegate:_dispatcher];
in your code - it's all in the docs, of course). Then you just pepper your patch with print messages as required...
Hope it helps, and is still relevant after 3 months...!

Page rendering speed improvement

We are running a web service that is struggling with some pretty high page rendering times, especially in IE8 (around 20 seconds). We are very skilled at building high-performing backend systems, but not as skilled at optimizing the frontend.
Currently it seems (from New Relic) that page rendering and DOM parsing are the biggest issues.
We have tried to optimize the JS scripts, and that helped a little, but the page still renders terribly slowly in IE8, and I have a feeling that some low-hanging fruit is out there. My problem is that I have really no idea where to start, what would work, or whether there are red lamps flashing that I'm not seeing. I need an experienced eye.
Can anyone point me in the right direction (I'm open to everything!)?
The slow page is here: the slow page
PS: we are running Rails 3.2.
I recommend you analyze your website with the tools above (YSlow is also a good tool), or with the online tool Pingdom. There you'll see, in a very easy way, where your speed is going.
There's a freely available summary of the performance optimization books from Hooopo's answer (which are excellent!) on the Yahoo! Developer Network.
"Currently it seems (from New Relic) that page rendering and DOM parsing are the biggest issues." Therefore I recommend you study this book: High Performance JavaScript by Nicholas C. Zakas.
Put as much JS as possible to the bottom of your page to improve progressive rendering.
I sometimes found CSS selectors that are a bit long (it doesn't matter on a small site, but in this case...). These can make your page rendering very slow, especially in IE.
Example (from your site):
table.results_table td.car_details .content > .left { ... }
Try to break this large selector down to something like this (if possible):
.car_details .content .left-child { ... }
In short: optimize your JS performance and keep your CSS selectors as short and simple as possible.
Hope this helps.
To optimize the front end, try these two tools and follow their suggestions:
http://www.webpagetest.org/
https://developers.google.com/speed/pagespeed/insights
You could also use CSS sprite images to reduce HTTP requests. Try https://github.com/Compass/compass-rails
I also recommend two books to you:
http://www.amazon.com/High-Performance-Web-Sites-Essential/dp/0596529309
http://shop.oreilly.com/product/9780596522315.do

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether real-time audio input matches one of 144 given (and comfortably distinct) phoneme pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far greater and more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct note pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for recognition, pretty much like what Google does.
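That server-side route could be as small as the following sketch. I'm assuming Flask here, and recognize() is just a placeholder for whatever recogniser actually runs on the server (HTK, Sphinx, ...); all names are illustrative:

from flask import Flask, request, jsonify

app = Flask(__name__)

def recognize(wav_bytes):
    # Placeholder for the real recogniser (HTK, Sphinx, ...) on the server.
    return "laa"

@app.route("/recognize", methods=["POST"])
def recognize_endpoint():
    wav_bytes = request.files["audio"].read()  # client uploads a short WAV clip
    return jsonify(label=recognize(wav_bytes))

if __name__ == "__main__":
    app.run()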
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.
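In the spirit of matchbox and the "minimum technology" idea above, here is a hedged sketch of the template-matching route: record one reference clip per phoneme pair, extract MFCC sequences, and label a new clip by its lowest DTW distance to the templates. It assumes the librosa library; the file names and parameters are illustrative, and a real version would need voice-activity detection, per-speaker templates, and streaming operation to run in real time.

import numpy as np
import librosa

SR = 22050    # sample rate assumed for all clips
N_MFCC = 13   # a common MFCC dimensionality; tune for your material

def mfcc_sequence(audio):
    """Return an (n_mfcc, n_frames) MFCC matrix for one clip."""
    return librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=N_MFCC)

def dtw_distance(a, b):
    """Dynamic-time-warping alignment cost between two MFCC sequences."""
    cost, _ = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return float(cost[-1, -1])

def classify(clip, templates):
    """Return the label of the template with the smallest DTW distance."""
    query = mfcc_sequence(clip)
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))

# Usage sketch: "laa.wav" etc. are hypothetical recorded reference clips.
templates = {name: mfcc_sequence(librosa.load(name + ".wav", sr=SR)[0])
             for name in ["laa", "duu", "bee"]}
print(classify(librosa.load("input.wav", sr=SR)[0], templates))

With only 12 user-trained pairs (which the question says would be acceptable), nearest-template matching like this stays cheap enough that on-device, near-real-time use seems plausible.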
