Rhino JavaScript coverage using Debugger

I'm executing some JavaScript inside Rhino and want to collect coverage data. I have read in many places that implementing Debugger gives access to line-by-line execution, and in fact the interface has a callback for exactly that.
I'm planning to write my Debugger implementation to create coverage data.
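Something along these lines is what I have in mind. This is only a minimal sketch, assuming Rhino's org.mozilla.javascript.debug API; the class and field names (LineCoverageDebugger, hits) are my own, and error handling is omitted:

```java
import java.util.*;
import org.mozilla.javascript.*;
import org.mozilla.javascript.debug.*;

// Records which source lines actually execute, per script.
public class LineCoverageDebugger implements Debugger {
    // sourceName -> set of executed line numbers
    final Map<String, Set<Integer>> hits = new HashMap<>();

    public void handleCompilationDone(Context cx, DebuggableScript script, String source) {
        // script.getLineNumbers() could be stored here, so that lines which
        // never run can later be reported as uncovered.
    }

    public DebugFrame getFrame(Context cx, DebuggableScript script) {
        final String name = script.getSourceName();
        return new DebugFrame() {
            public void onLineChange(Context cx, int lineNumber) {
                hits.computeIfAbsent(name, k -> new HashSet<>()).add(lineNumber);
            }
            public void onEnter(Context cx, Scriptable activation, Scriptable thisObj, Object[] args) {}
            public void onExceptionThrown(Context cx, Throwable t) {}
            public void onExit(Context cx, boolean byThrow, Object result) {}
            public void onDebuggerStatement(Context cx) {}
        };
    }
}
```

As far as I understand, the debugger only fires in interpreted mode, so the context would be set up roughly like this:

```java
Context cx = Context.enter();
try {
    cx.setOptimizationLevel(-1);   // interpreted mode; required for debugging
    cx.setGeneratingDebug(true);   // keep line number information
    LineCoverageDebugger dbg = new LineCoverageDebugger();
    cx.setDebugger(dbg, null);
    Scriptable scope = cx.initStandardObjects();
    cx.evaluateString(scope, "var x = 1;\nx = x + 2;", "test.js", 1, null);
    System.out.println(dbg.hits);  // e.g. {test.js=[1, 2]}
} finally {
    Context.exit();
}
```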
However, I cannot find a similar implementation anywhere, and it seems strange to me that nobody has used this feature. Even systems that use Rhino and do coverage do not take that approach; they instrument the JavaScript code instead.
Are you aware of:
Any existing Rhino Debugger implementation that does coverage analysis? Even just a proof of concept or example code.
Any reason why this approach is not used? Would it be too slow, or would it introduce incorrect behavior?
Thanks in advance.

Related

Understanding how coverage modules work

I came across a new dynamic language and would like to create a coverage tool for it. I started reading the source code of the Perl 5 and Python coverage modules, but it got complicated. Since it is a dynamic scripting language, I guess that tools for static languages (like Java & C++) won't help me here. Also, as I understand it, each language is built differently and the same ideas won't transfer directly, though the big concepts should be similar.
My question is as follows: how do I "attack" this task? What workflow should I follow? What do I need to investigate? Are there any books or blogs I can read about this kind of thing?
There are two kinds of coverage collection mechanisms:
1) Real-time sampling of the program counter, typically driven by a timer firing every 1-10 ms (see the first sketch after this list). Difficulties: a) mapping an actual PC value back to a source line, b) sampling means you might not see execution of a rarely used bit of code, so your coverage reporting is inaccurate. Because of these issues, this approach isn't used very often.
2) Instrumenting the program so that it collects coverage as it runs (see the second sketch after this list). This is hard to do with object code: a) you have to decode the instructions to see where to put probes, and this can be very hard to do right; b) you have to patch the object code to include the probes (this can be really awkward: a "probe" might consist of a 5-byte subroutine call, but it has to replace a single-byte instruction); c) you still have to figure out how to map a probe location back to a source code line. A more effective way is to instrument the source code, which requires fairly sophisticated machinery to read the source, insert the probes, and regenerate the instrumented code for execution/compilation.
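For illustration, here are toy sketches of both mechanisms, written in Java since that is the language of the Rhino question above; every class, field, and probe name is invented for the sketch. The first samples a running thread's stack traces (the JVM analog of sampling a program counter) and shows why rarely executed code gets missed:

```java
import java.util.*;
import java.util.concurrent.*;

// Mechanism 1: poll the target thread's stack every few milliseconds and
// record the source positions seen. Lines that execute only briefly may
// never be observed, which is the inaccuracy described above.
public class SamplingCoverage {
    final Set<String> seen = ConcurrentHashMap.newKeySet();

    ScheduledExecutorService watch(Thread target) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            for (StackTraceElement frame : target.getStackTrace())
                seen.add(frame.getFileName() + ":" + frame.getLineNumber());
        }, 0, 5, TimeUnit.MILLISECONDS);
        return timer;
    }
}
```

The second instruments at the source level by prefixing each line with a probe. It is deliberately naive: it assumes one statement per line and would break constructs like `} else {`; real tools parse the source and place probes on an AST:

```java
// Mechanism 2, source-level: emit a copy of a JavaScript program in which
// every non-blank line first bumps a per-line hit counter.
public class ToySourceInstrumenter {
    static String instrument(String jsSource) {
        StringBuilder out = new StringBuilder("var __hits = {};\n");
        String[] lines = jsSource.split("\n", -1);
        for (int i = 0; i < lines.length; i++) {
            if (!lines[i].trim().isEmpty())
                out.append("__hits[").append(i + 1).append("] = (__hits[")
                   .append(i + 1).append("] || 0) + 1; ");
            out.append(lines[i]).append('\n');
        }
        return out.toString();
    }
}
```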
My technical paper, Branch Coverage for Arbitrary Languages Made Easy, provides explicit detail on how to do this in a general way. My company has built commercial test coverage tools for a wide variety of languages (C, Python, PHP, COBOL, Java, C++, C#, Pro*C, ...) using this approach, which covers most static and dynamic languages. Some dynamic mechanisms are extremely difficult to instrument, e.g., eval(), but that is true of every approach.
In addition to Ira's answer, there is a third coverage collection mechanism: the language implementation provides a callback that can inform you about program events (this is also what Rhino's Debugger interface, discussed in the question above, provides). For example, Python has sys.settrace: you give it a function, and Python calls your function for every function call and return, and for every line executed.

iOS - why use Quick and Nimble vs XCTest

Quick is a behavior-driven development testing framework. I'd like to know why this could be better than regular XCTest. Nimble is only a matcher library, but it makes tests easy to read by letting you write things like expect(13) > 9.
To me, Quick provides a new vocabulary for writing tests (one that XCTest doesn't have) and makes you focus on writing a unit test; its features essentially nudge you toward TDD. When a test fails, it is also much more descriptive.
The other thing I noticed is that if I want to see what a method does, I can go to the Quick spec and easily read what's being tested, which tells me more about the method without needing comments on it. The Quick spec effectively acts as documentation for the method.
Is there anything more I should know about Quick or BDD?
You need to evaluate what you actually need.
I've been using Quick (and Nimble) for a long time, and my big concern is that it is not possible to run a single test case. Quick generates its test cases at runtime, which is why they cannot be run individually, even if you use fit().

How to count how many times a function or a line of code is being called?

I am developing a web application in Ruby on Rails. Before pushing it to production, I want to add functionality that tells me how many times a particular function is called, so that I can optimize frequently called code and identify dead code that is never called.
It seems you can find some examples of AOP (aspect-oriented programming) for Ruby on Rails,
like http://blog.arkency.com/2013/07/ruby-and-aop-decouple-your-code-even-more/
AOP lets you execute code before, after, or around a method without modifying the class itself.
It would be a good way to count method use without impacting your current code.
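The same interception idea, sketched here in Java (the language used for the other sketches on this page) just to make the mechanics concrete; a Ruby AOP library does the equivalent by wrapping methods. All names are invented:

```java
import java.lang.reflect.*;
import java.util.*;
import java.util.concurrent.*;

interface Greeter { String greet(String name); }

// "Around" advice via a dynamic proxy: count every call, then delegate
// to the real object, without modifying its class.
class CallCounter implements InvocationHandler {
    final Object target;
    final Map<String, Integer> counts = new ConcurrentHashMap<>();

    CallCounter(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        counts.merge(m.getName(), 1, Integer::sum); // the "before" part
        return m.invoke(target, args);              // the real call
    }

    public static void main(String[] args) {
        Greeter real = name -> "hi " + name;
        CallCounter counter = new CallCounter(real);
        Greeter counted = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, counter);
        counted.greet("a");
        counted.greet("b");
        System.out.println(counter.counts); // {greet=2}
    }
}
```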
You can also take a look at this question
This functionality exists when running tests, though it actually measures how much code is covered by your tests. Check out simplecov; you can use it together with RSpec or Cucumber, and the latter in particular should give you some indication.
Instead of running tests, you could also run a simple script together with simplecov to achieve much the same effect.
But if you are worried about code quality, I would urge you to look at metric_fu: it generates all kinds of statistics about the quality of your code.
This is called profiling, and there are plenty of profilers for Ruby. Some of the best profiling tools on the planet exist on the Java platform, and they pretty much Just Work™ out-of-the-box with JRuby. For YARV, there is ruby-prof, for example.

How to start unit-testing old and new code?

I admit that I have almost no experience with unit testing. I gave DUnit a try a while ago but gave up because there were so many dependencies between the classes in my application.
It is a rather big Delphi application (about 1.5 million source lines), and a team of us maintains it.
Testing is currently done by one person who uses the application before each release and reports bugs. I have also set up some GUI tests in TestComplete 6, but they often fail because of changes in the application.
Bold for Delphi is used as the persistence framework against the database.
We all agree that unit testing is the way to go, and we plan to write a new application in .NET with ECO as the persistence framework.
I just don't know where to start with unit testing...
Any good books, URLs, best practices, etc.?
Well, the challenge in unit testing is not the testing itself, but writing testable code. If the code was written without testing in mind, you'll probably have a really hard time.
Anyway, if you can refactor, do so to make the code testable. Avoid mixing object creation with logic wherever possible (I don't know Delphi, but there may be a dependency injection framework to help with this).
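For illustration, a minimal sketch of the idea in Java (the class names are invented; the same shape applies in Delphi or .NET):

```java
import java.util.List;

interface Database { List<String> query(String sql); }

// Hard to test: the class constructs its own collaborator, so every
// unit test would hit the real database.
class HardToTestReport {
    private final Database db = new ProductionDatabase(); // hidden dependency
    int rowCount() { return db.query("SELECT * FROM t").size(); }
}

class ProductionDatabase implements Database {
    public List<String> query(String sql) { return List.of(); /* real DB call */ }
}

// Testable: the collaborator is injected, so a test can pass a fake.
class TestableReport {
    private final Database db;
    TestableReport(Database db) { this.db = db; }
    int rowCount() { return db.query("SELECT * FROM t").size(); }

    public static void main(String[] args) {
        Database fake = sql -> List.of("row1", "row2"); // stub via lambda
        System.out.println(new TestableReport(fake).rowCount()); // 2
    }
}
```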
This blog has lots of good insight about testing. Check this article for instance (my first suggestion was based on it).
As for a suggestion, try testing the leaf nodes of your code first, those classes that don't depend on others. They should be easier to test, as they don't require mocks.
Writing unit tests for legacy code usually requires a lot of refactoring.
An excellent book that covers this is Michael Feathers' "Working Effectively with Legacy Code".
One additional suggestion: use a unit test coverage tool to track your progress in this work. I'm not sure what the good coverage tools for Delphi are, though; I guess that would be a different question/topic.
One of the more popular approaches is to write the unit tests as you modify the code. All new code gets unit tests, and for any code you modify, you first write a test for its current behavior, verify it passes, make your modification, re-verify, and then write or fix any tests you need because of your changes.
One of the big advantages of having good unit test coverage is being able to verify that the changes you make don't inadvertently break something else. This approach allows you to do that, while focusing your efforts on your immediate needs.
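A tiny sketch of that workflow in Java/JUnit, just to make it concrete (the thread is about Delphi/.NET, but the shape is the same; PriceCalculator and its rounding rule are invented for the example):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1: before touching legacy code, pin down its CURRENT behavior.
class PriceCalculatorTest {
    @Test
    void pinsDownCurrentRounding() {
        assertEquals(10, new PriceCalculator().round(9.5));
    }
    // Step 2: run it (green), make the modification, run it again.
    // Step 3: if the change was meant to alter behavior, update the test too.
}

class PriceCalculator {
    int round(double price) { return (int) Math.round(price); }
}
```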
The alternate approach I've employed is to develop my unit tests via Co-Ops :)
When you work with legacy code, mock objects are really useful for building unit tests.
Take a look at this question regarding Delphi and mocks: What is your favorite Delphi mocking library?
For .NET unit testing, read this: "The Art of Unit Testing: with Examples in .NET".
About best practices:
What you said is right: sometimes it's difficult to write unit tests because of the dependencies between classes...
So write unit tests just after, or just before ;-), implementing the classes. That way, if you have difficulty writing the tests, it may mean you have a design problem!

Is it possible to write extensions to Delphi's debugger?

I know there's an API for creating extensions to Delphi. I use the GExperts package and various JVCL experts frequently. But I've never seen any extensions to the debugger. It would be very nice, for example, to be able to register viewers for various objects instead of having to examine them in the Inspector. (A form with an image control that displays a TImage, for example, or a grid that displays the contents of a dataset.)
Are there any APIs that allow you to extend Delphi's debugger in this way?
EDIT: This wasn't available back when I wrote the question, but Delphi 2010 provides a way to do it.
In the ToolsAPI.pas source there are some API interfaces for debugging. With this API you can be notified when a debugging event occurs, get info about breakpoints, see which process is being debugged, and so on. But there seems to be no support for variables or their values, so there is no easy way to implement what you ask for without ugly hacks.
Basic debugger visualizers can be implemented with the Evaluation interfaces exposed by the OTA. (Examples of debugger visualizers can be found here and here.)
A deeper integration into the debugger is possible as well (for example, I wrote a little extension for C++Builder that enables the debugger to evaluate the actual objects behind an interface) - but as Khan pointed out, to achieve such a level of integration, you'll need to resort to quite a few dirty hacks.
