Is there a way to get code coverage for the Fortran layer which is wrapped using f2py? Since f2py generates C wrappers, this may not be easily doable, because we'd be measuring the coverage of the wrappers instead. I googled this and couldn't find any relevant information.
I'm using LLVM Code Coverage to determine the code coverage of my iOS app's source code, and after that generate a report using Slather.
I was wondering which of the criteria listed in the Wikipedia article on code coverage it uses, but I have trouble finding this information.
In other words: what criteria does the LLVM Code Coverage Mapping Format (or Slather) use to determine the code coverage?
Thanks
LLVM coverage is at the finest level of the Wikipedia list, i.e. the condition level.
For example here: http://lab.llvm.org:8080/coverage/coverage-reports/clang/coverage/Users/buildslave/jenkins/sharedspace/clang-stage2-coverage-R#2/llvm/tools/clang/lib/CodeGen/BackendUtil.cpp.html#L659
You can mouse over each side of the condition on line 664 and see how many times each side was evaluated.
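For a concrete local illustration (my own toy example, not taken from that report), Clang's source-based coverage gives each operand of a logical operator its own counter, which is exactly what the mouse-over shows. The file name and the exact commands below are just one plausible setup using the standard Clang/LLVM coverage workflow:

    // cond.cpp -- hypothetical example; each operand of the && gets its own
    // coverage region, so the report can show how often each side ran.
    //
    // Build and view the report:
    //   clang++ -fprofile-instr-generate -fcoverage-mapping cond.cpp -o cond
    //   ./cond 3                                  # writes default.profraw
    //   llvm-profdata merge -sparse default.profraw -o cond.profdata
    //   llvm-cov show ./cond -instr-profile=cond.profdata
    #include <cstdlib>
    #include <iostream>

    bool inRange(int x) {
      // Two regions: "x > 0" is evaluated on every call, while "x < 10" is
      // only evaluated when the first operand is true (short-circuiting).
      return x > 0 && x < 10;
    }

    int main(int argc, char **argv) {
      int x = argc > 1 ? std::atoi(argv[1]) : -1;
      std::cout << (inRange(x) ? "in range" : "out of range") << "\n";
      return 0;
    }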
I am trying to utilize clang tooling library for the purpose of my future tool.
What I would like to do with this tool is:
1. parse all the source code (with includes) and detect any of my keywords in the comments (comments will be some kind of interface between the programmer and my tool, which will do various things with the rest of the source code according to commands placed in the comments).
2. according to commands from the source code, do some refactoring of it
The refactoring itself will be done using clang AST, like from example below:
http://eli.thegreenplace.net/2014/07/29/ast-matchers-and-clang-refactoring-tools
The thing I am looking for currently is how to parse the comments within the same run of the Clang tooling procedures. I do not want to add a separate parsing step just for the comments, because the source code already has to be parsed by the tooling library.
Do you know how to get the information about the comments contained in the source code I am parsing with the tooling library?
Try the option -Wdocumentation and its associated options (such as -fparse-all-comments). If you use tools such as clang-check or clang-tidy, add these options to the compile commands database.
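As a concrete starting point, here is a minimal libTooling sketch (my own, not from the Clang documentation) that registers a CommentHandler on the preprocessor, so every comment is seen during the same tooling run in which the AST is built. The keyword MYTOOL: and the class names are made up, and some API details (for example the CommonOptionsParser constructor) differ between Clang versions:

    // comment_scanner.cpp -- hypothetical sketch; link against the usual
    // clangTooling/clangFrontend/clangLex libraries for your Clang version.
    #include "clang/AST/ASTConsumer.h"
    #include "clang/Frontend/CompilerInstance.h"
    #include "clang/Frontend/FrontendActions.h"
    #include "clang/Lex/Lexer.h"
    #include "clang/Lex/Preprocessor.h"
    #include "clang/Tooling/CommonOptionsParser.h"
    #include "clang/Tooling/Tooling.h"
    #include "llvm/Support/CommandLine.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace clang;
    using namespace clang::tooling;

    // Called by the preprocessor for every comment it lexes.
    class KeywordCommentHandler : public CommentHandler {
    public:
      bool HandleComment(Preprocessor &PP, SourceRange Range) override {
        const SourceManager &SM = PP.getSourceManager();
        StringRef Text = Lexer::getSourceText(
            CharSourceRange::getCharRange(Range), SM, PP.getLangOpts());
        if (Text.contains("MYTOOL:")) // hypothetical command keyword
          llvm::outs() << Range.getBegin().printToString(SM) << ": " << Text
                       << "\n";
        return false; // do not suppress normal handling of the comment
      }
    };

    // Reuse the normal syntax-only action so the AST is still built in the
    // same run; we only hook the preprocessor before parsing starts.
    class CommentScanAction : public SyntaxOnlyAction {
    protected:
      std::unique_ptr<ASTConsumer> CreateASTConsumer(CompilerInstance &CI,
                                                     StringRef File) override {
        CI.getPreprocessor().addCommentHandler(&Handler);
        return SyntaxOnlyAction::CreateASTConsumer(CI, File);
      }

    private:
      KeywordCommentHandler Handler;
    };

    static llvm::cl::OptionCategory ToolCategory("comment-scanner options");

    int main(int argc, const char **argv) {
      CommonOptionsParser Options(argc, argv, ToolCategory);
      ClangTool Tool(Options.getCompilations(), Options.getSourcePathList());
      return Tool.run(newFrontendActionFactory<CommentScanAction>().get());
    }

From the same FrontendAction you can also attach your AST matchers, so keyword detection and the later refactoring can happen in a single pass.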
The Delphi Linker strips out any functions that aren't actually used, thus reducing the executable size.
Is there any way to stop the Delphi Linker doing this? e.g. a compiler switch?
To those wondering "why?"...
I am trying to use the delphi-code-coverage tool, but it only reports on code that is actually compiled into the executable. Which makes it not very useful. If I could get Delphi to include all code, I'm hoping I could then get some useful code coverage statistics.
I should mention that I have DUnit tests in a separate project to my application. So even though the code is "unused" in the DUnit project, it is used in the actual application.
See here for more details.
Your code-coverage tool is measuring the wrong thing. It works off the map file instead of the source code, so it will only report on live code instead of on all code in a project. The linker already filters out the dead code, and in a blank unit-test project, all code is dead code. There is no way to tell Delphi to include dead code in an EXE.
Run the code-coverage tool on your application to get a list of functions that need testing. Then, write code in your unit-test project that mentions all those functions. (It doesn't have to call everything yet, and it certainly doesn't have to test it all. We're just making sure it's linked into the unit-test project.) Now the coverage tool can get an accurate measurement of what's been executed and what hasn't.
Is there any way to use the llvm-clang parser in an incremental/online manner?
Say I'm writing an editor and I want to be able to parse the C++ code I have in front of me.
I don't want to write my own hacked up parser.
I'd like to use something full featured, like llvm-clang.
Is there an easy way to hijack the llvm-clang parser? (And is it fast enough to run it continuously in the background)?
Thanks!
I don't think clang can incrementally parse C++ files, but it's one of the project's goals: http://clang.llvm.org/features.html
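If "incremental" can be relaxed to "fast re-parsing", the stable libclang C API is how editors usually drive Clang in the background: clang_parseTranslationUnit builds a translation unit once (caching a precompiled preamble), and clang_reparseTranslationUnit re-parses the edited buffer much faster afterwards. A minimal sketch, with the file name, flags and buffer contents made up:

    // editor_reparse.cpp -- hypothetical sketch using libclang (link with -lclang).
    // This is not true incremental parsing: every reparse still re-lexes the
    // file, but the cached preamble makes it fast enough for background use.
    #include <clang-c/Index.h>
    #include <cstdio>
    #include <string>

    int main() {
      const char *args[] = {"-std=c++14"}; // example compiler flags
      CXIndex index = clang_createIndex(/*excludeDeclsFromPCH=*/0,
                                        /*displayDiagnostics=*/1);

      // The unsaved buffer stands in for the editor's current (unsaved) text
      // and overrides whatever example.cpp contains on disk.
      std::string buffer = "int answer() { return 41; }\n";
      CXUnsavedFile unsaved;
      unsaved.Filename = "example.cpp";
      unsaved.Contents = buffer.c_str();
      unsaved.Length = buffer.size();

      // Initial parse: expensive, but the preamble (headers) gets cached.
      CXTranslationUnit tu = clang_parseTranslationUnit(
          index, "example.cpp", args, 1, &unsaved, 1,
          CXTranslationUnit_PrecompiledPreamble |
              CXTranslationUnit_CacheCompletionResults);
      if (!tu) {
        std::fprintf(stderr, "initial parse failed\n");
        return 1;
      }

      // The user edits the buffer; re-parse it in the background.
      // clang_reparseTranslationUnit returns 0 on success.
      buffer = "int answer() { return 42; }\n";
      unsaved.Contents = buffer.c_str();
      unsaved.Length = buffer.size();
      if (clang_reparseTranslationUnit(tu, 1, &unsaved,
                                       clang_defaultReparseOptions(tu)) != 0)
        std::fprintf(stderr, "reparse failed\n");

      std::printf("%u diagnostics after reparse\n", clang_getNumDiagnostics(tu));

      clang_disposeTranslationUnit(tu);
      clang_disposeIndex(index);
      return 0;
    }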
I've written something similar for my final-year project. It wasn't a C++ editor but a Visual Studio plugin whose main task was improving C++ IntelliSense (like Visual Assist X).
While writing this project I was also thinking about a C++ incremental parser, but I didn't find any suitable solution. To solve the C++ IntelliSense problem I used the normal C++ parser from GCC. However, it was too slow to parse a file after each code-completion request (Ctrl+Space); just try including boost::spirit. To make the project work properly, I parsed files in the background, and after each code-completion request I compared the current file with its previous version (via diff) to detect the changes made since the last parse. Having those changes, I updated the syntax tree, mostly by adding or removing variables.
Apart from incremental parsing, there is another problem with projects like this: mostly you'll be parsing C++ code that is being edited, so it's invalid code. Given the complex C++ grammar, sometimes the parser won't be able to recover from syntax errors, so it won't correctly detect some symbols in the code.
Another issue is differences between C++ parsers/compilers. Let's say I'm working in Visual Studio and have used some VC++-specific construct in my code; the Clang parser won't be able to parse it correctly.
For writing something similar to IntelliSense, I would advise you to write your own parser using the LALR parsing algorithm, since you can save its state at each line and therefore don't have to reparse the whole file when it has been edited, which is very fast!
Note that C++ can't be fully expressed in BNF, but I think you could get pretty far with some adjustments. It's of course a lot more work than using Clang's frontend, but you could still use Clang for analysing header files in cooperation with your own parser.
Is there any way to measure code coverage with DUnit? Or are there any free tools accomplishing that? What do you use for that? What code coverage do you usually go for?
Jim McKeeth: Thanks for the detailed answer. I am talking about unit testing in the sense of a TDD approach, not only about unit tests written after a failure occurred. I'm interested in the code coverage I can achieve with some basic prewritten unit tests.
I have just created a new open source project with a basic code coverage tool for Delphi 2010: https://sourceforge.net/projects/delphicodecoverage/
Right now it can measure line coverage but I'm planning to add class and method coverage too.
It generates HTML reports with a summary, as well as marked-up source showing which lines are covered (green), which are not (red), and which lines didn't have any code generated for them.
Update:
As of version 0.3 of Delphi Code Coverage you can generate XML reports compatible with the Hudson EMMA plugin to display code coverage trends within Hudson.
Update:
Version 0.5 brings bug fixes, increased configurability and cleaned up reports
Update:
Version 1.0 brings support for EMMA output, coverage of classes and methods, and coverage of DLLs and BPLs
I don't know of any free tools. AQTime is almost the de facto standard for profiling Delphi. I haven't used it, but a quick search found Discover for Delphi, which is now open source but only does code coverage.
Either of these tools should give you an idea of how much code coverage your unit tests are getting.
Are you referring to code coverage from unit tests or stale code? Generally I think only testable code that has a failure should be covered with a unit test (yes I realize that may be starting a holy war, but that is where I stand). So that would be a pretty low percentage.
Now stale code on the other hand is a different story. Stale code is code that doesn't get used. You most likely don't need a tool to tell you this for a lot of your code, just look for the little Blue Dots after you compile in Delphi. Anything without a blue dot is stale. Generally if code is not being used then it should be removed. So that would be 100% code coverage.
There are other scenarios for stale code, like special code to handle the case where the date lands on the 31st of February. The compiler doesn't know that can't happen, so it compiles the code in and gives it a blue dot. Now you can write a unit test for that, and test it, and it might work, but then you have just wasted your time a second time (first for writing the code, second for testing it).
There are tools to track which code paths get used when the program runs, but that is only semi-reliable, since not all code paths will get used every time. Like that special code you have to handle leap years: it will only run every four years. So if you take it out, then your program will be broken every four years.
I guess I didn't really answer your question about DUnit and code coverage, but I think I may have left you with more questions than you started with. What kind of code coverage are you looking for?
UPDATE: If you are taking a TDD approach then no code is written until you write a test for it, so by nature you have 100% test coverage. Of course, just because each method is exercised by a test does not mean that its entire range of behaviors is exercised. SmartInspect provides a really easy way to measure which methods are called, along with timing, etc. It is a little less than AQTime, but not free. With some more work on your part you can add instrumentation to measure every code path (branches of "if" statements, etc.). Of course you can also just add your own logging to your methods to produce a coverage report, and that is free (well, except for your time, which is probably worth more than the tools). If you use JEDI Debug then you can get a call stack too.
TDD really cannot easily be applied retroactively to existing code without a lot of refactoring, although the newer Delphi IDEs can generate unit test stubs for each public method, which then gives you 100% coverage of your public methods. What you put in those stubs determines how effective that coverage is.
I use Discover for Delphi and it does the job, for unit testing with DUnit and functional testing with TestComplete.
Discover can be configured to run from the command line for automation.
As in:
Discover.exe Project.dpr -s -c -m
Discover works great for me. It hardly slows down your application, unlike AQTime. This may not be a problem for you anyway, of course. I think the recent versions of AQTime perform better in this respect.
I've been using Discover for years; it worked excellently up to and including BDS2006 (which was the last pre-XE* version of Delphi I used and still use), but in its current open-sourced state it's unclear how to make it work with the XE* versions of Delphi. A shame really, because I loved this tool, fast and convenient in almost every way.
So now I'm moving to delphi-code-coverage...