Does clang-tidy make clang-check redundant?

Both of these tools seem to share some common goals and while the documentation of clang-tidy is quite explicit about its capabilities, clang-check's is a bit sparse.
It would be nice if I could run only one of these tools while having the same checks in place. Obviously, clang-tidy has some features which are absent in clang-check, so the question is:
Is there a combination of checks for clang-tidy that includes all of the features of clang-check -analyze?

After looking at the sources of both tools: clang-check -analyze instantiates an AnalysisASTConsumer from the StaticAnalyzer library.
clang-tidy does the same when analyzer options are supplied.
So everything seems to indicate that clang-tidy -checks='clang-analyzer-*' is equivalent to clang-check -analyze.
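Concretely, the following two invocations should run the same static-analyzer checks. This is a sketch: foo.cpp is a placeholder, and the leading -* entry disables clang-tidy's default checks so that only the analyzer ones run:

clang-check -analyze foo.cpp --
clang-tidy -checks='-*,clang-analyzer-*' foo.cpp --

The trailing -- tells both tools to use the given compiler arguments instead of looking for a compilation database.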

Are tuple modules an officially documented feature of the language?

EDIT: Steve Vinoski kindly provided the official name for these in the comments: tuple modules.
My original question remains though: are tuple modules officially documented by the OTP team? And are they expected to remain supported in the future?
Original question:
Consider the following Erlang module:
-module(foo).
-compile(export_all).
new(Bar) -> {foo, Bar}.
get({foo, Bar}) -> Bar.
I was quite amazed to see that it allows the following (using Erlang 19.1):
2> Foo = foo:new(bar).
{foo,bar}
3> Foo:get().
bar
which differs quite strongly from the usual way of calling a module's function.
As far as I can tell, it seems to be a remnant of parameterized modules, which were removed in R16; and I can't find anything in the official documentation stating that this is a supported, stable feature of the language.
My question is: is this a documented feature of the language? And if yes, where?
As far as I know this is an undocumented remnant of parameterized modules and exists to prevent legacy code from breaking. I imagine it is intended chiefly to prevent Mochiweb from breaking, as I can't think of any other serious libraries that make use of parameterized modules.
I can't locate any documentation on it and it doesn't seem to be a subject of current consideration. There was an announcement I cannot locate (I found references to it, but no links) claiming this would be documented, but that was quite a while ago.
The release readme for R16B where parameterized modules were removed mentions this:
OTP-10616
The experimental feature "parameterized modules" (also
called "abstract modules") has been removed. For applications that
depends on parameterized modules, there is a parse transform
that can be used to still use parameterized modules.
The parse transform can be found at: github.com/erlang/pmod_transform
That issue number does not appear in OTP's issue tracker anymore, and I can't even find an occurrence of "parameterized module" or "tuple module" anywhere in OTP's Jira instance. So I'm assuming this is an undocumented legacy crutch and nothing more.

Parsing comments with clang

I am trying to use the clang tooling library as the basis for a future tool of mine.
What I would like to do with this tool is:
1. Parse all the source code (including headers) and detect any of my keywords in the comments. The comments will be a kind of interface between the programmer and my tool, which will do various things with the rest of the source code according to commands placed in the comments.
2. According to the commands from the source code, do some refactoring of it.
The refactoring itself will be done using the clang AST, as in the example below:
http://eli.thegreenplace.net/2014/07/29/ast-matchers-and-clang-refactoring-tools
What I am currently looking for is how to parse the comments within the same run of the clang tooling procedures. I do not want to add a separate step just for parsing the source code, because that work already has to be done in the tooling library.
Does anyone know how to get the information about the comments in the source code I am parsing with the tooling library?
Try -Wdocumentation and its associated options (such as -fparse-all-comments). If you use tools such as clang-check or clang-tidy, add these options to the compilation database.
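Beyond the warnings, the comments themselves are reachable from the AST context. Below is a minimal sketch of a standalone tool that prints every comment containing a keyword. Caveats: the MYTOOL: keyword is made up, and both RawCommentList::getComments() and the CommonOptionsParser constructor used here have changed across clang releases, so adjust for your version. The key points are that ASTContext::getRawCommentList() holds the comments collected during parsing, and -fparse-all-comments makes clang keep ordinary // and /* */ comments rather than only Doxygen-style ones.

#include "clang/AST/ASTConsumer.h"
#include "clang/AST/ASTContext.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/FrontendAction.h"
#include "clang/Tooling/ArgumentsAdjusters.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

using namespace clang;

class CommentConsumer : public ASTConsumer {
public:
  void HandleTranslationUnit(ASTContext &Ctx) override {
    SourceManager &SM = Ctx.getSourceManager();
    // Comments collected while parsing this translation unit.
    for (RawComment *C : Ctx.getRawCommentList().getComments()) {
      StringRef Text = C->getRawText(SM);
      if (Text.find("MYTOOL:") != StringRef::npos) // hypothetical keyword
        llvm::outs() << C->getSourceRange().getBegin().printToString(SM)
                     << ": " << Text << "\n";
    }
  }
};

class CommentAction : public ASTFrontendAction {
  std::unique_ptr<ASTConsumer>
  CreateASTConsumer(CompilerInstance &, StringRef) override {
    return std::make_unique<CommentConsumer>();
  }
};

static llvm::cl::OptionCategory ToolCategory("comment-tool");

int main(int argc, const char **argv) {
  tooling::CommonOptionsParser Options(argc, argv, ToolCategory);
  tooling::ClangTool Tool(Options.getCompilations(),
                          Options.getSourcePathList());
  // Keep all comments, not only documentation comments.
  Tool.appendArgumentsAdjuster(
      tooling::getInsertArgumentAdjuster("-fparse-all-comments"));
  return Tool.run(tooling::newFrontendActionFactory<CommentAction>().get());
}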

Clang dynamic memory analyzer not referencing back to source code on Red Hat 6.3

We recently built the 3.3 release of clang/llvm, using the Fedora 20 packaging process as a guide to unpacking, moving the different parts to the correct locations, and building the compiler tool chain. All seems to be working correctly, except that the dynamic memory analyzer (MemorySanitizer) is not referencing back to the source code. The same usage on the Fedora platform does reference back to the source code.
This is our first attempt at using the clang/llvm tool set. It is also my first question in this forum, which seems a bit different in its organization from the others I have participated in, so my apologies in advance if I have not figured out the nuances of posting a question here. It does seem odd that the main projects do not appear to have a way of asking questions.
We found a solution, though we do not quite know why we needed the extra environment setup. Compiling as follows:
PATH=/net/fas4045/home3/jq031c/llvm_sandbox/bin:$PATH make -j 16 \
    DEPFILES= CXX=clang++ CC=clang \
    CXXFLAGS="-fsanitize=memory -fsanitize-memory-track-origins -fno-omit-frame-pointer" \
    LDXFLAGS=-fsanitize=memory
Running as follows:
MSAN_SYMBOLIZER_PATH=/net/fas4045/home3/jq031c/llvm_sandbox/bin/llvm-symbolizer ./runtests.sh
We can understand needing to add the sanitizer option to the link flags, since we do a two-step build of compile followed by link. The discovery, after much searching, was the need to define the path to llvm-symbolizer with an environment variable, which none of the other dynamic analysis options seem to need.
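For anyone reproducing this, a minimal test case is handy for checking that symbolization works (the file name msan_demo.cpp is made up). MemorySanitizer should flag the uninitialized read below, and with -g in the compile flags and MSAN_SYMBOLIZER_PATH set as above, the report shows the file and line rather than bare addresses:

#include <cstdio>

int main() {
  int x[4];      // deliberately left uninitialized
  if (x[0] > 0)  // MSan reports a branch on an uninitialized value here
    std::printf("positive\n");
  return 0;
}

clang++ -g -fsanitize=memory -fsanitize-memory-track-origins -fno-omit-frame-pointer msan_demo.cpp
MSAN_SYMBOLIZER_PATH=/net/fas4045/home3/jq031c/llvm_sandbox/bin/llvm-symbolizer ./a.out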

OCRopus documentation?

Is there documentation for OCRopus?
I am looking for an explanation of functions like:
make_SegmentPageByRAST():
    segment()
RegionExtractor():
    setPageLines()
    extract()
Thank you.
A request for Lua API documentation for OCRopus has been filed in the project's bug tracker. They are expected to release this documentation in the next beta.
First, note that you can use the command line tools without actual Lua programming.
A good place to see how to use ocroscript is to look at the test cases in
ocroscript/tests and the command line driver scripts in ocroscript/scripts.
Note: The Lua bindings follow the C++ API very closely (the binding is mostly
automatic), so C++ and Lua documentation are pretty much the same problem.

How to enforce Delphi Coding Standards

I want to enforce coding standards for our Delphi codebase.
A few colleagues have suggested Code Healer and Pascal Analyzer. I've had a look at these tools and they aren't suitable.
I was hoping to be able to do the same thing that CheckStyle for Java or StyleCop for C# can do.
Some newer editions of Delphi offer Audits and Metrics in the Model view, which can also be configured to set allowed limits. As far as I know they do not run from the command line for build integration, so I found them not very helpful.
I know the highly customizable Java (and .NET) tools like PMD, FindBugs and CheckStyle, which generate XML or HTML statistics reports and also integrate very well with build tools (Ant, Maven, Hudson) - but for Delphi, nothing comparable has crossed my path so far.
It seems those 2 are the most used. You can also try:
http://jedicodeformat.sourceforge.net/
The best one is Pascal Analyzer (PAL) by Peganza, which you said you tried and found unsuitable, but you did not say why. I will say a bit in its favor: it is commercial but inexpensive, and well worth it. They recently released version 5, and if version 5 doesn't do what you want, you should tell them what you want, because they have always answered my requests whenever I have mentioned a feature I wish the product would add.
We use it instead of the high-end SKUs of Delphi's built-in metrics because it costs less and does more than the built-in $3000 stuff. I think it costs about $160 US.
I am a happy customer. Here is a sample of some of the metric areas that I like:
Convention compliance: class names that don't start with T, exception types that don't start with E, class fields not in a private section, identifiers with goofy names, class visibility confusion or bad ordering, local identifiers that clash with unit outer-scope identifiers, inconsistent case, and many, many more.
The weakness is that the output is plain text in a "TMemo" control. Of course, I have found a lot of ways to take that output and write my own small sort/filter utilities to mine even more useful stuff from the reports. A powerful tool that you won't be able to live without once you try it.
I realize you said that you tried it already, but if it's not what you want, it's still the best lint-like tool for Pascal that currently exists.
If you're into writing your own style checking, you can write a .exe in Delphi that looks for bad things being committed, and call it from a pre-commit hook on your repository.
You can examine the differences of a check-in by using SVNLOOK.
For example, an excerpt from pre-commit.bat:
REM %1 is the repository path, %2 is the transaction name
SVNLOOK diff -t "%2" "%1" | MyCustomFilter.exe
IF %ERRORLEVEL% == 0 GOTO EOJ
REM A nonzero exit from the filter rejects the commit
EXIT 1
:EOJ
EXIT 0
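The filter itself only has to read the diff on standard input and exit nonzero when it sees a violation. The answer suggests writing it in Delphi; purely to illustrate the logic, here is a sketch in C++ with a made-up rule (rejecting added lines that contain a 'with' statement):

#include <iostream>
#include <string>

int main() {
  std::string line;
  while (std::getline(std::cin, line)) {
    // Hypothetical rule: flag added lines ('+' prefix in the diff)
    // that contain Pascal's 'with' statement.
    if (!line.empty() && line[0] == '+' &&
        line.find("with ") != std::string::npos) {
      std::cerr << "Style violation: " << line << "\n";
      return 1; // nonzero exit makes the hook reject the commit
    }
  }
  return 0;
}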
