I want to traverse the AST of a C program in postorder. So far I've found RecursiveASTVisitor, which traverses the tree in preorder. I thought of overriding the Traverse* methods, but that seems quite complicated for such a relatively common task. Is there a simpler way to do it, or a class which I haven't found yet?
https://clang.llvm.org/doxygen/classclang_1_1RecursiveASTVisitor.html#details
By default, this visitor preorder traverses the AST. If postorder
traversal is needed, the shouldTraversePostOrder method needs to be
overridden to return true.
Making generic code would be more complicated and probably not as efficient.
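For example, here is a minimal sketch of such a visitor, assuming a recent Clang libTooling setup; the class names and the test input are made up for illustration:

// postorder_visitor.cpp -- minimal sketch, assuming Clang libTooling.
#include "clang/AST/ASTConsumer.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/FrontendAction.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

using namespace clang;

class PostOrderVisitor : public RecursiveASTVisitor<PostOrderVisitor> {
public:
  // Shadowing this method is the whole trick: it flips the traversal
  // from the default preorder to postorder. No Traverse* overrides needed.
  bool shouldTraversePostOrder() const { return true; }

  bool VisitStmt(Stmt *S) {
    // In postorder mode this fires after the node's children.
    llvm::outs() << S->getStmtClassName() << "\n";
    return true; // keep traversing
  }
};

class PostOrderConsumer : public ASTConsumer {
  void HandleTranslationUnit(ASTContext &Ctx) override {
    PostOrderVisitor V;
    V.TraverseDecl(Ctx.getTranslationUnitDecl());
  }
};

class PostOrderAction : public ASTFrontendAction {
  std::unique_ptr<ASTConsumer>
  CreateASTConsumer(CompilerInstance &, StringRef) override {
    return std::make_unique<PostOrderConsumer>();
  }
};

int main() {
  // Note: runToolOnCode takes a std::unique_ptr<FrontendAction> in
  // recent Clang releases; older releases took a raw pointer.
  tooling::runToolOnCode(std::make_unique<PostOrderAction>(),
                         "int f(int x) { return x * 2 + 1; }");
}

For the f above this should print the leaves (DeclRefExpr, IntegerLiterals) before the BinaryOperators that contain them, with the function body's CompoundStmt last.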
Related
I'd like to take clang AST, analyze how a certain variable is used and do some
source-to-source transformation if a specific usage pattern is recognized.
Particularly, I'm looking for patterns like this:
void *h;
h = create_handler(...);
use_handler(h);
destroy_handler(h);
So far, I am able to detect the ValueDecl corresponding to void *h. The next step would be to find all uses of h and see whether they are safe and whether create_handler/destroy_handler properly dominate/post-dominate one another.
Unfortunately, I have no idea how to iterate over h's uses; it seems there is no such interface in the ValueDecl class.
I'd appreciate it if you could either suggest how I could find all uses of a variable in the AST, or point me to some clang-based tool dealing with a similar problem.
Thank you!
Thank you!
One can match declRefExprs referencing the variable (using AST matchers). After that, ParentMap can be used to walk the AST upward and recursively find the AST nodes which use those declRefExprs. Keep in mind that a ParentMap is typically constructed not for the whole AST but for a subtree only (passed as a parameter into the constructor).
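A rough sketch of that approach, assuming for illustration that the variable is literally named "h"; a real tool would instead match against the VarDecl it has already located:

#include "clang/AST/ParentMap.h"
#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "llvm/Support/raw_ostream.h"

using namespace clang;
using namespace clang::ast_matchers;

// Bind every reference to "h" together with its enclosing function, so
// a ParentMap can be built for that function's body (a subtree) only.
StatementMatcher UseOfH =
    declRefExpr(to(varDecl(hasName("h"))),
                hasAncestor(functionDecl().bind("fn")))
        .bind("use");

class UseCollector : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    const auto *Use = Result.Nodes.getNodeAs<DeclRefExpr>("use");
    const auto *Fn = Result.Nodes.getNodeAs<FunctionDecl>("fn");
    if (!Use || !Fn || !Fn->hasBody())
      return;
    // ParentMap is built for the function body, not the whole AST.
    // Its API takes non-const nodes, hence the const_cast.
    ParentMap PM(Fn->getBody());
    for (Stmt *S = PM.getParent(const_cast<DeclRefExpr *>(Use)); S;
         S = PM.getParent(S))
      llvm::outs() << "use of h inside: " << S->getStmtClassName() << "\n";
  }
};

Register the callback with a MatchFinder (Finder.addMatcher(UseOfH, &Collector)) and run the finder from a frontend action as usual; each match then walks from the declRefExpr up to the enclosing statements, which is where the safety and dominance checks would hook in.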
I have a parse tree that I wish to convert to an abstract parse tree.
I found examples online, but they normally cover just simple addition.
I understand that I need to remove unnecessary information, but I don't know how to lay it out with regard to the repeat and until.
Is this the correct APT for the concrete parse tree?
There is no standard which defines a "correct" AST. ASTs are used instead of parse trees only to make life easier for the application which is going to process the tree.
In short, the decision is entirely yours, and you should make it on the basis of what you intend to do with the AST.
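As an illustration only (not "the" answer), a repeat ... until <cond> construct could collapse into a single node that keeps just the loop body and the exit condition, dropping the keywords and punctuation the concrete parse tree records. A sketch with hypothetical type names:

#include <memory>
#include <vector>

struct Expr { /* operators, literals, variables, ... */ };
struct Stmt { /* base for all statement nodes */ };

// The repeat/until keywords survive only as the node's type; only the
// semantically relevant children are stored.
struct RepeatUntil : Stmt {
  std::vector<std::unique_ptr<Stmt>> body; // statements between repeat and until
  std::unique_ptr<Expr> condition;         // loop exits once this becomes true
};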
Given an AST, what would be the reason behind making a Walker class that walks over the tree and does the output, as opposed to giving each Node class a compile() method and having it responsible for its own output?
Here are some examples:
Doctrine 2 (an ORM) uses a SQLWalker to walk over an AST and generate SQL from nodes.
Twig (a templating language) has the nodes output their own code (this is an if statement node).
Using a separate Walker for code generation avoids combinatorial explosion in the number of AST node classes as the number of target representations increases. When a Walker is responsible for code generation, you can retarget it to a different representation just by altering the Walker class. But when the AST nodes themselves are responsible for compilation, you need a different version of each node for each separate target representation.
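A compact sketch of that trade-off, with made-up node types and std::variant standing in for a full visitor hierarchy:

#include <memory>
#include <string>
#include <variant>

struct Node;
struct Num { int value; };
struct Add { std::shared_ptr<Node> lhs, rhs; };
struct Node : std::variant<Num, Add> {
  using std::variant<Num, Add>::variant;
};

// One "walker" (here a free function) per target representation.
// Adding a second target, say emitSql, means adding one more function;
// the node classes themselves never change.
std::string emitText(const Node &n) {
  if (const auto *num = std::get_if<Num>(&n))
    return std::to_string(num->value);
  const auto &add = std::get<Add>(n);
  return "(" + emitText(*add.lhs) + " + " + emitText(*add.rhs) + ")";
}

The Twig-style alternative would instead put an emitText method on Num and Add; with N node types and M target representations, that is N x M methods spread across the node classes, versus M walkers.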
Mostly because of old literature and available tools. Experimenting with both methods, you can easily find that AST traversal produces very slow and convoluted code. Moreover, code separated from the immediate syntax no longer resembles it. It's very much like maintaining two synchronized code bases, which is always a bad idea. Debugging and maintenance become difficult.
Of course, it can also be difficult to process semantics on the nodes unless you have a well-designed state machine. In fact, you are never worse off than when traversing the AST after the fact, because that is just one particular case of processing semantics on nodes.
You can often hear that AST traversal allows for the implementation of multiple semantics for the same syntax. In reality you would never want that, not only because it's rarely needed, but also for performance reasons. And frankly, there is no difficulty in writing a separate syntax for a different semantics; the results were always better when both were designed together.
And finally, in every non-trivial task, getting the syntax parsed is the easiest part; getting the semantics correct and processing actions fast is the challenge. Focusing on the AST is approaching the task backwards.
To have support for a feature that the "internal AST walker" doesn't have.
For example, there are several ways to traverse a "hierarchical" or "tree" structure, such as "walk through the leaves first" or "walk through the branches first".
Or, if the sibling nodes have a sort index, you may want to "walk" / "visit" them decrementally by their index, instead of incrementally.
If the AST class or structure you have only works with one method, you may want to support another method using your custom "walker" / "visitor", as sketched below.
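For instance, a hand-written walker is free to pick its own order. A sketch (with a hypothetical Tree type) that visits leaves first and siblings in decreasing index order:

#include <functional>
#include <vector>

struct Tree {                  // hypothetical n-ary tree node
  int id;
  std::vector<Tree> children;  // siblings ordered by index
};

// Leaves-first traversal that also walks siblings decrementally,
// from the highest index down to 0.
void walkCustom(const Tree &t,
                const std::function<void(const Tree &)> &visit) {
  for (auto it = t.children.rbegin(); it != t.children.rend(); ++it)
    walkCustom(*it, visit);
  visit(t); // the node itself comes after its whole subtree
}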
I'm learning F# (I'm new to functional programming in general, though I've used functional aspects of C# for years; let's face it, that's pretty different), and one of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway, especially considering that you need to do some extra work to make your function tail-recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot, so you use recursion. But then why is it fine for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
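To make that concrete, here is the shape of the transformation, sketched in C++ rather than F# to keep one language across these examples (note that C++ compilers perform this rewrite only as an optimization, whereas it is exactly the rewrite described above for the F# compiler):

#include <cstdint>

// Tail-recursive formulation: the recursive call is the very last
// thing the function does, so no state must survive it on the stack.
std::uint64_t factorial(std::uint64_t n, std::uint64_t acc = 1) {
  if (n <= 1)
    return acc;
  return factorial(n - 1, acc * n); // tail call
}

// The loop the compiler effectively produces: parameters become
// mutable locals, and the tail call becomes "update and jump back".
std::uint64_t factorialLoop(std::uint64_t n) {
  std::uint64_t acc = 1;
  while (n > 1) {
    acc *= n;
    --n;
  }
  return acc;
}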
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that - an implementation detail - on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent it.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
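For contrast, the fold-style alternative a functional programmer would usually reach for (a sketch in C++ to match the other examples here; std::accumulate stands in for something like F#'s List.fold):

#include <numeric>
#include <vector>

int sumAll(const std::vector<int> &xs) {
  // std::accumulate hides the loop and the mutable accumulator; the
  // caller only sees a value being folded out of the sequence, with
  // no programmer-visible counterpart to the mutable x above.
  return std::accumulate(xs.begin(), xs.end(), 0);
}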
As the other posters have said, some data structures, recursive data structures in particular, lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is a common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.
I'm kind of confused reading the definitions of the two. Can they actually intersect in terms of definition, or am I completely lost? Thanks.
Closures, as the word tends to be used, are just functions (or blocks of code, if you like) that you can treat like a piece of data and pass to other functions, etc. (the "closed" bit is that wherever you eventually call it, it behaves just as it would if you called it where it was originally defined). A monad is (roughly) more like a context in which functions can be chained together sequentially, and controls how data is passed from one function to the next.
They're quite different, although monads will often use closures to capture logic.
Personally I would try to get solid on the definition of closures (essentially a piece of logic which also captures its environment, i.e. local variables etc) before worrying about monads. They can come later :)
There are various questions about closures on Stack Overflow - the best one to help you will depend on what platform you're working on. For instance, there's:
What are closures in .NET?
Function pointers, closures and lambda
Personally I'm only just beginning to "grok" monads (thanks to the book I'm helping out on). One day I'll get round to writing an article about them, when I feel I understand them well enough :)
A "closure" is an object comprising 1) a function, and 2) the values of its free variables where it's constructed.
A "monad" is a class of functions that can be composed in a certain way, i.e. by using associated bind and return higher-order function operators, to produce other functions.
I think monads are a little more complicated than closures, because closures are just blocks of code that remember something from the point of their definition, while monads are a construct for "twisting" the usual function composition operation.
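To ground both definitions, a small sketch (in C++ for consistency with the other examples; the andThen helper is a made-up name mirroring Haskell's >>= and C++23's std::optional::and_then):

#include <iostream>
#include <optional>

// Closure: the lambda captures ("closes over") the local `step`,
// and keeps it alive wherever the function value travels.
auto makeAdder(int step) {
  return [step](int x) { return x + step; };
}

// A monad-ish bind over std::optional: it chains functions that may
// fail, threading the "maybe missing" context between them.
template <class T, class F>
auto andThen(const std::optional<T> &v, F f) -> decltype(f(*v)) {
  if (!v)
    return std::nullopt;
  return f(*v);
}

std::optional<int> half(int x) {
  if (x % 2 != 0)
    return std::nullopt; // the whole chain short-circuits here
  return x / 2;
}

int main() {
  auto addThree = makeAdder(3);     // a closure over step = 3
  std::cout << addThree(4) << "\n"; // 7

  // 20 -> 10 -> 5; one more `half` would yield an empty optional.
  auto r = andThen(andThen(std::optional<int>(20), half), half);
  if (r)
    std::cout << *r << "\n"; // 5
}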