From reading the LLVM source code in lib/Transforms/IPO/Inliner.cpp I found that the actual inliner pass is designed as a CGSCC pass, and that there is a ModuleInlinerWrapperPass which wraps the CGSCC pass to do per-module inlining.
Peeking inside PassBuilder.cpp, I found that the module-level inliner wrapper pass is typically run at the PGO-instrumentation stage (as part of addPGOInstrPasses) as well as at the LTO stage.
I was interested in the differences between the CGSCC pass and the module-level pass and in which one is scheduled earlier, so I added some LLVM_DEBUG statements that print from the initializer of the module-level pass. It seems that by default opt -O2 does not run the module-level inliner; instead, it runs the CGSCC pass quite early in the optimization pipeline.
My question is: When is the module-level inliner pass run in the optimization pipeline (if ever), and what is its relationship with the CGSCC inliner pass?
This question boils down to the difference between the new PassManager and old PassManager in LLVM.
Essentially, there are two ways to write a pass: either with the new PassManager (pass classes extending PassInfoMixin<...>) or with the legacy PassManager (pass classes extending ModulePass/CallGraphSCCPass/FunctionPass...).
The new PassManager uses the PassBuilder class to schedule passes into pipelines and then runs each pipeline in the order it was scheduled. The ModuleInlinerWrapperPass is essentially a module-level wrapper that lets the new PassManager schedule the CGSCC inliner inside its existing module-level optimization pipeline.
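As an illustration, here is a minimal sketch (assuming a recent LLVM; the exact headers vary by version, the runInliner wrapper function is my own, and M is an llvm::Module you have already loaded) of how the wrapper slots into a module pipeline built with the new PassManager:

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/InlineCost.h"
#include "llvm/Analysis/LoopAnalysisManager.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Transforms/IPO/Inliner.h"

void runInliner(llvm::Module &M) {
  // Standard new-PM boilerplate: one analysis manager per IR unit,
  // cross-registered so passes at different levels can query each other.
  llvm::LoopAnalysisManager LAM;
  llvm::FunctionAnalysisManager FAM;
  llvm::CGSCCAnalysisManager CGAM;
  llvm::ModuleAnalysisManager MAM;
  llvm::PassBuilder PB;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  llvm::ModulePassManager MPM;
  // The wrapper is a module pass; internally it runs the CGSCC InlinerPass
  // over the strongly connected components of the call graph.
  MPM.addPass(llvm::ModuleInlinerWrapperPass(llvm::getInlineParams()));
  MPM.run(M, MAM);
}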
I'd like to visualize method chains of our codebase (which method invokes which method) starting from a given method with jqassistant.
For normal method calls the following Cypher query works. workupNotification is the method I am starting from:
MATCH (begin:Method {name: "workupNotification"})-[:INVOKES*1..20]->(method:Method) WHERE NOT method:Constructor AND exists(method.name) RETURN begin, method
But many of the method calls in our software are calls to interfaces whose implementation is not known at the call site (SOA with dependency inversion).
serviceRegistry.getService(MyServiceInterface.class).serviceMethod();
How can I select the implementation of this method? (There are two classes implementing each interface: one is automatically generated (a proxy), the other is the one I'm interested in.)
You need to do what the JVM does for you at runtime: resolve virtual method invocations. There's a predefined jQAssistant concept that propagates INVOKES relations to implementing sub-classes: java:InvokesOverriddenMethod. You can either reference it as a required concept from one of your own rules or apply it from the command line, e.g. with Maven:
mvn jqassistant:analyze -Djqassistant.concepts=java:InvokesOverriddenMethod
The rule is documented in the manual, see http://buschmais.github.io/jqassistant/doc/1.6.0/#java:InvokesOverriddenMethod
(The name of the concept is not intuitive; it would be better to replace it with something like java:VirtualInvokes.)
That concept is now deprecated. As of version 1.9.0 you should use this command line instead:
mvn jqassistant:analyze -Djqassistant.concepts=java:VirtualInvokes
http://jqassistant.github.io/jqassistant/doc/1.8.0/#java:VirtualInvokes
In Nix, an overlay is a function with two arguments: self and super. According to the manual, self corresponds to the final package set (some call it the result of the fixed-point computation) and should only be used when dealing with dependencies, while super is the result of evaluating the previous stages of nixpkgs and should only be used to refer to packages you want to override or to access certain functions.
Sadly I don't really understand this. In what way does nixpkgs get updated by the overlays such that the two restrictions above apply?
These restrictions follow from the requirement that evaluation of an attribute should terminate.
Suppose you want to override the hello package. To reference the old definition of the package, you need to use super.hello, because that attribute can be evaluated without evaluating the hello definition in your overlay. If you instead referenced self.hello, then to evaluate the final hello attribute Nix would have to evaluate self.hello, which references the final hello attribute, which references self.hello, and so on, creating an infinite recursion.
self can actually be used to reference functions, but the convention seems to be to use super instead. The idea that the next overlay might monkey-patch the lib.head function is not very enticing, although with super the same can still be done by a previous overlay.
You may also want to check out this excellent NixCon 2017 presentation by Nicolas. He both introduces the concept and explains how you can use it in the best way.
I am trying to use Dart to tersely define entities in an application, following the idiom of code = configuration. Since I will be defining many entities, I'd like to keep the code as trim and concise and readable as possible.
In an effort to keep boilerplate as close to 0 lines as possible, I recently wrote some code like this:
// man.dart
part of entity_component_framework;
var _man = entity('man', (entityBuilder) {
entityBuilder.add([TopHat, CrookedTeeth]);
});
// test.dart
part of entity_component_framework;
var man = EntityBuilder.entities['man']; // null, since _man wasn't ever accessed.
The entity method associates the entityBuilder passed into the function with a name ('man' in this case). var _man exists because Dart only allows declarations at the top level, not arbitrary statements. This seems to be the most concise way possible to use Dart as a DSL.
One thing I wasn't counting on, though, is lazy initialization. If I never access _man -- and I had no intention to, since the entity function neatly stored all the relevant information I required in another data structure -- then the entity function is never run. This is a feature, not a bug.
So, what's the cleanest way of using Dart as a DSL given the lazy initialization restriction?
As you point out, it's a feature that Dart doesn't run any code until it's told to, so if you want something to happen, you need to do it in code that runs. Some possibilities:
Put your calls to entity() inside the main() function. I assume you don't want to do that, and probably that you want people to be able to add more of these in additional files without modifying the originals.
If you're willing to incur the overhead of mirrors, which is probably not that much if they're confined to this library, use them to find all the top-level variables in that library and access them. Or define them as functions or getters. But I assume that you like the property that variables are automatically one-shot. You'd want to use a MirrorsUsed annotation.
A variation on that would be to use annotations to mark the things you want to be initialized. Though this is similar in that you'd have to iterate over the annotated things, which I think would also require mirrors.
I am putting together a built-in script capability using the excellent Pascal DWScript. I have also added my own Delphi-side class definition (TDemo) to DWScript using:
dwsUnit.ExposeRTTI( TDemo.ClassInfo )
This just works and is a great way of quickly adding properties and methods.
I also wish to add an existing instance in a similar way, so I have created my instance FDemo of type TDemo and then performed:
dwsUnit.ExposeInstanceToUnit( 'Demo', 'TDemo', FDemo );
This looks like a promising routine to call, but I get an AV from an uninitialised unit table. I've also looked at the unit tests in the SVN source to see how this function is used, but to no avail. Can anyone point me at what I should add or change?
ExposeInstanceToUnit has to be used from within the TdwsUnit table initialization; see RTTIExposeTests/ExposeInstancesAfterInitTable for some sample code. It allows directly exposing dynamic instances.
The other approach is to use the Instances collection of a TdwsUnit component: you get design-time support and more control over your instances and their lifetime.
Also keep in mind that you have to make sure the instances you expose behave properly even if the script misbehaves, for instance when the user attempts to manually destroy an instance you exposed that shouldn't be destroyed. By default ExposeRTTI will map the destructors, so you may want to restrict that by specifying eoNoFreeOnCleanup.
Edit: a last approach, recently added, is to use the TdwsRttiConnector, which basically allows exposing and connecting to anything that's reachable through RTTI. That's very lightweight in terms of setup code, but the downside is that you don't get any form of compile-time checks.
When coding, I often come across the following pattern:
A method calls another method (fine), but the callee takes parameters, so the wrapping method has to accept those parameters and pass them along. The problem is that this dependency carrying can go on and on. How can I avoid this (any sample code appreciated)?
Thanks
Passing a parameter along just because a lower-layer component needs it is a sign of a Leaky Abstraction. It can often be more effective to refactor dependencies to aggregate services and hide each dependency behind an interface.
Cross-cutting concerns (which are often the most common reason to pass along parameters) are best addressed by Decorators.
If you use a DI Container with interception capabilities, you can take advantage of those to implement Decorators very efficiently (some people refer to this as a container's AOP capabilities).
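As a rough C++ sketch of the decorator idea (the OrderService and LoggingOrderService names are invented for illustration, not taken from any framework): the cross-cutting concern lives in a wrapper behind the same interface, so callers never have to pass it along as a parameter.

#include <iostream>
#include <memory>
#include <string>

// Hypothetical interface that callers depend on.
struct OrderService {
  virtual ~OrderService() = default;
  virtual void placeOrder(const std::string &item) = 0;
};

// Real implementation: contains only business logic, no logging.
struct DefaultOrderService : OrderService {
  void placeOrder(const std::string &item) override {
    // ... business logic ...
  }
};

// Decorator: adds the cross-cutting concern without changing any caller.
class LoggingOrderService : public OrderService {
public:
  explicit LoggingOrderService(std::unique_ptr<OrderService> inner)
      : inner_(std::move(inner)) {}
  void placeOrder(const std::string &item) override {
    std::cout << "placing order: " << item << "\n";  // the concern lives here
    inner_->placeOrder(item);
  }
private:
  std::unique_ptr<OrderService> inner_;
};

int main() {
  // Composition root: callers only ever see an OrderService.
  LoggingOrderService service(std::make_unique<DefaultOrderService>());
  service.placeOrder("book");
}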
You can use a dependency injection framework. One such is Guice: see http://code.google.com/p/google-guice/
Step 1: Instead of passing everything as separate arguments, group the arguments into a class, say X.
Step 2: Add getters to class X that expose the relevant information. The callee should use the getters instead of relying on individual parameters.
Step 3: Create an interface that class X implements, and put all the getters in the interface (in C++, as pure virtual methods).
Step 4: Make the called methods depend only on the interface (a minimal C++ sketch of these steps follows).
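The sketch below uses invented names: RenderContext plays the role of the parameter object X, and IRenderContext is the interface the callee depends on.

#include <iostream>
#include <string>

// Steps 3 and 4: the callee depends only on this interface (pure virtual getters).
struct IRenderContext {
  virtual ~IRenderContext() = default;
  virtual int width() const = 0;
  virtual int height() const = 0;
  virtual const std::string &title() const = 0;
};

// Steps 1 and 2: the parameter object groups the arguments and exposes getters.
class RenderContext : public IRenderContext {
public:
  RenderContext(int width, int height, std::string title)
      : width_(width), height_(height), title_(std::move(title)) {}
  int width() const override { return width_; }
  int height() const override { return height_; }
  const std::string &title() const override { return title_; }
private:
  int width_;
  int height_;
  std::string title_;
};

// The called method no longer changes when parameters are added or removed.
void render(const IRenderContext &ctx) {
  std::cout << ctx.title() << ": " << ctx.width() << "x" << ctx.height() << "\n";
}

int main() {
  RenderContext ctx(800, 600, "main window");
  render(ctx);
}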
Refactoring: Introduce Parameter Object
You have a group of parameters that naturally go together?
Replace them with an object.
http://www.refactoring.com/catalog/introduceParameterObject.html
The advantage of the parameter object is that the calls passing them around don't need to change if you add/remove parameters.
(given the context of your answers, I don't think that an IoC library or dependency injection patterns are really what you're after)
Since views cannot (easily) be unit tested, most developers choose to inject objects into them. Since views are not (normally) used to construct other views, that is where your DI chain ends. You may run into the issue (which I have hit every once in a while) where you need to construct objects in the correct order, especially when using a DI framework like Unity, where an attempt to resolve the object will deadlock. The main thing you need to worry about is circular dependencies. To understand how to avoid that, read the following article:
Can dependency injection prevent a circular dependency?