Does the Dagger 2 annotation processor support the Eclipse incremental compiler?
I set up Dagger 2 with the sample app, and after a full compile (after cleaning the project) everything works fine. But after a small change (to a module or component) followed by only an incremental compiler run, nothing is updated (and errors are shown in the Eclipse log).
Is this normal, and if not, how can I fix it? Full compiler runs are expensive.
Thanks
Yes and no.
Dagger has been written to use only the standard annotation processing API provided as part of the JDK. There is nothing compiler-specific in its implementation. So, theoretically, Dagger should work under any compiler.
Unfortunately, in trying to run Dagger with Eclipse's implementation of that API, we have bumped up against a significant number of bugs. Anything based on ECJ (Eclipse's incremental compiler, Android's Jack toolchain, etc.) tends to crash in unexpected ways.
While projects like AutoValue exercise annotation processing in ways limited enough that workarounds can be built in for the functionality they require, that would be a significantly larger undertaking for Dagger.
So, if/when Eclipse can reliably support annotation processing, Dagger should work.
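For context, a minimal sketch of the kind of module/component pair whose edits fail to propagate under an incremental ECJ build (all names here are made up for illustration). DaggerGreetingComponent is generated by the Dagger annotation processor, so if the processor doesn't run correctly on an incremental build, changes to the module or component never reach the generated code:

```java
import dagger.Component;
import dagger.Module;
import dagger.Provides;

@Module
class GreetingModule {
    @Provides
    String provideGreeting() {
        return "hello";
    }
}

@Component(modules = GreetingModule.class)
interface GreetingComponent {
    String greeting();
}

class Main {
    public static void main(String[] args) {
        // DaggerGreetingComponent is produced by the annotation processor;
        // under a broken incremental build it stays stale or disappears.
        System.out.println(DaggerGreetingComponent.create().greeting());
    }
}
```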
Related
Having just switched to VS2019 I’m exploring whether to use code analysis. In the project properties, “code analysis” tab, there are numerous built-in Microsoft rule sets, and I can see the editor squiggles when my code violates one of these rules. I can customise these rule sets and “save as” to create my own.
I have also seen code analyser NuGet packages such as “Roslynator” and “StyleCop.Analyzers”. What’s the difference between these and the built-in MS rules? Is it really just down to more comprehensive sets of rules/more choice?
If I wanted to stick with the built-in MS rules, are there any limitations? E.g. will they still get run and be reported on during a TFS/Azure DevOps build?
What's the difference between legacy FxCop and FxCop analyzers?
Legacy FxCop runs post-build analysis on a compiled assembly. It runs as a separate executable called FxCopCmd.exe. FxCopCmd.exe loads the compiled assembly, runs code analysis, and then reports the results (or diagnostics).
FxCop analyzers are based on the .NET Compiler Platform ("Roslyn"). You install them as a NuGet package that's referenced by the project or solution. FxCop analyzers run source-code based analysis during compiler execution. FxCop analyzers are hosted within the compiler process, either csc.exe or vbc.exe, and run analysis when the project is built. Analyzer results are reported along with compiler results.
Note
You can also install FxCop analyzers as a Visual Studio extension. In this case, the analyzers execute as you type in the code editor, but they don't execute at build time. If you want to run FxCop analyzers as part of continuous integration (CI), install them as a NuGet package instead.
https://learn.microsoft.com/en-us/visualstudio/code-quality/fxcop-analyzers-faq?view=vs-2019
So, the built-in legacy FxCop and the NuGet analyzers only run at build time, while the extension-installed analyzers run live in the editor as you type (but not at build time). Also, you have to specifically enable legacy code analysis on build, whereas the NuGet analyzers run on build simply because they are installed. And analyzers installed as NuGet packages or extensions won't run when you go to the menu option "Run Code Analysis".
At least, that's what I get out of that page.
There's a link near the bottom of that page that takes you to what code analysis rules have moved over to the new analyzers, including rules that are now deprecated.
https://learn.microsoft.com/en-us/visualstudio/code-quality/fxcop-rule-port-status?view=vs-2019
The different analyzers attempt to cover different coding styles and things Microsoft didn't cover when they built FxCop. From the little research I just did on this, there's a whole rabbit hole to follow, Alice, that would take more time than I have right now to devote to it. And it seems to be filled with enough arcane knowledge and OCD-style code nitpicks to make Wonderland seem normal. But that's just my opinion.
There's lots of personal and professional opinion about the various rules in these and in the basic Microsoft rules, so there's plenty of room to use what you want and disable what you don't. For a beginner, I'd suggest turning on only a few rules at a time. That way you aren't inundated with more warnings and errors than you have lines of code. OK, that might be a bit of an exaggeration, but there are so many rules that really are nitpicks, especially on legacy code, that they aren't worth enabling, since you likely won't have time to fix them all. You will also want to do basic research and use "common sense" when you decide what to enable. ("Do I really need to worry about variable-capitalization coding-style consistency in an app that's been ported across 4 different languages over 15+ years and has 10k files?") This is both personal and professional opinion here, so follow it or not.
And don't forget the rules that contradict each other. Those are fun to deal with...
I was reading an article from Xamarin and came across a particular computer-science term: ahead-of-time (AOT) compilation.
According to some Google search results, AOT does not allow for code generation at run time.
Does that mean it does not support dynamic features?
I know this question may sound stupid and I have zero knowledge of iOS; hopefully I can get some answers here. Thanks.
First, what is the definition of dynamic? For the general public, dynamic code means the application can change its functionality at run time. On the iOS platform, binaries are signed to prevent malware, and Apple doesn't like apps that can load new functionality at run time.
An ahead-of-time (AOT) compiler has nothing to do with dynamic code per se. It has to do with intermediate languages that are normally just-in-time (JIT) compiled. The best-known example of an intermediate language is Java bytecode: compile once, run anywhere. When a Java application is executing, the compiled bytecode is JIT-compiled to native machine code. An AOT compiler simply does that translation ahead of time, to save time at startup.
For the iOS platform, Xcode compiles Objective-C to a native binary for the device.
Another way of looking at this is with an example...
In .NET, you can use the System.Reflection.Emit namespace to generate and compile code at runtime.
E.g. you could create an "IDE" with a textbox that accepts C#. When you click a button, that C# could be compiled by the .NET framework into a custom library that's loaded dynamically, or into a fully fledged executable that's launched as a new process.
This is insanely powerful when combined with the rest of the System.Reflection namespace. You can examine objects at runtime and compile custom code based on any criteria you like.
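The JVM has a rough equivalent of this idea in the javax.tools API. Here is a minimal sketch of compiling and loading runtime-generated code; the class name and its contents are invented for illustration, and it requires a full JDK rather than a bare JRE:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class RuntimeCompileDemo {
    public static void main(String[] args) throws Exception {
        // Emit a source file whose contents were produced at runtime.
        Path dir = Files.createTempDirectory("dyn");
        Path src = dir.resolve("Hello.java");
        Files.write(src,
                "public class Hello { public static String greet() { return \"hi\"; } }"
                        .getBytes(StandardCharsets.UTF_8));

        // Compile it in-process; getSystemJavaCompiler() returns null on a plain JRE.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler.run(null, null, null, src.toString()) != 0) {
            throw new IllegalStateException("compilation failed");
        }

        // Load the freshly compiled class and call into it reflectively.
        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() });
        Class<?> hello = loader.loadClass("Hello");
        System.out.println(hello.getMethod("greet").invoke(null));  // prints "hi"
    }
}
```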
That said... The problems usually outweigh the benefits in most cases. Mainly, it's a massive security issue, especially when running on a consumer device.
It would be possible to create an app with nothing close to malicious code in it, get it audited by Apple, then have the app download code from your web server, compile it, and execute it. This new code wouldn't be audited...
There really is no good reason to be doing this in a consumer app.
Are there any migration analysers available for MonoTouch?
I have seen one for Mono, but not for MonoTouch.
Short answer: No, there is none at the moment.
Long answer
The situation is a bit different from Mono. In general you test a complete, compiled .NET application (built against a specific version of the framework) with MoMA to get a report of which pieces are missing (or incomplete) in Mono that could affect the execution of your application on other platforms (e.g. OS X and Linux).
Testing a complete application this way against MonoTouch would report tons of issues, since the UI toolkit is totally different. E.g. anything related to System.Windows.Forms or WPF would always be missing.
However, if your assemblies are already split along (something like) an MVC design, it would be possible to test some of them (the non-UI parts) against definitions based on the MonoTouch base class library.
Finally, if someone has an immediate need (or is looking for a nice project), MoMA is available as open source, and the evaluation versions of MonoTouch contain all the assemblies needed to build the definition files. A bit of extra filtering could turn this into a very nice tool.
Alternative
You can see a list of the assemblies that are part of MonoTouch, along with some platform restrictions (compared to .NET) you should be aware of.
I have a graphics application written in C++ that uses the GL API, and I use an OpenGL compiler to compile it.
I am looking for ways to profile the application together with the compiler (I have access to the compiler's source too, which is also C++ and which I can change for my own use).
I am mainly interested in profiling the compilation itself: how much time is taken by the compiler's functions while it compiles the C++ application.
I have tried using timer APIs, but if there is a profiling tool like gprof for Android that I can use, it would make things much easier for me.
I read about Traceview, but I think it is mainly meant for Java applications.
Any suggestions would be very useful to me.
If you want to make your app run faster, and you have source code and can run it under a debugger, you can use this technique.
If you want the compiler to run faster, and you have source code and can run it under a debugger, you can use the same technique.
If you had gprof, you could use it, but you would be disappointed, for these reasons.
I have a few APIs of my own, with around 2000 classes overall. Some of them use the new Path API from JDK 7. Most other classes, however, do not rely on any new JDK APIs or new language features, so most classes could be used in a JDK 6 environment (which is what I plan to do). Let's assume I've annotated all JDK7-only classes with @Java7Only.
What I need now is a way to create a JDK6-only subset of all my projects more or less automatically, without introducing a new version branch or product line (that would be too complicated to maintain).
All projects are created using NetBeans, and thus use Ant. Many projects depend on others.
Please help me evaluate which of these ideas is most appropriate for my problem, and which problems could occur with each.
Common first step for all ideas
Let an annotation processor search for @Java7Only-annotated classes and store the list in a properties file.
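A minimal sketch of such a processor, assuming the annotation lives at com.example.Java7Only and the output file is named java7only.properties (both names are placeholders):

```java
import java.io.Writer;
import java.util.Properties;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;
import javax.tools.StandardLocation;

@SupportedAnnotationTypes("com.example.Java7Only")
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class Java7OnlyProcessor extends AbstractProcessor {
    private final Properties found = new Properties();

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            // Assumes @Java7Only targets types, so every element is a TypeElement.
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                found.setProperty(((TypeElement) e).getQualifiedName().toString(), "jdk7-only");
            }
        }
        if (roundEnv.processingOver()) {
            // Write the accumulated list next to the class output in the last round.
            try (Writer w = processingEnv.getFiler()
                    .createResource(StandardLocation.CLASS_OUTPUT, "", "java7only.properties")
                    .openWriter()) {
                found.store(w, "classes that require JDK 7");
            } catch (Exception ex) {
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.ERROR, "could not write list: " + ex);
            }
        }
        return false;  // don't claim the annotation; other processors may want it
    }
}
```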
Idea 1 (specific)
Write a tool which uses the properties file to recursively copy the whole project, except for the JDK7-only files.
Build the copied project with JDK 6 by invoking Ant, thus getting a JDK6-compliant jar.
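A rough sketch of such a copier, assuming the property-file name from the common first step, that sources sit directly under the project root, and that a JDK 6 Ant is on the PATH (all illustrative assumptions):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Properties;

public class Jdk6ProjectCopier {
    public static void main(String[] args) throws Exception {
        // List of @Java7Only classes written by the annotation processor.
        final Properties jdk7Only = new Properties();
        jdk7Only.load(new FileInputStream("java7only.properties"));

        final Path src = Paths.get("project");
        final Path dst = Paths.get("project-jdk6");

        Files.walkFileTree(src, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes a) throws IOException {
                Files.createDirectories(dst.resolve(src.relativize(dir)));
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes a) throws IOException {
                // Derive the qualified name from the path, e.g. com/x/Foo.java -> com.x.Foo.
                String fqcn = src.relativize(file).toString()
                        .replaceAll("\\.java$", "")
                        .replace(File.separatorChar, '.');
                boolean excluded = file.toString().endsWith(".java") && jdk7Only.containsKey(fqcn);
                if (!excluded) {
                    Files.copy(file, dst.resolve(src.relativize(file)),
                            StandardCopyOption.REPLACE_EXISTING);
                }
                return FileVisitResult.CONTINUE;
            }
        });

        // Build the pruned copy with a JDK 6 Ant/javac installation.
        new ProcessBuilder("ant", "-f", dst.resolve("build.xml").toString(), "jar")
                .inheritIO().start().waitFor();
    }
}
```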
Idea 2 (specific)
Write a second annotation processor which uses the properties file to pass everything except the JDK7-only files to a JavaCompiler instance.
Either build a jar using the Java APIs, or use the Ant API for that.
(This would be a pure-Java idea, but probably too complicated.)
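A hedged sketch of Idea 2's core step, shown as a standalone tool rather than inside the second processor for clarity. The file and directory names are assumptions carried over from the common first step, and note that current JDKs no longer accept -source 1.6 (this mirrors the JDK 7-era setup from the question):

```java
import java.io.File;
import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.stream.Stream;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class Jdk6SubsetCompiler {
    public static void main(String[] args) throws Exception {
        // List of @Java7Only classes written by the annotation processor.
        Properties jdk7Only = new Properties();
        jdk7Only.load(new FileInputStream("java7only.properties"));

        // Collect every source file that is not on the JDK7-only list.
        Path srcRoot = Paths.get("src");
        List<File> sources = new ArrayList<>();
        try (Stream<Path> walk = Files.walk(srcRoot)) {
            walk.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
                // e.g. src/com/x/Foo.java -> com.x.Foo
                String fqcn = srcRoot.relativize(p).toString()
                        .replaceAll("\\.java$", "")
                        .replace(File.separatorChar, '.');
                if (!jdk7Only.containsKey(fqcn)) {
                    sources.add(p.toFile());
                }
            });
        }

        // Hand the remaining sources to the in-process compiler, targeting 1.6.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null);
        boolean ok = compiler.getTask(null, fm, null,
                Arrays.asList("-source", "1.6", "-target", "1.6", "-d", "build-jdk6"),
                null, fm.getJavaFileObjectsFromFiles(sources)).call();
        System.out.println(ok ? "JDK 6 subset compiled" : "compilation failed");
    }
}
```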
Idea X (abstract)
Somehow influence the Ant build process (by overriding some targets?) and, for each JDK6-compliant class, let Ant compile two versions of it (once with the JDK 6 compiler, once with the JDK 7 compiler).
(JDK7-only classes would be compiled only once, using the JDK 7 compiler, of course.)
Package each batch into a separate jar.
Possible problems common to all ideas
Some projects depend on others, so some actions (such as packaging) need to take this into account.
Remember: the JDK 7 compiler (by default) generates class files that JDK 6 cannot load, which is why every possible idea has to work at the source level (before or during the build process, not afterwards).
My thoughts on Idea 2:
Essentially this is invoking a compiler within a compiler, since annotation processors run as part of compilation. Can this be done safely? Is there any static state in Sun's javac that would cause problems? (I don't know the answer, but from memory there might be some static state that could cause trouble in this scenario.)
Idea 1 seems simpler and better to me.
But taking a step back: is it possible to separate out all the JDK7-specific stuff into a separate module and compile it separately, into a different JAR?
Have the 'main' project compiled using JDK 6 (which JDK 7 would have no problem reading, because it is backwards compatible).
The JDK7-specific module(s), with source in a different directory and the 'main' JAR on the compilation classpath, could be built separately, with a different build.xml if necessary.
This only partially applies, but I thought I'd mention it anyway.
The problem with just using the -source 1.6 -target 1.6 options for validation is that you can still use Java 7 APIs when compiling with JDK 7.
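A small demonstration of that pitfall: the following compiles cleanly under JDK 7 with -source 1.6 -target 1.6, because those flags restrict language features rather than which library classes you can reference, yet on a real JDK 6 it fails at runtime (the class name here is made up):

```java
// Compiles with: javac -source 1.6 -target 1.6 SneakyJava7Api.java (on JDK 7)
// Fails on JDK 6 at runtime with NoClassDefFoundError for java/nio/file/Paths.
import java.nio.file.Path;   // java.nio.file is new in JDK 7
import java.nio.file.Paths;

public class SneakyJava7Api {
    public static void main(String[] args) {
        // No Java 7 *language* features are used, so -source 1.6 is satisfied.
        Path p = Paths.get("/tmp");
        System.out.println(p.toAbsolutePath());
    }
}
```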
I've used the Animal Sniffer Maven Plugin on a few projects now, and it has proved quite useful. This plugin scans the byte-code of your classes for JDK API usage. That is, you can tell it to fail the build if you attempt to use a JDK 7 API while targeting JDK 6. This won't help much with separating out the classes as you need, but it could be useful as a final validation step, combined with the -source 1.6 -target 1.6 compiler options.
There is also an Animal Sniffer Ant plugin, as mentioned on the Animal Sniffer main page.