How to implement a Groovy global AST transformation in a Grails plugin?

I'd like to modify some of my Grails domain classes at compilation time. I initially thought this was a job for Groovy's global ASTTransformation since I don't want to annotate my domain classes (which local transformers require). What's the best way to do this?
I also tried mimicking DefaultGrailsDomainClassInjector.java by creating my own class in the same package, implementing the same interfaces, but I probably just didn't know how to package it up in the right place because I never saw my methods get invoked.
On the other hand, I was able to manually create a JAR which contained a compiled AST transformation class, along with the META-INF/services artifacts that plain Groovy global transformations require. I threw that JAR into my project's "lib" dir and visit() was successfully invoked. Obviously this was a sloppy job: I am hoping to keep the source code of my AST transformation in a Grails plugin and not require a separate JAR artifact if I don't have to. Also, I couldn't get this approach to work with the JAR in my Grails plugin's "lib"; I had to put it into the Grails app's "lib" instead.
This post helped a bit too: Grails 2.1.1 - How to develop a plugin with an AstTransformer?

The thing about global transforms is that the transform code must be available when compilation starts. Having the transformer in a jar was what I did first, but as you said, it is a sloppy job.
What you want is for your AST-transforming class to be compiled before the other classes reach the compilation phase. Here is what you do.
Preparing the transformer
Create a directory called precompiled in the src folder, and add the transformation class and the classes it uses (such as annotations) to this directory, with the correct package structure.
Then create a file called org.codehaus.groovy.transform.ASTTransformation in precompiled/META-INF/services, so that you have the following structure:
precompiled
--amanu
----LoggingASTTransformation.groovy
--META-INF
----services
------org.codehaus.groovy.transform.ASTTransformation
Then write the fully qualified name of the transformer in the org.codehaus.groovy.transform.ASTTransformation file. For the example above, the fully qualified name would be amanu.LoggingASTTransformation.
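The services file itself contains just that single line:

    amanu.LoggingASTTransformation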
Implementation
package amanu

import org.codehaus.groovy.transform.GroovyASTTransformation
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.ast.ClassNode

@GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
class LoggingASTTransformation implements ASTTransformation {
    public void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
        println("*********************** VISIT ************")
        sourceUnit.getAST()?.getClasses()?.each { classNode ->
            // Each classNode is a class contained in the file being compiled.
            classNode.addProperty("field", ClassNode.ACC_PUBLIC, new ClassNode(Class.forName("java.lang.String")), null, null, null)
        }
    }
}
Compilation
After implementing this, you can go two ways. The first approach is to put it in a jar, like you did; the other is to use a Groovy script to compile it before the rest of the code. To do the latter in Grails, we use the _Events.groovy script.
You can do this from a plugin or from the main project, it doesn't matter. If it doesn't exist, create a file called _Events.groovy in the scripts directory and add the following content.
The code is copied from reinhard-seiler.blogspot.com with modifications
eventCompileStart = { target ->
    ...
    compileAST(pluginBasedir, classesDirPath)
    ...
}

def compileAST(def srcBaseDir, def destDir) {
    ant.sequential {
        echo "Precompiling AST Transformations ..."
        echo "src ${srcBaseDir} ${destDir}"
        path id: "grails.compile.classpath", compileClasspath
        def classpathId = "grails.compile.classpath"
        mkdir dir: destDir
        groovyc(destdir: destDir,
                srcDir: "$srcBaseDir/src/precompiled",
                classpathref: classpathId,
                stacktrace: "yes",
                encoding: "UTF-8")
        copy(toDir: "$destDir/META-INF") {
            fileset(dir: "$srcBaseDir/src/precompiled/META-INF")
        }
        echo "done precompiling AST Transformations"
    }
}
The previous script compiles the transformer before the rest of the sources, which makes the transformer available when your domain classes are compiled.
Don't forget
If the transformer uses any classes that are not already on the classpath, you will have to precompile those too. The above script compiles everything in the precompiled directory, so you can also put classes there that don't need AST transformation themselves but are needed by the transformer.
If you want to use domain classes in the transformation, you might want to do the precompilation in the eventCompileEnd block instead, but this will make things slower.
Update
@Douglas Mendes mentioned there is a simpler, more concise way to trigger the precompilation:
eventCompileStart = { target ->
    projectCompiler.srcDirectories.add(0, "./src/precompiled")
}

Related

Jenkins Shared Library - Importing classes from the /src folder in /vars

I am trying to write a Jenkins Shared Library for my CI process. I'd like to reference a class that is in the /src folder from a global function defined in the /vars folder, since that would allow me to put most of the logic in classes instead of in the global functions. I am following the repository structure documented in the official Jenkins documentation:
Jenkins Shared Library structure
Here's a simplified example of what I have:
/src/com/example/SrcClass.groovy
package com.example

class SrcClass {
    def aFunction() {
        return "Hello from src folder!"
    }
}
/vars/classFromVars.groovy
import com.example.SrcClass

def call(args) {
    def sc = new SrcClass()
    return sc.aFunction()
}
Jenkinsfile
@Library('<lib-name>') _
pipeline {
    ...
    post {
        always {
            classFromVars()
        }
    }
}
My goal was for the global classes in the /vars folder to act as a sort of public facade, and to use them in my Jenkinsfile as custom steps without having to instantiate a class in a script block (making them compatible with declarative pipelines). It all seems pretty straightforward to me, but I am getting this error when running the classFromVars file:
<root>\vars\classFromVars.groovy: 1: unable to resolve class com.example.SrcClass
@ line 1, column 1.
import com.example.SrcClass
^
1 error
I tried running the classFromVars script directly with the groovy CLI, locally and on the Jenkins server, and I got the same error in both environments. I also tried specifying the classpath when running the /vars script with the following command, getting the same error:
<root>>groovy -cp <root>\src\com\example vars\classFromVars.groovy
Is what I'm trying to achieve possible? Or should I simply put all of my logic in the /vars class and avoid using the /src folder?
I have found several repositories on GitHub that seem to indicate this is possible, for example this one: https://github.com/fabric8io/fabric8-pipeline-library, which uses the classes in the /src folder in many of the classes in the /vars folder.
As @Szymon Stepniak pointed out, the -cp parameter in my groovy command was incorrect. It now works locally and on the Jenkins server. I have yet to explain why it wasn't working on the Jenkins server before, though.
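For reference, the classpath entry has to point at the root of the package structure (<root>\src), not at the package folder itself, so the corrected invocation would presumably look like this:

    groovy -cp <root>\src vars\classFromVars.groovy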
I found that when I wanted to import a class from my shared library into a script in /vars, I needed to do it like this:

// Thanks to '_', the classes are imported automatically.
// MUST have the '@' at the beginning, otherwise it will not work.
// When "@BRANCH" is omitted, the default branch of the git repo is used.
@Library('my-shared-library@BRANCH') _

// Only by calling them can you tell whether they exist or not.
def exampleObject = new example.GlobalVars()
// Then call methods or attributes from the class.
exampleObject.runExample()

Substitutions and Java library lifecycle

I've got a Java project I'm converting to Bazel.
As is typical with Java projects, there are property files with placeholders that need to be resolved/substituted at build time.
Some of the values can be hardcoded in a BUILD or BZL file:
BUILD_PROPERTIES = { "pom.version": "1.0.0", "pom.group.id": "com.mygroup"}
Some of the variables are "stamps" (e.g. BUILD_TIMESTAMP, GIT_REVISION, etc.); the source for these variables is volatile-status.txt and stable-status.txt.
I must generate a POM for publishing, so I use @bazel_common//tools/maven:pom_file in BUILD
(assume that I need ALL the values described above for my pom template):
_local_build_properties = {}
_local_build_properties.update(BUILD_PROPERTIES)

# somehow add workspace status properties?
# add / override
_local_build_properties.update({
    "pom.project.name": "my-submodule",
    "pom.project.description": "My submodule description",
    "pom.artifact.id": "my-submodule",
})

# Variable placeholders in the pom template are wrapped with {}
_pom_substitutions = {"{" + k + "}": v for (k, v) in _local_build_properties.items()}

pom_file(
    name = "my_submodule_pom",
    targets = [
        "//my-submodule",
    ],
    template_file = "//:pom_template.xml",
    substitutions = _pom_substitutions,
)
So, my questions are:

1. How do I get key-value pairs from volatile-status.txt / stable-status.txt into the dictionary I need for pom_file.substitutions?
2. pom_file depends on java_library so that it can write its dependencies into the POM. How do I update the jar generated by java_library with the pom?
3. Once I have the pom and the updated jar containing the pom, how do I publish to a Maven repo?

When I look at existing code, for example rules_docker, it seems that the implementation always bails out to a local executable (shell | python | go) to do the real work of substitution, jar manipulation and image publication. Am I trying to do too much in BUILD and BZL files? Should I be thinking, "Ultimately, what do I need to pass to local shell/python/go scripts to get the real build work done?"
(Answered on bazel-discuss group)

Hi,

1. You can't get these values from Starlark. You need a genrule to read the stable/volatile files and do the substitutions using an external tool like 'sed'.
2. A file cannot be both an input and an output of an action, i.e. you can't update the .jar from which you generate the pom. The action has to produce a new .jar file.
3. I don't know -- how would you publish outside of Bazel, is there a tool to do so? Can you write a genrule / Starlark rule to wrap this tool?

Cheers, László
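To illustrate the first point, here is a minimal, hypothetical sketch of such a genrule. It assumes the pom template keeps a {BUILD_TIMESTAMP} placeholder and that your Bazel version supports the genrule stamp attribute, which makes bazel-out/volatile-status.txt and bazel-out/stable-status.txt readable by the action; the target names are taken from the question, everything else is illustrative:

genrule(
    name = "stamped_pom",
    srcs = [":my_submodule_pom"],  # the pom produced by pom_file above
    outs = ["pom-stamped.xml"],
    stamp = 1,  # exposes the workspace status files to this action
    cmd = """
        # Read BUILD_TIMESTAMP from the volatile status file (a default
        # workspace status key), then substitute it into the pom with sed.
        ts=$$(grep '^BUILD_TIMESTAMP ' bazel-out/volatile-status.txt | cut -d' ' -f2)
        sed "s/{BUILD_TIMESTAMP}/$$ts/g" $(location :my_submodule_pom) > $@
    """,
)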

How to get a field's type by using CDT parser

I'm trying to extract information from C++ source code, and one thing I need is a field's type.
Given source code like the one below, I want to extract the type of info at the point where info.call() is called:
Info info;
//skip
info.call(); //<- from here
By making a visitor which visits IASTName nodes, I tried to extract the type info like this:
public class CDTVisitor extends ASTVisitor {
    public CDTVisitor(boolean visitNodes) {
        super(true);
    }

    public int visit(IASTName node) {
        if (node.resolveBinding().getName().toString().equals("info"))
            System.out.println(((IField) node.getBinding()).getType());
            // This does not work properly; the result is
            // "org.eclipse.cdt.internal.core.dom.parser.ProblemType@86be70a"
        return 3;
    }
}
Assuming the code is in fact valid, a variable's type resolving to a ProblemType is an indication of a configuration problem in whatever tool or plugin is running this code, or in the project/workspace containing the code on which it is run.
In this case, the type of the variable info is Info, which is presumably a class or structure type, or a typedef. To resolve it correctly, CDT needs to be able to see the declaration of this type.
If this type is not declared in the same file that's being analyzed, but rather in a header file included by that file, CDT needs to use the project's index to find the declaration. That means:
The AST must be index-based. For example, if using ITranslationUnit.getAST to create the AST, the overload that takes an IIndex parameter must be used, and a non-null argument must be provided for it.
Since an IIndex is associated with a CDT project, the code being analyzed needs to be part of a CDT project, and the project needs to be indexed.
In order for the indexer to resolve #include directives correctly, the project's include paths need to be configured correctly, so that the indexer can actually find the right header files to parse.
Any one of these not being the case can lead to a type resolving to a ProblemType.
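As a minimal sketch of the first point (tu and cproject are assumed to already exist, and exception handling is omitted; the complete example further down does the same thing):

IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
index.acquireReadLock(); // the index must be read-locked while it is used
try {
    IASTTranslationUnit ast = tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
    // ... analyze the AST while holding the lock ...
} finally {
    index.releaseReadLock();
}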
Self response.
The reason I couldn't get a binding object was the type of the AST. When parsing C++ source code, I should have used ICPPASTTranslationUnit. It doesn't show in the code above, but I was using IASTTranslationUnit as the type of the AST. After using ICPPASTTranslationUnit instead of IASTTranslationUnit, the problem was solved.
Yes, I figured it out! Here is the entire code, which can index all files in the "src" folder of a C++ project and output the resolved type binding for all code expressions, including the return value of low-level APIs such as memcpy. Note that the project variable in the following code is created by programmatically importing an existing, manually configured C++ project. I often manually create an empty C++ project and programmatically import it as a general project (once imported, Eclipse will automatically detect the project type and complete the relevant configuration of the C++ project). This is much more convenient than creating and configuring a C++ project from scratch programmatically. When importing the project, you'd better not copy the project or containment structures into the workspace, because this may lead to infinitely copying the same project into subfolders (infinite folder depth).

The code works in Eclipse 2021-12. I downloaded Eclipse-For-cpp and installed the plugin-development and JDT plugins. Then I created an Eclipse plugin project and extended the "org.eclipse.core.runtime.applications" extension point. In other words, it is an Eclipse application plugin project which can use nearly all features of Eclipse but does not start Eclipse's graphical interface (UI). You should add all CDT-related non-UI plugins as dependencies, because new versions of Eclipse no longer add missing plugins automatically.
ICProject cproject = CoreModel.getDefault().getCModel().getCProject(project.getName());
// This code creates an index for the entire project.
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
IFolder folder = project.getFolder("src");
IResource[] rcs = folder.members();
// Iterate over all source files in the src folder and visit all
// expressions to print the resolved type bindings.
for (IResource rc : rcs) {
    if (rc instanceof IFile) {
        IFile f = (IFile) rc;
        ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(f);
        index.acquireReadLock(); // we need a read-lock on the index
        ICPPASTTranslationUnit ast = null;
        try {
            ast = (ICPPASTTranslationUnit) tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
        } finally {
            index.releaseReadLock();
        }
        if (ast != null) {
            // ASTVisitor(true) sets all shouldVisit* flags, so that
            // visit(IASTExpression) is actually invoked.
            ast.accept(new ASTVisitor(true) {
                @Override
                public int visit(IASTExpression expression) {
                    // Get the resolved type binding of the expression.
                    IType etp = expression.getExpressionType();
                    System.out.println("IASTExpression type:" + etp + "#expr_str:" + expression.toString());
                    return super.visit(expression);
                }
            });
        }
    }
}

dart pub build: exclude a file or directory

I am trying to exclude a list of files or directories when building a web application with Dart's pub build.
Using this, as suggested by the documentation:
transformers:
- simple_transformer:
    $exclude: "**/CVS"
does not work:
Error on line 10, column 3 of pubspec.yaml: "simple_transformer" is not a dependency.
- simple_transformer:
Is there a way to do it (using SDK 1.10.0)?
Sadly there is currently no support for marking files as ignored by pub build, as Günter already mentioned. The .gitignore feature was removed as it was undocumented and caused more trouble than it solved.
But you can exclude files from the build output. This means that the files are still processed (and still take time to process =/ ) but aren't present in the output directory. This is useful for generating a deployable copy of your application in one go.
In our application we use a simple ConsumeTransformer to mark assets as consumed so that they are not written to the output folder:
library consume_transformer;

import 'package:barback/barback.dart';

class ConsumeTransformer extends Transformer implements LazyTransformer {
  final List<RegExp> patterns = <RegExp>[];

  ConsumeTransformer.asPlugin(BarbackSettings settings) {
    if (settings.configuration['patterns'] != null) {
      for (var pattern in settings.configuration['patterns']) {
        patterns.add(new RegExp(pattern));
      }
    }
  }

  bool isPrimary(AssetId inputId) =>
      patterns.any((p) => p.hasMatch(inputId.path));

  void declareOutputs(DeclaringTransform transform) {}

  void apply(Transform transform) => transform.consumePrimary();
}
The consumer requires a list of regex patterns as an argument and consumes the matched files. You need to add the transformer to your pubspec.yaml file as the last transformer:

transformers:
- ... # Your other transformers
- packagename/consume_transformer:
    patterns: ["\\.psd$"]
The example configuration ignores all files that have the psd extension, but you can add patterns as you need them.
I created a pub package that contains the transformer, take a look here.
simple_transformer is the name of the transformer that you are telling to exclude the files. If you want to apply this to dart2js, you need to use the name $dart2js instead of simple_transformer.
For more details about configuring $dart2js see https://www.dartlang.org/tools/pub/dart2js-transformer.html
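For example, applying the question's exclusion to the built-in dart2js transformer would presumably look like this (a sketch based on the linked docs):

transformers:
- $dart2js:
    $exclude: "**/CVS"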

TypeScript bundling and minification?

Assume I have two files
AFile.ts
/// <reference path="ZFile.ts" />
new Z().Foo();
ZFile.ts
class Z
{
    Foo() { }
}
Is there a way to generate all scripts into a single js file, in the required order (ZFile before AFile, to get the definition of Z)?
In the post-build events I added a call to the TypeScript compiler
tsc "..\Content\Scripts\Start.ts" --out "..\Content\Scripts\all.js"
In the bundle configuration I added
bundles.Add(new ScriptBundle("~/scripts/all").Include("~/Content/Scripts/all.js"));
In the _Layout.cshtml file I added
@Scripts.Render("~/Scripts/all")
And with that I got
<script src="/Scripts/all?v=vsTcwLvB3b7F7Kv9GO8..."></script>
Which is all my script in a single file.
The compiler does not minify; you have to use bundles and compile in Release, or set
BundleTable.EnableOptimizations = true;
You can also minify using Web Essentials, or by grabbing the contents and minifying them somewhere else.
Now the VS TypeScript extension supports merging into one file.
Make sure that you have installed the extension via Tools -> Extensions and Updates (VS2015 has it by default).
Go to the project properties and check "Combine JavaScript output into file":
It is important to have /// <reference /> (as in the question); it helps tsc order the files by dependency before the merge.
Then for minification the bundle can be used as usual:
bundles.Add(new ScriptBundle("~/bundles/finale").Include("~/js/all.js"));
and in the view
@Scripts.Render("~/bundles/finale")
Use the --out parameter.
tsc AFile.ts ZFile.ts --out single.js
The TypeScript compiler will do the dependency navigation for you automatically.
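On current compilers the same thing is usually expressed in tsconfig.json through the outFile option (the successor of --out; note it only applies when modules are disabled or set to amd/system). A minimal sketch:

{
  "compilerOptions": {
    "outFile": "single.js"
  },
  "files": ["AFile.ts", "ZFile.ts"]
}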
Assuming all of your .ts files are directly or indirectly under a folder called, say, 'ts', you could write a .tt (T4 template) script which merges all of the .js files (but not the .min.js ones) into a file myApp.js, and all of your .min.js files into myApp.min.js.
To obtain the ordering of files you could process subfolders thus:
string[] FolderOrder =
{
    @"libs\utils\",
    @"libs\controls\",
    @"app\models",
    @"app\viewmodels",
    @".",
};
