Dart build_runner: generate one Dart file with content

I am working on a Dart package which includes over 200 models, and at the moment I have to manually write one "export" line for each model to make the models available to everyone who uses this package.
I want the build runner to generate one dart file which contains every export definition.
Therefore I would create an annotation "ExportModel". The builder should search for each class annotated with this annotation.
I tried creating some builders, but they generate a *.g.dart file for each annotated class. I just want to have one file.
Is there a way to create a builder that runs only once and creates a single file at the end?

The short answer to your question of a builder that only runs once and creates a single file in the package is to use r'$lib$' as the input extension. The long answer is that to find the classes that are annotated you probably want an intermediate output to track them.
I'd write this with 2 builders, one to search for the ExportModel annotation, and another to write the exports file. Here is a rough sketch with details omitted - I haven't tested any of the code here but it should get you started on the right path.
Builder 1 - find the classes annotated with @ExportModel().
You could write this with some utilities from package:source_gen, but you can't use LibraryBuilder since it isn't outputting Dart code...
The goal is to write a .exports file next to each .dart file which has the names of all the classes annotated with @ExportModel().
import 'package:build/build.dart';
import 'package:source_gen/source_gen.dart';

class ExportLocatingBuilder implements Builder {
  @override
  final buildExtensions = const {
    '.dart': ['.exports']
  };

  @override
  Future<void> build(BuildStep buildStep) async {
    final resolver = buildStep.resolver;
    if (!await resolver.isLibrary(buildStep.inputId)) return;
    final lib = LibraryReader(await buildStep.inputLibrary);
    final exportAnnotation = TypeChecker.fromRuntime(ExportModel);
    // Collect the names of all classes annotated with @ExportModel().
    final annotated = [
      for (var member in lib.annotatedWith(exportAnnotation))
        member.element.name,
    ];
    if (annotated.isNotEmpty) {
      await buildStep.writeAsString(
          buildStep.inputId.changeExtension('.exports'), annotated.join(','));
    }
  }
}
This builder should be build_to: cache, and you may want a PostProcessBuilder that cleans up all the outputs it produces, applied with applies_builders. You can use the FileDeletingBuilder to cheaply implement the cleanup; see the FAQ on temporary outputs and the Angular cleanup builder for an example. A sketch of such a cleanup builder follows.
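As a minimal sketch (the factory name exportsCleanup is a placeholder, and it still has to be wired up via applies_builders, as in the build.yaml sketch further below), the cleanup can be a one-liner:

import 'package:build/build.dart';

// Post-process builder that deletes the intermediate .exports files
// from the build cache once the build completes.
PostProcessBuilder exportsCleanup(BuilderOptions options) =>
    const FileDeletingBuilder(['.exports']);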
Builder 2 - find the .exports files and generate a Dart file
Use findAssets to track down all those .exports files, and write an export statement for each one. Use a show clause with the content of the file, which contains the names of the members that were annotated.
import 'package:build/build.dart';
import 'package:glob/glob.dart';

class ExportsBuilder implements Builder {
  @override
  final buildExtensions = const {
    r'$lib$': ['exports.dart']
  };

  @override
  Future<void> build(BuildStep buildStep) async {
    final exports = buildStep.findAssets(Glob('**/*.exports'));
    // One `export ... show ...;` line per .exports file found in the package.
    final content = [
      await for (var exportLibrary in exports)
        'export \'${exportLibrary.changeExtension('.dart').uri}\' '
            'show ${await buildStep.readAsString(exportLibrary)};',
    ];
    if (content.isNotEmpty) {
      await buildStep.writeAsString(
          AssetId(buildStep.inputId.package, 'lib/exports.dart'),
          content.join('\n'));
    }
  }
}
This builder should likely be build_to: source if you want to publish this file on pub. It should declare required_inputs: [".exports"] to ensure it runs after the previous builder.
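Putting it together, the build.yaml wiring could look roughly like the following. The builder names, import path, and factory names (exportLocatingBuilder, exportsBuilder, exportsCleanup) are placeholders, and you still need to apply the builders to the package that contains the models (via auto_apply or a targets section), so treat this as a sketch rather than a drop-in config:

builders:
  export_locator:
    import: "package:my_package/builders.dart"
    builder_factories: ["exportLocatingBuilder"]
    build_extensions: {".dart": [".exports"]}
    build_to: cache
    applies_builders: ["my_package|exports_cleanup"]
  exports_file:
    import: "package:my_package/builders.dart"
    builder_factories: ["exportsBuilder"]
    build_extensions: {"$lib$": ["exports.dart"]}
    build_to: source
    required_inputs: [".exports"]
post_process_builders:
  exports_cleanup:
    import: "package:my_package/builders.dart"
    builder_factory: "exportsCleanup"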
Why does it need to be this complex?
You could implement this as a single builder which uses findAssets to find all the Dart files. The downside is that rebuilds would be much slower, because the builder would be invalidated by any content change in any Dart file, so you'd end up parsing all Dart code whenever any Dart code changes. With the two-builder approach, only the individual .exports files that come from a changed Dart file need to be resolved and rebuilt on a change, and only if the exports change will the exports.dart file be invalidated.
Older versions of build_runner also didn't support using the Resolver to resolve code that isn't transitively imported from the input library. Recent versions of build_runner have relaxed this constraint.

Related

Where is the `cc_proto_library` implementation in Bazel?

I read the source code of Bazel, and I found cc_binary and cc_library in src/main/starlark/builtins_bzl/common/cc.
I also found proto_library in src/main/starlark/builtins_bzl/common/proto, but I can't find where cc_proto_library's implementation is.
Can anyone tell me how it works?
Tip: use the Bazel codesearch at https://source.bazel.build to navigate its source code.
You can quickly find the implementation of any built-in rule using a query like:
language:java "implements RuleDefinition" "\"cc_proto_library\")"
This will bring you to the CcProtoLibraryRule.java definition:
public class CcProtoLibraryRule implements RuleDefinition {
  private final CcProtoAspect ccProtoAspect;

  public CcProtoLibraryRule(CcProtoAspect ccProtoAspect) {
    this.ccProtoAspect = ccProtoAspect;
  }

  @Override
  public RuleClass build(RuleClass.Builder builder, RuleDefinitionEnvironment environment) {
    return builder
        .requiresConfigurationFragments(CppConfiguration.class)
        /* <!-- #BLAZE_RULE(cc_proto_library).ATTRIBUTE(deps) -->
        The list of <code>proto_library</code>
        rules to generate C++ code for.
        <!-- #END_BLAZE_RULE.ATTRIBUTE --> */
        .override(
            attr("deps", LABEL_LIST)
                .allowedRuleClasses("proto_library")
                .allowedFileTypes()
                .aspect(ccProtoAspect))
        .build();
  }

  @Override
  public Metadata getMetadata() {
    return RuleDefinition.Metadata.builder()
        .name("cc_proto_library")
        .factoryClass(CcProtoLibrary.class)
        .ancestors(BaseRuleClasses.NativeActionCreatingRule.class)
        .build();
  }
}
The implementation is defined with .factoryClass(CcProtoLibrary.class).
For the second part of your question: "builtins" are a Bazel-internal mechanism for transparently swapping out the Java implementation of a Bazel rule for its Starlark equivalent, without needing to add any load statements to BUILD files. This is necessary to migrate existing users to Starlark implementations without causing user impact.

Grails 4: how to get a handle to artifacts in a custom command

I need to build a custom command in a Grails 4 application (https://docs.grails.org/4.0.11/guide/single.html#creatingCustomCommands), and I need to get a handle to some Grails services and domain classes which I will query as needed.
The custom command skeleton is quite simple:
import grails.dev.commands.*
import org.apache.maven.artifact.Artifact
class HelloWorldCommand implements GrailsApplicationCommand {

    boolean handle() {
        return true
    }
}
While the documentation says that a custom command has access to the whole application context, I haven't found any examples of how to get a handle on it and start accessing the various application artifacts.
Any hints?
EDIT: to add context and clarify the goal of the custom command (for further recommendations/best practices): the command reads data from a file in a custom format, persists the data, and writes reports in another custom format.
It will eventually be replaced by a recurring job, once the data is available on demand from a third-party REST API.
See the project at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli.
grails-app/services/marco/vittorini/orgeas/artifacts/cli/GreetingService.groovy
package marco.vittorini.orgeas.artifacts.cli
class GreetingService {
    String greeting = 'Hello World'
}
grails-app/commands/marco/vittorini/orgeas/artifacts/cli/HelloCommand.groovy
package marco.vittorini.orgeas.artifacts.cli
import grails.dev.commands.*
class HelloCommand implements GrailsApplicationCommand {

    GreetingService greetingService

    boolean handle() {
        println greetingService.greeting
        return true
    }
}
EDIT: I have added a commit at github.com/jeffbrown/marco-vittorini-orgeas-artifacts-cli/commit/49a846e3902073f8ea0539fcde550f6d002b9d89 which demonstrates accessing a domain class, which was part of the question I overlooked when writing the initial answer.
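For illustration only, a command that queries a domain class might look roughly like the following sketch; the Book domain class and the read-only transaction wrapper are assumptions for the example, not part of the linked project:

import grails.dev.commands.*
import grails.gorm.transactions.Transactional

class CountBooksCommand implements GrailsApplicationCommand {

    @Transactional(readOnly = true)
    boolean handle() {
        // Hypothetical Book domain class, queried like any other GORM entity
        println "Books in database: ${Book.count()}"
        return true
    }
}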

Dart: Transforming a command line application

Is it possible to run a transformer on a command line application before running it?
For example, I have a class that mixes in the Observable class, and I would like to transform it so that the dirty-checking is replaced with a ChangeNotifier.
holder.dart
class Member extends Object with ChangeNotifier {
  @observable
  String name = "";
}

class Holder extends Object with ChangeNotifier {
  Holder() {
  }

  @observable
  Member member = new Member();
}
pubspec.yml
transformers:
- observe:
    files:
    - bin/models/holder.dart
If I run this application from IntelliJ IDE, it doesn't seem to run the transformer on it before executing main.dart.
Thanks.
Transformers are not applied to command line apps. Only code that is served with pub serve or built with pub build has transformers applied. Your code should run on the server/command line as is; there is no need to run transformers.
The observe transformer replaces dart:mirrors access with generated code to prevent code bloat in dart2js-generated JS, but this is not an issue on the command line.

How to set extra fields for entries using plexus-archiver

I'm working on a Maven plugin that uses plexus-archiver to create a zip file.
Basically, I'm getting the component injected by Sisu, then traversing a specified fileSet and registering the required files:
zipArchiver.addFile(from_file, to_file);
And the zip is being generated properly.
But I need to include an extra field with the file MIME type for some of the files being added to the zip.
How can I do that with plexus-archiver?
It seems that the current plexus-archiver (3.0) doesn't support extra fields.
I had to hack a bit in order to keep using plexus-archiver.
The solution was to extend the ZipArchiver class and override the method initZipOutputStream, which provides an object of the ZipArchiveOutputStream class.
With it I could create the entry and its extra field:
@Override
protected void initZipOutputStream(ZipArchiveOutputStream pZOut)
        throws ArchiverException, IOException {
    super.initZipOutputStream(pZOut);
    // pFile, content, Constants.MIME_STRING and ContentTypeExtraField are
    // fields and classes defined elsewhere in the plugin.
    ZipArchiveEntry ae = new ZipArchiveEntry(pFile, pFile.getName());
    ZipExtraField zef = new ContentTypeExtraField(Constants.MIME_STRING);
    ae.addExtraField(zef);
    pZOut.putArchiveEntry(ae);
    pZOut.write(content);
    pZOut.closeArchiveEntry();
}

Conditional imports / code for Dart packages

Is there any way to conditionally import libraries / code based on environment flags or target platforms in Dart? I'm trying to switch out between dart:io's ZLibDecoder / ZLibEncoder classes and zlib.js based on the target platform.
There is an article that describes how to create a unified interface, but I'm unable to see how that technique avoids creating duplicate code and redundant tests to cover that duplicate code. game_loop employs this technique, but uses separate classes (GameLoopHtml and GameLoopIsolate) that don't seem to share anything.
My code looks a bit like this:
class Parser {
  Layer parse(String data) {
    List<int> rawBytes = /* ... */;
    /* stuff you don't care about */
    return new Layer(_inflateBytes(rawBytes));
  }

  String _inflateBytes(List<int> bytes) {
    // Uses ZLibEncoder on dartvm, zlib.js in browser
  }
}
I'd like to avoid duplicating code by having two separate classes -- ParserHtml and ParserServer -- that implement everything identically except for _inflateBytes.
EDIT: concrete example here: https://github.com/radicaled/citadel/blob/master/lib/tilemap/parser.dart. It's a TMX (Tile Map XML) parser.
You could use mirrors (reflection) to solve this problem. The pub package path uses reflection to access dart:io on the standalone VM or dart:html in the browser.
The source is located here. The good thing is that they use @MirrorsUsed, so only the required classes are included for the mirrors API. In my opinion the code is documented very well; it should be easy to adapt the solution to your code.
Start at the getters _io and _html (starting at line 72); they show that you can attempt to load a library even if it isn't available on your flavor of the VM. Loading just returns null if the library isn't available.
/// If we're running in the server-side Dart VM, this will return a
/// [LibraryMirror] that gives access to the `dart:io` library.
///
/// If `dart:io` is not available, this returns null.
LibraryMirror get _io => currentMirrorSystem().libraries[Uri.parse('dart:io')];
// TODO(nweiz): when issue 6490 or 6943 are fixed, make this work under dart2js.
/// If we're running in Dartium, this will return a [LibraryMirror] that gives
/// access to the `dart:html` library.
///
/// If `dart:html` is not available, this returns null.
LibraryMirror get _html =>
currentMirrorSystem().libraries[Uri.parse('dart:html')];
Later you can use mirrors to invoke methods or getters. See the getter current (starting at line 86) for an example implementation.
/// Gets the path to the current working directory.
///
/// In the browser, this means the current URL. When using dart2js, this
/// currently returns `.` due to technical constraints. In the future, it will
/// return the current URL.
String get current {
  if (_io != null) {
    return _io.classes[#Directory].getField(#current).reflectee.path;
  } else if (_html != null) {
    return _html.getField(#window).reflectee.location.href;
  } else {
    return '.';
  }
}
As you can see in the comments, this only works in the Dart VM at the moment. Once issue 6490 is solved, it should work in dart2js too. This may mean the solution isn't applicable for you right now, but it could be later.
Issue 6943 could also be helpful, but it describes another solution that is not implemented yet.
Conditional imports are possible based on the presence of dart:html or dart:io; see for example the import statements of resource_loader.dart in package:resource, or the sketch below.
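As a minimal sketch applied to your Parser (the file names parser_stub.dart, parser_io.dart, and parser_html.dart are hypothetical), each file exposes the same _inflateBytes implementation and the compiler picks the one whose platform library is available:

// Falls back to the stub when neither dart:io nor dart:html is available.
import 'parser_stub.dart'
    if (dart.library.io) 'parser_io.dart'
    if (dart.library.html) 'parser_html.dart';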
I'm not yet sure how to do an import conditional on being on the Flutter platform.
