How do you include a raw, uncompiled partial in dust.js?

I have dust working reasonably well, but I'd like to be able to include a file, such as a css file, without compiling and rendering it.
It seems like maybe I need to create an onLoad handler that loads the file and registers the content directly.
Is there a way to do this within dust already?

You can take advantage of Dust's native support for streams and promises to make file inclusion really nice. The exact implementation depends on whether you're running Dust on the server or the client, but the idea is the same.
Write a helper that loads the file and returns a Stream or Promise for the result. Dust will render it into the template asynchronously, so doing file I/O won't block the rest of the template from processing.
For simplicity, I'll write a context helper, but you could also make this a Dust global helper (a sketch of that variant is at the end of this answer).
{
  "css": function(chunk, context, bodies, params) {
    // Do some due diligence to sanitize paths in a real app
    var cssPrefix = '/css/';
    var path = cssPrefix + context.resolve(params.path);
    return $.get(path);
  }
}
Then use it like this:
{#css path="foo/bar.css"}{.}{/css}
Or a streamy version for the server:
{
  "css": function(chunk, context, bodies, params) {
    // Do some due diligence to sanitize paths in a real app
    var cssPrefix = '/css/';
    var path = cssPrefix + context.resolve(params.path);
    return fs.createReadStream(path, "utf8");
  }
}
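As mentioned above, the same idea works as a Dust global helper instead of a context helper. This is just a sketch, assuming a server-side setup where dust and Node's fs module are in scope, a Dust version recent enough that helpers may return streams, and an example helper name of css:

dust.helpers.css = function(chunk, context, bodies, params) {
  // Sanitize paths in a real app
  var cssPrefix = '/css/';
  var path = cssPrefix + context.resolve(params.path);
  // Returning a stream lets Dust insert the file contents asynchronously
  return fs.createReadStream(path, "utf8");
};

which you would then call in a template as {@css path="foo/bar.css"/}.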

Related

Is it possible to store and load precompiled js to org.graalvm.polyglot.Context

Is there any way to convert JavaScript source into some pre-compiled state that can be stored and loaded into org.graalvm.polyglot.Context, instead of eval-ing it as a raw String? Something like the undocumented --persistent-code-cache option in Nashorn.
As of May '19, you can share code within the same process to avoid reparsing (similar to Nashorn's code persistence) by reusing the same Engine object between different Contexts, like this:
try (Engine engine = Engine.create()) {
    Source source = Source.create("js", "21 + 21");
    try (Context context = Context.newBuilder().engine(engine).build()) {
        int v = context.eval(source).asInt();
        assert v == 42;
    }
    try (Context context = Context.newBuilder().engine(engine).build()) {
        int v = context.eval(source).asInt();
        assert v == 42;
    }
}
More details can be found here: https://www.graalvm.org/docs/graalvm-as-a-platform/embed/#enable-source-caching
We have plans to support a persistent code cache across processes in combination with the GraalVM native-image tool in the future. We already support creating native images that contain the JavaScript interpreter and the GraalVM compiler. We want to add support for including pre-warmed-up scripts, hopefully with pre-compiled JavaScript native code as well, so you might be able to start your JS application with close to zero startup time. No ETA though.

Override UrlResolver

I've been working through the Angular 2 tutorial (in TypeScript) when I got stuck on this part. They now want to separate templates into separate files. That's fine and dandy, but I've got a quirky setup: I'm serving up the files with ASP.NET MVC, and it's refusing to serve up the file from the Views folder. Fair enough: I anticipate needing to serve up Razor (.cshtml) files, so I'm happy to try and hack this out instead of just whitelisting .html.
I've worked with Angular 1 before, and in this situation I used a decorator to modify the $templateRequest service to modify the template URLs into something MVC will accept, and then I set up MVC to serve up the corresponding files. Quite clever work if I do say so myself. So I just need to replicate this in Angular 2, right? That should be easy.
Wrong. So wrong. After some guesswork Googling I found UrlResolver which, after some client-side debugging I confirmed, is the class I want to extend. The documentation even says:
This class can be overridden by the application developer to create custom behavior.
Yes! This is exactly what I want to do. Unfortunately no examples of how to override it have been supplied. I've found this DemoUrlResolver and this MyUrlResolver, but I can't figure out how (or whether) either of them works. I've tried multiple approaches to supplying my custom provider (see this answer), including the bootstrap and providers approaches (on both the module and the app component), all to no avail.
How do I override UrlResolver?
I assume it doesn't matter, but at the moment my extension does nothing but defer to the base class:
class MvcUrlResolver extends UrlResolver {
  resolve(baseUrl: string, url: string): string {
    return super.resolve(baseUrl, url);
  }
}
Interesting question. Since UrlResolver is part of the compiler, it makes sense that it would not be instantiated along with other application components. After some research and digging through Angular's code I found the solution: you need to provide it directly in the call to platformBrowserDynamic(). That way it is merged into the default compiler options and used by the injector that instantiates the compiler.
import { COMPILER_OPTIONS } from '@angular/core';
import { UrlResolver } from '@angular/compiler';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module'; // adjust the path to your module

class MvcUrlResolver extends UrlResolver {
  resolve(baseUrl: string, url: string): string {
    let result = super.resolve(baseUrl, url);
    console.log('resolving urls: baseUrl = ' + baseUrl + '; url = ' + url + '; result = ' + result);
    return result;
  }
}

platformBrowserDynamic([{
  provide: COMPILER_OPTIONS,
  useValue: {providers: [{provide: UrlResolver, useClass: MvcUrlResolver}]},
  multi: true
}]).bootstrapModule(AppModule);

sun.org.mozilla.javascript.internal.NativeObject vs org.mozilla.javascript.NativeObject

I am really stuck on this now.
Essentially I have a Java Map that I would like to pass to JavaScript code, so that in my JS code I can use dot notation to refer to the keys in this Map. (I know I could serialize the map into JSON, deserialize it, and pass that into JS, but I don't like that approach.) I have this piece of unit test code:
@Test
public void mapToJsTest() throws Exception {
    Map m = Maps.newHashMap();
    m.put("name", "john");
    NativeObject nobj = new NativeObject();
    for (Object k : m.keySet()) {
        nobj.defineProperty((String) k, m.get(k), NativeObject.READONLY);
    }
    engine.eval("function test(obj){ return obj.name;}");
    Object obj = ((Invocable) engine).invokeFunction("test", nobj);
    Assert.assertEquals(obj, "john");
}
If I use
org.mozilla.javascript.NativeObject
the test won't pass.
However, if I use
sun.org.mozilla.javascript.internal.NativeObject
the test passes.
But we all know that we shouldn't rely on these internal classes, and when I deploy my code on the server side, trying to access this internal class causes other problems.
So how do I achieve this with just "org.mozilla.javascript.NativeObject"?
BTW, I am using Rhino via:
ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
For Nashorn it is much easier: I can pass the Map directly into the JS code.
Unless you're stuck with an older JDK (JDK 7 or below) for some reason, I'd recommend moving to JDK 8u. You get an ES 5.1-compliant JS implementation (Nashorn) bundled with JDK 8+.
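For reference, here is a minimal sketch of the Nashorn route on JDK 8. The class and setup are illustrative rather than taken from the question; as the question itself notes, the Map can be handed to the script directly, with no NativeObject wrapper:

import java.util.HashMap;
import java.util.Map;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornMapExample {
    public static void main(String[] args) throws Exception {
        // On JDK 8+, this resolves to the Nashorn engine
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        Map<String, Object> m = new HashMap<>();
        m.put("name", "john");
        engine.eval("function test(obj){ return obj.name; }");
        // Pass the Map directly; Nashorn lets the script read its keys with dot notation
        Object result = ((Invocable) engine).invokeFunction("test", m);
        System.out.println(result); // prints: john
    }
}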

Caching streams in Functional Reactive Programming

I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function, which returns a stream that resolves every time the config file is updated for that specific section:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think that every time I call this function I am recreating the stream and reprocessing each step. This is a problem with how I am using it rather than with the library.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance; it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following, using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem, but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution, maybe one that uses a more functional approach (closures etc.) or even an extension I can add to the stream, like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that consumers can access. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return
            function (input:In)
                return switch m[input] {
                    case null: m[input] = f(input);
                    case output: output;
                }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
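For instance, here is a rough sketch of how this could plug back into the getConfigSection from the question, using the same names and stream API the question already uses (nothing new is assumed beyond the memoize helper above):

// Parse any given YAML payload at most once...
var memoizedParse = memoize(parseYaml);
// ...and build each section's stream at most once.
var getConfigSection = memoize(function (section:String) {
    return configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(memoizedParse);
});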

Conditional imports / code for Dart packages

Is there any way to conditionally import libraries / code based on environment flags or target platforms in Dart? I'm trying to switch between dart:io's ZLibDecoder / ZLibEncoder classes and zlib.js depending on the target platform.
There is an article that describes how to create a unified interface, but I can't see how that technique avoids duplicate code and the redundant tests needed to cover that duplicate code. game_loop employs this technique, but uses separate classes (GameLoopHtml and GameLoopIsolate) that don't seem to share anything.
My code looks a bit like this:
class Parser {
  Layer parse(String data) {
    List<int> rawBytes = /* ... */;
    /* stuff you don't care about */
    return new Layer(_inflateBytes(rawBytes));
  }

  String _inflateBytes(List<int> bytes) {
    // Uses ZLibEncoder on dartvm, zlib.js in browser
  }
}
I'd like to avoid the duplication of having two separate classes -- ParserHtml and ParserServer -- that implement everything identically except for _inflateBytes.
EDIT: concrete example here: https://github.com/radicaled/citadel/blob/master/lib/tilemap/parser.dart. It's a TMX (Tile Map XML) parser.
You could use mirrors (reflection) to solve this problem. The pub package path uses reflection to access dart:io on the standalone VM or dart:html in the browser.
The source is located here. The good thing is that they use @MirrorsUsed, so only the required classes are included for the mirrors API. In my opinion the code is documented very well; it should be easy to adapt the solution to your code.
Start at the getters _io and _html (starting at line 72); they show that you can look up a library even when it isn't available on your flavor of the VM. The lookup simply returns null if the library isn't available.
/// If we're running in the server-side Dart VM, this will return a
/// [LibraryMirror] that gives access to the `dart:io` library.
///
/// If `dart:io` is not available, this returns null.
LibraryMirror get _io => currentMirrorSystem().libraries[Uri.parse('dart:io')];
// TODO(nweiz): when issue 6490 or 6943 are fixed, make this work under dart2js.
/// If we're running in Dartium, this will return a [LibraryMirror] that gives
/// access to the `dart:html` library.
///
/// If `dart:html` is not available, this returns null.
LibraryMirror get _html =>
    currentMirrorSystem().libraries[Uri.parse('dart:html')];
Later you can use mirrors to invoke methods or getters. See the getter current (starting at line 86) for an example implementation.
/// Gets the path to the current working directory.
///
/// In the browser, this means the current URL. When using dart2js, this
/// currently returns `.` due to technical constraints. In the future, it will
/// return the current URL.
String get current {
  if (_io != null) {
    return _io.classes[#Directory].getField(#current).reflectee.path;
  } else if (_html != null) {
    return _html.getField(#window).reflectee.location.href;
  } else {
    return '.';
  }
}
As you can see in the comments, this only works in the Dart VM at the moment. After issue 6490 is solved it should work in dart2js, too. This may mean the solution isn't applicable for you right now, but it could be later.
Issue 6943 could also be helpful, but it describes another solution that is not implemented yet.
Conditional imports are possible based on the presence of dart:html or dart:io, see for example the import statements of resource_loader.dart in package:resource.
I'm not yet sure how to do an import conditional on being on the Flutter platform.
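To make that concrete for the Parser above, a conditional import along the following lines picks a platform-specific implementation at compile time. The file and function names here are hypothetical; the only requirement is that both files expose the same top-level API:

// parser.dart
// inflate_io.dart would wrap dart:io's ZLIB codec,
// inflate_html.dart would wrap zlib.js.
import 'inflate_io.dart'
    if (dart.library.html) 'inflate_html.dart' as inflater;

String _inflateBytes(List<int> bytes) => inflater.inflateBytes(bytes);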
