Conditional imports / code for Dart packages

Is there any way to conditionally import libraries / code based on environment flags or target platforms in Dart? I'm trying to switch out between dart:io's ZLibDecoder / ZLibEncoder classes and zlib.js based on the target platform.
There is an article that describes how to create a unified interface, but I can't see how that technique avoids duplicating code and writing redundant tests for the duplicated code. game_loop employs this technique, but uses separate classes (GameLoopHtml and GameLoopIsolate) that don't seem to share anything.
My code looks a bit like this:
class Parser {
  Layer parse(String data) {
    List<int> rawBytes = /* ... */;
    /* stuff you don't care about */
    return new Layer(_inflateBytes(rawBytes));
  }

  String _inflateBytes(List<int> bytes) {
    // Uses ZLibEncoder on dartvm, zlib.js in browser
  }
}
I'd like to avoid duplicating code by having two separate classes -- ParserHtml and ParserServer -- that implement everything identically except for _inflateBytes.
EDIT: concrete example here: https://github.com/radicaled/citadel/blob/master/lib/tilemap/parser.dart. It's a TMX (Tile Map XML) parser.

You could use mirrors (reflection) to solve this problem. The pub package path is using reflection to access dart:io on the standalone VM or dart:html in the browser.
The source is located here. The good thing is that they use @MirrorsUsed, so only the required classes are included for the mirrors API. In my opinion the code is very well documented; it should be easy to adapt the solution to your code.
Start with the getters _io and _html (starting at line 72): they show that you can look up a library without knowing whether it is available on your flavor of the VM. The lookup simply returns null if the library isn't available.
/// If we're running in the server-side Dart VM, this will return a
/// [LibraryMirror] that gives access to the `dart:io` library.
///
/// If `dart:io` is not available, this returns null.
LibraryMirror get _io => currentMirrorSystem().libraries[Uri.parse('dart:io')];

// TODO(nweiz): when issue 6490 or 6943 are fixed, make this work under dart2js.
/// If we're running in Dartium, this will return a [LibraryMirror] that gives
/// access to the `dart:html` library.
///
/// If `dart:html` is not available, this returns null.
LibraryMirror get _html =>
    currentMirrorSystem().libraries[Uri.parse('dart:html')];
Later you can use mirrors to invoke methods or getters. See the getter current (starting at line 86) for an example implementation.
/// Gets the path to the current working directory.
///
/// In the browser, this means the current URL. When using dart2js, this
/// currently returns `.` due to technical constraints. In the future, it will
/// return the current URL.
String get current {
  if (_io != null) {
    return _io.classes[#Directory].getField(#current).reflectee.path;
  } else if (_html != null) {
    return _html.getField(#window).reflectee.location.href;
  } else {
    return '.';
  }
}
As you see in the comments, this only works in the Dart VM at the moment. After issue 6490 is solved, it should work in dart2js, too. This may mean that the solution isn't applicable for you at the moment, but it could become one later.
Issue 6943 could also be helpful, but it describes another solution that is not implemented yet.

Conditional imports are possible based on the presence of dart:html or dart:io; see for example the import statements of resource_loader.dart in package:resource.
I'm not yet sure how to do an import conditional on being on the Flutter platform.
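To illustrate the technique used in resource_loader.dart: the import picks an implementation at compile time, depending on which core libraries the target platform provides. A minimal sketch for the parser above (the file names are hypothetical; the if (dart.library...) syntax is the real one):

// inflate.dart: compile-time selection of the zlib implementation.
import 'inflate_stub.dart'                      // fallback; also defines the shared API
    if (dart.library.io) 'inflate_io.dart'      // VM: dart:io zlib
    if (dart.library.html) 'inflate_html.dart'; // browser: zlib.js

Each of the three files exposes the same top-level function (say, List<int> inflateBytes(List<int> bytes)), so the parser imports inflate.dart once and stays platform-agnostic.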

Related

How can I get a custom Python type and avoid importing a Python module every time a C function is called

I am writing some functions for a C extension module for Python and need to import a module I wrote directly in Python, for access to a custom Python type. I use PyImport_ImportModule() in the body of my C function, then PyObject_GetAttrString() on the module to get the custom Python type. This executes every time the C function is called, which seems inefficient and may not be best practice. I'm looking for a way to have access to the custom Python type as a PyObject* or PyTypeObject* in my source code for efficiency, and I may need the type in more than one C function as well.
Right now the function looks something like
static PyObject* foo(PyObject* self, PyObject* args)
{
    PyObject* myPythonModule = PyImport_ImportModule("my.python.module");
    if (!myPythonModule)
        return NULL;

    PyObject* myPythonType = PyObject_GetAttrString(myPythonModule, "MyPythonType");
    if (!myPythonType) {
        Py_DECREF(myPythonModule);
        return NULL;
    }
    /* more code to create and return a MyPythonType instance */
}
To avoid retrieving myPythonType every function call I tried adding a global variable to hold the object at the top of my C file
static PyObject* myPythonType;
and initialized it in the module init function similar to the old function body
PyMODINIT_FUNC
PyInit_mymodule(void)
{
    /* more initializing here */
    PyObject* myPythonModule = PyImport_ImportModule("my.python.module");
    if (!myPythonModule) {
        /* clean-up code here */
        return NULL;
    }

    // set the static global variable here
    myPythonType = PyObject_GetAttrString(myPythonModule, "MyPythonType");
    Py_DECREF(myPythonModule);
    if (!myPythonType) {
        /* clean-up code here */
        return NULL;
    }
    /* finish initializing module */
}
This worked; however, I am unsure how to Py_DECREF the global variable when the module is finished being used. Is there a way to do that, or even a better way to solve this whole problem that I am overlooking?
First, just calling import each time probably isn't as bad as you think: Python internally keeps a list of imported modules, so the second time you call it on the same module the cost is much lower. So this might be an acceptable solution.
Second, the global variable approach should work, but you're right that it doesn't get cleaned up. This is rarely a problem because modules are rarely unloaded (and most extension modules don't really support unloading), but it isn't great. It also won't work with isolated sub-interpreters (which isn't much of a concern now, but may become more popular in the future).
The most robust way to do it needs multi-phase initialization of your module. To quickly summarise what you should do:
- Define a module state struct containing this type of information.
- Your module spec should contain the size of the module state struct.
- Initialize this struct within the Py_mod_exec slot.
- Create an m_free function (and ideally the other GC functions) to correctly decref your state during de-initialization.
- Within a global module function, self will be your module object, so you can get the state with PyModule_GetState(self).
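A minimal sketch of those steps, under the question's assumptions (module name mymodule, Python type my.python.module.MyPythonType); an outline, not a drop-in implementation:

#include <Python.h>

/* Per-module state replaces the C-level global. */
typedef struct {
    PyObject *myPythonType;
} module_state;

/* Py_mod_exec slot: runs when the module object is created. */
static int
mymodule_exec(PyObject *module)
{
    module_state *state = PyModule_GetState(module);
    PyObject *mod = PyImport_ImportModule("my.python.module");
    if (!mod)
        return -1;
    state->myPythonType = PyObject_GetAttrString(mod, "MyPythonType");
    Py_DECREF(mod);
    return state->myPythonType ? 0 : -1;
}

/* In a module-level function, self is the module object. */
static PyObject *
foo(PyObject *self, PyObject *args)
{
    module_state *state = PyModule_GetState(self);
    /* ... use state->myPythonType to create and return an instance ... */
    Py_RETURN_NONE;
}

/* GC hooks so the state participates in traversal and cleanup. */
static int
mymodule_traverse(PyObject *module, visitproc visit, void *arg)
{
    module_state *state = PyModule_GetState(module);
    Py_VISIT(state->myPythonType);
    return 0;
}

static int
mymodule_clear(PyObject *module)
{
    module_state *state = PyModule_GetState(module);
    Py_CLEAR(state->myPythonType);
    return 0;
}

static void
mymodule_free(void *module)
{
    mymodule_clear((PyObject *)module);
}

static PyMethodDef mymodule_methods[] = {
    {"foo", foo, METH_VARARGS, "Create a MyPythonType instance."},
    {NULL, NULL, 0, NULL}
};

static PyModuleDef_Slot mymodule_slots[] = {
    {Py_mod_exec, mymodule_exec},
    {0, NULL}
};

static struct PyModuleDef mymodule_def = {
    PyModuleDef_HEAD_INIT,
    .m_name = "mymodule",
    .m_size = sizeof(module_state),   /* enables per-module state */
    .m_methods = mymodule_methods,
    .m_slots = mymodule_slots,
    .m_traverse = mymodule_traverse,
    .m_clear = mymodule_clear,
    .m_free = mymodule_free,
};

PyMODINIT_FUNC
PyInit_mymodule(void)
{
    /* Multi-phase init: return the spec, not a finished module object. */
    return PyModuleDef_Init(&mymodule_def);
}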

RxJava2 order of sequence calls with Completable andThen operator

I am trying to migrate from RxJava1 to RxJava2. I am replacing all code parts where I previously had Observable<Void> with Completable. However, I ran into one problem with the order of stream calls. When I was previously dealing with Observables and using map and flatMap, the code worked 'as expected'. The andThen() operator, however, seems to work a little differently. Here is a sample that simplifies the problem:
public Single<String> getString() {
    Log.d("Starting flow..");
    return getCompletable().andThen(getSingle());
}

public Completable getCompletable() {
    Log.d("calling getCompletable");
    return Completable.create(e -> {
        Log.d("doing actual completable work");
        e.onComplete();
    });
}

public Single<String> getSingle() {
    Log.d("calling getSingle");
    if (conditionBasedOnActualCompletableWork) {
        return getSingleA();
    } else {
        return getSingleB();
    }
}
What I see in the logs in the end is:
1-> Log.d("Starting flow..")
2-> Log.d("calling getCompletable");
3-> Log.d("calling getSingle");
4-> Log.d("doing actual completable work");
And as you can probably figure out, I would expect line 4 to be logged before line 3 (after all, the name of the andThen() operator suggests that the code runs 'after' the Completable finishes its job). Previously I created the Observable<Void> using the Async.toAsync() operator, and the method now called getSingle was in a flatMap step - it worked as I expected, so log 4 appeared before log 3. I tried changing the way the Completable is created - using fromAction or fromCallable - but it behaves the same. I also couldn't find any other operator to replace andThen(). To underline: the method must be a Completable since it has nothing meaningful to return - it changes the app preferences and other settings (and is used like that globally, mostly working 'as expected'), and those changes are needed later in the stream. I also tried to wrap the getSingle() method to somehow create a Single and move the if statement inside the create block, but I don't know how to use the getSingleA/B() methods in there, and I need them because they have complexity of their own and it doesn't make sense to duplicate the code. Does anyone have any idea how to modify this in RxJava2 so it behaves the same as before? There are multiple places where I rely on a Completable finishing before moving forward with the stream (refreshing a session token, updating the DB, preferences, etc. - no problem in RxJava1 using flatMap).
You can use defer:
getCompletable().andThen(Single.defer(() -> getSingle()))
That way, you don't execute the contents of getSingle() immediately but only when the Completable completes and andThen switches to the Single.
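Put back into the question's code, only getString() changes; a sketch using the question's own methods:

public Single<String> getString() {
    Log.d("Starting flow..");
    // Single.defer postpones the getSingle() call until the Completable
    // has completed and andThen subscribes to the deferred Single.
    return getCompletable()
            .andThen(Single.defer(() -> getSingle()));
}

With this change, log 4 ("doing actual completable work") appears before log 3 ("calling getSingle").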

Caching streams in Functional Reactive Programming

I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time a config file is updated for that specific section like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, called promhx, each step of the chain should remember its last resolved value, but I think that every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using the library rather than with the library itself.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance; it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams before hand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section, I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects);//1
        cached(1);
        trace(sideeffects);//1
        cached(3);
        trace(sideeffects);//2
        cached(3);
        trace(sideeffects);//2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return
            function (input:In)
                return switch m[input] {
                    case null: m[input] = f(input);
                    case output: output;
                }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
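Applied to the question's code, the hand-rolled cache map disappears; a sketch assuming the memoize helper above and the functions from the question:

// Hypothetical wiring: build the memoized lookup once (e.g. in the
// constructor) and call it wherever a section stream is needed.
getConfigSection = memoize(function (section:String)
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml)
);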

Switching from Rhino to Nashorn

I have a Java 7 project which makes a lot of use of JavaScript for scripting various features. Until now I was using Rhino as the script engine. I would now like to move to Java 8, which also means that I will replace Rhino with Nashorn.
How compatible is Nashorn with Rhino? Can I use it as a drop-in replacement, or can I expect that some of my scripts will not work anymore and will need to be ported to the new engine? Are there any commonly-used features of Rhino which are not supported by Nashorn?
One problem is that Nashorn can no longer, by default, import whole Java packages into the global scope using importPackage(com.organization.project.package);
There is, however, a simple workaround: By adding this line to your script, you can enable the old behavior of Rhino:
load("nashorn:mozilla_compat.js");
Another problem I ran into is that certain type conversions when passing data between Java and JavaScript work differently. For example, the object that arrives when you pass a JavaScript array to Java can no longer be cast to List, but it can be cast to a Map<String, Object>. As a workaround you can convert the JavaScript array to a Java List in the JavaScript code using Java.to(array, Java.type("java.util.List")).
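A minimal sketch of that conversion on the JavaScript side (the receiving Java method is hypothetical):

// Nashorn: a raw JS array is not a java.util.List on the Java side,
// so convert it explicitly before crossing the boundary.
var jsArray = [1, 2, 3];
var javaList = Java.to(jsArray, Java.type("java.util.List"));
someJavaObject.takesList(javaList); // hypothetical Java method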
To use the importClass method on JDK 8, we need to add the following line:
load("nashorn:mozilla_compat.js");
However, this change affects execution on JDK 7 (whose engine does not support the load method).
To maintain compatibility with both JDKs, I solved this problem by adding a try/catch clause:
try {
    load("nashorn:mozilla_compat.js");
} catch (e) {
}
Nashorn cannot access an inner class when that inner class is declared private, which Rhino was able to do:
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class Test {
    public static void main(String[] args) {
        Test test = new Test();
        test.run();
    }

    public void run() {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine engine = factory.getEngineByName("JavaScript");
        Inner inner = new Inner();
        engine.put("inner", inner);
        try {
            engine.eval("function run(inner){inner.foo(\"test\");} run(inner);");
        } catch (ScriptException e) {
            e.printStackTrace();
        }
    }

    private class Inner {
        public void foo(String msg) {
            System.out.println(msg);
        }
    }
}
Under Java 8 this code throws the following exception:
javax.script.ScriptException: TypeError: kz.test.Test$Inner#117cd4b has no such function "foo" in <eval> at line number 1
at jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:564)
at jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:548)
I noticed that Rhino didn't have a problem with a function called in() (although in is a reserved JavaScript keyword).
Nashorn, however, raises an error.
Nashorn cannot call static methods on instances! Rhino allowed this, so we had to port Rhino to Java 8 (here's a short summary: http://andreas.haufler.info/2015/04/using-rhino-with-java-8.html).
Nashorn on Java 8 does not expose an AST. So if you have Java code that inspects the JS source tree using Rhino's AST mechanism, you may have to rewrite it (using regex, maybe) once you port your code to use Nashorn.
I am talking about this API: https://mozilla.github.io/rhino/javadoc/org/mozilla/javascript/ast/AstNode.html
Nashorn on Java 9 supports an AST, though.
One feature that is in Rhino and not Nashorn: exposing static members through instances.
From http://nashorn-dev.openjdk.java.narkive.com/n0jtdHc9/bug-report-can-t-call-static-methods-on-a-java-class-instance :
"My conviction is that exposing static members through instances is a sloppy mashing together of otherwise separate namespaces, hence I chose not to enable it."
I think this is deeply wrong. As long as we have to use two different constructs to access the same Java object, and use package declarations unnecessarily in JavaScript, code becomes harder to read and write because the cognitive load increases. I would rather stick with Rhino, then.
I have not found a workaround for this obvious "design bug" yet.
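A tiny illustration of the difference, using java.io.File's static separator field (my own example, not from the thread):

var File = Java.type("java.io.File");
var f = new File("/tmp/x");
// Rhino: f.separator resolved the static field through the instance.
// Nashorn: the static member is only reachable through the class itself.
var sep = File.separator;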

Why am I getting a 202 in two identically set up StructureMap code paths?

In the C# language, using StructureMap 2.5.4, targeting .NET Framework 3.5 libraries.
I've taken the step of supporting multiple profiles in a StructureMap DI setup, using the ServiceLocator model with Bootstrapper activation. The first setup loaded the default registry, using the scanner.
Now I'd like to determine at runtime which registry configuration to use, scanning and loading multiple assemblies with registries.
It seems this is not working for the actual implementation (I'm getting the 202, default instance not found), but a stripped-down test version does work, with the following setup:
- Two assemblies containing registries and implementations
- Scanning them in the running AppDomain, providing the shared interface, and requesting creation of an instance, using the interfaces in the constructor (which get dealt with thanks to the profile on invocation)
Working code sample below (the other setup has the same structure, but with more complex stuff, and gets a 202):
What kinds of causes are possible for a 202 that specifically names the System.Uri type as not being handled by a default instance? (Uri makes no sense here.)
// Let StructureMap create an instance of class Tester, providing the
// interfaces registered in the registries to the constructor of Tester.
public class Tester<TPOCO>
{
    private ITestMe<TPOCO> _tester;

    public Tester(ITestMe<TPOCO> some)
    {
        _tester = some;
    }

    public string Exec()
    {
        return _tester.Execute();
    }
}

public static class Main
{
    public static void ExecuteDIFunction()
    {
        ObjectFactory.GetInstance<Tester<string>>().Exec();
    }
}

public class ImplementedTestMe<TSome> : ITestMe<TSome>
{
    public string Execute()
    {
        return "Special Execution";
    }
}

public class RegistryForSpecial : Registry
{
    public RegistryForSpecial()
    {
        CreateProfile("Special",
            gc =>
            {
                gc.For(typeof(ITestMe<>)).UseConcreteType(typeof(ImplementedTestMe<>));
            });
    }
}
Background articles on Profiles I used.
How to setup named instances using StructureMap profiles?
http://devlicio.us/blogs/derik_whittaker/archive/2009/01/07/setting-up-profiles-in-structuremap-2-5.aspx
http://structuremap.sourceforge.net/RegistryDSL.htm
EDIT:
It seemed the missing interface was actually the one being determined at runtime. So here is the next challenge (and its solution):
I provided a default object for StructureMap to use whenever it needs to create the object. Like:
x.ForRequestedType<IConnectionContext>()
    .TheDefault.Is.Object(new WebServiceConnection());
This way I got rid of the 202 error, because now a real instance could be used whenever StructureMap needed the type.
Next was the runtime override. That did not work out at first using the ObjectFactory.Configure method. Instead I used the ObjectFactory.Inject method to override the default instance. Works like a charm.
ObjectFactory.Inject(typeof(IConnectionContext), context);
Loving the community effort.
Error code 202 means a default instance could not be built for the requested type. Your test code is apparently not equivalent to the real code that fails. If you are getting an error about Uri, you likely have a dependency that requires a Uri in its constructor. It may not be the class you are asking for - it may be one of that class's dependencies, or one of the dependencies' dependencies... somewhere down the line something is asking StructureMap to resolve a Uri, which it cannot do without some help from you.
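Mirroring the syntax from the question's EDIT, you can supply the instance StructureMap cannot build itself; a sketch (the address is made up):

ObjectFactory.Configure(x =>
{
    // Give StructureMap a concrete Uri to hand to whichever dependency
    // down the line demands one in its constructor.
    x.ForRequestedType<Uri>()
        .TheDefault.Is.Object(new Uri("http://example.org/service"));
});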
