how do I make analysis warnings trigger runtime failure - dart

In the following example, main is allowed to call a sniff method through a Dog reference, and I would prefer that this somehow broke. If I state exactly what a Dog can do, but a client with special knowledge of the concrete type can get the object to do more, I consider that an encapsulation leak. I don't necessarily want it to die in the general case, but is there a flag, or a way to run the program, that would enforce that only methods declared on the static type can be called? I know the language has the knowledge that something is wrong, since the Dart Editor displays the warning: The method 'sniff' is not defined for the class 'Dog'. Even when run with the --checked flag, this runs fine.
So suppose similar code were invoked by a test. Is there a flag in Dart or some code to cause it to fail when the test is run?
abstract class Dog {
  void run();
  void bark();
}

class BigDog implements Dog {
  void run() => print("Big dog running");
  void bark() => print("Woof");
  void sniff() => print("Sniff");
}

main() {
  Dog bd = new BigDog();
  bd.run();
  bd.bark();
  bd.sniff();
}

Make sure to run the tests in checked mode (dart -c testfile.dart); then type annotations are taken into account. You shouldn't use checked mode in production, though, because it slows down your application.

This isn't possible. The problem is that your instance really is a BigDog; and therefore it really has a sniff method. An important goal in Dart is that type annotations do not affect runtime behaviour; therefore your Dog annotation is unable to change the behaviour.
The tooling is able to highlight this because it does use the type annotations.
This sort of question comes up a lot in the Dart bug tracker; but it would be a pretty fundamental change to allow type annotations to change behaviour, so it's not something I expect would ever be considered; confusing as it might be. See here for some more info on this.
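If you do want a runtime failure, one workaround (my own sketch, not something the language or a flag gives you) is to hand clients a forwarding wrapper that implements only the Dog interface; the analyzer warning stays, but the call now also fails at runtime with a NoSuchMethodError:

// Hypothetical wrapper: exposes only what Dog declares.
class DogView implements Dog {
  final Dog _inner;
  DogView(this._inner);
  void run() => _inner.run();
  void bark() => _inner.bark();
}

main() {
  Dog bd = new DogView(new BigDog());
  bd.run();
  bd.bark();
  bd.sniff(); // throws NoSuchMethodError at runtime
}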

Related

dart: wrap all function calls

I am attempting to write two versions of the same program:
a performant version; and
a slower version that lets the user know what's happening.
I imagine it's not entirely dissimilar to how an IDE might implement a normal/debug mode.
My requirements, in decreasing order of importance, are as follows:
1. the slow version should produce the same results as the performant version;
2. the slow version should wrap a subset of public function calls made by the performant version;
3. the requirement for the slower version should not adversely affect the performance of the performant version;
4. preferably no code reproduction, but automated reproduction where necessary;
5. minimal increase in code-base size; and
6. ideally, the slow version should be able to be packaged separately (presumably with a one-way dependency on the performant version).
I understand requirement 6 may be impossible, since requirement 2 requires access to a class's implementation details (for cases where a public function calls another public function).
For the sake of discussion, consider the following performant version of a program to tell a simple story.
class StoryTeller {
  void tellBeginning() => print('This story involves many characters.');
  void tellMiddle() => print('After a while, the plot thickens.');
  void tellEnd() => print('The characters resolve their issues.');

  void tellStory() {
    tellBeginning();
    tellMiddle();
    tellEnd();
  }
}
A naive implementation with mirrors looks like the following:
import 'dart:mirrors';

class Wrapper {
  _wrap(Function f, Symbol s) {
    var name = MirrorSystem.getName(s);
    print('Entering $name');
    var result = f();
    print('Leaving $name');
    return result;
  }
}

@proxy
class StoryTellerProxy extends Wrapper implements StoryTeller {
  final InstanceMirror mirror;

  StoryTellerProxy(StoryTeller storyTeller) : mirror = reflect(storyTeller);

  @override
  noSuchMethod(Invocation invocation) =>
      _wrap(() => mirror.delegate(invocation), invocation.memberName);
}
I love the elegance of this solution, since I can change the interface of the performant version and this just works. Unfortunately, it fails to satisfy requirement 2, since the inner calls of tellStory() are not wrapped.
A simple though more verbose solution exists:
class StoryTellerVerbose extends StoryTeller with Wrapper {
  void tellBeginning() => _wrap(() => super.tellBeginning(), #tellBeginning);
  void tellMiddle() => _wrap(() => super.tellMiddle(), #tellMiddle);
  void tellEnd() => _wrap(() => super.tellEnd(), #tellEnd);
  void tellStory() => _wrap(() => super.tellStory(), #tellStory);
}
This code can easily be auto-generated using mirrors, but it can result in a large increase in the code-base size, particularly if the performant version has an extensive class hierarchy and I want to have a const analogue to const variables of a class deep in the class tree.
Also, if any class doesn't have a public constructor, this approach prevents the separation of the packages (I think).
I've also considered wrapping all methods of the base class with a wrap method, with the performant version having a trivial wrap function. However, I'm worried this will adversely affect the performant version's performance, particularly if the wrap method were to require, say, an Invocation as an input. I also dislike the fact that this intrinsically links my performant version to the slow version. In my head, I'm thinking there must be a way to make the slower version an extension of the performant version, rather than both versions being extensions of some more general super-version.
Am I missing something really obvious? Is there an in-built 'anySuchMethod' or some such? I'm hoping to combine the elegance of the proxy solution with the completeness of the verbose solution.
You could try to put the additional debugging code inside assert(...) calls. This code gets automatically removed when the program is not run in checked mode. See also
Is there a compiler preprocessor in Dart?
How to exclude debug code
How to achieve precompiler directive like functionality
Otherwise, just make a global constant (const bool isSlow = true/false;), use interfaces everywhere, and use factory constructors which return the slow or the fast implementation of an interface depending on the isSlow value.
The slow version can just extend the fast version to reuse its functionality and extend it by overriding its methods.
This way you don't have to use mirrors, which cause code bloat, at least for client-side code.
When you build, all code that is unnecessary for the chosen setting of isSlow is removed by tree shaking.
Using dependency injection helps simplify this way of developing different implementations.
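A minimal Dart sketch of the factory-constructor approach (isSlow is the constant from the answer; the story-teller class names are hypothetical, not part of the answer):

const bool isSlow = false;

abstract class StoryTellerBase {
  // The factory constructor picks the implementation once, based on the constant.
  factory StoryTellerBase() =>
      isSlow ? new VerboseStoryTeller() : new FastStoryTeller();
  void tellStory();
}

class FastStoryTeller implements StoryTellerBase {
  void tellStory() => print('...the whole story...');
}

class VerboseStoryTeller extends FastStoryTeller {
  void tellStory() {
    print('Entering tellStory');
    super.tellStory();
    print('Leaving tellStory');
  }
}

main() {
  new StoryTellerBase().tellStory();
}

Because isSlow is a compile-time constant, a dart2js build should be able to tree-shake the implementation that is never constructed.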

cannot traverse the nodes of an AST, while assigning each node an ID

This is more of a simple personal attempt to understand what goes on inside Rascal. There must be a better (if not already supported) solution.
Here's the code:
fileLoad = |home:///PHPAnalysis/systems/ApilTestScripts/simple1.php|;
fileAST=loadPHPFile(fileLoad,true,false);
//assign a simple id to each node
public map[value,int] assignID12(node N)
{
  myID = ();
  visit(N)
  {
    case node M:
    {
      name = getName(M);
      myID[name] = 999;
    }
  }
  return myID;
}
ids=assignID12(fileAST);
gives me
|stdin:///|(92,4,<1,92>,<1,96>): Expected str, but got value
loadPHPFile returns a node of type: list[Stmt], where each Stmt is one of the many types of statements that could occur in a program (PHP, in my case). Without going into why I'd do this, why doesn't the above code work? Especially frustrating because a very simple example is worked out in the online documentation. See: http://tutor.rascal-mpl.org/Recipes/Basic/Basic.html#/Recipes/Common/CountConstructors/CountConstructors.html
I started a new console, and it seems to work. Of course, I changed the return type from map[value,int] to map[str,int] as it was originally in the example.
The problem I was having was that I may have erroneously defined the function previously. While I quickly fixed the apparent problem, it kept giving me errors. I realized that in Rascal, once you've started a console and imported certain definitions, it seems to be impossible to overwrite those definitions. The interpreter keeps referring to the very first definition that you provided. This could just be the interpreter performing a type check and preventing unintentional and/or incompatible assignments further down the road. That makes sense for variables (in the typical program sense), but it doesn't seem like the best idea to enforce it on functions (or methods). I feel it becomes cumbersome, because a user typically goes through some iterations before he/she is satisfied with a function definition. Just my opinion, though...
Most likely you already had the name ids in scope as having type map[str,int], which would be the direct source of the error. You can look in script https://github.com/cwi-swat/php-analysis/blob/master/src/lang/php/analysis/cfg/LabelState.rsc at the function labelScript to see how this is done in PHP AiR (so you don't need to write this code yourself). What this will give you is a script where all the expressions and statements have an assigned ID, as well as the label state, which just keeps track of some info used in this labeling operation (mainly the counter to generate a unique ID).
As for the earlier response, the best thing to do is to give your definitions in modules which you can import. If you do that, any changes to types, etc will be picked up (automatically if the module is already imported, since Rascal will reimport the module for you if it has changed, or when you next import the module). However, if you define something directly in the console, this won't happen. Think of the console as one large module that you keep adding to. Since we can have overloads of functions, if you define the function again you are really defining a new alternative to the function, but this may not work like you expect.

Dart using Json_object to produce strong typing

Using the ideas outlined in the article Using Dart with JSON Web Services at https://www.dartlang.org/articles/json-web-service/, I have been trying to implement the section on using JsonObject and interfaces to produce strong typing of JSON data.
The article indicates that one should write something like:
abstract class Language {
  String language;
  List targets;
  Map website;
}

class LanguageImpl extends JsonObject implements Language {
  LanguageImpl();

  factory LanguageImpl.fromJsonString(string) {
    return new JsonObject.fromJsonString(string, new LanguageImpl());
  }
}
However, the editor/analyzer flags the definition of the class LanguageImpl with the warning
Missing inherited members: 'Language.website', 'Language.targets' and
'Language.language'
Even more confusing, the code runs without a problem...
In Dart Editor you get quick-fix support for this.
Set the caret at LanguageImpl and press Ctrl+1, or use the context menu > Quick Fix.
The missing concrete implementations you inherit from the abstract base class are generated automatically.
Dart is a dynamic language and therefore very flexible.
The tools support you and try to give meaningful warnings and hints about what could go wrong, but they don't prevent you from running a program even when it is not yet finished.
You can use the @proxy annotation on the class to silence the warnings.
This also requires the class to have a noSuchMethod() implementation.
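For illustration, a sketch of that (assuming the JsonObject package behaves as the article describes, with noSuchMethod() forwarding property access to the underlying JSON map, so @proxy only silences the analyzer warning):

@proxy
class LanguageImpl extends JsonObject implements Language {
  LanguageImpl();

  factory LanguageImpl.fromJsonString(string) {
    return new JsonObject.fromJsonString(string, new LanguageImpl());
  }
}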

What parameters should be used for AbstractDeclarativeValidator.warning and error?

I have a working grammar on xtext, and am starting the validation of the code.
For this, I added a method in the validator xtext created for me.
Of course, when an expression isn't valid, I want to be able to give a warning on the given AST node.
I attempted the obvious:
@Check
public void testCheck(Expression_Multiplication m) {
  if (!(m.getLeft() instanceof Expression_Number)) {
    warning("Multiplication should be on numbers.", m.getLeft());
  }
  if (!(m.getRight() instanceof Expression_Number)) {
    warning("Multiplication should be on numbers.", m.getRight());
  }
}
Without success, as Expression_Number extends EObject, but is not an EStructuralFeature.
warning(String message, EStructuralFeature feature)
There are many other overloads of warning, but none that takes just a String and an EObject. Using null or various values extracted from eContainingFeature logs an error, and sometimes shows the warning at the correct place anyway. Searching for examples, I found that the values were often coming from the static fields of a class called Literals or ***Package; the one generated in the project contains EStructuralFeatures, but I have no idea which one to use, or why I would need one of these.
So the question is:
How can I place a warning on a given AST element?
The EStructuralFeature is the property of your AST. You'll find a generated EPackage class, which contains constants.
I guess in your case it is something like:
MyDslPackage.Literals.EXPRESSION_MULTIPLICATION__LEFT
and
MyDslPackage.Literals.EXPRESSION_MULTIPLICATION__RIGHT
I ended up using
private void warning(String text, EObject badAstNode) {
  // The -1 seems to come from a static member somewhere. Probably cleaner to
  // name it, but I couldn't find it again.
  warning(text, badAstNode, null, -1);
}
I have no idea about whether this is supposed to be the right way, but it seemed to work in the various cases I used it, and requires a minimal amount of state to be kept.

Why does one use dependency injection?

I'm trying to understand dependency injections (DI), and once again I failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon) and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: Often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's a quite good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the type resolution doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you put the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data specifying which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. But you could also put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time (!), that is, dynamically, without recompiling!
To give you a useful example for this: Think of a software that does not log normally, but when your customer calls and asks for help because he has a problem, all you send to him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces dependencies in your code, but now all your code has a dependency to the central, single service locator.
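A minimal Dart sketch of such a central service locator (ICanLog comes from the answer above; ConsoleLogger and the registration API are placeholders of mine, not a real library):

abstract class ICanLog {
  void log(String message);
}

class ConsoleLogger implements ICanLog {
  void log(String message) => print(message);
}

class ServiceLocator {
  // Maps an interface type to a factory function that builds the configured implementation.
  static final Map<Type, Function> _factories = {};

  static void register(Type type, Function factory) {
    _factories[type] = factory;
  }

  static dynamic resolve(Type type) => _factories[type]();
}

main() {
  // Configure the mapping once, centrally.
  ServiceLocator.register(ICanLog, () => new ConsoleLogger());

  // Every class that needs a logger asks the locator, and thereby depends on it.
  ICanLog logger = ServiceLocator.resolve(ICanLog);
  logger.log('hello');
}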
Dependency injection is now the next step in this line: Just get rid of this single dependency to the service locator: Instead of various classes asking the service locator for an implementation of a specific interface, you - once again - invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you can not do without a service locator (e.g., if you do not know in advance how many instances you do need of a specific interface: A DI framework always injects only one instance per parameter, but you can call a service locator inside a loop, of course), hence most often each DI framework also provides a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection, there is also property injection where not constructor parameters, but properties are being used for defining and resolving dependencies. Think of property injection as an optional dependency, and of constructor injection as mandatory dependencies. But discussion on this is beyond the scope of this question.
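A tiny Dart sketch of the two flavours (apart from Database and ICanLog, the names are illustrative placeholders, not part of the answer above):

abstract class ICanLog {
  void log(String message);
}

class ConsoleLogger implements ICanLog {
  void log(String message) => print(message);
}

class Database {
  // Constructor injection: the dependency is mandatory and fixed at construction.
  final ICanLog logger;
  Database(this.logger);
}

class ReportGenerator {
  // Property (setter) injection: the dependency is optional and can be set or swapped later.
  ICanLog logger;
}

main() {
  var db = new Database(new ConsoleLogger());
  db.logger.log('constructed with a logger');

  var reports = new ReportGenerator()..logger = new ConsoleLogger();
  reports.logger.log('logger assigned afterwards');
}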
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside, and take control of their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The situation where you "need more than one logger" is never actually met, as far as my experience goes, but the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date that is being set will differ from the date being asserted. Even if you put DateTime.Now in the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution is to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other one allows you to fake the time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing; there are other ways of doing it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
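The same idea carries over to Dart, where functions are first-class as well; a rough sketch (the names mirror the C# example above and are only illustrative):

typedef DateTime Clock();

class Offer {
  DateTime lastUpdated;
}

class OfferWeasel {
  final Clock _now;
  OfferWeasel(this._now);

  Offer create() {
    var offer = new Offer();
    offer.lastUpdated = _now();
    return offer;
  }
}

main() {
  // Production: inject the real clock.
  var weasel = new OfferWeasel(() => new DateTime.now());
  print(weasel.create().lastUpdated);

  // Test: inject a canned clock.
  var canned = new DateTime(2013, 1, 13, 13, 1);
  var testWeasel = new OfferWeasel(() => canned);
  assert(testWeasel.create().lastUpdated == canned);
}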
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class because it's not decoupled properly. For every new processor you support, you'll need to add a new if/switch case to every method, and this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you could achieve something very neat and maintainable.
class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
        }
    }
}

interface PaymentProcessor {
    void authorize();
}
** The code won't compile, I know :)
The main reason to use DI is that you want to put the responsibility for the knowledge of the implementation where the knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time, with code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, hence the front end has knowledge of which implementation is used. DI takes design by interface one step further, so that the only thing the front end needs to know is the interface.
In between doing it yourself (hardcoding the implementation) and DI sits the service locator pattern, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(String pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation for each interface at a time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding. They should go together. The use of D(b)I as a technical solution without organizational administration of service contracts is not very beneficial in my point of view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning, and the development of alternatives. When you have an implicit interface, as in a hardcoded class, then it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time and not at a single point in time. :-)
Quite frankly, I believe people use these dependency injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like
var logger = new Logger() //sane, simple code
And the appropriate logger will be instantiated because the JVM (or whatever other runtime or .so loader you have) would fetch it from the class configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class, in load time, and skip the mocking framework madness.
