Self and super in Nix overlays

In Nix, an overlay is a function of two arguments: self and super. According to the manual, self corresponds to the final package set (some call it the result of the fixed-point calculation) and should only be used when dealing with dependencies, while super is the result of evaluating the previous stages of nixpkgs and should only be used to refer to packages you want to override or to access certain functions.
Sadly, I don't really understand this. In what way does nixpkgs get updated by the overlays such that the two restrictions above apply?

These restrictions follow from the requirement that evaluation of an attribute should terminate.
Suppose you want to override the hello package. To reference the old definition of the package, you need to use super.hello, because that attribute can be evaluated without evaluating the hello definition in your overlay. If you instead referenced self.hello, then evaluating the final hello attribute would require evaluating self.hello, which references the final hello attribute, which references self.hello, and so on, creating an infinite recursion.
self can actually be used to reference functions, but the convention seems to be to use super instead. The idea that the next overlay might monkey-patch the lib.head function is not very enticing, although with super the same thing can still be done by a previous overlay.
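As a minimal sketch of the terminating pattern, here is an overlay that overrides hello, taking the old definition from super and a dependency from self (the extra zlib dependency is just an illustrative assumption):

self: super: {
  # The old definition comes from super, so this attribute can be
  # evaluated without recursing into the overlay's own result.
  hello = super.hello.overrideAttrs (old: {
    # Dependencies come from self, the final package set, so later
    # overlays can still influence them.
    buildInputs = (old.buildInputs or []) ++ [ self.zlib ];
  });
}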
You may also want to check out this excellent NixCon 2017 presentation by Nicolas. He both introduces the concept and explains how you can use it in the best way.

Related

Why is using final, with no type, considered good practice in Dart? i.e. `final foo = config.foo;`?

I see this recommended in the dart style guide, and copied in tons of tutorials and flutter source.
final foo = config.foo;
I don't understand it, how is this considered best practice when the readability is so poor? I have no clue what foo is here, surely final String foo = config.foo is preferable if we really want to use final?
This seems equivalent to using var, which many consider a bad practice because it prevents the compiler from spotting errors and is less readable.
Where am I wrong?
In a lot of cases it does not really matter which type you write, as long as the specific type can be statically determined by the compiler. When you use var/final in Dart, it is not that Dart does not know the type; it just figures it out automatically from the context. But the type must still be fixed when the program is compiled, so types are never dynamic based on runtime behavior. If you want truly dynamic types, you can use the dynamic keyword, which tells Dart "trust me, I know what I am doing with these types".
So types should still be written out where they matter most, e.g. for the return and argument types of methods and for class variables. The common factor is that these types define the interface for using the method or class.
But when you are writing code inside a method, you are often not that interested in the specific types of the variables used there. Instead, the focus should be on the logic itself, and you can often make it even clearer by using good, descriptive variable names. As long as the Dart analyzer can figure out the type, you get autocomplete from your IDE, and you can still look up the specific type (e.g. Ctrl+Q in IntelliJ) if you end up in a situation where you want to know it.
This is particularly the case with generics, where it can be really tiresome to write out the full type, especially when generics are nested, e.g. Map<String, List<String>>.
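A small sketch of what the inference buys you (the names here are made up): the analyzer infers the full generic type from the right-hand side, so repeating it on the left adds no information:

// The analyzer infers Map<String, List<String>> for both declarations.
final tagsByUser = <String, List<String>>{
  'alice': ['admin', 'dev'],
};

// Equivalent, but the type is spelled out twice:
final Map<String, List<String>> tagsByUser2 = <String, List<String>>{
  'alice': ['admin', 'dev'],
};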
In my experience, Dart is really good at figuring out very specific types and will complain loudly if a type in your code cannot be determined statically. In the near future, Dart will introduce non-null-by-default, which will make the Dart compiler and analyzer even more powerful, since it will ensure your variable cannot be null unless you specifically want it to be, and it will make sure you test for null when using methods that are specified as not accepting null.
About "Where am I wrong?": a lot of languages have a var/final feature similar to Dart's, with the same design principle that the type must still be statically determined by a compiler or runtime. Even Java has introduced this feature. As an experienced Java and Dart programmer, I have come to the conclusion that the types inside methods are really not that important in a lot of cases, as long as I can still easily look up the specific type in an IDE when it really matters.
But it does make it more important to name your variables so that they clearly convey their purpose. I hope you are already doing that, though. :)

D/Dlang: Lua interface, any way to force users to have no access to intermediate objects?

Status: Sort of solved. Switching Lua.Ref (a close equivalent of LuaD's LuaObject) to a struct, as suggested in the answer, solved most of the issues related to freeing references, and I changed back to a mechanism similar to the one LuaD uses. More about this at the end.
In one of my projects, I am working on a Lua interface. I have mainly borrowed the ideas from LuaD. The mechanism in LuaD uses lua_ref & lua_unref to be able to pass Lua table/function references around in D space, but this causes heavy problems because the calls to destructors and their order are not guaranteed. LuaD usually segfaults, at least at program exit.
Because it seems that LuaD is not maintained anymore, I decided to write my own interface for my purposes. My Lua interface class is here: https://github.com/mkoskim/games/blob/master/engine/util/lua.d
Usage examples can be found here:
https://github.com/mkoskim/games/blob/master/demo/luasketch/luademo.d
And in case you need, the Lua script used by the example is here:
https://github.com/mkoskim/games/blob/master/demo/luasketch/data/test.lua
The interface works like this:
Lua.opIndex pushes the global table and an index key onto the stack and returns a Top object. For example, lua["math"] pushes _G and "math" onto the stack.
Further accesses go through the Top object. Top.opIndex goes deeper into the table hierarchy. The other methods (call, get, set) are "final" methods, which perform an operation with the table and key at the top of the stack and clean the stack afterwards.
Pretty much everything works fine, except that this mechanism has a nasty quirk/bug that I have no idea how to solve: if you don't call any of the "final" methods, Top leaves the table and key on the stack:
lua["math"]["abs"].call(-1); // Works. Final method (call) called.
lua["math"]["abs"]; // table ref & key left to stack :(
What I know for sure is that playing with the Top() destructor does not work, as it is not called immediately when the object is no longer referenced.
NOTE: If there were some sort of operator that is called when an object is accessed as an rvalue, I could replace the call(), set() and get() methods with operator overloads.
Questions:
Is there any way to prevent users from writing such expressions (getting a Top object without calling any of the "final" methods)? I really don't want users to write e.g. luafunc = lua["math"]["abs"] and then later try to call it, because it won't work at all, not without starting to play with lua_ref & lua_unref and fighting the same issues LuaD has.
Is there any kind of opAccess operator overloading, that is, a way to overload what happens when an object is used as an rvalue? That is, so the expression "a = b" evaluates as "a.opAssign(b.opAccess)"? opCast does not work; it is called only on explicit casts.
Any other suggestions? I have a feeling that I am looking at the solution from the wrong direction, and that the problem lies in the realm of metaprogramming: I am trying to "scope" things at the expression level, which I feel is not well suited to classes and objects.
So far, I have tried to preserve the LuaD look'n'feel on the interface user's side, but I think that if I changed the interface to something like the following, I could get it working:
lua.call(["math", "abs"], 1); // call lua.math.abs(1)
lua.set(["table", "x", "y", "z"], 2); // lua table.x.y.z = 2
...
Syntactically, that would ensure that a reference to a Lua object fetched by indexing is always used for something in the expression, so the stack would get cleaned.
UPDATE: As mentioned, changing Lua.Ref to a struct solved the problems related to dereferencing, and I am again using a reference mechanism similar to LuaD's. I personally feel that this mechanism suits the LuaD-style syntax I am using, and that it could be quite a challenge to make the syntax work correctly with other mechanisms. I am still open to ideas for making it work.
The system I sketched to replace references (to tackle the problem of objects holding references that outlive the Lua sandbox) would probably need a different kind of interface, something like the one sketched above.
You also have an issue when people do
auto math_abs = lua["math"]["abs"];
math_abs.call(1);
math_abs.call(3);
This will double pop.
Make Top a struct that holds the stack index of the value it references. That way you can use its known scoping and destruction behavior to your advantage. Make sure you handle this(this) correctly as well.
Only pop in the destructor when the value is the actual top value. You can use a bitset in LuaInterface to track which stack positions are in use, and fill freed slots via lua_replace if you are worried about excessive stack use.
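A minimal sketch of that struct, assuming a stored lua_State* and absolute stack indices (error handling and the bitset bookkeeping are omitted; everything except the Lua C API calls is an invented name):

struct Top {
    lua_State* L;   // assumes bindings to the Lua C API are imported
    int index;      // absolute stack index of the value this Top references

    // Postblit: the copy duplicates its value onto a fresh slot,
    // so each copy can pop independently without double-popping.
    this(this) {
        lua_pushvalue(L, index);
        index = lua_gettop(L);
    }

    ~this() {
        // Only pop when we are actually the top value; otherwise the
        // slot would have to be released by other bookkeeping.
        if (index == lua_gettop(L))
            lua_pop(L, 1);
    }
}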

Using Dart as a DSL

I am trying to use Dart to tersely define entities in an application, following the idiom of code = configuration. Since I will be defining many entities, I'd like to keep the code as trim and concise and readable as possible.
In an effort to keep boilerplate as close to 0 lines as possible, I recently wrote some code like this:
// man.dart
part of entity_component_framework;
var _man = entity('man', (entityBuilder) {
  entityBuilder.add([TopHat, CrookedTeeth]);
});
// test.dart
part of entity_component_framework;
var man = EntityBuilder.entities['man']; // null, since _man wasn't ever accessed.
The entity method associates the entityBuilder passed into the function with a name ('man' in this case). var _man exists because only declarations (not bare statements) are allowed at the top level in Dart. This seems to be the most concise way possible to use Dart as a DSL.
One thing I wasn't counting on, though, is lazy initialization. If I never access _man (and I had no intention to, since the entity function neatly stored all the relevant information in another data structure), then the entity function is never run. This is a feature, not a bug.
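For concreteness, a minimal repro of the behavior (names invented): the initializer of a top-level variable only runs on first access:

var _sideEffect = (() {
  print('initializer ran');
  return 42;
})();

void main() {
  print('start'); // only 'start' is printed; _sideEffect is never read
}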
So, what's the cleanest way of using Dart as a DSL given the lazy initialization restriction?
So, as you point out, it's a feature that Dart doesn't run any code until it's told to. If you want something to happen, you need to make it happen in code that runs. Some possibilities:
Put your calls to entity() inside the main() function. I assume you don't want to do that, and probably that you want people to be able to add more of these in additional files without modifying the originals.
If you're willing to incur the overhead of mirrors, which is probably not that much if they're confined to this library, use them to find all the top-level variables in that library and access them (see the sketch after this list). Or define them as functions or getters. But I assume that you like the property that variables are automatically one-shot. You'd want to use a MirrorsUsed annotation.
A variation on that would be to use annotations to mark the things you want to be initialized, though this is similar in that you'd have to iterate over the annotated things, which I think would also require mirrors.
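A rough sketch of the mirrors option, assuming the library is named entity_component_framework as in the question (note that dart:mirrors is not available on every platform, and you'd want a MirrorsUsed annotation to limit the overhead):

import 'dart:mirrors';

// Touch every top-level variable in the library so that each lazy
// entity(...) initializer actually runs.
void initEntities() {
  final lib =
      currentMirrorSystem().findLibrary(#entity_component_framework);
  for (final decl in lib.declarations.values) {
    if (decl is VariableMirror && decl.isTopLevel) {
      lib.getField(decl.simpleName); // forces lazy initialization
    }
  }
}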

Regarding F# Object Oriented Programming

There's a dichotomy in the way we can create classes in F# that really bothers me. I can create classes using either an implicit or an explicit format, but some of the features I want are only available in the implicit format and some only in the explicit one.
For example:
I can't use let inline* (or even a plain let) inside an explicitly defined class.
The only way (that I know of) to define immutable public fields (not properties*) inside an implicitly defined class is the val bla : bla syntax.
But there's a redundancy here, since I'll end up with two copies of the same immutable data, one private and one public (because in implicit mode the constructor parameters persist throughout the class's existence).
(Not so relevant) The need to use attributes for method overloading and for fields' defaults is rather off-putting.
Is there any way I can work around this?
*For performance reasons
EDIT: Turns out I was wrong about both points (thanks Ganesh Sittampalam & MichaelGG).
While I can't use let inline in either the implicit or the explicit class definition, I can use member inline just fine, which I assume does the same thing.
Apparently, with the latest F#, there's no longer any redundancy, since any parameters not used in the class body are local to the constructor.
Will be gone in the next F# release.
This might not help, but you can make members inline: "member inline private" works fine.
For let inline, you can work around it by moving the function outside the class and explicitly passing in any values you need from the class's scope when calling it. Since it'll be inlined, there'll be no performance penalty for doing this.
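A minimal sketch of that workaround (the names here are invented for illustration):

// Module-level inline function; the state it needs is passed explicitly.
let inline scaleBy factor x = factor * x

type Scaler(factor : float) =
    // Forward the class's state at the call site; since scaleBy is
    // inlined, the extra argument costs nothing at runtime.
    member this.Apply(x : float) = scaleBy factor x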

Usage of inline closures / function delegates in ActionScript

Why are inline closures so rarely used in ActionScript? They are very powerful and, I think, quite readable. I hardly ever see anyone using them, so maybe I'm just looking at the wrong code. Google uses them in their Google Maps API for Flash samples, but I think that's the only place I've seen them.
I favor them because you have access to the local variables of the scope that defines them, you keep the logic in one method, and you don't end up with lots of functions for which you have to come up with a name.
Are there any catches to using them? Do they work pretty much the same way as in C#?
I actually only just discovered that AS3 supports them, and I'm quite annoyed because I had thought I read that they were deprecated in AS3. So I'm back to using them!
private function showPanel(index:int):void {
    _timer = new Timer(1000, 1);
    _timer.addEventListener(TimerEvent.TIMER, function(event:Event):void
    {
        // show the next panel
        showPanel(index + 1);
    });
    _timer.start();
}
The biggest gotcha to watch out for is that often 'this' is not defined in the inline closure. Sometimes you can set a 'this', but it's not always the right 'this' that you would have available to set, depending on how you're using them.
But I'd say most of the Flex code I've worked on has had inline closures rampant throughout the code, since callbacks are the only way to get work done, and often you don't need to break out a whole separate function.
Sometimes, when the nested function is getting to be too much, I'll break it out slightly by assigning it to Function variables inside the enclosing function; this helps me organize a bit by giving labels to the functions while keeping some of the characteristics of inline closures (access to the local variables, for example).
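A small sketch of that pattern (the names and the URLLoader usage are just for illustration):

private function loadConfig(url:String):void {
    var loader:URLLoader = new URLLoader();
    // A named Function variable: labeled like a method, but still a
    // closure with access to the enclosing locals (url, loader).
    var onLoaded:Function = function(event:Event):void {
        trace("loaded " + url + ": " + loader.data.length + " chars");
    };
    loader.addEventListener(Event.COMPLETE, onLoaded);
    loader.load(new URLRequest(url));
}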
Hope this helps.
One additional problem is that garbage collection is broken when it comes to closures (at least in Flash 9). The first instance of a given closure (from a lexical standpoint) will never be garbage collected, along with anything else referenced by the closure in the scope chain.
I found what originally made me NOT want to do this, but I had forgotten the details:
http://livedocs.adobe.com/flex/3/html/16_Event_handling_6.html#119539
(This is what Mitch mentioned, as far as the 'this' keyword being out of scope goes.)
So that's Adobe's answer; however, I am much more likely to need to refer to local variables than to 'this'.
How do others interpret Adobe's recommendation?
