Vala object class lighter than normal class

"What happens if you don't inherit from Object? Nothing terrible. These classes will be slightly more lightweight, however, they will lack some features such as property change notifications, and your objects won't have a common base class. Usually inheriting from Object is what you want." Vala team said.
So I wanted to know how light the classes are with or without inheriting form Object.
So, Here are my test files
test1.vala:
class Aaaa : Object {
    public Aaaa () { print ("hello\n"); }
}

void main () { new Aaaa (); }

test2.vala:

class Aaaa {
    public Aaaa () { print ("hello\n"); }
}

void main () { new Aaaa (); }
The results after compilation were totally unexpected: test1 is 9.3 kB and test2 is 14.9 kB, which contradicts what they said. Can someone explain this, please?

You are comparing the produced object code / executable size, but that's not what the statement from the tutorial was referring to.
It is referring to the features that your class will support. It's just clarifying that you don't get all of the functionality that GLib.Object / GObject provides.
In C# (and in Java, too?) the type system is "rooted" which means all classes always derive implicitly from System.Object. That is not the case for Vala. Vala classes can be "stand alone" classes which means that these stand alone classes don't have any parent class (not even GLib.Object / GObject).
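For illustration, here is the rooted behaviour in C# that the comparison refers to (a minimal sketch; the class name just mirrors the question's example):

class Aaaa { }   // implicitly derives from System.Object

class Program
{
    static void Main()
    {
        object o = new Aaaa();                   // upcast to the root type always works
        System.Console.WriteLine(o.ToString());  // Equals, GetHashCode, ToString are inherited
    }
}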
The code size is bigger because the stand-alone class doesn't reuse any functionality from GLib.Object / GObject (which is implemented in GLib), so the compiler has to output more boilerplate code (writing classes in C always involves a lot of boilerplate code).
You can compare for yourself with valac -C yourfile.vala, which will produce a yourfile.c file.

That's a very interesting question. The answer will take you deep into how GObjects work. With these kinds of questions, a useful feature of valac is the --ccode switch, which produces the C code instead of the binary. If you look at the C code of the second sample, which doesn't inherit from Object, it includes a lot more functions, such as aaaa_ref and aaaa_unref. These are basic functions used to handle objects in GLib's object system. When you inherit from Object, these functions are already defined in the parent class, so the C code and the resulting binary are smaller.
By just using class without inheriting from Object you are creating your own GType, but not inheriting all the features of Object, so in that sense your classes are lighter weight. This also makes them quicker to instantiate: if you time how long it takes to create a huge number of plain GType objects compared to the same number of GObject-derived objects, you should see the GType objects being created more quickly. As you have pointed out, GType objects lose some additional features, so the choice depends on your application.

Related

How to properly use class extensions in Swift?

In Swift, I have historically used extensions to extend closed types and provide handy, logic-less functionality, like animations, math extensions, etc. However, since extensions are hard dependencies sprinkled all over your code base, I always think three times before implementing something as an extension.
Lately, though, I have seen that Apple suggests using extensions to an even greater extent, e.g. implementing protocols as separate extensions.
That is, if you have a class A that implements protocol B, you end up with this design:
class A {
    // Initializers, stored properties etc.
}

extension A: B {
    // Protocol implementation
}
As I went down that rabbit hole, I started seeing more extension-based code, like:
fileprivate extension A {
    // Private, calculated properties
}

fileprivate extension A {
    // Private functions
}
One part of me likes the building-blocks you get when you implement protocols in separate extensions. It makes the separate parts of the class really distinct. However, as soon as you inherit this class, you will have to change this design, since extension functions cannot be overridden.
I think the second approach is... interesting. One great thing about it is that you do not have to annotate each private property and function as private, since you can specify that for the whole extension.
However, this design also splits up stored and non-stored properties, public and private functions, making the "logic" of the class harder to follow (write smaller classes, I know). That, together with the subclassing issues, makes me halt a bit on the porch of extension wonderland.
Would love to hear how the Swift community of the world looks at extensions. What do you think? Is there a silver bullet?
This is only my opinion, of course, so take what I write with a pinch of salt.
I'm currently using the extension approach in my projects, for a few reasons:
The code is much cleaner: my classes are never over 150 lines, and the separation through extensions makes my code more readable and organized by responsibility.
This is usually what a class looks like:
final class A {
    // Here the public and private stored properties
}

extension A {
    // Here the public methods and public non-stored properties
}

fileprivate extension A {
    // Here my private methods
}
There can be more than one extension, of course; it depends on what your class does. This is simply useful for organizing your code and reading it from the Xcode top bar.
It reminds me that Swift is a protocol-oriented programming language, not an OOP language. There is nothing you can't do with protocols and protocol extensions. And I prefer to use protocols for adding a security layer to my classes / structs. For example, I usually write my models this way:
protocol User {
    var uid: String { get }
    var name: String { get }
}

final class UserModel: User {
    var uid: String
    var name: String

    init(uid: String, name: String) {
        self.uid = uid
        self.name = name
    }
}
This way you can still edit your uid and name values inside the UserModel class, but not from outside, since you'll only be handling the User protocol type.
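The same access-control trick can be mirrored in C# with a get-only interface, if that helps translate the idea (IUser and UserModel here are hypothetical counterparts of the Swift code above):

interface IUser
{
    string Uid { get; }
    string Name { get; }
}

sealed class UserModel : IUser
{
    public string Uid { get; set; }
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        IUser user = new UserModel { Uid = "1", Name = "Ada" };
        // user.Name = "Bob";  // does not compile: IUser.Name has no setter
        System.Console.WriteLine(user.Name);
    }
}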
I use a similar approach, which can be described in one sentence:
Sort a type's responsibilities into extensions
These are examples for aspects I'm putting into individual extensions:
A type's main interface, as seen from a client.
Protocol conformances (i.e. a delegate protocol, often private).
Serialization (for example everything NSCoding related).
Parts of a type that live on a background thread, like network callbacks.
Sometimes, when the complexity of a single aspect rises, I even split a type's implementation over more than one file.
Here are some details that describe how I sort implementation related code:
The focus is on functional membership.
Keep public and private implementations close, but separated.
Don't split between var and func.
Keep all aspects of a functionality's implementation together: nested types, initializers, protocol conformances, etc.
Advantage
The main reason to separate aspects of a type is to make it easier to read and understand.
When reading foreign (or my own old) code, understanding the big picture is often the most difficult part of diving in. Giving a developer an idea of the context of some method helps a lot.
There's another benefit: Access control makes it easier not to call something inadvertently. A method that is only supposed to be called from a background thread can be declared private in the "background" extension. Now it simply can't be called from elsewhere.
Current Restrictions
Swift 3 imposes certain restrictions on this style. There are a couple of things that can only live in the main type's implementation:
stored properties
overriding func/var
overridable func/var
required (designated) initializers
These restrictions (at least the first three) come from the necessity of knowing the object's data layout (and witness table, for pure Swift) in advance. Extensions can potentially be loaded late at runtime (via frameworks, plugins, dlopen, ...), and changing the type's layout after instances have been created would break their ABI.
A modest proposal for the Swift team :)
All code from one module is guaranteed to be available at the same time. The restrictions that prevent fully separating functional aspects could be circumvented if the Swift compiler allowed "composing" types within a single module. By composing types I mean that the compiler would collect all declarations that define a type's layout from all files within a module. As with other aspects of the language, it would find the cross-file dependencies automatically.
This would allow one to write truly "aspect-oriented" extensions. Not having to declare stored properties or overrides in the main declaration would enable better access control and separation of concerns.
I hate it. It adds extra complexity and muddies the use of extensions, making it unclear what people are using the extensions for.
If you're using an extension for protocol conformance, OK, I can see that, but why not just comment your code? How is this better? I don't see that.

Why are these contravariant argument types considered safe?

I just learned in my programming languages class that "contravariant argument types would actually be safe, but they have not been found useful and are hence not supported in practical languages." Even though they are not supported, I am confused as to why something like this example we were given would still be, in theory, "safe":
class Animal {
    ...
    public bool compare(Panda) { ... }
}

class Panda extends Animal {
    ...
    public bool compare(Animal) { ... }
}
From what I understand, problems with subtyping come up when something is done that could cause a loss of specificity. So what if I did this:
Panda p = new Panda();
Animal a = new Animal();
...
p.compare(a);
When I look at this, it seems like panda could (and probably does) have some extra fields in it that a plain animal wouldn't know about. Thus, even if all of their animal-specific data members are the same, a panda can have other stuff that differs. How would that make it okay to compare it to a plain animal? Would it just consider the animal-only stuff and ignore the rest?
In your example you don't use any generic types. You have Panda extending Animal; that's an example of inheritance and leads to polymorphism, which is more or less what you describe. Check the links.
To get contravariance, you need to consider some generic type. I'll use the .NET type IComparer`1[T] as an example. With C# syntax (which I'll use rather than Java), we indicate that IComparer is contravariant in T by writing "in" in the definition:
public interface IComparer<in T>
{
    ...
}
Suppose I have a method which returns an IComparer`1[Animal] (or IComparer<Animal>), like:
static IComparer<Animal> CreateAnimalComparer()
{
    // code that returns something here
}
Now in C#, it's legal to say:
IComparer<Panda> myPandaComparer = CreateAnimalComparer();
Now, this is because of contravariance. Note that the type IComparer<Animal> does not derive from (or "extend") the type IComparer<Panda>. Instead, Panda derives from Animal, and this makes the IComparer<Xxxx> types assignable to each other in the opposite direction, hence "contravariance" (not "covariance").
The reason it's meaningful to declare IComparer<> contravariant is this: if you have a comparer that can compare two arbitrary animals and return a signed number indicating which is greater, then that same comparer can also take in two pandas and compare those. For pandas are animals.
So the relation
any Panda is an Animal
(from inheritance) leads to the relation
any IComparer<Animal> is an IComparer<Panda>
(by contravariance).
For an example with covariance, the same relation
any Panda is an Animal
leads to
any IEnumerable<Panda> is an IEnumerable<Animal>
by covariance (IEnumerable<out T>).
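Putting both directions together in a small compilable sketch (the Animal, Panda, and comparer types are hypothetical stand-ins):

using System.Collections.Generic;

class Animal { public int Size; }
class Panda : Animal { }

// A comparer that can rank any two Animals by size...
class BySizeComparer : IComparer<Animal>
{
    public int Compare(Animal x, Animal y) => x.Size.CompareTo(y.Size);
}

class Demo
{
    static void Main()
    {
        // ...can stand in where an IComparer<Panda> is expected,
        // because IComparer<in T> is contravariant:
        IComparer<Panda> pandaComparer = new BySizeComparer();

        // Covariance runs the other way (IEnumerable<out T>):
        IEnumerable<Animal> animals = new List<Panda>();
    }
}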

XNA/C#: Entity Factories and typeof(T) performance

In our game (targeted at mobile) we have a few different entity types and I'm writing a factory/repository to handle instantiation of new entities. Each concrete entity type has its own factory implementation and these factories are managed by an EntityRepository.
I'd like to implement the repository as such:
class Repository
{
    private Dictionary<System.Type, IEntityFactory<IEntity>> factoryDict;

    public T CreateEntity<T>(/* params */) where T : class, IEntity
    {
        return factoryDict[typeof(T)].CreateEntity() as T;
    }
}
Usage example:
var enemy = repo.CreateEntity<Enemy>();
but I am concerned about performance, specifically related to the typeof(T) operation above. It is my understanding that the compiler will not be able to determine T's type, so it will have to be determined at runtime via reflection. Is this correct? One alternative is:
class Repository
{
    private Dictionary<System.Type, IEntityFactory> factoryDict;

    public IEntity CreateEntity(System.Type type /*, params */)
    {
        return factoryDict[type].CreateEntity();
    }
}
which will be used as
var enemy = (Enemy)repo.CreateEntity(typeof(Enemy), params);
In this case, whenever typeof() is called the type is at hand and can be determined by the compiler (right?), so performance should be better. Will there be a noticeable difference? Any other considerations? I know I could also just have a method such as CreateEnemy in the repository (we only have a few entity types), which would be faster, but I would prefer to keep the repository as entity-unaware as possible.
EDIT:
I know that this most likely won't be a bottleneck; my concern is just that it seems a waste to spend time on reflection when a slightly less sugared alternative is available. And I think it's an interesting question :)
I did some benchmarking, which proved quite interesting (and which seems to confirm my initial suspicions).
Using the performance measurement tool I found at
http://blogs.msdn.com/b/vancem/archive/2006/09/21/765648.aspx
(which runs a test method several times and displays metrics such as average time), I conducted a basic test, testing:
private static T GenFunc<T>() where T : class
{
    return dict[typeof(T)] as T;
}
against
private static Object ParamFunc(System.Type type)
{
    var d = dict[type];
    return d;
}
called as
str = GenFunc<string>();
vs
str = (String)ParamFunc(typeof(String));
respectively. ParamFunc shows a remarkable improvement in performance (it executes on average in 60-70% of the time GenFunc takes), but the test is quite rudimentary and I might be missing a few things, specifically how the casting is performed in the generic function.
An interesting aside is that there is little (negligible) performance gained by 'caching' the type in a variable and passing it to ParamFunc versus using typeof() every time.
Generics in C# don't use or need reflection.
Internally, types are passed around as RuntimeTypeHandle values, and the typeof operator maps to Type.GetTypeFromHandle (MSDN). Without looking at Rotor or Mono to check, I would expect GetTypeFromHandle to be O(1) and very fast (e.g. an array lookup).
So in the generic (<T>) case you're essentially passing a RuntimeTypeHandle into your method and calling GetTypeFromHandle in your method. In the non-generic case you're calling GetTypeFromHandle first and then passing the resulting Type into your method. Performance should be near identical, and massively outweighed by other factors, like any places you're allocating memory (e.g. if you're using the params keyword).
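To make the two call shapes concrete, here is a minimal sketch (the factory dictionary is a hypothetical stand-in for the repository above):

using System;
using System.Collections.Generic;

class FactoryDemo
{
    static readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>> { [typeof(string)] = () => "hello" };

    // Generic shape: typeof(T) is resolved from the RuntimeTypeHandle
    // the runtime supplies for T (a GetTypeFromHandle call).
    static T Create<T>() where T : class
        => (T)factories[typeof(T)]();

    // Non-generic shape: the caller resolves the Type up front.
    static object Create(Type type) => factories[type]();

    static void Main()
    {
        var a = Create<string>();               // generic path
        var b = (string)Create(typeof(string)); // explicit path
        Console.WriteLine(a + b);
    }
}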
But it's a factory method anyway. Surely it won't be called more than a couple of times per second? Is it even worth optimising?
You always hear how slow reflection is, but in C# there is actually fast reflection and slow reflection. typeof is fast reflection: it has basically the overhead of a method call, which is nearly infinitesimal.
I would bet a steak and lobster dinner that this isn't going to be a performance bottleneck in your application, so it's not even worth your (or our) time in trying to optimize it. It's been said a million times before, but it's worth saying again: "Premature optimization is the root of all evil."
So, finish writing the application, then profile to determine where your bottlenecks are. If this turns out to be one of them, then and only then spend time optimizing it. And let me know where you'd like to have dinner.
Also, my comment above is worth repeating, so you don't spend any more time reinventing the wheel: Any decent IoC container (such as AutoFac) can [create factory methods] automatically. If you use one of those, there is no need to write your own repository, or to write your own CreateEntity() methods, or even to call the CreateEntity() method yourself - the library does all of this for you.

Do Delphi generic collections override equals?

This is a question for the generic collection gurus.
I'm shocked to find that TList does not override Equals. Take a look at this example:
list1:=TList<String>.Create;
list2:=TList<String>.Create;
list1.Add('Test');
list2.Add('Test');
Result:=list1.Equals(list2);
"Result" is false, even though the two Lists contain the same data. It is using the default equals() (which just compares the two references for equality).
Looking at the code, it looks like the same is true for all the other generic collection types too.
Is this right, or am I missing something?
It seems like a big problem when trying to use TLists in practice. How do I get around this? Do I create my own TBetterList that extends TList and overrides Equals to do something useful?
Or will I run into further complications with Delphi generics...?
[edit: I have one answer so far, with a lot of upvotes, but it doesn't really tell me what I want to know. I'll try to rephrase the question]
In Java, I can do this:
List<Person> list1=new ArrayList<Person>();
List<Person> list2=new ArrayList<Person>();
list1.add(person1);
list2.add(person1);
boolean result=list1.equals(list2);
result will be true. I don't have to subclass anything, it just works.
How can I do the equivalent in Delphi?
If I write the same code in Delphi, result will end up false.
If there is a solution that only works with TObjects but not Strings or Integers then that would be very useful too.
Generics aren't directly relevant to the crux of this question: the choice of what constitutes a valid base implementation of an Equals() test is entirely arbitrary. The current implementation of TList.Equals() is at least consistent with (I think) all other similar base classes in the VCL, and by similar I don't just mean collection or generic classes.
For example, TPersistent.Equals() also does a simple reference comparison - it does not compare values of any published properties, which would arguably be the semantic equivalent of the type of equality test you have in mind for TList.
You talk about creating a TBetterList and doing something useful in the derived class as if it were a burdensome obligation placed on you, but that is the very essence of object-oriented software development.
The base classes in the core framework provide things that are by definition of general utility. What you consider to be a valid implementation for Equals() may differ significantly from someone else's needs (or indeed within your own projects from one class derived from that base class to another).
So yes, it is then up to you to implement an extension to the provided base class that will in turn provide a new base class that is useful to you specifically.
But this is not a problem.
It is an opportunity.
:)
You will assuredly run into further problems with generics however, and not just in Delphi. ;)
What it boils down to is this:
In Java (and .NET languages) all types descend from Object. In Delphi, integers, strings, etc. do not descend from TObject; they are native types and have no class definition.
The implications of this difference are sometimes subtle. In the case of generic collections, Java has the luxury of assuming that any type will have an equals method. So writing the default implementation of equals is a simple matter of iterating through both lists and calling equals on each object.
From the AbstractList definition in the Java 6 OpenJDK:
public boolean equals(Object o) {
    if (o == this)
        return true;
    if (!(o instanceof List))
        return false;

    ListIterator<E> e1 = listIterator();
    ListIterator e2 = ((List) o).listIterator();
    while (e1.hasNext() && e2.hasNext()) {
        E o1 = e1.next();
        Object o2 = e2.next();
        if (!(o1 == null ? o2 == null : o1.equals(o2)))
            return false;
    }
    return !(e1.hasNext() || e2.hasNext());
}
As you can see, the default implementation isn't really all that deep a comparison after all; you would still be overriding equals for comparisons of more complex objects.
In Delphi, since the type of T cannot be guaranteed to be an object, this default implementation of Equals just won't work. So Delphi's developers, having no alternative, left overriding TObject.Equals to the application developer.
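For what it's worth, .NET made the same call as Delphi here: List<T> does not override Equals either, and element-wise comparison is an explicit, opt-in operation. A small C# illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        var a = new List<string> { "Test" };
        var b = new List<string> { "Test" };

        Console.WriteLine(a.Equals(b));        // False: default reference comparison
        Console.WriteLine(a.SequenceEqual(b)); // True: explicit element-wise comparison
    }
}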
I looked around and found a solution in DeHL (an open source Delphi library). DeHL has a Collections library, with its own alternative List implementation. After asking the developer about this, the ability to compare generic TLists was added to the current unstable version of DeHL.
So this code will now give me the results I'm looking for (in Delphi):
list1:=TList<Person>.Create([Person.Create('Test')]);
list2:=TList<Person>.Create([Person.Create('Test')]);
PersonsEqual:=list1.Equals(list2); // equals true
It works for all types, including String and Integer:
stringList1:=TList<string>.Create(['Test']);
stringList2:=TList<string>.Create(['Test']);
StringsEqual:=stringList1.Equals(stringList2); // also equals true
Sweet!
You will need to check out the latest unstable version of DeHL (r497) to get this working. The current stable release (0.8.4) has the same behaviour as the standard Delphi TList.
Be warned, this is a recent change and may not make it into the final API of DeHL (I certainly hope it does).
So perhaps I will use DeHL instead of the standard Delphi collections? That is a shame, as I prefer using standard platform libraries whenever I can. I will look further into DeHL.

F# mutual recursion between modules

For recursion in F#, existing documentation is clear about how to do it in the special case where it's just one function calling itself, or a group of physically adjacent functions calling each other.
But in the general case where a group of functions in different modules need to call each other, how do you do it?
I don't think there is a way to achieve this in F#. It is usually possible to structure the application in a way that doesn't require this, so perhaps if you described your scenario, you may get some useful comments.
Anyway, there are various ways to workaround the issue - you can declare a record or an interface to hold the functions that you need to export from the module. Interfaces allow you to export polymorphic functions too, so they are probably a better choice:
// Before the declaration of modules
type Module1Funcs =
    abstract Foo : int -> int

type Module2Funcs =
    abstract Bar : int -> int
The modules can then export a value that implements one of the interfaces, and functions that require the other module can take it as an argument (or you can store it in a mutable value).
module Module1 =
    // Import functions from Module2 (needs to be initialized before using!)
    let mutable module2 = Unchecked.defaultof<Module2Funcs>

    // Sample function that references Module2
    let foo a = module2.Bar(a)

    // Export functions of the module
    let impl =
        { new Module1Funcs with
            member x.Foo(a) = foo a }

// Somewhere in the main function
Module1.module2 <- Module2.impl
Module2.module1 <- Module1.impl
The initialization could also be done automatically using reflection, but that's a bit ugly. However, if you really needed this frequently, I could imagine developing some reusable library for it.
In many cases, this feels a bit ugly and restructuring the application to avoid recursive references is a better approach (in fact, I find recursive references between classes in object-oriented programming often quite confusing). However, if you really need something like this, then exporting functions using interfaces/records is probably the only option.
This is not supported. One piece of evidence is that, in Visual Studio, you need to order the project files correctly for F#.
It would be really rare to need two functions in two different modules to call each other recursively.
If this case does happen, you'd better factor the common part of the two functions out.
I don't think that there's any way for functions in different modules to directly refer to functions in other modules. Is there a reason that functions whose behavior is so tightly intertwined need to be in separate modules?
If you need to keep them separated, one possible workaround is to make your functions higher order so that they take a parameter representing the recursive call, so that you can manually "tie the knot" later.
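To sketch that "tie the knot" idea in C# terms (a hypothetical even/odd pair; the same shape works in F# by passing functions as parameters):

using System;

class Knot
{
    // Each function takes its partner as a delegate instead of
    // referring to it directly, so neither needs the other in scope:
    static bool IsEven(int n, Func<int, bool> isOdd) => n == 0 || isOdd(n - 1);
    static bool IsOdd(int n, Func<int, bool> isEven) => n != 0 && isEven(n - 1);

    static void Main()
    {
        // Tie the knot in one place, once both halves exist:
        Func<int, bool> even = null;
        Func<int, bool> odd = n => IsOdd(n, m => even(m));
        even = n => IsEven(n, m => odd(m));

        Console.WriteLine(even(10)); // True
    }
}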
If you were talking about C#, and methods in two different assemblies needed to mutually recursively call each other, I'd pull out the type signatures they both needed to know into a third, shared, assembly. I don't know however how well those concepts map to F#.
Definitely, the solution here would use module signatures. A signature file contains information about the public signatures of a set of F# program elements, such as types, namespaces, and modules.
For each F# code file, you can have a signature file, which is a file that has the same name as the code file but with the extension .fsi instead of .fs.
