One of the projects I'm working on involves a module that needs to let end users create what essentially amounts to their own "object classes": data structures / record types that they can design and modify at runtime. The users will also be able to customize the user interface considerably, but that is largely outside the scope of this question.
The closest example I've seen to what we're striving to build is something very akin to what InstantObjects provides at design time, except that our system would provide it at runtime and put class design in the hands of the end user (who will generally be fairly technically sophisticated).
I recently came across this list of Object Persistence Frameworks for Delphi Win32:
http://tdelphihobbyist.blogspot.com/2008/01/win32-object-persistence-frameworks.html
Any recommendations as to which of these might be the most conducive to the kind of runtime flexibility we are trying to create?
There is ongoing discussion of this in the tiOPF newsgroups. tiOPF is an open source object persistence framework. It doesn't currently support user-modifiable objects, but it looks like someone will be adding this functionality shortly.
Main site: http://tiopf.sourceforge.net/
News groups: See http://tiopf.sourceforge.net/Support.shtml
See "tiOPF ad runtime modifications" thread in the support newsgroup.
JSON could be used to declare and modify user-defined data structures at runtime. Two libraries for Delphi are SuperObject and lkJSON. With JSON, object hierarchies would be easy to build, and the SuperObject demo sources include examples for many usage areas.
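For example, a user-defined record type could be described as JSON metadata along these lines (this layout is purely illustrative; it is not a format prescribed by SuperObject or lkJSON):

    {
      "className": "Customer",
      "fields": [
        { "name": "Name",    "type": "string"  },
        { "name": "Balance", "type": "float"   },
        { "name": "Active",  "type": "boolean" }
      ]
    }

The instances themselves could then be stored as plain JSON objects alongside this metadata.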
SuperObject also contains iterator methods for a given JSON object structure.
I use it for a Delphi client library which is able to exchange objects with Java using the Apache ActiveMQ message broker.
You can create some form of persistence using XML and then build a form accordingly; we do this a lot for configurable filter windows, for example.
I have to integrate various legacy applications with some newly introduced parts that are silos of information, built at different times with varying architectures. At times these applications may need to get data from other systems, if it exists, and display it to the user within their own screens based on business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from various OData endpoints and provides a single version of the truth.
A simplistic example could be as below.
I am not really looking to do an ETL here, as that may introduce data-related side effects in terms of staleness, etc.
Can someone share some ideas as to how this can be achieved, or point me to any article on the net that shows such a concept?
Regards
Kiran
Officially, the answer is to use either the reflection provider or a custom provider.
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources like the WCF Data Services Toolkit to help you along.
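To give a feel for the reflection provider route: you expose IQueryable properties from a context class and host it with a DataService. A minimal sketch, assuming a hypothetical Customer entity and a FederationContext whose getter would do the actual cross-endpoint aggregation:

    using System.Collections.Generic;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;

    // Hypothetical entity; the reflection provider needs a key attribute.
    [DataServiceKey("Id")]
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Each IQueryable<T> property becomes an entity set. In a real
    // federation engine, the getter would merge results pulled from the
    // underlying OData endpoints.
    public class FederationContext
    {
        public IQueryable<Customer> Customers
        {
            get { return FetchFromBackends().AsQueryable(); }
        }

        private static IEnumerable<Customer> FetchFromBackends()
        {
            // Placeholder for the calls out to the underlying services.
            return new List<Customer>();
        }
    }

    public class FederationService : DataService<FederationContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        }
    }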
By the way, from an architecture standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin as OData is primarily used as part of data access layers, much like SQL (as opposed to service layers which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with it using a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider using another protocol that is less data-oriented (but keep using the OData backends in that service). Since domain logic is normally heavily specific, there's very rarely a one-size-fits-all type of protocol, so you'd naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.
I'm not too big a fan of direct entity mappers, because I still think that SQL queries are fastest and best optimized when written by hand, directly on and for the database (using correct joins, groupings, indexes, etc.).
On my current project I decided to give BLToolkit a try, and I'm very pleased with its wrapper around ADO.NET and its speed: I query the database and get strongly typed C# objects back. I've also written a T4 template that generates stored procedure helpers, so I don't have to use magic strings when calling stored procedures; all my calls use strong types for parameters.
Basically all my CRUD calls are done via stored procedures, because many of my queries are not simple select statements, and especially my creates and updates also return results, which a stored procedure handles in a single call. Anyway...
Downside
The biggest drawback of BLToolkit (I'd like everyone evaluating BLToolkit to know this) isn't its capabilities or speed but its very scarce documentation and support, or lack thereof. So the biggest problem with this library is the trial and error needed to get it working. That's also why I don't want to use too many different parts of it: the more I use, the more problems I have to solve on my own.
Question
What alternatives do I have to BLToolkit that:
supports stored procedures that return whatever entities I provide, not necessarily the same as DB tables
provides a nice object mapper from data reader to objects
supports relations (all of them)
optionally (but desirably) supports multiple result sets
doesn't need any special configuration (I only use a data connection string and nothing else)
Basically it should be very lightweight: just a simple ADO.NET wrapper and object mapper.
And the most important requirement: it must be easy to use, well supported, and in active use by the community.
Alternatives (May 2011)
I can see that the big guns have converted their access strategies to micro ORM tools. I was playing with the same idea when I evaluated BLToolkit, because it felt bulky (1.5 MB) for the functionality I'd use. In the end I decided to write the aforementioned T4 template (link in the question) to make my life easier when calling stored procedures. But there are still many possibilities inside BLToolkit that I don't use at all or even understand (reasons also pointed out in the question).
The best alternatives are micro ORM tools, though maybe it would be better to call them micro object mappers. They all have the same goals: simplicity and extreme speed. They don't follow the "no SQL" paradigm of their big fellow ORMs, so most of the time you have to write (almost) everyday T-SQL to power their requests. They fetch data and map it to objects (and sometimes provide something more - check below).
I would like to point out three of them. Each is provided as a single code file and not as a compiled DLL:
Dapper - used by Stack Overflow itself; all it actually does is provide generic extension methods over IDbConnection, which means it supports any backing data store as long as there's a connection class that implements the IDbConnection interface;
uses parametrised SQL
maps to static types as well as dynamic (.NET 4+)
supports mapping to multiple objects per result record (as in 1:1 relationships, e.g. Person + Address)
supports multi-result-set object mapping
supports stored procedures
mappings are generated, compiled (to MSIL) and cached - this can also be a downside if you use a huge number of types
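A minimal sketch of Dapper usage (the Person class, table name and connection string are illustrative):

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;
    using Dapper;

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class DapperSketch
    {
        // Query<T> is an extension method on IDbConnection, so any ADO.NET
        // provider works; parameters are passed as an anonymous object.
        public static IList<Person> FindByName(string connectionString, string name)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                return conn.Query<Person>(
                    "SELECT Id, Name FROM People WHERE Name = @Name",
                    new { Name = name }).ToList();
            }
        }
    }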
Massive - written by Rob Connery;
only supports dynamic type mapping (so no .NET 3.5 or older)
is extremely small (a few hundred lines of code)
provides a DynamicModel class that your entities inherit from, which provides CRUD functionality, or maps from arbitrary bare-metal T-SQL
implicit paging support
supports column name mapping (but you have to do it every time you access data, as opposed to declarative attributes)
supports stored procedures by writing direct parametrised T-SQL
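A minimal sketch of Massive usage (table, column and connection string names are illustrative):

    using Massive; // single-file library; DynamicModel lives here

    // One class per table; the base constructor takes the name of a
    // connection string from app.config, the table name and the PK column.
    public class People : DynamicModel
    {
        public People() : base("MyDb", "People", "Id") { }
    }

    // Elsewhere:
    //   var tbl = new People();
    //   var bobs = tbl.All(where: "Name = @0", args: "Bob"); // dynamics back
    //   tbl.Insert(new { Name = "Alice" });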
PetaPoco - inspired by Massive, but with a requirement to support older framework versions;
supports strong types as well as dynamic
provides a T4 template to generate your POCOs - you'll end up with classes similar to the big fellow ORMs (which means code-first is not supported), but you don't have to use these; you can of course still write your own POCO classes to keep your model lightweight and free of DB-only information (e.g. timestamps, etc.)
similarly to Dapper, it compiles mappings for speed and reuse
supports CRUD operations + IsNew
implicit paging support that returns a special type with a page full of data plus all the metadata (current page, number of all pages/records)
has extensibility point for various scenarios (logging, type converters etc)
supports declarative metadata (column/table mappings etc)
supports multi object mapping per result record with some automatic relation setting (unlike Dapper where you have to manually connect related objects)
supports stored procedures
has a helper SqlBuilder class for easier building of T-SQL statements
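A minimal sketch of PetaPoco usage (POCO, table and connection string names are illustrative):

    using PetaPoco;

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class PetaPocoSketch
    {
        public static void Demo()
        {
            // Takes the name of a connection string from app.config.
            var db = new Database("MyDb");

            // Strongly typed query with positional parameters.
            var people = db.Query<Person>(
                "SELECT * FROM People WHERE Name = @0", "Bob");

            // Built-in paging: page 1, 20 records per page, plus metadata.
            var page = db.Page<Person>(1, 20, "SELECT * FROM People ORDER BY Name");

            // CRUD helpers: table name, primary key column, POCO.
            db.Insert("People", "Id", new Person { Name = "Alice" });
        }
    }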
Of the three, PetaPoco seems to be the liveliest in terms of development, and it supports most things by taking the best of the other two (and some others).
Of the three, Dapper has the best real-world usage reference, because it's used by one of the highest-traffic sites in the world: Stack Overflow.
They all suffer from the magic string problem, because you write SQL queries directly into them most of the time. But some of this can be mitigated by T4, so you can have strongly typed calls that provide IntelliSense, compile-time checking and on-the-fly regeneration within Visual Studio.
Downside of dynamic types
I think the biggest downside of dynamic types is maintenance. Imagine your application using dynamic types: looking at your own code after a while becomes rather problematic, because you don't have any concrete classes to observe or hang on to. As much as dynamic types are a blessing, they're also a curse in the long run.
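A contrived sketch of the problem: given any dynamic record (e.g. one returned by Massive), a simple typo in a member name compiles cleanly and only blows up at runtime, with no IntelliSense along the way:

    using System;

    static class DynamicDownsideSketch
    {
        public static void PrintName(dynamic person)
        {
            // "Naem" is a typo: it compiles fine and produces no IntelliSense
            // hint, but fails at runtime with a RuntimeBinderException, and
            // no rename refactoring will ever find it.
            Console.WriteLine(person.Naem);
        }
    }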
I don't quite get type providers after watching Don Syme's PDC video.
http://player.microsoftpdc.com/Session/04092962-4ed1-42c6-be07-203d42115274
Do I understand this correctly? You can get ready-made type providers for Twitter, Excel...
What if I have a custom XML structure? Do I need to implement my own type provider for that, and how is this different from creating my own custom mapper?
Say you have some arbitrary data entity out in the world. For this example, let's say it's a spreadsheet.
Let's also say you have some way to get/infer schema/metadata for that data - that is, you can know types (e.g. double versus string) and relationships (e.g. this column means 'salary') and metadata (e.g. this sheet is for the June 2009 budget).
Type providers let you code up a kind of 'shim library' that knows about some kind of data entity (e.g. a spreadsheet) and use that library as part of the compiler/IDE toolchain, so that you can write code like
mySpreadsheet.ByRowAndColumn.C4
or something, and get Intellisense (autocompletion) and tooltips (e.g. describing cell C4 as Salary for Bob) and static typing (e.g. have it be a double or a string or whatever it is). Essentially this gives you the tooling affordances of statically-typed object models with the ease-of-use leverage of various dynamic or code-generation systems, with some improvements on both. The 'cost' is that someone has to write the shim library (the 'type provider'), but many such providers are very general (e.g. one that speaks OData or Excel or WMI or whatnot) and so a small handful of type provider libraries makes vast quantities of the world's data available in your programming language with static typing and first-class tooling support.
The architecture is an open compiler, where provider-authors implement a small interface that allows them to inject new names/types into the programming context. A type provider might be just another library you pass to the compiler (a reference in your project, -r-ed), with extra metadata that marks it as a type provider that participates in the compilation/IDE/codegen portions of development.
I don't know exactly what a "custom mapper" is in your xml example to draw a comparison.
I understand that this is an old question, but now that F# 3.0 has been released, type providers are available. There is a white paper explaining them too, and we have a code drop from Microsoft that lets you see under the hood.
http://www.infoq.com/news/2012/09/fsharp-type-providers
Type providers use F#'s quotations to act as (effectively) compiler plugins that can generate code based on metadata at compile time.
This allows you to (for example) read in some JSON, or a database schema, or some XSD, or whatever, and then generate F# classes to model the domain that metadata represents.
In terms of creating them, I wrote a few blog posts that might be of interest, starting with Type Providers from the Ground Up.
My problem is a simple one. I've created a class library for Delphi 2007 and added the modelling support that Delphi offers. It generates nice class overviews of my code, which I'd like to use, but it's not enough. I want to export the generated UML to Altova's UModel to generate some additional documentation and nicer-looking models.
I can't find a way to export the UML from Delphi, though. I can't even find anything in Delphi that would help me to generate any other documentation, except for the class model images that it allows me to save.
My main problem with my class library is that while its usage is simple, its creation was quite complex. I've used several techniques to encapsulate functionality: types within types, interfaces and delegations, type aliases and a lot more. The result is actually three simple-looking classes that only expose the methods needed to call a specific web service: one class for the WS itself, one class to manage the input and one to manage the output. The class interface is thus kept simple to make its usage simple. Unfortunately, the complexity of the WS required me to create some complex code.
I now need to generate two kinds of documentation for this code. One is a simple document that explains how the library is used; that one is easy. The second explains how to maintain the code: what is where, and how and why certain decisions were taken. That one is complex and requires me to model the whole thing.
I have UModel, which is a great product, especially with C# and Java code. Unfortunately, it can't import Delphi code. I've tried Enterprise Architect, which can handle Delphi code, but this code happens to be way too complex: EA doesn't understand a thing about types within types and other features I've used. I also tried StarUML, but had to cry after 10 minutes of usage since that product is just plain bad... and doesn't even support Delphi... My hard disk feels really dirty now that I've installed it...
And while I could continue to try other modelling tools, I think I have a better chance of finding some way to convert the Together UML data to a regular XMI file.
You might want to try ModelMaker.
It has an add-on that allows you to export the UML as XMI, which you can import in Altova UModel.
ModelMaker supports both the Delphi and C# language.
--jeroen
I'm afraid there's no such thing as "regular XMI file" (see for instance this example, that shows the differences in the XMI representation of the same model depending on the tool you use).
We’re rewriting a calculation core from scratch in Delphi, and we’re looking for ways to let other people write code against it.
Automation seems a fairly safe way to get this done. One use we’re thinking of is making it available to VBA/Office, and also generating a .NET assembly (based on the Automation object, that's easy).
But the code should still be easy to use from Delphi, since we’ll be writing our (desktop) UI with that.
Now I’ve been looking into creating an Automation server in Delphi, and it looks like quite a hassle to have to design the components in the Type Library wizard, and then generate the base code.
The calculations we’re having to implement are described in official rules and regulations that are still not ratified, and so could still change before we’re done — they very probably will, perhaps quite extensively. Waiting for the final version is not an option.
An alternative could be to finish the entire object model first, then write a separate Automation server which only describes the top-level object, switch $METHODINFO ON, and use TObjectDispatch to return all the subordinate objects. As I see it, that would entail writing wrappers to return the objects through an IDispatch interface. Since there are over 100 different classes in there, that doesn't look like an attractive option.
Edit: TObjectDispatch is smart enough to wrap any objects returned by properties and methods as well; so only the top object(s) would need to be wrapped. Lack of a complete type library does mean only late-binding is possible, however.
Is there another, easier (read: hassle-free) way to write a COM-accessible object model in Delphi?
You don't have to use the type library designer. You can write or generate (e.g. from RTTI of your Delphi classes) a .ridl file and add it to your Automation library project.
Generating the interface descriptions from RTTI is a great idea! Once you have your interfaces generated, you can generate a Delphi unit from them and implement them in your classes. Of course, the majority are implemented already, since you generated the interfaces from those classes in the first place. Late-binding resolution can then be done by hand, using RTTI and implementing IDispatch and IDispatchEx in a common base class of the scriptable classes.
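For what it's worth, late binding is quite workable from the consuming side too. A sketch of what a .NET 4 caller might look like against such a server, using the dynamic keyword to dispatch through IDispatch (the ProgID, method and property names here are hypothetical):

    using System;

    class Program
    {
        static void Main()
        {
            // Hypothetical ProgID of the Delphi Automation server.
            Type t = Type.GetTypeFromProgID("CalcCore.Engine");
            dynamic engine = Activator.CreateInstance(t);

            // Every call below is resolved at runtime through IDispatch,
            // so no type library is needed for the subordinate objects.
            dynamic model = engine.CreateModel();
            model.Rate = 0.05;
            Console.WriteLine(model.Calculate());
        }
    }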