User-adjustable data structures - Delphi

Assume a data structure Person used for a contact database. The fields of the structure should be configurable, so that users can add user-defined fields to the structure and even change existing fields. So basically there should be a configuration file like
FieldNo FieldName DataType DefaultValue
0 Name String ""
1 Age Integer "0"
...
The program should then load this file, manage the dynamic data structure (dynamic not in a "change during runtime" way, but in a "user can change via configuration file" way) and allow easy and type-safe access to the data fields.
I have already implemented this, storing information about each data field in a static array and storing only the changed values in the objects.
My question: is there any pattern describing this situation? I guess I'm not the first one to run into the problem of creating a user-adjustable class?
Thanks in advance. Tell me if the question is not clear enough.

I've had a quick look through "Patterns of Enterprise Application Architecture" by Martin Fowler, and the Metadata Mapping pattern looks (at a quick glance) like what you are describing.
An excerpt...
"A Metadata Mapping allows developers to define the mappings in a simple tabular form, which can then be processed bygeneric code to carry out the details of reading, inserting and updating the data."
HTH

I suggest looking at the various Object-Relational patterns in Martin Fowler's Patterns of Enterprise Application Architecture; the book's site has a list of the patterns it covers.
The best fit for your problem appears to be Metadata Mapping. There are other related patterns as well, such as Mapper.

The normal way to handle this is for the class to have a list of user-defined records, each of which consists of a list of user-defined fields. The configuration information for this can easily be stored in a database table containing a type id, field type, etc. The actual data is then stored in a simple table with the data represented only as (objectid + field index)/string pairs - you convert the strings to and from the real type when you read or write the database.
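For illustration, a minimal sketch of that idea (in Java rather than Delphi, but the shape translates directly; all names here are made up). Field definitions come from the configuration file, each object stores only the values that differ from the defaults, and typed accessors check the declared type:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    enum FieldType { STRING, INTEGER }

    // One row of the configuration file: FieldNo, FieldName, DataType, DefaultValue
    record FieldDef(int index, String name, FieldType type, String defaultValue) {}

    class DynamicRecord {
        private final List<FieldDef> schema;                          // shared, loaded once from the config file
        private final Map<Integer, String> values = new HashMap<>();  // only the changed values

        DynamicRecord(List<FieldDef> schema) { this.schema = schema; }

        void set(int fieldIndex, String value) { values.put(fieldIndex, value); }

        String getString(int fieldIndex) {
            // assumes the schema list is ordered by FieldNo
            return values.getOrDefault(fieldIndex, schema.get(fieldIndex).defaultValue());
        }

        int getInteger(int fieldIndex) {
            FieldDef def = schema.get(fieldIndex);
            if (def.type() != FieldType.INTEGER)
                throw new IllegalArgumentException(def.name() + " is not an Integer field");
            return Integer.parseInt(values.getOrDefault(fieldIndex, def.defaultValue()));
        }
    }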

Related

Partial deserialization with Apache Avro

Is it possible to deserialize a subset of fields from a large object serialized using Apache Avro without deserializing all the fields? I'm using GenericDatumReader and the GenericRecord contains all the fields.
I'm pretty sure you can't do it using GenericDatumReader, but my question is whether it is possible given the binary format of Avro.
Conceptually, binary serialization of Avro data is in-order and depth-first. As you traverse the data, record fields are serialized one after the other, lists are serialized from the top to the bottom, etc.
Within one object, there are no markers to separate fields, no tags to identify specific fields, and no index into the binary data to help quickly scan to specific fields.
Depending on your schema, you could write custom code to skip some kinds of data ... for example, if a field is a LIST of FIXED bytes, you could read the size of the list and just jump over the data to the next field. This is pretty specific and wouldn't work for most Avro types though (notably integers are variable length when encoded).
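If you did go down that road, you would be hand-rolling the skipping with the low-level org.apache.avro.io.Decoder primitives. A rough, untested sketch, assuming the decoder is already positioned at a field that is an array of fixed-size items:

    import org.apache.avro.io.BinaryDecoder;
    import java.io.IOException;

    // Skip an array-of-fixed field without decoding it; fixedSize is the byte
    // length declared for the fixed type in the schema. Illustrative only.
    static void skipFixedArray(BinaryDecoder in, int fixedSize) throws IOException {
        long count = in.readArrayStart();          // number of items in the first block
        while (count > 0) {
            for (long i = 0; i < count; i++) {
                in.skipFixed(fixedSize);           // jump over one item
            }
            count = in.arrayNext();                // next block; 0 when the array ends
        }
        // the decoder is now positioned at the start of the next field
    }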
Even in that unlikely case, I don't believe there are any helpers in the Java SDK that would do this for you automatically.
In brief, Avro isn't designed to do that, and you're probably not going to find a satisfactory way to do a projection on your Schema without deserializing the entire object. If you have a collection, column-oriented persistence like Parquet is probably the right thing to do!
It is possible if the fields you want to read occur first in the record. We do this in some cases where we want to read only the header fields of an object, not the full data which follows.
You can create a "subset" schema containing just those first fields, and pass this to GenericDatumReader. Avro will deserialise those fields, and anything which comes after will be ignored, because the schema doesn't "know" about it.
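For example, a minimal sketch (the schemas and byte array are placeholders; the "subset" schema declares only the leading fields of the full writer schema):

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.DecoderFactory;
    import java.io.IOException;

    // writer schema = what was actually serialized; reader schema = the leading subset
    static GenericRecord readHeaderOnly(byte[] payload,
                                        Schema writerSchema,
                                        Schema headerOnlySchema) throws IOException {
        GenericDatumReader<GenericRecord> reader =
            new GenericDatumReader<>(writerSchema, headerOnlySchema);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
        return reader.read(null, decoder);   // fields not in the reader schema are ignored
    }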
But this won't work for the general case where you want to pick out fields from within the middle of a record.

How to create dynamic parser?

I want to create something I'd call a dynamic parser.
My project's input is a data file such as XML, Excel, or CSV; I must parse it, extract its records and fields, and finally save them to a SQL Server database.
My problem is that the fields of each record are dynamic, so I cannot write the parser at development time; the parser has to be put together at run time. By dynamic I mean that a user selects each record's fields using a web UI. So, at run time I know the number of fields in each record and some information about each field, such as its name and so on.
I discussed this type of project in a question titled 'Design Pattern for Custom Fields in Relational Database'.
I also looked at parser generators, but I did not get enough information about them, and I don't know whether they are really related to my problem or not.
Is there any design pattern for this type of problem?
If you know the number of fields and the field names, then extract the data from the file and build the query at run time using string concatenation, for example:
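A rough sketch of that in Java/JDBC, assuming the field names and values have already been extracted from the file at run time (the table name, connection and field lists are placeholders). Concatenating only the column names and binding the values as parameters keeps the user data itself out of the SQL string:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Collections;
    import java.util.List;

    // Build and run an INSERT whose column list is only known at run time.
    static void insertRecord(Connection conn, String table,
                             List<String> fieldNames, List<Object> values) throws SQLException {
        String columns = String.join(", ", fieldNames);
        String placeholders = String.join(", ", Collections.nCopies(fieldNames.size(), "?"));
        String sql = "INSERT INTO " + table + " (" + columns + ") VALUES (" + placeholders + ")";

        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (int i = 0; i < values.size(); i++) {
                stmt.setObject(i + 1, values.get(i));   // bind each extracted value
            }
            stmt.executeUpdate();
        }
    }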

Parsing field name using Crystal Reports

I have a customer who really wants to keep a very long naming convention during a migration to a new database. The new database uses Crystal Reports for reporting. I have gotten an ok to shorten the naming convention somewhat to "shortened name-date" with all of the other pertinent information parsed out into new fields.
However, one of the users who does a lot of the reporting has now said that one of the most tedious parts of her job was parsing out the old names so she could have a simple, high-level parent name for executive reports. With the new naming convention, she will still need to parse the field to get just the shortened name as her executive-level parent name. If I can't manage to get the ok to drop the date from this field, can Crystal Reports be used to parse the field at the "-", similar to parsing the data in Excel? What I'm looking for is that her reports would have a formula that generates the executive-level short name behind the scenes so she doesn't have to think about it.
The date already exists in a date field, so parsing the date out of the name would not change other report functionality. Ideally, I would want to enter the data already separated out and concatenate fields for each user's particular needs, but I may not be able to do that. Any info would be much appreciated.
Thank you.
I think you are looking for this...
Split({fieldname},"-")[1]
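In Crystal formula syntax, Split returns a 1-indexed array, so [1] is everything before the first "-", i.e. the shortened name she can use as the executive-level parent name.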

Rails: To normalize or not to normalize for few values

Assume that in a Rails app connected to a Postgres database you have a table called 'Party', which can have fewer than 5 well-defined party_types such as 'Person' or 'Organization'.
Would you store the party_type in the Party table (e.g. party.party_type = 'Person') or normalize it (e.g. party.party_type = 1 and party_type.id = 1 / party_type.name = 'Person')? And why?
If the party types can be defined in code, I'd definitely go with the names, "Person" etc.
If you expect such types to be added dynamically by an admin or user, and you have a GUI for that, then model them in their own table and store a reference like party.party_type = 1.
Of course there is a DB storage/performance difference between "1" and "Person", but that's too minor to worry about when the app is not that big.
There are two issues here:
1. Are you treating these types generically or not? [1]
2. Do you display the type to the user?
If the answer to (1) is "yes", then just adding a row in the table is clearly preferable to changing a constraint and/or your application code.
If the answer to (2) is "yes", then storing a human-readable label in the database may be preferable to translating to human-readable text in the application code.
So in a nutshell, you'd probably want to have a separate table. On the other hand, if all types are known in advance and you just use them to drive specific paths of your application logic without directly displaying them to the user, then a separate table may be superfluous - just define an appropriate CHECK constraint to restrict the field to valid values and clearly document each value.
[1] In other words, can you add a new type and the logic of your application will continue to work, or do you also need to change the application logic?

Core Data: migrating from an RDBMS

I am trying to migrate from pure SQLite (with the FMDB wrapper) to Core Data.
My main reason is the ICU problem: I have some multilingual projects (German, Spanish, Greek, Chinese) that are difficult to search from SQLite, as opposed to the ICU support built into Core Data (NSDiacriticInsensitive | NSCaseInsensitive).
Generally I have my data (a coded book) in the following structure:
id
parentId
content
contentType
nContent
vieworder
where nContent is a diacritic-insensitive/case-insensitive field that I need to ditch, since it slows my database down a lot (I have used indexes and other optimizations, but I can't find anything to speed up the search process).
I am baffled by the Core Data concept: I can understand it in a master-detail project, but I can't understand how to achieve a self-referencing item object.
Typical data stored with the above structure looks like this:
Chapter A
Chapter A.1
Title 1
Content #1
Title 2
Content #2
Chapter A.2
[...]
Where "Chapter/Title/Content" is the content field (so it varies from a small >256 string to a large block of text).
So my questions are:
* How to achieve this structure in core data entity/class (I know that it will need the self-reference relationship)
* How to find the items of each level (for example I would like to find all Title types -- that's why I have the contentType field)
* Will indexing this in Core Data give me better indexing and better search times than plain SQL (I use LIKE %% queries on the nContent field)?
* Is it better to leave it in SQLite and try to find a different indexing strategy?
Please feel free to answer any of these questions or at least give me some insight.
UPDATE
here is another more "realistic" example of what I mean:
Beginning HTML (type: chapter, parentId: 0, id: 1)
The fundamental pieces (type: chapter, parentId: 1, id: 2)
How to begin (type: title, parentId: 2, id: 3)
[content] (type: content, parentId: 3, id: 4)
Using paragraphs (type: title, parentId: 2, id: 5)
[content] (type: content, parentId: 5, id: 6)
Using Forms (type: chapter, parentId: 1, id: 7)
... (and so on)
EDIT
Considering your clarification...
You probably want to revisit your design to see what works best. However, a simple approach to start with would be something like...
ContentObject
title: NSString
type: whatever
content: NSString
subcontent: 1-to-Many relationship to ContentObject
In the Xcode model editor, you would just control-click and drag from ContentObject to itself, and a self-referencing relationship will be created.
Then, make it to-many, and give it the name "subcontent" or whatever. Then, name the inverse relationship "parent."
Now you have a list of objects and you can add sub-objects to each object, and Core Data will automatically manage the pointers back to each other. You can also add an index on any attribute for faster searching.
If your actual content may grow large, you may want to make it an entity of its own, with a relationship to it.
