I'm looking for a way to store large objects within Drools for long periods of time (i.e. not like facts which are added and removed from within a session).
I've read that Drools works using KnowledgeBases and Sessions (Stateless & Stateful) and that KnowledgeBases contain application knowledge definitions but no runtime data.
In a case where I need to store, for example, a large dictionary (one that won't change but will be referenced by more than one successive session), and have objects added to working memory and checked against this dictionary so that rules fire, where would it be best to store it?
Does everything just go into working memory (in which case, would I need to load the dictionary into memory each time I open a new session?), or am I missing a basic Drools principle? Would global variables be a good fix for this?
Not sure how large "large" is (of course there's always a performance tradeoff), but you could also use an inserted object that pulls from a database (or cache) and have the rules access the values via a method:
when
    $y : AnObject( name == "car", lookupPrice > 10000 )
where AnObject.getLookupPrice() is a method that pulls a value out of the cached/stored dictionary.
If the dictionary isn't too big, you could also codify it as an object and use it the same way.
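For illustration, here is a rough sketch of what such a lookup fact could look like (only the AnObject name and getLookupPrice() come from the snippet above; the dictionary field, PriceDictionaryLoader and the name field are made-up, and the lookup could just as well hit a database or an external cache):

import java.util.Map;

public class AnObject {
    // Hypothetical dictionary shared by all sessions, loaded once up front.
    // PriceDictionaryLoader is a made-up helper, not part of Drools.
    private static final Map<String, Long> PRICE_DICTIONARY = PriceDictionaryLoader.load();

    private final String name;

    public AnObject(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Evaluated by the engine when a rule constrains on lookupPrice
    public long getLookupPrice() {
        return PRICE_DICTIONARY.getOrDefault(name, 0L);
    }
}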
I would like to know if the GeneXus Extension SDK already implements something to store persistent data locally (KB-independent and per KB), something like PersistentDictionary from ManagedEsent.
I know that GeneXus uses SQL Server to store KB-related information; is there an interface for me to extend that?
I want to save data per GeneXus instance (locally) and use that data to load my extension config every time the user runs GeneXus.
We don't use PersistentDictionary. I would advise against using it, as it's a Windows-specific API, and we are trying to make everything new cross-platform as part of our journey of making the GeneXus BL run on other operating systems.
There are different options for persistence, depending on the specific details of your scenario.
If you want to store something like configuration settings for your extension, you can use the ConfigurationHelper class located in Artech.Common.Helpers. This class provides read access to the configurations defined in the GeneXus.exe.config file in the GeneXus installation folder, as well as read/write access to the Environment.config file located in %AppData%\GeneXus\GeneXus\<version>\Environment.config. Note that this file depends on the current user and is shared between different GeneXus instances of the same main version.
The ConfigurationHelper class provides operations to read and save settings of the basic types string, int, and bool.
const string MY_EXTENSION = "MyExtensionSettings";
const string SETTING1 = "Setting1";
const string SETTING1_DEFAULT_VALUE = "This is the default value";
const string SETTING2 = "Setting2";
const int SETTING2_DEFAULT_VALUE = 20;
string setting1Value = ConfigurationHelper.GetUserSetting(MY_EXTENSION, SETTING1, SETTING1_DEFAULT_VALUE);
int setting2Value = ConfigurationHelper.GetUserSetting(MY_EXTENSION, SETTING2, SETTING2_DEFAULT_VALUE);
// Do something and maybe change the setting values
ConfigurationHelper.SetUserSetting(MY_EXTENSION, SETTING1, setting1Value);
ConfigurationHelper.SetUserSetting(MY_EXTENSION, SETTING2, setting2Value);
If you want to store something in a file based on the currently opened KB, there's no specific API that'll help you handle the persistence. You can use the properties Location and UserDirectory of the KnowledgeBase class to access the KB location or a directory for the current user under the KB location, but the handling of the file is up to you. You'll have to decide on the file format (binary or text), the file encoding in case of text files, and handle all read and write operations on that file.
We use the kb.UserDirectory path to store non-critical stuff, such as the set of objects that were opened the last time the KB was closed, or the filter values for different dialogs.
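For example, a rough sketch of that kind of manual file handling (assuming a KnowledgeBase instance named kb; the file name and the line-based format are arbitrary choices of mine, not an SDK convention), using plain System.IO / System.Text:

// Sketch: a small per-KB, per-user text file under kb.UserDirectory
string settingsPath = Path.Combine(kb.UserDirectory, "MyExtension.settings");

// Write (the format and encoding are entirely up to you)
File.WriteAllLines(settingsPath, new[] { "lastFilter=Pending", "lastObject=Customer" }, Encoding.UTF8);

// Read, tolerating a missing file on first use
string[] lines = File.Exists(settingsPath)
    ? File.ReadAllLines(settingsPath, Encoding.UTF8)
    : new string[0];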
In case you'd like to store settings inside the KB, there are plenty of options.
You can add properties to existing objects, the KB version, or the environment. Making it a property doesn't necessarily mean you'll have to edit the value in the property grid, although that's usually the way to go.
You can define a new kind of entity. Entities are the basic elements that can be stored in a KB. The entity may be stored depending on the active version of the KB, or may be independent of the current version. Entities can have properties, whose serialization is handled by the property engine, and can also read and store a byte array whose format and content are handled by you.
You can add a part to an existing object. For instance, you may want to add a part to Procedure objects. In order to do this you'll have to extend KBObjectPart, define your part in a BL package, declare that the part composes objects of a certain type, and provide an editor for your new part in a UI package. KBObjectPart extends Entity, so the serialization of the part is similar to the previous case. A caveat of this option is that you'll also have to handle how the part content is imported, exported, and compared.
You can add a new kind of object. Objects extend the KBObject class, which extends Entity. Objects are not obliged to have parts (for instance the Folder object doesn't have any). When choosing to provide a new kind of object you have to consider a couple of things, such as:
Do you want to be able to create new instances from the new object dialog?
Will it be shown in the folder view?
Can it be added into modules?
Can it have the same name as other objects of different types?
As a general guideline, if you choose to add a new property, add it to objects, versions, or environments, not parts. Adding properties to parts is not so good for discoverability. Also, if you choose to add a new kind of object, even though it inherits from Entity, which as mentioned earlier can read and store a byte array, it's preferable not to use the byte array in KBObject and to add a KBObjectPart to it instead. That way the KBObject remains as lightweight as possible, loading the object definition from the DB remains fast, and the blob content is loaded only when truly needed.
There's no rule of thumb. Depending on the specifics of the scenario, one option may be more suited than others.
It is generally known that ABAP memory (EXPORT/IMPORT) is used for passing data within an ABAP session across the call stack, and SAP memory (SET/GET) is session-independent and valid for all the ABAP sessions of a user session.
The pitfall here is that SET PARAMETER supports only primitive flat types; otherwise it throws the error:
"LS_MARA" must be a character-type field (data type C, N, D or T).
Global assignment like ASSIGN '(PrgmName)Globalvariable' TO FIELD-SYMBOL(<lo_data>). is not always an option, for example if one wants to pass a structure to a local method variable.
Creating SHMA shared memory objects seems like an overkill for simple testing tasks.
So far I have found only this ancient thread where the issue was raised, but the solution there is a perfect example of how you shouldn't write code, a perfect anti-pattern.
What options (except DB) do we have if we want to pass structure or table to another ABAP session?
As usual Sandra has a good answer.
EXPORT/IMPORT TO/FROM SHARED BUFFER/MEMORY is very powerful, but use it wisely and make sure you understand that it lives on one application server and is not persistent.
You can use RFC to call the other application servers and get the buffer from those servers if necessary: CALL FUNCTION xyz DESTINATION ''.
See function TH_SERVER_LIST, i.e. what you see in SM59 under Internal Connections.
Clearly, the lack of persistence of the shared buffer/memory is a key consideration.
But what is not immediately obvious until you read the documentation carefully is how the shared buffer manager will abandon entries based on buffer size and available memory. You cannot assume a shared buffer entry will be there when you go to access it. It most likely will be, but it can be "dropped", the server might be restarted, etc. Use it as a tool to help performance, but always assume the entry might not be there.
Shared memory, as opposed to the shared buffer, suffers from an upper-limit issue, requiring other entries to be discarded before more can be added. Both have pros and cons.
In ST02, look for red entries, which indicate buffer limits have been reached. The Current parameters button tells you which profile parameters need to be changed.
A great use of this language element is logging, or high-performance buffering of data that could be reconstructed. It is also ideal for scenarios in BAdIs etc. where you cannot issue commits: you can "hold" info without issuing a commit or a database commit. You can also update/store your log without even using locking, using the simple principle that the current work process number is unique.
CALL FUNCTION 'TH_GET_OWN_WP_NO'
IMPORTING
wp_index = wp_index.
Use that index number as part of the key to your data.
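A rough sketch of that idea, continuing from the call above (the INDX area zz, the ID pattern and the log table are made-up examples):

DATA: lv_id  TYPE c LENGTH 22,
      lt_log TYPE STANDARD TABLE OF string.

APPEND 'something worth logging' TO lt_log.

" wp_index (from the call above) makes the buffer ID unique per work process,
" so the entry can be written without any locking
lv_id = |ZMYLOG_{ wp_index }|.
EXPORT log = lt_log TO SHARED BUFFER indx(zz) ID lv_id.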
If your kernel is 7.40 or later, see class CL_OBJECT_BUFFER; otherwise see function SBUF_OBJ_SHOW_OBJECT.
Have fun with Shared Buffers/Memory.
One major advantage of shared buffers over shared memory objects is the ABAP garbage collector: SAPSYS garbage collection can bite you!
On the same application server, you may use EXPORT/IMPORT ... SHARED BUFFER/MEMORY ....
Probable statement for your requirement:
EXPORT mara = ls_mara TO SHARED BUFFER indx(zz) ID 'MARA'.
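The counterpart in the receiving session would then look something like this (sketch; sy-subrc tells you whether the entry was found):

DATA ls_mara TYPE mara.

IMPORT mara = ls_mara FROM SHARED BUFFER indx(zz) ID 'MARA'.
IF sy-subrc <> 0.
  " Entry not found (never written, dropped from the buffer, or on another server)
ENDIF.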
Between application servers, you may use ABAP Channels.
I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta persistentCache (EntityCache) object to support an offline mode by having Siesta cache these objects to disk (rather than in memory), storing them in Realm. I am not sure how to do this because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having a separate EntityCache implementation for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to keeping a polymorphic mapping from Siesta cache keys to Realm objects of various types,
by adding a siestaKey attribute and doing some kind of union query across models, or
by keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a GitHub issue for any suggested API improvements.
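If it helps, here is a very rough sketch of the second option (ChallengeModel, siestaKey and the helper functions are made-up names, and the Realm/Siesta API details vary by version, so treat it as a shape rather than a drop-in implementation):

import RealmSwift

// Tag each cached Realm model with the opaque Siesta cache key.
class ChallengeModel: Object {
    dynamic var siestaKey = ""
    dynamic var title = ""

    override static func primaryKey() -> String? {
        return "siestaKey"
    }
}

// Write path: persist whatever the transformer pipeline produced for this key.
func store(model: ChallengeModel, forKey key: String) throws {
    let realm = try Realm()
    model.siestaKey = key
    try realm.write {
        realm.add(model, update: true)   // upsert by primary key
    }
}

// Read path: look the model up again by the same opaque key.
func load(forKey key: String) throws -> ChallengeModel? {
    let realm = try Realm()
    return realm.object(ofType: ChallengeModel.self, forPrimaryKey: key)
}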
When saving a Core Data managed object context on iOS 6.0.1 to a SQLite store, I run into a strange "CoreData does not support persistent cross-store relationships" exception. It concerns a one-to-one relationship between Quotes and AbstractSources in the model. At runtime it concerns a Quote and a Book (where Book inherits from AbstractSource; all works well in the model editor).
I've researched similar reports and covered the reported causes:
I am assigning both the Quote and the Book to the same persistent store using assignObject:toPersistentStore:, so neither remains unassigned.
The error description shows that all "absolute" x-coredata IDs start with the same prefix (e.g. "x-coredata://82B3BEB3-60F2-4912-AC80-11AAD29CFF99/"), so there really seems to be only one store in use.
My questions are these:
Is there anything else I have to check (perhaps something in relation to AbstractSource, which I do not touch/control in my source)? I am creating both the Quote and the Book with a call to initWithEntity:insertIntoManagedObjectContext: each.
I noticed that the error description also includes several "relative" x-coredata IDs (of the form "x-coredata:///..."). Could it be that the absolute form is always considered "cross-database", even if the "absolute" prefixes (see example above) are the same? And if so, how could I influence any choice between "absolute" and "relative" x-coredata IDs?
Thx (much) for your attention!
So this is what had (presumably) caused the trouble:
My managed object context's coordinator has to manage two persistent stores. Now, the one to which I assigned Quote and Book, and where I wanted them saved, is reset at start-up. There was a bug in this code, which rendered this store unusable. Since a second store was available, it silently took over, in this case leading to unwanted results.
Lesson: I now assert that there are/remain indeed two stores after setting up the core data stack.
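In code, that check can be as simple as the following (a sketch; place it wherever your Core Data stack is built):

// After adding both persistent stores to the coordinator:
NSAssert(self.persistentStoreCoordinator.persistentStores.count == 2,
         @"Expected exactly two persistent stores after setting up the Core Data stack");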
During earlier development of my Core Data model, I had renamed some of its entities in the model editor. By mistake I had only changed the names, but not the entities' class properties. So in effect, while everything worked well in the model editor, by-then-unexpected classes were used at runtime, and therefore unexpected classes were assigned to unexpected/wrong stores as well. Lesson: I now make sure that entity names and their class properties remain in perfect sync (other circumstances permitting).
The issue is now resolved, and I've also refactored my code/model to use (non-overlapping) configurations instead of explicit assignments, which should also help going forward.
Again, thx for your attention
I am new to db4o.
I have this question in mind:
When an object is retrieved from the DAL, it may be updated in the business layer and its original properties are lost; so when it comes to updating, how can I find which object in the database is the original one to update?
You need to be more precise about "the object". If you modify the object instance's properties, simply storing it again will perform an update:
MyClass someInstance = ObjectContainer.Query<MyClass>().FirstOrDefault();
someInstance.Name = "NewName";
someInstance.PhoneNumber = 12132434;
ObjectContainer.Store(someInstance); // This is the update call
[This is just pseudo-code]
So you don't need to match objects to each other as you would have to when using an RDBMS.
However, you need to make sure you are not using a different instance of ObjectContainer, because a different container will not know these objects are the same instance (since there is no ID field in them).
Your application architecture should help to do this for most workflows, so there should really be only one IObjectContainer around. Only if timespans are really long (e.g. you need to store a reference to the object in a different database and process it somehow) would I use the UUID. As you already pointed out, that requires storing the ID somewhere else and hence complicates your architecture.
If, however, you intend to create a new object and 'overwrite' the old one, things get somewhat more complicated because of other objects that might refer to it. This is a somewhat pathological case, though, and should typically be handled within the domain model itself, e.g. by copying object data from one object to another.
You should load the object via its ID:
objectContainer.get().ext().getByID(id);
or via its UUID:
objectContainer.get().ext().getByUUID(uuId);
See the docs for the latter one. For an explanation see the answer here or the docs here. In short, use the UUID only for long-term referencing.
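For completeness, a sketch of the UUID round trip using the db4o Java embedded API (newer releases; older versions name these calls slightly differently, as in the snippets above). UUID generation must be enabled in the configuration before the objects are stored, and someInstance stands for the object from the earlier example:

// Classes come from com.db4o, com.db4o.config and com.db4o.ext
EmbeddedConfiguration config = Db4oEmbedded.newConfiguration();
config.file().generateUUIDs(ConfigScope.GLOBALLY);
ObjectContainer container = Db4oEmbedded.openFile(config, "data.db4o");

// Remember the UUID somewhere outside db4o (a file, another database, ...)
Db4oUUID uuid = container.ext().getObjectInfo(someInstance).getUUID();

// Much later, possibly in another run of the application:
Object original = container.ext().getByUUID(uuid);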