Passing structured data between ABAP sessions? - memory

It is generally known that ABAP memory (EXPORT/IMPORT) is used for passing data within an ABAP session across the call stack, while SAP memory (SET/GET) is session-independent and valid for all ABAP sessions of a user session.
The pitfall here is that SET PARAMETER supports only primitive flat types; otherwise it throws the error:
"LS_MARA" must be a character-type field (data type C, N, D or T).
A global assignment like ASSIGN '(PrgmName)Globalvariable' TO FIELD-SYMBOL(<lo_data>) is not always an option either, for example when one wants to pass a structure to a local method variable.
Creating SHMA shared memory objects seems like overkill for simple testing tasks.
So far I have found only this ancient thread where the issue was raised, but the solution there is stupid and a perfect example of how you shouldn't write code, a perfect anti-pattern.
What options (besides the database) do we have if we want to pass a structure or an internal table to another ABAP session?

As usual Sandra has a good answer.
Export/Import To/From Shared Buffer/Memory is very powerful.
But use it wisely and make sure you understand that it lives on one application server and is not persistent.
You can use RFC to call the other application servers and fetch the buffer from them if necessary: CALL FUNCTION xyz DESTINATION ''
See function TH_SERVER_LIST, i.e. what you see in SM59 under Internal Connections.
Clearly the lack of persistence of the shared buffer/memory is a key consideration.
But what is not immediately obvious until you read the documentation carefully is how the shared buffer manager will abandon entries based on buffer size and available memory. You cannot assume a shared buffer entry will still be there when you go to access it. It most likely will be, but it can be "dropped", the server might be restarted, etc. Use it as a performance-helping tool, but always assume the entry might not be there.
Shared memory, as opposed to the shared buffer, suffers from an upper-limit issue: other entries have to be discarded before more can be added. Both have pros and cons.
In ST02, look for red entries; they indicate that buffer limits have been reached.
The Current Parameters button tells you which profile parameters need to be changed.
A great use of this language element is for logging, or for high-performance buffering of data that could be reconstructed. It is also ideal for scenarios in BAdIs etc. where you cannot issue commits: you can "hold" information without issuing a commit or a database commit.
You can also update/store your log without even using locking, using the simple principle that the current work process number is unique:

DATA wp_index TYPE i.

CALL FUNCTION 'TH_GET_OWN_WP_NO'
  IMPORTING
    wp_index = wp_index.

Use the work process number as part of the key to your data, as in the sketch below.
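Putting the two together, a minimal sketch of such a lock-free, per-work-process log entry (the INDX area zz, the ZLOG id prefix and the string table are illustrative, not from the original answer):

DATA: wp_index TYPE i,
      lt_log   TYPE string_table,
      lv_id    TYPE c LENGTH 22.

CALL FUNCTION 'TH_GET_OWN_WP_NO'
  IMPORTING
    wp_index = wp_index.

" The work process number makes the key unique per work process,
" so parallel processes never overwrite each other's entries.
lv_id = |ZLOG{ wp_index }|.

APPEND |something worth logging| TO lt_log.
EXPORT log = lt_log TO SHARED BUFFER indx(zz) ID lv_id.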
If your kernel is 7.40 or later, see class CL_OBJECT_BUFFER; otherwise see function SBUF_OBJ_SHOW_OBJECT.
Have fun with Shared Buffers/Memory.
One major advantage of shared buffers over shared memory objects is the ABAP garbage collector. SAPSYS garbage collection can bite you!

On the same application server, you may use EXPORT/IMPORT ... TO/FROM SHARED BUFFER/MEMORY ....
Probable statement for your requirement:
EXPORT mara = ls_mara TO SHARED BUFFER indx(zz) ID 'MARA'.
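And a minimal sketch of the matching read in the other session (assuming ls_mara is typed like the exported structure; check sy-subrc, because the entry may have been displaced from the buffer):

DATA ls_mara TYPE mara.

IMPORT mara = ls_mara FROM SHARED BUFFER indx(zz) ID 'MARA'.
IF sy-subrc <> 0.
  " Entry not found - it may have been displaced or never written.
ENDIF.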
Between application servers, you may use ABAP Channels.

Related

Is Reactor Context used only for statically initialised data?

Consider the following four lines of code:
Mono<Void> result = personRepository.findByNameStartingWith("Alice")
        .map(...)
        .flatMap(...)
        .subscriberContext()
A fictional use case which I hope you will immediately map to your real task requirements:
How does one add "Alice" to the context, so that after .map(), where "Alice" is no longer a Person.class but a Cyborg.class (assuming an irreversible transformation), I can still access the original "Alice" Person.class in .flatMap()? We want to compare the strength of "Alice" the person versus "Alice" the cyborg inside .flatMap() and then send them both to the moon on a ship to build a colony.
I've read this about three times:
https://projectreactor.io/docs/core/release/reference/#context
I've read a dozen articles on subscriberContext.
I've looked at a colleague's code that uses subscriberContext, but only for the tracing context and MDM, which are statically initialised outside the pipelines at the top of the code.
So the conclusion I am coming to is that something else was named "context", something the majority cannot use for the overwhelmingly common use case above.
Do I have to stick to tuples and wrappers? Or am I totally a dummy and there is a way? I need this context to work in the entirely opposite direction :-), unless "this" context is not the context I need.
I will await the Reactor developers' attention (or, later than that, go to GitHub to raise an issue about the conceptual naming error, if I am correct), but in the meantime: I believed that Reactor Context could solve this:
What is the efficient/proper way to flow multiple objects in reactor
But what it actually resembles is some kind of mega-closure over the reactive pipeline, propagating down->up and accepting values from outside in an imperative way, which IMO is a very narrow and limited use case to be called a "context" and will confuse more people to come.
Context and subscriberContext in the posts you refer to are indeed one and the same...
The goal of the Context is more along the lines of attaching some information to a given subscription.
This works because, upon subscription, a chain of Subscribers is constructed to "materialize" the processing, and by nature each given operator (or step) has a reference to its downstream in order to be able to push data to it.
As a result, it can also query it for its view of what the current subscription Context is, hence the down-to-up approach.
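To make that direction of travel concrete, here is a minimal sketch using the pre-3.4 subscriberContext API from the question (the "employer" key and the values are made up for illustration):

import reactor.core.publisher.Mono;

public class ContextSketch {
    public static void main(String[] args) {
        Mono<String> pipeline = Mono.just("Alice")
                // Reading happens upstream: this step asks its downstream
                // subscriber for the Context attached at subscription time.
                .flatMap(name -> Mono.subscriberContext()
                        .map(ctx -> name + " works for " + ctx.get("employer")))
                // Writing happens below the readers: the Context travels up
                // with the subscription signal, not down with the data.
                .subscriberContext(ctx -> ctx.put("employer", "ACME"));

        pipeline.subscribe(System.out::println); // prints "Alice works for ACME"
    }
}

Given that down-to-up flow, carrying the original Person alongside the Cyborg through map/flatMap is more naturally done with a tuple or a small wrapper than with the Context.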

How to implement a Persistent Cache in Siesta with a structured model layer

I'm using (and loving) Siesta to communicate with a REST web service in my Swift app. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having separate EntityCache implementations for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to keeping a polymorphic mapping from Siesta cache keys to Realm objects of various types,
by adding a siestaKey attribute and doing some kind of union query across models, or
by keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a Github issue for any suggested API improvements.
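For illustration, a minimal sketch of the first option: a dedicated Realm model that maps Siesta's opaque cache key to the type and primary key of whatever model was stored for that resource (all names here are hypothetical, not part of Siesta or Realm):

import RealmSwift

class SiestaCacheEntry: Object {
    @objc dynamic var siestaKey = ""   // opaque key handed to the cache
    @objc dynamic var modelType = ""   // e.g. "Challenge"
    @objc dynamic var modelID = ""     // primary key of the cached model

    override static func primaryKey() -> String? {
        return "siestaKey"
    }
}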

Storing large reference data objects in Drools

I'm looking for a way to store large objects within Drools for long periods of time (i.e. not like facts which are added and removed from within a session).
I've read that Drools works using KnowledgeBases and Sessions (Stateless & Stateful) and that KnowledgeBases contain application knowledge definitions but no runtime data.
In a case where I need to store, for example, a large dictionary (that won't change but will be referenced by more than one successive session), and have objects added to working memory and checked against this dictionary to fire rules, where would it be best to store it?
Does everything just go into working memory (in which case, would I need to load the dictionary into memory each time I open a new session?) or am I just missing a crucial Drools basic principle? Would global variables be a good fix for this?
Not sure how large "large" is (of course there's always a performance tradeoff), but you could also use an inserted object that pulls from a database (or cache) and have the rules access the values via a method:
when
    $y : AnObject( name == "car", lookupPrice > 10000 )
where AnObject.getLookupPrice() is a method that pulls a value out of the cached/stored dictionary.
If the dictionary isn't too big, you could also codify it as an object and use it the same way.
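For the global-variable route raised in the question, a minimal sketch assuming the standard KIE API (the dictionary, the global name and the classpath container are illustrative; the DRL would declare global java.util.Map priceDictionary):

import java.util.HashMap;
import java.util.Map;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class DictionaryGlobalSketch {
    public static void main(String[] args) {
        // Built once by the application and reused; it is never inserted
        // into working memory, rules only call into it.
        Map<String, Integer> priceDictionary = new HashMap<>();
        priceDictionary.put("car", 15000);

        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();

        // Each successive session gets a reference to the same dictionary.
        KieSession session = container.newKieSession();
        session.setGlobal("priceDictionary", priceDictionary);

        // session.insert(someFact);
        session.fireAllRules();
        session.dispose();
    }
}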

Is it acceptable to have a single instance of a model to keep "global", alterable information?

I ran into a problem which required me to have access to a bunch of variables which need to be changed every so often, so I made a Misc model and there is only ever one instance of it. This was my solution for having editable global variables.
It holds all types of stuff that didn't seem to deserve their own models. Is this acceptable, or does it violate some Rails-building principle I'm not aware of? It works, but I have doubts.
Is there a better alternative to this strategy (think fetching/editing, as an example, Misc.first.todays_specials)?
If this is passable, is there a way to prevent the creation of more than one record of a model in the database? The problem with the above approach, as you can see, is that if there are suddenly TWO entries for Misc, things will get wonky, since the code requests .first under the assumption that there is only ever going to be one.
You can create a Settings table storing key-value configs. It will be scalable and not depend on predefined keys. Also, you won't have a table with one row this way.
If you need lots of reads/writes, you might also want to look at caching (see Rails SQL Caching).
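A minimal sketch of such a key-value Settings model (model, column and method names are illustrative):

# app/models/setting.rb
class Setting < ApplicationRecord
  validates :key, presence: true, uniqueness: true

  def self.[](key)
    find_by(key: key)&.value
  end

  def self.[]=(key, value)
    find_or_initialize_by(key: key).update(value: value)
  end
end

# Usage:
#   Setting["todays_specials"] = "soup"
#   Setting["todays_specials"]  # => "soup"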
You could use a singleton pattern.
A singleton class is a class that can only have one instance.
So you could do something like this:
initializers/config.rb
require 'singleton'

class MyConfig
  include Singleton

  attr_accessor :config1

  def initialize
    self.config1 = ["hello", "world"]
  end
end
and use it in this way:
MyConfig.instance.config1
You can also consider global variables. Global variables are those which start with the $ sign, and they are accessible throughout the whole application by all instances of your web server.
Using a singleton to hold global state is a very bad idea, especially in a web-server:
If you are using a multi-threaded environment - you will run into thread-safety issues.
More relevantly - if you run multi-process, or multi-server (as you would have to, if your web application ever succeeds...) - the state will be inconsistent, as changes in one process/machine will not be propagated to the other processes/machines.
A restart of your application will destroy the state of the application, since it is held only in memory.
You could use an SQL solution, as suggested by #Tala, but if you want something more light-weight and 'freestyle', you might want to look at some key-value stores like memcached or redis, where you could save your state in a central location, and fetch it when needed.
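For the Redis flavour of that idea, a minimal sketch (the key names and the REDIS_URL environment variable are illustrative, assuming the redis gem):

# config/initializers/redis.rb
require "redis"

# Shared, central store: every process and server sees the same value.
REDIS = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))

# Anywhere in the app:
REDIS.set("todays_specials", "soup")
REDIS.get("todays_specials")  # => "soup"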

db4o object update dilemma

I am new to db4o.
I have this question in mind:
When an object is retrieved from the DAL, it may be updated in the business layer, and then we lose its original properties. So when it comes to updating, how can I find which object in the database is the original one to update?
You need to be more precise about "the object". If you modify the object instance's properties, simply storing it again will perform an update:
MyClass someInstance = ObjectContainer.Query<MyClass>().FirstOrDefault();
someInstance.Name = "NewName";
someInstance.PhoneNumber = 12132434;
ObjectContainer.Store(someInstance); // This is the update call
[This is just pseudo-code]
So you don't need to match objects to each other as you would have to when using an RDBMS.
However, you need to make sure you are not using a different instance of ObjectContainer, because a different container will not know these objects are the same instance (since there is no ID field in them).
Your application architecture should help with this for most workflows, so there should really be only one IObjectContainer around. Only if the timespans are really long (e.g. you need to store a reference to the object in a different database and process it somehow) would I use the UUID. As you already pointed out, that requires storing the ID somewhere else and hence complicates your architecture.
If, however, you intend to create a new object and 'overwrite' the old object, things get somewhat more complicated because of other objects that might refer to it. However, this is a somewhat pathological case and should typically be handled within the domain model itself, e.g. by copying object data from one object to another.
You should load the object via its ID:
objectContainer.get().ext().getByID(id);
or via its UUID:
objectContainer.get().ext().getByUUID(uuId);
See the docs for the latter one. For an explanation, see the answer here or the docs here. In short, use UUIDs only for long-term referencing.
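A minimal sketch of that UUID round trip in db4o's Java API (the Person class and file name are made up; note that UUID generation must be enabled in the db4o configuration for getUUID() to return anything):

import com.db4o.Db4o;
import com.db4o.ObjectContainer;
import com.db4o.ext.Db4oUUID;

public class UuidRoundTrip {

    // Hypothetical persistent class used only for illustration.
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        ObjectContainer db = Db4o.openFile("data.db4o");
        try {
            Person original = new Person("Alice");
            db.store(original);

            // Keep this UUID somewhere outside db4o (file, another DB, ...).
            Db4oUUID uuid = db.ext().getObjectInfo(original).getUUID();

            // Much later, possibly after reopening the container:
            Person reloaded = (Person) db.ext().getByUUID(uuid);
            reloaded.name = "Alice (updated)";
            db.store(reloaded);   // this is the update call
        } finally {
            db.close();
        }
    }
}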
