I know that access links on the stack and a display array are ways of implementing access to non-local objects in nested procedures.
Can anyone refer me to reading material on the subject (Google didn't help), or simply explain how it works?
Thanks
I don't know what you mean by a display array, but nested procedure support usually does not use an array; instead the frame pointer of each parent is passed to each child. Since you can look up the parent's parent stack frame (typically parentparentptr := [my parentptr + constant]), this creates a linked list of stack frames.
In the compiler you then have to build a list of variables for each frame, after which you can build an expression (load the frame pointer, then load the variable by an indirect load via that frame pointer) to access a non-local. In deeply nested structures it is probably worthwhile to look up the needed parent frames once and store them on the stack.
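Concretely, here is a rough C sketch of what a compiler might generate for that scheme (the struct and function names are invented for illustration, and real compilers usually pass the link in a register rather than as a visible parameter):

/* Source being modelled (pseudo-Pascal):
     procedure Outer;                   nesting level 1
       var x: integer;
       procedure Middle;                nesting level 2
         procedure Inner;               nesting level 3
         begin x := x + 1 end;
       begin Inner end;
     begin x := 0; Middle end;                                  */
#include <stdio.h>

struct OuterFrame  { int x; };                     /* Outer's locals            */
struct MiddleFrame { struct OuterFrame  *link; };  /* access link to the parent */
struct InnerFrame  { struct MiddleFrame *link; };  /* access link to the parent */

static void Inner(struct MiddleFrame *parent)
{
    struct InnerFrame frame = { parent };
    /* x lives two levels up: follow the chain of access links,
       then use the variable's fixed offset inside that frame. */
    frame.link->link->x += 1;
}

static void Middle(struct OuterFrame *parent)
{
    struct MiddleFrame frame = { parent };
    Inner(&frame);            /* my own frame becomes the child's access link */
}

static void Outer(void)
{
    struct OuterFrame frame = { 0 };
    Middle(&frame);
    printf("x = %d\n", frame.x);   /* prints x = 1 */
}

int main(void) { Outer(); return 0; }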
A standard VirtualStringTree in the examples I've seen has a GetText procedure to get at data which is stored in memory via InitNode. It uses a set of Nodes to manage display, ordering etc.
An application which collects and manages some data in its own class (TMyDataClass) but uses a VirtualStringTree just to display it could, however, simply access TMyDataClass's data in GetText without bothering to attach that data to a Node in InitNode (and later freeing it in FreeNode).
I'm trying to understand any downside there is to that. What functionality of VirtualStringTree will I miss if I don't use InitNode and FreeNode? I have nothing against those of course but I just want to understand better.
The TVirtualStringTree is used to handle everything related to display without worrying about what the data actually is.
To allow it to do that, each of the Node records can store additional information that tells it how to get to the data it needs: the Data property. In general you don't want to have multiple copies of your data, so in the Data property of the Node you would store a pointer (reference) to your object from which it can get the data. I understand that the Data property is defined as a record, so you need to define a record to hold your data. (Please note I may be referring to a different version than you are using; check your docs or source code to confirm what you need.)
So that would mean you have, for example:
CellText := TMyNodeRecord(Node.Data).AppData.DisplayText;
Where TMyNodeRecord is defined as something like:
type TMyNodeRecord = record
AppData: TMyDataClass;
end;
Where TMyDataClass is your own data object that supplies the text through the DisplayText property (or any other property / method you like).
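Putting that together, an OnGetText handler might look roughly like this (a sketch only; exact event signatures depend on your Virtual TreeView version, and PMyNodeRecord is simply the pointer type for the record above):

type
  TMyNodeRecord = record
    AppData: TMyDataClass;
  end;
  PMyNodeRecord = ^TMyNodeRecord;

procedure TForm1.VSTGetText(Sender: TBaseVirtualTree; Node: PVirtualNode;
  Column: TColumnIndex; TextType: TVSTTextType; var CellText: string);
var
  Data: PMyNodeRecord;
begin
  Data := Sender.GetNodeData(Node);        // pointer into the node's data area
  if Assigned(Data) and Assigned(Data.AppData) then
    CellText := Data.AppData.DisplayText;  // the tree only displays, it never owns the data
end;

You would also set the tree's NodeDataSize to SizeOf(TMyNodeRecord) (for example in FormCreate) so each node actually reserves room for the record.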
It is generally known that ABAP memory (EXPORT/IMPORT) is used for passing data within one ABAP session across the call stack, while SAP memory (SET/GET PARAMETER) is session-independent and valid for all ABAP sessions of the user's logon session.
The pitfall here is that SET PARAMETER supports only flat primitive types; otherwise it throws the error:
"LS_MARA" must be a character-type field (data type C, N, D or T).
Global assignment like ASSIGN '(PrgmName)Globalvariable' TO FIELD-SYMBOL(<lo_data>) is not always an option, for example if one wants to pass a structure to a local method variable.
Creating SHMA shared memory objects seems like overkill for simple testing tasks.
So far I found only this ancient thread where the issue was raised, but the solution given there is a perfect example of how you shouldn't write code, a perfect anti-pattern.
What options (except DB) do we have if we want to pass structure or table to another ABAP session?
As usual Sandra has a good answer.
EXPORT/IMPORT TO/FROM SHARED BUFFER/MEMORY is very powerful.
But use it wisely and make sure you understand that it lives on one application server and is not persistent.
If necessary, you can use RFC to call the other application servers and get the buffer from them: CALL FUNCTION xyz DESTINATION ''.
See function TH_SERVER_LIST, i.e. what you see in SM59 as internal connections.
Clearly the lack of persistency of shared buffer/memory is of key consideration.
But what is not immediately obvious until you read the documentation carefully is how the shared buffer manager will abandon entries based on buffer size and available memory. You cannot assume a shared buffer entry will still be there when you go to access it. It most likely will be, but it can be "dropped", the server might be restarted, etc. Use it as a performance-helping tool, but always assume the entry might not be there.
Shared memory, as opposed to shared buffer, suffers from the upper-limit issue, requiring other entries to be discarded before more can be added. Both have pros and cons.
In ST02, look for red entries, i.e. buffer limits reached.
The "Current parameters" button tells you which profile parameters need to be changed.
A great use of this language element is for logging, or for high-performance buffering of data that could be reconstructed. It is also ideal for scenarios such as BAdIs where you cannot issue commits: you can "hold" info without issuing a COMMIT WORK or a database commit.
You can also update / store your log without even using locking, using the simple principle that the current work process number is unique.
CALL FUNCTION 'TH_GET_OWN_WP_NO'
IMPORTING
wp_index = wp_index.
Use the index number as part of the key to your data, as in the sketch below.
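For example, a rough sketch continuing from the call above (wp_index is the variable filled by TH_GET_OWN_WP_NO; the INDX area ZZ, the key pattern and the log table are made-up names for illustration):

DATA lt_log TYPE string_table.
DATA lv_id  TYPE indx-srtfd.

APPEND 'something worth logging' TO lt_log.

" The key embeds the work process number, so parallel work processes never collide.
lv_id = |ZLOG_WP{ wp_index }|.

" No COMMIT WORK and no enqueue lock needed.
EXPORT log = lt_log TO SHARED BUFFER indx(zz) ID lv_id.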
If your kernel is 7.40 or later, see class CL_OBJECT_BUFFER;
otherwise see function SBUF_OBJ_SHOW_OBJECT.
Have fun with Shared Buffers/Memory.
One major advantage of shared buffers over shared memory objects is the ABAP garbage collector: SAPSYS garbage collection can bite you!
In the same application server, you may use EXPORT/IMPORT ... SHARED BUFFER/MEMORY ....
Probable statement for your requirement:
EXPORT mara = ls_mara TO SHARED BUFFER indx(zz) ID 'MARA'.
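And in the receiving session (on the same application server), the counterpart would be something along these lines; check sy-subrc, since the entry may already have been displaced from the buffer, as the other answer points out:

DATA ls_mara TYPE mara.

IMPORT mara = ls_mara FROM SHARED BUFFER indx(zz) ID 'MARA'.
IF sy-subrc <> 0.
  " entry was never written, or it has been swapped out of the shared buffer
ENDIF.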
Between application servers, you may use ABAP Channels.
https://facebook.github.io/relay/graphql/objectidentification.htm is very clear around what Node is and how it behaves, but it doesn't specify which objects must implement it, or what the consequences are if your object doesn't implement it. Is there a set of features that don't work? Are such objects completely ignored? Not all objects in the existing spec (e.g. pageInfo) implement it, so it's clearly not universally required, but pageInfo is somewhat of a special case.
Another way of thinking about the Node interface is that objects that implement it are refetchable. Refetchability effectively means that an object has an ID that I can use to identify the object and retrieve it; by convention, these IDs will usually be opaque, but will contain type information and an identifier within that type (e.g. a Base-64 encoding of a string like "Account:1234").
Relay will leverage refetchability in two ways:
Under a process known as "diffing", if you already have some data for an object identified by ID QWNjb3VudDoxMjM0 (say, the name and address fields), and you then navigate to a view that shows some additional fields (location, createdAt), then Relay can make a minimal query that "refetches" the node but only requests the missing fields.
Relatedly, Relay will diff connections and will make use of the Node interface to fill in missing data on those (example: through some combination of navigation you might have full information for some items in a view, but need to fill in location for some items within the range, or you might modify an item in a connection via a mutation). So, in basic pagination, Relay will often end up making a first + after query to extend a connection, but if you inspect its network traffic in a real app you will also see that it makes node queries for items within connections.
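As a concrete sketch (the Account type and its fields are invented; the ID is the Base-64 example from above), a refetchable type and the kind of node query Relay sends during diffing look like this:

type Query {
  node(id: ID!): Node
}

interface Node {
  id: ID!
}

type Account implements Node {
  id: ID!             # opaque, e.g. Base-64 of "Account:1234"
  name: String
  address: String
  location: String
  createdAt: String
}

# The kind of query Relay issues while diffing: refetch only the missing fields
query RefetchAccount {
  node(id: "QWNjb3VudDoxMjM0") {
    ... on Account {
      location
      createdAt
    }
  }
}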
So yes, you're right that pageInfo doesn't implement Node, and it wouldn't really make sense for it to do so.
In one unit I'm running a query which will return one user's details from the database. Right now I'm thinking of creating a user object, assigning the results of the query to its properties, then setting that as a global variable. I wanted to know if there was a way to pass the data between the units without having to use global variables.
Avoiding global variables is actually a good idea. Also, storing the query result as properties of a (database-independent) object makes sense, because the application might need the information even when the connection is not active.
To avoid a global variable, the easiest way would be to make the object a field of the main form (or a data module) and use getter methods to make it (and its fields) read-only. I would also implement the procedure of loading the dataset values into the object properties as a separate class.
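A minimal sketch of that idea (all names are hypothetical; here the instance is kept private to one shared unit behind a getter function, and the same pattern works with a field plus getter on the main form or data module):

unit UserData;

interface

uses
  Data.DB;

type
  // Database-independent holder for one user's details
  TUserInfo = class
  private
    FUserName: string;
  public
    procedure LoadFromDataSet(ADataSet: TDataSet);
    property UserName: string read FUserName;  // read-only for other units
  end;

// Getter instead of a writable global: other units can read the object
// but cannot replace it.
function CurrentUser: TUserInfo;

implementation

var
  FCurrentUser: TUserInfo;  // private to this unit

function CurrentUser: TUserInfo;
begin
  if FCurrentUser = nil then
    FCurrentUser := TUserInfo.Create;
  Result := FCurrentUser;
end;

procedure TUserInfo.LoadFromDataSet(ADataSet: TDataSet);
begin
  // copy the query result into plain properties; the dataset can then be closed
  FUserName := ADataSet.FieldByName('USER_NAME').AsString;
end;

initialization

finalization
  FCurrentUser.Free;

end.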
I am debugging some Groovy code for a website and have hit an issue where I create an object A in a controller in one part of the flow and set a variable within it (I read it back and it is correct).
I then pick up what I had understood to be the same object in a different controller, but the variable is no longer set.
Either my assumption that the object A in the first controller is the same object A as the one picked up in the second controller is wrong, or something has modified the value en route.
So, this might be a very basic question (and I have a horrible feeling that it points to some fundamental misunderstanding on my part of how Groovy/Java works, so please be gentle):
How can I tell if the object A in controller 1 is the same as the object A in controller 2 (by "the same" I mean pointing to the same object, not that they are equivalent)?
I then pick up what I had understood to be the same object in a
different controller.
If you show an example of what you are doing in the first controller and what you are doing in the second controller that would help clarify what is going on. It isn't clear what you might mean by "pick up" in the sentence quoted above.
If you can orchestrate things such that you have the two references at the same time, you can call o1.is(o2), which will tell you if o1 points to the same object as o2. Something you can use to help debug the situation: in the first controller you can call System.identityHashCode(o1), in the second controller you can call System.identityHashCode(o2), and see if those return the same value.
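A quick sketch of the difference between identity and equality in plain Groovy (outside any controller):

// '==' in Groovy calls equals(), so use is() to test identity
def a = [name: 'A']
def b = a               // second reference to the same map
def c = [name: 'A']     // equal contents, but a different object

assert a.is(b)               // same instance
assert a == c && !a.is(c)    // equal, yet not the same instance

// Across two controllers you rarely hold both references at once,
// so log the identity hash in each place and compare the output:
println System.identityHashCode(a)
println System.identityHashCode(c)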
There are times in a web app where the notion of being the same object can be ambiguous. For example, if you have 2 separate proxies but they are proxying the same instance, there are contexts where you would treat them as the same object. Another example is that if you are dealing with persistent entities you could have 2 separate instances in memory that actually correspond to the same record in the data store.
Anyway, the identityHashCode approach mentioned above is a technique you could use to know if these objects are the same object or not. If that doesn't do it for you and you can show some code or provide some more details that might help.
If you want to make some variables available in several controllers, you can do it in one of the following ways:
put the object in the session (see the sketch after this list)
put the object in the flash scope
put the object in a flow scope; here you MUST make sure that you are accessing the SAME flow
use a singleton service to persist the value permanently or temporarily
use a static field somewhere (though this should never be used)
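For example, the session option could look roughly like this in Grails (the controller names and the stored map are made up):

class FirstController {
    def choose() {
        // store the object for this user's HTTP session
        session.selectedThing = [id: 42, flag: true]
        redirect controller: 'second', action: 'show'
    }
}

class SecondController {
    def show() {
        def thing = session.selectedThing   // same instance within the same session
        render "flag is ${thing?.flag}"
    }
}

The singleton service option works similarly, but anything you keep in a service field is shared by all users and all requests, so it has to be used with care.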