Consider the following four lines of code:
Mono<Void> result = personRepository.findByNameStartingWith("Alice")
    .map(...)
    .flatMap(...)
    .subscriberContext(...);
A fictional use case, which I hope you will immediately map to your real task:
How does one add "Alice" to the context, so that after .map(), where "Alice" is no longer a Person but a Cyborg (assuming an irreversible transformation), I can still access the original Person "Alice" in .flatMap()? We want to compare the strength of "Alice" the person versus "Alice" the cyborg inside .flatMap(), and then send them both to the moon on a ship to build a colony.
I've read the reference documentation about three times:
https://projectreactor.io/docs/core/release/reference/#context
I've read a dozen articles on subscriberContext.
I've looked at a colleague's code that uses subscriberContext, but only for a tracing context and MDC, which are statically initialised outside of the pipelines at the top of the code.
So the conclusion I am coming to is that something else was given the name "context", something the majority cannot use for the overwhelmingly common use case above.
Do I have to stick to tuples and wrappers? Or am I totally a dummy and there is a way? I need this context to work in entirely the opposite direction :-), unless "this" context is not the context I need.
I will await the Reactor developers' attention (or, later than that, go to GitHub and raise an issue about the conceptual naming error, if I am correct), but in the meantime: I believed that the Reactor Context could solve this:
What is the efficient/proper way to flow multiple objects in reactor
But what it actually resembles is some kind of mega-closure over the reactive pipeline, propagating down->up and accepting values from outside in an imperative way, which IMO is a very narrow and limited use case for something called a "context", and which will confuse more people to come.
Context and subscriberContext in the posts you refer to are indeed one and the same...
The goal of the Context is more along the lines of attaching some information to a given subscription.
This works because, upon subscription, a chain of Subscribers is constructed to "materialize" the processing, and by nature each operator (or step) has a reference to its downstream in order to be able to push data to it.
As a result, each operator can also query its downstream for its view of what the current subscription Context is, hence the down-to-up approach.
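To see that down-to-up flow in action, here is a minimal sketch (assumptions: a plain String value instead of your Person, a "destination" key invented for illustration, and the Reactor 3.2-era API in which the operator is still called subscriberContext):

import reactor.core.publisher.Mono;

Mono<String> result = Mono.just("Alice")
        // Reads the Context: the entry written below is already visible
        // here, because the Context propagates from the subscriber upward.
        .flatMap(name -> Mono.subscriberContext()
                .map(ctx -> name + " travels to " + ctx.get("destination")))
        // Written at the bottom of the chain, visible to operators above.
        .subscriberContext(ctx -> ctx.put("destination", "the Moon"));

result.subscribe(System.out::println); // prints "Alice travels to the Moon"

For the person-versus-cyborg comparison itself, where the original value must flow top-down alongside its transformed self, a tuple (e.g. map(p -> Tuples.of(p, toCyborg(p))), with toCyborg standing in for your irreversible transformation) or a small wrapper class remains the straightforward answer; the Context is not designed for that direction.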
When I use a map constructor like:
Person p = new Person(name: "Bob")
through something that is called via a grails.gsp.PageRenderer, the field values are not populated. When I use an empty constructor and then set the fields individually like:
Person p = new Person()
p.name = "Bob"
it succeeds. When I use the map constructor via a render call, it also succeeds.
Any ideas as to why this is the case?
Sample project is here in case anyone wants to dig deeper: https://github.com/danduke/constructor-test/
Actual use case, as requested by Jeff below:
I have a computationally expensive view to render. Essentially it's a multi-thousand page (when PDF'd) view of some data.
This view can be broken into smaller pieces, and I can reliably determine when each has changed.
I render one piece at a time, submitting each piece to a fixed-size thread pool to avoid overloading the system (see the sketch after this list). I left this out of the example project, as it has no bearing on the results.
I cache the rendered results and evict them by key when data in that portion of the view has changed. This is why I am using a page renderer.
Each template used may make use of various tag libraries.
Some tag libraries need to load data from other applications in order to display things properly (actual use case: loading preferences from a shared repository for whether particular items are enabled in the view)
When loaded, these items need to be turned into an object. In my case, it's a GORM object. This is why I am creating a new object at all.
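A minimal sketch of that render-and-cache scheme (all names are hypothetical; only grails.gsp.PageRenderer, its standard groovyPageRenderer bean, and its render(template:, model:) call are real Grails API):

import grails.gsp.PageRenderer
import java.util.concurrent.ConcurrentHashMap

class ReportPieceService {

    PageRenderer groovyPageRenderer // injected by Grails

    private final Map<String, String> cache = new ConcurrentHashMap<>()

    // Render a piece at most once per key; later calls hit the cache.
    String renderPiece(String key, Map model) {
        cache.computeIfAbsent(key) { k ->
            groovyPageRenderer.render(template: '/report/piece', model: model)
        }
    }

    // Called whenever the data behind that piece of the view changes.
    void evict(String key) {
        cache.remove(key)
    }
}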
There are quite a few opportunities for improvement in my actual use case, and I'm open to suggestions. However, even the simplest possible demonstration of the problem suggests that something is wrong. I'm curious whether it should be possible to use map constructors in something called from a PageRenderer at all. I'm surprised that it doesn't work; it feels like a bug, but obviously a very precise, edge-case one.
"Technically it is a bug" (which is the best kind of bug!), and has been reported here: https://github.com/grails/grails-core/issues/11870
I'll update this if/when additional information is available.
It is generally known that ABAP memory (EXPORT/IMPORT) is used for passing data within one ABAP session, across the call stack, while SAP memory (SET/GET PARAMETER) is session-independent and valid for all the ABAP sessions of a user session.
The pitfall here is that SET PARAMETER supports only primitive flat types; otherwise it throws the error:
"LS_MARA" must be a character-type field (data type C, N, D or T).
A global assignment like ASSIGN '(PrgmName)Globalvariable' TO FIELD-SYMBOL(<lo_data>). is not always a way out either, for example if one wants to pass a structure into a local method variable.
Creating SHMA shared memory objects seems like overkill for simple testing tasks.
So far I have found only this ancient thread where the issue was raised, but the solution given there is a perfect example of how one shouldn't write code, a textbook anti-pattern.
What options (apart from the DB) do we have if we want to pass a structure or a table to another ABAP session?
As usual Sandra has a good answer.
EXPORT/IMPORT TO/FROM SHARED BUFFER/MEMORY is very powerful, but use it wisely and make sure you understand that it lives on one application server and is non-persistent.
You can use RFC to call the other application servers and fetch the buffer from them if necessary: CALL FUNCTION xyz DESTINATION ''.
See function TH_SERVER_LIST, i.e. what you see in SM59 under internal connections.
Clearly, the lack of persistence of shared buffer/memory is a key consideration.
But what is not immediately obvious until you read the documentation carefully is how the shared buffer manager abandons entries based on buffer size and available memory. You cannot assume a shared buffer entry will still be there when you go to access it. It most likely will be, but it can be "dropped", the server might be restarted, etc. Use it as a performance-helping tool, but always assume the entry might not be there.
Shared memory, as opposed to shared buffer, suffers from an upper-limit issue, requiring other entries to be discarded before more can be added. Both have pros and cons.
In ST02, look for red entries: they mean buffer limits have been reached.
The "Current parameters" button tells you which profile parameters need to be changed.
A great use of this language element is for logging, or for high-performance buffering of data that could be reconstructed. It is also ideal for scenarios such as BAdIs where you cannot issue commits: you can "hold" info without issuing a commit or database commit.
You can also update and store your log without even using locking, using the simple principle that the current work process number is unique:
CALL FUNCTION 'TH_GET_OWN_WP_NO'
  IMPORTING
    wp_index = wp_index.
Use the work process number as part of the key to your data.
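Putting those two ideas together, a sketch (lt_log and the ID naming are invented for illustration):

DATA: wp_index TYPE i,
      lv_id    TYPE indx-srtfd,
      lt_log   TYPE TABLE OF string. " stand-in for your log table

CALL FUNCTION 'TH_GET_OWN_WP_NO'
  IMPORTING
    wp_index = wp_index.

* The work process number makes the key unique per work process,
* so parallel processes never overwrite each other's entries.
lv_id = |LOG_{ wp_index }|.
EXPORT log = lt_log TO SHARED BUFFER indx(zz) ID lv_id.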
If your kernel is 7.40 or later, see class CL_OBJECT_BUFFER; otherwise, see function SBUF_OBJ_SHOW_OBJECT.
Have fun with Shared Buffers/Memory.
One major advantage of shared buffer over shared memory objects is the ABAP garbage collector: SAPSYS garbage collection can bite you!
On the same application server, you may use EXPORT/IMPORT ... SHARED BUFFER/MEMORY ....
A probable statement for your requirement:
EXPORT mara = ls_mara TO SHARED BUFFER indx(zz) ID 'MARA'.
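The counterpart in the receiving session might look like this (a sketch; ls_mara declared as in your question, and note the sy-subrc check, since shared buffer entries can be dropped at any time):

IMPORT mara = ls_mara FROM SHARED BUFFER indx(zz) ID 'MARA'.
IF sy-subrc <> 0.
  " Entry was never written, was dropped by the buffer manager,
  " or the server was restarted: always handle this case.
ENDIF.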
Between application servers, you may use ABAP Channels.
I am creating a Google Dataflow pipeline using the Apache Beam Java SDK. I have a few transforms there, and I finally create a PCollection<Entity>. I need to write this into Google Datastore and then perform another transform AFTER all entities have been written (such as broadcasting the IDs of the saved objects through a Pub/Sub message to multiple subscribers).
Now, the way to store a PCollection is:
entities.apply(DatastoreIO.v1().write().withProjectId("abc"));
This returns a PDone object, and I am not sure how I can chain another transform to occur after this write has completed. Since the DatastoreIO.write() call does not return a PCollection, I am not able to continue the pipeline. I have two questions:
How can I get the IDs of the objects written to Datastore?
How can I attach another transform that will act after all entities are saved?
We don't have a good way to do either of these things (returning the IDs of written Datastore entities, or waiting until entities have been written), though this is far from the first similar request (people have asked for the same for BigQuery, for example), and we're thinking about it.
Right now your only option is to wait until the entire pipeline finishes, e.g. via pipeline.run().waitUntilFinish(), and then do what you wanted in your main program (e.g. you can run another pipeline).
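Schematically, that workaround looks like this (a sketch, not a drop-in solution: the project ID is a placeholder and the upstream transforms are elided):

import com.google.datastore.v1.Entity;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.datastore.DatastoreIO;
import org.apache.beam.sdk.values.PCollection;

Pipeline pipeline = Pipeline.create(options);

PCollection<Entity> entities =
        pipeline.apply(/* ... transforms that produce the entities ... */);

entities.apply(DatastoreIO.v1().write().withProjectId("abc"));

// Blocks until the pipeline, and therefore every Datastore write, is done.
pipeline.run().waitUntilFinish();

// Only now do the follow-up work imperatively: publish the Pub/Sub
// message, or launch a second pipeline that reads the entities back.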
I'm using (and loving) Siesta to communicate with a REST web service in my Swift app. I have implemented a series of ResponseTransformers to map the API call responses to model classes, so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a persistent Siesta EntityCache object to support an offline mode, by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this, because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific persistent cache object for each Resource type, based on URL pattern matching during service configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the Realm write methods during the call to EntityCache.writeEntity, or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having a separate EntityCache implementation for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
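For example, the write side might be as simple as this (a rough sketch: Challenge stands in for your Realm-friendly model class, the method signature is abbreviated, and the rest of the EntityCache conformance is elided):

import RealmSwift

func writeEntity(_ entity: Entity, forKey key: String) {
    // Type-based rule: persist only content we know how to store.
    guard let challenge = entity.content as? Challenge else { return }
    let realm = try! Realm()
    try! realm.write {
        realm.add(challenge, update: true)  // i.e. realm.add(entity.content)
    }
}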
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to a polymorphic mapping from Siesta cache keys to Realm objects of various types (sketched below),
by adding a siestaKey attribute and doing some kind of union query across models, or
by keeping a (cache key) → (model type, model ID) mapping outside of Realm.
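For the first option, the mapping model might look something like this (a purely hypothetical shape; only the Realm Object basics are real API):

import RealmSwift

class CacheEntry: Object {
    @objc dynamic var siestaKey = ""  // opaque key handed in by Siesta
    @objc dynamic var modelType = ""  // e.g. "Challenge"
    @objc dynamic var modelID = ""    // primary key within that type

    override static func primaryKey() -> String? { return "siestaKey" }
}

readEntity would then look up the CacheEntry for its key, switch on modelType, and fetch the actual object from the corresponding Realm table.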
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a Github issue for any suggested API improvements.
If I use Breeze to load a partial entity:
var query = EntityQuery.from('material')
    .select('Id, MaterialName, MaterialType, MaterialSubType')
    .orderBy(orderBy.material);

return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);

function querySucceeded(data) {
    var list = partialMapper.mapDtosToEntities(
        manager, data.results, entityNames.material, 'id');
    if (materialsObservable) {
        materialsObservable(list);
    }
    log('Retrieved Materials from remote data source', data, true);
}
...and I also want another, slightly different partial query on the same entity (selecting a few other fields, for example), then I'm assuming I need to issue a separate query, since those fields weren't retrieved by the first one?
OK, so what if I want to use the same fields retrieved in the first query (Id, MaterialName, MaterialType, MaterialSubType), but under different names in the second query (MaterialName becomes just "name", MaterialType becomes "masterType", and so on)? Is it possible to clone the partial entity I already have in memory (assuming it is in memory?) and rename the fields, or do I still need a completely separate query?
I think I would "union" the two cases into one projection if I could afford to do so. That would simplify things dramatically. But it's really important to understand the following point:
You do not need to turn query projection results into entities!
Background: the CCJS example
You probably learned about the projection-into-entities technique from the CCJS example in John Papa's superb Pluralsight course "Single Page Apps JumpStart". CCJS uses this technique for a very specific reason: to simplify list updates without making a trip to the server.
Consider the CCJS "Sessions List" which is populated by a projection query. John didn't have to turn the query results into entities. He could have bound directly to the projected results. Remember that Knockout happily binds to raw data values. The user never edits the sessions on that list directly. If displayed session values can't change, turning them into observable properties is a waste of CPU.
When you tap on a Session, you go to a Session view/edit screen with access to almost every property of the complete session entity. CCJS needs the full entity there so it looks for the full (not partial) session in cache and, if not found, loads the entity from the server. Even to this point there is no particular value in having previously converted the original projection results into (partial) session entities.
Now edit the Session (change the title, say) and save it. Return to the "Sessions List".
Question
How do you make sure that the updated title appears in the Sessions List?
If we bound the Sessions List HTML to the projection data objects, those objects are not entities. They're just objects. The entity you edited in the session view is not an object in the collection displayed in the Sessions List. Yes, there is a corresponding object in the list - one that has the same session id. But it is not the same object.
Choices
#1: Refresh the list from the server by reissuing the projection query, and bind directly to the projection data (sketched after this list). Note that the data consist of raw JavaScript objects, not entities; they are not in the Breeze cache.
#2: Publish an event after saving the real session entity; the subscribing "Sessions List" ViewModel hears the event, extracts the changes, and updates its copy of the session in the list.
#3: Use the projection-into-entity technique so that you can use a session entity everywhere.
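To make #1 concrete, a sketch in the spirit of your own query code (the entity name, field list, and observable are hypothetical):

var query = breeze.EntityQuery.from('Sessions')
    .select('id, title, room, timeSlot, track');

manager.executeQuery(query)
    .then(function (data) {
        // data.results are raw projection objects, not entities,
        // and they are never placed in the Breeze cache.
        sessionsObservable(data.results);
    })
    .fail(handleError);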
Pros and Cons
#1 is easy to implement. But it requires a server trip every time you enter the Sessions List view.
One of the CCJS design goals was that, once loaded, it should be able to operate entirely offline with zero access to the server. It should work crisply when connectivity is intermittent and poor.
CCJS is your always-ready guide to the conference. It tells you instantly what sessions are available, when, and where, so you can find the session you want as you're walking the halls, and get there. If you've been to a tech conference or hotel, you know the wifi is generally awful, and an app is almost useless if it only works when it has direct access to the server.
#1 is not well suited to the intended operating environment for CCJS.
The CCJS Jumpstart is part way down that "server independent" path; you'll see something much closer to a full offline implementation soon.
You'll also lose the ability to navigate to related entities. The Sessions List displays each session's track, timeslot and room. That's repetitive information found in the "lookup" reference entities. You'll either have to expand the projection to include this information in a "flattened" view of the session (fatter payload) or get clever on the client-side and patch in the track, timeslot and room data by hand (complexity).
#2 helps with offline/intermittent connectivity scenarios. Of course you'll have to set up the messaging system, establish a protocol about saved entities and teach the Sessions List to find and update the affected session projection object. That's not super difficult - the Breeze EntityManager publishes an event that may be sufficient - but it would take even more code.
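For example, the entityChanged event can drive this (a sketch: entityChanged and EntityAction.MergeOnSave are real Breeze API; sessionList and the projected field names are hypothetical):

manager.entityChanged.subscribe(function (args) {
    if (args.entityAction === breeze.EntityAction.MergeOnSave &&
        args.entity.entityType.shortName === 'Session') {
        // Find our raw copy of the saved session and patch in the changes
        // (re-render, or make the patched fields observable, as needed).
        var saved = args.entity;
        var match = sessionList().filter(function (s) {
            return s.id === saved.getProperty('id');
        })[0];
        if (match) { match.title = saved.getProperty('title'); }
    }
});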
#3 is good for "server independence", has a small projection payload, is super-easy, and is a cool demonstration of breeze. You have to manage the isPartial flag so you always know whether the session in cache is complete. That's not hard.
It could get more complicated if you needed multiple flavors of "partial entity" ... which seems to be where you are going. That was not an issue in CCJS.
John chose #3 for CCJS because it fit the application objectives.
That doesn't make it the right choice for every application. It may not be the right choice for you.
For example, if you always have a fast, low latency connection, then #1 may be your best choice. I really don't know.
I like the cast-to-entity approach myself because it is easy and works so well most of the time. I do think carefully about that choice before I make it.
Summary
You do not have to turn projection query results into entities
You can bind to projected data directly, without Knockout observable properties, if they are read-only
Make sure you have a good reason to convert projected data into (partial) entities.
CCJS has a good reason to convert projected query data into entities. Do you?