Is there a context that I can use to pass data between request scripts in KrakenD?

I'm talking about the "modifier/lua-backend" modifier. I can't find a way to pass data between the scripts of two backend responses.
For example, if I have two sequential backend responses, is there a way to pass data from the post-script of the first to the pre-script of the second?

Related

Saving data across APIs in Gramex

I am calling the same database query in multiple FormHandlers. I want to fetch the data once, process it, and store it for use across the multiple FormHandlers.
FormHandler caches the data after your first query, so you are essentially not hitting the DB again if your query stays the same.
And if you are firing the same query through multiple FormHandlers, you could instead write a single transform function that does all the different processing after the data is fetched (FormHandler will take care of caching, so the different URL patterns will not each query the database).
/dataapi?mode=getsalesdata&otherparams=.......
/dataapi?mode=getavgsales&otherparams=........
You could also use the query function in FormHandler to control the dynamic behaviour of your query.
Provide some more details about the use case to get a tailored response.

Chaining another transform after DataStoreIO.Write

I am creating a Google Dataflow pipeline using the Apache Beam Java SDK. I have a few transforms there, and I finally create a collection of entities (PCollection<Entity>). I need to write this into Google Datastore and then perform another transform AFTER all entities have been written (such as broadcasting the IDs of the saved objects through a Pub/Sub message to multiple subscribers).
Now, the way to store a PCollection is:
entities.apply(DatastoreIO.v1().write().withProjectId("abc"));
This returns a PDone object, and I am not sure how I can chain another transform to occur after this write has completed. Since the DatastoreIO.write() call does not return a PCollection, I am not able to continue the pipeline. I have two questions:
How can I get the IDs of the objects written to Datastore?
How can I attach another transform that will act after all entities are saved?
We don't have a good way to do either of these things (returning IDs of written Datastore entities, or waiting until entities have been written), though this is far from the first similar request (people have asked for this for BigQuery, for example) and we're thinking about it.
Right now your only option is to wait until the entire pipeline finishes, e.g. via pipeline.run().waitUntilFinish(), and then do what you wanted in your main program (e.g. you can run another pipeline).

select2: load remote data from several sources simultaneously

I need some kind of omnisearch: when the user types a name or serial number, select2 sends several simultaneous AJAX calls to retrieve employees, candidates and devices.
As soon as any of these calls returns data (for example employees), it is shown to the user.
So if the employee data is returned first, we show it. As soon as the candidates data is returned, we combine it with the employee data, sort it by name and show it to the user again.
Is it possible?
You need to code such a thing yourself. By default select2 only loads the data attached to the select box; it's your responsibility to write JavaScript that behaves in the way you describe, and it's non-trivial code.
In general the idea is to load the sources you want with multiple async calls, store the data you fetch, perform the operations you need (such as merging with the other JSON results), then put the result into the select box and refresh it.
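A possible client-side sketch of that flow (assuming jQuery and select2 are already loaded and the <select> is already initialised as a select2; the endpoint URLs and the id/text fields are just placeholders):

declare const $: any;  // jQuery with the select2 plugin, loaded globally

interface Item { id: string; text: string; }

const sources = ["/api/employees", "/api/candidates", "/api/devices"]; // placeholder endpoints
let merged: Item[] = [];

function refresh(items: Item[]): void {
  const $box = $("#omnisearch");
  $box.empty();
  items.forEach(i => $box.append(new Option(i.text, i.id))); // rebuild the <option> list
  $box.trigger("change");                                    // tell select2 the options changed
}

// fire all calls at once; each one updates the box as soon as its data arrives
sources.forEach(url => {
  fetch(url)
    .then(r => r.json())
    .then((items: Item[]) => {
      merged = merged.concat(items).sort((a, b) => a.text.localeCompare(b.text));
      refresh(merged);
    });
});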
I would think you would want to write this on the backend. Have an endpoint that coalesces all the data you want; select2 then makes one AJAX call to that endpoint to retrieve all the data you need in one go.
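A rough sketch of that server-side approach (Node/Express here only as an example; the lookup functions are placeholders for your real data sources):

import express from "express";

// placeholder lookups; in reality each would query its own store or service
async function findEmployees(term: string): Promise<{ id: string; text: string }[]> { return []; }
async function findCandidates(term: string): Promise<{ id: string; text: string }[]> { return []; }
async function findDevices(term: string): Promise<{ id: string; text: string }[]> { return []; }

const app = express();

// one endpoint that coalesces all three sources, so select2 only makes a single AJAX call
app.get("/api/omnisearch", async (req, res) => {
  const term = String(req.query.term ?? "");
  const [employees, candidates, devices] = await Promise.all([
    findEmployees(term),
    findCandidates(term),
    findDevices(term),
  ]);
  const merged = [...employees, ...candidates, ...devices]
    .sort((a, b) => a.text.localeCompare(b.text));
  res.json({ results: merged }); // a shape select2's ajax option can consume
});

app.listen(3000);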

Using Breeze for arbitrary server response

I have a whole structure for creating complex queries and obtaining data on the client side using Breeze and Web API IQueryable<T>.
I would like to use this structure on the client side to call another Web API controller, intercept the result of the query, and use it to produce an Excel file returned via HttpResponseMessage.
See: Returning binary file from controller in ASP.NET Web API
How can I use executeQuery to get the 'octet-stream' response instead of the standard Breeze JSON, without interfering with the data in the client-side cache?
The goal is to create an 'Export to Excel' that does not use the existing frontend paging, for a large volume of data.
If you don't want to track changes, call EntityQuery.noTracking() before calling executeQuery(). This will return raw JavaScript objects without Breeze tracking capabilities.
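A minimal sketch of that (the service endpoint and resource name are just placeholders):

import * as breeze from "breeze-client"; // or use the global `breeze` from a script tag

const manager = new breeze.EntityManager("api/yourservice"); // placeholder service endpoint
const query = breeze.EntityQuery.from("Orders")              // placeholder resource name
  .noTracking();                                             // return raw objects, no change tracking

manager.executeQuery(query).then(data => {
  // data.results are plain JavaScript objects; they are not merged into the entity cache
  console.log(data.results);
});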
You can't make executeQuery() return binary 'octet-stream' data. But you can use the Breeze ajax implementation:
var ajaxImpl = breeze.config.getAdapterInstance("ajax");
ajaxImpl.ajax() // by default it is a wrapper to jQuery.ajax
See http://www.breezejs.com/documentation/customizing-ajax
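Alternatively, a rough sketch of fetching the file entirely outside Breeze (so nothing touches the entity cache); the endpoint URL and file name are just placeholders:

async function exportToExcel(): Promise<void> {
  const response = await fetch("/api/export/excel", {  // placeholder Web API endpoint
    headers: { Accept: "application/octet-stream" },
  });
  const blob = await response.blob();                  // read the binary body
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "export.xlsx";                       // suggested file name
  link.click();                                        // trigger the browser download
  URL.revokeObjectURL(link.href);
}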
Create a custom "data service adapter" and specify it when you create an EntityManager for this special purpose. It can be quite simple, because you disable the most difficult parts to implement: the metadata and saveChanges methods.
You don't want to cache the results. Therefore, make sure the manager's metadata store is empty, and add the "no caching" QueryOption [exact names escape me as I write this on my phone].
Make sure these steps are really adding tangible value
Specialized server operations often can be performed more simply with native AJAX components.
P.S. I just saw #didar's answer, which is consistent with mine. Blend these thoughts into your solution.

Difference between Deep Insert and $batch OData

Can anyone tell me the difference between using Deep Insert and a $batch ChangeSet in the context of OData? I have a scenario that requires creating a Sales Order Header and Sales Order Items together.
I can either use Deep Insert (BTW, is this standard OData spec?) or
I can use a $batch (this is standard OData spec) call with these two entities specified as a part of the same ChangeSet, which would ensure that they get saved together as a part of a single LUW.
What are the pros / cons of using either of these approaches ? Any experiences ?
Cheers
Deep Insert is part of the OData specification, see http://docs.oasis-open.org/odata/odata/v4.0/os/part1-protocol/odata-v4.0-os-part1-protocol.html#_Toc372793718.
Deep Insert allows creating a tree of related entities in one request. It is insert only.
$batch allows grouping arbitrary requests into one request, and arbitrary modifying operations into LUWs (called change sets).
For insert-only cases Deep Insert is easier: you just POST the same format that you would GET with $expand.
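For illustration, a rough sketch of such a Deep Insert from a JavaScript client (the entity set, property and navigation property names are assumptions based on the question, not a specific service):

async function createSalesOrder(): Promise<void> {
  // POST the header with its items nested under the navigation property,
  // i.e. the same shape a GET with $expand=SalesOrderItems would return
  const response = await fetch("/odata/SalesOrderHeaders", {  // placeholder entity set URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      CustomerName: "ACME",                                   // placeholder property
      SalesOrderItems: [                                      // placeholder navigation property
        { Product: "Widget", Quantity: 2 },
        { Product: "Gadget", Quantity: 1 },
      ],
    }),
  });
  console.log(response.status); // expect 201 Created, with the whole tree created in one request
}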
Deep insert or deep update is not currently defined or supported by the OData spec. However, there are feature requests for it, like this one: https://data.uservoice.com/forums/72027-wcf-data-services-feature-suggestions/suggestions/4416931-odata-deep-update-request-support
If you decide to use a batch, then you need the following set of commands in your batch:
PUT SalesOrderItem
...
PUT SalesOrderItem
PUT SalesOrderHeader
PUT SalesOrderHeader/$links/SalesOrderItem
...
PUT SalesOrderHeader/$links/SalesOrderItem
See also here: How do I update an OData entity and modify its navigation properties in one request?
In our ASP.NET project we decided to go with the CQRS pattern and use OData for query requests and Web API for commands. In terms of your case, we created a Web API controller with an action CreateSalesOrder taking a parameter of class SalesOrderHeaderDto, which contains an array of SalesOrderItemDtos. Once you have the data on the server, you can easily insert the whole Sales Order with its Order Items in one transaction. There are also just two commands to be sent to the server: ~/api/CreateSalesOrder and ~/odata/SalesOrder with include=Items and a filter by something... for example, the first command can return the Id of the Order...
Deep Insert gives you a single operation that inserts all the items at once.
The same thing isn't possible in a $batch.
This is not automatic in a batch:
"they get saved together as a part of a single LUW"
The requests in the $batch need to be in a single change set to get atomicity.
According to OData 4.0 11.7.4 Responding to a Batch Request:
All operations in a change set represent a single change unit so a service MUST successfully process and apply all the requests in the change set or else apply none of them. It is up to the service implementation to define rollback semantics to undo any requests within a change set that may have been applied before another request in that same change set failed and thereby apply this all-or-nothing requirement. The service MAY execute the requests within a change set in any order and MAY return the responses to the individual requests in any order. The service MUST include the Content-ID header in each response with the same value that the client specified in the corresponding request, so clients can correlate requests and responses.
However, a single change set is unordered. Since you are doing a deep insert, there is some relationship between the entities; and since you are doing an insert, whether through a contained navigation or a $ref navigation, you can't perform both inserts (or both inserts plus the PUT/POST $ref) in an unordered fashion.
A change set is an atomic unit of work consisting of an unordered group of one or more Data Modification requests or Action invocation requests.
