Difference between Deep Insert and $batch in OData

Can anyone tell me the difference between using Deep Insert and a $batch ChangeSet in the context of OData? I have a scenario that requires creating a Sales Order Header and Sales Order Items together.
I can either use Deep Insert (by the way, is this part of the standard OData spec?) or
I can use a $batch call (this is part of the standard OData spec) with these two entities specified as part of the same ChangeSet, which would ensure that they get saved together as a part of a single LUW (logical unit of work).
What are the pros and cons of each approach? Any experiences?
Cheers

Deep Insert is part of the OData specification, see http://docs.oasis-open.org/odata/odata/v4.0/os/part1-protocol/odata-v4.0-os-part1-protocol.html#_Toc372793718.
Deep Insert allows creating a tree of related entities in one request. It is insert only.
$batch allows grouping arbitrary requests into one request, and arbitrary modifying operations into LUWs (called change sets).
For insert-only cases Deep Insert is easier: you just POST the same format that you would GET with $expand.
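As an illustration, here is a minimal sketch of a deep insert for the Sales Order scenario in the question. The service path, entity set and property names are assumptions for illustration only, not taken from any real service:

// Hypothetical deep insert: POST the header together with its nested items in one request.
fetch("/odata/SalesOrderHeaders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
        CustomerID: "C-100",
        Items: [                                   // same shape you would GET with $expand=Items
            { Product: "P-1", Quantity: 2 },
            { Product: "P-2", Quantity: 5 }
        ]
    })
}).then(function (response) {
    if (!response.ok) { throw new Error("Deep insert failed: " + response.status); }
    return response.json();                        // the created header with its nested items
});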

Deep update is not currently defined or supported by the OData spec (Deep Insert is; see the answer above). However, there are feature requests for deep update, like this one: https://data.uservoice.com/forums/72027-wcf-data-services-feature-suggestions/suggestions/4416931-odata-deep-update-request-support
If you decide to use a batch, then you have to issue the following set of requests in your batch:
PUT SalesOrderItem
...
PUT SalesOrderItem
PUT SalesOrderHeader
PUT SalesOrderHeader/$links/SalesOrderItem
...
PUT SalesOrderHeader/$links/SalesOrderItem
See also here: How do I update an OData entity and modify its navigation properties in one request?
In our ASP.NET project we decided to go with the CQRS pattern and use OData for query requests and Web API for commands. In terms of your case, we created a Web API controller with an action CreateSalesOrder that takes a parameter of class SalesOrderHeaderDto containing an array of SalesOrderItemDtos. Having the data on the server, you can easily insert the whole Sales Order together with its Order Items in one transaction. There are also just two calls to be sent to the server: ~/api/CreateSalesOrder and ~/odata/SalesOrder with $expand=Items and a filter on something... for example, the first call can return the Id of the created Order...

Deep Insert gives you a single operation that inserts all the items at once.
The same thing isn't possible in a $batch.
In particular, this is not automatic in a batch:
they get saved together as a part of a single LUW
The requests in the $batch need to be in a single change set for atomicity to be expected.
According to OData 4.0 11.7.4 Responding to a Batch Request:
All operations in a change set represent a single change unit so a service MUST successfully process and apply all the requests in the change set or else apply none of them. It is up to the service implementation to define rollback semantics to undo any requests within a change set that may have been applied before another request in that same change set failed and thereby apply this all-or-nothing requirement. The service MAY execute the requests within a change set in any order and MAY return the responses to the individual requests in any order. The service MUST include the Content-ID header in each response with the same value that the client specified in the corresponding request, so clients can correlate requests and responses.
However, a single change set is unordered. Since you are doing a deep insert, there is some relationship between the entities, and since you are doing an insert, you cannot perform both inserts (for a contained navigation), or both inserts plus the PUT/POST $ref (for a non-contained navigation), in an unordered fashion.
A change set is an atomic unit of work consisting of an unordered group of one or more Data Modification requests or Action invocation requests.

Related

How to send multiple POST requests in a single batch with different change set ids in SAPUI5?

I am trying to send multiple create requests in a single batch.
Right now all requests go under a single change set id, so if any record fails, everything is rolled back.
Can we send multiple POST requests with different change set ids in a single batch?
The body of an OData batch request is made up of an ordered series of retrieve requests and ChangeSets.
In the Batch request body, each retrieve request and ChangeSet is represented as a distinct MIME part.
I assume you are using the SAP Gateway Service Builder. You have to redefine/implement the interface methods provided by the SAP ABAP Gateway:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_END
Read more here in detail
Within the body of the batch response is a response for each retrieve request and ChangeSet that was in the associated Batch request. The order of responses in the response body must match the order of requests in the Batch request.
PS: If you still have trouble, please add details of your application, OData structure and the interface methods you have redefined.
It worked after giving a different value to changeSetId each time I call createEntry.
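For reference, a minimal sketch of that approach with the SAPUI5 v2 ODataModel; the service URL, entity set and sample payload are assumptions for illustration:

// Collect the createEntry calls into one deferred group (= one $batch request).
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/sap/ZSALES_SRV/");
oModel.setDeferredGroups(["changes"]);

var aRecords = [ { SoId: "1", Note: "first" }, { SoId: "2", Note: "second" } ];   // sample data
aRecords.forEach(function (oRecord, i) {
    oModel.createEntry("/SalesOrderSet", {
        groupId: "changes",
        changeSetId: "changeSet" + i,   // a different change set per request, so one failing
        properties: oRecord             // record does not roll back the other change sets
    });
});

// One $batch containing several independent change sets.
oModel.submitChanges({ groupId: "changes" });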
Reference: https://answers.sap.com/questions/532601/how-to-send-a-multiple-post-request-in-a-single-ba.html

Sorting OData model in SAPUI5

Dear SAPUI5 Developers,
I developed a SAPUI5 Fiori Worklist project using the WebIDE template projects.
In the Component.js file the OData model is created:
var sServiceUrl = this.getMetadata().getManifestEntry("sap.app").dataSources.mainService.uri;
var oModel = new sap.ui.model.odata.ODataModel(sServiceUrl, {
    json: true,
    loadMetadataAsync: true
});
oModel.attachMetadataFailed(function() {
    // Call some functions from APP controller to show suitable message
}, this);
this.setModel(oModel, "BrandSet");
This part of the code triggers a call to the OData service to fetch data from the remote server.
Now I want the data to be sorted in the backend before it is received. Assume the sorting function has been implemented correctly in the backend.
Thus, if I use $orderby=name or $orderby=price it has to be sorted by name or price respectively.
Some tutorials say to use the sorter option inside the XML view file for ordering, like here:
https://sapui5.hana.ondemand.com/#docs/guide/c4b2a32bb72f483faa173e890e48d812.html
Now my questions are:
How can I apply this sorting inside the Component.js file where the model is instantiated?
The second question is: how do I apply this ordering when we also apply a filter to the model, like the filter example in the following link:
https://sapui5.hana.ondemand.com/#docs/guide/5295470d7eee46c1898ee46c1b9ad763.html
In fact I am looking for a function or any kind of method that adds $orderby=xxx to the OData service call.
I found a way here: https://sapui5.hana.ondemand.com/docs/api/symbols/sap.ui.model.odata.ODataModel.html#constructor
If I use mParameters.serviceUrlParams then I can add URL parameters to the service requests, but the documentation says "these parameters will be attached to all requests". Does that mean that if I add $orderby with this method, I cannot get rid of it in further requests on that data model, for example when filtering?
An app would normally be structured a bit differently from what you propose. The general assumption is that there is a lot of data available from the backend, and loading all this data at once can cause performance problems, particularly over a mobile phone network. Furthermore, the data is an OData entity set, that is, a list of many items of the same type, so the data would be presented in the UI with a list or table.
Typically the app would then show the data in some kind of list, such as sap.m.List or sap.m.Table. These controls are designed to work with large volumes of data and would initially load the first 20 items from the entity set. Only when the user scrolls down the list would additional items be loaded. Also, with these controls the user can decide to sort or filter the data by certain fields in your data.
Assuming that your app works like this, here is the standard approach.
The main model (as defined in the manifest) would not be loaded in Component.js, but loaded via the bindings defined in the XML views of the app. In the views you could define a fixed sort and/or filter in the binding, or you could allow the user to set the sort and filter criteria. This would be handled programmatically in the respective controllers. Normally the changes that the user makes to the sort and filter would be applied separately. For example, the user chooses a new sort order, the OData is re-read, and the new sort order is shown in the UI. Then the user may choose a filter criterion, and this is applied too. Of course, in your programming logic in the controllers you would need to have applied any default sort and filter criteria, and then maybe combine or replace these with the criteria selected by the user.
To see an example of this, I would suggest to look at the Template Application “SAP Fiori Master-Detail Application” in the WebIDE.
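To make the controller part concrete, here is a minimal sketch of re-sorting and re-filtering an already-bound table via its OData list binding. The table, the Price property and the threshold are assumptions for illustration; each call triggers a new backend request with the corresponding $orderby and $filter query options:

// Re-sort and re-filter a bound sap.m.Table via its OData list binding.
function sortByPrice(oTable) {
    oTable.getBinding("items").sort(new sap.ui.model.Sorter("Price", /* bDescending = */ false));
}

function filterCheap(oTable) {
    oTable.getBinding("items").filter([
        new sap.ui.model.Filter("Price", sap.ui.model.FilterOperator.LT, 10)
    ]);
}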

OData Delete with Filter

I have the problem that our backend uses an OData-"like" processor which has some special functions. It is oriented towards OData 2.0.
So the question is:
What is the most OData-like approach for the following kinds of requests?
Our backend data model has no single-attribute keys, but it is recommended to stay OData-like where possible.
First: I need to delete several objects via one OData request. My first idea is to use filters to define which objects should be deleted, but I'm not sure if this is the right approach.
For example: I want to delete all Items which have a price greater than 10.00
http://.../<oDataServiceX>/Item?$filter=ItemPrice gt 10.00
Second: when I want to delete an object which is not identifiable by a single key attribute, how can I express that in the classical OData delete request syntax?
Is the following OData-like?
http://.../<oDataServiceX>/Item(1,54,2) //3 Attributes which define the key for the Item
Or should I use a filter again (if a filter is a proper way of doing this)?
http://.../<oDataServiceX>/Item?$filter=keyAttr1 eq 1 and keyAttr2 eq 54 and keyAttr3 eq 2
You can't delete multiple entries with a single OData query; you first need to retrieve their keys and then send multiple delete requests. There are two ways to improve this process:
Use an OData $batch to send all delete requests in a single HTTP call.
Use one of the libraries that can simulate deletion by filter (internally they will issue multiple requests, but to the application it looks like a single call). One such library is Simple.OData.Client. A sketch of the first approach is shown below.
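As an illustration of the first approach, a rough client-side sketch; the service path, key and property names follow the examples in the question, and whether the individual DELETEs are additionally wrapped into a $batch depends on your backend:

// 1. Retrieve only the keys of the items matching the filter.
fetch("/oDataServiceX/Item?$filter=ItemPrice gt 10.00&$select=keyAttr1,keyAttr2,keyAttr3&$format=json")
    .then(function (response) { return response.json(); })
    .then(function (data) {
        var aItems = data.d ? data.d.results : data.value;   // OData v2 vs. v4 payload shape
        // 2. Send one DELETE per entity, addressed by its composite key.
        return Promise.all(aItems.map(function (oItem) {
            var sKey = "keyAttr1=" + oItem.keyAttr1 + ",keyAttr2=" + oItem.keyAttr2 + ",keyAttr3=" + oItem.keyAttr3;
            return fetch("/oDataServiceX/Item(" + sKey + ")", { method: "DELETE" });
        }));
    });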
Hope this helps.
OData v4 supports the format DELETE /entity(key1='',key2='') and so on.
However, for OData v2, one option could be to use the request body to pass some data over: DELETE /entity, with data in the body.
The documentation states that the convention is to delete an entity by key. However, this was the approach we followed when we had to delete by multiple keys in an OData v2 service. Also, while implementing this with the OData v2 libraries, we had to add a routing convention to support DELETE without a key.

Using Breeze for arbitrary server response

We have a structure in place for creating complex queries and obtaining data on the client side using Breeze and a Web API IQueryable<T>.
I would like to use this structure on the client side to call another Web API controller, intercept the result of the query, and use it to build an Excel file returned via HttpResponseMessage.
See: Returning binary file from controller in ASP.NET Web API
How can I use executeQuery to get an 'octet-stream' response instead of the standard Breeze JSON, without interfering with the data in the client-side cache?
The goal is to create an 'Export to Excel' feature that bypasses the existing frontend paging for a large volume of data.
If you don't want to track changes, call EntityQuery.noTracking() before calling executeQuery(). This will return raw javascript objects without breeze tracking capabilities.
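For example, a small sketch; the entity set, the filter and the existing EntityManager (manager) are assumptions for illustration:

// noTracking: the results are plain JavaScript objects and nothing is added to the cache.
var query = breeze.EntityQuery
    .from("SalesOrders")
    .where("total", ">", 1000)
    .noTracking();

manager.executeQuery(query).then(function (data) {
    console.log(data.results);   // raw objects, not breeze entities
});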
You can't make executeQuery() return binary 'octet-stream' data, but you can use the breeze ajax implementation:
var ajaxImpl = breeze.config.getAdapterInstance("ajax");
ajaxImpl.ajax() // by default it is a wrapper to jQuery.ajax
See http://www.breezejs.com/documentation/customizing-ajax
Create a custom "data service adapter" and specify it when you create an EntityManager for this special purpose. It can be quite simple, because you can disable the most difficult parts to implement: the metadata and saveChanges methods.
You don't want to cache the results, therefore you make sure the manager's metadata store is empty, and you should add the "no caching" QueryOption. [Exact names escape me as I write this on my phone.]
Make sure these steps are really adding tangible value.
Specialized server operations can often be performed more simply with native AJAX components.
P.S. I just saw @didar's answer, which is consistent with mine. Blend these thoughts into your solution.
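Putting those suggestions together, here is a rough sketch of the export call itself done with a plain HTTP request outside of Breeze; the /api/ExportToExcel endpoint and its query string are assumptions for illustration:

// Plain fetch for the binary export; Breeze remains responsible only for the regular queries.
fetch("/api/ExportToExcel?$filter=total gt 1000")
    .then(function (response) { return response.blob(); })   // the octet-stream as a Blob
    .then(function (oBlob) {
        // Trigger the download in the browser.
        var oLink = document.createElement("a");
        oLink.href = URL.createObjectURL(oBlob);
        oLink.download = "export.xlsx";
        oLink.click();
        URL.revokeObjectURL(oLink.href);
    });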

How to create OData based off RFC with multiple tables in the output?

I am working on a large project at work that requires me to create OData services for a large variety of Remote Function Calls (RFCs). I was able to work out how to model and create OData services for simple RFCs; however, I am struggling with more complex RFCs that use multiple tables as well as simple exporting and importing parameters.
I want to output these tables as well as the importing and exporting parameters via GetEntity and GetEntitySet with just one call. I have done extensive searching online for solutions, but the best ones seem to be redefining the RFCs or calling the OData service multiple times, which is not ideal.
Is there any way to combine multiple tables with several entries in the output? When I say output, I am referring to the resulting XML from GetEntity/GetEntitySet.
For example, take the below fake RFC definition that takes a PERNR, and outputs a list of direct reports and a structure of employee details.
IMPORTING
  PERNR
EXPORTING
  S_EMPLOYEE_DETAILS
TABLES
  T_DIRECT_REPORTS
Is there a way to combine the table, structure, and importing parameters into one output?
The first thing to understand is that the OData protocol is not intended to work solely like classical function calls. It is instead based on an entity/relationship kind of model.
So in your case I'd suggest creating an entity type named 'Employee' with the appropriate properties of your structure S_EMPLOYEE_DETAILS. With this you can e.g. implement the method GET_EMPLOYEE_ENTITY to retrieve a single instance of an employee via PERNR.
The next thing to do would be to get the direct reports of this employee. Since this is a 1:N relation from Employee to Employee in your case, you can create a navigation property called 'DirectReports' with the appropriate cardinality. Then in your GET_EMPLOYEE_ENTITYSET you can return the instances of table T_DIRECT_REPORTS (note that the navigation property is not empty and you have to read the keys of the parent!).
Once you have got this working you can move on to the 'best practice' and implement the method GET_EXPANDED_ENTITY, filling the expand clauses yourself. In my opinion this is the preferred way, as you don't need to implement two separate methods and it is considered faster as well (if many expands happen).
Both methods of implementation can be called via
GET EmployeeSet('12345678')?$expand=DirectReports
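The response to such a request would then contain the employee together with its direct reports inlined, roughly like the following sketch (property names are illustrative only, and the exact envelope depends on the OData version and format):

{
    "Pernr": "12345678",
    "FirstName": "Jane",
    "LastName": "Doe",
    "DirectReports": [
        { "Pernr": "23456789", "FirstName": "John", "LastName": "Smith" },
        { "Pernr": "34567890", "FirstName": "Mary", "LastName": "Major" }
    ]
}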

Resources