ODataModel passing "expand" parameter in read - odata

I'd like to pass expand parameters to read because it doesn't work if I call the service like this:
oModel1.read("/LinesSet?$expand=ToCells", {

The read API expects a map of options as its second argument, in which we can define query options via the property urlParameters:
oModel1.read("/LinesSet", {
urlParameters: {
"$expand": "ToCells",
"$select": "LineID,ToCells/CellID,...", // reduce data load
},
filters: [ // Filter required from sap/ui/model/Filter
new Filter({/*...*/}), // reduce data load
],
success: this.onSuccess.bind(this),
// ...
});
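For completeness, a minimal sketch of what the success handler referenced above might look like (the handler name comes from the snippet; ToCells being a collection is assumed from the question, and what you do with the response is up to the application):
onSuccess: function (oData, oResponse) {
  // For a collection read, oData.results contains the LinesSet entries,
  // each with its expanded ToCells entities inlined.
  oData.results.forEach(function (oLine) {
    // ... work with oLine and oLine.ToCells.results
  });
},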
⚠️ Please note that loading large amounts of data negatively affects memory consumption and UX, and may ultimately even crash the application. See the section Loading Large Amounts of Data in the documentation.
Whenever you use methods like [...] sap.ui.model.odata.v2.ODataModel#read [...] in application code, your application must not load large amounts of data.
⚠️ read is a low-level API from the application's point of view. There are other APIs and approaches that can help reduce the amount of controller code.
Alternative (better) solution
I'd like to emphasize that v2.ODataModel#read is often not required. You can simply make use of the OData Context/ListBinding by assigning the corresponding name of the <NavigationProperty> to the control in XML:
<Table binding="{ToThatRelatedSingleEntity}" items="{ToThatRelatedCollection}" growing="true">
(Note: You might have to add templateShareable to the aggregation binding accordingly as explained in the topic: Lifecycle of Binding Templates)
The binding, not the application, will then prepare a request automatically for you. No need to use an intermediate JSONModel. Same with v4.ODataModel which doesn't even have the read method.
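If the binding has to be created in controller code rather than in the XML view, the same request options can be passed to the binding instead of read. A minimal sketch, assuming a single LinesSet entity (the key and selected properties are made up for illustration):
this.getView().bindElement({
  path: "/LinesSet('0001')", // hypothetical key
  parameters: {
    expand: "ToCells",
    select: "LineID,ToCells/CellID",
  },
});
The binding then sends the request, including $expand and $select, and the view's controls resolve their relative binding paths against the resulting context.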
This also makes migrating to OData V4 much easier.

Related

Filtering SmartTable's initial read request

I'm using an SAP SmartTable to display my data from an ABAP backend server. Additionally, I'm using SmartVariantManagement to apply variants and make them persistent.
The problem in my application is the initial load of the SmartTable. It seems like the table first loads all the available data without any filters from the initial variant of my SmartVariantManagement.
Is there any way to apply the filters of SmartVariantManagement to the initial load in the SmartTable?
Or even better: is it possible to cancel a running OData read request if I apply a new selection in the SmartFilterBar and just run the new one instead?
Example 1:
You can avoid the initial request with the SmartTable property
enableAutoBinding="false"
You can also define some mandatory filter fields; the user then performs an explicit call to the database.
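A small controller sketch of that setup, assuming enableAutoBinding="false" is set on a SmartTable with the (hypothetical) id "mySmartTable" and the request is triggered explicitly once the mandatory filters are filled:
onTriggerSearch: function () {
  var oSmartTable = this.byId("mySmartTable"); // hypothetical id
  // Binds the inner table and sends the OData request with the filters
  // currently set in the SmartFilterBar / variant.
  oSmartTable.rebindTable();
}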
Example 2:
You can also define a filter in the SmartTable event
beforeRebindTable="onBeforeRebindTable"
Controller:
onBeforeRebindTable: function (oEvent) {
  var oBindingParams = oEvent.getParameter("bindingParams");
  // Add a filter to the request the SmartTable is about to send
  oBindingParams.filters.push(new sap.ui.model.Filter("PropertyX", "EQ", "myProperty"));
}

Operations on a stream produce a result, but do not modify its underlying data source

I'm unable to understand how "Operations on a stream produce a result, but do not modify its underlying data source" holds, with reference to Java 8 streams.
shapes.stream()
      .filter(s -> s.getColor() == BLUE)
      .forEach(s -> s.setColor(RED));
As per my understanding, forEach is setting the color of objects from shapes, so how does the statement above hold true?
The reference s isn't being altered in this example; however, no deep copy is taken, and there is nothing to stop you from altering the object it refers to.
You are able to alter an object via a reference in any context in Java, and there isn't anything to prevent it. You can only prevent shallow values from being altered.
NOTE: Just because you are able to do this doesn't mean it's a good idea. Altering an object inside a lambda is likely to be dangerous, as functional programming models assume you are not altering the data being processed (always creating new objects instead).
If you are going to alter an object, I suggest you use a loop (non-functional style) to minimise confusion.
An example of where using a lambda to alter an object has dire consequences is the following.
map.computeIfAbsent(key, k -> {
    map.computeIfAbsent(key, k -> 1);
    return 2;
});
The behaviour is not deterministic; it can result in both key/value pairs being added, and for a ConcurrentHashMap the call will never return.
As mentioned Here
Most importantly, a stream isn’t a data structure.
You can often create a stream from collections to apply a number of functions on a data structure, but a stream itself is not a data structure. That's so important, I mentioned it twice! A stream can be composed of multiple functions that create a pipeline that data flows through. This data cannot be mutated. That is to say, the original data structure doesn't change. However, the data can be transformed and later stored in another data structure or perhaps consumed by another operation.
And as per the Java docs:
This is possible only if we can prevent interference with the data source during the execution of a stream pipeline.
And the reason is:
Modifying a stream's data source during execution of a stream pipeline can cause exceptions, incorrect answers, or nonconformant behavior.
That's all theory; live examples are always good. So here we go:
Assume we have a List<String> (say, names) and a stream over it, names.stream(). We can apply .filter(), .reduce(), .map(), etc., but we can never change the source. Meaning, if you try to modify the source (names), you will get a java.util.ConcurrentModificationException.
public static void main(String[] args) {
    List<String> names = new ArrayList<>();
    names.add("Joe");
    names.add("Phoebe");
    names.add("Rose");
    names.stream().map((obj) -> {
        names.add("Monika"); // modifying the source of the stream, i.e. ConcurrentModificationException
        /**
         * If we comment out the above line, we are modifying the data (doing upper case).
         * However, the original list still holds the lower-case names (the source of the stream never changes).
         */
        return obj.toUpperCase();
    }).forEach(System.out::println);
}
I hope that helps!
I understood the part "do not modify its underlying data source" as: it will not add/remove elements to the source; I think you are safe, since you only alter an element, you do not remove it.
You can read comments from Tagir and Brian Goetz here, where they do agree that this is sort of fine.
The more idiomatic way to do what you want would be replaceAll, for example:
shapes.replaceAll(x -> {
    if (x.getColor() == BLUE) {
        x.setColor(RED);
    }
    return x;
});

Relay mutation expects data fetched by Relay

I have two Relay mutations that I'm nesting to first add an object then set its name. I believe what I'm passing to the second mutation is in fact data fetched by Relay, but it appears to disagree with me. The code in the React view is as follows:
Relay.Store.update(
  new AddCampaignFeatureLabelMutation({
    campaign: this.props.campaign
  }),
  {
    onSuccess: (data) => {
      Relay.Store.update(
        new FeatureLabelNameMutation({
          featureLabel: data.addCampaignFeatureLabel.featureLabelEdge.node,
          name: this.addLabelInputField.value
        })
      );
    },
    onFailure: () => {}
  }
);
This does work, but gives me a warning:
Warning: RelayMutation: Expected prop `featureLabel` supplied to `FeatureLabelNameMutation` to be data fetched by Relay. This is likely an error unless you are purposely passing in mock data that conforms to the shape of this mutation's fragment.
Why does Relay think the data isn't fetched? Do I maybe need to explicitly return the new featureLabel in the payload somehow?
I ran into the same problem and it took me some time to figure out what was going on, so this might help others:
As the warning says, you have to provide an entity to the mutation that was fetched by Relay. BUT what the warning does not say is that it has to be fetched with the mutation in mind.
So basically, you have to add the mutation you are going to execute on the entity in the future to the initial query, like this:
fragment on Person {
  firstname,
  lastname,
  language,
  ${UpdatePersonMutation.getFragment('person')}
}
This will add the necessary pieces to the entity in the store which are needed by the mutation.
In your case, what you have to do is add the FeatureLabelNameMutation getFragment to your AddCampaignFeatureLabelMutation query. This will bring back your featureLabel entity with the necessary information for the FeatureLabelNameMutation to succeed without the warning.
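A rough sketch of what that composition could look like, mirroring the Person example above; the exact query in which it belongs depends on how AddCampaignFeatureLabelMutation defines its fragments, and the field names are taken from the question, so treat this purely as an illustration of the pattern:
featureLabelEdge {
  node {
    ${FeatureLabelNameMutation.getFragment('featureLabel')}
  }
}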
The Relay documentation is very very poor on this and many other areas.
Relay expects any fragments for your mutation to come from your props. Since you're using data coming from your callback and not something from your container props, Relay raises that warning.
Take a look at the source: https://github.com/facebook/relay/blob/master/src/mutation/RelayMutation.js#L289-L307

Dynamic Tag Management - Storing

We're in the process of moving to a DTM implementation. We have several variables that are being defined on the page. I understand I can make these variables available in DTM through data elements. Can I simply set up a data element for each one?
So set data elements
%prop1% = s.prop1
%prop2% = s.prop2
etc
And then under global rules set
s.prop1 = %s.prop1%
s.prop2 = %s.prop2%
etc
for every single eVar, prop, event, and product, so they populate whenever they are set on a particular page. Good idea or terrible idea? It seems like a pretty bulky approach, which raises some alarm bells. Another option would be to write something that pushes everything to the data layer, but that seems like essentially the same approach with a redundant step, when they can be grabbed directly.
Basically I want DTM to access any and all variables that are currently being set with on-page code, and my understanding is that in order to do that they must be stored in a data element first. Does anyone have any insight into this?
I use this spec for setting up data layers: Data Layer Standard
We create data elements for each key that we use from the standard data layer. For example, the page name is stored here:
digitalData.page.pageInfo.pageName
We create a data element and standardize the names to this format: "page.pageInfo.pageName".
Within each variable field, you access it with the %page.pageInfo.pageName% notation. Also, within the JavaScript of rule tags, you can use this:
_satellite.getVar('page.pageInfo.pageName')
It's a bit unwieldy at times but it allows you to separate the development of the data layer and tag manager tags completely.
One thing to note, make sure your data layer is complete and loaded before you call the satellite library.
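For instance, assuming the data layer described by the standard is present on the page and a data element named "page.pageInfo.pageName" has been mapped to it as described above, a rule's custom JavaScript could use it like this (the Analytics assignment is just an illustration):
// Read the mapped data element inside a rule's custom JavaScript
var pageName = _satellite.getVar('page.pageInfo.pageName');
s.pageName = pageName; // e.g. assign it to an Analytics variable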
If you are moving from a legacy s_code implementation to DTM, it is a best practice to remove all existing "on page" code (including the reference to the s_code file) and create a "data layer" that contains the data from the eVars and props on the page. Then DTM can reference the object on the page, and you can create data elements that map to variables.
Here's an example of a data layer:
<script type="text/javascript">
DDO = {} // Data Layer Object Created
DDO.specVersion = "1.0";
DDO.pageData = {
"pageName":"My Page Name",
"pageSiteSection":"Home",
"pageType":"Section Front",
"pageHier":"DTM Test|Home|Section Front"
},
DDO.siteData = {
"siteCountry":"us",
"siteRegion":"unknown",
"siteLanguage":"en",
"siteFormat":"Desktop"
}
</script>
The next step would be to create data elements that directly reference the values in the object. For example, if I wanted to create a data element that mapped to the page name element in my data layer I would do the following in DTM:
1. Create a new data element called "pageName"
2. Select the type as "JS Object"
3. In the path field, reference the path to the page name in the data layer example above: DDO.pageData.pageName
4. Save the data element
Now this data element can be referenced in any variable field within any rule by simply typing a '%'. DTM will find any existing data elements and you can select them.
I also wrote about a simple script you can add to your implementation to help with your data layer validation: Validate your DTM Data Layer with this simple script
Hope this helps.

How to move from untyped DataSets to POCO\LINQ2SQL in legacy application

Good day!
I have a legacy application where the data access layer consists of classes in which queries are built using SqlConnection/SqlCommand and the results are passed to the upper layers wrapped in untyped DataSets/DataTables.
Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do this), but I'd like to have the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables.
The basic idea is to wrap the old data access code in a "Model" that looks nice from the ASP.NET MVC point of view.
What is the fast\easy way of doing this?
In addition to the answer below, here is a nice solution based on AutoMapper: http://elegantcode.com/2009/10/16/mapping-from-idatareaderidatarecord-with-automapper/
An approach that you could take is using a DataReader and data transfer objects. For every object you want to work with, define the class in a data transfer object folder (or however your project is structured); then in your data access layer have something along the lines of the code below.
We used something very similar to this in a project with a highly normalized database, but in the code we did not need that normalization, so we used procedures to put the data into more usable objects. If you need to be able to save these objects as well, you will need to handle translating the objects into database commands.
What is the fast\easy way of doing this?
Depending on the number of classes etc., this may not be the fastest approach, but it will allow you to use the objects very similarly to the LINQ objects, and depending on the type of collections used (IList, IEnumerable, etc.) you will be able to use the extension methods on those types of collections.
public IList<NewClass> LoadNewClasses(string abc)
{
    List<NewClass> newClasses = new List<NewClass>();
    using (DbCommand command = /* Get the command */)
    {
        // Add parameters
        command.Parameters["@Abc"].Value = abc;

        // Could also put the DataReader in a using block
        IDataReader reader = /* Get Data Reader */;
        while (reader.Read())
        {
            NewClass newClass = new NewClass();
            newClass.Id = (byte)reader["Id"];
            newClass.Name = (string)reader["Name"];
            newClasses.Add(newClass);
        }
        reader.Close();
    }
    return newClasses;
}

Resources