Can there be duplicate telemetry data in a oneM2M system [closed] - iot

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I am using the Eclipse OM2M implementation of the oneM2M standard. Here, a content instance is generated for each telemetry data point, and a random number is used to generate its ID. Now, let's say that for some reason the device sends the same telemetry data twice; in that case we will have duplicate entries for this telemetry data, since the content instance ID is random.
There are two possibilities:
1. I can use the telemetry timestamp to generate the ID for the content instance, so that there won't be any duplicate entries.
2. I do nothing and store the duplicate entries, so that we can later analyze the data, capture this anomaly, and change the device configuration accordingly.
Which of the two options is possible using oneM2M?
And how does oneM2M support time-series data streams?
Thanks in advance.

The scenarios you are describing in your question are actually two different use cases:
Either you want time series data (data that is sent at specific intervals, e.g. every minute, independently of whether it has changed), or
You want the latest data of your sensor, and only record the changes.
You need to decide which case you want to implement for your scenario, but it seems from your question that the second use case is what you want to implement.
What you propose in option 1) is not possible because the <contentInstance> resource type does not allow updates of an existing resource. Your only possibility with this resource is to create a new <contentInstance> every time you want to store data.
Also, you cannot provide, set or update the resourceIdentifier because it is always assigned by the CSE.
However, there are a couple of options to achieve what you want when you only need to store one data record per sensor. You should have a look at the <container> definition, because there you can set the maxNrOfInstances (mni) attribute to 1. This means that the <container> always makes sure to keep only one instance of the data (i.e. it automatically removes all older instances). To access your data you would then not address the <contentInstance> directly, but use the <latest> virtual child resource of the <container>. Sending a RETRIEVE request to that resource automatically returns the latest <contentInstance>, independently of its name or resource identifier.
Another possibility would be to use <flexContainer>. Here, you can define your own data points and store data records without any versioning. But I am not sure whether the version of OM2M you are using fully supports the <flexContainer> resource type.
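As a rough sketch of the <container> approach, here is how the CREATE payload and the <latest> ("la") address could be built (the CSE address, originator, and container name are placeholder assumptions; only the m2m:cnt/mni short names and the "la" virtual resource follow oneM2M conventions):

```python
CSE_BASE = "http://127.0.0.1:8080/~/in-cse/in-name"  # placeholder OM2M CSE address

def container_create_payload(name, mni=1):
    """Body for creating a <container> that keeps only the latest instance."""
    return {"m2m:cnt": {"rn": name, "mni": mni}}

def latest_url(container_name):
    """Address of the <latest> virtual child resource of the container."""
    return f"{CSE_BASE}/{container_name}/la"

payload = container_create_payload("TEMPERATURE_SENSOR")
target = latest_url("TEMPERATURE_SENSOR")
# A real CREATE would be an HTTP POST of this payload with headers such as
#   X-M2M-Origin: admin:admin   and   Content-Type: application/json;ty=3
# and a RETRIEVE of `target` would return the newest <contentInstance>.
```

With mni set to 1, a RETRIEVE on the `…/la` address always yields the single stored instance, regardless of its random resource identifier.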

Related

Use or not use CoreData in an App that syncs transactions? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I am a beginner, and I am having a little debate with my friend, who is a Ruby on Rails developer with more than five years of experience. He has worked mainly on the web, and I know the information he has comes from various presentations he has been to.
So, I am learning and building a project along the way. This project needs to get data from other devices and also send data from the administrator's device to the other users.
I want this app to be able to save data if the device is offline for some reason (the user will travel and may find himself out of signal).
My friend says that I do not need to save data on the device, or use Core Data; that I probably just need some type of cache to hold the data temporarily while the device is offline.
I tell him that this is not like a weather app where you only download the data and show it to the user; I need to make changes to the data and send it back to the server, so other users see the change.
So, my question is:
Do I need to use Core Data to save data locally when the device is offline, and then send a request to the server, parsing JSON?
Which is the best approach?
Thank you very much for your time and knowledge!
My friend says that I do not need to save data on the device, or use Core Data; that I probably just need some type of cache to hold the data temporarily while the device is offline.
Where does your friend think the cache will be located if it isn't on the device? Caching but not saving data are contradictory ideas.
Core Data can be useful as an offline cache. There are other options, including saving property list files and using SQLite directly. Which one is best depends heavily on how you'll need to use the data in the app.
Do I need to use Core Data to save data locally when the device is offline, or use Core Data to save everything and send a request to the server, parsing a JSON file?
Keeping in mind that we don't have a detailed description of your app:
If the server provides JSON-formatted data, then you need to parse that.
If you want to use the data offline, you need to save it on the device somehow. Whether you call this a cache or not is meaningless.
Core Data is one possible approach. It might or might not be the right one, but that's a separate question that can't be answered without a lot more information about how your app uses this data.
A common approach would be to request data from the server and save it locally. When accessing data in the app, look it up in the local copy. Keep server communication and local data access separate; if they're the same thing then you're talking to the server directly for all data and have no offline access. Keep track of local changes so you can send them back to the server.
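That common approach can be sketched language-agnostically (Python here for brevity; the class and method names are illustrative assumptions, not a real API — on iOS the local copy might be Core Data, a property list, or SQLite):

```python
class OfflineStore:
    """Minimal offline-first store: reads always hit the local copy,
    local edits are queued and pushed when the server is reachable."""

    def __init__(self):
        self.local = {}            # local copy of server data, keyed by id
        self.pending_changes = []  # local edits not yet sent to the server

    def apply_server_data(self, records):
        """Called after a successful fetch: refresh the local copy."""
        for record in records:
            self.local[record["id"]] = record

    def read(self, record_id):
        """The app always reads from the local copy, never the network."""
        return self.local.get(record_id)

    def edit(self, record_id, **fields):
        """Apply the change locally and remember it for the next sync."""
        self.local.setdefault(record_id, {"id": record_id}).update(fields)
        self.pending_changes.append({"id": record_id, **fields})

    def drain_changes(self):
        """On reconnect, hand the queued edits to the networking layer."""
        changes, self.pending_changes = self.pending_changes, []
        return changes
```

The key design point is the separation: the UI only ever talks to the local store, and a separate sync step exchanges `drain_changes()` and `apply_server_data()` with the server.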

Is there a way to reduce connections to the server [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am fairly new to iOS programming, so I do not even know exactly how to ask, but I will explain the problem.
I created an application that relies mainly on fetching data from the server, and sometimes the size of the JSON is too large.
Is there a way to save the JSON on the device and fetch only the recent data, or only synchronize with the server?
The program is in Objective-C and uses AFNetworking; the back end is ASP.NET MVC.
The general way to handle this use case is to always store the latestItemId/latestItemTimestamp in your app, and every time you need new data, make a call to the server with this information. In your server endpoint, you use this id/timestamp to return only the data after it.
When the app requests the API endpoint for the first time, the value of latestItemId will be 0. After getting the data, you keep updating it every time. Since you are asking the server to give data only after a specific id, you will only get the needed (latest) data.
For example, your server code might look like this (using EF and LINQ; the snippet is only a sketch):
public List<string> Messages(int fromId = 0, int top = 20)
{
    // Return only messages newer than the client's latest known id
    var messages = yourDbContext.Messages
        .Where(m => m.Id > fromId)
        .OrderBy(m => m.InsertTime)   // oldest first, so the client can apply them in order
        .Take(top)                    // page size
        .Select(m => m.MessageBody)
        .ToList();
    return messages;
}
On the iOS client side, you can keep latestItemId in app memory and/or persist it in NSUserDefaults. The NSUserDefaults.standardUserDefaults() method might be helpful (in Swift).
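The client side of this scheme can be sketched as follows (Python for brevity; on iOS the stored value would live in NSUserDefaults, and `fetch_from_server` stands in for your AFNetworking call — all names here are illustrative):

```python
def sync(store, fetch_from_server, top=20):
    """Ask the server only for items newer than the last one we have."""
    from_id = store.get("latestItemId", 0)   # 0 on the very first request
    new_items = fetch_from_server(from_id, top)
    if new_items:
        # Remember the newest id so the next request skips everything we hold
        store["latestItemId"] = max(item["id"] for item in new_items)
    return new_items

# Fake server holding items 1..5; returns items with id > from_id, up to `top`.
def fake_server(from_id, top):
    items = [{"id": i, "body": f"msg {i}"} for i in range(1, 6)]
    return [it for it in items if it["id"] > from_id][:top]

store = {}
first = sync(store, fake_server)    # first call: everything
again = sync(store, fake_server)    # nothing new, so an empty list
```

After the first call only the delta travels over the network, which is exactly what avoids the "JSON too large" problem.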

Would I use core data? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
So I have a process where a user takes a photo, adds info, and uploads it to my database. My question is how I should store that data so it is accessible through all my controllers, and so that when the user clicks the upload button it sends the final object to the server to be added to the database. Would I use Core Data? Or something like a struct? I just want to make sure I am doing this correctly.
This is an opinion-oriented answer, influenced by the developer's familiarity/comfort with the various underlying concepts as well. Hence, though I don't consider it a definitive answer, here is my opinion.
Should I use core data, so it is accessible through all my controllers?
Absolutely not! You don't need Core Data just to create a shared data source that is used by multiple view controllers simultaneously. You can obviously create a singleton data source object that can be accessed by all the VCs.
But then, Core Data is not just a shared data source, is it?
Core Data is a persistent data store, whereas your structs are not.
Assume the user takes a pic and quits the app before it gets uploaded, or you want to provide offline capability, where the user can take a pic without internet, queue it for upload, and whenever internet comes back your app uploads it to the server. If you use structs and keep the data source in memory, then when the user quits the app, all of the user's effort goes to waste, and the user will obviously not appreciate it. On the other hand, if you use Core Data, you can have it in a SQLite file and access it whenever you need it, even if the user quits the app in between :)
The managed object context provides performBlock and performBlockAndWait to synchronize access to Core Data in a multi-threaded environment, but with a plain array of structs you have to write that on your own.
Now, there is no point in reinventing the wheel, is there? We all know that data types like arrays are not thread safe :) Neither is the managed object context, but from iOS 5 onwards it provides the handy performBlock and performBlockAndWait methods, which ease the developer's life when dealing with a shared data source (managed objects) in a multi-threaded environment.
The managed object context provides notifications about changes happening in real time, and it works like a charm with NSFetchedResultsController, providing a mechanism to constantly monitor and update the data source.
I don't think it's a big thing, but in order to achieve the same with an array you'll have to use KVO. And because KVO won't work with plain Swift objects, you'll have to override didSet and manually send notifications to all the VCs when the data source changes. Not so elegant a solution, is it? :)
Scalability and robustness:
Finally, how many records you are dealing with also matters. I have been part of a company that uploads and restores thousands of images to/from users' devices. In a scenario where you are dealing with thousands of images, maintaining an array is always a pain, and costly in memory footprint as well, because the entire array is loaded all the time. On the other hand, NSFetchedResultsController works on a page-fault mechanism: it loads data efficiently, only when needed.
Scalability is just a matter of adding new fields to the managed object entity, and robustness is directly proportional to your skill in dealing with Core Data, I believe.
A pinch of advice:
No matter whether you use an array of structs or Core Data, always store images in the local file system and keep a relative reference to the local path in your data source. Holding an entire image in memory is a really bad idea :D
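That advice amounts to a very small pattern (sketched in Python; the directory layout, function names, and record shape are illustrative assumptions):

```python
import os
import tempfile

def save_image(data: bytes, name: str, base_dir: str) -> str:
    """Write the image bytes to disk and return the *relative* path,
    which is what goes into the data source (never the bytes themselves)."""
    os.makedirs(os.path.join(base_dir, "images"), exist_ok=True)
    rel_path = os.path.join("images", name)
    with open(os.path.join(base_dir, rel_path), "wb") as f:
        f.write(data)
    return rel_path

def load_image(rel_path: str, base_dir: str) -> bytes:
    """Resolve the stored relative path against the current base directory."""
    with open(os.path.join(base_dir, rel_path), "rb") as f:
        return f.read()

base = tempfile.mkdtemp()  # stands in for the app's documents directory
record = {"title": "my photo",
          "image_path": save_image(b"\x89PNG...", "photo1.png", base)}
# the record holds only the small relative path; the heavy bytes stay on disk
```

Storing the path relative (not absolute) matters on iOS in particular, because the app sandbox path can change between launches.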
Hope it helps.

Data transfer preloader or alert box in Delphi [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I have a piece of software that uses a SQL Server Express database. When the software runs locally, the data loads fast; however, when I run it remotely there is always some delay populating grids etc.
I'm looking to make some kind of preloader or alert box with a progress bar indicating that data is being loaded into the software, and to prevent the user from clicking on the form.
Can you point me to a tutorial, or give me a general idea of how to accomplish that?
Forget optimisations like background threads and async dataset loading until you've got the basic workflow of your app correct. Generally, the thing to do with datasets is to open the minimum number necessary to permit the current user operation, and open others, e.g. needed for drilling down into a selected patient's details only as needed. In each case you open the dataset before the related form is shown; that way, the opportunity for the user to try working with only partially loaded data never arises.
So in a situation like this apparently is, where the user browses a collection of patients in a Patients table, start out with a form containing a DBGrid connected to a dataset component that delivers the Patient rows. Don't show the form until after you've opened the Patients table, in read-only mode. And don't open any other datasets yet.
Presumably there is a collection of patient detail tables that need to be opened to show the data of a given patient on one or more forms - I imagine there might be a top-level Patient ID Details form, and maybe a number of drill-down ones which can be invoked from it. Again, don't show these forms until the tables needed to supply the patient data are open. The easiest way to make the user aware that they should wait while something completes is to surround the code involved with something like this:
Screen.Cursor := crSQLWait;
Screen.ActiveForm.Update;  // refreshes the current form to ensure
                           // the cursor gets updated on-screen
try
  // Open the patient detail table(s) and create the related form(s) here
finally
  Screen.Cursor := crDefault;  // always restored, even if opening a table raises
end;
// Now, show whichever is the principal patient detail form
Once the user has finished with a patient's details, close the form(s) that were opened to do it and close the related datasets.
SQL Server and Delphi are quite capable of populating a top-level DBGrid with outline info for several thousand patients with hardly any perceptible delay, as long as the data is all retrieved into one dataset (e.g. an AdoQuery) using one SQL SELECT statement. Don't take my word for it, try it with your own data. If it seems too slow, you're doing something wrong.
The key is not to attempt to do more than you need to at the time. As I've explained, only retrieve patient-specific data once the user has selected a top-level patient record to work on. Until the app knows which patient the user is working on, it's pointless to try to retrieve patient-specific data of the type you mentioned in the comments; doing so would only slow down the app and generate needless network traffic.

Web API multiple or single call [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
We are rewriting our system using MVC 4 and Web API, and we are currently at a decision point.
Would it be more efficient / best practice to do multiple small calls to a Web API rather than one single large call for displaying data on an MVC web page? I.e.:
Multiple calls :
call 1 - returns core data about a user (user model)
call 2 - returns data regarding a users status (status model)
call 3 - returns user history ( history model)
Single call:
returns a full ViewModel that includes all the core data about a user, his current status, and a list of history items:
public class UserViewModel
{
    public string UserName { get; set; }
    public Status UserStatus { get; set; }
    public List<History> History { get; set; }
}
Any advice would be greatly appreciated. (Additional info: each one of the calls is a separate database call.)
While I'm not certain this is the best place for this question (I think it may fall more into the Programmers area), I would say that it really depends on what data you need most often. If you need the whole object for every page in your app, then is making multiple small calls really going to save you anything? If some of that data can be cached on the client side, then maybe a lot of small calls would be more efficient; but otherwise you're increasing the amount of client work (the client has to retrieve, parse, and then output three streams of data) and server work (each call has to be routed, data retrieved, and data returned) for little benefit.
Secondly, as @Damien_The_Unbeliever points out, there's the question of outsiders calling this API. If the API is public, or called by multiple apps, it's a question of what is the most efficient package that most apps will need, not just what this app needs. If most apps will need the whole object, then it doesn't make sense to give them calls that retrieve only pieces of that object. If they only need, say, Status, then an API method for retrieving just Status is a good call.
When you are designing an API action, always think about performance: how many times per second will your action be consumed, and does any separate operation take too long to respond? Make simple benchmarks for each operation using Stopwatch, then make several parallel calls to see if any operation creates a bottleneck.
In my opinion, every action should be simple and atomic.
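The benchmarking advice above can be sketched like this (Python's time.perf_counter here; in .NET the equivalent is System.Diagnostics.Stopwatch — the measured operations are placeholders standing in for real API actions):

```python
import time

def benchmark(operation, runs=100):
    """Time one API operation over several runs; return average seconds per call."""
    start = time.perf_counter()
    for _ in range(runs):
        operation()
    elapsed = time.perf_counter() - start
    return elapsed / runs

# Placeholder operations standing in for the real database calls
def get_user():    time.sleep(0)        # imagine a cheap DB read here
def get_history(): time.sleep(0.001)    # a slower, heavier query

timings = {op.__name__: benchmark(op, runs=10) for op in (get_user, get_history)}
slowest = max(timings, key=timings.get)  # the candidate bottleneck
```

Running each candidate operation this way (and then again under parallel load) tells you whether splitting the endpoint into small calls would multiply a cheap operation or isolate an expensive one.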
