Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
We are rewriting our system using MVC 4.0 and Web API, and we are currently at a decision point.
Would it be more efficient / better practice to make multiple small calls to a Web API rather than one single large call when displaying data on an MVC web page?
i.e.
Multiple calls :
call 1 - returns core data about a user (user model)
call 2 - returns data regarding a users status (status model)
call 3 - returns user history ( history model)
Single call :
returns a Full ViewModel that includes all the core data about a user, his current status and a list of history items
public string UserName { get; set; }
public Status UserStatus { get; set; }
public List<History> HistoryItems { get; set; }
Any advice would be greatly appreciated. (Additional info: each one of the calls is a separate database call.)
While I'm not certain this is the best place for this question (it may fall more into the Programmers area), I would say that it really depends on what data you need most often. If you need the whole object for every page in your app, is making multiple small calls really going to save you anything? If some of that data can be cached on the client side, then a lot of small calls might be more efficient. Otherwise, you're increasing the work for both the client (which has to retrieve, parse, and then output three streams of data) and the server (where each call has to be routed, its data retrieved, and the data returned) for little benefit.
Secondly, as @Damien_The_Unbeliever points out, there's the question of outsiders calling this API. If the API is public, or called by multiple apps, it's a question of what the most efficient package is that most apps will need, not just what this app needs. If most apps will need the whole object, then it doesn't make sense to give them calls that retrieve only pieces of that object. If they only need, say, Status, then an API method for retrieving just Status is a good call.
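To make the trade-off concrete, here is a sketch of how the single aggregate call could compose the three lookups server-side, so the client pays one HTTP round trip while the server still performs three data calls. The types and the `Fetch*` helpers are hypothetical stand-ins for the real database calls:

```csharp
using System;
using System.Collections.Generic;

public class Status { public string Text { get; set; } }
public class History { public string Entry { get; set; } }

// The full view model returned by the single large call.
public class UserViewModel
{
    public string UserName { get; set; }
    public Status UserStatus { get; set; }
    public List<History> HistoryItems { get; set; }
}

public class UserService
{
    // Each Fetch* stands in for one of the three separate database calls.
    string FetchUserName(int id) => "user" + id;
    Status FetchStatus(int id) => new Status { Text = "active" };
    List<History> FetchHistory(int id) =>
        new List<History> { new History { Entry = "logged in" } };

    // Single aggregate call: one round trip for the client,
    // still three data-source calls on the server.
    public UserViewModel GetUser(int id) => new UserViewModel
    {
        UserName = FetchUserName(id),
        UserStatus = FetchStatus(id),
        HistoryItems = FetchHistory(id),
    };
}
```

The granular option would simply expose each `Fetch*` as its own action instead; the server-side work is the same either way, which is why the decision mostly comes down to round trips and client-side caching.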
When you are designing an API action, always think about performance: how many times per second the action will be consumed, and whether each separate operation takes too long to respond. Make simple benchmarks for each operation using Stopwatch, then make several parallel calls to see if any operation becomes a bottleneck.
In my opinion every action should be simple and atomic.
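A minimal sketch of the benchmarking approach described above, using Stopwatch; the `operation` delegate is a stand-in for your real API action:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class ApiBenchmark
{
    // Times one sequential run and one batch of parallel runs of the
    // given operation. Several concurrent calls help expose contention
    // (e.g. on the database behind the action).
    public static (long SequentialMs, long ParallelMs) Measure(
        Action operation, int parallelCalls = 10)
    {
        var sw = Stopwatch.StartNew();
        operation();
        sw.Stop();
        long sequential = sw.ElapsedMilliseconds;

        sw.Restart();
        Parallel.For(0, parallelCalls, _ => operation());
        sw.Stop();
        return (sequential, sw.ElapsedMilliseconds);
    }
}
```

If the parallel batch takes far longer than `parallelCalls` times the sequential cost would suggest, the operation is a likely bottleneck under load.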
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I am using the OM2M (Eclipse) implementation of the oneM2M standard. It generates a content instance for each telemetry reading and uses a random number to generate its ID. Now, let's say that for some reason the device sends the same telemetry data twice; in that case we will have duplicate entries for this telemetry data, since each content instance gets a random ID.
There are two possibilities:
1) I can use the telemetry timestamp to generate the ID for the content instance, so that there won't be any duplicate entries.
2) I do nothing and store the duplicate entries, so that we can later analyze the data, capture this anomaly, and change the device configuration accordingly.
Which of the two options is possible using oneM2M?
And how does oneM2M support time-series data streams?
Thanks in advance.
The scenarios you are describing in your question are actually two different use cases:
Either you want time-series data (data that is sent at specific intervals, e.g. every minute, independently of whether it has changed), or
You want the latest data of your sensor, and only record the changes.
You need to decide which case you want to implement for your scenario, but it seems from your question that the second use case is what you want to implement.
What you propose in option 1) is not possible because the <contentInstance> resource type does not allow updates of an existing resource. Your only possibility with this resource is to create a new <contentInstance> every time you want to store data.
Also, you cannot provide, set or update the resourceIdentifier because it is always assigned by the CSE.
However, there are a couple of options to achieve what you want when you only need to store one data record per sensor. Have a look at the <container> definition: there you can set the maxNrOfInstances (mni) attribute to 1. This means the <container> always makes sure to store only one instance of the data automatically (i.e. it removes all older instances). To access your data you would then not address the <contentInstance> directly, but use the <latest> virtual child resource of the <container>. When sending a RETRIEVE request to that resource you automatically get the latest <contentInstance>, independently of its name or resource identifier.
Another possibility would be using <flexContainer>. Here, you can define your own data points and store data records without any versioning. But I am not sure whether the version of OM2M you are using fully supports the <flexContainer> resource type.
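For illustration, the HTTP request that creates such a <container> with mni set to 1 might be built like this. The base URL, resource names, and the admin:admin originator are OM2M defaults and assumptions; adjust them to your deployment. The request is only constructed here, not sent:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

public static class Om2mSketch
{
    // Builds (but does not send) a request that would create a <container>
    // named DATA with maxNrOfInstances = 1 under a hypothetical "mySensor"
    // application entity on a local OM2M IN-CSE.
    public static HttpRequestMessage BuildCreateContainerRequest()
    {
        var req = new HttpRequestMessage(
            HttpMethod.Post,
            "http://localhost:8080/~/in-cse/in-name/mySensor");
        // Originator credentials; admin:admin is the OM2M default.
        req.Headers.Add("X-M2M-Origin", "admin:admin");
        req.Content = new StringContent(
            "<m2m:cnt xmlns:m2m=\"http://www.onem2m.org/xml/protocols\" rn=\"DATA\">" +
            "<mni>1</mni></m2m:cnt>",
            Encoding.UTF8);
        // ty=3 marks the payload as a <container> resource.
        req.Content.Headers.ContentType = new MediaTypeHeaderValue("application/xml");
        req.Content.Headers.ContentType.Parameters.Add(
            new NameValueHeaderValue("ty", "3"));
        return req;
    }
}
```

The latest data record would then be read via the <latest> virtual child, e.g. a GET on .../mySensor/DATA/la, without knowing the content instance's random resource identifier.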
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I'm learning how to parse JSON from web APIs. I have read that I need to return my parsed data from the API asynchronously, as opposed to synchronously. I'm not sure why it has to be asynchronous. I know it has something to do with threading, but that doesn't clarify it for me.
Why do network requests have to be performed asynchronously?
The fact that you should do this asynchronously has nothing to do with the nature of the response (JSON or otherwise). It's just that you're requesting data from an API on a remote server and you don't know how long it will take (subject to the nature of the network the device is on, how busy the web server is, etc.).
Bottom line, any task that takes more than a few milliseconds should generally be performed asynchronously to ensure a responsive UI, and this API call will take much more time than that.
Analogy time
Imagine that you're employed in the information booth of a train station to manually update a board with trains' statuses. You read off an old-fashioned ticker tape and move models of the trains around so that passengers can see what's going on. You also answer questions about schedules and such directly, when passengers ask you.
You realize that for one particular portion of the board, some information is missing from your tape. Your colleague has the info, but she isn't in the station. So you leave the board, go over to the phone, and call her. You dial, and wait for her to pick up, and then explain what you need. She doesn't have what you need immediately to hand, so she asks you to wait a moment while she gets it.
Meanwhile, the tape doesn't stop. Information about trains continues to come in, but because you're sitting there on the phone waiting, you're not doing anything with it. The people who are watching the board get frustrated, and the people who have questions for you can't ask them either.
Finally, your colleague comes back and gives you what you asked for. You thank her and return to the board. You realize the board is in very bad shape, not reflecting the current state of the world at all. And the passengers with questions have stormed out and left you a one-star review on the App, I mean Train, Store.
Next day, the same situation comes up. You need information you don't have. Instead of stepping away from the board for several minutes, you quickly fire off a text message, and get right back to talking to passengers and moving things around on the board.
In about the same amount of time that you spent waiting on the phone yesterday, you get a text back from your colleague with the information. You incorporate it into your workflow, and nobody even notices that you spend a couple of seconds reading from your phone instead of the ticker tape. Victory!
The first day, you made a synchronous network request. You initiated a conversation with a remote service and waited until you got the response. When that happened, your UI was locked up, neither taking input from the user nor refreshing its own state.
The second day, you made an asynchronous request. You kept working normally until the response came back, allowing you to continue with all your other tasks.
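Although the question is about iOS, the principle is language-agnostic. A small sketch of the two styles (in C#, matching the other examples on this page; the 200 ms "server" is a simulated stand-in):

```csharp
using System;
using System.Threading.Tasks;

public static class TrainBoard
{
    // Simulates a slow network request (a hypothetical 200 ms server).
    public static async Task<string> FetchStatusAsync()
    {
        await Task.Delay(200);   // waiting on the "network"
        return "on time";
    }

    // Asynchronous style -- the "day two" text message: start the
    // request, keep working, and read the reply when it arrives.
    public static async Task<string> RunBoardAsync()
    {
        Task<string> pending = FetchStatusAsync();  // fire off the "text"
        Console.WriteLine("Board keeps updating while we wait...");
        return await pending;                       // read the reply later

        // Synchronous style ("day one") would instead be:
        //   string s = FetchStatusAsync().Result;  // blocks this thread
    }
}
```

On iOS the same shape appears as a completion handler (or Swift's async/await): the UI thread starts the request and returns immediately, and the response is handled in a callback.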
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am fairly new to iOS programming,
so I do not even know how to ask exactly,
but I will explain the problem.
I created an application that relies mainly on fetching data from the server, and sometimes the size of the JSON is too large.
Is there a way to save the JSON on the device and fetch only recent data, or only synchronize changes with the server?
The program is Objective-C and I use AFNetworking.
The back end is ASP.NET MVC.
The general way to handle this use case is: you always store the latestItemId/latestItemTimestamp in your app, and every time you need new data you make a call to the server with this information. In your server endpoint you use this id/timestamp and return only the data created after it.
When the app calls the API endpoint for the first time, the value of latestItemId will be 0. Every time you get data, you keep updating it. Since you are asking the server to give data only after a specific id, you will only get the needed data (the latest data).
For example, your server code might look like this (using EF and LINQ; the code below is just to give you an idea, I did not check it for compilation errors):
public List<string> Messages(int fromId = 0, int top = 20)
{
    var d = yourDbContext.Messages
        .Where(x => x.Id > fromId)   // only items the client has not seen yet
        .OrderBy(x => x.Id)          // order by id so paging by id stays consistent
        .Take(top)
        .Select(x => x.MessageBody)
        .ToList();
    return d;
}
On the iOS client side, you can keep the data (latestItemId) in app memory and/or use NSUserDefaults to store it. The NSUserDefaults.standardUserDefaults() method might be helpful (in Swift).
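The client-side bookkeeping can be sketched the same way (shown in C# rather than Objective-C, purely to illustrate the protocol; SyncClient and its members are hypothetical names):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class SyncClient
{
    // Persisted between launches (NSUserDefaults on iOS).
    public int LatestItemId { get; private set; } = 0;
    public List<string> Cache { get; } = new List<string>();

    // `server` stands in for the Messages endpoint above: given the
    // client's latest id, it returns only newer items.
    public void Sync(Func<int, List<(int Id, string Body)>> server)
    {
        var fresh = server(LatestItemId);
        foreach (var item in fresh.OrderBy(i => i.Id))
        {
            Cache.Add(item.Body);
            LatestItemId = item.Id;   // remember the high-water mark
        }
    }
}
```

Because each sync asks only for items after the stored id, repeated syncs never re-download or duplicate data the device already has.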
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
So I have a flow where a user takes a photo, adds info, and uploads it to my database. My question is: how should I store that data so it is accessible through all my controllers, so that when they click the upload button it sends the final object to the server to be added to the database? Would I use Core Data? Or something like a struct? I just want to make sure I am doing this correctly.
This is an opinion-oriented question, and the answer is influenced by a developer's familiarity/comfort with the various underlying concepts as well. Hence, though I don't consider this a definitive answer, here is my opinion.
Should I use core data, so it is accessible through all my controllers?
Absolutely not! You don't need Core Data just to create a shared data source that is used by multiple view controllers simultaneously. You can create a singleton data source object that all the VCs access.
But then, Core Data is not just a shared data source, is it?
Core Data is a persistent data store, whereas your structs are not.
Assume the user takes a pic and quits the app before it gets uploaded, or you want to provide offline capability, where the user can take a pic without internet, queue it for upload, and have your app upload it to the server whenever connectivity returns. If you use structs and keep the data source in memory, all the user's effort is wasted when the app quits, and the user will obviously not appreciate that. On the other hand, if you use Core Data you can have the data in a SQLite file and access it whenever you need it, even if the user quits the app in between :)
Managed object context provides performBlock and performBlockAndWait to synchronize access to Core Data in a multithreaded environment, whereas with a plain array of structs you have to write that yourself.
Now, there's no point in reinventing the wheel, is there? We all know data types like Array are not thread safe :) Neither is managed object context, but from iOS 5 onwards it provides the amazing, handy performBlock and performBlockAndWait methods, which ease the developer's life when dealing with a shared data source (managed objects) in a multithreaded environment.
Managed object context provides notifications about changes happening in real time, and it works like a charm with NSFetchedResultsController, providing a mechanism to constantly monitor and update the data source.
I don't think it's a big thing, but to achieve the same with an array you'll have to use KVO. And because KVO won't work with plain Swift objects, you'll have to override didSet and manually notify all the VCs when the data source changes. Not so elegant a solution, is it? :)
Scalability and robustness:
Finally, how many records you are dealing with also matters. I have been part of a company which uploads and restores thousands of images to/from users' devices. In a scenario where you are dealing with thousands of images, maintaining an array is always a pain, and costly in memory footprint as well, because the entire array is loaded all the time. On the other hand, NSFetchedResultsController works on a page-fault mechanism: it loads data efficiently, only when needed.
Scalability is just a matter of adding new fields to the managed object entity, and robustness is directly proportional to your skill in dealing with Core Data, I believe.
A pinch of advice:
No matter whether you use an array of structs or Core Data, always store images in the local file system and keep a relative path reference in your data source. Holding an entire image in memory is a really bad idea :D
Hope it helps.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am working on an iOS application in which Core Data was already implemented, so I couldn't follow the Core Data implementation from scratch, but I can work with Core Data while adding new features. Still, I have many doubts about Core Data and couldn't find a clear explanation in any blog.
1) Question 1 - I have set up the application's architecture so that it has a web service controller class, a web service helper class, a DatabaseManager class, UIViewController classes, and model objects as part of Core Data.
The web service controller makes the connection to the service with NSURLConnection and handles other related functionality. Once the response is received from the web service, it gives a callback to the web service helper class with blocks.
The web service helper class helps call services from all the UIViewControllers; it acts as an intermediate class making web service calls between the UIViewControllers and the web service controller. So when the web service helper gets the callback from the web service controller, it sends the response back to the UIViewController with the help of blocks.
My question is: what should the flow be for storing the web service response into Core Data as well as updating the data in the UI? I would like to know the best practice for doing it. Should I save the data into Core Data, then retrieve it and display it in the UI? Saving the data will take time if the data is big. Should the Core Data operation and the UI update happen synchronously?
2) Question 2 - I read about Core Data concurrency in many blogs, but I am still not entirely clear about concurrency in Core Data.
According to my knowledge, in order to achieve concurrency we have to create two managed object contexts, one with NSMainQueueConcurrencyType and the other with NSPrivateQueueConcurrencyType. Then all save and update operations have to be executed in the private MOC [NSPrivateQueueConcurrencyType], and reads can be executed with the main MOC [NSMainQueueConcurrencyType]. How is this related to performBlock?
3) Question 3 - As we can create multiple MOCs, should they be of NSConfinementConcurrencyType, and should we execute performBlock on each MOC for concurrency?
4) Question 4 - What is the difference between implementing concurrency as mentioned in Question 2 and Question 3?
5) Question 5 - Consider that I am reading a record using Core Data and, due to concurrency, the same record has to be updated. How can this situation be handled? What I know is that I have to use a merge policy, but I am not sure how to implement it, since I am not clear about the above cases.
6) Question 6 - In an application, how many managed object contexts can be created of type NSMainQueueConcurrencyType, NSConfinementConcurrencyType, and NSPrivateQueueConcurrencyType?
Can anyone answer the above questions?
Thanks in advance.
This really should be several separate questions. I will attempt to answer the architecture question, and perhaps touch on some of the others.
The return path from the web service should not reach any view controllers directly. The point where your service helper has parsed the response and validated it is where you want to save to core data. This task should be handed off to another class.
From the view controller side, you want to use NSFetchedResultsControllers (FRCs) to know when the model has changed. You can set up an FRC to watch any number of objects, including a single object.
FRCs were intended for table views, and there are numerous examples available on how to use them for that purpose. If you have a view where you are editing a single object and you use the web service to save updates, for example, you can have an FRC that is watching the edited object. When the save is complete, the FRC will trigger and you can update the UI to indicate success, or whatever.
Core Data
Core Data concurrency is not trivial, as you've discovered. I've had the best experience with the following setup:
A read-only context with NSMainQueueConcurrencyType. This is the initial context that is tied to the persistent store. This context remains for the entire session.
An NSOperationQueue with a concurrency of 1. Operations on this queue clone the main (read-only) context with a concurrency type of NSConfinementConcurrencyType, and are connected to the same store. Only these cloned contexts are allowed to save. These contexts are discarded when the operation is complete.
A merge handler that will merge changes into the main context.
Operations execute on background threads, and are synchronous with respect to each other. This makes merges simple. Cloned contexts are setup with a merge policy of NSMergeByPropertyObjectTrumpMergePolicy, and the main context with NSMergeByPropertyStoreTrumpMergePolicy.
View controllers, and other main-thread activities, use the main context, which always exists.
There are lots of other setups, including multiple writable sibling contexts, parent-child relationships, etc. I recommend picking something simple, because you don't want to be fighting Core Data and threading issues at the same time.
I recommend watching this video by Paul Goracke. The inspiration for my preferred stack was taken directly from Paul's presentation.