I create a SQLite connection with SQLite.swift, but I'm not sure where to store the DB connection so I can reuse it across various views for as long as the user has the app open.
Is using UserDefaults here an appropriate case? Or EnvironmentObject?
What is the recommendation with iOS apps in terms of keeping a DB connection open for reuse?
Is using UserDefaults here an appropriate case?
Definitely not. As you said yourself, you want it to exist while the app is open, whereas UserDefaults is for things you want to persist when the app is not running.
Or EnvironmentObject?
You could, but semantically it's still wrong: Apple defines it as "A property wrapper type for an observable object supplied by a parent or ancestor view", which doesn't really fit a DB connection. It's not an observable object with state.
Ideally you step back and look at a more generic architecture of your app.
Views want data in a specific format. They don't care where the data is coming from.
The fact that data is coming from DB is an implementation detail - tomorrow you may decide to retrieve it from remote server, and you don't want to change every single view because of that.
So what you really want is
Views talk to some sort of "data provider" that defines an interface by which views can get their data regardless of where it's stored.
Your implementation of the "data provider" talks to the local database (currently; it can change based on your needs).
In this structure the DB connection(s) are managed by the data provider and do not need to be shared with anyone. And your views will still use observable objects, except those observable objects are the data itself, not the connection to the database (in fact the views will not "know" where the data is coming from).
I will not go into detail on how to make that model happen - there are many other considerations here (like the overall architecture of your app) - but this is the gist of the idea.
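To make it concrete, here is a minimal sketch of that shape, assuming SQLite.swift's Connection/Table/Expression API; the Contact type and the ContactProvider/SQLiteContactProvider/ContactListModel names are made up for illustration.

import Combine
import Foundation
import SQLite   // SQLite.swift

// A hypothetical domain type; your real model will differ.
struct Contact: Identifiable {
    let id: Int64
    let name: String
}

// Views depend on this abstraction, not on the database.
protocol ContactProvider {
    func allContacts() throws -> [Contact]
}

// One possible implementation backed by SQLite.swift.
// The connection lives here and is never exposed to views.
final class SQLiteContactProvider: ContactProvider {
    private let db: Connection
    private let contacts = Table("contacts")
    // Qualified as SQLite.Expression to avoid clashing with Foundation's Expression on newer SDKs.
    private let id = SQLite.Expression<Int64>("id")
    private let name = SQLite.Expression<String>("name")

    init(path: String) throws {
        db = try Connection(path)
    }

    func allContacts() throws -> [Contact] {
        try db.prepare(contacts).map { row in
            Contact(id: row[id], name: row[name])
        }
    }
}

// The observable object the views actually watch holds data, not the connection.
@MainActor
final class ContactListModel: ObservableObject {
    @Published var contacts: [Contact] = []
    private let provider: ContactProvider

    init(provider: ContactProvider) {
        self.provider = provider
    }

    func reload() {
        contacts = (try? provider.allContacts()) ?? []
    }
}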
I am starting a new project (learning purposes) and I am trying to figure out what is the best software design pattern to use in the following scenario.
I have several pieces of data that need to be downloaded from multiple web services and stored somewhere in my app, to be displayed later. However, each piece of data (e.g. a list of teachers or students) will only be used in one or more specific view controllers (e.g. teachersViewController and studentsViewController).
I read that using the Singleton pattern, or the AppDelegate to store a variable (an object like ApplicationData), is bad practice, even more so in this example where I want to restrict data access.
So, which design pattern should I choose? I have read something about dependency injection, but I don't have any clue about it, or whether it even helps in this case. If it does, some examples with explanations would be nice.
You need some sort of database to store downloaded data. Good choices are Realm and Core Data. The right way to process data is:
Check if data is already in DB and show it if available.
Download or update data from server and parse it to objects.
Save objects to DB.
Show data taken from DB to user.
Download data as needed. When you open the VC with students, download only the students' data, and so on.
EDITED: If you need all the data when the app opens, then load it and put it in the DB before the first screen opens. Then just use the DB to show data to the user.
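A rough sketch of that flow in Swift, with made-up names (Student, StudentStore, StudentAPI, StudentRepository) standing in for your actual Realm/Core Data wrapper and API client:

import Foundation

// The store could be backed by Realm or Core Data; the API by any networking layer.
struct Student {
    let id: Int
    let name: String
}

protocol StudentStore {
    func loadStudents() -> [Student]
    func save(_ students: [Student])
}

protocol StudentAPI {
    func fetchStudents(completion: @escaping ([Student]) -> Void)
}

final class StudentRepository {
    private let store: StudentStore
    private let api: StudentAPI

    init(store: StudentStore, api: StudentAPI) {
        self.store = store
        self.api = api
    }

    func students(onUpdate: @escaping ([Student]) -> Void) {
        // 1. Show whatever is already in the DB.
        onUpdate(store.loadStudents())
        // 2. Download/update from the server, 3. save the parsed objects to the DB,
        // 4. show the data read back from the DB.
        api.fetchStudents { fresh in
            self.store.save(fresh)
            onUpdate(self.store.loadStudents())
        }
    }
}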
I am currently doing some tests with Ensembles, specifically testing Core Data light migration.
My current configuration is as follows:
Device-A running my app with data model 1
Device-B running my app with data model 2
data model 2 is based on data model 1 with one additional string property, which is optional
My scenario is as follows:
At the beginning, running my app with data model 1 on both Device-A and Device-B, everything synced fine using Ensembles (iCloud configuration)
On Device-B, install and run my updated app using data model 2
On Device-A, keep running my old app using data model 1, and add a new record
The result: the new record added on Device-A is uploaded to iCloud and then synced to device-B
My question: can I configure Ensembles to prevent it from uploading changes to iCloud in case that related data model is not the latest one? (i.e. in my case, Device-A uploads an object based on data model 1 while iCloud is already based on data model 2)
Thanks in advance!
UPDATE 1:
Drew, thank you very much for your answer. I definitely agree that uploads can't (and probably shouldn't) be prevented as Ensembles is a decentralised, peer-to-peer system.
Ideally, I would like the device with the new data model to ignore data based on the old data model (similar to the existing behavior, where the device with the old data model ignores any data based on the new one). Is that supported?
If not, please consider the following scenario as an example:
The old data model has an entity called 'Book' with two properties: title and author (both fields are non-optional)
The new data model has a new optional property called titleFirstLetter that should hold the first letter of the title field.
Currently, when Ensembles is not involved, I have full control when saving a new NSManagedObject to the persistent store. Therefore, the updated code of my app responsible for adding a new book will make sure to extract the first letter from the title field and save it to the new titleFirstLetter property (i.e. a book titled Catch-22 will have C in its titleFirstLetter property when the book is saved).
In addition, when a lightweight migration occurs on the Core Data stack, I detect that and perform a one-time procedure where I iterate over all existing books in the database and set titleFirstLetter according to the title value. From this point on, the database is consistent and valid, and the new code ensures that future books added to the database keep it valid.
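For what it's worth, that one-time backfill might look roughly like this in Swift, using the entity and attribute names from the example (a sketch, not the exact migration code):

import CoreData

// One-time backfill after a lightweight migration: derive titleFirstLetter
// from title for every existing Book.
func backfillTitleFirstLetter(in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Book")
    let books = try context.fetch(request)
    for book in books {
        if let title = book.value(forKey: "title") as? String, !title.isEmpty {
            book.setValue(String(title.prefix(1)), forKey: "titleFirstLetter")
        }
    }
    if context.hasChanges {
        try context.save()
    }
}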
Regarding Ensembles, if I don't have any control over old data coming from devices with an older data model, how can I fill the new titleFirstLetter property if my code is never called?
Thank you for your kind assistance!
You can't prevent it, no. Ensembles is a decentralised, peer-to-peer system. There is really no way for one device to know the current state of another device, so you couldn't prevent an upload.
The updated device should be capable of handling the old data from the other device. The device with the old model will ignore any data based on the new model, until it too is updated. Then it will merge all of that ignored data.
It is best to avoid migrations where possible, and stick to simple stuff like adding properties or entities rather than tricky refactors. If you need to make a lot of changes, consider simply starting with a new ensemble (e.g. change the ensemble identifier).
I have already read Rails - How do I temporarily store a rails model instance? and similar questions but I cannot find a successful answer.
Imagine I have the model Customer, which may have a huge amount of information attached (simple attributes, data in other tables through has_many relations, etc.). I want the application's user to access all the data on a single page with a single Save button on it. As the user makes changes to the data (i.e. changes simple attributes, adds or deletes has_many items, ...), I want the application to update the model, but without committing the changes to the database. Only when the user clicks Save should the model be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes to a model are retained on the server, rather than in the browser, until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless, you need some kind of storage between requests. The session is the easiest way to store data between requests. In your case, however, the session will not be enough, because the data needs to be accessible to multiple users.
I see two ways to achieve your goal:
1) Get some fast external data store like a key-value server (Redis, or anything you prefer: http://nosql-database.org/) where you put your objects via serialization/deserialization (e.g. JSON).
This may be fast depending on your design choices and data model, but it is the harder approach.
2) Just store your objects in the DB as you regularly would and have them versioned (https://github.com/airblade/paper_trail). Then you can store a timestamp when people hit the Save button, and you can always go back to that state. This would be the easier approach, I guess, but it may be a bit slower depending on the size of your data model changes (I think it'll do, though).
EDIT: If you need real-time collaboration between users, you should probably have a look at something like Firebase.
EDIT2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), but tools like Redis already include locks to achieve your goal (e.g. redis-semaphore). Depending on your data, you may need to build some logic for merging changes made by different users.
3) Another approach that came to mind would be doing all the editing with JavaScript and saving it in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).
My question is mostly related to an architectural or design pattern for hierarchical models in Objective-C. For background, my app is relatively simple. In general it talks to a web service to retrieve and display things a user can follow. When someone follows something, the thing they are following is conceptually stored for later access by posting to the web service.
I would like advice on where the logic should go to manage the interaction between the web service and the group of things a user follows.
For example, is it appropriate to create a model object like MyStuffModel with an array property named followedThings that holds references to AThingModel objects? And if so, would the logic for refreshing from the web service, etc be written and executed in the model?
Potential code example
@interface MyStuffModel : NSObject
@property (nonatomic, strong) NSArray *followedThings;
- (void)refreshAllFollowedThingsFromWebService;
@end

@implementation MyStuffModel
- (void)refreshAllFollowedThingsFromWebService
{
    // call my API client (built on AFNetworking), get back a response
    // populate followedThings, notify a view controller, etc.
}
@end
Or, should I not have a MyStuffModel object and manage the calls to my web service by calling my API client directly from a view controller?
In your experience, which approach is desired? Or is there another way?
I would do all of the networking from within the model. Here's an outline of how all the pieces fit together:
the controller tells the model which items to follow
the model forwards that information to the server
when the server has new information, it uses APNS to notify the model
the model requests the new information from the server
after the data transfer is complete, the model uses NSNotificationCenter to inform the controller that new information is available
the controller reads the information from the model
the controller updates the view with the new information
Using Apple's Push Notification Service (APNS) allows your server to notify your app when new data is available. This helps reduce network traffic since your app doesn't have to constantly poll the server to determine when new data is available. If you aren't familiar with APNS, there's one very important feature of the service that you need to be aware of (since it seems to be a point of confusion for many new users). The service only guarantees delivery of the last message sent. So, for example, if the server gets 10 new items for a particular device, and sends 10 notifications to the device while the device is either off or in a tunnel, then the service is only guaranteed to deliver the 10th message. The point is that you can't use APNS to send any data from the server to the device, since some messages may be lost. You should only use APNS to notify the device that data is available.
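In Swift terms, the hand-off in steps 4-7 could look something like this; StuffModel and the notification name are illustrative, and the network call is elided:

import Foundation

extension Notification.Name {
    static let followedThingsDidChange = Notification.Name("FollowedThingsDidChange")
}

final class StuffModel {
    private(set) var followedThings: [String] = []

    func refreshAllFollowedThingsFromWebService() {
        // ... ask the API client for the latest followed things ...
        followedThings = ["placeholder result"]
        // Step 5: tell interested controllers that new information is available.
        NotificationCenter.default.post(name: .followedThingsDidChange, object: self)
    }
}

// Steps 6-7, in the controller: observe the notification, read the model, update the view.
// NotificationCenter.default.addObserver(forName: .followedThingsDidChange,
//                                        object: model, queue: .main) { _ in
//     // read model.followedThings and refresh the UI
// }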
I always create model classes (and interfaces; no idea if that's applicable in Objective-C).
The model is in many cases a view of the database backend.
Your model class should hide the database access and provide a simple interface, for example an addNewFollower method. This method should then (optionally) do sanity checks and persist the data to the database backend.
This approach allows you to easily replace your database integration without touching the service layer at all. For example using an in-memory mock database for testing.
I always create simple "dumb" objects for models, as they model the data, nothing more. If you're doing networking/api calls, I'd create a separate set of classes that deal strictly with API calls and utilize your models as the interchange data. Mixing data and functionality is always fishy to me.
Writing a clean, reusable, testable, and reliable API client, that can handle errors, parallel/serial calls, logging etc, requires quite a bit of code that really should be separated from your other application tiers. Data is just data, keep it clean, keep it simple.
The other thing is that some endpoints don't always return data exactly as it is defined in that one model where you are shoehorning all of your code.
I wouldn't put it in the controllers either; I personally always create a separate set of classes used specifically for API calls, which also throw their own exceptions, handle serialization/deserialization, etc.
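As a Swift sketch of that separation (the Thing type, ThingsAPIClient, and the URL are all hypothetical):

import Foundation

// "Dumb" model: just data, no networking, no persistence.
struct Thing: Codable {
    let id: Int
    let name: String
}

// Networking lives in its own class and hands plain models back.
final class ThingsAPIClient {
    enum APIError: Error { case noData }

    func fetchFollowedThings(completion: @escaping (Result<[Thing], Error>) -> Void) {
        let url = URL(string: "https://example.com/api/followed-things")!
        URLSession.shared.dataTask(with: url) { data, _, error in
            if let error = error {
                completion(.failure(error))
                return
            }
            guard let data = data else {
                completion(.failure(APIError.noData))
                return
            }
            do {
                completion(.success(try JSONDecoder().decode([Thing].self, from: data)))
            } catch {
                completion(.failure(error))
            }
        }.resume()
    }
}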
App scenario: on the UI, a button is tapped to get the contact list from the server. The request goes to a subproject which does the download and parsing and returns the result through its delegate to the UI. So far everything works properly. Let's say there is no internet connection and we can't get the contact list. To solve the problem, I want to cache the data in Core Data; if there is no internet, the cached data will be returned. Now the question that bugs me: is it possible to create one data model and use it both in the subproject, to save the data, and in the UI, where data gets pulled and edited from the same data model?
So basically I want to access Core Data from different subprojects and the UI.
I couldn't find any hints or tutorials regarding this issue. Any ideas?
Thanks in advance!
Edit:
There is a project "b" that is added to the parent project "a". Project "b" is actually a static library.
If I let the library do the saving and return data to the UI, won't it be inefficient to get all the data from Core Data and then send it to the UI?
I actually hope that there is a way to use the same data model in both the UI and the library.
I want to prevent the UI from having to hold a huge load of data; it's better to have Core Data handle that, including memory management. I'm still reading some sources and trying to implement it in a test project.
I would argue that only the main project should deal with persistence, as then you can always decide to handle it differently: save it permanently or not, use Core Data or a home-grown SQL wrapper, and so on. So it would be up to the delegate to decide what to do with the incoming data.
But along with the delegate protocol you could also maintain model protocols that define what your models can hold; this would be independent of the implementation. The delegate could then return objects to the delegator, regardless of whether they are Core Data models or not, as long as those objects conform to the protocols. The delegator in the submodule could then check for values on the server and/or in the cache.
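Expressed in Swift for brevity (all the names here are invented), the idea might look like this:

import Foundation

// A model protocol shared by the library and the main project: it defines
// what a contact holds, independent of how it is stored.
protocol ContactRecord {
    var name: String { get }
    var phoneNumber: String { get }
}

// The delegate (implemented in the main project, which owns persistence)
// receives results and supplies cached data when the network is unavailable.
protocol ContactServiceDelegate: AnyObject {
    func contactService(_ service: ContactService, didLoad contacts: [ContactRecord])
    func cachedContacts(for service: ContactService) -> [ContactRecord]
}

final class ContactService {
    weak var delegate: ContactServiceDelegate?

    func loadContacts() {
        fetchFromServer { [weak self] contacts in
            guard let self = self else { return }
            if let contacts = contacts {
                self.delegate?.contactService(self, didLoad: contacts)
            } else if let cached = self.delegate?.cachedContacts(for: self) {
                // No network: fall back to whatever the main project has cached
                // (Core Data, SQLite, anything conforming to ContactRecord).
                self.delegate?.contactService(self, didLoad: cached)
            }
        }
    }

    // Placeholder for the real download/parsing code in the static library.
    private func fetchFromServer(completion: @escaping ([ContactRecord]?) -> Void) {
        completion(nil)
    }
}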