Multiple persistent stores - iOS

I need to keep my data separate in different stores (user profiles). What is the best way to achieve this? I'm going to work with persistent object stores at runtime. Should I simply remove() the current one and addPersistentStore() to make a new one, or reuse the earlier-created instance?

I would have one Core Data stack (using NSPersistentContainer) for user management. This stack would hold the basic account details and the name of the SQL file. (Store just the SQL filename, NOT the full URL path, as the path can change in rare circumstances such as an iTunes restore.) This would be used for the login or account-selection page.
Then I would set up a second Core Data stack using the SQL filename that was stored in the user-account object. This would be the main stack used by the application. If you need to log out, tear down the second stack and start over. Removing and adding stores is a bad idea, as it won't deal with the row cache or other managed objects that are floating around.
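A minimal sketch of that two-stack setup, assuming hypothetical model names (UserDirectory, UserData) and Application Support as the store location:

    import CoreData

    // Stack 1: account management. Uses a fixed store name and holds
    // the account records, including each account's store filename.
    let accountContainer = NSPersistentContainer(name: "UserDirectory")
    accountContainer.loadPersistentStores { _, error in
        if let error = error { fatalError("Account store failed: \(error)") }
    }

    // Stack 2: per-user data. Build the store URL from the *filename*
    // saved on the account object, resolved against the current
    // Application Support directory (the directory path can change,
    // so only the filename is persisted).
    func makeUserContainer(storeFileName: String) -> NSPersistentContainer {
        let container = NSPersistentContainer(name: "UserData")
        let supportDir = FileManager.default.urls(
            for: .applicationSupportDirectory, in: .userDomainMask)[0]
        let storeURL = supportDir.appendingPathComponent(storeFileName)
        container.persistentStoreDescriptions = [
            NSPersistentStoreDescription(url: storeURL)
        ]
        container.loadPersistentStores { _, error in
            if let error = error { fatalError("User store failed: \(error)") }
        }
        return container
    }

    // On logout: release every reference to this container and its
    // contexts/objects, then call makeUserContainer for the next user.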
Or you could simply have one Core Data stack and manage the relationships so that every object belongs to a user object. Then you would constrain your fetches to only look at objects belonging to the correct user.
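If you go the single-stack route, the scoping is just a predicate on every fetch. A sketch, with hypothetical Note and User entities:

    import CoreData

    // Hypothetical entities: Note has a to-one `user` relationship to User.
    func notes(for currentUser: User,
               in context: NSManagedObjectContext) throws -> [Note] {
        let request = NSFetchRequest<Note>(entityName: "Note")
        // Only fetch objects belonging to the signed-in user.
        request.predicate = NSPredicate(format: "user == %@", currentUser)
        return try context.fetch(request)
    }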

Rails: working on temporary instance between requests and then commit changes to database

I have already read Rails - How do I temporarily store a rails model instance? and similar questions, but I cannot find a satisfactory answer.
Imagine I have the model Customer, which may carry a huge amount of attached information (simple attributes, data in other tables through has_many relations, etc.). I want the application's user to access all the data on a single page with a single Save button. As the user changes the data (i.e. edits simple attributes, adds or deletes has_many items, ...), I want the application to update the model, but without committing the changes to the database. Only when the user clicks Save should the model be committed.
To achieve this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes to a model are retained not in the browser but on the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless, you need some kind of storage between requests. The session is the easiest way to store data between requests. In your case the session alone will not be enough, because the data needs to be accessible to multiple users.
I see two ways to achieve your goal:
1) Get some fast external data storage like a key-value server (Redis, or anything you prefer: http://nosql-database.org/) where you put your objects via serializing/deserializing (e.g. JSON; a minimal sketch of that round-trip follows this list).
This may be fast depending on your design choices and data model, but it is the harder approach.
2) Just store your objects in the DB as you would regularly do, and version them (https://github.com/airblade/paper_trail). Then you can store a timestamp when people hit the Save button, and you can always go back to this state. This is the easier approach, I guess, but it may be a bit slower depending on the size of your data model changes (though I think it'll do).
EDIT: If you need real-time collaboration between users you should probably have a look at something like Firebase
EDIT2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), but tools like Redis already include locks to achieve your goal (e.g. redis-semaphore). Depending on your data, you may need to build some logic for merging different changes from different users.
3) Another approach that came to my mind would be doing all the editing with JavaScript and saving everything in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).
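For approach 1), the serialize/restore round-trip is language-agnostic; here it is sketched in Swift with Codable (the Customer shape and the session key are invented for illustration):

    import Foundation

    // Invented stand-in for the model being edited across requests.
    struct Customer: Codable {
        var name: String
        var orders: [String]
    }

    // Serialize on the way out of one request; the bytes go into the
    // key-value store under a per-session key (e.g. Redis: SET session:42:draft).
    func stash(_ draft: Customer) throws -> Data {
        try JSONEncoder().encode(draft)
    }

    // Re-hydrate the draft when the next request arrives.
    func restore(_ payload: Data) throws -> Customer {
        try JSONDecoder().decode(Customer.self, from: payload)
    }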

Save data in two persistent stores

I have an app with a search feature that performs a network request, but uses the same model framework as the rest of the app.
This means that when the user searches for something, I need to create managed objects from the found data, save them, and display them. However, this mixes the search results in with the user's existing records.
I would ideally like to save the managed objects found in the search in a separate in-memory persistent store so they don't create disorder in the main data.
I haven't done something like this before, so what is the best way to approach it?
Thank you!
As has been suggested by @stevesliva, you do not need to involve yourself in the complexities of maintaining multiple, partially in-memory stores. The way to go here is to create a child context and fetch the online data into that context. Once you no longer need the data, just discard the context.
If you decide to keep the downloaded data, you can simply "push" the changes to the main context via save:. At that point you could make any adjustments necessary for the data to fit in with the user's data. Depending on your model, one feasible solution could be to add an attribute to one of the entities that marks downloaded objects as distinct from user-created objects.
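A rough sketch of that child-context flow, assuming an existing mainContext and a downloadedItems array (entity and attribute names are made up):

    import CoreData

    // Scratch context layered on top of the main (UI) context.
    let searchContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    searchContext.parent = mainContext

    // Insert downloaded results into the scratch context only.
    for item in downloadedItems {
        let result = NSEntityDescription.insertNewObject(forEntityName: "SearchResult",
                                                         into: searchContext)
        result.setValue(item.title, forKey: "title")
    }

    // Discard: let searchContext go away (or reset it); nothing was
    // ever written to the main context or to disk.

    // Keep: push the changes one level up, then persist from the parent.
    try searchContext.save()   // pushes into mainContext (still in memory)
    try mainContext.save()     // writes to the persistent store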

MVC design - handle file upload before saving the record

We have an MVC web app which has a Claim management wizard (similar to typical order-entry flows). A Claim can have multiple Items, and each Item can have multiple Files related to it.
Claim --> Items --> Files
While adding a new Claim, we want to allow the user to add one or more Items to it and also allow file uploads for those Items. But we want to keep everything in memory until the Claim is actually saved, so that if the user doesn't complete the Claim entry or discards it, no database interaction is done.
We can handle the data-level in-memory management via the session. We serialize the Claim object (which also includes a Claim.Items property) in the session. But how do we manage files?
We store files in a <ClaimID>\<ItemID> folder, but while creating a new claim in memory we don't have any IDs until the record is saved in the database (both are auto-incrementing ints).
For now, we have to restrict the user from uploading files until a Claim is saved.
Why not interact with the database? It sounds like you're intending to persist data between multiple requests to the application, and databases are good for that.
You don't have to persist it in the same tables or even in the same database instance as the more long-term persisted data. Maybe create tables or a database for "transient" data, or data that isn't intended to persist long-term until it reaches a certain state. Or perhaps store it in the same tables as the long-term data but otherwise track state to mark it as "incomplete" or in some other way transient.
You could have an offline process which cleans up the old data from time to time. If deletes are costly on the long-term data tables, then that would be a good reason to move the transient data to its own tables, optimized for lots of writes/deletes vs. lots of reads over time.
Alternatively, you could use a temporary ID for the in-memory objects to associate them with the files on disk prior to being persisted to the database. Perhaps even separate the file-associating ID from the record's primary ID. (I'm assuming that you're using an auto-incrementing integer for the primary key, and that's the ID you need to get from the database.)
For example, you could have another identifier on the record which is a Guid (uniqueidentifier in SQL) for the purpose of linking the record to a file on the disk. That way you can create the identifier in-memory and persist it to the database without needing to initially interact with the database to generate the ID. This also has the added benefit of being able to re-associate with different files or otherwise change that identifier as needed without messing with keys.
A Guid shouldn't be used as a primary key in many cases, so you probably don't want to go that route. But having a separate ID could do the trick.
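The idea is language-agnostic; a minimal Swift sketch, where the Claim type and the folder layout are hypothetical:

    import Foundation

    // The file key is generated in memory, before the claim ever touches
    // the database, so uploads can be stored immediately.
    struct Claim {
        let fileKey = UUID()       // separate from the eventual DB primary key
        var items: [String] = []
    }

    // Files live under a folder named after the stable file key,
    // not after the not-yet-assigned auto-increment ID.
    func uploadFolder(for claim: Claim) -> URL {
        let docs = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
        return docs.appendingPathComponent(claim.fileKey.uuidString)
    }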

CoreData Update Database Leaving User Entries

First, thank you for any help provided.
I have an iOS app leveraging Core Data to retain various presentations; this data comes from a SQLite file and there is no server connection.
I need to be able to provide app updates (via the App Store), and an update may add more data to the database.
The tricky part is that the update cannot simply overwrite the current database, as there are a few user tables that I would not like touched.
Please provide any information I should consider when accomplishing this; any links are greatly appreciated.
Thank you.
Given your app has no server connection, you will have to rely on shipping data within the updated application itself. I would recommend using a plist file or defining your own XML or JSON structure. You can then read this data to create/update Core Data NSManagedObjects.
It looks like someone on SO has done plist-to-Core-Data imports in the past.
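A sketch of what such an import could look like, assuming a bundled seed.json file and a hypothetical Presentation entity with a stable seedID attribute:

    import CoreData

    struct SeedRecord: Decodable {
        let id: String
        let title: String
    }

    // Read the data shipped inside the app bundle and upsert it into
    // Core Data, leaving user-created objects alone.
    func importSeedData(into context: NSManagedObjectContext) throws {
        guard let url = Bundle.main.url(forResource: "seed", withExtension: "json")
        else { return }
        let records = try JSONDecoder().decode([SeedRecord].self,
                                               from: Data(contentsOf: url))
        for record in records {
            // Match on a stable ID so re-imports update instead of duplicating.
            let request = NSFetchRequest<NSManagedObject>(entityName: "Presentation")
            request.predicate = NSPredicate(format: "seedID == %@", record.id)
            let object = try context.fetch(request).first
                ?? NSEntityDescription.insertNewObject(forEntityName: "Presentation",
                                                       into: context)
            object.setValue(record.id, forKey: "seedID")
            object.setValue(record.title, forKey: "title")
        }
        try context.save()
    }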
Would you have relationships between user-created data and shipped data?
If not, you might go the route of connecting two stores to the persistent store coordinator. The shipped store would be read-only; the store with user-created data would be read-write. You can use this approach, too, if you have relationships between shipped and user-created objects, but it's a lot more complicated, since Core Data doesn't manage cross-store relationships for you, and you'll need to write your own logic (doable, but not straightforward).
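A sketch of attaching both stores to one NSPersistentContainer (the store file names are hypothetical):

    import CoreData

    let container = NSPersistentContainer(name: "Model")

    // Read-only store shipped inside the app bundle.
    let shipped = NSPersistentStoreDescription(
        url: Bundle.main.url(forResource: "Shipped", withExtension: "sqlite")!)
    shipped.isReadOnly = true

    // Read-write store for user-created data, in Application Support.
    let supportDir = FileManager.default.urls(
        for: .applicationSupportDirectory, in: .userDomainMask)[0]
    let userStore = NSPersistentStoreDescription(
        url: supportDir.appendingPathComponent("UserData.sqlite"))

    container.persistentStoreDescriptions = [shipped, userStore]
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Store failed: \(error)") }
    }

    // New user objects can be pinned to the writable store explicitly:
    // context.assign(newObject, to: userPersistentStore)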
If you need relationships between shipped and user-created objects, you can still ship a Core Data store. When the app launches for the first time (no user-created objects yet), you copy the store to the Documents folder and use this store to create your Core Data stack. User-created objects will be added to this store. Once you have new 'shipped' objects (i.e. a new store in the app bundle), you'll have to manually migrate that store's data into the store that the user has changed. You'll have to be able to find:
(1) objects that need to be deleted
(2) objects that need to be updated (changed)
(3) objects that need to be added
If you mark your shipped objects with a special flag such that you can tell if it's a user created object or a shipped one, that would be doable. You also have to have some sort of ID to be able to tell which objects in the new store correspond to which ones in the existing (old) store.
You do not need to go the route of using plists. In fact, I'd recommend against it. You can easily open two stores at the same time, either to use both stores or just to migrate objects from one store to the other.
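The first-launch copy from the bundle could look roughly like this (file names again hypothetical):

    import Foundation

    // Copy the shipped .sqlite into Documents on first launch only;
    // after that, the Documents copy is the live, user-modified store.
    // (If the shipped store uses WAL journaling, copy its -wal/-shm
    // sidecar files too.)
    func installSeedStoreIfNeeded() throws -> URL {
        let docs = FileManager.default.urls(
            for: .documentDirectory, in: .userDomainMask)[0]
        let target = docs.appendingPathComponent("Presentations.sqlite")
        if !FileManager.default.fileExists(atPath: target.path),
           let seed = Bundle.main.url(forResource: "Presentations",
                                      withExtension: "sqlite") {
            try FileManager.default.copyItem(at: seed, to: target)
        }
        return target
    }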

How can I persist objects between requests with ASP.NET MVC?

I'm just starting to learn ASP.NET MVC and I'd like to know how I can retain model objects between subsequent requests to controller action methods?
For example, say I'm creating a contact-list web app. Users can create, update, rename, and delete contacts in their list. However, I also want users to be able to upload a contact list exported from other programs. Yet I don't want to just automatically add all the contacts in the uploaded file; I want to give the user a secondary form where they can pick which uploaded contacts should actually be added to their list.
So first I have a ContactController.Upload() method which shows an upload form. This submits to ContactController.Upload(HttpPostedFileBase file) which reads the file that was posted into a set of Contact model objects. Then I want to display a list of all the names of the contacts in the list and allow the user to select those that should be added to their contact list. This might be a long list that needs to be split up into multiple pages, and I might also want to allow the user to edit the details of the contacts before they are actually added to their contact list.
Where should I save the model objects between when a user uploads a file and when they finally submit the specific contacts they want? I'd rather not immediately load all the uploaded contacts into the back-end database, as the user may end up selecting only a handful to actually add; the rest would then need to be deleted. I would also have to account for the case when a user uploads a file but never actually completes the process.
From what I understand, an instance of a controller only lasts for one request. So should I create a static property on my ContactController that contains all the latest uploaded contact model object collections? And then have some process that periodically checks the age of these collections and clears out any that are older than some specified expiration time?
A static property on the controller is trouble. First off, it won't work in a web farm, and second, you'd have to deal with multiple requests from different users. If you really don't want to use your database, you could use the ASP.NET Session.
No, you don't want a static property, as that would be static to all instances of the controller, even for other users.
Instead, you should create a table to upload the data to. This table would act as an intermediary between the time the user uploads the data and the time they complete the process. Upon completion, you copy the contacts you want to keep into your permanent table, then delete the temporary data. You can then run a process every so often that purges incomplete data older than a specified time limit.
You could also use the HttpContext.Cache, which supports expiration (and sliding expiration) out of the box.
Alternatively, and perhaps even better (but more work), you could use cookies and have the user modify the data using JavaScript in her browser before finally posting it to you.
However, I'd strongly recommend to store the uploaded information in the database instead.
As you pointed out, it might be a lot of data and the user might want to edit it before clicking 'confirm'. What happens if the user's machine (or browser) crashes or she has to leave urgently?
Depending on the way you store it, the data in this scenario will probably be lost. Even if you used the user ID as a cache key, a server restart, cache expiration, or cache overflow would cause data loss.
The best solution is probably a combination of database and cookie storage where the DB keeps the information in a temporary collection. Every n minutes, or upon pagination, the modified data is sent to the server and updated in the DB.
The problem with storing the data in the session or in memory is what happens if the user uploads 50k contacts or more. You then have a very large dataset in memory to deal with, which, depending on your platform, may affect application performance.
If this is never going to be an issue and the size of the imported contact list is manageable, you can use either the session or the cache to store the dataset for further modifications. Just remember to clear it when the user has committed the changes; you don't want a few heavy datasets hanging around in the session.
If you store the dataset in the session via your application controller, then it will be available to all controllers for as long as it is needed.
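Whichever platform cache you pick, the expiry mechanics are simple; this toy Swift sketch (not ASP.NET code, purely illustrative) pairs each entry with a deadline and drops stale entries on access:

    import Foundation

    // Toy illustration of cache expiry: each entry carries a deadline and
    // is discarded lazily on read. Real platform caches
    // (e.g. HttpContext.Cache) do this for you, including sliding expiry.
    final class ExpiringCache<Value> {
        private var entries: [String: (value: Value, expires: Date)] = [:]
        private let lifetime: TimeInterval

        init(lifetime: TimeInterval) { self.lifetime = lifetime }

        subscript(key: String) -> Value? {
            get {
                guard let entry = entries[key], entry.expires > Date()
                else { entries[key] = nil; return nil }
                return entry.value
            }
            set {
                entries[key] = newValue.map { ($0, Date().addingTimeInterval(lifetime)) }
            }
        }
    }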
