I have been reading many different posts, threads, etc. regarding the best practices to store level data to be used throughout a game application (level boundary data, images, characters, time, etc.).
Many have suggested that a Property List (.plist) is appropriate, as it is quite simple to store the game info there and read it back when needed. While a plist is easy to create and refer to, it is not secure in any way, short of AES-encrypting the string contents yourself.
Another method would be to hard-code this data into a separate class file, say LevelData.m, and read the level data directly from that class. Storing it this way should be "safer", since (as far as I am aware) users have no direct access at all to these class files once an application has been packaged and distributed through, say, the iOS App Store.
I have also read about using NSCoding (a Foundation protocol that SpriteKit games commonly rely on), where the data can be archived into a file (or NSData object) and later unarchived from that archive.
My question: can anyone suggest the best approach, or perhaps one that I haven't mentioned? Yes, I understand these are all perfectly valid methods for saving big, non-trivial, somewhat repetitive data that just needs to be read, and that each has its pros and cons in terms of security. I am simply trying to learn, from people who may have encountered a similar issue, the safest way to store this data so that the user cannot tamper with it in any way.
NOTE: I'm sure that hosting this data server-side and retrieving it on application launch would be the most secure approach. However, I am only asking which method is best practice, security-wise, strictly for storing data on the device. Thanks!
If you distrust your users, how about signing the data whose integrity you care about and validating the signature using public-key cryptography (with the public key pinned in the binary)?
Thus, only data with a valid signature from you can be used.
Then again, if a user disassembles your binary and swaps out the public key, that doesn't work either.
As always with these problems, the question is: how hard are you making it for an adversary to break your security measures, and how much hardness is actually useful?
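To make the signing idea concrete, here is a sketch using CryptoKit (or its API-compatible twin swift-crypto on non-Apple platforms). The key pair is generated inline only so the example is self-contained; in practice the private key lives in your build pipeline and only the public key ships in the binary. The level-data string is invented for the example.

```swift
#if canImport(CryptoKit)
import CryptoKit          // Apple platforms (iOS 13+/macOS 10.15+)
#else
import Crypto             // swift-crypto, same API elsewhere
#endif
import Foundation

// The private key stays in your build pipeline; only its public half ships.
let buildTimeKey = Curve25519.Signing.PrivateKey()
let embeddedPublicKey = buildTimeKey.publicKey

// Build step: sign the level data once.
let levelData = Data("level-1: bounds, characters, timers".utf8)
let signature = try! buildTimeKey.signature(for: levelData)

// App launch: refuse any data whose signature doesn't verify.
let trusted = embeddedPublicKey.isValidSignature(signature, for: levelData)
let tampered = embeddedPublicKey.isValidSignature(signature, for: Data("hacked".utf8))
print(trusted)   // true
print(tampered)  // false
```

As noted above, this only raises the bar: an attacker who patches the embedded public key defeats it.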
I know you already got an answer, but I personally use a GameData singleton class with NSCoding. I then save the encoded/archived data into the iOS Keychain (instead of NSUserDefaults, a bundle path, etc.).
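A minimal sketch of that approach, assuming nothing beyond Foundation; the GameData class name and its properties are invented for the example, and the resulting Data blob is what you would then hand to the keychain:

```swift
import Foundation

// Illustrative GameData singleton conforming to NSSecureCoding.
// The archived Data blob would be stored in the keychain rather than
// in a plain file the user can edit.
final class GameData: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }
    static let shared = GameData()

    var highestLevel = 1
    var coins = 0

    private override init() { super.init() }

    func encode(with coder: NSCoder) {
        coder.encode(highestLevel, forKey: "highestLevel")
        coder.encode(coins, forKey: "coins")
    }

    required init?(coder: NSCoder) {
        highestLevel = coder.decodeInteger(forKey: "highestLevel")
        coins = coder.decodeInteger(forKey: "coins")
        super.init()
    }
}

// Archive for storage, then restore later (e.g. on next launch).
GameData.shared.highestLevel = 5
let blob = try! NSKeyedArchiver.archivedData(withRootObject: GameData.shared,
                                             requiringSecureCoding: true)
let restored = try! NSKeyedUnarchiver.unarchivedObject(ofClass: GameData.self,
                                                       from: blob)
print(restored?.highestLevel ?? -1) // 5
```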
To save into the keychain you can use this helper:
https://github.com/jrendel/SwiftKeychainWrapper
which is the one I use for my apps. It works very similarly to NSUserDefaults and is therefore very easy to use.
You can also get a more complex, feature-rich one here:
https://github.com/matthewpalmer/Locksmith
Not quite sure if this is on topic, so when in doubt feel free to close.
We have a client who is missing tracking data for a large segment of his visitors in his Report Suite. However, the complete set of data is available in a data warehouse. We are now investigating whether it is possible to import it as a data source. I only have experience with enriching data via classifications; the goal here, however, is to create views (sessions etc.) for a past timeframe from scratch.
According to the documentation this should be possible. However, there is one caveat specifically mentioned in the FAQ:
"Adobe recommends you select new, unused variables to import data using Data Sources. If you are uncertain about the configuration of your data file, or want to better understand the risks of re-using variables, contact Customer Care."
I take that to mean that I should not import data into props, eVars, events etc. that were already in use when data was collected via the tracker, which would pretty much defeat our purpose (basically we want to merge the data from the data warehouse with existing data). Since I have to go through some intermediaries to reach Customer Care, and this takes a long time, I wonder if somebody here can explain what the dangers of re-using variables are (and maybe even whether there is still a way to do this).
DISCLAIMER: I'm not familiar with Adobe Analytics, but the problem here is pretty universal. If someone with actual experience/knowledge specific to the product comes along, pay more attention to them than me :)
As a rule, variable reuse in any system runs the risk of data corruption. I'm not familiar with Adobe Analytics, but a brief read through some blogs implies that this is what they're worried about: if you have a variable that is being used in one section, and you import data into it from another section while it is in the same scope, you overwrite the data the other section was using.
Now, that same blog states that, provided your data structure is set up in a specific way, you can reuse variables/properties without issue, and in fact encourages it; hence the statement in your quote, "If you are uncertain about the configuration of your data file...". They're probably saying that if you know what you're doing and know there won't be any overwriting, fine, go ahead and reuse; but if you don't, or you aren't sure whether something else might be using the original content, then it's unsafe.
Regarding your specific case: you want to merge the two pieces of data together, not overwrite, so reusing your existing variables would clobber the existing data. It sounds like you will need to import into a second (new) set of variables, and then compare/merge between them within the system, rather than trying to import and merge in one go.
We have since received an answer from Adobe Customer Care.
According to them, the issue is that hits created via data imports are indistinguishable from hits created via server calls, so they cannot be removed or corrected after they have been imported. They recommend using different variables so that the original data remains recognizable and salvageable in case faulty data is ever imported.
This was already mentioned in the online documentation, but apparently Adobe thinks this is important enough to issue the extra warning.
I'm putting together an iOS/Mac library that I intend to open source. Part of the library vends data which client applications are likely to want to persist. The data is a mix of insensitive and sensitive, and I've been trying to work out the best way to handle this divide. I've considered a few options, but they all involve trade-offs I'm unhappy with. Are there any typical best practices for this situation?
Some background:
The class which stores this data is serializable via the NSSecureCoding protocol, but I only serialize the insensitive data.
Objects are still valid if the sensitive information goes missing, though clients of the library would need to take the user through an authentication process again.
Options I've considered:
Read and write to the keychain automatically in -initWithCoder: and -encodeWithCoder:. This puts the least burden on the client, but is arguably "surprising" behavior. I've largely rejected this option, but I'd be happy to be talked back into it.
Include an API to save and retrieve the sensitive information. This is what I currently have implemented and it works well for simple cases. It does put a burden on clients to remember to send these messages, though, which I find somewhat inelegant.
Include an API to serialize/deserialize the sensitive information in a form which the client can store in the keychain on its own. This puts the most burden on the client, and is even less elegant, but also provides the most flexibility in how the sensitive information is stored (keychain groups, etc). Of course it also allows client code to mishandle the sensitive information - but I'm not sure it's up to my library to enforce proper storage.
I implemented #1 first, switched to #2, and am currently mulling over switching to #3. I'm also sure there are other options I haven't considered. Is there an established best practice for this sort of thing (that I just haven't figured out the right search terms to find)?
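For concreteness, option #3's surface can be tiny. A sketch with hypothetical names (Session, authToken, and the method names are all invented here): the library vends an opaque blob and accepts it back, staying agnostic about where the client keeps it.

```swift
import Foundation

// Hypothetical library class; `Session` and `authToken` are invented for
// this sketch. Only the insensitive parts would go through NSSecureCoding;
// the sensitive part round-trips through an opaque blob the client stores
// however it likes (keychain, keychain access group, etc.).
final class Session {
    var username: String        // insensitive: safe to archive normally
    var authToken: String?      // sensitive: excluded from the archive

    init(username: String) {
        self.username = username
    }

    // Hand the sensitive state to the client as an opaque blob.
    func serializedSensitiveData() -> Data? {
        guard let token = authToken else { return nil }
        return try? JSONSerialization.data(withJSONObject: ["authToken": token])
    }

    // Accept the blob back after the client re-reads it from storage.
    func restoreSensitiveData(from blob: Data) {
        guard let object = try? JSONSerialization.jsonObject(with: blob),
              let dict = object as? [String: String] else { return }
        authToken = dict["authToken"]
    }
}
```

Objects stay valid with authToken missing, which matches the fallback described above where the client takes the user through authentication again.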
I thought this would be covered already, but my search returned nothing of relevance.
I am aware of NSUserDefaults, Core Data, object archiving, raw SQLite, plists, and of course storage on web servers. What is unclear and somewhat hazy to a beginner is when to employ each of these various tools.
The use case for web servers vs. Core Data is obvious, but what about NSUserDefaults vs. plists? Core Data vs. object archiving? A simple breakdown of use cases would really help me understand why there are so many options for storage in iOS.
I'll try to write a quick and simple list of common use cases, because as @rmaddy says, a full answer could fill a book chapter:
NSUserDefaults: stores simple user preferences, nothing too complex or sensitive. If your app has a settings page with a few switches, you could save that data here.
Keychain (see SSKeychain for a great wrapper): used to store sensitive data, like credentials.
Plists: used to store larger (but not huge) structured data: it is a really flexible format and can be used in a great number of scenarios. Some examples:
User-generated content storage: a simple list of Geopoints that will be shown on a map or in a list.
Providing simple initial data to your app: in this case the plist is included in the NSBundle, rather than being generated from user data.
Separating the data needed for a particular module of your application from other data. For example, the data needed to build a step-by-step startup tutorial, where each step is similar to the others but needs different data. Hard-coding this data would clutter your code, so you could be a better developer and read it from plists instead.
You are writing a library or framework that can be configured in some way by the developer who uses it.
Object archiving is useful for serializing more complex objects, perhaps full of binary data, that can't (or that you don't want to) be mapped onto simpler structures like plists.
Core Data is powerful, can be backed by different persistent stores (SQLite is just one of them; you can also choose XML files or even write your own format!), and gives you relationships between elements. It is complex and provides many features useful during development, like KVO and contexts. You should use it for large data sets of many correlated records, whether user-generated or provided by a server.
Raw SQLite is useful when you need really, really fast access to a relational data source (Core Data introduces some overhead), or when you need to share the same SQLite format across multiple platforms (you should never mess with Core Data's inner SQLite: it uses its own format, so you can't just "import" an existing SQLite database into Core Data). For example, on one project I worked on, a web service provided me with large SQLite databases instead of JSON or XML: some of them were imported into Core Data (an operation that can take a while, depending on the source size) because I needed all of its features, while others were read directly for really fast access.
Web server storage: well, this should be obvious: if you need to store data on a server, it is because the device shouldn't be the only owner of that data. But if you just need to synchronize the same app across different iOS devices (or even a Mac-ported version of the app), you could also look at iCloud storage, obviously.
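Of the cases above, the plist one is easy to make concrete. A minimal sketch using only Foundation's PropertyListSerialization; the plist XML is inline so the example is self-contained, whereas in an app it would be a .plist file in the bundle loaded via Bundle.main (the tutorial-steps structure is invented for the example):

```swift
import Foundation

// Sketch: structured data for a step-by-step startup tutorial kept in a
// plist, as in the "separate module data" example above.
let plistXML = """
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>steps</key>
    <array>
        <dict><key>title</key><string>Welcome</string></dict>
        <dict><key>title</key><string>Swipe to move</string></dict>
    </array>
</dict>
</plist>
"""

let data = Data(plistXML.utf8)
let root = try! PropertyListSerialization.propertyList(from: data,
                                                       options: [],
                                                       format: nil) as! [String: Any]
let steps = root["steps"] as! [[String: Any]]
print(steps.count)                  // 2
print(steps[0]["title"] as! String) // Welcome
```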
I'm developing an iOS application using Core Data. I want the persistent store located in a shared location, such as a network drive, so that multiple users can work on the data (at different times; concurrency is not part of the question).
But I also want to offer the ability to work on the data "offline", i.e. by keeping a local persistent store on the iPad. So far, I read that I could do this to some degree by using the persistent store coordinator's migration function, but this seems to imply the old store is then invalidated. Furthermore, I don't necessarily want to move the complete store "offline", but just a part of it: going with the simple "company department" example that Apple offers, I want users to be able to check out one department, along with all the employees associated with that department (and all the attributes associated with each employee). Then, the users can work on the department data locally on their iPad and, some time later, synchronize those changes back to the server's persistent store.
So, what I need is to copy a core data object from one store to another, along with all objects referenced through relationships. And this copy process needs to also ensure that if an object already exists in the target persistent store, that it's overwritten rather than a new object added to the store (I am already giving each object a UID for another reason, so I might be able to re-use the UID).
From all I've seen so far, it looks like there is no simple way to synchronize or copy Core Data persistent stores, is that a fair assessment?
So would I really need to write a piece of code that does the following:
retrieve object "A" through a MOC
retrieve all objects, across all entities, that have a relationship to object "A"
instantiate a new MOC for the target persistent store
for each object retrieved, check whether the object already exists in the target store
if the object exists, overwrite it with the attributes from the object retrieved in steps 1 & 2
if the object doesn't exist, create it and set all attributes as per object retrieved in steps 1 & 2
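Core Data specifics aside, the per-object part of those steps boils down to an upsert keyed on the UID. A rough sketch with plain dictionaries standing in for managed objects (all names invented), just to pin down the merge rule:

```swift
import Foundation

// Records stand in for managed objects; the outer dictionary key is the
// UID already assigned to each object. Attribute names are illustrative.
typealias Record = [String: String]

func merge(checkedOut: [String: Record], into target: inout [String: Record]) {
    for (uid, incoming) in checkedOut {
        if var existing = target[uid] {
            // Object exists in the target store: overwrite its attributes
            // with the values from the checked-out copy (steps 5).
            for (key, value) in incoming { existing[key] = value }
            target[uid] = existing
        } else {
            // Unknown UID: create the object in the target store (step 6).
            target[uid] = incoming
        }
    }
}

var server: [String: Record] = [
    "emp-1": ["name": "Alice", "dept": "Sales"]
]
let edited: [String: Record] = [
    "emp-1": ["name": "Alice B.", "dept": "Sales"],  // updated offline
    "emp-2": ["name": "Bob", "dept": "Sales"]        // created offline
]
merge(checkedOut: edited, into: &server)
print(server["emp-1"]!["name"]!) // Alice B.
print(server.count)              // 2
```

The hard parts, as the answers below note, are the relationship traversal and the conflict cases, which this sketch deliberately leaves out.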
While it's not the most complicated thing in the world to do, I would still have thought that this "online/offline editing" requirement is common enough for some standard functionality to be available for synchronizing parts of persistent stores?
Your points of view are greatly appreciated,
thanks,
da_h-man
I was just half-kidding with the comment above. You really are describing a pretty hard problem: it's very difficult to nail this sort of synchronization, and there's seldom, in any development environment, going to be a turn-key solution that will "just work". I think your pseudo-code description above is a pretty accurate account of what you'll need to do. Although some of the work of traversing the relationships and checking for existing objects can be generalized, you're talking about some potentially complicated exception-handling situations. For example, if you are updating an object and only 1 out of 5 related objects is somehow out of date, do you throw away the update or apply part of it? You say "concurrency" is not part of the question, but if multiple users can "check out" objects at the same time, then unless you plan to have a locking mechanism on those, you will start hitting conflicts when trying to make updates.
Something to check into are the new features in Core Data for leveraging iCloud - I doubt that's going to help with your problem, but it's generally related.
Since you want to be out on the network with your data, another thing to consider is whether Core Data is the right fit to your problem in general. Since Core Data is very much a technology designed to support the UI and MVC pattern in general, if your data needs are not especially bound to the UI, you might consider another type of DB solution.
If you are in fact leveraging Core Data in significant ways beyond just modeling, in terms of driving your UI, and you want to stick with it, I think you are correct in your analysis: you're going to have to roll your own solution. I think it will be a non-trivial thing to build and test.
An option to consider is CouchDB and an iOS implementation called TouchDB. It would mean adopting more of a document-oriented (JSON) approach to your problem, which may in fact be suitable, based on what you've described.
From what I've seen so far, I reckon the best approach is RestKit. It offers a Core Data wrapper that uses JSON to move data between remote and local stores. I haven't fully tried it yet, but from what the documentation says, it sounds quite powerful and ideally suited to my needs.
You definitely should check out these things:
Parse.com - a cloud-based data store
PFIncrementalStore (https://github.com/sbonami/PFIncrementalStore) - a subclass of NSIncrementalStore that allows your persistent store coordinator to store data both locally and remotely (on the Parse cloud) at the same time
All this stuff is well documented. Parse.com is also going to release an iOS local datastore SDK (http://blog.parse.com/2014/04/30/take-your-app-offline-with-parse-local-datastore/) which is going to help keep your data synced.
I've developed quite a few local apps, but this is the first time I'm introducing networking (more specifically, posting to and reading from a database). I receive a JSON object back from the database, but I am currently using arrays and dictionaries. The objects do have relationships to each other, and I was wondering whether Core Data is the way to go. If so, do I just replicate the part of the database I wish to be viewable in the app and store it in my Core Data model? Are there any tutorials out there for this?
Also, just as a side note, I've included Facebook integration, with which I download the user's list of friends. Would Core Data be good for storing this kind of information too? Or would I be better off sticking with dictionaries?
Thanks in advance.
Based on my experience (others might say different things), Core Data is the right choice, but adopting it depends on the time you can dedicate to it. At first it can be very complicated, but I think you can then apply that knowledge in other projects.
There are plenty of tutorials and books on Core Data out there.
First, I suggest reading core-data-on-ios-5-tutorial-getting-started; that site has, I think, other tutorials too. Then you could try a post on Core Data I wrote some time ago: Mapping Business Objects with Core Data in iOS. Apple's documentation is also your friend, so read the Introduction to the Core Data Programming Guide for the details of what is going on.
"If so, do I just replicate part of the database I wish to be made viewable in the app and store it in my CoreData model?"
Yes, just a part. You can create a minimal model that includes only the key parts you need on the device. What I want to highlight is that you don't need to worry about normalization concepts when you deal with Core Data. You could, but in Core Data you deal with objects, and it's more important to pay attention to memory consumption (the framework helps you) and performance.
"Would CoreData be good for storing this kind of information too? Or would I be better sticking with dictionaries?"
With Core Data you can take advantage of NSFetchedResultsController. An NSFetchedResultsController object is optimized to work with tables and is used to display data in batches. Through this component you can deal with a lot of elements (say, Facebook friends) without overloading memory. See core-data-tutorial-how-to-use-nsfetchedresultscontroller.
If you want to know something else, let me know.
Hope that helps.