Since the adjustment data used when saving a PHAsset can be completely user-defined, are there any memory size limitations for it?
For instance, can I store something like masks or layers (so basically multiple bitmaps) in it?
The documentation for PHAdjustmentData.init notes (emphasis mine):
Because Photos limits the size of adjustment data, you should keep your edit information short and descriptive. Don’t use image data to describe an edit—instead, save only the minimal information that is needed to recreate the edit.
Your app must provide a non-empty NSData object for the data parameter. If you cannot provide relevant data to describe an edit, you may pass data that encodes an NSUUID object.
In other words, adjustment data isn't for Photoshop-style files that encode an edit in terms of the actual new pixels. Remember, the main idea for adjustment data is that you can use it to revert, reconstruct, and alter whatever work was performed in the last user edit.
Adjustment data is most commonly used for saying things like "blur filter # 20px + darken by 20% + crop to (100,100,300,400)". For more complicated edits, you'll have to get more creative — for example, for an effect that the user paints on with a brush, you can probably record the brush strokes (plus brush radius and any other parameters) in a lot less data than you'd use to store bitmaps.
And failing all that, if you have edits that can only be described using data too large to fit into adjustment data, notice that tip Apple left about using a UUID — you can store your data externally to Photos, and use adjustment data to store a key that lets you look up an edit in your external storage. (Of course, you'll then have to make sure that your external storage is in all the places that iCloud Photo Library syncs to, and have a way to fall back gracefully if you can't access it, etc...)
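To make the two approaches concrete, here is a minimal sketch. The `EditParameters` struct and the `com.example` identifier strings are illustrative assumptions, not part of any Apple API; only the `PHAdjustmentData` initializer is real:

```swift
import Photos

// Hypothetical edit description: record parameters, not pixels.
struct EditParameters: Codable {
    var blurRadius: Double
    var darkenAmount: Double
    var cropRect: [Double]   // x, y, width, height
}

func makeAdjustmentData(for edit: EditParameters) throws -> PHAdjustmentData {
    let payload = try JSONEncoder().encode(edit)
    return PHAdjustmentData(formatIdentifier: "com.example.myeditor",
                            formatVersion: "1.0",
                            data: payload)
}

// Fallback: if the real edit data is too large, store it externally and
// keep only a lookup key in the adjustment data.
func makeAdjustmentDataKey() -> PHAdjustmentData {
    let key = UUID().uuidString.data(using: .utf8)!
    return PHAdjustmentData(formatIdentifier: "com.example.myeditor.externalkey",
                            formatVersion: "1.0",
                            data: key)
}
```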
Oh, and as for what the upper limit is... it's possible that Apple doesn't publish the limit because it might be subject to change across devices, iCloud account status, OS versions, or some other factor. So even if you've found a limit experimentally, it might not always hold true.
I did a few tests and it seems 2 MB is the answer. It would be good to have an official statement, though...

We're looking to share AR experiences (ARWorldMap) over the web (not necessarily to devices nearby, I'm referring to data that can be stored to some server and then retrieved by another user).
Right now we're looking into ARWorldMap, which is pretty awesome, but I think it only works on the same device AND with devices nearby. We want to remove those constraints (and therefore save the experience over the web on some server) so that everyone else can automatically see 3D things with their devices (not necessarily at the same time) exactly where they were placed.
Do you know if it's possible to send the archived data (ARWorldMap) to some server in some kind of format so that another user can later retrieve that data and load the AR experience on their device?
The ARWorldMap contains the feature points in the environment around the user: for example, the exact position and size of a table, along with all the other points found by the camera. You can visualize those with debugOptions.
It does not make sense to share those to a user that is not in the same room.
What you want to do is share the interactions between the users, eg when an object was placed or rotated.
This is something you would need to implement yourself anyway since ARWorldMap only contains anchors and features.
I can recommend the Inside SwiftShot session from last year's WWDC as a starting point.
Yes, technically it's possible, according to the docs. You will have to use the NSSecureCoding protocol, and you might have to do extra work if you aren't using a Swift backend. It's still feasible, because ARKit anchors are defined by a mix of point maps, rotational data, and image algorithms. Just implement portions of Codable to JSON and voilà, it's viable for most backends. Of course, this is easier said than done.
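A minimal sketch of the archiving side, assuming you already have a running ARSession; the upload itself is left out. These are the real ARKit and Foundation APIs for serializing an ARWorldMap to plain Data, which you can then POST to any server:

```swift
import ARKit

// Archive the current world map into Data that can be stored anywhere.
func exportWorldMap(from session: ARSession,
                    completion: @escaping (Data?) -> Void) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { completion(nil); return }
        // ARWorldMap conforms to NSSecureCoding, so secure archiving works.
        let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                     requiringSecureCoding: true)
        completion(data)
    }
}

// Later, on another device, decode the downloaded bytes back into a map.
func importWorldMap(from data: Data) -> ARWorldMap? {
    return try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                   from: data)
}
```

The resulting `ARWorldMap` can be passed as `initialWorldMap` in an `ARWorldTrackingConfiguration` to relocalize on the receiving device.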
First time asking a question on here, so please go easy if I don't provide enough info. Basically part of my iOS app allows users to take a picture which will be stored in a Core Data store. The attribute is a Transformable type, and I have created an NSManagedObject subclass which I simply use to set its image attribute to the new image provided by the user.
I know storing large files in Core Data is a bad idea, which is why I was excited when I saw the "Store in External Record File" option under the image attribute in the Core Data entity. However, my app performance says otherwise, taking several seconds on an iPhone 5 to load only a few images (which I know doesn't sound like much time, but considering how powerful the iPhone 5 is, older devices would likely take much longer with the same data).
I've looked around, and some people say that the Store in External Record File option is only applicable to the OS X environment, even though it is available in an iOS app. However, I also saw this under Apple's "What's New in iOS 5" doc (it's the next to last item under Core Data, near the end):
Managed objects support two significant new features: ordered relationships, and external storage for attribute values. If you specify that the value of a managed object attribute may be stored as an external record, Core Data heuristically decides on a per-value basis whether it should save the data directly in the database or store a URL to a separate file that it manages for you.
So my question is, who's right? Is it true that Apple made a mistake in giving this option for iOS apps, and that it actually does nothing unless you're on the Mac, or does it actually do something and I'm not configuring it the right way, or is it doing what it's supposed to do and the performance is bad anyway?
I've seen some guides explaining how to store large files (like images) as files, and save the URL to them in the Core Data store instead, but since this is essentially what this new option is doing, or maybe should be doing, I'm not sure if following these guides would even help.
I'm really sorry if this has been asked before. Normally I'd be fine with figuring this out on my own, but Core Data is totally new to me, and I'm still not sure how I managed to squeak by the initial setup. Thank you for any help you can offer!
Who's right?
The iOS docset for the NSAttributeDescription class does mention allowsExternalBinaryDataStorage and the setAllowsExternalBinaryDataStorage: method, so there is little chance that this is a mistake from Apple.
Are you doing something wrong, or is it slow anyway?
You said that
The attribute is a Transformable type
But Core Data has a Binary Data type. Maybe only that one is linked to the external storage capability.
If that's not it, we don't have enough info here:
How many pictures do you store?
What are their sizes?
Do you automatically fetch all the images?
Also, the Apple doc states that:
Core Data heuristically decides on a per-value basis…
Did you use a migration, or are you starting from scratch?
You could have a look in your app's sandbox to see if your pictures are really saved outside of CoreData.
Hope this helps.
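For reference, here is the programmatic equivalent of checking "Allows External Storage" on a Binary Data attribute in the model editor; the entity and attribute names are illustrative, and in practice you would normally just tick the box in Xcode instead:

```swift
import CoreData

// A Binary Data attribute with external storage enabled.
let imageAttribute = NSAttributeDescription()
imageAttribute.name = "imageData"
imageAttribute.attributeType = .binaryDataAttributeType
imageAttribute.allowsExternalBinaryDataStorage = true

// Attach it to an entity in a programmatically built model.
let photoEntity = NSEntityDescription()
photoEntity.name = "Photo"
photoEntity.properties = [imageAttribute]
```

Note that the option only applies to Binary Data attributes; a Transformable attribute goes through your value transformer and is stored as a blob in the database.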
Good question!
Check this post:
Storing blobs in external location using built-in CoreData option
Apparently it should work. You should also try it in the simulator and inspect the application data folder to see if the folders are created as described (~/Library/Application Support/iPhone Simulator/... - you will figure out the rest of the path). Also you could inspect the sqlite file with the sqlite3 command to see if the binary data is in the database.
I haven't personally used this option as I would prefer to go for manually saving the images in a folder and store a reference to them in the database instead. This way it will be easier to create UIImage object from the file to be displayed, would have better control on what goes where and so on and so forth. Will take some extra labour though!
Hope that helps you out.
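A sketch of that manual approach, under the assumption that you store the generated file name in a string attribute of your managed object; the file naming scheme and JPEG quality here are arbitrary choices:

```swift
import UIKit

// Save the image as a file in Documents; keep only its name in Core Data.
func saveImageToDocuments(_ image: UIImage) throws -> String {
    let name = UUID().uuidString + ".jpg"
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    guard let jpeg = image.jpegData(compressionQuality: 0.8) else {
        throw NSError(domain: "ImageSave", code: 1)
    }
    try jpeg.write(to: docs.appendingPathComponent(name), options: .atomic)
    return name   // store this string in the managed object
}

// Recreate the UIImage on demand from the stored file name.
func loadImageFromDocuments(named name: String) -> UIImage? {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    return UIImage(contentsOfFile: docs.appendingPathComponent(name).path)
}
```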
I am trying to download real-time trading data from Bloomberg using the api.
So far I can get bid/ask/last prices successfully, but on some exchanges (like Canada) quote sizes are in lots.
I can of course query the lot sizes with the reference data API and store them for every security in a database or something like that, but converting the size for every quote tick is a very "expensive" operation, since ticks arrive every second or even more often.
So is there any other way to achieve this?
Why do you need to multiply each value by lot size? As long as the lot size is constant, each quote is comparable and any computation can be implemented using the exchange values. Any results can be scaled in the presentation layer if necessary.
I'm developing an app which needs to show some logos. These logos are just 8kb PNG files, and I'm just going to handle a little amount of them (10-20 at most). However, these are downloaded from the Internet because they might change. So, what I'm trying to achieve is, making the app to download them (done), storing them into the file system, and only downloading again whenever they change (might be months).
Everyone seems to use Core Data, which in my opinion is something designed for bigger and more complex things, because my files will always have the same name plus don't have relations between them.
Is the file system the way to go? Any good tutorial?
Yes, the file system is probably your best option for this. You say that you've already implemented the downloading. How have you done so? With NSURLConnection? If so, then at some point, you have an NSData object. This has a couple of write... methods you can use to save the data to a file on the filesystem. Be sure to save the files in the right place, as your app is sandboxed and you can't write anywhere you like.
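A minimal sketch of that write step, assuming you already have the downloaded bytes as a Data object; deriving the file name from the URL's last path component is just one convenient choice:

```swift
import Foundation

// Write downloaded logo data into the app's Caches directory.
func cacheLogo(_ data: Data, from url: URL) throws -> URL {
    let caches = FileManager.default.urls(for: .cachesDirectory,
                                          in: .userDomainMask)[0]
    let destination = caches.appendingPathComponent(url.lastPathComponent)
    try data.write(to: destination, options: .atomic)
    return destination
}
```

Caches is a reasonable location for re-downloadable content; use Documents instead if the files must never be purged by the system.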
The advantage Core Data brings is efficiency. Using NSFetchedResultsController to display your logos in a tableview gets you optimized object loading and memory management. It will automatically load only the items which can be displayed on one screen, and as the user flicks through the table it will handle releasing items which move offscreen. Implementing that on your own is not a simple task.
If you want to build and display your data without Core Data, you'll probably want to use NSKeyedArchiver, which will allow you to easily archive an array or dictionary of objects (including nested arrays, dictionaries, and images).
By best I mean most efficient.
So don't go on about subjectiveness.
I have a list of websites and I want to store the list on the iphone locally, there must be an URL, title and a small image (like 32x32 max image size). I don't think I should be using CoreData for this. Should I be using a plist?
EDIT:
Efficient's definition I thought was obvious: take up the least amount of room, use the lowest memory/CPU.
Sorry, I forgot to say: about 10-15 items max. They just get loaded into a table view when the app first loads or when that view is brought back by a nav controller.
If you can, leave the images in the resources, and put the URL, title, and image name in a plist. Alternatively, you could just create a "Site" class with the three properties and generate an array of Sites in code (or an array of dictionaries).
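The plist variant can be sketched like this; the keys and file name are illustrative, and the APIs are plain Foundation:

```swift
import Foundation

// The site list as an array of dictionaries.
let sites: [[String: String]] = [
    ["title": "Example", "url": "https://example.com", "image": "example32.png"]
]

let docs = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
let plistURL = docs.appendingPathComponent("Sites.plist")

// Write it out as an XML property list.
let data = try PropertyListSerialization.data(fromPropertyList: sites,
                                              format: .xml,
                                              options: 0)
try data.write(to: plistURL, options: .atomic)

// Read it back when populating the table view.
let loaded = try PropertyListSerialization.propertyList(
    from: Data(contentsOf: plistURL),
    options: [],
    format: nil) as? [[String: String]]
```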
You say not to "go on about subjectiveness" but you don't provide your definition of efficient for this.
You don't specify how many websites you want to store or how you want to use them or what is important to you - storage size, i/o perf, ability to query in specific ways etc.
It doesn't sound like a plist would be a bad fit, but my earlier point is just that the way you are going to read and write the data is generally equally or more important in setting the context for questions like this.