We have a heavy CoreML model (~170 MB) that we want to include in our iOS app.
Since we don't want the app to be that large, we created a smaller model (with lower performance) that we can include directly. Our intention is to download the heavy model on app start and switch to it once the download completes.
Our initial thought was to use Apple's CoreML Model Deployment solution, but it quickly turned out to be impossible for us, as Apple requires MLModel archives to be no larger than 50 MB.
So the question is: is there an alternative way to load a CoreML model from a remote source, similar to Apple's solution, and how would one implement it?
Any help would be appreciated. Thanks!
Put the .mlmodel file on a server you own, download it into the app's Documents folder using your favorite method, create a URL to the downloaded file, use MLModel.compileModel(at:) to compile it, then initialize the MLModel (or the automatically generated class) using the compiled model.
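For illustration, here is a minimal sketch of that flow using URLSession; the server URL and the "HeavyModel" file name are placeholders:

```swift
import CoreML
import Foundation

// A minimal sketch of the flow above. The server URL and file names are
// placeholders; error handling is reduced to the essentials.
func downloadAndLoadModel(from remoteURL: URL,
                          completion: @escaping (Result<MLModel, Error>) -> Void) {
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        guard let tempURL = tempURL else { return }
        do {
            // Move the download into Documents so it outlives this session.
            let documents = FileManager.default.urls(for: .documentDirectory,
                                                     in: .userDomainMask)[0]
            let modelURL = documents.appendingPathComponent("HeavyModel.mlmodel")
            try? FileManager.default.removeItem(at: modelURL)
            try FileManager.default.moveItem(at: tempURL, to: modelURL)

            // Compile the .mlmodel into the .mlmodelc that the runtime loads.
            // compileModel(at:) writes to a temporary location, so move the
            // result somewhere permanent if you want to skip recompiling later.
            let compiledURL = try MLModel.compileModel(at: modelURL)
            let model = try MLModel(contentsOf: compiledURL)
            completion(.success(model))
        } catch {
            completion(.failure(error))
        }
    }
    task.resume()
}
```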
I am building a flash card iOS app, using SwiftUI, for reviewing the Japanese I am learning.
The problem is how to store and update my images (>500 images). Please help me; any suggestion is appreciated. Thanks for reading my post.
I think you're asking about how to manage 500+ images in an Xcode project. You could just add all the images to your project and load them as you would any image. You could use asset catalogs, which have the advantage that they let you store different versions of a resource for use on different devices, and only the ones needed for the device the app runs on will actually be installed on the device. See How Many Images Can/Should you Store in xcassets in Xcode? and Asset Catalog Format Reference for more information about asset catalogs. But any way you slice it, managing 500+ images is going to be cumbersome. There's probably a better way...
Managing all those images in your app isn't just a problem for you as the developer; building them into the app will also create problems for the user. Even if each image is relatively small, having hundreds of them in the app will probably make the app huge. That means it'll take a long time to install, and the app will use a lot of storage on the device. Every time you release a new version of the app, with more words, or even just to fix a few small bugs, the user will have to download all that data all over again.
Instead, you should consider building an app that can fetch the data it needs from a server. Ideally, you could apply that approach to all your app's data, not just the images. Maybe you'll organize your flash cards into sets of a few dozen, so that you can fetch a set of cards and the associated images pretty quickly, and sets that the user hasn't used for a while can be removed to free up space on the device. You'll be able to update a set of flash cards without having to update the app, and when you do update the app your users won't have to download all the data all over again.
You've said that you're a beginner, so this approach might seem very difficult. That's OK, you can start with a simpler approach and then improve as you go along. For example, you might just put all the images on a server and fetch them one at a time as you need them. Your flash card data file could contain just a dictionary with words and the URLs associated with those words. There are lots of examples of loading an image from a URL here on SO and elsewhere, so they won't be hard to find; a minimal sketch is below. The earlier you start thinking about how to design your app so that it can scale as you add more and more words, the easier it will be to maintain the app later.
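For instance, a minimal sketch in SwiftUI, assuming an iOS 15+ target so that AsyncImage is available (the FlashCard type and its fields are illustrative, not part of your app):

```swift
import SwiftUI

// A minimal sketch, assuming iOS 15+ so AsyncImage is available.
// The FlashCard type and its fields are illustrative.
struct FlashCard: Identifiable {
    let id: Int
    let word: String
    let imageURL: URL
}

struct CardView: View {
    let card: FlashCard

    var body: some View {
        VStack {
            AsyncImage(url: card.imageURL) { image in
                image.resizable().scaledToFit()
            } placeholder: {
                ProgressView()   // shown while the image downloads
            }
            Text(card.word)
        }
    }
}
```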
500 images can add up to a huge size. Apps published on the App Store have size limits, and Apple does not recommend making apps that big.
Store them on a server and load the images you need on the fly. You will also gain the ability to update your images, remove old ones, and add new ones.
If you don't have a backend, you can use something easy and free (Firebase Storage, for example) or something on AWS with minimal code.
If you need to keep them on the device, store them as files in Documents or another app folder; do not use Core Data for the image data itself (keep only the list of names/URLs in the database).
After loading the image currently displayed to the user, you can prefetch the next batch of images.
Use Alamofire or SDWebImage to load images from the network (I prefer the latter). These frameworks can do many useful things with images.
To load images:
you can have a list of your images (just a list of the names and URLs),
or
you can know only the path and name pattern and generate the links dynamically (like https://myserver/imageXXX.png).
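A hypothetical sketch of the second option; the host name and the zero-padded naming scheme are assumptions:

```swift
import Foundation

// Generate image URLs from a known name pattern instead of keeping a list.
// "myserver" and the zero-padded scheme are illustrative assumptions.
func imageURL(forCard n: Int) -> URL? {
    let name = String(format: "image%03d.png", n)   // e.g. image042.png
    return URL(string: "https://myserver/\(name)")
}
```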
I'm working on an iOS app that will need to save data to files. I chose to go for a document-based app, specifically an app based on a UIDocumentBrowserViewController, so that I can easily save and load files from the system's Files app.
Since the data I need to save/load in a file is quite complex (a big hierarchy of different objects, with metadata, image files, etc.), I'm wondering what the best technology to use going forward is.
I came across NSFileWrapper and its ability to save different files as one. And I could definitely use that. But I also saw UIManagedDocument and the ability to use Core Data in my project, while maybe saving the content of the Core Data database (I know it's not quite a database, but you know what I mean) into a file that I could write somewhere in the Files app.
Is this a behavior I can expect?
To reformulate: I'm wondering if I can read/write files through a UIDocumentBrowserViewController, with data described by a UIManagedDocument that works with Core Data.
Thank you in advance. 🙂
As you have discovered, UIManagedDocument is there for your kind of application. And it does feature methods to write and read additional content, like the metadata and image files you have, within the document package.
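For illustration, a minimal sketch of what those overrides might look like, assuming an "Images" directory inside the document package and a filename-to-data map (both are my own illustrative choices, not a prescribed layout):

```swift
import UIKit
import CoreData

// A minimal sketch of the additional-content overrides. The "Images"
// directory name and the filename-to-data map are illustrative.
class PackageDocument: UIManagedDocument {

    var imagesToSave: [String: Data] = [:]   // filename -> image data

    // Called on the main queue; return a snapshot of the extra content.
    override func additionalContent(for absoluteURL: URL) throws -> Any {
        return imagesToSave
    }

    // Called on a background queue; write the extra files into the package.
    override func writeAdditionalContent(_ content: Any,
                                         to absoluteURL: URL,
                                         originalContentsURL: URL?) throws {
        guard let images = content as? [String: Data] else { return }
        let imagesDir = absoluteURL.appendingPathComponent("Images")
        try FileManager.default.createDirectory(at: imagesDir,
                                                withIntermediateDirectories: true)
        for (name, data) in images {
            try data.write(to: imagesDir.appendingPathComponent(name))
        }
    }

    // Counterpart for reading the extra files back in.
    override func readAdditionalContent(from absoluteURL: URL) throws {
        // Enumerate absoluteURL.appendingPathComponent("Images") here.
    }
}
```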
That being said, I have never used UIManagedDocument, and have never seen it used by others. A quick search of GitHub finds only this one project with two contributors who wrote a wrapper around it in 2013. Also, there does not seem to be any sample code from Apple, and the remark in the writeAdditionalContent(_:to:originalContentsURL:) documentation that Additional content is not supported on iCloud leaves me a little concerned, but maybe it's a good sign that the Core Data team knows where to draw the line.
I have used the macOS counterpart of UIManagedDocument, NSPersistentDocument. It is in a similar situation of not being used very much, but with many more known technical issues. So a few years ago I switched to BSManagedDocument, which purportedly mimics UIManagedDocument to support Core Data in all its modern glory. I have been happy with BSManagedDocument.
In summary, if I was in your situation, yes I would give UIManagedDocument a try. But don't be surprised if you need to use a DTS support incident or two during your development.
I am working on a project which involves adding AI object detection capabilities to an existing iOS app. I was able to train my own DNN models and convert them to CoreML's .mlmodel format.
Now I need to transfer my work, which includes the .mlmodel files, to another developer for integration. However, I don't want them to use my trained .mlmodel files outside of this project (according to the contract). Is there any way to "hide" the .mlmodel files so they can only be used for this particular app and can't simply be copied and saved for other uses?
I have done some quick research on iOS libraries and frameworks, but I am still not sure if that's the solution I am looking for.
Nope. Once someone has access to your mlmodel file or the compiled version, mlmodelc, they can use it elsewhere.
For example, you can download an app from the App Store, look inside the IPA file, copy their mlmodelc folder into your own app, and start using the model right away.
To prevent outsiders from stealing your model, you can encrypt the model (just like you'd encrypt any other file), but that only works if you can hide the decryption key. You can also add a custom layer to the model, so that it becomes useless without the code for that custom layer.
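A hypothetical sketch of the encryption idea using CryptoKit's AES-GCM (the scheme is my assumption, and key management, which is the hard part, is glossed over entirely):

```swift
import CoreML
import CryptoKit

// A hypothetical sketch of the "encrypt the model" idea, using AES-GCM via
// CryptoKit (iOS 13+). How you hide and deliver the key is the hard part
// and is deliberately glossed over here.
func loadEncryptedModel(at encryptedURL: URL, key: SymmetricKey) throws -> MLModel {
    // Decrypt the blob back into the raw .mlmodel bytes.
    let sealed = try AES.GCM.SealedBox(combined: Data(contentsOf: encryptedURL))
    let modelData = try AES.GCM.open(sealed, using: key)

    // Core ML compiles from a file URL, so write the plaintext out briefly.
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("model.mlmodel")
    try modelData.write(to: tempURL)
    defer { try? FileManager.default.removeItem(at: tempURL) }

    let compiledURL = try MLModel.compileModel(at: tempURL)
    return try MLModel(contentsOf: compiledURL)
}
```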
However, those solutions don't work if you're hiring an external developer to work on your app because they will -- out of necessity -- need to have access to these decryption keys and source code files.
I'm not sure what exactly you want this other developer to do, but if you don't trust them, then:
get a new developer that you do trust,
be prepared to enforce the contract, or
give them a version of your mlmodel file with the weights replaced by random numbers. The model will still run but will give nonsense predictions. Once that developer is done with their work, replace the model with the real one. Obviously, this is not a good solution if their work requires real predictions from the model.
We've recently converted our code to use UIDocument instead of manipulating files on the file system directly, and we've encountered some performance issues as a result. We are wondering whether we are using this class incorrectly, if anyone else had these issues, and what are the common ways to address them.
Our app
We have a "shoebox app" that manages a bunch of documents, each consisting of multiple image files that can be quite heavy, a small metadata file, and a small preview image. The user may have many documents on her device (1000+ documents). Each document's files are grouped in a directory, and we use NSFileWrapper to read and write them.
When our app starts, it needs the metadata of all the documents in order to show a document index, and a subset of the preview images. More preview images are loaded as the user scrolls.
In order to get that information, we open all the documents, read their metadata and preview image if we need to, close them, and then reopen them on demand.
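Roughly, the pattern looks like this sketch, where ShoeboxDocument and its metadata property are hypothetical stand-ins for our real classes:

```swift
import UIKit

// Hypothetical UIDocument subclass standing in for our real document class.
class ShoeboxDocument: UIDocument {
    var metadata: Data?
    override func load(fromContents contents: Any, ofType typeName: String?) throws {
        // The real app unpacks an NSFileWrapper here; elided for brevity.
    }
}

var index: [Data] = []

// Open every document, grab its metadata, then close it again.
func buildIndex(from documentURLs: [URL]) {
    for url in documentURLs {
        let document = ShoeboxDocument(fileURL: url)
        document.open { success in
            guard success, let metadata = document.metadata else { return }
            index.append(metadata)                  // read what we need
            document.close(completionHandler: nil)  // then close again
        }
    }
}
```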
Problem #1: Slow loading time
It takes a lot of time to open all the documents and read their metadata. I think there are several factors contributing to this problem:
- Each document open action is relatively slow
- The open blocks and the completion blocks are executed on the same queue, which gives the operation very bad latency (my document is open, but its completion block has to wait for X other open blocks before it can run)
We thought about solving this problem using a separate index file, but this approach has the drawback that we would need to manage the metadata in two places and keep it synced with the file system in case iCloud changes the files.
Problem #2: Threading
Each open document creates its own "File Access Thread". When we open many documents concurrently, the overhead crushes the app.
We solved this issue by gating the open operations with a semaphore. This approach has the drawback that it slows loading down even more.
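Something along these lines, as a minimal sketch (the limit of four concurrent opens is an arbitrary choice):

```swift
import UIKit

// A minimal sketch of throttling UIDocument opens with a semaphore.
// The limit of four concurrent opens is arbitrary.
let openSlots = DispatchSemaphore(value: 4)
let throttleQueue = DispatchQueue(label: "document.open.throttle")

func openThrottled(_ document: UIDocument,
                   completion: @escaping (Bool) -> Void) {
    throttleQueue.async {
        openSlots.wait()                      // block until a slot frees up
        DispatchQueue.main.async {
            document.open { success in        // UIDocument expects main-queue calls
                openSlots.signal()            // release the slot
                completion(success)
            }
        }
    }
}
```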
Questions
Is there some fundamental problem with the way we are using UIDocument? If not:
Has anyone encountered a similar loading time problem? What is the common way to solve it? Do other apps keep an index file?
Is there a way to make UIDocument use a thread pool? If not, how do you control resource usage?
Thanks!
Ok, no good news here.
We tried consulting friends in the industry, profiling UIDocument, and using modified implementations of it that alter various aspects of its operation to see if we could improve its performance, but to no avail.
My conclusion is that UIDocument is not suitable for this kind of usage - it is just not designed to support the latency and throughput requirements we have for open operations. UIDocument should only be used when you want to open a small number of files at any given moment (much like word processors and drawing apps).
Admittedly, this is exactly what Apple's documentation says, but I guess we had to learn just how serious they were the hard way :)
We ended up using some "tricks" to improve the user experience, and will move away from UIDocument as soon as we can.
So my recommendation is to use UIDocument only if:
Your app resembles a document-based app in nature, meaning you will not have more than a few documents open at any given moment
You do not need the information inside the documents in order to "discover" them and show them to the user, or can afford the overhead of managing a separate index file
You really need the auto saving/undo/synchronization/iCloud abilities of this class
Otherwise, consider other solutions.
As a side note, which is not directly related to the question but I will add here as a public service: UIDocument's async model required some major changes to our app architecture when we moved away from direct file access. If you plan on making this move, carefully evaluate the work you will need to do.
Good luck future programmers.
The document classes have methods to read asynchronously. Have you tried those?
This really sounds like something more suited to Core Data or SQLite, for the metadata. At the very least, cache the metadata in Core Data (a single store for the entire app, not one per document).
First time asking a question on here, so please go easy if I don't provide enough info. Basically part of my iOS app allows users to take a picture which will be stored in a Core Data store. The attribute is a Transformable type, and I have created an NSManagedObject subclass which I simply use to set its image attribute to the new image provided by the user.
I know storing large files in Core Data is a bad idea, which is why I was excited when I saw the "Store in External Record File" option under the image attribute in the Core Data entity. However, my app performance says otherwise, taking several seconds on an iPhone 5 to load only a few images (which I know doesn't sound like much time, but considering how powerful the iPhone 5 is, older devices would likely take much longer with the same data).
I've looked around, and some people say that the Store in External Record File option is only applicable to the OS X environment, even though it is available in an iOS app. However, I also saw this under Apple's "What's New in iOS 5" doc (it's the next to last item under Core Data, near the end):
Managed objects support two significant new features: ordered relationships, and external storage for attribute values. If you specify that the value of a managed object attribute may be stored as an external record, Core Data heuristically decides on a per-value basis whether it should save the data directly in the database or store a URL to a separate file that it manages for you.
So my question is, who's right? Is it true that Apple made a mistake in giving this option for iOS apps, and that it actually does nothing unless you're on the Mac, or does it actually do something and I'm not configuring it the right way, or is it doing what it's supposed to do and the performance is bad anyway?
I've seen some guides explaining how to store large files (like images) as files, and save the URL to them in the Core Data store instead, but since this is essentially what this new option is doing, or maybe should be doing, I'm not sure if following these guides would even help.
I'm really sorry if this has been asked before. Normally I'd be fine with figuring this out on my own, but Core Data is totally new to me, and I'm still not sure how I managed to squeak by the initial setup. Thank you for any help you can offer!
Who's right?
The iOS docset for the NSAttributeDescription class does mention the allowsExternalBinaryDataStorage and setAllowsExternalBinaryDataStorage: methods, so there is little chance that there is a mistake from Apple.
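For reference, a short sketch of the flag those methods expose; setting it in code is equivalent to ticking "Allows External Storage" on the attribute in the model editor (the attribute name here is illustrative):

```swift
import CoreData

// Illustrative only: enabling external storage on a binary attribute in
// code, the same thing the model editor checkbox configures.
let imageAttribute = NSAttributeDescription()
imageAttribute.name = "imageData"
imageAttribute.attributeType = .binaryDataAttributeType
imageAttribute.allowsExternalBinaryDataStorage = true
```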
Are you doing something wrong, or is it slow anyway?
You said that
The attribute is a Transformable type
But Core Data has a Binary Data type. Maybe only that one is linked to the external storage capability.
If that's not it, we don't have enough info here:
How many pictures do you store?
What are their sizes?
Do you automatically fetch all the images?
Also, the Apple doc states that:
Core Data heuristically decides on a per-value basis…
Did you use a migration, or are you starting from scratch?
You could have a look in your app's sandbox to see if your pictures are really saved outside of CoreData.
Hope this helps.
Good question!
Check this post:
Storing blobs in external location using built-in CoreData option
Apparently it should work. You should also try it in the simulator and inspect the application data folder to see if the folders are created as described (~/Library/Application Support/iPhone Simulator/... - you will figure out the rest of the path). Also you could inspect the sqlite file with the sqlite3 command to see if the binary data is in the database.
I haven't personally used this option, as I would prefer to manually save the images in a folder and store a reference to them in the database instead. That way it is easier to create a UIImage from the file for display, you have better control over what goes where, and so on. It will take some extra labour, though!
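A hypothetical sketch of that approach, where the "Picture" entity and its "fileName" attribute are illustrative names:

```swift
import UIKit
import CoreData

// Write the image to Documents and keep only its filename in Core Data.
// The "Picture" entity and "fileName" attribute are illustrative names.
func savePicture(_ image: UIImage, in context: NSManagedObjectContext) throws {
    guard let data = image.jpegData(compressionQuality: 0.8) else { return }

    let fileName = UUID().uuidString + ".jpg"
    let documents = FileManager.default.urls(for: .documentDirectory,
                                             in: .userDomainMask)[0]
    try data.write(to: documents.appendingPathComponent(fileName))

    // Store only the reference, not the blob.
    let picture = NSEntityDescription.insertNewObject(forEntityName: "Picture",
                                                      into: context)
    picture.setValue(fileName, forKey: "fileName")
    try context.save()
}
```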
Hope that helps you out.