Is on-device image recognition possible with dynamic images? - augmented-reality

One question about image recognition in AR SDKs: is it mandatory that the target images be bundled into the app itself, or can we keep a set of images in app storage and perform on-device image recognition against them (the images might change, or be downloaded each time a button is tapped in the app)? Note: the use case is only image recognition, not the AR feature.

You might have noticed that the class you use to load images from your app bundle and provide them to ARKit for detection is ARReferenceImage.
Scroll down the docs page for that class and you’ll find, in addition to a method for loading reference images, two initializers for Creating Reference Images at runtime:
The CGImage-based initializer is good for cases where you’re loading image content from elsewhere, like fetching from the user’s Photos library or downloading from a server.
The CVPixelBuffer-based initializer is good for cases where you have image content that’s already in GPU memory — say, if you wanted to extract a portion of ARKit’s capturedImage for use in image detection.
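For instance, here is a minimal sketch of building a detection set at runtime from downloaded images (the function names and inputs are hypothetical; physicalWidth is the real-world width of the printed image in meters, which ARKit requires):

```swift
import ARKit
import UIKit

// Build ARReferenceImages at runtime from images fetched elsewhere
// (the Photos library, a server download, etc.).
func makeReferenceImages(from downloadedImages: [UIImage],
                         physicalWidth: CGFloat) -> Set<ARReferenceImage> {
    var references = Set<ARReferenceImage>()
    for (index, image) in downloadedImages.enumerated() {
        guard let cgImage = image.cgImage else { continue }
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: physicalWidth)
        reference.name = "dynamic-\(index)"   // hypothetical naming scheme
        references.insert(reference)
    }
    return references
}

// Hand the set to a session; detection then behaves the same as with
// images loaded from the asset catalog.
func runDetection(in session: ARSession, with images: Set<ARReferenceImage>) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = images
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```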
There’s one caveat to all this, though. When you put images in your asset catalog at build time, Xcode preflights them to make sure both that each individual image is good for detection and that the images in the set are distinct enough from each other to be recognized reliably.
If you’re providing images dynamically, you don’t get that preflighting step, which creates design/interaction issues you’ll need to solve yourself:
If you control the dynamic images (e.g. they’re all downloaded from your servers), you can do the preflighting “offline” using a dummy Xcode project.
If you allow users to provide or create any possible image, you’ll need to design your app around the possibility of a user choosing images that don’t detect well.
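One partial mitigation, assuming you can target iOS 13 or later: ARKit added ARReferenceImage.validate(completionHandler:), which checks a single image's suitability at runtime. It flags low-detail targets, but it does not check distinctness across your whole set, so it covers only part of what Xcode's preflight does. A sketch:

```swift
import ARKit

// Runtime check of an individual image's detection quality (iOS 13+).
// Note: this validates one image in isolation; it cannot tell you whether
// two images in your set are too similar to each other.
func validateBeforeUse(_ reference: ARReferenceImage,
                       onUsable: @escaping (ARReferenceImage) -> Void) {
    reference.validate { error in
        if let error = error {
            print("Poor detection target \(reference.name ?? "?"): \(error)")
        } else {
            onUsable(reference)
        }
    }
}
```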

Related

Cropping multiple images in react native

What would be the right approach to crop several images at once in React Native? I'm working on an app where the user selects many photos, but before uploading them to the server, each photo should have the right aspect ratio - 1:1, 2:3, etc.
I'm using react-native-image-crop-picker to allow the user to select multiple photos at once. Initially I used the same library to let users crop one photo at a time, but it was a bad user experience since people upload 10-20 photos at a time. So I thought of auto-cropping the images behind the scenes using ImageEditor.cropImage().
Now I get the cropped images, but the process almost freezes the app until cropping finishes, leading to bad UX.
Any leads on how this should be tackled?
In my opinion this seems like a very difficult requirement to cover using the react-native-image-crop-picker library alone. I would also rethink this inflexibility on the server side when it receives images. Basically, I think it's just not good UX to demand that the user upload multiple images with a restricted aspect ratio on a mobile device.
But if that's not possible, you could try solving this problem with one of the following options for better UX:
Option 1: Once the images are imported, show them in a grid view in your app and let the user crop each one before uploading (the user still crops manually, but without feeling too overwhelmed - a slightly better take on manual cropping).
Option 2: Run the automatic cropping sequentially (not all at once asynchronously), and show the user an ActivityIndicator while the app is busy processing and uploading the images (lock the app's navigation if you need to; users understand that uploading multiple images is a slower process).
Option 3: Move the automatic cropping to the server side if possible. This way the app is not overwhelmed by image processing, and the backend takes responsibility for normalizing every image it receives. Not sure if that could be implemented in your setup, though.

Storing large pictures in iOS swift

I am planning to ship an app with at least 20 pictures that can be as big as 10 MB each. They are pictures that the user is likely to zoom in on quite a bit, so keeping the resolution high is a requirement. We are still trying to make them smaller, but even so, it's unlikely that they will end up under 7 MB each.
The images can be shipped with the app, and additional pictures can also be downloaded. The requirement is for the pictures to be available offline once the user downloads them, as the app is to be used in remote areas by researchers.
What is the best mechanism to store them, and how should I do it in Swift 3 on iOS?
Thanks for your help in advance.
You can store your pictures in the app's Documents directory and keep their paths in an SQLite DB.
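A minimal sketch of that approach (the function name is hypothetical; the returned URL is what you would persist in the DB):

```swift
import Foundation

// Write image data into the app's Documents directory and return the
// file URL to be stored (in SQLite, a plist, Core Data, etc.).
func savePicture(_ data: Data, named name: String) throws -> URL {
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let fileURL = documents.appendingPathComponent(name)
    try data.write(to: fileURL, options: .atomic)
    return fileURL
}
```

One caveat: store relative paths or file names rather than absolute paths, since the app's sandbox container path can change between installs.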
There are no hard rules, but there is a set of best practices.
To store them, I suggest saving them directly as bundle resources rather than in an Xcode asset catalog. This is because asset catalog images can only be loaded with UIImage's imageNamed: method, which has the side effect of caching images.
Then you can create a plist file with an array of image names and fetch your info from there. If you need something more complicated there is Core Data, but I can't see an application for it given your spec.
What is the size of an uncompressed image in memory? Approximately:
pixel_height × pixel_width × n_channels bytes (assuming 8 bits per channel)
If your images are about 10 MB as JPEGs, they are compressed, which means that once decoded they will take a huge amount of memory, and memory on these devices is a precious, scarce resource.
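To make that concrete, here is the formula above applied to hypothetical but typical numbers, a 12-megapixel photo decoded as 4-channel RGBA:

```swift
// Uncompressed size of a 4032 x 3024 photo at 4 bytes per pixel (RGBA).
let width = 4032, height = 3024, bytesPerPixel = 4
let uncompressedBytes = width * height * bytesPerPixel
print(uncompressedBytes)                      // 48771072 bytes
print(Double(uncompressedBytes) / 1_048_576)  // ~46.5 MB for one photo
```

So a single decoded photo can easily occupy tens of megabytes, regardless of how small the JPEG file is on disk.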
If your app exceeds the memory limit, then after a series of callbacks such as didReceiveMemoryWarning, if you don't free enough memory, your app will be killed.
Always test this on a device and not in the simulator, because the simulator uses your Mac's resources.
Now, how do you handle big images?
You can use CATiledLayer; you can find a lot of tutorials online. CATiledLayer, as the name suggests, creates a layer made of tiles. Each tile corresponds to a piece of your image, and the layer draws a tile only when it is visible (see the sketch below).
Unfortunately it draws asynchronously, which means your tiles may not all appear at exactly the same time. There are strategies to avoid that issue; one is explained in an Apple sample code project.
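A minimal sketch of a CATiledLayer-backed view, assuming tiles were pre-cut into bundle resources named "tile_{column}_{row}.png" (a hypothetical naming scheme) and ignoring screen scale and zoom levels for simplicity:

```swift
import UIKit

final class TiledImageView: UIView {
    // Back this view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        (layer as! CATiledLayer).tileSize = CGSize(width: 256, height: 256)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    // Called once per visible tile, possibly on background threads,
    // so only on-screen tiles are ever decoded.
    override func draw(_ rect: CGRect) {
        let tileSize: CGFloat = 256
        let column = Int(rect.origin.x / tileSize)
        let row = Int(rect.origin.y / tileSize)
        guard let tile = UIImage(named: "tile_\(column)_\(row)") else { return }
        tile.draw(in: rect)
    }
}
```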
Of course there is still a problem to be solved: how do you cut a large image into tiles?
You can do it programmatically (a sketch follows) or provide the tiles pre-cut as resources of your application.
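A hedged sketch of the programmatic route, assuming the source image is small enough to open once wherever you run the cutter (a one-off helper app, or a first-launch step), using the same hypothetical tile naming as above:

```swift
import UIKit

// Slice a CGImage into fixed-size tiles and write each one out as a PNG.
func writeTiles(from image: CGImage, to directory: URL, tileSize: Int = 256) throws {
    let columns = (image.width + tileSize - 1) / tileSize
    let rows = (image.height + tileSize - 1) / tileSize
    for row in 0..<rows {
        for column in 0..<columns {
            // Clamp edge tiles so they don't run past the image bounds.
            let rect = CGRect(x: column * tileSize,
                              y: row * tileSize,
                              width: min(tileSize, image.width - column * tileSize),
                              height: min(tileSize, image.height - row * tileSize))
            guard let tile = image.cropping(to: rect),
                  let data = UIImage(cgImage: tile).pngData() else { continue }
            let name = "tile_\(column)_\(row).png"
            try data.write(to: directory.appendingPathComponent(name))
        }
    }
}
```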

Handle large images in iOS

I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo at the same size as the screen for editing, and then, when the editing is finished, apply the same edits that were performed on the thumbnail to the large photo.
When I use UIGraphicsBeginImageContext to create the thumbnail image, it causes a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way to downsample the large image to less than 2048×2048 without memory issues.
I found that Android has a BitmapFactory class with an inSampleSize option that can downsample a photo. How can this be done on iOS?
You need to load the image with UIImage, which doesn't actually decode the image data into memory right away, and then create a bitmap context at the size of the result you want (that context determines the amount of memory used). Then iterate a number of times, pulling tiles from the original image with CGImageCreateWithImageInRect (this is where parts of the image data get loaded into memory) and drawing them into the destination context with CGContextDrawImage.
See this sample code from Apple.
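In the same spirit as that sample, here is a condensed sketch of the strip-by-strip downsample (assuming targetSize preserves the source aspect ratio; the strip height of 512 is an arbitrary trade-off between memory use and number of passes):

```swift
import UIKit

// Downsample a large CGImage by drawing it into a small bitmap context in
// horizontal strips, so only one strip of source pixels is decoded at a time.
func downsample(_ source: CGImage, to targetSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(targetSize, true, 1)
    defer { UIGraphicsEndImageContext() }

    let sourceWidth = CGFloat(source.width)
    let sourceHeight = CGFloat(source.height)
    let scale = targetSize.height / sourceHeight
    let stripHeight: CGFloat = 512   // source rows decoded per pass

    var y: CGFloat = 0
    while y < sourceHeight {
        let sourceRect = CGRect(x: 0, y: y,
                                width: sourceWidth,
                                height: min(stripHeight, sourceHeight - y))
        // cropping(to:) is the Swift name for CGImageCreateWithImageInRect.
        guard let strip = source.cropping(to: sourceRect) else { break }
        let destRect = CGRect(x: 0, y: y * scale,
                              width: targetSize.width,
                              height: sourceRect.height * scale)
        UIImage(cgImage: strip).draw(in: destRect)  // handles coordinate flip
        y += stripHeight
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}
```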
Large images don't fit in memory, so loading them into memory just to resize them doesn't work.
To work with very large images you have to tile them. There are lots of solutions out there already; for example, see if this one can solve your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we already had an image tiler running server side and I could just request specific tiles of large images (I made a server; it's really cool).
The reason tiling works is that you only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible are paged out to the disk cache (or flash-memory cache, as it were).
Take a look at this work by Trevor Harmon. It improved my app's performance; I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize

Create tiled images for CATiledLayer

I am creating a kind of 'map' in my app. It is basically just an image displayed in an imageView/scrollView. However, the image is huge - something like 20,000×15,000 px. How can I tile this image so that it fits? When the app tiles it by itself, it uses way too much memory, so I want the tiling to be done before the app is launched, and to ship only the tiles, not the original image. Can Photoshop do this?
I have not done a complete search for this yet, as I am away and typing on an iPhone with a limited network connection.
Apple has a project called PhotoScroller. It supports panning and zooming of large images. However, it does this by pre-tiling the images - if you look in the project you will see hundreds of tiles for various zoom sizes. The project does NOT, however, come with any kind of tiling utility.
So what some people have done is create algorithms or code that anyone can use to create these tiles. I support an open source project, PhotoScrollerNetwork, that allows people to download huge JPEGs from the network, tile them, then display them as PhotoScroller does, and while doing research for it I found several people who had posted tiling software.
I googled "PhotoScroller tiling utility" and got lots of hits, including one here on SO
CATiledLayer is one way to do it, and certainly the best if you can pre-tile the images, either downloading them from the internet (pay attention to how many connections you open) or embedding them in the app (increasing overall app size). The other option is to memory-map the image on the file system (but an image at that resolution could take about 1 GB). Take a look at this question; it could be an interesting topic: SO question about low memory scenario

What is the best practice for letting users take photos and using them also in a table view?

When developing a mobile app that lets the user take photos (which will later be shown at full size) that are also viewed in table views (mid size) and even in the Google Maps pin title view, should I create thumbnails for every image the user takes for the smaller uses, or should I just use the regular image?
I am asking because, from the tutorials I saw, and as a web developer, all I could figure out is that when using a web service to get groups of small images, you usually fetch the thumbnails first and only get the full-size image when needed.
But this is an 'embedded' app (I know it is not really embedded, but I don't have a better way to describe it) where all the data sits on the device, so there are no upload performance issues, just memory and processor-time issues (loading the big HD photos that today's cameras take is very heavy, I think).
Anyway, what is the best practice for this?
Thank you,
Erez
It's all about memory usage balanced against performance. If you don't create thumbnails for each photo, there are only so many photos you can hold in memory before you receive memory warnings or have your app terminated by the system (maybe only 6-8 full-size UIImages). To avoid that, you might write the photos out to the file system and keep a reference to their location, but then your table view's scrolling will suffer as it reads photos back from the file system for display.
So the solution is to create a thumbnail for each photo so that you can keep a lot of them in memory without any trouble. Your table view will perform well because the thumbnails are quickly accessible from memory. You'll also want to write the full-size photos to the file system (keeping a reference to their location) to avoid holding them in memory. When it's time to display the full-size image, retrieve it from the file system and load it into memory; when it's no longer needed, release it.
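A minimal sketch of the thumbnail step, using ImageIO so the full-size photo is never fully decoded just to produce the small version (maxDimension is a hypothetical pixel budget for a table-view cell):

```swift
import UIKit
import ImageIO

// Build a small thumbnail straight from the photo file on disk.
func makeThumbnail(at fileURL: URL, maxDimension: CGFloat = 200) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,  // respect EXIF rotation
        kCGImageSourceThumbnailMaxPixelSize: maxDimension
    ]
    guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                            options as CFDictionary)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Generate the thumbnail once when the photo is taken, cache it alongside the full-size file, and hand only thumbnails to the table view.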
I'm assuming that you're on iOS 4 and saving the photos in the asset library; in that case there is already a method for you.
http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAsset_Class/Reference/Reference.html
You're looking for the "thumbnail" method.
So: save the large image and compute the thumbnail when required. That, I believe, is the way to go.
