What would be the right approach to crop several images at once in React Native? I'm working on an app where the user selects many photos, but before uploading them to the server, each photo should have the right aspect ratio - 1:1, 2:3, etc.
I'm using react-native-image-crop-picker to let the user select multiple photos at once. Initially I used the same library to let users crop one photo at a time, but it was a bad user experience, as people upload 10-20 photos at a time. So I thought of auto-cropping the images behind the scenes using ImageEditor.cropImage().
Now I get the cropped images, but the process almost freezes the app until all the images are cropped, leading to bad UX.
Any leads on how this should be tackled?
In my opinion, this seems like a very difficult requirement to cover with the react-native-image-crop-picker library alone. I would also rethink this inflexibility on the server side when it receives images. Fundamentally, I think demanding that the user upload multiple images with a restricted aspect ratio on a mobile device is just not good UX.
But if that's not possible, you could try solving this problem with the following options for better UX:
Option 1: After importing the images, show them in a grid view in your app and let the user crop each one before uploading (this way the user can do it manually without feeling too overwhelmed - a slightly better take on manual cropping).
Option 2: Run the automatic cropping sequentially (not concurrently) and show the user an ActivityIndicator while the app is busy processing and uploading the images (lock the app's navigation if you need to; users understand that uploading multiple images is a slower process).
Option 3: Move the automatic cropping to the server side if possible. This way the app is not overwhelmed with image processing, and the backend takes on the responsibility of handling every image it receives. Not sure if that could be implemented in your case, though.
A question about image recognition in AR SDKs: is it mandatory that the target images be bundled with the app itself, or can we keep a set of images in the app's storage and perform on-device image recognition with them (the images might change, or be downloaded when a button is tapped in the app)? Note: the use case is only image recognition, not the AR features.
You might have noticed that the class you use to load images from your app bundle and provide them to ARKit for detection is ARReferenceImage.
Scroll down the docs page for that class and you’ll find, in addition to a method for loading reference images, two initializers for Creating Reference Images at runtime:
The CGImage-based initializer is good for cases where you’re loading image content from elsewhere, like fetching from the user’s Photos library or downloading from a server.
The CVPixelBuffer-based initializer is good for cases where you have image content that’s already in GPU memory — say, if you wanted to extract a portion of ARKit’s capturedImage for use in image detection.
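For illustration, here's a minimal Swift sketch of the CGImage path, assuming the image data has already been downloaded and that you know the physical width (in meters) of the printed target; the function name and the "dynamic-target" identifier are made up for the example:

    import ARKit

    // Build a detection image at runtime from downloaded data. ARKit needs the
    // real-world width of the printed target (in meters) to estimate distance.
    func makeReferenceImage(from imageData: Data, physicalWidth: CGFloat) -> ARReferenceImage? {
        guard let uiImage = UIImage(data: imageData),
              let cgImage = uiImage.cgImage else { return nil }
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: physicalWidth)
        reference.name = "dynamic-target" // hypothetical identifier
        return reference
    }

    // Usage: hand the resulting images to the session configuration.
    // let configuration = ARWorldTrackingConfiguration()
    // configuration.detectionImages = Set([referenceImage])
    // sceneView.session.run(configuration)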
There's one caveat to all this, though. When you put images in your asset catalog at build time, Xcode preflights them to make sure that each individual image is good for detection and that the images in the set are distinct enough from each other to be recognized reliably.
If you’re providing images dynamically, you don’t get that preflighting step, which creates design/interaction issues you’ll need to solve yourself:
If you control the dynamic images (e.g. they’re all downloaded from your servers), you can do the preflighting “offline” using a dummy Xcode project.
If you allow users to provide or create any possible image, you’ll need to design your app around the possibility of a user choosing images that don’t detect well.
I'm working on an iPad-only iOS app that essentially downloads large, high quality images (JPEG) from Dropbox and shows the selected image in a UIScrollView and UIImageView, allowing the user to zoom and pan the image.
The app is mainly used for showing the images to potential clients who are interested in buying them as framed prints. The way it works is that the image is first shown, zoomed and panned to show the potential client if they like the image. If they do like it, they can decide if they want to crop a specific area (while keeping to specific aspect ratios/sizes) and the final image (cropped or not) is then sent as an email attachment to production.
The problem I've been facing for a while now is that even though the app will only be running on new iPads (i.e. more memory, etc.), I'm unable to find a method of handling the images so that the app doesn't get a memory warning and then crash.
Most of the images are sized 4256x2832, which puts decoded memory usage at roughly 48MB per image (width x height x 4 bytes per pixel). While I'm only displaying one image at a time, cropping (which is the main memory/crash problem at the moment) creates a new cropped image, which momentarily bumps the app's total RAM usage to about 120MB, causing a crash.
So in short: I'm looking for a way to manage very large images, have the ability to crop them and after cropping still have enough memory to send them as email attachments.
I've been thinking about implementing a singleton image manager, which all the views would use and it would only contain one big image at a time, but I'm not sure if that's the right way to go, or even if it'd help in any way.
One way to deal with this is to tile the image. You can save the large decompressed image to "disk" as a series of tiles, and as the user pans around pull out only the tiles you need to actually display. You only ever need 1 tile in memory at a time because you draw it to the screen, then throw it out and load the next tile. (You'll probably want to cache the visible tiles in memory, but that's an implementation detail. Even having the whole image as tiles may relieve memory pressure as you don't need one large contiguous block.) This is how applications like Photoshop deal with this situation.
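If it helps, here's a rough Swift sketch of that pre-tiling step; the 256 px tile size and the file-naming scheme are assumptions for illustration:

    import UIKit

    // Slice a large decoded image into fixed-size tiles and write each tile to
    // disk, so that later only the visible tiles ever need to be loaded.
    func writeTiles(for image: UIImage, tileSize: CGFloat = 256, to directory: URL) throws {
        guard let cgImage = image.cgImage else { return }
        let width = CGFloat(cgImage.width)
        let height = CGFloat(cgImage.height)
        let cols = Int(ceil(width / tileSize))
        let rows = Int(ceil(height / tileSize))
        for row in 0..<rows {
            for col in 0..<cols {
                // Clamp edge tiles to the image bounds.
                let rect = CGRect(x: CGFloat(col) * tileSize,
                                  y: CGFloat(row) * tileSize,
                                  width: tileSize,
                                  height: tileSize)
                    .intersection(CGRect(x: 0, y: 0, width: width, height: height))
                guard let tile = cgImage.cropping(to: rect) else { continue }
                let url = directory.appendingPathComponent("tile_\(col)_\(row).png")
                try UIImage(cgImage: tile).pngData()?.write(to: url)
            }
        }
    }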
I ended up sort of solving the problem. Since I couldn't resize the original files in Dropbox (the client has their reasons), I went ahead and used BOSImageResizeOperation, which is essentially just a fast, thread-safe library for resizing images.
Using this library, I noticed that images that previously took 40-60MB of memory each now only seemed to take roughly half that. Additionally, the resizing is so quick that the original image gets released from memory fast enough that iOS doesn't issue a memory warning.
With this, I've gotten further with the app, and I appreciate all the ideas, suggestions and comments. I'm hoping this will get the app done and I can get as far away from large image handling as possible, heh.
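For anyone who doesn't want to pull in a library, the same idea can be sketched with plain UIKit (this is not BOSImageResizeOperation's API, and the target size is an assumption):

    import UIKit

    // Decode the large JPEG once, redraw it at display size, and let the
    // full-size original be released right away.
    func resizedImage(at url: URL, targetSize: CGSize) -> UIImage? {
        guard let original = UIImage(contentsOfFile: url.path) else { return nil }
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = 1 // render at the exact pixel size, not the screen scale
        let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
        return renderer.image { _ in
            original.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }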
I'm currently working on a picture-sharing app on iOS, and my developer is struggling mightily with managing memory. I would really appreciate some help.
Take this "user feeds" module: my developer can't design a scroller that maintains a smooth scroll unless many of the thumbnails are preloaded before scrolling starts. This expectedly makes the initial loading experience much longer than desired. He used server-side compression, which takes iPhone images (originals were around 2MB) that were already compressed to 200KB on the iOS side and compresses them further, down to around 20KB. The end result is a highly blurry, low-quality thumbnail, especially at the size shown in the video.
https://dl.dropboxusercontent.com/u/76154448/Scrolling%20Down%20Only%20Works%20With%20Highly%20Compressed%20Thumbnails%20and%20Needs%20Pre-loading%20to%20Ensure%20Smooth%20Scrolling.mp4
He originally just used a cropped version of the underlying image as the "thumbnail," but with each picture being 200KB, 10 "thumbnails" loaded already means 2MB of memory used. Another 2MB is used by thumbnails of user avatars, since those were not yet compressed by the server. We designed the feed, like many other picture apps, so that more images load as the user scrolls down.
My questions are these:
What is a good technique for server-side compression of thumbnails without quality loss? How does an app like Streamzoo do this?
https://dl.dropboxusercontent.com/u/76154448/Smooth%20Scrolling%20with%20Streamzoo.mp4
What is a good technique for managing the increase in live bytes? How do picture apps like Pic Collage manage to show up to 200 thumbnails while seemingly keeping every image cached without crashing?
Any responses are greatly appreciated!
He's probably creating all the UIImageViews as soon as the server responds. He could use UICollectionView to lazy-load views, so that only a few of them are in memory at the same time.
I wrote an article about performance tips and tricks, and this one is covered there.
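A minimal Swift sketch of that approach - cells are created only for the visible items and reused while scrolling, so only a handful of thumbnails live in memory at once (FeedCell, thumbnailURLs, and the loading logic are hypothetical names for the example):

    import UIKit

    // A cell that fetches its thumbnail asynchronously.
    final class FeedCell: UICollectionViewCell {
        let imageView = UIImageView()

        func loadThumbnail(from url: URL) {
            imageView.image = nil // placeholder while loading
            URLSession.shared.dataTask(with: url) { data, _, _ in
                guard let data = data, let image = UIImage(data: data) else { return }
                DispatchQueue.main.async { self.imageView.image = image }
            }.resume()
            // A real implementation would cancel or ignore stale requests
            // when the cell is reused for another index path.
        }
    }

    final class FeedViewController: UICollectionViewController {
        var thumbnailURLs: [URL] = []

        override func viewDidLoad() {
            super.viewDidLoad()
            collectionView.register(FeedCell.self, forCellWithReuseIdentifier: "FeedCell")
        }

        override func collectionView(_ collectionView: UICollectionView,
                                     numberOfItemsInSection section: Int) -> Int {
            return thumbnailURLs.count
        }

        override func collectionView(_ collectionView: UICollectionView,
                                     cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
            let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FeedCell",
                                                          for: indexPath) as! FeedCell
            cell.loadThumbnail(from: thumbnailURLs[indexPath.item])
            return cell
        }
    }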
I am creating a kind of 'map' in my app. This is basically just viewing an image with an imageView/scrollView. However, the image is huge - something like 20,000x15,000 px. How can I tile this image so that it fits? When the app tiles it by itself, it uses way too much memory, so I want the tiling to be done before the app is launched, and to include only the tiles, not the original image. Can Photoshop do this?
I have not done a complete search for this yet, as I am away and typing on an iPhone with a limited network connection.
Apple has a sample project called PhotoScroller. It supports panning and zooming of large images; however, it does this by pre-tiling them - if you look in the project, you will see hundreds of tiles for various zoom levels. The project does NOT come with any kind of tiling utility, though.
So what some people have done is create algorithms or code that anyone can use to create these tiles. I support an open-source project, PhotoScrollerNetwork, that allows people to download huge JPEGs from the network, tile them, and then display them as PhotoScroller does, and while doing research for it I found several people who had posted tiling software.
I googled "PhotoScroller tiling utility" and got lots of hits, including one here on SO.
CATiledLayer is one way to do it, and of course the best if you can pre-tile the images, either downloading the tiles from the internet (pay attention to how many connections you open) or embedding them (increasing the overall app size). The other option is to memory-map the image on the file system (though an image at that resolution could take about 1GB). Also take a look at this question - it could be an interesting topic: SO question about low memory scenario.
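To illustrate the CATiledLayer route, here's a minimal Swift sketch of a tiled view, assuming the image was pre-tiled into 256 px PNGs named "map_tile_<col>_<row>" (both assumptions for the example):

    import UIKit

    final class TiledMapView: UIView {
        // Back the view with a CATiledLayer so UIKit requests tiles lazily
        // as the user scrolls and zooms, instead of decoding the whole image.
        override class var layerClass: AnyClass { CATiledLayer.self }

        let tileSize: CGFloat = 256

        override init(frame: CGRect) {
            super.init(frame: frame)
            let tiledLayer = layer as! CATiledLayer
            tiledLayer.tileSize = CGSize(width: tileSize * contentScaleFactor,
                                         height: tileSize * contentScaleFactor)
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // With a CATiledLayer, draw(_:) is called once per visible tile rect.
        override func draw(_ rect: CGRect) {
            let col = Int(rect.minX / tileSize)
            let row = Int(rect.minY / tileSize)
            // Hypothetical naming scheme produced by an offline pre-tiling step.
            UIImage(named: "map_tile_\(col)_\(row)")?.draw(in: rect)
        }
    }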
When developing a mobile app that lets the user take photos, which will later be shown in full size but are also viewed in table views (mid-size) and even in the Google Maps pin title view, should I create a thumbnail for every image the user takes for the smaller views, or should I just use the regular image?
I am asking because, from the tutorials I've seen, and as a web developer, all I could figure out is that when using a web service to get a group of small images, you usually fetch the thumbnails first and only get the full-size image when needed.
But this is an 'embedded' app (I know it is not really embedded, but I don't have a better way to describe it) where all the data sits on the device, so there are no upload performance issues - just memory and processor-time issues (loading the big HD photos that today's cameras take is very heavy, I think).
Anyway, what is the best practice for this?
Thank you,
Erez
It's all about memory usage balanced with performance. If you don't create thumbnails for each photo, there are only so many photos you can hold in memory before you receive memory warnings or have your app terminated by the system (maybe only 6-8 full-size UIImages). To avoid that, you might write the photos out to the file system and keep a reference to their location. But then your table view scrolling will suffer as it attempts to read photos from the file system for display.
So the solution is to create thumbnails for each photo so that you can store a lot of them in memory without any trouble. Your table view will perform well because the thumbnails are quickly accessible from memory. You'll also want to write the full-size photos to the file system (and keep a reference to their location) to avoid having to hold them in memory. When it's time to display the full-size image, retrieve it from the file system and load it into memory; when it's no longer needed, release it.
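A minimal Swift sketch of the thumbnail step using ImageIO, which builds the small image straight from the file on disk without decoding the full-size photo into memory (the 200 px maximum is an assumption; tune it to your cell size):

    import UIKit
    import ImageIO

    func thumbnail(for fileURL: URL, maxPixelSize: Int = 200) -> UIImage? {
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true, // respect EXIF orientation
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ]
        guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
              let cgThumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
        else { return nil }
        return UIImage(cgImage: cgThumbnail)
    }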
I'm assuming that you're on iOS 4 and saving the photos to the asset library; in that case, there is already a method for you.
http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAsset_Class/Reference/Reference.html
You're looking for the "thumbnail" method.
So saving the large image and computing the thumbnail when required is, I believe, the way to go.