How should I implement continuous scrolling through pages in Swift? (iOS)

Could you please give me some advice? I've been struggling with this for days.
My goal is to implement continuous scrolling to show the pages of a document. Each page is controlled by its own view controller, and the user should be able to zoom in and out.
Should I build it from scratch with a UIScrollView or a UICollectionView? Which is better and more memory efficient?
Or is there an off-the-shelf solution for this? (I've searched GitHub without success. UIPageViewController is definitely not a solution because it doesn't allow continuous scrolling and only shows whole pages.)
Thank you very much
[Image: example of continuous scrolling]

A collection view works just fine for continuous scrolling, and makes efficient use of memory (cells are recycled). If the total contents of your scrolling area are too much to fit into memory, you will want to load each page's contents into memory as it scrolls into view and release it when it scrolls off. (Perhaps have your model store a file URL for each page's contents, and load those contents into the cell on demand.)
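A minimal sketch of that setup, assuming a hypothetical PageCell and a pageURLs array of per-page file URLs (zooming is a separate concern, discussed below):

```swift
import UIKit

// Hypothetical cell that holds one page's contents only while it is on screen.
final class PageCell: UICollectionViewCell {
    static let reuseIdentifier = "PageCell"
    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        imageView.frame = contentView.bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        imageView.contentMode = .scaleAspectFit
        contentView.addSubview(imageView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func prepareForReuse() {
        super.prepareForReuse()
        imageView.image = nil   // release the page bitmap when the cell is recycled
    }
}

final class PagesViewController: UICollectionViewController {
    // The model stores only lightweight file URLs, never all decoded pages at once.
    var pageURLs: [URL] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.register(PageCell.self, forCellWithReuseIdentifier: PageCell.reuseIdentifier)
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        pageURLs.count
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: PageCell.reuseIdentifier,
                                                      for: indexPath) as! PageCell
        // Decode this page only now, as it scrolls into view.
        cell.imageView.image = UIImage(contentsOfFile: pageURLs[indexPath.item].path)
        return cell
    }
}
```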
As for zooming, the best way to do that depends on what you mean. If the content is vector-based, like a PDF, you could simply re-render it at the new scale as the user zooms. If the content is very high-resolution bitmap imagery, you might need to create a mipmapped, tiled rendering framework, or use somebody else's. I've written my own mipmapped tiled rendering framework before. It's doable, but a lot of work.
(You take the original huge image and break it into smaller square tiles. You then render the original image into tiles at 50% scale and save those, and then 25%, and then 12.5%, etc, until you get to a size where a single image fills the screen.)
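A rough sketch of that pre-processing step, with placeholder names (buildTilePyramid, tile_level_col_row.png); note that it decodes the full-size image once, so it is something to run at import time rather than while scrolling:

```swift
import UIKit

// Render the source image at 100%, 50%, 25%, ... and cut each level into square tiles.
func buildTilePyramid(from source: UIImage, tileSide: CGFloat = 256, into directory: URL) throws {
    let screen = UIScreen.main.bounds.size
    var scale: CGFloat = 1.0
    var level = 0

    while true {
        let levelSize = CGSize(width: max(1, source.size.width * scale),
                               height: max(1, source.size.height * scale))

        // Render this pyramid level once (format.scale = 1 keeps points equal to pixels below).
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        let levelImage = UIGraphicsImageRenderer(size: levelSize, format: format).image { _ in
            source.draw(in: CGRect(origin: .zero, size: levelSize))
        }
        guard let levelCG = levelImage.cgImage else { return }

        // Slice the level into tiles and write each one out under a placeholder name.
        var row = 0
        var y: CGFloat = 0
        while y < levelSize.height {
            var col = 0
            var x: CGFloat = 0
            while x < levelSize.width {
                let tileRect = CGRect(x: x, y: y,
                                      width: min(tileSide, levelSize.width - x),
                                      height: min(tileSide, levelSize.height - y))
                if let tileCG = levelCG.cropping(to: tileRect) {
                    let url = directory.appendingPathComponent("tile_\(level)_\(col)_\(row).png")
                    try UIImage(cgImage: tileCG).pngData()?.write(to: url)
                }
                x += tileSide
                col += 1
            }
            y += tileSide
            row += 1
        }

        // Stop once a whole level fits on screen; otherwise halve the scale and repeat.
        if levelSize.width <= screen.width && levelSize.height <= screen.height { return }
        scale /= 2
        level += 1
    }
}
```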

Related

Best practices to load very large images in a scroll view for zooming?

I'm loading a very large image (8000x8000) into a UIImageView that's inside a UIScrollView. This works, but it consumes a lot of memory and takes a couple of seconds to load.
I've looked into a better approach and all the examples I have found are based on a 2010 WWDC example called PhotoScroller. This example uses a custom UIView with CATiledLayer to break the large image into smaller tiles and draw them to the screen.
https://github.com/master-nevi/WWDC-2010/tree/master/PhotoScroller
Is there a more modern way to do this in iOS 13 or is this still the best approach?
It is still the best approach. 8000x8000 is huge. You cannot load the whole image at once. CATiledLayer means you only need a few tiles loaded at a time — just the portion of the image that the user is looking at at the moment.
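The core of that approach is small. A minimal single-zoom-level sketch, assuming the image has already been cut into 256-point tiles on disk with a hypothetical tile_col_row.png naming scheme (PhotoScroller's TilingView does the same thing, plus multiple levels of detail):

```swift
import UIKit

// Minimal tiling view in the spirit of PhotoScroller. CATiledLayer only asks for the
// tiles that are currently visible, so only those bitmaps are ever decoded.
final class TilingImageView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    var tileDirectory: URL?                 // where the pre-cut tiles live (hypothetical layout)
    private let tileSide: CGFloat = 256     // tile edge length in points

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiledLayer = layer as! CATiledLayer
        // tileSize is in pixels; multiplying by the screen scale makes draw(_:)
        // arrive in 256-point chunks that line up with the tile files.
        let screenScale = UIScreen.main.scale
        tiledLayer.tileSize = CGSize(width: tileSide * screenScale, height: tileSide * screenScale)
        // For crisp zooming you would also raise levelsOfDetailBias and pick the matching
        // pyramid level from the context's transform, as PhotoScroller does.
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Called once per visible tile, on a background thread.
    override func draw(_ rect: CGRect) {
        guard let directory = tileDirectory else { return }
        let col = Int(rect.minX / tileSide)
        let row = Int(rect.minY / tileSide)
        let url = directory.appendingPathComponent("tile_\(col)_\(row).png")
        UIImage(contentsOfFile: url.path)?.draw(in: rect)
    }
}
```

Embedded in a UIScrollView and returned from viewForZooming(in:), a view like this pans and zooms over an image far larger than could be decoded in one piece.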

Large (UI)Image Memory Management

I'm working on an iPad-only iOS app that essentially downloads large, high quality images (JPEG) from Dropbox and shows the selected image in a UIScrollView and UIImageView, allowing the user to zoom and pan the image.
The app is mainly used for showing the images to potential clients who are interested in buying them as framed prints. The way it works is that the image is first shown, zoomed, and panned so the potential client can decide whether they like it. If they do, they can choose whether to crop a specific area (while keeping to specific aspect ratios/sizes), and the final image (cropped or not) is then sent as an email attachment to production.
The problem I've been facing for a while now is that even though the app will only be running on new iPads (i.e., more memory, etc.), I'm unable to find a way of handling the images so that the app doesn't get a memory warning and then crash.
Most of the images are sized 4256x2832, which brings the memory usage to at least 40MB per image. While I'm only displaying one image at a time, image cropping (which is the main memory/crash problem at the moment) creates a new cropped image, which in turn momentarily bumps the app's total RAM usage to about 120MB, causing a crash.
So in short: I'm looking for a way to manage very large images, have the ability to crop them and after cropping still have enough memory to send them as email attachments.
I've been thinking about implementing a singleton image manager, which all the views would use and it would only contain one big image at a time, but I'm not sure if that's the right way to go, or even if it'd help in any way.
One way to deal with this is to tile the image. You can save the large decompressed image to "disk" as a series of tiles, and as the user pans around pull out only the tiles you need to actually display. You only ever need 1 tile in memory at a time because you draw it to the screen, then throw it out and load the next tile. (You'll probably want to cache the visible tiles in memory, but that's an implementation detail. Even having the whole image as tiles may relieve memory pressure as you don't need one large contiguous block.) This is how applications like Photoshop deal with this situation.
I ended up sort of solving the problem. Since I couldn't resize the original files in Dropbox (the client has their reasons), I went ahead and used BOSImageResizeOperation, which is essentially a fast, thread-safe library for resizing images.
Using this library, I noticed that images that previously took 40-60MB of memory per image now only seemed to take roughly half that. Additionally, the resizing is so quick that the original image gets released from memory fast enough that iOS doesn't issue a memory warning.
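(For comparison, the same downsizing can be done without a third-party dependency using ImageIO's thumbnail API; a rough sketch, where the function name and the 2048-pixel cap are illustrative choices, not something the library above provides:)

```swift
import UIKit
import ImageIO

// Downsample an image on disk without ever decoding the full-size bitmap: ImageIO builds
// a thumbnail no larger than maxPixelSize on its longest side.
func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }

    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,   // respect EXIF orientation
        kCGImageSourceShouldCacheImmediately: true,          // decode now, on this thread
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}

// A 4256x2832 image decoded at full size is roughly 48MB of RGBA pixels; capped at
// 2048px on the long side it is closer to 11MB.
// let preview = downsampledImage(at: dropboxFileURL, maxPixelSize: 2048)
```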
With this, I've gotten further with the app, and I appreciate all the ideas, suggestions, and comments. I'm hoping this will get the app done and I can get as far away from large image handling as possible, heh.

More memory efficient way of displaying images in iOS?

I have a photo collage app that takes a bunch of photos from a user's Facebook/Instagram/library and draws a bunch of them onto a UIView. To control the amount, there is a slider tool that calculates density (for example, every 150px of width, add an image). I'm running into memory issues after lots and lots of UIImageViews have been added.
Is there some better, more efficient way of showing images onto the screen?
Why not use a UICollectionView? It's designed to present a group of objects (including images) in a manner similar to UITableView. Like a table view, memory is automatically managed by the controller.
See the documentation for more information.
You didn't make it clear in your question exactly how you're using this set of images. An alternative is to draw the images into a graphics context using Core Graphics. This draws the images into a single view instead of several views, so there's only one bitmap to keep in memory. See Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation for an example.
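A short sketch of that single-bitmap approach, with illustrative names (renderCollage, slotWidth) rather than anything from an existing API:

```swift
import UIKit

// Composite every photo into one image with UIGraphicsImageRenderer instead of
// keeping one UIImageView alive per photo.
func renderCollage(photos: [UIImage], canvasSize: CGSize, slotWidth: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    return renderer.image { _ in
        var origin = CGPoint.zero
        for photo in photos {
            // Square slots, matching the "every N points of width, add an image" density idea.
            let slot = CGRect(origin: origin, size: CGSize(width: slotWidth, height: slotWidth))
            photo.draw(in: slot, blendMode: .normal, alpha: 1.0)  // blend mode can vary per image
            origin.x += slotWidth
            if origin.x + slotWidth > canvasSize.width {          // wrap to the next row
                origin.x = 0
                origin.y += slotWidth
            }
        }
    }
}

// The result backs a single UIImageView, so memory stays at one bitmap no matter
// how many photos were drawn.
```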

Rendering of a vast grid in a Scrollview container without CATiledLayer?

I'm looking for advice on ways to have a rather large scroll view (let's say 8192x8192) which is essentially a grid, with roughly 5-100 button subviews placed in it.
The brute-force approach runs out of memory, as CALayer seems to allocate a bitmap the size of the scroll view's content (the memory issue is especially prominent when zooming is used).
I next added a CATiledLayer. That fixed the memory issue, but there is a blurry effect on the grid as tiles are generated asynchronously, and it's still not ideal in that it uses a lot of memory for what is essentially a trivial "draw some lines" task.
It seems like it would be perfect if I could somehow draw my own grid via OpenGL each frame and tell UIKit not to create a bitmap buffer for the scroll view, but I'm not sure whether this is feasible or even the right approach.
On Android I just took control of the entire drawing/zooming/panning myself, but that seems like overkill on iOS, which appears to offer most of this already.
You should check out the WWDC 2009 video session 102: "Mastering iPhone Scroll Views" along with the ScrollViewSuite sample project from Apple. They explain how to do a tiled scroll view with different zoom levels, which sounds like it is what you need.
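A stripped-down sketch of the scroll view plumbing those sessions walk through, where TiledGridView stands in for a CATiledLayer-backed grid view and the zoom limits are arbitrary:

```swift
import UIKit

// Placeholder for the CATiledLayer-backed content view; the tiled layer means only the
// visible grid tiles get bitmap storage, not the whole 8192x8192 canvas.
final class TiledGridView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }
    override func draw(_ rect: CGRect) {
        // Draw only the grid lines that intersect `rect`; tiles are requested lazily.
    }
}

final class GridScrollViewController: UIViewController, UIScrollViewDelegate {
    private let scrollView = UIScrollView()
    private let gridView = TiledGridView(frame: CGRect(x: 0, y: 0, width: 8192, height: 8192))

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        scrollView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        scrollView.delegate = self
        scrollView.contentSize = gridView.bounds.size
        scrollView.minimumZoomScale = 0.05      // zoomed all the way out
        scrollView.maximumZoomScale = 2.0
        scrollView.addSubview(gridView)
        view.addSubview(scrollView)
    }

    // Returning the content view is all UIScrollView needs to drive pinch-to-zoom.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? { gridView }
}
```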

CATiledLayer and UIImageView: what's the difference between them?

A few months ago I found some really awesome sample code on Apple's site. The sample is called "LargeImageDownsizing", and the wonderful thing is that it explains a lot about how images are read from resources and then rendered on screen. Digging into that code I found something that disturbed me a little. The downsized image is passed to a view that has a CATiledLayer, but without feeding a separate piece of the image to each tile to improve memory performance; it just sets the tile size and then loads the image (I'm simplifying to get to the concept). So my question basically is: why? Why use a CATiledLayer if it is not fed in the right way? They could have used a normal UIImageView. So I ran a few tests to see if I was right, modifying the code by simply adding a scroll view with an image view as a subview and responding to the scroll view delegate for zooming. Testing on device and simulator, I came to these conclusions:
- The memory impact and footprint are exactly the same, even during zooming and scrolling, which doesn't surprise me at all: the image is decompressed in memory either way.
- The Time Profiler says the tiled view takes longer to draw during scrolling and zooming than the UIImageView, which again doesn't surprise me: the UIImageView is already drawn.
- If I send a memory warning, nothing changes between the two solutions (simulator only).
- Testing Core Animation performance, I get the same results, around 60 FPS.
So what's the difference between those two views/layers, and why should I pick one instead of the other in this specific case? UIImageView seems to win the battle.
I hope someone can help me understand this.
They might perform the same for small images, because then the only difference in terms of performance is that CATiledLayer draws on a background thread. Depending on the tile size, CATiledLayer could even be slower because it has to draw multiple tiles for one image.
BUT ...
the point of CATiledLayer is that you don't need to draw all the tiles, especially when zooming into a very, very large image. It is smart about knowing which parts are actually needed, and it is also smart about evicting tiles that are no longer needed.
For this mechanism to work, you need to provide the individual parts of the image separately. We're talking about an image whose total size probably cannot be held in memory uncompressed.
