Loading UIImages for flip animation unbelievably slow - ios

I am using the UIImageView animationImages property to display a small flipbook animation. That works fine, but loading 35 frames totaling only ~200 KB takes over 4 seconds on an iPhone 7. This is the code I am using to load the frames:
//start timer
var imgListArray = [UIImage]()
for countValue in 0...34 {
    if let image = UIImage(named: "anim_\(countValue).jpg") {
        imgListArray.append(image)
    }
}
//end timer
self.imageView.animationImages = imgListArray
Given that I can load in a multi-megabyte image in less than a second, I just can't believe it takes so long to load in these little frames (pixel dimensions are 507x189). I have tried using 8-bit and 24-bit PNGs and JPEGs and they all take about the same amount of time.
If I don't append the images to the array it takes about the same amount of time, so the only thing it can be is the call to UIImage(named:).
Can anyone suggest a faster way to load these frames, or a faster way to display a flipbook animation?
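One commonly suggested workaround (a sketch under assumptions, not a verified fix: it presumes the frames are ordinary bundle resources and that the cost is image decoding) is to load and force-decode the frames on a background queue, so the main thread never blocks:

import UIKit

func loadAnimationFrames(completion: @escaping ([UIImage]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        var frames = [UIImage]()
        for countValue in 0...34 {
            guard let image = UIImage(named: "anim_\(countValue).jpg") else { continue }
            // Drawing the image once forces decompression here, off the
            // main thread, instead of lazily at first display.
            let renderer = UIGraphicsImageRenderer(size: image.size)
            let decoded = renderer.image { _ in image.draw(at: .zero) }
            frames.append(decoded)
        }
        DispatchQueue.main.async { completion(frames) }
    }
}

// Usage:
// loadAnimationFrames { frames in
//     self.imageView.animationImages = frames
//     self.imageView.startAnimating()
// }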

Related

Xcode - Animating with PNG sequence

I have made an animation with After Effects and added it to my Xcode project as a PNG sequence. This leaves me with a folder of 164 images, which I am animating with a timer. How is that for the app's performance? And could I add more animations like this without any problem?
If it's images, first get those images into an array:
@IBOutlet weak var animatingImageView: UIImageView!
var imageList = [UIImage]()
Now call the function:
func playAnimation() {
    self.animatingImageView.animationImages = imageList
    self.animatingImageView.animationDuration = 2.0
    self.animatingImageView.startAnimating()
}
You can use
self.animatingImageView.animationRepeatCount
to set the repeat count.
Also, if you want to stop it after some time interval, do it with a timer, and on timer completion call
self.animatingImageView.stopAnimating()
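Putting the pieces together, here is a minimal sketch (assuming imageList is already filled and this lives in the view controller that owns the outlet):

func playAnimation() {
    animatingImageView.animationImages = imageList
    animatingImageView.animationDuration = 2.0
    animatingImageView.animationRepeatCount = 0   // 0 means repeat indefinitely
    animatingImageView.startAnimating()

    // Stop the animation after 10 seconds.
    Timer.scheduledTimer(withTimeInterval: 10.0, repeats: false) { [weak self] _ in
        self?.animatingImageView.stopAnimating()
    }
}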
For better performance:
try using images sized close to the image view,
try using cached images,
try making the images opaque.

How to minimize load time for Swift page with lots of images?

I'm building an app that contains a total of about 65 MB of data, mostly in the form of images that are stored as instances of a class. Right now the app runs great, except for one thing: it takes about 10 seconds to load the ViewController that contains all that information. This happens the first time the page loads after the app is opened or is brought to the foreground.
Here's what the code looks like (except there are many, many instances of the AnatomyView class):
class SectionViewController: UIViewController, UIScrollViewDelegate {

    class AnatomyView {
        var viewName: String = ""
        var normalImage: UIImage = UIImage(named: "Logo.jpg")!
        var markedImage: UIImage = UIImage(named: "Logo.jpg")!
        var steps: [UIImage] = []
        var attribution: String = ""

        init(viewName: String, normalImage: UIImage, markedImage: UIImage, steps: [UIImage], attribution: String) {
            self.viewName = viewName
            self.normalImage = normalImage
            self.markedImage = markedImage
            self.steps = steps
            self.attribution = attribution
        }
    }

    let lateralYShoulder = AnatomyView(
        viewName: "Lateral Y",
        normalImage: UIImage(named: "Lateral Y Normal Unmarked.jpg")!,
        markedImage: UIImage(named: "Lateral Y Normal Marked.jpg")!,
        steps: [UIImage(named: "Lateral Y Step 1.jpg")!, UIImage(named: "Lateral Y Step 2.jpg")!],
        attribution: "Case courtesy of Mr Andrew Murphy\nRadiopaedia.org, rID: 48080"
    )
What are some strategies to decrease loading time? Is there a way to store the data somewhere else, then import only what's needed? Having such a long loading time makes the app totally unusable.
Edit: Here's how the image displays in the app. There are at most 8-10 images in the scrollview at a time, arranged side-to-side.
You need to load the info only as it is shown. If you are using a UIScrollView to show all the info at once, the loading time is big; but if you divide the info into a grid with a UICollectionView or UITableView, the content loads as the user scrolls, and the loading time drops a lot.
I would also recommend having a look at CATiledLayer and the following Apple example:
From the description:
"PhotoScroller" demonstrates the use of embedded UIScrollViews and CATiledLayer to create a rich user experience for displaying and paginating photos that can be individually panned and zoomed. CATiledLayer is used to increase the performance of paging, panning, and zooming with high-resolution images or large sets of photos.
https://developer.apple.com/library/content/samplecode/PhotoScroller/Introduction/Intro.html
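To make the lazy-loading idea concrete, here is a minimal sketch (the AnatomyViewModel and ImageCell names and the "ImageCell" reuse identifier are hypothetical): store lightweight image names instead of UIImage instances, and decode only the cells that are actually on screen:

import UIKit

// Lightweight model: store names, not decoded images.
struct AnatomyViewModel {
    let viewName: String
    let normalImageName: String   // e.g. "Lateral Y Normal Unmarked.jpg"
}

class ImageCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!
}

class SectionCollectionViewController: UICollectionViewController {
    var models: [AnatomyViewModel] = []

    override func collectionView(_ collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        return models.count
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "ImageCell",
                                                      for: indexPath) as! ImageCell
        // UIImage(named:) decodes on demand and caches, so only the
        // visible cells pay the cost instead of the whole 65 MB up front.
        cell.imageView.image = UIImage(named: models[indexPath.item].normalImageName)
        return cell
    }
}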

Images lose quality after saving as GIF

I'm developing an iOS app which allows users to take a sequence of photos; afterwards, the photos are put into an animation and exported as both MP4 and GIF.
While the MP4 preserves the source quality, the GIF shows visible color banding.
Here is the visual comparison:
GIF: [screenshot omitted]
MP4: [screenshot omitted]
The code I use for exporting as GIF:
var dictFile = new NSMutableDictionary();
var gifDictionaryFile = new NSMutableDictionary();
gifDictionaryFile.Add(ImageIO.CGImageProperties.GIFLoopCount, NSNumber.FromFloat(0));
dictFile.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFile);

var dictFrame = new NSMutableDictionary();
var gifDictionaryFrame = new NSMutableDictionary();
gifDictionaryFrame.Add(ImageIO.CGImageProperties.GIFDelayTime, NSNumber.FromFloat(0f));
dictFrame.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFrame);

InvokeOnMainThread(() =>
{
    var imageDestination = CGImageDestination.Create(fileURL, MobileCoreServices.UTType.GIF, _images.Length);
    imageDestination.SetProperties(dictFile);
    for (int i = 0; i < this._images.Length; i++)
    {
        imageDestination.AddImage(this._images[i].CGImage, dictFrame);
    }
    imageDestination.Close();
});
The code I use for exporting as MP4:
var videoSettings = new NSMutableDictionary();
videoSettings.Add(AVVideo.CodecKey, AVVideo.CodecH264);
videoSettings.Add(AVVideo.WidthKey, NSNumber.FromNFloat(images[0].Size.Width));
videoSettings.Add(AVVideo.HeightKey, NSNumber.FromNFloat(images[0].Size.Height));

var videoWriter = new AVAssetWriter(fileURL, AVFileType.Mpeg4, out nsError);
var writerInput = new AVAssetWriterInput(AVMediaType.Video, new AVVideoSettingsCompressed(videoSettings));

var sourcePixelBufferAttributes = new NSMutableDictionary();
sourcePixelBufferAttributes.Add(CVPixelBuffer.PixelFormatTypeKey, NSNumber.FromInt32((int)CVPixelFormatType.CV32ARGB));
var pixelBufferAdaptor = new AVAssetWriterInputPixelBufferAdaptor(writerInput, sourcePixelBufferAttributes);

videoWriter.AddInput(writerInput);

if (videoWriter.StartWriting())
{
    videoWriter.StartSessionAtSourceTime(CMTime.Zero);
    for (int i = 0; i < images.Length; i++)
    {
        while (true)
        {
            if (writerInput.ReadyForMoreMediaData)
            {
                var frameTime = new CMTime(1, 10);
                var lastTime = new CMTime(1 * i, 10);
                var presentTime = CMTime.Add(lastTime, frameTime);
                var pixelBufferImage = PixelBufferFromCGImage(images[i].CGImage, pixelBufferAdaptor);
                Console.WriteLine(pixelBufferAdaptor.AppendPixelBufferWithPresentationTime(pixelBufferImage, presentTime));
                break;
            }
        }
    }
}

writerInput.MarkAsFinished();
await videoWriter.FinishWritingAsync();
I would appreciate your help!
Kind regards,
Andre
This is just a summarization of my comments...
I do not code on your platform, so I can only provide a generic answer (plus insights from my own GIF encoder/decoder coding experience).
The GIF image format supports up to 8 bits per pixel, leading to a maximum of 256 colors per pixel with naive encoding. Cheap encoders just truncate the input image to 256 or fewer colors, usually producing ugly pixelated results. To increase the coloring quality of a GIF, there are 3 approaches I know of:
1. Multiple frames covering the screen, each with its own palette
Simply divide the image into overlays, each with its own palette. This is slow in terms of decoding, as you need to process more frames per single image (which can cause sync errors with some viewers) and you need to process all frame-related chunks multiple times per single image. The encoding itself is fast, as you just separate the frames based either on colors or on region/position. Here is a (region/position based) example:
The sample image is taken from here: Wiki
GIF supports transparency, so the sub-frames can overlap... This approach physically increases the possible colors per pixel to N*256 (or N*255 for transparent frames), where N is the number of frames or palettes used per single image.
2. Dithering
Dithering is a technique that approximates the color of an area as closely as possible while using only the specified colors (from the palette). It is fast and easy to implement, but the result is somewhat noisy. For more info, see some related answers of mine:
Converting BMP image to set of instructions for a plotter?
c# image dithering routine that accepts an amount of dithering?
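To illustrate the idea, here is a minimal sketch of Floyd-Steinberg error-diffusion dithering in Swift, operating on a plain RGB buffer with a caller-supplied palette (a generic illustration, not code from the answers linked above):

struct RGB { var r: Double; var g: Double; var b: Double }

// Nearest palette entry by squared distance in RGB space.
// Assumes a non-empty palette.
func nearest(_ c: RGB, in palette: [RGB]) -> RGB {
    func dist(_ p: RGB) -> Double {
        return (p.r - c.r) * (p.r - c.r)
             + (p.g - c.g) * (p.g - c.g)
             + (p.b - c.b) * (p.b - c.b)
    }
    return palette.min { dist($0) < dist($1) }!
}

// Floyd-Steinberg: replace each pixel with its nearest palette color,
// then diffuse the quantization error to not-yet-visited neighbours.
func dither(pixels: inout [RGB], width: Int, height: Int, palette: [RGB]) {
    func spread(_ x: Int, _ y: Int, _ e: RGB, _ f: Double) {
        guard x >= 0, x < width, y < height else { return }
        let i = y * width + x
        pixels[i].r += e.r * f
        pixels[i].g += e.g * f
        pixels[i].b += e.b * f
    }
    for y in 0..<height {
        for x in 0..<width {
            let i = y * width + x
            let old = pixels[i]
            let new = nearest(old, in: palette)
            pixels[i] = new
            let err = RGB(r: old.r - new.r, g: old.g - new.g, b: old.b - new.b)
            spread(x + 1, y,     err, 7.0 / 16.0)
            spread(x - 1, y + 1, err, 3.0 / 16.0)
            spread(x,     y + 1, err, 5.0 / 16.0)
            spread(x + 1, y + 1, err, 1.0 / 16.0)
        }
    }
}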
3. Better color quantization method
Cheap encoders just truncate the colors to a predefined palette. Much better results are obtained by clustering the used colors based on a histogram. For example, see:
Effective gif/image color quantization?
The result is usually much better than dithering, but the encoding time is huge in comparison to dithering...
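As a toy illustration of the histogram idea (a popularity-based sketch, much simpler than the clustering discussed in the linked answer): coarsen each channel to 5 bits, count occurrences, and keep the 256 most frequent colors as the palette:

struct Color: Hashable { var r: UInt8; var g: UInt8; var b: UInt8 }

// Popularity-based palette: coarsen each channel to 5 bits (32 levels),
// build a histogram, and keep the maxColors most frequent colors.
func popularityPalette(pixels: [Color], maxColors: Int = 256) -> [Color] {
    var histogram = [Color: Int]()
    for p in pixels {
        let coarse = Color(r: p.r & 0xF8, g: p.g & 0xF8, b: p.b & 0xF8)
        histogram[coarse, default: 0] += 1
    }
    return histogram
        .sorted { $0.value > $1.value }   // most frequent first
        .prefix(maxColors)
        .map { $0.key }
}

Each pixel would then be mapped to its nearest palette entry, optionally combined with the dithering from approach #2.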
Approaches #1 and #3 can be used together to enhance the quality even more...
If you do not have access to the encoding code or pipeline, you can still transform the image itself before encoding: do the quantization and palette computation yourself, and load the result directly into the GIF encoder. That should be possible (if the GIF encoder you are using is at least a bit sophisticated...).

Large Image Compositing on iOS in Swift

Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, and add an additional one on top at a specified opacity, repeatedly, while outputting the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability, I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop, I can get to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector" : CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage
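One commonly suggested mitigation (a sketch in current Swift syntax, with a hypothetical compositeStack helper, not a verified fix for this exact code) is to decode each source image inside the loop and wrap every iteration in an autoreleasepool, so the intermediate 12-megapixel objects are released before the next frame is loaded:

import UIKit
import CoreImage

func compositeStack(imageNames: [String], context: CIContext) -> CGImage? {
    var stackBuffer: CIImage?
    var stackRender: CGImage?

    for (i, name) in imageNames.enumerated() {
        // Each iteration's temporaries (decoded UIImage, CIImage,
        // rendered CGImage) are released here before the next frame loads.
        autoreleasepool {
            guard let uiImage = UIImage(named: name),
                  let topImage = CIImage(image: uiImage) else { return }

            guard let background = stackBuffer else {
                stackBuffer = topImage
                return
            }

            // Fade the new layer to 1/(i+1) opacity, then source-over it.
            let alphaTop = topImage.applyingFilter("CIColorMatrix", parameters: [
                "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
            ])
            let composite = alphaTop.applyingFilter("CISourceOverCompositing", parameters: [
                kCIInputBackgroundImageKey: background
            ])

            // Render once per iteration so Core Image does not build an
            // ever-growing filter graph across all frames.
            stackRender = context.createCGImage(composite, from: background.extent)
            if let rendered = stackRender {
                stackBuffer = CIImage(cgImage: rendered)
            }
        }
    }
    return stackRender
}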

Animation by PNG frames

I have several PNG images that, if presented one after the other, will create a short animation.
My question is -
Is it possible to create an animation from several PNG images, by displaying them one after the other?
Yes, you can create an animation with PNGs:
let animationImagesArray : [UIImage] = [<Add images>]
imageView.animationImages = animationImagesArray
imageView.startAnimating()
You can also set repeat count and animation duration as well.
Update
To load a sequence of images via a loop, it is better to name them in a sequence, something like this (animationImage_0.png, animationImage_1.png, ...):
let prefix = "animationImage"
var images = [UIImage]()
for i in 0..<20 {
    let name = "\(prefix)_\(i).png"
    let image = UIImage(named: name)!
    images.append(image)
}
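As an aside (not part of the answer above), UIKit can also pack the frames into a single animated UIImage, which can then be assigned to a plain image view:

// Build one animated UIImage from the frames.
if let animated = UIImage.animatedImage(with: images, duration: 2.0) {
    imageView.image = animated
}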
