Animation by PNG frames - iOS

I have several PNG images that, when presented one after the other, create a short animation.
My question is:
Is it possible to create an animation with several PNG images by displaying them one after the other?

Yes, you can create an animation with PNGs:
let animationImagesArray: [UIImage] = [<Add images>] // placeholder: supply your frame images
imageView.animationImages = animationImagesArray
imageView.startAnimating()
You can also set the repeat count and the animation duration.
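For example (a minimal sketch, assuming imageView is the UIImageView that already has its animationImages set):
imageView.animationDuration = 1.5 // one full cycle through the frames lasts 1.5 seconds
imageView.animationRepeatCount = 0 // 0 means the animation repeats indefinitely
imageView.startAnimating()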
Update
To load a sequence of images in a loop, it's best to name them sequentially, something like animationImage_0.png, animationImage_1.png, ... (matching the loop below):
let prefix = "animationImage"
var images = [UIImage]()
for i in 0..<20 {
    let name = "\(prefix)_\(i).png"
    if let image = UIImage(named: name) {
        images.append(image)
    }
}

Related

Loading UIImages for flip animation unbelievably slow

I am using the UIImageView animationImages property to display a small flipbook animation. That works fine, but loading 35 frames totaling only ~200 KB takes over 4 seconds on an iPhone 7. This is the code I am using to load the frames:
// start timer
var imgListArray = [UIImage]()
for countValue in 0...34 {
    if let image = UIImage(named: "anim_\(countValue).jpg") {
        imgListArray.append(image)
    }
}
// end timer
self.imageView.animationImages = imgListArray
Given that I can load in a multi-megabyte image in less than a second, I just can't believe it takes so long to load in these little frames (pixel dimensions are 507x189). I have tried using 8-bit and 24-bit PNGs and JPEGs and they all take about the same amount of time.
If I don't append the images to the array it takes about the same amount of time, so the only thing it can be is the calls to UIImage(named:).
Can anyone suggest a faster way to load these frames in or faster way to display a flip book animation?
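One common mitigation (a sketch, not from the original thread): UIImage(named:) decodes synchronously and populates the system image cache, so loading the frames on a background queue with UIImage(contentsOfFile:) keeps the main thread responsive. The anim_N.jpg names and 35-frame count follow the question; everything else is an assumption.
// Load the flipbook frames off the main thread, then hand them over.
DispatchQueue.global(qos: .userInitiated).async {
    var frames = [UIImage]()
    for i in 0...34 {
        if let path = Bundle.main.path(forResource: "anim_\(i)", ofType: "jpg"),
           let image = UIImage(contentsOfFile: path) {
            frames.append(image)
        }
    }
    DispatchQueue.main.async {
        self.imageView.animationImages = frames
        self.imageView.startAnimating()
    }
}
This doesn't make the decode itself faster, but it moves the cost off the main thread so the UI never stalls during loading.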

iOS 11 animated gif display in UIImageView

I thought iOS 11 was supposed to bring, at long last, native support for animated gifs? But I tried this, and I didn't see any animation:
let im = UIImage(named: "wireframe.gif")!
let iv = UIImageView(image: im)
iv.animationImages = [im] // didn't help
iv.frame.origin = CGPoint(x: 0, y: 100)
iv.frame.size = im.size
self.view.addSubview(iv)
delay(2) { // delay is a small dispatch-after convenience wrapper
    iv.startAnimating() // nope
}
How is this supposed to work?
iOS 11 does bring a native understanding of animated gifs, but that understanding, infuriatingly, is not built into UIImageView. It is still up to you to translate the animated gif into a sequence of UIImages. Apple now provides sample code, in terms of the ImageIO framework:
https://developer.apple.com/library/content/samplecode/UsingPhotosFramework/Listings/Shared_AnimatedImage_swift.html
That code implements an AnimatedImage class, which is essentially a collection of CGImages extracted from the original animated gif. Thus, using that class, we can display and animate the animated gif in a UIImageView as follows:
let url = Bundle.main.url(forResource: "wireframe", withExtension: "gif")!
let anim = AnimatedImage(url: url)!
var arr = [CGImage]()
for ix in 0..<anim.frameCount {
    arr.append(anim.imageAtIndex(index: ix)!)
}
let arr2 = arr.map { UIImage(cgImage: $0) }
let iv = UIImageView()
iv.animationImages = arr2
iv.animationDuration = anim.duration
iv.frame.origin = CGPoint(x: 0, y: 100)
iv.frame.size = arr2[0].size
self.view.addSubview(iv)
delay(2) { // delay is a small dispatch-after convenience wrapper
    iv.startAnimating()
}
Unfortunately, the inter-frame timing of a GIF can vary from frame to frame, so answers that use ImageIO to load the frames and then set them as the animationImages of a UIImageView need to extract those timings properly and take them into account.
I recommend Flipboard's FLAnimatedImage, which handles GIFs correctly.
https://github.com/Flipboard/FLAnimatedImage.
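For illustration, here is a minimal sketch (not from the original answers) of how the per-frame delays can be read with ImageIO; note that UIImageView.animationImages plays frames at a uniform rate, so truly variable timings need CAKeyframeAnimation or a library like FLAnimatedImage:
import ImageIO
import UIKit

// Extract each frame and its delay from a GIF using ImageIO.
func gifFrames(from url: URL) -> (images: [UIImage], duration: TimeInterval)? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    var images = [UIImage]()
    var duration: TimeInterval = 0
    for i in 0..<CGImageSourceGetCount(source) {
        guard let cg = CGImageSourceCreateImageAtIndex(source, i, nil) else { continue }
        images.append(UIImage(cgImage: cg))
        // The per-frame delay lives in the GIF properties dictionary.
        let props = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [CFString: Any]
        let gif = props?[kCGImagePropertyGIFDictionary] as? [CFString: Any]
        let delay = gif?[kCGImagePropertyGIFUnclampedDelayTime] as? TimeInterval
            ?? gif?[kCGImagePropertyGIFDelayTime] as? TimeInterval
            ?? 0.1 // common fallback when no delay is recorded
        duration += delay
    }
    return (images, duration)
}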

Random images to (many) Image Views

Sadly, I've got 36 UIImageViews and need to set a random image to each one.
My 6 images are named:
"Owl1"
"Owl2"
"Owl3"
"Owl4"
"Owl5"
"Owl6"
So, I want to set one random image in each of my 36 different UIImageViews. What is the best way to do this? An array? Here's my "try" so far.
var images: [UIImage] = [
    UIImage(named: "Owl1")!,
    UIImage(named: "Owl2")!,
    UIImage(named: "Owl3")!,
    UIImage(named: "Owl4")!,
    UIImage(named: "Owl5")!,
    UIImage(named: "Owl6")!
]
var randomUIImage = [Image1, Image2, Image3, Image4, Image5...]
randomUIImage.shuffleInPlace()
randomUIImage[0].image = images[0]
randomUIImage[1].image = images[1]
But I realized this will not work, and I can't write this code out for all 36 image views... Anyone got a better idea? ;-)
Tip: you can use a range + map to create an array of your images.
let images = (1...6).map { UIImage(named: "Owl\($0)")! }
(1...6) produces a collection of Ints, from 1 to 6 (including 6), and with map we create a new instance of UIImage for each Int, using it for the name; since you named your images in order, it's convenient. (The force-unwrap keeps the result as [UIImage] rather than [UIImage?].) It's like doing a loop and appending a new instance of UIImage to an array inside the loop, using an index for the naming: "Owl1", "Owl2", etc.
If you also have your UIImageViews in an array, you can assign the images with a loop.
Here's an example (I didn't verify in Xcode but it should be close to what you need):
for view in imageViewsArray { // the array with the 36 image views
    // a random index into the array of 6 images
    let randomIndex = Int(arc4random_uniform(UInt32(images.count)))
    // assign the randomly chosen image to the image view
    view.image = images[randomIndex]
}
You can have an array of image names, and an array of images to hold them:
var imageNames: [String] = ["Owl1", "Owl2", ...] // through "Owl6"
var owlImages: [UIImage] = []
Then append randomly chosen images:
for _ in 0..<imageNames.count {
    // a random Int from 0 up to the last index of the array
    let randomInt = Int(arc4random_uniform(UInt32(imageNames.count)))
    // add the randomly chosen image to the array
    owlImages.append(UIImage(named: imageNames[randomInt])!)
}
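As an aside (not part of the original answers): on Swift 4.2 and later the same idea reads more simply with the built-in random APIs. A sketch, assuming the images and imageViewsArray arrays from above:
for view in imageViewsArray {
    // randomElement() returns an Optional, which UIImageView.image accepts.
    view.image = images.randomElement()
}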

Large Image Compositing on iOS in Swift

Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, and add an additional one on top at a specified opacity, repeatedly, while outputting the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability, I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop, I can increase to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector" : CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage
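One mitigation worth trying (a sketch, not from the original thread): the intermediate CGImage and CIImage objects created on each pass are autoreleased, so wrapping each iteration's body in an explicit autoreleasepool lets them be freed immediately instead of accumulating until the end of the loop:
for i in 1...9 {
    autoreleasepool {
        let topImage = ciImageArray[i]
        let alphaTop = topImage.imageByApplyingFilter(
            "CIColorMatrix", withInputParameters: [
                "inputAVector" : CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
            ])
        let filter = CIFilter(name: "CISourceOverCompositing")!
        filter.setValue(alphaTop, forKey: kCIInputImageKey)
        filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)
        // Rendering inside the pool frees this pass's temporaries
        // before the next 12-megapixel frame is processed.
        stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
        stackBuffer = CIImage(CGImage: stackRender)
    }
}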

Swift Progress Indicator Image Mask

To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script will call a JSON feed that is pulled from the remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images; one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower half of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So, one PNG is enough because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    let tear: UIImage! = UIImage(named: "tear") // no force-unwrap, so the nil check below can actually fire
    if tear == nil { return }
    let sz = tear.size
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    let con = UIGraphicsGetCurrentContext()
    UIColor.redColor().setFill()
    CGContextFillRect(con, CGRectMake(0, top, sz.width, sz.height))
    tear.drawAtPoint(CGPointMake(0, 0))
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
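For completeness, the slider hookup might look like this (a sketch in the same Swift 2-era style as the answer; the action name is hypothetical):
// Wired to the slider's Value Changed event; UISlider's value
// runs 0...1 by default, matching redraw's percent parameter.
@IBAction func sliderMoved(sender: UISlider) {
    self.redraw(CGFloat(sender.value))
}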
