I am developing an iOS video trimmer in Swift 4. I am trying to render a horizontal list of video thumbnails, spread out over various durations, from both local video files and remote URLs. When I test it in the simulator, the thumbnails are generated in under a second, which is fine. However, when I run the same code on an actual device, thumbnail generation is very slow and sometimes crashes the app. I tried moving the image generation onto a background thread and updating the UI on the main thread once it completed, but that doesn't seem to work well and the app crashes after rendering the screen a few times. I am not sure whether that is because I am navigating away from the screen while tasks are still trying to complete. I am trying to fix this so the app generates the thumbnails quickly and does not crash. Here is the code I am using; I would really appreciate any help with this issue.
func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    var offset: Float64 = 0
    for i in 0..<self.IMAGE_COUNT {
        DispatchQueue.global(qos: .userInitiated).async {
            offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
            let thumbnail = thumbnailFromVideo(videoUrl: videoURL,
                                               time: CMTimeMake(Int64(offset), 1))
            DispatchQueue.main.async {
                self.addImageToView(image: thumbnail, view: view, index: i)
            }
        }
    }
}
static func thumbnailFromVideo(videoUrl: URL, time: CMTime) -> UIImage {
    let asset: AVAsset = AVAsset(url: videoUrl) as AVAsset
    let imgGenerator = AVAssetImageGenerator(asset: asset)
    imgGenerator.appliesPreferredTrackTransform = true
    do {
        let cgImage = try imgGenerator.copyCGImage(at: time, actualTime: nil)
        let uiImage = UIImage(cgImage: cgImage)
        return uiImage
    } catch {
        // Errors are silently swallowed here; an empty image is returned below.
    }
    return UIImage()
}
The first sentence of the documentation says not to do what you’re doing! And it even tells you what to do instead.
Generating a single image in isolation can require the decoding of a large number of video frames with complex interdependencies. If you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronously(forTimes:completionHandler:), which employs decoding efficiencies similar to those used during playback.
(Italics mine.)
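For illustration, here is a minimal sketch of how the renderThumbnails method from the question could be rewritten around generateCGImagesAsynchronously(forTimes:completionHandler:). It assumes it lives in the same view controller, with the IMAGE_COUNT constant and addImageToView(image:view:index:) helper from the question, plus a stored imageGenerator property (my own assumption) so pending requests can be cancelled when you navigate away:

func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    let asset = AVAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    generator.maximumSize = CGSize(width: 200, height: 200) // decode thumbnail-sized images, not full frames
    imageGenerator = generator                              // assumed stored property, kept so we can cancel later

    let step = duration / Float64(IMAGE_COUNT)
    let times = (0..<IMAGE_COUNT).map {
        NSValue(time: CMTime(seconds: Float64($0) * step, preferredTimescale: 600))
    }

    generator.generateCGImagesAsynchronously(forTimes: times) { [weak self] requestedTime, cgImage, _, _, _ in
        guard let cgImage = cgImage else { return }           // skip times that failed or were cancelled
        let index = Int((requestedTime.seconds / step).rounded())
        let thumbnail = UIImage(cgImage: cgImage)
        DispatchQueue.main.async {
            self?.addImageToView(image: thumbnail, view: view, index: index)
        }
    }
}

Calling imageGenerator?.cancelAllCGImageGeneration() in viewWillDisappear stops any pending requests so they don't outlive the screen, which may also be the source of the crashes when navigating away.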
I'm trying to get the last frame from a video. The last frame, not the last second (my videos are very fast, so a single second can contain different scenes).
I've written the following code for testing:
private func getLastFrame(from item: AVPlayerItem) -> UIImage? {
    let imageGenerator = AVAssetImageGenerator(asset: item.asset)
    imageGenerator.requestedTimeToleranceAfter = kCMTimeZero
    imageGenerator.requestedTimeToleranceBefore = kCMTimeZero

    let composition = AVVideoComposition(propertiesOf: item.asset)
    let time = CMTimeMakeWithSeconds(item.asset.duration.seconds, composition.frameDuration.timescale)

    do {
        let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        print("\(error)")
        return nil
    }
}
But I always receive this error when I try to execute it:
Domain=AVFoundationErrorDomain Code=-11832 "Cannot Open"
UserInfo={NSUnderlyingError=0x170240180 {Error
Domain=NSOSStatusErrorDomain Code=-12431 "(null)"},
NSLocalizedFailureReason=This media cannot be used.,
NSLocalizedDescription=Cannot Open}
If I remove the requestedTimeTolerance settings (so they stay at the default, infinite value), everything is okay, but I always receive a brighter image than what I see in the video (maybe because the latest frame was not captured? Or does the CGImage → UIImage conversion have some issues?)
Questions:
Why do I receive an error when zero tolerance is specified? How do I get exactly the last frame?
Why might the captured images be brighter than they appear in the video? For example, if I write this code:
self.videoLayer.removeFromSuperlayer()
self.backgroundImageView.image = getLastFrame(from: playerItem)
I see "brightness jump" (video was darker, image is brighter).
Update 1
I found a related issue, AVAssetImageGenerator fails at copying image, but that question is not solved.
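For what it's worth, one workaround often tried here (an assumption on my part, not something confirmed in this thread) is based on the idea that a zero-tolerance request at exactly asset.duration can land past the last decodable frame; backing off by one frame duration before requesting the image avoids that:

private func getLastFrame(from item: AVPlayerItem) -> UIImage? {
    let asset = item.asset
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.appliesPreferredTrackTransform = true
    imageGenerator.requestedTimeToleranceAfter = kCMTimeZero
    imageGenerator.requestedTimeToleranceBefore = kCMTimeZero

    // Ask for a time one frame before the nominal duration instead of the duration itself.
    let composition = AVVideoComposition(propertiesOf: asset)
    let lastFrameTime = CMTimeSubtract(asset.duration, composition.frameDuration)

    do {
        let cgImage = try imageGenerator.copyCGImage(at: lastFrameTime, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        print(error)
        return nil
    }
}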
I'm working on showing thumbnails for videos.
Here is my code.
override func viewDidLoad() {
    super.viewDidLoad()
    for str in self.imgArray {
        let url = NSURL(string: str)
        let movieAsset = AVURLAsset(URL: url!, options: nil)
        let assetImageGemerator = AVAssetImageGenerator(asset: movieAsset)
        assetImageGemerator.appliesPreferredTrackTransform = true
        let frameRef = try! assetImageGemerator.copyCGImageAtTime(CMTimeMake(1, 2), actualTime: nil)
        let image = UIImage(CGImage: frameRef)
        self.imagesArray.append(image)
    }
}
By using this I'm getting thumbnails correctly. The issue is that there is a delay of about 5-10 seconds in generating the thumbnail images. Is there any way I could improve the speed of this code and generate the thumbnails faster?
I don't think there is a way to dramatically speed up the code itself, but try CMTimeMake(0, 10); it may be quicker, since some video files take a while to seek to a given time.
I think you need to cache the images you generate and consult the cache the next time, so that it runs faster overall. There are a lot of ways to cache images; using NSCache is one option, as sketched below.
As a side note, it shouldn't take 5-10 seconds to get thumbnail images; it usually takes less than one second.
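A minimal caching sketch along those lines, written in current Swift syntax rather than the Swift 2 of the question; the class and method names here are mine, not from the question, and it should be called off the main thread rather than directly in viewDidLoad:

import AVFoundation
import UIKit

// Hypothetical helper: generate a thumbnail once per URL and reuse it afterwards.
final class ThumbnailCache {
    static let shared = ThumbnailCache()
    private let cache = NSCache<NSString, UIImage>()

    func thumbnail(for urlString: String) -> UIImage? {
        // Return the cached image if we already generated one for this URL.
        if let cached = cache.object(forKey: urlString as NSString) {
            return cached
        }
        guard let url = URL(string: urlString) else { return nil }
        let generator = AVAssetImageGenerator(asset: AVURLAsset(url: url))
        generator.appliesPreferredTrackTransform = true
        guard let cgImage = try? generator.copyCGImage(at: CMTime(value: 1, timescale: 2), actualTime: nil) else {
            return nil
        }
        let image = UIImage(cgImage: cgImage)
        cache.setObject(image, forKey: urlString as NSString)
        return image
    }
}

In viewDidLoad you would then dispatch the loop over imgArray to a background queue and append/display the results on the main queue, so the 5-10 second wait no longer blocks the UI.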
I have two UIImageView objects on my view stored inside an array called imgViews and I try to download images for them asynchronously with this code:
func showPic(positionIndex: Int) {
    let urlStr = "https://www.friendesque.com/arranged/userpics/amir/1"
    let url = NSURL(string: urlStr)
    let session = NSURLSession.sharedSession()
    let task = session.dataTaskWithURL(url!, completionHandler: { (data, response, error) -> Void in
        if error == nil {
            self.imgViews[positionIndex].image = UIImage(data: data)
            //self.imgViews[positionIndex].image = UIImage(named: "11665489_10154101221228009_2542754962143804380_n.jpg")
            print("Loading Done...")
        }
        else {
            print(error)
        }
    })
    task.resume()
}
and inside my viewDidLoad(), I have
showPic(0)
When I run the code, I see "Loading Done..." immediately, which means the picture has been downloaded, but it takes a very long time (about a minute) for the UIImageView to actually change to the loaded picture. It's a very small picture (~15 KB), so it can't be a processing-time problem.
I tried loading a bundled resource image (the commented-out part of the code) instead of the downloaded picture, but it's still slow.
I'm really confused. Why is Swift so slow at working with images inside a block?
Perhaps when the data task returns, it is on a background thread? You will need to switch to the main thread before changing a UIImageView. Regardless, I would use the UIImageView+AFNetworking category to achieve this. It's simple, well tested, and lets you provide a placeholder image to display while the real one is downloading.
https://github.com/AFNetworking/AFNetworking/blob/master/UIKit%2BAFNetworking/UIImageView%2BAFNetworking.h
to use:
myImageView.setImageWithURL(url!)
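If you'd rather keep the plain NSURLSession code from the question, a sketch of the main-thread fix could look like this (same Swift 2-era APIs as the original; the only real change is the dispatch back to the main queue):

func showPic(positionIndex: Int) {
    let urlStr = "https://www.friendesque.com/arranged/userpics/amir/1"
    let url = NSURL(string: urlStr)
    let task = NSURLSession.sharedSession().dataTaskWithURL(url!) { data, response, error in
        if let data = data where error == nil {
            // The completion handler runs on a background queue, so hop back
            // to the main queue before touching any UIKit object.
            dispatch_async(dispatch_get_main_queue()) {
                self.imgViews[positionIndex].image = UIImage(data: data)
                print("Loading Done...")
            }
        } else {
            print(error)
        }
    }
    task.resume()
}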
I am trying to measure the time taken to load a large photo (JPEG) from file into a UIImageView on iOS 8.0.
My current code:
import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    @IBAction func loadImage(sender: UIButton) {
        if let imageFile = NSBundle.mainBundle().pathForResource("large_photo", ofType: "jpg") {
            // start our timer
            let tick = Tick()

            // loads a very large image file into imageView
            // the test photo used is a 4608 × 3456 pixel JPEG
            // using contentsOfFile: to prevent caching while testing timer
            imageView.image = UIImage(contentsOfFile: imageFile)

            // stop our timer and print execution time
            tick.tock()
        }
    }
}

class Tick {
    let tickTime: NSDate

    init() {
        tickTime = NSDate()
    }

    func tock() {
        let tockTime = NSDate()
        let executionTime = tockTime.timeIntervalSinceDate(tickTime)
        println("[execution time]: \(executionTime)")
    }
}
When I load a very large image (a 4608 × 3456 JPEG) on my test device (a 5th-generation iPod touch), I can see that the execution time is ~2-3 seconds and that the main thread is blocked. This is observable from the fact that the UIButton remains in a highlighted state for that period and no other UI elements respond to interaction.
I would therefore expect my timing function to report a time of ~2-3 seconds. However, it reports a time in milliseconds, e.g.:
[execution time]: 0.0116159915924072
The tick.tock() call prints its message to the console before the image is displayed. This confuses me, as the main thread appears to be blocked until after the image is loaded.
This leads me to ask the following questions:
If the image is being loaded asynchronously in the background, why are user interaction and the main thread blocked?
If the image is being loaded on the main thread, why does tick.tock() print to the console before the image is displayed?
There are 2 parts to what you are measuring here:
Loading the image from disk:
UIImage(contentsOfFile: imageFile)
And decompressing the image from a JPEG to a bitmap to be displayed:
imageView.image = ....
The first part involves actually retrieving the compressed JPEG data from the disk (disk I/O) and creating a UIImage object. The UIImage object holds a reference to the compressed data, until it needs to be displayed. Only at the moment that it's ready to be rendered to the screen does it decompress the image into a bitmap to display (on the main thread).
My guess is that your timer is only catching the disk-load part, and the decompression is happening on the next run loop. Decompressing an image of that size is likely to take a while, probably the lion's share of the time.
If you want to explicitly measure how long the decompression takes, you'll need to do it manually, by drawing the image into an off-screen context, like so:
let tick = Tick()

// Load the image from disk (force-unwrapped here for brevity; the bundled file is known to exist)
let image = UIImage(contentsOfFile: imageFile)!

// Decompress the image into a bitmap
var newImage: UIImage
UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
image.drawInRect(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

tick.tock()
Here we are replicating the decompression that would happen when you assign the image to imageView.image.
A handy trick to keep the UI responsive when dealing with images this size is to kick the whole process onto a background thread. This works well because once you have manually decompressed the image, UIKit detects this and doesn't repeat the process.
// Switch to a background thread
dispatch_async(dispatch_get_global_queue(Int(DISPATCH_QUEUE_PRIORITY_DEFAULT.value), 0)) {
    // Load the image from disk (force-unwrapped for brevity)
    let image = UIImage(contentsOfFile: imageFile)!

    // Ref to the decompressed image
    var newImage: UIImage

    // Decompress the image into a bitmap
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
    image.drawInRect(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Switch back to the main thread
    dispatch_async(dispatch_get_main_queue()) {
        // Display the decompressed image
        imageView.image = newImage
    }
}
A disclaimer: The code here has not been fully tested in Xcode, but it's 99% correct if you decide to use it.
I would try to time this using a unit test, since the XCTest framework provides some good performance-measurement tools. I think this approach would get around the lazy-loading issues, although I'm not 100% sure.
func testImagePerformance() {
    let date = NSDate()
    measureBlock() {
        if let imageFile = NSBundle.mainBundle().pathForResource("large_photo", ofType: "jpg") {
            imageView.image = UIImage(contentsOfFile: imageFile)
        }
    }
}
(Just an aside, you mentioned that the loading blocks the main app thread... you should look into using an NSOperationQueue to make sure that doesn't happen... you probably already know that: http://nshipster.com/nsoperation/)
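A rough sketch of that NSOperationQueue suggestion, reusing the manual decompression trick from the answer above; imageView and the "large_photo" resource are the names from the question, and imageQueue is mine:

// Load and force-decode the large JPEG on a background operation,
// then hand the decoded bitmap back to the main queue for display.
let imageQueue = NSOperationQueue()
imageQueue.addOperationWithBlock {
    if let imageFile = NSBundle.mainBundle().pathForResource("large_photo", ofType: "jpg") {
        if let image = UIImage(contentsOfFile: imageFile) {
            // Decompress into a bitmap off the main thread.
            UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
            image.drawInRect(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
            let decoded = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            // Back to the main queue to update UIKit.
            NSOperationQueue.mainQueue().addOperationWithBlock {
                self.imageView.image = decoded
            }
        }
    }
}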
I'm developing an application which plays an HLS video and renders it in a UIView.
At a given moment I want to save a picture of the currently displayed video frame. To do this, I begin an image graphics context, draw the UIView hierarchy into the context, and save it into a UIImage with the UIGraphicsGetImageFromCurrentImageContext method.
This works really well on the iOS simulator; the rendered image is perfect. But on a device the rendered image is totally white.
Does anyone know why it doesn't work on a device?
Or, is there a working way to take a screenshot of an HLS video on a device?
Thanks for any help.
I was able to find a way to save a screenshot of an HLS live stream, by adding an AVPlayerItemVideoOutput object to the AVPlayerItem.
In initialisation:
self.output = AVPlayerItemVideoOutput(pixelBufferAttributes: Dictionary<String, AnyObject>())
playerItem.addOutput(output!)
To save screenshot:
guard let time = self.player?.currentTime() else { return }
guard let pixelBuffer = self.output?.copyPixelBufferForItemTime(time,
    itemTimeForDisplay: nil) else { return }

let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
let temporaryContext = CIContext(options: nil)
let rect = CGRectMake(0, 0,
                      CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                      CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
let videoImage = temporaryContext.createCGImage(ciImage, fromRect: rect)
let image = UIImage(CGImage: videoImage)
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
This seems not to work in the simulator, but works fine on a device. Code is in Swift 2 but should be straightforward to convert to obj-c or Swift 1.x.
People have tried, and failed (like me), apparently because of the nature of HLS. See: http://blog.denivip.ru/index.php/2012/12/screen-capture-in-ios/?lang=en