Take screenshot from HLS video stream with iOS device

I'm developing an application which plays an HLS video and renders it in a UIView.
At a given time I want to save a picture of the currently displayed video frame. To do this I begin an image graphics context, draw the UIView hierarchy into the context, and save it into a UIImage with the UIGraphicsGetImageFromCurrentImageContext method.
This works fine in the iOS simulator; the rendered image is perfect. But on a device the rendered image is completely white.
Does anyone know why it doesn't work on a device?
Or is there a working way to take a screenshot of an HLS video on a device?
Thanks for any help.
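For reference, a minimal sketch of the capture approach described above (the videoView name is assumed here; it stands for whatever view hosts the player layer). This is the code path that comes back white on a device:

import UIKit

// Snapshot the view hierarchy into an image context, as described in the question.
// For a view backed by an HLS player layer, the result is blank on a real device.
func snapshot(of videoView: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(videoView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    videoView.drawHierarchy(in: videoView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}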

I was able to find a way to save a screenshot of an HLS live stream, by adding an AVPlayerItemVideoOutput object to the AVPlayerItem.
In initialisation:
self.output = AVPlayerItemVideoOutput(pixelBufferAttributes: Dictionary<String, AnyObject>())
playerItem.addOutput(output!)
To save screenshot:
guard let time = self.player?.currentTime() else { return }
guard let pixelBuffer = self.output?.copyPixelBufferForItemTime(time, itemTimeForDisplay: nil) else { return }

let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
let temporaryContext = CIContext(options: nil)
let rect = CGRectMake(0, 0,
                      CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                      CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
let videoImage = temporaryContext.createCGImage(ciImage, fromRect: rect)
let image = UIImage(CGImage: videoImage)

UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
This doesn't seem to work in the simulator, but it works fine on a device. The code is in Swift 2, but it should be straightforward to convert to Objective-C or Swift 1.x.
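For reference, here is a sketch of the same approach in current Swift; the player and output are passed in explicitly here, since the original stores them as properties.

import AVFoundation
import UIKit

// Attach the output once, when the player item is created:
//     let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [:])
//     playerItem.add(output)
func snapshotCurrentFrame(player: AVPlayer, output: AVPlayerItemVideoOutput) -> UIImage? {
    let time = player.currentTime()
    // copyPixelBuffer returns nil when no frame is available for this time.
    guard output.hasNewPixelBuffer(forItemTime: time),
          let pixelBuffer = output.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil) else {
        return nil
    }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}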

People have tried, and failed (like me), apparently because of the nature of HLS. See: http://blog.denivip.ru/index.php/2012/12/screen-capture-in-ios/?lang=en

Related

iOS - Print image to Phomemo M02 Mini Bluetooth Thermal Printer using CoreBluetooth in Swift 5?

I have a Phomemo M02 Mini Bluetooth Thermal Printer that I want to print the below image to from my iOS app:
In my app the above image gets taken from a UIView. I've tried converting that image into data and then sending that data to the printer over Bluetooth using the CoreBluetooth framework, but the printer printed no image and wouldn't stop unrolling its paper unless I unpaired my app from the device. With that said, does anyone know how to properly send image data, or just an image, to a Phomemo M02 Mini Bluetooth Thermal Printer so that it prints it, and so that the printer actually stops unrolling its paper after drawing the image? That would really be appreciated. Thanks.
Here's my code:
The code for turning my UIView into an image:
extension UIView {
    func captureShot() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, self.layer.contentsScale)
        drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image ?? UIImage()
    }
}
The code for sending the image data to the printer:
// jpegData's compressionQuality expects a value between 0.0 and 1.0 (1.0 = best quality).
let imageData = printView.captureShot().jpegData(compressionQuality: 1.0)
if imageData != nil {
    imageView.image = UIImage(data: imageData!)!
    print("image size: \(UIImage(data: imageData!)!.size)")
    globalPeripheral.writeValue(imageData!, for: characteristic, type: .withResponse)
}
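One thing worth checking, as a hedged aside: Core Bluetooth writes are limited by the peripheral's maximum write length, so a multi-kilobyte JPEG passed to a single writeValue call may be rejected or truncated. Below is a rough sketch of chunking the payload; whether the printer accepts raw JPEG bytes at all, rather than its own raster command format, is an assumption left open here.

import CoreBluetooth

// Sketch only: split the payload into chunks no larger than the peripheral's
// advertised write limit and send them sequentially. Whether the Phomemo expects
// raw JPEG bytes or its own raster command format is left open as an assumption.
func send(_ data: Data, to peripheral: CBPeripheral, characteristic: CBCharacteristic) {
    let chunkSize = peripheral.maximumWriteValueLength(for: .withoutResponse)
    var offset = 0
    while offset < data.count {
        let end = min(offset + chunkSize, data.count)
        peripheral.writeValue(data.subdata(in: offset..<end),
                              for: characteristic,
                              type: .withoutResponse)
        offset = end
    }
}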

Capturing a CMSampleBuffer using an RTCAudioSource on iOS

I'm trying to stream a CMSampleBuffer video / audio combo using WebRTC on iOS, but I'm running into trouble trying to capture audio. Video works just fine:
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    print("couldn't get image from buffer :~(")
    return
}
let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)
videoSource.capturer(videoCapturer, didCapture: rtcVideoFrame)
When it comes to audio, I can't see any method on the RTCAudioSource class for capturing audio; any help would be appreciated!
I found a fork of the WebRTC codebase which solves this issue by adding a way for audio samples to be captured by an RTCAudioDeviceModule:
https://github.com/pixiv/webrtc/blob/87.0.4280.142-pixiv0/README.pixiv.en.md
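As a side note, the timeStampNs value used in the video snippet above isn't shown being computed; here is a minimal sketch of deriving it from the sample buffer's presentation timestamp (assuming WebRTC expects nanoseconds, as the parameter name suggests):

import CoreMedia

// Convert the sample buffer's presentation timestamp to nanoseconds for RTCVideoFrame.
func nanoseconds(from sampleBuffer: CMSampleBuffer) -> Int64 {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    return Int64(CMTimeGetSeconds(pts) * 1_000_000_000)
}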

How to run TFlite Object Detection with a single image in Swift?

I got the TensorFlow example app for iOS from here. My model works fine with this TF app in real-time detection, but I'd like to run it with a single image. As far as I can see, the main part that runs the model is:
self.result = self.modelDataHandler?.runModel(onFrame: buffer)
This buffer variable is a CVPixelBuffer; I can obtain it from a video frame using CMSampleBufferGetImageBuffer(), as the TF app does. But my app is not using frames, so I don't have this option.
My captured photo is a UIImage, so I tried to convert it to a CVPixelBuffer to use it with the code above:
let ciImage: CIImage = CIImage(cgImage: (self.image?.cgImage)!)
let buffer: CVPixelBuffer = self.getBuffer(from: ciImage)!
The getBuffer() is:
func getBuffer(from image: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    // Note: this only allocates an empty buffer; nothing ever renders `image` into it.
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.extent.width),
                                     Int(image.extent.height),
                                     kCVPixelFormatType_32BGRA,
                                     attrs,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess else {
        print("Error converting ciImage to CVPixelBuffer")
        return nil
    }
    return pixelBuffer
}
And then run it with:
self.result = self.modelDataHandler?.runModel(onFrame: buffer)
let inferences: [Inference] = self.result!.inferences
let time: Double = self.result!.inferenceTime
As a result I get a time of about 50 or 60 ms, but the inferences come back empty. I don't know if my conversion from UIImage to CVPixelBuffer is right or if there is another error or step that I'm forgetting.
If you have any questions, please ask; any help would be great! Thanks.
I found my problem: my conversion from UIImage to CVPixelBuffer was wrong, and no CIImage is needed. I got the right code for this conversion from this question.
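The linked code isn't reproduced above, so here is a sketch of one common UIImage-to-CVPixelBuffer conversion, drawing the image into a CGContext backed by the buffer's memory (not necessarily the exact code referred to, and the pixel format the model expects may differ):

import UIKit
import CoreVideo

// Sketch: allocate a pixel buffer and render the UIImage directly into it.
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    let size = image.size
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                     kCVPixelFormatType_32ARGB, attrs, &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: Int(size.width),
                                  height: Int(size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

    // Draw the UIImage into the buffer-backed context (flip to match UIKit's coordinate system).
    UIGraphicsPushContext(context)
    context.translateBy(x: 0, y: size.height)
    context.scaleBy(x: 1, y: -1)
    image.draw(in: CGRect(origin: .zero, size: size))
    UIGraphicsPopContext()

    return pixelBuffer
}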

iOS Swift thumbnail generation

I am developing an iOS video trimmer with Swift 4. I am trying to render a horizontal list of video thumbnails spread out over various durations, both from local video files and remote URLs. When I test it in the simulator the thumbnails get generated in less than a second, which is OK. However, when I test this code on an actual device the thumbnail generation is really slow and sometimes crashes. I tried to move the actual image generation to a background thread and then update the UI on the main thread when it completes, but that doesn't seem to work very well and the app crashes after rendering the screen a few times. I am not sure if that is because I am navigating away from the screen while tasks are still trying to complete. I am trying to resolve this problem and have the app generate the thumbnails quicker without crashing. Here is the code that I am using below. I would really appreciate any assistance with this issue.
func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    var offset: Float64 = 0
    for i in 0..<self.IMAGE_COUNT {
        DispatchQueue.global(qos: .userInitiated).async {
            offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
            let thumbnail = thumbnailFromVideo(videoUrl: videoURL,
                                               time: CMTimeMake(Int64(offset), 1))
            DispatchQueue.main.async {
                self.addImageToView(image: thumbnail, view: view, index: i)
            }
        }
    }
}
static func thumbnailFromVideo(videoUrl: URL, time: CMTime) -> UIImage {
    let asset = AVAsset(url: videoUrl)
    let imgGenerator = AVAssetImageGenerator(asset: asset)
    imgGenerator.appliesPreferredTrackTransform = true
    do {
        let cgImage = try imgGenerator.copyCGImage(at: time, actualTime: nil)
        let uiImage = UIImage(cgImage: cgImage)
        return uiImage
    } catch {
    }
    return UIImage()
}
The first sentence of the documentation says not to do what you’re doing! And it even tells you what to do instead.
Generating a single image in isolation can require the decoding of a large number of video frames with complex interdependencies. If you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronously(forTimes:completionHandler:), which employs decoding efficiencies similar to those used during playback.
(Italics mine.)
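A minimal sketch of what that could look like for the thumbnail strip above, reusing IMAGE_COUNT and addImageToView from the question's code (the imageGenerator property is a hypothetical addition to keep the generator alive while it works):

func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    let asset = AVAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    self.imageGenerator = generator  // hypothetical property holding a strong reference

    // Request all times at once so the generator can batch-decode them efficiently.
    let step = duration / Float64(self.IMAGE_COUNT)
    let times: [NSValue] = (0..<self.IMAGE_COUNT).map { i in
        NSValue(time: CMTime(seconds: Float64(i) * step, preferredTimescale: 600))
    }

    generator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, _, result, _ in
        guard result == .succeeded, let cgImage = cgImage else { return }
        // Recover which slot this thumbnail belongs to from the requested time.
        let index = Int((CMTimeGetSeconds(requestedTime) / step).rounded())
        let thumbnail = UIImage(cgImage: cgImage)
        DispatchQueue.main.async {
            self.addImageToView(image: thumbnail, view: view, index: index)
        }
    }
}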

Is HEIC/HEIF Supported By UIImage

I was under the impression that UIImage would support the HEIC/HEIF files introduced in iOS 11. In my testing that does not appear to be the case, though. If I do let image = UIImage(named: "test"), which points to test.heic, then image is nil. If I use an image literal it crashes the app. I'm wondering if this just isn't implemented yet. Thanks.
While Zhao's answer works, it is fairly slow. The code below is about 10-20 times faster. It still doesn't work in the simulator for some reason, though, so keep that in mind.
func convert(url: URL) -> UIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }
    return UIImage(cgImage: cgImage)
}
This is kind of outlined on page 141 from the slides of a WWDC session but wasn't super clear to me before: https://devstreaming-cdn.apple.com/videos/wwdc/2017/511tj33587vdhds/511/511_working_with_heif_and_hevc.pdf
Unfortunately I still haven't been able to figure out a way to use images in the xcassets folder, so you'll either have to include the files outside of your assets or pull them from the web. If anyone knows a way around this, please post.
In Xcode 10.1 (10B61), UIImage(named: "YourHeifImage") works just like other assets.
Interestingly though, when you want to try this out and you AirDrop a HEIF pic from your iPhone's Photos to your Mac, it will get the extension .HEIC (all caps). When you then add that image to your Xcode xcassets, Xcode complains about an incorrect extension:
….xcassets: warning: Ambiguous Content: The image set "IMG_1791" references a file "IMG_1791.HEIC", but that file does not have a valid extension.
If you first change the extension to the lower-case .heic, and then add it to xcassets, all is well.
You can load HEIF via CIImage, then convert it to a UIImage:
CIImage *ciImage = [CIImage imageWithContentsOfURL:url];
imageView.image = [UIImage imageWithCIImage:ciImage];
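In Swift, the same CIImage round-trip would look something like this (assuming url points at the HEIF file):

import UIKit
import CoreImage

// Load the HEIF file through Core Image, then wrap it in a UIImage.
if let ciImage = CIImage(contentsOf: url) {
    imageView.image = UIImage(ciImage: ciImage)
}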
