I'm trying to get the last frame of a video. The last frame, not the last second (my videos are very fast; a single second can contain several different scenes).
I've written this code for testing:
private func getLastFrame(from item: AVPlayerItem) -> UIImage? {
    let imageGenerator = AVAssetImageGenerator(asset: item.asset)
    // Zero tolerance: ask for exactly the requested frame, nothing earlier or later.
    imageGenerator.requestedTimeToleranceAfter = kCMTimeZero
    imageGenerator.requestedTimeToleranceBefore = kCMTimeZero
    let composition = AVVideoComposition(propertiesOf: item.asset)
    // Request the very end of the asset, using the composition's frame timescale.
    let time = CMTimeMakeWithSeconds(item.asset.duration.seconds, composition.frameDuration.timescale)
    do {
        let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        print("\(error)")
        return nil
    }
}
But I always receive this error when I try to execute it:
Domain=AVFoundationErrorDomain Code=-11832 "Cannot Open"
UserInfo={NSUnderlyingError=0x170240180 {Error
Domain=NSOSStatusErrorDomain Code=-12431 "(null)"},
NSLocalizedFailureReason=This media cannot be used.,
NSLocalizedDescription=Cannot Open}
If I remove the requestedTimeTolerance settings (so they fall back to the default, infinite tolerance), everything works, but I always receive a brighter image than what I see in the video (maybe because it is not the latest frame that was captured? Or does the CGImage → UIImage conversion have some issue?)
Questions:
Why do I receive an error when zero tolerance is specified? How can I get exactly the last frame?
Why might the captured images be brighter than the video? For example, if I write this code:
self.videoLayer.removeFromSuperlayer()
self.backgroundImageView.image = getLastFrame(from: playerItem)
I see "brightness jump" (video was darker, image is brighter).
Update 1
I found a related issue: AVAssetImageGenerator fails at copying image, but that question is unsolved.
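One variation that is sometimes suggested for this situation (not from the original post) is to request the start of the final frame rather than the asset's duration, since the duration can land just past the last video sample. A minimal sketch, assuming the asset has a video track:
private func lastFrameTime(of asset: AVAsset) -> CMTime {
    // Fall back to 30 fps if the track's nominal frame rate is unavailable.
    let frameRate = asset.tracks(withMediaType: AVMediaTypeVideo).first?.nominalFrameRate ?? 30
    let frameDuration = CMTimeMake(1, Int32(frameRate.rounded()))
    return CMTimeSubtract(asset.duration, frameDuration)
}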
Related
I am developing an iOS video trimmer with Swift 4. I am trying to render a horizontal list of video thumbnails spread out over various durations, both from local video files and from remote URLs. When I test it in the simulator, the thumbnails get generated in less than a second, which is OK. However, when I test this code on an actual device, the thumbnail generation is really slow and sometimes crashes.
I tried to move the actual image generation to a background thread and then update the UI on the main thread when it completed, but that doesn't seem to work very well, and the app crashes after rendering the screen a few times. I am not sure if that is because I am navigating away from the screen while tasks are still trying to complete.
I am trying to resolve this problem and have the app generate the thumbnails quicker without crashing. Here is the code that I am using. I would really appreciate any assistance with this issue.
func renderThumbnails(view: UIView, videoURL: URL, duration: Float64) {
    for i in 0..<self.IMAGE_COUNT {
        DispatchQueue.global(qos: .userInitiated).async {
            // Compute the offset locally; a var shared across these
            // concurrent blocks would be a data race.
            let offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
            let thumbnail = thumbnailFromVideo(videoUrl: videoURL,
                                               time: CMTimeMake(Int64(offset), 1))
            DispatchQueue.main.async {
                self.addImageToView(image: thumbnail, view: view, index: i)
            }
        }
    }
}
static func thumbnailFromVideo(videoUrl: URL, time: CMTime) -> UIImage {
    let asset = AVAsset(url: videoUrl)
    let imgGenerator = AVAssetImageGenerator(asset: asset)
    imgGenerator.appliesPreferredTrackTransform = true
    do {
        let cgImage = try imgGenerator.copyCGImage(at: time, actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        // Swallowing the error hides failures; at least log it.
        print("thumbnail generation failed: \(error)")
    }
    return UIImage()
}
The first sentence of the documentation says not to do what you’re doing! And it even tells you what to do instead.
Generating a single image in isolation can require the decoding of a large number of video frames with complex interdependencies. If you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronously(forTimes:completionHandler:), which employs decoding efficiencies similar to those used during playback.
(Italics mine.)
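For illustration, a minimal sketch of that batch API applied to the thumbnail code above; IMAGE_COUNT, addImageToView, and the 600 timescale are assumptions carried over from the question:
func renderThumbnailsAsync(view: UIView, videoURL: URL, duration: Float64) {
    let asset = AVAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    // Build every requested time up front so the generator can decode in one pass.
    let times: [NSValue] = (0..<self.IMAGE_COUNT).map { i in
        let offset = Float64(i) * (duration / Float64(self.IMAGE_COUNT))
        return NSValue(time: CMTimeMakeWithSeconds(offset, 600))
    }
    generator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, _, result, _ in
        guard result == .succeeded, let cgImage = cgImage,
            let index = times.index(of: NSValue(time: requestedTime)) else { return }
        DispatchQueue.main.async {
            self.addImageToView(image: UIImage(cgImage: cgImage), view: view, index: index)
        }
    }
}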
I want to get a thumbnail from a video URL. For that I followed this code, which is recommended in most SO answers:
import AVFoundation

private func thumbnailForVideoAtURL(url: NSURL) -> UIImage? {
    let asset = AVAsset(URL: url)
    let assetImageGenerator = AVAssetImageGenerator(asset: asset)
    // Grab a frame near the start (at most 2 time units into the asset).
    var time = asset.duration
    time.value = min(time.value, 2)
    do {
        let imageRef = try assetImageGenerator.copyCGImageAtTime(time, actualTime: nil)
        return UIImage(CGImage: imageRef)
    } catch let error {
        print(error)
        return nil
    }
}
Then my viewDidLoad method is like this:
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    let url = NSURL(string: "https://www.youtube.com/watch?v=HrKwx8EdSAE")
    if let thumbnailImage = thumbnailForVideoAtURL(url!) {
        print("hello")
        self.imageView.image = thumbnailImage
    }
    self.view.addSubview(imageView)
}
But I am getting this error for all of my video URLs:
Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSUnderlyingError=0x7fb4a3c04870 {Error Domain=NSOSStatusErrorDomain Code=-12939 "(null)"}, NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped}
Can anyone suggest where I am going wrong, or is this code only for local videos? Also, if there are any alternatives, please suggest them.
I had to go to a lot of places to find the solution for this.
Basically you first have to get the list of video IDs (if you need to) and then use the following to get a page that returns JSON with a link to the actual thumbnail image.
Here is the API link for thumbnails:
https://api.dailymotion.com/video/x26ezrb?fields=thumbnail_medium_url,thumbnail_small_url,thumbnail_large_url
In the above URL, "x26ezrb" is the video ID and the "fields" attribute defines the size of the thumbnail image. Use the video IDs to get their respective thumbnails.
If you hit the link "https://api.dailymotion.com/video/x26ezrb?fields=thumbnail_large_url" you'll get JSON like this:
{"thumbnail_large_url":"http://s1.dmcdn.net/HRnTi/x240-oi8.jpg"}
Now all you have to do is parse this JSON to get the link, and then use that link to fetch the thumbnail.
NOTE: you have to convert the thumbnail link from "http" to "https", otherwise it won't return anything.
Get the URL from the JSON, split it into components with the separator ":", and then combine "https:" with the second part of the split URL string.
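For illustration, a minimal Swift sketch of those steps, using the sample video ID "x26ezrb" from above (error handling omitted):
let api = URL(string: "https://api.dailymotion.com/video/x26ezrb?fields=thumbnail_large_url")!
URLSession.shared.dataTask(with: api) { data, _, _ in
    guard let data = data,
        let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
        let link = json["thumbnail_large_url"] as? String else { return }
    // Split on ":" and rebuild with "https:", as the note above requires.
    let parts = link.components(separatedBy: ":")
    let secureLink = "https:" + parts.dropFirst().joined(separator: ":")
    print(secureLink) // download the thumbnail image from this URL
}.resume()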
I'm having a weird problem moving a .mov file created by my app from the documents folder to the camera roll. A bit of background:
The app makes time-lapse movies. It works specifically with the devices that have a 12-megapixel 4032x3024 sensor. It creates the movies in the app's documents folder. The movies can be saved as either 4K or HD. They can also be saved as a 4:3 aspect-ratio movie of the entire sensor, or a 16:9 crop of the sensor. If the user wants the movie to be stored in the Camera Roll of the device, they can set that option. My problem occurs when trying to move a full-size movie (4032x3024) from the app's documents folder to the Camera Roll. I get this error:
Error Domain=NSCocoaErrorDomain Code=-1 "(null)"
The movie is fine; it's still sitting in the documents folder. It just can't be copied to the Camera Roll. If I do this same operation through the same code with any of the other sizes, there's no problem. A 4:3 HD movie (1440x1080) works fine, a 16:9 HD movie (1920x1080) works fine, a 16:9 4K movie (3840x2160) works fine. It's just the 4:3 4K size (4032x3024) that generates this error when I try to move it.
This is the code that does the move:
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: cameraRollURL!)
}, completionHandler: { success, error in
    // success == false here, with the NSCocoaErrorDomain error above
})
The URL is OK because it works with the other sizes just fine.
I fixed this when I switched from a pull approach to a push approach (as seen from the writer's side).
Pull approach: I have an array of UIImages (captured from the camera) and feed the writer whenever it is ready to process the next image. Push approach: I receive single UIImages (captured from the camera) one by one, and feed the writer only if it is ready to process the next image.
I'm not sure of the exact cause. Maybe the message loop gets processed between AVAssetWriter calls.
Advantage: you do not allocate gigabytes of memory for a UIImage array when capturing longer video.
Disadvantage: the writer may not be ready to write a sample if capturing happens too fast, so frames can be dropped, because processing happens in real time.
Swift 4:
func initVideo(videoSettings: [String: Any]) -> (assetWriter: AVAssetWriter, writeInput: AVAssetWriterInput, bufferAdapter: AVAssetWriterInputPixelBufferAdaptor)? {
    // Remove any leftover file from a previous run.
    if FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath) {
        guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
            print("remove path failed")
            return nil
        }
    }
    let assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileType.mov)
    let writeInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
    assert(assetWriter.canAdd(writeInput), "add failed")
    assetWriter.add(writeInput)
    let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
    let bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writeInput, sourcePixelBufferAttributes: bufferAttributes)
    return (assetWriter, writeInput, bufferAdapter)
}
func exportVideo_start(assetWriter: AVAssetWriter) -> DispatchQueue {
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: CMTime.zero)
    let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
    return mediaInputQueue
}
func exportVideo_write(videoSettings: [String: Any], img: UIImage, assetWriter: AVAssetWriter, writeInput: AVAssetWriterInput, bufferAdapter: AVAssetWriterInputPixelBufferAdaptor, mediaInputQueue: DispatchQueue, timestamp: CMTime) {
    // If the input is not ready, the frame is simply dropped
    // (the disadvantage noted above).
    if writeInput.isReadyForMoreMediaData {
        var sampleBuffer: CVPixelBuffer?
        autoreleasepool {
            sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage!, videoSettings: videoSettings)
        }
        bufferAdapter.append(sampleBuffer!, withPresentationTime: timestamp)
        print("Adding frame at \(timestamp)")
    }
}
func exportVideo_end(assetWriter: AVAssetWriter, writeInput: AVAssetWriterInput) {
    writeInput.markAsFinished()
    assetWriter.finishWriting {
        DispatchQueue.main.sync {
            print("Finished writing")
            ImagesToVideoUtils.saveToCameraRoll(videoURL: ImagesToVideoUtils.fileURL)
        }
    }
}
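For clarity, here is how the three functions above might be chained; videoSettings, capturedImages, and the 30 fps timestamp step are assumptions, not part of the answer:
guard let (writer, input, adapter) = initVideo(videoSettings: videoSettings) else { return }
let queue = exportVideo_start(assetWriter: writer)
var timestamp = CMTime.zero
for image in capturedImages { // in the push approach, call this as each frame arrives
    exportVideo_write(videoSettings: videoSettings, img: image, assetWriter: writer,
                      writeInput: input, bufferAdapter: adapter,
                      mediaInputQueue: queue, timestamp: timestamp)
    timestamp = CMTimeAdd(timestamp, CMTime(value: 1, timescale: 30))
}
exportVideo_end(assetWriter: writer, writeInput: input)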
- (void)saveVideoPath:(NSString *)videoPath {
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        NSURL *url = [NSURL fileURLWithPath:videoPath];
        [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
            [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:url];
        } completionHandler:^(BOOL success, NSError * _Nullable error) {
            if (success) {
                NSLog(@"succ");
            }
            if (error) {
                NSLog(@"%@", error);
            }
        }];
    });
}
I have a video with these specs:
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Duration: 3:00
Data Rate: 1.17 Mbit/s
While experimenting, I performed a removeTimeRange(range: CMTimeRange) on every other frame (total frames = 4225). This results in the video becoming 2x faster, so the duration becomes 1:30.
However, when I export the video, it becomes 12x larger in size, i.e. 325 MB. This makes sense, since this technique decomposes the video into about 2112 pieces and stitches them back together. Apparently, in doing so, the compression among individual frames is lost, which causes the enormous size.
This causes stuttering when the video is played with an AVPlayer, and therefore poor performance.
Question: How can I apply some kind of compression while stitching the frames back together, so that the video plays smoothly and is also smaller?
I only want a pointer in the right direction. Thanks!
CODE
1) Creating an AVMutableComposition from Asset & Configuring it
func configureAssets() {
    // Precise timing is needed for frame-accurate edits.
    let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]
    let videoAsset = AVURLAsset(url: Bundle.main.url(forResource: "Push", withExtension: "mp4")!, options: options)
    let videoAssetSourceTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo).first! as AVAssetTrack
    let comp = AVMutableComposition()
    let videoCompositionTrack = comp.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    do {
        try videoCompositionTrack.insertTimeRange(
            CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
            of: videoAssetSourceTrack,
            at: kCMTimeZero)
        deleteSomeFrames(from: comp)
        videoCompositionTrack.preferredTransform = videoAssetSourceTrack.preferredTransform
    } catch { print(error) }
    asset = comp
}
2) Deleting every other frame.
func deleteSomeFrames(from asset: AVMutableComposition) {
    let fps = Int32(asset.tracks(withMediaType: AVMediaTypeVideo).first!.nominalFrameRate)
    let sumTime = Int32(asset.duration.value) / asset.duration.timescale
    let totalFrames = sumTime * fps
    let totalTime = Float(CMTimeGetSeconds(asset.duration))
    let frameDuration = Double(totalTime / Float(totalFrames))
    let frameTime = CMTime(seconds: frameDuration, preferredTimescale: 1000)
    for frame in Swift.stride(from: 0, to: totalFrames, by: 2) {
        let timing = CMTimeMultiplyByFloat64(frameTime, Float64(frame))
        print("Asset Duration = \(CMTimeGetSeconds(asset.duration))")
        print("")
        let timeRange = CMTimeRange(start: timing, duration: frameTime)
        asset.removeTimeRange(timeRange)
    }
    print("duration after time removed = \(CMTimeGetSeconds(asset.duration))")
}
3) Saving the file
func createFileFromAsset(_ asset: AVAsset) {
    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let filePath = documentsDirectory.appendingPathComponent("rendered-vid.mp4")
    if let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) {
        exportSession.outputURL = filePath
        exportSession.shouldOptimizeForNetworkUse = true
        exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
        exportSession.outputFileType = AVFileTypeQuickTimeMovie
        exportSession.exportAsynchronously {
            print("finished: \(filePath) : \(exportSession.status.rawValue)")
            if exportSession.status == .failed {
                print("Export failed -> Reason: \(exportSession.error!.localizedDescription)")
                print(exportSession.error!)
            }
        }
    }
}
4) Finally update the ViewController to play the new Composition!
override func viewDidLoad() {
    super.viewDidLoad()
    // Create the AVPlayer and play the composition
    assetConfig.configureAssets()
    let snapshot: AVComposition = assetConfig.asset as! AVComposition
    let playerItem = AVPlayerItem(asset: snapshot)
    player = AVPlayer(playerItem: playerItem)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = CGRect(x: 0, y: 0, width: self.view.frame.width, height: self.view.frame.height)
    self.view.layer.addSublayer(playerLayer)
    player?.play()
}
If you are using AVMutableComposition, you will notice that each composition may contain one or more AVCompositionTracks (or AVMutableCompositionTracks), and the best way to edit your composition is to operate on each track, not on the whole composition.
But if your purpose is to speed up your video's rate, editing tracks is not necessary.
So I will try my best to tell you what I know about your question.
About video stuttering during playback
Possible reason for the stuttering
Notice that you are using the method removeTimeRange(range: CMTimeRange). This method will remove the given time range from the composition, yes, but it will NOT automatically fill the empty space left by each removed range.
Visualize Example
[F stands for Frame, E stands for Empty]
org_video --> F-F-F-F-F-F-F-F...
after removing the time range, the composition will be like this
after_video --> F-E-F-E-F-E-F-E...
and you might think that the video will look like this
target_video --> F-F-F-F...
This is the most likely reason for the stuttering during playback.
Suggested solution
So if you want to shorten your video and make its rate faster (or slower), you probably need to use the method scaleTimeRange:toDuration:
Example
AVMutableComposition *project; // if this video is 200s
// Scale: halving the duration doubles the playback rate
[project scaleTimeRange:CMTimeRangeMake(kCMTimeZero, project.duration)
             toDuration:CMTimeMakeWithSeconds(100, project.duration.timescale)];
This method makes the whole composition play faster or slower.
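For reference, the same call in Swift, applied to the comp composition from the question in place of deleteSomeFrames(from:); halving the duration doubles the playback rate:
let fullRange = CMTimeRangeMake(kCMTimeZero, comp.duration)
let halfDuration = CMTimeMultiplyByFloat64(comp.duration, 0.5)
comp.scaleTimeRange(fullRange, toDuration: halfDuration)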
About the file size
A video file's size is mainly affected by its bit rate and format type. If you're using H.264, the most likely cause of the size increase is the bit rate.
In your code, you are using AVAssetExportSession:
AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
and you gave it the preset AVAssetExportPresetHighestQuality.
In my own project, after using this preset, the exported video's bit rate was 20–30 Mbps, no matter the source video's bit rate. And Apple's presets do not let you set the bit rate manually, so...
Possible Solution
There is a third-party library called SDAVAssetExportSession. This session allows you to fully configure your export; you might want to study its code to learn about customizing an export session's settings.
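For illustration, here is a rough Swift sketch of the kind of configuration that library exposes, based on its README. Treat the session's property names as assumptions that may vary between versions; the compression keys themselves are standard AVFoundation keys, and composition plus the 1280x544 size come from the question:
let outputURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("compressed-vid.mp4")
let encoder = SDAVAssetExportSession(asset: composition) // third-party; API per its README
encoder.outputFileType = AVFileTypeMPEG4
encoder.outputURL = outputURL
encoder.videoSettings = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 544,
    AVVideoCompressionPropertiesKey: [
        // Pin the average bit rate near the source's 1.17 Mbit/s.
        AVVideoAverageBitRateKey: 1200000
    ]
]
encoder.exportAsynchronously {
    print("export finished with status \(encoder.status.rawValue)")
}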
That's what I can tell you right now. I hope it helps :>
I'm developing an app that allows users to edit photos using PhotoKit. I was previously saving the edited photo to disk as a JPEG. I would like to avoid converting to JPEG and have implemented the modifications needed to do that. It works great for photos taken with the camera, but if you try to edit a screenshot, the PHPhotoLibrary.sharedPhotoLibrary().performChanges block fails and logs: The operation couldn't be completed. (Cocoa error -1.). I am not sure why this causes the performChanges block to fail. What have I done wrong here?
I've created a sample app available to download that demonstrates the problem, and I've included the relevant code below. The app attempts to edit the newest photo in your photo library. If it succeeds it will prompt for access to edit the photo, otherwise nothing will happen and you'll see the console log. To reproduce the issue, take a screenshot then run the app.
Current code that works with screenshots:
let jpegData: NSData = outputPhoto.jpegRepresentationWithCompressionQuality(0.9)
let contentEditingOutput = PHContentEditingOutput(contentEditingInput: self.input)
var error: NSError?
let success = jpegData.writeToURL(contentEditingOutput.renderedContentURL, options: NSDataWritingOptions.AtomicWrite, error: &error)
if success {
    return contentEditingOutput
} else {
    return nil
}
Replacement code that causes screenshots to fail:
let url = self.input.fullSizeImageURL
let orientation = self.input.fullSizeImageOrientation
var inputImage = CIImage(contentsOfURL: url)
inputImage = inputImage.imageByApplyingOrientation(orientation)
let outputPhoto = createOutputImageFromInputImage(inputImage)!

let originalImageData = NSData(contentsOfURL: self.input.fullSizeImageURL)!
let imageSource = CGImageSourceCreateWithData(originalImageData, nil)
let dataRef = CFDataCreateMutable(nil, 0)
// CGImageSourceGetType automatically selects JPG, PNG, etc. based on the original format
let destination = CGImageDestinationCreateWithData(dataRef, CGImageSourceGetType(imageSource), 1, nil)

struct ContextStruct {
    static var ciContext: CIContext? = nil
}

if ContextStruct.ciContext == nil {
    let eaglContext = EAGLContext(API: .OpenGLES2)
    ContextStruct.ciContext = CIContext(EAGLContext: eaglContext)
}

let cgImage = ContextStruct.ciContext!.createCGImage(outputPhoto, fromRect: outputPhoto.extent())
CGImageDestinationAddImage(destination, cgImage, nil)

if CGImageDestinationFinalize(destination) {
    let contentEditingOutput = PHContentEditingOutput(contentEditingInput: self.input)
    var error: NSError?
    let imageData: NSData = dataRef
    let success = imageData.writeToURL(contentEditingOutput.renderedContentURL, options: .AtomicWrite, error: &error)
    if success {
        // it does succeed
        return contentEditingOutput
    } else {
        return nil
    }
}
The problem happens because adjusted photos are always saved as JPG files, while screenshots are in fact PNG files.
It occurred to me while I was debugging your sample project: in the PhotoEditor, contentEditingOutput.renderedContentURL is a URL to a JPG, while if you examine the result of CGImageSourceGetType(imageSource), it is clearly a PNG (it returns the PNG UTI: public.png).
So I went and read the documentation for renderedContentURL, which states that when editing a photo asset, the altered image is written in JPEG format - which clearly won't work if your image is a PNG. This leads me to think that Apple doesn't support editing PNG files, or doesn't want you to. Go figure..
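To make that check concrete, here is a small sketch with a hypothetical helper (written in current Swift, unlike the question's code) that inspects the source UTI before committing to the JPEG-only renderedContentURL path:
import ImageIO
import MobileCoreServices

// Hypothetical helper: true when the data is a PNG (e.g. a screenshot).
func isPNG(_ imageData: CFData) -> Bool {
    guard let source = CGImageSourceCreateWithData(imageData, nil),
        let uti = CGImageSourceGetType(source) else { return false }
    return UTTypeConformsTo(uti, kUTTypePNG)
}

// When isPNG(originalImageData) is true, fall back to the JPEG
// re-encoding path from the snippet that works with screenshots.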