I'm stitching videos together in an AVMutableCompositionTrack, using this:
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
I'm also adding a CALayer with text and images to the composition, using an animationLayer.
At the beginning, I add 5 seconds of nothing to insert a title using insertEmptyTimeRange.
Up to here, everything's working fine.
Now I want to add some «nothing» to the end of the video, using insertEmptyTimeRange again - but that fails miserably.
CMTime creditsDuration = CMTimeMakeWithSeconds(5, 600);
CMTimeRange creditsRange = CMTimeRangeMake([[compositionVideoTrack asset] duration], creditsDuration);
[compositionVideoTrack insertEmptyTimeRange:creditsRange];
[compositionAudioTrack insertEmptyTimeRange:creditsRange];
NSLog(#"credit-range %f from %f", CMTimeGetSeconds(creditsRange.duration), CMTimeGetSeconds(creditsRange.start));
NSLog(#"Total duration %f", CMTimeGetSeconds([[compositionVideoTrack asset] duration]));
The insertion point is correct (first NSLog), but the total duration doesn't get extended...
Any ideas what I could be doing wrong?
It turns out that it seems to be impossible to append an empty time range to the end of an AVMutableComposition.
This answer saved my life: AVMutableComposition of a Solid Color with No AVAsset
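One way to get the same effect is to append real (blank) footage at the end instead of an empty range. A minimal sketch, assuming a short black clip named blank.mp4 ships in the app bundle (the file name and its presence are assumptions):
// Sketch: append 5 seconds of a bundled black clip instead of an empty range.
NSURL *blankURL = [[NSBundle mainBundle] URLForResource:@"blank" withExtension:@"mp4"];
AVURLAsset *blankAsset = [AVURLAsset URLAssetWithURL:blankURL options:nil];
AVAssetTrack *blankVideoTrack = [[blankAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];

CMTime creditsDuration = CMTimeMakeWithSeconds(5, 600);
CMTime insertAt = [[compositionVideoTrack asset] duration];
NSError *insertError = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, creditsDuration)
                               ofTrack:blankVideoTrack
                                atTime:insertAt
                                 error:&insertError];
Because the composition now contains actual media at the end, its duration is extended as expected.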
In my app I want to take frames from a video in order to filter them. I try to grab a frame from the video at a given time offset. This is my code:
- (UIImage *)getVideoFrameForTime:(NSDate *)time {
    CGImageRef thumbnailImageRef = NULL;
    NSError *igError = nil;
    NSTimeInterval timeinterval = [time timeIntervalSinceDate:self.videoFilterStart];
    CMTime atTime = CMTimeMakeWithSeconds(timeinterval, 1000);
    thumbnailImageRef = [self.assetImageGenerator copyCGImageAtTime:atTime
                                                         actualTime:NULL
                                                              error:&igError];
    if (!thumbnailImageRef) {
        NSLog(@"thumbnailImageGenerationError %@", igError);
    }
    UIImage *image = thumbnailImageRef ? [[UIImage alloc] initWithCGImage:thumbnailImageRef] : nil;
    CGImageRelease(thumbnailImageRef); // copyCGImageAtTime returns a +1 CGImage; release it to avoid a leak
    return image;
}
Unfortunately, I only see frames that fall on integer seconds: 1, 2, 3, and so on, even when the time interval is non-integer (1.5, etc.).
How can I get frames at non-integer intervals?
Thanks to @shallowThought I found the answer in this question: Grab frames from video using Swift.
You just need to add these two lines:
assetImgGenerate.requestedTimeToleranceAfter = kCMTimeZero;
assetImgGenerate.requestedTimeToleranceBefore = kCMTimeZero;
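In context, the generator setup would look something like this, using the question's self.assetImageGenerator property (a sketch; asset is the AVAsset the frames come from, which the question doesn't show):
// Sketch: an image generator configured for exact, sub-second frame times.
self.assetImageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
self.assetImageGenerator.appliesPreferredTrackTransform = YES;
self.assetImageGenerator.requestedTimeToleranceBefore = kCMTimeZero; // no snapping to the nearest keyframe
self.assetImageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
Zero tolerance forces exact-time decoding, which is slower than the default keyframe snapping but returns the frame you actually asked for.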
Use this project to get more frame details. The corresponding project on GitHub: iFrameExtractor.git
If I remember correctly, NSDate's accuracy only goes up to the second, which would explain why frames are only taken at integer seconds. You'll have to use a different type of input to get frames at non-integer seconds.
I am creating a custom video player using AVPlayer in iOS (Objective-C). I have a settings button which, when tapped, will display the available video dimensions and audio formats.
Below is the design:
So, I want to know:
1) How do I get the available dimensions from a video URL (not a local video)?
2) Even if I am able to get the dimensions, can I switch between them while the video is playing in AVPlayer?
Can anyone give me a hint?
If it is not an HLS (streaming) video, you can get the resolution with the following code.
Sample code:
// player is playing
if (_player.rate != 0 && _player.error == nil)
{
    AVAssetTrack *track = [[_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    if (track != nil)
    {
        CGSize naturalSize = [track naturalSize];
        naturalSize = CGSizeApplyAffineTransform(naturalSize, track.preferredTransform);
        NSInteger width = (NSInteger)fabs(naturalSize.width);   // the preferred transform can make the size negative
        NSInteger height = (NSInteger)fabs(naturalSize.height);
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
    }
}
However, for HLS video the code above does not work, so I solved it in a different way: while the video is playing, I grab the current frame and read its dimensions.
Here is the sample code:
// player is playing
if (_player.rate != 0 && _player.error == nil)
{
    CMTime currentTime = _player.currentItem.currentTime;
    CVPixelBufferRef buffer = [_videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil];
    if (buffer != NULL)
    {
        NSInteger width = CVPixelBufferGetWidth(buffer);
        NSInteger height = CVPixelBufferGetHeight(buffer);
        CVPixelBufferRelease(buffer); // copyPixelBufferForItemTime returns a +1 buffer
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
    }
}
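This assumes _videoOutput is an AVPlayerItemVideoOutput that was attached to the player item beforehand. A minimal sketch of that setup (the pixel format is just a common choice):
// Attach a video output so pixel buffers can be copied during playback.
NSDictionary *attributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
_videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
[_player.currentItem addOutput:_videoOutput];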
As you have mentioned that it is not a local video, you can call some web service to return the available dimensions for that particular video. After that, change the URL to the other available variant and seek to the current position.
I need to change the live video resolution to a width and height entered by the user. Sorry for my question, but I have never done this before.
Please help.
You can change the video resolution by using AVMutableVideoComposition and AVAssetExportSession.
First, create an AVMutableVideoComposition as shown below.
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(YOUR_WIDTH, YOUR_HEIGHT);
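Note that the export generally also needs at least one composition instruction covering the asset, with a layer instruction that scales the source track into the new render size. A sketch, assuming asset is the AVAsset being exported (the variable name is an assumption):
// One instruction spanning the asset, scaling the source into the new render size.
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGSize naturalSize = videoTrack.naturalSize;

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:CGAffineTransformMakeScale(YOUR_WIDTH / naturalSize.width,
                                                          YOUR_HEIGHT / naturalSize.height)
                        atTime:kCMTimeZero];

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
instruction.layerInstructions = @[layerInstruction];
videoComposition.instructions = @[instruction];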
Then, create an AVAssetExportSession:
exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
And write the completion block for the exporter.
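For completeness, a sketch of kicking off the export (the output URL and file type here are placeholders):
exporter.outputURL = outputURL;                     // e.g. a file URL in NSTemporaryDirectory()
exporter.outputFileType = AVFileTypeQuickTimeMovie; // or AVFileTypeMPEG4
[exporter exportAsynchronouslyWithCompletionHandler:^{
    if (exporter.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished: %@", exporter.outputURL);
    } else {
        NSLog(@"Export failed: %@", exporter.error);
    }
}];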
Hope this helps.
If you are using OpenTok, you can use a custom video capturer that is mostly identical to the one found in this sample. The only difference is that you would need to additionally write code to scale the image from the CVPixelBuffer (called imageBuffer) to the size which your user is setting.
One technique to scale the image would be to use the Core Image APIs as shown here: https://stackoverflow.com/a/8494304/305340
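One Core Image approach along those lines is the CILanczosScaleTransform filter. A sketch, where imageBuffer is the captured CVPixelBuffer and targetWidth is the user's chosen width (both names are assumptions):
// High-quality rescale of a CVPixelBuffer with Core Image.
CIImage *input = [CIImage imageWithCVPixelBuffer:imageBuffer];
CGFloat scale = targetWidth / (CGFloat)CVPixelBufferGetWidth(imageBuffer);
CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[lanczos setValue:input forKey:kCIInputImageKey];
[lanczos setValue:@(scale) forKey:kCIInputScaleKey];
[lanczos setValue:@(1.0) forKey:kCIInputAspectRatioKey];
CIImage *scaled = lanczos.outputImage; // render this into the destination buffer with a CIContext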
I want to implement slow-motion video like the default Slo-Mo functionality in the Camera app. I used the following code and it works fine for the video,
but the audio track of that video is not handled properly.
double videoScaleFactor = 8.0;
[compositionAudioTrack scaleTimeRange:CMTimeRangeMake(kCMTimeZero, videoDuration)
                           toDuration:CMTimeMake(videoDuration.value * videoScaleFactor, videoDuration.timescale)];
[compositionVideoTrack scaleTimeRange:CMTimeRangeMake(kCMTimeZero, videoDuration)
                           toDuration:CMTimeMake(videoDuration.value * videoScaleFactor, videoDuration.timescale)];
This works properly for the video slow motion, but the audio slow motion does not work.
Please help me.
I found a solution for the audio slow motion.
double videoScaleFactor = 8.0;
[compositionAudioTrack scaleTimeRange:CMTimeRangeMake(kCMTimeZero, videoDuration)
                           toDuration:CMTimeMake(videoDuration.value * videoScaleFactor, videoDuration.timescale)];
The scaling works properly, but the audio does not play back correctly in AVPlayer; for that you have to set the following property on the AVPlayerItem:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition]; // assuming composition is the scaled AVMutableComposition
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed;
I am using AVFoundation and AVCaptureVideoDataOutputSampleBufferDelegate to record a video.
I need to implement zoom functionality in the video being recorded. I am using the following delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I am using this to get the video frames because I need to add text and images to them before appending them to the AVAssetWriterInput, using
[assetWriterVideoIn appendSampleBuffer:sampleBuffer]
The only way I can think of to perform the zoom is to scale and crop the (CMSampleBufferRef)sampleBuffer that I get from the delegate method.
Please help me out on this. I need to know the possible ways to scale and crop a CMSampleBufferRef.
One solution is to convert the CMSampleBufferRef to a CIImage, scale that, render it back into a CVPixelBufferRef, and append that.
You can see how to do that here, which contains the code structure:
Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?
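Roughly, inside the delegate callback it would look like this (a sketch; adaptor is an assumed AVAssetWriterInputPixelBufferAdaptor attached to assetWriterVideoIn, and ciContext is a reusable CIContext):
// Zoom the incoming frame 2x, render it into a fresh pixel buffer, and append it.
CVPixelBufferRef srcBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *frame = [CIImage imageWithCVPixelBuffer:srcBuffer];
frame = [frame imageByApplyingTransform:CGAffineTransformMakeScale(2.0, 2.0)];

CVPixelBufferRef dstBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &dstBuffer);
if (dstBuffer != NULL) {
    // Rendering the enlarged image into a same-sized buffer effectively crops it (add a translation for a centered zoom).
    [ciContext render:frame toCVPixelBuffer:dstBuffer];
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [adaptor appendPixelBuffer:dstBuffer withPresentationTime:pts];
    CVPixelBufferRelease(dstBuffer);
}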
Another alternative is to scale the video using Layer Instructions like:
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
layerInstruction.trackID = mutableCompositionTrack.trackID;
[layerInstruction setTransform:CGAffineTransformMakeScale(2.0f,2.0f) atTime:kCMTimeZero];
This tells the composition to scale mutableCompositionTrack (or whatever variable name you use for the track) by a factor of 2.0, starting at the beginning of the video.
Now when you composite the video, wrap the layer instruction in an AVMutableVideoCompositionInstruction, add that to the video composition's instructions array, and you'll get your scaling without needing to manipulate the CMSampleBuffer (it will also be a lot faster).
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration); // cover the whole of mutableComposition (the AVMutableComposition the track belongs to)
instruction.layerInstructions = @[layerInstruction];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(1280, 720);
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.instructions = @[instruction];
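Then hand the composition to whatever renders it, for example an AVAssetExportSession or AVPlayerItem (the exporter and playerItem names below are assumptions):
exporter.videoComposition = videoComposition;   // when exporting
playerItem.videoComposition = videoComposition; // when playing back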