I'm instantiating the AVPlayerItemVideoOutput like so:
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)])
And retrieving the pixelBuffers like this:
@objc func displayLinkDidRefresh(link: CADisplayLink) {
    let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
    if videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
        if let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
            // this is where I read the buffer's width/height
        }
    }
}
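For context, here is roughly how the output and display link are wired up (a minimal sketch; it assumes an existing player with a current item and the usual AVFoundation / UIKit imports):
player.currentItem?.add(videoOutput)

let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkDidRefresh(link:)))
displayLink.add(to: .main, forMode: .common)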
But for some reason, CVPixelBufferGetWidth(pixelBuffer) and CVPixelBufferGetHeight(pixelBuffer) always return 1280x720 when the video was taken with the iPhone's camera (landscape or portrait): always height = 1280, width = 720, even if the video is 4K. If I load a square video from Instagram, or any other video downloaded from the internet (not created directly with the Camera app), the width and height are printed correctly as long as the resolution is below 720p. But at a higher resolution, for example 1008x1792, CVPixelBufferGetHeight(pixelBuffer) returns 1280.
Videos taken with the camera always come through at that lower resolution. I tried the 4K and 1080p settings (you can change that in iOS Settings > Camera); still, even at 1080p, I get 1280x720 pixel buffers.
I figured out that the UIImagePickerController I was using was, by default, transcoding the selected library video to a Medium preset, which in this case was 1280x720.
I ended up changing these properties of the picker:
picker.videoQuality = .typeHigh
picker.videoExportPreset = AVAssetExportPresetHighestQuality
The property that actually makes the difference is videoExportPreset; I don't know what the other one does, even though the documentation says it applies both when you record a video and when you pick one.
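For reference, a minimal sketch of the picker setup that avoids the default Medium transcode (assuming a view controller that already conforms to the picker delegate protocols):
import UIKit
import AVFoundation
import MobileCoreServices

let picker = UIImagePickerController()
picker.sourceType = .photoLibrary
picker.mediaTypes = [kUTTypeMovie as String] // videos only
picker.videoQuality = .typeHigh
// This is the property that actually prevents the 1280x720 transcode.
picker.videoExportPreset = AVAssetExportPresetHighestQuality
picker.delegate = self
present(picker, animated: true)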
Related
I'm working with AVFoundation, importing videos from the user's library.
I need the video's real dimensions. After making a screen recording on my iPhone 7, the video size should be (750.0, 1334.0).
When using AVAsset track's naturalSize, I'm always getting (720.0, 1280.0).
How can I get the real video dimensions?
Here's the code I'm using:
guard let track = tracks(withMediaType: AVMediaType.video).first else { return .zero }
return track.naturalSize.applying(track.preferredTransform)
A video recorded with an iOS device is sized according to the setting in Settings > Camera > Record Video, ignoring the device's screen resolution.
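If you need the dimensions as they would be displayed, here is a minimal sketch (note that applying preferredTransform can yield negative width/height components, hence the abs):
import AVFoundation
import CoreGraphics

// Display size of the first video track, corrected for rotation metadata.
func displaySize(of asset: AVAsset) -> CGSize {
    guard let track = asset.tracks(withMediaType: .video).first else { return .zero }
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}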
Let me see if I understood it correctly.
On the most advanced current hardware, iOS allows me to record at the following frame rates: 30, 60, 120, and 240 fps.
But these frame rates behave differently. If I shoot at 30 or 60 fps, I expect the video files created from those recordings to play at 30 and 60 fps respectively.
But if I shoot at 120 or 240 fps, I expect the video files created from those recordings to play at 30 fps, or I will not see the slow motion.
A few questions:
1. Am I right?
2. Is there a way to shoot at 120 or 240 fps and play them back at 120 and 240 fps respectively? I mean, play at the fps the videos were shot at, without slow motion?
3. How do I control that framerate when I write the file?
I am creating the AVAssetWriter input like this...
NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoCompressionPropertiesKey : @{AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                                               AVVideoMaxKeyFrameIntervalKey : @(1)}
                                           };
_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
and there is no apparent way to control that.
NOTE: I have tried different numbers where that 1 is. I have tried 1.0/fps, I have tried fps and I have removed the key. No difference.
This is how I set up the AVAssetWriter:
AVAssetWriter *newAssetWriter = [[AVAssetWriter alloc] initWithURL:_movieURL
                                                          fileType:AVFileTypeQuickTimeMovie
                                                             error:&error];
_assetWriter = newAssetWriter;
_assetWriter.shouldOptimizeForNetworkUse = NO;

CGFloat videoWidth = size.width;
CGFloat videoHeight = size.height;
NSUInteger numPixels = videoWidth * videoHeight;
NSUInteger bitsPerSecond;

// Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
// if ( numPixels < (640 * 480) )
//     bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
// else
CGFloat bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh.

bitsPerSecond = numPixels * bitsPerPixel;
NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoCompressionPropertiesKey : @{AVVideoAverageBitRateKey : @(bitsPerSecond)}
                                           };
if (![_assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
    NSLog(@"Couldn't add asset writer video input.");
    return;
}
_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                            outputSettings:videoCompressionSettings
                                                          sourceFormatHint:formatDescription];
_assetWriterVideoInput.expectsMediaDataInRealTime = YES;

NSDictionary *adaptorDict = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey : @(videoWidth),
    (id)kCVPixelBufferHeightKey : @(videoHeight)
};

_pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                       initWithAssetWriterInput:_assetWriterVideoInput
                       sourcePixelBufferAttributes:adaptorDict];
// Add asset writer input to asset writer
if (![_assetWriter canAddInput:_assetWriterVideoInput]) {
    return;
}
[_assetWriter addInput:_assetWriterVideoInput];
The captureOutput method is very simple. I get the image from the filter and write it to file using:
if (videoJustStartWriting)
    [_assetWriter startSessionAtSourceTime:presentationTime];

CVPixelBufferRef renderedOutputPixelBuffer = NULL;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil,
                                                  _pixelBufferAdaptor.pixelBufferPool,
                                                  &renderedOutputPixelBuffer);
if (err) return; // NSLog(@"Cannot obtain a pixel buffer from the buffer pool");

// _ciContext is a Metal-backed CIContext
[_ciContext render:finalImage
   toCVPixelBuffer:renderedOutputPixelBuffer
            bounds:[finalImage extent]
        colorSpace:_sDeviceRgbColorSpace];

[self writeVideoPixelBuffer:renderedOutputPixelBuffer
            withInitialTime:presentationTime];
- (void)writeVideoPixelBuffer:(CVPixelBufferRef)pixelBuffer withInitialTime:(CMTime)presentationTime
{
    if (_assetWriter.status == AVAssetWriterStatusUnknown) {
        // Status unknown means writing hasn't started yet, so start writing with the buffer's presentation timestamp as the start time
        if ([_assetWriter startWriting]) {
            [_assetWriter startSessionAtSourceTime:presentationTime];
        }
    }

    if (_assetWriter.status == AVAssetWriterStatusWriting) {
        // If the asset writer is writing, append the pixel buffer to its corresponding input
        if (_assetWriterVideoInput.readyForMoreMediaData) {
            if (![_pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime]) {
                NSLog(@"error: %@", [_assetWriter.error localizedFailureReason]);
            }
        }
    }

    if (_assetWriter.status == AVAssetWriterStatusFailed) {
        NSLog(@"failed");
    }
}
I set the whole thing up to shoot at 240 fps. These are the presentation times of the frames being appended:
time ======= 113594.311510508
time ======= 113594.324011508
time ======= 113594.328178716
time ======= 113594.340679424
time ======= 113594.344846383
If you do some calculations between them you will see that the frame rate is about 240 fps, so the frames are being stored with the correct timestamps.
But when I watch the video the movement is not in slow motion, and QuickTime says the video is 30 fps.
Note: this app grabs frames from the camera, the frames go through CIFilters, and the result of those filters is converted back to a sample buffer that is stored to file and displayed on the screen.
I'm reaching here, but I think this is where you're going wrong. Think of your video capture as a pipeline.
(1) Capture buffer -> (2) Do Something With buffer -> (3) Write buffer as frames in video.
Sounds like you've successfully completed (1) and (2): you're getting the buffers fast enough and you're processing them so you can vend them as frames.
The problem is almost certainly in (3) writing the video frames.
https://developer.apple.com/reference/avfoundation/avmutablevideocomposition
Check out the frameDuration setting on your AVMutableVideoComposition; you'll need something like CMTimeMake(1, 60) // 60 fps or CMTimeMake(1, 240) // 240 fps to get what you're after (telling the composition to WRITE this many frames and encode at this rate).
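For illustration, a minimal sketch in Swift (the function name is just an example):
import AVFoundation

// Build a 60 fps video composition for an existing asset.
func makeVideoComposition(for asset: AVAsset) -> AVMutableVideoComposition {
    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.frameDuration = CMTime(value: 1, timescale: 60) // one frame every 1/60 s
    return composition
}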
Using AVAssetWriter, it's exactly the same principle, but you set the frame rate as a property in the AVAssetWriterInput outputSettings by adding the AVVideoExpectedSourceFrameRateKey:
NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           AVVideoExpectedSourceFrameRateKey : @(60),
                                           AVVideoCompressionPropertiesKey : @{AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                                               AVVideoMaxKeyFrameIntervalKey : @(1)}
                                           };
To expand a little more: you can't strictly control or sync your camera capture exactly to the output / playback rate; the timing just doesn't work that way and isn't that exact, and of course the processing pipeline adds overhead. When you capture frames they are time stamped, as you've seen, but in the writing / compression phase it uses only the frames it needs to produce the output specified for the composition.
It goes both ways: you could capture only 30 fps and write out at 240 fps, and the video would display fine; you'd just have a lot of frames "missing" that get filled in by the algorithm. You can even vend only one frame per second and play back at 30 fps. The two are separate from each other (how fast I capture vs. how many frames I present per second).
As for how to play it back at a different speed, you just need to tweak the playback rate, slowing it down as needed.
If you've correctly set the time base (frameDuration), it will always play back "normal"; you're telling it "playback is X frames per second". Of course, your eye may notice a difference (almost certainly between low FPS and high FPS), and the screen may not refresh that fast (above 60 FPS), but regardless the video will play at a "normal" 1x speed for its timebase. If I slow a 120 fps timebase to 0.5x, I now effectively see 60 fps, and one second of footage takes two seconds to play back.
You control the playback speed by setting the rate property on AVPlayer https://developer.apple.com/reference/avfoundation/avplayer
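For example (assuming an AVPlayer named player with an item ready to play):
// A non-zero rate starts playback; 0.25x shows 240 fps footage at an effective 60 fps.
player.rate = 0.25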
The iOS screen refresh is locked at 60 fps, so the only way to "see" the extra frames is, as you say, to slow down the playback rate, a.k.a. slow motion.
So:
1. Yes, you are right.
2. The screen refresh rate (and perhaps limitations of the human visual system, assuming you're human?) means that you cannot perceive 120 and 240 fps frame rates. You can play them at normal speed by downsampling to the screen refresh rate; surely this is what AVPlayer already does, although I'm not sure if that's the answer you're looking for.
3. You control the framerate of the file when you write it, via the CMSampleBuffer presentation timestamps. If your frames are coming from the camera, you're probably passing the timestamps straight through, in which case check that you really are getting the framerate you asked for (a log statement in your capture callback should be enough to verify this). If you're procedurally creating frames, then choose presentation timestamps spaced 1.0/desiredFrameRate seconds apart (see the sketch below).
Is 3. not working for you?
p.s. you can discard & ignore AVVideoMaxKeyFrameIntervalKey - it's a quality setting and has nothing to do with playback framerate.
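To illustrate point 3, a minimal sketch of spacing presentation timestamps for procedurally created frames (the adaptor and the makePixelBuffer closure are assumed to exist elsewhere):
import AVFoundation
import CoreVideo

// Append procedurally generated frames with presentation timestamps 1/fps seconds apart.
func appendFrames(totalFrames: Int,
                  fps: Int32,
                  adaptor: AVAssetWriterInputPixelBufferAdaptor,
                  makePixelBuffer: (Int) -> CVPixelBuffer) {
    let frameDuration = CMTime(value: 1, timescale: fps) // 1/fps seconds per frame
    for index in 0..<totalFrames {
        // A real implementation should wait until the input is ready for more media data.
        let pts = CMTimeMultiply(frameDuration, multiplier: Int32(index)) // N * (1/fps)
        adaptor.append(makePixelBuffer(index), withPresentationTime: pts)
    }
}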
The code below was inspired by other posts on SO and extracts an image from a video. Unfortunately, the image looks blurry even though the video looks sharp and fully in focus.
Is there something wrong with the code, or is this a natural difficulty of extracting images from videos?
func getImageFromVideo(videoURL: String) -> UIImage {
    do {
        let asset = AVURLAsset(URL: NSURL(fileURLWithPath: videoURL), options: nil)
        let imgGenerator = AVAssetImageGenerator(asset: asset)
        imgGenerator.appliesPreferredTrackTransform = true
        let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 1), actualTime: nil)
        let image = UIImage(CGImage: cgImage)
        return image
    } catch {
        ...
    }
}
Your code is working without errors or problems. I've tried with a video and the grabbed image was not blurry.
I would try to debug this by using a different timescale for CMTime.
With CMTimeMake, the first argument is the value and the second argument is the timescale.
Your timescale is 1, so the value is in seconds: a value of 0 is the start of the video, a value of 1 is the one-second mark, and so on. In practice it grabs the first frame at or after the designated location in the timeline.
With your current CMTime it grabs the first frame of the first second, which is the first frame of the video (even if the video is shorter than 1 s).
With a timescale of 4, each value unit would be a quarter of a second, and so on.
Try finding a CMTime that falls right on a steady frame (it depends on your video's framerate; you'll have to experiment).
For example, if your video is at 24 fps, then to grab exactly one frame of video the timescale should be 24 (that way each value unit represents a whole frame):
let cgImage = try imgGenerator.copyCGImageAtTime(CMTimeMake(0, 24), actualTime: nil)
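Building on that, a minimal sketch using modern Swift naming (the helper and its fps parameter are assumptions for illustration; adjust to your video's actual frame rate):
import AVFoundation
import UIKit

// Grab frame `index` of a video, assuming a known frame rate (24 fps by default here).
func frame(at index: Int, from asset: AVAsset, fps: Int32 = 24) throws -> UIImage {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    // With the timescale equal to the frame rate, each value unit is exactly one frame.
    let time = CMTime(value: CMTimeValue(index), timescale: fps)
    let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
    return UIImage(cgImage: cgImage)
}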
On the other hand, you mention that only the first and last frames of the video are blurry. As you rightly guessed, that's probably the actual cause of your issue: a lack of device stabilization.
A note: the encoding of the video might also play a role. Some MPEG encoders create incomplete, interpolated frames that are "recreated" when the video plays, but these frames can appear blurry when grabbed with copyCGImageAtTime. The only solution I've found for this rare problem is to grab another frame just before or just after the blurry one.
I can see the nominalFrameRate for some video tracks, but there is no "current frame" in the AVFoundation docs. How can I get the current frame number of the track as it is played in an AVPlayer? I know frame rates will vary, and nominalFrameRate will always be 0.0 in .m3u8 streams, but surely there must be a way to get the frame number of the currently playing track without having to multiply nominalFrameRate by currentTime?
Thanks.
For iOS 7+ you can use the currentVideoFrameRate property of AVPlayerItemTrack. It's the only consistent property I've seen for measuring FPS. The nominalFrameRate property seems to be broken in HLS streams and always returns 0.0, as you mentioned.
AVPlayerItem *item = player.currentItem; // your AVPlayer's current item
float fps = 0.00;
for (AVPlayerItemTrack *track in item.tracks) {
    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeVideo]) {
        fps = track.currentVideoFrameRate;
    }
}
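A rough Swift equivalent, assuming an AVPlayer instance named player:
let videoTrack = player.currentItem?.tracks.first { $0.assetTrack?.mediaType == .video }
let fps = videoTrack?.currentVideoFrameRate ?? 0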
I'm trying to figure out how to retrieve a video's frame rate via AVPlayer. AVPlayerItem has a rate variable, but it only returns a value between 0 and 2 (usually 1 when playing). Does anybody have an idea how to get the video frame rate?
Cheers
Use AVAssetTrack's nominalFrameRate property.
Below is a method to get the frame rate; here queuePlayer is an AVPlayer:
- (float)getFrameRateFromAVPlayer
{
    float fps = 0.00;
    if (self.queuePlayer.currentItem.asset) {
        AVAssetTrack *videoATrack = [[self.queuePlayer.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
        if (videoATrack) {
            fps = videoATrack.nominalFrameRate;
        }
    }
    return fps;
}
Swift 4 version of the answer:
let asset = avplayer.currentItem?.asset
let tracks = asset?.tracks(withMediaType: .video)
let fps = tracks?.first?.nominalFrameRate
Remember to handle the nil cases.
There seems to be a discrepancy in this nominalFrameRate returned for the same media played on different versions of iOS. I have a video I encoded with ffmpeg at 1 frame per second (125 frames) with keyframes every 25 frames and when loading in an app on iOS 7.x the (nominal) frame rate is 1.0, while on iOS 8.x the (nominal) frame rate is 0.99. This seems like a very small difference, however in my case I need to navigate precisely to a given frame in the movie and this difference screws up such navigation (the movie is an encoding of a sequence of presentation slides). Given that I already know the frame rate of the videos my app needs to play (e.g. 1 fps) I can simply rely on this value instead of determining the frame rate dynamically (via nominalFrameRate value), however I wonder WHY there is such discrepancy between iOS versions as far as this nominalFrameRate goes. Any ideas?
The rate value on AVPlayer is the playback speed relative to real time, e.g. 0.5 is slow motion and 2 is double speed.
As Paresh Navadiya points out, a track also has a nominalFrameRate variable, but this sometimes gives strange results. The best solution I've found so far is to use the following:
CMTime frameDuration = [myAsset tracksWithMediaType:AVMediaTypeVideo][0].minFrameDuration;
float fps = frameDuration.timescale/(float)frameDuration.value;
The above gives slightly unexpected results for variable frame rate, but variable frame rate has slightly odd behavior anyway. Other than that, it matches ffmpeg -i in my tests.
EDIT ----
I've found that sometimes the above gives kCMTimeZero. The workaround I've used is to create an AVAssetReader with a track output, get the PTS of the first and second frames, and subtract the two.
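A minimal sketch of that workaround (error handling trimmed; note that in passthrough mode decode order can differ from presentation order when B-frames are present):
import AVFoundation

// Derive fps from the gap between the first two presentation timestamps.
func measuredFrameRate(of asset: AVAsset) throws -> Float? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil) // passthrough
    reader.add(output)
    guard reader.startReading(),
          let first = output.copyNextSampleBuffer(),
          let second = output.copyNextSampleBuffer() else { return nil }
    let delta = CMTimeSubtract(CMSampleBufferGetPresentationTimeStamp(second),
                               CMSampleBufferGetPresentationTimeStamp(first))
    reader.cancelReading()
    return delta.seconds > 0 ? Float(1.0 / delta.seconds) : nil
}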
I don't know of anything in AVPlayer that can help you calculate the frame rate.
The AVPlayerItem rate property is the playback rate; it has nothing to do with the frame rate.
The easier option is to obtain an AVAssetTrack and read its nominalFrameRate property. Just create an AVAsset and you'll get an array of tracks.
Or use AVAssetReader to read the video frame by frame, get each frame's presentation time, count how many frames fall within the same second, then average over a few seconds or the whole video.
This is not gonna work anymore; the API has changed, and this post is old. :(
The Swift 4 answer is also cool; this answer is similar.
You get the video tracks from the AVPlayerItem, and you check the FPS there. :)
private var numberOfRenderingFailures = 0

func isVideoRendering() -> Bool {
    guard let currentItem = player.currentItem else { return false }

    // Check if we are playing video tracks
    let isRendering = currentItem.tracks.contains { ($0.assetTrack?.mediaType == .video) && ($0.currentVideoFrameRate > 5) }
    if isRendering {
        numberOfRenderingFailures = 0
        return true
    }

    numberOfRenderingFailures += 1
    if numberOfRenderingFailures < 5 {
        return true
    }

    return false
}