So I have an AVPlayer that is playing a live stream of an .m3u8 video, and from what I've found while searching, it looks like you can't use AVAssetImageGenerator to take a screenshot; instead you should use AVPlayerItemVideoOutput and
- (CVPixelBufferRef)copyPixelBufferForItemTime:(CMTime)itemTime itemTimeForDisplay:(CMTime *)outItemTimeForDisplay
but when I try to get the outputs from my AVPlayer,
NSArray *outputs = self.mainPlayer.currentItem.outputs;
I get an empty array.
The video plays just fine. Ultimately what I want is a method like this:
-(UIImage *)frameFor:(CMTime)time;
At some point the CALayer on the view needs to be getting this image data, so there has to be a way to grab it. I tried just capturing the CALayer my AVPlayerLayer is attached to, but I don't get anything more than the blank view color (bright pink, just to make sure it's returning something). There has to be some way of grabbing this data.
You're getting an empty array when you do
NSArray *outputs = self.mainPlayer.currentItem.outputs;
because the AVPlayerItemVideoOutput object needs to be added first:
NSDictionary *settings = @{(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]};
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
[playerItem addOutput:self.videoOutput];
and then, once the AVPlayerItem status is AVPlayerStatusReadyToPlay, you can capture the current frame using copyPixelBufferForItemTime:itemTimeForDisplay:. For that, you need to create your AVPlayer object and add an observer to the AVPlayerItem's status property:
self.player = [AVPlayer playerWithPlayerItem:playerItem];
[self.player.currentItem addObserver:self forKeyPath:@"status" options:0 context:NULL];
and then in your callback function
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if (self.player.currentItem.status == AVPlayerStatusReadyToPlay)
    {
        CMTime currentTime = self.player.currentItem.currentTime;
        CVPixelBufferRef buffer = [self.videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil];
        if (buffer) {
            CIImage *ciImage = [CIImage imageWithCVPixelBuffer:buffer];
            UIImage *image = [UIImage imageWithCIImage:ciImage];
            // use `image`
            CVPixelBufferRelease(buffer); // the returned pixel buffer is retained, so release it
        }
    }
}
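If you want the exact -(UIImage *)frameFor:(CMTime)time shape from the question, here is a minimal sketch built on the same video output (the self.ciContext property is an assumption, a CIContext you create once; rendering through it yields a bitmap-backed UIImage rather than one that merely wraps the CIImage):
- (UIImage *)frameFor:(CMTime)time
{
    // Sketch only: self.videoOutput is the AVPlayerItemVideoOutput added above,
    // self.ciContext is e.g. [CIContext contextWithOptions:nil], created once.
    CVPixelBufferRef buffer = [self.videoOutput copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
    if (!buffer) {
        return nil;
    }
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:buffer];
    CGRect extent = CGRectMake(0, 0, CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer));
    CGImageRef cgImage = [self.ciContext createCGImage:ciImage fromRect:extent];
    CVPixelBufferRelease(buffer);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}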
If you only need to capture a thumbnail and you don't have to play the video first, you can use a simple utility that I've created that follows this logic.
(https://github.com/acotilla91/ACThumbnailGenerator)
How to use:
double bitRate = 1000000; // force video bit rate (can be used to cap video quality and improve performance). Pass 0 to use the default bit rate.
self.thumbnailGenerator = [[ACThumbnailGenerator alloc] initWithPreferredBitRate:bitRate];
NSURL *videoURL = [NSURL URLWithString:@"http://qthttp.apple.com.edgesuite.net/1010qwoeiuryfg/sl.m3u8"];
int position = 10; // video position (in seconds) from which the thumbnail should be extracted. Always pass 0 for live streams.
[self.thumbnailGenerator loadImageFrom:videoURL position:position withCompletionBlock:^(UIImage *image) {
// use `image`
}];
Hope it helps.
Swift version:
ACThumbnailGenerator-Swift
Usage:
var generator: ACThumbnailGenerator!
func captureSomeImage() {
let streamUrl = URL(string: "https://p-events-delivery.akamaized.net/18oijbasfvuhbfsdvoijhbsdfvljkb6/m3u8/hls_vod_mvp.m3u8")!
generator = ACThumbnailGenerator(streamUrl: streamUrl)
generator.delegate = self
generator.captureImage(at: 300)
}
func generator(_ generator: ACThumbnailGenerator, didCapture image: UIImage, at position: Double) {
// Use `image`
}
AVPlayerLooper accepts a template AVPlayerItem and an AVQueuePlayer as setup parameters, then internally manipulates the items of the queue, so the player is constantly changing its currentItem.
This works perfectly with AVPlayerLayer, which accepts this looped player as a parameter and just renders it, but how can I use it with AVPlayerItemVideoOutput, which is attached to an AVPlayerItem, of which the player has several inside it? How do I reproduce the same thing AVPlayerLayer does internally?
AVPlayerLooper setup example from the docs:
NSString *videoFile = [[NSBundle mainBundle] pathForResource:@"example" ofType:@"mov"];
NSURL *videoURL = [NSURL fileURLWithPath:videoFile];
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [AVQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
_playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:_playerLayer];
[_player play];
This is how AVPlayerItemVideoOutput is supposed to be used:
[item addOutput:_videoOutput];
The only workaround I came up with is to observe changes of currentItem and, each time, detach the video output from the old item and attach it to the new one, as in the example below, but this apparently defeats the gapless playback I'm trying to achieve.
- (void)observeValueForKeyPath:(NSString*)path
ofObject:(id)object
change:(NSDictionary*)change
context:(void*)context {
if (context == currentItemContext) {
AVPlayerItem* newItem = [change objectForKey:NSKeyValueChangeNewKey];
AVPlayerItem* oldItem = [change objectForKey:NSKeyValueChangeOldKey];
if(oldItem.status == AVPlayerItemStatusReadyToPlay) {
[oldItem removeOutput:_videoOutput];
}
if(newItem.status == AVPlayerItemStatusReadyToPlay) {
[newItem addOutput:_videoOutput];
}
[self removeItemObservers:oldItem];
[self addItemObservers:newItem];
}
}
For more context, I'm trying to come up with a fix for Flutter's video_player plugin: https://github.com/flutter/flutter/issues/72878
The plugin's code can be found here: https://github.com/flutter/plugins/blob/172338d02b177353bf517e5826cf6a25b5f0d887/packages/video_player/video_player/ios/Classes/FLTVideoPlayerPlugin.m
You can do this by subclassing AVQueuePlayer (yay OOP) and creating and adding AVPlayerItemVideoOutputs there, as needed. I've never seen multiple AVPlayerItemVideoOutputs before, but memory consumption seems reasonable and everything works.
@interface OutputtingQueuePlayer : AVQueuePlayer
@end

@implementation OutputtingQueuePlayer

- (void)insertItem:(AVPlayerItem *)item afterItem:(nullable AVPlayerItem *)afterItem
{
    if (item.outputs.count == 0) {
        NSLog(@"Creating AVPlayerItemVideoOutput");
        AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithOutputSettings:nil]; // or whatever
        [item addOutput:videoOutput];
    }
    [super insertItem:item afterItem:afterItem];
}

@end
The current output is accessed like so:
AVPlayerItemVideoOutput *videoOutput = _player.currentItem.outputs.firstObject;
CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:_player.currentTime itemTimeForDisplay:nil];
// do something with pixelBuffer here
CVPixelBufferRelease(pixelBuffer);
and configuration becomes:
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [OutputtingQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
[self.view.layer addSublayer:_playerLayer];
[_player play];
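If you need frames continuously rather than on demand, the usual pattern is to poll the output from a display link. A rough sketch, not part of the original answer (the displayLinkDidFire: selector and displayLink property are mine; it assumes the _player ivar from the configuration above):
// Create the display link once, e.g. in viewDidLoad:
//   self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(displayLinkDidFire:)];
//   [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
- (void)displayLinkDidFire:(CADisplayLink *)link
{
    AVPlayerItemVideoOutput *videoOutput = _player.currentItem.outputs.firstObject;
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:nil];
        // do something with pixelBuffer here
        CVPixelBufferRelease(pixelBuffer);
    }
}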
I have spent the whole day and gone through a lot of SO answers, Apple references, documentation, etc., but with no success.
I want a simple thing: I am playing a video using AVPlayer and I want to pause it and get the current frame as UIImage. That's it.
My video is an m3u8 file located on the internet; it plays normally in the AVPlayerLayer without any problems.
What I have tried:
AVAssetImageGenerator. It is not working; the method copyCGImageAtTime:actualTime:error: returns a null image ref. According to the answer here, AVAssetImageGenerator doesn't work for streaming videos.
Taking a snapshot of the player view. I first tried renderInContext: on the AVPlayerLayer, but then realized that it does not render these kinds of "special" layers. Then I found a new method introduced in iOS 7, drawViewHierarchyInRect:afterScreenUpdates:, which should also be able to render the special layers, but still no luck: I got the UI snapshot with a blank black area where the video is shown.
AVPlayerItemVideoOutput. I added a video output to my AVPlayerItem; however, whenever I call hasNewPixelBufferForItemTime: it returns NO. I guess the problem is again the streaming video, and I am not alone with this problem.
AVAssetReader. I was thinking of trying it, but decided not to lose time after finding a related question here.
So isn't there any way to get a snapshot of something that I am anyway seeing right now on the screen? I can't believe this.
AVAssetImageGenerator is the best way to snapshot a video; this method returns a UIImage asynchronously:
import AVFoundation
// ...
var player:AVPlayer? = // ...
func screenshot(handler: @escaping (UIImage) -> Void) {
    guard let player = player,
          let asset = player.currentItem?.asset else {
        return
    }
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.appliesPreferredTrackTransform = true
    let times = [NSValue(time: player.currentTime())]
    imageGenerator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, _, _ in
        if let img = image {
            handler(UIImage(cgImage: img))
        }
    }
}
(It's Swift 4.2)
AVPlayerItemVideoOutput works fine for me with an m3u8. Maybe it's because I don't consult hasNewPixelBufferForItemTime: and simply call copyPixelBufferForItemTime:? This code produces a CVPixelBuffer instead of a UIImage, but there are answers that describe how to do that conversion.
This answer is mostly cribbed from here.
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (nonatomic) AVPlayer *player;
@property (nonatomic) AVPlayerItem *playerItem;
@property (nonatomic) AVPlayerItemVideoOutput *playerOutput;
@end

@implementation ViewController
- (void)setupPlayerWithLoadedAsset:(AVAsset *)asset {
NSDictionary* settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
self.playerOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
[self.playerItem addOutput:self.playerOutput];
self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
playerLayer.frame = self.view.frame;
[self.view.layer addSublayer:playerLayer];
[self.player play];
}
- (IBAction)grabFrame {
CVPixelBufferRef buffer = [self.playerOutput copyPixelBufferForItemTime:[self.playerItem currentTime] itemTimeForDisplay:nil];
NSLog(#"The image: %#", buffer);
}
- (void)viewDidLoad {
[super viewDidLoad];
NSURL *someUrl = [NSURL URLWithString:@"http://qthttp.apple.com.edgesuite.net/1010qwoeiuryfg/sl.m3u8"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
NSError* error = nil;
AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
if (status == AVKeyValueStatusLoaded)
{
dispatch_async(dispatch_get_main_queue(), ^{
[self setupPlayerWithLoadedAsset:asset];
});
}
else
{
NSLog(#"%# Failed to load the tracks.", self);
}
}];
}
#end
I am trying to decode audio samples from a remote HLS (m3u8) stream on an iOS device for further processing of the data, e.g. recording it to a file.
As a reference stream, http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8 is used.
By using an AVURLAsset in combination with the AVPlayer I am able to show the video as preview on a CALayer.
I can also get the raw video data (CVPixelBuffer) by using AVPlayerItemVideoOutput. The audio is audible through the speaker of the iOS device as well.
This is the code I am using at the moment for the AVURLAsset and AVPlayer:
NSURL* url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
NSString *tracksKey = @"tracks";
[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler: ^{
dispatch_async(dispatch_get_main_queue(), ^{
NSError* error = nil;
AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];
if (status == AVKeyValueStatusLoaded) {
NSDictionary *settings = @{
(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
@"IOSurfaceOpenGLESTextureCompatibility": @YES,
@"IOSurfaceOpenGLESFBOCompatibility": @YES,
};
AVPlayerItemVideoOutput* output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:asset];
[playerItem addOutput:output];
AVPlayer* player = [AVPlayer playerWithPlayerItem:playerItem];
[player setVolume: 0.0]; // no preview audio
self.playerItem = playerItem;
self.player = player;
self.playerItemVideoOutput = output;
AVPlayerLayer* playerLayer = [AVPlayerLayer playerLayerWithPlayer: player];
[self.preview.layer addSublayer: playerLayer];
[playerLayer setFrame: self.preview.bounds];
[playerLayer setVideoGravity: AVLayerVideoGravityResizeAspectFill];
[self setPlayerLayer: playerLayer];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemNewAccessLogEntry:) name:AVPlayerItemNewAccessLogEntryNotification object:self.playerItem];
[_player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerStatusContext];
[_playerItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerItemStatusContext];
[_playerItem addObserver:self forKeyPath:@"tracks" options:0 context:nil];
}
});
}];
-(void) observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
if (self.player.status == AVPlayerStatusReadyToPlay && context == &PlayerStatusContext) {
[self.player play];
}
}
To get the raw video data of the HLS stream I use:
CVPixelBufferRef buffer = [self.playerItemVideoOutput copyPixelBufferForItemTime:self.playerItem.currentTime itemTimeForDisplay:nil];
if (!buffer) {
return;
}
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.duration = CMTimeMake(33, 1000);
int64_t ts = timestamp * 1000.0; // `timestamp` is a presentation time in seconds, defined elsewhere (not shown in this snippet)
timingInfo.decodeTimeStamp = CMTimeMake(ts, 1000);
timingInfo.presentationTimeStamp = timingInfo.decodeTimeStamp;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(
NULL, buffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
buffer,
true,
NULL,
NULL,
videoInfo,
&timingInfo,
&newSampleBuffer);
// do something here with sample buffer...
CFRelease(buffer);
if (videoInfo) CFRelease(videoInfo); // also release the format description created above
CFRelease(newSampleBuffer);
Now I would like to get access to the raw audio data as well, but had no luck so far.
I tried to use MTAudioProcessingTap as described here:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
Unfortunately I could not get this to work properly. I succeeded in getting access to the underlying assetTrack of the AVPlayerItem, but the callback methods "prepare" and "process" of the MTAudioProcessingTap are never called. I am not sure if I am on the right track here.
AVPlayer is playing the audio of the stream through the speaker, so internally the audio seems to be available as raw audio data. Is it possible to get access to that raw audio data? If it is not possible with AVPlayer, are there any other approaches?
If possible, I would not like to use ffmpeg, because the hardware decoder of the iOS device should be used for the decoding of the stream.
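For reference, this is roughly the MTAudioProcessingTap setup I attempted, following the linked article (a sketch: the callback and variable names are mine, and it assumes the asset actually exposes an audio track, which an HLS asset often does not, so the tap may simply never be installed):
// Requires linking the MediaToolbox framework.
#import <MediaToolbox/MediaToolbox.h>

static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    *tapStorageOut = clientInfo;
}
static void tapFinalize(MTAudioProcessingTapRef tap) {}
static void tapPrepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *format) {}
static void tapUnprepare(MTAudioProcessingTapRef tap) {}
static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    // Pull the samples through the tap; the raw PCM is then available in bufferListInOut.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, NULL, numberFramesOut);
}

// ... and, once an audio track is available (this is where it fails for the HLS stream):
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = NULL;
callbacks.init = tapInit;
callbacks.prepare = tapPrepare;
callbacks.process = tapProcess;
callbacks.unprepare = tapUnprepare;
callbacks.finalize = tapFinalize;

MTAudioProcessingTapRef tap = NULL;
OSStatus status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                             kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (status == noErr && tap && audioTrack) {
    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[params];
    playerItem.audioMix = audioMix;
    CFRelease(tap); // the audio mix retains the tap
}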
I have the following code in my app:
NSURL *url = [NSURL fileURLWithPath: [self.DocDir stringByAppendingPathComponent: self.FileName] isDirectory: NO];
self.avPlayer = [AVPlayer playerWithURL: url];
Float64 duration = CMTimeGetSeconds(self.avPlayer.currentItem.duration);
This worked fine on iOS 6, but on iOS 7 for some reason it returns NaN. When inspecting self.avPlayer.currentItem.duration, the CMTime object has zeros with a flag of 17.
Interestingly the player works fine, just the duration is wrong.
Has anyone else experienced the same issues? I am importing the following:
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <MediaPlayer/MediaPlayer.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVAsset.h>
After playing around with different ways of initializing the objects, I arrived at a working solution:
AVURLAsset *asset = [AVURLAsset assetWithURL: url];
Float64 duration = CMTimeGetSeconds(asset.duration);
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset: asset];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem: item];
It appears the duration value isn't always immediately available from an AVPlayerItem, but it seems to be available immediately from an AVAsset.
In iOS 7, for an AVPlayerItem that has already been created, you can also get the duration from the underlying asset:
CMTimeGetSeconds([[[[self player] currentItem] asset] duration]);
instead of getting it directly from the AVPlayerItem, which gives you a NaN:
CMTimeGetSeconds([[[self player] currentItem] duration]);
The recommended way of doing this, as described in the documentation, is by observing the player item's status:
[self.avPlayer.currentItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew|NSKeyValueObservingOptionInitial context:nil];
Then, inside observeValueForKeyPath:ofObject:change:context:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
// TODO: use either keyPath or context to differentiate between value changes
if (self.avPlayer.currentItem.status == AVPlayerStatusReadyToPlay) {
Float64 duration = CMTimeGetSeconds(self.avPlayer.currentItem.duration);
// ...
}
}
Also, make sure that you remove the observer when you change the player item:
if (self.avPlayer.currentItem) {
[self.avPlayer.currentItem removeObserver:self forKeyPath:@"status"];
}
Btw, you can also observe the duration property directly; however, it's been my personal experience that the results aren't as reliable as they should be ;-)
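For example, observing the item's duration directly would look roughly like this (a sketch, keeping the caveat above in mind):
[self.avPlayer.currentItem addObserver:self
                            forKeyPath:@"duration"
                               options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionInitial
                               context:nil];

// Then, in observeValueForKeyPath:ofObject:change:context:
if ([keyPath isEqualToString:@"duration"]) {
    CMTime duration = self.avPlayer.currentItem.duration;
    if (CMTIME_IS_NUMERIC(duration)) { // an indefinite duration would give NaN from CMTimeGetSeconds
        Float64 seconds = CMTimeGetSeconds(duration);
        // use `seconds`
    }
}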
Swift version
You can get the duration using the AVAsset, which is an AVPlayerItem property:
func getVideoDuration(from player: AVPlayer) -> Double? {
guard let duration = player.currentItem?.asset.duration else { return nil }
let durationSeconds = CMTimeGetSeconds(duration)
return durationSeconds
}
or by creating an AVAsset from scratch:
func getVideoDuration(for videoUrl: URL) -> Double {
let asset = AVAsset(url: videoUrl)
let duration = asset.duration
let durationSeconds = CMTimeGetSeconds(duration)
return durationSeconds
}
I want to use AVURLAsset to play a video file that is on a server, not a local file. I have read that AVURLAsset can't be used directly for remote files.
I read another Stack Overflow question:
AVURLAsset cannot load with remote file
That link describes a method to use AVURLAsset to play remote files, but I am not able to understand it fully. My observer is not being called. Can someone please help me? Actually, I don't want to use AVPlayer to play the video, for various reasons: I am grabbing frames from the AVAsset and then rendering them as textures in OpenGL, so I need to do this via AVURLAsset only.
Here is the code to look at:
-(void) startPlayer
{
NSURL *url = [NSURL fileURLWithPath:@"http://gamooz.com/wildlife.mp4"];
pItem = [AVPlayerItem playerItemWithURL:url];
player = [AVPlayer playerWithPlayerItem:pItem];
[player play];
[pItem addObserver:self forKeyPath:@"status" options:0 context:nil];
}
-(void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
NSLog(#"heyy");
if ([keyPath isEqualToString:#"status"])
{
AVPlayerItem *pItemTemp = (AVPlayerItem *)object;
if (pItemTemp.status == AVPlayerItemStatusReadyToPlay)
{
// now I can use the playerItem's asset
asset = (AVURLAsset *)pItemTemp.asset;
}
}
}
but the observer is never getting called. Why is that?
Also, I put the observer code in another function and tried to check whether the playerItem is ready to play:
-(void) checkForPlayer
{
if (pItem.status == AVPlayerItemStatusReadyToPlay)
{
asset = (AVURLAsset *)pItem.asset;
}
}
It never reports a status equal to ready to play.