AVPlayerItemVideoOutput never gets a pixelBuffer - iOS

I've been expanding the testing of my video rendering code and noticed something unusual while testing with AVPlayerItemVideoOutput. To test my rendering code, I check for new pixel buffers using hasNewPixelBufferForItemTime.
While testing, this method never returns YES. Yet the rendering code works just fine in my app using the same setup, rendering frames to OpenGL textures.
I set up a GitHub project with the bare basics showing the error. In the app you can load the video by tapping the button (it is not loaded immediately, to avoid any conflict with the test). This proves that at least the video loads and plays.
The project also has a test that attempts to set up the AVPlayerItemVideoOutput and check for new pixel buffers. This test always fails, but I can't see what I'm doing wrong, or why the exact same steps work in my own app.
The GitHub project is here.
And this is the test method to peruse:
#import <XCTest/XCTest.h>
#import <AVFoundation/AVFoundation.h>
@interface AVPlayerTestTests : XCTestCase
@end
@implementation AVPlayerTestTests
- (void)setUp {
[super setUp];
// Put setup code here. This method is called before the invocation of each test method in the class.
}
- (void)tearDown {
// Put teardown code here. This method is called after the invocation of each test method in the class.
[super tearDown];
}
- (void) testAVPlayer
{
NSURL *fileURL = [[NSBundle bundleForClass:self.class] URLForResource:@"SampleVideo_1280x720_10mb" withExtension:@"mp4"];
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:fileURL];
[self keyValueObservingExpectationForObject:playerItem
keyPath:#"status" handler:^BOOL(id _Nonnull observedObject, NSDictionary * _Nonnull change) {
AVPlayerItem *oPlayerItem = (AVPlayerItem *)observedObject;
switch (oPlayerItem.status) {
case AVPlayerItemStatusFailed:
{
XCTFail(#"Video failed");
return YES;
}
break;
case AVPlayerItemStatusUnknown:
return NO;
break;
case AVPlayerItemStatusReadyToPlay:
{
return YES;
}
default:
break;
}
}];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
NSDictionary *pbOptions = @{
(NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
(NSString *)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
(NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @YES
};
AVPlayerItemVideoOutput *output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pbOptions];
XCTAssertNotNil(output);
[self waitForExpectationsWithTimeout:100 handler:nil];
if (playerItem.status == AVPlayerItemStatusReadyToPlay) {
[playerItem addOutput:output];
player.rate = 1.0;
player.muted = YES;
[player play];
CMTime vTime = [output itemTimeForHostTime:CACurrentMediaTime()];
// This is what we're testing
BOOL foundFrame = [output hasNewPixelBufferForItemTime:vTime];
XCTAssertTrue(foundFrame);
if (!foundFrame) {
// Cycle over for ten seconds
for (int i = 0; i < 10; i++) {
sleep(1);
vTime = [output itemTimeForHostTime:CACurrentMediaTime()];
foundFrame = [output hasNewPixelBufferForItemTime:vTime];
if (foundFrame) {
NSLog(#"Got frame at %i", i);
break;
}
if (i == 9) {
XCTFail(#"Failed to acquire");
}
}
}
}
}
@end
EDIT 1: This seems to be a bug, and I've filed a Radar.
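One workaround that may be worth trying while the Radar is open: sleep() blocks the thread the test polls from, and if frame delivery depends on that thread's run loop, the output may never get a chance to produce a buffer. The sketch below is only a guess at the cause, not a confirmed fix; it spins the run loop instead of sleeping and reuses output, vTime, and foundFrame from the test above.
// Drop-in variation of the ten-second for-loop above: spin the run loop instead of
// blocking the thread with sleep(), so AVFoundation's callbacks and timers can still fire.
NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:10.0];
while (!foundFrame && [deadline timeIntervalSinceNow] > 0) {
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
    vTime = [output itemTimeForHostTime:CACurrentMediaTime()];
    foundFrame = [output hasNewPixelBufferForItemTime:vTime];
}
XCTAssertTrue(foundFrame, @"No pixel buffer was produced within ten seconds");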

Related

Playing audio while applying video filter in GPUImage

I've googled this question but did not find any solution.
My problem is this: I'm applying a video filter to a video file using GPUImage, and while the filter is applied, the sound of that video does not play. I know that sound playback is not supported in GPUImage while applying filters.
So, how can I achieve that?
You can achieve sound playback by adding support for AVPlayer, like this.
In GPUImageMovie.h,
@property(readwrite, nonatomic) BOOL playSound;
In GPUImageMovie.m, update @interface GPUImageMovie () like this.
@interface GPUImageMovie() <AVPlayerItemOutputPullDelegate>
{
// Add this
BOOL hasAudioTrack;
.........
........
// Add all below three lines
AVPlayer *theAudioPlayer;
CFAbsoluteTime startActualFrameTime;
CGFloat currentVideoTime;
}
After that, in the -(void)startProcessing method, add the following lines.
- (void)startProcessing
{
// Add this
currentVideoTime = 0.0f;
.....
.....
// Add this
if (self.playSound)
{
[self setupSound];
}
GPUImageMovie __block *blockSelf = self;
[inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
runSynchronouslyOnVideoProcessingQueue(^{
.........
.........
// Add this
startActualFrameTime = CFAbsoluteTimeGetCurrent() - currentVideoTime;
.......
.......
});
}];
}
Next, in -(AVAssetReader*)createAssetReader, update it like this:
- (AVAssetReader*)createAssetReader
{
.......
.......
BOOL shouldRecordAudioTrack = (([audioTracks count] > 0) && (self.audioEncodingTarget != nil));
// Add this
hasAudioTrack = ([audioTracks count] > 0);
.........
}
In - (void)processAsset,
- (void)processAsset
{
.......
.......
if ([reader startReading] == NO)
{
NSLog(#"Error reading from file at URL: %#", self.url);
return;
}
// Add this
if (self.playSound && hasAudioTrack)
{
[theAudioPlayer seekToTime:kCMTimeZero];
[theAudioPlayer play];
}
.........
.........
}
In -(void)endProcessing,
- (void)endProcessing
{
.........
.........
if (self.playerItem && (displayLink != nil))
{
[displayLink invalidate]; // remove from all run loops
displayLink = nil;
}
// Add this
if (theAudioPlayer != nil)
{
[theAudioPlayer pause];
[theAudioPlayer seekToTime:kCMTimeZero];
}
.........
.........
}
Finally, add the new method -(void)setupSound:
// Add this
- (void)setupSound
{
if (theAudioPlayer != nil)
{
[theAudioPlayer pause];
[theAudioPlayer seekToTime:kCMTimeZero];
theAudioPlayer = nil;
}
theAudioPlayer = [[AVPlayer alloc] initWithURL:self.url];
}
Finally, when you create your GPUImageMovie object, don't forget to set the playSound flag to YES, like this: movieFile.playSound = YES;
Hope this helps.
How do I pause the audio while a filter is being applied by GPUImage?
We can pause the video writing with movieWriter.paused = YES; when I pause the movie writing, I need to pause the audio as well.
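One way to handle this, assuming the theAudioPlayer ivar and playSound flag added in the answer above, is to mirror the writer's paused state onto the player. The pauseSound/resumeSound methods below are hypothetical additions to GPUImageMovie.m, not part of GPUImage itself; this is only a sketch.
// Hypothetical helpers in GPUImageMovie.m: pause/resume the AVPlayer created in -setupSound.
- (void)pauseSound
{
    if (self.playSound && theAudioPlayer != nil)
    {
        [theAudioPlayer pause];
    }
}
- (void)resumeSound
{
    if (self.playSound && theAudioPlayer != nil)
    {
        [theAudioPlayer play];
    }
}
// At the call site, toggle both together, e.g.:
// movieWriter.paused = YES; [movieFile pauseSound];
// movieWriter.paused = NO;  [movieFile resumeSound];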

Get link to media from AVFullScreenViewController iOS 8

I'm trying to get links to the video when something begins to play (for example, any YouTube video).
First I catch when the video begins to play:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(videoStarted:) name:@"UIWindowDidBecomeVisibleNotification" object:nil];
Then, with a delay, I try to get the link:
-(void)videoStarted:(NSNotification *)notification
{
NSLog(#"notification dic = %#", [notification userInfo]);
[self performSelector:#selector(detectModal) withObject:nil afterDelay:2.5f];
}
-(void)detectModal
{
CUViewController *controller = (CUViewController *)[appDelegate.window rootViewController].presentedViewController;
NSLog(#"Presented modal = %#", [appDelegate.window rootViewController].presentedViewController);
if(controller && [controller respondsToSelector:#selector(item)])
{
id currentItem = [controller item];
if(currentItem && [currentItem respondsToSelector:@selector(asset)])
{
AVURLAsset *asset = (AVURLAsset *)[currentItem asset];
if([asset respondsToSelector:@selector(URL)] && asset.URL)
[self newLinkDetected:[[asset URL] absoluteString]];
NSLog(@"asset find = %@", asset);
}
}
else
{
for (UIWindow *window in [UIApplication sharedApplication].windows) {
if ([window.rootViewController.presentedViewController isKindOfClass:NSClassFromString(@"AVFullScreenViewController")])
{
controller = (CUViewController *)window.rootViewController.presentedViewController;
for(int i = 0; i < [controller.view.subviews count]; i++)
{
UIView *topView = [controller.view.subviews objectAtIndex:i];
NSLog(#"top view = %#", topView);
for(int j = 0; j < [topView.subviews count]; j++)
{
UIView *subView = [topView.subviews objectAtIndex:j];
NSLog(#"sub view = %#", subView);
for (int k = 0; k < [subView.subviews count]; k++)
{
CUPlayerView *subsubView = (CUPlayerView *)[subView.subviews objectAtIndex:k];
NSLog(#"sub sub view = %# class = %#", subsubView, NSStringFromClass([subsubView class]));
if([NSStringFromClass([subsubView class]) isEqualToString:#"AVVideoLayerView"])
{
NSLog(#"player view controller = %#", subsubView.playerController);
CUPlayerController *pController = subsubView.playerController;
NSLog(#"item = %#", [pController player]);
CUPlayerController *proxyPlayer = pController.playerControllerProxy;
if(proxyPlayer)
{
AVPlayer *player = [proxyPlayer player];
NSLog(#"find player = %# chapters = %#", player, proxyPlayer.contentChapters);
break;
}
}
}
}
}
}
}
}
}
CUViewController, CUPlayerView, and CUPlayerController are fake (stub) classes; they look like this:
@interface CUPlayerController : UIViewController
@property(nonatomic, retain) id playerControllerProxy;
@property(nonatomic, retain) id player;
@property(nonatomic, retain) id item;
- (id)contentChapters;
@end
Everything is okay until this line:
NSLog(@"find player = %@ chapters = %@", player, proxyPlayer.contentChapters);
player is always nil. Maybe there is a simpler way to get a link to the media?
First off, I'd like to focus on your AVPlayer, which plays an AVPlayerItem. An AVPlayerItem object carries a reference to an AVAsset object and presentation settings for that asset. When you use the playerWithURL: method of AVPlayer, it automatically creates the AVPlayerItem backed by an asset that is a subclass of AVAsset named AVURLAsset. AVURLAsset has a URL property, so if you use that you can get the NSURL of the currently playing item. Here's an example function for getting the URL:
-(NSURL *)urlOfCurrentlyPlayingInPlayer:(AVPlayer *)player{
// get current asset
AVAsset *currentPlayerAsset = player.currentItem.asset;
// make sure the current asset is an AVURLAsset
if (![currentPlayerAsset isKindOfClass:AVURLAsset.class]) return nil;
// return the NSURL
return [(AVURLAsset *)currentPlayerAsset URL];
}
I think it's just a matter of how you fit this thing around your code. Hope this helps.
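As a usage sketch, once you have recovered an AVPlayer (for example via the proxyPlayer path in the question), you could run it through that helper instead of poking at the asset directly. This is illustrative only; as noted, only AVURLAsset-backed items yield a URL, so streamed content may still return nil.
AVPlayer *foundPlayer = [proxyPlayer player];
NSURL *mediaURL = [self urlOfCurrentlyPlayingInPlayer:foundPlayer];
if (mediaURL != nil) {
    // Items backed by other AVAsset subclasses (e.g. streaming) return nil above.
    [self newLinkDetected:[mediaURL absoluteString]];
}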

How to crop video from top?

I am trying to create a square video, and for that I am using a 640x480 camera view, but I am not able to crop the video from the top. I have tried many things without success, so please suggest how to do it. What transform would help?
Usage:
Create a property (or any other variable) to hold the VideoTranscoder:
self.videoTranscoder = [SCVideoTranscoder new];
self.videoTranscoder.asset = PUT_UR_AVASSET_HERE;
self.videoTranscoder.outputURL = PUT_A_FILE_URL_HERE;
__weak typeof(self) weakSelf = self;
self.videoTranscoder.completionBlock = ^(BOOL success){
//PUT YOUR CODE HERE WHEN THE TRANSCODING IS DONE...
};
[self.videoTranscoder start];
You can cancel the transcoding process by calling [transcoder cancel].
SCVideoTranscoder.h file
#import <Foundation/Foundation.h>
@interface SCVideoTranscoder : NSObject
@property (nonatomic, strong) AVAsset *asset;
@property (nonatomic, assign) BOOL cancelled;
@property (nonatomic, strong) NSURL *outputURL;
@property (nonatomic, strong) void (^completionBlock)(BOOL success);
- (void)start;
- (void)cancel;
@end
SCVideoTranscoder.m file
In this file you can see that I set the video up to be 480x480; you can of course change this to whatever value you like.
#import "SCVideoTranscoder.h"
@interface SCVideoTranscoder()
@property (nonatomic, strong) dispatch_queue_t mainSerializationQueue;
@property (nonatomic, strong) dispatch_queue_t rwAudioSerializationQueue;
@property (nonatomic, strong) dispatch_queue_t rwVideoSerializationQueue;
@property (nonatomic, strong) dispatch_group_t dispatchGroup;
@property (nonatomic, strong) AVAssetReader* assetReader;
@property (nonatomic, strong) AVAssetWriter* assetWriter;
@property (nonatomic, strong) AVAssetReaderTrackOutput *assetReaderAudioOutput;
@property (nonatomic, strong) AVAssetReaderTrackOutput *assetReaderVideoOutput;
@property (nonatomic, strong) AVAssetWriterInput *assetWriterAudioInput;
@property (nonatomic, strong) AVAssetWriterInput *assetWriterVideoInput;
@property (nonatomic, assign) BOOL audioFinished;
@property (nonatomic, assign) BOOL videoFinished;
@end
@implementation SCVideoTranscoder
- (void)start
{
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
self.cancelled = NO;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
// Once the tracks have finished loading, dispatch the work to the main serialization queue.
dispatch_async(self.mainSerializationQueue, ^{
// Due to asynchronous nature, check to see if user has already cancelled.
if (self.cancelled)
return;
BOOL success = YES;
NSError *localError = nil;
// Check for success of loading the asset's tracks.
success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
if (success)
{
// If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
NSFileManager *fm = [NSFileManager defaultManager];
NSString *localOutputPath = [self.outputURL path];
if ([fm fileExistsAtPath:localOutputPath])
success = [fm removeItemAtPath:localOutputPath error:&localError];
}
if (success)
{
success = [self setupAssetReaderAndAssetWriter:&localError];
if (success)
{
[self startAssetReaderAndWriter:&localError];
}else{
[self readingAndWritingDidFinishSuccessfully:success withError:localError];
}
}
});
}];
}
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
// Create and initialize the asset reader.
self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
BOOL success = (self.assetReader != nil);
if (success)
{
// If the asset reader was successfully initialized, do the same for the asset writer.
self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeMPEG4 error:outError];
success = (self.assetWriter != nil);
}
if (success)
{
// If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] > 0)
assetAudioTrack = [audioTracks objectAtIndex:0];
NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
if ([videoTracks count] > 0)
assetVideoTrack = [videoTracks objectAtIndex:0];
if (assetAudioTrack)
{
// If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
if (DEBUG) {
self.assetReader.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake(20, 1));
}else{
self.assetReader.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake(10, 1));
}
[self.assetReader addOutput:self.assetReaderAudioOutput];
// Then, set the compression settings to 128kbps AAC and create the asset writer input.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
[self.assetWriter addInput:self.assetWriterAudioInput];
}
if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[self.assetReader addOutput:self.assetReaderVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
if ([videoFormatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
CGAffineTransform videoTransform = [assetVideoTrack preferredTransform];
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
int width = 480;
int height = 480;
int bitrate = 1000000;
NSDictionary *compressionSettings = @{
AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitrate], AVVideoMaxKeyFrameIntervalKey: @(150),
AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel,
AVVideoAllowFrameReorderingKey: @NO,
AVVideoH264EntropyModeKey: AVVideoH264EntropyModeCAVLC,
AVVideoExpectedSourceFrameRateKey: @(30)
};
if ([SCDeviceModel speedgrade]==0) {
width = 320;
height = 320;
compressionSettings = @{
AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitrate],
AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel,
AVVideoAllowFrameReorderingKey: @NO};
}
NSDictionary *videoSettings =
@{
AVVideoCodecKey: AVVideoCodecH264,
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoWidthKey: [NSNumber numberWithInt:width],
AVVideoHeightKey: [NSNumber numberWithInt:height],
AVVideoCompressionPropertiesKey: compressionSettings};
// Create the asset writer input and add it to the asset writer.
self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
self.assetWriterVideoInput.transform = videoTransform;
[self.assetWriter addInput:self.assetWriterVideoInput];
}
}
return success;
}
- (void)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [self.assetReader startReading];
if (!success)
*outError = [self.assetReader error];
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [self.assetWriter startWriting];
if (!success)
*outError = [self.assetWriter error];
}
if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
self.dispatchGroup = dispatch_group_create();
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
self.audioFinished = NO;
self.videoFinished = NO;
if (self.assetWriterAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
if (self.assetWriterVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (self.cancelled)
{
// If so, cancel the reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([self.assetReader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [self.assetReader error];
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
}
if (finalSuccess)
{
[self.assetWriter finishWritingWithCompletionHandler:^{
NSError *finalError = nil;
if(self.assetWriter.status == AVAssetWriterStatusFailed)
{
finalError = self.assetWriter.error;
}
[self readingAndWritingDidFinishSuccessfully:finalError==nil
withError:finalError];
}];
return;
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
});
}
}
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completionBlock) {
self.completionBlock(NO);
}
});
}
else
{
// Reencoding was successful, reset booleans.
self.cancelled = NO;
self.videoFinished = NO;
self.audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completionBlock) {
self.completionBlock(YES);
}
});
}
}
- (void)cancel
{
// Handle cancellation asynchronously, but serialize it with the main queue.
dispatch_async(self.mainSerializationQueue, ^{
// If we had audio data to reencode, we need to cancel the audio work.
if (self.assetWriterAudioInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the audio queue.
dispatch_async(self.rwAudioSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
// Leave the dispatch group since the audio work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
if (self.assetWriterVideoInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the video queue.
dispatch_async(self.rwVideoSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
// Leave the dispatch group, since the video work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
// Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
self.cancelled = YES;
});
}
@end
Change
[composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
into
AVMutableCompositionTrack *theTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
and fix your time range, height, width, etc.
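To actually crop from the top rather than just resize, one common approach (a sketch, not taken from the answer above) is an AVMutableVideoComposition with a square renderSize and a layer instruction that translates the frame upward so the unwanted top strip falls outside the render rectangle. The 480x640 source size, the 160-point offset, and the reuse of composition/theTrack from the snippet above are assumptions for illustration; preferredTransform handling is omitted.
// Sketch: square video composition that crops 160 points off the top of a 480x640 track.
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(480.0, 480.0); // square output
videoComposition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:theTrack];

// Shift the frame up by the strip you want removed (640 - 480 = 160 here); the top
// 160 points then land outside the 480x480 render rect and are cropped away.
[layerInstruction setTransform:CGAffineTransformMakeTranslation(0.0, -160.0) atTime:kCMTimeZero];

instruction.layerInstructions = @[layerInstruction];
videoComposition.instructions = @[instruction];
// Pass videoComposition to your AVAssetExportSession (or AVPlayerItem) along with the composition.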

How can I use this code to play more sounds?

//Action to play Audio//
-(IBAction)playAudio:(id)sender {
[self.loopPlayer play];
}
//Action to stop Audio//
-(IBAction)stopAudio:(id)sender {
if (self.loopPlayer.isPlaying) {
[self.loopPlayer stop];
self.loopPlayer.currentTime = 0;
self.loopPlayer.numberOfLoops = -1;
[self.loopPlayer prepareToPlay];
}
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
//Code that gets audio file "trap synth"//
NSURL* audioFileURL = [[NSBundle mainBundle] URLForResource:@"trapsynth" withExtension:@"wav"];
self.loopPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:audioFileURL error:nil];
}
This is the code I'm using, with one button to play the sound when the button is tapped and stop the sound when the button is released. How would I go about adding more sounds to more buttons? I want to have more buttons that play and stop different sounds just like this.
@property (nonatomic, strong) AVAudioPlayer *loopPlayer;
This code is also in my ViewController.h file.
OK, although the answer provided by Miro is on the right track, the code example given has issues.
viewDidLoad should be this:
- (void)viewDidLoad {
[super viewDidLoad];
NSURL* audioFileURL1 = [[NSBundle mainBundle] URLForResource:@"trapsynth" withExtension:@"wav"];
self.loopPlayer1 = [[AVAudioPlayer alloc] initWithContentsOfURL:audioFileURL1 error:nil];
NSURL* audioFileURL2 = [[NSBundle mainBundle] URLForResource:@"other_audio_file" withExtension:@"wav"];
self.loopPlayer2 = [[AVAudioPlayer alloc] initWithContentsOfURL:audioFileURL2 error:nil];
}
The stopAudio: method should also be this:
-(IBAction)stopAudio:(id)sender {
if (self.loopPlayer1.isPlaying && (sender.tag == 1)) {
[self.loopPlayer1 stop];
self.loopPlayer1.currentTime = 0;
self.loopPlayer1.numberOfLoops = -1;
[self.loopPlayer1 prepareToPlay];
}
if (self.loopPlayer2.isPlaying && (sender.tag == 2)) {
[self.loopPlayer2 stop];
self.loopPlayer2.currentTime = 0;
self.loopPlayer2.numberOfLoops = -1;
[self.loopPlayer2 prepareToPlay];
}
}
And finally for playAudio:
-(IBAction)playAudio:(id)sender {
if([sender tag] == 1){
[self.loopPlayer1 play];
}
if([sender tag] == 2){
[self.loopPlayer2 play];
}
}
If you want to play different sounds at the same time, you should look into creating separate AVAudioPlayers; if you create a different one for each sound, then you can easily control (play/stop) each of them separately with a specific button.
On the simplest level, you could do something like this, which allows you to use the same button handlers for all your audio. playAudio: checks the tag of the Play button you press (be sure to set the tag value in IB to 1, 2, etc.). There really only needs to be one Stop button.
You could enhance this in many ways, like attempting to reuse the AVAudioPlayer somehow, loading the audio on the fly instead of all at the beginning, or storing your audio file info in an array and creating an array of AVAudioPlayers for management (a sketch of that approach follows after the code below). But this is a start.
-(IBAction)playAudio:(id)sender {
// first, stop any already playing audio
[self stopAudio:sender];
if([sender tag] == 1){
[self.loopPlayer1 play];
} else if([sender tag] == 2){
[self.loopPlayer2 play];
}
}
-(IBAction)stopAudio:(id)sender {
if (self.loopPlayer1.isPlaying) {
[self.loopPlayer1 stop];
self.loopPlayer1.currentTime = 0;
self.loopPlayer1.numberOfLoops = -1;
[self.loopPlayer1 prepareToPlay];
} else if (self.loopPlayer2.isPlaying) {
[self.loopPlayer2 stop];
self.loopPlayer2.currentTime = 0;
self.loopPlayer2.numberOfLoops = -1;
[self.loopPlayer2 prepareToPlay];
}
}
- (void)viewDidLoad {
[super viewDidLoad];
NSURL* audioFileURL1 = [[NSBundle mainBundle] URLForResource:@"trapsynth" withExtension:@"wav"];
self.loopPlayer1 = [[AVAudioPlayer alloc] initWithContentsOfURL:audioFileURL1 error:nil];
NSURL* audioFileURL2 = [[NSBundle mainBundle] URLForResource:@"trapsynth" withExtension:@"wav"]; // load your second sound file here
self.loopPlayer2 = [[AVAudioPlayer alloc] initWithContentsOfURL:audioFileURL2 error:nil];
}
AND, in the .h file:
@property (nonatomic, strong) AVAudioPlayer *loopPlayer1;
@property (nonatomic, strong) AVAudioPlayer *loopPlayer2;
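For more than two sounds, the array idea mentioned in the answer avoids adding a branch per button. This is a rough sketch under the assumption that button tags start at 1 and that the file names listed are placeholders for your own:
// In the .h (or a class extension):
@property (nonatomic, strong) NSArray *loopPlayers;

// In the .m: one AVAudioPlayer per file, indexed by button tag (tag 1 -> index 0, etc.).
- (void)viewDidLoad {
    [super viewDidLoad];
    NSArray *fileNames = @[@"trapsynth", @"other_audio_file"]; // placeholder file names
    NSMutableArray *players = [NSMutableArray array];
    for (NSString *name in fileNames) {
        NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"wav"];
        AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
        if (player) {
            [players addObject:player];
        }
    }
    self.loopPlayers = players;
}
- (IBAction)playAudio:(UIButton *)sender {
    [self stopAudio:sender];
    NSInteger index = sender.tag - 1;
    if (index >= 0 && index < (NSInteger)self.loopPlayers.count) {
        [self.loopPlayers[index] play];
    }
}
- (IBAction)stopAudio:(id)sender {
    for (AVAudioPlayer *player in self.loopPlayers) {
        if (player.isPlaying) {
            [player stop];
            player.currentTime = 0;
            player.numberOfLoops = -1; // keep the looping behaviour from the handlers above
            [player prepareToPlay];
        }
    }
}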

AVAssetReader failing after one frame on H.264 .mov file

I'm trying to render an H.264 QuickTime movie to an OpenGL texture on iOS. I am stuck decoding frame buffers from the input file. One frame decodes correctly and displays. All subsequent calls to [AVAssetReaderTrackOutput getNextSample] return NULL, however, and AVAssetReader.status == AVAssetReaderStatusFailed. If I do not specify a value for kCVPixelBufferPixelFormatTypeKey in the settings dict, the status remains AVAssetReaderStatusReading, but the buffer objects returned are empty. The AVAsset in question plays without issue in AVPlayer. Is there anything obviously wrong with my code?
- (id) initWithAsset: (AVAsset *) asset {
if (!(self = [super init])) return nil;
_asset = [asset copy];
[self initReaderToTime:kCMTimeZero];
return self;
}
- (void) initReaderToTime:(CMTime) readStartTime {
_readStartTime = readStartTime;
NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
[outputSettings setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
_trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:[[_asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
NSError *error = nil;
_assetReader = [[AVAssetReader alloc] initWithAsset:_asset error:&error];
if (error) return;
if (![_assetReader canAddOutput:_trackOutput]) return;
[_assetReader addOutput:_trackOutput];
if (![_assetReader startReading]) return;
[self getNextSample];
}
- (void) getNextSample {
if (_assetReader.status != AVAssetReaderStatusReading) {
Log(#"Reader status %d != AVAssetReaderStatusReading. Ending...", _assetReader.status);
return;
}
CMSampleBufferRef sampleBuffer = [_trackOutput copyNextSampleBuffer];
/*
Do things with buffer
*/
[self performSelector:_cmd withObject:nil afterDelay:0];
}
[_trackOutput copyNextSampleBuffer] should appear somewhere in your code.
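One more thing that commonly produces AVAssetReaderStatusFailed after the first frame or two (an educated guess, not something confirmed by the question): copyNextSampleBuffer follows the Create rule, so every CMSampleBufferRef it returns must be CFRelease'd once you are done with it, or the reader's buffer pool can be exhausted. A sketch of the read step with an explicit release, based on the getNextSample method above (NSLog stands in for the question's Log macro):
- (void)getNextSample {
    if (_assetReader.status != AVAssetReaderStatusReading) {
        NSLog(@"Reader status %ld != AVAssetReaderStatusReading. Ending...", (long)_assetReader.status);
        return;
    }
    CMSampleBufferRef sampleBuffer = [_trackOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (pixelBuffer != NULL) {
            // Upload/copy the pixel data to the OpenGL texture here, before releasing the buffer.
        }
        // Release the buffer (Create rule); holding on to it can starve the reader.
        CFRelease(sampleBuffer);
    }
    [self performSelector:_cmd withObject:nil afterDelay:0];
}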
