Decoding H264 in iOS

I am a newbie to AVFoundation and the decoding process. I need to decode an H264 video file and play it on an iPhone. Can anyone give me a guideline for doing this?
I don't want to use ffmpeg or any third-party library. As far as I know, encoding is possible with AVFoundation; here is the code which I think is used for encoding, but I'm not sure at all...
float bitsPerPixel;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(currentFormatDescription);
int numPixels = dimensions.width * dimensions.height;
int bitsPerSecond;

// Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
if ( numPixels < (640 * 480) )
    bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
else
    bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh.

bitsPerSecond = numPixels * bitsPerPixel;

NSDictionary *videoCompressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                          AVVideoCodecH264, AVVideoCodecKey,
                                          [NSNumber numberWithInteger:dimensions.width], AVVideoWidthKey,
                                          [NSNumber numberWithInteger:dimensions.height], AVVideoHeightKey,
                                          [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithInteger:bitsPerSecond], AVVideoAverageBitRateKey,
                                           [NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey,
                                           nil], AVVideoCompressionPropertiesKey,
                                          nil];

if ([assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
    assetWriterVideoIn = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
    assetWriterVideoIn.expectsMediaDataInRealTime = YES;
    assetWriterVideoIn.transform = [self transformFromCurrentVideoOrientationToOrientation:self.referenceOrientation];
    if ([assetWriter canAddInput:assetWriterVideoIn]) {
        [assetWriter addInput:assetWriterVideoIn];
    }
    else {
        NSLog(@"Couldn't add asset writer video input.");
        return NO;
    }
}
else {
    NSLog(@"Couldn't apply video output settings.");
    return NO;
}
return YES;
I am completely new to this, so please help me figure out where to start.
Thanks.

The simpler solution is to use MPMoviePlayerController. It takes as input a .mov or .m4v file (from the local file system or via a URL).
Another option is to use the AVPlayer class.
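For example, a minimal AVPlayer sketch might look like the following. This is just an illustration, not code from the question: the file name "video.mp4" and the fact that it runs inside a view controller are assumptions.

#import <AVFoundation/AVFoundation.h>

// Assumes an H264/AAC file named "video.mp4" has been added to the app bundle.
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"video" withExtension:@"mp4"];
AVPlayer *player = [AVPlayer playerWithURL:fileURL];

// AVPlayerLayer renders the decoded frames; add it to an existing view's layer.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];

[player play];

AVFoundation handles the H264 decoding for you in both cases; you never touch the raw bitstream.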
Hope it helps,
David

You can refer to and build the official sample code, AVPlayerDemo, to see how it works. It uses the AV Foundation framework, mainly the AVPlayer APIs, to play video files, and the performance is excellent.
To play a video file with AVPlayerDemo, copy the video files to your iOS device via iTunes and select the iPod library in the AVPlayerDemo app.

Related

Audio streaming with AVFoundation using audio queues/buffers in iOS

I need to do audio streaming in an iOS app using Objective-C. I have used the AVFoundation framework to capture raw data from the microphone and send it to a server. However, the raw data I am receiving is corrupt. Below is my code.
Please suggest where I am going wrong.
session = [[AVCaptureSession alloc] init];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                nil];

AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];

AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings];
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES;

if ([session canAddOutput:audioDataOutput]) {
    [session addOutput:audioDataOutput];
}
[session startRunning];
Capturing:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];

        NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
        NSLog(@"Encoded: %@", base64Encoded);
    }
    CFRelease(blockBuffer);
}
I posted a sample of the kind of code you need to make this work. Its approach is nearly the same as yours. You should be able to read it easily.
The app uses AudioUnit to record and playback microphone input and speaker output, NSNetServices to connect two iOS devices on your network, and NSStreams to send an audio stream between the devices.
You can download the source code at:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
It requires the latest Xcode 9 beta release to compile, and the latest iOS 11 beta release to run it.
NOTE | A log entry for each method call and event is displayed in a text field that covers the entire screen; there is no interactive interface (no buttons, etc.). After installing the app on two iOS devices, simply launch it on both devices; they will automatically connect over your network and start streaming audio.
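For context, here is a rough sketch of the NSNetServices/NSStream side of that design. This is my own illustration, not the linked project's code; the service type "_audiostream._tcp." and the class name are made up.

#import <Foundation/Foundation.h>

// Illustrative publisher: one device advertises a Bonjour service,
// the peer connects, and the resulting streams carry the audio bytes.
@interface AudioStreamPublisher : NSObject <NSNetServiceDelegate>
@property (nonatomic, strong) NSNetService *service;
@property (nonatomic, strong) NSOutputStream *outputStream;
@end

@implementation AudioStreamPublisher

- (void)start {
    // Publish a Bonjour service and let NSNetService accept TCP connections for us.
    self.service = [[NSNetService alloc] initWithDomain:@"local."
                                                   type:@"_audiostream._tcp."
                                                   name:@""
                                                   port:0];
    self.service.delegate = self;
    [self.service publishWithOptions:NSNetServiceListenForConnections];
}

// Called when the other device connects; captured audio bytes get written
// to the output stream, and received bytes are read from the input stream.
- (void)netService:(NSNetService *)sender
        didAcceptConnectionWithInputStream:(NSInputStream *)inputStream
        outputStream:(NSOutputStream *)outputStream {
    self.outputStream = outputStream;
    [outputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream open];
}

@end

The browsing device would use NSNetServiceBrowser to find the service, resolve it, and call getInputStream:outputStream: to obtain its own stream pair.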

Audio recording formats in iOS

Which audio format is smallest in size for speech recording in iOS? The quality need not be the best, but what the user says should be understandable.
Assuming you plan to use the AVAudioRecorder class, you should provide the recording settings like so:
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
                                [NSNumber numberWithInt:16], AVEncoderBitRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                nil];
NSError *error = nil;
AVAudioRecorder *audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                                             settings:recordSettings
                                                                error:&error];
Apple's documentation provides details about the settings constants (specifically AVEncoderAudioQualityKey) you could use in your app.
22.05 kHz in mono is more than adequate for speech and is 1/4 the size of 44.1 kHz in stereo at the same bit depth. You could likely even try dropping it down to 11.025 kHz.
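As a hedged sketch of settings along those lines (AAC at 22.05 kHz mono; the exact values are my suggestion, not tested numbers, and soundFileURL is assumed to be defined as in the snippet above):

// Compact speech recording: AAC, mono, 22.05 kHz, minimum encoder quality.
// These values are illustrative; tune them for your own size/quality trade-off.
NSDictionary *speechSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                [NSNumber numberWithFloat:22050.0], AVSampleRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
                                nil];

NSError *error = nil;
AVAudioRecorder *speechRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                                              settings:speechSettings
                                                                 error:&error];
[speechRecorder record];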
Several iOS apps use the Speex encoder for lower-bit rate speech. It's not built-in, but open source source code is available to do the encoding.

WP8 and iOS audio recording and playback compatibility

I want to play audio recordings from iOS on WP8 and vice versa.
On iOS I'm using AVAudioRecorder for that purpose with the following configuration:
NSString *tempPath = NSTemporaryDirectory();
NSURL *soundFileURL = [NSURL fileURLWithPath:[tempPath stringByAppendingPathComponent:@"sound.aac"]];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
                                [NSNumber numberWithInt:8000], AVEncoderBitRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
                                [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
                                nil];

NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                             settings:recordSettings
                                                error:&error];
_audioRecorder.delegate = self;
The file "sound.aac" contains the recording in an AAC container, and playing back the recorded audio works well on iOS.
I couldn't play "sound.aac" on WP8 after transferring the file to the WP8 device. According to the following link, WP8 should be able to play the file: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff462087(v=vs.105).aspx#BKMK_AudioSupport
The code I've used on WP8 is:
try
{
    this.mediaPlayer = new MediaElement();
    mediaPlayer.MediaEnded += new RoutedEventHandler(mediaPlayer_MediaEnded);

    IsolatedStorageFile myStore = IsolatedStorageFile.GetUserStoreForApplication();
    IsolatedStorageFileStream mediaStream = myStore.OpenFile("sound.aac", FileMode.Open, FileAccess.Read);

    this.mediaPlayer.SetSource(mediaStream);
    this.messageTextBlock.Text = "Playing the message...";
    mediaPlayer.Play();
}
catch (Exception exception)
{
    MessageBox.Show("Error playing audio!");
    Debug.WriteLine(exception);
    return;
}
After this, "sound.aac" plays endlessly with no sound coming from the speaker. The message "Playing the message..." is shown, no exception is thrown, and mediaPlayer_MediaEnded is never called. All I can do is stop the playback.
I don't know how to get it working.

How to build FFmpeg optimized for iOS, possibly using hardware decoding?

I am making an FFmpeg-based player for iOS. It works fine in the Simulator, but on a real device (iPhone 4) the frame rate is low and makes my audio and video go out of sync. The player works fine on the iPhone 4S, so I guess it's just a problem of the device's computing power.
So, is there any way to build FFmpeg optimized for iOS devices (armv7, armv7s arch)? Or is there any way to use the iOS device hardware to decode the video stream?
My video stream is encoded in H264/AAC.
Those streams should play just fine. I assume that since you're using ffmpeg, you are not using a video protocol that iOS supports directly.
We use ffmpeg to do RTSP/RTMP and we get good performance with H264/AAC.
There are a number of factors that contribute to A/V sync issues; usually some type of pre-buffering of the video is required, and the network also plays a big part.
As to your second question, hardware encoding is only available via AVFoundation; you can use AVAssetWriter to encode your video, but again it depends on whether or not you need real-time.
See this link: https://github.com/mooncatventures-group/FFPlayer-beta1/blob/master/FFAVFrames-test/ViewController.m
- (void)startRecording {
    // // create the AVComposition
    // [mutableComposition release];
    // mutableComposition = [[AVMutableComposition alloc] init];

    movieURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%llu.mov", NSTemporaryDirectory(), mach_absolute_time()]];

    NSError *movieError = nil;
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&movieError];

    NSDictionary *assetWriterInputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              AVVideoCodecH264, AVVideoCodecKey,
                                              [NSNumber numberWithInt:FRAME_WIDTH], AVVideoWidthKey,
                                              [NSNumber numberWithInt:FRAME_HEIGHT], AVVideoHeightKey,
                                              nil];

    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:assetWriterInputSettings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:assetWriterInput];

    assetWriterPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                                     initWithAssetWriterInput:assetWriterInput
                                     sourcePixelBufferAttributes:nil];
    [assetWriter startWriting];

    firstFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    startSampleing = YES;
}
The one drawback right now is that a way needs to be determined to read the encoded data as it's being written; believe me when I say there are a few of us developers trying to figure out how to do that as I write this.

Saved video filtering on iOS

How can I implement filtering of a video saved in the photo library on iOS?
I got the URLs of videos in the library using the AssetsLibrary framework,
then made a preview for the video.
Next, I want to apply filtering to the video using CIFilter.
For the real-time case I implemented the video filtering using AVCaptureVideoDataOutputSampleBufferDelegate,
but for a saved video I don't know how to do the filtering.
Should I use AVAsset? If I must use that, how can I filter it, and how do I save the result?
Thank you, as always.
I hope this will help you:
AVAsset *theAVAsset = [[AVURLAsset alloc] initWithURL:mNormalVideoURL options:nil];
NSError *error = nil;

float width = theAVAsset.naturalSize.width;
float height = theAVAsset.naturalSize.height;

AVAssetReader *mAssetReader = [[AVAssetReader alloc] initWithAsset:theAVAsset error:&error];

NSArray *videoTracks = [theAVAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
mPrefferdTransform = [videoTrack preferredTransform];
[theAVAsset release]; // release only after we are done querying the asset

NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *mAssetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];
[mAssetReader addOutput:mAssetReaderOutput];
[mAssetReaderOutput release];

[mAssetReader startReading]; // without this the status never becomes AVAssetReaderStatusReading

CMSampleBufferRef buffer = NULL;
while ([mAssetReader status] == AVAssetReaderStatusReading) {
    buffer = [mAssetReaderOutput copyNextSampleBuffer]; // read the next frame
    if (buffer) {
        // ... process/filter the frame here ...
        CFRelease(buffer); // copyNextSampleBuffer returns a retained buffer
    }
}
You should have a look at CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer); that gives you the frame's pixel buffer, so you can apply your filter to pixBuf. I find that the performance is not good, though. If you have any new ideas, we can discuss it further.
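As a rough illustration of that idea (my own sketch, not the answer's code and not tuned for performance; CISepiaTone is just an example filter), applying a CIFilter to each frame inside the reading loop could look like this:

#import <CoreImage/CoreImage.h>

// Apply a Core Image filter to one decoded frame in place.
// buffer is the CMSampleBufferRef returned by copyNextSampleBuffer above.
CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer);

CIContext *ciContext = [CIContext contextWithOptions:nil]; // reuse one context across frames in real code
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixBuf];

// CISepiaTone is only an example; substitute whatever effect you need.
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:0.8] forKey:kCIInputIntensityKey];

// Render the filtered result back into the same pixel buffer.
[ciContext render:filter.outputImage toCVPixelBuffer:pixBuf];

To save the filtered video you would then feed these pixel buffers to an AVAssetWriter via an AVAssetWriterInputPixelBufferAdaptor, much like the startRecording code shown earlier in this thread.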
