iOS 4: video quality with AVAsset based application - ios

I'm trying to create a simple video application (loading an existing video file from an iOS 4 device, editing it using direct pixel access, and saving it under a different name).
I managed to load, edit and save my movie file on a real device (iPod touch 4G). The only problem I have is the movie quality (original vs. edited). I don't know what I am doing wrong, but the quality of my output file is very bad compared to the input one.
Below you can find how am I loading my movie:
// *** tmp file ***
NSURL *movieUrl = [info objectForKey:@"UIImagePickerControllerMediaURL"];
NSLog(@"picker controller movie url: %@", [movieUrl absoluteString]);
## picker controller movie url: /private/var/mobile/Applications/6253D1-8C0-41-B780-638250817/tmp//trim.2q03uz.MOV
AVURLAsset *movieAsset = [[AVURLAsset alloc] initWithURL:movieUrl options:nil];
CGSize size = [movieAsset naturalSize];
NSLog(@"movie asset natural size: size.width = %f size.height = %f", size.width, size.height);
## movie asset natural size: size.width = 224.000000 size.height = 128.000000
// *** ASSET READER ***
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:movieAsset error:&error];
NSArray* videoTracks = [movieAsset tracksWithMediaType:AVMediaTypeVideo];
// asset track
AVAssetTrack* videoTrack = [videoTracks objectAtIndex:0];
NSDictionary* dictionary = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// nominalFrameRate is a float, so log it with %f (with %d it printed 0 here)
NSLog(@"video track %f %f %f", [videoTrack naturalSize].width, [videoTrack naturalSize].height, [videoTrack nominalFrameRate]);
## video track 224.000000 128.000000 0
/*
also tried
kCVPixelFormatType_32BGRA
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
*/
// asset reader track output
AVAssetReaderTrackOutput* assetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
outputSettings:dictionary];
if(![assetReader canAddOutput:assetReaderOutput])
NSLog(@"unable to add reader output");
else
[assetReader addOutput:assetReaderOutput];
// *** ASSET WRITER ***
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:exportURL
fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
NSLog(@"asset writer %d %d", [videoWriter status], [error code]);
NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:128.0*1024.0], AVVideoAverageBitRateKey, // ~131 kbps -- very low for video
nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
//videoCompressionProps, AVVideoCompressionPropertiesKey,<- no difference at all
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
// set preferred transform for output video
writerInput.transform = [videoTrack preferredTransform];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
writerInput.expectsMediaDataInRealTime = NO;
Then I simply read each sample buffer, modify the pixel data and save the result under a different name. That already works, but the output quality is very poor.
Please give me some advice :-).

Try setting:
imagePickerController.videoQuality = UIImagePickerControllerQualityTypeHigh;
before picking the movie.
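For context, here is a minimal picker setup with that property applied. This is a sketch: the source type and delegate wiring are assumptions, not taken from the question.

```objc
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeMovie

UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init];
imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
imagePickerController.mediaTypes = [NSArray arrayWithObject:(NSString *)kUTTypeMovie];
// Without this, the picker transcodes the movie to a medium-quality rendition,
// which may explain the small 224x128 natural size logged in the question.
imagePickerController.videoQuality = UIImagePickerControllerQualityTypeHigh;
imagePickerController.delegate = self;
```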

It looks like you are using UIImagePickerController. I believe it compresses your video, which would explain the bad quality you're getting.
Also, you may consider setting expectsMediaDataInRealTime to YES to optimize the dropping of frames.

Related

How to specify fmt chunk in .wav header with Objective-C

My iPhone app works well on iOS 10 but not on iOS 11. It records the user's voice as a .wav file, but the header data differs between the two versions: the wave file is written with a "fmt " chunk on iOS 10 but with a "JUNK" chunk on iOS 11. I need the header to contain the fmt chunk.
Here is the code to output wave file.
// Create file path.
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"yMMddHHmmss"];
NSString *fileName = [NSString stringWithFormat:@"%@.wav", [formatter stringFromDate:[NSDate date]]];
self.filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:fileName];
// Change Audio category to Record.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
// Settings for AVAudioRecorder.
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:16000.0], AVSampleRateKey,
[NSNumber numberWithUnsignedInt:1], AVNumberOfChannelsKey,
[NSNumber numberWithUnsignedInt:16], AVLinearPCMBitDepthKey,
nil];
self.recorder = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:filePath] settings:settings error:nil];
recorder.delegate = self;
[recorder prepareToRecord];
[recorder record];
I really need your help. Thank you.
I've solved this issue thanks to this post.
Audio file format issue in objective c
Thank you.
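For reference, my understanding of the fix from that post is that it amounts to fully specifying the linear-PCM format: when the float/endianness keys are left out, the iOS 11 writer can pad the header with a JUNK chunk instead of a plain "fmt " chunk. A sketch with the same values as above plus the two extra keys (treat it as an assumption to verify, not a guaranteed fix):

```objc
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
    [NSNumber numberWithUnsignedInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithUnsignedInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,     // integer samples
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey, // little-endian, as WAV expects
    nil];
```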

AVAssetWriter fails when audio Compression Settings are set

When I have audioCompressionSettings as nil in the following code,
AVAssetWriterInput* audioWriterInput =
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:nil
sourceFormatHint:(__bridge CMAudioFormatDescriptionRef)[[audioTrack formatDescriptions] objectAtIndex:0]];
everything works fine.
If I add outputSettings:audioCompressionSettings, my video comes out as a 0-byte file on iOS 7, 8 and 9.
And in iOS 6, the following statement throws an error,
NSParameterAssert([mediaWriter canAddInput:audioWriterInput]);
My audioCompressionSettings is defined as follows,
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *audioCompressionSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
Please let me know what I am doing wrong.
With audioCompressionSettings nil, iOS 6 spits out this error:
*** WebKit discarded an uncaught exception in the webView:decidePolicyForNavigationAction:
request:frame:decisionListener: delegate:
<NSInternalInconsistencyException> Invalid parameter not satisfying:
[mediaWriter canAddInput:audioWriterInput]
mediaWriter is defined as follows,
AVAssetWriter *mediaWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeMPEG4 error:&error];
mediaWriter.shouldOptimizeForNetworkUse = YES;
NSParameterAssert(mediaWriter);
I actually want the iOS 6 issue to be fixed as I really do not want to do any transformation of the audio but just have it pass through.
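Whichever path fails, AVAssetWriter records the underlying reason in its error property, which is usually more informative than the assertion message alone. A quick diagnostic sketch using the question's names (a suggestion, not the confirmed cause):

```objc
if (![mediaWriter canAddInput:audioWriterInput]) {
    // Log instead of asserting, so the incompatibility details survive.
    NSLog(@"cannot add audio input; writer status=%ld error=%@",
          (long)mediaWriter.status, mediaWriter.error);
}

// After finishing, a 0-byte file usually means the writer failed mid-session.
[mediaWriter finishWriting];
if (mediaWriter.status == AVAssetWriterStatusFailed) {
    NSLog(@"asset writer failed: %@", mediaWriter.error);
}
```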

AVAudioRecorder not saving recording to path specified

I am recording audio using AVAudioRecorder. Recording works smoothly and I can play back the saved file. I am using the following code to initialize AVAudioRecorder:
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
[NSNumber numberWithInt:AVAudioQualityLow], AVEncoderAudioQualityKey,
[NSNumber numberWithInt:16], AVEncoderBitRateKey, // 16 bits/second -- far below any usable AAC bitrate
[NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
[NSNumber numberWithFloat:16000], AVSampleRateKey,
nil];
NSArray *searchPaths =NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentPath_ = [searchPaths objectAtIndex: 0];
NSString *pathToSave = [documentPath_ stringByAppendingPathComponent:[self dateString]];
self.pathfinal = pathToSave;
NSURL *url = [NSURL fileURLWithPath:pathToSave];
recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
if (!recorder)
{
return NO;
}
recorder.delegate = self;
recorder.meteringEnabled = YES;
if (![recorder prepareToRecord])
{
return NO;
}
if (![recorder record])
{
return NO;
}
but some of my clients are facing an issue: when they stop the recorder (the step where the file is saved to disk), the file is created but contains no data, every time they record.
On a different iPhone there was no such issue, and the affected iPhone had 2 GB of free disk space. Any idea what could be causing this?
In my case, simply not using AVEncoderBitRateKey solved the problem.
See this question.
AVEncoderBitRateKey is not needed to record AAC audio; the encoder chooses a sensible default bitrate. Set the key only when you deliberately want to force a specific (for example, higher-quality) bitrate.
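A sketch of the question's settings with the problematic key dropped (all other values unchanged):

```objc
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:AVAudioQualityLow], AVEncoderAudioQualityKey,
    // AVEncoderBitRateKey intentionally omitted -- let the encoder choose
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:16000], AVSampleRateKey,
    nil];
```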

Not able to find the DPI for an image in iOS

I want to find the DPI of an image captured with the iPhone/iPad camera.
This is how I am trying to get the DPI:
CFDictionaryRef exifDict = CMGetAttachment(imageDataSampleBuffer,
kCGImagePropertyExifDictionary ,
NULL);
originalExifDict = (__bridge NSMutableDictionary *)(exifDict);
[originalExifDict objectForKey:(NSString *)kCGImagePropertyDPIHeight];
[originalExifDict objectForKey:(NSString *)kCGImagePropertyDPIWidth];
However both the entries in the dictionary come to be 0.
What is the correct way to find the DPI ?
Thanks in advance for the help
CGSize size;
NSNumber *width = (NSNumber *)CFDictionaryGetValue(exifDict, kCGImagePropertyDPIWidth);
NSNumber *height = (NSNumber *)CFDictionaryGetValue(exifDict, kCGImagePropertyDPIHeight);
size.width = [width floatValue];
size.height = [height floatValue];
// Let me know whether this works.
The information isn't in the metadata that comes with your imageDataSampleBuffer. It is written (as 72 dpi) at the time the image is saved, unless you have manually set it yourself when editing the metadata before the save.
For most purposes it is meaningless. However, some software uses it to calculate the "correct size" of an image when placing it in a document: a 3000-pixel-square image at 300 dpi will appear 10 inches (c. 25.4 cm) square, while at 72 dpi it will be nearly 42 inches (c. 105.8 cm) square. Also, some online image uploaders (especially those used by stock photo libraries and the like) insist on images having a high-ish dpi.
If you are using UIImagePickerController, use the code below:
NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL
resultBlock:^(ALAsset *asset) {
NSMutableDictionary *imageMetadata = nil;
NSDictionary *metadata = asset.defaultRepresentation.metadata;
imageMetadata = [[NSMutableDictionary alloc] initWithDictionary:metadata];
NSLog(@"imageMetaData from AssetLibrary %@", imageMetadata);
NSString *dpi = [imageMetadata objectForKey:@"DPIHeight"];
NSLog(@"Dpi: %@", dpi);
}
failureBlock:^(NSError *error) {
NSLog(@"error %@", error);
}];

How to convert WAV file to M4A?

Is there any way to convert my recorded .wav file to a .m4a file on iOS?
I also need to convert the .m4a file back to .wav.
I tried Audio Queue Services, but I was not able to make it work.
This post, From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary, describes how to load a file from the user's iPod library and write it to the file system as a linear PCM (WAV) file.
I believe the only change you need in order to load a file from the file system instead is the NSURL that describes where the asset is:
-(IBAction) convertTapped: (id) sender {
// set up an AVAssetReader to read from the iPod Library
NSURL *assetURL = [[NSURL alloc] initFileURLWithPath:@"your_m4a.m4a"];
AVURLAsset *songAsset =
[AVURLAsset URLAssetWithURL:assetURL options:nil];
NSError *assetError = nil;
AVAssetReader *assetReader =
[[AVAssetReader assetReaderWithAsset:songAsset
error:&assetError]
retain];
if (assetError) {
NSLog(@"error: %@", assetError);
return;
}
If you are going in the opposite direction, you will need to change the formatting on the output end:
NSDictionary *outputSettings =[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)],
AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
I am not sure of the exact settings that would go in here for m4a, but this should get you closer.
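For the WAV-to-M4A direction, the writer side would use AAC output settings instead of linear PCM. A sketch in the same style as the dictionary above; the bitrate is illustrative, not verified against this exact pipeline:

```objc
AudioChannelLayout stereoLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:128000], AVEncoderBitRateKey, // illustrative bitrate
    [NSData dataWithBytes:&stereoLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)],
    AVChannelLayoutKey,
    nil];
```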
The other option would be to bring in an ffmpeg library and do all of the conversion there, but that seems heavier than what you want.
TPAACAudioConverter works fine
