How to reduce the AVAudioRecorder recording file size when recording for a long time (about an hour)? - avaudiorecorder

Description of problem: When I record an audio file for about 1 hour, its size ends up around 600 MB, which is too big. I want to compress it to a smaller size, but I don't know how. The reason for compressing is that the file takes a long time to save when I convert it to NSData. Please help or guide me on how to get out of this problem.
Any suggestion would be very appreciated!
Thanks in advance!

You will have to choose the correct encoding format to reduce the recorded audio file's size.
Please see the link: Record audio on iPhone with smallest file size
If it is still larger than expected, you can break the file into multiple small chunks:
http://sspbond007.blogspot.in/2012/01/objective-c-filesplitter-and-joiner.html
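As a minimal sketch of the first suggestion (not taken from the linked answer), recording compressed AAC instead of uncompressed PCM keeps hour-long recordings small. The concrete settings below (22050 Hz, mono, 32 kbps) are illustrative assumptions to tune for your quality needs:

import AVFoundation

// Sketch: record AAC at a modest sample rate and bitrate instead of PCM.
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),  // AAC instead of linear PCM
    AVSampleRateKey: 22050.0,                  // illustrative sample rate
    AVNumberOfChannelsKey: 1,                  // mono
    AVEncoderBitRateKey: 32000                 // 32 kbps, illustrative
]

let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("recording.m4a")
let recorder = try AVAudioRecorder(url: url, settings: settings)
recorder.record()

At 32 kbps, an hour of audio is roughly 32000 / 8 * 3600 bytes, about 14 MB, versus the ~600 MB you are seeing now.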

Related

Read an AVAudioFile into a buffer starting at a certain time

Let's say I have an AVAudioFile with a duration of 10 seconds. I want to load that file into an AVAudioPCMBuffer but I only want to load the audio frames that come after a certain number of seconds/milliseconds or after a certain AVAudioFramePosition.
It doesn't look like AVAudioFile's readIntoBuffer methods give me that kind of precision, so I'm assuming I'll have to work at the AVAudioBuffer level or lower?
You just need to set the AVAudioFile's framePosition property before reading.
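A minimal sketch of that approach, assuming `url` points at a readable audio file and you want everything from the 2.5-second mark onward:

import AVFoundation

// Convert the start time in seconds to a frame position, seek, then read.
let file = try AVAudioFile(forReading: url)
let startSeconds = 2.5  // illustrative start time
let startFrame = AVAudioFramePosition(startSeconds * file.processingFormat.sampleRate)

file.framePosition = startFrame
let remainingFrames = AVAudioFrameCount(file.length - startFrame)
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                              frameCapacity: remainingFrames)!
try file.read(into: buffer)  // reads from startFrame to the end of the file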

Get byte range positions for a specified time range of an mp3 file

I'd like to be able to determine at what byte positions a segment of an NSData compressed mp3 file begins and ends.
For example, if I am playing an mp3 file using AVPlayer (or any player) that is 1 minute long and 1000000 bytes, I'd like to know approximately how many bytes into the file the 30-second mark occurs, and then the 40-second mark.
Note that due to the mp3 file being compressed I can't just divide the bytes in half to determine the 30 second mark.
If this can't be done with Swift/Objective-C, do you know if this determination can be done with any programming language? Thanks!
It turns out I had a different problem to solve. I was trying to approximate the byte position of a specific time, say, the 4:29 point of a 32:45 long podcast episode, within a few seconds of accuracy.
I used a function along these lines to calculate the approximate byte position:
startTimeBytesPosition = (startTimeInSeconds / episodeDuration) * episodeFileSize
That function worked like a charm for some episodes, but for others the resulting start time would be off by about 30-40 seconds.
It turns out this inaccuracy was happening because some mp3s contain metadata at the very beginning, and image files stored within that metadata can be 500 KB or more, so my time-from-byte-position calculation for any episode with an embedded image was off by about 500 KB (which translated into about 30-40 seconds in this case).
To resolve this, I first determine the size in bytes of the metadata in the mp3 file, and then use that to offset the approximation function:
startTimeBytesPosition = metadataBytesOffset + (startTimeInSeconds / episodeDuration) * episodeFileSize
So far this code seems to be doing a good job of approximating time based on byte position accurately within a few seconds.
I should note that this assumes that the metadata for the image will always appear at the beginning of the mp3 file, and I don't know if that will always be the case.
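For orientation, a sketch of that approximation as a Swift function. `metadataBytesOffset` is assumed to have been measured separately (e.g., by parsing the ID3 header size); the numbers in the usage comment are illustrative:

// Approximate the byte position of a playback time in an mp3, using the
// author's formula above. All parameter values are caller-supplied inputs.
func approximateBytePosition(startTimeInSeconds: Double,
                             episodeDuration: Double,
                             episodeFileSize: Int,
                             metadataBytesOffset: Int) -> Int {
    return metadataBytesOffset +
        Int((startTimeInSeconds / episodeDuration) * Double(episodeFileSize))
}

// e.g. the 4:29 point (269 s) of a 32:45 (1965 s) episode, with an assumed
// 30 MB file and 500 KB of leading metadata:
// approximateBytePosition(startTimeInSeconds: 269, episodeDuration: 1965,
//                         episodeFileSize: 30_000_000, metadataBytesOffset: 500_000)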

Mov file has more frames than written/Possible iOS AVAsset writer usage issue

I am manually generating a .mov video file.
Here is a link to an example file: link. I wrote a few image frames, and then after a long break wrote approximately 15 more image frames, just to emphasise my point for debugging purposes. When I extract images from the video, ffmpeg returns around 400 frames instead of the 15-20 I expected. Is this because the API I am using is inserting these image frames automatically? Is it a part of the .mov file format that requires this? Or is it due to the way the library is extracting the image frames from the video? I have tried searching the internet but could not arrive at an answer.
My use case is that I am trying to write the current sensor data from Core Motion while writing a video. For each frame I receive from the camera, I use AppendPixelBuffer to write the frame to the video and then write the corresponding sensor data as a row in a CSV file.
The end result I want is a 1:1 ratio of frames in the video to rows in the CSV file. I have confirmed I am writing the CSV file correctly using various counters, so my issue is clearly in my understanding of the movie format or API.
Thanks for any help.
UPDATED
It looks like your ffmpeg extractor is wrong. To extract only the timestamped frames (and not frames sampled at 24Hz) in your file, try this:
ffmpeg -i video.mov -r 1/1 image-%03d.jpeg
This gives me the 20 frames expected.
OLD ANSWER
ffprobe reports that your video has a frame rate of 2.19 frames/s and a duration of 17s, which gives 2.19 * 17 = 37 frames, which is closer to your expected 15-20 than ffmpeg's 400.
So maybe the ffmpeg extractor is at fault?
It's hard to say without seeing how you encode and decode the file.
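On the writing side, a hedged sketch of appending frames with explicit presentation timestamps; the AVAssetWriterInputPixelBufferAdaptor setup is assumed to exist elsewhere. Each successful append should correspond to exactly one frame in the .mov, which is the 1:1 ratio the question wants:

import AVFoundation

// Append one camera frame at an explicit presentation time.
// `adaptor` is assumed to be configured elsewhere; 600 is a
// conventional video timescale.
func appendFrame(_ pixelBuffer: CVPixelBuffer,
                 atSeconds seconds: Double,
                 using adaptor: AVAssetWriterInputPixelBufferAdaptor) -> Bool {
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return false }
    let time = CMTime(seconds: seconds, preferredTimescale: 600)
    // Write the matching sensor row to the CSV only if this append succeeds,
    // keeping the frame-to-row ratio at 1:1.
    return adaptor.append(pixelBuffer, withPresentationTime: time)
}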

convert audio to lower sampling rate

I am using AVAudioRecorder to save recordings and AVAssetExportSession to append multiple files, but the output of the export session is too large.
So I would like to convert it to a smaller size before uploading it to the server. How can I convert this to a lower sampling rate?
Use AVAssetWriter (Apple docs: https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAssetWriter_Class/index.html), which will allow you to choose the bitrate, channel count, and other options for the file.
This related question (AVAssetWriter How to write down-sampled/compressed m4a/mp3 files) has a full code sample using AVAssetWriter if you need it -- be sure, of course, to take note of the answer to that question regarding locations for the exported file.
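For orientation, a hedged sketch of that approach: decode the source to PCM with AVAssetReader and re-encode it through AVAssetWriter at a lower sample rate and bitrate. All concrete settings below are illustrative assumptions, and error handling is minimal for brevity:

import AVFoundation

// Re-encode an audio file as mono 22050 Hz AAC at 32 kbps.
func downsample(from sourceURL: URL, to destinationURL: URL,
                completion: @escaping () -> Void) throws {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [AVFormatIDKey: kAudioFormatLinearPCM])  // decode to PCM
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: destinationURL, fileType: .m4a)
    let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 22050,      // illustrative target sample rate
        AVNumberOfChannelsKey: 1,
        AVEncoderBitRateKey: 32000   // illustrative target bitrate
    ])
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Pull decoded samples and feed them to the encoder until the source ends.
    let queue = DispatchQueue(label: "audio.transcode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                writerInput.append(sample)
            } else {
                writerInput.markAsFinished()
                writer.finishWriting(completionHandler: completion)
                return
            }
        }
    }
}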

Is it possible to split the recorded wav file into multiple wav files on iOS, given the duration of the splits?

I want to extract a few clips from the recorded wav file. I am not finding much help online regarding this issue. I understand we can't split from compressed formats like mp3, but how do we do it with caf/wav files?
One approach you may consider would be to calculate and read the bytes from an audio file and write them to a new file. Because you are dealing with LPCM formats the calculations are relatively simple.
If for example you have a file of 16bit mono LPCM audio sampled at 44.1kHz that is one minute in duration, then you have a total of (60 secs x 44100Hz) 2,646,000 samples. Times 2 bytes per sample gives a total of 5,292,000 bytes. And if you want audio from 10sec to 30sec then you need to read the bytes from 882,000 to 2,646,000 and write them to a separate file.
There is a bit of code involved but it can be done using Audio File Services from the AudioToolbox framework.
Functions you'll need to use are AudioFileOpenURL, AudioFileCreateWithURL, AudioFileReadBytes, AudioFileWriteBytes, and AudioFileClose.
An algorithm would be something like this:
You first set up an AudioFileID which is an opaque type that gets passed in to the AudioFileCreateWithURL function. Then open the file you wish to splice up using AudioFileOpenURL.
Calculate the start and end bytes of what you want to copy.
Next, in a loop preferably, read in the bytes and write them to file. AudioFileReadBytes and AudioFileWriteBytes allow you to do this. What's good is that you can read and write whatever number of bytes you decide on each iteration of the loop.
When finished close the new file and original using AudioFileClose.
Then repeat for each file (audio extraction) to be written.
On an additional note, to split a compressed format you would first convert it to LPCM.
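A hedged sketch of those steps for the 16-bit mono 44.1kHz example above. The URLs and the time range are illustrative, and every AudioToolbox call here actually returns an OSStatus that real code should check:

import AudioToolbox
import Foundation

// Copy a time range of a 16-bit mono 44.1kHz LPCM file to a new file.
func extractClip(from sourceURL: URL, to destURL: URL,
                 startSeconds: Int64, endSeconds: Int64) {
    let bytesPerSecond: Int64 = 44100 * 2   // sample rate * 2 bytes per sample
    var readOffset = startSeconds * bytesPerSecond
    let endByte = endSeconds * bytesPerSecond

    // Open the source and create the destination (AudioFileID is opaque).
    var inFile: AudioFileID?
    AudioFileOpenURL(sourceURL as CFURL, .readPermission, kAudioFileWAVEType, &inFile)

    var format = AudioStreamBasicDescription(
        mSampleRate: 44100, mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
        mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)
    var outFile: AudioFileID?
    AudioFileCreateWithURL(destURL as CFURL, kAudioFileWAVEType, &format,
                           .eraseFile, &outFile)

    // Read and write in 64 KB chunks until the end byte is reached.
    var buffer = [UInt8](repeating: 0, count: 64 * 1024)
    var writeOffset: Int64 = 0
    while readOffset < endByte {
        var count = UInt32(min(Int64(buffer.count), endByte - readOffset))
        AudioFileReadBytes(inFile!, false, readOffset, &count, &buffer)
        AudioFileWriteBytes(outFile!, false, writeOffset, &count, buffer)
        readOffset += Int64(count)
        writeOffset += Int64(count)
    }

    AudioFileClose(inFile!)
    AudioFileClose(outFile!)
}

For the 10-second to 30-second example this would be called as extractClip(from: sourceURL, to: destURL, startSeconds: 10, endSeconds: 30), which copies bytes 882,000 through 2,646,000 as calculated above.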
