In my app, I have about 300 NSData objects, each about 0.5 MB in size, and I'm writing them all sequentially into a file with essentially this code (which writes a single 0.5 MB object 300 times):
- (void)createFile {
    // create .5 MB block to write
    int size = 500000;
    Byte *bytes = malloc(size);
    for (int i = 0; i < size; i++) {
        bytes[i] = 42;
    }
    NSData *data = [NSData dataWithBytesNoCopy:bytes length:size freeWhenDone:YES];
    // temp output file
    NSUUID *uuid = [NSUUID UUID];
    NSString *path = [[NSTemporaryDirectory()
        stringByAppendingPathComponent:[uuid UUIDString]]
        stringByAppendingPathExtension:@"dat"];
    NSOutputStream *outputStream = [[NSOutputStream alloc] initToFileAtPath:path append:NO];
    [outputStream open];
    double startTime = CACurrentMediaTime();
    NSInteger totalBytesWritten;
    NSInteger bytesWritten;
    Byte *readPtr;
    for (int i = 0; i < 300; i++) {
        // reset read pointer to block we're writing to the output
        readPtr = (Byte *)[data bytes];
        totalBytesWritten = 0;
        // write the block
        while (totalBytesWritten < size) {
            bytesWritten = [outputStream write:readPtr maxLength:size - totalBytesWritten];
            readPtr += bytesWritten;
            totalBytesWritten += bytesWritten;
        }
    }
    double duration = CACurrentMediaTime() - startTime;
    NSLog(@"duration = %f", duration);
    [outputStream close];
}
On both my iPod (5th gen) and my iPhone 6, this process takes about 3 seconds, and I was wondering if there was any faster way to do this. I've tried using NSFileManager and NSFileHandle approaches, but they take about the same length of time, which leads me to suppose that this is a fundamental I/O limit I'm running into.
Is there any way to do this faster (this code should compile and run on any device)?
Here's the highest performance I was able to achieve, using mmap & memcpy.
It takes on average about 0.2 seconds to run on my iPhone 6, with some variation up to around 0.5 s. YMMV, however, as the iPhone 6 appears to ship with flash storage from two different providers, one TLC and one MLC; devices with MLC flash should see noticeably better write performance.
This of course assumes that you are OK with async I/O. If you truly need something synchronous, look for other solutions (though see the note after the code below).
- (IBAction)createFile {
    NSData *data = [[self class] dataToCopy];
    // temp output file
    NSUUID *uuid = [NSUUID UUID];
    NSString *path = [[NSTemporaryDirectory()
        stringByAppendingPathComponent:[uuid UUIDString]]
        stringByAppendingPathExtension:@"dat"];
    NSUInteger size = [data length];
    NSUInteger count = 300;
    NSUInteger file_size = size * count;
    int fd = open([path UTF8String], O_CREAT | O_RDWR, 0666);
    ftruncate(fd, file_size);
    void *addr = mmap(NULL, file_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    double startTime = CACurrentMediaTime();
    static dispatch_queue_t concurrentDataQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        concurrentDataQueue = dispatch_queue_create("test.concurrent", DISPATCH_QUEUE_CONCURRENT);
    });
    for (NSUInteger i = 0; i < count; i++) {
        dispatch_async(concurrentDataQueue, ^{
            memcpy((char *)addr + (i * size), [data bytes], size);
        });
    }
    dispatch_barrier_async(concurrentDataQueue, ^{
        fsync(fd);
        double duration = CACurrentMediaTime() - startTime;
        NSLog(@"duration = %f", duration);
        munmap(addr, file_size);
        close(fd);
        unlink([path UTF8String]);
    });
}
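If you truly need the write to be synchronous, one option is to swap the final dispatch_barrier_async for dispatch_barrier_sync, which blocks the calling thread until every queued memcpy has drained. This is an untested sketch with the same cleanup as above; barriers only behave this way on a custom concurrent queue like the one used here:
dispatch_barrier_sync(concurrentDataQueue, ^{
    // runs only after all previously queued memcpy blocks complete
    fsync(fd); // force the dirty pages out to flash
    munmap(addr, file_size);
    close(fd);
    unlink([path UTF8String]);
});
double duration = CACurrentMediaTime() - startTime;
NSLog(@"duration = %f", duration);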
Two performance tips I can recommend: try turning off file-system caching, or check the I/O buffer size.
"When reading data that you are certain you won’t need again soon, such as streaming a large multimedia file, tell the file system not to add that data to the file-system caches.
Apps can call the BSD fcntl function with the F_NOCACHE flag to enable or disable caching for a file. For more information about this function, see fcntl." (Performance Tips)
Or: "read much or all of the data into memory before processing it" (Performance Tips)
The iPhone 6 uses 16 GB SK Hynix flash storage (Teardown), and the theoretical limit for a sequential write is around 40 to 70 MB/s (NAND flash).
300 × 0.5 MB / 3 s = 50 MB/s for 150 MB of data, which looks fast enough. I suppose you've hit the flash storage write speed limit. I assume you run this code on a background thread, so the issue is not UI blocking.
How to split byteArray in iOS
I am getting a byte array of length 160.
I need to split it into 4 parts, each part containing 40 bytes. I need to copy that data and use it for decoding. I tried to convert it but it's not working. Can someone help me do this?
The Android code below does what I want; the byte array length is 160 and BUFFER_LENGTH is 40:
public fun opusDataDecoder(data: ByteArray) {
    for (i in 0..3) {
        val byteArray = ByteArray(BUFFER_LENGTH)
        System.arraycopy(data, i * BUFFER_LENGTH, byteArray, 0, BUFFER_LENGTH) // BUFFER_LENGTH = 40
        val decodeBufferArray = ShortArray(byteArray.size * 8) // decodeBufferArray size = 320
        val size = tntOpusUtils.decode(decoderHandler, byteArray, decodeBufferArray)
        if (size > 0) {
            val decodeArray = ShortArray(size)
            System.arraycopy(decodeBufferArray, 0, decodeArray, 0, size)
            opusDecode(decodeArray)
        } else {
            Log.e(TAG, "opusDecode error : $size")
        }
    }
}
I am getting only the first 40 bytes. I want bytes 0-40 first, then 40-80, then 80-120, then 120-160, but I always get the same first 40 bytes.
Can someone help me fix this?
I finally found a solution for splitting the byte array and sending it in small packets. Below is the updated working code; the key point is that the range offset advances by i * bufferLength on each pass:
- (NSMutableData *)decodeOpusData:(NSData *)data
{
    NSMutableData *audioData = [[NSMutableData alloc] init];
    int bufferLength = 40;
    for (NSUInteger i = 0; i < 4; i++)
    {
        // make sure the source data actually contains this 40-byte chunk
        if ([data length] >= (i + 1) * bufferLength) {
            NSData *subData = [data subdataWithRange:NSMakeRange(i * bufferLength, bufferLength)];
            Byte *byteData = (Byte *)malloc(sizeof(Byte) * bufferLength);
            memcpy(byteData, [subData bytes], bufferLength);
            // You can do anything here with the data..........
            // Below I am decoding the audio data using the OPUS library
            short decodedBuffer[WB_FRAME_SIZE];
            int nDecodedByte = sizeof(short) * [self decode:byteData length:bufferLength output:decodedBuffer];
            NSData *PCMData = [NSData dataWithBytes:(Byte *)decodedBuffer length:nDecodedByte];
            [audioData appendData:PCMData];
            free(byteData); // avoid leaking the temporary copy
        }
    }
    return audioData;
}
Can anyone help me remove the initial silence in a recorded audio file?
I am fetching the data bytes of the wav file and, after ignoring the first 44 header bytes, finding the end range of the zero bytes that are silent in the wave file.
After that, from the total data bytes, the end range of the silent audio bytes, and the total duration of the file, I calculate the silence time of the audio file and trim that much time from it.
But the issue is that there is still some silent part remaining in the audio file.
So, am I missing something?
- (double)processAudio:(float)totalFileDuration withFilePathURL:(NSURL *)filePathURL {
    NSMutableData *data = [NSMutableData dataWithContentsOfURL:filePathURL];
    // skip the 44-byte header
    NSMutableData *Wave1 = [NSMutableData dataWithData:[data subdataWithRange:NSMakeRange(44, [data length] - 44)]];
    uint8_t *bytePtr = (uint8_t *)[Wave1 bytes];
    NSInteger totalData = [Wave1 length] / sizeof(uint8_t);
    int endRange = 0;
    for (int i = 0; i < totalData; i++) {
        if (bytePtr[i] == 0) {
            endRange = i;
        } else {
            break;
        }
    }
    double silentAudioDuration = (((float)endRange / (float)totalData) * totalFileDuration);
    return silentAudioDuration;
}
- (void)trimAudioFileWithInputFilePath:(NSString *)inputPath toOutputFilePath:(NSString *)outputPath {
    // input file URL
    NSString *strInputFilePath = inputPath;
    NSURL *audioFileInput = [NSURL fileURLWithPath:strInputFilePath];
    // output file URL (force an .m4a extension for the export preset)
    NSString *strOutputFilePath = [outputPath stringByDeletingPathExtension];
    strOutputFilePath = [strOutputFilePath stringByAppendingString:@".m4a"];
    NSURL *audioFileOutput = [NSURL fileURLWithPath:strOutputFilePath];
    newPath = strOutputFilePath;
    if (!audioFileInput || !audioFileOutput) {
        // handle invalid URLs here
    }
    [[NSFileManager defaultManager] removeItemAtURL:audioFileOutput error:NULL];
    AVAsset *asset = [AVAsset assetWithURL:audioFileInput];
    CMTime audioDuration = asset.duration;
    float audioDurationSeconds = CMTimeGetSeconds(audioDuration);
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:asset presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil) {
        // handle export session creation failure here
    }
    // compute how much leading silence to trim
    float startTrimTime = [self processAudio:audioDurationSeconds withFilePathURL:audioFileInput];
    float endTrimTime = audioDurationSeconds;
    recordingDuration = audioDurationSeconds - startTrimTime;
    CMTime startTime = CMTimeMake((int)(floor(startTrimTime * 100)), 100);
    CMTime stopTime = CMTimeMake((int)(ceil(endTrimTime * 100)), 100);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime);
    exportSession.outputURL = audioFileOutput;
    exportSession.outputFileType = AVFileTypeAppleM4A;
    exportSession.timeRange = exportTimeRange;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            // export succeeded
        }
        else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            // export failed
        }
    }];
}
What am I doing wrong here?
Is it possible that you don't have complete silence in your files? Perhaps your sample has a value of 1, 2, or 3, which technically is not silent but is very quiet.
Wave files are stored as signed numbers if 16 bits and unsigned if 8 bits. You are processing and casting your data to be an unsigned byte:
uint8_t * bytePtr = (uint8_t * )[Wave1 bytes] ;
You need to know the format of your wave file which can be obtained from the header. (It might use sample sizes of say 8 bit, 16 bit, 24 bit, etc.)
If it is 16-bit mono, you need to use:
int16_t *ptr = (int16_t *)[Wave1 bytes];
Your loop counts one byte at a time, so you would need to adjust it to increment by your frame size.
You also don't consider mono versus stereo.
In general, your processAudio function needs more detail: it should consider the number of channels per frame (stereo/mono) and the sample size.
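For example, assuming the recording really is 16-bit mono PCM (verify against the header first) and reusing Wave1 from the question, the scan might look like the sketch below; the threshold treats very quiet samples as silence, per the point above:
const int16_t *samples = (const int16_t *)[Wave1 bytes];
NSInteger totalSamples = [Wave1 length] / sizeof(int16_t);
const int16_t threshold = 3; // |sample| <= 3 counts as "silent enough"
NSInteger endRange = 0;
for (NSInteger i = 0; i < totalSamples; i++) {
    if (samples[i] >= -threshold && samples[i] <= threshold) {
        endRange = i; // still in the leading quiet region
    } else {
        break;
    }
}
// endRange / totalSamples is now the silent fraction of the sample data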
Here is a wave header with iOS types. You can cast the first 44 bytes and get the header data so you know what you are dealing with.
typedef struct waveHeader_t
{
    // RIFF chunk
    char     chunkID[4];     ///< should always contain "RIFF" (big endian)
    uint32_t chunkSize;      ///< total file length minus 8 (little endian!)
    char     format[4];      ///< should be "WAVE" (big endian)
    // fmt chunk
    char     subChunk1ID[4]; ///< "fmt " (big endian)
    uint32_t subChunk1Size;  ///< 16 for PCM format
    uint16_t audioFormat;    ///< 1 for PCM format
    uint16_t numChannels;    ///< number of channels
    uint32_t sampleRate;     ///< sampling frequency
    uint32_t byteRate;       ///< sampleRate * numChannels * bitsPerSample / 8
    uint16_t blockAlign;     ///< frame size
    uint16_t bitsPerSample;  ///< bits per sample
    // data chunk
    char     subChunk2ID[4]; ///< should always contain "data"
    uint32_t subChunk2Size;  ///< length of the sample data in bytes
    ///< sample data follows this.....
} waveHeader_t;
So your todo list is:
Extract the fields from the header.
Specifically, get the number of channels and the bits per channel (note: BITS per channel, not bytes).
Point to the data with the appropriately sized pointer and loop through one frame at a time, as in the sketch after this list. (A mono frame has one sample that could be 8, 16, 24, etc. bits. A stereo frame has two samples, each of which could be 8, 16, or 24 bits. E.g., LR LR LR LR LR LR would be 6 frames.)
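As a sketch of the first two steps, you can overlay the struct on the start of the file, reusing filePathURL from the question. This assumes the canonical 44-byte layout actually holds for the file (see the caveat below about Apple-generated headers):
NSData *fileData = [NSData dataWithContentsOfURL:filePathURL];
if ([fileData length] >= sizeof(waveHeader_t)) {
    const waveHeader_t *header = (const waveHeader_t *)[fileData bytes];
    uint16_t channels      = header->numChannels;    // 1 = mono, 2 = stereo
    uint16_t bitsPerSample = header->bitsPerSample;  // 8, 16, 24, ...
    uint16_t frameSize     = header->blockAlign;     // bytes per frame, all channels
    NSLog(@"%d channel(s), %d bits per sample, %d bytes per frame",
          channels, bitsPerSample, frameSize);
}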
The header of an Apple-generated wave file is usually not 44 bytes long; some Apple-generated headers are 4 KB long. You have to inspect the wave RIFF header for extra 'FLLR' filler bytes. If you don't skip past this extra filler padding, you will end up with roughly an extra tenth of a second of silence (or potentially even bad data).
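Rather than hardcoding offset 44, a more robust sketch walks the RIFF chunks until it finds the 'data' chunk, skipping 'FLLR' and any other filler along the way (fileData is assumed to hold the whole file, and bounds checking is minimal):
const uint8_t *riff = [fileData bytes];
NSUInteger length = [fileData length];
NSUInteger offset = 12; // skip "RIFF" + size + "WAVE"
NSUInteger dataOffset = 0, dataSize = 0;
while (offset + 8 <= length) {
    uint32_t chunkSize;
    memcpy(&chunkSize, riff + offset + 4, sizeof(chunkSize)); // sizes are little endian
    if (memcmp(riff + offset, "data", 4) == 0) {
        dataOffset = offset + 8; // the samples start here
        dataSize = chunkSize;
        break;
    }
    offset += 8 + chunkSize + (chunkSize & 1); // chunks are 2-byte aligned
}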
Is there a way to know how long an app in the foreground has been running? I have three possible solutions in mind:
I. Use battery consumption and the battery consumption rate (iOS 8 and later tell you the battery usage of each app, but battery consumption would be difficult to work with).
II. Use a system process monitor.
III. Use Apple's diagnostic logs. This approach is quite "backdoor." Plus, I am not sure whether Apple allows us to use that information.
Can someone tell me whether any of the above solutions is realistic? If not, I want to know: is it possible to find out the running duration of an app on iOS at all?
You can't access data like that from other apps, because every app runs in its own sandbox. You also can't tell how long an app has been running from its battery consumption: that depends on the frameworks the app uses, whether it's a game with high-resolution graphics, and so on.
So: none of your ideas are possible.
With sysctl, you can get a lot of information about running processes. In the code below, you can find all running processes and the start time of each one. This is not private API, so it should be accepted by Apple if you submit it to the App Store. Take a look at 'struct kinfo_proc' in sysctl.h; you will find more useful info there. I don't know how to detect whether a process is in the foreground or background; I can only find the start time and calculate the running time from it. However, since you run this application in the foreground, other processes are likely in the background, aren't they?
#import <sys/sysctl.h>

- (NSArray *)runningProcesses
{
    int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_ALL, 0};
    size_t miblen = 4;
    size_t size;
    int st = sysctl(mib, miblen, NULL, &size, NULL, 0);
    struct kinfo_proc *process = NULL;
    struct kinfo_proc *newprocess = NULL;
    do {
        size += size / 10;
        newprocess = realloc(process, size);
        if (!newprocess) {
            if (process) {
                free(process);
            }
            return nil;
        }
        process = newprocess;
        st = sysctl(mib, miblen, process, &size, NULL, 0);
    } while (st == -1 && errno == ENOMEM);
    if (st == 0) {
        if (size % sizeof(struct kinfo_proc) == 0) {
            int nprocess = size / sizeof(struct kinfo_proc);
            if (nprocess) {
                NSMutableArray *array = [[NSMutableArray alloc] init];
                for (int i = nprocess - 1; i >= 0; i--) {
                    NSString *processID = [[NSString alloc] initWithFormat:@"%d", process[i].kp_proc.p_pid];
                    NSString *processName = [[NSString alloc] initWithFormat:@"%s", process[i].kp_proc.p_comm];
                    // p_starttime is a Unix timestamp (seconds since 1970), so build
                    // the NSDate against the 1970 epoch, not the 2001 reference date
                    struct timeval t = process[i].kp_proc.p_un.__p_starttime;
                    NSDate *startDate = [[NSDate alloc] initWithTimeIntervalSince1970:t.tv_sec];
                    NSDictionary *dict = [[NSDictionary alloc] initWithObjects:[NSArray arrayWithObjects:processID, processName, startDate, nil]
                                                                       forKeys:[NSArray arrayWithObjects:@"ProcessID", @"ProcessName", @"StartDate", nil]];
                    [array addObject:dict];
                }
                free(process);
                return array;
            }
        }
    }
    return nil;
}
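Hypothetical usage, computing each process's running time from the stored start date:
for (NSDictionary *proc in [self runningProcesses]) {
    NSDate *startDate = proc[@"StartDate"];
    NSTimeInterval runningTime = [[NSDate date] timeIntervalSinceDate:startDate];
    NSLog(@"%@ (pid %@) has been running for %.0f seconds",
          proc[@"ProcessName"], proc[@"ProcessID"], runningTime);
}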
I'm developing a mobile application for iOS related to voice recording.
As part of it, I'm developing some different sound effects to modify the recorded voice, but I'm having trouble implementing some of them.
I'm trying to create an echo/delay effect, and I need to transform a byte array into a short array, but I have no idea how to do it in Objective-C.
Thanks.
This is my current source code, but since a byte is a very small type, when I apply the attenuation (which must return a float value) it produces awful noise in my audio:
- (NSURL *)echo:(NSURL *)input output:(NSURL *)output {
    int delay = 50000;
    float attenuation = 0.5f;
    NSMutableData *audioData = [NSMutableData dataWithContentsOfURL:input];
    NSUInteger dataSize = [audioData length] - 44;
    NSUInteger audioLength = [audioData length];
    NSUInteger newAudioLength = audioLength + delay;
    // Copy bytes
    Byte *byteData = (Byte *)malloc(audioLength);
    memcpy(byteData, [audioData bytes], audioLength);
    short *shortData = (short *)malloc(audioLength / 2);
    // create a new array to store the modified data
    Byte *newByteData = (Byte *)malloc(newAudioLength);
    newByteData = byteData;
    for (int i = 44; i < audioLength - delay; i++)
    {
        newByteData[i + delay] += byteData[i] * attenuation;
    }
    // Copy bytes into a new NSMutableData
    NSMutableData *newAudioData = [NSMutableData dataWithBytes:newByteData length:newAudioLength];
    // Store in a file
    [newAudioData writeToFile:[output path] atomically:YES];
    // Set WAV size
    [[AudioUtils alloc] setAudioFileSize:output];
    return output;
}
Finally, I was able to finish my echo effect by implementing these four methods. I hope they will be useful to you.
Byte to short array
- (short *)byte2short:(Byte *)bytes size:(int)size resultSize:(int)resultSize {
    // calloc zero-fills, so any tail entries beyond size/2 start out silent
    short *shorts = (short *)calloc(resultSize, sizeof(short));
    for (int i = 0; i < size / 2; i++) {
        // assemble each 16-bit sample from two little-endian bytes
        shorts[i] = (bytes[i * 2 + 1] << 8) | bytes[i * 2];
    }
    return shorts;
}
Short to byte array
- (Byte *)short2byte:(short *)shorts size:(int)size resultSize:(int)resultSize {
    Byte *bytes = (Byte *)malloc(sizeof(Byte) * resultSize);
    for (int i = 0; i < size; i++)
    {
        // split each 16-bit sample back into two little-endian bytes
        bytes[i * 2] = (Byte)(shorts[i] & 0x00FF);
        bytes[(i * 2) + 1] = (Byte)(shorts[i] >> 8);
        shorts[i] = 0;
    }
    return bytes;
}
Effect
- (NSMutableData *)effect:(NSMutableData *)data delay:(int)delay attenuation:(float)attenuation {
    NSUInteger audioLength = [data length];
    // Copy original data into a byte array
    Byte *byteData = (Byte *)malloc(sizeof(Byte) * audioLength);
    memcpy(byteData, [data bytes], audioLength);
    // byte2short allocates (and zero-fills) the buffer, so no separate malloc is needed
    short *shortData = [self byte2short:byteData size:(int)audioLength resultSize:(int)(audioLength / 2 + delay)];
    // Array to store shorts
    short *newShortData = shortData;
    for (int i = 44; i < audioLength / 2; i++)
    {
        newShortData[i + delay] += (short)((float)shortData[i] * attenuation);
    }
    Byte *newByteData = [self short2byte:newShortData size:(int)(audioLength / 2 + delay) resultSize:(int)(audioLength + delay * 2)];
    // Copy bytes into an NSMutableData in order to create the new file
    NSMutableData *newAudioData = [NSMutableData dataWithBytes:newByteData length:(int)(audioLength + delay * 2)];
    free(byteData);
    free(shortData);
    free(newByteData);
    return newAudioData;
}
Echo effect
- (NSURL *)echo:(NSURL *)input output:(NSURL *)output {
    NSMutableData *audioData = [NSMutableData dataWithContentsOfURL:input];
    // call the effect method, which returns an NSMutableData, and create the new file
    [[self effect:audioData delay:6000 attenuation:0.5f] writeToFile:[output path] atomically:YES];
    // set the file's size (a method I have implemented in AudioUtils)
    [[AudioUtils alloc] setAudioFileSize:output];
    return output;
}
There's no predefined function that will create a short array from a byte array, but it should be fairly simple to do it with a for loop
// create a short array (one short per input byte; widening copy)
short *shortData = malloc(sizeof(short) * audioLength);
for (int i = 0; i < audioLength; i++)
{
    shortData[i] = byteData[i];
}
The code is not rigorously correct (meaning I didn't compile it; I just wrote it here on the fly), but it should give you an idea of how to do it.
Also be aware that saving audio data with two bytes per sample instead of one can give very different results when playing back, but I'll assume you know how to handle audio data for your specific purposes.
Using the NSStreamEventHasSpaceAvailable event, I am trying to write a simple NSString to an NSOutputStream. Here is the content of that event handler:
uint8_t *readBytes = (uint8_t *)[data mutableBytes];
readBytes += byteIndex; // instance variable to move pointer
int data_len = [data length];
unsigned int len = ((data_len - byteIndex >= 12) ? 12 : (data_len - byteIndex));
uint8_t buf[len];
(void)memcpy(buf, readBytes, len);
len = [output write:(const uint8_t *)buf maxLength:len];
NSLog(@"wrote: %s", buf);
byteIndex += len;
I pretty much took it right from Apple. The data is initialized in my viewDidLoad method with
data = [NSMutableData dataWithData:[@"test message" dataUsingEncoding:NSUTF8StringEncoding]];
[data retain];
The HasSpaceAvailable event is called twice. The first time, the entire message is written, with the characters "N." appended to it. The second time, NSLog reports that a blank message was written (not null). Then the EndEncountered event occurs. In that event, I have:
NSLog(#"event: end encountered");
assert([stream isEqual:output]);
NSData *newData = [output propertyForKey: NSStreamDataWrittenToMemoryStreamKey];
if (!newData) {
NSLog(#"No data written to memory!");
} else {
NSLog(#"finished writing: %#", newData);
}
[stream close];
[stream removeFromRunLoop:[NSRunLoop currentRunLoop]
forMode:NSDefaultRunLoopMode];
[stream release];
output = nil;
break;
I also got this from Apple. However, "No data written to memory!" is logged. No errors occur at any time, and no data appears to have been received on the other end.
I seem to have fixed this by using low-level Core Foundation functions instead of the higher-level NSStream methods. I used this article as a starting point:
http://oreilly.com/iphone/excerpts/iphone-sdk/network-programming.html
It covers input and output streams at great length and has code examples.
Hope this helps.
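For anyone who lands here, the Core Foundation side boils down to just a few calls. This is a minimal synchronous sketch, not the article's exact code: the path is a placeholder, there is no run-loop scheduling, and error handling is reduced to a comment.
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("/tmp/test.dat"),
                                             kCFURLPOSIXPathStyle,
                                             false);
CFWriteStreamRef stream = CFWriteStreamCreateWithFile(kCFAllocatorDefault, url);
if (CFWriteStreamOpen(stream)) {
    const UInt8 message[] = "test message";
    // blocks until at least some bytes are accepted; returns -1 on error
    CFIndex written = CFWriteStreamWrite(stream, message, sizeof(message) - 1);
    if (written < 0) {
        CFErrorRef error = CFWriteStreamCopyError(stream); // inspect as needed
        if (error) CFRelease(error);
    }
    CFWriteStreamClose(stream);
}
CFRelease(stream);
CFRelease(url);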