What is the easiest way to randomize sounds - iOS

I have 11 sounds for each soundpack. They are named:
testpack1.mp3,
testpack2.mp3 and so on.
My player initialises them with this code:
NSString *strName = [NSString stringWithFormat:@"testpack%ld", (long)(value + 1)];
NSString *strPath = [[NSBundle mainBundle] pathForResource:strName ofType:@"mp3"];
NSURL *urlPath = [NSURL fileURLWithPath:strPath];
self.audioplayer = [[AVAudioPlayer alloc] initWithContentsOfURL:urlPath error:NULL];
These sounds are played by pressing buttons. For example, I have generated 4 buttons, and those 4 buttons always play only testpack1-4.mp3, but I want the player to pick a random one of the 11 sounds. What's the easiest solution?
Note: I don't want to repeat the mp3 unless all are played

How about this?
int randNum = rand() % 11 + 1; // random number in the range 1...11
The general formula for an inclusive range is:
int randNum = rand() % (maxNumber - minNumber + 1) + minNumber;
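On iOS, arc4random_uniform() is generally preferred over rand(), since it needs no seeding and avoids modulo bias. A minimal sketch of the same inclusive-range pick (the helper name is mine, not from the original answer):
#include <stdlib.h>

// Random integer in [minNumber, maxNumber], inclusive.
static inline uint32_t randomInRange(uint32_t minNumber, uint32_t maxNumber) {
    return arc4random_uniform(maxNumber - minNumber + 1) + minNumber;
}
// e.g. randomInRange(1, 11) picks one of the 11 sound numbers.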

A suggestion:
It declares 3 static variables; played is a plain C array:
static UInt32 numberOfSounds = 11;
static UInt32 counter = 0;
static UInt32 played[11];
The method -playSound resets the C array to zeros when the counter is 0 and sets the counter to the number of sounds.
Each time the method is called, the random generator creates an index number.
If the value at that index in the array is 0, the sound is played, the slot in the array is marked, and the counter is decreased.
If the sound at that index has already been played, it loops until an unused index is found.
- (void)playSound
{
    if (counter == 0) {
        for (int i = 0; i < numberOfSounds; i++) {
            played[i] = 0;
        }
        counter = numberOfSounds;
    }
    BOOL found = NO;
    do {
        UInt32 value = arc4random_uniform(numberOfSounds) + 1;
        if (played[value - 1] != value) {
            NSString *strName = [NSString stringWithFormat:@"testpack%u", (unsigned int)value];
            NSString *strPath = [[NSBundle mainBundle] pathForResource:strName ofType:@"mp3"];
            NSURL *urlPath = [NSURL fileURLWithPath:strPath];
            self.audioplayer = [[AVAudioPlayer alloc] initWithContentsOfURL:urlPath error:NULL];
            [self.audioplayer play]; // actually start playback
            played[value - 1] = value;
            counter--;
            found = YES;
        }
    } while (found == NO);
}
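An alternative that avoids the retry loop entirely is to shuffle the index order once and step through it, reshuffling only after all 11 sounds have played. A minimal sketch of that approach using a Fisher-Yates shuffle (the helper and variable names are my own, not from the original answer):
static UInt32 order[11];
static UInt32 position = 0;

// Fisher-Yates shuffle: refill the order array with 1...11 and shuffle in place.
static void reshuffleOrder(void) {
    for (UInt32 i = 0; i < 11; i++) order[i] = i + 1;
    for (UInt32 i = 10; i > 0; i--) {
        UInt32 j = arc4random_uniform(i + 1);
        UInt32 tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
}

- (void)playNextSound
{
    if (position == 0) reshuffleOrder();
    UInt32 value = order[position];
    position = (position + 1) % 11;
    NSString *strName = [NSString stringWithFormat:@"testpack%u", (unsigned int)value];
    NSString *strPath = [[NSBundle mainBundle] pathForResource:strName ofType:@"mp3"];
    NSURL *urlPath = [NSURL fileURLWithPath:strPath];
    self.audioplayer = [[AVAudioPlayer alloc] initWithContentsOfURL:urlPath error:NULL];
    [self.audioplayer play];
}
Each sound then plays exactly once per 11-sound cycle, with no rejected draws.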

Related

How to split byte array and send it in small packs in Objective-C iOS

How to split a byte array in iOS
I am getting byte-array data with a length of 160. I need to split it into 4 parts, each containing 40 bytes, then copy that data and use it for decoding. I tried to convert it, but it's not working. Can someone help me do this?
The code below is the Android version; I want to do the same on iOS. The byte array length is 160 and BUFFER_LENGTH is 40:
public fun opusDataDecoder(data: ByteArray) {
    for (i in 0..3) {
        val byteArray = ByteArray(BUFFER_LENGTH)
        System.arraycopy(data, i * BUFFER_LENGTH, byteArray, 0, BUFFER_LENGTH) // BUFFER_LENGTH = 40
        val decodeBufferArray = ShortArray(byteArray.size * 8) // decodeBufferArray = 320
        val size = tntOpusUtils.decode(decoderHandler, byteArray, decodeBufferArray)
        if (size > 0) {
            val decodeArray = ShortArray(size)
            System.arraycopy(decodeBufferArray, 0, decodeArray, 0, size)
            opusDecode(decodeArray)
        } else {
            Log.e(TAG, "opusDecode error : $size")
        }
    }
}
I am getting only the first 40 bytes. I want bytes 0-40 first, then 40-80, then 80-120, then 120-160, but I always get the same first 40 bytes.
Can someone help me fix this?
Finally I got a solution for splitting the byte array and sending it in small packs. Below is the updated working code:
-(NSMutableData *)decodeOpusData:(NSData *)data
{
    NSMutableData *audioData = [[NSMutableData alloc] init];
    int bufferLength = 40;
    for (NSUInteger i = 0; i < 4; i++)
    {
        // Make sure the next 40-byte chunk actually exists before reading it.
        if ([data length] >= (i + 1) * bufferLength) {
            NSData *subData = [data subdataWithRange:NSMakeRange(i * bufferLength, bufferLength)];
            Byte *byteData = (Byte *)malloc(sizeof(Byte) * bufferLength);
            memcpy(byteData, [subData bytes], bufferLength);
            // You can do anything here with the data.
            // Below I am decoding the audio data using the OPUS library.
            short decodedBuffer[WB_FRAME_SIZE];
            int nDecodedByte = sizeof(short) * [self decode:byteData length:bufferLength output:decodedBuffer];
            NSData *PCMData = [NSData dataWithBytes:(Byte *)decodedBuffer length:nDecodedByte];
            [audioData appendData:PCMData];
            free(byteData); // malloc'd above; free it to avoid leaking a buffer per chunk
        }
    }
    return audioData;
}
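The key fix is offsetting the range by i * bufferLength for each chunk. The same idea generalizes to a small helper that splits any NSData into fixed-size chunks; a minimal sketch (the method name and signature are my own, not from the original answer):
// Splits data into chunks of chunkSize bytes; a trailing short chunk is dropped,
// matching the behaviour of the loop above.
- (NSArray<NSData *> *)chunksOfData:(NSData *)data size:(NSUInteger)chunkSize
{
    NSMutableArray<NSData *> *chunks = [NSMutableArray array];
    for (NSUInteger offset = 0; offset + chunkSize <= [data length]; offset += chunkSize) {
        [chunks addObject:[data subdataWithRange:NSMakeRange(offset, chunkSize)]];
    }
    return chunks;
}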

UI Gets Blocked When Using CGImageSourceCreateWithURL and CGImageSourceCopyPropertiesAtIndex

I am checking the dimensions of images via their URLs stored in an array called destination_urls. I then calculate the height of images if they were stretched to the width of the screen while maintaining the aspect ratio. All of this is done in a for loop.
When the code is run, the app UI gets stuck during the for loop. How can I revise the code to make sure the app doesn't freeze?
int t = 0;
int arraycount = [_destination_urls count];
dispatch_async(dispatch_get_main_queue(), ^{
    for (int i = 0; i < arraycount; i++) {
        NSURL *imageURL = [NSURL URLWithString:[_destination_urls[i] stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
        NSString *imageHeighttoDisplay = [imageProps objectForKey:@"PixelHeight"];
        originalimageHeight = [imageHeighttoDisplay intValue];
        NSString *imageWidthtoDisplay = [imageProps objectForKey:@"PixelWidth"];
        originalimageWidth = [imageWidthtoDisplay intValue];
        if (imgSource) {
            _revisedImageURLHeight[t] = [NSNumber numberWithInt:(screenWidth) * (originalimageHeight / originalimageWidth)];
            t = t + 1;
            CFRelease(imgSource);
        }
    }
});
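No answer is recorded here, but the cause is visible in the snippet: the entire loop is dispatched onto the main queue, so the UI blocks until every URL has been fetched. A minimal sketch of one common fix, doing the work on a background queue and returning to the main queue only to publish the results (variable names reuse the question's; the float casts are my addition to avoid the integer division in the original):
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSMutableArray *heights = [NSMutableArray array];
    for (NSString *urlString in _destination_urls) {
        NSURL *imageURL = [NSURL URLWithString:[urlString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        if (!imgSource) continue; // skip unreachable URLs instead of crashing
        NSDictionary *props = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
        CFRelease(imgSource);
        float w = [props[@"PixelWidth"] floatValue];
        float h = [props[@"PixelHeight"] floatValue];
        if (w > 0) {
            [heights addObject:@(screenWidth * (h / w))];
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main queue: store the results and refresh the UI here.
        [_revisedImageURLHeight addObjectsFromArray:heights];
    });
});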

How do you set the duration on a new file created with AudioFileWritePackets?

I have concatenated 2 audio files, using AudioFileReadPacketData to read the existing files and AudioFileWritePackets to write the data to a third file. The source files seem to contain information about their duration; I can access it using the code below, passing it the file URL:
-(float)lengthForUrl:(NSURL *)url
{
    AVURLAsset *audioAsset = [AVURLAsset URLAssetWithURL:url options:nil];
    CMTime audioDuration = audioAsset.duration;
    float audioDurationSeconds = CMTimeGetSeconds(audioDuration);
    return audioDurationSeconds;
}
When I make the same call on my new (combined) file, I get a duration of 0.
How can I set the duration on my new file, or preserve it in the transfer process?
I did not figure out how to set the duration on my combined file, but I did figure out how to create it with the duration intact.
I switched from using AssetReader / Writer to using ExtAudioFile read / write; working code is below.
- (BOOL)combineFilesFor:(NSURL *)url
{
    dispatch_async(dispatch_get_main_queue(), ^{
        activity.hidden = NO;
        [activity startAnimating];
    });
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:Kcombined];
    NSLog(@"%@", masterUrl);
    NSLog(@"%@", url);
    // create the url for the combined file
    CFURLRef out_url = (__bridge CFURLRef)[NSURL fileURLWithPath:soundOneNew];
    NSLog(@"out file url : %@", out_url);
    ExtAudioFileRef af_in;
    ExtAudioFileRef af_out = NULL;
    Float64 existing_audioDurationSeconds = [self lengthForUrl:masterUrl];
    NSLog(@"duration of existing file : %f", existing_audioDurationSeconds);
    Float64 new_audioDurationSeconds = [self lengthForUrl:url];
    NSLog(@"duration of new file : %f", new_audioDurationSeconds);
    Float64 combined_audioDurationSeconds = existing_audioDurationSeconds + new_audioDurationSeconds;
    NSLog(@"duration of combined : %f", combined_audioDurationSeconds);
    // put the urls of the files to combine into an array
    NSArray *sourceURLs = [NSArray arrayWithObjects:masterUrl, url, nil];
    AudioStreamBasicDescription inputFileFormat;
    AudioStreamBasicDescription converterFormat;
    UInt32 thePropertySize = sizeof(inputFileFormat);
    AudioStreamBasicDescription outputFileFormat;
#define BUFFER_SIZE 4096
    // Buffer to read from the source files and write to the destination file;
    // allocated once here (the original allocated it inside the loop without freeing it).
    UInt8 *buffer = malloc(BUFFER_SIZE);
    assert(buffer);
    for (id in_url in sourceURLs) {
        NSLog(@"url in = %@", in_url);
        OSStatus in_stat = ExtAudioFileOpenURL((__bridge CFURLRef)(in_url), &af_in);
        NSLog(@"in file open status : %d", (int)in_stat);
        bzero(&inputFileFormat, sizeof(inputFileFormat));
        in_stat = ExtAudioFileGetProperty(af_in, kExtAudioFileProperty_FileDataFormat,
                                          &thePropertySize, &inputFileFormat);
        NSLog(@"in file get property status : %d", (int)in_stat);
        memset(&converterFormat, 0, sizeof(converterFormat));
        converterFormat.mFormatID = kAudioFormatLinearPCM;
        converterFormat.mSampleRate = inputFileFormat.mSampleRate;
        converterFormat.mChannelsPerFrame = 1;
        converterFormat.mBytesPerPacket = 2;
        converterFormat.mFramesPerPacket = 1;
        converterFormat.mBytesPerFrame = 2;
        converterFormat.mBitsPerChannel = 16;
        converterFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        in_stat = ExtAudioFileSetProperty(af_in, kExtAudioFileProperty_ClientDataFormat,
                                          sizeof(converterFormat), &converterFormat);
        NSLog(@"in file set property status : %d", (int)in_stat);
        if (af_out == NULL) {
            memset(&outputFileFormat, 0, sizeof(outputFileFormat));
            outputFileFormat.mFormatID = kAudioFormatMPEG4AAC;
            outputFileFormat.mFormatFlags = kMPEG4Object_AAC_Main;
            outputFileFormat.mSampleRate = inputFileFormat.mSampleRate;
            outputFileFormat.mChannelsPerFrame = inputFileFormat.mChannelsPerFrame;
            // create the new file to write the combined audio to
            OSStatus out_stat = ExtAudioFileCreateWithURL(out_url, kAudioFileM4AType, &outputFileFormat,
                                                          NULL, kAudioFileFlags_EraseFile, &af_out);
            NSLog(@"out file create status : %d", (int)out_stat);
            out_stat = ExtAudioFileSetProperty(af_out, kExtAudioFileProperty_ClientDataFormat,
                                               sizeof(converterFormat), &converterFormat);
            NSLog(@"out file set property status : %d", (int)out_stat);
        }
        AudioBufferList conversionBuffer;
        conversionBuffer.mNumberBuffers = 1;
        conversionBuffer.mBuffers[0].mNumberChannels = inputFileFormat.mChannelsPerFrame;
        conversionBuffer.mBuffers[0].mData = buffer;
        conversionBuffer.mBuffers[0].mDataByteSize = BUFFER_SIZE;
        while (TRUE) {
            conversionBuffer.mBuffers[0].mDataByteSize = BUFFER_SIZE;
            UInt32 frameCount = INT_MAX;
            if (inputFileFormat.mBytesPerFrame > 0) {
                frameCount = (conversionBuffer.mBuffers[0].mDataByteSize / inputFileFormat.mBytesPerFrame);
            }
            // Read a chunk of input
            OSStatus err = ExtAudioFileRead(af_in, &frameCount, &conversionBuffer);
            if (err) {
                NSLog(@"error read from input file : %d", (int)err);
            } else {
                NSLog(@"read %u frames from input file", (unsigned int)frameCount);
            }
            // If no frames were returned, conversion is finished
            if (frameCount == 0)
                break;
            // Write pcm data to output file
            err = ExtAudioFileWrite(af_out, frameCount, &conversionBuffer);
            if (err) {
                NSLog(@"error write out file : %d", (int)err);
            } else {
                NSLog(@"wrote %u frames to out file", (unsigned int)frameCount);
            }
        }
        ExtAudioFileDispose(af_in);
    }
    ExtAudioFileDispose(af_out);
    free(buffer);
    NSLog(@"combined file duration =: %f", [self lengthForUrl:[NSURL fileURLWithPath:soundOneNew]]);
    // move the new file into place
    [self deleteFileWithName:KUpdate forType:@""];
    [self deleteFileWithName:KMaster forType:@""];
    player = nil;
    [self replaceFileAtPath:KMaster withData:[NSData dataWithContentsOfFile:soundOneNew]];
    [self deleteFileWithName:Kcombined forType:@""];
    [self pauseRecording];
    dispatch_async(dispatch_get_main_queue(), ^{
        activity.hidden = YES;
        [activity stopAnimating];
        record.enabled = YES;
    });
    return YES;
}

iOS SoundTouch framework BPM Detection example

I have searched all over the web and cannot find a tutorial on how to use the SoundTouch library for beat detection.
(Note: I have no C++ experience prior to this. I do know C, Objective-C, and Java. So I could have messed some of this up, but it compiles.)
I added the framework to my project and managed to get the following to compile:
NSString *path = [[NSBundle mainBundle] pathForResource:@"song" ofType:@"wav"];
NSData *data = [NSData dataWithContentsOfFile:path];
player = [[AVAudioPlayer alloc] initWithData:data error:NULL];
player.volume = 1.0;
player.delegate = self;
[player prepareToPlay];
[player play];
NSUInteger len = [player.data length]; // Get the length of the data
soundtouch::SAMPLETYPE sampleBuffer[len]; // Create buffer array
[player.data getBytes:sampleBuffer length:len]; // Copy the bytes into the buffer
soundtouch::BPMDetect *BPM = new soundtouch::BPMDetect(player.numberOfChannels, [[player.settings valueForKey:@"AVSampleRateKey"] longValue]); // This is working (tested)
BPM->inputSamples(sampleBuffer, len); // Send the samples to the BPM class
NSLog(@"Beats Per Minute = %f", BPM->getBpm()); // Print out the BPM - currently returns 0.00 for errors per documentation
The inputSamples(*samples, numSamples) call and how to get the song's byte data into it confuse me.
How do I get these pieces of information from a song file?
I tried using memcpy(), but it doesn't seem to be working.
Anyone have any thoughts?
After hours and hours of debugging and reading the limited documentation on the web, I modified a few things before stumbling upon this: you need to divide numSamples by numberOfChannels when calling inputSamples().
My final code is like so:
NSString *path = [[NSBundle mainBundle] pathForResource:@"song" ofType:@"wav"];
NSData *data = [NSData dataWithContentsOfFile:path];
player = [[AVAudioPlayer alloc] initWithData:data error:NULL];
player.volume = 1.0; // optional to play music
player.delegate = self;
[player prepareToPlay]; // optional to play music
[player play]; // optional to play music
NSUInteger len = [player.data length];
soundtouch::SAMPLETYPE sampleBuffer[len];
[player.data getBytes:sampleBuffer length:len];
soundtouch::BPMDetect BPM(player.numberOfChannels, [[player.settings valueForKey:@"AVSampleRateKey"] longValue]);
BPM.inputSamples(sampleBuffer, len / player.numberOfChannels);
NSLog(@"Beats Per Minute = %f", BPM.getBpm());
I've tried this solution to read the BPM from mp3 files (using the TSLibraryImport class to convert to wav) inside the iOS Music Library:
MPMediaItem *item = [collection representativeItem];
NSURL *urlStr = [item valueForProperty:MPMediaItemPropertyAssetURL];
TSLibraryImport *import = [[TSLibraryImport alloc] init];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSURL *destinationURL = [NSURL fileURLWithPath:[documentsDirectory stringByAppendingPathComponent:@"temp_data"]];
[[NSFileManager defaultManager] removeItemAtURL:destinationURL error:nil];
[import importAsset:urlStr toURL:destinationURL completionBlock:^(TSLibraryImport *import) {
    NSString *outPath = [documentsDirectory stringByAppendingPathComponent:@"temp_data"];
    NSData *data = [NSData dataWithContentsOfFile:outPath];
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:NULL];
    NSUInteger len = [player.data length];
    int numChannels = player.numberOfChannels;
    soundtouch::SAMPLETYPE sampleBuffer[1024];
    soundtouch::BPMDetect *BPM = new soundtouch::BPMDetect(player.numberOfChannels, [[player.settings valueForKey:@"AVSampleRateKey"] longValue]);
    for (NSUInteger i = 0; i <= len - 1024; i = i + 1024) {
        NSRange r = NSMakeRange(i, 1024);
        //NSData *temp = [player.data subdataWithRange:r];
        [player.data getBytes:sampleBuffer range:r];
        int samples = sizeof(sampleBuffer) / numChannels;
        BPM->inputSamples(sampleBuffer, samples); // Send the samples to the BPM class
    }
    NSLog(@"Beats Per Minute = %f", BPM->getBpm());
}];
The strange thing is that the calculated BPM is always the same value:
2013-10-02 03:05:36.725 AppTestAudio[1464:1803] Beats Per Minute = 117.453835
No matter which track I analyzed, i.e. regardless of the number of frames or the buffer size (here I used a 2K buffer, as in the SoundTouch example in the library's source code).
For Swift 3:
https://github.com/Luccifer/BPM-Analyser
And use it like:
guard let filePath = Bundle.main.path(forResource: "TestMusic", ofType: "m4a"),
      let url = URL(string: filePath) else { return "error occurred, check fileURL" }
BPMAnalyzer.core.getBpmFrom(url, completion: nil)
Feel free to comment!

AudioFileWriteBytes is taking way too long to do its job

This problem may be a bit too vast and nebulous for this space, but I'll give it a go.
I have an array of samples that I'm trying to write to a .wav file on my iOS device, and it is taking up to a minute and a half to do so. Here is the loop where the drag is occurring:
for (int i = 0; i < 1430529; i++) // 1430529 is the length of the array of samples
{
    SInt16 sample;
    sample = sample_array[i];
    audioErr = AudioFileWriteBytes(audioFile, false, sampleCount * 2, &bytesToWrite, &sample);
    sampleCount++;
}
Any ideas?
EDIT 1
If it helps, this is the code that precedes it:
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
// THIS IS MY BIT TO CONVERT THE LOCATION TO NSSTRING
NSString *filePath = [[NSString alloc] init];
filePath = [NSString stringWithUTF8String:location];
// HERE I WANT TO REMOVE THE FILE NAME FROM THE LOCATION.
NSString *truncatedFilePath = filePath;
truncatedFilePath = [truncatedFilePath stringByReplacingOccurrencesOfString:@"/recordedFile.wav"
//                                                               withString:@"/newFile.caf"];
                                                                 withString:@"/recordedFile.wav"];
NSLog(truncatedFilePath);
NSURL *fileURL = [NSURL fileURLWithPath:truncatedFilePath];
AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mSampleRate = SAMPLE_RATE;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
asbd.mBitsPerChannel = 16;
asbd.mChannelsPerFrame = 1;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = 2;
asbd.mBytesPerPacket = 2;
AudioFileID audioFile;
OSStatus audioErr = noErr;
audioErr = AudioFileCreateWithURL((CFURLRef)fileURL, kAudioFileWAVEType, &asbd, kAudioFileFlags_EraseFile, &audioFile);
assert(audioErr == noErr);
long sampleCount = 0;
UInt32 bytesToWrite = 2;
Why do you need the loop? Can't you write all the samples in one go, e.g.
UInt32 numSamples = 1430529;
UInt32 bytesToWrite = numSamples * 2;
audioErr = AudioFileWriteBytes(audioFile, false, 0, &bytesToWrite, sample_array);
?
Perhaps the number of bytes you are writing at each call to AudioFileWriteBytes is too small. How large is bytesToWrite?
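If writing all ~2.8 MB in a single call is undesirable, a middle ground is batching: one AudioFileWriteBytes call per sizable chunk rather than per sample. A minimal sketch, assuming the same audioFile and sample_array as above (the chunk size is an arbitrary choice of mine):
// Write the samples in 64 KB chunks rather than 2 bytes at a time;
// the starting byte offset advances by the bytes already written.
const UInt32 totalSamples = 1430529;
const UInt32 samplesPerChunk = 32768; // 32768 samples * 2 bytes = 64 KB per call
SInt64 byteOffset = 0;
for (UInt32 start = 0; start < totalSamples; start += samplesPerChunk) {
    UInt32 samplesThisChunk = MIN(samplesPerChunk, totalSamples - start);
    UInt32 ioBytes = samplesThisChunk * sizeof(SInt16);
    audioErr = AudioFileWriteBytes(audioFile, false, byteOffset, &ioBytes, &sample_array[start]);
    assert(audioErr == noErr);
    byteOffset += ioBytes;
}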
