Objective C " * " syntax and usage - ios

I am rewriting an iOS particle filter library (available on Bitbucket) from Objective-C into Swift, and I have a question about a piece of Objective-C syntax that I cannot understand.
The code goes as follows:
- (void)setRssi:(NSInteger)rssi {
    _rssi = rssi;
    // Ignore zeros in average, StdDev -- we clear the value before setting it to
    // prevent old values from hanging around if there's no reading
    if (rssi == 0) {
        self.meters = 0;
        return;
    }
    self.meters = [self metersFromRssi:rssi];
    NSInteger* pidx = self.rssiBuffer;
    *(pidx+self.bufferIndex++) = rssi;
    if (self.bufferIndex >= RSSIBUFFERSIZE) {
        self.bufferIndex %= RSSIBUFFERSIZE;
        self.bufferFull = YES;
    }
    if (self.bufferFull) {
        // Only calculate trailing mean and Std Dev when we have enough data
        double accumulator = 0;
        for (NSInteger i = 0; i < RSSIBUFFERSIZE; i++) {
            accumulator += *(pidx+i);
        }
        self.meanRssi = accumulator / RSSIBUFFERSIZE;
        self.meanMeters = [self metersFromRssi:self.meanRssi];
        accumulator = 0;
        for (NSInteger i = 0; i < RSSIBUFFERSIZE; i++) {
            NSInteger difference = *(pidx+i) - self.meanRssi;
            accumulator += difference*difference;
        }
        self.stdDeviationRssi = sqrt( accumulator / RSSIBUFFERSIZE);
        self.meanMetersVariance = ABS(
            [self metersFromRssi:self.meanRssi]
            - [self metersFromRssi:self.meanRssi+self.stdDeviationRssi]
        );
    }
}
The class continues with more code and functions that are not important here. What I do not understand are these two lines:
NSInteger* pidx = self.rssiBuffer;
*(pidx+self.bufferIndex++) = rssi;
As far as I can tell, pidx is initialized to the size of a previously defined buffer, and then on the next line that buffer size plus one is set equal to the rssi variable passed as a parameter to the function.
I assume that * has something to do with references, but I just can't figure out the purpose of this line. pidx is used only in this function, for calculating the trailing mean and standard deviation.

Let me explain that code:
NSInteger* pidx = self.rssiBuffer; means that you are getting a pointer to the first value of the buffer.
*(pidx+self.bufferIndex++) = rssi; means that you are setting the value of the buffer at index 0+self.bufferIndex to rssi, and then increasing bufferIndex by 1. Thanks to @Jakub Vano for pointing that out.
In C++, it would look like this (with rssiBuffer and bufferIndex standing in for self.rssiBuffer and self.bufferIndex):
int rssiBuffer[1000]; // assume we have a buffer like this
rssiBuffer[bufferIndex++] = rssi;
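Since you are porting this to Swift anyway, here is a minimal sketch of how those two lines could look there, assuming rssiBuffer becomes a plain Swift array rather than a raw pointer (the RSSIBUFFERSIZE value of 20 is just a placeholder; use whatever the Objective-C class defines):
let RSSIBUFFERSIZE = 20  // placeholder; match the constant from the Objective-C class
var rssiBuffer = [Int](repeating: 0, count: RSSIBUFFERSIZE)
var bufferIndex = 0
var bufferFull = false

func store(rssi: Int) {
    // Swift equivalent of *(pidx + self.bufferIndex++) = rssi;
    rssiBuffer[bufferIndex] = rssi
    bufferIndex += 1
    if bufferIndex >= RSSIBUFFERSIZE {
        bufferIndex %= RSSIBUFFERSIZE
        bufferFull = true
    }
}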

Related

EmguCV equivalent to Java mat.put(i, 0, mv)

I'm trying to convert a Java class to a C# one using EmguCV. It's for a class in Unsupervised Learning. The teacher made a program using OpenCV and Java. I have to convert it to C#.
The goal is to implement a simple Face Recognition algorithm.
The method I'm stuck at:
Mat sample = train.get(0).getData();
mean = Mat.zeros(/*6400*/sample.rows(), /*1*/sample.cols(), /*CvType.CV_64FC1*/sample.type());
// Calculating it by hand
train.forEach(person -> {
    Mat data = person.getData();
    for (int i = 0; i < mean.rows(); i++) {
        double mv = mean.get(i, 0)[0]; // Gets the value of the cell in the first channel
        double pv = data.get(i, 0)[0]; // Gets the value of the cell in the first channel
        mv += pv;
        mean.put(i, 0, mv); // *********** I'm stuck here ***********
    }
});
So far, my C# equivalent is:
var sample = trainSet[0].Data;
mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    var data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        var meanValue = (double)mean.GetData().GetValue(i, 0);
        var personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
    }
}
And I am not finding the put equivalent in C#. But, if I'm being honest, I'm not even sure the previous two lines in my C# equivalent are correct.
Can someone help me figure this one out?
You can convert it like this (note that Marshal.Copy requires using System.Runtime.InteropServices;):
Mat sample = trainSet[0].Data;
Mat mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    Mat data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        double meanValue = (double)mean.GetData().GetValue(i, 0);
        double personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
        double[] mva = new double[] { meanValue };
        // Copy the updated value back into the Mat's data at row i, column 0
        Marshal.Copy(mva, 0, mean.DataPointer + i * mean.Cols * mean.ElementSize, 1);
    }
}

Finding the lowest NSInteger from NSArray

I am trying to return the lowest number in an array.
Parameter: arrayOfNumbers - An array of NSNumbers.
Return: The lowest number in the array as an NSInteger.
The code I have thus far doesn't give me any errors, but does not pass the unit tests. What am I doing wrong?
- (NSInteger) lowestNumberInArray:(NSArray *)arrayOfNumbers {
    NSNumber* smallest = [arrayOfNumbers valueForKeyPath:@"@min.self"];
    for (NSInteger i = 0; i < arrayOfNumbers.count; i++) {
        if (arrayOfNumbers[i] < smallest) {
            smallest = arrayOfNumbers[i];
        }
    }
    NSInteger smallestValue = [smallest integerValue];
    return smallestValue;
}
This is the unit test:
- (void) testThatLowestNumberIsReturned {
    NSInteger lowestNumber = [self.handler lowestNumberInArray:@[@3, @8, @-4, @0]];
    XCTAssertEqual(lowestNumber, -4, @"Lowest number should be -4.");
    lowestNumber = [self.handler lowestNumberInArray:@[@83, @124, @422, @953, @1004, @9532, @-1000]];
    XCTAssertEqual(lowestNumber, -1000, @"Lowest number should be -1000.");
}
This method
NSNumber* smallest = [arrayOfNumbers valueForKeyPath:@"@min.self"];
will already determine the smallest number in the array, so the loop inside the method is superfluous (on top of being plain wrong, as @vikingosegundo notes).
You are comparing objects with C types, resulting in pointer addresses being compared with an integer.
Besides the fact that your smallest is already the smallest, since you used the KVC collection operator @min.self (see Glorfindel's answer), the following code shows the correct comparison:
if (arrayOfNumbers[i] < smallest)
should be
if ([arrayOfNumbers[i] compare:smallest] == NSOrderedAscending)
or
if ([arrayOfNumbers[i] integerValue] < [smallest integerValue])
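As an aside, if you ever port this helper to Swift (as the first question on this page is doing for other code), the whole method reduces to the standard library's min(). A hypothetical sketch, returning 0 for an empty array:
func lowestNumber(in numbers: [Int]) -> Int {
    // min() returns nil for an empty array; 0 is an arbitrary fallback here
    return numbers.min() ?? 0
}
// lowestNumber(in: [3, 8, -4, 0]) == -4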

Accelerate framework "sign" function

I'm trying to find a super fast way of getting the sign of each value in a vector. I was hoping to find a function in the Accelerate framework to do this, but couldn't find one. Here's what it would do:
float *inputVector = .... // some audio vector
int length = ...// length of input vector.
float *outputVector = ....// result
for( int i = 0; i < length; i++ )
{
    if( inputVector[i] >= 0 ) outputVector[i] = 1;
    else outputVector[i] = -1;
}
Ok, I think I've found a way...
vvcopysignf() "Copies an array, setting the sign of each value based on a second array."
So, one method would be to make an array of 1s, then use this function to change the sign of the 1s based on an input array.
float *ones = ... // a vector filled with 1's
float *input = .... // an input vector
float *output = ... // an output vector
int bufferSize = ... // size of the vectors;
vvcopysignf(output, ones, input, &bufferSize);
// output is now an array of -1s and 1s based on the sign of the input.
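If you are calling this from Swift, here is a minimal sketch of the same trick with Accelerate, following the argument order used in the snippet above (note that, unlike the loop in the question, copysign maps -0.0 to -1):
import Accelerate

// Sign of each element as +1 / -1, via vvcopysignf.
func signs(of input: [Float]) -> [Float] {
    var output = [Float](repeating: 0, count: input.count)
    let ones = [Float](repeating: 1, count: input.count)
    var count = Int32(input.count)
    vvcopysignf(&output, ones, input, &count)  // output[i] = copysign(ones[i], input[i])
    return output
}

// signs(of: [0.5, -2.0, 3.0])  ->  [1.0, -1.0, 1.0]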

How to set & get value of pointer Swift

In Objective-C, when I want to set/change the value a pointer points to, I use
*pointer = value
But in Swift, how do I get/set the value of a pointer?
I'm working with bitmap pixels:
NSUInteger offsetPixelCountForInput = ghostOrigin.y * inputWidth + ghostOrigin.x;
for (NSUInteger j = 0; j < ghostSize.height; j++) {
    for (NSUInteger i = 0; i < ghostSize.width; i++) {
        UInt32 * inputPixel = inputPixels + j * inputWidth + i + offsetPixelCountForInput;
        UInt32 inputColor = *inputPixel;
        newR = MAX(0,MIN(255, newR));
        newG = MAX(0,MIN(255, newG));
        newB = MAX(0,MIN(255, newB));
        *inputPixel = RGBAMake(newR, newG, newB, A(inputColor));
    }
}
So I want to convert this code into Swift, but I'm stuck with pointers.
23.03.2016 - Update code
var inputPixels:UnsafeMutablePointer<UInt32> = nil
inputPixels = UnsafeMutablePointer<UInt32>(calloc(inputHeight * inputWidth, UInt(sizeof(UInt32))))
You can work with pointers in Swift. There are UnsafePointer and UnsafeMutablePointer generic types.
Here is a function that takes a float pointer.
You can use a float variable and pass its address with &floatVar,
or you can create and allocate an UnsafeMutablePointer and pass it, but then you have to allocate and deallocate the memory manually.
When you work with an UnsafeMutablePointer type and want to assign some value through it, you have to do the following:
check that it points to something (is not nil),
assign your value to its memory property.
Code Example:
func work(p: UnsafeMutablePointer<Float>) {
    if p != nil {
        p.memory = 10
    }
    println(p)
    println(p.memory)
}

var f: Float = 0.0
var fPointer: UnsafeMutablePointer<Float> = UnsafeMutablePointer.alloc(1)
work(&f)
work(fPointer)
fPointer.dealloc(1)
In Swift 3, you should use the pointee property instead of memory:
pointer.pointee = value
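For reference, a minimal sketch of the answer's example in Swift 3+ syntax, where alloc/dealloc and memory became allocate(capacity:), deallocate(), and pointee:
func work(_ p: UnsafeMutablePointer<Float>?) {
    guard let p = p else { return }
    p.pointee = 10    // write through the pointer (was p.memory)
    print(p.pointee)  // read the value it points to
}

var f: Float = 0.0
work(&f)              // pass the address of a variable

let fPointer = UnsafeMutablePointer<Float>.allocate(capacity: 1)
fPointer.initialize(to: 0)
work(fPointer)
fPointer.deinitialize(count: 1)
fPointer.deallocate()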

What could cause the "fft window" values to be NaN in a Hanning normalized window?

I am trying to build an iOS 7 application that detects the pitch (or frequency) of a sound/song, for example 349.23 Hz, 392.00 Hz, 440.00 Hz...
So I downloaded the "Auto Correlation" project (it's a Musician's Kit: http://musicianskit.com/developer.php) and ran it on the iOS 7 Simulator. It works fine there: the Hanning FFT window has values (not NaN), and it is able to get the frequency in the end.
But it doesn't work on an iPhone device; there are no values at all in the Hanning FFT window.
Can anybody have a look at these classes by Kevin Murphy and tell me how I could modify them to work on an iPhone device (not just the iOS Simulator)?
Many, many thanks!
I've pasted my code below:
// PitchDetector.m
- (id)initWithSampleRate:(float)rate lowBoundFreq:(int)low hiBoundFreq:(int)hi andDelegate:(id<PitchDetectorDelegate>)initDelegate {
    self.lowBoundFrequency = low;
    self.hiBoundFrequency = hi;
    self.sampleRate = rate;
    self.delegate = initDelegate;
    bufferLength = self.sampleRate/self.lowBoundFrequency;

    hann = (float*) malloc(sizeof(float)*bufferLength);
    // apply the Hanning window; 'hann' is the Hanning FFT window
    vDSP_hann_window(hann, bufferLength, vDSP_HANN_NORM);

    sampleBuffer = (SInt16*) malloc(512);
    samplesInSampleBuffer = 0;
    result = (float*) malloc(sizeof(float)*bufferLength);
    return self;
}
- (void)performWithNumFrames:(NSNumber *)numFrames;
{
    int n = numFrames.intValue;
    float freq = 0;
    SInt16 *samples = sampleBuffer;
    int returnIndex = 0;
    float sum;
    bool goingUp = false;
    float normalize = 0;
    for(int i = 0; i < n; i++) {
        sum = 0;
        for(int j = 0; j < n; j++) {
            // here I found that hann[j] is NaN -- it seems hann has no values in it
            // ('hann' is the Hanning FFT window); if hann[j] is NaN, sum also becomes NaN
            sum += (samples[j]*samples[j+i])*hann[j];
        }
        if(i == 0) normalize = sum;
        result[i] = sum/normalize;
    }
    ......
    ......
}
I am using this same program from:
https://github.com/fotock/PitchDetectorExample/tree/1c68491f9c9bff2e851f5711c47e1efe4092f4de
Although I have not put this on an iPhone yet, only the simulator, I was having problems from time to time with the program crashing. I found that I needed to update it manually from a "fork" of the code on GitHub, found here:
https://github.com/fotock/PitchDetectorExample/network
I added Jordan Liggitt's bug fixes manually, and now the app does not crash. I hope this helps, because if it does not, I will be facing the same issues when I load this app on an iPhone.
Hope it works!
Update
I have now installed this on an iPhone vs the simulator and it works as it should without errors or crashing.
