What is the fastest way to group array elements into buckets in iOS?

So I have an array of 500,000 elements:
float *arrayToBucketize = (float *)malloc(sizeof(float) * 500000);
and an array that represents the buckets:
int buckets[5] = {0, 25, 50, 75, 100};
What is the fastest way to go through the first array, look at each float value, compare it to the "buckets" array, and replace that float value with the nearest bucket value? So if the float value were 11.25, it would be replaced with 0. On the other hand, 90.10 would be replaced with 100.
Also, I would need any values outside of that range (<0 and >100) to remain unchanged.
I know I can do this with for loops and if conditions, but in the habit of optimizing, I am trying to find a more efficient (faster) way of doing this. I am hoping there is a C function or an iOS function in the Accelerate framework that can do this, or possibly a series of Accelerate framework matrix functions.
Thanks

For each value inside the bucket range, divide by the common bucket spacing (25 here, which is the greatest common divisor of the bucket values, not their lowest common multiple). Round the result to the nearest integer, and then multiply again by the same spacing.
Using the example numbers:
11.25 / 25 = 0.45
0.45 -> 0
0 * 25 = 0
90.10 / 25 = 3.604
3.604 -> 4
4 * 25 = 100
The Accelerate framework has vectorized divide, round, and multiply functions, so this should run fairly quickly.
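Not the Accelerate calls themselves, but here is a minimal numpy sketch of that divide/round/multiply pipeline, with a mask added so out-of-range values pass through unchanged as the question requires (the bucketize helper and its parameters are made up for illustration). On iOS the analogous vectorized pieces would be something like vDSP_vsdiv, vvnearbyintf, and vDSP_vsmul.

import numpy as np

def bucketize(values, spacing=25.0, lo=0.0, hi=100.0):
    # Snap values inside [lo, hi] to the nearest multiple of `spacing`;
    # leave out-of-range values untouched.
    values = np.asarray(values, dtype=np.float32)
    snapped = np.rint(values / spacing) * spacing   # divide, round, multiply
    in_range = (values >= lo) & (values <= hi)
    return np.where(in_range, snapped, values)

print(bucketize([11.25, 90.10, -3.0, 250.0]))  # [  0. 100.  -3. 250.]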

Related

Difference between absdiff and normal subtraction in OpenCV

I am currently planning on training a binary image classification model. The images I want to train on are the difference between two original pictures. In other words, for each data entry, I start out with 2 pictures, take their difference, and then label that difference as a 0 or 1. My question is what is the best way to find this difference. I know about cv2.absdiff and normal subtraction of images - what is the most effective way to go about this?
About the data: the images I'm training on are screenshots that usually are the same but may have small differences. I found that normal subtraction seems to show fewer differences than absdiff.
This is the code I use for absdiff (imports added so the snippet is self-contained):
import cv2
import numpy as np

diff = cv2.absdiff(img1, img2)                 # per-pixel |img1 - img2|
mask = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
th = 1
imask = mask > th                              # pixels that changed
canvas = np.zeros_like(img2, np.uint8)
canvas[imask] = img2[imask]                    # copy changed pixels from img2
And then this for normal subtraction:
def extract_diff(self, imageA, imageB, image_name, path):
    # subtract in float32 so negative differences are not wrapped
    subtract = imageB.astype(np.float32) - imageA.astype(np.float32)
    mask = cv2.inRange(np.abs(subtract), (30, 30, 30), (255, 255, 255))
    th = 1
    imask = mask > th
    canvas = np.zeros_like(imageA, np.uint8)
    canvas[imask] = imageA[imask]
    return canvas   # the extracted difference image
Thanks!
A difference can be negative or positive.
For some number types, such as uint8 (unsigned 8-bit int), which can't represent negative values (they have no sign), a negative result wraps around and the value no longer makes sense. Other types are signed (e.g. floats, signed ints), so a negative value can be represented correctly.
That's why cv.absdiff exists. It always gives you absolute differences, and those are okay to represent in an unsigned type.
Example with numbers: a = 4, b = 6. a-b should be -2, right?
That value, as an uint8, will wrap around to become 0xFE, or 254 in decimal. The 254 value has some relation to the true -2 difference, but it also incorporates the range of values of the data type (8 bits: 256 values), so it's really just "code".
cv.absdiff would give you the absolute value of the difference (-2), which is 2.
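A quick numpy check of the wrap-around, using the a = 4, b = 6 example above:

import cv2
import numpy as np

a = np.full((1, 1), 4, np.uint8)
b = np.full((1, 1), 6, np.uint8)
print((a - b)[0, 0])            # 254: -2 wrapped around in uint8
print(cv2.absdiff(a, b)[0, 0])  # 2: the true absolute difference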

Compute annual mean using x-arrays

I have a Python xarray dataset with time, x, y as its dimensions and value1 as its variable. I'm trying to compute the annual mean of value1 for each x,y coordinate pair.
I've run into this function while reading the docs:
ds.groupby('time.year').mean()
This seems to compute a single annual mean across all x,y coordinate pairs in value1, rather than the annual mean of each individual x,y coordinate pair.
While the code snippet above produces the wrong output, I'm very interested in its oversimplified form. I would really like to figure out the "xarray trick" for computing the annual mean for a given x,y coordinate pair rather than hacking it together myself.
Can someone point me in the right direction? Should I temporarily turn this into a pandas object?
To avoid the default of averaging over all dimensions, you simply need to supply the dimension you want to average over explicitly:
ds.groupby('time.year').mean('time')
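A minimal, self-contained check with synthetic data (the dimension sizes are arbitrary) that the x, y dimensions survive the groupby:

import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", "2001-12-31", freq="D")
ds = xr.Dataset(
    {"value1": (("time", "x", "y"), np.random.rand(time.size, 3, 4))},
    coords={"time": time},
)
annual = ds.groupby("time.year").mean("time")
print(annual.value1.dims)  # ('year', 'x', 'y'): one mean per x,y per year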
Note that calling ds.groupby('time.year').mean('time') will be incorrect if you are working with monthly rather than daily data: a plain mean places equal weight on months of different lengths, e.g. February and July, which is wrong.
Instead, use the function below from NCAR:
import numpy as np
import xarray as xr

def weighted_temporal_mean(ds, var):
    """
    Annual mean of `var`, weighted by the number of days in each month.
    """
    # Determine the month length
    month_length = ds.time.dt.days_in_month
    # Calculate the weights
    wgts = month_length.groupby("time.year") / month_length.groupby("time.year").sum()
    # Make sure the weights in each year add up to 1
    np.testing.assert_allclose(wgts.groupby("time.year").sum(xr.ALL_DIMS), 1.0)
    # Subset our dataset for our variable
    obs = ds[var]
    # Set up masking for NaN values
    cond = obs.isnull()
    ones = xr.where(cond, 0.0, 1.0)
    # Calculate the numerator
    obs_sum = (obs * wgts).resample(time="AS").sum(dim="time")
    # Calculate the denominator
    ones_out = (ones * wgts).resample(time="AS").sum(dim="time")
    # Return the weighted average
    return obs_sum / ones_out

average_weighted_temp = weighted_temporal_mean(ds_first_five_years, 'TEMP')

Generate a Random Number between 0.0001 and 0.002 in Objective C (iOS)?

Does anyone know how I could generate a random number in a range in iOS? I am currently working on a synthesizer in iOS (using SpriteKit and AudioKit), and I am trying to modify the loudness of the synth by changing its variability whenever a slider is moved.
This is what my code looks like:
[Synth setAmplitude: 0.5 + (slider.currentValue * loudnessVar)];
where 0.5 is the default amplitude value and loudnessVar is a random number.
Since the slider returns values from -170 to 170, I would need a relatively low number in order to keep the result between 0 and 1.
Is anyone able to help with this?
The way to generate a random integer in a range (arc4random_uniform avoids the modulo bias of arc4random() % n) is:
NSInteger random = min + arc4random_uniform((uint32_t)(max - min + 1));
So you can generate a number between 1 and 20 and divide it by 10000.0, giving values between 0.0001 and 0.002; it's just an example.
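The same idea works for a continuous value: map a unit random number onto the target range with lo + u * (hi - lo). A quick Python sketch of that mapping (the names are mine, just for illustration):

import random

lo, hi = 0.0001, 0.002
loudnessVar = lo + random.random() * (hi - lo)  # uniform in [lo, hi)
# equivalently: loudnessVar = random.uniform(lo, hi)
print(loudnessVar)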

Normalize a feature in this table

This has become quite a frustrating question, but I've asked in the Coursera discussions and they won't help. Below is the question (the screenshot itself did not survive; reconstructed from the values referenced in the answers below, the table was):

midterm exam    (midterm exam)^2    final exam
89              7921                96
72              5184                74
94              8836                87
69              4761                78

I've gotten it wrong 6 times now. How do I normalize the feature? Hints are all I'm asking for.
I'm assuming x_2^(2) is the value 5184, unless I am adding the x_0 column of 1's, which they don't mention but he certainly mentions in the lectures when talking about creating the design matrix X; in that case x_2^(2) would be the value 72. Assuming one or the other is right (I'm playing a guessing game), what should I use to normalize it? He talks about 3 different ways to normalize in the lectures: one using the maximum value, another using the range (difference between max and min), and another using the standard deviation; they want an answer correct to the hundredths. Which one am I to use? This is so confusing.
...use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.
So for any individual feature f:
f_norm = (f - f_mean) / (f_max - f_min)
e.g. for x2 = (midterm exam)^2 = {7921, 5184, 8836, 4761}
> x2 <- c(7921, 5184, 8836, 4761)
> mean(x2)
[1] 6675.5
> max(x2) - min(x2)
[1] 4075
> (x2 - mean(x2)) / (max(x2) - min(x2))
[1]  0.306 -0.366  0.530 -0.470
Hence norm(5184) = -0.366
(using R language, which is great at vectorizing expressions like this)
I agree it's confusing that they used the notation x2^(2) to mean the normalized x2, i.e. x2'.
EDIT: in practice everyone calls the built-in scale(...) function, which does the same thing.
It's asking you to normalize the second feature, in the second column, using both feature scaling and mean normalization. Therefore,
(5184 - 6675.5) / 4075 = -0.366
Usually we normalize all of the features to have zero mean and lie within [-1, 1].
You can do that easily by dividing by the maximum absolute value and then removing the mean of the samples.
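A quick numpy sketch of that order of operations (using the x2 column values; the variable names are mine):

import numpy as np

x2 = np.array([7921, 5184, 8836, 4761], dtype=float)
x2_scaled = x2 / np.abs(x2).max()       # scale into [-1, 1]
x2_norm = x2_scaled - x2_scaled.mean()  # then zero-center
print(x2_norm.round(3))                 # [ 0.141 -0.169  0.245 -0.217]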
"I'm assuming x_2^(2) is the value 5184" is this because it's the second item in the list and using the subscript _2? x_2 is just a variable identity in maths, it applies to all rows in the list. Note that the highest raw mid-term exam result (i.e. that which is not squared) goes down on the final test and the lowest raw mid-term result increases the most for the final exam result. Theta is a fixed value, a coefficient, so somewhere your normalisation of x_1 and x_2 values must become (EDIT: not negative, less than 1) in order to allow for this behaviour. That should hopefully give you a starting basis, by identifying where the pivot point is.
I had the same problem. In my case I was computing the average as the maximum x2 value (8836) minus the minimum x2 value (4761), divided by two, instead of the sum of the x2 values divided by the number of examples.
For the same training set, I got the question as:
Q. What is the normalized feature x^(3)_1?
So, the 3rd training example and 1st feature works out to 94 in the table above.
Now, the normalized form is
x_norm = (x - mean(x)) / range(x)
The values are:
x = 94
mean = (89 + 72 + 94 + 69) / 4 = 81
range = 94 - 69 = 25
Normalized x = (94 - 81) / 25 = 0.52
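If you want to check both worked answers above in one go, here is a small Python sketch of the same mean normalization (column values taken from the table; the mean_normalize helper is just for illustration):

import numpy as np

def mean_normalize(x):
    # feature scaling + mean normalization: (x - mean) / (max - min)
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.max() - x.min())

midterm = np.array([89, 72, 94, 69])             # feature x_1
midterm_sq = midterm ** 2                        # feature x_2 = (midterm)^2
print(round(mean_normalize(midterm_sq)[1], 3))   # -0.366  (x_2^(2))
print(round(mean_normalize(midterm)[2], 2))      # 0.52    (x_1^(3))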
I'm taking this course at the moment, and a really trivial mistake I made the first time I answered this question was using a comma instead of a dot in the answer, since I did it by hand and in my country we use a comma to denote decimals (e.g. 0,52 instead of 0.52).
The second time I tried, I used a dot and it worked fine.

Estimated size of the self-join operation on a relation R, given a histogram for R

Query optimizers typically use summaries of data distributions to estimate the sizes of the intermediate tables generated during query processing. One popular such summarization scheme is a histogram, whereby the input range is partitioned into buckets and a cumulative count is maintained of the number of tuples falling in each bucket. The distribution within a bucket is assumed to be uniform for the purposes of estimation.
The following shows one such histogram for a relation R on a discrete attribute a with domain [1..10]:
Bucket 1: range = [1..2] Cumulative tuple count = 6
Bucket 2: range = [3..8] Cumulative tuple count = 30
Bucket 3: range = [9..10] Cumulative tuple count = 10
What is the estimated size of the self-join operation R x R?
A) 46
B) 218
C) 248
D) 1,036
E) 5,672
Answer given in solutions : B
How is the answer to be calculated?
The size of a self-join of R on attribute a is the sum, over all values of a, of the square of each value's frequency (every tuple with a given value joins with every tuple carrying the same value).
Here the frequencies are given per bucket, e.g. the first bucket covers 2 distinct values with a total count of 6, so under the uniformity assumption each value has frequency 6/2 = 3; similarly, each of the 6 values in bucket two has frequency 30/6 = 5, and each of the 2 values in bucket three has frequency 10/2 = 5.
Therefore, the size is
Size = [(3^2)*2] + [(5^2)*6] + [(5^2)*2]
     = 18 + 150 + 50
     = 218
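The same calculation as a short Python sketch (bucket widths and counts taken from the histogram above):

# (number of distinct values in the bucket, cumulative tuple count)
buckets = [(2, 6), (6, 30), (2, 10)]

# Uniformity assumption: each of the w values occurs c/w times, and each
# value contributes (c/w)^2 joined pairs, so a bucket contributes w*(c/w)^2.
size = sum(w * (c / w) ** 2 for w, c in buckets)
print(size)  # 218.0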
I've been trying to figure this one out myself (it's from the GRE Computer Science subject test preparation exam).
So far I haven't found an explanation of why the answer is 218, but I have found a connection between the numbers given and the correct answer.
It turns out that if you sum the squares of the cumulative tuple counts, each divided by the number of distinct values in its bucket, you get 218. Less abstractly: 6²/2 + 30²/6 + 10²/2 = 218.
It's not an answer, but at least there's a connection =)
