Are the key frames in a CAKeyframeAnimation always hit exactly? - ios

Can anyone tell me if the key frames in a CAKeyframeAnimation are always guaranteed to be hit with their exact values when the animation runs? Or... do they only act as interpolation guides? e.g. If I specify, say, 3 points on a path for some arbitrary property to follow - let's call it 'position' - and I specify an execution time of 0.3f seconds, whilst (obviously) points 1 and 3 must be hit (as they are the terminal points) can I guarantee that point 2 will be evaluated exactly as specified in the key frame array? Surprisingly, I haven't found a single document that gives an adequate answer. I ask this because I'm writing an OpenAL sound-effect synchroniser that uses a keyframe animation's path to trigger various short sounds along its length and whilst most of them get executed, now and again a few don't and I don't know if it's my logic that's wrong or my code.
Thanks in advance.

In general, relying on the "exactness" of a floating-point value that is the result of a calculation is fraught with danger. So for example the following code:
CGFloat x1 = some_function();
CGFloat x2 = some_other_function();
if(x1 == x2)
{
// do something
}
without even knowing what the functions do is most likely incorrect. Even if the functions do very similar calculations the optimizer may have reordered operations causing small rounding errors sufficient to cause the equality test to fail.
It should be:
CGFloat x1 = some_function();
CGFloat x2 = some_other_function();
CGFloat tolerance = 0.1; // or some tolerance suitable for the calculation.
if(fabsf(x1 - x2) < tolerance)
{
// do something
}
where tolerance is some value suitable for the calculation being performed.
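The same pitfall is easy to reproduce in any language. For example, in Python:

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so the "obvious" equality fails.
a = 0.1 + 0.2
b = 0.3

print(a == b)             # False
print(abs(a - b) < 1e-9)  # True: compare with a tolerance instead
```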
So, without knowing the internals of CAKeyframeAnimation, I can tell you that any code that expects exact values would be inherently "fragile". This is not to say that you won't get exact values, you might, but it will depend a lot on the input data.
I hope this helps.


How to scale % change based features so that they are viewed "similarly" by the model

I have some features that are zero-centered values and are supposed to represent the change between a current value and a previous value. Generally speaking, I believe there should be some symmetry between these values: there should be roughly the same number of positive values as negative values, and these values should operate on roughly the same scale.
When I try to scale my samples using MaxAbsScaler, I notice that my negative values for this feature get almost completely drowned out by the positive values. And I don't really have any reason to believe my positive values should be that much larger than my negative values.
So what I've noticed is that, fundamentally, the magnitudes of percentage-change values are not symmetrical in scale. For example, if I have a value that goes from 50 to 200, that results in a 300.0% change. If I have a value that goes from 200 to 50, that results in a -75.0% change. I get that there is a reason for this, but in terms of my feature, I don't see a reason why a change from 50 to 200 should be treated as 4x more "important" than the same change in value in the opposite direction.
Given this information, I do not believe there is any reason to want my model to treat a change of 200 to 50 as a "lesser" change than a change of 50 to 200. Since I am trying to represent the change of a value over time, I want to abstract this pattern so that my model can "visualize" the change of a value over time the same way a person would.
Right now I am solving this by using this formula:
def symmetric_change(prev, curr):
    if curr > prev:
        return curr / prev - 1
    else:
        return (prev / curr - 1) * -1
And this does seem to treat changes in value similarly regardless of the direction, i.e., from the example above, 50→200 gives 3.0 (+300%) and 200→50 gives -3.0 (-300%). Is there a reason why I shouldn't be doing this? Does this accomplish my goal? Has anyone run into similar dilemmas?
This is a discussion question and it's difficult to know the right answer to it without knowing the physical relevance of your feature. You are calculating a percentage change, and a percent change is dependent on the original value. I am not a big fan of a custom formula only to make percent change symmetric since it adds a layer of complexity when it is unnecessary in my opinion.
If you want change to be symmetric, you can try direct difference or factor change. There's nothing to suggest that difference or factor change are less correct than percent change. So, depending on the physical relevance of your feature, each of the following symmetric measures would be a correct way to measure change -
Difference change -> 50 to 200 yields 150, 200 to 50 yields -150
Factor change with logarithm -> 50 to 200 yields log(4), 200 to 50 yields log(1/4) = -log(4)
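As a quick sketch (plain Python, using the values from the examples above), here is how these two measures compare with the asker's formula; all three are symmetric in the sense that reversing the direction of the change only flips the sign:

```python
import math

def difference_change(prev, curr):
    return curr - prev

def log_factor_change(prev, curr):
    return math.log(curr / prev)

def symmetric_pct_change(prev, curr):
    # The asker's formula: percent change with the higher value as the basis.
    if curr > prev:
        return curr / prev - 1
    return (prev / curr - 1) * -1

print(difference_change(50, 200), difference_change(200, 50))        # 150 -150
print(log_factor_change(50, 200), log_factor_change(200, 50))        # log(4) -log(4)
print(symmetric_pct_change(50, 200), symmetric_pct_change(200, 50))  # 3.0 -3.0
```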
You're having trouble because you haven't brought the abstract questions into your paradigm.
"... my model can "visualize" ... same way a person would."
In this paradigm, you need a metric for "same way". There is no such empirical standard. You've dropped both of the simple standards -- relative error and absolute error -- and you posit some inherently "normal" standard that doesn't exist.
Yes, we run into these dilemmas: choosing a success metric. You've chosen a classic example from "How To Lie With Statistics"; depending on the choice of starting and finishing proportions and the error metric, you can "prove" all sorts of things.
This brings us to your central question:
Does this accomplish my goal?
We don't know. First of all, you haven't given us your actual goal. Rather, you've given us an indefinite description and a single example of two data points. Second, you're asking the wrong entity. Make your changes, run the model on your data set, and examine the properties of the resulting predictions. Do those properties satisfy your desired end result?
For instance, given your posted data points, (200, 50) and (50, 200), how would other examples fit in, such as (1, 4), (1000, 10), etc.? If you're simply training on the proportion of change over the full range of values involved in that transaction, your proposal is just what you need: use the higher value as the basis. Since you didn't post any representative data, we have no idea what sort of distribution you have.

How do I find the required maxima in acceleration data obtained from an iPhone?

I need to find the number of times the accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using Core Motion's device-motion updates to get the data. While the data was being recorded, I shook the phone 9 times (where each extremity was one of the highest points of acceleration).
I have marked the 18 (i.e. 9*2) times when acceleration had attained maximum in red boxes on the plot.
But, as you see, there are some local maxima that I do not want to consider. Can someone direct me towards an idea that will help me achieve detecting only the maxima of importance to me?
Edit: I think I have to use a low pass filter. But, how do I implement this in Swift? How do I choose the frequency of cut-off?
Edit 2:
I implemented a low pass filter and passed the raw motion data through it and obtained the graph as shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can be observed. I'll work in depth with the filter and probably fix it.
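A simple exponential (single-pole) low-pass filter, which is the usual choice for smoothing accelerometer data, looks like this (sketched in Python for brevity; alpha is a smoothing factor you have to tune):

```python
def low_pass(samples, alpha=0.1):
    """Exponential moving average; smaller alpha means stronger smoothing."""
    filtered = []
    y = samples[0]  # seed with the first sample to avoid a startup transient
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        filtered.append(y)
    return filtered
```

The cutoff frequency is set indirectly: for a sample interval dt and desired cutoff fc, alpha = dt / (dt + 1/(2*pi*fc)).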
Instead of trying to find the maxima, I would try to look for cycles. In particular, we note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudocode. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like
cycles = []          // list of (start, end) pairs
cstart = -1
cend = -1
v_threshold = 1.8    // completely guessing these figures by looking at the plot
dv_threshold = 0.01
for i in 1 .. length(v) - 1:
    if cstart < 0 and
       v[i] > v_threshold and
       dv[i] < dv_threshold then:
        // cycle is starting here
        cstart = i
    else if cstart >= 0 and
            v[i] < v_threshold and
            dv[i] < dv_threshold then:
        // cycle ended
        cend = i
        cycles.add(pair(cstart, cend))
        cstart = -1
        cend = -1
    end if
end for
Now, you note in the comments that the user should be able to shake with different force and you should still be able to recognise the motion. I would start with simple hard-coded thresholds like the ones above, and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will nevertheless always need one. However, from the data you show, I strongly suggest at least limiting yourself to looking at the minima rather than the maxima.
Also: the code I suggested is written assuming you have the full data set; however, you will want to run this in real time. This will be no problem, and the algorithm will still work (that is, the idea will still work, but you'll have to code it somewhat differently).
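As an illustration, here is a minimal runnable version of the idea in Python (note: I've flipped the derivative sign on the start/end conditions so that a cycle opens on a rising edge and closes on a falling edge; the thresholds remain guesses to be tuned against real data):

```python
def find_cycles(v, v_threshold=1.8, dv_threshold=0.01):
    """Return (start, end) index pairs for excursions of v above v_threshold."""
    cycles = []
    cstart = -1
    for i in range(1, len(v)):
        dv = v[i] - v[i - 1]  # simple backward-difference derivative
        if cstart < 0 and v[i] > v_threshold and dv > dv_threshold:
            cstart = i                   # rising past the threshold: cycle opens
        elif cstart >= 0 and v[i] < v_threshold and dv < -dv_threshold:
            cycles.append((cstart, i))   # fell back below: cycle closes
            cstart = -1
    return cycles
```

For real-time use, feed samples in one at a time and keep cstart as state between calls.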

Interpolating and predicting CLLocationManager

I need to get an updated user location at a rate of at least 10 Hz to animate the location smoothly in MapBox for iOS while driving. Since Core Location only provides one point every second, I believe I need to do some prediction.
I have tried iKalman, but it doesn't seem to make any difference when updated once a second and queried at 10 Hz.
How do i tackle this please?
What you're looking for is extrapolation, not interpolation.
I'm really, really surprised that there are so few resources on extrapolation on the internet. If you want to know more, you should read a numerical methods/math book and implement the algorithm yourself.
Maybe simple linear extrapolation will suffice?
// You need the two last points to extrapolate.
// X is time; Y is either longitude or latitude.
- (double)getExtrapolatedValueAt:(double)x withPointA:(Point *)A andPointB:(Point *)B
{
    return A.y + (x - A.x) / (B.x - A.x) * (B.y - A.y);
}

- (Coord *)getExtrapolatedPointAtTime:(double)X
                        fromLatitudeA:(Point *)latA
                         andLatitudeB:(Point *)latB
                        andLongitudeA:(Point *)longA
                        andLongitudeB:(Point *)longB
{
    double extrapolatedLatitude = [self getExtrapolatedValueAt:X withPointA:latA andPointB:latB];
    double extrapolatedLongitude = [self getExtrapolatedValueAt:X withPointA:longA andPointB:longB];
    Coord *extrapolatedPoint = [Coord new];
    extrapolatedPoint.longitude = extrapolatedLongitude;
    extrapolatedPoint.latitude = extrapolatedLatitude;
    return extrapolatedPoint;
}
Not sure if I got the function right but you can check here:
http://en.wikipedia.org/wiki/Extrapolation
it's really easy.
You should implement linear extrapolation.
If you find that linear extrapolation isn't enough (for curves, for example), you can iterate and replace it with some other extrapolation algorithm.
Another approach would be to have a 1 sec delay in animation and animate between two known points using interpolation. I don't know if that's acceptable for your use case.
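That delayed-interpolation approach can be sketched as follows (Python; interpolate_fix is a hypothetical helper, not part of Core Location or MapBox). Because the animation runs one fix behind real time, you only ever interpolate between two fixes you have already received, so no prediction is needed:

```python
def interpolate_fix(p0, t0, p1, t1, t):
    """Linearly interpolate between the last two known fixes.

    p0, p1 are (lat, lon) pairs received at times t0 and t1; t is the
    (delayed) animation time, expected to lie in [t0, t1].
    """
    if t1 == t0:
        return p1
    f = (t - t0) / (t1 - t0)
    lat = p0[0] + f * (p1[0] - p0[0])
    lon = p0[1] + f * (p1[1] - p0[1])
    return (lat, lon)
```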
This problem is typically solved with something called "Dead Reckoning". And you're right on track with trying to use a Kalman filter for doing this. If iKalman isn't working for you, you can try to resort to a simpler approach.
There's a lot of this sort of problem solving when dealing with games and network latency, so you can likely reuse an algorithm developed for this purpose.
This seems like a pretty thorough example.
The wiki on Kalman filters may help out as well.
I ended up solving this by using long UIView animations (2-3 seconds) with easing that start from the current state. This gives the impression of smooth position and heading following "for free".

How to optimize math with 2D vectors?

I'm working on an iOS app that performs some calculations with an array of a thousand objects. The objects have properties with x and y coordinates, velocity for the x and y axes, and a couple of other x-y properties. There is some math to calculate the interaction between the physical objects represented by the objects in the array. The math is pretty straightforward: basically it is a calculation of the forces applied to the objects, their speed, and the change in position (x, y) of each object. I wrote the code using regular scalar math in Objective-C; it worked fine on the iPhone 5s, but it was too slow on other devices like the iPhone 4, 4s, 5 and iPad mini. I found that the most time-consuming operations were the calculation of the distance between 2 points and the calculation of the length of a vector, as shown below, which involves taking a square root:
float speed = sqrtf(powf(self.dynamic.speed_x, 2) + powf(self.dynamic.speed_y, 2));
So I had to do something to make the calculations quicker. I rewrote the code so that the properties holding the coordinates of the objects, and properties such as velocity that were represented by X and Y components, became vectors of type GLKVector2. I was hoping this would make calculations such as the distance between 2 vectors (or points, as per my understanding), and the addition and subtraction of vectors, significantly faster thanks to special vector functions like GLKVector2Distance, GLKVector2Normalize, GLKVector2Add, etc. However, it didn't help much in terms of performance because, I believe, to put an object with properties of GLKVector2 type into the array I had to use NSValue, and likewise decode the GLKVector2 values back from the object in the array to perform vector calculations. Below is the code from the calculation method in the object's implementation:
GLKVector2 currentPosition;
[self.currentPosition getValue:&currentPosition];
GLKVector2 newPosition;
// calculations with vectors. Result is in newPosition.
self.currentPosition = [NSValue value:&newPosition withObjCType:@encode(GLKVector2)];
Moreover, when I rewrote the code to use GLKVector2, I got memory warnings, and after running for some time the application sometimes crashed.
I spent several days trying to find a better way to do the calculations faster; I looked at vecLib and OpenGL, but have not found a solution that I could understand. I have a feeling that I might have to write the code in C and integrate it somehow into Objective-C, but I don't understand how to integrate it with the array of objects without using NSValue thousands of times.
I would greatly appreciate it if anyone could advise on what direction to look in. Maybe there is some library available that can easily be used in Objective-C with groups of objects stored in arrays?
Learn how to use Instruments. Any performance optimisation is totally pointless unless you measure speed before and after your changes. An array of 1000 objects is nothing, so you might be worrying about nothing. Or slowdowns are in a totally different place. Use Instruments.
x * x is a multiplication. powf (x, 2.0) is an expensive function call that probably takes anywhere between 20 and 100 times longer.
GLKVector2 is a primitive (it is a union). Trying to stash it into an NSValue is totally pointless and wastes probably 100 times more time than just storing it directly.
Here is an answer to your question about integrating physics calculations in C with your current Objective-C class. If you have a fragment of C code in your .m file which may look like
static CGPoint point[MAX_OBJ];
static int n_points = 0;
with a corresponding function in plain C as you suggested for simulating physical interactions that acts on point to update object positions, as in
void tick_world() {
    for (int k = 0; k < n_points; k++) {
        float speed = sqrtf((point[k].x * point[k].x) + (point[k].y * point[k].y));
        ...
    }
}
then, your Objective-C class Moving for the moving object could contain a pointer to a particular CGPoint in point that you would define in the interface (probably the corresponding *.h file):
@interface Moving : NSObject {
    ...
    CGPoint *pos;
}
When handling the init message, you can then grab and initialize the next available element in point. If your objects persist throughout run time, this could be done very simply by
@implementation
...
- (id)initAtX:(float)x0 Y:(float)y0 {
    self = [super init];
    if (self) {
        if (n_points == MAX_OBJ) {
            [self release];
            return nil;
        }
        pos = point + n_points++;
        pos->x = x0;
        pos->y = y0;
    }
    return self;
}
If your Moving objects do not persist, you might want to think of a smart way to recycle slots after destruction. For example, you could initialize the x field of every element of point with NAN and use that as a way to locate a free slot. In your dealloc, you would then set pos->x = NAN.
It sounds like you're up against a couple common problems: 1) fast math in a high-level language, and 2) a meta-problem: whether to get the benefit of others' work (OpenGL, and several ideas listed here) in exchange for a steep learning curve as the developer.
Especially for this subject matter, I think the trade is pretty good in favor of using a library. For many (e.g. Eigen), most of the learning curve is about integration of Objective C and C++, which you can quickly put behind you.
(As an aside, computing the square of the distance between objects is often sufficient for making comparisons. If that works in your app, you can save cycles by replacing distance(a,b) with distanceSquared(a,b).)
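Sketched in Python (illustrative only; distance_squared and nearer are hypothetical helpers): because sqrt is monotonically increasing, comparing squared distances gives the same ordering as comparing the distances themselves.

```python
def distance_squared(ax, ay, bx, by):
    dx, dy = bx - ax, by - ay
    return dx * dx + dy * dy  # no sqrt needed when you only compare

def nearer(p, a, b):
    """True if point a is nearer to p than point b is; each point is (x, y)."""
    return distance_squared(p[0], p[1], a[0], a[1]) < distance_squared(p[0], p[1], b[0], b[1])
```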

How to CUDA-ize code when all cores require global memory access

Without CUDA, my code is just two for loops that calculate the distance between all pairs of coordinates in a system and sort those distances into bins.
The problem with my CUDA version is that apparently threads can't write to the same global memory locations at the same time (race conditions?). The values I end up getting for each bin are incorrect because only one of the threads ended up writing to each bin.
__global__ void computePcf(double const * const atoms,
                           double *bins,
                           int numParticles,
                           double dr) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < numParticles - 1) {
        for (int j = i + 1; j < numParticles; j++) {
            double r = distance(&atoms[3*i + 0], &atoms[3*j + 0]);
            int binNumber = floor(r / dr);
            // Problem line right here:
            // this memory address is modified by multiple threads.
            bins[binNumber] += 2.0;
        }
    }
}
So... I have no clue what to do. I've been Googling and reading about shared memory, but the problem is that I don't know what memory area I'm going to be accessing until I do my distance computation!
I know this is possible, because a program called VMD uses the GPU to speed up this computation. Any help (or even ideas) would be greatly appreciated. I don't need this optimized, just functional.
How many bins[] are there?
Is there some reason that bins[] need to be of type double? It's not obvious from your code. What you have is essentially a histogram operation, and you may want to look at fast parallel histogram techniques. Thrust may be of interest.
There are several possible avenues to consider with your code:
See if there is a way to restructure your algorithm to arrange computations in such a way that a given group of threads (or bin computations) are not stepping on each other. This might be accomplished based on sorting distances, perhaps.
Use atomics. This should solve your problem, but will likely be costly in terms of execution time (though since it's so simple, you might want to give it a try). In place of this:
bins[binNumber] += 2.0;
Something like this:
int * bins,
...
atomicAdd(bins+binNumber, 2);
You can still do this if bins are of type double, it's just a bit more complicated. Refer to the documentation for the example of how to do atomicAdd on a double.
If the number of bins is small (maybe a few thousand, or less) then you could create a few sets of bins that are updated by multiple threadblocks, and then use a reduction operation (adding the sets of bins together, element by element) at the end of the processing sequence. In this case, you might want to consider using a smaller number of threads or threadblocks, each of which processes multiple elements by putting an additional loop in your kernel code, so that after each particle is processed the loop jumps to the next particle by adding gridDim.x*blockDim.x to the i variable and repeats the process. Since each thread or threadblock has its own local copy of the bins, it can do this without stepping on other threads' accesses.
For example, suppose I only needed 1000 bins of type int. I could create 1000 sets of bins, which would only take up about 4 megabytes. I could then give each of 1000 threads its own bin set, and each thread would update its own bin set without requiring atomics, since it could not interfere with any other thread. By having each thread loop through multiple particles, I can still keep the machine busy effectively this way. When all the particle-binning is done, I then have to add my 1000 bin sets together, perhaps with a separate kernel call.
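The privatized-bins idea can be sketched on the CPU (plain Python standing in for the CUDA kernel; each simulated "thread" bins its own strided share of the data into a private histogram, and the private copies are summed element by element at the end, mirroring the final reduction):

```python
def histogram_privatized(values, num_bins, dr, num_threads=4):
    """Bin values into num_bins of width dr, one private histogram per thread."""
    private = [[0] * num_bins for _ in range(num_threads)]
    for t in range(num_threads):
        # Strided work distribution, like i += gridDim.x * blockDim.x in CUDA.
        for v in values[t::num_threads]:
            private[t][int(v // dr)] += 1  # no contention: this copy is private
    # Final reduction: add the private histograms together, element by element.
    return [sum(col) for col in zip(*private)]
```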
