I need to get updated user locations at 10 Hz or more to animate the position smoothly in MapBox for iOS while driving. Since Core Location only provides one point per second, I believe I need to do some prediction.
I have tried iKalman, but it doesn't seem to make any difference when it's updated once a second and queried at 10 Hz.
How do I tackle this, please?
What you're looking for is extrapolation, not interpolation.
I'm really, really surprised that there are so few resources on extrapolation on the internet. If you want to know more, you should read a numerical methods/math book and implement the algorithm yourself.
Maybe simple linear extrapolation will suffice?
// You need the two last points to extrapolate.
// x is time; y is either longitude or latitude.
-(double) getExtrapolatedValueAt:(double)x withPointA:(Point*)A andPointB:(Point*)B
{
    return A.y + (x - A.x) / (B.x - A.x) * (B.y - A.y);
}

-(Coord*) getExtrapolatedPointAtTime:(double)x fromLatitudeA:(Point*)latA andLatitudeB:(Point*)latB andLongitudeA:(Point*)longA andLongitudeB:(Point*)longB
{
    double extrapolatedLatitude  = [self getExtrapolatedValueAt:x withPointA:latA andPointB:latB];
    double extrapolatedLongitude = [self getExtrapolatedValueAt:x withPointA:longA andPointB:longB];

    Coord* extrapolatedPoint = [Coord new];
    extrapolatedPoint.longitude = extrapolatedLongitude;
    extrapolatedPoint.latitude  = extrapolatedLatitude;
    return extrapolatedPoint;
}
Not sure if I got the function right, but you can check it here:
http://en.wikipedia.org/wiki/Extrapolation
It's really easy; you should implement the linear extrapolation first.
If you find out that linear extrapolation isn't enough (for curves, for example), just iterate and replace it with some other extrapolation algorithm.
Another approach would be to keep a 1-second delay in the animation and animate between two already-known points using interpolation. I don't know if that's acceptable for your use case.
This problem is typically solved with something called "dead reckoning", and you're on the right track trying to use a Kalman filter for it. If iKalman isn't working for you, you can resort to a simpler approach.
There's a lot of this sort of problem-solving in games dealing with network latency, so you can likely reuse an algorithm developed for that purpose.
This seems like a pretty thorough example.
The Wikipedia article on Kalman filters may help out as well.
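For what it's worth, here is a minimal dead-reckoning sketch of my own (not iKalman's API), using the speed and course that Core Location already attaches to each fix. The flat-earth metres-per-degree conversion is an approximation that is fine over sub-second horizons:

#import <CoreLocation/CoreLocation.h>

// Project the last fix forward along its reported course at its reported
// speed. Call this from a 10 Hz NSTimer or CADisplayLink between fixes.
static CLLocationCoordinate2D ProjectedCoordinate(CLLocation *lastFix, NSDate *now) {
    NSTimeInterval dt = [now timeIntervalSinceDate:lastFix.timestamp];
    if (lastFix.speed < 0 || lastFix.course < 0 || dt <= 0) {
        return lastFix.coordinate;                    // no usable velocity yet
    }
    double theta  = lastFix.course * M_PI / 180.0;    // course: degrees clockwise from north
    double dNorth = lastFix.speed * cos(theta) * dt;  // metres travelled north
    double dEast  = lastFix.speed * sin(theta) * dt;  // metres travelled east
    const double metresPerDegree = 111320.0;          // rough length of one degree of latitude
    double lat = lastFix.coordinate.latitude  + dNorth / metresPerDegree;
    double lon = lastFix.coordinate.longitude + dEast  / (metresPerDegree * cos(lat * M_PI / 180.0));
    return CLLocationCoordinate2DMake(lat, lon);
}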
I ended up solving this by using long UIView animations instead (2-3 seconds) with easing that start from the current state. This gives the impression of smooth position and heading following "for free".
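Roughly like this (a sketch: annotationView, newScreenPosition and newHeadingRadians stand in for your own marker view and its freshly projected values):

// Each new fix retargets the in-flight animation instead of jumping,
// thanks to UIViewAnimationOptionBeginFromCurrentState.
[UIView animateWithDuration:2.5
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut |
                            UIViewAnimationOptionBeginFromCurrentState
                 animations:^{
                     annotationView.center    = newScreenPosition;
                     annotationView.transform = CGAffineTransformMakeRotation(newHeadingRadians);
                 }
                 completion:nil];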
I need to find the number of times an accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using the Core Motion device-motion updates. While the data was being recorded, I shook the phone 9 times (where each extremity was one of the highest points of acceleration).
I have marked the 18 (i.e. 9 * 2) times when acceleration attained a maximum in red boxes on the plot.
But, as you can see, there are some local maxima that I do not want to consider. Can someone direct me towards an idea that will help me detect only the maxima that matter to me?
Edit: I think I have to use a low-pass filter. But how do I implement this in Swift? How do I choose the cut-off frequency?
Edit 2:
I implemented a low-pass filter, passed the raw motion data through it, and obtained the graph shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can be observed. I'll work in depth with the filter and probably fix it.
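For reference, the kind of filter meant here can be as small as a single-pole (exponential) low-pass filter; this Objective-C sketch (the Swift port is mechanical) uses a smoothing constant alpha that you tune, or derive from a chosen cut-off frequency:

// alpha in (0, 1]: smaller alpha = heavier smoothing (lower cut-off).
// For a cut-off frequency fc and sample period dt:
// alpha = dt / (dt + 1.0 / (2.0 * M_PI * fc)).
@interface LowPassFilter : NSObject
@property (nonatomic) double alpha;
@property (nonatomic) double value;   // last filtered output
- (double)addSample:(double)raw;
@end

@implementation LowPassFilter
- (double)addSample:(double)raw {
    self.value += self.alpha * (raw - self.value);
    return self.value;
}
@end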
Instead of trying to find the maxima, I would try to look for cycles. In particular, note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudo code. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like
cycles = []    // list of (start, end) pairs
cstart = -1
cend = -1
v_threshold = 1.8     // completely guessing these figures by looking at the plot
dv_threshold = 0.01

for i in 1 .. len(v) - 1:    // start at 1 so that dv[i] = v[i] - v[i - 1] exists
    if cstart < 0 and
       v[i] > v_threshold and
       dv[i] < dv_threshold then:
        // cycle is starting here
        cstart = i
    else if cstart >= 0 and    // >= 0, since index 0 is a valid start
            v[i] < v_threshold and
            dv[i] < dv_threshold then:
        // cycle ended
        cend = i
        cycles.add(pair(cstart, cend))
        cstart = -1
        cend = -1
    end if
end for
Now, you note in the comments that the user should be able to shake with different force and you should still recognise the motion. I would start with simple hard-coded thresholds like the ones above and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will always need one nevertheless. However, from the data you show I strongly suggest at least limiting yourself to looking at the minima and not the maxima.
Also: the code I suggested is written assuming you have the full data set, whereas you will want to run this in real time. That is no problem, and the idea will still work; you'll just have to code it somewhat differently, as sketched below.
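One possible incremental version in Objective-C (the thresholds are still the guesses from above; feed it one sample per device-motion update):

// Streaming counterpart of the batch pseudo code: keeps the cycle state
// between samples instead of scanning a full array.
@interface CycleDetector : NSObject
@property (nonatomic) double vThreshold;              // e.g. 1.8, guessed from the plot
@property (nonatomic) double dvThreshold;             // e.g. 0.01
@property (nonatomic, readonly) NSUInteger cycleCount;
- (void)addSample:(double)v;
@end

@implementation CycleDetector {
    double _previous;
    BOOL   _hasPrevious;
    BOOL   _inCycle;
}

- (void)addSample:(double)v {
    if (!_hasPrevious) { _previous = v; _hasPrevious = YES; return; }
    double dv = v - _previous;                        // same first-difference derivative
    _previous = v;
    if (!_inCycle && v > self.vThreshold && dv < self.dvThreshold) {
        _inCycle = YES;                               // cycle is starting here
    } else if (_inCycle && v < self.vThreshold && dv < self.dvThreshold) {
        _inCycle = NO;                                // cycle ended
        _cycleCount++;
    }
}
@end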
I'm working on an iOS app that performs calculations with an array of a thousand objects. The objects have properties with x and y coordinates, velocities along the x and y axes, and a couple of other x-y properties. There is some math to calculate the interaction between the physical objects represented by the objects in the array. The math is pretty straightforward: basically it is the calculation of the forces applied to the objects, their speeds, and the change in position (x, y) of each object. I wrote the code using regular scalar math in Objective-C; it worked fine on an iPhone 5s, but was too slow on other devices like the iPhone 4, 4s, 5 and iPad mini. I found that the most time-consuming operations were calculating the distance between two points and the length of a vector, which involves taking a square root, as shown below:
float speed = sqrtf(powf(self.dynamic.speed_x, 2) + powf(self.dynamic.speed_y, 2));
So I had to do something to make the calculations quicker. I rewrote the code so that the properties holding the objects' coordinates, and properties such as velocity that were represented by x and y components, became vectors of type GLKVector2. I was hoping this would make calculations such as the distance between two vectors (or points, as I understand them), and vector addition and subtraction, significantly faster thanks to special vector functions like GLKVector2Distance, GLKVector2Normalize, GLKVector2Add, etc. However, it didn't help much in terms of performance because, as I believe, to put an object with properties of type GLKVector2 into the array I had to use NSValue, and likewise decode the GLKVector2 values back out of the objects in the array to perform the vector calculations. Below is the code from the calculation method in the object's implementation:
GLKVector2 currentPosition;
[self.currentPosition getValue:&currentPosition];

GLKVector2 newPosition;
// calculations with vectors. Result is in newPosition.

self.currentPosition = [NSValue value:&newPosition withObjCType:@encode(GLKVector2)];
Moreover, when I rewrote the code to use GLKVector2, I started getting memory warnings, and after running for a while the application sometimes crashes.
I spent several days trying to find a way to do the calculations faster. I looked at vecLib and OpenGL, but have not found a solution that was understandable to me. I have a feeling that I might have to write the code in C and integrate it somehow into Objective-C, but I don't understand how to integrate it with the array of objects without using NSValue thousands of times.
I would greatly appreciate any advice on what direction I should look in. Maybe there is a library available that can easily be used in Objective-C with groups of objects stored in arrays?
Learn how to use Instruments. Any performance optimisation is totally pointless unless you measure the speed before and after your changes. An array of 1000 objects is nothing, so you might be worrying about nothing, or the slowdown may be in a totally different place. Use Instruments.
x * x is a single multiplication. powf(x, 2.0) is an expensive function call that probably takes anywhere between 20 and 100 times longer.
GLKVector2 is a primitive type (it is a union). Stashing it into an NSValue is totally pointless and probably wastes 100 times more time than just storing it directly.
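For instance (a sketch, not the poster's actual class; kMaxBodies and the names are illustrative), keep the vectors in plain C arrays owned by the object and skip NSValue entirely:

#import <GLKit/GLKit.h>

#define kMaxBodies 1000

@interface Simulation : NSObject
- (void)step:(float)dt;
@end

@implementation Simulation {
    GLKVector2 _position[kMaxBodies];   // stored directly: no boxing
    GLKVector2 _velocity[kMaxBodies];
    int        _count;
}

- (void)step:(float)dt {
    for (int i = 0; i < _count; i++) {
        // plain struct math, no NSValue encode/decode per object
        _position[i] = GLKVector2Add(_position[i],
                                     GLKVector2MultiplyScalar(_velocity[i], dt));
    }
}
@end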
Here is an answer to your question about integrating physics calculations in C with your current Objective-C class. Suppose you have a fragment of C code in your .m file which may look like
static CGPoint point[MAX_OBJ];
static int n_points = 0;
with a corresponding function in plain C, as you suggested, that acts on point to update the object positions when simulating the physical interactions, as in
void tick_world() {
    for (int k = 0; k < n_points; k++) {
        float speed = sqrtf((point[k].x * point[k].x) + (point[k].y * point[k].y));
        ...
    }
}
then your Objective-C class Moving for the moving object could contain a pointer to a particular CGPoint in point, which you would declare in the interface (probably in the corresponding *.h file):
@interface Moving : NSObject {
    ...
    CGPoint *pos;
}
When handling the init message, you can then grab and initialize the next available element in point. If your objects persist throughout the run time, this can be done very simply:
@implementation
...
-(id)initAtX:(float)x0 Y:(float)y0 {
    self = [super init];
    if (self) {
        if (n_points == MAX_OBJ) {
            [self release];
            return nil;
        }
        pos = point + n_points++;
        pos->x = x0;
        pos->y = y0;
    }
    return self;
}
If your Moving objects do not persist, you might want to think of a smart way to recycle slots after destruction. For example, you could initialize the x of every entry in point with NAN and use that as the way to locate a free slot. In your dealloc, you would then set pos->x = NAN.
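A sketch of that recycling variant, reusing the static point array and MAX_OBJ from above (and assuming every point[k].x was initialized to NAN up front):

-(id)initAtX:(float)x0 Y:(float)y0 {
    self = [super init];
    if (self) {
        pos = NULL;
        for (int k = 0; k < MAX_OBJ; k++) {   // first free slot is marked NAN
            if (isnan(point[k].x)) { pos = point + k; break; }
        }
        if (!pos) { [self release]; return nil; }
        pos->x = x0;
        pos->y = y0;
    }
    return self;
}

-(void)dealloc {
    if (pos) pos->x = NAN;                    // hand the slot back
    [super dealloc];
}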
It sounds like you're up against a couple of common problems: 1) fast math in a high-level language, and 2) a meta-problem: whether to take the benefit of others' work (OpenGL, and several of the ideas listed here) in exchange for a steep learning curve as the developer.
Especially for this subject matter, I think the trade is well worth making in favor of using a library. For many libraries (e.g. Eigen), most of the learning curve is about the integration of Objective-C and C++, which you can quickly put behind you.
(As an aside, computing the square of the distance between objects is often sufficient for making comparisons. If that works in your app, you can save cycles by replacing distance(a,b) with distanceSquared(a,b).)
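A possible helper (our own inline function, not a GLKit API), plus the comparison pattern:

// For non-negative distances, d1 < d2 exactly when d1^2 < d2^2,
// so comparisons can skip the sqrtf entirely.
static inline float DistanceSquared(GLKVector2 a, GLKVector2 b) {
    GLKVector2 d = GLKVector2Subtract(a, b);
    return d.x * d.x + d.y * d.y;
}

// e.g. a collision test against radius r:
// if (DistanceSquared(p, q) < r * r) { /* objects touch */ }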
I am building a game using Sprite Kit and I want to gradually increase the difficulty (starting at 1.0) based on the time since starting the game.
Someone suggested that I should use a logarithmic calculation for this, but I'm unsure how to implement it in Objective-C.
- (float)difficulty
{
    timeSinceStart = ???; // I don't know what kind of object this should be to make it play nicely with `log`
    return log(???);
}
Update #1
I know that I need to use the log function, but I'm uncertain what values I need to pass to it.
Objective-C is a superset of the C language, therefore you can use "math.h".
The function that computes the natural logarithm in "math.h" is double log(double x);
EDIT
Since you want the difficulty to increase as a function of time, you would pass the time as the argument to log(double x). How you would use that to calculate and change the "difficulty" is an entirely different question.
If you want to change the shape of the curve, either multiply the expression by a constant, as in 2*log(x) or multiply the parameter by a constant, as in log(2*x). You will have to look at the individual curves to see what will work best for your specific application.
Since log(1.0) == 0 you probably want to do some scaling and translation. Use something like 1.0 + log(1.0 + c * time). At time zero this will give a difficulty of 1.0, and as time advances the difficulty will increase at a progressively slower pace whose rate is determined by c. Small values such as c = 0.01 will give a slow ramp-up, larger values will ramp-up faster.
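Plugged into the skeleton from the question, that might look like this (startTime is assumed to be an NSTimeInterval captured when play begins; c = 0.01 is just a starting point to tune):

#include <math.h>

- (float)difficulty
{
    NSTimeInterval timeSinceStart =
        [NSDate timeIntervalSinceReferenceDate] - self.startTime;
    double c = 0.01;                          // smaller c = slower ramp-up
    return 1.0 + log(1.0 + c * timeSinceStart);
}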
@pjs gave a pretty clear answer. As to how to figure out the time: you probably want the amount of time spent actually playing, rather than the elapsed time since launching the game.
So you will need to track total time played, game after game.
I suggest you create an entry in NSUserDefaults. You can save and load double values to user defaults using the NSUserDefaults methods setDouble:forKey: and doubleForKey:.
I would create an instance variable startPlayingTime of type double.
When you start the game action running, capture the start time using
startPlayingTime = [NSDate timeIntervalSinceReferenceDate];
When the user pauses the game or it exits to the background, use
NSTimeInterval currentPlayTime = [NSDate timeIntervalSinceReferenceDate] - startPlayingTime;
Then read the total time played from user defaults, add currentPlayTime to it, and save it back to user defaults.
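In code, that read-accumulate-write step might look like this (the key name @"totalPlayTime" is just an example):

NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
double totalPlayTime = [defaults doubleForKey:@"totalPlayTime"] + currentPlayTime;
[defaults setDouble:totalPlayTime forKey:@"totalPlayTime"];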
You can then base your game difficulty on
difficulty = log(1 + c * totalPlayTime);
(as explained by pjs above) and pick some appropriate value for c.
In my application I need to determine what plates a user can load on their barbell to achieve a desired weight.
For example, the user might specify that they are using a 45 lb bar and have 45, 35, 25, 10, 5, and 2.5 lb plates to use. For a weight like 115, this is an easy problem to solve, as the result neatly matches a common plate: (115 - 45) / 2 = 35.
So the objective here is to find the largest-to-smallest plate(s) (from a selection) that the user needs to achieve the weight.
My starter method looks like this...
-(void)imperialNonOlympic:(float)barbellWeight workingWeight:(float)workingWeight {
    float realWeight = (workingWeight - barbellWeight);
    float perSide = realWeight / 2;
    .... // lots of inefficient mod and division ....
}
My thought process is to first determine the weight per side: (total weight - weight of the barbell) / 2. Then determine the largest-to-smallest plates needed, and the number of each; e.g. 325 works out to 140 per side, which is 45 * 3 + 5, i.e. 45, 45, 45, 5.
Messing around with fmodf and a couple of other ideas, it occurred to me that there might be an algorithm that solves this problem. I was looking into BFS, and admit that it is above my head, but I'm still willing to give it a shot.
I'd appreciate any tips on where to look, whether algorithms or code examples.
Your problem is called the knapsack problem. You will find a lot of solutions to it, and there are several variants of the problem. It is basically a dynamic programming (DP) problem.
One common approach is to take the largest weight (that is not more than your desired weight), then the largest of the remaining weights, and so on. It's easy. I am adding some more links (Link 1, Link 2, Link 3) so that it becomes clearer. Some of those problems may be hard to understand; skip them and try to focus on the basic knapsack problem. Good luck.. :)
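As a sketch of that greedy idea (it happens to be exact for a standard plate set like 45/35/25/10/5/2.5, though not for arbitrary denominations, which is where the full knapsack/DP machinery comes in):

// Greedy per-side selection: repeatedly take the largest plate that fits.
-(NSArray *)platesForBarbell:(float)barbellWeight workingWeight:(float)workingWeight {
    float perSide = (workingWeight - barbellWeight) / 2.0f;
    NSArray *available = @[@45.0f, @35.0f, @25.0f, @10.0f, @5.0f, @2.5f]; // largest first
    NSMutableArray *plates = [NSMutableArray array];
    for (NSNumber *plate in available) {
        while (perSide >= plate.floatValue - 0.001f) {   // epsilon for float error
            [plates addObject:plate];
            perSide -= plate.floatValue;
        }
    }
    return plates;   // e.g. 325 with a 45 bar -> 140 per side -> 45, 45, 45, 5
}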
Let me know if that helps. :)
First, for those of you who don't know: an anytime algorithm is an algorithm that gets as input the amount of time it can run, and should give the best solution it can find in that time.
Weighted A* is the same as A* with one difference in the f function
(where g is the path cost up to the node, and h is the heuristic estimate from the node to the goal):
Original: f(node) = g(node) + h(node)
Weighted: f(node) = (1 - w) * g(node) + w * h(node)
My anytime algorithm runs Weighted A* with the weight decaying from 1 to 0.5 until it reaches the time limit.
My problem is that most of the time it takes a long time to reach a solution; given something like 10 seconds it usually doesn't find one, while other algorithms like anytime beam search find one in 0.0001 seconds.
Any ideas what to do?
If I were you I'd throw the unbounded heuristic away. Admissible heuristics are much better, in that given a weight value for a solution you've found, you can say it is at most 1/weight times the length of an optimal solution.
A big problem when implementing A* derivatives is the data structures. When I implemented a bidirectional search, just changing from array lists to a combination of hash-augmented priority queues and array lists on demand cut the runtime cost by three orders of magnitude - literally.
The main problem is that most of the papers only give pseudo-code for the algorithm using set logic; it's up to you to actually figure out how to represent the sets in your code. Don't be afraid of using multiple ADTs for a single list, i.e. your open list. I'm not 100% sure on Anytime Weighted A*; I've done other derivatives such as Anytime Dynamic A* and Anytime Repairing A*, though not AWA*.
Another issue is that when you set the weight on the g-value too low, it can sometimes take far longer to find any solution than it would with a higher g-weight. A common pitfall is forgetting to check your closed list for duplicate states, thus ending up in a loop (an infinite one if your g-weight gets reduced to 0). I'd try starting with something reasonably higher than 0 if you're getting quick results with a beam search.
Some pseudo-code would likely help here! Anyhow, these are just my thoughts on the matter; you may have solved it already, and if so, good on you :)
Beam search is not complete, since it prunes unfavorable states, whereas A* search is complete. Depending on the problem you are solving, if incompleteness does not prevent you from finding a solution (usually many correct paths exist from origin to destination), then go with beam search; otherwise, stay with AWA*. However, you can always run both in parallel if there are sufficient hardware resources.