I'm writing an app that functions much like RunKeeper. I'm tracking CLLocations every x seconds to figure out the total distance.
However, if the user moves on a really straight freeway for 10 minutes (Think USA), there is no point in saving 60 tracking points.
What kind of filtering do you use in your apps to reduce the number of track points?
I'm thinking about checking the course delta from the last good position and discarding the new position if the course hasn't changed by more than x degrees. There would of course also be a longer time limit, so that a point still gets recorded every now and then anyway.
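Something along those lines could look like the sketch below. It is only a rough illustration: the `TrackPointFilter` name and the 10-degree, 25 m and 60 s thresholds are placeholders, not tuned values.

```swift
import CoreLocation

// Rough sketch of the filter described above. Thresholds are placeholders.
struct TrackPointFilter {
    private var lastSaved: CLLocation?

    mutating func shouldSave(_ new: CLLocation) -> Bool {
        guard let last = lastSaved else {
            lastSaved = new
            return true
        }

        // course is reported in degrees, or -1 when it is invalid (e.g. stationary)
        var courseChanged = false
        if new.course >= 0 && last.course >= 0 {
            let rawDelta = abs(new.course - last.course)
            let delta = min(rawDelta, 360 - rawDelta)   // handle the 359° -> 1° wrap
            courseChanged = delta > 10
        }

        let farEnough = new.distance(from: last) > 25
        let tooLongAgo = new.timestamp.timeIntervalSince(last.timestamp) > 60

        if (courseChanged && farEnough) || tooLongAgo {
            lastSaved = new
            return true
        }
        return false
    }
}
```

One thing to keep in mind: the running distance total can still be fed with every fix and only the persisted points filtered; otherwise the distance on long curves gets underestimated.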
This question was asked five years ago here: What is the precision of the UITouch timestamp in iOS? I'm wondering whether anyone has any further information, or whether things have moved on since then.
To summarize my very rough understanding: an app gets notified about touch events once per screen refresh cycle, and the refresh rate used to be 60 Hz and may be 120 Hz on some devices.
This would suggest that there are two possibilities.
1. The timestamp coincides with the screen refresh cycle, meaning it effectively has the resolution of the refresh rate (60 Hz or 120 Hz). That is, if you get a touch at 0 milliseconds, the next timestamp you could possibly get from another touch would be at 16 milliseconds on a 60 Hz device or at 8 milliseconds on a 120 Hz device.
2. Alternatively, the screen hardware stores the (more or less) exact time of the tap in a buffer somewhere, and the refresh cycle then picks up all the timestamps that have occurred since the last cycle, but those timestamps could fall anywhere in that period. So you could have a tap at 0 milliseconds, or 5 milliseconds, or 9, or whatever.
Obviously I'd prefer option 2 to be the case, because in my app I want to know the precise time that the user touched the screen, rather than a value rounded to the nearest 16 millisecond multiple.
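One way to check this empirically would be a rough probe along these lines (the class name is made up), logging the delta between successive touch timestamps to see whether they snap to refresh-cycle multiples or fall in between:

```swift
import UIKit

// Logs the gap between successive touch timestamps. If the deltas cluster
// around ~16.7 ms / ~8.3 ms multiples, option 1 applies; if they fall
// anywhere in between, option 2 applies.
class TouchProbeViewController: UIViewController {
    private var lastTimestamp: TimeInterval?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        if let last = lastTimestamp {
            print(String(format: "delta: %.3f ms", (touch.timestamp - last) * 1000))
        }
        lastTimestamp = touch.timestamp
    }
}
```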
Very grateful for any input - thanks!
I am working on a machine learning project where I track people with a Kalman filter tracker. I want to calculate how much time each person appears in the video.
I tried using the following logic:
Suppose a person is present in 5 frames and the video FPS is 15; then the person is in the video for 5 / 15 ≈ 0.33 seconds (frames divided by frames per second).
Note: I have assumed and hard-coded the FPS value in the code. I didn't find any way to get the FPS, because I am only passing individual frames of the video to the tracker.
The problem is that if I hard-code the FPS value, then whenever the FPS changes (and I don't know when that will happen), I have to change the code; otherwise it will give a wrong result.
You don't need to hard-code the FPS, and in fact you don't need the FPS for this at all. I believe your tracker assigns a unique ID to each person detected. Once you have the ID, you can simply start counting the number of seconds for that ID. Once that person moves out of the frame, the tracker ID is lost, you can stop the timer, and you then have the total time the person spent in the frame.
Have a look at this code: https://github.com/mailrocketsystems/AIComputerVision/blob/master/dwell_time_calculation.py
and maybe this video for explanation: https://www.youtube.com/watch?v=qn26XSinYfg
My suggestion is to calculate the elapsed time for processing each captured frame and maintain an accumulator of on-screen time for each person detected. For this type of project, the frame rate depends on the amount of processing you do.
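For illustration, the bookkeeping described in both answers boils down to a per-ID accumulator: on every processed frame, add the real elapsed time since the previous frame to each ID the tracker reports as visible. Here is a minimal sketch (in Swift, with made-up names; the linked example is Python):

```swift
import Foundation

// Minimal per-ID dwell-time accumulator. `update(visibleIDs:)` stands in for
// whatever your Kalman-filter tracker reports each frame; it is not a real API.
final class DwellTimeAccumulator {
    private var onScreenTime: [Int: TimeInterval] = [:]   // tracker ID -> seconds
    private var lastFrameTime = Date()

    /// Call once per processed frame with the IDs the tracker reported.
    func update(visibleIDs: [Int]) {
        let now = Date()
        let elapsed = now.timeIntervalSince(lastFrameTime)  // real time since last frame
        lastFrameTime = now
        for id in visibleIDs {
            onScreenTime[id, default: 0] += elapsed
        }
    }

    func seconds(for id: Int) -> TimeInterval {
        onScreenTime[id] ?? 0
    }
}
```

Because the accumulator uses real elapsed time rather than a frame count, it stays correct even when the effective frame rate varies with processing load.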
I'm working on a leaderboard system for Garry's Mod and I have run into a slight problem to do with one of the statistics I'm tracking.
I am tracking a lot of statistics, including the number of bullets shot and the number of bullets that actually hit, and I am using that information to work out the accuracy of the player like so:
(gunHits / gunShots) * 100
The problem with the way I'm doing it is that people can go to the top of the leaderboards by just logging on and shooting one bullet and hitting someone with it, therefore having an accuracy of 100%.
Is there any way I can get around this?
You could set a minimum amount of shots fired required to be ranked. I think 100 would be a good number.
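For illustration only (sketched here in Swift rather than GLua, and with made-up names), the ranking gate could look like this:

```swift
// Compute a player's accuracy, but leave them unranked until they have
// fired a minimum number of shots. The 100-shot threshold is the
// suggestion above; tune it as you see fit.
let minimumShots = 100

func rankedAccuracy(gunHits: Int, gunShots: Int) -> Double? {
    guard gunShots >= minimumShots else { return nil }  // not enough data to rank
    return Double(gunHits) / Double(gunShots) * 100
}
```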
On a side note, you don't need any parentheses in that expression, since division and multiplication have the same precedence and are already evaluated in the right order. They would only have been necessary if you had written it this way:
100 * (gunHits / gunShots)
I am making an iOS application in which I need to determine whether the person is sitting or standing. I know we can get the height above sea level with the help of CLLocationManager; I wanted to know whether there is any way to detect automatically that the person is sitting or standing. For example, can we get the height of the iPhone above ground level in any way?
This is not possible for the following reasons:
1. The phone can tell you its height above sea level, but the margin of error is larger than the height difference between a sitting and a standing person.
2. Even if point 1 did not apply, and you knew the precise height of the ground at your current location and the additional height of the phone, this would still be meaningless, as it doesn't take into account buildings, the height of the person, their posture, and so forth.
You may have more luck using the motion coprocessor on newer models; you could assume that a standing person moves about more than a sitting person, or something along those lines. Or use accelerometer readings to detect changes of position. But altitude is definitely not the way to go.
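To illustrate the motion-coprocessor idea, here is a rough Swift sketch using CMMotionActivityManager. Note that it only reports stationary / walking / running / automotive, so at best it tells you the user is not sitting while they are moving; it cannot distinguish sitting from standing still.

```swift
import CoreMotion

// Sketch: use the motion coprocessor's activity classification as a rough proxy.
let activityManager = CMMotionActivityManager()

if CMMotionActivityManager.isActivityAvailable() {
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.walking || activity.running {
            print("User is moving — definitely not sitting")
        } else if activity.stationary {
            print("User is stationary — could be sitting or standing")
        }
    }
}
```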
You cannot find out by altitude if a person is standing or sitting.
The accuracy of GPS is much too low; it is at best about 6 m for altitude.
But if you are really clever, you could try other approaches:
- Use the acceleration sensor: a standing person might move a bit more than a sitting one, or move differently. [Sorry, I did not see that user jrturton had already written the same, but it indicates that this approach might work.]
- Sitting persons often type on the keyboard. You can measure that with the accelerometer, using frequency analysis after doing an FFT.
- Walking persons: a person who walks is not sitting. Detect typical walking steps with the accelerometer, or even with an iOS API that is new in iOS 7 (I remember there is a step counter); see the sketch further down.
These are all inexact detections, but they may raise the probability of correctly detecting a sitting person.
If you get that to work, I will have major respect. Post an update if you succeed.
Expect 2.5 to 3.5 months of full-time work to get that to work (in some cases).
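As a rough illustration of the step-counter idea from the list above, something like the following could flag "walking, therefore not sitting". It uses CMPedometer (which superseded the iOS 7 CMStepCounter); this is a sketch, not tested code:

```swift
import CoreMotion

// Treat fresh steps as evidence that the user is not sitting.
let pedometer = CMPedometer()

if CMPedometer.isStepCountingAvailable() {
    pedometer.startUpdates(from: Date()) { data, error in
        guard let data = data, error == nil else { return }
        if data.numberOfSteps.intValue > 0 {
            print("Steps detected — user is walking, not sitting")
        }
    }
}
```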
I want to calculate the total distance covered by a person using Core Location. When the user presses a start button, the app shows the calculated distance on the iPhone screen and also updates it after a certain period of time. The problem is that even when the person hasn't moved at all, it still shows some distance. Can anyone help me?
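That phantom distance usually comes from GPS jitter: successive fixes wobble by a few metres even when the device is stationary, and every wobble gets added to the total. A common mitigation is to ignore fixes with poor horizontal accuracy and movements smaller than the reported accuracy. Below is a minimal Swift sketch of that idea; the class name and the 20 m threshold are illustrative choices, not canonical values.

```swift
import CoreLocation

// Accumulates distance while filtering out low-quality fixes and GPS jitter.
final class DistanceTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastLocation: CLLocation?
    private(set) var totalDistance: CLLocationDistance = 0

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            // Skip invalid or low-quality fixes.
            guard location.horizontalAccuracy >= 0, location.horizontalAccuracy < 20 else { continue }
            if let last = lastLocation {
                let delta = location.distance(from: last)
                // Only count movement larger than the reported noise.
                if delta > location.horizontalAccuracy {
                    totalDistance += delta
                    lastLocation = location
                }
            } else {
                lastLocation = location
            }
        }
    }
}
```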