What are the latitude longitude decimal values - geolocation

I want to develop a geolocation-based app. Currently I am reading the phone's position and getting the longitude and latitude values.
These are, for example, currently:
Lat 50.89808359344173
Lon 4.482628385834053
I have 14 decimal places for the latitude and 15 for the longitude.
What is the meaning of these places?

A man being shown around a natural history museum by a guide sees a fossilised tyrannosaurus skeleton. He asks the guide how old it is, and the guide tells him it's 67 million and 13 years old.
"Wow, how can you be so accurate?"
"Well, I was told it was 67million years old when I started here, and that was 13 years ago".
The precision of your result is similar to this. Or to calculating your taxes on a transaction and realising you need to pay USD 1492.19482019331. The software has done some calculations based on GPS and/or wifi location signals, and that's the answer it came up with, but it isn't really so precise as to be accurate to the nearest atom's width. It's false precision: precision in a result that is greater than the accuracy of that result.
Just how accurate it is will depend on which services are turned on (GPS, or wifi-based only), the available distinctive wifi signals (which make wifi-based location better), whether you are indoors or in a built-up area (which makes GPS less accurate), and the speed you are moving at. It could be better than a few meters, or as bad as 1 km. There's also a difference between absolute and relative accuracy: the position might be, say, 30 m out, and after the phone has moved 100 m still be 30 m out, yet have tracked that relative movement of 100 m to within a meter or so.
The phone might not only be inaccurate, but also inaccurate in trying to judge how inaccurate it is! That is a reason in itself not to round the figures: by how much would you round them? It's also a good general principle to do any rounding at the last possible point; better to have false precision greater than your accuracy than to introduce a rounding error by rounding early and actually lose accuracy further down the line.
If you're storing the positions in a memory-conscious way, or presenting the figures directly, then 5 or 6 decimal places is probably more than enough to be sure you're being more precise than you can really claim to be.
If you're plotting a point on a map, you may as well just feed in the ridiculously over-precise figures you get; it won't hurt anything, and it's simpler than trying to second-guess just how accurate you really are.
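To put numbers on that, here is a rough sketch, assuming the common approximation of about 111,320 m per degree of latitude (longitude degrees shrink with the cosine of the latitude):

```python
# Approximate ground distance represented by each decimal place of latitude.
METERS_PER_DEGREE_LAT = 111_320  # rough average; longitude shrinks with cos(lat)

for places in range(1, 9):
    print(f"{places} decimal places ~ {METERS_PER_DEGREE_LAT * 10**-places:.5f} m")
```

By this yardstick, 5 decimal places is already roughly a meter of ground resolution, finer than typical GPS accuracy.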

They are just the decimal portion of the degrees. It's a lot easier to do calculations with decimal degrees than with degrees, minutes and seconds.
So for example: 32.5° = 32° 30'.
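For illustration, a minimal conversion sketch (the helper name is made up):

```python
def decimal_to_dms(deg):
    """Split decimal degrees into (degrees, minutes, seconds);
    the sign is carried on the degrees component."""
    sign = -1 if deg < 0 else 1
    deg = abs(deg)
    d = int(deg)
    m = int((deg - d) * 60)
    s = (deg - d - m / 60) * 3600
    return sign * d, m, s

print(decimal_to_dms(32.5))  # (32, 30, 0.0), i.e. 32° 30' 0"
```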


How to find the incorrect movement in pose estimation [closed]

I am trying to build a pose-estimation-based model which is capable of identifying incorrect movement of a pose relative to a predefined action.
For example, performing exercises: squats, pull-ups, yoga, etc.
If the user is not doing an action as per the given instructions, I have to find out which points in the pose are wrong.
What I have tried so far:
I built rule-based logic which identifies the direction and angles between two joints/lines and gives corrections.
But the problem with this is that we can't write rules for all the frames in an action sequence, so I am looking for a better solution.
This idea could work if the timing and sequence of the motions is not important. It also assumes that you know what motion is being attempted (though it could be modified into a kind of brute-force classifier as well):
Create a way to record continuous motions in the way you described (angles of indexed joints). Then, in order to train a new motion, collect a "golden set" where that motion is performed as perfectly as possible several times (5-10; one would work fine as well if you can ensure a high-quality sample). Combine those sets (concatenate them; don't average or anything). You can think of each data point as a high-dimensional point, like xyz but with as many dimensions as joint angles you are tracking. If speed/optimization is a concern, you will probably want to sort this data to improve subsequent searching.
It depends on which joints you care about, but if some joints move much more than others, you may want to normalize the angle data per joint (i.e. instead of using the raw values, divide them by that joint's total range over the motion). The benefit is that this keeps one joint with huge motion from overpowering one with much less motion that is still important. Be careful, though, of joints with very little motion, as normalizing them can significantly amplify noise; you should probably just zero out any joint whose range in the motion is below a certain threshold.
Now, when someone is performing the motion, take a live sample of the user data (joint angles, normalized per joint in the same way as the golden data if you performed normalization) and find the "closest" point in your combined golden sample. In the simplest sense, "closeness" can be a high-dimensional distance estimate. Then, by comparing the live data to the closest golden point, you can inform the user in what ways their current pose is inaccurate. Squared distance can be used, since it is only needed for ranking: just take the difference in each dimension, i.e. (angle1Difference, angle2Difference, angle3Difference, ...), and sum the squares: distSq = a1D·a1D + a2D·a2D + a3D·a3D + ...
Note that for a given data point you need to capture all the joints you care about together (independent per-joint data sets are less meaningful, as they could allow the right range of motions but in the wrong order or coordination). A minimal sketch of this approach follows.
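Here is a rough, illustrative sketch of the above in Python/NumPy. Everything in it is an assumption for demonstration: the joint count, the random stand-in data, and all the names (fit_normalizer, closest_golden, etc.); a real implementation would feed in recorded joint angles from your pose estimator.

```python
import numpy as np

def fit_normalizer(golden, min_range=1e-3):
    """Per-joint offset and scale from the golden set; joints with
    almost no motion get an infinite scale so they map to ~0 (noise guard)."""
    golden = np.asarray(golden, dtype=float)      # shape: (frames, joints)
    lo = golden.min(axis=0)
    rng = golden.max(axis=0) - lo
    return lo, np.where(rng >= min_range, rng, np.inf)

def normalize(frames, lo, rng):
    """Scale joint angles by the ranges learned from the golden set."""
    return (np.asarray(frames, dtype=float) - lo) / rng

def closest_golden(live_pose, golden_norm):
    """Index of the nearest golden frame plus per-joint differences.
    Squared distance suffices because it is only used for ranking."""
    diffs = golden_norm - live_pose               # (frames, joints)
    idx = int(np.argmin((diffs ** 2).sum(axis=1)))
    return idx, diffs[idx]

# Usage with stand-in data: 100 golden frames of 4 joint angles (degrees).
golden = np.random.rand(100, 4) * 180
lo, rng = fit_normalizer(golden)
golden_norm = normalize(golden, lo, rng)

live = normalize(np.random.rand(4) * 180, lo, rng)
idx, per_joint_error = closest_golden(live, golden_norm)
print(idx, per_joint_error)  # joints with large |error| are the ones to correct
```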

Inconsistent speed and distance units

When changing the distance and speed units of the road model from km and km/h to m and m/s (and also adjusting the speed value of our vehicle, which is said to use these same units), we get inconsistent results. Using the default units, the vehicle moves really slowly even when given a decent speed of 70 km/h. When using m/s and converting the speed accordingly, the vehicle moves really fast, even though its behaviour should be the same.
We are using the same graph of Leuven, and the same methods to construct it, as in the Taxi example. Upon further inspection, we saw that 1000 is used as the speed for the taxis with the default units (km/h and km, if we are correct).
Did we miss something, or is there something wrong with the units of the parsed graph?
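For reference, the plain arithmetic we'd expect when switching units (just a sanity check, not tied to any particular road-model API):

```python
def kmh_to_ms(v_kmh):
    """1 km/h = 1000 m / 3600 s."""
    return v_kmh * 1000 / 3600

print(kmh_to_ms(70))    # ~19.44 m/s
print(kmh_to_ms(1000))  # ~277.78 m/s, the Taxi example's speed value converted
```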

What is Rmax/RPeak (Ratio) in terms of Supercomputer

I am working on the top500 supercomputer database (http://www.top500.org/).
Rmax is maximum performance; Rpeak is theoretical maximum performance.
Does the ratio of Rmax to Rpeak amount to anything? Like, say, efficiency? Or anything which could say something about a supercomputer?
Could it be something like a Lie factor?
Rmax is determined by the HPL benchmark. Details aren't always published, unfortunately, but in most cases the problem dimension requires a decent fraction of total memory.
Rpeak is determined by multiplying the number of floating-point units (usually vector) per processor, times the processor count, times the number of floating-point instructions that can be issued per second. This is a bit hard to pin down today because of frequency variation.
The ratio can be viewed as an efficiency factor, although it may not be productive to use it for assigning value to systems: 75% of 1000 is the same as 100% of 750, and if they have the same dollar and power costs, what difference does it make?
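As a worked illustration (all numbers here are hypothetical, not taken from any real Top500 entry):

```python
# Hypothetical machine: Rpeak from core count x flops/cycle x clock,
# Rmax as whatever HPL actually measured.
cores = 10_000
flops_per_cycle = 32      # e.g. two 8-wide FMA units per core
clock_ghz = 2.5

rpeak_tflops = cores * flops_per_cycle * clock_ghz / 1000  # GFLOP/s -> TFLOP/s
rmax_tflops = 600.0                                        # measured by HPL

print(f"Rpeak      = {rpeak_tflops:.0f} TFLOP/s")          # 800 TFLOP/s
print(f"Rmax       = {rmax_tflops:.0f} TFLOP/s")
print(f"efficiency = {rmax_tflops / rpeak_tflops:.0%}")    # 75%
```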
I tend to view the combination of Top500, Graph500 and HPCG results as a more robust way to compare systems, but one cannot ignore power and dollar costs if one pays for systems (most users do not, at least directly).

Objective-C normalise dB level from iPhone's built in mic

I am building a dB meter as part of an app I am creating. I have got it receiving the peak and average powers from the mic on my iPhone (values ranging from -60 to 0.4), and now I need to figure out how to convert these power levels into dB levels like the ones in this chart: http://www.gcaudio.com/resources/howtos/loudness.html
Does anyone have any idea how I could do this? I can't figure out an algorithm for the life of me, and it is kind of crucial, as the whole point of the app is to do with real-world audio levels, if that makes sense.
Any help will be greatly appreciated.
Apple's done a pretty good job of making the frequency response of the microphone flat and consistent between devices, so the only thing you'll need to determine is a calibration point. For this you will require a reference audio source and a calibrated sound pressure level meter.
It's worth noting that sound pressure measurements are often made against the A-weighting scale, which is frequency-weighted for the human aural system. In order to measure this, you will need to apply the relevant filter curve to the results taken from the microphone.
Also be aware of the difference between peak and mean (in this case RMS) measurements.
As far as I can tell from looking at the documentation, the "power level" returned from an AVAudioRecorder (assuming that's what you're using – you didn't specify) is already in decibels. See here from the documentation for averagePowerForChannel:
Return Value
The current average power, in decibels, for the sound being recorded. A return value of 0 dB indicates full scale, or maximum power; a return value of -160 dB indicates minimum power (that is, near silence).
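If the goal is an approximate SPL figure like the chart's, the usual approach is a single calibration offset added to the dBFS reading. A sketch of the idea (the offset value is hypothetical; you would determine it by playing a reference tone and comparing the phone's reading against a calibrated SPL meter):

```python
# averagePowerForChannel: reports dBFS: 0 dB is full scale, -160 dB near silence.
# A single calibration point maps dBFS to approximate dB SPL.
CALIBRATION_OFFSET_DB = 90.0  # hypothetical: chosen so phone dBFS + offset = meter SPL

def dbfs_to_spl(dbfs):
    """Approximate sound pressure level from a full-scale-relative reading."""
    return dbfs + CALIBRATION_OFFSET_DB

print(dbfs_to_spl(-30.0))  # ~60 dB SPL under this assumed calibration
```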

When to use geometric vs arithmetic mean?

So I guess this isn't technically a code question, but it's something that I'm sure will come up for other folks as well as myself while writing code, so hopefully it's still a good one to post on SO.
The Google has directed me to plenty of nice lengthy explanations of when to use one or the other as regards financial numbers, and things like that.
But my particular context doesn't fit in, and I'm wondering if anyone here has some insight. I need to take a whole bunch of individual users' votes on how "good" a particular item is. I.e., some number of users each give a particular item a score between 0 and 10, and I want to report on what the 'typical' score is. What would be the intuitive reasons to report the geometric and/or arithmetic mean as the typical response?
Or, for that matter, would I be better off reporting the median instead?
I imagine there's some psychology involved in what the "best" method might be...
Anyway, there you have it.
Thanks!
Generally speaking, the arithmetic mean will suffice. It is much less computationally intensive than the geometric mean (which involves taking an n-th root).
As for the psychology involved, the geometric mean is never greater than the arithmetic mean, so arithmetic is the best choice if you'd prefer higher scores in general.
The median is most useful when the data set is relatively small and the chance of a massive outlier relatively high. Depending on how much precision these votes can take, the median can sometimes end up being a bit arbitrary.
If you really, really want the most accurate answer possible, you could calculate the arithmetic-geometric mean. However, this involves calculating both arithmetic and geometric means repeatedly, so it is very computationally intensive in comparison.
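For a quick feel for how the three options behave on vote-style data (the scores are made up):

```python
import statistics

scores = [7, 8, 8, 9, 2]  # one low "outlier" vote

print(statistics.mean(scores))            # arithmetic: 6.8
print(statistics.geometric_mean(scores))  # geometric: ~6.04, pulled down by the 2
print(statistics.median(scores))          # median: 8, ignores the outlier
# Note: geometric_mean fails on a 0 score, a real concern on a 0-10 scale.
```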
You want the arithmetic mean, since you aren't measuring an average rate of change or anything like that.
Arithmetic mean is correct.
Your scale is artificial:
It is bounded, from 0 to 10
8.5 is intuitively between 8 and 9
But for other scales, you would need to consider the correct mean to use.
Some other examples:
In counting money, it has been argued that wealth has logarithmic utility. So the geometric mean of Bill Gates' wealth and that of a bum in the inner city would be a moderately successful business person. (The arithmetic average would give you Larry Page.)
In measuring sound level, decibels already normalize the effect, so you can take the arithmetic average of decibels.
But if you are measuring volume in watts, then use the quadratic mean (RMS).
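A minimal sketch of the quadratic mean (RMS) next to the arithmetic mean, on made-up samples:

```python
import math

def rms(xs):
    """Quadratic mean: square, average, then square root."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

samples = [0.5, 1.0, 2.0]
print(rms(samples))                 # ~1.323, weights the large value more
print(sum(samples) / len(samples))  # ~1.167, plain arithmetic mean
```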
The answer depends on the context and your purpose. Percentage changes were mentioned as a good time to use the geometric mean. I use the geometric mean when calculating antennas and frequencies, since the percentage change matters more than the middle of the frequency range or the average size of the antenna.

If you have wildly varying numbers, especially if most are similar but one or two are "flyers" (far from the range of the others), the geometric mean will "smooth" the results (it won't let the outliers shift the result more than they should). This method is used to calculate bullet group sizes (the "flyer" was probably human error, not the equipment, so the plain average is "unfair" in that case).

Another variation is the root mean square (RMS) method: square the numbers, take the mean of those squares, and then take the square root of the result (unlike the geometric mean, this weights large values more heavily). It is often used in electrical calculations, and most electrical meters are calibrated in RMS, not average readings.

Hope this helps a little. Here is a web site that explains it pretty well: standardwisdom.com
