Can I detect the force or pressure exerted by a user's (musician's) finger?
Not yet.
There's a technical demo showing how to do it. See http://tenonedesign.com/blog/pressure-sensitive-drawing-on-ipad/ . Hopefully something comes out of that. :)
Yes, you can detect finger area, which is similar. Really! You just don't get point pressure, which requires a different sensor technology.
You can get a number that's the major radius of your finger in millimeters for every finger individually. It's essentially a processed number related to the number of pixels you are covering with the finger. Unfortunately, I haven't been able to get a straight answer on whether it's a problem to ship this way, so I have to ship with it disabled. But it works. It returns a value from 7.0 to about 11.0; it varies wildly, so you might want to low-pass filter changes to this value.
// pathMajorRadius is a private UITouch key, so read it defensively via key-value coding.
float vf = 10.0;   // fall back to a mid-range radius if the key isn't there
id valFloat = [thisTouch valueForKey:@"pathMajorRadius"];
if (valFloat != nil)
{
    vf = [valFloat floatValue];
}
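If you do want to low-pass filter it as suggested above, a minimal sketch is an exponential moving average (smoothedRadius here is an instance variable you keep around, and the 0.2 smoothing factor is an arbitrary choice):

// Blend a little of each new reading into a running average to tame the jitter.
static const float kSmoothing = 0.2f;   // closer to 0 = smoother, closer to 1 = more responsive
smoothedRadius = kSmoothing * vf + (1.0f - kSmoothing) * smoothedRadius;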
As an iPad developer, it drives me COMPLETELY insane that this has worked since the iPad shipped, works now in iOS4.2, and still doesn't appear to be sanctioned. This feature is a basic requirement for many classes of application if you want something that's more than a $2 toy.
You cannot detect pressure, but you can detect velocity of movement (distance/time), and you could establish a linear relationship between velocity and force, and hence volume. You could make a bell ring louder, for example, by swiping your finger vigorously and rapidly across the bell, and quietly with a gentle, short, slow stroke. Probably would work OK with harp strings.
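For what it's worth, a rough sketch of that idea, assuming a view controller with an AVAudioPlayer property called player and a lastTimestamp ivar (both placeholder names of mine), and an arbitrary maxSpeed for the mapping:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint previous = [touch previousLocationInView:self.view];
    CGPoint current  = [touch locationInView:self.view];
    NSTimeInterval dt = touch.timestamp - self.lastTimestamp;   // also set lastTimestamp in touchesBegan:
    self.lastTimestamp = touch.timestamp;
    if (dt <= 0) return;

    CGFloat distance = hypot(current.x - previous.x, current.y - previous.y);
    CGFloat speed = distance / dt;                       // points per second
    CGFloat maxSpeed = 2000.0;                           // clamp ceiling, tune to taste
    self.player.volume = MIN(speed / maxSpeed, 1.0);     // linear velocity -> volume
}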
Done. See my answer in Tap pressure strength detection using accelerometer
Related
How to recognise in iOS which finger was used for a tap, e.g. to produce something like:
Left hand - Thumb: off, Index: on, Middle: on, Ring: off, Pinky: off
The short answer is that it cannot be done.
It is not something Apple could or would provide either, as their code is designed for an extremely broad audience and finger size varies hugely: a 7 ft person may have enormous fingertips while a young child has very small ones. Also, some users tap with their thumbs and others use their index fingers.
IF, and it is a big IF, you wished to create this yourself, you would have to store a profile for each user up front: ask them to touch a calibration point on the screen with every finger and record those touches. That assumes they touch in the same manner almost 100% of the time (soft touches cover less area, so it would be very difficult to devise an algorithm that recognises them consistently), and it assumes the screen is modern enough to recognise and measure the specific contact area accurately in the first place.
Is it possible to calculate small distances with CoreMotion?
For example, a user moves his iOS device up and down or left and right while holding it facing him (in landscape).
EDIT
Link as promised...
https://www.youtube.com/watch?v=C7JQ7Rpwn2k position stuff starts at about 23 minutes in.
His summary...
The best thing to do is to try and not use position in your app.
There is a video that I will find to show you. But the short answer... No. The margin for error is too great, and the integration you have to do (twice, from acceleration to velocity to position) just amplifies that error.
At best you will end up with the device telling you it is slowly moving in one direction all the time.
At worst it could think it's hurtling around the planet.
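To make the problem concrete, here is roughly what the naive approach looks like with CMMotionManager, integrating userAcceleration twice (one axis only; the update interval and the fixed dt are my own simplifications, a real attempt would diff the motion timestamps):

#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 100.0;

__block double vx = 0, px = 0;   // velocity (m/s) and position (m) along x
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (motion == nil) return;
    double ax = motion.userAcceleration.x * 9.81;           // g -> m/s^2
    double dt = motionManager.deviceMotionUpdateInterval;   // nominal step
    vx += ax * dt;   // first integration: acceleration -> velocity
    px += vx * dt;   // second integration: velocity -> position
    // Any sensor noise or bias ends up in vx and never goes away,
    // so px drifts steadily even when the device is sitting still.
}];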
2020 Update
So, iOS has since added the Measure app, which does what the OP wanted. It uses a combination of the accelerometer, gyroscope and magnetometer in the phone, along with ARKit, to get the external reference that I was talking about in this answer.
I'm not 100% certain, but if you wanted to do something like the OP was asking, you should be able to dig into ARKit and find some APIs in there that do what you want.
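I haven't dug into it exhaustively myself, but something along these lines should give you the device's position relative to the session origin, assuming a running ARSession with an ARWorldTrackingConfiguration and your class set as its delegate:

#import <ARKit/ARKit.h>

// ARSessionDelegate callback: the camera transform's last column is the
// device position (in metres) relative to where the session started.
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
{
    simd_float4x4 transform = frame.camera.transform;
    NSLog(@"Device position: x=%.3f y=%.3f z=%.3f",
          transform.columns[3].x, transform.columns[3].y, transform.columns[3].z);
}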
👍🏻
I want a lower level representation of the touch pattern. If someone lays their hand on the screen I want to see the hand shape. Is this possible in iOS?
As asked, the answer is no. You might be able to do something by fetching all of the touch events in your touchesBegan:withEvent: method, but even that is going to be a rough guess at best, as touch reporting is very limited compared to taking a palm print (I don't have the technical details in front of me, but it's on the order of 10 simultaneous touches).
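If you want to see how far that rough guess gets you, something like this collects every active touch and its reported contact radius (UITouch's majorRadius has been public API since iOS 8; you also need multipleTouchEnabled set on the view):

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in event.allTouches) {
        CGPoint location = [touch locationInView:self.view];
        CGFloat radius = touch.majorRadius;   // rough contact-patch radius in points
        NSLog(@"Touch at (%.1f, %.1f), radius %.1f", location.x, location.y, radius);
    }
    // At best you get a handful of coarse ellipses, nothing like a palm print.
}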
I am making an iOS application in which I need to determine whether the person is sitting or standing. I wanted to know if there is any way to detect this automatically. We can get the height above sea level with the help of CLLocationManager, so can we get the height of the iPhone above ground level in any way?
This is not possible, for the following reasons:
1. The phone can tell you its height above sea level, the accuracy of which has a larger margin of error than the difference between a sitting and a standing person.
2. Even if 1. did not apply, and you knew the precise height of the ground at your current location and the additional height of the phone, this would still be meaningless, as it doesn't take into account buildings, the height of the person, their posture and so forth.
You may have more luck using the motion coprocessor on newer models; you could assume that a standing person moves about more than a sitting one, or something. Or use accelerometer readings to detect changes of position. But altitude is definitely not the way to go.
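If you go the coprocessor route, CMMotionActivityManager (iOS 7+) is the relevant API. It only classifies stationary / walking / running / automotive, so the sitting-versus-standing guess is still entirely your own heuristic; a minimal sketch:

#import <CoreMotion/CoreMotion.h>

CMMotionActivityManager *activityManager = [[CMMotionActivityManager alloc] init];
if ([CMMotionActivityManager isActivityAvailable]) {
    [activityManager startActivityUpdatesToQueue:[NSOperationQueue mainQueue]
                                     withHandler:^(CMMotionActivity *activity) {
        if (activity.walking || activity.running) {
            NSLog(@"Moving - certainly not sitting");
        } else if (activity.stationary) {
            NSLog(@"Stationary - could be sitting or standing");
        }
    }];
}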
You cannot find out by altitude whether a person is standing or sitting.
GPS accuracy is much too low; it is at best about 6 m for altitude.
But if you are really clever you could try other approaches:
- Use the acceleration sensor: a standing person might move a bit more than a sitting one, or moves differently. [Sorry, I did not see that user jrturton had already written the same, but this indicates that it might work.]
- Sitting persons often type on the keyboard. You can measure that with the accelerometer, by frequency analysis using an FFT.
- Walking persons: a person that walks does not sit. Detect typical walking steps with the accelerometer, or even with an iOS API that is new in iOS 7 (I remember there is a step counter; see the sketch after this answer).
None of these are accurate detections, but they may raise the probability of detecting a sitting person.
If you get that to work, I will have major respect. Post an update if you succeed.
Expect 2.5 to 3.5 full-time working months to get that to work (in some cases).
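For completeness, the step-counter approach mentioned above is only a few lines with CMPedometer (iOS 8; iOS 7 shipped the similar CMStepCounter). It only tells you when the user is definitely walking, i.e. not sitting:

#import <CoreMotion/CoreMotion.h>

CMPedometer *pedometer = [[CMPedometer alloc] init];
if ([CMPedometer isStepCountingAvailable]) {
    [pedometer startPedometerUpdatesFromDate:[NSDate date]
                                 withHandler:^(CMPedometerData *data, NSError *error) {
        if (data.numberOfSteps.integerValue > 0) {
            NSLog(@"Steps detected - the user is walking, not sitting");
        }
    }];
}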
After coming across this question, I am concerned that there will not be an answer, but I will hope anyway.
I have set up a few geofences (most small and one large). I am using the simulator and have logged the radius of the large CLRegion, which tells me the radius is 10881.98 m around a certain coordinate, but when I simulate the location to 11281.86 m away from that same coordinate, it does not trigger the locationManager:didExitRegion: delegate method for the large region.
While the large region will not trigger locationManager:didExitRegion:, I have confirmed that the smaller regions will trigger the delegate method every time. Is there a reason why this is not firing? Is there a distance buffer around a region? Is it documented somewhere?
Any help would be great.
EDIT: From testing, I need to cut down the radius by around 45.28% in order to have the geofence trigger. Obviously this is not a great solution, as it is very imprecise and it goes against the whole idea of geofencing.
My guess is that this is an issue unique to the simulator. While CLRegion does not technically have a buffer or padding, the OS takes substantially longer to determine you have physically left the geofence area, and on fences of that size I would imagine it takes longer still. On smaller regions of 100-200 m, I've seen it take several minutes of driving, and easily 300-400 m of travel, before triggering an event. From what an Apple engineer told me at WWDC 2013, the OS takes its time in determining that you left. It is also harder for the system to determine you left because of its reliance on cell tower triangulation and known Wi-Fi networks; it needs to go well beyond the known networks before it can safely trigger the exit event.
I know it isn't an exact answer, but hopefully you'll understand a bit more how they work under the hood and what Apple's expectation of them is. Good luck.
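For reference, this is roughly the monitoring setup being discussed, with a check against CLLocationManager's maximumRegionMonitoringDistance, since a radius above that limit can cause monitoring to fail (the coordinate, radius and identifier here are placeholders):

#import <CoreLocation/CoreLocation.h>

CLLocationManager *manager = [[CLLocationManager alloc] init];
manager.delegate = self;   // implements CLLocationManagerDelegate

CLLocationCoordinate2D center = CLLocationCoordinate2DMake(37.331, -122.031);
CLLocationDistance radius = MIN(10881.98, manager.maximumRegionMonitoringDistance);

CLCircularRegion *region = [[CLCircularRegion alloc] initWithCenter:center
                                                             radius:radius
                                                         identifier:@"bigFence"];
region.notifyOnExit = YES;
[manager startMonitoringForRegion:region];

// Exits are then delivered (eventually) to:
// - (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region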