Getting strange magnetometerData in iPhone when programming iOS app with Swift - ios

I am programming an iOS app which reads data from the magnetometer. When I run the app on two iPhones, they produce strange data.
(All data are in [x, y, z] notation.) An iPhone 5S reports about [100, 10, -100] and an iPhone 6S reports about [150, 225, -700]. The values shift by about 10% across repeated readings while holding the phone still, and they change little when I turn the phones.
However, the real magnetic field should be about [0, -30, -30] (measured by an app).
Why am I getting this strange data? (I also measured data from the accelerometer, and that data is correct.)
Here is my project's source code: https://github.com/lxylxy123456/FGFS-Controller/
What I did is basically this:
import CoreMotion

let motionManager = CMMotionManager()
motionManager.startMagnetometerUpdates()

if let magnetometerData = motionManager.magnetometerData {
    mx = magnetometerData.magneticField.x
    my = magnetometerData.magneticField.y
    mz = magnetometerData.magneticField.z
}
Mx.text = Float(mx).description
My.text = Float(my).description
Mz.text = Float(mz).description

The magnetometerData property is raw data, uncalibrated for the device's internal bias as well as for external influences (i.e. nearby metal), so it is essentially meaningless on its own; there is basically never a reason to use these values directly.
Use CMDeviceMotion's magneticField at the very least, which is calibrated. Even better, ask for information that is germane to your real needs: if you want to know the device's heading, ask for that; if you want to know the device's orientation with reference to magnetic north, ask for that.
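As a minimal sketch of the calibrated route (the update interval and reference frame here are just reasonable defaults, not anything your project prescribes):

import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 0.1
// .xMagneticNorthZVertical ties the attitude frame to magnetic north,
// so the calibrated field and the heading are directly usable together.
motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { motion, _ in
    guard let motion = motion else { return }
    let field = motion.magneticField                          // CMCalibratedMagneticField
    if field.accuracy != .uncalibrated {
        print(field.field.x, field.field.y, field.field.z)    // microteslas, bias-corrected
        print(motion.heading)                                 // degrees from magnetic north (iOS 11+)
    }
}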

Related

Marker based initial positioning with ARCore/ARKit?

Problem situation: creating AR visualizations always at the same place (on a table) in a comfortable way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.
I'm wondering if there is a way to implement those steps:
Detect marker on the table
Use the position of the marker as the initial position of the AR-Visualization and go on with SLAM-Tracking
I know there is something like a Marker-Detection API included in the latest build of the Tango SDK, but that technology is limited to a small number of devices (two, to be exact...).
Best regards, and thanks in advance for any ideas.
I am also interested in that topic. I think the true power of AR can only be unleashed when paired with environment understanding.
I think you have two options:
Wait for the new Vuforia 7 to be released; supposedly it is going to support visual markers with ARCore and ARKit.
Engage CoreML / computer vision - in theory it is possible, but I haven't seen many examples. I think it might be a bit difficult to start with (e.g. building and calibrating the model).
However, Apple have got it sorted:
https://youtu.be/E2fd8igVQcU?t=2m58s
If using Google Tango, you can implement this using the built-in Area Description File (ADF) system.
The system has a holding screen and you are told to "walk around". Within a few seconds, you can relocalise to an area the device has previously been. (or pull the information from a server etc..)
Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta), which should come to ARCore. As far as I understand, it will allow you to localise a specific location using the camera feed against a global shared map of all scanned locations. I think, when released, it will try to fill the gap of an AR-Cloud-type system, which will solve these problems for regular developers.
See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview
The general problem of relocalising to a space using only pre-knowledge of the space and the camera feed is solved in academia and in other AR offerings (HoloLens, etc.); markers/tags aren't required.
I'm unsure, however, which other commercial systems provide this feature.
This is what I got so far for ARKit:
@objc func tap(_ sender: UITapGestureRecognizer) {
    let touchLocation = sender.location(in: sceneView)
    let hitTestResult = sceneView.hitTest(touchLocation, types: .featurePoint)
    if let hitResult = hitTestResult.first {
        if first == nil {
            // First tap: point A
            first = SCNVector3Make(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z)
        } else if second == nil {
            // Second tap: point B
            second = SCNVector3Make(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z)
        } else {
            // Third tap: point C, then place and orient the node
            third = SCNVector3Make(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z)
            let x2 = first!.x
            let z2 = -first!.z
            let x1 = second!.x
            let z1 = -second!.z
            let z3 = -third!.z
            // Slope of AB in the x-z plane gives the rotation about the y axis
            let m = (z1 - z2) / (x1 - x2)
            var a = atan(m)
            if x1 < 0 && z1 < 0 {
                a = a + (Float.pi * 2)
            } else if x1 > 0 && z1 < 0 {
                a = a - (Float.pi * 2)
            }
            sceneView.scene.rootNode.addChildNode(yourNode)
            let rotate = SCNAction.rotateBy(x: 0, y: CGFloat(a), z: 0, duration: 0.1)
            yourNode.runAction(rotate)
            yourNode.position = first!
            // Use C to decide whether the node needs to be flipped by pi
            if z3 - z1 < 0 {
                let flip = SCNAction.rotateBy(x: 0, y: CGFloat.pi, z: 0, duration: 0.1)
                yourNode.runAction(flip)
            }
        }
    }
}
The theory is:
Mark three points A, B, C such that AB is perpendicular to AC. Tap the points in the order A-B-C.
Find the angle of AB relative to x = 0 in the ARSceneView, which gives the required rotation for the node.
Any one of the points can be referenced to calculate the position at which to place the node.
From C, determine whether the node needs to be flipped.
I am still working on some exceptions that need to be handled.
At the moment, both ARKit 3.0 and ARCore 1.12 have all the necessary API tools to fulfil almost any marker-based task for precise positioning of a 3D model.
ARKit
Right out of the box, ARKit can detect 3D objects and place ARObjectAnchors in a scene, as well as detect images and use ARImageAnchors for accurate positioning. The main ARWorldTrackingConfiguration class includes both instance properties – .detectionImages and .detectionObjects (see the sketch at the end of this ARKit section). It also builds on indispensable features from several other frameworks:
CoreMotion
SceneKit
SpriteKit
UIKit
CoreML
Metal
AVFoundation
In addition to the above, ARKit 3.0 integrates tightly with the brand-new RealityKit module, which helps implement multiuser connectivity, lists of ARAnchors, and shared sessions.
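A minimal configuration sketch (the asset-catalog group names "Markers" and "Objects" and the sceneView parameter are placeholders, not anything ARKit prescribes):

import ARKit

func runDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    // Known 2D markers, e.g. a printed image lying on the table
    if let images = ARReferenceImage.referenceImages(inGroupNamed: "Markers", bundle: nil) {
        configuration.detectionImages = images
    }
    // Scanned 3D reference objects (iOS 12+)
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "Objects", bundle: nil) {
        configuration.detectionObjects = objects
    }
    sceneView.session.run(configuration)
}

// ARSCNViewDelegate: when a marker is recognised, ARKit adds an ARImageAnchor or ARObjectAnchor,
// whose node gives you a world-space pose to hang your visualization on.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARImageAnchor || anchor is ARObjectAnchor {
        // attach your content to `node` here
    }
}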
ARCore
Although ARCore has a feature called Augmented Images, the framework has no built-in machine-learning algorithms to help detect real-world 3D objects, but Google's ML Kit framework does. So, as an Android developer, you can use both frameworks at the same time to precisely auto-composite a 3D model over a real object in an AR scene.
It is worth recognizing that ARKit 3.0 has a more robust and advanced toolkit than ARCore 1.12.

iPhone X TrueDepth image analysis and CoreML

I understand that my question is not directly related to programming itself and looks more like research, but probably someone can advise here.
I have an idea for an app: the user takes a photo, and the app analyzes it, cuts out everything except the required object (a piece of clothing, for example), and saves it in a separate image. Until recently this was a very difficult task, because the developer would have to create and train a pretty good neural network. But now that Apple has released the iPhone X with a TrueDepth camera, half of the problem can be solved. As I understand it, the developer can remove the background much more easily, because the iPhone will know where the background is located.
So only several questions left:
I. What is the format of photos taken by the iPhone X TrueDepth camera? Is it possible to create a neural network that can use the depth information from the picture?
II. I've read about CoreML and tried some examples, but it's still not clear to me how the following behaviour can be achieved with an external neural network that was imported into CoreML:
Neural network gets an image as an input data.
NN analyzes it, finds required object on the image.
NN returns not only the determined type of object, but also the cropped object itself or an array of coordinates/pixels of the area that should be cropped.
Application gets all required information from NN and performs necessary actions to crop an image and save it to another file or whatever.
Any advice will be appreciated.
OK, your question is actually directly related to programming :)
Ad I. The format is HEIF, but you access the image data (if you are developing an iPhone app) through iOS APIs, so you can easily get the bitmap as a CVPixelBuffer.
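For the depth part specifically, a small sketch of the kind of access I mean, assuming you capture with AVCapturePhotoOutput and enable depth delivery (the disparity format below is just one reasonable choice):

import AVFoundation
import CoreVideo

// photo: an AVCapturePhoto delivered with depth data enabled
func depthMap(from photo: AVCapturePhoto) -> CVPixelBuffer? {
    guard let depthData = photo.depthData else { return nil }
    // Normalise to 32-bit disparity so the values are easy to feed to a model
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    return converted.depthDataMap
}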
Ad II.
1. Neural network gets an image as an input data.
As mentioned above, you want to get your bitmap first, so create a CVPixelBuffer. Check out this post for an example. Then you use the CoreML API. You want to use the MLFeatureProvider protocol. An object that conforms to it is where you put your vector data, with an MLFeatureValue under a key name picked by you (like "pixelData").
import CoreML

class YourImageFeatureProvider: MLFeatureProvider {
    let imageFeatureValue: MLFeatureValue
    var featureNames: Set<String> = []

    init(with imageFeatureValue: MLFeatureValue) {
        featureNames.insert("pixelData")
        self.imageFeatureValue = imageFeatureValue
    }

    func featureValue(for featureName: String) -> MLFeatureValue? {
        guard featureName == "pixelData" else {
            return nil
        }
        return imageFeatureValue
    }
}
Then you use it like this; the feature value is created with the init(pixelBuffer:) initializer on MLFeatureValue:
let imageFeatureValue = MLFeatureValue(pixelBuffer: yourPixelBuffer)
let featureProvider = YourImageFeatureProvider(imageFeatureValue: imageFeatureValue)
Remember to crop/scale the image before this operation so that your network is fed a vector of the proper size.
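For instance, a minimal scaling sketch, assuming you start from a UIImage and your model expects a 224x224 input (the size is a placeholder for whatever your model actually declares):

import UIKit

// Scale the UIImage to the model's input size before converting it to a CVPixelBuffer
func resized(_ image: UIImage, to size: CGSize = CGSize(width: 224, height: 224)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}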
NN analyzes it, finds required object on the image.
Use the prediction function on your CoreML model:
do {
    let outputFeatureProvider = try yourModel.prediction(from: featureProvider)
    // success! your output feature provider has your data
} catch {
    // your model failed to predict, check the error
}
NN returns not only the determined type of object, but also the cropped object itself or an array of coordinates/pixels of the area that should be cropped.
This depends on your model and whether you imported it correctly. Assuming you did, you access the output data by checking the returned MLFeatureProvider (remember this is a protocol, so you would have to implement another one similar to what I made for you in step 1, something like YourOutputFeatureProvider), and there you have a bitmap and the rest of the data your NN spits out.
Application gets all required information from NN and performs necessary actions to crop an image and save it to another file or whatever.
Just reverse step 1, going from MLFeatureValue -> CVPixelBuffer -> UIImage. There are plenty of questions on SO about this, so I won't repeat the answers.
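Still, a hedged sketch of that reverse path (the output feature name "outputPixels" is an assumed placeholder for whatever your model actually exposes):

import CoreML
import CoreImage
import UIKit

func image(from output: MLFeatureProvider, featureName: String = "outputPixels") -> UIImage? {
    // MLFeatureValue -> CVPixelBuffer
    guard let pixelBuffer = output.featureValue(for: featureName)?.imageBufferValue else {
        return nil
    }
    // CVPixelBuffer -> CGImage -> UIImage
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}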
If you are a beginner, don't expect to have results overnight, but the path is here. For an experienced dev I would estimate several hours of work to get this done (plus model training time and porting it to CoreML).
Apart from CoreML (maybe you'll find your model too sophisticated and you won't be able to port it to CoreML), check out Matthijs Hollemans' GitHub (very good resources on different ways of porting models to iOS). He is also around here and knows a lot about the subject.

How do I find the required maxima in acceleration data obtained from an iPhone?

I need to find the number of times the accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using Core Motion's device-motion updates. While the data was being recorded, I shook the phone 9 times (each extremity being one of the highest points of acceleration).
I have marked the 18 (i.e. 9 * 2) times the acceleration attained a maximum with red boxes on the plot.
But, as you can see, there are some local maxima that I do not want to consider. Can someone point me towards an idea that will help me detect only the maxima that matter to me?
Edit: I think I have to use a low-pass filter. But how do I implement this in Swift? How do I choose the cut-off frequency?
Edit 2:
I implemented a low-pass filter and passed the raw motion data through it, obtaining the graph shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can still be observed. I'll work in depth with the filter and probably fix it.
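For reference, a minimal sketch of this kind of low-pass filter (a simple exponential smoother; alpha stands in for the cut-off choice and needs tuning against the sample rate):

// Simple exponential low-pass filter: smaller alpha = stronger smoothing / lower cut-off
struct LowPassFilter {
    let alpha: Double
    var value: Double? = nil   // last filtered sample

    mutating func filter(_ sample: Double) -> Double {
        let smoothed = alpha * sample + (1 - alpha) * (value ?? sample)
        value = smoothed
        return smoothed
    }
}

// Usage: feed each acceleration magnitude through the filter as it arrives
var filter = LowPassFilter(alpha: 0.1)
let rawSamples: [Double] = []                        // magnitudes from the motion stream
let smoothed = rawSamples.map { filter.filter($0) }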
Instead of trying to find the maxima, I would look for cycles. In particular, note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudocode. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like
cycles = []          // list of pairs
cstart = -1
cend = -1
v_threshold = 1.8    // completely guessing these figures looking at the plot
dv_threshold = 0.01

for i in v:
    if cstart < 0 and
       v[i] > v_threshold and
       dv[i] < dv_threshold then:
        // cycle is starting here
        cstart = i
    else if cstart > 0 and
            v[i] < v_threshold and
            dv[i] < dv_threshold then:
        // cycle ended
        cend = i
        cycles.add(pair(cstart, cend))
        cstart = -1
        cend = -1
    end if
Now, you note in the comments that the user should be able to shake with different force and you should still be able to recognise the motion. I would start with a simple 'hard-coded' case like the one above and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will always need one. However, from the data you show, I strongly suggest at least limiting yourself to looking at the minima rather than the maxima.
Also: the code I suggested is written assuming you have the full data set, whereas you will want to run this in real time. That's no problem; the idea will still work, but you'll have to code it somewhat differently (see the Swift sketch below).
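Since the pseudocode above isn't Swift, here is a minimal Swift sketch of the same cycle-counting idea, with the thresholds still just guesses read off the plot:

// Find (start, end) index pairs of cycles: the signal rises above vThreshold to start a cycle
// and drops back below it to end one, mirroring the pseudocode above.
func countCycles(in v: [Double],
                 vThreshold: Double = 1.8,
                 dvThreshold: Double = 0.01) -> [(start: Int, end: Int)] {
    guard v.count > 1 else { return [] }
    var cycles: [(start: Int, end: Int)] = []
    var cstart = -1

    for i in 1..<v.count {
        let dv = v[i] - v[i - 1]                      // simple backward difference
        if cstart < 0, v[i] > vThreshold, dv < dvThreshold {
            cstart = i                                // cycle is starting here
        } else if cstart > 0, v[i] < vThreshold, dv < dvThreshold {
            cycles.append((start: cstart, end: i))    // cycle ended
            cstart = -1
        }
    }
    return cycles
}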

How to get user's step count using accelerometer data?

I want to calculate the user's steps (like a pedometer). I know that with the iPhone 5s, 6 and 6+ we can use the CMStepCounter or CMPedometer class (which uses the devices' M7 chip), but the iPhone 5 and lower don't have the M7 chip, so we can't use those Core Motion classes. By searching all over the internet I came to know that we can use the accelerometer sensor for this purpose, but after spending a lot of time I'm still not able to make an accurate algorithm that works.
Edit 2: After spending several days searching Google and trying a lot of things, I'm still unable to find a working algorithm for counting user steps with the accelerometer.
Can anybody out there help me?
CMMotionManager is what you are looking for if you are using later versions of iOS.
However, if you want to continue with iOS 5 or lower, you need to use the following, although it is deprecated:
UIAccelerometer * accelerometer = [UIAccelerometer sharedAccelerometer];
accelerometer.delegate = self;
The method where you can get the x, y, z values is:
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // Check your x, y, z values to find a step ...
}
If you need to know the logic behind a step counter, you can search and read about it on Google :)
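If you do want to roll your own on the accelerometer, here is a minimal sketch of the naive approach with CMMotionManager (the threshold and debounce interval are guesses you would have to tune):

import Foundation
import CoreMotion

final class NaiveStepCounter {
    private let motionManager = CMMotionManager()
    private var lastStepDate = Date.distantPast
    private(set) var steps = 0

    // Count a step whenever the acceleration magnitude spikes above a threshold,
    // with a short debounce so one swing isn't counted several times.
    func start(threshold: Double = 1.2, minInterval: TimeInterval = 0.3) {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 1.0 / 50.0
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            if magnitude > threshold, Date().timeIntervalSince(self.lastStepDate) > minInterval {
                self.steps += 1
                self.lastStepDate = Date()
            }
        }
    }

    func stop() {
        motionManager.stopAccelerometerUpdates()
    }
}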

Decoding the CLLocationAccuracy const's

The following are listed in CLLocation.h, but in my experience the names are deceiving. They were possibly originally thought up to serve two purposes: 1. to describe the accuracy of the location returned, but also 2. to set how hard the location manager works, specifically what is enabled (GPS and how many satellite channels, how hard the Wi-Fi works, triangulation, etc.).
extern const CLLocationAccuracy kCLLocationAccuracyBestForNavigation; // (raw value: -2)
extern const CLLocationAccuracy kCLLocationAccuracyBest; // (raw value: -1)
extern const CLLocationAccuracy kCLLocationAccuracyNearestTenMeters; // (raw value: 10)
extern const CLLocationAccuracy kCLLocationAccuracyHundredMeters; // (raw value: 100)
extern const CLLocationAccuracy kCLLocationAccuracyKilometer; // (raw value: 1000)
extern const CLLocationAccuracy kCLLocationAccuracyThreeKilometers; // (raw value: 3000)
I would love to take a look at CLLocation.m, but as that is not likely to happen any time soon, does anyone have any field testing showing what they think is going on with these different modes?
i.e. kCLLocationAccuracyBest = 10 satellite channels (trunks?), 100% power to Wi-Fi, etc.
I'm kind of grasping at straws here; I think this is the type of information Apple should have provided.
What I really want to know is: what is actually happening with kCLLocationAccuracyThreeKilometers in relation to battery draw? Is the GPS on? One satellite trunk? Wi-Fi enabled? Wi-Fi on a timer? Who knows? I know I'd like to.
I agree with Olie that hiding the details of the algorithm is intended to protect the app developer from worrying about how location is determined. That said, I believe it's still reasonable to ask the question: "what are the power implications of my accuracy selection?".
I have a little bit of information that might guide your decision on which to use, but I don't know the true details of Apple's implementation.
First, assume that as the reading becomes more accurate, the system will need to use more power-hungry radios. For example, GPS is required for the most detailed readings, inside 100 meters, and it uses the most power.
Here is an educated guess at the mechanism used to determine the accuracy. The list is ordered with the first entry causing the highest battery drain.
GPS - kCLLocationAccuracyBestForNavigation;
GPS - kCLLocationAccuracyBest;
GPS - kCLLocationAccuracyNearestTenMeters;
WiFi (or GPS in rural area) - kCLLocationAccuracyHundredMeters;
Cell Tower - kCLLocationAccuracyKilometer;
Cell Tower - kCLLocationAccuracyThreeKilometers;
When choosing, Apple recommends that you select the most coarse-grained accuracy that your application can afford.
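In code, that just means picking the loosest constant that still serves your feature, for example (a sketch; the constant and distance filter are whatever your app can tolerate):

import CoreLocation

let locationManager = CLLocationManager()
// A "nearby stores" feature rarely needs better than ~100 m, so don't pay the GPS power cost
locationManager.desiredAccuracy = kCLLocationAccuracyHundredMeters
locationManager.distanceFilter = 50          // only deliver updates after moving 50 m
locationManager.startUpdatingLocation()      // after requesting the appropriate authorization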
Hope that helps, a.little.
In the business district of a major city, Wi-Fi and cell-tower triangulation are both very good. In residential suburbs they're not so good. In rural areas they barely work, if they work at all.
GPS doesn't work very well indoors and can take a very long time to get any fix at all without cell-tower assistance (possibly 20 minutes!). It takes that long for the satellites to broadcast enough information to determine your location, and there can be packet loss (clouds, buildings, trees, mountains, etc.). It's worth noting that a proper high-end GPS will have an antenna the size of a basketball; no handheld GPS can get a perfect signal.
Even outdoors with a perfect signal, GPS is inaccurate when you change direction rapidly (such as on a highway or a winding road). The BestForNavigation setting uses the accelerometer and gyroscope to offset this.
Currently, the iOS platform uses:
GPS: very accurate, but high power draw, slow, and not always available. Some hardware doesn't have a GPS.
Wi-Fi: lots of power draw, and only works in the city. Can also be flat-out wrong (e.g. placing you in the wrong city).
Cell tower: almost no power draw at all, and works well in the city. Not so great in rural areas. Doesn't exist on some hardware.
Accelerometer: slight improvements to other location fixes, but a huge power draw.
Gyroscope: slight improvements to other location fixes, but a huge power draw. iPhone 4 only.
You give it the accuracy in meters that you need (the constants are just nice names for meters), and it will use a combination of the above to get you that level of accuracy with the fastest possible fix and lowest possible power draw. The technique it uses will change from one user to another, and will change depending on where in the world the user is standing at the time.
The whole point of using extern constants rather than exposing what is actually happening is so that the under-the-hood workings can change and your code doesn't have to worry about it, while still picking up the improvements.
That said, CLLocationAccuracy is typedef'ed to double, so I think it's fair to guess that kCLLocationAccuracyNearestTenMeters = 10.0, kCLLocationAccuracyHundredMeters = 100.0, etc. Best is likely either 0, 1, or kCLLocationAccuracyNearestTenMeters, and BestForNavigation is probably one they tossed in to help folks like TomTom, etc.
If you REALLY want to know, you can print out the values -- they're just doubles.
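For example, in Swift (the values in the comments are the raw values quoted in the question above):

import CoreLocation

// CLLocationAccuracy is just a Double, so printing the constants shows their raw values
print(kCLLocationAccuracyBestForNavigation)   // -2.0
print(kCLLocationAccuracyBest)                // -1.0
print(kCLLocationAccuracyNearestTenMeters)    // 10.0
print(kCLLocationAccuracyHundredMeters)       // 100.0
print(kCLLocationAccuracyKilometer)           // 1000.0
print(kCLLocationAccuracyThreeKilometers)     // 3000.0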
I do not believe the number of satellites or the power to the Wi-Fi radio is altered based on your desired accuracy. The way I understand the algorithms, there is an approximation calculation that becomes more accurate the more times it runs through the loop; less-accurate requests just bail out earlier.
But, again, the more important point is: it doesn't matter. Apple specifically doesn't describe what goes on behind the scenes because that's not part of the design. The design is: if you use kCLLocationAccuracyKilometer, you'll get an answer that's within a kilometer, etc. Apple is then free to change how they arrive at that without you caring. This sort of isolation is a basic tenet of object-oriented programming.
EDIT:
CORRECTION -- I'm just now watching the WWDC session on location (Session 115) and, at about 22:00, the presenter talks about how, when using BestForNavigation, some gyroscope correction is added in (when available). However, he warns that this is power- and CPU-intensive and should only be used when necessary, as with turn-by-turn navigation.
I'm not sure how much more I can say about this publicly, but if you're a registered developer you can get the sessions from iTunes U.
(This is WWDC-2010, btw.)
