I am working on a small education demo that should measure the height and width of an object using the iOS camera.
EDIT:
I have a new theory for measuring the width of an object.
In the above image, if I can get angle α and angle β, I can get the width of the unknown side using trigonometry formulas. I already have the values of b1 and b2.
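Without the image the exact configuration is a guess, but if b1 and b2 are the two known sides adjacent to the unknown width, and α and β are the angles at its two ends, then the angle opposite the unknown side is π − α − β and the law of cosines gives the width. A minimal Swift sketch (all names are mine):

import Foundation

// Law-of-cosines sketch: b1 and b2 are the known sides, alpha and beta
// (in radians) are the known angles at the ends of the unknown side,
// so the angle opposite the unknown side is pi - alpha - beta.
func unknownWidth(b1: Double, b2: Double, alpha: Double, beta: Double) -> Double {
    let gamma = Double.pi - alpha - beta   // angle between b1 and b2
    return sqrt(b1 * b1 + b2 * b2 - 2 * b1 * b2 * cos(gamma))
}

// Example: b1 = 2 m, b2 = 3 m, alpha = beta = 60° gives a width of about 2.65 m.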
OLD:
Right now, I am focusing on measuring length only.
As I understand it, this should be a three-step process:
1. The user snaps one end of the object.
2. The user snaps the other end of the object.
3. The user snaps the center of the object. (Please suggest a better way of doing this.)
I get approximate measurements using the above process, but for the third step, in which the user snaps the center of the object, I want to show a pointer location on screen (as a camera overlay) to help the user find the center of the object.
This is how I am doing it right now.
How can I draw the pointer location for the third step?
Note: please suggest an alternative/better way to make this possible. I would love other suggestions. Thanks!
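For reference, the kind of overlay I have in mind is just a crosshair view drawn over the camera preview; a minimal sketch (assuming UIImagePickerController's cameraOverlayView, though the same view could sit above an AVCaptureVideoPreviewLayer):

import UIKit

// A transparent view that draws a crosshair at its centre.
final class CrosshairView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let arm: CGFloat = 20
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(2)
        ctx.move(to: CGPoint(x: center.x - arm, y: center.y))      // horizontal arm
        ctx.addLine(to: CGPoint(x: center.x + arm, y: center.y))
        ctx.move(to: CGPoint(x: center.x, y: center.y - arm))      // vertical arm
        ctx.addLine(to: CGPoint(x: center.x, y: center.y + arm))
        ctx.strokePath()
    }
}

// Usage with UIImagePickerController:
// let overlay = CrosshairView(frame: picker.view.bounds)
// overlay.backgroundColor = .clear
// overlay.isUserInteractionEnabled = false
// picker.cameraOverlayView = overlay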
First of all, I must appreciate the work you have done so far. Another good thing is your way of explaining, salute!!!!
After reading your question, I feel that you don't need code; you can do it. I think you only need direction.
As per your explanation, you want to record the angle of rotation of the device.
If you want to measure the angle of rotation, you have to use compass readings. But compass readings will change if the user tilts the device, so you also have to use the accelerometer to measure the tilt of the device.
In short, you have to combine compass and accelerometer readings in one equation: use the compass to measure the angle, and use the accelerometer to measure the tilt of the device.
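For what it's worth, CMDeviceMotion already fuses these sensors for you: with a magnetic-north reference frame, attitude.yaw behaves like a tilt-compensated compass angle, while pitch and roll give the tilt. A minimal sketch of reading both (the update interval and queue are arbitrary choices):

import CoreMotion

let motionManager = CMMotionManager()

// xMagneticNorthZVertical fuses the magnetometer (compass) with the
// accelerometer and gyro, so yaw is already tilt-compensated.
if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
    motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                           to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        let heading = attitude.yaw * 180 / .pi   // compass-like angle, degrees
        let tilt = attitude.pitch * 180 / .pi    // device tilt, degrees
        print("heading: \(heading), tilt: \(tilt)")
    }
}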
If you want further information to implement it, you can ask me.
Hope this will help you....
It seems HitResult only gives us an intersection with a surface (plane) or a point cloud. How can I get a point in mid-air from my click, and thus place an object floating in the air?
It really depends on what you mean by "in the air". Two possibilities I see:
"Above a detected surface" Do a normal hit test against a plane, and offset the returned pose by some Y distance to get the hovering location. For example:
Pose.makeTranslation(0, 0.5f, 0).compose(hitResult.getHitPose())
returns a pose that is 50 cm above the hit location. Create an anchor from this and you're good to go. You could also just create the anchor at the hit location and compose it with the Y translation each frame to allow for animating the hover height.
"Floating in front of the current device position" For this you probably want to compose a translation on the right hand side of the camera pose:
frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).extractTranslation()
gives you a translation-only pose that is 1 m in front of the center of the display. If you want to be in front of a particular screen location, I put some code in this answer for screen-point-to-world-ray conversion.
Apologies if you're in Unity/Unreal, your question didn't specify so I assumed Java.
The reason you so often see a hit result interpreted as the user's desired position is that there is actually no closed-form solution for this interaction. Which of the infinitely many possible positions along the ray from the camera into the scene did the user intend? The 2D coordinates of a click leave the third dimension undefined.
Since you said "middle of the air", why not take the midpoint between the camera position and the hit result?
You can extract the current position using pose.getTranslation https://developers.google.com/ar/reference/java/com/google/ar/core/Pose.html#getTranslation(float[],%20int)
I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool, though). A representation that meets the functional needs will suffice.
The information about the parts needed to make the drawing (size, position, etc.) comes from an API.
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made of small interactable rectangles.
So, as I mentioned, I see two opposite approaches: realistic or simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know whether I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. The effect can be achieved with standard UIKit, UIView, and a little trigonometry; you can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula; rather, it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as its depth extends into the background (but you could adapt this to a perspective projection if needed). You could draw this with CoreGraphics in a UIView's drawRect.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
The square slice you are interested in is the area outlined in solid red: outside the radius of the first circle and inside the radius of an imaginary second circle, which is simply offset from the first circle by whatever length you want the slice to be.
To draw this area, you simply need to draw a path along the outline of each arc and connect the endpoints.
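For instance, with UIBezierPath (the centres, radius, and angles below are illustrative assumptions), the outline of one slice between the two same-radius, offset circles could be built like this:

import UIKit

// Builds the outline of one "square slice": an arc on the front circle,
// a line across to the offset (back) circle, the matching arc back, and
// a closing line.
func slicePath(frontCenter: CGPoint, offset: CGFloat, radius: CGFloat,
               startAngle: CGFloat, endAngle: CGFloat) -> UIBezierPath {
    let backCenter = CGPoint(x: frontCenter.x + offset, y: frontCenter.y)
    let path = UIBezierPath()
    // Arc along the front circle
    path.addArc(withCenter: frontCenter, radius: radius,
                startAngle: startAngle, endAngle: endAngle, clockwise: true)
    // Across to the back circle at the end angle
    path.addLine(to: CGPoint(x: backCenter.x + radius * cos(endAngle),
                             y: backCenter.y + radius * sin(endAngle)))
    // Arc back along the offset circle, then close back to the start
    path.addArc(withCenter: backCenter, radius: radius,
                startAngle: endAngle, endAngle: startAngle, clockwise: false)
    path.close()
    return path
}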
To check whether a touch is inside one of these square slices (see the sketch after this list):
1. Check that the touch point's angle, measured from the first circle's center, lies between the slice's start and end angles.
2. Check that the touch point is outside the radius of the first circle.
3. Check that the touch point is inside the radius of the second, offset circle. (Note what this means if the circles are more than a radius apart.)
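A sketch of those three checks using polar coordinates (names are mine; it ignores angle wrap-around at ±π for brevity):

import UIKit

// True if `point` lies in the slice between startAngle and endAngle of two
// same-radius circles, where the second circle's centre is offset from the
// first along the x axis.
func sliceContains(_ point: CGPoint, frontCenter: CGPoint, offset: CGFloat,
                   radius: CGFloat, startAngle: CGFloat, endAngle: CGFloat) -> Bool {
    let backCenter = CGPoint(x: frontCenter.x + offset, y: frontCenter.y)

    // 1. Angle from the front circle's centre must lie within the slice.
    let angle = atan2(point.y - frontCenter.y, point.x - frontCenter.x)
    guard angle >= startAngle && angle <= endAngle else { return false }

    // 2. Outside the front circle...
    guard hypot(point.x - frontCenter.x, point.y - frontCenter.y) >= radius else { return false }

    // 3. ...but inside the offset (back) circle.
    return hypot(point.x - backCenter.x, point.y - backCenter.y) <= radius
}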
To find a point at which to display the popover, you could average the corner points of the slice, or find the middle angle between the two edges and offset it by half the distance between the circles.
Theoretically, doing this in Scene Kit with either Sprite Kit or UIKit popovers is ideal.
However, Scene Kit (and Sprite Kit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from the relatively stable and performant Sprite Kit in iOS 8.4 to a lot of lost performance in iOS 9 seems common. Scene Kit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. These Material IDs are, for some reason, called "elements" in Scene Kit. I haven't been able to find much more about this.
It should be possible to detect the "element" underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's the currently selected one.
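In Swift that might look like the following; SCNHitTestResult exposes the element index as geometryIndex, while the controller, gesture wiring, and one-material-per-element setup are assumptions:

import SceneKit
import UIKit

final class CylinderSceneViewController: UIViewController {
    @IBOutlet var scnView: SCNView!

    // Attach a UITapGestureRecognizer targeting this method to scnView.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: scnView)
        guard let hit = scnView.hitTest(point, options: nil).first else { return }

        // geometryIndex identifies the geometry element ("material ID") that was hit.
        let element = hit.geometryIndex
        print("Tapped element \(element) of node \(hit.node)")

        // Highlight the touched section, assuming one material per element.
        if let materials = hit.node.geometry?.materials, element < materials.count {
            materials[element].emission.contents = UIColor.red
        }
    }
}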
When you want a smooth, well-rounded cylinder as in your example, start with a cylinder made of only enough segments to describe/define the material IDs you need for your "rectangular" touchable sections.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry within each quadrant of unique material ID should remain responsive, regardless of how you add this extra detail to smooth the cylinder's presentation.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to collectionView:layout:sizeForItemAtIndexPath:.
Then each cell of the collection could be a small rectangle representing a touchable part of the cylinder.
Use collectionView:didSelectItemAtIndexPath: to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
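A Swift sketch of the whole flow (the controller, the data properties, and the stand-in detail controller are assumptions):

import UIKit

final class CylinderGridViewController: UIViewController, UICollectionViewDelegateFlowLayout {
    var partSizes: [CGSize] = []   // sizes computed from the API data
    var parts: [String] = []       // one entry per cylinder part

    // One small rectangle per touchable part of the cylinder.
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return partSizes[indexPath.item]
    }

    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        guard let attributes = collectionView.layoutAttributesForItem(at: indexPath) else { return }

        // Anchor the popover on the tapped cell's frame.
        let detail = UIViewController()   // stand-in for your part-detail controller
        detail.modalPresentationStyle = .popover
        detail.popoverPresentationController?.sourceView = collectionView
        detail.popoverPresentationController?.sourceRect = attributes.frame
        present(detail, animated: true)
    }
}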
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you know the 2D coordinate on screen, so your only decision is whether or not to show a popover, even if no 3D model exists.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the 3D model's job is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit-test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node; from that you can calculate which segment was touched and decide what to show.
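A rough Swift sketch of that calculation; the segment layout (equal angular slices around the cylinder's local Y axis) is an assumption:

import SceneKit

// Maps a hit on the cylinder node to one of `segmentCount` angular
// segments around the cylinder's axis (local Y for an SCNCylinder).
func segmentIndex(for hit: SCNHitTestResult, segmentCount: Int) -> Int {
    let p = hit.localCoordinates              // hit point in the node's own space
    var angle = atan2(Double(p.z), Double(p.x))
    if angle < 0 { angle += 2 * .pi }         // normalise to 0 ... 2*pi
    return min(Int(angle / (2 * .pi / Double(segmentCount))), segmentCount - 1)
}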
Now the only question left is how to draw the surface of the cylinder, right? There are various ways to do this: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the cylinder's material...
With that, the problem should be basically solved.
I get feedback from my users that "from time to time" my game app has a bug where the ship becomes completely uncontrollable. After investigating this, it looks like the attitude reported by CoreMotion suddenly drifts away very fast (within a second). You can play for up to five minutes; then, all of a sudden, the ship moves to one of the screen borders and does not move away from that point any more.
My question: has anybody had the same experience with CoreMotion attitude, and what are your ways or ideas for getting control over these sudden, massive drifts?
The code I'm using to get the attitude in the update() of SpriteKit is:
if let motion = motionManager.deviceMotion {
    let x = CGFloat(motion.attitude.yaw - basePosition.x)
    let y = CGFloat(motion.attitude.roll - basePosition.y)
    ship.physicsBody?.applyImpulse(CGVectorMake(X_SENSITITVITY * x, Y_SENSITITVITY * y))
}
where basePosition, X_SENSITITVITY, and Y_SENSITITVITY are constant values in the game.
motionManager is defined by private var motionManager = CMMotionManager() at the top of the class.
As far as I understand the documentation, deviceMotion uses a combination of gravity and attitude measurements to somehow minimise long-term drift.
Maybe also important to note: when the game runs in a calm environment, without the vibrations of cars etc., it works perfectly fine.
I would like people to be able to play my game whenever they need a rest, like on long train rides or flights, or kids in the back seat of a car.
I ran into the same drift problem. I compared CMDeviceMotion's attitude with CLLocation's magneticHeading. To do so, I walked 10 times around a small table and put the device back in exactly the same place after every round.
I found that the device motion attitude drifts by around 30 degrees per round; thus, after 10 rounds the attitude is around 300 degrees off.
According to Apple's WWDC 2012 talk "Session 524: Understanding Core Motion", the sensor fusion used depends on the specified reference frame. Phil Adam mentions that the sensor fusion algorithm also uses the magnetometer if the xArbitraryCorrected reference frame is specified. I ran the same test with xArbitrary, xArbitraryCorrected and xMagneticNorth, but there was no difference. The compass's uncertainty is around 2-3 degrees (with a heading filter of 1.0 degree).
Maybe it's a bug, I don't know. But I expected at least a difference between xArbitrary and xArbitraryCorrected.
OK, I've found a way to handle this. And given the number of views on this question, there might be some interest in the answer, and some people dealing with the same issue.
From my understanding, there are two things you can rely on:
1. The change in the device's motion, specifically CMDeviceMotion.rotationRate.
2. The attitude of the device relative to the earth, specifically CMDeviceMotion.gravity.
The first reads the device's current rotational velocity. It is an instantaneous value; there is no need to sum values up, and therefore no errors are summed up either.
The second (gravity) measures the current force of gravity relative to the earth. Again: instantaneous values, no summing up, no summed-up errors, no drift.
Anything else, especially CMDeviceMotion.attitude, takes measurements and adds them up, which also adds up the small error in each measurement and leads to a drift in the result.
I've tried to use the rotation rate and calculate the attitude myself by summing up. Well, Apple is not perfect at doing that, but way better than any of my solutions :-)
Talk is cheap. Where's the code? Here it is. This method is called during update() in SpriteKit (for every frame):
private func update(ship: SKSpriteNode) {
    // deviceMotion is optional; skip the frame if no reading is available
    guard let motion = motionManager.deviceMotion else { return }

    // Orientation can be landscape left or right
    let orientation = UIApplication.sharedApplication().statusBarOrientation == UIInterfaceOrientation.LandscapeLeft ? -1.0 : 1.0

    // Get the rotation rate
    let x = orientation * motion.rotationRate.x
    let y = orientation * motion.rotationRate.y

    // If the user tilts the device very slowly, it does not affect the ship
    let xf = CGFloat(abs(x) >= X_RESPONSIVNESS ? X_SENSITITVITY * x : 0.0)
    let yf = CGFloat(abs(y) >= Y_RESPONSIVNESS ? Y_SENSITITVITY * y : 0.0)

    // Apply as an impulse
    ship.physicsBody?.applyImpulse(CGVectorMake(xf, yf))
}
As my game is more about "how does the user tilt the device to tell me where the ship should go", I simply use rotationRate to measure this and apply it as an impulse. As an add-on, the user can rotate the device, which means that what was a rotation towards the top of the screen becomes a rotation towards the bottom after the device is rotated, so I have to invert the impulse when the device is rotated.
You can now play the game in any orientation: landscape left, landscape right, overhead, or in a vibrating environment like a car or train. It works like a charm. I'm even looking forward to a 2.5-hour flight next Saturday...
I'd like to get a reference CMAttitude based on ground level, for instance to draw a horizon line.
Currently, I'm able to rotate my views by getting a reference attitude at any time and using multiplyByInverseOfAttitude to get the handset rotation relative to that previous attitude. That's fine.
But I'm unable to find out how to get a ground-level reference at start. I'm mainly in portrait mode, on iOS 5, and using CMAttitudeReferenceFrameXTrueNorthZVertical (as I also make use of CoreLocation).
I've looked at the bubble level and teapot samples (which use the accelerometer) but haven't found a simple answer or sample for my problem with device motion attitudes. I'm probably missing something.
Thanks.
My own answer.
Actually, I was on the wrong track in considering the use of a reference attitude. I just needed to compute the rotation angle from the gravity vector available in the deviceMotion object, like this:
double rotation = atan2(dm.gravity.x, dm.gravity.y) - M_PI;
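For anyone doing this in Swift, the same idea applied to an overlay view might look like this (the view and update rate are assumptions):

import CoreMotion
import UIKit

let motionManager = CMMotionManager()

// Keep a horizon-line view level with the ground using only the
// instantaneous gravity vector; no reference attitude is needed.
func startHorizonUpdates(for horizonView: UIView) {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let gravity = motion?.gravity else { return }
        let rotation = atan2(gravity.x, gravity.y) - .pi   // same formula as above
        horizonView.transform = CGAffineTransform(rotationAngle: CGFloat(rotation))
    }
}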
I need to track a single object's motion from frame to frame; I only need to know its position. But sometimes the object may go partly (even mostly) beyond the frame's boundaries, and sometimes it may approach the camera so closely that it no longer fits in the frame. Which algorithm is best for this purpose?
The question is not very specific. What are you tracking?
If it is just a colored object (which is the simplest case), use a threshold for that color and use contours or the method of moments to find its position. Then even if it leaves the frame and comes back, it will still be tracked.
http://aishack.in/tutorials/tracking-colored-objects-in-opencv/
Whatever it is, try to isolate the blob first.
And if I misunderstood the question, please specify with a few more details.