Manual ray cast when a plane is not in the hit results during an ARCore session - arcore

I am creating points on a detected plane, but sometimes the plane is no longer tracked (after fast movement, for example) and a hitTest may not return a hit on this plane.
ARKit will return a hit result for every known plane; could ARCore do the same?
Apparently this notion exists in the Unreal integration (EGoogleARCoreLineTraceChannel::InfinitePlane); could it be made available in the Java API?
To work around this problem, I do a manual ray cast, but for some reason I get a really small offset between my computed position and the hitTest result.
A screen-to-world coordinate conversion would help to make sure no bias is introduced there. Is something like that possible?
Thanks in advance for the help!
Julien.

As a current alternative, I used the code from Jonas Jongejan and Dan Moore's AR Drawing to get the right ray origin, and it works much better.
The trick was to unproject two points from the screen point (one on the near plane, one further in front) and to start the ray at touchRay.direction.scale(AppSettings.getStrokeDrawDistance()). I now have a very accurate match between my manual ray cast and the result of the hitTest.
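For reference, here is a minimal C++ sketch of that unprojection approach, assuming GLM for the matrix math; the function names and the ray-plane helper are illustrative, not the actual AR Drawing code.

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin; glm::vec3 direction; };

// Unproject the touch point on the near (z = 0) and far (z = 1) planes and
// build a world-space ray between the two results.
Ray screenPointToRay(glm::vec2 touch, glm::vec2 screen,
                     const glm::mat4& view, const glm::mat4& proj) {
    glm::vec4 viewport(0.0f, 0.0f, screen.x, screen.y);
    float winY = screen.y - touch.y;  // screen y grows down, GL window y grows up
    glm::vec3 nearPt = glm::unProject(glm::vec3(touch.x, winY, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(touch.x, winY, 1.0f), view, proj, viewport);
    return { nearPt, glm::normalize(farPt - nearPt) };
}

// Manual ray cast against a plane given by a point and a normal.
bool intersectPlane(const Ray& ray, glm::vec3 planePoint, glm::vec3 planeNormal,
                    glm::vec3& hit) {
    float denom = glm::dot(planeNormal, ray.direction);
    if (std::abs(denom) < 1e-6f) return false;  // ray parallel to the plane
    float t = glm::dot(planePoint - ray.origin, planeNormal) / denom;
    if (t < 0.0f) return false;                 // plane behind the ray origin
    hit = ray.origin + t * ray.direction;
    return true;
}

Starting the ray at the unprojected near-plane point, rather than at the raw camera pose, is what removes the small offset described above.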

Related

ARKit plane with real world object above it

Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place the plane over the whole desired area first and then interact with the new ground? For example, could I use plane detection to detect an area, put a grass texture over it, and then drive a real RC car over it, just like driving on real grass?
I have tried out plane detection on my iPhone 6s, and what I found is that when I put a real-world object on top of the plane surface, it simply gets covered by the plane. Could you please give me a clue whether it is possible to make the plane stay on the ground without covering the real-world object?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that is actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object while it moves in the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas.
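As a platform-neutral illustration of that per-frame idea (the actual iOS capture would go through the AVCaptureVideoDataOutputSampleBufferDelegate mentioned above), here is a hedged OpenCV C++ sketch that finds a coloured object's bounding box in each frame; the HSV range is an illustrative placeholder, not a calibrated value.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Return the bounding box of the largest region inside an HSV colour range;
// an empty Rect means nothing was found in this frame.
cv::Rect detectObject(const cv::Mat& frameBgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);
    // Placeholder range (greenish); calibrate for the real object.
    cv::inRange(hsv, cv::Scalar(35, 80, 80), cv::Scalar(85, 255, 255), mask);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return {};
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    return cv::boundingRect(*largest);
}

Anything you render at that box's position could then be skipped or occluded.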

Object detection in 2D laser scan

Currently, I am desperately trying to detect an object (a robot) based on 2D laser scans (from another robot). In the following two pictures, the blue arrow corresponds to the pose of the laser scanner and points towards the object that I would like to detect.
one side of the object
two sides of the object
Since it is basically a 2D picture, my first approach was to look for OpenCV implementations such as HoughLinesP or the LSDDetector in order to detect the lines. Unfortunately, since the focus of OpenCV is more on "real" images with "real" lines, this approach does not really work on point clouds, as far as I understand it. Another famous library is the Point Cloud Library (PCL), which on the other hand focuses more on 3D point clouds.
My current approach would be to segment the laser scans and then use an iterative closest point (ICP) C++ implementation to find a 2D point-cloud template in the laser scans. Since I am not that familiar with object detection and all that nice stuff, I am quite sure that there are more sophisticated solutions...
Do you have any suggestions?
Many thanks in advance :)
To get lines from points, you could try RANSAC.
You would iteratively fit lines to the points, then remove the points corresponding to the new line and repeat until there are not enough points left or the support is too low, something like that; a sketch of a single iteration is shown after this answer.
Hope it helps.
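A minimal sketch of one RANSAC line fit over the 2D scan points (parameter values and names are illustrative):

#include <cmath>
#include <random>
#include <vector>

struct Pt { float x, y; };

// Fit one line with RANSAC: pick two random points, count the points within
// `tol` of the line through them, keep the best model. Returns the inlier
// indices of the best line found.
std::vector<size_t> ransacLine(const std::vector<Pt>& pts, int iters, float tol) {
    if (pts.size() < 2) return {};
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    std::vector<size_t> best;
    for (int i = 0; i < iters; ++i) {
        size_t a = pick(rng), b = pick(rng);
        if (a == b) continue;
        // Line through pts[a] and pts[b] in normal form: n . p = d
        float nx = pts[a].y - pts[b].y, ny = pts[b].x - pts[a].x;
        float len = std::hypot(nx, ny);
        if (len < 1e-6f) continue;
        nx /= len; ny /= len;
        float d = nx * pts[a].x + ny * pts[a].y;
        std::vector<size_t> inliers;
        for (size_t j = 0; j < pts.size(); ++j)
            if (std::abs(nx * pts[j].x + ny * pts[j].y - d) < tol)
                inliers.push_back(j);
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}

To extract several lines, you would erase the returned inliers from the point set and call the function again until too few points remain.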

How to find the coordinates of moving objects to draw rectangle

Does anyone know how to locate the coordinates of a moving object? I have found some examples online about tracking objects using optical flow, but I only get some tracked points on the moving objects. May I just draw a rectangle around each moving object instead? Is there a way to get the coordinates of each moving object? I appreciate any help in advance. Thanks!
Fit a rectangle to the points you get with optical flow, and you can consider the centre of the fitted rectangle a fair estimate of the 2D trajectory of the whole moving body.
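For the rectangle itself, cv::boundingRect over the tracked points is enough; a small sketch (assuming the points come from something like calcOpticalFlowPyrLK):

#include <opencv2/opencv.hpp>
#include <vector>

// Fit an upright rectangle around the tracked points and take its centre
// as the object's estimated 2D position in this frame.
cv::Rect boxAroundPoints(const std::vector<cv::Point2f>& tracked) {
    return cv::boundingRect(tracked);
}

cv::Point2f boxCentre(const cv::Rect& box) {
    return { box.x + box.width * 0.5f, box.y + box.height * 0.5f };
}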
You can use the moments operator:
first find the contours,
and then add this code block:
cv::Moments moment = cv::moments(contours[index]);
double area = moment.m00;      // m00 gives the area
double x = moment.m10 / area;  // x coordinate of the centroid
double y = moment.m01 / area;  // y coordinate of the centroid
where contours is the output of findContours().
It is pretty hard to tell the coordinates of an object from only a couple of points on it. You can use moments (here is a tutorial) to get a quite stable point describing where your object is.
You may also do some additional work, like segmentation using the tracked points to get the contour of the tracked object, which should make it even easier to find its centre of mass.
There is also a tracking method called CAMSHIFT which returns a rectangle bounding the tracked object.
If you know precisely what you are tracking, can make sure that some known points on the tracked object are tracked, and are able to recognise them, then you can use POSIT to determine the object's 3D coordinates and orientation. Take a glance at ArUco to get an idea of what I'm talking about.
To get a 3D position from the previous methods, you can use stereo vision and use the centre of mass from both cameras to compute the coordinates.

How to detect PizzaMarker

Has anybody tried to find a pizza marker like this one with "only" OpenCV so far?
I have been trying to detect this one but couldn't get good results so far. I do not know where this marker is in the picture (no ROI is possible), the marker will be somewhere in the room (different lighting effects) and not facing orthogonally towards us. What I want are the corners, and later the orientation of the marker extracted from the corners, but first of all only the 5 corners (up, down, left, right, centre).
What I have tried so far: thresholding, noise removal, finding contours, but nothing really helped to get a good result. Chessboards or square markers are normally found because of their (parallel) lines; I guess this can't help me here...
What is an easy way to find these markers?
How would you start?
Use another colour format like HSV?
A step-by-step idea or tutorial would be really helpful, because I couldn't find tutorials on the net. Maybe this marker isn't called a pizza marker; does somebody know its real name?
Thanks for the help.
First, thank you for all of your help.
It seems that several methods are useful, some more or less time-expensive.
For me, the easiest was template matching, but not with the whole marker.
I used only a small part of it...
This part can be found 5 times (4 times negative and once positive) in this new marker.
Now I use only the 4 most negative points and the most positive one, and I get the 5 points that I finally wanted. To make this more robust, I check that they are close to each other and then run cornerSubPix().
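A hedged sketch of that matching step, assuming TM_CCOEFF_NORMED so that inverted occurrences of the sub-template show up as strong minima; all names are illustrative, not the original code:

#include <opencv2/opencv.hpp>
#include <vector>

// Collect the n strongest minima of a match response, suppressing a
// neighbourhood around each so the next minMaxLoc finds a distinct match.
static std::vector<cv::Point> strongestMinima(const cv::Mat& response, int n, int radius) {
    cv::Mat r = response.clone();
    std::vector<cv::Point> pts;
    for (int i = 0; i < n; ++i) {
        double minVal;
        cv::Point minLoc;
        cv::minMaxLoc(r, &minVal, nullptr, &minLoc, nullptr);
        pts.push_back(minLoc);
        cv::circle(r, minLoc, radius, cv::Scalar(0), cv::FILLED);
    }
    return pts;
}

std::vector<cv::Point> findMarkerPoints(const cv::Mat& gray, const cv::Mat& subTemplate) {
    // With TM_CCOEFF_NORMED the response lies in [-1, 1]: inverted
    // occurrences appear as strong minima, the plain one as the maximum.
    cv::Mat response;
    cv::matchTemplate(gray, subTemplate, response, cv::TM_CCOEFF_NORMED);
    std::vector<cv::Point> pts = strongestMinima(response, 4, subTemplate.cols);
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);
    pts.push_back(maxLoc);  // 4 negative matches + 1 positive = the 5 points
    return pts;
}

Note that match locations are the top-left corners of the template window; you would offset to the window centre before refining with cornerSubPix().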
If you need something which can operate in real-time I'd go down the edge detection route and look for intersecting lines like these guys did. Seems fast and robust to lighting changes.
Read up on the Hough Line Transform in OpenCV to get started.
Addendum:
Black to white is the strongest edge you can have. If you create a gradient image and use the strongest edges found in the scene (via a histogram or otherwise), you will be able to limit the detection to only the black/white edges. Look for intersections. This should give you a small number of candidate centre points to apply Hough ellipse detection (or an alternative) to. You could rotate in a template as a further check if you wish.
BTW, OpenCV has edge detection, the Hough transform, and fitEllipse if you do go down this route; a minimal sketch of the starting point follows (thresholds are illustrative).
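#include <opencv2/opencv.hpp>
#include <vector>

// Gradient image -> strongest edges -> candidate line segments.
std::vector<cv::Vec4i> findSegments(const cv::Mat& bgr) {
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);  // keep only the strongest edges
    std::vector<cv::Vec4i> lines;
    // rho = 1 px, theta = 1 degree, vote threshold, min length, max gap
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 30, 5);
    return lines;
}

Intersecting these segments pairwise then yields the small set of candidate centre points mentioned above.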
Actually, this 'pizza' pattern is one of the building blocks of the Haar features used in the Viola-Jones object detection framework.
So what I would do is compute the summed area table, or integral image, using cv::integral(img) and then run an exhaustive search for this pattern at various scales (it is size-dependent).
In each window you use only 9 points (top-left, top-centre, ..., bottom-left).
You can also train and use cvHaarDetectObjects to detect the marker using Viola-Jones.
Probably not the fastest method, but it should work; a sketch of the integral-image lookup is below.
You can find more info on object detection methods using OpenCV here: http://opencv.willowgarage.com/documentation/object_detection.html
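A minimal sketch of that building block (the 9 points per window are the shared rectangle corners; the wedge rectangles in the usage comment are hypothetical names):

#include <opencv2/opencv.hpp>

// cv::integral produces a (rows+1) x (cols+1) CV_32S image where each entry
// is the sum of all pixels above and to the left; any rectangle sum then
// costs four lookups.
int rectSum(const cv::Mat& integralImg, const cv::Rect& r) {
    return integralImg.at<int>(r.y, r.x)
         + integralImg.at<int>(r.y + r.height, r.x + r.width)
         - integralImg.at<int>(r.y, r.x + r.width)
         - integralImg.at<int>(r.y + r.height, r.x);
}

// Usage sketch: a Haar-like response is a difference of region sums.
// cv::Mat integralImg;
// cv::integral(gray, integralImg);
// int response = rectSum(integralImg, leftWedge) - rectSum(integralImg, rightWedge);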

OpenCV based Labyrinth Maze solver

I am building an automatic maze solver using the following as an inspiration:
http://www.youtube.com/watch?v=Prq78ctJ2Rk&feature=related
I have built the maze control with steppers and I am using the following stepper motor control board:
http://www.sparkfun.com/products/10025
I am using a vision system to control the maze solver. I also found a link where this problem has been solved:
http://cse.logicol.org/?p=52
They used template matching to identify the ball. The team mentioned in the above link also uploaded a video where it looks like they used Canny edge detection to find the path and ran a PID algorithm.
http://www.youtube.com/watch?v=8b5ARjT22bg&feature=player_embedded
Now, I also have template matching and edge detection working in OpenCV, and I have established control of my stepper via a USB serial port. How do I implement the navigation algorithm? How do I implement the PID control? I know the concept of PID control theoretically, but I just don't know how to implement it using the information from the camera. I am just clueless about making the ball follow the line.
Please find an attached image of the result I have obtained so far.
Sai
I didn't quite understand your question, but if you are asking what commands to give the ball given its position, here is my guess:
1. Find the location of the ball.
2. Have the line of the desired path, drawn on the board, detected using Canny.
3. Find the closest point to the ball which is on the path line. If it is a straight line, the calculation is the simple geometric formula dist(point, line). Let us call the result D.
4. The resulting point on the line is where the ball should be.
5. Advance distance D along the path line. This gives you your destination point.
6. Subtract the ball coordinates from the destination point, then use atan2() to calculate in which direction to move the ball.
7. Activate the motors to tilt the board in that direction.
Clarification to step 5: why did I say to advance distance D along the path? Because this directs the ball at most 45 degrees off the path line, which gives you relatively smooth motor movement.
If I didn't understand your question, please tell me and I will correct my answer.
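On the PID part of the question: the error signal can simply be the signed distance between the ball and the path, measured in the camera image, and the controller output the board tilt along each axis. A minimal sketch assuming such a per-frame error measurement; all names and gain values here are illustrative, not the setup from the video:

#include <cmath>

// Discrete PID controller; update() is called once per camera frame.
struct PID {
    float kp, ki, kd;
    float integral = 0.0f;
    float prevError = 0.0f;
    float update(float error, float dt) {
        integral += error * dt;
        float derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Per frame: measure the ball's signed distance to the path in pixels,
// then command the tilt; one controller per board axis.
// PID pidX{0.8f, 0.05f, 0.2f};
// float tiltX = pidX.update(errorX, frameDt);

The gains would be tuned by hand, typically starting with kp alone and adding ki and kd only to remove steady-state error and overshoot.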
