I am building an automatic maze solver using the following as an inspiration:
http://www.youtube.com/watch?v=Prq78ctJ2Rk&feature=related
I have built the maze control with steppers and I am using the following stepper motor control board:
http://www.sparkfun.com/products/10025
I am using a vision system to control the maze solver. I also found a link where this problem has been solved:
http://cse.logicol.org/?p=52
They used template matching to identify the ball. The team mentioned in the link above also uploaded a video where it looks like they used Canny edge detection to find the path and a PID algorithm to execute it.
http://www.youtube.com/watch?v=8b5ARjT22bg&feature=player_embedded
Now, I have also got template matching and edge detection working in OpenCV, and I have established control of my steppers via the USB serial port. How do I implement the navigation algorithm? How do I implement the PID control? I know the concept of PID control theoretically, but I just don't know how to implement it using the information from the camera. I am just clueless about making the ball follow the line.
Please find an attached image of the result I have obtained so far.
Sai
I didn't quite understand your question, but if you are asking what commands to give, given the ball's position, here is my guess:
1. Find the location of the ball.
2. You have the line of the desired path drawn on the board and detected using Canny.
3. Find the closest point to the ball which is on the path line. If it is a straight line, the calculation is the simple geometric formula dist(point, line). Let us call the result D.
4. The resulting point on the line is where the ball should be.
5. Advance distance D along the path line. This gives you your destination point.
6. Subtract the ball coordinates from the destination point, and then use atan2() to calculate in which direction to move the ball.
7. Activate the motors to tilt the board in that direction.
Clarification to step 5: why did I say to advance distance D along the path? Because this way you direct the ball at an angle of at most 45 degrees to the path line. This gives you relatively smooth motor movement.
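In case it helps, here is a minimal sketch of steps 3-6 in Python (the path is assumed to be an ordered list of pixel coordinates extracted from your Canny/contour output, and `ball` is the centre found by template matching; all names are placeholders):

```python
import numpy as np

def direction_to_move(ball, path_points):
    """Given the ball position and the path as an ordered list of (x, y)
    points, return the tilt direction (radians) per steps 3-6 above."""
    ball = np.asarray(ball, dtype=float)
    pts = np.asarray(path_points, dtype=float)

    # Step 3: closest point on the path and its distance D to the ball.
    dists = np.linalg.norm(pts - ball, axis=1)
    i = int(np.argmin(dists))
    D = dists[i]

    # Step 5: advance roughly distance D along the path from that point.
    j = i
    travelled = 0.0
    while j + 1 < len(pts) and travelled < D:
        travelled += np.linalg.norm(pts[j + 1] - pts[j])
        j += 1
    target = pts[j]

    # Step 6: direction from the ball towards the destination point.
    dx, dy = target - ball
    return np.arctan2(dy, dx)
```

The returned angle can then be split into X/Y tilt commands for the two steppers (step 7), for example tilt_x proportional to cos(angle) and tilt_y proportional to sin(angle), optionally scaled by a PID term on the distance D.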
If I didn't understand your question, please tell me and I will correct my answer.
I am creating points on a detected plane, but sometimes the plane is not tracked anymore (fast movement for example) and a hitTest may not return a hit on this plane.
ARKit will return a hit result for every known plane; could ARCore do the same?
Apparently, this notion exists in Unreal integration (EGoogleARCoreLineTraceChannel::InfinitePlane), could it be available in the Java API?
Also, to work around this problem, I do a manual ray cast, and for some reason I have a really small offset between my computed position and the hitTest result.
A screen-to-world coordinate conversion would help to make sure no bias is introduced there. Is that possible?
Thanks in advance for the help!
Julien.
As a current alternative, I used the code from Jonas Jongejan and Dan Moore's AR Drawing to get the right ray origin, and it is working much better.
The secret was to generate two points from the screen point (near and far) and start the ray at touchRay.direction.scale(AppSettings.getStrokeDrawDistance()). I now have a very accurate match between my manual ray cast and the result of the hitTest.
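For reference, the underlying idea (independent of the helpers used in the AR Drawing project) is to unproject the screen point at two depths and build a ray from them. A rough sketch, assuming you have the camera's 4x4 view and projection matrices and OpenGL-style NDC conventions:

```python
import numpy as np

def screen_point_to_ray(x_px, y_px, width, height, view, proj):
    """Unproject a screen point at the near and far planes and return
    (origin, direction) of the corresponding world-space ray.
    view/proj are 4x4 matrices as provided by the AR camera."""
    inv = np.linalg.inv(proj @ view)

    # Screen pixel -> normalized device coordinates (-1..1, y flipped).
    ndc_x = 2.0 * x_px / width - 1.0
    ndc_y = 1.0 - 2.0 * y_px / height

    def unproject(ndc_z):
        p = inv @ np.array([ndc_x, ndc_y, ndc_z, 1.0])
        return p[:3] / p[3]

    near = unproject(-1.0)   # point on the near plane
    far = unproject(1.0)     # point on the far plane
    direction = far - near
    direction /= np.linalg.norm(direction)
    return near, direction
```

The ray origin can then be pushed forward along the direction by a fixed draw distance, which appears to be what the touchRay.direction.scale(AppSettings.getStrokeDrawDistance()) call does in the AR Drawing code.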
First of all I'm a total newbie in image processing, so please don't be too harsh on me.
That being said, I'm developing an application to analyse changes in blood flow in extremities using thermal images obtained by a camera. The user is able to define a region of interest by placing a shape (circle,rectangle,etc.) on the current image. The user should then be able to see how the average temperature changes from frame to frame inside the specified ROI.
The problem is that some of the images are not steady, due to (small) movement by the test subject. My question is how can I determine the movement between the frames, so that I can relocate the ROI accordingly?
I'm using the Emgu OpenCV .Net wrapper for image processing.
What I've tried so far is calculating the center of gravity using GetMoments() on the biggest contour found and calculating the direction vector between this and the previous center of gravity. The ROI is then translated using this vector but the results are not that promising yet.
Is this the right way to do it or am I totally barking up the wrong tree?
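For reference, the centroid-of-largest-contour idea described above looks roughly like this (shown in Python/OpenCV rather than Emgu, but the calls map one to one; the threshold value is a placeholder):

```python
import cv2
import numpy as np

def centroid_shift(prev_gray, curr_gray, thresh=80):
    """Estimate frame-to-frame translation from the centroid of the
    largest contour in each thresholded frame."""
    def centroid(gray):
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # OpenCV 4.x returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    # Vector by which to translate the ROI.
    return centroid(curr_gray) - centroid(prev_gray)
```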
------Edit------
Here are two sample images showing slight movement downwards to the right:
http://postimg.org/image/wznf2r27n/
Comparison between the contours:
http://postimg.org/image/4ldez2di1/
As you can see the shape of the contour is pretty much the same, although there are some small differences near the toes.
It seems I was finally able to find a solution to my problem using optical flow based on the Lucas-Kanade method.
Just in case anyone else is wondering how to implement it in Emgu/C#, here's the link to an Emgu examples project where they use the Lucas-Kanade and Farnebäck algorithms:
http://sourceforge.net/projects/emguexample/files/Image/BuildBackgroundImage.zip/download
You may need to adapt a few things, e.g. the parameters for the corner detection (the frame.GoodFeaturesToTrack(..) method), but it's definitely something to start with.
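For anyone who wants a compact reference, the same approach looks roughly like this in Python/OpenCV (Emgu exposes equivalent methods; all parameters are illustrative only):

```python
import cv2
import numpy as np

def roi_shift_lk(prev_gray, curr_gray, roi):
    """Estimate how a rectangular ROI (x, y, w, h) moved between two
    frames using sparse Lucas-Kanade optical flow."""
    x, y, w, h = roi
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255

    # Corners to track inside the ROI.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=5, mask=mask)
    if p0 is None:
        return roi

    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return roi

    # Robust estimate of the dominant shift.
    dx, dy = np.median((p1[good] - p0[good]).reshape(-1, 2), axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)
```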
Thanks for all the ideas!
I would like to create a program that can identify arrows in a video feed and determine the direction they are pointing at (left or right). My aim is to use this program with an arduino robot in order to determine the direction in which the bot should move.
My problem is which method to use. I've narrowed my options down to template matching or SURF. Template matching is good because it is rotation dependent, so it can distinguish between left and right arrows. However, since the bot will be moving, the size of the template arrow might not be equal to that in the video feed, resulting in no matches.
SURF solves this problem; however, it is rotation invariant. This means that left arrows and right arrows will be considered the same thing.
Can anyone please suggest an approach I can use for this program?
Thanks in advance for any help
P.S I will be using OpenCV for implementation.
I managed to solve the problem by using Canny edge detection and HoughLinesP. The system works pretty well but has a limited rotation range over which it will detect the direction correctly (approx. 15 degrees).
Basically, I first performed colour detection to find the arrow, then used HoughLinesP to find its outline. Out of these lines, I eliminated all those which are horizontal or vertical, leaving just the ones at the tip, as shown in red. I then used the end points of each line to determine the direction.
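A rough Python/OpenCV sketch of that pipeline (the binary mask is assumed to come from your colour detection step, and the thresholds and angle limits are guesses):

```python
import cv2
import numpy as np

def arrow_direction(mask):
    """Return 'left' or 'right' for a roughly horizontal arrow in a
    binary mask, using Canny + HoughLinesP and the diagonal tip lines."""
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return None

    # Keep only the diagonal segments (the arrow tip), discarding lines
    # that are close to horizontal or vertical.
    tips = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        if 20 < angle < 70 or 110 < angle < 160:
            tips.append((x1, y1, x2, y2))
    if not tips:
        return None

    # The tip lines cluster on the pointed side; compare their mean x
    # position against the centroid of the whole arrow.
    tip_x = np.mean([[x1, x2] for x1, y1, x2, y2 in tips])
    cx = np.mean(np.column_stack(np.nonzero(mask))[:, 1])
    return "right" if tip_x > cx else "left"
```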
Has somebody tried to find a pizza marker like this one with "only" OpenCV so far?
I was trying to detect this one but couldn't get good results so far. I do not know where this marker is in the picture (no ROI is possible), the marker will be somewhere in the room (different lighting effects) and not facing orthogonally towards us. What I want are the corners, and later the orientation of this marker extracted from the corners, but first of all only the 5 corners (up, down, left, right, center).
What I have tried so far: thresholding, noise clearing and finding contours, but nothing really led to a good result. Chessboards or square markers are normally found because of their (parallel) lines; I guess this can't help me here...
What is an easy way to find those markers?
How would you start?
Use another color format like HSV?
A step-by-step idea or tutorial would be really helpful, because I couldn't find tutorials on the net. Maybe this marker isn't called a pizza marker -> does somebody know the real name?
Thanks for the help.
First - thank you for all of your help.
It seems that several methods are useful, some more and some less time expensive.
For me the easiest was template matching, but not with the whole marker.
I used only a small part of it...
This part can be found 5 times (4 times negative and one positive) in this new marker:
Now I use only the 4 most negative points and the most positive one, and get the 5 points that I finally wanted. To make this more robust, I check whether they are close to each other and then run cornerSubPix().
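In case it is useful to anyone, the "one positive and four negative matches" idea can be sketched like this in Python/OpenCV (the template image and the suppression radius are placeholders):

```python
import cv2
import numpy as np

def find_marker_points(gray, template, n_neg=4, suppress=15):
    """Find the strongest positive match and the n_neg strongest negative
    matches of `template` in `gray`, then refine them with cornerSubPix."""
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    th, tw = template.shape[:2]

    # Positive peak: the one location where the template matches directly.
    _, _, _, pos = cv2.minMaxLoc(res)

    # Negative peaks: the locations where the inverted pattern sits.
    # Masking around each pick avoids choosing the same spot twice.
    work = res.copy()
    negs = []
    for _ in range(n_neg):
        _, _, loc, _ = cv2.minMaxLoc(work)
        negs.append(loc)
        x, y = loc
        work[max(0, y - suppress):y + suppress,
             max(0, x - suppress):x + suppress] = 1.0

    # Convert match positions to patch centres and refine to sub-pixel accuracy.
    pts = np.array([(x + tw / 2, y + th / 2) for x, y in [pos] + negs],
                   dtype=np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria)
    return pts.reshape(-1, 2)
```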
If you need something that can operate in real time, I'd go down the edge detection route and look for intersecting lines like these guys did. It seems fast and robust to lighting changes.
Read up on the Hough Line Transform in OpenCV to get started.
Addendum:
Black to white is the strongest edge you can have. If you create a gradient image and use the strongest edges found in the scene (via a histogram or otherwise), you will be able to limit the detection to only the black/white edges. Look for intersections. This should give you a small number of candidate center points to apply Hough ellipse detection (or an alternative) to. You could rotate in a template as a further check if you wish.
BTW, OpenCV has edge detection, the Hough transform and fitEllipse if you do go down this route.
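A rough outline of that route in Python/OpenCV (all thresholds are placeholders; the intersection maths assumes the standard (rho, theta) parameterisation returned by HoughLines):

```python
import cv2
import numpy as np

def strong_edge_intersections(gray):
    """Detect strong black/white edges, fit Hough lines and return their
    pairwise intersection points as candidate marker centers."""
    edges = cv2.Canny(gray, 100, 200)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return []

    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            r1, t1 = lines[i][0]
            r2, t2 = lines[j][0]
            if abs(t1 - t2) < np.radians(10):      # nearly parallel, skip
                continue
            # Solve [cos t, sin t] . [x, y] = rho for both lines.
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            x, y = np.linalg.solve(A, np.array([r1, r2]))
            pts.append((x, y))
    return pts
```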
Actually, this 'pizza' pattern is one of the building blocks of the Haar features used in the Viola–Jones object detection framework.
So what I would do is compute the summed area table, or integral image, using cv::integral(img), and then run an exhaustive search for this pattern at various scales (size dependent).
In each window you are using only 9 points (top-left, top-center, ..., bottom-left).
You can train and use cvHaarDetectObjects to detect the marker using VJ.
Probably not the fastest method but it should work.
You can find more info on object detection methods using OpenCV here: http://opencv.willowgarage.com/documentation/object_detection.html
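A sketch of that idea in Python/OpenCV: compute the integral image once, then score a 2x2 checkerboard-like window at each position using only the nine integral-image samples per window (the window sizes, stride and the exact quadrant layout of the marker are assumptions):

```python
import cv2
import numpy as np

def box_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h) from integral image ii."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def score_pattern(ii, x, y, size):
    """Score a 2x2 'pizza' window: bright diagonal quadrants minus dark ones.
    Uses only the 9 integral-image corner samples of the window."""
    half = size // 2
    a = box_sum(ii, x, y, half, half)                 # top-left
    b = box_sum(ii, x + half, y, half, half)          # top-right
    c = box_sum(ii, x, y + half, half, half)          # bottom-left
    d = box_sum(ii, x + half, y + half, half, half)   # bottom-right
    return (a + d) - (b + c)

def exhaustive_search(gray, sizes=(32, 48, 64), stride=4):
    """Slide the pattern over a grayscale image at several scales and
    return the best-scoring window as (x, y, size)."""
    ii = cv2.integral(gray).astype(np.float64)
    best, best_score = None, -np.inf
    h, w = gray.shape
    for size in sizes:
        for y in range(0, h - size, stride):
            for x in range(0, w - size, stride):
                s = abs(score_pattern(ii, x, y, size))
                if s > best_score:
                    best, best_score = (x, y, size), s
    return best
```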
I have a field filled with obstacles, I know where they are located, and I know the robot's position. Using a path-finding algorithm, I calculate a path for the robot to follow.
Now my problem is, I am guiding the robot from grid to grid but this creates a not-so-smooth motion. I start at A, turn the nose to point B, move straight until I reach point B, rinse and repeat until the final point is reached.
So my question is: What kind of techniques are used for navigating in such an environment so that I get a smooth motion?
The robot has two wheels and two motors. I change direction by running the motors in reverse.
EDIT: I can vary the speed of the motors. Basically the robot is an Arduino plus an Ardumoto; I can supply values between 0 and 255 to the motors in either direction.
You need feedback linearization for a differentially driven robot. This document explains it in Section 2.2. I've included relevant portions below:
The simulated robot required for the project is a differential drive robot with a bounded velocity. Since differential drive robots are nonholonomic, the students are encouraged to use feedback linearization to convert the kinematic control output from their algorithms to control the differential drive robots. The transformation follows:

where v, ω, x, y are the linear, angular, and kinematic velocities. L is an offset length proportional to the wheel base dimension of the robot.
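For reference, here is a minimal sketch of the usual feedback-linearization transform for a differential drive robot; this is the standard textbook form, so the exact notation in the linked document may differ:

```python
import math

def feedback_linearize(xdot, ydot, theta, L):
    """Convert desired world velocities (xdot, ydot) of a point located a
    distance L ahead of the wheel axle into body commands (v, omega)."""
    v = math.cos(theta) * xdot + math.sin(theta) * ydot
    omega = (-math.sin(theta) * xdot + math.cos(theta) * ydot) / L
    return v, omega
```

v and omega can then be mapped to left/right wheel speeds in the usual way, e.g. v_left = v - omega * W / 2 and v_right = v + omega * W / 2 for track width W.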
One control algorithm I've had pretty good results with is pure pursuit. Basically, the robot attempts to move to a point along the path a fixed distance ahead of the robot. So as the robot moves along the path, the look ahead point also advances. The algorithm compensates for non-holonomic constraints by modeling possible paths as arcs.
Larger look-ahead distances will create smoother movement. However, larger look-ahead distances will also cause the robot to cut corners, which may lead to collisions with obstacles. You can fix this problem by incorporating ideas from a reactive control algorithm called Vector Field Histogram (VFH). VFH basically pushes the robot away from close walls. While this normally uses a range-finding sensor of some sort, you can extrapolate the relative locations of the obstacles since you know the robot pose and the obstacle locations.
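A compact sketch of pure pursuit (the waypoint list, look-ahead distance and commanded speed are placeholders):

```python
import math

def pure_pursuit(pose, path, lookahead, v):
    """pose = (x, y, theta); path = list of (x, y) waypoints.
    Returns (v, omega) steering the robot along an arc towards a point
    roughly `lookahead` ahead of it on the path."""
    x, y, theta = pose

    # Pick the first waypoint at least `lookahead` away from the robot.
    goal = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            goal = (px, py)
            break

    # Transform the goal into the robot frame.
    dx, dy = goal[0] - x, goal[1] - y
    lx = math.cos(theta) * dx + math.sin(theta) * dy
    ly = -math.sin(theta) * dx + math.cos(theta) * dy

    # Curvature of the arc that passes through the goal point.
    curvature = 2.0 * ly / (lx * lx + ly * ly)
    return v, v * curvature
```

Calling this every control cycle with the current pose estimate gives the arc-following behaviour described above; a larger look-ahead gives smoother but more corner-cutting motion.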
My initial thoughts on this (I'm at work so can't spend too much time):
It depends on how tight you want or need your corners to be (which would depend on how much clearance your path finder gives you from the obstacles).
Given the width of the robot you can calculate the turning radius given the speeds for each wheel. Assuming you want to go as fast as possible and that skidding isn't an issue, you will always keep the outside wheel at 255 and reduce the inside wheel down to the speed that gives you the required turning radius.
Given the angle for any particular turn on your path and the turning radius that you will use, you can work out the distance from that node where you will slow down the inside wheel.
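A quick sketch of that wheel-speed calculation (the 0-255 scale comes from the question; the assumption that the PWM value is proportional to wheel speed is mine):

```python
def inner_wheel_pwm(radius, track_width, outer_pwm=255):
    """For a differential drive turning with the given radius (measured to
    the robot's centre), keep the outer wheel at full speed and return the
    inner wheel PWM that produces that radius:

        v_inner / v_outer = (radius - track_width/2) / (radius + track_width/2)
    """
    ratio = (radius - track_width / 2.0) / (radius + track_width / 2.0)
    return int(round(outer_pwm * max(0.0, ratio)))
```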
An optimization approach is a very general way to handle this.
Use your calculated path as input to a generic non-linear optimization algorithm (your choice!) with a cost function made up of closeness of the answer trajectory to the input trajectory as well as adherence to non-holonomic constraints, and any other constraints you want to enforce (e.g. staying away from the obstacles). The optimization algorithm can also be initialised with a trajectory constructed from the original trajectory.
Marc Toussaint's robotics course notes are a good source for this type of approach. See in particular lecture 7:
http://userpage.fu-berlin.de/mtoussai/teaching/10-robotics/
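As a toy illustration only (this is not the formulation from the notes, and it ignores the non-holonomic constraints), you can treat the waypoints as decision variables and minimise a weighted sum of tracking, smoothness and obstacle terms with a generic optimizer; here scipy.optimize.minimize stands in for the "your choice" algorithm and all weights are made up:

```python
import numpy as np
from scipy.optimize import minimize

def smooth_path(path, obstacles, clearance=0.5,
                w_track=1.0, w_smooth=5.0, w_obs=10.0):
    """path: (N, 2) array of waypoints from the planner.
    obstacles: (M, 2) array of obstacle positions.
    Returns a smoothed (N, 2) path that stays near the original plan."""
    path = np.asarray(path, dtype=float)
    obstacles = np.asarray(obstacles, dtype=float)

    def cost(flat):
        p = flat.reshape(path.shape)
        track = np.sum((p - path) ** 2)                        # stay near the plan
        smooth = np.sum((p[2:] - 2 * p[1:-1] + p[:-2]) ** 2)   # penalise sharp bends
        d = np.linalg.norm(p[:, None, :] - obstacles[None, :, :], axis=2)
        obstacle = np.sum(np.maximum(0.0, clearance - d) ** 2)  # penalise proximity
        return w_track * track + w_smooth * smooth + w_obs * obstacle

    result = minimize(cost, path.ravel(), method="L-BFGS-B")
    return result.x.reshape(path.shape)
```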