Is the library Drake uses for collision detection different from FCL (https://github.com/flexible-collision-library/fcl)? Is Drake's collision detection based on FCL, and where can I find the differences?
Can I use FCL to get the contact force between a sphere and a box? Or can Drake do that?
Thanks very much!
Thanks for asking.
In general, collision detection is based partly on FCL's broadphase and narrowphase and partly on Drake's custom code.
The contact force also depends on the contact model: point contact, hydroelastic, or hydroelastic with fallback. See set_contact_model() and the Hydroelastic Contact User Guide.
Recently we added an example that includes sphere-and-box contact at https://github.com/RobotLocomotion/drake/tree/master/examples/hydroelastic/ball_plate
By default, the example uses hydroelastic contact; you can pass the option --contact_model=point to use point contact instead. With point contact, I believe it will use FCL to calculate the penetration between the sphere and the box. Hydroelastic contact doesn't use FCL for narrowphase collision detection, but it still uses FCL for broadphase detection, as Russ said.
I'm not sure I've answered your questions. If you can point to a specific function in this list of query functions (ComputePointPairPenetration, ComputeSignedDistancePairwiseClosestPoints, HasCollisions, etc.), or provide a small example, I can give more details about what happens under the hood.
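For illustration, here is a minimal pydrake sketch (my own, not the ball_plate example) that drops a sphere onto a box, selects the contact model with set_contact_model(), and reads the sphere/box contact force from the plant's ContactResults. The sizes, masses, and friction values below are placeholders.

```python
# Minimal sketch: sphere falling onto a box, reading the contact force.
# All dimensions, masses, and friction coefficients here are placeholders.
from pydrake.geometry import Box, Sphere
from pydrake.math import RigidTransform
from pydrake.multibody.plant import (
    AddMultibodyPlantSceneGraph, ContactModel, CoulombFriction)
from pydrake.multibody.tree import SpatialInertia, UnitInertia
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
plant.set_contact_model(ContactModel.kPoint)  # or kHydroelasticWithFallback

# A free-floating sphere with collision geometry.
sphere = plant.AddRigidBody(
    "sphere",
    SpatialInertia(mass=0.1, p_PScm_E=[0, 0, 0],
                   G_SP_E=UnitInertia.SolidSphere(0.05)))
plant.RegisterCollisionGeometry(sphere, RigidTransform(), Sphere(0.05),
                                "sphere_collision", CoulombFriction(0.9, 0.8))

# A box fixed to the world, acting as the ground.
plant.RegisterCollisionGeometry(plant.world_body(), RigidTransform(),
                                Box(1.0, 1.0, 0.1), "box_collision",
                                CoulombFriction(0.9, 0.8))
plant.Finalize()
diagram = builder.Build()

simulator = Simulator(diagram)
plant_context = plant.GetMyMutableContextFromRoot(simulator.get_mutable_context())
plant.SetFreeBodyPose(plant_context, sphere, RigidTransform([0.0, 0.0, 0.2]))
simulator.AdvanceTo(0.5)  # let the sphere fall and settle on the box

# With the point-contact model there is one entry per contacting pair.
results = plant.get_contact_results_output_port().Eval(plant_context)
for i in range(results.num_point_pair_contacts()):
    info = results.point_pair_contact_info(i)
    print("contact force:", info.contact_force())
```

With the hydroelastic model you would instead look at results.num_hydroelastic_contacts() and hydroelastic_contact_info().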
Drake provides complete support for collision detection and contact-force computation. It currently does use FCL for one piece of that computation (broad-phase detection).
In my case I am trying to build an application that measures the distance between the camera and any detected human body, exactly like this.
I started with the Android platform; the best match was Use ARCore as input for Machine Learning models, but I have no clue how to change it to Stream_mode.
After losing hope on Android, I found that I can use MediaPipe pose detection to detect the human body, and by measuring the distance between two pose landmarks I can estimate how far away the person is. But I know that ARCore uses what is called hitTest, which uses the Depth API to measure the distance.
Also, there is a MediaPipeUnityPlugin.
So my questions are:
Does MediaPipe provide an AR experience if used as I described? And if there is another way to use MediaPipe, please let me know.
Do we still call it an AR experience even if we do not have a 3D understanding of the environment?
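For what it's worth, here is a rough desktop Python sketch (not an official MediaPipe or ARCore sample) of the MediaPipe idea: run Pose on a video stream and estimate distance from the apparent shoulder width with a pinhole-camera approximation. The shoulder-width and focal-length values are placeholders you would need to calibrate, and this is only a coarse heuristic compared to ARCore's Depth API hit test.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
REAL_SHOULDER_WIDTH_M = 0.40  # assumed average shoulder width (placeholder)
FOCAL_LENGTH_PX = 1000.0      # placeholder; obtain from camera calibration

cap = cv2.VideoCapture(0)
with mp_pose.Pose(static_image_mode=False) as pose:
    for _ in range(300):  # process ~300 frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            w = frame.shape[1]
            lm = results.pose_landmarks.landmark
            left = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
            right = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
            pixel_width = abs(left.x - right.x) * w
            if pixel_width > 1:
                # Pinhole approximation: distance = focal_length * real_size / pixel_size.
                distance_m = FOCAL_LENGTH_PX * REAL_SHOULDER_WIDTH_M / pixel_width
                print(f"approx. distance: {distance_m:.2f} m")
cap.release()
```

Note this only works well when the person roughly faces the camera; it gives a distance estimate, not the 3D scene understanding that ARCore's Depth API provides.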
I am working on identifying an object using a Kinect sensor so that I can get the x, y, z coordinates of the object.
I am trying to find related information on this but have not been able to find much. I have watched the videos as well, but nobody shares the details or any sample code.
This is what I want to achieve https://www.youtube.com/watch?v=nw3yix3XomY
Probably a few people have asked the same question before, but I am new to the Kinect and these libraries, so I need a little more guidance.
I read somewhere that object detection is not possible using the Kinect v1 alone; we need to use third-party libraries like OpenCV or the Point Cloud Library (PCL).
Can somebody explain how exactly, even with third-party libraries, I can identify an object via a Kinect sensor?
It will be really helpful.
Thank you.
As the author of the video you linked stated in the comments, following this PCL tutorial will help you. As you have already found out, achieving this may not be possible using the standalone SDK. Relying on PCL will help you avoid reinventing the wheel.
The idea there is to:
Downsample the cloud to have less data to deal with in the next steps (this also reduces noise a bit).
Identify keypoints/features (i.e. points, areas, or textures that remain somewhat invariant under certain transformations).
Compute the keypoint descriptors, mathematical representations of these features.
For each scene keypoint descriptor, find the nearest neighbor in the model keypoint descriptor cloud and add it to the correspondences vector.
Perform clustering on the keypoints and detect the model in the scene.
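If you prefer Python, here is a conceptual sketch of the same pipeline using Open3D instead of PCL (the PCL tutorial's code is C++). It swaps the tutorial's Hough/geometric-consistency correspondence grouping for RANSAC-based feature matching, and the file names, voxel size, and radii are placeholders; parameter names follow recent Open3D releases, so adjust for your version.

```python
import open3d as o3d

def preprocess(pcd, voxel=0.005):
    # 1. Downsample, then estimate normals and FPFH feature descriptors (steps 1-3).
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

model = o3d.io.read_point_cloud("model.pcd")  # placeholder file names
scene = o3d.io.read_point_cloud("scene.pcd")
model_down, model_fpfh = preprocess(model)
scene_down, scene_fpfh = preprocess(scene)

# Steps 4-5: match descriptors and estimate the model's pose in the scene with RANSAC.
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_down, scene_down, model_fpfh, scene_fpfh,
    mutual_filter=True, max_correspondence_distance=0.01,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3, checkers=[],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print("estimated model pose in the scene:\n", result.transformation)
```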
The software in the tutorial requires the user to manually feed in the model and scene files; it doesn't work on a live feed like the video you linked.
The process should be pretty similar, though. I'm not sure how CPU-intensive the detection is, so it might require additional performance tweaking.
Once you have frame-by-frame detection in place, you could start thinking about actually tracking an object across the frames. But that's another topic.
I am looking for a way to use the encoder information from the motors that drive the wheels of my robot to map a line circuit. The robot navigates around using a single light sensor to follow a line, and on its second lap I want it to recognize where it is in the circuit. I've read a lot about SLAM, but I'm not sure I could implement it with RobotC and only the encoder information.
Any help and advice on the best way to tackle this would be greatly appreciated.
You can use an odometry model to make a prediction of the movement of your robot. Assuming a vehicle with a preferred forward direction on a plane, you would have (x, y, theta) as your state and a state transition that depends on your encoder values. What the transition function looks like really depends on the configuration of your robot. I remember that Introduction to Autonomous Mobile Robots had good coverage of the subject; you'll find lots of examples on the net, though.
Simultaneous Localization and Mapping (SLAM) would mean using a probabilistic odometry model and then performing some correction based on your sensor. At first I thought this wasn't very feasible with your setup, but I actually think it is. Using an occupancy-grid based Rao-Blackwellized particle filter might give you good results. I haven't used the CAS Toolbox, but have a look, as it seems a good place to start.
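As a concrete illustration (my own sketch, not from the book), a basic differential-drive odometry update from encoder readings could look like this; d_left and d_right are the wheel travel distances derived from the encoder ticks, and wheel_base is the distance between the wheels, all in the same units.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """One odometry step for a two-wheel differential-drive robot."""
    d_center = (d_left + d_right) / 2.0        # forward travel of the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate along the average heading of the step (midpoint approximation).
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta

# Example: call this every control cycle with the latest encoder deltas, e.g.
# x, y, theta = update_pose(x, y, theta, ticks_l * M_PER_TICK, ticks_r * M_PER_TICK, 0.12)
```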
All the path-following steering algorithms I can find (e.g. for robots steering to follow colored terrain) are predictive, so they rely on the robot being able to sense some distance beyond its body.
I need path following behavior on a robot with a light sensor on its underside. It can only see terrain it is directly over and so can't make any predictions; are there any standard examples of good techniques to use for this?
I think that the technique you are looking for will most likely depend on the environment you will be operating in, as well as the resources your robot has access to. I have used NXT robots in the past, so you might find this video interesting (the video is not mine).
Assuming that you will be working on a flat, non-glossy surface, you can let your robot wander around until it finds a predefined colour. The robot can then switch to a 'path following' mechanism and keep tracking the line. If it no longer senses the line, it can try turning right and/or left (since the line may no longer be under the robot because it has reached a bend).
In this case, though, the robot will need to know in advance the colour of the line it has to follow.
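A rough sketch of that wander/follow/recover logic, written as a pure decision function so it can be dropped into whatever motor loop your platform provides; the threshold is a placeholder you would calibrate, and a dark line on a light surface is assumed.

```python
LINE_THRESHOLD = 40  # placeholder: light reading separating the line from the background

def steer(light_reading, state):
    """Return (turn_in_degrees, new_state) for one control step.

    state holds the recovery sweep: how far to turn and in which direction
    when the line is lost (e.g. at a bend).
    """
    if light_reading < LINE_THRESHOLD:           # the line is under the sensor
        return 0, {"sweep": 10, "direction": 1}  # drive straight, reset the sweep
    # Line lost: sweep alternately to one side then the other, a little wider
    # each time, until the sensor picks the line up again.
    turn = state["sweep"] * state["direction"]
    return turn, {"sweep": state["sweep"] + 10, "direction": -state["direction"]}

# Usage: state = {"sweep": 10, "direction": 1}; each cycle read the light sensor,
# call steer(), apply the returned turn, then drive forward a small step.
```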
The reason the path-following algorithms you are seeing are predictive is that the robot needs to be able to interpret what it is "seeing" in context.
For instance, consider a coloured path in the form of a straight line. Even in this simple example, how is the robot to know:
Whether there is a coloured square in front of it (and hence whether it should advance)
Which direction it is even travelling in.
These two questions are the fundamental ones the algorithm you are looking for would have to answer (and things get more complex as you add more difficult terrain and paths).
The first can only be answered with suitable forward-looking ability (hence a predictive algorithm), and the second can only be answered with some memory of the previous state.
Based solely on the details you provided in your question, you wouldn't be able to implement an appropriate solution. That said, I would imagine that your sensor input and on-board memory are in fact suitable for a predictive solution; you may just need to investigate further what your hardware's capabilities allow for.
I'm planning on doing the Final Year Project of my degree on Augmented Reality. It will use markers, and there will also be interaction between virtual objects (a sort of simulation).
Do you recommend using libraries like ARToolKit, NyARToolkit, or osgART for such a project, since they come with all the functions for tracking, detection, calibration, etc.? Will there be much work left from the programmer's point of view?
What do you think about using OpenCV and doing the marker detection, recognition, calibration, and other steps from scratch? Would that be too hard to handle?
I don't know how familiar you are with image or video processing, but writing a tracker from scratch will be very time-consuming if you want it to return reliable results. The effort also depends on which kind of markers you plan to use.
ARToolKit, for example, compares the marker content detected in the video stream against images you defined as markers beforehand. It tries to match the images and returns a probability that a certain part of the video stream is a predefined marker. Depending on the threshold you use and the lighting conditions, markers are not always recognized correctly.
Then there are other markers, such as Data Matrix, QR codes, and frame markers (used by QCAR), that encode an id optically, so no image matching is required: all the necessary data can be retrieved from the video stream. There are also more complex approaches like natural feature tracking, where you can use predefined images, provided they offer enough contrast and points of interest to be recognized later by the tracker.
So if you are more interested in the actual application or interaction than in understanding how trackers work, you should base your work on an existing library.
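As one concrete example of the id-encoding marker approach, OpenCV's ArUco module (in the contrib build) does the detection and pose estimation for you. The sketch below uses the pre-4.7 aruco interface (the API changed in OpenCV 4.7), and the camera matrix, distortion coefficients, and marker size are placeholders you would replace with your own calibration.

```python
import cv2
import numpy as np

MARKER_SIZE_M = 0.05  # printed marker edge length (placeholder)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)                     # placeholder distortion

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is not None:
        # Recover each marker's 6-DoF pose; rvec/tvec is what you would feed
        # into your OpenGL model-view matrix to place the virtual objects.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```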
I suggest you use OpenCV: you will find high-quality algorithms, and it is fast. They are continuously developing new methods, so soon it will be possible to run them in real time on mobile devices.
You can start with this tutorial.
Mastering OpenCV with Practical Computer Vision Projects
I did the exact same thing and found Chapter 2 of this book immensely helpful. They provide source code for the marker tracking project and I've written a framemarker generator tool. There is still quite a lot to figure out in terms of OpenGL, camera calibration, projection matrices, markers and extending it, but it is a great foundation for the marker tracking portion.