In Python Drake, I have a MultibodyPlant system that I have defined from a URDF. I am working on a trajectory optimization that involves hand-calculated contact dynamics, and I want to test the calculated trajectory using Drake's built-in simulator. I'm currently visualizing the entire calculated trajectory with MultibodyPositionToGeometryPose fed by a TrajectorySource built from a complete numpy array of positions for all bodies in my system.
However, I want to test my trajectory with the system dynamics handled by Drake instead. Is there a way to pass in a trajectory for some of the positions (i.e. the joints connected to some of the links), and let the Drake simulation calculate the rest? I'd like to do this before attempting to implement a stabilizing LQR controller, to evaluate just the accuracy of my hand-calculated contact dynamics.
That feature is coming soon in the newest contact solver in Drake (https://github.com/RobotLocomotion/drake/issues/16738), but as of today, you can do this using a PidController with its state_projection matrix to select the subset of joints you want to command. Those joints will need to have actuators on them.
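A minimal sketch of that wiring, assuming `plant` is your finalized MultibodyPlant living in `builder`, and `q_traj` is a PiecewisePolynomial over just the commanded joints. The joint indices and gains below are placeholders:

```python
import numpy as np
from pydrake.systems.controllers import PidController
from pydrake.systems.primitives import TrajectorySource
from pydrake.trajectories import PiecewisePolynomial

nq = plant.num_positions()
nv = plant.num_velocities()
controlled = [3, 4, 5]  # hypothetical indices of the commanded joints
nc = len(controlled)

# state_projection maps the full plant state [q; v] to [q_c; v_c].
P = np.zeros((2 * nc, nq + nv))
for row, idx in enumerate(controlled):
    P[row, idx] = 1.0             # pick out q
    P[nc + row, nq + idx] = 1.0   # pick out v (same index works for single-dof joints)

kp = 100.0 * np.ones(nc)  # placeholder gains
ki = np.zeros(nc)
kd = 10.0 * np.ones(nc)
pid = builder.AddSystem(PidController(state_projection=P, kp=kp, ki=ki, kd=kd))

# Desired state [q_d(t); v_d(t)] for the controlled joints, sampled from q_traj.
times = np.linspace(q_traj.start_time(), q_traj.end_time(), 100)
qd = np.hstack([q_traj.value(t) for t in times])
vd = np.hstack([q_traj.EvalDerivative(t, 1) for t in times])
xd_source = builder.AddSystem(
    TrajectorySource(PiecewisePolynomial.FirstOrderHold(times, np.vstack([qd, vd]))))

builder.Connect(plant.get_state_output_port(), pid.get_input_port_estimated_state())
builder.Connect(xd_source.get_output_port(), pid.get_input_port_desired_state())
# Assumes the plant's actuators are exactly these joints, in this order.
builder.Connect(pid.get_output_port_control(), plant.get_actuation_input_port())
```

The uncommanded joints have no actuation input, so the simulator evolves them under the plant's own (contact) dynamics while the PID holds the commanded ones near your trajectory.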
I have an IMU sensor that gives me raw data such as orientation, angular velocity, and linear acceleration. I'm using ROS and doing some Gazebo UUV simulation. Furthermore, I want to get linear velocity from the raw IMU data.
The naive integration method is as follows: the simplest method for IMU state estimation is naive integration of the measured data. We estimate the attitude by integrating the 3D angular velocity data obtained by the IMU. Assuming that the time step Δt is small, the attitude at each time step can be calculated incrementally.
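For concreteness, here is a minimal sketch of that naive integration (numpy/scipy assumed; `gyro` and `accel` are (N, 3) body-frame arrays sampled every `dt` seconds). It drifts exactly as described below:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def integrate_imu(gyro, accel, dt, g=np.array([0.0, 0.0, 9.81])):
    q = R.identity()        # body-to-world attitude
    v = np.zeros(3)         # world-frame linear velocity
    velocities = []
    for w, a in zip(gyro, accel):
        q = q * R.from_rotvec(w * dt)  # incremental attitude update (small dt)
        a_world = q.apply(a) - g       # remove gravity in the world frame
        v = v + a_world * dt           # naive velocity integration (accumulates error)
        velocities.append(v.copy())
    return np.array(velocities)
```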
If I integrate over time, error accumulates and the estimate becomes inaccurate, for example when the robot makes turns. So I'm looking for methods (ROS packages or something outside the ROS framework) or code that can correct that error.
Any help?
I would first recommend that you try fitting your input sensor data into an EKF or UKF node from the robot_localization package. This package is the most used & most optimized pose estimation package in the ROS ecosystem.
It can handle 3D sensor input, but you will have to configure the parameters (there are no real defaults; it's all configuration). Besides the configuration docs, the GitHub repo has good examples of YAML parameter configurations (you'll want a separate file from the launch file) and example launch files.
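Once a filter node is running, the fused estimate, including the linear velocity you're after, is published as a nav_msgs/Odometry message. A minimal sketch of reading it (the topic name below is the package default; adjust it to your configuration):

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # The fused linear velocity lives in the twist portion of the message.
    v = msg.twist.twist.linear
    rospy.loginfo("linear velocity: [%.3f, %.3f, %.3f]", v.x, v.y, v.z)

rospy.init_node("velocity_listener")
rospy.Subscriber("/odometry/filtered", Odometry, callback)
rospy.spin()
```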
I want to create high-fidelity meshes of urban street scenes using point cloud data. The data consists of point clouds from an HDL-64E, and the scenes are very similar to those in the KITTI dataset.
Currently I'm only able to use the 'raw' point clouds and the odometry of the car. Previous work already implemented the LeGO-LOAM algorithm to create a monolithic map and better odometry estimates.
Available Data:
Point Clouds with 10Hz timings
Odometry estimates with higher frequencies (LOAM Output)
Monolithic map of the scene (LOAM output) (~1,500,000 points)
I already did some research and came to the conclusion that I can either:
use the monolithic map with algorithms like Poisson reconstruction, advancing front, etc. (using CGAL; a sketch follows this list)
go the robotics way and use some packages like Voxgraph (which uses Marching Cubes internally)
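For the first option: CGAL is C++, but the Poisson step can be prototyped in a few lines of Python with Open3D (swapped in here purely for illustration; the file name is a placeholder). The depth parameter controls the octree resolution, which bounds how fine features like curbs can get:

```python
import open3d as o3d

# Placeholder file name; substitute your monolithic LOAM map.
pcd = o3d.io.read_point_cloud("monolithic_map.pcd")

# Poisson reconstruction needs oriented normals; estimate them locally.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# Higher depth = finer octree = smaller triangles (and more memory).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
o3d.io.write_triangle_mesh("street_mesh.ply", mesh)
```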
As we might want to integrate image data at a later step, the second option would be preferred.
Questions:
Is there a state-of-the-art way to go?
Is it possible to get a mesh that preserves small features like curbs and sign posts? (I know there might be a feasible limit on how fine the mesh can be.)
I am very interested in some feedback and a discourse on how to tackle this problem 'the right way'.
Thank you for your suggestions/answers in advance!
I am doing a final-year project on lane tracking using a camera. The most challenging task now is measuring the distance between the camera (actually, the car that carries it) and the lane.
While the lane is easily recognized (Hough line transform), I have found no way to measure the distance to it.
There is a way to measure the distance to an object in front of the camera based on the pixel width of the object, but it does not work here because the nearest point of the line is in the camera's blind spot.
What you want is to directly infer the depth map with a monocular camera.
You can refer to my answer here: https://stackoverflow.com/a/64687551/11530294
Usually, we need a photometric measurement from a different position in the world to form a geometric understanding of the world (a.k.a. a depth map). From a single image it is not possible to measure geometry directly, but it is possible to infer depth from prior understanding.
One way to work from a single image is to use a deep learning-based method to directly infer depth. The deep learning approaches are usually built on Python, so if you are only familiar with Python, this is the approach you should go for. If the image is small enough, I think real-time performance is possible. There is a lot of work of this kind using Caffe, TensorFlow, PyTorch, etc.; you can search GitHub for more options. The one I posted here is what I used recently:
reference:
Godard, Clément, et al. "Digging into self-supervised monocular depth estimation." Proceedings of the IEEE international conference on computer vision. 2019.
Source code: https://github.com/nianticlabs/monodepth2
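monodepth2 ships its own loading scripts in the repo above. Just to illustrate the "directly infer depth" idea in a few self-contained lines, here is a sketch using MiDaS via torch.hub instead (a different but comparable model, swapped in only because it loads in one call; the image path is a placeholder):

```python
import cv2
import torch

# Load a small monocular depth model and its matching preprocessing transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
batch = transforms.small_transform(img)

with torch.no_grad():
    # Output is relative inverse depth, not metric depth.
    depth = midas(batch).squeeze().cpu().numpy()
```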
The other way is to use large-FOV video for single-camera SLAM. This approach has various constraints, such as needing good features, a large FOV, slow motion, etc. You can find many works of this kind, such as DTAM, LSD-SLAM, DSO, etc. There are also a couple of packages from HKUST and ETH that do the mapping given the position (e.g., if you have GPS/compass); some of the famous names are REMODE+SVO, open_quadtree_mapping, etc.
One typical example of single-camera SLAM would be LSD-SLAM, which runs in real time.
It is implemented in C++ on ROS, and I remember it does publish the depth image. You can write a Python node to subscribe to the depth directly, or to the globally optimized point cloud, and project it into a depth map from any view angle.
reference: Engel, Jakob, Thomas Schöps, and Daniel Cremers. "LSD-SLAM: Large-scale direct monocular SLAM." European conference on computer vision. Springer, Cham, 2014.
source code: https://github.com/tum-vision/lsd_slam
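A sketch of that Python node (the topic name is hypothetical; run `rostopic list` while lsd_slam is up to find the actual depth/keyframe topic, which may use LSD-SLAM's own keyframe message type rather than sensor_msgs/Image):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_depth(msg):
    # Convert the ROS image to a numpy array (meters or inverse depth).
    depth = bridge.imgmsg_to_cv2(msg)
    rospy.loginfo("depth image %dx%d received", depth.shape[1], depth.shape[0])

rospy.init_node("depth_listener")
rospy.Subscriber("/lsd_slam/depth", Image, on_depth)  # hypothetical topic name
rospy.spin()
```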
I am currently working on a Sign Language Recognition application, where I would like to use a Hidden Markov Model as the classification stage, meaning that I will classify a gesture/posture to obtain the relevant letter or word.
I have completed the first stage, where I detect the hand. I can currently obtain a number of parameters (features) which I can use in my machine learning stage, such as:
convex hull of hand
convexity defects
centroid of the hand
bounding rotated ellipses/rectangles (e.g. obtain any angle needed in terms of rotation)
contour of the hand
moments (I am not sure what these are exactly)
These are all possible to obtain through OpenCV.
My question: once I have all these features, how can I execute the 'feature extraction' stage? That is, if a machine learning algorithm, in this case the HMM, requires a set of probabilities, how can I use the above information?
One idea I have is to create a special data structure with such information which uniquely identifies each gesture, but how do I feed it to the machine learning technique? (in this case the Hidden Markov Model)
Can anyone guide me as to what I should search for at this particular stage, or show me what the real difficulty actually is?
Once you have your set of observations ready, you can feed it to the Viterbi algorithm to detect the best state sequence that may have produced these observations. You can also train your HMM over a data set of samples using the Baum-Welch algorithm. You could have a look at my blog post, which is a simple explanation of recognizing dynamic hand gestures using an HMM (although I am NOT using OpenCV or scanning the contour of the hand). I hope this helps you get a general idea of the processing and learning phases.
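To make the shape of the data concrete, here is a hedged sketch using hmmlearn (my choice for illustration, not something from the post above): each gesture gets its own HMM trained on sequences of per-frame feature vectors, and classification picks the model with the highest likelihood.

```python
import numpy as np
from hmmlearn import hmm

# Each training sample is a sequence of per-frame feature vectors, e.g.
# [centroid_x, centroid_y, num_convexity_defects] extracted with OpenCV.

def train_gesture_model(sequences, n_states=4):
    X = np.vstack(sequences)               # stack all sequences into one array
    lengths = [len(s) for s in sequences]  # per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states, n_iter=100)
    model.fit(X, lengths)                  # Baum-Welch training
    return model

def classify(models, sequence):
    # models is a dict {gesture_name: trained HMM}. score() is the
    # log-likelihood; Viterbi decoding is available via model.decode().
    return max(models, key=lambda name: models[name].score(sequence))
```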
After running optical flow (Lucas-Kanade) on a video, what's the best way to find objects based on this data and track them?
This probably sounds very noobish, but I would like to be able to define a clear outline around objects, so if it's a weirdly shaped bottle or something to be able to detect the edges.
I'm not sure LK is the best algorithm, since it computes the motion of a sparse set of corner-like points, and tracking usually behaves better with a dense optical flow result (such as Farneback or Horn-Schunck). After computing the flow, as a first step, you can threshold its norm (to retain the moving parts) and try to extract connected regions from the result. But be warned that your task is not going to be easy if you don't have a model of the object you want to track.
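A minimal sketch of that first step with OpenCV's Farneback implementation (the frame files and the magnitude threshold are placeholders to tune):

```python
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Dense flow: one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
mag = np.linalg.norm(flow, axis=2)           # per-pixel flow magnitude
moving = (mag > 1.0).astype(np.uint8) * 255  # threshold in pixels/frame (tune)
```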
On the other hand, if you are primarily interested in tracking and a bit of interactivity is acceptable, you can have a look at the camshift sample code to see how to select and track an image region based on its appearance.
--- EDIT ---
If your camera is static, then use background subtraction instead. Using OpenCV 2.4 beta, you have to look for the class BackgroundSubtractor and its subclasses in the video module documentation.
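For reference, in modern OpenCV (3.x/4.x Python bindings) the same idea looks like this (the video path is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("video.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow
    cv2.imshow("foreground", fgmask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
```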
Note also that optical flow can be real-time (or not far from it) with good parameter choices, and also with a GPU implementation. On Windows, you can use flowlib from the TU Graz/GPU4Vision group. OpenCV also has some GPU dense optical flow, for example the class gpu::BroxOpticalFlow.
--- EDIT 2 ---
Joining single-pixel detections into big objects is a task called connected component labelling. There is a fast algorithm for that, implemented in OpenCV. So this gives you a pipeline which is:
motion detection (pix level) ---> connected comp. labeling ---> object tracking (adding motion information, possible trajectories for Kalman filtering...).
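A sketch of the labeling step in that pipeline, applied to a binary foreground mask such as `fgmask` from the background-subtraction snippet above:

```python
import cv2

# Shadows in the MOG2 mask are 127; keep only definite foreground.
_, binary = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)

num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, num):     # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 100:          # drop tiny blobs (tune this threshold)
        print("object %d: bbox=(%d, %d, %d, %d), centroid=%s"
              % (i, x, y, w, h, centroids[i]))
```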
But we'll have to stop here, because we'll soon be far beyond the scope of your initial question ;-)
You can use TLD or CMT for object tracking (both are loosely based on the idea of doing optical flow tracking and model learning at the same time).
You may find the following links useful:
https://www.gnebehay.com/tld/
https://www.gnebehay.com/cmt/
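If you have the opencv-contrib package, its TLD implementation can be tried in a few lines. Where the factory lives depends on your version (cv2.TrackerTLD_create in 3.x contrib, cv2.legacy.TrackerTLD_create in 4.x contrib); the video path is a placeholder:

```python
import cv2

tracker = cv2.legacy.TrackerTLD_create()

cap = cv2.VideoCapture("video.avi")
ok, frame = cap.read()
bbox = cv2.selectROI("select object", frame)  # draw a box around the object
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
```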