I know that a tachometer is an instrument that measures the rotation speed of a shaft or disk.
A tachometer can be used for the following:
1. An NXT motor has a built-in tachometer that keeps track of the current angle (in degrees) of the motor axle.
2. It can be used to determine the speed of the wheels.
What else could it be used for?
A tachometer used to keep track of the angle of any rotational axis, like the one you mention in your question, has a more general parent name that might be more searchable: an encoder. Encoders are used to track angles of more than just wheels; they can track robotic arm joint angles, camera angles, etc. Any movable joint might be fitted with an encoder so that a robot can measure where it is in space. For further Googling and learning: the computation that turns a model of a robot plus the data from that robot's encoders into an end position is called (forward) kinematics.
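As a rough illustration of the encoders-to-pose idea, here is a minimal differential-drive odometry sketch in Python. The wheel radius and wheel base are illustrative values only; the 360 ticks per revolution simply matches the NXT motor's one-tick-per-degree reporting mentioned in the question.

```python
import math

# Illustrative parameters only (not from the original post).
TICKS_PER_REV = 360      # NXT motors report the axle angle in degrees
WHEEL_RADIUS = 0.028     # metres, hypothetical
WHEEL_BASE = 0.12        # distance between the two wheels in metres, hypothetical

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder readings into the robot pose (x, y, heading)."""
    # Tick deltas -> arc length travelled by each wheel.
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0        # forward motion of the robot centre
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    # Midpoint integration of the pose.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# One full revolution on both wheels -> the robot drove straight ahead.
print(update_pose(0.0, 0.0, 0.0, 360, 360))
```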
A computer-tomography device has a roentgen (X-ray) matrix of 20x500 dots with a resolution of 2 mm in each direction. This matrix rotates around a belt which transports the items to be analysed. A special reconstruction algorithm produces a 3D model of the items from the many matrices captured from all 360 perspectives (one image per 1° of rotation).
The problem is, the reconstruction algorithm is very sensitive to the belt speed/position. Measuring the belt position requires quite complicated and expensive positioning sensors and very fine mechanics.
I wonder if it is possible to calculate the belt velocity from the roentgen image itself. It has a width of 40 mm, which should be sufficient for capturing the movement. The problem is that the movement is always in two directions: rotation and X (belt). For those working in the CT area: are you aware of any applications/publications about such direct measurement of the belt/table velocity?
P.S.: It is not for a medical application.
Hmm, interesting idea.
Are you doing a full 180 degrees for the reconstruction? I'd go with the 0° and 180° cone-beam images. They should be approximately the same, apart from some non-linear effects, artifacts, Poisson noise, and differences in 'shadows' and scattering due to perspective.
You could translate the 180° image along the negative x-axis, i.e. in the direction opposite to the movement, and then subtract the images at suitable intervals along this axis. When the absolute sum of the difference hits a minimum, the translation should be approximately the distance the object has moved between 0° and 180°, as the mirror images partially cancel each other out.
This could obviously be ruined by artifacts and wonkily shaped heavy objects. Still, worth a try. I'm assuming your voltage is pretty high if you are doing industrial stuff.
EDIT: "A special reconstruction algorithm produces a 3D model of the items from the many matrices captured from all 360 perspectives (one image per 1° of rotation)."
So yes, you are using 180+ degrees. You could then perhaps use multiple opposite images for a more robust routine. How do you get the full circle? Are you shooting through the belt?
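As a rough, untested sketch of that translate-and-subtract idea: slide the 180° projection along the belt axis and keep the shift with the smallest mean absolute difference over the overlap. Whether the 180° image needs mirroring first, the search range, and the 2 mm pixel size are assumptions to adapt to your actual geometry.

```python
import numpy as np

def estimate_belt_shift(img0, img180, max_shift_px):
    """Estimate the belt displacement (in detector pixels) between the 0° and
    180° projections by sliding img180 along the belt axis (axis 1) and picking
    the translation with the best agreement (minimum mean absolute difference).
    Depending on the acquisition geometry, the 180° view may first need to be
    mirrored (np.fliplr) so that it matches the 0° view apart from the shift."""
    w = img0.shape[1]
    best_shift, best_score = 0, np.inf
    for s in range(-max_shift_px, max_shift_px + 1):
        if s >= 0:
            a, b = img0[:, s:], img180[:, :w - s]
        else:
            a, b = img0[:, :w + s], img180[:, -s:]
        score = np.abs(a - b).mean()   # mean absolute difference over the overlap
        if score < best_score:
            best_score, best_shift = score, s
    return best_shift

# Usage sketch, with 2 mm per detector pixel as stated in the question:
# shift_px = estimate_belt_shift(proj_0deg, np.fliplr(proj_180deg), max_shift_px=15)
# belt_displacement_mm = shift_px * 2.0
```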
It is now standard practice to fuse the measurements from accelerometers and gyros through a Kalman filter, for applications like self-balancing two-wheel carts; see for example: http://www.mouser.com/applications/sensor_solutions_mems/
The accelerometer gives a reading of the tilt angle through arctan(a_x/a_y). It is very confusing to use the term "acceleration" here, since what it really means is the projection of gravity along the device's axes (though I understand that, physically, gravity is really just acceleration).
Here is the big problem: when the cart is trying to move, the motor drives the cart and creates a non-trivial acceleration in the horizontal direction. This makes a_x no longer just the projection of gravity along the device's x-axis; in fact, it makes the measured tilt angle appear larger. How is this handled? Given the maturity of the Segway, I guess there must be some existing ways to handle it. Does anybody have some pointers?
Thanks,
Yang
You are absolutely right. You can estimate pitch and roll angles using the projection of the gravity vector. You can obtain the gravity vector from a motionless accelerometer, but if the accelerometer moves, then it measures gravity plus a linear acceleration component, and the main problem here is to separate the gravity component from the linear accelerations. The best way to do this is to pass the accelerometer signal through a low-pass filter.
Please refer to
Low-Pass Filter: The Basics or
Android Accelerometer: Low-Pass Filter Estimated Linear Acceleration
to learn more about low-pass filters.
A sensor fusion algorithm should be interesting for you as well.
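As a concrete illustration of the low-pass idea, here is a minimal complementary-filter sketch in Python (a common, simpler alternative to the Kalman filter mentioned in the question). The sample period and filter weight are illustrative values, not a recommendation.

```python
import math

DT = 0.01     # sample period in seconds (illustrative, 100 Hz)
ALPHA = 0.98  # weight on the integrated gyro (illustrative)

def accel_tilt(a_x, a_y):
    """Tilt from the accelerometer alone: this is only the gravity direction
    when the horizontal (motor-induced) acceleration is negligible."""
    return math.atan2(a_x, a_y)

def complementary_update(angle, gyro_rate, a_x, a_y):
    """One filter step: trust the integrated gyro for fast changes and let the
    low-passed accelerometer tilt slowly pull the estimate back, removing gyro
    drift. The small (1 - ALPHA) weight acts as the low-pass filter, so a short
    burst of horizontal acceleration while the cart drives only disturbs the
    estimate slightly before it averages out."""
    gyro_angle = angle + gyro_rate * DT      # integrate the angular rate (rad/s)
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_tilt(a_x, a_y)

# Usage: call once per sensor sample, e.g.
# angle = complementary_update(angle, gyro_rate, a_x, a_y)
```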
I have two calibrated cameras looking at an overlapping scene. I am trying to estimate the pose of camera2 with respect to camera1 (because camera2 can be moving; but both camera1 and 2 will always have some features that are overlapping).
I am identifying features using SIFT, computing the fundamental matrix and eventually the essential matrix. Once I solve for R and t (one of the four possible solutions), I obtain the translation up to scale, but is it possible to somehow compute the translation in real-world units? There are no objects of known size in the scene, but I do have the calibration data for both cameras. I've gone through some material on structure from motion and stereo pose estimation, but the concept of scale and its relation to real-world translation is confusing me.
Thanks!
This is the classical scale problem with structure from motion.
The short answer is that you must have some other source of information in order to resolve scale.
This information can be about points in the scene (e.g. a terrain map) or some sensor reading from the moving camera (IMU, GPS, etc.).
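For illustration, here is a hedged OpenCV sketch of the SIFT → essential matrix → (R, t) pipeline with the scale fixed afterwards from external information. It assumes both images share (or have been undistorted to) a single intrinsic matrix K, and known_baseline_m is a stand-in for whatever metric measurement (IMU, GPS, a known marker, ...) you actually have.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K, known_baseline_m):
    """SIFT matching -> essential matrix -> (R, t). t comes out unit-length;
    known_baseline_m (an external, metric measurement of the camera displacement)
    is used afterwards to express it in real-world units."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix + cheirality check pick the physically valid (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # t is only defined up to scale; multiply by the externally known distance
    # between the two camera centres to obtain metres.
    return R, t * known_baseline_m
```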
Assume I have two independent cameras looking at the same scene (there are features visible from both) and that I know the calibration parameters of both cameras individually (I can also perform stereo calibration at a certain baseline, but I don't know if that would be useful). One of the cameras is fixed and stable; the other is noisy in terms of its pose (translation and rotation). As the pose keeps changing over time, is it possible to accurately estimate the pose of the moving camera with respect to the stationary one using image data from both cameras (in OpenCV)?
I've been doing a little bit of reading, and this is what I've gathered so far:
Find features using SIFT and establish point correspondences.
Find the fundamental matrix.
Find the essential matrix and perform SVD to obtain R and t between the cameras.
Does this approach work on a frame-by-frame basis? And how does the setup help in getting the scale factor? Pointers and suggestions would be very helpful.
Thanks!
I have code for head pose estimation which returns yaw, pitch and roll as angles. I need to test whether this code works correctly.
1) With these three angles as input, how can I draw a 3D line like the one below using OpenCV? If so, can someone provide a code snippet to draw such a 3D line in my camera window?
2) I also need to test the accuracy of my pose code. Are there any datasets available for testing it? If so, can someone provide the links? Or are there other ways to test head pose accuracy?
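Not an authoritative answer to 1), but one common way to visualise yaw/pitch/roll with OpenCV is to build a rotation matrix from the angles and draw the rotated axes from a chosen origin pixel. A minimal sketch follows; the Euler-angle convention and the simple orthographic projection are assumptions you would match to whatever your head-pose code actually outputs.

```python
import cv2
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Rotation matrix from yaw (about y), pitch (about x), roll (about z), in radians.
    The axis/order convention is an assumption; adjust it to your pose code."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def draw_pose_axes(frame, yaw, pitch, roll, origin, axis_len=80):
    """Draw the rotated x (red), y (green), z (blue) axes as 2D lines starting at
    `origin` (a pixel coordinate such as the nose tip), dropping the depth component."""
    R = euler_to_rotation(yaw, pitch, roll)
    colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR
    for axis, color in zip(np.eye(3), colors):
        d = R @ axis
        end = (int(origin[0] + axis_len * d[0]),
               int(origin[1] - axis_len * d[1]))  # image y axis points downwards
        cv2.line(frame, origin, end, color, 2)
    return frame

# Usage sketch: angles assumed to come from your estimator in degrees.
# frame = draw_pose_axes(frame, np.radians(yaw), np.radians(pitch), np.radians(roll), (320, 240))
```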
Check out the PRIMA head pose estimation database; it is freely available. It consists of 2790 face images of 15 people, with yaw and pitch varying from -90 to +90 degrees. People in the dataset may or may not wear glasses and have various skin colors. The background is neutral and the face has visible contrast against the background, although the resolution of the images is quite small: 384 x 288.
In the case of PRIMA, each image is labeled with yaw and pitch angles, but not roll. You can read those two values by parsing the image file name, e.g. personne01157+15-30.jpg has a face oriented in such a way that pitch = +15 degrees and yaw = -30 degrees. Here you have an example of how to parse the database file names (although it uses the old OpenCV API, you can use just the part which extracts and parses the file names).
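A small Python sketch of that file-name parsing, following only the pitch/yaw reading given in the example above (the meaning of the middle digits is not interpreted here):

```python
import re

# PRIMA-style names, e.g. personne01157+15-30.jpg -> pitch = +15, yaw = -30.
PRIMA_NAME = re.compile(r"personne(\d{2})(\d+)([+-]\d+)([+-]\d+)\.jpg$")

def parse_prima_name(filename):
    """Return (person_id, pitch_deg, yaw_deg) parsed from a PRIMA file name."""
    m = PRIMA_NAME.search(filename)
    if m is None:
        raise ValueError("not a PRIMA file name: %s" % filename)
    person = int(m.group(1))        # two-digit person number
    pitch = int(m.group(3))         # first signed angle
    yaw = int(m.group(4))           # second signed angle
    return person, pitch, yaw

print(parse_prima_name("personne01157+15-30.jpg"))  # -> (1, 15, -30)
```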
I have used this database for research purposes in my master's thesis; if you would like to use it, you only have to cite their paper in your work.
Check out the following:
http://www.vision.ee.ethz.ch/~gfanelli/head_pose/head_forest.html#db
and
http://gi4e.unavarra.es/databases/hpdb/#!prettyPhoto