I would like to build a platform for my robot that can rotate a full 360 degrees.
I have a stepper motor that can handle that rotation.
The problem is that I have some sensors and controllers placed on that platform, so I need to power them and read data from them.
That means I need wires running from the lower, non-rotating level of the robot up to the platform.
Any thoughts on how I could achieve something like this?
For your application, you will need something to transmit the electrical signals and power to your rotating platform. Thankfully, there is a device called a slip ring which will do just this.
From Wikipedia:
A slip ring is an electromechanical device that allows the transmission of power and electrical signals from a stationary to a rotating structure. A slip ring can be used in any electromechanical system that requires rotation while transmitting power or signals. It can improve mechanical performance, simplify system operation and eliminate damage-prone wires dangling from movable joints.
You can find them at your favorite electronics vendor, but here is an example from Adafruit, distributed by Digikey.
Related
I am working on a line-following robot that works with computer vision. The image processing is done on an iPhone using OpenCV. For controlling the robot I have two ideas:
Generate sounds of given frequencies (e.g. the higher the frequency, the more the robot should move to the left; the lower the frequency, the more the robot should move to the right). I have already done this successfully on an Android device. I found this code: Produce sounds of different frequencies in Swift; however, I do not understand how I can play a sound indefinitely until a new frequency is given. Is this possible with that code, and if so, how?
If it were possible (which I don't know) to precisely control the output waveform of the sound in stereo (one channel for the left motor, one channel for the right motor), I could in theory drive the motor driver directly from those 'sound' waves. Would this be possible, or is it too complicated?
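To make both ideas concrete, here is a rough Python sketch of what I have in mind (my actual code would be Swift on the iPhone); it assumes the numpy and sounddevice libraries and simply keeps playing one sine tone per stereo channel until the frequency variables are changed:

```python
# Hypothetical sketch: continuously play one sine tone per stereo channel
# (left channel -> left motor, right channel -> right motor) and keep playing
# until the frequency values are updated from elsewhere in the program.
import numpy as np
import sounddevice as sd

FS = 44100                 # sample rate in Hz
freqs = [440.0, 440.0]     # [left, right] frequencies; update these at runtime
phases = [0.0, 0.0]        # keep phase per channel so the tones stay continuous

def callback(outdata, frames, time, status):
    t = np.arange(frames) / FS
    for ch in (0, 1):
        outdata[:, ch] = 0.2 * np.sin(2 * np.pi * freqs[ch] * t + phases[ch])
        phases[ch] = (phases[ch] + 2 * np.pi * freqs[ch] * frames / FS) % (2 * np.pi)

with sd.OutputStream(samplerate=FS, channels=2, callback=callback):
    # The stream keeps calling the callback, so the tones play indefinitely.
    # This is where the vision code would update freqs[0] / freqs[1].
    sd.sleep(5000)  # play for 5 seconds in this demo
```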
Note that I would like to avoid using wireless communication such as a Bluetooth or WiFi module, since the environment in which the robot will be used will have a lot of potential interference.
Thank you in advance!
What about infrared communication? Makes more sense than sound.
What sensors does ARCore use: single camera, dual-camera, IMU, etc. in a compatible phone?
Also, is ARCore dynamic enough to still work if a sensor is not available by switching to a less accurate version of itself?
Updated: May 10, 2022.
About ARCore and ARKit sensors
Google's ARCore, like Apple's ARKit, uses a similar set of sensors to track a real-world environment. ARCore can use a single RGB camera along with the IMU, which is a combination of an accelerometer, a magnetometer and a gyroscope. Your phone runs world tracking at 60 fps, while the Inertial Measurement Unit operates at 1000 Hz. There is also one more sensor that can be used in ARCore – an iToF camera for scene reconstruction (Apple's name for it is LiDAR). ARCore 1.25 supports a Raw Depth API and a Full Depth API.
Read what Google says about its COM method, built on Camera + IMU:
Concurrent Odometry and Mapping – An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used for fixing a drift in the tracked motion.
Here's Google US15595617 Patent: System and method for concurrent odometry and mapping.
In 2014–2017 Google tended towards a Multicam + DepthCam config (the Tango project)
In 2018–2020 Google tended towards a SingleCam + IMU config
In 2021 Google returned to a Multicam + DepthCam config
We all know that the biggest problem for Android devices is calibration. iOS devices don't have this issue (because Apple controls its own hardware and software). Poor calibration leads to errors in 3D tracking, so all your virtual 3D objects might "float" in a poorly tracked scene. If you use a phone without an iToF sensor, there's no miraculous button to fix bad tracking (and you can't switch to a less accurate version of tracking). The only solution in such a situation is to re-track your scene from scratch. However, the quality of tracking is much higher when your device is equipped with a ToF camera.
Here are five main rules for good tracking results (if you have no ToF camera):
Track your scene not too fast, not too slow
Track appropriate surfaces and objects
Use well lit environment when tracking
Don't track reflective or refractive objects
Horizontal planes are more reliable than vertical ones
SingleCam config vs MultiCam config
One of the biggest problems of ARCore (and of ARKit, too) is energy impact. We understand that the higher the frame rate, the better the tracking results. But the energy impact at 30 fps is HIGH, and at 60 fps it's VERY HIGH. Such an energy impact will quickly drain your smartphone's battery (due to the enormous burden on the CPU/GPU). So just imagine using two cameras for ARCore – your phone must process two image sequences at 60 fps in parallel, process and store feature points and AR anchors, and at the same time render animated 3D graphics with hi-res textures at 60 fps. That's too much for your CPU/GPU. In such a case, the battery will be dead in 30 minutes and as hot as a boiler. Users don't like that, because it makes for a poor AR experience.
I have a Raspberry Pi connected to a monitor and a camera tracking the user. I would like to know the distance of the user to the screen (or to the camera, if that is better). Preferably I would like to know the distance from the user's face straight to the screen.
Can I do this with just one camera and OpenCV? What about with two cameras?
Otherwise, should I just use a different sensor, like an ultrasonic sensor? Is such a sensor appropriate if it's below or to the side of the screen? What kind of spread/'field of view' does it have?
You could do this with two cameras, I think, by comparing how far the images are displaced, and using some trigonometry. The math will be non-trivial, however. This sounds like a good application for an ultrasonic sensor. The popular HC-SR04 gives pretty accurate (for my purposes) readings from about 30cm to 2m, provided the object is on-axis. I get some useful measurements for objects up to about 20 degrees off axis, but it's considerably less accurate. You can connect a HC-SR04 to the GPIO pins, but I prefer to use commercial i2c interfaces, because doing the timing in the Pi CPU is a pain. In any event, the HC-SR04 is so cheap that you haven't lost a great deal if you buy one just to experiment with.
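If you do wire an HC-SR04 straight to the GPIO pins, a minimal Python sketch of the timing loop looks roughly like this (the pin numbers are my assumption, and the user-space timing jitter is exactly why I prefer the i2c interface boards):

```python
# Minimal sketch of reading an HC-SR04 on the Pi's GPIO pins with RPi.GPIO.
# TRIG=23 / ECHO=24 (BCM numbering) are arbitrary assumptions -- use your wiring.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.output(TRIG, False)
time.sleep(0.05)                          # let the sensor settle

def read_distance_cm(timeout=0.05):
    """Trigger one ping and convert the echo pulse width to centimetres."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                     # 10 us trigger pulse
    GPIO.output(TRIG, False)

    deadline = time.time() + timeout
    while GPIO.input(ECHO) == 0:          # wait for the echo line to go high
        if time.time() > deadline:
            return None
    start = time.time()
    while GPIO.input(ECHO) == 1:          # wait for the echo line to go low
        if time.time() > deadline:
            return None
    pulse = time.time() - start
    return pulse * 34300 / 2              # sound travels ~343 m/s, out and back

try:
    while True:
        d = read_distance_cm()
        print("out of range" if d is None else f"{d:.1f} cm")
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```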
Could you please suggest ways of determining the distance between the camera and a pixel in an image (in real-world units, i.e. cm/m/...)?
The information I have is: the camera's horizontal (120 degrees) and vertical (90 degrees) fields of view, the camera angle (-5 degrees), and the height at which the camera is placed (30 cm).
I'm not sure if this is everything I need. Please tell me what information I should have about the camera and how I can calculate the distance between the camera and one pixel.
Maybe it isn't right to say 'distance between the camera and a pixel', but I think it is clear what I mean. Please write in the comments if something isn't clear.
Thank you in advance!
What I think you mean is, "how can I calculate the depth at every pixel with a single camera?" Without adding some special hardware this is not feasible, as Rotem mentioned in the comments. There are exceptions, and though I expect you may be limited in time or budget, I'll list a few.
If you want to find depths so that your toy car can avoid collisions, then you needn't assume that depth measurement is required. Google "optical flow collision avoidance" and see if that meets your needs.
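To give a flavor of that optical-flow approach, here is a rough OpenCV sketch (the camera index and thresholds are placeholders for illustration, not a tuned solution):

```python
# Hypothetical sketch: dense optical flow as a crude "something is looming" cue.
# If the average flow magnitude on the right half grows much larger than on the
# left half, steer left (and vice versa). Camera index 0 and the thresholds are
# arbitrary assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)            # per-pixel flow magnitude
    h, w = mag.shape
    left, right = mag[:, :w // 2].mean(), mag[:, w // 2:].mean()
    if right > 2.0 and right > 2 * left:
        print("lots of motion on the right -> steer left")
    elif left > 2.0 and left > 2 * right:
        print("lots of motion on the left -> steer right")
    prev_gray = gray
```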
If instead you want to measure depth as part of some Simultaneous Localization and Mapping (SLAM) scheme, then that's a different problem to solve. Though difficult to implement, and perhaps not remotely feasible for a toy car project, there are a few ways to measure distance using a single camera:
Project patterns of light, preferably with one or more laser lines or laser spots, and determine depth based on how the dots diverge or converge. The Kinect version 1 operates on this principle of "structured light," though the implementation is much too complicated to reproduce completely. For a simple collision warning you can apply the same principles in a much simpler way: for example, if the projected light pattern on the right side of the image changes quickly, turn left! Learning how to estimate distance using structured light is a significant project to undertake, but there are plenty of references. (A minimal single-dot version is sketched after this list.)
Split the optical path so that one camera sensor can see two different views of the world. I'm not aware of optical splitters for tiny cameras, but they may exist. But even if you find a splitter, the difficult problem of implementing stereovision remains. Stereovision has inherent problems (see below).
Use a different sensor, such as the somewhat iffy but small Intel R200, which will generate depth data. (http://click.intel.com/intel-realsense-developer-kit-r200.html)
Use a time-of-flight camera. These are the types of sensors built into the Kinect version 2 and several gesture-recognition sensors. Several companies have produced or are actively developing tiny time-of-flight sensors. They will generate depth data AND provide full-color images.
Run the car only in controlled environments.
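To make the structured-light idea from the first bullet concrete in its very simplest form: mount a laser pointer parallel to the camera's optical axis at a known baseline, find the dot in the image, and triangulate. The baseline, focal length, and the crude dot detector below are assumptions for illustration only:

```python
# Hypothetical single-laser-dot range finder. Assumes the laser is mounted
# parallel to the optical axis, offset horizontally by BASELINE_M metres, and
# that the focal length (in pixels) and principal point come from calibration.
# The "brightest red pixel" detector is a stand-in for a real dot detector.
import cv2
import numpy as np

BASELINE_M = 0.05       # assumed 5 cm between laser and camera centre
FX_PIXELS = 600.0       # assumed focal length in pixels (from calibration)
CX_PIXELS = 320.0       # assumed principal point x (roughly the image centre)

def laser_distance_m(frame_bgr):
    red = frame_bgr[:, :, 2].astype(np.float32)
    _, _, _, (u, v) = cv2.minMaxLoc(red)          # brightest pixel = the dot
    pixel_offset = abs(u - CX_PIXELS)
    if pixel_offset < 1:
        return None                                # dot on the axis -> "far"
    # Similar triangles: offset / fx = baseline / distance
    return BASELINE_M * FX_PIXELS / pixel_offset

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    d = laser_distance_m(frame)
    print("no dot / too far" if d is None else f"~{d:.2f} m")
```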
The environment in which your toy car operates is important. If you can limit your toy car's environment to a tightly controlled one, you can limit the need to write complicated algorithms. As is true with many imaging problems, a narrowly defined problem may be straightforward to solve, whereas the general problem may be nearly impossible to solve. If you want your car to run "anywhere" (which likely isn't true), assume the problem is NOT solvable.
Even if you have an off-the-shelf depth sensor that represents the best technology available, you would still run into limitations:
Each type of depth sensing has weaknesses. No depth sensors on the market do well with dark, shiny surfaces. (Some spot sensors do okay with dark, shiny surfaces, but area sensors don't.) Stereo sensors have problems with large, featureless regions, and also require a lot of processing power. And so on.
Once you have a depth image, you still need to run calculations, and short of having a lot of onboard processing power this will be difficult to pull off on a toy car.
If you have to make many compromises to use depth sensing, then you might consider just using a simpler ultrasound sensor to avoid collisions.
Good luck!
I have a game controller (e.g. a Wii controller) which gives me its position and orientation in software.
The given spatial data has a lot of noise, both because of the hardware, and because the user is a human, and human hands aren't stable.
I want to implement something similar to a gimbal, to stabilize the input and remove noise.
Ideally, if the hand is held in place, the spatial data should stay the same, even if in reality the hand moves a little bit.
And when the hand is moving, the data should be stable and smooth, and not all jiggly and noisy.
I tried simple things like a moving average, but it's really not as effective as an actual gimbal.
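For reference, the kind of smoothing I tried is roughly an exponential moving average over the reported position, something like this (the field names and numbers are just an illustration):

```python
# Hypothetical illustration of the "simple smoothing" I tried: an exponential
# moving average over the controller's reported position. It damps jitter, but
# it also lags, and it never fully "locks" the way a mechanical gimbal does.
import numpy as np

class EmaSmoother:
    def __init__(self, alpha=0.1):
        self.alpha = alpha          # 0 < alpha <= 1; smaller = smoother but laggier
        self.state = None

    def update(self, position):
        position = np.asarray(position, dtype=float)
        if self.state is None:
            self.state = position
        else:
            self.state = self.alpha * position + (1 - self.alpha) * self.state
        return self.state

smoother = EmaSmoother(alpha=0.1)
for raw in [(0.0, 0.0, 1.0), (0.02, -0.01, 1.01), (-0.01, 0.02, 0.99)]:
    print(smoother.update(raw))     # raw controller samples -> smoothed position
```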
Searching for anything related to "gimbal" turns up either game rotation math (gimbal lock) or electronics projects that use a physical gimbal.
Are there any resources related to stabilizing spatial data?