Line follower overshoot - robotics

When the line follower robot reaches the top of an inclined surface and transitions onto the flat surface, the sensors' distance from the ground increases and the robot overshoots the line.
What is the solution to this problem?

Calibrate the sensors so that they still work when the robot reaches the top of the inclined surface (i.e. when the robot crests the peak and the distance between the sensors and the ground increases).
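As one concrete (hypothetical) sketch of that calibration: record each sensor's minimum and maximum reading while sweeping the robot over the line, including at the increased sensor height, then normalize raw readings into a fixed range so line/background thresholds keep working when the sensors lift off the ground. The function names and the 0–1000 range are assumptions, not a specific library's API:

```python
def calibrate(samples):
    """Record per-sensor minima/maxima over a calibration sweep.

    `samples` is a list of raw reading lists, one per time step
    (hypothetical raw values from an IR reflectance array).
    """
    n = len(samples[0])
    mins = [min(s[i] for s in samples) for i in range(n)]
    maxs = [max(s[i] for s in samples) for i in range(n)]
    return mins, maxs

def normalize(raw, mins, maxs):
    """Map each raw reading into 0..1000 using its calibrated range,
    so thresholds stay valid regardless of sensor height."""
    out = []
    for r, lo, hi in zip(raw, mins, maxs):
        span = max(hi - lo, 1)          # avoid division by zero
        out.append(min(max((r - lo) * 1000 // span, 0), 1000))
    return out
```

With this, the line-detection threshold is applied to the normalized values, so it tolerates the overall drop in reflectance when the sensor-to-ground distance grows at the crest.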


Accelerations to Velocity in iOS - wrong sign

I am currently developing an app which should calculate the speed from the accelerations of the device. To achieve this I am using Sensing Kit. The app records data from the user's motion and saves it to local storage. When the user stops the recording, it is possible to plot the collected accelerations as velocity over time (three graphs, one for each axis of the accelerometer).
Sensing Kit uses startAccelerometerUpdates of CMMotionManager to get the accelerations. To calculate the velocity I do some signal processing and integrate the acceleration (multiplying by 9.81, because Apple reports acceleration in units of gravity). The result is the velocity over the recorded time (it's not very precise, but that doesn't matter in my case).
I tested my app by sliding the phone over a table with the screen facing up and the top edge of the screen pointing in the direction of motion. The movement later shows up in the resulting Y-axis graph, but with a negative velocity (the accelerations are negative, too). I expected both the velocity and the acceleration to be positive, because I moved the device in the positive direction of its Y-axis.
The same happens with the X-axis when I move the phone along that axis on the table.
I tested it today without Sensing Kit and got the same results.
Gravity is always as I expect: a negative value on the Z-axis, because it is an acceleration toward the ground.
Can somebody explain to me why the acceleration of the sensor has the wrong sign?
Thank you.
https://developer.apple.com/documentation/foundation/unitacceleration
https://developer.apple.com/documentation/foundation/units_and_measurement
The Foundation framework provides unit types for this; you could use them to convert between UnitAcceleration and UnitSpeed.
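As a sanity check on the unit handling, the integration step the asker describes (g → m/s², then integration over time) can be sketched like this (Python rather than Swift, purely illustrative; the function name and sample interval are made up). Note that this does not explain the sign itself, which comes from the accelerometer's axis convention:

```python
G = 9.81  # m/s² per g; the asker's CMMotionManager data is in units of g

def velocity_from_accel(accel_g, dt):
    """Trapezoidal integration of one accelerometer axis.

    accel_g : samples in g for a single axis
    dt      : sample interval in seconds
    Returns the velocity series in m/s (initial velocity assumed 0).
    """
    v = [0.0]
    for a0, a1 in zip(accel_g, accel_g[1:]):
        v.append(v[-1] + 0.5 * (a0 + a1) * G * dt)
    return v
```

If the raw samples carry the opposite sign convention, the whole velocity curve is simply mirrored, which matches the behavior described in the question.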

Quantify how good an object detection is in image processing

I developed an Oil & Gas pipeline detector, which works fairly well. The detector's output is a line that represents the pipe's position with respect to the camera.
Although it has a very low false-positive rate, I would like to quantify how reliable my detection is, so I can provide that to the other components that receive the line information.
Initially I computed the standard deviation of the last 10 samples, which gave me a good starting point, since the standard deviation increases when a false positive is detected. However, since the camera moves over time, this metric is not reliable, because the movement itself increases the value.
I have the camera's velocity information, so I thought I could fuse the "predicted" measurement with the detected one, with a Kalman filter for instance. The filter covariance would give the estimate I want.
Edited to add more relevant information:
Single camera with known parameters and fixed focal length.
The camera is attached to a robot's body.
The camera moves at low velocities (max 0.5 m/s and 0.3 rad/s).
The detector output is the line angle and the shortest camera-to-line distance in meters.
However, I'm not sure whether a Kalman filter is the right/best technique to apply here. Does anyone have a suggestion for how I can handle this?
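For what it's worth, the fusion idea can be sketched as a minimal linear Kalman filter over the detector's two outputs (angle, distance), with the camera's own velocity entering the prediction step so ego-motion no longer inflates the residual the way it inflated the 10-sample standard deviation. Everything below is an assumption to be adapted: the additive ego-motion model, the noise matrices, and using trace(P) as a confidence proxy:

```python
import numpy as np

class LineKF:
    """Minimal Kalman filter on the detected line state [angle, distance]."""

    def __init__(self):
        self.x = np.zeros(2)              # [angle (rad), distance (m)]
        self.P = np.eye(2)                # state covariance
        self.Q = np.diag([1e-4, 1e-4])    # process noise (placeholder values)
        self.R = np.diag([1e-2, 1e-2])    # measurement noise (placeholder values)

    def predict(self, cam_ang_vel, cam_lin_vel, dt):
        # Ego-motion shifts the line; a simple additive model is assumed here.
        self.x = self.x + np.array([cam_ang_vel, cam_lin_vel]) * dt
        self.P = self.P + self.Q

    def update(self, z):
        S = self.P + self.R               # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return np.trace(self.P)           # scalar "confidence" proxy
```

A false positive then shows up as a large innovation (z minus the prediction) rather than as raw sample scatter, and the covariance trace gives the single reliability number to pass downstream.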

How to make a stable feedback controller which can only generate intermittent impulses?

I have a KUKA iiwa (7-joint robot arm). Attached to it is a circular aluminum platter with a steel ball on it. The goal of the project (for giggles / as a challenge) was to use the internal torque sensors of the robot to balance the ball in the middle of the platter. Because I was unable/not allowed to use FRI (Fast Robot Interface), whereby I could control the robot from C with about a 3 ms feedback loop, I can only update the robot position at about 4 Hz... My quick and dirty solution consisted of the following:
Measure the torque on the final two axes of the arm and apply a mapping to estimate the ball's position (filtering and hysteresis were implemented to improve the data quality). If the ball's velocity was sufficiently stable, generate a motion to cancel that velocity (an impulse "go to angle and return to neutral position" motion). Overlaid on that was a small proportional gain that pushed the ball toward the center of the platter.
My question: What is the professional/ correct solution to this situation (where your controller can only hit the system with impulses rather than continuous feedback)?
Here is a picture of the setup:
A slightly damped negative feedback loop, where ball.posX dictates the strength of y.rot.arm?
https://en.wikipedia.org/wiki/Control_system
I once coded something like it; a colleague of mine who's into PLCs called it that. I used fuzzy-logic optimization back then.
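To make the "sampled impulses" idea more concrete: one textbook-style approach is to treat the 4 Hz update rate as a sampled-data system and design a discrete-time PD (or LQR) law, sizing each impulse from the sampled ball state. Below is a toy 1-D sketch; the gains and the idealized "each impulse changes the ball's velocity directly" model are assumptions standing in for the real plate/arm dynamics:

```python
DT = 0.25          # 4 Hz update period from the question
KP, KD = 2.0, 3.0  # gains tuned for the toy model below, not the real arm

def impulse_controller(pos, vel):
    """Discrete PD law: one tilt impulse per 250 ms sample."""
    return -(KP * pos + KD * vel)

def simulate(steps=40):
    """Toy 1-D ball-on-plate: each impulse changes the ball's velocity
    directly (an idealized stand-in for the 'tilt and return to
    neutral' motion)."""
    pos, vel = 0.3, 0.0   # start 30 cm off-center
    for _ in range(steps):
        vel += impulse_controller(pos, vel) * DT
        pos += vel * DT
    return pos, vel
```

The point of the sketch is that stability is analyzed in discrete time: the closed-loop state transition matrix must have eigenvalues inside the unit circle at the 0.25 s sample period, which is exactly where a slow update rate limits the achievable gains.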

What are the uses of a tachometer in ground mobile robots?

I know that a tachometer is an instrument that measures the rotation speed of a shaft or disk.
A tachometer can be used for the following:
1. An NXT motor has a built-in tachometer that keeps track of the current angle (in degrees) of the motor axle.
2. It can be used to determine the speed of the wheels.
What else could it be used for?
A tachometer used to keep track of the angle of any rotational axis, like the one you mention in your question, has a more general parent name that might be more searchable: an encoder. Encoders are used to track angles of more than just wheels; they can track robotic arm joint angles, camera angles, etc. Any movable joint might be fitted with an encoder so that a robot can measure where it is in space. For further Googling and learning, the computation that turns a model of a robot plus the data from its encoders into an end position is called forward kinematics.
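For a ground mobile robot specifically, the most common use is odometry: wheel encoder increments feed straight into differential-drive dead reckoning. A minimal sketch (the midpoint-heading approximation and the parameter names are my own choices):

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive dead reckoning from one encoder update.

    d_left/d_right: distance each wheel travelled this step
                    (encoder ticks * metres-per-tick)
    wheel_base    : distance between the two wheels
    Returns the updated pose (x, y, heading).
    """
    d = (d_left + d_right) / 2.0              # distance of robot centre
    dtheta = (d_right - d_left) / wheel_base  # heading change
    # Integrate using the heading at the midpoint of the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Calling this every control cycle accumulates an estimated pose, which is the encoder-based half of most wheeled-robot localization stacks.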

Position estimation for iOS using accelerometer

In iOS, it is easy to access Linear Acceleration, which equals the raw acceleration minus gravity.
I am trying to estimate position by double-integrating the Linear Acceleration. For that, I recorded data while keeping the phone steady on a table.
Then I did the double integration in Matlab using cumtrapz, but when I plot the position, it grows with time.
What am I doing wrong? I was expecting that the position should be 0.
From what I've read, this is too error-prone to be useful. Accelerometer-based position calculations are subject to small drift errors, which accumulate over time. (If the phone is traveling at 100 km/h constant velocity when your app first launches, you can't tell.) All you can measure is acceleration.
There will always be bias errors from the sensors, which grow over time under integration. Can you calculate the drift of the sensor while the device is at rest? Then take the mean of that drift, subtract it from the input so that the sensor reads 0 at rest, and then double integrate.
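That suggestion can be sketched as follows (Python standing in for the Matlab workflow; the rest-sample count is an assumption). Note this only removes a constant bias; the remaining noise will still make the position drift, just more slowly:

```python
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral (like Matlab's cumtrapz)."""
    steps = 0.5 * (y[1:] + y[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(steps)))

def position_with_bias_removal(accel, dt, rest_samples):
    """Subtract the at-rest mean (bias estimate) before double integrating.

    accel        : linear-acceleration samples of one axis (m/s²)
    dt           : sample interval in seconds
    rest_samples : how many initial samples were recorded at rest
    """
    bias = np.mean(accel[:rest_samples])
    vel = cumtrapz(accel - bias, dt)
    return cumtrapz(vel, dt)
```

With a perfectly constant bias the corrected position stays at zero; in practice the bias wanders, which is why the integrated position still diverges over longer recordings.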
