Using CeeBot, I haven't found a way to get the tilt angle of a robot.
For example, if the robot has to shoot at an enemy, it has to change the angle of the cannon to aim at the enemy.
But if the robot is not on flat ground but on a slope, the angle of the cannon has to take the angle of the slope into account.
Is it possible to know this angle?
You're looking for the "pitch" value of your robot.
For example, this relatively simple code segment finds the nearest TargetBot and then uses the pitch value plus some basic trigonometry to feed the correct value into aim():
// our variables
object KillMe;            // the target we want to shoot at
float range;              // distance to the target
float ZDif;               // height difference between us and the target
float absoluteElevation;  // elevation of the line of sight to the target
float correctedElevation; // elevation corrected for our own tilt

// find the nearest TargetBot
KillMe = radar(TargetBot);

// calculate the angle we need to adjust our cannon
range = distance(this.position, KillMe.position);
ZDif = KillMe.position.z - this.position.z;
absoluteElevation = atan(ZDif / range);
correctedElevation = absoluteElevation - pitch; // compensate for the slope we are standing on
aim(correctedElevation);
fire(0.1);
Obviously, it doesn't take into account the minimum/maximum elevation of the particular bot weapon you're using, or its minimum/maximum range.
I have been working on orientation estimation, and I need to estimate the correct heading when I am walking in a straight line. After facing some roadblocks, I started from the basics again.
I have implemented a complementary filter from here, which uses the gravity vector obtained from Android (not raw acceleration), raw gyro data, and raw magnetometer data. I am also applying a low-pass filter to the gyro and magnetometer data and using that as input.
The output of the complementary filter is Euler angles, and I am also recording TYPE_ROTATION_VECTOR, which outputs the device orientation as a 4D quaternion.
So I thought I would convert the quaternions to Euler angles and compare them with the Euler angles obtained from the complementary filter. The output of the Euler angles is shown below with the phone kept stationary on a table.
As can be seen, the yaw values are off by a huge margin.
What am I doing wrong in this simple case when the phone is stationary?
Then I walked around my living room and got the following output.
The shape of the complementary filter output looks very good and is very close to that of Android, but the values are off by a huge margin.
Please tell me what I am doing wrong.
I don't see any need to apply a low-pass filter to the Gyro. Since you're integrating the gyro to get rotation, filtering it first could mess everything up.
Be aware that TYPE_GRAVITY is a composite sensor reading synthesized from gyro and accel inside Android's own sensor fusion algorithm. Which is to say that this has already been passed through a Kalman filter. If you're going to use Android's built-in sensor fusion anyway, why not just use TYPE_ROTATION_VECTOR?
Your angles are in radians by the looks of it, and the error in the first set wasn't too far from 90 degrees. Perhaps you've swapped X and Y in your magnetometer inputs?
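Not the poster's code, but a quick way to see why an axis mix-up lands in that ballpark: with the device flat, a naive heading is essentially the arctangent of the two horizontal magnetometer components (up to sign and axis conventions), and feeding them in swapped reflects the heading about the 45-degree line, which can easily put yaw off by tens of degrees:

#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical horizontal magnetometer components (units don't matter here).
    double mx = 30.0, my = 10.0;

    double heading        = std::atan2(my, mx);  // naive flat-device heading
    double headingSwapped = std::atan2(mx, my);  // same data with x and y swapped

    // atan2(x, y) == pi/2 - atan2(y, x) (modulo wrapping), i.e. a reflection
    // about 45 degrees rather than a constant offset.
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    std::printf("heading = %.1f deg, swapped = %.1f deg\n",
                heading * kRadToDeg, headingSwapped * kRadToDeg);
    return 0;
}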
Here's the approach I would take: first write a test that takes accel and mag and synthesizes Euler angles from them. Ignore the gyro for now. Walk around the house and confirm that it does the right thing, but is jittery.
Next, slap an aggressive low-pass filter on your algorithm, e.g.
yaw0 = yaw;
yaw = computeFromAccelMag(); // yaw in radians
factor = 0.2; // between 0 and 1; experiment
yaw = yaw * factor + yaw0 * (1-factor);
Confirm that this still works. It should be much less jittery but also sluggish.
Finally, add gyro and make a complementary filter out of it.
dt = time_since_last_gyro_update;
yaw += gyroData[2] * dt; // test: might need to subtract instead of add
yaw0 = yaw;
yaw = computeFromAccelMag(); // yaw in radians
factor = 0.2; // between 0 and 1; experiment
yaw = yaw * factor + yaw0 * (1-factor);
The key thing is to test every step of the way as you develop your algorithm, so that when a mistake happens, you'll know what caused it.
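Putting those stages together, here is a minimal C++ sketch of the final complementary step. The struct, the parameter names, and the idea of passing in a precomputed accel/mag yaw are my own framing of the pseudocode above, not a drop-in implementation:

#include <cmath>

// Complementary filter for yaw: integrate the gyro for smoothness, then nudge
// the estimate toward the absolute (but noisy) accel/mag heading.
struct YawFilter {
    double yaw = 0.0;     // current estimate, radians
    double factor = 0.2;  // blend weight for the accel/mag heading; experiment

    // gyroZ: raw z-axis rate in rad/s, dt: seconds since the last gyro sample,
    // yawAbsolute: the accel/mag yaw validated in the earlier steps (radians).
    void update(double gyroZ, double dt, double yawAbsolute) {
        yaw += gyroZ * dt;  // test: might need to subtract instead of add

        // Wrap-safe version of yaw = yawAbsolute*factor + yaw*(1 - factor).
        double err = std::atan2(std::sin(yawAbsolute - yaw),
                                std::cos(yawAbsolute - yaw));
        yaw += factor * err;
    }
};

You'd call update() once per gyro sample; raising factor trusts the magnetometer more, lowering it trusts the gyro more.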
I need to measure the distance of a wall from the user. When the user opens the camera and points it at any surface, I need to get the distance. I have read this link: Is it possible to measure distance to object with camera? and I used the code for finding the iPhone camera angle from here: http://blog.sallarp.com/iphone-accelerometer-device-orientation.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // Get the current device angle
    float xx = -[acceleration x];
    float yy = [acceleration y];
    float angle = atan2(yy, xx);
    NSLog(@"angle = %f", angle);
}
d = h * tan(angle)
But nothing happens in the NSLog output or the camera.
In the comments, you shared a link to a video: http://youtube.com/watch?v=PBpRZWmPyKo.
That app is not doing anything particularly sophisticated with the camera; rather, it appears to calculate distances using basic trigonometry, and it accomplishes this by constraining the business problem in several critical ways:
First, the app requires the user to specify the height at which the phone's camera lens is being held.
Second, the user is measuring the distance to something sitting on the ground and aligning the bottom of that to some known location on the screen (meaning you have a right triangle).
Those two constraints, combined with the accelerometer and the camera's lens focal length, would allow you to calculate the distance.
If your target cross-hair is in the center of the screen, the problem is greatly simplified and becomes a matter of simple trigonometry, i.e. your d = h * tan(angle).
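As a toy illustration of just that trigonometry (the function name, units, and the choice of measuring the tilt from straight down are my own assumptions):

#include <cmath>
#include <cstdio>

// Distance along the floor to the point under the cross-hair, given the height
// of the camera lens above the floor and the tilt of the line of sight,
// measured from the vertical (pointing straight down = 0).
double distanceAlongFloor(double cameraHeightMeters, double tiltFromVerticalRadians) {
    return cameraHeightMeters * std::tan(tiltFromVerticalRadians);  // d = h * tan(angle)
}

int main() {
    const double kPi = 3.14159265358979323846;
    // Example: lens held 1.4 m above the floor, tilted 60 degrees from straight down.
    double d = distanceAlongFloor(1.4, 60.0 * kPi / 180.0);
    std::printf("distance ~= %.2f m\n", d);  // prints roughly 2.42 m
    return 0;
}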
BTW, the "angle" code in the question appears to measure the rotation about the z-axis, the clockwise/counter-clockwise rotation as the device faces you. For this problem, though, you want to measure the rotation of the device about its x-axis, the forward/backward tilt. See https://stackoverflow.com/a/16555778/1271826 for example of how to capture the device orientation in space. Also, that answer uses CoreMotion, whereas the article referenced in your question is using an API that has since been deprecated.
The only way this would be possible is if you could read out the setting of the auto-focus mechanism in the lens. To my knowledge this is not possible.
I want to find out the pitch, yaw, and roll on an iPad 1. Since there is no deviceMotion facility, can I get this data from the accelerometer? I assume that I can use the vector that it returns to compare against a reference vector i.e. gravity.
Does iOS detect when the device is still and then take that as the gravity vector? Or do I have to do that?
Thanks.
It's definitely possible to calculate pitch and roll from accelerometer data, but yaw requires more information (a gyroscope for sure, though a compass could possibly be made to work).
For an example, look at Hungry Shark for iOS. Based on how their tilt-calibration UI works, I'm pretty sure they're using the accelerometer instead of the gyroscope.
Also, here are some formulas I found in a blog post from Taylor-Robotic for calculating pitch and roll:
Now that we have 3 outputs expressed in g we should be able to
calculate the pitch and the roll. This requires two further equations.
pitch = atan (x / sqrt(y^2 + z^2))
roll = atan (y / sqrt(x^2 + z^2))
This will produce the pitch and roll in radians, to convert them into
friendly degrees we multiply by 180, then divide by PI.
pitch = (pitch * 180) / PI
roll = (roll * 180) / PI
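A quick C++ rendering of those quoted formulas (a sketch only; x, y, z are the raw accelerometer components in g, and the sign conventions depend on how the device is oriented):

#include <cmath>
#include <cstdio>

// Pitch and roll from a single accelerometer sample, following the formulas
// quoted above; results are converted from radians to degrees.
void pitchRollFromAccel(double x, double y, double z,
                        double* pitchDeg, double* rollDeg) {
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    *pitchDeg = std::atan(x / std::sqrt(y * y + z * z)) * kRadToDeg;
    *rollDeg  = std::atan(y / std::sqrt(x * x + z * z)) * kRadToDeg;
}

int main() {
    double pitch = 0.0, roll = 0.0;
    pitchRollFromAccel(0.0, 0.0, -1.0, &pitch, &roll);  // device lying flat on a table
    std::printf("pitch = %.1f deg, roll = %.1f deg\n", pitch, roll);  // both ~0
    return 0;
}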
The thing I'm still looking for is how to calibrate the pitch and roll values based on how the user is holding the device. If I can't figure it out soon, I may open up a separate question. Good Luck!
The context is an iPad game in which I want an on-screen object to be controlled by X/Y tilt of the device.
I was wondering if anybody can point me in the direction of resources for deciding on an appropriate mapping of tilt to the movement behaviour (e.g. whether people tend to use the "raw" rotation values to control the acceleration, velocity or direct position, and how comfortable players have been found to be with these different types of 'mapping' of device rotation to object movement).
I appreciate that the appropriate choice can depend on the particular type of game/object being controlled, and that some trial and error will be needed, but I wondered as a starting point at least what existing knowledge there was to draw on.
First you're going to want to apply a low-pass filter to isolate tilt from noise and your user's shaky hands; Apple shows this in their accelerometer examples:
x = accel.x * alpha + x * (1.0 - alpha);
y = accel.y * alpha + y * (1.0 - alpha);
A higher alpha gives more responsiveness, at the cost of noisier input.
Unless your game is intentionally simulating a ball balancing on the face of the screen, you probably don't want to apply your tilt values to acceleration, but rather to target velocity or target position, applying "fake" smooth acceleration to get it there.
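For example, one way to sketch that idea (the names and constants here are mine, not from Apple's sample):

// Map the filtered tilt to a *target* velocity, then ease the actual velocity
// toward it each frame, which feels like smooth "fake" acceleration.
struct TiltMover {
    float maxSpeed = 300.0f;  // points per second at full tilt (tune to taste)
    float easing   = 8.0f;    // how quickly velocity chases the target (per second)
    float velocity = 0.0f;    // current horizontal velocity
    float position = 0.0f;    // current horizontal position

    void update(float filteredTiltX, float dt) {
        float targetVelocity = filteredTiltX * maxSpeed;        // tilt -> target velocity
        velocity += (targetVelocity - velocity) * easing * dt;  // smooth approach
        position += velocity * dt;                              // integrate to position
    }
};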
Also, this answer has a neat idea if Unity3d is acceptable for your project, and this paper has some handy numbers on the practical limits of using tilt control as input, including making the important point that users have a much easier time controlling absolute angle position than velocity of tilt.
So, we desire to move our character - we'll call him 'loco' - based on the accelerometer's x and y data.
Now, if we do want the magnitude of the tilt to affect the rate of loco's travel, we can simply factor the accelerometer x or y value directly into our algorithm for his movement.
For example (pseudocode):
// we'll call our accelerometer value for x 'tiltX'
// tiltX = .5; // assume the accelerometer updated the latest x value to .5
// unitOfMovement = 1; // some arbitrary increment you choose for one unit of movement
loco.x += unitOfMovement * tiltX;
This will cause loco to move that number of pixels each game cycle (or whatever cycle you are driving the movement updates by), which here is 1 pixel multiplied by the accelerometer value. So if the character normally moves 1 pixel right at full tiltX (1.0), then if tiltX comes in from the accelerometer at .5, loco will move half that. As the value of tiltX increases, so does loco's movement.
Another consideration is whether you want the movement tied directly to the rate of accelerometer events. Probably not, so you can keep an array of just the last ten x values (same concept for the y values) sent by the accelerometer and use the current average of that array (sum of the x values / number of elements) on each game loop. That way, the sensitivity will feel more appropriate than driving the movement at whatever the accelerometer's update rate happens to be.
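A minimal sketch of that averaging idea (illustrative only, not tied to any particular engine):

#include <array>
#include <cstddef>

// Keep the last N accelerometer samples and expose their average, so movement
// is driven by a smoothed value instead of by raw accelerometer events.
template <std::size_t N = 10>
class RollingAverage {
public:
    void push(float sample) {
        sum_ -= buffer_[index_];   // drop the oldest sample's contribution
        buffer_[index_] = sample;
        sum_ += sample;
        index_ = (index_ + 1) % N;
        if (count_ < N) ++count_;
    }

    float average() const {
        return count_ == 0 ? 0.0f : sum_ / static_cast<float>(count_);
    }

private:
    std::array<float, N> buffer_{};  // zero-initialized ring buffer
    float sum_ = 0.0f;
    std::size_t index_ = 0;
    std::size_t count_ = 0;
};

In the game loop you would push the latest tiltX into one of these and use average() in place of the raw value (and the same again for y).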
Have a look at Point in Tilt Direction - iPhone; I think its first answer is very helpful.
I'd like to deflect a ball at an angle depending on where it hits a paddle. Right now, I'm only changing the y coordinate, which results in an uninteresting deflection. It will bounce at an angle, but one that is independent of the impact location on the paddle. I'd like something more fun. Speed, momentum, mass and other factors don't need to be taken into consideration, just the angle depending on the impact location on the paddle. I've read this Not a number error (NAN) doing collision detection in an iphone app but it seems overly complicated for what I'm looking for. Is there a simpler way to calculate the deflection?
The objects are two UIImageViews.
Well, nothing realistic, but you could do something so that the outbound angle depends only on where the ball hits the paddle.
I have never done any iPhone or Objective-C coding, so I'll just write up something in pseudo/C code.
First I'd calculate the speed, which is the length of the velocity vector:
double speed = sqrt(velX * velX + velY * velY); // Pythagoras: a^2 + o^2 = h^2
Then we want to calculate the new angle based on where we hit the paddle. I'm going to assume that you store the X collision in impactX and the length of the paddle in paddleLength. That way we can calculate an outbound angle. First let's figure out how to calculate the range so that we get a value between -1 and 1.
double proportionOfPaddle = impactX / (double) paddleLength; // between 0 and 1
double impactRange = proportionOfPaddle * 2 - 1; // adjust to -1 and 1
Let's assume that we do not want to deflect the ball completely to the side, i.e. 90 degrees, since that would be pretty hard to recover from. Since I'm going to use the impactRange to set the new velX, I'm going to scale it down to, say, -0.9 to 0.9.
impactRange = impactRange * 0.9;
Now we need to calculate the velY so that the speed stays constant.
double newVelX = impactRange * speed; // deflection proportional to where the ball hit the paddle
double newVelY = sqrt(speed * speed - newVelX * newVelX); // Pythagoras again; keeps the speed constant
Now you return the newVelX and newVelY and you have an impact and speed dependent bounce.
Good luck!
(Might very well be bugs in here, and I might have inverted the X or Y, but I hope you get the general idea).
EDIT: Adding some thoughts about getting the impactX.
Let's assume you have ball.center.x and paddle.center.x (I don't know what you call it, but let's assume that paddle.center.x will give us the center of the paddle); we should be able to calculate the impactRange from those.
We also need the ball radius (I'll assume ball.width as the diameter) and the paddle size (paddle.width?).
int ballPaddleDiff = paddle.center.x - ball.center.x;
int totalRange = paddle.width + ball.width;
The largest magnitude of ballPaddleDiff occurs when the ball is just touching the side of the paddle; ballPaddleDiff is then paddle.width/2 + ball.width/2, i.e. totalRange/2. The new impactRange would therefore be
double impactRange = ballPaddleDiff / (totalRange / 2.0); // normalized to roughly -1 .. 1
You should probably check the impactRange so that it actually is between -1 and 1 so that the ball doesn't shoot off into the stars or something.
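Pulling those pieces together into a single function (a sketch under the same assumptions: a horizontal paddle, the ball approaching it along y, and the earlier caveats about possibly inverted axes still apply):

#include <cmath>

// Recompute the ball's velocity after it hits a horizontal paddle, with the
// deflection depending only on where the ball struck relative to the paddle's
// center.  Speed is preserved; only the direction changes.
void deflectOffPaddle(double paddleCenterX, double paddleWidth,
                      double ballCenterX, double ballWidth,
                      double* velX, double* velY) {
    double speed = std::sqrt((*velX) * (*velX) + (*velY) * (*velY));

    // Normalize the impact point to roughly -1 .. 1 and clamp for safety.
    double totalRange  = paddleWidth + ballWidth;
    double impactRange = (ballCenterX - paddleCenterX) / (totalRange / 2.0);
    if (impactRange >  1.0) impactRange =  1.0;
    if (impactRange < -1.0) impactRange = -1.0;

    impactRange *= 0.9;  // never deflect a full 90 degrees

    *velX = impactRange * speed;
    *velY = -std::sqrt(speed * speed - (*velX) * (*velX));  // send the ball back out
}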
You don't necessarily want realistic, you want fun. Those aren't always one and the same. If you wanted realistic, you couldn't throw out speed, momentum, mass, etc. In a normal game of ping pong, the point where the ball hits the paddle doesn't really matter; there's no sweet spot like on a tennis racket.
Develop a mathematical function that will return an output vector, or a velocity and a unit vector, representing the output angle and velocity of the ball, given an input angle, velocity, impact point on the paddle, and velocity of the paddle.
By default we'd expect the output angle to be -1 * the input angle, and the output velocity to be -1 * the input velocity. So if you want to mix it up, adjust those. You could increase the output angle in proportion to the distance from the center of the paddle, or increase the angle or the speed in proportion to the paddle's velocity when it's hit.
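One possible shape for such a function (purely illustrative; the constants and the decision of what to scale are mine):

#include <cmath>

// Reflect the incoming angle and speed, then exaggerate both based on where
// the ball hit (offsetFromCenter in -1 .. 1) and how fast the paddle was moving.
void paddleResponse(double inAngle, double inSpeed,
                    double offsetFromCenter, double paddleSpeed,
                    double* outAngle, double* outSpeed) {
    const double kAngleBoost = 0.5;   // extra angle per unit of offset (tune by playtesting)
    const double kSpeedBoost = 0.25;  // extra speed per unit of paddle speed (tune by playtesting)

    *outAngle = -inAngle * (1.0 + kAngleBoost * std::fabs(offsetFromCenter));
    *outSpeed =  inSpeed + kSpeedBoost * std::fabs(paddleSpeed);
}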
There's a lot of ways you could do that, so I can't really tell you exactly what function you would use, you're going to have to figure that out with testing and playing. If you still need more info add more specifics to your question.
The following code (C++, but easy enough to convert to ObjC) takes an incoming 2D vector and reflects it about a surface normal (the face of your pong bat).
You could add some random 'fun factor' by randomizing an offset that you apply either to 'scalar' (to change the velocity) or to the surface normal (to alter the reflection angle).
I'm using this in my iPhone project, and it works fine :)
// Standard 2D reflection: r = v - scalar * 2 * (v . n) * n
void vec2ReflectScalar(vec2& vResult, const vec2& v1, const vec2& normal, float scalar)
{
    vec2 _2ndotvn;
    float dotVal = vec2DotProduct(v1, normal);
    vec2Scale(_2ndotvn, normal, scalar * 2.f * dotVal);
    vec2Subtract(vResult, v1, _2ndotvn);
}

void vec2Reflect(vec2& vResult, const vec2& v1, const vec2& normal)
{
    vec2ReflectScalar(vResult, v1, normal, 1.f);
}
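For what it's worth, a hypothetical call site might look like this, assuming vec2 is a simple x/y struct from the same math library as the helpers above and that the bat lies along the x-axis, so its surface normal points straight up:

float ballVelX = 3.f, ballVelY = -4.f;   // placeholder incoming velocity
vec2 incoming = { ballVelX, ballVelY };
vec2 normal   = { 0.f, 1.f };            // unit normal of a horizontal bat face
vec2 outgoing;

vec2Reflect(outgoing, incoming, normal); // outgoing now holds the deflected velocity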