I am working on a project where I need to implement an object-tracking technique using the camera of the Parrot AR.Drone 2.0. The main idea is that the drone should be able to identify a specified colour and then follow it while keeping some distance.
I am using the OpenCV API to establish communication with the drone. This API provides the function:
ARDrone::move3D(double vx, double vy, double vz, double vr)
which moves the AR.Drone in 3D space and where
vx: X velocity [m/s]
vy: Y velocity [m/s]
vz: Z velocity [m/s]
vr: Rotational speed [rad/s]
I have written an application which does simple image processing on the images obtained from the drone's camera using OpenCV and finds the contours of the object to be tracked. See the example below:
Now the part I am struggling with is finding the technique for computing the velocities to be sent to the move3D function. I have read that a common way of doing this kind of control is with a PID controller. However, having read about PID control, I could not work out how it relates to this problem.
To summarise, my question is: how do I move a robot towards an object detected in its camera, and how do I find the coordinates of that object from the camera?
EDIT:
So, I just realized you are using a drone, and your coordinate system WRT the drone is likely x forward into the image, y left of the image (image columns), z up vertically (image rows). My answer has coordinates WRT the camera: x = columns, y = rows, z = depth (into the image). Keep that in mind when you read my outline. Also, everything I wrote is pseudo-code; it won't run without many modifications.
Original Post:
A PID controller is a proportional–integral–derivative controller. It decides an action sequence based on your specific error.
For your problem, let's assume that optimal tracking means the rectangle is in the center of the image and takes up roughly 30% of the pixel space. This means that you move your camera/bot until these conditions are met. We will call these the goal parameters:
x_ideal = image_width / 2
y_ideal = image_height / 2
area_ideal = image_width * image_height * 0.3
Now let's say your bounding box is characterized by 4 parameters:
(x_bounding, y_bounding, width_bounding_box, height_bounding_box)
Your error would be something along the lines of:
x_err = x_bounding - x_ideal;
y_err = y_bounding - y_ideal;
z_err = area_ideal - (width_bounding_box * height_bounding_box);
Notice I have tied the z distance (depth) to the size of the object. This assumes that the object being tracked is rigid and doesn't change size, so any change in apparent size is due to the object's distance from the camera (a bigger bounding box means the object is close, a smaller one means it is far). This is a bit of an estimation, but without any parameters on the camera or the object itself we can only make these general statements.
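For illustration, here is a minimal sketch of computing those three errors from a detected contour with OpenCV's Python bindings (this is my own example, not the asker's code; taking the centre of the bounding box as the measured point is my assumption, and frame_w/frame_h are the image dimensions):

import cv2

def tracking_errors(contour, frame_w, frame_h):
    # Goal parameters: object centred, covering ~30% of the image
    x_ideal = frame_w / 2
    y_ideal = frame_h / 2
    area_ideal = frame_w * frame_h * 0.3

    x, y, w, h = cv2.boundingRect(contour)
    x_center = x + w / 2          # box centre, my choice of measured point
    y_center = y + h / 2

    x_err = x_center - x_ideal    # > 0: box is right of the image centre
    y_err = y_center - y_ideal    # > 0: box is below the image centre (rows grow downward)
    z_err = area_ideal - w * h    # > 0: box is too small, i.e. object too far away
    return x_err, y_err, z_err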
We need to keep the sign in mind when creating our control sequences; this is why the order matters when doing the subtraction. Let's think about this logically: x_err determines how far off the bounding box is horizontally from the desired position. In our case this should be positive, meaning the bot should move to the left so the object moves closer to the center of the image. Likewise, if the box is too small, the object is too far away, and so on:
z_err < 0 : means bot is too close and needs to slow down, Vz should be reduced
z_err = 0 : keep the speed command the same, no change
z_err > 0 : we need to get closer, Vz should increase
x_err < 0 : means bot is to the right and needs to turn left (decreasing x), Vx should be reduced
x_err = 0 : keep the speed in X the same, no change to Vx
x_err > 0 : means bot is to the left and needs to turn right (increasing x), Vx should be increased
We can do the same for each y axis. Now we use this error to create a command sequence for the bot.
That description sounds a lot like a PID controller. Observe a state, figure out an error, create a control sequence to reduce the error, then repeat the process over and over. In your case the velocities would be the actions output by your algorithm. You will essentially have 3 PIDs running:
PID for X
PID for Y
PID for Z
Because these are orthogonal by nature, we can say each system is independent (and ideally it is): moving in the x direction shouldn't affect the y direction. This example also completely ignores the bearing information (Vr), but it's meant to be a thought exercise, not a complete solution.
The exact velocity of the corrections is determined by your PID coefficients, and this is where things get a little tricky. Here is an easy-to-read (almost no math) overview of PID control. You will have to tune your parameters through a bit of experimentation. This is made even more difficult because the camera is not a full 3D sensor, so we can't extract true measurements from the environment. It is hard to convert an error of ~30 pixels into m/s without knowing more about your sensor/environment, but I hope this gives you a general idea of how to proceed.
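To make that concrete, here is a minimal, untested sketch of three independent PID loops feeding ARDrone::move3D, written in Python-style pseudocode. The gains are made-up placeholders, the error-to-m/s scaling is exactly the tuning problem described above, and the mapping of camera errors to drone axes follows the coordinate note in the EDIT (signs would still need checking on a real vehicle):

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        derivative = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derivative

pid_x = PID(0.001, 0.0, 0.0005)    # image columns      -> drone sideways speed
pid_y = PID(0.001, 0.0, 0.0005)    # image rows         -> drone climb speed
pid_z = PID(0.00001, 0.0, 0.0)     # bounding-box area  -> drone forward speed

def control_step(x_err, y_err, z_err, dt):
    vx = pid_z.update(z_err, dt)   # forward/backward (depth)
    vy = pid_x.update(x_err, dt)   # left/right
    vz = pid_y.update(y_err, dt)   # up/down
    vr = 0.0                       # bearing ignored, as in the text
    return vx, vy, vz, vr          # pass to ARDrone::move3D(vx, vy, vz, vr)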
Related
I am trying to use a PID controller to stop a quadcopter at a specific location while travelling horizontally, however currently it overshoots/undershoots depending on the max velocity. I have tried manually tuning the P,I and D gains with limited success. Essentially the velocity needs to go from maxSpeed to 0 at the very end of the flight path.
I run a loop that executes every 0.1 seconds.
The pitch input to the quadcopter is in m/s and I recalculate the distance to the target on each iteration.
Some pseudocode:
kP = 0.25
kI = 0.50
kD = 90
timeStep = 0.1
maxSpeed = 10
currentError = initialDistanceToLocation - currentDistanceToLocation
derivativeError = (currentError - previousError) / timeStep
previousError = currentError
output = kP * currentError + kI * integralError + kD * derivativeError
integralError = integralError + currentError * timeStep
if output >= maxSpeed {
output = maxSpeed
} else if output <= 0 {
output = 0
}
return output
Is there a way to reliably tune this PID controller to this system that will work for different max velocities, and/or are there other factors I need to consider?
A few ideas:
Check whether your code calculates integralError after it calculates the output. This can lead to undefined behavior, since in the output calculation integralError could then be anything (uninitialised, or one step stale) depending on the programming language; see the reordered sketch after this list.
From my experience, you should get rid of the D part entirely, since in a digital environment and due to measurement noise, it could have destabilizing effects
I highly recommend looking at this tutorial which guides you through important aspects (even though it concentrates on fixed-point PI controllers)
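For reference, here is a minimal sketch of the same loop with the integral updated before the output is computed and the output clamped on both sides. The gains, time step and maxSpeed are the question's own values, and the error definition is kept verbatim from the question; whether those numbers are sensible is a separate tuning issue:

kP, kI, kD = 0.25, 0.50, 90
timeStep = 0.1
maxSpeed = 10
integralError = 0.0
previousError = 0.0

def pid_step(initialDistanceToLocation, currentDistanceToLocation):
    global integralError, previousError
    currentError = initialDistanceToLocation - currentDistanceToLocation
    integralError += currentError * timeStep                  # integral updated first
    derivativeError = (currentError - previousError) / timeStep
    previousError = currentError
    output = kP * currentError + kI * integralError + kD * derivativeError
    return max(0.0, min(output, maxSpeed))                    # clamp to [0, maxSpeed]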
Summary
To stop on a point, one easy way is to switch from a velocity PID controller to a position PID controller whenever you want to stop on a point, and turn up D (which dampens velocity in a position controller) to prevent position overshoot of your target point. See below for details.
Reminder: D in a velocity controller = useless. D in a position controller = essential. See my comment here, and notes below.
Details
I've done this before. I didn't spend a ton of time optimizing it, but what I found to work well was to switch from a velocity controller which tries to overfly waypoints, to a position controller which tries to stop on them, once you are near the desired stopping point.
Note that your position controller can also be forced into a velocity controller by scaling your gains to produce desired velocities, or even by scaling your position error to arbitrarily produce a desired velocity, and by moving the desired position point continually to keep the vehicle following a path. The video below demos that last part too.
The position controller was a PID controller on position: the vehicle's pitch angle was directly proportional to the distance from the target in the y-axis, and the vehicle's roll angle was directly proportional to the distance from the target in the x-axis. That's it. The integral would take out steady-state error to ultimately stop perfectly on the desired position, despite disturbances, wind, imperfections, etc., and the derivative was a dampening term which would reduce the pitch and roll angles as velocity increased, so as to not overshoot the target. In other words: P and D fight against each other, and this is good and intended! Increasing P tries harder to make the vehicle tilt when the vehicle has no velocity, and increasing D causes the vehicle to tilt less and therefore overshoot less the more velocity (derivative of position) that it has! P gets it moving quickly (increases acceleration), and D stops it from moving at too high of a velocity (decreases velocity).
For any controller where neutralizing the driving force (pitch and roll in this case) instantly stops a change in the derivative of the error, D is NOT needed. Ex: for velocity controllers, the second you remove throttle on a car for instance, velocity stops changing, assuming small drag. So, D is NOT needed. BUT, for position controllers, the second you remove throttle on a car (or tilt on a quadcopter), position keeps changing rapidly due to inertia and existing velocity! So, D very much is needed. That's it! That's the rule! For position-based controllers, D very much is needed, since it dampens velocity and acts as the dampening term to prevent overshoot of the target.
Watch this video here, from 0:49 to 1:10 to see the vehicle quickly take off autonomously in this PID position control mode, and auto-position itself at the center of the room. This is exactly what I'm talking about.
At approximately 3:00 in the video I say, "[the target waypoint] is leading the vehicle by two-and-one-half meters down the waypoint path." You should know that that 2.5 meters is the forced distance error (position error) that I am enforcing in the position PID controller in order to keep the vehicle moving at the shown fixed velocity at that point in the video. So...I'm basically using my position controller as a crude sort of velocity controller with a natural filter on the commanded path shape due to my "pure pursuit" or "vector flow-field" waypoint-following algorithm.
Anyway, I explain a lot of the above concepts, and much more, in my more-thorough answer here, so study this too: Physics-based controls, and control systems: the many layers of control.
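Here is a minimal sketch of the position-controller idea above, with P pushing tilt toward the target and D fighting it as velocity builds up. The gains, the angle limit, and the axis conventions are placeholder assumptions for illustration, not values from my vehicle:

MAX_TILT_DEG = 20.0    # placeholder safety limit on commanded tilt

class AxisPositionPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0

    def update(self, pos_err, velocity, dt):
        self.integral += pos_err * dt
        # D acts on the derivative of position, i.e. velocity, and opposes P
        tilt = self.kp * pos_err + self.ki * self.integral - self.kd * velocity
        return max(-MAX_TILT_DEG, min(tilt, MAX_TILT_DEG))

pitch_pid = AxisPositionPID(kp=1.0, ki=0.05, kd=2.0)   # made-up gains
roll_pid  = AxisPositionPID(kp=1.0, ki=0.05, kd=2.0)

def position_control(target_x, target_y, x, y, vx, vy, dt):
    pitch_cmd = pitch_pid.update(target_y - y, vy, dt)  # pitch from y-axis error
    roll_cmd  = roll_pid.update(target_x - x, vx, dt)   # roll from x-axis error
    return pitch_cmd, roll_cmd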
Assuming a static scene, with a single camera moving exactly sideways by a small distance, here are two frames and the resulting computed optic flow (I use OpenCV's calcOpticalFlowFarneback):
Here the scatter points are detected features, painted in pseudocolor with depth values (red means little depth, close to the camera; blue means more distant). I obtain those depth values by simply inverting the optic flow magnitude, as d = 1 / flow. This seems kind of intuitive, in a motion-parallax way: the brighter the object, the closer it is to the observer. So there's a cube, exposing a frontal edge and a bit of a side edge to the camera.
But then I try to project those feature points from the camera plane to real-world coordinates to make a kind of top-view map, where X = (x * d) / f and Y = d (d is depth, x is the pixel coordinate, f is the focal length, and X and Y are real-world coordinates). And here's what I get:
Well, it doesn't look cubic to me; the picture looks skewed to the right. I've spent some time thinking about why, and it seems that 1 / flow is not an accurate depth metric. Playing with different values, say 1 / power(flow, 1/3), I get a better picture:
But, of course, a power of 1/3 is just a magic number out of my head. The question is: what is the relationship between optic flow and depth in general, and how am I supposed to estimate it for a given scene? We're only considering camera translation here. I've stumbled upon some papers, but no luck finding a general equation yet. Some, like that one, propose a variation of 1 / flow, which I guess isn't going to work.
Update
What bothers me a little is that simple geometry points me to the 1 / flow answer too. Optic flow is the same (in my case) as disparity, right? Then, using this formula, I get d = B*f / (x2 - x1), where B is the distance between the two camera positions, f is the focal length, and x2 - x1 is precisely the optic flow. The focal length is a constant, and B is constant for any two given frames, so that leaves me with 1 / flow again, multiplied by a constant. Do I misunderstand something about what optic flow is?
For a static scene, moving a camera precisely sideways by a known amount is exactly the same as a stereo camera setup. From this you can indeed estimate depth, if your system is calibrated.
Note that calibration in this sense is rather broad. In order to get really accurate depth, you will in the end need to supply a scale parameter on top of the regular calibration stuff you have in OpenCV, or else there is a single uniform scale ambiguity in the 3D reconstruction (this last step is often called going to the "metric" reconstruction from only the "Euclidean" one).
Another thing which is part of this broad calibration is lens distortion compensation. Before anything else, you probably want to force your cameras to behave like pinhole cameras (which real-world cameras usually don't).
With that said, optical flow is definitely very different from a metric depth map. If you properly calibrate and rectify your system first, optical flow is still not equivalent to disparity estimation. If your system is rectified, there is no point in doing a full optical flow estimation (such as Farnebäck), because the problem is then constrained along the horizontal lines of the image; doing a full optical flow estimation (giving 2 degrees of freedom) will likely introduce more error after said rectification.
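For a calibrated, rectified sideways translation, a rough sketch of the depth computation might look like this (B, f and the principal point cx are assumed known from calibration; only the horizontal flow component is used, and subtracting cx is an adjustment the question's X = x*d/f formula omits):

import numpy as np
import cv2

def depth_from_sideways_flow(prev_gray, curr_gray, B, f, cx):
    # Dense flow between the two frames; after rectification only the
    # horizontal component should carry the parallax
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    disparity = np.abs(flow[..., 0])                 # in pixels
    depth = (B * f) / np.maximum(disparity, 1e-6)    # d = B*f / (x2 - x1)
    # Back-project to a top-down map: X = (x - cx) * d / f, Y = d
    cols = np.arange(curr_gray.shape[1])[None, :]
    X = (cols - cx) * depth / f
    Y = depth
    return depth, X, Y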
A great reference for all this stuff is the classic "Multiple View Geometry in Computer Vision"
I am working on a project to detect the 3D location of an object. I have two cameras set up at two corners of the room, and I have obtained the fundamental matrix between them. These cameras are internally calibrated. My images are 2592 x 1944.
K = [1228 0 3267
0 1221 538
0 0 1 ]
F = [-1.098e-7 3.50715e-7 -0.000313
2.312e-7 2.72256e-7 4.629e-5
0.000234 -0.00129250 1 ]
Now, how do I proceed so that, given a 3D point in space, I am able to get the points on each image which correspond to that same object in the room? If I can obtain the right projection matrices (with correct scale), I can use them later as inputs to OpenCV's triangulatePoints function to obtain the location of the object.
I have been stuck on this for a long time. So, please help me.
Thanks.
From what I gather, you have obtained the Fundamental matrix through some means of calibration? Either way, with the fundamental matrix (or the calibration rig itself) you can obtain the pose difference via decomposition of the Essential matrix. Once you have that, you can use matched feature points (using a feature extractor and descriptor like SURF, BRISK, ...) to identify which points in one image belong to the same object point as another feature point in the other image.
With that information, you should be able to triangulate away.
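As a sketch of that pipeline with OpenCV's Python API (assuming pts1 and pts2 are Nx2 float arrays of already-matched points, and that both cameras share the K from the question; the result is only up to the unknown baseline scale discussed below):

import numpy as np
import cv2

def triangulate(K, F, pts1, pts2):
    # Essential matrix from the fundamental matrix: E = K^T * F * K
    E = K.T @ F @ K
    # Decompose E into a relative pose; translation is recovered only up to scale
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Projection matrices: first camera at the origin, second at [R | t]
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T    # Nx3 points, in units of the baseline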
Sorry, this doesn't fit in the size of a comment, so @user2167617, here is a reply to your comment.
Pretty much. A few pointers, though: the singular values should be (s, s, 0), so (1.3, 1.05, 0) is a pretty good guess. About the R: technically this is right, ignoring signs; it might very well be that you get a rotation matrix which does not satisfy the constraint determinant(R) = 1 but is instead -1, in which case you might want to multiply it by -1. Generally, if you run into problems with this approach, try to determine the essential matrix using the 5-point algorithm (implemented in the very newest version of OpenCV; you will have to build it yourself). The scale is indeed impossible to obtain with this information. However, it's all up to scale: if you define, for example, the distance between the cameras to be 1 unit, then everything will be measured in that unit.
Maybe it would be simpler to use the cv::reprojectImageTo3D function? It will give you 3D coordinates.
The context is an iPad game in which I want an on-screen object to be controlled by X/Y tilt of the device.
I was wondering if anybody can point me in the direction of resources for deciding on an appropriate mapping of tilt to the movement behaviour (e.g. whether people tend to use the "raw" rotation values to control the acceleration, velocity or direct position, and how comfortable players have been found to be with these different types of 'mapping' of device rotation to object movement).
I appreciate that the appropriate choice can depend on the particular type of game/object being controlled, and that some trial and error will be needed, but I wondered as a starting point at least what existing knowledge there was to draw on.
First you're going to want to apply a low-pass filter to isolate tilt from noise and your user's shaky hands. Apple shows this in their accelerometer examples:
x = accel.x * alpha + x * (1.0 - alpha);
y = accel.y * alpha + y * (1.0 - alpha);
A larger alpha gives more responsiveness, at the cost of noisier input.
Unless your game is intentionally simulating a ball balancing on the face of the screen, you probably don't want to apply your tilt values to acceleration, but rather to target velocity or target position, applying "fake" smooth acceleration to get it there.
Also, this answer has a neat idea if Unity3d is acceptable for your project, and this paper has some handy numbers on the practical limits of using tilt control as input, including making the important point that users have a much easier time controlling absolute angle position than velocity of tilt.
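A rough sketch of that idea, with the low-pass-filtered tilt driving a target velocity and the object easing toward it ("fake" smooth acceleration). alpha, maxSpeed and ease are made-up numbers to tune:

alpha = 0.1        # low-pass filter factor, as in the snippet above
maxSpeed = 300.0   # points per second at full tilt (made up)
ease = 0.2         # how quickly the actual velocity approaches the target

fx = fy = 0.0      # filtered tilt
vx = vy = 0.0      # current object velocity

def on_accelerometer(ax, ay, dt, obj):
    global fx, fy, vx, vy
    fx = ax * alpha + fx * (1.0 - alpha)   # same filter as above
    fy = ay * alpha + fy * (1.0 - alpha)
    # tilt sets a target velocity, not an acceleration
    vx += (fx * maxSpeed - vx) * ease
    vy += (fy * maxSpeed - vy) * ease
    obj.x += vx * dt
    obj.y += vy * dt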
So, we desire to move our character - we'll call him 'loco' - based on the accelerometer's x and y data.
Now, if we do want the magnitude of the tilt to affect the rate of loco's travel, we can simply factor the accelerometer x or y value directly into our algorithm for his movement.
For example (pseudocode):
// we'll call our accelerometer value for x 'tiltX'
// tiltX = .5; // assume the accelerometer updated the latest x value to .5
// unitOfMovement = 1; // some arbitrary increment you choose for a unit of movement
loco.x += unitOfMovement * tiltX;
This will cause loco to move that number of pixels each game cycle (or whatever cycle drives your movement updates): here, 1 pixel multiplied by the accelerometer value. So if the character normally moves 1 pixel right at full tiltX (1.0), then when tiltX comes in from the accelerometer at .5, loco will move half that. As the value of tiltX increases, so does loco's movement.
Another consideration is whether you want the movement tied directly to the rate of accelerometer events. Probably not, so you can use an array to hold the last ten x values (same concept for the y values) sent by the accelerometer and use the current average of the array (sum of x values / number of elements in the array) at each game loop. That way the sensitivity will feel more appropriate than driving the movement at whatever the accelerometer's update rate happens to be.
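A minimal sketch of that averaging, assuming a fixed-size buffer of the last ten readings (the names here are made up):

from collections import deque

recent_x = deque(maxlen=10)   # last ten accelerometer x values

def on_accel_sample(tiltX):
    recent_x.append(tiltX)

def game_loop_step(loco, unitOfMovement):
    if recent_x:
        avg_tilt = sum(recent_x) / len(recent_x)
        loco.x += unitOfMovement * avg_tilt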
Have a look at Point in Tilt Direction - iPhone; its first answer is very helpful, I think.
I'm implementing a 2D game with ships in space.
In order to do it, I'm using LÖVE, which wraps Box2D with Lua. But I believe that my question can be answered by anyone with a greater understanding of physics than myself - so pseudo code is accepted as a response.
My problem is that I don't know how to move my spaceships properly on a 2D physics-enabled world. More concretely:
A ship of mass m is located at an initial position {x, y}. It has an initial velocity vector of {vx, vy} (can be {0,0}).
The objective is a point in {xo,yo}. The ship has to reach the objective having a velocity of {vxo, vyo} (or near it), following the shortest trajectory.
There's a function called update(dt) that is called frequently (i.e. 30 times per second). In this function the ship can modify its position and trajectory by applying "impulses" to itself. The magnitude of the impulses is binary: you either apply it in a given direction, or you don't apply it at all. In code, it looks like this:
function Ship:update(dt)
  m = self:getMass()
  x, y = self:getPosition()
  vx, vy = self:getLinearVelocity()
  xo, yo = self:getTargetPosition()
  vxo, vyo = self:getTargetVelocity()
  thrust = self:getThrust()
  if ??? then
    angle = ???
    self:applyImpulse(math.sin(angle) * thrust, math.cos(angle) * thrust)
  end
end
The first ??? is there to indicate that on some occasions (I guess) it would be better not to apply an impulse and let the ship drift. The second ??? concerns how to calculate the impulse angle for a given dt.
We are in space, so we can ignore things like air friction.
Although it would be very nice, I'm not looking for someone to code this for me; I put the code there so my problem is clearly understood.
What I need is a strategy, a way of attacking this. I know some basic physics, but I'm no expert. For example, does this problem have a name? That sort of thing.
Thanks a lot.
EDIT: Beta provided a valid strategy for this and Judge kindly implemented it directly in LÖVE, in the comments.
EDIT2: After more googling I also found openSteer. It's in C++, but it does what I intended. It will probably be helpful to anyone reaching this question.
It's called motion planning, and it's not trivial.
Here's a simple way to get a non-optimal trajectory:
Stop. Apply thrust opposite to the direction of velocity until velocity is zero.
Calculate the last leg, which will be the opposite of the first, a steady thrust from a standing start that gets the ship to x0 and v0. The starting point will be at a distance of |v0|^2/(2*thrust) from x0.
Get to that starting point (and then make the last leg). Getting from one standing point to another is easy: thrust toward it until you're halfway there, then thrust backward until you stop.
If you want a quick and dirty approach to an optimal trajectory, you could use an iterative approach: start with the non-optimal approach above, which is just a time sequence of thrust angles. Now try little variations of that sequence, keeping a population of sequences that get close to the goal. Reject the worst, experiment with the best (if you're feeling bold you could make this a genetic algorithm) and, with luck, it will start to round the corners.
If you want the exact answer, use the calculus of variations. I'll take a crack at that, and if I succeed I'll post the answer here.
EDIT: Here's the exact solution to a simpler problem.
Suppose instead of a thrust that we can point in any direction, we have four fixed thrusters pointing in the {+X, +Y, -X, -Y} directions. At any given time we will be firing at most one of the +/-X thrusters and at most one of the +/-Y thrusters (there's no point in firing +X and -X at the same time). So now the X and Y problems are independent (they aren't in the original problem because thrust must be shared between X and Y). We must now solve the 1-D problem and apply it twice.
It turns out the best trajectory involves thrusting in one direction, then the other, and not going back to the first one again. (Coasting is useful only if the other axis's solution will take longer than yours so you have time to kill.) Solve the velocity problem first: suppose (WLOG) that your target velocity is greater than your initial velocity. To reach the target velocity you will need a period of thrust (+) of duration
T = (Vf - Vi)/a
(I'm using Vf: final velocity, Vi: initial velocity, a: magnitude of thrust.)
We notice that if that's all we do, the location won't come out right. The actual final location will be
X = (Vi + Vf)T/2
So we have to add a correction of
D = Xf - X = Xf -(Vi+Vf)T/2
Now to make the location come out right, we add a period of thrust in one direction before that, and an equal period in the opposite direction after. This will leave the final velocity undisturbed, but give us some displacement. If the duration of this first period (and the third) is t, then the displacement we get from it is
d = +/-(at^2 + atT)
The +/- depends on whether we thrust + then -, or - then +. Suppose it's +.
We solve the quadratic:
t = (-aT + sqrt(a^2 T^2 + 4 a D)) / (2a)
And we're done.
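A small sketch of that 1-D solution in code, implementing the formulas above as given and applied once per axis (it assumes Vf >= Vi and that the + then - ordering is the right one; a full implementation would also handle the other sign cases and a negative discriminant):

import math

def one_axis_plan(Xf, Vi, Vf, a):
    T = (Vf - Vi) / a                  # duration of the velocity-change burn
    X = (Vi + Vf) * T / 2              # displacement that burn alone produces
    D = Xf - X                         # leftover displacement to correct
    t = (-a * T + math.sqrt(a * a * T * T + 4 * a * D)) / (2 * a)
    # Plan: +a for t, +a for the main burn T, then -a for t (final velocity unchanged)
    return t, T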
In the absence of additional info, we can assume there are 3 forces acting upon the spaceship and eventually dictating its trajectory:
"impulses" : [user/program-controlled] force.
The user (or program) appears to have full control over this, i.e. it controls the direction of the impulse and its thrust (probably within a 0-to-max range).
some external force: call it gravity, whatever...
Such a force could be driven by several sources, but we're just interested in the resulting combined force: at a given time and place this external force acts upon the ship with a given strength and direction. The user/program has no control over these.
inertia: this is related to the ship's current velocity and direction. This force generally causes the ship to continue in its current direction at its current speed. There may be other [space-age] parameters controlling the inertia but generally, it is proportional to both velocity and to the ship's mass (Intuitively, it will be easier to bring a ship to a stop if its current velocity is smaller and/or if its mass is smaller)
Apparently the user/program only controls (within limits) the first force.
It is unclear, from the question, whether the problem at hand is:
[Problem A] to write a program which discovers the dynamics of the system (and/or adapts to changes these dynamics).
or..
[Problem B] to suggest a model (a formula) which can be used to compute the combined force eventually applied to the ship: the "weighted" sum of the user-controlled impulse and the other two system/physics-driven forces.
The latter question, Problem B, is more readily and succinctly explained, so let's suggest the following model:
Constant Parameters:
ExternalForceX = strength of the external force in the X direction
ExternalForceY = id. Y direction
MassOfShip = coefficient controlling the inertia term (the ship's mass)
Variable Parameters:
ImpulseAngle = direction of impulse
ImpulseThrust = force of thrust
Formula:
Vx[new] = (cos(ImpulseAngle) * ImpulseThrust) + ExternalForceX + (MassOfShip * Vx[current])
Vy[new] = (sin(ImpulseAngle) * ImpulseThrust) + ExternalForceY + (MassOfShip * Vy[current])
Note that the above model assumes a constant External force (constant both in terms of its strength and direction); that is: akin to that of a gravitational field relatively distant from the area displayed (just like say the Earth gravity, considered within the span of a football field). If the scale of the displayed area is big relative to the source(s) of external forces, the middle term of the formulas above should then be modified to include: a trigonometric factor based on the angle between the center of the source and the current position and/or a [reversely] proportional factor based on the distance between the center of the source and the current position.
Similarly, the Ship's mass is assumed to remain constant, it could well be a variable, based say on the mass of the Ship when empty, to which the weight of fuel is removed/added as the game progresses.
Now... all of the above assumes that the dynamics of the system are controlled by the game designer: essentially choosing a set of values for the parameters mentioned and possibly adding a bit of complexity to the math of the formula (while also ensuring proper scaling to generally "keep" the ship within the display area).
What if instead, the system dynamics were readily programmed into the game (and assumed to be hidden/random), and the task at hand is to write a program which will progressively decide the direction and thrust value of the impulses to drive the ship to its targeted destination, in a way that its velocity at the target be as close as possible to getTargetVelocity()? This is the "Problem A".
This type of problem can be tackled with a PID controller. In a nutshell, such a controller "decides" what amount of action to take (in this game's case, which impulse angle and amount of thrust to apply) based on three weighted factors, loosely defined below:
how far off the current values are from the "set point": this is the P=Proportional part of PID
how fast we are approaching the "set point": this is the D=Derivative part of PID
how long and by how much we have been away from the "set point": this is the I=Integral part of PID
A less sophisticated controller could, for example, only use the proportional factor. This would result in oscillating, sometimes with large amplitude, on either side of the set point ("I'm X units away from where I'm supposed to be: let me yank the steering wheel and press on the gas"). Such overshoots of the set point are tempered by the derivative factor ("Yeah, I'm still not where I'm supposed to be, but the progress I've made since the last time I checked is very big: better slow down a bit"). Finally, the integral part takes into account the fact that, all things being equal with regard to the combined proportional and derivative parts, a smaller or bigger action would be appropriate depending on whether we've been "off-track" for a long time or not, and how much off-track we've been all this time (e.g. "Lately we've been tracking rather close to where we're supposed to be; no point in making rash moves").
We can discuss the details implementing PID controllers for the specific needs of the space ship game, if that is effectively what is required. The idea was to provide a flavor of what can be done.
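If we did go the PID route, one small sketch of turning two per-axis controller outputs into the impulse angle and thrust that the game exposes might look like this (the thrust cap and the drift threshold are placeholders, and the PID outputs themselves would come from controllers like the ones outlined above):

import math

def impulse_from_pid(out_x, out_y, max_thrust):
    # Combine the per-axis outputs into one desired correction vector
    desired = math.hypot(out_x, out_y)
    if desired < 1e-6:
        return None                     # correction negligible: let the ship drift
    angle = math.atan2(out_y, out_x)    # full-quadrant direction of the correction
    thrust = min(desired, max_thrust)   # clamp to what the engine can deliver
    return angle, thrust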
To just get from the current position to the destination with an initial velocity, apply thrust along the normalized difference between the shortest path and the current velocity. You don't actually need the angle.
-- shortest path minus initial velocity
dx,dy = x0 - x - vx, y0 - y - vy
-- normalize the direction vector
magnitude = math.sqrt(dx*dx + dy*dy)
dx,dy = dx/magnitude, dy/magnitude
-- apply the thrust in the direction we just calculated
self:applyImpulse(thrust*dx, thrust*dy)
Note that this does not take the target velocity into account because that gets extremely complicated.
I have a very small Lua module for handling 2D vectors in this paste bin. You are welcome to use it. The code above would reduce to:
d = destination - position - velocity
d:normalize()
d = d * thrust
self:applyImpulse(d.x, d.y)
Are you expelling fuel? Your mass will change with time if you are.
Thrust is a reactive force. It's the rate of change of mass, times the speed of the exhaust relative to the spaceship.
Do you have external forces? If you do, these need to enter into your impulse calculation.
Let's assume a magical thrust with no mass being expelled, and no external forces.
Impulse has units of momentum. It's the integral of a force over time.
First off, you'll need to figure out exactly what the API calls "thrust" and impulse. If you're feeding it a thrust multiplied by a scalar (number), then applyImpulse has to do something else to your input to be able to use it as an impulse, because the units don't match up.
Assuming your "thrust" is a force, then you multiply that thrust by the time interval (1/30 second) to get the impulse, and break out the components.
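As a tiny sketch of that conversion (assuming "thrust" really is a force and using the question's fixed 1/30 s step; note the cos/sin here measure the angle from the x-axis, whereas the question's snippet measures it from the y-axis):

import math

def impulse_components(thrust_force, angle, dt=1.0 / 30.0):
    impulse = thrust_force * dt          # force * time = impulse (momentum units)
    return impulse * math.cos(angle), impulse * math.sin(angle)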
Don't know if I'm answering your question, but hopefully that helps you to understand the physics a bit.
It's easier to think about if you separate the ship's velocity into components, parallel and perpendicular to the target velocity vector.
Considering along the perpendicular axis, the ship wants to come in line with the target position as soon as possible, and then stay there.
Along the parallel axis, it should be accelerating in whatever direction will bring it close to the target velocity. (Obviously if that acceleration takes it away from the target point, you'll need to decide what to do. Fly past the point and then double-back?)
I would deal with the two of them separately, and probably perpendicular first. Once it's working, and if that doesn't prove nice enough, you can start to think about whether there are ways to get the ship to fire at intelligent angles between perpendicular and parallel.
(EDIT: also, I forgot to mention, this will take some adjustment to deal with the scenario where you are offset a lot in the perpendicular direction but not much in the parallel direction. The important lesson here is to take the components, which gives you useful numbers on which to base a decision.)
Your angle is the inverse tangent of the opposite over the adjacent:
So angle = InvTan(VY/VX)
Not sure what you are talking about concerning wanting to drift.
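One caveat to add: a plain inverse tangent of VY/VX loses the quadrant and fails when VX is zero, so in practice something like atan2 is usually safer, for example:

import math

angle = math.atan2(vy, vx)   # handles all four quadrants and vx == 0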