I'm trying to build a freecam-style tool for a rather obscure game. I'm trying to figure out how to move the freecam's X/Y position based on the car's rotation value, but I'm struggling to do so. I tried adapting the approach from "Calculating X Y movement based on rotation angle?", but it doesn't work as intended. The game stores rotation as a float that ranges from -1 to 1, with -1 being 0 degrees and 1 being 360 degrees.
Putting rot at -1 corresponds to X+
Putting rot at 0 corresponds to Z+
Putting rot at 1 corresponds to X-
Here's my Cheat Engine code:
speed = 10000

local records = getAddressList()
local rot = records.getMemoryRecordByDescription('rot').Value

-- Map rot (-1..1) onto an angle in radians
local yaw = math.rad((180 * rot) + 180)

local siny = math.sin(yaw) -- Sine of Horizontal (Yaw)
local cosy = math.cos(yaw) -- Cosine of Horizontal (Yaw)

-- Player Y is read but not used below
local py = records.getMemoryRecordByDescription('playery').Value

-- Move the camera in the XZ plane along the computed direction
local recx = records.getMemoryRecordByDescription('playerx')
local recz = records.getMemoryRecordByDescription('playerz')
recx.Value = recx.Value + (cosy * speed)
recz.Value = recz.Value + (siny * speed)

print(yaw)
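For what it's worth, here is a minimal sketch (in Python, just to illustrate the math) of a mapping that is consistent with the three observations above. Note that rot = -1 pointing along +X and rot = 1 pointing along -X suggests yaw = (rot + 1) * 90 degrees (a half turn over the full range) rather than a 360-degree span, since -1 and 1 clearly point in opposite directions. This is only an assumption based on those three data points:

import math

def freecam_step(rot, speed):
    # Assumed mapping: rot = -1 -> +X, rot = 0 -> +Z, rot = 1 -> -X,
    # i.e. yaw = (rot + 1) * 90 degrees, measured from +X toward +Z.
    yaw = math.radians((rot + 1.0) * 90.0)
    dx = math.cos(yaw) * speed
    dz = math.sin(yaw) * speed
    return dx, dz

# Quick check against the observations:
print(freecam_step(-1, 1))  # (1.0, 0.0)  -> X+
print(freecam_step(0, 1))   # (~0, 1.0)   -> Z+
print(freecam_step(1, 1))   # (-1.0, ~0)  -> X-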
In OpenCV, I know I can use a perspective transformation like this:
pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(300,300))
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
Then I can rotate the image by 90 degrees with the rotate function:
cv::rotate(image, image, cv::ROTATE_90_CLOCKWISE);
But I think I should be able to do both steps in one.
How should I create the matrix to multiply with the perspective transform M?
I thought of this:
0 -1 0
1 0 0
0 0 1
but this is wrong.
Note: I will be doing this in Java.
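One approach (sketched in Python here, since that is what the snippet above uses; the same matrix algebra carries over to Java) is to note that a pure rotation about the origin, like the 3x3 guess above, rotates the image out of the visible canvas. For a 300x300 output, a 90-degree clockwise rotation maps (x, y) to (height - 1 - y, x), so the rotation matrix needs a translation term; the two warps can then be composed into a single matrix. A sketch, not a tested drop-in ('input.jpg' is a hypothetical image):

import cv2
import numpy as np

img = cv2.imread('input.jpg')  # hypothetical input image

pts1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv2.getPerspectiveTransform(pts1, pts2)

# 90-degree clockwise rotation of a 300x300 image: (x, y) -> (299 - y, x)
h = 300
R = np.float32([[0, -1, h - 1],
                [1,  0, 0],
                [0,  0, 1]])

# Compose the two transforms; R is applied after M, so it multiplies on the left
M2 = R @ M
dst = cv2.warpPerspective(img, M2, (300, 300))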
I'd need some help from an "iso guru". I am fiddling with a game where two cannons are placed on an isometric grid. When one cannon fires a bullet, it should fly in a curved trajectory, as shown below. While this would be an easy task on an x/y plane, I have no clue how to calculate a curved path (with variable height) on an isometric plane.
Could someone point me in the right direction, please? I need to fire bullets from one field to any other, with the bullet's flying altitude (the "strength" of the curve) depending on the given shot power.
Any hints? :(
Image: http://postimg.org/image/6lcqnwcrr/
This may help. The trajectory function takes some trajectory parameters (velocity, elevation, starting position and gravity) and returns a function that calculates the y position from the x position in world space.
The converter returns a function that converts between world and screen coordinates for a given projection angle.
What follows is an example of using them to calculate the trajectory for some points in screen space.
It's really for illustration purposes: it has a few potential divide-by-zeros, but it generates trajectories that look OK for sensible elevations, projections and velocities.
-- A trajectory in world space
function trajectory(v, elevation, x0, y0, g)
  x0 = x0 or 0
  y0 = y0 or 0
  local th = math.rad(elevation or 45)
  g = g or 9.81
  return function(x)
    x = x - x0
    local a = x * math.tan(th)
    local b = (g * x^2) / (2 * (v * math.cos(th))^2)
    return y0 + a - b
  end
end

-- Convert between screen and world
function converter(iso)
  iso = math.rad(iso or 0)
  return function(toscreen, x, y)
    if toscreen then
      y = y + x * math.sin(iso)
      x = x * math.cos(iso)
    else
      x = x / math.cos(iso)
      y = y - x * math.sin(iso)
    end
    return x, y
  end
end

-- Velocity 60 m/s at an angle of 70 degrees
t = trajectory(60, 70, 0, 0)
-- Iso projection of 30 degrees
c = converter(30)

-- x in screen co-ords
for x = 0, 255 do
  local xx = c(false, x, 0)    -- x in world co-ords
  local y = t(xx)              -- y in world co-ords
  local _, yy = c(true, xx, y) -- y in screen co-ords
  local _, y0 = c(true, xx, 0) -- ground in screen co-ords
  yy = math.floor(yy)          -- not needed
  if yy > y0 then print(x, yy) end -- if it's above ground
end
If there are no lateral forces, you can use the 2D equation for ballistic motion in the XZ plane (so y = 0 at all times), then rotate via a 3D transformation around the z axis to account for the actual orientation of the cannon in 3D space. This transformation matrix is very simple; you can unfold the multiplication (write out the multiplied terms) to get the 3D equations.
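As a rough illustration of that answer (a Python sketch, under the assumption that z is the vertical axis and the heading angle is measured from +X), the 2D ballistic equations run in the XZ plane with y = 0, and the rotation about z then unfolds into simple sine/cosine terms:

import math

def ballistic_xz(v, elevation_deg, t, g=9.81):
    # 2D ballistic motion in the XZ plane (y = 0 at all times), z is up
    th = math.radians(elevation_deg)
    x = v * math.cos(th) * t
    z = v * math.sin(th) * t - 0.5 * g * t * t
    return x, 0.0, z

def rotate_about_z(x, y, z, heading_deg):
    # Rotate the point about the vertical z axis to match the cannon's heading
    # (these are the unfolded terms of the rotation matrix, with y = 0)
    ph = math.radians(heading_deg)
    return (x * math.cos(ph) - y * math.sin(ph),
            x * math.sin(ph) + y * math.cos(ph),
            z)

# Cannon fires at 60 m/s, 45 deg elevation, heading 30 deg from +X:
for t in (0.0, 1.0, 2.0, 3.0):
    p = ballistic_xz(60, 45, t)
    print(rotate_about_z(*p, 30))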
I'm using PCL to obtain the transformation matrix from ICP (getTransformationMatrix()).
The result obtained, for example for a translation movement without rotation, is:
0.999998 0.000361048 0.00223594 -0.00763852
-0.000360518 1 -0.000299474 -0.000319525
-0.00223602 0.000298626 0.999998 -0.00305045
0 0 0 1
How can I find the transformation from the matrix?
The idea is to see the error between the estimation and the real movement.
I have not used the library you refer to here, but it is pretty clear to me that the result you provide is a homogeneous transform, i.e. the upper-left 3x3 block (R) is the rotation matrix and the rightmost 3x1 column (T) is the translation:

M1 = [ R       T ]
     [ 0 0 0   1 ]
refer to the 'Matrix Representation' section here:
http://en.wikipedia.org/wiki/Kinematics
This notation is used so that you can get the final point after successive transforms by multiplying the transform matrices together.
If you have a point p0 transformed n times, you get the point p1 as:
P0 = [[p0_x], [p0_y], [p0_z], [1]]
P1 = [[p1_x], [p1_y], [p1_z], [1]]
M = M1*M2*...*Mn
P1 = M*P0
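For example (a minimal numpy sketch using the matrix from the question; the point p0 is made up), you can read R and T straight out of the 4x4 and chain transforms by matrix multiplication:

import numpy as np

# The 4x4 homogeneous transform from the question:
M1 = np.array([
    [ 0.999998,    0.000361048,  0.00223594,  -0.00763852],
    [-0.000360518, 1.0,         -0.000299474, -0.000319525],
    [-0.00223602,  0.000298626,  0.999998,    -0.00305045],
    [ 0.0,         0.0,          0.0,          1.0],
])

R = M1[:3, :3]   # rotation
T = M1[:3, 3]    # translation

# Transform a point p0 (homogeneous coordinates):
p0 = np.array([1.0, 2.0, 3.0, 1.0])
p1 = M1 @ p0     # successive transforms chain as M = M1 @ M2 @ ... @ Mn

print(T)         # estimated translation, ~(-0.0076, -0.0003, -0.0031)
print(p1[:3])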
tROTA is the matrix holding both the translation and the rotation:
// Final 4x4 transform estimated by ICP
auto trafo = icp.getFinalTransformation();
// Wrap it in an Eigen affine transform
Eigen::Transform<float, 3, Eigen::Affine> tROTA(trafo);
// Extract the translation and the roll/pitch/yaw Euler angles
float x, y, z, roll, pitch, yaw;
pcl::getTranslationAndEulerAngles(tROTA, x, y, z, roll, pitch, yaw);
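Since the stated goal is to see the error between the estimation and the real movement, one common approach (a generic numpy sketch, not PCL-specific) is to compare the two 4x4 matrices directly: the translation error is the norm of the difference of the T columns, and the rotation error is the angle of the relative rotation, recovered from its trace:

import numpy as np

def transform_error(M_est, M_true):
    # Translation error: distance between the two T columns
    trans_err = np.linalg.norm(M_est[:3, 3] - M_true[:3, 3])
    # Rotation error: angle of the relative rotation R_est * R_true^T
    R_rel = M_est[:3, :3] @ M_true[:3, :3].T
    cos_a = (np.trace(R_rel) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos_a, -1.0, 1.0))  # radians
    return trans_err, rot_err

# e.g. identical transforms give zero error:
print(transform_error(np.eye(4), np.eye(4)))  # (0.0, 0.0)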
-- Update 2 --
The following article is really useful (although it uses Python instead of C++) if you are using a single camera to calculate the distance: Find distance from camera to object/marker using Python and OpenCV.
The best link is Stereo Webcam Depth Detection. The implementation of this open source project is really clear.
Below is the original question.
For my project I am using two cameras (stereo vision) to track objects and to calculate the distance. I calibrated them with the OpenCV sample code and generated a disparity map.
I already implemented a method to track objects based on color (this generates a threshold image).
My question: how can I calculate the distance to the tracked colored objects using the disparity map/matrix?
Below you can find a code snippet that gets the x, y and z coordinates of each pixel. The question: is Point.z in cm, pixels, or mm?
Can I get the distance to the tracked object with this code?
Thank you in advance!
// Reproject the disparity image to 3D (_Q comes from stereo rectification)
cvReprojectImageTo3D(disparity, Image3D, _Q);

vector<CvPoint3D32f> PointArray;
CvPoint3D32f Point;

for (int y = 0; y < Image3D->rows; y++) {
    // Each row holds 3 floats (X, Y, Z) per pixel
    float *data = (float *)(Image3D->data.ptr + y * Image3D->step);
    for (int x = 0; x < Image3D->cols * 3; x = x + 3)
    {
        Point.x = data[x];
        Point.y = data[x + 1];
        Point.z = data[x + 2];
        PointArray.push_back(Point);

        // Depth > 10
        if (Point.z > 10)
        {
            printf("%f %f %f\n", Point.x, Point.y, Point.z);
        }
    }
}

cvReleaseMat(&Image3D);
--Update 1--
For example, I generated this thresholded image (from the left camera); I have almost the same for the right camera.
Besides the threshold image above, the application generates a disparity map. How can I get the Z coordinates of the pixels of the hand in the disparity map?
I actually want all the Z coordinates of the hand's pixels so I can calculate the average Z value (distance) using the disparity map.
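A sketch of that idea with the Python API (the names disparity, Q and threshold are placeholders for the disparity map, the Q matrix from stereo rectification, and the thresholded hand image): reproject to 3D, mask with the threshold image, and average the Z channel. The Z values come out in whatever units your calibration used for the chessboard square size (commonly mm or cm), which also answers the Point.z question above:

import cv2
import numpy as np

points_3d = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 float array
z = points_3d[:, :, 2]

# Keep hand pixels with a sane depth (invalid disparities reproject
# to huge values):
mask = (threshold > 0) & np.isfinite(z) & (np.abs(z) < 1e4)
mean_z = z[mask].mean() if np.any(mask) else float('nan')
print("average Z over the hand:", mean_z)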
See these links: OpenCV: How-to calculate distance between camera and object using image?, Finding distance from camera to object of known size, http://answers.opencv.org/question/5188/measure-distance-from-detected-object-using-opencv/
If that doesn't solve your problem, write more details: why it isn't working, etc.
The math for converting disparity (in pixels or as a percentage of image width) to actual distance is pretty well documented (and not very difficult), but I'll document it here as well.
Below is an example, given a disparity image (in pixels) and an input image width of 2K (2048 pixels across):
Convergence distance is determined by the rotation between the camera lenses. In this example it will be 5 meters; a convergence distance of 5 meters means that the disparity of objects 5 meters away is 0.
CD = 5 (meters)
The inverse of the convergence distance:
IZ = 1 / CD = 1/5 = 0.2
The size of the camera's sensor in meters:
SS = 0.035 (meters) // 35mm camera sensor
The width of one pixel on the sensor in meters:
PW = SS / image width = 0.035 / 2048 = 0.00001708984
The focal length of your cameras in meters:
FL = 0.07 // 70mm lens
The interaxial distance (from the center of the left lens to the center of the right lens):
IA = 0.0025 // 2.5mm
The combination of the physical parameters of your camera rig:
A = FL * IA / PW
The camera-adjusted disparity (for the left view only; the right view would use a positive [disparity value]):
AD = 2 * (-[disparity value] / A)
From here you can compute the actual distance using the following equation:
realDistance = 1 / (IZ - AD)
This equation only works for "toe-in" camera systems; parallel camera rigs will use a slightly different equation to avoid infinite values, but I'll leave it at this for now. If you need the parallel version, just let me know.
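Putting the steps above into a small helper (a Python sketch using the example numbers; parameter names follow the abbreviations above):

def disparity_to_distance(disparity_px,
                          cd=5.0,        # convergence distance, meters
                          ss=0.035,      # sensor size, meters (35mm)
                          width=2048,    # image width, pixels (2K)
                          fl=0.07,       # focal length, meters (70mm)
                          ia=0.0025):    # interaxial distance, meters
    # Toe-in stereo: convert a left-view disparity (pixels) to distance (m)
    iz = 1.0 / cd                   # inverse convergence distance
    pw = ss / width                 # width of one pixel on the sensor
    a = fl * ia / pw                # combined rig parameters
    ad = 2.0 * (-disparity_px / a)  # camera-adjusted disparity (left view)
    return 1.0 / (iz - ad)

# An object at the convergence distance has zero disparity:
print(disparity_to_distance(0.0))   # 5.0 meters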
# Inside the main capture loop: 'puntos' holds the bounding boxes (x, y, w, h)
# of the two detected objects, 'imagen_A4' is the warped top-down view of the
# A4 sheet, and 'frame' is the raw camera frame.
if len(puntos) == 2:
    x1, y1, w1, h1 = puntos[0]
    x2, y2, w2, h2 = puntos[1]
    if x1 < x2:
        # Gap between the right edge of object 1 and the left edge of object 2
        distancia_pixeles = abs(x2 - (x1 + w1))
        # Pixels -> cm, assuming 720 px spans the 29.7 cm side of the A4 sheet
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x1 + w1 + distancia_pixeles // 2, y1 - 30),
                    2, 0.8, (0, 0, 255), 1, cv2.LINE_AA)
        # Measurement line plus a tick at each end
        cv2.line(imagen_A4, (x1 + w1, y1 - 20), (x2, y1 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1 + w1, y1 - 30), (x1 + w1, y1 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2, y1 - 30), (x2, y1 - 10), (0, 0, 255), 2)
    else:
        distancia_pixeles = abs(x1 - (x2 + w2))
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x2 + w2 + distancia_pixeles // 2, y2 - 30),
                    2, 0.8, (0, 0, 255), 1, cv2.LINE_AA)
        cv2.line(imagen_A4, (x2 + w2, y2 - 20), (x1, y2 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2 + w2, y2 - 30), (x2 + w2, y2 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1, y2 - 30), (x1, y2 - 10), (0, 0, 255), 2)

cv2.imshow('imagen_A4', imagen_A4)
cv2.imshow('frame', frame)

# Exit on Esc
k = cv2.waitKey(1) & 0xFF
if k == 27:
    break

cap.release()
cv2.destroyAllWindows()
I think this is a good way to measure the distance between two objects.