I have a very simple terrain map, say 256x256 tiles; it is divided into uniform square tiles, and every tile has a height and a slope.
Something like the figure below. My default view will be an iso view. (Each tile can be divided into smaller tiles for smoother rendering; I call that tessellation.)
D3DXMatrixOrthoLH(&matProj, videoWidth, videoHeight, -100000, 100000);
float xPitch = 0;
float yPitch = PI / 3.0; // rotate yPitch 60 degrees
float zPitch = PI / 4.0; // rotate zPitch 45 degrees
Now I need to select a single tile on screen with the mouse (where to move to or build something, etc.). I have the mouse position (Mx, My) and need to know which tile it is over. If the map were flat this would be very easy, but with height it is difficult. I'm planning to keep the map fairly static (not rotated often, only translated), so I can store the screen coordinates (x, y) of every vertex using D3DXVec3Project, and then search for the triangle that contains the mouse position, which gives the tile I need. However, with this approach I may need to test 5-10 or even 20 triangles. Do you know any better, more optimized or elegant way? Thanks!
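For what it's worth, if you do store the projected (x, y) of every vertex with D3DXVec3Project, the per-triangle check is just a 2D point-in-triangle test on those screen coordinates. A minimal sketch (the function names are placeholders, not tied to your actual data structures):

#include <d3dx9.h>

// Sign of the cross product (b-a) x (p-a); tells which side of edge ab point p is on.
static float EdgeSign(const D3DXVECTOR2& p, const D3DXVECTOR2& a, const D3DXVECTOR2& b)
{
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

// True if the 2D point p lies inside triangle (a, b, c), regardless of winding.
static bool PointInTriangle2D(const D3DXVECTOR2& p,
                              const D3DXVECTOR2& a,
                              const D3DXVECTOR2& b,
                              const D3DXVECTOR2& c)
{
    float d1 = EdgeSign(p, a, b);
    float d2 = EdgeSign(p, b, c);
    float d3 = EdgeSign(p, c, a);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}

You would only run this on the handful of candidate triangles near the mouse, testing the two triangles of each candidate tile.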
I have read about something like ray casting for picking; maybe it can be used in my case. Since there is no eye position in my view setup, the view direction is constant!
3D Screenspace Raycasting/Picking DirectX9
D3DXVec3Unproject also looks quite promising!
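For reference, a pick ray can be built by unprojecting the mouse position at the near and far planes, then intersecting that ray with the candidate terrain triangles. A rough sketch, assuming viewport, matProj, matView and matWorld are the same objects used for rendering (the function names here are placeholders):

#include <d3dx9.h>

// Build a world-space pick ray from the mouse position (mx, my).
void BuildPickRay(float mx, float my,
                  const D3DVIEWPORT9& viewport,
                  const D3DXMATRIX& matProj,
                  const D3DXMATRIX& matView,
                  const D3DXMATRIX& matWorld,
                  D3DXVECTOR3& rayOrigin,
                  D3DXVECTOR3& rayDir)
{
    D3DXVECTOR3 nearPt(mx, my, 0.0f);   // mouse on the near plane
    D3DXVECTOR3 farPt (mx, my, 1.0f);   // mouse on the far plane

    D3DXVECTOR3 nearWorld, farWorld;
    D3DXVec3Unproject(&nearWorld, &nearPt, &viewport, &matProj, &matView, &matWorld);
    D3DXVec3Unproject(&farWorld,  &farPt,  &viewport, &matProj, &matView, &matWorld);

    rayOrigin = nearWorld;
    D3DXVECTOR3 dir = farWorld - nearWorld;
    D3DXVec3Normalize(&rayDir, &dir);
}

// Test one terrain triangle; D3DXIntersectTri returns the hit distance in dist.
bool HitTriangle(const D3DXVECTOR3& v0, const D3DXVECTOR3& v1, const D3DXVECTOR3& v2,
                 const D3DXVECTOR3& rayOrigin, const D3DXVECTOR3& rayDir, float& dist)
{
    float u, v;
    return D3DXIntersectTri(&v0, &v1, &v2, &rayOrigin, &rayDir, &u, &v, &dist) != FALSE;
}

With an orthographic projection and a fixed view rotation, the ray direction is the same for every pixel, so it can be precomputed once; only the ray origin changes with the mouse, and the candidate tiles are the ones under the ray's footprint on the height field.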
Image : http://i1335.photobucket.com/albums/w666/greenpig83/sc4_zpsaaa61249.png
I am currently trying to map textures using image labels onto 2 different triangles (because I'm using right-angle wedges, so I need 2 of them to make scalene triangles). The problem is that I can only set position, size, and rotation data, so I need to figure out how to use this information to correctly map the texture onto the triangle.
The position is relative to the top-left corner and the size of the triangle (<1,1> is the bottom-right corner and <0,0> is the top-left corner), the size is also relative to the triangle size (<1,1> is the same size as the triangle and <0,0> is infinitely tiny), and rotation is about the center.
I have the UV coordinates (given in 0-1) and the face vertices, all from an OBJ file. The triangles in 3D are made up of 2 wedges, which are split at a right angle from the longest edge and from the opposite corner.
I don't quite understand this; however, it may help to change the canvas properties on the Surface GUI.
I store my 3D points (many of them) in a TGLPoints object; there is no other object in the scene besides the points. When drawing the points, I would like to fit them to the screen so they look neither too far away nor too close. I tried TGLCamera.ZoomAll with no success, and also the solution given here, which adjusts the camera location, depth of view and scene scale:
objSize := YourCamera.TargetObject.BoundingSphereRadius;
if objSize > 0 then begin
  if objSize < 1 then begin
    GLCamera.SceneScale := 1 / objSize;
    objSize := 1;
  end else
    GLCamera.SceneScale := 1;
  GLCamera.AdjustDistanceToTarget(objSize * 0.27);
  GLCamera.DepthOfView := 1.5 * GLCamera.DistanceToTarget + 2 * objSize;
end;
The points did not appear on the screen this time.
What should I do to fit the 3D points to screen?
For each point, build a scale factor by taking the length of the vector from the point's position to the camera position. Then, using this scale, build the transformation matrix that you will apply to the camera matrix. If the scale is large, the point is far away, so you apply a reverse translation to bring that point into close proximity. I hope this is clear. To compute the translation vector, use the following formula:
translation vector = translation vector +/- (abs(scale)/2)
The +/- is decided by the scale magnitude: if the point is too far from the camera, you choose - in the above equation.
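A more direct way to "fit to screen", independent of the GLScene specifics, is to compute a bounding sphere of the points and place the camera at the distance where that sphere just fills the vertical field of view. A minimal sketch under those assumptions (the point container and field-of-view value are placeholders, not GLScene types):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Distance at which a bounding sphere of the given radius exactly fills
// the vertical field of view (fovY in radians).
float FitDistance(float radius, float fovY)
{
    return radius / std::tan(fovY * 0.5f);
}

// Center and radius of a bounding sphere around the points
// (two-pass approximation: centroid, then max distance to it).
void BoundingSphere(const std::vector<Vec3>& pts, Vec3& center, float& radius)
{
    center = {0.0f, 0.0f, 0.0f};
    radius = 0.0f;
    if (pts.empty()) return;

    for (const Vec3& p : pts) { center.x += p.x; center.y += p.y; center.z += p.z; }
    float n = static_cast<float>(pts.size());
    center.x /= n; center.y /= n; center.z /= n;

    for (const Vec3& p : pts) {
        float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        radius = std::max(radius, std::sqrt(dx * dx + dy * dy + dz * dz));
    }
}

With GLScene you would then target the sphere center, set the distance to target to roughly FitDistance(radius, fovY), and make DepthOfView large enough to contain the whole sphere.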
I need to find the orientation of corn pictures (examples below); they are tilted at different angles to the right or left. I need to turn them upright (at a 90-degree angle to their normal), so they look like a water drop.
Is there any way I can do it easily?
As a starting point, find the image moments (and Hu moments for complex shapes like a pear). From the link:
Information about image orientation can be derived by first using the
second order central moments to construct a covariance matrix.
I suspect that using an image processing library like OpenCV would give more reliable results in the common case.
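To make the moments approach concrete, the blob's orientation angle can be computed from the second-order central moments. A minimal OpenCV sketch (the Otsu thresholding step is an assumption about how the corn is separated from the background):

#include <opencv2/opencv.hpp>
#include <cmath>

// Returns the orientation (radians) of the main axis of the foreground blob.
double BlobOrientation(const cv::Mat& gray)
{
    cv::Mat mask;
    // Assumption: the corn is separated from a fairly uniform background by Otsu thresholding.
    cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);

    // Orientation of the covariance matrix built from the central moments.
    double mu20 = m.mu20 / m.m00;
    double mu02 = m.mu02 / m.m00;
    double mu11 = m.mu11 / m.m00;
    return 0.5 * std::atan2(2.0 * mu11, mu20 - mu02);
}

Rotating the image by the negative of that angle (plus 90 degrees if you want the long axis vertical), e.g. with cv::getRotationMatrix2D and cv::warpAffine, should bring the corn upright.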
From the OP I got the impression you are a rookie at this, so I'll stick to something simple:
compute bounding box of image
Simple enough: go through all pixels and remember the min and max of the x,y coordinates of non-background pixels.
compute critical dimensions
Just cast a few scan lines through the bounding box and compute the red points' positions. Select the start points; I chose 25%, 50% and 75% of the height. First start from the left and stop at the first non-background pixel, then start from the right and stop at the first non-background pixel.
axis aligned position
Start rotating the image in small steps and remember/stop at the position where the red dots are symmetric, i.e. they are almost the same distance from the left and from the right. Also, the bounding box has maximal height and minimal width in the axis-aligned position, so you can exploit that instead ...
determine the position
You get 4 possible orientations. If I call the distances l0,l1,l2,r0,r1,r2:
l means from left, r means from right
0 is upper (bluish) line, 1 middle, 2 bottom
Then your wanted position is where (l0==r0)>=(l1==r1)>=(l2==r2) and the bounding box is bigger along the y axis than along the x axis, so rotate by 90 degrees until that match is found, or determine the orientation directly from the distances and rotate just once ... (a rough sketch of the bounding-box and scan-line steps follows below)
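For illustration, a rough sketch of the bounding box and scan-line steps above (the mask buffer, with non-zero meaning non-background, is a placeholder for however you classify background pixels in your image format):

#include <algorithm>
#include <vector>

struct Box { int x0, y0, x1, y1; };

// Step 1: bounding box of all non-background pixels.
// mask[y * width + x] != 0 marks a non-background pixel.
Box BoundingBox(const std::vector<unsigned char>& mask, int width, int height)
{
    Box b = { width, height, -1, -1 };
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (mask[y * width + x]) {
                b.x0 = std::min(b.x0, x); b.y0 = std::min(b.y0, y);
                b.x1 = std::max(b.x1, x); b.y1 = std::max(b.y1, y);
            }
    return b;
}

// Step 2: distances from the left/right box edges to the first non-background
// pixel on scan lines at 25%, 50% and 75% of the box height.
void ScanDistances(const std::vector<unsigned char>& mask, int width,
                   const Box& b, int l[3], int r[3])
{
    for (int i = 0; i < 3; i++) {
        int y = b.y0 + (b.y1 - b.y0) * (i + 1) / 4;
        int x = b.x0;
        while (x <= b.x1 && !mask[y * width + x]) x++;
        l[i] = x - b.x0;
        x = b.x1;
        while (x >= b.x0 && !mask[y * width + x]) x--;
        r[i] = b.x1 - x;
    }
}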
[Notes]
You will need access to the image's pixels, so I strongly recommend using Graphics::TBitmap from VCL. Look here: gfx in C, especially the section GDI Bitmap; and also this: finding horizon on high altitude photo might help a bit.
I use C++ and VCL, so you'll have to translate to Pascal, but the VCL stuff is the same...
I would like to move a 3D plane in 3D space and have the movement match the screen's pixels, so I can snap the plane to the edges of the screen. I have played around with the focal length, camera position and camera scale, and I have managed to get the plane to match the screen pixels in terms of size; however, when moving the plane, things are no longer correct.

So basically my current status is that I feed the plane size with values as if I were working with standard 2D graphics. So if I set the plane size to 128x128, it is more or less viewed as a 2D square of that exact size.
I am not using, and will not use, an orthographic view; I am using, and will keep using, a perspective projection, because my application needs some perspective to it.
How can this be calculated?
Does anyone have any links to resources that I can read?
You need to grab the transformation matrices you use in the vertex shader and apply them to the point (or a few points) that represent the plane.
That will result in a set of points in the range (-1,-1) to (1,1) (after dividing by w), which you then need to map to the viewport.
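A rough sketch of that projection and viewport mapping, assuming a combined model-view-projection matrix and a viewport given in pixels (the matrix type and the row-vector convention here are illustrative, not tied to any particular engine):

#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major, row vectors: v' = v * M
struct Pixel { float x, y; };

// Transform a point by the MVP matrix (as the vertex shader would).
Vec4 Transform(const Vec4& v, const Mat4& M)
{
    Vec4 r;
    r.x = v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + v.w * M.m[3][0];
    r.y = v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + v.w * M.m[3][1];
    r.z = v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + v.w * M.m[3][2];
    r.w = v.x * M.m[0][3] + v.y * M.m[1][3] + v.z * M.m[2][3] + v.w * M.m[3][3];
    return r;
}

// Project a world-space point (w = 1) to pixels: clip space -> NDC -> viewport.
Pixel ProjectToPixels(const Vec4& worldPos, const Mat4& mvp,
                      float viewportW, float viewportH)
{
    Vec4 clip = Transform(worldPos, mvp);
    float ndcX = clip.x / clip.w;              // -1 .. 1, left to right
    float ndcY = clip.y / clip.w;              // -1 .. 1, bottom to top
    Pixel p;
    p.x = (ndcX * 0.5f + 0.5f) * viewportW;
    p.y = (1.0f - (ndcY * 0.5f + 0.5f)) * viewportH;  // flip Y for pixel coords
    return p;
}

The inverse relation is also useful: at distance d from a perspective camera with vertical field of view fovY, one pixel spans 2 * d * tan(fovY / 2) / viewportH world units, which tells you how large the plane must be, and how far to move it, to stay aligned with screen pixels.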
I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Let's say I load a random art asset into the pipeline. I then create a variable that the update method increments every frame by 2 degrees in radians (testRot += 0.034906585f). The main source of my confusion is that the asset rotates clockwise in screen space. This confuses me, as a rotation matrix will rotate a vector counter-clockwise.
One other thing: when I specify my position vector as well as my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does the rotation start from? In other words, does the rotation start from the top of the Y-axis or from the X-axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually select for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
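To see why the same math reads as clockwise on screen, rotate the X+ unit vector by a small positive angle; the result is identical, only the interpretation of Y flips. A tiny illustrative sketch (plain C++, not XNA code):

#include <cmath>
#include <cstdio>

int main()
{
    const float angle = 0.1f;            // small positive rotation
    float x = 1.0f, y = 0.0f;            // X+ unit vector

    // Standard 2D rotation (counter-clockwise in Cartesian, Y-up, coordinates).
    float rx = x * std::cos(angle) - y * std::sin(angle);
    float ry = x * std::sin(angle) + y * std::cos(angle);

    std::printf("rotated: (%.3f, %.3f)\n", rx, ry);
    // ry is positive. With Y-up that points slightly "up" (counter-clockwise).
    // In client space Y grows downward, so the same +ry points "down" on
    // screen, and the motion reads as clockwise.
    return 0;
}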
The order of operations of SpriteBatch looks something like this:
Sprite starts as a quad with the top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
Translate the sprite back by its origin (thus placing its origin at (0,0)).
Scale the sprite
Rotate the sprite
Translate the sprite by its position
Apply the matrix from SpriteBatch.Begin
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
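Put together, the per-sprite transform described above composes like this (a sketch in row-vector convention, where the leftmost transform is applied first; it mirrors the documented order, not XNA's actual source, and the type and function names are placeholders):

#include <cmath>

// Minimal 2D affine transform (row-vector convention: p' = p * M).
struct Affine2 {
    float m[3][2];  // rows 0-1: linear part, row 2: translation
};

Affine2 Mul(const Affine2& A, const Affine2& B)  // apply A, then B
{
    Affine2 R;
    for (int i = 0; i < 3; i++) {
        R.m[i][0] = A.m[i][0] * B.m[0][0] + A.m[i][1] * B.m[1][0];
        R.m[i][1] = A.m[i][0] * B.m[0][1] + A.m[i][1] * B.m[1][1];
    }
    R.m[2][0] += B.m[2][0];
    R.m[2][1] += B.m[2][1];
    return R;
}

Affine2 Translate(float x, float y) { return {{{1, 0}, {0, 1}, {x, y}}}; }
Affine2 Scale(float sx, float sy)   { return {{{sx, 0}, {0, sy}, {0, 0}}}; }
// Standard counter-clockwise rotation; it only looks clockwise on screen
// because client space has Y growing downward.
Affine2 RotateZ(float a)
{
    return {{{std::cos(a), std::sin(a)}, {-std::sin(a), std::cos(a)}, {0, 0}}};
}

// SpriteBatch-style per-sprite transform: origin -> scale -> rotate -> position.
Affine2 SpriteTransform(float originX, float originY,
                        float scaleX, float scaleY,
                        float rotation,
                        float posX, float posY)
{
    Affine2 t = Translate(-originX, -originY);  // move origin to (0,0)
    t = Mul(t, Scale(scaleX, scaleY));          // scale about the origin
    t = Mul(t, RotateZ(rotation));              // rotate about the origin
    t = Mul(t, Translate(posX, posY));          // place the sprite
    return t;                                    // Begin matrix / projection follow per batch
}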
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal and is an open source project that allows you to run arbitrary code during runtime. So you can see if your variables have the value you expect. All this happens in a terminal display on top of your game and without pausing your game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to setup so you really have nothing to lose.