Okay guys, I have spent a good two weeks trying to figure this out. I've tried working it out with my own math alone and had no success. I've also looked everywhere, and people keep recommending Viewport.Project().
It's not that simple, though. Everywhere I've looked, including MSDN and various forums, just says to use it, but nobody explains the matrices and values it needs to work. I have found no useful information on how to use this method correctly, and it's seriously driving me insane. Please help this poor fella out.
First I'm going to post my current code. I have five or so different versions and none of them have worked; the closest I got was a result of NaN, which I don't understand. I'm trying to display text on screen based on where the asteroids are, and if an asteroid is very far away the text will act as a guide so players can fly to it.
Vector3 camLookAt = Vector3.Zero;
Vector3 up = new Vector3(0.0f, 0.0f, 0.0f);
float nearClip = 1000.0f;
float farClip = 100000.0f;
float viewAngle = MathHelper.ToRadians(90f);
float aspectRatio = (float)viewPort.Width / (float)viewPort.Height;
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 0), camLookAt, up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(viewAngle, aspectRatio, nearClip, farClip);
Matrix world = Matrix.CreateTranslation(camPosition.X, camPosition.Y, 0);
Vector3 projectedPosition = graphicsDevice.Viewport.Project(new Vector3(worldPosition.X, worldPosition.Y, 0), projection, view, Matrix.Identity);
screenPosition.X = projectedPosition.X;
screenPosition.Y = projectedPosition.Y;
if (screenPosition.X > 400) screenPosition.X = 400;
if (screenPosition.X < 100) screenPosition.X = 100;
if (screenPosition.Y > 400) screenPosition.Y = 400;
if (screenPosition.Y < 100) screenPosition.Y = 100;
return screenPosition;
As far as I know, the vector passed to Project is the camera position. My game is 2D, so Vector3 is annoying; I thought maybe my Z could be the camera zoom. I guessed that projection might be the object we want to convert to a 2D screen position, that view might be how much the camera can see, and I'm not sure about the last parameter.
After about two weeks of searching with no improvement in my code or my understanding, and getting more confused every time I read the MSDN tutorials, I decided to post a question, because I'm extremely close to just not implementing world-to-screen conversion at all. All help is appreciated, thanks :)
Plus, since this is a 2D game, it adds confusion that people mostly talk about the Z axis, when a 2D game has no Z axis; it just transforms sprites to give the appearance of zooming or movement. Thanks again :)
I may be misunderstanding here, but I don't think you need to be using a 3D camera provided with XNA for a 2D game. Unless you're trying to make a 2.5D game using 3D for some sort of parallax system or whatever, you don't need to use that at all. Take a look at these:
2D Camera Implementation in XNA
Simpler 2D Camera
XNA 2D tutorials
2D in XNA works differently than 3D. You don't need to worry about the 3D viewport or a 3D camera or anything; there is no near clipping or far clipping. 2D is well handled in XNA, and I think you are misunderstanding a bit of how XNA works.
PS: You don't need to use Vector3s. Instead, use Vector2s. I think you will find them much easier to work with in a 2D game. ^^
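To make that concrete, here is a minimal sketch of what a 2D world-to-screen conversion can look like with no 3D machinery at all. The field and parameter names (cameraPosition, zoom, the 100-pixel clamp margin) are assumptions for illustration, not code from the question above:
// Minimal 2D "camera": world position -> screen position, all in Vector2.
// cameraPosition is the world point shown at the centre of the screen; zoom scales the world.
Vector2 WorldToScreen(Vector2 worldPosition, Vector2 cameraPosition, float zoom, Viewport viewport)
{
    Vector2 screenCentre = new Vector2(viewport.Width / 2f, viewport.Height / 2f);
    Vector2 screenPosition = (worldPosition - cameraPosition) * zoom + screenCentre;

    // Clamp so far-away asteroids still show a guide near the edge of the screen.
    screenPosition.X = MathHelper.Clamp(screenPosition.X, 100f, viewport.Width - 100f);
    screenPosition.Y = MathHelper.Clamp(screenPosition.Y, 100f, viewport.Height - 100f);
    return screenPosition;
}
Once you have that, drawing the guide text is just spriteBatch.DrawString(font, text, screenPosition, Color.White); there are no view or projection matrices involved.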
Related
I'm trying to use Kudan AR in a project, and I have a couple questions:
1) The marker size relative to the scene seems pretty weird to me. For example, I'm using a 150x150 px image as a marker, and when I use it in the scene it occupies 150 units! It forces all my objects to be extremely huge, sometimes even extending beyond the camera's far plane, which breaks the augmentation. Is that correct, or am I missing something?
2) I'm trying to use a marker to define the starting position of the augmentation, and then switch to markerless tracking to allow a broader experience. They have sample code using the native iOS lib (https://wiki.kudan.eu/Marker_to_Markerless), but no reference on how to do it in Unity. This is what I'm trying:
markerlessDriver.localScale = new Vector3(markerDriver.localScale.x, markerDriver.localScale.x, markerDriver.localScale.z);
markerlessDriver.localPosition = markerDriver.localPosition;
markerlessDriver.localRotation = markerDriver.localRotation;
target.SetParent(markerlessDriver);
tracker.ChangeTrackingMethod(markerlessTracking);
// from the floor placer.
Vector3 floorPosition; // The current position in 3D space of the floor
Quaternion floorOrientation; // The current orientation of the floor in 3D space, relative to the device
tracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
tracker.ArbiTrackStart(floorPosition, floorOrientation);
It switches, but the position/rotation of the model goes off. Any idea on how that can be done?
Thanks in advance!
Okay, this is driving me crazy. I've looked through tons of examples and can't seem to get quite what I need. I'm using XNA and I have a plane of vertices and my camera is up in the sky looking down on the vertices.
What I want is to rotate the camera around the Y axis, basically getting the same result as adjusting its yaw. However, whenever I try to rotate around the Y axis or adjust the yaw, nothing actually happens. I can get the effect I want by applying a Y rotation matrix to the world, but that doesn't feel like the "correct" way of doing it; I want the camera itself to spin, not the world. Here's a code snippet of what I have:
cameraPosition = Vector3.Transform(new Vector3(
cameraOffset.X - cameraOffset.X,
zoomAmount,
cameraOffset.Z - cameraOffset.Z),
Matrix.CreateRotationY(rotationAngle)) + cameraOffset;
view = Matrix.CreateLookAt(cameraPosition, cameraTarget, new Vector3(0, 0, 1));
Thanks!
Since you have the camera looking straight down, it sounds like what you want is for the camera to roll in local space. To roll your camera, you must rotate the Up vector you are feeding to the CreateLookAt(). Like this:
Vector3 newUp = Vector3.Transform(Vector3.UnitZ, Matrix.CreateRotationY(rotationAngle));
view = Matrix.CreateLookAt(cameraPosition, cameraTarget, newUp);
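As a follow-up (a generalised sketch, not part of the fix above): if the camera might later be tilted away from straight down, you can roll about the actual view direction instead of the fixed Y axis. The names cameraPosition, cameraTarget and rotationAngle are assumed to be the same fields used in the question:
// Roll about the view direction so the same code works at any camera tilt.
Vector3 forward = Vector3.Normalize(cameraTarget - cameraPosition);
Vector3 rolledUp = Vector3.Transform(Vector3.UnitZ, Matrix.CreateFromAxisAngle(forward, rotationAngle));
view = Matrix.CreateLookAt(cameraPosition, cameraTarget, rolledUp);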
I've been working on a project that helps create a virtual reality experience on a laptop and/or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves their head right relative to the laptop, the scene is rotated towards the left, giving the effect of a virtual tour, like looking through a window.
To enhance the visual appeal, I want to add darkness at the back plane, so the box looks as if it were a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error regarding normals.
I have no clue what it means by that, and I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light behind the back of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error? Is the method/approach correct?
Any help shall be welcome.
Thanks.
Your vertex has Position and Color channels, but it has no normals... so you have to provide a vertex type that has them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here you are, a custom implementation: VertexPositionNormalColor
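In case the link ever dies, a struct along those lines might look roughly like this (a sketch only, not necessarily identical to the linked implementation):
// Custom vertex with Position, Normal and Color channels for XNA 4.0.
public struct VertexPositionNormalColor : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;
    public Color Color;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Color, VertexElementUsage.Color, 0));

    public VertexPositionNormalColor(Vector3 position, Vector3 normal, Color color)
    {
        Position = position;
        Normal = normal;
        Color = color;
    }

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}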
You need to add a normal (vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while it doesn't look as good, is faster).
I've recently been trying to create some 3D rendering code in Silverlight 5 with XNA. Unfortunately, I have been having trouble getting anything (using my custom shader) to work.
The BasicEffect works on a cube using only VertexPositionColor information, but when I switch to my custom shader nothing seems to render (or it renders off-screen).
To try and help myself with this issue I even got hold of the BasicEffect HLSL code, but it doesn't do anything I'm not doing.
The code takes a world, a view and a projection matrix, and multiplies the position through them in the following order:
float4 pos_ws = mul(position, World);
float4 pos_vs = mul(pos_ws, View);
float4 pos_ps = mul(pos_vs, Projection);
I changed my code to do the same thing (instead of passing in a single WorldViewProjection matrix); my shader uses this to calculate a position and then just applies a color to the pixel. Yet nothing renders.
I'm pretty stuck on this. I'm passable at basic 3D, but passable doesn't seem to cut it! :)
So it turns out the issue is fairly simple!
I actually deleted this question initially because I knew the issue was likely my matrices and so it was unlikely I'd get much help!
After some stumbling on google, and more coffee than I'd like to admit to, I found the answer.
XNA transposes its matrices on the sly and doesn't tell you! I had tried transposing the view and projection matrices in the vain hope that I knew what I was doing, but it wasn't helping.
Instead, I now pass in a single WorldViewProjection_Transposed matrix, which is calculated as follows.
Matrix worldViewProjection_Transpose = Matrix.Transpose(world * view * projection);
This seems to work at the moment and I am hoping it is this simple.
I am sure I will come across a million more problems as the models I need to render become more complex, but I decided to leave this up in case anyone in a similar situation (and at a similar experience level) to me is struggling :)
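For anyone wiring this up, here is a sketch of the C# side in plain XNA terms. The parameter name "WorldViewProjectionTransposed" is an assumption for illustration and has to match whatever your .fx file declares; Silverlight 5's effect wrapper may expose parameters slightly differently from the desktop Effect class:
// Build the combined matrix once per draw and hand its transpose to the custom shader.
Matrix worldViewProjectionTransposed = Matrix.Transpose(world * view * projection);
myEffect.Parameters["WorldViewProjectionTransposed"].SetValue(worldViewProjectionTransposed);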
I'm creating an iOS drawing app and would like to understand how to rotate the texture I'm using for drawing so that it follows the direction of my stroke.
Sketchbook for iOS and Brushes are two apps that I've seen accomplish this.
Does anyone have any idea of how to achieve this?
Here's an example of the concept that I'm trying to capture: http://drawsketch.about.com/od/learntodraw/ss/pencilshading_5.htm
Attached is a screenshot from the sketchbook app of this in practice.
UPDATE:
I was able to figure this out (thanks to SO community for helping out!) and posted my answer here: https://stackoverflow.com/a/11298219/111856
You can just rotate the brush texture as you're drawing it. You can get the angle by taking the arctangent of the y delta divided by the x delta for each segment of the stroke:
atan2(newY - prevY, newX - prevX);
Then rotate your brush texture by that amount before blending it at each point along the line.
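In rough C#/XNA-style terms (purely to illustrate the maths; the actual apps are OpenGL on iOS, and spriteBatch, brushTexture and the point names are assumptions), each stamp along the stroke would be drawn like this:
// Angle of the current stroke segment, from the previous point to the new one.
float angle = (float)Math.Atan2(newY - prevY, newX - prevX);

// Draw the brush stamp rotated by that angle, centred on the new point, then blend.
spriteBatch.Draw(brushTexture, new Vector2(newX, newY), null, Color.White, angle,
                 new Vector2(brushTexture.Width / 2f, brushTexture.Height / 2f),
                 1f, SpriteEffects.None, 0f);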