SlimDX Camera setup - directx

Please tell me what I'm doing wrong.
This is my Camera class:
public class Camera
{
    public Matrix view;
    public Matrix world;
    public Matrix projection;
    public Vector3 position;
    public Vector3 target;
    public float fov;

    public Camera(Vector3 pos, Vector3 tar)
    {
        this.position = pos;
        this.target = tar;
        view = Matrix.LookAtLH(position, target, Vector3.UnitY);
        projection = Matrix.PerspectiveFovLH(fov, 1.6f, 0.001f, 100.0f);
        world = Matrix.Identity;
    }
}
This is my constant buffer struct:
struct ConstantBuffer
{
    internal Matrix mWorld;
    internal Matrix mView;
    internal Matrix mProjection;
};
and here I'm drawing the triangle and setting up the camera:
x += 0.01f;
camera.position = new Vector3(x, 0.0f, 0.0f);
camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.0f, 100.0f);

var buffer = new Buffer(device, new BufferDescription
{
    Usage = ResourceUsage.Default,
    SizeInBytes = sizeof(ConstantBuffer),
    BindFlags = BindFlags.ConstantBuffer
});

////////////////////////////// camera setup
ConstantBuffer cb;
cb.mProjection = Matrix.Transpose(camera.projection);
cb.mView = Matrix.Transpose(camera.view);
cb.mWorld = Matrix.Transpose(camera.world);

var data = new DataStream(sizeof(ConstantBuffer), true, true);
data.Write(cb);
data.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);
//////////////////////////////////////////////////////////////////////

// set the shaders
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);

// draw the triangle
context.Draw(4, 0);
swapChain.Present(0, PresentFlags.None);
Please, if you can see what's wrong, tell me! :) I have already spent two days on this...
Second attempt:
@paiden I initialized fov now (thanks very much :) ) but still no effect (it's now fov = 1.5707963267f;). And @Nico Schertler, thank you too; I put it to use with
context.VertexShader.SetConstantBuffer(buffer, 0);
context.PixelShader.SetConstantBuffer(buffer, 0);
but still no effect... Maybe my .fx file is wrong? What do I need this for:
cbuffer ConstantBuffer : register( b0 ) { matrix World; matrix View; matrix Projection; }
Third attempt:
@MHGameWork, thank you very much too, but still no effect ;)
If anyone has five minutes, I can just send the source code by e-mail and then we can publish the answer here... I guess it would help newbies like me a lot :)
unsafe
{
    x += 0.01f;
    camera.position = new Vector3(x, 0.0f, 0.0f);
    camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
    camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.01f, 100.0f);
    var buffer = new Buffer(device, new BufferDescription
    {
        Usage = ResourceUsage.Default,
        SizeInBytes = sizeof(ConstantBuffer),
        BindFlags = BindFlags.ConstantBuffer
    });
The problem now: I can see my triangle, but the camera doesn't move.

You have set your camera's near plane to 0. This causes a division by zero in the projection matrix, so you get a matrix filled with NaNs.
Use a near plane value of about 0.01 in your case; that will solve the problem.
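A minimal sketch of what that looks like with the same SlimDX call you already use (assuming fov has been given a sensible value such as PI/4):

// Sketch only: same call as in the question, but with a non-zero near plane.
float fov = (float)Math.PI / 4.0f;   // any sensible field of view; 0 breaks the matrix too
float aspect = 1.6f;
float znear = 0.01f;                 // must be greater than 0
float zfar = 100.0f;
camera.projection = Matrix.PerspectiveFovLH(fov, aspect, znear, zfar);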

I hope you still need help. Here is my camera class, which you can use and easily move around the scene with the mouse/keyboard:
http://pastebin.com/JtiUSiHZ
Call the TakeALook() method each frame (or whenever you move the camera).
You can move it around with the CameraMove method. It takes a Vector3 for where you want to move the camera (don't give it huge values; I use 0.001f per frame).
And with CameraRotate() you can turn it around; it takes a Vector2 as a left-right and up-down rotation.
It's pretty easy. I use EventHandlers to call these two functions, but feel free to edit it as you wish.
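A rough usage sketch, assuming the method names described above (myCamera and the mouse deltas are placeholders for your own objects):

// Call these once per frame (method names from the pastebin class above).
myCamera.CameraMove(new Vector3(0.001f, 0f, 0f));              // small step; large values jump too far
myCamera.CameraRotate(new Vector2(mouseDeltaX, mouseDeltaY));  // left-right / up-down rotation
myCamera.TakeALook();                                          // apply the camera for this frame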

Related

Drawing SharpDX effects on bitmap is too slow, what am I doing wrong?

So basically, I need performance. Currently at my job we use GDI+ graphics to draw bitmaps. GDI+ Graphics has a DrawImage(Bitmap, Point[]) method; the array contains 3 points, and the rendered image comes out with a skew effect.
Here is an image of what a skew effect looks like:
(image: skew effect)
At work, we need to render between 5000 and 6000 different images every single frame, which takes ~80 ms.
So I thought of using SharpDX, since it provides GPU acceleration. I use Direct2D since everything I need is in 2D. However, the only way I found to reproduce the skew effect is to use a SharpDX effect and calculate a matrix to draw the initial bitmap with a skew (I will provide the code below). The rendered image is exactly the same as with GDI+, which is what I want. The only problem is that it takes 600-700 ms to render the 5000-6000 images.
Here is my SharpDX code.
To initialize the device:
private void InitializeSharpDX()
{
    swapchaindesc = new SwapChainDescription()
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = this.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput,
        Flags = SwapChainFlags.None
    };

    SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug, swapchaindesc, out device, out swapchain);
    SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
    surface = swapchain.GetBackBuffer<Surface>(0);

    factory = new SharpDX.Direct2D1.Factory1(FactoryType.SingleThreaded, DebugLevel.Information);
    d2device = new SharpDX.Direct2D1.Device(factory, dxgiDevice);
    d2deviceContext = new SharpDX.Direct2D1.DeviceContext(d2device, SharpDX.Direct2D1.DeviceContextOptions.EnableMultithreadedOptimizations);

    bmpproperties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied),
        96, 96);

    d2deviceContext.AntialiasMode = AntialiasMode.Aliased;
    bmp = new SharpDX.Direct2D1.Bitmap(d2deviceContext, surface, bmpproperties);
    d2deviceContext.Target = bmp;
}
And here is the code I use to recalculate every image position each frame (each time I zoom in or out with the mouse, I ask for a redraw). You can see two loops over 5945 images where I draw the image. With no effect it takes 60 ms; with the effect it takes up to 700 ms, as I mentioned before:
private void DrawSkew()
{
    d2deviceContext.BeginDraw();
    d2deviceContext.Clear(SharpDX.Color.Blue);

    // draw skew effect to 5945 images using SharpDX (370ms)
    for (int i = 0; i < 5945; i++)
    {
        AffineTransform2D effect = new AffineTransform2D(d2deviceContext);
        PointF[] points = new PointF[3];
        points[0] = new PointF(50, 50);
        points[1] = new PointF(400, 40);
        points[2] = new PointF(40, 400);
        effect.SetInput(0, actualBmp, true);

        float xAngle = (float)Math.Atan(((points[1].Y - points[0].Y) / (points[1].X - points[0].X)));
        float yAngle = (float)Math.Atan(((points[2].X - points[0].X) / (points[2].Y - points[0].Y)));

        Matrix3x2 Matrix = Matrix3x2.Identity;
        Matrix3x2.Skew(xAngle, yAngle, out Matrix);
        Matrix.M11 = Matrix.M11 * (((points[1].X - points[0].X) + (points[2].X - points[0].X)) / actualBmp.Size.Width);
        Matrix.M22 = Matrix.M22 * (((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / actualBmp.Size.Height);
        effect.TransformMatrix = Matrix;

        d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
        effect.Dispose();
    }

    // draw no effects, only actual bitmap 5945 times using SharpDX (60ms)
    for (int i = 0; i < 5945; i++)
    {
        d2deviceContext.DrawBitmap(actualBmp, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    }

    d2deviceContext.EndDraw();
    swapchain.Present(1, PresentFlags.None);
}
After a lot of benchmarking, I realized the line that makes it really slow is:
d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
My guess is that my code or my setup does not use SharpDX's GPU acceleration as it should, and that's why it is so slow. I would expect SharpDX to perform at least better than GDI+ for this kind of work.

loading mesh with DirectXTK

I use DirectXTK to load a mesh. First I import the .fbx into VS2015 and build it, which produces a .cmo file. Then I load the .cmo with DirectXTK as follows:
bool MeshDemo::BuildModels()
{
    m_fxFactory.reset(new EffectFactory(m_pd3dDevice));
    m_states.reset(new CommonStates(m_pd3dDevice));
    m_model = Model::CreateFromCMO(m_pd3dDevice, L"T54.cmo", *m_fxFactory, true, true);
    return true;
}

void MeshDemo::Render()
{
    m_pImmediateContext->ClearRenderTargetView(m_pRenderTargetView, Colors::Silver);
    m_pImmediateContext->ClearDepthStencilView(m_pDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    XMVECTOR qid = XMQuaternionIdentity();
    const XMVECTORF32 scale = { 0.01f, 0.01f, 0.01f };
    const XMVECTORF32 translate = { 0.f, 0.f, 0.f };
    XMMATRIX world = XMLoadFloat4x4(&m_world);
    XMVECTOR rotate = XMQuaternionRotationRollPitchYaw(0, XM_PI / 2.f, XM_PI / 2.f);
    rotate = XMQuaternionRotationRollPitchYaw(0, XM_PI / 2.f, XM_PI / 2.f);
    XMMATRIX local = XMMatrixMultiply(world, XMMatrixTransformation(
        g_XMZero, qid, scale, g_XMZero, rotate, translate));
    local *= XMMatrixRotationX(-XM_PIDIV2);

    m_model->Draw(m_pImmediateContext, *m_states, local, XMLoadFloat4x4(&m_view),
        XMLoadFloat4x4(&m_proj));

    m_pSwapChain->Present(0, 0);
}
But I can't get correct results; the angle of the wheels and some details differ from the .fbx model.
What should I do? Any ideas?
Your model is fine, but your culling is inverted, and this is why the render is wrong.
Triangles sent to the GPU have a winding order; it has to be consistent, clockwise or counter-clockwise, which is determined by the ordering of each triangle's vertices. A render state then defines which side is the front face and what gets culled away: front, back, or none.
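As an illustration only (this is the plain D3D11 state involved, sketched with SharpDX rather than the DirectXTK-specific helpers): you either re-export the model with consistent winding, or tell the rasterizer which winding counts as the front face.

// Keep back-face culling, but flip which winding is treated as the front.
var desc = new RasterizerStateDescription
{
    FillMode = FillMode.Solid,
    CullMode = CullMode.Back,           // still cull one side
    IsFrontCounterClockwise = true      // counter-clockwise triangles are now front-facing
};
var flipped = new RasterizerState(device, desc);
context.Rasterizer.State = flipped;     // set before drawing the model

DirectXTK's CommonStates also exposes ready-made CullClockwise()/CullCounterClockwise() rasterizer states you can set before the draw.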

[XNA]Drawing a simple transparent plane

I'm working with XNA and I have a problem.
A basic problem, but argh!!!
I want to draw an ocean, a BASIC ocean: just a plane, blue and transparent.
Just a plane.
I have tried with vertices, with models and with textures.
How does the alpha channel work in XNA? StencilState, DepthBuffer, nothing works.
Can you explain how to do this?
Is VertexPositionColor enough?
Excuse me, but I have been searching for a long time.
class Ocean
{
    Effect shader0;
    public Vector3 Position;
    GraphicsDevice Graphics;
    Camera camera;
    Model Mesh;
    Texture2D waterTexture;
    Rectangle screen;
    Texture2D test;

    public Ocean(Vector3 pos, int size, GraphicsDevice gra, Camera cam, ContentManager content)
    {
        Graphics = gra;
        shader0 = content.Load<Effect>("Ocean");
        //shader0 = new BasicEffect(this.Graphics);
        waterTexture = content.Load<Texture2D>("Images/shaderUnderwater");
        screen = new Rectangle(0, 0, this.Graphics.Viewport.Width, this.Graphics.Viewport.Height);
        Position = pos;
        Mesh = content.Load<Model>("Models/ocean");
        camera = cam;
        foreach (ModelMesh mesh in this.Mesh.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = shader0;
            }
        }
    }

    bool underWater;
    Vector3 lightDirection = new Vector3(-1.0f, -1.0f, -1.0f);

    public void Draw(SpriteBatch spriteBatch, Player player)
    {
        Matrix world = Matrix.CreateScale(100f) * Matrix.CreateRotationX(MathHelper.ToRadians(-90f)) * Matrix.CreateTranslation(Position);
        //this.shader0.EnableDefaultLighting();
        this.shader0.Parameters["World"].SetValue(world);
        this.shader0.Parameters["View"].SetValue(player.Camera.View);
        this.shader0.Parameters["Projection"].SetValue(player.Camera.Projection);

        // Draw the model
        foreach (ModelMesh mesh in this.Mesh.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                effect.CurrentTechnique = effect.Techniques["Textured"];
                effect.Parameters["DiffuseColor"].SetValue(new Vector4(1f, 0.2f, 0.2f, 1f));   // a reddish light
                effect.Parameters["DiffuseLightDirection"].SetValue(new Vector3(1, 0, 0));     // coming along the x-axis
                effect.Parameters["SpecularColor"].SetValue(new Vector4(0, 1, 0, 1f));         // with green highlights
                effect.Parameters["World"].SetValue(world);
                effect.Parameters["View"].SetValue(player.Camera.View);
                effect.Parameters["Projection"].SetValue(player.Camera.Projection);
            }
            mesh.Draw();
        }

        this.Graphics.BlendState = BlendState.Opaque;

        // Effect (underwater overlay)
        if (camera.Position.Y < Position.Y)
            spriteBatch.Draw(this.waterTexture, this.screen, Color.White);
    }

    public void Update(GameTime gameTime)
    {
    }

    void onWater()
    {
        underWater = true;
    }
}
If you are using BasicEffect to render the ocean, note that BasicEffect has an Alpha property, which controls the transparency. See here: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.basiceffect.alpha.aspx
You can use BasicEffect to create transparency with whichever method you are using to render your model.
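For example, a minimal sketch (assuming the plane is drawn with a BasicEffect named basicEffect):

// XNA 4.0: a semi-transparent blue plane via BasicEffect.
GraphicsDevice.BlendState = BlendState.AlphaBlend;        // alpha blending must be enabled for this draw
basicEffect.DiffuseColor = new Vector3(0f, 0.3f, 0.8f);   // blue-ish water colour
basicEffect.Alpha = 0.5f;                                 // 0 = fully transparent, 1 = opaque
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // draw the plane's vertices here, e.g. with DrawUserPrimitives
}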

Transparency shader cover model

I am using XNA, implementing an HLSL shader, and I have a problem with transparency in the shader.
When rendering two models facing each other, the model behind is only visible when it is rendered first.
Let me explain...
blue cone = Vector3(0, 0, 0) - first target
green cone = Vector3(50, 0, 50) - second target
Here I render the blue one first and then the green, and the blue cone is visible:
(screenshot: can see)
Now the other way around, the green before the blue, and it is not visible:
(screenshot: cannot see)
As long as there are only two cones I can calculate their distance from the camera and render the more distant one first (the only solution I found by searching the net), but if I have several models of different sizes, it can happen that a model A is more distant than a model B and yet its size causes it to hide model B.
Here is some of the code I use.
The .fx file:
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Color;

texture ColorTexture : DIFFUSE;
sampler2D ColorSampler = sampler_state
{
    Texture = <ColorTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    float4 projPosition = mul(viewPosition, Projection);
    output.Position = projPosition;
    output.UV = input.UV;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 map = tex2D(ColorSampler, input.UV);
    return float4(map.rgb * Color, map.a);
}

technique Textured
{
    pass Pass1
    {
        ZEnable = true;
        ZWriteEnable = true;
        AlphaBlendEnable = true;
        DestBlend = DestAlpha;
        SrcBlend = BlendFactor;

        VertexShader = compile vs_3_0 VertexShaderFunction();
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}
The draw code in the XNA project:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.DarkGray);

    for (int i = 0; i < 2; i++)
    {
        ModelEffect.Parameters["View"].SetValue(this.View);
        ModelEffect.Parameters["Projection"].SetValue(this.Projection);
        ModelEffect.Parameters["ColorTexture"].SetValue(this.colorTextured);
        ModelEffect.CurrentTechnique = ModelEffect.Techniques["Textured"];

        Vector3 Scala = new Vector3(2, 3, 1);
        if (i == 0)
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(FirstTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 0, 255)); // blue
        }
        else
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(SecondTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 255, 0)); // green
        }
        ModelEffect.Parameters["World"].SetValue(World);

        foreach (ModelMesh mesh in Emitter.Meshes)
        {
            foreach (ModelMeshPart effect in mesh.MeshParts)
            {
                effect.Effect = ModelEffect;
            }
            mesh.Draw();
        }
    }

    base.Draw(gameTime);
}
I need the for loop to change the rendering order...
Is there any way to work around this problem?
I hope I have explained myself; I think the .fx file is what needs work... or not?
I'm completely lost :)
This is a classic computer graphics problem. What you need to do depends on your specific model and needs, but transparency is, as you've discovered, order dependent. As Nico mentioned, if you sort entire meshes back-to-front (draw the farthest ones first) every frame, you will be okay. But what about curved surfaces that sometimes need to draw in front of themselves (that is, surfaces that are self-occluding from the camera's point of view)? Then you have to go much further and sort the polygons within the mesh (adios, high performance!). If you don't sort, chances are the order will look correct 50% of the time or less (unless your model is posed just right).
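(The whole-mesh sort is just an ordering by distance from the camera before you draw; a sketch, with transparentObjects and Position standing in for your own scene types:)

// Draw transparent objects farthest-first (requires using System.Linq).
var backToFront = transparentObjects
    .OrderByDescending(o => Vector3.DistanceSquared(cameraPosition, o.Position));
foreach (var obj in backToFront)
    obj.Draw();   // the farthest objects reach the frame buffer first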
If you are drawing transparent cones with two-sided rendering enabled, they will look correct from some angles and wrong from others as they rotate. Most of the time wrong.
One option is to simply turn off depth writes during the pass(es) where you draw the transparent items. Again, YMMV depending on the scene's needs, but this can be a useful fix in many cases. Another is to segment the model and sort the meshes.
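In XNA 4.0 terms, turning off depth writes for the transparent pass is just a state swap (sketch):

GraphicsDevice.BlendState = BlendState.AlphaBlend;
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;   // depth test still reads, but no longer writes
// ... draw the transparent meshes here ...
GraphicsDevice.DepthStencilState = DepthStencilState.Default;     // restore for opaque geometry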
In games, many strategies have been used, including re-ordering the models by hand, forcing the artists to limit transparency to certain passes only, drawing two passes per transparent layer (one with transparent color, and another opaque one with color writes disabled just to get the Z buffer correct), sending models back for a re-do, or even, if the errors are small or the action is fast, just accepting broken transparency.
Various solutions have been proposed for this general problem; "OIT" (Order-Independent Transparency) is a big enough topic to have its own Wikipedia page: http://en.wikipedia.org/wiki/Order-independent_transparency

XNA mouse picking issues

I am pulling my hair out over this mouse picking thing. I don't know whether the problem lies in my ray calculation or my BoundingSpheres. Anyway, here's the code for my ray calculation:
public Ray CalculateRay(InputManager input)
{
    Vector3 nearSource = new Vector3(input.CurrentMousePosition, 0f);
    Vector3 farSource = new Vector3(input.CurrentMousePosition, 1f);
    Vector3 nearPoint = Engine.Device.Viewport.Unproject(nearSource, _projectionMatrix,
        _viewMatrix, Matrix.Identity);
    Vector3 farPoint = Engine.Device.Viewport.Unproject(farSource,
        _projectionMatrix, _viewMatrix, Matrix.Identity);
    Vector3 direction = farPoint - nearPoint;
    direction.Normalize();
    return new Ray(nearPoint, direction);
}
and for the intersection check:
public bool RayIntersectsModel()
{
    foreach (ModelMesh mesh in _model.Meshes)
    {
        BoundingSphere sphere = mesh.BoundingSphere;
        sphere.Center = _position;
        Ray ray = Engine.Camera.CalculateRay(Engine.Input);
        if (sphere.Intersects(ray) != null)
        {
            return true;
        }
    }
    return false;
}
It's not that it isn't working at all, but it seems to be very inaccurate... or something. The models are just spheres, very simple. Any help or insight would be greatly appreciated!
Thanks!
Well, when I was transforming my bounding sphere I was applying the world matrix first and then the bone transforms. That put the bounding sphere in the wrong place. Switching it to apply the bone transforms first and THEN the world matrix did it.
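In code, that ordering looks roughly like this (a sketch; _model, _position, Engine and the ray calculation come from the snippets above, and the world matrix here is just a translation stand-in):

public bool RayIntersectsModel()
{
    Matrix[] boneTransforms = new Matrix[_model.Bones.Count];
    _model.CopyAbsoluteBoneTransformsTo(boneTransforms);
    Matrix world = Matrix.CreateTranslation(_position);   // stand-in for the model's world matrix
    Ray ray = Engine.Camera.CalculateRay(Engine.Input);

    foreach (ModelMesh mesh in _model.Meshes)
    {
        // Bone transform first, THEN the world matrix.
        BoundingSphere sphere = mesh.BoundingSphere.Transform(
            boneTransforms[mesh.ParentBone.Index] * world);
        if (sphere.Intersects(ray) != null)
            return true;
    }
    return false;
}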
