I am currently developing a new mechanism to visualize laser beam hits on spaceships' shields. The CPU side is done and the vertex shader is working fine, but I have an issue while writing the pixel shader: any transparency value below 0.5 is invisible.
The following pixel shader is incredibly simple: if the pixel is inside the hit radius, I output a semi-transparent blue pixel; otherwise nothing is shown.
float4 PS(PS_IN input) : SV_Target
{
    if (input.impactDistances.x > 0.0f && input.impactDistances.x < 2.0f)
    {
        return float4(0.0f, 0.0f, 1.0f, 0.5f);
    }
    return float4(1.0f, 1.0f, 1.0f, 0.0f);
}
This results in something like this (see blue area above the yellow arrow).
Now, if I change the line
return float4(0.0f, 0.0f, 1.0f, 0.5f);
to
return float4(0.0f, 0.0f, 1.0f, 0.4f);
then the impact areas are completely invisible, and I can't think of anything that would cause this behaviour. Do you have any idea?
If it helps, here are my settings for the blend state that I use for the entire game.
var blendStateDescription = new SharpDX.Direct3D11.BlendStateDescription();
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = SharpDX.Direct3D11.ColorWriteMaskFlags.All;
blendStateDescription.RenderTarget[0].SourceBlend = SharpDX.Direct3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationBlend = SharpDX.Direct3D11.BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = SharpDX.Direct3D11.BlendOption.Zero;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = SharpDX.Direct3D11.BlendOption.One;
blendStateDescription.AlphaToCoverageEnable = true;
_blendState = new SharpDX.Direct3D11.BlendState(_device, blendStateDescription);
_deviceContext.OutputMerger.SetBlendState(_blendState);
blendStateDescription.AlphaToCoverageEnable = true;
is what creates the issue in your case; setting it to false will give you correct alpha blending. With alpha-to-coverage enabled, the pixel's alpha is converted into an MSAA coverage mask instead of being blended: below a threshold of roughly 0.5, no samples are covered, so the pixel disappears entirely, which is exactly the behaviour you're seeing.
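A minimal sketch of the fix, reusing the question's own setup unchanged except for the alpha-to-coverage flag:

var blendStateDescription = new SharpDX.Direct3D11.BlendStateDescription();
// Alpha-to-coverage off: alpha is now blended instead of being quantized
// into a coverage mask, so values like 0.4 render as expected.
blendStateDescription.AlphaToCoverageEnable = false;
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = SharpDX.Direct3D11.ColorWriteMaskFlags.All;
blendStateDescription.RenderTarget[0].SourceBlend = SharpDX.Direct3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationBlend = SharpDX.Direct3D11.BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = SharpDX.Direct3D11.BlendOption.Zero;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = SharpDX.Direct3D11.BlendOperation.Add;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = SharpDX.Direct3D11.BlendOption.One;
_blendState = new SharpDX.Direct3D11.BlendState(_device, blendStateDescription);
_deviceContext.OutputMerger.SetBlendState(_blendState);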
I use DirectXTK to load a mesh. First, I import the .fbx into VS2015 and build it to get a .cmo file. Then I load the .cmo with DirectXTK as follows:
bool MeshDemo::BuildModels()
{
    m_fxFactory.reset(new EffectFactory(m_pd3dDevice));
    m_states.reset(new CommonStates(m_pd3dDevice));
    m_model = Model::CreateFromCMO(m_pd3dDevice, L"T54.cmo", *m_fxFactory, true, true);
    return true;
}

void MeshDemo::Render()
{
    m_pImmediateContext->ClearRenderTargetView(m_pRenderTargetView, Colors::Silver);
    m_pImmediateContext->ClearDepthStencilView(m_pDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    XMVECTOR qid = XMQuaternionIdentity();
    const XMVECTORF32 scale = { 0.01f, 0.01f, 0.01f };
    const XMVECTORF32 translate = { 0.f, 0.f, 0.f };
    XMMATRIX world = XMLoadFloat4x4(&m_world);
    XMVECTOR rotate = XMQuaternionRotationRollPitchYaw(0, XM_PI / 2.f, XM_PI / 2.f);
    XMMATRIX local = XMMatrixMultiply(world, XMMatrixTransformation(
        g_XMZero, qid, scale, g_XMZero, rotate, translate));
    local *= XMMatrixRotationX(-XM_PIDIV2);

    m_model->Draw(m_pImmediateContext, *m_states, local, XMLoadFloat4x4(&m_view),
        XMLoadFloat4x4(&m_proj));

    m_pSwapChain->Present(0, 0);
}
But I can't get correct results: the angle of the wheels and some details differ from the .fbx model. What should I do? Any ideas?
Your model is fine, but your culling is inverted, and this is why the render is wrong.
Triangles sent to the GPU have a winding order; it has to be consistent, clockwise or counter-clockwise, by arranging the triangle vertices accordingly. A render state then defines which side is the front and what has to be culled away: front, back, or none.
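If your DirectXTK version exposes the winding order as the fourth argument of Model::CreateFromCMO (as the call in your code suggests), flipping that bool is a quick test. The equivalent fix at the render-state level, sketched here in SharpDX/C# to match the other examples on this page (device and deviceContext are placeholders for your own objects):

// Flip IsFrontCounterClockwise (or switch CullMode) to compensate for an
// inverted winding order coming out of the asset pipeline.
var rasterizerDesc = new SharpDX.Direct3D11.RasterizerStateDescription
{
    FillMode = SharpDX.Direct3D11.FillMode.Solid,
    CullMode = SharpDX.Direct3D11.CullMode.Back,
    IsFrontCounterClockwise = true // toggle this if faces render inside-out
};
var rasterizerState = new SharpDX.Direct3D11.RasterizerState(device, rasterizerDesc);
deviceContext.Rasterizer.State = rasterizerState;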
I'm working with OpenGL on iOS and Android. What I'm trying to do is draw a model (a sphere), put the camera/eye coordinates inside it, set a texture, and enable panning and zoom to achieve a 360-degree effect. I already made this for Android using OpenGL ES 1.0, but I was having a lot of problems on iOS, so there I built it with OpenGL ES 2.0. Everything is set up and working, but I have a problem with the panning. To rotate the model-view matrix, I apply rotate transformations. They work, but changing the rotation on one axis messes up the other two: if I apply a rotation on both the X and Y axes, the sphere rotates as if some transformation had been applied around Z, and the texture ends up upside-down or displayed diagonally. I do the exact same transformations on Android and have no problem there. Does anybody have experience with this issue? Any suggestion, clue, code, or article? I think that when I apply the first transformation, the coordinate space changes, so the next transformation is no longer applied the way I expect.
Here's my iOS code :
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLKMatrixStackPush(_obStack);
    GLKMatrixStackRotate(_obStack,
                         GLKMathDegreesToRadians(_obAngleX),
                         0.0f, 1.0f, 0.0f);
    GLKMatrixStackRotate(_obStack,
                         GLKMathDegreesToRadians(_obAngleY),
                         1.0f, 0.0f, 0.0f);
    GLKMatrixStackScale(_obStack,
                        _obScaleFactor,
                        _obScaleFactor,
                        _obScaleFactor);
    self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);

    // Prepare effect
    [self.obEffect prepareToDraw];

    // Draw model
    glDrawArrays(GL_TRIANGLES, 0, panoramaVertices);

    GLKMatrixStackPop(_obStack);
    self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);
}
This is my Android code:
public void onDrawFrame(GL10 arGl) {
    arGl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    arGl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    arGl.glPushMatrix();
    arGl.glRotatef(obAngleY, 1.0f, 0.0f, 0.0f);
    arGl.glRotatef(obAngleX, 0.0f, 1.0f, 0.0f);
    arGl.glScalef(obScaleFactor, obScaleFactor, obScaleFactor);

    if (obModel != null) {
        obModel.draw(arGl);
    }

    arGl.glPopMatrix();
}
I tried over and over again with several implementations, every time with no success. In the end I went back and implemented the OpenGL ES 1.0 version for iOS as well; under that scenario, whenever the matrix is rotated, the axes aren't, so my solution was to ship the OpenGL ES 1.0 version.
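For reference, and going beyond what I actually shipped: the axis coupling happens because the second rotation is applied in the frame already rotated by the first one. Accumulating the orientation in a single quaternion and concatenating small world-axis deltas each frame avoids that. A minimal C# sketch using System.Numerics (the same idea ports to GLKQuaternion on iOS or to a hand-rolled quaternion on Android):

using System;
using System.Numerics;

// Keep one accumulated orientation; feed it small per-frame deltas.
class PanoramaOrientation
{
    private Quaternion _orientation = Quaternion.Identity;

    public void Pan(float dxDegrees, float dyDegrees)
    {
        var yaw = Quaternion.CreateFromAxisAngle(Vector3.UnitY, ToRadians(dxDegrees));
        var pitch = Quaternion.CreateFromAxisAngle(Vector3.UnitX, ToRadians(dyDegrees));
        // Concatenate applies the current orientation first and the new
        // world-axis delta second, so earlier rotations don't skew later axes.
        _orientation = Quaternion.Normalize(
            Quaternion.Concatenate(Quaternion.Concatenate(_orientation, yaw), pitch));
    }

    public Matrix4x4 ModelView() => Matrix4x4.CreateFromQuaternion(_orientation);

    private static float ToRadians(float deg) => deg * (float)Math.PI / 180f;
}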
I am trying to create a Unity augmented reality application that works off the pose determined by the ADF localization. I added the camera feed to the "Persistent State" demo, but my ADF-localized frame does not align with the camera feed: the ground always seems to be transformed so that it sits above the horizon.
I had the same problem and I've found a solution. The problem is that in the PoseController script the horizon is not aligned correctly with the real-world horizon, while the AugmentedReality example does it correctly (in the ARScreen script). So in my AR application, things looked like they moved up as I moved away from them.
To fix this, you can either start your application inside the AugmentedReality example, as Guillaume said, or make the following changes in the PoseController script:
-First of all, we have the matrix m_matrixdTuc, which is initialized as follows:
m_matrixdTuc = new Matrix4x4();
m_matrixdTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(1, new Vector4(0.0f, 1.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(2, new Vector4(0.0f, 0.0f, -1.0f, 0.0f));
m_matrixdTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
Well, we must rename it to m_matrixcTuc and move the sign flip from the Z axis to the Y axis (the -1.0f moves from column 2 to column 1):
m_matrixcTuc = new Matrix4x4();
m_matrixcTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
m_matrixcTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
This is because we're not going to use this matrix directly to get the camera's transform; we'll perform some operations on it first to obtain the true m_matrixdTuc.
-2nd, we must initialize these three variables:
private Matrix4x4 m_matrixdTuc;
// Device frame with respect to IMU frame.
private Matrix4x4 m_imuTd;
// Color camera frame with respect to IMU frame.
private Matrix4x4 m_imuTc;
(the true m_matrixdTuc and the other two are taken from the ARScreen script)
-3rd, we need the _SetCameraExtrinsics method from ARScreen:
/// <summary>
/// This function queries the camera extrinsics, for example the transformation
/// between the IMU and device frames. These extrinsics are used to transform the
/// pose from the color camera frame to the device frame. Because the extrinsics
/// are queried using GetPoseAtTime() with a desired frame pair, they can only be
/// queried after ConnectToService() has been called.
///
/// The device frame with respect to the IMU frame is not directly queryable from
/// the API, so we use the IMU frame as an intermediate to get the device frame
/// with respect to the IMU frame.
/// </summary>
private void _SetCameraExtrinsics()
{
    double timestamp = 0.0;
    TangoCoordinateFramePair pair;
    TangoPoseData poseData = new TangoPoseData();

    // Get the transformation of the device frame with respect to the IMU frame.
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
    Vector3 position = new Vector3((float)poseData.translation[0],
                                   (float)poseData.translation[1],
                                   (float)poseData.translation[2]);
    Quaternion quat = new Quaternion((float)poseData.orientation[0],
                                     (float)poseData.orientation[1],
                                     (float)poseData.orientation[2],
                                     (float)poseData.orientation[3]);
    m_imuTd = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));

    // Get the transformation of the color camera frame with respect to the IMU frame.
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_COLOR;
    PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
    position = new Vector3((float)poseData.translation[0],
                           (float)poseData.translation[1],
                           (float)poseData.translation[2]);
    quat = new Quaternion((float)poseData.orientation[0],
                          (float)poseData.orientation[1],
                          (float)poseData.orientation[2],
                          (float)poseData.orientation[3]);
    m_imuTc = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));

    m_matrixdTuc = Matrix4x4.Inverse(m_imuTd) * m_imuTc * m_matrixcTuc;
}
Now, as you can see, we get the true value of the m_matrixdTuc variable, which will be used in the OnTangoPoseAvailable method.
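For context, a hypothetical sketch of how the corrected m_matrixdTuc is typically consumed when a pose arrives; m_uwTss, devicePosition and deviceRotation follow the naming of the Tango Unity samples and are assumptions, so adapt them to your project:

// Compose: Unity world <- start of service <- device <- Unity camera.
Matrix4x4 ssTd = Matrix4x4.TRS(devicePosition, deviceRotation, Vector3.one);
Matrix4x4 uwTuc = m_uwTss * ssTd * m_matrixdTuc;

// Apply the resulting pose to the camera's transform.
transform.position = uwTuc.GetColumn(3);
transform.rotation = Quaternion.LookRotation(uwTuc.GetColumn(2), uwTuc.GetColumn(1));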
I don't really understand the maths behind these methods, but I've found that it works perfectly in my application. Hope it works for you too :)
I already tried something similar. If I understand correctly, the objects you add to be augmented are rotated toward the top of the screen; this is especially visible when they are far away, as they look as if they were lifted upward.
To make it work, I simply started my project inside the Experimental Augmented Reality app with the "red map marker": there, the camera feed is already aligned to the world properly.
I have been coding hard on a Direct3D9-based game. Everything went well until I hit a big problem: I created a class that wraps the process of loading a mesh from a .x file, and I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that something is wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
// Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState calls.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
// Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that declaring and setting a viewport makes no difference compared to not doing so.
If anybody can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, the identity transformation is applied to your mesh, and the face of the cube will be stretched to the shape of the viewport. If your viewport isn't square (e.g. it's the same size as the screen), then your cube's face won't be square either.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen, you'll need to set a suitable projection matrix. You can calculate a normal perspective projection with D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of the distance from the camera, use D3DXMatrixOrthoLH instead. Note that if you pass your viewport's width and height to the latter function, it will shrink your cube: a unit-size cube would be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or pass something like width/height and 1 as the width and height parameters to D3DXMatrixOrthoLH.
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, (float)D3DMM.Width / D3DMM.Height,
                           1.0f, 1000.f);
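And for the orthographic alternative described above, a sketch (SharpDX's helper is used here to match the C# examples elsewhere on this page; D3DXMatrixOrthoLH takes the same width/height/znear/zfar parameters):

// width/height and 1 as the view-volume width and height keep a unit cube
// square on screen instead of shrinking it to a single pixel.
float aspect = 1280.0f / 1024.0f; // or (float)D3DMM.Width / D3DMM.Height
var proj = SharpDX.Matrix.OrthoLH(aspect, 1.0f, 1.0f, 1000.0f);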
I think your problem is not in the D3DPP parameters but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280 / 1024 = 1.25f.
Please tell me what I'm doing wrong:
that's my Camera class
public class Camera
{
    public Matrix view;
    public Matrix world;
    public Matrix projection;

    public Vector3 position;
    public Vector3 target;

    public float fov;

    public Camera(Vector3 pos, Vector3 tar)
    {
        this.position = pos;
        this.target = tar;
        view = Matrix.LookAtLH(position, target, Vector3.UnitY);
        projection = Matrix.PerspectiveFovLH(fov, 1.6f, 0.001f, 100.0f);
        world = Matrix.Identity;
    }
}
that's my Constant buffer struct:
struct ConstantBuffer
{
    internal Matrix mWorld;
    internal Matrix mView;
    internal Matrix mProjection;
};
and here I'm drawing the triangle and setting up the camera:
x += 0.01f;
camera.position = new Vector3(x, 0.0f, 0.0f);
camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.0f, 100.0f);

var buffer = new Buffer(device, new BufferDescription
{
    Usage = ResourceUsage.Default,
    SizeInBytes = sizeof(ConstantBuffer),
    BindFlags = BindFlags.ConstantBuffer
});

////////////////////////////// camera setup
ConstantBuffer cb;
cb.mProjection = Matrix.Transpose(camera.projection);
cb.mView = Matrix.Transpose(camera.view);
cb.mWorld = Matrix.Transpose(camera.world);

var data = new DataStream(sizeof(ConstantBuffer), true, true);
data.Write(cb);
data.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);
//////////////////////////////////////////////////////////////////////

// set the shaders
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);

// draw the triangle
context.Draw(4, 0);
swapChain.Present(0, PresentFlags.None);
Please, if you can see what's wrong, tell me! :) I have spent two days on this already...
Attempt the second:
@paiden I have initialized fov now (thanks very much :) ) but still no effect (it's now fov = 1.5707963267f;), and @Nico Schertler, thank you too; I put it to use via
context.VertexShader.SetConstantBuffer(buffer, 0);
context.PixelShader.SetConstantBuffer(buffer, 0);
but still no effect... probably my .fx file is wrong? For what purpose do I need this:
cbuffer ConstantBuffer : register(b0)
{
    matrix World;
    matrix View;
    matrix Projection;
}
Attempt the third:
@MHGameWork
Thank you very much too, but still no effect ;)
If anyone has 5 minutes, I can just drop the source code into his/her e-mail and then we will publish the answer... I guess it would help some newbies like me a lot :)
unsafe
{
    x += 0.01f;
    camera.position = new Vector3(x, 0.0f, 0.0f);
    camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
    camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.01f, 100.0f);

    var buffer = new Buffer(device, new BufferDescription
    {
        Usage = ResourceUsage.Default,
        SizeInBytes = sizeof(ConstantBuffer),
        BindFlags = BindFlags.ConstantBuffer
    });
The problem now: I see my triangle, but the camera doesn't move.
You have set your camera's near plane to 0. This makes values in your projection matrix divide by zero, so you get a matrix filled with NaNs.
Use a near-plane value of about 0.01 in your case; it will solve the problem.
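A sketch of the corrected setup, combining this fix with the fov initialization from the second attempt:

// Non-zero near plane, and fov initialized before building the projection
// (the uninitialized fov of 0 degenerates the matrix as well).
camera.fov = 1.5707963267f; // ~90 degrees, as in attempt two
camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.01f, 100.0f);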
I hope you still need help. Here is my camera class, which you can use and easily move around the scene with the mouse/keyboard.
http://pastebin.com/JtiUSiHZ
Call the TakeALook() method each frame (or whenever you move the camera).
You can move it around with the CameraMove method. It takes a Vector3: where you want to move your camera (don't give it huge values; I use 0.001f each frame).
And with CameraRotate() you can turn it around; it takes a Vector2 as a left-right and up-down rotation.
It's pretty easy. I use event handlers to call these two functions, but feel free to edit as you wish.
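A hypothetical usage sketch, using only the methods named above (the constructor and input wiring live in the pastebin):

// Each frame: a small incremental move, a rotation from input deltas, update.
camera.CameraMove(new Vector3(0f, 0f, 0.001f));         // small step, as suggested
camera.CameraRotate(new Vector2(yawDelta, pitchDelta)); // left-right / up-down
camera.TakeALook();                                     // recompute the view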