I have been working hard on a Direct3D9-based game. Everything went well until I hit a big problem. I created a class that wraps the process of loading a mesh from a .x file. I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that there is something wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
// Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState functions.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
// Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that there is no difference between declaring and setting up a viewport and not doing so at all.
If there is anybody who can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, so the identity transformation is applied to your mesh, then the face of the cube will be stretched to the shape of the viewport. If your viewport isn't square (e.g. it's the same size as the screen), then your cube's face also won't be square.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen, you'll need to set a suitable projection matrix. You can calculate a normal perspective projection matrix using D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of the distance from the camera, then use D3DXMatrixOrthoLH to calculate the projection matrix. Note that if you use your viewport's width and height with the latter function it will shrink your cube: a unit-sized cube will be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or use something like width/height and 1 as your width and height parameters to D3DXMatrixOrthoLH.
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, (float)D3DMM.Width / (float)D3DMM.Height,
                           1.0f, 1000.0f);
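If you instead want the orthographic projection mentioned above, a minimal sketch might be (assuming you keep the cube roughly unit-sized; the exact view-volume width and height are up to you):

// Sketch: a view volume that is "aspect" units wide and 1 unit tall keeps a unit cube square.
float aspect = (float)D3DMM.Width / (float)D3DMM.Height;
D3DXMatrixOrthoLH(&Proj, aspect, 1.0f, 1.0f, 1000.0f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);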
I think your problem is not in the D3DPP parameters but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280.0f / 1024.0f = 1.25f rather than 1.0f.
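Applied to the update function from the question, that would be something like this (a sketch reusing the back-buffer dimensions already stored in PresP):

// Sketch: pass the real back-buffer aspect ratio instead of 1.0f.
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4,
    (float)PresP.BackBufferWidth / (float)PresP.BackBufferHeight,
    1.0f, 1000.0f);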
I was trying to draw a half circle with renderEncoder's drawIndexedPrimitives
[renderEncoder setVertexBuffer:self.vertexBuffer offset:0 atIndex:0];
[renderEncoder drawIndexedPrimitives:MTLPrimitiveTypeTriangleStrip
indexCount:self.indexCount
indexType:MTLIndexTypeUInt16
indexBuffer:self.indicesBuffer
indexBufferOffset:0];
where the vertexBuffer and indicesBuffer for the circle are computed like this:
int segments = 10;
float vertices02[ (segments +1)* (3+4)];
vertices02[0] = centerX;
vertices02[1] = centerY;
vertices02[2] = 0;
//3, 4, 5, 6 are RGBA
vertices02[3] = 1.0;
vertices02[4] = 0;
vertices02[5] = 0.0;
vertices02[6] = 1.0;
uint16_t indices[(segments -1)*3];
for (int i = 1; i <= segments ; i++){
float degree = (i -1) * (endDegree - startDegree)/ (segments -1) + startDegree;
vertices02[i*7] = (centerX + cos([self degreesToRadians:degree])*radius);
vertices02[i*7 +1] = (centerY + sin([self degreesToRadians:degree])*radius);
vertices02[i*7 +2] = 0;
vertices02[i*7 +3] = 1.0;
vertices02[i*7 +4] = 0;
vertices02[i*7 +5] = 0.0;
vertices02[i*7 +6] = 1.0;
if (i < segments){
indices[(i-1)*3 + 0] = 0;
indices[(i-1)*3 + 1] = i;
indices[(i-1)*3 + 2] = i+1;
}
}
So I am combining 9 triangles to form a half circle (180 degrees).
Then I create the vertexBuffer and indicesBuffer:
self.vertexBuffer = [device newBufferWithBytes:vertexArrayPtr
length:vertexDataSize
options:MTLResourceOptionCPUCacheModeDefault];
self.indicesBuffer = [device newBufferWithBytes:indexArrayPtr
length:indicesDataSize
options:MTLResourceOptionCPUCacheModeDefault];
The result is like this:
I believe this is an anti-aliasing problem with Metal on iOS. I used to create the half circle in OpenGL using the same technique, and the edges were much smoother there.
Any suggestions to tackle the problem?
As suggested by warrenm, I set the CAMetalLayer's drawableSize equal to screenSize x scale. There are improvements:
Following another suggestion by warrenm, using MTKView and setting sampleCount = 4 solved the problem:
There are a couple of things to consider here. First, you need to ensure that (when possible) the size of the grid you're rasterizing to matches the resolution of the display it will be viewed on. Second, you might need to use subpixel techniques to eke out additional smoothness, since raster techniques tend to undersample continuous functions.
In Metal, the way we match the rendered image size to the display is by ensuring that the drawable size of the Metal layer matches the pixel dimensions it will occupy on the screen. When using CAMetalLayer directly, the default behavior is for the drawable size of the layer to be the size of the layer's bounds multiplied by the layer's contentsScale property. Setting the latter to the scale of the UIScreen onto which the layer is composited will match the layer's dimensions to the screen's pixels (ignoring other transformations that might be applied to the layer or its view hierarchy).
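As a concrete sketch of that setup when driving a CAMetalLayer directly (metalLayer here stands for whatever layer backs your view, so it is an assumption rather than a name from your code):

// Sketch: match the layer's drawable size to the screen's pixel grid.
// (Requires UIKit and QuartzCore/Metal headers.)
CGFloat scale = [UIScreen mainScreen].nativeScale;
metalLayer.contentsScale = scale;
CGSize boundsSize = metalLayer.bounds.size;
metalLayer.drawableSize = CGSizeMake(boundsSize.width * scale,
                                     boundsSize.height * scale);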
When using MTKView, the autoResizeDrawable property determines whether the view automatically manages its layer's drawable size. This is the default behavior, but if you set this property to NO, you can manually set the drawable size to something else (e.g., use adaptive resolution rendering when fragment-bound).
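For example, something along these lines (a sketch; customWidth and customHeight are hypothetical values you would choose yourself):

// Sketch: opt out of automatic drawable sizing and render at a custom resolution.
mtkView.autoResizeDrawable = NO;
mtkView.drawableSize = CGSizeMake(customWidth, customHeight);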
In order to sample more finely, we have our choice among any number of antialiasing techniques, but perhaps the easiest of these is multisampled antialiasing (MSAA), a hardware feature that—as the name suggests—takes multiple samples for each pixel along the edges of primitives, in order to reduce the jagged effects of aliasing.
In Metal, using MSAA requires setting multisampling state (i.e., the sample count) on both the render pipeline state and the textures used for rendering. MSAA is a two-step process, where a render target that can hold the data for multiple fragments per pixel is rendered to, then a resolve step combines these samples into the final color for each pixel. When using CAMetalLayer (or drawing off-screen), you must create a texture of type MTLTextureType2DMultisample for each active color/depth attachment. These textures are configured as the texture property of their respective color/depth attachments, and the resolveTexture property is set to a texture of type MTLTextureType2D, into which the MSAA targets are resolved.
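A rough sketch of that attachment setup (device, layer, and drawable come from your own setup, and the 4x sample count and BGRA pixel format are assumptions):

// Sketch: create a 4x multisampled color target and resolve it into the drawable.
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                       width:layer.drawableSize.width
                                                      height:layer.drawableSize.height
                                                   mipmapped:NO];
desc.textureType = MTLTextureType2DMultisample;
desc.sampleCount = 4;
desc.usage = MTLTextureUsageRenderTarget;
desc.storageMode = MTLStorageModePrivate;
id<MTLTexture> msaaColorTexture = [device newTextureWithDescriptor:desc];

MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];
pass.colorAttachments[0].texture = msaaColorTexture;
pass.colorAttachments[0].resolveTexture = drawable.texture;
pass.colorAttachments[0].loadAction = MTLLoadActionClear;
pass.colorAttachments[0].storeAction = MTLStoreActionMultisampleResolve;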
When using MTKView, simply setting the sampleCount on the view to match the sampleCount of the render pipeline descriptor is sufficient to get MetalKit to create and manage the appropriate resources. By default, the render pass descriptors you receive from a view will have an internally-managed MSAA color target set as the primary color attachment, and the current drawable's texture set as the resolve texture of that attachment. In this way, enabling MSAA with MetalKit only requires a couple of lines of code.
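For reference, a sketch of the MetalKit route (mtkView and pipelineDescriptor stand for your own view and render pipeline descriptor):

// Sketch: the view and the render pipeline must agree on the sample count.
mtkView.sampleCount = 4;
pipelineDescriptor.sampleCount = 4;
// The render pass descriptor vended by the view then carries an MSAA color target
// whose resolve texture is the current drawable's texture.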
I'm working with OpenGL on iOS and Android. What I'm trying to do is draw a model (a sphere), set the camera/eye coordinates inside the sphere, set a texture, and enable panning and zoom to achieve a 360-degree effect. I already made this work on Android using OpenGL 1.0, but I was having a lot of problems on iOS, so I made it using OpenGL 2.0. Everything is set up and working, but I'm having a problem with the panning. In order to rotate the model-view matrix, I apply the rotate transformation. It works, but if I change any axis rotation, it messes up the other two axes. In the end, if I apply a rotation on both the X and Y axes, the sphere rotates as if some kind of transformation had been done on the Z axis: the texture ends up upside-down or displayed diagonally. I'm doing the exact same transformations on Android and I don't have any problem there. Does anybody have experience with this issue? Any suggestion, clue, code, or article? I think that when I apply the first transformation, the coordinates in the space change and the next transformation is not being applied properly.
Here's my iOS code:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(GL_COLOR_BUFFER_BIT);
GLKMatrixStackPush(_obStack);
GLKMatrixStackRotate(_obStack,
GLKMathDegreesToRadians(_obAngleX),
0.0f, 1.0f, 0.0f);
GLKMatrixStackRotate(_obStack,
GLKMathDegreesToRadians(_obAngleY),
1.0f, 0.0f, 0.0f);
GLKMatrixStackScale(_obStack,
_obScaleFactor,
_obScaleFactor,
_obScaleFactor);
self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);
// Prepare effect
[self.obEffect prepareToDraw];
// Draw Model
glDrawArrays(GL_TRIANGLES, 0, panoramaVertices);
GLKMatrixStackPop(_obStack);
self.obEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(_obStack);
}
This is my Android code:
public void onDrawFrame(GL10 arGl) {
arGl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
arGl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
arGl.glPushMatrix();
arGl.glRotatef(obAngleY, 1.0f, 0.0f, 0.0f);
arGl.glRotatef(obAngleX, 0.0f, 1.0f, 0.0f);
arGl.glScalef(obScaleFactor, obScaleFactor, obScaleFactor);
if (obModel != null) {
obModel.draw(arGl);
}
arGl.glPopMatrix();
}
I tried over and over again with several implementations, every single time with no success. In the end, I went back and implemented the OpenGL 1.0 version for iOS; under that scenario, whenever the matrix is rotated, the axes aren't affected. So my solution was to implement the OpenGL 1.0 version.
For resolution reasons, I resize the OpenGL view to 2.0 times its original scale, like this:
NSInteger Dimension = 2;
self.glView = [[WQPaintGLView alloc] initWithFrame:CGRectMake(0, 0, width*Dimension, height*Dimension)];
CGAffineTransform tScale = CGAffineTransformMakeScale((float)1/Dimension, (float)1/Dimension);
CGAffineTransform tTranslate = CGAffineTransformTranslate(tScale, -width, -height);
self.glView.transform = tTranslate;
[self.canvasContainerView addSubview:self.glView];
But I get a strange issue:
I can only draw stuff in the bottom-left quarter of the area.
What did I do wrong?
The UIView transform and OpenGL are not very compatible. Also, resizing the view after OpenGL initialization can be troublesome; in most cases a new renderbuffer must be created from the view.
Anyway, since you scaled the view to have a larger surface, you should check the following calls:
glViewport should define what part of the buffer you are writing to. Usually it is set as (0, 0, viewWidth, viewHeight). In your case it must include the scale as well (see the sketch after this list).
glOrtho (or glFrustum) defines your coordinate system, if used. Those values should most likely stay the same no matter the view scale.
Any other matrix usage or scissor rectangles that may be derived from the view's frame.
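To illustrate the first point, a minimal sketch (assuming backingWidth and backingHeight are the pixel dimensions your view reports for its renderbuffer, e.g. via glGetRenderbufferParameteriv, rather than the on-screen point size):

// Sketch: the viewport must cover the full scaled renderbuffer, not just the on-screen size.
// With the 2x view above, that is roughly width * Dimension by height * Dimension pixels.
glViewport(0, 0, backingWidth, backingHeight);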
By all means if possible remove the transform on the view and try to find a better solution.
I am trying to create a Unity augmented reality application that works off the pose determined by ADF localization. I added the camera feed to the "Persistent State" demo, but my ADF-localized frame does not align with the camera feed. The ground always seems to be transformed to be above the horizon.
I had the same problem and I've found a solution. The problem is that in the PoseController script the horizon is not aligned correctly with the real-world horizon, whereas the AugmentedReality example does it correctly (in the ARScreen script). So in my AR application, things looked like they moved up as I moved away from them.
To fix this, you can either start your application inside the AugmentedReality example, as Guillaume said, or make the following changes in the PoseController script:
-First of all, we have the matrix m_matrixdTuc, which is initialized as follows:
m_matrixdTuc = new Matrix4x4();
m_matrixdTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(1, new Vector4(0.0f, 1.0f, 0.0f, 0.0f));
m_matrixdTuc.SetColumn(2, new Vector4(0.0f, 0.0f, -1.0f, 0.0f));
m_matrixdTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
Well, we must rename it as m_matrixcTuc and change the -1.0f value:
m_matrixcTuc = new Matrix4x4();
m_matrixcTuc.SetColumn(0, new Vector4(1.0f, 0.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(1, new Vector4(0.0f, -1.0f, 0.0f, 0.0f));
m_matrixcTuc.SetColumn(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
m_matrixcTuc.SetColumn(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
This is because we're not going to use this matrix directly to get the camera's transform; instead, we'll perform some operations on it first to obtain the true m_matrixdTuc.
-2nd, we must initialize these three variables:
private Matrix4x4 m_matrixdTuc;
// Device frame with respect to IMU frame.
private Matrix4x4 m_imuTd;
// Color camera frame with respect to IMU frame.
private Matrix4x4 m_imuTc;
(the true m_matrixdTuc; the other two are taken from the ARScreen script)
-3rd, we need the _SetCameraExtrinsics method from ARScreen:
/// <summary>
/// The function is for querying the camera extrinsic, for example: the transformation between
/// IMU and device frames. These extrinsics are used to transform the pose from the color camera frame
/// to the device frame. Because the extrinsic is being queried using the GetPoseAtTime()
/// with a desired frame pair, it can only be queried after the ConnectToService() is called.
///
/// The device with respect to IMU frame is not directly queryable from API, so we use the IMU
/// frame as a temporary value to get the device frame with respect to IMU frame.
/// </summary>
private void _SetCameraExtrinsics()
{
double timestamp = 0.0;
TangoCoordinateFramePair pair;
TangoPoseData poseData = new TangoPoseData();
// Getting the transformation of device frame with respect to IMU frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
Vector3 position = new Vector3((float)poseData.translation[0],
(float)poseData.translation[1],
(float)poseData.translation[2]);
Quaternion quat = new Quaternion((float)poseData.orientation[0],
(float)poseData.orientation[1],
(float)poseData.orientation[2],
(float)poseData.orientation[3]);
m_imuTd = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));
// Getting the transformation of IMU frame with respect to color camera frame.
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_COLOR;
PoseProvider.GetPoseAtTime(poseData, timestamp, pair);
position = new Vector3((float)poseData.translation[0],
(float)poseData.translation[1],
(float)poseData.translation[2]);
quat = new Quaternion((float)poseData.orientation[0],
(float)poseData.orientation[1],
(float)poseData.orientation[2],
(float)poseData.orientation[3]);
m_imuTc = Matrix4x4.TRS(position, quat, new Vector3(1.0f, 1.0f, 1.0f));
m_matrixdTuc = Matrix4x4.Inverse(m_imuTd) * m_imuTc * m_matrixcTuc;
}
Now as you can see, we get the true value of the m_matrixdTuc variable that will be used in the OnTangoPoseAvailable method.
I don't really understand the maths behind these methods, but I've found that it works perfectly in my application. Hope it works for you too :)
I already tried something similar. If I understand correctly, the objects you add to be augmented are rotated toward the top of the screen, and this is especially visible when they are far away, as they look as if they were lifted upward.
To make it work, I simply started my project inside the Experimental Augmented Reality app with the "red map marker": there, the camera feed is already aligned to the world properly.
I've been unsuccessful at getting a simple cube geometry with shading turned on to display correctly.
This is C# code, but the values are being passed through SlimDX directly to C++ code.
pParams.BackBufferWidth = 0;
pParams.BackBufferHeight = 0;
pParams.BackBufferCount = 1;
pParams.BackBufferFormat = Format.X8R8G8B8;
pParams.Multisample = MultisampleType.None;
pParams.MultisampleQuality = 0;
pParams.DeviceWindowHandle = this.Handle;
pParams.Windowed = true;
pParams.AutoDepthStencilFormat = Format.D24X8;
pParams.EnableAutoDepthStencil = true;
pParams.PresentFlags = PresentFlags.None;
pParams.FullScreenRefreshRateInHertz = 0;
pParams.PresentationInterval = PresentInterval.Immediate;
pParams.SwapEffect = SwapEffect.Discard;
... are the values in the PresentParameters struct used to set up my Direct3D9Device object.
During a rendering, SetRenderState is called as follows:
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 10000.0f, 0);
this.D3DDevice.SetRenderState(RenderState.Ambient, false);
this.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
this.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
this.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);
this.D3DDevice.BeginScene();
Again, this is passed through to C++ code, which marshals the values into calls a C++ programmer would not fear.
The primitives are diffuse colored vertices (D3DFVF_XYZ | D3DFVF_DIFFUSE). The wireframe view looks like this:
wireframe view http://gallery.me.com/robert.perkins/100045/z-fightingwireframe/web.jpg
The nearer pair of larger triangles is the near face of a cube.
The filled view looks like this:
full view 1 http://gallery.me.com/robert.perkins/100045/Z-fighting/web.jpg
Or this, on a subsequent rendering call:
full view 2 http://gallery.me.com/robert.perkins/100045/zfight2/web.jpg
I'm not sure how to fix this. Where should I begin looking?
Edit: The camera projection matrix looks about like this for one of the frames:
{[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
And, the view matrix looks like this:
camera.ViewMatrix = {[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
Clear the Z-Buffer to 1.0f not 10000.0f.
From the Clear docs in the SDK:
[in] Clear the depth buffer to this new z value, which ranges from 0 to 1.
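Applied to the Clear call from your code, that would be (a sketch, keeping your other arguments as they are):

// Sketch: depth values are normalized, so clear to 1.0f (the far plane).
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 1.0f, 0);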
It may also be useful to see your projection matrix and viewport settings ...
Edit: How do you build that projection matrix? You have set zNear to 0 and zFar to 1. Try setting your zNear to 0.001f and zFar to 1000.0f and see whether that helps you at all...
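In SlimDX that might look something like this (a sketch; the field of view and the aspect-ratio source are assumptions, not values from your code):

// Sketch: a conventional perspective projection with sane near/far planes.
float aspect = (float)this.ClientSize.Width / this.ClientSize.Height;
Matrix proj = Matrix.PerspectiveFovLH((float)Math.PI / 4, aspect, 0.001f, 1000.0f);
this.D3DDevice.SetTransform(TransformState.Projection, proj);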
A hunch: Try enabling the Z-Buffer before you clear.
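If you want to test that quickly, here is a sketch using the calls already in your render code, just reordered (assuming nothing else depends on the current order):

// Sketch: enable depth testing before clearing, then clear depth to 1.0f.
this.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 1.0f, 0);
this.D3DDevice.BeginScene();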