Integrate Oculus SDK distortion within a simple DirectX engine - directx

I have been working for some time on a very simple DirectX 11 render engine. Today I managed to set up stereo rendering (rendering the scene twice into textures) for my Oculus Rift integration.
[Currently]
So what I am basically doing currently is:
I have a 1280 x 800 window
render the whole scene into RenderTargetViewLeft_ (1280 x 800)
render the content of RenderTargetViewLeft_ as an "EyeWindow" (like in the tutorial) to the left side of the screen (640 x 800)
render the whole scene into RenderTargetViewRight_ (1280 x 800)
render the content of RenderTargetViewRight_ as an "EyeWindow" (like in the tutorial) to the right side of the screen (640 x 800)
All of this works so far: the scene is rendered twice into separate textures, ending up in a split screen.
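The side-by-side placement itself is nothing Oculus-specific; it can be expressed, for example, with two D3D11 viewports covering the two halves of the backbuffer (just an illustrative sketch, my engine actually draws orthographic quads like in the tutorial):
// Sketch: two viewports covering the left and right halves of a 1280 x 800 backbuffer.
D3D11_VIEWPORT leftViewport = {};
leftViewport.TopLeftX = 0.0f;
leftViewport.TopLeftY = 0.0f;
leftViewport.Width    = 640.0f;
leftViewport.Height   = 800.0f;
leftViewport.MinDepth = 0.0f;
leftViewport.MaxDepth = 1.0f;

D3D11_VIEWPORT rightViewport = leftViewport;
rightViewport.TopLeftX = 640.0f;

deviceContext->RSSetViewports(1, &leftViewport);   // then draw the left eye quad
deviceContext->RSSetViewports(1, &rightViewport);  // then draw the right eye quad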
[DirectX11 Render Loop]
bool GraphicsAPI::Render()
{
    bool result;

    // [Left Eye] The first pass of our render is to a texture now.
    result = RenderToTexture(renderTextureLeft_);
    if (!result)
    {
        return false;
    }

    // Clear the buffers to begin the scene.
    BeginScene(0.0f, 0.0f, 0.0f, 1.0f);

    // Turn off the Z buffer to begin all 2D rendering.
    TurnZBufferOff();

    // Render the eye window orthogonally to the screen.
    RenderEyeWindow(eyeWindowLeft_, renderTextureLeft_);

    // Turn the Z buffer back on now that all 2D rendering has completed.
    TurnZBufferOn();

    // [Right Eye] ------------------------------------
    result = RenderToTexture(renderTextureRight_);
    if (!result)
    {
        return false;
    }

    TurnZBufferOff();
    RenderEyeWindow(eyeWindowRight_, renderTextureRight_);
    TurnZBufferOn();

    // [End] Present the rendered scene to the screen.
    EndScene(); // calls Present
    return true;
}
[What I want to do now]
Now I am trying to achieve a barrel distortion with the Oculus SDK. For the moment I am not concerned about a different virtual camera for the second image; I just want to achieve the barrel distortion.
I have read the Developer Guide [1] and also tried to look into the TinyRoom demo, but I don't completely understand what is necessary to achieve the distortion with the SDK in my already working DirectX engine.
In the Developer Guide's render texture initialization section, they show how to create a texture for the API. I guess this means I need to set up ALL my render target views with the size the API asks for (the render targets are currently sized 1280 x 800), and I guess I even have to change the depth stencil view and backbuffer size as well.
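If I read the guide correctly, the per-eye texture size is supposed to come from the SDK rather than from the window size, roughly like this (a sketch based on the ovrHmd_GetFovTextureSize call shown in the guide):
// Sketch: ask the SDK for the recommended per-eye render texture size
// (1.0f = full pixel density) instead of hard-coding 1280 x 800.
ovrSizei recommendedLeftSize  = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left,
                                                         hmd->DefaultEyeFov[0], 1.0f);
ovrSizei recommendedRightSize = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right,
                                                         hmd->DefaultEyeFov[1], 1.0f);
// The render target views (and the matching depth stencil views) would then be
// created with these sizes instead of the window size.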
The render loop would then look something like this:
ovrHmd_BeginFrame(hmd, 0);
BeginScene(0.0f, 0.0f, 0.0f, 1.0f);
...
// Render Loop as the Code above
...
ovrHmd_EndFrame(hmd, headPose, EyeTextures);
// EndScene(); // calls Present, not needed on Oculus Rendering
I feel something is missing, so I am sure I haven't got all of this right yet.
[Update]
So, I managed to render the scene with barrel distortion using the Oculus API. However, the polygons of the left and right images are too far apart, which could be caused by using my default 1280 x 800 texture size for the render targets. The camera stream also does not seem to be rendered orthogonally to the screen when moving the HMD. Going to do some further testing ;)
[1] - Oculus Developers Guide: https://developer.oculus.com/documentation/

The key point of 3D HMD support is generally to render the whole graphics scene twice: once with the left virtual camera and once with the right virtual camera. The "eye" distance between them varies, but it is approximately 65 mm.
To store the scene, one has to render the graphics scene to textures. I rendered my scene first with my left virtual camera into RenderTextureLeft_, and afterwards I rendered the exact same scene with my right virtual camera into RenderTextureRight_. This technique is called "render to texture": instead of rendering the image directly into the backbuffer to display on the monitor, we save it into a separate texture for further post-processing.
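For reference, such a render texture is a regular D3D11 texture that is bound both as a render target and as a shader resource; a minimal creation sketch (variable names are illustrative, error handling omitted):
// Create a texture that can be rendered into and later sampled as a shader resource.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1280;                       // per-eye render target size
desc.Height           = 800;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D*          renderTargetTexture = nullptr;
ID3D11RenderTargetView*   renderTargetView    = nullptr;
ID3D11ShaderResourceView* shaderResourceView  = nullptr;

device->CreateTexture2D(&desc, nullptr, &renderTargetTexture);
device->CreateRenderTargetView(renderTargetTexture, nullptr, &renderTargetView);
device->CreateShaderResourceView(renderTargetTexture, nullptr, &shaderResourceView);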
So, how can we render to the Oculus Rift now? It is important to set up an hmd instance and configure it correctly first. This is explained really well in the official docs [1]: https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-render/
After both render textures (left eye, right eye) have been rendered successfully and the Oculus device has been configured accordingly, one needs to supply the Oculus SDK with both rendered textures so it can present them on the HMD's display and apply the barrel distortion via the SDK (SDK distortion, not the client distortion, which is no longer supported in newer SDK versions).
Here is my DirectX code, which supplies the Oculus SDK with both of my render textures and also performs the barrel distortion:
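The configuration step essentially hands the SDK your D3D11 device, backbuffer and swap chain, and returns an ovrEyeRenderDesc per eye. Roughly, for the legacy 0.4-style SDK (a sketch only; the exact struct and field names differ slightly between SDK versions):
ovrD3D11Config d3d11cfg;
d3d11cfg.D3D11.Header.API            = ovrRenderAPI_D3D11;
d3d11cfg.D3D11.Header.BackBufferSize = Sizei(RIFT_RESOLUTION_WIDTH, RIFT_RESOLUTION_HEIGHT); // called RTSize in older SDK versions
d3d11cfg.D3D11.Header.Multisample    = 1;
d3d11cfg.D3D11.pDevice               = device;
d3d11cfg.D3D11.pDeviceContext        = deviceContext;
d3d11cfg.D3D11.pBackBufferRT         = backBufferRenderTargetView;
d3d11cfg.D3D11.pSwapChain            = swapChain;

ovrEyeRenderDesc eyeRenderDesc[2];
ovrHmd_ConfigureRendering(hmd_, &d3d11cfg.Config,
                          ovrDistortionCap_TimeWarp | ovrDistortionCap_Overdrive,
                          hmd_->DefaultEyeFov, eyeRenderDesc);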
bool OculusHMD::RenderDistortion()
{
    ovrD3D11Texture eyeTexture[2]; // Gather data for eye textures

    Sizei size;
    size.w = RIFT_RESOLUTION_WIDTH;
    size.h = RIFT_RESOLUTION_HEIGHT;

    ovrRecti eyeRenderViewport[2];
    eyeRenderViewport[0].Pos = Vector2i(0, 0);
    eyeRenderViewport[0].Size = size;
    eyeRenderViewport[1].Pos = Vector2i(0, 0);
    eyeRenderViewport[1].Size = size;

    eyeTexture[0].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[0].D3D11.Header.TextureSize = size;
    eyeTexture[0].D3D11.Header.RenderViewport = eyeRenderViewport[0];
    eyeTexture[0].D3D11.pTexture = graphicsAPI_->renderTextureLeft_->renderTargetTexture_;
    eyeTexture[0].D3D11.pSRView = graphicsAPI_->renderTextureLeft_->GetShaderResourceView();

    eyeTexture[1].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[1].D3D11.Header.TextureSize = size;
    eyeTexture[1].D3D11.Header.RenderViewport = eyeRenderViewport[1];
    eyeTexture[1].D3D11.pTexture = graphicsAPI_->renderTextureRight_->renderTargetTexture_;
    eyeTexture[1].D3D11.pSRView = graphicsAPI_->renderTextureRight_->GetShaderResourceView();

    ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);
    return true;
}
The presentation of the stereo image, including the barrel distortion as a kind of post-processing effect, is finally done via this line:
ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);
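Note that ovrHmd_EndFrame has to be paired with an ovrHmd_BeginFrame call at the start of the frame, and eyeRenderPose_ should come from the SDK's head tracking. Roughly (a sketch for the legacy 0.4-style SDK; exact names vary between versions):
// Per-frame bracket around the scene rendering.
ovrHmd_BeginFrame(hmd_, 0);

// The eye poses are queried from the SDK; the offsets come from the
// ovrEyeRenderDesc values returned by ovrHmd_ConfigureRendering.
ovrVector3f hmdToEyeViewOffset[2] = { eyeRenderDesc[0].HmdToEyeViewOffset,
                                      eyeRenderDesc[1].HmdToEyeViewOffset };
ovrHmd_GetEyePoses(hmd_, 0, hmdToEyeViewOffset, eyeRenderPose_, nullptr);

// ... render the scene into renderTextureLeft_ / renderTextureRight_ ...

RenderDistortion(); // calls ovrHmd_EndFrame, which also presents the frame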
Hopefully the code helps others understand the pipeline better.

Related

How to integrate ARKit into GPUImage render with SCNRender?

The graph is below:
ARFrame -> 3DModelFilter(SCNScene + SCNRender) -> OtherFilters -> GPUImageView.
Load 3D model:
NSError* error;
SCNScene* scene = [SCNScene sceneWithURL:url options:nil error:&error];
Render 3D model:
SCNRenderer* render = [SCNRenderer rendererWithContext:context options:nil];
render.scene = scene;
[render renderAtTime:0];
Now I am puzzled about how to apply the ARFrame's camera transform to the SCNScene.
Some guesses:
Can I assign the ARFrame camera's transform to the transform of the camera node in the scene without any complex operations?
Is the ARFrame camera's projectionMatrix of no help to me in this case?
Update 2017-12-23.
First of all, thanks @rickster for your reply. Following your suggestion, I added code in the ARSession didUpdateFrame callback:
ARCamera* camera = frame.camera;
SCNMatrix4 cameraMatrix = SCNMatrix4FromMat4(camera.transform);
cameraNode.transform = cameraMatrix;
matrix_float4x4 mat4 = [camera projectionMatrixForOrientation:UIInterfaceOrientationPortrait viewportSize:CGSizeMake(375, 667) zNear:0.001 zFar:1000];
camera.projectionTransform = SCNMatrix4FromMat4(mat4);
Run app.
1. I can't see the whole ship, only part of it. So I added a translation to the camera's transform. I added the code below and can now see the whole ship.
cameraMatrix = SCNMatrix4Mult(cameraMatrix, SCNMatrix4MakeTranslation(0, 0, 15));
2. When I move the iPhone up or down, the tracking seems to work. But when I move the iPhone left or right, the ship follows my movement until it disappears from the screen.
I think there is some important thing I missed.
ARCamera.transform tells you where the camera is in world space (and its orientation). You can assign this directly to the simdTransform property of the SCNNode holding your SCNCamera.
ARCamera.projectionMatrix tells you how the camera sees the world — essentially, what its field of view is. If you want content rendered by SceneKit to appear to inhabit the real world seen in the camera image, you'll need to set up SCNCamera with the information ARKit provides. Conveniently, you can bypass all the individual SCNCamera properties and set a projection matrix directly on the SCNCamera.projectionTransform property. Note that property is a SCNMatrix4, not a SIMD matrix_float4x4 as provided by ARKit, so you'll need to convert it:
scnCamera.projectionTransform = SCNMatrix4FromMat4(arCamera.projectionMatrix);
Note: Depending on how your view is set up, you may need to use ARCamera.projectionMatrixForOrientation:viewportSize:zNear:zFar: instead of ARCamera.projectionMatrix so you get a projection appropriate for your view's size and UI orientation.

Drawing a 2D HUD messes up rendering of my 3D models?

I'm using XNA 3.1
I have recently created a 2D heads-up display (HUD) by adding it to my game with Components.Add(myComponent). The HUD looks fine, showing a 2D map, crosshairs and a framerate counter. The thing is, whenever the HUD is on screen, the 3D objects in the game no longer draw correctly.
Something further from my player might get drawn after something closer, and the models sometimes lose definition when I walk past them. When I remove the HUD, everything is drawn normally.
Are there any known issues regarding this that I should be aware of? How should I draw a 2D HUD over my 3D game area?
This is what it looks like without a GameComponent:
And here's how it looks with a GameComponent (in this case it's just some text offscreen in the upper left corner that shows framerate), notice how the tree in the back is appearing in front of the tree closer to the camera:
You have to enable the depth buffer:
// XNA 3.1
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// XNA 4.0
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
SpriteBatch.Begin alters the state of the graphics pipeline:
SpriteBatch render states for XNA 3.1
SpriteBatch render states for XNA 4.0
In both versions depth buffering is disabled; that is what causes the issue.
Again, I cannot stress this enough:
Always make sure that ALL render states are correctly set before drawing any geometry.
BlendState
DepthStencilState
RasterizerState
Viewports
RenderTargets
Shaders
SamplerStates
Textures
Constants
Educate yourself on the purpose of each state and each stage in the rendering pipeline. If in doubt, try resetting everything to default.

Keep pixel aspect with different resolution in xna game

I'm currently developping an old-school game with XNA 4.
My graphics assets are based on a 568x320 resolution (16/9 ratio). I want to change my window resolution (1136x640 for example) and have my graphics scaled without stretching, so that they keep the pixel aspect.
How can I achieve this?
You could use a RenderTarget to achieve your goal. It sounds like you don't want to have to render differently for every possible screen size, so if your graphics aren't dependent on other graphical features like a mouse, I would use a RenderTarget, draw all the pixel data to that, and afterwards draw it to the actual screen, letting the screen stretch it.
This technique can be used in other ways too. I use it to draw objects in my game, so I can easily change the rotation and location without having to calculate every sprite for the object.
Example:
void PreDraw()
{
    // You need your graphics device to render to
    GraphicsDevice graphicsDevice = Settings.GlobalGraphicsDevice;

    // You need a spritebatch to begin/end a draw call
    SpriteBatch spriteBatch = Settings.GlobalSpriteBatch;

    // Tell the graphics device where to draw to
    graphicsDevice.SetRenderTarget(renderTarget);

    // Clear the buffer with transparent so the image is transparent
    graphicsDevice.Clear(Color.Transparent);

    spriteBatch.Begin();
    flameAnimation.Draw(spriteBatch);
    spriteBatch.Draw(gunTextureToDraw, new Vector2(100, 0), Color.White);

    if (!base.CurrentPowerUpLevel.Equals(PowerUpLevels.None)) {
        powerUpAnimation.Draw(spriteBatch);
    }

    // DRAWS THE IMAGE TO THE RENDERTARGET
    spriteBatch.Draw(shipSpriteSheet, new Rectangle(105, 0, (int)Size.X, (int)Size.Y), shipRectangleToDraw, Color.White);
    spriteBatch.End();

    // Let the graphics device know you are done and return to drawing according to its dimensions
    graphicsDevice.SetRenderTarget(null);

    // Utilize your render target
    finishedShip = renderTarget;
}
Remember, in your case, you would initialize your RenderTarget with dimensions of 568x320 and draw according to that and not worry about any other possible sizes. Once you give the RenderTarget to the spritebatch to draw to the screen, it will "stretch" the image for you!
EDIT:
Sorry, I skimmed through the question and missed that you don't want to "stretch" your result. This could be achieved by drawing the final RenderTarget to your specified dimensions according to the graphics device.
Oh gosh, I've got it! Just pass SamplerState.PointClamp to your spriteBatch.Begin method to keep that cool pixel visual effect <3
spriteBatch.Begin(SpriteSortMode.Immediate,
BlendState.AlphaBlend,
SamplerState.PointClamp,
null,
null,
null,
cam.getTransformation(this.GraphicsDevice));

Scaling entire screen in XNA

Using XNA, I'm trying to make an adventure game engine that lets you make games that look like they fell out of the early 90s, like Day of the Tentacle and Sam & Max Hit the Road. Thus, I want the game to actually run at 320x240 (I know, it should probably be 320x200, but shh), but it should scale up depending on user settings.
It works kind of okay right now, but I'm running into some problems where I actually want it to look more pixellated than it currently does.
Here's what I'm doing right now:
In the game initialization:
public Game() {
    graphics = new GraphicsDeviceManager(this);
    graphics.PreferredBackBufferWidth = 640;
    graphics.PreferredBackBufferHeight = 480;
    graphics.PreferMultiSampling = false;
    Scale = graphics.PreferredBackBufferWidth / 320;
}
Scale is a public static variable that I can check anytime to see how much I should scale my game relative to 320x240.
In my drawing function:
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullNone, null, Matrix.CreateScale(Game.Scale));
This way, everything is drawn at 320x240 and blown up to fit the current resolution (640x480 by default). And of course I do math to convert the actual coordinates of the mouse into 320x240 coordinates, and so forth.
Now, this is great and all, but now I'm getting to the point where I want to start scaling my sprites, to have them walk into the distance and so forth.
Look at the images below. The upper-left image is a piece of a screenshot from when the game is running at 640x480. The image to the right of it is how it "should" look, at 320x240. The bottom row of images is just the top row blown up to 300% (in Photoshop, not in-engine) so you can see what I'm talking about.
In the 640x480 image, you can see different "line thicknesses;" the thicker lines are how it should really look (one pixel = 2x2, because this is running at 640x480), but the thinner lines (1x1 pixel) also appear, when they shouldn't, due to scaling (see the images on the right).
Basically I'm trying to emulate a 320x240 display but blown up to any resolution using XNA, and matrix transformations aren't doing the trick. Is there any way I could go about doing this?
Render everything in the native resolution to a RenderTarget instead of the back buffer:
SpriteBatch targetBatch = new SpriteBatch(GraphicsDevice);
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 320, 240);
GraphicsDevice.SetRenderTarget(target);
//perform draw calls
Then render this target (your whole screen) to the back buffer:
//set rendering back to the back buffer
GraphicsDevice.SetRenderTarget(null);
//render target to back buffer
targetBatch.Begin();
targetBatch.Draw(target, new Rectangle(0, 0, GraphicsDevice.DisplayMode.Width, GraphicsDevice.DisplayMode.Height), Color.White);
targetBatch.End();

iOS - zoom only part of display using openGL ES

I'm working on a level builder app to build content for a game. Part of the iPad screen is a dedicated control panel, while the rest is a graphical representation of the level being built. I need to zoom the level area in and out without affecting the control panel. I'm using openGL ES for rendering. Can anyone give me some pointers here? Can I split the screen with different viewports and so just scale one?
The trick is to understand that OpenGL is a state machine and there is no such thing as "global initialization". As long as you follow the badly written tutorials and have your projection matrix set up in the window resize handler, you'll be stuck. What you actually do is something like this:
void render_perspective_scene(void);
void render_ortho_scene(void);
void render_HUD();
void display()
{
    float const aspect = (float)win_width/(float)win_height;

    glViewport(0, 0, win_width, win_height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-aspect*near/lens, aspect*near/lens, -near/lens, near/lens, near, far);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    render_perspective_scene();

    glEnable(GL_SCISSOR_TEST);
    // just clear the depth buffer, so that everything that's
    // drawn next will overlay the previously rendered scene.
    glClear(GL_DEPTH_BUFFER_BIT);

    glViewport(ortho_x, ortho_y, ortho_width, ortho_height);
    glScissor(ortho_x, ortho_y, ortho_width, ortho_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-aspect*scale, aspect*scale, -scale, scale, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    render_ortho_scene();

    // Same for the HUD, only that we render
    // that one in pixel coordinates.
    glViewport(hud_x, hud_y, hud_width, hud_height);
    glScissor(hud_x, hud_y, hud_width, hud_height);
    glClear(GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, win_width, 0, win_height, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    render_HUD();
}
The important part is that you set the viewport/scissor and the projection within the drawing handler prior to rendering that sub-part.
If you're rendering with OpenGL ES (presumably 2.0), then you have full control over the scaling of what you render. You decide where the scale gets applied, and you decide how things are rendered.
I'd guess your code currently looks a bit like this.
Get view scale
Apply view scale to view matrix
Render level
Render control panel
When it should look like this.
Render control panel
Get view scale
Apply view scale to view matrix
Render level
The matrices (or whatever transformation stuff you have) you use for transforming the level should not be used for transforming the control panel.
