I've been working on a project that helps create a virtual reality experience on a laptop and/or desktop. I am using XNA 4.0 with Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves his head right relative to the laptop, the scene seen in the image is rotated towards the left, giving the effect of a virtual tour, or a looking-through-the-window experience.
To enhance the visual appeal, I want to add darkness at the back plane, so that the box looks like a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error regarding normals.
I have no clue what they mean by that. I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error? Is the method/approach correct?
Any help shall be welcome.
Thanks.
Your vertex has Position and Color channels, but it has no normals... so you have to provide a vertex type that has them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here is a custom implementation: VertexPositionNormalColor
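In case the link goes stale, here is a minimal sketch of such a struct in XNA 4.0 (my own version, not necessarily identical to the linked implementation):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Minimal custom vertex type: position + normal + color.
public struct VertexPositionNormalColor : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;
    public Color Color;

    public VertexPositionNormalColor(Vector3 position, Vector3 normal, Color color)
    {
        Position = position;
        Normal = normal;
        Color = color;
    }

    // Byte offsets: Position = 0, Normal = 12, Color = 24.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Color, VertexElementUsage.Color, 0));

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}

Swap your verts array over to this type and fill in the Normal field for each vertex, and the lighting/fog path of BasicEffect should have what it needs.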
You need to add a normal (a Vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).
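Back on the normals themselves: once the vertex type has a normal slot, each face of the box still needs an actual value. Since the box is built from triangle strips, one rough way to get a flat normal per face is the cross product of two edges. A hedged sketch (my own helper, not from the original post; v0, v1, v2 are any three corners of a face):

// Hypothetical helper: compute a flat normal for one face of the box.
// Assumes the corners are wound counter-clockwise as seen from outside.
static Vector3 ComputeFaceNormal(Vector3 v0, Vector3 v1, Vector3 v2)
{
    // The cross product of two edge vectors is perpendicular to the face.
    Vector3 normal = Vector3.Cross(v1 - v0, v2 - v0);
    normal.Normalize();
    return normal;
}

Assign the result to every vertex of that face; for a tunnel seen from the inside, negate the normal (or flip the winding) so it points into the box.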
I'm trying to put together a quick demo using iOS GLKit to render a retail store map in OpenGL from the source CAD files. I was able to render the walls and aisles in 2D, then programmatically add some artificial depth to create a series of cubes. All of this looks fine when looking top down, but I noticed that when I turned on the floor (with a z-value that is well below the aisles and walls), some of those objects are actually rendered under the floor:
...however, if you rotate the model you can see that nothing is actually below the floor, and some of the aisles are rendering outside of the wall:
You can view the code at StoreMapGLKitViewController.m; it all seems pretty simple to me, but I'm sure I'm making some kind of OpenGL rookie mistake.
So when you are messing with the Z values, and z = 0 for all the things, I'd imagine you'd still be able to see some of your walls and aisles, but they would also hang out the bottom a bit. As long as you don't care about that (it's a demo, right?) then that should be fine for now, I would think.
Turns out the depth buffer wasn't set up correctly, so the depth test wasn't doing anything. Adding the code below fixed it.
GLKView *view = (GLKView *)self.view;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
Working with Delphi / Firemonkey XE8. I've had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well, and am considering using Unity3D instead, but I figure that Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample... specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which I might add is really lame), and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame)... so my first job has been to create a shader that works with their existing sample and does something OTHER than what the sample already does. Specifically, since I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency, basic stuff... if I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0: register(t0);
Texture2D mytex1: register(t1);
float4 cccc : register(v0);
struct PixelShaderInput
{
float4 Pos: COLOR;
float2 Tex: TEXCOORDS;
};
SamplerState g_samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
RasterizerState MyCull {
FrontCounterClockwise = FALSE;
};
float4 main(PixelShaderInput pIn): SV_TARGET
{
float4 cc,c;
float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
c = ci;
c.a = 0;//<----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
cc = c;
return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's alpha to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figure... no big deal then, I'll just manually draw both sides of the sprite... right? Nope! When I manually drew a double-sided sprite... the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, clearly alpha is SOMEWHAT obeyed, because the clouds are not surrounded by a black box; however, the cloud itself is super-saturated (I find that if I multiply rgb*a, then the colors come out approximately right, but I'm not going to do that in real life for obvious reasons).
I'm new to the concept of writing custom shaders. Any insight is appreciated.
I use OpenCV to show the left and right images from a stereo camera in a new window. Now I want to see the same thing on the Oculus Rift, but when I connect the Oculus, the image doesn't become the characteristic "circled" image suitable for the lenses inside the Oculus...
Do I need to process the image myself? Isn't it automatic?
This is the code that shows the window:
// cap = camera 1 & cap2 = camera 2.
// Grab each frame exactly once (the original code called both
// `cap >> frame` and `cap.read(frame)`, which grabs two frames).
cap.read(frame);
sz1 = frame.size();
// second camera
cap2.read(frame2);
sz2 = frame2.size();
cv::Mat bothFrames(sz2.height, sz2.width + sz1.width, CV_8UC3);
// Move right boundary to the left.
bothFrames.adjustROI(0, 0, 0, -sz1.width);
frame2.copyTo(bothFrames);
// Move the left boundary to the right, right boundary to the right.
bothFrames.adjustROI(0, 0, -sz2.width, sz1.width);
frame.copyTo(bothFrames);
// restore original ROI.
bothFrames.adjustROI(0, 0, sz2.width, 0);
cv::imencode(".jpg", bothFrames, buf, params);
I have another problem. I'm trying to add the OVR library to my code, but I get an "ambiguous symbol" error, because some classes inside the OVR library use the same namespace... This error arises when I add:
#include "OVR.h"
using namespace OVR;
-.-"
The SDK is meant to perform lens distortion correction, chromatic aberration correction (different refractive indices for different colors of light cause color fringing in the image without correction), time warp, and possibly other corrections in the future. Unless you have a heavyweight graphics pipeline that you're hand-optimizing, it's best to use the SDK rendering option.
You can learn about the SDK and different kinds of correction here:
http://static.oculusvr.com/sdk-downloads/documents/Oculus_SDK_Overview.pdf
It also explains how the distortion corrections are applied. The SDK is open source so you could also just read the source for a more thorough understanding.
To fix your namespace issue, just don't switch to the OVR namespace! Every time you refer to something from the OVR namespace, prefix it with OVR:: - e.g., OVR::Math - this is, after all, the whole point of namespaces :p
Model doesn't display correctly in XNA, ignores some bone deformations
I am very new to 3D modelling; however, I am required to do some for a project I have undertaken.
The basic principle is that I need a human model that can be deformed to the user's measurements (measured using Kinect, but that is another story!). For example, I want to stretch the stomach area for larger users, etc.
Using 3ds Max I have been able to rig a human model using a biped and then add some extra bones to change the stomach:
This all looks well and good; however, when I load it into XNA, the stomach deformation has vanished -
I am somewhat at a loss as to why this has happened, and any suggestions would be most welcome, as would any links to tutorials on how this kind of thing should be done.
Furthermore, when I view the exported FBX in an FBX viewer plugin for QuickTime, the deformations show absolutely fine.
The code for displaying the model (it's F# code, converted from a C# example; however, I have tried it with the original C# code and get the same results) is:
override this.Draw(gameTime) =
    // Copy any parent transforms.
    let (transforms: Matrix array) = Array.zeroCreate model.Bones.Count
    model.CopyAbsoluteBoneTransformsTo(transforms)
    this.Game.GraphicsDevice.BlendState <- BlendState.Opaque
    this.Game.GraphicsDevice.DepthStencilState <- DepthStencilState.Default
    // Draw the model. A model can have multiple meshes, so loop.
    for mesh in model.Meshes do
        // This is where the mesh orientation is set, as well
        // as our camera and projection.
        for e in mesh.Effects do
            let effect = e :?> BasicEffect
            effect.EnableDefaultLighting()
            effect.World <- mesh.ParentBone.Transform *
                            Matrix.CreateRotationZ(modelRotation) *
                            Matrix.CreateTranslation(modelPosition)
            effect.View <- Matrix.CreateLookAt(cameraPosition, focusPoint, Vector3.Up)
            effect.Projection <- Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f)
        // Draw the mesh, using the effects set above.
        mesh.Draw()
    base.Draw(gameTime)
Does anyone have any ideas as to what has gone wrong, or how to sort this?
Any suggestions would be much appreciated.
Thanks
If you added the extra bones to the stomach, then told Max to morph some associated vertices in accordance with the new bones by some weighting factor, then you would need to modify XNA's default content processor to tell it how to build the model to take that into account. By default, it won't.
Look at the Skinned Model sample on the App Hub: http://create.msdn.com/en-US/education/catalog/sample/skinned_model
All the joints (elbows, knees, etc.) morph a bit as the joints flex. Ultimately, you want a single vertex to be influenced by more than one transform.
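For a feel of what the runtime side looks like once the model has gone through a skinned-model content processor, here is a rough C# sketch. It assumes the setup from that sample: mesh.Effects contains SkinnedEffect instances, and animationPlayer / GetSkinTransforms come from the sample's animation library, not from XNA itself, so treat those names as assumptions.

// Bone matrices come from the sample's AnimationPlayer each frame.
Matrix[] bones = animationPlayer.GetSkinTransforms();

foreach (ModelMesh mesh in model.Meshes)
{
    foreach (SkinnedEffect effect in mesh.Effects)
    {
        // SkinnedEffect blends up to four bone influences per vertex;
        // that per-vertex blending is what a plain BasicEffect draw never applies.
        effect.SetBoneTransforms(bones);
        effect.EnableDefaultLighting();
        effect.World = world;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();
}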
Hi, I'm trying to follow an answer about making part of a texture transparent when using alpha blending, from this question.
The only problem is that it only works in XNA 3.1, and I'm working in XNA 4.0, so stuff like RenderState doesn't exist in the same context, and I have no idea where to find the GfxComponent class library.
I still want to do the same thing as in the example question, a circular area radiating from the mouse position that makes the covering texture transparent when the mouse is hovering over it.
XNA 3.1:
GraphicsDevice.RenderState.AlphaBlendEnable = true;
XNA 4.0:
GraphicsDevice.BlendState = BlendState.AlphaBlend;
See Shawn Hargreaves' post for more info: http://blogs.msdn.com/b/shawnhar/archive/2010/06/18/spritebatch-and-renderstates-in-xna-game-studio-4-0.aspx
EDIT: In the post you can see Shawn using BlendState. You create a new instance of this, set it up however you like, and pass it to the graphics device. Like so:
BlendState bs = new BlendState();
// Take the alpha channel from the incoming pixel...
bs.AlphaSourceBlend = Blend.One;
bs.AlphaDestinationBlend = Blend.Zero;
// ...but leave the color already on the render target untouched.
bs.ColorSourceBlend = Blend.Zero;
bs.ColorDestinationBlend = Blend.One;
bs.AlphaBlendFunction = BlendFunction.Add;
graphicsDevice.BlendState = bs;
That clearer?
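To tie this back to the mouse-circle effect from the linked question, here is a hedged sketch of how that blend state could be used. Everything except bs is an assumption on my part: coverTarget is a RenderTarget2D, coverTexture is the opaque covering texture, holeTexture is a radial gradient whose alpha runs from 0 at the centre to 1 at the edge, and mousePosition is the current mouse location as a Vector2.

// 1. Draw the opaque cover into a render target.
graphicsDevice.SetRenderTarget(coverTarget);
graphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
spriteBatch.Draw(coverTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// 2. Punch the hole: bs overwrites only the alpha channel, so the
//    cover's color stays put while its alpha is carved out.
spriteBatch.Begin(SpriteSortMode.Immediate, bs);
spriteBatch.Draw(holeTexture,
    mousePosition - new Vector2(holeTexture.Width / 2f, holeTexture.Height / 2f),
    Color.White);
spriteBatch.End();
graphicsDevice.SetRenderTarget(null);

// 3. Composite over the scene with NonPremultiplied blending, because the
//    carved texels still hold color and alpha alone should decide visibility.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
spriteBatch.Draw(coverTarget, Vector2.Zero, Color.White);
spriteBatch.End();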