XNA Alpha Blending part of a texture in Game Studio 4.0

Hi, I'm trying to follow an answer about making part of a texture transparent with alpha blending, from this question.
The only problem is that the answer only works in XNA 3.1, and I'm working in XNA 4.0, so things like RenderState no longer exist in the same form, and I have no idea where to find the GfxComponent class library.
I still want to do the same thing as in the example question: a circular area radiating from the mouse position that makes the covering texture transparent while the mouse hovers over it.

XNA 3.1:
GraphicsDevice.RenderState.AlphaBlendEnable = true;
XNA 4.0:
GraphicsDevice.BlendState = BlendState.AlphaBlend;
See Shawn Hargreaves post for more info: http://blogs.msdn.com/b/shawnhar/archive/2010/06/18/spritebatch-and-renderstates-in-xna-game-studio-4-0.aspx
EDIT: In the post you can see Shawn using BlendState. You create a new instance of this, set it up however you like, and pass it to the graphics device. Like so:
BlendState bs = new BlendState();
bs.AlphaSourceBlend = Blend.One;            // take the source alpha as-is
bs.AlphaDestinationBlend = Blend.Zero;      // discard the destination alpha
bs.ColorSourceBlend = Blend.Zero;           // discard the source color
bs.ColorDestinationBlend = Blend.One;       // keep the destination color
bs.AlphaBlendFunction = BlendFunction.Add;  // combine the two terms by addition
graphicsDevice.BlendState = bs;
Is that clearer?
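To make the factor names concrete, here is the blend equation those properties plug into, sketched in plain Python. This is purely illustrative; `blend_add` is a made-up helper, not an XNA API:

```python
# Fixed-function blending under BlendFunction.Add computes, per channel:
#   result = src * srcFactor + dst * dstFactor
# The factor values below mirror the Blend enum members used above
# (One = 1.0, Zero = 0.0).

def blend_add(src, dst, src_factor, dst_factor):
    """Additive blend for a single channel."""
    return src * src_factor + dst * dst_factor

# The BlendState configured above: color keeps the destination
# (Zero / One), while alpha keeps the source (One / Zero).
color = blend_add(src=0.8, dst=0.3, src_factor=0.0, dst_factor=1.0)  # -> 0.3
alpha = blend_add(src=0.5, dst=1.0, src_factor=1.0, dst_factor=0.0)  # -> 0.5

# For comparison, classic premultiplied alpha blending
# (what BlendState.AlphaBlend uses) is One / InverseSourceAlpha:
src_alpha = 0.25
out = blend_add(src=0.25, dst=0.6, src_factor=1.0, dst_factor=1.0 - src_alpha)
```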

Related

iOS - how to draw a line of custom shapes

I am porting my Android app over to iOS using Swift.
On Android, I can draw a line made up of a series of custom shapes, defined as Path objects, by using a PathDashPathEffect object:
m_StampPath = new Path();
m_StampPath.moveTo(...);
m_StampPath.cubicTo(...);
m_StampPath.cubicTo(...);
...
m_StampPath.close();
m_WavyLine = new PathDashPathEffect(m_StampPath, fStampOffset, 0.0f, PathDashPathEffect.Style.MORPH);
// this is a Paint object
pt.setPathEffect(m_WavyLine);
pt.setStyle(Paint.Style.STROKE);
LinePath = new Path();
LinePath.moveTo(...);
LinePath.lineTo(...);
canvas.drawPath(LinePath, pt);
How can I achieve the same thing using Swift on iOS?
I have not found anything yet, but perhaps I am looking in the wrong place.

Drawing a 2D HUD messes up rendering of my 3D models?

I'm using XNA 3.1
I have recently created a 2D Heads Up Display (HUD) by adding a component to my game with Components.Add(myComponent). The HUD looks fine, showing a 2D map, crosshairs and a framerate counter. The thing is, whenever the HUD is on screen, the 3D objects in the game no longer draw correctly.
Something further from my player might get drawn after something closer, and the models sometimes lose definition when I walk past them. When I remove the HUD, all is drawn normally.
Are there any known issues regarding this that I should be aware of? How should I draw a 2D HUD over my 3D game area?
This is what it looks like without a GameComponent:
And here's how it looks with a GameComponent (in this case it's just some text offscreen in the upper left corner that shows framerate), notice how the tree in the back is appearing in front of the tree closer to the camera:
You have to enable the depth buffer:
// XNA 3.1
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// XNA 4.0
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
SpriteBatch.Begin alters the state of the graphics pipeline:
SpriteBatch render states for XNA 3.1
SpriteBatch render states for XNA 4.0
In both versions depth buffering is disabled; that's what causes the issue.
Again, I cannot stress this enough:
Always make sure that ALL render states are correctly set before drawing any geometry.
BlendState
DepthStencilState
RasterizerState
Viewports
RenderTargets
Shaders
SamplerStates
Textures
Constants
Educate yourself on the purpose of each state and each stage in the rendering pipeline. If in doubt, try resetting everything to default.
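To see why a disabled depth buffer produces exactly the symptom in the screenshots (a far tree drawn over a near one), here is a toy single-pixel "rasterizer". Python is used purely for illustration; none of these names are XNA API:

```python
# One pixel holds a (color, depth) pair. With depth testing on, a
# fragment only wins if it is closer; with it off (the state
# SpriteBatch leaves behind), whatever is drawn last always wins.

def draw(pixel, color, depth, depth_test_enabled):
    """Draw one fragment into a (color, depth) pixel."""
    cur_color, cur_depth = pixel
    if depth_test_enabled and depth >= cur_depth:
        return pixel               # farther fragment rejected
    return (color, depth)          # fragment wins, depth written

pixel = ("sky", 1.0)               # cleared framebuffer

# Depth test on (DepthStencilState.Default): draw order doesn't matter.
p = draw(draw(pixel, "near_tree", 0.2, True), "far_tree", 0.8, True)
# -> the near tree survives

# Depth test off: the far tree, drawn last, overwrites the near one.
q = draw(draw(pixel, "near_tree", 0.2, False), "far_tree", 0.8, False)
```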

Distance Fog XNA 4.0

I've been working on a project that creates a virtual reality experience on a laptop and/or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through a Kinect, so if the person moves their head right relative to the laptop, the scene in the image is rotated towards the left, giving the effect of a virtual tour, like looking through a window.
To enhance the visual appeal, I want to add darkness at the back plane, so the box looks like a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect:
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error about normals.
I have no clue what it means, and I have dug around the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error? Is the method/approach correct?
Any help is welcome.
Thanks.
Your vertex type has Position and Color channels, but it has no normals, so you have to provide a vertex type that has them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal.
Here is a custom implementation: VertexPositionNormalColor
You need to add a normal (Vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).
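For reference, the linear attenuation that FogStart/FogEnd control can be sketched as below. Python is used purely for illustration, and the function names are made up; note that BasicEffect evaluates this on view-space depth rather than true Euclidean distance:

```python
def fog_factor(distance, fog_start=35.0, fog_end=100.0):
    """Linear fog attenuation: 1.0 at fog_start, 0.0 at fog_end.
    Defaults match the FogStart/FogEnd values in the question."""
    f = (fog_end - distance) / (fog_end - fog_start)
    return max(0.0, min(1.0, f))

def apply_fog(color, fog_color, distance):
    """Lerp between the lit color and the fog color by the fog factor."""
    f = fog_factor(distance)
    return tuple(f * c + (1.0 - f) * fc for c, fc in zip(color, fog_color))

# A red surface fading into black fog:
apply_fog((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 35.0)   # -> (1.0, 0.0, 0.0)
apply_fog((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 67.5)   # -> (0.5, 0.0, 0.0)
apply_fog((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0)  # -> (0.0, 0.0, 0.0)
```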

Strange blending effect in OpenGL ES 2.0 using GLKit

This happens on GLKit with MonoTouch 5.3, but I think the problem may be of general OpenGL ES 2.0 nature.
I have three faces, one of them red, one green and one blue, all with an alpha value of 1.0, so they should be opaque. When they are rendered on the black background, everything is okay. If the green face is in front of the others, everything is okay as well. But if
the red face is in front of the green
or the blue face is in front of one of the others
the foreground color is not rendered, but the background face is fully visible. This seems to be some kind of blending effect, but I don't see anything special in my code, and I have tried out several things like glBlendFunc, but it didn't change anything.
I could post my code, but since the code is quite simple, I thought perhaps someone knows the answer immediately.
Update: As this seems to be a Depth-Sorting issue, here are some important parts of the code:
var aContext = new EAGLContext(EAGLRenderingAPI.OpenGLES2);
var view = this.View as GLKView;
view.Delegate = new GenericGLKViewDelegate(this);
view.Context = aContext;
view.DrawableColorFormat = GLKViewDrawableColorFormat.RGBA8888;
view.DrawableDepthFormat = GLKViewDrawableDepthFormat.Format16;
view.DrawableMultisample = GLKViewDrawableMultisample.Sample4x;
here is the DrawInRect method:
_baseEffect.PrepareToDraw();
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.EnableVertexAttribArray((int)GLKVertexAttrib.Position);
GL.EnableVertexAttribArray((int)GLKVertexAttrib.Color);
GL.VertexAttribPointer((int)GLKVertexAttrib.Position, 3, VertexAttribPointerType.Float, false, 0, _squareVertices);
GL.VertexAttribPointer((int)GLKVertexAttrib.Color, 4, VertexAttribPointerType.UnsignedByte, true, 0, _squareColors);
GL.DrawArrays(BeginMode.TriangleStrip, 0, 9);
GL.DisableVertexAttribArray((int)GLKVertexAttrib.Position);
GL.DisableVertexAttribArray((int)GLKVertexAttrib.Color);
I tried every possible combination of ColorFormat, DepthFormat and Multisample, but it's still the same.
Solution:
I was missing some calls to enable the depth buffer, these calls were missing in the ViewDidLoad method:
GL.ClearDepth(30); // clear-depth values are clamped to [0, 1], so this is effectively 1.0
GL.Enable(EnableCap.DepthTest);
GL.DepthFunc(DepthFunction.Lequal);
This does not sound like a blending issue, but rather a depth-sorting issue.
Is the depth buffer enabled? Are you clearing it with each frame?

XNA 2D convert a world position into screen position

Okay guys, I have spent a good two weeks trying to figure this out. I've tried to work it out by math alone and had no success. I have also looked everywhere and have seen people recommend Viewport.Project().
It is not that simple. Everywhere I've looked, including MSDN and the forums, people just suggest using it, but they don't explain the matrices and values it requires to work. I have found no useful information on how to use this method correctly, and it's seriously driving me insane. Please help this poor fella out.
First thing I'm going to do is post my current code. I have five or so different versions, and none of them have worked. The closest I got was getting NaN, which I don't understand. I'm trying to display text on screen based on where asteroids are, and if they are very far away, the text will act as a guide so players can fly to the asteroids.
Vector3 camLookAt = Vector3.Zero;
Vector3 up = new Vector3(0.0f, 0.0f, 0.0f); // bug: a zero-length up vector makes CreateLookAt produce NaN values
float nearClip = 1000.0f;
float farClip = 100000.0f;
float viewAngle = MathHelper.ToRadians(90f);
float aspectRatio = (float)viewPort.Width / (float)viewPort.Height;
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 0), camLookAt, up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(viewAngle, aspectRatio, nearClip, farClip);
Matrix world = Matrix.CreateTranslation(camPosition.X, camPosition.Y, 0);
Vector3 projectedPosition = graphicsDevice.Viewport.Project(new Vector3(worldPosition.X, worldPosition.Y, 0), projection, view, Matrix.Identity);
screenPosition.X = projectedPosition.X;
screenPosition.Y = projectedPosition.Y;
if (screenPosition.X > 400) screenPosition.X = 400;
if (screenPosition.X < 100) screenPosition.X = 100;
if (screenPosition.Y > 400) screenPosition.Y = 400;
if (screenPosition.Y < 100) screenPosition.Y = 100;
return screenPosition;
As far as I know, Project takes the camera position. My game is 2D, so Vector3 is annoying. I thought maybe my Z could be the camera zoom, projection might be the object we want to convert to 2D screen space, view might be the size of how much the camera can see, and the last one I'm not sure about.
After about two weeks of searching for information, with no improvement in code or knowledge and getting more confused as I look at the MSDN tutorials, I decided I'd post a question, because I'm extremely close to just not implementing world-to-screen conversion at all. All help is appreciated, thanks :)
Plus, I'm making a 2D game, and it adds confusion when most of the time people talk about the Z axis, when a 2D game has no Z axis; it's just transforming sprites to appear like zoom or movement. Thanks again :)
I may be misunderstanding here, but I don't think you need to be using a 3D camera provided with XNA for a 2D game. Unless you're trying to make a 2.5D game using 3D for some sort of parallax system or whatever, you don't need to use that at all. Take a look at these:
2D Camera Implementation in XNA
Simpler 2D Camera
XNA 2D tutorials
2D in XNA works differently than 3D. You don't need to worry about a 3D viewport or a 3D camera or anything; there is no near clipping or far clipping. 2D is well-handled in XNA, and I think you are misunderstanding a bit how XNA works.
PS: You don't need to use Vector3s. Instead, use Vector2s. I think you will find them much easier to work with in a 2D game. ^^
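For a purely 2D camera, the whole world-to-screen conversion is one line of arithmetic per axis. A minimal sketch in the spirit of the linked tutorials (illustrative Python; the names are made up, not XNA API):

```python
def world_to_screen(world, cam_pos, zoom, screen_size):
    """Map a 2D world position to pixels, with the camera centered
    on screen. world and cam_pos are (x, y) tuples, zoom is a scale
    factor, screen_size is (width, height) in pixels."""
    wx, wy = world
    cx, cy = cam_pos
    width, height = screen_size
    return ((wx - cx) * zoom + width / 2.0,
            (wy - cy) * zoom + height / 2.0)

# An asteroid at the camera's own position lands in the screen center:
world_to_screen((100.0, 50.0), (100.0, 50.0), 2.0, (800, 480))
# -> (400.0, 240.0)

# One 10 units right and 10 down in world space, at 2x zoom, lands
# 20 pixels right and 20 pixels down from the center:
world_to_screen((110.0, 60.0), (100.0, 50.0), 2.0, (800, 480))
# -> (420.0, 260.0)
```

Clamping the result to the screen edges, as the posted code already does, then gives the off-screen guide markers.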
