XNA 4.0 equivalent of RenderTarget2D.GetTexture()? - xna

I'm currently working on a game and trying to implement color-key collision. I followed the "Road Not Taken" tutorial, and everything worked until the point of getting the texture from the render target: RenderTarget2D.GetTexture() is unavailable in XNA 4.0, and I can't seem to find an equivalent. Any help? :D
Thanks in advance!

In XNA 4.0 the RenderTarget2D class inherits from Texture2D, which means that you can simply cast your render target to a texture:
Texture2D texture = (Texture2D)renderTarget;
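For context, a minimal sketch of the surrounding XNA 4.0 workflow (this assumes the usual Game setup, with a spriteBatch field and a renderTarget created in LoadContent):
// Point the device at the render target and draw the scene into it.
GraphicsDevice.SetRenderTarget(renderTarget);
GraphicsDevice.Clear(Color.Transparent);
// ... draw your scene here ...
GraphicsDevice.SetRenderTarget(null); // back to the back buffer

// Because RenderTarget2D inherits from Texture2D, it can be drawn directly...
spriteBatch.Begin();
spriteBatch.Draw(renderTarget, Vector2.Zero, Color.White);
spriteBatch.End();

// ...or have its pixels read back, e.g. for color-key collision checks.
Color[] pixels = new Color[renderTarget.Width * renderTarget.Height];
renderTarget.GetData(pixels);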

Related

Text rendering in metal

I am trying to develop my own mini game engine in Apple Metal on a Mac, and I am stuck at the point where I want to render text on the GPU. I do not have much graphics programming experience, so I am not sure how to do it. I stumbled upon an article by Warren Moore about rendering text using signed distance fields, but I do not know how it works and cannot understand it well enough (given my lack of graphics knowledge) to implement it myself. The blog post has a code sample written in Objective-C, but unfortunately I do not know Objective-C. Is there a Swift version of it? Or can someone explain / give pointers on how to render text in Metal?
I have been down this road before. I think you might find SceneKit useful if you are after 3D text.
If you are OK with using SceneKit to drive your rendering: SCNText with a SCNView.
If you have your own command buffer, and you can get away with blending your text on top of the rest of your graphics: you can still use SCNText, by using the render() method of an SCNRenderer to encode a scene's render commands onto a command buffer.
If you want to avoid SceneKit's rendering process entirely, I would recommend this: create an SCNText inside an SCNTransaction, like so:
import SceneKit
SCNTransaction.begin()
let sceneText = SCNText(string: text, extrusionDepth: extrusionDepth)
SCNTransaction.commit()
let mdlMesh = MDLMesh(scnGeometry: sceneText, bufferAllocator: yourBufferAllocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: MTLCreateSystemDefaultDevice()!)
This MTKMesh will have three vertex buffers: the first (index 0) is a list of positions in packed_float3 format, the second (index 1) a list of normals in packed_float3 format, and the third (index 2) a list of texture coordinates in packed_float2 format. Just make sure to reflect that in your vertex shader. It will have 1-5 submeshes with their own index buffers, corresponding I believe to front, back, front chamfer, back chamfer, and extrusion side.
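For illustration, here is a rough Swift sketch of a vertex descriptor matching that layout; the tightly packed strides and buffer indices are assumptions based on the description above, not verified MDLMesh output:
import MetalKit

let vertexDescriptor = MTLVertexDescriptor()
// Attribute 0: position (packed_float3 in buffer 0)
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.layouts[0].stride = 12 // 3 floats, tightly packed
// Attribute 1: normal (packed_float3 in buffer 1)
vertexDescriptor.attributes[1].format = .float3
vertexDescriptor.attributes[1].offset = 0
vertexDescriptor.attributes[1].bufferIndex = 1
vertexDescriptor.layouts[1].stride = 12
// Attribute 2: texture coordinate (packed_float2 in buffer 2)
vertexDescriptor.attributes[2].format = .float2
vertexDescriptor.attributes[2].offset = 0
vertexDescriptor.attributes[2].bufferIndex = 2
vertexDescriptor.layouts[2].stride = 8 // 2 floats, tightly packed
// Assign this to your MTLRenderPipelineDescriptor's vertexDescriptor
// before building the pipeline state.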
Now, if you are after 2D text, you can either use this method above with an extrusionDepth close to zero, or you can harness CoreText directly to do font metrics and render textured quads with a font atlas texture like the commenter suggested.
The ability to understand Objective-C is certainly useful as well, but you may not need it for this problem specifically. I tried to be brief on my explanations since I don't know what your exact goal is with this problem, but I can provide more detail on any of those methods upon request.

Render an SCNGeometry as a wireframe

I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the used SCNMaterial to accomplish this. Example of a shader modifier:
material.shaderModifiers = [
    SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material's fillMode to .lines (available on iOS 11+ and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can toggle it interactively by enabling the statistics bar with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this is a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other things people have written about the problem. (Note that when you're looking into mobile-specific limitations, WebGL on the desktop has the same limitations as OpenGL ES, so WebGL material applies too.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.
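As a rough Swift sketch of that last approach (this assumes 32-bit triangle indices; real code should check bytesPerIndex, and note that SCNPrimitiveTypeLine is spelled .line in Swift):
import SceneKit

// Rebuild a triangle geometry's elements as line segments.
func wireframeGeometry(from geometry: SCNGeometry) -> SCNGeometry {
    var lineElements = [SCNGeometryElement]()
    for element in geometry.elements where element.primitiveType == .triangles {
        // Read the existing 32-bit index buffer.
        let indices = element.data.withUnsafeBytes {
            Array($0.bindMemory(to: Int32.self))
        }
        // Expand each triangle (a, b, c) into its three edges.
        var lineIndices = [Int32]()
        for t in stride(from: 0, to: element.primitiveCount * 3, by: 3) {
            let (a, b, c) = (indices[t], indices[t + 1], indices[t + 2])
            lineIndices += [a, b, b, c, c, a]
        }
        let data = lineIndices.withUnsafeBufferPointer { Data(buffer: $0) }
        lineElements.append(SCNGeometryElement(data: data,
                                               primitiveType: .line,
                                               primitiveCount: lineIndices.count / 2,
                                               bytesPerIndex: 4))
    }
    // Reuse the original vertex sources unchanged.
    return SCNGeometry(sources: geometry.sources, elements: lineElements)
}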

GLPaint based OpenGL ES Blending Issue

I'm working on an app based on Apple's GLPaint sample code. I've changed the clear color to transparent black and added an opacity slider; however, when I mix colors together at a low opacity setting, they don't mix the way I expect. They seem to mix the way light mixes, not the way paint mixes. Here is an example of what I mean:
The "Desired Result" was obtained by using glReadPixels to render each color separately and merge it with the previous rendered image (i.e. using apple's default blending).
However, mixing each frame with the previous is too time consuming to be done on the fly, how can I get OpenGL to blend the colors properly? I've been researching online for quite a while and have yet to find a solution that works for me, please let me know if you need any other info to help!
From the looks of it, with your current setup there is no easy solution. For what you are trying to do, you need custom shaders, which is not possible using just GLKit.
Luckily you can mix GLKit and OpenGL ES.
My recommendation would be to:
- Stop using GLKit for everything except setting up your rendering surface with a GLKView (which is tedious without GLKit).
- Use an OpenGL program with custom shaders to draw to a texture backing an FBO.
- Use a second program with custom shaders that does post-processing (after drawing the above texture to a quad, which is then rendered to the screen).
A good starting point would be to load up the OpenGL Game template that comes with Xcode and start modifying it. Be warned: if you don't understand shaders, the code there will make little sense. It draws two cubes, one using GLKit and one without, using custom shaders.
References to start learning:
Intro to shaders
Rendering to a Texture
Shader Toy - This should help you experiment with your post processing frag shader.
GLEssentials example - This shows how to render to texture using OpenGL (a bit outdated).
Finally, if you are really serious about using OpenGL ES to its full potential, you should invest the time to read through the OpenGL ES 2.0 Programming Guide. Even though it is six years old, it is still relevant and the only book I've found that explains all the concepts correctly.
Your "Current Result" is additive color, which is how OpenGL is supposed to work. To work like mixing paint would be substractive color. You don't have control over this with OpenGL ES 1.1, but you could write a custom fragment shader for OpenGL ES 2.0 that would do substractive color. If you are blending textures images from iOS, you need to know if the image data has been premultiplied by alpha or not, in order to do blending. OpenGL ES expects the non-premultiplied format.
You need to put this code in the function that is called on color change, and set the blend function each time:
CGFloat red, green, blue;
// Set red, green, and blue to the desired color combination.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(red * kBrushOpacity,
          green * kBrushOpacity,
          blue * kBrushOpacity,
          kBrushOpacity);
To do more with the blend function, use this link.
Please let me know whether it works; it works for me.

Distance Fog XNA 4.0

I've been working on a project that helps create a virtual reality experience on a laptop and/or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves his head right relative to the laptop, the scene in the image is rotated towards the left, giving the effect of a virtual tour or a looking-through-the-window experience.
To enhance the visual appeal, I want to add darkness at the back plane, so the box looks like a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error regarding normals.
I have no clue what they mean by that, and I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error? Is the method/approach correct?
Any help shall be welcome.
Thanks.
Your vertex type has Position and Color channels, but it has no normals, so you have to provide a vertex type that has them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here is a custom implementation: VertexPositionNormalColor
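For reference, a minimal sketch of such a struct might look like this (the element offsets must match the field layout: a Vector3 is 12 bytes, a Color is 4):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct VertexPositionNormalColor : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;
    public Color Color;

    // Tell the graphics device how the fields are laid out in memory.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Color, VertexElementUsage.Color, 0));

    public VertexPositionNormalColor(Vector3 position, Vector3 normal, Color color)
    {
        Position = position;
        Normal = normal;
        Color = color;
    }

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}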
You need to add a normal (Vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).

Using a custom shader in Silverlight 5 XNA

I've been recently trying to write some 3D rendering code in Silverlight 5 with XNA. Unfortunately I have been having trouble getting anything (using my custom shader) to work.
The BasicEffect works on a cube using only VertexPositionColor information, but when I switch to a custom shader nothing seems to render (or it renders off-screen).
To try and help myself with this issue I even got hold of the BasicEffect HLSL code, but it doesn't do anything I am not doing.
The code takes in a world, view and projection matrix and multiplies each one by a position in the following order:
float4 pos_ws = mul(position, World);
float4 pos_vs = mul(pos_ws, View);
float4 pos_ps = mul(pos_vs, Projection);
I changed my code to do the same thing ( instead of passing in a single WorldViewProjection matrix ) and my shader uses this to calculate a position and then just applies a color to the pixel. Yet nothing is rendering.
I'm pretty stuck on this; I'm passing OK at basic 3D, but passing OK doesn't seem to cut it! :)
So it turns out the issue is fairly simple!
I actually deleted this question initially because I knew the issue was likely my matrices, so it was unlikely I'd get much help!
After some stumbling on Google, and more coffee than I'd like to admit to, I found the answer.
XNA transposes its matrices on the sly and doesn't tell you! (XNA matrices are row-major, while HLSL expects column-major by default, so matrices must be transposed before they are handed to a custom shader.) I had tried transposing the view and projection matrices in the vain hope that I knew what I was doing, but it wasn't helping.
Instead I now pass in a single WorldViewProjection_Transposed matrix which is calculated using the following.
Matrix worldViewProjection_Transpose = Matrix.Transpose(world * view * projection);
This seems to work at the moment, and I am hoping it really is that simple.
I am sure I will come across a million more problems as the models I need to render become more complex, but I decided to leave this up in case anyone in a similar situation (and at a similar experience level) is struggling :)
