I want to make a 2D shooter game in XNA. The terrain consists of a bitmap image which should be used as a collision map. I tried to implement some character movement, but I failed at side collisions and at walking up slopes. Do you have any ideas for that?
There's an excellent tutorial on pixel-perfect collision available on the MSDN App Hub.
Basically what you end up doing is pulling all the pixel information from the texture (via GetData()) as an array, and looping through the overlapping pixels of the two textures to see if they're both opaque, black, or whatever else you want to use to determine solidity. It gets a bit more complicated if you need scaled/rotated images, but the tutorial above contains instructions for that as well.
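The core test looks roughly like this (a simplified sketch without scaling or rotation; texA, texB and the two rectangles stand in for your own sprites):

    // At load time, pull the pixel data once; calling GetData every frame would stall the pipeline.
    Color[] dataA = new Color[texA.Width * texA.Height];
    texA.GetData(dataA);
    Color[] dataB = new Color[texB.Width * texB.Height];
    texB.GetData(dataB);

    // Per frame: rectA/rectB are the sprites' current screen-space bounding rectangles.
    bool IntersectPixels(Rectangle rectA, Color[] dataA, Rectangle rectB, Color[] dataB)
    {
        // Only walk the region where the two rectangles overlap.
        int top    = Math.Max(rectA.Top,    rectB.Top);
        int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
        int left   = Math.Max(rectA.Left,   rectB.Left);
        int right  = Math.Min(rectA.Right,  rectB.Right);

        for (int y = top; y < bottom; y++)
        {
            for (int x = left; x < right; x++)
            {
                // Look up the same screen pixel in both pixel arrays.
                Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
                Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];

                // "Solid" here just means non-transparent; swap in whatever rule you need.
                if (a.A != 0 && b.A != 0)
                    return true;
            }
        }
        return false;
    }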
I have a problem: in my game I have to use SpriteSortMode.Texture because I have a lot of objects sharing few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is, this means I cannot draw in layers unless I call SpriteBatch.Begin with the exact same settings once per layer, which is what I'm currently doing.
I only need 3 draw layers: a tileset surface, objects like rocks or characters on the surface, and the UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance), or going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
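For example (a sketch; visibleTiles, objects and DrawUI stand in for your own collections and UI code):

    // One Begin/End with the default SpriteSortMode.Deferred: sprites are drawn
    // in the order the Draw calls are issued, so simply issue them layer by layer.
    spriteBatch.Begin();   // same as Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend)

    foreach (var tile in visibleTiles)              // layer 1: tileset surface
        spriteBatch.Draw(tileset, tile.Position, tile.Source, Color.White);

    foreach (var obj in objects)                    // layer 2: rocks, characters, ...
        spriteBatch.Draw(obj.Texture, obj.Position, Color.White);

    DrawUI(spriteBatch);                            // layer 3: UI on top of everything

    spriteBatch.End();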
I am still developing my sci-fi video game using my own custom game engine. Now I want to implement the combat system in my game and in the engine. While nearly everything is clear to me, I wonder how to do proper laser beams like the ones known from Star Wars, Star Trek, Babylon 5, etc.
I did some online research, but I did not find any suitable article; I am pretty sure I searched with the wrong keywords/tags. Can you give me some hints on how to implement effects such as laser beams? I think it would be enough to know the proper techniques or terms I need for online research...
A common way is to draw three (or more) intersecting transparent planes like this, if you'll excuse my crude drawing:
Each of them then bears the same laser texture that fades to black near the top and bottom edges:
If you add any subtle detail, remember to scale the texture coordinates appropriately based on the length of the beam and enable wrapping.
Finally, and most importantly, use a shader that shows only the planes facing the camera, while fading away the ones at a glancing angle to hide the fact that we're using intersecting planes and make the beam look smooth and plausible. The blending should be additive. You should also add some extra effects to the ends of the beam, again to hide the planes.
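The "glancing angle" test is just the dot product between the view direction and each plane's normal. A rough CPU-side sketch of the math (in practice it goes in the vertex or pixel shader, and the result scales the beam colour before additive blending):

    // Fade factor for one beam plane: 1 when the plane faces the camera head-on,
    // 0 when it is seen edge-on.
    float PlaneFade(Vector3 planeNormal, Vector3 beamPosition, Vector3 cameraPosition)
    {
        Vector3 view = Vector3.Normalize(cameraPosition - beamPosition);
        float facing = Math.Abs(Vector3.Dot(planeNormal, view));

        // Square it to sharpen the falloff so edge-on planes vanish quickly.
        return facing * facing;
    }

    // Usage: multiply the plane's colour by the fade factor and draw with BlendState.Additive.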
I have a very simple terrain map made of tiles. All the tiles are the same size, just with different heights (z values).
I can render them OK, but there are thousands of tiles and not all of them are on screen, only a portion (those in view). So I'm doing batched rendering: I collect only the tiles that appear on screen, then render them all in one call.
I tried using D3DXVec3Project to project each vertex from world space to screen space, then detecting which triangles are on screen, but this is very slow; calling it for the whole map takes about 7 ms (roughly 250x250 calls).
Right now I'm using an iso view (D3DXMatrixOrthoLH); there is no camera or eye. When I want to move around the map, I just translate the world.
I think this is a very common problem that every engine must optimize for, but I can't work out what to search for. Is it visibility detection, culling, or clipping?
Thanks! Should I just render all the tiles and let DirectX clip them automatically? (If I remember correctly, the last time I tried rendering them all, it was still very slow.)
img : http://i1335.photobucket.com/albums/w666/greenpig83/terrain2_zps24b77283.png
Yes, in complex scenes we typically must cull invisible geometry to achieve interactive frame rates. Of course, it greatly depends on the scene itself, the capabilities of the API, and the target hardware.
Here are the first steps of a good terrain renderer (in order of complexity):
Frustum culling - test for collision between the camera's frustum (the visible volume) and your objects (such as meshes and terrain tiles); no collision means the object is invisible (see the sketch after this list). This is based on collision detection algorithms. Of course, you will need a camera (view and projection matrices) for this, and you will also need a good math library.
Spatial partitioning (e.g. a quadtree in the case of terrain) - grouping objects into data structures that let you skip collision tests which are known in advance to be impossible. This speeds up frustum culling enormously; for example, you don't need to test every tile that is behind the camera.
Level of Detail (LOD) - a family of techniques that render objects far from the camera with less detail, reducing resource consumption. This is what allows amazing, realistic, detailed scenes with huge terrains to be rendered.
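For the tiles specifically, a minimal frustum-culling sketch (XNA math types for brevity; in D3D you would build the six frustum planes from your view * projection matrix in the same way, and Tile/Bounds are assumed placeholder types):

    // Build the frustum once per frame from the combined view * projection matrix.
    BoundingFrustum frustum = new BoundingFrustum(view * projection);

    List<Tile> visible = new List<Tile>();
    foreach (Tile tile in allTiles)
    {
        // tile.Bounds is an axis-aligned BoundingBox around the tile's geometry.
        if (frustum.Intersects(tile.Bounds))
            visible.Add(tile);
    }
    // Only "visible" goes into the batched draw call.

    // With a quadtree you replace the flat loop: test a node's bounding box first,
    // and only descend into its children (and their tiles) when the node intersects.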
Now you know what to ask Google for ;) but I'll still add some links.
For beginners:
braynzarsoft's tutorials - you will probably be most interested in the latest ones, about terrain and collision detection
rastertek terrain tutorials
Advanced:
vterrain.org - source of infinite knowledge about terrain rendering (articles, papers, links to implementations)
Mr. Hoppe's papers on progressive meshes
Hope it helps =)
I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to LEGO), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make the game 2.5D. My question is: does the mesh being voxelized allow me to use the individual cubes as particles that I can feed into a particle engine in XNA to get awesome dynamic effects?
No, because you just get a set of vertices and indices defining triangles, with no information about the cubes.
But you can create an algorithm that extracts that info from the model. It's a bit hard, but it's feasible.
I'd do it by creating a 3D grid, and for each face of the grid I'd cast rays from that face towards the opposite face, recording every intersection with the mesh. Each ray should hit the mesh an even number of times (0, 2, 4, ...), and the space between each pair of hits is solid volume.
That way the mesh can be converted to voxels. At each intersection it would also be useful to store the bones associated with the triangle that was hit; that way you would be able to animate the voxel model.
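A rough sketch of that ray-casting pass (assuming vertices/indices pulled from the model's vertex and index buffers, and a grid described by boundsMin, cellSize and N; RayTriangle is a standard Moller-Trumbore ray/triangle test):

    // N x N x N voxel grid over the mesh's bounding box, casting one ray per (x, y)
    // column along +Z from the grid's front face.
    bool[,,] solid = new bool[N, N, N];

    for (int x = 0; x < N; x++)
    for (int y = 0; y < N; y++)
    {
        Vector3 origin = boundsMin + new Vector3((x + 0.5f) * cellSize,
                                                 (y + 0.5f) * cellSize, 0f);
        var hits = new List<float>();

        // Collect every intersection of this column's ray with the mesh.
        for (int i = 0; i < indices.Length; i += 3)
        {
            float? d = RayTriangle(origin, Vector3.UnitZ,
                                   vertices[indices[i]],
                                   vertices[indices[i + 1]],
                                   vertices[indices[i + 2]]);
            if (d.HasValue) hits.Add(d.Value);
        }
        hits.Sort();

        // For a closed mesh the hits come in enter/leave pairs; everything between
        // a pair lies inside the mesh, so mark those voxels as solid.
        for (int h = 0; h + 1 < hits.Count; h += 2)
        {
            int z0 = Math.Max((int)(hits[h] / cellSize), 0);
            int z1 = Math.Min((int)(hits[h + 1] / cellSize), N - 1);
            for (int z = z0; z <= z1; z++)
                solid[x, y, z] = true;
        }
    }

    // Standard Moller-Trumbore ray/triangle intersection; returns the distance
    // along the ray, or null if there is no hit.
    static float? RayTriangle(Vector3 o, Vector3 dir, Vector3 v0, Vector3 v1, Vector3 v2)
    {
        Vector3 e1 = v1 - v0, e2 = v2 - v0;
        Vector3 p = Vector3.Cross(dir, e2);
        float det = Vector3.Dot(e1, p);
        if (Math.Abs(det) < 1e-6f) return null;          // ray parallel to the triangle
        float inv = 1f / det;
        Vector3 t = o - v0;
        float u = Vector3.Dot(t, p) * inv;
        if (u < 0f || u > 1f) return null;
        Vector3 q = Vector3.Cross(t, e1);
        float v = Vector3.Dot(dir, q) * inv;
        if (v < 0f || u + v > 1f) return null;
        float dist = Vector3.Dot(e2, q) * inv;
        return dist >= 0f ? dist : (float?)null;
    }

On each hit you would also record which bones drive the intersected triangle, as described above, so the resulting voxels can be animated.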
I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this is by using a texture that has a progressively higher red value as it approaches the center. I cycle through every pixel of that "deform" texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set the corresponding entry of the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I darken the terrain color (it gets blacker the closer the red value is to the threshold). At the end of this operation I use SetData to update my terrain texture.
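Simplified, the per-pixel pass looks something like this (names are illustrative; terrainData and deformData come from GetData on the two textures):

    for (int y = 0; y < deformHeight; y++)
    {
        for (int x = 0; x < deformWidth; x++)
        {
            // Where this deform pixel lands on the terrain.
            int tx = craterX + x;
            int ty = craterY + y;
            if (tx < 0 || ty < 0 || tx >= terrainWidth || ty >= terrainHeight)
                continue;

            Color deform = deformData[x + y * deformWidth];
            int i = tx + ty * terrainWidth;

            if (terrainData[i].A == 0)
                continue;                               // no terrain here

            if (deform.R >= threshold)
            {
                terrainData[i] = Color.Transparent;     // carve out the crater
            }
            else
            {
                // Darken towards black as the red value approaches the threshold.
                float keep = 1f - deform.R / (float)threshold;
                terrainData[i] = new Color((int)(terrainData[i].R * keep),
                                           (int)(terrainData[i].G * keep),
                                           (int)(terrainData[i].B * keep),
                                           terrainData[i].A);
            }
        }
    }
    terrainTexture.SetData(terrainData);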
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy if lots of craters are being added at the same time. I want to redo my crater generation on the GPU instead, using render targets and "ping-ponging" between being the target and being the texture to modify. That isn't the problem; I know how to do that. The problem is that I don't know how to keep my burn effect using this method.
Here's how the burn effect looks right now:
Does anybody have an idea how I could create a similar burn effect (darkening the edges around the newly formed crater)? I am completely unfamiliar with shaders, but if they are required I would be really thankful if someone walked me through how to do it. If there are any other ways, that would be great too.
It sounds like you're on the right track, but you're doing a lot of things by hand which can also be done by just drawing sprites and applying the right blend formulas.
For example:
Suppose your terrain is saved in a giant texture, in the alpha channel of the texture: 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. You can update your texture simply by drawing the explosion shape (a black sphere, say) onto it; the terrain is gone because the alpha ends up 0 wherever the explosion covers it. Note that this needs a blend state that knocks out the destination alpha rather than the default alpha blending (see the sketch below). Your texture is now up to date, everything was done by the SpriteBatch, and nothing had to be checked pixel by pixel.
I don't know if you wanted a solution for this as well, but now you have one.
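One way to set that up, assuming the terrain lives in a RenderTarget2D (terrainTarget, craterTexture and craterPosition are placeholders): the custom BlendState multiplies the destination by the inverse of the crater sprite's alpha, so opaque crater pixels erase the terrain underneath.

    // Blend state that "erases": dest = dest * (1 - src.A), for both colour and alpha.
    // Create this once and reuse it.
    BlendState erase = new BlendState
    {
        ColorSourceBlend      = Blend.Zero,
        ColorDestinationBlend = Blend.InverseSourceAlpha,
        AlphaSourceBlend      = Blend.Zero,
        AlphaDestinationBlend = Blend.InverseSourceAlpha,
    };

    // terrainTarget is a RenderTarget2D holding the terrain texture.
    GraphicsDevice.SetRenderTarget(terrainTarget);
    spriteBatch.Begin(SpriteSortMode.Deferred, erase);
    spriteBatch.Draw(craterTexture, craterPosition, Color.White);   // opaque blob = hole
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);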
For the burn effect
Now that we have our terrain in a texture, we can apply a post effect while drawing it by using a shader (just like you said). The shader reads the texture's alpha channel and can then do different things to get our burn effect.
The first option is to do edge detection: check a few pixels in all 4 directions and see whether the current pixel is at an edge of the terrain. If so, apply the burn by, for example, multiplying the color by the distance to the edge (or any other function you like).
Another way is quite similar to the first one, but does it in two steps. First you do the same kind of edge detection, but you save the edges to a separate texture. Then, when you draw your terrain, you overlay the edges on top of it. So the result is much the same as drawing the ground in one pass.
The main difference with the second option is that you can also choose to just draw your normal ground, because you are not adjusting the pixels of the ground texture while rendering.
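On the C# side, hooking such a shader in is just a SpriteBatch pass with a custom Effect. A sketch (burnEffect, terrainTexture and burnedTarget are placeholders; the actual neighbour sampling and darkening would live in the effect's pixel shader):

    // Render the terrain texture through the burn/edge-detection effect
    // into a second render target.
    GraphicsDevice.SetRenderTarget(burnedTarget);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                      SamplerState.PointClamp, null, null, burnEffect);
    spriteBatch.Draw(terrainTexture, Vector2.Zero, Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);

    // burnedTarget now holds the terrain with darkened crater edges and can be
    // drawn to the screen like any other texture.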
I know this is a long story, but it is a nice technique. Have a look at toon shaders; they do edge detection as well, even though they work in 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials