I am trying to cut a mesh in half or at least be able to delete faces from it in real-time.
How should I go about doing this?
Locking the vertex buffers and memsetting the selected face or vertex to 0 does not work for me.
Does anyone have a solution or a tutorial on this? I really want this feature in my program!
Cheers
Oh - that one is easy. There is no need to modify the mesh. D3D can already do this for you!
Set the clip plane via IDirect3DDevice9::SetClipPlane, then enable it via the D3DRS_CLIPPLANEENABLE render state. You can even set multiple clip planes at the same time if you want to.
Here is a link to the msdn-entry: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3d/interfaces/idirect3ddevice9/setclipplane.htm
And if you do a Google search on "D3D SetClipPlane" you will find lots of discussions and example code showing how to use it.
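For example, a minimal sketch, assuming device is your IDirect3DDevice9 pointer and you happen to want to clip away everything below the y = 0 plane, might look like this:

// Plane is given as (a, b, c, d) for ax + by + cz + dw = 0; points with a positive
// result are kept, so (0, 1, 0, 0) keeps everything with y >= 0.
float plane[4] = { 0.0f, 1.0f, 0.0f, 0.0f };
device->SetClipPlane(0, plane);                                 // assign user clip plane 0
device->SetRenderState(D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0);   // enable plane 0
// ... draw the mesh as usual; geometry behind the plane is clipped away ...
device->SetRenderState(D3DRS_CLIPPLANEENABLE, 0);               // disable again afterwards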
If you need to dynamically delete triangles from a mesh the best/fastest way is to use indexed triangles. When you create the index buffer, use the 'D3DUSAGE_DYNAMIC' flag. When you want to delete triangles, lock it with the 'D3DLOCK_DISCARD' flag. Write the entire new list of indices into the buffer, leaving out the triangles you want to delete.
The index buffer will be much smaller than the vertex buffer, so re-uploading only indices won't be as much of a drag on the system as re-uploading the vertex buffer would be. But if converting to indexed triangle lists would be a big problem for you, then doing these same operations on the vertex buffer instead is probably your next best option.
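As a rough sketch of that lock-and-rewrite step, assuming indexBuffer was created with D3DUSAGE_DYNAMIC, device is your IDirect3DDevice9, numVertices is the vertex count, and keptIndices is a std::vector<WORD> already filled with the indices of the triangles you want to keep:

void* data = NULL;
if (SUCCEEDED(indexBuffer->Lock(0, 0, &data, D3DLOCK_DISCARD)))
{
    // Overwrite the whole buffer with the surviving indices only.
    memcpy(data, &keptIndices[0], keptIndices.size() * sizeof(WORD));
    indexBuffer->Unlock();
}
// Draw with the reduced triangle count.
device->SetIndices(indexBuffer);
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVertices, 0,
                             (UINT)(keptIndices.size() / 3));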
You say that setting the vertex to 0 doesn't work. In what way doesn't it work?
If you set the position of all the vertices of a triangle to (0.0, 0.0, 0.0), the resulting triangle will have zero size and shouldn't be drawn. Just to be sure, you could set it to an offscreen position instead of zero.
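Just to illustrate the idea for a non-indexed triangle list, here is a sketch assuming vertexBuffer is your IDirect3DVertexBuffer9, t is the index of the triangle to hide, and Vertex is a hypothetical vertex layout whose first member is the position:

struct Vertex { D3DXVECTOR3 pos; DWORD color; };   // hypothetical layout

Vertex* verts = NULL;
if (SUCCEEDED(vertexBuffer->Lock(0, 0, (void**)&verts, 0)))
{
    // Collapse triangle t: all three vertices identical -> zero area -> nothing is drawn.
    for (int k = 0; k < 3; ++k)
        verts[t * 3 + k].pos = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    vertexBuffer->Unlock();
}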
I have a simple program that renders a couple of 3D objects, using Direct3D 9 and HLSL. I'm just starting off with HLSL, and I have no experience with 3D rendering.
I am able to change the texture & color of the models and fade between two textures without problems; however, I was wondering what the best way would be to simply fade a 3D object (blend it with the background). I assume it wouldn't be done by fading between two textures (using lerp), since I want the object to fade into the entire background, and there could be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way that I thought it possible would be to simply get the color of the pixel before you apply any changes, and then do calculations on it with the color of the texture of the model to attain a faded pixel. However, after looking over the internet, it does not appear that it's actually possible to get the color of a pixel before doing anything to it with HLSL.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else that has a similar issue. Thanks to Alexander Stewart for suggesting that there may be an in-built way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a given pixel that is to be rendered, with the existing pixel on the screen. There is a formula that is followed: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, all of these "variables" being groups of 4 float values, in the format (R, G, B, A).
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to set a few flags on my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply the object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but in order to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured out the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);. With these values the formula becomes FinalColor = ObjectPixelColor * Alpha + BackgroundPixelColor * (1 - Alpha).
After setting these flags, before passing through my shaders & rendering, I just needed to set one more flag: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle this between TRUE and FALSE without changing anything else and without rendering problems (although my project was very simple, so it may cause issues in larger projects). You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
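To sum up the device-side setup, here is a sketch of how it might look, assuming device is the LPDIRECT3DDEVICE9, effect is the LPD3DXEFFECT, and fadeAlpha is a hypothetical global variable in the effect file:

// Blend the object with whatever is already in the frame buffer.
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

float fadeAlpha = 0.5f;                                    // 0 = invisible, 1 = fully opaque
effect->SetValue("fadeAlpha", &fadeAlpha, sizeof(float));  // pass it to the shader
// ... render the object through the effect as usual ...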
Going back to my HLSL shader: after these changes, passing a float4 color taken from the tex2D() function in my pixel shader with an alpha value between 0 and 1 yielded the correct fade, provided there aren't other issues. (Another issue that I had, but hadn't realized at the time, was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects.)
I'm sure there could have probably been a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.
I need to draw some objects wireframe and some solid. I assume it is a bad practice to call RSSetState for every object.
Probably I can split the objects into two groups and draw the wireframe group and then the solid group. But I'm not sure that's possible, because I may have other things to consider.
Maybe I can somehow create an index buffer for lines and draw the wireframe with lines, but I still need culling to work on these lines; is that possible?
rasterDesc.FillMode = D3D11_FILL_SOLID;
...
rasterDesc.FillMode = D3D11_FILL_WIREFRAME;
Yes, it is possible, but it would be a waste of time: if you also want culling, you can simply enable the wireframe fill mode together with it, so the extra work of building a line index buffer would be wasted.
As stated on MSDN:
In Direct3D 10, device state is grouped into state objects which greatly reduce the cost of state changes.
Although this is for D3D10, it is exactly the same in D3D11 (which was further optimized in places). The best approach would be to split your objects into two groups, as you stated, or, if that's not possible, into as few groups as you can manage.
Side note: you probably also set your vertex buffer, index buffer and primitive topology (at least that's what I do) each frame, so calling RSSetState is basically the same kind of cost!
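As a minimal sketch, assuming device and context are your ID3D11Device / ID3D11DeviceContext, solidState and wireframeState are ID3D11RasterizerState pointers, and DrawSolidObjects/DrawWireframeObjects are hypothetical helpers that draw each group:

// Create both state objects once, at initialization.
D3D11_RASTERIZER_DESC rasterDesc = {};
rasterDesc.CullMode = D3D11_CULL_BACK;        // culling works in both fill modes
rasterDesc.DepthClipEnable = TRUE;

rasterDesc.FillMode = D3D11_FILL_SOLID;
device->CreateRasterizerState(&rasterDesc, &solidState);

rasterDesc.FillMode = D3D11_FILL_WIREFRAME;
device->CreateRasterizerState(&rasterDesc, &wireframeState);

// Each frame: draw the two groups with a single state change in between.
context->RSSetState(solidState);
DrawSolidObjects();
context->RSSetState(wireframeState);
DrawWireframeObjects();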
I'm new to OpenGL and I can't find info about which way to choose if I want to move an object.
I've created a simple triangle using only a vertex array (no shaders) and displayed it on the screen. Next I want to move my triangle (to the right/left side). As I see it, there are two ways:
use GLKBaseEffect and play with GLKMatrix4MakeTranslation and GLKBaseEffect.transform.projectionMatrix property
change coordinates in vertices array
Which way is more correct?
Thank you.
Translation. You never want to change the coordinates in the vertex array.
I have written a 2D jump-and-run engine that produces a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3 or 4, according to the user's resolution.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to smoothing (linear filtering) the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's the Point TextureFilter (TextureFilter.Point) that you want here, not None, as confirmed in the other answer above.
I am using XNA for a 2D project. I have a problem and I don't know which way to solve it. I have a texture (an image) that is drawn to the screen for example:
|+++|+++|
|---|---|
|+++|+++|
Now I want to be able to destroy part of that structure/image so that it looks like:
|+++|
|---|---|
|+++|+++|
so that collision will now work correctly for the new image as well.
Which way would be better to solve this problem:
Swap the whole texture with another texture, that is transparent in the places where it is destroyed.
Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
Use some other smart-ass way I don't know about.
Any help would be appreciated. Let me know if you need more clarification/examples.
EDIT: To clarify I'll provide an example of usage for this.
Imagine a 4x4 piece of wall that when shot at, a little 1x1 part of it is destroyed.
I'll take the third option:
3 - Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
It's not hard to do. Basically it's the same as a TileSet structure. However, you'll need to change your code to fit this approach.
Read a little about Tiles on: http://www-cs-students.stanford.edu/~amitp/gameprog.html#tiles
Many sites and books talk about tiles and how to use them to build game worlds. But you can apply this logic to anything where the whole is composed of little parts.
Let me quick note the other options:
1 - Swap the whole texture with another texture, that is transparent in the places where it is destroyed.
No. Having a different image for every different state is bad. What if you need to change the texture? Will you remake every image again?
2 - Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
Unfortunately this doesn't work, because spriteBatch.Draw only works with rectangles :(
4 - Use some other smart-ass way I don't know about.
I can't imagine any magic for this. Maybe you could use another image as a mask, but that would be extremely expensive to process.
Check out this article at Ziggyware. It is about deformable terrain, and might be what you are looking for. Essentially, the technique involves setting the pixels you want to hide to transparent.
Option #3 will work.
A more robust system (if you don't want to be limited to boxes) would use per-pixel collision detection. The process basically works as follows:
Calculate a bounding box (or circle) for each object
Check to see if two objects overlap
For each overlap, blit the sprites onto a hidden surface, comparing pixel values as you go. If a pixel is already set when you try to draw the pixel from the second sprite, you have a collision.
Here's a good XNA example (another Ziggyware article, actually): 2D Per Pixel Collision Detection
Some more links:
Can someone explain per-pixel collision detection
XNA 2-d per-pixel collision
I ended up choosing option 3.
Basically I have a Tile class that contains a texture and a dimension. Dimension n means that there are n*n subtiles within that tile. I also have an array that keeps track of which subtiles are destroyed or not. My class looks roughly like this:
class Tile
{
    Texture2D texture;     // the texture drawn for each intact subtile
    int dimension;         // dimension n means there are n*n subtiles within this tile
    int[,] subtiles;       // 1 = intact, 0 = destroyed

    public Tile(Texture2D texture, int dimension)
    {
        this.texture = texture;
        this.dimension = dimension;
        subtiles = new int[dimension, dimension];
        for (int i = 0; i < dimension; i++)
            for (int j = 0; j < dimension; j++)
                subtiles[i, j] = 1;                    // initialize every subtile to "intact"
    }

    // This is how we know which subtiles to draw.
    public void Draw(SpriteBatch spriteBatch)
    {
        for (int i = 0; i < dimension; i++)
            for (int j = 0; j < dimension; j++)
                if (subtiles[i, j] == 1)
                {
                    Vector2 drawPos = new Vector2(i * texture.Width, j * texture.Height);
                    spriteBatch.Draw(texture, drawPos, Color.White);
                }
    }
In a similar fashion I have a collision method that will check for collision:
    public bool Collides(Rectangle rect)
    {
        for (int i = 0; i < dimension; i++)
            for (int j = 0; j < dimension; j++)
            {
                if (subtiles[i, j] == 0) continue;     // skip destroyed subtiles
                // Figure out the rectangle this subtile covers.
                Rectangle subtileRect = new Rectangle(i * texture.Width, j * texture.Height,
                                                      texture.Width, texture.Height);
                if (subtileRect.Intersects(rect))
                    return true;
            }
        return false;
    }
}
And so on. You can imagine how to "destroy" certain subtiles by setting their respective value to 0, and how to check if the whole tile is destroyed.
Granted with this technique, the subtiles will all have the same texture. So far I can't think of a simpler solution.