Is there a way to adjust the UV positions of textures on my mesh at render time, without manually recalculating and changing all the UVs? If so, which approach would be more efficient? I read about a transformation that might work, but it sounded like it might not if I'm also transforming the position and size of my mesh. That would be a problem, so please take it into account when considering any possibilities.
(Sorry if I'm talking weird; I'm a bit sleep deprived at the moment.)
Thanks.
"I'm a bit sleep deprived"
I suggest getting some sleep instead of programming with DirectX.
"is there a way that I can adjust the UV positions of textures"
Use IDirect3DDevice9::SetTransform(D3DTS_TEXTURE0, &matrix). This transform applies only to the texture coordinates, so it is independent of any world or view transform you set for the mesh itself. In shaders, the transformation must be done manually.
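For example, in Managed DirectX terms (a rough sketch, assuming the fixed-function pipeline and an initialized Device called "device"; the same call exists natively as IDirect3DDevice9::SetTransform), scrolling a texture might look like this:

    // Sketch: scroll the texture by (0.25, 0.5) in UV space using the
    // fixed-function texture transform. Names are illustrative.
    Matrix uvMatrix = Matrix.Identity;
    uvMatrix.M31 = 0.25f;  // U translation: for 2D texture coordinates,
    uvMatrix.M32 = 0.5f;   // the translation lives in the third row
    device.SetTransform(TransformType.Texture0, uvMatrix);

    // Tell the pipeline to apply the transform to 2-component UVs.
    device.TextureState[0].TextureTransform = TextureTransform.Count2;

In a shader you would instead pass the matrix (or just a UV offset) as a constant and apply it to the texture coordinates yourself.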
I have a 2D game with four-directional movement, and I'm having FPS (GPU) problems because I have to draw a lot of textures.
I've read a lot about techniques to optimize performance, but I don't know what else I can do.
The main problem is that on some occasions I have about 200 creatures on screen, and for each one I have to draw its body (a single sprite) plus spells and other animations on top of it. I think that's where the conflict starts: the loop that draws each creature must change textures for every creature (body > animation1 > animation2 > animation3), about 200 times per frame at 60 FPS. This lowers the FPS to about 40-50.
Any suggestions?
The issue is probably, as you have already suggested, the constant switching between different textures. This is much slower than drawing the same number of sprites from a single texture.
To change this, consider putting all your textures into a single big texture. You then always draw that one texture. On its own that would obviously look wrong, so you also have to tell XNA which part of the texture you want to draw. For that, you can use the SourceRectangle parameter that can be passed to SpriteBatch.Draw(...). That way, you always render the same texture but can still show different images on screen.
See also this answer about texture atlases for more details.
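A minimal sketch of the idea, with a hypothetical atlas layout (the rectangle values are made up for illustration):

    // Sketch: draw two different sprites from one atlas texture, so the
    // GPU never switches textures between draws. "atlas" is a Texture2D
    // containing all sprites; the rectangles are hypothetical sprite
    // positions inside it.
    Rectangle bodyFrame  = new Rectangle( 0, 0, 64, 64);
    Rectangle spellFrame = new Rectangle(64, 0, 64, 64);

    spriteBatch.Begin();
    spriteBatch.Draw(atlas, creaturePosition, bodyFrame,  Color.White);
    spriteBatch.Draw(atlas, creaturePosition, spellFrame, Color.White);
    spriteBatch.End();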
I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's this shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model in black, translating each vertex along its normal. For a smooth-shaded model, this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means each vertex is translated along the normal of its corresponding face, resulting in visible gaps between the faces.
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option making a copy of the 3D model with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean I have to be able to find the normals of all vertices of the corresponding face, add them together, and normalize the result. Is there a way to do this?
EDIT 2: So far, I've found a few options that don't work out in the end: setting "shademode" in HLSL is now deprecated, and setting the fill mode to "wireframe" (while culling front faces) would be neat, if only I could set the line thickness.
I'm working on a new idea here: I could iterate through the vertices, find their positions on screen, and draw 2D lines between those points using something like the RoundLine library. I'm going to try this and see if it works.
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex mathematics to draw 2D lines at the right depth, I simply did the following:
1. Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
2. Draw the model in all black (I modified my shader for this).
3. Set a rasterizer state that culls back faces, draws with FillMode.Solid, and has a depth bias of 0.
4. Draw the model normally.
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
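In XNA 4.0 terms, the two states might look roughly like this (a sketch; the exact DepthBias value and which CullMode counts as "front" depend on your winding order, and DrawModel/blackEffect/normalEffect are hypothetical helpers):

    // Sketch: two-pass outline. Pass 1 draws the model in black wireframe
    // with front faces culled and a small negative depth bias; pass 2
    // draws the model normally on top.
    RasterizerState outlinePass = new RasterizerState
    {
        CullMode  = CullMode.CullClockwiseFace,  // cull front faces (default winding)
        FillMode  = FillMode.WireFrame,
        DepthBias = -0.00002f                    // tune per scene
    };
    RasterizerState normalPass = new RasterizerState
    {
        CullMode  = CullMode.CullCounterClockwiseFace, // XNA default: cull back faces
        FillMode  = FillMode.Solid,
        DepthBias = 0f
    };

    graphicsDevice.RasterizerState = outlinePass;
    DrawModel(model, blackEffect);   // hypothetical helper: draws all-black

    graphicsDevice.RasterizerState = normalPass;
    DrawModel(model, normalEffect);  // hypothetical helper: normal shading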
I hope this information is useful to somebody later on.
I want to simulate stroking a carpet: you would have a graphic of a furry carpet, and with your finger you can move around and stroke it. I need to shift pixels and create some fake distortion around where I am touching.
Anyone have any tips?
Firstly, I guess: do I have enough to work with, assuming I have one JPEG of the material? No skeleton or 3D file, just a flat image.
This can also be improved with "fur rendering".
Here are some examples:
http://www.ozone3d.net/benchmarks/fur/
http://www.xbdev.net/directx3dx/specialX/Fur/index.php
or a new demo from NVIDIA:
http://www.youtube.com/watch?v=2Fp5N-pOxKA (around the 35-second mark)
Sounds like a typical task to be solved with OpenGL shaders.
As MrTJ says: shaders are your key here.
Apart from your diffuse texture, use a second texture as your "carpet" map that you modify. Maybe use it like a normal map, storing a directional vector per texel.
Use your "carpet" map in your shader and distort however you like to create your desired carpet effect.
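For instance, the "carpet" map could be a small texture of direction vectors that you update on touch and sample in the pixel shader to offset the diffuse UVs. A rough CPU-side sketch in XNA (all names here are hypothetical):

    // Sketch: encode stroke directions into an RGBA "carpet" map. Each
    // texel stores a 2D direction in the R/G channels, biased around 128,
    // which the pixel shader can decode and use as a UV offset.
    const int mapWidth = 256, mapHeight = 256;
    Color[] carpetData = new Color[mapWidth * mapHeight];

    void Stroke(Point texel, Vector2 direction, int radius)
    {
        for (int y = -radius; y <= radius; y++)
        for (int x = -radius; x <= radius; x++)
        {
            int px = texel.X + x, py = texel.Y + y;
            if (px < 0 || py < 0 || px >= mapWidth || py >= mapHeight) continue;
            if (x * x + y * y > radius * radius) continue;  // circular brush

            carpetData[py * mapWidth + px] = new Color(
                128 + (int)(direction.X * 127),   // direction.X in [-1, 1]
                128 + (int)(direction.Y * 127),   // direction.Y in [-1, 1]
                0, 255);
        }
        carpetMap.SetData(carpetData);  // carpetMap: the Texture2D the shader samples
    }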
As a learning experience, I'm writing an Immediate mode managed DirectX 9 application.
I'm manually calculating vertex normals across all the triangles in the scene to allow smooth Gouraud shading.
This works as expected, but I'm guessing this is not the most efficient approach. Is it possible to get the GPU to do this for me?
You could, in theory, generate the vertex normals inside the vertex shader (though a vertex shader can't see neighboring triangles, so you would have to pass adjacency data in somehow). That involves computation every single time you render the mesh with that shader, though, so why not generate them in advance?
If you mean you want to generate them in advance of rendering, but on the GPU instead of the CPU, I would say it's not worth the bother of speeding up something you are only going to do once. Besides, I'm not sure DX9 has a way to read computed vertex information back from a shader (DX10 does, via stream output).
All in all, the best thing to do in most cases is the traditional approach: compute vertex normals in the program that saves the mesh data files, as a pre-computation step. Usually you already have them if the mesh came from a 3D package like Max or Maya, because there is artistic information in the normals; unless you know the whole mesh is supposed to be perfectly smooth (or faceted), they're not computable in the general case.
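For reference, that pre-computation is just averaging face normals per vertex. A sketch using XNA-style Vector3 math, assuming an indexed triangle list:

    // Sketch: compute smooth vertex normals for an indexed triangle list.
    // positions/indices are the assumed inputs; one normal per vertex.
    Vector3[] ComputeNormals(Vector3[] positions, int[] indices)
    {
        var normals = new Vector3[positions.Length];
        for (int i = 0; i < indices.Length; i += 3)
        {
            int a = indices[i], b = indices[i + 1], c = indices[i + 2];
            // Unnormalized cross product: weights each face by its area.
            Vector3 faceNormal = Vector3.Cross(
                positions[b] - positions[a],
                positions[c] - positions[a]);
            normals[a] += faceNormal;
            normals[b] += faceNormal;
            normals[c] += faceNormal;
        }
        for (int i = 0; i < normals.Length; i++)
            normals[i] = Vector3.Normalize(normals[i]);
        return normals;
    }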
I've been using OpenGL for years, but after trying D3D for the first time, I wasted a significant amount of time trying to figure out how to make my scene lights stay fixed in the world rather than fixed on my objects.
In OpenGL, light positions get transformed just like everything else by the MODELVIEW matrix, so to get lights fixed in space, you set up your MODELVIEW the way you want for the lights, call glLightfv with GL_POSITION, then set it up for your geometry and make your geometry calls. In D3D that doesn't help.
(Comment -- I eventually figured out the answer to this one, but I couldn't find anything helpful on the web or in the MSDN. It would have saved me a few hours of head scratching if I could have found this answer then.)
The answer I eventually discovered is that while OpenGL has only its one amalgamated MODELVIEW matrix, in D3D the "world" and "view" transforms are kept separate, and placing lights seems to be a major reason for this. So the answer is: use D3DTS_VIEW to set up matrices that should apply to your lights, and D3DTS_WORLD to set up matrices that apply to the placement of your geometry in the world.
So the D3D system actually makes more sense than the OpenGL way. It allows you to specify your light positions once and for all, whenever and wherever you feel like it, without having to constantly reposition them so that they get transformed by your current "view" transform. OpenGL has to work that way because it simply doesn't know what you consider your "view" versus your "model"; it's all just a MODELVIEW to GL.
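In Managed DirectX terms, the pattern is roughly this (a sketch; the camera, light values, and helper names are illustrative):

    // Sketch: fixed-function lights live in world space and are
    // transformed by the view matrix for you. Set the camera and the
    // light once; then move geometry with the world transform alone.
    device.Transform.View = Matrix.LookAtLH(
        new Vector3(0, 5, -10),   // camera position
        Vector3.Empty,            // look-at target
        new Vector3(0, 1, 0));    // up

    device.Lights[0].Type = LightType.Point;
    device.Lights[0].Position = new Vector3(2, 4, 0);  // world space
    device.Lights[0].Diffuse = System.Drawing.Color.White;
    device.Lights[0].Range = 100f;
    device.Lights[0].Update();
    device.Lights[0].Enabled = true;

    // Per-object placement only touches the world transform:
    device.Transform.World = Matrix.Translation(objectPosition);
    DrawMesh(mesh);  // hypothetical helper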
(Comment - apologies if I'm not supposed to answer my own questions here, but this was a real question that I had a few weeks ago and thought it was worth posting here to help others making the shift from OpenGL to D3D. Basic overviews of the D3D lighting and rendering pipeline seem hard to come by.)
For the fixed-function pipeline, a light's position and direction are set in world space. The docs for the light structures do tell you that, but I'm not surprised you missed it; there's not much information on the fixed-function pipeline anymore, as the focus has shifted to programmable shaders.