Hi. I've got a small game using DirectX 10 and C++. I started it from the MeshFromOBJ10 DirectX sample and have just been building on it. The problem is that my objects all render plain black, although they have colour.
I know this is because of the lighting, but changing any of the light-related code from this sample seemingly does nothing.
I was wondering if anyone knows a simple(ish) method to just light everything, as in make it bright everywhere. I don't care about shadows, reflections or anything like that; for what I'm doing they aren't necessary, but it would be nice to see my objects instead of just silhouettes.
Cheers.
my objects all render plain black, although they have colour
If the shader expects a texture to be bound and reads its material information from that texture, and you bind a null texture to the corresponding slot (a shader resource view in DX10 terms), the shader will read black from that null texture. And a black material without specular, emissive or reflection mapping will always look black, no matter how many lights you use.
Use a white texture on materials/objects that have no texture (assuming your shader understands material colour and multiplies it with the texture colour). Or switch to DX9 and use the fixed-function pipeline, which treats missing textures as white. Or modify the shader to support materials without a texture.
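For the last option, a minimal sketch of such a pixel shader (the names here are illustrative, not the sample's actual ones; binding a 1x1 white fallback texture from the app works just as well as the flag):

    Texture2D    g_MeshTexture;
    SamplerState g_Sampler;

    cbuffer cbPerMaterial
    {
        float4 g_MaterialDiffuse;
        bool   g_HasTexture;   // set from the app: false when the material has no texture
    };

    float4 PS(float2 uv : TEXCOORD0) : SV_Target
    {
        // treat a missing texture as plain white instead of black
        float4 texColor = g_HasTexture ? g_MeshTexture.Sample(g_Sampler, uv)
                                       : float4(1, 1, 1, 1);
        return g_MaterialDiffuse * texColor;
    }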
method to just light everything, as in make it bright everywhere
You can use a "global" ambient term (which you'll have to add in your shader yourself, because D3D10 doesn't have a fixed-function pipeline). However, you don't really want it, because
I don't care about shadows
you actually do care about shadows, you just don't know it yet. A global ambient value will make all materials evenly coloured, without any gradients. That means that if you have an untextured, complicated mesh, you won't be able to figure out what you're looking at, and everything that is textured will look ugly. Also, black materials will still remain black. So to "make it bright everywhere" you'll need a "sun": a directional light source or a very big point light.
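A hedged sketch of that "sun" in the pixel shader (names again illustrative; g_MaterialDiffuse is the material colour from before, and you'd multiply in the texture colour if the material has one):

    float3 g_SunDir  = normalize(float3(-0.5, -1.0, -0.3)); // direction the sun shines in
    float3 g_Ambient = float3(0.2, 0.2, 0.2);               // small global ambient

    float4 PS(float3 normal : NORMAL, float2 uv : TEXCOORD0) : SV_Target
    {
        float  nDotL = saturate(dot(normalize(normal), -g_SunDir));
        float3 light = g_Ambient + nDotL;   // ambient plus diffuse from the "sun"
        return float4(g_MaterialDiffuse.rgb * light, g_MaterialDiffuse.a);
    }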
I would like to do something like this:
Have the camera on and tap on the screen to get the color of that area, and then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (though that still isn't working quite right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, how to create the texture.
Thank you,
Alex
Basically you will want to do this with a boolean test in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose), and depending on that shows either the camera texture or the replacement texture.
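A minimal sketch of that test in GLSL ES (the uniform names are made up; the tap would set u_targetColor, and u_tolerance controls how close counts as "the same color"):

    precision mediump float;
    varying vec2 v_uv;
    uniform sampler2D u_camera;       // live camera frame
    uniform sampler2D u_replacement;  // texture to paint in
    uniform vec3 u_targetColor;       // color picked by the tap
    uniform float u_tolerance;

    void main()
    {
        vec3 cam = texture2D(u_camera, v_uv).rgb;
        // boolean test: is this fragment close enough to the picked color?
        bool match = distance(cam, u_targetColor) < u_tolerance;
        gl_FragColor = match ? texture2D(u_replacement, v_uv)
                             : vec4(cam, 1.0);
    }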
Your question is a bit vague, so you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture's coordinates (UV) onto the triangle vertices.
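The vertex stage for that quad can be a simple pass-through (GLSL ES); the app feeds two triangles already in clip space, e.g. positions (-1,-1) (1,-1) (-1,1) and (1,-1) (1,1) (-1,1), with matching 0..1 UVs:

    attribute vec2 a_position;  // already in clip space, no matrix needed
    attribute vec2 a_texCoord;
    varying vec2 v_uv;

    void main()
    {
        v_uv = a_texCoord;
        gl_Position = vec4(a_position, 0.0, 1.0);
    }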
If you follow this tutorial, you'll be able to do the thing you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
Does anyone know of a good source for writing positional light code in OpenGL ES2?
All the tutorials I have seen expect your model to be at the world origin (0,0,0), with the light affecting it there.
Although this might be useful in many cases, what about lights that can exist anywhere in the world? That would be a lot more useful to me. :)
I am mostly looking for the shader code to implement this; the current target platform is iOS with C++.
I googled "fragment shader spot light" and a useful page was the first result. It doesn't have a reflection effect, but it does have positions and directions (you can use or throw out either).
That said, I would suggest you write your own shader for your specific needs and search the web for the effects you want, as the internet is overflowing with them.
Also, you wrote that the model is at the center and then assumed that the light is there too. Anyway, to get the effect of the light being elsewhere, pass the light position in a uniform and subtract that vector from the pixel's position (giving the fragment's position in light-relative coordinates), then use the result in the part of the shader relevant to lighting.
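A minimal positional-light fragment shader in GLSL ES (names are illustrative; v_worldPos and v_normal must be interpolated from the vertex shader in the same space as u_lightPos):

    precision mediump float;
    varying vec3 v_worldPos;
    varying vec3 v_normal;
    uniform vec3 u_lightPos;    // the light can sit anywhere in the world
    uniform vec3 u_lightColor;

    void main()
    {
        // vector from the fragment to the light (the subtraction described above,
        // just with the opposite sign, which only flips the direction)
        vec3 toLight = u_lightPos - v_worldPos;
        float atten  = 1.0 / (1.0 + dot(toLight, toLight));  // simple distance falloff
        float nDotL  = max(dot(normalize(v_normal), normalize(toLight)), 0.0);
        gl_FragColor = vec4(u_lightColor * nDotL * atten, 1.0);
    }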
Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. At the moment I'm using a TMesh, which works well enough to display the particles quickly. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity, and with my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or further sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also doesn't seem easy to change this behavior simply by inheriting from TMesh; it looks like one has to inherit from a lower-level control to initialize the vertex buffer with more properties. Before I try that, I'd like to ask whether that would let me control the transparency of a triangle at all. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but you normally can't change transparency through the vertex positions of a 3D model themselves; those are just x, y, z coordinates. The vertex data does affect how the sprites are lit if you are using a lighting system, though, and you can also use additional per-vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with varying alpha values that can be changed at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the nuclear option is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud shading it gets easier, since Gouraud interpolates per-vertex values across each triangle.
Now, are you using light particles? Some graphics APIs actually have built-in support for light particles.
Edit: I just remembered vertex shaders, which could do exactly this: pass a per-vertex opacity down to the fragment stage.
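To illustrate the idea (a generic GLSL sketch, not FireMonkey-specific): if a per-vertex diffuse color reaches the shaders, its alpha can scale the particle texture's alpha per triangle:

    precision mediump float;
    varying vec4 v_diffuse;   // interpolated per-vertex color, alpha = particle opacity
    varying vec2 v_uv;
    uniform sampler2D u_particleTexture;

    void main()
    {
        vec4 tex = texture2D(u_particleTexture, v_uv);
        gl_FragColor = vec4(tex.rgb, tex.a * v_diffuse.a);
    }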
I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this is by using a "deform" texture whose red value gets progressively higher toward its center. I cycle through every pixel of that deform texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set the corresponding entry of the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I blacken the terrain color (the closer the red value is to the threshold, the blacker it gets). At the end of this operation I use SetData to update my terrain texture.
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy when lots of craters are added at the same time. I want to redo my crater generation on the GPU instead, using render targets and "ping-ponging" between being the target and being the texture to modify. That isn't the problem; I know how to do that. The problem is that I don't know how to keep my burn effect with this method.
Here's how the burn effect looks right now:
Does anybody have an idea how I could create a similar burn effect (darkening the edges around the newly formed crater)? I am completely unfamiliar with shaders, but if they are required I would be really thankful if someone walked me through how to do it. If there are any other ways, that would be great too.
Sounds like you're headed in the right direction. But you're doing a lot of things by hand which can also be done by just drawing sprites and applying the right formulas.
For example:
Suppose your terrain is saved in a giant texture, in the alpha channel of the texture: 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. Update your texture easily by just drawing a black, transparent sphere (or explosion area) onto your texture. The terrain is gone, because the black sphere's alpha value is 0. Your texture is now up to date; everything was done by the SpriteBatch, and nothing had to be checked per pixel.
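One subtlety: with the default alpha blending, drawing a fully transparent sprite leaves the target unchanged, so the crater sprite needs a blend state that actually erases destination alpha. A hedged XNA-style sketch (terrainTarget, craterTexture and craterPosition are placeholder names); here the crater texture is opaque where terrain must disappear, which is the inverse of the alpha-0 description above but gives the same result:

    // multiply destination alpha by (1 - source alpha): opaque crater pixels erase terrain
    var carve = new BlendState
    {
        ColorSourceBlend      = Blend.Zero,
        ColorDestinationBlend = Blend.One,               // leave the terrain's RGB alone
        AlphaSourceBlend      = Blend.Zero,
        AlphaDestinationBlend = Blend.InverseSourceAlpha
    };

    // terrainTarget should be created with RenderTargetUsage.PreserveContents
    graphicsDevice.SetRenderTarget(terrainTarget);
    spriteBatch.Begin(SpriteSortMode.Immediate, carve);
    spriteBatch.Draw(craterTexture, craterPosition, Color.White);
    spriteBatch.End();
    graphicsDevice.SetRenderTarget(null);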
I don't know if you wanted a solution for this as well, but now you have one.
For the burn effect
Now that we have our terrain in a texture, we can do a post effect on the drawing by using a shader (just like you said). The shader obtains the texture's alpha channel and can then do different things to get our burn effect.
The first option is edge detection. Check a few pixels in all four directions and see whether the current pixel is at an edge. If so, burn it by, for example, multiplying its color by the distance to the edge (or any other function you like), as in the sketch below.
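A hedged HLSL sketch of that idea (names are illustrative; TexelSize is 1 / texture size, BurnRadius is the reach of the burn in texels):

    texture TerrainTexture;
    sampler TerrainSampler = sampler_state { Texture = <TerrainTexture>; };
    float2 TexelSize;
    float  BurnRadius;

    float4 PS(float2 uv : TEXCOORD0) : COLOR0
    {
        float4 ground = tex2D(TerrainSampler, uv);

        // how much emptiness is around this pixel? 0 = fully solid, 4 = fully open
        float hole = 0;
        hole += 1 - tex2D(TerrainSampler, uv + float2( TexelSize.x, 0) * BurnRadius).a;
        hole += 1 - tex2D(TerrainSampler, uv + float2(-TexelSize.x, 0) * BurnRadius).a;
        hole += 1 - tex2D(TerrainSampler, uv + float2(0,  TexelSize.y) * BurnRadius).a;
        hole += 1 - tex2D(TerrainSampler, uv + float2(0, -TexelSize.y) * BurnRadius).a;

        // darken solid pixels that have empty neighbours: the crater rim burns
        float burn = 1 - 0.25 * hole * ground.a;
        return float4(ground.rgb * burn, ground.a);
    }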
Another way is quite similar to the first one, but does it in two steps. First you do the same kind of edge detection, but you save the edges to a separate texture. Then, when you draw your terrain, you overlay the edges on top of it, so the result looks the same as drawing the ground in one pass.
The main difference with the second option is that you can also choose to just draw your normal ground; you are not adjusting the pixels of the ground texture at render time.
I know this is a long story, but it is a nice technique. Have a look at toon shaders; they do edge detection as well, even though it is in 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials
I have a couple of problems when doing blending in WebGL. One of them is the way colors are rendered when blending is on, regardless of the alpha value: darker colors are always blended with what is underneath, even when alpha is set to 1.0. Brighter colors, on the other hand, are rendered differently depending on the alpha value, so I don't think there is a problem in the way I set up my shaders.
Then again, I haven't had a chance to render a full scene yet; I am currently only doing tests with WebGL, so I only draw a simple object on top of the default background. Will these blending problems be "fixed" once I render every bit of the screen using objects, or is this a limitation of WebGL?
Try setting the blend function like this:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
That should be the default; at least it seems to be in Firefox.
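For completeness, blending is off by default and has to be enabled too; a minimal setup with standard WebGL calls:

    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);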