Explanation needed on deferred lights

I have implemented a simple deferred shading engine in OpenGL, but now I'm stuck on the point lights part.
I have searched everywhere and I simply can't find a good explanation of how the point lights work.
First, why do they have to be drawn as a mesh?
Can't I simply pass the values to the fragment shader and run a loop over all the lights that I have passed?
I mean, for the directional light you just have to do simple Phong shading on every pixel...
I guess I'm just looking for a nice, simple explanation of how point lights work in deferred shading. If someone could explain it to me or point me in the right direction, it would be much appreciated.
Thanks.
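
For what it's worth, the loop described above does work: with a handful of lights you can run one full-screen pass that iterates over an array of light uniforms, exactly as with the directional light. Drawing each point light as a mesh (usually a sphere scaled to the light's radius) is an optimization: the rasterizer then runs the fragment shader only for the pixels the light volume covers, so a small, distant light costs almost nothing, whereas the loop approach shades every pixel for every light. Below is a minimal sketch of one such light-volume pass; the G-buffer layout, uniform names, and attenuation formula are illustrative assumptions, not a canonical implementation.

```cpp
// Hypothetical fragment shader for one deferred point-light pass, executed by
// drawing a sphere mesh scaled to the light's radius with additive blending.
// G-buffer names, uniforms, and the attenuation curve are assumptions.
const char* pointLightPassFS = R"(
#version 330 core
uniform sampler2D gPosition;   // world-space positions
uniform sampler2D gNormal;     // world-space normals
uniform sampler2D gAlbedo;     // diffuse colour
uniform vec3  lightPos;
uniform vec3  lightColor;
uniform float lightRadius;
uniform vec2  screenSize;
out vec4 fragColor;

void main() {
    // Sample the G-buffer at this screen pixel, not at the sphere's own UVs.
    vec2 uv     = gl_FragCoord.xy / screenSize;
    vec3 pos    = texture(gPosition, uv).xyz;
    vec3 normal = normalize(texture(gNormal, uv).xyz);
    vec3 albedo = texture(gAlbedo, uv).rgb;

    vec3  toLight = lightPos - pos;
    float dist    = length(toLight);
    float atten   = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
    float ndotl   = max(dot(normal, toLight / dist), 0.0);

    fragColor = vec4(albedo * lightColor * ndotl * atten * atten, 1.0);
}
)";
```

On the CPU side you would draw the sphere with additive blending (glBlendFunc(GL_ONE, GL_ONE)) so that overlapping lights accumulate, which is why each light can be rendered as its own independent pass.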

Related

HLSL How to properly outline a flat-shaded model

I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's a shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model (in black) and translating each vertex along its normal. Now, for a smooth-shaded model, this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means that each vertex is translated along the normal of its corresponding face, resulting in visible gaps between the faces.
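
For context, the extrusion pass described here typically looks something like the following sketch. It is not the poster's actual shader; the names are illustrative, and in XNA the HLSL would live in an .fx file rather than a C++ string.

```cpp
// Hypothetical sketch of the outline pass: back faces, drawn black, with each
// vertex pushed out along its normal. In XNA this HLSL would sit in an .fx
// file; holding it in a C++ string here is just for illustration.
const char* outlineVS = R"(
float4x4 WorldViewProjection;
float    OutlineWidth;        // extrusion distance in object units

struct VSInput  { float4 pos : POSITION0; float3 normal : NORMAL0; };
struct VSOutput { float4 pos : POSITION0; };

VSOutput OutlineVS(VSInput input)
{
    VSOutput output;
    // Inflate the model along the vertex normal. With a flat-shaded model,
    // vertices shared by several faces are duplicated with different normals,
    // so the inflated faces separate, which is the gap problem described above.
    float4 inflated = input.pos + float4(input.normal * OutlineWidth, 0.0);
    output.pos = mul(inflated, WorldViewProjection);
    return output;
}
)";
```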
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option making a copy of the 3D model, but with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean that I have to be able to find the normals of all vertices of the corresponding face, add them together, and normalize the result. Is there a way to do this?
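
A vertex shader can't see the other vertices of its face, and XNA's shader profiles have no geometry shader stage, so this particular averaging can't be done in the shader itself. A common CPU-side alternative is to rebuild the mesh with one normal per face, computed from the triangle's corner positions. A hypothetical sketch (types and names are illustrative):

```cpp
// Hypothetical CPU-side flat-shading pass: expand an indexed triangle list so
// every face gets its own normal, computed from the cross product of two
// edges. All types and names here are illustrative.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Vertex { Vec3 position; Vec3 normal; };

std::vector<Vertex> FlatShade(const std::vector<Vec3>& positions,
                              const std::vector<unsigned>& indices)
{
    std::vector<Vertex> out;
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 a = positions[indices[i]];
        Vec3 b = positions[indices[i + 1]];
        Vec3 c = positions[indices[i + 2]];
        // One face normal shared by all three corners of the triangle.
        Vec3 faceNormal = normalize(cross(sub(b, a), sub(c, a)));
        out.push_back({ a, faceNormal });
        out.push_back({ b, faceNormal });
        out.push_back({ c, faceNormal });
    }
    return out;
}
```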
EDIT2: So far, I've found a few options that don't seem to work in the end: setting "shademode" in HLSL is now deprecated, and setting the fillmode to "wireframe" would be neat (while culling front faces), if only I could set the line thickness.
I'm working on a new idea here. I could maybe iterate through vertices, find their position on the screen, and draw 2d lines between those points using something like the RoundLine library. I'm going to try this, and see if it works.
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex mathematics to draw 2D lines at the right depth, I simply did the following (a rough code sketch follows below):
1. Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
2. Draw the model in all black (I modified my shader for this).
3. Set a rasterizer state that culls back faces, draws in solid fill mode, and has a zero depth bias.
4. Draw the model normally.
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
I hope this information is useful to somebody later on.
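
The original states would be set up in C#; as a language-neutral illustration, here is roughly the same sequence expressed against a raw Direct3D 9 device in C++. The bias value is a guess, and the cull modes assume D3D's default clockwise-front winding.

```cpp
// Rough D3D9 C++ sketch of the two-pass outline described above. The XNA
// original uses RasterizerState objects; only the state changes are shown.
#include <d3d9.h>

void DrawOutlinedModel(IDirect3DDevice9* device /*, model, shaders, ... */)
{
    // Pass 1: cull front faces, wireframe, slight negative depth bias,
    // then draw the model in all black.
    float negativeBias = -0.00002f;   // illustrative value
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    device->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
    device->SetRenderState(D3DRS_DEPTHBIAS, *(DWORD*)&negativeBias);
    // ... draw the model with the all-black shader here ...

    // Pass 2: cull back faces, solid fill, zero bias, then draw normally.
    float zeroBias = 0.0f;
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    device->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    device->SetRenderState(D3DRS_DEPTHBIAS, *(DWORD*)&zeroBias);
    // ... draw the model normally here ...
}
```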

Positional light shader code in OpenGL ES2.0?

Does anyone know of a good source for writing positional light code in OpenGL ES2?
All the tutorials I have seen expect your model to be at the world origin (0,0,0), with the light affecting it there.
Although this might be useful in many cases, what about lights that can exist anywhere in the world? That is a lot more useful to me. :)
I am mostly looking for the shader code to implement this; the current target platform is iOS with C++.
I googled "fragment shader spot light" and got the first page of results. It doesn't have a reflection effect, but it does have positions and directions (you can use or throw out either).
That said, I would suggest you write your own shader to suit your specific needs and search the web for the effects you want, as the internet is overflowing with them.
Also, you wrote that the model is at the center and then implied that the light is there too. In any case, to get the effect of the light being elsewhere, just pass the light position in a uniform and subtract that vector from the pixel position (giving the fragment's position in light-relative coordinates), then use the result in the lighting part of the shader.
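
Concretely, that subtraction looks something like the following GLSL ES 2.0 fragment shader, held in a C++ string since the question targets iOS with C++. The variable names and attenuation constants are assumptions for illustration.

```cpp
// Minimal sketch of a positional (point) light in GLSL ES 2.0, assuming the
// vertex shader passes world-space position and normal as varyings.
const char* pointLightFS = R"(
precision mediump float;

varying vec3 vWorldPos;    // fragment position in world space
varying vec3 vNormal;      // normal in world space

uniform vec3 uLightPos;    // light position anywhere in the world
uniform vec3 uLightColor;
uniform vec3 uSurfaceColor;

void main() {
    // Vector from this fragment to the light: the subtraction described
    // above, so the light no longer has to sit at the origin.
    vec3  toLight = uLightPos - vWorldPos;
    float dist    = length(toLight);
    float diffuse = max(dot(normalize(vNormal), toLight / dist), 0.0);
    float atten   = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist * dist);

    gl_FragColor = vec4(uSurfaceColor * uLightColor * diffuse * atten, 1.0);
}
)";
```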

Directional Lights

I'm working on a game idea (2D) that needs directional lights. Basically, I want to add light sources that can be moved and whose light rays interact with the other bodies in the scene.
What I'm doing right now is some testing where, using sensors (Box2D) and ccDrawLine, I can achieve something similar to what I want. Basically, I cast a bunch of rays from a certain point, detect collisions with raycasts, get the end points, and draw lines over them.
I just want to get some opinions: is this a good way of doing it, or are there better options for building something like this?
Also, I would like to know how to render a light effect over this area (the sensor area) so the light looks better. Any ideas?
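
For reference, the raycast fan described above might look like the following Box2D (C++) sketch. The callback, ray count, and radius are illustrative assumptions; the resulting points would be fed to ccDrawLine or used to fill a light polygon.

```cpp
// Hypothetical raycast "fan" for 2D light/shadow with Box2D: cast rays from
// the light in all directions and keep the nearest hit point of each ray.
#include <Box2D/Box2D.h>
#include <cmath>
#include <vector>

class NearestHitCallback : public b2RayCastCallback {
public:
    b2Vec2 point;
    bool   hit = false;

    float32 ReportFixture(b2Fixture*, const b2Vec2& p,
                          const b2Vec2&, float32 fraction) override {
        point = p;
        hit   = true;
        return fraction;   // clip the ray here so only closer hits report next
    }
};

// Returns the outline of the lit area around lightPos (world coordinates).
std::vector<b2Vec2> CastLightFan(b2World& world, b2Vec2 lightPos,
                                 float radius, int rayCount = 128)
{
    std::vector<b2Vec2> points;
    for (int i = 0; i < rayCount; ++i) {
        float angle = 2.0f * b2_pi * i / rayCount;
        b2Vec2 end = lightPos + radius * b2Vec2(std::cos(angle), std::sin(angle));

        NearestHitCallback cb;
        world.RayCast(&cb, lightPos, end);
        points.push_back(cb.hit ? cb.point : end);  // nearest hit or full ray
    }
    return points;   // draw a line from lightPos to each point, or fill as a fan
}
```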
I can think of one cool-looking effect you could apply: put some particles inside the area where the light is visible, like sparks shining and falling down very slowly, something like in this picture.
Any approach to this problem needs collision detection anyway, so yours is pretty nice, provided you have a limited number of Box2D objects.
Another approach, for when you have a lot of Box2D objects, is to render your screen to a texture with just solid colors (which should be fast) and perform ray tracing on that generated texture to find the pixels that will be affected by light. That way you are limited by resolution, not by the number of Box2D objects.
There is good source code here about dynamic and static lights in 2D space.
It's Ruby code, but it's easy to understand, so it shouldn't take long to port to Obj-C/Cocos2D/Box2D.
I really hope it will help you as it helped me.
Hm, interesting question. Cocos2D does provide some rather flexible masking effects. You could have a gradient mask that you lay over your objects, where its position depends on the position of the "light", thereby giving the effect that your objects were being coloured by the light.

Detecting Textures in OpenCV?

I'm trying to detect a texture using OpenCV. The texture is similar to that of the brush on a paintbrush, so in an image it appears as many little lines packed together. I've tried using Hough lines to distinguish the texture from other things, but it hasn't been working out too well, as too many false positives are detected. Beyond that, I've had ideas about using template matching as well as fast Fourier transforms, but I haven't tried testing or implementing them.
So, does anyone have an idea for a possible method to do this? Maybe some other line detector or an edge detector? Or would that bring up too many false positives?
The texture should be detectable in a cluttered scene, and the algorithm should be relatively fast, since I want to track it in real time if possible. Sorry for not being able to post a sample of the texture I want (too little rep, lol), but you can simply search for paintbrush textures if you really need to see what it looks like. If you've seen a paintbrush before, it should be pretty obvious which part I'm talking about (the part with the brush).
Thanks in advance, really appreciate it.
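
One cheap direction that fits the real-time requirement, offered as a hypothetical sketch rather than a tested solution: instead of fitting individual lines, measure local edge density, since a patch of many fine bristle lines produces far more edge pixels per block than most clutter. The thresholds and block size below are made-up starting points.

```cpp
// Hypothetical texture detector with OpenCV (C++): flag blocks whose edge
// density is high enough to suggest a fine line pattern such as brush
// bristles. Canny thresholds, block size, and density cutoff are guesses.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> FindDenseLineBlocks(const cv::Mat& bgr,
                                          int blockSize = 32,
                                          double minDensity = 0.25)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);   // bristle-like regions yield dense edges

    std::vector<cv::Rect> candidates;
    for (int y = 0; y + blockSize <= edges.rows; y += blockSize) {
        for (int x = 0; x + blockSize <= edges.cols; x += blockSize) {
            cv::Rect block(x, y, blockSize, blockSize);
            // Fraction of pixels in this block that are edge pixels.
            double density = cv::countNonZero(edges(block)) /
                             double(blockSize * blockSize);
            if (density > minDensity)
                candidates.push_back(block);
        }
    }
    return candidates;   // cluster neighbouring blocks or track them per frame
}
```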

How do I make the lights stay fixed in the world with Direct3D

I've been using OpenGL for years, but after trying to use D3D for the first time, I wasted a significant amount of time trying to figure out how to make my scene lights stay fixed in the world rather than fixed relative to my objects.
In OpenGL, light positions get transformed just like everything else by the MODELVIEW matrix, so to get lights fixed in space, you set up your MODELVIEW the way you want it for the lights, call glLightfv with GL_POSITION, then set it up for your geometry and make your geometry calls. In D3D that doesn't help.
(Comment: I eventually figured out the answer to this one, but I couldn't find anything helpful on the web or in MSDN. It would have saved me a few hours of head-scratching if I could have found this answer then.)
The answer I eventually discovered was that while OpenGL has only its one amalgamated MODELVIEW matrix, in D3D the "world" and "view" transforms are kept separate, and placing lights seems to be a major reason for this. So the answer is: use D3DTS_VIEW to set up the matrices that should apply to your lights, and D3DTS_WORLD to set up the matrices that apply to the placement of your geometry in the world.
So the D3D system actually makes a bit more sense than the OpenGL way. It lets you specify your light positions once and for all, whenever and wherever you feel like it, without having to constantly reposition them so that they get transformed by your current "view" transform. OpenGL has to work the way it does because it simply doesn't know what you consider your "view" versus your "model"; it's all just one modelview to GL.
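
In code, that split looks roughly like the following fixed-function D3D9 sketch. The light values are placeholders and the matrix setup is elided.

```cpp
// Hypothetical D3D9 fixed-function sketch of the world/view split described
// above: the light is placed once in world space, while the view transform
// and each object's world transform are set independently.
#include <d3d9.h>

void SetupSceneLighting(IDirect3DDevice9* device,
                        const D3DMATRIX& viewMatrix,
                        const D3DMATRIX& objectWorldMatrix)
{
    // The view transform is set on its own, independent of object placement.
    device->SetTransform(D3DTS_VIEW, &viewMatrix);

    // The light gets a world-space position once; it is not re-specified per
    // object, so it stays fixed in the world as the camera moves.
    D3DLIGHT9 light = {};
    light.Type         = D3DLIGHT_POINT;
    light.Diffuse      = { 1.0f, 1.0f, 1.0f, 1.0f };
    light.Position     = { 10.0f, 20.0f, 5.0f };   // placeholder position
    light.Range        = 100.0f;
    light.Attenuation0 = 1.0f;
    device->SetLight(0, &light);
    device->LightEnable(0, TRUE);
    device->SetRenderState(D3DRS_LIGHTING, TRUE);

    // Each object only touches the world transform before its draw call.
    device->SetTransform(D3DTS_WORLD, &objectWorldMatrix);
    // ... draw the object here ...
}
```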
(Comment: apologies if I'm not supposed to answer my own question here, but this was a real question that I had a few weeks ago, and I thought it was worth posting to help others making the shift from OpenGL to D3D. Basic overviews of the D3D lighting and rendering pipeline seem hard to come by.)
For the fixed-function pipeline, the light's position and direction are set in world space. The docs for the light structures do tell you that, but I'm not surprised that you missed it; there's not much information on the fixed-function pipeline anymore, as the focus has shifted to programmable shaders.
