Phong Shading vs Tessellation - DirectX

I ran across Phong shading while looking at the Source Engine. The description sounds very much like tessellation, but when I looked it up I didn't really find anything directly comparing the two. In DirectX, tessellation isn't used the way Phong shading is used in HLSL. What's the difference, and which one should I use?

Phong shading is not directly related to DX11 tessellation, but because both can smooth out lighting detail, I can see how they could be confused.
Tessellation dynamically increases geometric detail based on some parameters (often camera distance). This can improve lighting quality (maybe this is the connection to Phong?) as well as silhouette detail. The shading advantages (though not the silhouette detail) can actually be simulated entirely in pixel shaders without tessellation.
Phong shading is a pixel shading technique. It does not affect geometric detail. It is similar to standard OpenGL Gouraud shading, except that instead of interpolating a lighting value across the pixels of a surface, the normal is interpolated across the surface and renormalized at each pixel. This gives more accurate lighting results, often called "per-pixel lighting" as opposed to "per-vertex lighting".
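To make the contrast concrete, here is a minimal HLSL sketch (the names are assumptions, not code from any particular engine) of Gouraud versus Phong shading: the Gouraud path computes lighting per vertex and interpolates a color, while the Phong path interpolates the normal and computes lighting per pixel after renormalizing it.

```hlsl
// Sketch only (assumed names): the practical difference between Gouraud and Phong shading.
cbuffer PerObject
{
    float4x4 gWorldViewProj;
    float4x4 gWorld;
    float3   gLightDirW;   // direction toward the light, world space
};

// Gouraud ("per-vertex lighting"): light once per vertex, interpolate the resulting color.
struct GouraudVSOut
{
    float4 posH  : SV_Position;
    float3 color : COLOR;
};

GouraudVSOut VS_Gouraud(float3 posL : POSITION, float3 normalL : NORMAL)
{
    GouraudVSOut o;
    o.posH = mul(float4(posL, 1.0f), gWorldViewProj);
    float3 n = normalize(mul(normalL, (float3x3)gWorld));
    float  d = saturate(dot(n, normalize(gLightDirW)));
    o.color = float3(d, d, d);                        // lighting is baked in at the vertex
    return o;
}

float4 PS_Gouraud(GouraudVSOut pin) : SV_Target
{
    return float4(pin.color, 1.0f);                   // just return the interpolated color
}

// Phong ("per-pixel lighting"): interpolate the normal, light once per pixel.
struct PhongVSOut
{
    float4 posH    : SV_Position;
    float3 normalW : NORMAL;
};

PhongVSOut VS_Phong(float3 posL : POSITION, float3 normalL : NORMAL)
{
    PhongVSOut o;
    o.posH    = mul(float4(posL, 1.0f), gWorldViewProj);
    o.normalW = mul(normalL, (float3x3)gWorld);       // pass the normal along, not a color
    return o;
}

float4 PS_Phong(PhongVSOut pin) : SV_Target
{
    float3 n = normalize(pin.normalW);                // renormalize after interpolation
    float  d = saturate(dot(n, normalize(gLightDirW)));
    return float4(d, d, d, 1.0f);
}
```

The only structural difference is where dot(N, L) is evaluated: once per vertex in the first pair, once per pixel in the second.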
You could reasonably (and probably commonly) use both effects at the same time at different parts of the pipeline.

As Justin mentioned, Phong shading is a shading routine used for more accurate per-pixel lighting. Tessellation is used to alter the geometric detail of a mesh by dynamically generating more triangles, giving higher surface detail and a smoother result. It can be used successfully for dynamic level of detail based on distance to the camera or size on screen.
To add to this topic, I should mention that there is a tessellation algorithm called Phong Tessellation that takes inspiration from Phong shading and applies the same idea to tessellation: the generated vertices are displaced using a similar normal-based interpolation, which gives higher-detail silhouettes as well as better surface detail. Phong Tessellation needs a simpler shader than the other common local tessellation scheme, PN triangles, and I used it to achieve higher-detail heads in one of the games I worked on.
Phong Tessellation
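A minimal sketch of the Phong Tessellation position rule from that paper, written as an HLSL domain-shader helper (the function names, the bary parameter, and the alpha shape factor are illustrative):

```hlsl
// Sketch of the Phong Tessellation position rule (Boubekeur & Alexa).
// Names are assumptions; this is not code from any particular engine.
float3 ProjectOntoTangentPlane(float3 q, float3 vertexPos, float3 vertexNormal)
{
    // Project q onto the plane through vertexPos with normal vertexNormal.
    return q - dot(q - vertexPos, vertexNormal) * vertexNormal;
}

float3 PhongTessellatePosition(float3 p0, float3 p1, float3 p2,
                               float3 n0, float3 n1, float3 n2,
                               float3 bary,   // barycentric coords, e.g. from SV_DomainLocation
                               float  alpha)  // 0 = flat triangle, 1 = fully curved
{
    // Ordinary (flat) barycentric interpolation of the corner positions.
    float3 flatPos = bary.x * p0 + bary.y * p1 + bary.z * p2;

    // Blend of the three tangent-plane projections, weighted by the same barycentrics.
    float3 curvedPos = bary.x * ProjectOntoTangentPlane(flatPos, p0, n0)
                     + bary.y * ProjectOntoTangentPlane(flatPos, p1, n1)
                     + bary.z * ProjectOntoTangentPlane(flatPos, p2, n2);

    return lerp(flatPos, curvedPos, alpha);
}
```

Each tessellated point starts at its flat barycentric position and is then pulled toward a blend of the three corner tangent planes, so the surface bulges where the vertex normals diverge.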

Related

How are mipmapped textures sampled?

My question is specifically in regards to Metal, since I don't know if the answer would change for another API.
What I believe I understand so far is this:
A mipmapped texture has precomputed "levels of detail", where lower levels of detail are created by downsampling the original texture in some meaningful way.
Mipmap levels are referred to in descending level of detail, where level 0 is the original texture and higher levels are successive power-of-two reductions of it.
Most GPUs implement trilinear filtering, which picks two neighboring mipmap levels for each sample, samples from each level using bilinear filtering, and then linearly blends those samples.
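To make that last point concrete, a rough sketch of the blend (written in HLSL rather than Metal, with made-up names):

```hlsl
// Manual trilinear filtering: sample two neighbouring mip levels bilinearly and blend.
// Illustrative only; the hardware does this for you when trilinear filtering is enabled.
float4 ManualTrilinear(Texture2D tex, SamplerState bilinearSampler, float2 uv, float lod)
{
    float  level   = floor(lod);
    float4 finer   = tex.SampleLevel(bilinearSampler, uv, level);        // mip N
    float4 coarser = tex.SampleLevel(bilinearSampler, uv, level + 1.0f); // mip N+1
    return lerp(finer, coarser, frac(lod));                              // blend by the fractional part
}
```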
What I don't quite understand is how these mipmap levels are selected. In the documentation for the Metal standard library, I see that samples can be taken, with or without specifying an instance of a lod_options type. I would assume that this argument changes how the mipmap levels are selected, and there are apparently three kinds of lod_options for 2D textures:
bias(float value)
level(float lod)
gradient2d(float2 dPdx, float2 dPdy)
Unfortunately, the documentation doesn't bother explaining what any of these options do. I can guess that bias() biases some automatically chosen level of detail, but then what does the bias value mean? What scale does it operate on? Similarly, how is the lod of level() translated into discrete mipmap levels? And, operating under the assumption that gradient2d() uses the gradient of the texture coordinate, how does it use that gradient to select the mipmap level?
More importantly, if I omit the lod_options, how are the mipmap levels selected then? Does this differ depending on the type of function being executed?
And, if the default no-lod-options-specified operation of the sample() function is to do something like gradient2D() (at least in a fragment shader), does it utilize simple screen-space derivatives, or does it work directly with rasterizer and interpolated texture coordinates to calculate a precise gradient?
And finally, how consistent is any of this behavior from device to device? An old article (old as in DirectX 9) I read referred to complex device-specific mipmap selection, but I don't know if mipmap selection is better-defined on newer architectures.
This is a relatively big subject that you might be better off asking on https://computergraphics.stackexchange.com/ but, very briefly: Lance Williams' paper "Pyramidal Parametrics", which introduced trilinear filtering and the term "MIP mapping", contains a suggestion that came from Paul Heckbert (see page three, first column) that I think may still be used, to an extent, in some systems.
In effect, the approaches to computing the MIP map level are usually based on the assumption that your screen pixel is a small circle, and that this circle can be mapped back onto the texture to get, approximately, an ellipse. You estimate the length of the longer axis, expressed in texels of the highest-resolution map. This then tells you which MIP maps you should sample. For example, if the length were 6, i.e. between 2^2 and 2^3, you would want to blend between MIP map levels 2 and 3.
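A rough sketch of that calculation (written in HLSL rather than Metal, with assumed names; real hardware differs in the details) might look like this:

```hlsl
// Approximate LOD selection from screen-space UV derivatives. A sketch, not a spec.
float EstimateLod(float2 uv, float2 textureSize, float bias)
{
    float2 dx = ddx(uv) * textureSize;   // texel-space movement per screen pixel in x
    float2 dy = ddy(uv) * textureSize;   // texel-space movement per screen pixel in y

    // Length of the longer axis of the (approximate) footprint ellipse, in level-0 texels.
    float axis = max(length(dx), length(dy));

    // 6 texels -> log2(6) ~ 2.58, so trilinear filtering would blend mip levels 2 and 3.
    return log2(max(axis, 1.0f)) + bias;   // a bias option simply shifts this value
}
```

As I understand the lod_options in the question: bias(value) adds an offset to this implicitly computed level, level(lod) replaces it with an explicit value, and gradient2d(dPdx, dPdy) supplies the derivatives directly instead of letting the hardware take them from neighbouring fragments.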

Why do we implement lighting in the Pixel Shader?

I am reading Introduction to 3D Game Programming with DirectX 11 by Frank D. Luna, and can't seem to understand why we implement lighting in the pixel shader. I would be grateful if you could send me some reference pages on the subject.
Thank you.
Lighting can be done many ways. There are hundreds of SIGGRAPH papers on the topic.
For games, there are a few common approaches (or more often, games will employ a mixture of these approaches)
Static lighting or lightmaps: Lighting is computed offline, usually with a global-illumination solver, and the results are baked into textures. These lightmaps are blended with the base diffuse textures at runtime to create the sense of sophisticated shadows and subtle lighting, but none of it actually changes. The great thing about lightmaps is that you can capture very interesting and sophisticated lighting techniques that are very expensive to compute and then 'replay' them very inexpensively. The limitation is that you can't move the lights, although there are techniques for layering a limited number of dynamic lights on-top.
Deferred lighting: In this approach, the scene is first rendered to encode surface information into offscreen textures, and then additional passes are made to compute the final image, often with one lighting pass per light in the scene. See deferred shading. The good thing about deferred shading is that it is very easy to make the renderer scale with art-driven content without as many hard limits: you can simply do more passes for more lights, for example, since they are additive. The problem with deferred shading is that each pass tends to do little computation, and the many passes push hard on the memory bandwidth of modern GPUs, which have a lot more compute power than bandwidth.
Per-face Forward lighting: This is commonly known as flat shading. Here the lighting is performed once per triangle/polygon using a face normal. On modern GPUs this is usually done in the programmable vertex shader, but a geometry shader could also be used to compute the per-face normal rather than having to replicate it in the vertices. The result is not very realistic, but it is very cheap to draw since the color is constant per face. This is really only used if you are going for a "Tron look" or some other non-photorealistic rendering technique.
Vertex Forward lighting: This is classic lighting where the light computation is performed per vertex with a per-vertex normal. The colors at each vertex are then interpolated across the face of the triangle/polygon (Gouraud shading). This lighting is cheap, and on modern GPUs would be done in the vertex shader, but the result can be too smooth for many complex materials, and any specular highlights tend to get blurred or missed.
Per-pixel Forward lighting: This is the heart of your question: here the lighting is computed once per pixel. This can be something like classic Phong or Blinn-Phong shading, where the normal is interpolated between the vertices, or normal mapping, where a second texture provides the normal information for the surface. On a modern GPU this is done in the pixel shader and can provide much more surface information, better specular highlights, roughness, etc., at the expense of more pixel shader computation. Modern GPUs tend to have a lot of compute power relative to their memory bandwidth, so per-pixel lighting is very affordable compared to the old days. In fact, physically based rendering techniques are quite popular in modern games, and these tend to have very long and complex pixel shaders combining data from 6 to 8 textures for every pixel on every surface in the scene.
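As a rough illustration of this case, a per-pixel Blinn-Phong pixel shader in HLSL looks something like the sketch below; the constant buffer layout and resource names are assumptions, not code from Luna's book.

```hlsl
// Per-pixel Blinn-Phong sketch (assumed names; illustrative only).
cbuffer PerFrame : register(b0)
{
    float3 gLightDirW;    // direction toward the light, world space
    float3 gLightColor;
    float3 gEyePosW;
};

Texture2D    gDiffuseMap : register(t0);
SamplerState gSampler    : register(s0);

struct PSIn
{
    float4 posH    : SV_Position;
    float3 posW    : POSITION1;    // world-space position, interpolated
    float3 normalW : NORMAL;       // world-space normal, interpolated
    float2 uv      : TEXCOORD0;
};

float4 PS(PSIn pin) : SV_Target
{
    float3 n = normalize(pin.normalW);               // renormalize the interpolated normal
    float3 l = normalize(gLightDirW);
    float3 v = normalize(gEyePosW - pin.posW);
    float3 h = normalize(l + v);                     // Blinn-Phong half vector

    float3 albedo = gDiffuseMap.Sample(gSampler, pin.uv).rgb;
    float  diff   = saturate(dot(n, l));
    float  spec   = pow(saturate(dot(n, h)), 64.0f); // a highlight per-vertex lighting would blur

    float3 color = albedo * gLightColor * diff + gLightColor * spec;
    return float4(color, 1.0f);
}
```

Everything here runs once per covered pixel, which is exactly the extra pixel-shader cost (and the extra lighting quality) described above.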
That's a really rough survey and as I said there's a ton of books, articles, and background on this topic.
The short answer to your question is: because we can!

Simple flat shading using Stage3D/AGAL

I'm relatively new to 3D development and am currently using ActionScript, Stage3D and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better-looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle, rather than one per vertex? Is my thinking completely off here? If anyone had a working example of doing simple, flat shading I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via google and can see there is no accepted answer.
Stage3D does not have an equivalent of OpenGL's GL_FLAT option for its shader engine. What this means is that the fragment shader program always receives a "varying", or interpolated, value from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you have basically only one option:
Create three unique vertices for each triangle and set the normal for each vertex to the face normal of the triangle. This way, each vertex will calculate the same lighting and result in the same vertex color. When the fragment shader interpolates, it will be interpolating three identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.

Which is faster: creating a detailed mesh before execution or tessellating?

For simplicity of the problem let's consider spheres. Let's say I have a sphere, and before execution I know the radius, the position and the triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles).
Would it generally be faster to create this sphere mesh beforehand and stream all 50k triangles to the graphics card, or would it be faster to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU?
Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?
Tessellation is certainly valuable, especially when combined with displacement from a heightmap, but the isolated scenario described in your question won't fully answer it on its own.
Before using tessellation you would need to know that you are becoming CPU poly/triangle bound and therefore need to start utilizing the GPU to help you increase the overall triangle count of your game/scene. Calculations are very fast on the GPU, so yes, using multiple tessellation subdivision levels is advisable if you are going to do it, though sometimes I've been happy with just subdividing 3-4 times from a 200-triangle plane.
Tessellation is mainly used for environmental/static mesh scene objects so that you can spend your triangles on characters and other moving/animated models without becoming CPU bound.
Check out engines like Unity3D and CryEngine for tessellation examples to help with the learning curve.
I just so happen to be working with this at the same time.
In terms of FPS, the pre-computed method would be faster in this situation, since you can dump one giant 50K-triangle sphere payload (like any other model) and draw it in multiple places from there.
The tessellation method would be slower, since all the triangles would be generated from a formula, multiple times per frame.
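To make the tessellation side of that comparison concrete, the per-frame GPU work would look roughly like the HLSL domain-shader sketch below (the names, the quad domain, and the single-control-point patch layout are assumptions); every tessellated vertex is re-evaluated from the sphere formula each time the patch is drawn.

```hlsl
// Domain-shader sketch (assumed names): the GPU re-evaluates the sphere surface from
// this formula for every tessellated vertex, every frame. A hull shader (not shown)
// would output the tessellation factors and a single control point carrying the
// sphere's centre and radius.
static const float PI = 3.14159265f;

cbuffer PerFrame : register(b0)
{
    float4x4 gViewProj;
};

struct HullControlPoint
{
    float3 center : POSITION;
    float  radius : RADIUS;
};

struct PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

struct DomainOut
{
    float4 posH : SV_Position;
};

[domain("quad")]
DomainOut DS(PatchConstants pc,
             float2 uv : SV_DomainLocation,
             const OutputPatch<HullControlPoint, 1> patch)
{
    // Map the quad's (u, v) to spherical coordinates.
    float theta = uv.x * 2.0f * PI;   // longitude
    float phi   = uv.y * PI;          // latitude

    float3 onUnitSphere = float3(sin(phi) * cos(theta),
                                 cos(phi),
                                 sin(phi) * sin(theta));

    float3 posW = patch[0].center + patch[0].radius * onUnitSphere;

    DomainOut o;
    o.posH = mul(float4(posW, 1.0f), gViewProj);
    return o;
}
```

A pre-built 50K-triangle sphere skips this per-vertex evaluation entirely and, for the 100-sphere case, can simply be drawn with instancing at each position.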

OpenGL ES 2.0 Vertex Transformation Algorithms

I'm developing an image warping iOS app with OpenGL ES 2.0.
I have a good grasp on the setup, the pipeline, etc., and am now moving along to the math.
Since my experience with image warping is nil, I'm reaching out for some algorithm suggestions.
Currently, I'm setting the initial vertices at points in a grid type fashion, which equally divide the image into squares. Then, I place an additional vertex in the middle of each of those squares. When I draw the indices, each square contains four triangles in the shape of an X. See the image below:
After playing with Photoshop a little, I noticed Adobe uses a slightly more complicated algorithm for their Puppet Warp, but a much simpler algorithm for their standard warp. What do you think is best for me to apply here / personal preference?
Secondly, when I move a vertex, I'd like to apply a weighted transformation to all the other vertices to smooth out the edges (instead of what I have below, where only the selected vertex is transformed). What sort of algorithm should I apply here?
As each vertex is processed independently by the vertex shader, it is not easy to have vertexes influence each other's positions. However, because there are not that many vertexes it should be fine to do the work on the CPU and dynamically update your vertex attributes per frame.
Since what you are looking for is for your surface to act like a rubber sheet as parts of it are pulled, how about going ahead and implementing a dynamic simulation of a rubber sheet? There are plenty of good articles on cloth simulation in full 3D such as Jeff Lander's. Your application could be a simplification of these techniques. I have previously implemented a simulation like this in 3D. I required a force attracting my generated vertexes to their original grid locations. You could have a similar force attracting vertexes to the pixels at which they are generated before the simulation is begun. This would make them spring back to their default state when left alone and would progressively reduce the influence of your dragging at more distant vertexes.
