I am trying to get the fog in SceneKit to follow a curve from the starting distance to the ending distance, rather than having it be linear. This is what the fog distance graph would look like:
What would be the best way to create a volumetrically stored fog opacity curve like this?
I know you can set the density exponent to make the falloff exponential or quadratic, and I tried that too, but I want to be able to produce this type of curve as well.
I tried changing fogStartDistance and fogEndDistance but the effect wasn't correct.
Here is a snippet from a Metal shader I have that performs a "standard" fog look and feel. It was used in a MetalKit scene, so it should work in SceneKit, but you will have to figure out how to inject it into the scene. The fragment shader mentioned above would be a good place to start looking.
// Metal Shading Language
float4 fog(float4 position, float4 color) {
    float dist = position.z / position.w;
    float density = 0.5;
    // exponential falloff: the fog fades out as we get closer to the camera
    float fogAmount = 1.0 - clamp(exp(-density * dist), 0.0, 1.0);
    // the color you want the fog to be
    float4 fogColor = float4(1.0);
    color = mix(color, fogColor, fogAmount);
    return color;
}
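If you want the S-shaped curve from your graph instead of this exponential falloff, one variant of the same function remaps the depth with smoothstep, which is a cubic Hermite ramp. This is only a sketch; the fogStart and fogEnd parameters are assumptions you would have to supply yourself:

// Metal Shading Language: sketch of an S-curve variant of fog() above;
// fogStart/fogEnd are assumed parameters, not SceneKit API
float4 curveFog(float4 position, float4 color,
                float fogStart, float fogEnd) {
    float dist = position.z / position.w;
    // smoothstep is a cubic Hermite ramp: 0 before fogStart,
    // 1 after fogEnd, with an S-shaped blend in between
    float fogAmount = smoothstep(fogStart, fogEnd, dist);
    // the color you want the fog to be
    float4 fogColor = float4(1.0);
    return mix(color, fogColor, fogAmount);
}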
I hope this helps.
You can use a custom fragment shader modifier to implement your own curve.
That could definitely be done, but you will have to implement the fog yourself, as fog is just the ZDepth saved out and then multiplied over the final render pass.
You could model your curve as a NURBS/Hermite curve or an equation and then sample points along it.
A Hermite curve is parameterized by a time t in the range 0 to 1; this would correspond to the normalized depth value (0 at the camera, 1 at the farthest point).
The point sampled along the Hermite at that t (it could be any value, depending on how you modelled the curve) would then be multiplied against the ZDepth value.
fogStartDistance and fogEndDistance apply a linear ramp over the distance, and sadly you can only specify the exponent of the blend.
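Putting those two suggestions together, here is a minimal sketch of a fragment shader modifier along these lines (GLSL-style modifier syntax; fogStart, fogEnd, and the white fog color are assumptions you would wire up yourself, and smoothstep stands in for a full Hermite evaluation, since it is exactly the cubic Hermite basis with zero end tangents):

uniform float fogStart; // assumed custom uniforms, set from code
uniform float fogEnd;

#pragma body
// distance of the shaded point from the camera (view space)
float dist = length(_surface.position.xyz);
// normalize into [0, 1] and remap through a cubic Hermite S-curve
float fogAmount = smoothstep(fogStart, fogEnd, dist);
vec3 fogColor = vec3(1.0);
_output.color.rgb = mix(_output.color.rgb, fogColor, fogAmount);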
Looking for
I’m having trouble accessing the z-coordinate of the rendered pixel in world space. In SceneKit, I am looking for a 3d plane whose rendered color is directly related to the z-coordinates of the rendered point.
Situation
I’m working with SpriteKit and I’m using a SK3DNode to embed a SceneKit scene inside my SpriteKit scene. For the SceneKit scene, I’m using a .dae Collada file exported from Blender. It contains a plane mesh and a light.
I’m applying shader modifiers to modify the geometry and the lighting model.
self.waterGeometry.shaderModifiers = @{
    SCNShaderModifierEntryPointGeometry : self.geomModifier,
    SCNShaderModifierEntryPointSurface  : self.cellShadingModifier
};
The geometry modifier code (self.geomModifier):
// Waves Modifier
float Amplitude = 0.02;
float Frequency = 15.0;
vec2 nrm = _geometry.position.xz;
float len = length(nrm)+0.0001; // for robustness
nrm /= len;
float a = len + Amplitude*sin(Frequency * _geometry.position.z + u_time * 1.6);
_geometry.position.xz = nrm * a;
The geometry modifier applies a sine transformation to _geometry.position to simulate waves. In the image below, the sketched sprites are SpriteKit sprites which have a higher zPosition and do not interfere with the SK3DNode. Notice the subtle waves (z displacement) as a result of the geometry modifier.
The next step: I want the output color to be computed based on the point’s z-coordinate in world space. This could be either _surface.diffuse or _output.color; that doesn't matter much to me (it would imply a different point of insertion for the shader modifier, but that's not an issue).
I have tried
The following code in the surface modifier (self.cellShadingModifier):
vec4 geometry = u_inverseViewTransform * vec4(_surface.position, 1.0);
if (geometry.y < 0.0) {
    _surface.diffuse.rgb *= vec3(0.4);
}
_surface.position is in view space, and I hoped to transform it to world space by using u_inverseViewTransform. Apple's docs say:
Geometric fields (such as position and normal) are expressed in view space. You can use SceneKit’s uniforms (such as u_inverseViewTransform) to operate in a different coordinate space, [...]
As you can see it is flickering and does not appear to be based on the _geometry.position I just modified. I have tested this both in the simulator and on device (iPad Air). I believe I am making a simple error, probably confusing the _surface and _geometry properties.
Can anyone tell me where I can get the z-coordinate (world space) of the currently shaded point of the mesh, so I can use it in my rendering method?
Note
I have also tried to access _geometry inside the surface shader modifier, but I get the error Use of undeclared identifier '_geometry', which is strange because Apple documentation says:
You can use the structures defined by earlier entry points in later entry points. For example, a snippet associated with the SCNShaderModifierEntryPointFragment entry point can read from the _surface structure defined by the SCNShaderModifierEntryPointSurface entry point.
Note 2
I could have the LightingModel shader calculate based on the generated sine wave (and avoid the search for the z-coordinate), but in the future I may add additional waves, and using the z-coordinate would be more maintainable, not to mention more elegant.
I've also been learning how to use the shader modifiers. I have a solution to this which works for me, using both the inverse model transform and the inverse view transform.
The following code will paint the right-hand side of the model at the centre of the scene with a red tint. You should be able to check the other position element (y, I think) to get the result you want.
vec4 orig = _surface.diffuse;
vec4 transformed_position = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (transformed_position.z < 0.0) {
    _surface.diffuse = mix(vec4(1.0, 0.0, 0.0, 1.0), orig, 0.5);
}
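Applied to the water example from the question, the same transform chain with the y element would look something like this (a sketch; the 0.4 darkening factor is just the value from the question):

// same inverse transforms as above, but testing the height of the
// shaded point, which is what the wave shading wanted
vec4 p = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (p.y < 0.0) {
    _surface.diffuse.rgb *= vec3(0.4);
}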
In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture, 0) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get accurate pixel position. I am wondering, is there a HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
You also have an alternative using SV_Position and Load: just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen. A sketch of both options follows below.
Here is the full documentation of a TextureObject.
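A small sketch of both options together (the texture and buffer names are mine, not from your code):

// HLSL pixel shader sketch
Texture2D gDiffuseTexture : register(t0);

cbuffer PerFrame : register(b0)
{
    float2 gScreenSize; // cheaper: upload the size once per frame
};

float4 PS(float4 pos : SV_Position) : SV_Target
{
    // Option 1: ask the texture for its size (may cost more than
    // reading gScreenSize from the constant buffer, as noted above)
    uint width, height;
    gDiffuseTexture.GetDimensions(width, height);
    float2 texcoord = pos.xy / float2(width, height);

    // Option 2: SV_Position.xy is already in pixels, so Load can
    // fetch the texel directly, with no texcoord interpolator at all
    float4 diffuse = gDiffuseTexture.Load(int3(int2(pos.xy), 0));
    return diffuse;
}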
I've been studying shaders in HLSL for an XNA project (so no DX10-DX11), but almost all the resources I found were tutorials for effects where most of the work was done in the pixel shader. For instance, for lighting, the vertex shader is used only to feed the pixel shader normals and other things like that.
I'd like to make some effects based on the vertex shader rather than the pixel one, like deformation for instance. Could someone suggest a book or a website? Even the bare name of an effect would be useful, since then I could google it.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;
float WindStrength;
float3 WindDirection;
VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;

    // move the vertex into world space (XNA uses the row-vector
    // mul(vector, matrix) convention)
    float4 worldPosition = mul(input.Position, World);

    // offset the vertex along the wind direction, scaled by its
    // height, so the tops lean further than the base
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;

    output.Position = mul(worldPosition, mul(View, Projection));
    output.Color = input.Color;
    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis with a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while the vertices that are closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to take into account really tall things that would cause unacceptably large changes, and the wind should not be applied uniformly to all blades, but it should be clear that here the vertex shader is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;
float LerpFactor;

VertexPositionColor VS(VertexPositionColor input)
{
    // direction from the model's origin: the point on the unit
    // sphere this vertex will morph toward
    float3 normal = normalize(input.Position.xyz);

    // blend between the original shape and the unit sphere
    float3 position = lerp(input.Position.xyz, normal, LerpFactor);

    matrix wvp = mul(mul(World, View), Projection);
    float4 outputPosition = mul(float4(position, 1.0), wvp);
    ....
By stepping the uniform LerpFactor from 0 to 1 across a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape into a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse a model into a less complex version of itself, open the lid on a box (or completely unfold it), etc. The possibilities are endless.
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."
I have a sphere in my 3D project and an earth texture, and I use the algorithm from the wiki to calculate the texture coordinates.
The code in my effect file looks like this:
float pi = 3.14159265359f;
output.uvCoords.x = 0.5 + atan2(input.normal.z, input.normal.x) / (2 * pi);
output.uvCoords.y = 0.5f - asin(input.normal.y) / pi;
The result is in the pictures below:
1. Look from the left (there is a line; this is my question)
2. Look from the front
3. Look from the right
This doesn't pretend to be a complete answer at all, but here are some ideas:
Try 6.28 instead of 6.18, because 3.14 * 2 = 6.28. It is always a good idea to create variables or macros instead of magic numbers, to prevent such sad mistakes in the future.
Try to use a more precise value of pi (more digits to the right of the decimal point).
Try to normalize the normal vector before the calculations.
Even better, calculate the texcoords on the CPU once and for all, instead of on each shader invocation (see the sketch after the macros below). You can use any asset library for this purpose, or just quickly port your HLSL to the main code.
#define PI 3.14159265359f
#define PImul2 6.28318530718f // pi*2
#define PIdiv2 1.57079632679f // pi/2
#define PImul3div2 4.71238898038f // 3*pi/2
#define PIrev 0.31830988618f // 1/pi
...
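For the last idea, a CPU-side version might look like the sketch below (C++; the Vertex layout is an assumption). One caveat: per-vertex texcoords still wrap badly along the atan2 seam unless the seam vertices are duplicated, which is exactly the problem diagnosed in the answer that follows.

#include <cmath>
#include <cstddef>

struct Vertex { float nx, ny, nz, u, v; }; // assumed layout

// same spherical mapping as the effect file, computed once at load time
void computeSphereUVs(Vertex* vertices, std::size_t count)
{
    const float pi = 3.14159265359f;
    for (std::size_t i = 0; i < count; ++i) {
        Vertex& vert = vertices[i];
        vert.u = 0.5f + std::atan2(vert.nz, vert.nx) / (2.0f * pi);
        vert.v = 0.5f - std::asin(vert.ny) / pi;
    }
}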
Hope it helps.
Finally I figured it out myself. The problem lies in the fact that I was calculating the texture coordinates in the vertex shader. One vertex of a triangle can be on the far right of the texture while the other two vertices are on the far left, which results in almost the whole texture being squeezed into such a triangle, so there is a line of jumbled texture coordinates. The solution is to send the normal to the pixel shader and calculate the texture coordinates in the pixel shader.
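A sketch of that fix (the struct, texture, and sampler names are mine; your effect file will differ): the vertex shader just passes the normal through, and the pixel shader derives the texture coordinates, so the atan2 wrap-around never spans a whole triangle.

#define PI 3.14159265359f

Texture2D gEarthTexture;
SamplerState gSampler;

struct VSOutput
{
    float4 position : SV_Position;
    float3 normal   : TEXCOORD0; // passed through from the vertex shader
};

float4 PS(VSOutput input) : SV_Target
{
    float3 n = normalize(input.normal); // re-normalize after interpolation
    float2 uv;
    uv.x = 0.5f + atan2(n.z, n.x) / (2.0f * PI);
    uv.y = 0.5f - asin(n.y) / PI;
    return gEarthTexture.Sample(gSampler, uv);
}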
Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px-sized quad so that it effectively becomes the sprite we are looking for; The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
Although when I first implemented this method, ca. 2010, I realized the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way, and I believe it was around the introduction of iOS 5, I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image) and now they seem to look good, but not always.
Am I going crazy, or something changed with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., the vertex position coordinates are slightly off? Wrong texture setup parameters?) But my code base didn't change, so perhaps I am doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, then scaled to the size of the subregion (in this case, 4x3), with its center positioned at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are not even.
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue is probably what you want to have.
What you ultimately want is to sample each texel at its center. You don't want to be taking samples on the boundary between two texels, because that either combines them with linear sampling, or arbitrarily chooses one or the other with nearest sampling, depending on which way the floating-point calculations round.
Having said that, you might think that you don't want to have your texcoords at (0,0), (1,1) and the other corners, because those are on the texel boundary. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample from the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This will ultimately generate texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel. That is what you want.
If you offset your texcoords by a half pixel as in the red example, then it will interpolate incorrectly, and you'll end up sampling the texture off center of the texels.
So long story short, you want to make sure that your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
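To make the 2 by 2 example above concrete, here is a tiny C check of where the samples land (plain C; all the names are mine):

#include <stdio.h>

int main(void)
{
    const int size = 2; /* a 2x2 pixel quad with texcoords 0..1 */
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            /* fragments are sampled at pixel centers... */
            float fragX = x + 0.5f, fragY = y + 0.5f;
            /* ...so the interpolated texcoords land mid-texel */
            printf("pixel (%d,%d) -> texcoord (%.2f, %.2f)\n",
                   x, y, fragX / size, fragY / size);
        }
    }
    return 0; /* prints (0.25,0.25), (0.75,0.25), (0.25,0.75), (0.75,0.75) */
}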
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?
Picture says 1000 words: