Looking for
I’m having trouble accessing the z-coordinate of the rendered pixel in world space. In SceneKit, I want a 3D plane whose rendered color is directly related to the z-coordinate of each rendered point.
Situation
I’m working with SpriteKit and I’m using a SK3DNode to embed a SceneKit scene inside my SpriteKit scene. For the SceneKit scene, I’m using a .dae Collada file exported from Blender. It contains a plane mesh and a light.
I’m applying shader modifiers to modify the geometry and the lighting model.
self.waterGeometry.shaderModifiers = @{
    SCNShaderModifierEntryPointGeometry : self.geomModifier,
    SCNShaderModifierEntryPointSurface  : self.cellShadingModifier
};
The geometry modifier code (self.geomModifier):
// Waves Modifier
float Amplitude = 0.02;
float Frequency = 15.0;
vec2 nrm = _geometry.position.xz;
float len = length(nrm)+0.0001; // for robustness
nrm /= len;
float a = len + Amplitude*sin(Frequency * _geometry.position.z + u_time * 1.6);
_geometry.position.xz = nrm * a;
The geometry modifier applies a sine-based displacement to _geometry.position to simulate waves. In the image below, the sketched sprites are SpriteKit sprites; they have a higher zPosition and do not interfere with the SK3DNode. Notice the subtle waves (z displacement) produced by the geometry modifier.
Next, I want the output color to be computed based on the point’s z-coordinate in world space. It could go into either _surface.diffuse or _output.color; that doesn't matter much to me (it would just imply a different entry point for the shader modifier).
I have tried
The following code in the surface modifier (self.cellShadingModifier):
vec4 geometry = u_inverseViewTransform * vec4(_surface.position, 1.0);
if (geometry.y < 0.0) {
    _surface.diffuse.rgb *= vec3(0.4);
}
_surface.position is in view space, and I hoped to transform it to world space using u_inverseViewTransform. Apple's docs say:
Geometric fields (such as position and normal) are expressed in view
space. You can use SceneKit’s uniforms (such as
u_inverseViewTransform) to operate in a different coordinate space,
[...]
As you can see, it is flickering and does not appear to be based on the _geometry.position I just modified. I have tested this both in the simulator and on device (iPad Air). I believe I am making a simple error, probably by confusing the _surface and _geometry properties.
Can anyone tell me where I can get the z-coordinate (world space) of the currently shaded point of the mesh, so I can use it in my rendering method?
Note
I have also tried to access _geometry inside the surface shader modifier, but I get the error Use of undeclared identifier '_geometry', which is strange because Apple documentation says:
You can use the structures defined by earlier entry points in later
entry points. For example, a snippet associated with the
SCNShaderModifierEntryPointFragment entry point can read from the
_surface structure defined by the SCNShaderModifierEntryPointSurface entry point.
Note 2
I could have the lighting-model shader compute its effect directly from the generated sine wave (and avoid the search for the z-coordinate), but in the future I may be adding additional waves, and using the z-coordinate would be more maintainable, not to mention more elegant.
I've also been learning how to use the shader modifiers. I have a solution to this which works for me, using both the inverse model transform and the inverse view transform.
The following code will paint the right-hand side of the model at the centre of the scene with a red tint. You should be able to check the other position element (y, I think) to get the result you want.
vec4 orig = _surface.diffuse;
vec4 transformed_position = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (transformed_position.z < 0.0) {
    _surface.diffuse = mix(vec4(1.0, 0.0, 0.0, 1.0), orig, 0.5);
}
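In case it helps, here's roughly what attaching that modifier looks like from Swift (a sketch of mine, not part of the original question or answer; attachTintModifier and the geometry parameter are just placeholders, and the modifier string is the snippet above):

import SceneKit

// A sketch of attaching the surface modifier above from Swift.
// `geometry` stands in for the plane geometry (waterGeometry in the question).
func attachTintModifier(to geometry: SCNGeometry) {
    let surfaceModifier = """
    vec4 orig = _surface.diffuse;
    vec4 transformed_position = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
    if (transformed_position.z < 0.0) {
        _surface.diffuse = mix(vec4(1.0, 0.0, 0.0, 1.0), orig, 0.5);
    }
    """
    geometry.shaderModifiers = [.surface: surfaceModifier]
}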
Related
I am trying to get the fog in SceneKit to follow a curve from the starting distance to the ending distance, rather than having it be linear. This is what the fog distance graph would look like:
What would be the best way to create a volumetrically stored fog opacity curve like this?
I know you can set the density exponent to make the falloff exponential/quadratic, and I tried that as well, but I want to be able to produce this type of curve too.
I tried changing fogStartDistance and fogEndDistance but the effect wasn't correct.
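For reference, the built-in fog controls I'm referring to are just properties on SCNScene; a rough Swift sketch (with made-up values):

import SceneKit
import UIKit

// SceneKit's built-in fog: a blend between the start and end distances,
// shaped only by the density exponent (1 = linear, 2 = quadratic).
let scene = SCNScene()
scene.fogColor = UIColor.white
scene.fogStartDistance = 5.0
scene.fogEndDistance = 50.0
scene.fogDensityExponent = 2.0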
Here is a snippet from a Metal shader I have; it produces a "standard" fog look and feel. It was used in a MetalKit scene, so it should work in SceneKit, but you will have to figure out how to inject it into the scene. The fragment shader mentioned above would be a good place to start looking.
// Metal Shading Language
float4 fog(float4 position, float4 color) {
    float distance = position.z / position.w;
    float density = 0.5;

    // Exponential falloff: the fog amount approaches 1 as distance from the camera grows.
    float fog = 1.0 - clamp(exp(-density * distance), 0.0, 1.0);

    // The color you want the fog to be.
    float4 fogColor = float4(1.0);
    color = mix(color, fogColor, fog);
    return color;
}
I hope this helps
You can use a custom fragment shader modifier to implement your own curve.
That could definitely be done, but you will have to implement the fog yourself, as fog is just the ZDepth saved out and then multiplied over the final render pass.
You could model your curve as a NURBS/Hermite curve (or any equation) and then sample points along it.
A Hermite curve is parameterized over the range 0 to 1, which maps nicely onto the ZDepth value (0 at the camera, 1 at the farthest point).
The value sampled from the curve at that parameter (it can be whatever your curve model produces) would then be multiplied against the ZDepth value.
fogStartDistance and fogEndDistance do a linear blend over the distance, and sadly you can only specify the exponent of that blend.
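To make the curve-sampling idea concrete, here is a minimal Swift sketch (mine, not from the answer above): the standard cubic Hermite basis used to remap a normalized ZDepth into a fog amount.

// Remap a normalized depth (0 at the camera, 1 at the farthest point) through a
// cubic Hermite curve. p0/p1 are the fog amounts at the near and far ends;
// m0/m1 are tangents that shape the ease-in/ease-out of the curve.
func hermiteFog(depth t: Float,
                p0: Float = 0.0, p1: Float = 1.0,
                m0: Float = 0.0, m1: Float = 0.0) -> Float {
    let t2 = t * t
    let t3 = t2 * t
    let h00 =  2 * t3 - 3 * t2 + 1
    let h10 =      t3 - 2 * t2 + t
    let h01 = -2 * t3 + 3 * t2
    let h11 =      t3 -     t2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
}

// With zero tangents this gives a smoothstep-like falloff; sample a few depths:
for d in stride(from: Float(0), through: 1, by: 0.25) {
    print(d, hermiteFog(depth: d))
}

The resulting value would then take the place of the linear fog factor when blending in the fog color.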
Is it possible to use SceneKit's unprojectPoint to convert a 2D point to 3D without having a depth value?
I only need to find the 3D location in the XZ plane. Y can always be 0 or any other value, since I'm not using it.
I'm trying to do this for iOS 8 Beta.
I had something similar with JavaScript and Three.js (WebGL) like this:
function getMouse3D(x, y) {
    var pos = new THREE.Vector3(0, 0, 0);
    var pMouse = new THREE.Vector3(
        (x / renderer.domElement.width) * 2 - 1,
        -(y / renderer.domElement.height) * 2 + 1,
        1
    );

    projector.unprojectVector(pMouse, camera);

    var cam = camera.position;
    var m = pMouse.y / (pMouse.y - cam.y);

    pos.x = pMouse.x + (cam.x - pMouse.x) * m;
    pos.z = pMouse.z + (cam.z - pMouse.z) * m;

    return pos;
}
But I don't know how to translate the part with unprojectVector to SceneKit.
What I want to do is to be able to drag an object around in the XZ plane only. The vertical axis Y will be ignored.
Since the object needs to move along a plane, one solution would be to use the hitTest method, but I don't think doing that for every touch/drag event is very good in terms of performance. Also, it wouldn't allow the object to move outside the plane.
I've tried a solution based on the accepted answer here, but it didn't work. Using a single depth value for unprojectPoint, when I drag the object in the +/-Z direction it doesn't stay under the finger for long; it drifts away from it instead.
I need to have the dragged object stay under the finger no matter where is it moved in the XZ plane.
First, are you actually looking for a position in the xz-plane or the xy-plane? By default, the camera looks in the -z direction, so the x- and y-axes of the 3D Scene Kit coordinate system go in the same directions as they do in the 2D view coordinate system. (Well, y is flipped by default in UIKit, but it's still the vertical axis.) The xz-plane is then orthogonal to the plane of the screen.
Second, a depth value is a necessary part of converting from 2D to 3D. I'm not an expert on three.js, but from looking at their library documentation (which apparently can't be linked into), their unprojectVector still takes a Vector3. And that's what you're constructing for pMouse in your code above — a vector whose x- and y-coordinates come from the 2D mouse position, and whose z-coordinate is 1.
SceneKit's unprojectPoint works the same way — it takes a point whose z-coordinate refers to a depth in clip space, and maps that to a point in your scene's world space.
If your world space is oriented such that the only variation you care about is in the x- and y-axes, you may pass any z-value you want to unprojectPoint, and ignore the z-value in the vector you get back. Otherwise, pass 0 to map to the near clipping plane, 1 for the far clipping plane, or something in between for a depth partway through the view frustum. If you're using the unprojected point to position a node in the scene, the best advice is to just try different z-values (between 0 and 1) until you get the behavior you want.
However, it's a good idea to be thinking about what you're using an unprojected vector for — if the next thing you'd be doing with it is testing for intersections with scene geometry, look at hitTest: instead.
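For the drag-in-a-plane case specifically, one common approach (a sketch of mine, not part of the answer above) is to unproject the touch point at two depths to get a ray in world space and then intersect that ray with the plane you care about, e.g. y = 0 for the xz-plane:

import SceneKit

// Unproject the touch point at the near and far clipping planes to build a ray
// in world space, then intersect that ray with the plane y == 0.
// (On iOS, SCNVector3 components are Float.)
func worldPointOnXZPlane(from touchPoint: CGPoint, in view: SCNView) -> SCNVector3? {
    // z = 0 unprojects onto the near clipping plane, z = 1 onto the far one.
    let near = view.unprojectPoint(SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 0))
    let far  = view.unprojectPoint(SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 1))

    let dy = far.y - near.y
    guard abs(dy) > 1e-6 else { return nil }   // ray is parallel to the plane

    // Solve near.y + t * dy == 0 for t, then evaluate the ray at that t.
    let t = -near.y / dy
    return SCNVector3(x: near.x + t * (far.x - near.x),
                      y: 0,
                      z: near.z + t * (far.z - near.z))
}

In principle this keeps the dragged node under the finger anywhere in the xz-plane, without running a hit test for every touch event.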
I've been studying shaders in HLSL for an XNA project (so no DX10-DX11), but almost all the resources I found were tutorials for effects where most of the work was done in the pixel shader. For instance, for lighting the vertex shader is only used to hand normals and similar data to the pixel shader.
I'd like to make some effects based on the vertex shader rather than the pixel one, like deformation for instance. Could someone suggest a book or a website? Even the bare effect names would be useful, since then I could google them.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;

float  WindStrength;
float3 WindDirection;

VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;

    // Move the vertex into world space first, so the wind offset is applied there.
    float4 worldPosition = mul(World, input.Position);

    // Push the vertex in the wind direction; the higher it sits (larger y), the more it leans.
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;

    // Then apply the view and projection transforms.
    output.Position = mul(mul(View, Projection), worldPosition);
    output.Color = input.Color;
    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis with a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while the vertices that are closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to account for really tall things that would otherwise get unacceptably large offsets, and the wind should not be applied uniformly to every blade, but it should be clear that the vertex shader here is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;

float LerpFactor;

VertexShader(VertexPositionColor input)
{
    float3 normal = normalize(input.Position);
    float3 position = lerp(input.Position, normal, LerpFactor);
    matrix wvp = mul(mul(World, View), Projection);
    float3 outputVector = mul(wvp, position);
    ....
By stepping the uniform LerpFactor from 0 to 1 over a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape into a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse a model into a less complex version of itself, open the lid on a box (or unfold it completely), etc. The possibilities are endless.
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."
I am creating a 3D scene and I have just inserted a cube object into it. It is rendered fine at the origin but when I try to rotate it and then translate it I get a huge deformed cube. Here is the problem area in my code:
D3DXMATRIX cubeROT, cubeMOVE;
D3DXMatrixRotationY(&cubeROT, D3DXToRadian(45.0f));
D3DXMatrixTranslation(&cubeMOVE, 10.0f, 2.0f, 1.0f);
D3DXMatrixTranspose(&worldMatrix, &(cubeROT * cubeMOVE));
// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());
// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
m_Model->GetTexture(), m_Light->GetDirection(), m_Light->GetDiffuseColor());
// Reset the world matrix.
m_Direct3D->GetWorldMatrix(worldMatrix);
I have discovered that it's the cubeMOVE part of the transpose that is giving me the problem but I have no idea why.
This rotates the cube properly:
D3DXMatrixTranspose(&worldMatrix, &cubeROT);
This translates the cube properly:
D3DXMatrixTranslation(&worldMatrix, 10.0f, 2.0f, 1.0f);
But this creates the deformed mesh:
D3DXMatrixTranspose(&worldMatrix, &cubeMOVE);
I'm quite new to DirectX so any help would be very much appreciated.
I don't think transpose does what you think it does. To combine transformation matrices, you just multiply them -- no need to transpose. I guess it should be simply:
worldMatrix = cubeROT * cubeMOVE;
Edit
The reason "transpose" seems to work for rotation but not translation, is that transpose flips the non-diagonal parts of the matrix. But for an axis-rotation matrix, that leaves the matrix nearly unchanged. (It does change a couple of signs, but that would only affect the direction of the rotation.) For a translation matrix, applying a transpose would completely deform it -- hence the result you see.
Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px-sized quad so that it effectively becomes the sprite we are looking for. The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
When I first implemented this method, ca. 2010, I noticed the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way (I believe around the introduction of iOS 5) I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image), and now they seem to look good, but not always.
Am I going crazy, or did something change with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., the vertex position coordinates are slightly off? wrong texture setup parameters?) But my code base didn't change, so perhaps I've been doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and has its center positioned at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are odd?
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue variant is probably what you want.
What you ultimately want is to sample each texel at its center. You don't want to take samples on the boundary between two texels, because that either blends them together with linear sampling, or arbitrarily chooses one or the other with nearest sampling, depending on which way the floating-point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1) and the other corners, because those are on the texel boundary. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate them and sample at the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This ultimately generates texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel, which is what you want.
If you offset your texcoords by half a pixel as in the red example, then the interpolation will land off-center, and you'll end up sampling the texture away from the texel centers.
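Here's a quick standalone numeric check of that (my sketch, in Swift): it prints, in texel units, where the samples land for the plain 0-to-1 ('blue') mapping versus the half-texel-inset ('red') mapping, for a sub-image n texels wide drawn 1:1 over n pixels.

// For an n-texel-wide sub-image drawn 1:1 over n screen pixels, GL samples each
// pixel at its center, i.e. at fraction (i + 0.5) / n across the quad.
let n = 4                              // e.g. the 4-pixel-wide sprite
let texel: Float = 1.0 / Float(n)

for i in 0..<n {
    let f = (Float(i) + 0.5) / Float(n)        // interpolation factor for pixel i

    // "Blue": texcoords run 0...1 across the sub-image.
    let blue = f                               // in texels: i + 0.5, dead on texel centers

    // "Red": texcoords inset by half a texel on each side.
    let red = 0.5 * texel + f * (1.0 - texel)  // in texels: drifts off the centers

    print(i, blue * Float(n), red * Float(n))
}

The blue mapping lands exactly on the texel centers (0.5, 1.5, 2.5, 3.5); the red one lands at 0.875, 1.625, 2.375, 3.125, i.e. off-center, which with linear filtering blends in neighbouring texels.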
So, long story short: make sure your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?