I have a node of size 64x32 and a texture of size 192x192, and I am trying to draw the first part of this texture at the first node, the second part at the second node, and so on.
Fragment shader (attached to an SKSpriteNode with a texture size of 64x32):
void main() {
    float bX = 64.0 / 192.0 * (offset.x + 1);
    float aX = 64.0 / 192.0 * (offset.x);
    float bY = 32.0 / 192.0 * (offset.y + 1);
    float aY = 32.0 / 192.0 * (offset.y);
    float normalizedX = (bX - aX) * v_tex_coord.x + aX;
    float normalizedY = (bY - aY) * v_tex_coord.y + aY;
    gl_FragColor = texture2D(u_temp, vec2(normalizedX, normalizedY));
}
offset.x ranges over [0, 2]
offset.y ranges over [0, 5]
u_temp is the 192x192 texture
The normalizedX/normalizedY lines are meant to remap a value from [0, 1] into a sub-range such as [0, 0.33].
But the result seems to be wrong:
SKSpriteNode with attached texture
SKSpriteNode without texture (what I want to achieve with texture)
When a texture is in an atlas, it's not addressed by coordinates from (0,0) to (1,1) anymore. The atlas is really one large texture that has been assembled behind the scenes. When you use a particular named image from an atlas in a normal sprite, SpriteKit looks up that image name in information about how the atlas was assembled and then tells the GPU something like "draw this sprite with bigAtlasTexture, coordinates (0.1632,0.8814) through (0.1778, 0.9143)". If you're going to write a custom shader using the same texture, you need that information about where it lives inside the atlas, which you get from textureRect:
https://developer.apple.com/documentation/spritekit/sktexture/1519707-texturerect
So you have your texture which is not really one image but defined by a location textureRect() in a big packed-up image of lots of textures. I find it easiest to think in terms of (0,0) to (1,1), so when writing a shader I usually do textureRect => subtract and scale to get to (0,0)-(1,1) => compute desired modified coordinates => scale and add to get to textureRect again => texture2D lookup.
Since your shader will need to know about textureRect but you can't call that from the shader code, you have two choices:
1. Make an attribute or uniform to hold that information, fill it in from the outside, and then have the shader reference it.
2. If the shader is only used for a specific texture or for a few textures, then you can generate shader code that's specialized for the required textureRect, i.e., it just has some constants in the code for the texture.
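For approach #1, a minimal sketch might look like this (u_texRect is just a name chosen for illustration, and packing the rect into a vectorFloat4 is one reasonable choice, not the only one):
import SpriteKit
import simd

// Rough sketch of approach #1: pass textureRect in as a uniform so the shader
// can do the normalization itself. u_texRect packs (origin.x, origin.y, width, height).
func attachTexRectUniform(to shader: SKShader, for texture: SKTexture) {
    let rect = texture.textureRect()
    let texRect = vector_float4(Float(rect.origin.x), Float(rect.origin.y),
                                Float(rect.size.width), Float(rect.size.height))
    shader.uniforms.append(SKUniform(name: "u_texRect", vectorFloat4: texRect))
    // In the shader, u_texRect.xy is the origin and u_texRect.zw is the size,
    // so normalizing is (v_tex_coord - u_texRect.xy) / u_texRect.zw.
}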
Here's a part of an example using approach #2:
func myShader(forTexture texture: SKTexture) -> SKShader {
    // Be careful not to assume that the texture has v_tex_coord ranging in (0, 0) to
    // (1, 1)! If the texture is part of a texture atlas, this is not true. I could
    // make another attribute or uniform to pass in the textureRect info, but since I
    // only use this with a particular texture, I just pass in the texture and compile
    // in the required v_tex_coord transformations for that texture.
    let rect = texture.textureRect()
    let shaderSource = """
    void main() {
        // Normalize coordinates to (0,0)-(1,1)
        v_tex_coord -= vec2(\(rect.origin.x), \(rect.origin.y));
        v_tex_coord *= vec2(\(1 / rect.size.width), \(1 / rect.size.height));
        // Update the coordinates in whatever way here...
        // v_tex_coord = desired_transform(v_tex_coord)
        // And then go back to the actual coordinates for the real texture
        v_tex_coord *= vec2(\(rect.size.width), \(rect.size.height));
        v_tex_coord += vec2(\(rect.origin.x), \(rect.origin.y));
        gl_FragColor = texture2D(u_texture, v_tex_coord);
    }
    """
    let shader = SKShader(source: shaderSource)
    return shader
}
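Hooking it up is then just a matter of something like this (assuming texture is the SKTexture the sprite actually renders with):
let sprite = SKSpriteNode(texture: texture)
sprite.shader = myShader(forTexture: texture)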
That's a cut-down version of some specific examples from here:
https://github.com/bg2b/RockRats/blob/master/Asteroids/Hyperspace.swift
I'm using Metal to render a scene with a z-buffer and now need to integrate this z-buffer into SceneKit's rendering. However, I can't figure out how to get SceneKit to use this depth buffer correctly, and I'm not even 100% sure what format SceneKit expects its z-buffer to be in.
Based on this question, my understanding was that SceneKit uses a reverse logarithmic z-buffer in the range of 1 (near) to 0 (far). However, I can't get this working, and objects I draw with SceneKit don't properly respect the depth buffer: they are either always shown or always hidden.
First, here's how I generate a z-buffer texture in a Metal render pass:
struct FragmentOut {
    float4 color [[color(0)]];
    float depth [[depth(any)]];
};

fragment FragmentOut metalRenderFragment(const InOut in [[ stage_in ]]) {
    FragmentOut out;
    out.depth = 0; // 0 is far with a reverse z-buffer
    ...
    float cameraSpaceZ = ...; // Computed in shader
    // These constants are taken from SceneKit's camera and inlined here
    const float zNear = 0.0010000000474974513;
    const float zFar = 1000.0;
    float logDepth = log(cameraSpaceZ / zNear) / log(zFar / zNear);
    out.depth = 1.0 - logDepth; // Reverse the depth for SceneKit
    return out;
}
Then, to integrate the depth buffer into SceneKit, I render a full-screen quad in SceneKit with an SCNProgram that uses the depth texture generated in the previous step:
fragment FragmentOut sceneKitFullScreenQuadFragment(const InOut in [[ stage_in ]],
                                                    depth2d<float, access::sample> depthTexture [[texture(1)]])
{
    constexpr sampler sampler(filter::linear);
    const float depth = depthTexture.sample(sampler, in.uv);
    return {
        .color = float4(0),
        .depth = depth,
    };
}
So two questions:
What format does SceneKit use for its z-buffer? Is it a reversed logarithmic z-buffer?
What am I doing wrong in generating the z-buffer values for SceneKit?
SceneKit uses a reverse logarithmic Z-Buffer. This post and this post show you how to get a normalized linear mapping space [0...1]. You need the opposite formula.
Also, you can toggle the value from reverseZ to directZ this way:
let sceneView = self.view as! SCNView
sceneView.usesReverseZ = true // default
Andy Jazz's answer helped, but I still found the links confusing. Here's what ultimately worked for me (although there are possibly other ways to do this):
When generating the depth map (this would be inside the Metal shader in my original example), pass in SceneKit's projection transform matrix and use it to transform the depth value:
// In a Metal shader generating the depth map
// The z distance from the camera, e.g. if the object
// at the current position is 5 units away, this would be 5.
const float z = ...;
// The camera points along the -z axis, so transform the -z position
// with SceneKit's projection matrix (you can get this from SCNCamera)
const float4 depthPos = (sceneKitState.projectionTransform * float4(0, 0, -z, 1));
// Then do perspective division to get the final depth value
out.depth = depthPos.z / depthPos.w;
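On the CPU side, one way to grab that projection matrix so it can be handed to the Metal pass (just a sketch; renderer here is whatever SCNSceneRenderer you have access to, e.g. in a render-delegate callback, and the viewport size should match your drawable):
import SceneKit
import simd

// Sketch: fetch SceneKit's projection matrix to pass into the Metal render pass.
// projectionTransform(withViewportSize:) accounts for the current aspect ratio.
func projectionMatrix(from renderer: SCNSceneRenderer, viewportSize: CGSize) -> simd_float4x4? {
    guard let camera = renderer.pointOfView?.camera else { return nil }
    let scnMatrix = camera.projectionTransform(withViewportSize: viewportSize)
    // SceneKit's SCNMatrix4 <-> simd bridge; adjust if your SDK spells this differently
    return simd_float4x4(scnMatrix)
}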
Then inside of the SceneKit shader, simply write out the depth, taking into account usesReverseZ:
// In a SceneKit full-screen quad shader
const float depth = depthTexture.sample(sampler, in.uv);
return {
    .color = float4(0),
    .depth = 1.0 - depth,
};
❗️ The above assumes you are using sceneView.usesReverseZ = true (the default). If you are using usesReverseZ = false, simply do .depth = depth instead
I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. Here is my current working theory, step by step.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use texture coordinate to sample camera's depth buffer at current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also do a perspective divide).
Take that view position and multiply by invV (camera's inverse view) to get a world space position
Multiply world space position by light's viewProjection matrix.
Perspective divide that projection-space coordinate, and manipulate into [0..1] to sample from light depth buffer.
Get current depth from light and closest (sampled) depth, if current depth > closest depth, it's in shadow.
Shader Code
Pass 1:
PS_INPUT vs(VS_INPUT input) {
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
}

PS_OUTPUT ps(PS_INPUT input) {
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in alpha channel of a normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET {
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
    float shadow = shadowCalc(pixelPosLight);
    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}

float4 depthToViewSpace(float2 xy) {
    // Get pixel depth from camera by sampling current texcoord.
    // Extract the alpha channel as this holds the depth value.
    // Then, transform from [0..1] to [-1..1]
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}

float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's viewProj matrix, and the values seem correct going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way, because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I basically make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); //Draw to regular render target
_sunlight->use(1, _context); //Use sunlight shader (basically just runs a Vertex Shader & Null Pixel shader so depth can be written to depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0); //Draw to sunlight depth target
void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = { nullptr };
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
//The purpose of setting a null render target before doing the draw call is
//that a draw call with only a depth target bound is much faster.
//(At least I believe so, from my reading online)
I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // Passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}

// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR) {
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help in constructing the shader to get the world view position for a depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord);
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.
I'm making a drawing application using Swift (based on GLPaint) and OpenGL. Now I would like to improve the curve so that its thickness varies with stroke speed (e.g., thicker if drawing fast).
However, since my knowledge of OpenGL is quite limited, I need some guidance. What I want to do is vary the size of my texture/point for each CGPoint I calculate and add to the screen. Is that possible?
func addQuadBezier(var from: CGPoint, var ctrl: CGPoint, var to: CGPoint, startTime: CGFloat, endTime: CGFloat) {
    scalePoints(from: from, ctrl: ctrl, to: to)
    let pointCount = calculatePointsNeeded(from: from, to: to, min: 16.0, max: 256.0)
    var vertexBuffer: [GLfloat] = [GLfloat](count: Int(pointCount), repeatedValue: 0.0)
    var t: CGFloat = startTime + 0.0002
    for i in 0..<Int(pointCount) {
        let p = calculatePoint(from: from, ctrl: ctrl, to: to)
        vertexBuffer.insert(p.x.f, atIndex: i*2)
        vertexBuffer.insert(p.y.f, atIndex: i*2+1)
        t += (CGFloat(1)/CGFloat(pointCount))
    }
    glBufferData(GL_ARRAY_BUFFER.ui, Int(pointCount)*2*sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW.ui)
    glDrawArrays(GL_POINTS.ui, 0, Int(pointCount).i)
}

func render() {
    context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
where render() is called every 1/60 s.
shader
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = pointSize;
    color = vertexColor;
}
Thanks in advance!
In your vertex shader, set gl_PointSize to the width you want. That measurement is in framebuffer pixels, so if the size of your framebuffer changes with the device's scale factor, you'll need to adjust your point size appropriately.
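For that adjustment, the CPU-side update might look something like this (a sketch; the uniform location and view are whatever your GLPaint-style setup already has):
import OpenGLES
import UIKit

// Hypothetical helper: convert a stroke width in UIKit points to framebuffer pixels
// and upload it to the shader's pointSize uniform.
func setPointSize(_ widthInPoints: CGFloat, uniformLocation: GLint, view: UIView) {
    let widthInPixels = GLfloat(widthInPoints) * GLfloat(view.contentScaleFactor)
    glUniform1f(uniformLocation, widthInPixels)
}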
If you find a way to control the line width in the vertex shader, that would most likely be the best solution. Not only would the lines have different widths, but even a single line could have a varying (interpolated) width between points. I am not sure you will be able to achieve this on your platform, though.
So if you do find a way, you would add the point size to your buffer and use it with a new attribute in the vertex shader.
If not, you will need to use triangles to draw the line, which is generally better practice anyway. To define the vertices between points A and B, you can get the direction W = (B - A).normalized() and the normal N = (W.y, -W.x). Then, with k = lineWidth/2.0, the four positions are t1 = A + N*k, t2 = A - N*k, t3 = B + N*k, t4 = B - N*k (see the sketch after this paragraph). This is what you add into your buffer and draw as a triangle strip or triangles, depending on what you are looking for.
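Here's that construction written out in plain Swift/CGPoint math (just a sketch of the vertex positions; the function name is mine):
import CoreGraphics

// Build the four corner positions for a thick line segment from A to B,
// in triangle-strip order (t1, t2, t3, t4 as described above).
func quadVertices(from a: CGPoint, to b: CGPoint, lineWidth: CGFloat) -> [CGPoint] {
    let dx = b.x - a.x, dy = b.y - a.y
    let len = max(sqrt(dx * dx + dy * dy), .ulpOfOne)   // avoid division by zero
    let w = CGPoint(x: dx / len, y: dy / len)           // unit direction W
    let n = CGPoint(x: w.y, y: -w.x)                    // normal N = (W.y, -W.x)
    let k = lineWidth / 2.0
    return [
        CGPoint(x: a.x + n.x * k, y: a.y + n.y * k),    // t1 = A + N*k
        CGPoint(x: a.x - n.x * k, y: a.y - n.y * k),    // t2 = A - N*k
        CGPoint(x: b.x + n.x * k, y: b.y + n.y * k),    // t3 = B + N*k
        CGPoint(x: b.x - n.x * k, y: b.y - n.y * k),    // t4 = B - N*k
    ]
}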
Recently, I jumped into OpenGL. Most things have been working out quite okay, but I keep banging my head against the wall with this one.
I am trying to rotate/scale a 2D image. I am struggling with whether I should rotate first and then scale, or the other way around. Neither way quite works out the way I want.
I have made two short videos showing what is going wrong:
First rotate, then scale
https://dl.dropboxusercontent.com/u/992980/rotate_then_scale.MOV
First scale, then rotate
https://dl.dropboxusercontent.com/u/992980/scale_then_rotate.MOV
The left image is square, the right image is a rectangle. As you can see, with both methods, something is not quite right :)
The black area is the OpenGL viewport. When the viewport is square, everything is fine; when it is a rectangle, things start to go wrong :) For every image I draw, I calculate a different X and Y scale relative to the viewport, and I think I am doing something wrong there...
Note that I am quite new to OpenGL, and I am probably doing something stupid (I hope I am). Hopefully, I can get my question across clearly this way.
Thanks in advance for any help given!
Corjan
The code for drawing one image:
void instrument_renderer_image_draw_raw(struct InstrumentRenderImage* image, struct InstrumentRendererCache* cache, GLuint program) {
    // Load texture if not yet done
    if (image->loaded == INSTRUMENT_RENDER_TEXTURE_UNLOADED) {
        image->texture = instrument_renderer_texture_cache_get(image->imagePath);
        if (image->texture == 0) {
            image->loaded = INSTRUMENT_RENDER_TEXTURE_ERROR;
        }
        else {
            image->loaded = INSTRUMENT_RENDER_TEXTURE_LOADED;
        }
    }

    // Show image when texture has been correctly loaded into GPU memory
    if (image->loaded == INSTRUMENT_RENDER_TEXTURE_LOADED) {
        float instScaleX = (float)cache->instBounds.w / cache->instOrgBounds.w;
        float instScaleY = (float)cache->instBounds.h / cache->instOrgBounds.h;
        float scaleX = (float)image->w / (float)cache->instOrgBounds.w;
        float scaleY = (float)image->h / (float)cache->instOrgBounds.h;

        // Do internal calculations when dirty
        if (image->base.dirty) {
            mat4 matScale;
            mat4 matRotate;
            mat4 matModelView;
            mat4 matProjection;

            matrixRotateZ(image->angle, matRotate);
            matrixScale(scaleX, scaleY * -1, 0, matScale);
            matrixMultiply(matRotate, matScale, matModelView);

            // Determine X and Y within this instrument's viewport
            float offsetX = ((float)cache->instOrgBounds.w - (float)image->w) / 2 / (float)cache->instOrgBounds.w;
            float offsetY = ((float)cache->instOrgBounds.h - (float)image->h) / 2 / (float)cache->instOrgBounds.h;
            float translateX = (((float)image->x / (float)cache->instOrgBounds.w) - offsetX) * 2;
            float translateY = ((((float)cache->instOrgBounds.h - (float)image->y - (float)image->h) / (float)cache->instOrgBounds.h) - offsetY) * -2;
            matrixTranslate(translateX, translateY * -1, -2.4, matModelView);

            //matrixPerspective(45.0, 0.1, 100.0, (double)cache->instOrgBounds.w/(double)cache->instOrgBounds.h, matProjection);
            matrixOrthographic(-1, 1, -1, 1, matProjection);
            matrixMultiply(matProjection, matModelView, image->glMatrix);

            image->base.dirty = 0;
        }

        glUseProgram(program);
        glViewport(cache->instBounds.x * cache->masterScaleX,
                   cache->instBounds.y * cache->masterScaleY,
                   cache->instBounds.w * cache->masterScaleX,
                   cache->instBounds.w * cache->masterScaleX);
        glUniformMatrix4fv(matrixUniform, 1, GL_FALSE, image->glMatrix);

        // Load texture
        glBindTexture(GL_TEXTURE_2D, image->texture);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
}
What framework/library are you using for matrix multiplication?
The thing that needs to come first depends on your matrix representation (e.g. row- vs. column-major and post- vs. pre-multiplication). The library you use dictates that; fixed-function OpenGL (glMultMatrix (...) et al.) was column-major and post-multiplication. Most OpenGL-based frameworks follow tradition, though there are some exceptions like OpenTK. Traditional matrix multiplications were done in the following order:
1. Translation
2. Scaling
3. Rotation
But because of the nature of post-multiplying column-major matrices (matrix multiplication is non-commutative), the operations effectively occurred from bottom to top. Even though you do the multiplication for translation before the one for rotation, rotation is actually applied to the pre-translated coordinates.
In effect, assuming your matrix library follows OpenGL convention, you are doing the sequence of matrix multiplications in reverse.
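As a quick illustration of that ordering with simd (column vectors and post-multiplication, like classic OpenGL; this isn't tied to any particular matrix library):
import Foundation
import simd

// Minimal translation and Z-rotation matrices (column-major, column vectors).
func translation(_ t: SIMD3<Float>) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(t, 1)
    return m
}

func rotationZ(_ angle: Float) -> simd_float4x4 {
    let c = cos(angle), s = sin(angle)
    var m = matrix_identity_float4x4
    m.columns.0 = SIMD4<Float>( c, s, 0, 0)
    m.columns.1 = SIMD4<Float>(-s, c, 0, 0)
    return m
}

// With M = T * R and column vectors, M * v = T * (R * v): even though the
// translation multiply is written first, the rotation is applied first.
let M = translation([1, 0, 0]) * rotationZ(.pi / 2)
let v = SIMD4<Float>(1, 0, 0, 1)
let result = M * v   // (1,0,0) rotates to (0,1,0), then translates to (1,1,0)
Swapping the factors to R * T gives a visibly different result, which is usually the symptom people run into.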