Single component texture with float - webgl

Is there a way to create a texture where each pixel holds only a single float component?
I want to render a shadow map into a texture, and I don't want to use the depth-format extension because it doesn't work well with multi-buffer drawing.
I know I can use a UBYTE RGBA texture and split the float value across the color channels, but I'm worried about the performance cost of that approach.
I know OpenGL 4 has a gl.RED texture format, which would suit this situation, but WebGL doesn't seem to have it.
At first I thought gl.ALPHA was it, but it seems to be something different. gl.LUMINANCE also seems to be something different.
Is there any way to achieve a single-component float texture in WebGL?

Pack the float into the RGBA channels of a standard 8-bit-per-channel RGBA texture.
"vec4 pack_float(float f){",
" const vec4 bit_shift = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);",
" const vec4 bit_mask = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);",
" vec4 res = fract(f * bit_shift);",
" res -= res.xxyz * bit_mask;",
" return res;",
"}",
and
"float unpack_float(vec4 rgba){",
" const vec4 bit_shift = vec4(1.0/(256.0*256.0*256.0), 1.0/(256.0*256.0), 1.0/256.0, 1.0);",
" float res = dot(rgba, bit_shift);",
" return res;",
"}",

Related

How to draw specific part of texture in fragment shader (SpriteKit)

I have a node of size 64x32 and a texture of size 192x192, and I am trying to draw the first part of this texture at the first node, the second part at the second node, and so on.
Fragment shader (attached to SKSpriteNode with texture size of 64x32)
void main() {
    float bX = 64.0 / 192.0 * (offset.x + 1.0);
    float aX = 64.0 / 192.0 * offset.x;
    float bY = 32.0 / 192.0 * (offset.y + 1.0);
    float aY = 32.0 / 192.0 * offset.y;
    float normalizedX = (bX - aX) * v_tex_coord.x + aX;
    float normalizedY = (bY - aY) * v_tex_coord.y + aY;
    gl_FragColor = texture2D(u_temp, vec2(normalizedX, normalizedY));
}
offset.x ranges over [0, 2]
offset.y ranges over [0, 5]
u_temp is the 192x192 texture
The aX/bX/aY/bY math is a function that maps a value from [0, 1] to a sub-range such as [0, 0.33].
But the result seems to be wrong:
SKSpriteNode with attached texture
SKSpriteNode without texture (what I want to achieve with texture)
When a texture is in an atlas, it's not addressed by coordinates from (0,0) to (1,1) anymore. The atlas is really one large texture that has been assembled behind the scenes. When you use a particular named image from an atlas in a normal sprite, SpriteKit is looking up that image name in information about how the atlas was assembled and then telling the GPU something like "draw this sprite with bigAtlasTexture, coordinates (0.1632,0.8814) through (0.1778, 0.9143)". If you're going to write a custom shader using the same texture, you need that information about where it lives inside the atlas, which you get from textureRect:
https://developer.apple.com/documentation/spritekit/sktexture/1519707-texturerect
So you have your texture which is not really one image but defined by a location textureRect() in a big packed-up image of lots of textures. I find it easiest to think in terms of (0,0) to (1,1), so when writing a shader I usually do textureRect => subtract and scale to get to (0,0)-(1,1) => compute desired modified coordinates => scale and add to get to textureRect again => texture2D lookup.
Since your shader will need to know about textureRect but you can't call that from the shader code, you have two choices:
1. Make an attribute or uniform to hold that information, fill it in from the outside, and then have the shader reference it.
2. If the shader is only used for a specific texture or for a few textures, then you can generate shader code that's specialized for the required textureRect, i.e., it just has some constants in the code for the texture.
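For approach #1, a minimal GLSL sketch might look like the following, where u_texRect is a hypothetical uniform you'd fill in from the Swift side with the textureRect values (origin in xy, size in zw):

uniform vec4 u_texRect; // hypothetical: xy = textureRect origin, zw = textureRect size

void main() {
    // Subtract and scale to normalize from atlas coordinates to (0,0)-(1,1)
    vec2 uv = (v_tex_coord - u_texRect.xy) / u_texRect.zw;
    // ... compute the desired modified coordinates here ...
    // Scale and add to get back to the atlas coordinates
    uv = uv * u_texRect.zw + u_texRect.xy;
    gl_FragColor = texture2D(u_texture, uv);
}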
Here's a part of an example using approach #2:
func myShader(forTexture texture: SKTexture) -> SKShader {
    // Be careful not to assume that the texture has v_tex_coord ranging in (0, 0) to
    // (1, 1)! If the texture is part of a texture atlas, this is not true. I could
    // make another attribute or uniform to pass in the textureRect info, but since I
    // only use this with a particular texture, I just pass in the texture and compile
    // in the required v_tex_coord transformations for that texture.
    let rect = texture.textureRect()
    let shaderSource = """
    void main() {
        // Normalize coordinates to (0,0)-(1,1)
        v_tex_coord -= vec2(\(rect.origin.x), \(rect.origin.y));
        v_tex_coord *= vec2(\(1 / rect.size.width), \(1 / rect.size.height));
        // Update the coordinates in whatever way here...
        // v_tex_coord = desired_transform(v_tex_coord)
        // And then go back to the actual coordinates for the real texture
        v_tex_coord *= vec2(\(rect.size.width), \(rect.size.height));
        v_tex_coord += vec2(\(rect.origin.x), \(rect.origin.y));
        gl_FragColor = texture2D(u_texture, v_tex_coord);
    }
    """
    let shader = SKShader(source: shaderSource)
    return shader
}
That's a cut-down version of some specific examples from here:
https://github.com/bg2b/RockRats/blob/master/Asteroids/Hyperspace.swift

Image thresholding in LUA - LOVE

In order to have only one input image for a "spread"-like effect, I would like to do a threshold operation on a drawable, or find any other way that works.
Are there any such tools in LÖVE2D/Lua?
I'm not exactly sure about the desired outcome, the "spread"-like effect, but for thresholding your best bet is a pixel shader, something like this:
extern float threshold; // external variable set from our Lua script

vec4 effect(vec4 color, Image tex, vec2 texture_coords, vec2 screen_coords)
{
    vec4 texturecolor = Texel(tex, texture_coords); // default shader code
    // get the average color of the pixel
    float average = (texturecolor[0] + texturecolor[1] + texturecolor[2]) / 3.0;
    // set the alpha of the pixel to 0 if the average of RGB is below the threshold
    if (average < threshold) {
        texturecolor[3] = 0.0;
    }
    return texturecolor * color; // default shader code
}
This code calculates the average of the RGB channels for each pixel, and if that average is below the threshold, it sets the pixel's alpha to 0 to make it invisible.
To use the pixel effect in your code you need to do something like this (only once, perhaps in love.load):
shader = love.graphics.newShader([==[ ... shader code above ... ]==])
and when drawing the image:
love.graphics.setShader(shader)
love.graphics.draw(img)
love.graphics.setShader()
To adjust the threshold:
shader:send("threshold", number) -- a float from 0 to 1
References:
LÖVE Shader object
love.graphics.newShader for examples of the default shader code

GLSL hue shader producing odd results, iOS only?

I have a cross-platform LibGDX app. This particular GLSL shader code is used to shift the hue of a particular texture.
It works great on Android and when debugging on the desktop, but on an iPad this is the result (excuse the photos of the screen; that was the easiest way to get data off this device).
Code:
const mat3 rgb2yiq = mat3(0.299, 0.595716, 0.211456,
                          0.587, -0.274453, -0.522591,
                          0.114, -0.321263, 0.311135);
const mat3 yiq2rgb = mat3(1.0, 1.0, 1.0,
                          0.9563, -0.2721, -1.1070,
                          0.6210, -0.6474, 1.7046);

vec4 outColor = texture2D(u_texture, v_texCoord) * v_color;
float alpha = outColor.a;

// Hue shift
if (u_hueAdjust > 0.0 && u_hueAdjust < 1.0 && alpha > 0.0)
{
    vec3 unmultipliedRGB = outColor.rgb / alpha;
    vec3 yColor = rgb2yiq * unmultipliedRGB;
    float originalHue = atan(yColor.b, yColor.g);
    float finalHue = originalHue + u_hueAdjust * 6.28318; // convert 0-1 to radians
    float chroma = sqrt(yColor.b * yColor.b + yColor.g * yColor.g);
    vec3 yFinalColor = vec3(yColor.r, chroma * cos(finalHue), chroma * sin(finalHue));
    outColor.rgb = (yiq2rgb * yFinalColor) * alpha;
}
Obviously there are some really weird artifacts that seem to affect certain areas, in particular black/white colors. In general, a subtle change in color is also visible that isn't attributable to the desired hue-change effect.
Overall this shader is wonky on iOS (but works fine on Android/Desktop). After playing with it for a while I'm completely out of ideas; can anyone point me in the right direction?
The documentation for atan says, "The result is undefined if x = 0."
Is it possible that yColor.g is zero for greyscale pixels?
The issue is discussed here: Robust atan(y,x) on GLSL for converting XY coordinate to angle
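One common workaround (a minimal sketch, not the only robust formulation) is to guard the degenerate case before calling atan:

// Returns atan(y, x), but avoids the undefined result when both inputs
// are (near) zero, as happens for greyscale pixels where I = Q = 0 in YIQ.
float safeAtan(float y, float x) {
    if (abs(x) < 1e-6 && abs(y) < 1e-6) {
        return 0.0;
    }
    return atan(y, x);
}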

using pointSize to trigger the fragment shader to draw pixels

I queried the point size range with gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE) and got [1, 1024]. This means that when using a single point to cover a texture (so that it triggers the fragment shader for all the pixels spanned by the pointSize),
at best I cannot render images larger than 1024x1024 with this method, right?
I guess I have to submit 2 triangles (6 vertices) so they cover all of clip space, and then gl.viewport(x, y, width, height) will map this entire area to the output texture (framebuffer object or canvas)?
Is there any other way (maybe something new in WebGL2) other than using an attribute in the vertex shader?
Correct, the largest area you can render with a single point is whatever is returned by gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE).
The spec does not require any size larger than 1. The fact that your GPU/driver/browser returned 1024 does not mean that your users' machines will also return 1024.
note: Answering based on your history of questions
The normal thing to do in WebGL for 99% of all cases is to submit vertices. Want to draw a quad? Submit 4 vertices and 6 indices, or 6 vertices. Want to draw a triangle? Submit 3 vertices. Want to draw a circle? Submit the vertices for a circle. Want to draw a car? Submit the vertices for a car, or more likely submit the vertices for a wheel, draw 4 wheels with those vertices, then submit the vertices for the other parts of the car and draw each part.
You multiply those vertices by some matrices to move, scale, rotate, and project them into 2D or 3D space. All your favorite games do this. The Canvas 2D API does this via OpenGL ES internally. Chrome itself does this to render all the parts of this webpage. That's the norm. Anything else is an exception and will likely lead to limitations.
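As a minimal sketch of that standard pattern (a_position and u_matrix are conventional names, not fixed ones):

// The canonical WebGL vertex shader: per-vertex data arrives in an
// attribute and is transformed by a matrix supplied as a uniform.
attribute vec4 a_position;
uniform mat4 u_matrix;

void main() {
    gl_Position = u_matrix * a_position;
}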
For fun, in WebGL2, there are some other things you can do. They are not the normal thing to do, and they are not recommended for solving real-world problems, but they can be fun just for the challenge.
In WebGL2 there is a built-in variable in the vertex shader called gl_VertexID, which is the index of the vertex currently being processed. You can use it with clever math to generate vertices in the vertex shader with no other data.
Here's some code that draws a quad that covers the canvas:
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');

  const vs = `#version 300 es
  void main() {
    // Derive a unit-quad corner from gl_VertexID alone: ids 0..5 map to
    // (0,0) (1,0) (0,1) (1,0) (0,1) (1,1), two triangles covering the quad.
    int x = gl_VertexID % 2;
    int y = (gl_VertexID / 2 + gl_VertexID / 3) % 2;
    // Expand from (0, 1) to clip space (-1, 1)
    gl_Position = vec4(ivec2(x, y) * 2 - 1, 0, 1);
  }
  `;

  const fs = `#version 300 es
  precision mediump float;
  out vec4 outColor;
  void main() {
    outColor = vec4(1, 0, 0, 1);
  }
  `;

  // compile shaders, link program
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);

  const count = 6;
  gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
And here's one that draws a circle:
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');

  const vs = `#version 300 es
  #define PI radians(180.0)
  void main() {
    const int TRIANGLES_AROUND_CIRCLE = 100;
    // Each triangle has two points on the rim and one at the center.
    int triangleId = gl_VertexID / 3;
    int pointId = gl_VertexID % 3;
    int pointIdOffset = pointId % 2;
    float angle = float((triangleId + pointIdOffset) * 2) * PI /
                  float(TRIANGLES_AROUND_CIRCLE);
    // Rim points get radius 1; the third point of each triangle gets radius 0.
    float radius = 1. - step(1.5, float(pointId));
    float x = sin(angle) * radius;
    float y = cos(angle) * radius;
    gl_Position = vec4(x, y, 0, 1);
  }
  `;

  const fs = `#version 300 es
  precision mediump float;
  out vec4 outColor;
  void main() {
    outColor = vec4(1, 0, 0, 1);
  }
  `;

  // compile shaders, link program
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);

  const count = 300; // 100 triangles, 3 points each
  gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
There is an entire website based on this idea. The site is built around the puzzle of making pretty pictures given only an id for each vertex. It's the vertex shader equivalent of shadertoy.com, where the puzzle is basically: given only gl_FragCoord as input to a fragment shader, write a function that draws something interesting.
Both sites are toys/puzzles. Doing things this way is not recommended for solving real issues like drawing a 3D world in a game, doing image processing, rendering the contents of a browser window, etc. They are cute puzzles in drawing something interesting given only minimal inputs.
Why is this technique not advised? The most obvious reason is that it's hard-coded and inflexible, whereas the standard techniques are super flexible. For example, drawing the fullscreen quad above required one shader, and drawing the circle required a different shader, whereas a standard vertex buffer with attributes multiplied by matrices can be used for any shape provided, 2D or 3D. Not just any shape: with a single matrix multiply in the shader, those shapes can be translated, rotated, scaled, and projected into 3D, and their rotation centers and scale centers can be set independently.
Note: you are free to do whatever you want. If you like these techniques then by all means use them. The reason I'm trying to steer you away from them is that, based on your previous questions, you're new to WebGL, and I feel you'll end up making WebGL much harder for yourself if you use obscure, hard-coded techniques like these instead of the traditional, more flexible techniques that experienced devs use to get real work done. But again, it's up to you; do whatever you want.

Varying Line Width with Open GL using GL_POINTS (iOS)

I'm making a drawing application using Swift (based on GLPaint) and OpenGL. I would now like to improve the curve so that it varies with stroke speed (e.g., thicker if drawing fast).
However, since my knowledge of OpenGL is quite limited, I need some guidance. What I want to do is vary the size of my texture/point for each CGPoint I calculate and add to the screen. Is it possible?
func addQuadBezier(var from: CGPoint, var ctrl: CGPoint, var to: CGPoint, startTime: CGFloat, endTime: CGFloat) {
    scalePoints(from: from, ctrl: ctrl, to: to)
    let pointCount = calculatePointsNeeded(from: from, to: to, min: 16.0, max: 256.0)
    var vertexBuffer: [GLfloat] = [GLfloat](count: Int(pointCount), repeatedValue: 0.0)
    var t: CGFloat = startTime + 0.0002
    for i in 0..<Int(pointCount) {
        let p = calculatePoint(from: from, ctrl: ctrl, to: to)
        vertexBuffer.insert(p.x.f, atIndex: i * 2)
        vertexBuffer.insert(p.y.f, atIndex: i * 2 + 1)
        t += (CGFloat(1) / CGFloat(pointCount))
    }
    glBufferData(GL_ARRAY_BUFFER.ui, Int(pointCount) * 2 * sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW.ui)
    glDrawArrays(GL_POINTS.ui, 0, Int(pointCount).i)
}

func render()
{
    context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
where render() is called every 1/60 s.
Vertex shader:
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;

void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = pointSize;
    color = vertexColor;
}
Thanks in advance!
In your vertex shader, set gl_PointSize to the width you want. That measurement is in framebuffer pixels, so if the size of your framebuffer changes with the device's scale factor, you'll need to adjust your point size accordingly.
If you find a way to control the line width in the vertex shader, that would most likely be the best solution. Not only could lines have different widths, but a single line could even have a varying (interpolated) width between its points. I am not sure you will be able to achieve this on your platform, though.
So if you do find a way, you would add the point size to your buffer and use it with a new attribute in the vertex shader, as sketched below.
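A minimal sketch of that variant of the shader above (inPointSize is a hypothetical per-vertex attribute you'd append to the buffer alongside each position):

attribute vec4 inVertex;
attribute float inPointSize; // hypothetical: one size per point in the buffer
uniform mat4 MVP;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;

void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = inPointSize; // per-point size instead of a single uniform
    color = vertexColor;
}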
If not, you will need to use triangles to draw the line, which is generally the better practice anyway. To define vertices between points A and B, take the direction W = normalize(B - A) and the normal N = (W.y, -W.x). Then with k = lineWidth / 2.0, the four positions are t1 = A + N*k, t2 = A - N*k, t3 = B + N*k, t4 = B - N*k. Add these to your buffer and draw them as a triangle strip or as triangles, depending on what you are looking for.
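As a hedged sketch of that math in GLSL (a helper that could live in a vertex shader, where corner would be selected per vertex, e.g. derived from an attribute):

// Expand the segment A->B into a quad of width lineWidth.
// corner 0..3 correspond to t1..t4, laid out for a triangle strip.
vec2 quadCorner(vec2 A, vec2 B, float lineWidth, int corner) {
    vec2 W = normalize(B - A);      // direction along the line
    vec2 N = vec2(W.y, -W.x);       // perpendicular to the line
    float k = lineWidth / 2.0;
    vec2 base = corner < 2 ? A : B; // t1, t2 sit at A; t3, t4 at B
    float side = (corner == 0 || corner == 2) ? 1.0 : -1.0;
    return base + N * (k * side);
}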
