I have a cross-platform LibGDX app. This particular GLSL shader code is used to shift the hue of a particular texture.
It works great on Android and when debugging on Desktop, but on an iPad this is the result (excuse the photos of the screen; it's the easiest way to get data off this device).
Code:
const mat3 rgb2yiq = mat3(0.299, 0.595716, 0.211456, 0.587, -0.274453, -0.522591, 0.114, -0.321263, 0.311135);
const mat3 yiq2rgb = mat3(1.0, 1.0, 1.0, 0.9563, -0.2721, -1.1070, 0.6210, -0.6474, 1.7046);
vec4 outColor = texture2D(u_texture, v_texCoord) * v_color;
float alpha = outColor.a;
// Hue shift
if (u_hueAdjust > 0.0 && u_hueAdjust < 1.0 && alpha > 0.0)
{
    vec3 unmultipliedRGB = outColor.rgb / alpha;
    vec3 yColor = rgb2yiq * unmultipliedRGB;
    float originalHue = atan(yColor.b, yColor.g);
    float finalHue = originalHue + u_hueAdjust * 6.28318; // convert 0-1 to radians
    float chroma = sqrt(yColor.b * yColor.b + yColor.g * yColor.g);
    vec3 yFinalColor = vec3(yColor.r, chroma * cos(finalHue), chroma * sin(finalHue));
    outColor.rgb = (yiq2rgb * yFinalColor) * alpha;
}
Obviously there are some really weird artifacts that seem to affect certain areas, in particular black/white colors. But there is also a subtle overall color shift that isn't attributable to the desired hue-change effect.
Overall this shader is wonky on iOS (but works fine on Android/Desktop), and after playing with it for a while I'm completely out of ideas. Can anyone point me in the right direction?
In the documentation for atan, it says: "The result is undefined if x = 0."
Is it possible that yColor.g is zero for greyscale pixels?
The issue is discussed here: Robust atan(y,x) on GLSL for converting XY coordinate to angle
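A minimal sketch of one possible guard (keeping the question's un-premultiply/re-multiply steps unchanged): compute the chroma first and skip the rotation entirely when it is effectively zero, so atan is never evaluated at (0, 0) on greyscale pixels. The epsilon value is an arbitrary choice.
vec3 yColor = rgb2yiq * unmultipliedRGB;
float chroma = sqrt(yColor.b * yColor.b + yColor.g * yColor.g);
if (chroma > 0.0001) // epsilon chosen arbitrarily; greyscale pixels skip the rotation
{
    float originalHue = atan(yColor.b, yColor.g);
    float finalHue = originalHue + u_hueAdjust * 6.28318;
    yColor.gb = vec2(chroma * cos(finalHue), chroma * sin(finalHue));
}
outColor.rgb = (yiq2rgb * yColor) * alpha;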
I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline; however, it is not as smooth as I would expect it to be.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice most GPUs compute derivatives on blocks of 2x2 pixels, so for all 4 pixels of such a block the dFdx and dFdy values will be the same.
(detailed explanation here)
This causes a kind of aliasing, and you will randomly miss some pixels of the shape's contour (it happens whenever the transition from black to the shape color falls on the border of a 2x2 block).
To fix this and get the real per-pixel derivative, you can instead compute it yourself; it would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
the texture should not use mipmaps
texture min & mag filtering should be set to GL_NEAREST
texture wrap mode should be set to clamp (not repeat)
And here is a ShaderToy sample demonstrating this:
GPUImage's LookupFilter uses an RGB pixel map that's 512x512. When the filter executes, it compares a modified version of this image against the original and extrapolates an image filter.
The filter code is pretty straightforward. Here's an extract so you can see what's going on:
void main()
{
    highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    highp float blueColor = textureColor.b * 63.0;
    highp vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);
    highp vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);
    highp vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    highp vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    lowp vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
    lowp vec4 newColor2 = texture2D(inputImageTexture2, texPos2);
    lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
    gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), intensity);
}
See where the filter map is dependent on this being a 512x512 image?
I'm looking at ways to 4x the color depth here, using a 1024x1024 source image instead, but I'm not sure how this lookup filter image would have originally been generated.
Can something like this be generated in code? If so, I realize it's a very broad question, but how would I go about doing that? If it can't be generated in code, what are my options?
---
Update:
Turns out the original LUT generation code was included in the header file all along. The questionable part is this comment from the header:
Lookup texture is organised as 8x8 quads of 64x64 pixels representing all possible RGB colors:
How is 64x64 a map of all possible RGB colors? 64³ = 262,144, but that only accounts for 1/64th of the presumed 24-bit capacity of RGB, which is 256³ (16,777,216). What's going on here? Am I missing the way this LUT works? How are we accounting for all possible RGB colors with only 1/64th of the data?
for (int by = 0; by < 8; by++) {
    for (int bx = 0; bx < 8; bx++) {
        for (int g = 0; g < 64; g++) {
            for (int r = 0; r < 64; r++) {
                image.setPixel(r + bx * 64, g + by * 64, qRgb((int)(r * 255.0 / 63.0 + 0.5),
                                                              (int)(g * 255.0 / 63.0 + 0.5),
                                                              (int)((bx + by * 8.0) * 255.0 / 63.0 + 0.5)));
            }
        }
    }
}
I'm not quite sure what problem you are actually having. When you say you want "4x the color depth", what do you actually mean? Color depth normally means the number of bits per color channel (or per pixel), which is totally independent of the resolution of the image.
In terms of lookup table accuracy (which is resolution dependent), assuming you are using bilinear filtered texture inputs from the original texture, and filtered lookups into the transform table, then you are already linearly interpolating between samples in the lookup table. Interpolation of color channels will be at higher precision than the storage format; e.g. often fp16 equivalent, even for textures stored at 8-bit per pixel.
Unless you have a significant amount of non-linearity in your color transform (not that common), adding more samples to the lookup table is unlikely to make a significant difference to the output; the interpolation will already do a reasonably good job of filling in the gaps. For example, an input blue of 0.5 gives blueColor = 0.5 * 63 = 31.5 in the shader above, so it samples the patches for blue steps 31 and 32 and mixes them with fract(31.5) = 0.5; the in-between value is reconstructed by interpolation rather than stored.
Lev Zelensky provided the original work for this, so I'm not as familiar with how this works internally, but you can look at the math being performed in the shader to get an idea of what's going on.
In the 512x512 lookup, you have an 8x8 grid of cells. Within those cells, you have a 64x64 image patch. The red values go from 0 to 255 (0.0 to 1.0 in normalized values) going from left to right in that patch, and the green values go from 0 to 255 going down. That means that there are 64 steps in red, and 64 steps in green.
Each cell then appears to increase the blue value as you progress down the patches, left to right, top to bottom. With 64 patches, that gives you 64 blue values to match the 64 red and green ones. That gives you equal coverage across the RGB values in all channels.
So, if you wanted to double the number of color steps, you'd have to double the patch size to 128x128 and have 128 patches. The layout would have to be more of a rectangle, since 128 doesn't have an integer square root. Just going to 1024x1024 might let you double the color depth in the red and green channels, but blue would then be at half their depth. Balancing the three out would be a little trickier than just doubling the image size.
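To make that concrete, here is a hypothetical sketch (not GPUImage's actual layout) of how the lookup math from the shader above might change for a 2048x1024 LUT arranged as a 16x8 grid of 128x128 patches, giving 128 steps in all three channels:
highp float blueColor = textureColor.b * 127.0; // 128 blue steps instead of 64
highp vec2 quad1;
quad1.y = floor(floor(blueColor) / 16.0); // 16 patches per row now
quad1.x = floor(blueColor) - (quad1.y * 16.0);
highp vec2 quad2;
quad2.y = floor(ceil(blueColor) / 16.0);
quad2.x = ceil(blueColor) - (quad2.y * 16.0);
// each patch is 1/16 of the width and 1/8 of the height;
// the half-texel offsets keep samples inside the patch
highp vec2 texPos1;
texPos1.x = (quad1.x / 16.0) + 0.5/2048.0 + ((1.0/16.0 - 1.0/2048.0) * textureColor.r);
texPos1.y = (quad1.y / 8.0) + 0.5/1024.0 + ((1.0/8.0 - 1.0/1024.0) * textureColor.g);
highp vec2 texPos2;
texPos2.x = (quad2.x / 16.0) + 0.5/2048.0 + ((1.0/16.0 - 1.0/2048.0) * textureColor.r);
texPos2.y = (quad2.y / 8.0) + 0.5/1024.0 + ((1.0/8.0 - 1.0/1024.0) * textureColor.g);
lowp vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
lowp vec4 newColor2 = texture2D(inputImageTexture2, texPos2);
lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
The matching generation loop would then iterate a 16x8 grid of 128x128 patches and scale by 255.0 / 127.0 instead of 255.0 / 63.0.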
I'm attempting to see what shaders look like in Interface Builder using SpriteKit, and would like to use some of the shaders at ShaderToy. To do it, I created a "shader.fsh" file and a scene file, and added a color sprite to the scene, giving it a custom shader (shader.fsh).
While very basic shaders seem to work:
void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
Any attempt I make to convert shaders from ShaderToy causes Xcode to freeze up (spinning color ball) as soon as it tries to render them.
The shader I am working with for example, is this one:
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float size = 30.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * fragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(iGlobalTime + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(fragCoord.xy, center) / (0.5 * size);
        color = color * t / (abs(fragCoord.y - center.y)) * t / (abs(fragCoord.x - center.x));
    }
    else if (rand(fragCoord.xy / iResolution.xy) > 0.996)
    {
        float r = rand(fragCoord.xy);
        color = r * (0.25 * sin(iGlobalTime * (r * 5.0) + 720.0 * r) + 0.75);
    }
    fragColor = vec4(vec3(color), 1.0);
}
I've tried:
Replacing mainImage() with main(void) (so that it will be called)
Replacing the iXxxxx variables (iGlobalTime, iResolution) and fragCoord variables with their related variables (based on the suggestions here)
Replacing some of the variables (iGlobalTime)...
While changing mainImage to main() and swapping out the variables got it to work without error in the TinyShading realtime tester app, the outcome is always the same in Xcode (spinning ball, freeze). Any advice here would be helpful, as there is surprisingly little information currently available on the topic.
I managed to get this working in SpriteKit using SKShader. I've been able to render every shader from ShaderToy that I've attempted so far. The only exception is that you must remove any code using iMouse, since there is no mouse in iOS. I did the following...
1) Change the mainImage function declaration in the ShaderToy to...
void main(void) {
...
}
The ShaderToy mainImage function has an input named fragCoord. In iOS, this is globally available as gl_FragCoord, so your main function no longer needs any inputs.
2) Do a replace all to change the following from their ShaderToy names to their iOS names...
fragCoord becomes gl_FragCoord
fragColor becomes gl_FragColor
iGlobalTime becomes u_time
Note: There are more that I haven't encountered yet; I'll update this list as I do.
3) Providing iResolution is slightly more involved...
iResolution is the viewport size (in pixels), which translates to the sprite size in SpriteKit. This used to be available as u_sprite_size in iOS, but has been removed. Luckily, Apple provides a nice example of how to inject it into your shader using uniforms in their SKShader documentation.
However, as stated in the Shader Inputs section of ShaderToy, the type of iResolution is vec3 (x, y and z) as opposed to u_sprite_size, which is vec2 (x and y). I have yet to see a single ShaderToy that uses the z value of iResolution, so we can simply use a z value of zero. I modified the example in the Apple documentation to provide my shader an iResolution of type vec3 like so...
let uniformBasedShader = SKShader(fileNamed: "YourShader.fsh")
let sprite = SKSpriteNode()
sprite.shader = uniformBasedShader
let spriteSize = vector_float3(
    Float(sprite.frame.size.width),  // x
    Float(sprite.frame.size.height), // y
    Float(0.0)                       // z - never used
)
uniformBasedShader.uniforms = [
    SKUniform(name: "iResolution", vectorFloat3: spriteSize)
]
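On the GLSL side, the injected uniform is then declared and read like any other (a minimal sketch, assuming the SKUniform name used above):
uniform vec3 iResolution; // set from Swift via SKUniform(name: "iResolution", ...)
void main(void) {
    vec2 uv = gl_FragCoord.xy / iResolution.xy; // normalized coordinates, as on ShaderToy
    gl_FragColor = vec4(uv, 0.0, 1.0);
}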
That's it :)
Here is the converted shader that works when loaded as an SKShader from Swift:
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co);
float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
void main()
{
    float size = 50.0;  //Item 1:
    float prob = 0.95;  //Item 2:
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0); //Item 3:
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    }
    else if (rand(v_tex_coord) > 0.996)
    {
        float r = rand(gl_FragCoord.xy);
        color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}
Play with Item 1 to change the number of stars in the sky: the smaller the number, the more stars. I like the value to be around 50; not too dense.
Item 2 changes the randomness, or how close together the stars will appear: 1 = none, 0.1 = side by side. Around 0.75 gives a nice feel.
Item 3 is where most of the magic happens; this is the size and pulse of the stars.
float t = 0.9
Changing 0.9 scales the initial star size up or down; a nice value is 1.4, not too big and not too small.
float t = 0.9 + 0.2
Changing the second value in this equation, 0.2, widens the pulse effect of the stars proportionally to the original size; with 1.4 I like a value of 1.2.
To add the shader to your Swift project, add a sprite to the scene the size of the screen, then add the shader like this:
let backgroundImage = SKSpriteNode()
backgroundImage.texture = textureAtlas.textureNamed("any")
backgroundImage.size = screenSize
let shader = SKShader(fileNamed: "nightSky.fsh")
backgroundImage.shader = shader
This works great for green screen: my background is green, and using this code makes green = alpha.
lowp vec4 textureColor = texture2D(u_samplers2D[0], vTextu);
lowp float rbAverage = textureColor.r * 0.5 + textureColor.b * 0.5;
lowp float gDelta = textureColor.g - rbAverage;
textureColor.a = 1.0 - smoothstep(0.0, 0.25, gDelta);
textureColor.a = textureColor.a * textureColor.a * textureColor.a;
gl_FragColor = textureColor;
How do I change the code so that it uses a black background instead of green? I'm thinking I could get the values for dark reds, greens, and blues and use that as the alpha? Any pointers would be kind.
You can calculate how close a value is to black. The simplest way is to take the maximum of the r, g and b values, then pick a threshold above which you consider the color fully opaque, and map alpha to [0..1] below it. Roughly (note that GLSL's max takes two arguments, so it has to be nested):
float brightness = max(max(textureColor.r, textureColor.g), textureColor.b);
float threshold = 0.1;
float alpha = brightness > threshold ? 1.0 : brightness / threshold;
gl_FragColor = vec4(textureColor.rgb, alpha);
lowp vec4 textureColor = texture2D(u_samplers2D, vTextu);
lowp float gtemp = smoothstep(0.0, 0.5, textureColor.r);
gl_FragColor = vec4(1.0, 1.0, 1.0, gtemp);
This works for my situation. I was using a greyscale image with a black background and just needed the white elements. The problem I had was that I was applying the alpha to grey pixels, but now I'm using white and applying the alpha to that. I can adjust the smoothstep values to change the effect, and also add another alpha to fade the image in or out.
I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. They have a particle effect that looks like an explosion. I am trying to enhance their example code by adding a mat4 projection matrix to the vertex shader. The shader compiles and works, but I am having problems getting the effect positioned correctly once the projection is taken into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
    uniform float u_time;
    uniform vec3 u_centerPosition;
    uniform mat4 Projection;
    attribute float a_lifetime;
    attribute vec3 a_startPosition;
    attribute vec3 a_endPosition;
    varying float v_lifetime;
    void main()
    {
        if ( u_time <= a_lifetime )
        {
            gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
            gl_Position.xyz += u_centerPosition;
            gl_Position.w = 1.0;
        }
        else
            gl_Position = vec4( -1000, -1000, 0, 0 );
        v_lifetime = 1.0 - ( u_time / a_lifetime );
        v_lifetime = clamp ( v_lifetime, 0.0, 1.0 );
        gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
    }
);
I am able to add the projection to this line without any errors, but unfortunately it's not really needed here, as that code just places the object off screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting it to place the effect where I want it.
Am I doing something wrong? Or is there a reason the book didn't use a projection matrix, such as it not being something you should do with point sprites?
Any help or pointers to what I should look into will be appreciated
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by the modelview-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize; for example, try dividing it by gl_Position.w (after the multiplication by the modelview-projection matrix), otherwise the sprites will all have the same size (is that what you are trying to fix?).
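A minimal sketch of what that might look like, reusing the question's Projection uniform and assuming it already contains the full modelview-projection transform:
void main()
{
    if ( u_time <= a_lifetime )
    {
        vec3 pos = a_startPosition + (u_time * a_endPosition) + u_centerPosition;
        gl_Position = Projection * vec4(pos, 1.0);
    }
    else
    {
        // w = 1.0 avoids dividing the point size by zero below
        gl_Position = vec4(-1000.0, -1000.0, 0.0, 1.0);
    }
    v_lifetime = clamp(1.0 - (u_time / a_lifetime), 0.0, 1.0);
    // dividing by w makes distant particles smaller instead of constant-sized
    gl_PointSize = ((v_lifetime * v_lifetime) * 40.0) / gl_Position.w;
}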