High precision output from a GLES2 shader - iOS

I'm doing some GPGPU work on a GLES2 platform where the highest-precision render target format available is RGBA8 (iOS). I need to output a vec2 in the range ±2.0 with as much precision as I can get, so I'm trying to pack each component into two components of the 8-bit output.
An important requirement is that a decode+encode round trip preserves the encoded value. My current solution does not have this property and as a result my values are drifting all over the place.
This is what I have now (it's a bit verbose because I'm still thinking my way through it):
const float fixed_scale = 4.0;

lowp vec4 encode_fixed(highp vec2 v) {
    vec2 scaled = 0.5 + v / fixed_scale;       // map to range 0..1
    vec2 low = fract(scaled * 255.0);          // extract low-order part
    vec2 high = scaled - low / 255.0;          // subtract low from high-order part
    return vec4(low.x, high.x, low.y, high.y); // pack into rgba8
}

vec2 decode_fixed(highp vec4 v) {
    vec2 scaled = v.yw + v.xz / 255.0;         // recombine low and high parts
    return (scaled - 0.5) * fixed_scale;       // map back to original range
}
EDIT: simpler code, but still drifts

I think this will help you:
vec4 PackFloat8bitRGBA(float val) {
    vec4 pack = vec4(1.0, 255.0, 65025.0, 16581375.0) * val;
    pack = fract(pack);
    pack -= vec4(pack.yzw / 255.0, 0.0);
    return pack;
}

float UnpackFloat8bitRGBA(vec4 pack) {
    return dot(pack, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

vec3 PackFloat8bitRGB(float val) {
    vec3 pack = vec3(1.0, 255.0, 65025.0) * val;
    pack = fract(pack);
    pack -= vec3(pack.yz / 255.0, 0.0);
    return pack;
}

float UnpackFloat8bitRGB(vec3 pack) {
    return dot(pack, vec3(1.0, 1.0 / 255.0, 1.0 / 65025.0));
}

vec2 PackFloat8bitRG(float val) {
    vec2 pack = vec2(1.0, 255.0) * val;
    pack = fract(pack);
    pack -= vec2(pack.y / 255.0, 0.0);
    return pack;
}

float UnpackFloat8bitRG(vec2 pack) {
    return dot(pack, vec2(1.0, 1.0 / 255.0));
}
Note the elimination of hardware biasing: pack -= vec4(pack.yzw / 255.0, 0.0). This subtracts from each channel the part that is already carried by the next lower-order channel, so nothing is counted twice when the channels are recombined. Big thanks to Aras Pranckevičius for that.
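The question needs a vec2 in ±2.0 rather than a single float in [0,1), so the RG variant can be applied per component after remapping the range. A rough sketch built on the functions above (the wrapper names and the 0.9999 clamp are mine, not part of the original answer):

// Pack a vec2 in [-2, 2] into an RGBA8 target, two channels per component.
lowp vec4 encode_vec2(highp vec2 v) {
    // map [-2, 2] -> [0, 1), staying strictly below 1.0 so fract() does not wrap
    highp vec2 n = clamp(v / 4.0 + 0.5, 0.0, 0.9999);
    return vec4(PackFloat8bitRG(n.x), PackFloat8bitRG(n.y));
}

highp vec2 decode_vec2(highp vec4 p) {
    highp vec2 n = vec2(UnpackFloat8bitRG(p.xy), UnpackFloat8bitRG(p.zw));
    return (n - 0.5) * 4.0; // map back to [-2, 2]
}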

Ok, I'll answer my own question. This seems to work -- it doesn't drift, but the visual results look a bit inaccurate to me. Notice the rounding in the decoder, which is necessary.
const float fixed_scale = 4.0;

lowp vec4 encode_fixed(highp vec2 v) {
    vec2 scaled = 0.5 + v / fixed_scale;
    vec2 big = scaled * 65535.0 / 256.0;
    vec2 high = floor(big) / 255.0;
    vec2 low = fract(big);
    return vec4(low.x, high.x, low.y, high.y);
}

vec2 decode_fixed(highp vec4 v) {
    v = floor(v * 255.0 + 0.5);
    vec2 scaled = vec2(v.yw * 256.0 + v.xz) / 65535.0;
    return (scaled - 0.5) * fixed_scale;
}
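For context, this is roughly how the pair would sit in a ping-pong GPGPU pass, decoding the previous frame's RGBA8 output and re-encoding it for the next one (the texture and varying names here are assumptions, not from the original code):

uniform sampler2D u_state; // RGBA8 target written by the previous pass
varying vec2 v_uv;

void main() {
    highp vec2 value = decode_fixed(texture2D(u_state, v_uv)); // exact round trip
    // ... update `value` here ...
    gl_FragColor = encode_fixed(value); // re-encode for the next pass
}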

Related

How do I blend an image with a mask?

How can I apply a mask like this to get an effect such as bokeh? I need to blur the edges of the mask and then apply it to the image texture. How do I do that?
Vertex shader:
attribute vec4 a_Position;
void main()
{
    gl_Position = a_Position;
}
Fragment shader:
precision lowp float;
uniform sampler2D u_Sampler; // textureSampler
uniform sampler2D u_Mask;    // maskSampler
uniform vec3 iResolution;

vec4 blur(sampler2D source, vec2 size, vec2 uv) {
    vec4 C = vec4(0.0);
    float width = 1.0 / size.x;
    float height = 1.0 / size.y;
    float divisor = 0.0;
    for (float x = -25.0; x <= 25.0; x++)
    {
        C += texture2D(source, uv + vec2(x * width, 0.0));
        C += texture2D(source, uv + vec2(0.0, x * height));
        divisor++;
    }
    C *= 0.5;
    return vec4(C.r / divisor, C.g / divisor, C.b / divisor, 1.0);
}

void main()
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    vec4 videoColor = texture2D(u_Sampler, uv);
    vec4 maskColor = texture2D(u_Mask, uv);
    gl_FragColor = blur(u_Sampler, iResolution.xy, uv);
}
To apply the mask, compute the blurred color and mix it with the unblurred video color using the mask's red channel, i.e. replace the last line of main() with:
vec4 blurColor = blur(u_Sampler, iResolution.xy, uv);
gl_FragColor = mix(blurColor, videoColor, maskColor.r);
But FYI, it's not common to blur in one pass like you have here. It's more common to blur in one direction (horizontally), then blur the result of that vertically, and then mix the results: blurredTexture, videoTexture, mask. A sketch of the two-pass version follows.
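For illustration, a hedged sketch of a single separable blur pass; run it twice, first with u_Direction = (1, 0) over the video texture and then with u_Direction = (0, 1) over the result, before doing the mix above (the uniform names are assumptions):

precision mediump float;
uniform sampler2D u_Source;  // pass 1: video texture; pass 2: output of pass 1
uniform vec2 u_Resolution;   // render target size in pixels
uniform vec2 u_Direction;    // (1.0, 0.0) for horizontal, (0.0, 1.0) for vertical

void main() {
    vec2 uv = gl_FragCoord.xy / u_Resolution;
    vec2 texel = u_Direction / u_Resolution;
    vec4 sum = vec4(0.0);
    // 51-tap box blur along one axis, matching the radius used above
    for (float i = -25.0; i <= 25.0; i++) {
        sum += texture2D(u_Source, uv + i * texel);
    }
    gl_FragColor = vec4(sum.rgb / 51.0, 1.0);
}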

WEBGL Fluid simulation

I am trying to get a fluid simulation working in WebGL, using http://meatfighter.com/fluiddynamics/GPU_Gems_Chapter_38.pdf as a resource. I have implemented everything, but I feel like there are multiple things that aren't working correctly. I added boundaries but they seem to have no effect, which makes me suspicious about how well pressure and advection are working. I displayed the divergence and I get very little around where I am moving the object, as well as where the velocity hits the edge (boundary), but the pressure I get is completely empty. I calculate pressure using the diffusion shader as described in the linked resource.
I know the code I am posting is a little confusing due to the nature of what it is about. I can supply any pictures/links to the simulation if that would help.
--EDIT--
After some more investigation I believe the problem is related to my advection function, or that it is at least part of the problem. I am unsure how to fix it, though.
Instead of posting all of my code, the general process I follow is:
advect velocity
diffuse velocity
add velocity
calculate divergence
compute pressure
subtract gradient
For diffusing velocity and computing pressure I only do 10 iterations because that's all my computer can handle with my implementation (I will optimize once I get it working), but I feel like computing the pressure and subtracting the gradient are not having any effect.
Here are the shaders I am using:
//advection
uniform vec2 res;           //The width and height of our screen
uniform sampler2D velocity; //input velocity
uniform sampler2D quantity; //quantity to advect
void main() {
    vec2 pixel = gl_FragCoord.xy / res.xy;
    float i0, j0, i1, j1;
    float x, y, s0, s1, t0, t1, dxt0, dyt0;
    float dt = 1.0/60.0;
    float Nx = res.x - 1.0;
    float Ny = res.y - 1.0;
    float i = pixel.x;
    float j = pixel.y;
    dxt0 = dt;
    dyt0 = dt;
    x = gl_FragCoord.x - dxt0 * (texture2D(velocity, pixel).x);
    y = gl_FragCoord.y - dyt0 * (texture2D(velocity, pixel).y);
    i0 = x - 0.5;
    i1 = x + 0.5;
    j0 = y - 0.5;
    j1 = y + 0.5;
    s1 = x - i0;
    s0 = 1.0 - s1;
    t1 = y - j0;
    t0 = 1.0 - t1;
    float p1 = (t0 * texture2D(quantity, vec2(i0, j0) / res.xy).r);
    float p2 = (t1 * texture2D(quantity, vec2(i0, j1) / res.xy).r);
    float p3 = (t0 * texture2D(quantity, vec2(i1, j0) / res.xy).r);
    float p4 = (t1 * texture2D(quantity, vec2(i1, j1) / res.xy).r);
    float total1 = s0 * (p1 + p2);
    float total2 = s1 * (p3 + p4);
    gl_FragColor.r = total1 + total2;
    p1 = (t0 * texture2D(quantity, vec2(i0, j0) / res.xy).g);
    p2 = (t1 * texture2D(quantity, vec2(i0, j1) / res.xy).g);
    p3 = (t0 * texture2D(quantity, vec2(i1, j0) / res.xy).g);
    p4 = (t1 * texture2D(quantity, vec2(i1, j1) / res.xy).g);
    total1 = s0 * (p1 + p2);
    total2 = s1 * (p3 + p4);
    gl_FragColor.g = total1 + total2;
}
//diffusion shader starts here
uniform vec2 res;    //The width and height of our screen
uniform sampler2D x; //Our input texture
uniform sampler2D b;
uniform float alpha;
uniform float rBeta;
void main() {
    float xPixel = 1.0/res.x;
    float yPixel = 1.0/res.y;
    vec2 pixel = gl_FragCoord.xy / res.xy;
    gl_FragColor = texture2D(b, pixel);
    vec4 leftColor  = texture2D(x, vec2(pixel.x - xPixel, pixel.y));
    vec4 rightColor = texture2D(x, vec2(pixel.x + xPixel, pixel.y));
    vec4 upColor    = texture2D(x, vec2(pixel.x, pixel.y - yPixel));
    vec4 downColor  = texture2D(x, vec2(pixel.x, pixel.y + yPixel));
    gl_FragColor.r = (gl_FragColor.r * alpha + leftColor.r + rightColor.r + upColor.r + downColor.r) * rBeta;
    gl_FragColor.g = (gl_FragColor.g * alpha + leftColor.g + rightColor.g + upColor.g + downColor.g) * rBeta;
    gl_FragColor.b = (gl_FragColor.b * alpha + leftColor.b + rightColor.b + upColor.b + downColor.b) * rBeta;
}
//gradient
uniform vec2 res;           //The width and height of our screen
uniform sampler2D velocity; //Our input velocity
uniform sampler2D pressure; //Our input pressure
void main() {
    float xPixel = 1.0/res.x;
    float yPixel = 1.0/res.y;
    vec2 pixel = gl_FragCoord.xy / res.xy;
    vec4 leftColor  = texture2D(pressure, vec2(pixel.x - xPixel, pixel.y));
    vec4 rightColor = texture2D(pressure, vec2(pixel.x + xPixel, pixel.y));
    vec4 upColor    = texture2D(pressure, vec2(pixel.x, pixel.y - yPixel));
    vec4 downColor  = texture2D(pressure, vec2(pixel.x, pixel.y + yPixel));
    vec2 gradient = xPixel/2.0 * vec2((rightColor.x - leftColor.x), (upColor.y - downColor.y));
    // subtract the pressure gradient from the velocity
    gl_FragColor = texture2D(velocity, pixel);
    gl_FragColor.xy -= gradient;
}
//divergence
uniform vec2 res;           //The width and height of our screen
uniform sampler2D velocity; //Our input texture
void main() {
    float xPixel = 1.0/res.x;
    float yPixel = 1.0/res.y;
    vec2 pixel = gl_FragCoord.xy / res.xy;
    vec4 leftColor  = texture2D(velocity, vec2(pixel.x - xPixel, pixel.y));
    vec4 rightColor = texture2D(velocity, vec2(pixel.x + xPixel, pixel.y));
    vec4 upColor    = texture2D(velocity, vec2(pixel.x, pixel.y - yPixel));
    vec4 downColor  = texture2D(velocity, vec2(pixel.x, pixel.y + yPixel));
    float div = xPixel/2.0 * ((rightColor.x - leftColor.x) + (upColor.y - downColor.y));
    // divergence of the velocity field
    gl_FragColor = vec4(div);
}
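For reference (this is not the poster's code), the advection step described in GPU Gems Chapter 38 is usually written by tracing each fragment back along the velocity field in texture space and letting the sampler's bilinear filtering do the interpolation. A hedged sketch, with the timestep as a uniform and velocity assumed to be stored in grid cells per second:

//advection (standard semi-Lagrangian form, for comparison)
uniform vec2 res;           //The width and height of our screen
uniform sampler2D velocity; //input velocity
uniform sampler2D quantity; //quantity to advect
uniform float dt;           //timestep in seconds
void main() {
    vec2 uv = gl_FragCoord.xy / res.xy;
    // trace backwards along the velocity, converted to texture-coordinate units
    vec2 prevUV = uv - dt * texture2D(velocity, uv).xy / res.xy;
    // bilinear filtering on the quantity texture performs the interpolation
    gl_FragColor = texture2D(quantity, prevUV);
}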

GPUImage add hue/color adjustments per-RGB channel (adjust reds to be more pink or orange)

I'm stumped trying to adjust the hue of a specific channel (or, more specifically, a specific range of colors - in this case, reds). Looking at the hue filter, I thought I might get somewhere by commenting out the green and blue modifiers so the changes only affect the red channel:
precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform mediump float hueAdjust;
const highp vec4 kRGBToYPrime = vec4(0.299, 0.587, 0.114, 0.0);
const highp vec4 kRGBToI = vec4(0.595716, -0.274453, -0.321263, 0.0);
const highp vec4 kRGBToQ = vec4(0.211456, -0.522591, 0.31135, 0.0);
const highp vec4 kYIQToR = vec4(1.0, 0.9563, 0.6210, 0.0);
const highp vec4 kYIQToG = vec4(1.0, -0.2721, -0.6474, 0.0);
const highp vec4 kYIQToB = vec4(1.0, -1.1070, 1.7046, 0.0);

void main ()
{
    // Sample the input pixel
    highp vec4 color = texture2D(inputImageTexture, textureCoordinate);

    // Convert to YIQ
    highp float YPrime = dot(color, kRGBToYPrime);
    highp float I = dot(color, kRGBToI);
    highp float Q = dot(color, kRGBToQ);

    // Calculate the hue and chroma
    highp float hue = atan(Q, I);
    highp float chroma = sqrt(I * I + Q * Q);

    // Make the user's adjustments
    hue += (-hueAdjust); // why negative rotation?

    // Convert back to YIQ
    Q = chroma * sin(hue);
    I = chroma * cos(hue);

    // Convert back to RGB
    highp vec4 yIQ = vec4(YPrime, I, Q, 0.0);
    color.r = dot(yIQ, kYIQToR);
    // --> color.g = dot(yIQ, kYIQToG);
    // --> color.b = dot(yIQ, kYIQToB);

    // Save the result
    gl_FragColor = color;
}
But that just leaves the photo either grey/blue and washed-out or purplish green. Am I on the right track? If not, how can I modify this filter to affect individual channels while leaving the others intact?
Some examples:
Original, and the effect I'm trying to achieve:
(The second image is almost unnoticeably different; however, the red channel's hue has been made slightly pinker. I need to be able to adjust it between pink and orange.)
But here's what I get with B and G commented out:
(Left side: <0º, right side: >0º)
It looks to me like it's not affecting the hue of the reds in the way I'd like it to; possibly I'm approaching this incorrectly, or if I'm on the right track, this code isn't correctly adjusting the red channel hue?
(I also tried to achieve this effect using the GPUImageColorMatrixFilter, but I didn't get very far with it).
Edit: here's my current iteration of the shader using @VB_overflow's code plus a GPUImage wrapper, which affects the input image in a way similar to what I'm aiming for:
#import "GPUImageSkinToneFilter.h"
#implementation GPUImageSkinToneFilter
NSString *const kGPUImageSkinToneFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 // [-1;1] <=> [pink;orange]
 uniform highp float skinToneAdjust; // will make reds more pink

 // Other parameters
 uniform mediump float skinHue;
 uniform mediump float skinHueThreshold;
 uniform mediump float maxHueShift;
 uniform mediump float maxSaturationShift;

 // RGB <-> HSV conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
 highp vec3 rgb2hsv(highp vec3 c)
 {
     highp vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
     highp vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
     highp vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
     highp float d = q.x - min(q.w, q.y);
     highp float e = 1.0e-10;
     return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
 }

 // HSV <-> RGB conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
 highp vec3 hsv2rgb(highp vec3 c)
 {
     highp vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
     highp vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
     return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
 }

 // Main
 void main ()
 {
     // Sample the input pixel
     highp vec4 colorRGB = texture2D(inputImageTexture, textureCoordinate);

     // Convert color to HSV, extract hue
     highp vec3 colorHSV = rgb2hsv(colorRGB.rgb);
     highp float hue = colorHSV.x;

     // check how far from skin hue
     highp float dist = hue - skinHue;
     if (dist > 0.5)
         dist -= 1.0;
     if (dist < -0.5)
         dist += 1.0;
     dist = abs(dist) / 0.5; // normalized to [0,1]

     // Apply Gaussian like filter
     highp float weight = exp(-dist * dist * skinHueThreshold);
     weight = clamp(weight, 0.0, 1.0);

     // We want more orange, so increase saturation
     if (skinToneAdjust > 0.0)
         colorHSV.y += skinToneAdjust * weight * maxSaturationShift;
     // we want more pinks, so decrease hue
     else
         colorHSV.x += skinToneAdjust * weight * maxHueShift;

     // final color
     highp vec3 finalColorRGB = hsv2rgb(colorHSV.rgb);

     // display
     gl_FragColor = vec4(finalColorRGB, 1.0);
 }
);
#pragma mark -
#pragma mark Initialization and teardown

@synthesize skinToneAdjust;
@synthesize skinHue;
@synthesize skinHueThreshold;
@synthesize maxHueShift;
@synthesize maxSaturationShift;

- (id)init
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageSkinToneFragmentShaderString]))
    {
        return nil;
    }

    skinToneAdjustUniform = [filterProgram uniformIndex:@"skinToneAdjust"];
    skinHueUniform = [filterProgram uniformIndex:@"skinHue"];
    skinHueThresholdUniform = [filterProgram uniformIndex:@"skinHueThreshold"];
    maxHueShiftUniform = [filterProgram uniformIndex:@"maxHueShift"];
    maxSaturationShiftUniform = [filterProgram uniformIndex:@"maxSaturationShift"];

    self.skinHue = 0.05;
    self.skinHueThreshold = 50.0;
    self.maxHueShift = 0.14;
    self.maxSaturationShift = 0.25;

    return self;
}
#pragma mark -
#pragma mark Accessors

- (void)setSkinToneAdjust:(CGFloat)newValue
{
    skinToneAdjust = newValue;
    [self setFloat:newValue forUniform:skinToneAdjustUniform program:filterProgram];
}

- (void)setSkinHue:(CGFloat)newValue
{
    skinHue = newValue;
    [self setFloat:newValue forUniform:skinHueUniform program:filterProgram];
}

- (void)setSkinHueThreshold:(CGFloat)newValue
{
    skinHueThreshold = newValue;
    [self setFloat:newValue forUniform:skinHueThresholdUniform program:filterProgram];
}

- (void)setMaxHueShift:(CGFloat)newValue
{
    maxHueShift = newValue;
    [self setFloat:newValue forUniform:maxHueShiftUniform program:filterProgram];
}

- (void)setMaxSaturationShift:(CGFloat)newValue
{
    maxSaturationShift = newValue;
    [self setFloat:newValue forUniform:maxSaturationShiftUniform program:filterProgram];
}

@end
I made an example on ShaderToy. Use the latest Chrome to see it; on my side it does not work in Firefox or IE because it uses a video as input.
After some experiments it seems to me that for red hues to be more "pink" you need to decrease the hue, but to get more "orange" you need to increase saturation.
In the code I convert to HSV instead of YIQ because it is faster, makes tweaking saturation possible, and still allows tweaking the hue. Also, HSV components are in a [0,1] interval, so there is no need to handle radians.
So here is how this is done:
You choose a reference hue or color (in your case a red hue).
The shader computes the "distance" from the current pixel's hue to the reference hue.
Based on this distance, decrease the hue if you want pink, or increase the saturation if you want orange.
It is important to note that hue behaves differently than saturation and value: it should be treated as an angle (more info here).
The reference hue can be hardcoded, chosen by the user (by color-picking the image), or found by analysing the image content.
There are many possible ways to compute the distance; in the example I chose the angular distance between hues.
You also need to apply some kind of filtering after computing the distance to "select" only the closest colors, for example a Gaussian-like falloff.
Here is the code, without the ShaderToy stuff:
precision highp float;

// [-1;1] <=> [pink;orange]
const float EFFECT_AMOUNT = -0.25; // will make reds more pink

// Other parameters
const float SKIN_HUE = 0.05;
const float SKIN_HUE_TOLERANCE = 50.0;
const float MAX_HUE_SHIFT = 0.04;
const float MAX_SATURATION_SHIFT = 0.25;

// RGB <-> HSV conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

// HSV <-> RGB conversion, thanks to http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

// Main
void main ()
{
    // Sample the input pixel
    vec4 colorRGB = texture2D(inputImageTexture, textureCoordinate);

    // get effect amount to apply
    float skin_tone_shift = EFFECT_AMOUNT;

    // Convert color to HSV, extract hue
    vec3 colorHSV = rgb2hsv(colorRGB.rgb);
    float hue = colorHSV.x;

    // check how far from skin hue
    float dist = hue - SKIN_HUE;
    if (dist > 0.5)
        dist -= 1.0;
    if (dist < -0.5)
        dist += 1.0;
    dist = abs(dist) / 0.5; // normalized to [0,1]

    // Apply Gaussian like filter
    float weight = exp(-dist * dist * SKIN_HUE_TOLERANCE);
    weight = clamp(weight, 0.0, 1.0);

    // We want more orange, so increase saturation
    if (skin_tone_shift > 0.0)
        colorHSV.y += skin_tone_shift * weight * MAX_SATURATION_SHIFT;
    // we want more pinks, so decrease hue
    else
        colorHSV.x += skin_tone_shift * weight * MAX_HUE_SHIFT;

    // final color
    vec3 finalColorRGB = hsv2rgb(colorHSV.rgb);

    // display
    gl_FragColor = vec4(finalColorRGB, 1.0);
}
More Orange:
More Pink:
--EDIT--
It seems to me that you are not setting the uniform values in your Objective-C code. If you forget to set them, the shader will get zero for all of them.
The code should look like this:
- (id)init
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageSkinToneFragmentShaderString]))
    {
        return nil;
    }

    skinToneAdjustUniform = [filterProgram uniformIndex:@"skinToneAdjust"];
    [self setFloat:0.5 forUniform:skinToneAdjustUniform program:filterProgram]; // here 0.5, so it should increase saturation

    skinHueUniform = [filterProgram uniformIndex:@"skinHue"];
    self.skinHue = 0.05;
    [self setFloat:self.skinHue forUniform:skinHueUniform program:filterProgram];

    skinHueToleranceUniform = [filterProgram uniformIndex:@"skinHueTolerance"];
    self.skinHueTolerance = 50.0;
    [self setFloat:self.skinHueTolerance forUniform:skinHueToleranceUniform program:filterProgram];

    maxHueShiftUniform = [filterProgram uniformIndex:@"maxHueShift"];
    self.maxHueShift = 0.04;
    [self setFloat:self.maxHueShift forUniform:maxHueShiftUniform program:filterProgram];

    maxSaturationShiftUniform = [filterProgram uniformIndex:@"maxSaturationShift"];
    self.maxSaturationShift = 0.25;
    [self setFloat:self.maxSaturationShift forUniform:maxSaturationShiftUniform program:filterProgram];

    return self;
}

@end

Drawing a grid in a WebGL fragment shader

I'm working on porting a ZUI from SVG over to WebGL for a few reasons, and I'd like to render a grid using a fragment shader.
Here's the basic effect I'm going for https://dl.dropboxusercontent.com/u/412963/steel/restel_2.mp4
I'd like to have a triangle that has thin, 1px lines every 10 units, and a thicker 2px line every 100 units (the units here being arbitrary but consistent with world-space, not screen-space).
Here's what I have so far, without the secondary thicker lines like in the video (note that this is literally a copy from my open buffer, and obviously isn't right):
Vertex Shader:
attribute vec3 aVertexPosition;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying float vX;
varying float vY;
void main(void) {
    vX = aVertexPosition.x;
    vY = aVertexPosition.y;
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
}
Fragment Shader:
precision mediump float;
uniform vec2 resolution;
uniform float uZoomFactor;
varying float vX;
varying float vY;
void main(void) {
    float distance = gl_FragCoord.z / gl_FragCoord.w;
    float fuzz = 1.0 / distance;
    float minorLineFreq;
    if (distance > 10.0) {
        minorLineFreq = 1.0;
    } else if (distance > 5.0) {
        minorLineFreq = 1.0;
    } else {
        minorLineFreq = 0.10;
    }
    float xd = mod(vX, minorLineFreq) * 88.1;
    float yd = mod(vY, minorLineFreq) * 88.1;
    if (xd < fuzz) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else if (yd < fuzz) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
}
It produces approximately the right image at a certain distance (but notice the banding effect where there are 2px lines instead of 1px):
Grid with banding
Zoomed in grid with unwanted thicker lines
So, how can I get a consistent grid, with 1px thick lines at every distance, all inside of a WebGL fragment shader?
I believe I've found an acceptable solution.
Using the following vertices (drawn in a triangle strip):
[ 1.0 1.0 0.0
-1.0 1.0 0.0
1.0 -1.0 0.0
-1.0 -1.0 0.0]
Vertex shader:
attribute vec4 aVertexPosition;
void main(void) {
    gl_Position = aVertexPosition;
}
Fragment Shader:
precision mediump float;
uniform float vpw;   // Width, in pixels
uniform float vph;   // Height, in pixels
uniform vec2 offset; // e.g. [-0.023500000000000434 0.9794000000000017], currently the same as the x/y offset in the mvMatrix
uniform vec2 pitch;  // e.g. [50 50]
void main() {
    float lX = gl_FragCoord.x / vpw;
    float lY = gl_FragCoord.y / vph;
    float scaleFactor = 10000.0;
    float offX = (scaleFactor * offset[0]) + gl_FragCoord.x;
    float offY = (scaleFactor * offset[1]) + (1.0 - gl_FragCoord.y);
    if (int(mod(offX, pitch[0])) == 0 ||
        int(mod(offY, pitch[1])) == 0) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.5);
    } else {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
}
Gives results (depending on the pitch and offset) like:
gl_FragCoord is already scaled to the render target resolution. So you can simply:
precision mediump float;
vec4 color = vec4(1.);
vec2 pitch = vec2(50., 50.);
void main() {
    if (mod(gl_FragCoord.x, pitch[0]) < 1. ||
        mod(gl_FragCoord.y, pitch[1]) < 1.) {
        gl_FragColor = color;
    } else {
        gl_FragColor = vec4(0.);
    }
}
https://glslsandbox.com/e#74754.0
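To get the two line weights asked for in the question (thin minor lines, plus a thicker major line every ten minor lines), the same mod test can be applied at two pitches. This is only a sketch, with the pitch uniform assumed to be the minor spacing in pixels:

precision mediump float;
uniform vec2 pitch; // minor line spacing in pixels, e.g. [10 10]
void main() {
    vec2 p = gl_FragCoord.xy;
    // 2px line every tenth cell, 1px line every cell
    bool major = mod(p.x, pitch.x * 10.0) < 2.0 || mod(p.y, pitch.y * 10.0) < 2.0;
    bool minor = mod(p.x, pitch.x) < 1.0 || mod(p.y, pitch.y) < 1.0;
    if (major) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else if (minor) {
        gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
    } else {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
}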

Why does this Vertex Shader have different output on the device vs. the simulator

I have a vertex shader in my app that has wildly different results on the iPad Simulator as opposed to an iPad Mini with the same input.
Above is a screenshot of the problem from the simulator (also the result I am hoping for), and below is the result on the iPad Mini:
I believe that I have narrowed down the problem to one function in the vertex shader, reproduced below
float dist(vec2 a, vec2 b) {
    return sqrt(pow(a.x + b.x, 2.0) + pow(a.y + b.y, 2.0));
    //when the line below is used the results are identical
    // return 1.0;
}
As you can see from my comment, if I comment out the first return statement and return 1.0 instead, the two devices produce identical results.
Below is the function in context.
attribute vec3 position;
uniform vec2 wavePos;
uniform float waveAmplitude;
uniform float wavePeriod;
uniform mat4 modelViewProjectMatrix;

//this is the problem function
float dist(vec2 a, vec2 b) {
    return sqrt(pow(a.x + b.x, 2.0) + pow(a.y + b.y, 2.0));
    //when the line below is used the results are identical
    // return 1.0;
}

void main() {
    vec3 newPosition = position;
    newPosition.z = 0.1 * ((wavePeriod - dist(position.xy, wavePos)) / wavePeriod) * waveAmplitude;
    // uncommenting this does not solve the inconsistent output problem
    //newPosition.z = dist(position.xy, wavePos);
    gl_Position = modelViewProjectMatrix * vec4(newPosition, 1.0);
}
Changing the function to

highp float dist(vec2 a, vec2 b) {
    float xsum = a.x + b.x;
    float ysum = a.y + b.y;
    return sqrt(xsum * xsum + ysum * ysum);
}

fixes it. A likely explanation is that pow(x, y) has undefined results for x < 0 in GLSL ES, so the simulator (which runs shaders on the CPU) and the device GPU are free to disagree; replacing pow with an explicit multiply, and marking the function highp, avoids that.

Resources