I've been trying to import some nice shadertoy code that I want to use in p5.js.
https://www.shadertoy.com/view/wtGyRz here's the shadertoy code if anyone wants to look at it.
I believe that I've changed everything to the correct format for WEBGL, but I keep getting a compile error and I'm not entirely sure why. I'm relatively new to WEBGL and to using shaders, so help would be greatly appreciated.
So here is my super simple js code:
let theShader;

function preload() {
  theShader = loadShader('shader.vert', 'shader.frag');
}

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  noStroke();
}

function draw() {
  shader(theShader);
  theShader.setUniform("iResolution", [width, height]);
  theShader.setUniform("iFrame", frameCount);
  theShader.setUniform("iMouse", [mouseX, map(mouseY, 0, height, height, 0)]);
  theShader.setUniform("iTime", millis() / 1000);
  rect(0, 0, width, height);
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
and my vertex shader:
attribute vec3 aPosition;
attribute vec2 aTexCoord;

void main() {
  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
  gl_Position = positionVec4;
}
and finally, the point of error, my frag shader:
#ifdef GL_ES
precision mediump float;
#endif

#define width 800.f
#define height 450.f
#define numColors 10
#define numCircles 100

uniform vec2 iResolution;
uniform int iFrame;
uniform vec2 iMouse;
uniform float iTime;

vec3 GetColor(vec2 coord, vec4 circles[numCircles], vec3 colors[numColors], float timeDelay){
  int value = 0;
  float r;
  float xc;
  float yc;
  float d = sqrt((coord[0] - xc)*(coord[0]-xc)+(coord[1]-yc)*(coord[1]-yc));
  for(int i = 0; i < numCircles; i++)
  {
    xc = circles[i][0];
    yc = circles[i][1];
    r = circles[i][2];
    d = sqrt((coord[0] - xc)*(coord[0]-xc)+(coord[1]-yc)*(coord[1]-yc));
    if(d <= r && iTime >= timeDelay * float(i))
    {
      value += int(circles[i][3]);
    }
  }
  return colors[value % numColors];
}

void main(){
  float timeDelay = 0.05f;
  float timeFactor = 50.f;
  vec3 colors[numColors] = vec3[numColors](vec3(0.976, 0.254, 0.266),vec3(0.952, 0.447, 0.172),vec3(0.972, 0.588, 0.117),\
    vec3(0.976, 0.517, 0.290),vec3(0.976, 0.780, 0.309),vec3(0.564, 0.745, 0.427),vec3(0.262, 0.666, 0.545),\
    vec3(0.301,0.564,0.556),vec3(0.341,0.458,0.564),vec3(0.152,0.490,0.631));
  float timeInt = 0.f;
  int i = 0;
  float xGap = 1.f/11.f;
  float yGap = 1.f/11.f;
  vec4 circles[numCircles];
  for(float y = height * yGap; y < height; y += height * yGap)
  {
    for (float x = width * xGap; x < width; x += width * xGap)
    {
      circles[i] = vec4(x, y, iTime * timeFactor - timeFactor * timeDelay * float(i), i+1);
      i++;
    }
  }
  vec3 col = GetColor(gl_FragCoord, circles, colors, timeDelay);
  gl_FragColor = vec4(col, 1.0);
}
In all honesty, I have no idea why it wouldn't compile. To my untrained eye everything looks fine, so I think it might be some small bit of syntax that I'm not used to.
The issue is that the Shadertoy shader is written in GLSL ES 3.00 for WebGL2, while p5.js only supports GLSL ES 1.00 and WebGL1.
The incompatibilities include the \ at the end of a couple of lines, which is what generated this error:
glShaderSource: Shader source contains invalid characters
That took a while to find. Firefox gave a better error:
WebGL warning: shaderSource: source contains illegal character 0x5c.
Remove the \ characters and you'll start getting more relevant errors, like:
Darn! An error occurred compiling the fragment shader:
ERROR: 0:33: '%' : integer modulus operator supported in GLSL ES 3.00 and above only
ERROR: 0:37: '0.05f' : Floating-point suffix unsupported prior to GLSL ES 3.00
ERROR: 0:37: '0.05f' : syntax error
Remove the f suffixes and more version-related issues appear:
An error occurred compiling the fragment shader:
ERROR: 0:33: '%' : integer modulus operator supported in GLSL ES 3.00 and above only
ERROR: 0:39: '[]' : array constructor supported in GLSL ES 3.00 and above only
ERROR: 0:39: '[]' : first-class arrays (array initializer) supported in GLSL ES 3.00 and above only
ERROR: 0:39: '=' : Invalid operation for arrays
ERROR: 0:39: '=' : cannot convert from 'const array[10] of 3-component vector of float' to 'mediump array[10] of 3-component vector of float'
ERROR: 0:56: 'GetColor' : no matching overloaded function found
ERROR: 0:56: '=' : dimension mismatch
ERROR: 0:56: '=' : cannot convert from 'const mediump float' to 'mediump 3-component vector of float'
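Putting those fixes together, a GLSL ES 1.00 compatible version of the offending constructs might look like the sketch below (my own rewrite, not the original author's code): the integer modulus is emulated with a helper, and the color table is filled element by element instead of with an array constructor.

```glsl
#ifdef GL_ES
precision mediump float;
#endif

#define numColors 10

// GLSL ES 1.00 has no % operator for ints; emulate it.
// Assumes a >= 0 and b > 0, which holds here since value only grows.
int imod(int a, int b) {
    return a - (a / b) * b;
}

// No array constructors or initializers in ES 1.00:
// declare the table and fill it one element at a time.
vec3 colors[numColors];

void initColors() {
    colors[0] = vec3(0.976, 0.254, 0.266);
    colors[1] = vec3(0.952, 0.447, 0.172);
    colors[2] = vec3(0.972, 0.588, 0.117);
    colors[3] = vec3(0.976, 0.517, 0.290);
    colors[4] = vec3(0.976, 0.780, 0.309);
    colors[5] = vec3(0.564, 0.745, 0.427);
    colors[6] = vec3(0.262, 0.666, 0.545);
    colors[7] = vec3(0.301, 0.564, 0.556);
    colors[8] = vec3(0.341, 0.458, 0.564);
    colors[9] = vec3(0.152, 0.490, 0.631);
}
```

Then GetColor can end with return colors[imod(value, numColors)]; and should be called as GetColor(gl_FragCoord.xy, ...) so the vec2 parameter matches. One caveat: ES 1.00 fragment shaders only guarantee array indexing with constant-index-expressions (Appendix A of the spec), so indexing colors with a computed value may still fail on some WebGL1 implementations; a chain of if statements is the bulletproof fallback.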
I would like to convert the following fragment shader, written in GLSL, to a Metal shader.
const float PI = 3.14159265359;

mat2 rotate2d(float _angle) {
    return mat2(cos(_angle), -sin(_angle),
                sin(_angle), cos(_angle));
}

void main(void) {
    vec2 st = (gl_FragCoord.xy * 2.0 - resolution) / min(resolution.x, resolution.y);
    float p = 0.0;
    st = rotate2d(sin(time) * PI) * st;
    vec2 c = max(abs(st) - 0.2, 0.0);
    p = length(c);
    p = ceil(p);
    vec3 color = vec3(1.0 - p);
    gl_FragColor = vec4(color, 1.0);
}
I understand that types such as vec2 simply become float2 and so on. How should I write the rest?
It's hard to convert this shader without any information about your current render pipeline, but here is a first attempt:
#include <metal_stdlib>
using namespace metal;

// RasterizerData is assumed to be your vertex stage's output struct,
// carrying a textureCoordinate member.

float2x2 rotate2d(float angle)
{
    return float2x2(float2(cos(angle), -sin(angle)),
                    float2(sin(angle),  cos(angle)));
}

fragment float4 fragmentShader(RasterizerData in [[stage_in]],
                               constant simd::float2 &resolution [[ buffer(0) ]],
                               constant float &time [[ buffer(1) ]])
{
    float2 st = (in.textureCoordinate.xy * 2.0 - resolution) / min(resolution.x, resolution.y);
    float p = 0.0;
    st = rotate2d(sin(time) * M_PI_F) * st;
    float2 c = max(abs(st) - 0.2, 0.0);
    p = length(c);
    p = ceil(p);
    float3 color = float3(1.0 - p);
    return float4(color, 1);
}
I have a fragment shader ("fsh") file that I am trying to compile. It was originally taken from Shadertoy, written in GLSL, and I am trying to port it to Metal. I am getting the following error:
program_source:129:12: error: program scope variable must reside in constant address space
const vec3 ro, rd;
As far as I can understand, I cannot define ro and rd in global scope like this. How can I fix it?
Thank you very much.
The code is below:
const vec3 ro, rd;
....

void main(void)
{
    float t = u_time;
    vec3 col = vec3(0.);
    vec2 uv = gl_FragCoord.xy / iResolution.xy; // 0 <> 1
    uv -= .5;
    uv.x *= iResolution.x/iResolution.y;

    vec2 mouse = gl_FragCoord.xy/iResolution.xy;

    vec3 pos = vec3(.3, .15, 0.);
    float bt = t * 5.;
    float h1 = N(floor(bt));
    float h2 = N(floor(bt+1.));
    float bumps = mix(h1, h2, fract(bt))*.1;
    bumps = bumps*bumps*bumps*CAM_SHAKE;

    pos.y += bumps;
    float lookatY = pos.y+bumps;
    vec3 lookat = vec3(0.3, lookatY, 1.);
    vec3 lookat2 = vec3(0., lookatY, .7);
    lookat = mix(lookat, lookat2, sin(t*.1)*.5+.5);

    uv.y += bumps*4.;
    CameraSetup(uv, pos, lookat, 2., mouse.x);

    t *= .03;
    t += mouse.x;

    // fix for GLES devices by MacroMachines
    #ifdef GL_ES
    const float stp = 1./8.;
    #else
    float stp = 1./8.;
    #endif

    for(float i=0.; i<1.; i+=stp) {
        col += StreetLights(i, t);
    }

    for(float i=0.; i<1.; i+=stp) {
        float n = N(i+floor(t));
        col += HeadLights(i+n*stp*.7, t);
    }

    #ifndef GL_ES
    #ifdef HIGH_QUALITY
    stp = 1./32.;
    #else
    stp = 1./16.;
    #endif
    #endif

    for(float i=0.; i<1.; i+=stp) {
        col += EnvironmentLights(i, t);
    }

    col += TailLights(0., t);
    col += TailLights(.5, t);

    col += sat(rd.y)*vec3(.6, .5, .9);

    gl_FragColor = vec4(col, 0.);
}
The equivalent declaration in Metal Shading Language (MSL) would be
constant float3 ro, rd;
However, you should also initialize these variables with values, since your shader functions will not be allowed to mutate them. Something like
constant float3 ro(0, 0, 0), rd(1, 1, 1);
A few more translation hints:
Metal doesn't have syntax for declaring uniforms. Instead, you'll need to pass such values in via a buffer in the constant or device address space. This includes things like your screen resolution and time variables.
Vector type names generally start with the name of their element type, followed by the number of elements (half2, float3). There are no explicit precision qualifiers in MSL.
Rather than writing to special values like gl_FragColor, basic fragment functions in Metal return a color value (which by convention is written to the first color attachment of the framebuffer, provided it passes the depth and stencil test).
I am using the following CIColorKernel code to generate a custom filter.
kernel vec4 customFilter(__sample image, __sample noise, float time, float inputNoise) {
    vec2 uv = destCoord() / 1280.0;
    float d = length(uv - vec2(0.5, 0.5));
    float blur = inputNoise;
    float myTime = time * 1.0;
    vec2 myuv = vec2(uv.x + sin((uv.y + sin(myTime)) * abs(sin(myTime) + sin(2.0 * myTime) + sin(0.3 * myTime) + sin(1.4 * myTime) + cos(0.7 * myTime) + cos(1.3 * myTime)) * 4.0) * 0.02, uv.y);
    vec2 finalUV = myuv * 1280.0;

    vec3 col;
    col.r = sample(image, samplerTransform(image, finalUV)).r;
    col.g = sample(image, samplerTransform(image, finalUV)).g;
    col.b = sample(image, samplerTransform(image, finalUV)).b;

    float scanline = sin(uv.y * 1280.0 * 400.0) * 0.08;
    col -= scanline;

    // vignette
    col *= 1.0 - d * 0.5;
    return vec4(col, 1.0);
}
This piece of code works fine on iOS 10 / iOS 11 devices. However, it generates a weird crash on an iOS 12 device:
[CIKernelPool] 16:40: ERROR: parameter has unexpected type 'vec4' (should be a sampler type)
col.r = sample(image, samplerTransform(image, finalUV)).r;
[CIKernelPool] 17:40: ERROR: parameter has unexpected type 'vec4' (should be a sampler type)
col.g = sample(image, samplerTransform(image, finalUV)).g;
[CIKernelPool] 18:40: ERROR: parameter has unexpected type 'vec4' (should be a sampler type)
col.b = sample(image, samplerTransform(image, finalUV)).b;
This seems to happen in every CIColorKernel that uses __sample. Using sampler in place of __sample and converting the CIColorKernel to a CIKernel avoids the crash, but it doesn't generate the expected result.
As the error states, you are supplying the wrong object to

sample(image, samplerTransform(image, finalUV)).r

Here image is of type __sample, but sample() actually requires a sampler type. Since CIColorKernel only accepts __sample parameters, what you need is a CIKernel instead of a CIColorKernel. Then you can take sampler parameters in your kernel:
kernel vec4 customFilter(sampler image, sampler noise, float time, float inputNoise) {
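A hedged sketch of what the full converted kernel could look like (the body is simply the original question's code with sample() now receiving a real sampler; the three per-channel fetches can also collapse into one, since sample() returns a vec4):

```glsl
kernel vec4 customFilter(sampler image, sampler noise, float time, float inputNoise) {
    vec2 uv = destCoord() / 1280.0;
    float d = length(uv - vec2(0.5, 0.5));
    float myTime = time * 1.0;
    vec2 myuv = vec2(uv.x + sin((uv.y + sin(myTime)) * abs(sin(myTime) + sin(2.0 * myTime) + sin(0.3 * myTime) + sin(1.4 * myTime) + cos(0.7 * myTime) + cos(1.3 * myTime)) * 4.0) * 0.02, uv.y);
    vec2 finalUV = myuv * 1280.0;

    // One fetch instead of three per-channel fetches.
    vec3 col = sample(image, samplerTransform(image, finalUV)).rgb;

    float scanline = sin(uv.y * 1280.0 * 400.0) * 0.08;
    col -= scanline;

    // vignette
    col *= 1.0 - d * 0.5;
    return vec4(col, 1.0);
}
```

Remember that on the host side the kernel must now be instantiated as a CIKernel rather than a CIColorKernel, and the image arguments passed as samplers (CISampler) rather than plain images.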
I'm trying to build a Vibrance filter for GPUImage based on this Javascript:
/**
 * @filter       Vibrance
 * @description  Modifies the saturation of desaturated colors, leaving saturated colors unmodified.
 * @param amount -1 to 1 (-1 is minimum vibrance, 0 is no change, and 1 is maximum vibrance)
 */
function vibrance(amount) {
    gl.vibrance = gl.vibrance || new Shader(null, '\
        uniform sampler2D texture;\
        uniform float amount;\
        varying vec2 texCoord;\
        void main() {\
            vec4 color = texture2D(texture, texCoord);\
            float average = (color.r + color.g + color.b) / 3.0;\
            float mx = max(color.r, max(color.g, color.b));\
            float amt = (mx - average) * (-amount * 3.0);\
            color.rgb = mix(color.rgb, vec3(mx), amt);\
            gl_FragColor = color;\
        }\
    ');

    simpleShader.call(this, gl.vibrance, {
        amount: clamp(-1, amount, 1)
    });

    return this;
}
One would think I should be able to more or less copy-paste the shader block:
GPUImageVibranceFilter.h
@interface GPUImageVibranceFilter : GPUImageFilter
{
    GLint vibranceUniform;
}

// Modifies the saturation of desaturated colors, leaving saturated colors unmodified.
// Value -1 to 1 (-1 is minimum vibrance, 0 is no change, and 1 is maximum vibrance)
@property (readwrite, nonatomic) CGFloat vibrance;

@end
GPUImageVibranceFilter.m
#import "GPUImageVibranceFilter.h"

#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageVibranceFragmentShaderString = SHADER_STRING
(
    uniform sampler2D inputImageTexture;
    uniform float vibrance;
    varying highp vec2 textureCoordinate;
    void main() {
        vec4 color = texture2D(inputImageTexture, textureCoordinate);
        float average = (color.r + color.g + color.b) / 3.0;
        float mx = max(color.r, max(color.g, color.b));
        float amt = (mx - average) * (-vibrance * 3.0);
        color.rgb = mix(color.rgb, vec3(mx), amt);
        gl_FragColor = color;
    }
);
#else
NSString *const kGPUImageVibranceFragmentShaderString = SHADER_STRING
(
    uniform sampler2D inputImageTexture;
    uniform float vibrance;
    varying vec2 textureCoordinate;
    void main() {
        vec4 color = texture2D(inputImageTexture, textureCoordinate);
        float average = (color.r + color.g + color.b) / 3.0;
        float mx = max(color.r, max(color.g, color.b));
        float amt = (mx - average) * (-vibrance * 3.0);
        color.rgb = mix(color.rgb, vec3(mx), amt);
        gl_FragColor = color;
    }
);
#endif

@implementation GPUImageVibranceFilter

@synthesize vibrance = _vibrance;

#pragma mark -
#pragma mark Initialization and teardown

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageVibranceFragmentShaderString]))
    {
        return nil;
    }

    vibranceUniform = [filterProgram uniformIndex:@"vibrance"];
    self.vibrance = 0.0;

    return self;
}

#pragma mark -
#pragma mark Accessors

- (void)setVibrance:(CGFloat)vibrance;
{
    _vibrance = vibrance;

    [self setFloat:_vibrance forUniform:vibranceUniform program:filterProgram];
}

@end
But that doesn't compile, crashing with:
Failed to compile fragment shader
Program link log: ERROR: One or more attached shaders not successfully compiled
Fragment shader compile log: (null)
Vertex shader compile log: (null)
The error is certainly clear, but being inexperienced with OpenGL ES shaders, I have no idea what the problem actually is.
One would think I should be able to more or less copy-paste the shader block.
This might be the case in desktop GLSL, but in OpenGL ES you cannot declare a float variable (this includes types derived from float such as vec2 or mat4 as well) without first setting the precision - there is no pre-defined default precision for float in the fragment shader.
Implementations guarantee support for mediump and lowp floating-point precision in the fragment shader. You will have to check before setting highp as the default, however.
This whole problem screams "missing precision" to me, but why the compiler is not telling you this in the compile log I really do not know.
Brush up on section 4.5.3 "Default Precision Qualifiers" (pp. 35-36) of the GLSL ES specification.
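In practice that means opening the fragment shader with a default precision statement, for example (a sketch against the vibrance shader above; mediump is the always-available choice, while highp must be checked for before use):

```glsl
precision mediump float; // guaranteed to be supported in ES fragment shaders

uniform sampler2D inputImageTexture;
uniform float vibrance;
varying vec2 textureCoordinate;
```

Note that inside GPUImage's SHADER_STRING macro you cannot wrap this in #ifdef GL_ES guards (the macro stringifies its argument, so preprocessor directives don't survive), but a bare precision statement works fine there, as the dilation shader in the next question demonstrates.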
Side-note regarding your use of CGFloat:
Be careful using CGFloat.
Depending on your compile target (whether the host machine is 32-bit or 64-bit), that type will either be single-precision or double-precision. If you are passing something declared CGFloat to GL, stop that =P
Use GLfloat instead, because that will always be single-precision as GL requires.
See a related answer I wrote for more details.
I'm using the GPUImage library by Brad Larson, and I think I've found an interesting problem.
The following shader program executes just fine:
NSString *const kDilationFragmentShaderString = SHADER_STRING
(
    precision highp float;

    uniform int height;
    uniform int width;
    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;
    uniform int radius;

    void main (void)
    {
        vec2 uv = textureCoordinate;
        vec2 theSize = vec2(width, height);
        vec3 theMax = texture2D(inputImageTexture, uv).rgb;
        gl_FragColor = vec4(theMax, 1.0);
    }
);
This version, however, crashes on large images (i.e., a 4x3 image from the camera resized to 2560 on the longest side). To my mind, the only thing significantly different is the set of texture2D calls:
NSString *const kDilationFragmentShaderString = SHADER_STRING
(
    precision highp float;

    uniform int height;
    uniform int width;
    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;
    uniform int radius;

    void main (void)
    {
        vec2 uv = textureCoordinate;
        vec2 theSize = vec2(width, height);
        vec3 theMax = texture2D(inputImageTexture, uv).rgb;
        int i;
        int j;
        int radsqr = radius*radius;
        for (j = -radius; j <= radius; ++j) {
            for (i = -radius; i <= radius; ++i) {
                if (i * i + j * j > radsqr) continue;
                theMax = max(theMax, texture2D(inputImageTexture, uv + vec2(i,j)/theSize).rgb);
            }
        }
        gl_FragColor = vec4(theMax, 1.0);
    }
);
I'm running this filter, and then a second filter with a minimum (ie, morphological dilation and then an erosion, or a morphological close operator).
I do realize that a more optimal way to implement this is to try to get all the texture2D calls into their own locations via the vertex shader; however, if the radius is 10, that calls for 314 vertices, which blows past the allowed number of locations. If I run these in the simulator and all other things are equal, then the first finishes just fine, but the second code blows memory up and the memory climbs dramatically for the call of the erosion filter. Running on an iPhone 4s, the first code fragment finishes just fine (and of course, very quickly), but the second code fragment crashes after the dilation, and does not run the erosion call.
Initially, it looks like texture2D is leaking; however, these functions are being called in a thread. When the thread exits, all memory is cleared in the simulator. As a result, the functions, if they work right the first time, can be run multiple times with no problem.
So my question is this: What is the texture2D call doing there that could cause this behavior? Is there a way to flush whatever buffer is created once the filter has completed, independently of ending the thread between calls?
EDIT: Something I've learned in the week since posting this question: The problem is in the for loops themselves. Remove the for loops, and the memory problem disappears. That is,
NSString *const kDilationFragmentShaderString = SHADER_STRING
(
    precision highp float;

    uniform int height;
    uniform int width;
    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;
    uniform int radius;

    void main (void)
    {
        vec2 uv = textureCoordinate;
        vec2 theSize = vec2(width, height);
        vec3 theMax = texture2D(inputImageTexture, uv).rgb;
        int i;
        int j;
        int radsqr = radius*radius;
        for (j = -radius; j <= radius; ++j) {
            for (i = -radius; i <= radius; ++i) {
            }
        }
        gl_FragColor = vec4(theMax, 1.0);
    }
);
will allocate as much memory as if there were something happening inside of the loop. I'm determining this behavior through the inspector on the simulator. When I run a shader with no for loops on a 1280x1280 image, I get a total of 202 mb allocated, and when I run it with for loops, I get 230 mb allocated, regardless of what happens inside the for loop. The same behavior happens with while loops as well.
If you want to flush things, you can call glFlush() and it will flush the OpenGL command queue for the current context. The other thing you can do is tile your image and work on smaller pieces at a time. This is how applications like Photoshop, Final Cut Pro, and others work, and it can be very memory efficient.