I have a fragment shader ("fsh" file) that I am trying to compile. It was originally taken from Shadertoy and is written in GLSL; I am trying to port it to Metal, and I am getting the following error:
program_source:129:12: error: program scope variable must reside in constant address space
const vec3 ro, rd;
As far as I can understand, I cannot define ro and rd at global scope like this. How can I fix it?
Thank you very much.
The code is below:
const vec3 ro, rd;
....
void main(void)
{
    float t = u_time;
    vec3 col = vec3(0.);
    vec2 uv = gl_FragCoord.xy / iResolution.xy; // 0 <> 1
    uv -= .5;
    uv.x *= iResolution.x/iResolution.y;

    vec2 mouse = gl_FragCoord.xy/iResolution.xy;

    vec3 pos = vec3(.3, .15, 0.);

    float bt = t * 5.;
    float h1 = N(floor(bt));
    float h2 = N(floor(bt+1.));
    float bumps = mix(h1, h2, fract(bt))*.1;
    bumps = bumps*bumps*bumps*CAM_SHAKE;

    pos.y += bumps;
    float lookatY = pos.y+bumps;
    vec3 lookat = vec3(0.3, lookatY, 1.);
    vec3 lookat2 = vec3(0., lookatY, .7);
    lookat = mix(lookat, lookat2, sin(t*.1)*.5+.5);

    uv.y += bumps*4.;
    CameraSetup(uv, pos, lookat, 2., mouse.x);

    t *= .03;
    t += mouse.x;

    // fix for GLES devices by MacroMachines
    #ifdef GL_ES
    const float stp = 1./8.;
    #else
    float stp = 1./8.;
    #endif

    for(float i=0.; i<1.; i+=stp) {
        col += StreetLights(i, t);
    }

    for(float i=0.; i<1.; i+=stp) {
        float n = N(i+floor(t));
        col += HeadLights(i+n*stp*.7, t);
    }

    #ifndef GL_ES
    #ifdef HIGH_QUALITY
    stp = 1./32.;
    #else
    stp = 1./16.;
    #endif
    #endif

    for(float i=0.; i<1.; i+=stp) {
        col += EnvironmentLights(i, t);
    }

    col += TailLights(0., t);
    col += TailLights(.5, t);

    col += sat(rd.y)*vec3(.6, .5, .9);

    gl_FragColor = vec4(col, 0.);
}
The equivalent declaration in Metal Shading Language (MSL) would be
constant float3 ro, rd;
However, you should also initialize these variables with values, since your shader functions will not be allowed to mutate them. Something like
constant float3 ro(0, 0, 0), rd(1, 1, 1);
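If the original shader actually writes to ro and rd (presumably inside CameraSetup, since rd is read afterwards), a constant declaration won't be enough, because data in the constant address space cannot be modified by the shader. One alternative, sketched here with a hypothetical CameraSetup body and simplified signatures, is to make them locals of the fragment function and pass them to helper functions by reference in the thread address space:
#include <metal_stdlib>
using namespace metal;

// Hypothetical helper: builds a camera ray from pos toward lookat and
// writes the results through thread-space references.
void CameraSetup(float2 uv, float3 pos, float3 lookat, float zoom,
                 thread float3 &ro, thread float3 &rd)
{
    ro = pos;
    float3 forward = normalize(lookat - pos);
    float3 right   = normalize(cross(float3(0.0, 1.0, 0.0), forward));
    float3 up      = cross(forward, right);
    rd = normalize(forward * zoom + uv.x * right + uv.y * up);
}

fragment float4 roadShader(/* stage-in and uniform buffers omitted */)
{
    float3 ro = float3(0.0);   // were globals in the GLSL original
    float3 rd = float3(0.0);
    float2 uv = float2(0.0);   // placeholder; derive from the pixel position and resolution
    CameraSetup(uv, float3(.3, .15, 0.), float3(.3, .15, 1.), 2.0, ro, rd);
    return float4(saturate(rd.y) * float3(.6, .5, .9), 1.0);
}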
A few more translation hints:
Metal doesn't have syntax for declaring uniforms. Instead, you'll need to pass such values in via a buffer in the constant or device address space. This includes things like your screen resolution and time variables (a sketch follows after these hints).
Vector type names generally start with the name of their element type, followed by the number of elements (half2, float3). There are no explicit precision qualifiers in MSL.
Rather than writing to special values like gl_FragColor, basic fragment functions in Metal return a color value (which by convention is written to the first color attachment of the framebuffer, provided it passes the depth and stencil test).
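Putting the first and third hints together, a minimal sketch of the Metal side might look like the following; the Uniforms struct, its fields, and the buffer index are assumptions about your setup rather than something taken from the original shader:
#include <metal_stdlib>
using namespace metal;

// Assumed uniform block; fill it from the CPU each frame,
// e.g. with setFragmentBytes(_:length:index:) on the render command encoder.
struct Uniforms {
    float2 iResolution;   // drawable size in pixels
    float  iTime;         // seconds since start
};

fragment float4 shadertoyFragment(float4 position [[position]],   // plays the role of gl_FragCoord
                                  constant Uniforms &u [[buffer(0)]])
{
    float2 uv = position.xy / u.iResolution;                      // 0..1, as in the GLSL version
    float3 col = float3(uv, 0.5 + 0.5 * sin(u.iTime));
    return float4(col, 1.0);   // returned instead of writing to gl_FragColor
}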
Related
I would like to convert the following fragment shader, written in GLSL, to a Metal shader.
const float PI = 3.14159265359;

mat2 rotate2d (float _angle) {
    return mat2 (cos (_angle), -sin (_angle),
                 sin (_angle), cos (_angle));
}

void main (void) {
    vec2 st = (gl_FragCoord.xy * 2.0 - resolution) / min(resolution.x, resolution.y);
    float p = 0.0;
    st = rotate2d (sin (time) * PI) * st;
    vec2 c = max (abs (st) - 0.2, 0.0);
    p = length (c);
    p = ceil (p);
    vec3 color = vec3 (1.0 - p);
    gl_FragColor = vec4 (color, 1.0);
}
My understanding is that types such as vec2 can simply be replaced with float2 and so on.
How should I write it?
It's hard to convert this shader without having any information about your current render pipeline:
#include <metal_stdlib>
using namespace metal;

// RasterizerData is assumed to be the output struct of your vertex function and to
// carry a float2 textureCoordinate member; adjust it to match your actual pipeline.

float2x2 rotate2d(float angle)
{
    return float2x2(float2(cos(angle), -sin(angle)),
                    float2(sin(angle),  cos(angle)));
}

fragment float4 fragmentShader(RasterizerData in [[stage_in]],
                               constant float2 &resolution [[ buffer(0) ]],
                               constant float &time [[ buffer(1) ]])
{
    float2 st = (in.textureCoordinate.xy * 2.0 - resolution) / min(resolution.x, resolution.y);
    float p = 0.0;
    st = rotate2d(sin(time) * M_PI_F) * st;
    float2 c = max(abs(st) - 0.2, float2(0.0));
    p = length(c);
    p = ceil(p);
    float3 color = float3(1.0 - p);
    return float4(color, 1);
}
I've been trying to import some nice shadertoy code that I want to use in p5.js.
https://www.shadertoy.com/view/wtGyRz here's the shadertoy code if anyone wants to look at it.
I believe I've changed everything to the correct format for WebGL, but it keeps producing a compile error and I'm not entirely sure why. I'm relatively new to WebGL and shaders, so help would be greatly appreciated.
So here is my super simple js code:
let theShader;

function preload(){
  theShader = loadShader('shader.vert', 'shader.frag');
}

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  noStroke();
}

function draw() {
  shader(theShader);
  theShader.setUniform("iResolution", [width, height]);
  theShader.setUniform("iFrame", frameCount);
  theShader.setUniform("iMouse", [mouseX, map(mouseY, 0, height, height, 0)]);
  theShader.setUniform("iTime", millis()/1000);
  rect(0, 0, width, height);
}

function windowResized(){
  resizeCanvas(windowWidth, windowHeight);
}
and my vertex shader:
attribute vec3 aPosition;
attribute vec2 aTexCoord;

void main() {
  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
  gl_Position = positionVec4;
}
And finally, the point of error, my frag shader:
#ifdef GL_ES
precision mediump float;
#endif
#define width 800.f
#define height 450.f
#define numColors 10
#define numCircles 100
uniform vec2 iResolution;
uniform int iFrame;
uniform vec2 iMouse;
uniform float iTime;
vec3 GetColor(vec2 coord, vec4 circles[numCircles], vec3 colors[numColors], float timeDelay){
    int value = 0;
    float r;
    float xc;
    float yc;
    float d = sqrt((coord[0] - xc)*(coord[0]-xc)+(coord[1]-yc)*(coord[1]-yc));
    for(int i = 0; i < numCircles; i++)
    {
        xc = circles[i][0];
        yc = circles[i][1];
        r = circles[i][2];
        d = sqrt((coord[0] - xc)*(coord[0]-xc)+(coord[1]-yc)*(coord[1]-yc));
        if(d <= r && iTime >= timeDelay * float(i))
        {
            value += int(circles[i][3]);
        }
    }
    return colors[value % numColors];
}
void main(){
    float timeDelay = 0.05f;
    float timeFactor = 50.f;
    vec3 colors[numColors] = vec3[numColors](vec3(0.976, 0.254, 0.266),vec3(0.952, 0.447, 0.172),vec3(0.972, 0.588, 0.117),\
        vec3(0.976, 0.517, 0.290),vec3(0.976, 0.780, 0.309),vec3(0.564, 0.745, 0.427),vec3(0.262, 0.666, 0.545),\
        vec3(0.301,0.564,0.556),vec3(0.341,0.458,0.564),vec3(0.152,0.490,0.631));
    float timeInt = 0.f;
    int i = 0;
    float xGap = 1.f/11.f;
    float yGap = 1.f/11.f;
    vec4 circles[numCircles];
    for(float y = height * yGap; y < height; y+= height * yGap)
    {
        for (float x = width * xGap; x < width; x += width * xGap)
        {
            circles[i] = vec4(x, y, iTime * timeFactor - timeFactor * timeDelay * float(i),i+1);
            i++;
        }
    }
    vec3 col = GetColor(gl_FragCoord, circles, colors, timeDelay);
    gl_FragColor = vec4(col,1.0);
}
In all honesty, I have no idea why it wouldn't compile. To my untrained eye everything looks fine, so I think it might be some small piece of syntax that I'm not used to.
The issue is that the Shadertoy shader is a GLSL ES 3.0 shader for WebGL2, but p5.js only supports GLSL ES 1.0 and WebGL1.
The incompatibilities include the \ at the end of a couple of lines, which is what generated this error:
glShaderSource: Shader source contains invalid characters
That took a while to find. Firefox gave a better error:
WebGL warning: shaderSource: source contains illegal character 0x5c.
Remove the \ characters and you'll start getting more relevant errors, like:
Darn! An error occurred compiling the fragment shader:
ERROR: 0:33: '%' : integer modulus operator supported in GLSL ES 3.00 and above only
ERROR: 0:37: '0.05f' : Floating-point suffix unsupported prior to GLSL ES 3.00
ERROR: 0:37: '0.05f' : syntax error
Remove the f suffixes and more version-related issues appear:
An error occurred compiling the fragment shader:
ERROR: 0:33: '%' : integer modulus operator supported in GLSL ES 3.00 and above only
ERROR: 0:39: '[]' : array constructor supported in GLSL ES 3.00 and above only
ERROR: 0:39: '[]' : first-class arrays (array initializer) supported in GLSL ES 3.00 and above only
ERROR: 0:39: '=' : Invalid operation for arrays
ERROR: 0:39: '=' : cannot convert from 'const array[10] of 3-component vector of float' to 'mediump array[10] of 3-component vector of float'
ERROR: 0:56: 'GetColor' : no matching overloaded function found
ERROR: 0:56: '=' : dimension mismatch
ERROR: 0:56: '=' : cannot convert from 'const mediump float' to 'mediump 3-component vector of float
I am wondering which DDX/DDY values the SampleGrad() function expects for a TextureCube object.
I know that for 2D textures it's the change in UV coordinates, so I thought it would be the change in the direction vector in this case. However, that does not seem to be the case.
I get different results if I try to use the Sample function vs. SampleGrad:
Sample:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
return reflectionMap.Sample(linearSampler, reflRay);
SampleGrad:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
float3 dxr = ddx(reflRay);
float3 dyr = ddy(reflRay);
return reflectionMap.SampleGrad(linearSampler, reflRay, dxr, dyr);
I still don't know which values for DDX and DDY are required, but I found an acceptable workaround that computes the level of detail for my gradients. Unfortunately, the quality of this solution is not as good as a real Sample call with anisotropic filtering.
In case anyone needs it:
The computation is described in: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
My HLSL implementation:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
float3 dxr = ddx(reflRay);
float3 dyr = ddy(reflRay);
// cubemap size for lod computation
float reflWidth, reflHeight;
reflectionMap.GetDimensions(reflWidth, reflHeight);
// calculate lod based on raydiffs
float lod = calcLod(getCubeDiff(reflRay, dxr).xy * reflWidth, getCubeDiff(reflRay, dyr).xy * reflHeight);
return reflectionMap.SampleLevel(linearSampler, reflRay, lod).rgb;
Helper functions:
float pow2(float x) {
    return x * x;
}

// calculates texture coordinates [-1, 1] for the view direction (xy values must be divided by axisMajorValue for proper [-1, 1] range).
// z coordinate is the faceId
float3 getCubeCoord(float3 viewDir, out float axisMajorValue)
{
    // according to dx spec: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#PointSampling
    // Choose the largest magnitude component of the input vector. Call this magnitude of this value AxisMajor. In the case of a tie, the following precedence should occur: Z, Y, X.
    int axisMajor = 0;
    int axisFlip = 0;
    axisMajorValue = 0.0;
    [unroll] for (int i = 0; i < 3; ++i)
    {
        if (abs(viewDir[i]) >= axisMajorValue)
        {
            axisMajor = i;
            axisFlip = viewDir[i] < 0.0f ? 1 : 0;
            axisMajorValue = abs(viewDir[i]);
        }
    }

    int faceId = axisMajor * 2 + axisFlip;

    // Select and mirror the minor axes as defined by the TextureCube coordinate space. Call this new 2d coordinate Position.
    int axisMinor1 = axisMajor == 0 ? 2 : 0; // first coord is x or z
    int axisMinor2 = 3 - axisMajor - axisMinor1;

    // Project the coordinate onto the cube by dividing the components Position by AxisMajor.
    //float u = viewDir[axisMinor1] / axisMajorValue;
    //float v = -viewDir[axisMinor2] / axisMajorValue;
    // don't project for getCubeDiff function!
    float u = viewDir[axisMinor1];
    float v = -viewDir[axisMinor2];

    switch (faceId)
    {
    case 0:
    case 5:
        u *= -1.0f;
        break;
    case 2:
        v *= -1.0f;
        break;
    }

    return float3(u, v, float(faceId));
}

float3 getCubeDiff(float3 ray, float3 diff)
{
    // from: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
    // Using TC, determine which component is of the largest magnitude, as when calculating the texel location. If any of the components are equivalent, precedence is as follows: Z, Y, X. The absolute value of this will be referred to as AxisMajor.

    // select and mirror the minor axes of TC as defined by the TextureCube coordinate space to generate TC'.uv
    float axisMajor;
    float3 tuv = getCubeCoord(ray, axisMajor);

    // select and mirror the minor axes of the partial derivative vectors as defined by the TextureCube coordinate space, generating 2 new partial derivative vectors dX'.uv & dY'.uv.
    float derivateMajor;
    float3 duv = getCubeCoord(diff, derivateMajor);

    // Calculate 2 new dX and dY vectors for future calculations as follows:
    // dX.uv = (AxisMajor*dX'.uv - TC'.uv*DerivativeMajorX)/(AxisMajor*AxisMajor)
    float3 res;
    res.z = 0.0;
    res.xy = (axisMajor * duv.xy - tuv.xy * derivateMajor) / (axisMajor * axisMajor);
    return res * 0.5;
}

// dx, dy in pixel coordinates
float calcLod(float2 dX, float2 dY)
{
    // from: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
    float A = pow2(dX.y) + pow2(dY.y);
    float B = -2.0 * (dX.x * dX.y + dY.x * dY.y);
    float C = pow2(dX.x) + pow2(dY.x);
    float F = pow2(dX.x * dY.y - dY.x * dX.y);
    float p = A - C;
    float q = A + C;
    float t = sqrt(pow2(p) + pow2(B));
    float lengthX = sqrt(abs(F * (t+p) / ( t * (q+t))) + abs(F * (t-p) / ( t * (q+t))));
    float lengthY = sqrt(abs(F * (t-p) / ( t * (q-t))) + abs(F * (t+p) / ( t * (q-t))));
    return log2(max(lengthX, lengthY));
}
I'm trying to understand what the equivalent of the OpenGL mix function is in Metal. This is the OpenGL code I'm trying to convert:
float udRoundBox( vec2 p, vec2 b, float r )
{
    return length(max(abs(p)-b+r,0.0))-r;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // setup
    float t = 0.2 + 0.2 * sin(mod(iTime, 2.0 * PI) - 0.5 * PI);
    float iRadius = min(iResolution.x, iResolution.y) * (0.05 + t);
    vec2 halfRes = 0.5 * iResolution.xy;

    // compute box
    float b = udRoundBox( fragCoord.xy - halfRes, halfRes, iRadius );

    // colorize (red / black)
    vec3 c = mix( vec3(1.0,0.0,0.0), vec3(0.0,0.0,0.0), smoothstep(0.0,1.0,b) );

    fragColor = vec4( c, 1.0 );
}
I was able to convert part of it so far:
float udRoundBox( float2 p, float2 b, float r )
{
    return length(max(abs(p)-b+r,0.0))-r;
}

float4 cornerRadius(sampler_h src) {
    float2 greenCoord = src.coord(); // this is already in relative coords; no need to divide by image size

    float t = 0.5;
    float iRadius = min(greenCoord.x, greenCoord.y) * (t);
    float2 halfRes = float2(greenCoord.x * 0.5, greenCoord.y * 0.5);
    float b = udRoundBox( float2(greenCoord.x - halfRes.x, greenCoord.y - halfRes.y), halfRes, iRadius );
    float3 c = mix(float3(1.0,0.0,0.0), float3(0.0,0.0,0.0), smoothstep(0.0,1.0,b) );
    return float4(c, 1.0);
}
But it's producing a green screen. I'm trying to achieve a rounded-corner effect on a video.
The mix function is an implementation of linear interpolation, more frequently referred to as a lerp function.
You can use linear interpolation when you have a value, let's say t, and you want to know how that value maps within a certain range.
For example, if I have three values:
a = 0
b = 1
and
t = 0.5
I could call mix(a, b, t) and my result would be 0.5. That is because the mix function expects a start range value, an end range value, and a factor by which to interpolate, so I get 0.5, which is halfway between 0 and 1.
Looking at the documentation, Metal has an implementation of mix that performs exactly this linear interpolation.
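As a small illustration (the helper below is just an example, not part of either kernel), Metal's mix really is that lerp: mix(a, b, t) computes a + t * (b - a).
#include <metal_stdlib>
using namespace metal;

// With a = 0, b = 1, t = 0.5 this evaluates mix to 0.5, as described above.
inline float lerpDemo(float a, float b, float t)
{
    float viaMix = mix(a, b, t);      // Metal's built-in linear interpolation
    float byHand = a + t * (b - a);   // the same computation written out
    return viaMix - byHand;           // always 0.0
}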
The problem is that greenCoord (which was only a good variable name in your other question, by the way) is the relative coordinate of the current pixel and has nothing to do with the absolute input resolution.
If you want a replacement for your iResolution, use src.size() instead.
And it seems you need your input coordinates in absolute (pixel) units. You can achieve that by adding a destination parameter to the inputs of your kernel like so:
float4 cornerRadius(sampler src, destination dest) {
    const float2 destCoord = dest.coord(); // pixel position in the output buffer in absolute coordinates
    const float2 srcSize = src.size();

    const float t = 0.5;
    const float radius = min(srcSize.x, srcSize.y) * t;
    const float2 halfRes = 0.5 * srcSize;
    const float b = udRoundBox(destCoord - halfRes, halfRes, radius);
    const float3 c = mix(float3(1.0,0.0,0.0), float3(0.0,0.0,0.0), smoothstep(0.0,1.0,b) );
    return float4(c, 1.0);
}
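Since the actual goal is to round the corners of a video rather than draw a red box, a possible follow-up (only a sketch, assuming the same Core Image Metal kernel setup and the udRoundBox helper shown above, and that the standard sampler sample() call is available in your kernel) is to keep the source color and fade its alpha outside the rounded box:
float4 roundedCorners(sampler src, destination dest) {
    const float2 destCoord = dest.coord();                 // output pixel in absolute coordinates
    const float2 srcSize = src.size();

    const float radius = min(srcSize.x, srcSize.y) * 0.1;  // arbitrary example radius
    const float2 halfRes = 0.5 * srcSize;
    const float b = udRoundBox(destCoord - halfRes, halfRes, radius);

    const float4 color = src.sample(src.coord());          // the video pixel at this location
    const float mask = 1.0 - smoothstep(0.0, 1.0, b);      // 1 inside the rounded box, 0 outside
    return float4(color.rgb * mask, color.a * mask);       // fade out color and alpha together
}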
I'm trying to build a Vibrance filter for GPUImage based on this JavaScript:
/**
 * @filter      Vibrance
 * @description Modifies the saturation of desaturated colors, leaving saturated colors unmodified.
 * @param amount -1 to 1 (-1 is minimum vibrance, 0 is no change, and 1 is maximum vibrance)
 */
function vibrance(amount) {
    gl.vibrance = gl.vibrance || new Shader(null, '\
        uniform sampler2D texture;\
        uniform float amount;\
        varying vec2 texCoord;\
        void main() {\
            vec4 color = texture2D(texture, texCoord);\
            float average = (color.r + color.g + color.b) / 3.0;\
            float mx = max(color.r, max(color.g, color.b));\
            float amt = (mx - average) * (-amount * 3.0);\
            color.rgb = mix(color.rgb, vec3(mx), amt);\
            gl_FragColor = color;\
        }\
    ');

    simpleShader.call(this, gl.vibrance, {
        amount: clamp(-1, amount, 1)
    });

    return this;
}
One would think I should be able to more/less copy paste the shader block:
GPUImageVibranceFilter.h
@interface GPUImageVibranceFilter : GPUImageFilter
{
    GLint vibranceUniform;
}

// Modifies the saturation of desaturated colors, leaving saturated colors unmodified.
// Value -1 to 1 (-1 is minimum vibrance, 0 is no change, and 1 is maximum vibrance)
@property (readwrite, nonatomic) CGFloat vibrance;

@end
GPUImageVibranceFilter.m
#import "GPUImageVibranceFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageVibranceFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
uniform float vibrance;
varying highp vec2 textureCoordinate;
void main() {
vec4 color = texture2D(inputImageTexture, textureCoordinate);
float average = (color.r + color.g + color.b) / 3.0;
float mx = max(color.r, max(color.g, color.b));
float amt = (mx - average) * (-vibrance * 3.0);
color.rgb = mix(color.rgb, vec3(mx), amt);
gl_FragColor = color;
}
);
#else
NSString *const kGPUImageVibranceFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
uniform float vibrance;
varying vec2 textureCoordinate;
void main() {
vec4 color = texture2D(inputImageTexture, textureCoordinate);
float average = (color.r + color.g + color.b) / 3.0;
float mx = max(color.r, max(color.g, color.b));
float amt = (mx - average) * (-vibrance * 3.0);
color.rgb = mix(color.rgb, vec3(mx), amt);
gl_FragColor = color;
}
);
#endif
#implementation GPUImageVibranceFilter
#synthesize vibrance = _vibrance;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageVibranceFragmentShaderString]))
{
return nil;
}
vibranceUniform = [filterProgram uniformIndex:#"vibrance"];
self.vibrance = 0.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setVibrance:(CGFloat)vibrance;
{
_vibrance = vibrance;
[self setFloat:_vibrance forUniform:vibranceUniform program:filterProgram];
}
#end
But that doesn't compile, crashing with:
Failed to compile fragment shader
Program link log: ERROR: One or more attached shaders not successfully compiled
Fragment shader compile log: (null)
Vertex shader compile log: (null)
The error is certainly clear, but being inexperienced with OpenGL ES shaders, I have no idea what the problem actually is.
One would think I should be able to more/less copy paste the shader block.
This might be the case in desktop GLSL, but in OpenGL ES you cannot declare a float variable (this includes types derived from float, such as vec2 or mat4) without first setting the precision: there is no pre-defined default precision for float in the fragment shader.
Implementations guarantee support for mediump and lowp floating-point precision in the fragment shader; you will have to check before setting highp as the default, however. In practice, adding a default precision statement such as precision mediump float; at the top of the fragment shader string resolves this class of error.
This whole problem screams "missing precision" to me, but why the compiler is not telling you this in the compile log I really do not know.
Brush up on 4.5.3 Default Precision Qualifiers (pp. 35-36) of the GLSL ES specification.
Side-note regarding your use of CGFloat:
Be careful using CGFloat.
Depending on your compile target (whether you are building for a 32-bit or 64-bit architecture), that type will be either single-precision or double-precision. If you are passing something declared CGFloat to GL, stop that =P
Use GLfloat instead, because that will always be single-precision as GL requires.
See a related answer I wrote for more details.