Making a fish shader in SceneKit - iOS

I am working on a Swift Playground with a 3D model view (SceneKit) and have recently been introduced to shaders (.shader files) for iOS. I've made shaders for Unity before, and I was hoping for some help replicating a Unity Shader Graph shader as a .shader file for a shark. Thanks!
Here's the Unity Shader Graph I'm trying to replicate: [Shader Graph screenshot omitted]
This is the original shader code I tried for the sinusoidal movement. It didn't work too well: instead of swimming, the mesh rippled in all directions. I tried changing the axis, but after that nothing moved at all.
_geometry.position.xz += _geometry.position.xz *
    sin(30.0 * _geometry.position.y - 3.0 * u_time) * 0.05 *
    (u_time < 3.0 ? u_time / 3.0 : 1.0);
This shader was saved as a .shader file and added to my mesh.
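For reference, a swimming motion is usually a displacement of one axis driven by a sine of the position along the body, rather than an offset proportional to the existing xz position (which is what makes the version above ripple in every direction). Below is a minimal, untested sketch of how such a geometry shader modifier might be attached in SceneKit; the node name "shark", the wave constants, and the choice of axes are assumptions that depend on how the model is oriented:

import SceneKit

// Displace x by a wave travelling along the body (assumed to run along z),
// ramping in over the first 3 seconds as in the snippet above.
let swim = """
float ramp = (u_time < 3.0) ? (u_time / 3.0) : 1.0;
_geometry.position.x += sin(2.0 * _geometry.position.z - 3.0 * u_time) * 0.05 * ramp;
"""

// "scene" is the SCNScene already loaded in the playground;
// "shark" is a placeholder for the mesh node's actual name.
if let shark = scene.rootNode.childNode(withName: "shark", recursively: true) {
    shark.geometry?.shaderModifiers = [.geometry: swim]
}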

Related

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for SpriteKit on iOS. I have this basic shader that, for now, only changes the alpha of the texture depending on the black value of a noise texture:
let shader = SKShader(source: """
void main() {
    vec4 colour = texture2D(u_texture, v_tex_coord);
    float noise = texture2D(noise_tex, v_tex_coord).r;
    gl_FragColor = colour * noise;
}
""", uniforms: [
    SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with varying alpha values all over the place. On an actual device running iOS 14.5.1 it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent with a single value
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole spritesheet
3. Same as 2, but the texture is created from an image outside the spritesheet - it works as on the simulator (and as expected)
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried that 3 won't work everywhere. So I'd like to understand why it happens and, ideally, find a sure way to force 1 or at least 2 to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures seen by the shader are the whole spritesheet instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier, since the displayed texture is first rendered onto a buffer, so v_tex_coord spans the full [0.0, 1.0] range; all I need is the noise texture's rect within the sheet to do the appropriate transform. For 1 I additionally need to provide the sprite's own texture rect, map v_tex_coord into [0.0, 1.0] myself, and then apply the noise rect transform.
This will work whether the shader is given spritesheets or separate images; in the latter case it just does some unnecessary calculations.
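As a concrete sketch of the case-2 maths: pass the noise texture's rect within the sheet as a uniform and remap v_tex_coord with it in the shader. The uniform name u_noise_rect is my own, and note that SKUniform(name:vectorFloat4:) requires iOS 10, so an iOS 9 build would need the older GLKit-based vector uniform instead:

import SpriteKit
import simd

let noiseTexture = spriteSheet.textureNamed("dissolve_noise")
let rect = noiseTexture.textureRect() // the texture's sub-rect of the sheet, in [0, 1] units

let shader = SKShader(source: """
void main() {
    vec4 colour = texture2D(u_texture, v_tex_coord);
    // Under an SKEffectNode, v_tex_coord spans [0, 1]; remap it into
    // the noise texture's sub-rect within the spritesheet.
    vec2 noiseCoord = u_noise_rect.xy + v_tex_coord * u_noise_rect.zw;
    float noise = texture2D(noise_tex, noiseCoord).r;
    gl_FragColor = colour * noise;
}
""", uniforms: [
    SKUniform(name: "noise_tex", texture: noiseTexture),
    SKUniform(name: "u_noise_rect", vectorFloat4: vector_float4(
        Float(rect.origin.x), Float(rect.origin.y),
        Float(rect.size.width), Float(rect.size.height)))
])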

Firemonkey does strange, bizarre things with Alpha

Working with Delphi / Firemonkey XE8. I've had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well and am considering using it instead, but I figure Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample, specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which, I might add, is really lame) and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame). So my first job has been to create a shader that works with their existing sample and does something OTHER than what the sample already does. Specifically, since I'm creating a 2D game, I wanted to make sure I could do sprites with alpha transparency - basic stuff. If I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0: register(t0);
Texture2D mytex1: register(t1);
float4 cccc: register(v0);

struct PixelShaderInput
{
    float4 Pos: COLOR;
    float2 Tex: TEXCOORDS;
};

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState MyCull
{
    FrontCounterClockwise = FALSE;
};

float4 main(PixelShaderInput pIn): SV_TARGET
{
    float4 cc, c;
    float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
    c = ci;
    c.a = 0; // <----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
    cc = c;
    return cc;
}
Never mind that it doesn't actually do much with the parameters; check out the line where I set the output's ALPHA to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figured... no big deal then, I'll just manually draw both sides of the sprite, right? Nope! When I manually drew a double-sided sprite, the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, alpha is clearly SOMEWHAT obeyed, because the clouds are not surrounded by a black box; however, the cloud itself is super-saturated. (I find that if I multiply rgb by a, the colors come out approximately right, but I'm not going to do that in real life, for obvious reasons.)
I'm new to the concept of writing custom shaders. Any insight is appreciated.
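A hedged observation rather than a confirmed fix: colors that only look right after multiplying rgb by a are the classic signature of a pipeline that expects premultiplied alpha. If Firemonkey's blend state assumes premultiplied sources, the experiment would be a one-line addition at the end of the pixel shader above (HLSL):

c.rgb *= c.a; // emit premultiplied alpha; with c.a = 0 this makes the output fully transparent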

Getting the color of the back buffer in GLSL

I am trying to extract the color behind my shader fragment. I have searched around and found various examples of people doing this, such as:
vec2 position = ( gl_FragCoord.xy / u_resolution.xy );
vec4 color = texture2D(u_backbuffer, v_texCoord);
This makes sense. However, nobody has shown an example of how you pass in the back buffer uniform.
I tried to do it like this:
int backbuffer = glGetUniformLocation(self.shaderProgram->program_, "u_backbuffer");
GLint textureID;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &textureID);//tried both of these one at a time
glGetIntegerv(GL_RENDERBUFFER_BINDING, &textureID);//tried both of these one at a time
glUniform1i(backbuffer, textureID);
But I just get black. This is in cocos2d on iOS, FYI.
Any suggestions?
You can do this, but only on iOS 6.0 and later. Apple added an extension called GL_EXT_shader_framebuffer_fetch which lets you read the contents of the current framebuffer at the fragment you're rendering. This extension introduces a new built-in variable, gl_LastFragData[0], which you can read in your fragment shader.
This question by RayDeeA shows an example of this in action, although you'll need to change the name of the extension, as combinatorial points out in their answer.
This should be supported on all devices running iOS 6.0 and is a great way to implement custom blend modes. I've heard that it's a very low-cost operation, but I haven't done much profiling myself yet.
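A minimal fragment-shader sketch of the extension in use (the varying/uniform names follow the question's code; the multiply blend is arbitrary, just to show the read):

#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main()
{
    vec4 src = texture2D(u_texture, v_texCoord);
    vec4 dst = gl_LastFragData[0]; // the framebuffer color already at this fragment
    gl_FragColor = src * dst;      // e.g. a multiply blend
}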
That is not allowed. You cannot simultaneously sample from an image that you're currently writing to as part of an FBO.
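For devices without the extension, a common workaround (sketched here with plain GL ES 2.0 calls; backbufferCopyTex, width, and height are assumed to be set up elsewhere) is to snapshot the framebuffer into a spare texture before the draw and bind that as u_backbuffer, since a copy can legally be sampled:

// backbufferCopyTex was allocated earlier with glTexImage2D at the framebuffer's size.
glBindTexture(GL_TEXTURE_2D, backbufferCopyTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
// ...then bind backbufferCopyTex to a texture unit and set u_backbuffer to that unit's index.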

What happened to glDepthRange in OpenGL ES 2.0 for iOS?

I used glDepthRange(1.0, 0.0) in a Mac OS X program to give myself a right-handed coordinate system. Apparently I don't have that option on iOS with OpenGL ES 2.0. Is there a quick fix so that higher z values show up in front, or do I have to rework all of my math?
Well, you can try glDepthFunc. The default value is GL_LESS; if you use GL_GREATER, pixels with higher z values will be rendered. Note that for GL_GREATER to let anything through, the depth buffer must also be cleared to 0.0 instead of the default 1.0:
glClearDepthf(0.0);
glDepthFunc(GL_GREATER);
Alternatively, you can add this line to your vertex shader:
gl_Position.z = -gl_Position.z;

iOS Simulator GL_OES_standard_derivatives

On iOS 4, GL_OES_standard_derivatives is only supported on the device (from what I see when I print out the extensions). Is there a way to:
1. Detect in the fragment shader whether the extension is supported or not?
2. If it isn't supported, does anyone have code for dFdx and dFdy? I can't seem to find anything on Google.
TIA!
I had the same issue when antialiasing SDF (signed distance field) fonts. You can calculate an approximate dFdx/dFdy on the CPU by transforming two 2D vectors through the current transform matrix:
vec2 p1(0.0, 0.0);
vec2 p2(1.0, 1.0);
p1 = TransformUsingCurrentMatrix(p1); // stand-in for whatever applies your current matrix
p2 = TransformUsingCurrentMatrix(p2);
float magic = 35.0; // you'll need to play with this - it's linked to screen size I think :P
float dFdx = (p2.x - p1.x) / magic;
float dFdy = (p2.y - p1.y) / magic;
then send dFdx/dFdy to your shader as uniforms and simply multiply with your parameter to get roughly the same functionality, i.e.:
dFdx(myval)   becomes   dFdx * myval
dFdy(myval)   becomes   dFdy * myval
fwidth(myval) becomes   abs(dFdx * myval) + abs(dFdy * myval)
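And on the shader side, a sketch with the uniform names spelled out (u_dFdx/u_dFdy are the CPU-computed values above; v_dist is a hypothetical value being antialiased):

precision mediump float;

uniform float u_dFdx; // computed on the CPU as above
uniform float u_dFdy;
varying float v_dist; // e.g. a signed-distance value

void main()
{
    // Stand-in for fwidth(v_dist) without GL_OES_standard_derivatives.
    float w = abs(u_dFdx * v_dist) + abs(u_dFdy * v_dist);
    float alpha = smoothstep(0.5 - w, 0.5 + w, v_dist);
    gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
}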
