iOS Simulator GL_OES_standard_derivatives

On iOS 4, GL_OES_standard_derivatives is only supported on the device (from what I see when I output the extensions). Is there a way to:
Detect in the fragment shader whether the extension is supported or not
If it's not supported, does anyone have code for dFdx and dFdy? I can't seem to find anything on Google.
TIA!

I had the same issue when antialiasing SDF fonts. You can calculate an approximate dFdx/dFdy by transforming two 2D vectors with the current transform matrix:
vec2 p1(0, 0);
vec2 p2(1, 1);
p1 = TransformUsingCurrentMatrix(p1);
p2 = TransformUsingCurrentMatrix(p2);
float magic = 35; // you'll need to play with this - it's linked to screen size I think :P
float dFdx = (p2.x - p1.x) / magic;
float dFdy = (p2.y - p1.y) / magic;
then send dFdx/dFdy to your shader as uniforms, and simply multiply with your parameter to get the same functionality, i.e.
dFdx(myval) now becomes dFdx * myval
dFdy(myval) becomes dFdy * myval
fwidth(myval) becomes abs(dFdx * myval) + abs(dFdy * myval)
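For the detection part: GLSL ES defines a preprocessor macro with the same name as each supported extension, so the shader itself can choose between the real derivatives and the uniform fallback. A rough sketch (untested; u_dFdx, u_dFdy and v_dist are assumed names, with the uniforms fed from the CPU-side values above):
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;
uniform float u_dFdx; // assumed name: (p2.x - p1.x) / magic computed on the CPU
uniform float u_dFdy; // assumed name: (p2.y - p1.y) / magic computed on the CPU
varying float v_dist; // assumed name: signed-distance value to antialias
void main() {
#ifdef GL_OES_standard_derivatives
    float w = fwidth(v_dist); // real derivatives when the extension exists
#else
    float w = abs(u_dFdx * v_dist) + abs(u_dFdy * v_dist); // fwidth fallback
#endif
    float alpha = smoothstep(0.5 - w, 0.5 + w, v_dist);
    gl_FragColor = vec4(vec3(1.0), alpha);
}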

Related

SpriteKit shaders on iOS 16 - Y-axis is inverted

My SpriteKit app, which uses several GLSL fragment shaders, shows different shader behaviour on iOS 16 compared to iOS 15 and earlier. On iOS 15 the y-axis increased towards the bottom of the screen, but on iOS 16 this appears to be inverted and the y-axis now increases towards the top of the screen.
The fact that this change is occurring only in my fragment shaders while SpriteKit node positioning remains unchanged between iOS 15 and 16 leads me to believe that this might be a change made in Metal 3.
Is there an elegant solution to achieving consistent behaviour between iOS versions here? I would prefer not to have to detect the user's iOS version and supply a shader uniform to invert the y-axis manually, on a per-shader basis.
Update 9.15.2022
Based on the comments, it looks like this is indeed a bug (95579020) and an Apple Engineer confirmed that a fix has been identified (https://developer.apple.com/forums/thread/713945).
However, the SKShader issue doesn't seem to happen when building on an Apple silicon laptop with Xcode 14.0. Not sure about the CIFilter issue.
Original
Are you using v_tex_coord? I have tested this and it doesn't look like there's a problem between iOS 15 and iOS 16, using v_tex_coord. There must be something else.
Note: I have tested this on the iOS 16.0 simulator, not on a real device.
The documentation also still mentions the bottom-left corner:
vec2 v_tex_coord (varying): The coordinates used to access the texture. These coordinates are normalized so that the point (0.0, 0.0) is in the bottom-left corner of the texture.
The results are the same on both iOS versions using the SKShader code below and v_tex_coord.y:
void main() {
    vec3 red = vec3(1.0, 0.0, 0.0);
    vec3 blue = vec3(0.0, 0.0, 1.0);
    vec3 col = mix(red, blue, v_tex_coord.y);
    gl_FragColor = vec4(col, 1.0);
}
(screenshots: iOS 15.0 result, iOS 16.0 result)

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that, for now, only changes the alpha of the texture depending on the black value of a noise texture:
let shader = SKShader(source: """
void main() {\
vec4 colour = texture2D(u_texture, v_tex_coord);\
float noise = texture2D(noise_tex, v_tex_coord).r;\
gl_FragColor = colour * noise;\
}
""", uniforms: [
SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual iOS 14.5.1 device it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent with a single value
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole spritesheet
3. Same as 2, but the texture is created from an image outside the spritesheet - it works as on the simulator (and as expected)
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried 3 won't work everywhere. So I'd like to understand why it happens and, ideally, find a sure way to force 1 or at least 2 to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures in the shader are whole spritesheets instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier, since the display texture is first rendered onto a buffer, so v_tex_coord will span the full [0.0, 1.0]; all I need is the noise texture's rect to do the appropriate transform. For 1 I additionally need to provide the sprite's texture rect to first map it into [0.0, 1.0] myself, and then apply that to the noise coordinates.
This will work with both spritesheets loaded into the shader and separate images, just in the latter case it will do some unnecessary calculations.
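For reference, a rough sketch of the case-2 maths, assuming the noise texture's rect inside the atlas is passed in as a hypothetical vec4 uniform u_noise_rect (origin in xy, size in zw, normalized coordinates):
uniform vec4 u_noise_rect; // assumed: noise sub-rect in the atlas, xy = origin, zw = size
void main() {
    vec4 colour = texture2D(u_texture, v_tex_coord);
    // Remap the node's [0, 1] coordinates into the noise sub-rectangle
    // so only the dissolve_noise region of the spritesheet is sampled.
    vec2 noise_uv = u_noise_rect.xy + v_tex_coord * u_noise_rect.zw;
    float noise = texture2D(noise_tex, noise_uv).r;
    gl_FragColor = colour * noise;
}
For case 1 the sprite's own rect would also be needed, to first map v_tex_coord from spritesheet coordinates back into [0.0, 1.0] before applying the same remap.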

Firemonkey does strange, bizarre things with Alpha

Working with Delphi / Firemonkey XE8. Had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a Game Project. I also know Unity3D quite well, and am considering using Unity3D instead, but I figure that Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample... specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which I might add is really lame), and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame)... so my first job has been to create a shader that works with their existing sample and does something OTHER than what the sample already does. Specifically, if I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency - basic stuff. If I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0: register(t0);
Texture2D mytex1: register(t1);
float4 cccc : register(v0);

struct PixelShaderInput
{
    float4 Pos: COLOR;
    float2 Tex: TEXCOORDS;
};

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState MyCull {
    FrontCounterClockwise = FALSE;
};

float4 main(PixelShaderInput pIn): SV_TARGET
{
    float4 cc, c;
    float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
    c = ci;
    c.a = 0; //<----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
    cc = c;
    return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's ALPHA to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figure... no big deal then, I'll just manually draw both sides of the sprite... right? Nope! When I manually drew a double-sided sprite, the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, alpha is clearly SOMEWHAT obeyed because the clouds are not surrounded by a black box; however, the cloud itself is super-saturated (I find that if I multiply rgb*a, the colors come out approximately right, but I'm not going to do that in real life for obvious reasons).
I'm new to the concept of writing custom shaders. Any insight is appreciated.

Getting the color of the back buffer in GLSL

I am trying to extract the color behind my shader fragment. I have searched around and found various examples of people doing this like so:
vec2 position = ( gl_FragCoord.xy / u_resolution.xy );
vec4 color = texture2D(u_backbuffer, v_texCoord);
This makes sense. However nobody has shown an example where you pass in the back buffer uniform.
I tried to do it like this:
int backbuffer = glGetUniformLocation(self.shaderProgram->program_, "u_backbuffer");
GLint textureID;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &textureID);//tried both of these one at a time
glGetIntegerv(GL_RENDERBUFFER_BINDING, &textureID);//tried both of these one at a time
glUniform1i(backbuffer, textureID);
But I just get black. This is in cocos2d for iOS, FYI.
Any suggestions?
You can do this, but only on iOS 6.0. Apple added an extension called GL_EXT_shader_framebuffer_fetch which lets you read the contents of the current framebuffer at the fragment you're rendering. This extension introduces a new built-in variable, gl_LastFragData, which you can read in your fragment shader.
This question by RayDeeA shows an example of this in action, although you'll need to change the name of the extension as combinatorial points out in their answer.
This should be supported on all devices running iOS 6.0 and is a great way to implement custom blend modes. I've heard that it's a very low cost operation, but haven't done much profiling myself yet.
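For illustration, a minimal ES 2.0-style fragment shader using it might look like this (u_texture and v_texCoord are assumed names here, and the multiply is just an example blend):
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
uniform sampler2D u_texture; // assumed: the sprite's own texture
varying vec2 v_texCoord;     // assumed varying name
void main() {
    vec4 src = texture2D(u_texture, v_texCoord);
    vec4 dst = gl_LastFragData[0]; // colour currently in the framebuffer
    // Example custom blend mode: multiply source with destination.
    gl_FragColor = vec4(src.rgb * dst.rgb, src.a);
}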
That is not allowed. You cannot simultaneously sample from an image that you're currently writing to as part of an FBO.

Adjust brightness/contrast/gamma of scene in DirectX?

In a Direct3D application filled with Sprite objects from D3DX, I would like to be able to globally adjust the brightness and contrast. The contrast is important, if at all possible.
I've seen this question here about OpenGL: Tweak the brightness/gamma of the whole scene in OpenGL
But that doesn't give me what I would need in a DirectX context. I know this is something I could probably do with a pixel shader, too, but that seems like shooting a fly with a bazooka, and I worry about backwards compatibility with older GPUs that would have to do any shaders in software. It seems like this should be possible; I remember even much older games like the original Half-Life having settings like this, well before the days of shaders.
EDIT: Also, please note that this is not a fullscreen application, so this would need to be something that would just affect the one Direct3D device and that is not a global setting for the monitor overall.
A lot of games increase brightness by literally applying a full-screen poly over the screen with additive blending, i.e.
SRCBLEND = ONE
DESTBLEND = ONE
and then applying a texture with a colour of (1, 1, 1, 1) will increase the brightness(ish) of every pixel by 1.
To adjust contrast you need to do something similar, but MULTIPLY by a constant factor. This requires the following blend settings:
SRCBLEND = DESTCOLOR
DESTBLEND = ZERO
This way if you blend a pixel with a value of (2, 2, 2, 2) then you will change the contrast.
Gamma is a far more complicated beast and I'm not aware of a way you can fake it like the above.
Neither of these solutions is entirely accurate, but they will give you a result that looks "slightly" correct. It's much more complicated to do either properly, but by using those methods you'll see effects that look INCREDIBLY similar to various games you've played, and that's because that's EXACTLY what they are doing ;)
For a game I made in XNA, I used a pixel shader. Here is the code. [disclaimer] I'm not entirely sure the logic is right and some settings can have weird effects (!) [/disclaimer]
float offsetBrightness = 0.0f; //can be set from my C# code, [-1, 1]
float offsetContrast = 0.0f; //can be set from my C# code [-1, 1]
sampler2D screen : register(s0); //can be set from my C# code
float4 PSBrightnessContrast(float2 inCoord : TEXCOORD0) : COLOR0
{
    return (tex2D(screen, inCoord.xy) + offsetBrightness) * (1.0 + offsetContrast);
}

technique BrightnessContrast
{
    pass Pass1 { PixelShader = compile ps_2_0 PSBrightnessContrast(); }
}
You didn't specify which version of DirectX you're using, so I'll assume 9. Probably the best you can do is to use the gamma-ramp functionality. Pushing the ramp up and down increases or decreases brightness. Increasing or decreasing the gradient alters contrast.
Another approach is to edit all your textures. It would basically be the same approach as the gamma ramp, except you apply the ramp manually to the pixel data in each of your image files to generate the textures.
Just use a shader. Anything that doesn't at least support SM 2.0 is ancient at this point.
The answers in this forum may be of help, as I have never tried this.
http://social.msdn.microsoft.com/Forums/en-US/windowsdirectshowdevelopment/thread/95df6912-1e44-471d-877f-5546be0eabf2
