iOS OpenGL 3.0 does not compile version - ios

I am trying out OpenGL 3.0 in Xcode 5.
This is how I compile the shader:
*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);
This is my shader:
#version 140
attribute vec4 position;
attribute vec4 color;
varying vec4 colorVarying;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
out int rowIndex;
out int colIndex;
void main(void) {
colorVarying = color;
gl_Position = position;
TexCoordOut = TexCoordIn;
}
I try:
glGetString(GL_SHADING_LANGUAGE_VERSION);
It returns 235, which is expected, but I get
ERROR: 0:1: '' : version '140' is not supported
from the compile log. I have tried many version numbers and only 100 worked. Then I get
Invalid qualifiers 'out' in global variable context
What's wrong? I am running this on the iPhone 4 64-bit simulator, on my MacBook Air with Intel HD Graphics 3000 (384 MB).
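For reference, the compile log quoted above can be read back after glCompileShader with the standard query calls; a minimal sketch (my own, not the asker's code, continuing the snippet above and assuming <stdio.h> and <stdlib.h> are included):
GLint status = 0;
glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
if (status == GL_FALSE) {
    GLint logLength = 0;
    glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0) {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(*shader, logLength, NULL, log);
        printf("Shader compile log:\n%s\n", log); /* this is where the version error shows up */
        free(log);
    }
}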

I'm confused about what you're trying to do. You say you want to use OpenGL 3.0, yet you have tagged the question OpenGL ES 3.0. Since you're talking about the iPhone simulator, I'll assume you mean OpenGL ES 3.0. Note that among Apple's mobile devices, only the iPhone 5S has an OpenGL ES 3.0-capable GPU. I don't know how well ES 3.0 works in the simulator; at the very least, try the iPhone 5S simulator if such a thing is available.
As for your shader, it looks like a merry mix of different language versions. First, for OpenGL ES 3.0 you need #version 300 es. The only other version allowed in OpenGL ES is #version 100, for ES 2.0. I don't know why you're trying #version 140, since that is only for (desktop) OpenGL 3.1. I also don't know why you're expecting (and getting) 235 from GL_SHADING_LANGUAGE_VERSION; per the spec, it should return a string of the form OpenGL ES GLSL ES N.M ..., where N is the major and M the minor language version number.
Then, as noted by SurvivalMachine, you need to replace your varyings with out and attributes with in.
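Putting those pieces together, a vertex shader along the asker's lines might look like this in GLSL ES 3.00 (a sketch of the corrected form, assuming the two unused integer outputs are kept; integer vertex outputs must be flat-qualified):
#version 300 es

in vec4 position;
in vec4 color;
in vec2 TexCoordIn;

out vec4 colorVarying;
out vec2 TexCoordOut;
flat out int rowIndex;   // integer outputs must be flat in GLSL ES 3.00
flat out int colIndex;   // (left unassigned here, as in the original)

void main(void) {
    colorVarying = color;
    TexCoordOut = TexCoordIn;
    gl_Position = position;
}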

Related

Metal Shader vertex attributes - cannot convert attribute from MTLAttributeFormatUInt to float1

I have a shader that looks like this:
struct VertexIn {
float a_customIdx [[attribute(0)]];
...
};
vertex float4 vertex_func(VertexIn v_in [[stage_in]], ...) {...}
In my buffer I'm actually passing in a uint32_t for a_customIdx, so in my MTLVertexAttributeDescriptor I specify its type to be MTLAttributeFormatUInt. When I create the RenderPipelineState I get the error:
cannot convert attribute from MTLAttributeFormatUInt to float1
I get the same error if I use MTLAttributeFormatInt, but can successfully convert a MTLAttributeFormatUShort.
Why is this not a valid operation? According to the documentation for format, "Casting any MTLVertexFormat to a float or half is valid".
I know there are multiple ways I can get around this problem, but I'm curious about why this is invalid - perhaps there's something about alignments and byte sizes I'm missing here.
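One of those workarounds, sketched here as an assumption rather than an explanation of the conversion rule, is to declare the shader-side attribute with a matching integer type and cast to float inside the function:
#include <metal_stdlib>
using namespace metal;

// Sketch of one possible workaround, not the asker's original shader.
struct VertexIn {
    uint a_customIdx [[attribute(0)]];   // integer type matches the UInt format in the descriptor
};

vertex float4 vertex_func(VertexIn v_in [[stage_in]]) {
    // Cast explicitly once the value is inside the shader.
    float idx = float(v_in.a_customIdx);
    return float4(idx, 0.0, 0.0, 1.0);   // placeholder output, just for the sketch
}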

validateFunctionArguments:3379: failed assertion `Fragment Function , The pixel format (MTLPixelFormatRGBA16Unorm) of the texture

validateFunctionArguments:3379: failed assertion `Fragment Function(ca_uber_fragment_lp0_cp1_fo0): The pixel format (MTLPixelFormatRGBA16Unorm) of the texture (name:) bound at index 0 is incompatible with the data type (MTLDataTypeHalf) of the texture parameter (img_tex_0A [[texture(0)]]). MTLPixelFormatRGBA16Unorm is compatible with the data type(s) (
float
).'
When I run my project on an iPhone 8, I get this crash. Someone advised me to disable "Edit Scheme - Options - Metal API Validation", and that does make the crash go away, but I don't know why. I'm looking forward to your suggestions, thanks.
Try setting colorPixelFormat = .bgra8Unorm for your MTKView and MTLRenderPipelineState.
In the Metal function you have one of two options: try changing the texture parameter's type from float to half, or from half to float, so that it matches the pixel format of the bound texture.
texture2d<float, access::read> myTexture [[texture(0)]]
texture2d<half, access::read> myTexture [[texture(0)]]

Unity MVP matrix on iOS

I am working on a water simulation. I need to sample _CameraDepthTexture to get the opaque depth. It works well on Windows, but the shader gets a different depth on iOS.
vert:
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.ref = ComputeScreenPos(o.pos);
COMPUTE_EYEDEPTH(o.ref.z);
frag:
uniform sampler2D_float _CameraDepthTexture;
float raw_depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(uv2)));
On Windows, raw_depth is around 0.98, but on iOS it is around 0.51.
I guess this difference is related to how the MVP/projection matrix differs between platforms.
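One way to make the comparison more robust across platforms (a sketch using the standard UnityCG.cginc helpers, not an answer from this thread) is to convert the raw sample to linear eye depth before comparing it with the eye depth written by COMPUTE_EYEDEPTH:
// Sketch: compare in linear eye space so the result does not depend on how
// each platform encodes the depth buffer. Assumes UnityCG.cginc is included.
float raw_depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(uv2)));
float scene_depth = LinearEyeDepth(raw_depth);  // UnityCG.cginc helper
float frag_depth = uv2.z;                       // eye depth from COMPUTE_EYEDEPTH
float water_depth = scene_depth - frag_depth;   // positive where opaque geometry lies behind the surface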

Updating float4 declaration from dx9 to dx11

There's a shader I was given that I'm trying to update to be compatible with the newest Unity 5 (presumably DX11). I don't understand how the basic float4 instantiation from DX9 was working. Can someone help me understand the following syntax and then provide an equivalent DX11 syntax?
I understand that float4 normally takes x,y,z,w or xyz,w as arguments, but what did a single float argument do? Did float4(0.01) make {.01, 0, 0, 0}, or did it make {.01, .01, .01, .01}?
Original code from the shader:
float4 Multiply19 = float4( 0.01 ) * float4( 0 );
It makes a new float4 with all members (xyzw) set to 0.01 and then multiplies it by a float4 of zeros, effectively making Multiply19 a (0, 0, 0, 0) float4.
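If the newer compiler complains about the single-argument form, the spelled-out equivalent (same behavior, just explicit) would be:
// Explicit four-component constructors; identical result to the scalar "splat" form.
float4 Multiply19 = float4( 0.01, 0.01, 0.01, 0.01 ) * float4( 0.0, 0.0, 0.0, 0.0 );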

GLSL ES precision errors and overflows

I have the following fragment shader:
precision highp float;
varying highp vec2 vTexCoord;
uniform sampler2D uColorTexture;
void main () {
highp vec4 tmp;
tmp = ((texture2D (uColorTexture, vTexCoord) + texture2D (uColorTexture, vTexCoord)) / 2.0);
gl_FragColor = tmp;
}
I know this shader does not make much sense, but it should still run correctly; I'm using it to reproduce a problem. When I analyze this shader with the Xcode OpenGL ES analyzer, it shows an error:
Overflow in implicit conversion, minimum range for lowp float is
(-2,2)
and it not only shows this error; the rendering output is also broken due to overflows. So it's not just a false positive from the analyzer; it actually overflows.
Can anyone explain to me why this produces an overflow even though I chose highp everywhere?
You didn't really choose highp everywhere; from the GLSL ES spec chapter 8 (Built-in Functions):
Precision qualifiers for parameters and return values are not shown. For the texture functions, the precision of the return type matches the precision of the sampler type.
and from 4.5.3 (Default Precision Qualifiers):
The fragment language has the following predeclared globally scoped default precision statements: ... precision lowp sampler2D; ...
Which means that in your code, texture2D(uColorTexture, vTexCoord) will return a lowp value; you are adding two of them, potentially resulting in a value of 2.0.
From 4.5.2 (Precision Qualifiers):
The required minimum ranges and precisions for precision qualifiers are: ... lowp (−2,2) ...
The parentheses in (-2,2) indicate an open range, meaning it includes values up to (but not including) 2.
So I think the fact that you're adding two lowp's together means you're overflowing. Try changing that line to:
tmp = texture2D(uColorTexture, vTexCoord)/2.0 + texture2D(uColorTexture, vTexCoord)/2.0;
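An alternative with the same effect (a sketch based on the same reasoning, not part of the original answer) is to copy each texture result into an explicitly highp temporary first, so the addition itself happens at high precision:
precision highp float;
varying highp vec2 vTexCoord;
uniform sampler2D uColorTexture;

void main () {
    // Each lowp texture result is promoted to highp on assignment, so the
    // intermediate sum can exceed the lowp range without overflowing.
    highp vec4 a = texture2D(uColorTexture, vTexCoord);
    highp vec4 b = texture2D(uColorTexture, vTexCoord);
    gl_FragColor = (a + b) / 2.0;
}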
