In my iOS project running on the iPad Simulator I have 3 shader programs, each of which uses similar but not identical sets of attributes and uniforms. The first two of these shaders compile and work together perfectly, with no GL errors at any point in the compilation or drawing process. I now need to add a third shader, and this has caused an issue: now neither of the other shaders draws anything, and glGetError returns 1282 (GL_INVALID_OPERATION) when I try to use glUniform to pass uniforms. glGetError returns 0 on the line before glUniform is called.
I believe that I am setting the program to the correct one before I try to call glUniform, because when I call glGetIntegerv with GL_CURRENT_PROGRAM before passing the uniforms, it matches the gl name of the program I am trying to use (I call glUseProgram with the program I want to use at the beginning of the method where the problem glUniform calls are made). glGetError returns 0 before and after compiling the problem shader. The compilation code is slightly modified from one of Apple's sample projects, and I see nothing in it that should affect the state of other shaders.
- (BOOL)loadGLProgramWithVertexShader:(NSString *)vertexShaderName fragmentShader:(NSString *)fragmentShaderName
{
    GLuint vertShader, normFragShader;
    NSString *vertShaderPathname, *normFragShaderPathName;

    // Create shader program.
    shaderProgram = glCreateProgram();

    // Create and compile vertex shader.
    vertShaderPathname = [[NSBundle mainBundle] pathForResource:vertexShaderName ofType:@"vsh"];
    if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname]) {
        NSLog(@"Failed to compile vertex shader");
        return NO;
    }

    // Create and compile normal mapping fragment shader.
    normFragShaderPathName = [[NSBundle mainBundle] pathForResource:fragmentShaderName ofType:@"fsh"];
    if (![self compileShader:&normFragShader type:GL_FRAGMENT_SHADER file:normFragShaderPathName]) {
        NSLog(@"Failed to compile fragment shader");
        return NO;
    }

    // Attach vertex shader to program.
    glAttachShader(shaderProgram, vertShader);
    // Attach fragment shader to program.
    glAttachShader(shaderProgram, normFragShader);

    // Bind attribute locations.
    // This needs to be done prior to linking.
    glBindAttribLocation(shaderProgram, ATTRIB_VERTEX, "position");
    glBindAttribLocation(shaderProgram, ATTRIB_TEXTURECOORD, "texCoord");

    // Link program.
    if (![self linkProgram:shaderProgram]) {
        NSLog(@"Failed to link program: %d", shaderProgram);
        if (vertShader) {
            glDeleteShader(vertShader);
            vertShader = 0;
        }
        if (normFragShader) {
            glDeleteShader(normFragShader);
            normFragShader = 0;
        }
        if (shaderProgram) {
            glDeleteProgram(shaderProgram);
            shaderProgram = 0;
        }
        return NO;
    }

    // Get uniform locations.
    uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX] = glGetUniformLocation(shaderProgram, "modelViewProjectionMatrix");
    uniforms[UNIFORM_TEXTURE] = glGetUniformLocation(shaderProgram, "texture");

    // Release vertex and fragment shaders.
    if (vertShader) {
        glDetachShader(shaderProgram, vertShader);
        glDeleteShader(vertShader);
    }
    if (normFragShader) {
        glDetachShader(shaderProgram, normFragShader);
        glDeleteShader(normFragShader);
    }

    return YES;
}
I've found the answer. In my third shader I have a sampler2D named "texture". Changing this name in the shaders and in the call to glGetUniformLocation fixed the problem. I don't understand why, since "texture" is not a reserved word in GLSL and the word isn't used by any other uniform (there is a "texcoord" attribute, but I doubt that caused the problem), but it worked.
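In GLSL terms the fix was just a rename of that sampler uniform; a minimal sketch, where the replacement name diffuseTexture is made up:

```glsl
// Before: the declaration that appeared to collide
// uniform sampler2D texture;

// After: any other name works
uniform sampler2D diffuseTexture;
```

with the matching glGetUniformLocation call updated to the new string.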
EDIT: I actually found the specific reason for this a while ago. I had been using a bit of Apple's GLKit sample project, which binds shader attributes to an enum that, in the sample I used, is placed outside the @implementation of the view controller, meaning its scope is not tied to a specific instance of the class. The class that had this problem actually had two shaders, and when the second was compiled it erased the previous bindings. The real mystery, then, is why the first solution I gave worked at all...
Related
I set up a minimal ray tracing pipeline in vulkan, with any and closest hit shaders that write into buffers and ray payloads.
The problem is that buffer writes from the any hit shader seem not to take effect.
Here is the source code for the closest hit shader:
layout(set = 0, binding = 0, std430) writeonly buffer RayStatusBuffer {
    uint items[];
} gRayStatus;

layout(location = 0) rayPayloadInEXT uint gRayPayload;

void main(void)
{
    gRayStatus.items[0] = 1;
    gRayPayload = 2;
}
The any hit shader code is identical, except for writing 3 and 4 for ray status buffer item and ray payload, respectively.
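Spelled out under that description, the any hit shader would be:

```glsl
layout(set = 0, binding = 0, std430) writeonly buffer RayStatusBuffer {
    uint items[];
} gRayStatus;

layout(location = 0) rayPayloadInEXT uint gRayPayload;

void main(void)
{
    gRayStatus.items[0] = 3; // this buffer write appears to be lost
    gRayPayload = 4;         // this payload write does arrive
}
```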
The buffer associated with gRayStatus is initialized to 0 and fed to the pipeline with:
VkDescriptorSetLayoutBinding statusLB{};
statusLB.binding = 0;
statusLB.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
statusLB.descriptorCount = 1;
statusLB.stageFlags = VK_SHADER_STAGE_ANY_HIT_BIT_KHR | VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR;
By calling traceRayEXT(..., flags = 0, ...) from the raygen shader, I can read back the values 1 and 2 for ray status buffer item and ray payload, respectively and as expected.
But when calling traceRayEXT(..., flags = gl_RayFlagsSkipClosestHitShaderEXT, ...) I would expect the output of the any hit shader (3 and 4) to be present, but I get 0 and 4, as if the buffer write would have been ignored.
Any idea on this?
Sorry for the late response.
From what I know, there could be two causes:
1. Any hit shaders are not called because of the VkGeometryFlagBitsKHR flags in the VkAccelerationStructureGeometryKHR struct used during the creation of a BLAS (geometry marked opaque skips any hit shaders).
2. The conditions under which any hit shaders are called. Looking at this image helped me a lot: https://www.google.com/search?q=DxT+rtx+pipeline&tbm=isch&ved=2ahUKEwiTko-Gv-f8AhXCsEwKHUILD4IQ2-cCegQIABAA&oq=DxT+rtx+pipeline&gs_lcp=CgNpbWcQAzoECCMQJzoFCAAQgAQ6BggAEAUQHjoGCAAQCBAeOgQIABAeOgcIABCABBAYUKEPWIg5YPc5aAlwAHgAgAFmiAGyD5IBBDE4LjSYAQCgAQGqAQtnd3Mtd2l6LWltZ8ABAQ&sclient=img&ei=06DTY9PcIMLhsgLClryQCA&bih=1067&biw=1920&client=firefox-b-d#imgrc=38W16ovqUoCyRM
As you can see from the picture, an any hit shader is called only if the hit geometry is the closest and is not opaque.
My Metal default library does not contain the vertex and fragment shader functions from the .metal file in the same directory.
As a result, library.makeFunction(name: ..) returns nil for both the vertex and fragment functions that should be assigned to the pipelineDescriptor vars.
The metal file & headers are copied from the Apple Sample App "BasicTexturing" (Creating and Sampling Textures).
The files AAPLShaders.metal and AAPLShaderTypes.h contain the vertexShader and samplingShader functions that are loaded by AAPLRenderer.m.
In the sample it's really straightforward:
id<MTLLibrary> defaultLibrary = [_device newDefaultLibrary];
id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"samplingShader"];
I have copied these files to a RayWenderlich Swift tutorial and used the swift version
There is an init to set the library
Renderer.library = device.makeDefaultLibrary()
then
let library = Renderer.library
let importVertexFunction = library?.makeFunction(name: "vertexShader")
let importShaderFunction = library?.makeFunction(name: "samplingShader")
This works just fine!
I do the same thing in my app with the same files copied over, and it does not load the functions.
I have checked compileSources in build settings - it lists the metal file.
Comparing everything in settings and don't see a difference between the working apps and my app.
I don't see any error messages or log messages to indicate a syntax or path problem.
Any ideas?
The Apple sample code AAPLShaders.metal
/*
See LICENSE folder for this sample’s licensing information.
Abstract:
Metal shaders used for this sample
*/
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
// Include header shared between this Metal shader code and C code executing Metal API commands
#import "AAPLShaderTypes.h"
// Vertex shader outputs and per-fragment inputs. Includes clip-space position and vertex outputs
// interpolated by rasterizer and fed to each fragment generated by clip-space primitives.
typedef struct
{
    // The [[position]] attribute qualifier of this member indicates this value is the clip-space
    // position of the vertex when this structure is returned from the vertex shader
    float4 clipSpacePosition [[position]];

    // Since this member does not have a special attribute qualifier, the rasterizer will
    // interpolate its value with the values of other vertices making up the triangle and
    // pass that interpolated value to the fragment shader for each fragment in that triangle
    float2 textureCoordinate;
} RasterizerData;
// Vertex Function
vertex RasterizerData
vertexShader(uint vertexID [[ vertex_id ]],
             constant AAPLVertex *vertexArray [[ buffer(AAPLVertexInputIndexVertices) ]],
             constant vector_uint2 *viewportSizePointer [[ buffer(AAPLVertexInputIndexViewportSize) ]])
{
    RasterizerData out;

    // Index into our array of positions to get the current vertex.
    // Our positions are specified in pixel dimensions (i.e. a value of 100 is 100 pixels from
    // the origin)
    float2 pixelSpacePosition = vertexArray[vertexID].position.xy;

    // Get the size of the drawable so that we can convert to normalized device coordinates
    float2 viewportSize = float2(*viewportSizePointer);

    // The output position of every vertex shader is in clip space (also known as normalized device
    // coordinate space, or NDC). A value of (-1.0, -1.0) in clip space represents the
    // lower-left corner of the viewport whereas (1.0, 1.0) represents the upper-right corner of
    // the viewport.
    // In order to convert from positions in pixel space to positions in clip space we divide the
    // pixel coordinates by half the size of the viewport.
    out.clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);

    // Set the z component of our clip space position to 0 (since we're only rendering in
    // 2 dimensions for this sample)
    out.clipSpacePosition.z = 0.0;

    // Set the w component to 1.0 since we don't need a perspective divide, which is also not
    // necessary when rendering in 2 dimensions
    out.clipSpacePosition.w = 1.0;

    // Pass our input textureCoordinate straight to our output RasterizerData. This value will be
    // interpolated with the other textureCoordinate values in the vertices that make up the
    // triangle.
    out.textureCoordinate = vertexArray[vertexID].textureCoordinate;

    return out;
}
// Fragment function
fragment float4
samplingShader(RasterizerData in [[stage_in]],
               texture2d<half> colorTexture [[ texture(AAPLTextureIndexBaseColor) ]])
{
    constexpr sampler textureSampler(mag_filter::linear,
                                     min_filter::linear);

    // Sample the texture to obtain a color
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    // We return the color of the texture
    return float4(colorSample);
}
The Apple Sample code header AAPLShaderTypes.h
/*
See LICENSE folder for this sample’s licensing information.
Abstract:
Header containing types and enum constants shared between Metal shaders and C/ObjC source
*/
#ifndef AAPLShaderTypes_h
#define AAPLShaderTypes_h
#include <simd/simd.h>
// Buffer index values shared between shader and C code to ensure Metal shader buffer inputs match
// Metal API buffer set calls
typedef enum AAPLVertexInputIndex
{
    AAPLVertexInputIndexVertices = 0,
    AAPLVertexInputIndexViewportSize = 1,
} AAPLVertexInputIndex;

// Texture index values shared between shader and C code to ensure Metal shader buffer inputs match
// Metal API texture set calls
typedef enum AAPLTextureIndex
{
    AAPLTextureIndexBaseColor = 0,
} AAPLTextureIndex;

// This structure defines the layout of each vertex in the array of vertices set as an input to our
// Metal vertex shader. Since this header is shared between our .metal shader and C code,
// we can be sure that the layout of the vertex array in the code matches the layout that
// our vertex shader expects
typedef struct
{
    // Positions in pixel space (i.e. a value of 100 indicates 100 pixels from the origin/center)
    vector_float2 position;

    // 2D texture coordinate
    vector_float2 textureCoordinate;
} AAPLVertex;
#endif /* AAPLShaderTypes_h */
Debug print of my library
Printing description of self.library:
(MTLLibrary?) library = (object = 0x00006000004af7b0) {
object = 0x00006000004af7b0 {
baseNSObject#0 = {
isa = CaptureMTLLibrary
}
Debug print of working library from RayWenderlich sample app
The newly added samplingShader and vertexShader are shown in the library along with the existing fragment and vertex functions.
▿ Optional<MTLLibrary>
- some : <CaptureMTLLibrary: 0x600000f54210> -> <MTLDebugLibrary: 0x600002204050> -> <_MTLLibrary: 0x600001460280>
label = <none>
device = <MTLSimDevice: 0x15a5069d0>
name = Apple iOS simulator GPU
functionNames: fragment_main vertex_main samplingShader vertexShader
Did you check the target membership of the file? Your code looks fine, so please check the target.
Answer: the issue of functions not loading into the Metal library was resolved by removing a leftover -fcikernel flag in the Other Metal Compiler Flags option of the project target's Build Settings.
The flag was set when testing a CoreImageKernel.metal as documented in https://developer.apple.com/documentation/coreimage/cikernel/2880194-init
I removed the kernel definition file from the app but missed the compiler flag, and then missed it again when visually comparing build settings.
I am currently creating an Objective-C++ iOS OpenGL ES 3.0 3D game engine, but when I try to load shaders, glCreateProgram() returns 0.
I am using #include <OpenGLES/ES3/gl.h> as my OpenGL base.
Here is my code:
m_program = glCreateProgram(); // ERROR
if (m_program == 0)
{
    fprintf(stderr, "Error creating shader program\n"); // Catches ERROR
    exit(1);
}
std::string vertexShaderText = LoadShader(fileName + ".vs"); //Returns file contents.
std::string fragmentShaderText = LoadShader(fileName + ".fs"); //Returns file contents.
AddVertexShader(vertexShaderText); //Creates and attaches shader to program
AddFragmentShader(fragmentShaderText); //Creates and attaches shader to program
AddAllAttributes(vertexShaderText); //Identifies all attributes in shader and adds them.
CompileShader(); //Compile the program, link the program and use the program
AddShaderUniforms(vertexShaderText); //Identifies all uniforms and adds them.
AddShaderUniforms(fragmentShaderText); //Identifies all uniforms and adds them.
I wrote a tiny function to switch between shaders, but unfortunately it doesn't work.
I create the shaders and return them as shader objects.
program = gl.createProgram(vs, fs, "VertexPosition", "TextureCoord", "VertexNormal");
depthprogram = gl.createProgram(vs, fs, "VertexPosition", false, false);
I use the switch function in my update function, which consists of:
function Update(){
    gl.switchProgram(program);
    gl.Draw(Teapot, cam);
};
All buffers of the model are bound in the draw function (parameters: the model, the camera position).
this.switchProgram = function(program){
    if (this.program !== program){
        this.lastprogram = this.program;
        this.program = program;
        gl.useProgram(this.program);
    }
};
Here is the resulting error: WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
If I comment out the following line everything works fine, but then I'm unable to switch:
depthprogram = gl.createProgram(vs, fs, "VertexPosition", false, false);
gl.switchProgram(program);
Do both shaders use the same attributes?
As far as I can conclude from your code, the first shader uses:
attribute VertexPosition;
attribute VertexNormal;
attribute TextureCoord;
while the second uses only
attribute VertexPosition;
If you send an attribute that isn't used in the program's vertex shader, or you don't send an attribute that the vertex shader requires, you'll get that warning.
I would say that you aren't sending VertexNormal and TextureCoord for the first shader, although the shader needs them.
Hope this helps.
I have an OpenGL shader which uses gl_TexCoord, like the following does. But in OpenGL ES, gl_TexCoord is not supported. I wonder what I can do to refactor the code to get it working on OpenGL ES.
void main()
{
    // scene depth calculation
    float depth = linearize(texture2D(inputImageTexture2, gl_TexCoord[0].xy).x);
    if (depthblur)
    {
        depth = linearize(bdepth(gl_TexCoord[0].xy));
    }
    ...
}
There isn't one. You do it manually, with a user-defined varying passed from your vertex shader. That's all gl_TexCoord ever was anyway: a per-vertex output that your fragment shader took as input.
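A minimal sketch of that pattern, assuming hypothetical attribute names aPosition and aTexCoord (match whatever your vertex pipeline already provides):

```glsl
// Vertex shader: forward the texture coordinate through a user-defined varying
attribute vec4 aPosition;  // hypothetical names; use your own attributes
attribute vec2 aTexCoord;
varying vec2 vTexCoord;    // replaces gl_TexCoord[0]

void main()
{
    vTexCoord = aTexCoord;
    gl_Position = aPosition;
}
```

```glsl
// Fragment shader: read the interpolated varying instead of gl_TexCoord[0].xy
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D inputImageTexture2;

void main()
{
    // wherever the original read gl_TexCoord[0].xy, read vTexCoord instead
    gl_FragColor = texture2D(inputImageTexture2, vTexCoord);
}
```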