OpenGL ES iOS glCreateProgram() failure

I am currently creating an Objective-C++ iOS OpenGL ES 3.0 3D game engine, but when I try to load shaders, glCreateProgram() returns 0.
I am using #include <OpenGLES/ES3/gl.h> as my OpenGL base.
Here is my code:
m_program = glCreateProgram(); //ERROR
if (m_program == 0)
{
    fprintf(stderr, "Error creating shader program\n"); //Catches ERROR
    exit(1);
}
std::string vertexShaderText = LoadShader(fileName + ".vs"); //Returns file contents.
std::string fragmentShaderText = LoadShader(fileName + ".fs"); //Returns file contents.
AddVertexShader(vertexShaderText); //Creates and attaches shader to program
AddFragmentShader(fragmentShaderText); //Creates and attaches shader to program
AddAllAttributes(vertexShaderText); //Identifies all attributes in shader and adds them.
CompileShader(); //Compile the program, link the program and use the program
AddShaderUniforms(vertexShaderText); //Identifies all uniforms and adds them.
AddShaderUniforms(fragmentShaderText); //Identifies all uniforms and adds them.
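One thing worth checking: glCreateProgram() returns 0 here if no GL context is current on the calling thread when it runs. On iOS that means an EAGLContext must be created and made current before any gl* call; a minimal sketch of that check (assuming an EAGL-based setup, names illustrative):
#import <OpenGLES/EAGL.h>
// Minimal sketch: make a context current before creating any GL objects.
EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
if (context == nil || ![EAGLContext setCurrentContext:context])
{
    fprintf(stderr, "Failed to make an ES 3.0 context current\n");
    exit(1);
}
m_program = glCreateProgram(); // now returns a non-zero program name on success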

Related

Metal defaultLibrary does not load .metal functions

My Metal default library does not contain the vertex and fragment functions from the .metal file in the same directory.
As a result, library.makeFunction(name: ..) returns nil for both the vertex and fragment functions that should be assigned to the pipelineDescriptor vars.
The metal file & headers are copied from the Apple Sample App "BasicTexturing" (Creating and Sampling Textures).
The files AAPLShaders.metal and AAPLShaderTypes.h contain the vertexShader and samplingShader functions that are loaded by AAPLRenderer.m.
In the sample it's really straightforward:
id<MTLLibrary> defaultLibrary = [_device newDefaultLibrary];
id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"samplingShader"];
I have copied these files to a RayWenderlich Swift tutorial and used the Swift version.
There is an init to set the library:
Renderer.library = device.makeDefaultLibrary()
then
let library = Renderer.library
let importVertexFunction = library?.makeFunction(name: "vertexShader")
let importShaderFunction = library?.makeFunction(name: "samplingShader")
This works just fine!
Same thing in my app with the same files copied over, and it does not load the functions.
I have checked Compile Sources in the build phases - it lists the .metal file.
I've compared everything in the settings and don't see a difference between the working apps and my app.
I don't see any error messages or log messages to indicate a syntax or path problem.
Any ideas?
The Apple sample code AAPLShaders.metal:
/*
See LICENSE folder for this sample's licensing information.
Abstract:
Metal shaders used for this sample
*/

#include <metal_stdlib>
#include <simd/simd.h>

using namespace metal;

// Include header shared between this Metal shader code and C code executing Metal API commands
#import "AAPLShaderTypes.h"

// Vertex shader outputs and per-fragment inputs. Includes clip-space position and vertex outputs
// interpolated by rasterizer and fed to each fragment generated by clip-space primitives.
typedef struct
{
    // The [[position]] attribute qualifier of this member indicates this value is the clip space
    // position of the vertex when this structure is returned from the vertex shader
    float4 clipSpacePosition [[position]];

    // Since this member does not have a special attribute qualifier, the rasterizer will
    // interpolate its value with values of other vertices making up the triangle and
    // pass that interpolated value to the fragment shader for each fragment in that triangle
    float2 textureCoordinate;
} RasterizerData;

// Vertex Function
vertex RasterizerData
vertexShader(uint vertexID [[ vertex_id ]],
             constant AAPLVertex *vertexArray [[ buffer(AAPLVertexInputIndexVertices) ]],
             constant vector_uint2 *viewportSizePointer [[ buffer(AAPLVertexInputIndexViewportSize) ]])
{
    RasterizerData out;

    // Index into our array of positions to get the current vertex.
    // Our positions are specified in pixel dimensions (i.e. a value of 100 is 100 pixels from
    // the origin)
    float2 pixelSpacePosition = vertexArray[vertexID].position.xy;

    // Get the size of the drawable so that we can convert to normalized device coordinates
    float2 viewportSize = float2(*viewportSizePointer);

    // The output position of every vertex shader is in clip space (also known as normalized device
    // coordinate space, or NDC). A value of (-1.0, -1.0) in clip-space represents the
    // lower-left corner of the viewport whereas (1.0, 1.0) represents the upper-right corner of
    // the viewport.
    // In order to convert from positions in pixel space to positions in clip space, we divide the
    // pixel coordinates by half the size of the viewport.
    out.clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);

    // Set the z component of our clip space position to 0 (since we're only rendering in
    // 2 dimensions for this sample)
    out.clipSpacePosition.z = 0.0;

    // Set the w component to 1.0 since we don't need a perspective divide, which is also not
    // necessary when rendering in 2 dimensions
    out.clipSpacePosition.w = 1.0;

    // Pass our input textureCoordinate straight to our output RasterizerData. This value will be
    // interpolated with the other textureCoordinate values in the vertices that make up the
    // triangle.
    out.textureCoordinate = vertexArray[vertexID].textureCoordinate;

    return out;
}

// Fragment function
fragment float4
samplingShader(RasterizerData in [[stage_in]],
               texture2d<half> colorTexture [[ texture(AAPLTextureIndexBaseColor) ]])
{
    constexpr sampler textureSampler (mag_filter::linear,
                                      min_filter::linear);

    // Sample the texture to obtain a color
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    // We return the color of the texture
    return float4(colorSample);
}
The Apple sample code header AAPLShaderTypes.h:
/*
See LICENSE folder for this sample's licensing information.
Abstract:
Header containing types and enum constants shared between Metal shaders and C/ObjC source
*/

#ifndef AAPLShaderTypes_h
#define AAPLShaderTypes_h

#include <simd/simd.h>

// Buffer index values shared between shader and C code to ensure Metal shader buffer inputs match
// Metal API buffer set calls
typedef enum AAPLVertexInputIndex
{
    AAPLVertexInputIndexVertices     = 0,
    AAPLVertexInputIndexViewportSize = 1,
} AAPLVertexInputIndex;

// Texture index values shared between shader and C code to ensure Metal shader buffer inputs match
// Metal API texture set calls
typedef enum AAPLTextureIndex
{
    AAPLTextureIndexBaseColor = 0,
} AAPLTextureIndex;

// This structure defines the layout of each vertex in the array of vertices set as an input to our
// Metal vertex shader. Since this header is shared between our .metal shader and C code,
// we can be sure that the layout of the vertex array in the code matches the layout that
// our vertex shader expects
typedef struct
{
    // Positions in pixel space (i.e. a value of 100 indicates 100 pixels from the origin/center)
    vector_float2 position;

    // 2D texture coordinate
    vector_float2 textureCoordinate;
} AAPLVertex;

#endif /* AAPLShaderTypes_h */
Debug print of my library
Printing description of self.library:
(MTLLibrary?) library = (object = 0x00006000004af7b0) {
object = 0x00006000004af7b0 {
baseNSObject#0 = {
isa = CaptureMTLLibrary
}
Debug print of working library from RayWenderlich sample app
The newly added samplingShader and vertexShader are shown in the library along with the existing fragment and vertex functions.
▿ Optional<MTLLibrary>
- some : <CaptureMTLLibrary: 0x600000f54210> -> <MTLDebugLibrary: 0x600002204050> -> <_MTLLibrary: 0x600001460280>
label = <none>
device = <MTLSimDevice: 0x15a5069d0>
name = Apple iOS simulator GPU
functionNames: fragment_main vertex_main samplingShader vertexShader
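(A quick way to make the same comparison in code, assuming the Swift setup above, is to print the library's functionNames:)
// Should list vertexShader / samplingShader; in the failing build they are missing.
print(Renderer.library?.functionNames ?? [])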
Did you check the target membership of the file? There's nothing weird in your code, so please check the target.
Answer - the issue of functions not loading into the Metal library was resolved by removing a leftover -fcikernel flag from the Other Metal Compiler Flags option in the Build Settings of the project target.
The flag was set when testing a CoreImageKernel.metal as documented in https://developer.apple.com/documentation/coreimage/cikernel/2880194-init
I removed the kernel definition file from the app but missed the compiler flag, and then missed it again when visually comparing build settings.
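For anyone scanning for the same leftover: expressed as an xcconfig line (illustrative; Other Metal Compiler Flags is assumed here to map to the MTL_COMPILER_FLAGS setting), the offending entry would look like:
// Leftover from the CIKernel experiment; deleting this line lets the .metal
// sources compile into the default library normally.
MTL_COMPILER_FLAGS = -fcikernel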

How to manually render a mesh loaded with the DirectX Toolkit

I have a C++/CX project where I'm rendering procedural meshes using DirectX 11. It all seems to work fine, but now I want to also import and render meshes from files (from .fbx, to be exact).
I was told to use the DirectX Toolkit for this.
I followed the tutorials of the toolkit, and that all worked, but when I tried doing the same in my project it didn't seem to work: the imported mesh was not visible, and the existing procedural meshes were rendered incorrectly (as if without a depth buffer).
I then tried manually rendering the imported mesh (identically to the procedural meshes, without using the Draw function from DirectXTK).
This works better: the existing meshes are all correct, but the imported mesh's colors are wrong. I use custom vertex and pixel shaders that use only vertex position and color data, but for some reason the imported mesh's normals are sent to the shader instead of the vertex colors.
(I don't even want the normals stored in the mesh, but I don't seem to have the option to export to fbx without normals, and even if I remove them manually from the fbx, DirectXTK seems to recalculate the normals at import.)
Does anyone know what I'm doing wrong?
This is all still relatively new to me, so any help is appreciated.
If you need more info, just let me know.
Here is my code for rendering meshes:
First the main render function (which is called once every update):
void Track3D::Render()
{
    if (!_loadingComplete)
    {
        return;
    }

    static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };

    // Prepare to pass the view matrix, and updated model matrix, to the shader
    XMStoreFloat4x4(&_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(_CameraPosition, _CameraLookat, up)));

    // Clear the back buffer and depth stencil view.
    _d3dContext->ClearRenderTargetView(_renderTargetView.Get(), DirectX::Colors::Transparent);
    _d3dContext->ClearDepthStencilView(_depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // Set render targets to the screen.
    ID3D11RenderTargetView *const targets[1] = { _renderTargetView.Get() };
    _d3dContext->OMSetRenderTargets(1, targets, _depthStencilView.Get());

    // Here I render everything:
    _TrackMesh->Render(_constantBufferData);
    RenderExtra();
    _ImportedMesh->Render(_constantBufferData);

    Present();
}
The Present-function:
void Track3D::Present()
{
    DXGI_PRESENT_PARAMETERS parameters = { 0 };
    parameters.DirtyRectsCount = 0;
    parameters.pDirtyRects = nullptr;
    parameters.pScrollRect = nullptr;
    parameters.pScrollOffset = nullptr;

    HRESULT hr = S_OK;
    hr = _swapChain->Present1(1, 0, &parameters);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        OnDeviceLost();
    }
    else
    {
        if (FAILED(hr))
        {
            throw Platform::Exception::CreateException(hr);
        }
    }
}
Here's the render function which I call on every mesh:
(All of the mesh-specific data comes from the imported mesh)
void Mesh::Render(ModelViewProjectionConstantBuffer constantBufferData)
{
    if (!_loadingComplete)
    {
        return;
    }

    XMStoreFloat4x4(&constantBufferData.model, XMLoadFloat4x4(&_modelMatrix));

    // Prepare the constant buffer to send it to the graphics device.
    _d3dContext->UpdateSubresource(
        _constantBuffer.Get(),
        0,
        NULL,
        &constantBufferData,
        0,
        0
        );

    UINT offset = 0;
    _d3dContext->IASetVertexBuffers(
        0,
        1,
        _vertexBuffer.GetAddressOf(),
        &_stride,
        &offset
        );

    _d3dContext->IASetIndexBuffer(
        _indexBuffer.Get(),
        DXGI_FORMAT_R16_UINT, // Each index is one 16-bit unsigned integer (short).
        0
        );

    _d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    _d3dContext->IASetInputLayout(_inputLayout.Get());

    // Attach our vertex shader.
    _d3dContext->VSSetShader(
        _vertexShader.Get(),
        nullptr,
        0
        );

    // Send the constant buffer to the graphics device.
    _d3dContext->VSSetConstantBuffers(
        0,
        1,
        _constantBuffer.GetAddressOf()
        );

    // Attach our pixel shader.
    _d3dContext->PSSetShader(
        _pixelShader.Get(),
        nullptr,
        0
        );

    SetTexture();

    // Draw the objects.
    _d3dContext->DrawIndexed(
        _indexCount,
        0,
        0
        );
}
And this is the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
};

struct VertexShaderInput
{
    float3 pos : POSITION;
    //float3 normal : NORMAL0; // uncommenting these changes the color data for some reason (but always wrong)
    //float2 uv1 : TEXCOORD0;
    //float2 uv2 : TEXCOORD1;
    float3 color : COLOR0;
};

struct VertexShaderOutput
{
    float3 color : COLOR0;
    float4 pos : SV_POSITION;
};

VertexShaderOutput main(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 pos = float4(input.pos, 1.0f);

    // Transform the vertex position into projected space.
    pos = mul(pos, model);
    pos = mul(pos, view);
    pos = mul(pos, projection);
    output.pos = pos;
    output.color = input.color;
    return output;
}
And this is the pixel shader:
struct PixelShaderInput
{
    float3 color : COLOR0;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    return float4(input.color.r, input.color.g, input.color.b, 1);
}
The most likely issue is that you are not setting enough state for your drawing, and that the DirectX Tool Kit drawing functions are setting states that don't match what your existing code requires.
For performance reasons, DirectX Tool Kit does not 'save & restore' state. Instead each draw function sets the state it needs fully and then leaves it. I document which state is impacted in the wiki under the State management section for each class.
Your code above sets the vertex buffer, index buffer, input layout, vertex shader, pixel shader, primitive topology, and VS constant buffer in slot 0.
You did not set the blend state, depth/stencil state, or rasterizer state. Your pixel shader as shown doesn't declare any PS constant buffers, samplers, or shader resources, but whatever your SetTexture() call binds should be consistent with what the pixel shader actually expects.
Try explicitly setting the blend state, depth/stencil state, and rasterizer state before you draw your procedural meshes. If you just want to go back to the defined defaults instead of whatever DirectX Tool Kit did, call:
_d3dContext->RSSetState(nullptr);
_d3dContext->OMSetBlendState(nullptr, nullptr, 0xFFFFFFFF);
_d3dContext->OMSetDepthStencilState(nullptr, 0);
See also the CommonStates class.
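As a sketch of that route (DirectX Tool Kit assumed to be linked; SetDefaultStates, _states, and _d3dDevice are illustrative names, not part of the question's code):
#include <CommonStates.h> // DirectX Tool Kit
#include <memory>

std::unique_ptr<DirectX::CommonStates> _states; // create once, alongside the device

void Track3D::SetDefaultStates() // hypothetical helper, called before drawing
{
    if (!_states)
    {
        _states = std::make_unique<DirectX::CommonStates>(_d3dDevice.Get());
    }
    // Pin down the output-merger and rasterizer state explicitly instead of
    // inheriting whatever a DirectXTK Draw call left behind.
    _d3dContext->OMSetBlendState(_states->Opaque(), nullptr, 0xFFFFFFFF);
    _d3dContext->OMSetDepthStencilState(_states->DepthDefault(), 0);
    _d3dContext->RSSetState(_states->CullCounterClockwise());
}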
It's generally not a good idea to use identifiers that start with _ in C++. Officially, all identifiers that start with an underscore followed by a capital letter, or that contain a double underscore, are reserved for the compiler and library implementers, so they could conflict with compiler internals. m_ or something similar is better.
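For example:
float _Speed;     // reserved: underscore followed by a capital letter
float speed__max; // reserved: contains a double underscore
float m_speed;    // fine: conventional member-style prefix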

WebGL code not working

I am new to WebGL. I am trying to draw a triangle using WebGL, but my program shows nothing, and in the console window I am getting the following warnings:
WebGL: INVALID_OPERATION: useProgram: program not valid
(index):96 WebGL: INVALID_OPERATION: getAttribLocation: program not linked
(index):107 WebGL: INVALID_OPERATION: drawArrays: no valid shader program in use
Here is my code:
var gl;
var canvas = document.getElementById("painter");

// Step 1: Create the gl context, using experimental-webgl at this stage
try
{
    gl = canvas.getContext("experimental-webgl");
}
catch(e)
{
    alert("WebGL not Supported");
}

// Step 2: Create the vertex shader. Pass the attribute vec4 from the application
var shader_vertex_source="\n\
attribute vec4 vPosition; \n\
void main(vaoid) \n\
{\n\
gl_Position=vPosition;\n\
}";

// Step 3: Create the fragment shader. Must pass the precision variable from the application.
// Normally specify medium precision, as it should be supported by all devices
var shader_fragment_source="\n\
precision mediump float;\n\
void main(void)\n\
{\n\
gl_FragColor=vec4(1.0,0.0,0.0,1.0);\n\
}";

// Step 4: Configure the WebGL viewport
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(1.0, 1.0, 1.0, 1.0);

// Step 5: Create the vertices of the triangle. Each pair represents a point;
// (-1,-1), (0,1), (1,-1) will be the three vertices of the triangle
var vertice = new Float32Array([-1, -1, 0, 1, 1, -1]);

// Step 6: Compile the shaders.
var get_shader = function(shadersource, shadertype)
{
    var shader = gl.createShader(shadertype);
    gl.shaderSource(shader, shadersource);
    gl.compileShader(shader);
    return shader;
}

// Now use the above function to get compiled vertex and fragment shaders.
// The following two variables hold the compiled shaders
var vertex_shader = get_shader(shader_vertex_source, gl.VERTEX_SHADER);
var fragment_shader = get_shader(shader_fragment_source, gl.FRAGMENT_SHADER);

// Step 7: Create a program
var program = gl.createProgram();

// Step 8: Attach the shaders to the program
gl.attachShader(program, vertex_shader);
gl.attachShader(program, fragment_shader);

// Step 9: Link the program with the gl context
gl.linkProgram(program);

// Step 10: Tell the gl context to use the linked program
gl.useProgram(program);

// Step 11: Create a buffer, needed to send data to the GPU
var buffer = gl.createBuffer();

// Step 12: Bind the created buffer to the gl context
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);

// Step 13: Now send the data to the GPU through the buffer. We are going to send the vertex data and drawing info
gl.bufferData(gl.ARRAY_BUFFER, vertice, gl.STATIC_DRAW);

// Step 14: Get the position attribute location from the program, using the attribute name
var position = gl.getAttribLocation(program, "vPosition"); // Our position attribute name is vPosition

// Step 15: Enable the position attribute, as it will be used to draw the vertices of the triangle
gl.enableVertexAttribArray(position);

// Step 16: Tell it to use position, which gets its data from the GPU, to draw the triangle.
// The 2 here means: use two values from the array at a time to make a vertex
gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);

// Step 17: Render using draw methods
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);
I am also attaching a jsfiddle link for better help.
I tried your code, but I edited your get_shader function to add a compilation error check:
var get_shader = function(shadersource, shadertype)
{
    var shader = gl.createShader(shadertype);
    gl.shaderSource(shader, shadersource);
    gl.compileShader(shader);
    // Check for any compilation error
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        alert(gl.getShaderInfoLog(shader));
        return null;
    }
    return shader;
}
I got the following error:
ERROR: 0:3: 'vaoid' : syntax error
It was coming from your vertex shader:
var shader_vertex_source="\n\
attribute vec4 vPosition; \n\
void main(vaoid) \n\ // RIGHT HERE !
{\n\
gl_Position=vPosition;\n\
}";
So finally it was just a typing mistake.
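Worth adding the matching check after linking too, since two of the warnings complain about the program not being valid/linked:
gl.linkProgram(program);
// Check for any link error before using the program
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    alert(gl.getProgramInfoLog(program));
}
gl.useProgram(program);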

Compiling GLSL shader breaks other shaders

In my iOS project running on the iPad Simulator, I have 3 shader programs, each of which uses similar but not identical sets of attributes and uniforms. The first two of these shaders compile and work together perfectly, with no GL errors at any point in the compilation or drawing process. I now need to add a third shader, and this has caused an issue: now neither of the other shaders draws anything, and glGetError returns 1282 (GL_INVALID_OPERATION) when I try to pass uniforms with glUniform. glGetError returns 0 on the line before glUniform is called.
I believe that I am setting the program to the correct one before I try to call glUniform, because when I call glGetIntegerv with GL_CURRENT_PROGRAM before passing the uniforms, it matches the gl name of the program I am trying to use (I call glUseProgram with the program I want to use at the beginning of the method where the problem glUniform calls are made). glGetError returns 0 before and after compiling the problem shader. The compilation code is slightly modified from one of Apple's sample projects, and I see nothing in it that should affect the state of other shaders.
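(For reference, the sanity check described above looks roughly like this; mvp stands in for the actual matrix pointer:)
GLint current = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &current);
NSLog(@"current program: %d, expected: %d", current, shaderProgram);
// These match, yet the very next call still raises GL_INVALID_OPERATION:
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, GL_FALSE, mvp);
NSLog(@"glGetError after glUniform: 0x%x", glGetError());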
- (BOOL)loadGLProgramWithVertexShader: (NSString*) vertexShaderName fragmentShader: (NSString*) fragementShaderName
{
    GLuint vertShader, normFragShader;
    NSString *vertShaderPathname, *normFragShaderPathName;

    // Create shader program.
    shaderProgram = glCreateProgram();

    // Create and compile vertex shader.
    vertShaderPathname = [[NSBundle mainBundle] pathForResource: vertexShaderName ofType:@"vsh"];
    if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname]) {
        NSLog(@"Failed to compile vertex shader");
        return NO;
    }

    // Create and compile normal mapping fragment shader.
    normFragShaderPathName = [[NSBundle mainBundle] pathForResource: fragementShaderName ofType:@"fsh"];
    if (![self compileShader:&normFragShader type:GL_FRAGMENT_SHADER file:normFragShaderPathName]) {
        NSLog(@"Failed to compile fragment shader");
        return NO;
    }

    // Attach vertex shader to program.
    glAttachShader(shaderProgram, vertShader);

    // Attach fragment shader to program.
    glAttachShader(shaderProgram, normFragShader);

    // Bind attribute locations.
    // This needs to be done prior to linking.
    glBindAttribLocation(shaderProgram, ATTRIB_VERTEX, "position");
    glBindAttribLocation(shaderProgram, ATTRIB_TEXTURECOORD, "texCoord");

    // Link program.
    if (![self linkProgram:shaderProgram]) {
        NSLog(@"Failed to link program: %d", shaderProgram);
        if (vertShader) {
            glDeleteShader(vertShader);
            vertShader = 0;
        }
        if (normFragShader) {
            glDeleteShader(normFragShader);
            normFragShader = 0;
        }
        if (shaderProgram) {
            glDeleteProgram(shaderProgram);
            shaderProgram = 0;
        }
        return NO;
    }

    // Get uniform locations.
    uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX] = glGetUniformLocation(shaderProgram, "modelViewProjectionMatrix");
    uniforms[UNIFORM_TEXTURE] = glGetUniformLocation(shaderProgram, "texture");

    // Release vertex and fragment shaders.
    if (vertShader) {
        glDetachShader(shaderProgram, vertShader);
        glDeleteShader(vertShader);
    }
    if (normFragShader) {
        glDetachShader(shaderProgram, normFragShader);
        glDeleteShader(normFragShader);
    }

    return YES;
}
I've found the answer. In my third shader I have a sampler2D named "texture". Changing this name in the shaders and in the call to glGetUniformLocation fixed the problem. I don't understand why, since "texture" is not a reserved word in GLSL and there are no other uses of the word in any other uniform (there is a "texcoord" attribute, but I doubt that caused the problem), but it worked.
EDIT: I actually found the specific reason for this a while ago. I had been using a bit of Apple's GLKit sample project, which binds shader attributes to an enum that, in the sample I used, is placed outside the @implementation of the view controller, meaning its scope is not limited to a specific instance of the class. The class that had this problem actually had two shaders, and when the second was compiled it erased the previous bindings. The real mystery then is why the first solution I gave worked in the first place...

What's the counterpart of gl_TexCoord in OpenGL ES 2.0?

I have an OpenGL shader which uses gl_TexCoord, like the following does. But in OpenGL ES, gl_TexCoord is not supported. I wonder what I can do to refactor the code to get it working on OpenGL ES.
void main()
{
    // scene depth calculation
    float depth = linearize(texture2D(inputImageTexture2, gl_TexCoord[0].xy).x);
    if (depthblur)
    {
        depth = linearize(bdepth(gl_TexCoord[0].xy));
    }
    ...
}
There isn't one. You do it manually, with a user-defined varying passed from your vertex shader. That's all gl_TexCoord ever was anyway: a per-vertex output that your fragment shader took.
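A minimal sketch of that refactor in ES 2.0 GLSL (the attribute/varying names are placeholders, and the app is assumed to supply texture coordinates as a vertex attribute):
// Vertex shader: forward the per-vertex texture coordinate yourself.
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord; // stands in for gl_TexCoord[0]

void main()
{
    vTexCoord = aTexCoord;
    gl_Position = aPosition;
}

// Fragment shader: read the interpolated varying instead of gl_TexCoord[0].xy.
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D inputImageTexture2;

void main()
{
    float depth = texture2D(inputImageTexture2, vTexCoord).x;
    gl_FragColor = vec4(vec3(depth), 1.0);
}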
