I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; I am trying to provide alternative shaders for lower-end hardware.
I end up getting single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
//if I don't do this:
//WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, attachedFS);
//select a random shader, all using the same parameters
attachedFS = fragmentShaders[~~(Math.random()*fragmentShaders.length)];
//attach the new shader
gl.attachShader(program, attachedFS);
//if I don't do this nothing happens
gl.linkProgram(program);
//if I don't add this line:
//globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
//location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader);
Have you tried testing the code in a different browser to see if you get the same behavior? (It could be implementation-specific.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If this does not let you move forward, you may have to detach both shaders (vertex & fragment) and reattach them, or even recreate the program from scratch.
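A rough sketch of that last suggestion, assuming a vertexShader variable alongside the fragmentShaders array from the question:
// Detach both shaders, reattach them, and relink
gl.detachShader(program, vertexShader);
gl.detachShader(program, attachedFS);
attachedFS = fragmentShaders[~~(Math.random() * fragmentShaders.length)];
gl.attachShader(program, vertexShader);
gl.attachShader(program, attachedFS);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
}
gl.useProgram(program);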
I found the issue, after trying just about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes.
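For anyone hitting the same thing: after re-linking, attribute (and uniform) locations may change, so they need to be looked up again and the vertex attributes re-pointed. A minimal sketch, where the attribute names, buffer, stride, and offset are placeholders:
// After gl.linkProgram(program) and updateLocations()
gl.useProgram(program);
var positionLocation = gl.getAttribLocation(program, "aPosition"); // placeholder name
var texCoordLocation = gl.getAttribLocation(program, "aTexCoord"); // placeholder name
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, stride, 0);
gl.enableVertexAttribArray(texCoordLocation);
gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, stride, texCoordOffset);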
I'm trying to figure out how to apply shaders to my ARSCNView.
Previously, when using a standard SCNView, I have successfully been able to apply a distortion shader in the following way:
if let path = Bundle.main.path(forResource: "art.scnassets/distortion", ofType: "plist") {
    if let dict = NSDictionary(contentsOfFile: path) {
        let technique = SCNTechnique(dictionary: dict as! [String : AnyObject])
        scnView.technique = technique
    }
}
Replacing SCNView with ARSCNView gives me the following error(s):
"Error: Metal renderer does not support nil vertex function name"
"Error: _executeProgram - no pipeline state"
I was thinking it's because ARSCNView uses a different renderer than SCNView. But logging ARSCNView.renderingAPI tells me nothing about the renderer, and I can't seem to choose one when I construct my ARSCNView instance. I must be missing something obvious, because I can't find a single resource when scouring for references online.
My initial idea was to instead use an SCNProgram to apply the shaders. But I can't find any resources on how to apply it to an ARSCNView, or whether that's even a correct/possible solution; SCNProgram seems to be reserved for materials.
Is anyone able to give me any useful pointers on how to get vertex + fragment shaders working with an ARSCNView?
SCNTechnique for ARSCNView does not work with GLSL shaders; instead, Metal functions need to be provided in the technique's plist file under the keys metalVertexShader and metalFragmentShader.
On the contrary, the documentation says any combination of shaders should work:
You must specify both fragment and vertex shaders, and you must specify either a GLSL shader program, a pair of Metal functions, or both. If both are specified, SceneKit uses whichever shader is appropriate for the current renderer.
So it might be a mistake, but I guess the documentation is simply outdated: since all devices that run ARKit also run Metal, GLSL support has not been added to ARSCNView. As iOS 12 deprecates OpenGL ES, this looks intentional.
I had this issue with ARKit on iOS 11.4 and 12, and it came down to a series of misspelled shader names. I hope this helps someone.
I'm getting
Uncaught TypeError: Type error
When I have the WebGL Inspector enabled (in Chrome), this error originates in a file that starts with 'VM' and ends in a sequence of digits (not sure what code owns that -- is it core browser behavior or the WebGL Inspector?). This is the line:
// Call real function
var result = originalFunction.apply(context.rawgl, arguments);
I enabled the debug context and am logging all WebGL calls. This is the call that breaks:
uniform1i(3, 0)
In the WebGL inspector, I see that the uniform at index 3 is my uniform sampler2D uSampler in my fragment shader. The API documentation says that this is a GLint, so the type is correct. I also tried setting some other uniforms first and they also fail with the same error.
I'm reworking some existing code I wrote after following tutorials, and one of the things I'm adding is interleaved vertex data. I'm sure that is the root cause; however, this is the third time I've come across an error like this, and my only recourse has been to massage the code until it goes away. It feels random, and it's frustrating.
Are there any more debugging techniques? I assume it's an error in the shaders. Is there some way to get a stack trace from them?
uniform1i(3, 0)
is not valid WebGL. The uniform functions require a WebGLUniformLocation object, which can only be obtained by calling gl.getUniformLocation.
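For the sampler named in the question, the lookup and the call would look roughly like this (assuming program is the linked program; the texture variable is a placeholder):
// Look the location up once after linking; it is an opaque WebGLUniformLocation
var uSamplerLocation = gl.getUniformLocation(program, "uSampler");
gl.useProgram(program);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(uSamplerLocation, 0); // 0 here is the texture unit, not a location index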
This is different from OpenGL. The reason is that you are not allowed to do math on uniform locations. In OpenGL, developers often make that mistake. They'll do something like this:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location = glGetUniformLocation(program, "arrayOfFloats");
glUniform1f(location, 123.0f);
glUniform1f(location + 1, 456.0f); // BAD!!!
That second glUniform1f call is not valid OpenGL, but it might work depending on the driver.
In WebGL they wanted to make that type of mistake impossible, because web pages need to work everywhere, whereas OpenGL programs only need to work on the platform they are compiled on. To make it work everywhere, they had gl.getUniformLocation return an opaque object so you can't do math on the result.
The correct way to write the code above in OpenGL is
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location0 = glGetUniformLocation(program, "arrayOfFloats[0]");
GLint location1 = glGetUniformLocation(program, "arrayOfFloats[1]");
glUniform1f(location0, 123.0f);
glUniform1f(location1, 456.0f);
And in WebGL it is
--in shader--
uniform float arrayOfFloats[4];
--in code--
var location0 = gl.getUniformLocation(program, "arrayOfFloats[0]");
var location1 = gl.getUniformLocation(program, "arrayOfFloats[1]");
gl.uniform1f(location0, 123.0);
gl.uniform1f(location1, 456.0);
While reading the specification at Khronos, I found:
bufferData(ulong target, Object data, ulong usage)
The 'usage' parameter can be: STREAM_DRAW, STATIC_DRAW, or DYNAMIC_DRAW.
My question is, which one should I use?
What are the advantages, what are the differences?
Why would I choose to use some other instead STATIC_DRAW?
Thanks.
For 'desktop' OpenGL, there is a good explanation here:
http://www.opengl.org/wiki/Buffer_Object
Basically, the usage parameter is a hint to OpenGL/WebGL about how you intend to use the buffer. OpenGL/WebGL can then optimize the buffer storage based on that hint.
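For example, a typical one-time upload of static geometry would look something like this (the buffer and data names are just placeholders):
// STATIC_DRAW: upload once, draw many times
var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertexPositions), gl.STATIC_DRAW);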
The OpenGL ES docs say the following, which is not exactly the same as for desktop OpenGL (remember that WebGL is derived from OpenGL ES):
STREAM
The data store contents will be modified once and used at most a few times.
STATIC
The data store contents will be modified once and used many times.
DYNAMIC
The data store contents will be modified repeatedly and used many times.
The nature of access must be:
DRAW
The data store contents are modified by the application, and used as the source for GL drawing and image specification commands.
The most common usage is STATIC_DRAW (for static geometry), but I recently created a small particle system where DYNAMIC_DRAW makes more sense (the particles are stored in a single buffer, and parts of the buffer are updated when particles are emitted).
http://jsfiddle.net/mortennobel/YHMQZ/
Code snippet:
function createVertexBufferObject(){
    particleBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
    // Allocate the whole buffer up front; its contents are updated repeatedly
    var vertices = new Float32Array(vertexBufferSize * particleSize);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
    bindAttributes();
}
function emitParticle(x, y, velocityX, velocityY){
    gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
    // ...
    // Overwrite only the region belonging to this particle
    gl.bufferSubData(gl.ARRAY_BUFFER, particleId * particleSize * sizeOfFloat, data);
    particleId = (particleId + 1) % vertexBufferSize;
}
Big headache with XNA 4.0 concerning a depth problem:
I've already found many answers to similar problems, but none of them work for me...
The device is set like this:
xnaPanel1.Device.BlendState = BlendState.Opaque;
xnaPanel1.Device.DepthStencilState = DepthStencilState.Default;
xnaPanel1.Device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth24Stencil8;
[...]
Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 4.0f / 3.0f, 0.1f, 1000f);
Taking the brute-force approach, I have tried most DepthStencilFormat and DepthStencilState possibilities... none of them work the way I want.
Concerning the projection matrix, I've tried many near-clip and far-clip values too (cube width: 10f), but can't get the correct result.
I've tested this with many different textures, all opaque.
I don't use a BasicEffect but an effect using a texture + normal map; could that be the source of the problem?
CubeEffect.fx
[...]
sampler2D colorMap = sampler_state
{
    Texture = <colorMapTexture>;
    MagFilter = Linear;
    MinFilter = Anisotropic;
    MipFilter = Linear;
    MaxAnisotropy = 16;
};
sampler2D normalMap = sampler_state
{
    Texture = <normalMapTexture>;
    MagFilter = Linear;
    MinFilter = Anisotropic;
    MipFilter = Linear;
    MaxAnisotropy = 16;
};
[...]
Edit: I tried with a BasicEffect and the problem is the same...
So... Thanks for any help ;)
OK, that's it:
pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
has to be set before the device creation call.
So I don't know at this time why this:
Device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth24Stencil8;
which I previously called in my main Draw function, doesn't work...
Conclusions?
PresentationParameters pp = new PresentationParameters();
pp.IsFullScreen = false;
pp.BackBufferHeight = this.renderControl.Height;
pp.BackBufferWidth = this.renderControl.Width;
pp.DeviceWindowHandle = renderControl.Handle;
pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
this.graphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter, GraphicsProfile.HiDef, pp);
Now working fine!
PresentationParameters is a structure that defines how the device is created. You've already seen that when you create the graphics device you need to pass the structure in, which is only used for initial configuration.
The device stores the presentation parameters on it, but changing them does nothing unless you call Reset on the device, which reinitializes the device to use whatever parameters you've changed. This is an expensive operation (so you won't want to do it very often).
Basically GraphicsDevice.PresentationParameters is an output - writing to it doesn't actually change the device state. It gets updated whenever the device is set up or reset.
Generally you will be setting up the GraphicsDevice using GraphicsDeviceManager - it handles setting up, resetting, and tearing down the device for you. It is part of the default XNA Game template project.
The correct way to modify states is to set the desired values on GraphicsDeviceManager. In your case, you can simply set PreferredDepthStencilFormat.
Once you do this, you need to either set up the device (i.e. specify your settings in your game constructor and XNA will do the rest), or reset the device by calling GraphicsDeviceManager.ApplyChanges - which you should usually only do in response to user input (and obviously not every frame). See this answer for details.
You'll note that there are some presentation parameters that are not directly settable on the GraphicsDeviceManager. To change these you would have to attach an event handler to GraphicsDeviceManager.PreparingDeviceSettings. The event argument will give you access to a version of the presentation parameters you can usefully modify (e.GraphicsDeviceInformation.PresentationParameters) - the settings in there are what gets used when the graphics device is created.
I just recently started getting these messages and was wondering if anyone has seen them or knows what may be causing them. I'm using Three.js with Chrome version 21.0.1180.57 on Mac OS. I don't get these messages with Safari or Firefox.
PERFORMANCE WARNING: Attribute 0 is disabled. This has signficant performance penalty
WebGL: too many errors, no more errors will be reported to the console for this context.
The same message on Firefox is: "Error: WebGL: Drawing without vertex attrib 0 array enabled forces the browser to do expensive emulation work when running on desktop OpenGL platforms, for example on Mac. It is preferable to always draw with vertex attrib 0 array enabled, by using bindAttribLocation to bind some always-used attribute to location 0."
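Per that message, one way to avoid the slow emulation path is to bind an attribute you always use to location 0 before linking, and make sure that attribute is backed by an enabled array when drawing. A minimal sketch, where the attribute name aPosition and the buffer are placeholders:
// Bind an always-used attribute to location 0 before linking
gl.bindAttribLocation(program, 0, "aPosition");
gl.linkProgram(program);
gl.useProgram(program);
// When drawing, make sure attribute 0 has an enabled array
var positionLocation = gl.getAttribLocation(program, "aPosition"); // will be 0
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);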
This is not only a performance drawback, but can also result in incorrect output.
PROBLEM: This message occurs if JavaScript tries to run a WebGL shader that expects color information (gl_Color) on a mesh that does not provide a color array.
SOLUTION: Use a WebGL shader with a constant color that does not access gl_Color, or provide a color array in the mesh to be shaded.
If you are using lightgl.js from Evan Wallace, try adding the option colors:true in the new GL.Mesh statement and provide a proper mesh.colors array of the same size as your vertices array. Or try this shader:
blackShader = new GL.Shader(
    'void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }',
    'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }'
);
Sorry, I have never used Three.js, but the problem should be similar: provide colors for your mesh before shading.
Looks like a Chrome bug:
http://code.google.com/p/chromium-os/issues/detail?id=32528