I'm writing a texture atlas in the fragment shader, and I really need texture2DLod in order to sample the textures at the correct mip levels. I just found out that WebGL only supports texture2DLod in the vertex shader. Is there some way for me to access texture2DLod from the fragment shader? Perhaps I could use a custom function that does the same?
Simply use texture2D with its optional third parameter. Note that in a fragment shader this parameter is a bias added to the automatically computed LOD rather than an absolute LOD:
gl_FragColor = texture2D(map, uv, bias);
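If it helps, a minimal sketch of this in a WebGL 1 fragment shader (map, uv, and lodBias are placeholder names, not from your code):
precision mediump float;
uniform sampler2D map;
uniform float lodBias; // added to the LOD the GPU computes for this fragment
varying vec2 uv;
void main() {
    gl_FragColor = texture2D(map, uv, lodBias);
}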
I want to optimize the fragment shader performance. Currently my fragment shader is
fragment half4 fragmen_shader_texture(VertexOutTexture vIn [[stage_in]],
                                      texture2d<half> texture [[texture(0)]]) {
    constexpr sampler defaultSampler;
    half4 color = half4(texture.sample(defaultSampler, vIn.textureCoordinates));
    return color;
}
All it does is return the texture color. Is there any way to optimize it further than this?
There are no options for optimizing the fragment shader itself, AFAICT; it does virtually nothing other than sample the texture. However, depending on your situation, there may still be scope for optimization by:
Reducing bandwidth usage by using a more compact texture format (565 or 4444 instead of 8888, or better still 4-bit or 2-bit PVRTC).
Making sure that alpha blending is disabled if it is not required.
If the texture has lots of 'empty space' (e.g. a particle texture with a central circular blob and blank corners), making the geometry fit it more tightly, for instance by rendering it as an octagon rather than as a quad.
Enabling mipmapping if there is any possibility the image can be minified, while disabling the more expensive options like trilinear/anisotropic filtering (see the sketch below).
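For the mipmapping point, a hedged sketch of the shader-side change (assuming mipmaps have already been generated for the texture on the CPU side; VertexOutTexture is your existing vertex output struct):
#include <metal_stdlib>
using namespace metal;

fragment half4 fragmen_shader_texture(VertexOutTexture vIn [[stage_in]],
                                      texture2d<half> texture [[texture(0)]]) {
    // Nearest mip selection: cheaper than trilinear, but still cuts bandwidth when the texture is minified.
    constexpr sampler mipSampler(mag_filter::linear,
                                 min_filter::linear,
                                 mip_filter::nearest);
    return texture.sample(mipSampler, vIn.textureCoordinates);
}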
I have a WebGL setup with several shaders, and I'm using multiple render targets (gl_FragData[]).
In the first shader, I can output to
gl_FragData[0] = vec4(..);
gl_FragData[1] = vec4(..);
gl_FragData[2] = vec4(..);
Now with my second shader, I want to output to gl_FragData[3] and save the texture to pass to my third shader.
The second shader doesn't seem to output to gl_FragData[3], yet this works if I use it in my first shader. I want the output of gl_FragData[3] to be stored in a texture and sent to the third shader.
I think it may have to do with the framebuffer, but I've tried changing that and have had no luck. What am I missing?
If you want to use the same framebuffer, you'll need to mask off the unused draw buffers: drawBuffers([COLOR_ATTACHMENT0, COLOR_ATTACHMENT1, COLOR_ATTACHMENT2]) for the first shader, and drawBuffers([NONE, NONE, NONE, COLOR_ATTACHMENT3]) for the second shader.
From EXT_draw_buffers:
Any colors, or color components, associated with a fragment that are not written by the fragment shader are undefined.
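A minimal sketch of that masking, assuming WebGL 1 with the WEBGL_draw_buffers extension (fb, firstProgram, secondProgram, and vertexCount are placeholder names; fb is assumed to already have textures attached at color attachments 0 through 3):
var ext = gl.getExtension('WEBGL_draw_buffers');
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

// First shader writes gl_FragData[0..2]:
gl.useProgram(firstProgram);
ext.drawBuffersWEBGL([
    ext.COLOR_ATTACHMENT0_WEBGL,
    ext.COLOR_ATTACHMENT1_WEBGL,
    ext.COLOR_ATTACHMENT2_WEBGL
]);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);

// Second shader writes only gl_FragData[3]; the first three slots are masked off:
gl.useProgram(secondProgram);
ext.drawBuffersWEBGL([gl.NONE, gl.NONE, gl.NONE, ext.COLOR_ATTACHMENT3_WEBGL]);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);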
I want to implement a collision detector between a moving object and a static object. The way I am thinking of doing it is by checking in the vertex shader, every frame, whether any vertex of the moving object intersects the position of the static object.
That would give me the point of collision in the vertex shader, but I want to use that value for rendering purposes in the JS file.
Is there a way to do this?
In WebGL 1 you cannot directly read any data back from a vertex shader. The best you can do is use the vertex shader to affect the pixels rendered by the fragment shader. For example, you could set gl_Position so that nothing is rendered if your test fails and a single pixel is rendered if it passes, or set a varying that produces certain colors based on the test result. Then you can either read the pixel back with gl.readPixels, or pass the texture you rendered to into another shader in a different draw call.
In WebGL 2 you can use transform feedback to let a vertex shader write its varyings to a buffer. You can then use that buffer in other draw calls, or read its contents back with gl.getBufferSubData.
In WebGL 2 you can also do occlusion queries, which means you can try to draw something and test whether it was actually drawn or whether the depth buffer prevented it from being drawn.
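For the WebGL 1 route, a minimal read-back sketch (resultFb, collisionTestProgram, and movingVertexCount are placeholder names for a 1x1 RGBA framebuffer, the program containing your vertex-shader test, and the moving object's vertex count):
gl.bindFramebuffer(gl.FRAMEBUFFER, resultFb);
gl.viewport(0, 0, 1, 1);
gl.useProgram(collisionTestProgram);
gl.drawArrays(gl.POINTS, 0, movingVertexCount);

// Read the single pixel back on the CPU; here the shader is assumed to write red on a hit.
var pixel = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
var collided = pixel[0] > 0;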
How do you implement per-instance textures, vertex shaders, and pixel shaders with the same vertex buffer and/or device context?
I am just trying to find the most efficient way to have different pixel shaders used by the same type of mesh, but colored differently. For example, I would like square and triangle models in the vertex buffer, and for the vertex/pixel/etc. shaders to act differently based on instance data. (If the instance data somehow flags an instance as "dead", shaders that draw opaque shapes with solid colors are used rather than ones that draw gradients.)
Given:
1. Different model templates in the vertex buffer: Square and Triangle (more eventually).
2. An instance buffer with [n] instances of type Square and/or Triangle, etc.
Guesses (things I am trying to research to do this):
A. Can I add a Texture, VertexShader or PixelShader ID to the buffer data so that HLSL or the InputAssembly can determine which Shader to use at draw time?
B. Can I "Set" multiple Pixel and Vertex Shaders into the DeviceContext, and how do I tell DirectX to "switch" the Vertex Shader that is loaded at render time?
C. How many Shaders of each type, (Vertex, Pixel, Hull, etc), can I associate with model definitions/meshes in the default Vertex Buffer?
D. Can I use some sort of Shader Selector in HLSL?
Related C++ Code
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
NS::ThrowIfFailed(
    result = NS::DeviceManager::Device->CreateInputLayout(
        NS::ModelRenderer::InitialElementDescription,
        2,
        vertexShaderFile->Data,
        vertexShaderFile->Length,
        &NS::ModelRenderer::StaticInputLayout
    )
);
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
DeviceManager::DeviceContext->IASetInputLayout(ModelRenderer::StaticInputLayout.Get());
DeviceManager::DeviceContext->VSSetShader(ModelRenderer::StaticVertexShader.Get(), nullptr, 0);
DeviceManager::DeviceContext->PSSetShader(ModelRenderer::StaticPixelShader.Get(), nullptr, 0);
How do I add a Texture, VertexShader or PixelShader ID to the buffer data so that HLSL or the InputAssembly can determine which Shader to use at draw time?
You can't assign a Pixel Shader ID to a buffer; that's not how the pipeline works.
A. You can bind only one vertex shader and one pixel shader in a device context at a time; together they define your pipeline. Draw your geometry with those shaders, then switch to another vertex/pixel shader as needed and draw the next geometry...
B. You can use different shaders with the same model, but that is done on the CPU using VSSetShader, PSSetShader...
C. No, for the same reason as in B (shaders are set on the CPU).
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
If you don't specify a vertex shader, the pipeline will consider that you are drawing "null" geometry, which is actually possible (and very fun) but a bit out of scope here. If you do provide geometry, you need to pass the vertex shader bytecode so the runtime can match your geometry layout to the vertex input layout. You can of course create several input layouts by calling the function several times (once per vertex shader/geometry in the worst case; if two models/vertex shaders share the same layout you can reuse one).
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
You bind everything you need (vertex/pixel shaders, vertex/index buffers, input layout) and call Draw (or DrawInstanced).
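As a rough C++ sketch of that loop (the context, shader, buffer, stride/offset, and count variables are placeholders for whatever your renderer already holds), you switch shaders between draw calls rather than per instance:
context->IASetInputLayout(inputLayout);
context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// Draw the "alive" instances with the gradient shaders...
context->VSSetShader(gradientVS, nullptr, 0);
context->PSSetShader(gradientPS, nullptr, 0);
context->DrawIndexedInstanced(indexCountPerMesh, aliveInstanceCount, 0, 0, 0);

// ...then switch the pipeline state and draw the "dead" instances with the solid-color shaders.
context->VSSetShader(solidVS, nullptr, 0);
context->PSSetShader(solidPS, nullptr, 0);
context->DrawIndexedInstanced(indexCountPerMesh, deadInstanceCount, 0, 0, aliveInstanceCount);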
I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
However, I am not sure how to do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the fragment shader is applied exactly once to each pixel of the image, and render the result to a texture. But how can I achieve that, considering there's depth that may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, binding one like this: glBindFramebuffer(...)
Use this tutorial; it targets OpenGL 2.0, but most of its features are also available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection, because the quad's positions are already in clip space, so the shader does not need one.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up the input textures, and draw the quad (a rough host-side sketch follows after the shader listings below).
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;
void main() {
    tc = (at_pos + vec2(1.0)) * 0.5; // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;
void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = do_something(v); // placeholder: apply your per-pixel processing to the sampled value
}
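A rough host-side sketch of the steps above in plain C-style OpenGL (width, height, and program are placeholders; at_pos matches the attribute in the vertex shader above; error handling omitted):
GLuint fbo, target;
glGenTextures(1, &target);
glBindTexture(GL_TEXTURE_2D, target);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}

/* Bind the input texture for unit_in to texture unit 0 before drawing. */
/* Full-screen quad with corners (-1,-1) and (1,1), fed to the at_pos attribute. */
static const GLfloat quad[8] = { -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f };
glViewport(0, 0, width, height);
glUseProgram(program);
GLint loc = glGetAttribLocation(program, "at_pos");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);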
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
You should also check out ogles_gpgpu, which even supports Android systems. An overview about this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for an example. Apple now also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip or later.