How to get the average color from a framebuffer in WebGL (image texture)

This calculation happens every frame, so I wonder whether there is a way to do it in WebGL (on the GPU) rather than in JavaScript.
I want to use the average color to post-process the render result, for example for contrast adjustment or a bloom effect.

Related

Generate Image from Pixel Array (fast)

I would like to generate a grid picture, bitmap, or anything similar from raw pixel data in Swift. Since the pixel locations, image size, etc. are not determined before the user opens the app or presses a refresh button, I need a fast way to generate 2732x2048 or more individual pixels and display them on the screen.
First I used UIGraphicsBeginImageContextWithOptions and drew each pixel as a 1x1 CGRect, but this obviously did not scale well.
Afterwards I used this approach: Pixel Array to UIImage in Swift
But this is still rather slow on the bigger screens.
Could something like this be done with MetalKit? I would assume that a lower-level API could render something like this much faster.
Or is there a better way to process something like this, somewhere in between MetalKit and Core Graphics?
Some info regarding the structure of my data:
There is a struct with the pixel color data (red, green, blue, alpha) for each individual pixel, stored in an Array, plus two image-size variables: imageHeight and imageWidth.
The most performant way to do that is to use a Metal compute function.
Apple has good documentation illustrating this kind of GPU programming:
Performing Calculations on a GPU
Processing a Texture in a Compute Function
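As a rough host-side sketch of that compute approach (assuming a hypothetical MSL kernel named fillPixels that writes one color per thread, along the lines of Apple's "Processing a Texture in a Compute Function" sample), the Swift side might look like this:

import Metal

// Dispatch a compute kernel that fills a width x height texture,
// one thread per pixel. "fillPixels" is a hypothetical kernel name;
// the actual per-pixel color math lives in the MSL kernel.
func makeGeneratedTexture(device: MTLDevice,
                          queue: MTLCommandQueue,
                          width: Int,
                          height: Int) throws -> MTLTexture? {
    guard let library = device.makeDefaultLibrary(),
          let kernel = library.makeFunction(name: "fillPixels") else { return nil }
    let pipeline = try device.makeComputePipelineState(function: kernel)

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    descriptor.usage = [.shaderWrite, .shaderRead]   // the kernel writes, the renderer reads

    guard let texture = device.makeTexture(descriptor: descriptor),
          let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return nil }

    encoder.setComputePipelineState(pipeline)
    encoder.setTexture(texture, index: 0)

    // One thread per pixel, 16x16 threads per threadgroup.
    let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
    let groups = MTLSize(width: (width + 15) / 16,
                         height: (height + 15) / 16,
                         depth: 1)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
    commandBuffer.commit()
    return texture
}

The resulting MTLTexture can then be displayed directly in an MTKView instead of being converted back to a UIImage each frame.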

Is there a way to enable blending and depth at the same time in Metal

I have a Metal view that displays some textured quads. The textures are loaded from PNGs, so they are premultiplied. Some of the textures have transparent pixels.
When I enable blending and draw in the right order, the transparency works and you can see quads beneath other quads through the transparent parts of the textures. However, I'm having to calculate the right draw order by sorting, which is expensive and slows down my rendering a lot.
When I try to use a depth stencil and draw in any order, I can get the ordering correct using the z position, but then the blending stops working. The transparent parts of the texture reveal the background color of the Metal scene rather than the quad below.
What am I doing wrong? Is there a way to get this working and could someone provide some example code?
The other option I see is to try and do the sorting on the GPU, which would be fine as the GPU frame time is significantly smaller than the CPU frame time. However, I'm also not sure how to do this.
Any help would be greatly appreciated. :)
Alpha blending is an order-dependent transparency technique. This means that (semi-)transparent objects cannot be rendered in an arbitrary order, as they can with (more expensive) order-independent transparency techniques.
Make sure your transparent 2D objects (e.g., circle, rectangle, etc.) have different depth values. (This way you can define the draw ordering yourself. Otherwise the draw ordering depends on the implementation of the sorting algorithm and the initial ordering before sorting.)
Sort these 2D objects based on their depth value from back to front.
Draw the 2D objects from back to front (painter's algorithm) using alpha blending. (Of course, your 2D objects need an alpha value < 1 to actually see some blending.)
And you need to set up the pipelineStateDescriptor correctly:
// To have depth buffer.
pipelineStateDescriptor.depthAttachmentPixelFormat = .depth32Float
// To use transparency.
pipelineStateDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
Hope this helps. From here
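To pair with that pipeline state, here is a minimal sketch of the matching depth setup, assuming the render pass has a .depth32Float depth attachment and the quads are drawn back to front as described above:

import MetalKit

// Depth testing stays on so quads still respect their z order,
// while the blend state configured above handles the transparent texels.
func makeDepthState(device: MTLDevice) -> MTLDepthStencilState? {
    let depthDescriptor = MTLDepthStencilDescriptor()
    depthDescriptor.depthCompareFunction = .lessEqual  // pass when nearer or at the same depth
    depthDescriptor.isDepthWriteEnabled = true
    return device.makeDepthStencilState(descriptor: depthDescriptor)
}

// In the view / encoder setup (one possible arrangement):
// view.depthStencilPixelFormat = .depth32Float
// renderEncoder.setDepthStencilState(makeDepthState(device: device)!)
// ... then issue the quad draws sorted back to front.

One caveat: since the question's PNGs are premultiplied, the source blend factors would normally be .one rather than .sourceAlpha.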

iOS Metal Shader - Texture read and write access?

I'm using a Metal shader to draw many particles onto the screen. Each particle has its own position (which can change), and often two particles have the same position. How can I check whether the texture2d I write into already has a pixel at a certain position? (I want to make sure that I only draw a particle at a certain position if no particle has been drawn there yet, because I get ugly flickering if many particles are drawn at the same position.)
I've tried outTexture.read(particlePosition), but this obviously doesn't work, because of the texture access qualifier, which is access::write.
Is there a way I can have read and write access to a texture2d at the same time? (If there isn't, how could I still solve my problem?)
There are several approaches that could work here. In concurrent systems programming, what you're talking about is termed first-write-wins.
1) If the particles only need to preclude other particles from being drawn (and aren't potentially obscured by other elements in the scene in the same render pass), you can write a special value to the depth buffer to signify that a fragment has already been written to a particular coordinate. For example, you'd turn on the depth test (using the depth compare function Less), clear the depth buffer to some distant value (like 1.0), and then write a value of 0.0 to the depth buffer in the fragment function. Any subsequent write to a given pixel will fail the depth test and will not be drawn. (A host-side sketch of this setup follows below.)
2) Use framebuffer read-back. On iOS, Metal allows you to read from the currently-bound primary renderbuffer by attributing a parameter to your fragment function with [[color(0)]]. This parameter will contain the current color value in the renderbuffer, which you can test against to determine whether it has been written to. This does require you to clear the texture to a predetermined color that will never otherwise be produced by your fragment function, so it is more limited than the above approach, and possibly less performant.
All of the above applies whether you're rendering to a drawable's texture for direct presentation to the screen, or to some offscreen texture.
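For approach 1, a rough host-side sketch of that depth setup (assuming the fragment function returns a depth of 0.0 through an output attributed with [[depth(any)]]) could look like this:

import Metal

// "First write wins" via the depth test: the first fragment at a pixel
// writes depth 0.0; every later fragment at that pixel fails 0.0 < 0.0.
func makeFirstWriteWinsDepthState(device: MTLDevice) -> MTLDepthStencilState? {
    let descriptor = MTLDepthStencilDescriptor()
    descriptor.depthCompareFunction = .less   // 0.0 < cleared 1.0 passes once
    descriptor.isDepthWriteEnabled = true     // record that the pixel is taken
    return device.makeDepthStencilState(descriptor: descriptor)
}

// Render pass setup: clear the depth attachment to the "distant" value.
// renderPassDescriptor.depthAttachment.loadAction = .clear
// renderPassDescriptor.depthAttachment.clearDepth = 1.0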
To answer the read-and-write part: you can specify read/write access for the output texture like so:
texture2d<float, access::read_write> outTexture [[texture(1)]],
Also, your texture descriptor must specify the corresponding usage:
textureDescriptor?.usage = [.shaderRead, .shaderWrite]
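Tying those two lines together, a minimal sketch of the Swift side (the size and binding index are illustrative) might be:

import Metal

// Create a texture the compute kernel can both read and write.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                  width: 1024,
                                                                  height: 1024,
                                                                  mipmapped: false)
textureDescriptor.usage = [.shaderRead, .shaderWrite]

let device = MTLCreateSystemDefaultDevice()
let outTexture = device?.makeTexture(descriptor: textureDescriptor)

// Bind it at index 1 to match the [[texture(1)]] attribute above:
// computeEncoder.setTexture(outTexture, index: 1)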

Difference between Texture2D and Texture2DMS in DirectX11

I'm using SharpDX and I want to do antialiasing in the depth buffer. I need to store the depth buffer as a texture to use it later. So is it a good idea if this texture is a Texture2DMS? Or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for taking multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (percentage-closer filtering, PCF) using SampleCmp with a comparison sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add a small epsilon, e.g. 1e-5), and returns the averaged result. Do enough 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of PCF's smooth filtering behavior, as lines become visible they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.

How do you add light with multiple passes in OpenGL?

I have two functions that I want to combine the results of:
drawAmbient
drawDirectional
They each work fine individually, drawing the scene with the ambient light only, or the directional light only. I want to show both the ambient and directional light but am having a bit of trouble. I try this:
[self drawAmbient];
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
[self drawDirectional];
glDisable(GL_BLEND);
but I only see the results from the first draw. I calculate the depth in the same way for both sets of draw calls. I could always just render to textures and blend the textures, but that seems redundant. Is there a way that I can add the lighting together when rendering to the default framebuffer?
You say you calculate the depth the same way in both passes. This is of course correct, but as the default depth comparison function is GL_LESS, nothing will actually be rendered in the second pass, since the depth is never less than what is currently in the depth buffer.
So for the second pass just change the depth test to
glDepthFunc(GL_EQUAL);
and then back to
glDepthFunc(GL_LESS);
Or you may also set it to GL_LEQUAL for the whole runtime to cover both cases.
As far as I know, you should render the lighting to separate render targets and then combine them. So you will have the rendered scene in these targets:
textured without lighting
accumulated diffuse lighting (fill with the ambient color and additively render all light sources)
accumulated specular lighting (if you use a specular component)
Then combine textures, so final_color = textured * diffuse + specular.

Resources