glBlendFunc transparency in OpenGL with GLKit - iOS

I'm using GLKit for an iPad app. I set up blending with this code:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
It works fine, but when I try to take a screenshot the blending looks wrong. It doesn't matter whether I use GLKit's snapshot or glReadPixels.
This is what I get when working with the app:
And this is the screenshot:
Do I have to change the blend mode or something before I take the screenshot? And if so, to what?

The problem most likely lies in how the image is generated from the RGBA data. To solve it, either skip the alpha channel when creating the CGImage (using kCGImageAlphaNoneSkipLast) or have the correct alpha values in the buffer in the first place.
To explain what is going on: your GL buffer consists of RGBA values, but only the RGB part is used to present it on screen. When you create the image, however, the alpha channel is used as well, hence the difference. How this happens is simple; let's take a single pixel somewhere in the middle of the screen and go through its events:
You clear the pixel to any color you want
You overwrite the pixel (all 4 RGBA channels) with a solid color received, for instance, from a texture: (.8, .8, .8, 1.0)
You draw a color over that pixel with some smaller alpha value and blend it, for instance (.4, .4, .4, .25). Your blend function says to multiply the source color by the source alpha and the destination by 1 - source alpha. That results in (.4, .4, .4, .25)*.25 + (.8, .8, .8, 1.0)*.75 = (.7, .7, .7, .8125)
Now the result (.7, .7, .7, .8125) is displayed nicely, because your buffer only presents the RGB part, so you see (.7, .7, .7, 1.0). But when you use all 4 components to create the image, you also use the .8125 alpha value, which is then used to blend the image itself. Therefore you need to skip the alpha part at some point.
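For the first fix, creating the CGImage from the glReadPixels buffer could look roughly like this (a sketch against the CoreGraphics C API; pixelData, width, and height are illustrative names, not from the question):
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixelData, width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// kCGImageAlphaNoneSkipLast tells CoreGraphics to ignore the 4th byte of each pixel
CGImageRef image = CGImageCreate(width, height,
                                 8,          // bits per component
                                 32,         // bits per pixel
                                 width * 4,  // bytes per row
                                 colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                 provider, NULL, false, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);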
There is another way: in your case there is really no need to store the alpha value in the render buffer at all, since you never use it (your blend function only uses the source alpha). You can therefore disable alpha writes with glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE); this also means you need to clear the alpha value to 1.0 (glClearColor(x, x, x, 1.0)).
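A minimal sketch of that second setup, assuming an otherwise standard GLKit view:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);              // clear alpha to 1.0
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  // never write to the alpha channel
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw as usual; the snapshot and glReadPixels now see alpha = 1.0 everywhere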

Alpha blending of black visibly goes through white color

I'm trying to fade out an object in the scene, but I noticed that it first brightens nearly to white before disappearing as the alpha channel reaches 0.
As a test, I set up a square that's entirely black (0, 0, 0) and linearly interpolate its alpha channel from 1 to 0.
This is the rectangle.
The same rectangle, but with an alpha value of 0.1, i.e. vec4(0, 0, 0, 0.1). It's brighter than the background itself.
Blending mode used:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
As far as I understand, this mode should just lerp between the background pixel and the newly created source pixel. I just don't see how the output pixel could possibly become brighter when mixing anything with (0, 0, 0).
EDIT:
After doing some testing I feel I need to clarify a few more things.
This is WebGL, and I'm drawing into a canvas on a website. I don't know how it works internally, but it looks as if every gl.drawElements() call is drawn to a separate buffer and possibly composited later into a single image. When debugging I can see my square drawn into an entirely white buffer, which may be where the colour comes from.
But this means that blending doesn't happen with the backbuffer, but some buffer I didn't know existed. How do I blend into the back buffer? Do I have to avoid browser composition by rendering to a texture and only then drawing it to the canvas?
EDIT 2:
I managed to get the expected result by setting separate blend functions for alpha and colour, as follows:
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
But I'd rather leave this question open hoping that someone could clarify why it didn't work in the first place.
The issue is very well described here:
https://stackoverflow.com/a/35459524/9956631
My color was blended with the canvas background. As I understand it, the default blend function overwrites the destination alpha, so you leave a see-through part of the canvas where your mesh is. My blendFuncSeparate() call fixed the issue because it leaves the destination alpha intact.
To turn this off, you can disable alpha when fetching the GL context. To get OpenGL-like rendering you should also disable premultipliedAlpha:
const gl = canvas.getContext('webgl', {
  premultipliedAlpha: false,
  alpha: false
})!;
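For illustration, here is the arithmetic behind the symptom and the fix, as a comment sketch (it assumes an alpha-enabled context whose buffer is cleared to alpha 0, which is an assumption, not something stated in the question):
// drawing (0, 0, 0, 0.1) with gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA):
//   dstRGB = 0.1 * (0, 0, 0) + 0.9 * (0, 0, 0) = (0, 0, 0)  // still black
//   dstA   = 0.1 * 0.1       + 0.9 * 0.0       = 0.01       // nearly transparent
// The browser then composites the canvas over the page, so a white page shows
// through at ~99% where the square is: the square looks almost white.
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
//   dstA = 0.1 * 1 + 0.0 * 1 = 0.1, and it accumulates toward 1 on subsequent
//   draws, so the page background no longer bleeds through.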
Edit:
To make sure my assumption was right, I set up a test.
Behind the canvas I placed a label. Then, on top of that, I drew my canvas with my (0, 0, 0, 0.5) color square. Just like this:
<label style="
position: absolute;
width: 400px;
height: 400px;
top:445px;
left:500px;
z-index: -1; /* behind the canvas */
font-size: 80px;">LMAO</label>
<canvas id="glCanvas" style="z-index: 2;" width="1200" height="1200"></canvas>
As you can see, the label is visible where the square is rendered. This means the square is being blended with what's behind the canvas instead of with the current contents of the canvas (as one might assume).

Core Image workingColorSpace & outputColorSpace

I am rendering video frames using Metal Core Image shaders. One of my requirements is to pick a particular color (and a user-selected nearby range) from the CIImage, keep that color in the output, and turn everything else black and white (a color-splash effect). But I am confused about the right approach that would work for videos shot in all kinds of color spaces (including 10-bit HDR):
The first job is to extract the color value of the CIImage at any given pixel location. From my understanding, this can be done using the following API:
func render(_ image: CIImage,
            toBitmap data: UnsafeMutableRawPointer,
            rowBytes: Int,
            bounds: CGRect,
            format: CIFormat,
            colorSpace: CGColorSpace?)
The documentation says that passing NULL for colorSpace causes the output to be in the ciContext's outputColorSpace. It's not clear how to use this API correctly to extract the exact color at a given pixel location, given that the input image may be either 8-bit or 10-bit.
Having extracted the value, the next issue is how to pass it to the Metal Core Image shader. Shaders use normalized color ranges that depend on the workingColorSpace of the ciContext. Do I need to create a 1D texture with the color to pass to the shader, or is there a better way?
Based on your comment, here is another alternative:
You can read the pixel value as floats using the context's working color space. By using float values, you ensure that the bit depth of the input doesn't matter and that extended color values are correctly represented.
So for instance, a 100% red in BT.2020 would result in an extended sRGB value of (1.2483, -0.3880, -0.1434).
To read the value, you could use our small helper library CoreImageExtensions (or check out the implementation to see how to use render to get float values):
let pixelColor = context.readFloat32PixelValue(from: image, at: coordinate, colorSpace: context.workingColorSpace)
// you can convert that to CIVector, which can be passed to a kernel
let vectorValue = CIVector(x: pixelColor.r, y: pixelColor.g, ...)
In your Metal kernel, you can use a float4 input parameter for that color.
You can store and use the color value on later rendering calls as long as you are using the same workingColorSpace for the context.
I think you can achieve that without worrying about color spaces and even without the intermediate rendering step (which should speed up performance a lot).
You can simply crop your image to a 1×1 px square image that contains the specific color and then extend that image virtually infinitely in all directions.
You can then pass that image into your next kernel and sample it anywhere to retrieve the color value (in the same color space as before).
let pixelCoordinate: CGPoint // the coordinate of the pixel that contains the color
// crop down to a single pixel
let colorPixel = inputImage.cropped(to: CGRect(origin: pixelCoordinate, size: CGSize(width: 1, height: 1)))
// make the pixel extent infinite
let colorImage = colorPixel.clampedToExtent()
// simply pass it to your kernel
myKernel.apply(..., arguments: [colorImage, ...])
In the Metal kernel code, you can simply access it via sampler (or sample_t in a color kernel) and sample it like this:
// you can sample at any coord since the image contains the single color everywhere
float4 pickedColor = colorImage.sample(colorImage.coord());
To read the "original" color values from a CIImage, the CIContext used to render the pixels to the bitmap needs to be created with both workingColorSpace and outputColorSpace set to NSNull(). Then there will be no color space conversion and you don't have to worry about color spaces:
let context = CIContext(options: [.workingColorSpace: NSNull(), .outputColorSpace: NSNull()])
Then, when rendering the pixels to the bitmap, specify the highest-precision color format, CIFormat.RGBAf, to make sure you are not clipping any values, and use nil for the colorSpace parameter. You will get 4 Float32 values per pixel that can be passed to the shader in a CIVector, as suggested in the first answer.
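A sketch of that readout (variable names like inputImage and pixelCoordinate are illustrative):
let context = CIContext(options: [.workingColorSpace: NSNull(), .outputColorSpace: NSNull()])
var pixel = [Float32](repeating: 0, count: 4) // one RGBAf pixel
context.render(inputImage,
               toBitmap: &pixel,
               rowBytes: 4 * MemoryLayout<Float32>.size, // 16 bytes per pixel
               bounds: CGRect(x: pixelCoordinate.x, y: pixelCoordinate.y, width: 1, height: 1),
               format: .RGBAf,
               colorSpace: nil) // nil = no conversion with this context setup
// pass to the kernel as a vector, as suggested in the first answer
let pickedColor = CIVector(x: CGFloat(pixel[0]), y: CGFloat(pixel[1]),
                           z: CGFloat(pixel[2]), w: CGFloat(pixel[3]))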
But here is another thing you can do, borrowing the cropping and clamping idea from the second answer:
Create an infinite image that contains only the selected color using the approach suggested in that answer.
Crop that image to the frame's extent
Use CIColorAbsoluteDifference, where one input is the original frame and the other is this uniform color image.
The output of that filter will make all pixels that exactly match the selected color black, and none of the other pixels will be black, since the filter computes the absolute difference between the colors and only pixels of exactly the same color produce a (0, 0, 0) output.
Pass that image to the shader. If the color sampled from it is exactly 0 in all its color components (ignoring alpha), copy the input pixel to the output intact. Otherwise set the output to whatever you need (black and white for the color-splash effect).
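A rough sketch of that pipeline (hedged: frame, pixelCoordinate, and myKernel are illustrative names; CIColorAbsoluteDifference is exposed as CIFilter.colorAbsoluteDifference on iOS 14/macOS 11 and later):
import CoreImage.CIFilterBuiltins

// infinite image of the picked color (cropping + clamping idea from the second answer)
let selectedColorImage = frame
    .cropped(to: CGRect(origin: pixelCoordinate, size: CGSize(width: 1, height: 1)))
    .clampedToExtent()

let difference = CIFilter.colorAbsoluteDifference()
difference.inputImage = frame
difference.inputImage2 = selectedColorImage
// (0, 0, 0) exactly where the frame matches the picked color
let differenceImage = difference.outputImage!.cropped(to: frame.extent)

// the kernel copies the input pixel where differenceImage is (0, 0, 0)
// and outputs grayscale everywhere else
let output = myKernel.apply(extent: frame.extent,
                            roiCallback: { _, rect in rect },
                            arguments: [frame, differenceImage])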

OpenGL ES 1.1 - alpha mask

I'm working on an iPad app with OpenFrameworks and OpenGL ES 1.1. I need to display a video with an alpha channel. To simulate this, I have one RGB video (without any alpha channel) and another video containing only the alpha channel (replicated on every RGB channel, so the white parts correspond to the visible parts and the black parts to the invisible ones). Each video is an OpenGL texture.
In OpenGL ES 1.1 there are no shaders, so I found this solution (here: OpenGL - mask with multiple textures):
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
It's exactly what I want to do, but glBlendFuncSeparate() doesn't exist in OpenGL ES 1.1 (or on iOS). I'm trying to do it with glColorMask, and I found this: Can't get masking to work correctly with OpenGL
But that doesn't work either, I guess because his mask texture file contains a 'real' alpha channel, unlike mine.
I highly suggest you compute a single RGBA texture instead.
This will be both easier and faster (you're currently sending two RGBA textures each frame; yes, your RGB texture is in fact stored as RGBA by the hardware, with the A ignored).
glColorMask won't help you, because it simply turns writes to a channel on or off completely.
glBlendFuncSeparate could help you if you had it, but again, it's not a good solution: you're wasting your (very limited) iPhone bandwidth by sending twice as much data as needed.
UPDATE:
Since you're using OpenFrameworks, and according to its source code (https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/gl/ofTexture.cpp and https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/video/ofVideoPlayer.cpp):
Use ofVideoPlayer::setUseTexture(false) so that ofVideoPlayer::update won't upload the data to video memory;
Get the video data with ofVideoPlayer::getPixels;
Interleave the results into a single RGBA texture (you can use a GL_RGBA ofTexture and ofTexture::loadData), as in the sketch below;
Draw using ofTexture::draw (this is what ofVideoPlayer does anyway).
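A rough sketch of the interleaving step (names are illustrative; it assumes both videos have the same size, that getPixels() returns a tightly packed unsigned char* as in the OF version linked above, and that the mask frames are RGB with the value replicated across channels):
#include <vector>

unsigned char* rgb   = colorPlayer.getPixels();   // RGB video
unsigned char* alpha = alphaPlayer.getPixels();   // grayscale mask video
int w = (int)colorPlayer.getWidth();
int h = (int)colorPlayer.getHeight();

std::vector<unsigned char> rgba(w * h * 4);
for (int i = 0; i < w * h; ++i) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = alpha[i * 3 + 0]; // white = opaque, black = transparent
}

// combinedTexture.allocate(w, h, GL_RGBA); // once, e.g. in setup()
combinedTexture.loadData(&rgba[0], w, h, GL_RGBA);
combinedTexture.draw(0, 0);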

How to make a Texture2D 50% transparent? XNA

I'm using SpriteBatch to draw a Texture2D on the screen, and I was wondering how I can manipulate the image's opacity. Does anyone know the best way of accomplishing this?
Assuming you are using XNA 4.0 with premultiplied alpha: in your SpriteBatch.Draw call, multiply the color by a float (0.5f for 50% transparency) and draw as you normally would. If you are not using premultiplied alpha, I suggest you do; it performs better and is more intuitive once you get used to it.
Example:
_spriteBatch.Draw(texture, location, Color.White * 0.5f);
Edit:
Also make sure you set your blend state to BlendState.AlphaBlend, or another blend state that supports alpha and is not NonPremultiplied.
Example:
_spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
Just use the color as new Color(R, G, B, A), where:
R is Red
G is Green
B is Blue
A is Alpha
For instance:
new Color(100, 100, 100, 100);

How can I alphablend only certain parts of a texture in DX 9?

For example, think of layers in Photoshop (or any other photo editing program that supports layers).
You can draw something in a layer (with a background filled with alpha), then place the layer over the original image (draw the texture on the screen), which results in the original image plus ONLY the things drawn in the layer.
Yes, I know my English is not very "shiny".
Thank you very much in advance!
P.S. The background of my texture IS filled with alpha.
So you have set up the alpha on the texture you wish to overlay such that 0 is transparent (i.e. shows what's underneath) and 1 is opaque (i.e. shows the overlay texture)?
If so, then you just need to set up a simple blend mode:
pDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
Make sure you draw the background first! Also note that alpha values between 0 and 1 produce a linear interpolation between the background and the overlay texture.
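Putting it together, a frame might look roughly like this (a sketch; DrawQuad stands in for however you render your textured quads):
// opaque background first, with blending off
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, FALSE );
DrawQuad( backgroundTexture );

// then the overlay, blended by its alpha channel
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
pDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
DrawQuad( overlayTexture );   // alpha 0 = background shows, alpha 1 = overlay shows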