Image stitching / texture blending in OpenGL (or OpenGL ES)

I've got two YUV images (YUV 420 SP), which have been loaded as textures in the fragment shader.
These textures have an overlapping area.
Now I am trying to blend the two textures so that there is no visible difference in brightness over the overlapping area (basically stitching the images).
Can you suggest any method for how I can mix/blend/stitch these two images?
I have come across the concept of alpha masking in OpenCV, but I am not sure how it applies in OpenGL. Also note that I am using YUV images, which are loaded as textures with R and RG components (for Y and UV respectively) and then converted to the RGB color space in the fragment shader.
Note: I am not considering any geometric alignment for now. I know that after the blending/stitching the image will not look quite right, since the images are not oriented or placed properly. My question is only about the photometric/color alignment, i.e. how to handle the overlapping area.
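One common way to do this is exactly the alpha-mask idea, applied in the fragment shader: compute a per-pixel weight that ramps from 0 to 1 across the overlap and use it to mix the two converted RGB results (linear feathering), so the brightness transitions gradually instead of jumping at the seam. Below is a minimal sketch of such a shader, embedded as a C string the way it would be handed to glShaderSource. Everything beyond what the question states is an assumption: the uniform names, the BT.601 conversion constants, the U-then-V channel order in the chroma texture, and the idea that the overlap is a vertical band described by the hypothetical uniforms u_overlapStart/u_overlapEnd in a shared texture-coordinate space.

```c
/* Hedged sketch: names, constants and the overlap model are assumptions,
 * not taken from the question. */
static const char *kBlendFragmentShaderSrc =
    "precision mediump float;\n"
    "varying vec2 v_texCoord;\n"
    "uniform sampler2D u_lumaA;    /* image A: Y plane  (R channel)   */\n"
    "uniform sampler2D u_chromaA;  /* image A: UV plane (RG channels) */\n"
    "uniform sampler2D u_lumaB;    /* image B: Y plane  (R channel)   */\n"
    "uniform sampler2D u_chromaB;  /* image B: UV plane (RG channels) */\n"
    "uniform float u_overlapStart; /* x where the overlap begins */\n"
    "uniform float u_overlapEnd;   /* x where the overlap ends   */\n"
    "\n"
    "vec3 yuvToRgb(sampler2D luma, sampler2D chroma, vec2 uv) {\n"
    "    float y = texture2D(luma, uv).r;\n"
    "    vec2  c = texture2D(chroma, uv).rg - vec2(0.5); /* U in .x, V in .y */\n"
    "    return vec3(y + 1.402 * c.y,\n"
    "                y - 0.344 * c.x - 0.714 * c.y,\n"
    "                y + 1.772 * c.x);\n"
    "}\n"
    "\n"
    "void main() {\n"
    "    vec3 rgbA = yuvToRgb(u_lumaA, u_chromaA, v_texCoord);\n"
    "    vec3 rgbB = yuvToRgb(u_lumaB, u_chromaB, v_texCoord);\n"
    "    /* 0.0 before the overlap (pure A), 1.0 after it (pure B),\n"
    "       linear ramp in between.                                */\n"
    "    float w = clamp((v_texCoord.x - u_overlapStart) /\n"
    "                    (u_overlapEnd - u_overlapStart), 0.0, 1.0);\n"
    "    gl_FragColor = vec4(mix(rgbA, rgbB, w), 1.0);\n"
    "}\n";
```

If a plain linear ramp still leaves a visible brightness step, a per-image gain correction (scaling each image's Y so that the mean luminance inside the overlap matches) can be applied before the mix; that is essentially the photometric-alignment step most stitching pipelines perform.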

Related

Can MetalFX scalers support textures with alpha?

I'm trying to upscale a texture using MetalFX's MTLFXSpatialScaler. The input texture has transparency (it's rgba8Unorm) but in the scaled texture, the transparency has been removed so that previous transparent areas are now rendered as black.
I've confirmed that the scaler's output texture is also rgba8Unorm. Also to confirm this isn't a problem drawing the scaled textures, I used the Metal debug tools to inspect each texture. Here's a pixel from the alpha part of the input texture:
And here's a pixel from the same area in the scaled output (notice how the A is now 1):
Does MTLFXSpatialScaler or MTLFXTemporalScaler support scaling a texture with alpha? Is there some additional setting or pixel format I need to use to enable this?

Writing Textures using Kernel

In a single pass (a single draw cycle) to the shader, what is the maximum number of textures we can draw at once in Metal using a kernel? I have tried to draw six square textures in one draw cycle. When the texture points overlap, the actual texture is not presented as expected and some glitches appear.

How to deal with texture distortion caused by "squaring" textures, and the interactions with mipmapping?

Suppose I have a texture which is naturally not square (for example, a photographic texture of something with a 4:1 aspect ratio). And suppose that I want to use PVRTC compression to display this texture on an iOS device, which requires that the texture be square. If I scale up the texture so that it is square during compression, the result is a very blurry image when the texture is viewed from a distance.
I believe that this is caused by mipmapping. Since the mipmap filter sees the new, larger stretched dimension, it uses that to choose a low mip level, which is actually not correct, since those pixels were just stretched to that size. If it looked at the other dimension, it would choose a higher-resolution mip level.
This theory is confirmed (somewhat) by the observation that if I leave the texture in a format that doesn't have to be square, the mipmap versions look just dandy.
There is a LOD Bias parameter, but the docs say that is applied to both dimensions. It seems like what is called for is a way to bias the LOD but only in one dimension (that is, to bias it toward more resolution in the dimension of the texture which was scaled up).
Other than chopping up the geometry to allow the use of square subsets of the original texture (which is infeasible, given our production pipeline), does anyone have any clever hacks they've used to deal with this issue?
It seems to me that you have a few options, depending on what you can do with, say, the vertex UVs.
[Hmm Just realised that in the following I'm assuming that the V coordinates run from the top to the bottom... you'll need to allow for me being old school :-) ]
The first thing that comes to mind is to take your 4N*N (X*Y) source texture and repeat it 4x vertically to give a 4N*4N texture, and then adjust the V coordinates on the model to be 1/4 of their current values. This won't save you much in terms of memory (since it effectively means a 4bpp PVRTC becomes 4x larger) but it will still save bandwidth and cache space, since the other parts of the texture won't be accessed. MIP mapping will also work all the way down to 1x1 textures.
Alternatively, if you want to save a bit of space and you have a pair of 4N*N textures, you could try packing them together into a "sort of" 4N*4N atlas. Put the first texture in the top N rows, followed by a repeat of its top N/2 rows. Then pack the bottom N/2 rows of the 2nd texture, followed by the second texture itself and then its top N/2 rows. Finally, put the bottom N/2 rows of the first texture in the remaining rows. For the UVs that access the first texture, do the same divide by 4 for the V parameter. For the second texture, you'll need to divide by 4 and add 0.5.
This should work fine until the MIP map level is so small that the two textures are being blended together... but I doubt that will really be an issue.
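To make the UV adjustment in the two schemes above concrete, here is a tiny C sketch of the remapping. The struct and function names are made up for illustration, and it assumes, as noted above, that V runs from the top (0) to the bottom (1).

```c
/* Illustrative only: names are invented, V is assumed to run top-to-bottom. */
typedef struct { float u, v; } UV;

/* Scheme 1: the 4N*N texture repeated 4x vertically into a 4N*4N texture.
 * The model's V coordinates are simply divided by 4. */
UV remap_repeated(UV in)
{
    UV out = { in.u, in.v * 0.25f };
    return out;
}

/* Scheme 2: two 4N*N textures packed into one 4N*4N atlas.
 * textureIndex is 0 for the first texture, 1 for the second (V/4 + 0.5). */
UV remap_atlas(UV in, int textureIndex)
{
    UV out = { in.u, in.v * 0.25f + (textureIndex ? 0.5f : 0.0f) };
    return out;
}
```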

"warping" an image on iOS

I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads with the texture coordinates equally distributed.
Example of a 2x2 grid (texture coordinates of its 3x3 vertices):
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move the shared vertices of adjacent quads (like in your example) while the texture coords stay the same, you get a warp effect. When using OpenGL ES you need a vertex and fragment shader, even if trivial, especially if you want to smooth the warp effect, which is linearly interpolated per quad/triangle in its simple form.
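As a concrete illustration of the grid idea (not from the answer itself), here is a small C sketch that builds the positions and texture coordinates for such a grid; the names and the clip-space layout are assumptions, and error handling is omitted. The texture coordinates are set once and never touched again, and the warp comes from displacing the positions of individual vertices afterwards.

```c
#include <stdlib.h>

/* One grid vertex: a 2D position plus fixed texture coordinates. */
typedef struct { float x, y; float u, v; } GridVertex;

/* Builds a (gridSize+1) x (gridSize+1) vertex grid; gridSize = 2 gives the
 * 3x3 layout shown above.  Caller frees the returned buffer. */
GridVertex *makeWarpGrid(int gridSize)
{
    int count = (gridSize + 1) * (gridSize + 1);
    GridVertex *grid = malloc(sizeof(GridVertex) * count);

    for (int row = 0; row <= gridSize; ++row) {
        for (int col = 0; col <= gridSize; ++col) {
            GridVertex *v = &grid[row * (gridSize + 1) + col];
            float s = (float)col / (float)gridSize;
            float t = (float)row / (float)gridSize;
            v->u = s;                 /* texcoords: equally distributed,  */
            v->v = t;                 /* left unchanged after this point  */
            v->x = s * 2.0f - 1.0f;   /* initial positions in clip space; */
            v->y = t * 2.0f - 1.0f;   /* displace these to warp the image */
        }
    }
    return grid;
}
```

Render each cell as two triangles; because the positions move but the texture coordinates do not, the image appears warped across the grid.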

UIImage/CGImage changing my pixel color

I have an image that is totally white in its RGB components, with varying alpha -- so, for example, 0xFFFFFF09 in RGBA format. But when I load this image with either UIImage or CGImage APIs, and then draw it in a CGBitmapContext, it comes out grayscale, with the RGB components set to the value of the alpha -- so in my example above, the pixel would come out 0x09090909 instead of 0xFFFFFF09. So an image that is supposed to be white, with varying transparency, comes out essentially black with transparency instead. There's nothing wrong with the PNG file I'm loading -- various graphics programs all display it correctly.
I wondered whether this might have something to do with my use of kCGImageAlphaPremultipliedFirst, but I can't experiment with it because CGBitmapContextCreate fails with other values.
The ultimate purpose here is to get pixel data that I can upload to a texture with glTexImage2D. I could use libPNG to bypass iOS APIs entirely, but any other suggestions? Many thanks.
White on a black background with an alpha of x IS a grey value corresponding to x in all the components. That's how premultiplied alpha works.
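If you need the original, non-premultiplied values back before calling glTexImage2D, one option is simply to divide the RGB bytes by the alpha byte after drawing into the bitmap context. A rough C sketch, assuming 8-bit RGBA byte order and valid premultiplied input (no channel larger than alpha):

```c
#include <stdint.h>
#include <stddef.h>

/* Undo premultiplication in place: for the example above, 0x09090909
 * becomes 0xFFFFFF09 again, since 9 * 255 / 9 == 255. */
void unpremultiplyRGBA(uint8_t *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t *p = &pixels[i * 4];
        uint8_t  a = p[3];
        if (a == 0) continue;  /* fully transparent: the RGB is unrecoverable */
        p[0] = (uint8_t)((p[0] * 255 + a / 2) / a);  /* rounded, not truncated */
        p[1] = (uint8_t)((p[1] * 255 + a / 2) / a);
        p[2] = (uint8_t)((p[2] * 255 + a / 2) / a);
    }
}
```

Alternatively, you can keep the premultiplied data as-is and render the texture with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which is the blend function premultiplied alpha expects.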
