Why has the sRGB extension lost a constant? - webgl

The old WebGL context has the EXT_sRGB extension. That extension exposes four constants:
{
SRGB_EXT : 35904,
SRGB_ALPHA_EXT : 35906,
SRGB8_ALPHA8_EXT : 35907,
FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING_EXT : 33296
}
The extension was promoted in WebGL2 and became part of the core, but lost a constant along the way. WebGL2 has only these constants:
{
SRGB : 35904,
SRGB8_ALPHA8 : 35907,
FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING : 33296
}
There is no SRGB_ALPHA. Moreover, the WebGL2 context has no constant at all with the value 35906. I checked both browsers; the situation is the same. I also checked every other extension I have locally: all extensions promoted into WebGL2 merged all of their properties into the context, except sRGB. I haven't found much in the docs.
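For reference, this is the kind of check I mean (a minimal console sketch):
for (const name of Object.keys(WebGL2RenderingContext)) {
  // WebGL constants are own enumerable properties of the interface object.
  if (WebGL2RenderingContext[name] === 35906) console.log(name);
}
// Prints nothing: no WebGL2 constant carries the old SRGB_ALPHA_EXT value.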
What's wrong with the sRGB extension, and what reasoning is behind the loss?
Did anyone use the SRGB_ALPHA_EXT constant? How? Please share your experience.
Also, something weird happened with the disjoint_timer_query extension: it was merged in only partially, so the WebGL2 context got some of the extension's properties. In Chrome I have disjoint_timer_query_webgl2, which has all the missing properties except getQueryObject, which was renamed to getQueryParameter; in Firefox, the disjoint_timer_query extension is still available with a WebGL2 context.

WebGL2 isn't 100% backward compatible with WebGL1; more like 99%. You found one of the areas that isn't.
SRGB_ALPHA_EXT is an unsized format, and unsized formats have for the most part been deprecated. The basic non-extension unsized formats still exist, but there's a table in the OpenGL ES 3.0 spec specifying which effective sized internal format each of them becomes. Extension unsized formats are not covered.
The constants are just that, constants, so you're free to define them in your own code. Note that WebGL1's EXT_sRGB takes the unsized SRGB_ALPHA_EXT as both internal format and format, while WebGL2 takes the sized SRGB8_ALPHA8 internal format with a plain RGBA format:
const srgba8InternalFormat = isWebGL2 ? 35907 /* SRGB8_ALPHA8 */ : 35906 /* SRGB_ALPHA_EXT */;
const srgba8Format = isWebGL2 ? 6408 /* RGBA */ : 35906 /* SRGB_ALPHA_EXT */;
gl.texImage2D(gl.TEXTURE_2D, 0, srgba8InternalFormat, width, height, 0,
              srgba8Format, gl.UNSIGNED_BYTE, null);
In other words, you don't have to reference the constants off of a WebGLRenderingContext. Bonus: your code will run faster and be smaller.
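For completeness, here is a minimal sketch of a helper built on the same idea (the function name and error handling are illustrative); note that WebGL1 must still enable the extension even though the constants are hardcoded:
function createSRGBTexture(gl, isWebGL2, width, height) {
  // WebGL1 needs EXT_sRGB enabled; WebGL2 has sRGB formats in core.
  if (!isWebGL2 && !gl.getExtension('EXT_sRGB')) {
    throw new Error('sRGB textures not supported');
  }
  const internalFormat = isWebGL2 ? 35907 /* SRGB8_ALPHA8 */ : 35906 /* SRGB_ALPHA_EXT */;
  const format = isWebGL2 ? 6408 /* RGBA */ : 35906 /* SRGB_ALPHA_EXT */;
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, width, height, 0,
                format, gl.UNSIGNED_BYTE, null);
  return tex;
}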

Related

What are gl.MAX_TEXTURE_SIZE and gl.MAX_RENDERBUFFER_SIZE exactly?

I would like to display a graphic (map) in a canvas.
Now this warning appears for large displays:
WebGL warning: width/height: Requested size 8316x3141 was too large, but resize to 4158x1570 succeeded.
I found this code snippet at https://docs.developer.canva.com/apps/frontend-development/webgl:
// Ensure we can render images this size
const maxTextureSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
const maxRenderBufferSize = gl.getParameter(gl.MAX_RENDERBUFFER_SIZE);
const maxSize = Math.max(state.image.width, state.image.height);
if (maxSize > maxTextureSize || maxSize > maxRenderBufferSize) {
throw new Error("Failed to render image. The image is too large.");
}
But neither https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/WebGL_best_practices nor https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/getParameter explains exactly what the parameters gl.MAX_TEXTURE_SIZE and gl.MAX_RENDERBUFFER_SIZE mean, and I can't find any other documentation.
For me, the warning already occurs when my image is much smaller than these limits.
Example:
maxTextureSize = 16384
maxRenderBufferSize = 16384
myMaxSize = 8316
shows the warning
WebGL warning: width/height: Requested size 8316x3141 was too large, but resize to 4158x1570 succeeded.
and does not render properly.
For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE
Khronos OpenGL wiki
The same applies for MAX_RENDERBUFFER_SIZE except that it pertains to renderbuffers created via createRenderbuffer or implicitly by the canvas as a backbuffer.
There have been GPU drivers that falsely report sizes they're not able to support. Are you by any chance using an age-old integrated or mobile graphics solution?
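One way to sanity-check the reported limit is to actually allocate a texture at that size and see whether the driver accepts it; a minimal sketch (the function name is illustrative, and this only catches synchronous errors, so out-of-memory can still surface later):
function probeMaxTextureSize(gl) {
  let size = gl.getParameter(gl.MAX_TEXTURE_SIZE);
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  while (size > 1) {
    // Try a size x 1 allocation so we test the dimension limit, not total memory.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, 1, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    if (gl.getError() === gl.NO_ERROR) break; // driver accepted this size
    size = size >> 1; // halve and retry
  }
  gl.deleteTexture(tex);
  return size;
}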

How to setup a TIcon instance to support alpha channel icons (SupportsPartialTransparency)

Using Borland C++ Builder 2009, I notice that when replacing images in a TImageList, the alpha channel data gets corrupted somehow.
TIcon *Icon = new TIcon();
for (int x = 0; x < OS_Specific_count; x++)
{
    OS_xx_ImageList->GetIcon(x, Icon);
    Use_ImageList->ReplaceIcon(x, Icon);
}
delete Icon;
The problem is also described (+screenshots) in another question (TImageList - True color + alpha channel vs. 8-bit (256 colors)), but I'm now trying to narrow things down with more specific questions.
While browsing TIcon in the help file I noticed a read-only property: SupportsPartialTransparency.
It appears to be false in my case, and I wonder if this is the key to solving the problem. Setting Icon->Transparent = true does not set SupportsPartialTransparency to true!
What can I do to make sure the TIcon instance correctly 'gets' and 'replaces' the alpha channel information?
The ImageLists are created at design time and have default properties (nothing changed) and contain 16x16 icons imported via the IDE. The imported icons contain alpha channel information.
I just use TPngImageList; it is compatible with TImageList, and its design-time editor is more flexible. You will need to convert your .ico files to .png.
It is free and widely available, for example here: https://github.com/TurboPack. No headaches with transparency since :) By the way, keeping icons as PNGs is more portable; you can use them with other development tools and platforms.

Creating an RGB CVOpenGLESTexture in iOS

I am trying to create a 3-channel CVOpenGLESTexture in iOS.
I can successfully create a single-channel texture by specifying kCVPixelFormatType_OneComponent8 in CVPixelBufferCreate() and GL_LUMINANCE for both format and internalFormat in CVOpenGLESTextureCacheCreateTextureFromImage().
Similarly, I can successfully create a 4-channel RGBA texture by specifying kCVPixelFormatType_32BGRA in CVPixelBufferCreate() and GL_RGBA for both format and internalFormat in CVOpenGLESTextureCacheCreateTextureFromImage().
I need to create a 3-channel, 24-bit RGB (or BGR) texture with accessible pixels.
I cannot seem to find the correct parameters (or combination thereof) to CVPixelBufferCreate() and CVOpenGLESTextureCacheCreateTextureFromImage() that will not cause either of them to fail.
Additional Info
The supported FOURCC format types reported by CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes() on my device:
32, 24, 16, L565, 5551, L555, 2vuy, 2vuf, yuvs, yuvf, 40, L008, L010, 2C08, r408, v408, y408, y416, BGRA, b64a, b48r, b32a, b16g, R10k, v308, v216, v210, v410, r4fl, grb4, rgg4, bgg4, gbr4, 420v, 420f, 411v, 411f, 422v, 422f, 444v, 444f, y420, f420, a2vy, L00h, L00f, 2C0h, 2C0f, RGhA, RGfA, w30r, w40a, w40m, x420, x422, x444, x44p, xf20, xf22, xf44, xf4p, x22p, xf2p, b3a8.
Interestingly, some of these values are not defined in CVPixelBuffer.h.
When I pass kCVPixelFormatType_24RGB (24 == 0x18) to CVPixelBufferCreate(), it succeeds, but then CVOpenGLESTextureCacheCreateTextureFromImage() fails with error code -6683: kCVReturnPixelBufferNotOpenGLCompatible.
Answering myself, though I will be happy to be proved wrong and shown how to do this.
As I show here (answering myself yet again), it is possible to list all the fourCC buffer formats supported on the device, along with a bunch of format attributes associated with each such format.
The flags pertinent to this question are:
kCVPixelFormatOpenGLESCompatibility : should be true;
kCVPixelFormatContainsAlpha : should be false;
kCVPixelFormatContainsRGB : should be true. Note: available only from __IPHONE_8_0, but not strictly necessary;
Using the debugger, I found another helpful key: CFSTR("IOSurfaceOpenGLESTextureCompatibility") which will verify that the OpenGL ES texture supports direct pixel access with no need for (the slower) glReadPixels() and glTexImage2D().
Unfortunately, judging by these flags, it seems that there is currently no such supported RGB/BGR format.
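For reference, a sketch (plain C, CoreVideo) of the enumeration described above; it prints every fourCC that is OpenGL ES compatible and carries no alpha. The helper name is illustrative and error handling is omitted.
#include <stdio.h>
#include <CoreVideo/CoreVideo.h>

static void listOpaqueGLESFormats(void) {
    CFArrayRef formats =
        CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes(kCFAllocatorDefault);
    for (CFIndex i = 0; i < CFArrayGetCount(formats); ++i) {
        SInt32 fourcc = 0;
        CFNumberGetValue(CFArrayGetValueAtIndex(formats, i),
                         kCFNumberSInt32Type, &fourcc);
        CFDictionaryRef desc = CVPixelFormatDescriptionCreateWithPixelFormatType(
            kCFAllocatorDefault, (OSType)fourcc);
        if (desc == NULL) continue;
        CFBooleanRef gles  = CFDictionaryGetValue(desc, kCVPixelFormatOpenGLESCompatibility);
        CFBooleanRef alpha = CFDictionaryGetValue(desc, kCVPixelFormatContainsAlpha);
        if (gles == kCFBooleanTrue && alpha != kCFBooleanTrue)
            printf("candidate fourCC: '%c%c%c%c'\n",
                   (char)(fourcc >> 24), (char)(fourcc >> 16),
                   (char)(fourcc >> 8),  (char)fourcc);
        CFRelease(desc);
    }
    CFRelease(formats);
}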

How to add GIF and TIFF support to TJvDBImage?

TJvDBImage is a good component that supports several picture formats. JvJVCLUtils mentions that the supported formats can be expanded with the RegisterGraphicSignature procedure. In the comments it says:
WHAT IT IS:
These are helper functions to register graphic formats that can
later be recognized from a stream, thus allowing you to rely on the actual
content of a file rather than on its filename extension.
This is used in TJvDBImage and TJvImage.
IMAGE FORMATS:
The implementation is simple: just register image signatures with the
RegisterGraphicSignature procedure and the methods take care
of the correct instantiation of the TGraphic object. The signatures
registered at the unit's initialization are: BMP, WMF, EMF, ICO, JPG.
If you have some other image library (such as GIF, PCX, TIFF, ANI or PNG),
just register the signature:
RegisterGraphicSignature(<string value>, <offset>, <class>)
or
RegisterGraphicSignature([<byte values>], <offset>, <class>)
This means:
When <string value> (or byte values) found at <offset> the graphic
class to use is <class>
For example (actual code of the initialization section):
RegisterGraphicSignature([$D7, $CD], 0, TMetaFile); // WMF
RegisterGraphicSignature([1, 0], 0, TMetaFile); // EMF
RegisterGraphicSignature('JFIF', 6, TJPEGImage);
You can also unregister signatures. If you want to use TGIFImage instead of
TJvGIFImage, you can unregister with:
UnregisterGraphicSignature('GIF', 0);
or just
UnregisterGraphicSignature(TJvGIFImage); // must add JvGIF unit in uses clause
then:
RegisterGraphicSignature('GIF', 0, TGIFImage); // must add GIFImage to uses clause
I followed the instructions and added GIFImage to the uses clause of that unit. Also, in procedure GraphicSignaturesNeeded I added:
RegisterGraphicSignature('GIF', 0, TGIFImage);
RegisterGraphicSignature([$4D, $4D, 0, $2A], 0, TWICImage); // TIFF, big-endian ('MM')
RegisterGraphicSignature([$49, $49, $2A, 0], 0, TWICImage); // TIFF, little-endian ('II')
The TIFF signatures are based on the tip "Detecting graphic formats".
Then I used the makemodified.bat to re-compile JVCL.
Before the change, loading an image into the TJvDBImage would read the file and then raise an endless stream of "bitmap image not valid" errors. After the change, it refuses to load the file and raises the same error once.
If I load a GIF/TIFF image into the field using other tools, then displaying it raises the endless error mentioned above. If I read the field content back using the functions from the link above, it displays in a TImage perfectly.
So, what have I missed or doing wrong?
Thank you!

Is there a way to force Magick++ to skip its cache when writing modified PixelPackets?

I have written a program that relies on Magick++ simply for importing and exporting a wide variety of image formats. It uses Image::getPixels() to get a PixelPacket, does a lot of matrix transformations, then calls Image::syncPixels() before writing a new image. The general approach is the same as the example shown in Magick++'s documentation. More or less, the relevant code is:
Magick::Image image("image01.bmp");
image.modifyImage();
Magick::PixelPacket *imagePixels = image.getPixels(0, 0, 10, 10);
// Matrix manipulation occurs here. All actual changes to the PixelPacket
// are direct writes to individual pixels, like so:
imagePixels[i].red = 4; // or any other integer
// Finally, after the matrix manipulation is done:
image.syncPixels();
image.write("image01_transformed.bmp");
When I run the above code, the new image file ("image01_transformed.bmp" in this example) ends up identical to the original. However, if I write to a different format, such as "image01_transformed.ppm", I get the correct result: a modified image. I assume this is due to a cached version of the format-encoded image: Magick++ is for some reason not aware that the image has actually changed, so the cache is out of date. I tested this idea by adding image.blur(1.0, 0.1); immediately before image.syncPixels();, and forcing this inconsequential change did indeed produce the correct result for same-format output.
Is there a way to force Magick++ to realize that the cache is out-of-date? Am I using getPixels() and syncPixels() incorrectly in the first place? Thanks!
