What are gl.MAX_TEXTURE_SIZE and gl.MAX_RENDERBUFFER_SIZE exactly? - webgl

I would like to display a graphic (map) in a canvas.
Now this error message appears for large displays:
WebGL warning: width/height: Requested size 8316x3141 was too large, but resize to 4158x1570 succeeded.
I found this code snippet on the website https://docs.developer.canva.com/apps/frontend-development/webgl.
// Ensure we can render images this size
const maxTextureSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
const maxRenderBufferSize = gl.getParameter(gl.MAX_RENDERBUFFER_SIZE);
const maxSize = Math.max(state.image.width, state.image.height);
if (maxSize > maxTextureSize || maxSize > maxRenderBufferSize) {
  throw new Error("Failed to render image. The image is too large.");
}
But neither https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/WebGL_best_practices nor https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/getParameter explains exactly what the parameters gl.MAX_TEXTURE_SIZE and gl.MAX_RENDERBUFFER_SIZE mean, and I can't find any other documentation.
In my case, the warning already appears even though my image is much smaller than these limits.
Example
maxTextureSize = 16384
maxRenderBufferSize = 16384
mymaxSize = 8316
shows the warning
WebGL warning: width/height: Requested size 8316x3141 was too large, but resize to 4158x1570 succeeded.
and does not render properly.

For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE
Khronos OpenGL wiki
The same applies to MAX_RENDERBUFFER_SIZE, except that it pertains to renderbuffers created via createRenderbuffer or implicitly by the canvas as a backbuffer.
There have been GPU drivers that falsely report sizes they're not able to support. Are you by chance using some age-old integrated or mobile graphics solution?
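Whatever the reported limits say, you can query both of them up front, never ask for more than they allow, and then check what the browser actually allocated. A rough sketch, assuming canvas is your target canvas and width/height are the desired dimensions (the helper name and the scale-to-fit strategy are illustrative, not part of any API mentioned above):
function resizeBackbuffer(gl, canvas, width, height) {
  // Never request more than the driver claims to support.
  const maxSide = Math.min(gl.getParameter(gl.MAX_TEXTURE_SIZE),
                           gl.getParameter(gl.MAX_RENDERBUFFER_SIZE));
  const scale = Math.min(1, maxSide / Math.max(width, height));
  canvas.width = Math.floor(width * scale);
  canvas.height = Math.floor(height * scale);

  // Even then the browser may clamp further (as in the warning above);
  // drawingBufferWidth/Height report the size that was really allocated.
  const gotW = gl.drawingBufferWidth;
  const gotH = gl.drawingBufferHeight;
  if (gotW !== canvas.width || gotH !== canvas.height) {
    console.warn(`Backbuffer clamped to ${gotW}x${gotH}`);
  }
  return [gotW, gotH];
}
Checking drawingBufferWidth/drawingBufferHeight is the reliable way to detect the silent resize, since (as noted above) the reported maximums can't always be trusted.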

Related

Why has the sRGB extension lost a constant?

The old WebGL context has the EXT_sRGB extension. That extension exposes 4 constants:
{
SRGB_EXT : 35904,
SRGB_ALPHA_EXT : 35906,
SRGB8_ALPHA8_EXT : 35907,
FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING_EXT : 33296
}
The extension was promoted in WebGL2 and became part of the core, but it has lost a constant. WebGL2 has only these constants:
{
SRGB : 35904,
SRGB8_ALPHA8 : 35907,
FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING : 33296
}
No SRGB_ALPHA. Moreover, the WebGL2 context has no constant at all with the value 35906. I checked both browsers, and the situation is the same. I also checked all the other extensions I had locally: every extension promoted in WebGL2 has merged all of its properties into the context, except sRGB. I haven't found much in the docs.
What's wrong with the sRGB extension, and what is the reasoning behind the loss?
Did anyone use the SRGB_ALPHA_EXT constant? How? Please share your experience.
Also, something weird happened with the disjoint_timer_query extension. That extension was merged in only partially: the WebGL2 context got some of its properties. In Chrome I have disjoint_timer_query_webgl2, which has all the missing properties except one, getQueryObject, which was renamed to getQueryParameter; in Firefox the disjoint_timer_query extension is still available with a WebGL2 context.
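(For reference, a minimal sketch of how the renamed entry point is used in WebGL2: query creation and readback are core, while TIME_ELAPSED_EXT and GPU_DISJOINT_EXT still come from the extension object. drawScene is just a placeholder for your own rendering.)
const ext = gl.getExtension("EXT_disjoint_timer_query_webgl2");
if (ext) {
  const query = gl.createQuery();
  gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
  drawScene();                              // placeholder for your rendering
  gl.endQuery(ext.TIME_ELAPSED_EXT);

  // Later (typically a few frames on), poll with the renamed getQueryParameter:
  if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE) &&
      !gl.getParameter(ext.GPU_DISJOINT_EXT)) {
    const nanoseconds = gl.getQueryParameter(query, gl.QUERY_RESULT);
    console.log(`GPU time: ${nanoseconds / 1e6} ms`);
  }
}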
WebGL2 isn't 100% backward compatible with WebGL1. More like 99%. You found an area that's not.
SRGB_ALPHA_EXT is an unsized format and unsized formats have for the most part been deprecated. The basic non-extension unsized formats still exist but there's a table in the OpenGL ES 3.0 spec specifying what effective sized internal format they become. Extension unsized formats are not covered.
The constants are just that, constant, so you're free to define them in your own code.
// 35907 = SRGB8_ALPHA8, 6408 = RGBA, 35906 = SRGB_ALPHA_EXT
const srgba8InternalFormat = 35907;
const srgba8Format = isWebGL2 ? 6408 : 35906;
gl.texImage2D(gl.TEXTURE_2D, 0, srgba8InternalFormat, width, height, 0,
              srgba8Format, gl.UNSIGNED_BYTE, null);
In other words you don't have to reference the constants off of a WebGLRenderingContext. Bonus: your code will run faster and be smaller.
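If you would rather keep the symbolic names, here is a minimal sketch of uploading an sRGB texture on either context type, assuming gl is an existing context and image is an already-loaded image element:
const isWebGL2 = (typeof WebGL2RenderingContext !== "undefined") &&
                 (gl instanceof WebGL2RenderingContext);
const ext = isWebGL2 ? null : gl.getExtension("EXT_sRGB");

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
if (isWebGL2) {
  // Core WebGL2: sized internal format, plain RGBA as the upload format.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.SRGB8_ALPHA8, gl.RGBA, gl.UNSIGNED_BYTE, image);
} else if (ext) {
  // WebGL1 + EXT_sRGB: the unsized SRGB_ALPHA_EXT is both internalformat and format.
  gl.texImage2D(gl.TEXTURE_2D, 0, ext.SRGB_ALPHA_EXT, ext.SRGB_ALPHA_EXT, gl.UNSIGNED_BYTE, image);
}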

Creating an RGB CVOpenGLESTexture in iOS

I am trying to create a 3-channel CVOpenGLESTexture in iOS.
I can successfully create a single-channel texture by specifying kCVPixelFormatType_OneComponent8 in CVPixelBufferCreate() and GL_LUMINANCE for both format and internalFormat in CVOpenGLESTextureCacheCreateTextureFromImage().
Similarly, I can successfully create a 4-channel RGBA texture by specifying kCVPixelFormatType_32BGRA in CVPixelBufferCreate() and GL_RGBA for both format and internalFormat in CVOpenGLESTextureCacheCreateTextureFromImage().
I need to create 3-channel, 24-bit, RGB (or BGR) texture with accessible pixels.
I cannot seem to find the correct parameters (or combination thereof) to CVPixelBufferCreate() and CVOpenGLESTextureCacheCreateTextureFromImage() that will not cause either of them to fail.
Additional Info
The supported FOURCC format types reported by CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes() on my device:
32, 24, 16, L565, 5551, L555, 2vuy, 2vuf, yuvs, yuvf, 40, L008, L010, 2C08, r408, v408, y408, y416, BGRA, b64a, b48r, b32a, b16g, R10k, v308, v216, v210, v410, r4fl, grb4, rgg4, bgg4, gbr4, 420v, 420f, 411v, 411f, 422v, 422f, 444v, 444f, y420, f420, a2vy, L00h, L00f, 2C0h, 2C0f, RGhA, RGfA, w30r, w40a, w40m, x420, x422, x444, x44p, xf20, xf22, xf44, xf4p, x22p, xf2p, b3a8.
Interestingly, some of these values are not defined in CVPixelBuffer.h.
When I pass kCVPixelFormatType_24RGB (24 == 0x18) to CVPixelBufferCreate() it succeeds, but then CVOpenGLESTextureCacheCreateTextureFromImage() fails with error code -6683 (kCVReturnPixelBufferNotOpenGLCompatible).
Answering myself, though I will be happy to be proved wrong and shown how to do this.
As I show here (answering myself yet again) it is possible to list all the fourCC buffer formats supported on the device, and a bunch of format attributes associated with each such fourCC format.
The flags pertinent to this question are:
kCVPixelFormatOpenGLESCompatibility: should be true;
kCVPixelFormatContainsAlpha: should be false;
kCVPixelFormatContainsRGB: supported only from __IPHONE_8_0, but not strictly necessary.
Using the debugger, I found another helpful key: CFSTR("IOSurfaceOpenGLESTextureCompatibility") which will verify that the OpenGL ES texture supports direct pixel access with no need for (the slower) glReadPixels() and glTexImage2D().
Unfortunately, using these flags, it seems that there is currently no such RGB/BGR supported format.

TBitmap->LoadFromStream failed with Win XP

I'm using C++ Builder XE3 to develop a graph editor. All of the editing and drawing capabilities are made in a DLL that is loaded by the end user applications.
To store information about the available graph objects I use a SQLite database. That database contains BMP icons that are loaded into a TImageList at run-time.
Everything works fine on Win-7, Win-8 and Win-Vista, but on Win-XP a "Floating point division by 0" error occurs when loading the bitmap. I use a temporary memory stream to load the blob from the database and then load it into a temporary TBitmap, which is used to add the new icon into the final TImageList.
Here is the function used to do so...
void TIcons::AddMaskedBitmap( TImageList *ptImgList, unsigned char *pucIcon, unsigned int uiSize )
{
    TMemoryStream *ptMemStream;
    // Use a memory stream
    ptMemStream = new TMemoryStream();
    ptMemStream->Write( pucIcon, uiSize );
    ptMemStream->Position = 0;//Seek( ( int )0, ( unsigned short )soBeginning );
    // Load using the cached bmp object
    m_ptBmp->Transparent = true;
    #warning "floating point division by 0 error with WinXP"
    m_ptBmp->LoadFromStream( ptMemStream ); // floating point division by 0 error with WinXP
    // m_ptBmp->LoadFromFile( ".\\d.bmp" ); // works
    // Create a mask
    m_ptBmpMask->Assign( m_ptBmp );
    m_ptBmpMask->Canvas->Brush->Color = m_ptBmp->TransparentColor;
    m_ptBmpMask->Monochrome = true;
    // Add it to the list
    ptImgList->Add( m_ptBmp, m_ptBmpMask );
    // Free mem
    m_ptBmp->FreeImage();
    m_ptBmpMask->FreeImage();
    delete ptMemStream;
}
I've traced the TBitmap::LoadFromStream function and the exception occurs in the CreateDIBSection function.
To make sure the loaded bitmap files are saved using the right encoding, I've tried to load them using the TBitmap::LoadFromFile function and it works fine, so I think there's something wrong with the TBitmap::LoadFromStream function, but I can't figure out what!
If anyone has an idea...
Thanks.
LoadFromFile is implemented by creating a file stream and passing that to LoadFromStream. Thus, if the contents of your memory stream are the same as the contents of the file, the call to LoadFromStream will succeed.
Thus the only sane conclusion is that the contents of the memory stream are invalid in some way.
The bitmap stored in the database is encoded using the BITMAPV4HEADER structure, which is supposed to be supported since Win95/NT4, but there's something wrong.
It works fine if I encode the bitmap using the BITMAPINFOHEADER structure, an older version of the bitmap header that does not contain color space information.
Just found out a solution that works for me.
My problem was that software developed on Win7 was throwing the division by 0 error when loading one of my BMPs while running on XP.
It turns out that the problematic BMP was saved using Win7 Paint (other BMPs that were ok were saved from Gimp).
All I needed to do to fix it was to open this BMP on XP Paint and save it from there.

Is there a way to force Magick++ to skip its cache when writing modified PixelPackets?

I have written a program that relies on Magick++ simply for importing and exporting of a wide variety of image formats. It uses Image.getPixels() to get a PixelPacket, does a lot of matrix transformations, then calls Image.syncPixels() before writing a new image. The general approach is the same as the example shown in Magick++'s documentation. More or less, the relevant code is:
Magick::Image image("image01.bmp");
image.modifyImage();
Magick::PixelPacket *imagePixels = image.getPixels(0, 0, 10, 10);
// Matrix manipulation occurs here.
// All actual changes to the PixelPacket direct changes to pixels like so:
imagePixels[i].red = 4; // or any other integer
// finally, after matrix manipulation is done
image.syncPixels();
image.write("image01_transformed.bmp");
When I run the above code, the new image file ("image01_transformed.bmp" in this example) ends up being the same as the original. However, if I write it to a different format, such as "image01_transformed.ppm", I get the correct result: a modified image. I assume this is due to a cached version of the format-encoded image, and that Magick++ is for some reason not aware that the image has actually changed, so the cache is out of date. I tested this idea by adding image.blur(1.0, 0.1); immediately before image.syncPixels();, and forcing this inconsequential change did indeed produce the correct result for same-format images.
Is there a way to force Magick++ to realize that the cache is out-of-date? Am I using getPixels() and syncPixels() incorrectly in the first place? Thanks!

ABCPDF Font Printing Layout - Machine Dependent

I am using ABCPDF to print a PDF file to a local printer via EMF file. I've based this very closely on ABC PDF's sample "ABCPDFView" project. My application worked fine on my Windows 7 and Windows XP dev boxes, but when I moved to a Windows 2003 test box, simple embedded fonts (like Times New Roman 12) rendered completely wrong (wrong spot, and short and squat, almost like the DPI's were crazily wrong).
Note that I've hardcoded the DPI to 240 here b/c I'm using a weird mainframe print driver that forces 240x240. I can discount that driver as the culprit as, if I save the EMF file locally during print, it shows the same layout problems. If I render to PNG or TIFF files, this looks just fine on all my servers using this same code (put .png in place of .emf). Finally, if I use the ABCPDFView project to manually add in a random text box to my PDF, that text also renders wrong in the EMF file. (Side note, if I print the PDF using Acrobat, the text renders just fine)
Update: I left out a useful point for anyone else having this problem. I can work around the problem by setting RenderTextAsText to "0" (see code below). This forces ABCPDF to render the text as polygons and makes the problem go away. This isn't a great solution though, as it greatly increases the size of my EMF files, and those polygons don't render nearly as cleanly in my final print document.
Anyone have any thoughts on the causes of this weird font problem?
private void DoPrintPage(object sender, PrintPageEventArgs e)
{
    using (Graphics g = e.Graphics)
    {
        //... omitted code to determine the rect, used straight from ABC PDF sample
        mDoc.Rendering.DotsPerInch = 240;
        mDoc.Rendering.ColorSpace = "RGB";
        mDoc.Rendering.BitsPerChannel = 8;
        mDoc.SetInfo(0, "RenderTextAsText", "0"); // the magic is right here
        byte[] theData = mDoc.Rendering.GetData(".emf");
        using (MemoryStream theStream = new MemoryStream(theData))
        {
            using (Metafile theEMF = new Metafile(theStream))
            {
                g.DrawImage(theEMF, theRect);
            }
        }
        //... omitted code to move to the next page
    }
}
Try upgrading to the new version of ABCpdf 8; it has its own rendering engine based on Gecko, so you can bypass issues like this that occur when ABCpdf uses the built-in server version of IE for rendering.
I was originally RDPing in at 1920x1080 resolution; by switching to 1024x768 for the RDP session, the problem went away. My main program runs as a service, and starting this service from an RDP session with 1024x768 fixes it.
I have an email out w/ ABC PDF to see if they can explain this and offer a more elegant solution, but for now this works.
Please note that this is ABC PDF 7, I have no idea if this issue applies to other versions.
Update: ABC PDF support confirmed that it's possible the service is caching the display resolution from the person that started the process. They confirmed that they've seen some other weird issues with Remote Desktop and encouraged me to use this 1024x768 workaround and/or start the service remotely.