I am working on a C++ project cross-compiled with Emscripten to WebAssembly. On the C++ side I am using the subset of OpenGL that is compatible with WebGL 2.0. The renderer performs a first render pass into an MSAA FBO, then blits the result into the default FBO (the screen). Below are the blit errors I am receiving when testing on Chrome and Firefox:
Chrome:
[.WebGL-0000494000387800] GL_INVALID_OPERATION: Attempt to blit from a multisampled framebuffer and the bounds or format of the color buffer don't match with the draw framebuffer.
Firefox:
WebGL warning: blitFramebuffer: If the source is multisampled, then the source and dest regions must match exactly.
I am not sure what I am missing here. I have used the following code (courtesy of the SOKOL_SAMPLES project):
void emsc_init(const char* canvas_name, int flags) {
    _emsc_canvas_name = canvas_name;
    _emsc_is_webgl2 = false;
    emscripten_get_element_css_size(canvas_name, &_emsc_width, &_emsc_height);

    // force our offscreen FBO size also for the canvas
    _emsc_width = 1920;
    _emsc_height = 1080;
    emscripten_set_canvas_element_size(canvas_name, _emsc_width, _emsc_height);
    emscripten_set_resize_callback(EMSCRIPTEN_EVENT_TARGET_WINDOW, 0, false, _emsc_size_changed);

    EMSCRIPTEN_WEBGL_CONTEXT_HANDLE ctx;
    EmscriptenWebGLContextAttributes attrs;
    emscripten_webgl_init_context_attributes(&attrs);
    attrs.antialias = flags & EMSC_ANTIALIAS;
    if (flags & EMSC_TRY_WEBGL2) {
        attrs.majorVersion = 2;
    }
    ctx = emscripten_webgl_create_context(canvas_name, &attrs);
    if ((flags & EMSC_TRY_WEBGL2) && ctx) {
        _emsc_is_webgl2 = true;
    }
    emscripten_webgl_make_context_current(ctx);
}
After the canvas sizes are set, it reports my sizes correctly. The size of the offscreen multisampled FBO is the same. Is it possible that setting the canvas size doesn't resize the context's default framebuffer automatically? Or is it something else?
I tried to reproduce the issue with vanilla JS and WebGL, and yes, I got the same errors whenever the src and dst rectangles passed to glBlitFramebuffer didn't match.
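For reference, a minimal sketch of a resolve step that satisfies that restriction; msaa_fbo, fb_width and fb_height are placeholder names for the application's own MSAA framebuffer handle and its attachment size, which must match the canvas backbuffer exactly:

#include <GLES3/gl3.h>
#include <emscripten/html5.h>
#include <cassert>

// Sketch only, not the code above: resolve a multisampled FBO into the
// default framebuffer (the canvas).
void resolve_msaa_to_screen(EMSCRIPTEN_WEBGL_CONTEXT_HANDLE ctx,
                            GLuint msaa_fbo, int fb_width, int fb_height) {
    // Sanity check: the default framebuffer (canvas backing store) must have
    // the same size as the MSAA attachments, or the rects below can't match.
    int buf_w = 0, buf_h = 0;
    emscripten_webgl_get_drawing_buffer_size(ctx, &buf_w, &buf_h);
    assert(buf_w == fb_width && buf_h == fb_height);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default FBO (the canvas)

    // WebGL 2.0: with a multisampled read framebuffer, the source and
    // destination rectangles must be identical (no scaling is allowed).
    glBlitFramebuffer(0, 0, fb_width, fb_height,
                      0, 0, fb_width, fb_height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}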
How can I read the current framebuffer color in my shader for a Mac application?
I can do the same thing very easily in an iOS app using [[color(0)]].
I tried using a texture as shown below, but some pixels are getting missed.
texture2d_array normal_1 [[texture(0)]]
Sample code:
fragment float4 funcname(QuadFragIn inFrag [[ stage_in ]],
                         texture2d_array<float> normal_1 [[texture(0)]])
{
    // tex_sampler is declared elsewhere in the original shader
    float4 color_0 = float4(normal_1.sample(tex_sampler, inFrag.m_TexCoord, 0));
    float4 color_1 = float4(normal_1.sample(tex_sampler, inFrag.m_TexCoord, 1));
    float4 color_2 = float4(normal_1.sample(tex_sampler, inFrag.m_TexCoord, 2));

    int index = inFrag.index;
    if (index == 0)
    {
        return color_0;
    }
    else if (index == 1)
    {
        return color_1;
    }
    else
    {
        return color_2;
    }
}
In the attached output image, the white strip is the issue in question.
As per the Metal Shading Language specification (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf, comments under Table 12):
For [[color(m)]], m is used to specify the color attachment index when accessing (reading or writing) multiple color attachments in a fragment function. The [[color(m)]] attribute is only supported in iOS.
That means that on macOS you can't do pixel readback (reading already rendered pixels from an attached texture in the same render pass).
One way to avoid pixel readback is to do the pixel writes and reads in separate render passes: you render what you want in a first render pass, then attach the result of that pass as a texture in the next pass, where you can sample or read it.
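For illustration only (the names here are mine, not from the question): in the second pass, the first pass's color attachment is bound as an ordinary texture and simply sampled. Metal Shading Language is C++-based, so a sketch of such a fragment function looks like this:

#include <metal_stdlib>
using namespace metal;

struct QuadFragIn {
    float4 m_Position [[position]];
    float2 m_TexCoord;
};

// Second pass: the texture the first pass rendered into is assumed to be
// bound at [[texture(0)]].
fragment float4 second_pass_read(QuadFragIn inFrag [[stage_in]],
                                 texture2d<float> prevPass [[texture(0)]])
{
    constexpr sampler tex_sampler(address::clamp_to_edge, filter::nearest);
    // "Read back" the pixels rendered by the previous pass.
    return prevPass.sample(tex_sampler, inFrag.m_TexCoord);
}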
I'm using the Embarcadero RAD Studio C++Builder XE7 compiler. In an application project, I'm using both the Windows GDI and GDI+ to draw on several device contexts.
My drawing content is something like this:
In the above sample, the text background and the user picture are drawn with GDI+. The user picture is also clipped with a rounded path. All the other items (the text and the emojis) are drawn with the GDI.
When I draw to the screen DC, all works fine.
Now I want to draw on a printer device context. The one I use for my tests is the new "Export to PDF" printer device available in Windows 10. I prepare my device context to draw on an A4 viewport this way:
HDC GetPrinterDC(HWND hWnd) const
{
    // initialize the print dialog structure, set PD_RETURNDC to return a printer device context
    ::PRINTDLG pd = {0};
    pd.lStructSize = sizeof(pd);
    pd.hwndOwner = hWnd;
    pd.Flags = PD_RETURNDC;

    // get the printer DC to use
    ::PrintDlg(&pd);
    return pd.hDC;
}

...

void Print()
{
    HDC hDC = NULL;

    try
    {
        hDC = GetPrinterDC(Application->Handle);

        const TSize srcPage(793, 1123);
        const TSize dstPage(::GetDeviceCaps(hDC, PHYSICALWIDTH), ::GetDeviceCaps(hDC, PHYSICALHEIGHT));
        const TSize pageMargins(::GetDeviceCaps(hDC, PHYSICALOFFSETX), ::GetDeviceCaps(hDC, PHYSICALOFFSETY));

        ::SetMapMode(hDC, MM_ISOTROPIC);
        ::SetWindowExtEx(hDC, srcPage.Width, srcPage.Height, NULL);
        ::SetViewportExtEx(hDC, dstPage.Width, dstPage.Height, NULL);
        ::SetViewportOrgEx(hDC, -pageMargins.Width, -pageMargins.Height, NULL);

        ::DOCINFO di = {sizeof(::DOCINFO), config.m_FormattedTitle.c_str()};
        ::StartDoc(hDC, &di);

        // ... the draw function is executed here ...

        ::EndDoc(hDC);
    }
    __finally
    {
        if (hDC)
            ::DeleteDC(hDC);
    }
}
The draw function executed between the StartDoc() and EndDoc() calls is exactly the same as the one I use to draw on the screen. The only difference is that I added a global clipping rect around the whole page, to prevent the drawing from overlapping the page margins when the content is too big, e.g. when I repeat the above drawing several times below the first one. (This is experimental; later I will add a page-cutting process, but that is not the question for now.)
Here are my clipping functions:
int Clip(const TRect& rect, HDC hDC)
{
    // save current device context state
    int savedDC = ::SaveDC(hDC);

    HRGN pClipRegion = NULL;

    try
    {
        // reset any previous clip region
        ::SelectClipRgn(hDC, NULL);

        // create clip region
        pClipRegion = ::CreateRectRgn(rect.Left, rect.Top, rect.Right, rect.Bottom);

        // select new canvas clip region
        if (::SelectClipRgn(hDC, pClipRegion) == ERROR)
        {
            DWORD error = ::GetLastError();
            ::OutputDebugString((String(L"Unable to select clip region - error - ") + ::IntToStr(int(error))).c_str());
        }
    }
    __finally
    {
        // delete clip region (it was copied internally by SelectClipRgn())
        if (pClipRegion)
            ::DeleteObject(pClipRegion);
    }

    return savedDC;
}
void ReleaseClip(int savedDC, HDC hDC)
{
    if (!savedDC)
        return;

    if (!hDC)
        return;

    // restore previously saved device context
    ::RestoreDC(hDC, savedDC);
}
As mentioned above, I expected the output to be clipped to my page. However, the result is just a blank page. If I bypass the clipping functions, everything is printed correctly, except that the drawing may overlap the page margins. On the other hand, if I apply the clipping to an arbitrary rect on my screen, everything works fine.
What am I doing wrong with my clipping? Why is the page completely broken when I enable it?
So I found the issue. Niki was close to the solution. The clipping functions always seem to be applied to the page in pixels (device units), ignoring the coordinate system and the units defined by the viewport.
In my case, the values passed to the CreateRectRgn() function were wrong, because they remained untransformed by the viewport, even though the clipping was applied after the viewport was set on the device context.
This made the issue difficult to identify, because when reading the code the clipping appeared to be transformed, as it was applied after the viewport, just before the drawing was processed.
I don't know if this is a GDI bug or intended behavior, but unfortunately I have never seen this detail mentioned in any of the documents I've read about clipping, although it seems important to know that clipping isn't affected by the viewport.
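In other words, the rect has to be converted from logical to device units before the region is created. A minimal sketch of that conversion, using a plain Win32 RECT rather than TRect to stay self-contained (the LPtoDP() call performs the viewport transform that SelectClipRgn() skips):

#include <windows.h>

// Sketch: clip to a rect given in logical (viewport) units. SelectClipRgn()
// works in device units, so the rect is run through LPtoDP() first.
int ClipLogicalRect(const RECT& logicalRect, HDC hDC)
{
    int savedDC = ::SaveDC(hDC);

    // apply the current mapping mode / viewport transform manually
    POINT pts[2] = {{logicalRect.left,  logicalRect.top},
                    {logicalRect.right, logicalRect.bottom}};
    ::LPtoDP(hDC, pts, 2);

    HRGN clipRegion = ::CreateRectRgn(pts[0].x, pts[0].y, pts[1].x, pts[1].y);

    if (clipRegion)
    {
        ::SelectClipRgn(hDC, clipRegion);
        ::DeleteObject(clipRegion); // the region is copied by SelectClipRgn()
    }

    return savedDC;
}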
I've got a YUV420 pixel buffer in a UInt8 array. I need to create a texture out of it in order to render it with OpenGL. On Android there is an easy way to decode my array to an RGB array for the texture. The code is the following:
BitmapFactory.Options bO = new BitmapFactory.Options();
bO.inJustDecodeBounds = false;
bO.inPreferredConfig = Bitmap.Config.RGB_565;

try {
    myBitmap = BitmapFactory.decodeByteArray(yuvbuffer,
                                             0,
                                             yuvbuffer.length,
                                             bO);
} catch (Throwable e) {
    // ...
}
I need to decode the YUV buffer on my iOS platform (Xcode 8.3.3, Swift 3.1) in order to pass it as data into the following method:
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalFormat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid* data);
How can I achieve this decoding?
ALTERNATIVE:
I've described the way I decode the YUV buffer on Android. Maybe there is another way to create a texture based on YUV pixels without decoding it like this. I've already tried the following method using the fragment shader (Link), but it is not working for me: I'm getting a black screen or a green screen, but the image is never rendered. There are also some methods using two separate buffers, one for Y and one for UV, but I don't know how to split my YUV buffer into Y and UV.
Do you have any examples/samples for YUV rendering that are not outdated and still work?
If you only need to display that image/video, then you don't really need to convert it to an RGB texture. You can bind all three planes (Y/Cb/Cr) as separate textures, and perform the yuv->rgb conversion in the fragment shader with just three dot products.
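For illustration, here is a rough sketch of that approach (the names are mine, not from any official sample; GLES 2.0 and full-range BT.601 coefficients are assumed). It also shows how an I420 buffer splits into planes: Y occupies width*height bytes, followed by U and then V at (width/2)*(height/2) bytes each:

#include <OpenGLES/ES2/gl.h>
#include <cstdint>

// Fragment shader: three single-channel plane textures -> RGB.
static const char* kYuvFragShader = R"(
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texY;
uniform sampler2D u_texU;
uniform sampler2D u_texV;
void main() {
    vec3 yuv = vec3(texture2D(u_texY, v_texCoord).r,
                    texture2D(u_texU, v_texCoord).r - 0.5,
                    texture2D(u_texV, v_texCoord).r - 0.5);
    // BT.601 full-range yuv -> rgb, one dot product per channel:
    gl_FragColor = vec4(dot(yuv, vec3(1.0,  0.0,    1.402)),
                        dot(yuv, vec3(1.0, -0.344, -0.714)),
                        dot(yuv, vec3(1.0,  1.772,  0.0)),
                        1.0);
}
)";

// Upload the planes of an I420 buffer: Y, then U, then V, contiguously.
void uploadPlanes(const uint8_t* yuv, int width, int height,
                  GLuint texY, GLuint texU, GLuint texV) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // plane rows are tightly packed
    glBindTexture(GL_TEXTURE_2D, texY);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, yuv);
    glBindTexture(GL_TEXTURE_2D, texU);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, yuv + width * height);
    glBindTexture(GL_TEXTURE_2D, texV);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 yuv + width * height + (width / 2) * (height / 2));
}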
I have received a CMSampleBufferRef from a system API that contains CVPixelBufferRefs that are not RGBA (linear pixels). The buffer contains planar pixels (such as 420f aka kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange aka yCbCr aka YUV).
I would like to do some manipulation of this video data before sending it off to VideoToolbox to be encoded to h264 (drawing some text, overlaying a logo, rotating the image, etc.), but I'd like it to be efficient and real-time. But planar image data looks super messy to work with: there's the chroma plane and the luma plane, they're different sizes, and... working with this on a byte level seems like a lot of work.
I could probably use a CGContextRef and just paint right on top of the pixels, but from what I can gather it only supports RGBA pixels. Any advice on how I can do this with as little data copying and as few lines of code as possible?
A CGBitmapContext can only paint into something like 32ARGB, correct. This means that you will want to create ARGB (or RGBA) buffers, and then find a way to transfer YUV pixels onto that ARGB surface very quickly. This recipe includes using Core Image, a home-made CVPixelBufferRef obtained through a pool, a CGBitmapContext referencing your home-made pixel buffer, and then recreating a CMSampleBufferRef resembling your input buffer, but referencing your output pixels. In other words:
Fetch the incoming pixels into a CIImage.
Create a CVPixelBufferPool with the pixel format and output dimensions you are creating. You don't want to create CVPixelBuffers without a pool in real time: you will run out of memory if your producer is too fast; you'll fragment your RAM as you won't be reusing buffers; and it's a waste of cycles.
Create a CIContext with the default constructor that you'll share between buffers. It contains no external state, but documentation says that recreating it on every frame is very expensive.
On each incoming frame, create a new pixel buffer. Make sure to use an allocation threshold so you don't get runaway RAM usage.
Lock the pixel buffer
Create a bitmap context referencing the bytes in the pixel buffer
Use CIContext to render the planar image data into the linear buffer
Perform your app-specific drawing in the CGContext!
Unlock the pixel buffer
Fetch the timing info of the original sample buffer
Create a CMVideoFormatDescriptionRef by asking the pixel buffer for its exact format
Create a sample buffer for the pixel buffer. Done!
Here's a sample implementation, where I have chosen 32ARGB as the image format to work with, as that's something that both CGBitmapContext and CoreVideo enjoy working with on iOS:
{
    CVPixelBufferPoolRef _pool;
    CGSize _poolBufferDimensions;
    CIContext *_imageContext;
}

- (void)_processSampleBuffer:(CMSampleBufferRef)inputBuffer
{
    // 1. Input data
    CVPixelBufferRef inputPixels = CMSampleBufferGetImageBuffer(inputBuffer);
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:inputPixels];

    // 2. Create a new pool if the old pool doesn't have the right format.
    CGSize bufferDimensions = {CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels)};
    if(!_pool || !CGSizeEqualToSize(bufferDimensions, _poolBufferDimensions)) {
        if(_pool) {
            CFRelease(_pool);
        }
        OSStatus ok0 = CVPixelBufferPoolCreate(NULL,
            NULL, // pool attrs
            (__bridge CFDictionaryRef)(@{
                (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB),
                (id)kCVPixelBufferWidthKey: @(bufferDimensions.width),
                (id)kCVPixelBufferHeightKey: @(bufferDimensions.height),
            }), // buffer attrs
            &_pool
        );
        _poolBufferDimensions = bufferDimensions;
        assert(ok0 == noErr);
    }

    // 3. Shared CIContext, created once (recreating it per frame is expensive).
    if(!_imageContext) {
        _imageContext = [CIContext context];
    }

    // 4. Create pixel buffer
    CVPixelBufferRef outputPixels;
    OSStatus ok1 = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(NULL,
        _pool,
        (__bridge CFDictionaryRef)@{
            // Opt to fail buffer creation in case of slow buffer consumption
            // rather than to exhaust all memory.
            (__bridge id)kCVPixelBufferPoolAllocationThresholdKey: @20
        }, // aux attributes
        &outputPixels
    );
    if(ok1 == kCVReturnWouldExceedAllocationThreshold) {
        // Dropping frame because consumer is too slow
        return;
    }
    assert(ok1 == noErr);

    // 5, 6. Graphics context to draw in
    CGColorSpaceRef deviceColors = CGColorSpaceCreateDeviceRGB();
    OSStatus ok2 = CVPixelBufferLockBaseAddress(outputPixels, 0);
    assert(ok2 == noErr);
    CGContextRef cg = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(outputPixels), // bytes
        CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels), // dimensions
        8, // bits per component
        CVPixelBufferGetBytesPerRow(outputPixels), // bytes per row
        deviceColors, // color space
        kCGImageAlphaPremultipliedFirst // bitmap info
    );
    CFRelease(deviceColors);
    assert(cg != NULL);

    // 7
    [_imageContext render:inputImage toCVPixelBuffer:outputPixels];

    // 8. DRAW
    CGContextSetRGBFillColor(cg, 0.5, 0, 0, 1);
    CGContextSetTextDrawingMode(cg, kCGTextFill);
    NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"Hello world" attributes:nil];
    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)text);
    CTLineDraw(line, cg);
    CFRelease(line);

    // 9. Unlock and stop drawing
    CFRelease(cg);
    CVPixelBufferUnlockBaseAddress(outputPixels, 0);

    // 10. Timings
    CMSampleTimingInfo timingInfo;
    OSStatus ok4 = CMSampleBufferGetSampleTimingInfo(inputBuffer, 0, &timingInfo);
    assert(ok4 == noErr);

    // 11. Video format
    CMVideoFormatDescriptionRef videoFormat;
    OSStatus ok5 = CMVideoFormatDescriptionCreateForImageBuffer(NULL, outputPixels, &videoFormat);
    assert(ok5 == noErr);

    // 12. Output sample buffer
    CMSampleBufferRef outputBuffer;
    OSStatus ok3 = CMSampleBufferCreateForImageBuffer(NULL, // allocator
        outputPixels, // image buffer
        YES, // data ready
        NULL, // make ready callback
        NULL, // make ready refcon
        videoFormat,
        &timingInfo, // timing info
        &outputBuffer // out
    );
    assert(ok3 == noErr);

    [_consumer consumeSampleBuffer:outputBuffer];
    CFRelease(outputPixels);
    CFRelease(videoFormat);
    CFRelease(outputBuffer);
}
I'm using AVFoundation to get the camera stream.
I'm using this code to get MTLTextures from it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    id<MTLTexture> texture = nil;
    {
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm;
        CVMetalTextureRef metalTextureRef = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &metalTextureRef);
        if(status == kCVReturnSuccess)
        {
            texture = CVMetalTextureGetTexture(metalTextureRef);
            if (self.delegate) {
                [self.delegate textureUpdated:texture];
            }
            CFRelease(metalTextureRef);
        }
    }
}
It works fine, except that the generated MTLTexture object is not mipmapped (it has only one mip level).
In this call:
CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &metalTextureRef);
There is a parameter called "textureAttributes"; I think it's possible to specify there that I want a mipmapped texture, but I haven't found any word in the documentation about what exactly goes in it. Nor have I found any source code in which something other than NULL is passed.
In OpenGL ES for iOS there is a similar method with the same parameter, and also no word in the documentation.
Just received an answer from a Metal engineer here. Here's a quote:
No, it is not possible to generate a mipmapped texture from a CVPixelBuffer directly.
CVPixelBuffer images typically have a linear/stride layout, as non-GPU hardware blocks might be interacting with those, and most GPU hardware only supports mipmapping from tiled textures. You'll need to issue a blit to copy from the linear MTLTexture to a private MTLTexture of your own creation, then generate mipmaps.
As for textureAttributes, there is only one key supported: kCVMetalTextureCacheMaximumTextureAgeKey
There isn't a method to get a mipmapped texture directly, but you can generate one yourself easily enough.
First use your Metal device to create an empty Metal texture that is the same size and format as your existing texture, but has a full mipmap chain. See the newTexture documentation.
Use your MTLCommandBuffer object to create a blit encoder. See the blitCommandEncoder documentation.
Use the blit encoder to copy from your camera texture to your empty texture. destinationLevel should be zero, as you are only copying the top mipmap level. See the copyFromTexture documentation.
Finally, use the blit encoder to generate all the remaining mip levels by calling generateMipmapsForTexture. See the generateMipmapsForTexture documentation.
At the end of this you have a Metal texture from the camera with a full mip chain.
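To make those steps concrete, here is a rough sketch using Apple's metal-cpp bindings (the equivalent Objective-C methods have the same names); device, commandBuffer and cameraTexture are assumed to come from the surrounding application:

#include <Metal/Metal.hpp>

// Hypothetical helper: builds a GPU-private, fully mipmapped copy of the
// camera texture. The caller owns the returned texture.
MTL::Texture* makeMipmappedCopy(MTL::Device* device,
                                MTL::CommandBuffer* commandBuffer,
                                MTL::Texture* cameraTexture)
{
    // 1. Empty texture with the same size/format but a full mipmap chain.
    MTL::TextureDescriptor* desc = MTL::TextureDescriptor::texture2DDescriptor(
        cameraTexture->pixelFormat(),
        cameraTexture->width(),
        cameraTexture->height(),
        true /* mipmapped */);
    desc->setStorageMode(MTL::StorageModePrivate); // private, as advised above
    MTL::Texture* mipTexture = device->newTexture(desc);

    // 2, 3. Blit the camera image into mip level 0 of the new texture.
    MTL::BlitCommandEncoder* blit = commandBuffer->blitCommandEncoder();
    blit->copyFromTexture(cameraTexture, 0, 0,
                          MTL::Origin::Make(0, 0, 0),
                          MTL::Size::Make(cameraTexture->width(),
                                          cameraTexture->height(), 1),
                          mipTexture, 0, 0,
                          MTL::Origin::Make(0, 0, 0));

    // 4. Generate the remaining mip levels.
    blit->generateMipmaps(mipTexture);
    blit->endEncoding();

    return mipTexture;
}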