Metal Shader Debugging - Capture GPU Frame - iOS

I want to debug my Metal shader, but the "Capture GPU Frame" button is not visible or available in the debug menu.
My scheme was initially set up like this:
Capture GPU Frame: Automatically Enabled
Metal API Validation: Enabled
However, when I change the Capture GPU Frame option to Metal, I do see the capture button, but my app crashes when I try to make the render command encoder:
commandBuffer.makeRenderCommandEncoder(descriptor: ...)
validateRenderPassDescriptor:644: failed assertion `Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
Question one: Why do I need to specify the usage? (It works in Automatically Enabled mode)
Question two: How do I specify the MTLTextureUsageRenderTarget?
Running betas; Xcode 10 and iOS 12.

With newer versions of Xcode you need to explicitly set MTLTextureDescriptor.usage. In my case (a render target) that looks like this:
textureDescriptor.usage = MTLTextureUsageRenderTarget|MTLTextureUsageShaderRead;
The setting above indicates that the texture can be used as a render target and can also be read afterwards by another shader. As mentioned in the comments, you may also want to set the framebufferOnly property; here is how I do that:
if (isCaptureRenderedTextureEnabled) {
    mtkView.framebufferOnly = false;
}
Note that framebufferOnly is left at its default of true for the optimized case (isCaptureRenderedTextureEnabled = false); setting it to false makes it easy to inspect the data that will be rendered in the view (the output of the shader).

Specify usage purpose
textureDescriptor.usage = [.renderTarget, .shaderRead]
or
textureDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue)
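For context, a minimal sketch of creating such an offscreen texture in Swift; the device, size, and pixel format here are assumptions, not from the original post:
// Create a texture that can be rendered to and then sampled by another shader.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: 1024,
                                                          height: 1024,
                                                          mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]
let renderTarget = device.makeTexture(descriptor: descriptor)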

Related

Can I use OpenGL ES 3.0 on iOS just by changing the rendering context, not the header files?

Our iOS engine has used OpenGL ES 2.0 for ages now, but this limits the maximum number of textures that can be bound at once to 8. If I change to OpenGL ES 3.0, this raises the number on all devices to 16. I do this by changing all my
#import <OpenGLES/ES2/glext.h>
to
#import <OpenGLES/ES3/glext.h>
and by changing the rendering context from
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
to
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
I notice that if I only change the rendering context, then I don't need to change any code. It all compiles and works. If I change the header files I do need to change a few lines of code.
Can I just change the rendering context? Since GLES 3.0 is a superset of GLES 2.0, will this work? Upon testing the maximum number of texture units with
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);
it has indeed changed from 8 to 16, whether I change the headers or not.
Furthermore, what if the device doesn't support GLES 3.0? I believe devices before the A7 chip don't. Can I then revert to the GLES 2.0 rendering context and just use the GLES 2.0 headers for both?
Yes, I know OpenGL is deprecated and I should use Metal. We're working on that, but in the meantime can I just use the GL2 header files with a GL3 rendering context?
Thanks
Shaun
Users of my program use more than 50 textures simultaneously with OpenGLES2.0 and I did not notice any limitations even on old phones.
Perhaps it is worth revising the drawing technique.
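For what it's worth, the fallback described in the question is normally just a runtime check at context creation. A minimal sketch, assuming standard EAGLContext setup:
// Try to create an ES3 context; fall back to ES2 on devices that don't support it.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
BOOL usingES3 = (context != nil);
if (!usingES3) {
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
}
[EAGLContext setCurrentContext:context];
// Keep usingES3 around so the renderer knows which feature set (e.g. 8 vs 16 texture units) it can rely on.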

Determining which swap chain formats are supported

When calling IDXGIFactory1::CreateSwapChain with DXGI_FORMAT_B5G6R5_UNORM I get an error that this format isn't supported, specifically E_INVALIDARG ("One or more arguments are invalid"). However, this works fine with a more standard format like DXGI_FORMAT_B8G8R8A8_UNORM.
I'm trying to understand how I can know which swap chain formats are supported. From digging around in the documentation, I can find lists of supported formats for "render targets", but this doesn't appear to be the same set of formats supported for swap chains. B5G6R5 does need feature level 11.1 for required support in most uses, but it is working as a render target.
https://learn.microsoft.com/en-us/previous-versions//ff471325(v=vs.85)
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-0-feature-level-hardware
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-1-feature-level-hardware
As a test, I looped through all formats and attempted to create swap chains with each. Of the 118 formats, only 8 appear to be supported on my machine (RTX 2070):
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_NV12
DXGI_FORMAT_YUY2
What is the proper way to know which swap chain formats are supported?
For additional context, I'm doing off-screen rendering to a 16-bit (565) format. I have an optional "preview window" that I open occasionally to quickly see the rendering results. When I create the window I create a swap chain and do a copy from the real render target into the swap chain back buffer. I'm targeting DirectX 11 or 11.1. I'm able to render to the B5G6R5 format just fine, it's only the swap chain that complains. I'm running Windows 10 1909.
Here's a Gist with resource creation snippets and a full code sample.
https://gist.github.com/akbyrd/c9d312048b49c5bd607ceba084d95bd0
For the swap chain, the format must be supported for "Display Scan-Out". If you need to check format support at runtime, you can use:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(backBufferFormat, &formatSupport)))
    formatSupport = 0;

UINT32 required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
if ((formatSupport & required) != required)
{
    // Not supported
}
For all Direct3D Hardware Feature Level devices, you can always count on DXGI_FORMAT_R8G8B8A8_UNORM working. Unless you are on Windows Vista or ancient WDDM 1.0 legacy drivers, you can also count on DXGI_FORMAT_B8G8R8A8_UNORM.
For Direct3D Hardware Feature Level 10.0 or better devices, you can also count on DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM being supported.
You can also count on all Direct3D Hardware Feature Level devices supporting DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for swap chains if you are using the 'legacy' swap effects. For the modern swap effects, which are required for DirectX 12 and recommended on Windows 10 for DirectX 11 (see this blog post), the swap chain buffer is not created with an _SRGB format; instead, you create just the render target view with it.
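To illustrate that last point, a minimal sketch, assuming a flip-model swap chain already created with DXGI_FORMAT_B8G8R8A8_UNORM (variable names are placeholders):
// Get the back buffer and create an sRGB render target view over the UNORM swap chain buffer.
Microsoft::WRL::ComPtr<ID3D11Texture2D> backBuffer;
swapChain->GetBuffer(0, IID_PPV_ARGS(backBuffer.GetAddressOf()));

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB; // gamma-correct view of the UNORM buffer
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;

Microsoft::WRL::ComPtr<ID3D11RenderTargetView> renderTargetView;
device->CreateRenderTargetView(backBuffer.Get(), &rtvDesc, renderTargetView.GetAddressOf());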
See Anatomy of Direct3D 11 Create Device

UIVideoEditorController videoQuality setting not working

I'm currently trying to use UIVideoEditorController to trim a video file previously selected from the Photos app using UIImagePickerController. I verified that after choosing the file and before creating the UIVideoEditorController, the file has its original resolution. However, after trimming the video, the output is always 360p despite setting the videoQuality property to high. All the applicable settings seem to be ignored for this property, and the video ends up being compressed unnecessarily.
I have found multiple people reporting similar issues but have yet to find a workaround for this.
let editor = UIVideoEditorController()
editor.delegate = self
editor.videoMaximumDuration = 0 // No limit for now
// TODO: This should really work; however, while testing it seems like the imported videos are always scaled down
// to 320p, which is equivalent to the default value of .low.
editor.videoQuality = .typeHigh
editor.videoPath = internalURL.path
Any help would be greatly appreciated.
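A workaround often suggested for this kind of forced downscaling (not from the original post, so treat it as an assumption) is to skip UIVideoEditorController's export and trim the asset yourself with AVAssetExportSession using a passthrough preset, which avoids re-encoding altogether:
import AVFoundation

// Trim sourceURL to the given time range without re-encoding; the passthrough preset keeps the original resolution.
func trimVideo(at sourceURL: URL, to range: CMTimeRange, outputURL: URL,
               completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetPassthrough) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.timeRange = range
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}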

WebGL: replace a program's shader

I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; I am trying to provide alternative shaders for lower-end hardware.
I end up getting a single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
//if I don't do this:
//WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, _.attachedFS);
//select a random shader, all using the same parameters
attachedFS = fragmentShaders[~~(Math.random()*fragmentShaders.length)];
//attach the new shader
gl.attachShader(program, attachedFS);
//if I don't do this nothing happens
gl.linkProgram(program);
//if I don't add this line:
//globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
//location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader);
Have you tried to test the code on a different browser and see if you get the same behavior? (It could be implementation-specific.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If this does not let you move forward, you may have to detach both shaders (vertex and fragment) and reattach them, or even recreate the program from scratch.
I found the issue after trying just about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes.
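For completeness, a rough sketch of the swap-and-relink pattern with every location re-queried after linking; the attribute and uniform names here are placeholders, not from the original code:
// Swap the fragment shader on an existing program, relink it, and refresh all locations.
function swapFragmentShader(gl, program, oldFS, newFS) {
  gl.detachShader(program, oldFS); // avoids "attachment already has shader"
  gl.attachShader(program, newFS);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
    return null;
  }
  // Attribute and uniform locations can change after a relink, so re-query them all.
  return {
    position: gl.getAttribLocation(program, 'aPosition'),
    texCoord: gl.getAttribLocation(program, 'aTexCoord'),
    sampler: gl.getUniformLocation(program, 'uSampler')
  };
}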

Using OSVR camera in OpenCV 3

I'm trying to use the OSVR IR camera in OpenCV 3.1.
Initialization works OK.
Green LED is lit on camera.
When I call VideoCapture.read(mat) it returns false and mat is empty.
Other cameras work fine with the same code and VLC can grab the stream from the OSVR camera.
Some further testing reveals: grab() returns true, whereas retrieve(mat) again returns false.
Getting width and height from the camera yields expected results, but MODE and FORMAT get me 0.
Is this a config issue? Can it be solved by a combination of VideoCapture.set calls?
Alternative official answer received from the developers (after my own solution below):
The reason my camera didn't work out of the box with OpenCV might be that it has old firmware (pre-v7).
Workaround (or just update the firmware):
I found the answer here while browsing anything remotely linked to the issue:
Fastest way to get frames from webcam
You need to specify that it should use DirectShow.
VideoCapture capture(CV_CAP_DSHOW + id_of_camera);
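In OpenCV 3.x the same idea is usually written with the cv::CAP_DSHOW constant; a small sketch (the camera index is a placeholder):
#include <opencv2/opencv.hpp>

int main() {
    const int cameraId = 0; // placeholder: index of the OSVR IR camera
    // Force the DirectShow backend instead of the default capture backend.
    cv::VideoCapture capture(cameraId + cv::CAP_DSHOW);
    if (!capture.isOpened()) return 1;

    cv::Mat frame;
    while (capture.read(frame)) {
        cv::imshow("OSVR IR camera", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}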
