Convert ARGB data (.tiff image) to a 2D texture using D3D11 - DirectX

I have a sample ARGB image (.tiff file).
I want to pass it to the GPU as a 2D texture, but I am not sure how to do that, or whether it is even possible.

I think you can use the WIC texture loader, which is part of the official DirectX Tool Kit (DirectXTK); WIC can decode .tiff files directly.
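For example (a minimal sketch; device and context stand for your existing ID3D11Device and ID3D11DeviceContext, and the file name is just a placeholder):

    #include <d3d11.h>
    #include <wrl/client.h>
    #include "WICTextureLoader.h"   // part of the DirectX Tool Kit

    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D11Resource> texture;
    ComPtr<ID3D11ShaderResourceView> textureView;

    // WIC decodes .tiff (as well as .png, .jpg, .bmp, ...) and creates the texture.
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device,                       // ID3D11Device*
        L"sample_argb.tiff",          // placeholder path
        texture.GetAddressOf(),
        textureView.GetAddressOf());

    if (SUCCEEDED(hr))
    {
        // Bind the shader resource view and sample it like any other 2D texture.
        context->PSSetShaderResources(0, 1, textureView.GetAddressOf());
    }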

Related

Copy Metal frame buffer to MTLTexture with different Pixel Format

I need to grab the screen pixels into a texture to perform post processing.
Previously, I had been using a BlitCommandEncoder to copy from texture to texture: the source is the MTLDrawable's texture and the destination is my own texture. They both use MTLPixelFormatBGRA8Unorm, so everything works just fine.
However, I now need to use a frame buffer color attachment texture of MTLPixelFormatRGBA16Float for HDR rendering. So when I grab the screen pixels, I am actually grabbing from this color attachment texture instead of the drawable texture, and I get this error:
[MTLDebugBlitCommandEncoder internalValidateCopyFromTexture:sourceSlice:sourceLevel:sourceOrigin:sourceSize:toTexture:destinationSlice:destinationLevel:destinationOrigin:options:]:447: failed assertion [sourceTexture pixelFormat](MTLPixelFormatRGBA16Float) must equal [destinationTexture pixelFormat](MTLPixelFormatBGRA8Unorm)
I don't think I should have to change my destination texture to the RGBA16Float format, because that would take up double the memory. One full-screen texture (the color attachment) in that format should be enough for HDR to work, right?
Is there another way to successfully perform this kind of copy?
In OpenGL there is no error when copying with glCopyTexImage2D.
Metal automatically converts from source to destination format during rendering. So you could just do a no-op rendering pass to perform the conversion.
Alternatively, if you want to avoid the boilerplate of a no-op rendering pass, you can use the MPSImageConversion performance shader, which does essentially the same thing.

Sampled value of a texture from the DisparityFloat16 pixel format on iOS OpenGL ES

I want to use the depthDataMap from the iPhone X TrueDepth camera as a texture in my OpenGL ES project. I have downloaded some Swift samples, and it seems the depth map can be created and sampled as a float texture on Metal. But on OpenGL ES, the only way I found to create a depth texture from the depth buffer is:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, depthWidth, depthHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, CVPixelBufferGetBaseAddress(depthBuffer));
The sampled value is different from the value exported as a CIImage from the DisparityFloat16 pixel type; it is much lower, and not on a linear scale compared to the CIImage.
[Screenshot: the sampled value in OpenGL ES]
[Screenshot: the value obtained via CIImage *image = [CIImage imageWithCVImageBuffer:depthData.depthDataMap];]
Does anyone have the same issue?
Well, it looks like you're specifying the pixel data type as GL_UNSIGNED_SHORT; try changing it to GL_HALF_FLOAT (if using DisparityFloat16) or GL_FLOAT (if using DisparityFloat32).
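For example, something along these lines (a sketch assuming an OpenGL ES 3.0 context; it uploads the disparity data as a single-channel half-float color texture, GL_R16F, rather than GL_DEPTH_COMPONENT, and real code should also account for the buffer's bytes-per-row padding):

    #include <CoreVideo/CoreVideo.h>
    #include <OpenGLES/ES3/gl.h>

    // depthBuffer holds kCVPixelFormatType_DisparityFloat16 data.
    CVPixelBufferLockBaseAddress(depthBuffer, kCVPixelBufferLock_ReadOnly);

    GLsizei depthWidth  = (GLsizei)CVPixelBufferGetWidth(depthBuffer);
    GLsizei depthHeight = (GLsizei)CVPixelBufferGetHeight(depthBuffer);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_R16F,        // single-channel half-float internal format
                 depthWidth, depthHeight, 0,
                 GL_RED,         // one disparity value per texel
                 GL_HALF_FLOAT,  // matches DisparityFloat16 (not GL_UNSIGNED_SHORT)
                 CVPixelBufferGetBaseAddress(depthBuffer));

    CVPixelBufferUnlockBaseAddress(depthBuffer, kCVPixelBufferLock_ReadOnly);
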
Also, if you want to display the depth buffer as a texture, you should convert the depth data to values that mean something in a grayscale image. If you normalize your depth buffer values to integers between 0 and 255, your picture will look a whole lot better.
For more information, Apple has examples of this exact thing. They use Metal, but the principle would work with OpenGL too. Here's a really nice tutorial with some sample code that does this as well.

Realtime conversion of ARFrame.capturedImage CVPixelBufferRef to OpenCV Mat

ARKit runs at 60 frames/sec, which equates to 16.6ms per frame.
My current code to convert the CVPixelBufferRef (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format) to a cv::Mat (YCrCb) runs in 30ms, which causes ARKit to stall and everything to lag.
Does anyone have any ideas on how to do a quicker conversion, or do I need to drop the frame rate?
There is a suggestion by Apple to use Metal, but I'm not sure how to do that.
Also, I could just take the grayscale plane (the first channel), which runs in <1ms, but ideally I need the colour information as well.
In order to process an image in a pixel buffer using Metal, you need to do the following.
Call CVMetalTextureCacheCreateTextureFromImage to create a CVMetalTexture object on top of the pixel buffer.
Call CVMetalTextureGetTexture to get an MTLTexture object, which Metal code (GPU) can read and write.
Write some Metal code to convert the color format.
I have an open source project (https://github.com/snakajima/vs-metal) that processes pixel buffers (from the camera, not ARKit) using Metal. Feel free to copy any code from this project.
I tried converting YCbCr to RGB, doing the image processing on the RGB mat, and converting it back to YCbCr; it worked very slowly. I suggest doing that only for a static image. For realtime processing, we should work on the cv::Mat directly. ARFrame.capturedImage is a YCbCr buffer, so the solution is:
Separate the buffer into 2 cv::Mat (yPlane and cbcrPlane). Keep in mind that we do not clone the memory; we create 2 cv::Mat whose base addresses are the yPlane address and the cbcrPlane address.
Do the image processing on yPlane and cbcrPlane; size(cbcrPlane) = size(yPlane) / 2.
You can check out my code here: https://gist.github.com/ttruongatl/bb6c69659c48bac67826be7368560216
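A minimal sketch of that split, assuming the usual bi-planar 420f capturedImage buffer (the bytes-per-row stride arguments are my addition; variable names are illustrative):

    #include <CoreVideo/CoreVideo.h>
    #include <opencv2/core.hpp>

    // pixelBuffer is ARFrame.capturedImage. No pixel data is copied here:
    // the two cv::Mat headers wrap the plane memory directly.
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    int lumaWidth    = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
    int lumaHeight   = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    int chromaWidth  = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
    int chromaHeight = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);

    // Plane 0: 8-bit luma (Y). Plane 1: interleaved 8-bit CbCr pairs, half size.
    cv::Mat yPlane(lumaHeight, lumaWidth, CV_8UC1,
                   CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                   CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0));
    cv::Mat cbcrPlane(chromaHeight, chromaWidth, CV_8UC2,
                      CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
                      CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1));

    // ... process yPlane / cbcrPlane in place ...

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);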

Is there a workflow for using an HDR image for an irradiance map?

For Metal on iOS, is there a workflow for using an HDR image - .hdr or .exr format - for an irradiance map?
You can load an HDR file in RGBE format by converting its shared-exponent representation into one of Metal's pixel formats (such as MTLPixelFormatRGB9E5Float or RGBA16Float, both of which are filterable). From there, you'd just sample it and use it in your shader.
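For reference, the shared-exponent decode for the Radiance RGBE convention is just a bit of arithmetic per pixel (a sketch; reading the .hdr file and creating/filling the MTLTexture are separate steps):

    #include <cmath>
    #include <cstdint>

    // Decode one Radiance RGBE pixel (4 bytes) to linear float RGB.
    // The resulting floats can then be converted to half floats and uploaded into
    // an MTLPixelFormatRGBA16Float texture (or repacked as RGB9E5).
    inline void rgbeToFloat(const uint8_t rgbe[4], float rgb[3])
    {
        if (rgbe[3] == 0)   // exponent byte of 0 means black
        {
            rgb[0] = rgb[1] = rgb[2] = 0.0f;
            return;
        }
        // Shared scale: 2^(E - 128), spread over the 8-bit mantissas.
        float scale = std::ldexp(1.0f, (int)rgbe[3] - (128 + 8));
        rgb[0] = rgbe[0] * scale;
        rgb[1] = rgbe[1] * scale;
        rgb[2] = rgbe[2] * scale;
    }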

Direct3D 11: write data to a buffer in a pixel shader? (like an SSBO in OpenGL)

I'm trying to write data to a buffer from an HLSL shader. I know that in OpenGL you need an SSBO, but is there a corresponding buffer type in Direct3D 11? (I'm new to it.)
P.S. I'm using MonoGame, so the newest shader model available is 3.0.
thanks!
Shader Model 3 corresponds to the DirectX 9 architecture. This architecture looks as follows:
[DirectX 9 pipeline diagram - source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb219679(v=vs.85).aspx]
As you can see, there is only one output from the pixel shader. This output can be a color or depth value and is written to the render target. So there is no way to do this in DX9.
In DX10 (SM 4), the pipeline looks as follows:
[DirectX 10 pipeline diagram - source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb205123(v=vs.85).aspx]
Again, the only outputs of the pixel shader are color and depth, so there is no way in DX10 either.
Finally, DirectX 11 (SM 5):
[DirectX 11 pipeline diagram - source: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx]
Now there is a path from the pixel shader to memory resources, via unordered access views. The buffer type that you would need is the RWBuffer.
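In outline, the D3D11 (C++) side might look like this (a sketch; device, context, rtv, dsv and elementCount are assumed to exist already, and the HLSL side would declare something like RWBuffer<float4> output : register(u1); note that MonoGame's SM 3.0 cannot do this):

    #include <d3d11.h>

    // Typed buffer the pixel shader can write to; matches RWBuffer<float4> in HLSL.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = elementCount * 4 * sizeof(float);
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.NumElements = elementCount;

    ID3D11UnorderedAccessView* uav = nullptr;
    device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

    // UAV slots share space with render targets; starting at slot 1 matches register(u1).
    context->OMSetRenderTargetsAndUnorderedAccessViews(
        1, &rtv, dsv,           // keep the existing render target / depth-stencil view
        1, 1, &uav, nullptr);   // UAVStartSlot, NumUAVs, UAV array, initial counts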
