I need to convert an RGB texture to the NV12 format which the video codec understands (Y plane immediately followed by UV plane). DXGI_FORMAT_NV12 provides a straightforward view format mapping using R8 for Y and R8G8 for UV, so I use two pixel shaders with an NV12 texture. Unfortunately, this only works on Windows 8. Can I somehow create a texture that has both R8 and R8G8 shader resource views on Windows 7? Or is there another way I can render the YUV data?
Create a surface with CreateOffscreenPlainSurface(), using the following D3DFORMAT:
(D3DFORMAT)MAKEFOURCC('N', 'V', '1', '2') //842094158
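A minimal sketch, assuming an existing IDirect3DDevice9* (pd3dDevice and the dimension variables here are illustrative); NV12 offscreen plain surface support is driver-dependent, so check the HRESULT:

    IDirect3DSurface9* pNV12Surface = nullptr;
    HRESULT hr = pd3dDevice->CreateOffscreenPlainSurface(
        width, height,                              // frame dimensions
        (D3DFORMAT)MAKEFOURCC('N', 'V', '1', '2'),  // 842094158
        D3DPOOL_DEFAULT,
        &pNV12Surface,
        nullptr);
    if (SUCCEEDED(hr))
    {
        // Lock the surface and write the Y plane followed by the
        // interleaved UV plane, or StretchRect into it from another surface.
    }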
More detailed source code is available in my Git repo:
https://github.com/sailfish009/sample_direct3d_nv12
I have a sample ARGB image (a .tiff file).
I want to pass it as a 2D texture, but I am not sure how to do that, or whether it is even possible.
I think you can use the WIC texture loader, which is part of the official DirectX Tool Kit.
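A minimal sketch, assuming DirectXTK's WICTextureLoader header and an existing ID3D11Device* (WIC decodes TIFF among other formats); the file name is illustrative and error handling is elided:

    #include <wrl/client.h>
    #include "WICTextureLoader.h"

    Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device,               // your ID3D11Device*
        L"sample.tiff",       // the ARGB .tiff to load
        nullptr,              // optional ID3D11Resource** if you need the texture itself
        srv.GetAddressOf());  // shader resource view for binding

    // Bind it like any other 2D texture:
    // context->PSSetShaderResources(0, 1, srv.GetAddressOf());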
I am working on an Android application that slims or fattens faces by detecting them. Currently, I have achieved this by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup-table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup-table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it'd be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, as the face detection algorithm you have most likely returns a face-like mesh) and warped it according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but it's also fewer values to process compared to generating a texture dynamically.
The most compatible method is to give each mesh point a Y coordinate of 0 and an X coordinate storing the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where at the needed index the R and G channels contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are to be loaded from the texture.
This approach allows for dynamic warping without reloading geometry, if the target data texture is properly updated — for example, inside a pixel shader.
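To make the idea concrete, here is a hedged sketch of such a vertex shader (GLSL ES embedded as a C++ string); a_index, a_texCoord, u_positions, and u_nodeCount are illustrative names, and u_positions is assumed to be an RG float texture holding one warped clip-space XY per mesh node:

    const char* kWarpVertexShader = R"(
        attribute float a_index;        // mesh node index, stored in X as described above
        attribute vec2  a_texCoord;     // sampling coords into the face image
        uniform sampler2D u_positions;  // R = warped x, G = warped y (one texel per node)
        uniform float u_nodeCount;
        varying vec2 v_texCoord;

        void main()
        {
            // Fetch this node's warped position from the data texture.
            vec2 uv  = vec2((a_index + 0.5) / u_nodeCount, 0.5);
            vec2 pos = texture2D(u_positions, uv).rg;
            v_texCoord  = a_texCoord;
            gl_Position = vec4(pos, 0.0, 1.0);
        }
    )";

Updating u_positions each frame then warps the mesh without re-uploading any geometry; note that vertex texture fetch requires GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0 on the target device.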
ARKit runs at 60 frames/sec, which equates to 16.6ms per frame.
My current code to convert the CVPixelBufferRef (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format) to a cv::Mat (YCrCb) runs in 30ms, which causes ARKit to stall and everything to lag.
Does anyone have any ideas on how to do a quicker conversion, or do I need to drop the frame rate?
There is a suggestion by Apple to use Metal, but I'm not sure how to do that.
Also, I could just take the grayscale plane (the first channel), which runs in <1ms, but ideally I need the colour information as well.
In order to process an image in a pixel buffer using Metal, you need to do the following:
1. Call CVMetalTextureCacheCreateTextureFromImage to create a CVMetalTexture object on top of the pixel buffer.
2. Call CVMetalTextureGetTexture to create an MTLTexture object, which Metal code (GPU) can read and write.
3. Write some Metal code to convert the color format.
I have an open source project (https://github.com/snakajima/vs-metal), which processes pixel buffers (from camera, not ARKit) using Metal. Feel free to copy any code from this project.
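For steps 1 and 2, a minimal Objective-C++ sketch (the CoreVideo/Metal interop API takes Objective-C Metal objects, so it cannot be pure C++); the device and pixelBuffer variables are assumptions and error handling is elided:

    #import <CoreVideo/CoreVideo.h>
    #import <Metal/Metal.h>

    // One-time setup: a texture cache tied to your MTLDevice.
    CVMetalTextureCacheRef textureCache = nullptr;
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nullptr, device, nullptr, &textureCache);

    // Per frame: wrap plane 0 (luma) of the pixel buffer as an R8 texture.
    // For the CbCr plane, use MTLPixelFormatRG8Unorm and plane index 1.
    CVMetalTextureRef cvTexture = nullptr;
    CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, pixelBuffer, nullptr,
        MTLPixelFormatR8Unorm,
        CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
        CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
        0,  // plane index
        &cvTexture);
    id<MTLTexture> yTexture = CVMetalTextureGetTexture(cvTexture);

    // Step 3 happens in a compute/render pass that reads these textures;
    // CFRelease(cvTexture) only after the GPU has finished with it.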
I tried converting YCbCr to RGB, doing the image processing on an RGB mat, and converting back to YCbCr; it worked very slowly. I suggest only doing that with a static image. For realtime processing, we should process directly in cv::Mat. ARFrame.capturedImage is a YCbCr buffer, so the solution is:
1. Separate the buffer into 2 cv::Mats (yPlane and cbcrPlane), as sketched below. Keep in mind, we do not clone the memory; we create the 2 cv::Mats with the yPlane and cbcrPlane base addresses as their data pointers.
2. Do the image processing on yPlane and cbcrPlane; size(cbcrPlane) = size(yPlane) / 2.
You can check out my code here: https://gist.github.com/ttruongatl/bb6c69659c48bac67826be7368560216
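For step 1, a minimal sketch of the zero-copy wrapping (plain C++ over the CoreVideo C API; the function name is illustrative):

    #include <opencv2/core.hpp>
    #include <CoreVideo/CoreVideo.h>

    // Wrap both planes of a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    // buffer in cv::Mats without copying. The buffer must stay locked (and
    // alive) while the Mats are in use.
    void wrapPlanes(CVPixelBufferRef pixelBuffer, cv::Mat& yPlane, cv::Mat& cbcrPlane)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        // Plane 0: full-resolution 8-bit luma.
        yPlane = cv::Mat((int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
                         (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
                         CV_8UC1,
                         CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                         CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0));

        // Plane 1: half-resolution interleaved CbCr (two channels per pixel).
        cbcrPlane = cv::Mat((int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
                            (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
                            CV_8UC2,
                            CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
                            CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1));

        // Call CVPixelBufferUnlockBaseAddress(pixelBuffer, 0) when done.
    }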
For Metal on iOS, is there a workflow for using an HDR image (.hdr or .exr format) for an irradiance map?
You can load an HDR file in RGBE format by converting its shared-exponent representation into one of Metal's pixel formats (such as MTLPixelFormatRGB9E5Float or RGBA16Float, both of which are filterable). From there, you'd just sample it and use it in your shader.
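If you decode the .hdr yourself, the per-pixel conversion is simple; a hedged sketch of the standard Radiance RGBE decoding (the struct and function names are illustrative), whose output you would then upload into an RGBA16Float texture:

    #include <cmath>

    struct RGBE { unsigned char r, g, b, e; };

    // Decode one shared-exponent RGBE pixel into linear floats.
    void decodeRGBE(const RGBE& p, float out[3])
    {
        if (p.e == 0) {  // an exponent of 0 encodes black
            out[0] = out[1] = out[2] = 0.0f;
            return;
        }
        // Mantissas are /256 fixed point, scaled by 2^(e - 128).
        const float scale = std::ldexp(1.0f, int(p.e) - (128 + 8));
        out[0] = (p.r + 0.5f) * scale;
        out[1] = (p.g + 0.5f) * scale;
        out[2] = (p.b + 0.5f) * scale;
    }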
I'm trying to write data to a buffer in an HLSL shader. I know in OpenGL you need an SSBO, but is there a corresponding buffer type in Direct3D 11? (I'm new to it.)
P.S. I'm using MonoGame, so the newest shader model available is 3.0.
thanks!
Shader Model 3 corresponds to the DirectX 9 architecture. This architecture looks as follows:
[DirectX 9 pipeline diagram. Source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb219679(v=vs.85).aspx]
As you can see, there is only one output from the pixel shader. This output can be a color or depth value and is written to the render target. So there is no way to do this in DX9.
In DX10 (SM 4), the pipeline looks as follows:
[DirectX 10 pipeline diagram. Source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb205123(v=vs.85).aspx]
Again, the only outputs of the pixel shader are color and depth, so there is no way in DX10 either.
Finally, DirectX 11 (SM 5):
[DirectX 11 pipeline diagram. Source: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx]
Now there is a path from the pixel shader to memory resources. The HLSL type you would need is RWBuffer (or RWStructuredBuffer, the closest analogue of GL's SSBO), bound to the pipeline as an unordered access view (UAV).
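A hedged sketch of the host-side D3D11 setup (plain C++; the element count, variable names, and slot choice are illustrative, and error handling is elided). The matching HLSL declaration would be along the lines of RWStructuredBuffer<float4> gOutput : register(u1);

    #include <d3d11.h>

    // A structured buffer of 1024 float4 elements, writable from shaders.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = 1024 * 4 * sizeof(float);
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = 4 * sizeof(float);

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format             = DXGI_FORMAT_UNKNOWN;  // required for structured buffers
    uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.NumElements = 1024;

    ID3D11UnorderedAccessView* uav = nullptr;
    device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

    // Pixel-shader UAV slots share space with render targets, so with one
    // render target at slot 0, the UAV goes at slot 1 (matching register u1).
    context->OMSetRenderTargetsAndUnorderedAccessViews(
        1, &rtv, dsv,   // render target + depth-stencil
        1, 1, &uav,     // UAV start slot, count, views
        nullptr);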