CVPixelBuffer to NSData and Back (iOS)

I'm new to video processing and have been stuck on this for a few days.
I have a CVPixelBufferRef that is in YUV (YCbCr 4:2:0) format. I grab the base address using CVPixelBufferGetBaseAddress.
How do I take the bytes at the base address and create a new CVPixelBufferRef, one that is also in the same YUV format?
I tried:
CVPixelBufferCreateWithBytes(CFAllocatorGetDefault(), 1440, 900, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, currentFrame, 2208, NULL, NULL, (pixelBufferAttributes), &imageBuffer);
Which creates a CVPixelBufferRef, but I can't do anything with it (i.e. convert it to a CIImage, render it, etc.).
Ultimately, my goal is to take the bytes I receive that are from the base address call and to just display them on the screen. I know I can do that directly without the base address call, but I have a limitation that only allows me to receive the base address bytes.

For reference,
The reason I could not get a CIImage from the CVPixelBuffer is that it was not IOSurface backed. To ensure IOSurface backing, create the buffer with CVPixelBufferCreate, then call CVPixelBufferGetBaseAddress (or CVPixelBufferGetBaseAddressOfPlane for planar data) and memcpy your bytes into that address.
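A minimal sketch of that approach, assuming the source bytes are laid out as a full-size Y plane followed by the interleaved CbCr plane and using the dimensions from the question; the helper name and the assumed plane offset are mine, not part of Core Video:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#include <string.h>

// Hypothetical helper: builds an IOSurface-backed 420f pixel buffer from raw bytes.
static CVPixelBufferRef CreatePixelBufferFromYUVBytes(const uint8_t *srcBytes,
                                                      size_t width, size_t height,
                                                      size_t srcStride)
{
    // An empty IOSurface properties dictionary forces IOSurface backing.
    NSDictionary *attrs = @{ (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };

    CVPixelBufferRef dst = NULL;
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                       (__bridge CFDictionaryRef)attrs, &dst);
    if (err != kCVReturnSuccess) return NULL;

    CVPixelBufferLockBaseAddress(dst, 0);

    // Copy the Y plane row by row so the destination's own bytes-per-row is respected.
    uint8_t *dstY = CVPixelBufferGetBaseAddressOfPlane(dst, 0);
    size_t dstYStride = CVPixelBufferGetBytesPerRowOfPlane(dst, 0);
    for (size_t row = 0; row < height; row++) {
        memcpy(dstY + row * dstYStride, srcBytes + row * srcStride, width);
    }

    // Copy the interleaved CbCr plane (half the height, `width` payload bytes per row).
    uint8_t *dstUV = CVPixelBufferGetBaseAddressOfPlane(dst, 1);
    size_t dstUVStride = CVPixelBufferGetBytesPerRowOfPlane(dst, 1);
    const uint8_t *srcUV = srcBytes + srcStride * height;   // assumed offset of the CbCr plane
    for (size_t row = 0; row < height / 2; row++) {
        memcpy(dstUV + row * dstUVStride, srcUV + row * srcStride, width);
    }

    CVPixelBufferUnlockBaseAddress(dst, 0);
    return dst;   // IOSurface backed, so e.g. [CIImage imageWithCVPixelBuffer:] works on it
}

With the numbers from the question this would be called as CreatePixelBufferFromYUVBytes(currentFrame, 1440, 900, 2208); the caller owns the returned buffer and should CVPixelBufferRelease it when done.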
Hope this helps someone in the future.

Related

How to convert one pixel format to another? (420f to NV12)

So I'm getting frames in the CV420YpCbCr8BiPlanarFullRange pixel format, i.e. a bi-planar 4:2:0 layout.
I need to convert it to one of the formats consumed by a third-party library, for example this one.
At first sight it seems to be the same format, but unfortunately the picture comes out mangled. Any ideas where the problem is? And what libraries allow conversion between these formats?
video_frame.p_data = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
doesn't work. Neither does getting the base address of the structure.
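The two planes of a 420YpCbCr8BiPlanarFullRange buffer are separate allocations, and each row can carry padding, so pointing the library at a single plane's base address can easily produce a mangled picture. A hedged sketch, assuming the library expects one contiguous, tightly packed NV12 buffer (the helper and the stride handling are illustrative, not part of any SDK):

#import <CoreVideo/CoreVideo.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical helper: copies both planes into one contiguous NV12 buffer
// (Y plane, then interleaved CbCr plane), stripping any row padding.
// The caller frees the result once the consumer is done with the frame.
static uint8_t *PackBiPlanarToNV12(CVPixelBufferRef pixelBuffer, size_t *outStride)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    uint8_t *packed = malloc(width * height * 3 / 2);   // Y plus half-size CbCr
    uint8_t *out = packed;

    for (size_t plane = 0; plane < 2; plane++) {
        const uint8_t *src = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t srcStride   = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        size_t planeRows   = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
        for (size_t row = 0; row < planeRows; row++) {
            memcpy(out, src + row * srcStride, width);   // both planes carry `width` payload bytes per row
            out += width;
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    *outStride = width;   // rows are now tightly packed
    return packed;
}

video_frame.p_data can then point at the packed buffer, with whatever stride field the library exposes set to the returned stride. For more general pixel-format conversions, Apple's Accelerate/vImage framework and the open-source libyuv library are the usual choices.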

What factors determine DXGI_FORMAT?

I am not familiar with DirectX, but I ran into a problem in a small project, part of which involves capturing DirectX data. I hope the description below makes sense.
General question:
I would like to know what factors determine the DXGI_FORMAT of a texture in the backbuffer (hardware? OS? application? DirectX version?). More importantly, when capturing a texture from the backbuffer, is it possible to receive it in a desired format by supplying that format as a parameter, with the data converted automatically if necessary?
Specifics about my problem:
I capture screens from games using Open Broadcaster Software (OBS) and process them with a specific library (OpenCV) prior to streaming. I noticed that, following updates to both Windows and OBS, I now get DXGI_FORMAT_R10G10B10A2_UNORM as the DXGI_FORMAT. This is a problem for me because, as far as I know, OpenCV does not provide a convenient way to build an OpenCV object when colors are 10 bits per channel. Below are a few relevant lines from the modified OBS source file.
d3d11_copy_texture(data.texture, backbuffer);
...
hlog(toStr(data.format)); // prints 24 = DXGI_FORMAT_R10G10B10A2_UNORM
...
ID3D11Texture2D* tex;
bool success = create_d3d11_stage_surface(&tex);
if (success) {
    ...
    HRESULT hr = data.context->Map(tex, subresource, D3D11_MAP_READ, 0, &mappedTex);
    ...
    Mat frame(data.cy, data.cx, CV_8UC4, mappedTex.pData, (int)mappedTex.RowPitch); // This creates an OpenCV Mat object.
    // No support for 10-bit colors; CV_8UC4 expects 8-bit colors.
    // When the resulting Mat is viewed, the colours are jumbled (probably because the 10-bit values did not fit into 8 bits).
Before the updates (when I was working on this a year ago), I was probably receiving DXGI_FORMAT = DXGI_FORMAT_B8G8R8A8_UNORM, because the code above used to work.
Now I wonder what changed, and whether I can modify the source code of OBS to receive data with the desired DXGI_FORMAT.
The create_d3d11_stage_surface method called above sets the DXGI_FORMAT, but I am not sure whether it means 'give me data in this DXGI_FORMAT' or 'I know you work with this format, give me what you have'.
static bool create_d3d11_stage_surface(ID3D11Texture2D **tex)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = data.cx;
    desc.Height = data.cy;
    desc.Format = data.format;
    ...
I hoped that overriding desc.Format with DXGI_FORMAT_B8G8R8A8_UNORM would result in that format being used for the staging texture mapped by the ID3D11DeviceContext::Map call above, so that I would get data in the specified format, but that did not work.
The choice of render target format is up to the application, but it has to be one supported by the Direct3D hardware feature level. Formats for render targets in swapchains are usually display scanout formats:
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM (rare)
See the DXGI documentation for the full list of supported formats and usages by feature level.
Direct3D 11 does not do format conversions when you copy resources (for example, when copying to a staging texture), so if you want a format conversion you will need to handle it yourself. Note that CPU-side conversion code for all the DXGI formats can be found in DirectXTex.
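If OBS cannot be persuaded to hand over DXGI_FORMAT_B8G8R8A8_UNORM, one option is to unpack the 10-bit data on the CPU before giving it to OpenCV. A rough sketch along those lines, reusing the names from the snippets above (mappedTex, data.cx, data.cy); the helper itself is hypothetical, not part of OBS or DirectXTex:

#include <cstdint>
#include <vector>

// Converts a row-pitched R10G10B10A2_UNORM image into a tightly packed
// 8-bit BGRA buffer that the existing CV_8UC4 Mat constructor understands.
static void UnpackR10G10B10A2ToBGRA8(const uint8_t *src, int srcRowPitch,
                                     int width, int height,
                                     std::vector<uint8_t> &dst)
{
    dst.resize((size_t)width * height * 4);
    for (int y = 0; y < height; ++y) {
        const uint32_t *row = reinterpret_cast<const uint32_t *>(src + (size_t)y * srcRowPitch);
        uint8_t *out = dst.data() + (size_t)y * width * 4;
        for (int x = 0; x < width; ++x) {
            uint32_t p = row[x];
            uint32_t r = (p      ) & 0x3FF;   // bits  0..9
            uint32_t g = (p >> 10) & 0x3FF;   // bits 10..19
            uint32_t b = (p >> 20) & 0x3FF;   // bits 20..29
            uint32_t a = (p >> 30) & 0x3;     // bits 30..31
            out[x * 4 + 0] = (uint8_t)(b >> 2);   // truncate 10 bits to 8
            out[x * 4 + 1] = (uint8_t)(g >> 2);
            out[x * 4 + 2] = (uint8_t)(r >> 2);
            out[x * 4 + 3] = (uint8_t)(a * 85);   // expand 2-bit alpha to 0..255
        }
    }
}

// Usage with the mapped staging texture from the snippets above:
//   std::vector<uint8_t> bgra;
//   UnpackR10G10B10A2ToBGRA8((const uint8_t *)mappedTex.pData, (int)mappedTex.RowPitch,
//                            data.cx, data.cy, bgra);
//   Mat frame(data.cy, data.cx, CV_8UC4, bgra.data(), (size_t)data.cx * 4);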
It is the application that decides the format. The simplest one would be R8G8B8A8, which simply stores RGB and alpha values. But if the developer decides to use HDR, the back buffer would probably be R11G11B10_FLOAT, because it can store much more precise color data there, at the cost of the alpha channel. If the game is, for example, black and white, there is no need to keep all RGB channels in the back buffer; you could use a simpler format. I hope this helps.

How to store .raw or raw picture bytes in the local Documents folder without a PNG or JPEG representation

I want to store the raw bytes of a picture captured using AVCaptureSession, but I have only seen examples that use PNG or JPEG representations. I want to store the raw data bytes on the phone's local disk (in the Documents folder) so they can be reopened later and converted into a UIImage for post-processing. Is there a way to do this?
for example:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
Can I store rawImageBytes in documents to open it later?
Sure you can. Create an NSData object containing your bytes and save that using one of the NSData file saving methods (e.g. writeToURL:atomically:.)
You'll need to know the number of bytes in your pixelBuffer, though; it looks like CVPixelBufferGetDataSize is the way to get that.
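A minimal sketch of that suggestion; the file name and the way the Documents URL is obtained are just examples:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t byteCount = CVPixelBufferGetDataSize(pixelBuffer);
NSData *pixelData = [NSData dataWithBytes:rawImageBytes length:byteCount];
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

NSURL *documents = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory
                                                           inDomain:NSUserDomainMask
                                                  appropriateForURL:nil
                                                             create:YES
                                                              error:NULL];
NSURL *fileURL = [documents URLByAppendingPathComponent:@"frame.raw"];
[pixelData writeToURL:fileURL atomically:YES];

// Later: NSData *loaded = [NSData dataWithContentsOfURL:fileURL];
// The raw bytes alone do not record width, height, bytes-per-row or pixel format,
// so store those separately if you want to rebuild a UIImage or CVPixelBuffer later.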

DirectX 10+. Copying a texture to a vertex buffer

I need to use texture data as the z-coordinate in my relief vertex buffer, so I need a way to transfer that texture data into the buffer, or a way to reinterpret it and use it as a vertex buffer.
MSDN says the arguments to ID3D10Device::CopyResource must be resources of the same type, so I can't use it.
Right now I have to transfer the data to the CPU and send it back as buffer data, but this is very inefficient.
Can anybody suggest a proper way?
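For what it's worth, here is a rough sketch of the CPU round trip described above (Direct3D 10, hedged: the device, height texture, vertex buffer, and vertex array are assumptions about the surrounding code, and the height texture is assumed to be DXGI_FORMAT_R32_FLOAT):

#include <d3d10.h>
#include <cstdint>

// Illustrative vertex layout; only the z member matters here.
struct Vertex { float x, y, z; };

// Hypothetical helper: reads the height texture back to the CPU, writes the
// values into the z components of a CPU-side vertex array, then re-uploads it.
static void CopyHeightsToVertexBuffer(ID3D10Device *device,
                                      ID3D10Texture2D *heightTexture,
                                      ID3D10Buffer *vertexBuffer,
                                      Vertex *vertices)
{
    // Stage the texture so the CPU can read it.
    D3D10_TEXTURE2D_DESC desc;
    heightTexture->GetDesc(&desc);
    desc.Usage = D3D10_USAGE_STAGING;
    desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;

    ID3D10Texture2D *staging = nullptr;
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
        return;
    device->CopyResource(staging, heightTexture);   // same resource type and format, so this copy is allowed

    D3D10_MAPPED_TEXTURE2D mapped;
    if (SUCCEEDED(staging->Map(0, D3D10_MAP_READ, 0, &mapped))) {
        for (UINT y = 0; y < desc.Height; ++y) {
            const float *row = reinterpret_cast<const float *>(
                static_cast<const uint8_t *>(mapped.pData) + y * mapped.RowPitch);
            for (UINT x = 0; x < desc.Width; ++x)
                vertices[y * desc.Width + x].z = row[x];
        }
        staging->Unmap(0);
    }
    staging->Release();

    // Push the updated vertices back to the GPU. UpdateSubresource needs a
    // DEFAULT-usage buffer; for a DYNAMIC buffer, Map it with WRITE_DISCARD instead.
    device->UpdateSubresource(vertexBuffer, 0, nullptr, vertices, 0, 0);
}

It is still a GPU-to-CPU-to-GPU round trip, just wrapped up; avoiding it entirely would mean reading the texture on the GPU instead of copying its contents into the vertex buffer.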

iOS: Loading a 16-bit Single-Channel Image

I have a 16-bit grayscale image. I have tried both .png and .tif; .tif works somewhat. I have the following code:
CGDataProviderRef l_Img_Data_Provider = CGDataProviderCreateWithFilename( [m_Name cStringUsingEncoding:NSASCIIStringEncoding] );
CGImageRef l_CGImageRef = CGImageCreate( m_Width, m_Height, 16, 16, m_Width * 2,
CGColorSpaceCreateDeviceGray(), kCGBitmapByteOrder16Big, l_Img_Data_Provider, NULL, false, kCGRenderingIntentDefault );
test_Image = [[UIImage alloc] initWithCGImage:l_CGImageRef];
[_test_Image_View setImage:test_Image];
This results in the following image:
(image: faulty gradient)
As you can see, there seems to be an issue at the beginning of the image (could it be trying to use the byte data from the header?), and the image is offset by about a fifth (a little harder to see: look at the left and the right; there is a faint line about a fifth of the way in from the right).
My goal is to convert this to a Metal texture and use it from there. I'm also having issues there; it seems like a byte-order issue, but maybe we can come back to that.
dave
CGDataProvider doesn't know about the format of the data that it stores. It is just meant for handling generic data:
"The CGDataProvider header file declares a data type that supplies
Quartz functions with data. Data provider objects abstract the
data-access task and eliminate the need for applications to manage
data through a raw memory buffer."
CGDataProvider
Because CGDataProvider is generic, you must describe the format of the image data through the CGImageCreate parameters. PNGs and JPGs have their own CGImageCreateWith.. functions for handling encoded data.
The CGImage parameters in your example correctly describe a 16-bit grayscale raw byte format, but they say nothing about TIF encoding, so I would guess you are right that the corrupted pixels you see come from the file headers.
There may be other ways to load a 16-bit grayscale image on iOS, but to use this method (or the very similar Metal route) you would need to parse the image bytes out of the TIF file and pass those into the function, or find another way to store and parse the image data.
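As a sketch of that raw-bytes route, assuming the 16-bit grayscale pixels have already been extracted from the TIF container into a plain buffer (the helper and its parameters stand in for the poster's m_Width, m_Height, and pixel data):

#import <UIKit/UIKit.h>

// Hypothetical helper: wraps raw 16-bit grayscale pixels (no file header) in a
// data provider and describes the layout explicitly for CGImageCreate.
static UIImage *ImageFromRaw16BitGray(const UInt8 *rawPixels, size_t width, size_t height)
{
    size_t bytesPerRow = width * 2;                       // one 16-bit channel per pixel
    CFDataRef pixelData = CFDataCreate(NULL, rawPixels, height * bytesPerRow);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();

    CGImageRef image = CGImageCreate(width, height,
                                     16,                  // bits per component
                                     16,                  // bits per pixel (single channel)
                                     bytesPerRow,
                                     gray,
                                     kCGBitmapByteOrder16Big,  // must match how the pixels were written
                                     provider, NULL, false, kCGRenderingIntentDefault);

    UIImage *result = [[UIImage alloc] initWithCGImage:image];

    // Core Foundation objects are not managed by ARC, so release them explicitly.
    CGImageRelease(image);
    CGColorSpaceRelease(gray);
    CGDataProviderRelease(provider);
    CFRelease(pixelData);
    return result;
}

If the Metal texture is the real goal, the same raw pixel buffer can be uploaded directly into an MTLTexture using a 16-bit single-channel pixel format such as MTLPixelFormatR16Unorm, which sidesteps CGImage entirely.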
