How to create and reuse an array of CVPixelBufferRef?

In my application I need to:
1. Create 24 CVPixelBufferRef.
2. Later add them to an AVAssetWriterInputPixelBufferAdaptor in a custom order to write an mp4 movie.
The VideoExport::addFrame function receives raw pixel data and stores it in the next empty CVPixelBufferRef. Here is the demo code:
// .h
CVPixelBufferRef buffers[24];

// .mm
void VideoExport::addFrame(unsigned char *pixels, long frame_index) {
    buffers[frame_index] = NULL;
    CVPixelBufferCreateWithBytes(NULL,
                                 frameSize.width,
                                 frameSize.height,
                                 kCVPixelFormatType_24RGB,
                                 (void *)pixels,
                                 frameSize.width * 3,   // bytes per row
                                 NULL,                  // release callback
                                 0,                     // release context
                                 NULL,                  // pixel buffer attributes
                                 &buffers[frame_index]);
}
The pixel buffers seem to populate successfully. The problem is that when I try writing different frames to the movie file by changing the index in buffers[index], the same frame is saved over and over again.
The frame that gets saved always seems to be the last one I sent to addFrame, defeating my attempt to use an array of unique buffers. I suspect that every call to addFrame overwrites the previous data.
Note 1: I have tested that the pixels sent to addFrame are unique.
Note 2: If I add the frame to the movie immediately inside addFrame the produced movie has unique frames, but then I can't shuffle the frame order.
What would be the correct way to create and reuse an array of CVPixelBufferRef? Would a pixelBufferPool help, and if so, how can I use it?
Thank you.

OK, found the issue.
In my addFrame function, the memory pointed to by unsigned char * pixels was freed after every addFrame call. But for the buffers array to be usable later, the pixel data has to stay alive until the buffers array itself is destroyed, because CVPixelBufferCreateWithBytes wraps the caller's memory instead of copying it.
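For reference, here is a minimal sketch of one way to keep the data alive, assuming the same frameSize and buffers members as above: copy the incoming pixels into memory the exporter owns and give CoreVideo a release callback that frees the copy when the pixel buffer is released (the copy and the releasePixelCopy callback are additions for the sketch, not part of the original code):

// .mm — a sketch, not the original code
static void releasePixelCopy(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);   // called by CoreVideo when the pixel buffer is released
}

void VideoExport::addFrame(unsigned char *pixels, long frame_index) {
    size_t bytesPerRow = frameSize.width * 3;
    size_t length = bytesPerRow * frameSize.height;

    // Own a copy of the pixels so the caller's buffer can be freed right away.
    void *copy = malloc(length);
    memcpy(copy, pixels, length);

    buffers[frame_index] = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 frameSize.width,
                                 frameSize.height,
                                 kCVPixelFormatType_24RGB,
                                 copy,
                                 bytesPerRow,
                                 releasePixelCopy,   // frees the copy when the buffer goes away
                                 NULL,               // releaseRefCon
                                 NULL,               // pixelBufferAttributes
                                 &buffers[frame_index]);
}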

Related

Buffer data in Simulink in continuous time

I need to buffer some signals for a fixed duration to be used within the simulation. The Buffer block in Simulink requires the frame rate to be known. However, I am using a continuous-time solver (with a defined maximum step size), so I don't really know what to set the buffer size to. There does not seem to be any option to trigger the buffer based on time. Can someone suggest how this can be done?
A simple buffer, implemented in a MATLAB Function block with a persistent variable so its contents survive between calls and the most recent element is always at the top, would be:
function y = buffer(x)
% Keep the buffer contents between calls with a persistent variable
persistent buf
if isempty(buf)
    buf = zeros(100,1);   % initialize the buffer on the first call
end
% Shuffle the existing elements down
buf(2:end) = buf(1:end-1);
% Add the new element at the top
buf(1) = x;
y = buf;

DirectX's first frame

I have a DirectX 11.1 application that isn't doing any real-time rendering.
Instead, I only render if something in the app is changed, grab the output, and save it to disk.
This works perfectly fine, except on the first draw call.
Here's some pseudocode roughly lining out what I'm doing:
// init()
D3D11CreateDevice(...);
dxgidevice1->SetMaximumFrameLatency(0)
createTexture(&rt);
createStagingTexture(&staging);
// render
immediateContext->SetRenderTarget(rt);
immediateContext->DrawIndexed(...);
d3d11Device->CreateQuery(...)
immediateContext->End(query)
immediateContext->Flush();
while(immediateContext->GetData(query) == S_FALSE) { }
// get bytes
immediateContext->CopyResource(staging, rt);
immediateContext->Flush();
immediateContext->Map(staging, &mapped);
memcopy (mapped.pData) // returns all 0 bytes first time
// copy to preview
immediateContext->SetRenderTarget(staging)
immediateContext->Map(..) // constant buffer with shader parameters
immediateContext->Draw(...); // fullscreen quad
memcopy(...)
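In actual (non-pseudo) code, the readback part looks roughly like this; a simplified sketch with error checking omitted and placeholder names for my real resources:

// create an event query that signals when the GPU has finished the preceding work
D3D11_QUERY_DESC queryDesc = {};
queryDesc.Query = D3D11_QUERY_EVENT;
ID3D11Query *query = nullptr;
d3d11Device->CreateQuery(&queryDesc, &query);

immediateContext->End(query);      // mark the end of the rendering commands
immediateContext->Flush();         // make sure they are submitted to the GPU

BOOL finished = FALSE;
while (immediateContext->GetData(query, &finished, sizeof(finished), 0) != S_OK) {
    // busy-wait until the GPU reports the work as done
}

// copy the render target into the CPU-readable staging texture and map it
immediateContext->CopyResource(staging, rt);
immediateContext->Flush();
D3D11_MAPPED_SUBRESOURCE mapped = {};
immediateContext->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// memcpy(dest, mapped.pData, ...);  // all zero bytes on the first run
immediateContext->Unmap(staging, 0);
query->Release();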
This works perfectly the second and following times I run it; the first time, however, memcopy returns an array filled with zeros.
If I change the order to
//render
//copy to preview
//get bytes
I still get zeroes, whereas
//render
//copy to preview
//render
//get bytes
gives me data.
It seems like I have to do two draw calls before actually getting any results. Now, if that is the case, I can draw some blanks when the app starts, but I need to be certain that this is by design of the API and not some fluke that produces random results on random devices.
Is there anyone who can shed some light on this issue?

How to use instancing offsets

Suppose I have a single buffer of instance data for two different groups of instances (i.e., I want to draw them in separate draw calls because they use different textures). How do I set up the offsets to accomplish this? IASetVertexBuffers and DrawIndexedInstanced both have offset parameters, and it's not clear to me which ones I need to use. Also, it isn't exactly clear whether DrawIndexedInstanced's offset values are in bytes or not.
Offsets
Those offsets work independently. You can offset either in ID3D11DeviceContext::IASetVertexBuffers or in ID3D11DeviceContext::DrawIndexedInstanced, or in both (then they combine).
ID3D11DeviceContext::IASetVertexBuffers accepts its offset in bytes:
boundData = (uint8_t*)data + offset   // offset is a byte count
ID3D11DeviceContext::DrawIndexedInstanced accepts all of its offsets in elements (indices, vertices, instances), not bytes. StartIndexLocation offsets where reading starts in the index buffer, BaseVertexLocation is added to each fetched index before the vertex is looked up, and StartInstanceLocation is added to the instance number before the per-instance data is fetched. The vertex and instance offsets work independently:
index = indexBuffer[StartIndexLocation + i]
vertex = vertexBuffer[BaseVertexLocation + index]
instanceData = instanceBuffer[StartInstanceLocation + instance]
I prefer offsetting in the draw call (a sketch follows below):
no byte (pointer) arithmetic needed -- fewer chances to make a mistake
no need to re-bind the buffer if you are only changing the offset -- fewer visible state changes (and, hopefully, fewer invisible ones too)
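As an illustration, a minimal sketch of drawing two groups out of one instance buffer purely with StartInstanceLocation; the Vertex/InstanceData structs, counts, slot numbers, and resource names are made up for the example:

// One instance buffer bound once at slot 1; the per-vertex buffer sits at slot 0.
UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
UINT offsets[2] = { 0, 0 };                       // no byte offsets needed
ID3D11Buffer *vbs[2] = { vertexBuffer, instanceBuffer };
context->IASetVertexBuffers(0, 2, vbs, strides, offsets);

// Group A: instances [0, groupACount) with texture A.
// textureA / textureB are ID3D11ShaderResourceView* here.
context->PSSetShaderResources(0, 1, &textureA);
context->DrawIndexedInstanced(indexCount, groupACount, 0, 0, 0);

// Group B: instances [groupACount, groupACount + groupBCount) with texture B.
// StartInstanceLocation skips group A's instance data; nothing is re-bound.
context->PSSetShaderResources(0, 1, &textureB);
context->DrawIndexedInstanced(indexCount, groupBCount, 0, 0, groupACount);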
Alternative solution
Instead of splitting the rendering into two draw calls, you can merge your texture resources and draw everything in one draw call:
both textures bound in the same draw call, with branching in the shader (if/else) depending on an integer passed via a constant buffer (the simplest solution)
a texture array (if the target hardware supports it)
a texture atlas (needs some coding, but always useful)
Hope it helps!

Custom Video File: how would I extract the images?

I know a lot of people would recommend going with Bink, using DirectShow, or even ffmpeg to play a video. However, what are movies anyway? Just images put together with sound.
I've already created a program that takes a bunch of images and packs them into a custom video file. The cool thing about this is that I can easily place the result on a quad. The issue I'm having is that I can only extract one image from the custom video file; when I have more than one, I have problems, which I fully understand.
I have an index lookup table of all the image sizes, followed by the raw images. The calculation I was following was:
offset = (NumberOfImages + 1) * sizeof(long)
So, with one image, finding the first image via this offset is quite easy. The for loop always starts at 0 and runs up to the number of images, which is 1. So it works out like this:
offset = (1 + 1) * 4 = 8
So now I know the offset for one image, which is great. However, a video is a bunch of images all together. So I've been thinking to myself: if only there were a way to read up to a certain point and then stuff the read data inside a vector.
currentPosition = 0; //-- Set the current position to zero before looping through the images in the file.
for (UINT i = 0; i < elements; i++) {
    long tblSz = (elements + 1) * sizeof(long);   //-- Size of the index table at the start of the file.
    long off = tblSz + currentPosition;           //-- Calculate the offset position inside the file from the table size.
    // in.seekg(off, std::ios_base::end);         //-- Not used.
    long videoSz = sicVideoIndexTable[i];         //-- Retrieve the image size from the index table stored inside the file.
    // in.seekg(0, std::ios_base::beg);           //-- Not used.
    dataBuf.resize(videoSz);                      //-- Resize the data buffer vector to fit the image.
    in.seekg(off, std::ios_base::beg);            //-- Go to the calculated offset to read the image data.
    std::streamsize currpos = in.gcount();        //-- Prototype, not used.
    in.read(&dataBuf[0], videoSz);                //-- Read in the data according to the image size.
    sVideoDesc.dataPtr = (void*)&dataBuf[0];      //-- Pass what was read into the temporary struct before pushing it into the vector that stores the images.
    sVideoDesc.fileSize = videoSz;
    sicVideoArray.push_back(sVideoDesc);
    dataBuf.empty();                              //-- Empty the data vector so it can be reused.
    currentPosition = videoSz;                    //-- Set the current position to the image size so the offset for the next image can be recalculated.
}
I believe the problem lies within the seekg and in.read calls, but that's just my gut telling me that. As you can see, the current position always changes.
The bottom line question is: if I can load one image, why can't I load multiple images from the custom video file? I'm not sure whether I should be using seekg, or whether I should just read every character up to a certain point and then dump the content into a data buffer vector. I thought reading the block of data would be the answer, but I'm becoming very unsure.
I think I finally understand what your code does. You really should use more descriptive variable names, or at least add an explanation of what each variable means. Anyway...
I believe your problem is in this line:
currentPosition = videoSz;
When it should be
currentPosition += videoSz;
You basically don't advance through your file.
Also, if you just read the images in sequentially, you might want to change your file format so that instead of a table of image sizes at the beginning, you store each image size directly followed by the image data. That way you don't need to do any of the offset calculations or seeking.
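For illustration, a rough sketch of reading that size-prefixed layout; the names (readImages, ImageBlob) are made up for the example and are not from your code:

#include <cstdint>
#include <fstream>
#include <vector>

struct ImageBlob {
    std::vector<char> data;   // raw bytes of one image
};

// Read images stored as: [int32 size][size bytes][int32 size][size bytes]...
std::vector<ImageBlob> readImages(const char *path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<ImageBlob> images;
    int32_t size = 0;
    // Read a size, then exactly that many bytes, until the file ends.
    while (in.read(reinterpret_cast<char *>(&size), sizeof(size))) {
        ImageBlob blob;
        blob.data.resize(size);
        if (!in.read(blob.data.data(), size))
            break;                        // truncated file; stop here
        images.push_back(std::move(blob));
    }
    return images;
}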

Dynamic buffers behaviour

I have a question regarding dynamic vertex and index buffers. Can I change their topology completely? For example, can I have one set of vertices in one frame, throw it away, and recreate vertices with their own properties and a count not equal to the previous vertex count? I also want to know the same about index buffers: can I change the number of indices in a dynamic index buffer?
So far, in my application I get a warning when trying to update the index buffer with a larger size:
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: Index buffer has not enough space! [ EXECUTION WARNING #359: DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL]
Changing the size of a buffer after creation is not possible.
A dynamic buffer allows you to update the data. You can write new data to it as long as it does not exceed the buffer's size.
But buffers don't care about data layout. A buffer with a size of 200 bytes can hold 100 shorts, 50 floats, or mixed data; anything less than or equal to 200 bytes.
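For illustration, a minimal sketch of refilling a dynamic vertex buffer each frame with D3D11_MAP_WRITE_DISCARD; maxVertexCount, the Vertex struct, and the variable names are assumptions, not from the question:

// Created once, sized for the worst case (maxVertexCount), with dynamic usage.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(Vertex) * maxVertexCount;
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
device->CreateBuffer(&desc, nullptr, &vertexBuffer);

// Each frame: write however many vertices this frame needs (the count may change),
// as long as the data fits in the buffer created above.
D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, vertices.data(), sizeof(Vertex) * vertices.size());
context->Unmap(vertexBuffer, 0);

// Draw only the vertices actually written this frame.
context->Draw(static_cast<UINT>(vertices.size()), 0);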
