DirectX's first frame

I have a DirectX 11.1 application that doesn't do any real-time rendering.
Instead, I only render when something in the app changes, grab the output, and save it to disk.
This works fine, except on the first draw call.
Here's some pseudocode roughly lining out what I'm doing:
// init()
D3D11CreateDevice(...);
dxgidevice1->SetMaximumFrameLatency(0);
createTexture(&rt);
createStagingTexture(&staging);

// render
immediateContext->SetRenderTarget(rt);
immediateContext->DrawIndexed(...);
d3d11Device->CreateQuery(...);
immediateContext->End(query);
immediateContext->Flush();
while (immediateContext->GetData(query) == S_FALSE) { }

// get bytes
immediateContext->CopyResource(staging, rt);
immediateContext->Flush();
immediateContext->Map(staging, &mapped);
memcpy(..., mapped.pData, ...); // returns all 0 bytes the first time

// copy to preview
immediateContext->SetRenderTarget(staging);
immediateContext->Map(...); // constant buffer with shader parameters
immediateContext->Draw(...); // fullscreen quad
memcpy(...);
This works perfectly the second and following times I run it; however, the first time, memcpy returns an array filled with zeros.
If I change the order to
// render
// copy to preview
// get bytes
I still get zeroes, while
// render
// copy to preview
// render
// get bytes
gives me data.
It seems like I have to issue two draw calls before actually getting any results. Now, if this is the case, I can draw some blanks when the app starts, but I need to be certain that this is by design of the API and not some fluke that produces random results on random devices.
Is there anyone who can shed some light on this issue?
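For reference, here is a minimal sketch of the event-query readback pattern the pseudocode above describes; the variable names, render-target setup, and subresource indices are assumptions, not the original code:

// Minimal readback sketch (assumed setup): draw, signal an event query,
// spin until the GPU reaches it, then copy to a staging texture and map.
D3D11_QUERY_DESC qd = {};
qd.Query = D3D11_QUERY_EVENT;
ID3D11Query* query = nullptr;
d3d11Device->CreateQuery(&qd, &query);

immediateContext->OMSetRenderTargets(1, &rtv, nullptr);
immediateContext->DrawIndexed(indexCount, 0, 0);
immediateContext->End(query);   // GPU signals the query when it gets here
immediateContext->Flush();      // submit the command buffer now
while (immediateContext->GetData(query, nullptr, 0, 0) == S_FALSE) {
    // busy-wait until the GPU has executed everything up to End()
}

immediateContext->CopyResource(staging, rt); // GPU copy into CPU-readable memory
D3D11_MAPPED_SUBRESOURCE mapped = {};
// Map with D3D11_MAP_READ blocks until the copy has finished
immediateContext->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData holds the pixels; rows are mapped.RowPitch bytes apart
immediateContext->Unmap(staging, 0);
query->Release();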

Related

How to create and reuse an array of CVPixelBufferRef?

In my application I need to:
1. create 24 CVPixelBufferRef, and
2. add them later to an AVAssetWriterInputPixelBufferAdaptor in a custom order to write an mp4 movie.
The VideoExport::addFrame function receives raw pixel data and stores it in the next empty CVPixelBufferRef. Here is the demo code:
// .h
CVPixelBufferRef buffers[24];

// .mm
void VideoExport::addFrame(unsigned char *pixels, long frame_index) {
    buffers[frame_index] = NULL;
    CVPixelBufferCreateWithBytes(NULL,
                                 frameSize.width,
                                 frameSize.height,
                                 kCVPixelFormatType_24RGB,
                                 (void *)pixels,
                                 frameSize.width * 3,
                                 NULL,
                                 0,
                                 NULL,
                                 &buffers[frame_index]);
}
The pixel buffers seem to populate successfully. The problem is that when I try writing different frames to the movie file by changing the index in buffers[index], the same frame is saved over and over again.
The frame that gets saved always seems to be the last one passed to addFrame, defeating my attempt to use an array of unique buffers. I suspect that each call to addFrame overwrites the previous data.
Note 1: I have tested that the pixels sent to addFrame are unique.
Note 2: If I add the frame to the movie immediately inside addFrame the produced movie has unique frames, but then I can't shuffle the frame order.
What would be the correct way to create and reuse an array of CVPixelBufferRef? Would a pixelBufferPool help, and if so, how can I use it?
Thank you.
OK, found the issue.
In my addFrame function, the unsigned char *pixels data was destroyed after every addFrame call. But for the buffers array to remain usable later, the pixel data must stay alive until the buffers array is destroyed, because CVPixelBufferCreateWithBytes wraps the caller's memory rather than copying it.
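A sketch of one possible fix, assuming the same surrounding class as the question: let CoreVideo allocate and own the storage with CVPixelBufferCreate and copy the pixels in, instead of wrapping the caller's pointer with CVPixelBufferCreateWithBytes.

// Copy the pixels into buffer-owned memory so the CVPixelBufferRef
// no longer depends on the caller keeping 'pixels' alive.
void VideoExport::addFrame(unsigned char *pixels, long frame_index) {
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferCreate(NULL, frameSize.width, frameSize.height,
                        kCVPixelFormatType_24RGB, NULL, &buffer);
    CVPixelBufferLockBaseAddress(buffer, 0);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    size_t dstStride = CVPixelBufferGetBytesPerRow(buffer); // may include padding
    size_t srcStride = frameSize.width * 3;                 // tightly packed RGB
    for (size_t row = 0; row < (size_t)frameSize.height; ++row) {
        memcpy(dst + row * dstStride, pixels + row * srcStride, srcStride);
    }
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    buffers[frame_index] = buffer; // independent of the caller's memory now
}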

How to use instancing offsets

Suppose I have a single buffer of instance data for 2 different groups of instances (i.e., I want to draw them in separate draw calls because they use different textures). How do I set up the offsets to accomplish this? IASetVertexBuffers and DrawIndexedInstanced both have offset parameters, and it's not clear to me which ones I need to use. Also, it isn't exactly clear whether DrawIndexedInstanced's offset values are in bytes or not.
Offsets
These offsets work independently. You can offset either in ID3D11DeviceContext::IASetVertexBuffers or in ID3D11DeviceContext::DrawIndexedInstanced, or in both (then they combine).
ID3D11DeviceContext::IASetVertexBuffers accepts its offset in bytes:
boundData = (uint8_t*)data + offsetInBytes
ID3D11DeviceContext::DrawIndexedInstanced accepts all of its offsets in elements (indices, vertices, instances). StartIndexLocation shifts where reading begins in the index buffer, BaseVertexLocation is added to every index fetched from the buffer, and StartInstanceLocation is added to the instance ID when per-instance data is fetched:
index    = indexBuffer[i + StartIndexLocation]
vertex   = index + BaseVertexLocation
instance = instanceID + StartInstanceLocation
I prefer offsetting in the draw call:
- no byte (pointer) arithmetic needed -- fewer chances to make a mistake
- no need to re-bind the buffer if you're only changing the offset -- fewer visible state changes (and, hopefully, invisible ones too)
A sketch of this approach follows.
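For example, two draw calls sharing one instance buffer, selecting each group purely via StartInstanceLocation (the buffer names, strides, SRVs, and group counts are assumptions, not from the question):

// Bind vertex + instance buffers once, with zero byte offsets...
UINT strides[] = { sizeof(Vertex), sizeof(InstanceData) };
UINT offsets[] = { 0, 0 };  // byte offsets for IASetVertexBuffers
ID3D11Buffer* buffers[] = { vertexBuffer, instanceBuffer };
context->IASetVertexBuffers(0, 2, buffers, strides, offsets);

// ...then pick each group's instances via StartInstanceLocation (in elements).
context->PSSetShaderResources(0, 1, &srvA);  // first group's texture
context->DrawIndexedInstanced(indexCount, groupACount, 0, 0, 0);

context->PSSetShaderResources(0, 1, &srvB);  // second group's texture
context->DrawIndexedInstanced(indexCount, groupBCount, 0, 0, groupACount);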
Alternative solution
Instead of splitting the rendering into two draw calls, you can merge your texture resources and draw everything in one call:
- bind both textures for the same draw call and branch in the shader (if/else) on an integer passed via a constant buffer (simplest solution)
- use a texture array (if the target hardware supports it)
- use a texture atlas (needs some coding, but always useful)
Hope it helps!

Custom video file: how would I extract the images?

I know a lot of people would recommend going with Bink, using DirectShow, or even ffmpeg to play a video. However, what are movies anyway but images put together with sound?
I've already created a program where I take a bunch of images and place them into a custom video file. The cool thing about this is that I can easily place it on a quad. The issue I'm having is that I can only extract one image from the custom video file; when I have more than one, I have problems, which I fully understand.
The file starts with an index lookup table of all the image sizes, followed by the raw images. The offset calculation I was following was:
offset = (NumberOfImages + 1) * sizeof(long)
So, with one image, finding the first image is quite easy. The for loop always starts at 0 and runs up to the number of images, which is 1. So it translates to:
offset = (1 + 1) * 4 = 8
Now I know the offset for just one image, which is great. However, a video is a bunch of images all together. So I've been thinking to myself: if only there were a way to read up to a certain point and then stuff the read data inside a vector.
currentPosition = 0; //-- Set the current position to zero before looping through the images in the file.
for (UINT i = 0; i < elements; i++) {
    long tblSz = (elements + 1) * sizeof(long); //-- Size of the index table at the start of the file.
    long off = tblSz + currentPosition; //-- Calculate the offset inside the file, knowing the table size.
    long videoSz = sicVideoIndexTable[i]; //-- Retrieve the image size from the index table stored in the file.
    dataBuf.resize(videoSz); //-- Resize the data buffer vector to fit the image size.
    in.seekg(off, std::ios_base::beg); //-- Go to the calculated offset to retrieve the image data.
    in.read(&dataBuf[0], videoSz); //-- Read in the data according to the image size.
    sVideoDesc.dataPtr = (void*)&dataBuf[0]; //-- Store what was read in the temporary struct before pushing it into the vector holding the collection of images.
    sVideoDesc.fileSize = videoSz;
    sicVideoArray.push_back(sVideoDesc);
    dataBuf.clear(); //-- Clear the data vector so it can be reused (vector::empty() only tests for emptiness).
    currentPosition = videoSz; //-- Set the current position to the video size so the offset can be recalculated for the next image.
}
I believe the problem lies within seekg and in.read, but that's just my gut telling me that. As you can see, the current position always changes.
The bottom-line question is: if I can load one image, why can't I load multiple images from the custom video file? I'm not sure whether I'm misusing seekg, or whether I should just read every character up to a certain point and dump the contents into a data buffer vector. I thought reading the data in blocks would be the answer, but I'm becoming very unsure.
I think I finally understand what your code does. You really should use more descriptive variable names. Or at least add an explanation of what each variable means. Anyway...
I believe your problem is in this line:
currentPosition = videoSz;
when it should be
currentPosition += videoSz;
You basically don't advance through your file.
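A minimal sketch of the corrected bookkeeping, using the question's own variable names:

// The offset of image i is the table size plus the sizes of ALL images
// before it, so accumulate the position instead of overwriting it.
long tblSz = (elements + 1) * sizeof(long);
long currentPosition = 0;
for (UINT i = 0; i < elements; i++) {
    long off = tblSz + currentPosition;       // start of image i
    in.seekg(off, std::ios_base::beg);
    // ... read sicVideoIndexTable[i] bytes as before ...
    currentPosition += sicVideoIndexTable[i]; // advance past image i
}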
Also, if you just read the images in sequentially, you might want to change your file format so that, instead of a table of image sizes at the beginning, you store each image size directly followed by its image data. That way you don't need any of the offset calculations or seeking; a sketch of that layout follows.
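A minimal sketch of reading that sequential layout, assuming the same std::ifstream in and element count as the question; each record is a long size followed immediately by the image bytes:

// Sequential format: [size0][data0][size1][data1]...
// Each read leaves the stream positioned at the next record, so no seekg.
std::vector<char> dataBuf;
for (UINT i = 0; i < elements; i++) {
    long videoSz = 0;
    in.read(reinterpret_cast<char*>(&videoSz), sizeof(long)); // record header
    dataBuf.resize(videoSz);
    in.read(&dataBuf[0], videoSz);                            // image payload
    // ... hand dataBuf off to sicVideoArray / texture upload here ...
}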

Plot an array into bitmap in C/C++ for thermal printer

I am trying to accomplish something a bit backwards from everyone else. Given an array of sensor data, I wish to print a graph plot of it. My test bench uses a stepper motor to move the input shaft of a sensor, stop, get ADC value of sensor's voltage, repeat.
My current version 0.9 bench does not have a graphical output. The proper end solution will. Currently, I have 35 data points, and I'm looking to get 90 to 100. The results are simply stored in an int array. The index is linear, so it's not a complicated plot, but I'm having problems conceptualizing the plot from bottom-left to top-right to display to the operator. I figure on the TFT screen, I can literally translate an origin and then draw lines from point to point...
Worse, I also want to print this to a thermal printer, so I'll need to translate it into a sub-384-pixel-wide graph. I'm not too worried about the mechanics of communicating the image to the printer, but about how to convert the array into an image.
It gets better: I'm doing this on an Arduino Mega, so the libraries aren't very robust. At least it has a lot of RAM for the code. :/
Here's an example of when I take my data from the Arduino test and feed it into Excel. I'm not looking for color, but I'd like the graph to appear and this setup not be connected to a computer. Or the network. This is the ESC/POS printer, btw.
The algorithm for this took three main stages:
1) Translate the Y from top left to bottom left.
2) Break up the X into word:bit values.
3) Use Bresenham's algorithm to draw lines between the points. And then figure out how to make the line thicker.
For my exact case, the target bitmap is 384x384, which requires 18k of SRAM (384*384/8 bytes) to store in memory. I had to ditch the "lame" Arduino Mega and upgrade to the ChipKIT uC32 to pull this off: 32k of RAM, an 80 MHz CPU, and twice the I/O!
I figured this out by basing my logic on Adafruit's Thermal printer library for Arduino. Their examples include how to convert a 1-bit bitmap into a static array for printing. I used their GFX library to implement the setXY function, as well as their GFX Bresenham's algorithm to draw lines between (X,Y) points using my setXY().
It all boiled down to the code in this function I wrote:
// *bitmap is a global or class-member pointer to a byte array of size (384/8)*384
// bytesPerRow is 384/8
void setXY(int x, int y) {
    // integer divide by 8 (/8) because the array element size is one byte
    int xByte = x / 8;
    // modulus 8 (%8) to get the bit to set within that byte
    uint8_t shifty = x % 8;
    // right shift because we start from the LEFT
    int xVal = 0x80 >> shifty;
    // inverts Y from bottom to start of array
    int yRow = yMax - y;
    // get the actual byte in the array to manipulate
    int offset = yRow * bytesPerRow + xByte;
    // use logical OR in case there is other data in the bitmap,
    // such as a frame or a grid
    *(bitmap + offset) |= xVal;
}
The big point is to remember that with the array we start at the top-left of the bitmap, go right across the row, then move down one Y row and repeat. The gotchas are in translating the X into the word:bit combination: you have to shift from the left (sort of like translating the Y backwards). Another gotcha is an off-by-one error in the bookkeeping for the Y.
I put all of this in a class, which kept me from writing one big function to do it all; with better design, the implementation turned out easier than I thought it would be.
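For illustration, here is a hedged sketch of how the plot itself might be driven; the scaling, the drawLine() helper, and the sample array are assumptions, not the project's exact code:

// Scale ~100 samples across the 384-pixel-wide bitmap and connect
// consecutive points with lines; drawLine() (Bresenham, as in the GFX
// library) calls setXY() for every pixel it visits.
void plotSamples(const int *samples, int count, int adcMax) {
    int prevX = 0;
    int prevY = (long)samples[0] * yMax / adcMax;              // scale ADC to height
    for (int i = 1; i < count; i++) {
        int x = (long)i * (bytesPerRow * 8 - 1) / (count - 1); // spread over 384 px
        int y = (long)samples[i] * yMax / adcMax;
        drawLine(prevX, prevY, x, y);
        prevX = x;
        prevY = y;
    }
}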

Missing depth info after first mesh

I'm using SlimDX for a Direct3D 10 app. In the app I've loaded two or more meshes, with images loaded as textures, using an .fx file for the shaders. The code was modified from SlimDX's sample "SimpleModel10".
I moved the draw call and shader setup code into a class that manages one mesh, its shader (effect), and its draw call. Then I instantiate two copies of this class and call their draw functions one after another.
In the output, no matter how I change the Z positions of the meshes, the one drawn later always stays on top. When I used PIX to debug the draw calls, I found that the second mesh has no depth while the first one does. I've tried with three meshes; the second and third ones have no depth either. The funny thing is that all of them are instantiated from the same class, using the same draw call.
What could have caused such a problem?
The following is part of the code in the draw function of the class; I've omitted the rest, as it's lengthy and involves a few classes. I kept the existing OnRenderBegin() and OnRenderEnd() of the sample:
PanelEffect.GetVariableByName("world").AsMatrix().SetMatrix(world);
lock (this)
{
    device.InputAssembler.SetInputLayout(layout);
    device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
    device.InputAssembler.SetIndexBuffer(indices, Format.R32_UInt, 0);
    device.InputAssembler.SetVertexBuffers(0, binding);
    PanelEffect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply();
    device.DrawIndexed(indexCount, 0, 0);
    device.InputAssembler.SetIndexBuffer(null, Format.Unknown, 0);
    device.InputAssembler.SetVertexBuffers(0, nullBinding);
}
Edit: After much debugging and code isolation, I found that the culprit is Font.Draw() in my DrawString() function:
internal void DrawString(string text)
{
    sprite.Begin(SpriteFlags.None);
    string[] texts = text.Split(new string[] { "\r\n" }, StringSplitOptions.None);
    int y = PanelY;
    foreach (string t in texts)
    {
        font.Draw(sprite, t, new System.Drawing.Rectangle(PanelX, y, PanelSize.Width, PanelSize.Height),
                  FontDrawFlags.SingleLine, new Color4(Color.Red));
        y += font.Description.Height;
    }
    sprite.End();
}
Commenting out Font.Draw() solves the problem. Maybe it automatically sets some states, which causes the next mesh draw to discard depth. Looking into SlimDX's source code now.
After much debugging in PIX, this is the conclusion:
Calling Font.Draw() automatically sets DepthEnable to false and DepthFunction to D3D10_COMPARISON_NEVER; that's from comparing PIX's OutputMerger details before and after the Font.Draw() call.
Solution
Context10_1.Device.OutputMerger.DepthStencilState = depthStencilState;
Putting that before the next mesh draw call fixed the problem. Previously I had only set the DepthStencilState in OnRenderBegin().
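For reference, at the raw Direct3D 10 level that SlimDX property assignment amounts to rebinding a depth-enabled depth-stencil state; a minimal C++ sketch with an assumed device pointer (the state Font.Draw() leaves behind has DepthEnable false and DepthFunc D3D10_COMPARISON_NEVER):

// Rebuild and rebind a depth-enabled state before the next mesh draw.
D3D10_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable = TRUE;
dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D10_COMPARISON_LESS;

ID3D10DepthStencilState* depthState = nullptr;
device->CreateDepthStencilState(&dsDesc, &depthState);
device->OMSetDepthStencilState(depthState, 0); // StencilRef = 0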
