Can anybody help with converting an SDL_Surface object (a texture loaded from a file) into an IDirect3DTexture9 object?
I honestly don't know why you would ever want to do this. It sounds like a truly horrible idea for a variety of reasons, so please tell us why you want to do this so we can convince you not to ;).
In the meantime, here's a quick overview of how you'd go about it:
IDirect3DTexture9* pTex = NULL;
HRESULT hr = S_OK;
hr = m_d3dDevice->CreateTexture(
surface->w,
surface->h,
1,
usage,
format,
D3DPOOL_MANAGED,
&pTex,
NULL);
This creates the actual texture with the size and format of the SDL_Surface. You'll have to fill in the usage on your own, depending on how you want to use it (see D3DUSAGE). You'll also have to figure out the format on your own - you can't directly map an SDL_PixelFormat to a D3DFORMAT. This won't be easy unless you know exactly what pixel format your SDL_Surface is in.
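If it helps, here is a very rough, untested helper illustrating the kind of mapping involved. It only covers a couple of common little-endian layouts (the mask checks are assumptions, not an exhaustive mapping); anything it doesn't recognise should be converted first, e.g. with SDL_ConvertSurface, into a format you do know how to map:
D3DFORMAT GuessD3DFormat(const SDL_PixelFormat* fmt)
{
    if (fmt->BytesPerPixel == 4 && fmt->Amask != 0)
        return D3DFMT_A8R8G8B8; // assumes the masks really are laid out as ARGB
    if (fmt->BytesPerPixel == 4)
        return D3DFMT_X8R8G8B8;
    if (fmt->BytesPerPixel == 2 && fmt->Gmask == 0x07E0)
        return D3DFMT_R5G6B5;
    return D3DFMT_UNKNOWN; // caller should convert the surface instead
}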
Now, you need to write the data into the texture. You can't use straight memcpy here, since the SDL_Surface and the actual texture may have different strides. Here's some untested code that may do this for you:
HRESULT hr;
D3DLOCKED_RECT lockedRect;
// lock the texture, so that we can write into it
// Note: if you used D3DUSAGE_DYNAMIC above, you should
// use D3DLOCK_DISCARD as the flags parameter instead of 0.
hr = pTex->LockRect(0, &lockedRect, NULL, 0);
if(SUCCEEDED(hr))
{
// use char pointers here for byte indexing
char* src = (char*) surface->pixels;
char* dst = (char*) lockedRect.pBits;
size_t numRows = surface->h;
size_t rowSize = surface->w * surface->format->BytesPerPixel;
// for each row...
while(numRows--)
{
// copy the row
memcpy(dst, src, rowSize);
// use the given pitch parameters to advance to the next
// row (since these may not equal rowSize)
src += surface->pitch;
dst += lockedRect.Pitch;
}
// don't forget this, or D3D won't like you ;)
hr = pTex->UnlockRect(0);
}
I'm trying to make a simple 3D modeling tool.
To transform the model, I need to move a vertex (or vertices).
I used a dynamic vertex buffer because I thought it would need frequent updates,
but performance is too low on a high-polygon model, even though I change just one vertex.
Are there other methods, or am I doing this the wrong way?
Here is my D3D11_BUFFER_DESC:
Usage = D3D11_USAGE_DYNAMIC;
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
And here is my update function:
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < vIndice.size(); ++i)
{
pBuffer[vIndice[i]].xfPosition.x = pVerticesInfo[vIndice[i]].xfPosition.x;
pBuffer[vIndice[i]].xfPosition.y = pVerticesInfo[vIndice[i]].xfPosition.y;
pBuffer[vIndice[i]].xfPosition.z = pVerticesInfo[vIndice[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
As mentioned in the previous answer, you are updating your whole buffer every time, which will be slow depending on model size.
The solution is indeed to implement partial updates. There are two possibilities: you want to update a single vertex, or you want to update arbitrary indices (for example, you want to move N vertices in one go, in different locations, like vertices 1, 20 and 23).
The first solution is rather simple. First create your buffer with the following description:
Usage = D3D11_USAGE_DEFAULT;
CPUAccessFlags = 0;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
This makes sure your vertex buffer is GPU-visible only.
Next, create a second dynamic buffer which has the size of a single vertex (you do not need any bind flags in that case, as it will be used only for copies):
_pCopyVertexBuffer
Usage = D3D11_USAGE_DYNAMIC; //Staging works as well
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = 0;
ByteWidth = sizeof(ST_Vertex);
// no initial data is needed for the copy buffer, so pass NULL for pInitialData
hr = pd3dDevice->CreateBuffer(&descBuffer, NULL, &_pCopyVertexBuffer);
When you move a vertex, copy the changed vertex into the copy buffer:
ST_Vertex changedVertex;
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pCopyVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
pBuffer->xfPosition.x = changedVertex.xfPosition.x;
pBuffer->xfPosition.y = changedVertex.xfPosition.y;
pBuffer->xfPosition.z = changedVertex.xfPosition.z;
pImmediateContext->Unmap(_pCopyVertexBuffer, 0);
Since you use D3D11_MAP_WRITE_DISCARD, make sure to write all attributes there (not only position).
Now, once you are done, you can use ID3D11DeviceContext::CopySubresourceRegion to copy only the modified vertex into its current location.
I assume that vertexID is the index of the modified vertex:
pImmediateContext->CopySubresourceRegion(_pVertexBuffer,
0, //must be 0
vertexID * sizeof(ST_Vertex), //location of the vertex in your GPU vertex buffer
0, //must be 0
0, //must be 0
_pCopyVertexBuffer,
0, //must be 0
NULL //in this case we copy the full content of _pCopyVertexBuffer, so we can set to null
);
Now if you want to update a list of vertices, things get more complicated and you have several options:
-First, you apply this single-vertex technique in a loop (see the short sketch after this list). This will work quite well if your changeset is small.
-If your changeset is very big (close to the full vertex count), you can probably just rewrite the whole buffer instead.
-An intermediate technique is to use a compute shader to perform the updates (that's the one I normally use, as it's the most flexible version).
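For the first option, this is essentially the single-vertex copy from above repeated per changed index. A rough, untested sketch (reusing _pVertexBuffer, _pCopyVertexBuffer and pImmediateContext from above; changedIndices and pVerticesInfo are assumed to come from the application):
for (size_t i = 0; i < changedIndices.size(); ++i)
{
    UINT vertexID = changedIndices[i];
    // write the new vertex data into the small dynamic copy buffer
    D3D11_MAPPED_SUBRESOURCE mapped;
    pImmediateContext->Map(_pCopyVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    *(ST_Vertex*)mapped.pData = pVerticesInfo[vertexID];
    pImmediateContext->Unmap(_pCopyVertexBuffer, 0);
    // copy it into place inside the GPU-only vertex buffer
    pImmediateContext->CopySubresourceRegion(_pVertexBuffer, 0,
        vertexID * sizeof(ST_Vertex), 0, 0,
        _pCopyVertexBuffer, 0, NULL);
}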
Posting all the C++ binding code would be way too long, but here is the concept:
Your vertex buffer must have BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_UNORDERED_ACCESS; //this allows writing to it from a compute shader
You need to create an ID3D11UnorderedAccessView for this buffer (so the shader can write to it).
You need the following misc flag: D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS //this allows writing to it as a RWByteAddressBuffer
You then create two dynamic structured buffers (I prefer those over byte-address buffers, but combining the vertex buffer and structured flags is not allowed in DX11, so the write target has to be raw instead).
The first structured buffer has a stride of sizeof(ST_Vertex) (this is your changeset).
The second structured buffer has a stride of 4 (uint; these are the indices).
Both structured buffers get an arbitrary element count (normally I use 1024 or 2048), so that will be the maximum number of vertices you can update in a single pass.
For both structured buffers you need an ID3D11ShaderResourceView (shader visible, read only).
Then the update process is the following:
Write the modified vertices and their locations into the structured buffers (using Map with discard; it is fine to copy less than the full capacity).
Attach both structured buffers for reading.
Attach the ID3D11UnorderedAccessView for writing.
Set your compute shader.
Call Dispatch.
Detach the ID3D11UnorderedAccessView used for writing (this is VERY important).
Here is a sample compute shader (I assume your vertex is position only, for simplicity):
cbuffer cbUpdateCount : register(b0)
{
uint updateCount;
};
RWByteAddressBuffer RWVertexPositionBuffer : register(u0);
StructuredBuffer<float3> ModifiedVertexBuffer : register(t0);
StructuredBuffer<uint> ModifiedVertexIndicesBuffer : register(t1);
//this is the stride of your vertex buffer, since here we use float3 it is 12 bytes
#define WRITE_STRIDE 12
[numthreads(64, 1, 1)]
void CS( uint3 tid : SV_DispatchThreadID )
{
//make sure you do not go past the element count, as we run 64 threads at a time
if (tid.x >= updateCount) { return; }
uint readIndex = tid.x;
uint writeIndex = ModifiedVertexIndicesBuffer[readIndex];
float3 vertex = ModifiedVertexBuffer[readIndex];
//byte address buffers do not understand float, asuint is a binary cast.
RWVertexPositionBuffer.Store3(writeIndex * WRITE_STRIDE, asuint(vertex));
}
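To tie the creation and update steps above together, here is a rough, untested C++ sketch of the per-update dispatch. All of the views, buffers and the compute shader are assumed to have been created as described; the names are placeholders, not code from this answer:
void UpdateVerticesGPU(ID3D11DeviceContext* ctx,
                       ID3D11Buffer* modifiedVertexBuffer,   // dynamic structured, stride = sizeof(ST_Vertex)
                       ID3D11Buffer* modifiedIndexBuffer,    // dynamic structured, stride = sizeof(uint)
                       ID3D11Buffer* updateCountCB,          // dynamic constant buffer for cbUpdateCount
                       ID3D11ShaderResourceView* vertexSRV,
                       ID3D11ShaderResourceView* indexSRV,
                       ID3D11UnorderedAccessView* vertexBufferUAV, // raw UAV on the real vertex buffer
                       ID3D11ComputeShader* updateCS,
                       const ST_Vertex* vertices, const UINT* indices, UINT count)
{
    D3D11_MAPPED_SUBRESOURCE mapped;

    // 1. Write the changeset into the two dynamic structured buffers.
    ctx->Map(modifiedVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, vertices, count * sizeof(ST_Vertex));
    ctx->Unmap(modifiedVertexBuffer, 0);

    ctx->Map(modifiedIndexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, indices, count * sizeof(UINT));
    ctx->Unmap(modifiedIndexBuffer, 0);

    // 2. Update the constant buffer holding updateCount.
    ctx->Map(updateCountCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    *(UINT*)mapped.pData = count;
    ctx->Unmap(updateCountCB, 0);

    // 3. Bind everything and dispatch (64 matches numthreads in the shader).
    ID3D11ShaderResourceView* srvs[2] = { vertexSRV, indexSRV };
    ctx->CSSetShaderResources(0, 2, srvs);
    ctx->CSSetUnorderedAccessViews(0, 1, &vertexBufferUAV, NULL);
    ctx->CSSetConstantBuffers(0, 1, &updateCountCB);
    ctx->CSSetShader(updateCS, NULL, 0);
    ctx->Dispatch((count + 63) / 64, 1, 1);

    // 4. Detach the UAV so the buffer can be bound as a vertex buffer again.
    ID3D11UnorderedAccessView* nullUAV = NULL;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, NULL);
}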
For the purposes of this question I'm going to assume you already have a mechanism for selecting a vertex from a list of vertices based upon ray casting or some other picking method and a mechanism for creating a displacement vector detailing how the vertex was moved in model space.
The method you have for updating the buffer is sufficient for anything less than a few hundred vertices, but on large scale models it becomes extremely slow. This is because you're updating everything, rather than the individual vertices you modified.
To fix this, you should only update the vertices you have changed, and to do that you need to create a change set.
In concept, a change set is nothing more than a set of changes made to the data - a list of the vertices that need to be updated. Since we already know which vertices were modified (otherwise we couldn't have manipulated them), we can map in the GPU buffer, go to that vertex specifically, and copy just those vertices into the GPU buffer.
In your vertex modification method, record the index of the vertex that was modified by the user:
//Modify the vertex coordinates based on mouse displacement
pVerticesInfo[SelectedVertexIndex].xfPosition.x += DisplacementVector.x;
pVerticesInfo[SelectedVertexIndex].xfPosition.y += DisplacementVector.y;
pVerticesInfo[SelectedVertexIndex].xfPosition.z += DisplacementVector.z;
//Add the changed vertex to the list of changes.
changedVertices.add(SelectedVertexIndex);
//And update the GPU buffer
UpdateD3DBuffer();
In UpdateD3DBuffer(), do the following:
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
// Dynamic buffers can only be mapped with DISCARD or NO_OVERWRITE;
// NO_OVERWRITE lets us patch individual vertices without invalidating the rest.
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < changedVertices.size(); ++i)
{
pBuffer[changedVertices[i]].xfPosition.x = pVerticesInfo[changedVertices[i]].xfPosition.x;
pBuffer[changedVertices[i]].xfPosition.y = pVerticesInfo[changedVertices[i]].xfPosition.y;
pBuffer[changedVertices[i]].xfPosition.z = pVerticesInfo[changedVertices[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
changedVertices.clear();
This has the effect of only updating the vertices that have changed, rather than all vertices in the model.
This also allows for some more complex manipulations. You can select multiple vertices and move them all as a group, select a whole face and move all the connected vertices, or move entire regions of the model relatively easily, assuming your picking method is capable of handling this.
In addition, if you record the change sets with enough information (the affected vertices and the displacement vector), you can fairly easily implement an undo function by simply reversing the displacement vector and reapplying the selected change set, as sketched below.
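A minimal sketch of what that could look like (my illustration, not part of the original code; it reuses pVerticesInfo, changedVertices and UpdateD3DBuffer() from above and assumes the whole set was moved by one displacement vector):
struct ChangeSet
{
    std::vector<int> affectedVertices;
    float dx, dy, dz; // the displacement that was applied to the whole set
};

std::vector<ChangeSet> undoStack;

void UndoLastChange()
{
    if (undoStack.empty())
        return;

    ChangeSet change = undoStack.back();
    undoStack.pop_back();

    for (size_t i = 0; i < change.affectedVertices.size(); ++i)
    {
        int v = change.affectedVertices[i];
        // reverse the displacement on the CPU-side copy...
        pVerticesInfo[v].xfPosition.x -= change.dx;
        pVerticesInfo[v].xfPosition.y -= change.dy;
        pVerticesInfo[v].xfPosition.z -= change.dz;
        // ...and re-mark the vertex so the partial GPU update picks it up
        changedVertices.add(v);
    }
    UpdateD3DBuffer();
}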
I am using the ffmpeg library. I want to know how much memory one packet can take.
I debugged and checked the members of an AVPacket, and none of them seem reasonable, such as AVPacket.size, etc.
If you provide your own data buffer, it needs to have a size of at least FF_MIN_BUFFER_SIZE. You would then set AVPacket.size to the allocated size, and AVPacket.data to the memory you've allocated.
Note that all FFmpeg decoding routines will simply fail if you provide your own buffer and it's too small.
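For the first option, here is a rough sketch (my code, not the answer's) of providing your own buffer; whether FF_MIN_BUFFER_SIZE is actually large enough for your codec and frame size is something you have to verify:
AVPacket pkt;
av_init_packet(&pkt);

int bufSize = FF_MIN_BUFFER_SIZE;
uint8_t *buf = (uint8_t *)av_malloc(bufSize);
if (!buf) {
    /* handle allocation failure */
}

pkt.data = buf;     /* tell FFmpeg to use your buffer... */
pkt.size = bufSize; /* ...and how big it is */

/* ... pass &pkt to the encoder ... */

av_free(buf); /* you allocated it, so you free it */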
The other possibility is to let FFmpeg calculate the optimal size for you.
Then do something like:
AVPacket pkt;
av_init_packet(&pkt);
pkt.size = 0;
pkt.data = NULL; // <-- the critical part is there
int got_output = 0;
ret = avcodec_encode_audio2(ctx, &pkt, NULL, &got_output);
and provide this AVPacket to the encoding codec. Memory will be allocated automatically.
You will have to call av_free_packet upon return from the encoder if got_output is set to 1.
FFmpeg will automatically free the AVPacket content in case of error.
AVPacket::size holds the size of the referenced data. Because it is a generic container for data, there can be no definite answer to the question
how much memory one packet can take
It can actually take from zero to a lot. Everything depends on data type, codec and other related parameters.
From FFmpeg examples:
static void audio_encode_example(const char *filename)
{
// ...
AVPacket pkt;
// ...
ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
// ...
if (got_output) {
fwrite(pkt.data, 1, pkt.size, f); // <<--- AVPacket.size
av_free_packet(&pkt);
}
// ...
}
I have an FFMPEG AVFrame in YUVJ420P and I want to convert it to a CVPixelBufferRef with CVPixelBufferCreateWithBytes. The reason I want to do this is to use AVFoundation to show/encode the frames.
I selected kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and tried converting it, since the AVFrame has the data in three planes:
Y480 Cb240 Cr240. According to what I've researched, this matches the selected kCVPixelFormatType. Being bi-planar, I need to convert it into a buffer that contains Y480 and CbCr480 interleaved.
I tried to create a buffer with 2 planes:
frame->data[0] on the first plane,
frame->data[1] and frame->data[2] interleaved on the second plane.
However, I'm getting return error -6661 (invalid argument) from CVPixelBufferCreateWithBytes:
"Invalid function parameter. For example, out of range or the wrong type."
I don't have expertise on image processing at all, so any pointers to documentation that can get me started in the right approach to this problem are appreciated. My C skills aren't top of the line either so maybe I'm making a basic mistake here.
uint8_t **buffer = malloc(2*sizeof(int *));
buffer[0] = frame->data[0];
buffer[1] = malloc(frame->linesize[0]*sizeof(int));
for(int i = 0; i<frame->linesize[0]; i++){
if(i%2){
buffer[1][i]=frame->data[1][i/2];
}else{
buffer[1][i]=frame->data[2][i/2];
}
}
int ret = CVPixelBufferCreateWithBytes(NULL, frame->width, frame->height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer, frame->linesize[0], NULL, 0, NULL, cvPixelBufferSample)
The frame is the AVFrame with the raw data from FFmpeg decoding.
My C skills aren't top of the line either so maybe I'm making a basic mistake here.
You're making several:
You should be using CVPixelBufferCreateWithPlanarBytes(). I do not know if CVPixelBufferCreateWithBytes() can be used to create a planar video frame; if so, it will require a pointer to a "plane descriptor block" (I can't seem to find the struct in the docs).
frame->linesize[0] is the bytes per row, not the size of the whole image. The docs are unclear, but the usage is fairly unambiguous.
frame->linesize[0] refers to the Y plane; you care about the UV planes.
Where is sizeof(int) from?
You're passing in cvPixelBufferSample; you might mean &cvPixelBufferSample.
You're not passing in a release callback. The documentation does not say that you can pass NULL.
Try something like this:
size_t srcPlaneSize = frame->linesize[1]*frame->height/2; // 4:2:0 chroma planes only have height/2 rows
size_t dstPlaneSize = srcPlaneSize *2;
uint8_t *dstPlane = malloc(dstPlaneSize);
void *planeBaseAddress[2] = { frame->data[0], dstPlane };
// This loop is very naive and assumes that the line sizes are the same.
// It also copies padding bytes.
assert(frame->linesize[1] == frame->linesize[2]);
for(size_t i = 0; i<srcPlaneSize; i++){
// These might be the wrong way round.
dstPlane[2*i ]=frame->data[2][i];
dstPlane[2*i+1]=frame->data[1][i];
}
// This assumes the width and height are even (it's 420 after all).
assert(!(frame->width%2) && !(frame->height%2));
size_t planeWidth[2] = {frame->width, frame->width/2};
size_t planeHeight[2] = {frame->height, frame->height/2};
// I'm not sure where you'd get this.
size_t planeBytesPerRow[2] = {frame->linesize[0], frame->linesize[1]*2};
int ret = CVPixelBufferCreateWithPlanarBytes(
NULL,
frame->width,
frame->height,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
NULL,
0,
2,
planeBaseAddress,
planeWidth,
planeHeight,
planeBytesPerRow,
YOUR_RELEASE_CALLBACK,
YOUR_RELEASE_CALLBACK_CONTEXT,
NULL,
&cvPixelBufferSample);
Memory management is left as an exercise to the reader, but for test code you might get away with passing in NULL instead of a release callback.
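If you do want a proper callback, here is a minimal sketch (my illustration, not part of the original answer) matching the CVPixelBufferReleasePlanarBytesCallback signature; it frees the interleaved CbCr plane malloc'd above, which you would pass as the callback context (the Y plane still belongs to the AVFrame, so it is not freed here):
static void ReleasePlanarBytes(void *releaseRefCon,
                               const void *dataPtr,
                               size_t dataSize,
                               size_t numberOfPlanes,
                               const void *planeAddresses[])
{
    // releaseRefCon is dstPlane in this sketch
    free(releaseRefCon);
}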
How can I solve the following task: an app needs to
use dozens of DX9 textures (render them with D3D)
and
update some of them (in whole or in part).
I.e. sometimes (once per frame/second/minute) I need to write bytes (void *) in different formats (ARGB, BGRA, RGB, 888, 565) to some sub-rect of an existing texture.
In OpenGL the solution is very simple - glTexImage2D. But here the unfamiliar platform features have completely confused me.
I'm interested in a solution for both DX9 and DX11.
To update a texture, make sure the texture is created in the D3DPOOL_MANAGED memory pool:
D3DXCreateTexture( device, size.x, size.y, numMipMaps,usage, textureFormat, D3DPOOL_MANAGED, &texture );
Then call LockRect to update the data
RECT rect = {left, top, right, bottom}; // the region you want to lock
D3DLOCKED_RECT lockedRect = {0}; // "out" parameter from LockRect function below
texture->LockRect(0, &lockedRect, &rect, 0);
// copy the memory into lockedRect.pBits
// make sure you increment each row by "Pitch"
unsigned char* bits = ( unsigned char* )lockedRect.pBits;
for( int row = 0; row < numRows; row++ )
{
// copy one row of data into "bits", e.g. memcpy( bits, srcData, size )
...
// move to the next row
bits += lockedRect.Pitch;
}
// unlock when done
texture->UnlockRect(0);
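The question also asked about D3D11. That is not covered by the answer above, but as a rough sketch: with a texture created with D3D11_USAGE_DEFAULT you can update a sub-rectangle via ID3D11DeviceContext::UpdateSubresource (the names below - context, texture, srcData, srcRowPitch and the x/y/width/height values - are placeholders):
D3D11_BOX box;
box.left   = x;
box.right  = x + width;
box.top    = y;
box.bottom = y + height;
box.front  = 0;
box.back   = 1;

context->UpdateSubresource(
    texture,      // ID3D11Texture2D* created with D3D11_USAGE_DEFAULT
    0,            // mip level 0
    &box,         // destination sub-rectangle
    srcData,      // source pixels, already in the texture's DXGI_FORMAT
    srcRowPitch,  // bytes per row of the source data
    0);           // depth pitch, unused for 2D textures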
I've got a question about a pixel shader I am trying to implement. Here is what I currently do (this is just for debugging and trying to figure stuff out):
int3 loc;
loc.x = (int)(In.TextureUV.x * resolution_XY.x);
loc.y = (int)(In.TextureUV.x * resolution_XY.x);
loc.z = 0;
float4 r = g_txDiffuse.Load(loc);
return float4(r.x, r.y, r.z, 1);
The point is, this is always 0,0,0,1
The texture buffer is created:
D3D11_TEXTURE2D_DESC tDesc;
tDesc.Height = 480;
tDesc.Width = 640;
tDesc.Usage = D3D11_USAGE_DYNAMIC;
tDesc.MipLevels = 1;
tDesc.ArraySize = 1;
tDesc.SampleDesc.Count = 1;
tDesc.SampleDesc.Quality = 0;
tDesc.Format = DXGI_FORMAT_R8_UINT;
tDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
tDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
tDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&tDesc, NULL, &g_pCurrentImage));
I upload the texture (which should be a live display at the end) via:
D3D11_MAPPED_SUBRESOURCE resource;
pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
memcpy( resource.pData, g_Images.GetData(), g_Images.GetDataSize() );
pd3dImmediateContext->Unmap( g_pCurrentImage, 0 );
I've checked the resource.pData, the data in there is a valid 8bit monochrome image. I made sure the data coming from the camera is 8bit monochrome 640x480.
There are a few things I don't fully understand:
If I run the Map / memcpy / Unmap routine every frame, the driver will ultimately crash and the system becomes unresponsive. Is there a different way to update a complete texture every frame, and how should it be done?
The texture I uploaded is 8-bit, so why does Texture2D.Load() return a float4? Do I have to use a different method to access the texture data? I tried to Sample it, but that didn't work either. Would I have to use an int buffer or something instead?
Is there a way to debug the GPU memory, to check whether the memcpy worked in the first place?
The Map, memcpy, Unmap really ought not to crash unless you are trying to copy too much data into the texture. It would be interesting to know what GetDataSize() returns. Does it equal 307,200? If it's more than that, then there lies your problem.
Texture2D returns a float4 because that's what you've asked for: when you write float4 r = g_txDiffuse.Load( ... ), the 8 bits get extended to a normalised float as part of the load process. Are you sure, by the way, that your calculation of "loc" is correct? As you have it now, loc.x and loc.y will always be the same (loc.y should presumably use In.TextureUV.y and resolution_XY.y).
You can debug what's going on with DirectX using PIX. It's a great tool and I highly recommend you familiarise yourself with it.