Paint to a bitmap using DirectX under Windows Metro

I am trying to use DirectX to create and paint into a bitmap, and then later use that bitmap to paint to the screen. I have the following test code which I have pieced together from a number of sources. I seem to be able to create the bitmap without errors, and I try and fill it with red. However, when I use the bitmap to paint to the screen I just get a black rectangle. Can anyone tell me what I might be doing wrong? Note that this is running as a Windows 8 Metro app, if that makes a difference.
The 'device' variable is of class ID3D11Device, the 'context' variable is ID3D11DeviceContext, the 'd2dContext' variable is ID2D1DeviceContext, the 'dxgiDevice' variable is IDXGIDevice, and the 'd2dDevice' is ID2D1Device.
void PaintTestBitmap(GRectF& rect)
{
    HRESULT hr;
    UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
    D3D_FEATURE_LEVEL featureLevels[] =
    {
        D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,
        D3D_FEATURE_LEVEL_9_2,
        D3D_FEATURE_LEVEL_9_1
    };
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    ComPtr<ID2D1DeviceContext> d2dContext;
    hr = D3D11CreateDevice( NULL,
                            D3D_DRIVER_TYPE_HARDWARE,
                            NULL,
                            creationFlags,
                            featureLevels,
                            ARRAYSIZE(featureLevels),
                            D3D11_SDK_VERSION,
                            &device,
                            NULL,
                            &context);
    if (SUCCEEDED(hr))
    {
        ComPtr<IDXGIDevice> dxgiDevice;
        device.As(&dxgiDevice);
        // create D2D device context
        ComPtr<ID2D1Device> d2dDevice;
        hr = D2D1CreateDevice(dxgiDevice.Get(), NULL, &d2dDevice);
        if (SUCCEEDED(hr))
        {
            hr = d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);
            if (SUCCEEDED(hr))
            {
                ID2D1Bitmap1 *pBitmap;
                D2D1_SIZE_U size = { rect.m_width, rect.m_height };
                D2D1_BITMAP_PROPERTIES1 properties = {{ DXGI_FORMAT_R8G8B8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED }, 96, 96, D2D1_BITMAP_OPTIONS_TARGET, 0 };
                hr = d2dContext->CreateBitmap(size, (const void *) 0, 0, &properties, &pBitmap);
                if (SUCCEEDED(hr))
                {
                    d2dContext->SetTarget(pBitmap);
                    d2dContext->BeginDraw();
                    d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Red)); // Clear the bitmap to Red
                    d2dContext->EndDraw();
                    // If I now use pBitmap to paint to the screen, I get a black rectangle
                    // rather than the red one I was expecting
                }
            }
        }
    }
}
Note that I am fairly new to both DirectX and Metro style apps.
Thanks

What you're probably forgetting is that you don't draw your bitmap onto your context.
What you can try is:
hr = d2dContext->CreateBitmap(size, (const void *) 0, 0, &properties, &pBitmap);
if (SUCCEEDED(hr))
{
    ID2D1Image *pImage = nullptr;
    d2dContext->GetTarget(&pImage);
    d2dContext->SetTarget(pBitmap);
    d2dContext->BeginDraw();
    d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Red)); // Clear the bitmap to Red
    d2dContext->SetTarget(pImage);
    d2dContext->DrawBitmap(pBitmap);
    d2dContext->EndDraw();
}
What you do is store your render target, then draw into your bitmap, and at the end set your context back to the original render target and let it draw your bitmap onto it.
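As a slightly tidied-up sketch of the same idea (assuming, as above, that the context already had a screen-backed target set before you switched to the bitmap; ComPtr is used so the saved target is released automatically):
ComPtr<ID2D1Image> oldTarget;
d2dContext->GetTarget(&oldTarget);          // remember whatever target is currently set
d2dContext->SetTarget(pBitmap);             // render into the off-screen bitmap
d2dContext->BeginDraw();
d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Red));
hr = d2dContext->EndDraw();
d2dContext->SetTarget(oldTarget.Get());     // restore the original (screen) target
d2dContext->BeginDraw();
d2dContext->DrawBitmap(pBitmap);            // composite the red bitmap onto it
hr = d2dContext->EndDraw();
// pBitmap is a raw pointer in the code above, so remember to Release() it when done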

Related

How to copy a selection from Direct2DCanvas to clipboard?

I made a small program to draw geometry figures on a D2DBox (with Direct2DCanvas and RenderTarget) and I need to be able to copy a rectangle from it to the clipboard. Tried this, but I'm stuck with the source handle parameter:
bm := TBitmap.Create;
try
  bm.SetSize(maxx-minx, maxy-miny);
  BitBlt(bm.Canvas.Handle, minx, miny, maxx, maxy, ??? , 0, 0, SRCCOPY);
  Clipboard.Assign(bm);
finally
  bm.Free;
end;
Any idea where to get a handle from? Or is the whole thing done a different way? Thanks!
BitBlt() requires a GDI HDC to copy from, but TDirect2DCanvas does not have an HDC of its own, and it cannot directly render to an off-screen HDC/TCanvas, such as TBitmap.Canvas, per its documentation:
TDirect2DCanvas will only work for on-screen device contexts. You cannot use the TDirect2DCanvas to draw on a printer device context, for example.
And you can't associate a custom RenderTarget (such as one created with ID2D1Factory.CreateDCRenderTarget() and ID2D1DCRenderTarget.BindDC()) since the TDirect2DCanvas.RenderTarget property is read-only.
So, you will likely have to go a long way to get what you want. Based on code I found in Direct2d Desktop printing C++, which demonstrates copying an arbitrary ID2D1RenderTarget to an arbitrary HDC, you can try the following:
1. Create a Direct2D ID2D1Bitmap that is bound to the canvas's current RenderTarget, using one of the target's CreateBitmap() methods.
2. Copy pixels from the canvas into the bitmap using the bitmap's CopyFromRenderTarget() method.
3. Create an IWICBitmap (you can probably use the VCL's TWICImage for this) and render the Direct2D bitmap to it, using one of the ID2D1Factory.CreateWicBitmapRenderTarget() methods together with one of the ID2D1RenderTarget.DrawBitmap() methods.
4. Create a GDI DIB bitmap and copy the IWICBitmap's pixels into it using the WIC bitmap's CopyPixels() method.
5. Finally, use the DIB however you need, such as selecting/copying it to your final HDC, or simply place its contents directly on the clipboard using the CF_DIB format.
Here is the code (sorry, it is in C++, I'm not going to translate it to Delphi):
void Direct2DRender::RenderToDC(HDC hDC, UINT uiWidth, UINT uiHeight)
{
    HRESULT hr = S_OK;
    IWICImagingFactory *pImageFactory = WICImagingFactory::GetInstance().GetFactory();
    CComPtr<IWICBitmap> wicBitmap;
    hr = pImageFactory->CreateBitmap( uiWidth, uiHeight, GUID_WICPixelFormat32bppBGR, WICBitmapCacheOnLoad, &wicBitmap);
    D2D1_SIZE_U bitmapPixelSize = D2D1::SizeU( uiWidth, uiHeight);
    float dpiX, dpiY;
    m_pRenderTarget->GetDpi( &dpiX, &dpiY);
    CComPtr<ID2D1Bitmap> d2dBitmap;
    hr = m_pRenderTarget->CreateBitmap( bitmapPixelSize, D2D1::BitmapProperties(
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE),
        dpiX, dpiY), &d2dBitmap );
    D2D1_POINT_2U dest = D2D1::Point2U(0,0);
    D2D1_RECT_U src = D2D1::RectU(0, 0, uiWidth, uiHeight);
    hr = d2dBitmap->CopyFromRenderTarget(&dest, m_pRenderTarget, &src);
    D2D1_RENDER_TARGET_PROPERTIES rtProps = D2D1::RenderTargetProperties();
    rtProps.pixelFormat = D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE);
    rtProps.type = D2D1_RENDER_TARGET_TYPE_DEFAULT;
    rtProps.usage = D2D1_RENDER_TARGET_USAGE_NONE;
    CComPtr<ID2D1RenderTarget> wicRenderTarget;
    hr = m_pDirect2dFactory->CreateWicBitmapRenderTarget( wicBitmap, rtProps, &wicRenderTarget);
    wicRenderTarget->BeginDraw();
    wicRenderTarget->DrawBitmap(d2dBitmap);
    hr = wicRenderTarget->EndDraw();
    // Render the image to a GDI device context
    HBITMAP hDIBBitmap = NULL;
    try
    {
        // Get a DC for the full screen
        HDC hdcScreen = GetDC(NULL);
        if (!hdcScreen)
            throw 1;
        BITMAPINFO bminfo;
        ZeroMemory(&bminfo, sizeof(bminfo));
        bminfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bminfo.bmiHeader.biWidth = uiWidth;
        bminfo.bmiHeader.biHeight = -(LONG)uiHeight;
        bminfo.bmiHeader.biPlanes = 1;
        bminfo.bmiHeader.biBitCount = 32;
        bminfo.bmiHeader.biCompression = BI_RGB;
        void* pvImageBits = nullptr; // Freed with DeleteObject(hDIBBitmap)
        hDIBBitmap = CreateDIBSection(hdcScreen, &bminfo, DIB_RGB_COLORS, &pvImageBits, NULL, 0);
        if (!hDIBBitmap)
            throw 2;
        ReleaseDC(NULL, hdcScreen);
        // Calculate the number of bytes in 1 scanline
        UINT nStride = DIB_WIDTHBYTES(uiWidth * 32);
        // Calculate the total size of the image
        UINT nImage = nStride * uiHeight;
        // Copy the pixels to the DIB section
        hr = wicBitmap->CopyPixels(nullptr, nStride, nImage, reinterpret_cast<BYTE*>(pvImageBits));
        // Copy the bitmap to the target device context
        ::SetDIBitsToDevice(hDC, 0, 0, uiWidth, uiHeight, 0, 0, 0, uiHeight, pvImageBits, &bminfo, DIB_RGB_COLORS);
        DeleteObject(hDIBBitmap);
    }
    catch (...)
    {
        if (hDIBBitmap)
            DeleteObject(hDIBBitmap);
        // Rethrow the exception, so the client code can handle it
        throw;
    }
}
In that same discussion, another alternative is described:
1. Create a DIB section and select it into a DC.
2. Create a DC render target.
3. Bind the render target to the DC that corresponds to the DIB section.
4. Draw using Direct2D. After calling EndDraw, the DIB contains what was rendered.
5. The final step is to draw the DIB where you need it. (A minimal C++ sketch of steps 2-4 is shown below.)
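For illustration, a rough C++ sketch of steps 2-4 (not Delphi, and not from the linked discussion); hdcMem stands for a hypothetical memory DC that already has the 32-bpp DIB section selected into it, while m_pDirect2dFactory, uiWidth and uiHeight are reused from the code above:
// DC render targets require a BGRA pixel format; ALPHA_MODE_IGNORE keeps GDI happy
HRESULT hr;
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
CComPtr<ID2D1DCRenderTarget> dcRenderTarget;
hr = m_pDirect2dFactory->CreateDCRenderTarget(&props, &dcRenderTarget);
// Bind the render target to the memory DC that holds the DIB section
RECT rcBind = { 0, 0, (LONG)uiWidth, (LONG)uiHeight };
hr = dcRenderTarget->BindDC(hdcMem, &rcBind);
dcRenderTarget->BeginDraw();
// ... issue the same Direct2D drawing calls here ...
hr = dcRenderTarget->EndDraw(); // after EndDraw, the DIB selected into hdcMem holds the rendered pixels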
So, try moving your drawing code to its own function that takes an ID2D1RenderTarget as input and draws on it as needed. Then, you can create an HDC-based RenderTarget when you want to place a bitmap on the clipboard, and use TDirect2DCanvas.RenderTarget when you want to draw on your D2DBox.

Why do sprites render over objects?

I want to render a cube on top of a picture, like in this tutorial. The problem is that only the picture renders and the cube doesn't. Can you help me? Thank you.
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
//
// Clear the back buffer
//
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
//
// Update variables
//
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, nullptr, &cb, 0, 0 );
//
// Renders a triangle
//
g_pImmediateContext->VSSetShader( g_pVertexShader, nullptr, 0 );
g_pImmediateContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );
g_pImmediateContext->PSSetShader( g_pPixelShader, nullptr, 0 );
g_pImmediateContext->DrawIndexed( 36, 0, 0 ); // 36 vertices needed for 12 triangles in a triangle list
//
// Present our back buffer to our front buffer
//
m_spriteBatch->End();
g_pSwapChain->Present( 0, 0 );
SpriteBatch batches up draws for performance, so it's likely being drawn after the cube draw. If you want to make sure the sprite background draws first, then you need to call End before you submit your cube. You also need to call Begin after you set up the render target:
// Clear
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
// Draw background image
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
// Draw objects
context->OMSetBlendState(…);
context->OMSetDepthStencilState(…);
context->IASetInputLayout(…);
context->IASetVertexBuffers(…);
context->IASetIndexBuffer(…);
context->IASetPrimitiveTopology(…);
You can omit the ClearRenderTargetView if the m_background texture covers the whole screen.
For more on how SpriteBatch draw order and batching works, see the wiki.
Based on this answer by @ChuckWalbourn I fixed the problem.
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH |
D3D11_CLEAR_STENCIL, 1.0f, 0);
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
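// NOTE: CommonStates is typically created once (e.g. at device creation) and reused;
// constructing it every frame, as below, works but re-creates its cached state objects each frame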
states = std::make_unique<CommonStates>(g_pd3dDevice);
g_pImmediateContext->OMSetBlendState(states->Opaque(), Colors::Black, 0xFFFFFFFF);
g_pImmediateContext->OMSetDepthStencilState(states->DepthDefault(), 0);
// Set the input layout
g_pImmediateContext->IASetInputLayout(g_pVertexLayout);
UINT stride = sizeof(SimpleVertex);
UINT offset = 0;
g_pImmediateContext->IASetVertexBuffers(0, 1, &g_pVertexBuffer, &stride, &offset);
// Set index buffer
g_pImmediateContext->IASetIndexBuffer(g_pIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
// Set primitive topology
g_pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// Draw objects

How to flip FaceOSC in Processing 3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is already done, but it is hard to play the game I made when the view is not mirrored. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure whether FaceOSC sends the video, because I've tried flipping it like a video and it doesn't work. I also tried flipping it like an image, and flipping the canvas, but it still doesn't work. Or maybe I did it wrong. Please help!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;
OscP5 oscP5;
SyphonClient client;
PGraphics canvas;
boolean found;
PVector[] meshPoints;
void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);
  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugin for mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}
void draw() {
  background(0);
  stroke(255);
  // flip like a vdo here, does not work
  /* pushMatrix();
  translate(canvas.width, 0);
  scale(-1,1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix(); */
  image(canvas, 0, 0, width, height);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is meant to contain, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);
  //flip horizontal
  pushMatrix();
  translate(width, 0);
  scale(-1,1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin (0,0) is the top-left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring would not leave the content in the same place.
For more details, check out the Processing Transform2D tutorial.
Here's a basic example:
boolean mirror;
void setup(){
  size(640,480);
}
void draw(){
  if(mirror){
    pushMatrix();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    popMatrix();
  }else{
    drawStuff();
  }
}
//this could be the face preview
void drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}
void keyPressed(){
  if(key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case it would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror the coordinate, you simply need to subtract the current value from the largest value:
void setup(){
  size(640,480);
  background(255);
}
void draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract current value(mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}
You can run these examples right here:
var mirror;
function setup(){
  createCanvas(640,225);
  fill(255);
}
function draw(){
  if(mirror){
    push();
    //translate, otherwise mirrored content will be off screen (pivot is at top left corner not centre)
    translate(width,0);
    //scale x by -1 to mirror
    scale(-1,1);
    //draw mirrored content
    drawStuff();
    pop();
  }else{
    drawStuff();
  }
}
//this could be the face preview
function drawStuff(){
  background(0);
  triangle(0,0,width,0,0,height);
  text("press m to toggle mirroring",450,470);
}
function keyPressed(){
  if(key == 'M') mirror = !mirror;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
function setup(){
  createCanvas(640,225);
  background(0);
  fill(0);
  stroke(255);
}
function draw(){
  ellipse(mouseX,mouseY,30,30);
  //subtract current value(mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width-mouseX,mouseY,30,30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>

DirectX: How to make a 2D image constantly pan / scroll in place

I am trying to find an efficient way to pan a 2D image in place using DirectX 9
I've attached a picture below that hopefully explains what I want to do. Basically, I want to scroll the tu and tv coordinates of all the quad's vertices across the texture to produce a "scrolling in place" effect for a 2D texture.
The first image below represents my loaded texture.
The second image is the texture with the tu,tv coordinates of the four vertices in each corner showing the standard rendered image.
The third image illustrates what I want to happen: I want to move the vertices so that the rendered box straddles the end of the image and wraps back around, so that the texture is rendered as shown, with the two halves of the cloud separated.
The fourth image shows my temporary (wasteful) solution; I simply doubled the image and pan across until I reach the far right edge, at which point I reset the vertices' tu and tv so that the box being rendered is back on the far right.
Is there a legitimate way to do this without breaking everything into two separate quads?
I've added details of my set up and my render code below, if that helps clarify a path to a solution with my current design.
I have a function that sets up DirectX for 2D render as follows. I've added wrap properties to texture stage 0 as recommended:
VOID SetupDirectXFor2DRender()
{
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_POINT );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_POINT );
    // Set for wrapping textures to enable panning sprite render
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP );
    pd3dDevice->SetSamplerState( 0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP );
    pd3dDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );
    pd3dDevice->SetRenderState( D3DRS_ALPHAREF, 0 );
    pd3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, true );
    pd3dDevice->SetRenderState( D3DRS_ALPHATESTENABLE, false );
    pd3dDevice->SetRenderState( D3DRS_SRCBLEND , D3DBLEND_SRCALPHA );
    pd3dDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA) ;
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP, D3DTOP_MODULATE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE );
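    // NOTE: the three calls below override the MODULATE colour setup above with SELECTARG1,
    // so the diffuse colour no longer affects the RGB output; keep whichever block is intended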
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    return;
}
On each frame, I render things as follows:
VOID RenderAllEntities()
{
    HRESULT hResult;
    // Void pointer for DirectX buffer locking
    VOID* pVoid;
    hResult = pd3dDevice->Clear( 0,
                                 NULL,
                                 D3DCLEAR_TARGET,
                                 0x0,
                                 1.0f,
                                 0 );
    hResult = pd3dDevice->BeginScene();
    // Do rendering on the back buffer here
    hResult = pd3dDevice->SetFVF( CUSTOMFVF );
    hResult = pd3dDevice->SetStreamSource( 0, pVertexBuffer, 0, sizeof(CUSTOM_VERTEX) );
    for ( std::vector<RenderContext>::iterator renderContextIndex = queuedContexts.begin(); renderContextIndex != queuedContexts.end(); ++renderContextIndex )
    {
        // Render each sprite
        for ( UINT uiIndex = 0; uiIndex < (*renderContextIndex).uiNumSprites; ++uiIndex )
        {
            // Lock the vertex buffer into memory
            hResult = pVertexBuffer->Lock( 0, 0, &pVoid, 0 );
            // Copy our vertex buffer to memory
            ::memcpy( pVoid, &renderContextIndex->vertexLists[uiIndex], sizeof(vertexList) );
            // Unlock buffer
            hResult = pVertexBuffer->Unlock();
            hResult = pd3dDevice->SetTexture( 0, (*renderContextIndex).textures[uiIndex]->GetTexture() );
            hResult = pd3dDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 6 );
        }
    }
    // Complete and present the rendered scene
    hResult = pd3dDevice->EndScene();
    hResult = pd3dDevice->Present( NULL, NULL, NULL, NULL );
    return;
}
To test SetTransform, I tried adding the following (sloppy but temporary) code block inside the render code before the call to DrawPrimitive:
{
    static FLOAT di = 0.0f;
    static FLOAT dy = 0.0f;
    di += 0.03f;
    dy += 0.03f;
    // Build and set translation matrix
    D3DXMATRIX ret;
    D3DXMatrixIdentity(&ret);
    ret(3, 0) = di;
    ret(3, 1) = dy;
    //ret(3, 2) = dz;
    hResult = pd3dDevice->SetTransform( D3DTS_TEXTURE0, &ret );
}
This does not make any of my rendered sprites pan about.
I've been working through DirectX tutorials and reading the MS documentation to catch up on things but there are definitely holes in my knowledge, so I hope I'm not doing anything too completely brain-dead.
Any help super appreciated.
Thanks!
This should be quite easy to do with one quad.
Assuming that you're using DX9 with the fixed-function pipeline, you can translate your texture with IDirect3DDevice9::SetTransform (doc), using the proper D3DTRANSFORMSTATETYPE (doc) for the texture stage and a 2D translation matrix. You must ensure that your sampler states D3DSAMP_ADDRESSU and D3DSAMP_ADDRESSV (doc) are set to D3DTADDRESS_WRAP (doc). This tiles the texture virtually, so that negative uv-values, or values greater than 1, are mapped onto an infinite repetition of your texture.
If you're using shaders or another version of DirectX, you can translate your texture coordinates yourself in the shader, or manipulate the uv-values of your vertices.
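One extra detail worth checking with the fixed-function route (and a likely reason the SetTransform test in the question had no visible effect): texture-coordinate transforms are disabled by default and have to be switched on with D3DTSS_TEXTURETRANSFORMFLAGS, and for 2D texture coordinates the translation belongs in the third row of the matrix rather than the fourth. A minimal sketch, where fScrollU and fScrollV are hypothetical per-frame accumulators:
// Enable the 2D texture-coordinate transform on stage 0
pd3dDevice->SetTextureStageState( 0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2 );
// For 2D texcoords the translation lives in _31/_32, not in the usual _41/_42
D3DXMATRIX texMat;
D3DXMatrixIdentity( &texMat );
texMat._31 = fScrollU;
texMat._32 = fScrollV;
pd3dDevice->SetTransform( D3DTS_TEXTURE0, &texMat );
// With D3DTADDRESS_WRAP already set in SetupDirectXFor2DRender(), the quad's texture now pans in place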

How to get device context in Direct3D10

Could anyone please help me port this DX9 code to DX10 (not 11!)? It is simple drawing to a texture.
// create the texture
m_pDevice->CreateTexture(m_width, m_height, 1, 0, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &m_pTexture, NULL);
// get its surface (target surface)
m_pTexture->GetSurfaceLevel(0, &m_pSurface);
// create the second surface (source surface)
m_pDevice->CreateOffscreenPlainSurface(m_width, m_height, D3DFMT_X8R8G8B8, D3DPOOL_SYSTEMMEM, &m_pSurfaceMem, NULL);
// drawing to Direct3D, this is called in loop
...
m_pSurfaceMem->GetDC(&hdc);
HRGN hrgn = CreateRectRgnIndirect(lprc);
SelectClipRgn(hdc, hrgn);
RECTL clipRect = { 0, 0, m_width, m_height };
// This draws to the source surface
m_pViewObject->Draw(DVASPECT_TRANSPARENT, 1, NULL, NULL, NULL, hdc, &clipRect, &clipRect, NULL, 0);
m_pSurfaceMem->ReleaseDC(hdc);
DeleteObject(hrgn);
POINT pt = {lprc->left, lprc->top};
// Now update the target surface
m_pDevice->UpdateSurface(m_pSurfaceMem, lprc, m_pSurface, &pt);
Error handling and the like can be omitted for simplicity.
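For reference, one possible Direct3D 10 route for this kind of GDI drawing is DXGI 1.1's IDXGISurface1::GetDC, which only works on a texture created as GDI-compatible; the DC then comes from the texture's surface rather than from the device, and UpdateSurface is not needed because GDI draws straight into the texture. The sketch below is an untested outline: treat the flag and format requirements (B8G8R8A8 format, D3D10_RESOURCE_MISC_GDI_COMPATIBLE, no multisampling) as assumptions to verify, and m_pDevice here stands for the ID3D10Device.
// Create the texture so that its surface can hand out a GDI DC
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = m_width;
desc.Height = m_height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D10_RESOURCE_MISC_GDI_COMPATIBLE; // assumption: flag name as in the D3D10 headers
ID3D10Texture2D* pTexture = NULL;
m_pDevice->CreateTexture2D(&desc, NULL, &pTexture);
// Per frame: fetch the DC from the DXGI surface, draw with GDI/OLE, then release it again
IDXGISurface1* pDxgiSurface = NULL;
pTexture->QueryInterface(__uuidof(IDXGISurface1), (void**)&pDxgiSurface);
HDC hdc = NULL;
pDxgiSurface->GetDC(FALSE, &hdc);
// ... m_pViewObject->Draw(..., hdc, &clipRect, &clipRect, NULL, 0); as in the DX9 version ...
pDxgiSurface->ReleaseDC(NULL);
pDxgiSurface->Release();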
