XNA 3.1 Preserving the Depth Buffer before it clears

I'm trying to get around XNA 3.1's automatic clearing of the depth buffer when switching render targets by copying the IDirect3DSurface9 from the depth buffer before the render targets are switched, then restoring the depth buffer at a later point.
In the code, the getDepthBuffer method is a pointer to the IDirect3DDevice9 GetDepthStencilSurface function. The pointer to that method seems to be correct, but when I try to get the IDirect3DSurface9 pointer the call fails with an exception (0x8876086C - D3DERR_INVALIDCALL), and the surfacePtr pointer ends up pointing to 0x00000000.
Any idea why it is not working, and any ideas on how to fix it?
Here's the code:
public static unsafe Texture2D GetDepthStencilBuffer(GraphicsDevice g)
{
    if (g.DepthStencilBuffer.Format != DepthFormat.Depth24Stencil8)
    {
        return null;
    }
    Texture2D t2d = new Texture2D(g, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, 1, TextureUsage.None, SurfaceFormat.Color);
    FieldInfo f = typeof(GraphicsDevice).GetField("pComPtr", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
    object o = f.GetValue(g);
    void* devicePtr = Pointer.Unbox(f.GetValue(g));
    void* getDepthPtr = AccessVTable(devicePtr, 160);
    void* surfacePtr;
    var getDepthBuffer = (GetDepthStencilBufferDelegate)Marshal.GetDelegateForFunctionPointer(new IntPtr(getDepthPtr), typeof(GetDepthStencilBufferDelegate));
    var rv = getDepthBuffer(&surfacePtr);
    SetData(t2d, 0, surfacePtr, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, (uint)(g.DepthStencilBuffer.Width / 4), D3DFORMAT.D24S8);
    Marshal.Release(new IntPtr(devicePtr));
    Marshal.Release(new IntPtr(getDepthPtr));
    Marshal.Release(new IntPtr(surfacePtr));
    return t2d;
}
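For reference, vtable offset 160 corresponds to slot 40 with 4-byte pointers, which on IDirect3DDevice9 is GetDepthStencilSurface. A minimal native C++ sketch of what that vtable call resolves to (device standing in for the unboxed pComPtr; illustrative only, not the poster's code):

// Native equivalent of the vtable call above (sketch only).
IDirect3DSurface9* surface = nullptr;
HRESULT hr = device->GetDepthStencilSurface(&surface);   // slot 40 = offset 160
if (SUCCEEDED(hr) && surface != nullptr)
{
    // ... copy or cache the surface contents here ...
    surface->Release();   // the getter AddRefs the surface, so release it
}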

XNA 3.1 will not clear your depth-stencil buffer upon changing render targets; it will, however, resolve it (so that it's unusable for depth tests) if you are not careful with your render target changes.
For example:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(null)
SetRenderTarget(someOtherRenderTarget)
Will cause the depth-stencil buffer to be resolved, but the following will not:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(someOtherRenderTarget)
I'm unsure why this happens with XNA 3.1 (and earlier versions), but ever since figuring that out I've been able to keep the same depth-stencil buffer alive through many render target changes, and even through Clear operations, as long as the clear specifies ClearOptions.Target only.

Related

What is the syntax for constructing an array of arrays using placement new?

I am trying to construct an array of arrays using placement new.
Searching the internet, I only managed to find how to construct a single array with placement new. But what if I want an array of arrays instead?
I am not sure how to construct the inner arrays.
The memory manager constructor already allocates a buffer of a large size.
The memory manager destructor already deletes buf.
The Node operator new overload is already implemented.
This is my code:
map_size_x = terrain->get_map_width();
map_size_y = terrain->get_map_height();
grid_map = new Node *[map_size_x];
for (int i = 0; i < map_size_x; ++i)
{
    //grid_map[i] = new Node[map_size_y];
    grid_map[i] = new (buf + i * sizeof(Node)) Node;
}
buf is a char * allocated elsewhere (in another class, a memory manager) with a big block of memory that should be enough to fit sizeof(Node) * width * height.
There is a new operator overload implemented in the Node class, which is:
void *AStarPather::Node::operator new(std::size_t size, void* buffer)
{
    return buffer;
}
The result seems to be a failed allocation and the program gets stuck, but there is no crash.
I am using Visual Studio 2017.
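A note on the loop above: each grid_map[i] ends up pointing at a single constructed Node, and consecutive rows are only sizeof(Node) apart, so indexing grid_map[i][j] walks into the storage of the rows that follow. A minimal sketch of one way to place the inner arrays, assuming buf holds at least map_size_x * map_size_y Nodes (all names taken from the question):

// Sketch only: give each row its own offset and construct every
// Node of the row with placement new (uses Node::operator new(size, void*)).
for (int i = 0; i < map_size_x; ++i)
{
    char* row = buf + static_cast<std::size_t>(i) * map_size_y * sizeof(Node);
    grid_map[i] = reinterpret_cast<Node*>(row);
    for (int j = 0; j < map_size_y; ++j)
    {
        new (row + j * sizeof(Node)) Node;
    }
}

Since the objects are placement-constructed, each one needs an explicit grid_map[i][j].~Node() call before the memory manager reclaims buf.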

I can't call 'HRESULT Authorize()' through 'interface IiTunes'

Good day, everybody,
I work on Windows 7 (64-bit) and am trying to use the COM/OLE object "iTunesApp Class". This object is installed with the iTunes application.
My code is the following:
HRESULT hr;
CLSID clsid;
IiTunes *pIiTunes = nullptr;
// Apple.iTunes
CLSIDFromProgID(OLESTR("iTunes.Application.1"), &clsid);
hr = CoCreateInstance(clsid, nullptr, CLSCTX_LOCAL_SERVER, __uuidof(IiTunes), reinterpret_cast<LPVOID *>(&pIiTunes));
if (pIiTunes != nullptr)
{
    VARIANT data[16];
    OLECHAR ver[4096] = L"vaneustroev#gmail.com";
    pIiTunes->Authorize(1, data, (BSTR*)ver);
}
Then, on pIiTunes->Authorize(1, data, (BSTR*)ver);, I get an exception: '...exception at address 0x000007FEFF4E4FCA (oleaut32.dll) ... Access violation at address 0x000007FEFF4E4FCA...'
I don't know which parameters I must pass to pIiTunes->Authorize().
I don't know what values the parameters must be set to, but I know their types.
The first one is an int32, the second is a VARIANT reference, and the third is an array of BSTR. VARIANTs must be initialized and cleared after use; BSTRs must be allocated (a BSTR is not an OLECHAR *) and freed after use.
So, beyond the real semantics of the method, you can call it like this:
VARIANT data;
VariantInit(&data); // under the covers, this just zeroes the whole 16-byte structure
// ... do something with data here
BSTR ver = SysAllocString(L"vaneustroev#gmail.com"); // you should check for null -> out of memory
pIiTunes->Authorize(1, &data, &ver);
// always free BSTRs and clear VARIANTS
SysFreeString(ver);
VariantClear(&data);
If you use Visual Studio, there are cool Compiler COM Support Classes that ease VARIANT and BSTR programming considerably, and you could rewrite all this like this:
_variant_t data;
_bstr_t ver = L"vaneustroev#gmail.com";
BSTR b = ver;
pIiTunes->Authorize(1, &data, &b);
Visual Studio also provides a library called ATL that has other wrappers. Using them is similar:
CComVariant data;
CComBSTR ver = L"vaneustroev#gmail.com";
BSTR b = ver;
pIiTunes->Authorize(1, &data, &b);

F# Passing Nulls to Unmanaged Imported DLL

In F# I'm using an external DLL (in this case the SDL graphics library). I'm importing the method I require as follows...
[<DllImport("SDL2.dll", CallingConvention = CallingConvention.Cdecl)>]
extern int SDL_QueryTexture(nativeint texture, uint32& format, int& access, int& w, int& h)
This works fine and I can successfully call the method using the following...
let result = SDLDefs.SDL_QueryTexture(textTexture, &format, &access, &w, &h)
The problem is that the native SDL methods accept null values for many pointer arguments. This is required in some scenarios (which function like overloaded methods). I can't find any way to call these methods from F# passing nulls.
For example, this fails with "does not have null as proper value"
let result = SDLDefs.SDL_QueryTexture(textTexture, &format, null, &w, &h)
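For comparison, the underlying native declaration (from SDL2's SDL_render.h) takes plain pointers, so passing NULL for unwanted out-parameters is perfectly legal on the C side:

// SDL2's native signature: each out-pointer may be NULL.
int SDL_QueryTexture(SDL_Texture *texture, Uint32 *format, int *access, int *w, int *h);
// e.g. query only width and height:
SDL_QueryTexture(texture, NULL, NULL, &w, &h);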
I read about the attribute [AllowNullLiteral], but it seems I can only apply it to types I define, not to the pre-defined types used by my imported DLL.
Is there any way I can do this?
If you want to specify nulls, you need to use "raw pointers", which are represented by types nativeint and nativeptr<T>.
[<DllImport("SDL2.dll", CallingConvention = CallingConvention.Cdecl)>]
extern int SDL_QueryTexture(nativeint texture, uint32& format, nativeint access, int& w, int& h)
// Call without null
let access = 42
let pAccess = NativePtr.stackalloc<int> 1
NativePtr.write pAccess access
SDL_QueryTexture( textTexture, &format, NativePtr.toNativeInt pAccess, &w, &h )
let returnedAccess = NativePtr.read pAccess
// Call with null (0n is the zero nativeint, i.e. the null pointer)
SDL_QueryTexture( textTexture, &format, 0n, &w, &h )
NOTE: be careful with stackalloc. Allocating memory on the stack is quite handy, because you don't need to explicitly release it, but pointers to it will become invalid once you exit the current function. So you can only pass such pointers to an external function if you're sure that the function won't store the pointer and try to use it later.
If you need to pass a pointer to real heap memory that's not going anywhere, you'll need Marshal.AllocHGlobal. But don't forget to release! (or else :-)
let access = 42
let pAccess = Marshal.AllocHGlobal( sizeof<int> )
NativePtr.write (NativePtr.ofNativeInt pAccess) access
SDL_QueryTexture( textTexture, &format, pAccess, &w, &h )
Marshal.FreeHGlobal( pAccess )

ARM: May I do direct memory accesses to a range returned by ioremap_nocache() [without using ioread*()/iowrite*()]?

I'm using a TI AM3358 SoC, running an ARM Cortex-A8 processor, which runs Linux 3.12. I enabled a child device of the GPMC node in the device tree, which probes my driver, and in there I call ioremap_nocache() with the resource provided by the device tree node to get an uncached region.
The reason I'm requesting no cache is that it's not an actual memory device which is connected to the GPMC bus, which would of course benefit from the processor cache, but an FPGA device. So accesses need to always go through the actual wires.
When I do this:
u16 __iomem *addr = ioremap_nocache(...);
iowrite16(1, &addr[0]);
iowrite16(1, &addr[1]);
iowrite16(1, &addr[2]);
iowrite16(1, &addr[3]);
ioread16(&addr[0]);
ioread16(&addr[1]);
ioread16(&addr[2]);
ioread16(&addr[3]);
Using a logic analyzer, I see all 8 accesses done on the wires. However, when I do this:
u16 v;
addr[0] = 1;
addr[1] = 1;
addr[2] = 1;
addr[3] = 1;
v = addr[0];
v = addr[1];
v = addr[2];
v = addr[3];
I see the four write accesses, but not the subsequent read accesses.
Am I missing something? What would be the difference here between ioread16() and a direct memory access, knowing that the whole GPMC range is supposed to be addressable just like memory?
Could this behaviour be the result of some compiler optimization that can be avoided? I haven't looked at the generated instructions yet, but maybe someone experienced has something interesting to say in the meantime.
On ARM, ioread*() and iowrite*() perform a volatile access combined with a data memory barrier (after the access for reads, before it for writes), e.g.:
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; })
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
#define writeb(v,c) ({ __iowmb(); writeb_relaxed(v,c); })
#define writew(v,c) ({ __iowmb(); writew_relaxed(v,c); })
#define writel(v,c) ({ __iowmb(); writel_relaxed(v,c); })
__raw_read*() and __raw_write*() (where * is b, w, or l) may be used for direct reads/writes. They do the exact single instruction needed for those operations, casting the address pointer to a volatile pointer.
__raw_writew() example (store register, halfword):
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
{
    asm volatile("strh %1, %0"
                 : "+Q" (*(volatile u16 __force *)addr)
                 : "r" (val));
}
Beware, though, that those two functions do not insert any barrier, so you should call rmb() (read memory barrier) and wmb() (write memory barrier) anywhere you want your memory accesses to be ordered.
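To connect this to the question: a plain addr[i] dereference is not a volatile access, so the compiler is free to drop reads whose results are never used, which would explain the missing read cycles on the analyzer. A minimal sketch of the direct-access style with explicit barriers (same addr as in the question):

u16 v;

__raw_writew(1, &addr[0]);        /* single strh, no barrier */
wmb();                            /* order the write before what follows */
v = __raw_readw(&addr[0]);        /* volatile load: must be emitted */
rmb();                            /* order this read against later accesses */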

DirectX Device::CreateTexture2D() crashes when called with all 3 params

I am trying to render a 2D texture from a cv::Mat to a texture in DX11, and then to a shader resource afterwards. The problem is, the program crashes on Device::CreateTexture2D() and I can't figure out why. I researched the whole day; I just don't see what's wrong here.
Furthermore, the problem does not seem to be the cv::Mat as the resource, because I have also tried this example: D3D11 CreateTexture2D in 32 bit format, with its chess example as the resource for the texture... and the function still crashes when called with the 3rd parameter.
I found others who had problems with that function; sometimes the problem was that SysMemPitch was not set for 2D textures, but unfortunately that's not the case here.
Error Output: First-chance exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Unhandled exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Here is the relevant code:
bool Texture::InitCameraStream(ID3D11Device* device, ARiftControl* arift_control)
{
    D3D11_TEXTURE2D_DESC td;
    ZeroMemory(&td, sizeof(td));
    td.ArraySize = 1;
    td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    td.Usage = D3D11_USAGE_DYNAMIC;
    td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    td.Height = arift_control->picture_1_.size().height;
    td.Width = arift_control->picture_1_.size().width;
    td.MipLevels = 1;
    td.MiscFlags = 0;
    td.SampleDesc.Count = 1;
    td.SampleDesc.Quality = 0;

    D3D11_SUBRESOURCE_DATA srInitData;
    srInitData.pSysMem = arift_control->picture_1_.ptr();
    srInitData.SysMemPitch = arift_control->picture_1_.size().width * 4;

    ID3D11Texture2D* tex = 0;
    if (device->CreateTexture2D(&td, &srInitData, NULL) == S_FALSE)
    {
        std::cerr << "Texture Description: OK " << std::endl << "Subresource: OK" << std::endl;
    }
    if (FAILED(device->CreateTexture2D(&td, &srInitData, &tex)))
    {
        std::cerr << "Error: Texture could not be created! " << std::endl;
        return false;
    }

    // Create the shader-resource view
    D3D10_SHADER_RESOURCE_VIEW_DESC srDesc;
    srDesc.Format = td.Format;
    srDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    srDesc.Texture2D.MostDetailedMip = 0;
    srDesc.Texture2D.MipLevels = 1;
    if (FAILED(device->CreateShaderResourceView(tex, NULL, &texture_)))
    {
        std::cerr << "Can't create Shader Resource View" << std::endl;
        return false;
    }
    return true;
}
CreateTexture2D returns S_FALSE when the first 2 parameters are valid and 0 is passed as the 3rd parameter. So in my case it also returns S_FALSE the first time, and the debug output appears. When calling CreateTexture2D with the 3rd parameter (the texture COM object), it crashes. I have absolutely no idea anymore.
Furthermore, I tried to set up debugging with DirectX and followed this tutorial: http://blog.rthand.com/post/2010/10/25/Capture-DirectX-1011-debug-output-to-Visual-Studio.aspx - but I can't see a "Debug" window in my project properties in Visual Studio 2013. So I still get the "igd10iumd32.pdb not loaded" window after the program crashes.
Edit: at least I could fix the issue with the additional D3D debug outputs for now. In my Visual Studio 2013 I had to set the following: Project Properties -> Debugging -> Debug Type -> Mixed to get the additional D3D logs :)
Can anyone help here? It's really frustrating, just getting stuck on that single function the whole day ..
Many thanks!
Max
Your input texture data passed in D3D11_SUBRESOURCE_DATA is not sufficiently sized. In your comment, you said that the input image data is 900x1600 and that the link is a JPEG. However, you are telling D3D that the data format is DXGI_FORMAT_B8G8R8A8_UNORM. JPEG is a compressed format, so the data stream will be smaller than it would be in BGRA format. When your driver (igd10iumd32.dll) attempts to read this input stream, it crashes because the buffer is not as large as you told D3D it was.
You can use D3DX11CreateTextureFromFile to load JPEG data. There are also some free image conversion libraries you can use to convert the JPEG into a D3D natively compatible format.
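Since OpenCV is already in play here, another option is to expand the decoded image to 4 bytes per pixel before the upload, so the buffer really is as large as the description claims. A sketch, assuming picture_1_ is a 3-channel BGR cv::Mat (as produced by cv::imread):

// Sketch: convert to BGRA so the buffer size matches DXGI_FORMAT_B8G8R8A8_UNORM,
// and take the row pitch from the Mat itself.
cv::Mat bgra;
cv::cvtColor(arift_control->picture_1_, bgra, cv::COLOR_BGR2BGRA);

D3D11_SUBRESOURCE_DATA srInitData = {};
srInitData.pSysMem = bgra.ptr();
srInitData.SysMemPitch = static_cast<UINT>(bgra.step);  // bytes per row

ID3D11Texture2D* tex = nullptr;
HRESULT hr = device->CreateTexture2D(&td, &srInitData, &tex);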
