I've written two programs that use Shared Resources, running on SlimDX & DirectX 10. One program displays the shared texture on a 3D mesh; the second loads an image as a texture. So far I have to pass the shared handle every time the texture is updated from a new image.
Now, is there a way I can initialize a fixed-size shared texture (Texture2D) so that every time I load a new image, all I need to do is load it as a texture and then copy it into the existing texture? That way the shared handle would never change, and I could save the overhead of passing it around. In DirectX 9 I know there is a function that does just that, StretchRect, but I can't find it or anything similar in DirectX 10.
The intermediate format can be anything, even a surface, as long as I can copy it into the shared texture.
Thanks
What about CopyResource() or CopySubresourceRegion()? I don't know SlimDX, but these should work fine in native D3D10.
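For instance, in native D3D10 the update could look roughly like the sketch below. This is only a sketch: `device`, `sharedTex` (the persistent shared texture whose handle never changes) and `loadedTex` (a texture created from the newly loaded image) are assumed to already exist, and SlimDX should expose equivalent methods on its Device class.

```cpp
// Hedged sketch for native D3D10 (C++). The textures and device are assumed
// to have been created elsewhere.
#include <d3d10.h>

// Option A: whole-resource copy. Requires both textures to have identical
// dimensions, format and mip count.
void UpdateSharedTextureFull(ID3D10Device* device,
                             ID3D10Texture2D* sharedTex,
                             ID3D10Texture2D* loadedTex)
{
    device->CopyResource(sharedTex, loadedTex);
}

// Option B: copy only a region, e.g. when the new image is smaller than the
// fixed-size shared texture.
void UpdateSharedTextureRegion(ID3D10Device* device,
                               ID3D10Texture2D* sharedTex,
                               ID3D10Texture2D* loadedTex,
                               UINT width, UINT height)
{
    D3D10_BOX srcBox = { 0, 0, 0, width, height, 1 }; // left, top, front, right, bottom, back
    device->CopySubresourceRegion(sharedTex, 0,       // dest subresource (mip 0)
                                  0, 0, 0,            // dest x, y, z
                                  loadedTex, 0,       // source subresource (mip 0)
                                  &srcBox);
}
```

Note that unlike D3D9's StretchRect, CopyResource does no stretching or format conversion, so the loaded texture must match the shared texture's format (and, for Option A, its size as well).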
I am new to OpenCV, and I did a project with it.
My project tracks an object with a stereo camera, so I find where the object is and I want to visualize it (in Blender, with OpenGL, or with something else). My situation is that I have 3D points in a YML file and I want to display them. I don't know what I should use; can anyone help me?
It's possible to do in Blender, but for your simple purpose OpenGL should be enough. To get started with modern OpenGL, check this list of contents: link
In OpenGL, before drawing anything you must "send" your data (vertices) to the GPU. One part of this process uses a Vertex Buffer Object (VBO); it's quite simple once you've programmed it yourself. When you create a VBO, you specify what kind of data you have: STATIC or DYNAMIC. Dynamic means that you will change the data over time (the position of each vertex might change), and that is what you want.
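As an illustration, a minimal C++ sketch of a dynamic VBO holding the tracked 3D points might look like this. It assumes a valid OpenGL context and a function loader (GLAD here, purely as an example) are already set up, and that `points` holds the XYZ values parsed from your YML file:

```cpp
// Minimal sketch (desktop OpenGL, C++): a dynamic VBO for the tracked points.
#include <glad/glad.h>
#include <vector>

GLuint CreatePointBuffer(const std::vector<float>& points)  // x0,y0,z0, x1,y1,z1, ...
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_DYNAMIC_DRAW tells the driver the data will be updated repeatedly.
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(float),
                 points.data(), GL_DYNAMIC_DRAW);
    return vbo;
}

void UpdatePointBuffer(GLuint vbo, const std::vector<float>& points)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Overwrite the existing storage each frame with the new tracked positions.
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    points.size() * sizeof(float), points.data());
}
```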
I am creating a photo slide show with complex transitions between images on iOS. Core Animation doesn't suit the purpose as the possible transitions are limited, so I resorted to OpenGL ES 2.0. The problem is that uploading images to the GPU and creating a texture is a time-consuming operation, roughly 200 ms even for a 960x640 image, which is not suitable for a real-time playback scenario. And it's not feasible to pre-create all the textures beforehand, as there could be hundreds of them. I wonder how Core Animation deals with this problem and stays smooth no matter how many CGImages you assign in animations (as long as the images are presented at different times and not together)?
Texture loading is time-consuming, and most applications that deal with a large number of textures load them during some initialization phase. That is the simplest approach but surely the most resource-consuming. You must understand what goes on behind the scenes: reading an image file, decompressing it, creating raw RGB(A) data on the CPU, allocating memory on the GPU, and sending the raw data to the GPU.
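For reference, the GPU half of that path boils down to a few GL calls. A minimal C++-style sketch follows; the CPU half (reading and decompressing the image into raw RGBA bytes) is platform-specific and assumed to have happened already, and the iOS GL ES 2 header path is an assumption:

```cpp
// Sketch: allocate a GPU texture and push decoded RGBA pixels to it.
#include <OpenGLES/ES2/gl.h>   // assumption: iOS GL ES 2 header; use your platform's GL header

GLuint UploadTexture(const unsigned char* rgbaPixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);                     // allocate a texture name
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // This is the expensive call: it sends width * height * 4 bytes to the GPU.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
    return tex;
}
```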
The best approach to dealing with a large number of textures is to load them in the background, preferably even before you need them. In your case, as already mentioned in the comment, you will need to create some smart cache of these textures. That alone will still not be enough, since the loading itself might make your thread unresponsive, so you will also need a background task to handle those images.
What I suggest is creating two additional threads. The first should load the image data on the CPU, while the second pushes the data to the GPU. The first thread is pretty straightforward, while the second needs a bit of additional GL code. Each thread needs its own OpenGL context to be able to communicate with the GPU, so once you create such a thread you also need to create an extra context. These contexts are not aware of each other's resources, which means a texture created in one context is unusable in the other. For that you need an extra parameter called a share group: first create the share group, then create both contexts with the same share group so the textures are accessible from either one. Do note that a context is preferably created on the thread it will be used on (it might be enough to simply set it as current, though).
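On iOS the share group is an EAGLSharegroup passed to the second EAGLContext via initWithAPI:sharegroup:. As a platform-neutral illustration of the same idea, here is a hedged C++ sketch using EGL instead of the exact iOS API; the display/config setup is assumed to exist already:

```cpp
// Sketch only: create a second context in the same share group so textures
// uploaded on the worker thread are visible to the rendering context.
#include <EGL/egl.h>

EGLContext CreateUploadContext(EGLDisplay display, EGLConfig config, EGLContext mainCtx)
{
    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    // Passing mainCtx as the share_context argument puts both contexts in the
    // same share group; on iOS the equivalent is EAGLContext's
    // initWithAPI:sharegroup:.
    return eglCreateContext(display, config, mainCtx, ctxAttribs);
}

// On the upload thread (after making the upload context current, e.g. with a
// small pbuffer surface):
//   GLuint tex = UploadTexture(pixels, width, height); // see the earlier sketch
//   glFlush(); // make sure the upload is submitted before the render thread samples it
```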
This question's kind of a 4-parter:
1. Is it true that all BitmapData is immediately cached to the GPU as soon as it's created (even if it's never applied to a Bitmap or added to the stage)?
2. Does this still happen if the GPU texture buffer is already full? Bonus points: if so, what swap method does the GPU prefer when selecting which textures to remove from memory?
3. If (1), then does setting the width/height of any BitmapData uncache it, and/or does replacing its pixels therefore upload the new pixels to the same memory address on the GPU? Bonus: what if the size changes?
4. To bring this all together, would a hybrid class that extends BitmapData but stores its actual data in a ByteArray be able to use setPixels/getPixels on itself to control upload/download from the GPU as necessary, to buffer a large number of bitmaps? Bonus: would speed improve for actually placing them in Bitmaps if the instances of this class were static?
Here are some answers:
1. No. In AIR, you manually upload bitmaps to the GPU and have control over WHEN to do it.
2. As far as I've seen, if the buffer is full you simply get an error; the GPU cannot make that choice for you. Removing a random texture wouldn't be nice if that texture is important to you, right? :)
3. You can check, for example, Starling and how it uploads textures to the GPU. Once you force it to upload a bitmap, it doesn't care what you do with that bitmap afterwards. It's like taking a photo of an object so that you can show the photo instead of describing the object with words: it won't matter if you later change the object, the photo will still be the same.
4. Simplified answer: no. Again, it's best to check out how textures are created and how you upload things to the GPU.
Currently I am developing an application for the Windows Store which does real-time image processing using Direct2D. It must support various sizes of images. The first problem I faced is how to handle situations where the image is larger than the maximum supported texture size. After some research and documentation reading I found VirtualSurfaceImageSource as a solution. The idea was to load the image as an IWICBitmap, then create a render target with CreateWICBitmapRenderTarget (which, as far as I know, is not hardware accelerated). After some drawing operations I wanted to display the result on the screen by invalidating the corresponding region in the VirtualSurfaceImageSource, or when the NeedUpdate callback fires. I assumed it would be possible to do this by creating an ID2D1Bitmap (hardware accelerated) and calling CopyFromRenderTarget with the render target created by CreateWICBitmapRenderTarget and the invalidated region as bounds, but the method returns D2DERR_WRONG_RESOURCE_DOMAIN. Another reason for using IWICBitmap is that one of the algorithms in the application must be able to update the pixels of the image directly.
The question is: why doesn't this logic work? Is this the right way to achieve my goal with Direct2D? Also, given that the render target created with CreateWICBitmapRenderTarget is not hardware accelerated, what is the best solution if I want to do my image processing on the GPU with images larger than the maximum allowed texture size?
Thank you in advance.
You are correct that images larger than the texture limit must be handled in software.
However, the question to ask is whether or not you need that entire image every time you render.
You can use hardware acceleration to render just the portion of the large image that you need, while the full image stays in a software target.
For example:
Use ID2D1RenderTarget::CreateSharedBitmap to make a bitmap that can be used by different resources.
Then create an ID2D1BitmapRenderTarget and render the large bitmap into it (making sure to call BeginDraw, Clear, DrawBitmap, EndDraw). Both the bitmap and the render target can be cached for use by successive calls.
Then copy the portion that will fit into texture memory from that render target into a regular ID2D1Bitmap using the ID2D1Bitmap::CopyFromRenderTarget method.
Finally, draw that bitmap to the real render target with pRT->DrawBitmap.
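A rough C++ sketch of those steps, under a few assumptions: `pRT` is the real (hardware) render target, `pLargeBitmap` is a bitmap the intermediate target can draw, and `tile` is the portion of the large image needed right now. For brevity it creates the intermediate target with CreateCompatibleRenderTarget instead of going through CreateSharedBitmap; the copy and draw steps are the ones listed above.

```cpp
#include <d2d1.h>

HRESULT DrawTile(ID2D1RenderTarget* pRT, ID2D1Bitmap* pLargeBitmap, D2D1_RECT_F tile)
{
    D2D1_SIZE_F tileSize = D2D1::SizeF(tile.right - tile.left, tile.bottom - tile.top);

    // Intermediate render target sized to one tile (cache this in real code).
    ID2D1BitmapRenderTarget* pTileRT = nullptr;
    HRESULT hr = pRT->CreateCompatibleRenderTarget(tileSize, &pTileRT);
    if (FAILED(hr)) return hr;

    // Render the needed portion of the large bitmap into the tile target.
    pTileRT->BeginDraw();
    pTileRT->Clear(D2D1::ColorF(D2D1::ColorF::Black));
    pTileRT->DrawBitmap(pLargeBitmap,
                        D2D1::RectF(0.0f, 0.0f, tileSize.width, tileSize.height),
                        1.0f, D2D1_BITMAP_INTERPOLATION_MODE_LINEAR,
                        &tile);                 // source rectangle in the large image
    hr = pTileRT->EndDraw();

    // Copy the result into a regular ID2D1Bitmap that fits in texture memory.
    ID2D1Bitmap* pTileBitmap = nullptr;
    if (SUCCEEDED(hr))
        hr = pRT->CreateBitmap(D2D1::SizeU((UINT32)tileSize.width, (UINT32)tileSize.height),
                               D2D1::BitmapProperties(pRT->GetPixelFormat()),
                               &pTileBitmap);
    if (SUCCEEDED(hr))
        hr = pTileBitmap->CopyFromRenderTarget(nullptr, pTileRT, nullptr);

    // Draw the tile to the real render target
    // (this must happen inside pRT's own BeginDraw/EndDraw).
    if (SUCCEEDED(hr))
        pRT->DrawBitmap(pTileBitmap,
                        D2D1::RectF(0.0f, 0.0f, tileSize.width, tileSize.height));

    if (pTileBitmap) pTileBitmap->Release();
    pTileRT->Release();
    return hr;
}
```

In real code the intermediate target and the tile bitmap would be cached rather than recreated on every call, as the answer notes.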
I need to copy the content of one Texture2D to another (both stored in VRAM).
Is this even possible without using RTT or any additional RAM-VRAM transfers?
Just a pure blit between two textures in VRAM.
Thanks in advance! I am not able to figure it out.
Using a RenderTarget does not remove the data from VRAM. It can be reused in a subsequent draw call as a texture without returning it to RAM. However, if you need to perform operations on it in code, like with getData(), then it will move out of video memory.