XNA 4 - Texture clone

I need to copy the contents of one Texture2D to another (both stored in VRAM).
Is this even possible without using RTT (render-to-texture) or any additional RAM-VRAM transfers?
Just a pure blit between two textures in VRAM.
Thanks in advance! I haven't been able to figure it out.

Using a RenderTarget does not remove the data from VRAM. It can be reused as a texture in a subsequent draw call without returning it to RAM. However, if you need to perform operations on it in code, such as with GetData(), the data will be copied back out of video memory.
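For reference, here is a minimal XNA 4 sketch of that render-to-texture copy (assuming a GraphicsDevice and SpriteBatch are already set up; the variable names are placeholders). The result is a RenderTarget2D, which derives from Texture2D, so it can be used anywhere a texture is expected:

    // Allocate a render target matching the source texture.
    RenderTarget2D copy = new RenderTarget2D(
        graphicsDevice, source.Width, source.Height, false,
        source.Format, DepthFormat.None);

    // Draw the source 1:1 into the render target: a GPU-side blit,
    // no GetData/SetData round trip through system RAM.
    graphicsDevice.SetRenderTarget(copy);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
    spriteBatch.Draw(source, Vector2.Zero, Color.White);
    spriteBatch.End();
    graphicsDevice.SetRenderTarget(null);

    // 'copy' now holds the source pixels. Note that XNA 4 discards a
    // render target's contents when it is bound again, so treat the
    // copy as read-only from here on.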

Related

AS3 AIR iOS - How to control when BitmapData is cached/uncached from the GPU?

This question's kind of a 4-parter:
1. Is it true that all BitmapData is immediately cached to the GPU as soon as it's created (even if it's never applied to a Bitmap or added to the stage)?
2. Does this still happen if the GPU texture buffer is already full? Bonus points: if so, what's the preferential swap method the GPU uses to select which textures to remove from memory?
3. If (1), does setting the width/height of any BitmapData uncache it, and/or does replacing its pixels upload the new pixels to the same memory address on the GPU? Bonus: what if the size changes?
4. To bring this all together, would a hybrid class that extends BitmapData but stores its actual data in a ByteArray be able to use setPixels/getPixels on itself to control upload/download from the GPU as necessary, in order to buffer a large number of bitmaps? Bonus: would speed improve for actually placing them in Bitmaps if the instances of this class were static?
Here are some answers:
1. No. In AIR, you manually upload bitmaps to the GPU, and you control WHEN that happens.
2. As far as I've seen, if the buffer is full you simply get an error; the GPU cannot choose what to do. Removing a random texture wouldn't be nice if it's important to you, right? :)
3. You can check, for example, Starling and how it uploads textures to the GPU. Once you force it to do so, it doesn't care what you do with the bitmap. It's like taking a photo of an object so you can show the photo instead of describing the object with words: it won't matter if you later change the object, the photo stays the same.
4. Simplified answer: no. Again, it's best to check out how textures are created and how you upload them to the GPU.

Reusing a VertexBuffer or make new VertexBuffer object?

I'm trying to render bitmap fonts in directX10 at the moment, and I want to do this as efficiently as possible. I'm having a hard time getting a start on my design because of this question though.
So should I reuse a single VertexBuffer or make multiple VertexBuffer objects?
Currently I allocate one dynamic VertexBuffer per Quad object in my program. This way I wouldn't have to map/unmap a VertexBuffer if nothing moves on my screen. For fonts I can implement a similar method where I allocate one buffer per text box, or something similar.
After searching, I read about reusing a single VertexBuffer for all objects. Vertex caching also came up. What are the advantages/disadvantages of this, and is it faster than my previous method?
Lastly, is there any other method I should look into for rendering many 2D quads on the screen?
Thank you in advance.
Using a single dynamic Vertex Buffer with the proper combinations of DISCARD and NO_OVERWRITE is the best way to handle this kind of dynamic submission. The driver will perform buffer renaming with DISCARD to minimize GPU stalls.
This is the mechanism used by SpriteBatch/SpriteFont and PrimitiveBatch in the DirectX Tool Kit. You can check that source for details, and if really needed you could adapt it to Direct3D 10.x. Of course, moving to Direct3D 11 is probably the better choice.
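As a rough sketch of that submission pattern, here it is using SlimDX's Direct3D 10 bindings (C#, to match the rest of this page; the native C++ calls map one-to-one, and the vertex type, buffer size, and class name are placeholder assumptions):

    using SlimDX;
    using SlimDX.Direct3D10;
    using Buffer = SlimDX.Direct3D10.Buffer;

    class QuadBatcher
    {
        const int VbSize = 64 * 1024;   // bytes; placeholder capacity
        readonly Buffer vb;
        readonly int stride;            // size of one vertex in bytes
        int cursor;                     // current write offset in bytes

        public QuadBatcher(Device device, int vertexStride)
        {
            stride = vertexStride;
            // One dynamic, CPU-writable vertex buffer shared by all quads.
            vb = new Buffer(device, VbSize, ResourceUsage.Dynamic,
                BindFlags.VertexBuffer, CpuAccessFlags.Write,
                ResourceOptionFlags.None);
        }

        // Appends vertices and returns the first-vertex index for the draw.
        public int Append<T>(T[] verts) where T : struct
        {
            int bytes = verts.Length * stride;

            // NO_OVERWRITE appends behind the GPU without stalling;
            // DISCARD on wrap-around lets the driver rename the buffer.
            MapMode mode = MapMode.WriteNoOverwrite;
            if (cursor + bytes > VbSize)
            {
                cursor = 0;
                mode = MapMode.WriteDiscard;
            }

            DataStream ds = vb.Map(mode, MapFlags.None);
            ds.Position = cursor;
            ds.WriteRange(verts);
            vb.Unmap();

            int firstVertex = cursor / stride;
            cursor += bytes;
            return firstVertex;
        }
    }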

How to make a copy of the OpenGL ES framebuffer in iOS?

I'd like to copy the OpenGL ES framebuffer from video RAM to video RAM in my iOS game. How is this done?
Ideally I'll do this 30 times per second, then transfer the contents of the copied buffer to the CPU piecewise (not all at once, since that causes a stutter in the game).
EDIT: I would say that you should have a look at Frame Buffer Objects (FBOs); you can find an example in the following post:
https://devforums.apple.com/message/23282#23282
This will allow you to render your scene into a texture attached to a FBO and use the texture afterwards.
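To sketch the idea (shown with OpenTK-style C# GL bindings purely to keep one language on this page; on iOS you would make the identically named gl* calls from C/Objective-C, with the OES suffix under ES 1.1, and width/height are placeholders):

    using System;
    using OpenTK.Graphics.OpenGL;

    // Create an empty texture the size of the buffer we want to capture.
    int tex;
    GL.GenTextures(1, out tex);
    GL.BindTexture(TextureTarget.Texture2D, tex);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
        width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
    GL.TexParameter(TextureTarget.Texture2D,
        TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
    GL.TexParameter(TextureTarget.Texture2D,
        TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

    // Attach the texture to an FBO. While the FBO is bound, all rendering
    // lands in the texture and never leaves VRAM.
    int fbo;
    GL.GenFramebuffers(1, out fbo);
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
    GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
        FramebufferAttachment.ColorAttachment0, TextureTarget.Texture2D, tex, 0);

    // ... render the scene here ...

    // Back to the default framebuffer; 'tex' is now an ordinary texture.
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);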
PS: Thanks Christian for pointing out my mistake (I first read that MrMusic wanted to copy VRAM to RAM and wrongly suggested to use glReadPixels which is indeed unsuitable for that purpose).

iOS: playing a frame-by-frame greyscale animation in a custom colour

I have a 32-frame greyscale animation of a diamond exploding into pieces (i.e. 32 PNG images @ 1024x1024).
My game consists of 12 separate colours, so I need to perform the animation in any desired colour.
This, I believe, rules out any Apple frameworks; it also rules out a lot of public code for frame-by-frame animation on iOS.
What are my potential solution paths?
These are the best SO links I have found:
Faster iPhone PNG Animations
frame by frame animation
Is it possible using video as texture for GL in iOS?
That last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, that should be even faster).
I can see these options (listed laziest first, most optimised last):
option A
Each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture.
I'm pretty sure this is stupid, so on to option B.
option B
Preload all images into memory.
Then proceed as above, only we load each frame from memory rather than from file.
I think this is going to be the ideal solution; can anyone give it the thumbs up or thumbs down?
option C
Preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to that frame's rectangle in the atlas.
While this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution: on older iOS devices the maximum texture size is 1024x1024. If we are cramming 32 frames into this (really this is the same as cramming in 64), we would be at 128x128 for each frame. If the resulting animation is close to full screen on the iPad, this isn't going to hack it.
option D
Instead of loading into a single GL texture, load into a bunch of textures.
Moreover, we can squeeze 4 images into a single texture by using all four channels.
I baulk at the sheer amount of fiddly coding required here; my RSI starts to tingle even thinking about this approach.
I think I have answered my own question here, but if anyone has actually done this or can see the way through, please answer!
If something higher-performance than (B) is needed, it looks like the key is glTexSubImage2D: http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml
Rather than pulling one frame across at a time, we could arrange, say, 16 512x512 8-bit greyscale frames contiguously in memory and send the whole block across to GL with a single glTexSubImage2D call, as one 1024x1024 32-bit RGBA texture update; the individual frames are then picked apart on the GL side at draw time (by texture coordinates plus channel selection).
This would mean we perform one [RAM->VRAM] transfer per 16 frames rather than one per frame.
Of course, we could pack 64 frames instead of 16, since more recent iOS devices can handle 2048x2048 textures.
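A sketch of what that upload step could look like (again OpenTK-style C# for consistency with the rest of this page; the atlas texture 'atlasTex', the block layout, and the pixel formats are assumptions):

    using System;
    using System.Runtime.InteropServices;
    using OpenTK.Graphics.OpenGL;

    // 16 contiguous 512x512 8-bit greyscale frames = 4 MB, reinterpreted
    // as a single 1024x1024 32-bit RGBA image for one upload.
    byte[] frameBlock = new byte[1024 * 1024 * 4];   // filled elsewhere

    // 'atlasTex' must already exist (allocated once with glTexImage2D).
    GL.BindTexture(TextureTarget.Texture2D, atlasTex);

    GCHandle pin = GCHandle.Alloc(frameBlock, GCHandleType.Pinned);
    try
    {
        // One RAM->VRAM transfer covering 16 frames.
        GL.TexSubImage2D(TextureTarget.Texture2D, 0, 0, 0, 1024, 1024,
            PixelFormat.Rgba, PixelType.UnsignedByte, pin.AddrOfPinnedObject());
    }
    finally
    {
        pin.Free();
    }

    // At draw time, select a frame via texture coordinates plus the
    // colour channel it was packed into (in the fragment shader).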
I will first try technique (B) and leave it at that if it works (I don't want to over-engineer), and look at this if needed.
I still can't find any way to query how many GL textures can be held on the graphics chip. I have been told that when you try to allocate memory for a texture, GL simply returns 0 once it has run out of memory. However, to implement this properly I would want to be sure I am not sailing too close to the wind resource-wise... I don't want my animation to use up so much VRAM that the rest of my rendering fails...
You would be able to get this working just fine with the CoreGraphics APIs; there is no reason to dive deep into OpenGL for a simple 2D problem like this.
For the general approach you should take to creating coloured frames from a greyscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific colour, so that what is left is the diamond coloured in with the specific colour you have selected. You could do this at runtime, or you could do it offline and create one video for each of the colours you want to support. It would be easier on your CPU to do the operation N times and save the results into files, but modern iOS hardware is much faster than it used to be.
Beware of memory usage issues when writing video processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL stuff, but an approach that makes use of videos would be a lot easier to deal with, and you would not need to worry so much about resource usage. See my library linked in the memory post for more info if you are interested in saving time on the implementation.
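For illustration, a minimal sketch of the clip-to-mask step (written against Xamarin.iOS's C# CoreGraphics bindings to keep one language on this page; the native CGContextClipToMask/CGContextFillRect calls are the same, and the method and argument names here are placeholders):

    using CoreGraphics;
    using UIKit;

    // Produce a tinted copy of one greyscale frame. 'frame' is the
    // greyscale UIImage and 'tint' the desired colour.
    UIImage Colorize(UIImage frame, UIColor tint)
    {
        var rect = new CGRect(0, 0, frame.Size.Width, frame.Size.Height);

        UIGraphics.BeginImageContextWithOptions(frame.Size, false, frame.CurrentScale);
        try
        {
            CGContext ctx = UIGraphics.GetCurrentContext();

            // CoreGraphics' origin is bottom-left; flip so the mask is upright.
            ctx.TranslateCTM(0, frame.Size.Height);
            ctx.ScaleCTM(1, -1);

            // Clip to the frame's mask values, then flood-fill the tint:
            // only the diamond pixels receive the colour.
            ctx.ClipToMask(rect, frame.CGImage);
            ctx.SetFillColor(tint.CGColor);
            ctx.FillRect(rect);

            return UIGraphics.GetImageFromCurrentImageContext();
        }
        finally
        {
            UIGraphics.EndImageContext();
        }
    }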

Copy Texture to Texture

I've written two programs that use shared resources, running on SlimDX & DirectX 10. One program displays the shared texture on a 3D mesh; the second loads an image as a texture. So far I need to pass the shared handle every time the texture is updated from a new image.
Now, is there a way I can initialize a fixed-size shared texture (Texture2D), so that every time I load a new image, all I need to do is load it as a texture and then copy it onto the existing texture? This way the shared handle would not change, and I can save some of the overhead of passing the shared handle around. In DirectX 9 I know there is a function that does just that, StretchRect, but I can't find it or anything similar in DirectX 10.
The intermediate format can be anything, even a surface, as long as I get to update the shared texture.
Thanks
What about CopyResource() or CopySubresourceRegion()? I don't know SlimDX, but these should work fine in native D3D10.
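In SlimDX, a full-resource copy would look something like the sketch below. Both textures must have matching dimensions, formats, and mip counts for CopyResource; note also that D3D10 has no StretchRect-style scaling, so to resize you would render the source onto the target as a textured quad instead. The variable names are placeholders, and it's worth verifying the argument order against your SlimDX version:

    // 'loadedTex' is the Texture2D created from the newly loaded image;
    // 'sharedTex' is the fixed-size shared Texture2D whose handle was
    // passed to the other program once.
    // Note: SlimDX takes (source, destination), the reverse of the
    // native ID3D10Device::CopyResource(dst, src) order.
    device.CopyResource(loadedTex, sharedTex);

    // For copying just a sub-rectangle, use CopySubresourceRegion,
    // which takes a ResourceRegion plus destination coordinates.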
