I am making a 2D game with Direct3D, and I use D3DXCreateTextureFromFileInMemoryEx to load my game's images. The image files total about 221 KB, but when I load them with D3DXCreateTextureFromFileInMemoryEx they end up using dozens of times that much memory.
So, how can I reduce the memory usage?
The D3DX texture loading functions are very accommodating by default and will try to be helpful by performing all kinds of format conversions, compression / decompression, scaling, mip-map generation and other tasks for you to turn an arbitrary source image into a usable texture.
If you want to prevent any of that conversion (which can be expensive in both CPU cost and memory usage) then you need to ensure that your source textures are already in exactly the format you want and then pass flags to D3DX to tell it not to perform any conversions or scaling.
Generally the only file format that supports all of the texture formats D3D uses is DDS, so you will want to save all of your source assets as DDS files. If the software you use to make your textures does not directly support saving DDS files, you can use the tools that come with the DirectX SDK to do the conversion for you, write your own tools, or use third-party conversion tools.
If you have the full DirectX SDK installed (the June 2010 SDK, the last standalone release) you can use the DirectX Texture Tool which is installed along with it. This isn't shipped with the Windows 8 SDK though (the DirectX SDK is now part of the Windows SDK and doesn't ship as a standalone SDK).
Once you have converted your source assets to DDS files in exactly the format you want to use at runtime (perhaps DXT compressed, with a full mipmap chain pre-generated), you want to pass the right flags to the D3DX texture loading functions to avoid any conversions (see the sketch after this list). This means you should pass:
D3DX_DEFAULT_NONPOW2 for the Width and Height parameters (meaning take the dimensions from the file and don't round them up to a power of 2)*, and D3DX_FROM_FILE for the MipLevels parameter (meaning use exactly the mip levels stored in the file).
D3DFMT_FROM_FILE for the Format parameter (meaning use the exact texture format from the file).
D3DX_FILTER_NONE for the Filter and MipFilter parameters (meaning don't perform any scaling or filtering).
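Putting those flags together, a minimal sketch of the call might look like this (pDevice, fileData and fileSize are placeholder names for your device pointer and the raw .dds bytes you read into memory):

    // Load a DDS that is already in its final runtime format, telling D3DX
    // not to resize, convert or filter anything.
    IDirect3DTexture9* texture = nullptr;
    HRESULT hr = D3DXCreateTextureFromFileInMemoryEx(
        pDevice,
        fileData, fileSize,     // the raw .dds file in memory
        D3DX_DEFAULT_NONPOW2,   // Width: use the width stored in the file
        D3DX_DEFAULT_NONPOW2,   // Height: use the height stored in the file
        D3DX_FROM_FILE,         // MipLevels: use the mip chain stored in the file
        0,                      // Usage
        D3DFMT_FROM_FILE,       // Format: keep the exact format from the file
        D3DPOOL_MANAGED,        // Pool
        D3DX_FILTER_NONE,       // Filter: no scaling or filtering
        D3DX_FILTER_NONE,       // MipFilter: no mip generation or filtering
        0,                      // ColorKey: disabled
        nullptr, nullptr,       // pSrcInfo, pPalette
        &texture);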
If you pre-process your source assets to be in the exact format you want to use at runtime, save them as DDS files and set all the flags above to tell D3DX not to perform any conversions then you should find your textures load much faster and without excessive memory usage.
*This requires that your target hardware supports non-power-of-2 textures, or that you only use power-of-2 textures. Almost all hardware from the last 5 years supports non-power-of-2 textures though, so this should be pretty safe.
I always thought these names were just file formats that use some special compression method, but recently I found a file format, DDS, that can use DXT1 as its compression method. So I'm curious: if they are all just compression algorithms, does that mean I could, for example, use DXT1 (or ETC1, or PVRTC) as the compression method for a PNG image?
They are hardware-supported binary compression formats (not file formats).
- DXT1, DXT3 and DXT5 are supported on desktop GPUs
- ETC1 is supported on most mobile GPUs
- PVRTC is supported on GPUs made by PowerVR / Imagination Technologies, which means all iPhones (and some Android devices)
If you want to use them you generally run some offline tool to generate them, and then either write a loader for the format the tool spits out or roll your own. You then pull the data out of the file and call gl.compressedTexImage2D(...) with it.
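A minimal sketch of that upload with the native GL C API (the WebGL call takes the same arguments minus the explicit byte count, since the typed array carries its own length); the width, height and data pointer are assumed to have been parsed from whatever container file your offline tool produced:

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Upload one pre-compressed DXT1 image into the currently bound 2D texture.
    void uploadDxt1Level(GLsizei width, GLsizei height,
                         const unsigned char* data, GLsizei dataSize)
    {
        glCompressedTexImage2D(GL_TEXTURE_2D,
                               0,                                // mip level
                               GL_COMPRESSED_RGB_S3TC_DXT1_EXT,  // data is already DXT1
                               width, height,
                               0,                                // border, must be 0
                               dataSize,                         // size of the compressed data
                               data);
    }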
The advantage of compressed texture formats is that they take less GPU memory, so you can use more of them, and they can also take less memory bandwidth, meaning they can potentially run faster. The disadvantage is that although they are compressed, they aren't compressed nearly as small as, say, a .jpg (generally), so they may take longer to transmit over the internet. For games stored on your local machine, like a game you install from Steam, you really don't care how fast the game downloads; you expect it to take anywhere from 10 minutes to several hours. For a WebGL game meant to be played on a web page, most users will not wait very long, so there's a trade-off.
Another issue with these formats is, as mentioned above, that they only work on certain devices. To support them across devices you'd need to do something like query WebGL by calling gl.getExtension for the various compression formats and then request the correct set of textures for the user's device from your server. For native games that's generally not an issue, since an Android app is made separately from an iPhone app, and that's separate from a desktop app, but for a web page you'd ideally like any device to be able to run the same page: a desktop machine, a tablet, maybe a smartphone. (Although for Android there are many GPUs, so the problem is similar there even for native Android apps.)
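As a rough illustration of that query using the native GL C API (in WebGL the equivalent is calling gl.getExtension("WEBGL_compressed_texture_s3tc") and friends and checking for null), here is a sketch assuming a current GL ES 2.0 context:

    #include <cstring>
    #include <GLES2/gl2.h>

    // Returns true if the current context advertises the named extension,
    // e.g. "GL_EXT_texture_compression_s3tc" or "GL_IMG_texture_compression_pvrtc".
    // Note: a production check should match whole tokens, not substrings.
    bool hasExtension(const char* name)
    {
        const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return all != nullptr && std::strstr(all, name) != nullptr;
    }

You would run a check like this once at startup and then ask the server for the matching set of textures.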
Here's an article that explains PVRTC, and another about DXT/S3TC.
I can use D3DXCreateTextureFromFile to load an image, and D3DXSaveTextureToFile to save it, but how can I resize it? Should I use IDirect3DDevice9::SetRenderTarget and render to a texture?
Is it slower than doing it on CPU?
Generally you 'resize' images with the GPU by drawing them as a 'fullscreen quad' onto a render target set to the target size. Depending on the size and the data involved, it's probably slower to ship the image over to the GPU and then rely on readback to get it back to disk, so doing it on the CPU is usually the better approach. I believe the legacy, deprecated D3DX9 utility library you are using can do the resize with D3DXLoadSurfaceFromSurface.
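If you do stay on D3DX9, the resize is just a filtered surface-to-surface copy. A rough sketch (assuming an existing IDirect3DDevice9* named device; file names are placeholders and error handling is omitted):

    // Load an image into a system-memory surface, stretch it into a smaller
    // surface with D3DXLoadSurfaceFromSurface, then save the result.
    D3DXIMAGE_INFO info = {};
    D3DXGetImageInfoFromFile(TEXT("input.png"), &info);

    IDirect3DSurface9* src = nullptr;
    device->CreateOffscreenPlainSurface(info.Width, info.Height,
        D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &src, nullptr);
    D3DXLoadSurfaceFromFile(src, nullptr, nullptr, TEXT("input.png"),
        nullptr, D3DX_FILTER_NONE, 0, nullptr);

    // Destination surface at the new (quarter) size.
    IDirect3DSurface9* dst = nullptr;
    device->CreateOffscreenPlainSurface(info.Width / 4, info.Height / 4,
        D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &dst, nullptr);

    // The actual resize: a filtered copy from one surface to the other.
    D3DXLoadSurfaceFromSurface(dst, nullptr, nullptr,
        src, nullptr, nullptr, D3DX_FILTER_TRIANGLE, 0);

    D3DXSaveSurfaceToFile(TEXT("thumbnail.png"), D3DXIFF_PNG, dst, nullptr, nullptr);

    src->Release();
    dst->Release();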
You should not be using Direct3D 9 and/or the legacy D3DX9 library for new projects. See MSDN and Living without D3DX.
A better solution is to use the Windows Imaging Component directly to load the image, resize it, and save a thumbnail. See DirectXTex for extensive example code using WIC.
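As a sketch of that route, DirectXTex wraps the WIC load / resize / save round trip in a few calls (file names are placeholders and error handling is reduced to early returns; the flag and function names are taken from DirectXTex's public header, so double-check them against the version you use):

    #include <Windows.h>
    #include <DirectXTex.h>
    using namespace DirectX;

    // Load an image via WIC, shrink it, and save a PNG thumbnail.
    HRESULT MakeThumbnail()
    {
        // WIC (and therefore DirectXTex's WIC helpers) needs COM initialized.
        HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        if (FAILED(hr)) return hr;

        TexMetadata meta = {};
        ScratchImage source;
        hr = LoadFromWICFile(L"input.png", WIC_FLAGS_NONE, &meta, source);
        if (FAILED(hr)) return hr;

        ScratchImage resized;
        hr = Resize(*source.GetImage(0, 0, 0), meta.width / 4, meta.height / 4,
                    TEX_FILTER_DEFAULT, resized);
        if (FAILED(hr)) return hr;

        return SaveToWICFile(*resized.GetImage(0, 0, 0), WIC_FLAGS_NONE,
                             GetWICCodec(WIC_CODEC_PNG), L"thumbnail.png");
    }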
I have not found any definitive answers to this, so I decided to ask here. Is there really no way to save and load compiled WebGL shaders? It seems a waste to compile the shaders every time someone loads the page, when all you would have to do is compile the shaders once, save them to a file, then load the compiled shader object, as you would with HLSL (I know HLSL isn't GLSL, but I'm still a little new to OpenGL).
So, if it is possible, how can I save and load a compiled shader in WebGL?
There really is no way, and IMHO that's a good thing. It would pose a security issue (feeding arbitrary bytecode to the GPU), and in addition, when drivers are updated, precompiled shaders could be missing new optimizations or simply break.
when all you would have to do is compile the shaders once, save them to a file, then load the compiled shader object, as you would with HLSL
OpenGL (and its derivatives) does not support loading pre-compiled shaders the same way DirectX does:
Program binary formats are not intended to be transmitted. It is not reasonable to expect different hardware vendors to accept the same binary formats. It is not reasonable to expect different hardware from the same vendor to accept the same binary formats.
https://www.opengl.org/wiki/Shader_Compilation#Binary_limitations
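For contrast, desktop OpenGL does expose a program-binary path (ARB_get_program_binary / GL 4.1), and the quote above is exactly why the blob it hands back is only usable on the same driver. A hedged sketch, assuming an already linked program object and a loader such as GLEW providing the entry points:

    #include <vector>
    #include <GL/glew.h>   // or any loader exposing the GL 4.1 entry points

    // Save a linked program's driver-specific binary blob.
    std::vector<unsigned char> saveProgramBinary(GLuint program, GLenum* formatOut)
    {
        GLint length = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

        std::vector<unsigned char> blob(length);
        glGetProgramBinary(program, length, nullptr, formatOut, blob.data());
        return blob;   // only valid for this GPU/driver combination
    }

    // Try to restore it later; fall back to recompiling from source if this fails.
    bool loadProgramBinary(GLuint program, GLenum format,
                           const std::vector<unsigned char>& blob)
    {
        glProgramBinary(program, format, blob.data(),
                        static_cast<GLsizei>(blob.size()));
        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);
        return ok == GL_TRUE;
    }

WebGL deliberately exposes none of this.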
There seems to be no intermediate format like SPIR-V in OpenGL, so you would need to compile the shaders on the target platform, which introduces a whole lot of additional concerns: users changing their graphics cards or using a hybrid graphics solution, storage limitations on the client (about 5 MB using localStorage), and the possibility of abusing cached binaries to fingerprint the hardware.
I am working on an image processing app for iOS, and one of the stages of my application is vector-based image posterization / color detection.
Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel in an image would, I imagine, be quite hard on the processor of an iOS device. As such, I was wondering if it is possible to use the graphics processor instead;
I'd like to create a sort of "pixel shader" which uses OpenGL ES, or some other rendering technology, to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL itself, so there's no real reason to dig into the raw API.
I will add the sole consideration that under iOS 4 I found it often faster to do this kind of work on the CPU (using GCD to distribute it amongst cores) than on the GPU whenever I needed to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever format it wants; if you then want to read it back, it converts it into the one format you expect to receive it in and copies it to where you want it. So what you save on the GPU you pay for again, because the GL driver has to shunt and rearrange memory. As of iOS 5, Apple have introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably less of a concern now.
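For what it's worth, the per-pixel work described in the question is just quantizing each channel, so the CPU version you would hand out to GCD blocks is tiny. A minimal sketch, assuming 8-bit RGBA pixels and leaving alpha alone:

    #include <cstddef>
    #include <cstdint>

    // Posterize an RGBA8 buffer in place by snapping each color channel to one
    // of 'levels' evenly spaced values (levels >= 2). A fragment-shader version
    // would run the same arithmetic per pixel.
    void posterizeRGBA(std::uint8_t* pixels, std::size_t pixelCount, int levels)
    {
        const float step = 255.0f / static_cast<float>(levels - 1);
        for (std::size_t i = 0; i < pixelCount * 4; ++i)
        {
            if (i % 4 == 3) continue;  // skip the alpha channel
            const int bucket = static_cast<int>(pixels[i] / 255.0f * (levels - 1) + 0.5f);
            pixels[i] = static_cast<std::uint8_t>(bucket * step + 0.5f);
        }
    }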
Apple provides the texturetool tool to cook textures into the PowerVR compressed texture format. My toolchain runs on Windows so I would like to create this texture data on a Windows PC. It looks like this will be simple because Imagination provides a tool and SDK that run on Windows. So I've downloaded PVRTexTool and will use that in my existing in-house texture cooking tool. Has anyone tried this? I'm wondering if there are any known incompatibilities between this and the iOS OpenGL ES implementation.
I now have this working and did not have any issues of compatibility with iOS.
One thing that confused me at first is that the standard formats the tool processes are all ABGR. You can convert your original data (mine was ARGB) into a standard format using the DecompressPVR function (even though my original data is not compressed).
Other issues that came up along the way:
- Compressed textures have to be square. You can use the ProcessRawPVR function to resize non-square textures to square.
- The layout of the generated mipmaps in the resulting buffer is not obvious. You end up with one buffer containing all the mipmaps, but at runtime you need to upload each mip level separately using glCompressedTexImage2D or glTexImage2D (see the sketch below).
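To make that concrete, here is a rough sketch of walking one buffer that holds a whole PVRTC 4bpp mip chain (largest level first) and uploading each level. The per-level size formula follows PVRTC's 4x4-block, minimum-two-block layout; the data pointer and top-level dimensions are assumed to come from the header your tool wrote:

    #include <algorithm>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Upload every level of a contiguous PVRTC 4bpp mip chain, one
    // glCompressedTexImage2D call per level, into the bound 2D texture.
    void uploadPvrtc4bppMipChain(const unsigned char* data,
                                 GLsizei topWidth, GLsizei topHeight,
                                 int mipCount)
    {
        GLsizei width = topWidth;
        GLsizei height = topHeight;

        for (int level = 0; level < mipCount; ++level)
        {
            // PVRTC 4bpp: 4x4 texel blocks of 8 bytes, never fewer than 2x2 blocks.
            const GLsizei blocksWide = std::max(width / 4, 2);
            const GLsizei blocksHigh = std::max(height / 4, 2);
            const GLsizei levelSize  = blocksWide * blocksHigh * 8;

            glCompressedTexImage2D(GL_TEXTURE_2D, level,
                                   GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG,
                                   width, height, 0, levelSize, data);

            data  += levelSize;                // the next level follows immediately
            width  = std::max(width / 2, 1);
            height = std::max(height / 2, 1);
        }
    }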