Can I use Direct3D to generate thumbnails?

I can use D3DXCreateTextureFromFile to load an image, and D3DXSaveTextureToFile to save it, but how can I resize it? Should I use IDirect3DDevice9::SetRenderTarget and render to a texture?
Is that slower than doing it on the CPU?

Generally you 'resize' images with the GPU by drawing them as a 'fullscreen quad' onto a render target set to the target size. Depending on the sizes and data involved, though, it's often slower to ship the image over to the GPU and then rely on readback to get it to disk, so doing the resize on the CPU is usually the better approach. I believe the legacy, deprecated D3DX9 utility library you are using can do the resize with D3DXLoadSurfaceFromSurface.
You should not be using Direct3D 9 and/or the legacy D3DX9 library for new projects. See MSDN and the "Living without D3DX" article.
A better solution is to use the Windows Imaging Component directly to load the image, resize it, and save a thumbnail. See DirectXTex for extensive example code using WIC.
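For the CPU path, the resize itself is simple to sketch. Below is a minimal, hypothetical example in plain C++ (no D3DX or WIC; both function names are invented for illustration) of picking thumbnail dimensions that preserve aspect ratio and doing a naive nearest-neighbour downsample of a raw RGBA8 buffer. A production version would use WIC's IWICBitmapScaler or D3DXLoadSurfaceFromSurface instead, which apply proper filtering:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper: compute thumbnail dimensions that fit inside
// maxDim while preserving the source aspect ratio.
void thumbnailSize(int srcW, int srcH, int maxDim, int& dstW, int& dstH) {
    if (srcW >= srcH) {
        dstW = std::min(srcW, maxDim);
        dstH = std::max(1, srcH * dstW / srcW);
    } else {
        dstH = std::min(srcH, maxDim);
        dstW = std::max(1, srcW * dstH / srcH);
    }
}

// Naive nearest-neighbour downsample of a tightly packed RGBA8 buffer.
// Real code would use an area/box filter (or WIC's scaler) for quality.
std::vector<uint8_t> resizeRGBA(const std::vector<uint8_t>& src,
                                int srcW, int srcH, int dstW, int dstH) {
    std::vector<uint8_t> dst(static_cast<size_t>(dstW) * dstH * 4);
    for (int y = 0; y < dstH; ++y) {
        int sy = y * srcH / dstH;  // nearest source row
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;  // nearest source column
            const uint8_t* s = &src[(static_cast<size_t>(sy) * srcW + sx) * 4];
            uint8_t* d = &dst[(static_cast<size_t>(y) * dstW + x) * 4];
            std::copy(s, s + 4, d);  // copy one RGBA pixel
        }
    }
    return dst;
}
```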

Related

How to reduce the memory of use about D3DXCreateTextureFromFileInMemoryEx?

I am making a 2D game with Direct3D and use D3DXCreateTextureFromFileInMemoryEx to load the game's images. The image files total 221 KB, but when I load them with D3DXCreateTextureFromFileInMemoryEx they occupy dozens of times that much memory.
So, how can I reduce the memory usage?
The D3DX texture-loading functions are very accommodating by default and will try to be helpful by performing all kinds of format conversions, compression/decompression, scaling, mipmap generation and other tasks to turn an arbitrary source image into a usable texture.
If you want to prevent any of that conversion (which can be expensive in both CPU cost and memory usage) then you need to ensure that your source textures are already in exactly the format you want and then pass flags to D3DX to tell it not to perform any conversions or scaling.
Generally the only file format that supports all of the texture formats D3D uses is the D3D format DDS, so you will want to save all of your source assets as DDS files. If the software you use to make your textures does not directly support saving DDS files, you can use the tools that come with the DirectX SDK to do the conversion, write your own tools, or use third-party conversion tools.
If you have the full DirectX SDK installed (the June 2010 SDK, the last standalone release) you can use the DirectX Texture Tool which is installed along with it. This isn't shipped with the Windows 8 SDK though (the DirectX SDK is now part of the Windows SDK and doesn't ship as a standalone SDK).
Once you have converted your source assets to DDS files with exactly the format you want to use at runtime (perhaps DXT compressed, with a full mipmap chain pre-generated) you want to pass the right flags to D3DX texture loading functions to avoid any conversions. This means you should pass:
- D3DX_DEFAULT_NONPOW2 for the Width and Height parameters and D3DX_DEFAULT for the MipLevels parameter (meaning take the dimensions from the file and don't round the size up to a power of 2)*.
- D3DFMT_FROM_FILE for the Format parameter (meaning use the exact texture format from the file).
- D3DX_FILTER_NONE for the Filter and MipFilter parameters (meaning don't perform any scaling or filtering).
If you pre-process your source assets to be in the exact format you want to use at runtime, save them as DDS files and set all the flags above to tell D3DX not to perform any conversions then you should find your textures load much faster and without excessive memory usage.
*This requires your target hardware supports non power of 2 textures or you only use power of 2 textures. Almost all hardware from the last 5 years supports non power of 2 textures though so this should be pretty safe.
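To see why a 221 KB file can balloon, it helps to run the numbers. Here is a sketch of the arithmetic in plain C++ (the size formulas are the standard 32-bit RGBA and DXT1 block layouts, not anything D3DX-specific): a 1024x1024 image expanded to uncompressed A8R8G8B8 with a full mip chain needs about 5.6 MB, while the same image stored as DXT1 needs roughly 0.7 MB.

```cpp
#include <cstddef>

// Bytes for one mip level in uncompressed 32-bit (A8R8G8B8) format.
size_t rgbaLevelBytes(size_t w, size_t h) { return w * h * 4; }

// Bytes for one mip level in DXT1: 4x4 texel blocks of 8 bytes each.
size_t dxt1LevelBytes(size_t w, size_t h) {
    size_t bw = (w + 3) / 4, bh = (h + 3) / 4;
    return bw * bh * 8;
}

// Total size of a full mip chain down to 1x1, for a given per-level formula.
size_t chainBytes(size_t w, size_t h, size_t (*level)(size_t, size_t)) {
    size_t total = 0;
    for (;;) {
        total += level(w, h);
        if (w == 1 && h == 1) break;
        w = w > 1 ? w / 2 : 1;  // each mip halves the dimensions,
        h = h > 1 ? h / 2 : 1;  // clamped to a minimum of 1
    }
    return total;
}
```

This is the "dozens of times" factor: a well-compressed PNG or JPG on disk says nothing about the texture's in-memory footprint, which is determined entirely by the runtime format and mip chain.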

Processing images via DirectX on WinRT

My friend is trying to find the best way to do image processing (rotate, flip, zoom, crop) on WinRT, but WriteableBitmapEx is too slow (tested on Surface and WP8).
I think he should write a WinRT C++ DirectX library that processes images via shaders and link it to the main C# project, but we don't have any examples and don't know how to do that.
You can use SharpDX if you want to keep it in C#.
If you don't plan to target Windows Phone, Direct2D would be suitable for image processing. Otherwise you have to use Direct3D.

iOS graphics engines

I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB tuple (each component in the 0-255 range) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, then determine the RGBA layout by querying the image.
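Once the image is drawn into a bitmap context, reading a pixel is just arithmetic on a linear buffer. A small sketch (C++ for illustration; assumes 8-bit components in RGBA order, which is one possible context configuration, and accounts for the fact that CGBitmapContextGetBytesPerRow may report padded rows wider than width*4):

```cpp
#include <cstddef>
#include <cstdint>

struct RGBA { uint8_t r, g, b, a; };

// Read one pixel's RGBA tuple out of a bitmap buffer.
// bytesPerRow may be larger than width * 4 because contexts often pad rows
// for alignment, so always index rows by bytesPerRow, not by width.
RGBA pixelAt(const uint8_t* data, size_t bytesPerRow, int x, int y) {
    const uint8_t* p = data + static_cast<size_t>(y) * bytesPerRow
                            + static_cast<size_t>(x) * 4;
    return RGBA{p[0], p[1], p[2], p[3]};
}
```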
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently brought to iOS called the Accelerate framework, which has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
EDIT: If you have any interest in really fast GPU-accelerated code that performs sophisticated filtering, check out Brad Larson's GPUImage project on GitHub.

iOS CGImageRef Pixel Shader

I am working on an image processing app for the iOS, and one of the various stages of my application is a vector based image posterization/color detection.
Now, I've written the code that can, per-pixel, determine the posterized color, but going through each and every pixel in an image would, I imagine, be quite demanding for the processor of an iOS device. As such, I was wondering if it is possible to use the graphics processor instead.
I'd like to create a sort of "pixel shader" which uses OpenGL-ES, or some other rendering technology to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and you'll probably need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. Since you don't sound otherwise particularly interested in OpenGL, there's no real reason to learn the raw API for this.
I will add one consideration: under iOS 4 I found it was often faster to do this sort of work on the CPU (using GCD to distribute it amongst cores) than on the GPU whenever I needed to read the results back at the end for any sort of serial access. OpenGL is generally designed so that you upload an image and it converts it into whatever internal format it wants; if you then want to read it back, it converts it to the one format you expect to receive and copies it out. So what you save on the GPU you pay for in the GL driver shunting and rearranging memory. As of iOS 5, Apple has introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably less of a concern now.
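Whichever route you take (a GPUImage filter or a CPU loop), the posterization kernel itself is tiny. Here is a hedged sketch of per-channel quantization in C++, the same math a fragment shader would apply per pixel; `levels` (the number of output values per channel) is an assumed parameter, not anything from the question's code:

```cpp
#include <cstdint>

// Posterize one 8-bit channel to `levels` discrete output values (levels >= 2).
// First find which quantization bucket the input falls in, then map that
// bucket back to the full 0-255 range.
uint8_t posterize(uint8_t c, int levels) {
    int bucket = c * levels / 256;        // 0 .. levels-1
    if (bucket >= levels) bucket = levels - 1;
    return static_cast<uint8_t>(bucket * 255 / (levels - 1));
}
```

Applied to each of R, G and B independently this gives the classic posterized look; a fragment shader version is the same expression on normalized floats, `floor(c * levels) / (levels - 1.0)`.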

Using PVRTexTool to build texture data on PC for use on iOS OpenGL ES

Apple provides the texturetool tool to cook textures into the PowerVR compressed texture format. My toolchain runs on Windows so I would like to create this texture data on a Windows PC. It looks like this will be simple because Imagination provides a tool and SDK that runs on windows. So I've downloaded PVRTexTool and will use that in my existing in-house texture cooking tool. Has anyone tried this? I'm wondering if there are any known incompatibilities between this and the iOS OpenGL ES implementation.
I now have this working and did not have any issues of compatibility with iOS.
One thing that confused me at first is that the standard formats that the tool does processing on are all ABGR format. You can convert your original data (mine was ARGB) into a standard format using the DecompressPVR function (even though my original data is not compressed).
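For reference, that ARGB-to-ABGR conversion is just a per-pixel channel reorder, which you could also do yourself before handing data to the tool. A minimal sketch on packed 32-bit pixels (illustrative only, not PVRTexTool API):

```cpp
#include <cstdint>

// Convert one packed 32-bit ARGB pixel to ABGR by swapping the R and B
// channels; A and G occupy the same bit positions in both layouts.
uint32_t argbToAbgr(uint32_t p) {
    uint32_t a_g = p & 0xFF00FF00u;   // keep A (bits 24-31) and G (bits 8-15)
    uint32_t r   = (p >> 16) & 0xFFu; // extract R
    uint32_t b   = p & 0xFFu;         // extract B
    return a_g | (b << 16) | r;       // reassemble with B and R swapped
}
```

The same function converts back again, since the swap is its own inverse.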
Other issues that came up along the way:
- Compressed textures have to be square. You can use the ProcessRawPVR function to resize non-square textures to square.
- The layout of the generated mipmaps in the resulting buffer is not obvious. You end up with one buffer containing all the mipmaps, but at runtime you need to upload each mipmap separately using glCompressedTexImage2D or glTexImage2D.
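To find each mipmap inside that single buffer you walk the chain, accumulating each level's size. A sketch assuming PVRTC 4bpp and a tightly packed, largest-level-first layout (the level-size formula is Apple's documented one for PVRTC 4bpp, where levels are padded up to a minimum of 8x8 texels; verify both the formula and the packing order against your actual container before relying on this):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Size in bytes of one PVRTC 4bpp mip level: 4 bits per texel, with
// dimensions padded up to a minimum of 8x8.
size_t pvrtc4bppLevelBytes(size_t w, size_t h) {
    return std::max<size_t>(w, 8) * std::max<size_t>(h, 8) * 4 / 8;
}

// Byte offset of each mip level in a tightly packed chain (level 0 first),
// as needed for the per-level glCompressedTexImage2D calls.
std::vector<size_t> mipOffsets(size_t w, size_t h, size_t levels) {
    std::vector<size_t> offsets;
    size_t off = 0;
    for (size_t i = 0; i < levels; ++i) {
        offsets.push_back(off);
        off += pvrtc4bppLevelBytes(w, h);
        w = std::max<size_t>(w / 2, 1);  // halve for the next level,
        h = std::max<size_t>(h / 2, 1);  // clamped to 1
    }
    return offsets;
}
```

At upload time you would loop over the levels, passing `buffer + offsets[i]` and `pvrtc4bppLevelBytes(w >> i, h >> i)` for each level `i`.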
