I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen and store it in a file or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
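For comparison, here is what that CPU-side bilinear scaling looks like — a minimal sketch in Python (the function name and the plain nested-list grayscale image format are my own, for illustration only):

```python
def bilinear_resize(src, src_w, src_h, dst_w, dst_h):
    """Resize a grayscale image (list of rows) with bilinear sampling."""
    dst = [[0.0] * dst_w for _ in range(dst_h)]
    for y in range(dst_h):
        # Map the destination pixel back into source coordinates.
        fy = y * (src_h - 1) / max(dst_h - 1, 1)
        y0 = int(fy)
        y1 = min(y0 + 1, src_h - 1)
        wy = fy - y0
        for x in range(dst_w):
            fx = x * (src_w - 1) / max(dst_w - 1, 1)
            x0 = int(fx)
            x1 = min(x0 + 1, src_w - 1)
            wx = fx - x0
            # Blend the four neighbouring source pixels.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            dst[y][x] = top * (1 - wy) + bot * wy
    return dst
```

For real images you would run this per channel (or vectorize it), but the inner blend is the whole algorithm — and it avoids the GPU readback entirely.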
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX, you don't really need to resize images, as most likely you'll be displaying your images as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual image size displayed. If you need to scale your texture within the 3D object, just adjust the texture coordinates or the texture matrix.
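Adjusting the texture matrix amounts to multiplying each UV pair by a 3x3 matrix before sampling — a minimal CPU-side sketch in Python (the helper names are mine; Direct3D would apply this in the fixed-function texture stage or a shader):

```python
def apply_texture_matrix(u, v, m):
    """Transform a UV pair by a 3x3 texture matrix (row-major)."""
    w = m[2][0] * u + m[2][1] * v + m[2][2]
    return ((m[0][0] * u + m[0][1] * v + m[0][2]) / w,
            (m[1][0] * u + m[1][1] * v + m[1][2]) / w)

def scale_matrix(sx, sy):
    """Texture matrix that samples only the top-left sx-by-sy fraction."""
    return [[sx, 0.0, 0.0],
            [0.0, sy, 0.0],
            [0.0, 0.0, 1.0]]
```

With `scale_matrix(0.5, 0.5)`, a quad whose corners span UV (0,0)..(1,1) ends up showing only the top-left quarter of the texture, stretched to fill the quad.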
To manipulate the texture, you can use alpha blending, masking and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels like GDI+ does, I still think GDI+ is the way to go. DirectX was never meant for image manipulation.
If you have a single power-of-two texture (say 2048x2048) and you want to blit out scaled and translated subsets of it (say 64x92-sized tiles, scaled down) as quickly as possible onto another texture (as a buffer, so it can be cached when not dirty), then draw that texture onto a WebGL canvas, with no further requirements — what is the fastest strategy?
Is it first loading the source texture, binding an empty texture to a framebuffer, rendering the source with drawElementsInstancedANGLE to the framebuffer, then unbinding the framebuffer and rendering to the canvas?
I don't know much about WebGL and I'm trying to write a non-stateful version of https://github.com/kutuluk/js13k-2d (that just uses draw() calls instead of sprites that maintain state, since I would have millions of sprites). Before I get too far into the weeds, I'm hoping for some feedback.
There is no generic fastest way. The fastest way differs by GPU and by the specifics of what you're drawing.
Are you drawing lots of things the same size?
Are the parts of the texture atlas the same size?
Will you be rotating or scaling each instance?
Can their movement be based on time alone?
Will their drawing order change?
Do the textures have transparency?
Is that transparency 100% or not (0 or 1) or is it various values in between?
I'm sure there's tons of other considerations. For every consideration I might choose a different approach.
In general your idea of using drawElementsInstancedANGLE seems fine, but without knowing exactly what you're trying to do and on which device, it's hard to say.
Here are some tests of drawing lots of stuff.
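Whichever draw strategy wins, each instance still needs the normalized UV rectangle of its tile inside the 2048-wide atlas. A small sketch of that bookkeeping in Python (the left-to-right, top-to-bottom packing order is an assumption):

```python
def tile_uv_rect(tile_index, tile_w, tile_h, atlas_size):
    """Return (u0, v0, u1, v1) for a tile in a square atlas,
    assuming tiles are packed left-to-right, top-to-bottom."""
    cols = atlas_size // tile_w          # tiles per row
    col = tile_index % cols
    row = tile_index // cols
    u0 = col * tile_w / atlas_size
    v0 = row * tile_h / atlas_size
    return (u0, v0, u0 + tile_w / atlas_size, v0 + tile_h / atlas_size)
```

These four floats per tile are what you would upload in the per-instance attribute buffer that drawElementsInstancedANGLE reads from.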
I am trying to put together a ray tracing routine, and the width/shade of every pixel in a single ray will change over the length of the line. Is there a way in SpriteKit to draw a single pixel on the screen? Or should I be doing this using UIImage?
SpriteKit isn't a pixel drawing API. It's more of a "moving pictures around on the screen" API. There are a couple of cases, though, where it makes sense to do custom drawing, and SpriteKit (as of iOS 8 and OS X 10.10) has a few facilities for this.
If you want to create custom art to apply to sprites, with that art being mostly static (that is, not needing to be redrawn frequently as SpriteKit animates the scene), just use the drawing API of your choice — Core Graphics, OpenGL, emacs, whatever — to create an image. Then you can stick that image in an SKTexture object to apply to sprites in your scene.
To directly munge the bits inside a texture, use SKMutableTexture. This doesn't really provide drawing facilities — you just get to work with the raw image data. (And while you could layer drawing tools on top of that — for example, by creating a CG image context from the data and rewriting the data with CG results — that'd slow you down too much to keep up with the animation framerate.)
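Working with the raw image data means doing the pixel addressing yourself. A sketch of the index arithmetic in Python, assuming tightly packed 4-byte RGBA rows with no padding (SKMutableTexture actually hands you the bytes inside a modifyPixelData callback):

```python
BYTES_PER_PIXEL = 4  # RGBA, 8 bits per channel

def pixel_offset(x, y, width):
    """Byte offset of pixel (x, y) in a tightly packed RGBA buffer."""
    return (y * width + x) * BYTES_PER_PIXEL

def set_pixel(buf, x, y, width, rgba):
    """Write one RGBA pixel into a flat bytearray."""
    off = pixel_offset(x, y, width)
    buf[off:off + BYTES_PER_PIXEL] = bytes(rgba)
```

That's the entire "API" you get at this level — everything else (lines, fills, blending) you would have to build on top of it, which is exactly why it's too slow for per-frame drawing.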
If you need high-framerate, per-pixel drawing, your best bet is to do that entirely on the GPU. Use SKShader for that.
If your application is redrawing every frame, then you can use an offscreen buffer to dynamically update pixels, aka SKMutableTexture. I'm not sure how slow it will be.
I'm working on an OpenGL app that uses one particularly large texture, 2250x1000. Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048. When I try to draw my texture, it appears black. I need a way to load and draw the texture in two segments (left, right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the maximum texture size depends on the device; I believe the iPad 3 supports 4096x4096, but don't count on that. On most devices there is no way to push all that data onto one texture as-is. First you should ask yourself if you really need such a large texture: will it really make a difference if you resample it down to 2048x_? If the answer is no, you will need to break it up at some point. You could cut it in half by width and append one of the cut parts to the bottom of the texture, resulting in a 1125x2000 texture, or simply create two or more textures and push certain parts of the image to each. In either case you may have trouble with texture coordinates, but this all depends heavily on what you are trying to do and what is on that texture (a single image or parts of a sophisticated model; color mapping or some data you cannot interpolate; whether you create it at load time or modify it as you go...). Some more info could help us address your situation more specifically.
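The two-texture split boils down to remapping the horizontal texture coordinate: a coordinate on the original 2250-wide image becomes (texture index, local u) on one of the halves. A sketch in Python, assuming a straight cut down the middle (the function name is mine):

```python
def split_u(u, full_width=2250):
    """Map a normalized u on the full image to (texture_index, local_u)
    after cutting the image into a left and a right half."""
    half = full_width / 2   # 1125 texels per half
    x = u * full_width      # back to texel space
    if x < half:
        return (0, x / half)           # left texture
    return (1, (x - half) / half)      # right texture
```

At draw time you would split your geometry along the same seam and bind the matching texture for each side; v is unchanged.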
Imagine I have a picture (a Texture2D in XNA) sized 256x256, but sometimes I want to use it at a size of 64x64 in the application.
I learned that in regular Windows Forms or WPF applications, when I have to resize an image, I should store it in a field so that I only do the resizing once. It would massively slow down performance to resize in the game loop over and over again.
Do I have to do that in XNA too? I didn't find anything about it. I can resize the texture when drawing with the SpriteBatch, but that would be the same as just explained: I would resize the texture every frame and not only once. I don't even know how to resize a Texture2D without SpriteBatch in XNA.
There is no reason to create a resized copy of the original texture. Just draw it at whatever size you want. Drawing a texture at a different size than the original image is essentially free on modern graphics hardware.
With modern hardware you rarely need to manually resize textures; the hardware does all that automatically. However, you must make sure you are using mipmapping, or the result will be very aliased.
If mipmapping is enabled for your texture, the mip levels will be generated when it is loaded. Just make sure the trilinear filtering render state is enabled, draw at whatever size you need, and the hardware will automatically blend the best-fitting mip levels.
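Concretely, here is how the mip chain for a 256x256 texture works out, and roughly which level the hardware blends toward when drawing at 64x64 — a sketch in Python (the real LOD formula also accounts for per-pixel derivatives and anisotropy, so treat this as an approximation):

```python
import math

def mip_chain(size):
    """Edge lengths of the mip levels for a square power-of-two texture."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

def lod_for_scale(texture_size, onscreen_size):
    """Approximate mip level selected when drawing at a smaller size."""
    return math.log2(texture_size / onscreen_size)
```

Drawing the 256x256 texture at 64x64 lands exactly on level 2 (the 64x64 mip), so trilinear filtering samples a level that already matches the target size — which is why no manual resize is needed.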
I'm using a texture cache to draw video frames to the screen, just like the RosyWriter sample application from Apple.
I want to downsample an image from 1080p down to around 320x480 (for various reasons, I don't want to capture at a lower resolution) and use mipmap filtering to get rid of aliasing. However, when I try adding:
glGenerateMipmap(CVOpenGLESTextureGetTarget(inputTexture));
glTexParameteri(CVOpenGLESTextureGetTarget(inputTexture), GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I just get a black screen, as though the mipmaps aren't being generated. I'm rendering offscreen from one texture to another. Both source and destination are mapped to pixel buffers using texture caches.
Mipmaps can only be generated for power-of-two sized textures. None of the video frame sizes returned by the iOS cameras that I can think of have power-of-two dimensions. For using the texture caches while still generating mipmaps, I think you'd have to do something like do an offscreen re-render of the texture to a power-of-two FBO backed by a texture, then generate a mipmap for that.
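The target size for that power-of-two re-render would be the next power of two at or above each dimension — for example, for a 1920x1080 frame:

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p
```

A 1920x1080 frame would need a 2048x2048 render target, a lot of extra memory and fill rate just to make mipmap generation possible.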
That said, this is probably not the best way to accomplish what you want. Mipmaps only help when making a texture smaller on the screen, not making it larger. Also, they are pretty slow to generate at runtime, so this would drag your entire video processing down.
What kind of aliasing are you seeing when you zoom in? The normal hardware texture filtering should produce a reasonably smooth image when zoomed in on a video frame. As an example of this, grab and run the FilterShowcase sample from my GPUImage framework and look at the Crop filter. Zooming in on a section of the video that way seems to smooth things out pretty nicely, just using hardware filtering.
I do employ mipmaps for smooth downsampling of large images in the framework (see the GPUImagePicture when smoothlyScaleOutput is set to YES), but again that's for shrinking an image, not zooming in on it.