DirectX texture dimensions

So I've discovered that my graphics card automatically resizes textures to powers of 2. That isn't usually a problem, but I need to render only a portion of my texture, and to do so I must know the dimensions it has been resized to.
For example: I load a picture that is 370x300 pixels into my texture and try to draw it with a specific source rectangle:
RECT test;
test.left   = 0;
test.top    = 0;
test.right  = 370;
test.bottom = 300;

lpSpriteHandler->Draw(
    lpTexture,
    &test,                        // srcRect
    NULL,                         // center
    NULL,                         // position
    D3DCOLOR_XRGB(255, 255, 255)  // color
);
But since the texture has been automatically resized (in this case to 512x512), I see only a portion of my original picture.
The question is: is there a function or something I can call to find the dimensions my texture has actually been resized to?
(I've tried googling this but always get some weird crap about objects and HSL or something.)

You can get the original file's information with this call:
D3DXIMAGE_INFO info;
D3DXGetImageInfoFromFile(file_name, &info); // info.Width / info.Height hold the original size
Though, even knowing the original size, you'll still get the texture resized on load, which will obviously affect texture quality. Resizing is not a big deal when you apply a texture to a mesh (it gets resampled anyway), but for drawing sprites it can be a concern. To work around it, I'd suggest creating a surface, loading it via D3DXLoadSurfaceFromFile(), and then copying it to a "pow2"-sized texture, as sketched below.
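Something along these lines; a minimal, untested sketch that assumes an A8R8G8B8 image and a destination texture created in D3DPOOL_DEFAULT with a matching format (UpdateSurface() requires a system-memory source and a default-pool destination):

// Read the original image dimensions without creating a texture.
D3DXIMAGE_INFO info;
D3DXGetImageInfoFromFile(file_name, &info);

// Load the file, unscaled, into a system-memory surface.
IDirect3DSurface9* plainSurface = NULL;
device->CreateOffscreenPlainSurface(info.Width, info.Height,
    D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &plainSurface, NULL);
D3DXLoadSurfaceFromFile(plainSurface, NULL, NULL, file_name,
    NULL, D3DX_FILTER_NONE, 0, NULL);

// Copy it into the top-left corner of level 0 of the pow2 texture.
IDirect3DSurface9* texSurface = NULL;
lpTexture->GetSurfaceLevel(0, &texSurface);
RECT src = { 0, 0, (LONG)info.Width, (LONG)info.Height };
POINT dst = { 0, 0 };
device->UpdateSurface(plainSurface, &src, texSurface, &dst);
texSurface->Release();
plainSurface->Release();

With the picture sitting unscaled in the texture's top-left corner, the 0,0,370,300 source rectangle from the question shows it at its original size.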
And slightly off topic: are you definitely sure about your card's capabilities? Maybe your card does in fact support arbitrary texture sizes, but you are using D3DXCreateTextureFromFile(), which by default enforces pow2 sizes. To avoid this, try the extended version of the routine:
IDirect3DTexture9* texture;
D3DXCreateTextureFromFileEx(
    device, file_name,
    D3DX_DEFAULT_NONPOW2,  // width: keep the file's width
    D3DX_DEFAULT_NONPOW2,  // height: keep the file's height
    D3DX_DEFAULT, 0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL, NULL,
    &texture);
If your hardware supports non-pow2 textures, you'll get your file loaded as is. If the hardware can't handle it, the method will fail.

Related

Texture atlas to texture array via PIXEL_UNPACK_BUFFER

I have two questions:
First, is there any more direct, sane way to go from a texture atlas image to a texture array in WebGL than what I'm doing below? I've not tried this, but doing it entirely in WebGL seems possible, though it would be four times the work and I'd still have to make two round trips to the GPU to do it.
Second, am I right that because buffer data for texImage3D() must come from a PIXEL_UNPACK_BUFFER, this data must come directly from the CPU side? I.e., there is no way to copy from one block of GPU memory to a PIXEL_UNPACK_BUFFER without copying it to the CPU first. I'm pretty sure the answer to this is a hard "no".
In case my questions themselves are stupid (and they may be), my ultimate goal here is simply to convert a texture atlas PNG to a texture array. From what I've tried, the fastest way to do this by far is via PIXEL_UNPACK_BUFFER, rather than extracting each sub-image and sending them in one at a time, which for large atlases is extremely slow.
This is basically how I'm currently getting my pixel data.
const imageToBinary = async (image: HTMLImageElement) => {
    // Draw the image to an offscreen canvas purely to read its pixels back.
    const canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    const context = canvas.getContext('2d');
    if (!context) throw new Error('could not create a 2D context');
    context.drawImage(image, 0, 0);
    const imageData = context.getImageData(0, 0, image.width, image.height);
    return imageData.data; // Uint8ClampedArray of RGBA bytes
};
So, I'm creating an HTMLImageElement object, which contains the uncompressed pixel data I want, but has no methods to get at it directly. Then I'm creating a 2D context version containing the same pixel data a second time. Then I'm repopulating the GPU with the same pixel data a third time. Seems bonkers to me, but I don't see a way around it.
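For reference, the PIXEL_UNPACK_BUFFER upload step I'm describing looks roughly like this. A WebGL2 sketch, untested and simplified, assuming a hypothetical 4x4 atlas of 256x256 tiles, with gl and the pixels from imageToBinary() in scope; the whole atlas goes to the GPU once, then the UNPACK_* parameters address each tile inside the buffer:

const atlasCols = 4, atlasRows = 4;  // hypothetical atlas layout
const tileW = 256, tileH = 256;      // hypothetical tile size

// Upload the atlas bytes once into a pixel unpack buffer.
const pbo = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_UNPACK_BUFFER, pbo);
gl.bufferData(gl.PIXEL_UNPACK_BUFFER, pixels, gl.STATIC_DRAW);

// Allocate the texture array, one layer per tile.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, tex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, tileW, tileH, atlasCols * atlasRows);

// Rows in the buffer are one full atlas row wide.
gl.pixelStorei(gl.UNPACK_ROW_LENGTH, tileW * atlasCols);
for (let row = 0; row < atlasRows; row++) {
  for (let col = 0; col < atlasCols; col++) {
    // Select this tile's sub-rectangle inside the buffered atlas.
    gl.pixelStorei(gl.UNPACK_SKIP_ROWS, row * tileH);
    gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, col * tileW);
    gl.texSubImage3D(gl.TEXTURE_2D_ARRAY, 0, 0, 0, row * atlasCols + col,
                     tileW, tileH, 1, gl.RGBA, gl.UNSIGNED_BYTE, 0);
  }
}

// Reset unpack state so later uploads aren't affected.
gl.pixelStorei(gl.UNPACK_ROW_LENGTH, 0);
gl.pixelStorei(gl.UNPACK_SKIP_ROWS, 0);
gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, 0);
gl.bindBuffer(gl.PIXEL_UNPACK_BUFFER, null);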

Sprite Kit - Low Resolution Texture on Retina when using textureFromNode

For my game I am trying to create custom textures from two other textures, to allow for a variety of colours etc. in my sprites.
To do this, I'm creating a sprite by adding both textures together, then applying this to a new SKTexture by using
SKTexture *texture = [self.view textureFromNode:newSprite];
This works great on the whole and I get a nice custom texture, except when trying my game on Retina devices, where the texture is the correct size on screen but clearly at a lower resolution.
The textures are all there and properly named, so I don't believe that's the issue.
Has anyone encountered this, or does anyone know how I can create the proper @2x texture?
I finally (accidentally) figured out how to fix this. The node which you are creating a texture from has to be added to the scene. Otherwise you will get a non-retina size for your texture.
It's not ideal, as it would be nice to create textures without having to add them onto the screen.
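In other words, something like this (a minimal sketch, assuming self here is your SKScene, so self.view is the presenting SKView):

[self addChild:newSprite];                           // node must be in the scene
SKTexture *texture = [self.view textureFromNode:newSprite];
[newSprite removeFromParent];                        // then take it back off screen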
I've discovered another way of improving the fidelity of textures created from SKShapeNodes; it's not quite related to this question, but useful intel.
Create your shape at 2x its intended size.
Create all the fonts and other shapes at the same oversized ratio.
Make sure your positioning is relative to this overall size (e.g. don't use absolute sizes; use sizes relative to the container).
When you create the texture as a sprite it'll be huge, but then apply:
[sprite setScale:0.5]; // if you were using 2x
I've found this makes it look much higher resolution: no graininess, no fuzziness on fonts, sharp corners.
I also used tex.filteringMode = SKTextureFilteringNearest;
Thus: it doesn't have to be added to the scene and then removed.

Good tutorial on using Quads for custom Text in OpenGL ES 2.0 on iOS

I'm new to OpenGL ES and am teaching myself how to program iOS games. I'm currently playing with a project that I would like to put a HUD over, with some custom text. I don't want to do this using a UILabel, and I currently have no idea how to use quads to cut up a PNG full of glyphs and assemble them into text for display. I would like the end result to be providing a simple string to a command/method and having the output displayed using the textures/bitmap for the quads; say, glPrint("Hello World");. Would anyone be able to guide me in the proper direction? There doesn't seem to be a single good tutorial on how to do this for OpenGL ES 2.0 (just OpenGL). I also want to avoid third-party APIs; I really need/want to understand how to tackle this.
When I was getting started with OpenGL ES for my current 2D project, I used Ray's tutorial, which helped me get a handle on rendering textured 2D quads. In conjunction with his 3D OpenGL ES tutorial, you might be able to piece together what you want to do. Note that you probably wouldn't render every single quad separately as in the tutorial, since that is very inefficient. Instead, you would gather all of the vertices of the characters into two big arrays/vertex buffers and batch render the characters.
The basic flow for rendering each frame would probably look like this. For the 3D scene, which you've already done: pass a normal perspective projection matrix, get your vertex information for the scene to your shaders somehow, and render the scene. For the text, immediately after: pass in an orthographic projection matrix, bind your font texture (generally generated earlier with the GLKTextureLoader class) to the active texture unit, generate the two big arrays of texture and geometric vertices for the characters (or update the VBOs if the text has changed), pass those in, and then batch render all of the letters at once using either glDrawArrays or glDrawElements (which requires indices). A rough sketch of the vertex-generation step follows.
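To make the batching step concrete, here is a rough plain-C sketch of generating the quads for a string, untested and assuming a 16x16-glyph ASCII font atlas; none of these names come from the tutorials:

#include <stddef.h>
#include <OpenGLES/ES2/gl.h>

typedef struct { GLfloat x, y, u, v; } TexturedVertex;

// Fills verts (4 per character) and indices (6 per character) for text,
// laying the glyphs out left to right from (penX, penY).
void buildTextQuads(const char *text, float penX, float penY,
                    float glyphW, float glyphH,
                    TexturedVertex *verts, GLushort *indices)
{
    for (size_t i = 0; text[i] != '\0'; i++) {
        unsigned char c = (unsigned char)text[i];
        // Cell of this character in the atlas; assumes v grows downward
        // in the image (flip v0/v1 if your atlas is the other way up).
        float u0 = (c % 16) / 16.0f, v0 = (c / 16) / 16.0f;
        float u1 = u0 + 1.0f / 16.0f, v1 = v0 + 1.0f / 16.0f;
        float x0 = penX + i * glyphW, y0 = penY;

        TexturedVertex *q = &verts[i * 4];
        q[0] = (TexturedVertex){ x0,          y0,          u0, v1 };
        q[1] = (TexturedVertex){ x0 + glyphW, y0,          u1, v1 };
        q[2] = (TexturedVertex){ x0 + glyphW, y0 + glyphH, u1, v0 };
        q[3] = (TexturedVertex){ x0,          y0 + glyphH, u0, v0 };

        // Two triangles per quad.
        GLushort base = (GLushort)(i * 4);
        GLushort *ix = &indices[i * 6];
        ix[0] = base; ix[1] = base + 1; ix[2] = base + 2;
        ix[3] = base; ix[4] = base + 2; ix[5] = base + 3;
    }
}

Upload the two arrays to VBOs (or reuse them if the text hasn't changed) and render the whole string with one glDrawElements(GL_TRIANGLES, 6 * length, GL_UNSIGNED_SHORT, 0) call.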
Also, as I'm new to OpenGL myself, some of this may be wrong or inefficient. I've yet to use OpenGL ES to render anything 3D, so I'm not sure which other state changes (enabling, disabling, etc.), besides a different projection matrix, might be needed between rendering your 3D scene and the 2D scene (text).
It seems that drawing text using only OpenGL is a relatively difficult and tedious task, so if you just want to render a HUD overlay displaying frame rates and other things you are much better off using UILabels and saving yourself the trouble, especially if your project is not very complex. This also prevents you from having to deal with wrapping, kerning, font sizes, colors, different languages and a load of other stuff that greatly complicates text rendering if you need anything more complex.
Rather than tracking the location of each letter, why not use Core Graphics to draw your entire string into a bitmap, then upload that as a texture? You'd just need to get the dimensions from your bitmap to know what size quad to draw for that text string.
Within my open source GPUImage framework, I have an input class called a GPUImageUIElement that does something similar. The relevant code from that input is as follows:
CGSize layerPixelSize = [self layerSizeInPixels];

// Allocate a zeroed RGBA buffer and wrap it in a bitmap context.
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData,
    (int)layerPixelSize.width, (int)layerPixelSize.height, 8,
    (int)layerPixelSize.width * 4, genericRGBColorspace,
    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// Flip the context vertically and account for the Retina scale factor.
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);

[layer renderInContext:imageContext];

CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);

// Upload the rendered bytes to the previously created texture.
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);

free(imageData);
This code takes a CALayer (either directly or from the backing layer of a UIView) and renders its contents to a texture. I've already initialized the texture before this, so the code sets up a bitmap context, renders the layer into that context using -renderInContext:, and then uploads that bitmap to the texture for use in OpenGL ES.
The helper method -layerSizeInPixels just accounts for the current Retina scale factor as follows:
- (CGSize)layerSizeInPixels;
{
    CGSize pointSize = layer.bounds.size;
    return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
If you used a UILabel for your view and had it autosize to fit its text, you could set the text on it, use the above to render and upload your texture, and then take the pixel size of the element to determine your quad size. However, it would probably be more efficient to just draw the text yourself using -drawAtPoint:withFont: or the like with an NSString.
Using Core Graphics to render your text makes it easy to manipulate the text as an NSString and use all of Core Graphics' typesetting capabilities instead of rolling your own.
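For reference, the direct-drawing route looks roughly like this (an untested sketch using the UIKit string-drawing API of that era):

// Render the string with Core Graphics, then hand the bitmap to GL.
NSString *text = @"Hello World";
UIFont *font = [UIFont systemFontOfSize:24.0];
CGSize textSize = [text sizeWithFont:font];                // also your quad size in points
UIGraphicsBeginImageContextWithOptions(textSize, NO, 0.0); // 0.0 = use device scale
[[UIColor whiteColor] set];
[text drawAtPoint:CGPointZero withFont:font];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// image.CGImage can then be rendered into a byte buffer and uploaded with
// glTexImage2D, exactly as in the GPUImageUIElement code above.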

Keep pixel aspect with different resolution in xna game

I'm currently developing an old-school game with XNA 4.
My graphics assets are based on a 568x320 resolution (16:9 ratio). I want to be able to change my window resolution (1136x640, for example) and have my graphics scaled without stretching, so that they keep the pixel aspect.
How can I achieve this?
You could use a RenderTarget to achieve your goal. It sounds like you don't want to have to render according to every possible screen size, so if your graphics aren't dependent on other graphical features like a mouse, you can use a RenderTarget, draw all the pixel data to that, and afterwards draw it to the actual screen, letting the screen stretch it.
This technique can be used in other ways too. I use it to draw objects in my game, so I can easily change the rotation and location without having to calculate every sprite for the object.
Example:
void PreDraw()
{
    // You need your graphics device to render to
    GraphicsDevice graphicsDevice = Settings.GlobalGraphicsDevice;

    // You need a spritebatch to begin/end a draw call
    SpriteBatch spriteBatch = Settings.GlobalSpriteBatch;

    // Tell the graphics device where to draw to
    graphicsDevice.SetRenderTarget(renderTarget);

    // Clear the buffer with transparent so the image is transparent
    graphicsDevice.Clear(Color.Transparent);

    spriteBatch.Begin();
    flameAnimation.Draw(spriteBatch);
    spriteBatch.Draw(gunTextureToDraw, new Vector2(100, 0), Color.White);
    if (!base.CurrentPowerUpLevel.Equals(PowerUpLevels.None)) {
        powerUpAnimation.Draw(spriteBatch);
    }
    // Draws the image to the render target
    spriteBatch.Draw(shipSpriteSheet, new Rectangle(105, 0, (int)Size.X, (int)Size.Y), shipRectangleToDraw, Color.White);
    spriteBatch.End();

    // Let the graphics device know you are done and return to drawing according to its dimensions
    graphicsDevice.SetRenderTarget(null);

    // Utilize your render target
    finishedShip = renderTarget;
}
Remember, in your case, you would initialize your RenderTarget with dimensions of 568x320 and draw according to that and not worry about any other possible sizes. Once you give the RenderTarget to the spritebatch to draw to the screen, it will "stretch" the image for you!
EDIT:
Sorry, I skimmed through the question and missed that you don't want to "stretch" your result. This can be achieved by choosing the destination rectangle that the final RenderTarget is drawn into according to the graphics device's dimensions; see the sketch below.
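For example, a sketch with the 568x320 virtual resolution from the question and assumed variable names: pick the largest integer scale that fits the back buffer, then center the result so pixels stay square.

int screenW = GraphicsDevice.PresentationParameters.BackBufferWidth;
int screenH = GraphicsDevice.PresentationParameters.BackBufferHeight;
int scale = Math.Max(1, Math.Min(screenW / 568, screenH / 320));

// Center the integer-scaled image; leftover space becomes black bars.
Rectangle dest = new Rectangle(
    (screenW - 568 * scale) / 2,
    (screenH - 320 * scale) / 2,
    568 * scale, 320 * scale);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                  SamplerState.PointClamp, null, null);
spriteBatch.Draw(renderTarget, dest, Color.White);
spriteBatch.End();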
Oh gosh, I've got it! Just pass SamplerState.PointClamp to your spriteBatch.Begin method to keep that cool pixel visual effect <3
spriteBatch.Begin(SpriteSortMode.Immediate,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp, // keeps hard pixel edges when scaling
                  null,
                  null,
                  null,
                  cam.getTransformation(this.GraphicsDevice));

ActionScript3.0 - How to get color (uint) of pixel at coordinates? (Stage3D, Flare3D)

Question is in the title:
[ActionScript3.0] How to get color (uint) of pixel at coordinates? (Stage3D, Flare3D)
I am using the Flare3D library to render a 3D scene on an iPad 2. I need to get color values at 768 different coordinates every time the screen is redrawn. Previously, on the simple (2D) stage, I could just draw the screen onto 1x1 bitmaps translated to the specified coordinates; now, with Stage3D, that no longer works. Plus, I am a bit worried whether it will kill performance, since I really need to do it as often as possible: ideally every time the screen is drawn.
It would be really nice if the currently displayed screen were available as a bitmap somewhere, so I could access it like a simple array... but yeah, I am not holding my breath :)
Since Stage3D renders to the back-buffer and one can't access it directly, you also need to render to a BitmapData using the Context3D.drawToBitmapData() method. Rendering to a bitmap is very slow, especially if the viewport is large. As you only need to access those 768 pixels, you could instead use Context3D.setScissorRectangle to render the scene 768 times with the scissor rectangle set to 1x1 at each needed coordinate. I haven't tested that myself, so I don't know whether rendering the scene 768 times won't end up slower than rendering it once, but you may want to try that. :)
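For the bitmap route, the readback looks roughly like this (a sketch with assumed names, untested):

// Render the frame into a BitmapData instead of only the back-buffer.
var snapshot:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight, false, 0x000000);
// ... issue all of this frame's draw calls on context3D ...
context3D.drawToBitmapData(snapshot);       // copies the rendered frame
context3D.present();
var color:uint = snapshot.getPixel(px, py); // 0xRRGGBB at one of your 768 coordinates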
