OpenGL ES Texture Not Rendering - iOS

I am attempting to render a texture to a plane with OpenGL ES on an iPhone. I have worked with OpenGL before but I'm not sure why this isn't working.
When I run the code, the plane is rendered as a black square rather than a textured square. I believe the problem may be in the texture loading, although I see no errors when I run the code.
Hopefully someone will spot a problem and be able to help. Thanks in advance.
Here is my code for the mesh.
// Mesh loading
- ( id ) init {
    if ( self = [ super init ] ) {
        glGenVertexArraysOES( 1, &m_vertexArray );
        glBindVertexArrayOES( m_vertexArray );
        glGenBuffers( 1, &m_vertexBuffer );
        glBindBuffer( GL_ARRAY_BUFFER, m_vertexBuffer );
        glBufferData( GL_ARRAY_BUFFER, sizeof( g_vertices ), g_vertices, GL_STATIC_DRAW );
        glGenBuffers( 1, &m_texCoordBuffer );
        glBindBuffer( GL_ARRAY_BUFFER, m_texCoordBuffer );
        glBufferData( GL_ARRAY_BUFFER, sizeof( g_texCoords ), g_texCoords, GL_STATIC_DRAW );
    }
    return self;
}
- ( void ) render {
    glBindBuffer( GL_ARRAY_BUFFER, m_vertexBuffer );
    glVertexAttribPointer( GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, ( GLvoid* ) 0 );
    glBindBuffer( GL_ARRAY_BUFFER, m_texCoordBuffer );
    glVertexAttribPointer( GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, ( GLvoid* ) 0 );
    // 3 floats per vertex, so divide the element count by 3 to get the vertex count
    glDrawArrays( GL_TRIANGLES, 0, sizeof( g_vertices ) / ( 3 * sizeof( g_vertices[ 0 ] ) ) );
}
const GLfloat g_vertices[] = {
    -1.0, -1.0, 0.0,
     1.0,  1.0, 0.0,
    -1.0,  1.0, 0.0,
    -1.0, -1.0, 0.0,
     1.0, -1.0, 0.0,
     1.0,  1.0, 0.0
};

const GLfloat g_texCoords[] = {
    0.0, 0.0,
    1.0, 1.0,
    0.0, 1.0,
    0.0, 0.0,
    1.0, 0.0,
    1.0, 1.0
};
I only need my vertices and tex coords right now, so that is all I'm using.
Next is my texture loading.
- ( id ) init: ( NSString* ) filename {
    if ( self = [ super init ] ) {
        CGImageRef spriteImage = [ UIImage imageNamed: filename ].CGImage;
        if ( !spriteImage ) {
            NSLog( @"Failed to load image %@", filename );
            exit( 1 );
        }
        size_t width = CGImageGetWidth( spriteImage );
        size_t height = CGImageGetHeight( spriteImage );
        GLubyte *spriteData = ( GLubyte* ) calloc( width * height * 4, sizeof( GLubyte ) );
        CGContextRef spriteContext = CGBitmapContextCreate( spriteData, width, height, 8, 4 * width, CGImageGetColorSpace( spriteImage ), kCGImageAlphaPremultipliedLast );
        CGContextDrawImage( spriteContext, CGRectMake( 0, 0, width, height ), spriteImage );
        CGContextRelease( spriteContext );
        glGenTextures( 1, &m_texture );
        glBindTexture( GL_TEXTURE_2D, m_texture );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, ( GLuint ) width, ( GLuint ) height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData );
        free( spriteData );
    }
    return self;
}
- ( void ) bind {
    glActiveTexture( GL_TEXTURE0 );
    glBindTexture( GL_TEXTURE_2D, m_texture );
}
I used the texture loading code from this tutorial.
Then here is my rendering code.
- ( void ) glkView: ( GLKView* ) view drawInRect: ( CGRect ) rect {
    glClearColor( 0.65f, 0.65f, 0.65f, 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glUseProgram( m_shaderProgram );
    [ m_texture bind ];
    glUniform1i( uniforms[ UNIFORM_SAMPLER ], 0 );
    GLKMatrix4 mvp = GLKMatrix4Multiply( GLKMatrix4Multiply( m_projectionMatrix, m_viewMatrix ), m_modelMatrix );
    glUniformMatrix4fv( uniforms[ UNIFORM_MODELVIEWPROJECTION_MATRIX ], 1, 0, mvp.m );
    glEnableVertexAttribArray( GLKVertexAttribPosition );
    glEnableVertexAttribArray( GLKVertexAttribTexCoord0 );
    [ m_plane render ];
    glDisableVertexAttribArray( GLKVertexAttribTexCoord0 );
    glDisableVertexAttribArray( GLKVertexAttribPosition );
}
Vertex shader.
attribute vec3 position;
attribute vec2 texCoord;
varying lowp vec2 texCoord0;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    texCoord0 = texCoord;
    gl_Position = modelViewProjectionMatrix * vec4( position, 1.0 );
}
And lastly fragment shader.
varying lowp vec2 texCoord0;
uniform sampler2D sampler;
void main()
{
    gl_FragColor = texture2D( sampler, texCoord0.st );
}

As mentioned in the comment, you can check with a power-of-two (POT) texture. In addition, there are extensions that enable support for non-power-of-two (NPOT) textures, such as GL_IMG_texture_npot; see the discussion in this thread (Non power of two textures in iOS) and this post (http://aras-p.info/blog/2012/10/17/non-power-of-two-textures/).
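One more thing worth checking in the loading code above: no wrap mode is ever set, and OpenGL ES 2.0 only samples an NPOT texture when mipmapping is off and both wrap modes are GL_CLAMP_TO_EDGE; with the default GL_REPEAT the texture is incomplete and samples as black, which matches the symptom. A minimal sketch of NPOT-safe parameters, to be set right after glBindTexture in the init: method:

// Sketch: keep an NPOT texture "complete" under ES 2.0.
// No mipmapped min filter, and clamp both axes; the default GL_REPEAT
// wrap mode makes an NPOT texture incomplete, which samples as black.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

A quick glGetError() check after glTexImage2D is also a cheap way to confirm the upload itself succeeded.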

Related

OpenGL ES Depthbuffer not working

I hope someone can help me; I have been sitting on this problem for a long time now. I'm working on a 3D game with OpenGL ES. I have a map/world and some objects, and I want to add shadows using shadow mapping.
My problem is that the shadow map information does not match the light-space information I compute while rendering.
I'm not sure whether the problem is in my FBO, so here it is:
- (void)createShadowMap:(GLboolean)c Depth:(GLboolean)d Stencil:(GLboolean)s {
    // Get ID
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);
    // Texture
    glGenTextures(1, &_colorBuffer);
    glBindTexture(GL_TEXTURE_2D, _colorBuffer);
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, _screenWidth, _screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
    glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    // Setup FBO
    glGenFramebuffers(1, &_frameBuffer);
    glGenRenderbuffers(1, &_depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _screenWidth, _screenHeight);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _colorBuffer, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        exit(1);
    }
    glBindFramebuffer ( GL_FRAMEBUFFER, defaultFBO );
    glBindTexture ( GL_TEXTURE_2D, 0 );
}
But I think the problem is in the calculation; I just can't find it. So here are the shaders for the depth map:
void main()
{
    vTexCoord = projectionMatrix * viewMatrix * modelMatrix * position;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
}
Fragment:
void main()
{
    highp float depth = gl_FragCoord.z;
    highp float linearDepth = (2.0 * 0.1) / (60.0 + 1.0 - depth * (60.0 - 1.0));
    lowp vec4 color = vec4(vec3(linearDepth), 1.0);
    gl_FragColor = color;
}
And now the shaders used while rendering:
Vert:
vec4 lightSight = projectionMatrix * lightSpaceMatrix * modelMatrix * position;
projCoords = lightSight;
Frag:
highp vec3 depth = projCoords.xyz/projCoords.w;
depth = (depth +1.0)*0.5;
highp float currentDepth = (projCoords.xyz/projCoords.w).z;
highp float closestDepth = texture2D(shadowMap, depth).r;
lowp float shadow;
if (closestDepth > currentDepth) {
    shadow = 0.0;
} else {
    shadow = 0.5;
}
lowp vec4 color = vec4(vec3(shadow), 1.0);

Draw Multiple Shapes in WebGL

I am facing the same issue of creating multiple objects (one rotating and one static). I want to draw a static rectangle in the following code (the rotating-rectangle code is from Edward Angel's WebGL examples). I tried to follow the instructions gman gave in the link Drawing many shapes in WebGL, but I still haven't been able to solve it. It would be nice to get some help on creating another, static rectangle alongside the rotating one in the code. Thanks.
<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 vPosition;
uniform float theta;
void
main()
{
float s = sin( theta );
float c = cos( theta );
gl_Position.x = -s * vPosition.x + c * vPosition.y;
gl_Position.y = s * vPosition.y + c * vPosition.x;
gl_Position.z = 0.0;
gl_Position.w = 1.0;
}
</script>
<script id="fragment-shader" type="x-shader/x-fragment">
precision mediump float;
void
main()
{
gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
}
</script>
var canvas;
var gl;
var theta = 0.0;
var thetaLoc;
window.onload = function init()
{
    canvas = document.getElementById( "gl-canvas" );

    gl = WebGLUtils.setupWebGL( canvas );
    if ( !gl ) { alert( "WebGL isn't available" ); }

    gl.viewport( 0, 0, canvas.width, canvas.height );
    gl.clearColor( 1.0, 1.0, 1.0, 1.0 );

    var program = initShaders( gl, "vertex-shader", "fragment-shader" );
    gl.useProgram( program );

    var vertices = [
        vec2( 0, 1 ),
        vec2( 1, 0 ),
        vec2( -1, 0 ),
        vec2( 0, -1 )
    ];

    var bufferId = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, bufferId );
    gl.bufferData( gl.ARRAY_BUFFER, flatten(vertices), gl.STATIC_DRAW );

    var vPosition = gl.getAttribLocation( program, "vPosition" );
    gl.vertexAttribPointer( vPosition, 2, gl.FLOAT, false, 0, 0 );
    gl.enableVertexAttribArray( vPosition );

    thetaLoc = gl.getUniformLocation( program, "theta" );

    render();
};

function render() {
    gl.clear( gl.COLOR_BUFFER_BIT );
    theta += 0.1;
    gl.uniform1f( thetaLoc, theta );
    gl.drawArrays( gl.TRIANGLE_STRIP, 0, 4 );
    window.requestAnimFrame(render);
}
So, where you define your vertices for the rectangle, simply add more vertices to create an additional rectangle at a different point on the canvas. That one only needs to be drawn once, so if you have, say:
var render = function() {
    // Insert rest of code
    if (!not_First_Time)
    {
        // the second rectangle adds 4 more vertices, so draw vertices 4..7 once
        gl.drawArrays( gl.TRIANGLE_STRIP, 4, 4 );
    }
    gl.drawArrays( gl.TRIANGLE_STRIP, 0, 4 );
    // Insert rest of code
}
But that's more or less the cheating way of doing it, since those points still go through the same rotation in the shader, and if you ever redrew them they would rotate just like the main square does.
I also notice you stripped a few of the script includes out of the HTML code. You're going to need those.
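For reference, the second argument of gl.drawArrays is the first vertex to use and the third is how many vertices to draw, so with both rectangles packed into one buffer the two draws look roughly like this (written as a plain OpenGL ES C sketch for illustration; quadBuffer and positionAttrib are made-up names, not from the code above):

// Vertices 0-3 form the rotating quad, vertices 4-7 the static one.
glBindBuffer( GL_ARRAY_BUFFER, quadBuffer );
glVertexAttribPointer( positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray( positionAttrib );
glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );   // first = 0, count = 4: rotating quad
glDrawArrays( GL_TRIANGLE_STRIP, 4, 4 );   // first = 4, count = 4: static quad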
Here is my solution:
SHADER CODE:
<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 vPosition;
uniform float theta;
void
main()
{
float s = sin( theta );
float c = cos( theta );
gl_Position.x = -s * vPosition.x + c * vPosition.y;
gl_Position.y = s * vPosition.y + c * vPosition.x;
gl_Position.z = 0.0;
gl_Position.w = 1.0;
}
</script>
<script id="fragment-shader" type="x-shader/x-fragment">
precision mediump float;
void
main()
{
gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
}
</script>
JAVASCRIPT CODE:
var canvas;
var gl;
var theta = 0.0;
var thetaLoc;
var program;
var program1;
window.onload = function init()
{
    canvas = document.getElementById( "gl-canvas" );

    gl = WebGLUtils.setupWebGL( canvas );
    if ( !gl ) { alert( "WebGL isn't available" ); }

    //
    // Configure WebGL
    //
    gl.viewport( 0, 0, canvas.width, canvas.height );
    gl.clearColor( 1.0, 1.0, 1.0, 1.0 );

    // Load shaders and initialize attribute buffers
    program = initShaders( gl, "vertex-shader", "fragment-shader" );
    program1 = initShaders( gl, "vertex-shader", "fragment-shader" );

    // Rotating Rectangle
    var rr_vertices = [
        vec2( 0, 0.25 ),
        vec2( 0.25, 0 ),
        vec2( -0.25, 0 ),
        vec2( 0, -0.25 )
    ];

    // Load the data into the GPU
    rr_bufferId = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, rr_bufferId );
    gl.bufferData( gl.ARRAY_BUFFER, flatten(rr_vertices), gl.STATIC_DRAW );

    // Associate our shader variables with our data buffer
    rr_vPosition = gl.getAttribLocation( program, "vPosition" );
    gl.vertexAttribPointer( rr_vPosition, 2, gl.FLOAT, false, 0, 0 );

    // Static Rectangle
    var sr_vertices = [
        vec2( 0.5, 0.5 ),
        vec2( 1.0, 0.5 ),
        vec2( 0.5, 1.0 ),
        vec2( 1.0, 1.0 )
    ];

    // Load the data into the GPU
    sr_bufferId = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, sr_bufferId );
    gl.bufferData( gl.ARRAY_BUFFER, flatten(sr_vertices), gl.STATIC_DRAW );

    // Associate our shader variables with our data buffer
    sr_vPosition = gl.getAttribLocation( program, "vPosition" );

    render();
};
var rr_vPosition;
var sr_vPosition;
var rr_bufferId;
var sr_bufferId;
function render() {
    gl.clear( gl.COLOR_BUFFER_BIT );

    gl.useProgram( program1 );
    gl.enableVertexAttribArray( sr_vPosition );
    gl.bindBuffer( gl.ARRAY_BUFFER, sr_bufferId );
    gl.vertexAttribPointer( sr_vPosition, 2, gl.FLOAT, false, 0, 0 );
    gl.drawArrays( gl.TRIANGLE_STRIP, 0, 4 );

    gl.useProgram( program );
    thetaLoc = gl.getUniformLocation( program, "theta" );
    gl.enableVertexAttribArray( rr_vPosition );
    gl.bindBuffer( gl.ARRAY_BUFFER, rr_bufferId );
    gl.vertexAttribPointer( rr_vPosition, 2, gl.FLOAT, false, 0, 0 );
    theta += 0.1;
    gl.uniform1f( thetaLoc, theta );
    gl.drawArrays( gl.TRIANGLE_STRIP, 0, 4 );

    window.requestAnimFrame(render);
}

No transparency with simple OpenGL ES2.0 stencils

I am attempting to make a stencil mask in OpenGL. I have been following the model from this source (http://open.gl/depthstencils and more specifically, http://open.gl/content/code/c5_reflection.txt), and as far as I can tell, I have followed the example properly. My code is drawing one square stencil, and then another square on top of it. I expected to see only the parts of the second rotating green square that are covering the same space as the first. What I actually see is the two overlapping squares, one rotating with no transparency. One notable difference from the example is that I am not using a texture. Is that a problem? I figured that this would be a simpler example.
I'm fairly new to ES2.0, so if I'm doing something obviously stupid, please let me know.
Initialization:
GLuint attributes[] = { GLKVertexAttribPosition, GLKVertexAttribColor, GLKVertexAttribTexCoord0 };
const char *attributeNames[] = { "position", "color", "texcoord0" };
// These are all global GLuint variables
// vshSrc and fshSrc are const char* filenames (the files are found properly)
_myProgram = loadProgram(vshSrc, fshSrc, 3, attributes, attributeNames);
_myProgramUniformMVP = glGetUniformLocation(_myProgram, "modelViewProjectionMatrix");
_myProgramUniformTex = glGetUniformLocation(_myProgram, "tex");
_myProgramUniformOverrideColor = glGetUniformLocation(_myProgram, "overrideColor");
The draw loop:
glEnable(GL_DEPTH_TEST);
glUseProgram(_myProgram);
glDisable(GL_BLEND);
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLfloat gSquare[20] = { // not using the textures currently
// posX, posY, posZ, texX, texY,
-0.5f, -0.5f, 0, 0.0f, 0.0f,
0.5f, -0.5f, 0, 1.0f, 0.0f,
-0.5f, 0.5f, 0, 0.0f, 1.0f,
0.5f, 0.5f, 0, 1.0f, 1.0f
};
// Projection matrix
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
// Put the squares where they can be seen
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
glEnable(GL_STENCIL_TEST);
// Build the stencil
glStencilFunc(GL_ALWAYS, 1, 0xFF); // Set any stencil to 1
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF); // Write to stencil buffer
glDepthMask(GL_FALSE); // Don't write to depth buffer
glClear(GL_STENCIL_BUFFER_BIT); // Clear stencil buffer (0 by default)
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
// Draw a stationary red square for the stencil (though the color shouldn't matter)
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);
glUniform1i(_myProgramUniformTex, 0);
glUniform4f(_myProgramUniformOverrideColor, 1.0f, 1.0f, 1.0f, 0.0f);
glVertexAttrib4f(GLKVertexAttribColor, 1, 0, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Prepare the mask
glStencilFunc(GL_EQUAL, 1, 0xFF); // Pass test if stencil value is 1
glStencilMask(0x00); // Don't write anything to stencil buffer
glDepthMask(GL_TRUE); // Write to depth buffer
glUniform4f(_myProgramUniformOverrideColor, 0.3f, 0.3f, 0.3f,1.0f);
// A slow rotating green square to be masked by the stencil
static float rotation = 0;
rotation += 0.01;
baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, rotation, 0, 0, 1);
mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);//The transformation matrix
glUniform1i(_myProgramUniformTex, 0); // The texture
glVertexAttrib4f(GLKVertexAttribColor, 0, 1, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_STENCIL_TEST);
EDIT: The following shader stuff is irrelevant to the problem I was having. The stenciling does not take place in the shader.
Vertex Shader:
attribute vec4 position;
attribute vec4 color;
attribute vec2 texcoord0;
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform mat4 modelViewProjectionMatrix;
uniform vec4 overrideColor;
void main()
{
    colorVarying = overrideColor * color;
    texcoord = texcoord0;
    gl_Position = modelViewProjectionMatrix * position;
}
Fragment Shader:
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform sampler2D tex;
void main()
{
    gl_FragColor = colorVarying * texture2D(tex, texcoord);
}
It was necessary to initialize a stencil buffer. Here is the code that fixed it.
glGenRenderbuffersOES(1, &depthStencilRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH24_STENCIL8_OES, framebufferWidth, framebufferHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
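On ES 2.0 the same attachment can also be written with the core (unsuffixed) entry points; the packed GL_DEPTH24_STENCIL8_OES format still comes from the OES_packed_depth_stencil extension, which iOS exposes. A sketch, assuming framebufferWidth and framebufferHeight are already known:

// ES 2.0 spelling of the combined depth + stencil renderbuffer (sketch).
GLuint depthStencilRenderbuffer;
glGenRenderbuffers( 1, &depthStencilRenderbuffer );
glBindRenderbuffer( GL_RENDERBUFFER, depthStencilRenderbuffer );
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, framebufferWidth, framebufferHeight );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRenderbuffer );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRenderbuffer );

If the drawing goes through a GLKView, setting its drawableDepthFormat and drawableStencilFormat properties (for example GLKViewDrawableDepthFormat24 and GLKViewDrawableStencilFormat8) should have the same effect without creating the renderbuffer by hand.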

Using sample2D as position in vertex shader

I'm learning how to manipulate particles with an FBO in WebGL. I tried to store the positions in a texture and use it as a position reference, but nothing shows up on the stage.
Fragment Shaders
precision mediump float;
void main() {
    gl_FragColor = vec4(vec3(1.0), 1.0);
}
Vertex Render Shader script
attribute vec2 aTextureUV;
uniform sampler2D uTexture;
void main() {
    vec4 texture = texture2D( uTexture, aTextureUV );
    gl_Position = vec4( texture.rgb, 1.0 );
    gl_PointSize = 3.0; // gl_PointSize is a float, so use a float literal
}
JS Script
// Determine the UVs
var uvs = new Float32Array([
    0.0, 0.0,
    1.0, 0.0,
    0.0, 1.0,
    1.0, 1.0
]);
var uvBuffer = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, uvBuffer );
gl.bufferData( gl.ARRAY_BUFFER, uvs, gl.STATIC_DRAW );
gl.enableVertexAttribArray( defaultProgram.aTextureUVLoc );
gl.vertexAttribPointer( defaultProgram.aTextureUVLoc, 2, gl.FLOAT, false, 0, 0);
// Texture initialization
for (var i = 0; i < particleCount; i++) {
    initialData.push(
        Math.random(),
        Math.random(),
        Math.random(),
        0
    );
}
var texture = gl.createTexture();
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture( gl.TEXTURE_2D, texture );
gl.pixelStorei( gl.UNPACK_ALIGNMENT, 1 );
gl.texImage2D(
gl.TEXTURE_2D, 0, gl.RGBA, fboWidth, fboWidth, 0,
gl.RGBA, gl.FLOAT, new Float32Array(initialData)
);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
.....
function draw() {
    requestAnimationFrame( draw );
    gl.viewport( 0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight );
    gl.clear( gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT );
    gl.drawArrays( gl.POINTS, 0, particleCount );
}
Can't comment yet, but just as a desktop GL remark, I can see that 1) there are no actual draw calls in your script i.e. glDrawElements/glDrawArrays (unless you've left some parts of it out), and 2) you're missing a fragment shader. Refer to e.g. this.

2D Programming with Direct3D 9 - Test image is distorted

I am trying to build a simple 2D game using 2D sprites with DirectX 9, and I'm having problems getting the images to come out cleanly. I'd like to load BMP images and display them on the screen as-is (no interpolation, no magnification, no filtering or anti-aliasing, etc.).
I'm sure I'm missing something, but when I try to render a 100x100 BMP to the screen, it looks choppy and distorted, the way pixel art normally looks when shrunk slightly. I want the BMP to look exactly as it does when opened in MS Paint.
Does anyone have any idea why this might be the case? My code is shown below:
Initialization code:
g_DxCom = Direct3DCreate9( D3D_SDK_VERSION );
if ( g_DxCom == NULL )
{
    return false;
}

D3DDISPLAYMODE d3dDisplayMode;
if ( FAILED( g_DxCom->GetAdapterDisplayMode( D3DADAPTER_DEFAULT, &d3dDisplayMode ) ) )
{
    return false;
}

D3DPRESENT_PARAMETERS d3dPresentParameters;
::ZeroMemory( &d3dPresentParameters, sizeof(D3DPRESENT_PARAMETERS) );
d3dPresentParameters.Windowed = FALSE;
d3dPresentParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dPresentParameters.BackBufferFormat = d3dDisplayMode.Format; // D3DFMT_X8R8G8B8
d3dPresentParameters.BackBufferWidth = d3dDisplayMode.Width;
d3dPresentParameters.BackBufferHeight = d3dDisplayMode.Height;
d3dPresentParameters.PresentationInterval = D3DPRESENT_INTERVAL_ONE;

if ( FAILED( g_DxCom->CreateDevice( D3DADAPTER_DEFAULT,
                                    D3DDEVTYPE_HAL,
                                    this->hWnd,
                                    D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                    &d3dPresentParameters,
                                    &pd3dDevice ) ) )
{
    if ( FAILED( g_DxCom->CreateDevice( D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_HAL,
                                        this->hWnd,
                                        D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                        &d3dPresentParameters,
                                        &pd3dDevice ) ) )
    {
        return false;
    }
}
texture = NULL;
bg_texture = NULL;
Render code:
LPDIRECT3DDEVICE9 g_dxDevice;
float float1 = 99.5f; // I'd like to render my 100x100 sprite from screen coordinates 100, 100 to 200, 200
float float2 = 198.5f;
CUSTOMVERTEX OurVertices[] =
{
    { float1, float2, 1.0f, 1.0f, 0.0f, 1.0f },
    { float1, float1, 1.0f, 1.0f, 0.0f, 0.0f },
    { float2, float1, 1.0f, 1.0f, 1.0f, 0.0f },
    { float1, float2, 1.0f, 1.0f, 0.0f, 1.0f },
    { float2, float1, 1.0f, 1.0f, 1.0f, 0.0f },
    { float2, float2, 1.0f, 1.0f, 1.0f, 1.0f }
};
LPDIRECT3DVERTEXBUFFER9 v_buffer;
g_dxDevice->CreateVertexBuffer( 6 * sizeof(CUSTOMVERTEX),
0,
CUSTOMFVF,
D3DPOOL_MANAGED,
&v_buffer,
NULL );
VOID* pVoid;
// Lock the vertex buffer into memory
v_buffer->Lock( 0, 0, &pVoid, 0 );
// Copy our vertex buffer to memory
::memcpy( pVoid, OurVertices, sizeof(OurVertices) );
// Unlock buffer
v_buffer->Unlock();
LPDIRECT3DTEXTURE9 g_texture;
HRESULT hError;
DWORD dwTextureFilter = D3DTEXF_NONE;
g_dxDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, dwTextureFilter );
g_dxDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, dwTextureFilter );
g_dxDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, dwTextureFilter );
g_dxDevice->SetTextureStageState(0,D3DTSS_COLOROP,D3DTOP_SELECTARG1);
g_dxDevice->SetTextureStageState(0,D3DTSS_COLORARG1,D3DTA_TEXTURE);
g_dxDevice->SetTextureStageState(0,D3DTSS_COLORARG2,D3DTA_DIFFUSE);
hError = D3DXCreateTextureFromFile( g_dxDevice, L"Test.bmp", &g_texture ); // 100x100 sprite
g_dxDevice->SetTexture( 0, g_texture );
g_dxDevice->Clear( 0,
NULL,
D3DCLEAR_TARGET,
D3DCOLOR_XRGB( 0, 40, 100 ),
1.0f,
0 );
g_dxDevice->BeginScene();
// Do rendering on the back buffer here
g_dxDevice->SetFVF( CUSTOMFVF );
g_dxDevice->SetStreamSource( 0, v_buffer, 0, sizeof(CUSTOMVERTEX) );
g_dxDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 6 );
g_dxDevice->EndScene();
g_dxDevice->Present( NULL, NULL, NULL, NULL );
g_texture->Release();
v_buffer->Release();
Okay, so I've finally figured it out, and I should have known this was the case.
It looks like DirectX 9 only works cleanly with textures whose dimensions are powers of two. If I change the texture so that the sprite square is 128 x 128 (just adding some transparency) and run the application with float2 changed appropriately, there is no distortion in the rendered image.
Hurrah...
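If you want to keep arbitrary sprite sizes, a common alternative is to pad the image up to the next power of two yourself and only map the used region with the texture coordinates (the distortion fits D3DX quietly rescaling the 100x100 image to a power-of-two size). A small sketch of the rounding, in plain C and purely illustrative:

/* Round a dimension up to the next power of two, e.g. 100 -> 128. */
unsigned int nextPowerOfTwo( unsigned int n )
{
    unsigned int p = 1;
    while ( p < n )
        p <<= 1;
    return p;
}

/* A 100x100 sprite padded into a 128x128 texture would then be drawn with
   texture coordinates running from 0.0 to 100.0f / 128.0f instead of 0.0 to 1.0. */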
