Drawing a sprite from a spritesheet using XNA/MonoGame

I have a spritesheet with 2 sprites. Each sprite is 40x60. The total size of the image is 40x120.
It looks like this (the red line is part of the 1st sprite). I'll tell you in a second why I added that.
It seems that for some reason, when drawing the second sprite, it always samples the last row of pixels of the previous sprite as well. I have drawn that red line to illustrate this.
This is my code that draws the 2nd sprite:
Rectangle rect = new Rectangle(0, 60, 40, 60); // Choose 2nd sprite
Vector2 pos = new Vector2(100, 100);
Vector2 origin = new Vector2(0, 0);
spriteBatch.Begin();
spriteBatch.Draw(mSpriteTexture, pos , rect, Color.White, 0.0f, origin, 6, SpriteEffects.None, 0.0f);
spriteBatch.End();
And this is how it looks when I run the program:
Any ideas what I'm doing wrong?
Note: For this example I'm using scale = 6. I did this because the problem seems to always happen when scale > 1; at scale = 1 it only happens some of the time.

I had the same issue and fixed it by using SamplerState.PointClamp in my spriteBatch.Begin call.
It looks like this (MonoGame 3.6, though):
SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied, SamplerState.PointClamp, null, null, null, viewMatrix);
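For completeness, the source rectangle for any sprite in a vertical sheet like this one can be computed from its index. A minimal sketch of that arithmetic (the helper name is hypothetical, with the 40x60 sprite size from the question):

```python
def source_rect(index, sprite_w=40, sprite_h=60):
    # Vertical spritesheet: sprite i occupies the strip starting at y = i * sprite_h.
    return (0, index * sprite_h, sprite_w, sprite_h)

# The second sprite (index 1) yields the same rectangle used in the question.
print(source_rect(1))  # (0, 60, 40, 60)
```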

Related

Is it possible to eliminate space between non-stroked polygons in Processing? If so, how?

I'm working on a Processing project to simulate very basic hard shadows. For the most part I've got it working: each edge of each object checks whether its back is facing the light, and if it is, a shadow polygon is added using that edge, projected away from the light.
However, when I tried to shift from solid shadows to transparent ones I ran into some problems. Namely, because the shadows are made of multiple shapes, their borders tend to overlap, making those areas darker than everywhere else:
I disabled the stroke on the shadows, which improved the effect but left small lines between the shadows, despite the edges for the polygons being identical:
Is there a way to eliminate this artifact? If so, how?
The solution is to not draw the shadows as separate pieces, but to draw the combined polygon of all the shadow pieces as one polygon.
Here's a little example that exhibits your problem:
void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  endShape();
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Notice the white line between the two polygons:
But if I instead draw the two polygons as one:
void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Then the white line goes away:

Rotating a sprite around its center

I am trying to figure out how to use the origin in Draw method to rotate a sprite around its center. I was hoping somebody could explain the correct usage of origin parameter in Draw method.
If I use the following Draw method (without any rotation or origin specified), the object is drawn at the correct/expected place:
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, 0.0f, Vector2.Zero, SpriteEffects.None, 0);
However, if I use the origin and rotation as shown below, the object rotates around its center, but it floats above the expected place (by around 20 pixels):
Vector2 origin = new Vector2(myTexture.Width / 2 , myTexture.Height / 2 );
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, ballRotation, origin, SpriteEffects.None, 0);
Even if I set ballRotation to 0, the object is still drawn above the expected place:
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, 0.0f, origin, SpriteEffects.None, 0);
It seems that just by setting the origin, the placement of the object changes.
Can somebody tell me how to use the origin parameter correctly?
Solution:
Davor's response made the usage of origin clear.
The following change was required in the code to make it work:
Vector2 origin = new Vector2(myTexture.Width / 2 , myTexture.Height / 2 );
destinationRectangle.X += destinationRectangle.Width/2;
destinationRectangle.Y += destinationRectangle.Height / 2;
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, ballRotation, origin, SpriteEffects.None, 0);
This is the correct use of origin. But your position now also refers to the center: it's no longer the top-left corner, and the sprite is offset by width/2 and height/2 from where it was before setting the origin.
So if your texture is 20x20, you need to add 10 (width/2) to X and 10 (height/2) to Y and you will have the original position.
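That compensation is plain arithmetic; a minimal sketch (the helper name is hypothetical, and it assumes the position originally referred to the sprite's top-left corner):

```python
def compensate_for_center_origin(x, y, w, h):
    # With the origin at the texture center, Draw places that center at the
    # given position, so shift the position by half the size to keep the
    # sprite's top-left corner where it originally was.
    return (x + w // 2, y + h // 2)

# A 20x20 texture drawn at (50, 50) needs its position moved to (60, 60).
print(compensate_for_center_origin(50, 50, 20, 20))  # (60, 60)
```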

In XNA, what's the difference between the rectangle width and the texture's width in a sprite?

Let's say you did this: spriteBatch.Draw(myTexture, myRectangle, Color.White);
And you have this:
myTexture = Content.Load<Texture2D>("myCharacterTransparent");
myRectangle = new Rectangle(10, 100, 30, 50);
Ok, so now we have a rectangle width of 30. Let's say the myTexture's width is 100.
So with the first line, does the sprite's width become 30 because that's the width you set on the rectangle, while myTexture's width stays 100? Or does the sprite's width become 100 because that's the width of the texture?
The rectangles used by the Draw method define which part of the Texture2D should be drawn to which part of the render target (usually the screen).
This is how we use tilesets, for instance:
class Tile
{
    public int Index;
    public Vector2 Position;
}

Texture2D tileset = Content.Load<Texture2D>("sometiles"); // 128x128 of 32x32-sized tiles
Rectangle source = new Rectangle(0, 0, 32, 32); // We set the dimensions here.
Rectangle destination = new Rectangle(0, 0, 32, 32); // We set the dimensions here.
List<Tile> myLevel = LoadLevel("level1");
// the tileset is 4x4 tiles

in Draw:

spriteBatch.Begin();
foreach (var tile in myLevel)
{
    source.X = (tile.Index % 4) * 32; // column within the tileset
    source.Y = (tile.Index / 4) * 32; // row within the tileset
    destination.X = (int)tile.Position.X;
    destination.Y = (int)tile.Position.Y;
    spriteBatch.Draw(tileset, destination, source, Color.White);
}
spriteBatch.End();
Note that this Draw overload takes the destination rectangle before the source rectangle: Draw(texture, destinationRectangle, sourceRectangle, color).
Edit: Using only the source rectangle lets you draw just a piece of the texture at a position on the screen, while using only the destination rectangle lets you scale the whole texture to fit wherever you want it.
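The index-to-source-rectangle mapping for a 4x4 tileset of 32x32 tiles can be sketched as plain arithmetic (the helper name is hypothetical):

```python
TILES_PER_ROW = 4
TILE_SIZE = 32

def tile_source(index):
    # Column comes from the remainder, row from integer division.
    x = (index % TILES_PER_ROW) * TILE_SIZE
    y = (index // TILES_PER_ROW) * TILE_SIZE
    return (x, y, TILE_SIZE, TILE_SIZE)

print(tile_source(5))  # second row, second column: (32, 32, 32, 32)
```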

How do I draw thousands of squares with glkit, opengl es2?

I'm trying to draw up to 200,000 squares on the screen, or a lot of squares basically. I believe I'm just making way too many draw calls, and it's crippling the performance of the app. The squares only update when I press a button, so I don't necessarily have to update this every frame.
Here's the code i have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    //static float transY = 0.0f;
    //float y = sinf(transY)/2.0f;
    //transY += 0.175f;
    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
    effect.transform.modelviewMatrix = modelview;
    //GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;
    _isOpenGLViewReady = YES;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if(_model.updateView && _isOpenGLViewReady)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        [effect prepareToDraw];
        int pixelSize = _model.pixelSize;
        if(!_model.isReady)
            return;
        //NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
        for(int i = 0; i < _model.rows; i++)
        {
            for(int ii = 0; ii < _model.columns; ii++)
            {
                ColorModel *color = [_model getColorAtRow:i andColumn:ii];
                CGRect rect = CGRectMake(ii * pixelSize, i * pixelSize, pixelSize, pixelSize);
                //[self drawRectWithRect:rect withColor:c];
                GLubyte squareColors[] = {
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255
                };
                //NSLog(@"Drawing color with red: %d", color.red);
                int xVal = rect.origin.x;
                int yVal = rect.origin.y;
                int width = rect.size.width;
                int height = rect.size.height;
                GLfloat squareVertices[] = {
                    xVal, yVal, 1,
                    xVal + width, yVal, 1,
                    xVal, yVal + height, 1,
                    xVal + width, yVal + height, 1
                };
                glEnableVertexAttribArray(GLKVertexAttribPosition);
                glEnableVertexAttribArray(GLKVertexAttribColor);
                glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
                glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glDisableVertexAttribArray(GLKVertexAttribPosition);
                glDisableVertexAttribArray(GLKVertexAttribColor);
            }
        }
        _model.updateView = YES;
    }
}
First, do you really need to draw 200,000 squares? Your viewport only has 786,000 pixels total. You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send all that in a single draw call. This will significantly reduce your OpenGL ES API call overhead.
One other thing you could look into would be to use vertex buffer objects to store this geometry. If the geometry is static, you can avoid sending it on each drawn frame, or only update the part of it that has changed. Even if you just change out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on modern iOS devices.
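The batching suggestion, building all of the square geometry on the CPU first and issuing a single draw call, can be sketched as pure geometry generation (the GL buffer upload and the single GL_TRIANGLES draw call are omitted, and the helper names are hypothetical):

```python
def square_triangles(x, y, size):
    # Two triangles (six vertices) per square, so many squares can be
    # concatenated and drawn with one GL_TRIANGLES call instead of one
    # GL_TRIANGLE_STRIP call per square.
    x2, y2 = x + size, y + size
    return [(x, y), (x2, y), (x, y2),
            (x2, y), (x2, y2), (x, y2)]

def batch_grid(rows, cols, size):
    vertices = []
    for row in range(rows):
        for col in range(cols):
            vertices.extend(square_triangles(col * size, row * size, size))
    return vertices

# 2x3 grid of 8-pixel squares -> 6 squares * 6 vertices each = 36 vertices.
print(len(batch_grid(2, 3, 8)))  # 36
```

The same idea scales to the 200,000-square case: one large vertex array (plus a matching color array), uploaded once, drawn with one call.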
Can you not try to optimize it somehow? I'm not terribly familiar with graphics type stuff, but I'd imagine that if you are drawing 200,000 squares chances that all of them are actually visible seems to be unlikely. Could you not add some sort of isVisible tag for your mySquare class that determines whether or not the square you want to draw is actually visible? Then the obvious next step is to modify your draw function so that if the square isn't visible, you don't draw it.
Or are you asking for someone to try to improve the current code you have, because if your performance is as bad as you say, I don't think making small changes to the above code will solve your problem. You'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once.
Alternatively, if scaling up the contents of your ColorModel is the only reason that you’re using OpenGL ES and GLKit, you might be better off wrapping your color values in a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?

XNA - Drawing zoomed 2D sprites

I have a tile based game. Each tile texture is loaded and then I draw each one next to the other, forming a continuous background. I actually followed this tutorial for the xml files.
http://www.xnadevelopment.com/tutorials/looksleveltome/looksleveltome.shtml
The sources of the textures are 50x50.
However, it only works if the scale is 1 (or lower); if the scale is greater, the results look like this.
Normal size (50 pixels, scale 1):
http://imageshack.us/photo/my-images/525/smallzp.jpg/
Larger size (zoomed, or 100 pixels in the xml file):
http://imageshack.us/photo/my-images/577/largeki.jpg/
We can see there are lines between the tiles which are not in the texture. It's actually not so bad here, but here's what it does in my game tileset:
http://imageshack.us/photo/my-images/687/zoomedsize.png/
The same effect is present whether I increase the tile size in the xml file, change the scale when drawing or use my camera to zoom.
// zoom code
public Matrix GetTransformation()
{
    return
        Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
        Matrix.CreateRotationZ(Rotation) *
        Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
        Matrix.CreateTranslation(new Vector3(_device.Viewport.Width * 0.5f, _device.Viewport.Height * 0.5f, 0));
}
// draw
_spriteBatch.Begin(SpriteSortMode.Immediate,
    BlendState.AlphaBlend, null, null, null, null,
    _camera.GetTransformation());

// for each tile
theSpriteBatch.Draw(mSpriteTexture, Position, Source,
    Color.Lerp(Color.White, Color.Transparent, mAlphaValue),
    mRotation, new Vector2(mSource.Width / 2, mSource.Height / 2),
    Scale, SpriteEffects.None, mDepth);
Is there a reason for this? A way to fix it to have a continuous texture when zoomed?
The problem is in your sampler state: with linear filtering, the GPU samples neighboring texels and interpolates them, which pulls in colors from the adjacent tiles at the texture edges.
Use SamplerState.PointClamp in your spriteBatch.Begin() and it will be fixed.
