The issue of programmatically drawing lines using XNA has been covered here. However, I want to allow a user to draw on a canvas as one would with a drawing app such as MS Paint.
Of course, this requires that each change in the mouse pointer's x and/or y coordinate results in another "dot" of the line being drawn on the canvas, in the crayon color, in real time.
In the mouse move event, what XNA API considerations come into play in order to draw the line point by point? Literally, of course, I'm not drawing a line as such, but rather a sequence of "dots". Each "dot" can, and probably should, be larger than a single pixel. Think of drawing with a felt tip pen.
The article you provided suggests a method of drawing lines with primitives; vector graphics, in other words. Applications like Paint are mostly pixel based (even though more advanced software like Photoshop has vector and rasterization features).
Bitmap editor
Since you want it to be "Paint-like" I would definitely go with the pixel based approach:
Create a grid of color values. (Use System.Drawing.Bitmap or implement your own class; note that Bitmap is sealed, so it cannot be extended.)
Start the (game) loop:
Process input and update the color values in the grid accordingly.
Convert the Bitmap to a Texture2D.
Use a sprite batch or custom renderer to draw the texture to the screen.
Save the bitmap, if you want.
Drawing on the bitmap
I've added a rough draft of the image class I'm using at the bottom of this answer, but the code should be quite self-explanatory anyway.
As mentioned before, you also need to implement a method for converting the image to a Texture2D and drawing it to the screen.
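A minimal sketch of that conversion (my own addition, assuming the Image class shown at the bottom of this answer with its Width/Height properties): flatten the grid into a Color[] and upload it with Texture2D.SetData. Create the Texture2D once (for example in LoadContent) and refresh it each frame instead of allocating a new one.

public static void CopyToTexture(Image image, Texture2D texture)
{
    var data = new Color[image.Width * image.Height];

    for (int y = 0; y < image.Height; y++)
        for (int x = 0; x < image.Width; x++)
            data[y * image.Width + x] = image.Pixels[x, y]; // row-major layout expected by SetData

    texture.SetData(data);
}

The texture can then be drawn with a plain spriteBatch.Draw(texture, position, Color.White) call.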
First we create a new 10x10 image and set all pixels to white.
var image = new Image(10, 10);
image.Initialize(() => Color.White);
Next we set up a brush. A brush is in essence just a function that is applied to the whole image. In this case the function should set all pixels inside the specified circle to a dark red color.
// Create a circular brush
float brushRadius = 2.5f;
int brushX = 4;
int brushY = 4;
Color brushColor = new Color(0.5f, 0, 0, 1); // dark red
Now we apply the brush. See this SO answer of mine on how to identify the pixels inside a circle.
You can use mouse input for the brush offsets and enable the user to actually draw on the bitmap.
double radiusSquared = brushRadius * brushRadius;
image.Modify((x, y, oldColor) =>
{
// Use the circle equation
int deltaX = x - brushX;
int deltaY = y - brushY;
double distanceSquared = Math.Pow(deltaX, 2) + Math.Pow(deltaY, 2);
// Current pixel lies inside the circle
if (distanceSquared <= radiusSquared)
{
return brushColor;
}
return oldColor;
});
You could also interpolate between the brush color and the old pixel. For example, you can implement a "soft" brush by letting the blend amount depend on the distance between the brush center and the current pixel.
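A minimal sketch of such a soft brush (my own variation, using XNA's built-in Color.Lerp):

image.Modify((x, y, oldColor) =>
{
    int deltaX = x - brushX;
    int deltaY = y - brushY;
    double distance = Math.Sqrt(deltaX * deltaX + deltaY * deltaY);

    if (distance > brushRadius)
        return oldColor; // outside the brush: keep the old pixel

    // blend amount: 1 at the centre of the brush, fading to 0 at its edge
    float amount = 1f - (float)(distance / brushRadius);
    return Color.Lerp(oldColor, brushColor, amount);
});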
Drawing a line
In order to draw a freehand line simply apply the brush repeatedly, each time with a different offset (depending on the mouse movement):
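A rough sketch of what that could look like in Update() (my own code, not from the original answer; previousPosition is assumed to be a field holding last frame's mouse position, and ApplyBrush is a hypothetical helper wrapping the Modify call shown above with the given centre):

MouseState mouse = Mouse.GetState();
Vector2 current = new Vector2(mouse.X, mouse.Y);

if (mouse.LeftButton == ButtonState.Pressed)
{
    // stamp the brush along the segment between the previous and current position
    // so that fast mouse movement leaves no gaps
    float length = Vector2.Distance(previousPosition, current);
    int steps = Math.Max(1, (int)(length / (brushRadius * 0.5f))); // overlap the stamps a little

    for (int i = 0; i <= steps; i++)
    {
        Vector2 p = Vector2.Lerp(previousPosition, current, i / (float)steps);
        ApplyBrush(image, (int)p.X, (int)p.Y, brushRadius, brushColor);
    }
}

previousPosition = current;

If the canvas is scaled or offset on screen, remember to transform the mouse coordinates into image coordinates first.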
Custom image class
I obviously skipped some necessary properties, methods and data validation, but you get the idea:
public class Image
{
public int Width { get; private set; }
public int Height { get; private set; }
public Color[,] Pixels { get; private set; }
public Image(int width, int height)
{
Width = width;
Height = height;
Pixels = new Color[width, height];
}
public void Initialize(Func<Color> createColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Pixels[x, y] = createColor();
}
}
}
public void Modify(Func<int, int, Color, Color> modifyColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Color current = Pixels[x, y];
Pixels[x, y] = modifyColor(x, y, current);
}
}
}
}
Related
I would like to create a brush for drawing on a PGraphics element with Processing. I would like past brush strokes to remain visible. However, since the PGraphics element is redrawn every frame, previous brush strokes disappear immediately.
My idea was then to create PGraphics pg in setup(), make a copy of it in draw(), alter the original graphic pg and update the copy every frame. This produces a NullPointerException, most likely because pg is defined locally in setup().
This is what I have got so far:
PGraphics pg;
PFont font;
void setup (){
font = createFont("Pano Bold Kopie.otf", 600);
size(800, 800, P2D);
pg = createGraphics(800, 800, P2D);
pg.beginDraw();
pg.background(0);
pg.fill(255);
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
copy(pg, 0, 0, width, height, 0, 0, width, height);
loop();
int c;
loadPixels();
for (int x=0; x<width; x++) {
for (int y=0; y<height; y++) {
pg.pixels[mouseX+mouseY*width]=0;
}
}
updatePixels();
}
My last idea, which I have not attempted to implement yet, is to append the pixels touched by the mouse to a list and to draw from this list each frame. But this seems quite complicated to me, as it might result in very long arrays that need to be processed on top of the original image. So I hope there is another way around this!
EDIT: My goal is to create a smudge brush, hence a brush which kind of copies areas from one part of the image to other parts.
There's no need to manually copy pixels like that. The PGraphics class extends PImage, which means you can simply render it with image(pg,0,0); for example.
The other thing you could do is an old trick to fade the background: instead of clearing the pixels completely, you render a sketch-sized, slightly opaque rectangle with no stroke.
Here's a quick proof of concept based on your code:
PFont font;
PGraphics pg;
void setup (){
//font = createFont("Pano Bold Kopie.otf", 600);
font = createFont("Verdana",600);
size(800, 800, P2D);
// clear main background once
background(0);
// prep fading background
noStroke();
// black fill with 10/255 transparency
fill(0,10);
pg = createGraphics(800, 800, P2D);
pg.beginDraw();
// leave the PGraphics instance transparent
//pg.background(0);
pg.fill(255);
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
// test with mouse pressed
if(mousePressed){
// slowly fade/clear the background by drawing a slightly opaque rectangle
rect(0,0,width,height);
}
// don't clear the background, render the PGraphics layer directly
image(pg, mouseX - pg.width / 2, mouseY - pg.height / 2);
}
If you hold the mouse pressed you can see the fade effect.
(changing the transparency from 10 to a higher value will make the fade quicker)
Update: To create a smudge brush you can still sample pixels and then manipulate the sampled colours to some degree. There are many ways to implement a smudge effect, depending on what you want to achieve visually.
Here's a very rough proof of concept:
PFont font;
PGraphics pg;
int pressX;
int pressY;
void setup (){
//font = createFont("Pano Bold Kopie.otf", 600);
font = createFont("Verdana",600);
size(800, 800, P2D);
// clear main background once
background(0);
// prep fading background
noStroke();
// black fill with 10/255 transparency
fill(0,10);
pg = createGraphics(800, 800, JAVA2D);
pg.beginDraw();
// leave the PGraphics instance transparent
//pg.background(0);
pg.fill(255);
pg.noStroke();
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
image(pg,0,0);
}
void mousePressed(){
pressX = mouseX;
pressY = mouseY;
}
void mouseDragged(){
// sample the colour where mouse was pressed
color sample = pg.get(pressX,pressY);
// calculate the distance from where the "smudge" started to where it is
float distance = dist(pressX,pressY,mouseX,mouseY);
// map this distance to transparency so the further the distance, the less smudge (short distance = high alpha, large distance = small alpha)
float alpha = map(distance,0,30,255,0);
// map distance to "brush size"
float size = map(distance,0,30,30,0);
// extract r,g,b values
float r = red(sample);
float g = green(sample);
float b = blue(sample);
// set new r,g,b,a values
pg.beginDraw();
pg.fill(r,g,b,alpha);
pg.ellipse(mouseX,mouseY,size,size);
pg.endDraw();
}
As the comments mention, one idea is to sample the colour on press, then use that sampled colour and fade it as you drag away from the source area. This shows reading just a single pixel; you may want to experiment with sampling/reading more pixels (e.g. a rectangle or ellipse); see the sketch at the end of this answer.
Additionally, the code above isn't optimised. A few things could be sped up a bit, like reading pixels, extracting colours and calculating distances.
For example:
void mouseDragged(){
// sample the colour where mouse was pressed
// (call loadPixels() first so pg.pixels[] is populated)
pg.loadPixels();
color sample = pg.pixels[pressX + (pressY * pg.width)];
// calculate the distance from where the "smudge" started to where it is (can use manual distance squared if this is too slow)
float distance = dist(pressX,pressY,mouseX,mouseY);
// map this distance to transparency so the further the distance, the less smudge (short distance = high alpha, large distance = small alpha)
float alpha = map(distance,0,30,255,0);
// map distance to "brush size"
float size = map(distance,0,30,30,0);
// extract r,g,b values
int r = (sample >> 16) & 0xFF; // Like red(), but faster
int g = (sample >> 8) & 0xFF;
int b = sample & 0xFF;
// set new r,g,b,a values
pg.beginDraw();
pg.fill(r,g,b,alpha);
pg.ellipse(mouseX,mouseY,size,size);
pg.endDraw();
}
The idea is to start simple with clear, readable code and only at the end, if needed look into optimisations.
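If you want to sample a small region instead of a single pixel, a hypothetical variation of mouseDragged() could look like this (my own sketch; pg, pressX and pressY are the same variables used in the sketches above):

void mouseDragged() {
  int s = 20;                                                  // sample size in pixels
  PImage patch = pg.get(pressX - s / 2, pressY - s / 2, s, s); // read a square region
  float distance = dist(pressX, pressY, mouseX, mouseY);
  float alpha = map(distance, 0, 30, 255, 0);                  // fade with drag distance
  pg.beginDraw();
  pg.tint(255, alpha);                                         // keep colours, reduce opacity
  pg.image(patch, mouseX - s / 2, mouseY - s / 2);
  pg.noTint();
  pg.endDraw();
}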
I have been stuck on this problem for about 20 hours.
The quality is not very good because, on 1080p video, the minimap is less than 300 x 300 px.
I want to detect the 10 hero circles in this image:
Like this:
For background removal, I can use this:
The hero portrait circles have a radius between 8 and 12 px, because a hero portrait is about 21 x 21 px.
With this code:
Mat minimapMat = Imgcodecs.imread("minimap.png");
Mat minimapCleanMat = Imgcodecs.imread("minimapClean.png");
Mat minimapDiffMat = new Mat();
Core.subtract(minimapMat, minimapCleanMat, minimapDiffMat);
I obtain this:
Now I apply circle detection on it:
findCircles(minimapDiffMat);
public static void findCircles(Mat imgSrc) {
Mat img = imgSrc.clone();
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Mat edges = new Mat();
int lowThreshold = 40;
int ratio = 3;
Imgproc.Canny(gray, edges, lowThreshold, lowThreshold * ratio);
Mat circles = new Mat();
Vector<Mat> circlesList = new Vector<Mat>();
Imgproc.HoughCircles(edges, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 10, 5, 20, 7, 15);
double x = 0.0;
double y = 0.0;
int r = 0;
for (int i = 0; i < circles.rows(); i++) {
for (int k = 0; k < circles.cols(); k++) {
double[] data = circles.get(i, k);
for (int j = 0; j < data.length; j++) {
x = data[0];
y = data[1];
r = (int) data[2];
}
Point center = new Point(x, y);
// circle center
Imgproc.circle(img, center, 3, new Scalar(0, 255, 0), -1);
// circle outline
Imgproc.circle(img, center, r, new Scalar(0, 255, 0), 1);
}
}
HighGui.imshow("cirleIn", img);
}
The results are not OK, detecting only 2 out of 10:
I have tried with a KNN background too:
With less success.
Any tips? Thanks a lot in advance.
The problem is that your minimap contains highlighted parts (possibly around active players), rendering your background removal inoperable. Why not threshold the highlighted colors out of the image? From what I see there are just a few of them. I do not use OpenCV, so I gave it a shot in C++; here is the result:
int x,y;
color c0,c1,c;
picture pic0,pic1,pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs,pic0.ys);
pic2.resize(pic0.xs,pic0.ys);
// process all pixels
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
// get both colors without alpha
c0.dd=pic0.p[y][x].dd&0x00FFFFFF;
c1.dd=pic1.p[y][x].dd&0x00FFFFFF; c=c1;
// threshold 0xAARRGGBB distance^2
if (distance2(c1,color(0x00EEEEEE))<2000) c.dd=0; // white-ish rectangle
if (distance2(c1,color(0x00889971))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x005A6443))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x0021A2C2))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x002A6D70))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x00439D96))<2000) c.dd=0; // aqua water
if (distance2(c1,c0 )<2500) c.dd=0; // close to background
pic2.p[y][x]=c;
}
pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth(); // blur a little
pic2.save("out1.png");
pic2.threshold(0,80,765,0x00000000); // set dark pixels (<80) to black (0) and rest to white (3*255)
pic2.pixel_format(_pf_rgba);// convert back to RGB
pic2.save("out2.png");
So you need to find the OpenCV counterparts to this. The thresholds are color distance^2 (so I do not need sqrt), and it looks like 50^2 is ideal for <0,255> per-channel RGB vectors.
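A rough OpenCV (Java) counterpart could look like the sketch below (my own, untested on these images; Core.inRange uses a per-channel box rather than the Euclidean distance^2 above, so the tolerance is only an approximation, and the colours are the same reference values converted to BGR order):

// assumes the usual org.opencv.core.* imports
static Mat removeColours(Mat minimap, Mat cleanMap) {
    Mat mask = new Mat();
    Mat total = Mat.zeros(minimap.size(), CvType.CV_8UC1);

    // colours to suppress, in BGR order (taken from the thresholds above)
    Scalar[] colours = {
        new Scalar(0xEE, 0xEE, 0xEE), // white-ish rectangle
        new Scalar(0x71, 0x99, 0x88), // gray-ish path
        new Scalar(0x43, 0x64, 0x5A), // gray-ish path
        new Scalar(0xC2, 0xA2, 0x21), // aqua water
        new Scalar(0x70, 0x6D, 0x2A), // aqua water
        new Scalar(0x96, 0x9D, 0x43)  // aqua water
    };
    double t = 45; // per-channel tolerance

    for (Scalar c : colours) {
        Core.inRange(minimap,
                new Scalar(c.val[0] - t, c.val[1] - t, c.val[2] - t),
                new Scalar(c.val[0] + t, c.val[1] + t, c.val[2] + t), mask);
        Core.bitwise_or(total, mask, total);
    }

    // also mask pixels that are close to the clean background
    Mat diff = new Mat();
    Core.absdiff(minimap, cleanMap, diff);
    Core.inRange(diff, new Scalar(0, 0, 0), new Scalar(t, t, t), mask);
    Core.bitwise_or(total, mask, total);

    Mat result = minimap.clone();
    result.setTo(new Scalar(0, 0, 0), total); // black out everything that matched
    return result;
}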
I use my own picture class for images so some members are:
xs,ys is size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) clears entire image with color
resize(xs,ys) resizes image to new resolution
bmp is VCL encapsulated GDI Bitmap with Canvas access
pf holds actual pixel format of the image:
enum _pixel_format_enum
{
_pf_none=0, // undefined
_pf_rgba, // 32 bit RGBA
_pf_s, // 32 bit signed int
_pf_u, // 32 bit unsigned int
_pf_ss, // 2x16 bit signed int
_pf_uu, // 2x16 bit unsigned int
_pixel_format_enum_end
};
color and pixels are encoded like this:
union color
{
DWORD dd; WORD dw[2]; byte db[4];
int i; short int ii[2];
color(){}; color(color& a){ *this=a; }; ~color(){}; color* operator = (const color *a) { dd=a->dd; return this; }; /*color* operator = (const color &a) { ...copy... return this; };*/
};
The bands are:
enum{
_x=0, // dw
_y=1,
_b=0, // db
_g=1,
_r=2,
_a=3,
_v=0, // db
_s=1,
_h=2,
};
Here also the distance^2 between colors I used for thresholding:
DWORD distance2(color &a,color &b)
{
DWORD d,dd;
d=DWORD(a.db[0])-DWORD(b.db[0]); dd =d*d;
d=DWORD(a.db[1])-DWORD(b.db[1]); dd+=d*d;
d=DWORD(a.db[2])-DWORD(b.db[2]); dd+=d*d;
d=DWORD(a.db[3])-DWORD(b.db[3]); dd+=d*d;
return dd;
}
As input I used your images:
pic0:
pic1:
And here the (sub) results:
out0.png:
out1.png:
out2.png:
Now just remove the noise a bit (by blurring or by erosion) and apply your circle fitting or Hough transform...
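For example, a possible OpenCV (Java) continuation (again just a sketch of mine; cleaned stands for the cleaned-up image produced above). Note that HoughCircles runs its own Canny internally, so it is usually fed the blurred grayscale image rather than an edge map:

Mat gray = new Mat();
Imgproc.cvtColor(cleaned, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
Imgproc.erode(gray, gray, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3)));

Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1,     // accumulator resolution
        15,    // min distance between centres (portraits are ~21 px wide)
        100,   // param1: internal Canny high threshold
        10,    // param2: accumulator threshold; lower finds more (possibly false) circles
        8, 12  // min/max radius from the question
);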
[Edit1] circle detector
I gave it a bit of thought and implemented a simple detector. I just check the circumference points around every pixel position with a constant radius (the player circle), and if the number of set points is above a threshold, I have found a potential circle. This is better than using the whole disc area, as some of the player icons contain holes, and there are also more pixels to test ... Then I average circles that are close together and render the output ... Here is the updated code:
int i,j,x,y,xx,yy,x0,y0,r=10,d;
List<int> cxy; // circle circumferece points
List<int> plr; // player { x,y } list
color c0,c1,c;
picture pic0,pic1,pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs,pic0.ys);
pic2.resize(pic0.xs,pic0.ys);
// process all pixels
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
// get both colors without alpha
c0.dd=pic0.p[y][x].dd&0x00FFFFFF;
c1.dd=pic1.p[y][x].dd&0x00FFFFFF; c=c1;
// threshold 0xAARRGGBB distance^2
if (distance2(c1,color(0x00EEEEEE))<2000) c.dd=0; // white-ish rectangle
if (distance2(c1,color(0x00889971))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x005A6443))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x0021A2C2))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x002A6D70))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x00439D96))<2000) c.dd=0; // aqua water
if (distance2(c1,c0 )<2500) c.dd=0; // close to background
pic2.p[y][x]=c;
}
// pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth(); // blur a little
// pic2.save("out1.png");
pic2.threshold(0,80,765,0x00000000); // set dark pixels (<80) to black (0) and rest to white (3*255)
// compute player circle circumference points mask
x0=r-1; y0=r; x0*=x0; y0*=y0;
for (x=-r,xx=x*x;x<=r;x++,xx=x*x)
for (y=-r,yy=y*y;y<=r;y++,yy=y*y)
{
d=xx+yy;
if ((d>=x0)&&(d<=y0))
{
cxy.add(x);
cxy.add(y);
}
}
// get all potential player circles
x0=(5*cxy.num)/20;
for (y=r;y<pic2.ys-r;y+=2) // no need to step by single pixel ...
for (x=r;x<pic2.xs-r;x+=2)
{
for (d=0,i=0;i<cxy.num;)
{
xx=x+cxy.dat[i]; i++;
yy=y+cxy.dat[i]; i++;
if (pic2.p[yy][xx].dd>100) d++;
}
if (d>=x0) { plr.add(x); plr.add(y); }
}
// pic2.pixel_format(_pf_rgba);// convert back to RGB
// pic2.save("out2.png");
// average all circles too close together
pic2=pic1; // use original image again
pic2.bmp->Canvas->Pen->Color=TColor(0x0000FF00);
pic2.bmp->Canvas->Pen->Width=3;
pic2.bmp->Canvas->Brush->Style=bsClear;
for (i=0;i<plr.num;i+=2) if (plr.dat[i]>=0)
{
x0=plr.dat[i+0]; x=x0;
y0=plr.dat[i+1]; y=y0; d=1;
for (j=i+2;j<plr.num;j+=2) if (plr.dat[j]>=0)
{
xx=plr.dat[j+0];
yy=plr.dat[j+1];
if (((x0-xx)*(x0-xx))+((y0-yy)*(y0-yy))*10<=20*r*r) // if close
{
x+=xx; y+=yy; d++; // add to average
plr.dat[j+0]=-1; // mark as deleted
plr.dat[j+1]=-1;
}
}
x/=d; y/=d;
plr.dat[i+0]=x;
plr.dat[i+1]=y;
pic2.bmp->Canvas->Ellipse(x-r,y-r,x+r,y+r);
}
pic2.bmp->Canvas->Pen->Width=1;
pic2.bmp->Canvas->Brush->Style=bsSolid;
// pic2.save("out3.png");
As you can see, the core of the code is the same; I just added the detector at the end.
I also use my own dynamic list template, so:
List<double> xxx; is the same as double xxx[];
xxx.add(5); adds 5 to end of the list
xxx[7] access array element (safe)
xxx.dat[7] access array element (unsafe but fast direct access)
xxx.num is the actual used size of the array
xxx.reset() clears the array and set xxx.num=0
xxx.allocate(100) preallocate space for 100 items
And here the final result out3.png:
As you can see, it is a bit messed up when the players are very near each other (due to the circle averaging); with some tweaking you might get better results. But on second thought, it might be due to that small red circle nearby ...
I used VCL/GDI to render the circles, so just ignore/port the pic2.bmp->Canvas-> stuff to whatever you use.
As the populated image is lighter in the blue areas around the heroes, your background subtraction is of virtually no use.
I tried to improve by applying a gain of 3 to the clean image before subtraction and here is the result.
The background has disappeared, but the outlines of the heroes are severely damaged.
I looked at your case with other approaches and I consider that it is a very difficult one.
What I do when I want to do image processing is first open the image in a paint editor (I use Gimp). Then I manipulate the image until I end up with something that defines the parts I want to detect.
Generally, RGB is bad for a lot of computer vision tasks, and making it gray scale solves only a part of the problem.
A good start is trying to decompose the image to HSL instead.
Doing so on the first image, and only looking at the Hue channel gives me this:
Several of the blobs are quite well defined.
Playing a bit with the contrast and brightness of the Hue and Luminance layers and multiplying them gives me this:
It enhances the ring around the markers, which might be useful.
These methods all have corresponding functionality in OpenCV.
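For instance, the workflow above could roughly translate to OpenCV (Java) like this (my illustration, not code from the answer; assumes the usual org.opencv.core and java.util imports):

Mat hls = new Mat();
Imgproc.cvtColor(minimap, hls, Imgproc.COLOR_BGR2HLS);

List<Mat> channels = new ArrayList<>();
Core.split(hls, channels);
Mat hue = channels.get(0);       // H channel: several blobs are well defined here
Mat luminance = channels.get(1); // L channel

// boost contrast/brightness (alpha = gain, beta = offset), then multiply the two
Mat hueBoosted = new Mat(), lumBoosted = new Mat(), combined = new Mat();
hue.convertTo(hueBoosted, -1, 1.5, 20);
luminance.convertTo(lumBoosted, -1, 1.5, 20);
Core.multiply(hueBoosted, lumBoosted, combined, 1.0 / 255.0); // rescale to stay in 8-bit range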
It's a tricky task and you will most likely require several different filters and techniques to succeed. Hope this helps a bit. Good luck.
I'm making a sprite editor using JavaFX for use on desktops.
I'm attempting to implement zooming functionality, but I've run into a problem: I can't figure out how to disable image smoothing on a Canvas object.
I'm calling Canvas.setScaleX() and Canvas.setScaleY() as per every tutorial implementing Canvas zooming. But my image appears blurred when zoomed in.
I have some test code here to demonstrate.
As this is a sprite editor, it's important for me to have crisp edges to work with. The alternative to fixing image smoothing on the Canvas is to have a non-smoothing ImageView, and have a hidden Canvas to draw on, which I would rather avoid.
Help is appreciated.
(Here's a link to a related question, but it doesn't address my particular problem.)
I was having the same issue with the blurring.
In my case, my computer has Retina Display. Retina Display causes a pixel to be rendered with sub-pixels. When drawing images to the canvas, the image would be drawn with antialiasing for the sub-pixels. I have not found a way to prevent this antialiasing from occurring (although it is possible with other canvas technologies such as HTML5's Canvas)
In the meantime, I have a work-around (albeit I'm concerned about performance):
public class ImageRenderer {
public void render(GraphicsContext context, Image image, int sx, int sy, int sw, int sh, int tx, int ty) {
PixelReader reader = image.getPixelReader();
PixelWriter writer = context.getPixelWriter();
for (int x = 0; x < sw; x++) {
for (int y = 0; y < sh; y++) {
Color color = reader.getColor(sx + x, sy + y);
if (color.isOpaque()) {
writer.setColor(tx + x, ty + y, color);
}
}
}
}
}
The PixelWriter bypasses the anti-aliasing that occurs when drawing the image.
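As a side note (my addition; worth checking against the JavaFX version you target): JavaFX 12 and newer expose an image-smoothing switch directly on the canvas's GraphicsContext, which avoids the per-pixel copy entirely. canvas and spriteImage below are placeholders for your own objects:

GraphicsContext gc = canvas.getGraphicsContext2D();
gc.setImageSmoothing(false);      // available since JavaFX 12
gc.drawImage(spriteImage, 0, 0);  // drawn without interpolation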
I have this image that contains a few objects of different colors. The background of the image is white.
I need to find the top-left and bottom-right points so I can crop the image to the objects' bounds.
The image below shows just one gray object (excluding the small dots and labels) that I need to crop, but first I need to get these extreme points.
// Extract the bitmap data from the image
unsigned char* imageData= [self extractImageDataForImage:self.image];
// Iterate through the matrix and compare pixel colors
for (int i=0; i< height; i++){
for(int j=0; j<width*4; j+=4){ // assuming we extracted the RGBA image, therefore 4 bytes per pixel, one per component
int pixelIndex= (i*width*4) + j;
MyColorImpl* pixelColor= [self colorForPixelAtIndex:pixelIndex imageData:imageData];
if( [self isColorWhite:pixelColor] ){
// we're not interested in white pixels
}else{
// The reason not to use UI color a few lines above is so you can compare colors in the way you want.
// You can say that two colors are equal if the difference for each component is not larger than x.
// That way you can locate pixels with equal color even if they are almost the same color.
// Let's say current color is yellow
// Get the object that contains the info for the yellow drawable
MyColoredObjectInformation* info= [self.coloredObjectDictionary objectForKey:pixelColor.description];
if(!info){
//it doesn't exist. So lets create it and map it to the yellow color
info= [MyColoredObjectInformation new];
[self.coloredObjectDictionary setObject:info forKey:pixelColor.description];
}
// get x and y for the current pixel
float pixelX= j / 4; // j is the byte offset within the row, 4 bytes per pixel
float pixelY= i;
if(pixelX < info.xMin)
info.xMin= pixelX;
if(pixelX > info.xMax)
info.xMax= pixelX;
if(pixelY > info.yMax)
info.yMax= pixelY;
if(pixelY < info.yMin)
info.yMin= pixelY;
}
}
}
// don't forget to free the array (since it's been allocated dynamically in extractImageDataForImage:)
free(imageData);
Don't forget to set xMin, xMax, yMin and yMax to appropriate values for each object
@implementation MyColoredObjectInformation
-(id)init{
if( self= [super init]){
// start with opposite extremes so the first matching pixel always updates them
self.xMin= INT_MAX;
self.xMax= -1;
self.yMin= INT_MAX;
self.yMax= -1;
}
return self;
}
One thing that might happen when converting the image to the data array is that the pixels don't go top to bottom and left to right. The image can end up rotated when you convert it to a CGImage. In that case, you'll just have a different formula for pixelIndex, pixelX and pixelY.
At the end, just iterate through the values of self.coloredObjectDictionary, and for each color you will have two points that represent the rect around the object: p1(xMin, yMin) and p2(xMax, yMax).
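The crop itself could then be done with Core Graphics, for example (hypothetical snippet, assuming info holds the extremes of the object you want to extract):

// build the crop rect from the tracked extremes and crop the original image
CGRect cropRect = CGRectMake(info.xMin, info.yMin,
                             info.xMax - info.xMin + 1,
                             info.yMax - info.yMin + 1);
CGImageRef croppedRef = CGImageCreateWithImageInRect(self.image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);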
I'm currently working on a top-down game using MonoGame that uses tiles to indicate whether a position is walkable or not. Tiles have a size of 32x32 (as do their images).
A 200 x 200 grid is filled with wall tiles (a random generator is supposed to create a path and rooms), but when I draw all the tiles on the screen, a lot of tiles go missing. Below is an image where, after position (x81, y183), the tiles are simply not drawn.
http://puu.sh/3JOUO.png
The code used to fill the array puts a wall tile in the grid; the position of each tile is its array position multiplied by the tile size (32x32). The parent is used for the camera position.
public override void Fill(IResourceContainer resourceContainer)
{
for (int i = 0; i < width; i++)
for (int j = 0; j < height; j++)
{
objectGrid[i, j] = new Wall(resourceContainer);
objectGrid[i, j].Parent = this;
objectGrid[i, j].Position = new Vector2(i * TileWidth, j * TileHeight);
}
}
When drawing I just loop through all tiles and draw them accordingly. This is what happens in the Game.Draw function:
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Yellow);
// TODO: Add your drawing code here
spriteBatch.Begin();
map.Draw(gameTime, spriteBatch);
spriteBatch.End();
base.Draw(gameTime);
}
The map.Draw function calls this function, which basically draws each tile. I tried putting a counter on how many times the draw call for each tile was hit, and every update the draw function is called 40,000 times, which is the number of tiles I use. So it draws them all, but I still don't see them all on the screen.
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
for (int i = 0; i < width; i++)
for (int j = 0; j < height; j++)
{
if (objectGrid[i, j] != null)
{
objectGrid[i, j].Draw(gameTime, spriteBatch);
}
}
}
This is the code for drawing a tile, where currentImage is 0 at all times and GlobalPosition is the position of the tile minus the camera position.
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
if (visible)
spriteBatch.Draw(textures[currentImage], GlobalPosition, null, color, 0f, -Center, 1f, SpriteEffects.None, 0f);
}
My apologies for the wall of code. It all looks very simple to me, yet I can't seem to find out why it is not drawing all of the tiles. For the tiles that are not drawn, visible is still true and currentImage is 0, as it should be.
The MonoGame SpriteBatch still has some bugs and errors when drawing a very large number of sprites in a single batch (likely related to the 16-bit index buffer it uses); in my case around 200,000, and this is not something you can easily solve. If you encounter the same problem, make sure that every image you draw is actually on the screen (i.e. cull the tiles that are outside the view) and you will probably have no problems from this anymore.
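A sketch of that suggestion (my own code, not from the original post; cameraPosition, viewportWidth and viewportHeight are assumed to be available, the rest matches the fields used in the question): only draw the tiles that actually intersect the viewport instead of all 200 x 200 of them.

public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
    int firstX = Math.Max(0, (int)(cameraPosition.X / TileWidth));
    int firstY = Math.Max(0, (int)(cameraPosition.Y / TileHeight));
    int lastX  = Math.Min(width  - 1, (int)((cameraPosition.X + viewportWidth)  / TileWidth));
    int lastY  = Math.Min(height - 1, (int)((cameraPosition.Y + viewportHeight) / TileHeight));

    for (int i = firstX; i <= lastX; i++)
        for (int j = firstY; j <= lastY; j++)
            if (objectGrid[i, j] != null)
                objectGrid[i, j].Draw(gameTime, spriteBatch);
}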