Using Processing for 3D image mapping [closed] - image-processing

I've been trying to run a spherical POV (persistence of vision) display. To run the POV, each image has to be converted into the number of lines used per revolution. I tried to use the Processing software and its library to convert the 2D image for the 3D display, but when I feed those pixel data to the spherical POV, the image is unrecognizable. Any clue how to map a 2D image onto a spherical surface?

The Processing editor comes with examples, one of which maps a texture image to a sphere. It does this by splitting the sphere into triangles, and then using the texture() and vertex() functions to draw the textured sphere.
You can get to the code from the Processing editor by going to File > Examples > Topics > Textures > TextureSphere, but here's the code:
/**
* Texture Sphere
* by Gillian Ramsay
*
* Rewritten by Gillian Ramsay to better display the poles.
* Previous version by Mike 'Flux' Chang (and cleaned up by Aaron Koblin).
* Original based on code by Toxi.
*
* A 3D textured sphere with simple rotation control.
*/
int ptsW, ptsH;
PImage img;
int numPointsW;
int numPointsH_2pi;
int numPointsH;
float[] coorX;
float[] coorY;
float[] coorZ;
float[] multXZ;
void setup() {
size(640, 360, P3D);
background(0);
noStroke();
img=loadImage("world32k.jpg");
ptsW=30;
ptsH=30;
// Parameters below are the number of vertices around the width and height
initializeSphere(ptsW, ptsH);
}
// Use arrow keys to change detail settings
void keyPressed() {
if (keyCode == ENTER) saveFrame();
if (keyCode == UP) ptsH++;
if (keyCode == DOWN) ptsH--;
if (keyCode == LEFT) ptsW--;
if (keyCode == RIGHT) ptsW++;
if (ptsW == 0) ptsW = 1;
if (ptsH == 0) ptsH = 2;
// Parameters below are the number of vertices around the width and height
initializeSphere(ptsW, ptsH);
}
void draw() {
background(0);
camera(width/2+map(mouseX, 0, width, -2*width, 2*width),
height/2+map(mouseY, 0, height, -height, height),
height/2/tan(PI*30.0 / 180.0),
width, height/2.0, 0,
0, 1, 0);
pushMatrix();
translate(width/2, height/2, 0);
textureSphere(200, 200, 200, img);
popMatrix();
}
void initializeSphere(int numPtsW, int numPtsH_2pi) {
// The number of points around the width and height
numPointsW=numPtsW+1;
numPointsH_2pi=numPtsH_2pi; // How many actual pts around the sphere (not just from top to bottom)
numPointsH=ceil((float)numPointsH_2pi/2)+1; // How many pts from top to bottom (abs(....) b/c of the possibility of an odd numPointsH_2pi)
coorX=new float[numPointsW]; // All the x-coor in a horizontal circle radius 1
coorY=new float[numPointsH]; // All the y-coor in a vertical circle radius 1
coorZ=new float[numPointsW]; // All the z-coor in a horizontal circle radius 1
multXZ=new float[numPointsH]; // The radius of each horizontal circle (that you will multiply with coorX and coorZ)
for (int i=0; i<numPointsW ;i++) { // For all the points around the width
float thetaW=i*2*PI/(numPointsW-1);
coorX[i]=sin(thetaW);
coorZ[i]=cos(thetaW);
}
for (int i=0; i<numPointsH; i++) { // For all points from top to bottom
if (int(numPointsH_2pi/2) != (float)numPointsH_2pi/2 && i==numPointsH-1) { // If the numPointsH_2pi is odd and it is at the last pt
float thetaH=(i-1)*2*PI/(numPointsH_2pi);
coorY[i]=cos(PI+thetaH);
multXZ[i]=0;
}
else {
//The numPointsH_2pi and 2 below allows there to be a flat bottom if the numPointsH is odd
float thetaH=i*2*PI/(numPointsH_2pi);
//PI+ below makes the top always the point instead of the bottom.
coorY[i]=cos(PI+thetaH);
multXZ[i]=sin(thetaH);
}
}
}
void textureSphere(float rx, float ry, float rz, PImage t) {
// These are so we can map certain parts of the image on to the shape
float changeU=t.width/(float)(numPointsW-1);
float changeV=t.height/(float)(numPointsH-1);
float u=0; // Width variable for the texture
float v=0; // Height variable for the texture
beginShape(TRIANGLE_STRIP);
texture(t);
for (int i=0; i<(numPointsH-1); i++) { // For all the rings but top and bottom
// Goes into the array here instead of loop to save time
float coory=coorY[i];
float cooryPlus=coorY[i+1];
float multxz=multXZ[i];
float multxzPlus=multXZ[i+1];
for (int j=0; j<numPointsW; j++) { // For all the pts in the ring
normal(-coorX[j]*multxz, -coory, -coorZ[j]*multxz);
vertex(coorX[j]*multxz*rx, coory*ry, coorZ[j]*multxz*rz, u, v);
normal(-coorX[j]*multxzPlus, -cooryPlus, -coorZ[j]*multxzPlus);
vertex(coorX[j]*multxzPlus*rx, cooryPlus*ry, coorZ[j]*multxzPlus*rz, u, v+changeV);
u+=changeU;
}
v+=changeV;
u=0;
}
endShape();
}
Hopefully this is a good start, but more generally: Stack Overflow isn't really designed for general "how do I do this" type questions. It's more designed for specific "I tried X, expected Y, but got Z instead" type questions. Can you post anything you've tried? Where is your Minimal, Complete, and Verifiable example?
If you have no idea how to start, then start smaller: can you get an image mapped to a rectangle? Work your way up from there and post if you get stuck. Good luck.
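For example, a minimal Processing sketch that maps an image onto a flat rectangle could look like the following (it assumes an image file called "img.jpg" in the sketch's data folder; the coordinates are arbitrary):
PImage img;
void setup() {
  size(640, 360, P3D);
  img = loadImage("img.jpg");
  noStroke();
}
void draw() {
  background(0);
  beginShape(QUADS);
  texture(img);
  // vertex(x, y, z, u, v): with the default textureMode(IMAGE),
  // u and v are pixel coordinates into the texture
  vertex(100, 100, 0, 0, 0);
  vertex(540, 100, 0, img.width, 0);
  vertex(540, 260, 0, img.width, img.height);
  vertex(100, 260, 0, 0, img.height);
  endShape();
}
Once that works, the sphere example above is the same texture() and vertex() calls applied ring by ring to a triangle strip.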

Related

Draw permanently on PGraphics (Processing)

I would like to create a brush for drawing on a PGraphics element with Processing. I would like past brush strokes to remain visible. However, since the PGraphics element is redrawn every frame, previous brush strokes disappear immediately.
My idea was then to create the PGraphics pg in setup(), make a copy of it in draw(), alter the original graphic pg and update the copy every frame. This produces a NullPointerException, most likely because pg is defined locally in setup().
This is what I have got so far:
PGraphics pg;
PFont font;
void setup (){
font = createFont("Pano Bold Kopie.otf", 600);
size(800, 800, P2D);
pg = createGraphics(800, 800, P2D);
pg.beginDraw();
pg.background(0);
pg.fill(255);
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
copy(pg, 0, 0, width, height, 0, 0, width, height);
loop();
int c;
loadPixels();
for (int x=0; x<width; x++) {
for (int y=0; y<height; y++) {
pg.pixels[mouseX+mouseY*width]=0;
}
}
updatePixels();
}
My last idea, which I have not attempted to implement yet, is to append pixels which have been touched by the mouse to a list and to draw from this list each frame. But this seems quite complicated to me, as it might result in super long arrays needing to be processed on top of the original image. So I hope there is another way around this!
EDIT: My goal is to create a smudge brush, i.e. a brush which copies areas from one part of the image to other parts.
There's no need to manually copy pixels like that. The PGraphics class extends PImage, which means you can simply render it with image(pg, 0, 0); for example.
The other thing you could do is an old trick to fade the background: instead of clearing pixels completely, you can render a sketch-sized, slightly opaque rectangle with no stroke.
Here's a quick proof of concept based on your code:
PFont font;
PGraphics pg;
void setup (){
//font = createFont("Pano Bold Kopie.otf", 600);
font = createFont("Verdana",600);
size(800, 800, P2D);
// clear main background once
background(0);
// prep fading background
noStroke();
// black fill with 10/255 transparency
fill(0,10);
pg = createGraphics(800, 800, P2D);
pg.beginDraw();
// leave the PGraphics instance transparent
//pg.background(0);
pg.fill(255);
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
// test with mouse pressed
if(mousePressed){
// slowly fade/clear the background by drawing a slightly opaque rectangle
rect(0,0,width,height);
}
// don't clear the background, render the PGraphics layer directly
image(pg, mouseX - pg.width / 2, mouseY - pg.height / 2);
}
If you hold the mouse pressed you can see the fade effect.
(Changing the transparency from 10 to a higher value will make the fade quicker.)
Update: To create a smudge brush you can still sample pixels and then manipulate the read colours to some degree. There are many ways to implement a smudge effect depending on what you want to achieve visually.
Here's a very rough proof of concept:
PFont font;
PGraphics pg;
int pressX;
int pressY;
void setup (){
//font = createFont("Pano Bold Kopie.otf", 600);
font = createFont("Verdana",600);
size(800, 800, P2D);
// clear main background once
background(0);
// prep fading background
noStroke();
// black fill with 10/255 transparency
fill(0,10);
pg = createGraphics(800, 800, JAVA2D);
pg.beginDraw();
// leave the PGraphics instance transparent
//pg.background(0);
pg.fill(255);
pg.noStroke();
pg.textFont(font);
pg.textSize(400);
pg.pushMatrix();
pg.translate(width/2, height/2-140);
pg.textAlign(CENTER, CENTER);
pg.text("a", 0 , 0);
pg.popMatrix();
pg.endDraw();
}
void draw () {
image(pg,0,0);
}
void mousePressed(){
pressX = mouseX;
pressY = mouseY;
}
void mouseDragged(){
// sample the colour where mouse was pressed
color sample = pg.get(pressX,pressY);
// calculate the distance from where the "smudge" started to where it is
float distance = dist(pressX,pressY,mouseX,mouseY);
// map this distance to transparency so the further the distance the less smudge (e.g. short distance, high alpha; large distance, small alpha)
float alpha = map(distance,0,30,255,0);
// map distance to "brush size"
float size = map(distance,0,30,30,0);
// extract r,g,b values
float r = red(sample);
float g = green(sample);
float b = blue(sample);
// set new r,g,b,a values
pg.beginDraw();
pg.fill(r,g,b,alpha);
pg.ellipse(mouseX,mouseY,size,size);
pg.endDraw();
}
As the comments mention, one idea is to sample the colour on press, then use that sampled colour and fade it as you drag away from the source area. The example above simply reads a single pixel; you may want to experiment with sampling/reading more pixels (e.g. a rectangle or ellipse), as sketched at the end of this answer.
Additionally, the code above isn't optimised.
A few things could be sped up a bit, like reading pixels, extracting colours, calculating distance, etc.
For example:
void mouseDragged(){
// sample the colour where mouse was pressed
color sample = pg.pixels[pressX + (pressY * pg.width)];
// calculate the distance from where the "smudge" started to where it is (can use manual distance squared if this is too slow)
float distance = dist(pressX,pressY,mouseX,mouseY);
// map this distance to transparency so the further the distance the less smudge (e.g. short distance, high alpha; large distance, small alpha)
float alpha = map(distance,0,30,255,0);
// map distance to "brush size"
float size = map(distance,0,30,30,0);
// extract r,g,b values
int r = (sample >> 16) & 0xFF; // Like red(), but faster
int g = (sample >> 8) & 0xFF;
int b = sample & 0xFF;
// set new r,g,b,a values
pg.beginDraw();
pg.fill(r,g,b,alpha);
pg.ellipse(mouseX,mouseY,size,size);
pg.endDraw();
}
The idea is to start simple with clear, readable code and only at the end, if needed, look into optimisations.
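If you want to try the area-sampling idea mentioned above, here is a rough sketch of a helper that averages a small square of pixels instead of reading a single one (the function name, the radius parameter and the simple box average are just illustrative choices):
color sampleArea(PGraphics src, int cx, int cy, int radius) {
  float r = 0, g = 0, b = 0;
  int count = 0;
  for (int y = cy - radius; y <= cy + radius; y++) {
    for (int x = cx - radius; x <= cx + radius; x++) {
      // skip samples that fall outside the buffer
      if (x < 0 || y < 0 || x >= src.width || y >= src.height) continue;
      color c = src.get(x, y);
      r += red(c);
      g += green(c);
      b += blue(c);
      count++;
    }
  }
  if (count == 0) return color(0);
  return color(r / count, g / count, b / count);
}
In mouseDragged() you would then call sampleArea(pg, pressX, pressY, 2) instead of pg.get(pressX, pressY) and use the averaged colour for the fill.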

Detecting small circles on a game minimap

I have been stuck on this problem for about 20 hours.
The quality is not very good because on a 1080p video the minimap is less than 300px / 300px.
I want to detect the 10 hero circles on this image:
Like this:
For background removal, I can use this:
The hero portrait circle radii are between 8 and 12 px, because a hero portrait is about 21x21px.
With this code:
Mat minimapMat = Imgcodecs.imread("minimap.png");
Mat minimapCleanMat = Imgcodecs.imread("minimapClean.png");
Mat minimapDiffMat = new Mat();
Core.subtract(minimapMat, minimapCleanMat, minimapDiffMat);
I obtain this:
Now I apply circle detection on it:
findCircles(minimapDiffMat);
public static void findCircles(Mat imgSrc) {
Mat img = imgSrc.clone();
Mat gray = new Mat();
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Mat edges = new Mat();
int lowThreshold = 40;
int ratio = 3;
Imgproc.Canny(gray, edges, lowThreshold, lowThreshold * ratio);
Mat circles = new Mat();
Vector<Mat> circlesList = new Vector<Mat>();
Imgproc.HoughCircles(edges, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 10, 5, 20, 7, 15);
double x = 0.0;
double y = 0.0;
int r = 0;
for (int i = 0; i < circles.rows(); i++) {
for (int k = 0; k < circles.cols(); k++) {
double[] data = circles.get(i, k);
for (int j = 0; j < data.length; j++) {
x = data[0];
y = data[1];
r = (int) data[2];
}
Point center = new Point(x, y);
// circle center
Imgproc.circle(img, center, 3, new Scalar(0, 255, 0), -1);
// circle outline
Imgproc.circle(img, center, r, new Scalar(0, 255, 0), 1);
}
}
HighGui.imshow("cirleIn", img);
}
The results are not OK, detecting only 2 out of 10:
I have tried with a KNN background too:
With less success.
Any tips? Thanks a lot in advance.
The problem is that your minimap contains highlighted parts (possibly around active players), rendering your background removal inoperable. Why not threshold the highlighted colors out of the image? From what I see there are just a few of them. I do not use OpenCV, so I gave it a shot in C++; here is the result:
int x,y;
color c0,c1,c;
picture pic0,pic1,pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs,pic0.ys);
pic2.resize(pic0.xs,pic0.ys);
// process all pixels
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
// get both colors without alpha
c0.dd=pic0.p[y][x].dd&0x00FFFFFF;
c1.dd=pic1.p[y][x].dd&0x00FFFFFF; c=c1;
// threshold 0xAARRGGBB distance^2
if (distance2(c1,color(0x00EEEEEE))<2000) c.dd=0; // white-ish rectangle
if (distance2(c1,color(0x00889971))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x005A6443))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x0021A2C2))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x002A6D70))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x00439D96))<2000) c.dd=0; // aqua water
if (distance2(c1,c0 )<2500) c.dd=0; // close to background
pic2.p[y][x]=c;
}
pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth(); // blur a little
pic2.save("out1.png");
pic2.threshold(0,80,765,0x00000000); // set dark pixels (<80) to black (0) and rest to white (3*255)
pic2.pixel_format(_pf_rgba);// convert back to RGB
pic2.save("out2.png");
So you need to find the OpenCV counterparts to this. The thresholds are color distance^2 (so I do not need sqrt), and it looks like 50^2 is ideal for a <0,255> per-channel RGB vector.
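Since the question uses OpenCV's Java bindings, a rough sketch of an equivalent approach could look like this (the class name is arbitrary, the BGR range and thresholds are only guesses standing in for the distance^2 checks above and would need tuning against the actual minimap, and it assumes the OpenCV native library has been loaded):
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
public class MinimapMask {
  public static Mat buildMask(String mapPath, String cleanPath) {
    Mat map = Imgcodecs.imread(mapPath);
    Mat clean = Imgcodecs.imread(cleanPath);
    // keep pixels that differ enough from the clean background
    Mat diff = new Mat();
    Core.absdiff(map, clean, diff);
    Mat gray = new Mat();
    Imgproc.cvtColor(diff, gray, Imgproc.COLOR_BGR2GRAY);
    Mat fgMask = new Mat();
    Imgproc.threshold(gray, fgMask, 25, 255, Imgproc.THRESH_BINARY);
    // drop colours close to a known map colour (water here; BGR range is a guess)
    Mat waterMask = new Mat();
    Core.inRange(map, new Scalar(150, 130, 0), new Scalar(220, 190, 80), waterMask);
    Core.bitwise_not(waterMask, waterMask);
    Core.bitwise_and(fgMask, waterMask, fgMask);
    // blur a little and re-threshold, mirroring the smooth()/threshold() steps above
    Imgproc.blur(fgMask, fgMask, new Size(3, 3));
    Imgproc.threshold(fgMask, fgMask, 80, 255, Imgproc.THRESH_BINARY);
    return fgMask;
  }
}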
I use my own picture class for images so some members are:
xs,ys is size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) clears entire image with color
resize(xs,ys) resizes image to new resolution
bmp is VCL encapsulated GDI Bitmap with Canvas access
pf holds actual pixel format of the image:
enum _pixel_format_enum
{
_pf_none=0, // undefined
_pf_rgba, // 32 bit RGBA
_pf_s, // 32 bit signed int
_pf_u, // 32 bit unsigned int
_pf_ss, // 2x16 bit signed int
_pf_uu, // 2x16 bit unsigned int
_pixel_format_enum_end
};
color and pixels are encoded like this:
union color
{
DWORD dd; WORD dw[2]; byte db[4];
int i; short int ii[2];
color(){}; color(color& a){ *this=a; }; ~color(){}; color* operator = (const color *a) { dd=a->dd; return this; }; /*color* operator = (const color &a) { ...copy... return this; };*/
};
The bands are:
enum{
_x=0, // dw
_y=1,
_b=0, // db
_g=1,
_r=2,
_a=3,
_v=0, // db
_s=1,
_h=2,
};
Here also the distance^2 between colors I used for thresholding:
DWORD distance2(color &a,color &b)
{
DWORD d,dd;
d=DWORD(a.db[0])-DWORD(b.db[0]); dd =d*d;
d=DWORD(a.db[1])-DWORD(b.db[1]); dd+=d*d;
d=DWORD(a.db[2])-DWORD(b.db[2]); dd+=d*d;
d=DWORD(a.db[3])-DWORD(b.db[3]); dd+=d*d;
return dd;
}
As input I used your images:
pic0:
pic1:
And here the (sub) results:
out0.png:
out1.png:
out2.png:
Now just remove the noise a bit (by blurring or by erosion) and apply your circle fitting or Hough transform...
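If you do this step with OpenCV (Java), an erode/dilate pass over the mask from the earlier Java sketch would do (the 3x3 elliptical kernel and single pass are just guesses):
Mat mask = MinimapMask.buildMask("minimap.png", "minimapClean.png");
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
Imgproc.erode(mask, mask, kernel);
Imgproc.dilate(mask, mask, kernel);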
[Edit1] circle detector
I gave it a bit of thought and implemented a simple detector. I just check the circumference points around each pixel position with a constant radius (the player circle), and if the number of set points is above a threshold I have found a potential circle. It is better than using the whole disc area, as some of the players contain holes and there would also be more pixels to test... Then I average close circles together and render the output... Here is the updated code:
int i,j,x,y,xx,yy,x0,y0,r=10,d;
List<int> cxy; // circle circumferece points
List<int> plr; // player { x,y } list
color c0,c1,c;
picture pic0,pic1,pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs,pic0.ys);
pic2.resize(pic0.xs,pic0.ys);
// process all pixels
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
// get both colors without alpha
c0.dd=pic0.p[y][x].dd&0x00FFFFFF;
c1.dd=pic1.p[y][x].dd&0x00FFFFFF; c=c1;
// threshold 0xAARRGGBB distance^2
if (distance2(c1,color(0x00EEEEEE))<2000) c.dd=0; // white-ish rectangle
if (distance2(c1,color(0x00889971))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x005A6443))<2000) c.dd=0; // gray-ish path
if (distance2(c1,color(0x0021A2C2))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x002A6D70))<2000) c.dd=0; // aqua water
if (distance2(c1,color(0x00439D96))<2000) c.dd=0; // aqua water
if (distance2(c1,c0 )<2500) c.dd=0; // close to background
pic2.p[y][x]=c;
}
// pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth(); // blur a little
// pic2.save("out1.png");
pic2.threshold(0,80,765,0x00000000); // set dark pixels (<80) to black (0) and rest to white (3*255)
// compute player circle circumference points mask
x0=r-1; y0=r; x0*=x0; y0*=y0;
for (x=-r,xx=x*x;x<=r;x++,xx=x*x)
for (y=-r,yy=y*y;y<=r;y++,yy=y*y)
{
d=xx+yy;
if ((d>=x0)&&(d<=y0))
{
cxy.add(x);
cxy.add(y);
}
}
// get all potential player circles
x0=(5*cxy.num)/20;
for (y=r;y<pic2.ys-r;y+=2) // no need to step by single pixel ...
for (x=r;x<pic2.xs-r;x+=2)
{
for (d=0,i=0;i<cxy.num;)
{
xx=x+cxy.dat[i]; i++;
yy=y+cxy.dat[i]; i++;
if (pic2.p[yy][xx].dd>100) d++;
}
if (d>=x0) { plr.add(x); plr.add(y); }
}
// pic2.pixel_format(_pf_rgba);// convert back to RGB
// pic2.save("out2.png");
// average all circles too close together
pic2=pic1; // use original image again
pic2.bmp->Canvas->Pen->Color=TColor(0x0000FF00);
pic2.bmp->Canvas->Pen->Width=3;
pic2.bmp->Canvas->Brush->Style=bsClear;
for (i=0;i<plr.num;i+=2) if (plr.dat[i]>=0)
{
x0=plr.dat[i+0]; x=x0;
y0=plr.dat[i+1]; y=y0; d=1;
for (j=i+2;j<plr.num;j+=2) if (plr.dat[j]>=0)
{
xx=plr.dat[j+0];
yy=plr.dat[j+1];
if (((x0-xx)*(x0-xx))+((y0-yy)*(y0-yy))*10<=20*r*r) // if close
{
x+=xx; y+=yy; d++; // add to average
plr.dat[j+0]=-1; // mark as deleted
plr.dat[j+1]=-1;
}
}
x/=d; y/=d;
plr.dat[i+0]=x;
plr.dat[i+1]=y;
pic2.bmp->Canvas->Ellipse(x-r,y-r,x+r,y+r);
}
pic2.bmp->Canvas->Pen->Width=1;
pic2.bmp->Canvas->Brush->Style=bsSolid;
// pic2.save("out3.png");
As you can see the core of the code is the same; I just added the detector at the end.
I also use my own dynamic list template, so:
List<double> xxx; is the same as double xxx[];
xxx.add(5); adds 5 to end of the list
xxx[7] access array element (safe)
xxx.dat[7] access array element (unsafe but fast direct access)
xxx.num is the actual used size of the array
xxx.reset() clears the array and set xxx.num=0
xxx.allocate(100) preallocate space for 100 items
And here is the final result, out3.png:
As you can see it is a bit messed up when the players are very near each other (due to the circle averaging); with some tweaking you might get better results. But on second thought it might be due to that small red circle nearby...
I used VCL/GDI to render the circles, so just ignore/port the pic2.bmp->Canvas-> stuff to whatever you use.
As the populated image is lighter in the blue areas around the heroes, your background subtraction is of virtually no use.
I tried to improve by applying a gain of 3 to the clean image before subtraction and here is the result.
The background has disappeared, but the outlines of the heroes are severely damaged.
I looked at your case with other approaches and I consider it a very difficult one.
What I do when I want to do image processing is first open the image in a paint editor (I use Gimp). Then I manipulate the image until I end up with something that defines the parts I want to detect.
Generally, RGB is bad for a lot of computer vision tasks, and making it gray scale solves only a part of the problem.
A good start is trying to decompose the image to HSL instead.
Doing so on the first image, and only looking at the Hue channel gives me this:
Several of the blobs are quite well defined.
Playing a bit with the contrast and brightness of the Hue and Luminance layers and multiplying them gives me this:
It enhances the ring around the markers, which might be useful.
These methods all have corresponding functionality in OpenCV.
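For example, a rough OpenCV (Java) sketch of that decomposition, reusing the minimapMat from the question (it assumes the OpenCV imports from the question plus java.util.List/ArrayList; channel indices follow OpenCV's HLS ordering, and the 1/255 scale just keeps the product in 8-bit range):
Mat hls = new Mat();
Imgproc.cvtColor(minimapMat, hls, Imgproc.COLOR_BGR2HLS);
List<Mat> channels = new ArrayList<>();
Core.split(hls, channels);
Mat hue = channels.get(0);        // H channel (0-180 for 8-bit images)
Mat luminance = channels.get(1);  // L channel
Mat combined = new Mat();
Core.multiply(hue, luminance, combined, 1.0 / 255.0);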
It's a tricky task and you will most likely require several different filters and techniques to succeed. Hope this helps a bit. Good luck.

(iOS) Accelerometer Graph (convert g-force to +/- 128) granularity

I am using this Accelerometer graph from Apple and trying to convert their G-force code to calculate +/- 128.
The following image shows that the x, y, z values in the labels do not match the output on the graph: (Note that addX:y:z values are what is shown in the labels above the graph)
ViewController
The x, y, z values are received from a bluetooth peripheral, then converted using:
// Updates LABELS
- (void)didReceiveRawAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
dispatch_async(dispatch_get_main_queue(), ^{
_labelAccel.text = [NSString stringWithFormat:@"x:%li y:%li z:%li", (long)x, (long)y, (long)z];
});
}
// Updates GRAPHS
- (void)didReceiveAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
dispatch_async(dispatch_get_main_queue(), ^{
float xx = ((float)x) / 8192;
float yy = ((float)y) / 8192;
float zz = ((float)z) / 8192;
[_xGraph addX:xx y:0 z:0];
[_yGraph addX:0 y:yy z:0];
[_zGraph addX:0 y:0 z:zz];
});
}
GraphView
- (BOOL)addX:(UIAccelerationValue)x y:(UIAccelerationValue)y z:(UIAccelerationValue)z
{
// If this segment is not full, then we add a new acceleration value to the history.
if (index > 0)
{
// First decrement, both to get to a zero-based index and to flag one fewer position left
--index;
xhistory[index] = x;
yhistory[index] = y;
zhistory[index] = z;
// And inform Core Animation to redraw the layer.
[layer setNeedsDisplay];
}
// And return if we are now full or not (really just avoids needing to call isFull after adding a value).
return index == 0;
}
- (void)drawLayer:(CALayer*)l inContext:(CGContextRef)context
{
// Fill in the background
CGContextSetFillColorWithColor(context, kUIColorLightGray(1.f).CGColor);
CGContextFillRect(context, layer.bounds);
// Draw the grid lines
DrawGridlines(context, 0.0, 32.0);
// Draw the graph
CGPoint lines[64];
int i;
float _granularity = 16.f; // 16
NSInteger _granualCount = 32; // 32
// X
for (i = 0; i < _granualCount; ++i)
{
lines[i*2].x = i;
lines[i*2+1].x = i + 1;
lines[i*2].y = xhistory[i] * _granularity;
lines[i*2+1].y = xhistory[i+1] * _granularity;
}
CGContextSetStrokeColorWithColor(context, _xColor.CGColor);
CGContextStrokeLineSegments(context, lines, 64);
// Y
for (i = 0; i < _granualCount; ++i)
{
lines[i*2].y = yhistory[i] * _granularity;
lines[i*2+1].y = yhistory[i+1] * _granularity;
}
CGContextSetStrokeColorWithColor(context, _yColor.CGColor);
CGContextStrokeLineSegments(context, lines, 64);
// Z
for (i = 0; i < _granualCount; ++i)
{
lines[i*2].y = zhistory[i] * _granularity;
lines[i*2+1].y = zhistory[i+1] * _granularity;
}
CGContextSetStrokeColorWithColor(context, _zColor.CGColor);
CGContextStrokeLineSegments(context, lines, 64);
}
How can I adjust the above code to show the correct accelerometer values on the graph with precision?
I post this as an answer, not a comment, because I don't have enough reputation, but what I'll write might be enough to send you in the right direction, so it may even count as an answer...
Your question still doesn't include what is really important. I assume the calculation of xx/yy/zz is not the problem, although I have no idea what the 8192 is supposed to mean.
I guess the problem is in the part where you map your values to pixel coordinates...
The lines[] array contains your values at 1/8192th of the values in the label, so your x value of -2 should be at a pixel position of -0.0000something, i.e. slightly (far less than 1 pixel) above the view... Because you see the line a lot further down, there must be some translation in place (not shown in your code).
The second part that is important but not shown is DrawGridlines. Probably there is a different approach in there for mapping the values to pixel coordinates...
Use the debugger to check what pixel coordinates you get when you draw your +127 line and what you get if you insert the value of +127 into your history array.
And some ideas for improvement from reading your code:
1.) Put the graph in its own class that draws one graph (and has only one history). Somehow you seem to have that partially already (otherwise I cannot figure out your _xGraph/_yGraph/_zGraph), but on the other hand you draw all 3 values in one drawLayer??? Currently you seem to have 3*3 history buffers, of which 3*2 are filled with zeros...
2.) Use one place where you do the calculation of Y, used both for drawing the grid and drawing the lines...
3.) Use CGContextMoveToPoint() + CGContextAddLineToPoint() instead of copying into lines[] with these ugly 2*i+1 indices...

MonoGame Spritebatch Only Partially Drawing TileField

I'm currently working on a top-down game using MonoGame that uses tiles to indicate whether a position is walkable or not. Tiles have a size of 32x32 (which the images also have).
A grid of 200 x 200 is being made, filled with wall tiles (and a random generator is supposed to create a path and rooms), but when I draw all the tiles on the screen a lot of tiles go missing. Below is an image where, after position (x81, y183), the tiles are simply not drawn:
http://puu.sh/3JOUO.png
The code used to fill the array puts a wall tile on the grid, and the position of each tile is its array position multiplied by the tile size (32x32); the parent is used for the camera position.
public override void Fill(IResourceContainer resourceContainer)
{
for (int i = 0; i < width; i++)
for (int j = 0; j < height; j++)
{
objectGrid[i, j] = new Wall(resourceContainer);
objectGrid[i, j].Parent = this;
objectGrid[i, j].Position = new Vector2(i * TileWidth, j * TileHeight);
}
}
When drawing I just loop through all the tiles and draw them accordingly. This is what happens in the Game.Draw function:
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Yellow);
// TODO: Add your drawing code here
spriteBatch.Begin();
map.Draw(gameTime, spriteBatch);
spriteBatch.End();
base.Draw(gameTime);
}
The map.Draw function calls this function, which basically draws each tile. I tried putting a counter on how many times the draw call for each tile was hit, and every update the draw function is called 40,000 times, which is the number of tiles I use. So it draws them all, but I still don't see them all on the screen.
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
for (int i = 0; i < width; i++)
for (int j = 0; j < height; j++)
{
if (objectGrid[i, j] != null)
{
objectGrid[i, j].Draw(gameTime, spriteBatch);
}
}
}
This is the code for drawing a tile, where currentImage is 0 at all times and GlobalPosition is the position of the tile minus the camera position.
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
if (visible)
spriteBatch.Draw(textures[currentImage], GlobalPosition, null, color, 0f, -Center, 1f, SpriteEffects.None, 0f);
}
My apologies for the wall of code. It all looks very simple to me, yet I can't seem to find out why it is not drawing all of the tiles. For the tiles that are not drawn, visible is still true and currentImage is 0, which it should be.
The MonoGame SpriteBatch still has some bugs and errors when drawing a large number of 16-bit images, in my case around 200,000; this is not something you can easily solve. If you encounter the same problem, make sure that every image you draw is actually on the screen and you will probably have no more problems from this.

How to create Paint-like app with XNA?

The issue of programmatically drawing lines using XNA has been covered here. However, I want to allow a user to draw on a canvas as one would with a drawing app such as MS Paint.
This of course requires each x and/or y coordinate change in the mouse pointer position to result in another "dot" of the line being drawn on the canvas in the crayon color in real time.
In the mouse move event, what XNA API considerations come into play in order to draw the line point by point? Literally, of course, I'm not drawing a line as such, but rather a sequence of "dots". Each "dot" can, and probably should, be larger than a single pixel. Think of drawing with a felt tip pen.
The article you provided suggests a method of drawing lines with primitives; vector graphics, in other words. Applications like Paint are mostly pixel based (even though more advanced software like Photoshop has vector and rasterization features).
Bitmap editor
Since you want it to be "Paint-like" I would definitely go with the pixel based approach:
Create a grid of color values. (Extend the System.Drawing.Bitmap class or implement your own.)
Start the (game) loop:
Process input and update the color values in the grid accordingly.
Convert the Bitmap to a Texture2D.
Use a sprite batch or custom renderer to draw the texture to the screen.
Save the bitmap, if you want.
Drawing on the bitmap
I added a rough draft of the image class I am using at the bottom of the answer, but the code should be quite self-explanatory anyway.
As mentioned before, you also need to implement a method for converting the image to a Texture2D and drawing it to the screen.
First we create a new 10x10 image and set all pixels to white.
var image = new Image(10, 10);
image.Initialize(() => Color.White);
Next we set up a brush. A brush is in essence just a function that is applied on the whole image. In this case the function should set all pixels inside the specified circle to a dark red color.
// Create a circular brush
float brushRadius = 2.5f;
int brushX = 4;
int brushY = 4;
Color brushColor = new Color(0.5f, 0, 0, 1); // dark red
Now we apply the brush. See this SO answer of mine on how to identify the pixels inside a circle.
You can use mouse input for the brush offsets and enable the user to actually draw on the bitmap.
double radiusSquared = brushRadius * brushRadius;
image.Modify((x, y, oldColor) =>
{
// Use the circle equation
int deltaX = x - brushX;
int deltaY = y - brushY;
double distanceSquared = Math.Pow(deltaX, 2) + Math.Pow(deltaY, 2);
// Current pixel lies inside the circle
if (distanceSquared <= radiusSquared)
{
return brushColor;
}
return oldColor;
});
You could also interpolate between the brush color and the old pixel. For example, you can implement a "soft" brush by letting the blend amount depend on the distance between the brush center and the current pixel.
Drawing a line
In order to draw a freehand line, simply apply the brush repeatedly, each time with a different offset depending on the mouse movement.
Custom image class
I obviously skipped some necessary properties, methods and data validation, but you get the idea:
public class Image
{
public Color[,] Pixels { get; private set; }
public Image(int width, int height)
{
Pixels= new Color[width, height];
}
public void Initialize(Func<Color> createColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Pixels[x, y] = createColor();
}
}
}
public void Modify(Func<int, int, Color, Color> modifyColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Color current = Pixels[x, y];
Pixels[x, y] = modifyColor(x, y, current);
}
}
}
}
