How to make a Texture2D from just a point array (XNA, Farseer Physics Engine)

I use the Farseer Physics Engine for a pump simulation.
In their examples, they always use the Texture2D format, but my pump shape is given as just a Point(x, y) array.
I want to make a polygon or a Texture2D from that point array.
The PolygonTools.CreatePolygon method needs an int[] and a width, not a Point[], and I don't know how to make a polygon from an int[] and a width.
Please help.

So you want to create a Texture2D from a point array. Let me try to explain how I would approach it; this is more a hint than a polished example.
First you need the width and height, so find the largest X and largest Y in your array (plus one, since the coordinates are zero-based) and create a blank texture:
int width = maxX + 1;
int height = maxY + 1;
Texture2D blankTexture = new Texture2D(GraphicsDevice, width, height, false, SurfaceFormat.Color);
XNA's Texture2D has no per-pixel setter, so instead of looping over the texture, build a Color array, mark every pixel that appears in your point array, and upload it with SetData:
Color[] pixels = new Color[width * height]; // defaults to transparent
foreach (Point p in points)
{
    pixels[p.Y * width + p.X] = Color.White; // row-major: index = y * width + x
}
blankTexture.SetData(pixels);
I think this is a fairly CPU-expensive way to do it, but it should work.
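If what you actually need is the physics polygon rather than a drawable texture, you may be able to skip the texture entirely: Farseer's Vertices type is just a list of Vector2, so you can feed the point array straight into a body. A minimal sketch, assuming Farseer 3.x, a World called world, and that points holds the pump outline in order (scale from pixels to simulation units as needed):
// Convert the raw points into a Farseer vertex list.
Vertices outline = new Vertices();
foreach (Point p in points)
{
    outline.Add(new Vector2(p.X, p.Y));
}
// Farseer bodies must be built from convex pieces, so decompose the
// (possibly concave) outline first, then build a compound polygon body.
List<Vertices> convexPieces = BayazitDecomposer.ConvexPartition(outline);
Body pumpBody = BodyFactory.CreateCompoundPolygon(world, convexPieces, 1f);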

Related

MonoGame Spritebatch Only Partially Drawing TileField

I'm currently working on a top-down game using MonoGame that uses tiles to indicate whether a position is walkable or not. Tiles are 32x32 (as are the images).
A 200 x 200 grid is being filled with wall tiles (and a random generator is supposed to carve out a path and rooms), but when I draw all the tiles on the screen a lot of them go missing. Below is an image where, after position (x81, y183), the tiles are simply not drawn:
http://puu.sh/3JOUO.png
The code used to fill the array puts a wall tile in the grid; the position of each tile is its array position multiplied by the tile size (32x32), and the parent is used for the camera position:
public override void Fill(IResourceContainer resourceContainer)
{
    for (int i = 0; i < width; i++)
        for (int j = 0; j < height; j++)
        {
            objectGrid[i, j] = new Wall(resourceContainer);
            objectGrid[i, j].Parent = this;
            objectGrid[i, j].Position = new Vector2(i * TileWidth, j * TileHeight);
        }
}
When drawing, I just loop through all tiles and draw them accordingly. This is what happens in the Game.Draw function:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Yellow);

    // TODO: Add your drawing code here
    spriteBatch.Begin();
    map.Draw(gameTime, spriteBatch);
    spriteBatch.End();

    base.Draw(gameTime);
}
The map.Draw function calls this function, which basically draws each tile. I tried putting a counter on how many times the draw call for each tile was hit: every update, the draw function is called 40,000 times, which is the number of tiles I use. So it draws them all, but I still don't see them all on the screen:
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
    for (int i = 0; i < width; i++)
        for (int j = 0; j < height; j++)
        {
            if (objectGrid[i, j] != null)
            {
                objectGrid[i, j].Draw(gameTime, spriteBatch);
            }
        }
}
This is the code for drawing a tile, where currentImage is 0 at all times and GlobalPosition is the position of the tile minus the camera position:
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
    if (visible)
        spriteBatch.Draw(textures[currentImage], GlobalPosition, null, color, 0f, -Center, 1f, SpriteEffects.None, 0f);
}
My apologies for the wall of code. It all looks very simple to me, yet I can't seem to find out why it is not drawing all of the tiles. For the tiles that are not drawn, visible is still true and currentImage is 0, as it should be.
MonoGame's SpriteBatch still has some bugs when drawing a very large number of 16-bit images in a single batch; in my case it was around 200,000, and this is not something you can easily solve. If you encounter the same problem, make sure that every image you submit for drawing is actually on the screen, and you will probably have no more problems with this.
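A minimal sketch of that culling in the map's Draw, assuming the camera's top-left corner in world space is available as cameraPosition and the screen size as viewportWidth/viewportHeight (hypothetical names); it only submits the tiles that can intersect the screen:
public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
{
    // Clamp the visible tile range to the bounds of the grid.
    int firstX = Math.Max(0, (int)(cameraPosition.X / TileWidth));
    int firstY = Math.Max(0, (int)(cameraPosition.Y / TileHeight));
    int lastX = Math.Min(width - 1, (int)((cameraPosition.X + viewportWidth) / TileWidth));
    int lastY = Math.Min(height - 1, (int)((cameraPosition.Y + viewportHeight) / TileHeight));

    for (int i = firstX; i <= lastX; i++)
        for (int j = firstY; j <= lastY; j++)
            if (objectGrid[i, j] != null)
                objectGrid[i, j].Draw(gameTime, spriteBatch);
}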

How to pass from a Farseer vertices list to VertexPositionColorTexture vertex data

My issue started when I was doing the texture-to-vertices example (https://gamedev.stackexchange.com/questions/30050/building-a-shape-out-of-an-texture-with-farseer). Then it occurred to me: is it possible to turn these Farseer vertices into vertex data that can be used with DrawUserIndexedPrimitives, so the vertices are ready for modification on alpha textures?
Example:
You draw your texture (with transparency in some places) over the triangle-strip vertex data, so you can manipulate the points to distort the image like this:
http://www.tutsps.com/images/Water_Design_avec_Photoshop/Water_Design_avec_Photoshop_20.jpg
As you can see, the letter A was just a normal image in a PNG file, but after the conversion I am looking for, it can be used to distort the image.
Please give some code or a link to a tutorial that can help me figure this out.
Thanks all!
P.S. I think the main issue is how to make the index data and the texture coordinates from just the vertices that PolygonTools.CreatePolygon produces.
TexturedFixture polygon = fixture.UserData as TexturedFixture;
effect.Texture = polygon.Texture;
effect.CurrentTechnique.Passes[0].Apply();

// Let the textured fixture build a triangle list for the body's
// current position and rotation.
VertexPositionColorTexture[] points;
int vertexCount;
int[] indices;
int triangleCount;
polygon.Polygon.GetTriangleList(fixture.Body.Position, fixture.Body.Rotation, out points, out vertexCount, out indices, out triangleCount);

GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicClamp;
GraphicsDevice.RasterizerState = new RasterizerState() { FillMode = FillMode.Solid, CullMode = CullMode.None };

GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, points, 0, vertexCount, indices, 0, triangleCount);
This will do the trick
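If you are not using a helper like TexturedFixture, you can also derive the texture coordinates yourself by mapping each vertex into the polygon's bounding box, so the texture stretches exactly over the shape; the index data then comes from whatever triangulation you run on the outline. A minimal sketch, assuming verts is the Vertices list produced by PolygonTools.CreatePolygon:
// Hypothetical helper: build textured vertex data from a Farseer outline.
VertexPositionColorTexture[] BuildVertexData(Vertices verts)
{
    // Bounding box of the polygon, used to normalize positions into [0, 1].
    Vector2 min = verts[0];
    Vector2 max = verts[0];
    foreach (Vector2 v in verts)
    {
        min = Vector2.Min(min, v);
        max = Vector2.Max(max, v);
    }

    var data = new VertexPositionColorTexture[verts.Count];
    for (int i = 0; i < verts.Count; i++)
    {
        Vector2 uv = (verts[i] - min) / (max - min); // position -> [0, 1] texture space
        data[i] = new VertexPositionColorTexture(new Vector3(verts[i], 0f), Color.White, uv);
    }
    return data;
}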

How to create a Paint-like app with XNA?

The issue of programmatically drawing lines using XNA has been covered here. However, I want to allow a user to draw on a canvas as one would with a drawing app such as MS Paint.
This of course requires each x and/or y coordinate change in the mouse pointer position to result in another "dot" of the line being drawn on the canvas in the crayon color in real time.
In the mouse move event, what XNA API considerations come into play in order to draw the line point by point? Literally, of course, I'm not drawing a line as such, but rather a sequence of "dots". Each "dot" can, and probably should, be larger than a single pixel. Think of drawing with a felt tip pen.
The article you provided suggests a method of drawing lines with primitives; vector graphics, in other words. Applications like Paint are mostly pixel based (even though more advanced software like Photoshop has vector and rasterization features).
Bitmap editor
Since you want it to be "Paint-like", I would definitely go with the pixel-based approach:
Create a grid of color values. (Wrap the System.Drawing.Bitmap class or implement your own; the class is sealed, so it cannot be extended.)
Start the (game) loop:
Process input and update the color values in the grid accordingly.
Convert the Bitmap to a Texture2D.
Use a sprite batch or custom renderer to draw the texture to the screen.
Save the bitmap, if you want.
Drawing on the bitmap
I added a rough draft of the image class I am using at the bottom of the answer, but the code should be quite self-explanatory anyway.
As mentioned before, you also need to implement a method for converting the image to a Texture2D and drawing it to the screen.
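A minimal sketch of that conversion, assuming the Image class shown at the bottom; Texture2D.SetData wants a flat, row-major array, so the 2D grid is copied over (in practice you would reuse one texture instead of allocating a new one every frame):
public Texture2D ToTexture2D(GraphicsDevice device)
{
    var texture = new Texture2D(device, Width, Height, false, SurfaceFormat.Color);

    // Flatten the 2D pixel grid into the row-major layout SetData expects.
    var flat = new Color[Width * Height];
    for (int x = 0; x < Width; x++)
        for (int y = 0; y < Height; y++)
            flat[y * Width + x] = Pixels[x, y];

    texture.SetData(flat);
    return texture;
}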
First we create a new 10x10 image and set all pixels to white.
var image = new Image(10, 10);
image.Initialize(() => Color.White);
Next we set up a brush. A brush is in essence just a function that is applied on the whole image. In this case the function should set all pixels inside the specified circle to a dark red color.
// Create a circular brush
float brushRadius = 2.5f;
int brushX = 4;
int brushY = 4;
Color brushColor = new Color(0.5f, 0, 0, 1); // dark red
Now we apply the brush. See this SO answer of mine on how to identify the pixels inside a circle.
You can use mouse input for the brush offsets and enable the user to actually draw on the bitmap.
double radiusSquared = brushRadius * brushRadius;

image.Modify((x, y, oldColor) =>
{
    // Use the circle equation
    int deltaX = x - brushX;
    int deltaY = y - brushY;
    double distanceSquared = Math.Pow(deltaX, 2) + Math.Pow(deltaY, 2);

    // Current pixel lies inside the circle
    if (distanceSquared <= radiusSquared)
    {
        return brushColor;
    }

    return oldColor;
});
You could also interpolate between the brush color and the old pixel. For example, you can implement a "soft" brush by letting the blend amount depend on the distance between the brush center and the current pixel.
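For example, a quick sketch of such a soft brush, reusing the Modify callback from above; Color.Lerp blends between the old pixel and the brush color, with full strength at the center fading to nothing at the radius:
image.Modify((x, y, oldColor) =>
{
    int deltaX = x - brushX;
    int deltaY = y - brushY;
    double distance = Math.Sqrt(deltaX * deltaX + deltaY * deltaY);
    if (distance > brushRadius)
        return oldColor;

    // Blend strength falls off linearly from the brush center.
    float amount = 1f - (float)(distance / brushRadius);
    return Color.Lerp(oldColor, brushColor, amount);
});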
Drawing a line
In order to draw a freehand line simply apply the brush repeatedly, each time with a different offset (depending on the mouse movement):
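A sketch of that loop, assuming ApplyBrush wraps the Modify call from above (a hypothetical helper) and previous/current are the mouse positions from the last two frames; stepping along the segment keeps the stroke continuous even when the cursor jumps several pixels per frame:
void DrawStroke(Image image, Point previous, Point current)
{
    // Enough steps that consecutive brush stamps overlap.
    int steps = Math.Max(Math.Abs(current.X - previous.X),
                         Math.Abs(current.Y - previous.Y));
    for (int i = 0; i <= steps; i++)
    {
        float t = steps == 0 ? 0f : (float)i / steps;
        int x = (int)MathHelper.Lerp(previous.X, current.X, t);
        int y = (int)MathHelper.Lerp(previous.Y, current.Y, t);
        ApplyBrush(image, x, y); // hypothetical: applies the circular brush at (x, y)
    }
}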
Custom image class
I obviously skipped some necessary properties, methods and data validation, but you get the idea:
public class Image
{
    public Color[,] Pixels { get; private set; }

    public int Width { get { return Pixels.GetLength(0); } }
    public int Height { get { return Pixels.GetLength(1); } }

    public Image(int width, int height)
    {
        Pixels = new Color[width, height];
    }

    public void Initialize(Func<Color> createColor)
    {
        for (int x = 0; x < Width; x++)
        {
            for (int y = 0; y < Height; y++)
            {
                Pixels[x, y] = createColor();
            }
        }
    }

    public void Modify(Func<int, int, Color, Color> modifyColor)
    {
        for (int x = 0; x < Width; x++)
        {
            for (int y = 0; y < Height; y++)
            {
                Color current = Pixels[x, y];
                Pixels[x, y] = modifyColor(x, y, current);
            }
        }
    }
}

DrawUserPrimitives<VertexPositionTexture> complains about Color0 missing for vertex shader

First of all, I am new to XNA and the way GPU works and how it cooperates with the XNA (or DirectX) API.
I have a polygon to draw using the SpriteBatch. I'm triangulating the polygon, and creating a VertexPositionTexture array to hold the vertices. I set the vertices (and just for simplicity, set the texture offset vector to zero), and try to draw the primitives, but I get this error:
The current vertex declaration does not include all the elements required by the current vertex shader. Color0 is missing.
Here is my code, I've double checked my vectors from triangulation, they are fine:
VertexPositionTexture[] vertices = new VertexPositionTexture[triangulationResult.Count * 3];
int ctr = 0;
foreach (var item in triangulationResult)
{
    foreach (var point in item.Vertices)
    {
        vertices[ctr++] = new VertexPositionTexture(new Vector3(point.X, point.Y, 0), Vector2.Zero);
    }
}
sb.GraphicsDevice.DrawUserPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
What am I possibly doing wrong here?
Your shader is expecting a color in the vertex stream, so you have to use VertexPositionColorTexture or change your shader.
It also seems that you are not using any shader of your own; if the active shader is the one SpriteBatch uses, you won't be able to draw this correctly.
VertexPositionColorTexture[] vertices = new VertexPositionColorTexture[triangulationResult.Count * 3];
int ctr = 0;
foreach (var item in triangulationResult)
{
    foreach (var point in item.Vertices)
    {
        vertices[ctr++] = new VertexPositionColorTexture(new Vector3(point.X, point.Y, 0), Color.White, Vector2.Zero);
    }
}
sb.GraphicsDevice.DrawUserPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
Use BasicEffect if you are drawing polygons (MSDN tutorial). You should only use SpriteBatch for sprite drawing (i.e., using its Draw methods).
The vertex element type that BasicEffect requires will depend on what settings you apply to it.
To use a vertex element type without a colour component (like VertexPositionTexture), set BasicEffect.VertexColorEnabled to false.
Or alternately, use a vertex element type that supplies a colour, such as VertexPositionColorTexture.
If you want to create a BasicEffect that has the same coordinate system as SpriteBatch, see this answer or this blog post.
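Putting those pieces together, a minimal setup sketch along those lines, assuming myTexture (a hypothetical name) is the texture to map, and using the question's original VertexPositionTexture array:
// BasicEffect configured for textured, untinted vertices.
var effect = new BasicEffect(GraphicsDevice)
{
    TextureEnabled = true,
    VertexColorEnabled = false, // matches VertexPositionTexture: no Color0 needed
    Texture = myTexture,

    // Pixel coordinates with (0, 0) at the top-left, like SpriteBatch.
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = Matrix.CreateOrthographicOffCenter(
        0, GraphicsDevice.Viewport.Width,
        GraphicsDevice.Viewport.Height, 0, 0, 1)
};

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(
        PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
}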

Problem assigning values to Mat array in OpenCV 2.3 - seems simple

Using the new API for OpenCV 2.3, I am having trouble assigning values to a Mat array (or, say, an image) inside a loop. Here is the code snippet I am using:
int paddedHeight = 256 + 2*padSize;
int paddedWidth = 256 + 2*padSize;
int n = 266; // padded height or width

cv::Mat fx = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);
cv::Mat fy = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);

float value = -n/2.0f;
for(int i = 0; i < n; i++)
{
    for(int j = 0; j < n; j++)
        fx.at<cv::Vec2d>(i,j) = value++;
    value = -n/2.0f;
}

value = -n/2.0f;
for(int i = 0; i < n; i++)
{
    for(int j = 0; j < n; j++)
        fy.at<cv::Vec2d>(i,j) = value;
    value++;
}
Now in the first loop, as soon as j = 133, I get an exception which seems to be related to the depth of the image. I can't figure out what I am doing wrong here.
Please advise! Thanks!
You are accessing the data as a 2-component double vector (using .at<cv::Vec2d>()), but you created the matrices to contain only 1-component doubles (using CV_64FC1). Either create the matrices with two components per element (CV_64FC2) or, what seems more appropriate to your code, access the values as simple doubles using .at<double>(). It explodes exactly at j = 133 because that is half the width of your image: a matrix that actually contains 1-component elements is only half as wide when treated as containing 2-component vectors.
Or maybe you can merge these two matrices into one, containing two components per element, but this depends on the way you are going to use these matrices in the future. In this case you can also merge the two loops together and really set a 2-component vector:
cv::Mat f = cv::Mat(paddedHeight, paddedWidth, CV_64FC2);
float yValue = -n/2.0f;
for(int i = 0; i < n; i++)
{
    float xValue = -n/2.0f;
    for(int j = 0; j < n; j++)
    {
        f.at<cv::Vec2d>(i,j)[0] = xValue++;
        f.at<cv::Vec2d>(i,j)[1] = yValue;
    }
    ++yValue;
}
This might produce a better memory-access pattern if you always need both values, the one from fx and the one from fy, for the same element.
