Is there a way to draw on a canvas in layers? I mean that I have a moving object and constant axes. As everybody knows, when the object moves from point A to point B we have to repaint the vacated rectangle with the background color, and an attentive user will notice that the image blinks.
One way: keep one constant canvas with the set of static objects (axes, grids, etc.) and another one with the moving objects.
I can combine them like
SumCanvas.Pixels[i, j] = ConstCanvas.Pixels[i, j] + MovingCanvas.Pixels[i, j]
repainting only the MovingCanvas each frame.
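A rough sketch of that idea (per-pixel sums would be slow, so a faster variant of the same layering is to paint both layers into one off-screen TBitmap and blit it once; AddGrid and Shape.Draw taking a target canvas are assumptions):

procedure TMyChart.PaintFrame;
var
  Shape: IShape;
begin
  FBuffer.SetSize(ClientWidth, ClientHeight); // FBuffer: TBitmap created in the constructor
  // Constant layer: clear the buffer, then draw axes and grids
  FBuffer.Canvas.Brush.Color := clWhite;
  FBuffer.Canvas.FillRect(Rect(0, 0, ClientWidth, ClientHeight));
  AddGrid(FBuffer.Canvas);
  // Moving layer
  for Shape in FListOfShapes do
    Shape.Draw(FBuffer.Canvas);
  // One blit to the visible canvas; it is never cleared, so nothing blinks
  Canvas.Draw(0, 0, FBuffer);
end;

Many VCL controls can also suppress this kind of flicker simply by setting DoubleBuffered := True.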
Maybe there is another way?
UPD
A piece of code:
Self is inherited from.
// ...
Canvas.Rectangle(-1, -1, Self.Width, Self.Height);
// Fills the whole canvas area with clWhite.
// This is the reason the image blinks.
// ...
Shape.position := position; // the coordinate of the permanently moving object
// ...
{The constant part}
AddGrid; // draw the constant canvas (axes and grids)
for Shape in FListOfShapes do // TListOfShapes = TList<IShape>
Shape.Draw; // a set of other shapes with constant coordinates
// ...
I'm writing my first Qt 5 application... It uses a third-party map library (QGeoView).
I need to draw an object (something like a stylized airplane) over this map. Following the library's coding guidelines, I derived my QGVAirplane from the base class QGVDrawItem.
The airplane class contains heading and position values: these values must be used to draw the airplane on the map (in the correct position and with the correct heading, of course). The library requires QGVDrawItem derivatives to override three base class methods:
QPainterPath projShape() const;
void projPaint(QPainter* painter);
void onProjection(QGVMap* geoMap);
The first method is used to obtain the area of the map that needs to be updated. The second is the method responsible for drawing the object on the map. The third is needed to reproject a point from geographic coordinates into the map's coordinate space (it's not relevant to the solution of my problem).
My code looks like this:
void onProjection(QGVMap* geoMap)
{
QGVDrawItem::onProjection(geoMap);
mProjPoint = geoMap->getProjection()->geoToProj(mPoint);
}
QPainterPath projShape() const
{
QRectF _bounding = createGlyph().boundingRect();
double _size = fmax(_bounding.height(), _bounding.width());
QPainterPath _bounding_path;
_bounding_path.addRect(0,0,_size,_size);
_bounding_path.translate(mProjPoint.x(), mProjPoint.y());
return _bounding_path;
}
// This function creates the path containing the airplane glyph
// along with its label
QPainterPath createGlyph() const
{
QPainterPath _path;
QPolygon _glyph = QPolygon();
_glyph << QPoint(0,6) << QPoint(0,8) << QPoint(14,6) << QPoint(28,8) << QPoint(28,6) << QPoint(14,0);
_path.addPolygon(_glyph);
_path.setFillRule(Qt::FillRule::OddEvenFill);
_path.addText(OFF_X_TEXT, OFF_Y_TEXT, mFont, QString::number(mId));
QTransform _transform;
_transform.rotate(mHeading);
return _transform.map(_path);
}
// This function is the actual painting method
void drawGlyph(QPainter* painter)
{
painter->setRenderHints(QPainter::Antialiasing, true);
painter->setBrush(QBrush(mColor));
painter->setPen(QPen(QBrush(Qt::black), 1));
QPainterPath _path = createGlyph();
painter->translate(mProjPoint.x(), mProjPoint.y());
painter->drawPath(_path);
}
Of course:
mProjPoint is the position of the airplane,
mHeading is the heading (the direction where the airplane is pointing),
mId is a number identifying the airplane (it will be displayed as a label under the airplane glyph),
mColor is the color assigned to the airplane.
The problem here is the mix of rotation and translation: since the object is rotated, projShape() returns a bounding rectangle that does not fully overlap the object drawn on the map...
I also suspect that the center of the object does not land exactly on mProjPoint. I tried several times to translate the bounding rectangle so that the object is centered, without luck.
Another minor issue is the fill of the text... the label under the airplane glyph is not solid; it is filled with the same color as the airplane.
How can I fix this?
Generally speaking, the pattern for rotation is to rotate and scale about the origin first and then finish with your final translation.
The following is pseudocode, but it illustrates the need to shift your object's origin to (0, 0) before doing any rotation or scaling. After the rotation and scaling are done, the object can be moved from (0, 0) back to where it came from. From there, any post-translation step may be applied.
translate( -origin.x, -origin.y );
rotate( angle );
scale( scale.x, scale.y );
translate( origin.x, origin.y );
translate( translation.x, translation.y )
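In Qt, which the question uses, the same pattern can be written with QTransform. A minimal sketch (placeRotated is a hypothetical helper): note that QTransform applies the calls to a point in the reverse of the order they are written, so the "shift the local center to (0, 0)" call comes last.

#include <QTransform>
#include <QPointF>

QTransform placeRotated(const QPointF& position, double angleDeg, const QPointF& localCenter)
{
    QTransform t;
    t.translate(position.x(), position.y());          // 3. final placement
    t.rotate(angleDeg);                               // 2. rotate about (0, 0)
    t.translate(-localCenter.x(), -localCenter.y());  // 1. move the shape's center to (0, 0)
    return t;
}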
I finally managed to achieve the result I was after...
QPainterPath projShape() const
{
QPainterPath _path = createGlyph(); // createGlyph() now returns only the airplane polygon; the text is added below
QRectF _glyph_bounds = _path.boundingRect();
QPainterPath _textpath;
_textpath.addText(0, 0, mFont, QString::number(mId));
QRectF _text_bounds = _textpath.boundingRect();
_textpath.translate(_glyph_bounds.width()/2-_text_bounds.width()/2, _glyph_bounds.height()+_text_bounds.height());
_path.addPath(_textpath);
QTransform _transform;
_transform.translate(mProjPoint.x(),mProjPoint.y());
_transform.rotate(360-mHeading);
_transform.translate(-_path.boundingRect().width()/2, -_path.boundingRect().height()/2);
return _transform.map(_path);
}
void projPaint(QPainter* painter)
{
painter->setRenderHint(QPainter::Antialiasing, true);
painter->setRenderHint(QPainter::TextAntialiasing, true);
painter->setRenderHint(QPainter::SmoothPixmapTransform, true);
painter->setRenderHint(QPainter::HighQualityAntialiasing, true);
painter->setBrush(QBrush(mColor));
painter->setPen(QPen(QBrush(Qt::black), 1));
painter->setFont(mFont);
QPainterPath _path = projShape();
painter->drawPath(_path);
}
Unluckily, I still suffer from the minor issue with the text fill mode:
I would like a solid black fill for the text instead of the mColor fill I use for the glyph/polygon.
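One way out, as a hedged sketch (glyphShape() and textShape() are hypothetical helpers returning the two already-transformed sub-paths of projShape()): keep the label in its own QPainterPath and fill each path with its own brush.

void projPaint(QPainter* painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);
    painter->setRenderHint(QPainter::TextAntialiasing, true);
    painter->setPen(QPen(QBrush(Qt::black), 1));

    painter->setBrush(QBrush(mColor));
    painter->drawPath(glyphShape());   // airplane polygon filled with mColor

    painter->setBrush(QBrush(Qt::black));
    painter->drawPath(textShape());    // label filled solid black
}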
OpenLayers provides useful functions for drawing boxes and rectangles, and it also has ol.geom.Geometry.prototype.rotate(angle, anchor) for rotating a geometry around a certain anchor. Is it possible to lock the rotation of a box/rectangle while modifying it?
Using the OpenLayers example located here to draw a box with a certain rotation to illustrate the point:
I would like the box/rectangle to maintain its rotation while I am still able to drag its sides to make them longer or shorter. Is there a simple way to achieve this?
Answering with the solution I came up with.
First of all, add the feature(s) to a Modify interaction so the user is able to modify the feature by dragging its corners.
this.modifyInteraction = new Modify({
deleteCondition: eventsCondition.never,
features: this.drawInteraction.features,
insertVertexCondition: eventsCondition.never,
});
this.map.addInteraction(this.modifyInteraction);
Also, add event handlers upon the events "modifystart" and "modifyend".
this.modifyInteraction.on("modifystart", this.modifyStartFunction);
this.modifyInteraction.on("modifyend", this.modifyEndFunction);
The functions for "modifystart" and "modifyend" look like this.
private modifyStartFunction(event) {
const features = event.features;
const feature = features.getArray()[0];
this.featureAtModifyStart = feature.clone();
this.draggedCornerAtModifyStart = "";
feature.on("change", this.changeFeatureFunction);
}
private modifyEndFunction(event) {
const features = event.features;
const feature = features.getArray()[0];
feature.un("change", this.changeFeatureFunction);
// removing and adding feature to force reindexing
// of feature's snappable edges in OpenLayers
this.drawInteraction.features.clear();
this.drawInteraction.features.push(feature);
this.dispatchRettighetModifyEvent(feature);
}
The changeFeatureFunction is below. It is called for every single change made to the geometry while the user is still modifying/dragging one of the corners. Inside it, I call another function that adjusts the modified geometry back into a rectangle. This "rectanglify" function moves the corners adjacent to the corner the user just moved.
private changeFeatureFunction(event) {
let feature = event.target;
let geometry = feature.getGeometry();
// Removing change event temporarily to avoid infinite recursion
feature.un("change", this.changeFeatureFunction);
this.rectanglifyModifiedGeometry(geometry);
// Reenabling change event
feature.on("change", this.changeFeatureFunction);
}
Without going into too much detail, the rectanglify function needs to (a sketch follows the next code block):
find the rotation of the geometry in radians
inversely rotate by radians * -1 (e.g. geometry.rotate(radians * (-1), anchor))
update the neighboring corners of the dragged corner (easier to do when the rectangle is parallel to the x and y axes)
rotate back by the rotation found in step 1
--
In order to get the rotation of the rectangle, we can do this:
export function getRadiansFromRectangle(feature: Feature): number {
const coords = getCoordinates(feature); // getCoordinates: local helper returning the rectangle's corner coordinates
const point1 = coords[0];
const point2 = coords[1];
const deltaY = (point2[1] as number) - (point1[1] as number);
const deltaX = (point2[0] as number) - (point1[0] as number);
return Math.atan2(deltaY, deltaX);
}
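Putting the four steps together, a sketch of the rectanglify step might look like this (assumptions: the ring stores four corners plus a closing point, and the index of the dragged corner is known, e.g. recorded inside changeFeatureFunction):

function rectanglify(geometry: Polygon, anchor: Coordinate, radians: number, draggedIndex: number): void {
  // Steps 1-2: inversely rotate so the rectangle becomes axis-parallel
  geometry.rotate(-radians, anchor);
  const ring = geometry.getCoordinates()[0];
  const dragged = ring[draggedIndex];
  const opposite = ring[(draggedIndex + 2) % 4];
  // Step 3: each neighbor shares one axis with the dragged corner and the other
  // with the opposite corner; which axis alternates with corner parity
  // (may need flipping depending on the ring's orientation)
  if (draggedIndex % 2 === 0) {
    ring[(draggedIndex + 1) % 4] = [opposite[0], dragged[1]];
    ring[(draggedIndex + 3) % 4] = [dragged[0], opposite[1]];
  } else {
    ring[(draggedIndex + 1) % 4] = [dragged[0], opposite[1]];
    ring[(draggedIndex + 3) % 4] = [opposite[0], dragged[1]];
  }
  ring[4] = [ring[0][0], ring[0][1]]; // keep the ring closed
  geometry.setCoordinates([ring]);
  // Step 4: rotate back
  geometry.rotate(radians, anchor);
}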
I am making a level-based game with many different objects, all of them different. Each level will contain different amounts of each type of object, so I have been trying to make the drawing part as generic as possible: all I should have to do is pass in the coordinates and the object draws itself. To do this, I made a protocol that forces each object class to implement the method getBP(), which returns the UIBezierPath to draw for that object. Then the view class just has to say
Object.getBP().fill()
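(For reference, the protocol behind this is essentially the following; the name Drawable is assumed:)

protocol Drawable {
    func getBP() -> UIBezierPath
}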
However, this has been leading to some strange problems. The object does not draw at the correct coordinates: the y coordinate is correct, but the x coordinate always puts it at the left of the screen. I think it may be because the Bezier path is not being created in the view class. Here is my code in Surface.swift (this is meant to draw a surface in the game):
func getBP() -> UIBezierPath {
var rect:CGRect
var length:Double = getSurfaceVector().getMagnitude()//length of the surface
var cx = points.1.x+(points.0.x-points.1.x)//center coords of the surface
var cy = points.1.y+(points.0.y-points.1.y)
var bp = UIBezierPath(roundedRect: CGRectMake(CGFloat(cx - length/2), CGFloat(cy-RECT_HEIGHT/2), CGFloat(length), CGFloat(RECT_HEIGHT)), cornerRadius: CGFloat(5))
let transform:CGAffineTransform = CGAffineTransformMakeRotation(CGFloat(Double(angle)*(Double(M_PI)/Double(180))))
bp.applyTransform(transform)
return bp
}
points is just a tuple with the start and end points of the surface. RECT_HEIGHT is the height of the rectangle that is drawn to represent the surface. angle is the angle from horizontal of the surface.
Creating the surface in View.swift, I do this:
Surface(fixed: true, points: (Vector(x: 50, y:100), Vector(x: Double(UIScreen.mainScreen().bounds.width), y: 100)))
I add that surface to the array of objects in the game. I draw it in the View.swift file by saying
surface.stroke()
The surface draws on the screen with a y value of 100, but it is centered at x = 0, so it is half on and half off the screen. Also, it doesn't draw at the angle; it is always horizontal. Is there some better way of doing this? What is happening?
The issue of programmatically drawing lines using XNA has been covered here. However, I want to allow a user to draw on a canvas as one would with a drawing app such as MS Paint.
This of course requires each x and/or y coordinate change in the mouse pointer position to result in another "dot" of the line being drawn on the canvas in the crayon color in real time.
In the mouse move event, what XNA API considerations come into play in order to draw the line point by point? Literally, of course, I'm not drawing a line as such, but rather a sequence of "dots". Each "dot" can, and probably should, be larger than a single pixel. Think of drawing with a felt tip pen.
The article you provided suggests a method of drawing lines with primitives; vector graphics, in other words. Applications like Paint are mostly pixel based (even though more advanced software like Photoshop has vector and rasterization features).
Bitmap editor
Since you want it to be "Paint-like" I would definitely go with the pixel based approach:
Create a grid of color values. (Extend the System.Drawing.Bitmap class or implement your own.)
Start the (game) loop:
Process input and update the color values in the grid accordingly.
Convert the Bitmap to a Texture2D (see the sketch after this list).
Use a sprite batch or custom renderer to draw the texture to the screen.
Save the bitmap, if you want.
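For the Bitmap-to-Texture2D step, a minimal sketch (names assumed; in a real app, reuse one Texture2D instead of allocating a new one every frame):

Texture2D ToTexture2D(Color[,] pixels, GraphicsDevice device)
{
    int width = pixels.GetLength(0);
    int height = pixels.GetLength(1);
    // Flatten the grid into the row-major array that Texture2D.SetData expects
    var flat = new Color[width * height];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            flat[y * width + x] = pixels[x, y];
    var texture = new Texture2D(device, width, height);
    texture.SetData(flat);
    return texture;
}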
Drawing on the bitmap
I added a rough draft of the image class I am using at the bottom of this answer, but the code should be quite self-explanatory anyway.
As mentioned before, you also need to implement a method for converting the image to a Texture2D and drawing it to the screen.
First we create a new 10x10 image and set all pixels to white.
var image = new Image(10, 10);
image.Initialize(() => Color.White);
Next we set up a brush. A brush is in essence just a function that is applied to the whole image. In this case the function should set all pixels inside the specified circle to a dark red color.
// Create a circular brush
float brushRadius = 2.5f;
int brushX = 4;
int brushY = 4;
Color brushColor = new Color(0.5f, 0, 0, 1); // dark red
Now we apply the brush. See this SO answer of mine on how to identify the pixels inside a circle.
You can use mouse input for the brush offsets and enable the user to actually draw on the bitmap.
double radiusSquared = brushRadius * brushRadius;
image.Modify((x, y, oldColor) =>
{
// Use the circle equation
int deltaX = x - brushX;
int deltaY = y - brushY;
double distanceSquared = Math.Pow(deltaX, 2) + Math.Pow(deltaY, 2);
// Current pixel lies inside the circle
if (distanceSquared <= radiusSquared)
{
return brushColor;
}
return oldColor;
});
You could also interpolate between the brush color and the old pixel. For example, you can implement a "soft" brush by letting the blend amount depend on the distance between the brush center and the current pixel.
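For example (a sketch of such a soft brush, reusing the variables from the snippet above; Color.Lerp does the blending):

image.Modify((x, y, oldColor) =>
{
    int deltaX = x - brushX;
    int deltaY = y - brushY;
    double distanceSquared = deltaX * deltaX + deltaY * deltaY;
    if (distanceSquared > radiusSquared)
        return oldColor;
    // Blend amount: 1 at the brush center, fading to 0 at the rim
    float amount = 1f - (float)(Math.Sqrt(distanceSquared) / brushRadius);
    return Color.Lerp(oldColor, brushColor, amount);
});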
Drawing a line
In order to draw a freehand line, simply apply the brush repeatedly, each time with a different offset (depending on the mouse movement):
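A sketch of that (ApplyBrush is a hypothetical helper wrapping the image.Modify brush code above):

// Stamp the brush along the segment between the previous and the current
// mouse position, so fast mouse movements leave no gaps
var previous = new Vector2(previousMouseState.X, previousMouseState.Y);
var current = new Vector2(currentMouseState.X, currentMouseState.Y);
float distance = Vector2.Distance(previous, current);
int steps = Math.Max(1, (int)(distance / (brushRadius * 0.5f)));
for (int i = 0; i <= steps; i++)
{
    Vector2 point = Vector2.Lerp(previous, current, i / (float)steps);
    ApplyBrush(image, (int)point.X, (int)point.Y);
}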
Custom image class
I obviously skipped some necessary properties, methods and data validation, but you get the idea:
public class Image
{
public Color[,] Pixels { get; private set; }
// Width and Height are among the members skipped above
public int Width { get { return Pixels.GetLength(0); } }
public int Height { get { return Pixels.GetLength(1); } }
public Image(int width, int height)
{
Pixels = new Color[width, height];
}
public void Initialize(Func<Color> createColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Pixels[x, y] = createColor();
}
}
}
public void Modify(Func<int, int, Color, Color> modifyColor)
{
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
Color current = Pixels[x, y];
Pixels[x, y] = modifyColor(x, y, current);
}
}
}
}
So I have this Panel class. It's a little like a window: you can resize it, close it, add buttons, sliders, etc. Much like the status screen in Morrowind, if any of you remember it. The behavior I want is that when a sprite is outside of the panel's bounds it doesn't get drawn, and if it's partially outside, only the part inside gets drawn.
What it does right now is: first get a rectangle that represents the bounds of the panel and a rectangle for the sprite, find the rectangle of intersection between the two, then translate that intersection into the local coordinates of the sprite rectangle and use that as the source rectangle. It works, and as clever as I feel the code is, I can't shake the feeling that there's a better way. Also, with this setup I cannot use a global transformation matrix for my 2D camera; everything in the "world" must be passed a camera argument to draw. Anyway, here's the code I have.
For the intersection:
public static Rectangle? Intersection(Rectangle rectangle1, Rectangle rectangle2)
{
if (rectangle1.Intersects(rectangle2))
{
if (rectangle1.Contains(rectangle2))
{
return rectangle2;
}
else if (rectangle2.Contains(rectangle1))
{
return rectangle1;
}
else
{
int x = Math.Max(rectangle1.Left, rectangle2.Left);
int y = Math.Max(rectangle1.Top, rectangle2.Top);
int height = Math.Min(rectangle1.Bottom, rectangle2.Bottom) - Math.Max(rectangle1.Top, rectangle2.Top);
int width = Math.Min(rectangle1.Right, rectangle2.Right) - Math.Max(rectangle1.Left, rectangle2.Left);
return new Rectangle(x, y, width, height);
}
}
else
{
return null;
}
}
and for actually drawing on the panel:
public void DrawOnPanel(IDraw sprite, SpriteBatch spriteBatch)
{
Rectangle panelRectangle = new Rectangle(
(int)_position.X,
(int)_position.Y,
_width,
_height);
Rectangle drawRectangle = new Rectangle();
drawRectangle.X = (int)sprite.Position.X;
drawRectangle.Y = (int)sprite.Position.Y;
drawRectangle.Width = sprite.Width;
drawRectangle.Height = sprite.Height;
if (panelRectangle.Contains(drawRectangle))
{
sprite.Draw(
spriteBatch,
drawRectangle,
null);
}
else if (Intersection(panelRectangle, drawRectangle) == null)
{
return;
}
else if (Intersection(panelRectangle, drawRectangle).HasValue)
{
Rectangle intersection = Intersection(panelRectangle, drawRectangle).Value;
if (Intersection(panelRectangle, drawRectangle) == drawRectangle)
{
sprite.Draw(spriteBatch, intersection, intersection);
}
else
{
sprite.Draw(
spriteBatch,
intersection,
new Rectangle(
intersection.X - drawRectangle.X,
intersection.Y - drawRectangle.Y,
intersection.Width,
intersection.Height));
}
}
}
So I guess my question is, is there a better way to do this?
Update: I just found out about the ScissorRectangle property. This seems like a decent way to do it: it requires creating a RasterizerState object and passing it into the SpriteBatch.Begin overload that accepts one. This might be the best bet. There's also the Viewport, which I can apparently change around. Thoughts? :)
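For reference, a sketch of that ScissorRectangle setup (panelRectangle and the sprite as in DrawOnPanel above):

// Enable scissor testing and clip all sprite drawing to the panel
var scissorState = new RasterizerState { ScissorTestEnable = true };

spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, scissorState);
GraphicsDevice.ScissorRectangle = panelRectangle;
sprite.Draw(spriteBatch, drawRectangle, null);
spriteBatch.End();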
There are several ways to limit drawing to a portion of the screen. If the area is rectangular (which seems to be the case here), you could set the viewport (see GraphicsDevice) to the panel's surface.
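A sketch of the viewport approach (one caveat: with SpriteBatch, drawing coordinates become relative to the viewport's top-left corner, so world positions need offsetting):

// Temporarily point the viewport at the panel, draw, then restore it
Viewport previous = GraphicsDevice.Viewport;
GraphicsDevice.Viewport = new Viewport(panelRectangle);
// ... draw the panel's contents here (coordinates are now panel-relative) ...
GraphicsDevice.Viewport = previous;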
For non-rectangular areas, you can use the stencil buffer or some tricks with the depth buffer: draw the shape of the surface into the stencil (or depth) buffer, set your render state to draw only pixels located inside the shape you just rendered, and finally render your sprites.
One way of doing this is simple per-pixel collision. Although this is a bad idea if the sprites are large or numerous, it can be a very easy and fast way to get the job done with small sprites. First, do a bounding-circle or bounding-square collision check against the panel to see whether you even need per-pixel detection.
Then, create a contains method that checks whether the position, scale, and rotation of the sprite put it so far inside the panel that it must be totally enclosed, in which case you don't need per-pixel collision either. This can be done pretty easily by creating a bounding square whose width and height equal the length of the sprite's diagonal, and checking that the panel contains it (a sketch follows).
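A sketch of that worst-case check (sprite center and panel rectangle assumed):

// A square whose side is the sprite's diagonal encloses it at any rotation
float diagonal = (float)Math.Sqrt(sprite.Width * sprite.Width + sprite.Height * sprite.Height);
var worstCase = new Rectangle(
    (int)(spriteCenter.X - diagonal / 2f),
    (int)(spriteCenter.Y - diagonal / 2f),
    (int)diagonal,
    (int)diagonal);
bool fullyInside = panelRectangle.Contains(worstCase); // if true, skip per-pixel checks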
Finally, if both of these checks fail, we must do per-pixel collision: go through every pixel in the sprite and check whether it is within the bounds of the panel. If it isn't, set the alpha value of that pixel to 0.
That's it.