I've read the online docs about DirectX rasterization rules, but I still can't understand why this code doesn't produce anything visible on the screen:
target->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
target->DrawLine(D2D1::Point2F(0.f, 0.f),
D2D1::Point2F(10.f, 0.f), redBrush, 1.f);
Of course, if I change y from 0.0f to 1.0f for both line points, I get a visible horizontal line at the top-left of the window client area, but I would like to understand the principles involved here. I couldn't figure them out from the available documentation.
You should draw your line "in the middle of the pixel":
target->DrawLine(D2D1::Point2F(0.0f, 0.5f), D2D1::Point2F(10.f, 0.5f), redBrush, 1.f);
Otherwise, if you don't use antialiasing, your line can be drawn on either side of the line from (0,0) to (10,0). In your case it gets drawn outside the window canvas.
Note that a pixel on screen is actually a rectangle with corners at D2D1::Point2F(0.0f, 0.0f) and D2D1::Point2F(1.0f, 1.0f), and D2D1::Point2F(0.5f, 0.5f) is the center of that rectangle. If you draw a 1x1 rectangle around that coordinate, it will cover the pixel exactly.
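As a sketch of that rule in plain C (the helper names are mine, not part of any API): to hit a given pixel with a 1-pixel-wide aliased line, draw through the pixel's center, i.e. the integer coordinate plus 0.5.

```c
#include <assert.h>

/* Map an integer pixel index to the coordinate of that pixel's center.
 * With D2D1_ANTIALIAS_MODE_ALIASED, a 1 px line drawn through these
 * centers lands exactly on the intended row or column of pixels. */
static float pixel_center(int pixel_index) {
    return (float)pixel_index + 0.5f;
}

/* Does coordinate y fall inside pixel row `row`, i.e. inside the
 * rectangle [row, row + 1)? */
static int inside_pixel(float y, int row) {
    return y >= (float)row && y < (float)row + 1.0f;
}
```

The questioner's line at y = 0.0f sits exactly on the boundary between pixel row -1 (off-canvas) and row 0, which is why the rasterizer may resolve it to either side; y = 0.5f is unambiguously inside row 0.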
The problem
I would like to have an area inside love2d in which movable objects are drawn. The movement of the objects is not restricted by the area boundaries, but drawing is. Think of it as looking outside through a window. For example: a blue rectangle in an area; if it moves to the side, its drawing should be truncated to the boundaries of the area.
Before moving:
After moving (wrong):
After moving (right):
Restrictions and assumptions
You can assume the area is rectangular.
The object to draw inside can be anything: polygon, image or text.
The area covers anything behind it (as if it has its own background)
Objects not 'belonging' to the area should be drawn as usual.
Attempted solutions
I know I could stop drawing objects as soon as they 'touch' the boundaries of the area, but that would cause them to suddenly disappear and then reappear once they are wholly inside the area. I guess it takes some sort of layering system, but I have no clue how to include that in love2d.
I think you are looking for love.graphics.setScissor.
The scissor limits the drawing area to a specified rectangle.
Calling the function without any arguments (i.e. love.graphics.setScissor()) disables scissor.
Example:
function love.draw ()
    -- sets the drawing area to the top left quarter of the screen
    local width, height = love.graphics.getDimensions()
    love.graphics.setScissor(0, 0, width / 2, height / 2)
    -- code to draw things
    love.graphics.setScissor()
end
I am building an app where I want to show interactive floor plans to the user: they can tap on each area, zoom in, and find finer details.
I am getting a JSON response from the back-end with the metadata of the floor plans: shape info and points. I plan to parse the points and draw the shapes on a view using Quartz (I've just started learning Quartz 2D). As a start I took a simple blueprint, which looks like the image below.
As per the blueprint, the center is at (0,0) and there are 4 points.
Below are the points that I am getting from the backend for the blueprint.
X:-1405.52, Y:686.18
X:550.27, Y:683.97
X:1392.26, Y:-776.79
X:-1405.52, Y:-776.79
I tried to draw this:
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (Shape *shape in shapes.shapesArray) {
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        BOOL isFirstPoint = YES;
        for (Points *point in shape.arrayOfPoints) {
            NSLog(@"X:%f, Y:%f", point.x, point.y);
            if (isFirstPoint) {
                CGContextMoveToPoint(context, point.x, point.y);
                //[bpath moveToPoint:CGPointMake(point.x, point.y)];
                isFirstPoint = NO;
                continue;
            }
            CGContextAddLineToPoint(context, point.x, point.y);
        }
        CGContextStrokePath(context);
    }
}
But I am getting the image below as the result, which does not look correct.
Questions:
Am I in the correct direction of achieving this?
How do I draw points in the negative direction?
According to the coordinates, the drawing will be very large. I want to draw it first and then shrink it to fit the screen so that users can later zoom in, pan, etc.
UPDATE:
I have achieved some of the basics using translation and scaling. Now the problem is how to fit the drawn content to the view bounds. Since my coordinates are pretty big, the drawing goes out of bounds; I want it to fit.
Please find below the test code that I am having.
Any idea how to fit it?
DrawshapefromJSONPoints
I have found a solution to the problem. Now I am able to draw shapes from a set of points and fit them to the screen and also make them zoom able without quality loss. Please find below the link to the project. This can be tweaked to support colors, different shapes, I am currently handling only polygons.
DrawShapeFromJSON
A few observations:
There is nothing wrong with using Quartz for this, but you might find OpenGL more flexible, since you mention requirements for hit-testing areas and scaling the model in real time.
Your code assumes an ordered list of points, since you plot the path with CGContextMoveToPoint followed by CGContextAddLineToPoint. If this is guaranteed by the data contract, fine; however, you will need to write a more intelligent renderer if multiple closed paths can be returned in your JSON.
Questions 2 and 3 can be covered by a primer on computer graphics (specifically the model-view matrix and transformations). If your world coordinates are centered at (0,0,0), you can scale the model by applying a scalar to each vertex. Drawing points on the negative axis will make more sense once you are not working directly in the Quartz 2D coordinate system (0,0)-(w, h).
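For question 3 (fitting large model coordinates into the view), here is a minimal sketch of the math in plain C, independent of Quartz; the struct and function names are illustrative, not part of any API. Compute the model's bounding box, then a uniform scale plus translation that maps it into the view.

```c
#include <assert.h>
#include <float.h>

typedef struct { double x, y; } Pt;
typedef struct { double scale, tx, ty; } Fit;

/* Compute a uniform scale and translation mapping the bounding box of
 * `pts` into a view of size vw x vh, preserving the aspect ratio. */
static Fit fit_to_view(const Pt *pts, int n, double vw, double vh) {
    double minx = DBL_MAX, miny = DBL_MAX, maxx = -DBL_MAX, maxy = -DBL_MAX;
    for (int i = 0; i < n; i++) {
        if (pts[i].x < minx) minx = pts[i].x;
        if (pts[i].x > maxx) maxx = pts[i].x;
        if (pts[i].y < miny) miny = pts[i].y;
        if (pts[i].y > maxy) maxy = pts[i].y;
    }
    double sx = vw / (maxx - minx), sy = vh / (maxy - miny);
    Fit f;
    f.scale = sx < sy ? sx : sy;   /* uniform scale: no distortion   */
    f.tx = -minx * f.scale;        /* shift bbox corner to the origin */
    f.ty = -miny * f.scale;
    return f;
}

/* Apply the fit to one model point. */
static Pt apply_fit(Fit f, Pt p) {
    Pt q = { p.x * f.scale + f.tx, p.y * f.scale + f.ty };
    return q;
}
```

With the four blueprint points above and a 320x480 view, every transformed point lands inside the view; in Quartz you would express the same scale and translation with CGContextTranslateCTM and CGContextScaleCTM before stroking.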
I'm developing an app which involves drawing lines. Every time the user moves their finger, that point is added to a path and also added to the CGContext, as in the example below.
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddCurveToPoint(cacheContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
CGPathMoveToPoint(path, NULL, point1.x, point1.y);
CGPathAddCurveToPoint(path, NULL, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
Now when I want to add it and stroke it in black I use the following code
CGContextSetStrokeColorWithColor(cacheContext, [UIColor blackColor].CGColor);
CGContextAddPath(cacheContext,path);
CGContextStrokePath(cacheContext);
However, the line that gets stroked this time will be a bit smaller than the one that was drawn before. This results in a slight border around the stroked path. So my question is: how can I get the stroked path to be identical to the path that was drawn into the CGContext?
The issue is due to anti-aliasing. The path is a geometric ideal. The bitmap generated by stroking the path with a given width, color, etc. is imperfect. The ideal shape covers some pixels completely, but only covers others partially.
The result without anti-aliasing (and assuming an opaque color) is to fully paint pixels which mostly lie within the ideal shape and don't touch the pixels which mostly lie outside of it. That leaves visible jaggies on anything other than vertical or horizontal lines. If you later draw the same path with the same stroke parameters again, exactly the same pixels will be affected and, since they are being fully painted, you can completely replace the old drawing with the new.
With anti-aliasing, any pixel which is only partially within the ideal shape is not completely painted with the new color. Rather, the stroke color is applied in proportion to the percentage of the pixel which is within the ideal shape. The color that was already in that pixel is retained in inverse proportion. For example, a pixel which is 43% within the ideal shape will get a color which is 43% of the stroke color plus 57% of the prior color.
That means that stroking the path a second time with a different color will not completely replace the color from a previous stroke. If you fill a bitmap with white and then stroke a path in red, some of the pixels along the edge will mix a little red with a little of the white to give light red or pink. If you then stroke that path in blue, the pixels along the edge will mix a little blue with a little of the color that was there, which is a light red or pink. That will give a magenta-ish color.
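The mixing described above is plain linear interpolation per color channel; here is a small sketch in C, where coverage is the fraction of the pixel lying inside the ideal shape:

```c
#include <assert.h>
#include <math.h>

/* Blend one color channel: `coverage` is how much of the pixel the
 * stroke covers (0..1); the remainder keeps the prior channel value. */
static double blend_channel(double coverage, double stroke, double prior) {
    return coverage * stroke + (1.0 - coverage) * prior;
}
```

Stroking red (1, 0, 0) over white (1, 1, 1) at 43% coverage gives (1.0, 0.57, 0.57), a pink; stroking blue over that pink then mixes toward the magenta-ish fringe described above.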
You can disable anti-aliasing using CGContextSetShouldAntialias(), but then you risk getting jaggies. You would have to do this around both strokings of the path.
Alternatively, you can clear the context to some background color before redrawing the path. But for that, you need to be able to completely redraw everything you want to appear.
I use the following code to draw an arc in the lower half of a circle.
According to Apple's documentation:
This method creates an open subpath. The created arc lies on the perimeter of the specified circle. When drawn in the default coordinate system, the start and end angles are based on the unit circle shown in Figure 1. For example, specifying a start angle of 0 radians, an end angle of π radians, and setting the clockwise parameter to YES draws the bottom half of the circle. However, specifying the same start and end angles but setting the clockwise parameter to NO draws the top half of the circle.
I wrote
std::complex<double> start1( std::polar(190.0,0.0) );
CGPoint targetStart1 = CGPointMake(start1.real() + 342.0, start1.imag() + 220.);
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, targetStart1.x, targetStart1.y);
CGPathAddArc(path, NULL, 342.0, 220.0, 190, 0.0, PI_180, YES );
CGContextAddPath(context, path);
CGContextSetLineWidth( context, 15.0 );
// set the color for the stroked circle
CGContextSetStrokeColorWithColor( context, [UIColor greenColor].CGColor);
CGContextStrokePath(context);
CGPathRelease(path);
However, the arc is drawn in the top half of the circle (see the green ring). I am wondering if I have accidentally flipped the coordinate system somewhere in the code, but I don't know what command to search for.
Thanks for your help.
As per Vinzzz's comment: for some opaque reason, Apple flipped the window manager's coordinate system between OS X and iOS. OS X uses graph paper layout with the origin in the lower left and positive y proceeding upwards. UIKit uses English reading order with the origin in the upper left and positive y proceeding downwards.
Core Graphics retains the OS X approach; UIKit simply presents its output upside down. Throw some Core Text in if you really want to convince yourself.
The standard solution is to translate and scale the current transform matrix as the first two steps in any drawRect:, effectively moving the origin back to the lower left for that view and flipping the y coordinate axis.
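That translate-then-scale recipe amounts to mapping y to height - y. A sketch of the equivalent point arithmetic in plain C (no Core Graphics types; the function name is illustrative):

```c
#include <assert.h>

/* The standard flip: translate by the view height, then scale y by -1.
 * Composed, a point (x, y) maps to (x, height - y), so arcs that were
 * drawn in the "top" half appear in the bottom half and vice versa. */
static double flip_y(double y, double height) {
    return height - y;
}
```

Note the flip is its own inverse: applying it twice returns the original y, which is why flipping an already-flipped context restores normal orientation.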
So I am using CGBitmapContext to adapt a software blitter to iOS. I write my data to the bitmap, then I use Core Graphics functions to write text over the CGBitmapContext. The software blitter uses an upper-left-hand origin, while the Core Graphics functions use a lower-left-hand origin. So when I draw an image using the blitter at (0, 0), it appears in the upper left corner. When I draw text using CG at (0, 0), it appears in the lower left corner and it is NOT INVERTED. Yes, I have read all the other questions about inverting coordinates, and I am doing that to the resulting CG image to display it in the view properly. Perhaps a code example would help...
// This image is at the top left corner using the custom blitter.
screen->write(image, ark::Point(0, 0));
// This text is at the bottom left corner RIGHT SIDE UP
// cg context is a CGBitmapContext
CGContextShowTextAtPoint(
cgcontext,
0, 0,
str.c_str(),
str.length());
// Send final image to video output.
CGImageRef screenImage = CGBitmapContextCreateImage(cgcontext);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, 480);
// Here I flip the whole image, so if I flip the text above, then
// it ends up upside down.
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context,
CGRectMake(0, 0, 320, 480), screenImage);
CGContextRestoreGState(context);
CGImageRelease(screenImage);
How can I mix coordinate systems? How can I draw my text right side up with ULC origin?
Flipping the co-ordinate system means you will draw upside down. You can't do that transformation without that result. (The reason to do it is when you're going to hand your result off to something that will flip it back, so that upside down will be right-side-up again.)
Everything always draws with positive y going up and negative y going down. The text rises toward positive y, and descends below the baseline toward negative y. Consecutive lines' baselines have lower and lower y positions. The same is true of images and (non-textual) paths—everything draws this way.
What you do by transforming the co-ordinate system is not change how things draw; they always draw with positive y going up. To transform the co-ordinate system is to redefine up and down. You make positive y into negative y, and vice versa. You also change where 0, 0 is.
So what you need to do is not to flip the co-ordinate system, but to (only) translate it. You need to translate up by the entire height of the context, minus the height of a line of text. Then, with 0,0 defined as that position, you can show your text at 0,0 and it will draw from that position, and the right way up (and down).
Or, don't translate, and just show the text from that point. Either way will work.
I'm pretty sure you don't need and won't want any transformation in the destination context. The translation (if you translate) should happen in the bitmap context.
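The translate-only arithmetic from the answer above can be sketched in plain C (the names are illustrative): to make text appear at the top of the upper-left-origin output, place its baseline at the context height minus one line height in the lower-left-origin bitmap context.

```c
#include <assert.h>

/* In a lower-left-origin context, the y at which to draw a line of
 * text so it appears at the TOP of the final (upper-left-origin)
 * output: the full context height minus the height of one text line. */
static double top_line_baseline(double context_height, double line_height) {
    return context_height - line_height;
}

/* Each subsequent line of text moves down one line height. */
static double nth_line_baseline(double context_height, double line_height, int n) {
    return context_height - (double)(n + 1) * line_height;
}
```

With a 480-point-tall context and 20-point lines, the first line is shown at y = 460 and the second at y = 440; the text still draws right side up because the coordinate system was never flipped.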