I am using CGBitmapContext to adapt a software blitter to iOS. I write my data to the bitmap, then I use Core Graphics functions to draw text over the CGBitmapContext. The software blitter uses an upper-left-hand origin, while the Core Graphics functions use a lower-left-hand origin. So when I draw an image using the blitter at (0, 0), it appears in the upper left corner; when I draw text using CG at (0, 0), it appears in the lower left corner and it is NOT INVERTED. Yes, I have read all the other questions about inverting coordinates, and I am already doing that to the resulting CG image to display it in the view properly. Perhaps a code example would help...
// This image is at the top left corner using the custom blitter.
screen->write(image, ark::Point(0, 0));
// This text is at the bottom left corner RIGHT SIDE UP
// cg context is a CGBitmapContext
CGContextShowTextAtPoint(
    cgcontext, 0, 0,
    str.c_str(), str.length());
// Send final image to video output.
CGImageRef screenImage = CGBitmapContextCreateImage(cgcontext);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, 480);
// Here I flip the whole image, so if I flip the text above, then
// it ends up upside down.
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context,
    CGRectMake(0, 0, 320, 480), screenImage);
CGContextRestoreGState(context);
CGImageRelease(screenImage);
How can I mix coordinate systems? How can I draw my text right side up with ULC origin?
Flipping the co-ordinate system means you will draw upside down. You can't do that transformation without that result. (The reason to do it is when you're going to hand your result off to something that will flip it back, so that upside down will be right-side-up again.)
Everything always draws with positive y going up and negative y going down. The text rises toward positive y, and descends below the baseline toward negative y. Consecutive lines' baselines have lower and lower y positions. The same is true of images and (non-textual) paths—everything draws this way.
What you do by transforming the co-ordinate system is not change how things draw; they always draw with positive y going up. To transform the co-ordinate system is to redefine up and down. You make positive y into negative y, and vice versa. You also change where 0, 0 is.
So what you need to do is not to flip the co-ordinate system, but to (only) translate it. You need to translate up by the entire height of the context, minus the height of a line of text. Then, with 0,0 defined as that position, you can show your text at 0,0 and it will draw from that position, and the right way up (and down).
Or, don't translate, and just show the text from that point. Either way will work.
I'm pretty sure you don't need and won't want any transformation in the destination context. The translation (if you translate) should happen in the bitmap context.
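For concreteness, here is a minimal sketch of both variants in the bitmap context, assuming the bitmap is 480 points tall (as in the destination code) and that lineHeight is a placeholder for your font's line height:
// Variant 1: translate the bitmap context up, then show the text at 0,0.
CGContextSaveGState(cgcontext);
CGContextTranslateCTM(cgcontext, 0, 480 - lineHeight);
CGContextShowTextAtPoint(cgcontext, 0, 0, str.c_str(), str.length());
CGContextRestoreGState(cgcontext);
// Variant 2: skip the translation and pass the position directly.
CGContextShowTextAtPoint(cgcontext, 0, 480 - lineHeight, str.c_str(), str.length());
Either way, the text lands at the top of the bitmap and stays right-side-up, with no flip applied to the text itself.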
Related
I've read the online docs about DirectX rasterization rules, but I still can't understand why this code doesn't produce anything visible on the screen:
target->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
target->DrawLine(D2D1::Point2F(0.f, 0.f), D2D1::Point2F(10.f, 0.f), redBrush, 1.f);
Of course, if I change y from 0.0f to 1.0f for both line points, I get a visible horizontal line at the top-left of the window client area, but I would like to understand the principles involved here. I couldn't figure them out from the available documentation.
You should draw your line "in the middle of the pixel":
target->DrawLine(D2D1::Point2F(0.0f, 0.5f), D2D1::Point2F(10.f, 0.5f), redBrush, 1.f);
Otherwise, if you don't use antialiasing, your line can be drawn on either side of the (0,0)-(10,0) line. In your case it gets drawn outside of the window canvas.
Note that a pixel on screen is actually a rectangle from D2D1::Point2F(0.0f, 0.0f) to D2D1::Point2F(1.0f, 1.0f), and D2D1::Point2F(0.5f, 0.5f) is the center of that rectangle. If you draw a 1x1 rectangle around that coordinate, it will cover the pixel exactly.
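As a quick sanity check (a sketch, not part of the question's code), filling that 1x1 rectangle in aliased mode lights up exactly the top-left pixel:
target->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
// This rectangle spans the pixel cell whose center is (0.5, 0.5).
target->FillRectangle(D2D1::RectF(0.0f, 0.0f, 1.0f, 1.0f), redBrush);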
I am working on an API that requires me to set up an outer geometry mask on an ID2D1RenderTarget, such that any subsequent draw call only draws the portion of the drawing that lies outside this geometry.
https://msdn.microsoft.com/en-us/library/windows/desktop/dd756675(v=vs.85).aspx explains how to set up an inner geometry mask on an ID2D1RenderTarget, such that any subsequent draw call only draws the portion of the drawing that lies inside the geometry. I want to implement just the opposite of that. Is this possible? Any help is deeply appreciated.
One way to do this is to subtract your geometry from a rectangle that fills the entire render target. Check out the MSDN page on combining geometries. I have a small code example below:
// pathGeometry (assumed declared elsewhere) is the original geometry to invert.
ComPtr<ID2D1PathGeometry> invertedGeometry;
ComPtr<ID2D1RectangleGeometry> rectangleGeometry;
// A rectangle covering the entire render target.
d2dFactory->CreateRectangleGeometry(
    D2D1::RectF(0, 0, targetWidth, targetHeight),
    &rectangleGeometry);
// Subtract the original geometry from the full-target rectangle,
// writing the result into invertedGeometry. (Error handling omitted.)
ComPtr<ID2D1GeometrySink> geometrySink;
d2dFactory->CreatePathGeometry(&invertedGeometry);
invertedGeometry->Open(&geometrySink);
rectangleGeometry->CombineWithGeometry(
    pathGeometry.Get(),
    D2D1_COMBINE_MODE_EXCLUDE,
    D2D1::Matrix3x2F::Identity(),
    geometrySink.Get());
geometrySink->Close();
Use the inverted geometry as the geometric mask instead of the original path geometry.
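For example, a sketch of pushing the inverted geometry as the geometric mask, via the PushLayer mechanism the linked MSDN page describes (error handling omitted):
// Clip all subsequent drawing to the outside of the original geometry.
target->PushLayer(
    D2D1::LayerParameters(D2D1::InfiniteRect(), invertedGeometry.Get()),
    nullptr); // nullptr lets Direct2D manage the layer (Windows 8+; older targets need an explicit ID2D1Layer)
// ... draw calls here only show up outside the original geometry ...
target->PopLayer();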
A second way to do this is to rasterize your geometry to a bitmap and use it as an opacity mask. You can flip the colors depending on whether or not you want the inside or outside to mask.
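A hedged sketch of that variant, assuming maskBitmap is a bitmap you produced by rasterizing the geometry (opaque where drawing should show, transparent elsewhere) and contentBrush is a bitmap brush holding your content — all three names are placeholders:
// Opacity masks require aliased rendering on the target.
target->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
D2D1_RECT_F destRect = D2D1::RectF(0, 0, targetWidth, targetHeight);
target->FillOpacityMask(
    maskBitmap.Get(),
    contentBrush.Get(),
    D2D1_OPACITY_MASK_CONTENT_GRAPHICS,
    &destRect,
    &destRect);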
I am building an app where I want to show interactive floor plans to the user, meaning they can tap on each area, zoom in, and find finer details.
I am getting a JSON response from the back end that has the floor plans' metadata, shape info, and points. I am planning to parse the points and draw the shapes on a view using Quartz (I have just started learning Quartz 2D). As a start, I took a simple blueprint that looks like the image below.
As per the blueprint, the center is at (0,0) and there are 4 points.
Below are the points I am getting from the back end for the blueprint:
X:-1405.52, Y:686.18
X:550.27, Y:683.97
X:1392.26, Y:-776.79
X:-1405.52, Y:-776.79
I tried to draw this:
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (Shape *shape in shapes.shapesArray) {
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        BOOL isFirstPoint = YES;
        for (Points *point in shape.arrayOfPoints) {
            NSLog(@"X:%f, Y:%f", point.x, point.y);
            if (isFirstPoint) {
                // Start the subpath at the shape's first point.
                CGContextMoveToPoint(context, point.x, point.y);
                isFirstPoint = NO;
                continue;
            }
            // Extend the subpath to the next point.
            CGContextAddLineToPoint(context, point.x, point.y);
        }
        CGContextStrokePath(context);
    }
}
But I am getting the image below as the result, which does not look correct.
Questions:
1. Am I headed in the right direction to achieve this?
2. How do I draw points with negative coordinates?
3. Going by the coordinates, the drawing will be very large. I want to draw it first and then shrink it to fit the screen, so that users can later zoom in, pan, etc.
UPDATE:
I have achieved some of the basics using translation and scaling. Now the problem is how to fit the drawn content to the view bounds. Since my coordinates are pretty big, the drawing goes out of bounds; I want it to fit.
Please find below the test code that I am having.
Any idea how to fit it?
DrawshapefromJSONPoints
I have found a solution to the problem. Now I am able to draw shapes from a set of points, fit them to the screen, and make them zoomable without quality loss. Please find the link to the project below. It can be tweaked to support colors and different shapes; I am currently handling only polygons.
DrawShapeFromJSON
A few observations:
There is nothing wrong with using Quartz for this, but you might find OpenGL more flexible, since you mention requirements for hit-testing areas and scaling the model in real time.
Your code is assuming an ordered list of points since you plot the path using CGContextMoveToPoint. If this is guaranteed by the data contract then fine; however, you will need to write a more intelligent renderer if you have multiple closed paths returned in your JSON.
Questions 2 and 3 can be covered with a primer in computer graphics (specifically, the model-view matrix and transformations). If your world coordinates are centered at (0,0,0), you can scale the vertices by applying a scalar to each vertex. Drawing points on the negative axis will make more sense once you are not working directly in the Quartz 2D coordinate system of (0,0)-(w,h); a minimal sketch follows.
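To make that concrete, here is a sketch of the translate-and-scale approach inside drawRect:, where bbox is assumed to be a CGRect you compute as the bounding box of all parsed points:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
// Fit the model's bounding box into the view, preserving aspect ratio.
CGFloat scale = MIN(self.bounds.size.width / bbox.size.width,
                    self.bounds.size.height / bbox.size.height);
// Move the origin to the view's center, flip y so +y points up like the
// data, then center the model's bounding box on the origin.
CGContextTranslateCTM(context, CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
CGContextScaleCTM(context, scale, -scale);
CGContextTranslateCTM(context, -CGRectGetMidX(bbox), -CGRectGetMidY(bbox));
// ... stroke the shapes in model coordinates here ...
CGContextRestoreGState(context);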
I use the following code to draw an arc in the lower half of a circle.
According to Apple's documentation:
This method creates an open subpath. The created arc lies on the
perimeter of the specified circle. When drawn in the default
coordinate system, the start and end angles are based on the unit
circle shown in Figure 1. For example, specifying a start angle of 0
radians, an end angle of π radians, and setting the clockwise
parameter to YES draws the bottom half of the circle. However,
specifying the same start and end angles but setting the clockwise
parameter to NO draws the top half of the circle.
I wrote
std::complex<double> start1( std::polar(190.0,0.0) );
CGPoint targetStart1 = CGPointMake(start1.real() + 342.0, start1.imag() + 220.);
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, targetStart1.x, targetStart1.y);
CGPathAddArc(path, NULL, 342.0, 220.0, 190, 0.0, PI_180, YES );
CGContextAddPath(context, path);
CGContextSetLineWidth( context, 15.0 );
// set the color for the stroked circle
CGContextSetStrokeColorWithColor( context, [UIColor greenColor].CGColor);
CGContextStrokePath(context);
CGPathRelease(path);
However, the arc is drawn in the top half of the circle (see the green ring). I am wondering if I have accidentally flipped the coordinate system somewhere in the code, but I don't know what command I should search for.
Thanks for your help.
As per Vinzzz's comment: for some opaque reason, Apple flipped the window manager's coordinate system between OS X and iOS. OS X uses graph paper layout with the origin in the lower left and positive y proceeding upwards. UIKit uses English reading order with the origin in the upper left and positive y proceeding downwards.
Core Graphics retains the OS X approach; UIKit simply presents its output upside down. Throw some Core Text in if you really want to convince yourself.
The standard solution is to translate and scale the current transform matrix as the first two steps in any drawRect:, effectively moving the origin back to the lower left for that view and flipping the y coordinate axis.
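A minimal sketch of that standard fix, placed at the top of drawRect: (assuming the view's full height is the drawing height):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0, CGRectGetHeight(self.bounds));
CGContextScaleCTM(context, 1, -1);
// From here on, (0,0) is the lower left and +y goes up, so the
// clockwise:YES arc draws the bottom half, as the documentation describes.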
I tried to rotate an image about its bottom center by setting the anchorPoint property of the view's layer to CGPointMake(0.5, 1). It did rotate about the bottom center, but the image view's bottom center was translated to the image view's center position, and thus the whole image was translated up by half the height of the image. How can I have the image view retain its center but still rotate about its bottom center?
Has anyone ever encountered this issue before?
Hey guys, here's a pictorial demonstration of how I want the rotation to work on the images!
This is the original untransformed image!
As you can see, I have put a red dot to indicate the bottom center of the image.
Here is the rotated image. The image has been rotated clockwise by a few degrees, but about its center, which is again not what I want!
This image was obtained after applying the transforms posted in Caleb's answer. Note: the transforms were applied to an image view that houses the above vertical image. As you can see, the rotation is flawed: the bottom center has gone up, and the rotation itself was different. It took something like a 180-degree rotation, and the whole image translated up by some distance.
To reiterate, I just want the image to rotate like the needle of a clock, but about its bottom center.
Thank you!
Assuming that you want to rotate the view in which the image is drawn, the usual way to do it is to compose a transformation matrix from three separate matrices. The first one translates the image left by half its width. The second rotates the view. The third translates the image right by half its width (i.e. it's the inverse of the first). When you put these three together, you get a matrix that rotates the image about its bottom center.
For example, if you have:
CGFloat x = imageWidth/2.0;
CGFloat r = 1.0; // some value in radians;
you can set up a transform like this:
CGAffineTransform t1 = CGAffineTransformMakeTranslation(-x, 0);
CGAffineTransform t2 = CGAffineTransformRotate(t1, r);
CGAffineTransform t3 = CGAffineTransformTranslate(t2, x, 0);
t3 is the final transform that will translate, rotate, and translate back all in one step. If you set t3 as the transform for an image view, you'll find the view rotated by 1.0 radians (or whatever you set r to) about the bottom center.
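In code, applying it is just (assuming imageView is the view being rotated):
imageView.transform = t3;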
In addition to correcting my earlier error, Peter Hosey's comment points out that another option is to rotate the layer instead of the view. I don't think this is what you're looking for, but if it is, you should know that transformations on a layer take place about its anchorPoint, which by default is set to the center of its bounds rectangle. To rotate about the bottom center of the bounds rect, you could set the anchor point to the bottom center first, and then apply a rotation transform. (Note that the transform property of a layer is a CATransform3D; if you want to use an affine transform as above, use the setAffineTransform: method.)
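If you go the layer route, here is a minimal sketch of the usual recipe, compensating the position so the view doesn't shift (which is exactly the translation the question describes); r is the rotation in radians:
CALayer *layer = imageView.layer;
CGPoint oldOrigin = layer.frame.origin;
layer.anchorPoint = CGPointMake(0.5, 1.0); // bottom center
CGPoint newOrigin = layer.frame.origin;
// Changing anchorPoint moves the frame; shift position to cancel that out.
layer.position = CGPointMake(layer.position.x + oldOrigin.x - newOrigin.x,
                             layer.position.y + oldOrigin.y - newOrigin.y);
[layer setAffineTransform:CGAffineTransformMakeRotation(r)];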