PDF Annots on iOS with Quartz (iPad)

I've been spending a couple of days trying to get at the PDF annotations in my iPad application.
I'm using the following code to get the annotations, and yes, it works :)
But the rect values are completely different from the iOS rect values.
I can't figure out how to place UIButtons on the spot where the annotation is supposed to be.
For example, I have an annotation in the top left corner of the PDF file.
My /Annots /Rect values are 1208.93, 2266.28, 1232.93, 2290.28 (WHAT?!)
How can I translate the PDF /Annots /Rect values to iOS x and y coordinates?
CGPDFPageRef page = CGPDFDocumentGetPage(doc, i + 1);
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(page);
CGPDFArrayRef outputArray;
if (!CGPDFDictionaryGetArray(pageDictionary, "Annots", &outputArray)) {
    return;
}
// ... iterate over outputArray ...

I think those coordinates are in the "default user coordinate space" of the PDF. You need to apply a transformation that sends them into screen coordinates. You can use CGPDFPageGetDrawingTransform to get such a transformation. Make sure you're using the same transformation for drawing the page and the annotations.

I am not sure if this is the problem, but the Quartz 2D coordinate system begins at the bottom left.
See the Quartz 2D Programming Guide's Coordinate Systems section for more information.
P.S. If you get it working, I would like to see the resulting annotation code.
EDIT:
Just found this code (not tested):
// flip context so page is right way up
CGContextTranslateCTM(currentContext, 0, paperSize.size.height);
CGContextScaleCTM(currentContext, 1.0, -1.0);
Source


How to map any arbitrary two points from an image to a map view? (iOS MapKit overlay context)

I can choose points on the map view and overlay an image there so that the image's top left and bottom right corners are mapped to the corresponding map view points.
But how can I map any arbitrary two points (not those corner points) from the image to two chosen map view points? Like the calibrate feature of this app: https://itunes.apple.com/us/app/mapcha/id956671318?mt=8
Any hints, either SDK-specific or involving the math, to guide me towards a solution are appreciated.
Edited:
I didn't realize that my explanation was not making sense to you; pardon me.
I thought my requirement would easily make sense from the reference app and its Calibrate feature.
OK, a bit more explanation here. I followed this tutorial to achieve a result like the image shown. Now, all I want is to let my app's users attach their own images to the map at their desired places, like the calibrate feature of the reference app.
Note:
You don't have to install the reference app to understand the calibrate feature as it can be understood by just viewing the images with title Step 1, Step 2, Step 3, Step 4 & Finish.
Here is the text of those steps in case the reference becomes unavailable.
Step 1: Take a picture of your map or use a photo from your photo library. Ensure North is up.
Step 2: Choose a point on the paper map and locate it on the standard map.
Step 3: Repeat for another point. Points that are farther apart give better results.
Step 4: Preview the map and check alignment. Adjust the transparency if required.
Finish: Now you can see your current location on the paper map.
Edited:
I'm not getting the desired result shown in the image, but it seems close.
Here is a snippet of the code I'm using.
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context {
    CGImageRef imageReference = self.overlayImage.CGImage;
    MKMapRect theMapRect = self.overlay.boundingMapRect;
    CGRect theRect = [self rectForMapRect:theMapRect];

    double imgWidthScaleFactor = 1.0;
    if (_imgWidth > 0) {
        imgWidthScaleFactor = theRect.size.width / _imgWidth;
    }
    double imgHeightScaleFactor = 1.0;
    if (_imgHeight > 0) {
        imgHeightScaleFactor = theRect.size.height / _imgHeight;
    }

    CGPoint contextCenter = CGPointMake(CGRectGetMidX(theRect), CGRectGetMidY(theRect));
    CGContextTranslateCTM(context, contextCenter.x, contextCenter.y);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextRotateCTM(context, _radian);
    CGContextTranslateCTM(context, -contextCenter.x, -contextCenter.y);
    CGContextTranslateCTM(context, imgWidthScaleFactor * _pivotX, imgHeightScaleFactor * _pivotY);

    CGContextSetAlpha(context, 0.5);
    CGContextDrawImage(context, theRect, imageReference);
}
So, let me start by explaining the intuition behind what this example app does:
image to stick on map:
/* _____
| |
| | __
|_____| |__|
(image to stick) (map)
*/
As you can see, I have made the image's shape and size different from the map's. Now, the reason you have to pick two points on the map and two on the image is to provide a scale for your image.
/* _____
| * |
|* |
|_____|
(image to stick, with chosen points as *)
*/
I. Calculate the distance between the two chosen points on the image and between the two chosen points on the map; let's call them d_image and d_map.
Now you get the scaling of your image by simply doing:
float scale = d_map / d_image;
II. Using the scale, you will be able to find the upper right corner and the bottom left corner of the image on the map. You can convert those points to coordinates using the following:
CLLocationCoordinate2D image_corner_coordinate = [*yourMap* convertPoint:*image_corner* toCoordinateFromView:self.view];
And voila, you have your SouthWest and NorthEast coordinates to pin your overlay to!
EDIT:
The OP added additional information to the question with the explicit request to handle images that are at a different angle than the chosen points on the map. Although I believe that steps I and II above should be enough to figure out how to achieve the desired effect, here we go for completeness' sake:
Step I.5
Using the scaling factor calculated in step I, we first scale the image down to the appropriate size. Then, given that we know the two chosen points on the map and the two chosen points on the image, we calculate the rotation factor.
Since the image is now at the right scale, the way to think about it is this: you pin one of the image's points down on its matching map point, and rotate the image until the other point matches up. When you find that rotation factor, continue to step II to get your final answer.

iOS: Quartz 2D to draw floor plans

I am building an app where I want to show interactive floor plans to the user: they can tap on each area, zoom in, and see finer details.
I am getting a JSON response from the back-end with the floor plans' metadata, shape info and points. I plan to parse the points and draw the shapes on a view using Quartz (I've started learning Quartz 2D). As a start I took a simple blueprint that looks like the image below.
As per the blueprint, the center is at (0,0) and there are 4 points.
Below are the points I am getting from the backend for the blueprint.
X:-1405.52, Y:686.18
X:550.27, Y:683.97
X:1392.26, Y:-776.79
X:-1405.52, Y:-776.79
I tried to draw this
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    for (Shape *shape in shapes.shapesArray) {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        BOOL isFirstPoint = YES;
        for (Points *point in shape.arrayOfPoints) {
            NSLog(@"X:%f, Y:%f", point.x, point.y);
            if (isFirstPoint) {
                CGContextMoveToPoint(context, point.x, point.y);
                //[bpath moveToPoint:CGPointMake(point.x, point.y)];
                isFirstPoint = NO;
                continue;
            }
            CGContextAddLineToPoint(context, point.x, point.y);
        }
        CGContextStrokePath(context);
    }
}
But the result I am getting, shown in the image below, does not look correct.
Questions:
1. Am I going in the correct direction to achieve this?
2. How do I draw points in the negative direction?
3. According to the coordinates, the drawing will be very large. I want to draw it first and then shrink it to fit the screen so that users can later zoom in, pan, etc.
UPDATE:
I have achieved some of the basics using translation and scaling. Now the problem is how to fit the drawn content to the view bounds. Since my coordinates are pretty big, the drawing goes out of bounds; I want it to fit.
Please find below the test code that I have.
Any idea how to fit it?
DrawshapefromJSONPoints
I have found a solution to the problem. Now I am able to draw shapes from a set of points, fit them to the screen, and make them zoomable without quality loss. Please find the link to the project below. It can be tweaked to support colors and different shapes; I am currently handling only polygons.
DrawShapeFromJSON
A few observations:
There is nothing wrong with using Quartz for this, but you might find OpenGL more flexible, since you mention requirements for hit-testing areas and scaling the model in real time.
Your code assumes an ordered list of points, since you plot the path starting with CGContextMoveToPoint. If this is guaranteed by the data contract, then fine; however, you will need to write a more intelligent renderer if your JSON returns multiple closed paths.
Questions 2 & 3 can be covered with a primer in computer graphics (specifically the model-view matrix and transformations). If your world coordinates are centered at (0,0,0), you can scale the model by applying a scalar to each vertex. Drawing points on the negative axis will make more sense once you are not working directly in the Quartz 2D coordinate system (0,0)-(w, h).

How to draw smooth ink annotation on PDF in iOS using Objective-C

I am drawing an ink annotation in a PDF using Objective-C. The PDF specification says that we need to provide an array of points for the ink drawing. I am using the PoDoFo library.
This is how I am drawing ink annotation currently:
PoDoFo::PdfArray arr; // This is the array of points to be drawn
arr.push_back(X1);
arr.push_back(Y1);
arr.push_back(X2);
arr.push_back(Y2);
arr.push_back(X3);
arr.push_back(Y3);
// ...
arr.push_back(Xn-1);
arr.push_back(Yn-1);
arr.push_back(Xn);
arr.push_back(Yn);
PoDoFo::PdfMemDocument* doc = (PoDoFo::PdfMemDocument *)aDoc;
PoDoFo::PdfPage* pPage = doc->GetPage(pageIndex); // pageIndex is the page number
if (!pPage) {
    // couldn't get that page
    return;
}
PoDoFo::PdfAnnotation* anno;
PoDoFo::EPdfAnnotation type = PoDoFo::ePdfAnnotation_Ink;
PoDoFo::PdfRect rect;
// aRect is the CGRect where the annotation is to be drawn on the page
rect.SetBottom(aRect.origin.y);
rect.SetLeft(aRect.origin.x);
rect.SetHeight(aRect.size.height);
rect.SetWidth(aRect.size.width);
anno = pPage->CreateAnnotation(type, rect);
anno->GetObject()->GetDictionary().AddKey("InkList", arr);
The problem is how to build an array that covers every point. I am getting points from the touch delegate methods (e.g. touchesMoved:), but when the user draws at high velocity some points/pixels are skipped, and the PDF can't interpolate those skipped points by itself. Bezier curves can interpolate and draw smooth curves, but a Bezier curve doesn't give me the array of all the points (including the skipped ones), and I need such an array so that when the PDF is opened in Adobe Reader it shows smooth curves. Right now I get a smooth curve on any iOS device, but the curve isn't smooth in Adobe Reader. Here is a comparison of the curves, one drawn using Bezier curves in the simulator and the other in Adobe Reader.
The picture above is taken from the iPad simulator; it was drawn using a Bezier curve and is smooth.
The picture above is taken from Adobe Reader. You can see the red curve isn't smooth like the blue one. How do I make it smooth?
According to PDF 1.3 specification for InkList array:
An array of n arrays, each representing a stroked path. Each array is
a series of alternating x and y coordinates in default user space,
specifying points along the path. When drawn, the points are connected
by straight lines or curves in an implementation-dependent way. (See
implementation note 60 in Appendix H.)
And from note 60:
Acrobat viewers always connect the points along each path with
straight lines.
So you can't have a truly smooth line here like a Bezier curve, only straight segments. This is true up to PDF version 1.7.
The only way to get genuine curves is to render the stroke as an image with Core Graphics and add that image to the PDF.

iPhone: Build a custom mono-color image with a clear area at fixed coords

So what I want to do is build an image the size of the device. The image should be all blue except for a circle at coords (x,y) with a radius of z that should be clear, where x, y, z are variables. I know I should use a CGContext; I just don't know how to get it done.
Any ideas?
You want to use Core Graphics for this. To be more precise, read up on CGContext and the functions used to manipulate them. There are plenty of tutorials for it out there, and Apple provides a lot of sample code as well.

How to place sprites at specific x,y coordinates in XNA 4.0

Okay, I'm working on a tic-tac-toe game for one of my game development courses using XNA 4.0. I need to place sprites or some other objects so the game can check whether the mouse is being clicked in the correct spots. I am going to use transparent sprites as a kind of button. How do I code them to go to specific x,y coordinates? The game board is drawn on the background, and I have all the coordinates for where to place these sprites. I am just stuck on putting the sprites in the correct positions.
SpriteBatch.Draw has a position parameter. Pass in an appropriately-valued Vector2.
Well, if you check the Draw method you will find a parameter for the position.
Check this code sample:
spriteBatch.Begin();
Vector2 pos = new Vector2(10, 10);
spriteBatch.Draw(SpriteTexture, pos, Color.White);
spriteBatch.End();
This draws a sprite, with SpriteTexture as the image, at position x=10, y=10, with the color White to modulate the texture.
You can also find more information here.
Keep in mind that there are many overloads of the Draw method. In one you can even pass rotation information and the like. So Draw(...) has a lot of functionality beyond just placing a sprite.
