I am a bit new to iOS development and have a fairly general question regarding iOS and PDFs. I know there is built-in PDF support in iOS4+, but is it possible to open a PDF for viewing then maybe make notes on it and save it again?
I'm sure there are options like opening the PDF as a background and having a "writable" overlay over it, saving it as an image then writing it out as a PDF, but I was wondering if there was a more "inherent" way to do so.
Thanks.
I am working on the same issue. I'm starting to look at FastPDFKit http://mobfarm.eu/fastpdfkit ... You are supposed to be able to add bookmarks, view the outline, perform searches, and highlight. I'm trying to see whether I can overlay a series of things like images, graphics, and text elements, and then merge the PDF with those additions.
I think we are trying to do the same thing.
For searching, see this answer: Text searching PDF
For the overlay, you can just draw the PDF yourself and add your own overlay:
UIGraphicsBeginImageContextWithOptions(pageRect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
// Flip the context so that the PDF page is rendered
// right side up.
CGContextTranslateCTM(context, 0.0, pageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Scale the context so that the PDF page is rendered
// at the correct size for the zoom level.
CGContextScaleCTM(context, pdfScale, pdfScale);
CGContextTranslateCTM(context, -pageRect.origin.x, -pageRect.origin.y);
CGContextDrawPDFPage(context, pdfPage);
CGContextRestoreGState(context);
// Callback for custom overlay drawing (example)
if ([document shouldDrawOverlayRectForSize:size]) {
[document drawOverlayRect:pageRect inContext:context forPage:page zoomScale:1.0 size:size];
}
Update: Just a note, a PDF page can contain rotation information (the /Rotate entry), so the actual drawing should read that metadata and transform the context accordingly.
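One way to handle that, building on the snippet above: CGPDFPageGetDrawingTransform reads the page's /Rotate entry (the angle is also available via CGPDFPageGetRotationAngle) and returns a transform that maps the chosen PDF box into a destination rectangle. A minimal sketch, assuming pdfPage and context come from the code above and destinationRect is the rect you want the page drawn into:
CGAffineTransform transform = CGPDFPageGetDrawingTransform(pdfPage,
                                                           kCGPDFCropBox,
                                                           destinationRect,
                                                           0,      // additional rotation
                                                           true);  // preserve aspect ratio
CGContextSaveGState(context);
// Flip once for UIKit's top-left origin, then let the transform handle the page rotation.
CGContextTranslateCTM(context, 0, destinationRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawPDFPage(context, pdfPage);
CGContextRestoreGState(context);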
I want to fill a specific area of an image with a specific color.
Example:
In the Joker image above, if I touch the Joker's hair, I want to fill the hair with a specific color; if I touch the nose, fill the nose with a specific color, and so on. I hope you understand what I am trying to say.
After googling, it seems this may be achievable using UIBezierPath or the CGContext reference, but I am very new to them. I tried to read the documentation, but I did not understand much of it, and I have a time limit on this project, so I cannot spend more time on it.
I also found that a flood fill algorithm could be used, but I don't know how to apply it to my case.
NOTE: I don't want to split the original image into pieces (hair, nose, cap, etc.), because that would put many images in the bundle, each needing both normal and Retina versions, so that option is not helpful for me.
So please give me your valuable suggestions, and also tell me which is better for me, UIBezierPath or CGContext? How can I fill a specific portion of the image with color? And can the fill be kept inside the black border of an area? I am new to Quartz 2D programming.
Use the GitHub library below; it uses the flood fill algorithm: UIImageScanlineFloodfill
An Objective-C description: ObjFloodFill
If you want a detailed explanation of the algorithm: Recursion Explained with the Flood Fill Algorithm (and Zombies and Cats)
A few tutorials in other languages: Lode's Computer Graphics Tutorial: Flood Fill
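To give a rough idea of what those libraries do internally, here is a minimal, illustrative 4-neighbour flood fill over raw RGBA pixel data. The function name FloodFilledImage is made up, the start point is taken in bitmap pixel coordinates, and colours are matched exactly; the linked libraries add tolerance, scanline optimisation, and so on:
#import <UIKit/UIKit.h>

static UIImage *FloodFilledImage(UIImage *source, CGPoint start, UIColor *fillColor) {
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into a known RGBA8888 bitmap so we can read and write pixels.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Target colour = colour under the starting point; replacement = fillColor.
    size_t startIndex = ((size_t)start.y * width + (size_t)start.x) * 4;
    uint8_t target[4];
    memcpy(target, pixels + startIndex, 4);

    CGFloat r, g, b, a;
    [fillColor getRed:&r green:&g blue:&b alpha:&a];   // assumes an RGB-based colour
    uint8_t repl[4] = { (uint8_t)(r * 255), (uint8_t)(g * 255), (uint8_t)(b * 255), (uint8_t)(a * 255) };

    if (memcmp(target, repl, 4) != 0) {
        // Use an explicit stack of byte offsets instead of recursion to avoid stack overflow.
        NSMutableArray *stack = [NSMutableArray arrayWithObject:@(startIndex)];
        while (stack.count > 0) {
            size_t idx = [[stack lastObject] unsignedLongValue];
            [stack removeLastObject];
            if (memcmp(pixels + idx, target, 4) != 0) continue;   // already filled or different colour
            memcpy(pixels + idx, repl, 4);

            size_t p = idx / 4, x = p % width, y = p / width;
            if (x > 0)          [stack addObject:@(idx - 4)];
            if (x < width - 1)  [stack addObject:@(idx + 4)];
            if (y > 0)          [stack addObject:@(idx - width * 4)];
            if (y < height - 1) [stack addObject:@(idx + width * 4)];
        }
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:resultRef scale:source.scale orientation:source.imageOrientation];
    CGImageRelease(resultRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}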
Rather than attempting to flood fill an area of a raster-based image, a better approach (and one that needs a much smaller amount of data) would be to create vector images. Once you have a vector image, you can stroke the outline to draw it uncolored, or fill the outline to draw it colored.
I recommend using CoreGraphics calls like CGContextStrokePath() and CGContextFillPath() to do the drawing. This will likely look better than using the flood fill because you will get nicely anti-aliased edges.
Apple has some good documentation on how to draw with Quartz 2D. Particularly the section on Paths is useful to what you're trying to do.
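As a rough illustration of that idea (the triangle path below is made up and stands in for one region of the drawing, and regionIsSelected is an assumed BOOL property; in practice you would build one path per region you want to colour):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Hypothetical region outline (a triangle standing in for "the hair").
    CGMutablePathRef region = CGPathCreateMutable();
    CGPathMoveToPoint(region, NULL, 40, 200);
    CGPathAddLineToPoint(region, NULL, 120, 40);
    CGPathAddLineToPoint(region, NULL, 200, 200);
    CGPathCloseSubpath(region);

    if (self.regionIsSelected) {   // assumed property: has this region been tapped?
        // Fill the region with the chosen colour; the edges come out anti-aliased.
        CGContextSetFillColorWithColor(context, [UIColor greenColor].CGColor);
        CGContextAddPath(context, region);
        CGContextFillPath(context);
    }

    // Always stroke the outline so the uncoloured drawing stays visible.
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(context, 2.0);
    CGContextAddPath(context, region);
    CGContextStrokePath(context);

    CGPathRelease(region);
}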
You can clip the context based on the image's alpha transparency.
I created a quick image with black color and alpha transparency.
Then I used the code below:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext(); // Get the context
CGContextSetFillColorWithColor(context, [UIColor blueColor].CGColor); // Set the fill color to be blue
// Flip the context so that the image is not flipped
CGContextTranslateCTM(context, 0, rect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Clip the context by a mask created from the image
UIImage *image = [UIImage imageNamed:@"image.png"];
CGImageRef cgImage = image.CGImage;
CGContextClipToMask(context, rect, cgImage);
// Finally fill the context with the color and mask you set earlier
CGContextFillRect(context, rect);
}
The result is that the blue fill appears only where the image is opaque.
This is a quick hint of what you can do. However, you now need to convert your image so that the parts you want left unfilled are transparent.
After a quick search I found these links:
How can I change the color 'white' from a UIImage to transparent
How to make one colour transparent in UIImage
If you create your image as an SVG vector-based image, it will be very light (smaller than PNG or JPG) and really easy to manage via Quartz 2D using Bézier paths. Bézier paths can be filled (white to start with). UIBezierPath has a method, containsPoint:, that helps determine whether a tap landed inside the path.
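A minimal sketch of that approach, assuming the view keeps one UIBezierPath per region (regionPaths and selectedPaths are made-up properties, e.g. an NSArray of paths and an NSMutableSet of the tapped ones):
- (void)handleTap:(UITapGestureRecognizer *)tap
{
    CGPoint point = [tap locationInView:self];
    for (UIBezierPath *regionPath in self.regionPaths) {   // assumed array of region paths
        if ([regionPath containsPoint:point]) {
            // Remember which region was tapped and repaint it with the fill colour.
            [self.selectedPaths addObject:regionPath];
            [self setNeedsDisplay];
            break;
        }
    }
}

- (void)drawRect:(CGRect)rect
{
    [[UIColor blackColor] setStroke];
    [[UIColor redColor] setFill];
    for (UIBezierPath *regionPath in self.regionPaths) {
        if ([self.selectedPaths containsObject:regionPath]) {
            [regionPath fill];    // tapped regions get coloured
        }
        [regionPath stroke];      // every region keeps its outline
    }
}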
I have a custom UITableViewCell subclass which shows an image with text over it.
The image is downloaded while the text is readily available at the time the table view cell is displayed.
From various places, I read that it is better to have just one view and draw everything in the view's drawRect: method to improve performance, compared to having multiple subviews (in this case a UIImageView and two UILabels).
I don't want to draw the image in the custom table view cell's drawRect: because:
the image will probably not be available the first time it is called, and
I don't want to redraw the whole image every time someone calls drawRect:.
The image should be drawn only when someone asks for it to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect: method.
The problem:
I am not able to show the image on screen once it is downloaded.
The code I am currently using is:
- (void)drawImageInView
{
//.. completion block after downloading from network
if (image) { // Image downloaded from the network
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(context, 1.0);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGPoint posOnScreen = self.center;
CGContextDrawImage(context, CGRectMake(posOnScreen.x - image.size.width/2,
posOnScreen.y - image.size.height/2,
image.size.width,
image.size.height),
image.CGImage);
UIGraphicsEndImageContext();
}
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext() be enough to ensure that the text is rendered on top of the image?
You're right that drawing the text yourself can make your application faster, since there's no UILabel object overhead, but UIImageView is highly optimized and you probably won't ever be able to draw images faster than this class does. Therefore I highly recommend that you use UIImageView to draw your images. Don't fall into the premature-optimization pitfall: only optimize when you see that your application is not performing at its best.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
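For example, something along these lines in the download path (photoImageView and imageURL are placeholder names, and in a real table view you would also check that the cell has not been reused):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Download off the main thread; dataWithContentsOfURL: blocks, so keep it in the background.
    NSData *data = [NSData dataWithContentsOfURL:imageURL];
    UIImage *downloadedImage = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit must be touched on the main thread.
        cell.photoImageView.image = downloadedImage;
    });
});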
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.
I'm trying to screen capture a view that uses CATiledLayers (for animation) but I'm unable to get the image that I want.
I tried it on Apple's "PhotoScroller" sample application and added this:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:ctx];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, the tiles don't render in the resulting UIImage and all I get is the tile outlines.
It seems that CATiledLayer's renderInContext: behaves differently from CALayer's.
Am I doing anything wrong in trying to capture the tiles? Is my only solution to render the tiles individually myself?
In the end, rather than trying to render the tiles into another view just for animation, I created a new instance of ImageScrollView and animated the original one and the new one together before deallocating the original one.
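Roughly, the idea looks like this (ImageScrollView is the class from Apple's PhotoScroller sample; the displayTiledImageNamed:size: setup call and the slide animation are only illustrative of "animating the two together"):
ImageScrollView *replacement = [[ImageScrollView alloc] initWithFrame:original.frame];
[replacement displayTiledImageNamed:imageName size:imageSize];   // assumed setup, as in the sample
replacement.frame = CGRectOffset(original.frame, original.frame.size.width, 0);   // start just off to the right
[self.view addSubview:replacement];

[UIView animateWithDuration:0.3
                 animations:^{
                     // Slide both views one page-width to the left, together.
                     original.frame = CGRectOffset(original.frame, -original.frame.size.width, 0);
                     replacement.frame = CGRectOffset(replacement.frame, -replacement.frame.size.width, 0);
                 }
                 completion:^(BOOL finished) {
                     [original removeFromSuperview];   // drop the original once it is off screen
                 }];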
Hmm, I've spent a couple of days trying to get the PDF annotations in my iPad application.
I'm using the following code to get the annotations, and yes, it works :)
But the rect values are completely different from the iOS rect values.
I can't figure out how to place UIButtons at the spot where the annotation is supposed to be.
For example, I have an annotation in the top-left corner of the PDF file.
My /Annots /Rect values are 1208.93, 2266.28, 1232.93, 2290.28 (what?!).
How can I translate the PDF /Annots /Rect values to iOS x and y coordinates?
CGPDFPageRef page = CGPDFDocumentGetPage(doc, i+1);
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(page);
CGPDFArrayRef outputArray;
if (!CGPDFDictionaryGetArray(pageDictionary, "Annots", &outputArray)) {
    return;
}
// .... .... ....
I think those coordinates are in the "default user coordinate space" of the PDF. You need to apply a transformation that sends them into screen coordinates. You can use CGPDFPageGetDrawingTransform to get such a transformation. Make sure you're using the same transformation for drawing the page and the annotations.
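A sketch of that mapping, assuming page is the CGPDFPageRef from your loop, rectArray is the annotation's /Rect array, and the page is being displayed in self.view's bounds:
CGPDFReal x1, y1, x2, y2;
CGPDFArrayGetNumber(rectArray, 0, &x1);
CGPDFArrayGetNumber(rectArray, 1, &y1);
CGPDFArrayGetNumber(rectArray, 2, &x2);
CGPDFArrayGetNumber(rectArray, 3, &y2);

// Transform that maps PDF user space into the rect the page is drawn in.
CGAffineTransform transform = CGPDFPageGetDrawingTransform(page,
                                                           kCGPDFMediaBox,
                                                           self.view.bounds,
                                                           0,
                                                           true);
CGRect annotationRect = CGRectApplyAffineTransform(CGRectMake(x1, y1, x2 - x1, y2 - y1), transform);

// The result is still in Quartz coordinates (origin bottom-left); flip for UIKit.
// (Whether you need this flip depends on how you draw the page; keep the two consistent.)
annotationRect.origin.y = self.view.bounds.size.height - CGRectGetMaxY(annotationRect);

UIButton *annotationButton = [UIButton buttonWithType:UIButtonTypeCustom];
annotationButton.frame = annotationRect;
[self.view addSubview:annotationButton];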
I am not sure if this is the problem, but the Quartz 2D coordinate system begins at the bottom left.
See the Quartz 2D Programming Guide's Coordinate Systems section for more information.
PS: If you get it working, I would like to see the resulting annotation code.
EDIT:
Just found this code (not tested):
// flip context so page is right way up
CGContextTranslateCTM(currentContext, 0, paperSize.size.height);
CGContextScaleCTM(currentContext, 1.0, -1.0);
Source
I've asked this question on a couple other forums and have had zero response, so I'm hoping someone here can help point me in the right direction. I have a pretty simple one screen application for my work. It's basically just a recreation of a 1 page paper report that has a company logo, some labels, a few text boxes and a scroll text box for the report.
I need to be able to fill out the report and then click a button to save it in graphical form so I can fax, print, or email it later. Currently, I'm just programmatically taking a screen capture and saving it to the Photos library (the default for screen captures), and then emailing it from Photos. This works okay, but it is hacky at best.
I've read through the new iPad 3.2 guide for creating PDFs (apparently it's supposed to be much easier than before), but I cannot get it to work and I've spent countless hours on it now. I'm hoping someone has the answer for me.
Alternatively, if anyone knows how I can redirect where the screen capture is stored (the default is the photo album), then maybe I can make that function work. If I could redirect the screen capture into my application's Documents folder, I could use MFMailCompose to attach it to an email.
Lastly, on a side note, does anyone know of a good way to capture a digital signature via touch? For instance, I'd love to have my users sign their name via touch at the bottom of the document before I convert it to PDF or take a screen capture.
Thanks in advance for your help.
-Ray
I am using the following snippet. It works fine for single-page PDF generation.
It takes any view that holds your content, captures it as a PDF file, and stores it in the Documents directory.
In some method:
{
....
// containerView holds the content to be converted to a PDF file
// filePath is the path where the PDF should be written in our Documents directory
CGContextRef pdfContext = [self createPDFContext:containerView.bounds path:(CFStringRef)filePath];
NSLog(#"PDF Context created");
CGContextBeginPage(pdfContext, nil);
// Flip the context so the view renders right side up in the PDF
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0, containerView.bounds.size.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(pdfContext, transform);
// Draw the view's layer into the PDF context
[containerView.layer renderInContext:pdfContext];
CGContextEndPage(pdfContext);
CGContextRelease(pdfContext);
....
}
- (CGContextRef) createPDFContext:(CGRect)inMediaBox path:(CFStringRef) path
{
CGContextRef myOutContext = NULL;
CFURLRef url;
url = CFURLCreateWithFileSystemPath(NULL, path, kCFURLPOSIXPathStyle, false);
if (url != NULL) {
    // Create a PDF context that writes to the file at the given URL
    myOutContext = CGPDFContextCreateWithURL(url, &inMediaBox, NULL);
    CFRelease(url);
}
return myOutContext;
}