CGPathRef and PDF - iOS

Is there a way to draw a complex shape with an application like CorelDraw or Adobe Flash, save it or export it as a PDF, and then open it with Core Graphics in iOS?
The idea is to draw a shape - just a vector path, no color or fill - in CorelDraw, for example, then open it directly with Core Graphics, add it as a CGPath to the context, and manipulate it: fill it with a solid color, a gradient, or a pattern.
The bottom line is, I am looking for a way to draw a complex shape in a user-friendly environment like Corel or Flash and export it as a vector that can be manipulated in Core Graphics. Any suggestions or help are really appreciated.
Thanks.
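For concreteness, the kind of manipulation being asked about, once a CGPath is somehow in hand, might look like this hedged sketch (shapePath and ctx are assumed inputs, not an answer to the import problem):

```swift
import UIKit

// Sketch only: assumes a CGPath (`shapePath`) already obtained from somewhere
// (e.g. parsed out of an imported file) and a CGContext (`ctx`), for example
// inside a UIView's draw(_:) override.
func render(_ shapePath: CGPath, in ctx: CGContext) {
    // Fill with a solid color.
    ctx.saveGState()
    ctx.addPath(shapePath)
    ctx.setFillColor(UIColor.systemRed.cgColor)
    ctx.fillPath()
    ctx.restoreGState()

    // Or clip to the path and fill it with a gradient instead.
    ctx.saveGState()
    ctx.addPath(shapePath)
    ctx.clip()
    let colors = [UIColor.systemBlue.cgColor, UIColor.systemTeal.cgColor] as CFArray
    let locations: [CGFloat] = [0, 1]
    if let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                 colors: colors, locations: locations) {
        let box = shapePath.boundingBox
        ctx.drawLinearGradient(gradient,
                               start: CGPoint(x: box.minX, y: box.minY),
                               end: CGPoint(x: box.minX, y: box.maxY),
                               options: [])
    }
    ctx.restoreGState()
}
```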

SVGKit doesn't work exactly the way I need either, although I should say it is nicely done. There are also other resources that I found; I'll leave them here for future reference in case anyone stops by this post looking for a solution.
Converting SVG Paths to Objective-C Paths: good for simple paths, and strokes and fills can be manipulated later using protocols, but complex paths get mixed up.
SVGKit: good for creating images and animating them later over the course of the program. However, strokes, fills, and paths cannot be manipulated.
Opacity: you can export as source code, so you have more control over strokes, paths, and fills. As the path gets more complex, the code becomes harder to manage by hand. The other problem is that on export the program adds resolution-dependent code, and it can be a pain to go through 300+ lines of code to make it resolution independent. But the final product isn't mixed up and can be manipulated through protocols. Layers are CGLayers, not CALayers.

If, as you say, you've got PDF files (from Corel, or another app), you can display them using CoreGraphics.
Take a look at:
CGPDFDocument class
CGPDFPage class
Then there is the CGContextDrawPDFPage function, which you can use to draw a PDF page into a given graphics context, typically in the drawLayer:inContext: method of a UIView subclass.
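Put together, a minimal sketch might look like this (assuming a bundled file named shape.pdf, and drawing in draw(_:) rather than drawLayer:inContext: for brevity):

```swift
import UIKit

// Rough sketch of the approach above: a UIView subclass that draws the first
// page of a bundled PDF ("shape.pdf" is an assumed file name).
class PDFShapeView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(),
              let url = Bundle.main.url(forResource: "shape", withExtension: "pdf"),
              let document = CGPDFDocument(url as CFURL),
              let page = document.page(at: 1) else { return }   // pages are 1-indexed

        // PDF coordinates are flipped relative to UIKit, so flip the context.
        ctx.translateBy(x: 0, y: bounds.height)
        ctx.scaleBy(x: 1, y: -1)

        // Scale the page's media box to fit this view, then draw it.
        let transform = page.getDrawingTransform(.mediaBox, rect: bounds,
                                                 rotate: 0, preserveAspectRatio: true)
        ctx.concatenate(transform)
        ctx.drawPDFPage(page)
    }
}
```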

There isn't really a built-in way to load CGPaths from files, but you might want to take a look at SVGKit. Pretty much every modern vector drawing app can produce SVG files.
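Whichever way the CGPath is obtained, one common pattern for keeping fill and stroke editable afterwards is to wrap it in a CAShapeLayer. A hedged sketch (the function name and importedPath are assumptions, not SVGKit API):

```swift
import UIKit

// Sketch: once you have a CGPath from any source (SVGKit, PDF parsing, or
// hand-built), wrapping it in a CAShapeLayer lets you change fill, stroke,
// and line width later without re-importing the artwork.
func makeShapeLayer(for path: CGPath) -> CAShapeLayer {
    let layer = CAShapeLayer()
    layer.path = path
    layer.fillColor = UIColor.systemYellow.cgColor
    layer.strokeColor = UIColor.black.cgColor
    layer.lineWidth = 2
    return layer
}

// Usage (e.g. in a view controller): the fill can be swapped at any time,
// and the change even animates implicitly.
// let shape = makeShapeLayer(for: importedPath)   // importedPath is assumed
// view.layer.addSublayer(shape)
// shape.fillColor = UIColor.systemGreen.cgColor
```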

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all drawing app example codes I've come across seem to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate say, an oil brush, or charcoal.
Here is an example of a brush tip texture: [brush tip image]
The result when painting with the same brush tip: [result image]
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (or the image I'm painting onto) into an array of integers. Then I simply copied the values from the brush tip to the canvas texture at the appropriate index. At the end of the cycle, I displayed the image for that time-step. Repeat this dozens of times between each point returned by the mouse.
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (on a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation is a bit out-dated, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
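As a rough illustration of the stamping idea with Core Image (an assumption-laden sketch, not the code from Apple's sample project): interpolate points between touch samples and composite a brush-tip CIImage over an accumulating canvas image.

```swift
import UIKit
import CoreImage

// Hedged sketch of the stamping idea with Core Image (not Apple's sample code):
// composite a brush-tip CIImage onto an accumulating canvas image at small,
// interpolated steps between touch samples.
struct BrushCanvas {
    var canvas: CIImage       // the painting so far, e.g. a solid-color CIImage to start
    let brushTip: CIImage     // brush texture with premultiplied alpha

    mutating func stamp(from start: CGPoint, to end: CGPoint, spacing: CGFloat = 2) {
        let dx = end.x - start.x, dy = end.y - start.y
        let steps = max(1, Int((dx * dx + dy * dy).squareRoot() / spacing))
        for i in 0...steps {
            let t = CGFloat(i) / CGFloat(steps)
            let p = CGPoint(x: start.x + dx * t, y: start.y + dy * t)
            // Center the tip on the interpolated point, then composite it over the canvas.
            let placed = brushTip.transformed(by: CGAffineTransform(
                translationX: p.x - brushTip.extent.midX,
                y: p.y - brushTip.extent.midY))
            canvas = placed.composited(over: canvas)
        }
    }
}
```

The accumulated canvas would then be rendered to the screen through a CIContext, ideally a Metal-backed one, which is where most of the real-time performance comes from: Core Image builds a lazy filter graph instead of copying pixels on the CPU.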

How to manipulate graphics created with Adobe Animate CC?

So I have created some fancy graphics with Adobe Animate (HTML Canvas) and added some animation as well. Is there a way to manipulate (or simply get) the JavaScript code that generated these graphics? Say I want to use these same graphics in another project. How do I extract that code? The generated code is not just simple CreateJS code; it is tied very closely to the Animate framework.
The code is in fact just plain CreateJS (EaselJS) code. In your generated lib file, you can see all the graphics instructions for each Shape. Animate uses a "compressed" format by default (which you can actually see in the docs). Since 95% of developers don't care what the exported graphics code looks like, it uses a custom compression format to reduce the amount of code that is generated, which results in a much smaller file size.
You can easily pop into the publish settings, and under "Advanced" turn off the "Compact Shapes" option, which will give you typical lineTo().moveTo().drawRect() commands again.
Hope that helps!

How can I cleanly draw on a PDF both statically and dynamically on iOS?

I am currently working on an app where we would like to download a PDF from a remote server and then draw on it. We would like to draw Google Maps pin-like annotations on the PDF (the static draw part). Furthermore, we would like to detect whether a user has touched a pin and then draw a callout box over the PDF (the dynamic draw part). We obviously would like the PDF to be scrollable/zoomable. Does anyone know of a good way to achieve this?
Things I have researched:
1) Render in a UIWebView. This seems like a great solution, but it's not clear to me how to then implement the draw code on the PDF. I have heard people say to create a transparent UIView above the UIWebView for the drawing. This seems to come with its own issues: how will it handle zooming and scrolling?
2) Use Quartz 2D and generate my own PDF from the PDF I fetch from the server. As I draw my own PDF content I can draw the static marker pins. Once I have this PDF, I can then shove it into a web view. The problem with this approach, however, is that I still need to handle the dynamic drawing of the callout boxes when a user taps on a pin, and that takes me back to problem 1.
You're correct that Apple does not offer much in terms of this issue. There's UIWebView which can preview and show PDF documents, but it's really not suited to adding annotations, and any "solution" with views will be very fragile, if you manage to do it at all. It's meant as a black box to read PDF documents, not for annotating.
You have to go all the way back to CGContextRef and take over the scrolling, zooming and touch handling/drawing yourself. Apple's ZoomingPDFViewer example is a good start.
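In that spirit, here is a minimal sketch of the core piece (not Apple's actual sample code): a CATiledLayer-backed page view that re-renders the PDF crisply at each zoom level, which a UIScrollView would host and zoom.

```swift
import UIKit

// Minimal sketch in the spirit of Apple's ZoomingPDFViewer (not the sample's
// actual code): a CATiledLayer-backed view renders the PDF page sharply at
// any zoom scale; a UIScrollView hosts it and handles zoom/scroll.
class TiledPDFPageView: UIView {
    var page: CGPDFPage?

    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiledLayer = layer as? CATiledLayer {
            tiledLayer.levelsOfDetail = 4
            tiledLayer.levelsOfDetailBias = 4
            tiledLayer.tileSize = CGSize(width: 512, height: 512)
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // CATiledLayer calls this per tile, possibly on background threads.
    override func draw(_ rect: CGRect) {
        guard let page = page, let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.white.cgColor)
        ctx.fill(bounds)
        ctx.translateBy(x: 0, y: bounds.height)   // flip into PDF coordinates
        ctx.scaleBy(x: 1, y: -1)
        ctx.concatenate(page.getDrawingTransform(.cropBox, rect: bounds,
                                                 rotate: 0, preserveAspectRatio: true))
        ctx.drawPDFPage(page)
    }
}
```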
I have been working on this problem since 2010 and we offer a commercial solution for PDF annotating for iOS, Android and Web called PSPDFKit. We ship a custom renderer which is better and more exact than Apple's CoreGraphics renderer, but the more interesting part is that we can deal with all common PDF annotation types. You can use note annotations to represent your pins and move them around, add notes, interact/override the default tap handling (and e.g. show your own popover when people tap on them). They are also always the same size - so they can be anchored at an exact point in the PDF and then you can zoom in while they stay the same size. The best part is that this is all part of the PDF spec, so they will also work with Apple's Preview app or Adobe Acrobat, so people can save/customize the markup and then everything can be saved in the PDF. The architecture is flexible so you can also simply save everything in a database or sync it back up to your server and simply use it for touch handling.
You can also build that yourself - the basic architecture is a UIScrollView and views that are managed. It quickly gets tricky when you do zooming and have views that need to stay the same size, plus touch handling, and maybe you also want things like multi-select or regular ink drawing. You will also want to add some sort of image caching layer, since rendering PDF documents can be quite slow on mobile devices. Oh, and if you want to make text selectable or implement search, be ready for a rabbit hole called the Adobe CMap and CIDFont Files Specification.
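For the "pins stay the same size while the page zooms" behavior mentioned above, one do-it-yourself sketch is to counter-scale the pin views whenever the zoom scale changes (class and property names are assumptions, not PSPDFKit API, and it reuses the TiledPDFPageView sketched earlier):

```swift
import UIKit

// Hedged sketch: pin views live on the zooming content view, but get
// counter-scaled whenever the scroll view's zoom scale changes, so they keep
// a constant on-screen size while staying anchored to their spot on the page.
class AnnotatedPDFViewController: UIViewController, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    let pageView = TiledPDFPageView(frame: CGRect(x: 0, y: 0, width: 612, height: 792))
    var pinViews: [UIView] = []   // added as subviews of pageView at page coordinates

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        scrollView.delegate = self
        scrollView.maximumZoomScale = 8
        scrollView.contentSize = pageView.bounds.size
        scrollView.addSubview(pageView)
        view.addSubview(scrollView)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? { pageView }

    func scrollViewDidZoom(_ scrollView: UIScrollView) {
        // Undo the zoom for each pin so it keeps a constant on-screen size.
        let counterScale = 1 / scrollView.zoomScale
        for pin in pinViews {
            pin.transform = CGAffineTransform(scaleX: counterScale, y: counterScale)
        }
    }
}
```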

Valid technique for scalable graphics on iOS?

A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes including as UITableViewCell imageViews, as custom MKMapAnnotations and a few other spots. I actually have a couple sets which include a more static status icon as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that it derives the status from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it), so it can decide what it needs to draw, and then does the necessary drawing in drawRect. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a Category which has class method constructors that take the model inputs as well as the size you want to use and constructs the custom views.
Since I also wanted the option of rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions.
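A condensed sketch of that setup (names are assumptions, and it uses UIGraphicsImageRenderer with layer rendering rather than the iOS 7 snapshotting calls, purely to keep the example short):

```swift
import UIKit

// Hedged sketch of the approach described above; class and method names are
// assumptions. The icon draws everything relative to its bounds, so it scales
// cleanly, and asImage() rasterizes it on demand.
class StatusIconView: UIView {
    enum Status { case ok, warning, error }
    var status: Status = .ok { didSet { setNeedsDisplay() } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear
        isOpaque = false
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draw(_ rect: CGRect) {
        let color: UIColor
        switch status {
        case .ok:      color = .systemGreen
        case .warning: color = .systemYellow
        case .error:   color = .systemRed
        }
        // Everything is proportional to bounds, so any size renders sharply.
        let inset = bounds.width * 0.1
        let circle = UIBezierPath(ovalIn: bounds.insetBy(dx: inset, dy: inset))
        color.setFill()
        circle.fill()
    }

    // Rasterized version for places that want a UIImage, e.g. a cell's imageView.
    func asImage() -> UIImage {
        UIGraphicsImageRenderer(bounds: bounds).image { context in
            layer.render(in: context.cgContext)
        }
    }
}
```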
So what does this give me? Well here's the pros/cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics such as text. Because I have the exact shape data on everything I'm drawing I don't need to guesstimate on the bounds for a text box since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset but I still get all the advantages of the dynamic view since I'm not rasterizing it till I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the draw code in the first place isn't ideal. For simple stuff I can do it straight in code but for more complex things I'll need to create the vector asset in Illustrator or Sketch then bring it into PaintCode and clean up the generated draw code into something more streamlined. This is not the most ideal process.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found an enormous amount of material on techniques for this sort of thing, and I'm wondering if I'm missing a better way of handling this or if there are any hidden gotchas here... performance doesn't seem to be an issue from my testing with my approach, but I haven't tested it on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files, and can export to a UIImage, if desired.

Creating a Geometric Path from 2D points without a graphics context. Possible in iOS?

I need a way to test out some heavy mathematical functionality in my code and have come to the point where I need to verify that such code is working properly. I would like to be able to create a path based on an array of points and use this path for testing without a graphics context.
As an example, Java has various classes such as the Path2D class, which is completely independent of any kind of context or view unless you need to display the information in some kind of graphics context.
It looks like Apple doesn't provide any methods that allow you to create, manipulate, and change arbitrary geometric shapes, but I wanted to come here and make sure.
CGPath and UIBezierPath can both be created without a current context. But it depends on what you want to do as to how much use they will be, because their purpose is really drawing. As such, it isn't really easy to get the points back out of a path once they have been added.
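For example, a path can be built from an array of points and queried geometrically without any view or context in play (a small sketch with made-up sample points):

```swift
import CoreGraphics

// Small sketch: build a path from an array of points and query it
// geometrically, with no view or graphics context involved.
let points = [CGPoint(x: 0, y: 0), CGPoint(x: 100, y: 0),
              CGPoint(x: 100, y: 80), CGPoint(x: 0, y: 80)]

let path = CGMutablePath()
path.addLines(between: points)
path.closeSubpath()

let box = path.boundingBox                       // bounding rectangle of the shape
let hit = path.contains(CGPoint(x: 50, y: 40))   // point-in-path test; true here
let outline = path.copy(strokingWithWidth: 4,    // derived path outlining a 4pt stroke
                        lineCap: .round, lineJoin: .round, miterLimit: 10)
print(box, hit, outline.boundingBox)
```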
