SceneKit - Textures not properly displayed - ios

I have a (rounded) cube and want to display a texture on one of its sides. I can access the material on that side with:
var tex1: SCNMaterial! = cube.geometry?.materialWithName("_1")!
I then set its image contents:
tex1.diffuse.contents = "cube1"
This then looks like this:
This shows me that it does work, but the white part is not in the center as
it should be. (The image I am using has the white part in the center.)
I tried to use an offset to move the image around on the surface; I would like to scale it as well. I tried it like this:
tex1.diffuse.contents.offset = SCNVector3Make(20, 0, 0)
That gives me an error: it says it cannot assign to the result of that expression. (I also tried contentMode, with the same error; I think these are for UIKit, not SceneKit.)
Questions
Does anyone know what I can do?
Maybe offset is not the way to go?
How can I scale the image?

The type of a material property's contents is AnyObject, which means the compiler will allow you to call any method (defined on any object type) on it. That doesn't mean all of those methods or property accessors are actually implemented by the class that's really in your particular contents.
Material properties do have a contentsTransform option, though. Have you looked at that?

Here is my solution:
create offset:
let offsetVal = SCNMatrix4MakeTranslation(0, -0.05, 0)
create scale:
let scaleVal = SCNMatrix4MakeScale(1.5, 1.5, 1.5)
if you want to set Offset Property only:
material.diffuse.contentsTransform = offsetVal
if you want to set Scale Property only:
material.diffuse.contentsTransform = scaleVal
if you want to mix them:
material.diffuse.contentsTransform = SCNMatrix4Mult(scaleVal, offsetVal)
Hope this helps!
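Worth noting with both answers above: contentsTransform works in unit texture-coordinate space, where 1.0 spans the whole surface, so translations are fractions of the texture rather than points or pixels; this is why an offset of 20 appears to do nothing useful. A minimal sketch in current Swift (arbitrary values, with material standing in for the SCNMaterial from the question):
// contentsTransform is applied to texture coordinates (0...1),
// so 0.2 here means "shift by 20% of the texture width".
let scale = SCNMatrix4MakeScale(1.5, 1.5, 1.0)
let offset = SCNMatrix4MakeTranslation(0.2, 0.0, 0.0)
material.diffuse.contentsTransform = SCNMatrix4Mult(scale, offset)
// Repeat instead of clamping if the shifted texture should tile:
material.diffuse.wrapS = .repeat
material.diffuse.wrapT = .repeat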

Related

custom image filter

1. Introduction:
I want to develop a special filter method for UIImages. My idea is to change all the colors in a picture to black except one certain color, which should keep its appearance.
Images are always nice, so look at this image to get what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must replace all colors that do not match the reference color with, e.g., black.
I've developed simple code that can replace specific colors (color ranges with a threshold) in any image.
But to be honest, this solution doesn't seem fast or efficient at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    // Draw the image into a bitmap context so the raw RGBA bytes are accessible.
    let context = CGContext(data: nil, width: img.width, height: img.height,
                            bitsPerComponent: 8, bytesPerRow: 4 * img.width,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
        referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    // Compare every pixel's HSL color against the reference color.
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if distance > threshold {
                // Overwrite non-matching pixels in place (here with opaque white).
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information:
The code above works fine, but it is absolutely inefficient. Because of all the calculation (especially the color conversion) it takes a long (too long) time, so have a look at this screenshot:
My question:
I'm pretty sure there is a way simpler solution for filtering a specific color (with a given threshold; #c6456f is similar to #C6476f, ...) than looping through every single pixel to compare its color.
So what I was thinking about was something like a filter (a CIFilter method) as an alternative to the code above.
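For reference, a rough sketch of what I mean (untested; it builds a CIColorCube lookup table so the GPU does the per-pixel work, and a plain RGB distance stands in for my HSL distance):
import CoreImage
import UIKit

// Build a CIColorCube lookup table that keeps colors within `threshold`
// of `target` and maps everything else to black. Core Image evaluates
// the cube per pixel on the GPU, so there is no CPU loop.
func makeColorKeepFilter(target: UIColor, threshold: Float, cubeDimension: Int = 16) -> CIFilter? {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    guard target.getRed(&r, green: &g, blue: &b, alpha: &a) else { return nil }
    let n = cubeDimension
    var cube = [Float]()
    cube.reserveCapacity(n * n * n * 4)
    // Cube data is ordered blue-major: blue is outermost, red varies fastest.
    for bi in 0..<n {
        for gi in 0..<n {
            for ri in 0..<n {
                let rf = Float(ri) / Float(n - 1)
                let gf = Float(gi) / Float(n - 1)
                let bf = Float(bi) / Float(n - 1)
                let dr = rf - Float(r), dg = gf - Float(g), db = bf - Float(b)
                let keep = (dr * dr + dg * dg + db * db).squareRoot() <= threshold
                cube.append(keep ? rf : 0) // red
                cube.append(keep ? gf : 0) // green
                cube.append(keep ? bf : 0) // blue
                cube.append(1)             // alpha
            }
        }
    }
    let data = Data(bytes: cube, count: cube.count * MemoryLayout<Float>.size)
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(n, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}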
Some Notes
Please do not post replies suggesting the OpenCV library; I would like to develop this algorithm exclusively in Swift.
The image from which the timing screenshot was taken has a resolution of 500 × 800 px.
That's all!
Did you really read this far? Congratulations! Any help speeding up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)
The first thing to do is profile (measure the time consumed by different parts of your function). Profiling often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. That doesn't mean you have to focus on the most time-consuming part, but it will show you where the time goes. Unfortunately, I'm not familiar with Swift, so I cannot recommend a specific tool.
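On iOS, Instruments' Time Profiler is the usual choice for this. For coarse measurements, a minimal wall-clock helper is enough; this is a sketch, not part of the answer above:
import CoreFoundation

// Times a block with wall-clock time and prints the result.
func measure<T>(_ label: String, _ block: () -> T) -> T {
    let start = CFAbsoluteTimeGetCurrent()
    let result = block()
    print("\(label): \(CFAbsoluteTimeGetCurrent() - start) s")
    return result
}

// Hypothetical usage with the question's function:
// let filtered = measure("colorFilter") { colorFilter(image: img, findcolor: "#C6456F", threshold: 10) }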
Regarding iterating through all pixels: whether you can avoid it depends on the image structure and your assumptions about the input data. I see two cases in which you can avoid it:
When there is some optimized data structure built over your image (e.g. statistics for its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm with different parameters. If you process every image only once, it likely will not help you.
When you know that the green pixels always exist in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels, and when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (those with the specific color) are continuous and large enough, i.e. you have groups of pixels together with big enough areas (not just stuff a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color areas is 10 pixels, then you can inspect every 8th pixel on each axis, speeding up the initial scan ~64 times. Then use the full scan only for regions containing your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call these dx, dy.
create density map
Simply create a 2D array that holds, for each cell, whether the center pixel of the corresponding region matches your specific color. So if your image has resolution xs, ys, your map will be:
int mx = xs / dx;
int my = ys / dy;
int map[mx][my], x, y, xx, yy;
for (yy = 0, y = dy >> 1; y < ys; y += dy, yy++)
    for (xx = 0, x = dx >> 1; x < xs; x += dx, xx++)
        map[xx][yy] = compare(pixel(x, y), specific_color) < threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] to the neighboring cells, because step #2 could miss the edge of your color region.
process all set regions
for (yy = 0; yy < my; yy++)
    for (xx = 0; xx < mx; xx++)
        if (map[xx][yy])
            for (y = yy * dy; y < (yy + 1) * dy; y++)
                for (x = xx * dx; x < (xx + 1) * dx; x++)
                    if (compare(pixel(x, y), specific_color) >= threshold) pixel(x, y) = 0x00000000;
If you want to speed this up even more, you need to detect the set map[][] cells that lie on an edge (i.e. have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done with a simple pass in O(mx*my). After that you need to check the colors only in the edge regions, so:
for (yy = 0; yy < my; yy++)
    for (xx = 0; xx < mx; xx++)
        if (map[xx][yy] == 2)
        {
            for (y = yy * dy; y < (yy + 1) * dy; y++)
                for (x = xx * dx; x < (xx + 1) * dx; x++)
                    if (compare(pixel(x, y), specific_color) >= threshold) pixel(x, y) = 0x00000000;
        }
        else if (map[xx][yy] == 0)
        {
            for (y = yy * dy; y < (yy + 1) * dy; y++)
                for (x = xx * dx; x < (xx + 1) * dx; x++)
                    pixel(x, y) = 0x00000000;
        }
This should be even faster. In case your image resolution xs, ys is not a multiple of the region size dx, dy, you should handle the outer edge of the image either by zero padding or by special loops for the missing part of the image...
By the way, how long does it take to read and set your whole image?
for (y = 0; y < ys; y++)
    for (x = 0; x < xs; x++)
        pixel(x, y) = pixel(x, y) ^ 0x00FFFFFF;
If this alone is slow, it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, as people usually use Pixels[][], which is slower than a crawling snail; there are other ways, like bit locking/blitting, ScanLine, etc., so in such a case you need to look for something fast on your platform. If you cannot speed up even this, then you cannot do anything else... By the way, what hardware does this run on?

OpenLayers3 Custom Scale Select Options: Calculation needed

I'm a newbie using OpenLayers 3, and I have the following problem:
I've created a custom control (a select box; see image) to which I would like to add 3 predefined scale options (see the code after the image).
let option2 = new Option('1 : 150.000', '2183910.7260319907', false, false);
option2.id = 'opt2';
select.appendChild(option2);
let option3 = new Option('1 : 30.000', '2183910.7260319907', false, false);
option3.id = 'opt3';
select.appendChild(option3);
let option4 = new Option('1 : 15.000', '2183910.7260319907', false, false);
option4.id = 'opt4';
select.appendChild(option4);
From other sources I've found that I "simply" need to pass the correct value to the respective scale option (above, 2183910.7260319907 is the value for the 1:2M scale), and I've already implemented the set-resolution function to update the map.
Question: How can I now calculate that particular value (I don't even know what it represents...) for the scales of 1:150.000, 1:30.000 and 1:15.000?
Thank you so much in advance & if something is not clear, I'm happy to clarify and provide more code/screenshots if necessary.
Cheers :)
In web cartography, we work with resolutions, not scales. Resolution simply means map units per screen pixel.
If you want to apply the concept of paper map scales, you'll need to make assumptions about the resolution of the user's screen, which is usually expressed in dots per inch (dpi). Then you can calculate the scale like this:
var inchesPerMeter = 39.3700787;
var dpi = 96;
function getResolutionForScale(scaleDenominator) {
    return scaleDenominator / inchesPerMeter / dpi;
}
To use this, simply use the scale denominator as the value in your options, e.g. 2000000 for the "1 : 2.000.000" scale. Then it is easy to apply the scale to the map:
map.getView().setResolution(getResolutionForScale(scaleDenominator));
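Applying the formula to the scales from the question (my arithmetic, derived from the code above): 1:150.000 gives 150000 / 39.3700787 / 96 ≈ 39.69 map units per pixel, 1:30.000 gives ≈ 7.94, and 1:15.000 gives ≈ 3.97.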
A side note if you want to get super accurate: projections like Web Mercator have different resolutions at different latitudes. To accommodate that, you could use ol.proj.getPointResolution() to get the true resolution for a specific location on the map, and adjust the result from getResolutionForScale() accordingly:
var view = map.getView();
view.setResolution(ol.proj.getPointResolution(
    view.getProjection(),
    getResolutionForScale(scaleDenominator),
    view.getCenter()));

SCNShape doesn't draw a shape for NSBezierPath

I have found that SCNShape seems unable to draw a shape for some NSBezierPaths.
The path is created only using line(to:).
//...set up scene...
//Create path (working)
let path = NSBezierPath()
path.move(to: CGPoint.zero)
path.line(to: NSMakePoint(0.000000, 0.000000))
path.line(to: NSMakePoint(0.011681, 0.029526))
// more points ...
path.close()
// Make a 3D shape (not working)
let shape = SCNShape(path: path, extrusionDepth: 10)
shape.firstMaterial?.diffuse.contents = NSColor.green
let node = SCNNode(geometry: shape)
root.addChildNode(node)
For verifying that the general process of creating a SCNShape is correct, I also drew a blue shape that only differs by having different points. The blue shape gets drawn, the green shape doesn't.
You can find a playground containing the full example here. In the example you should be able to see a green and a blue shape in assistant editor. But only the blue shape gets drawn.
Do you have any idea why the green shape is not shown?
The short story: your path has way more points than it needs, leading to unexpected, hard-to-find geometric problems.
Note this bit in the documentation:
The result of extruding a self-intersecting path is undefined.
As it turns out, somewhere in the first 8 or so points, your "curve" makes enough of a turn the wrong way that the line closing the path (between the first point in the path, (0, 0), and the last point, (32.366829, 29.713470)) intersects the rest of the path. Here's an attempt at making it visible by excluding all but the first few points and the last point from a playground render (note the tiny little zigzag in the bottom left corner):
And at least on some SceneKit versions/renderers, when it tries to make a mesh out of a self-intersecting path it just gives up and makes nothing.
However, you really don't need that many points to make your path look good. Here it is if you use 1x, 1/5x, and 1/10x as many points:
If you exclude enough points overall, and/or skip the few at the beginning that make your curve zag where it should zig, SceneKit renders the shape just fine:
Some tips from diagnosing the problem:
When working with lots of coordinate data like this, I like to use ExpressibleByArrayLiteral so I can easily build an array of lots of points/vectors/etc:
extension CGPoint: ExpressibleByArrayLiteral {
    public init(arrayLiteral elements: CGFloat...) {
        precondition(elements.count == 2)
        self.init(x: elements.first!, y: elements.last!)
    }
}
var points: [CGPoint] = [
    [0.000000, 0.000000],
    [0.011681, 0.029526],
    // ...
]
That gets me an array (and a lot less typing out things like NSPointMake over and over), so I can slice and dice the data to figure out what's wrong with it. (For example, one of my early theories was that there might be something about negative coordinates, so I did some map and min() to find the most-negative X and Y values, then some more map to make an array where all points are offset by a constant amount.)
Now, to make paths using arrays of points, I make an extension on NSBezierPath:
extension NSBezierPath {
    convenience init(linesBetween points: [CGPoint], stride: Int = 1) {
        precondition(points.count > 1)
        self.init()
        move(to: points.first!)
        for i in Swift.stride(from: 1, to: points.count, by: stride) {
            line(to: points[i])
        }
    }
}
With this, I can easily create paths from not just entire arrays of points, but also...
paths that skip parts of the original array (with the stride parameter)
let path5 = NSBezierPath(linesBetween: points, stride: 5)
let path10 = NSBezierPath(linesBetween: points, stride: 10)
(This is handy for generating playground previews a bit more quickly, too.)
paths that use some chunk or slice of the original array
let zigzag = NSBezierPath(linesBetween: Array(points.prefix(to:10)) + [points.last!])
let lopOffBothEnds = NSBezierPath(linesBetween: Array(points[1 ..< points.count - 1]))
Or both... the winning entry (in the screenshot above) is:
let path = NSBezierPath(linesBetween: Array(points.suffix(from: 10)), stride: 5)
You can get a (marginally) better render out of having more points in your path, but an even better way to do it would be to make a path out of curves instead of lines. For extra credit, try extending the NSBezierPath(linesBetween:) initializer above to add curves by keeping every nth point as part of the path while using a couple of the intermediary points as control handles. (It's no general purpose auto trace algorithm, but might be good enough for cases like this.)
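As a rough sketch of that extra-credit idea (an untested illustration, not part of the answer): keep every nth point as an on-curve anchor and use an intermediary point as the control handle; since NSBezierPath has no quadratic-curve API, the implied quadratic segment is promoted to a cubic with both handles equal:
extension NSBezierPath {
    convenience init(curvesBetween points: [CGPoint], stride n: Int = 5) {
        precondition(points.count > n && n > 1)
        self.init()
        move(to: points.first!)
        var i = n
        while i < points.count {
            // Use a point roughly halfway back as the control handle.
            let control = points[i - n / 2]
            curve(to: points[i], controlPoint1: control, controlPoint2: control)
            i += n
        }
    }
}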
In no way does this compare to Rickster's answer, but there is another way to prevent this kind of problem. It's a commercial option, and there are probably freeware apps that do something similar, but this is the one I'm used to using, and it does this quite well.
What is "this" that I'm talking about?
The conversion of drawings to code, with an app called PaintCode. It lets you see your paths and be sure they have none of the conflicts that Rickster pointed out as your issue.
Check it out here: https://www.paintcodeapp.com/
Other options are listed in answers here: How to import/parse SVG into UIBezierpaths, NSBezierpaths, CGPaths?

Can't use multiple images for SCNPlane array

I have an array of SCNPlane objects, and I want to add different images to each one. My problem is that whenever I initialize the SCNPlanes in the for loop, each plane will end up with the exact same image. Here is what my loop basically looks like:
var layer = [CALayer](count: 8, repeatedValue: CALayer())
var tmpPhoto = [CGImage]()
for var i = 0; i < 8; ++i {
    tmpPhoto.append("image data")
    layer[i].contents = tmpPhoto[i]
    layer[i].frame = CGRectMake(0, 0, "image width", "image height")
    // initialize the SCNPlane and add it to an array of SCNNodes
    plane[i].geometry?.firstMaterial?.locksAmbientWithDiffuse = true
    plane[i].geometry?.firstMaterial?.diffuse.contents = layer[i]
    // add SCNPlane constraints
}
What I've noticed is that the image displayed is always the last image that was added/altered. I know this because, after the loop, I tried modifying only the first entry in the plane array. At run time, the image for the first array entry was displayed on all the other SCNPlanes instead! Keep in mind that I am not using displayLayer() or setNeedsDisplay() at all.
Here is what I have tried:
using a single layer variable instead of an array, and just modifying it at the start of each loop iteration
manipulating the layer variables outside of a loop
directly assigning a UIImage without converting to CGImage first (I know that each image is being loaded into the array)
trying to modify the SCNPlane's layer directly
using an SCNMaterial variable (followed this, but with one layer added to the materials variable for each SCNPlane)
adding the layers to existing view structures using either addSublayer() (doesn't work) or layoutSublayersOfLayer() (crashes the app with an uncaught exception)
Could I be missing something important?
EDIT: Forgot a line.
Create a different SCNPlane for each SCNNode. If you create a single SCNPlane for multiple nodes then they will share the geometry. Setting a material on the geometry will change all nodes that use that geometry.
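A minimal sketch of that fix in current Swift (the images array is an assumption, not from the original post):
import SceneKit
import UIKit

// One SCNPlane and SCNMaterial per node, so each node shows its own image.
func makePlaneNodes(from images: [UIImage]) -> [SCNNode] {
    var nodes = [SCNNode]()
    for image in images {
        let geometry = SCNPlane(width: 1.0, height: 1.0) // fresh geometry per node
        let material = SCNMaterial()
        material.diffuse.contents = image
        material.locksAmbientWithDiffuse = true
        geometry.materials = [material]
        nodes.append(SCNNode(geometry: geometry))
    }
    return nodes
}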

How to animate a human written stroke using Swift/iOS?

Objective
I am attempting to create an animated approximation of human writing, using a UIBezierPath generated from a glyph. I have read the many UIBezierPath questions that sound similar to mine (but are not the same). My goal is to create an animation that approximates the look of a human writing characters and letters.
Background
I've created a playground that I am using to better understand paths for use in animating glyphs as if they were drawn by hand.
A couple of concepts in Apple's CAShapeLayer (strokeStart and strokeEnd) really don't seem to operate as expected when animated. I think people generally think of a stroke as something made with a writing instrument (basically a straight or curved line). We consider the stroke and fill together to be a line, since our writing instruments do not distinguish between stroke and fill.
But when animated, the outline of a path is constructed by line segments (fill is treated separately, and it is unclear how to animate the position of the fill). What I want to achieve is a natural, human-written line/curve that shows the start and end of a stroke, together with the portion of the fill being added, as the animation moves from start to finish. Initially this appears simple, but I think it may require animating the fill position (I am unsure how to do this), the stroke start/end (not sure if this is required, given the unexpected caveats in how the animation performs noted above), and making use of subpaths (how does one reconstruct them from a known path?).
Approach 1
So, I've considered the idea of a path (CGPath/UIBezierPath). Each path actually contains all of the subpaths required to construct a glyph, so perhaps recursing through those subpaths, and using CAKeyframeAnimation/CABasicAnimation with an animation group to show the partially constructed subpaths, would be a good approach (although the fill position and stroke of each subpath would still need to be animated from start to end?).
This approach leads to the refined question:
How to access and create UIBezierPath/CGPath (subpaths) if one has a complete UIBezierPath/CGPath?
How to animate the fill and stroke as if drawn with a writing instrument using the path/subpath information? (seemingly this implies one would need to animate the strokeStart/strokeEnd, position, and path properties of a CALayer at the same time)
NB:
As one can observe in the code, I do have the finished paths obtained from glyphs. I can see that the path description gives me path-like information. How would one take that information and recast it as an array of sub-paths representing human-writable strokes?
(The idea here is to convert the point information into a new data type of human-like strokes. This implies a requirement for an algorithm that identifies the start, slope, endpoint, and boundary of each fill.)
Hints
I've noted in Pleco (an iOS app that successfully implements a similar algorithm) that each stroke is composed of a closed path describing the human-writable stroke. UIBezierPath has a closed path based on continuous connected fills. An algorithm is needed to refine overlapping fills into distinct closed paths for each stroke type.
Erica Sadun has a set of path utilities available on github. I haven't fully explored these files but they might prove useful in determining discrete strokes.
The UIBezierPath structure seems based on the notion of contiguous line segments/curves. There are confluence points at the intersections of fills, which represent directional path changes. Could one calculate the stroke/fill angle of a curve/line segment and search other curves/lines for a corresponding confluence point? (i.e. connect a line segment across the gap of intersecting fills to produce two separate paths, assuming one picked up the points and recreated the path with a new line segment/curve)
Introspectively: Is there a simpler method? Am I missing a critical API, a book or a better approach to this problem?
Some alternative methods (not useful: they require loading GIFs or Flash) for producing the desired outcome:
A good example (using Flash) with a presentation layer showing the progression of the written stroke. (If possible, this is what I would want to approximate in Swift/iOS.) (Alt link: see the animating image on the left.)
A less good example showing the use of progressive paths and fills to approximate written-stroke characteristics (the animation is not smooth and requires external resources):
A Flash version: I am familiar with creating Flash animations, but I am disinclined to implement these by the thousands (not to mention that Flash is not supported on iOS, although I could probably convert an algorithm to leverage an HTML5 canvas with CSS animation). But this line of thought seems a bit far afield; after all, the path information I want is stored in the glyphs that I've extracted from the fonts/strings provided.
Approach 2
I am considering using a stroke-based font rather than an outline-based font to obtain the correct path information (i.e. one where the fill is represented as a path). If successful, this approach would be cleaner than approximating the strokes, stroke types, intersections, and stroke order. I've already submitted a radar to Apple suggesting that stroke-based fonts be added to iOS (#20426819). Notwithstanding this effort, I still have not given up on forming an algorithm that resolves partial strokes, full strokes, intersections, and confluence points from the line segments and curves found on the Bézier path.
Updated Thoughts Based On Discussion/Answers
The following additional information is provided based on any ongoing conversations and answers below.
Stroke order is important, and most languages (Chinese in this case) have clearly defined stroke types and stroke-order rules that appear to provide a mechanism to determine type and order based on the point information provided with each CGPathElement.
CGPathApply and CGPathApplierFunction appear promising as a means to enumerate the subpaths (saving them to an array and applying the fill animation).
A mask may be applied to the layer to reveal a portion of the sublayer.
(I have not used this property before, but it appears that if I could move a masked layer over the subpaths, that might assist in animating the fill?)
There are a large number of points defined for each path, as if the Bézier path were defined using only the outline of the glyph. This makes understanding the start, end, and union of crossing fills an important factor in disambiguating specific fills.
Additional external libraries are available that may allow one to better resolve stroke behavior. Other technology like the Saffron Type System or one of its derivatives may be applicable to this problem domain.
A basic issue with the simplest solution (just animating the stroke) is that the available iOS fonts are outline fonts rather than stroke-based fonts. Some commercial manufacturers do produce stroke-based fonts. Please feel free to use the link to the playground file if you have one of these for testing.
I think this is a common problem and I will continue to update the post as I move toward a solution. Please let me know in the comments if further information is required or if I might be missing some of the necessary concepts.
Possible Solution
I am always in search of the simplest possible solution. The issue originates from the fonts being outline fonts rather than stroke-based. I found a sample of a stroke-based font to test and used it to evaluate a proof of concept (see video). I am now in search of an extended single-stroke font (one that includes Chinese characters) for further evaluation. A less simple solution might be to find a way to create a stroke that follows the fill and then use simple 2D geometry to evaluate which stroke to animate first (Chinese rules, for example, are very clear on stroke order).
Link to Playground on Github
To use the XCPShowView function: open the File Navigator and the File Utilities Inspector, click the playground file, and in the File Utilities Inspector choose Run in Full Simulator.
To access the Assistant Editor: go to the menu View > Assistant Editor.
To see resources/sources, right-click the playground file in Finder and choose Show Package Contents.
If the Playground is blank on opening, copy the file to the desktop and reopen it (bug??).
Playground Code
import CoreText
import Foundation
import UIKit
import QuartzCore
import XCPlayground
//research layers
//var l:CALayer? = nil
//var txt:CATextLayer? = nil
//var r:CAReplicatorLayer? = nil
//var tile:CATiledLayer? = nil
//var trans:CATransformLayer? = nil
//var b:CAAnimation?=nil
// Setup playground to run in full simulator (⌘-0:Select Playground File; ⌘-alt-0:Choose option Run in Full Simulator)
//approach 2 using a custom stroke font requires special font without an outline whose path is the actual fill
var customFontPath = NSBundle.mainBundle().pathForResource("cwTeXFangSong-zhonly", ofType: "ttf")
// Within the playground folder create Resources folder to hold fonts. NB - Sources folder can also be created to hold additional Swift files
//ORTE1LOT.otf
//NISC18030.ttf
//cwTeXFangSong-zhonly
//cwTeXHei-zhonly
//cwTeXKai-zhonly
//cwTeXMing-zhonly
//cwTeXYen-zhonly
var customFontData = NSData(contentsOfFile: customFontPath!) as! CFDataRef
var error:UnsafeMutablePointer<Unmanaged<CFError>?> = nil
var provider:CGDataProviderRef = CGDataProviderCreateWithCFData ( customFontData )
var customFont = CGFontCreateWithDataProvider(provider) as CGFont!
let registered = CTFontManagerRegisterGraphicsFont(customFont, error)
if !registered {
    println("Failed to load custom font: ")
}
let string:NSString = "五"
//"ABCDEFGHIJKLMNOPQRSTUVWXYZ一二三四五六七八九十什我是美国人"
//use the Postscript name of the font
let font = CTFontCreateWithName("cwTeXFangSong", 72, nil)
//HiraMinProN-W6
//WeibeiTC-Bold
//OrachTechDemo1Lotf
//XinGothic-Pleco-W4
//GB18030 Bitmap
var count = string.length
//must initialize with buffer to enable assignment within CTFontGetGlyphsForCharacters
var glyphs = Array<CGGlyph>(count: string.length, repeatedValue: 0)
var chars = [UniChar]()
for index in 0..<string.length {
    chars.append(string.characterAtIndex(index))
}
//println ("\(chars)") //ok
//println(font)
//println(chars)
//println(chars.count)
//println(glyphs.count)
let gotGlyphs = CTFontGetGlyphsForCharacters(font, &chars, &glyphs, chars.count)
//println(glyphs)
//println(glyphs.count)
if gotGlyphs {
    // loop and pass paths to animation function
    let cgpath = CTFontCreatePathForGlyph(font, glyphs[0], nil)
    // how to break the path apart?
    let path = UIBezierPath(CGPath: cgpath)
    //path.hashValue
    //println(path)
    // all shapes are closed paths
    // how to distinguish overlapping shapes, confluence points connected by line segments?
    // compare curve angles to identify stroke type
    // for curves that intersect, find confluence points and create separate line segments by adding the line segments between the gap areas of the intersection
    /* analysis of moveToPoint
    This method implicitly ends the current subpath (if any) and
    sets the current point to the value in the point parameter.
    When ending the previous subpath, this method does not actually
    close the subpath. Therefore, the first and last points of the
    previous subpath are not connected to each other.
    For many path operations, you must call this method before
    issuing any commands that cause a line or curve segment to be
    drawn.
    */
    // CGPathApplierFunction should allow one to add behavior to each glyph obtained from a string (Swift version??)
    // func processPathElement(info: Void, element: CGPathElement?) {
    //     var pointsForPathElement = [UnsafeMutablePointer<CGPoint>]()
    //     if let e = element?.points {
    //         pointsForPathElement.append(e)
    //     }
    // }
    //
    // var pathArray = [CGPathElement]() as! CFMutableArrayRef
    //var pathArray = Array<CGPathElement>(count: 4, repeatedValue: 0)
    //CGPathApply(<#path: CGPath!#>, <#info: UnsafeMutablePointer<Void>#>, function: CGPathApplierFunction)
    // CGPathApply(path.CGPath, info: &pathArray, function: processPathElement)
    /*
    NSMutableArray *pathElements = [NSMutableArray arrayWithCapacity:1];
    // This contains an array of paths, drawn to this current view
    CFMutableArrayRef existingPaths = displayingView.pathArray;
    CFIndex pathCount = CFArrayGetCount(existingPaths);
    for( int i=0; i < pathCount; i++ ) {
        CGMutablePathRef pRef = (CGMutablePathRef) CFArrayGetValueAtIndex(existingPaths, i);
        CGPathApply(pRef, pathElements, processPathElement);
    }
    */
    // given the structure
    let pathString = path.description
    // println(pathString)
    // regex pattern matcher to produce subpaths?
    // ...
    // there must be a simpler method
    // ...
    /*
    NOTES:
    Use assistant editor to view
    UIBezierPath String
    http://www.google.com/fonts/earlyaccess
    Stroke-based fonts
    Donald Knuth
    */
    // var redColor = UIColor.redColor()
    // redColor.setStroke()
    var pathLayer = CAShapeLayer()
    pathLayer.frame = CGRect(origin: CGPointZero, size: CGSizeMake(300.0, 300.0))
    pathLayer.lineJoin = kCALineJoinRound
    pathLayer.lineCap = kCALineCapRound
    //pathLayer.backgroundColor = UIColor.whiteColor().CGColor
    pathLayer.strokeColor = UIColor.redColor().CGColor
    pathLayer.path = path.CGPath
    // pathLayer.backgroundColor = UIColor.redColor().CGColor
    // regarding strokeStart, strokeEnd:
    /* These values define the subregion of the path used to draw the
     * stroked outline. The values must be in the range [0,1] with zero
     * representing the start of the path and one the end. Values in
     * between zero and one are interpolated linearly along the path
     * length. strokeStart defaults to zero and strokeEnd to one. Both are
     * animatable. */
    var pathAnimation = CABasicAnimation(keyPath: "strokeEnd")
    pathAnimation.duration = 10.0
    pathAnimation.fromValue = NSNumber(float: 0.0)
    pathAnimation.toValue = NSNumber(float: 1.0)
    /*
    var fillAnimation = CABasicAnimation(keyPath: "fill")
    fillAnimation.fromValue = UIColor.blackColor().CGColor
    fillAnimation.toValue = UIColor.blueColor().CGColor
    fillAnimation.duration = 10.0
    pathLayer.addAnimation(fillAnimation, forKey: "fillAnimation")
    */
    // given the actual behavior of the boundary animation, it is more likely that some other animation will better simulate a written stroke
    var someView = UIView(frame: CGRect(origin: CGPointZero, size: CGSizeMake(300.0, 300.0)))
    someView.layer.addSublayer(pathLayer)
    // SHOW VIEW IN CONSOLE (ASSISTANT EDITOR)
    XCPShowView("b4Animation", someView)
    pathLayer.addAnimation(pathAnimation, forKey: "strokeEndAnimation")
    someView.layer.addSublayer(pathLayer)
    XCPShowView("Animation", someView)
}
A couple of concepts in Apple's CAShapeLayer (strokeStart and strokeEnd) really don't seem to operate as expected when animated.
But surely animating the strokeEnd is exactly what you want to do. Use multiple CAShapeLayers on top of one another, each one representing one stroke of the pen, to form the desired character shape.
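A minimal sketch of that layering idea in current Swift (an illustration, not the answerer's code; it assumes you already have one CGPath per pen stroke):
import UIKit

// Animate strokeEnd on one CAShapeLayer per pen stroke, chained with
// beginTime so the strokes appear in writing order.
func animateStrokes(_ strokePaths: [CGPath], on container: CALayer,
                    secondsPerStroke: CFTimeInterval = 0.6) {
    var begin = CACurrentMediaTime()
    for path in strokePaths {
        let layer = CAShapeLayer()
        layer.frame = container.bounds
        layer.path = path
        layer.fillColor = nil
        layer.strokeColor = UIColor.black.cgColor
        layer.lineWidth = 6
        layer.lineCap = .round
        container.addSublayer(layer)

        let anim = CABasicAnimation(keyPath: "strokeEnd")
        anim.fromValue = 0
        anim.toValue = 1
        anim.beginTime = begin
        anim.duration = secondsPerStroke
        anim.fillMode = .backwards // hide the stroke until its turn
        layer.add(anim, forKey: "strokeEndAnimation")
        begin += secondsPerStroke
    }
}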
You want to look at CGPathApply (this is the short answer to your more refined question). You supply it with a function, and it will call that function for each element (these will be lines and arcs and closes) of the path. You can use that to reconstruct each closed item and stash them in a list. Then you can figure out which direction each item is drawn in (I think this could actually be the hardest part) and, rather than using strokeStart/strokeEnd on each subpath, draw it in a layer with a mask and move the mask across the layer.
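In current Swift, the same enumeration is available as the block-based CGPath.applyWithBlock. A sketch (an illustration of the approach described above, not the answerer's code) that splits a path into its subpaths at each moveToPoint:
import CoreGraphics

func subpaths(of path: CGPath) -> [CGMutablePath] {
    var result = [CGMutablePath]()
    var current: CGMutablePath?
    path.applyWithBlock { element in
        let points = element.pointee.points
        switch element.pointee.type {
        case .moveToPoint:
            // A move starts a new subpath.
            current = CGMutablePath()
            current?.move(to: points[0])
            result.append(current!)
        case .addLineToPoint:
            current?.addLine(to: points[0])
        case .addQuadCurveToPoint:
            current?.addQuadCurve(to: points[1], control: points[0])
        case .addCurveToPoint:
            current?.addCurve(to: points[2], control1: points[0], control2: points[1])
        case .closeSubpath:
            current?.closeSubpath()
        @unknown default:
            break
        }
    }
    return result
}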
Progress Report
This answer is posted to emphasize the significant progress made on this question. With so much detail added to the question, I just wanted to clarify the progress toward the solution and, ultimately (when achieved), the definitive answer. Although I did select an answer that was helpful, please consider this post for the complete solution.
Approach # 1
Use the existing UIBezierPath information to identify segments of the path, and ultimately make use of those segments (and their coordinates) to stroke each subpath (according to the available language rules).
(Current Thinking)
Erica Sadun is producing a SwiftSlowly repo on GitHub that supplies many functions on paths, including what appears to be a promising library covering segments (of a UIBezierPath), line intersections, and many functions that act on these items. I have not had the time to review it completely, but I can envision that one might deconstruct a given path into segments based on the known stroke types. Once all stroke types are known for a given path, one might then evaluate the relative path coordinates to assign stroke order. After that, simply animate the strokes (a subclass of UIBezierPath) according to their stroke order.
Approach # 2
Use a stroke-based font instead of an outline-based font.
(Current Thinking)
I have found a sample of a stroke-based font and have been able to animate the stroke. These fonts come with a built-in stroke order. I do not have access to a completed stroke-based font that also supports Chinese, but I encourage anyone with knowledge of such a font to reply in the comments.
I have made a recommendation to Apple that they supply stroke-based fonts in future releases. The Swift Playground notes and the files (with sample stroke fonts) are included in the question above. Please comment or post an answer if you have something constructive to add to this solution.
Stroke Order Rules
See the stroke order rules as described on the Clear Chinese website.
