Is there a good way to transform every glyph of a font uniformly? - ttx-fonttools

I'm a little stuck on how to apply a transformation to every glyph of a particular font using fonttools. I'm playing with an Open Source font where every character is just a taaaad too high up in its bounding box, so I was hoping to transform it down.
It is a variable font (Source Sans). This is the salient code sample:
from fontTools import ttLib
from fontTools.pens.transformPen import TransformPen
from fontTools.pens.ttGlyphPen import TTGlyphPen
font = ttLib.TTFont(font_path)
for glyphName in font["glyf"].keys():
    print(glyphName)
    glyph = font.getGlyphSet().get(glyphName)
    glyphPen = TTGlyphPen(font.getGlyphSet())
    transformPen = TransformPen(glyphPen, (1.0, 0.0, 0.0, 1.0, 0, -300))
    glyph.draw(transformPen)
    transformedGlyph = glyphPen.glyph()
    font["glyf"][glyphName] = transformedGlyph
However, when I do this, the transformation doesn't quite go as expected... some characters are shifted down a lot, and others are shifted down only a little. i, 3, 4, 5, and 6 are some noteworthy examples, whereas other letters and numbers are unaffected.
Is there an easier way to do this in general, and if not, why are some letters getting transformed differently than others?

I found out what was happening.
Some glyphs are made up of other, nested glyphs. These are called composite glyphs, as opposed to simple glyphs.
Since I was transforming all glyphs, I was transforming all composite glyphs, as well as the simple glyphs that they were composed from. This led to composite glyphs getting the appearance of being transformed twice.
The new code looks like this:
from fontTools.misc.transform import Transform

glyphPen = TTGlyphPen(font.getGlyphSet())
transformPen = TransformPen(glyphPen, Transform().translate(0, -25))
for glyphName in font.getGlyphOrder():
    glyph = font['glyf'][glyphName]
    # Avoid double-transforming composite glyphs
    if glyph.isComposite():
        continue
    glyph.draw(transformPen, font['glyf'])
    font['glyf'][glyphName] = glyphPen.glyph()
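Nothing is written to disk until the font is saved; a minimal follow-up sketch, where out_path is an assumed destination and not part of the original post:

font.save(out_path)  # out_path is an assumed destination path

# Optional sanity check (also an assumption, not from the original post):
# reload and confirm the glyph order survived the edit.
reloaded = ttLib.TTFont(out_path)
assert reloaded.getGlyphOrder() == font.getGlyphOrder()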

Related

Extract text from background grids/lines

I'm trying to remove the grid lines in a picture of handwriting. I tried to use the FFT to extract the grid pattern and remove it (this approach is from an answer to the original question, which has since been closed; it has more background as well). This image shows what I am currently able to get (illustration result):
The first line is a real image with handwritten characters. Since it's taken by phone in various conditions (light, direction, etc.), the grid lines might not be perfectly horizontal/vertical, and the color of the grid lines also varies and might be close to the color of the characters. I convert it to grayscale, apply the FFT, and try to use thresholding to extract the patterns (in the red rectangle; the illustration uses OTSU). Then I mask the spectrum with the thresholded pattern and use the inverse FFT to get the result. It obviously fails on the real image.
The second line is a real image of a blank grid without handwritten characters. From this, I think the 3 lines (vertical and horizontal) in the center are the patterns I care about.
The third line is a synthetic image with perfect grid lines; it's just for reference. After applying the same algorithm, the grid lines can be removed successfully.
The fourth line is a synthetic image with perfect dashed grid lines, which is closer to the grid lines on real handwriting practice paper. It's also just for reference. It shows that the pattern of dashed lines is actually more complicated than the 3 lines in the center. With the same algorithm, the grid lines can be removed almost completely as well.
The code I use is:
import cv2 as cv
import numpy as np
# util is assumed to be the asker's own plotting helper module

def FFTCV(img):
    util.Plot(img, 'Input')
    print(img.shape)
    if len(img.shape) == 3 and img.shape[2] == 3:
        img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    util.Plot(img, 'Gray')

    dft = cv.dft(np.float32(img), flags=cv.DFT_COMPLEX_OUTPUT)
    dft_shift = np.fft.fftshift(dft)
    util.Plot(cv.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1]), 'fft shift')
    magnitude_spectrum = np.uint8(20 * np.log(cv.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1])))
    util.Plot(magnitude_spectrum, 'Magnitude')

    _, threshold = cv.threshold(magnitude_spectrum, 0, 1, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)
    # threshold = cv.adaptiveThreshold(
    #     magnitude_spectrum, 1, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY_INV, 11, 10)
    #     magnitude_spectrum, 1, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY_INV, 11, 10)
    util.Plot(threshold, 'Threshold Mask')

    fshift = dft_shift * threshold[:, :, None]
    util.Plot(cv.magnitude(fshift[:, :, 0], fshift[:, :, 1]), 'fft shift Masked')
    magnitude_spectrum = np.uint8(20 * np.log(cv.magnitude(fshift[:, :, 0], fshift[:, :, 1])))
    util.Plot(magnitude_spectrum, 'Magnitude Masked')

    f_ishift = np.fft.ifftshift(fshift)
    img_back = cv.idft(f_ishift)
    img_back = cv.magnitude(img_back[:, :, 0], img_back[:, :, 1])
    util.Plot(img_back, 'Back')
So I'd like suggestions on how to extract the patterns for real images. Thanks very much.

Detect handwritten characters in boxes from a filled form using Fourier transforms

I am trying to extract handwritten characters from boxes. The scanning of the forms is not consistent, so the width and height of the boxes are also not constant.
Here is a part of the form.
My current approach:
1. Extract horizontal lines
2. Extract vertical lines
3. Combine both the above images
4. Find contours (using OpenCV)
This approach gives me most of the boxes. But when a box contains characters like "L" or "I", the vertical stroke of the character also gets extracted as part of the vertical-line extraction, so the contours get messed up.
Since the boxes are arranged periodically, is there a way to extract the boxes using Fast Fourier transforms?
I recently came up with a Python package that deals with this exact problem.
I called it BoxDetect, and after installing it through:
pip install boxdetect
usage may look somewhat like this (you need to adjust the parameters for different forms):
from boxdetect import config
config.min_w, config.max_w = (20,50)
config.min_h, config.max_h = (20,50)
config.scaling_factors = [0.4]
config.dilation_iterations = 0
config.wh_ratio_range = (0.5, 2.0)
config.group_size_range = (1, 100)
config.horizontal_max_distance_multiplier = 2
from boxdetect.pipelines import get_boxes
image_path = "dumpster/m1nda.jpg"
rects, grouped_rects, org_image, output_image = get_boxes(image_path, config, plot=False)
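A small hedged add-on, not from the original answer: the returned values can be inspected and saved, assuming each rect is an (x, y, width, height) tuple and output_image is a plain image array.

import cv2

cv2.imwrite("boxes_preview.png", output_image)   # output_image assumed drawable as-is
for x, y, w, h in grouped_rects:                 # (x, y, w, h) format is an assumption
    print("box at", (x, y), "size", (w, h))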
You might want to check the thread below for more info:
How to detect all boxes for inputting letters in forms for a particular field?
The Fourier transform is the last thing I would think of.
I'd rather try a Hough line detector to get the long lines, or, as you did, edge detection; but I would reconstruct the grid explicitly, finding its pitch and the exact locations of the rows/columns, and hence every individual cell.
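For illustration, a rough OpenCV sketch of that Hough-based direction (not the answerer's code; the file name, thresholds, and angle tolerance are placeholder assumptions):

import cv2
import numpy as np

img = cv2.imread("form.jpg")                     # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Keep only long, nearly horizontal or vertical segments as grid-line candidates.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        if angle < 5 or abs(angle - 90) < 5:
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("grid_candidates.png", img)

From the detected segments one could then cluster the x/y positions to recover the grid pitch and the cell boundaries, as the answer suggests.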
You can try to select the handwritten characters by color.
Example:
import cv2
import numpy as np
img=cv2.imread('YdUqv .jpg')
#convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
#color definition
color_lower = np.array([105,80,60])
color_upper = np.array([140,255,255])
# select color objects
mask = cv2.inRange(hsv, color_lower, color_upper)
cv2.imwrite('hand.png', mask)
Result:
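A hedged follow-up, not part of the original answer: the mask can be applied back to the input so only the selected handwriting pixels remain.

characters = cv2.bitwise_and(img, img, mask=mask)  # keep only pixels inside the color mask
cv2.imwrite('characters.png', characters)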

PaintCode - move object on the path

I would like to draw a curved line and attach an object to it. Is it possible to create a fraction (from 0.0 to 1.0) that moves my object along the path? When the fraction is 0 the object is at the beginning, at 0.5 it is halfway along, and at 1.0 it is at the end. Of course I want a curved path, not a straight line :) Is this possible to do in PaintCode?
If you need it only as a progress bar, it is possible in PaintCode. The trick is to use a dashed stroke with a very large Gap and then just change the Dash.
Then just attach a Variable and you are done.
Edit: Regarding the discussion under the original post, this solution uses points as the unit, so it will be distributed equally along the curve, no matter how curved the bezier is.
Since you're going to walk along the curve by linear distance, something Bezier curves are terrible at, you need to build the linear mapping yourself. That's fairly simple though:
When you draw the curve, also build a look-up table that samples the curve once, at say 100 points (t=0, t=0.01, t=0.02, etc). In pseudocode:
lut = [];
lut[0] = 0;
tlen = curve.length();
for(v=0; v<=100; v++) {
    t = v/100;
    clen = curve.split(0,t).length();
    percent = 100*clen/tlen;
    lut[percent] = t;
}
This may leave gaps in your LUT - you can either fix those as a secondary step, or just leave them in and do a binary scan on your array to find the nearest "does have a value" percentage.
Then, when you need to show your progress as some percentage value, you just look up the corresponding t value: say you need to show 83%, you look up lut[83] and draw your object at the position that t value gives you.
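For reference, a rough Python/NumPy sketch of the same lookup-table idea for a single cubic Bezier; the control points and sample count below are placeholder assumptions.

import numpy as np

def cubic_point(p0, p1, p2, p3, t):
    # Evaluate the cubic Bezier at parameter t.
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def arc_length_lut(p0, p1, p2, p3, samples=100):
    # Sample the curve and return parallel arrays (cumulative length fraction, t).
    ts = np.linspace(0.0, 1.0, samples + 1)
    pts = np.array([cubic_point(p0, p1, p2, p3, t) for t in ts])
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    return cum / cum[-1], ts

def t_for_fraction(fraction, lut):
    # Map a 0..1 distance fraction to the corresponding parameter t.
    fractions, ts = lut
    return float(np.interp(fraction, fractions, ts))

# Usage: place the object 83% of the way along the curve.
ctrl = ((0, 0), (50, 200), (150, -100), (200, 0))
lut = arc_length_lut(*ctrl)
pos = cubic_point(*ctrl, t_for_fraction(0.83, lut))

Here np.interp does the "nearest value" lookup directly, which side-steps the gap problem mentioned above.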

How to animate a human written stroke using Swift/iOS?

Objective
I am attempting to create an animated approximation of human writing, using a UIBezierPath generated from a glyph. I understand and I have read the many UIBezierPath questions which sound similar to mine (but are not the same). My goal is to create an animation that approximates the look of a human performing the writing of characters and letters.
Background
I've created a playground that I am using to better understand paths for use in animating glyphs as if they were drawn by hand.
A couple of concepts in the Apple CAShapeLayer (strokeStart and strokeEnd) really don't seem to operate as expected when animated. I think that generally people tend to think of a stroke as if done with a writing instrument (basically a straight or curved line). We consider the stroke and fill together to be a line as our writing instruments do not distinguish between stroke and fill.
But when animated, the outline of a path is constructed from line segments (fill is treated separately, and it is unclear how to animate the position of the fill). What I want to achieve is a natural, human-written line/curve that shows the start and end of a stroke together with the portion of the fill being added as the animation moves from start to finish. Initially this appears simple, but I think it may require animating the fill position (unsure how to do this), the stroke start/end (not sure if this is required, given the unexpected caveats noted above about how the animation performs), and making use of sub-paths (how to reconstruct them from a known path).
Approach 1
So, I've considered the idea of a Path (CGPath/UIBezierPath). Each path actually contains all of the subpaths required to construct a glyph, so perhaps iterating over those subpaths and using CAKeyframeAnimation / CABasicAnimation with an animation group to show the partially constructed subpaths would be a good approach (although the fill position and stroke of each subpath would still need to be animated from start to end?).
This approach leads to the refined question:
How to access and create UIBezierPath/CGPath (subpaths) if one has a complete UIBezierPath/CGPath?
How to animate the fill and stroke as if drawn with a writing instrument using the path/subpath information? (seemingly this implies one would need to animate the strokeStart/strokeEnd, position, and path properties of a CALayer at the same time)
NB:
As one can observe in the code, I do have the finished paths obtained from glyphs. I can see that the path description gives me path-like information. How would one take that information and recast it as an array of sub-paths representing human-writable strokes?
(The idea here would be to convert the point information into a new data type of human-like strokes. This implies a requirement for an algorithm to identify the start, slope, endpoint and boundary of each fill)
Hints
I've noted in Pleco (an iOS app that successfully implements a similar algorithm), that each stroke is composed of a closed path that describes the human-writable stroke. UIBezierPath has a closed path based on continuous connected fills. An algorithm is needed to refine overlapping fills to create distinct closed paths for each stroke-type.
Erica Sadun has a set of path utilities available on github. I haven't fully explored these files but they might prove useful in determining discrete strokes.
UIBezierPath structure seems based on the notion of a contiguous line segments/curve. There are confluence points appearing at the intersections of fills, which represent directional path change. Could one calculate the stroke/fill angle of a curve/line segment and search other curves/lines for a corresponding confluence point? (i.e. connect a line segment across the gap of intersecting fills to produce two separate paths -- assuming one picked up the points and recreated the path with a new line segment/curve)
Introspectively: Is there a simpler method? Am I missing a critical API, a book or a better approach to this problem?
Some alternative methods (not useful: they require loading GIFs or Flash) for producing the desired outcome:
Good Example (using Flash) with a presentation layer showing progression of the written stroke. (If possible, this is what I would want to approximate in Swift/iOS) - (alt link - see animating image on left)
A less good example showing the use of progressive paths and fills to approximate the written stroke characteristics (animation not smooth and requires external resources):
A Flash version - I am familiar with creating Flash animations, but I am disinclined to implement these in the 1000's (not to mention that it's not supported on iOS, although I could probably also convert an algorithm to leverage an HTML5 canvas with CSS animation). But this line of thought seems a bit far afield; after all, the path information I want is stored in the glyphs that I've extracted from the fonts/strings provided.
Approach 2
I am considering the use of a stroke-based font rather than an outline-based font to obtain the correct path information (i.e. one where fill is represented as a path). If successful, this approach would be cleaner than approximating the strokes, stroke-type, intersections, and stroke order. I've already submitted a radar to Apple suggesting that stroke-based fonts be added to iOS (#20426819). Notwithstanding this effort, I still have not given up on forming an algorithm that resolves partial-strokes, full strokes, intersections, and confluence points from the line-segments and curves found on the bezier path.
Updated Thoughts Based On Discussion/Answers
The following additional information is provided based on any ongoing conversations and answers below.
Stroke order is important and most languages (Chinese in this case) have clearly defined stroke types and stroke order rules that appear to provide a mechanism to determine type and order based on the point information provided with each CGPathElement.
CGPathApply and CGPathApplierFunction appear promising as a means to enumerate the subpaths (saved to an array and apply the fill animation)
A mask may be applied to the layer to reveal a portion of the sublayer (I have not used this property before, but it appears that moving a masked layer over the subpaths might assist in animating the fill?).
There are a large number of points defined for each path, seemingly because the BezierPath is defined using only the outline of the glyph. This fact makes understanding the start, end, and union of crossing fills an important factor in disambiguating specific fills.
Additional external libraries are available that may allow one to better resolve stroke behavior. Other technology like the Saffron Type System or one of its derivatives may be applicable to this problem domain.
A basic issue with the simplest solution of just animating the stroke is that the available iOS fonts are outline fonts rather than stroke-based fonts. Some commercial manufacturers do produce stroke-based fonts. Please feel free to use the link to the playground file if you have one of these for testing.
I think this is a common problem and I will continue to update the post as I move toward a solution. Please let me know in the comments if further information is required or if I might be missing some of the necessary concepts.
Possible Solution
I am always in search of the simplest possible solution. The issue originates from the structure of the fonts being outline fonts rather than stroke-based. I found a sample of a stroke-based font to test and used that to evaluate a proof of concept (see video). I am now in search of an extended single stroke font (which includes Chinese characters) to further evaluate. A less simple solution might be to find a way to create a stroke that follows the fill and then use simple 2D geometry to evaluate which stroke to animate first (For example Chinese rules are very clear on stroke order).
Link to Playground on Github
To use the XCPShowView function: open the File Navigator and File Utilities Inspector, click the playground file, and in the File Utilities Inspector choose Run in Full Simulator.
To access the Assistant Editor: go to menu View > Assistant Editor.
To see resources/sources, right-click the playground file in Finder and choose Show Package Contents.
If the Playground is blank on opening, copy the file to the desktop and reopen (bug??).
Playground Code
import CoreText
import Foundation
import UIKit
import QuartzCore
import XCPlayground
//research layers
//var l:CALayer? = nil
//var txt:CATextLayer? = nil
//var r:CAReplicatorLayer? = nil
//var tile:CATiledLayer? = nil
//var trans:CATransformLayer? = nil
//var b:CAAnimation?=nil
// Setup playground to run in full simulator (⌘-0:Select Playground File; ⌘-alt-0:Choose option Run in Full Simulator)
//approach 2 using a custom stroke font requires special font without an outline whose path is the actual fill
var customFontPath = NSBundle.mainBundle().pathForResource("cwTeXFangSong-zhonly", ofType: "ttf")
// Within the playground folder create Resources folder to hold fonts. NB - Sources folder can also be created to hold additional Swift files
//ORTE1LOT.otf
//NISC18030.ttf
//cwTeXFangSong-zhonly
//cwTeXHei-zhonly
//cwTeXKai-zhonly
//cwTeXMing-zhonly
//cwTeXYen-zhonly
var customFontData = NSData(contentsOfFile: customFontPath!) as! CFDataRef
var error:UnsafeMutablePointer<Unmanaged<CFError>?> = nil
var provider:CGDataProviderRef = CGDataProviderCreateWithCFData ( customFontData )
var customFont = CGFontCreateWithDataProvider(provider) as CGFont!
let registered = CTFontManagerRegisterGraphicsFont(customFont, error)
if !registered {
    println("Failed to load custom font: ")
}
let string:NSString = "五"
//"ABCDEFGHIJKLMNOPQRSTUVWXYZ一二三四五六七八九十什我是美国人"
//use the Postscript name of the font
let font = CTFontCreateWithName("cwTeXFangSong", 72, nil)
//HiraMinProN-W6
//WeibeiTC-Bold
//OrachTechDemo1Lotf
//XinGothic-Pleco-W4
//GB18030 Bitmap
var count = string.length
//must initialize with buffer to enable assignment within CTFontGetGlyphsForCharacters
var glyphs = Array<CGGlyph>(count: string.length, repeatedValue: 0)
var chars = [UniChar]()
for index in 0..<string.length {
    chars.append(string.characterAtIndex(index))
}
//println ("\(chars)") //ok
//println(font)
//println(chars)
//println(chars.count)
//println(glyphs.count)
let gotGlyphs = CTFontGetGlyphsForCharacters(font, &chars, &glyphs, chars.count)
//println(glyphs)
//println(glyphs.count)
if gotGlyphs {
// loop and pass paths to animation function
let cgpath = CTFontCreatePathForGlyph(font, glyphs[0], nil)
//how to break the path apart?
let path = UIBezierPath(CGPath: cgpath)
//path.hashValue
//println(path)
// all shapes are closed paths
// how to distinguish overlapping shapes, confluence points connected by line segments?
// compare curve angles to identify stroke type
// for curves that intersect find confluence points and create separate line segments by adding the linesegmens between the gap areas of the intersection
/* analysis of movepoint
This method implicitly ends the current subpath (if any) and
sets the current point to the value in the point parameter.
When ending the previous subpath, this method does not actually
close the subpath. Therefore, the first and last points of the
previous subpath are not connected to each other.
For many path operations, you must call this method before
issuing any commands that cause a line or curve segment to be
drawn.
*/
//CGPathApplierFunction should allow one to add behavior to each glyph obtained from a string (Swift version??)
// func processPathElement(info:Void, element: CGPathElement?) {
// var pointsForPathElement=[UnsafeMutablePointer<CGPoint>]()
// if let e = element?.points{
// pointsForPathElement.append(e)
//
// }
// }
//
// var pathArray = [CGPathElement]() as! CFMutableArrayRef
//var pathArray = Array<CGPathElement>(count: 4, repeatedValue: 0)
//CGPathApply(<#path: CGPath!#>, <#info: UnsafeMutablePointer<Void>#>, function: CGPathApplierFunction)
// CGPathApply(path.CGPath, info: &pathArray, function:processPathElement)
/*
NSMutableArray *pathElements = [NSMutableArray arrayWithCapacity:1];
// This contains an array of paths, drawn to this current view
CFMutableArrayRef existingPaths = displayingView.pathArray;
CFIndex pathCount = CFArrayGetCount(existingPaths);
for( int i=0; i < pathCount; i++ ) {
CGMutablePathRef pRef = (CGMutablePathRef) CFArrayGetValueAtIndex(existingPaths, i);
CGPathApply(pRef, pathElements, processPathElement);
}
*/
//given the structure
let pathString = path.description
// println(pathString)
//regex patthern matcher to produce subpaths?
//...
//must be simpler method
//...
/*
NOTES:
Use assistant editor to view
UIBezierPath String
http://www.google.com/fonts/earlyaccess
Stroke-based fonts
Donald Knuth
*/
// var redColor = UIColor.redColor()
// redColor.setStroke()
var pathLayer = CAShapeLayer()
pathLayer.frame = CGRect(origin: CGPointZero, size: CGSizeMake(300.0,300.0))
pathLayer.lineJoin = kCALineJoinRound
pathLayer.lineCap = kCALineCapRound
//pathLayer.backgroundColor = UIColor.whiteColor().CGColor
pathLayer.strokeColor = UIColor.redColor().CGColor
pathLayer.path = path.CGPath
// pathLayer.backgroundColor = UIColor.redColor().CGColor
// regarding strokeStart, strokeEnd
/* These values define the subregion of the path used to draw the
* stroked outline. The values must be in the range [0,1] with zero
* representing the start of the path and one the end. Values in
* between zero and one are interpolated linearly along the path
* length. strokeStart defaults to zero and strokeEnd to one. Both are
* animatable. */
var pathAnimation = CABasicAnimation(keyPath: "strokeEnd")
pathAnimation.duration = 10.0
pathAnimation.fromValue = NSNumber(float: 0.0)
pathAnimation.toValue = NSNumber(float: 1.0)
/*
var fillAnimation = CABasicAnimation (keyPath: "fill")
fillAnimation.fromValue = UIColor.blackColor().CGColor
fillAnimation.toValue = UIColor.blueColor().CGColor
fillAnimation.duration = 10.0
pathLayer.addAnimation(fillAnimation, forKey: "fillAnimation") */
//given actual behavior of boundary animation, it is more likely that some other animation will better simulate a written stroke
var someView = UIView(frame: CGRect(origin: CGPointZero, size: CGSizeMake(300.0, 300.0)))
someView.layer.addSublayer(pathLayer)
//SHOW VIEW IN CONSOLE (ASSISTANT EDITOR)
XCPShowView("b4Animation", someView)
pathLayer.addAnimation(pathAnimation, forKey: "strokeEndAnimation")
someView.layer.addSublayer(pathLayer)
XCPShowView("Animation", someView)
}
A couple of concepts in the Apple CAShapeLayer (strokeStart and strokeEnd) really don't seem to operate as expected when animated.
But surely animating the strokeEnd is exactly what you want to do. Use multiple CAShapeLayers layered on top of one another, each one representing one stroke of the pen, to form the desired character shape.
You want to look at CGPathApply (this is the short answer to your more refined question). You supply it with a function and it will call that function for each element (these will be lines, arcs, and closes) of the path. You can use that to reconstruct each closed item and stash them into a list. Then you can figure out which direction each item is drawn in (I think this could actually be the hardest part), and rather than using strokeStart/strokeEnd on each subpath, draw it in a layer with a mask and move the mask across the layer.
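The subpath split itself can be prototyped outside iOS; here is a rough Python sketch using fontTools' RecordingPen (the font path is an assumption, and this only groups drawing commands by contour, it does not infer stroke direction or order):

from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

font = TTFont(font_path)                             # font_path is an assumed .ttf/.otf path
glyph_name = font["cmap"].getBestCmap()[ord("五")]   # look up the glyph for U+4E94
pen = RecordingPen()
font.getGlyphSet()[glyph_name].draw(pen)

# Group the recorded drawing commands into contours: each moveTo starts a new one.
contours, current = [], []
for op, args in pen.value:
    if op == "moveTo" and current:
        contours.append(current)
        current = []
    current.append((op, args))
if current:
    contours.append(current)

print(len(contours), "closed contours in the glyph outline")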
Progress Report
This answer is posted to emphasize the significant progress being made on solving this question. With so much detail added to the question, I just wanted to clarify the progress on the solution and ultimately (when achieved), the definitive answer. Although I did select an answer that was helpful, please consider this post for the complete solution.
Approach # 1
Use existing UIBezierPath information to identify segments of the path (and ultimately) make use of those segments (and their coordinates) to stroke each subpath (according to available language rules).
(Current Thinking)
Erica Sadun is producing a SwiftSlowly repo on Github that supplies many functions on paths, including what appears to be a promising library on Segments (of a UIBezierPath), Line Intersections and many functions to act on these items. I have not had the time to review completely but I can envision that one might deconstruct a given path into segments based on the known stroke types. Once all stroke types are known for a given path, one might then evaluate the relative path coordinates to assign stroke-order. After that simply animate the strokes (a subclass of UIBezierPath) according to their stroke order.
Approach # 2
Use a stroke-based font instead of an outline-based font.
(Current Thinking)
I have found a sample of a stroke-based font and been able to animate the stroke. These fonts come with a built-in stroke order. I do not have access to a completed stroke-based font that also supports Chinese but encourage anyone with knowledge of such a font to reply in comments.
I have made a recommendation to Apple that they supply stroke-based fonts in future releases. The Swift Playground notes and the files (with sample stroke fonts) are included in the question above. Please comment or post an answer if you have something constructive to add to this solution.
Stroke Order Rules
See the stroke order rules as described on the Clear Chinese website.

What's the CoreText equivalent to AppKit's NSObliquenessAttributeName?

I'm drawing some text in Mac/iOS cross-platform code using CoreText. I may be using fonts that do not have a real "Italic" version installed in the OS for all users, but they need to be aware that the text is italic even then.
With AppKit's NSAttributedString -drawAtPoint:, I can use NSObliquenessAttributeName to make the text slanted (and thus look italic -- well, oblique). CoreText doesn't seem to have an equivalent for this attribute. At least I found none in CTStringAttributes.h (not that there's any documentation even years after CoreText was released).
Does anyone know how I can get oblique text with CoreText on iOS?
I’d try using the affine transform argument to CTFontCreateWithName() with a shear matrix. For instance
CGAffineTransform matrix = { 1, 0, 0.5, 1, 0, 0 };
CTFontRef myFont = CTFontCreateWithName(CFSTR("Helvetica"), 48, &matrix);
That will create quite an extreme skew (assuming I got it right), but you get the idea.
Update:
In fact, the documentation appears to imply that this is the right way to do things.
Displaying a font that has no italic trait as italic is generally a bad idea. However, I can understand that there are some cases where this has to be enforced anyways.
The only solution that comes to my mind right now is to create a custom font with a sheared font matrix:
CGAffineTransform matrix = CGAffineTransformMake(1, tan(degreesToRadians(0)), tan(degreesToRadians(20)), 1, 0, 0);
CTFontRef myfont = CTFontCreateWithName(CFSTR("Helvetica"), 48, &matrix);
You'll have to play with the matrix and see what brings the best results. (Please note that this is fake code, mixed from my head and the internet.)
Haven't tried, but according to iOS Programming Pushing The Limits, passing kCTFontItalicTrait to CTFontCreateCopyWithSymbolicTraits will choose true italic if available, and oblique otherwise. There's also kCTFontSlantTrait for manual decimal slant up to 30 degrees.
