I need to use the kern attribute of NSAttributedString. According to the documentation, the default value of that attribute is 0.0. But I ran into strange behaviour with the phrase "Hello, World" (with the phrase "Hello" everything is fine):
NSDictionary<NSString*, id>* attributes = @{NSFontAttributeName: [UIFont systemFontOfSize:12]};
NSString* text = @"Hello, World";
NSAttributedString* string = [[NSAttributedString alloc] initWithString:text attributes:attributes];
CGSize size1 = [string size];
NSMutableDictionary<NSString*, id>* attributesWithKernel = [attributes mutableCopy];
attributesWithKernel[NSKernAttributeName] = @(0.0);
NSAttributedString* stringWithKern = [[NSAttributedString alloc] initWithString:text attributes:attributesWithKernel];
CGSize size2 = [stringWithKern size];
XCTAssertTrue(CGSizeEqualToSize(size1, size2)); // here the test fails
// size1 = (width = 68.8125, height = 14.3203125)
// size2 = (width = 69.515625, height = 14.3203125)
To make size1 and size2 equal, the kern value has to be -7.105427357601002e-15. I know that this is very close to 0.0, but it is strange, because it changes the width by almost a pixel.
NSAttributedString behaves the same way in Objective-C and in Swift; here is the Swift example:
let text = "Hello, World"
let attributes : [NSAttributedString.Key : Any] = [NSAttributedString.Key.font: UIFont.systemFont(ofSize: UIFont.systemFontSize)]
let str = NSAttributedString(string: text, attributes: attributes)
let size = str.size()
var attributesWithKern = attributes
attributesWithKern[NSAttributedString.Key.kern] = NSNumber(value: 0.0)
let strWithKern = NSAttributedString(string: text, attributes: attributesWithKern)
let sizeWithKern = strWithKern.size()
XCTAssertTrue(size == sizeWithKern)
How can I fix this behaviour?
P.S.
For now I just remove NSKernAttributeName from the attributes if its value is equal to 0.0, but I don't think that is a good solution.
I believe the docs here are wrong, and it is worth opening a radar about that. When no value is set, it is interpreted as "normal kerning." When 0 is set, it is interpreted as "disable kerning," which is why the width is a little wider (kerning is typically slightly negative, bringing kern pairs, like "W" and "o" in this font, a little closer together). I don't think there's any way to explicitly request "default kerning" without removing the attribute.
For your purposes, I believe you're doing the right thing by removing the value when it's zero, because you want default kerning, not to disable kerning.
The reason your tiny negative value works is that it's not zero, so it doesn't disable kerning, but it's so small that the behavior is very, very close to the default, and you're running into the precision of Double in the intermediate calculations (or possibly the precision of Float, depending on how it's implemented internally). You should find that your test passes for any value smaller (closer to zero) than this one, not just that particular value. In my tests, positive 7e-15 also works, for example.
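In code, the workaround of dropping a zero kern value before building the attributed string might look like this minimal Swift sketch (the helper name is my own):
import UIKit

// Drop NSAttributedString.Key.kern when its value is 0, so the system's
// default kerning applies instead of "kerning disabled".
func removingZeroKern(from attributes: [NSAttributedString.Key: Any]) -> [NSAttributedString.Key: Any] {
    var result = attributes
    if let kern = result[.kern] as? NSNumber, kern.doubleValue == 0 {
        result.removeValue(forKey: .kern)
    }
    return result
}

let attributes: [NSAttributedString.Key: Any] = [
    .font: UIFont.systemFont(ofSize: UIFont.systemFontSize),
    .kern: NSNumber(value: 0.0)
]
let string = NSAttributedString(string: "Hello, World",
                                attributes: removingZeroKern(from: attributes))
// string.size() now matches the size of a string built without any kern attribute.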
I ran some tests based on your code and can confirm this behaviour, which looks like a bug. The odd part is that for some letters the sizes are equal and for others they are not. For instance, the sizes for "Hello Mo" or "Hello Oo" are equal, but for "Hello WoWoWo" they differ a lot. So some kerning is being added around "W" without any reason. It may also depend on the chosen font, though I did not test that. The only solution I see is the one you already used: removing NSKernAttributeName when it equals 0.
I want to replace all standard iOS emoji in a UILabel or UITextView with Twitter's open-source Twemoji.
I can't find any library or documentation to do this in iOS. Does anyone have a solution that does not involve me implementing this from scratch?
The solution needs to be efficient and work offline.
The question got me intrigued, and after a bit of searching on how it would be possible to replace all standard iOS emoji with a custom set, I noticed that even Twitter's own iOS app doesn't use Twemoji.
In the end, I came to the same conclusion as you:
I can't find any library or documentation to do this in iOS.
So, I created a framework in Swift for this exact purpose.
It does all the work for you, but if you want to implement your own solution, I'll describe below how to replace all standard emoji with Twemoji.
1. Document all characters that can be represented as emoji
There are 1126 base characters that have emoji representations, and over a thousand additional representations formed by sequences. Although most base characters are confined to six Unicode blocks, all but one of these blocks are mixed with non-emoji characters and/or unassigned code points. The remaining base characters outside these blocks are scattered across various other blocks.
My implementation simply declares the UTF-32 code points for these characters, as the value property of UnicodeScalar is exactly this.
2. Check whether a character is an emoji
In Swift, a String contains a collection of Character objects, each of which represents a single extended grapheme cluster. An extended grapheme cluster is a sequence of Unicode scalars that together represent one[1] human-readable character, which is helpful since you can loop through the Characters of a string and handle them based on the UnicodeScalars they contain (rather than looping through the UTF-16 values of the string).
To identify whether a Character is an emoji, only the first UnicodeScalar is significant, so comparing this value to your table of emoji characters is enough. However, I'd also recommend checking if the Character contains a Variation Selector, and if it does, make sure that it's VS16 – otherwise the character shouldn't be presented as emoji.
Extracting the UnicodeScalars from a Character requires a tiny hack:
let c: Character = "A"
let scalars = String(c).unicodeScalars
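Putting sections 1 and 2 together, the check might look roughly like this; emojiBaseCodePoints is just a placeholder standing in for the full table described in section 1, not the real list:
// Placeholder for the table of base code points with emoji representations.
let emojiBaseCodePoints: Set<UInt32> = [0x1F600, 0x1F643, 0x2764 /* , ... */]

func isEmoji(_ character: Character) -> Bool {
    let scalars = String(character).unicodeScalars
    guard let first = scalars.first else { return false }

    // Only the first scalar decides whether this is a base emoji character.
    guard emojiBaseCodePoints.contains(first.value) else { return false }

    // If a variation selector is present it must be VS16 (U+FE0F);
    // VS15 (U+FE0E) explicitly requests text presentation instead.
    if scalars.contains(where: { $0.value == 0xFE0E }) {
        return false
    }
    return true
}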
3. Convert the code points into the correct format
Twemoji images are named according to their corresponding code points[2], which makes sense. So, the next step is to convert the Character into a string equivalent to the image name:
let codePoint = String("🙃").unicodeScalars.first!.value // 128579
let imageName = String(codePoint, radix: 16) // "1f643"
Great, but this won't work for flags or keycaps, so we'll have to modify our code to take those into account:
let scalars = String("🇧🇪").unicodeScalars
let filtered = scalars.filter{ $0.value != 0xfe0f } // Remove VS16 from variants, including keycaps.
let mapped = filtered.map{ String($0.value, radix: 16) }
let imageName = mapped.joined(separator: "-") // "1f1e7-1f1ea"
4. Replace the emoji in the string
In order to replace the emoji in a given String, we'll need to use NSMutableAttributedString for storing the original string, and replace the emoji with NSTextAttachment objects containing the corresponding Twemoji image.
let originalString = "🙃"
let attributedString = NSMutableAttributedString(string: originalString)

for character in originalString.characters {
    // Check if character is emoji, see section 2.
    ...
    // Get the image name from the character, see section 3.
    let imageName = ...
    // Safely unwrapping to make sure the image exists.
    if let image = UIImage(named: imageName) {
        let attachment = NSTextAttachment()
        attachment.image = image
        // Create an attributed string from the attachment.
        let twemoji = NSAttributedString(attachment: attachment)
        // Get the range of the character in attributedString.
        let range = attributedString.mutableString.range(of: String(character))
        // Replace the emoji with the corresponding Twemoji.
        attributedString.replaceCharacters(in: range, with: twemoji)
    }
}
To display the resulting attributed string, just set it as the attributedText property of a UITextView/UILabel.
Note that the above method doesn't take into account zero-width joiners or modifier sequences, but I feel like this answer is already too long as it stands.
[1] There is a quirk with the Character type that interprets a sequence of joined regional indicator symbols as one object, despite containing a theoretically unlimited number of Unicode scalars. Try "🇩🇰🇫🇮🇮🇸🇳🇴🇸🇪".characters.count in a playground.
[2] The naming pattern varies slightly when it comes to zero-width joiners and variation selectors, so it's easier to strip these out of the image names – see here.
Easiest thing to do:
1) Load the twemoji images into your project.
2) Create an NSDictionary that correlates the emoji codes supported by iOS with the paths to the respective twemoji images:
NSArray *iOSEmojis = @[@"iOSEmoji1", @"iOSEmoji2"];
NSDictionary *twemojiPaths = [NSDictionary dictionaryWithObjects:@[@"Project/twemoji1.png", @"Project/twemoji2.png"]
                                                          forKeys:@[@"iOSEmoji1", @"iOSEmoji2"]];
3) Code your app to search for emoji strings and display the twemojis where the regular emojis would go:
for (NSString *emoji in iOSEmojis)
{
    NSString *twemojiPath = [twemojiPaths valueForKey:emoji];

    // Find the position of the emoji string in the text
    // and put an image view there.
    NSRange range = [label.text rangeOfString:emoji];
    NSString *prefix = [label.text substringToIndex:range.location];
    CGSize prefixSize = [prefix sizeWithAttributes:@{NSFontAttributeName: [UIFont fontWithName:@"HelveticaNeue" size:14]}];
    CGSize emojiSize = [emoji sizeWithAttributes:@{NSFontAttributeName: [UIFont fontWithName:@"HelveticaNeue" size:14]}];

    CGRect imageViewFrame = CGRectMake(prefixSize.width, 0, emojiSize.width, label.frame.size.height);
    imageViewFrame = [self.view convertRect:imageViewFrame fromView:label];

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:imageViewFrame];
    imageView.image = [UIImage imageWithContentsOfFile:twemojiPath];
    // Add the overlay to the view hierarchy.
    [self.view addSubview:imageView];
}
Sorry, I don't have any code yet, but I would appreciate some advice!
I have a countdown timer showing seconds with one decimal place in a UILabel (10.0, 9.9, 9.8, etc.). It works fine, but the decimal point moves around slightly depending on the digits being displayed. Is there a way to align the text in the UILabel to the decimal point, or should I create two labels (one for the seconds value aligned right and one for the decimal value aligned left)?
Thanks!
I think your suggestion of multiple labels is perfectly valid.
Here is an attributed string solution (for m:ss but it should work with floating point numbers also):
let string = "\t43:21" // Sample min:sec. Note the leading tab is required in this code.
let countdownFont = UIFont.systemFontOfSize(13)
let terminators = NSCharacterSet(charactersInString: "\t:.") // in some locales '.' is used instead of ':'
let tabStop = NSTextTab(textAlignment: .Right, location: 40, options: [NSTabColumnTerminatorsAttributeName: terminators])
let paragraphStyle = NSMutableParagraphStyle()
paragraphStyle.tabStops = [tabStop]
let attributeDictionary: [String: AnyObject] = [NSParagraphStyleAttributeName: paragraphStyle, NSFontAttributeName: countdownFont]
let attributedString = NSAttributedString(string: string, attributes: attributeDictionary)
self.countdownLabel.attributedText = attributedString
Related resources:
https://www.objc.io/issues/9-strings/string-rendering/#tables-with-numbers
https://library.oreilly.com/book/0636920034261/programming-ios-8-1st-edition/314.xhtml?ref=toc#_tab_stops
Of course a fixed-width font, such as Courier or Menlo, could also solve this problem, but it would contrast fairly starkly with the rest of your UI.
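Another option I'd consider, if only the digits need stable widths, is the monospaced-digit variant of the system font (UIFont.monospacedDigitSystemFont, available since iOS 9), which keeps the decimal point in place without switching the whole label to Courier or Menlo; a short sketch in current Swift syntax:
import UIKit

let countdownLabel = UILabel()
// Every digit has the same advance width in this font, so "10.0", "9.9",
// "9.8", … keep the decimal point aligned without tab stops or extra labels.
countdownLabel.font = UIFont.monospacedDigitSystemFont(ofSize: 13, weight: .regular)
countdownLabel.textAlignment = .right
countdownLabel.text = "9.8"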
I have a multi-line label that has the following text:
Lots of text here · $$$$
Since the text at the beginning is freeform, sometimes the wrapping ends up looking like this:
Lots of text here · $$$
$
How do I prevent this from happening? I want it to look like this:
Lots of text here ·
$$$$
I've tried every lineBreakMode to little avail. Word wrap doesn't work because it doesn't treat $$$$ as a word.
It seems that you might benefit from subclassing UILabel and handling line breaking yourself, rather than relying on the NSLineBreakByWordWrapping line break mode, so that special words like your "$$$$" are treated as words. You will effectively be expanding the definition of what your custom line break mode considers a word.
You would have to roll your own line-breaking algorithm. The approach to determining the location of your line-breaks would be similar to the following:
Loop through the string, to get each character, until one of two conditions is met: a) you have reached the width of the view, or b) you have reached a space, and the next word (delimited by a space) doesn't fit on the same line.
If you have reached condition a, you have two options: you could either adopt a policy that never splits words into multiple lines, or you could apply the no-split rule only to your special words. Either way, you will need to insert a line break at the beginning of the special word when there is no more room on a given line.
You may want to use two separate strings, to keep the source string separate from the display string that contains your formatting.
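A rough Swift sketch of the word-level part of that approach (it only covers condition b, it treats anything delimited by spaces, including "$$$$", as an unbreakable word, and the font and width parameters are placeholders):
import UIKit

func wrapped(_ text: String, font: UIFont, maxWidth: CGFloat) -> String {
    let attributes: [NSAttributedString.Key: Any] = [.font: font]
    var lines: [String] = []
    var currentLine = ""

    // Splitting on spaces means a run like "$$$$" is one word and is never
    // broken across lines.
    for word in text.split(separator: " ").map({ String($0) }) {
        let candidate = currentLine.isEmpty ? word : currentLine + " " + word
        let width = (candidate as NSString).size(withAttributes: attributes).width
        if width <= maxWidth || currentLine.isEmpty {
            currentLine = candidate
        } else {
            // The next word doesn't fit on this line, so start a new one.
            lines.append(currentLine)
            currentLine = word
        }
    }
    if !currentLine.isEmpty { lines.append(currentLine) }
    return lines.joined(separator: "\n")
}

// Usage: pre-wrap the display string, keeping the source string untouched.
// label.numberOfLines = 0
// label.text = wrapped("Lots of text here · $$$$", font: label.font, maxWidth: label.bounds.width)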
Let me know if that helps!
This might be very late, but at least it could help someone.
The way I have done it is as follows:
UIFont *fontUsed = [UIFont systemFontOfSize:17];
NSDictionary *dictFont = [NSDictionary dictionaryWithObject:fontUsed forKey:NSFontAttributeName];

NSString *strTextToShow = @"text that has to be displayed but without $$$$";
CGRect rectForSimpleText = [strTextToShow boundingRectWithSize:CGSizeMake(154, 258) options:NSStringDrawingUsesLineFragmentOrigin attributes:dictFont context:nil];
CGFloat oldHeight = rectForSimpleText.size.height;

NSString *strTextAdded = [NSString stringWithFormat:@"%@ $$$$", strTextToShow];
CGRect rectForAppendedText = [strTextAdded boundingRectWithSize:CGSizeMake(154, 258) options:NSStringDrawingUsesLineFragmentOrigin attributes:dictFont context:nil];
CGFloat newHeight = rectForAppendedText.size.height;

// If appending " $$$$" makes the text taller, the token did not fit on the
// last line, so force the whole "$$$$" onto its own line instead.
if (oldHeight < newHeight) {
    strTextAdded = [strTextAdded stringByReplacingOccurrencesOfString:@"$$$$" withString:@"\n$$$$"];
}
[lblLongText setText:strTextAdded];
lblLongText here is the IBOutlet of the UILabel, and CGSizeMake(154, 258) is the size of the UILabel I used. Let me know if you find any other way.
Try inserting a line break in your input text:
Lots of text here ·\n $$$
It should print the $$$ on the next line.
For example:
import Foundation
import UIKit
var str = NSString(string: "saldkjaskldjhf")
var font = UIFont.systemFontOfSize(14.0)
var attributes:[String:AnyObject] = [NSFontAttributeName: font]
var attriStrWithoutParagraph = NSAttributedString(string: str, attributes: attributes)
var size = attriStrWithoutParagraph.boundingRectWithSize(CGSize(width: CGFloat.max, height: CGFloat.max), options: NSStringDrawingOptions.UsesLineFragmentOrigin, context: nil)
var paragraphstyle = NSMutableParagraphStyle()
paragraphstyle.firstLineHeadIndent = CGFloat(20)
attributes[NSParagraphStyleAttributeName] = paragraphstyle
attriStrWithoutParagraph = NSAttributedString(string: str, attributes: attributes)
size = attriStrWithoutParagraph.boundingRectWithSize(CGSize(width: CGFloat.max, height: CGFloat.max), options: NSStringDrawingOptions.UsesLineFragmentOrigin, context: nil)
here is the output:
(0.0,0.0,87.276,16.702)
(0.0,0.0,87.276,16.702)
We can see the result is the same, so the firstLineHeadIndent is not taken into account. Why does it work like this?
You're specifying very large (effectively infinite) values (CGFloat.max) for the size that you're passing to -boundingRectWithSize:options:. So, the text will never wrap. It will always be laid out in one long line.
Furthermore, the docs for -boundingRectWithSize:options: say:
The origin of the rectangle returned from this method is the first glyph origin.
So, the result is always relative to where the first glyph is placed. You're basically measuring the size of the line. The indent doesn't change the size of the line. It changes where the first glyph is placed, but the result is relative to the first glyph, so it doesn't change the result.
It would change the result if you were providing a real limit for the width and making the paragraph wrap. In that case, the second line would be "outdented" relative to the first line (and the first glyph), so the bounding rectangle would change as you change the firstLineHeadIndent.
You can simply apply the desired indent yourself. That is, after you get the bounding rect, add the indent distance to the X coordinate of the origin (edit: or to the width, if you want a rect encompassing the indent and not just the text positioned by the indent). (Although it's not clear to me what it could mean to indent text in an "infinite" space.)
You could also provide an actual bounding size for your desired destination for the text.
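To illustrate that last point, the same measurement does change once a real width constraint is supplied; a quick sketch in current Swift syntax (the 100-point width is an arbitrary choice):
import UIKit

let text = "saldkjaskldjhf saldkjaskldjhf saldkjaskldjhf"
let font = UIFont.systemFont(ofSize: 14.0)

let style = NSMutableParagraphStyle()
style.firstLineHeadIndent = 20

let plain = NSAttributedString(string: text, attributes: [.font: font])
let indented = NSAttributedString(string: text,
                                  attributes: [.font: font, .paragraphStyle: style])

// Constrain the width so the text actually wraps. The indent now shifts the
// first glyph relative to the wrapped lines, so the bounding rects differ
// (for example, the indented version may need an extra line).
let constraint = CGSize(width: 100, height: .greatestFiniteMagnitude)
let plainRect = plain.boundingRect(with: constraint,
                                   options: .usesLineFragmentOrigin, context: nil)
let indentedRect = indented.boundingRect(with: constraint,
                                         options: .usesLineFragmentOrigin, context: nil)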
Several posts have noted difficulties with getting an exact height out of CTFramesetterSuggestFrameSizeWithConstraints, and here (framesetter post), @Chris DeSalvo gives what looks like the definitive fix: add a paragraph style setting with the correct line spacing adjustment.
DeSalvo gets his “leading” by removing UIFont’s ascender and descender from its lineHeight. I wondered how that would compare to CTFontGetLeading.
I worked with fonts created like this:
CTFontRef fontr = CTFontCreateWithName((CFStringRef)@"Helvetica Neue", 16.0f, NULL);
UIFont *font = [UIFont fontWithName:@"Helvetica Neue" size:16.0f];
The values were quite different:
0.448 CTFontGetLeading
2.360 DeSalvo’s formula: UIFont lineHeight - ascender + descender
Here are some other UIFont values:
21.000 UIFont’s lineHeight
15.232 UIFont’s ascender (Y coord from baseline)
-3.408 UIFont’s descender (Y coord from baseline)
08.368 UIFont’s xHeight
And here are the CTFont values that Ken Thomases inquired about:
11.568001 CTFontGetCapHeight
08.368 CTFontGetXHeight
-15.216001, -7.696001, 38.352001, 24.928001 CTFontGetBoundingBox
15.232 CTFontGetAscent
03.408 CTFontGetDescent (class ref says "scaled font-descent metric scaled according to the point size and matrix of the font reference" -- which apparently means that it is the absolute value of the Y coordinate from the baseline?)
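For reference, the two "leading" values being compared can be reproduced like this (a Swift sketch, same face and point size as above):
import UIKit
import CoreText

let ctFont = CTFontCreateWithName("Helvetica Neue" as CFString, 16.0, nil)
let uiFont = UIFont(name: "Helvetica Neue", size: 16.0)!

let coreTextLeading = CTFontGetLeading(ctFont)                                // ≈ 0.448
// DeSalvo's formula from above (note that descender is negative).
let deSalvoLeading = uiFont.lineHeight - uiFont.ascender + uiFont.descender   // ≈ 2.36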
I note that UIFont previously had a property specifically for “leading,” but it has been deprecated and we are advised to use lineHeight instead. So UIFont considers leading to be 21 and CTFontRef 0.448 for the same font? Something’s not right.
Three questions:
Is “leading” really what is meant by kCTParagraphStyleSpecifierLineSpacingAdjustment?
If so, which method/formula should I use to get it?
If not, what should I use for the line spacing adjustment?
I too ran into this and here is the code that worked in a real project:
// When you create an attributed string the default paragraph style has a leading
// of 0.0. Create a paragraph style that will set the line adjustment equal to
// the leading value of the font. This logic will ensure that the measured
// height for a given paragraph of attributed text will be accurate wrt the font.

- (void) applyParagraphAttributes:(CFMutableAttributedStringRef)mAttributedString
{
    CGFloat leading = CTFontGetLeading(self.plainTextFont);

    CTParagraphStyleSetting paragraphSettings[1] = {
        kCTParagraphStyleSpecifierLineSpacingAdjustment, sizeof(CGFloat), &leading
    };

    CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, 1);
    CFRange textRange = CFRangeMake(0, [self length]);

    CFStringRef keys[] = { kCTParagraphStyleAttributeName };
    CFTypeRef values[] = { paragraphStyle };
    CFDictionaryRef attrValues = CFDictionaryCreate(kCFAllocatorDefault,
                                                    (const void**)&keys,
                                                    (const void**)&values,
                                                    sizeof(keys) / sizeof(keys[0]),
                                                    &kCFTypeDictionaryKeyCallBacks,
                                                    &kCFTypeDictionaryValueCallBacks);
    BOOL clearOtherAttributes = FALSE;
    CFAttributedStringSetAttributes(mAttributedString, textRange, attrValues, (Boolean)clearOtherAttributes);
    CFRelease(attrValues);
    CFRelease(paragraphStyle);

    self.stringRange = textRange;
    return;
}
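For context, this is roughly how the measurement side might look once the paragraph attributes above have been applied (a Swift sketch; the width constraint is whatever layout width you are targeting):
import UIKit
import CoreText

func suggestedHeight(for attributed: NSAttributedString, width: CGFloat) -> CGFloat {
    let framesetter = CTFramesetterCreateWithAttributedString(attributed as CFAttributedString)
    let constraints = CGSize(width: width, height: .greatestFiniteMagnitude)
    let size = CTFramesetterSuggestFrameSizeWithConstraints(
        framesetter,
        CFRange(location: 0, length: attributed.length),
        nil,          // no additional frame attributes
        constraints,
        nil)          // the fitted range isn't needed here
    return ceil(size.height)
}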
Answers to the 3 questions I had above:
Yes, “leading” really is what is meant by kCTParagraphStyleSpecifierLineSpacingAdjustment. Or at any rate, it works as expected.
Use CTFontGetLeading(fontRef) to get the font's normal leading, or plug in whatever value (as a CGFloat) you choose.
N/A.
Answers 1 and 2 work: Specifying a leading value in a paragraphStyle attribute of your attributed string will enable the Core-Text framesetter to calculate its height exactly.
There are two caveats:
If you try to calculate heights incrementally, one string at a time, with each string containing an initial line break, the framesetter will consider that line break to represent an entire line, not just the leading. If you want the height of the concatenated strings, you have to feed that concatenation to the framesetter. Of course, you could keep track of the incremental height differences, but there's no way to avoid having the framesetter recalculate the earlier string dimensions.
CATextLayer ignores spacing adjustments (and other attributes). If framing to an exact string height is an issue, you must draw directly into a CALayer.
And there is one mystery: what is going on with UIFont's deprecated leading property? Leading and lineHeight are two distinct things.