Disable blending when drawing with CoreText - ios

I'm drawing black text on a gray background using CoreText.
It seems that the system automatically does some blending on the glyphs. I'd like to disable this behavior if possible, but I'm not sure how.
Below is a zoomed-in screenshot of the top of the letter L; this is the blending I'd like to disable.
The drawing code looks like:
// `ctx`, `path`, and `size` are set up earlier in the playground (see the gist linked below).
let para = NSMutableParagraphStyle()
para.alignment = .center
let attrString = NSAttributedString(string: Configuration.trainString,
                                    attributes: [
                                        .font: UIFont.boldSystemFont(ofSize: size.height * 0.8),
                                        .paragraphStyle: para
                                    ])
let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, attrString.length), path, nil)
CTFrameDraw(frame, ctx)
let cgImage = ctx.makeImage()
return cgImage
Is there a way to do this while still leveraging CoreText?
EDIT: it is possible this is related; however, I believe this question is still valid. This question can be answered explicitly with code, whereas no code was provided before. I am already rounding the path used by the framesetter (as well as the font size) to integral values.
A full gist of the code, which will run in a Swift playground, is posted HERE.

This may seem obvious, but it bears repeating that CTFrameDraw(_:_:) ultimately just performs operations on a CGContext, so the context's text-related settings still apply.
Depending on the specifics of what you want, I suggest you experiment with the font-smoothing settings on CGContext, starting with setAllowsFontSmoothing(_:).
EDIT
Using your playground I found a specific solution: add setAllowsAntialiasing(false) just before drawing your text:
ctx.setAllowsAntialiasing(false)
CTFrameDraw(frame, ctx)
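For completeness, here is a minimal sketch (assuming a CGContext ctx and a CTFrame frame built as in the question) that turns off every smoothing/antialiasing switch before drawing; in the playground above, setAllowsAntialiasing(false) was the one that mattered:
ctx.setAllowsFontSmoothing(false)
ctx.setShouldSmoothFonts(false)
ctx.setAllowsAntialiasing(false)
ctx.setShouldAntialias(false)
CTFrameDraw(frame, ctx)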

Related

Metal alphaBlendOperation .max weird behavior

I'm using Metal to draw some lines. My drawing canvas is a texture attached to an MTLRenderPassDescriptor, and when I draw into it, blending is enabled on the MTLRenderPipelineDescriptor with alphaBlendOperation = .max:
// Render pass: load the existing canvas texture and store the result.
renderPassDescriptor = MTLRenderPassDescriptor()
let attachment = renderPassDescriptor?.colorAttachments[0]
attachment?.texture = self.texture
attachment?.loadAction = .load
attachment?.storeAction = .store

// Render pipeline: blending enabled with .max for both RGB and alpha.
let rpd = MTLRenderPipelineDescriptor()
rpd.colorAttachments[0].pixelFormat = .rgba8Unorm
let colorAttachment = rpd.colorAttachments[0]!
colorAttachment.isBlendingEnabled = true
colorAttachment.rgbBlendOperation = .max
colorAttachment.alphaBlendOperation = .max
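For reference, .max blending reduces, per channel, to taking the larger of the source (brush) and destination (canvas) values. A CPU-side sketch of the per-pixel math (the names are illustrative, not Metal API):
struct Pixel { var r, g, b, a: Float }

// What rgbBlendOperation = .max / alphaBlendOperation = .max compute for one pixel.
func maxBlend(source: Pixel, destination: Pixel) -> Pixel {
    return Pixel(r: max(source.r, destination.r),
                 g: max(source.g, destination.g),
                 b: max(source.b, destination.b),
                 a: max(source.a, destination.a))
}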
I can change the brush properties (size, opacity, hardness/blur). The first two brushes work really well, as in the image below.
However, I see one odd behavior when I use a blurred brush with faded edges: where lines connect, the faded areas do not blend as expected and a small empty line appears at the connection. The image below shows this issue; compare the single line and single point with the connections and you can see the behavior very clearly.
With .max blending, the result should keep the larger alpha, whether it comes from the underlying texture or from the brush, but when I tap the second and third points an empty line appears instead of either alpha being kept. It is as if the alpha becomes zero in these areas.
This is my faded brush; you can see there is a gradient of color, but I don't know if there is a problem with it.
Please share any ideas you have to solve this.

How to get emoji path in iOS

My purpose: draw the outline of every glyph.
example1:
input: text= "666棒"
display:
Attach: In the figure above, 1 is displayView, 2 is inputView.
example2:
input: text= "666😁棒"
display:
Attach: In the figure above, 1 is displayView, 2 is inputView, and 3 is where nothing is rendered.
The main idea is:
Use Core Text to obtain every CGGlyph.
Get every glyph's CGPath.
Use a CAShapeLayer to display the glyphs on screen.
Main method:
let letters = CGMutablePath()
let font = CTFontCreateWithName(fontName as CFString?, fontSize, nil)
let attrString = NSAttributedString(string: text, attributes: [kCTFontAttributeName as String: font])
let line = CTLineCreateWithAttributedString(attrString)
let runArray = CTLineGetGlyphRuns(line)
for runIndex in 0..<CFArrayGetCount(runArray) {
    let run: CTRun = unsafeBitCast(CFArrayGetValueAtIndex(runArray, runIndex), to: CTRun.self)
    let dictRef: CFDictionary = unsafeBitCast(CTRunGetAttributes(run), to: CFDictionary.self)
    let dict: NSDictionary = dictRef as NSDictionary
    let runFont = dict[kCTFontAttributeName as String] as! CTFont
    for runGlyphIndex in 0..<CTRunGetGlyphCount(run) {
        let thisGlyphRange = CFRangeMake(runGlyphIndex, 1)
        var glyph = CGGlyph()
        var position = CGPoint.zero
        CTRunGetGlyphs(run, thisGlyphRange, &glyph)
        CTRunGetPositions(run, thisGlyphRange, &position)
        // CTFontCreatePathForGlyph returns nil for bitmap-only glyphs (e.g. emoji).
        let letter = CTFontCreatePathForGlyph(runFont, glyph, nil)
        let t = CGAffineTransform(translationX: position.x, y: position.y)
        if let letter = letter {
            letters.addPath(letter, transform: t)
        }
    }
}
let path = UIBezierPath()
path.move(to: CGPoint.zero)
path.append(UIBezierPath(cgPath: letters))
let pathLayer = CAShapeLayer()
pathLayer.path = path.cgPath
self.layer.addSublayer(pathLayer)
...
Question:
How can I get the emoji path, so that I can draw the emoji outline instead of drawing the whole emoji? Another benefit is that I could animate the emoji path if needed.
Any help is appreciated!
************************ update 2.15.2017 ***********************
Thanks to #KrishnaCA's suggestion.
I used bool supports = CTFontGetGlyphWithName(myFont, "😀") and found that no system font supports emoji this way.
Fortunately, Google's Noto fonts provide good support for emoji.
You can find them here: Google's Noto.
I used the font Noto Emoji.
Display:
Only Noto Emoji and Noto Color Emoji support emoji (I guess).
Hope this helps people who come here!
I believe you need to check whether the CTFont has a glyph for the Unicode character in question. If it doesn't, fall back to any default CTFont that does have a glyph for it.
You can check that using the following code:
bool supports = CTFontGetGlyphWithName(myFont, "😀")
Here, myFont is a CTFontRef object.
Please let me know if this isn't what you're looking for.
I believe you'll need CATextLayers to help you out.
I know it's a bit late, but sadly you can not: emojis are actually bitmaps drawn into the same context as the shapes representing regular characters. Your best bet is probably to draw the emoji characters separately, at the needed scale, into the context (a sketch of this follows below). This won't give you access to the actual vector data.
If you really need it in vector form, I'd go with:
finding the Apple emoji font redrawn in vector form (I remember seeing it on the internet, though I'm not sure it contains all the latest emojis);
mapping the names of the individual vector images you found to the characters, and then drawing those vector images.
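A minimal sketch of the bitmap fallback mentioned above (the helper name is mine): render the emoji substring on its own into an image at whatever scale you need, then place that image next to your vector glyph paths.
import UIKit

func emojiImage(_ emoji: String, pointSize: CGFloat) -> UIImage? {
    let attributed = NSAttributedString(string: emoji, attributes: [
        .font: UIFont.systemFont(ofSize: pointSize)
    ])
    let size = attributed.size()
    // Scale 0 uses the screen's native scale, so the bitmap stays sharp.
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }
    attributed.draw(at: .zero)
    return UIGraphicsGetImageFromCurrentImageContext()
}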

How to invert colors of a specific area of a UIView?

It's easy to blur a portion of a view, keeping in mind that if the contents of the views behind it change, the blur changes too in real time.
My questions:
How can I make an invert effect that I can put over a view, so that the contents behind it appear with inverted colors?
How can I add an effect that knows the average color of the pixels behind it?
In general, how can I access those pixels and manipulate them?
My question is not about UIImageView; I'm asking about UIView in general.
There are libraries that do something similar, but they are slow and don't run as smoothly as blur!
Thanks.
If you know how to code a CIColorKernel, you'll have what you need.
Core Image has several blur filters, all of which use the GPU, which will give you the performance you need.
The CIAreaAverage filter will give you the average color for a specified rectangular area.
Core Image Filters
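As a quick sketch of CIAreaAverage (a built-in filter; only the helper name below is mine): it returns a 1x1 image whose single pixel is the average color of the requested extent, which you can read back into a CIColor.
import CoreImage

func averageColor(of image: CIImage, context: CIContext = CIContext()) -> CIColor? {
    guard let filter = CIFilter(name: "CIAreaAverage") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgRect: image.extent), forKey: kCIInputExtentKey)
    guard let output = filter.outputImage else { return nil }
    // Render the single averaged pixel into a 4-byte RGBA buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: CIFormat.RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    return CIColor(red: CGFloat(pixel[0]) / 255,
                   green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255,
                   alpha: CGFloat(pixel[3]) / 255)
}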
Here is about the simplest CIColorKernel you can write. It swaps the red and green value for every pixel in an image (note the "grba" instead of "rgba"):
kernel vec4 swapRedAndGreenAmount(__sample s) {
return s.grba;
}
To put this into a CIColorKernel, just use this code:
let swapKernel = CIColorKernel(string:
    "kernel vec4 swapRedAndGreenAmount(__sample s) {" +
    "    return s.grba;" +
    "}"
)
#tww003 has good code to convert a view's layer into a UIImage. Assuming you call your image myUiImage, you can execute the swap kernel like this:
let myInputCi = CIImage(image: myUiImage)!
let myOutputCi = swapKernel?.apply(withExtent: myInputCi.extent, arguments: [myInputCi])
let myNewImage = UIImage(ciImage: myOutputCi!)
That's about it. You can do a lot more (including using CoreGraphics, etc.), but this is a good start.
One last note: you can chain individual filters (including hand-written color, warp, and general kernels). If you want, you can chain your color average over the underlying view with a blur and do whatever kind of inversion you wish as a single filter/effect. A sketch of a simple inversion using a built-in filter follows below.
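For instance, here is a minimal sketch (assuming you already have a UIImage snapshot of the area, captured as shown in the next answer) that does the inversion with the built-in CIColorInvert filter rather than a hand-written kernel:
import UIKit
import CoreImage

func invertedImage(from snapshot: UIImage) -> UIImage? {
    guard let input = CIImage(image: snapshot),
          let filter = CIFilter(name: "CIColorInvert") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }
    // Render through a CIContext so the result is backed by a CGImage.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}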
I don't think I can fully answer your question, but maybe I can point you in the right direction.
Apple has some documentation on accessing the pixel data of CGImages, but of course that requires that you have an image to work with in the first place. Fortunately, you can create an image from a UIView like this:
UIGraphicsBeginImageContext(view.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
From the image you created, you'll be able to manipulate the pixel data however you want (a rough sketch follows below). It may not be the cleanest way to solve your problem, but it's something worth exploring.
Unfortunately, the link I provided is written in Objective-C and is a few years old, but maybe you can figure out how to make good use of it.
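As a rough sketch of that pixel access (the helper name and the 8-bit RGBA layout are my assumptions), you can redraw the snapshot's CGImage into your own bitmap context and read the bytes:
func pixelData(of cgImage: CGImage) -> [UInt8]? {
    let width = cgImage.width
    let height = cgImage.height
    var data = [UInt8](repeating: 0, count: width * height * 4)
    guard let ctx = CGContext(data: &data,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    // Drawing the image into the context fills `data` with premultiplied RGBA bytes.
    ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return data
}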
First, I'd recommend extending UIImageView for this purpose. ref
Good ref by Joe.
You have to override the drawRect method:
import UIKit

@IBDesignable class PortholeView: UIView {
    @IBInspectable var innerCornerRadius: CGFloat = 10.0
    @IBInspectable var inset: CGFloat = 20.0
    @IBInspectable var fillColor: UIColor = UIColor.grayColor()
    @IBInspectable var strokeWidth: CGFloat = 5.0
    @IBInspectable var strokeColor: UIColor = UIColor.blackColor()

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)

        // Prep constants
        let roundRectWidth = rect.width - (2 * inset)
        let roundRectHeight = rect.height - (2 * inset)

        // Use the even-odd rule to subtract portalRect from the outer fill
        // (See https://stackoverflow.com/questions/14141081/uiview-drawrect-draw-the-inverted-pixels-make-a-hole-a-window-negative-space)
        let outerFill = UIBezierPath(rect: rect)
        let portalRect = CGRectMake(
            rect.origin.x + inset,
            rect.origin.y + inset,
            roundRectWidth,
            roundRectHeight)

        fillColor.setFill()
        let portal = UIBezierPath(roundedRect: portalRect, cornerRadius: innerCornerRadius)
        outerFill.appendPath(portal)
        outerFill.usesEvenOddFillRule = true
        outerFill.fill()

        strokeColor.setStroke()
        portal.lineWidth = strokeWidth
        portal.stroke()
    }
}
Your answer is here

Losing NSTextEffectLetterpressStyle when drawing into CGBitmapContext

I am drawing text on top of a solid background color, and while the text appears, the NSTextEffectLetterpressStyle effect applied to the attributed string isn't showing up: you can only see the text color, not the additional white outline that provides the desired look. It looks essentially the same as if I hadn't applied the letterpress effect. Why is this, and how can I draw the letterpress text effect correctly?
let bitmapContext = CGBitmapContextCreate(nil, UInt(imageRect.width), UInt(imageRect.height), UInt(8), UInt(imageRect.width * 16), CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue))
CGContextSetFillColorWithColor(bitmapContext, UIColor.blackColor().CGColor)
let fullBox = CGRectMake(0, 0, imageRect.width, imageRect.height)
CGContextAddRect(bitmapContext, fullBox)
CGContextFillPath(bitmapContext)
CGContextSetTextDrawingMode(bitmapContext, kCGTextFill)
CGContextSetTextPosition(bitmapContext, drawPoint.x, drawPoint.y)
let coloredAttributedString = NSAttributedString(string: "20", attributes:[NSFontAttributeName: myFont, NSForegroundColorAttributeName: textColor, NSTextEffectAttributeName: NSTextEffectLetterpressStyle])
let displayLineTextColored = CTLineCreateWithAttributedString(coloredAttributedString)
CTLineDraw(displayLineTextColored, bitmapContext)
let cgImage = CGBitmapContextCreateImage(bitmapContext)
var myImage = CIImage(CGImage: cgImage)
This is the result:
This is what I expected it to be (notice the white-ish outline):
Both CFAttributedString and NSAttributedString can apply arbitrary attributes to ranges. The attributed string types don't inherently interpret the attributes.
Rather, the string drawing technologies interpret the attributes. Core Text understands a different set of attributes than UIKit or AppKit. The set of attributes supported by Core Text is documented here. It does not list (an equivalent of) NSTextEffectAttributeName.
If you set up a current graphics context using UIGraphicsBeginImageContextWithOptions() and draw into it using the NSAttributedString drawing methods, that should work, since UIKit is then doing the drawing; a sketch follows below.
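A minimal sketch of that approach, reusing imageRect, myFont, textColor, and drawPoint from the question. UIKit interprets NSTextEffectAttributeName, so the letterpress effect survives:
UIGraphicsBeginImageContextWithOptions(imageRect.size, true, 0)
// Fill the background, then let UIKit draw the attributed string.
UIColor.blackColor().setFill()
UIRectFill(CGRectMake(0, 0, imageRect.width, imageRect.height))
let letterpressString = NSAttributedString(string: "20", attributes: [NSFontAttributeName: myFont, NSForegroundColorAttributeName: textColor, NSTextEffectAttributeName: NSTextEffectLetterpressStyle])
letterpressString.drawAtPoint(drawPoint)
let myImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()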

Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?

There are many questions on OpenGL font rendering, many of them are satisfied by texture atlases (fast, but wrong), or string-textures (fixed-text only).
However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":
What is state-of-the-art for text rendering in OpenGL as of version 4.1?
So, what should we be using on iOS GL ES 2 today?
I'm disappointed that there appears to be no open-source (or even commercial solution). I know a lot of teams suck it down and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space etc (ugh) - but there must be a better way than re-writing the whole of "fonts" from scratch?
As far as I can see, there are two parts to this:
How do we render text using a font?
How do we display the output?
For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).
For 2 (how to display), we have shaders, we have VBOs, we have glyph textures, we have lookup textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?).
Here are the two common OpenGL approaches I know of:
Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)
This is wrong, unless you're using a 1980s-era "bitmap font" (and even then, a texture atlas requires more work than it may seem if you need it to be correct for non-trivial fonts)
(Fonts aren't just "a collection of glyphs": there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail.)
Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture)
In human terms, this is fast. In frame-rendering, this is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
Technically, it's mostly correct (not entirely: you lose some information this way) but hugely inefficient
I've also seen, but heard both good and bad things about:
Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)
1.
Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendering are based on direct experience.
CoreGraphics with NSString
Simplest of all: render "into a CGRect"
Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)
UILabel and UITextArea with plain text
NB: they are NOT the same! There are slight differences in how they render the same text
NSAttributedString, rendered to one of the above
Again: renders differently (the differences I know of are fairly subtle and classified as "bugs", various SO questions about this)
CATextLayer
A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CFFont / UIFont, which reveals some more rendering differences / strangeness
CoreText
... the ultimate solution? But a beast of its own...
I did some more experimenting, and it seems that CoreText might make for a perfect solution when combined with a texture atlas and Valve's signed-distance-field textures (which can turn a bitmap glyph into a resolution-independent hi-res texture).
...but I don't have it working yet, still experimenting.
UPDATE: Apple's docs say they give you access to everything except the final detail: which glyph + glyph layout to render (you can get the line layout, and the number of glyphs, but not the glyph itself, according to the docs). For no apparent reason, this core piece of info is apparently missing from CoreText (if so, that makes CT almost worthless; I'm still hunting to see if I can find a way to get the actual glyphs + per-glyph data).
UPDATE 2: I now have this working properly with Apple's CT (but no distance-field textures yet), but it ends up as 3 class files, 10 data structures, about 300 lines of code, plus the OpenGL code to render it. Too much for an SO answer :(
The short answer is: yes, you can do it, and it works, if you:
Create a CTFramesetter
Create a CTFrame for a theoretical 2D frame
Create a CGContext that you'll convert to a GL texture
Go through it glyph-by-glyph, allowing Apple to render to the CGContext
Each time Apple renders a glyph, calculate the bounding box (this is HARD), and save it somewhere
Also save the unique glyph-ID (this will be different for e.g. "o", "f", and "of" (one glyph!))
Finally, send your CGContext up to GL as a texture
When you render, use the list of glyph-IDs that Apple created, and for each one use the saved info and the texture to render quads with texture coordinates that pull individual glyphs out of the texture you uploaded.
This works, it's fast, it works with all fonts, it gets all font layout and kerning correct, etc. A condensed sketch of the glyph/bounding-box step follows below.
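Here is a condensed sketch of the glyph walk and bounding-box step (it assumes you already have a laid-out CTLine; this is just the core idea, not the full implementation mentioned above):
func glyphBoxes(in line: CTLine) -> [(glyph: CGGlyph, box: CGRect)] {
    var result: [(CGGlyph, CGRect)] = []
    let runs = CTLineGetGlyphRuns(line)
    for runIndex in 0..<CFArrayGetCount(runs) {
        let run = unsafeBitCast(CFArrayGetValueAtIndex(runs, runIndex), to: CTRun.self)
        let attributes = unsafeBitCast(CTRunGetAttributes(run), to: CFDictionary.self) as NSDictionary
        let runFont = attributes[kCTFontAttributeName as String] as! CTFont
        let count = CTRunGetGlyphCount(run)
        var glyphs = [CGGlyph](repeating: 0, count: count)
        var positions = [CGPoint](repeating: .zero, count: count)
        CTRunGetGlyphs(run, CFRangeMake(0, 0), &glyphs)      // range (0, 0) means the whole run
        CTRunGetPositions(run, CFRangeMake(0, 0), &positions)
        for i in 0..<count {
            // Bounding rect of the glyph in font space, offset to its position in the line.
            var glyph = glyphs[i]
            var box = CTFontGetBoundingRectsForGlyphs(runFont, .default, &glyph, nil, 1)
            box.origin.x += positions[i].x
            box.origin.y += positions[i].y
            result.append((glyph, box))
        }
    }
    return result
}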
1.
Create any string with NSMutableAttributedString.
let mabstring = NSMutableAttributedString(string: "This is a test of characterAttribute.")
mabstring.beginEditing()
var matrix = CGAffineTransform(rotationAngle: CGFloat(GLKMathDegreesToRadians(0)))
let font = CTFontCreateWithName("Georgia" as CFString?, 40, &matrix)
mabstring.addAttribute(kCTFontAttributeName as String, value: font, range: NSRange(location: 0, length: 4))
var number: Int8 = 2
let kdl = CFNumberCreate(kCFAllocatorDefault, .sInt8Type, &number)!
mabstring.addAttribute(kCTStrokeWidthAttributeName as String, value: kdl, range: NSRange(location: 0, length: mabstring.length))
mabstring.endEditing()
2.
Create a CTFrame. The rect is calculated from mabstring by CTFramesetterSuggestFrameSizeWithConstraints (a sketch of that call follows the snippet below).
let framesetter = CTFramesetterCreateWithAttributedString(mabstring)
let path = CGMutablePath()
path.addRect(rect)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
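How rect above can be obtained (a sketch; the 1024-point maximum width is just an assumed constraint): ask the framesetter for the size it needs and build the rect from that.
let constraints = CGSize(width: 1024, height: CGFloat.greatestFiniteMagnitude)
let suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, 0), nil, constraints, nil)
let rect = CGRect(origin: .zero, size: suggestedSize)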
3.
Create bitmap context.
let imageWidth = Int(rect.width)
let imageHeight = Int(rect.height)
var rawData = [UInt8](repeating: 0, count: Int(imageWidth * imageHeight * 4))
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitsPerComponent = 8
let bytesPerRow = Int(rect.width) * 4
let context = CGContext(data: &rawData, width: imageWidth, height: imageHeight, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)!
4.
Draw CTFrame in bitmap context.
CTFrameDraw(frame, context)
Now we have the raw pixel data in rawData. We can create an OpenGL texture, an MTLTexture, or a UIImage from it.
For example:
To an OpenGL texture (see: Convert an UIImage in a texture).
Set up your texture:
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
// width/height are the bitmap dimensions; textureData points at the raw pixel bytes (rawData above).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
// to MTLTexture
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: Int(imageWidth), height: Int(imageHeight), mipmapped: true)
let device = MTLCreateSystemDefaultDevice()!
let texture = device.makeTexture(descriptor: textureDescriptor)!
let region = MTLRegionMake2D(0, 0, Int(imageWidth), Int(imageHeight))
texture.replace(region: region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: bytesPerRow)
// to UIImage
let providerRef = CGDataProvider(data: NSData(bytes: &rawData, length: rawData.count * MemoryLayout.size(ofValue: UInt8(0))))
let renderingIntent = CGColorRenderingIntent.defaultIntent
let imageRef = CGImage(width: imageWidth, height: imageHeight, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo, provider: providerRef!, decode: nil, shouldInterpolate: false, intent: renderingIntent)!
let image = UIImage.init(cgImage: imageRef)
I know this post is old, but I came across it while trying to do exactly this in my application. In my search, I came across this sample project:
http://metalbyexample.com/rendering-text-in-metal-with-signed-distance-fields/
It is a nice implementation of Core Text (with Metal) using the techniques of texture atlasing and signed distance fields. It greatly helped me achieve the results I wanted. Hope this helps someone else.
