I am drawing text on top of a solid background color. The text appears, but the NSTextEffectLetterpressStyle effect applied to the attributed string isn't showing up: you can only see the text color, not the additional white outline that provides the desired look. It looks essentially the same as if I hadn't applied the letterpress effect at all. Why is this, and how can I draw the letterpress text effect correctly?
let bitmapContext = CGBitmapContextCreate(nil, UInt(imageRect.width), UInt(imageRect.height), UInt(8), UInt(imageRect.width * 16), CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue))
CGContextSetFillColorWithColor(bitmapContext, UIColor.blackColor().CGColor)
let fullBox = CGRectMake(0, 0, imageRect.width, imageRect.height)
CGContextAddRect(bitmapContext, fullBox)
CGContextFillPath(bitmapContext)
CGContextSetTextDrawingMode(bitmapContext, kCGTextFill)
CGContextSetTextPosition(bitmapContext, drawPoint.x, drawPoint.y)
let coloredAttributedString = NSAttributedString(string: "20", attributes:[NSFontAttributeName: myFont, NSForegroundColorAttributeName: textColor, NSTextEffectAttributeName: NSTextEffectLetterpressStyle])
let displayLineTextColored = CTLineCreateWithAttributedString(coloredAttributedString)
CTLineDraw(displayLineTextColored, bitmapContext)
let cgImage = CGBitmapContextCreateImage(bitmapContext)
let myImage = CIImage(CGImage: cgImage)
This is the result:
This is what I expected it to be (notice the white-ish outline):
Both CFAttributedString and NSAttributedString can apply arbitrary attributes to ranges. The attributed string types don't inherently interpret the attributes.
Rather, the string drawing technologies interpret the attributes. Core Text understands a different set of attributes than UIKit or AppKit. The set of attributes supported by Core Text is documented here. It does not list (an equivalent of) NSTextEffectAttributeName.
If you set up a current graphics context using UIGraphicsBeginImageContextWithOptions() and draw into that using the NSAttributedString drawing methods, that should work, since it's using UIKit to do the drawing.
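For instance, a minimal sketch of that approach in current Swift (the font, color, and sizes here are placeholder assumptions, not taken from your code):

```swift
import UIKit

// Sketch: draw a letterpress-styled string via UIKit instead of Core Text.
// myFont, textColor, and the sizes below are placeholder values.
let myFont = UIFont.boldSystemFont(ofSize: 40)
let textColor = UIColor.darkGray

let attributed = NSAttributedString(
    string: "20",
    attributes: [
        .font: myFont,
        .foregroundColor: textColor,
        // UIKit's string drawing honors this attribute; Core Text ignores it.
        .textEffect: NSAttributedString.TextEffectStyle.letterpressStyle
    ])

let size = CGSize(width: 100, height: 60)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.black.setFill()
UIRectFill(CGRect(origin: .zero, size: size))
// NSAttributedString's draw methods go through UIKit, so the effect is applied.
attributed.draw(at: CGPoint(x: 10, y: 10))
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
```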
Related
I'm drawing black text on a gray background using CoreText.
It seems that the system automatically does some blending on the glyphs; I'd like to disable this behavior if possible, but I'm not sure how.
This is a zoomed-in screenshot of the top of the letter L, showing the blending I'd like to disable:
The drawing code looks like:
let para = NSMutableParagraphStyle()
para.alignment = .center
let attrString = NSAttributedString(string: Configuration.trainString,
attributes: [
.font:UIFont.boldSystemFont(ofSize: size.height*0.8),
.paragraphStyle:para
])
let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, attrString.length), path, nil)
CTFrameDraw(frame, ctx)
let cgImage = ctx.makeImage()
return cgImage
Is there a way to do this while still leveraging Core Text?
EDIT: it's possible this is related; however, I believe this question is still valid. This question can be answered explicitly with code, whereas no code was provided there. I am already rounding the path used by the framesetter (as well as the font size) to integral values.
A full gist of the code that will run in a Swift playground is posted HERE.
This may seem obvious, but it bears repeating that CTFrameDraw(_:_:) ultimately just performs operations on a CGContext, so the context's text-drawing settings are essential.
Depending on the specifics of what you want, I suggest you experiment with the font smoothing functions on CGContext, starting with setAllowsFontSmoothing(_:).
EDIT
Using your playground I found a specific solution. Call setAllowsAntialiasing(false) just before drawing your text:
ctx.setAllowsAntialiasing(false)
CTFrameDraw(frame, ctx)
My purpose: draw the outline of every glyph.
example1:
input: text= "666棒"
display:
Attach: in the figure above, 1 is the displayView and 2 is the inputView.
example2:
input: text= "666😁棒"
display:
Attach: in the figure above, 1 is the displayView, 2 is the inputView, and 3 shows nothing rendered.
The main idea is:
Use Core Text to obtain every CGGlyph.
Get every glyph's CGPath.
Use a CAShapeLayer to display the glyphs on screen.
Main method:
let letters = CGMutablePath()
let font = CTFontCreateWithName(fontName as CFString?, fontSize, nil)
let attrString = NSAttributedString(string: text, attributes: [kCTFontAttributeName as String : font])
let line = CTLineCreateWithAttributedString(attrString)
let runArray = CTLineGetGlyphRuns(line)
for runIndex in 0..<CFArrayGetCount(runArray) {
let run : CTRun = unsafeBitCast(CFArrayGetValueAtIndex(runArray, runIndex), to: CTRun.self)
let dictRef : CFDictionary = unsafeBitCast(CTRunGetAttributes(run), to: CFDictionary.self)
let dict : NSDictionary = dictRef as NSDictionary
let runFont = dict[kCTFontAttributeName as String] as! CTFont
for runGlyphIndex in 0..<CTRunGetGlyphCount(run) {
let thisGlyphRange = CFRangeMake(runGlyphIndex, 1)
var glyph = CGGlyph()
var position = CGPoint.zero
CTRunGetGlyphs(run, thisGlyphRange, &glyph)
CTRunGetPositions(run, thisGlyphRange, &position)
let letter = CTFontCreatePathForGlyph(runFont, glyph, nil)
let t = CGAffineTransform(translationX: position.x, y: position.y)
if let letter = letter {
letters.addPath(letter, transform: t)
}
}
}
let path = UIBezierPath()
path.move(to: CGPoint.zero)
path.append(UIBezierPath(cgPath: letters))
let pathLayer = CAShapeLayer()
pathLayer.path = path.cgPath
self.layer.addSublayer(pathLayer)
...
Question:
How to get emoji path ,in this case I can draw the emoji outline instead of draw the whole emoji? Another benefit is I can draw emoji path animated if I need.
Any help is appreciated!
************************ update 2.15.2017 ***********************
Thanks to @KrishnaCA's suggestion.
I used bool supports = CTFontGetGlyphWithName(myFont, "😀") and found that none of the fonts I tried support emoji this way.
Fortunately, Google's Noto fonts provide good emoji support.
You can find them here: Google's Noto.
I used the font Noto Emoji.
Display:
Only Noto Emoji and Noto Color Emoji support emoji (I guess).
Hope this helps people who come here!
I believe you need to check whether the CTFont has a glyph for the Unicode character in question. If it doesn't, fall back to any default CTFont that has a glyph for that character.
You can check that using the following code.
bool supports = CTFontGetGlyphWithName(myFont, "😀")
here, myFont is a CTFontRef object.
Please let me know if this isn't what you're looking for.
I believe you'll need CATextLayers to help you out.
I know it's a bit late, but sadly you cannot: emojis are actually bitmaps drawn into the same context as the shapes representing regular characters. Your best bet is probably to draw the emoji characters separately, at the needed scale, into the context. This won't give you access to the actual vector data.
If you really need it in a vector form:
I'd go with finding the Apple emoji font redrawn in vector (I remember seeing it on the internet, though I'm not sure it contains all the latest emojis)
Then mapping the names of the individual vector images you found to the characters and drawing those vector images
It's easy to blur a portion of the view, keeping in mind that if the contents of views behind change, the blur changes too in realtime.
My questions
How can I make an invert effect that you can put over a view, so that the content behind it shows with inverted colors?
How can I add an effect that knows the average color of the pixels behind it?
In general, how can I access those pixels and manipulate them?
My question is not about UIImageView; I'm asking about UIView in general.
There are libraries that do something similar, but they are slow and don't run as smoothly as blur!
Thanks.
If you know how to code a CIColorKernel, you'll have what you need.
Core Image has several blur filters, all of which use the GPU, which will give you the performance you need.
The CIAreaAverage filter will give you the average color for a specified rectangular area.
Core Image Filters
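For the average-color part, here is a hedged sketch using CIAreaAverage (the helper function name and the rendering details are my own, assuming `sourceImage` is a snapshot of whatever sits behind your view):

```swift
import UIKit
import CoreImage

// Sketch: compute the average color of a region using the CIAreaAverage filter.
func averageColor(of sourceImage: CIImage, in rect: CGRect) -> UIColor? {
    guard let filter = CIFilter(name: "CIAreaAverage") else { return nil }
    filter.setValue(sourceImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgRect: rect), forKey: kCIInputExtentKey)
    // The output is a 1x1 image whose single pixel is the average color.
    guard let output = filter.outputImage else { return nil }
    var pixel = [UInt8](repeating: 0, count: 4)
    let context = CIContext()
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    return UIColor(red: CGFloat(pixel[0]) / 255,
                   green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255,
                   alpha: CGFloat(pixel[3]) / 255)
}
```

Because the filter runs on the GPU, this is fast enough to be re-evaluated as the content behind the view changes.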
Here is about the simplest CIColorKernel you can write. It swaps the red and green value for every pixel in an image (note the "grba" instead of "rgba"):
kernel vec4 swapRedAndGreenAmount(__sample s) {
return s.grba;
}
To put this into a CIColorKernel, just use this line of code:
let swapKernel = CIColorKernel(string:
"kernel vec4 swapRedAndGreenAmount(__sample s) {" +
"return s.grba;" +
"}")
@tww003 has good code to convert a view's layer into a UIImage. Assuming you call your image myUiImage, you can execute the swap kernel like this:
let myInputCi = CIImage(image: myUiImage)!
let myOutputCi = swapKernel.apply(withExtent: myInputCi.extent, arguments: [myInputCi])!
let myNewImage = UIImage(ciImage: myOutputCi)
That's about it. You can do a lot more (including using Core Graphics, etc.), but this is a good start.
One last note, you can chain individual filters (including hand-written color, warp, and general kernels). If you want, you can chain your color average over the underlying view with a blur and do whatever kind of inversion you wish as a single filter/effect.
I don't think I can fully answer your question, but maybe I can point you in the right direction.
Apple has some documentation on accessing the pixel data of a CGImage, but of course that requires that you have an image to work with in the first place. Fortunately, you can create an image from a UIView like this:
UIGraphicsBeginImageContext(view.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
From this image you created, you'll be able to manipulate the pixel data how you want to. It may not be the cleanest way to solve your problem, but maybe it's something worth exploring.
Unfortunately, the link I provided is written in Objective-C and is a few years old, but maybe you can figure out how to make good use of it.
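To build on that, here is a hedged Swift sketch of reading the raw pixel bytes out of such an image (the helper name and the RGBA layout choice are mine, not from the linked article):

```swift
import UIKit

// Sketch: read raw RGBA pixel data from a UIImage by redrawing it
// into a bitmap context whose buffer we own.
func pixelData(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var data = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(data: &data,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // data now holds premultiplied RGBA bytes, 4 per pixel, row by row.
    return data
}
```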
First, I'd recommend extending UIImageView for this purpose: ref.
Good ref by Joe.
You have to override the drawRect method:
import UIKit
@IBDesignable class PortholeView: UIView {
@IBInspectable var innerCornerRadius: CGFloat = 10.0
@IBInspectable var inset: CGFloat = 20.0
@IBInspectable var fillColor: UIColor = UIColor.grayColor()
@IBInspectable var strokeWidth: CGFloat = 5.0
@IBInspectable var strokeColor: UIColor = UIColor.blackColor()
override func drawRect(rect: CGRect) {
super.drawRect(rect)
// Prep constants
let roundRectWidth = rect.width - (2 * inset)
let roundRectHeight = rect.height - (2 * inset)
// Use EvenOdd rule to subtract portalRect from outerFill
// (See https://stackoverflow.com/questions/14141081/uiview-drawrect-draw-the-inverted-pixels-make-a-hole-a-window-negative-space)
let outterFill = UIBezierPath(rect: rect)
let portalRect = CGRectMake(
rect.origin.x + inset,
rect.origin.y + inset,
roundRectWidth,
roundRectHeight)
fillColor.setFill()
let portal = UIBezierPath(roundedRect: portalRect, cornerRadius: innerCornerRadius)
outterFill.appendPath(portal)
outterFill.usesEvenOddFillRule = true
outterFill.fill()
strokeColor.setStroke()
portal.lineWidth = strokeWidth
portal.stroke()
}
}
Your answer is here
When I add a semi-transparent image (sample) as a texture for a SCNNode, how can I specify a color attribute for the node where the image is transparent. Since I am able to specify either color or image as a material property, I am unable to specify the color value to the node. Is there a way to specify both color and image for the material property or is there a workaround to this problem.
If you are assigning the image to the contents of the transparent material property, you can change the material's transparencyMode to either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning the sample image to the transparent material properties contents and using the .AOne transparency mode would give you the result you are looking for.
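A minimal sketch of that suggestion (note that .AOne is spelled .aOne in current Swift, and "sample.png" is a placeholder asset name):

```swift
import SceneKit
import UIKit

// Sketch under assumptions: "sample.png" stands in for your semi-transparent texture.
let material = SCNMaterial()
material.diffuse.contents = UIColor.red
// Drive per-pixel transparency from the image's alpha channel:
material.transparent.contents = UIImage(named: "sample.png")
// With .aOne, the node is opaque (showing the diffuse color) where the
// image's alpha is 1, and see-through where the alpha is 0.
material.transparencyMode = .aOne

let node = SCNNode(geometry: SCNPlane(width: 1, height: 1))
node.geometry?.firstMaterial = material
```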
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but with a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
defer {
UIGraphicsEndImageContext()
}
let imageRect = CGRect(origin: .zero, size: image.size)
// fill with background color
color.set()
UIRectFill(imageRect)
// draw image on top
image.drawInRect(imageRect)
return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_O is:
C_O = C_A + C_B * (1 - α_A)
By passing the image as the diffuse contents, and assigning the output back to the diffuse contents, the expression can be simplified as:
C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
or equivalently:
C_diffuse += C_color * (1 - α_diffuse)
Generally the output alpha would depend on the alphas of A and B, but since B (the color) is opaque (α_B = 1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solutions was to be able to change the color, the color is created as a uniform variable which can be updated in code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;
#pragma body
// Composite A (the image) over B (the color):
// output = image + color * (1-alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);
// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from the file, and set in the materials shader modifier dictionary
enum ShaderLoadingError: ErrorType {
case FileNotFound, FailedToLoad
}
func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
throw ShaderLoadingError.FileNotFound
}
do {
return try String(contentsOfURL: url)
} catch {
throw ShaderLoadingError.FailedToLoad
}
}
// later, in the code that configures the material ...
do {
let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming 'glsl' file extension)
theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
// Handle the error here
print(error)
}
You would then be able to change the color by setting a new value for the "backgroundColor" of the material. Note that there is no initial value, so one would have to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b
// Set the color components as an SCNVector3 wrapped in an NSValue
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got more fancy and this started getting complex. So I made two spheres, one inside the other. I put the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)
let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency, it seems to get this automatically.
I have this transparent image:
My goal is to change the color of the "ME!" part: either tint only the last third of the image, or replace the blue color with a new color.
Expected result after color change:
Unfortunately, neither worked for me. To change the specific color I tried this: LINK, but as the documentation says, it only works without an alpha channel!
Then I tried this one: LINK, but it actually does nothing; no tint or anything.
Is there any other way to tint only one part of the color or just replace a specific color?
I know I could slice the image in two parts, but I hope there is another way.
It turns out to be surprisingly complicated—you’d think you could do it in one pass with CoreGraphics blend modes, but from pretty extensive experimentation I haven’t found such a way that doesn’t mangle the alpha channel or the coloration. The solution I landed on is this:
Start with a grayscale/alpha version of your image rather than a colored one: black in the areas you don’t want tinted, white in the areas you do
Create an image context with your image’s dimensions
Fill that context with black
Draw the image into the context
Get a new image (let’s call it “the-image-over-black”) from that context
Clear the context (so you can use it again)
Fill the context with the color you want the tinted part of your image to be
Draw the-image-over-black into the context with the “multiply” blend mode
Draw the original image into the context with the “destination in” blend mode
Get your final image from the context
The reason this works is because of the combination of blend modes. What you’re doing is creating a fully-opaque black-and-white image (step 5), then multiplying it by your final color (step 8), which gives you a fully opaque black-and-your-final-color image. Then, you take the original image, which still has its alpha channel, and draw it with the “destination in” blend mode which takes the color from the black-and-your-color image and the alpha channel from the original image. The result is a tinted image with the original brightness values and alpha channel.
Objective-C
- (UIImage *)createTintedImageFromImage:(UIImage *)originalImage color:(UIColor *)desiredColor {
CGSize imageSize = originalImage.size;
CGFloat imageScale = originalImage.scale;
CGRect contextBounds = CGRectMake(0, 0, imageSize.width, imageSize.height);
UIGraphicsBeginImageContextWithOptions(imageSize, NO /* not opaque */, imageScale); // 2
[[UIColor blackColor] setFill]; // 3a
UIRectFill(contextBounds); // 3b
[originalImage drawAtPoint:CGPointZero]; // 4
UIImage *imageOverBlack = UIGraphicsGetImageFromCurrentImageContext(); // 5
CGContextClearRect(UIGraphicsGetCurrentContext(), contextBounds); // 6
[desiredColor setFill]; // 7a
UIRectFill(contextBounds); // 7b
[imageOverBlack drawAtPoint:CGPointZero blendMode:kCGBlendModeMultiply alpha:1]; // 8
[originalImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationIn alpha:1]; // 9
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // 10
UIGraphicsEndImageContext();
return finalImage;
}
Swift 4
func createTintedImageFromImage(originalImage: UIImage, desiredColor: UIColor) -> UIImage {
let imageSize = originalImage.size
let imageScale = originalImage.scale
let contextBounds = CGRect(origin: .zero, size: imageSize)
UIGraphicsBeginImageContextWithOptions(imageSize, false /* not opaque */, imageScale) // 2
defer { UIGraphicsEndImageContext() }
UIColor.black.setFill() // 3a
UIRectFill(contextBounds) // 3b
originalImage.draw(at: .zero) // 4
guard let imageOverBlack = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 5
desiredColor.setFill() // 7a
UIRectFill(contextBounds) // 7b
imageOverBlack.draw(at: .zero, blendMode: .multiply, alpha: 1) // 8
originalImage.draw(at: .zero, blendMode: .destinationIn, alpha: 1) // 9
guard let finalImage = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 10
return finalImage
}
There are lots of ways to do this.
Core image filters come to mind as a good way to go. Since the part you want to change is a unique color, you could use the Core image CIHueAdjust filter to shift the hue from blue to red. Only the word you want to change has any color to it, so that's all it would change.
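A hedged sketch of the CIHueAdjust idea (the helper name is mine, and the angle is something you would tune by eye; shifting the hue by roughly π rotates blue toward the red/orange side of the wheel):

```swift
import UIKit
import CoreImage

// Sketch: shift the hue of a UIImage while preserving its alpha channel.
// `uiImage` stands in for your transparent PNG.
func hueShifted(_ uiImage: UIImage, by angle: CGFloat) -> UIImage? {
    guard let input = CIImage(image: uiImage),
          let filter = CIFilter(name: "CIHueAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(angle, forKey: kCIInputAngleKey)  // radians
    guard let output = filter.outputImage else { return nil }
    // Render through a CIContext so the alpha channel survives.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: uiImage.scale, orientation: uiImage.imageOrientation)
}
```

Since only the word you want to change has any saturation, the rest of the image is unaffected by the hue rotation.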
If you had an image with various colors in it and still wanted to replace all of one specific color, you could use CIColorCube to map the blue pixels to red without affecting the other colors. There was a thread on this board last week with sample code using CIColorCube to force one color to another. Search on CIColorCube and look for the most recent post, and you should be able to find it.
If you wanted to limit the change to a specific area of the screen you could probably come up with a sequence of core image filters that would limit your changes to just the target area.
You could also slice out the part you want to change, color edit it using a any of variety of techniques, and then composite it back together.
Another way is to use the Core Image ColorCube filter.
I made a category for myself when I had this problem. It's for NSImage, but I think it should work for UIImage after some updating:
https://github.com/braginets/NSImage-replace-color