I'm using the code below, which works well on iPhone, but I need to color an image in WatchKit (WKInterfaceImage). How can I do this? I tried using the same approach in an extension on WKInterfaceImage, but the image property is get-only.
extension UIImageView {
    func setImageColor(color: UIColor) {
        // re-render the image as a template so tintColor is applied to it
        let templateImage = self.image?.withRenderingMode(UIImage.RenderingMode.alwaysTemplate)
        self.image = templateImage
        self.tintColor = color
    }
}
How can I color a UIImage in Swift?
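For the WatchKit side: WKInterfaceImage has no readable image property, but it does offer setImage(_:) and setTintColor(_:), so a helper can take the source image as a parameter instead. A minimal sketch, mirroring the extension above (the helper name setImageColor is my own):
import WatchKit

extension WKInterfaceImage {
    // hypothetical helper: the source image is passed in because it cannot be read back
    func setImageColor(image: UIImage, color: UIColor) {
        // template rendering mode lets the tint color apply to the image
        setImage(image.withRenderingMode(.alwaysTemplate))
        setTintColor(color)
    }
}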
I need the buttons to be disabled on top of scene objects. How can I achieve that? The current code I am working with works fine, but how can I get a specific child node to be transparent?
extension SCNMaterial {
    convenience init(color: UIColor) {
        self.init()
        diffuse.contents = color
    }
    convenience init(image: UIImage) {
        self.init()
        diffuse.contents = image
    }
}
let clearMaterial = SCNMaterial(color: .clear)
boxNode.materials = [clearMaterial]
Did you not get an error? SCNGeometry, not SCNNode, has materials. Try:
boxNode.geometry?.materials = [clearMaterial]
I tried this but it did not work. Maybe SCNMaterial can't use .clear.
I have always used .transparency to hide/unhide a node. Try this:
func show() {
    yourNode.geometry?.firstMaterial?.transparency = 1
}

func hide() {
    yourNode.geometry?.firstMaterial?.transparency = 0
}
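To come back to the original question of making one specific child node transparent, the same property can be set on a node looked up by name. A minimal sketch, assuming the scene contains a node named "button" (both the scene reference and the node name are assumptions):
if let buttonNode = scene.rootNode.childNode(withName: "button", recursively: true) {
    // hypothetical node name; fades only this node and leaves siblings opaque
    buttonNode.geometry?.firstMaterial?.transparency = 0
}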
Is there a simple way to render a rotated tiled image as a view background? Something to the effect of UIColor(patternImage:) but where the image is rotated at a certain angle?
There is no simple way to achieve this, at least not in vanilla UIKit. I would add another UIView as a subview of the original view, set its background to the tiled image, and apply a CGAffineTransform to that subview.
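A minimal sketch of that approach (containerView, the image name "sample", and the oversized inset are all assumptions; the subview is made larger than its parent so the rotated tiling still covers the parent's bounds):
let patternView = UIView(frame: containerView.bounds.insetBy(dx: -200, dy: -200))
patternView.backgroundColor = UIColor(patternImage: UIImage(named: "sample")!)
// rotate the tiled subview; clipping keeps it inside the parent
patternView.transform = CGAffineTransform(rotationAngle: .pi / 6)
containerView.clipsToBounds = true
containerView.addSubview(patternView)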
Turns out Core Image filter CIAffineTile does exactly what I want.
extension UIImage {
    func tile(angle: CGFloat) -> CIImage? {
        return CIImage(image: self)?.applyingFilter(
            "CIAffineTile",
            withInputParameters: [
                kCIInputTransformKey: NSValue(
                    cgAffineTransform: CGAffineTransform(rotationAngle: angle)
                )
            ]
        )
    }
}
This function creates a CIImage with infinite extent, which can be cropped and converted to a real image.
let v = UIImageView()
// ...
let source = UIImage(named: "sample")!
let tiled = source.tile(angle: CGFloat.pi / 6)!
let result = UIImage(ciImage: tiled.cropping(to: v.bounds))
v.image = result
I'm creating an app that requires real-time application of filters to images. Converting the UIImage to a CIImage and applying the filters are both extremely fast operations, yet it takes too long to convert the resulting CIImage back to a CGImage and display it (about 1/5 of a second, which is a lot if editing needs to be real-time).
The image is about 2500 by 2500 pixels, which is most likely part of the problem.
Currently, I'm using
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've looked into using CIContext.drawImage():
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
Yet I can't find any solid documentation on exactly how this is done, or whether it would be any faster.
Is there any faster way to display a CIImage to the screen (either in a UIImageView, or directly on a CALayer)? I would like to avoid decreasing the image quality too much, because this may be noticeable to the user.
It may be worth considering Metal and displaying with a MTKView.
You'll need a Metal device which can be created with MTLCreateSystemDefaultDevice(). That's used to create a command queue and Core Image context. Both these objects are persistent and quite expensive to instantiate, so ideally should be created once:
lazy var commandQueue: MTLCommandQueue =
{
    return self.device!.newCommandQueue()
}()

lazy var ciContext: CIContext =
{
    return CIContext(MTLDevice: self.device!)
}()
You'll also need a color space:
let colorSpace = CGColorSpaceCreateDeviceRGB()!
When it comes to rendering a CIImage, you'll need to create a short lived command buffer:
let commandBuffer = commandQueue.commandBuffer()
You'll want to render your CIImage (let's call it image) to the currentDrawable?.texture of a MTKView. If that's bound to targetTexture, the rendering syntax is:
ciContext.render(image,
    toMTLTexture: targetTexture,
    commandBuffer: commandBuffer,
    bounds: image.extent,
    colorSpace: colorSpace)

commandBuffer.presentDrawable(currentDrawable!)
commandBuffer.commit()
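Put together, a sketch of how this could sit in the MTKViewDelegate callback, using the same Swift 2-era API names as above (image is assumed to be the filtered CIImage held by the view controller; note that the view's framebufferOnly must be false for Core Image to write into its texture):
func drawInMTKView(view: MTKView) {
    // bail out if no drawable is available this frame
    guard let drawable = view.currentDrawable else { return }
    let commandBuffer = commandQueue.commandBuffer()
    // render the filtered CIImage straight into the drawable's texture
    ciContext.render(image,
        toMTLTexture: drawable.texture,
        commandBuffer: commandBuffer,
        bounds: image.extent,
        colorSpace: colorSpace)
    commandBuffer.presentDrawable(drawable)
    commandBuffer.commit()
}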
I have a working version here.
Hope that helps!
Simon
I ended up using the context.drawImage(image, inRect: destinationRect, fromRect: image.extent) method. Here's the image view class that I created
import Foundation
import UIKit
//GLKit must be linked and imported
import GLKit

class CIImageView: GLKView {
    var image: CIImage?
    var ciContext: CIContext?

    //initialize with the frame, and CIImage to be displayed
    //(or nil, if the image will be set using .setRenderImage)
    init(frame: CGRect, image: CIImage?) {
        super.init(frame: frame, context: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
        self.image = image
        //Set the current context to the EAGLContext created in the super.init call
        EAGLContext.setCurrentContext(self.context)
        //create a CIContext from the EAGLContext
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //for usage in Storyboards
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
        EAGLContext.setCurrentContext(self.context)
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //set the current image to image
    func setRenderImage(image: CIImage) {
        self.image = image
        //tell the processor that the view needs to be redrawn using drawRect()
        self.setNeedsDisplay()
    }

    //called automatically when the view is drawn
    override func drawRect(rect: CGRect) {
        //unwrap the current CIImage
        if let image = self.image {
            //multiply the frame by the screen's scale (ratio of points : pixels),
            //because the following .drawImage() call uses pixels, not points
            let scale = UIScreen.mainScreen().scale
            let newFrame = CGRectMake(rect.minX, rect.minY, rect.width * scale, rect.height * scale)
            //draw the image
            self.ciContext?.drawImage(
                image,
                inRect: newFrame,
                fromRect: image.extent
            )
        }
    }
}
Then, to use it, simply
let myFrame: CGRect //frame in self.view where the image should be displayed
let myImage: CIImage //CIImage with applied filters
let imageView: CIImageView = CIImageView(frame: myFrame, image: myImage)
self.view.addSubview(imageView)
Resizing the UIImage to the screen size before converting it to a CIImage also helps. It speeds things up a lot in the case of high quality images. Just make sure to use the full-size image when actually saving it.
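A sketch of that downscaling step, using the same Swift 2-era UIGraphics APIs as the class above (the function name and target size are my own; scale to your view's size in points):
func resizedImage(image: UIImage, toSize size: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    //draw the full-size image into the smaller context
    image.drawInRect(CGRect(origin: CGPoint.zero, size: size))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}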
That's it! Then, to update the image in the view:
imageView.setRenderImage(newCIImage)
//note that imageView.image = newCIImage won't work because
//the view won't be redrawn
You can use a GLKView and render, as you said, with context.drawImage():
let glView = GLKView(frame: superview.bounds, context: EAGLContext(API: .OpenGLES2))
let context = CIContext(EAGLContext: glView.context)
After your processing, render the image:
glView.bindDrawable()
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
glView.display()
That is a pretty big image, so that's definitely part of it. I'd recommend looking at GPUImage for doing single-image filters. You can skip over using Core Image altogether.
let inputImage: UIImage = //... some image
let stillImageSource = GPUImagePicture(image: inputImage)
let filter = GPUImageSepiaFilter()
stillImageSource.addTarget(filter)
filter.useNextFrameForImageCapture()
stillImageSource.processImage()
//retrieve the filtered result as a UIImage
let filteredImage = filter.imageFromCurrentFramebuffer()
I would like to use Apple's built-in emoji characters (specifically, several of the smileys, e.g. \ue415) in a UILabel but I would like the emojis to be rendered in grayscale.
I want them to remain characters in the UILabel (either plain text or attributed is fine). I'm not looking for a hybrid image / string solution (which I already have).
Does anyone know how to accomplish this?
I know you said you aren't looking for a "hybrid image solution", but I have been chasing this dragon for a while and the best result I could come up with IS a hybrid. Just in case my solution is somehow more helpful on your journey, I am including it here. Good luck!
import UIKit
import QuartzCore

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // the target label to apply the effect to
        let label = UILabel(frame: view.frame)
        // create label text with emoji
        label.text = "🍑 HELLO"
        label.textAlignment = .center
        // set to red to further show the greyscale change
        label.textColor = .red
        // calls our extension to get an image of the label
        let image = UIImage.imageWithLabel(label: label)
        // create a tonal filter
        let tonalFilter = CIFilter(name: "CIPhotoEffectTonal")
        // get a CIImage for the filter from the label image
        let imageToBlur = CIImage(cgImage: image.cgImage!)
        // set that image as the input for the filter
        tonalFilter?.setValue(imageToBlur, forKey: kCIInputImageKey)
        // get the resultant image from the filter
        let outputImage: CIImage? = tonalFilter?.outputImage
        // create an image view to show the result
        let tonalImageView = UIImageView(frame: view.frame)
        // set the image from the filter into the new view
        tonalImageView.image = UIImage(ciImage: outputImage ?? CIImage())
        // add the view to our hierarchy
        view.addSubview(tonalImageView)
    }
}
extension UIImage {
    class func imageWithLabel(label: UILabel) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(label.bounds.size, false, 0.0)
        label.layer.render(in: UIGraphicsGetCurrentContext()!)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }
}