How can I use CIFilter to give a bump effect using Swift? - ios

How can I do this in Swift?
I am trying to apply this effect at a particular position on an image.
Please give me simple code to apply this effect.
Thanks in advance.
I get an error while adding the bump distortion effect.

Try this code:
public typealias Filter = CIImage -> CIImage
public typealias CIParameters = Dictionary<String, AnyObject>

public func bumpDistortion(center: CGPoint, radius: Float, scale: Float) -> Filter {
    return { image in
        let parameters: CIParameters = [
            kCIInputRadiusKey: radius,
            kCIInputCenterKey: CIVector(CGPoint: center),
            kCIInputScaleKey: scale,
            kCIInputImageKey: image]
        let filter = CIFilter(name: "CIBumpDistortion", withInputParameters: parameters)
        return filter!.outputImage!
    }
}

Let's assume you use the code from TastyCat:
if let image = UIImage(named: "YOUR_IMAGE_NAME") {
    let bumpEffect = self.bumpDistortion(YOUR_POINT, radius: YOUR_RADIUS, scale: YOUR_SCALE)
    guard let yourCIImage = CIImage(image: image) else {
        // handle error
        return
    }
    let result = bumpEffect(yourCIImage)
    let theImageWithEffect = UIImage(CIImage: result)
}
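If the filtered image does not show up when assigned to a UIImageView, note that UIImage(CIImage:) only wraps the CIImage without actually rendering it. A minimal sketch (in current Swift syntax, assuming result is the filtered CIImage from above) that renders it through a CIContext into a bitmap-backed image:
let ciContext = CIContext()
if let cgResult = ciContext.createCGImage(result, from: result.extent) {
    let renderedImageWithEffect = UIImage(cgImage: cgResult)
    // assign renderedImageWithEffect to your image view
}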

Related

Render an MTIImage

Please don't judge me, I'm just learning Swift.
Recently I installed MetalPetal framework and I followed the instructions:
https://github.com/MetalPetal/MetalPetal#example-code
But I get an error because of MTIContext. Maybe I have to declare something more for MetalPetal?
My Code:
import UIKit
import MetalPetal
import CoreGraphics

class ViewController: UIViewController {

    @IBOutlet weak var image1: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        weak var image: UIImage?
        image = image1.image
        var ciImage = CIImage(image: image!)
        var cgImage1 = convertCIImageToCGImage(inputImage: ciImage!)
        let imageFromCGImage = MTIImage(cgImage: cgImage1!)
        let inputImage = imageFromCGImage

        let filter = MTISaturationFilter()
        filter.saturation = 1
        filter.inputImage = inputImage
        let outputImage = filter.outputImage

        let context = MTIContext()

        do {
            try context.render(outputImage, to: pixelBuffer)
            var image3: CIImage? = try context.makeCIImage(from: outputImage!)
            //context.makeCIImage(from: image)
            //context.makeCGImage(from: image)
        } catch {
            print(error)
        }
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
            return cgImage
        }
        return nil
    }
}
@YuAo
Input Image
A UIImage is backed either by an underlying Quartz image (which can be retrieved with cgImage) or by an underlying Core Image (which can be retrieved from the UIImage with ciImage).
MTIImage offers constructors for both types.
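A minimal sketch of both paths (assuming the MTIImage(cgImage:) and MTIImage(ciImage:) initializers used in this thread are available in your MetalPetal version; the asset name is hypothetical):
let uiImage = UIImage(named: "example.jpg")!

// CGImage-backed UIImage (the usual case for images loaded from files/assets)
if let cgImage = uiImage.cgImage {
    let inputFromCGImage = MTIImage(cgImage: cgImage)
    // feed inputFromCGImage to a filter...
}

// CIImage-backed UIImage (e.g. one created with UIImage(ciImage:))
if let ciImage = uiImage.ciImage {
    let inputFromCIImage = MTIImage(ciImage: ciImage)
    // feed inputFromCIImage to a filter...
}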
MTIContext
An MTIContext must be initialized with a device, which can be retrieved by calling MTLCreateSystemDefaultDevice().
Rendering
Rendering to a pixel buffer is not needed. We can get the result by calling makeCGImage.
Test
I've taken your source code from above and slightly adapted it to the points mentioned.
I also added a second UIImageView to see the result of the filtering, and I changed the saturation to 0 to verify that the filter works.
If the GPU or shaders are involved, it makes sense to test on a real device rather than on the simulator.
The result looks like this:
In the upper area you see the original jpg, in the lower area the filter is applied.
Swift
The simplified Swift code that produces this result looks like this:
override func viewDidLoad() {
    super.viewDidLoad()

    guard let image = UIImage(named: "regensburg.jpg") else { return }
    guard let cgImage = image.cgImage else { return }
    imageView1.image = image

    let filter = MTISaturationFilter()
    filter.saturation = 0
    filter.inputImage = MTIImage(cgImage: cgImage)

    if let device = MTLCreateSystemDefaultDevice(),
        let outputImage = filter.outputImage {
        do {
            let context = try MTIContext(device: device)
            let filteredImage = try context.makeCGImage(from: outputImage)
            imageView2.image = UIImage(cgImage: filteredImage)
        } catch {
            print(error)
        }
    }
}
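One hedged aside on the design: creating an MTIContext sets up a Metal device and its caches, so it is usually worth keeping a single instance around rather than creating one per render. A minimal sketch of a shared, lazily created context:
final class RenderContext {
    // Created once and reused; nil on devices without Metal support.
    static let shared: MTIContext? = {
        guard let device = MTLCreateSystemDefaultDevice() else { return nil }
        return try? MTIContext(device: device)
    }()
}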

How to remove the border/drop shadow from a UIImageView?

I've been generating QR Codes using the CIQRCodeGenerator CIFilter and it works very well:
But when I resize the UIImageView and generate again
@IBAction func sizeSliderValueChanged(_ sender: UISlider) {
    qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sender.value), y: CGFloat(sender.value))
}
I get a weird Border/DropShadow around the image sometimes:
How can I prevent it from appearing at all times or remove it altogether?
I have no idea what it is exactly: a border, a drop shadow, or a mask, as I'm new to Swift/iOS.
Thanks in advance!
PS: I didn't post any of the QR-code generating code as it's pretty boilerplate and can be found in many tutorials out there, but let me know if you need it.
EDIT:
Code to generate the QR code image:
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
        return nil
    }
    filter.setValue(data, forKey: "inputMessage")
    guard let qrEncodedImage = filter.outputImage else {
        return nil
    }
    let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
    let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
    let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
    if let outputImage = filter.outputImage?.applying(transform) {
        return UIImage(ciImage: outputImage)
    }
    return nil
}
Code for the button press:
@IBAction func generateCodeButtonPressed(_ sender: CustomButton) {
    if codeTextField.text == "" {
        return
    }
    let newEncodedMessage = codeTextField.text!
    let encodedImage: UIImage = generateQRCode(from: newEncodedMessage)!
    qrImageView.image = encodedImage
    qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sizeSlider.value), y: CGFloat(sizeSlider.value))
    qrImageView.layer.minificationFilter = kCAFilterNearest
    qrImageView.layer.magnificationFilter = kCAFilterNearest
}
It’s a little hard to be sure without the code you’re using to generate the image for the image view, but that looks like a resizing artifact—the CIImage may be black or transparent outside the edges of the QR code, and when the image view size doesn’t match the image’s intended size, the edges get fuzzy and either the image-outside-its-boundaries or the image view’s background color start bleeding in. Might be able to fix it by setting the image view layer’s minification/magnification filters to “nearest neighbor”, like so:
imageView.layer.minificationFilter = kCAFilterNearest
imageView.layer.magnificationFilter = kCAFilterNearest
Update from seeing the code you added—you’re currently resizing the image twice, first with the call to applying(transform) and then by setting a transform on the image view itself. I suspect the first resize is adding the blurriness, which the minification / magnification filter I suggested earlier then can’t fix. Try shortening generateQRCode to this:
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
        return nil
    }
    filter.setValue(data, forKey: "inputMessage")
    if let qrEncodedImage = filter.outputImage {
        // outputImage is a CIImage, so wrap it with init(ciImage:)
        return UIImage(ciImage: qrEncodedImage)
    }
    return nil
}
I think the problem here is that you try to resize it to be non-square (as your scaleX isn't always the same as scaleY), while the QR code is always square, so both sides should have the same scale factor to get a non-blurred image.
Something like:
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let scale = max(scaleX, scaleY)
let transform = CGAffineTransform(scaleX: scale, y: scale)
will make sure you get a non-bordered, non-blurred, square UIImage.
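A hedged sketch of how that uniform scale could slot into the asker's generateQRCode(from:) above (qrImageView and the Swift 3 applying(_:) naming are taken from the question):
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let qrEncodedImage = filter.outputImage else { return nil }

    let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
    let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
    let scale = max(scaleX, scaleY)   // one factor for both axes keeps the code square
    let scaled = qrEncodedImage.applying(CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}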
I guess the issue is with the image (PNG) file, not with your UIImageView. Try another image and I hope it will work!

How can I convert a UIImage to grayscale in Swift using CIFilter?

I am building a scanner component for an iOS app. So far I have the result image cropped and in the correct perspective.
Now I need to turn the color image into a black-and-white "scanned" document.
I tried to use "CIPhotoEffectNoir", but it is more grayscale than totally black and white. I want a full-contrast image with 100% black and 100% white.
How can I achieve that?
Thanks
You can use CIColorControls and set the contrast key kCIInputContrastKey to increase the black/white contrast, as follows:
Xcode 9 • Swift 4
extension String {
    static let colorControls = "CIColorControls"
}

extension UIImage {
    var coreImage: CIImage? { return CIImage(image: self) }
}

extension CIImage {
    var uiImage: UIImage? { return UIImage(ciImage: self) }

    func applying(contrast value: NSNumber) -> CIImage? {
        return applyingFilter(.colorControls, parameters: [kCIInputContrastKey: value])
    }

    func renderedImage() -> UIImage? {
        guard let image = uiImage else { return nil }
        return UIGraphicsImageRenderer(size: image.size,
                                       format: image.imageRendererFormat).image { _ in
            image.draw(in: CGRect(origin: .zero, size: image.size))
        }
    }
}
let url = URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!
do {
    if let coreImage = UIImage(data: try Data(contentsOf: url))?.coreImage,
        let increasedContrast = coreImage.applying(contrast: 1.5) {
        imageView.image = increasedContrast.uiImage
        // if you need to convert your image to data (JPEG/PNG) you would need to
        // render the CIImage using the renderedImage method on CIImage
    }
} catch {
    print(error)
}
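As that comment hints, a CIImage-backed UIImage cannot be encoded directly; a minimal sketch (matching the Swift 4 era of this answer, and assuming increasedContrast from the snippet above) would be:
// Render into a bitmap-backed UIImage first, then encode it.
if let rendered = increasedContrast.renderedImage(),
    let jpegData = UIImageJPEGRepresentation(rendered, 0.9) {
    // save or upload jpegData here
}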
To convert from color to grayscale you can set the saturation key kCIInputSaturationKey to zero:
extension CIImage {
    func applying(saturation value: NSNumber) -> CIImage? {
        return applyingFilter(.colorControls, parameters: [kCIInputSaturationKey: value])
    }
    var grayscale: CIImage? { return applying(saturation: 0) }
}
let url = URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!
do {
    if let coreImage = UIImage(data: try Data(contentsOf: url))?.coreImage,
        let grayscale = coreImage.grayscale {
        // use grayscale image here
        imageView.image = grayscale.uiImage
    }
} catch {
    print(error)
}
Desaturating will convert your image to grayscale.
Increasing the contrast will push those grays out to the extremes, i.e. black and white.
You can use CIColorControls:
let ciImage = CIImage(image: image)!
let blackAndWhiteImage = ciImage.applyingFilter("CIColorControls", withInputParameters: ["inputSaturation": 0, "inputContrast": 5])
Original:
With inputContrast = 1 (default):
With inputContrast = 5:
In Swift 5.1 I have written an extension method for OSX which also converts to and from NSImage. It uses saturation and input contrast to convert the image. I have abstracted a func for black and white.
extension NSImage {
    func blackAndWhite() -> NSImage? {
        return applying(saturation: 0, inputContrast: 5, image: self)
    }

    func applying(saturation value: NSNumber, inputContrast inputContrastValue: NSNumber, image: NSImage) -> NSImage? {
        let ciImage = CIImage(data: image.tiffRepresentation!)!
        let blackAndWhiteImage = ciImage.applyingFilter("CIColorControls", parameters: ["inputSaturation": value, "inputContrast": inputContrastValue])
        let rep: NSCIImageRep = NSCIImageRep(ciImage: blackAndWhiteImage)
        let nsImage: NSImage = NSImage(size: rep.size)
        nsImage.addRepresentation(rep)
        return nsImage
    }
}
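A possible call site (a small sketch; the "photo" asset and the imageView outlet are hypothetical names):
if let originalImage = NSImage(named: "photo"),
    let blackAndWhite = originalImage.blackAndWhite() {
    imageView.image = blackAndWhite   // assumes an NSImageView outlet named imageView
}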

How do you add MKPolylines to MKSnapshotter in Swift 3?

Is there a way to take a screenshot of mapView and include the polyline? I believe I need to draw CGPoint's on the image that the MKSnapShotter returns, but I am unsure on how to do so.
Current code
func takeSnapshot(mapView: MKMapView, withCallback: (UIImage?, NSError?) -> ()) {
    let options = MKMapSnapshotOptions()
    options.region = mapView.region
    options.size = mapView.frame.size
    options.scale = UIScreen.main().scale

    let snapshotter = MKMapSnapshotter(options: options)
    snapshotter.start() { snapshot, error in
        guard snapshot != nil else {
            withCallback(nil, error)
            return
        }
        if let image = snapshot?.image {
            withCallback(image, nil)
            for coordinate in self.area {
                image.draw(at: snapshot!.point(for: coordinate))
            }
        }
    }
}
I had the same problem today. After several hours of research, here is how I solved it.
The following code is in Swift 3.
1. Init your polyline coordinates array
// initialize this array with your polyline coordinates
var yourCoordinates = [CLLocationCoordinate2D]()
yourCoordinates.append( coordinate 1 )
yourCoordinates.append( coordinate 2 )
...
// you can use any data structure you like
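For example (purely hypothetical coordinates, just to make the sketch concrete):
var yourCoordinates: [CLLocationCoordinate2D] = [
    CLLocationCoordinate2D(latitude: 37.3317, longitude: -122.0307),
    CLLocationCoordinate2D(latitude: 37.3320, longitude: -122.0312),
    CLLocationCoordinate2D(latitude: 37.3325, longitude: -122.0319)
]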
2. Take the snapshot as usual, but set the region based on your coordinates:
func takeSnapShot() {
    let mapSnapshotOptions = MKMapSnapshotOptions()

    // Set the region of the map that is rendered (based on the polyline).
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region

    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale

    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)

    // Show buildings and Points of Interest on the snapshot
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true

    let snapShotter = MKMapSnapshotter(options: mapSnapshotOptions)

    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else {
            return
        }
        // Don't just pass snapshot.image, pass snapshot itself!
        self.imageView.image = self.drawLineOnImage(snapshot: snapshot)
    }
}
3. Use snapshot.point() to draw the polyline on the snapshot image:
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image

    // for Retina screen
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)

    // draw original image into the context
    image.draw(at: CGPoint.zero)

    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()

    // set stroking width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)

    // Here is the trick:
    // We use addLine() and move() to draw the line, this should be easy to understand.
    // The difficult part is that they both take CGPoint as parameters, and it would be way too complex for us to calculate by ourselves.
    // Thus we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 0...yourCoordinates.count - 1 {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
        context!.move(to: snapshot.point(for: yourCoordinates[i]))
    }

    // apply the stroke to the context
    context!.strokePath()

    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    // end the graphics context
    UIGraphicsEndImageContext()

    return resultImage!
}
That's it, hope this helps someone.
References
How do I draw on an image in Swift?
MKTileOverlay, MKMapSnapshotter & MKDirections
Creating an MKMapSnapshotter with an MKPolylineRenderer
Render a Map as an Image using MapKit
What is wrong with:
snapshotter.start( completionHandler: { snapshot, error in
    guard snapshot != nil else {
        withCallback(nil, error)
        return
    }
    if let image = snapshot?.image {
        withCallback(image, nil)
        for coordinate in self.area {
            image.draw(at: snapshot!.point(for: coordinate))
        }
    }
})
If you just want a copy of the image the user sees in the MKMapView, remember that it's a UIView subclass, and so you could do this...
public extension UIView {
    public var snapshot: UIImage? {
        get {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image
        }
    }
}

// ...

if let img = self.mapView.snapshot {
    // Do something
}

Generate image thumbnail from existing CGImage with ImageIO?

I am trying to generate a thumbnail from an existing CGImage with the help of the function CGImageSourceCreateThumbnailAtIndex.
In all examples that I found the provided image source is created with the help of the image's url.
Since I don't have any URLs, only the image data, I tried this:
func createThumbnailForImage(image: CGImage, size: Int) -> CGImage? {
    var provider = CGImageGetDataProvider(image)
    if let imageSource = CGImageSourceCreateWithDataProvider(provider, nil) {
        let swiftDict = [
            kCGImageSourceThumbnailMaxPixelSize as String: size,
            kCGImageSourceCreateThumbnailFromImageIfAbsent as String: true
        ]
        let nsDict = swiftDict as NSDictionary
        let cfDict = nsDict as CFDictionary
        return CGImageSourceCreateThumbnailAtIndex(imageSource, 0, cfDict)
    }
    return nil
}
The result I get is always nil.
My guess is that something is wrong with the image source, but I cannot really identify the problem.
Here is an implementation of thumbnail generation for Swift 3:
import Foundation
import UIKit
import ImageIO

class PhotoUtils: NSObject {

    static func thumbnail(url: CFURL) -> UIImage {
        let src = CGImageSourceCreateWithURL(url, nil)
        return thumbnailImage(src: src!)
    }

    static func thumbnail(data imageData: CFData) -> UIImage {
        let src = CGImageSourceCreateWithData(imageData, nil)
        return thumbnailImage(src: src!)
    }

    static private func thumbnailImage(src: CGImageSource) -> UIImage {
        let scale = UIScreen.main.scale
        let w = (UIScreen.main.bounds.width / 3) * scale
        let d: [NSObject: AnyObject] = [
            kCGImageSourceShouldAllowFloat: true as AnyObject,
            kCGImageSourceCreateThumbnailWithTransform: true as AnyObject,
            kCGImageSourceCreateThumbnailFromImageAlways: true as AnyObject,
            kCGImageSourceThumbnailMaxPixelSize: w as AnyObject
        ]
        let imref = CGImageSourceCreateThumbnailAtIndex(src, 0, d as CFDictionary)
        return UIImage(cgImage: imref!, scale: scale, orientation: .up)
    }
}
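The question starts from an existing CGImage rather than a URL or data. A CGImageSource needs encoded image data (PNG/JPEG), not the raw pixel buffer a CGImage's data provider returns, which is likely why the original attempt returned nil. A hedged sketch of bridging the gap by encoding the CGImage first (the function name and parameters are illustrative):
import ImageIO
import MobileCoreServices   // for kUTTypePNG

func thumbnail(for cgImage: CGImage, maxPixelSize: Int) -> CGImage? {
    // Encode the CGImage into PNG data in memory so ImageIO can parse it.
    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(data as CFMutableData, kUTTypePNG, 1, nil) else {
        return nil
    }
    CGImageDestinationAddImage(destination, cgImage, nil)
    guard CGImageDestinationFinalize(destination) else { return nil }

    // Now an image source can be created from the encoded data.
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    return CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
}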

Resources