According to the Apple docs, the filters property of CALayer is not supported on iOS. However, I have used apps that do apply a CIFilter to a UIView, e.g. Splice, Video Editor Videoshow FX for Funimate, and Artisto. That means it must be possible to apply a CIFilter to a UIView.
I have tried the SCRecorder library and attempted to get this working with SCPlayer and SCFilterImageView, but I get a black screen when the video plays after the CIFilter is applied. Please help me apply a CIFilter to a UIView, and also change the filter when a UIButton is tapped.
The technically accurate answer is that a CIFilter requires a CIImage. You can turn a UIView into a UIImage and then convert that into a CIImage, but every Core Image filter that takes an image as input (some only generate a new image) uses a CIImage for input and output.
Please note that the origin of a CIImage is the bottom left, not the top left; in other words, the Y axis is flipped.
If you use Core Image filters dynamically, learn to render into a GLKView; it uses the GPU, whereas a UIImageView renders on the CPU.
If you want to test out a filter, it's best to use an actual device. The simulator will give you very poor performance. I've seen a simple blur take nearly a minute where on a device it will be a fraction of a second!
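A minimal sketch of that GPU-backed rendering follows; the setup here is my own assumption (and GLKView/EAGLContext have since been deprecated in favour of Metal), but the idea is to hand the CIContext an OpenGL ES context and draw the CIImage into the view's drawable:

import GLKit

// Sketch: render a CIImage on the GPU via a GLKView instead of a UIImageView.
let glContext = EAGLContext(api: .openGLES2)!
let glkView = GLKView(frame: view.bounds, context: glContext)   // `view` is a placeholder parent view
let ciContext = CIContext(eaglContext: glContext)

func render(_ image: CIImage) {
    glkView.bindDrawable()
    // drawableWidth/Height are in pixels, so target the full drawable.
    let target = CGRect(x: 0, y: 0, width: glkView.drawableWidth, height: glkView.drawableHeight)
    ciContext.draw(image, in: target, from: image.extent)
    glkView.display()
}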
Let's say you have a UIView that you wish to apply a CIPhotoEffectMono to. The steps to do this would be:
Convert the UIView into a CIImage.
Apply the filter, getting a CIImage as output.
Use a CIContext to create a CGImage and then convert that to a UIImage.
Here's a UIView extension that will convert the view and all of its subviews into a UIImage:
extension UIView {
    /// Renders the view and all of its subviews into a UIImage.
    public func createImage() -> UIImage {
        // Opaque context at scale 1, sized to the view's frame.
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
Converting a UIImage into a CIImage is one line of code:
let ciInput = CIImage(image: myView.createImage())
Here's a function that will apply the filter and return a UIImage:
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    // convert UIImage to CIImage and set as input
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")
    // get output CIImage, render as CGImage first to retain proper UIImage scale
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
    return UIImage(cgImage: cgImage!)
}
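Putting those steps together, wiring the filter to a button tap could look like this (a sketch; myView, filteredImageView and the action name are placeholders, not from the original code):

// Hypothetical wiring: snapshot the view, filter it, and display the result.
@IBAction func filterButtonTapped(_ sender: UIButton) {
    let snapshot = myView.createImage()               // UIView -> UIImage
    let filtered = convertImageToBW(image: snapshot)  // apply CIPhotoEffectMono
    filteredImageView.image = filtered                // show the filtered image
}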
Related
I am trying to convert a UIView containing two UIImageViews into a UIImage. Almost everything works, but after the final conversion some transparent diagonal lines show up in the resulting UIImage. I can't understand why this is happening. If someone can help, thanks.
extension UIView {
    /**
     Convert UIView to UIImage
     */
    func toImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        self.drawHierarchy(in: self.bounds, afterScreenUpdates: false)
        let snapshotImageFromMyView = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return snapshotImageFromMyView!
    }
}
Before and after screenshots of the saved image were attached here ("Before Saving" / "After Saving").
I found the problem: my code was converting the resulting UIImage to WebP at 0.70 quality, and that was adding the lines to the image. Saving as JPEG first and then converting to WebP fixed the problem.
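For reference, rendering the snapshot to JPEG data can be done with jpegData(compressionQuality:); the WebP step itself depends on whichever third-party encoder is used, so it is only hinted at here (containerView is a placeholder):

// Sketch: encode the snapshot as JPEG before handing it to the WebP encoder.
let snapshot = containerView.toImage()
if let jpegData = snapshot.jpegData(compressionQuality: 1.0) {
    // pass jpegData to your WebP conversion step
}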
I was reading an article about Core Image where I saw the following lines:
if let output = filter?.valueForKey(kCIOutputImageKey) as? CIImage {
    let cgimgresult = context.createCGImage(output, fromRect: output.extent)
    let result = UIImage(CGImage: cgimgresult)
    imageView?.image = result
}
As you can see, the CIImage instance is first converted into a CGImage instance and only then into a UIImage one. After doing some research I found out that it had something to do with the scale of the image within the image view's bounds.
I wonder: is getting the right scale for display purposes the only reason we need to do all those conversions, given that there is already an initializer for UIImage that takes a CIImage as an argument?
In the UIImage reference it is written:
An initialized UIImage object. In Objective-C, this method returns nil if the ciImage parameter is nil.
and as @matt wrote here:
UIImage's CIImage is not nil only if the UIImage is backed by a CIImage already (e.g. because it was generated by imageWithCIImage:).
So the direct initializer
UIImage(ciImage: ciImage)
gives you a UIImage whose cgImage property is nil.
That's why we should initialize the UIImage via a CGImage, not directly from the CIImage.
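A small sketch of the difference (someImage is a placeholder UIImage):

let ciImage = CIImage(image: someImage)!

// Wrapper only: no bitmap behind it, so wrapped.cgImage is nil and operations that
// need a CGImage (such as UIImageWriteToSavedPhotosAlbum) can fail silently.
let wrapped = UIImage(ciImage: ciImage)

// Rendered: CIContext produces a real bitmap, so the UIImage behaves normally.
let context = CIContext()
let cgImage = context.createCGImage(ciImage, from: ciImage.extent)!
let rendered = UIImage(cgImage: cgImage)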
I am trying to crop part of an image taken with the iPhone's camera via the cropping(to:) method on a CGImage, but I am running into a weird phenomenon: my UIImage's dimensions are doubled when converted with .cgImage, which obviously prevents me from doing what I want.
The flow is:
Picture is taken with the camera and goes into a full-screen imageContainerView
A "screenshot" of this imageContainerView is made with a UIView extension, effectively resizing the image to the container's dimensions
imageContainerView's .image is set to now be the "screenshot"
let croppedImage = imageContainerView.renderToImage()
imageContainerView.image = croppedImage
print(imageContainerView.image!.size) //yields (320.0, 568.0)
print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) //yields (640, 1136) ??
extension UIView {
    func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        let snapshotImage = renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
        return snapshotImage
    }
}
I have been searching around with no success so far and would gladly appreciate a pointer or a suggestion as to how/why the image size is suddenly doubled.
Thanks in advance.
This is because print(imageContainerView.image!.size) prints the size of the image in points, while print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) prints the size of the underlying bitmap in pixels.
On the iPhone you are using there are 2 pixels for every point, both horizontally and vertically. The UIImage scale property gives you that factor, which in your case is 2.
See this link iPhone Resolutions
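In other words, points times scale equals pixels. If you actually want the snapshot's pixel dimensions to match its point dimensions, the renderer format's scale can be forced to 1 (a sketch; whether you want that depends on your cropping logic):

let image = imageContainerView.image!
print(image.size)                                     // points, e.g. (320.0, 568.0)
print(image.scale)                                    // 2.0 on a 2x device
print(image.cgImage!.width, image.cgImage!.height)    // pixels, e.g. (640, 1136)

// Force a 1:1 point-to-pixel snapshot by pinning the renderer's scale.
let format = UIGraphicsImageRendererFormat.default()
format.scale = 1
let renderer = UIGraphicsImageRenderer(size: imageContainerView.bounds.size, format: format)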
I want to blend an image with a background image, as if it were sprayed onto a wall. How can I obtain a realistic blend? I have tried alpha blending, but it does not give good results. I am quite new to this Core Image stuff, so please help.
Something similar to this:
http://designshack.net/wp-content/uploads/texturetricks-14.jpg
I googled a bit but had no luck; I don't even know exactly what I am looking for, so I'm not sure how to google it.
Here is how to blend two images together.
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
UIImage *image = [UIImage imageNamed:@"top.png"];
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext(newSize);
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
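A Swift sketch of the same alpha blend using UIGraphicsImageRenderer (asset names and the size are placeholders):

let bottomImage = UIImage(named: "bottom.png")!       // placeholder assets
let topImage = UIImage(named: "top.png")!
let newSize = CGSize(width: 320, height: 480)         // placeholder size

let blended = UIGraphicsImageRenderer(size: newSize).image { _ in
    let rect = CGRect(origin: .zero, size: newSize)
    bottomImage.draw(in: rect)                                // use existing opacity as is
    topImage.draw(in: rect, blendMode: .normal, alpha: 0.8)   // apply supplied opacity
}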
This kind of works. I found a smiley face image - white on black - and a brick background.
The first job is to create CIImage versions of the pair. I pass the smiley face through a CIMaskToAlpha filter to make the black transparent:
let brick = CIImage(image: UIImage(named: "brick.jpg")!)!
let smiley = CIImage(image: UIImage(named: "smiley.jpg")!)!
    .imageByApplyingFilter("CIMaskToAlpha", withInputParameters: nil)
Then composite the pair together with CISubtractBlendMode:
let composite = brick
    .imageByApplyingFilter("CISubtractBlendMode",
        withInputParameters: [kCIInputBackgroundImageKey: smiley])
The final result is almost there:
The subtract blend will only really work with white artwork.
Here's another approach: using CISoftLightBlendMode for the blend, bumping up the gamma to brighten the effect and then compositing that over the original brickwork using the smiley as a mask:
let composite = smiley
    .imageByApplyingFilter("CISoftLightBlendMode",
        withInputParameters: [kCIInputBackgroundImageKey: brick])
    .imageByApplyingFilter("CIGammaAdjust",
        withInputParameters: ["inputPower": 0.5])
    .imageByApplyingFilter("CIBlendWithMask",
        withInputParameters: [kCIInputBackgroundImageKey: brick, kCIInputMaskImageKey: smiley])
Which gives:
This approach is nice because you can control the paint whiteness with the gamma power.
Simon
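For reference, the same soft-light pipeline in current Swift syntax might look like this (a sketch; asset names as in the answer above):

let brick = CIImage(image: UIImage(named: "brick.jpg")!)!
let smiley = CIImage(image: UIImage(named: "smiley.jpg")!)!
    .applyingFilter("CIMaskToAlpha")

let composite = smiley
    .applyingFilter("CISoftLightBlendMode",
                    parameters: [kCIInputBackgroundImageKey: brick])
    .applyingFilter("CIGammaAdjust",
                    parameters: ["inputPower": 0.5])
    .applyingFilter("CIBlendWithMask",
                    parameters: [kCIInputBackgroundImageKey: brick,
                                 kCIInputMaskImageKey: smiley])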
I'm trying to save a cropped image to the camera roll.
(I need to do it programmatically, I can't have the user edit it)
This is my (still quite basic) cut and save code:
- (void)cutAndSaveImage:(UIImage *)rawImage
{
    CIImage *workingImage = [[CIImage alloc] initWithImage:rawImage];
    CGRect croppingRect = CGRectMake(0.0f, 0.0f, 3264.0f, 1224.0f);
    CIImage *croppedImage = [workingImage imageByCroppingToRect:croppingRect];
    UIImage *endImage = [UIImage imageWithCIImage:croppedImage scale:1.0f orientation:UIImageOrientationRight];
    self.testImage.image = endImage;
    UIImageWriteToSavedPhotosAlbum(rawImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    UIImageWriteToSavedPhotosAlbum(endImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
The method is called within:
- (void)imagePickerController:(UIImagePickerController*)picker didFinishPickingMediaWithInfo:(NSDictionary*)info
I first create a CIImage using the raw UIImage.
Then I get a cropped CIImage using an instance method of the first one.
After that I create a new UIImage using the cropped CIImage.
At this point, to have some feedback, I set the new cropped UIImage as the backing image of a UIImageView. This works, and I can clearly see the image cropped exactly how I desired.
When I try to save it to the camera roll, however, things stop working.
I can't save the newly created endImage.
As you can see, I added a line to save the original UIImage too, just for comparison. The original one saves normally.
Another confusing thing is that the NSError object passed to the image:didFinishSavingWithError:contextInfo: callback is nil. (the callback is normally executed for both saving attempts)
EDIT:
Just made an experiment:
NSLog(@"rawImage: %@ - rawImage.CGImage: %@", rawImage, rawImage.CGImage);
NSLog(@"endImage: %@ - endImage.CGImage: %@", endImage, endImage.CGImage);
It looks like only the rawImage (coming from the UIImagePickerController) possesses a backing CGImageRef. The other one, created from a CIImage, doesn't.
Can it be that UIImageWriteToSavedPhotosAlbum works using the backing CGImageRef?
Can it be that UIImageWriteToSavedPhotosAlbum works using the backing CGImageRef?
Correct. A CIImage is not an image, and a UIImage backed only by a CIImage is not an image either; it is just a kind of wrapper. Why are you using CIImage at all here? You aren't using CIFilter so this makes no sense. Or if you are using CIFilter, you must render through a CIContext to get the output as a bitmap.
You can crop easily by drawing into a smaller graphics context.
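For example, "drawing into a smaller graphics context" could look something like this (a sketch; the crop rect is whatever region you want, expressed in points):

// Sketch: crop by drawing the image offset into a context the size of the crop rect.
func crop(_ image: UIImage, to rect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: rect.size)
    return renderer.image { _ in
        // Shift the image so the desired region lands at the context's origin.
        image.draw(at: CGPoint(x: -rect.origin.x, y: -rect.origin.y))
    }
}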
As the documentation for UIImage's cgImage property notes:
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.
You can generate a UIImage from a CIImage like this:
let lecturePicture = UIImage(data: NSData(contentsOfURL: NSURL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!)!)!
let controlsFilter = CIFilter(name: "CIColorControls")
controlsFilter.setValue(CIImage(image: lecturePicture), forKey: kCIInputImageKey)
controlsFilter.setValue(1.5, forKey: kCIInputContrastKey)
let displayImage = UIImage(CGImage: CIContext(options:nil).createCGImage(controlsFilter.outputImage, fromRect:controlsFilter.outputImage.extent()))!
displayImage