I have a UIImage. Using Objective-C, I want to be able to invert the white to black and vice versa.
Any help would be appreciated.
Swift
Using CIContext instead of UIImage's CIImage property (see https://stackoverflow.com/a/28386697/218152), and building upon @wtznc's response, here is a self-contained IBDesignable:
@IBDesignable
class InvertImage: UIImageView {

    @IBInspectable var originalImage: UIImage? = nil

    @IBInspectable var invert: Bool = false {
        didSet {
            var inverted = false
            if let originalImage = self.originalImage {
                if invert {
                    let image = CIImage(CGImage: originalImage.CGImage!)
                    if let filter = CIFilter(name: "CIColorInvert") {
                        filter.setDefaults()
                        filter.setValue(image, forKey: kCIInputImageKey)

                        let context = CIContext(options: nil)
                        let imageRef = context.createCGImage(filter.outputImage!, fromRect: image.extent)
                        self.image = UIImage(CGImage: imageRef)
                        inverted = true
                    }
                }
            }
            if !inverted {
                self.image = self.originalImage
            }
        }
    }
}
To use it, set Original Image in Interface Builder instead of Image, since Image will be assigned dynamically.
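It can also be driven from code. A minimal sketch (the invertibleImageView outlet and the "photo" asset name are assumed):

invertibleImageView.originalImage = UIImage(named: "photo")
invertibleImageView.invert = true  // shows the inverted image
invertibleImageView.invert = false // falls back to the original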
Swift3
extension UIImage {
    func invertedImage() -> UIImage? {
        guard let cgImage = self.cgImage else { return nil }
        let ciImage = CoreImage.CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setDefaults()
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        let context = CIContext(options: nil)
        guard let outputImage = filter.outputImage else { return nil }
        guard let outputImageCopy = context.createCGImage(outputImage, from: outputImage.extent) else { return nil }
        return UIImage(cgImage: outputImageCopy, scale: self.scale, orientation: .up)
    }
}
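A minimal usage sketch (the "photo" asset name is assumed):

let original = UIImage(named: "photo")
let inverted = original?.invertedImage() // nil if the CGImage or filter is unavailable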
- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGBA bitmap context
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get unpremultiplied RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            if(r + g + b == 0)
            {
                // the inverted colour is pure black (the original pixel
                // was white), so store a fully transparent pixel
                linePointer[0] = linePointer[1] = linePointer[2] = 0;
                linePointer[3] = 0;
            }
            else
            {
                // multiply by alpha again, divide by 255 to undo the
                // scaling before, and store the new values
                linePointer[0] = r * linePointer[3] / 255;
                linePointer[1] = g * linePointer[3] / 255;
                linePointer[2] = b * linePointer[3] / 255;
            }

            // advance the pointer we're reading pixel data from
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a
    // UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    return returnImage;
}
I added the above method inside a UIImage category.
Firstly, you have to add the Core Image framework to your project.
Project settings -> Targets "project name" -> Build phases -> Link Binary With Libraries -> Add items -> CoreImage.framework
Secondly, import the Core Image header to your implementation file.
#import <CoreImage/CoreImage.h>
Initialize a UIImage object to store the original file.
UIImage *inputImage = [UIImage imageNamed:@"imageNamed"];
Create a CIFilter to define how you want to modify your original UIImage object.
CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:inputImage.CIImage forKey:@"inputImage"];
Create another UIImage object to keep the modified image.
UIImage *outputImage = [[UIImage alloc] initWithCIImage:filter.outputImage];
Voilà! Hope it will help.
For Xamarin C# (iOS-specific; not a cross-platform solution):
public static UIImage InvertImageColors( UIImage original )
{
    return ApplyFilter( original, new CIColorInvert() );
}

public static UIImage ApplyFilter( UIImage original, CIFilter filter )
{
    UIImage result;
    try {
        CIImage coreImage = original.CGImage;
        filter.SetValueForKey( coreImage, CIFilterInputKey.Image );
        CIImage output = filter.OutputImage;
        CIContext context = CIContext.FromOptions( null );
        CGImage cgImage = context.CreateCGImage( output, output.Extent );
        result = UIImage.FromImage( cgImage );
    } catch (Exception ex) {
        // .. Log error here ..
        result = original;
    }
    return result;
}
NOTE #1: Adapted from someone else's answer; I've lost track of where the original is.
NOTE #2: Deliberately done with one small step per line, so you can see the intermediate types, and easily adapt to other situations.
Also see Xamarin docs - CIFilter class
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully.
extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
float2 coord1 = t1.coord();
float2 coord2 = t2.coord();
float4 innerRect = t2.extent();
float minX = innerRect.x + time*innerRect.z;
float minY = innerRect.y + time*innerRect.w;
float cropWidth = (1 - time) * innerRect.w;
float cropHeight = (1 - time) * innerRect.z;
float4 s1 = t1.sample(coord1);
float4 s2 = t2.sample(coord2);
if ( coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
return s1;
} else {
return s2;
}
}
And it crashes on initialization.
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else {
            return nil
        }
        guard let foregroundImage = foregroundImage else {
            return nil
        }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}
It crashes on the try! line with the following error:
Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError
If I replace the kernel code with the following, it works like a charm:
extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
return mix(s1, s2, time);
}
So there are no obvious errors in the code, such as passing an incorrect function name.
For your use case, you actually can use a CIColorKernel. You just have to pass the extent of your render destination to the kernel as well, then you don't need the sampler to access it.
The kernel would look like this:
extern "C" float4 wipeLinear(coreimage::sample_t t1, coreimage::sample_t t2, float4 destinationExtent, float time, coreimage::destination destination) {
float minX = destinationExtent.x + time * destinationExtent.z;
float minY = destinationExtent.y + time * destinationExtent.w;
float cropWidth = (1.0 - time) * destinationExtent.w;
float cropHeight = (1.0 - time) * destinationExtent.z;
float2 destCoord = destination.coord();
if ( destCoord.x > minX && destCoord.x < minX + cropWidth && destCoord.y > minY && destCoord.y <= minY + cropHeight) {
return t1;
} else {
return t2;
}
}
And you call it like this:
let destinationExtent = CIVector(cgRect: backgroundImage.extent)
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, destinationExtent, inputTime])
Note that the last destination parameter in the kernel is passed automatically by Core Image. You don't need to pass it with the arguments.
Yes, you can't use samplers in a CIColorKernel or CIBlendKernel. Those kernels are optimized for the use case of a 1:1 mapping from input pixel to output pixel, which allows Core Image to execute several of these kernels in a single command buffer since they don't require any intermediate buffer writes.
A sampler would allow you to sample the input at arbitrary coordinates, which is not allowed in this case.
You can simply use a CIKernel instead. It's meant to be used when you need to sample the input more freely.
To initialize the kernel, you need to adapt the code like this:
static var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()
When calling the kernel, you now need to also provide a ROI callback, like this:
let roiCallback: CIKernelROICallback = { (index, rect) -> CGRect in
    return rect // you need the same region from the input as the output
}

// or even shorter
let roiCallback: CIKernelROICallback = { $1 }

return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, roiCallback: roiCallback, arguments: [backgroundImage, foregroundImage, inputTime])
Bonus answer:
For this blending effect, you actually don't need any kernel at all. You can achieve all that with simple cropping and compositing:
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: CGFloat = 0.0

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }

        // crop the foreground based on time
        var foregroundCrop = foregroundImage.extent
        foregroundCrop.size.width *= inputTime
        foregroundCrop.size.height *= inputTime

        return foregroundImage.cropped(to: foregroundCrop).composited(over: backgroundImage)
    }
}
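For example, a minimal sketch of driving the wipe (backgroundCIImage and foregroundCIImage are assumed names for two CIImages with matching extents):

let renderer = CIWipeRenderer()
renderer.backgroundImage = backgroundCIImage
renderer.foregroundImage = foregroundCIImage
renderer.inputTime = 0.5 // halfway through the transition
let halfwayFrame = renderer.outputImage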
Basically, I want to add an area with arbitrary text to an image, so the image size gets bigger afterwards. This is what I've come up with:
public partial class ViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        string filename = "TestImage.jpg";
        string bundlePath = NSBundle.MainBundle.BundlePath;
        string sourcePath = Path.Combine(bundlePath, filename);
        string docPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
        string destinationPath = Path.Combine(docPath, filename);
        File.Copy(sourcePath, destinationPath, true);

        string testString = "Lorem ipsum dolor sit amet";
        this.AddTextToImage(testString, destinationPath);

        var imageView = new UIImageView(new CGRect(0, 0, 500, 500));
        imageView.ContentMode = UIViewContentMode.ScaleAspectFit;
        imageView.Image = UIImage.FromFile(destinationPath);
        this.View.AddSubview(imageView);
        this.View.BackgroundColor = UIColor.Red;
    }

    public void AddTextToImage(string texttoadd, string filepath)
    {
        UIImage image = UIImage.FromFile(filepath);

        nfloat fontSize = 16;
        nfloat fWidth = image.Size.Width;
        nfloat fHeight = image.Size.Height;
        nfloat textWidth;
        nfloat textHeight;

        CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB();
        UIFont font = UIFont.FromName("Helvetica", fontSize);

        NSParagraphStyle style = new NSMutableParagraphStyle();
        style.LineBreakMode = UILineBreakMode.WordWrap;
        NSAttributedString attributedString = new NSAttributedString(texttoadd, font: font, foregroundColor: UIColor.Blue, paragraphStyle: style);

        CGRect stringSize = attributedString.GetBoundingRect(new CGSize(fWidth, double.MaxValue), NSStringDrawingOptions.UsesLineFragmentOrigin | NSStringDrawingOptions.UsesFontLeading, null);
        textWidth = (nfloat)Math.Ceiling(stringSize.Width);
        textHeight = (nfloat)Math.Ceiling(stringSize.Height);

        nfloat fullWidth = fWidth;
        nfloat fullHeight = fHeight + textHeight;

        UIImage composition;
        using (CGBitmapContext ctx = new CGBitmapContext(IntPtr.Zero, (nint)fullWidth, (nint)fullHeight, 8, 4 * (nint)fullWidth, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
        {
            CGRect frameRect = new CGRect(0, 0, fullWidth, fullHeight);
            ctx.SetFillColor(UIColor.Yellow.CGColor);
            ctx.FillRect(frameRect);

            CGRect imageRect = new CGRect(0, textHeight, (double)fWidth, (double)fHeight);
            ctx.DrawImage(imageRect, image.CGImage);

            CGPath stringPath = new CGPath();
            stringPath.AddRect(new CGRect(0, 0, textWidth, textHeight));
            CTFramesetter framesetter = new CTFramesetter(attributedString);
            CTFrame frame = framesetter.GetFrame(new NSRange(0, attributedString.Length), stringPath, null);
            frame.Draw(ctx);

            using (var imageRef = ctx.ToImage())
                composition = new UIImage(imageRef);
        }

        NSData data = composition.AsJPEG();
        NSError error;
        data.Save(filepath, NSDataWritingOptions.FileProtectionNone, out error);
    }
}
Currently, I have the following issues:
Text is cropped (e.g. with fontSize = 160;), and multi-line text doesn't seem to work.
Text isn't shown at all (e.g. with fontSize = 16;).
You can provide answers in Objective-C, Swift or C# - I'll try to translate it.
It seems the font was the problem. Using
UIFont font = UIFont.SystemFontOfSize(fontSize);
now does the job without cutting off text. The only question remaining is, why?
I am trying to write code which puts a sticker on the eyes; the code is based on SquareCam.
It detects faces well, but when I try to draw my image on the left eye, it always ends up in the wrong position, even though I used the same approach as for finding the face rect.
Here are the results on my phone.
And the code is here.
for ff in features as! [CIFaceFeature] {
    // find the correct position for the square layer within the previewLayer
    // the feature box originates in the bottom left of the video frame.
    // (Bottom right if mirroring is turned on)
    var faceRect = ff.bounds

    let temp = faceRect.origin.x
    faceRect.origin.x = faceRect.origin.y
    faceRect.origin.y = temp

    // scale coordinates so they fit in the preview box, which may be scaled
    let widthScaleBy = previewBox.size.width / clap.size.height
    let heightScaleBy = previewBox.size.height / clap.size.width

    faceRect.size.width *= widthScaleBy
    faceRect.size.height *= heightScaleBy
    faceRect.origin.x *= widthScaleBy
    faceRect.origin.y *= heightScaleBy

    var eyeRect = CGRect()
    eyeRect.origin.x = ff.leftEyePosition.y
    eyeRect.origin.y = ff.leftEyePosition.x
    eyeRect.origin.x *= widthScaleBy
    eyeRect.origin.y *= heightScaleBy
    eyeRect.size.width = faceRect.size.width * 0.15
    eyeRect.size.height = eyeRect.size.width

    if isMirrored {
        faceRect = faceRect.offsetBy(dx: previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), dy: previewBox.origin.y)
        eyeRect = eyeRect.offsetBy(dx: previewBox.origin.x + previewBox.size.width - eyeRect.size.width - (eyeRect.origin.x * 2), dy: previewBox.origin.y)
    } else {
        faceRect = faceRect.offsetBy(dx: previewBox.origin.x, dy: previewBox.origin.y)
        eyeRect = eyeRect.offsetBy(dx: previewBox.origin.x, dy: previewBox.origin.y)
    }

    print(eyeRect)
    print(faceRect)

    var featureLayer: CALayer? = nil
    var eyeLayer: CALayer? = nil

    // re-use an existing layer if possible
    while featureLayer == nil && (currentSublayer < sublayersCount) {
        let currentLayer = sublayers[currentSublayer]
        currentSublayer += 1
        if currentLayer.name == "FaceLayer" {
            featureLayer = currentLayer
            currentLayer.isHidden = false
            eyeLayer = featureLayer?.sublayers?[0]
            //eyeLayer?.isHidden = false
        }
    }

    // create a new one if necessary
    if featureLayer == nil {
        featureLayer = CALayer()
        featureLayer!.contents = square.cgImage
        featureLayer!.name = "FaceLayer"
        previewLayer?.addSublayer(featureLayer!)

        eyeLayer = CALayer()
        eyeLayer!.contents = eyes.cgImage
        eyeLayer!.name = "EyeLayer"
        featureLayer?.addSublayer(eyeLayer!)
    }

    featureLayer!.frame = faceRect
    eyeLayer!.frame = eyeRect
}
(0, 0) is at the bottom left for the eye positions, so you have to flip the y coordinate (eyePosition.y = image.size.height - eyePosition.y) to get into the same coordinate system as the frames.
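For illustration, a minimal sketch of the flip (imageHeight stands for the height of the coordinate space the feature positions are reported in; the name is assumed):

// CIFaceFeature reports positions with a bottom-left origin,
// while CALayer frames use a top-left origin, so flip y first:
let flippedEye = CGPoint(x: ff.leftEyePosition.x,
                         y: imageHeight - ff.leftEyePosition.y)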
Currently I am using this method to loop through every pixel and insert a value into a 3D array based on its RGB values. I need this array for other parts of my program, but it is extraordinarily slow. On a 50 x 50 picture it is almost instant, but as soon as you get into hundreds by hundreds it takes so long that the app is useless. Does anyone have any ideas on how to speed up my method?
@IBAction func convertImage(sender: AnyObject) {
    if let image = myImageView.image {
        var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let height = Int(image.size.height)
        let width = Int(image.size.width)

        var zArry = [Int](count: 3, repeatedValue: 0)
        var yArry = [[Int]](count: width, repeatedValue: zArry)
        var xArry = [[[Int]]](count: height, repeatedValue: yArry)

        for (var h = 0; h < height; h++) {
            for (var w = 0; w < width; w++) {
                var pixelInfo: Int = ((Int(image.size.width) * Int(h)) + Int(w)) * 4
                var rgb = 0
                xArry[h][w][rgb] = Int(data[pixelInfo])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo + 1])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo + 2])
            }
        }
        println(xArry[20][20][1])
    }
}
Maybe there is a way to convert the UIImage to a different type of image and create an array of pixels. I am open to all suggestions. Thanks!
GOAL: The goal is to use the array to modify the RGB values of all pixels, and create a new image with the modified pixels. I tried simply looping through all of the pixels without storing them, and modifying them into a new array to create an image, but got the same performance issues.
Update:
After countless tries I realized I was running my tests in the debug configuration.
Switching to release, it's now much faster; Swift seems to be many times slower in the debug configuration.
Now my optimized version is only several times faster than your code.
It also seems you get a big slowdown from using image.size.width instead of the local variable width.
Original
I tried to optimize it a bit and came up with this:
@IBAction func convertImage() {
    if let image = UIImage(named: "test") {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let height = Int(image.size.height)
        let width = Int(image.size.width)

        let zArry = [Int](count: 3, repeatedValue: 0)
        let yArry = [[Int]](count: width, repeatedValue: zArry)
        var xArry = [[[Int]]](count: height, repeatedValue: yArry)

        for (index, value) in xArry.enumerate() {
            for (index1, value1) in value.enumerate() {
                for (index2, _) in value1.enumerate() {
                    let pixelInfo: Int = ((width * index) + index1) * 4 + index2
                    // store into the array itself; assigning to the loop copy would be lost
                    xArry[index][index1][index2] = Int(data[pixelInfo])
                }
            }
        }
    }
}
However, in my tests this is barely 15% faster. What you need is orders of magnitude faster.
Another idea is to use the data object directly when you need it, without creating the array, like this:
let image = UIImage(named: "test")!
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let width = Int(image.size.width)
// value for [x][y][z]
let value = Int(data[((width * x) + y) * 4 + z])
You didn't say how you use this array in your app, but I feel that even if you find a way to create it much faster, you would hit the next problem when you try to use it, as that would take a long time too.
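For the stated goal (modify the RGB values of every pixel and build a new image), one flat byte buffer is usually orders of magnitude faster than nested Swift arrays. Below is a minimal sketch in current Swift syntax, not the original code: it draws the image once into an RGBA8 context, rewrites the bytes in place (using an invert as a placeholder transform), and makes a new UIImage. It assumes effectively opaque pixels; premultiplied alpha would need unpremultiplying first, as in the Objective-C answer near the top.

import UIKit

func pixelTransformedImage(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)

    let result: CGImage? = pixels.withUnsafeMutableBytes { buffer -> CGImage? in
        // Draw the image once into an RGBA8 bitmap we own.
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        // One flat pass over the bytes: no nested arrays, no per-pixel allocation.
        let bytes = buffer.bindMemory(to: UInt8.self)
        for i in stride(from: 0, to: bytes.count, by: 4) {
            bytes[i]     = 255 - bytes[i]     // red   (placeholder transform: invert)
            bytes[i + 1] = 255 - bytes[i + 1] // green
            bytes[i + 2] = 255 - bytes[i + 2] // blue
            // bytes[i + 3] is alpha, left untouched
        }
        return context.makeImage()
    }
    return result.map { UIImage(cgImage: $0, scale: image.scale, orientation: image.imageOrientation) }
}

The win comes from touching each byte exactly once, with no triple-nested array bounds checks and no repeated image.size lookups inside the loop.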
I have the following MonoTouch code which can change the saturation, but I am trying to also change the hue.
float hue = 0;
float saturation = 1;

if (colorCtrls == null)
    colorCtrls = new CIColorControls() {
        Image = CIImage.FromCGImage(originalImage.CGImage)
    };
else
    colorCtrls.Image = CIImage.FromCGImage(originalImage.CGImage);

colorCtrls.Saturation = saturation;
var output = colorCtrls.OutputImage;
var context = CIContext.FromOptions(null);
var result = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(result);
Hue is part of a different filter, so you'll need to use CIHueAdjust instead of CIColorControls to control it.
Here's what I ended up doing to add hue:

var hueAdjust = new CIHueAdjust() {
    Image = CIImage.FromCGImage(originalImage.CGImage),
    Angle = hue // Default is 0
};
var output = hueAdjust.OutputImage;
var context = CIContext.FromOptions(null);
var cgimage = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(cgimage);
However, this does not work on Retina devices; the image returned is scaled incorrectly.
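A likely fix, mirroring the Swift 3 invertedImage() extension earlier on this page, is to rebuild the UIImage with the original's scale and orientation instead of from the bare CGImage (sketched here in Swift; Xamarin's UIImage.FromImage has a matching overload taking a scale and orientation):

let fixed = UIImage(cgImage: cgimage, scale: originalImage.scale, orientation: originalImage.imageOrientation)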