OpenCV: image processing, Objective-C/C++ - iOS

My goal is to find a golf ball using the iPhone camera. I did the following steps in Photoshop, and I want to achieve the same steps with OpenCV on an image/live video frame:
start with the original picture, then
1. boost the saturation to get color into the light areas
2. use curves to cut off the edges of the spectrum
3. convert the image to grayscale
4. use curves again to get to black/white
5. and finally - just for the look - apply a color
--Input Image:
--Output Image:
Would you please help me or give me some hints related to image processing with OpenCV in iOS?
Thanks in advance!
Edit
I used the following code and got the output image shown below:
- (UIImage *)applyToneCurveToImage:(UIImage *)image
{
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];
    CIFilter *filter =
        [CIFilter filterWithName:@"CIToneCurve"
                   keysAndValues:
                       kCIInputImageKey, ciImage,
                       @"inputPoint0", [CIVector vectorWithX:0.00 Y:0.3],
                       @"inputPoint1", [CIVector vectorWithX:0.25 Y:0.4],
                       @"inputPoint2", [CIVector vectorWithX:0.50 Y:0.5],
                       @"inputPoint3", [CIVector vectorWithX:0.75 Y:0.6],
                       @"inputPoint4", [CIVector vectorWithX:1.00 Y:0.7],
                       nil];
    //CIFilter* filter2 = [filter copy];
    //step1
    filter = [CIFilter filterWithName:@"CIColorControls"
                        keysAndValues:kCIInputImageKey,
                                      [filter valueForKey:kCIOutputImageKey], nil];
    [filter setValue:[NSNumber numberWithFloat:0]
              forKey:@"inputBrightness"];
    [filter setValue:[NSNumber numberWithFloat:6]
              forKey:@"inputContrast"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:result
                                       fromRect:[result extent]];
    UIImage *filteredImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    ciImage = nil;
    context = nil;
    cgImage = nil;
    result = nil;
    return filteredImage;
}
- (void)didCaptureIplImage:(IplImage *)iplImage
{
    @autoreleasepool
    {
        IplImage *orgimage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 3);
        orgimage = [self CreateIplImageFromUIImage:[self applyToneCurveToImage:[UIImage imageNamed:@"GolfImage.jpeg"]]];
        Mat matRGB = Mat(orgimage);
        // the IplImage is also converted to HSV; hue is used to find a certain color
        IplImage *imgHSV = cvCreateImage(cvGetSize(orgimage), 8, 3); //2
        cvCvtColor(orgimage, imgHSV, CV_BGR2HSV);
        IplImage *imgThreshed = cvCreateImage(cvGetSize(orgimage), 8, 1); //3
        // cvInRangeS(imgHSV, cvScalar(_Hmin, _Smin, _Vmin), cvScalar(_Hmax, _Smax, _Vmax), imgThreshed);
        cvInRangeS(imgHSV, cvScalar(0.00, 0.00, 34.82), cvScalar(180.00, 202.54, 256.00), imgThreshed);
        Originalimage = nil;
        cvReleaseImage(&iplImage);
        cvReleaseImage(&orgimage);
        cvReleaseImage(&imgHSV);
        [self didFinishProcessingImage:imgThreshed];
    }
}
Output Image:

You don't need OpenCV for any of this. You should be able to get this result using Core Image.
See this question: How to change minimum or maximum value by CIFilter in Core image?, where I give a fairly detailed answer on the manipulation of tone curves.
That will cover your steps 2 and 4. For step 1 (saturation), try CIColorControls. For step 3 (convert to grayscale) you could also use CIColorControls, but that would involve dropping the saturation to 0, which is not what you want. Instead you can use CIMaximumComponent or CIMinimumComponent. For step 5, you could use the result of steps 1-4 as a mask with a flat colour. A rough sketch of such a filter chain follows.
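For illustration, here is a minimal Core Image sketch of steps 1-4 chained together. The saturation amount, the curve points, and the uiImage source variable are placeholder assumptions to tune for your own frames, not values from the answer:
// Sketch: steps 1-4 as a Core Image filter chain. All numbers are illustrative.
CIImage *input = [CIImage imageWithCGImage:uiImage.CGImage];
// step 1: boost saturation
CIFilter *saturate = [CIFilter filterWithName:@"CIColorControls"];
[saturate setValue:input forKey:kCIInputImageKey];
[saturate setValue:@2.0 forKey:@"inputSaturation"];   // > 1 boosts saturation
// step 2: tone curve to cut off the ends of the spectrum
CIFilter *curve = [CIFilter filterWithName:@"CIToneCurve"];
[curve setValue:saturate.outputImage forKey:kCIInputImageKey];
[curve setValue:[CIVector vectorWithX:0.00 Y:0.0] forKey:@"inputPoint0"];
[curve setValue:[CIVector vectorWithX:0.25 Y:0.1] forKey:@"inputPoint1"];
[curve setValue:[CIVector vectorWithX:0.50 Y:0.5] forKey:@"inputPoint2"];
[curve setValue:[CIVector vectorWithX:0.75 Y:0.9] forKey:@"inputPoint3"];
[curve setValue:[CIVector vectorWithX:1.00 Y:1.0] forKey:@"inputPoint4"];
// step 3: grayscale via the brightest channel (keeps the bright ball bright)
CIFilter *mono = [CIFilter filterWithName:@"CIMaximumComponent"];
[mono setValue:curve.outputImage forKey:kCIInputImageKey];
// step 4: a much steeper curve to push the result towards black/white
CIFilter *bw = [CIFilter filterWithName:@"CIToneCurve"];
[bw setValue:mono.outputImage forKey:kCIInputImageKey];
[bw setValue:[CIVector vectorWithX:0.00 Y:0.0] forKey:@"inputPoint0"];
[bw setValue:[CIVector vectorWithX:0.45 Y:0.0] forKey:@"inputPoint1"];
[bw setValue:[CIVector vectorWithX:0.50 Y:0.5] forKey:@"inputPoint2"];
[bw setValue:[CIVector vectorWithX:0.55 Y:1.0] forKey:@"inputPoint3"];
[bw setValue:[CIVector vectorWithX:1.00 Y:1.0] forKey:@"inputPoint4"];
CIImage *maskImage = bw.outputImage;
For step 5, maskImage could then drive something like CIBlendWithMask (via inputMaskImage) over a flat-colour image, but that is just one option.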
OpenCV will allow you to pick out the ball from the rest of the image (I guess this is what you want to achieve; you didn't mention it in your question). You can refer to this question:
How does the HoughCircle function works? I can't get it to work properly, which I answered with an accompanying demo project: https://github.com/foundry/OpenCVCircles. You can pass the result of your Core Image processing into OpenCV by converting from UIImage to OpenCV's Mat format (that linked project shows you how to do this).
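For a rough idea of the OpenCV side: this is a generic HoughCircles sketch, not code from the linked project. The mask variable is assumed to already hold your thresholded result as a single-channel cv::Mat, and every parameter needs tuning for your frames:
// Sketch: detect the ball in a single-channel mask with HoughCircles (OpenCV 2.x C++ API).
// 'mask' is assumed to be the CV_8UC1 result of the Core Image thresholding.
cv::Mat blurred;
cv::GaussianBlur(mask, blurred, cv::Size(9, 9), 2, 2);   // smooth to reduce false circles
std::vector<cv::Vec3f> circles;
cv::HoughCircles(blurred, circles, CV_HOUGH_GRADIENT,
                 1,                  // dp: accumulator resolution
                 blurred.rows / 8,   // minimum distance between circle centres
                 100, 30,            // Canny high threshold, accumulator threshold
                 5, 100);            // min/max radius in pixels - tune for your frames
for (size_t i = 0; i < circles.size(); i++)
{
    cv::Point centre(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // centre and radius give a candidate ball position in image coordinates
}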

Related

White pixels around iOS Image using CIFilter

I add a picture frame (an image with a transparent background) around an existing UIImage and save it all as one image. On the simulator everything looks like it runs great; however, on the device it adds some white pixels around some areas of the frame's image. Here is my code:
- (void)applyFilter {
    NSLog(@"Running");
    UIImage *borderImage = [UIImage imageNamed:@"IMG_8055.PNG"];
    NSData *dataFromImage = UIImageJPEGRepresentation(self.imgView.image, 1);
    CIImage *beginImage = [CIImage imageWithData:dataFromImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *border = [CIImage imageWithData:UIImagePNGRepresentation(borderImage)];
    border = [border imageByApplyingTransform:CGAffineTransformMakeScale(beginImage.extent.size.width / border.extent.size.width, beginImage.extent.size.height / border.extent.size.height)];
    CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"]; //@"CISoftLightBlendMode"];
    [filter setDefaults];
    [filter setValue:border forKey:@"inputImage"];
    [filter setValue:beginImage forKey:@"inputBackgroundImage"];
    CIImage *outputImage = [filter valueForKey:@"outputImage"];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);   // release the CGImage to avoid leaking it
    self.imgView.image = newImg;
}
Here is the resulting image:
The frame image used in the picture looks like this:
Here is a screenshot of the frame image in Photoshop, showing that those pixels are not present in the PNG.
The issue is that if you look at your image, the pixels immediately adjacent to the musical notes are apparently not transparent. And if you notice, those white pixels that appear in the final image aren't just the occasional pixel; they appear in square blocks.
This sort of squared-off pixel noise is a telltale sign of JPEG artifacts. It's hard to say what's causing this because the image you added to this question is a JPEG (which doesn't support transparency). I assume you must have a PNG version of this backdrop? You might have to share that with us to confirm this diagnosis.
But the bottom line is that you need to carefully examine the original image and the transparency of those pixels that appear as white noise. Make sure that as you create/manipulate these images you avoid JPEG file formats, because JPEG loses transparency information and introduces artifacts. PNG files are often safer. A sketch of how to keep the pipeline lossless follows.
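To make the "avoid JPEG" point concrete: the code above re-encodes the background through UIImageJPEGRepresentation before filtering. A lossless version of the same setup (a sketch, assuming the same view and PNG asset as the question) would build the CIImages straight from the in-memory images:
// Sketch: build the CIImages without a lossy JPEG round-trip.
CIImage *beginImage = [CIImage imageWithCGImage:self.imgView.image.CGImage];   // no re-encoding
CIImage *border = [CIImage imageWithCGImage:[UIImage imageNamed:@"IMG_8055.PNG"].CGImage];
border = [border imageByApplyingTransform:
             CGAffineTransformMakeScale(beginImage.extent.size.width / border.extent.size.width,
                                        beginImage.extent.size.height / border.extent.size.height)];
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:border forKey:kCIInputImageKey];
[filter setValue:beginImage forKey:kCIInputBackgroundImageKey];
That keeps the PNG's alpha channel intact end to end; whether it removes all of the fringing still depends on the transparency of the source PNG itself, as discussed above.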

CIImage(IOS): Adding 3x3 convolution after a monochrome filter somehow restores color

I'm converting a CIImage to monochrome, clipping it with CICrop,
and running a Sobel kernel to detect edges; the #if section at the bottom is
used to display the result.
CIImage *ci = [[CIImage alloc] initWithCGImage:uiImage.CGImage];
CIImage *gray = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:
    @"inputImage", ci, @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
    nil].outputImage;
CGRect rect = [ci extent];
rect.origin = CGPointZero;
CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width * 0.2, rect.size.height);
CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width * 0.2 W:rect.size.height];
CIImage *left = [gray imageByCroppingToRect:cropRectLeft];
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:left forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];
// The Sobel convolution will produce an image that is 0.5,0.5,0.5,0.5 wherever the image is flat.
// On edges the image will contain values that deviate from that based on the strength and
// direction of the edge.
const double g = 1.;
const CGFloat weights[] = { 1*g, 0, -1*g,
                            2*g, 0, -2*g,
                            1*g, 0, -1*g};
left = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
    @"inputImage", cropFilter.outputImage,
    @"inputWeights", [CIVector vectorWithValues:weights count:9],
    @"inputBias", @0.5,
    nil].outputImage;
#define VISUALHELP 1
#if VISUALHELP
CGImageRef imageRefLeft = [gcicontext createCGImage:left fromRect:cropRectLeft];
CGContextDrawImage(context, cropRectLeft, imageRefLeft);
CGImageRelease(imageRefLeft);
#endif
Now, whenever the 3x3 convolution is not part of the CIImage pipeline,
the portion of the image I run edge detection on shows up gray,
but whenever CIConvolution3X3 is part of the processing pipeline
the colors magically come back. This happens regardless of
whether I use CIColorMonochrome or CIPhotoEffectMono first to remove the color.
Any ideas on how to keep the color out all the way to the bottom of the pipeline?
Thanks.
UPD: not surprisingly, running a crude custom monochrome kernel such as this one
kernel vec4 gray(sampler image)
{
vec4 s = sample(image, samplerCoord(image));
float r = (s.r * .299 + s.g * .587 + s.b * 0.114) * s.a;
s = vec4(r, r, r, 1);
return s;
}
instead of using the standard mono filters from Apple results in exactly the same
issue, with the color coming back when the 3x3 convolution is part of my CI pipeline.
The issue is that Core Image convolution operations (e.g. CIConvolution3X3, CIConvolution5X5 and CIGaussianBlur) operate on all four channels of the input image. This means that, in your code example, the resulting alpha channel will be 0.5 where you probably want it to be 1.0. Try adding a simple kernel after the convolution to set the alpha back to 1; one possible version is sketched below.
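One way to do that without writing a custom kernel (a sketch using the built-in CIColorMatrix filter; the answer itself only says "a simple kernel") is to zero out the alpha vector and add 1 back through the bias vector:
// Sketch: pin the alpha channel back to 1.0 after the convolution.
// 'left' is the convolution output from the question's code.
CIFilter *fixAlpha = [CIFilter filterWithName:@"CIColorMatrix"];
[fixAlpha setValue:left forKey:kCIInputImageKey];
[fixAlpha setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputAVector"];    // drop the incoming alpha
[fixAlpha setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputBiasVector"]; // ...and add 1 back
left = fixAlpha.outputImage;
The RGB vectors keep their identity defaults, so only the alpha channel is affected.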
To follow up: I gave up on Core Image for this task. It seems that using two instances of CIFilter or CIKernel causes a conflict. Something in the Core Image internals appears to manipulate GLES state incorrectly, so reverse-engineering what went wrong ends up costlier than using something other than Core Image (custom CI filters only work on iOS 8+ anyway).
GPUImage seems less buggy and easier to service/debug (no affiliation on my part).

UIImagePNGRepresentation returns nil after CIFilter

Running into a problem getting a PNG representation for a UIImage after having rotated it with CIAffineTransform. First, I have a category on UIImage that rotates an image 90 degrees clockwise. It seems to work correctly when I display the rotated image in a UIImageView.
-(UIImage *)cwRotatedRepresentation
{
    // Not very precise, stop yelling at me.
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-(6.28 / 4.0));
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    return [[UIImage alloc] initWithCIImage:result];
}
However, when I try to actually get a PNG for the newly rotated image, UIImagePNGRepresentation returns nil.
-(NSData *)getPNG
{
    UIImage *myImg = [UIImage imageNamed:@"canada"];
    myImg = [myImg cwRotatedRepresentation];
    NSData *d = UIImagePNGRepresentation(myImg);
    // d == nil :(
    return d;
}
Is Core Image overwriting the PNG headers or something? Is there a way around this behavior, or a better means of achieving the desired result of a PNG representation of a UIImage rotated 90 degrees clockwise?
Not yelling, but -M_PI_2 will give you the constant you want with maximum precision :)
The only other thing that I see is that you probably want to be using [result extent] instead of [inputImage extent], unless your image is known to be square.
I'm not sure how that would cause UIImagePNGRepresentation to fail, though. One other thought: you create a CGImage and then use the CIImage in the UIImage; perhaps using initWithCGImage would give better results. A sketch combining these suggestions is below.
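Putting those suggestions together, a possible rewrite of the category method (a sketch, not tested against the original project) renders the rotated CIImage to a CGImage and wraps that, so UIImagePNGRepresentation has real bitmap data to encode:
// Sketch: back the returned UIImage with an actual CGImage bitmap.
- (UIImage *)cwRotatedRepresentation
{
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-M_PI_2);   // same angle as the original -(6.28 / 4.0)
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)]
              forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];   // use the result's extent
    UIImage *rotated = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return rotated;
}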

I am using CIFilter to get a blur image,but why is the output image always larger than input image?

Codes are as below:
CIImage *imageToBlur = [CIImage imageWithCGImage:self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, imageToBlur, @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage:outputImage];
For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).
Any advice is appreciated.
This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So, when you apply the blur filter to it, Core Image calculates that to show every last pixel of your blur you would need a larger canvas. The estimated size needed to draw the entire image is called the "extent". In essence, every pixel is getting "fatter", which means that the final extent will be bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine. One common approach is sketched below.
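For illustration (this is not from the original answer), one option is to crop the blurred output back to the input's extent and render it through a CIContext, which gives you a UIImage backed by an actual bitmap of the original size:
// Sketch: keep only the part of the blurred "recipe" that matches the input's size.
CIImage *inputImage = [CIImage imageWithCGImage:self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"
                                  keysAndValues:kCIInputImageKey, inputImage,
                                                @"inputRadius", @10.0, nil];
CIImage *blurred = [blurFilter.outputImage imageByCroppingToRect:inputImage.extent];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:blurred fromRect:blurred.extent];
UIImage *resultImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
The trade-off is that the blur fades toward the edges of the cropped region, since the "fatter" pixels outside the original canvas are discarded.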

Multiply image and color

I have an image with a transparent background, for example this image.
I need to create many images with different colors, and I want to use this one image and multiply it with a color to create the other images, for example this new image.
Could you please help me with some lines of code. Thanks.
This might help:
UIImage *beginUIImage = [UIImage imageNamed:@"myImage.png"];
CIImage *beginImage = [CIImage imageWithCGImage:beginUIImage.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                             keysAndValues:kCIInputImageKey, beginImage,
                                           @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
CIImage *outputImage = [filter outputImage];
UIImage *endImage = [[UIImage alloc] initWithCIImage:outputImage];
The beginUIImage is the initial transparent image. I change it into a CIImage to make applying filters easier, then apply a sepia filter and get the filtered result as another CIImage called outputImage. Lastly, I change outputImage into a UIImage to be used later, perhaps put into a UIImageView or saved into the photo library. You can change the type of filter to change the output image's colors. If you want a literal multiply with a flat color rather than a sepia tone, see the sketch below.
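A sketch of a literal multiply by a flat colour, using CIConstantColorGenerator and CIMultiplyCompositing instead of the sepia filter above; the tint colour here is an arbitrary example:
// Sketch: multiply the source image by a flat tint colour.
CIImage *source = [CIImage imageWithCGImage:[UIImage imageNamed:@"myImage.png"].CGImage];
CIFilter *colorGen = [CIFilter filterWithName:@"CIConstantColorGenerator"];
[colorGen setValue:[CIColor colorWithRed:1.0 green:0.2 blue:0.2 alpha:1.0]   // pick your own tint
            forKey:kCIInputColorKey];
CIImage *flatColor = [colorGen.outputImage imageByCroppingToRect:source.extent];  // the generator is infinite, so crop it
CIFilter *multiply = [CIFilter filterWithName:@"CIMultiplyCompositing"];
[multiply setValue:flatColor forKey:kCIInputImageKey];
[multiply setValue:source forKey:kCIInputBackgroundImageKey];
UIImage *tinted = [UIImage imageWithCIImage:multiply.outputImage];
Depending on how you want the transparent areas treated, CIMultiplyBlendMode is a similar alternative worth trying.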
