CGImageCreateWithMaskingColors No Matching Function - iOS

I'm using the QR Code creation project at https://github.com/kuapay/iOS-QR-Code-Generator within my project. I've added it in exactly as the instructions say.
I can compile and run the project on my test devices with no problem whatsoever, but when I try to archive it, I get the following error:
Path/to/project/Barcode.mm:67:33: No matching function for call to 'CGImageCreateWithMaskingColors'
I am pulling my hair out on this one. Here's the code snippet where it's called along with the variable declarations that it's using.
CGImageRef rawImageRef = image.CGImage;
const float colorMasking[6] = {222, 255, 222, 255, 222, 255};
UIGraphicsBeginImageContext(image.size);
CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);

I know the asker found his answer, but in my case "Build Active Architecture Only" is supposed to stay NO for the release build, because we want the non-active architectures too.
The real problem is that Xcode is much more type-strict with 64-bit builds in the new Xcode 5.1 (5B130a). The second parameter of CGImageCreateWithMaskingColors is a const CGFloat[], so changing the array type from float to CGFloat fixed it.
//const float colorMasking[6] = {222, 255, 222, 255, 222, 255};//before
const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};//after
UIGraphicsBeginImageContext(image.size);
CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);

Since the answer is in the comments of the question, I'm answering it myself just to have a marked answer. In the build settings, "Build Active Architecture Only" was set to YES for Debug and NO for Release. I switched the Release setting to YES, and it archived with no problem.
Thanks to Wain for pointing me in the right direction.


How to release CGImageRef if required to return it?

I have a method to resize a CGImageRef and return a CGImageRef. The issue is in the last few lines, where I need to somehow release the image but still return it. Any ideas? Thanks
-(CGImageRef)resizeImage:(CGImageRef *)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageRef imageRef = *anImage;
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    CGImageRelease(ref); //issue here
    return ref;
}
The Cocoa memory management naming policy states that you own an object returned from methods whose names begin with alloc, copy or new.
These rules are also respected by the Clang Static Analyzer.
Note that there are slightly different conventions for Core Foundation: under the Create Rule you own objects returned from functions whose names contain Create or Copy. Details can be found in Apple's Advanced Memory Management Programming Guide.
I modified your method to conform to those naming conventions. I also removed the asterisk when passing in anImage, as CGImageRef is already a pointer. (Or was this on purpose?)
Note that you own the returned CGImage and have to CGImageRelease it later.
-(CGImageRef)newResizedImageWithImage:(CGImageRef)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(anImage);
    if (alphaInfo == kCGImageAlphaNone)
    {
        alphaInfo = kCGImageAlphaNoneSkipLast;
    }
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(anImage), 4 * width, CGImageGetColorSpace(anImage), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), anImage);
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    return image;
}
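For illustration, a hedged sketch of a call site (the surrounding variable names are made up); the "new" prefix tells the caller that it owns the result:
CGImageRef resized = [self newResizedImageWithImage:original.CGImage width:100 height:100];
UIImage *thumbnail = [UIImage imageWithCGImage:resized]; // UIImage retains its own reference
CGImageRelease(resized); // balance the +1 ownership implied by the "new" method name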
You could also operate on the pointer anImage (after removing the asterisk, as @weichsel suggested) and return void.
Still, you should read your code and think about the questions:
Who owns anImage? (clearly not your method, as it does neither retain nor copy it)
What happens if it is released by the owner while you're in your method? (or other things that might happen to it while your code runs)
What happens to it after your method finishes? (aka: did you remember to release it in the calling code)
So, I would strongly encourage you not to mix Core Foundation, which works with functions, pointers and "classic" data structures, and Foundation, which works with objects and messages.
If you want to operate on CF-structures, you should write a C-function which does it. If you want to operate on Foundation-objects, you should write (sub-)classes with methods. If you want to mix both or provide a bridge, you should know exactly what you are doing and write wrapper-classes which expose a Foundation-API and handle all the CF-stuff internally (thus leaving it to you when to release structures).

Variable size of CGContext

I'm currently using a UIGraphicsBeginImageContext(resultingImageSize); to create an image.
But when I call this function, I don't know the exact width of resultingImageSize.
Indeed, I'm doing some video processing that consumes a lot of memory, and I cannot process everything first and draw afterwards: I must draw while the video is being processed.
If I set, for example, UIGraphicsBeginImageContext(CGSizeMake(300, 400));, anything drawn beyond 400 is lost.
So is there a way to give a CGContext a variable size, or to resize a CGContext with very little memory overhead?
I found a solution: create a new, larger context each time it must be resized. Here's the magic function:
void MPResizeContextWithNewSize(CGContextRef *c, CGSize s) {
    size_t bitsPerComponents = CGBitmapContextGetBitsPerComponent(*c);
    size_t numberOfComponents = CGBitmapContextGetBitsPerPixel(*c) / bitsPerComponents;

    // Create a new bitmap context with the requested size but the same pixel format as the old one
    CGContextRef newContext = CGBitmapContextCreate(NULL, s.width, s.height, bitsPerComponents, sizeof(UInt8)*s.width*numberOfComponents,
                                                    CGBitmapContextGetColorSpace(*c), CGBitmapContextGetBitmapInfo(*c));

    // Copying context content: render the old context into the new, larger one
    CGImageRef im = CGBitmapContextCreateImage(*c);
    CGContextDrawImage(newContext, CGRectMake(0, 0, CGBitmapContextGetWidth(*c), CGBitmapContextGetHeight(*c)), im);
    CGImageRelease(im);

    // Release the old context and hand back the new one
    CGContextRelease(*c);
    *c = newContext;
}
I wonder if it could be optimized, for example with memcpy, as suggested here. I tried but it makes my code crash.
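For context, a rough usage sketch of the function above (the initial context setup and the sizes are assumptions, not from the original code):
// Hypothetical setup: start with a small RGBA bitmap context, grow it as frames arrive.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 300, 400, 8, 4 * 300, space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// ... draw the first part of the video into ctx ...
MPResizeContextWithNewSize(&ctx, CGSizeMake(300, 800)); // grow when more room is needed
// ... keep drawing into the (replaced) ctx, then release it when done ...
CGContextRelease(ctx);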

Showing text with CGContextShowTextAtPoint draws strange text

I'm just trying to overlay text on a UIImage (using a CGContext), but it doesn't show the text I specified. In this code I pass "Hello" as the text argument, but it shows "eÉääc". I don't know what's happening; I think the characters are displaced, but I don't know how to solve it.
This is my code:
UIGraphicsBeginImageContextWithOptions(image.size, YES, 0);
CGContextRef c = UIGraphicsGetCurrentContext();
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
CGContextSetTextMatrix(c, CGAffineTransformMake(1.0, 0, 0, -1.0, 0, 0));
CGContextSelectFont(c, "ArialMT", 50, kCGEncodingFontSpecific);
CGContextSetRGBStrokeColor(c, 255, 0, 0, 1);
CGContextSetRGBFillColor(c, 255, 0, 0, 1);
CGContextSetCharacterSpacing(c, 2);
CGContextShowTextAtPoint(c,100,100, "Hello", 5);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I'm using the Spanish Keyboard (I don't know if this matters) and the iPad (where I'm testing the application) is in Catalan.
CGContextShowTextAtPoint() interprets the given text according to the encoding parameter specified in CGContextSelectFont. You have chosen kCGEncodingFontSpecific, which is the font's built-in encoding (whatever that might be).
The only other choice is kCGEncodingMacRoman:
CGContextSelectFont(c, "ArialMT", 50, kCGEncodingMacRoman);
Then your text should display correctly, as long as you use only characters from the ASCII character set.
For non-ASCII characters, you have to convert the string to the MacRoman encoding, see e.g. https://stackoverflow.com/a/13743834/1187415 for an example.
Note that CGContextShowTextAtPoint cannot display general Unicode strings, and even has problems with the Euro (€) character. The drawAtPoint:withFont: method of NSString does not have these limitations.
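For comparison, a rough sketch of the NSString route inside the same image context (it avoids the encoding issue and needs no text-matrix flip, since it draws in UIKit coordinates):
UIGraphicsBeginImageContextWithOptions(image.size, YES, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[[UIColor redColor] set]; // fill colour used by string drawing
[@"Hello €" drawAtPoint:CGPointMake(100, 100) withFont:[UIFont fontWithName:@"ArialMT" size:50]];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();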

OpenCV errors for iOS / detecting Hough Circles

I have been trying for hours to run an Xcode project with OpenCV. I have built the source, imported it into the project and included
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
in the .pch file.
I followed the instructions from http://docs.opencv.org/trunk/doc/tutorials/introduction/ios_install/ios_install.html
Still I am getting many Apple Mach-O linker errors when I compile.
Undefined symbols for architecture i386:
"std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
Please help me, I am really lost.
UPDATE:
All errors are fixed and now I am trying to detect circles.
Mat src, src_gray;
cvtColor( image, src_gray, CV_BGR2GRAY );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, image.rows/8, 200, 100, 0, 0 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
    // circle outline
    circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
I am using the code above, however no circles are being drawn on the image. Is there something obvious that I am doing wrong?
Try the solution in my answer to this question...
How to resolve iOS Link errors with OpenCV
Also on github I have a couple of simple working samples - with recently built openCV framework.
NB - OpenCVSquares is simpler than OpenCVSquaresSL. The latter was adapted for Snow Leopard backwards compatibility - it contains two builds of the openCV framework and 3 targets, so you are better off using the simpler OpenCVSquares if it will run on your system.
To adapt OpenCVSquares to detect circles, I suggest that you start with the Hough Circles C++ sample from the OpenCV distro, and use it to adapt/replace CVSquares.cpp and CVSquares.h with, say, CVCircles.cpp and CVCircles.h.
The principles are exactly the same:
remove the UI code from the C++; the UI is provided on the Objective-C side
transform the main() function into a static member function for the class declared in the header file. In form, it should mirror an Objective-C message to the wrapper (which translates the Objective-C method into a C++ function call).
From the Objective-C side, you pass a UIImage to the wrapper object, which:
converts the UIImage to a cv::Mat image
passes the Mat to a C++ class for processing
converts the result from Mat back to UIImage
returns the processed UIImage to the calling Objective-C object
update
The adapted houghcircles.cpp should look something like this at its most basic (I've replaced the CVSquares class with a CVCircles class):
cv::Mat CVCircles::detectedCirclesInImage (cv::Mat img)
{
    //expects a grayscale image on input
    //returns a colour image on output
    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, CV_GRAY2RGB);
    vector<Vec3f> circles;
    HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, 10,
                 100, 30, 1, 60 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        circle( cimg, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
        circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
    }
    return cimg;
}
Note that the input parameters are reduced to one - the input image - for simplicity. Shortly I will post a sample on github which will include some parameters tied to slider controls in the iOS UI, but you should get this version working first.
As the function signature has changed, you should follow it up the chain...
Alter the houghcircles.h class definition:
static cv::Mat detectedCirclesInImage (const cv::Mat image);
Modify the CVWrapper class to expose a similarly structured method which calls detectedCirclesInImage:
+ (UIImage*) detectedCirclesInImage:(UIImage*) image
{
    UIImage* result = nil;
    cv::Mat matImage = [image CVGrayscaleMat];
    matImage = CVCircles::detectedCirclesInImage (matImage);
    result = [UIImage imageWithCVMat:matImage];
    return result;
}
Note that we are converting the input UIImage to grayscale, as the houghcircles function expects a grayscale image on input. Take care to pull the latest version of my github project; I found an error in the CVGrayscaleMat category which is now fixed. The output image is colour (colour applied to the grayscale input image to pick out the found circles).
If you want your input and output images in colour, you just need to ensure that you make a grayscale conversion of your input image for sending to HoughCircles() - e.g. cvtColor(input_image, gray_image, CV_RGB2GRAY); - and apply your found circles to the colour input image (which becomes your return image).
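As a rough sketch of that colour-in/colour-out variation (the CVMat category method and the two-argument C++ signature are assumptions for illustration, not part of the project above):
+ (UIImage*) detectedCirclesInColourImage:(UIImage*) image
{
    cv::Mat colourMat = [image CVMat];             // assumed colour-conversion category method
    cv::Mat grayMat;
    cv::cvtColor(colourMat, grayMat, CV_RGB2GRAY); // grayscale copy for HoughCircles()
    cv::Mat resultMat = CVCircles::detectedCirclesInImage(grayMat, colourMat); // assumed variant that draws onto colourMat
    return [UIImage imageWithCVMat:resultMat];
}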
Finally in your CVViewController, change your messages to CVWrapper to conform to this new signature:
UIImage* image = [CVWrapper detectedCirclesInImage:self.image];
If you follow all of these details, your project will produce circle-detection results.
update 2
OpenCVCircles now on Github
With sliders to adjust HoughCircles() parameters

Looking for a simple pixel drawing method in ios (iphone, ipad)

I have a simple drawing issue. I have prepared a two-dimensional array that holds an animated wave motion. The array is updated every 1/10th of a second (this can be changed by the user). After the array is updated I want to display it as a two-dimensional image, with each array value as a pixel whose grey value ranges from 0 to 255.
Any pointers on how to do this most efficiently...
Appreciate any help on this...
KAS
If it's just greyscale then the following (coded as I type, so probably worth checking for errors) should work:
CGDataProviderRef dataProvider =
CGDataProviderCreateWithData(NULL, pointerToYourData, width*height, NULL);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGImageRef inputImage = CGImageCreate( width, height,
8, 8, width,
colourSpace,
kCGBitmapByteOrderDefault,
dataProvider,
NULL, NO,
kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colourSpace);
UIImage *image = [UIImage imageWithCGImage:inputImage];
CGImageRelease(inputImage);
someImageView.image = image;
That'd be for a one-shot display, assuming you didn't want to write a custom UIView subclass (which is worth the effort only if performance is a problem, probably).
My understanding from the docs is that the data provider can be created just once for the lifetime of your C buffer. I don't think that's true of the image, but if you created a CGBitmapContext to wrap your buffer rather than a provider and an image, that would safely persist and you could use CGBitmapContextCreateImage to get a CGImageRef to be moving on with. It's probably worth benchmarking both ways around if it's an issue.
EDIT: so the alternative way around would be:
// get a context from your C buffer; this is now something
// CoreGraphics could draw to...
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context =
CGBitmapContextCreate(pointerToYourData,
width, height,
8, width,
colourSpace,
kCGBitmapByteOrderDefault);
CGColorSpaceRelease(colourSpace);
// get an image of the context, which is something
// CoreGraphics can draw from...
CGImageRef image = CGBitmapContextCreateImage(context);
/* wrap in a UIImage, push to a UIImageView, as before, remember
to clean up 'image' */
CoreGraphics copies things about very lazily, so neither of these solutions should be as costly as the multiple steps imply.
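To tie this back to the 1/10-second updates, a rough sketch of the refresh path using the second approach (the timer wiring and property names are assumptions):
// Assumed: 'waveContext' is the CGBitmapContext wrapping the C buffer, created once as above.
- (void)refreshWave:(NSTimer *)timer
{
    // ... update the C buffer that waveContext wraps ...
    CGImageRef frame = CGBitmapContextCreateImage(self.waveContext);
    self.waveImageView.image = [UIImage imageWithCGImage:frame];
    CGImageRelease(frame);
}
// Scheduled elsewhere, e.g. in viewDidLoad:
[NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(refreshWave:) userInfo:nil repeats:YES];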
