I found some code as follows:
UIGraphicsBeginImageContext(CGSizeMake(320, 480));
// This is where we resize captured image
[(UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage] drawInRect:CGRectMake(0, 0, 320, 480)];
// And add the watermark on top of it
[[UIImage imageNamed:@"Watermark.png"] drawAtPoint:CGPointMake(0, 0) blendMode:kCGBlendModeNormal alpha:WATERMARK_ALPHA];
// Save the results directly to the image view property
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but I am not sure whether it's the best way.
Check CGImageCreateWithMask.
Pass the existing image and the watermark image to this function
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // Core Foundation objects are not managed by ARC, so release them explicitly.
    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}
There are different types of watermarking: visible and invisible. Since you didn't explicitly say you want a visible watermark, I will provide a solution for an invisible one. The theory behind this kind is simple: take the bits with the lowest significance and store your watermark there.
In iPhone programming it would be something like this:
CGContextRef context = [self createARGBBitmapContextFromImage:yourView.image.CGImage];
unsigned char *data = CGBitmapContextGetData(context);
size_t width = CGImageGetWidth(yourView.image.CGImage);
size_t height = CGImageGetHeight(yourView.image.CGImage);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        // 4 bytes per pixel (ARGB), so step by 4
        int pos = (y * width + x) * 4;
        int r = data[pos + 1];
        int g = data[pos + 2];
        int b = data[pos + 3];
        // add the watermark to the least-significant bit of each channel;
        // this example writes false, true, false into every pixel.
        // Only 0x00 and 0x01 should be used here (1 bit).
        unsigned char bit1 = 0x00, bit2 = 0x01, bit3 = 0x00;
        unsigned char mask = 0x01;
        r = (r & ~mask) | bit1;
        g = (g & ~mask) | bit2;
        b = (b & ~mask) | bit3;
        data[pos]     = 0xFF; // alpha
        data[pos + 1] = r;
        data[pos + 2] = g;
        data[pos + 3] = b;
    }
}
Decoding works the same way in reverse. With this code you can store width*height*3 bits in your image; for a 640x480 image that's 921,600 bits, or roughly 112.5 KB.
You can store more bits per pixel, but that also loses more detail (then you need to widen the mask beyond 0x01). The alpha channel could be used to store a few bits as well; for simplicity I left that out here.
Probably the most widely used library for this sort of thing is ImageMagick. How to watermark.
I don't think you need a library for this; it is too much for such a simple task. At least the library proposed by OmnipotentEntity is overkill in my opinion.
The approach you are trying is simple and good.
Even if it doesn't work out, you could do it yourself: blending is a very simple algorithm.
From Wikipedia:

αo = αa + αb(1 - αa)
Co = (Ca·αa + Cb·αb·(1 - αa)) / αo

where Co is the result of the operation, Ca is the color of the pixel in element A, Cb is the color of the pixel in element B, and αa and αb are the alpha of the pixels in elements A and B respectively.
I was going to write how to access pixels but Constantin already did it!
I am trying to implement something similar to this using OpenCV:
https://mathematica.stackexchange.com/questions/19546/image-processing-floor-plan-detecting-rooms-borders-area-and-room-names-t
However, I am running into some walls (probably due to my own ignorance in working with OpenCV).
When I try to perform a distance transform on my image, I am not getting the expected result at all.
This is the original image I am working with
This is the image I get after doing some cleanup with opencv
This is the weirdness I get after trying to run a distance transform on the above image. My understanding is that this should look more like a blurry heatmap. If I follow the OpenCV example past this point and try to run a threshold to find the distance peaks, I get nothing but a black image.
This is the code I have cobbled together so far, using a few different OpenCV examples:
cv::Mat outerBox = cv::Mat(matImage.size(), CV_8UC1);
cv::Mat kernel = (cv::Mat_<uchar>(3,3) << 0,1,0,1,1,1,0,1,0);
for (int x = 0; x < 3; x++) {
    cv::GaussianBlur(matImage, matImage, cv::Size(11,11), 0);
    cv::adaptiveThreshold(matImage, outerBox, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 5, 2);
    cv::bitwise_not(outerBox, outerBox);
    cv::dilate(outerBox, outerBox, kernel);
    cv::dilate(outerBox, outerBox, kernel);
    removeBlobs(outerBox, 1);
    erode(outerBox, outerBox, kernel);
}

cv::Mat dist;
cv::bitwise_not(outerBox, outerBox);
distanceTransform(outerBox, dist, cv::DIST_L2, 5);

// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
normalize(dist, dist, 0, 1., cv::NORM_MINMAX);

// Using a threshold at this point like the OpenCV example shows to find peaks
// just returns a black image right now:
//threshold(dist, dist, .4, 1., CV_THRESH_BINARY);
//cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8UC1);
//dilate(dist, dist, kernel1);

self.mainImage.image = [UIImage fromCVMat:outerBox];
void removeBlobs(cv::Mat &outerBox, int iterations) {
    int max = -1;
    cv::Point maxPt;
    for (int iteration = 0; iteration < iterations; iteration++) {
        for (int y = 0; y < outerBox.size().height; y++) {
            uchar *row = outerBox.ptr(y);
            for (int x = 0; x < outerBox.size().width; x++) {
                if (row[x] >= 128) {
                    int area = floodFill(outerBox, cv::Point(x, y), CV_RGB(0, 0, 64));
                    if (area > max) {
                        maxPt = cv::Point(x, y);
                        max = area;
                    }
                }
            }
        }
        floodFill(outerBox, maxPt, CV_RGB(255, 255, 255));
        for (int y = 0; y < outerBox.size().height; y++) {
            uchar *row = outerBox.ptr(y);
            for (int x = 0; x < outerBox.size().width; x++) {
                // blank out everything except the largest blob
                if (row[x] == 64 && !(x == maxPt.x && y == maxPt.y)) {
                    floodFill(outerBox, cv::Point(x, y), CV_RGB(0, 0, 0));
                }
            }
        }
    }
}
I've been banging my head on this for a few hours and I am totally stuck in the mud on it, so any help would be greatly appreciated. This is a little bit out of my depth, and I feel like I am probably just making some basic mistake somewhere without realizing it.
EDIT:
Using the same code as above running OpenCV for Mac (not iOS) I get the expected results
This seems to indicate that the issue is with the Mat -> UIImage bridging that OpenCV suggests using. I am going to push forward using the Mac library to test my code, but it would sure be nice to be able to get consistent results from the UIImage bridging as well.
+ (UIImage *)fromCVMat:(const cv::Mat &)cvMat
{
    // (1) Construct the correct color space
    CGColorSpaceRef colorSpace;
    if (cvMat.channels() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    // (2) Create image data reference
    CFDataRef data = CFDataCreate(kCFAllocatorDefault, cvMat.data, (cvMat.elemSize() * cvMat.total()));

    // (3) Create CGImage from cv::Mat container
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    // (4) Create UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];

    // (5) Release the references
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CFRelease(data);
    CGColorSpaceRelease(colorSpace);

    // (6) Return the UIImage instance
    return finalImage;
}
I worked out the distance transform in OpenCV using Python and was able to obtain this:
You stated "I get nothing but a black image". Well, I faced the same problem initially, until I converted the image to an integer type using np.uint8(dist_transform).
I did something extra as well (you may or may not need it): in order to segment the rooms to a certain extent, I thresholded the distance-transformed image. I got this as a result:
I have some
CGImageRef cgImage = "something"
Is there a way to manipulate the pixel values of this cgImage? For example, this image contains values between 0.0001 and 3000, so when I try to view the image in an NSImageView this way (How can I show an image in a NSView using an CGImageRef image),
I get a black image; all pixels are black. I think it has to do with mapping the pixel value range onto a different color map (I don't know).
I want to be able to manipulate or change the pixel values, or just be able to see the image by adjusting the color-map range.
I have tried this but obviously it doesn't work:
CGContextDrawImage(ctx, CGRectMake(0, 0, CGBitmapContextGetWidth(ctx), CGBitmapContextGetHeight(ctx)), cgImage);
UInt8 *data = CGBitmapContextGetData(ctx);
for (**all pixel values and i++**) {
    data[i] = **change to another value I want depending on the value in data[i]**;
}
Thank you,
In order to manipulate individual pixels in an image:
1. allocate a buffer to hold the pixels
2. create a memory bitmap context using that buffer
3. draw the image into the context, which puts the pixels into the buffer
4. change the pixels as desired
5. create a new image from the context
6. free up resources (be sure to check for leaks using Instruments)
Here's some sample code to get you started. This code will swap the blue and red components of each pixel.
- (CGImageRef)swapBlueAndRedInImage:(CGImageRef)image
{
    int x, y;
    uint8_t red, green, blue, alpha;
    uint8_t *bufptr;
    int width  = CGImageGetWidth( image );
    int height = CGImageGetHeight( image );

    // allocate memory for pixels
    uint32_t *pixels = calloc( width * height, sizeof(uint32_t) );

    // create a context with RGBA pixels
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );

    // draw the image into the context
    CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image );

    // manipulate the pixels
    bufptr = (uint8_t *)pixels;
    for ( y = 0; y < height; y++ )
        for ( x = 0; x < width; x++ )
        {
            red   = bufptr[3];
            green = bufptr[2];
            blue  = bufptr[1];
            alpha = bufptr[0];
            bufptr[1] = red;   // swaps the red and blue
            bufptr[3] = blue;  // components of each pixel
            bufptr += 4;
        }

    // create a new CGImage from the context with modified pixels
    CGImageRef resultImage = CGBitmapContextCreateImage( context );

    // release resources to free up memory
    CGContextRelease( context );
    CGColorSpaceRelease( colorSpace );
    free( pixels );

    return( resultImage );
}
I just started to get my hands dirty with the Tesseract library, but the results are really really bad.
I followed the instructions in the Git repository ( https://github.com/gali8/Tesseract-OCR-iOS ). My ViewController uses the following method to start recognizing:
Tesseract *t = [[Tesseract alloc] initWithLanguage:@"deu"];
t.delegate = self;
[t setVariableValue:@"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" forKey:@"tessedit_char_whitelist"];
[t setImage:img];
[t recognize];
NSLog(@"Recognized text: %@", [t recognizedText]);
labelRecognizedText.text = [t recognizedText];
t = nil;
The sample image from the project template
works well (which tells me that the project itself is set up correctly), but whenever I try to use other images, the recognized text is a complete mess. For example, I tried to take a picture of my Finder displaying the sample image:
https://dl.dropboxusercontent.com/u/607872/tesseract.jpg (1,5 MB)
But Tesseract recognizes:
Recognized text: s f l TO if v Ysssifss f
ssqxizg ss sfzzlj z
s N T IYIOGY Z I l EY s s
k Es ETL ZHE s UEY
z xhks Fsjs Es z VIII c
s I XFTZT c s h V Ijzs
L s sk sisijk J
s f s ssj Jss sssHss H VI
s s H
i s H st xzs
s s k 4 is x2 IV
Illlsiqss sssnsiisfjlisszxiij s
K
Even when the character whitelist contains only numbers, I don't get a result even close to what the image looks like:
Recognized text: 3 74 211
1
1 1 1
3 53 379 1
3 1 33 5 3 2
3 9 73
1 61 2 2
3 1 6 5 212 7
1
4 9 4
1 17
111 11 1 1 11 1 1 1 1
I assume there's something wrong with the way photos are taken from the iPad mini's camera I currently use, but I can't figure out what or why.
Any hints?
Update #1
In response to Tomas:
I followed the tutorial in your post but encountered several errors along the way...
the UIImage+OpenCV category cannot be used in my ARC project
I cannot import <opencv2/...> in my controllers; auto-completion does not offer it (and therefore [UIImage CVMat] is not defined)
I think there's something wrong with my integration of OpenCV, even though I followed the Hello tutorial and added the framework. Am I required to build OpenCV on my Mac as well, or is it sufficient to just include the framework in my Xcode project?
Since I don't really know what you might consider "important" at this point (I've already read several posts and tutorials and tried various steps), feel free to ask :)
Update #2
@Tomas: thanks, the ARC part was essential. My ViewController has already been renamed to .mm. Forget the part about "cannot import opencv2/"; I had already included it in my TestApp-Prefix.pch (as stated in the Hello tutorial).
On to the next challenge ;)
I noticed that when I use images taken with the camera, the bounds for the roi object aren't calculated successfully. I played around with the device orientation and put a UIImage in my view to watch the image processing steps, but sometimes (even when the image is correctly aligned) the values are negative, because the if-condition in the bounding-rect loop isn't met. The worst case I had: minX/Y and maxX/Y were never touched. Long story short: the line starting with Mat roi = inranged(cv::Rect( throws an exception (assertion failed because the values were < 0). I don't know if the number of contours matters, but I assume so, because the bigger the image, the more likely the assertion exception is.
To be perfectly honest: I haven't had the time to read OpenCV's documentation and understand what your code does, but as of now I don't think there's a way around it. It seems that, unfortunately for me, my initial task (scan receipt, run OCR, show items in a table) requires more resources (= time) than I thought.
There's nothing wrong with the way you're taking the pictures from your iPad per se. But you can't just throw in such a complex image and expect Tesseract to magically determine which text to extract. Take a closer look at the image and you'll notice it has no uniform lighting and is extremely noisy, so it may not be the best sample to start playing with.
In such scenarios it is mandatory to pre-process the image in order to provide the Tesseract library with something simpler to recognise.
Below is a very naive pre-processing example that uses OpenCV (http://www.opencv.org), a popular image processing framework. It should give you an idea to get you started.
#import <TesseractOCR/TesseractOCR.h>
#import <opencv2/opencv.hpp>
#import "UIImage+OpenCV.h"
using namespace cv;
...
// load source image
UIImage *img = [UIImage imageNamed:@"tesseract.jpg"];
Mat mat = [img CVMat];
Mat hsv;
// convert to HSV (better than RGB for this task)
cvtColor(mat, hsv, CV_RGB2HSV_FULL);
// blur slightly to reduce noise impact
const int blurRadius = img.size.width / 250;
blur(hsv, hsv, cv::Size(blurRadius, blurRadius));
// in range = extract pixels within a specified range
// here we work only on the V channel extracting pixels with 0 < V < 120
Mat inranged;
inRange(hsv, cv::Scalar(0, 0, 0), cv::Scalar(255, 255, 120), inranged);
Mat inrangedforcontours;
inranged.copyTo(inrangedforcontours); // findContours alters src mat
// now find contours to find where characters are approximately located
vector<vector<cv::Point> > contours;
vector<Vec4i> hierarchy;
findContours(inrangedforcontours, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
int minX = INT_MAX;
int minY = INT_MAX;
int maxX = 0;
int maxY = 0;
// find all contours that match expected character size
for (size_t i = 0; i < contours.size(); i++)
{
    cv::Rect brect = cv::boundingRect(contours[i]);
    float ratio = (float)brect.height / brect.width;
    if (brect.height > 250 && ratio > 1.2 && ratio < 2.0)
    {
        minX = MIN(minX, brect.x);
        minY = MIN(minY, brect.y);
        maxX = MAX(maxX, brect.x + brect.width);
        maxY = MAX(maxY, brect.y + brect.height);
    }
}
// Now we know where our characters are located
// extract relevant part of the image adding a margin that enlarges area
const int margin = img.size.width / 50;
Mat roi = inranged(cv::Rect(minX - margin, minY - margin, maxX - minX + 2 * margin, maxY - minY + 2 * margin));
cvtColor(roi, roi, CV_GRAY2BGRA);
img = [UIImage imageWithCVMat:roi];
Tesseract *t = [[Tesseract alloc] initWithLanguage:@"eng"];
[t setVariableValue:@"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" forKey:@"tessedit_char_whitelist"];
[t setImage:img];
[t recognize];
NSString *recognizedText = [[t recognizedText] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
if ([recognizedText isEqualToString:@"1234567890"])
    NSLog(@"Yeah!");
else
    NSLog(@"Epic fail...");
Notes
The UIImage+OpenCV category can be found here. If you're under ARC check this.
Take a look at this to get you started with OpenCV in Xcode. Note that OpenCV is a C++ framework which can't be imported in plain C (or Objective-C) source files. The easiest workaround is to rename your view controller from .m to .mm (Objective-C++) and reimport it in your project.
Tesseract's results can vary; a few things to keep in mind:
It requires a good-quality picture, i.e. good texture visibility.
Large pictures take much longer to process; it is also good to resize them down before processing.
It helps to apply some color effects to the image before sending it to Tesseract; use effects that enhance the visibility of the text.
Photos can behave differently depending on whether they come from the Camera or from the Camera Album.
If you are taking the photo directly from the Camera, try the function below.
- (UIImage *)getImageForTexture:(UIImage *)src_img {
    CGColorSpaceRef d_colorSpace = CGColorSpaceCreateDeviceRGB();
    /*
     * Note we specify 4 bytes per pixel here even though we ignore the
     * alpha value; you can't specify 3 bytes per-pixel.
     */
    size_t d_bytesPerRow = src_img.size.width * 4;
    unsigned char *imgData = (unsigned char *)malloc(src_img.size.height * d_bytesPerRow);
    CGContextRef context = CGBitmapContextCreate(imgData, src_img.size.width,
                                                 src_img.size.height,
                                                 8, d_bytesPerRow,
                                                 d_colorSpace,
                                                 kCGImageAlphaNoneSkipFirst);

    UIGraphicsPushContext(context);
    // These next two lines 'flip' the drawing so it doesn't appear upside-down.
    CGContextTranslateCTM(context, 0.0, src_img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Use UIImage's drawInRect: instead of the CGContextDrawImage function,
    // otherwise you'll have issues when the source image is in portrait orientation.
    [src_img drawInRect:CGRectMake(0.0, 0.0, src_img.size.width, src_img.size.height)];
    UIGraphicsPopContext();

    /*
     * At this point, we have the raw ARGB pixel data in the imgData buffer, so
     * we can perform whatever image processing here.
     */

    // After we've processed the raw data, turn it back into a UIImage instance.
    CGImageRef new_img = CGBitmapContextCreateImage(context);
    UIImage *convertedImage = [[UIImage alloc] initWithCGImage:new_img];

    CGImageRelease(new_img);
    CGContextRelease(context);
    CGColorSpaceRelease(d_colorSpace);
    free(imgData);

    return convertedImage;
}
I have been struggling with Tesseract character recognition for weeks. Here are two things I learned to get it to work better...
If you know what font you will be reading, clear the training and retrain it for only that font. Multiple fonts slows the OCR processing down and also increases the ambiguity in the Tesseract decision process. This will lead to greater accuracy and speed.
Post-OCR processing is really needed. You will end up with a matrix of characters that Tesseract recognizes, and you will need to further process them to narrow down on what you are trying to read. For instance, if your application reads food labels, knowing the rules for the words and sentences that make up a food label will help you recognize the series of characters that form that label.
Convert your UIImage from sRGB to RGB format.
If you are using iOS 5.0 and above, use #import <Accelerate/Accelerate.h>.
Otherwise, uncomment the //iOS 3.0-5.0 block below.
-(UIImage *) createARGBImageFromRGBAImage:(UIImage *)image
{
    //CGSize size = CGSizeMake(320, 480);
    CGSize dimensions = CGSizeMake(320, 480);

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * dimensions.width;
    NSUInteger bitsPerComponent = 8;

    unsigned char *rgba = malloc(bytesPerPixel * dimensions.width * dimensions.height);
    unsigned char *argb = malloc(bytesPerPixel * dimensions.width * dimensions.height);

    CGColorSpaceRef colorSpace = NULL;
    CGContextRef context = NULL;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgba, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGContextDrawImage(context, CGRectMake(0, 0, dimensions.width, dimensions.height), [image CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    const vImage_Buffer src = { rgba, dimensions.height, dimensions.width, bytesPerRow };
    const vImage_Buffer dis = { rgba, dimensions.height, dimensions.width, bytesPerRow };
    const uint8_t map[4] = {3,0,1,2};
    vImagePermuteChannels_ARGB8888(&src, &dis, map, kvImageNoFlags);

    //iOS 3.0-5.0
    /*for (int x = 0; x < dimensions.width; x++) {
        for (int y = 0; y < dimensions.height; y++) {
            NSUInteger offset = ((dimensions.width * y) + x) * bytesPerPixel;
            argb[offset + 0] = rgba[offset + 3];
            argb[offset + 1] = rgba[offset + 0];
            argb[offset + 2] = rgba[offset + 1];
            argb[offset + 3] = rgba[offset + 2];
        }
    }*/

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(dis.data, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    free(rgba);
    free(argb);

    return image;
}
Tesseract *t = [[Tesseract alloc] initWithLanguage:@"eng"];
[t setVariableValue:@"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" forKey:@"tessedit_char_whitelist"];
[t setImage:[self createARGBImageFromRGBAImage:img]];
[t recognize];
The Swift equivalent of @FARAZ's answer:
func getImageForTexture(srcImage: UIImage) -> UIImage {
    let d_colorSpace = CGColorSpaceCreateDeviceRGB()
    /*
     * Note we specify 4 bytes per pixel here even though we ignore the
     * alpha value; you can't specify 3 bytes per-pixel.
     */
    let d_bytesPerRow: size_t = Int(srcImage.size.width) * 4
    let imgData = malloc(Int(srcImage.size.height) * Int(d_bytesPerRow))

    let context = CGBitmapContextCreate(imgData, Int(srcImage.size.width), Int(srcImage.size.height), 8, Int(d_bytesPerRow), d_colorSpace, CGImageAlphaInfo.NoneSkipFirst.rawValue)

    UIGraphicsPushContext(context!)
    // These next two lines 'flip' the drawing so it doesn't appear upside-down.
    CGContextTranslateCTM(context, 0.0, srcImage.size.height)
    CGContextScaleCTM(context, 1.0, -1.0)
    // Use UIImage's drawInRect: instead of the CGContextDrawImage function,
    // otherwise you'll have issues when the source image is in portrait orientation.
    srcImage.drawInRect(CGRectMake(0.0, 0.0, srcImage.size.width, srcImage.size.height))
    UIGraphicsPopContext()

    /*
     * At this point, we have the raw ARGB pixel data in the imgData buffer, so
     * we can perform whatever image processing here.
     */

    // After we've processed the raw data, turn it back into a UIImage instance.
    let new_img = CGBitmapContextCreateImage(context)
    let convertedImage = UIImage(CGImage: new_img!)
    // The image owns a copy of the pixels now, so free the buffer to avoid a leak.
    free(imgData)
    return convertedImage
}
I have a UIImage that shows a photo downloaded from the net.
I would like to know a way to programmatically discover whether the image is B&W or color.
If you don't mind a compute-intensive task and you want the job done, check the image pixel by pixel.
The idea is to check whether all the R, G, B channels of each single pixel are similar: for example, a pixel with RGB 45-45-45 is gray, and so is 43-42-44, because all channels are close to each other. I check that every channel has a similar value (I am using a threshold of 10, but that's arbitrary; you will have to do some tests).
As soon as you have enough pixels above your threshold, you can break the loop and flag the image as colored.
The code is untested; it's just the idea, and hopefully without leaks.
// load image
CGImageRef imageRef = yourUIImage.CGImage;
CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
NSData *data = (NSData *)cfData; // use (__bridge NSData *) under ARC
char *pixels = (char *)[data bytes];
const int threshold = 10; // define a gray threshold

for (int i = 0; i < [data length]; i += 4)
{
    Byte red   = pixels[i];
    Byte green = pixels[i+1];
    Byte blue  = pixels[i+2];

    // Check if a single channel is too far from the average value.
    // Grays have RGB values very close to each other.
    int average = (red + green + blue) / 3;
    if (abs(average - red)   >= threshold ||
        abs(average - green) >= threshold ||
        abs(average - blue)  >= threshold)
    {
        // possibly a colored pixel... !!
    }
}
CFRelease(cfData);
I'm trying to create a UIImage test pattern for an iOS 5.1 device. The target UIImageView is 320x240 in size, but I was trying to create a 160x120 UIImage test pattern (future, non-test-pattern images will be this size). I wanted the top half of the box to be blue and the bottom half to be red, but I get what looks like uninitialized memory corrupting the bottom of the image. The code is as follows:
int width = 160;
int height = 120;

unsigned int testData[width * height];
for (int k = 0; k < (width * height) / 2; k++)
    testData[k] = 0xFF0000FF; // BGRA (Blue)
for (int k = (width * height) / 2; k < width * height; k++)
    testData[k] = 0x0000FFFF; // BGRA (Red)

int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, &testData, (width * height * 4), NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
UIImage *myTestImage = [UIImage imageWithCGImage:imageRef];
This should look like another example on Stack Overflow. Anyway, I found that as I decrease the size of the test pattern the "corrupt" portion of the image increases. What is also strange is that I see lines of red in the "corrupt" portion, so it doesn't appear that I'm just messing up the sizes of components. What am I missing? It feels like something in the provider, but I don't see it.
Thanks!
Added screenshots. Here is what it looks like with kCGImageAlphaNoneSkipFirst set:
And here is what it looks like with kCGImageAlphaFirst:
Your pixel data is in an automatic variable, so it's stored on the stack:
unsigned int testData[width * height];
You must be returning from the function where this data is declared. That makes the function's stack frame get popped and reused by other functions, which overwrites the data.
Your image, however, still refers to that pixel data at the same address on the stack. (CGDataProviderCreateWithData doesn't copy the data, it just refers to it.)
To fix: use malloc or CFMutableData or NSMutableData to allocate space for your pixel data on the heap.
Your image includes alpha which you then tell the system to ignore by skipping the most significant bits (i.e. the "B" portion of your image). Try setting it to kCGImageAlphaPremultipliedLast instead.
EDIT:
Now that I remember endianness, I realize that the program is probably reading your values in backwards, so what you might actually want is kCGImageAlphaPremultipliedFirst