iOS: Compare two images that are 80%-90% the same?

I want to compare two images that are 80%-90% the same. For example, if in the first image I'm standing in the middle of the frame and in the second I'm standing slightly away from the centre in the same pose, the comparison doesn't work for me; if one image is blurred and the other is sharp it also doesn't return true; and if one image is a little dark and the other bright it should still return true.
How can I do this using a hashing technique, or any other technique? Any help will be appreciated a lot. Thank you.
One of the techniques I'm trying is the code below, but it isn't working for me:
- (CGFloat)compareImage:(UIImage *)imgPre capturedImage:(UIImage *)imgCaptured
{
    CGFloat colorDiff;
    int bytesPerPixel_ = 4;

    // Centre pixel of the first image.
    // Note: the component order read below (alpha, red, green, blue) depends on the image's bitmap layout.
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imgPre.CGImage));
    int imgWidth = (int)CGImageGetWidth(imgPre.CGImage);
    int myWidth = imgWidth / 2;
    int myHeight = (int)CGImageGetHeight(imgPre.CGImage) / 2;
    const UInt8 *pixels = CFDataGetBytePtr(pixelData);
    // Index of the centre pixel: (row * row width + column) * bytes per pixel.
    int pixelStartIndex = (myHeight * imgWidth + myWidth) * bytesPerPixel_;
    UInt8 alphaVal = pixels[pixelStartIndex];
    UInt8 redVal = pixels[pixelStartIndex + 1];
    UInt8 greenVal = pixels[pixelStartIndex + 2];
    UInt8 blueVal = pixels[pixelStartIndex + 3];
    UIColor *color = [UIColor colorWithRed:(redVal/255.0f) green:(greenVal/255.0f) blue:(blueVal/255.0f) alpha:(alphaVal/255.0f)];
    NSLog(@"color of image=%@", color);
    NSLog(@"color of R=%hhu/G=%hhu/B=%hhu", redVal, greenVal, blueVal);

    // Centre pixel of the captured image.
    CFDataRef pixelDataCaptured = CGDataProviderCopyData(CGImageGetDataProvider(imgCaptured.CGImage));
    int imgWidthCaptured = (int)CGImageGetWidth(imgCaptured.CGImage);
    int myWidthCaptured = imgWidthCaptured / 2;
    int myHeightCaptured = (int)CGImageGetHeight(imgCaptured.CGImage) / 2;
    const UInt8 *pixelsCaptured = CFDataGetBytePtr(pixelDataCaptured);
    int pixelStartIndexCaptured = (myHeightCaptured * imgWidthCaptured + myWidthCaptured) * bytesPerPixel_;
    UInt8 alphaValCaptured = pixelsCaptured[pixelStartIndexCaptured];
    UInt8 redValCaptured = pixelsCaptured[pixelStartIndexCaptured + 1];
    UInt8 greenValCaptured = pixelsCaptured[pixelStartIndexCaptured + 2];
    UInt8 blueValCaptured = pixelsCaptured[pixelStartIndexCaptured + 3];
    UIColor *colorCaptured = [UIColor colorWithRed:(redValCaptured/255.0f) green:(greenValCaptured/255.0f) blue:(blueValCaptured/255.0f) alpha:(alphaValCaptured/255.0f)];
    NSLog(@"color of captured image=%@", colorCaptured);
    NSLog(@"color of captured image R=%hhu/G=%hhu/B=%hhu", redValCaptured, greenValCaptured, blueValCaptured);

    // Euclidean distance between the two centre-pixel colours.
    colorDiff = sqrt((redVal - redValCaptured) * (redVal - redValCaptured) +
                     (greenVal - greenValCaptured) * (greenVal - greenValCaptured) +
                     (blueVal - blueValCaptured) * (blueVal - blueValCaptured));

    CFRelease(pixelData);
    CFRelease(pixelDataCaptured);
    return colorDiff;
}
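For the "80%-90% the same" requirement, comparing a single pixel will almost never be enough. As a hedged sketch of the hashing route mentioned above: an average hash (aHash) scales both images down to an 8x8 grayscale thumbnail, builds a 64-bit fingerprint (one bit per sample, set when the sample is brighter than the mean), and compares fingerprints by counting differing bits. Because each image is compared against its own mean, uniform brightness changes, mild blur and small shifts tend to survive far better than with a single-pixel colour difference. The helper names AverageHash and HashSimilarity are made up for this example.
#import <UIKit/UIKit.h>

// Average hash (aHash): 8x8 grayscale thumbnail -> 64-bit fingerprint.
static uint64_t AverageHash(UIImage *image)
{
    const int side = 8;
    uint8_t gray[side * side];

    // Draw the image into an 8x8, 8-bit grayscale bitmap.
    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(gray, side, side, 8, side,
                                             graySpace, (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (ctx == NULL) return 0;
    CGContextDrawImage(ctx, CGRectMake(0, 0, side, side), image.CGImage);
    CGContextRelease(ctx);

    // Mean brightness of the 64 samples.
    int sum = 0;
    for (int i = 0; i < side * side; i++) sum += gray[i];
    int mean = sum / (side * side);

    // One bit per sample: 1 if brighter than the mean.
    uint64_t hash = 0;
    for (int i = 0; i < side * side; i++) {
        hash <<= 1;
        if (gray[i] > mean) hash |= 1;
    }
    return hash;
}

// Similarity in [0, 1]: fraction of matching bits between the two hashes.
static CGFloat HashSimilarity(UIImage *a, UIImage *b)
{
    uint64_t diff = AverageHash(a) ^ AverageHash(b);
    int differingBits = __builtin_popcountll(diff);
    return 1.0 - (differingBits / 64.0);
}
Used in place of the method above, something like HashSimilarity(imgPre, imgCaptured) >= 0.9 would then treat the two photos as "the same", with the threshold tuned experimentally for your images.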

Related

Reading pixels from UIImage results in BAD_ACCESS

I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if(!pixelData) {
return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for(int y = 0; y < image.size.height; y++) {
for(int x = 0; x < image.size.width; x++) {
int pixelInfo = ((image.size.width * y) + x) * 4;
UInt8 red = buffer[pixelInfo];
UInt8 green = buffer[(pixelInfo + 1)];
UInt8 blue = buffer[pixelInfo + 2];
UInt8 alpha = buffer[pixelInfo + 3];
if(red != 0xff && green != 0xff && blue != 0xff){
NSLog(#"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
}
}
}
For some reason, when I run the app, it iterates for a moment and then throws a BAD_ACCESS error on the line
UInt8 red = buffer[pixelInfo];. What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer size error.
buffer has the size of width × height, while pixelInfo has a ×4 multiplier, so the indexes you compute can run past the end of the copied data (check its real size with CFDataGetLength).
I think you need a buffer four times bigger, one that really holds the four colour components of each pixel, and in any case you have to be careful never to read beyond the size of the buffer.
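Beyond the buffer size, two other things in the posted loop are worth checking: the data is released with CFRelease before buffer is ever read, and image.size is measured in points rather than pixels (the underlying CGImage may have more pixels, and its rows may be padded). A hedged rework of the loop, assuming 8-bit, 4-component pixel data (whether the first byte is really red depends on the image's bitmap layout):
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
size_t width = CGImageGetWidth(cgImage);          // pixel dimensions, not points
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);   // rows may include padding

for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        size_t pixelInfo = y * bytesPerRow + x * 4;
        UInt8 red   = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue  = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}
CFRelease(pixelData);   // release only after the buffer is no longer used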

How do I convert bitmap format of a UIImage?

I need to convert my bitmap from the normal camera format of kCVPixelFormatType_32BGRA to the kCVPixelFormatType_24RGB format so it can be consumed by a 3rd party library.
How can this be done?
My C# code looks like this, in an attempt to do the conversion directly on the byte data:
byte[] sourceBytes = UIImageTransformations.BytesFromImage(sourceImage);
// final source is to be RGB
byte[] finalBytes = new byte[(int)(sourceBytes.Length * .75)];
int length = sourceBytes.Length;
int finalByte = 0;
for (int i = 0; i < length; i += 4)
{
byte blue = sourceBytes[i];
byte green = sourceBytes[i + 1];
byte red = sourceBytes[i + 2];
finalBytes[finalByte] = red;
finalBytes[finalByte + 1] = green;
finalBytes[finalByte + 2] = blue;
finalByte += 3;
}
UIImage finalImage = UIImageTransformations.ImageFromBytes(finalBytes);
However I'm finding that my sourceBytes length is not always divisible by 4 which doesn't make any sense to me.
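If you can drop down to the native side (the question's code is C#, so this would sit behind a binding), Accelerate's vImage has a converter for exactly this layout change; it works from the pixel buffer's own width, height and rowBytes, so any row padding is handled explicitly instead of being guessed at from the byte count. A hedged Objective-C sketch (the helper name is made up), assuming you still have the camera's CVPixelBufferRef in kCVPixelFormatType_32BGRA:
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#import <Accelerate/Accelerate.h>

// Convert a 32BGRA pixel buffer to tightly packed 24-bit RGB bytes.
static NSData *RGBDataFromBGRAPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    vImage_Buffer src;
    src.data     = CVPixelBufferGetBaseAddress(pixelBuffer);
    src.width    = CVPixelBufferGetWidth(pixelBuffer);
    src.height   = CVPixelBufferGetHeight(pixelBuffer);
    src.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);   // may include padding

    vImage_Buffer dst;
    dst.width    = src.width;
    dst.height   = src.height;
    dst.rowBytes = src.width * 3;                              // tightly packed RGB
    dst.data     = malloc(dst.rowBytes * dst.height);

    vImage_Error err = vImageConvert_BGRA8888toRGB888(&src, &dst, kvImageNoFlags);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (err != kvImageNoError) {
        free(dst.data);
        return nil;
    }
    return [NSData dataWithBytesNoCopy:dst.data
                                length:dst.rowBytes * dst.height
                          freeWhenDone:YES];
}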

Convert matrix to UIImage

I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); the image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game Of Life app. The advantage over drawing to a graphics context is that this is ridiculously fast.
This was all written a long time ago, so it's a bit messier than what I might write now, but the method would stay the same. For some reason I defined these outside the method, as instance variables...
{
unsigned int length_in_bytes;
unsigned char *cells;
unsigned char *temp_cells;
unsigned char *changes;
unsigned char *temp_changes;
GLubyte *buffer;
CGImageRef imageRef;
CGDataProviderRef provider;
int ar, ag, ab, dr, dg, db;
float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
//translate colours into rgb components
if ([deadColor isEqual:[UIColor whiteColor]]) {
dr = dg = db = 255;
} else if ([deadColor isEqual:[UIColor blackColor]]) {
dr = dg = db = 0;
} else {
[deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
dr = drf * 255;
dg = dgf * 255;
db = dbf * 255;
}
if ([aliveColor isEqual:[UIColor whiteColor]]) {
ar = ag = ab = 255;
} else if ([aliveColor isEqual:[UIColor blackColor]]) {
ar = ag = ab = 0;
} else {
[aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
ar = arf * 255;
ag = agf * 255;
ab = abf * 255;
}
// dr = 255, dg = 255, db = 255;
// ar = 0, ag = 0, ab = 0;
//create bytes of image from the cell map
int yRef, cellRef;
unsigned char *cell_ptr = cells;
for (int y=0; y<self.height; y++)
{
yRef = y * (self.width * 4);
int x = 0;
do
{
cellRef = yRef + 4 * x;
if (*cell_ptr & 0x01) {
//alive colour
buffer[cellRef] = ar;
buffer[cellRef + 1] = ag;
buffer[cellRef + 2] = ab;
buffer[cellRef + 3] = 255;
} else {
//dead colour
buffer[cellRef] = dr;
buffer[cellRef + 1] = dg;
buffer[cellRef + 2] = db;
buffer[cellRef + 3] = 255;
}
cell_ptr++;
} while (++x < self.width);
}
//create image
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// render the byte array into an image ref
imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
// convert image ref to UIImage
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
//return image
return image;
}
You should be able to adapt this to create an image from your matrix.
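For example, a hypothetical bridge from the 3x3 '+' matrix in the question might look like this, written as if it sits in the same class as the method above (so it can reach the ivars) and assuming width and height are settable properties. The method expects cells, buffer and provider to already exist, so they are allocated here; nothing in this snippet is from the original answer.
// The question's matrix: 0 = black, 1 = white; the 0s form the '+'.
int matrix[9] = { 1, 0, 1,
                  0, 0, 0,
                  1, 0, 1 };

self.width = 3;
self.height = 3;

cells = malloc(self.width * self.height);               // one byte per cell
buffer = malloc(self.width * self.height * 4);          // RGBA, 4 bytes per pixel
provider = CGDataProviderCreateWithData(NULL, buffer,
                                        self.width * self.height * 4, NULL);

for (int i = 0; i < self.width * self.height; i++) {
    cells[i] = (matrix[i] == 1) ? 0x01 : 0x00;          // matrix 1 (white) -> "alive"
}

UIImage *plusImage = [self imageOfMapWithDeadColor:[UIColor blackColor]
                                        aliveColor:[UIColor whiteColor]];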
In order to convert a matrix to a UIImage:
// The matrix has `lines` rows and `columns` columns, so width = columns and height = lines.
CGSize size = CGSizeMake(columns, lines);
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose the color to draw: white for 1, black for 0
        if ( matrixDraw[i * columns + j] == 1 ) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at column j, line i (x = j, y = i)
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}
// Create a UIImage from the context we have just drawn into
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image.
Loop over each pixel and look at its value: black is 0 and white is 1, so depending on the value we set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill the pixel at position (j, i) with a given width and height (1 in both cases, to fill a single pixel).
Finally we create a UIImage from the current context and close the image context.
Hope it helps someone!

Extract cyan channel of UIImage?

I was wondering, is it possible to extract the cyan channel of a UIImage into a separate UIImage? Kind of like how in Photoshop, you can click the tab that says Cyan and it shows the Cyan channel of the image. Is this even possible?
By modifying this answer, you can get something like this:
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* pixelBytes = CFDataGetBytePtr(pixelData);
int cyanChannel = 0;
//32-bit RGBA
for(int i = 0; i < CFDataGetLength(pixelData); i += 4) {
cyanChannel += pixelBytes[i + 1] + pixelBytes[i + 2]; // cyan = green + blue
}
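That snippet only accumulates a single number. A hedged sketch that actually produces a separate UIImage, using the naive CMY approximation cyan = 255 − red (Photoshop's CMYK separation also involves black generation, so this will not match it exactly); the helper names are made up, and 8-bit RGBA input with red in the first byte is assumed:
// Frees the cyan buffer when the image's data provider is deallocated.
static void ReleaseCyanBuffer(void *info, const void *data, size_t size)
{
    free((void *)data);
}

static UIImage *CyanChannelImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    if (!pixelData) return nil;
    const UInt8 *pixels = CFDataGetBytePtr(pixelData);

    // One 8-bit "cyan" sample per pixel: cyan = 255 - red.
    UInt8 *cyan = malloc(width * height);
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            UInt8 red = pixels[y * bytesPerRow + x * 4];
            cyan[y * width + x] = 255 - red;
        }
    }
    CFRelease(pixelData);

    // Wrap the single-channel buffer in a grayscale CGImage/UIImage.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, cyan, width * height, ReleaseCyanBuffer);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGImageRef cyanRef = CGImageCreate(width, height, 8, 8, width, gray,
                                       (CGBitmapInfo)kCGImageAlphaNone, provider,
                                       NULL, NO, kCGRenderingIntentDefault);
    UIImage *result = [UIImage imageWithCGImage:cyanRef];

    CGImageRelease(cyanRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(gray);
    return result;
}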

Converting a 24-bit PNG image to an array of GLubytes

I'd like to do the following:
Read RGB color values from a 24-bit PNG image
Average the RGB values and store them into an array of GLubytes.
I have provided the function that I was hoping would perform these two steps.
My function returns an array of GLubytes; however, all elements have a value of 0.
So I'm guessing I'm reading the image data incorrectly.
What am I doing wrong in reading the image? (Perhaps my format is incorrect.)
Here is my function:
+ (GLubyte *) LoadPhotoAveragedIndexPNG:(UIImage *)image numPixelComponents: (int)numComponents
{
// Load an image and return byte array.
CGImageRef textureImage = image.CGImage;
if (textureImage == nil)
{
NSLog(#"LoadPhotoIndexPNG: Failed to load texture image");
return nil;
}
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
GLubyte *indexedData = (GLubyte *)malloc(texWidth * texHeight);
GLubyte *rawData = (GLubyte *)malloc(texWidth * texHeight * numComponents);
CGContextRef textureContext = CGBitmapContextCreate(
rawData,
texWidth,
texHeight,
8,
texWidth * numComponents,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(textureContext,
CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight),
textureImage);
CGContextRelease(textureContext);
int rawDataLength = texWidth * texHeight * numComponents;
for (int i = 0, j = 0; i < rawDataLength; i += numComponents)
{
GLubyte b = rawData[i];
GLubyte g = rawData[i + 1];
GLubyte r = rawData[i + 2];
indexedData[j++] = (r + g + b) / 3;
}
return indexedData;
}
Here is the test image I'm loading (RGB colorspace, PNG format):
Do check with some logging whether b, g and r hold sensible values in the last for loop. The mistake is in indexedData[j++] = (r + g + b) / 3;: those three variables are one byte each, so you cannot just sum them up like that. Use a larger integer type, cast them up, and cast the result back when storing it into the array (you are most likely getting an overflow).
Apart from your original problem, there's a major problem here (maybe even related):
for (int i = 0, j = 0; i < rawDataLength; i += numComponents)
{
GLubyte b = rawData[i];
GLubyte g = rawData[i + 1];
GLubyte r = rawData[i + 2];
indexedData[j++] = (r + g + b) / 3;
}
Namely the expression
(r + g + b)
This expression will be performed with GLubyte-sized integer operations. If the sum of r + g + b is larger than the type GLubyte can hold, it will overflow. Whenever you're processing data through intermediary variables (good style!), choose variable types large enough to hold the largest value you can encounter. Another option is to cast within the expression, like
indexedData[j++] = ((uint16_t)r + (uint16_t)g + (uint16_t)b) / 3;
But that's cumbersome to read. Also, if you're processing integers of a known size, use the types found in stdint.h; you know that you're expecting 8 bits per channel. You can also use the comma operator in the for loop's increment clause:
uint8_t *indexedData = (GLubyte *)malloc(texWidth * texHeight);
/* ... */
for (int i = 0, j = 0; i < rawDataLength; i += numComponents, j++)
{
uint16_t b = rawData[i];
uint16_t g = rawData[i + 1];
uint16_t r = rawData[i + 2];
indexedData[j] = (r + g + b) / 3;
}
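A hypothetical call site for the corrected loader, just to show who owns the returned buffer (the class name and image name are placeholders):
UIImage *photo = [UIImage imageNamed:@"photo"];
GLubyte *averaged = [TextureLoader LoadPhotoAveragedIndexPNG:photo
                                          numPixelComponents:4];
if (averaged != NULL) {
    // ... upload as a single-channel texture, etc. ...
    free(averaged);   // the buffer is malloc'd inside the method, so the caller frees it
}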
