Extract cyan channel of UIImage? - ios

I was wondering, is it possible to extract the cyan channel of a UIImage into a separate UIImage? Kind of like how in Photoshop, you can click the tab that says Cyan and it shows the Cyan channel of the image. Is this even possible?

By modifying this answer, you can get something like the following:
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8 *pixelBytes = CFDataGetBytePtr(pixelData);
int cyanChannel = 0;
// 32-bit RGBA: each pixel is 4 bytes in the order R, G, B, A
for (CFIndex i = 0; i < CFDataGetLength(pixelData); i += 4) {
    cyanChannel += pixelBytes[i + 1] + pixelBytes[i + 2]; // cyan = green + blue
}
CFRelease(pixelData); // the copied pixel data must be released
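That only accumulates a single sum, though. To get the channel back out as a separate image, one option is to write each pixel's cyan estimate (green + blue, clamped to 255, following the approximation above; Photoshop's Cyan channel actually comes from a full CMYK separation) into an 8-bit grayscale bitmap and wrap that in a UIImage. A minimal sketch, assuming a 32-bit RGBA source; CyanChannelImage is a made-up helper name:

#import <UIKit/UIKit.h>

// Hypothetical helper: renders each pixel's cyan estimate into an
// 8-bit grayscale bitmap and returns it as a new UIImage.
UIImage *CyanChannelImage(UIImage *image) {
    CGImageRef cg = image.CGImage;
    size_t width = CGImageGetWidth(cg);
    size_t height = CGImageGetHeight(cg);
    size_t srcStride = CGImageGetBytesPerRow(cg); // rows may be padded

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cg));
    const UInt8 *src = CFDataGetBytePtr(pixelData);

    // Let Core Graphics own the grayscale buffer so the image can be
    // created safely from the context afterwards.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                             space, (CGBitmapInfo)kCGImageAlphaNone);
    UInt8 *gray = CGBitmapContextGetData(ctx);
    size_t grayStride = CGBitmapContextGetBytesPerRow(ctx);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const UInt8 *p = src + y * srcStride + x * 4; // assumes RGBA
            int cyan = p[1] + p[2];                       // cyan = green + blue
            gray[y * grayStride + x] = (UInt8)MIN(cyan, 255);
        }
    }
    CFRelease(pixelData);

    CGImageRef outCG = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:outCG];
    CGImageRelease(outCG);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return result;
}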

Related

iOS: Compare 2 images that are 80% - 90% the same?

I want to compare two images that are 80% - 90% the same. For example, if in the first image I'm standing in the middle of the frame and in the second I'm standing slightly away from the centre in the same pose, my current approach does not work for me; if one image is blurred and the other is clear it also does not return true, and if one image is a little dark and the other is bright it must still return true.
How can I make this work, for example using a hashing technique or any other technique? Any help will be appreciated a lot. Thank you.
One of the techniques I'm using is the code below, but it's not working for me:
- (CGFloat)compareImage:(UIImage *)imgPre capturedImage:(UIImage *)imgCaptured
{
    // Sample the centre pixel of the first image.
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imgPre.CGImage));
    int width = (int)CGImageGetWidth(imgPre.CGImage);
    int myWidth = width / 2;
    int myHeight = (int)CGImageGetHeight(imgPre.CGImage) / 2;
    const UInt8 *pixels = CFDataGetBytePtr(pixelData);
    int bytesPerPixel_ = 4;
    // Centre pixel index is row * rowWidth + column, times bytes per pixel
    // (the original used (myWidth + myHeight), which lands on the wrong pixel;
    // this also assumes tightly packed rows and an ARGB byte layout).
    int pixelStartIndex = (myHeight * width + myWidth) * bytesPerPixel_;
    UInt8 alphaVal = pixels[pixelStartIndex];
    UInt8 redVal = pixels[pixelStartIndex + 1];
    UInt8 greenVal = pixels[pixelStartIndex + 2];
    UInt8 blueVal = pixels[pixelStartIndex + 3];
    UIColor *color = [UIColor colorWithRed:(redVal / 255.0f) green:(greenVal / 255.0f) blue:(blueVal / 255.0f) alpha:(alphaVal / 255.0f)];
    NSLog(@"color of image=%@", color);
    NSLog(@"color of R=%hhu/G=%hhu/B=%hhu", redVal, greenVal, blueVal);

    // Sample the centre pixel of the captured image the same way.
    CFDataRef pixelDataCaptured = CGDataProviderCopyData(CGImageGetDataProvider(imgCaptured.CGImage));
    int widthCaptured = (int)CGImageGetWidth(imgCaptured.CGImage);
    int myWidthCaptured = widthCaptured / 2;
    int myHeightCaptured = (int)CGImageGetHeight(imgCaptured.CGImage) / 2;
    const UInt8 *pixelsCaptured = CFDataGetBytePtr(pixelDataCaptured);
    int pixelStartIndexCaptured = (myHeightCaptured * widthCaptured + myWidthCaptured) * bytesPerPixel_;
    UInt8 alphaValCaptured = pixelsCaptured[pixelStartIndexCaptured];
    UInt8 redValCaptured = pixelsCaptured[pixelStartIndexCaptured + 1];
    UInt8 greenValCaptured = pixelsCaptured[pixelStartIndexCaptured + 2];
    UInt8 blueValCaptured = pixelsCaptured[pixelStartIndexCaptured + 3];
    UIColor *colorCaptured = [UIColor colorWithRed:(redValCaptured / 255.0f) green:(greenValCaptured / 255.0f) blue:(blueValCaptured / 255.0f) alpha:(alphaValCaptured / 255.0f)];
    NSLog(@"color of captured image=%@", colorCaptured);
    NSLog(@"color of captured image R=%hhu/G=%hhu/B=%hhu", redValCaptured, greenValCaptured, blueValCaptured);

    // Euclidean distance between the two centre colours
    // (the original had a stray literal 249 in place of redValCaptured).
    CGFloat colorDiff = sqrt((redVal - redValCaptured) * (redVal - redValCaptured) +
                             (greenVal - greenValCaptured) * (greenVal - greenValCaptured) +
                             (blueVal - blueValCaptured) * (blueVal - blueValCaptured));
    CFRelease(pixelData);
    CFRelease(pixelDataCaptured);
    return colorDiff;
}
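Comparing a single centre pixel will never be robust to small shifts, blur, or exposure changes. A perceptual hash is much closer to what is being asked for. Below is a minimal average-hash (aHash) sketch, not the poster's code: scale each image to 8x8 grayscale, set one bit per pixel brighter than the mean, then compare the 64-bit hashes by Hamming distance, so small shifts, blur, and brightness changes only flip a few bits. AverageHash and ImageSimilarity are made-up helper names:

#import <UIKit/UIKit.h>

// Average hash: downscale to 8x8 grayscale, one bit per pixel
// brighter than the mean brightness of the thumbnail.
static uint64_t AverageHash(UIImage *image) {
    const int side = 8;
    uint8_t gray[64];
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(gray, side, side, 8, side,
                                             space, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(ctx, CGRectMake(0, 0, side, side), image.CGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);

    int sum = 0;
    for (int i = 0; i < 64; i++) sum += gray[i];
    int mean = sum / 64;

    uint64_t hash = 0;
    for (int i = 0; i < 64; i++) {
        if (gray[i] > mean) hash |= 1ULL << i;
    }
    return hash;
}

// Similarity in [0, 1]; a threshold around 0.9 corresponds roughly
// to the "80% - 90% the same" requirement in the question.
static CGFloat ImageSimilarity(UIImage *a, UIImage *b) {
    int distance = __builtin_popcountll(AverageHash(a) ^ AverageHash(b));
    return 1.0 - distance / 64.0;
}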

Reading pixels from UIImage results in BAD_ACCESS

I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for (int y = 0; y < image.size.height; y++) {
    for (int x = 0; x < image.size.width; x++) {
        int pixelInfo = ((image.size.width * y) + x) * 4;
        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}
For some reason, when I build the app, it iterates for a moment and then throws a BAD_ACCESS error on the line UInt8 red = buffer[pixelInfo];. What could be the issue?
Also, is this the fastest method to iterate through the pixels?
I think the problem is a buffer size error.
buffer has the size of width x height pixels, and pixelInfo carries a x4 multiplier.
I think you need to create an array four times bigger and save each pixel's color components from buffer into this new array. But you have to be careful not to read beyond the size of the buffer.
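One more thing worth flagging: the code calls CFRelease(pixelData) before the loop ever touches buffer, so the loop reads through freed memory, which is exactly the kind of use-after-free that surfaces as a BAD_ACCESS partway through iteration. A defensive sketch of the same loop, releasing the data only after reading and using the CGImage's own pixel geometry (image.size is measured in points, and rows can be padded):

CGImageRef cg = image.CGImage;
size_t width = CGImageGetWidth(cg);             // pixel width, not point width
size_t height = CGImageGetHeight(cg);
size_t bytesPerRow = CGImageGetBytesPerRow(cg); // may exceed width * 4
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cg));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        size_t pixelInfo = y * bytesPerRow + x * 4;
        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        // || rather than &&: a pixel is non-white if any channel is not 0xff
        if (red != 0xff || green != 0xff || blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}
CFRelease(pixelData); // only after the last read through buffer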

How do I convert the bitmap format of a UIImage?

I need to convert my bitmap from the normal camera format of kCVPixelFormatType_32BGRA to the kCVPixelFormatType_24RGB format so it can be consumed by a 3rd-party library.
How can this be done?
My C# code looks like this, in an attempt to do the conversion directly on the byte data:
byte[] sourceBytes = UIImageTransformations.BytesFromImage(sourceImage);
// final output is to be RGB, i.e. 3 of every 4 source bytes
byte[] finalBytes = new byte[(int)(sourceBytes.Length * .75)];
int length = sourceBytes.Length;
int finalByte = 0;
for (int i = 0; i < length; i += 4)
{
    byte blue = sourceBytes[i];      // BGRA order
    byte green = sourceBytes[i + 1];
    byte red = sourceBytes[i + 2];
    finalBytes[finalByte] = red;
    finalBytes[finalByte + 1] = green;
    finalBytes[finalByte + 2] = blue;
    finalByte += 3;
}
UIImage finalImage = UIImageTransformations.ImageFromBytes(finalBytes);
However, I'm finding that my sourceBytes length is not always divisible by 4, which doesn't make any sense to me.
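A length that is not divisible by 4 suggests sourceBytes is not a tightly packed 32-bit pixel buffer at all: either the helper returns encoded image data (PNG or JPEG bytes, say) rather than raw pixels, or the rows carry padding (bytesPerRow > width * 4), which a flat i += 4 loop over the whole array does not account for. For reference, a stride-aware sketch of the same BGRA-to-RGB repack in Objective-C using vImage from the Accelerate framework; ConvertBGRAtoRGB is a made-up wrapper name:

#import <Accelerate/Accelerate.h>

// Repack a 32BGRA buffer into tightly packed 24RGB. Explicit rowBytes
// values let vImage skip any per-row padding in the source.
vImage_Error ConvertBGRAtoRGB(const uint8_t *bgra, size_t width, size_t height,
                              size_t bgraRowBytes, uint8_t *rgbOut) {
    vImage_Buffer src = {
        .data = (void *)bgra,
        .height = height,
        .width = width,
        .rowBytes = bgraRowBytes
    };
    vImage_Buffer dst = {
        .data = rgbOut,
        .height = height,
        .width = width,
        .rowBytes = width * 3 // tightly packed RGB rows
    };
    return vImageConvert_BGRA8888toRGB888(&src, &dst, kvImageNoFlags);
}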

Convert matrix to UIImage

I need to convert a matrix representing a b/w image to a UIImage.
For example:
A matrix like the one below (just a representation); this image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case both the width and the height would be 3.
I use this method to create an image for my Game of Life app. The advantage over drawing into a graphics context is that this is ridiculously fast.
This was all written a long time ago, so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...
{
    unsigned int length_in_bytes;
    unsigned char *cells;
    unsigned char *temp_cells;
    unsigned char *changes;
    unsigned char *temp_changes;
    GLubyte *buffer;
    CGImageRef imageRef;
    CGDataProviderRef provider;
    int ar, ag, ab, dr, dg, db;
    float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...
- (UIImage *)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    // translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }
    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }
    // dr = 255, dg = 255, db = 255;
    // ar = 0, ag = 0, ab = 0;

    // create bytes of image from the cell map (RGBA, 4 bytes per pixel)
    length_in_bytes = self.width * self.height * 4;
    if (!buffer) {
        // allocate the pixel buffer on first use (the original set this up elsewhere)
        buffer = malloc(length_in_bytes);
    }
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                // alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                // dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }

    // create image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    provider = CGDataProviderCreateWithData(NULL, buffer, length_in_bytes, NULL);
    // render the byte array into an image ref; the alpha byte is last,
    // and a 32-bit image must declare its alpha info
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault | (CGBitmapInfo)kCGImageAlphaLast, provider, NULL, NO, kCGRenderingIntentDefault);
    // convert image ref to UIImage
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // return image
    return image;
}
You should be able to adapt this to create an image from your matrix.
In order to convert a matrix to a UIImage:
CGSize size = CGSizeMake(columns, lines); // width is the number of columns, height the number of lines
UIGraphicsBeginImageContextWithOptions(size, YES, 1); // scale 1 so one point maps to one pixel
for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose color to draw
        if (matrixDraw[i * columns + j] == 1) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at (i, j): j is the column (x), i is the line (y)
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}
// Create a UIImage from the context that we have just drawn into
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image.
Loop over each pixel to read its value: black is 0 and white is 1, so depending on the value we set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill a single pixel at position (i, j), with width and height both 1 so that exactly one pixel is filled.
Finally we create a UIImage from the current context and call UIGraphicsEndImageContext to finish.
Hope it helps someone!
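As a quick check, here is the question's 3x3 '+' matrix run through the snippet above, wrapped in a function (PlusSymbolImage is a made-up name):

// Builds the 3x3 '+' image from the question (0 = black, 1 = white).
UIImage *PlusSymbolImage(void) {
    int matrixDraw[9] = {
        1, 0, 1,
        0, 0, 0,
        1, 0, 1
    };
    int lines = 3, columns = 3;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(columns, lines), YES, 1);
    for (int i = 0; i < lines; i++) {
        for (int j = 0; j < columns; j++) {
            if (matrixDraw[i * columns + j] == 1) {
                [[UIColor whiteColor] setFill];
            } else {
                [[UIColor blackColor] setFill];
            }
            UIRectFill(CGRectMake(j, i, 1, 1));
        }
    }
    UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageFinal; // a 3x3 image of a black '+' on white
}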

Processing sketch won't display modified image on screen

I am making a sketch which does the following:
Resizes a large image to fit into an 800 x 600 screen.
Displays the image.
Applies some effects when keys are pressed.
Displays the image back on the screen and prints a little "Done" message.
Everything works fine except that it does not display the modified image back properly.
Here is the black and white effect:
void blackAndWhite() {
    loadPixels();
    for (int i = 0; i < img.pixels.length; i++) {
        int pixel = img.pixels[i];
        // println("Working on pixel " + i + " out of " + img.pixels.length);
        int red = (int) red(pixel);
        int green = (int) green(pixel);
        int blue = (int) blue(pixel);
        /*
         * Luminosity Method.
         */
        // red = (int) (red * 0.21);
        // green = (int) (green * 0.71);
        // blue = (int) (blue * 0.07);
        /*
         * Average Method
         */
        // float avg = (red + green + blue) / 3.0;
        // red = green = blue = (int) avg;
        /*
         * Lightness Method
         */
        int mostProminent = max(red, green, blue);
        int leastProminent = min(red, green, blue);
        int avg = (mostProminent + leastProminent) / 2;
        red = green = blue = avg;
        pixel = color(red, green, blue);
        img.pixels[i] = pixel;
    }
    println("Done");
    updatePixels();
    image(img, WIDTH/2, HEIGHT/2, calculatedWidth, calculatedHeight);
}
However, only the colored image is displayed :(
I know the algorithm works because I have tried it on other images (not using this sketch).
What is going wrong?
The following code works fine:
PImage img;

void setup() {
    img = loadImage("img.png");
    size(img.width, img.height);
}

void draw() {
    image(img, 0, 0);
}

void keyReleased() {
    blackAndWhite();
}

void blackAndWhite() {
    img.loadPixels();
    for (int i = 0; i < img.pixels.length; i++) {
        int pixel = img.pixels[i];
        // println("Working on pixel " + i + " out of " + img.pixels.length);
        int red = (int) red(pixel);
        int green = (int) green(pixel);
        int blue = (int) blue(pixel);
        /*
         * Luminosity Method.
         */
        // red = (int) (red * 0.21);
        // green = (int) (green * 0.71);
        // blue = (int) (blue * 0.07);
        /*
         * Average Method
         */
        // float avg = (red + green + blue) / 3.0;
        // red = green = blue = (int) avg;
        /*
         * Lightness Method
         */
        int mostProminent = max(red, green, blue);
        int leastProminent = min(red, green, blue);
        int avg = (mostProminent + leastProminent) / 2;
        red = green = blue = avg;
        pixel = color(red, green, blue);
        img.pixels[i] = pixel;
    }
    println("Done");
    img.updatePixels();
}
My guess is that you either forgot to write img.loadPixels() instead of loadPixels(), or there was something wrong in the sketch's draw() method.
