How does image pixel data "scan" the image pixels? (iOS)

The Goal:
Find the first black pixel on the left side of an image that contains only black and transparent pixels.
What I have:
I know how to get the pixel data, and I have an array of black and transparent pixels (found here: https://stackoverflow.com/a/1262893/358480):
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        // The alpha value must be stored as a float; the original integer
        // declaration truncated the 0..1 value to 0 for every pixel.
        CGFloat alpha = rawData[byteIndex + 3] / 255.0;
        byteIndex += 4;
        [result addObject:[NSNumber numberWithFloat:alpha]];
    }

    free(rawData);
    return result;
}
What is the problem?
I cannot understand the order in which the function "scans" the image.
What I want is to get the columns of the image and locate the first column that has at least 1 non-transparent pixel; that way I will know how to crop off the left, transparent side of the image.
How can I get the pixels by columns?
Thanks
Shani

The bytes are ordered left-to-right, top-to-bottom. So to do what you want, I think you want to loop over the rawData like this:
int x = 0;
int y = 0;
BOOL found = NO;
for (x = 0; x < width; x++) {
    for (y = 0; y < height; y++) {
        unsigned char alphaByte = rawData[(y * bytesPerRow) + (x * bytesPerPixel) + 3];
        if (alphaByte > 0) {
            found = YES;
            break;
        }
    }
    if (found) break;
}
NSLog(@"First non-transparent pixel at %i, %i", x, y);
Then your first column that contains a non-transparent pixel will be column x.

Normally one would iterate over the image array from top to bottom over rows, and within each row from left to right over the columns. In this case you want the reverse: we want to iterate over each column, beginning at the left, and within the column we go over all rows and check if a black pixel is present.
This will give you the left-most black pixel:
size_t x; // declared outside the loop so it is still in scope after the goto
size_t maxIndex = height * bytesPerRow;
for (x = 0; x < bytesPerRow; x += bytesPerPixel)
{
    for (size_t index = x; index < maxIndex; index += bytesPerRow)
    {
        if (rawData[index + 3] > 0)
        {
            goto exitLoop;
        }
    }
}
exitLoop:
if (x < bytesPerRow)
{
    x /= bytesPerPixel;
    // left-most column is `x`
}
Well, this is equivalent to mattjgalloway's answer, just slightly optimized, and a bit neater too :O
Although a goto is generally accepted as a way to abandon two loops from within the inner loop, it's still ugly. It makes me really miss those nifty labelled flow-control statements D has...
The function you provided in the example code does something different, though. It starts at a certain position in the image (defined by xx and yy) and walks over count pixels, going from the starting position to the right and continuing on the following rows. It adds those alpha values to an array.
When passed xx = yy = 0, this will find the top-most pixel meeting the condition, not the left-most. That transformation is given by the code above. Keep in mind that a 2D image is simply a 1D array in memory, starting with the top row from left to right and proceeding with the next rows; with simple index math one can iterate either over rows or over columns.
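That index math can be sketched in plain C (the names here are illustrative, not from the post): computing the byte offset of pixel (x, y), and scanning columns by fixing x and stepping y, which adds bytesPerRow per step.

```c
#include <stddef.h>

/* Byte offset of pixel (x, y) in a row-major RGBA8888 buffer. */
static size_t pixelOffset(size_t x, size_t y, size_t bytesPerRow, size_t bytesPerPixel) {
    return y * bytesPerRow + x * bytesPerPixel;
}

/* Left-most column containing a pixel with non-zero alpha (byte 3 of each
   pixel), or -1 if the image is fully transparent.  A column is scanned by
   fixing x and stepping y, i.e. jumping bytesPerRow bytes per step. */
static long leftmostOpaqueColumn(const unsigned char *rgba,
                                 size_t width, size_t height,
                                 size_t bytesPerRow, size_t bytesPerPixel) {
    for (size_t x = 0; x < width; x++)
        for (size_t y = 0; y < height; y++)
            if (rgba[pixelOffset(x, y, bytesPerRow, bytesPerPixel) + 3] > 0)
                return (long)x;
    return -1;
}
```

For a 3×2 image with 4 bytes per pixel and no row padding, setting the alpha byte of pixel (1, 1) makes column 1 the answer.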


Reading pixels from UIImage results in BAD_ACCESS

I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for (int y = 0; y < image.size.height; y++) {
    for (int x = 0; x < image.size.width; x++) {
        int pixelInfo = ((image.size.width * y) + x) * 4;
        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[(pixelInfo + 1)];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}
For some reason, when I run the app, it iterates for a moment and then throws a BAD_ACCESS error on the line:
UInt8 red = buffer[pixelInfo];
What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer access error, and there are two likely culprits.
First, CFRelease(pixelData) is called before the loop runs, so buffer points into memory that may already have been freed; move the release to after the loop.
Second, the index math assumes the buffer holds exactly width × height × 4 bytes with no row padding, and that image.size (which is in points) matches the bitmap's pixel dimensions. Use CGImageGetWidth/CGImageGetHeight for the pixel dimensions and CGImageGetBytesPerRow for the row stride, and make sure you never read past CFDataGetLength(pixelData).
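The stride point can be seen with plain index arithmetic; a small C sketch (the values below are illustrative):

```c
#include <stddef.h>

/* Offset math for pixel (x, y) in an RGBA buffer.  The naive form assumes a
   row is exactly width * 4 bytes; the robust form uses the real row stride
   (what CGImageGetBytesPerRow reports), which may include padding. */
static size_t naiveOffset(size_t x, size_t y, size_t width) {
    return (width * y + x) * 4;
}
static size_t stridedOffset(size_t x, size_t y, size_t bytesPerRow) {
    return y * bytesPerRow + x * 4;
}
```

For a 3-pixel-wide image whose rows are padded to 16 bytes, pixel (0, 1) actually lives at byte 16, but the naive formula computes byte 12, which is padding at the end of row 0: the loop silently reads the wrong bytes.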

CIPhotoEffect CIFilters are ~invariant with respect to colour management. What gives CIPhotoEffect filters this property?

To give this question some context (ho ho):
I am subclassing CIFilter under iOS for the purpose of creating some custom photo-effect filters. As per the documentation, this means creating a "compound" filter that encapsulates one or more pre-existing CIFilters within the umbrella of my custom CIFilter subclass.
All well and good. No problems there. For the sake of example, let's say I encapsulate a single CIColorMatrix filter which has been preset with certain rgba input vectors.
When applying my custom filter (or indeed CIColorMatrix alone), I see radically different results when using a CIContext with colour management on versus off. I am creating my contexts as follows:
Colour management on:
CIContext * context = [CIContext contextWithOptions:nil];
Colour management off:
NSDictionary *options = @{kCIContextWorkingColorSpace: [NSNull null], kCIContextOutputColorSpace: [NSNull null]};
CIContext * context = [CIContext contextWithOptions:options];
Now, this is no great surprise. However, I have noticed that all of the pre-built CIPhotoEffect CIFilters, e.g. CIPhotoEffectInstant, are essentially invariant under those same two colour management conditions.
Can anyone lend any insight as to what gives them this property? For example, do they themselves encapsulate particular CIFilters that may be applied with similar invariance?
My goal is to create some custom filters with the same property, without being limited to chaining only CIPhotoEffect filters.
--
Edit: Thanks to YuAo, I have assembled some working code examples which I post here to help others:
Programmatically generated CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
self.filter = [CIFilter filterWithName:@"CIColorCubeWithColorSpace"];
[self.filter setDefaults];

int cubeDimension = 2; // Must be a power of 2, max 128
int cubeDataSize = 4 * cubeDimension * cubeDimension * cubeDimension; // number of floats, not bytes
float cubeDataBytes[8 * 4] = {
    0.0, 0.0, 0.0, 1.0,
    0.1, 0.0, 1.0, 1.0,
    0.0, 0.5, 0.5, 1.0,
    1.0, 1.0, 0.0, 1.0,
    0.5, 0.0, 0.5, 1.0,
    1.0, 0.0, 1.0, 1.0,
    0.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0
};
NSData *cubeData = [NSData dataWithBytes:cubeDataBytes length:cubeDataSize * sizeof(float)];

[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];

CIImage *filteredImageCore = [self.filter outputImage];
CGColorSpaceRelease(colorSpace);
The docs state:
To provide a CGColorSpaceRef object as the input parameter, cast it to type id. With the default color space (null), which is equivalent to kCGColorSpaceGenericRGBLinear, this filter’s effect is identical to that of CIColorCube.
I wanted to go further and be able to read cubeData in from a file. So-called Hald Colour Look-up Tables, or Hald CLUT images, may be used to define a mapping from input colour to output colour.
With help from this answer, I assembled the code to do this as well, reposted here for convenience.
Hald CLUT image based CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
Usage:
NSData *cubeData = [self colorCubeDataFromLUT:@"LUTImage.png"];
int cubeDimension = 64;
[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); // or whatever your image's colour space is
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];
Helper Methods (which use Accelerate Framework):
- (nullable NSData *)colorCubeDataFromLUT:(nonnull NSString *)name
{
    UIImage *image = [UIImage imageNamed:name inBundle:[NSBundle bundleForClass:self.class] compatibleWithTraitCollection:nil];
    static const int kDimension = 64;
    if (!image) return nil;

    NSInteger width = CGImageGetWidth(image.CGImage);
    NSInteger height = CGImageGetHeight(image.CGImage);
    NSInteger rowNum = height / kDimension;
    NSInteger columnNum = width / kDimension;
    if ((width % kDimension != 0) || (height % kDimension != 0) || (rowNum * columnNum != kDimension)) {
        NSLog(@"Invalid colorLUT %@", name);
        return nil;
    }

    float *bitmap = [self createRGBABitmapFromImage:image.CGImage];
    if (bitmap == NULL) return nil;

    // Convert bitmap data written in row,column order to cube data written in
    // x:r, y:g, z:b representation, where x varies fastest, then y, then z.
    NSInteger size = kDimension * kDimension * kDimension * sizeof(float) * 4;
    float *data = malloc(size);
    int bitmapOffset = 0;
    int z = 0;
    for (int row = 0; row < rowNum; row++)
    {
        for (int y = 0; y < kDimension; y++)
        {
            int tmp = z;
            for (int col = 0; col < columnNum; col++) {
                NSInteger dataOffset = (z * kDimension * kDimension + y * kDimension) * 4;
                const float divider = 255.0;
                // Vector scalar divide, single precision: divides one run of
                // kDimension * 4 bitmap values by 255.0 and writes them into data.
                vDSP_vsdiv(&bitmap[bitmapOffset], 1, &divider, &data[dataOffset], 1, kDimension * 4);
                // Advance the bitmap offset to the next run of kDimension * 4 values.
                bitmapOffset += kDimension * 4;
                z++;
            }
            z = tmp;
        }
        z += columnNum;
    }
    free(bitmap);
    return [NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES];
}
- (float *)createRGBABitmapFromImage:(CGImageRef)image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    unsigned char *bitmap;
    NSInteger bitmapSize;
    NSInteger bytesPerRow;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    bytesPerRow = (width * 4);
    bitmapSize = (bytesPerRow * height);

    bitmap = malloc(bitmapSize);
    if (bitmap == NULL) return NULL;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        free(bitmap);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmap,
                                    width,
                                    height,
                                    8,
                                    bytesPerRow,
                                    colorSpace,
                                    (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmap);
        return NULL;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    float *convertedBitmap = malloc(bitmapSize * sizeof(float));
    // Convert the unsigned 8-bit integer samples to single-precision floats.
    vDSP_vfltu8(bitmap, 1, convertedBitmap, 1, bitmapSize);
    free(bitmap);

    return convertedBitmap;
}
One may create a Hald CLUT image by obtaining an identity image (Google it!) and then applying to it the same image-processing chain that was applied to the image used for visualising the "look" in any image-editing program. Just make sure you set cubeDimension in the example code to the correct dimension for the LUT image. If the dimension d is the number of elements along one side of the 3D LUT cube, the Hald CLUT image's width and height are d*sqrt(d) pixels, and the image has d^3 pixels in total.
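That dimension arithmetic can be checked directly; a minimal C sketch (integer-only, so it assumes the usual cube sizes where d^3 is a perfect square):

```c
/* For a cube of dimension d, a Hald CLUT image stores d*d*d pixels
   arranged as a square; its side length s satisfies s*s == d*d*d. */
static unsigned long haldPixelCount(unsigned long d) {
    return d * d * d;
}
static unsigned long haldSide(unsigned long d) {
    unsigned long s = 1;
    while (s * s < d * d * d) s++;   /* smallest s with s*s >= d^3 */
    return s;
}
```

So for the cubeDimension of 64 used above, the LUT image must be 512 × 512 pixels (262,144 pixels total).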
CIPhotoEffect internally uses the CIColorCubeWithColorSpace filter.
All the color cube data is stored within CoreImage.framework.
You can find the simulator's CoreImage.framework here (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Frameworks/CoreImage.framework/).
The color cube data is named with an scube path extension, e.g. CIPhotoEffectChrome.scube.
CIColorCubeWithColorSpace internally converts the color cube's color values to match the working color space of the current Core Image context, using these private methods:
-[CIImage _imageByMatchingWorkingSpaceToColorSpace:];
-[CIImage _imageByMatchingColorSpaceToWorkingSpace:];
Here's how CIPhotoEffect/CIColorCubeWithColorSpace should work with color management on vs. off.
With color management ON, here is what CI should do:
1. Color match from the input space to the cube space. If these two are equal, this is a no-op.
2. Apply the color cube.
3. Color match from the cube space to the output space. If these two are equal, this is a no-op.
With color management OFF, here is what CI should do:
1. Apply the color cube.
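A toy, one-channel sketch (not Core Image API; all names here are illustrative) of why identical spaces make the two paths agree: when the input-to-cube and cube-to-output matches are the identity, the managed pipeline collapses to the unmanaged one.

```c
typedef float (*transform_fn)(float);

/* Identity color match: the no-op case described above. */
static float identity(float v) { return v; }

/* Toy 1-D "color cube": any fixed lookup serves the argument. */
static float cubeLookup(float v) { return 1.0f - v; }

/* Managed pipeline: match into cube space, apply cube, match out. */
static float managed(float v, transform_fn matchIn, transform_fn matchOut) {
    return matchOut(cubeLookup(matchIn(v)));
}

/* Unmanaged pipeline: just apply the cube. */
static float unmanaged(float v) {
    return cubeLookup(v);
}
```

With both matches set to identity, managed(v, identity, identity) equals unmanaged(v) for every v, which is the invariance the question observes.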

Convert matrix to UIImage

I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); this image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to UIImage. In this case width would be 3 and height would be 3
I use this method to create an image for my Game of Life app. The advantage over drawing into a graphics context is that this is ridiculously fast.
This was all written a long time ago, so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...
{
    unsigned int length_in_bytes;
    unsigned char *cells;
    unsigned char *temp_cells;
    unsigned char *changes;
    unsigned char *temp_changes;
    GLubyte *buffer;
    CGImageRef imageRef;
    CGDataProviderRef provider;
    int ar, ag, ab, dr, dg, db;
    float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...
- (UIImage *)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    // Note: `buffer` must already point to width * height * 4 bytes, and
    // `provider` must be a CGDataProviderRef wrapping that buffer (e.g. from
    // CGDataProviderCreateWithData); both are set up outside this method.

    // translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }

    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }
    // dr = 255, dg = 255, db = 255;
    // ar = 0, ag = 0, ab = 0;

    // create bytes of image from the cell map
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                // alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                // dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }

    // create image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // render the byte array into an image ref
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
    // convert image ref to UIImage
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    // return image
    return image;
}
You should be able to adapt this to create an image from your matrix.
In order to convert a matrix to a UIImage:
CGSize size = CGSizeMake(columns, lines); // width is the number of columns
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose color to draw; i is the row, so the flat index is i * columns + j
        if (matrixDraw[i * columns + j] == 1) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at column j, row i (x = j, y = i)
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}
// Create a UIImage from the context we have just drawn into
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically, what we are doing is:
Create a context with the size of our image.
Loop over each pixel to check its value. Black is 0 and white is 1, so we set the fill color depending on the value.
The most important function is:
UIRectFill(CGRectMake(x, y, 1, 1));
This function lets us fill a pixel at the given position with a given width and height (1 in both cases, to fill one single pixel).
Finally, we create a UIImage from the current context and close the image context.
Hope it helps someone!

Create ColorCube CIFilter

I want to create ColorCube CIFilter for my app and i found documentation on apple site here https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_filer_recipes/ci_filter_recipes.html .
Also i post code here,
// Allocate memory
const unsigned int size = 64;
float *cubeData = (float *)malloc(size * size * size * sizeof(float) * 4);
float rgb[3], hsv[3], *c = cubeData;

// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++) {
    rgb[2] = ((double)z) / (size - 1); // Blue value
    for (int y = 0; y < size; y++) {
        rgb[1] = ((double)y) / (size - 1); // Green value
        for (int x = 0; x < size; x++) {
            rgb[0] = ((double)x) / (size - 1); // Red value
            // Convert RGB to HSV
            // You can find publicly available rgbToHSV functions on the Internet
            rgbToHSV(rgb, hsv);
            // Use the hue value to determine which to make transparent
            // The minimum and maximum hue angle depends on
            // the color you want to remove
            float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;
            // Calculate premultiplied alpha values for the cube
            c[0] = rgb[0] * alpha;
            c[1] = rgb[1] * alpha;
            c[2] = rgb[2] * alpha;
            c[3] = alpha;
            c += 4; // advance our pointer into memory for the next color value
        }
    }
}
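As a note on the loop structure above: the pointer c advances four floats per entry with x varying fastest, then y, then z, so entry (x, y, z) of a size³ cube lands at a predictable flat offset. A small C sketch of that index math:

```c
#include <stddef.h>

/* Flat float offset of cube entry (x, y, z): x varies fastest, then y,
   then z, with 4 floats (RGBA) per entry. */
static size_t cubeOffset(size_t x, size_t y, size_t z, size_t size) {
    return 4 * ((z * size + y) * size + x);
}

/* The same offset reached by counting loop iterations, mirroring the
   triple loop above that advances c by 4 each time. */
static size_t cubeOffsetByCounting(size_t x, size_t y, size_t z, size_t size) {
    size_t offset = 0;
    for (size_t zz = 0; zz < size; zz++)
        for (size_t yy = 0; yy < size; yy++)
            for (size_t xx = 0; xx < size; xx++) {
                if (xx == x && yy == y && zz == z) return offset;
                offset += 4;
            }
    return offset;
}
```

Both functions agree for every entry, which is why the population loop and a direct lookup by (r, g, b) index address the same memory.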
I want to know why they take size = 64, and what the memory-allocation lines at the top of the code mean.
Any help appreciated...

Implementing Ordered Dithering (24 bit RGB to 3 bit per channel RGB)

I'm writing an image-editing programme, and I need functionality to dither any arbitrary 24-bit RGB image (I've taken care of loading it with CoreGraphics and such) to an image with 3-bit colour channels, then display it. I've set up my matrices and such, but the code below produces nothing besides a simple pattern that is applied to the image:
- (CGImageRef)ditherImageTo16Colours:(CGImageRef)image withDitheringMatrixType:(SQUBayerDitheringMatrix)matrix {
    if (image == NULL) {
        NSLog(@"Image is NULL!");
        return NULL;
    }

    unsigned int imageWidth = CGImageGetWidth(image);
    unsigned int imageHeight = CGImageGetHeight(image);

    NSLog(@"Image size: %u x %u", imageWidth, imageHeight);

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imageWidth,
                                                 imageHeight,
                                                 8,
                                                 4 * (imageWidth),
                                                 CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                                 kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image); // draw it
    CGImageRelease(image); // get rid of the image, we don't want it anymore.

    unsigned char *imageData = CGBitmapContextGetData(context);

    unsigned char ditheringModulusType[0x04] = {0x02, 0x03, 0x04, 0x08};
    unsigned char ditheringModulus = ditheringModulusType[matrix];

    unsigned int red;
    unsigned int green;
    unsigned int blue;

    uint32_t *memoryBuffer;
    memoryBuffer = (uint32_t *)malloc((imageHeight * imageWidth) * 4);

    unsigned int thresholds[0x03] = {256 / 8, 256 / 8, 256 / 8};

    for (int y = 0; y < imageHeight; y++) {
        for (int x = 0; x < imageWidth; x++) {
            // fetch the colour components, add the dither value to them
            red = (imageData[((y * imageWidth) * 4) + (x << 0x02)]);
            green = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 1]);
            blue = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 2]);

            if (red > 36 && red < 238) {
                red += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            }
            if (green > 36 && green < 238) {
                green += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            }
            if (blue > 36 && blue < 238) {
                blue += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            }

            // memoryBuffer[(y * imageWidth) + x] = (0xFF0000 + ((x >> 0x1) << 0x08) + (y >> 2));
            memoryBuffer[(y * imageWidth) + x] = find_closest_palette_colour(((red & 0xFF) << 0x10) | ((green & 0xFF) << 0x08) | (blue & 0xFF));
        }
    }

    // CGContextRelease(context);
    context = CGBitmapContextCreate(memoryBuffer,
                                    imageWidth,
                                    imageHeight,
                                    8,
                                    4 * (imageWidth),
                                    CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                    kCGImageAlphaNoneSkipLast);

    NSLog(@"Created context from buffer: %@", context);

    CGImageRef result = CGBitmapContextCreateImage(context);
    return result;
}
Note that find_closest_palette_colour doesn't do anything besides returning the original colour right now for testing.
I'm trying to implement the example pseudocode from Wikipedia, and I don't really get anything out of that right now.
Anyone got a clue on how to fix this up?
Use the code that I have provided here: https://stackoverflow.com/a/17900812/342646
This code converts the image to single-channel grayscale first. If you want the dithering done on a three-channel image, you can split your image into three channels and call the function three times (once per channel).
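As a sketch of that per-channel idea, here is a minimal ordered-dither quantizer using a 2×2 Bayer map; the threshold values and scaling are illustrative, not taken from the linked answer:

```c
#include <stdint.h>

/* 2x2 Bayer threshold map, pre-scaled to the 0..255 range. */
static const int kBayer2[2][2] = { {0, 128}, {192, 64} };

/* Quantize one 8-bit channel to 3 bits (8 levels) with an ordered-dither
   bias chosen by the pixel's position, then expand back to 8 bits. */
static uint8_t ditherChannel(uint8_t v, int x, int y) {
    int bias = kBayer2[y & 1][x & 1] / 8;   /* scale threshold toward one quantization step */
    int level = (v + bias) * 7 / 255;       /* 0..7 */
    if (level > 7) level = 7;
    return (uint8_t)(level * 255 / 7);
}
```

Running this on each of R, G and B independently gives the three-channel result; mid-grey inputs land on different levels depending on position, which is what produces the dither pattern.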
