unsigned char allocation comes nils in offset value objective c - ios

I am getting the pixel colour values from touch points. I am successfully doing this, but after some time the app crashes with EXC_BAD_ACCESS (code=1, address=0x41f6864). It looks like a memory allocation problem; here is the source code for reference.
- (UIColor *) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
@try {
CGImageRef inImage = drawImage.image.CGImage;
// Create off-screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL)
{
return nil; /* error */
}
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage (cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char *data = {0};
data=(unsigned char*) calloc(CGImageGetHeight(inImage) * CGImageGetWidth(inImage) , CGBitmapContextGetHeight(cgctx)*CGBitmapContextGetWidth(cgctx));
data= CGBitmapContextGetData (cgctx);
if( data !=NULL ) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
// NSLog(@"%s111111",data);
int alpha = data[offset]; /////// EXC_BAD_ACCESS(CODE=1,address=0x41f6864)
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
//NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
//CGImageRelease(*data);
CGContextRelease(cgctx);
// Free image data memory for the context
if (data)
{
free(data);
}
}
@catch (NSException *exception) {
}
return color;
}

The memory management in your code appears to be wrong:
Declare data and pointlessly assign a value to it:
unsigned char *data = {0};
Allocate a memory block and store a reference to it in data - overwriting the pointless initialisation:
data = (unsigned char *)calloc(CGImageGetHeight(inImage) * CGImageGetWidth(inImage), CGBitmapContextGetHeight(cgctx) * CGBitmapContextGetWidth(cgctx));
Now get a reference to a different memory block and store it in data, throwing away the reference to the calloc'ed block:
data = CGBitmapContextGetData (cgctx);
Do some other stuff and then free the block you did not calloc:
free(data);
If you are allocating your own memory buffer you should pass it to CGBitmapContextCreate; however, provided you are using iOS 4 or later, there is no need to allocate your own buffer.
As to the memory access error, you are doing no checks on the value of point and your calculation would appear to be producing a value of offset which is incorrect. Add checks on the values of point and offset and take appropriate action if they are out of bounds (you will have to decide what that should be).
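For illustration only, a minimal sketch of the corrected flow under the assumptions above: the context owns its own pixel buffer (so there is no calloc/free on our side), taps outside the image simply return nil, and createARGBBitmapContextFromImage: is the questioner's existing helper (the method name pixelColorAtPoint:inImage: is made up for the sketch):

// Sketch: let the bitmap context own its pixel buffer and bounds-check the tap.
- (UIColor *)pixelColorAtPoint:(CGPoint)point inImage:(CGImageRef)inImage {
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage]; // existing helper
    if (cgctx == NULL) return nil;

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGContextDrawImage(cgctx, CGRectMake(0, 0, w, h), inImage);

    UIColor *color = nil;
    unsigned char *data = CGBitmapContextGetData(cgctx); // owned by the context, do not free
    long x = lround(point.x);
    long y = lround(point.y);
    if (data != NULL && x >= 0 && y >= 0 && x < (long)w && y < (long)h) {
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(cgctx);
        size_t offset = bytesPerRow * (size_t)y + 4 * (size_t)x;   // ARGB, 4 bytes per pixel
        color = [UIColor colorWithRed:data[offset + 1] / 255.0f
                                green:data[offset + 2] / 255.0f
                                 blue:data[offset + 3] / 255.0f
                                alpha:data[offset] / 255.0f];
    }
    CGContextRelease(cgctx); // if the context allocated the buffer, this also frees it
    return color;
}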
HTH

The problem may be caused by the point being outside the image rect, so you can use
@try {
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f)
alpha:(alpha/255.0f)];
}
@catch (NSException *exception) {
}
to avoid the EXC_BAD_ACCESS
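As a sketch, a plain bounds check before indexing also works and does not rely on exception handling (assuming w and h are the bitmap's width and height, as in the question's code):

// Sketch: only index into the buffer when the point lies inside the bitmap.
long x = lround(point.x);
long y = lround(point.y);
if (x >= 0 && y >= 0 && x < (long)w && y < (long)h) {
    int offset = 4 * ((int)(w * y) + (int)x);
    int alpha = data[offset];
    int red   = data[offset + 1];
    int green = data[offset + 2];
    int blue  = data[offset + 3];
    color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}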

Related

Apple's Image loading/save code appears to be slightly modifying my images

I'm writing some tests to detect changes to the lossless image formats (starting with PNG) and finding that on Linux and Windows the image loading mechanisms work as expected, but on iOS (I haven't tried on macOS) the image data is always very slightly changed if I load from a PNG file on disk or save to a PNG file on disk using Apple's methods.
If I create a PNG using any number of tools (GIMP/Paint.NET/whatever) and use my cross-platform PNG reading code to examine each pixel of the resulting loaded data, it matches exactly what I did in the tool (or what I programmatically generated with my cross-platform PNG writing code). Subsequent reloading into the creation tools yields exactly the same RGBA8888 components.
If I load the PNG from disk using Apple's:
NSString* pPathToFile = nsStringFromStdString( sPathToFile );
UIImage* pImageFromDiskPNG = [UIImage imageWithContentsOfFile:pPathToFile];
...then examine the resulting pixels, the data is similar but not the same. I would expect, as on other platforms, the data to be identical.
Now, interestingly, if I load the data from the PNG using my code and create a UIImage with it (using some code I show below), I can use that UIImage and display it, copy it, whatever, and if I examine the pixel data, it's exactly what I gave it to begin with (which is why I think it's the loading/saving step where Apple is modifying the image data).
When I instruct it to save what I know to be a good UIImage with perfect pixel data, and then load that Apple-saved image with my PNG loading code, I can see it's not exactly the same data. I have used several methods by which Apple suggests saving UIImages to PNG (UIImagePNGRepresentation primarily).
The only thing I can really think of is that Apple when loading or saving on iOS doesn't truly support RGBA8888 and is doing some sort of premultiply with the alpha channel - I speculate about this because when I first started using the code I posted below I was choosing
kCGImageAlphaLast
...instead of what I ultimately had to use
kCGImageAlphaPremultipliedLast
because the former is not supported on iOS for some reason.
Does anyone have any experience around this issue on iOS?
Cheers!
The code I use to push/pull RGBA8888 data into and out of UIImages is below:
- (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage*)image dataSize:(NSUInteger*)dataSize
{
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (return memory allocated with new)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
*dataSize = sizeof(unsigned char) * bytesPerRow * height;
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
return newBitmap;
}
- (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
uint32_t *bitmapData;
size_t bitsPerPixel = 32;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * bytesPerPixel;
size_t bufferLength = bytesPerRow * height;
colorSpace = CGColorSpaceCreateDeviceRGB();
if(!colorSpace) {
NSLog(#"Error allocating color space RGB\n");
return NULL;
}
// Allocate memory for image data
bitmapData = (uint32_t *)malloc(bufferLength);
if(!bitmapData) {
NSLog(#"Error allocating memory for bitmap\n");
CGColorSpaceRelease(colorSpace);
return NULL;
}
//Create bitmap context
context = CGBitmapContextCreate( bitmapData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
if( !context )
{
free( bitmapData );
NSLog( #"Bitmap context not created" );
}
CGColorSpaceRelease( colorSpace );
return context;
}
- (UIImage*) convertBitmapRGBA8ToUIImage:(unsigned char*) pBuffer withWidth:(int) nWidth withHeight:(int) nHeight
{
// Create the bitmap context
const size_t nColorChannels = 4;
const size_t nBitsPerChannel = 8;
const size_t nBytesPerRow = ((nBitsPerChannel * nWidth) / 8) * nColorChannels;
CGColorSpaceRef oCGColorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef oCGContextRef = CGBitmapContextCreate( pBuffer, nWidth, nHeight, nBitsPerChannel, nBytesPerRow , oCGColorSpaceRef, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
// create the image:
CGImageRef toCGImage = CGBitmapContextCreateImage(oCGContextRef);
UIImage* pImage = [[UIImage alloc] initWithCGImage:toCGImage];
return pImage;
}
Based on your source code, it appears that you are using BGRA (RGB + alpha channel) data which is imported from PNG source images. When you attach images to an iOS project, Xcode will pre-process each image to pre-multiply the RGB and A channel data for performance reasons. So, by the time the image is loaded on the iPhone device, the RGB values for non-opaque (not A = 255) pixels can be changed. The RGB numbers are modified, but the actual image data will come out the same when rendered to the screen by iOS. This is known as "straight alpha" vs "pre-multiplied alpha".
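A small worked example of what pre-multiplication does to the stored component values (a sketch of the arithmetic only, not Apple's exact pipeline):

// Straight (non-premultiplied) pixel: R=200, G=100, B=50, A=128.
UInt8 r = 200, g = 100, b = 50, a = 128;
// Pre-multiplied storage keeps component * alpha / 255, so the stored bytes change:
UInt8 rp = (UInt8)round(r * a / 255.0); // 100
UInt8 gp = (UInt8)round(g * a / 255.0); // 50
UInt8 bp = (UInt8)round(b * a / 255.0); // 25
// Un-premultiplying on load divides the alpha back out, but the rounding is lossy:
UInt8 rBack = (UInt8)round(rp * 255.0 / a); // 199, not the original 200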
Store the image data directly; don't use UIImage.pngData() to convert the image to data, as this method will change a pixel's RGB values if the pixel has an alpha channel.
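If a byte-exact round trip is the goal, one hedged option is to persist the raw RGBA buffer yourself instead of re-encoding through UIImagePNGRepresentation (sketch; rawPixels, bufferLength and path are placeholders for whatever your extraction code produced):

// Sketch: write the raw RGBA bytes to disk and read them back untouched.
NSData *raw = [NSData dataWithBytes:rawPixels length:bufferLength];
[raw writeToFile:path atomically:YES];
// ...later...
NSData *restored = [NSData dataWithContentsOfFile:path];
// restored is byte-for-byte what was written; no alpha pre-multiplication is applied.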

Why does SDWebImage use an @autoreleasepool block in the decodedImageWithImage method?

As the code snippet below shows, it uses an @autoreleasepool block in this method.
+ (UIImage *)decodedImageWithImage:(UIImage *)image {
// while downloading huge amount of images
// autorelease the bitmap context
// and all vars to help system to free memory
// when there are memory warning.
// on iOS7, do not forget to call
// [[SDImageCache sharedImageCache] clearMemory];
if (image == nil) { // Prevent "CGBitmapContextCreateImage: invalid context 0x0" error
return nil;
}
@autoreleasepool {
// do not decode animated images
if (image.images != nil) {
return image;
}
CGImageRef imageRef = image.CGImage;
CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
BOOL anyAlpha = (alpha == kCGImageAlphaFirst ||
alpha == kCGImageAlphaLast ||
alpha == kCGImageAlphaPremultipliedFirst ||
alpha == kCGImageAlphaPremultipliedLast);
if (anyAlpha) {
return image;
}
// current
CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(CGImageGetColorSpace(imageRef));
CGColorSpaceRef colorspaceRef = CGImageGetColorSpace(imageRef);
BOOL unsupportedColorSpace = (imageColorSpaceModel == kCGColorSpaceModelUnknown ||
imageColorSpaceModel == kCGColorSpaceModelMonochrome ||
imageColorSpaceModel == kCGColorSpaceModelCMYK ||
imageColorSpaceModel == kCGColorSpaceModelIndexed);
if (unsupportedColorSpace) {
colorspaceRef = CGColorSpaceCreateDeviceRGB();
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
// kCGImageAlphaNone is not supported in CGBitmapContextCreate.
// Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
// to create bitmap graphics contexts without alpha info.
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
bitsPerComponent,
bytesPerRow,
colorspaceRef,
kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
// Draw the image into the context and retrieve the new bitmap image without alpha
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
UIImage *imageWithoutAlpha = [UIImage imageWithCGImage:imageRefWithoutAlpha
scale:image.scale
orientation:image.imageOrientation];
if (unsupportedColorSpace) {
CGColorSpaceRelease(colorspaceRef);
}
CGContextRelease(context);
CGImageRelease(imageRefWithoutAlpha);
return imageWithoutAlpha;
}
}
(The method is in SDWebImageDecoder.m; the version is SDWebImage 3.7.0.)
I am confused by this, because these temporary objects will be released after the method returns anyway, so is it necessary to use the autorelease pool just to release them a little earlier? The autorelease pool itself also occupies memory.
Can anyone explain this? Thanks!
Go through this Apple doc. It mentions three occasions when you might use your own autorelease pool blocks:
If you are writing a program that is not based on a UI framework, such as a command-line tool.
If you write a loop that creates many temporary objects. You may use an autorelease pool block inside the loop to dispose of those objects before the next iteration. Using an autorelease pool block in the loop helps to reduce the maximum memory footprint of the application.
If you spawn a secondary thread. You must create your own autorelease pool block as soon as the thread begins executing; otherwise, your application will leak objects. (See Autorelease Pool Blocks and Threads for details.)
I am not sure about the first point, but SDWebImage surely uses an autorelease pool for the other two points.
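The second point is the relevant one here: when decoding a large batch of images in a loop, a pool per iteration lets each iteration's temporaries be freed immediately instead of piling up until the whole batch finishes. A minimal sketch (imageURLs and cacheImage:forURL: are placeholders):

// Sketch: without the inner pool, every decoded bitmap stays alive until the
// enclosing pool drains, which can spike the memory footprint.
for (NSURL *url in imageURLs) {
    @autoreleasepool {
        NSData *data = [NSData dataWithContentsOfURL:url];
        UIImage *raw = [UIImage imageWithData:data];
        UIImage *decoded = [UIImage decodedImageWithImage:raw]; // the method in question
        [self cacheImage:decoded forURL:url];                   // placeholder cache call
    } // temporaries from this iteration are released here
}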

Malloc pointer being freed was not allocated error when calling initWithBitmapData

When I create a CIImage by calling this routine I get the malloc error in the title. When I call initWithBitmapData directly (not within the createCIimageFromData routine), it works fine.
I have seen references to possible bugs in iOS that might be related, but I can't tell for sure and I certainly suspect my code more than Apple's!
My guess is that somehow my additional indirection is screwing things up, but it's cleaner to have the separate routine than to embed the code wherever I need it.
Thank you.
Fails:
- (CIImage *) createCIimageFromData : (unsigned char *)pData width : (int32_t)width height : (int32_t)height
{
/*
Once we have the raw data, we convert it into a CIImage.
The following code does the required work.
*/
NSLog(#"entering createciimage\n");
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
NSData *_pixelsData = [NSData dataWithBytesNoCopy:pData length:(sizeof(unsigned char)*width*height) freeWhenDone:YES ];
CIImage *_dataCIImage = [[CIImage alloc] initWithBitmapData:_pixelsData bytesPerRow:(width*sizeof(unsigned char)) size:CGSizeMake(width,height) format:kCIFormatR8 colorSpace:colorSpaceRef];
CGColorSpaceRelease(colorSpaceRef);
/*
newImage is the final image
Do remember to release the allocated parts.
*/
NSLog(#"done ciimage\n");
return _dataCIImage;
}
Works:
void prepData(unsigned char *pData, // source-destination
int strideSrc, // stride
int width,
int height,
double amount,
int deltaLimit,
id owner)
{
//[owner createCIimageFromData:pData width:width height:height]; // <-- commented out
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
NSData *_pixelsData = [NSData dataWithBytesNoCopy:pData length:(sizeof(unsigned char)*width*height) freeWhenDone:YES ];
CIImage *_dataCIImage = [[CIImage alloc] initWithBitmapData:_pixelsData bytesPerRow:(width*sizeof(unsigned char)) size:CGSizeMake(width,height) format:kCIFormatR8 colorSpace:colorSpaceRef];
CGColorSpaceRelease(colorSpaceRef);
// . . .
}
Evidently the problem is caused when the NSData object attempts to free the data. To avoid the problem, use freeWhenDone:NO and then free the data after you're done with the CIImage.
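A sketch of that fix inside the questioner's routine, with the caller keeping ownership of pData and freeing it only once the CIImage is no longer needed (pData, width and height as in the original code):

// Sketch: NSData no longer tries to free pData, so there is no double free.
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
NSData *pixelsData = [NSData dataWithBytesNoCopy:pData
                                          length:(sizeof(unsigned char) * width * height)
                                    freeWhenDone:NO];
CIImage *dataCIImage = [[CIImage alloc] initWithBitmapData:pixelsData
                                               bytesPerRow:(width * sizeof(unsigned char))
                                                      size:CGSizeMake(width, height)
                                                    format:kCIFormatR8
                                                colorSpace:colorSpaceRef];
CGColorSpaceRelease(colorSpaceRef);
// ... use dataCIImage ...
free(pData); // the caller frees the buffer after it is done with the CIImage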

How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to work out the coordinates of which pixel of the UIImageView was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
CGPoint touchPoint = [gesture locationInView:testImageView];
NSLog(#"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the Stack Overflow posts I have found have answers that work or that are not outdated. Skilled coders, however, may be able to help me decipher the older posts to make something that works, or to produce a simple fix using my above code for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, running the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, I am met with the image being replaced with a white space, and the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by me testing it on my phone. However, the same code has produced a few issues in my own project. Though I have the suspicion that they are all caused by one or two simple central issues. How can I solve these errors?
You'll want to break this problem up into multiple steps.
Get the coordinates of the touched point in the image coordinate system
Get the x and y position of the pixel to change
Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)
@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;
@end
@implementation UIImageView (PointConversionCatagory)
-(CGAffineTransform) viewToImageTransform {
UIViewContentMode contentMode = self.contentMode;
// failure conditions. If any of these are met – return the identity transform
if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
(contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
return CGAffineTransformIdentity;
}
// the width and height ratios
CGFloat rWidth = self.image.size.width/self.frame.size.width;
CGFloat rHeight = self.image.size.height/self.frame.size.height;
// whether the image will be scaled according to width
BOOL imageWiderThanView = rWidth > rHeight;
if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
// The ratio to scale both the x and y axis by
CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;
// The x-offset of the inner rect as it gets centered
CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
// The y-offset of the inner rect as it gets centered
CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
} else {
return CGAffineTransformMakeScale(rWidth, rHeight);
}
}
-(CGAffineTransform) imageToViewTransform {
return CGAffineTransformInvert(self.viewToImageTransform);
}
@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
if (!imageView.image) {
return;
}
// get the pixel position
CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
// replace image with new image, with the pixel replaced
imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
NSInteger x;
NSInteger y;
};
typedef struct PixelPosition PixelPosition;
@interface UIImage (UIImagePixelManipulationCatagory)
@end
@implementation UIImage (UIImagePixelManipulationCatagory)
-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
// components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
UInt8* color255Components = calloc(sizeof(UInt8), 4);
for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
// raw image reference
CGImageRef rawImage = self.CGImage;
// image attributes
size_t width = CGImageGetWidth(rawImage);
size_t height = CGImageGetHeight(rawImage);
CGRect rect = {CGPointZero, {width, height}};
// image format
size_t bitsPerComponent = 8;
size_t bytesPerRow = width*4;
// the bitmap info
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
// data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
UInt8* data = calloc(bytesPerRow, height);
// get new RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create bitmap context
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
// draw image into context (populating the data array while doing so)
CGContextDrawImage(ctx, rect, rawImage);
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
// get image from context
CGImageRef img = CGBitmapContextCreateImage(ctx);
// clean up
free(color255Components);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(data);
UIImage* returnImage = [UIImage imageWithCGImage:img];
CGImageRelease(img);
return returnImage;
}
@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished Result:
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
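If the source image has a Retina scale, a variant using UIGraphicsBeginImageContextWithOptions keeps that scale instead of resampling to 1x (sketch; someX, someY and myColor as above):

UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
// NO = not opaque; originalImage.scale preserves 2x/3x backing stores
UIGraphicsBeginImageContextWithOptions(size, NO, originalImage.scale);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1)); // fills a 1x1 point at (someX, someY)
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();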

How to get the coordinates of each pixel of custom uiimage?

Hi everyone
I need to write a simple puzzle game, and the main requirement is that when a puzzle piece is close to its destination and is "released", it snaps exactly to where it should be.
So I tried to get an array of the coordinates of each pixel of the image; to do this I want to compare each pixel's colour with the background colour, and if they are not equal, that is a coordinate of an image pixel. But I don't know how to do this.
I tried:
- (BOOL)isImagePixel:(UIImage *)image withX:(int)x andY:(int) y {
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
int pixelInfo = ((image.size.width * y) + x ) * 4; // The image is png
UInt8 red = data[pixelInfo];
UInt8 green = data[(pixelInfo + 1)];
UInt8 blue = data[pixelInfo + 2];
UInt8 alpha = data[pixelInfo + 3];
CFRelease(pixelData);
UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
NSLog(#"color is %#",[UIColor whiteColor]);
if ([color isEqual:self.view.backgroundColor]){
NSLog(#"x = %d, y = %d",x,y);
return YES;
}
else return NO;
}
What is wrong here?
Or maybe someone can suggest me another solution?
Thank you.
This appears to be a really cumbersome solution. My suggestion is that for every piece you maintain a table of, say, its top-left coordinate in the puzzle, and when the user lifts a finger you compute the absolute distance from the current location to the designated location.
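A hedged sketch of that idea (targetOrigin and the 20-point snapThreshold are hypothetical values you would store per piece):

// Sketch: snap the piece into place if it is released close enough to its target.
- (void)pieceWasReleased:(UIView *)piece targetOrigin:(CGPoint)targetOrigin {
    const CGFloat snapThreshold = 20.0; // hypothetical tolerance in points
    CGFloat dx = piece.frame.origin.x - targetOrigin.x;
    CGFloat dy = piece.frame.origin.y - targetOrigin.y;
    if (hypot(dx, dy) <= snapThreshold) {
        CGRect frame = piece.frame;
        frame.origin = targetOrigin; // place it exactly where it belongs
        piece.frame = frame;
    }
}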
