Crash in simulator but not on iPhone when I use dispatch_apply - iOS

When I use dispatch_apply to add data to an NSCountedSet, I get a crash in the simulator but not on the iPhone. The message is "-[__NSArrayI isEqual:]: message sent to deallocated instance 0x60000024c5a0", and I cannot find how it happens.
CGSize thumbSize = CGSizeMake(200, 200);
NSCountedSet *cls2 = [NSCountedSet setWithCapacity:thumbSize.width * thumbSize.height];
dispatch_apply(thumbSize.width, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t index) {
    int x = (int)index;
    for (int y = 0; y < thumbSize.height; y++) {
        if (y < x) {
            continue;
        }
        int offset = 4 * (x * y);
        int red = data[offset];
        int green = data[offset + 1];
        int blue = data[offset + 2];
        int alpha = data[offset + 3];
        NSArray *clr2 = @[@(red), @(green), @(blue), @(alpha)];
        [cls2 addObject:clr2];
    }
});
The crash is at [cls2 addObject:clr2]; the log is "-[__NSArrayI isEqual:]: message sent to deallocated instance 0x60000024c5a0".
Here is all the code in my function; I want to get the most common color in an image.
CGSize thumbSize = CGSizeMake(200, 200);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, thumbSize.width, thumbSize.height, 8, thumbSize.width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGRect drawRect = CGRectMake(0, 0, thumbSize.width, thumbSize.height);
CGContextDrawImage(context, drawRect, self.CGImage);
CGColorSpaceRelease(colorSpace);
unsigned char *data = CGBitmapContextGetData(context);
if (data == NULL)
{
    return nil;
}
NSCountedSet *cls2 = [NSCountedSet setWithCapacity:thumbSize.width * thumbSize.height];
dispatch_apply(thumbSize.width, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t index) {
    int x = (int)index;
    for (int y = 0; y < thumbSize.height; y++)
    {
        if (y < x)
        {
            continue;
        }
        int offset = 4 * (x * y);
        int red = data[offset];
        int green = data[offset + 1];
        int blue = data[offset + 2];
        int alpha = data[offset + 3];
        NSArray *clr3 = @[@(red), @(green), @(blue), @(alpha)];
        [cls2 addObject:clr3];
    }
});
CGContextRelease(context);
NSEnumerator *enumerator = [cls2 objectEnumerator];
NSArray *curColor = nil;
NSArray *maxColor = nil;
NSUInteger maxCount = 0;
while ((curColor = [enumerator nextObject]) != nil)
{
    NSUInteger tmpCount = [cls2 countForObject:curColor];
    if (tmpCount < maxCount) {
        continue;
    }
    maxCount = tmpCount;
    maxColor = curColor;
}
NSLog(@"colors: RGB A %f %d %d %d", [maxColor[0] floatValue], [maxColor[1] intValue], [maxColor[2] intValue], [maxColor[3] intValue]);
return [UIColor colorWithRed:([maxColor[0] intValue] / 255.0f) green:([maxColor[1] intValue] / 255.0f) blue:([maxColor[2] intValue] / 255.0f) alpha:([maxColor[3] intValue] / 255.0f)];

NSCountedSet is not thread-safe (see the reference documentation). Attempting to mutate one from concurrent threads is going to give unpredictable results, as you have found.
Either use a simple for loop without dispatch_apply, dispatched onto a background queue if required, to process your data, or protect the NSCountedSet update by dispatching it onto a serial dispatch queue.
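For example, here is a minimal sketch of the serial-queue approach. The names data, thumbSize, and cls2 follow the question; the queue label is made up, and note it assumes row-major indexing, 4 * (y * width + x), where the question used 4 * (x * y):
dispatch_queue_t setQueue = dispatch_queue_create("com.example.countedset", DISPATCH_QUEUE_SERIAL);
int width = (int)thumbSize.width;
dispatch_apply(width, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t index) {
    int x = (int)index;
    for (int y = x; y < thumbSize.height; y++) {
        int offset = 4 * (y * width + x); // row-major pixel offset (an assumption; the question used 4 * (x * y))
        NSArray *clr = @[@(data[offset]), @(data[offset + 1]), @(data[offset + 2]), @(data[offset + 3])];
        // Only the mutation of the shared set is funneled onto the serial queue.
        dispatch_async(setQueue, ^{
            [cls2 addObject:clr];
        });
    }
});
// Drain any still-queued updates before reading the set.
dispatch_sync(setQueue, ^{});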

Related

Convert YUV data to CVPixelBufferRef and play in AVSampleBufferDisplayLayer

I have a stream of video in IYUV (4:2:0) format and am trying to convert it into a CVPixelBufferRef and then into a CMSampleBufferRef to play in an AVSampleBufferDisplayLayer (required for AVPictureInPictureController). I've tried several versions of a solution, but none actually works well; I hope someone with video-processing experience can tell me what I've done wrong here.
Full function:
- (CMSampleBufferRef)makeSampleBufferFromTexturesWithY:(void *)yPtr U:(void *)uPtr V:(void *)vPtr yStride:(int)yStride uStride:(int)uStride vStride:(int)vStride width:(int)width height:(int)height doMirror:(BOOL)doMirror doMirrorVertical:(BOOL)doMirrorVertical
{
    NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{}}; // For 1,2,3
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result;
    result = CVPixelBufferCreate(kCFAllocatorDefault,
                                 width,
                                 height,
                                 kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, // For 1,2,3
                                 // kCVPixelFormatType_32BGRA, // For 4.
                                 (__bridge CFDictionaryRef)(pixelAttributes),
                                 &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"PIP: Unable to create cvpixelbuffer %d", result);
        return nil;
    }
    /// Converter code below...
    CMFormatDescriptionRef formatDesc;
    result = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDesc);
    if (result != kCVReturnSuccess) {
        NSAssert(NO, @"PIP: Failed to create CMFormatDescription: %d", result);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        return nil;
    }
    CMTime now = CMTimeMakeWithSeconds(CACurrentMediaTime(), 1000);
    CMSampleTimingInfo timingInfo;
    timingInfo.duration = CMTimeMakeWithSeconds(1, 1000);
    timingInfo.presentationTimeStamp = now;
    timingInfo.decodeTimeStamp = now;
    @try {
        if (@available(iOS 13.0, *)) {
            CMSampleBufferRef sampleBuffer;
            CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, pixelBuffer, formatDesc, &timingInfo, &sampleBuffer);
            // CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            CVPixelBufferRelease(pixelBuffer);
            pixelBuffer = nil;
            // free(dest.data);
            // free(uvPlane);
            return sampleBuffer;
        } else {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return nil;
        }
    } @catch (NSException *exception) {
        NSAssert(NO, @"PIP: Failed to create CVSampleBuffer: %@", exception);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        return nil;
    }
}
Here are some solutions that I found:
1. Combine U and V into one plane, but the bottom half is green (see the stride-aware sketch after solution 4).
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, width * height);
CGFloat uPlaneSize = width * height / 4;
CGFloat vPlaneSize = width * height / 4;
CGFloat numberOfElementsForChroma = uPlaneSize + vPlaneSize;
// for simplicity and speed create a combined UV plane to hold the pixels
uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
memcpy(uvPlane, uPtr, uPlaneSize);
memcpy(uvPlane + (size_t)uPlaneSize, vPtr, vPlaneSize); // note: the original (uint8_t) cast truncated this offset to 8 bits
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
2. Interleave U and V; the image is still distorted.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        yDestPlane[k++] = ((unsigned char *)yPtr)[j + i * yStride];
    }
}
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int row = 0, index = 0; row < height / 2; row++) {
    for (int col = 0; col < width / 2; col++) {
        uvDestPlane[index++] = ((unsigned char *)uPtr)[col + row * uStride];
        uvDestPlane[index++] = ((unsigned char *)vPtr)[col + row * vStride];
    }
}
3. Somewhat similar to 1.
int yPixels = yStride * height;
int uPixels = uStride * height/2;
int vPixels = vStride * height/2;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, yPixels);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uPtr, uPixels);
memcpy(uvDestPlane + uPixels, vPtr, vPixels);
4. Use Accelerate to convert the YUV to BGRA and then convert that to a CVPixelBuffer; no error, but no video is rendered.
vImage_Buffer srcYp = {
    .width = width,
    .height = height,
    .rowBytes = yStride,
    .data = yPtr,
};
vImage_Buffer srcCb = {
    .width = width / 2,
    .height = height / 2,
    .rowBytes = uStride,
    .data = uPtr,
};
vImage_Buffer srcCr = {
    .width = width / 2,
    .height = height / 2,
    .rowBytes = vStride,
    .data = vPtr,
};
vImage_Buffer dest;
dest.data = NULL;
dest.width = width;
dest.height = height;
vImage_Error error = kvImageNoError;
error = vImageBuffer_Init(&dest, height, width, 32, kvImagePrintDiagnosticsToConsole);
// vImage_YpCbCrPixelRange pixelRange = (vImage_YpCbCrPixelRange){ 0, 128, 255, 255, 255, 1, 255, 0 };
vImage_YpCbCrPixelRange pixelRange = { 16, 128, 235, 240, 255, 0, 255, 0 };
vImage_YpCbCrToARGB info;
error = kvImageNoError;
error = vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
                                                      &pixelRange,
                                                      &info,
                                                      kvImage420Yp8_Cb8_Cr8,
                                                      kvImageARGB8888,
                                                      kvImagePrintDiagnosticsToConsole);
error = kvImageNoError;
uint8_t permuteMap[4] = {3, 2, 1, 0}; // BGRA - iOS only supports BGRA
error = vImageConvert_420Yp8_Cb8_Cr8ToARGB8888(&srcYp,
                                               &srcCb,
                                               &srcCr,
                                               &dest,
                                               &info,
                                               permuteMap, // must be non-NULL on iOS (macOS accepts NULL); iOS only supports BGRA
                                               255,
                                               kvImagePrintDiagnosticsToConsole);
if (error != kvImageNoError) {
    NSAssert(NO, @"PIP: vImageConvert error %ld", error);
    return nil;
}
// vImageBuffer_CopyToCVPixelBuffer will give out error destFormat bitsPerComponent = 0 is not supported
// vImage_CGImageFormat format = {
// .bitsPerComponent = 8,
// .bitsPerPixel = 32,
// .bitmapInfo = (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
// .colorSpace = CGColorSpaceCreateDeviceRGB()
// };
// vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
//
// error = vImageBuffer_CopyToCVPixelBuffer(&dest, &format, pixelBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
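// Note: CVPixelBufferCreateWithBytes does not copy dest.data, so that buffer must remain valid for the pixel buffer's lifetime (or pass a release callback instead of NULL).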
result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_32BGRA,
                                      dest.data,
                                      dest.rowBytes,
                                      NULL,
                                      NULL,
                                      (__bridge CFDictionaryRef)pixelAttributes,
                                      &pixelBuffer);
I had to resort to a third-party library, OGVKit, to make it work with some minor tweaks. Its decoder's (void)updatePixelBuffer420: method handles YUV420 data with very fast decoding time.
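One thing to check in attempts 1-3 above (flagged at solution 1): they copy the planes as if the destination were tightly packed, but a CVPixelBuffer's planes are often padded, so CVPixelBufferGetBytesPerRowOfPlane can be larger than the visible width, and a green band is a typical symptom of ignoring that. A stride-aware NV12 copy might look like this rough sketch (names follow the question; an illustration, not a drop-in fix):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// Copy the Y plane row by row, honoring both source and destination strides.
uint8_t *yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < height; row++) {
    memcpy(yDest + row * yDestStride, (const uint8_t *)yPtr + row * yStride, width);
}
// Interleave U and V into the single CbCr plane that the biplanar format expects.
uint8_t *uvDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int row = 0; row < height / 2; row++) {
    uint8_t *dst = uvDest + row * uvDestStride;
    const uint8_t *u = (const uint8_t *)uPtr + row * uStride;
    const uint8_t *v = (const uint8_t *)vPtr + row * vStride;
    for (int col = 0; col < width / 2; col++) {
        *dst++ = u[col];
        *dst++ = v[col];
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);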

How to convert CT (Computed Tomography) grayscale binary data to an RGB UIImage

Here is binary CT grayscale data, in the file data_byte1.
The aim is to change the window width and window center to change the CT image.
Two bytes in the data make one pixel, and the width and height are both 512 pixels.
How do I transform the pixel array into an RGB array and then into a bitmap?
- (void)viewDidLoad {
    [super viewDidLoad];
    NSData *data = [[NSData alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"data_byte1" ofType:@""]];
    Byte *testByte = (Byte *)[data bytes];
    self.imageView.image = [self imageFromGRAYBytes:testByte imageSize:CGSizeMake(512, 512)];
}
- (UIImage *)imageFromGRAYBytes:(unsigned char *)imageBytes imageSize:(CGSize)imageSize {
    CGImageRef imageRef = [self imageRefFromGRAYBytes:imageBytes imageSize:imageSize];
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
- (CGImageRef)imageRefFromGRAYBytes:(unsigned char *)imageBytes imageSize:(CGSize)imageSize {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(imageBytes,
                                                 imageSize.width,
                                                 imageSize.height,
                                                 16,
                                                 imageSize.width * 2,
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return imageRef;
}
I get the picture with the method above, which produces the original image.
But I want to change the display of the picture through this method:
- (int)getPresentValueV:(int)v AndWidth:(int)w AndHeight:(int)h {
    int minv = h - roundf(w / 2);
    if (v < minv) {
        return 0;
    }
    int maxv = h + roundf(w / 2);
    if (v > maxv) {
        return 255;
    }
    int pv = roundf(255 * (v - minv) / w);
    if (pv < 0) {
        return 0;
    }
    if (pv > 255) {
        return 255;
    }
    return pv;
}
This method maps a raw value v to a presentation value (PV) in the range 0-255 using window width w and window center h; the defaults are w = 1600 and h = -400. For example, with those defaults, v = -400 gives minv = -1200 and PV = round(255 * 800 / 1600) = 128, a mid-gray. Next, the PV is used as an RGB triple (PV, PV, PV) to create the UIImage.
for (unsigned i = 0; i < data.length / 2; i++) {
    NSData *intData = [data subdataWithRange:NSMakeRange(i * 2, 2)];
    // Note: *(int *) reads 4 bytes from a 2-byte NSData; a 16-bit read such as *(short *)[intData bytes] would match the pixel size.
    [arrray addObject:@([self getPresentValueV:*(int *)([intData bytes]) AndWidth:1600 AndHeight:-400])];
}
So I want to choose different w and h values via the getPresentValueV method above, get the PVs, and generate the (PV, PV, PV) RGB array to build a new picture.
- (UIImage *)imageWithGrayArray:(NSArray *)array {
    const int width = 512;
    const int height = 512;
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    UInt32 *pixels = (UInt32 *)malloc(height * width * sizeof(UInt32));
    memset(pixels, 0, height * width * sizeof(UInt32));
    for (int i = 0; i < array.count; i++) {
        NSArray *subArray = array[i];
        for (int j = 0; j < subArray.count; j++) {
            *(pixels + i * width + j) = makeRGBAColor([subArray[j] integerValue], 0, 0, 255);
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    UIImage *image = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return image;
}
I generated a two-dimensional array to generate a new picture, but it was not the correct result.
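For what it's worth, here is a minimal sketch of the image-building step that writes the PV into all three channels, assuming a flat NSArray of 512 * 512 presentation values like the loop above produces (the code in the question indexes a two-dimensional array and only fills the red channel; the method name here is made up):
- (UIImage *)imageWithPresentationValues:(NSArray<NSNumber *> *)pvValues {
    const int width = 512;
    const int height = 512;
    uint8_t *pixels = calloc(width * height * 4, sizeof(uint8_t));
    for (int i = 0; i < width * height && i < pvValues.count; i++) {
        uint8_t pv = (uint8_t)[pvValues[i] integerValue];
        pixels[i * 4 + 0] = pv;  // R
        pixels[i * 4 + 1] = pv;  // G
        pixels[i * 4 + 2] = pv;  // B
        pixels[i * 4 + 3] = 255; // A
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return image;
}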

Turning a UIImage into an int array

I currently have code that takes a UIImage and returns an NSArray:
- (NSArray *)getPixelsFromImage:(UIImage *)image
{
    NSMutableArray *pixelData = [NSMutableArray new];
    for (int y = 0; y < image.size.height; y += 1) {
        for (int x = 0; x < image.size.width; x += 1) {
            CGImageRef tmp = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(x, y, 1, 1));
            UIImage *part = [UIImage imageWithCGImage:tmp];
            [pixelData addObject:part];
            CGImageRelease(tmp); // release the 1x1 sub-image created above
        }
    }
    return [pixelData copy];
}
I was wondering how I could take the NSArray this returns and turn it into an array of integers.
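Creating a one-pixel UIImage per pixel is very slow, and the array ends up holding UIImage objects rather than numbers. A common alternative, sketched below under the assumption that packed 32-bit RGBA integers are what you want (the method name is made up), is to draw the image once into a bitmap context with a known layout and read the buffer directly:
- (NSArray<NSNumber *> *)getPixelIntegersFromImage:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    uint32_t *buffer = calloc(width * height, sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Draw into a known RGBA8888 layout so each pixel is one 32-bit integer.
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    NSMutableArray<NSNumber *> *pixels = [NSMutableArray arrayWithCapacity:width * height];
    for (size_t i = 0; i < width * height; i++) {
        [pixels addObject:@(buffer[i])]; // one packed RGBA value per pixel
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(buffer);
    return [pixels copy];
}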

iOS: Feather with glow/shadow effect on UIImage

I am trying to find a way to apply a feather effect with a shadow around a UIImage (not the UIImageView) in iOS, but I couldn't find any perfect solution yet. I have an idea that it might be done by masking, but I am very new to Core Graphics.
If anyone can help, thanks.
OK so:
I was looking for the same thing, but unfortunately with no luck, so I decided to create my own feather(ing) code.
Add this code to a UIImage category, and then call [image featherImageWithDepth:4] (4 is just an example).
Try to keep the depth as low as possible.
//==============================================================================
- (UIImage *)featherImageWithDepth:(int)featherDepth {
    // First get the image into your data buffer
    CGImageRef imageRef = [self CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = 0;
    NSUInteger rawDataCount = width * height;
    for (int i = 0; i < rawDataCount; ++i, byteIndex += bytesPerPixel) {
        NSInteger alphaIndex = byteIndex + 3;
        if (rawData[alphaIndex] > 100) {
            for (int row = 1; row <= featherDepth; row++) {
                if (testBorderLayer((long)alphaIndex,
                                    rawData,
                                    (long)rawDataCount,
                                    (long)width,
                                    (long)height,
                                    row)) {
                    int destinationAlpha = 255 / (featherDepth + 1) * (row + 1);
                    double alphaDiv = (double)destinationAlpha / (double)rawData[alphaIndex];
                    rawData[alphaIndex] = destinationAlpha;
                    rawData[alphaIndex - 1] = (double)rawData[alphaIndex - 1] * alphaDiv;
                    rawData[alphaIndex - 2] = (double)rawData[alphaIndex - 2] * alphaDiv;
                    rawData[alphaIndex - 3] = (double)rawData[alphaIndex - 3] * alphaDiv;
                    // Debug visualization left over from development: colors each feather row.
                    // switch (row) {
                    //     case 1:
                    //         rawData[alphaIndex - 1] = 255;
                    //         rawData[alphaIndex - 2] = 0;
                    //         rawData[alphaIndex - 3] = 0;
                    //         break;
                    //     case 2:
                    //         rawData[alphaIndex - 1] = 0;
                    //         rawData[alphaIndex - 2] = 255;
                    //         rawData[alphaIndex - 3] = 0;
                    //         break;
                    //     case 3:
                    //         rawData[alphaIndex - 1] = 0;
                    //         rawData[alphaIndex - 2] = 0;
                    //         rawData[alphaIndex - 3] = 255;
                    //         break;
                    //     case 4:
                    //         rawData[alphaIndex - 1] = 127;
                    //         rawData[alphaIndex - 2] = 127;
                    //         rawData[alphaIndex - 3] = 0;
                    //         break;
                    //     case 5:
                    //         rawData[alphaIndex - 1] = 127;
                    //         rawData[alphaIndex - 2] = 0;
                    //         rawData[alphaIndex - 3] = 127;
                    //     case 6:
                    //         rawData[alphaIndex - 1] = 0;
                    //         rawData[alphaIndex - 2] = 127;
                    //         rawData[alphaIndex - 3] = 127;
                    //         break;
                    //     default:
                    //         break;
                    // }
                    break;
                }
            }
        }
    }
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newCGImage scale:[self scale] orientation:UIImageOrientationUp];
    CGImageRelease(newCGImage);
    CGContextRelease(context);
    free(rawData);
    return result;
}
//==============================================================================
bool testBorderLayer(long byteIndex,
                     unsigned char *imageData,
                     long dataSize,
                     long pWidth,
                     long pHeight,
                     int border) {
    int width = border * 2 + 1;
    int height = width - 2;
    // run thru border pixels
    // |-|
    // | |
    // |-|
    // top, bottom - horizontal
    for (int i = 1; i < width - 1; i++) {
        long topIndex = byteIndex + 4 * (-border * pWidth - border + i);
        long botIndex = byteIndex + 4 * (border * pWidth - border + i);
        long destColl = byteIndex / 4 % pWidth - border + i;
        if (destColl > 1 && destColl < pWidth) {
            if (testPoint(topIndex, imageData, dataSize) ||
                testPoint(botIndex, imageData, dataSize)) {
                return true;
            }
        }
    }
    // left, right - vertical
    if (byteIndex / 4 % pWidth < pWidth - border - 1) {
        for (int k = 0; k < height; k++) {
            long rightIndex = byteIndex + 4 * (border - border * pWidth + pWidth * k);
            if (testPoint(rightIndex, imageData, dataSize)) {
                return true;
            }
        }
    }
    if (byteIndex / 4 % pWidth > border) {
        for (int k = 0; k < height; k++) {
            long leftIndex = byteIndex + 4 * (-border - border * pWidth + pWidth * k);
            if (testPoint(leftIndex, imageData, dataSize)) {
                return true;
            }
        }
    }
    return false;
}
//==============================================================================
bool testPoint(long pointIndex, unsigned char *imageData, long dataSize) {
    if (pointIndex >= 0 && pointIndex < dataSize * 4 - 1 &&
        imageData[pointIndex] < 30) {
        return true;
    }
    return false;
}
//==============================================================================
Sorry for the sparse commenting ;)
I suggest looking at the CIFilter list put out by Apple; they have some pretty decent filters:
/reference/CoreImageFilterReference
And also check out GPUImage:
https://github.com/BradLarson/GPUImage
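For instance, with Core Image, a blur-based glow might look like this rough sketch (the filter names are standard Core Image; the radius and the compositing choice are just assumptions to illustrate the idea):
- (UIImage *)glowedImage:(UIImage *)image {
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];
    // Blur a copy of the image to act as the glow halo.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@8.0 forKey:kCIInputRadiusKey];
    // Composite the sharp original over its blurred halo.
    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:input forKey:kCIInputImageKey];
    [composite setValue:blur.outputImage forKey:kCIInputBackgroundImageKey];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:composite.outputImage fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}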

Get pixel values from a PNG file: which is right, and why?

I need the pixel values from a PNG file, so I searched SO and found two methods, as follows:
Method 1
- (void)GeneratePixelArray {
    UIImage *cImage = [UIImage imageNamed:@"ball.png"];
    int width = (int)cImage.size.width;
    int height = (int)cImage.size.height;
    unsigned char *cMap = (unsigned char *)malloc(width * height); // note: allocated but never used or freed
    memset(cMap, 0, width * height);
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cImage.CGImage));
    const UInt32 *pixels = (const UInt32 *)CFDataGetBytePtr(imageData);
    // 0xff 0x00 is guard for this demo
    for (int j = 0; j < (width * height); j++)
    {
        printf("0x%x\n", (unsigned int)pixels[j]);
    }
    CFRelease(imageData);
}
Method 2
- (void *)GeneratePixelArray//: (UIImage *) image
{
    UIImage *cImage = [[UIImage alloc] imageNamed:@"ball.png"];
    //UIImage *cImage = [UIImage imageNamed:@"ball.png"];
    int pixelsWidth = (int)cImage.size.width;
    int pixelsHeight = (int)cImage.size.height;
    CGRect rect = {{0, 0}, {pixelsWidth, pixelsHeight}};
    // Use RGB color space without alpha, just RGB
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        NSLog(@"Error allocating color space! \n");
        return NULL;
    }
    unsigned int bitmapBytesPerRow = pixelsWidth * 4;
    unsigned int bitmapByteCount = bitmapBytesPerRow * pixelsHeight;
    void *bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        NSLog(@"Memory not allocated!\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // create bitmap context, 8 bits per component
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 pixelsWidth,
                                                 pixelsHeight,
                                                 8, // bits per component
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    // Make sure to release the color space before returning
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        free(bitmapData);
        NSLog(@"Context could not be created!");
        return NULL;
    }
    // Draw the image into the bitmap context
    CGContextDrawImage(context, rect, cImage.CGImage);
    // Now we can get a pointer to the image data associated with the context
    unsigned char *bitsData;
    bitsData = CGBitmapContextGetData(context);
    if (!bitsData)
    {
        NSLog(@"Failed");
        return NULL; // note: the original had a bare `return;` in a method returning void *
    }
    void *data = CGBitmapContextGetData(context);
    unsigned char *bitmapData2 = malloc(bitmapByteCount);
    if (bitmapData2 == NULL)
    {
        NSLog(@"Memory could not be allocated!\n");
        return NULL;
    }
    unsigned char *rcdata = (unsigned char *)data;
    unsigned char *wcdata = bitmapData2;
    // remove ARGB's fourth value, it is alpha
    for (int i = 0; i < bitmapByteCount / 4; ++i, rcdata += 4)
    {
        printf("%x\n", (unsigned int)*(unsigned int *)rcdata);
        *(wcdata + 0) = *(rcdata + 0);
        *(wcdata + 1) = *(rcdata + 1);
        *(wcdata + 2) = *(rcdata + 2);
        *(wcdata + 3) = *(rcdata + 3);
        if (*(wcdata + 3) < 20) {
            printf("alpha...\n");
        }
        // if ((*(wcdata + 0) == 255) && (*(wcdata + 1) == 0) && (*(wcdata + 2) == 0)) {
        //     printf("red\n");
        // }
        wcdata += 4; // skip alpha
    }
    CGContextRelease(context);
    return bitsData;
}
I logged the output using:
printf("%x\n", (unsigned int)*(unsigned int*)rcdata);
The logs from Method 1 and Method 2 differ! Why? I am confused.
Thanks a lot!
