Please check this code and help me out:
CGImageRef cRef = CGImageRetain(im.CGImage);
NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef));
// return pointer to data
unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
// step through char data
for(int k = 0; k < [pixelData length]; k += 4) {
// change accordingly
pixelBytes[k] = pixelBytes[k];
pixelBytes[k+1] = pixelBytes[k+1];
pixelBytes[k+2] = pixelBytes[k+2];
pixelBytes[k+3] = 255;
}
NSData* newPixelData = [NSData dataWithBytes:pixelBytes length:[pixelData length]];
CFDataRef imgData = (CFDataRef)pixelData;
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
CGImageRef throughCGImage = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, YES, kCGRenderingIntentDefault);
UIImage* newImage = [UIImage imageWithCGImage:throughCGImage];
NSLog(#"newImage: %#", newImage);
Some data is coming through, but it is not getting added to the UIImageView; it just shows blank.
Problem 0)
You're mutating immutable data, which is undefined behavior.
Problem 1)
Anyway, PNG is a real file format, not a raw bitmap blob with a fixed sample format. A PNG can represent images in many ways; your program just overwrites good data as if it were a fixed-layout RGBA bitmap, and then hands that data to CGImageCreateWithPNGDataProvider, which expects PNG-encoded bytes.
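For reference, the usual safe pattern is to draw the image into a bitmap context whose format you control, mutate those bytes, and build a new CGImage from the context. A minimal sketch, assuming an 8-bit RGBA layout (im is the source UIImage from the question):
CGImageRef source = im.CGImage;
size_t width = CGImageGetWidth(source);
size_t height = CGImageGetHeight(source);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Let Core Graphics allocate the buffer; its layout is now known and legitimately mutable
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
unsigned char *pixels = CGBitmapContextGetData(context); // owned by the context
for (size_t k = 0; k < width * height * 4; k += 4) {
    pixels[k + 3] = 255; // force the alpha channel, as the original loop intended
}
CGImageRef result = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:result];
CGImageRelease(result);
CGContextRelease(context);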
I am trying to get a list of all the colors in an image in Objective-C. Note, I am COMPLETELY new to Objective-C - I've done some Swift work in the past, but not really Objective-C.
I pulled in a library that, as part of its code, is more or less supposed to pull out all the colors. I've modified it to look like this (the callback at the end is from React Native; the path argument is just a string of the path):
RCT_EXPORT_METHOD(getColors:(NSString *)path options:(NSDictionary *)options callback:(RCTResponseSenderBlock)callback) {
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
UIImage *image =
[UIImage imageWithCGImage:[originalImage CGImage]
scale:0.5
orientation:(UIImageOrientationUp)];
CGImageRef cgImage = [image CGImage];
NSUInteger width = CGImageGetWidth(cgImage);
NSUInteger height = CGImageGetHeight(cgImage);
// Allocate storage for the pixel data
unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
// Create the color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Set some metrics
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
// Create context using the storage
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// Release the color space
CGColorSpaceRelease(colorSpace);
// Draw the image into the storage
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
// We are done with the context
CGContextRelease(context);
// determine the colours in the image
NSMutableArray * colours = [NSMutableArray new];
float x = 0;
float y = 0;
for (int n = 0; n<(width*height); n++){
int index = (bytesPerRow * y) + x * bytesPerPixel;
int red = rawData[index];
int green = rawData[index + 1];
int blue = rawData[index + 2];
int alpha = rawData[index + 3];
NSArray * a = [NSArray arrayWithObjects:[NSString stringWithFormat:@"%i",red],[NSString stringWithFormat:@"%i",green],[NSString stringWithFormat:@"%i",blue],[NSString stringWithFormat:@"%i",alpha], nil];
[colours addObject:a];
y++;
if (y==height){
y=0;
x++;
}
}
free(rawData);
callback(@[[NSNull null], colours]);
}
Now, this script seems fairly simple: it should iterate over each pixel and add each color to an array, which is then returned to React Native via the callback.
However, the response to the call is always an empty array.
I'm not sure why that is. Could it be due to where the images are located (they're at AWS, on S3), or something in the algorithm? The code looks right to me, but it's entirely possible that I'm missing something just due to unfamiliarity with Objective-C.
I ran your code in an empty project and it performs as expected using an image loaded from the assets library. Is it possible that the UIImage *originalImage = [UIImage imageWithContentsOfFile:path]; call uses an invalid path? You can easily validate that by simply logging the value of the read image:
UIImage * originalImage = [UIImage imageWithContentsOfFile: path];
NSLog(#"image read from file %#", originalImage);
If the image was not read properly from the file, you will get an empty colours array: the width and height will be 0, so there will be nothing to loop over.
Also, to avoid modifying the array after your function has returned, it is generally good practice to return a copy of a mutable object, or an immutable object (i.e. NSArray instead of NSMutableArray):
callback(@[[NSNull null], [colours copy]]);
Hope this helps
The issue was ultimately that the image download method was returning nil - not sure why.
So I took this:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
I changed it to this:
NSData * imageData = [[NSData alloc] initWithContentsOfURL: [NSURL URLWithString: path]];
UIImage *originalImage = [UIImage imageWithData: imageData];
And now my image downloads just fine and the rest of the script works great.
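One caveat worth adding (my note, not part of the original fix): initWithContentsOfURL: blocks the calling thread while the download runs, so for network fetches an asynchronous API such as NSURLSession is usually preferable. A minimal sketch:
NSURL *url = [NSURL URLWithString:path];
[[[NSURLSession sharedSession] dataTaskWithURL:url
                              completionHandler:^(NSData *imageData, NSURLResponse *response, NSError *error) {
    if (error != nil || imageData == nil) {
        return; // handle the failure appropriately
    }
    UIImage *originalImage = [UIImage imageWithData:imageData];
    // ... continue with the color extraction; note this block runs on a background queue
}] resume];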
I'm creating a gif as below but I always end up with a black line on one or more of the edges as the attached image shows. How can I avoid this please?
//Create gif
MagickWand *mw = NewMagickWand();
MagickSetFormat(mw, "gif");
for(int i = 0; i < [self.finalImageArray count]; i++)
{
float interval = (100/8);
MagickWand *localWand = NewMagickWand();
UIImage *image = [self.finalImageArray objectAtIndex:i];
NSData *dataObj = UIImagePNGRepresentation(image);
MagickReadImageBlob(localWand, [dataObj bytes], [dataObj length]);
MagickThumbnailImage(localWand, 320, 320);
MagickSetImageDelay(localWand, interval);
MagickAddImage(mw, localWand);
DestroyMagickWand(localWand);
}
size_t my_size;
unsigned char * my_image = MagickGetImagesBlob(mw, &my_size);
NSData *gifData = [[NSData alloc] initWithBytes:my_image length:my_size];
The problem isn't with the creation of the gif; the black line is created earlier, when I scale the image.
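For what it's worth, a dark sliver along an edge is often the result of drawing into a non-integral rect while scaling, which leaves a partially covered row or column of pixels. A speculative sketch of pixel-aligned, opaque scaling (image and targetSize are placeholders, since the scaling code isn't shown):
// Round the destination rect to whole pixels so no uncovered edge remains
CGRect rect = CGRectIntegral(CGRectMake(0, 0, targetSize.width, targetSize.height));
UIGraphicsBeginImageContextWithOptions(rect.size, YES /* opaque */, 1.0);
[image drawInRect:rect];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();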
I have this code to convert an NSData object filled with RGB data into an NSData object filled with RGBA data that will then be converted to an image and displayed. Unfortunately, Instruments shows my memory steadily climbing, and that it's the rgbaData object in the code below that never goes away.
- (NSData *)uint8RGBADataOfSize:(NSUInteger)size fromRGBData:(NSData *)rgbData
{
NSMutableData *rgbaData = [[NSMutableData alloc] initWithLength:(size * 4 * sizeof(uint8_t))];
uint8_t *rgba = rgbaData.mutableBytes;
const uint8_t *rgb = rgbData.bytes;
for(int i = 0; i < size; ++i)
{
rgba[4*i+0] = rgb[3*i];
rgba[4*i+1] = rgb[3*i+1];
rgba[4*i+2] = rgb[3*i+2];
rgba[4*i+3] = 255;
}
return rgbaData;
}
I call the above method like so:
data = [self uint8RGBADataOfSize:(width * height) fromRGBData:data];
Here, data is my original NSData holding the RGB values; I just replace it with my newly created RGBA NSData before creating an image out of it:
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, data.bytes, totalBytes, nil);
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, nil, NO, kCGRenderingIntentDefault);
My new data isn't used after creating the above provider, and my method exits shortly after this when I pass a UIImage back out. Everything here looks good to me, but I end up with an ever-increasing horde of Malloc 84.00 KB's that come from my uint8RGBADataOfSize:fromRGBData: method. At this point I'm stumped; anyone have any ideas?
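Not a confirmed diagnosis, but one thing stands out: CGDataProviderCreateWithData is given no release callback, so nothing manages the buffer on the provider's behalf. Handing the NSData itself to the provider with CGDataProviderCreateWithCFData makes the ownership explicit; a minimal sketch reusing the variables from the snippet above:
// The provider retains the CFData and releases it when the provider is destroyed
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                    bytesPerRow, colorSpace, bitmapInfo, provider,
                                    nil, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider); // balance the Create; the image holds its own reference
// ... build the UIImage, then CGImageRelease(imageRef) when done with it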
I've been struggling for two days to display an animated webp image in a UIImageView with no success whatsoever.
Mainly the problem is in the decoding step of the file which gives this error: VP8_STATUS_UNSUPPORTED_FEATURE.
I tried
https://github.com/seanooi/iOS-WebP
https://github.com/mattt/WebPImageSerialization
These projects provide code for creating UIImage with webp files and they work fine with images with no animation but they both fail with the same error as above when attempting to decode images with animation.
My device is jailbroken, and checking the filesystem I saw that Facebook's Messenger app has some of its stickers in .webp format; their license also mentions Google's "webp" library, so I'm sure it's somehow possible.
I managed to decode animated .webp using the code snippet at the top of this header, which also contains explanations of the data structures used.
static NSDictionary* DecodeWebPURL(NSURL *url) {
NSMutableDictionary *info = [NSMutableDictionary dictionary];
NSMutableArray *images = [NSMutableArray array];
NSData *imgData = [NSData dataWithContentsOfURL:url];
WebPData data;
WebPDataInit(&data);
data.bytes = (const uint8_t *)[imgData bytes];
data.size = [imgData length];
WebPDemuxer* demux = WebPDemux(&data);
int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);
if (flags & ANIMATION_FLAG) {
WebPIterator iter;
if (WebPDemuxGetFrame(demux, 1, &iter)) {
WebPDecoderConfig config;
WebPInitDecoderConfig(&config);
config.input.height = height;
config.input.width = width;
config.input.has_alpha = iter.has_alpha;
config.input.has_animation = 1;
config.options.no_fancy_upsampling = 1;
config.options.bypass_filtering = 1;
config.options.use_threads = 1;
config.output.colorspace = MODE_RGBA;
[info setObject:[NSNumber numberWithInt:iter.duration] forKey:@"duration"];
do {
WebPData frame = iter.fragment;
VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
if (status != VP8_STATUS_OK) {
NSLog(#"Error decoding frame");
}
uint8_t *data = WebPDecodeRGBA(frame.bytes, frame.size, &width, &height);
CGDataProviderRef provider = CGDataProviderCreateWithData(&config, data, config.options.scaled_width * config.options.scaled_height * 4, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
[images addObject:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
} while (WebPDemuxNextFrame(&iter));
WebPDemuxReleaseIterator(&iter);
}
}
WebPDemuxDelete(demux);
[info setObject:images forKey:@"frames"];
return info;
}
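A hypothetical usage sketch for the function above (url and imageView are assumed to exist): UIKit can play the decoded frames directly. Note that iter.duration is the per-frame delay in milliseconds, while animatedImageWithImages:duration: takes the total animation length in seconds:
NSDictionary *info = DecodeWebPURL(url);
NSArray *frames = info[@"frames"];
NSTimeInterval perFrame = [info[@"duration"] intValue] / 1000.0; // ms -> s
imageView.image = [UIImage animatedImageWithImages:frames
                                          duration:perFrame * frames.count];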
Okay, so I'm downloading a bunch of large-ish images (5 MB) from a server in pieces, then stitching the pieces together and rendering the total image from a byte array. However, I've realized that the data for each image is not being released, and consequently builds up, causing a memory warning and a crash of my app. I thought that, because of my explicit (__bridge_transfer NSData *) cast, ARC would take care of releasing the object, but it's still proving to be a problem. In Instruments, objects called "CGDataProviderCopyData" of ~1 MB build up and are not discarded for each file that is being stitched into the whole image. Any ideas, or anyone who can steer me in the right direction? Much obliged.
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
// Set baseURL for individual file path
NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@",[imageInfo objectForKey:@"BaseURL"]];
// Specify imagePath by appending baseURL to file name
NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
// Change NSString --> NSURL --> NSData
NSURL *imageUrl = [NSURL URLWithString:imagePath];
NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
// Create image from imageData
UIImage *image = [UIImage imageWithData:imageData];
CGImageRef cgimage = image.CGImage;
size_t width = CGImageGetWidth(cgimage);
size_t height = CGImageGetHeight(cgimage);
size_t bpr = CGImageGetBytesPerRow(cgimage);
size_t bpp = CGImageGetBitsPerPixel(cgimage);
size_t bpc = CGImageGetBitsPerComponent(cgimage);
size_t bytes_per_pixel = bpp / bpc;
// Get CGDataProviderRef from cgimage
CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
// This is the object that is not being released
NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider); //Using (__bridge_transfer NSData *) casts the provider to type NSData and gives ownership to ARC, but still not discarded
const UInt8 *bytes = (Byte *)[data bytes];
// Log which file is currently being iterated through
NSLog(#"---Stitching png file to total image: %#", file);
// Populate byte array with channel data from each pixel
for(size_t row = 0; row < height; row++)
{
for(size_t col = 0; col < width; col++)
{
const UInt8* pixel =
&bytes[row * bpr + col * bytes_per_pixel];
for(unsigned short i = 0; i < 4; i+=4)
{
__unused unsigned short red = pixel[i]; // red channel - unused
unsigned short green = pixel[i+1]; // green channel
unsigned short blue = pixel[i+2]; // blue channel
__unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
// Create dicom intensity value from intensity = [(g * 256) + b]
unsigned short dicomInt = ((green * 256) + blue);
//Convert unsigned short intensity value to NSNumber so can store in array as object
NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
// Add to image array (total image)
[byteArray addObject:DICOMvalue];
}
}
}
data = nil;
}
return byteArray;
Running "Analyze" through Xcode doesn't show any apparent leaks either.
I took this code, almost verbatim, and did some more investigation. With the CFDataRef/NSData, I was able to see the problem you were seeing with the NSDatas not going away, and I was able to solve it by wrapping the portion of the code that uses the NSData in an @autoreleasepool scope, like this:
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
// Set baseURL for individual file path
NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@",[imageInfo objectForKey:@"BaseURL"]];
// Specify imagePath by appending baseURL to file name
NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
// Change NSString --> NSURL --> NSData
NSURL *imageUrl = [NSURL URLWithString:imagePath];
NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
// Create image from imageData
UIImage *image = [UIImage imageWithData:imageData];
CGImageRef cgimage = image.CGImage;
size_t width = CGImageGetWidth(cgimage);
size_t height = CGImageGetHeight(cgimage);
size_t bpr = CGImageGetBytesPerRow(cgimage);
size_t bpp = CGImageGetBitsPerPixel(cgimage);
size_t bpc = CGImageGetBitsPerComponent(cgimage);
size_t bytes_per_pixel = bpp / bpc;
// Get CGDataProviderRef from cgimage
CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
@autoreleasepool
{
// This is the object that is not being released
NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider); //Using (__bridge_transfer NSData *) casts the provider to type NSData and gives ownership to ARC, but still not discarded
const UInt8 *bytes = (Byte *)[data bytes];
// Log which file is currently being iterated through
NSLog(#"---Stitching png file to total image: %#", file);
// Populate byte array with channel data from each pixel
for(size_t row = 0; row < height; row++)
{
for(size_t col = 0; col < width; col++)
{
const UInt8* pixel =
&bytes[row * bpr + col * bytes_per_pixel];
for(unsigned short i = 0; i < 4; i+=4)
{
__unused unsigned short red = pixel[i]; // red channel - unused
unsigned short green = pixel[i+1]; // green channel
unsigned short blue = pixel[i+2]; // blue channel
__unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
// Create dicom intensity value from intensity = [(g * 256) + b]
unsigned short dicomInt = ((green * 256) + blue);
//Convert unsigned short intensity value to NSNumber so can store in array as object
NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
// Add to image array (total image)
[byteArray addObject:DICOMvalue];
}
}
}
data = nil;
}
}
return byteArray;
After adding that #autoreleasepool, I then commented out the part where you create NSNumbers and put them in the array, and I was able to see in the Allocations template of Instruments that indeed the CFData objects were now being released with each turn of the loop.
The reason I commented out the part where you create NSNumbers and put them in the array, is that with that code in there, you're going to end up adding width * height * 4 NSNumbers to byteArray. This means that even if the NSData was being released properly, your heap use would be going up by width * height * 4 * <at least 4 bytes, maybe more> no matter what. Maybe that's what you need to do, but it sure made it harder for me to see what was going on with the NSDatas because their size was being dwarfed by the array of NSNumbers.
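If you do need every intensity value, one way to keep that growth to exactly two bytes per pixel (my suggestion, not code from the question) is to append raw uint16_t values to a single NSMutableData instead of boxing each one in an NSNumber:
// Pack the 16-bit intensities into one contiguous buffer
NSMutableData *intensities = [NSMutableData dataWithCapacity:width * height * sizeof(uint16_t)];
for (size_t row = 0; row < height; row++) {
    for (size_t col = 0; col < width; col++) {
        const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
        uint16_t dicomInt = (uint16_t)(pixel[1] * 256 + pixel[2]); // green, blue
        [intensities appendBytes:&dicomInt length:sizeof(dicomInt)];
    }
}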
Hope that helps.