I'm getting a memory error because memory usage keeps increasing until the app runs out. Clearly I am not releasing something; any ideas what that may be?
Here is my method to count the red pixels present in a UIImage. It returns the count.
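(The struct pixel type is not shown in the question; it is presumably a plain 4-byte RGBA struct along these lines:)
struct pixel {
    unsigned char r, g, b, a;
};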
- (NSUInteger)getRedPixelCount:(UIImage*)image
{
NSUInteger numberOfRedPixels = 0;
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
if (pixels != nil)
{
CGContextRef context = CGBitmapContextCreate((void *) pixels,
image.size.width,
image.size.height,
8,
image.size.width * 4,
CGImageGetColorSpace(image.CGImage),
(CGBitmapInfo)kCGImageAlphaPremultipliedLast);
if (context != NULL)
{
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
NSUInteger numberOfPixels = image.size.width * image.size.height;
while (numberOfPixels > 0) {
if (pixels->r == 255) {
numberOfRedPixels++;
}
pixels++;
numberOfPixels--;
}
CGContextRelease(context);
}
}
return numberOfRedPixels;
}
This is the code that iterates through the photo library images and determines each image's red pixel count.
[self.library enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
if (group) {
[group setAssetsFilter:[ALAssetsFilter allPhotos]];
[group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop){
if (asset) {
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref){
UIImage *myImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
NSLog(#"%i", [self getRedPixelCount:myImage]);
}
}
}];
}
} failureBlock:^(NSError *error) {
NSLog(#"error enumerating AssetLibrary groups %#\n", error);
}];
You're not releasing the memory allocated by
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
You need to add:
free(pixels);
at the bottom of the if(pixels != nil) block.
Make the first block look like:
- (NSUInteger)getRedPixelCount:(UIImage*)image
{
NSUInteger numberOfRedPixels = 0;
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
if (pixels != nil)
{
CGContextRef context = CGBitmapContextCreate((void *) pixels,
image.size.width,
image.size.height,
8,
image.size.width * 4,
CGImageGetColorSpace(image.CGImage),
(CGBitmapInfo)kCGImageAlphaPremultipliedLast);
if (context != NULL)
{
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
NSUInteger numberOfPixels = image.size.width * image.size.height;
struct pixel* ptr = pixels;
while (numberOfPixels > 0) {
if (ptr->r == 255) {
numberOfRedPixels++;
}
ptr++;
numberOfPixels--;
}
CGContextRelease(context);
}
free(pixels);
}
return numberOfRedPixels;
}
It will also help if the second block is changed to include:
@autoreleasepool {
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref){
UIImage *myImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
NSLog(@"%lu", (unsigned long)[self getRedPixelCount:myImage]);
// ... any further processing of the image ...
}
}
although the major leak is not freeing the pixel buffer.
You can wrap the body of the 'enumerateAssetsUsingBlock' block in @autoreleasepool:
@autoreleasepool {
if (asset) {
...
}
}
This forces all autoreleased objects to be released immediately.
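For example, the enumeration from the question could look roughly like this (a sketch based on the question's own code):
[group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
    @autoreleasepool {
        if (asset) {
            ALAssetRepresentation *rep = [asset defaultRepresentation];
            CGImageRef iref = [rep fullResolutionImage];
            if (iref) {
                UIImage *myImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
                NSLog(@"%lu", (unsigned long)[self getRedPixelCount:myImage]);
            }
        }
    } // the pool drains here, releasing this asset's autoreleased image data
}];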
UPD:
Use [asset thumbnail] instead of fullScreenImage or fullResolutionImage. Those methods generate a huge amount of pixel data.
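For example (a sketch; thumbnail returns a small square CGImageRef, so the red pixel count will only be approximate):
CGImageRef iref = [asset thumbnail]; // small thumbnail instead of full-resolution pixel data
if (iref) {
    UIImage *myImage = [UIImage imageWithCGImage:iref];
    NSLog(@"%lu", (unsigned long)[self getRedPixelCount:myImage]);
}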
UPD2:
If even [asset thumbnail] does not help, then you must find a way to release the image data. It can be a little tricky, since you can't release it directly by calling CGImageRelease. Try something like this:
NSMutableArray* arr = [NSMutableArray new];
In your enumerateAssetsUsingBlock, just put the asset object into this array:
[arr addObject:asset];
And do nothing else in this block.
Then iterate through the array like this:
while (arr.count > 0)
{
ALAsset* asset = [arr lastObject];
// do something with the asset
[arr removeLastObject]; // this removes the object from memory immediately
}
We have a process that takes high-resolution source PNG/JPG images and creates renditions of these images in various lower-resolution formats / cropped versions.
void ResizeAndSaveSourceImageFromFile(NSString *imagePath, NSInteger width, NSInteger height, NSString *destinationFolder, NSString *fileName, BOOL shouldCrop, NSInteger rotation, NSInteger cornerRadius, BOOL removeAlpha) {
NSString *outputFilePath = [NSString stringWithFormat:@"%@/%@", destinationFolder, fileName];
NSImage *sourceImage = [[NSImage alloc] initWithContentsOfFile:imagePath];
NSSize sourceSize = sourceImage.size;
float sourceAspect = sourceSize.width / sourceSize.height;
float desiredAspect = (float)width / (float)height;
float finalWidth = width;
float finalHeight = height;
if (shouldCrop == true) {
if (desiredAspect > sourceAspect) {
width = height * sourceAspect;
} else if (desiredAspect < sourceAspect) {
height = width / sourceAspect;
}
}
if (width < finalWidth) {
width = finalWidth;
height = width / sourceAspect;
}
if (height < finalHeight) {
height = finalHeight;
width = height * sourceAspect;
}
NSImage *resizedImage = ImageByScalingToSize(sourceImage, CGSizeMake(width, height));
if (shouldCrop == true) {
resizedImage = ImageByCroppingImage(resizedImage, CGSizeMake(finalWidth, finalHeight));
}
if (rotation != 0) {
resizedImage = ImageRotated(resizedImage, rotation);
}
if (cornerRadius != 0) {
resizedImage = ImageRounded(resizedImage, cornerRadius);
}
NSBitmapImageRep *imgRep = UnscaledBitmapImageRep(resizedImage, removeAlpha);
NSBitmapImageFileType type = NSPNGFileType;
if ([fileName rangeOfString:@".jpg"].location != NSNotFound) {
type = NSJPEGFileType;
}
NSData *imageData = [imgRep representationUsingType:type properties:@{}];
[imageData writeToFile:outputFilePath atomically:NO];
if ([outputFilePath rangeOfString:@"land-mdpi"].location != NSNotFound) {
[imageData writeToFile:[outputFilePath stringByReplacingOccurrencesOfString:@"land-mdpi" withString:@"tvdpi"] atomically:NO];
}
}
NSImage* ImageByScalingToSize(NSImage* sourceImage, NSSize newSize) {
if (! sourceImage.isValid) return nil;
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:newSize.width
pixelsHigh:newSize.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
rep.size = newSize;
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[sourceImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height) fromRect:NSZeroRect operation:NSCompositingOperationCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
[newImage addRepresentation:rep];
return newImage;
}
NSBitmapImageRep* UnscaledBitmapImageRep(NSImage *image, BOOL removeAlpha) {
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:image.size.width
pixelsHigh:image.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[image drawAtPoint:NSMakePoint(0, 0)
fromRect:NSZeroRect
operation:NSCompositingOperationSourceOver
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSBitmapImageRep *imgRepFinal = rep;
if (removeAlpha == YES) {
NSImage *newImage = [[NSImage alloc] initWithSize:[rep size]];
[newImage addRepresentation:rep];
static int const kNumberOfBitsPerColour = 5;
NSRect imageRect = NSMakeRect(0.0, 0.0, newImage.size.width, newImage.size.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef tileGraphicsContext = CGBitmapContextCreate (NULL, imageRect.size.width, imageRect.size.height, kNumberOfBitsPerColour, 2 * imageRect.size.width, colorSpace, kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst);
NSData *imageDataTIFF = [newImage TIFFRepresentation];
CGImageRef imageRef = [[NSBitmapImageRep imageRepWithData:imageDataTIFF] CGImage];
CGContextDrawImage(tileGraphicsContext, imageRect, imageRef);
// Create an NSImage from the tile graphics context
CGImageRef newImageRef = CGBitmapContextCreateImage(tileGraphicsContext);
NSImage *newNSImage = [[NSImage alloc] initWithCGImage:newImageRef size:imageRect.size];
// Clean up
CGImageRelease(newImageRef);
CGContextRelease(tileGraphicsContext);
CGColorSpaceRelease(colorSpace);
CGImageRef CGImage = [newNSImage CGImageForProposedRect:nil context:nil hints:nil];
imgRepFinal = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
}
return imgRepFinal;
}
NSImage* ImageByCroppingImage(NSImage* image, CGSize size) {
NSInteger trueWidth = image.representations[0].pixelsWide;
double refWidth = image.size.width;
double refHeight = image.size.height;
double scale = trueWidth / refWidth;
double x = (refWidth - size.width) / 2.0;
double y = (refHeight - size.height) / 2.0;
CGRect cropRect = CGRectMake(x * scale, y * scale, size.width * scale, size.height * scale);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[image TIFFRepresentation], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, cropRect);
NSImage *cropped = [[NSImage alloc] initWithCGImage:imageRef size:size];
CGImageRelease(imageRef);
CGImageRelease(maskRef);
CFRelease(source);
return cropped;
}
This process works well and gets the results we want. We can re-run these functions on hundreds of images and get the same output every time. We then commit these files to git repos.
HOWEVER, every time we update macOS to a new version (High Sierra, Monterey, etc.) and re-run these functions, ALL of the images produce different output with different hashes, so git treats the images as changed even though the source images are identical.
FURTHER, JPG images seem to produce different output when run on an Intel Mac vs. an Apple M1 Mac.
We have checked the head of the output images using a command like:
od -bc banner.png | head
This results in the same head data in all cases even though the actual image data doesn't match after version changes.
We've also checked the output of CGImageSourceCopyPropertiesAtIndex, such as:
{
ColorModel = RGB;
Depth = 8;
HasAlpha = 1;
PixelHeight = 1080;
PixelWidth = 1920;
ProfileName = "Generic RGB Profile";
"{Exif}" = {
PixelXDimension = 1920;
PixelYDimension = 1080;
};
"{PNG}" = {
InterlaceType = 0;
};
}
These do not show any differences between versions of macOS or Intel vs. M1.
We don't want the hashes to keep changing on us, causing extra churn in git, and we're hoping for feedback that may help us get consistent output in all cases.
Any tips are greatly appreciated.
I want to manipulate images and shuffle their colors. I'm trying to rotate an image 180 degrees by manipulating its pixels, but I failed. I don't want to use UIImageView rotation, because I won't just be rotating images; I want to do whatever I want with them.
EDIT: It was the wrong operator. I don't know why I used % instead of /. Anyway, I hope this code helps someone (it works).
- (IBAction)shuffleImage:(id)sender {
[self calculateRGBAsAndChangePixels:self.imageView.image atX:0 andY:0];
}
-(void)calculateRGBAsAndChangePixels:(UIImage*)image atX:(int)x andY:(int)y
{
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * image.size.width;
NSUInteger bitsPerComponent = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bmContext = CGBitmapContextCreate(NULL, image.size.width, image.size.height, bitsPerComponent,bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, image.size.width, image.size.height}, image.CGImage);
UInt8* data = (UInt8*)CGBitmapContextGetData(bmContext);
const size_t bitmapByteCount = bytesPerRow * image.size.height;
NSMutableArray *reds = [[NSMutableArray alloc] init];
NSMutableArray *greens = [[NSMutableArray alloc] init];
NSMutableArray *blues = [[NSMutableArray alloc] init];
for (size_t i = 0; i < bitmapByteCount; i += 4)
{
[reds addObject:[NSNumber numberWithInt:data[i]]];
[greens addObject:[NSNumber numberWithInt:data[i+1]]];
[blues addObject:[NSNumber numberWithInt:data[i+2]]];
}
for (size_t i = 0; i < bitmapByteCount; i += 4)
{
data[i] = [[reds objectAtIndex:reds.count - i/4 - 1] integerValue];
data[i+1] = [[greens objectAtIndex:greens.count - i/4 - 1] integerValue];
data[i+2] = [[blues objectAtIndex:blues.count - i/4 - 1] integerValue];
}
CGImageRef newImage = CGBitmapContextCreateImage(bmContext);
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:newImage];
CGImageRelease(newImage);
CGContextRelease(bmContext);
self.imageView.image = rotatedImage;
}
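As an aside, the same 180-degree rotation can be done without the intermediate NSMutableArrays by reversing the pixel buffer in place; a minimal sketch, assuming the same 32-bit RGBA context as above:
uint32_t *px = (uint32_t *)data;
size_t pixelCount = bitmapByteCount / 4;
for (size_t i = 0; i < pixelCount / 2; i++) {
    // swap pixel i with its mirror; reversing all pixels rotates the image 180 degrees
    uint32_t tmp = px[i];
    px[i] = px[pixelCount - 1 - i];
    px[pixelCount - 1 - i] = tmp;
}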
Assuming you want to turn the image upside down (rotate it 180 degrees) and not mirror it, I found some relevant code on another question that may help you:
static inline double radians (double degrees) {return degrees * M_PI/180;}
UIImage* rotate(UIImage* src, UIImageOrientation orientation)
{
UIGraphicsBeginImageContext(src.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orientation == UIImageOrientationRight) {
CGContextRotateCTM (context, radians(90));
} else if (orientation == UIImageOrientationLeft) {
CGContextRotateCTM (context, radians(-90));
} else if (orientation == UIImageOrientationDown) {
// NOTHING
} else if (orientation == UIImageOrientationUp) {
CGContextRotateCTM (context, radians(90));
}
[src drawAtPoint:CGPointMake(0, 0)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
If you're trying to mirror the image, this code example from this question may be of help:
UIImage* sourceImage = [UIImage imageNamed:@"whatever.png"];
UIImage* flippedImage = [UIImage imageWithCGImage:sourceImage.CGImage
scale:sourceImage.scale
orientation:UIImageOrientationUpMirrored];
So you're looking to actually manipulate the raw pixel data. Check this out then:
Getting the pixel data from a CGImage object
It's for macOS but should be relevant for iOS as well.
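The core idea there, as a minimal sketch (someImage is a hypothetical UIImage; this assumes 4 bytes per pixel, and the channel order depends on the image's bitmap info):
CGImageRef cgImage = someImage.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
// bytes[y * bytesPerRow + x * 4] is the first channel of the pixel at (x, y)
CFRelease(pixelData);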
I am trying to get a screenshot from a GLKView and save it to the Photo Library. Unfortunately the quality of the saved photo is very low and I cannot figure out why.
This is the link to the UIImage displayed on the screen using a UIImageView: https://www.dropbox.com/s/d2cyb099n0ndqag/IMG_0148.PNG?dl=0
This is the link to the photo saved in Photo Albums:
https://www.dropbox.com/s/8pdaytonwxxd6wk/IMG_0147.JPG?dl=0
The code is as follows:
- (UIImage*)snapshot
{
GLint backingWidth, backingHeight;
GLint framebuff;
glGetIntegerv( GL_FRAMEBUFFER_BINDING, &framebuff);
glBindFramebuffer(GL_FRAMEBUFFER, framebuff);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
GLenum errCode;
if ((errCode = glGetError()) != GL_NO_ERROR) {
//errStr = gluErrorString(errCode);
printf("%u: OpenGL ERROR ",errCode);
}
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(dataLength);
for(int y = 0; y < height; y++)
{
for(int x = 0; x < width * 4; x++)
{
buffer2[((height - 1) - y) * width * 4 + x] = data[y * 4 * width + x];
}
}
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer2, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrderDefault, ref, NULL, YES, kCGRenderingIntentDefault);
// Retrieve the UIImage from the current context
UIImage *image = [UIImage imageWithCGImage:iref];
UIImageView *myImageView = [[UIImageView alloc] init];
myImageView.frame = CGRectMake(0, 0, fluid.gpu.Drawing.Pong->width,fluid.gpu.Drawing.Pong->height);
myImageView.image = image;
[self.view addSubview:myImageView];
CFRelease(colorspace);
CGImageRelease(iref);
free(data);
return image;
}
-(void)SavePhoto{
UIImage *temp = [self snapshot];
UIImageWriteToSavedPhotosAlbum(temp, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), (__bridge void*)self.context);
}
- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
NSString *message;
NSString *title;
if (!error) {
title = NSLocalizedString(#"Save picture", #"");
message = NSLocalizedString(#"Your screenshot was saved to Photos Album.", #"");
} else {
title = NSLocalizedString(#"There was an error saving screenshot.", #"");
message = [error description];
}
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title
message:message
delegate:nil
cancelButtonTitle:NSLocalizedString(@"OK", @"")
otherButtonTitles:nil];
[alert show];
}
I also tried the convenience method [GLKView snapshot], but I get the same result. The problem is reproducible on iPhone 4, iPhone 3GS, and iPad 2.
I managed to get better quality by converting my image to PNG format and saving that instead:
NSData* imdata = UIImagePNGRepresentation ( image ); // get PNG representation
UIImage* imgPng = [UIImage imageWithData:imdata]; // wrap UIImage
return imgPng;
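If quality is still lost when the image is written to the Photos album (UIImageWriteToSavedPhotosAlbum may re-encode it), one option is to write the PNG data directly with ALAssetsLibrary; a sketch, untested:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageDataToSavedPhotosAlbum:imdata
                                 metadata:nil
                          completionBlock:^(NSURL *assetURL, NSError *error) {
    // assetURL is nil if the write failed; inspect error in that case
}];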
The code below saves each frame of a movie to the Photo Album on an iPad device. After about 60 frames are saved as images, I get "Received memory warning." and my app eventually crashes. What goes wrong here? Am I missing something? I am using iOS 7.
- (UIImage *)getImageFromGLBuffer:(GLuint)frameBuffer screenSize:(CGSize)screenSize
{
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer);
int width = screenSize.width;
int height = screenSize.height;
NSInteger iDataLength = width * height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc( iDataLength );
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
glBindFramebuffer( GL_FRAMEBUFFER, 0);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(iDataLength);
for(int y = 0; y <height; y++)
{
for(int x = 0; x < width * 4; x++)
{
buffer2[(int)((height - 1 - y) * width * 4 + x)] = buffer[(int)(y * 4 * width + x)];
}
}
// Release the first buffer
free((void*)buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, iDataLength, releaseBufferData);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
// then make the UIImage from that
UIImage *image = [UIImage imageWithCGImage:imageRef]; //[[UIImage alloc] initWithCGImage:imageRef];
NSData* imageData = UIImagePNGRepresentation(image); // get png representation
UIImage* pngImage = [UIImage imageWithData:imageData];
CGImageRelease(imageRef);
return pngImage;
}
// callback for CGDataProviderCreateWithData
void releaseBufferData(void *info, const void *data, size_t dataSize)
{
NSLog(#"releaseBufferData\n");
// free the buffer
free((void*)data);
}
- (void)saveImageToPhotoAlbum:(GLuint)frameBuffer screenSize:(CGSize)screenSize
{
if( frameBuffer != 0 )
{
UIImage* image = [self getImageFromGLBuffer:frameBuffer screenSize:screenSize];
if( image != nil)
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
});
}
else
{
NSLog(#"Couldn't save to Photo Album due to invalid image..");
}
}
else
{
NSLog(#"Frame buffer is invalid. Couldn't save to image..");
}
}
// callback for UIImageWriteToSavedPhotosAlbum
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
NSLog(#"Image has been saved to Photo Album successfully..\n");
// [image release]; // release image
image = nil;
}
I have to resize an album artwork image from the file I get with this code:
for (NSString *format in [asset availableMetadataFormats]) {
for (AVMetadataItem *item in [asset metadataForFormat:format]) {
if ([[item commonKey] isEqualToString:#"title"]) {
//NSLog(#"name: %#", (NSString *)[item value]);
downloadedCell.nameLabel.text = (NSString *)[item value];
}
if ([[item commonKey] isEqualToString:#"artist"]) {
downloadedCell.artistLabel.text = (NSString *)[item value];
}
if ([[item commonKey] isEqualToString:#"albumName"]) {
//musicItem.strAlbumName = (NSString *)[item value];
}
if ([[item commonKey] isEqualToString:#"artwork"]) {
UIImage *img = nil;
if ([item.keySpace isEqualToString:AVMetadataKeySpaceiTunes]) {
img = [UIImage imageWithData:[item.value copyWithZone:nil]];
}
else { // if ([item.keySpace isEqualToString:AVMetadataKeySpaceID3]) {
NSData *data = [(NSDictionary *)[item value] objectForKey:@"data"];
img = [UIImage imageWithData:data];
}
// musicItem.imgArtwork = img;
UIImage *newImage = [self resizeImage:img width:70.0f height:70.0f];
downloadedCell.artworkImage.image = newImage;
}
When I apply this method:
- (UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
//NSLog(#"resizing");
CGImageRef imageRef = [image CGImage];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
//if (alphaInfo == kCGImageAlphaNone)
alphaInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return result;
}
I ALWAYS get a noisy image, as you can see in the photo here:
http://postimage.org/image/jltpfza11/
How can I get a better resolution image?
If your view is 74x74, you should resize to twice that on retina displays. So, something like:
CGFloat imageSize = 74.0f * [[UIScreen mainScreen] scale];
UIImage *newImage = [self resizeImage:img width:imageSize height:imageSize];
Then you need to set the contentMode of your image view to something like UIViewContentModeScaleAspectFill.
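For example (a sketch, assuming the image view from the question is downloadedCell.artworkImage):
downloadedCell.artworkImage.contentMode = UIViewContentModeScaleAspectFill;
downloadedCell.artworkImage.clipsToBounds = YES; // crop anything that falls outside the view's bounds
downloadedCell.artworkImage.image = newImage;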
Try explicitly specifying the interpolation quality for the context using:
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
Your resizeImage:width:height: method thus becomes:
-(UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
//NSLog(#"resizing");
CGImageRef imageRef = [image CGImage];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
//if (alphaInfo == kCGImageAlphaNone)
alphaInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return result;
}