iOS - Safe way to blur a UIImage

In the past we had two different ways of blurring UIImages, and both led to crashes for our users. The first crashes when the user sends the app to the background (a GPU error, since nothing is allowed to render on the GPU while in the background).
The second crashes with EXC_BAD_ACCESS errors.
What's a better (crash-safe) way to do it?
Version 1:
- (UIImage *)blurred:(float)inputRadius {
    // create our blurred image
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    // set up a Gaussian blur (we could use one of many filters offered by Core Image)
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:inputRadius] forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    UIImage *blurredImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // NULL-safe, unlike CFRelease, if the render failed
    return blurredImage;
}
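A commonly suggested mitigation for the background-GPU crash (an editor's sketch, not a confirmed fix for every iOS version) is to skip rendering entirely while the app is backgrounded and to request Core Image's CPU renderer via the kCIContextUseSoftwareRenderer option, so no GPU work happens even if the app state changes mid-render:
- (UIImage *)safeBlurred:(float)inputRadius {
    // Don't touch the renderer at all while backgrounded; fall back to the original image.
    if ([UIApplication sharedApplication].applicationState == UIApplicationStateBackground) {
        return self;
    }
    // Request the CPU (software) renderer instead of the GPU.
    CIContext *context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @YES}];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(inputRadius) forKey:@"inputRadius"];
    CGImageRef cgImage = [context createCGImage:filter.outputImage fromRect:[inputImage extent]];
    if (cgImage == NULL) {
        return self; // rendering failed; return the unblurred image rather than crash
    }
    UIImage *blurredImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return blurredImage;
}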
Version 2:
- (UIImage *)blurredImage:(CGFloat)blurRadius {
    if (blurRadius < 0.0) {
        blurRadius = 0.0;
    }
    CGImageRef img = self.CGImage;
    CGFloat inputImageScale = self.scale;
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);
    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);
    CGFloat inputRadius = blurRadius * inputImageScale;
    if (inputRadius - 2. < __FLT_EPSILON__)
        inputRadius = 2.;
    uint32_t radius = floor((inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5) / 2);
    radius |= 1; // force radius to be odd so that the three box-blur methodology works.
    // line of crash
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
    if (!error) {
        error = vImageBoxConvolve_ARGB8888(&outBuffer, &inBuffer, NULL, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
    }
    if (error) {
        free(pixelBuffer);
        CFRelease(inBitmapData);
        return self;
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);
    return returnImage;
}
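A likely culprit for the EXC_BAD_ACCESS (an educated guess, not confirmed by the question): inBuffer.data points at the bytes of the CFData returned by CGDataProviderCopyData, which are const, yet the second vImageBoxConvolve_ARGB8888 pass uses inBuffer as its destination and writes into that memory. A minimal sketch of the fix, reusing img and inBitmapData from the code above, is to copy the pixels into a writable buffer first:
// Sketch: give the convolve passes a writable copy instead of CFData's const bytes.
size_t byteCount = CGImageGetBytesPerRow(img) * CGImageGetHeight(img);
void *inData = malloc(byteCount);
memcpy(inData, CFDataGetBytePtr(inBitmapData), byteCount);
inBuffer.data = inData; // both convolve passes may now safely write here
// ... run the two vImageBoxConvolve_ARGB8888 passes as before ...
free(inData); // after the final CGImage has been created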

Related

Cocoa CoreGraphics Image Resizing Different Output by Device/Operating System Version

We have a process that takes high resolution source PNG/JPG images and creates renditions of these images in various lower resolution formats / cropped versions.
void ResizeAndSaveSourceImageFromFile(NSString *imagePath, NSInteger width, NSInteger height, NSString *destinationFolder, NSString *fileName, BOOL shouldCrop, NSInteger rotation, NSInteger cornerRadius, BOOL removeAlpha) {
    NSString *outputFilePath = [NSString stringWithFormat:@"%@/%@", destinationFolder, fileName];
    NSImage *sourceImage = [[NSImage alloc] initWithContentsOfFile:imagePath];
    NSSize sourceSize = sourceImage.size;
    float sourceAspect = sourceSize.width / sourceSize.height;
    float desiredAspect = (float)width / (float)height; // cast to avoid NSInteger division
    float finalWidth = width;
    float finalHeight = height;
    if (shouldCrop == true) {
        if (desiredAspect > sourceAspect) {
            width = height * sourceAspect;
        } else if (desiredAspect < sourceAspect) {
            height = width / sourceAspect;
        }
    }
    if (width < finalWidth) {
        width = finalWidth;
        height = width / sourceAspect;
    }
    if (height < finalHeight) {
        height = finalHeight;
        width = height * sourceAspect;
    }
    NSImage *resizedImage = ImageByScalingToSize(sourceImage, CGSizeMake(width, height));
    if (shouldCrop == true) {
        resizedImage = ImageByCroppingImage(resizedImage, CGSizeMake(finalWidth, finalHeight));
    }
    if (rotation != 0) {
        resizedImage = ImageRotated(resizedImage, rotation);
    }
    if (cornerRadius != 0) {
        resizedImage = ImageRounded(resizedImage, cornerRadius);
    }
    NSBitmapImageRep *imgRep = UnscaledBitmapImageRep(resizedImage, removeAlpha);
    NSBitmapImageFileType type = NSPNGFileType;
    if ([fileName rangeOfString:@".jpg"].location != NSNotFound) {
        type = NSJPEGFileType;
    }
    NSData *imageData = [imgRep representationUsingType:type properties:@{}];
    [imageData writeToFile:outputFilePath atomically:NO];
    if ([outputFilePath rangeOfString:@"land-mdpi"].location != NSNotFound) {
        [imageData writeToFile:[outputFilePath stringByReplacingOccurrencesOfString:@"land-mdpi" withString:@"tvdpi"] atomically:NO];
    }
}
NSImage* ImageByScalingToSize(NSImage* sourceImage, NSSize newSize) {
    if (!sourceImage.isValid) return nil;
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:newSize.width
                      pixelsHigh:newSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    rep.size = newSize;
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [sourceImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height) fromRect:NSZeroRect operation:NSCompositingOperationCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
    [newImage addRepresentation:rep];
    return newImage;
}
NSBitmapImageRep* UnscaledBitmapImageRep(NSImage *image, BOOL removeAlpha) {
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:image.size.width
                      pixelsHigh:image.size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [image drawAtPoint:NSMakePoint(0, 0)
              fromRect:NSZeroRect
             operation:NSCompositingOperationSourceOver
              fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    NSBitmapImageRep *imgRepFinal = rep;
    if (removeAlpha == YES) {
        NSImage *newImage = [[NSImage alloc] initWithSize:[rep size]];
        [newImage addRepresentation:rep];
        static int const kNumberOfBitsPerColour = 5;
        NSRect imageRect = NSMakeRect(0.0, 0.0, newImage.size.width, newImage.size.height);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef tileGraphicsContext = CGBitmapContextCreate(NULL, imageRect.size.width, imageRect.size.height, kNumberOfBitsPerColour, 2 * imageRect.size.width, colorSpace, kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst);
        NSData *imageDataTIFF = [newImage TIFFRepresentation];
        CGImageRef imageRef = [[NSBitmapImageRep imageRepWithData:imageDataTIFF] CGImage];
        CGContextDrawImage(tileGraphicsContext, imageRect, imageRef);
        // Create an NSImage from the tile graphics context
        CGImageRef newImageRef = CGBitmapContextCreateImage(tileGraphicsContext);
        NSImage *newNSImage = [[NSImage alloc] initWithCGImage:newImageRef size:imageRect.size];
        // Clean up
        CGImageRelease(newImageRef);
        CGContextRelease(tileGraphicsContext);
        CGColorSpaceRelease(colorSpace);
        CGImageRef CGImage = [newNSImage CGImageForProposedRect:nil context:nil hints:nil];
        imgRepFinal = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
    }
    return imgRepFinal;
}
NSImage* ImageByCroppingImage(NSImage* image, CGSize size) {
    NSInteger trueWidth = image.representations[0].pixelsWide;
    double refWidth = image.size.width;
    double refHeight = image.size.height;
    double scale = trueWidth / refWidth;
    double x = (refWidth - size.width) / 2.0;
    double y = (refHeight - size.height) / 2.0;
    CGRect cropRect = CGRectMake(x * scale, y * scale, size.width * scale, size.height * scale);
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[image TIFFRepresentation], NULL);
    CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, cropRect);
    NSImage *cropped = [[NSImage alloc] initWithCGImage:imageRef size:size];
    CGImageRelease(imageRef);
    // release the intermediate image and the image source
    CGImageRelease(maskRef);
    CFRelease(source);
    return cropped;
}
This process works well and gets the results we want. We can re-run these functions on hundreds of images and get the same output every time. We then commit these files to git repos.
HOWEVER, every time we update macOS to a new version (High Sierra, Monterey, etc.) and re-run these functions, ALL of the images come out different, with different hashes, so git treats them as changed even though the source images are identical.
FURTHER, JPG images seem to produce different output on an Intel Mac vs. an Apple M1 Mac.
We have checked the head of the output images using a command like:
od -bc banner.png | head
This results in the same head data in all cases even though the actual image data doesn't match after version changes.
We've also checked CGImageSourceCopyPropertiesAtIndex such as:
{
ColorModel = RGB;
Depth = 8;
HasAlpha = 1;
PixelHeight = 1080;
PixelWidth = 1920;
ProfileName = "Generic RGB Profile";
"{Exif}" = {
PixelXDimension = 1920;
PixelYDimension = 1080;
};
"{PNG}" = {
InterlaceType = 0;
};
}
Which do not show any differences between versions of macOS or Intel vs. M1.
We don't want the hashes to keep changing on us and causing extra churn in git, and we're hoping for feedback that helps us get consistent output in all cases.
Any tips are greatly appreciated.
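One pragmatic mitigation (an editor's suggestion, not from the question): the byte-level differences typically come from the OS's PNG/JPEG encoder rather than from the pixels, so a build step could compare decoded pixel data and skip rewriting a file when nothing visible changed. A rough sketch, decoding both files into identically configured bitmap contexts and comparing the raw bytes:
// Sketch: YES when two image files decode to identical RGBA pixels,
// even if their encoded bytes (and therefore hashes) differ.
BOOL ImagesHaveIdenticalPixels(NSString *pathA, NSString *pathB) {
    NSImage *a = [[NSImage alloc] initWithContentsOfFile:pathA];
    NSImage *b = [[NSImage alloc] initWithContentsOfFile:pathB];
    if (a == nil || b == nil || !NSEqualSizes(a.size, b.size)) return NO;
    size_t w = (size_t)a.size.width, h = (size_t)a.size.height;
    size_t bytesPerRow = w * 4;
    NSMutableData *bufA = [NSMutableData dataWithLength:bytesPerRow * h];
    NSMutableData *bufB = [NSMutableData dataWithLength:bytesPerRow * h];
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctxA = CGBitmapContextCreate(bufA.mutableBytes, w, h, 8, bytesPerRow, cs, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextRef ctxB = CGBitmapContextCreate(bufB.mutableBytes, w, h, 8, bytesPerRow, cs, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGRect rect = CGRectMake(0, 0, w, h);
    CGContextDrawImage(ctxA, rect, [a CGImageForProposedRect:NULL context:nil hints:nil]);
    CGContextDrawImage(ctxB, rect, [b CGImageForProposedRect:NULL context:nil hints:nil]);
    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    CGColorSpaceRelease(cs);
    return [bufA isEqualToData:bufB];
}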

Objective C: How to remove a white background from a JPG image without losing edge quality

So I have many jpg images with white backgrounds that I am loading into my app, and I would like to remove the white backgrounds programmatically. I have a function that does this, but it causes some jagged edges around each image. Is there a way that I can blend these edges to achieve smooth edges?
My current method:
- (UIImage *)changeWhiteColorTransparent:(UIImage *)image
{
    CGImageRef rawImageRef = image.CGImage;
    const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};
    UIGraphicsBeginImageContextWithOptions(image.size, NO, [UIScreen mainScreen].scale);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}
I know that I can change the color masking values, but I don't think any combination will produce a smooth picture with no white background.
Here's an example:
That method also removes extra pixels within the images that are close to white:
I think the ideal method would change the alpha of white pixels according to how close to pure white they are instead of just removing them all. Any ideas would be appreciated.
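A sketch of that idea (an editor's illustration, not an accepted answer): walk the pixels and derive each pixel's alpha from how close it is to pure white, so near-white pixels fade out instead of being cut hard. It assumes RGBA8888 premultiplied data and reuses the 222 threshold from the masking range above:
// Sketch: graduated transparency for near-white pixels (RGBA8888, premultiplied).
- (UIImage *)imageWithWhiteFadedToAlpha:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage), height = CGImageGetHeight(cgImage);
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4, cs,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4; // layout is R, G, B, A
        uint8_t minChannel = MIN(p[0], MIN(p[1], p[2]));
        if (minChannel >= 222) { // hypothetical threshold; tune to taste
            // map 222..255 -> alpha 1.0..0.0 so purer whites are more transparent
            CGFloat alpha = (255 - minChannel) / (CGFloat)(255 - 222);
            p[3] = (uint8_t)(alpha * 255);
            // premultiplied context: scale the color channels by the new alpha too
            p[0] = (uint8_t)(p[0] * alpha);
            p[1] = (uint8_t)(p[1] * alpha);
            p[2] = (uint8_t)(p[2] * alpha);
        }
    }
    CGImageRef outRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:outRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(outRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    free(pixels);
    return result;
}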
#import "UIImage+FloodFill.h"
//https://github.com/Chintan-Dave/UIImageScanlineFloodfill
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
#define A(x) ( Mask8(x >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )
#interface UIImage (BackgroundRemoval)
//Simple Removal
- (UIImage *)floodFillRemoveBackgroundColor;
#end
#implementation UIImage (BackgroundRemoval)
- (UIImage*) maskImageWithMask:(UIImage *)maskImage {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef maskImageRef = [maskImage CGImage];
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
CGFloat ratio = 0;
ratio = maskImage.size.width/ self.size.width;
if(ratio * self.size.height < maskImage.size.height) {
ratio = maskImage.size.height/ self.size.height;
}
CGRect rect1 = { {0, 0}, {maskImage.size.width, maskImage.size.height} };
CGRect rect2 = { {-((self.size.width*ratio)-maskImage.size.width)/2 , -((self.size.height*ratio)-maskImage.size.height)/2}, {self.size.width*ratio, self.size.height*ratio} };
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
// return the image
return theImage;
}
- (UIImage *)floodFillRemove{
//1
UIImage *processedImage = [self floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];
CGImageRef inputCGImage=processedImage.CGImage;
UInt32 * inputPixels;
NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bitsPerComponent = 8;
NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
//2
for (NSUInteger j = 0; j < inputHeight; j++) {
for (NSUInteger i = 0; i < inputWidth; i++) {
UInt32 * currentPixel = inputPixels + (j * inputWidth) + i;
UInt32 color = *currentPixel;
if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
*currentPixel = RGBAMake(0, 0, 0, A(0));
}else{
*currentPixel = RGBAMake(R(color), G(color), B(color), A(color));
}
}
}
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
//3
UIImage * maskImage = [UIImage imageWithCGImage:newCGImage];
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
free(inputPixels);
UIImage *result = [self maskImageWithMask:maskImage];
//4
return result;
}
#end
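For reference, a minimal usage sketch of the category above (the asset name is hypothetical):
// hypothetical asset; any image with a solid near-white background works
UIImage *photo = [UIImage imageNamed:@"product"];
UIImage *cutout = [photo floodFillRemove];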
What if your image has a gradient background?
Use the code below for that.
- (UIImage *)complexRemoveBackground {
    // run Prewitt edge detection so the flood fill can find the background region
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self];
    GPUImagePrewittEdgeDetectionFilter *filter = [[GPUImagePrewittEdgeDetectionFilter alloc] init];
    [filter setEdgeStrength:0.04];
    [stillImageSource addTarget:filter];
    [filter useNextFrameForImageCapture];
    [stillImageSource processImage];
    UIImage *resultImage = [filter imageFromCurrentFramebuffer];
    UIImage *processedImage = [resultImage floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];
    CGImageRef inputCGImage = processedImage.CGImage;
    UInt32 *inputPixels;
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
    // magenta (flood-filled) pixels become transparent; everything else becomes opaque black
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;
            if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
                *currentPixel = RGBAMake(0, 0, 0, 0);
            } else {
                *currentPixel = RGBAMake(0, 0, 0, 255);
            }
        }
    }
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *maskImage = [UIImage imageWithCGImage:newCGImage];
    CGImageRelease(newCGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);
    // soften the mask edges with a slight Gaussian blur before masking
    GPUImagePicture *maskImageSource = [[GPUImagePicture alloc] initWithImage:maskImage];
    GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
    [blurFilter setBlurRadiusInPixels:0.7];
    [maskImageSource addTarget:blurFilter];
    [blurFilter useNextFrameForImageCapture];
    [maskImageSource processImage];
    UIImage *blurMaskImage = [blurFilter imageFromCurrentFramebuffer];
    //return blurMaskImage;
    UIImage *result = [self maskImageWithMask:blurMaskImage];
    return result;
}
You can download the sample code from the UIImageScanlineFloodfill repo linked above.

What does the "need a swizzler so that RGB8 can be read" error that Core Image gives on iOS 9 mean?

First of all, I found a workaround, but I don't think it's a good one; I'll give it at the end.
When I apply a filter on iOS 9, I get the "need a swizzler so that RGB8 can be read" error message, and the returned image is totally black, from this call:
[self.context createCGImage:self.outputImage fromRect:[self.outputImage extent]];
in here:
- (UIImage *)filterImage:(UIImage *)input filterName:(NSString *)name
{
    NSString *filter_name = name;
    self.context = [CIContext contextWithOptions:nil];
    UIImage *image;
    if ([filter_name isEqualToString:@"OriginImage"]) {
        image = input;
    } else {
        self.ciImage = [[CIImage alloc] initWithImage:input];
        self.filter = [CIFilter filterWithName:filter_name keysAndValues:kCIInputImageKey, self.ciImage, nil];
        [self.filter setDefaults];
        self.outputImage = [self.filter outputImage];
        // here it gives the error message
        self.cgimage = [self.context createCGImage:self.outputImage fromRect:[self.outputImage extent]];
        UIImage *image1 = [UIImage imageWithCGImage:self.cgimage];
        CGImageRelease(self.cgimage);
        self.context = [CIContext contextWithOptions:nil];
        // self.filter = [CIFilter filterWithName:@"CIColorControls"];
        // _imageView.image = image;
        // [self.filter setValue:[CIImage imageWithCGImage:image.CGImage] forKey:@"inputImage"];
        image = image1;
    }
    return image;
}
and the input parameter is created by this method, written by my director:
- (UIImage *)UIImageFromBmp:(uint8_t **)pixels withPlanesCount:(uint32_t)planeCount
{
    uint8_t *rgb = malloc(sizeof(uint8_t) * 1920 * 1080 * 3);
    int step = 1920 * 3;
    // CSU: YUV420 has 2 kinds of data structure
    // planeCount==3 --> |yyyyyyyy|
    //                   |yyyyyyyy|
    //                   |uuuu|
    //                   |vvvv|
    // planeCount==2 --> |yyyyyyyy|
    //                   |yyyyyyyy|
    //                   |uvuvuvuv|
    if (planeCount == 3) {
        for (int rows = 0; rows < 1080; rows++) {
            for (int cols = 0; cols < 1920; cols++) {
                int y = pixels[0][rows*1920 + cols];
                int u = pixels[1][(rows>>1)*960 + (cols>>1)];
                int v = pixels[2][(rows>>1)*960 + (cols>>1)];
                int r = (int)((y&0xff) + 1.402*((v&0xff)-128));
                int g = (int)((y&0xff) - 0.34414*((u&0xff)-128) - 0.71414*((v&0xff)-128));
                int b = (int)((y&0xff) + 1.772*((u&0xff)-128));
                rgb[rows*step + cols*3 + 0] = MAX(MIN(r, 255), 0);
                rgb[rows*step + cols*3 + 1] = MAX(MIN(g, 255), 0);
                rgb[rows*step + cols*3 + 2] = MAX(MIN(b, 255), 0);
            }
        }
    } else if (planeCount == 2) {
        for (int rows = 0; rows < 1080; rows++) {
            for (int cols = 0; cols < 1920; cols++) {
                int y = pixels[0][rows*1920 + cols];
                int u = pixels[1][(rows>>1)*1920 + ((cols>>1)<<1)];
                int v = pixels[1][(rows>>1)*1920 + ((cols>>1)<<1) + 1];
                int r = (int)((y&0xff) + 1.402*((v&0xff)-128));
                int g = (int)((y&0xff) - 0.34414*((u&0xff)-128) - 0.71414*((v&0xff)-128));
                int b = (int)((y&0xff) + 1.772*((u&0xff)-128));
                rgb[rows*step + cols*3 + 0] = MAX(MIN(r, 255), 0);
                rgb[rows*step + cols*3 + 1] = MAX(MIN(g, 255), 0);
                rgb[rows*step + cols*3 + 2] = MAX(MIN(b, 255), 0);
            }
        }
    } else {
        // CSU: should not happen
        assert(0);
    }
    NSData *data = [NSData dataWithBytes:rgb length:step*1080];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // CSU: Creating CGImage from raw data
    CGImageRef imageRef = CGImageCreate(1920,        // width
                                        1080,        // height
                                        8,           // bits per component
                                        8 * 3,       // bits per pixel
                                        step,        // bytesPerRow
                                        colorSpace,  // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,    // CGDataProviderRef
                                        NULL,        // decode
                                        false,       // should interpolate
                                        kCGRenderingIntentDefault); // intent
    // CSU: Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    free(rgb);
    return finalImage;
}
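One plausible reading of the error (my interpretation; Core Image's internals aren't documented here): the CGImageCreate call above produces a 24-bit RGB bitmap with no alpha channel, a layout that Core Image on iOS 9 apparently cannot sample without a channel "swizzler". Redrawing into a 32-bit RGBA bitmap, which is effectively what the UIGraphics round-trip below also does, sidesteps it. A self-contained sketch:
// Sketch: convert a 24-bit RGB CGImage to 32-bit RGBA so Core Image can sample it directly.
CGImageRef CreateRGBACopy(CGImageRef rgbImage) {
    size_t width = CGImageGetWidth(rgbImage);
    size_t height = CGImageGetHeight(rgbImage);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4, space,
                                             kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);
    if (ctx == NULL) return NULL;
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), rgbImage);
    CGImageRef rgbaImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return rgbaImage; // caller releases
}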
I googled "need a swizzler so that RGB8 can be read" and realized that the image format may be the cause of the problem.
So I call [self clipImageWithScaleWithsize:_scaledImage.size input:_sourceImage]; to process the image before it is passed to
[self.context createCGImage:self.outputImage fromRect:[self.outputImage extent]];
with this (size is just the image size):
- (UIImage *)clipImageWithScaleWithsize:(CGSize)asize input:(UIImage *)input
{
    UIImage *newimage;
    UIImage *image = input;
    if (nil == image) {
        newimage = nil;
    }
    else {
        CGSize oldsize = image.size;
        CGRect rect;
        if (asize.width/asize.height > oldsize.width/oldsize.height) {
            rect.size.width = asize.width;
            rect.size.height = asize.width*oldsize.height/oldsize.width;
            rect.origin.x = 0;
            rect.origin.y = (asize.height - rect.size.height)/2;
        }
        else {
            rect.size.width = asize.height*oldsize.width/oldsize.height;
            rect.size.height = asize.height;
            rect.origin.x = (asize.width - rect.size.width)/2;
            rect.origin.y = 0;
        }
        UIGraphicsBeginImageContext(asize);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextClipToRect(context, CGRectMake(0, 0, asize.width, asize.height));
        CGContextSetFillColorWithColor(context, [[UIColor clearColor] CGColor]);
        UIRectFill(CGRectMake(0, 0, asize.width, asize.height)); // clear background
        [image drawInRect:rect];
        newimage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newimage;
}
And it did solve the problem, but I'm still puzzled, and I don't think it's a good way to do it.
What does "need a swizzler so that RGB8 can be read" really mean, and why does my workaround work?

CIGaussianBlur changes imageOrientation sometimes

In my iOS app I want to apply the CIGaussianBlur filter to a UIImage; when it gets an image with a large height, it rotates the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image]; // get image for blur
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 0.0f; // set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent;     // create rect
rect.origin.x += blurLevel;          // and set custom params
rect.origin.y += blurLevel;
rect.size.height -= blurLevel*2.0f;
rect.size.width -= blurLevel*2.0f;
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // then apply new rect
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
UIImage *fixedImage = [UIImage imageWithCGImage:cgImage scale:originalScale orientation:originalOrientation]; // output of CIGaussianBlur
This works for me:
_imageView.image = image;
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
UIImage *fixedImage = [UIImage imageWithCGImage:cgImage scale:originalScale orientation:originalOrientation];

Resize Album Artwork From mp3 File

I have to resize album artwork from the file I get with this code:
for (NSString *format in [asset availableMetadataFormats]) {
    for (AVMetadataItem *item in [asset metadataForFormat:format]) {
        if ([[item commonKey] isEqualToString:@"title"]) {
            //NSLog(@"name: %@", (NSString *)[item value]);
            downloadedCell.nameLabel.text = (NSString *)[item value];
        }
        if ([[item commonKey] isEqualToString:@"artist"]) {
            downloadedCell.artistLabel.text = (NSString *)[item value];
        }
        if ([[item commonKey] isEqualToString:@"albumName"]) {
            //musicItem.strAlbumName = (NSString *)[item value];
        }
        if ([[item commonKey] isEqualToString:@"artwork"]) {
            UIImage *img = nil;
            if ([item.keySpace isEqualToString:AVMetadataKeySpaceiTunes]) {
                img = [UIImage imageWithData:[item.value copyWithZone:nil]];
            }
            else { // if ([item.keySpace isEqualToString:AVMetadataKeySpaceID3]) {
                NSData *data = [(NSDictionary *)[item value] objectForKey:@"data"];
                img = [UIImage imageWithData:data];
            }
            // musicItem.imgArtwork = img;
            UIImage *newImage = [self resizeImage:img width:70.0f height:70.0f];
            downloadedCell.artworkImage.image = newImage;
        }
    }
}
When I apply this method:
- (UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
    //NSLog(@"resizing");
    CGImageRef imageRef = [image CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    //if (alphaInfo == kCGImageAlphaNone)
    alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
                                                4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
I ALWAYS get a noisy image, like you can see in the photo at the link below.
http://postimage.org/image/jltpfza11/
How can I get a better resolution image?
If your view is 74x74, you should resize to twice that on retina displays. So, something like:
CGFloat imageSize = 74.0f * [[UIScreen mainScreen] scale];
UIImage *newImage = [self resizeImage:img width:imageSize height:imageSize];
Then you need to set the contentMode of your image view to something like UIViewContentModeScaleAspectFill.
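For example, with the downloadedCell from the question's code (the clipsToBounds line is my addition so the aspect-filled image stays inside the cell):
downloadedCell.artworkImage.contentMode = UIViewContentModeScaleAspectFill;
downloadedCell.artworkImage.clipsToBounds = YES; // keep the aspect-filled image inside its bounds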
Try specifying the level of interpolation for the context explicitly using:
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
Your resizeImage:width:height: method thus becomes:
- (UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
    //NSLog(@"resizing");
    CGImageRef imageRef = [image CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    //if (alphaInfo == kCGImageAlphaNone)
    alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
                                                4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
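One more thing worth checking (an assumption on my part, not something the answers above confirm): album-art PNGs can decode with an indexed color space, and CGBitmapContextCreate does not accept indexed color spaces, so passing CGImageGetColorSpace(imageRef) straight through can leave bitmap as NULL, in which case the draw is a no-op and the result is garbage. Substituting an explicit device RGB color space avoids that failure mode:
// Sketch: same context setup, but with an explicit device RGB color space
// instead of whatever color space the source artwork decoded with.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                            8,          // bits per component
                                            4 * width,  // bytes per row (RGBA)
                                            colorSpace,
                                            (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace);
if (bitmap == NULL) {
    return image; // bail out instead of drawing into a NULL context
}
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);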
