I am converting my image to binary data using a category on UIImage that has a static method. My problem is that UIImageJPEGRepresentation and UIImagePNGRepresentation are very slow, taking up to 6 seconds; I need a solution that runs in about 1 second. Can anybody help me?
I pass my image to the category method until its size is reduced to 10 KB or less.
-(NSData *)imageConvertToBinary:(UIImage *)image {
    NSLog(@"Image Convert ");
    //UIImagePNGRepresentation(image);
    NSData *imageData = UIImageJPEGRepresentation(image, .000032);
    NSLog(@"Image Done ");
    //Change size of image to 10kbs
    int size = imageData.length;
    NSLog(@"SIZE OF IMAGE:First %i ", size);
    NSData *data = UIImageJPEGRepresentation(image, .0032);
    NSLog(@"Start while ");
    int temp = 0;
    while (data.length / 1000 >= 10) {
        image = [UIImage imageWithImage:image andWidth:image.size.width/2 andHeight:image.size.height/2];
        data = UIImageJPEGRepresentation(image, .0032);
        temp++;
        NSLog(@"temp %u", temp);
    }
    size = data.length;
    NSLog(@"SIZE OF IMAGE:after %i ", size);
    return data;
}
I also have this category on UIImage:
@implementation UIImage (ImageProcessing)
+(UIImage*)imageWithImage:(UIImage*)image andWidth:(CGFloat)width andHeight:(CGFloat)height
{
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    [image drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
@end
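A side note on this category (my observation, not part of the question): UIGraphicsBeginImageContext always creates a 1x context, so the output is not Retina-aware. The WithOptions variant accepts a scale, where 0.0 means the device's screen scale. For this question's goal of a small file, the 1x default is actually helpful, but for display quality you would write:

    // Retina-aware variant of the same resize (scale 0.0 = main screen scale):
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 0.0);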
Your declaration
NSData *data;
must be initialized to something. I also reduced your code: you were calling UIImageJPEGRepresentation twice as often as necessary. Try this:
- (NSData *)imageConvertToBinary:(UIImage *)image {
    // Initialize data before the loop; otherwise data.length is 0 and the loop never runs.
    NSData *data = UIImageJPEGRepresentation(image, .0032);
    NSLog(@"Start while ");
    int temp = 0;
    while (data.length / 1000 >= 10) {
        image = [UIImage imageWithImage:image andWidth:image.size.width/2 andHeight:image.size.height/2];
        data = UIImageJPEGRepresentation(image, .0032);
        temp++;
        NSLog(@"temp %u", temp);
    }
    NSLog(@"End while ");
    int size = data.length;
    NSLog(@"SIZE OF IMAGE:after %i ", size);
    return data;
}
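If the encode itself is the bottleneck, a different approach may help (this is my sketch, not part of the original answer): downscale once with ImageIO before encoding. CGImageSourceCreateThumbnailAtIndex decodes and scales in one pass and is typically much faster than repeatedly drawing into a context and re-encoding at full size. The function name downscaledJPEG and the maxPixelSize parameter below are my own; tune them to hit the 10 KB target.

    #import <ImageIO/ImageIO.h>

    // Sketch: downscale with ImageIO in a single pass, then JPEG-encode once.
    static NSData *downscaledJPEG(NSData *sourceData, CGFloat maxPixelSize, CGFloat quality) {
        CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)sourceData, NULL);
        if (!source) return nil;
        NSDictionary *options = @{
            (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
            (id)kCGImageSourceThumbnailMaxPixelSize: @(maxPixelSize),
            (id)kCGImageSourceCreateThumbnailWithTransform: @YES
        };
        CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
        CFRelease(source);
        if (!thumb) return nil;
        NSData *jpeg = UIImageJPEGRepresentation([UIImage imageWithCGImage:thumb], quality);
        CGImageRelease(thumb);
        return jpeg;
    }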
I want to compress a UIImage, but I get the wrong size. I'm fairly sure UIImageJPEGRepresentation is the culprit; how do I fix this? Sorry for my poor English.
+ (NSData *)compressDataWithImg:(UIImage *)originalImage compression:(CGFloat)compression size:(CGFloat)size {
    NSData *data = [NSData dataWithData:UIImageJPEGRepresentation(originalImage, compression)];
    if ((data.length / 1024.0) > size) {
        compression = compression - 0.1;
        if (compression > 0) {
            return [[self class] compressDataWithImg:originalImage compression:compression size:size];
        }
    }
    return data;
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *fullImage = info[UIImagePickerControllerOriginalImage];
    fullImage = [fullImage fixOrientationToUp];
    UIImage *imgData = [fullImage scaleToSizeWithLimit:0];
    NSData *data = [Util compressDataWithImg:imgData compression:0.9 size:256]; // this is right: size 2M
    UIImage *image = [UIImage imageWithData:data]; // I use this UIImage
    NSData *xxx = [NSData dataWithData:UIImageJPEGRepresentation(image, 1)]; // wrong size: 7~8M. WHY?
    if (self.imageSelectBlock) {
        self.imageSelectBlock(image);
    }
    [picker dismissViewControllerAnimated:YES completion:NULL];
}
Thanks for helping me.
I found that when I use UIImageJPEGRepresentation and imageWithData to process an image again and again, the length of the image's data gradually increases.
Test code:
UIImage *image = [UIImage imageNamed:@"test.png"];
NSLog(@"first: %lu", UIImageJPEGRepresentation(image, 1.0).length);
NSLog(@"second: %lu", UIImageJPEGRepresentation([UIImage imageWithData:UIImageJPEGRepresentation(image, 1.0)], 1.0).length);
NSLog(@"third: %lu", UIImageJPEGRepresentation([UIImage imageWithData:UIImageJPEGRepresentation([UIImage imageWithData:UIImageJPEGRepresentation(image, 1.0)], 1.0)], 1.0).length);
Logs:
first: 361586
second: 385696
third: 403978
I think this growth is caused by UIImageJPEGRepresentation itself: JPEG is lossy, so every decode/re-encode round trip produces slightly different data, and here it keeps getting larger. This was just a test.
I also found a related question:
When I am using UIImagePNGRepresentation or UIImageJPEGRepresentation to convert a UIImage into NSData, the image size increases far too much
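A practical takeaway (my sketch, not part of the original answer): if you need the bytes again later, keep the NSData you already produced instead of round-tripping through UIImage. Each encode pass re-compresses the decoded pixels, which is what grows the data in the logs above.

    // Keep the compressed bytes; decode to UIImage only for display.
    NSData *jpegData = UIImageJPEGRepresentation(originalImage, 0.9);
    UIImage *displayImage = [UIImage imageWithData:jpegData]; // for the UI
    // Later, upload or store jpegData directly; do NOT re-encode displayImage.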
In one of my current projects, I have to parse a base64 string that was encoded and stored by a Flex (Adobe Flash) application. After discussing with the Flex dev team, I found that they store the image string in bitmap format, so I assume it is a raw pixel representation.
So, my first question is: in the case above, can I convert the raw data directly into a UIImage using a base64 decoding class, or do I have to go through CGBitmapContext?
For now, I have implemented the conversion code below.
//Convert to Base64 data
NSData *decodedData = [QSStrings decodeBase64WithString:_settingSerializedImage.currentValue];
NSError *error = nil;
NSData *unCompressedData = [decodedData bbs_dataByInflatingWithError:&error];
NSData *dataImage = [NSMutableData new];
NSUInteger bytePosition = 0;
uint8_t *bytes = malloc(5);
[unCompressedData getBytes:&bytes range:NSMakeRange(unCompressedData.length - 3, 3)];
bytes = OSSwapBigToHostInt16(bytes);
int width = (int)bytes;
[unCompressedData getBytes:&bytes range:NSMakeRange(unCompressedData.length - 5, 5)];
bytes = OSSwapBigToHostInt16(bytes);
int height = (int)bytes;
dataImage = [unCompressedData subdataWithRange:NSMakeRange(0, unCompressedData.length - 5)];
NSUInteger length = [dataImage length];
unsigned char *bytesData = malloc(length * sizeof(unsigned char));
[dataImage getBytes:bytesData length:length];
UIImage *imgScreenShot = [ScreenshotPortlet convertBitmapRGBA8ToUIImage:bytesData withWidth:width withHeight:height];
Below is the method for converting the raw data into an image using Core Graphics:
+ (UIImage *)convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
                               withWidth:(int)width
                              withHeight:(int)height {
    char *rgba = (char *)malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4*i]   = buffer[4*i];
        rgba[4*i+1] = buffer[4*i+1];
        rgba[4*i+2] = buffer[4*i+2];
        rgba[4*i+3] = buffer[4*i+3];
    }

    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedFirst;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,
                                    NULL,
                                    YES,
                                    renderingIntent);
    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }
    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
    }
    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    if (pixels) {
        free(pixels);
    }
    return image;
}
Here is the output I'm getting: a distorted image (screenshot omitted).
So, can anyone suggest a better solution, or tell me whether I'm on the right track? Thanks in advance. :)
For reference, here you can find the original base64 string.
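One quick sanity check worth adding (my suggestion, not from the post): verify that the decoded payload length matches width * height * 4. A mismatch in the width/height parsing, or row padding in the source bitmap, typically produces exactly this kind of sheared, distorted output.

    // Hypothetical check before calling convertBitmapRGBA8ToUIImage:
    NSUInteger expected = (NSUInteger)width * height * 4;
    if (dataImage.length != expected) {
        NSLog(@"Pixel buffer mismatch: have %lu bytes, expected %lu",
              (unsigned long)dataImage.length, (unsigned long)expected);
    }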
To convert a UIImage to a base64 string:
- (NSString *)imageToNSString:(UIImage *)image
{
    NSData *data = UIImagePNGRepresentation(image);
    return [data base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
}
To convert a base64 string back to a UIImage:
- (UIImage *)stringToUIImage:(NSString *)string
{
    NSData *data = [[NSData alloc] initWithBase64EncodedString:string options:NSDataBase64DecodingIgnoreUnknownCharacters];
    return [UIImage imageWithData:data];
}
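A quick round-trip usage sketch of the two helpers (assuming both live on the same class; the image name is hypothetical):

    // Encode, then decode, and confirm we got an image back.
    UIImage *original = [UIImage imageNamed:@"photo"]; // any test image
    NSString *encoded = [self imageToNSString:original];
    UIImage *decoded = [self stringToUIImage:encoded];
    NSLog(@"round trip %@", decoded ? @"succeeded" : @"failed");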
- (UIImage *)setImage:(UIImage *)img {
    CGSize size = img.size;
    int imageW = size.width;
    int imageH = size.height;
    unsigned char *cImage = [self convertUIImageToBitmapBGR:img];
    unsigned char *poutBGRImage = (unsigned char *)malloc(imageW * imageH * 3);
    if (!self.handle) {
        NSLog(@"init handle fail");
    }
    cv_result_t result = cv_imagesdk_dynamic_imagetone_picture(self.handle, cImage, CV_PIX_FMT_BGR888, imageW, imageH, imageW * 3, poutBGRImage, CV_PIX_FMT_BGR888, imageW, imageH, imageW * 3, 1.0, 1.0, 1.0);
    free(cImage);
    if (result == CV_OK) {
        UIImage *image = [UIImage imageWithData:[NSData dataWithBytes:poutBGRImage length:imageW * imageH * 3]];
        free(poutBGRImage);
        return image;
    } else {
        free(poutBGRImage);
        return [[UIImage alloc] init];
    }
}
I can convert the UIImage into a BGR bitmap successfully using the convertUIImageToBitmapBGR: function, and after running cv_imagesdk_dynamic_imagetone_picture I get poutBGRImage back as an unsigned char buffer. I currently use [UIImage imageWithData:], but it doesn't work; imageWithData: expects encoded image data such as PNG or JPEG, not raw pixels. Is there another function to turn unsigned char pixel data into a UIImage?
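A sketch of one way to do it (my own, not from the post): expand the 24-bit BGR buffer to 32-bit BGRA, then wrap it in a CGBitmapContext, since CGBitmapContextCreate does not accept 24-bit packed formats. The method name imageFromBGRBytes is hypothetical.

    // Wraps raw BGR888 bytes in a UIImage via Core Graphics.
    + (UIImage *)imageFromBGRBytes:(const unsigned char *)bgr width:(int)width height:(int)height {
        size_t pixelCount = (size_t)width * height;
        unsigned char *bgra = malloc(pixelCount * 4);
        if (!bgra) return nil;
        for (size_t i = 0; i < pixelCount; i++) {
            bgra[4*i]   = bgr[3*i];   // B
            bgra[4*i+1] = bgr[3*i+1]; // G
            bgra[4*i+2] = bgr[3*i+2]; // R
            bgra[4*i+3] = 255;        // A (opaque)
        }
        CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
        // 32Little + premultiplied-first alpha = BGRA byte order in memory.
        CGContextRef ctx = CGBitmapContextCreate(bgra, width, height, 8, width * 4, cs,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        UIImage *result = nil;
        if (ctx) {
            CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
            result = [UIImage imageWithCGImage:cgImage];
            CGImageRelease(cgImage);
            CGContextRelease(ctx);
        }
        CGColorSpaceRelease(cs);
        free(bgra);
        return result;
    }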
I'm getting a memory error because the memory usage increases rapidly. Clearly I am not releasing something; any ideas what that may be?
Here is my method to count the red pixels present in a UIImage; it returns the count.
- (NSUInteger)getRedPixelCount:(UIImage *)image
{
    NSUInteger numberOfRedPixels = 0;
    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != nil)
    {
        CGContextRef context = CGBitmapContextCreate((void *)pixels,
                                                     image.size.width,
                                                     image.size.height,
                                                     8,
                                                     image.size.width * 4,
                                                     CGImageGetColorSpace(image.CGImage),
                                                     (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        if (context != NULL)
        {
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            while (numberOfPixels > 0) {
                if (pixels->r == 255) {
                    numberOfRedPixels++;
                }
                pixels++;
                numberOfPixels--;
            }
            CGContextRelease(context);
        }
    }
    return numberOfRedPixels;
}
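For context, this method assumes a four-byte RGBA pixel struct along these lines; its definition isn't shown in the post:

    // Assumed layout, matching kCGImageAlphaPremultipliedLast with 8 bits per component:
    struct pixel {
        unsigned char r, g, b, a;
    };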
This is the code that iterates through the photo library images and determines each one's red pixel count.
[self.library enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    if (group) {
        [group setAssetsFilter:[ALAssetsFilter allPhotos]];
        [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
            if (asset) {
                ALAssetRepresentation *rep = [asset defaultRepresentation];
                CGImageRef iref = [rep fullResolutionImage];
                if (iref) {
                    UIImage *myImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
                    NSLog(@"%i", [self getRedPixelCount:myImage]);
                }
            }
        }];
    }
} failureBlock:^(NSError *error) {
    NSLog(@"error enumerating AssetLibrary groups %@\n", error);
}];
Regards,
C.
You're not releasing the memory allocated by
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
You need to add:
free(pixels);
at the bottom of the if(pixels != nil) block.
Make the first block look like:
- (NSUInteger)getRedPixelCount:(UIImage *)image
{
    NSUInteger numberOfRedPixels = 0;
    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != nil)
    {
        CGContextRef context = CGBitmapContextCreate((void *)pixels,
                                                     image.size.width,
                                                     image.size.height,
                                                     8,
                                                     image.size.width * 4,
                                                     CGImageGetColorSpace(image.CGImage),
                                                     (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        if (context != NULL)
        {
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            struct pixel *ptr = pixels; // iterate with a copy so the original pointer can still be freed
            while (numberOfPixels > 0) {
                if (ptr->r == 255) {
                    numberOfRedPixels++;
                }
                ptr++;
                numberOfPixels--;
            }
            CGContextRelease(context);
        }
        free(pixels);
    }
    return numberOfRedPixels;
}
It will also help to change the second block to include:
@autoreleasepool {
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref) {
        UIImage *myImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
        NSLog(@"%i", [self getRedPixelCount:myImage]);
        // ... your other statements ...
    }
}
although the major leak is not freeing the pixel buffer.
You can wrap the body of enumerateAssetsUsingBlock in @autoreleasepool:
@autoreleasepool {
    if (asset) {
        ...
    }
}
This forces all autoreleased objects to be released immediately.
UPD:
Use [asset thumbnail] instead of fullScreenImage or fullResolutionImage; those methods generate a huge amount of pixel data.
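For example (a minimal sketch; thumbnail returns a small square CGImageRef, and aspectRatioThumbnail is the other lightweight option):

    CGImageRef thumbRef = [asset thumbnail]; // far less pixel data than fullResolutionImage
    if (thumbRef) {
        UIImage *myImage = [UIImage imageWithCGImage:thumbRef];
        NSLog(@"%lu", (unsigned long)[self getRedPixelCount:myImage]);
    }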
UPD2:
If even [asset thumbnail] does not help, then you must find a way to release the image data. That can be a little tricky, since you can't release it directly by calling CGImageRelease. Try something like this:
NSMutableArray *arr = [NSMutableArray new];
In your enumerateAssetsUsingBlock, just put the asset object into this array:
[arr addObject:asset];
and do nothing else in the block.
Then iterate through this array like this:
while (arr.count > 0)
{
    ALAsset *asset = [arr lastObject];
    // do smth with asset
    [arr removeLastObject]; // this removes the object from memory immediately
}
I keep running into trouble decoding an encoded image.
I use the following code to encode the image returned from UIImagePickerController:
NSData *imageData2 = UIImageJPEGRepresentation(image, 1.0);
NSString *encodedString = [imageData2 base64Encoding];
I tried resizing the image to 600x600, and I also tried compressing it.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [self imageWithImage:[info objectForKey:UIImagePickerControllerEditedImage] scaledToSize:CGSizeMake(600, 600)];
    imgPreviewSelected.image = [self imageWithImage:image scaledToSize:CGSizeMake(600, 600)];

    CGFloat compression = 0.9f;
    CGFloat maxCompression = 0.1f;
    int maxFileSize = 250*1024;
    NSData *imageData = UIImageJPEGRepresentation(image, compression);
    while ([imageData length] > maxFileSize && compression > maxCompression)
    {
        compression -= 0.1;
        imageData = UIImageJPEGRepresentation(image, compression);
    }
    image = [UIImage imageWithData:imageData];
    imagePickerReturnedImage = image;

    NSData *imageData2 = UIImageJPEGRepresentation(image, 1.0);
    NSString *encodedString = [imageData2 base64Encoding];
    [utils postData:@"example.com" :[NSString stringWithFormat:@"image=%@", encodedString]];
    [self dismissViewControllerAnimated:YES completion:NULL];
}
When I try to decode the image that was sent to my database, it tells me that the file is damaged.
I tried decoding it with PHP and with the following website:
http://www.motobit.com/util/base64-decoder-encoder.asp
My MySQL database tells me the image is 1 MB, which I think is pretty large for a 600x600 image; without compression it was 1.3 MB.
I use this method to scale my image:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
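One note on this helper (my observation, not from an answer): passing 0.0 as the scale makes the context match the device's screen scale, so on a 2x Retina device a requested 600x600 comes out as 1200x1200 pixels. That alone goes a long way toward explaining the ~1 MB result. Forcing a 1x context keeps points equal to pixels:

    // Force scale 1.0 so a 600x600 request produces exactly 600x600 pixels:
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);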
I hope someone can help me out. Thanks!
I ended up using another approach without base64, so no answers are required.
Your code looks right. Have you tried copying your base64 string into that website to decode it?
You can check the image size with:
NSLog(@"dataSize: %@", [NSByteCountFormatter stringFromByteCount:data.length countStyle:NSByteCountFormatterCountStyleFile]);
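One more thing worth checking (an assumption on my part, since the post doesn't show the upload code): a raw base64 string contains '+' and '/' characters, and an unescaped '+' becomes a space in a form-encoded POST body, which corrupts the payload and produces exactly this "file is damaged" symptom. Percent-encoding the string before sending avoids that:

    // Percent-encode so '+' and '/' survive the form-encoded POST body.
    NSCharacterSet *allowed = [NSCharacterSet alphanumericCharacterSet];
    NSString *safeString = [encodedString stringByAddingPercentEncodingWithAllowedCharacters:allowed];
    [utils postData:@"example.com" :[NSString stringWithFormat:@"image=%@", safeString]];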