I'm creating a GIF as below, but I always end up with a black line on one or more of the edges, as the attached image shows. How can I avoid this, please?
// Create the GIF
MagickWand *mw = NewMagickWand();
MagickSetFormat(mw, "gif");
for (int i = 0; i < [self.finalImageArray count]; i++)
{
    float interval = (100/8); // note: integer division, yields 12 (hundredths of a second)
    MagickWand *localWand = NewMagickWand();
    UIImage *image = [self.finalImageArray objectAtIndex:i];
    NSData *dataObj = UIImagePNGRepresentation(image);
    MagickReadImageBlob(localWand, [dataObj bytes], [dataObj length]);
    MagickThumbnailImage(localWand, 320, 320);
    MagickSetImageDelay(localWand, interval);
    MagickAddImage(mw, localWand);
    DestroyMagickWand(localWand);
}
size_t my_size;
unsigned char *my_image = MagickGetImagesBlob(mw, &my_size);
NSData *gifData = [[NSData alloc] initWithBytes:my_image length:my_size];
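For completeness (a side note, not part of the question): MagickGetImagesBlob returns a buffer the caller owns, and the wand itself is never destroyed above. Since initWithBytes:length: copies the bytes, both can be released once gifData exists; a minimal sketch using the standard MagickWand calls:
// Sketch: gifData already copied the bytes, so release the blob and the wand.
MagickRelinquishMemory(my_image);
DestroyMagickWand(mw);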
The problem isn't with the creation of the GIF; the black line is created earlier, when I scale the image.
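One common cause of such an edge line, assuming the scaling is done by drawing into a Core Graphics or UIKit context, is a non-integral target size: the last row or column is left uncovered and renders as black once alpha is dropped. A sketch of a scale that rounds the target to whole pixels and fills the whole context (ScaledImage is a hypothetical helper, since the actual scaling code isn't shown):
// Hypothetical helper: round the target size to whole pixels and draw
// over the full context so no uncovered edge row is left behind.
UIImage *ScaledImage(UIImage *source, CGSize target) {
    CGSize integral = CGSizeMake(roundf(target.width), roundf(target.height));
    UIGraphicsBeginImageContextWithOptions(integral, NO, 1.0);
    [source drawInRect:CGRectMake(0, 0, integral.width, integral.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}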
I am trying to get a list of all the colors in an image in Objective-C. Note, I am COMPLETELY new to Objective-C - I've done some Swift work in the past, but not really Objective-C.
I pulled in a library that, as part of its code, is more or less supposed to extract all the colors. I've modified it to look like this (the callback at the end is from React Native; the path argument is just a string of the path):
RCT_EXPORT_METHOD(getColors:(NSString *)path options:(NSDictionary *)options callback:(RCTResponseSenderBlock)callback) {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:path];
    UIImage *image =
        [UIImage imageWithCGImage:[originalImage CGImage]
                            scale:0.5
                      orientation:UIImageOrientationUp];
    CGImageRef cgImage = [image CGImage];
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);
    // Allocate storage for the pixel data
    unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
    // Create the color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Set some metrics
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    // Create context using the storage
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    // Release the color space
    CGColorSpaceRelease(colorSpace);
    // Draw the image into the storage
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    // We are done with the context
    CGContextRelease(context);
    // Determine the colours in the image
    NSMutableArray *colours = [NSMutableArray new];
    float x = 0;
    float y = 0;
    for (int n = 0; n < (width * height); n++) {
        int index = (bytesPerRow * y) + x * bytesPerPixel;
        int red = rawData[index];
        int green = rawData[index + 1];
        int blue = rawData[index + 2];
        int alpha = rawData[index + 3];
        NSArray *a = [NSArray arrayWithObjects:[NSString stringWithFormat:@"%i", red], [NSString stringWithFormat:@"%i", green], [NSString stringWithFormat:@"%i", blue], [NSString stringWithFormat:@"%i", alpha], nil];
        [colours addObject:a];
        y++;
        if (y == height) {
            y = 0;
            x++;
        }
    }
    free(rawData);
    callback(@[[NSNull null], colours]);
}
Now, this script seems fairly simple: it should iterate over each pixel and add each color to an array, which is then returned to React Native via the callback.
However, the response to the call is always an empty array.
I'm not sure why that is. Could it be due to where the images are located (they're at AWS, on S3), or something in the algorithm? The code looks right to me, but it's entirely possible that I'm missing something just due to unfamiliarity with Objective-C.
I ran your code in an empty project and it performs as expected using an image loaded from the assets library. Is it possible that the UIImage *originalImage = [UIImage imageWithContentsOfFile:path]; call uses an invalid path? You can easily validate that by simply logging the value of the read image:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path];
NSLog(@"image read from file %@", originalImage);
If the image was not read properly from the file, you will get an empty colours array: the width and height will be zero, so there will be nothing to loop over.
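If you want to fail fast instead of just logging, a small guard before the pixel loop would do (the error message here is purely illustrative):
UIImage *originalImage = [UIImage imageWithContentsOfFile:path];
if (originalImage == nil) {
    // Sketch: report the bad path to React Native instead of
    // returning an empty colours array.
    callback(@[@"could not read image at path", [NSNull null]]);
    return;
}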
Also, to avoid the array being modified after your function has returned, it is generally good practice to return a copy of the mutable object, or an immutable object (i.e. NSArray instead of NSMutableArray):
callback(@[[NSNull null], [colours copy]]);
Hope this helps
The issue was ultimately that the image download method was returning nil - I'm not sure why.
So I took this:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
I changed it to this:
NSData *imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:path]];
UIImage *originalImage = [UIImage imageWithData:imageData];
And now my image downloads just fine and the rest of the script works great.
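One caveat worth noting: initWithContentsOfURL: downloads synchronously and blocks whatever thread it runs on. A sketch of the asynchronous equivalent using NSURLSession (variable names are illustrative):
// Sketch: fetch the image bytes off the calling thread.
NSURL *url = [NSURL URLWithString:path];
NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        UIImage *originalImage = data ? [UIImage imageWithData:data] : nil;
        // ...continue the colour extraction with originalImage...
    }];
[task resume];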
I've been struggling for two days to display an animated webp image in a UIImageView with no success whatsoever.
Mainly, the problem is in the decoding step of the file, which gives this error: VP8_STATUS_UNSUPPORTED_FEATURE.
I tried
https://github.com/seanooi/iOS-WebP
https://github.com/mattt/WebPImageSerialization
These projects provide code for creating a UIImage from webp files, and they work fine with images without animation, but they both fail with the same error as above when attempting to decode images with animation.
My device is jailbroken, and checking the filesystem I saw that Facebook's Messenger app has some of its stickers in .webp format; its license also mentions Google's "webp" library, so I'm sure it's possible somehow.
I managed to decode animated .webp using the code snippet at the top of this header, which also contains explanations of the data structures used.
static NSDictionary *DecodeWebPURL(NSURL *url) {
    NSMutableDictionary *info = [NSMutableDictionary dictionary];
    NSMutableArray *images = [NSMutableArray array];

    // Read the file and hand its bytes to the WebP demuxer
    NSData *imgData = [NSData dataWithContentsOfURL:url];
    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imgData bytes];
    data.size = [imgData length];
    WebPDemuxer *demux = WebPDemux(&data);

    int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);

    if (flags & ANIMATION_FLAG) {
        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            WebPDecoderConfig config;
            WebPInitDecoderConfig(&config);
            config.input.height = height;
            config.input.width = width;
            config.input.has_alpha = iter.has_alpha;
            config.input.has_animation = 1;
            config.options.no_fancy_upsampling = 1;
            config.options.bypass_filtering = 1;
            config.options.use_threads = 1;
            config.output.colorspace = MODE_RGBA;
            [info setObject:[NSNumber numberWithInt:iter.duration] forKey:@"duration"];

            // Decode each animation frame and wrap it in a UIImage
            do {
                WebPData frame = iter.fragment;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
                if (status != VP8_STATUS_OK) {
                    NSLog(@"Error decoding frame");
                }
                uint8_t *data = WebPDecodeRGBA(frame.bytes, frame.size, &width, &height);
                CGDataProviderRef provider = CGDataProviderCreateWithData(&config, data, config.options.scaled_width * config.options.scaled_height * 4, NULL);
                CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
                [images addObject:[UIImage imageWithCGImage:imageRef]];
                CGImageRelease(imageRef);
                CGColorSpaceRelease(colorSpaceRef);
                CGDataProviderRelease(provider);
            } while (WebPDemuxNextFrame(&iter));
            WebPDemuxReleaseIterator(&iter);
        }
    }
    WebPDemuxDelete(demux);
    [info setObject:images forKey:@"frames"];
    return info;
}
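The returned dictionary can then be handed to UIKit's built-in frame animation; a usage sketch, assuming (as the WebP iterator implies) that duration holds a single frame's duration in milliseconds:
NSDictionary *info = DecodeWebPURL(url);
NSArray *frames = [info objectForKey:@"frames"];
NSTimeInterval perFrame = [[info objectForKey:@"duration"] intValue] / 1000.0; // ms -> s
imageView.image = [UIImage animatedImageWithImages:frames duration:perFrame * [frames count]];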
I have managed to convert PDF pages to NSImage and save them as JPG files. However, the output is at the default 72 DPI. I want to change the DPI to 300 but failed. Below is the code:
- (IBAction)TestButton:(id)sender {
    NSString *localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
    NSData *pdfData = [NSData dataWithContentsOfFile:pdfPath];
    NSPDFImageRep *pdfImg = [NSPDFImageRep imageRepWithData:pdfData];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSInteger pageCount = [pdfImg pageCount];
    for (int i = 0; i < pageCount; i++) {
        [pdfImg setCurrentPage:i];
        NSImage *temp = [[NSImage alloc] init];
        [temp addRepresentation:pdfImg];
        CGFloat factor = 300/72; // Scale from 72 DPI to 300 DPI
        //NSImage *img; // Source image
        NSSize newSize = NSMakeSize(temp.size.width * factor, temp.size.height * factor);
        NSImage *scaledImg = [[NSImage alloc] initWithSize:newSize];
        [scaledImg lockFocus];
        [[NSColor whiteColor] set];
        [NSBezierPath fillRect:NSMakeRect(0, 0, newSize.width, newSize.height)];
        NSAffineTransform *transform = [NSAffineTransform transform];
        [transform scaleBy:factor];
        [transform concat];
        [temp drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        [scaledImg unlockFocus];
        NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[temp TIFFRepresentation]];
        NSData *finalData = [rep representationUsingType:NSJPEGFileType properties:nil];
        NSString *pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImg currentPage]];
        [fileManager createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
    }
}
Since OS X 10.8, NSImage has a block-based initialiser to draw vector-based content into a bitmap.
The idea is to provide a drawing handler that is called whenever a representation of the image is requested.
The relation between points and pixels is expressed by passing an NSSize (in points) to the initialiser and by explicitly setting the pixel dimensions for the representation:
NSString *localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
NSData *pdfData = [NSData dataWithContentsOfFile:pdfPath];
NSPDFImageRep *pdfImageRep = [NSPDFImageRep imageRepWithData:pdfData];
CGFloat factor = 300.0 / 72.0; // floating-point division; 300/72 would truncate to 4
NSInteger pageCount = [pdfImageRep pageCount];
for (int i = 0; i < pageCount; i++)
{
    [pdfImageRep setCurrentPage:i];
    NSImage *scaledImage = [NSImage imageWithSize:pdfImageRep.size flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
        [pdfImageRep drawInRect:dstRect];
        return YES;
    }];
    NSImageRep *scaledImageRep = [[scaledImage representations] firstObject];
    /*
     * The sizes of the PDF image rep and the [NSImage imageWithSize:flipped:drawingHandler:]
     * context are defined in terms of points.
     * By explicitly setting the size of the scaled representation in pixels, you
     * define the relation between points & pixels.
     */
    scaledImageRep.pixelsWide = pdfImageRep.size.width * factor;
    scaledImageRep.pixelsHigh = pdfImageRep.size.height * factor;
    NSBitmapImageRep *pngImageRep = [NSBitmapImageRep imageRepWithData:[scaledImage TIFFRepresentation]];
    NSData *finalData = [pngImageRep representationUsingType:NSJPEGFileType properties:nil];
    NSString *pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImageRep currentPage]];
    [[NSFileManager defaultManager] createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
}
You can set the resolution saved in an image file's metadata by setting the size of the NSImageRep to something other than the image's pixel size:
[pngImageRep setSize:NSMakeSize(targetWidth, targetHeight)]
where you have to initialize targetWidth and targetHeight to the values you want.
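Concretely, a bitmap reports 300 DPI when its point size is pixels * 72 / 300; a sketch using the pngImageRep from the answer above:
// 300 DPI means 300 pixels per 72 points.
CGFloat targetWidth = pngImageRep.pixelsWide * 72.0 / 300.0;
CGFloat targetHeight = pngImageRep.pixelsHigh * 72.0 / 300.0;
[pngImageRep setSize:NSMakeSize(targetWidth, targetHeight)];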
Edit: and I guess you wanted to write "scaledImg", not "temp":
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[scaledImg TIFFRepresentation]];
Edit 2: on second thought, this will get you a larger image, but only as a stretched-out version of the smaller one. The approach in weichsel's answer, with the modification below, is probably what you really want (but the code above is still valid for setting the metadata):
NSSize newSize = NSMakeSize(pdfImageRep.size.width * factor, pdfImageRep.size.height * factor);
NSImage *scaledImage = [NSImage imageWithSize:newSize flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
    [pdfImageRep drawInRect:dstRect];
    return YES;
}];
Okay, so I'm downloading a bunch of large-ish images (5 MB) from a server in pieces, then stitching the pieces together and rendering the total image from a byte array. However, I've realized that the data for each image is not being released, and consequently builds up, causing a memory warning and a crash of my app. I thought that, because of my explicit (__bridge_transfer NSData *) cast, ARC would take care of releasing the object, but it's still proving to be a problem. In Instruments, ~1 MB allocations labelled "CGDataProviderCopyData" build up and are not discarded for each file that is being stitched into the whole image. Any ideas, or anyone who can steer me in the right direction? Much obliged.
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
    // Set baseURL for individual file path
    NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@", [imageInfo objectForKey:@"BaseURL"]];
    // Specify imagePath by appending baseURL to file name
    NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
    // Change NSString --> NSURL --> NSData
    NSURL *imageUrl = [NSURL URLWithString:imagePath];
    NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
    // Create image from imageData
    UIImage *image = [UIImage imageWithData:imageData];
    CGImageRef cgimage = image.CGImage;
    size_t width = CGImageGetWidth(cgimage);
    size_t height = CGImageGetHeight(cgimage);
    size_t bpr = CGImageGetBytesPerRow(cgimage);
    size_t bpp = CGImageGetBitsPerPixel(cgimage);
    size_t bpc = CGImageGetBitsPerComponent(cgimage);
    size_t bytes_per_pixel = bpp / bpc;
    // Get CGDataProviderRef from cgimage
    CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
    // This is the object that is not being released
    NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider); // Using (__bridge_transfer NSData *) casts the copy to NSData and gives ownership to ARC, but it is still not discarded
    const UInt8 *bytes = (Byte *)[data bytes];
    // Log which file is currently being iterated through
    NSLog(@"---Stitching png file to total image: %@", file);
    // Populate byte array with channel data from each pixel
    for (size_t row = 0; row < height; row++)
    {
        for (size_t col = 0; col < width; col++)
        {
            const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
            for (unsigned short i = 0; i < 4; i += 4)
            {
                __unused unsigned short red = pixel[i];     // red channel - unused
                unsigned short green = pixel[i+1];          // green channel
                unsigned short blue = pixel[i+2];           // blue channel
                __unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
                // Create dicom intensity value from intensity = [(g * 256) + b]
                unsigned short dicomInt = ((green * 256) + blue);
                // Convert unsigned short intensity value to NSNumber so it can be stored in the array as an object
                NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
                // Add to image array (total image)
                [byteArray addObject:DICOMvalue];
            }
        }
    }
    data = nil;
}
return byteArray;
Running "Analyze" through Xcode doesn't show any apparent leaks either.
I took this code, almost verbatim, and did some more investigation. With the CFDataRef/NSData, I was able to see the problem you were seeing with the NSDatas not going away, and I was able to solve it by wrapping the portion of the code that uses the NSData in an @autoreleasepool scope, like this:
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
    // Set baseURL for individual file path
    NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@", [imageInfo objectForKey:@"BaseURL"]];
    // Specify imagePath by appending baseURL to file name
    NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
    // Change NSString --> NSURL --> NSData
    NSURL *imageUrl = [NSURL URLWithString:imagePath];
    NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
    // Create image from imageData
    UIImage *image = [UIImage imageWithData:imageData];
    CGImageRef cgimage = image.CGImage;
    size_t width = CGImageGetWidth(cgimage);
    size_t height = CGImageGetHeight(cgimage);
    size_t bpr = CGImageGetBytesPerRow(cgimage);
    size_t bpp = CGImageGetBitsPerPixel(cgimage);
    size_t bpc = CGImageGetBitsPerComponent(cgimage);
    size_t bytes_per_pixel = bpp / bpc;
    // Get CGDataProviderRef from cgimage
    CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
    @autoreleasepool
    {
        // This is the object that was not being released
        NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider);
        const UInt8 *bytes = (Byte *)[data bytes];
        // Log which file is currently being iterated through
        NSLog(@"---Stitching png file to total image: %@", file);
        // Populate byte array with channel data from each pixel
        for (size_t row = 0; row < height; row++)
        {
            for (size_t col = 0; col < width; col++)
            {
                const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
                for (unsigned short i = 0; i < 4; i += 4)
                {
                    __unused unsigned short red = pixel[i];     // red channel - unused
                    unsigned short green = pixel[i+1];          // green channel
                    unsigned short blue = pixel[i+2];           // blue channel
                    __unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
                    // Create dicom intensity value from intensity = [(g * 256) + b]
                    unsigned short dicomInt = ((green * 256) + blue);
                    // Convert unsigned short intensity value to NSNumber so it can be stored in the array as an object
                    NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
                    // Add to image array (total image)
                    [byteArray addObject:DICOMvalue];
                }
            }
        }
        data = nil;
    }
}
return byteArray;
After adding that @autoreleasepool, I then commented out the part where you create NSNumbers and put them in the array, and I was able to see in the Allocations template of Instruments that indeed the CFData objects were now being released with each turn of the loop.
The reason I commented out the part where you create NSNumbers and put them in the array is that, with that code in there, you're going to end up adding width * height NSNumbers to byteArray (the inner channel loop only runs once). This means that even if the NSData was being released properly, your heap use would grow by width * height * <at least 4 bytes, maybe more> no matter what. Maybe that's what you need to do, but it sure made it harder for me to see what was going on with the NSDatas, because their size was being dwarfed by the array of NSNumbers.
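If only the intensity values are needed, the boxing can be avoided altogether; a sketch (not a drop-in replacement) that appends the 16-bit values to an NSMutableData instead of wrapping each one in an NSNumber:
// Sketch: 2 bytes per pixel instead of one NSNumber object per pixel.
NSMutableData *intensities = [NSMutableData dataWithCapacity:width * height * sizeof(uint16_t)];
// ...then, inside the pixel loop, in place of the NSNumber/addObject: pair:
uint16_t dicomInt = (uint16_t)((green * 256) + blue);
[intensities appendBytes:&dicomInt length:sizeof(dicomInt)];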
Hope that helps.
Please check this code and help me out.
CGImageRef cRef = CGImageRetain(im.CGImage);
NSData *pixelData = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(cRef));
// return pointer to data
unsigned char *pixelBytes = (unsigned char *)[pixelData bytes];
// step through char data
for (int k = 0; k < [pixelData length]; k += 4) {
    // change accordingly
    pixelBytes[k] = pixelBytes[k];
    pixelBytes[k+1] = pixelBytes[k+1];
    pixelBytes[k+2] = pixelBytes[k+2];
    pixelBytes[k+3] = 255;
}
NSData *newPixelData = [NSData dataWithBytes:pixelBytes length:[pixelData length]];
CFDataRef imgData = (CFDataRef)pixelData;
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
CGImageRef throughCGImage = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, YES, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:throughCGImage];
NSLog(@"newImage: %@", newImage);
Some data is coming through, but it is not getting added to the UIImageView; it is showing blank.
Problem 0)
You're mutating immutable data, which should be undefined behavior.
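A mutable copy sidesteps that; a sketch (assuming ARC):
// Sketch: copy the immutable pixel data before touching its bytes.
NSData *pixelData = (__bridge_transfer NSData *)CGDataProviderCopyData(CGImageGetDataProvider(cRef));
NSMutableData *mutablePixels = [pixelData mutableCopy];
unsigned char *pixelBytes = (unsigned char *)[mutablePixels mutableBytes];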
Problem 1)
Anyways, a PNG is a real file format, not a bitmap blob with a fixed sample format. PNG can represent images in many ways - your program just overwrites good data.
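Since the buffer holds raw RGBA samples rather than PNG bytes, the image has to be built with CGImageCreate and explicit bitmap parameters instead of CGImageCreateWithPNGDataProvider; a sketch reusing the source image's metrics, assuming it really is 8-bit RGBA:
// Sketch: wrap the raw RGBA bytes in a CGImage directly.
CGDataProviderRef rawProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)newPixelData);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGImageRef rawImage = CGImageCreate(CGImageGetWidth(cRef), CGImageGetHeight(cRef),
                                    8, 32, CGImageGetBytesPerRow(cRef), rgb,
                                    kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                    rawProvider, NULL, YES, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:rawImage];
CGImageRelease(rawImage);
CGColorSpaceRelease(rgb);
CGDataProviderRelease(rawProvider);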