Add label and image view on animated GIF and share animated GIF - iOS

I am working on a picture caption app. For normal PNG pictures everything works fine, but my client has asked me to create an animated GIF from the images. I have created the animated GIF using the image view's animation property, but the problem comes when I have to add a caption to the GIF and share that GIF with the caption on it. Right now I am sharing a plain GIF by converting it into NSData. Please advise me on the right way to add a caption to an animated GIF.
I have used the code below to create the GIF image:
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:path],
                                    kUTTypeGIF, [camImages count], NULL);
// Per-frame properties: each frame is shown for 0.15 seconds
NSDictionary *frameProperties =
    [NSDictionary dictionaryWithObject:
        [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.15f]
                                    forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                                forKey:(NSString *)kCGImagePropertyGIFDictionary];
// GIF-level properties: a loop count of 0 means loop forever
NSDictionary *gifProperties =
    [NSDictionary dictionaryWithObject:
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:0]
                                    forKey:(NSString *)kCGImagePropertyGIFLoopCount]
                                forKey:(NSString *)kCGImagePropertyGIFDictionary];
for (int i = 0; i < camImages.count; i++)
{
    _image = [camImages objectAtIndex:i];
    CGImageDestinationAddImage(destination, _image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
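For reference, the sharing step described above (converting the finished GIF to NSData and handing it off) might look roughly like this; it assumes path is the same file path used with the image destination and that this runs inside a view controller:
// Load the finalized GIF from disk and present the standard share sheet.
// Assumes `path` is the file path used with CGImageDestinationCreateWithURL above.
NSURL *gifURL = [NSURL fileURLWithPath:path];
NSData *gifData = [NSData dataWithContentsOfURL:gifURL];
UIActivityViewController *shareController =
    [[UIActivityViewController alloc] initWithActivityItems:@[gifData]
                                      applicationActivities:nil];
[self presentViewController:shareController animated:YES completion:nil];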

You have every frame of the GIF, so you can draw the caption on each frame at the same position; that gives you a new set of images, which you can then compose into a new GIF.
Here is how to write text on an image; this demo also rotates the text:
NSString *str = @"hello";
UIImage *image = [UIImage imageNamed:@"path"];
CGSize imageSize = image.size;
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextRotateCTM(context, M_PI);
[[UIColor whiteColor] set];
[str drawInRect:CGRectMake(-imageSize.width, -imageSize.height, imageSize.width, imageSize.height)
 withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:16.0],
                   NSForegroundColorAttributeName : [UIColor whiteColor] }];
CGContextRestoreGState(context);
UIImage *compressedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
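Putting the two pieces together, a rough sketch of the whole approach: caption every frame first, then feed the captioned frames to the image destination as in the question's code. The names camImages and caption, and the caption rectangle, are assumptions for illustration:
// Hypothetical inputs: camImages (array of UIImage frames), caption (NSString), path (output file).
NSMutableArray *captionedFrames = [NSMutableArray array];
for (UIImage *frame in camImages) {
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0.0);
    [frame drawInRect:CGRectMake(0, 0, frame.size.width, frame.size.height)];
    // Draw the caption at the same position on every frame (example rectangle near the bottom).
    [caption drawInRect:CGRectMake(10, frame.size.height - 40, frame.size.width - 20, 30)
         withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:16.0],
                           NSForegroundColorAttributeName : [UIColor whiteColor] }];
    [captionedFrames addObject:UIGraphicsGetImageFromCurrentImageContext()];
    UIGraphicsEndImageContext();
}
// Rebuild the GIF from the captioned frames exactly as in the question's code,
// passing captionedFrames instead of camImages to CGImageDestinationAddImage.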

Related

Image rotation in iOS

I am using Xcode 7.3 and Objective-C. In one of my applications, when I convert the camera image into Base64, the image is rotated 90 degrees to the left. I have tried many methods to fix this issue, but none of them worked.
Below is the code:
NSString *data64URLString = [NSString stringWithFormat:@"data:image/png;base64,%@", [UIImageJPEGRepresentation(image, 0) base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength]];
I tried all the orientations this way, but it's not working:
CGImageRef cgRef = image.CGImage;
image = [[UIImage alloc] initWithCGImage:cgRef scale:1.0 orientation:UIImageOrientationDownMirrored];
UIImage *originalImage = image;
Try this code:
- (UIImage *)imageRotatedByDegrees:(UIImage*)oldImage deg:(CGFloat)degrees
{
//Calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,oldImage.size.width, oldImage.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(degrees * M_PI / 180);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
//Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
//Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
//Rotate the image context
CGContextRotateCTM(bitmap, (degrees * M_PI / 180));
//Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-oldImage.size.width / 2, -oldImage.size.height / 2, oldImage.size.width, oldImage.size.height), [oldImage CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
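A usage sketch, assuming the camera image comes back rotated 90 degrees to the left (the exact angle may need to be negated depending on the device orientation):
// Rotate the picked image back upright before building the base64 string.
UIImage *uprightImage = [self imageRotatedByDegrees:image deg:90];
NSString *data64URLString =
    [NSString stringWithFormat:@"data:image/jpg;base64,%@",
        [UIImageJPEGRepresentation(uprightImage, 0.9)
            base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength]];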
As per this thread, the PNG format has a known issue with rotation, so I would suggest you try UIImageJPEGRepresentation with data:image/jpg; instead of data:image/png;. That way the rotation flag will probably be set.
Hope it helps!
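A related approach that often fixes this is to redraw the image once before encoding, which bakes the EXIF orientation into the pixel data; a minimal sketch:
// Redraw the image so its imageOrientation is applied to the pixel data.
// After this, the encoded JPEG/PNG no longer depends on the orientation flag.
- (UIImage *)normalizedImage:(UIImage *)image
{
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}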
You can save the image to your app's storage and then load it back for display with this code:
- (UIImage *)finishSavePhotoWithImagePickerController:(NSDictionary *)info {
    UIImage *editedImage = (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
    UIImage *originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
    UIImage *imageToSave;
    if (editedImage) {
        imageToSave = editedImage;
    } else {
        imageToSave = originalImage;
    }
    NSData *imageData = UIImageJPEGRepresentation(imageToSave, 1.0);
    [imageData writeToFile:@"yourFilePath" options:NSDataWritingFileProtectionNone error:nil];
    UIImage *showImage = [[UIImage alloc] initWithContentsOfFile:@"yourFilePath"];
    return showImage;
}
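For context, a sketch of how this might be called from the picker delegate, assuming the method above lives in the same view controller and that it has an imageView outlet:
// UIImagePickerControllerDelegate callback
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *showImage = [self finishSavePhotoWithImagePickerController:info];
    self.imageView.image = showImage;
    [picker dismissViewControllerAnimated:YES completion:nil];
}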

Putting text over an image programmatically in Objective-C

I want to add text to an image programmatically, but I can't seem to find out how to do it. I've found one solution on here, but a lot of it is deprecated, so it doesn't work...
Please help!
EDIT:
Here's my code:
UIImage *backgroundImage = image;
NSMutableDictionary *stringAttributes = [NSMutableDictionary dictionary];
[stringAttributes setObject: [UIFont fontWithName:@"Helvetica" size:20] forKey: NSFontAttributeName];
[stringAttributes setObject: [UIColor whiteColor] forKey: NSForegroundColorAttributeName];
[stringAttributes setObject: [NSNumber numberWithFloat: 2.0] forKey: NSStrokeWidthAttributeName];
[stringAttributes setObject: [UIColor blackColor] forKey: NSStrokeColorAttributeName];
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
NSString *myString = [NSString stringWithFormat:@"Yolo"];
[myString drawInRect:CGRectMake(0, 0, 200, 50) withAttributes:stringAttributes];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = result;
NEW EDIT:
I'd like to clarify some things to make the question easier to understand. My app lets the user send a photo they have taken themselves via text message or email, and I want to add some pre-written text, taken from strings, on top of the photo.
So my question is: how do I get the text from the strings onto the photo?
It took forever, but I figured it out.
Code:
NSMutableDictionary *stringAttributes = [NSMutableDictionary dictionary];
[stringAttributes setObject: [UIFont fontWithName:@"Avenir Book" size:250] forKey: NSFontAttributeName];
[stringAttributes setObject: [UIColor whiteColor] forKey: NSForegroundColorAttributeName];
NSString *placeString = [NSString stringWithFormat:@"Text here."];
CGSize size = [placeString sizeWithAttributes:stringAttributes];
//Create a bitmap context into which the text will be rendered.
UIGraphicsBeginImageContext(size);
//Render the text.
[placeString drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:stringAttributes];
//Retrieve the image.
UIImage *imagene = UIGraphicsGetImageFromCurrentImageContext();
UIImage *mergedImage = _imageView.image;
CGSize newSize = mergedImage.size;
UIGraphicsBeginImageContext(newSize);
//Use existing opacity as is.
[mergedImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
//Apply supplied opacity if applicable.
CGRect drawingRect = (CGRect) {.size = size};
drawingRect.origin.x = (newSize.width - size.width) * 0.01;
drawingRect.origin.y = (newSize.height - size.height) * 0.03;
[imagene drawInRect:drawingRect blendMode:kCGBlendModeNormal alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = newImage;
Here's a general approach; some code is likely missing, but it gives you the important bits.
CGContextRef ctx = CGBitmapContextCreate(...)
CGContextDrawImage(ctx, myContextRect, myimage);
CGContextSelectFont(ctx, "Helvetica", 10.0, kCGEncodingMacRoman);
CGContextSetCharacterSpacing(ctx, 1.7);
CGContextSetTextDrawingMode(ctx, kCGTextFill);
CGContextShowTextAtPoint(ctx, 100.0, 100.0, "SOME TEXT", 9);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
This will give you an image you can put in a view, or share via UIActivityViewController. I'm sure you can work out the bits.
The approach is:
1) Create bitmap context.
2) Render image
3) Add text
4) Save context to image
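A fuller sketch of those four steps follows; the image variable, font, text, and position are illustrative assumptions, and the deprecated CGContextSelectFont/CGContextShowTextAtPoint calls simply mirror the snippet above (the NSString drawing methods in the next answer are the modern route):
// 1) Create a bitmap context matching the image's dimensions.
CGImageRef source = myImage.CGImage;            // assumed input UIImage named myImage
size_t width  = CGImageGetWidth(source);
size_t height = CGImageGetHeight(source);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
// 2) Render the image into the context.
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
// 3) Add text (Core Graphics coordinates are flipped relative to UIKit, so y = 20 is near the bottom).
CGContextSelectFont(ctx, "Helvetica", 24.0, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(ctx, kCGTextFill);
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
CGContextShowTextAtPoint(ctx, 20.0, 20.0, "SOME TEXT", 9);
// 4) Save the context back out to an image.
CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
UIImage *finalImage = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);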
Simply place a UILabel on top of a UIImageView in IB and position them as desired using constraints. Then set the text into the label as desired.
EDIT:
Now that I know you want to be able to save, I'd suggest using UIGraphicsBeginImageContextWithOptions to create an off-screen graphics context. I find it easier to deal with than CGContext and Core Graphics calls.
Draw into the context, then fetch the resulting image from it.
Make the context the size of your image, and pass in a scale of 0 to use the device's native scale.
Your code might look something like this:
//Create an off-screen graphics context for drawing.
CGSize imageSize = myImage.size;
UIGraphicsBeginImageContextWithOptions(imageSize, false, 0);
//draw your image using drawAtPoint(CGPointmake(0,0));
//draw your string using one of the drawAtPoint or drawInRect methods
//available in NSString UIKit additions
//Fetch the resulting image from the context.
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
//End the off-screen context
UIGraphicsEndImageContext();
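Filled in, the sketch might look like this; the string, font, and draw rectangle are placeholder assumptions:
// Off-screen context at the image's size and the device's native scale.
CGSize imageSize = myImage.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
// Draw the photo first.
[myImage drawAtPoint:CGPointMake(0, 0)];
// Then draw the pre-written string on top of it.
NSDictionary *attributes = @{ NSFontAttributeName : [UIFont fontWithName:@"Helvetica" size:20],
                              NSForegroundColorAttributeName : [UIColor whiteColor] };
[@"Your caption here" drawInRect:CGRectMake(20, 20, imageSize.width - 40, 50)
                  withAttributes:attributes];
// Fetch the composed image and close the context.
UIImage *composedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();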

Create a UIImage after skewing text

I'm trying to create a UIImage from an NSAttributedString that has a skewed transform. I can make a UILabel or UIImageView and skew that view, but unfortunately I need to create a UIImage after the string has been skewed. I've tried the following..
CGAffineTransform skewTransform = CGAffineTransformIdentity;
skewTransform.b = (M_1_PI / 2);
UIGraphicsBeginImageContextWithOptions(mSize, NO, 1.0);
UILabel* skewedLabel = [[UILabel alloc] init];
skewedLabel.attributedText = stringToBeSkewed;
CGContextSetTextMatrix(UIGraphicsGetCurrentContext(), skewTransform);
skewedLabel.layer.affineTransform = skewTransform;
// The following 3 calls haven't worked
// [stringToBeSkewed drawInRect:inputRectParam.rect];
// [skewedLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
// [skewedLabel.layer drawInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I get that CALayer is Core Animation and Core Graphics doesn't recognize it, but is there a way to get skewed text captured within the context and converted into a UIImage?
I was able to get a UIImage from text by doing the following; hope this helps someone out. Also, it's not the cleanest image, so any advice on sharpening it up would be appreciated.
UIFont *font = [UIFont fontWithName:@"(your font name)" size:60];
CTFontRef skewFont = CTFontCreateWithName((__bridge CFStringRef)font.fontName, font.pointSize, NULL);
// Create an attributed string for shadowed text
CFStringRef skewKeys[] = { kCTFontAttributeName, kCTForegroundColorAttributeName };
CFTypeRef skewValues[] = { skewFont, [UIColor colorWithRed:0 green:0 blue:0 alpha:.4].CGColor };
CFDictionaryRef skewAttributes = CFDictionaryCreate(NULL, (const void **)&skewKeys, (const void **)&skewValues,
sizeof(skewKeys) / sizeof(skewKeys[0]), &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFStringRef theCFString = (__bridge CFStringRef)inputStringParam.string;
CFAttributedStringRef skewString = CFAttributedStringCreate(NULL, theCFString, skewAttributes);
CFRelease(skewAttributes);
// create skew transform
CGAffineTransform skewTransform = CGAffineTransformIdentity;
skewTransform.b = (value to be skewed);
// Draw the string (note: an image context must be current here; the snippet as posted
// omits the call that creates it, so one is added below with an arbitrary example size)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 200), NO, 0.0);
CTLineRef skewLine = CTLineCreateWithAttributedString(skewString);
CGContextSetTextMatrix(UIGraphicsGetCurrentContext(), CGAffineTransformMakeScale(1.0, -1.0));
CGContextConcatCTM(UIGraphicsGetCurrentContext(), skewTransform);
// draw and capture shadow image
CGContextSetTextPosition(UIGraphicsGetCurrentContext(), inputDrawPointParam.point.x, inputDrawPointParam.point.y);
CTLineDraw(skewLine, UIGraphicsGetCurrentContext());
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(skewLine);
CFRelease(skewString);
CFRelease(skewFont);

drawInRect does not read / preserve UIImageView transform

I encountered a problem: when I pinch, pan, or rotate a UIImageView, the transform is not preserved when the image is drawn with drawInRect.
How can I preserve the transform in drawInRect?
I tried this but no go :(
- (UIImage*) combineImage:(UIImageView *)selectedImage withOverlay:(UIImageView *)overlayImage
{
/* Identify the region that needs to be cropped */
CGRect viewForImgFrame = self.viewForImg.frame;
NSLog(#"view %#", NSStringFromCGRect(viewForImgFrame));
NSLog(#"selectedImage Img value %#",selectedImage);
NSLog(#"overlayImage Img value %#",overlayImage);
CGSize newImageSize =self.viewForImg.frame.size;
NSLog(#"CGSize %#",NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); //retina res
//[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
[selectedImage.image drawInRect:CGRectMake(0, 0, selectedImage.frame.size.width, selectedImage.frame.size.height)];
CGContextConcatCTM(UIGraphicsGetCurrentContext(), overlayImage.transform);
[overlayImage.image drawInRect:CGRectMake(overlayImage.frame.origin.x, overlayImage.frame.origin.y, overlayImage.frame.size.width, overlayImage.frame.size.height)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9); //UIImagePNGRepresentation ( image ); // get JPEG representation
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap UIImage around the JPEG representation
UIGraphicsEndImageContext();
return imagePNG;
}
Any comments are greatly appreciated.
You need to try
CGRectApplyAffineTransform(<#CGRect rect#>, <#CGAffineTransform t#>)
Your code should look like this:
CGContextConcatCTM(UIGraphicsGetCurrentContext(), overlayImage.transform);
CGRect rect = CGRectMake(overlayImage.frame.origin.x, overlayImage.frame.origin.y, overlayImage.frame.size.width, overlayImage.frame.size.height);
CGRect transformedRect = CGRectApplyAffineTransform(rect, overlayImage.transform);
[overlayImage.image drawInRect:transformedRect];
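If concatenating the transform still doesn't reproduce what's on screen, another option (hinted at by the commented-out line in the question) is to render the container view's layer, which captures every subview's transform automatically; a short sketch assuming self.viewForImg contains both image views:
// Render the whole container view, transforms included, into an off-screen context.
UIGraphicsBeginImageContextWithOptions(self.viewForImg.bounds.size, NO, 0.0);
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();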
It turned out that the AutoresizingMask changed the rect size:
[slider setAutoresizingMask:UIViewAutoresizingNone];

Anti-aliasing custom bar button image in iOS

I'm trying to use a custom bar button image. I've created a simple PNG image with a white object and a transparent background as recommended by Apple:
As you can see, there is no anti-aliasing being applied (that is my exact image, pixel for pixel).
Apple does recommend using anti-aliasing, but doesn't provide any more detail. I would have thought this would be handled by the software, as it is with text, rather than having to be pre-applied to the image.
The question is: how can I programmatically provide anti-aliasing for my custom bar button images?
I have tried several things but nothing is doing it for me. Here's one thing I tried:
- (UIImage *)antialiasedImage
{
// Check to see if Antialiased Image has already been initialized
if (_antialiasedImage != nil) {
return _antialiasedImage;
}
// Get Device Scale
CGFloat scale = [[UIScreen mainScreen] scale];
// Get the Image from Resource
UIImage *buttonImage = [UIImage imageNamed:@"ReferenceFieldIcon"]; // .png???
// Get the CGImage from UIImage
CGImageRef imageRef = [buttonImage CGImage];
// Find width and height
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
// Begin Context
UIGraphicsBeginImageContextWithOptions(buttonImage.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set Antialiasing Parameters
CGContextSetAllowsAntialiasing(context, YES);
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
// Draw Image Ref on Context
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
// End Context
UIGraphicsEndImageContext();
// Set CGImage to UIImage
_antialiasedImage =
[UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
// Release Image Ref
CGImageRelease(imageRef);
return _antialiasedImage;
}
Then I create my Segmented Control Like So:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
NSArray *centerItemsArray =
[NSArray arrayWithObjects: @"S",
self.antialiasedImage,
#"D",
nil];
UISegmentedControl *centerSegCtrl =
[[UISegmentedControl alloc] initWithItems:centerItemsArray];
centerSegCtrl.segmentedControlStyle = UISegmentedControlStyleBar;
UIBarButtonItem *centerButtonItem =
[[UIBarButtonItem alloc] initWithCustomView:(UIView *)centerSegCtrl];
UIBarButtonItem *flexibleSpace =
[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace
target:nil
action:nil];
NSArray *barArray = [NSArray arrayWithObjects: flexibleSpace,
centerButtonItem,
flexibleSpace,
nil];
[self setToolbarItems:barArray];
}
Thank you in advance!
iOS doesn't apply anti-aliasing to rendered images, only to content drawn in code. You'll have to anti-alias the image before saving it to a file.
The way to anti-alias an image in code is to draw it in code.
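One sketch of that idea: redraw the icon into a context with a small transparent inset so the rasterizer has edge pixels to blend. The inset value and the image name are assumptions to adjust for your asset:
// Redraw the icon with a 1-point transparent inset so edges get anti-aliased pixels.
UIImage *icon = [UIImage imageNamed:@"ReferenceFieldIcon"];
CGFloat inset = 1.0; // assumed border width, adjust to taste
CGSize paddedSize = CGSizeMake(icon.size.width + 2 * inset, icon.size.height + 2 * inset);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[icon drawInRect:CGRectMake(inset, inset, icon.size.width, icon.size.height)];
UIImage *smoothedIcon = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();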
