How to add sticker overlay to camera photo - iOS

I'm trying to emulate what Celebrity Clicks does: add a celebrity sticker to the camera feed, position and scale it, and then take the photo, which gives you a photo with the sticker applied. However, I'm having trouble merging the camera photo with the sticker. The main issue is that the sticker's scale and position are wrong when applied to the final camera image, because the image captured by the camera is much larger, both in resolution and in size, than the live camera preview shown while you set up the sticker.
Here is what I'm doing now:
[(GPUImageStillCamera *)videoCamera capturePhotoAsImageProcessedUpToFilter:selectedFilter withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    selectedImage = [self imageByCombiningImage:processedImage withImage:celebOverlayView.imageView.image];
}];
- (UIImage *)imageByCombiningImage:(UIImage *)firstImage withImage:(UIImage *)secondImage {
    UIImage *image = nil;
    CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width),
                                     MAX(firstImage.size.height, secondImage.size.height));
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(newImageSize, NO, [[UIScreen mainScreen] scale]);
    } else {
        UIGraphicsBeginImageContext(newImageSize);
    }
    [firstImage drawInRect:cameraView.frame];
    [firstImage drawAtPoint:CGPointMake(roundf((newImageSize.width - firstImage.size.width) / 2),
                                        roundf((newImageSize.height - firstImage.size.height) / 2))];
    [secondImage drawAtPoint:CGPointMake(roundf((newImageSize.width - secondImage.size.width) / 2),
                                         roundf((newImageSize.height - secondImage.size.height) / 2))];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have attached the before and after photos so that you can see my problem.
Camera setup screen:
Photo taken with camera with the sticker applied:
I'm guessing there is a better way to merge the two images, or to simply apply the sticker at the given coordinates on the camera-captured image. Any suggestions?

It is easiest to save the relative frame of the added view in the picker, and then compute the new frame from it when combining the two images.
One of the many ways to do this is to divide all the parameters of the frame by the superview's width and height respectively when taking the photo, and then multiply those same relative coordinates by the image's width and height when merging the two images.
Also, for what you are doing, I suggest you drop the Core Graphics drawing and just use image views: create an image view with the size of the background image, set the image, then add another image view with the added image and set its frame as described above (a sketch of the relative-frame part follows below). Then simply take a screenshot of the view and your image is done. This way you will have no issues with scaling, transforms, and such.
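A minimal sketch of the relative-frame idea, assuming the names from the question (celebOverlayView, cameraView); the combining method and its signature are my own, and it assumes the preview shows the photo at the same aspect ratio (with aspect-fill you would also need to account for the hidden margins):

// In the capture handler: express the sticker frame as fractions of the preview view
CGRect stickerFrame = celebOverlayView.frame;
CGSize previewSize = cameraView.bounds.size;
CGRect relativeFrame = CGRectMake(stickerFrame.origin.x / previewSize.width,
                                  stickerFrame.origin.y / previewSize.height,
                                  stickerFrame.size.width / previewSize.width,
                                  stickerFrame.size.height / previewSize.height);

- (UIImage *)imageByCombiningImage:(UIImage *)photo withSticker:(UIImage *)sticker relativeFrame:(CGRect)relativeFrame {
    // Draw the full-resolution photo at its native size
    UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    // Scale the relative frame back up to the photo's dimensions
    CGRect stickerRect = CGRectMake(relativeFrame.origin.x * photo.size.width,
                                    relativeFrame.origin.y * photo.size.height,
                                    relativeFrame.size.width * photo.size.width,
                                    relativeFrame.size.height * photo.size.height);
    [sticker drawInRect:stickerRect];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}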

Related

iOS - Can crop photo vertically but not horizontally

I have a UIScrollView with a UIImageView inside of it. For this part of my app, the user can select photos from their camera roll and scale and crop them.
I have successfully made it so the user can select different photos, then zoom in and out and pan around the image. The user can also zoom out so the image centres vertically or horizontally, depending on whether the image is portrait or landscape.
The problem is that when I try to crop the photo from the visible rect of the scroll view into a new image, it only works for portrait photos.
Here is an example of it working then not working:
Here is a portrait image that is zoomed out to fit the screen:
Next, I zoom in on the image so there is no black space.
Finally, I crop the photo and you can see it crops perfectly in the top left hand corner.
However, for some reason when I try to do this with a landscape image the cropping messes up?! Here is an example of it not working.
Here is a zoomed out landscape image.
Next, I zoom in so there is no black space left. Notice how I zoomed in specifically so that no physical border of the whiteboard is visible in the photo.
Now, I crop the photo just like before, and it doesn't crop properly. Notice how in the top left-hand corner the image is different from before. It appears to have been zoomed out, and you can see more of the bottom of the whiteboard.
I need to figure out why this is happening and how to fix it.
Here is the exact code I use to crop the photo from the UIScrollView.
//Get the scale
float scale = 1.0f/_libraryScrollView.zoomScale;
//Create a new rect
CGRect visibleRect;
visibleRect.origin.x = _libraryScrollView.contentOffset.x * scale;
visibleRect.origin.y = _libraryScrollView.contentOffset.y * scale;
visibleRect.size.width = _libraryScrollView.bounds.size.width * scale;
visibleRect.size.height = _libraryScrollView.bounds.size.height * scale;
//Get the source image
UIImage *src = libraryPreviewImageView.image;
//Create the new cropped image with the rect
CGImageRef cr = CGImageCreateWithImageInRect(src.CGImage, visibleRect);
UIImage *finalImage = [[UIImage alloc] initWithCGImage:cr];
CGImageRelease(cr); // the created CGImageRef is owned here and must be released
//Set the new image to the preview image view
self.imagePreviewView.image = finalImage;
This code works for portrait images but doesn't work for landscape images, as shown above in the examples. Is this error caused by my cropping code, or is it something else?
Any help would be appreciated.
In the end, I had no idea what the problem was, but trying to use maths to crop an image from a scroll view is extremely difficult!
I found a really easy way, which is to take a screenshot of the visible content in the scroll view. It's as easy as this:
UIGraphicsBeginImageContextWithOptions(_libraryScrollView.bounds.size, YES, [UIScreen mainScreen].scale);
//Shift the context so the scroll view's visible region lands at the origin
CGPoint offset = _libraryScrollView.contentOffset;
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y);
//Render the scroll view's layer, zoomed content and all, into the context
[_libraryScrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Set the new image to the preview image view
self.imagePreviewView.image = finalImage;
I really hope this answer can help other people out too!

Need a very tiny (rectangular in shape) overlay over UIImagePickerController, and then crop the image accordingly - UPDATED

In my application, I need the user to take a snap of only a 10-letter word (using an overlay, which should be right in the centre of the UIImagePickerController screen), and then I need to show him that image (only the part of the image covered by the rectangle). So, I need to crop the image according to the overlay.
Here, I have taken a picture using UIImagePickerController. Now, I want to see the dimensions of the image I have taken:
UIImage *imageToprocess = [info objectForKey:UIImagePickerControllerOriginalImage];
NSLog(@"image width %f", imageToprocess.size.width);
NSLog(@"image height %f", imageToprocess.size.height);
I see the following result on the console. But how is this possible? The dimensions of the image exceed the dimensions of the iPhone screen (which is 320 x 568):
UsingTesseractOCR[524:60b] image width 2448.000000
2013-12-17 16:02:18.962 UsingTesseractOCR[524:60b] image height 3264.000000
Can anybody help me out here? I have gone through several questions on this, but did not understand how to do it.
Please help.
Refer to this sample code for image capturing and cropping:
https://github.com/kishikawakatsumi/CropImageSample
For creating the overlay, first create a custom view (with the full dimensions of the camera preview) and give it a transparent background image that contains just the rectangle. Use this view as the overlay view:
myview = [[UIImageView alloc] init];
// Height 431 because: height of the device minus the height of the tab bar
// at the bottom holding the picker's camera controls (for iPhone 4: 480 - 49)
myview.frame = CGRectMake(0, 0, 320, 431);
myview.backgroundColor = [UIColor clearColor];
myview.opaque = NO;
myview.image = [UIImage imageNamed:@"A45Box.png"];
myview.userInteractionEnabled = YES;
Note that you should create the background image with appropriate dimensions. You could also draw the rectangle programmatically, but this way is much easier.
Secondly, regarding your cropping issue, you will have to get your hands dirty. Try these links for help, and see the sketch after them:
https://github.com/iosdeveloper/ImageCropper
https://github.com/barrettj/BJImageCropper
https://github.com/ardalahmet/SSPhotoCropperViewController
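In case the links go stale, the cropping step is a sketch along these lines: map the overlay rect from screen points to image pixels, then cut with CGImageCreateWithImageInRect. The method name and signature are my own; it assumes the camera preview fills the screen width and it ignores the extra handling a real implementation needs for aspect-fill offsets and UIImage orientation (camera images are usually not oriented .Up):

- (UIImage *)cropImage:(UIImage *)image toOverlayRect:(CGRect)overlayRect screenSize:(CGSize)screenSize {
    // Ratio between the captured image's pixel width and the screen's point width
    CGFloat ratio = image.size.width / screenSize.width;
    CGRect cropRect = CGRectMake(overlayRect.origin.x * ratio,
                                 overlayRect.origin.y * ratio,
                                 overlayRect.size.width * ratio,
                                 overlayRect.size.height * ratio);
    CGImageRef croppedCG = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedCG
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedCG);
    return cropped;
}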

Get picture from square AVCaptureVideoPreviewLayer

So what I am doing is creating a custom image picker, and I have a 320 x 320 AVCaptureVideoPreviewLayer that I am using. When I take a picture, I want to get a UIImage of what is actually seen in the preview layer, but what I get from captureStillImageAsynchronouslyFromConnection:completionHandler: is a normal image of size 2448 x 3264. So what would be the best way to make this image into a 320 x 320 square image like what is seen in the preview layer, without messing it up? Is there a Right Way™ to do this? Also, I am using AVLayerVideoGravityResizeAspectFill for the videoGravity property of the AVCaptureVideoPreviewLayer, if that is relevant.
Have you tried transforming the image?
Try using CGAffineTransformMakeScale(<#sx: CGFloat#>, <#sy: CGFloat#>) to scale the image down. Transforms can do magic! If you have ever taken linear algebra, you should recall the standard transformation matrices. I have not used this on images, so I am not sure how well it works at the pixel level.
you could also try
// Grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// A scale of 2.0 makes the image half its original point size
UIImage *scaledImage = [UIImage imageWithCGImage:[originalImage CGImage]
                                           scale:(originalImage.scale * 2.0)
                                     orientation:(originalImage.imageOrientation)];
where you can change the scale factor as needed (see also the cropping sketch below).
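To actually reproduce what a square AspectFill preview shows, though, you first need to center-crop the captured image to a square; only then does scaling to 320 x 320 make sense. A rough sketch (method name mine; it works on the raw bitmap and keeps the original orientation metadata):

- (UIImage *)squareImageFromImage:(UIImage *)image {
    // CGImage dimensions are raw pixels and ignore imageOrientation
    CGFloat width = (CGFloat)CGImageGetWidth(image.CGImage);
    CGFloat height = (CGFloat)CGImageGetHeight(image.CGImage);
    CGFloat side = MIN(width, height);
    // Center the square crop, mirroring what AspectFill keeps visible
    CGRect cropRect = CGRectMake((width - side) / 2.0, (height - side) / 2.0, side, side);
    CGImageRef croppedCG = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *square = [UIImage imageWithCGImage:croppedCG
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(croppedCG);
    return square;
}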

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
This image I display in an UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect, as this does not take the scaling into account. The transform is applied to the image view like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
Using a gestureRecognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                          rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
I pass in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Suggestions on whether I should implement the hierarchy or scaling differently are also welcome.
Try a simple trick. Apple has samples on its site showing how to zoom into a photo in code. Once you are done zooming, use a graphics context: take the frame size of the bounding view and render the image with that. E.g. a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, and save the result as a new image (see the sketch below). Tell me if that makes sense.
Cheers :)
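A minimal sketch of that idea, assuming a boundingView that clips its subviews (clipsToBounds = YES) and contains the zoomed scroll view:

UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, YES, [UIScreen mainScreen].scale);
// Render the clipped container, zoomed content and all, into the context
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();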

How to truncate a UIImage in iOS

How can I truncate the left side of an image stored in a UIImage object? Basically, in certain situations I just want to show part of an image.
How can I do this with the iOS SDK?
P.S. I tried changing the frame size of the UIImageView, but that just scales the image and distorts it.
A very simple way is to load the image into a UIImageView and then add that view to another view. You can then position the image view so that its frame's origin.x is negative, which will place it off to the left. The parent view needs clipsToBounds set to YES (equivalently, masksToBounds on its layer), or else the image view will still be fully visible.
There are many other ways to achieve this effect as well, but this may be the simplest for you to implement.
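A minimal sketch of this approach, assuming image is the UIImage to truncate and the sizes are purely illustrative:

// Container that hides anything outside its bounds
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 200)];
container.clipsToBounds = YES; // same effect as masksToBounds on the layer

// Shift the image view left so only its right half shows
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = CGRectMake(-100, 0, 200, 200);
[container addSubview:imageView];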
To crop a UIImage, you can use one of the UIImage categories available out there, such as http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
For example, this rect will remove 100 pixels from the left side of a 200 x 200 pixel UIImage:
CGRect clippedRect = CGRectMake(100, 0, 100, 200);
UIImage *cropped = [self imageByCropping:lightsOnImage toRect:clippedRect];
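If the linked category is unavailable, the helper can be approximated like this (signature mine, matching the call above; note it ignores the image's scale and orientation):

- (UIImage *)imageByCropping:(UIImage *)image toRect:(CGRect)rect {
    CGImageRef croppedCG = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedCG];
    CGImageRelease(croppedCG);
    return cropped;
}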
