So what I am doing is creating a custom image picker, and I have a 320 x 320 AVCaptureVideoPreviewLayer that I am using, and when I take a picture, I want to get a UIImage of what is actually seen in the preview layer. But what I get when I take a photo with captureStillImageAsynchronouslyFromConnection:completionHandler: is a normal image with size 2448 x 3264. So what would be the best way to make this image into a 320 x 320 square image like what is seen in the preview layer, without messing it up? Is there a Right Way™ to do this? Also, I am using AVLayerVideoGravityResizeAspectFill for the videoGravity property of AVCaptureVideoPreviewLayer, if that is relevant.
Have you tried to transform the image?
Try using CGAffineTransformMakeScale(sx, sy) to scale the image down. Transforms can do magic! If you have ever taken linear algebra, you should recall your standard transformation matrices. I have not used this for images, though, so I am not sure how well it would work with the pixels.
You could also try:

// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];

// a scale of (originalImage.scale * 2.0) makes the image 1/2 the size
UIImage *scaledImage =
    [UIImage imageWithCGImage:[originalImage CGImage]
                        scale:(originalImage.scale * 2.0)
                  orientation:(originalImage.imageOrientation)];
where you can change the scale factor
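Since you mention AVLayerVideoGravityResizeAspectFill, the preview is showing a centered square crop of the full capture, so another route is to crop first and then scale. A minimal sketch of that idea (the method name is mine, and it assumes a plain center crop, which is what aspect-fill gives you by default):

// Center-crop the full-resolution capture to a square (roughly what an
// AVLayerVideoGravityResizeAspectFill preview shows), then scale it down
// to the preview size.
- (UIImage *)squareImageFromCapturedImage:(UIImage *)image side:(CGFloat)side {
    // Work in CGImage pixel space so the centered crop stays correct even
    // when the captured photo carries a rotated orientation.
    size_t pixelWidth  = CGImageGetWidth(image.CGImage);
    size_t pixelHeight = CGImageGetHeight(image.CGImage);
    size_t shortest    = MIN(pixelWidth, pixelHeight);
    CGRect cropRect = CGRectMake((pixelWidth  - shortest) / 2.0,
                                 (pixelHeight - shortest) / 2.0,
                                 shortest, shortest);

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);

    // Scale the square crop down to side x side points.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), NO, 0);
    [cropped drawInRect:CGRectMake(0, 0, side, side)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

Calling this with the UIImage built from the captured JPEG data and a side of 320 should give you roughly what the preview layer shows.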
I have a UIView with UIImageViews and UILabels, which I have to capture into an image and then export to the photo gallery. The image has a fixed size in pixels and must have an alpha channel, because the UIView background color is clear.
Now I use UIGraphicsBeginImageContextWithOptions with renderInContext or drawViewHierarchyInRect, then I resize the image to the given size and save it with UIImagePNGRepresentation. It works - I get a UIImage of the exact pixel size I need, with an alpha channel, saved in the gallery.
// capture the view hierarchy at screen scale
UIGraphicsBeginImageContextWithOptions(_templateView.bounds.size, NO, 0.0);
[_templateView drawViewHierarchyInRect:_templateView.bounds afterScreenUpdates:NO];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// redraw into a fixed 1080 x 1080 context
UIGraphicsBeginImageContext(CGSizeMake(1080.0f, 1080.0f));
[img drawInRect:CGRectMake(0, 0, 1080.0f, 1080.0f)];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *pngImageData = UIImagePNGRepresentation(img);
The problem is the size of the resulting image. It is way larger than expected. When I add only one UIImageView (filling the parent UIView) with an image of 1.2 MB, its capture results in 1.65 MB. This is crucial because I have a size limit for the image. How can I reduce its size? Is it possible to reduce the quality of such an image with an alpha channel?
I tried resizing it to 50% and then back to 100%, but that results in an even larger file.
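For reference, rendering straight into a 1080 x 1080, 1x context in a single pass would look something like this (a sketch of what I could try; this only controls the pixel dimensions - PNG itself is lossless, so there is no quality parameter to lower the way there is for JPEG):

CGSize targetSize = CGSizeMake(1080.0f, 1080.0f);

// Explicit scale of 1.0 so the bitmap is exactly 1080 x 1080 pixels,
// regardless of the device's screen scale.
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
[_templateView drawViewHierarchyInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)
                    afterScreenUpdates:NO];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *pngImageData = UIImagePNGRepresentation(img);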
I'm cropping UIImages with a UIBezierPath using a UIGraphics image context:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order"; I'll admit a bit of ignorance when it comes to pixel formats, but felt this MAY be relevant because I'm not sure if that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
capturing using captureStillImageAsynchronouslyFromConnection:..., then turning it into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsizing it into a thumbnail by creating a CGDataProviderRef with CGDataProviderCreateWithCFData, converting to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display, I call my own method detailed on top, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it in a UIImageView and set contentMode = UIViewContentModeScaleAspectFit.
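Concretely, the downsizing step looks roughly like this (a sketch; imageData is the JPEG NSData from above, and the 300 px maximum size is just an example value):

#import <ImageIO/ImageIO.h>

// Build a small thumbnail from the captured JPEG data via ImageIO.
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)imageData);
CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform   : @YES, // honor EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize          : @300   // example value
};
CGImageRef thumbRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumbnail = [UIImage imageWithCGImage:thumbRef];

CGImageRelease(thumbRef);
CFRelease(source);
CGDataProviderRelease(provider);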
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil], and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I would really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding / all these graphical, low-level objects.
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage, but you specify the width from thumbnailSize and the height from originalImage. This messes up the image's aspect ratio.
You need a width and a height based on the same image size. Pick one as needed to maintain the proper aspect ratio.
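For example, to fill the thumbnail's width while preserving the image's own aspect ratio (this sketch simply centers the image vertically - adjust the offset to whatever framing you actually want):

CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];

// Derive the drawn height from the same scale factor as the width,
// so the aspect ratio is preserved.
CGFloat scale = thumbnailSize.width / originalImage.size.width;
CGFloat drawnHeight = originalImage.size.height * scale;

// Center the image vertically inside the thumbnail.
CGFloat yOffset = (thumbnailSize.height - drawnHeight) / 2.0f;
[originalImage drawInRect:CGRectMake(0, yOffset, thumbnailSize.width, drawnHeight)];

UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();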
I am fetching an image from my server based on the scale, so I fetch something like:
http://myserver.com/image1.png
or http://myserver.com/image1@2x.png
However, what I see is that once I initialize an image with the contents of http://myserver.com/image1@2x.png, the scale on the UIImage says it is 1x and it gets rendered badly where I want it to be rendered: it renders at full size instead of 1/2 the size with double the pixels. How do I make this work correctly?
You can create a new UIImage with a scale factor of 2 using the code below:
UIImage * img = [UIImage imageNamed:@"myimagename"];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
To make the code device-independent, you should get the scale factor of the current device using the code below and replace the number 2 with it.
[UIScreen mainScreen].scale
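Putting the two together, a minimal sketch for an image fetched from a URL (the synchronous loading is only for illustration):

NSURL *url = [NSURL URLWithString:@"http://myserver.com/image1@2x.png"];
NSData *data = [NSData dataWithContentsOfURL:url];   // synchronous; illustration only
UIImage *raw = [UIImage imageWithData:data];         // scale is always reported as 1x here
UIImage *img = [UIImage imageWithCGImage:raw.CGImage
                                   scale:[UIScreen mainScreen].scale
                             orientation:raw.imageOrientation];

On iOS 6 and later you could also use imageWithData:scale: and skip the CGImage round-trip.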
I'm trying to emulate what Celebrity Clicks does: add a celebrity sticker to the camera feed, position it and scale it, and then take the photo. This should give you a photo with the sticker applied, which is what Celebrity Clicks does. However, I'm having trouble merging the camera photo with the sticker. There are a few issues: the sticker scale and position are wrong when applied to the final camera image, because the image taken from the camera is actually much larger, both in resolution and in size, than the picture shown on the live camera feed when you set up the sticker.
Here is what I'm doing now:
[(GPUImageStillCamera *)videoCamera capturePhotoAsImageProcessedUpToFilter:selectedFilter withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    selectedImage = [self imageByCombiningImage:processedImage withImage:celebOverlayView.imageView.image];
}];
- (UIImage*)imageByCombiningImage:(UIImage*)firstImage withImage:(UIImage*)secondImage {
    UIImage *image = nil;
    CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width),
                                     MAX(firstImage.size.height, secondImage.size.height));
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(newImageSize, NO, [[UIScreen mainScreen] scale]);
    } else {
        UIGraphicsBeginImageContext(newImageSize);
    }
    [firstImage drawInRect:cameraView.frame];
    [firstImage drawAtPoint:CGPointMake(roundf((newImageSize.width-firstImage.size.width)/2),
                                        roundf((newImageSize.height-firstImage.size.height)/2))];
    [secondImage drawAtPoint:CGPointMake(roundf((newImageSize.width-secondImage.size.width)/2),
                                         roundf((newImageSize.height-secondImage.size.height)/2))];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have attached the before and after photo, so that you can see my problem.
Camera setup screen:
Photo taken with camera with the sticker applied:
I'm guessing that there is a better way to merge the two images or to simply apply the sticker at given coordinates to the camera captured image. Any suggestions?
It is easiest if you save the relative frame of the added view in the picker and then compute the new frame when combining the two images.
One of many ways to do so is to divide all the parameters of the frame by the superview's width and height respectively when taking the photo, and then multiply the same frame coordinates by the image's width and height when merging the two images.
Also, for what you are doing, I suggest you drop Core Graphics for drawing the image content and just use image views instead: create an image view with the size of the background image and set the image, then add another image view with the added image and set its frame as described above. Then simply create a screenshot of the view and your image is done. This way you will have no issues with scaling, transforms and such.
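A rough sketch of that, reusing the names from your question (it assumes processedImage has a scale of 1 and that celebOverlayView sits directly inside cameraView):

// 1. When the photo is taken, store the sticker frame relative to the preview.
CGRect stickerFrame = celebOverlayView.frame;
CGSize previewSize  = cameraView.bounds.size;
CGRect relative = CGRectMake(stickerFrame.origin.x / previewSize.width,
                             stickerFrame.origin.y / previewSize.height,
                             stickerFrame.size.width  / previewSize.width,
                             stickerFrame.size.height / previewSize.height);

// 2. When merging, rebuild the same layout at the full photo size with image views.
UIImageView *background = [[UIImageView alloc] initWithImage:processedImage];
background.frame = CGRectMake(0, 0, processedImage.size.width, processedImage.size.height);

UIImageView *sticker = [[UIImageView alloc] initWithImage:celebOverlayView.imageView.image];
sticker.frame = CGRectMake(relative.origin.x * processedImage.size.width,
                           relative.origin.y * processedImage.size.height,
                           relative.size.width  * processedImage.size.width,
                           relative.size.height * processedImage.size.height);
[background addSubview:sticker];

// 3. "Screenshot" the composed view into a single UIImage.
UIGraphicsBeginImageContextWithOptions(background.bounds.size, NO, 1.0);
[background.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();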
I am letting the user capture an image from the camera or picking one from the library.
This image I display in a UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
Using a gestureRecognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that - with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage*) img toRect:(CGRect)rect {
CGFloat scale = [[UIScreen mainScreen] scale];
if (scale>1.0) {
rect = CGRectMake(rect.origin.x*scale , rect.origin.y*scale, rect.size.width*scale, rect.size.height*scale);
}
CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
// UIImage *result = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return result;
}
Passing in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or maybe suggestions on whether I should implement the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site that show how to zoom into a photo in code. Once the zooming is done, use a graphics context: take the frame size of the bounding view and render the image with that. E.g. a UIView contains a scroll view which holds the zoomed image. The scroll view zooms and so does your image; now take the frame size of your bounding UIView, create an image context of that size, render into it, and save the result as a new image. Tell me if that makes sense.
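A minimal sketch of that, assuming boundingView is the view whose bounds define the visible crop (the scaled/moved image view or scroll view is a subview of it, and clipsToBounds is set):

// Render exactly what is currently visible inside boundingView.
UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, [UIScreen mainScreen].scale);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();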
Cheers :)