Create a whole new image in iOS by selecting different properties

I am working on an app that lets the user capture an image from the camera or pick one from the photo library. After that, it lets them choose options such as which border color to apply, the border width, rounded or square corners, which size they want (e.g. 45×45 mm or 70×70 mm), and the text to place at the bottom of the image. The app then saves the result as a single image.
For the border color, border width, and corner style, I have created images of different colors in Photoshop.
I am stuck on how to approach this: how do I combine the border properties, the captured image, and the text into a whole new image and save it? And how do I apply the different sizes, such as 45×45 mm or 70×70 mm, to the image?

Example code for drawing one image on top of another:
- (void)drawRect:(CGRect)rect {
    UIImage *bottomImage = [[UIImage imageNamed:@"background.png"] stretchableImageWithLeftCapWidth:20 topCapHeight:0];
    UIImage *image = [UIImage imageNamed:@"logo.png"];
    CGSize newSize = CGSizeMake(rect.size.width, rect.size.height);
    // Render both images into an offscreen context at screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, [UIScreen mainScreen].scale);
    [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Draw the logo centered on top of the background.
    [image drawAtPoint:CGPointMake((int)((newSize.width - image.size.width) / 2), (int)((newSize.height - image.size.height) / 2))];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [newImage drawInRect:rect];
}
The logo always lies in the center.
For resizing and rounding images I use
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
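The original question also asked about adding text at the bottom and exporting at physical sizes such as 45×45 mm. Below is a rough sketch of one way to do that, assuming an output resolution of 300 DPI for the mm-to-pixel conversion; the method name, the DPI figure, the caption layout, and the font are all illustrative, not from the question or answer above.
// Hedged sketch, not production code. Assumes ~300 DPI output and that the
// caption strip takes the bottom 10% of the final image; adjust as needed.
- (UIImage *)composedImageFromPhoto:(UIImage *)photo
                        borderColor:(UIColor *)borderColor
                        borderWidth:(CGFloat)borderWidth
                       cornerRadius:(CGFloat)cornerRadius
                            caption:(NSString *)caption
                           sizeInMM:(CGSize)sizeInMM
{
    CGFloat const kAssumedDPI = 300.0;   // assumption: print resolution
    CGFloat const kMMPerInch = 25.4;
    CGSize pixelSize = CGSizeMake(sizeInMM.width  * kAssumedDPI / kMMPerInch,
                                  sizeInMM.height * kAssumedDPI / kMMPerInch);

    UIGraphicsBeginImageContextWithOptions(pixelSize, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // White background so the caption strip is not black.
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, pixelSize.width, pixelSize.height));

    CGFloat captionHeight = pixelSize.height * 0.1;
    CGRect photoRect = CGRectMake(0, 0, pixelSize.width, pixelSize.height - captionHeight);
    UIBezierPath *borderPath = [UIBezierPath bezierPathWithRoundedRect:photoRect cornerRadius:cornerRadius];

    // Clip to the (possibly rounded) photo area, draw the photo, then remove the clip.
    CGContextSaveGState(context);
    [borderPath addClip];
    [photo drawInRect:photoRect];
    CGContextRestoreGState(context);

    // Stroke the border over the photo's edge.
    [borderColor setStroke];
    borderPath.lineWidth = borderWidth;
    [borderPath stroke];

    // Draw the caption centered in the bottom strip (iOS 7+ string drawing).
    NSMutableParagraphStyle *style = [[NSMutableParagraphStyle alloc] init];
    style.alignment = NSTextAlignmentCenter;
    NSDictionary *attributes = @{ NSFontAttributeName : [UIFont systemFontOfSize:captionHeight * 0.5],
                                  NSForegroundColorAttributeName : [UIColor blackColor],
                                  NSParagraphStyleAttributeName : style };
    [caption drawInRect:CGRectMake(0, pixelSize.height - captionHeight, pixelSize.width, captionHeight)
         withAttributes:attributes];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Using a fixed DPI is only one option; you could instead pick the pixel size from whatever print or export target you have and keep the mm dimensions as metadata.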

Related

colorWithPatternImage returning a 1 pixel transparent border

I'm trying to create an image with a tiled pattern, but the tiled image seems to have a 1 pixel transparent border. Even when I use just a white background image I still get the transparent border.
UIImage *patternImage = [UIImage imageNamed:@"whiteimage.png"];
CALayer *layer = [[CALayer alloc] init];
layer.frame = CGRectMake(0, 0, 2048, 1536);
layer.backgroundColor = [UIColor colorWithPatternImage:patternImage].CGColor;
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 2048, 1536)];
[view setOpaque:YES];
[view.layer addSublayer:layer];
[Screenshots: the pattern image, and the tiled result showing the 1 px transparent border]
The pattern image was exported from Photoshop using bicubic resampling but I've also tried bilinear and nearest neighbour methods. Also if I use the image as a pattern in Photoshop I don't get the transparent border which leads me to think it's iOS related.
-(UIImage*) resizeImage:(UIImage*)image newSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I forgot to include my helper method in my question which scales the tile image. It was this method that was giving me the problem. When I increase the width and height by a pixel I no longer get the transparent border in the returned image.
-(UIImage*) resizeImage:(UIImage*)image newSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    // Drawing one point larger than the context avoids the transparent edge.
    [image drawInRect:CGRectMake(0, 0, newSize.width + 1, newSize.height + 1)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Low quality UIImage after clipping it to a circle

I am trying to clip a UIImage to make it circular. I am starting with a 140×140 px image and then running this code:
//round the image
UIImageView *roundView = [[UIImageView alloc] initWithImage:smallImage];
UIGraphicsBeginImageContextWithOptions(roundView.bounds.size, NO, 1.0);
[[UIBezierPath bezierPathWithRoundedRect:roundView.bounds
cornerRadius:roundView.frame.size.width/2] addClip];
[smallImage drawInRect:roundView.bounds];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
smallImage = finalImage;
//end round image
This works as desired, but the quality is very low: the image looks fuzzy and the edges around the circle are jagged. I want to achieve the same effect as:
image.layer.cornerRadius = self.thumbnailView.frame.size.width / 2;
image.clipsToBounds = YES;
Not sure why the quality of the image is so low. Can someone give me some pointers please?
You might want to keep scale at 0.f, so it matches the device scale (retina / not retina).
UIGraphicsBeginImageContextWithOptions(roundView.bounds.size, NO, 0.f);
You can also draw a circle like this:
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.f);
CGRect interiorBox = CGRectInset(rect, 0.f, 0.f);
UIBezierPath *bezierPath = [UIBezierPath bezierPathWithOvalInRect:interiorBox];
[bezierPath addClip];
[image drawInRect:rect];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You are passing the wrong scale into UIGraphicsBeginImageContextWithOptions. You are passing 1. Use UIGraphicsBeginImageContextWithOptions(roundView.bounds.size, NO, [UIScreen mainScreen].scale); instead.
Are you sure you actually need to do this to the actual image? Would it not be enough to just make the image view rounded?
You can do this very easily.
UIImageView *imageView = // your image view with the image
imageView.frame = CGRectMake(0, 0, 140, 140); // the size you want
imageView.clipsToBounds = YES;
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.layer.cornerRadius = 70; // make the corner radius half of the size
This will display your image cropped into a circle by the image view.
The image remains the same. It is just the display of the image that is rounded.
It's much less expensive too (time and memory).
Doing this in a table will not slow it down. You only need to do this once for each of the dequeued cells. For reused cells you do not need to do this as it is done when the cell is first created.
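A rough illustration of that point; the cell identifier, the thumbnail size, and the thumbnailForRow: helper are made up for the example.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *identifier = @"ThumbnailCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:identifier];
        // Rounding is configured once, when the cell is first created...
        cell.imageView.clipsToBounds = YES;
        cell.imageView.contentMode = UIViewContentModeScaleAspectFill;
        cell.imageView.layer.cornerRadius = 20.0; // half the displayed thumbnail size
    }
    // ...so reused cells only need a new image, not new layer setup.
    cell.imageView.image = [self thumbnailForRow:indexPath.row]; // hypothetical helper
    return cell;
}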

How to cut an image into irregular shapes using Objective-C

Thanks to everyone. I have got a solution for cutting an image into irregular shapes in Java, but now I want to achieve this in iOS.
Here is my requirement:
I am using touch to trace and select a particular part of an image. After completing the drawing, I want to cut out that part of the image. I am able to draw using touches, but how can I cut that particular part?
I know how to cut an image into a rectangle or a circle, but not into an arbitrary shape.
If anyone knows, please help me.
Draw a closed CGPath and turn it into an Image Mask. If I had more experience I'd give more details, but I've only done the graphs in a weather app. A guide should help.
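For what it's worth, here is a rough sketch of that idea: build a closed UIBezierPath from the user's touch points and use it as a clip when redrawing the image. The method name is made up, and the path is assumed to be closed and expressed in the image's coordinate space.
// Assumes `touchPath` is a closed UIBezierPath built from the user's touches,
// in the same coordinate space as the image.
- (UIImage *)imageByClippingImage:(UIImage *)image toPath:(UIBezierPath *)touchPath
{
    CGRect box = CGPathGetBoundingBox(touchPath.CGPath);

    UIGraphicsBeginImageContextWithOptions(box.size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Shift the context so the path's bounding box starts at the origin.
    CGContextTranslateCTM(ctx, -box.origin.x, -box.origin.y);

    // Everything drawn from here on is clipped to the traced shape.
    [touchPath addClip];
    [image drawAtPoint:CGPointZero];

    UIImage *clipped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return clipped;
}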
Below is the code for cropping an image to a selected rectangular area. You can customize your selected area with a CGPath and then crop the image.
-(IBAction) cropImage:(id) sender{
    // Create rectangle that represents a cropped image
    // from the middle of the existing image
    float xCo, yCo;
    float width = bottomCornerPoint.x - topCornerPoint.x;
    float height = bottomCornerPoint.y - topCornerPoint.y;
    if (width < 0)
        width = -width;
    if (height < 0)
        height = -height;
    if (topCornerPoint.x < bottomCornerPoint.x) {
        xCo = topCornerPoint.x;
    } else {
        xCo = bottomCornerPoint.x;
    }
    if (topCornerPoint.y < bottomCornerPoint.y) {
        yCo = topCornerPoint.y;
    } else {
        yCo = bottomCornerPoint.y;
    }
    CGRect rect = CGRectMake(xCo, yCo, width, height);
    // Create bitmap image from original image data,
    // using rectangle to specify desired crop area
    UIImage *image = [UIImage imageNamed:@"abc.png"];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    // Create and show the new image from bitmap data
    imageView = [[UIImageView alloc] initWithImage:img];
    [imageView setFrame:CGRectMake(110, 600, width, height)];
    imageView.image = img;
    [[self view] addSubview:imageView];
    [imageView release];
}
CGSize newSize = CGSizeMake(backGroundImageView.frame.size.width, backGroundImageView.frame.size.height);
UIGraphicsBeginImageContext(newSize);
// Draw the user's traced shape first...
[<Drawing Imageview> drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// ...then draw the photo with kCGBlendModeSourceIn so it only appears where the shape was drawn.
[backGroundImageView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeSourceIn alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Crop a Portion of UIImage from Larger UIImage, and include non-image parts

I think I may have an odd request; however, hopefully someone can help. I am using the well-known UIScrollView + UIImageView combination to zoom into and out of an image, as well as pan it. This works fine and dandy, but the current project needs to be able to crop the image while also including the black bars on the sides if the image is smaller than the crop rectangle. See the images below.
We wish to capture everything inside of the blue box, including the white (which will be black, since opaque is set to YES).
This works great for images that are completely zoomed out (The white is just the UIImageView's extra space).
However the problem arises when we try to zoom into the image, and capture only that portion, plus the empty space.
This results in the following image
The problem we are seeing is that we need to create an image that contains exactly what is in the crop rect, regardless of whether part of the image is there or not. The other problem is that we want the ability to dynamically change the output resolution. The aspect ratio is 16:9, and for this example kMaxWidth = 1136 and kMaxHeight = 639; however, in the future we may want to request a larger or smaller 16:9 resolution.
Below is the function I have so far:
- (UIImage *)createCroppedImageFromImage:(UIImage *)image {
    CGSize newRect = CGSizeMake(kMaxWidth, kMaxHeight);
    UIGraphicsBeginImageContextWithOptions(newRect, YES, 0.0);
    // 0 is the edge of the screen, to help with zooming
    CGFloat xDisplacement = ((abs(0 - imageView.frame.origin.x) * kMaxWidth) / (self.cropSize.width / self.scrollView.zoomScale) / self.scrollView.zoomScale);
    CGFloat yDisplacement = ((abs(self.cropImageView.frame.origin.y - imageView.frame.origin.y) * kMaxHeight) / (self.cropSize.height / self.scrollView.zoomScale) / self.scrollView.zoomScale);
    CGFloat newImageWidth = (self.image.size.width * kMaxWidth) / (self.cropSize.width / self.scrollView.zoomScale);
    CGFloat newImageHeight = (self.image.size.height * kMaxHeight) / (self.cropSize.height / self.scrollView.zoomScale);
    [image drawInRect:CGRectMake(xDisplacement, 0, newImageWidth, newImageHeight)];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Any help would be greatly appreciated.
I ended up just taking a screenshot, and cropping that. It seems to work well enough.
- (UIImage *)cropImage {
    CGRect cropRect = self.cropOverlay.cropRect;
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *fullScreenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRef croppedImage = CGImageCreateWithImageInRect(fullScreenshot.CGImage, cropRect);
    UIImage *crop = [[UIImage imageWithCGImage:croppedImage] resizedImage:self.outputSize interpolationQuality:kCGInterpolationHigh];
    CGImageRelease(croppedImage);
    return crop;
}
If you are using iOS 7, you would use drawViewHierarchyInRect:afterScreenUpdates: instead of renderInContext:.
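Roughly, the screenshot portion of the method above would then become something like this (an untested sketch; the rest of the crop code stays the same):
UIGraphicsBeginImageContext(self.view.frame.size);
// iOS 7+: snapshots the on-screen view hierarchy rather than rendering the layer tree.
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *fullScreenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();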
I think the translated rect for the image view isn't calculated properly. Since the UIImageView is a subview inside the UIScrollView, you should be able to calculate the visible rect by calling [scrollView convertRect:scrollView.bounds toView:imageView];. That will be the visible rect of your image view. All you need to do now is crop it.
-(UIImage*)cropImage:(UIImage*)img inRect:(CGRect)rect{
    CGImageRef cropped = CGImageCreateWithImageInRect(img.CGImage, rect);
    UIImage *image = [UIImage imageWithCGImage:cropped];
    CGImageRelease(cropped);
    return image;
}
Edit: Yeah... I forgot to mention that cropping via contentsRect should be done in the unit (0 to 1) coordinate space. I've modified the crop function for you so it crops the image based on all the parameters you provided: the UIImageView inside the UIScrollView and an image.
-(UIImage*)cropImage:(UIImage*)image inImageView:(UIImageView*)imageView scrollView:(UIScrollView*)scrollView{
    // get visible rect from image scrollview
    CGRect visibleRect = [scrollView convertRect:scrollView.bounds toView:imageView];
    UIImage* rCroppedImage;
    CALayer* maskLayer = [[CALayer alloc] init];
    maskLayer.contents = (id)image.CGImage;
    maskLayer.frame = CGRectMake(0, 0, visibleRect.size.width, visibleRect.size.height);
    CGRect rect = CGRectMake(visibleRect.origin.x / image.size.width,
                             visibleRect.origin.y / image.size.height,
                             visibleRect.size.width / image.size.width,
                             visibleRect.size.height / image.size.height);
    maskLayer.contentsRect = rect;
    UIGraphicsBeginImageContext(visibleRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [maskLayer renderInContext:context];
    rCroppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rCroppedImage;
}

Merge two UIImageViews with respect to content mode?

Is it possible to merge two UIImageViews with respect to their content modes? I have two UIImageViews, one set to Aspect Fit and one to Aspect Fill. When I blend both images, they combine correctly, but not with the exact content mode.
Here is the code
CGSize newSize = CGSizeMake(fx_imageView.image.size.width,fx_imageView.image.size.height);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[fx_imageView.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[broderImage.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// resultImage.image = result;
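One possible direction, as a sketch only: compute the rect each image would occupy under its view's content mode and draw into those rects instead of the full canvas. The helper below handles only Aspect Fit and Aspect Fill and assumes both image views share the same bounds; it is not from the original question.
// Returns the rect an image occupies inside `bounds` for aspect-fit or aspect-fill.
static CGRect rectForImageWithContentMode(CGSize imageSize, CGRect bounds, UIViewContentMode mode)
{
    CGFloat scaleW = bounds.size.width  / imageSize.width;
    CGFloat scaleH = bounds.size.height / imageSize.height;
    CGFloat scale = (mode == UIViewContentModeScaleAspectFill) ? MAX(scaleW, scaleH) : MIN(scaleW, scaleH);
    CGSize scaled = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    return CGRectMake(CGRectGetMidX(bounds) - scaled.width / 2.0,
                      CGRectGetMidY(bounds) - scaled.height / 2.0,
                      scaled.width, scaled.height);
}

// Inside your merge method: draw each image in the rect its content mode implies.
CGRect bounds = fx_imageView.bounds;
UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0.0);
[fx_imageView.image drawInRect:rectForImageWithContentMode(fx_imageView.image.size, bounds, fx_imageView.contentMode)];
[broderImage.image drawInRect:rectForImageWithContentMode(broderImage.image.size, bounds, broderImage.contentMode)
                    blendMode:kCGBlendModeNormal
                        alpha:1.0];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();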
