How can I get a circular image from a UIImage and display it in an image view? The image should look like the ones shown in the iOS 7 call history.
- (UIImage *)getRoundedRectImageFromImage:(UIImage *)image
                          onReferenceView:(UIImageView *)imageView
                         withCornerRadius:(float)cornerRadius
{
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);
    [[UIBezierPath bezierPathWithRoundedRect:imageView.bounds
                                cornerRadius:cornerRadius] addClip];
    [image drawInRect:imageView.bounds];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
And call the method like this:
imageView.image = [self getRoundedRectImageFromImage:image
                                     onReferenceView:imageView
                                    withCornerRadius:imageView.frame.size.width / 2];
imageView.clipsToBounds = YES;
imageView.layer.masksToBounds = YES;
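If you only need the rounded look on screen rather than a separate UIImage, a lighter-weight alternative (a minimal sketch, assuming the image view is square and already laid out) is to clip the image view's layer directly:
// No new image is created; the layer itself is clipped to a circle.
imageView.image = image;
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.layer.cornerRadius = imageView.bounds.size.width / 2.0;
imageView.layer.masksToBounds = YES;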
Is there any way to capture a UIView into a UIImage when the view is not currently visible or is outside the frame? Below is the code I am using now:
UIGraphicsBeginImageContextWithOptions(_myView.bounds.size, _myView.opaque, 0.0f);
[_myView drawViewHierarchyInRect:_myView.bounds afterScreenUpdates:NO];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this:
- (UIImage *)imageFromView:(UIView *)myView {
    UIGraphicsBeginImageContextWithOptions(myView.bounds.size, myView.opaque, [UIScreen mainScreen].scale);
    [myView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
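renderInContext: draws the layer tree directly, so it also works for views that are not currently on screen, whereas drawViewHierarchyInRect: generally needs the view to have been rendered in a window. A call for the view from the question would look like:
UIImage *snapshotImage = [self imageFromView:_myView];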
I have tried two different ways to create a screenshot, but unfortunately neither works as I need: the RMMapView is blank in the screenshot. When I take a screenshot manually on my device it works perfectly and the map view is on the screen, so I would like to achieve the same result programmatically. Is it possible with one of the approaches below, or what is the right way to do that?
- (UIImage *)takeScreenshot {
    // Version 1
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;

    // Version 2 (unreachable as written; shown for comparison)
    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *image2 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image2;
}
Actually, you can use a method called takeSnapshot on RMMapView:
UIImage *image = [self.mapView takeSnapshot];
Update: can you try this instead?
CGSize size = self.view.bounds.size;
CGRect cropRect = self.mapView.bounds;
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil);
Good luck
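Note that UIGraphicsBeginImageContext renders at a scale of 1.0, so the snapshot will be non-Retina, and CGImageCreateWithImageInRect works in pixels rather than points. A minimal Retina-aware sketch, assuming self.mapView is a direct subview of self.view:
CGFloat screenScale = [UIScreen mainScreen].scale;
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, screenScale);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fullImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Use frame (not bounds) so the crop rect is in self.view's coordinate space,
// and convert from points to pixels before cropping the CGImage.
CGRect cropRect = self.mapView.frame;
CGRect pixelRect = CGRectMake(cropRect.origin.x * fullImage.scale,
                              cropRect.origin.y * fullImage.scale,
                              cropRect.size.width * fullImage.scale,
                              cropRect.size.height * fullImage.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect(fullImage.CGImage, pixelRect);
UIImage *cropImage = [UIImage imageWithCGImage:imageRef
                                         scale:fullImage.scale
                                   orientation:UIImageOrientationUp];
CGImageRelease(imageRef);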
I'm currently coloring an existing image using a mask. For example, I have a white image with a black border and a circular mask (like the first two images). From these I can create a third image with a color (e.g. green), which ends up green in the center of the original image (because that is where the mask is present).
The code I'm using is this (suggestions welcome):
- (UIImage *)paintWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *image = self;
    UIImage *rotatedMask = [self rotateImage:mask]; // For some reason this is needed.
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    [image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeSourceIn);
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextClipToMask(context, rect, [rotatedMask CGImage]);
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
What I need to do now is paint the green circle using only the mask (without the black border obviously), like this:
Any ideas? Thanks a lot!!
There is a much easier way of doing this without CoreGraphics. Simply do the following:
- (UIImageView *)imageViewWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *tempImage = mask;
    tempImage = [tempImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [tempImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    tempImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageView *iv = [[UIImageView alloc] initWithImage:tempImage];
    iv.tintColor = color;
    return iv;
}
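For example, to get the solid green circle from the question (the asset name here is just a placeholder):
UIImage *mask = [UIImage imageNamed:@"circle_mask"];
UIImageView *circleView = [self imageViewWithMask:mask
                                            color:[UIColor greenColor]
                                          andSize:CGSizeMake(100.0, 100.0)];
[self.view addSubview:circleView];
Note that imageWithRenderingMode: and tintColor require iOS 7 or later.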
I am trying to merge a rotated image, but it is not working with the code below. Please suggest a solution.
- (void)mergeImage:(UIImage *)imageA withImage:(UIImage *)imageB
{
    UIImage *image0 = imageA;
    UIImage *image1 = overlayimg.image;
    CGSize newImageSize = CGSizeMake(overlayimg.frame.size.width, overlayimg.frame.size.height);
    UIGraphicsBeginImageContext(newImageSize);
    [image0 drawInRect:CGRectMake(overlayimg.frame.origin.x, overlayimg.frame.origin.y, newImageSize.width, newImageSize.height)];
    NSLog(@"Last Rotation: %f", templastrotate);
    overlayimg.transform = CGAffineTransformMakeRotation(templastrotate);
    CGContextConcatCTM(UIGraphicsGetCurrentContext(), overlayimg.transform);
    [image1 drawInRect:CGRectMake(overlayView.frame.origin.x, overlayView.frame.origin.y, overlayView.frame.size.width, overlayView.frame.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    imageView.contentMode = UIViewContentModeScaleAspectFill & UIViewContentModeScaleAspectFit;
    imageView.image = newImage;
    UIGraphicsEndImageContext();
}
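A sketch of one common way to draw a rotated overlay into a base image is to rotate the graphics context around the overlay's center before drawing the second image. The method name and parameters below are illustrative, not taken from the original code:
- (UIImage *)imageByMergingImage:(UIImage *)baseImage
                     withOverlay:(UIImage *)overlayImage
                          inRect:(CGRect)overlayRect
                        rotation:(CGFloat)angle
                      canvasSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw the base image to fill the canvas.
    [baseImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // Rotate the context around the overlay's center, draw the overlay, then restore.
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, CGRectGetMidX(overlayRect), CGRectGetMidY(overlayRect));
    CGContextRotateCTM(context, angle);
    [overlayImage drawInRect:CGRectMake(-overlayRect.size.width / 2.0,
                                        -overlayRect.size.height / 2.0,
                                        overlayRect.size.width,
                                        overlayRect.size.height)];
    CGContextRestoreGState(context);

    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}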
I am trying to combine two images into one. This is the code I am using:
UIImage *image1 = [UIImage imageNamed:@"1.tif"];
UIImage *image2 = [UIImage imageNamed:@"sign.tif"];
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image2.size.width, image2.size.height)];
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView1.image = combinedImage;
It works perfectly in the simulator but does not work on the device; I get a white screen. Can anyone please help me with this?
Any help will be appreciated.
Try to use this function:
- (UIImage *)mergeImage:(UIImage *)imageA
              withImage:(UIImage *)imageB
               strength:(float)strength {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake([imageA size].width, [imageA size].height), NO, 0.0);
    [imageA drawAtPoint:CGPointMake(0, 0)];
    [imageB drawAtPoint:CGPointMake(0, 0)
              blendMode:kCGBlendModeNormal // you can play with this
                  alpha:strength];         // 0 - 1
    UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergedImage;
}
And call it like this:
UIImage *image1 = [UIImage imageNamed:@"1.tif"];
UIImage *image2 = [UIImage imageNamed:@"sign.tif"];
UIImage *mergedImage = [self mergeImage:image1 withImage:image2 strength:1];
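Then assign the result to the image view as in the original code:
imageView1.image = mergedImage;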