Render a full UIImageView to a bitmap image - ios

I know how to render a normal UIView to a bitmap image:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *bitmapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem is that if view is a UIImageView with a stretched resizable image returned from resizableImageWithCapInsets:, bitmapImage gotten this way is the original image — the un-stretched one — instead of the really displayed stretched image. I can easily tell that by the size difference between bitmapImage and view. So my question is how to render the full content displayed by a UIImageView whose image is a stretched resizable image?

So, I tried this piece of code:
UIEdgeInsets edgeInsets = UIEdgeInsetsMake(-20, -20, -20, -20);
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
imageView.image = [[UIImage imageNamed:@"number1.png"] resizableImageWithCapInsets:edgeInsets];
[self.view addSubview:imageView];
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *bmImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSArray *documentsPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDir = [documentsPaths objectAtIndex:0];
[UIImagePNGRepresentation(bmImage) writeToFile:[documentsDir stringByAppendingPathComponent:@"num1.png"] atomically:YES];
The PNG saved to disk was the stretched image as displayed on screen (matching the image view's bounds), not the small un-stretched source asset. So it seems like the code works as it is supposed to.

I'm an idiot! The code works perfectly. I just messed something up somewhere else. The stupid thing is that I put the code in a method of my UIView category but named the method image. It works fine for plain UIViews, but since UIImageView is a subclass of UIView and already has its own image method, UIImageView's image method was called instead of mine when I tried to get my bitmap image.
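For reference, here is a minimal sketch of such a category with a non-conflicting method name (snapshotImage is a hypothetical name of my own choosing):
@interface UIView (Snapshot)
// Not named image, so UIImageView's image property cannot shadow it.
- (UIImage *)snapshotImage;
@end

@implementation UIView (Snapshot)
- (UIImage *)snapshotImage
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}
@end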

Related

Combining Multiple UIImages and UILabels into 1 Image

Basically I have a main UIImage, which acts as a background/border. Within that UIImage I have 2 more UIImages, vertically split with a gap around them so you can still see a border of the main background UIImage. On each side I have a UILabel, to describe the images. Below is a picture of what I mean to help put into context.
What I want to achieve is to make this into 1 image, but keeping all of the current positions, layouts, image layouts (Aspect Fill) and label sizes and label background colours the same. I also want this image to be the same quality so it still looks good.
I have looked at many other Stack Overflow questions and have so far come up with the following, but it has these problems:
Doesn't position the image labels to their correct places and sizes
Doesn't have the background colour for the labels or main image
Doesn't have the images as Aspect Fill (like the UIImageViews) so the outside of each picture is shown as well and isn't cropped properly, like in the above example.
Below is my code so far, can anyone help me achieve it like the image above please? I am fairly new to iOS development and am struggling a bit:
-(UIImage *)renderImagesForSharing{
CGSize newImageSize = CGSizeMake(640, 640);
NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0);
[self.mainImage.layer renderInContext:UIGraphicsGetCurrentContext()];
[self.beforeImageSide.image drawInRect:CGRectMake(0,0,(newImageSize.width/2),newImageSize.height)];
[self.afterImageSize.image drawInRect:CGRectMake(320,0,(newImageSize.width/2),newImageSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
[self.beforeLabel drawTextInRect:CGRectMake(60.0f, 0.0f, 200.0f, 50.0f)];
[self.afterLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9);
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG representation
UIGraphicsEndImageContext();
return imagePNG;
}
Thank you in advance for any help guys!
I don't understand why you want to use drawInRect: to accomplish this task.
Since you have the images and everything with you, you can easily create a view as you have shown in the image. Then take a screenshot of it like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
// note: on iOS you need a full writable path (e.g. in the Documents directory), not a bare file name
[theImageData writeToFile:@"example.jpeg" atomically:YES];
Change self.view to the view you just created.
This should give you some idea:
UIGraphicsBeginImageContextWithOptions(DiagnosisView.bounds.size, DiagnosisView.opaque, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor redColor] set];
CGContextFillRect(ctx, DiagnosisView.bounds);
[DiagnosisView.layer renderInContext:ctx];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSString *imagePath = [KdiagnosisFolderPath stringByAppendingPathComponent:FileName];
NSData *pngData = UIImagePNGRepresentation(img);
[pngData writeToFile:imagePath atomically:YES];
pngData = nil; imagePath = nil;

iOS - Make UIImage inside Button Round

Everybody loves them some round profile images these days. So I guess I do too. With that said...
I have a UIImage set as the background image of a UIButton. What I need to do is make the profile picture completely rounded. Here's my code ... can anyone point me in the right direction?
NSString *photoUrl = [NSString stringWithFormat:@"%@%@", URL_IMAGE_BASE, photoFileName];
NSURL *myurl2 = [NSURL URLWithString: photoUrl];
UIImage *image = [UIImage imageWithData: [NSData dataWithContentsOfURL:myurl2]];
[_goToProfileEditButton setImage:image forState:UIControlStateNormal];
[_goToProfileEditButton setBackgroundImage:image forState:UIControlStateNormal];
Make the button a perfect square (same width/height) and use the cornerRadius property of the button layer and set it to be exactly half its width/height. You need to import the QuartzCore header to be able to access the layer property.
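A minimal sketch of that approach, reusing the button from the question (the square size itself is an assumption; just keep width and height equal):
#import <QuartzCore/QuartzCore.h>

// Assumes _goToProfileEditButton is a square button, e.g. 80x80 points.
_goToProfileEditButton.layer.cornerRadius = _goToProfileEditButton.bounds.size.width / 2.0f;
_goToProfileEditButton.layer.masksToBounds = YES; // clip the image to the rounded corners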
What you could do is subclass UIView and override the drawRect: method like so:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, rect);
CGContextAddEllipseInRect(context, rect);
CGContextSetFillColorWithColor(context, [UIColor clearColor].CGColor);
CGContextFillPath(context);
}
Set the view's clipsToBounds property to YES and add your image view as a subview of your subclassed UIView. Otherwise you could look into masking with an image via CGImageMaskCreate.
bolivia's answer is the best if you are happy to mask the button itself.
If you want to create a circle image from a square, then you can do that with a clipping mask. It should be something like as follows:
// start with the square image, from a file, the network, or wherever...
UIImage *squareImage = [UIImage imageNamed:@"squareImage.png"];
CGRect imageRect = CGRectMake(0, 0, squareImage.size.width, squareImage.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size,NO,0.0);
UIBezierPath * path = [UIBezierPath bezierPathWithRoundedRect:imageRect cornerRadius:imageRect.size.width/2.0f];
[path addClip];
[squareImage drawInRect:imageRect];
UIImage *circleImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// now circleImage contains the circle in the center of squareImage
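You can then use the rounded image on the button from the question, for example:
[_goToProfileEditButton setImage:circleImage forState:UIControlStateNormal];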

merge 2 images into one just like instagram

I am trying to figure out how to get a single image out of 2 image views, like Instagram.
These are the 2 images. Thanks in advance.
UIImageView *photoImageView = [[UIImageView alloc] initWithFrame:CGRectMake(20.0f, 42.0f, 280.0f, 280.0f)];
[photoImageView setBackgroundColor:[UIColor blackColor]];
[photoImageView setImage:self.image];
[photoImageView setContentMode:UIViewContentModeScaleAspectFit];
//Add overlay
UIImage *overlayGraphic = [UIImage imageNamed:@"chiu"];
UIImageView *overlayGraphicView = [[UIImageView alloc] initWithImage:overlayGraphic];
overlayGraphicView.frame = CGRectMake(30, 100, 260, 200);
[photoImageView addSubview:overlayGraphicView];
I'm not sure you can do this directly with just a UIImageView control. I think you're going to have to get into low-level drawing routines to get this done.
Option 1 (not recommended, just trying to answer the original question):
Have you tried placing the overlay UIImageView on top of the "main" UIImageView and setting its opacity to something less than 1 (say 0.4)? It's a crude hack, but it might get you somewhere.
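A minimal sketch of that idea, reusing the view names from the question:
// Option 1 sketch: make the overlay semi-transparent and stack it on the photo view.
overlayGraphicView.alpha = 0.4f;
[photoImageView addSubview:overlayGraphicView];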
Option 2 (probably the better path to travel):
Create an image context and then draw your "base" and "overlay" images on it. Then you'll have a UIImage you can output, and you will only need 1 UIImageView. Something like this (NOTE: this is a basic outline; you will need to put in a lot of work to get exactly what you want out of it!):
UIImage *baseImage = [UIImage imageNamed:@"base"];
UIImage *overlayImage = [UIImage imageNamed:@"overlay"];
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
CGContextDrawImage(context, rect, baseImage.CGImage);
CGContextDrawImage(context, rect, overlayImage.CGImage);
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
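One thing to watch with the outline above: CGContextDrawImage uses Core Graphics' flipped (bottom-left origin) coordinate system, so the images can come out upside down. A variant using UIImage's drawInRect:, which works in UIKit's top-left coordinate system, avoids that; this sketch continues from the variables defined above:
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, 0);
[baseImage drawInRect:rect];
[overlayImage drawInRect:rect]; // use a smaller rect here to position/scale the overlay
UIImage *combinedUpright = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();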

How to tile a UIImage vertically while only streching horizontally on iOS?

I have an image that can be stretched in the horizontal direction, but need to be tiled vertically (one on top of the other) to fill my view. How could I do this?
I know that an image can be made resizable by using the -(UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets method. So that works great for the horizontal direction but that method cannot be used for the vertical direction it really needs to be tiled, and not stretched in the vertical direction to work.
Now my idea of how this could work is to run a loop that creates a UIImageView with the horizontally stretched image, adjusts its frame.origin.y property, and adds it as a subview on each pass until I've gone past the height of the view. However, this seems like an overly complex way of doing it, and really not practical when the view needs to be resized (on an iPad rotation, for example).
Is there a simpler more efficient way of doing this on iOS 6.x?
I used this to tile an image horizontally in a UIImageView:
UIImage *image = [UIImage imageNamed:@"my_image"];
UIImage *tiledImage = [image resizableImageWithCapInsets:UIEdgeInsetsMake(0, 0, 0, 0) resizingMode:UIImageResizingModeTile];
UIImageView *imageView = [[UIImageView alloc] initWithImage:tiledImage];
This assumes you set the frame of the UIImageView to a larger size than your image asset. If the height is taller than your image, it will also tile vertically.
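For example, continuing from the snippet above (the frame values are just placeholders):
// Give the image view a frame larger than the asset; the tiled image then
// repeats to fill it both horizontally and vertically.
imageView.frame = CGRectMake(0, 0, 320, 480);
[self.view addSubview:imageView];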
Have you considered using UIColor's method colorWithPatternImage: to create a repeating pattern and just pass an image with the correct horizontal width?
Code Example:
// On your desired View Controller
- (void)viewDidLoad
{
[super viewDidLoad];
UIImage *patternImage = [UIImage imageNamed:@"repeating_pattern"];
self.view.backgroundColor = [UIColor colorWithPatternImage:patternImage];
// ... do your other stuff here...
}
UIImage *image = [UIImage imageNamed:@"test.png"];
UIImage *resizableImage = [image resizableImageWithCapInsets:UIEdgeInsetsMake(0, image.size.width / 2, 0, image.size.width / 2)];
So I've figured out a solution for this. It is, I think, a little hacky, but it works.
Basically what I've done was to create my stretchable image with the left and right caps specified. And then I initialize a UIImageView with it. I then adjust the frame of the image view to the desired width. This will appropriately resize the image that is contained within it.
Then finally I used a piece of code I found that creates a new image by vertically tiling the adjusted image that is now contained in the image view. Notice how instead of accessing the original UIImage I am using the UIImageView view's image property.
The last thing is to set the new UIImage as the patterned background colour of the view.
Here is the code:
// create a stretchable image
UIImage *image = [[UIImage imageNamed:@"progressBackgroundTop@2x.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 60, 0, 60)];
// create the image view that will contain it
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGRect frame = imageView.frame;
frame.size.width = self.view.bounds.size.width;
imageView.frame = frame;
// create the tiled image
CGSize imageViewSize = imageView.bounds.size;
UIGraphicsBeginImageContext(imageViewSize);
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawTiledImage(imageContext, (CGRect){ CGPointZero, imageViewSize }, imageView.image.CGImage);
UIImage *finishedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.view.backgroundColor = [UIColor colorWithPatternImage:finishedImage];

Create UIImage from 2 UIImages and label

I've got one big UIImage. Over this UIImage I've got one more, which is actually a mask. And one more - I've got a UILabel over this mask, which is the text for the picture.
I want to combine all this parts in one UIImage to save it to Camera Roll!
How should I do it?
UPD: How should I add a UITextView?
I found:
[[myTextView layer] renderInContext:UIGraphicsGetCurrentContext()];
But this method doesn't place myTextView in the right place.
Create two UIImage objects and one UILabel object, then use the drawInRect: method:
//create image 1
UIImage *img1 = [UIImage imageNamed:@"image1.png"];
//create image 2
UIImage *img2 = [UIImage imageNamed:@"image2.png"];
// create label
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 50,50 )];
// set your label text
[label setText:@"Hello"];
// use UIGraphicsBeginImageContext() to draw them on top of each other
//start drawing
UIGraphicsBeginImageContext(img1.size);
//draw image1
[img1 drawInRect:CGRectMake(0, 0, img1.size.width, img1.size.height)];
//draw image2
[img2 drawInRect:CGRectMake((img1.size.width - img2.size.width) /2, (img1.size.height- img2.size.height)/2, img2.size.width, img2.size.height)];
//draw label
[label drawTextInRect:CGRectMake((img1.size.width - label.frame.size.width)/2, (img1.size.height - label.frame.size.height)/2, label.frame.size.width, label.frame.size.height)];
//get the final image
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The resultImage, which is a UIImage, contains all of your images and labels as one image. After that you can save it wherever you want.
Hope it helps...
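Since the question mentions saving to the Camera Roll, one way to do that with the result is a sketch like this (the nil arguments simply skip the completion callback):
// Save the combined image to the user's Saved Photos album.
UIImageWriteToSavedPhotosAlbum(resultImage, nil, nil, nil);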
