Take Screenshot of UIImageView and UILabel then Share it - iOS

I think I'm close to solving the problem. I have a UITableView with a prototype cell which looks like this:
The gray-scale background image you see comes from the server, and the text is an overlay UILabel on top of it whose content, also from the server, changes with every image. So it's like two layers that I'm trying to send. The third element, the red button, is the share button; say I choose to share this image by email, then this function will be called:
-(UIImage *)makeImageofCell:(NSUInteger)sender
{
    HomeCell *cellObj;
    cellObj.tag = sender;
    UIGraphicsBeginImageContext(cellObj.homeImg.frame.size);
    [cellObj.homeImg.layer renderInContext:UIGraphicsGetCurrentContext()];
    [cellObj.overlayText.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
In -(void)shareButtonClicked:(UIButton*)sender I've written the sharing code. Now in this function I want to send that image. Here is what I'm doing: UIImage *imageToShare = [self makeImageofCell:sender.tag]; which calls the makeImageofCell function and assigns its return value to a UIImage. Then I write this to share it:
NSString *imageToShare = [NSString stringWithFormat:@"Hey! Check out this Image %@", imageToShare];
But all I'm getting is Image is: (null). Any idea how this can be achieved?

Please use the following code, which takes a screenshot of the UIWindow. Just replace the window with your custom view or cell.
- (UIImage*)takeScreenshot
{
    // get the key-window reference
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    // get the bounds of the key window
    CGRect rect = [keyWindow bounds];
    // create a context using that size
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    // get the context reference
    CGContextRef context = UIGraphicsGetCurrentContext();
    // render the window's layer into the context
    [keyWindow.layer renderInContext:context];
    // get the rendered image
    UIImage *capturedScreen = UIGraphicsGetImageFromCurrentImageContext();
    // close the context
    UIGraphicsEndImageContext();
    // return the image
    return capturedScreen;
}
Edited
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // ... dequeue and configure your cell as usual ...
    UIButton *btnShareImage = (UIButton *)[cell viewWithTag:KTAGABSENTBUTTON];
    [btnShareImage addTarget:self action:@selector(shareImageAction:) forControlEvents:UIControlEventTouchUpInside];
    // stash the index path in the unused disabled-state title
    [btnShareImage setTitle:[NSString stringWithFormat:@"%li %li", (long)indexPath.section, (long)indexPath.row]
                   forState:UIControlStateDisabled];
    return cell;
}
-(void)shareImageAction:(UIButton*)sender {
    NSString *str = [sender titleForState:UIControlStateDisabled];
    NSArray *ar = [str componentsSeparatedByString:@" "];
    NSIndexPath *indexPath = [NSIndexPath indexPathForRow:[[ar objectAtIndex:1] intValue]
                                                inSection:[[ar objectAtIndex:0] intValue]];
    HomeCell *cellObj = (HomeCell *)[yourTableView cellForRowAtIndexPath:indexPath];
    UIImage *imageToShare = [self makeImageofCell:cellObj];
    // share imageToShare here (email, UIActivityViewController, etc.)
}
-(UIImage *)makeImageofCell:(HomeCell*)cellObj
{
    UIGraphicsBeginImageContext(cellObj.homeImg.frame.size);
    [cellObj.homeImg.layer renderInContext:UIGraphicsGetCurrentContext()];
    [cellObj.overlayText.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
Hope this helps.
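Since the end goal was sharing the image by email, here is a minimal sketch (an assumption, not part of the original answer) of handing the rendered image to the system share sheet via UIActivityViewController (iOS 6+), e.g. at the end of shareImageAction:; the message string is just an example:
UIImage *imageToShare = [self makeImageofCell:cellObj];
// the share sheet offers Mail among other services
NSArray *activityItems = @[@"Hey! Check out this image", imageToShare];
UIActivityViewController *activityVC = [[UIActivityViewController alloc] initWithActivityItems:activityItems
                                                                          applicationActivities:nil];
[self presentViewController:activityVC animated:YES completion:nil];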

Related

Where is the first good moment to take a screenshot in UIViewController

I would like to take a screenshot of my view when the view is loaded. I use this method:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0f);
    if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    } else {
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
The screenshot method was tested and works well. I am using it in my UIViewController's viewDidLoad. The problem is that the result is a grey image! This means the view is not yet rendered in viewDidLoad. The same issue can be observed in viewDidAppear. So viewDidLoad and viewDidAppear are too early. I have made a workaround in viewDidLoad:
__weak MyUIViewController *weakSelf = self;
dispatch_async(dispatch_get_main_queue(), ^{
    UIImage *image = [weakSelf.view screenshot];
    // do sth with image...
});
How can I take the screenshot without dispatch_async?
Try This:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, [UIScreen mainScreen].scale);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The UIView-JTViewToImage project is also useful.
You could also try the snapshotViewAfterScreenUpdates method of UIView:
[self.view snapshotViewAfterScreenUpdates:YES];
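Note that snapshotViewAfterScreenUpdates: returns a UIView rather than a UIImage, so it is suited to on-screen effects (transitions, dimming) rather than saving or sharing; a small usage sketch:
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:YES];
// overlay the frozen snapshot on top of the live hierarchy
[self.view addSubview:snapshot];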
Try this
New APIs have been added since iOS 7 that provide an efficient way of getting a snapshot:
- snapshotViewAfterScreenUpdates: renders the view into a UIView with unmodifiable content
- resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: the same, but with resizable cap insets
- drawViewHierarchyInRect:afterScreenUpdates: the same if you need all subviews to be drawn too (like labels, buttons...)
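Of the three, only drawViewHierarchyInRect:afterScreenUpdates: draws pixels you can turn into a UIImage; a minimal sketch, assuming it runs inside a view controller:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();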
Hope it helps.

How can I programmatically put together some UIImages to have one big UIImage?

This is my code:
- (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollView
{
    // at this point the webView scrolled to the next section
    // I save the offset to make the code a little easier to read
    CGFloat offset = _webPage.scrollView.contentOffset.y;
    UIGraphicsBeginImageContextWithOptions(_webPage.bounds.size, NO, 0);
    [_webPage.layer renderInContext:UIGraphicsGetCurrentContext()];
    viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
    // if we are not done yet, scroll to next section
    if (offset < _webPage.scrollView.contentSize.height)
    {
        [_webPage.scrollView scrollRectToVisible:CGRectMake(0, _webPage.frame.size.height + offset, _webPage.frame.size.width, _webPage.frame.size.height) animated:YES];
    }
}
Here I save an undefined number of screenshots (UIImages) by scrolling the web view. This works; I have all the parts of the web page in my photo gallery.
But I don't want parts, I want ONE long UIImage. So how do I stitch my UIImages together, one by one?
You can write a UIImage category to do that
UIImage+Combine.h
#import <UIKit/UIKit.h>

@interface UIImage (Combine)
+ (UIImage*)imageByCombiningImage:(UIImage*)firstImage withImage:(UIImage*)secondImage;
@end
UIImage+Combine.m
#import "UIImage+Combine.h"
#implementation UIImage (Combine)
+ (UIImage*)imageByCombiningImage:(UIImage*)firstImage withImage:(UIImage*)secondImage {
UIImage *image = nil;
CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width), firstImage.size.height + secondImage.size.height);
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, [[UIScreen mainScreen] scale]);
} else {
UIGraphicsBeginImageContext(newImageSize);
}
[firstImage drawAtPoint:CGPointMake(roundf((newImageSize.width-firstImage.size.width)/2), 0)];
[secondImage drawAtPoint:CGPointMake(roundf(((newImageSize.width-secondImage.size.width)/2) ),
roundf((newImageSize.height-secondImage.size.height)))];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
and then you can call the function in your code with:
UIImage *img = [UIImage imageByCombiningImage:image1 withImage:image2];
This will draw a new image that has the width of the wider of the two images and the height of both images combined. image1 will be at the top and image2 below it.
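To stitch the whole page together, you could fold the captured sections one by one; a hedged sketch, where capturedParts is an assumed NSArray holding the screenshots in top-to-bottom order:
UIImage *fullPage = nil;
for (UIImage *part in capturedParts) {
    // append each new section below what has been stitched so far
    fullPage = fullPage ? [UIImage imageByCombiningImage:fullPage withImage:part] : part;
}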

pass image with textField to another viewController

I have two view controllers. On the first I have an image view with a text field inside it, and the second view controller has an image view. The first view controller has a done button; when it is tapped, I want the image view together with the text field to be passed to the second view controller's image view.
Is there any way to do this?
Please suggest.
- (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    // Create a new context the size of the frame
    CGSize targetImageSize = CGSizeMake(frame.size.width, frame.size.height);
    // Check for retina image rendering option
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(targetImageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(targetImageSize);
    CGContextRef context2 = UIGraphicsGetCurrentContext();
    // The view to be rendered
    [[view layer] renderInContext:context2];
    // Get the rendered image
    UIImage *original_image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return original_image;
}
Try this function to render the image, and the one below to merge two images:
- (UIImage *)mergeImage:(UIImage *)img
{
    CGSize offScreenSize = CGSizeMake(206, 432);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(offScreenSize, NO, 0);
    else
        UIGraphicsBeginImageContext(offScreenSize);
    CGRect rect = CGRectMake(0, 0, 206, 432);
    [imgView.image drawInRect:rect];
    [img drawInRect:rect];
    UIImage *imagez = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imagez;
}
You can change the coordinates, height, and width according to your requirements.
Example:
This method is declared in ClassB.h:
- (void)setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel;
It is implemented in ClassB.m:
- (void)setProperties:(UIImage *)myImage MyLabel:(UILabel *)myLabel
{
    // assign myImage and myLabel here
}
Then in ClassA
ClassB *classB = [[ClassB alloc] init];
[classB setProperties:myImage MyLabel:myLabel];
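Putting the pieces together, the done-button handler in the first view controller might look like the following sketch; it assumes ClassB is the second view controller, and self.imageView / self.myLabel are hypothetical outlets for the image view and its label:
- (IBAction)doneTapped:(id)sender
{
    // flatten the image view (with the text on top) into a single UIImage
    UIImage *rendered = [self renderImageFromView:self.imageView withRect:self.imageView.bounds];
    ClassB *classB = [[ClassB alloc] init];
    [classB setProperties:rendered MyLabel:self.myLabel];
    [self.navigationController pushViewController:classB animated:YES];
}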

UIImagePickerControllerEditedImage gets nil

Hey guys I'm doing some image editing with UIImagePickerController. Here is some code in imagePickerController:didFinishPickingMediaWithInfo:
UIImage *editedImg = [info objectForKey:UIImagePickerControllerEditedImage];
UIImageView *imgView = [[UIImageView alloc] initWithImage:editedImg];
CGRect imgFrm = imgView.frame;
float rate = imgFrm.size.height / imgFrm.size.width;
imgFrm.size.width = size;
imgFrm.size.height = size * rate;
imgFrm.origin.x = 0;
imgFrm.origin.y = (size - imgFrm.size.height) / 2;
[imgView setFrame:imgFrm];
UIView *cropView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size, size)];
[cropView setBackgroundColor:[UIColor blackColor]];
[cropView addSubview:imgView];
UIImage *croppedImg = [MyUtil createUIImageFromUIView:cropView];
The above places the image in a size*size view and draws an image from that view when the height of the image returned by the picker is smaller than size.
Here is the code of createUIImageFromUIView:(UIView*)view :
+ (UIImage *)createUIImageFromUIView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 2.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:ctx];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
My problem is: when debugging, editedImg (defined in the first line) just shows nil. But the following code works well: I get the cropView (which also shows nil) correctly, get the cropped image, and can encode it to a base64 string to send to the server side. I just want to know why editedImg shows nil (returned by [info objectForKey:UIImagePickerControllerEditedImage]) when printing info in debug mode shows a non-nil value in the console.
If editedImg is nil, try:
UIImage *editedImg = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
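As a defensive sketch, you can fall back to the original image only when no edited one is present; also make sure the picker's allowsEditing property is YES, otherwise the edited key is never populated:
UIImage *img = info[UIImagePickerControllerEditedImage] ?: info[UIImagePickerControllerOriginalImage];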
Get file size:
- (long long)fileSizeAtPath:(NSString *)filePath {
    NSFileManager *manager = [NSFileManager defaultManager];
    if ([manager fileExistsAtPath:filePath]) {
        return [[manager attributesOfItemAtPath:filePath error:nil] fileSize];
    }
    return 0;
}
Best wishes!
After some searching I accidentally found this: string value always shows nil in objective-c.
This is the reason why I always see 'nil' in debug mode while the code works well.
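In other words, the debugger's variable summary can be misleading here; a quick sanity check is to log the value explicitly instead of trusting the display:
NSLog(@"editedImg: %@", editedImg);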
You can get your cropped image size by
UIImage *croppedImg = [MyUtil createUIImageFromUIView:cropView];
NSData *dataForImage = UIImagePNGRepresentation(croppedImg);
Now you can check the length:
if (dataForImage.length)
{
    // non-zero length means the cropped image rendered successfully
}

Fast blurring for UITableViewCell contentView Background

I have made a UIViewController which conforms to the UITableViewDataSource and UITableViewDelegate protocols and has a UITableView as its subview.
I have set the backgroundView property of the table to be a UIImageView in order to display an image as the background of the table.
In order to have custom spacings between the cells I made the row height larger than I wanted and customised the cell's contentView to be the size I wanted, making it look like there is extra space (Following this SO answer).
I wanted to add a blur to the cell so that the background was blurred, and I did this through Brad Larson's GPUImage framework. This works fine; however, since I want the background blur to update as the table scrolls, the scrolling becomes very laggy.
My code is:
// Gets called from the -scrollViewDidScroll:(UIScrollView *)scrollView method
- (void)updateViewBG
{
    UIImage *superviewImage = [self snapshotOfSuperview:self.tableView];
    UIImage *newBG = [self applyTint:self.tintColour image:[filter imageByFilteringImage:superviewImage]];
    self.layer.contents = (id)newBG.CGImage;
    self.layer.contentsScale = newBG.scale;
}
// Code to create an image from the area behind the 'blurred cell'
- (UIImage *)snapshotOfSuperview:(UIView *)superview
{
    CGFloat scale = 0.5;
    if ([UIScreen mainScreen].scale > 1 || self.contentMode == UIViewContentModeScaleAspectFill) {
        CGFloat blockSize = 12.0f / 5;
        scale = blockSize / MAX(blockSize * 2, floor(self.blurRadius));
    }
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, -self.frame.origin.x, -self.frame.origin.y);
    NSArray *hiddenViews = [self prepareSuperviewForSnapshot:superview];
    [superview.layer renderInContext:context];
    [self restoreSuperviewAfterSnapshot:hiddenViews];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}
-(UIImage*)applyTint:(UIColor*)colour image:(UIImage*)inImage {
    UIImage *newImage;
    if (colour) {
        UIGraphicsBeginImageContext(inImage.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGRect area = CGRectMake(0, 0, inImage.size.width, inImage.size.height);
        CGContextScaleCTM(ctx, 1, -1);
        CGContextTranslateCTM(ctx, 0, -area.size.height);
        CGContextSaveGState(ctx);
        CGContextClipToMask(ctx, area, inImage.CGImage);
        [[colour colorWithAlphaComponent:0.8] set];
        CGContextFillRect(ctx, area);
        CGContextRestoreGState(ctx);
        CGContextSetBlendMode(ctx, kCGBlendModeLighten);
        CGContextDrawImage(ctx, area, inImage.CGImage);
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        newImage = inImage;
    }
    return newImage;
}
Now for the question:
Is there a better way to add the blur, perhaps so that the layer doesn't have to be re-rendered on every movement? iOS 7's Control Center and Notification Center manage to do this without any lag.
Maybe with the GPUImageUIElement class? If so, how do I use this?
Another way I looked at was to blur the background image once up front and then crop out just the areas I needed; however, I couldn't get this to work, since the image may or may not be the same size as the screen, so the scaling was a problem (using CGImageCreateWithImageInRect() with the rect being the cell's position in the table).
I also found out that I have to add the blur to the tableview itself with the frame being that of the cell, and the cell having a clear colour.
Thanks in advance
EDIT
Upon request, here is the code for the image cropping I attempted before:
- (void)updateViewBG
{
    // self.bgImg is the pre-blurred image; -getContentViewFromCellFrame: is a convenience
    // method to get just the content area from the whole cell (since the content area
    // is smaller than the cell)
    UIImage *bg = [self cropImage:self.bgImg
                           toRect:[LATableBlur getContentViewFromCellFrame:[self.tableView rectForRowAtIndexPath:self.cellIndexPath]]];
    bg = [self applyTint:self.tintColour image:bg];
    self.layer.contents = (id)bg.CGImage;
    self.layer.contentsScale = bg.scale;
}
- (UIImage*)cropImage:(UIImage*)image toRect:(CGRect)frame
{
    CGSize imgSize = [image size];
    double heightRatio = imgSize.height / self.tableView.frame.size.height;
    double widthRatio = imgSize.width / self.tableView.frame.size.width;
    UIImage *cropped = [UIImage imageWithCGImage:CGImageCreateWithImageInRect(image.CGImage,
                            CGRectMake(frame.origin.x * widthRatio,
                                       frame.origin.y * heightRatio,
                                       frame.size.width * widthRatio,
                                       frame.size.height * heightRatio))];
    return cropped;
}
I managed to solve it with a solution that, at first, I didn't think would work.
Generating several blurred images is certainly not the way to go, as it costs a lot.
I used only one blurred image and cached it.
So I subclassed UITableViewCell:
@interface BlurredCell : UITableViewCell
@end
I implemented two class methods to access the cached images (blurred and normal ones)
+(UIImage *)normalImage
{
    static dispatch_once_t onceToken;
    static UIImage *_normalImage;
    dispatch_once(&onceToken, ^{
        _normalImage = [UIImage imageNamed:@"bg.png"];
    });
    return _normalImage;
}
I used REFrostedViewController's category on UIImage to generate the blurred image
+(UIImage *)blurredImage
{
    static dispatch_once_t onceToken;
    static UIImage *_blurredImage;
    dispatch_once(&onceToken, ^{
        _blurredImage = [[UIImage imageNamed:@"bg.png"] re_applyBlurWithRadius:BlurredCellBlurRadius
                                                                     tintColor:[UIColor colorWithWhite:1.0f
                                                                                                 alpha:0.4f]
                                                         saturationDeltaFactor:1.8f
                                                                     maskImage:nil];
    });
    return _blurredImage;
}
In order to get the effect of blurred frames inside the cell while still seeing the non-blurred image on the sides, I used two scroll views: one holding an image view with the normal image, and the other holding an image view with the blurred image. I set each content size to the size of the image, and the contentOffset is driven through an interface.
So the table view ends up with each cell holding the whole background image but cropping it at a certain offset, while still showing the entire image.
@implementation BlurredCell

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Initialization code
        [self.contentView addSubview:self.normalScrollView];
        [self.contentView addSubview:self.blurredScrollView];
    }
    return self;
}

-(UIScrollView *)normalScrollView
{
    if (!_normalScrollView) {
        _normalScrollView = [[UIScrollView alloc] initWithFrame:self.bounds];
        _normalScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        _normalScrollView.scrollEnabled = NO;
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
        imageView.contentMode = UIViewContentModeScaleToFill;
        imageView.image = [BlurredCell normalImage];
        _normalScrollView.contentSize = imageView.frame.size;
        [_normalScrollView addSubview:imageView];
    }
    return _normalScrollView;
}

-(UIScrollView *)blurredScrollView
{
    if (!_blurredScrollView) {
        _blurredScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(BlurredCellPadding, BlurredCellPadding,
                                                                            self.bounds.size.width - 2.0f * BlurredCellPadding,
                                                                            self.bounds.size.height - 2.0f * BlurredCellPadding)];
        _blurredScrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        _blurredScrollView.scrollEnabled = NO;
        _blurredScrollView.contentOffset = CGPointMake(BlurredCellPadding, BlurredCellPadding);
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
        imageView.contentMode = UIViewContentModeScaleToFill;
        imageView.image = [BlurredCell blurredImage];
        _blurredScrollView.contentSize = imageView.frame.size;
        [_blurredScrollView addSubview:imageView];
    }
    return _blurredScrollView;
}

-(void)setBlurredContentOffset:(CGFloat)offset
{
    self.normalScrollView.contentOffset = CGPointMake(self.normalScrollView.contentOffset.x, offset);
    self.blurredScrollView.contentOffset = CGPointMake(self.blurredScrollView.contentOffset.x, offset + BlurredCellPadding);
}

@end
setBlurredContentOffset: should be called each time the table view's content offset changes.
So in the table view delegate's implementation (the view controller) we do it in these two methods:
// For the first rows
-(void)tableView:(UITableView *)tableView willDisplayCell:(BlurredCell *)cell
forRowAtIndexPath:(NSIndexPath *)indexPath
{
    [cell setBlurredContentOffset:cell.frame.origin.y];
}

// Each time the table view is scrolled
-(void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (BlurredCell *cell in [self.tableView visibleCells]) {
        [cell setBlurredContentOffset:cell.frame.origin.y - scrollView.contentOffset.y];
    }
}
Here is a complete working demo
