Why does my programmatically created screenshot look so bad on iOS 7? - ios

I am trying to implement sharing my app's screen to Facebook.
I used this code to take the screenshot:
CGSize imageSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
UIGraphicsBeginImageContext(imageSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
It works great on iOS 6, but on iOS 7 the image looks very bad.
I used this answer: iOS: what's the fastest, most performant way to make a screenshot programmatically?
to try to fix it, and it helped, but the screenshot still looks bad.
The screen takes on a different color, and some objects (like labels) don't appear in the captured image.
Any help?
----Update----
I managed to fix most of the missing objects by changing their properties to retain instead of weak. My main remaining problem was my table view, which showed up as a big white block (it is supposed to be transparent, containing labels with white text, so all you see is white cells). I tried setting the table background to clearColor, but that didn't help.
----Last Update---
There are wonderful answers here that don't really address my issue. I wanted to make it work on a device running iOS 7 but without using the iOS 7 SDK, since switching the project SDK at this point, when the project is almost done, takes too much effort.
Anyway, here is the piece of code that finally solved my issue. This change alone fixed the problem:
UIGraphicsBeginImageContextWithOptions(imageSize, NO , 0.0f);
instead of:
UIGraphicsBeginImageContext(imageSize);

New APIs were added in iOS 7 that provide an efficient way of getting a snapshot:
snapshotViewAfterScreenUpdates: renders the view into a UIView with unmodifiable content
resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: the same thing, but with resizable insets
drawViewHierarchyInRect:afterScreenUpdates: the same thing if you need all subviews to be drawn too (like labels, buttons...)
You can use the returned UIView for any UI effect, or render it into an image as you did if you need to export it.
I don't know how these new methods perform compared to the one you used (although I remember Apple engineers saying the new API is more efficient).

You can try this:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

On iOS 8, this is how I take a screenshot:
1. Add a UIImageView outlet and a method to take the screenshot in the .h file:
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
- (IBAction)takeSnapShot:(id)sender;
2. Add this code snippet in the .m file to take the screenshot and set it on the UIImageView:
- (IBAction)takeSnapShot:(id)sender
{
    // Scale 0.0 matches the device's screen scale, so the capture stays Retina-sharp
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.imageView.image = snapShotImage;
}

On iOS 7 you can get glitches if you use
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]
during an ongoing animation. Set afterScreenUpdates:NO to get rid of the glitches.
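As a minimal sketch of that advice (the helper name is illustrative, not from the original answer):

```objectivec
// Capture a view mid-animation without forcing a screen update,
// which avoids the iOS 7 glitches described above.
- (UIImage *)glitchFreeSnapshotOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    // afterScreenUpdates:NO captures the currently displayed frame as-is
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```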

Make sure that opaque is set to NO

Related

How to draw/paint a UIImage on a view as paint, without UIImageView, in iOS

I'm developing a painting app in which I'm using the ACEDrawing tool to draw over the view. I need to cut a particular portion of the painting and paste it anywhere on the view.
I can copy the painting of the view as an image. My code is here.
In .h file
@property (strong, nonatomic) ACEDrawingView *DrawingViews;
@property (nonatomic, strong) UIScrollView *myScrollView;
in .m file
// Creating the drawing view
_DrawingViews = [[ACEDrawingView alloc] initWithFrame:CGRectMake(myOrigin, 0, self.testView.frame.size.width, self.testView.frame.size.height)];
_DrawingViews.drawTool = ACEDrawingToolTypePen;
_DrawingViews.tag = i;
_DrawingViews.delegate = self;
[myScrollView addSubview:_DrawingViews];
Then, inside the copy method:
- (void)copy:(id)sender {
    ACEDrawingView *dragView = (ACEDrawingView *)[self.myScrollView viewWithTag:currentTag]; // getting the ACEDrawingView with the tag
    CGSize size = [_userResizableView1 bounds].size; // just for the size
    UIGraphicsBeginImageContext(size);
    [[dragView layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:CGRectMake(200, 200, 145, 150)];
    UIGraphicsEndImageContext();
}
But I can only paste it as an image using a UIImageView. I don't know how to paste it as a painting over the view, so that I can then erase or redraw it.
Grateful to Mr. Duncan and all who will answer.
UIImage has various drawing methods like drawInRect:. You've already figured out how to create a graphics context; you could use drawInRect: to draw your image into that graphics context.
(Note that you want to use the longer form of begin-image-context, UIGraphicsBeginImageContextWithOptions, which takes a scale and options. If you pass in a scale of 0 it creates a Retina context on Retina devices. With the code you're using, Retina images will be degraded to non-Retina resolution.)
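Putting that advice together, the question's copy method might be sketched like this (dragView, currentTag, _userResizableView1, and the target rect are taken from the question's code; what you do with the composited image afterward is up to you):

```objectivec
- (void)copy:(id)sender {
    ACEDrawingView *dragView = (ACEDrawingView *)[self.myScrollView viewWithTag:currentTag];
    CGSize size = [_userResizableView1 bounds].size;

    // Scale 0.0 uses the device's screen scale, preserving Retina resolution
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [[dragView layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    // Draw the captured image back into the same context at the target position
    [image drawInRect:CGRectMake(200, 200, 145, 150)];
    UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // composited now contains the capture with the pasted region drawn on top
}
```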

How to optimize memory usage in UIImage

I am trying to take a screenshot of a UIWebView and send it, via an observer, to a UIImageView in another class.
I am using this method to take the screenshot:
- (UIImage *)takeScreenshoot {
    @autoreleasepool {
        UIGraphicsBeginImageContext(CGSizeMake(self.view.frame.size.width, self.view.frame.size.height));
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.webPage.layer renderInContext:context];
        UIImage *__weak screenShot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return screenShot;
    }
}
But then I have a problem. Every time I take a screenshot with this method, memory usage grows by about 10-15 MB and is never released. And if I take a screenshot in every webViewDidFinishLoad:, you can imagine how much memory that consumes!
How can I fix that issue?
If possible, try to use UIScreen's snapshotViewAfterScreenUpdates: method, which returns a UIView.
This is a snapshot of the currently displayed content (a snapshot of the app).
Apple even says this method "is faster than trying to render the contents of the screen into a bitmap image yourself."
According to your code, you are passing this bitmap image just to display it in some other UIImageView, so I think using the UIScreen method is appropriate here.
To display the UIWebView part only:
Create another UIView instance and set its frame to the frame of your webView.
Now add the snapshot view as a subview of the created view, and set its frame so that only the webView portion is displayed.
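A minimal sketch of that cropping idea (self.webView stands in for the question's web view; the container view clips the full-screen snapshot down to just the web view's region):

```objectivec
// Snapshot the whole screen, then clip it to the web view's area
UIView *screenSnapshot = [[UIScreen mainScreen] snapshotViewAfterScreenUpdates:NO];

UIView *container = [[UIView alloc] initWithFrame:self.webView.frame];
container.clipsToBounds = YES;

// Shift the snapshot so the web view's portion lines up with the container's bounds
CGRect snapshotFrame = screenSnapshot.frame;
snapshotFrame.origin = CGPointMake(-self.webView.frame.origin.x,
                                   -self.webView.frame.origin.y);
screenSnapshot.frame = snapshotFrame;

[container addSubview:screenSnapshot];
[self.view addSubview:container];
```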
Try calling CGContextRelease(context); after you have got your screenshot.
Or, as @Greg said, remove that line and use UIGraphicsGetCurrentContext() directly.

Blurring a UIView that contains the keyboard with content beneath it (iOS 7)

I have a UIView with a UITableView that extends beneath the keyboard. The content in the table view is bright enough that it is clearly visible behind the keyboard. I'm attempting to take a screenshot of the entire view in order to blur it, using the following code:
- (UIImage *)screenshotFromView:(UIView *)view;
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
However, the image that is returned does not create a transparent keyboard. This presents an odd transition when going from the non-blurred view to the blurred view, since there is clearly content behind the keyboard before the transition to the blurred image.
Is it possible to take a screenshot of the entire screen, without use of private APIs, while still keeping the transparency of the keyboard + the status bar?
I ran into the exact same problem recently, so I know exactly what you want. I wanted the whole UI blurred behind a message, including the keyboard, which is not included by any regular screenshot method. My cure is the following code:
- (UIImage *)screenShotWithKeyboard:(UIView *)viewToShoot
{
    // The keyboard lives in its own window; find it by its private class name prefix
    UIWindow *keyboard = nil;
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if ([[window description] hasPrefix:@"<UITextEffectsWin"])
        {
            keyboard = window;
            break;
        }
    }
    // Define the dimensions of the screenshot you want to take (the entire screen in this case)
    CGSize size = [[UIScreen mainScreen] bounds].size;
    // Create the screenshot
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Delete the following line if you only want the keyboard
    [[viewToShoot layer] renderInContext:context];
    if (keyboard != nil)
        [[keyboard layer] renderInContext:context];
    UIImage *screenImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImg;
}
I got the idea from an article by Aran Balkan. I broke it down to one method and tested it on iOS 7, where it seems to work for me. The article is worth reading, as he explains the tricks behind it a little. Since you only want the actual keyboard as an image, you can comment out the line I marked in the code. With that keyboard image you can do your blur stuff yourself.
The code is far from perfect, but I think you get the idea.
Two things at the end:
I am new to Objective-C and iOS development, so if you find any problematic bugs, a comment to improve this answer is very welcome.
Second, I wrote this code today in my app and I do not yet know whether it violates any developer rules for iOS. At the moment I do not see any problems, but I will investigate further, as I want to release my app with this graphic trick. I will keep this post updated. Until then, as with point one, I would highly appreciate any comments regarding this issue.
Have you considered using UIKeyboardAppearanceDark? The default value of keyboardAppearance corresponds to UIKeyboardAppearanceLight, which may not be suited to your use case.
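For reference, the keyboard appearance is set per text input view (textField here is a hypothetical UITextField):

```objectivec
// Request the dark keyboard style instead of the default light one,
// so less bright content appears to show through behind it
textField.keyboardAppearance = UIKeyboardAppearanceDark;
```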

Blur screen with iOS 7's snapshot API

I believe the NDA is down, so I can ask this question. I have a UIView subclass:
BlurView *blurredView = ((BlurView *)[self.view snapshotViewAfterScreenUpdates:NO]);
blurredView.frame = self.view.frame;
[self.view addSubview:blurredView];
It does its job so far in capturing the screen, but now I want to blur that view. How exactly do I go about this? From what I've read, I need to capture the current contents of the view (context?!), convert it to a CIImage (no?), apply a CIGaussianBlur to it, and draw it back onto the view.
How exactly do I do that?
P.S. The view is not animated, so it should be OK performance wise.
EDIT: Here is what I have so far. The problem is that I can't capture the snapshot into a UIImage; I get a black screen. But if I add the view as a subview directly, I can see that the snapshot is there.
// Snapshot
UIView *view = [self.view snapshotViewAfterScreenUpdates:NO];
// Convert to UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Apply the UIImage to a UIImageView
BlurView *blurredView = [[BlurView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
[self.view addSubview:blurredView];
blurredView.imageView.image = img;
// Black screen -.-
BlurView.m:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.imageView = [[UIImageView alloc] init];
        self.imageView.frame = CGRectMake(20, 20, 200, 200);
        [self addSubview:self.imageView];
    }
    return self;
}
Half of this question didn't get answered, so I thought it worth adding.
The problem with UIScreen's
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and UIView's
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
withCapInsets:(UIEdgeInsets)capInsets
is that you can't derive a UIImage from them - the 'black screen' problem.
In iOS 7, Apple provides a third piece of API for extracting UIImages, a method on UIView:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
It is not as fast as snapshotView, but not bad compared to renderInContext (in the example provided by Apple it is five times faster than renderInContext and three times slower than snapshotView)
Example use:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[view drawViewHierarchyInRect:rect afterScreenUpdates:NO];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then to get a blurred version
UIImage* lightImage = [newImage applyLightEffect];
where applyLightEffect is one of those Blur category methods on Apple's UIImage+ImageEffects category mentioned in the accepted answer (the enticing link to this code sample in the accepted answer doesn't work, but this one will get you to the right page: the file you want is iOS_UIImageEffects).
The main reference is to WWDC2013 session 226, Implementing Engaging UI on iOS
By the way, there is an intriguing note in Apple's reference docs for renderInContext: that hints at the black screen problem:
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
The note hasn't been updated since 10.5, so I guess 'future versions' may still be a while off, and we can add our new CASnapshotLayer (or whatever) to the list.
Sample code from WWDC: ios_uiimageeffects.
There is a UIImage category named UIImage+ImageEffects.
Here is its API:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
tintColor:(UIColor *)tintColor
saturationDeltaFactor:(CGFloat)saturationDeltaFactor
maskImage:(UIImage *)maskImage;
For legal reasons I can't show the implementation here, but there is a demo project included. It should be pretty easy to get started with.
To summarize how to do this with foundry's sample code, use the following.
I wanted to blur the entire screen just slightly, so for my purposes I'll use the main screen bounds.
CGRect screenCaptureRect = [UIScreen mainScreen].bounds;
UIView *viewWhereYouWantToScreenCapture = [[UIApplication sharedApplication] keyWindow];
//screen capture code
UIGraphicsBeginImageContextWithOptions(screenCaptureRect.size, NO, [UIScreen mainScreen].scale);
[viewWhereYouWantToScreenCapture drawViewHierarchyInRect:screenCaptureRect afterScreenUpdates:NO];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//blur code
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0];
UIImage *blurredImage = [capturedImage applyBlurWithRadius:1.5 tintColor:tintColor saturationDeltaFactor:1.2 maskImage:nil];
//or use [capturedImage applyLightEffect] but I thought that was too much for me
//use blurredImage in whatever way you so desire!
Notes on the screen capture part
UIGraphicsBeginImageContextWithOptions()'s 2nd argument is opaqueness. It should be NO unless nothing in your view has an alpha other than 1. If you pass YES, the screen capture will ignore transparency values, so it will be faster but will probably be wrong.
UIGraphicsBeginImageContextWithOptions()'s 3rd argument is the scale. You probably want to pass the device's scale, as I did, to make sure it differentiates between Retina and non-Retina. But I haven't really tested this, and I think 0.0f also works.
drawViewHierarchyInRect:afterScreenUpdates: - watch out what you pass for the screen-updates BOOL. I tried to do this right before backgrounding, and if I didn't pass NO the app would glitch out badly when I returned to the foreground. You might be able to get away with YES, though, if you're not leaving the app.
Notes on blurring
I have a very light blur here. Changing the blurRadius will make it blurrier, and you can change the tint color and alpha to make all sorts of other effects.
Also you need to add a category for the blur methods to work...
How to add the UIImage+ImageEffects category
You need to download the category UIImage+ImageEffects for the blur to work. Download it here after logging in: https://developer.apple.com/downloads/index.action?name=WWDC%202013
Search for "UIImageEffects" and you'll find it. Just pull out the 2 necessary files and add them to your project. UIImage+ImageEffects.h and UIImage+ImageEffects.m.
Also, I had to enable modules in my build settings because I had a project that wasn't created with Xcode 5. To do this, go to your target's build settings, search for "modules", and make sure that "Enable Modules" and "Link Frameworks Automatically" are both set to Yes, or you'll get compiler errors with the new category.
Good luck blurring!
Check the WWDC 2013 sample application "Running with a Snap".
The blurring there is implemented as a category.

Draw text and add to a UIImage iOS 5/6

I have a textbox, where I want the written text to be added to a UIImage.
How can I draw an NSString onto a UIImage?
I've searched and found lots of examples, but none of them work; Xcode just gives me lots of errors.
Simply put, I want to draw an NSString onto a UIImage. The UIImage should be the same size as a predefined UIImageView, and be placed in the center.
Any ideas?
I think the solution is here; I have used this method in one of my applications.
Here lblText is my label object. This method creates a new image with the given text and returns the new image object:
- (UIImage *)drawText:(NSString *)text inImage:(UIImage *)image atPoint:(CGPoint)point
{
    UIFont *font = lblText.font;
    UIGraphicsBeginImageContext(iv.frame.size); // iv is the predefined UIImageView
    [image drawInRect:CGRectMake(0, 0, iv.frame.size.width, iv.frame.size.height)];
    CGRect rect = CGRectMake(point.x, point.y, lblText.frame.size.width, lblText.frame.size.height);
    [lblText.textColor set];
    [text drawInRect:CGRectIntegral(rect) withFont:font];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
In your comment below you passed iv.center as the point. Try setting it manually, like this:
CGPoint point;
point.x = 30; // dynamically, for example: lblText.frame.origin.x;
point.y = 40; // lblText.frame.origin.y;
You have to call the above method like this:
UIImage *img = [self drawText:@"Test String" inImage:originalImage atPoint:point];
Here, img is another UIImage instance for storing the new image with text. After calling this method, use img for your further work, for example:
[newimageviewObj setImage:img];
I hope this will help you.
UIImage is not a subclass of UIView, so you can't add a subview to it. NSString is not a subclass of UIView either. If you want to show things on the screen, they should inherit from UIView.
So try this:
Create a UIImageView and set its image property to your UIImage instance.
Create a UILabel and set its text property to your NSString.
Add the UILabel as a subview of your UIImageView.
Finally, add your UIImageView to your current view controller's view.
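Those steps might be sketched like this (imageToShow, textToShow, and the frames are hypothetical placeholders for the question's image and string):

```objectivec
// 1. Image view backed by the UIImage
UIImageView *imageView = [[UIImageView alloc] initWithImage:imageToShow];
imageView.frame = CGRectMake(0, 0, 200, 200);

// 2. Label holding the NSString, centered over the image
UILabel *label = [[UILabel alloc] initWithFrame:imageView.bounds];
label.text = textToShow;
label.textAlignment = NSTextAlignmentCenter;
label.backgroundColor = [UIColor clearColor];

// 3. Label goes on top of the image view
[imageView addSubview:label];

// 4. Image view goes into the view controller's view
[self.view addSubview:imageView];
```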
Hmm, here are some thoughts. I think one simple way is like this:
Put aUIImageView on aView;
Add aUITextView on aView;
Get a screenshot from aView.
This may work fine.
However, the screenshot may come out unclear.
In that case, after steps 1 and 2, we can produce the new image with UIGraphics instead.
(At this point we already know the position where the text should be on the image, so this should be OK.)
PS: Some images may not have attributes like CGImage, and CGImage is needed for this, so transform the image first.
