I followed the traditional way of drawing the view into a UIImage:
UIGraphicsBeginImageContextWithOptions(controller.skView.bounds.size, NO, 1.0);
[controller.skView drawViewHierarchyInRect:controller.skView.bounds afterScreenUpdates:YES];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I have tried setting the third parameter of UIGraphicsBeginImageContextWithOptions to 2.0 or larger; however, the quality of the image doesn't get any better.
Here I post two images (the first is captured by the code above, the second by pressing the Home + power buttons, the screenshot function provided by the system).
I want to know if there is a way to make it better. Any help is appreciated!
Related
I have a feature where I take a screenshot of a UIView (called "arrangeView" in my code) and send the picture to my server, where it will be printed, so I need a high-quality picture.
My code:
UIGraphicsBeginImageContext(arrangeView.frame.size);
[arrangeView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//save and send method
[self saveImage:newImage WithName:imgTitle];
With this code I have two issues:
1. The picture quality is too low. I need to print the image after uploading it to the server, but the picture I get with this code is only 20~40 KB, which is too small to print.
2. A friend told me that the App Store doesn't allow an app to take a screenshot and send it to a server, and suggested I use a bitmap instead.
I searched on Google but couldn't find a good solution. Would someone help?
Thank you very much.
The screenshot you get will be at the same resolution as your main screen, i.e. on an iPhone 5s it will be 640 x 1136. The best way to get a screenshot is:
UIView *screenShotView = [arrangeView snapshotViewAfterScreenUpdates:YES];
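Note that snapshotViewAfterScreenUpdates: returns a UIView meant for on-screen effects, not a UIImage you can upload. To get actual image data you can render the view into a bitmap context instead; a minimal sketch, reusing arrangeView from the question:
// Render into a context at the screen's native scale (scale 0 = main screen scale).
UIGraphicsBeginImageContextWithOptions(arrangeView.bounds.size, NO, 0);
[arrangeView drawViewHierarchyInRect:arrangeView.bounds afterScreenUpdates:YES];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();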
Please try the code below. You can save the image to the photo album first and then send it to the server.
UIView* captureView = self.view;
/* Capture the screenshot at native resolution */
UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
[captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Render the screen shot at custom resolution */
CGRect cropRect = CGRectMake(0, 0, 1435, 1435);
UIGraphicsBeginImageContextWithOptions(cropRect.size, captureView.opaque, 1.0f);
[screenshot drawInRect:cropRect];
UIImage * customScreenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Save to the photo album */
UIImageWriteToSavedPhotosAlbum(customScreenShot , nil, nil, nil);
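To send the result to the server you will need it as raw data rather than a UIImage; a minimal sketch of that step (assuming your backend accepts PNG or JPEG):
// PNG is lossless; JPEG is smaller. Pick whichever your print workflow expects.
NSData *pngData = UIImagePNGRepresentation(customScreenShot);
NSData *jpegData = UIImageJPEGRepresentation(customScreenShot, 0.9); // 0.9 = compression quality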
I have a UIView and I want to save its contents to an image. I did that successfully using UIGraphicsBeginImageContext and UIGraphicsGetImageFromCurrentImageContext(), but the problem is that the image quality is reduced. Is there a way to take a screenshot / save a UIView's contents to an image without reducing its quality?
Here's a snippet of my code:
UIGraphicsBeginImageContext(self.myView.frame.size);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try the following code:
UIGraphicsBeginImageContextWithOptions(YourView.bounds.size, NO, 0);
[YourView drawViewHierarchyInRect:YourView.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This code works great!
Mital Solanki’s answer is a fine solution, but the root of the problem is that your view is on a retina screen (so has a scale factor of 2) and you are creating a graphics context with a scale factor of 1. The documentation for UIGraphicsBeginImageContext states:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Instead use UIGraphicsBeginImageContextWithOptions with a scale of 0, which is equivalent to passing a scale of [[UIScreen mainScreen] scale].
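Applied to the snippet from the question, that change might look like this (same variable names as above):
// Passing 0 as the scale creates the context at the device's screen scale
// (e.g. 2x on Retina) instead of 1x, so no resolution is lost.
UIGraphicsBeginImageContextWithOptions(self.myView.frame.size, NO, 0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();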
An image captured using the camera is returned at a size of 720*960.
The captured image is displayed in a UIImageView of 320*436, like this:
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 436.0)];
imgView.image = img; // Image received from the camera.
[self.view addSubview:imgView];
This works fine; the 720*960 image is scaled down to 320*436 and displayed.
Now the actual problem starts. I have another image of size 72*72. This image is overlaid on the image received from the camera at some arbitrary coordinates, e.g.:
CGRectMake(0.0,0.0,72.0,72.0);
I am not able to find a good way to handle scaling and applying an overlay of another image while maintaining quality.
The image needs to be sent to a server.
Use the following code to scale images:
-(UIImage*) imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Note: UIGraphicsBeginImageContext uses a scale factor of 1.0,
    // so newSize is the pixel size of the resulting image.
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
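To address the overlay part of the question, one option is to composite at the camera image's full 720*960 resolution rather than at the 320*436 display size, so nothing is downscaled before upload. A sketch, where cameraImage, overlayImage and overlayOrigin (the overlay's position in the 320*436 view) are assumed names, not code from the question:
// Draw the full-resolution camera image first.
CGSize fullSize = cameraImage.size; // 720*960 from the camera
UIGraphicsBeginImageContextWithOptions(fullSize, YES, 1.0);
[cameraImage drawInRect:CGRectMake(0, 0, fullSize.width, fullSize.height)];
// Convert the overlay's frame from view coordinates (320*436) to image
// coordinates (720*960) so it lands in the same relative spot.
CGFloat scaleX = fullSize.width / 320.0;
CGFloat scaleY = fullSize.height / 436.0;
[overlayImage drawInRect:CGRectMake(overlayOrigin.x * scaleX,
                                    overlayOrigin.y * scaleY,
                                    72.0 * scaleX,
                                    72.0 * scaleY)];
UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();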
I have a drawing app where you have one UIImageView that serves as the "drawing layer." You have another UIImageView beneath it that is the "image layer," containing the image you are drawing on. I like having this separation. However, I want the user to be able to "save and email" the drawing they have made on top of the image as one unified image. How do I do this?
Your UIImageView instances must be part of a UIView hierarchy, so all you need to do is paint the top containing UIView into a context:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
or, if that gives you trouble, render the two image views into the context one after the other:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[image1.layer renderInContext:UIGraphicsGetCurrentContext()];
[image2.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
From there you can write the data wherever you choose:
NSData *data = UIImagePNGRepresentation(fimage);
Pass that data into the MailComposer setup.
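As a sketch of that last step (MFMailComposeViewController comes from the MessageUI framework; the file name is just an example):
#import <MessageUI/MessageUI.h>

if ([MFMailComposeViewController canSendMail]) {
    MFMailComposeViewController *composer = [[MFMailComposeViewController alloc] init];
    composer.mailComposeDelegate = self; // self must adopt MFMailComposeViewControllerDelegate
    [composer addAttachmentData:data mimeType:@"image/png" fileName:@"drawing.png"];
    [self presentViewController:composer animated:YES completion:nil];
}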
I've got two UIImageViews: the first one is lying on top of the other (e.g. an overlay).
Now I want to take a screenshot of the whole thing.
Note that before that step I allow the user to change the overlay by panning, scaling and rotating it, so I must keep track of that editing.
So, here's the homework:
- rotate the context based on the view's transform rotation value
- position at the origin where the user finished panning the overlay
- calculate the size of the overlay view (it's always a rectangle, however!)
I'm going to merge them with a piece of code similar to this:
UIGraphicsBeginImageContext...
...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but... what best fits in place of the "..."?
Example code is very welcome!
Thanks
UIGraphicsBeginImageContext(firstImage.size);
[firstImage drawAtPoint:CGPointMake(0,0)];
[secondImage drawAtPoint:CGPointMake(0,0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
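That covers the simple case where both images share the same origin. If the overlay has been panned, scaled and rotated as described in the question, one way to account for it is to apply the overlay view's transform to the context before drawing. A sketch, assuming overlayView is the editable UIImageView, both views share the same superview, and the background image view sits at (0, 0) with the same point size as firstImage:
UIGraphicsBeginImageContextWithOptions(firstImage.size, NO, 0);
[firstImage drawAtPoint:CGPointMake(0, 0)];

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// Move the origin to the overlay's center, apply the user's pan/scale/rotate
// transform, then draw the overlay image centred on that point.
CGContextTranslateCTM(ctx, overlayView.center.x, overlayView.center.y);
CGContextConcatCTM(ctx, overlayView.transform);
CGSize overlaySize = overlayView.bounds.size;
[overlayView.image drawInRect:CGRectMake(-overlaySize.width / 2,
                                         -overlaySize.height / 2,
                                         overlaySize.width,
                                         overlaySize.height)];
CGContextRestoreGState(ctx);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();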