The following code block is used in my application to take a screenshot of the current screen of an iPad mini (768 x 1024):
UIImage *img;
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
In a different view controller, I present a UIScrollView with a width of 540 and a height of 290. I display the captured UIImage in a UIImageView that I create programmatically with initWithFrame:, using a rectangle 250 wide by 250 tall. The content size of the scroll view is 768 by 250.
Now, running the application, I display four rectangles and capture the screen using the block of code above. Transitioning to the UIScrollView, the image is not clear (and by "not clear" I mean some rectangles are missing sides while others are thicker than they should be). Is there a way to display the image more clearly? I know the image has to be scaled down from the original 768 by 1024 to 250 by 250. Could this be the problem? If so, what would be the best fix?
Edit:
Above is a screenshot of the image I want to capture.
Below is the UIImage in the UIImageView within the UIScrollView:
Cast each coordinate to an int, or use CGRectIntegral to do that directly on a CGRect. A fractional coordinate requires anti-aliasing, which makes images blurry.
Try changing the content mode of your UIImageViews. If you use UIViewContentModeScaleAspectFill, you shouldn't see any extra space around the edges.
Update: From the screenshots you posted, it looks like this is just an effect of the built-in downscaling in UIKit. Try manually downscaling the image to fit using Core Graphics first. Alternatively, you might want to use something like the CILanczosScaleTransform Core Image filter (iOS 6+).
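If you go the Core Image route, a rough sketch would look like this (screenshot stands for the UIImage returned by the capture code above; the key names are the standard Core Image constants):
#import <CoreImage/CoreImage.h>
// CILanczosScaleTransform (iOS 6+) usually preserves thin lines better than
// UIKit's default downscaling interpolation.
CIImage *input = [CIImage imageWithCGImage:screenshot.CGImage];
CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[lanczos setValue:input forKey:kCIInputImageKey];
[lanczos setValue:@(250.0 / screenshot.size.width) forKey:kCIInputScaleKey]; // scale down to ~250 wide
[lanczos setValue:@1.0 forKey:kCIInputAspectRatioKey];
CIImage *output = lanczos.outputImage;
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef scaledCG = [ciContext createCGImage:output fromRect:output.extent];
UIImage *scaled = [UIImage imageWithCGImage:scaledCG];
CGImageRelease(scaledCG);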
Related
My app allows users to place text on top of images, like Snapchat, and then save the result to their device. I simply add the text view on top of the image and capture the combined image using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text from a Snapchat image, it is significantly different. Snapchat's text on top of an image is significantly sharper than mine; mine looks very pixelated. Also, I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
This will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen's scale is".
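Plugged into the snippet from your question, the capture would look something like this (a sketch using your variable names):
// Passing 0 as the scale captures at the screen's native scale (2x on Retina)
// instead of the 1x default, so the rendered text stays sharp.
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();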
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
// imageSize is the full-resolution size of the offscreen bitmap (e.g. the photo's pixel size)
UIGraphicsBeginImageContext(imageSize);
// Draw the photo at the full offscreen size first.
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the context so the screen-sized overlay view fills the larger bitmap.
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
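For instance, a rough way to derive that value (a sketch; imageSize and textOverlay are the variables from the snippet above):
// Ratio between the offscreen bitmap and the on-screen overlay view.
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;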
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView to scale it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work, at whatever resolution you choose for the UIImageView.
I have taken a photo, and then I'm initializing a UIImageView object with this photo. The only problem is, when I take the photo, the photo is being taken using the full iPhone screen (portrait).
The UIImageView that is being initialized with this photo is only set to take up the top 50% of the iPhone's screen. So you can imagine the image looks distorted.
I have been able to make it look a lot better by using the following code:
UIImageView *halfView = [[UIImageView alloc]initWithImage:image];
[self.view addSubview:halfView];
halfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.frame.size.height/2);
halfView.contentMode = UIViewContentModeScaleAspectFill;
The only problem is, the final UIImageView called "halfView" is still slightly distorted.
I have a feeling that this is impossible to fix, because the original photo is being taken with the full iPhone screen and can never perfectly scale to fit a UIImageView that only takes up the top 50% of the iPhone screen.
I was basically trying to copy the Frontback app. Here is what it looks like when you are taking the original image in their app:
This is what my app's screen looks like when you are taking the picture:
And then right after you take the picture, my app's screen changes to look like the Frontback screen: it takes the picture you just took, places it in the top half, and tries to scale it.
I hope that makes sense. I know it is a long question, but I just really wanted to let the user use the full screen while taking the photo and then just scale it to half the screen.
Am I going about this all wrong? Am I crazy to think I could ever properly scale the image to half the screen when it was originally captured as a "full screen" image?
Thanks for the help.
For the sake of argument, let's say your captured image size is 640x1136 (twice the size of an iPhone 5 screen) and you are trying to display it in a UIImageView of size 320x284 (half the size of an iPhone 5 screen).
As you can already see from these dimensions the captured image's width is smaller than its height whereas the UIImageView's width is larger than its height - the proportions are different.
Therefore, scaling the captured image to fit the UIImageView's width (scale by 0.5) means the captured image will be of size 320x568 - its height is larger than the UIImageView's height.
Scaling the captured image to fit the UIImageView's height (scale by 0.25) means the captured image will be of size 160x284 - its width is smaller than the UIImageView's width.
The image can't scale exactly like you want it to scale. However, you can use UIViewContentModeScaleAspectFill to fill the entire UIImageView but lose some of the image (image's height is too big to fit). You can also choose to use UIViewContentModeScaleAspectFit which will show the entire image but will leave some space on the sides (image's width is too small).
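For illustration, the two content-mode choices from the previous paragraph look like this in code (halfView is the image view from the question):
// Option A: fill the whole image view; the parts of the image that don't fit are cropped.
halfView.contentMode = UIViewContentModeScaleAspectFill;
halfView.clipsToBounds = YES; // keep the overflow from drawing outside the frame
// Option B: show the entire image; empty space is left on the sides.
// halfView.contentMode = UIViewContentModeScaleAspectFit;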
Another option you have is to actually capture the image in the proportions of your UIImageView in the first place but that means you won't be able to capture a full screen image.
Try this function: pass in your UIImage along with the new size, and it will return a UIImage scaled to the size you specify.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
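For example, to shrink a captured photo down to the half-screen size discussed above (the variable names and sizes here are placeholders):
UIImage *resized = [self imageWithImage:capturedPhoto scaledToSize:CGSizeMake(320, 284)];
halfView.image = resized;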
I guess this is what you want.
Hope this helps.
You mention the image takes up the full size of the screen. If it's to be displayed in a UIImageView taking up half the screen, then you'll need to add this line to clip the frame:
halfView.clipsToBounds = YES;
Even though the image view is only half the screen, the actual image will draw outside the image view's bounds if its original size is bigger and an aspect-fill content mode is used. clipsToBounds fixes this.
I hope this is what you're looking for. Thanks, Jim.
I want to set an image on a custom UIView. I created the image in Photoshop at 100 by 100 pixels.
Then I did:
self.customView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Image.png"]];
And it worked just fine, except that I could see the pixels on my Retina display. Don't get me wrong, I saw that coming, because I set the image in Photoshop to 100x100 and the UIView to 100x100. So, because a point maps to two pixels on a Retina screen, the image is displayed with double-size pixels.
But how can I create an image, say 200x200, and then set it as the background of the 100x100 view so it is displayed at native resolution? I tried to search for something like a "scale to fit" function for the UIView's background pattern but couldn't find anything.
So my question is: how do I set an image as a "Retina 2x" image?
And how do I get it into my UIView?
Thanks in advance!
This is your answer.
Basically, put two images in your project: "Image.png" and "Image@2x.png". The call [UIImage imageNamed:@"Image.png"] will pull the correct image based on the screen hardware.
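A minimal sketch of that setup (file names follow the question; the pixel sizes are the ones discussed above):
// The project contains both resources:
//   Image.png    - 100 x 100 pixels, used on non-Retina screens
//   Image@2x.png - 200 x 200 pixels, picked automatically on Retina screens
self.customView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Image.png"]];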
I have a simple app with a tab bar. The view size is 480 x 320. The tab bar is 49 x 320. However, when I try to take a screenshot using UIGraphicsBeginImageContext(self.view.frame.size), I get an image of 411 x 320.
My background image is 480 x 320 and shows properly in the iOS Simulator.
When I use UIGraphicsBeginImageContext with a size of 480 x 320, I get empty space at the bottom of the screenshot.
However, when the screen is captured, the background image is cut off by 69 pixels of height.
Does anyone know why my background image displays properly in the Simulator but is cropped when taking a screenshot? Or how to capture the whole screen if I increase the image context size?
Many Thanks
Is the status bar showing? It sounds like the status bar (20 px) and the UITabBar (49 px) are where the extra space is being cropped: 480 - 20 - 49 = 411, which matches the height you're getting. I imagine that self.view is returning a UIViewController's view, which is smaller than the shot you want.
If you want just the background image, try reducing the size of the screenshot; or, if you want the whole screen, try targeting the app's main window, something like this:
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I am prerendering a composited image with a couple different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel is looking a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably similarly being blurred, but it's not as noticeable as the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
(source: karlbecker.com)
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since the scaling shouldn't be different at all. The UIButton I'm assigning this image to is the exact same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?
Make sure that your label coordinates are integer values. If they are not whole numbers they will appear blurry.
I think you need to use CGRectIntegral. For more information, please see: What is the usage of CGRectIntegral? and the CGRectIntegral reference.
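A small sketch of the idea (the frame values are just an example of fractional coordinates, and myLabel is a placeholder):
// Snap the frame to whole-pixel boundaries so the label isn't drawn at a
// fractional offset, which would force anti-aliasing and blur the text.
CGRect labelFrame = CGRectMake(10.25, 20.6, 100.4, 21.0); // fractional frame
myLabel.frame = CGRectIntegral(labelFrame); // becomes {10, 20, 101, 22}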
I came across the same problem today, where my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap. It accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
defer { UIGraphicsEndImageContext() } // balance the begin call on every path
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
return nil