So I have some copy-pasted code to create a PNG file and it's working splendidly. However, when I go to print this PNG it appears kind of "blurry", like a low-resolution image. Is there any way to create the PNG at a higher resolution?
Here's my current code:
- (UIImage *)renderScrollViewToImage
{
    UIImage *image = nil;
    UIGraphicsBeginImageContext(self.scrollView.contentSize);
    {
        // Temporarily reset the offset and expand the frame so the
        // whole content area gets rendered.
        CGPoint savedContentOffset = self.scrollView.contentOffset;
        CGRect savedFrame = self.scrollView.frame;

        self.scrollView.contentOffset = CGPointZero;
        self.scrollView.frame = CGRectMake(0, 0,
                                           self.scrollView.contentSize.width,
                                           self.scrollView.contentSize.height);

        [self.scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();

        // Restore the scroll view's original state.
        self.scrollView.contentOffset = savedContentOffset;
        self.scrollView.frame = savedFrame;
    }
    UIGraphicsEndImageContext();

    return image;
}
Try replacing:

UIGraphicsBeginImageContext(self.scrollView.contentSize);

with:

UIGraphicsBeginImageContextWithOptions(self.scrollView.contentSize, NO, 0.0);

which takes Retina scaling into account. From the docs:
UIGraphicsBeginImageContextWithOptions
Creates a bitmap-based graphics context with the specified options.
void UIGraphicsBeginImageContextWithOptions(
    CGSize size,
    BOOL opaque,
    CGFloat scale
);
Parameters
size
The size (measured in points) of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function. To get the size of the bitmap in pixels, you must multiply the width and height values by the value in the scale parameter.
opaque
A Boolean flag indicating whether the bitmap is opaque. If you know the bitmap is fully opaque, specify YES to ignore the alpha channel and optimize the bitmap’s storage. Specifying NO means that the bitmap must include an alpha channel to handle any partially transparent pixels.
scale
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
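Applied to the method in the question, the change is a single line. A minimal sketch (0.0 resolves to the main screen's scale; for print output you could also pass an explicit larger value such as 2.0, at the cost of a bigger bitmap in memory):

// Same rendering as before, but the bitmap now matches the screen's scale.
UIGraphicsBeginImageContextWithOptions(self.scrollView.contentSize, NO, 0.0);
[self.scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();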
I have a CGPath shaped like an arrow that I am drawing in the CGContext of my current view. I would like to generate a miniature version (thumbnail) of the arrow to add it as an image to a UITableView showing all selected arrows.
I have managed to downscale a picture of the full context, which leaves the arrow smaller than it should be. Ideally I would like to crop the image of the full context to the bounds of the arrow, but I have not yet been successful. Any leads? Thanks for the help!
Here is a picture of the full view containing an arrow, and another picture of the thumbnail I am generating.
Ideally the thumbnail above would be cropped to contain the arrow only - not the full context.
The code I use is the following:
- (UIImage *)imageForObject:(id<GraphicalObject>)object
                     inRect:(CGRect)rect
{
    UIImage *image = [UIImage new];
    CGRect objectBounds = [object objectBounds];
    UIGraphicsBeginImageContext(self.view.frame.size); //objectBounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [object drawInContext:context];
    // doesn't work (the clip is set after drawing, so it has no effect)
    CGContextClipToRect(context, objectBounds);
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The CGRect called objectBounds has two components, an origin and a size. In order to draw the object correctly as a thumbnail, the code needs to scale the image (to get the size right) and translate it (to move the origin to {0,0}). So the code looks like this:
- (UIImage *)getThumbnailOfSize:(CGSize)size forObject:(UIBezierPath *)object
{
    // to maintain the aspect ratio, we need to compute the scale
    // factors for x and y, and then use the smaller of the two
    CGFloat xscale = size.width  / object.bounds.size.width;
    CGFloat yscale = size.height / object.bounds.size.height;
    CGFloat scale  = (xscale < yscale) ? xscale : yscale;

    // start a graphics context with the thumbnail size
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // here's where we scale and translate to make the image fit
    CGContextScaleCTM(context, scale, scale);
    CGContextTranslateCTM(context, -object.bounds.origin.x, -object.bounds.origin.y);

    // draw the object and get the resulting image
    [object stroke];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
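For the UITableView use case, usage would look roughly like this (a sketch; `arrowPath` is a hypothetical UIBezierPath for one arrow, and 44x44 is an assumed thumbnail size):

// `arrowPath` is a hypothetical UIBezierPath holding one selected arrow.
UIImage *thumbnail = [self getThumbnailOfSize:CGSizeMake(44.0, 44.0)
                                    forObject:arrowPath];
cell.imageView.image = thumbnail;

Note that -stroke uses the stroke colour of the current graphics context, so you may want to call [[UIColor blackColor] setStroke] inside the method, after the context is created.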
I would like to rotate a UIImage loaded from a file (a JPEG on the file system) by a computed number of radians, around a given point in the image, while keeping the original size of the image (with transparent gaps where image data no longer exists, and with image data that has moved outside the original frame cropped away). I would then like to store and display the resulting UIImage. I haven't found any resources for this task; any help would be much appreciated!
The closest thing I have found so far (with some slight modifications) is as follows:
- (UIImage *)rotateImage:(UIImage *)image
             aroundPoint:(CGPoint)point
                 radians:(float)radians
                 newSize:(CGRect)newSize
{
    CGRect imageRect = { point, image.size };
    UIGraphicsBeginImageContext(image.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate the context around the given point.
    CGContextTranslateCTM(context, imageRect.origin.x, imageRect.origin.y);
    CGContextRotateCTM(context, radians);
    CGContextTranslateCTM(context, -imageRect.origin.x, -imageRect.origin.y);

    CGContextDrawImage(context, (CGRect){ CGPointZero, imageRect.size }, [image CGImage]);

    UIImage *returnImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return returnImg;
}
Unfortunately, this rotates the image incorrectly (in my tests, somewhere in the neighborhood of 180 degrees more than desired).
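One thing worth checking in the snippet above: CGContextDrawImage draws in Core Graphics' flipped (bottom-left origin) coordinate system, so inside a UIKit image context the photo comes out mirrored vertically, which can easily read as "about 180 degrees off". A common fix, as a sketch (flip the context before drawing, keeping the rotation logic unchanged):

// Flip the context vertically so CGContextDrawImage matches UIKit's
// top-left-origin coordinate system.
CGContextTranslateCTM(context, 0, imageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, (CGRect){ CGPointZero, imageRect.size }, [image CGImage]);

(UIKit's [image drawInRect:] performs this flip for you, so drawing with it instead of CGContextDrawImage is another option.)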
To rotate that image, let's say by 90 degrees, you can simply do:

imageView.transform = CGAffineTransformMakeRotation(M_PI/2);

To rotate it multiple times you can use the UIView animateKeyframesWithDuration: method, and to anchor the rotation to some point you can use:

[imageView.layer setAnchorPoint:CGPointMake(....)];
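Note that changing the layer's anchorPoint also shifts where the view is drawn unless you compensate its position. A minimal sketch of pivoting around an arbitrary point (assumptions: `imageView` has no prior transform, `pivot` is a hypothetical point in the view's own coordinates, and QuartzCore is imported):

#import <QuartzCore/QuartzCore.h>

CGPoint pivot = CGPointMake(10.0, 10.0); // hypothetical pivot point
CALayer *layer = imageView.layer;

// Compute the pivot's location in the superlayer before touching the anchor.
CGPoint newPosition = [layer convertPoint:pivot toLayer:layer.superlayer];

// anchorPoint is in unit coordinates (0..1), so normalise by the bounds.
layer.anchorPoint = CGPointMake(pivot.x / layer.bounds.size.width,
                                pivot.y / layer.bounds.size.height);
layer.position = newPosition; // keeps the view from jumping

imageView.transform = CGAffineTransformMakeRotation(M_PI_2);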
Many questions are available on SO, but unfortunately I couldn't solve my problem using them.
I've added an overlay view on my camera, and now I want to get the image within the blue border (only the water bottle).
I tried code chunks like the following:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[self.imageView setImage:[UIImage imageWithCGImage:imageRef]]; // self.imageView: the destination image view
CGImageRelease(imageRef);
but I am having two issues:
The cropped image is getting too big.
The orientation changes to -90.
For point 1, I think I'm providing a cropRect that is too small, which is why it shows a very small, overly zoomed part of the image. In my other view controller I have a UIImageView (where the cropped image needs to be displayed) of the same size as the camera rect within the blue border.
So the question is: how do I crop the image, and what values should I provide for cropRect?
Assuming the image size is 1280x1080 and your crop view size is 320x480, you need to do the following:
Convert your crop view's frame to the image-sized rect (0, 0, 1280, 1080) by finding the scale factor:
float xScale = 1280.0f / 320.0f;
float yScale = 1080.0f / 480.0f;
float scaleFactor = (xScale < yScale) ? xScale : yScale;
Multiply the cropView frame by the scale factor. This will map the screen coordinates to image coordinates. Then use the new cropRect with:
CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
The problem with the different orientation is that Core Graphics uses a different coordinate system than the view's coordinate system (see Quartz 2D Coordinate Systems). Also, imageOrientation is a read-only property, so rather than assigning to it, pass the original orientation when creating the image:

[UIImage imageWithCGImage:imageRef scale:largeImage.scale orientation:largeImage.imageOrientation];
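Putting the two steps together, a minimal sketch (assumptions: `cropView` is the blue-border overlay view and `largeImage` is the captured photo, using the example numbers above):

// Step 1: scale factors from view points to image pixels. Note the
// floating-point literals; integer division would truncate 1080 / 480 to 2.
float xScale = 1280.0f / 320.0f;
float yScale = 1080.0f / 480.0f;
float scaleFactor = (xScale < yScale) ? xScale : yScale;

// Step 2: map the overlay's frame into image coordinates and crop.
CGRect cropRect = CGRectApplyAffineTransform(cropView.frame,
                      CGAffineTransformMakeScale(scaleFactor, scaleFactor));
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                       scale:largeImage.scale
                                 orientation:largeImage.imageOrientation];
CGImageRelease(imageRef);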
Hi,
I want to rotate my UIImageView without the whole "png" moving. The code below is not real code, it's only to test what happens:
_fanImage.transform = CGAffineTransformMakeRotation(45);
It turns, but the whole image moves. What can I do so that this doesn't happen?
You can try something like this. You should rotate the UIImage rather than the UIImageView.
- (UIImage *)imageWithTransform:(CGAffineTransform)transform
{
    CGRect rect = CGRectMake(0, 0, self.size.height, self.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                self.size.width,
                                                self.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, rect, imageRef);

    // Get the resized image from the context and a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
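Assuming the method above is declared in a UIImage category, usage from the question's code would look roughly like this (a sketch; `_fanImage` is the UIImageView from the question):

// Rotate the bitmap itself by 45 degrees (M_PI_4 radians) instead of
// transforming the view.
_fanImage.image = [_fanImage.image imageWithTransform:CGAffineTransformMakeRotation(M_PI_4)];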
I think you mean that you want your image view to rotate around its center point. Is that right? If so, that's what a view does by default.
You should do a search on "Translating, Scaling, and Rotating Views" in Xcode and read the resulting article.
Note that all of iOS's angles are specified in radians, not degrees.
Your sample images aren't really helpful, since we can't see the frame that the image view is drawn into. It's almost impossible to tell what your image views are doing and what they are supposed to be doing instead based on the pictures you linked from your dropbox.
A full 360 degrees is 2π radians.
You should use:

CGFloat degrees = 45;
CGFloat radians = degrees / 180 * M_PI;
_fanImage.transform = CGAffineTransformMakeRotation(radians);
That will fix the rotation amount for your code, but probably not the rotation position.
I'm having a nightmare of a time trying to correct a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection: so that its size and orientation match exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);

if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}

[self.view.layer insertSublayer:previewLayer atIndex:0];
Now, when I have the returned image, it needs cropping, as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill, and I have my UIImageView also set to AspectFill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera uses the resolution 720x1280 but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera preview was centred in the view, but the cropped image is taken from the top section of the photo, which was never seen on screen.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same display ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.bounds
CGFloat widthToHeightRatio = self.bounds.size.width / self.bounds.size.height;
CGRect cropRect;

// Set the crop rect's smaller dimension to match the image's smaller dimension,
// and scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}

// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to the rectangle of your 768x1024 display: you end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
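From there, applying the crop is the standard CGImage dance (a sketch; `image` is the captured UIImage and `cropRect` comes from the code above):

CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
// Preserve scale and orientation so the result displays the right way up.
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);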