Cropped image doesn't display correctly inside UIImageView - iOS

I have a rather interesting issue where a cropped image is not displaying correctly inside a UIImageView.
In my app users are able to draw custom shapes and then crop the image.
For drawing shapes I used this GitHub library: ZImageCropper.
Here's how I crop the image:
// Render the image view's layer into a 1x image context and capture the result.
UIGraphicsBeginImageContextWithOptions(pickedImage.bounds.size, false, 1)
pickedImage.layer.render(in: UIGraphicsGetCurrentContext()!)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
After that the image gets placed inside a UIImageView, and here's what I got:
But when I place an image from the asset catalog into the same UIImageView, here's the result:
Both images have the same size. I resized the cropped image manually when I tried to fix this bug, but the result stayed the same. The UIImageView content mode is Aspect Fit.
What am I doing wrong?

Related

Flipped image for Layer

So I am trying to create a flipped image for a CAShapeLayer, and I have the following code:
shiplayer.contents = #imageLiteral(resourceName: "spaceship").withHorizontallyFlippedOrientation().cgImage
view.layer.addSublayer(shiplayer)
But the image rendered by the shiplayer is still the original, unflipped image.
I tested it on a UIImageView, and the image is flipped properly.
What can I do to flip the image for a CALayer?
Thanks
The CGImage is the underlying bitmap; it knows nothing of the UIImage's properties, such as its orientation, which is why withHorizontallyFlippedOrientation has no visible effect here. So draw the image, flipped, into a new image graphics context and use the resulting image as the basis for a new CGImage.
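For illustration, a minimal Swift sketch of that idea, reusing the shiplayer and the "spaceship" asset from the question: draw the image into a new context with a horizontal flip applied to the transform, so the mirroring is baked into the bitmap that backs the resulting CGImage.
let original = #imageLiteral(resourceName: "spaceship")
UIGraphicsBeginImageContextWithOptions(original.size, false, original.scale)
let ctx = UIGraphicsGetCurrentContext()!
// Mirror subsequent drawing around the image's vertical center line.
ctx.translateBy(x: original.size.width, y: 0)
ctx.scaleBy(x: -1, y: 1)
original.draw(in: CGRect(origin: .zero, size: original.size))
let flipped = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
// The flip now lives in the bitmap itself, so the layer shows it.
shiplayer.contents = flipped.cgImage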

How to store irregular shapes as a UIImage?

I used this GitHub Repository to make an image cropper that could crop images into irregular shapes. As of right now, I have implemented this repository and am able to crop UIImages in irregular shapes as depicted in the link.
In its current state, the cropper uses a UIBezierPath and a CAShapeLayer to take a normal (rectangular) UIImage and cut out the parts not included within the shape. Because I am trying to store this UIImage in a database, I need it stored as just a UIImage, without also having to store the CAShapeLayer and UIBezierPath that crop it.
Is there a way to turn this cropped section into a UIImage with an irregular shape? If not, is there an alternative way of cropping a photo using user-drawn paths, as shown in the link above, that would allow it to be stored as an irregularly shaped UIImage?
Thanks!
Please check this library thoroughly. It defines a method where you can get the image from the graphics context:
func cropImage() {
    // Render the masked image view's layer (shape mask included) into an
    // image context and capture the result as a plain UIImage.
    UIGraphicsBeginImageContextWithOptions(tempImageView.bounds.size, false, 1)
    tempImageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    self.croppedImage = newImage!
}
This is the method where you get the UIImage; you can then use it as your requirement dictates.
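If the goal is to persist the result, one possible follow-up (a sketch assuming the croppedImage property from the snippet above) is to encode it as PNG, since PNG keeps the transparency outside the irregular shape:
// Assumption: croppedImage is the non-optional UIImage produced by cropImage() above.
if let data = croppedImage.pngData() {
    // Store `data` (e.g. as a BLOB) and rebuild it later with UIImage(data: data).
}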

Combine two UIImageViews into one image - stretching

When I try to combine two UIImageViews, the images are stretching. Here's the code I am using:
CGSize size =CGSizeMake(MAX(self.imgCapture.size.width, self.imgGallary.size.width), MAX(self.imgCapture.size.height, self.imgGallary.size.height));
UIGraphicsBeginImageContext(size);
[self.imgCaptured.image drawInRect:CGRectMake(self.view.frame.origin.x,self.view.frame.origin.y,size.width/2,self.imgCapture.size.height)];
[self.imgGallaryCD.image drawInRect:CGRectMake(self.view.frame.origin.x+(size.width/2),self.view.frame.origin.y,size.width/2,self.imgGallary.size.height)];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
Here's the first screenshot, where there are two UIImageViews.
The second screenshot is when I combine the images into one, but the image is stretching; I want the aspect ratio to stay like screenshot 1.
The image is not "stretching". It is squeezing. It's a matter of simple arithmetic. Looking at your image context size and your drawInRect commands, we see that your image context is the size of one image, so now you are drawing both images at half width. So they are squeezed horizontally. You need the image context to be the size of both images added together.
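As a rough illustration of that arithmetic, here is a Swift sketch with hypothetical image names: the context is as wide as the two images side by side, and each image is drawn at its own full size.
// Hypothetical names for the two source images.
let size = CGSize(width: capturedImage.size.width + galleryImage.size.width,
                  height: max(capturedImage.size.height, galleryImage.size.height))
UIGraphicsBeginImageContextWithOptions(size, false, 0)
capturedImage.draw(in: CGRect(x: 0, y: 0,
                              width: capturedImage.size.width,
                              height: capturedImage.size.height))
galleryImage.draw(in: CGRect(x: capturedImage.size.width, y: 0,
                             width: galleryImage.size.width,
                             height: galleryImage.size.height))
let combined = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()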

Filling a UIImageView with a small image

I'm working with a UIImage; when the image is larger than the image view, I'd like it to aspect fill.
When the image is smaller, I'd like the same: for it to stretch, keeping the aspect ratio but filling its UIImageView.
Currently, it just leaves the smaller image at its original size.
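For reference, a minimal sketch of the usual content-mode approach, with a hypothetical imageView outlet: scaleAspectFill scales the image up or down to fill the view while keeping its aspect ratio, and clipsToBounds trims whatever spills outside.
// Hypothetical outlet; .scaleAspectFill also enlarges images smaller than the view.
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true   // crop the overflow on the longer axis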

Objective-C: How does Snapchat make the text on top of an image/video so sharp and not pixelated?

My app allows users to place text on top of images, like Snapchat, and then save the image to their device. I simply add the text view on top of the image and capture it using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text from a Snapchat image, it is significantly different. Snapchat's text on top of an image is significantly sharper than mine; mine looks very pixelated. Also, I am not compressing the image at all, just saving the image as is using ALAssetsLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);
[image drawInRect:CGRectMake(0,0,imageSize.width,imageSize.height)];
CGContextScaleCTM(UIGraphicsGetCurrentContext(),scale,scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
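Here is a Swift sketch of that offscreen approach, with hypothetical names (baseImage for the full-resolution photo, textOverlay for the screen-sized overlay view):
// Hypothetical names; draw the full-resolution photo, then render the
// screen-sized text overlay scaled up to the photo's size.
let imageSize = baseImage.size
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1)
let ctx = UIGraphicsGetCurrentContext()!
baseImage.draw(in: CGRect(origin: .zero, size: imageSize))
let scale = imageSize.width / textOverlay.bounds.width
ctx.scaleBy(x: scale, y: scale)
textOverlay.layer.render(in: ctx)
let composited = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()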
Alternatively, and probably simpler, you can start with a larger UIImageView, but put it within another UIView to scale it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work, at whatever resolution you choose for the UIImageView.
