DrawView: save combined image (multiply) - iOS

I have two UIImageViews - the bottom one shows a default image (like a photo), and on the second one you can draw.
I would like to create a UIImage from both images and save it as a new image.
How can I do that? I tried:
func saveImage() {
    print("save combined image")
    let topImage = self.canvasView.image
    let bottomImage = self.backgroundImageView.image
    let size = CGSizeMake(topImage!.size.width, topImage!.size.height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    topImage!.drawInRect(CGRectMake(0, 0, size.width, topImage!.size.height))
    bottomImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(newImage, self, #selector(CanvasViewController.image(_:didFinishSavingWithError:contextInfo:)), nil)
}
But the result is not correct (stretched, and no overlay).
Any ideas?

OK, I found a solution for this - draw the bottom image first, then draw the top image with .Multiply as the blend mode:
let topImage = self.canvasView.image
let bottomImage = self.backgroundImageView.image
let size = CGSizeMake(bottomImage!.size.width, bottomImage!.size.height)
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
bottomImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height))
topImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height), blendMode: CGBlendMode.Multiply, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
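For reference, on iOS 10+ the same compositing can be written in modern Swift with UIGraphicsImageRenderer, which manages the context (and the screen scale) for you. This is a sketch assuming the same canvasView and backgroundImageView outlets as above:

```swift
func combinedImage() -> UIImage? {
    guard let topImage = canvasView.image,
          let bottomImage = backgroundImageView.image else { return nil }
    let renderer = UIGraphicsImageRenderer(size: bottomImage.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: bottomImage.size)
        bottomImage.draw(in: rect)
        // Multiply blend darkens the photo with the drawn strokes.
        topImage.draw(in: rect, blendMode: .multiply, alpha: 1.0)
    }
}
```

The renderer also removes the need to balance UIGraphicsBeginImageContext/UIGraphicsEndImageContext calls by hand.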

Related

Take a screenshot of an area covered by an UIImageView object

I am trying to take a screenshot of the area covered by a UIImageView object. I have overlaid the image with some UILabel objects and want to save a shot of just that area.
I have tried
UIGraphicsBeginImageContext(self.view.frame.size)
view.drawHierarchy(in: self.view.frame, afterScreenUpdates: true)
let memedImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
but it is giving me a shot of the whole screen.
I have also tried
UIGraphicsBeginImageContext(imageHolder.frame.size)
view.drawHierarchy(in: self.view.frame, afterScreenUpdates: true)
let memedImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
That gives me a rectangle of the top half of the screen.
If you add the UILabels as subviews of the UIImageView, you can use an extension like this to render the image view and its subviews (the labels) into an image:
extension UIImageView {
    func renderSubviewsToImage() -> UIImage? {
        guard let image = image else { return nil }
        let scale = UIScreen.main.scale
        UIGraphicsBeginImageContextWithOptions(image.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        self.layer.render(in: context)
        let renderedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return renderedImage
    }
}
view.drawHierarchy actually takes the rect in the view's own coordinate space (its bounds) rather than its frame.
I will assume that image is actually the UIImageView:
let bounds = CGRect(x: -image.frame.minX, y: -image.frame.minY, width: view.bounds.size.width, height: view.bounds.size.height)
UIGraphicsBeginImageContext(image.frame.size)
view.drawHierarchy(in: bounds, afterScreenUpdates: true)
let outputImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
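The same offset trick can be wrapped in a small helper using UIGraphicsImageRenderer (iOS 10+). A sketch - the names snapshot, target, and container are illustrative, not from the question:

```swift
// Capture only the area covered by `target` (e.g. an image view)
// inside `container` (e.g. the view controller's view).
func snapshot(of target: UIView, in container: UIView) -> UIImage {
    let targetFrame = target.convert(target.bounds, to: container)
    let renderer = UIGraphicsImageRenderer(size: targetFrame.size)
    return renderer.image { _ in
        // Shift drawing so the target's origin lands at (0, 0);
        // anything outside the context is simply clipped away.
        let drawRect = CGRect(x: -targetFrame.minX,
                              y: -targetFrame.minY,
                              width: container.bounds.width,
                              height: container.bounds.height)
        container.drawHierarchy(in: drawRect, afterScreenUpdates: true)
    }
}
```

Using convert(_:to:) instead of reading frame directly also keeps this correct when the image view is nested inside other views.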

How to position image view on top of a photo correctly using Swift

Currently I am able to save the photo with an image on top; however, the top image does not end up at the coordinates where it appears in the on-screen preview.
fileprivate func mergeUIViews() -> UIImage?
{
    let bottomImage = photo
    let topImage = uiViewInstance.image
    let bottomImageHeight = bottomImage.size.height
    let bottomImageWidth = bottomImage.size.width
    let topImageHeight = uiViewInstance.frame.height
    let topImageWidth = uiViewInstance.frame.width
    let topImageOrigin = uiViewInstance.frame.origin
    let bottomImageSize = CGSize(width: bottomImageWidth, height: bottomImageHeight)
    let topImageSize = CGSize(width: topImageWidth, height: topImageHeight)

    // Merge images
    UIGraphicsBeginImageContextWithOptions(bottomImageSize, false, 0.0)
    bottomImage.draw(in: CGRect(origin: CGPoint.zero, size: bottomImageSize))
    topImage.draw(in: CGRect(origin: topImageOrigin, size: topImageSize)) // Where I believe the problem exists
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
What works, instead of using the captured photo's actual size, is setting the bottom image size to the size of the view controller's view. This produces the desired output: the merged image fills the screen, and the overlaid image ends up at the correct origin.
fileprivate func mergeUIViews() -> UIImage?
{
    let bottomImage = photo
    let topImage = uiViewInstance.image
    let bottomImageSize = self.view.frame.size
    let topImageOrigin = uiViewInstance.frame.origin
    let topImageSize = uiViewInstance.frame.size

    // Merge images
    UIGraphicsBeginImageContextWithOptions(bottomImageSize, false, 0.0)
    bottomImage.draw(in: CGRect(origin: CGPoint.zero, size: bottomImageSize))
    topImage.draw(in: CGRect(origin: topImageOrigin, size: topImageSize)) // Now matches the on-screen position
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
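Note that rendering at self.view.frame.size discards the photo's full resolution. An alternative sketch keeps the photo's pixel size and instead scales the overlay's on-screen frame into the photo's coordinate space. It assumes the preview shows the whole photo stretched to previewSize; the function and parameter names here are illustrative:

```swift
fileprivate func mergeAtFullResolution(photo: UIImage,
                                       overlay: UIImage,
                                       overlayFrame: CGRect,
                                       previewSize: CGSize) -> UIImage? {
    // Scale factors from on-screen points to photo pixels.
    let sx = photo.size.width / previewSize.width
    let sy = photo.size.height / previewSize.height
    UIGraphicsBeginImageContextWithOptions(photo.size, false, 0.0)
    photo.draw(in: CGRect(origin: .zero, size: photo.size))
    // Map the overlay's on-screen frame into photo coordinates.
    let scaledFrame = CGRect(x: overlayFrame.minX * sx,
                             y: overlayFrame.minY * sy,
                             width: overlayFrame.width * sx,
                             height: overlayFrame.height * sy)
    overlay.draw(in: scaledFrame)
    let merged = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return merged
}
```

If the photo is shown aspect-fit or aspect-fill rather than stretched, the scale factors and an additional offset would need to account for the letterboxing or cropping.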

Screenshot is not in proper size iOS

In my iOS app I am taking a screenshot of a UIImageView, but the following code does not capture it properly.
func captureView() -> UIImage {
    // let rect: CGRect = self.imageView.bounds
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, 0.0) // add this line
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    self.view.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
I want the exact screenshot of the image in the imageView, because there are more images on it.
Kindly suggest the proper code for getting an exact UIImageView screenshot.
How about passing UIScreen.mainScreen().scale instead of 0.0: UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.mainScreen().scale)
func captureView() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.mainScreen().scale)
    drawViewHierarchyInRect(bounds, afterScreenUpdates: true)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img!
}

How to take a snapshot of AGSMapView for iOS?

Here is my code; it is getting me a blank (white) image:
UIGraphicsBeginImageContextWithOptions(self.AgsMapView.bounds.size, false, 0.0)
self.view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
snapShot = UIGraphicsGetImageFromCurrentImageContext()
Try this code!
let layer = self.AgsMapView.layer
let scale = UIScreen.mainScreen().scale
UIGraphicsBeginImageContextWithOptions(self.AgsMapView.frame.size, false, scale)
layer.renderInContext(UIGraphicsGetCurrentContext()!)
let snapShot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(snapShot!, nil, nil, nil)

Edit image into circle shape and insert into another image in Swift

I have image1, and I would like to make that image circle-shaped and then insert it into image2. How can I do this using Swift? I can't do this with Photoshop or another image-editing tool, because my image1 may be different every time.
EDIT: This what I am trying to achieve:
Flower is image1.
EDIT 2: What I have tried and kind of works:
let image2 = UIImage(named: "RedPin")
let newSize = CGSizeMake(image2!.size.width, image2!.size.height)
UIGraphicsBeginImageContext(newSize)
image2!.drawInRect(CGRectMake(0, 0, newSize.width, newSize.height))

let imageView: UIImageView = UIImageView(image: image1)
let layer: CALayer = imageView.layer
layer.masksToBounds = true
layer.cornerRadius = CGFloat(65)

UIGraphicsBeginImageContext(imageView.bounds.size)
layer.renderInContext(UIGraphicsGetCurrentContext()!)
let roundedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// decrease top image size
roundedImage.drawInRect(CGRectMake(10, 10, 45, 45), blendMode: .Normal, alpha: 1.0)
let createdNewImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Yet image1 comes out as a square with rounded corners rather than a circle, and in general the image looks low quality. Here is what the function above produces:
EDIT 3: The Toucan library is great at creating rounded images (let roundedImage = Toucan(image: newImage).maskWithEllipse().image), but the final image still looks blurry. Why is that?
I have set an image like this, with rounded corners, using much the same code as yours. Note the use of UIGraphicsBeginImageContextWithOptions with a scale of 0.0 instead of UIGraphicsBeginImageContext - the latter always renders at a scale of 1.0, which is what makes the result blurry on Retina screens.
func UserImageForAnnotation() -> UIImage {
    let userPinImg: UIImage = UIImage(named: "pin_user.png")!
    UIGraphicsBeginImageContextWithOptions(userPinImg.size, false, 0.0)
    userPinImg.drawInRect(CGRect(origin: CGPointZero, size: userPinImg.size))

    let roundRect: CGRect = CGRectMake(2, 2, userPinImg.size.width - 4, userPinImg.size.width - 4)
    let myUserImgView = UIImageView(frame: roundRect)
    myUserImgView.image = UIImage(named: "pic.png")
    // myUserImgView.backgroundColor = UIColor.blackColor()
    // myUserImgView.layer.borderColor = UIColor.whiteColor().CGColor
    // myUserImgView.layer.borderWidth = 0.5
    let layer: CALayer = myUserImgView.layer
    layer.masksToBounds = true
    layer.cornerRadius = myUserImgView.frame.size.width / 2

    UIGraphicsBeginImageContextWithOptions(myUserImgView.bounds.size, myUserImgView.opaque, 0.0)
    layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let roundedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    roundedImage.drawInRect(roundRect)
    let resultImg: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resultImg
}
This gives me exactly the result I want for my pin image.
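An alternative sketch that avoids the throwaway UIImageView/CALayer entirely: clip the context to a circle with UIBezierPath and draw image1 directly. The function name and the circleRect parameter are illustrative; the 0.0 scale keeps the result sharp at screen resolution:

```swift
func insertCircularImage(_ image1: UIImage, into image2: UIImage,
                         at circleRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image2.size, false, 0.0)
    image2.draw(in: CGRect(origin: .zero, size: image2.size))
    // Clip all subsequent drawing to a circle, then draw image1 inside it.
    UIBezierPath(ovalIn: circleRect).addClip()
    image1.draw(in: circleRect)
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
```

Because the clip is a true ellipse rather than a cornerRadius on a square layer, image1 comes out circular regardless of its size.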