UIImage (Frame) and UIImage (Picture) merge - iOS

I have frames in multiple sizes, which may be hard-coded or decided by the server. I have to select an image from the gallery, which can of course have any dimensions.
I select an image from the gallery.
I generate a white-background UIImage in code:
let size = CGSize(width: 424/2, height: 664/2)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.white.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let background_image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now I want to make another image that keeps a 20-pixel leading offset and a 20-pixel top offset, with a width and height 20 pixels smaller than the original background.
How can I achieve this?
Here is what I tried before coming to Stack Overflow:
func mergedImageWith(frontImage: UIImage?, backgroundImage: UIImage?) -> UIImage {
    if backgroundImage == nil {
        return frontImage!
    }

    let size = CGSize(width: 424/2, height: 664/2)

    // Draw a plain white background image
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    UIColor.white.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let backgroundImage2: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Compose: background first, then the picked image aspect-filled on top
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    backgroundImage2?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    frontImage?.draw(in: getAspectFillFrame(sizeImageView: size, sizeImage: (frontImage?.size)!))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()

    return newImage
}
Here the image is drawn with aspect fill, but the issue is the starting position and the full width and height.
In very simple words: it's like making custom frames and merging them with images (aspect fill) for printing.
Can anyone help me out?
Thanks.

Try not to end your image context until all of the images have been drawn. (I am also including some code that I have working, edited down a bit.)
class LayeredImageView: UIImageView {
    var imageBackground: UIImage!
    var imageForeground: UIImage!

    // Wrapped in a method so the snippet compiles; the method name is illustrative.
    func composeLayers() {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, false, UIScreen.main.scale)
        self.image?.draw(in: self.frame)
        imageBackground.draw(in: CGRect(<rect>))
        imageForeground.draw(in: CGRect(<rect>))
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}
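For the original question's 20-point inset with aspect fill, a minimal sketch could look like the following. The exact inset geometry and the aspect-fill math (which stands in for the asker's getAspectFillFrame helper) are assumptions, not part of the original answer.
import UIKit

// Sketch: composes the picked photo onto a white background,
// aspect-filled into a rect inset 20 points from each edge.
func frameImage(_ picture: UIImage, inBackgroundOfSize size: CGSize) -> UIImage? {
    let inset: CGFloat = 20
    let targetRect = CGRect(x: inset, y: inset,
                            width: size.width - 2 * inset,
                            height: size.height - 2 * inset)

    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // White frame
    UIColor.white.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))

    // Clip so the aspect-filled photo cannot spill outside the target rect,
    // then scale it just enough to cover the rect and center it.
    context.saveGState()
    context.clip(to: targetRect)
    let scale = max(targetRect.width / picture.size.width,
                    targetRect.height / picture.size.height)
    let drawSize = CGSize(width: picture.size.width * scale,
                          height: picture.size.height * scale)
    picture.draw(in: CGRect(x: targetRect.midX - drawSize.width / 2,
                            y: targetRect.midY - drawSize.height / 2,
                            width: drawSize.width,
                            height: drawSize.height))
    context.restoreGState()

    return UIGraphicsGetImageFromCurrentImageContext()
}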

Related

How to apply scale when drawing and composing UIImage

I have the following functions.
extension UIImage
{
    var width: CGFloat
    {
        return size.width
    }

    var height: CGFloat
    {
        return size.height
    }

    private static func circularImage(diameter: CGFloat, color: UIColor) -> UIImage
    {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: diameter, height: diameter), false, 0)
        let context = UIGraphicsGetCurrentContext()!
        context.saveGState()

        let rect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        context.setFillColor(color.cgColor)
        context.fillEllipse(in: rect)

        context.restoreGState()
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }

    private func addCentered(image: UIImage, tintColor: UIColor) -> UIImage
    {
        let topImage = image.withTintColor(tintColor, renderingMode: .alwaysTemplate)
        let bottomImage = self

        UIGraphicsBeginImageContext(size)

        let bottomRect = CGRect(x: 0, y: 0, width: bottomImage.width, height: bottomImage.height)
        bottomImage.draw(in: bottomRect)

        let topRect = CGRect(x: (bottomImage.width - topImage.width) / 2.0,
                             y: (bottomImage.height - topImage.height) / 2.0,
                             width: topImage.width,
                             height: topImage.height)
        topImage.draw(in: topRect, blendMode: .normal, alpha: 1.0)

        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return mergedImage
    }
}
They work fine, but how do I properly apply UIScreen.main.scale to support retina screens?
I've looked at what's been done here but can't figure it out yet.
Any ideas?
Accessing UIScreen.main.scale itself is a bit problematic, as you may only access it from the main thread (while you usually want to put heavier image processing on a background thread). So I suggest one of these ways instead.
First of all, you can replace UIGraphicsBeginImageContext(size) with
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
The last argument (0.0) is the scale; per the docs, "if you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen."
If instead you want to retain the original image's scale on the resulting UIImage, you can do this: after topImage.draw, instead of getting the UIImage with UIGraphicsGetImageFromCurrentImageContext, get a CGImage with
let cgImage = context.makeImage()!
and then construct the UIImage with the scale and orientation of the original image (as opposed to the defaults):
let mergedImage = UIImage(cgImage: cgImage,
                          scale: image.scale,
                          orientation: image.imageOrientation)
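The question's addCentered never stores a context variable, so as a rough sketch of how the second approach slots in (grabbing the current context this way is an assumption):
// Inside addCentered(image:tintColor:), after topImage.draw(in:blendMode:alpha:):
let context = UIGraphicsGetCurrentContext()!   // the bitmap context opened above
let cgImage = context.makeImage()!             // raw bitmap, no scale/orientation attached
UIGraphicsEndImageContext()

// Reattach the original image's scale and orientation instead of the defaults.
return UIImage(cgImage: cgImage,
               scale: image.scale,
               orientation: image.imageOrientation)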

UIImage convert to data and back changes size?

I create a picture programmatically, convert it to data and back, and the picture I get back has a different size.
let image1: UIImage = {
    let size = CGSize(width: 50, height: 50)
    let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    UIColor.black.setFill()
    UIRectFill(rect)
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()

    return image
}()

let data = UIImagePNGRepresentation(image1)!
let image2 = UIImage(data: data)!

print(image1.size) // (50.0, 50.0)
print(image2.size) // (100.0, 100.0)
Please explain what happens and how to solve the problem. Thank you!
The "culprit" line:
UIGraphicsBeginImageContextWithOptions(size, false, 0)
Looking at the doc of UIGraphicsBeginImageContextWithOptions(), for the last parameter (scale):
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
If your device has a Retina (@2x) screen, the scale factor will be 2, so your 50x50-point image is backed by a 100x100-pixel bitmap. PNG data stores only pixels, so UIImage(data:) decodes it at scale 1 and reports a 100x100-point size.
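If the goal is to get the same 50x50-point size back, one option (my assumption, not part of the original answer) is to pass the scale back in when decoding:
// Reconstruct the UIImage with the original scale so 100x100 pixels
// are again interpreted as 50x50 points on a 2x screen.
let data = UIImagePNGRepresentation(image1)!
let image2 = UIImage(data: data, scale: image1.scale)!
print(image2.size) // (50.0, 50.0)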

Frame and Image in specific aspect Ratio

I created an empty gray UIImage using the code below:
let size = CGSize(width: 212, height: 332)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.gray.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let backgroundImage2: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
It shows the output as in the attached screenshot.
Now I need to put a UIImage in a specific area of this UIImage, as shown in the image below: the top, left, and right insets should be 30 pixels, and the bottom larger, say 200 pixels, while maintaining the inner image's aspect ratio.
Use two image views (either UIImageView or GLKView), making the "image" a subview of the "gray background" view. After positioning the "image" correctly, merge the two images into one.
Here's an extension to UIView that I use:
extension UIView {
    public func createImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
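A rough usage sketch follows. The 212x332 size and the 30/200-point insets come from the question; the view setup, the content mode, and the asset name are assumptions.
// Hypothetical setup: gray 212x332 background with an inner image view
// inset 30 points on top/left/right and 200 points on the bottom.
let backgroundView = UIImageView(frame: CGRect(x: 0, y: 0, width: 212, height: 332))
backgroundView.image = backgroundImage2            // the gray image from the question

let innerImageView = UIImageView(frame: CGRect(x: 30, y: 30,
                                               width: 212 - 30 - 30,
                                               height: 332 - 30 - 200))
innerImageView.contentMode = .scaleAspectFit       // preserves the inner image's aspect ratio
innerImageView.clipsToBounds = true
innerImageView.image = UIImage(named: "photo")     // hypothetical asset name
backgroundView.addSubview(innerImageView)

// Flatten the view hierarchy into a single UIImage
let merged = backgroundView.createImage()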

Xcode: why does Xcode show an image bigger than AS-IS?

Why does Xcode show an image bigger than AS-IS?
http://users.telenet.be/thomazz/ScreenShot4.png
http://users.telenet.be/thomazz/ScreenShot3.png
Scenario:
I have an image.
I resize this UIImage.
I export the resized UIImage.
I comment out my resize code.
I import the resized image into Xcode.
Problem 1: Xcode shows the image twice as big as normal.
Problem 2: when I run my app with the exported, resized image, it is twice as big.
See the screenshots above.
This depends entirely on the frame of your UIImageView, not on the image's dimensions.
So if you have a 1024x1024 image and you place it in a 10x10 frame, it will render at 10x10, and vice versa.
If you want it bigger, make your UIImageView bigger.
Edit: so it is a Google Maps icon.
Set the resized image as the marker icon, i.e.,
marker.icon = self.imageWithImage(image: UIImage(named: "imageName")!, scaledToSize: CGSize(width: 3.0, height: 3.0))
Add this function:
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Dear Mohammad Bashir Sidani, I have this code, and it works, but it creates a new UIImage.
Then I use UIImagePNGRepresentation(resizedImage) to export the image.
I disable the code below to use the "programmatically resized image".
This new resized image is blown up by Xcode... :(
extension UIImage {
    func resizeImage(_ dimension: CGFloat, opaque: Bool, contentMode: UIViewContentMode = .scaleAspectFit) -> UIImage {
        var width: CGFloat
        var height: CGFloat
        var newImage: UIImage

        let size = self.size
        let aspectRatio = size.width / size.height

        switch contentMode {
        case .scaleAspectFit:
            if aspectRatio > 1 { // Landscape image
                width = dimension
                height = dimension / aspectRatio
            } else { // Portrait image
                height = dimension
                width = dimension * aspectRatio
            }
        default:
            fatalError("UIImage.resizeToFit(): FATAL: Unimplemented ContentMode")
        }

        if #available(iOS 10.0, *) {
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = opaque
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height), format: renderFormat)
            newImage = renderer.image { (context) in
                self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
            }
        } else {
            UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), opaque, 0)
            self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
            newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
        }

        return newImage
    }
}

How to apply and move a UIImageView layered above another UIImageView (Swift 3)

I have read many guides here on how to create a new image by merging two existing ones, using UIGraphics and the layer.render methods on the two UIImageViews, and I can now create and then save my new image. The problem is that I can't understand how to put the second UIImageView where I want it, at the bottom for example. I'll post an image of a merged result and the function in my code that makes this possible.
Captured merged photo
And here's my code that does the trick:
extension UIImage {
    class func imageWithWatermark(image1: UIImageView, image2: UIImageView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(image1.bounds.size, false, 0.0)

        let frame = image1.frame
        image2.frame = CGRect(x: 0, y: frame.size.height * 0.80, width: frame.size.width, height: frame.size.height * 0.20)

        image1.layer.render(in: UIGraphicsGetCurrentContext()!)
        image2.layer.render(in: UIGraphicsGetCurrentContext()!)

        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return img!
    }
}
And then my func that saves the merged image:
func addWatermark() {
    let newImage = UIImage.imageWithWatermark(image1: cameraPreview, image2: provaImage)
    UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil)
}
You can use this function, which merges two images and places the second at the bottom:
func mergeTwoImageSeconInBottom(backgroundImage: UIImage, imageOnBottom: UIImage) -> UIImage {
    let size = YOUR_CG_SIZE
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

    backgroundImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    imageOnBottom.draw(at: CGPoint(x: (size.width - imageOnBottom.size.width) / 2,
                                   y: size.height - imageOnBottom.size.height))

    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
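A possible call site for the asker's setup follows. The view names cameraPreview and provaImage come from the question; pulling their images out this way, and the force unwraps, are assumptions.
// Hypothetical usage: merge the two image views' current images and save the result.
// Assumes both views have images set and YOUR_CG_SIZE inside the helper was
// replaced with the desired output size.
let merged = mergeTwoImageSeconInBottom(backgroundImage: cameraPreview.image!,
                                        imageOnBottom: provaImage.image!)
UIImageWriteToSavedPhotosAlbum(merged, nil, nil, nil)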

Resources