Blurred Image as background in UIView - ios

So I have an image that I am rendering in a view. For the background color of that UIView I actually want a UIImage. The way I was doing it was taking currentImageView and applying this makeBlurImage function:
func makeBlurImage(targetImageView: UIImageView?) {
    let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.light)
    let blurEffectView = UIVisualEffectView(effect: blurEffect)
    blurEffectView.frame = targetImageView!.bounds
    blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight] // for supporting device rotation
    targetImageView?.addSubview(blurEffectView)
}
I typically set the background image and apply the blur inside the lazy var that creates the UIView, as shown below.
lazy var blurryBackGround: UIView = {
    let blurryBackGround = UIView()
    blurryBackGround.backgroundColor = UIColor.red
    let blurryImage = currentImageView
    makeBlurImage(targetImageView: blurryImage)
    return blurryBackGround
}()
I have also included the UIImageView creation:
lazy var currentEventImage: UIImageView = {
    let currentEvent = UIImageView()
    currentEvent.clipsToBounds = true
    currentEvent.translatesAutoresizingMaskIntoConstraints = false
    currentEvent.contentMode = .scaleAspectFill
    currentEvent.isUserInteractionEnabled = true
    currentEvent.layer.masksToBounds = true
    let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handlePromoVid))
    currentEvent.addGestureRecognizer(tapGestureRecognizer)
    return currentEvent
}()
Any idea how I would correctly do this?
The way I currently implemented it blurs the original image, makes the blurred image the main image, and leaves the unblurred image as the background.

Use following extension:
extension UIImageView {
    func renderedImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
It captures the currently presented image view into a UIImage.
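For instance, a minimal sketch of how the snapshot could be reused (`currentEventImage` comes from the question's code; `backgroundImageView` is a hypothetical name):

```swift
// Sketch: snapshot the image view (including any blur subviews rendered into
// its layer) and reuse the result elsewhere as a plain image.
let snapshot = currentEventImage.renderedImage()
let backgroundImageView = UIImageView(image: snapshot) // hypothetical background view
backgroundImageView.contentMode = .scaleAspectFill
```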
EDIT
How about you try not capturing it yet, just use this as a background:
lazy var blurryBackGround: UIView = {
    let blurryBackGround = currentImageView
    makeBlurImage(targetImageView: blurryBackGround)
    return blurryBackGround
}()

Related

Change White Pixels in a UIImage

I have two UIImages, both created from OpenCV: one is an outline (from a structuring element) and one is filled (from grab cut).
My two images are like this:
I am trying to change all of the white pixels in the outlined image, because I want to merge the two images together so I end up with a red outline and a white filled area in the middle. I am using the code below to merge the two. I am aware that, since I am just alpha-blending them, the result will be a pinkish colour instead of red and grey instead of white.
// Start an image context
UIGraphicsBeginImageContext(grabcutResult.size)
// Create a rect for this draw session
let rect = CGRect(x: 0.0, y: 0.0, width: grabcutResult.size.width, height: grabcutResult.size.height)
grabcutResult.draw(in: rect)
redGrabcutOutline.draw(in: rect, blendMode: .normal, alpha: 0.5)
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
and the idea is it should look something like this.
I want this operation to be fast, but the only solutions I have found either work on UIImageView (which only affects how the image is rendered, not the underlying UIImage) or loop over the entire image pixel by pixel.
I am trying to find a solution that masks all the white pixels in the outline to red without looping over the entire image pixel by pixel, as that is too slow.
Ideally it would be good if I could get OpenCV to just return a red outline instead of white, but I don't think it's possible to change this (maybe I'm wrong).
Using Swift, btw... any help is appreciated, thanks in advance.
This may work for you - using only Swift code...
extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskingColors: [CGFloat] = [1, 255, 1, 255, 1, 255]
        let bounds = CGRect(origin: .zero, size: size)
        let maskImage = cgImage!
        var returnImage: UIImage?
        // make sure image has no alpha channel
        let rFormat = UIGraphicsImageRendererFormat()
        rFormat.opaque = true
        let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
        let noAlphaImage = renderer.image { (context) in
            self.draw(at: .zero)
        }
        let noAlphaCGRef = noAlphaImage.cgImage
        if let imgRefCopy = noAlphaCGRef?.copy(maskingColorComponents: maskingColors) {
            let rFormat = UIGraphicsImageRendererFormat()
            rFormat.opaque = false
            let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
            returnImage = renderer.image { (context) in
                context.cgContext.clip(to: bounds, mask: maskImage)
                context.cgContext.setFillColor(color.cgColor)
                context.cgContext.fill(bounds)
                context.cgContext.draw(imgRefCopy, in: bounds)
            }
        }
        return returnImage
    }
}
This extension returns a UIImage with white replaced with the passed UIColor, and the black "background" changed to transparent.
Use it in this manner:
// change filled white star to gray with transparent background
let modFilledImage = filledImage.maskWithColor(color: UIColor(red: 200/255, green: 200/255, blue: 200/255, alpha: 1))
// change outlined white star to red with transparent background
let modOutlineImage = outlineImage.maskWithColor(color: UIColor.red)
// combine the images on a black background
Here is a full example, using your two original images (most of the code is setting up image views to show the results):
extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskingColors: [CGFloat] = [1, 255, 1, 255, 1, 255]
        let bounds = CGRect(origin: .zero, size: size)
        let maskImage = cgImage!
        var returnImage: UIImage?
        // make sure image has no alpha channel
        let rFormat = UIGraphicsImageRendererFormat()
        rFormat.opaque = true
        let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
        let noAlphaImage = renderer.image { (context) in
            self.draw(at: .zero)
        }
        let noAlphaCGRef = noAlphaImage.cgImage
        if let imgRefCopy = noAlphaCGRef?.copy(maskingColorComponents: maskingColors) {
            let rFormat = UIGraphicsImageRendererFormat()
            rFormat.opaque = false
            let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
            returnImage = renderer.image { (context) in
                context.cgContext.clip(to: bounds, mask: maskImage)
                context.cgContext.setFillColor(color.cgColor)
                context.cgContext.fill(bounds)
                context.cgContext.draw(imgRefCopy, in: bounds)
            }
        }
        return returnImage
    }
}
class MaskWorkViewController: UIViewController {

    let origFilledImgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .center
        return v
    }()
    let origOutlineImgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .center
        return v
    }()
    let modifiedFilledImgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .center
        return v
    }()
    let modifiedOutlineImgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .center
        return v
    }()
    let combinedImgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .center
        return v
    }()
    let origStack: UIStackView = {
        let v = UIStackView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.axis = .horizontal
        v.spacing = 20
        return v
    }()
    let modifiedStack: UIStackView = {
        let v = UIStackView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.axis = .horizontal
        v.spacing = 20
        return v
    }()
    let mainStack: UIStackView = {
        let v = UIStackView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.axis = .vertical
        v.alignment = .center
        v.spacing = 10
        return v
    }()
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let filledImage = UIImage(named: "StarFill"),
              let outlineImage = UIImage(named: "StarEdge") else {
            return
        }
        var modifiedFilledImage: UIImage = UIImage()
        var modifiedOutlineImage: UIImage = UIImage()
        var combinedImage: UIImage = UIImage()
        // for both original images, replace white with color
        // and make black transparent
        if let modFilledImage = filledImage.maskWithColor(color: UIColor(red: 200/255, green: 200/255, blue: 200/255, alpha: 1)),
           let modOutlineImage = outlineImage.maskWithColor(color: UIColor.red) {
            modifiedFilledImage = modFilledImage
            modifiedOutlineImage = modOutlineImage
            let rFormat = UIGraphicsImageRendererFormat()
            rFormat.opaque = true
            let renderer = UIGraphicsImageRenderer(size: modifiedFilledImage.size, format: rFormat)
            // combine modified images on black background
            combinedImage = renderer.image { (context) in
                context.cgContext.setFillColor(UIColor.black.cgColor)
                context.cgContext.fill(CGRect(origin: .zero, size: modifiedFilledImage.size))
                modifiedFilledImage.draw(at: .zero)
                modifiedOutlineImage.draw(at: .zero)
            }
        }
        // setup image views and set .image properties
        setupUI(filledImage.size)
        origFilledImgView.image = filledImage
        origOutlineImgView.image = outlineImage
        modifiedFilledImgView.image = modifiedFilledImage
        modifiedOutlineImgView.image = modifiedOutlineImage
        combinedImgView.image = combinedImage
    }
    func setupUI(_ imageSize: CGSize) -> Void {
        origStack.addArrangedSubview(origFilledImgView)
        origStack.addArrangedSubview(origOutlineImgView)
        modifiedStack.addArrangedSubview(modifiedFilledImgView)
        modifiedStack.addArrangedSubview(modifiedOutlineImgView)
        var lbl = UILabel()
        lbl.textAlignment = .center
        lbl.text = "Original Images"
        mainStack.addArrangedSubview(lbl)
        mainStack.addArrangedSubview(origStack)
        lbl = UILabel()
        lbl.textAlignment = .center
        lbl.numberOfLines = 0
        lbl.text = "Modified Images\n(UIImageViews have Green Background)"
        mainStack.addArrangedSubview(lbl)
        mainStack.addArrangedSubview(modifiedStack)
        lbl = UILabel()
        lbl.textAlignment = .center
        lbl.text = "Combined on Black Background"
        mainStack.addArrangedSubview(lbl)
        mainStack.addArrangedSubview(combinedImgView)
        view.addSubview(mainStack)
        NSLayoutConstraint.activate([
            mainStack.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20.0),
            mainStack.centerXAnchor.constraint(equalTo: view.centerXAnchor, constant: 0.0),
        ])
        [origFilledImgView, origOutlineImgView, modifiedFilledImgView, modifiedOutlineImgView, combinedImgView].forEach {
            $0.backgroundColor = .green
            NSLayoutConstraint.activate([
                $0.widthAnchor.constraint(equalToConstant: imageSize.width),
                $0.heightAnchor.constraint(equalToConstant: imageSize.height),
            ])
        }
    }
}
And the result, showing the original, modified and final combined image... Image views have green backgrounds to show the transparent areas:
The idea is to bitwise-OR the two masks together, which "merges" them. Since the combined grayscale image is still a single-channel (1-channel) image, we convert it to three channels so we can apply color to it. Finally, we color the outline mask red to get our result.
I implemented it in Python OpenCV, but you can adapt the same idea in Swift.
import cv2
# Read in images as grayscale
full = cv2.imread('1.png', 0)
outline = cv2.imread('2.png', 0)
# Bitwise-or masks
combine = cv2.bitwise_or(full, outline)
# Combine to 3 color channel and color outline red
combine = cv2.merge([combine, combine, combine])
combine[outline > 120] = (57,0,204)
cv2.imshow('combine', combine)
cv2.waitKey()
Benchmarks using IPython
In [3]: %timeit combine()
782 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Using NumPy's vectorized operations, this approach is quite fast.
try this (note that since it references self.size and self.cgImage, it must be declared inside a UIImage extension):
extension UIImage {
    public func maskWithColor2(color: UIColor) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.size, false, UIScreen.main.scale)
        let context = UIGraphicsGetCurrentContext()!
        color.setFill()
        context.translateBy(x: 0, y: self.size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        let rect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
        context.draw(self.cgImage!, in: rect)
        context.setBlendMode(CGBlendMode.sourceIn)
        context.addRect(rect)
        context.drawPath(using: CGPathDrawingMode.fill)
        let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return coloredImage!
    }
}
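Usage might then look like this (a sketch; `outlineImage` is assumed to be the white outline image from the question, and `maskWithColor2` is assumed to be declared in a `UIImage` extension):

```swift
// Hypothetical usage: recolor the white outline red via sourceIn blending.
let redOutline = outlineImage.maskWithColor2(color: .red)
```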

Efficient off-screen UIView rendering and mirroring

I have an "off-screen" UIView hierarchy which I want to render in different locations on my screen. In addition, it should be possible to show only parts of this view hierarchy, and it should reflect all changes made to the hierarchy.
The difficulties:
The UIView method drawHierarchy(in:afterScreenUpdates:) always calls draw(_:) and is therefore very inefficient for large hierarchies if you want to incorporate every change to the view hierarchy. You would have to redraw it on every screen update, or observe all changing properties of all views. (Draw view hierarchy documentation)
The UIView method snapshotView(afterScreenUpdates:) also does not help much, since I have not found a way to get a correct view hierarchy drawing when the hierarchy is "off-screen". (Snapshot view documentation)
"Off-Screen": The root view of this view hierarchy is not part of the UI of the app. It has no superview.
Below you can see a visual representation of my idea:
Here's how I would go about doing it. First, I would duplicate the view you are trying to duplicate. I wrote a little extension for this:
extension UIView {
    func duplicate<T: UIView>() -> T {
        return NSKeyedUnarchiver.unarchiveObject(with: NSKeyedArchiver.archivedData(withRootObject: self)) as! T
    }

    func copyProperties(fromView: UIView, recursive: Bool = true) {
        contentMode = fromView.contentMode
        tag = fromView.tag
        backgroundColor = fromView.backgroundColor
        tintColor = fromView.tintColor
        layer.cornerRadius = fromView.layer.cornerRadius
        layer.maskedCorners = fromView.layer.maskedCorners
        layer.borderColor = fromView.layer.borderColor
        layer.borderWidth = fromView.layer.borderWidth
        layer.shadowOpacity = fromView.layer.shadowOpacity
        layer.shadowRadius = fromView.layer.shadowRadius
        layer.shadowPath = fromView.layer.shadowPath
        layer.shadowColor = fromView.layer.shadowColor
        layer.shadowOffset = fromView.layer.shadowOffset
        clipsToBounds = fromView.clipsToBounds
        layer.masksToBounds = fromView.layer.masksToBounds
        mask = fromView.mask
        layer.mask = fromView.layer.mask
        alpha = fromView.alpha
        isHidden = fromView.isHidden
        if let gradientLayer = layer as? CAGradientLayer, let fromGradientLayer = fromView.layer as? CAGradientLayer {
            gradientLayer.colors = fromGradientLayer.colors
            gradientLayer.startPoint = fromGradientLayer.startPoint
            gradientLayer.endPoint = fromGradientLayer.endPoint
            gradientLayer.locations = fromGradientLayer.locations
            gradientLayer.type = fromGradientLayer.type
        }
        if let imgView = self as? UIImageView, let fromImgView = fromView as? UIImageView {
            imgView.tintColor = .clear
            imgView.image = fromImgView.image?.withRenderingMode(fromImgView.image?.renderingMode ?? .automatic)
            imgView.tintColor = fromImgView.tintColor
        }
        if let btn = self as? UIButton, let fromBtn = fromView as? UIButton {
            btn.setImage(fromBtn.image(for: fromBtn.state), for: fromBtn.state)
        }
        if let textField = self as? UITextField, let fromTextField = fromView as? UITextField {
            if let leftView = fromTextField.leftView {
                textField.leftView = leftView.duplicate()
                textField.leftView?.copyProperties(fromView: leftView)
            }
            if let rightView = fromTextField.rightView {
                textField.rightView = rightView.duplicate()
                textField.rightView?.copyProperties(fromView: rightView)
            }
            textField.attributedText = fromTextField.attributedText
            textField.attributedPlaceholder = fromTextField.attributedPlaceholder
        }
        if let lbl = self as? UILabel, let fromLbl = fromView as? UILabel {
            lbl.attributedText = fromLbl.attributedText
            lbl.textAlignment = fromLbl.textAlignment
            lbl.font = fromLbl.font
            lbl.bounds = fromLbl.bounds
        }
        if recursive {
            for (i, view) in subviews.enumerated() {
                if i >= fromView.subviews.count {
                    break
                }
                view.copyProperties(fromView: fromView.subviews[i])
            }
        }
    }
}
To use this extension, simply do:
let duplicateView = originalView.duplicate()
duplicateView.copyProperties(fromView: originalView)
parentView.addSubview(duplicateView)
Then I would mask the duplicate view to only get the particular section that you want
let mask = UIView(frame: CGRect(x: 0, y: 0, width: yourNewWidth, height: yourNewHeight))
mask.backgroundColor = .black
duplicateView.mask = mask
Finally, I would scale it to whatever size you want using CGAffineTransform:
duplicateView.transform = CGAffineTransform(scaleX: xScale, y: yScale)
The copyProperties function should work well, but you can change it if necessary to copy even more things from one view to another.
Good luck, let me know how it goes :)
I'd duplicate the content I wish to display and crop it as I want.
Let's say I have a ContentViewController which carries the view hierarchy I wish to replicate. I would encapsulate all the changes that can be made to the hierarchy inside a ContentViewModel. Something like:
struct ContentViewModel {
    let actionTitle: String?
    let contentMessage: String?
    // ...
}

class ContentViewController: UIViewController {
    func display(_ viewModel: ContentViewModel) { /* ... */ }
}
With a ClippingView (or a simple UIScrollView):
class ClippingView: UIView {

    var contentOffset: CGPoint = .zero // a way to specify the part of the view you wish to display
    var contentFrame: CGRect = .zero   // the actual size of the clipped view
    var clippedView: UIView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        clipsToBounds = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        clippedView?.frame = contentFrame
        clippedView?.frame.origin = contentOffset
    }
}
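On its own, wiring up a ClippingView might look like this (a sketch; `contentViewController` is a hypothetical instance, and it assumes the clipped view is also added as a subview, which layoutSubviews relies on):

```swift
// Hypothetical usage of the ClippingView above: show a 120x120 window
// into the content, starting at point (40, 40) of the clipped view.
let clippingView = ClippingView(frame: CGRect(x: 0, y: 0, width: 120, height: 120))
clippingView.contentFrame = contentViewController.view.bounds
clippingView.contentOffset = CGPoint(x: -40, y: -40) // negative offset reveals (40, 40)
clippingView.clippedView = contentViewController.view
clippingView.addSubview(contentViewController.view)
view.addSubview(clippingView)
```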
And with a container view controller, I would crop each instance of my content and update all of them each time something happens:
class ContainerViewController: UIViewController {

    let contentViewControllers: [ContentViewController] = // 3 in your case

    override func viewDidLoad() {
        super.viewDidLoad()
        contentViewControllers.forEach { viewController in
            addChild(viewController)
            let clippingView = ClippingView()
            clippingView.clippedView = viewController.view
            clippingView.contentOffset = // ...
            clippingView.addSubview(viewController.view)
            view.addSubview(clippingView)
            viewController.didMove(toParent: self)
        }
    }

    func somethingChange() {
        let newViewModel = ContentViewModel(...)
        contentViewControllers.forEach { $0.display(newViewModel) }
    }
}
Could this scenario work in your case?

Frame of UIImageView changes after setting new image

I have a UIImageView which is instantiated with a low-resolution thumbnail from Core Data. After the view has been presented, loading of the high-res web version starts, which will replace the original image in the UIImageView. The problem, however, is that the frame changes once the download of the high-res version completes. This can't happen, because the blur layer on top of the original image would then be smaller than the image. I use the following code:
let imageData = member.picture
if let imageData = imageData {
    let image = imageFromData(imageData)
    // Setup pictures
    self.profilePicture.image = image
    self.profilePictureBlurred.image = image
    self.profilePictureBlurred.clipsToBounds = true
    // Make profilePicture round
    let width = self.profilePicture.frame.width
    self.profilePicture.layer.cornerRadius = width * 0.5
    self.profilePicture.layer.masksToBounds = true
    self.profilePicture.layer.borderColor = UIColor.whiteColor().CGColor
    self.profilePicture.layer.borderWidth = 2.0
    // Blur profilePictureBlurred
    // Create effects
    let blur = UIBlurEffect(style: .Light)
    let vibrancy = UIVibrancyEffect(forBlurEffect: blur)
    let effectView = UIVisualEffectView(effect: blur)
    effectView.frame = self.profilePictureBlurred.frame
    let vibrantView = UIVisualEffectView(effect: vibrancy)
    vibrantView.frame = self.profilePictureBlurred.frame
    print(self.profilePictureBlurred.frame)
    // Add effects to profilePictureBlurred
    self.profilePictureBlurred.addSubview(effectView)
    self.profilePictureBlurred.addSubview(vibrantView)
    let iPid = Int(member.iPid!)
    getImageForPerson(iPid, completionHandler: { (image, error) in
        if error == nil {
            self.profilePicture.image = image
            self.profilePictureBlurred.image = image
            print(self.profilePictureBlurred.frame)
        } else {
            print("Error: \(error)")
        }
    })
} else {
    print("No Image Data")
}
The size of the frame changes after the download completes; print log:
(0.0, 0.0, 600.0, 150.0)
(0.0, 0.0, 375.0, 167.0)
This means that the blur effect no longer covers the entire UIImageView, resulting in an ugly effect. Why does the UIImageView change size, and what is the correct way to set an image in a UIImageView a second time?
The goal is to create something like the LinkedIn contact view.
Edit: The outlets:
@IBOutlet weak var profilePictureBlurred: UIImageView!
@IBOutlet weak var profilePicture: UIImageView!

How to set corner radius for all corners with image view rendered from UIGraphicsGetImageFromCurrentImageContext

This is how it looks before:
and after I generate dragging view:
and this is how I generate the dragging view from cell:
private func setupDraggingViewForCell(cell: BWStudentWishlistBookCollectionViewCell) {
    UIGraphicsBeginImageContextWithOptions(cell.bounds.size, false, 0)
    cell.bookImageView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    draggingView = UIImageView(image: UIGraphicsGetImageFromCurrentImageContext())
    draggingView?.clipsToBounds = true
    draggingView?.contentMode = .ScaleAspectFit
    draggingView?.layer.masksToBounds = true
    draggingView?.layer.cornerRadius = 10
    view.addSubview(draggingView!)
    UIGraphicsEndImageContext()
}
As you can see, only one corner is rounded. Why?
You can set clipsToBounds to false, which will probably solve the problem in your case. Otherwise it will show the full frame of the cover, and you can decide afterwards what to do.
draggingView?.clipsToBounds = false
You can add this code directly into your custom cell, or adapt it as needed:
override func awakeFromNib() {
    super.awakeFromNib()
    // Initialization code
    imageView.layer.shadowColor = UIColor.blackColor().CGColor
    imageView.layer.shadowOpacity = 1
    imageView.layer.cornerRadius = 5
    imageView.layer.shadowOffset = CGSizeZero
    imageView.layer.shadowRadius = 2
    // If you are using Aspect Fill
    imageView.clipsToBounds = true
}
I hope this will help you!
Not sure, but try rounding the image rather than the image view:
func makeRoundedImage(image: UIImage, radius: CGFloat) -> UIImage {
    let imageLayer = CALayer()
    imageLayer.frame = CGRectMake(0, 0, image.size.width, image.size.height)
    imageLayer.contents = image.CGImage
    imageLayer.masksToBounds = true
    imageLayer.cornerRadius = radius
    UIGraphicsBeginImageContext(image.size)
    imageLayer.renderInContext(UIGraphicsGetCurrentContext()!)
    let roundedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return roundedImage
}
Then set the image on your image view, and clear the background color of the draggable view.
Hope it will help!

How to get the backgroundImage for a view with backgroundColor set using init:patternImage

If I set a background to my view like this:
let image:UIImage = UIImage(named: self.backgroundImages[self.backgroundImageIndex!])
self.view.backgroundColor = UIColor(patternImage:image)
How can I read the UIImage of the background in another place?
let image:UIImage = self.view.backgroundColor as UIImage // Error
There is no readily available property you can use for this. You can create your own function, or add a computed property to UIView with an extension:
extension UIView {
    var backgroundImage: UIImage! {
        let rect = self.frame
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, self.backgroundColor!.CGColor)
        CGContextFillRect(context, rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
Then use it with view.backgroundImage
