Add a drop shadow on a UIImage, not UIImageView - iOS

I need to add a drop shadow to my image, not the image view. Is there any way to do that? I know I can add a shadow to the imageView like -
imageView.layer.masksToBounds = false // must be false, or the shadow is clipped away
imageView.layer.shadowColor = UIColor.gray.cgColor
imageView.layer.shadowOffset = CGSize(width: 0, height: 1)
imageView.layer.shadowOpacity = 1
imageView.layer.shadowRadius = 1.0
but I need to add the shadow to the image, not imageView. Does anyone have any clue?

I think you can use CIFilter in Core Image. CIFilter is an image processor that produces an image by manipulating one or more input images or by generating new image data.
You can check the various filter references here.
I think you can use CIHighlightShadowAdjust.
CIHighlightShadowAdjust exposes the properties you use to configure a highlight-shadow adjust filter.
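A rough sketch of how that filter might be wired up (my assumption, not tested here; note that CIHighlightShadowAdjust adjusts the tonal shadows within a photo rather than drawing a drop shadow around it):

import CoreImage
import UIKit

// Sketch only: lighten or darken the tonal shadows of an image.
// The function name is an assumption; "inputShadowAmount" is the
// documented key of CIHighlightShadowAdjust.
func adjustShadows(of image: UIImage, shadowAmount: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIHighlightShadowAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(shadowAmount, forKey: "inputShadowAmount") // -1.0 ... 1.0
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}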

Just for @dfd.
So, I went and had a look at Create new UIImage by adding shadow to existing UIImage. After scrolling down a bit, I started to find several Swift based solutions. Intrigued, I threw them into a Playground to see what they could do.
I settled on this solution...
import UIKit

extension UIImage {
    /// Returns a new image with the specified shadow properties.
    /// This will increase the size of the image to fit the shadow and the original image.
    func withShadow(blur: CGFloat = 6, offset: CGSize = .zero, color: UIColor = UIColor(white: 0, alpha: 0.8)) -> UIImage {
        let shadowRect = CGRect(
            x: offset.width - blur,
            y: offset.height - blur,
            width: size.width + blur * 2,
            height: size.height + blur * 2
        )
        UIGraphicsBeginImageContextWithOptions(
            CGSize(
                width: max(shadowRect.maxX, size.width) - min(shadowRect.minX, 0),
                height: max(shadowRect.maxY, size.height) - min(shadowRect.minY, 0)
            ),
            false, 0
        )
        let context = UIGraphicsGetCurrentContext()!
        context.setShadow(
            offset: offset,
            blur: blur,
            color: color.cgColor
        )
        draw(
            in: CGRect(
                x: max(0, -shadowRect.origin.x),
                y: max(0, -shadowRect.origin.y),
                width: size.width,
                height: size.height
            )
        )
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
let sourceImage = UIImage(named: "LogoSmall.png")!
let shadowed = sourceImage.withShadow(blur: 6, color: .red)
But wait, that's not a drop shadow, it's an outline!
🙄 Apparently we need to hand-hold everybody nowadays...
Changing the parameters to ...
let shadowed = sourceImage.withShadow(blur: 6, offset: CGSize(width: 5, height: 5), color: .red)
Produces a drop shadow. I like the solution because it doesn't make assumptions and provides a suitable number of parameters to change the output as desired.
I liked the solution so much that I copied the extension into my personal library; you never know when it might come in handy.
Remember, in order to produce this style of image, the original image needs to be transparent.
Remember, iOS has been around a LONG time, and ObjC has been around even longer. You're likely to come across many solutions which are only presented in ObjC, which means it's still important to have the ability to at least read the code. If we're lucky, other members of the community will produce suitable Swift variants, but this isn't always possible.
I'm sure I don't need to go to the extent of writing a full tutorial on how to include images in Playground, there are plenty of examples about that 😉

Related

Drawing a circular loader

I have to create a circular loader as per the image below (the front is thicker than the tail).
I am able to create a circular progress bar, and I have also provided a rotation animation.
Below is the code for the circle:
func circleFrame() -> CGRect {
    var circleFrame = CGRect(x: 0, y: 0, width: 2 * circleRadius, height: 2 * circleRadius)
    let circlePathBounds = circlePathLayer.bounds
    circleFrame.origin.x = circlePathBounds.midX - circleFrame.midX
    circleFrame.origin.y = circlePathBounds.midY - circleFrame.midY
    return circleFrame
}

func circlePath() -> UIBezierPath {
    return UIBezierPath(ovalIn: circleFrame())
}
Using the above code I can create a circle of uniform width, but not one like the displayed image.
Please guide me on how to create a loader like the image above (tail thinner than front). Any idea or suggestion would be great.
The simplest approach might be to create the image you want to display and then rotate it, rather than trying to draw it from scratch.
I haven't tried the following tutorial but I'm including it as a sample of how this might be done:
https://bencoding.com/2015/07/27/spinning-uiimageview-using-swift/
Note that the GitHub version (linked on that page) includes a Swift 4 update.
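For the spinning part, the core of that approach is just a repeating rotation animation on the image view's layer. A minimal sketch (loaderImageView is an assumed UIImageView that already shows the tapered loader artwork):

// Spin a pre-drawn tapered loader image indefinitely.
// "loaderImageView" is an assumed name, not from the linked tutorial.
let rotation = CABasicAnimation(keyPath: "transform.rotation.z")
rotation.fromValue = 0
rotation.toValue = 2 * Double.pi
rotation.duration = 1.0
rotation.repeatCount = .infinity
loaderImageView.layer.add(rotation, forKey: "spin")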

Swift 3 - NSString.draw(in: rect, withAttributes:) -- Text not being drawn at expected point

Teeing off of this Stack Overflow post, which was very helpful, I've been able to successfully draw text onto a full-screen image (I'm tagging the image with pre-canned, short strings, e.g., "Trash"). However, the text isn't appearing where I want, which is centered at the exact point the user has tapped. Here's my code, based on some code from the above post but updated for Swift 3 --
func addTextToImage(text: NSString, inImage: UIImage, atPoint: CGPoint) -> UIImage {
    // Set up the font-specific variables
    let textColor: UIColor = UIColor.red
    let textFont: UIFont = UIFont(name: "Helvetica Bold", size: 80)!

    // Set up the font attributes that will later dictate how the text should be drawn
    let textFontAttributes = [
        NSFontAttributeName: textFont,
        NSForegroundColorAttributeName: textColor,
    ]

    // Create a bitmap-based graphics context
    UIGraphicsBeginImageContextWithOptions(inImage.size, false, 0.0)

    // Put the image into a rectangle as large as the original image
    inImage.draw(in: CGRect(x: 0, y: 0, width: inImage.size.width, height: inImage.size.height))

    // Create the rectangle where the text will be written
    let rect: CGRect = CGRect(x: atPoint.x, y: atPoint.y, width: inImage.size.width, height: inImage.size.height)

    // Draw the text in the rectangle
    text.draw(in: rect, withAttributes: textFontAttributes)

    // Get the image from the graphics context
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
In the above, atPoint is the location of the user's tap. This is where I want the text to be drawn. However, the text is always written toward the upper left corner of the image. For example, in the attached image, I have tapped halfway down the waterfall, as that is where I want the text string "Trash" to be written. But instead, you can see that it is written way up in the left-hand corner. I've tried a bunch of stuff but can't get a solution. I appreciate any help.
How are you setting atPoint? If you are using the same coordinate space as the screen, that won't work... which is what I suspect is happening.
Suppose your image is 1000 x 2000, and you are showing it in a UIImageView that is 100 x 200. If you tap at x: 50 y: 100 in the view (at the center), and then send that point to your function, it will draw the text at x: 50 y: 100 of the image -- which will be in the upper-left corner, instead of in the center.
So, you need to convert your point from the Image View size to the actual image size.. either before you call your function, or by modifying your function to handle it.
An example (not necessarily the best way to do it):
// Assume:
//   view size is 100 x 200
//   image size is 1000 x 2000
//   tapPoint is CGPoint(x: 50, y: 100)

let xFactor = image.size.width / imageView.frame.size.width
// xFactor now equals 10

let yFactor = image.size.height / imageView.frame.size.height
// yFactor now equals 10

let convertedPoint = CGPoint(x: tapPoint.x * xFactor, y: tapPoint.y * yFactor)
convertedPoint now equals CGPoint(x: 500, y: 1000), and you can send that as the atPoint value in your call to addTextToImage.
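For reuse, the same conversion can be wrapped in a small helper. A sketch, assuming the image view's contentMode is .scaleToFill so the view and image aspect ratios match (aspect-fit/fill modes need extra offset math):

// Convert a tap point in the image view's coordinate space to image coordinates.
// Assumes .scaleToFill; returns nil if the view has no image.
func imagePoint(from viewPoint: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    let xFactor = image.size.width / imageView.frame.size.width
    let yFactor = image.size.height / imageView.frame.size.height
    return CGPoint(x: viewPoint.x * xFactor, y: viewPoint.y * yFactor)
}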

Cropping UI images

I have a SceneKit view that fills my screen. My goal is to let the user take snapshots of that scene, but the snapshots are not the whole screen, but an inset portion in a UIImageView which is slightly smaller than the screen. Ideally, the user should not notice, the image on top should be identical to the scene behind it.
I have coded this up using snapshot() and cropping(to:), but as you can see in the image, the scale ends up way off - see the width of the yellow line and the size of the windows? It's also not positioned correctly; it's somewhat down and to the left of where it should be - the upper left should be below the line of windows, but you can see it is at the roofline above them. I can't see the original snapshot because the debugger QuickLook refuses to show it.
There's not much code to it; does anyone see the problem?
let background = sceneView.snapshot().cgImage!
let cropped = background.cropping(to: overlayView.frame)

UIGraphicsBeginImageContextWithOptions(overlayView.frame.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: overlayView.bounds)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()

overlayView.image = UIImage(cgImage: transparent!, scale: 1.0, orientation: .downMirrored)
I have tried various scales and rects to no avail. I assume this is something very easy.
UPDATE: after several tries I was able to get QuickLook to work. The snapshot is indeed the entire background, as I would expect. But it is much larger than I would expect too - it's 640 x 998, while the cropped version is 228 x 304. That explains the "zooming". This leads me to believe that the frame size of the inset view does NOT have a direct relationship to the image size. Does that ring any bells? Is there some other rect I should be using rather than overlayView.frame?
So I assume the problem is that the frame coordinates are in one set of units and the image coordinates are in another. I was able to solve the problem this way:
let croprect = CGRect(x: overlayView.frame.origin.x * 2,
                      y: overlayView.frame.origin.y * 2 - 45,
                      width: overlayView.frame.width * 2,
                      height: overlayView.frame.height * 2)
let drawrect = CGRect(x: 0, y: 0,
                      width: overlayView.frame.width * 2,
                      height: overlayView.frame.height * 2)

let background = sceneView.snapshot()
let cropped = background.cgImage!.cropping(to: croprect)

UIGraphicsBeginImageContextWithOptions(drawrect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: drawrect)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()
I'm extremely curious why I had to adjust the Y starting point to get them to line up, anyone have an idea?
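The hard-coded 2 is presumably the screen scale (the snapshot's CGImage is in pixels while frames are in points), and the -45 looks like the offset between the overlay's superview and the scene view (e.g. a navigation bar). A hedged sketch that derives both instead of hard-coding them:

// Derive the point-to-pixel factor from the snapshot itself, and convert
// the overlay's frame into the scene view's coordinate space.
let background = sceneView.snapshot()
let pixelFactor = CGFloat(background.cgImage!.width) / sceneView.bounds.width
let overlayInScene = sceneView.convert(overlayView.frame, from: overlayView.superview)
let cropRect = CGRect(x: overlayInScene.origin.x * pixelFactor,
                      y: overlayInScene.origin.y * pixelFactor,
                      width: overlayInScene.width * pixelFactor,
                      height: overlayInScene.height * pixelFactor)
let cropped = background.cgImage!.cropping(to: cropRect)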

How to invert colors of a specific area of a UIView?

It's easy to blur a portion of the view, keeping in mind that if the contents of the views behind change, the blur changes too, in real time.
My questions:
How can I make an invert effect that you can put over a view, so the contents behind it get inverted colors?
How can I add an effect that knows the average color of the pixels behind it?
In general, how can I access the pixels and manipulate them?
My question is not about UIImageView; I'm asking about UIView in general.
There are libraries that do something similar, but they are slow and don't run as smoothly as blur!
Thanks.
If you know how to code a CIColorKernel, you'll have what you need.
Core Image has several blur filters, all of which use the GPU, which will give you the performance you need.
CIAreaAverage will give you the average color for a specified rectangular area.
Core Image Filters
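A hedged sketch of wiring up CIAreaAverage (inputCIImage and regionOfInterest are assumed to exist):

// CIAreaAverage reduces a rectangular region to a single averaged pixel.
// "inputCIImage" and "regionOfInterest" are assumed names.
let averageFilter = CIFilter(name: "CIAreaAverage", withInputParameters: [
    kCIInputImageKey: inputCIImage,
    kCIInputExtentKey: CIVector(cgRect: regionOfInterest)
])
let averaged = averageFilter?.outputImage // a 1x1 image holding the average color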
Here is about the simplest CIColorKernel you can write. It swaps the red and green value for every pixel in an image (note the "grba" instead of "rgba"):
kernel vec4 swapRedAndGreenAmount(__sample s) {
    return s.grba;
}
To put this into a CIColorKernel, just use this line of code:
let swapKernel = CIColorKernel(string:
    "kernel vec4 swapRedAndGreenAmount(__sample s) {" +
    "  return s.grba;" +
    "}"
)
@tww003 has good code to convert a view's layer into a UIImage. Assuming you call your image myUiImage, to execute this swapKernel, you can:
let myInputCi = CIImage(image: myUiImage)!
let myOutputCi = swapKernel!.apply(withExtent: myInputCi.extent, arguments: [myInputCi])!
let myNewImage = UIImage(ciImage: myOutputCi)
That's about it. You can do a lot more (including using CoreGraphics, etc.), but this is a good start.
One last note, you can chain individual filters (including hand-written color, warp, and general kernels). If you want, you can chain your color average over the underlying view with a blur and do whatever kind of inversion you wish as a single filter/effect.
I don't think I can fully answer your question, but maybe I can point you in the right direction.
Apple has some documentation on accessing the pixel data of CGImages, but of course that requires that you have an image to work with in the first place. Fortunately, you can create an image from a UIView like this:
UIGraphicsBeginImageContext(view.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
From this image you created, you'll be able to manipulate the pixel data how you want to. It may not be the cleanest way to solve your problem, but maybe it's something worth exploring.
Unfortunately, the link I provided is written in Objective-C and is a few years old, but maybe you can figure out how to make good use of it.
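As a rough Swift sketch of that idea (mine, not from the linked post): copy the image's bytes into a buffer through a bitmap context, then read or rewrite them as you like.

// Read raw RGBA bytes from a CGImage via a bitmap context.
// Sketch only; assumes 8 bits per component, premultiplied alpha.
func pixelData(of cgImage: CGImage) -> [UInt8]? {
    let width = cgImage.width
    let height = cgImage.height
    var data = [UInt8](repeating: 0, count: width * height * 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let ctx = CGContext(data: &data, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: width * 4,
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return data
}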
First, I recommend subclassing UIView for this purpose, as in the good reference by Joe below.
You have to override the drawRect method:
import UIKit

@IBDesignable class PortholeView: UIView {
    @IBInspectable var innerCornerRadius: CGFloat = 10.0
    @IBInspectable var inset: CGFloat = 20.0
    @IBInspectable var fillColor: UIColor = UIColor.grayColor()
    @IBInspectable var strokeWidth: CGFloat = 5.0
    @IBInspectable var strokeColor: UIColor = UIColor.blackColor()

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)

        // Prep constants
        let roundRectWidth = rect.width - (2 * inset)
        let roundRectHeight = rect.height - (2 * inset)

        // Use the even-odd rule to subtract portalRect from outerFill
        // (See https://stackoverflow.com/questions/14141081/uiview-drawrect-draw-the-inverted-pixels-make-a-hole-a-window-negative-space)
        let outerFill = UIBezierPath(rect: rect)
        let portalRect = CGRectMake(
            rect.origin.x + inset,
            rect.origin.y + inset,
            roundRectWidth,
            roundRectHeight)

        fillColor.setFill()
        let portal = UIBezierPath(roundedRect: portalRect, cornerRadius: innerCornerRadius)
        outerFill.appendPath(portal)
        outerFill.usesEvenOddFillRule = true
        outerFill.fill()

        strokeColor.setStroke()
        portal.lineWidth = strokeWidth
        portal.stroke()
    }
}

iOS Screen coordinates and scaling

I'm trying to draw on top of an image in a CALayer and am having trouble with where the drawing shows up on different size displays.
func drawLayer() {
    let circleLayer = CAShapeLayer()
    let radius: CGFloat = 30

    let x = Thermo.frame.origin.x
    let y = Thermo.frame.origin.y
    let width = Thermo.frame.width
    let height = Thermo.frame.height
    print("X: \(x) Y: \(y) Width: \(width) Height: \(height)")

    circleLayer.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: 2.0 * radius, height: 2.0 * radius), cornerRadius: radius).CGPath
    circleLayer.fillColor = UIColor.redColor().CGColor
    circleLayer.shadowOffset = CGSizeMake(0, 3)
    circleLayer.shadowRadius = 5.0
    circleLayer.shadowColor = UIColor.blackColor().CGColor
    circleLayer.shadowOpacity = 0.8
    circleLayer.frame = CGRectMake(0, 410, 0, 192)

    self.Thermo.layer.addSublayer(circleLayer)
    circleLayer.setNeedsDisplay()
}
That draws a circle, in the correct place ... for an iPhone 6s. But when the enclosing UIImageView component is scaled for a smaller device, well, it clearly doesn't. I added the print() to see what the image size and position were, and ... well, they're exactly the same on every device I run it on - X: 192.0 Y: 8.0 Width: 216.0 Height: 584.0 - but clearly it's being scaled by the constraints in the Auto Layout manager.
So, my question is: how can I figure out the proper ratio and position for different screen sizes if I can't use the enclosing view's size and position, since those seem to never change?
Here is the image I am starting with, in a UIImageView, that I am trying to draw over.
I'm of course trying to color it in based on data from an external device. Any suggestions/sample code would be most appreciated!
CALayer and its subclasses, including CAShapeLayer, have a property
var contentsScale: CGFloat
From class reference :
For layers you create and manage yourself, you must set the value of this property yourself based on the resolution of the screen and the content you are providing. Core Animation uses the value you specify as a cue to determine how to render your content.
So what you need to do is set the scale on the layer; you get the scale of the screen from the UIScreen class:
circleLayer.contentsScale = UIScreen.mainScreen().scale
