Is it possible to make 2 subviews with same z index? - ios

I am looking to create a water-fill effect in a shape, and am using the WaveViewAnimation project.
I want to allow the user to press different buttons to create waves of different colours that fill the shape. When the user presses the second colour while the first colour is still filling:
waveRed = WaveAnimationView(frame: CGRect(origin: .zero, size: lapView.bounds.size), color: UIColor.red.withAlphaComponent(0.5))
lapView.addSubview(waveRed)
waveBlue = WaveAnimationView(frame: CGRect(origin: .zero, size: lapView.bounds.size), color: UIColor.green.withAlphaComponent(0.5))
lapView.addSubview(waveBlue)
waveRed.layer.zPosition = 1
waveBlue.layer.zPosition = 1
waveRed.startAnimation()
waveBlue.startAnimation()
I want the output to be a blend of the two colours. For example, if the red wave is filled and the green wave is then pressed, the area where the waves overlap should be yellow.
Could someone please guide/advise me on how to achieve this.

You may be able to get your desired result by using the .compositingFilter property of CALayer.
There are various "blend" filters... this one:
topLayer.compositingFilter = "screenBlendMode"
is probably what you want to play with.
Apple's docs on it are vague -
Discussion
This results in colors that are at least as light as either of the two contributing sample colors. The formula used to create this filter is described in the PDF specification, which is available online from the Adobe Developer Center. See PDF Reference and Adobe Extensions to the PDF Specification.
Of course, the link doesn't help... but I did find this from searching:
Screen
Multiplies the complements of the backdrop and source color values, then complements the result. The result color is always at least as light as either of the two constituent colors. Screening any color with white produces white; screening with black leaves the original color unchanged. The effect is similar to projecting multiple photographic slides simultaneously onto a single screen.
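That formula is easy to sketch in code. Per channel, with components in 0...1 (this screenBlend helper is just for illustration; it is not part of any API):

```swift
// screen blend: multiply the complements, then complement the result
func screenBlend(_ backdrop: Double, _ source: Double) -> Double {
    return 1.0 - (1.0 - backdrop) * (1.0 - source)
}

// red (1, 0, 0) screened with green (0, 1, 0) gives yellow (1, 1, 0)
let r = screenBlend(1, 0)  // 1
let g = screenBlend(0, 1)  // 1
let b = screenBlend(0, 0)  // 0

// screening any channel with white (1) produces white
let w = screenBlend(0.3, 1)  // 1
```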
So, if we have a red circle overlapping a green circle...
We get this with NO filter (or the default "normalBlendMode"):
If we give the red circle layer .compositingFilter = "screenBlendMode" we get this:
As we see, "screen blending" 1, 0, 0, 1 (red) with 0, 1, 0, 1 (green) gives us 1, 1, 0, 1 -- yellow.
Notice that we lose the "top" of the red circle. That's because any color screen-blended with white results in white.
If we add another "bottom" layer matching the top red layer, like this:
We'll get this result:
because, for a fully saturated color like pure red (where every channel is 0 or 1), screen-blending the color with itself leaves it unchanged.
Here's the sample code I used to produce these images that you can play with:
class ViewController: UIViewController {

    let theView: UIView = UIView()

    let topLayer = CAShapeLayer()
    let bottomLayer = CAShapeLayer()
    let middleLayer = CAShapeLayer()

    let infoLabel: UILabel = {
        let v = UILabel()
        v.numberOfLines = 0
        v.textAlignment = .center
        v.font = .systemFont(ofSize: 18.0, weight: .light)
        return v
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = .systemYellow
        theView.backgroundColor = .white
        infoLabel.backgroundColor = UIColor(white: 0.95, alpha: 1.0)

        theView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(theView)

        infoLabel.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(infoLabel)

        let g = view.safeAreaLayoutGuide

        NSLayoutConstraint.activate([
            // add the view 20-points inset from the sides
            // height 1.2 times the width
            // centered vertically
            theView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            theView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
            theView.heightAnchor.constraint(equalTo: theView.widthAnchor, multiplier: 1.2),
            theView.centerYAnchor.constraint(equalTo: g.centerYAnchor, constant: -40.0),

            infoLabel.topAnchor.constraint(equalTo: theView.bottomAnchor, constant: 20.0),
            infoLabel.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            infoLabel.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
            infoLabel.heightAnchor.constraint(greaterThanOrEqualToConstant: 40.0),
        ])

        // bottom layer is red
        bottomLayer.fillColor = UIColor.red.cgColor
        // middle layer is green
        middleLayer.fillColor = UIColor.green.cgColor
        // top layer same color as bottom layer
        topLayer.fillColor = UIColor.red.cgColor

        theView.layer.addSublayer(bottomLayer)
        theView.layer.addSublayer(middleLayer)
        theView.layer.addSublayer(topLayer)

        nextStep()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        // set layer shapes to overlapping ovals
        var b = theView.bounds.insetBy(dx: 20.0, dy: 20.0)
        b.size.height *= 0.75

        var pth = UIBezierPath(ovalIn: b)

        // bottom and top layers get the same path
        // so they exactly overlay each other
        bottomLayer.path = pth.cgPath
        topLayer.path = pth.cgPath

        // shift the oval rect down for the "middle" layer
        b.origin.y += theView.bounds.height * 0.25 - 10.0
        pth = UIBezierPath(ovalIn: b)
        middleLayer.path = pth.cgPath
    }

    var counter: Int = -1

    func nextStep() {
        counter += 1

        switch counter % 3 {
        case 1:
            // screen blend
            // hide bottom layer
            topLayer.compositingFilter = "screenBlendMode"
            bottomLayer.opacity = 0
            infoLabel.text = "Screen Blend - with bottom layer hidden"
        case 2:
            // screen blend
            // show bottom layer
            topLayer.compositingFilter = "screenBlendMode"
            bottomLayer.opacity = 1
            infoLabel.text = "Screen Blend - with bottom layer visible"
        default:
            // normal blend
            // bottom layer opacity doesn't matter, since it will be covered by top layer
            topLayer.compositingFilter = "normalBlendMode"
            infoLabel.text = "Default - no Blend Effect"
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        nextStep()
    }

}
Each tap anywhere on the screen will step through the different blend-modes and "bottom layer" visibility.


How to animate movement of UIView between different superViews

Hello, I am new to Swift and iOS apps. I am trying to animate the movement of cardBackImage (a UIImage) from deckPileImage to the card view, but everything has different superviews and I have no idea how to do it properly; all the locations have different frames (superviews as described in the image). Should I use CGAffineTransform?
[image: view hierarchy description]
Try to imagine my abstraction as a "face-down card flying from the deck into its position on boardView".
Don't animate the view at all. Instead, animate a snapshot view as a proxy. You can see me doing it here, in this scene from one of my apps.
That red rectangle looks like it's magically flying out of one view hierarchy into another. But it isn't. In reality there are two red rectangles. I hide the first rectangle and show the snapshot view in its place, animate the snapshot view to where the other rectangle is lurking hidden, then hide the snapshot and show the other rectangle.
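A rough sketch of that pattern in Swift (the names sourceView, destinationView, and container are placeholders; container is any common ancestor, such as the root view):

```swift
import UIKit

// Animate from sourceView's position to destinationView's position using a
// snapshot as a proxy. Both views stay in their own hierarchies; only the
// snapshot moves, in the container's coordinate space.
func flySnapshot(from sourceView: UIView, to destinationView: UIView, in container: UIView) {
    guard let snapshot = sourceView.snapshotView(afterScreenUpdates: true) else { return }
    // place the snapshot exactly over the source view
    snapshot.frame = container.convert(sourceView.bounds, from: sourceView)
    container.addSubview(snapshot)
    // hide the real views while the proxy is in flight
    sourceView.isHidden = true
    destinationView.isHidden = true
    UIView.animate(withDuration: 0.5, animations: {
        // move the snapshot to where the destination view is lurking hidden
        snapshot.frame = container.convert(destinationView.bounds, from: destinationView)
    }, completion: { _ in
        // swap the proxy back out for the real view
        destinationView.isHidden = false
        snapshot.removeFromSuperview()
    })
}
```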
To help get you going...
First, no idea why you have your "deckPileImage" in a stack view, but assuming you have a reason for doing so...
a simple "card" view - bordered with rounded corners
class CardView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        commonInit()
    }
    func commonInit() {
        layer.cornerRadius = 16
        layer.masksToBounds = true
        layer.borderWidth = 1
        layer.borderColor = UIColor.black.cgColor
    }
}
a basic view controller - adds a "deck pile view" to a stack view, and a "card position view" as the destination for the new, animated cards.
class AnimCardVC: UIViewController {

    let deckStackView: UIStackView = UIStackView()
    let cardPositionView: UIView = UIView()
    let deckPileView: CardView = CardView()

    let cardSize: CGSize = CGSize(width: 80, height: 120)

    // card colors to cycle through
    let colors: [UIColor] = [
        .systemRed, .systemGreen, .systemBlue,
        .systemCyan, .systemOrange,
    ]
    var colorIDX: Int = 0

    // card position constraints to animate
    var animXAnchor: NSLayoutConstraint!
    var animYAnchor: NSLayoutConstraint!

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = .systemBackground

        deckStackView.translatesAutoresizingMaskIntoConstraints = false
        deckPileView.translatesAutoresizingMaskIntoConstraints = false
        cardPositionView.translatesAutoresizingMaskIntoConstraints = false

        deckStackView.addArrangedSubview(deckPileView)
        view.addSubview(deckStackView)
        view.addSubview(cardPositionView)

        // always respect safe area
        let g = view.safeAreaLayoutGuide

        NSLayoutConstraint.activate([
            deckStackView.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0),
            deckStackView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            // we'll let the stack view subviews determine its size
            deckPileView.widthAnchor.constraint(equalToConstant: cardSize.width),
            deckPileView.heightAnchor.constraint(equalToConstant: cardSize.height),
            cardPositionView.topAnchor.constraint(equalTo: deckStackView.bottomAnchor, constant: 100.0),
            cardPositionView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
            cardPositionView.widthAnchor.constraint(equalToConstant: cardSize.width + 2.0),
            cardPositionView.heightAnchor.constraint(equalToConstant: cardSize.height + 2.0),
        ])

        // outline the card holder view
        cardPositionView.backgroundColor = .systemYellow
        cardPositionView.layer.borderColor = UIColor.blue.cgColor
        cardPositionView.layer.borderWidth = 2

        // make the "deck card" gray to represent the deck
        deckPileView.backgroundColor = .lightGray
    }

    func animCard() {
        let card = CardView()
        card.backgroundColor = colors[colorIDX % colors.count]
        colorIDX += 1
        card.translatesAutoresizingMaskIntoConstraints = false
        card.widthAnchor.constraint(equalToConstant: cardSize.width).isActive = true
        card.heightAnchor.constraint(equalToConstant: cardSize.height).isActive = true
        view.addSubview(card)

        // center the new card on the deckCard
        animXAnchor = card.centerXAnchor.constraint(equalTo: deckPileView.centerXAnchor)
        animYAnchor = card.centerYAnchor.constraint(equalTo: deckPileView.centerYAnchor)
        // activate those constraints
        animXAnchor.isActive = true
        animYAnchor.isActive = true

        // run the animation *after* the card has been placed at its starting position
        DispatchQueue.main.async {
            // de-activate the current constraints
            self.animXAnchor.isActive = false
            self.animYAnchor.isActive = false
            // center the new card on the cardPositionView
            self.animXAnchor = card.centerXAnchor.constraint(equalTo: self.cardPositionView.centerXAnchor)
            self.animYAnchor = card.centerYAnchor.constraint(equalTo: self.cardPositionView.centerYAnchor)
            // re-activate those constraints
            self.animXAnchor.isActive = true
            self.animYAnchor.isActive = true
            // 1/2 second animation
            UIView.animate(withDuration: 0.5, animations: {
                self.view.layoutIfNeeded()
            })
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        animCard()
    }

}
It looks like this:
Each time you tap anywhere the code will add a new "card" and animate it from the "deck" view to the "card position" view.

How do I make an image act like a map (zoom/pan/map markers)?

Swift 5 / Xcode 12.4
I've got a single png image that's downloaded into the Documents folder and then loaded at runtime (currently as UIImage). This image has to act as some type of map:
Pinch zoom
Pan
I want to place some type of map marker (e.g. a dot) in specific spots: The user can click on them (to open a popup with more information) and they move according to the zoom/pan but always stay the same size.
Not full screen but inside a specific area in my ViewController.
I already did the same thing in Android but all Java map libraries I found require tiles (I've only got a single big image), so I ended up using a "zoom/pan" library (also lets you set the maximum zoom) and created my own invisible image sublayer for the markers.
For iOS I've found the Google Maps SDK and Apple's MapKit so far, but they both look like they load real map data and you can't set the actual image - is this possible with either of them?
I haven't found a zoom/pan library for iOS yet (at least one that's not 5+ years old) either, so how do I best accomplish this? Write my own zoom/pan listeners and use some type of sublayer (that moves with the parent) for the map markers - is that the way to go/what UI objects do I have to use?
This will help with the pinch-to-zoom - https://stackoverflow.com/a/58558272/2481602
This will help you achieve a pan - How do I pan the image inside a UIImageView?
As for the imposed markers, you will have to manually handle the transformations and apply them to the anchor points of the marker image. The documentation is here - https://developer.apple.com/documentation/coregraphics/cgaffinetransform and this supplies a loose explanation - https://medium.com/weeronline/swift-transforms-5981398b437d
Since it's not full screen, you just need a view to hold the scroll view (which holds the map) in the location on the screen that you want.
Not a full answer, but hopefully this will all point you in the right direction.
Use a UIScrollView for the pinch/zoom/pan. To add the markers, add them to a container view atop the scroll view, and respond to scroll view changes (scrollViewDidEndZooming, scrollViewDidZoom, scrollViewDidEndDragging...) by updating the positions of the annotation views in the container. You'll need UIView's convert(_:to:) to convert between coordinate systems, setting the center of each annotation view to the appropriate point converted from your scroll view to the container view. The container view should be the same size as the scroll view, not the scroll view's content.
Or you could add the annotations into the scrollview's content but then you have to update the transforms of those views to counter-magnify them as you zoom in.
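That counter-magnification might be sketched like this, assuming markers is an array of annotation views placed inside the zoomed content:

```swift
func scrollViewDidZoom(_ scrollView: UIScrollView) {
    // invert the zoom so each marker keeps its on-screen size
    let counterScale = 1.0 / scrollView.zoomScale
    markers.forEach { marker in
        marker.transform = CGAffineTransform(scaleX: counterScale, y: counterScale)
    }
}
```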
One approach...
Use "standard" scroll view zoom/pan functionality
Use image view as viewForZooming in the scroll view
add "marker views" to the scroll view as siblings of the image view (not subviews of the image view)
For the marker positions, calculate the percent location. So, for example, if your image is 1000x1000, and you want a marker at 100,200, that marker would have a "point percentage" of 0.1,0.2. When the image view is zoomed, change the frame origin of the marker by its location percentages.
Here is a complete example (done very quickly, so just to get you going)...
I used this 1600 x 1600 image, with marker locations:
A simple "marker view" class:
class MarkerView: UILabel {
    var yPCT: CGFloat = 0
    var xPCT: CGFloat = 0
}
And the controller class:
class ZoomWithMarkersViewController: UIViewController, UIScrollViewDelegate {

    let imgView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        return v
    }()

    let scrollView: UIScrollView = {
        let v = UIScrollView()
        v.translatesAutoresizingMaskIntoConstraints = false
        return v
    }()

    var points: [CGPoint] = [
        CGPoint(x: 200, y: 200),
        CGPoint(x: 800, y: 300),
        CGPoint(x: 500, y: 700),
        CGPoint(x: 1100, y: 900),
        CGPoint(x: 300, y: 1200),
        CGPoint(x: 1300, y: 1400),
    ]

    var markers: [MarkerView] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        // make sure we have an image
        guard let img = UIImage(named: "points1600x1600") else {
            fatalError("Could not load image!!!!")
        }

        // set the image
        imgView.image = img

        // add the image view to the scroll view
        scrollView.addSubview(imgView)

        // add scroll view to view
        view.addSubview(scrollView)

        // respect safe area
        let safeG = view.safeAreaLayoutGuide
        // to save on typing
        let contentG = scrollView.contentLayoutGuide

        NSLayoutConstraint.activate([
            // scroll view inset 20-pts on each side
            scrollView.leadingAnchor.constraint(equalTo: safeG.leadingAnchor, constant: 20.0),
            scrollView.trailingAnchor.constraint(equalTo: safeG.trailingAnchor, constant: -20.0),
            // square (1:1 ratio)
            scrollView.heightAnchor.constraint(equalTo: scrollView.widthAnchor),
            // center vertically
            scrollView.centerYAnchor.constraint(equalTo: safeG.centerYAnchor),

            // constrain all 4 sides of image view to scroll view's Content Layout Guide
            imgView.topAnchor.constraint(equalTo: contentG.topAnchor),
            imgView.leadingAnchor.constraint(equalTo: contentG.leadingAnchor),
            imgView.trailingAnchor.constraint(equalTo: contentG.trailingAnchor),
            imgView.bottomAnchor.constraint(equalTo: contentG.bottomAnchor),

            // we will want zoom scale of 1 to show the "native size" of the image
            imgView.widthAnchor.constraint(equalToConstant: img.size.width),
            imgView.heightAnchor.constraint(equalToConstant: img.size.height),
        ])

        // create marker views and
        //  add them as subviews of the scroll view
        //  add them to our array of marker views
        var i: Int = 0
        points.forEach { pt in
            i += 1
            let v = MarkerView()
            v.textAlignment = .center
            v.font = .systemFont(ofSize: 12.0)
            v.text = "\(i)"
            v.backgroundColor = UIColor.green.withAlphaComponent(0.5)
            scrollView.addSubview(v)
            markers.append(v)
            v.yPCT = pt.y / img.size.height
            v.xPCT = pt.x / img.size.width
            v.frame = CGRect(origin: .zero, size: CGSize(width: 30.0, height: 30.0))
        }

        // assign scroll view's delegate
        scrollView.delegate = self
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        print(#function)

        guard let img = imgView.image else { return }

        // max scale is 1.0 (original image size)
        scrollView.maximumZoomScale = 1.0
        // min scale fits the image in the scroll view frame
        scrollView.minimumZoomScale = scrollView.frame.width / img.size.width
        // start at min scale (so full image is visible)
        scrollView.zoomScale = scrollView.minimumZoomScale

        // just to make the markers "appear" nicely
        markers.forEach { v in
            v.center = CGPoint(x: scrollView.bounds.midX, y: scrollView.bounds.midY)
            v.alpha = 0.0
        }
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        // animate the markers into position
        UIView.animate(withDuration: 1.0, animations: {
            self.markers.forEach { v in
                v.alpha = 1.0
            }
            self.updateMarkers()
        })
    }

    func updateMarkers() -> Void {
        markers.forEach { v in
            let x = imgView.frame.origin.x + v.xPCT * imgView.frame.width
            let y = imgView.frame.origin.y + v.yPCT * imgView.frame.height
            // for example:
            // put bottom-left corner of marker at coordinates
            v.frame.origin = CGPoint(x: x, y: y - v.frame.height)
            // or
            // put center of marker at coordinates
            //v.center = CGPoint(x: x, y: y)
        }
    }

    func scrollViewDidZoom(_ scrollView: UIScrollView) {
        updateMarkers()
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imgView
    }

}
I'm placing the markers so their bottom-left corner is at the marker-point.
It starts like this:
and looks like this after zooming-in on marker #3:

How to automatically change transparent color of a view based on the color of its background

I want to display a semi-transparent text field over a UIImageView. I can't choose a static color for the text field because some images are bright while others are dark. I want the text field color to adapt automatically based on the colors behind it. Is there an easy solution for that?
UPD:
The effect I want to achieve is:
If UIImage is dark where my textfield should be placed, set textfield background color to white with 0.5 opacity.
If UIImage is light where my textfield should be placed, set textfield background color to black with 0.5 opacity.
So I want to somehow calculate the average color of the UIImageView in the place where I want to put my text field, and then check whether it is light or dark. I don't know how to get a snapshot of that particular part of my UIImageView and get its average color, and I want it to be optimised. I guess working with UIGraphicsImageRenderer isn't a good choice, which is why I'm asking this question. I know how to do it with UIGraphicsImageRenderer, but I don't think my way is good enough.
"Brightness" of an image is not a straightforward thing. You may or may not find this suitable.
If you search for determine brightness of an image you'll find plenty of documentation on it - likely way more than you want.
One common way to calculate the "brightness" of a pixel is to use the sum of:
red component * 0.299
green component * 0.587
blue component * 0.114
This is because we perceive the different colors at different "brightness" levels.
So, you'll want to loop through each pixel in the area of the image where you want to place your label (or textField), get the average brightness, and then decide what's "dark" and what's "light".
As an example, using this background image:
I generated a 5 x 8 grid of labels, looped through getting the "brightness" of the image in the rect under each label's frame, and then set the background and text color based on the brightness calculation (values range from 0 to 255, so I used < 127 is dark, >= 127 is light):
This is the code I used:
extension CGImage {
    var brightness: Double {
        get {
            // common formula to get "average brightness"
            let bytesPerPixel = self.bitsPerPixel / self.bitsPerComponent
            let imageData = self.dataProvider?.data
            let ptr = CFDataGetBytePtr(imageData)
            var x = 0
            var p = 0
            var result: Double = 0
            for _ in 0..<self.height {
                for _ in 0..<self.width {
                    let r = ptr![p+0]
                    let g = ptr![p+1]
                    let b = ptr![p+2]
                    result += (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
                    p += bytesPerPixel
                    x += 1
                }
            }
            let bright = result / Double(x)
            return bright
        }
    }
}

extension UIImage {

    // get the "brightness" of self (entire image)
    var brightness: Double {
        get {
            return (self.cgImage?.brightness)!
        }
    }

    // get the "brightness" in a sub-rect of self
    func brightnessIn(_ rect: CGRect) -> Double {
        guard let cgImage = self.cgImage else { return 0.0 }
        guard let croppedCGImage = cgImage.cropping(to: rect) else { return 0.0 }
        return croppedCGImage.brightness
    }

}
class ImageBrightnessViewController: UIViewController {

    let imgView: UIImageView = {
        let v = UIImageView()
        v.contentMode = .center
        v.backgroundColor = .green
        v.clipsToBounds = true
        v.translatesAutoresizingMaskIntoConstraints = false
        return v
    }()

    var labels: [UILabel] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        // load an image
        guard let img = UIImage(named: "bkg640x360") else { return }
        imgView.image = img

        let w = img.size.width
        let h = img.size.height

        // set image view's width and height equal to img width and height
        view.addSubview(imgView)
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            imgView.widthAnchor.constraint(equalToConstant: w),
            imgView.heightAnchor.constraint(equalToConstant: h),
            imgView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
            imgView.centerYAnchor.constraint(equalTo: g.centerYAnchor),
        ])

        // use stack views to create a 5 x 8 grid of labels
        let outerStackView: UIStackView = {
            let v = UIStackView()
            v.translatesAutoresizingMaskIntoConstraints = false
            v.axis = .horizontal
            v.spacing = 32
            v.distribution = .fillEqually
            return v
        }()

        for _ in 1...5 {
            let vStack = UIStackView()
            vStack.axis = .vertical
            vStack.spacing = 12
            vStack.distribution = .fillEqually
            for _ in 1...8 {
                let label = UILabel()
                label.textAlignment = .center
                vStack.addArrangedSubview(label)
                labels.append(label)
            }
            outerStackView.addArrangedSubview(vStack)
        }

        let padding: CGFloat = 12.0

        imgView.addSubview(outerStackView)
        NSLayoutConstraint.activate([
            outerStackView.topAnchor.constraint(equalTo: imgView.topAnchor, constant: padding),
            outerStackView.leadingAnchor.constraint(equalTo: imgView.leadingAnchor, constant: padding),
            outerStackView.trailingAnchor.constraint(equalTo: imgView.trailingAnchor, constant: -padding),
            outerStackView.bottomAnchor.constraint(equalTo: imgView.bottomAnchor, constant: -padding),
        ])
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let img = imgView.image else {
            return
        }

        labels.forEach { v in
            if let sv = v.superview {
                // convert label frame to imgView coordinate space
                let rect = sv.convert(v.frame, to: imgView)
                // get the "brightness" of that rect from the image
                // it will be in the range of 0 - 255
                let d = img.brightnessIn(rect)
                // set the text of the label to that value
                v.text = String(format: "%.2f", d)
                // just using 50% here... adjust as desired
                if d > 127.0 {
                    // brightness is greater than 50%
                    // black translucent background with white text
                    v.backgroundColor = UIColor.black.withAlphaComponent(0.5)
                    v.textColor = .white
                } else {
                    // brightness is 50% or less
                    // white translucent background with black text
                    v.backgroundColor = UIColor.white.withAlphaComponent(0.5)
                    v.textColor = .black
                }
            }
        }
    }

}
As you can see, when getting the average of a region of a photo, you'll quite often not be completely happy with the result. That's why it's much more common to see one fixed style or the other, with a contrasting border and either a shadow or glow around the frame.
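For example, rather than sampling the photo, a fixed white-text-with-shadow style stays readable over most imagery (just a sketch; adjust the values to taste):

```swift
import UIKit

let label = UILabel()
label.text = "Caption"
label.textColor = .white
// a soft dark shadow behind the glyphs keeps white text readable
// over both light and dark regions of the photo
label.shadowColor = UIColor.black.withAlphaComponent(0.8)
label.shadowOffset = CGSize(width: 0, height: 1)
```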

How to stretch the bottom pixels of an image to full screen when the content mode is scaleAspectFill on iOS?

I want to set an image as a background with content mode scaleAspectFill, and stretch the bottom pixel row of the line to fill the screen.
Here is a simple sample image, 1280x300px, set in Assets as 2x.
Here is a screenshot of the storyboard. I want the bottom pixel row of the line to fill the white space.
I have tried the Stretching and the Slicing features, but neither effect is what I want.
Here is the expected result I want on iPhone 4S.
It's not clear how you are sizing your image...
The 1280 x 300 image you posted, with the imageView constrained to the width of the view...
AspectFit:
AspectFill (using the image's original 300 height):
AspectFill (using 150 height):
However, assuming the text in your images will have varying lengths, you may wind up with...
AspectFit:
AspectFill (using 150 height):
However... assuming you have a plan for solving that, one approach would be to use two image views.
The top image view would hold your full image as you already have it... using .scaleAspectFill and constraining its height to show just the centered text in the image.
The bottom image view would be constrained to the bottom of the view, with .scaleToFill, and you could use this extension:
extension UIImage {
    // return the CGRect portion as a new UIImage
    func subImage(in rect: CGRect) -> UIImage? {
        let scale = UIScreen.main.scale
        guard let cgImage = cgImage else { return nil }
        guard let crop = cgImage.cropping(to: rect) else { return nil }
        return UIImage(cgImage: crop, scale: scale, orientation: .up)
    }
}
// then, use it like this
let bottomImage = fullImage.subImage(in: CGRect(x: 0.0, y: fullImage.size.height - 1.0, width: 8.0, height: 1.0))
to get an 8px x 1px portion of the original image. Set the .image of the bottom image view to that image and it will scale to fill the entire frame.
Result:
and here is a complete example, using your original image (I named it "wake.png"):
extension UIImage {
    // return the CGRect portion as a new UIImage
    func subImage(in rect: CGRect) -> UIImage? {
        let scale = UIScreen.main.scale
        guard let cgImage = cgImage else { return nil }
        guard let crop = cgImage.cropping(to: rect) else { return nil }
        return UIImage(cgImage: crop, scale: scale, orientation: .up)
    }
}

class StretchBottomViewController: UIViewController {

    var topImageView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .scaleAspectFill
        v.setContentHuggingPriority(.required, for: .vertical)
        return v
    }()

    var bottomImageView: UIImageView = {
        let v = UIImageView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.contentMode = .scaleToFill
        return v
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        view.addSubview(topImageView)
        view.addSubview(bottomImageView)

        let g = view.safeAreaLayoutGuide

        NSLayoutConstraint.activate([
            // constrain top image view to top / leading / trailing at Zero
            topImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 0.0),
            topImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 0.0),
            topImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: 0.0),

            // constrain bottom image view to bottom of top image view / leading / trailing at Zero
            bottomImageView.topAnchor.constraint(equalTo: topImageView.bottomAnchor, constant: 0.0),
            bottomImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 0.0),
            bottomImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: 0.0),

            // constrain bottom image view to bottom at Zero
            bottomImageView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: 0.0),

            // however you are determining the height of the top image view
            // using 150 here to work with your original image
            topImageView.heightAnchor.constraint(equalToConstant: 150.0),
        ])

        // load "wake me up" image
        if let fullImage = UIImage(named: "wake") {
            // get the bottom part of the full image (8 pixels wide, 1-pixel tall)
            if let bottomImage = fullImage.subImage(in: CGRect(x: 0.0, y: fullImage.size.height - 1.0, width: 8.0, height: 1.0)) {
                topImageView.image = fullImage
                bottomImageView.image = bottomImage
            }
        }
    }

}
Try to use resizableImage(withCapInsets:resizingMode:) with resizingMode set to .stretch.
If you want the text docked at the top and the blue vertical line at the bottom, try this (assuming the blue line is 20px tall and the distance from the top of the image to the bottom of the text 'Before you go' is 100px):
let image = yourOriginalImage.resizableImage(withCapInsets: UIEdgeInsets(top: 100, left: 10, bottom: 10, right: 10), resizingMode: .stretch)
Hope it helps.
Update
Below is what displays with the above lines of code

UIView transform, autolayout and anchor point

I'm using the iOS 9 SDK. I created this view hierarchy setup with autolayout:
----- Top
| 30
----- CustomView (A)
|
|
-----
| 0
----- Bot
I'm trying to create a second CustomView (call it B) behind the first (call it A), scaled by 80% in both directions, and above A by 5 points. So only the top of B is visible.
This should be easy, but it is not. Applying a -5 points translation and a .8 scaling on B moves B down instead of up, because autolayout seems to use the anchor point (set to middle by default) to position B: it seems it detects that a scaling has been applied and recenters B vertically - moving it down.
Changing the anchor point to (x=0.5, y=1) moves A and B up by half their size; I don't understand why. So it does not fix the problem.
Any idea ?
Edit: some code
var card = new CardView();
InsertSubview(card, 0);
card.TranslatesAutoresizingMaskIntoConstraints = false;
this.AddConstraints(
    card.AtTopOf(this, 30),
    card.WithSameWidth(this).WithMultiplier(.95f),
    card.WithSameCenterX(this),
    card.AtBottomOf(this)
);
var transform = CGAffineTransform.MakeTranslation(0, -10);
transform.Scale(.8f,.8f);
Transform = transform;
Result: the card is not at 20pts from the top; it is at about 50pts, below its original position of 30pts.
You're adding the wrong constraints. After some experimentation I got the results you want. The key seems to be to pin the center Y of the background view to the top of the foreground view, this then offsets the vertical shift given when changing the anchor point.
The following code in a playground gives the results you want:
import UIKit
import XCPlayground
let view = UIView(frame: CGRect(origin: .zero, size: CGSize(width: 400, height:400)))
view.backgroundColor = UIColor.whiteColor()
XCPlaygroundPage.currentPage.liveView = view
let a = UIView()
a.translatesAutoresizingMaskIntoConstraints = false
a.backgroundColor = UIColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.5)
view.addSubview(a)
a.topAnchor.constraintEqualToAnchor(view.topAnchor, constant: 30).active = true
a.widthAnchor.constraintEqualToAnchor(view.widthAnchor).active = true
a.bottomAnchor.constraintEqualToAnchor(view.bottomAnchor).active = true
let b = UIView()
b.translatesAutoresizingMaskIntoConstraints = false
b.backgroundColor = UIColor(red: 0, green: 1, blue: 0, alpha: 1)
b.layer.anchorPoint = CGPoint(x: 0.5, y: 0)
view.addSubview(b)
view.sendSubviewToBack(b)
NSLayoutConstraint.activateConstraints([
    b.centerYAnchor.constraintEqualToAnchor(a.topAnchor),
    b.widthAnchor.constraintEqualToAnchor(a.widthAnchor),
    b.heightAnchor.constraintEqualToAnchor(a.heightAnchor),
    b.centerXAnchor.constraintEqualToAnchor(a.centerXAnchor)
])
let translate = CGAffineTransformMakeTranslation(0.0, -10)
let scale = CGAffineTransformMakeScale(0.8, 0.8)
b.transform = CGAffineTransformConcat(translate, scale)
Results:
The two possible approaches I see are:
- scale first, then add it to the view hierarchy
or
- change the anchor point of view B before you add it to the view hierarchy; then you can transform it the way you want.
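A sketch of the second approach in current Swift syntax (superview is a placeholder for whatever view B is added to):

```swift
import UIKit

let b = UIView()
// change the anchor point to top-center BEFORE b joins the hierarchy,
// so the transform is applied around that point
b.layer.anchorPoint = CGPoint(x: 0.5, y: 0)
superview.insertSubview(b, at: 0)
// now translate up 5 points and scale to 80%
b.transform = CGAffineTransform(translationX: 0, y: -5).scaledBy(x: 0.8, y: 0.8)
```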
