I would like to get the frame size of my image to use for some calculations in a drag gesture recognizer (basically to normalize the touch coordinates of the drag).
I have tried using GeometryReader, but it expands to fill the whole height, so the reported height is not correct.
How can I fix this behavior? Is there any other way of getting the view size of the image?
struct ContentView: View {
var body: some View {
ZStack(alignment: .center) {
GeometryReader { reader in
Image(uiImage: UIImage(named: "test")!)
.resizable()
.aspectRatio(contentMode: .fit)
.shadow(radius: 5)
//.gesture(dragGesture(forSize: reader.size))
}
.background(Color.red)
}
}
}
Use AVMakeRect from AVFoundation to compute the frame of the aspect-fitted image inside the rect reported by GeometryReader:
import AVFoundation
import Combine
import SwiftUI

struct ContentView: View {
var body: some View {
GeometryReader { reader in
ZStack(alignment: .center) {
if let uiImage = UIImage(named: "test") {
Image(uiImage: uiImage)
.resizable()
.aspectRatio(contentMode: .fit)
.shadow(radius: 5)
.onReceive(Just(reader), perform: { _ in
let localFrame = reader.frame(in: .local)
let imageFrame = AVMakeRect(aspectRatio: uiImage.size, insideRect: localFrame)
print("Full frame : ", localFrame)
print("Image frame : ", imageFrame)
})
}
}.frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity, alignment: .center)
}.background(Color.red)
}
}
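To tie this back to the original goal of normalizing drag coordinates, here is a minimal sketch of how the computed rect could feed a drag gesture. The asset name and the clamping to 0...1 are my own assumptions, not part of the answer above.
import AVFoundation
import SwiftUI

struct NormalizedDragView: View {
    @State private var normalizedPoint: CGPoint = .zero

    var body: some View {
        GeometryReader { reader in
            if let uiImage = UIImage(named: "test") {
                // Rect of the aspect-fitted image inside the full available frame.
                let imageFrame = AVMakeRect(aspectRatio: uiImage.size,
                                            insideRect: CGRect(origin: .zero, size: reader.size))
                Image(uiImage: uiImage)
                    .resizable()
                    .aspectRatio(contentMode: .fit)
                    .frame(width: reader.size.width, height: reader.size.height)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged { value in
                                // Map the touch location to 0...1 inside the visible image.
                                let x = (value.location.x - imageFrame.minX) / imageFrame.width
                                let y = (value.location.y - imageFrame.minY) / imageFrame.height
                                normalizedPoint = CGPoint(x: min(max(x, 0), 1),
                                                          y: min(max(y, 0), 1))
                            }
                    )
            }
        }
    }
}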
I am using AsyncImage to download a JPEG from my Parse Server. If I download the file manually it has an orientation of 6 (90 degrees counterclockwise), but when the file is returned by AsyncImage, it is missing this orientation and appears rotated.
As an aside, when I substitute SBPAsyncImage for AsyncImage, the orientation is correct.
Is there a way for AsyncImage to detect and correct for the orientation?
AsyncImage(url: photoURL) { image in
image
.resizable()
.scaledToFill()
.frame(width: 200, height: 200, alignment: .leading)
} placeholder: {
ProgressView()
}
Edit: I was overzealous in simplifying the code. Orientation is correct when AsyncImage is displayed in a simple View, but my layout has a list of ScrollViews displaying the images fetched from a Parse Server. Here is a version of the original:
struct TimeLineView: View {
//: A view model in SwiftUI
@StateObject var viewModel = PFTour.query(matchesKeyInQuery(key: "routeID", queryKey: "uniqueID", query: PFRoute.query("state" == "Colorado")))
.order([.descending("date")])
.include("route")
.include("creator")
.include("photos")
.viewModel
var body: some View {
Group {
if let error = viewModel.error {
Text(error.description)
} else {
List(viewModel.results, id: \.id) { tour in
ParseTourImageOnlyScrollView(tour: tour)
}
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .center)
}
Spacer()
}
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .center)
.edgesIgnoringSafeArea(.all)
.onAppear(perform: {
viewModel.find()
})
}
}
Each cell displays a ScrollView:
struct ParseTourImageOnlyScrollView: View {
let tour: PFTour
var body: some View {
VStack {
Divider()
ScrollView(.horizontal) {
HStack(spacing: 5.0) {
if let photos = tour.photos {
ForEach(photos, id: \.self) { parsePhoto in
PhotoOnlyView(photoFile: parsePhoto.photo!)
}
}
}
}
Divider()
Spacer()
}
}
}
When comparing AsyncImage with BackportAsyncImage, the former does not show the correct orientation for the image data at https://parsefiles.back4app.com/zgqRRfY6Tr5ICfdPocRLZG8EXK59vfS5dpDM5bqr/7d5eaf509e0745be0314aa493099dc82_file.bin:
struct PhotoOnlyView: View {
let photoFile: ParseFile
var body: some View {
if #available(iOS 15.0, *) {
VStack{
SwiftUI.AsyncImage(url: photoFile.url) { image in
image
.resizable()
.scaledToFill()
} placeholder: {
ProgressView()
}
.frame(width: UIScreen.main.bounds.size.width - 150, height: UIScreen.main.bounds.size.width - 150, alignment: .leading)
BackportAsyncImage(url: photoFile.url) { image in
image
.resizable()
.scaledToFill()
} placeholder: {
ProgressView()
}
.frame(width: UIScreen.main.bounds.size.width - 150, height: UIScreen.main.bounds.size.width - 150, alignment: .leading)
}
}
else {
ProgressView()
}
}
}
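One possible workaround, sketched below under the assumption that you can load the image data yourself: UIImage(data:) applies the EXIF orientation flag when decoding, so building the image from the raw data keeps the rotation. The view name and the use of .task (iOS 15+) are my own choices, not Parse or AsyncImage API.
import SwiftUI

struct OrientedRemoteImage: View {
    let url: URL?                      // e.g. photoFile.url from the question
    @State private var uiImage: UIImage?

    var body: some View {
        Group {
            if let uiImage = uiImage {
                Image(uiImage: uiImage)
                    .resizable()
                    .scaledToFill()
            } else {
                ProgressView()
            }
        }
        .task {
            guard uiImage == nil, let url = url else { return }
            // URLSession's async API (iOS 15+); UIImage(data:) keeps the EXIF orientation.
            if let (data, _) = try? await URLSession.shared.data(from: url) {
                uiImage = UIImage(data: data)
            }
        }
    }
}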
My goal is to convert a View to an Image using SwiftUI.
The first picture below shows exactly the look I want.
Picture of the View that will be converted to an Image:
But when I convert the view to an image, the size and position of the background object and the sticker (Santa's hat) are messed up, even though I had laid everything out neatly, exactly the way I want.
The following is the resulting image:
The View after it has been converted to an Image:
This is my View code:
import SwiftUI
struct SnapshotView: View {
@ObservedObject var effectData: EffectViewModel
@ObservedObject var eraseModel: EraseViewModel
@ObservedObject var stickerData: StickerViewModel
@Binding var image: UIImage?
@Binding var isActive: Bool
@Binding var isEffectAvailable: Bool
@Binding var sliderValueBlur: Double
init(image: Binding<UIImage?>, isActive: Binding<Bool>,
isEffectAvailable: Binding<Bool>,
sliderValueBlur: Binding<Double>,
effectData: ObservedObject<EffectViewModel>,
eraseModel: ObservedObject<EraseViewModel>,
stickerData: ObservedObject<StickerViewModel>) {
_image = image
_isActive = isActive
_isEffectAvailable = isEffectAvailable
_sliderValueBlur = sliderValueBlur
_effectData = effectData
_eraseModel = eraseModel
_stickerData = stickerData
}
var body: some View {
GeometryReader { geo in
VStack {
ZStack(alignment: .center) {
if isEffectAvailable {
VStack {
Image(uiImage: self.effectData.modelEffect!.image_effect)
.resizable()
.scaledToFit()
.frame(width: geo.size.width, alignment: .center)
.position(self.effectData.modelEffect!.position_effect)
.scaleEffect(self.effectData.modelEffect!.currentScale)
.mask(
Image(uiImage: image!)
.resizable()
.scaledToFit()
.frame(width: geo.size.width, alignment: .center)
)
.blur(radius: CGFloat(self.effectData.modelEffect!.valueBlur))
.opacity(self.effectData.modelEffect!.opacity_effect)
}
.background(
ZStack {
Image(uiImage: image!)
.resizable()
.scaledToFit()
.frame(width: geo.size.width, alignment: .top)
.blur(radius: CGFloat(sliderValueBlur)*4)
}
.border(Color.black, width: 1)
)
} else {
ZStack {
Image(uiImage: image!)
.resizable()
.scaledToFit()
.frame(width: geo.size.width, alignment: .top)
.blur(radius: CGFloat(sliderValueBlur)*4)
}
.border(Color.black, width: 1)
}
Image(uiImage: self.eraseModel.inputImage)
.resizable()
.scaledToFit()
.frame(width: geo.size.width, alignment: .center)
.overlay(
VStack {
ForEach(stickerData.selectedSticker.indices, id: \.self) { index in
ZStack(alignment: .bottomTrailing) {
Image(uiImage: stickerData.selectedSticker[index].image)
.resizable()
.frame(width: stickerData.selectedSticker[index].width, height: stickerData.selectedSticker[index].height, alignment: .center)
.scaleEffect(stickerData.selectedSticker[index].scale)
.overlay(
Rectangle()
.stroke(Color.gray, lineWidth: stickerData.selectedSticker[index].isEditing ? 4 : 0)
)
}
.frame(width: 130, height: 130, alignment: .center)
.position(stickerData.selectedSticker[index].position)
.scaleEffect(stickerData.selectedSticker[index].scale)
}
}
)
}
}
}
}
}
And this is the function I use to convert the View to an Image:
extension UIView {
var renderedImage: UIImage {
// rect of capture
let rect = self.bounds
// create the context of bitmap
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
let context: CGContext = UIGraphicsGetCurrentContext()!
self.layer.render(in: context)
// get an image from the current context bitmap
let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return capturedImage
}
}
extension View {
func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
let window = UIWindow(frame: CGRect(origin: origin, size: size))
let hosting = UIHostingController(rootView: self)
hosting.view.frame = window.frame
window.addSubview(hosting.view)
window.makeKeyAndVisible()
return hosting.view.renderedImage
}
}
Usage example:
Button(action: {
let viewSnapshot = SnapshotView(image: self.$image, isActive: self.$isActive, isEffectAvailable: self.$isEffectAvailable, sliderValueBlur: self.$sliderValueBlur, effectData: self._effectData, eraseModel: self._eraseModel, stickerData: self._stickerData)
let saveImage = viewSnapshot.takeScreenshot(origin: geo.frame(in: .global).origin, size: self.image!.size)
UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil)
}, label: {
Text("SAVE")
.foregroundColor(.white)
})
Maybe you can help me figure out what's wrong here.
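As an aside, if you can target iOS 16 or later, ImageRenderer is an alternative to the off-screen UIWindow/UIHostingController approach above. A minimal sketch; the helper name and the fixed-size frame are my own choices:
import SwiftUI

@available(iOS 16.0, *)
@MainActor
func snapshot<V: View>(of view: V, size: CGSize) -> UIImage? {
    // Render the SwiftUI view at an explicit size, at the device's scale.
    let renderer = ImageRenderer(content: view.frame(width: size.width, height: size.height))
    renderer.scale = UIScreen.main.scale
    return renderer.uiImage
}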
I'm trying to replicate this UI in SwiftUI using a Grid.
I created the cell like this.
struct MenuButton: View {
let title: String
let icon: Image
var body: some View {
Button(action: {
print(#function)
}) {
VStack {
icon
.resizable()
.aspectRatio(contentMode: .fit)
.frame(width: 60, height: 60)
Text(title)
.foregroundColor(.black)
.font(.system(size: 20, weight: .bold))
.multilineTextAlignment(.center)
.padding(.top, 10)
}
}
.frame(width: 160, height: 160)
.overlay(RoundedRectangle(cornerRadius: 10).stroke(Color.fr_primary, lineWidth: 0.6))
}
}
And the Grid like so.
struct LoginUserTypeView: View {
private let columns = [
GridItem(.flexible(), spacing: 20),
GridItem(.flexible(), spacing: 20)
]
var body: some View {
ScrollView {
LazyVGrid(columns: columns, spacing: 30) {
ForEach(Menu.UserType.allCases, id: \.self) { item in
MenuButton(title: item.description, icon: Image(item.icon))
}
}
.padding(.horizontal)
.padding()
}
}
}
But on smaller screens like the iPod touch, the cells overlap.
On bigger iPhone screens, the spacing is still not correct.
What adjustments do I have to make so that in every screen size, the cells would show in a proper square shape and equal spacing on all sides?
MenuButton has a fixed width and height; that's why it behaves incorrectly.
You could utilise .aspectRatio and .frame(maxWidth: .infinity, maxHeight: .infinity) for this:
struct MenuButton: View {
let title: String
let icon: Image
var body: some View {
Button(action: {
print(#function)
}) {
VStack(spacing: 10) {
icon
.resizable()
.aspectRatio(contentMode: .fit)
.frame(maxWidth: 60, maxHeight: 60)
Text(title)
.foregroundColor(.black)
.font(.system(size: 20, weight: .bold))
.multilineTextAlignment(.center)
}
.padding()
}
.frame(maxWidth: .infinity, maxHeight: .infinity)
.aspectRatio(1, contentMode: .fill)
.overlay(RoundedRectangle(cornerRadius: 10).stroke(Color.fr_primary, lineWidth: 0.6))
}
}
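To check the spacing across screen sizes quickly, a preview like the following can help (the device names are just examples of installed simulators):
struct LoginUserTypeView_Previews: PreviewProvider {
    static var previews: some View {
        // Render the grid on a small and a large screen side by side.
        ForEach(["iPod touch (7th generation)", "iPhone 14 Pro Max"], id: \.self) { device in
            LoginUserTypeView()
                .previewDevice(PreviewDevice(rawValue: device))
                .previewDisplayName(device)
        }
    }
}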
I have a ScrollView with multiple Buttons. A Button contains an Image and a Text underneath.
As the images are pretty large, I am using .scaledToFill and .clipped. It seems that the clipped part of the image is still tappable even though it is not shown.
In the video you can see that I am tapping on button 1, but button 2 is triggered.
This is my code. The Image is inside the Card view.
struct ContentView: View {
@State var useWebImage = false
@State var isSheetShowing = false
@State var selectedIndex = 0
private let images = [
"https://images.unsplash.com/photo-1478368499690-1316c519df07?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2706&q=80",
"https://images.unsplash.com/photo-1507154258-c81e5cca5931?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2600&q=80",
"https://images.unsplash.com/photo-1513310719763-d43889d6fc95?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2734&q=80",
"https://images.unsplash.com/photo-1585766765962-28aa4c7d719c?ixlib=rb-1.2.1&auto=format&fit=crop&w=2734&q=80",
"https://images.unsplash.com/photo-1485970671356-ff9156bd4a98?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2734&q=80",
"https://images.unsplash.com/photo-1585607666104-4d5b201d6d8c?ixlib=rb-1.2.1&auto=format&fit=crop&w=2700&q=80",
"https://images.unsplash.com/photo-1577702066866-6c8897d06443?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2177&q=80",
"https://images.unsplash.com/photo-1513809491260-0e192158ae44?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2736&q=80",
"https://images.unsplash.com/photo-1582092723055-ad941d1db0d4?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2700&q=80",
"https://images.unsplash.com/photo-1478264635837-66efba4b74ba?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjF9&auto=format&fit=crop&w=2682&q=80"
]
var body: some View {
NavigationView {
ScrollView {
VStack(spacing: 40) {
Text(useWebImage ? "WebImage is used." : "SwiftUI Image is used")
.font(.system(size: 18))
.bold()
.kerning(0.5)
.padding(.top, 20)
Toggle(isOn: $useWebImage) {
Text("Use WebImage")
.font(.system(size: 18))
.bold()
.kerning(0.5)
.padding(.top, 20)
}
ForEach(0..<images.count) { index in
Button(action: {
self.selectedIndex = index
self.isSheetShowing.toggle()
}) {
Card(imageUrl: self.images[index], index: index, useWebImage: self.$useWebImage)
}
.buttonStyle(PlainButtonStyle())
}
}
.padding(.horizontal, 20)
.sheet(isPresented: self.$isSheetShowing) {
DestinationView(imageUrl: self.images[self.selectedIndex], index: self.selectedIndex, useWebImage: self.$useWebImage)
}
}
.navigationBarTitle("Images")
}
}
}
struct Card: View {
let imageUrl: String
let index: Int
@Binding var useWebImage: Bool
var body: some View {
VStack {
if useWebImage {
WebImage(url: URL(string: imageUrl))
.resizable()
.indicator(.activity)
.animation(.easeInOut(duration: 0.5))
.transition(.fade)
.scaledToFill()
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 250, maxHeight: 250, alignment: .center)
.cornerRadius(12)
.clipped()
} else {
Image("image\(index)")
.resizable()
.aspectRatio(contentMode: .fill)
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 250, maxHeight: 250, alignment: .center)
.cornerRadius(12)
.clipped()
}
HStack {
Text("Image #\(index + 1) (\(useWebImage ? "WebImage" : "SwiftUI Image"))")
.font(.system(size: 18))
.bold()
.kerning(0.5)
Spacer()
}
}
.padding(2)
.border(Color(.systemRed), width: 2)
}
}
Do you have an idea how to fix this issue?
I already tried using .resizable(resizingMode: .tile), but I would need to shrink the image first before a single tile would be usable.
For detailed information, you can also find the project on GitHub.
I would appreciate your help a lot.
The .clipped modifier affects only drawing; by default, a Button's entire content is tappable, regardless of what that content is.
So if you want to make your button tappable only in the image area, you have to explicitly limit hit testing to the image's rect and disable it everywhere else.
Here is a demo of a possible approach. Tested with Xcode 11.4 / iOS 13.4.
Demo code (a simplified variant of your snippet):
struct ButtonCard: View {
var body: some View {
VStack {
Image("sea")
.resizable()
.aspectRatio(contentMode: .fill)
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 250, maxHeight: 250, alignment: .center)
.cornerRadius(12)
.contentShape(Rectangle()) // << define clickable rect !!
.clipped()
HStack {
Text("Image #1")
.font(.system(size: 18))
.bold()
.kerning(0.5)
Spacer()
}.allowsHitTesting(false) // << disable label area !!
}
.padding(2)
.border(Color(.systemRed), width: 2)
}
}
struct TestClippedButton: View {
var body: some View {
Button(action: { print(">> tapped") }) {
ButtonCard()
}.buttonStyle(PlainButtonStyle())
}
}
I want to resize an Image frame to be a square that takes the same width of the iPhone's screen and consequently the same value (screen width) for height.
The following code doesn't work because it gives the image the same height as the view.
var body: some View {
Image("someImage")
.resizable()
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity, alignment: .center)
.clipped()
}
You can create a UIScreen extension for this:
extension UIScreen {
static let screenWidth = UIScreen.main.bounds.size.width
static let screenHeight = UIScreen.main.bounds.size.height
static let screenSize = UIScreen.main.bounds.size
}
Usage:
UIScreen.screenWidth
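Applied to the square image from the question, that might look like this (a sketch; .scaledToFill() plus .clipped() keeps the square from distorting the image):
Image("someImage")
    .resizable()
    .scaledToFill()
    .frame(width: UIScreen.screenWidth, height: UIScreen.screenWidth)
    .clipped()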
Try using GeometryReader:
let placeholder = UIImage(systemName: "photo")! // SF Symbols
struct ContentView: View {
var body: some View {
GeometryReader { geometry in
Image(uiImage: placeholder)
.resizable()
.frame(width: geometry.size.width, height: geometry.size.height, alignment: .center)
// .frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity, alignment: .center)
.clipped()
}
}
}
You can use UIScreen.main.bounds.width or .height:
.frame(
width:UIScreen.main.bounds.width,
height:UIScreen.main.bounds.height
)
I've come up with a solution using Environment Keys, by creating the following:
private struct MainWindowSizeKey: EnvironmentKey {
static let defaultValue: CGSize = .zero
}
extension EnvironmentValues {
var mainWindowSize: CGSize {
get { self[MainWindowSizeKey.self] }
set { self[MainWindowSizeKey.self] = newValue }
}
}
Then by reading the size from where the window is created:
var body: some Scene {
WindowGroup {
GeometryReader { proxy in
ContentView()
.environment(\.mainWindowSize, proxy.size)
}
}
}
Finally I can directly get the window size in any SwiftUI view, and it changes dynamically (on device rotation or window resizing on macOS):
@Environment(\.mainWindowSize) var mainWindowSize
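Applied to the original question, a view can then size a square image from the environment value (a sketch; the asset name is a placeholder):
struct SquareImageView: View {
    @Environment(\.mainWindowSize) var mainWindowSize

    var body: some View {
        Image("someImage")
            .resizable()
            .scaledToFill()
            .frame(width: mainWindowSize.width, height: mainWindowSize.width)
            .clipped()
    }
}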
The simplest way would be to make the image resizable and set the aspect ratio to 1.0:
var body: some View {
Image("someImage")
.resizable()
.aspectRatio(1.0, contentMode: .fit)
}