Hopefully someone can help me and give me advice. I have the task of building a photo-editing application with SwiftUI.
The app has a feature that removes the background from a photo of a person, leaving only the person. I then created a new UIImage as a layer behind the cut-out photo, so the background can be replaced with an image the user can drag into position. However, when I try to drag the new background, I can't: the frame of the background-removed photo sits on top of it and blocks the drag gesture.
This is the code I have written:
struct MaskImage: View {
    @State var currentPositions = CGPoint(x: UIScreen.main.bounds.width / 2, y: UIScreen.main.bounds.height / 4)
    @State var currentPosition = CGPoint(x: UIScreen.main.bounds.width / 2, y: UIScreen.main.bounds.height / 4)
    @GestureState private var isDragging = false

    var body: some View {
        GeometryReader { geo in
            HStack {
                VStack {
                    Spacer()
                    ZStack {
                        Image("EFFECT-1") // DRAGGABLE PHOTO BACKGROUND
                            .resizable()
                            .frame(width: abs(geo.size.width - 32), height: 375, alignment: .center)
                            .position(currentPosition)
                            .gesture(
                                DragGesture()
                                    .onChanged { value in
                                        self.currentPosition = value.location
                                    }
                                    .updating($isDragging) { value, state, transaction in
                                        state = true
                                        self.currentPosition = value.location
                                    }
                            )
                        Image("SEGMENTED_IMAGE_REMOVE_BACKGROUND") // The image that will be in front
                            .resizable()
                            .frame(width: abs(geo.size.width - 32), height: abs(round((geo.size.width / 2) * 2.438)), alignment: .center)
                    }
                    Spacer()
                }
            }
        }
        .background(Color.red)
    }
}
You can disable touch interactions for the top image using allowsHitTesting:
Image("SEGMENTED_IMAGE_REMOVE_BACKGROUND") // The image that will be in front
    .resizable()
    .frame(width: abs(geo.size.width - 32), height: abs(round((geo.size.width / 2) * 2.438)), alignment: .center)
    .allowsHitTesting(false)
In the wonderful world of SwiftUI, I have a View that I use as a cell. I intend to reproduce the layout of a previous application of mine, built with Auto Layout rather than SwiftUI, in which the background image filled the entire cell, fitting the width and cropping pieces of the image above and below.
In my new app, the SwiftUI code is the following:
struct CharacterRow2: View {
    var character: Character

    var body: some View {
        Text(character.name)
            .font(Font.custom(avengeanceHeroicAvengerNormal, size: 30))
            .foregroundColor(.white)
            .baselineOffset(-10)
            .shadow(color: .black, radius: 1, x: -1, y: 1)
            .frame(width: UIScreen.main.bounds.width, height: 140)
            .background {
                WebImage(url: extractImage(data: character.thumbnail))
                    .resizable()
                    .frame(width: UIScreen.main.bounds.width, height: 150)
            }
    }
}
With this code, my app looks like this:
I tried to add scaledToFill():
WebImage(url: extractImage(data: character.thumbnail))
    .resizable()
    .scaledToFill()
    .frame(width: UIScreen.main.bounds.width, height: 150)
But this is the result:
I'm stuck...
Thank you in advance!
In this case, you are simply using too many frames, and using them incorrectly. You should avoid using UIScreen.main.bounds in SwiftUI, especially in something like a cell view: the cell will not size itself properly relative to other views, which can cause UI issues that are difficult to trace.
The simplest way to get the behavior you want is this:
Text(character.name)
    .font(.largeTitle)
    .foregroundColor(.white)
    .baselineOffset(-10)
    .shadow(color: .black, radius: 1, x: -1, y: 1)
    .frame(height: 140)
    .frame(maxWidth: .infinity) // use .infinity for max width to make it
                                // as large as the space offered by the
                                // parent view
    .background {
        WebImage(url: extractImage(data: character.thumbnail))
            .resizable()
            // .frame(width: UIScreen.main.bounds.width, height: 150) <-- Remove this frame altogether
            .scaledToFill()
    }
    .clipped() // This keeps the image from being larger than the frame
This will size the view to be as wide as the parent view allows. Leaving UIScreen.main.bounds.width in place could make it larger than the parent view and cause it to be partially cut off.
In your last example the images are overlaying each other. This is due to calling scaledToFill(): the images now ignore their frame boundaries in the vertical direction. Adding .clipped() solves the problem.
struct CharacterRow2: View {
    var character: String

    var body: some View {
        Text(character)
            .font(.largeTitle)
            .foregroundColor(.white)
            .baselineOffset(-10)
            .shadow(color: .black, radius: 1, x: -1, y: 1)
            .frame(width: UIScreen.main.bounds.width, height: 140)
            .background {
                Image(character)
                    .resizable()
                    .scaledToFill()
                    .frame(width: UIScreen.main.bounds.width, height: 150)
                    .clipped() // <-- Add this
            }
    }
}
It seems this will only work with a ForEach inside a ScrollView. Using a List seems to break the vertical frame boundary.
struct ContentView: View {
    let content = ["1.jpg", "2.jpg", "3.jpg"]

    var body: some View {
        // These look really weird:
        // List(content, id: \.self) { name in
        //     CharacterRow2(character: name)
        // }
        // List {
        //     VStack {
        //         ForEach(content, id: \.self) { name in
        //             CharacterRow2(character: name)
        //         }
        //     }
        // }

        // Working:
        ScrollView {
            VStack {
                ForEach(content, id: \.self) { name in
                    CharacterRow2(character: name)
                        .padding()
                }
            }
        }
    }
}
Result:
I want to drag a circle shape only within a horizontal frame. With the code below, the shape drags fine horizontally and stops at the left edge, but when I drag toward the right it keeps going until it is outside the device frame. I have checked many answers but nothing worked; I also tried adding .onEnded, but that did not help either.
Please help me.
Thanks in advance.
Here is the code that I tried:
struct ProgressIndicator: View {
    @State private var position = CGSize.zero

    var body: some View {
        GeometryReader { geometry in
            VStack(alignment: .leading) {
                ZStack {
                    Circle()
                        .foregroundColor(Color.blue)
                        .opacity(0.3)
                        .frame(width: 40, height: 40, alignment: .leading)
                        .padding(.bottom, -12)
                }
                .gesture(
                    DragGesture()
                        .onChanged { value in
                            self.position.width += value.location.x
                        }
                )
                .offset(x: self.position.width - 20, y: 0)
            }
        }
    }
}
Here is a screenshot of the circle shape being dragged to the left and to the right.
You can restrict the position with GeometryReader:
self.position.width = min(self.position.width + value.location.x, geometry.size.width / 2)
Here's the result from Playgrounds:
https://imgur.com/FA5xyAz
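The same idea generalizes to clamping on both edges. Since the bounds check is pure arithmetic, it can be factored out of the view and tested on its own. The helper below is a sketch, not code from the answer above; the name `clampedX` and the 40-point diameter are my own assumptions, and a comment shows where it could plug into the `onChanged` closure:

```swift
import Foundation

/// Clamp a proposed horizontal center so a circle of the given diameter
/// stays fully inside a container of the given width.
/// Hypothetical helper, not part of the original answer.
func clampedX(_ proposed: CGFloat, diameter: CGFloat, containerWidth: CGFloat) -> CGFloat {
    let radius = diameter / 2
    // Keep the center at least one radius away from either edge.
    return min(max(proposed, radius), containerWidth - radius)
}

// Inside the drag gesture this could be used as:
//   .onChanged { value in
//       self.position.width = clampedX(value.location.x,
//                                      diameter: 40,
//                                      containerWidth: geometry.size.width)
//   }
```

With this in place the circle stops exactly at both edges instead of only the left one.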
This is how it works right now
I want the magnifying glass to pop up while the user is dragging and show a zoomed-in view of the photo, to help the user drag the line to the right spot. I can't get the X and Y coordinates right, so it just goes wherever.
This is the ZStack that contains the magnifying glass...
ZStack {
    Image("demo")
        .resizable()
        .aspectRatio(contentMode: .fit)
        .cornerRadius(50)
        .frame(width: 500, height: 500, alignment: .center)
        .position(x: CGFloat(self.magnifyX), y: CGFloat(self.magnifyY))
}
.frame(width: 100, height: 100, alignment: .center)
.clipShape(Circle())
.overlay(
    ZStack {
        Circle().fill(Color.black).frame(width: 5, height: 5, alignment: .center)
        Circle().stroke(Color.black, lineWidth: 4)
    }
)
.position(x: CGFloat(self.magnifyX), y: CGFloat(self.magnifyY - 75))
I'd be open to other ways of doing it as well, if this is not the correct way to do it.
Thanks for the help!
Not an answer, but I can't comment yet... Are you able to share your code for magnifyX and magnifyY?
Looking at the screen capture you shared, the magnification seems to be following the line (it's obviously just higher), but it's also mirroring your movements: as you move left to right, it moves right to left.
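That mirroring typically happens when the zoomed inner image is positioned with the raw touch point instead of solving for where the enlarged image's center must sit so that the touched point lands at the lens center. The mapping is pure geometry and can be checked in isolation; the function below is a sketch with a made-up name (`magnifiedImageCenter`), not code from the question:

```swift
import Foundation

/// For a lens of the given size showing an image zoomed by `zoom`,
/// return where the enlarged image's center must be placed (in the
/// lens's local coordinate space) so that `touch`, a point in the
/// unzoomed image's coordinate space, appears at the lens center.
/// Hypothetical helper, not from the original post.
func magnifiedImageCenter(touch: CGPoint,
                          imageSize: CGSize,
                          lensSize: CGSize,
                          zoom: CGFloat) -> CGPoint {
    // Top-left of the zoomed image such that zoom * touch maps to the lens center.
    let originX = lensSize.width / 2 - zoom * touch.x
    let originY = lensSize.height / 2 - zoom * touch.y
    // SwiftUI's .position() takes the view's center, so shift by half the zoomed size.
    return CGPoint(x: originX + zoom * imageSize.width / 2,
                   y: originY + zoom * imageSize.height / 2)
}
```

An easy sanity check: touching the exact middle of the image should leave the zoomed image centered in the lens, and moving the touch to the right should move the inner image left (revealing content to the right), not mirror the gesture.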
Have a look at the code below; the only thing I am still unable to fix is preventing the image from moving off-screen:
import SwiftUI
struct ContentDetailPhoto: View {
    // Properties
    @State var scale: CGFloat = 1.0
    @State var touchScale: CGFloat = 2
    @State var isTouchingScreen = false
    @State var isZoomedIn = false
    @State var pointTouchedOnScreen: CGPoint = CGPoint.zero
    @State var panSize: CGSize = CGSize.zero
    @State var fingerState: String = "Finger is not touching the image"
    @State var prevSize: CGSize = CGSize.zero
    // @State var counter = 0
    @State var touching = false
    @State var xPos = 0
    @State var yPos = 0
    // let timer = Timer.publish(every: 0.1, on: .main, in: .common).autoconnect()

    var cakeName: String

    var body: some View {
        ZStack {
            Image("\(self.cakeName)")
                .grayscale(0.999)
                .opacity(0.5)
            GeometryReader { reader in
                Image("\(self.cakeName)")
                    .resizable()
                    .cornerRadius(10)
                    .offset(x: self.panSize.width, y: self.panSize.height)
                    .scaleEffect(self.isTouchingScreen ? self.touchScale : 1,
                                 anchor: UnitPoint(x: self.pointTouchedOnScreen.x / reader.frame(in: .global).maxX,
                                                   y: self.pointTouchedOnScreen.y / reader.frame(in: .global).maxY))
                    .aspectRatio(contentMode: self.isZoomedIn ? .fill : .fit)
                    .frame(maxWidth: UIScreen.main.bounds.size.width - 10, maxHeight: UIScreen.main.bounds.size.height * 0.7, alignment: .center)
                    .gesture(DragGesture(minimumDistance: 0, coordinateSpace: .global)
                        .onChanged { value in
                            self.fingerState = "Finger is touching the image" // for debug purposes only
                            self.isZoomedIn = true
                            self.isTouchingScreen = true
                            self.pointTouchedOnScreen = value.startLocation
                            self.scale = self.touchScale
                            self.panSize = CGSize(width: value.translation.width * self.touchScale / 2,
                                                  height: value.translation.height * self.touchScale / 2)
                        }
                        .onEnded { _ in
                            self.fingerState = "Finger is not touching the image" // for debug purposes only
                            self.isZoomedIn = false
                            self.isTouchingScreen = false
                            self.panSize = CGSize.zero
                            self.prevSize = CGSize.zero
                        })
                    .cornerRadius(10)
                    .animation(.easeInOut(duration: 1))
                    // .offset(x: 0, y: -50)
            }
            .edgesIgnoringSafeArea(.top)

            // Add a watermark on top of all the layers
            // Image("logo_small")
            //     .resizable()
            //     .scaledToFit()
            //     .opacity(0.05)
            //     .frame(width: UIScreen.main.bounds.width - 10, height: UIScreen.main.bounds.height - 10, alignment: .center)
            //     .clipShape(Circle())
        }
    }
}

#if DEBUG
struct MapView01_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            // iPhone SE
            ContentDetailPhoto(cakeName: "cake_0000")
                .previewDevice("iPhone SE")
            // iPhone X
            ContentDetailPhoto(cakeName: "cake_0000")
                .previewDevice("iPhone X")
        }
    }
}
#endif
It looks like you're using SwiftUI in an iPhone app, running in the iOS simulator. The simulator simulates touches. There is no API on the iPhone that supports this, because your finger can't magically turn into a magnifying glass (there is no other pointer input).
macOS and iPadOS are a bit of a different story. On macOS you could use the hover API to change the pointer natively with NSCursor, or make use of the new UIPointerInteraction on iPadOS.
Imagine a grid of n x n squares. The squares are ZStacks, and each contains an optional piece (in this case a circle). If I offset a piece over another ZStack, it gets hidden by that ZStack.
What I'm trying to build is a chess game; imagine the Circle() being a piece.
This was my initial attempt:
struct ContentView: View {
    @State var circle1Offset: CGSize = .zero

    var body: some View {
        VStack {
            ZStack {
                Color.blue
                Circle().fill(Color.black)
                    .frame(width: 50, height: 50)
                    .offset(circle1Offset)
                    .gesture(
                        DragGesture()
                            .onChanged { value in
                                self.circle1Offset = value.translation
                            }
                    )
            }
            .frame(width: 150, height: 150)
            ZStack {
                Color.red
            }
            .frame(width: 150, height: 150)
        }
    }
}
I also tried adding an overlay() instead of using a ZStack. I'm not sure which is more appropriate for this case, but unfortunately I can't add an "optional" overlay like so:
struct ContentView: View {
    @State var circle1Offset: CGSize = .zero

    var body: some View {
        VStack {
            Color.blue
                .frame(width: 150, height: 150)
                .overlay(
                    Circle().fill(Color.black)
                        .frame(width: 50, height: 50)
                        .offset(circle1Offset)
                        .gesture(
                            DragGesture()
                                .onChanged { value in
                                    self.circle1Offset = value.translation
                                }
                        )
                )
            Color.red
                .frame(width: 150, height: 150)
        }
    }

    func tileHasPiece() -> Circle? {
        return Circle() // This would consult my model
    }
}
But as I said, I don't know how to use tileHasPiece() to add an overlay depending on this.
Just put all the static board elements below and all the active figure elements above, as in the modified snapshot of your code below. That way everything shares one coordinate space. (Calculating the coordinates of the figures is, of course, out of scope for this topic.)
struct FTContentView: View {
    @State var circle1Offset: CGSize = .zero

    var body: some View {
        ZStack {
            // put board below
            VStack {
                Color.blue
                    .frame(width: 150, height: 150)
                Color.red
                    .frame(width: 150, height: 150)
            }
            // put figure above
            Circle().fill(Color.black)
                .frame(width: 50, height: 50)
                .offset(circle1Offset)
                .gesture(
                    DragGesture()
                        .onChanged { value in
                            self.circle1Offset = value.translation
                        }
                )
        } // board coordinate space
    }
}
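On the coordinate calculation the answer leaves out of scope: snapping a dropped figure to a square is plain arithmetic on the drag translation, so it can live outside the view and be tested separately. The sketch below assumes square tiles and uses names of my own invention (`squareIndex`, `snappedOffset`); it is not part of the answer above:

```swift
import Foundation

/// Index (col, row) of the square a figure lands on after being dragged
/// by `translation` from its home square. Hypothetical helper.
func squareIndex(home: (col: Int, row: Int),
                 translation: CGSize,
                 tileSize: CGFloat) -> (col: Int, row: Int) {
    // Round the translation to the nearest whole number of tiles.
    let col = home.col + Int((translation.width / tileSize).rounded())
    let row = home.row + Int((translation.height / tileSize).rounded())
    return (col, row)
}

/// Offset that places the figure exactly on the target square; this is
/// what could be assigned back to circle1Offset in the gesture's onEnded.
func snappedOffset(home: (col: Int, row: Int),
                   target: (col: Int, row: Int),
                   tileSize: CGFloat) -> CGSize {
    CGSize(width: CGFloat(target.col - home.col) * tileSize,
           height: CGFloat(target.row - home.row) * tileSize)
}
```

Keeping this logic in plain functions also makes it easy to reject illegal moves later by consulting the game model before committing the snapped offset.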
If you have played around with Apple's Room Tutorial (link: ../WWDC2019/204/), I added a small touch-to-zoom gesture (thanks to @Alladinian and @brar07), per the code below.
HOWEVER, when the image is touched and panned it moves off the screen and does not return to its original position. You should be able to copy and paste this code into Apple's project with few modifications.
REQUIREMENTS: 1) The image should stay within the confines of the image frame, i.e. when zoomed, the edges of the image should not go beyond the edges of the defined frame (or the screen if no frame is defined). 2) The image should return to its original position.
The final result would be similar to how mouse-over zoom works for product images on some websites.
import SwiftUI
struct RoomDetail: View {
    let room: Room
    @State var scale: CGFloat = 1.0
    @State var isTouchingScreen = false
    @State var isZoomedIn = false
    @State var pointTouchedOnScreen: CGPoint = CGPoint.zero
    @State var panSize: CGSize = CGSize.zero
    @State var fingerState: String = "Finger is not touching the image"

    var body: some View {
        ZStack {
            // Show the room selected by the user, implement zooming capabilities
            GeometryReader { reader in
                Image("\(self.room.name)" + "_Thumb")
                    .resizable()
                    .offset(x: self.panSize.width, y: self.panSize.height)
                    .gesture(DragGesture(minimumDistance: 0, coordinateSpace: .global)
                        .onChanged { value in
                            self.fingerState = "Finger is touching the image" // for debug purposes only
                            self.isZoomedIn = true
                            self.isTouchingScreen = true
                            self.pointTouchedOnScreen = value.startLocation
                            self.scale = 1.1
                            let offsetWidth = (reader.frame(in: .global).maxX * self.scale - reader.frame(in: .global).maxX) / 2
                            let newDraggedWidth = self.panSize.width * self.scale
                            // NOTE: all three branches below are currently identical
                            if newDraggedWidth > offsetWidth {
                                self.panSize = CGSize(width: value.translation.width + self.panSize.width,
                                                      height: value.translation.height + self.panSize.height)
                            } else if newDraggedWidth < -offsetWidth {
                                self.panSize = CGSize(width: value.translation.width + self.panSize.width,
                                                      height: value.translation.height + self.panSize.height)
                            } else {
                                self.panSize = CGSize(width: value.translation.width + self.panSize.width,
                                                      height: value.translation.height + self.panSize.height)
                            }
                        }
                        .onEnded { _ in
                            self.fingerState = "Finger is not touching the image" // for debug purposes only
                            self.isZoomedIn = false
                            self.isTouchingScreen = false
                        })
                    .aspectRatio(contentMode: self.isZoomedIn ? .fill : .fit)
                    .scaleEffect(self.isTouchingScreen ? self.scale : 1,
                                 anchor: UnitPoint(x: self.pointTouchedOnScreen.x / reader.frame(in: .global).maxX,
                                                   y: self.pointTouchedOnScreen.y / reader.frame(in: .global).maxY))
                    .animation(.easeInOut(duration: 1))
                    .frame(maxWidth: UIScreen.main.bounds.size.width - 50, maxHeight: UIScreen.main.bounds.size.height - 200, alignment: .center)
                    .clipped()
                    .offset(x: 0, y: -50)
            }
        }
    }
}

struct RoomDetail_Previews: PreviewProvider {
    static var previews: some View {
        RoomDetail(room: testData[0])
    }
}
I have no idea if this answers your question exactly, but I wanted to make this note: the ORDER of modifiers MATTERS.
I was trying to do a zoom effect and it kept exceeding the frame of the image. However, when I set the frame AFTER the scale effect, it stayed within the bounds.
This didn't work; the image wouldn't "crop" to the size of the frame:
Image(uiImage: self.userData.image!)
    .resizable()
    .offset(x: self.currentPosition.width, y: self.currentPosition.height)
    .aspectRatio(contentMode: .fill)
    .frame(maxWidth: metrics.size.width * 0.60, maxHeight: metrics.size.height * 0.60, alignment: .top)
    .scaleEffect(self.scale)
    .clipped()
    .foregroundColor(.gray)
Whereas this DID work:
Image(uiImage: self.userData.image!)
    .resizable()
    .scaleEffect(self.scale)
    .offset(x: self.currentPosition.width, y: self.currentPosition.height)
    .aspectRatio(contentMode: .fill)
    .frame(maxWidth: metrics.size.width * 0.60, maxHeight: metrics.size.height * 0.60, alignment: .top)
    .clipped()
    .foregroundColor(.gray)
The only difference is applying scaleEffect before the frame. (In this case, I have a drag gesture and a zoom gesture.)
I spent a lot of time trying to sort this out; it may not exactly solve your issue, but perhaps it is useful to someone.