SwiftUI - ultraThinMaterial glitch on moving views - iOS

When I move a view with an ultra-thin material background, it turns black. Is this a bug, or am I doing something wrong?
Is there a workaround to achieve this look on a moving view?
I noticed it only happens when there is angular motion; if I remove the rotation effect, the problem goes away.
Testable code:
struct Test: View {
    @State var offset: CGFloat = 0
    @GestureState var isDragging: Bool = false

    var body: some View {
        GeometryReader { reader in
            ZStack {
                Image(systemName: "circle.fill")
                    .font(.largeTitle)
                    .frame(width: 300, height: 300)
                    .background(.red)
                    .overlay(alignment: .bottom) {
                        Rectangle()
                            .frame(height: 75)
                            .background(.ultraThinMaterial)
                    }
                    .clipShape(
                        RoundedRectangle(cornerRadius: 15, style: .continuous)
                    )
                    .compositingGroup()
                    .offset(x: offset)
                    .rotationEffect(.degrees(getRotation(angle: 8)))
                    .compositingGroup()
                    .gesture(
                        DragGesture()
                            .updating($isDragging) { _, state, _ in
                                state = true
                            }
                            .onChanged { value in
                                let translation = value.translation.width
                                offset = (isDragging ? translation : .zero)
                            }
                            .onEnded { value in
                                // getRect() is assumed to be a helper returning UIScreen.main.bounds (not shown in the post).
                                let width = getRect().width
                                let translation = value.translation.width
                                let checkingStatus = translation > 0 ? translation : -translation
                                withAnimation {
                                    if checkingStatus > (width / 2) {
                                        offset = (translation > 0 ? width : -width) * 2
                                    } else {
                                        offset = 0
                                    }
                                }
                            }
                    )
            }
            .frame(maxWidth: .infinity, maxHeight: .infinity)
        }
    }

    private func getRotation(angle: Double) -> Double {
        let rotation = (offset / getRect().width) * angle
        return rotation
    }
}

The code is not testable as posted, so it's hard to say definitively, but:
try moving that image with its modifiers into a separate standalone view
try compositing it:

.clipShape(
    RoundedRectangle(cornerRadius: 15, style: .continuous)
)
.compositingGroup() // << here !!
.offset(y: -topOffset)

For me, just adding the .compositingGroup() modifier changed nothing; I still had the glitch.
In my case I also use blur, shadows, and other modifiers as well as rotationEffect. The glitch is certainly connected with the rotation.
I managed to solve the problem by changing the order of the view modifiers. Make sure that rotationEffect(_:anchor:) is applied before the others.
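For illustration only, applying that suggestion to the question's example might look like the snippet below. It reuses the question's offset state and getRotation(angle:) helper, and it is a sketch: the exact placement of the clip and offset may need adjusting for the desired look.

// Sketch of the suggested ordering: rotationEffect applied before the
// clip/compositing modifiers. Untested against the original project.
Image(systemName: "circle.fill")
    .font(.largeTitle)
    .frame(width: 300, height: 300)
    .background(.red)
    .overlay(alignment: .bottom) {
        Rectangle()
            .frame(height: 75)
            .background(.ultraThinMaterial)
    }
    .rotationEffect(.degrees(getRotation(angle: 8)))  // moved before clip/composite
    .clipShape(RoundedRectangle(cornerRadius: 15, style: .continuous))
    .compositingGroup()
    .offset(x: offset)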

Related

What's the right way of observing interpolated values of an animated property of a SwiftUI view?

I'd like to observe the interpolated value of rotation in the below example. How does one achieve that? I only seem to be able to get the end value.
struct ExampleView: View {
    @State var rotation = Angle.zero

    var body: some View {
        ZStack {
            RoundedRectangle(cornerRadius: 20)
                .fill(Color.blue)
            Text("Example")
                .foregroundColor(.white)
        }
        .frame(width: 300, height: 300)
        .rotation3DEffect(rotation, axis: (x: 0, y: 1, z: 0))
        .onTapGesture {
            withAnimation {
                rotation = Angle(degrees: rotation.degrees + 60)
            }
        }
        .onChange(of: rotation) { newValue in
            // Only prints the end value
            print(newValue.degrees)
        }
    }
}
I tried both implicit and explicit animations, but both give me the same output. The same applies to the animatableData of the Angle instance rotation here.
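One approach that is commonly used for this, though it is not part of the original post, is to route the angle through a custom animatable modifier: SwiftUI sets animatableData for every frame of the animation, so the modifier can report each interpolated value. A rough sketch; RotationReporter and onUpdate are illustrative names.

import SwiftUI

// Sketch only: reports interpolated rotation values via animatableData.
struct RotationReporter: AnimatableModifier {
    var degrees: Double
    var onUpdate: (Double) -> Void

    var animatableData: Double {
        get { degrees }
        set {
            degrees = newValue
            // Called with each interpolated value while the animation runs.
            // If the callback mutates view state, defer it (e.g. DispatchQueue.main.async)
            // to avoid modifying state during a view update.
            onUpdate(newValue)
        }
    }

    func body(content: Content) -> some View {
        content.rotation3DEffect(.degrees(degrees), axis: (x: 0, y: 1, z: 0))
    }
}

// Usage in ExampleView: replace the rotation3DEffect line with
// .modifier(RotationReporter(degrees: rotation.degrees) { print($0) })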

SwiftUI - stacking views like the notifications on the Lockscreen

I am trying to stack the views of my app like the notifications on the Lock Screen (see attachment). I want them to stack when a button is clicked and to expand when a view is clicked. I have tried many things with VStack and setting the offsets of the other views, but without success.
Is there maybe a standardized and easy way of doing this?
Has somebody else tried this and has a tip for implementing it?
Thanks
Notifications stacked
Notifications expanded
EDIT:
So this is some Code that I've tried:
VStack {
    ToDoItemView(toDoItem: ToDoItemModel.toDo1)
        .background(Color.green)
        .cornerRadius(20.0)
        .padding(.horizontal, 8)
        .padding(.bottom, -1)
        .zIndex(2.0)
    ToDoItemView(toDoItem: ToDoItemModel.toDo2)
        .background(Color.blue)
        .cornerRadius(20.0)
        .padding(.horizontal, 8)
        .padding(.bottom, -1)
        .zIndex(1.0)
        .offset(x: 0.0, y: offset)
    ToDoItemView(toDoItem: ToDoItemModel.toDo3)
        .background(Color.yellow)
        .cornerRadius(20.0)
        .padding(.horizontal, 8)
        .padding(.bottom, -1)
        .offset(x: 0.0, y: offset1)
    ToDoItemView(toDoItem: ToDoItemModel.toDo3)
        .background(Color.gray)
        .cornerRadius(20.0)
        .padding(.horizontal, 8)
        .padding(.bottom, -1)
        .offset(x: 0.0, y: offset1)
    ToDoItemView(toDoItem: ToDoItemModel.toDo3)
        .background(Color.pink)
        .cornerRadius(20.0)
        .padding(.horizontal, 8)
        .padding(.bottom, -1)
        .offset(x: 0.0, y: offset1)
}
Button(action: {
    if !stacked {
        stacked.toggle()
        withAnimation(.easeOut(duration: 0.5)) { self.offset = -100.0 }
        withAnimation(.easeOut(duration: 0.5)) { self.offset1 = -202.0 }
    } else {
        stacked.toggle()
        withAnimation(.easeOut(duration: 0.5)) { self.offset = 0 }
        withAnimation(.easeOut(duration: 0.5)) { self.offset1 = 0 }
    }
}, label: {
    Text("Button")
})
So the first problem is that views 4 and 5 are not stacked behind view 3 on the z-axis.
The second problem is that the button does not move up under the 5th view (the frame of the VStack keeps its old size).
Views stacked vertical
Views stacked on z
Another way using ZStack, if you're interested.
Note:
I have declared the variables directly in the view; you can create your own model if you wish.
struct ContentViewsss: View {
    @State var isExpanded = false
    var color: [Color] = [Color.green, Color.gray, Color.blue, Color.red]
    var opacitys: [Double] = [1.0, 0.7, 0.5, 0.3]
    var height: CGFloat
    var width: CGFloat
    var offsetUp: CGFloat
    var offsetDown: CGFloat

    init(height: CGFloat = 100.0, width: CGFloat = 400.0, offsetUp: CGFloat = 10.0, offsetDown: CGFloat = 10.0) {
        self.height = height
        self.width = width
        self.offsetUp = offsetUp
        self.offsetDown = offsetDown
    }

    var body: some View {
        ZStack {
            ForEach((0..<color.count).reversed(), id: \.self) { index in
                Text("\(index)")
                    .frame(width: width, height: height)
                    .padding(.horizontal, isExpanded ? 0.0 : CGFloat(-index * 4))
                    .background(color[index])
                    .cornerRadius(height / 2)
                    .opacity(isExpanded ? 1.0 : opacitys[index])
                    .offset(x: 0.0, y: isExpanded ? (height + (CGFloat(index) * (height + offsetDown))) : (offsetUp * CGFloat(index)))
            }
            Button(action: {
                withAnimation {
                    if isExpanded {
                        isExpanded = false
                    } else {
                        isExpanded = true
                    }
                }
            }, label: {
                Text(isExpanded ? "Stack" : "Expand")
            })
            .offset(x: 0.0, y: isExpanded ? (height + (CGFloat(color.count) * (height + offsetDown))) : offsetDown * CGFloat(color.count * 3))
        }
        .padding()
        Spacer().frame(height: 550)
    }
}
I find it a little easier to get exact positioning using GeometryReader than using ZStack and trying to fiddle with offsets/positions.
This code has some hard-coded numbers that you might want to make more dynamic, but it does function. It uses a lot of the ideas you were already using (zIndex, offset, etc.):
struct ToDoItemModel {
    var id = UUID()
    var color: Color
}

struct ToDoItemView: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 20).fill(Color.clear)
            .frame(height: 100)
    }
}

struct ContentView: View {
    @State private var stacked = true
    var todos: [ToDoItemModel] = [.init(color: .green), .init(color: .pink), .init(color: .orange), .init(color: .blue)]

    func offsetForIndex(_ index: Int) -> CGFloat {
        CGFloat((todos.count - index - 1) * (stacked ? 4 : 104))
    }

    func horizontalPaddingForIndex(_ index: Int) -> CGFloat {
        stacked ? CGFloat(todos.count - index) * 4 : 4
    }

    var body: some View {
        VStack {
            GeometryReader { reader in
                ForEach(Array(todos.reversed().enumerated()), id: \.1.id) { (index, item) in
                    ToDoItemView()
                        .background(item.color)
                        .cornerRadius(20)
                        .padding(.horizontal, horizontalPaddingForIndex(index))
                        .offset(x: 0, y: offsetForIndex(index))
                        .zIndex(Double(index))
                }
            }
            .frame(height:
                stacked ? CGFloat(100 + todos.count * 4) : CGFloat(todos.count * 104)
            )
            .border(Color.red)
            Button(action: {
                withAnimation {
                    stacked.toggle()
                }
            }) {
                Text("Unstack")
            }
            Spacer()
        }
    }
}

Question about .gesture(drag gesture) in swift ui

I am trying to detect when this text view has been swiped. The code compiles fine, but I am not able to trigger the swipe on my actual device. When I swipe, nothing happens. Tap seems to work just fine. Can anyone tell me what I'm doing wrong in my code?
In case it matters, I'm developing a watchOS app in Swift 5.3 with the latest Xcode.
var body: some View {
    Text(tempstring).onTapGesture { checkStateRoll() }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .gesture(DragGesture(minimumDistance: 10, coordinateSpace: .global)
            .onEnded { value in
                let horizontalAmount = value.translation.width as CGFloat
                let verticalAmount = value.translation.height as CGFloat
                if abs(horizontalAmount) > abs(verticalAmount) {
                    horizontalAmount < 0 ? leftswipe() : rightswipe()
                } else {
                    verticalAmount < 0 ? upswipe() : downswipe()
                }
                tempstring = String(numdice) + "d" + String(typesofdice[typedice])
                speaknumber()
            })
        .background(progstate == 2 ? Color.blue : Color.red)
    }
}
Thanks a lot for any help. This has been stumping me for weeks.
A couple of things:
With respect to layout, setting a frame will not change the text size, because the text sizes itself to its content. Use a greedy view like a ZStack if you want to take up all of the space.
The sequence of the modifiers does matter. Reading from bottom to top tells you the order in which they are applied, so in this case the gestures need to go on the stack.
Here is an example playground:
import SwiftUI
import PlaygroundSupport

struct V: View {
    var body: some View {
        ZStack {
            Color.red
            Text("Test")
        }
        .onTapGesture { print("tap") }
        .gesture(DragGesture(minimumDistance: 10, coordinateSpace: .global).onEnded { print($0) })
    }
}

let host = UIHostingController(rootView: V().frame(width: 500.0, height: 500.0))
host.preferredContentSize = CGSize(width: 300, height: 300)
PlaygroundPage.current.liveView = host

SwiftUI Stack Offset changing on rotation leading to crash

I'm trying to implement an app where the user can swipe left and right to move between different horizontal ScrollViews. In my app I use GeometryReader to detect whether the currently active ScrollView should change; I didn't include that in this example.
I noticed that very strange things happen if the offset of the HStack is changing with an animation while the device gets rotated to landscape mode. Not only does the app crash, the phone also becomes completely unresponsive!
I tried this on iOS 13.5 and 13.6. iOS 14 doesn't even recognize the simultaneousGesture (why is that?). I think the .offset causes the problem.
Does anybody have a solution? I couldn't find another way to implement what I described at the beginning, but I'd like to hear about easier ways.
Thanks!
struct ContentView: View {
    @State var offset: CGFloat = UIScreen.main.bounds.width / 2

    var body: some View {
        HStack {
            ScrollView(.horizontal) {
                Text("Test")
            }
            .simultaneousGesture(DragGesture()
                .onChanged { translation in
                    print("triggered")
                    if translation.predictedEndTranslation.width < 50 {
                        withAnimation(.easeInOut(duration: 5)) {
                            self.offset = -UIScreen.main.bounds.width / 2
                        }
                    }
                }
            )
            .frame(width: UIScreen.main.bounds.width)
            ScrollView(.horizontal) {
                Text("Test2")
            }
            .simultaneousGesture(DragGesture()
                .onChanged { translation in
                    print("triggered")
                    if translation.predictedEndTranslation.width > 50 {
                        withAnimation(.easeInOut(duration: 5)) {
                            self.offset = UIScreen.main.bounds.width / 2
                        }
                    }
                }
            )
        }
        .frame(width: UIScreen.main.bounds.width * 2)
        .offset(x: self.offset)
    }
}
Another example with .position used instead of .offset (works on iOS 14 but not below):
struct ContentView: View {
    @State var offset: CGFloat = UIScreen.main.bounds.width

    var body: some View {
        HStack {
            ScrollView(.horizontal) {
                Button(action: {
                    withAnimation(.easeInOut(duration: 5)) {
                        self.offset = 0
                    }
                }) {
                    Circle()
                        .foregroundColor(.green)
                        .frame(width: 100, height: 100)
                }
            }
            .frame(width: UIScreen.main.bounds.width)
            ScrollView(.horizontal) {
                Button(action: {
                    withAnimation(.easeInOut(duration: 5)) {
                        self.offset = UIScreen.main.bounds.width
                    }
                }) {
                    Circle()
                        .frame(width: 100, height: 100)
                        .foregroundColor(.red)
                }
            }
            .frame(width: UIScreen.main.bounds.width)
        }
        .frame(width: UIScreen.main.bounds.width * 2)
        .position(x: self.offset, y: 300)
    }
}
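This is not from the original thread, but one alternative is to size the pages from a GeometryReader instead of hard-coding UIScreen.main.bounds, so the widths follow device rotation. The sketch below only illustrates that idea; it makes no claim to fix the reported crash, and the page/threshold values are assumptions.

import SwiftUI

// Rough sketch: page widths come from GeometryReader, so they update on rotation.
struct PagedScrollViews: View {
    @State private var page = 0   // 0 = first page, 1 = second page

    var body: some View {
        GeometryReader { geo in
            HStack(spacing: 0) {
                ScrollView(.horizontal) { Text("Test") }
                    .frame(width: geo.size.width)
                ScrollView(.horizontal) { Text("Test2") }
                    .frame(width: geo.size.width)
            }
            .offset(x: -CGFloat(page) * geo.size.width)
            .simultaneousGesture(
                DragGesture()
                    .onEnded { value in
                        withAnimation(.easeInOut) {
                            if value.predictedEndTranslation.width < -50 { page = 1 }
                            if value.predictedEndTranslation.width > 50 { page = 0 }
                        }
                    }
            )
        }
    }
}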

SwiftUI Can't PanGesture an Image

I have not been able to find an equivalent to the pan gesture in SwiftUI. I do see and use magnify, tap, drag and rotate, but I do not see any built-in pan. In the following code snippet I add an image and allow the user to zoom, but I want the user to also be able to move the zoomed image to focus on an area of interest. Dragging, of course, does not do the job; it just moves the frame.
I tried layering a frame on top and moving the bottom image, but could not make that work either.
struct ContentView: View {
    @State var scale: CGFloat = 1.0
    @State var isScaled: Bool = false
    @State private var dragOffset = CGSize.zero

    var body: some View {
        GeometryReader { geo in
            VStack {
                ZStack {
                    RoundedRectangle(cornerRadius: 40)
                        .foregroundColor(Color.white)
                        .frame(width: geo.size.width - 45, height: geo.size.width - 45)
                        .shadow(radius: 10)
                    Image("HuckALaHuckMedium")
                        .resizable()
                        .scaleEffect(self.scale)
                        .frame(width: geo.size.width - 60, height: geo.size.width - 60)
                        .cornerRadius(40)
                        .aspectRatio(contentMode: .fill)
                        .shadow(radius: 10, x: 20, y: 20)
                        // need pan, not drag
                        .gesture(
                            DragGesture()
                                .onChanged { self.dragOffset = $0.translation }
                                .onEnded { _ in self.dragOffset = .zero }
                        )
                        // this works but you can't "zoom twice"
                        .gesture(MagnificationGesture()
                            .onChanged { value in
                                self.scale = self.isScaled ? 1.0 : value.magnitude
                            }
                            .onEnded({ value in
                                //self.scale = 1.0
                                self.isScaled.toggle()
                            })
                        )
                        .animation(.easeInOut)
                        .offset(self.dragOffset)
                } // ZStack
                Spacer()
            }
        }
    }
}
An original image example:
And that image after zoom - but with drag not pan:
Any guidance would be appreciated. Xcode 11.3 (11C29)
To make it simpler and more readable, I created an extension/modifier for that
struct DraggableView: ViewModifier {
    @State var offset = CGPoint(x: 0, y: 0)

    func body(content: Content) -> some View {
        content
            .gesture(DragGesture(minimumDistance: 0)
                .onChanged { value in
                    self.offset.x += value.location.x - value.startLocation.x
                    self.offset.y += value.location.y - value.startLocation.y
                })
            .offset(x: offset.x, y: offset.y)
    }
}

extension View {
    func draggable() -> some View {
        return modifier(DraggableView())
    }
}
Now all you have to do is call the modifier:
Image(systemName: "plus")
.draggable()
For others: this is really simple. Ensure that the magnification has happened before the drag is allowed; the drag will then work like a pan. First, define the drag gesture:

let dragGesture = DragGesture()
    .onChanged { (value) in
        self.translation = value.translation
    }
    .onEnded { (value) in
        self.viewState.width += value.translation.width
        self.viewState.height += value.translation.height
        self.translation = .zero
    }

Then create an @State value that you toggle to true after the magnification gesture has been activated. When you attach the gesture to the image, do so conditionally:

.gesture(self.canBeDragged ? dragGesture : nil)
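Putting those pieces together, a minimal sketch might look like the following. The surrounding view, the placeholder image, and the exact zoom threshold are assumptions; only the conditional .gesture pattern comes from the answer above.

import SwiftUI

// Hedged sketch of the conditional-gesture approach described above.
struct ZoomPanImage: View {
    @State private var scale: CGFloat = 1.0
    @State private var canBeDragged = false
    @State private var viewState = CGSize.zero
    @State private var translation = CGSize.zero

    var body: some View {
        let dragGesture = DragGesture()
            .onChanged { value in translation = value.translation }
            .onEnded { value in
                viewState.width += value.translation.width
                viewState.height += value.translation.height
                translation = .zero
            }
        let magnification = MagnificationGesture()
            .onChanged { value in scale = value.magnitude }
            .onEnded { _ in canBeDragged = scale > 1.0 }  // allow panning only after zooming in

        return Image(systemName: "photo")
            .resizable()
            .scaleEffect(scale)
            .offset(x: viewState.width + translation.width,
                    y: viewState.height + translation.height)
            .gesture(magnification)
            .gesture(canBeDragged ? dragGesture : nil)  // the drag acts like a pan once zoomed
    }
}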
I'm a little late, but for anyone still looking, please try this, as it worked for me. What you want to do is change the offset of the image right after zooming out, in your MagnificationGesture .onChanged() handler.
var magnification: some Gesture {
    MagnificationGesture()
        .updating($magnifyBy) {
            .....
        }
        .onChanged() { _ in
            // Zoom out: value of magnifyBy < 1.0
            if self.magnifyBy <= 0.6 {
                // Change offset of image after zoom out to center of screen
                self.currentPosition = CGSize(width: 1.0, height: 1.0)
            }
            self.newPosition = self.currentPosition
        }
        .onEnded {
            .....
        }
}

var body: some View {
    Image()
        .offset(x: self.currentPosition.width, y: self.currentPosition.height)
}
Any questions, please let me know.
