Reference to captured var in concurrently-executing code - ios

I was trying out the new async/await pattern in Swift and came across something I found confusing.
struct ContentView: View {
    var body: some View {
        Text("Hello World!")
            .task {
                var num = 1
                Task {
                    print(num)
                }
                Task {
                    print(num)
                }
            }
    }

    func printScore() async {
        var score = 1
        Task { print(score) }
        Task { print(score) }
    }
}
Can someone please clarify, looking at the code above, why the compiler only complains about the captured var inside the printScore() function, and does not complain when the same thing is done using the task modifier on the body computed property of the ContentView struct (i.e. lines 14-24 in the screenshot)?
This is the example I came up with that confused me about the compiler's behavior. I also changed the "Strict Concurrency Checking" build setting to "Complete" and still don't see the compiler complaining.

This is a special power of @MainActor. In a View, body is annotated @MainActor:
@ViewBuilder @MainActor var body: Self.Body { get }
Tasks inherit the context of their caller, so those Tasks are also on the MainActor. If you replace Task with Task.detached in body, you will see the same error, since this moves the Task out of the MainActor context.
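A minimal sketch of that detached variant (not from the original post), which reproduces the diagnostic inside body:
import SwiftUI

struct DetachedExample: View {
    var body: some View {
        Text("Hello World!")
            .task {
                var num = 1
                Task.detached {
                    // error: reference to captured var 'num' in concurrently-executing code
                    print(num)
                }
            }
    }
}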
Conversely, if you add @MainActor to printScore, it will also compile without errors:
@MainActor func printScore() async {
    var score = 1
    Task { print(score) }
    Task { print(score) }
}
score inherits the MainActor designation, and is protected. There is no concurrent access to score, so this is fine and correct.
This actually applies to all actors, but I believe the compiler has a bug that makes things not quite behave the way you expect. The following code compiles:
actor A {
    var actorVar = 1

    func capture() {
        var localVar = 1
        Task {
            print(actorVar)
            print(localVar)
        }
    }
}
However, if you remove the reference to actorVar, the closure passed to Task will not be put into the actor's context, and that will make the reference to localVar invalid. IMO, this is a compiler bug.
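A sketch of that variant, under the behavior described above (the same capture() with the actorVar reference removed):
actor B {
    func capture() {
        var localVar = 1
        Task {
            // With no actor state referenced, the closure is no longer treated as
            // isolated to the actor, so the capture is rejected:
            // error: reference to captured var 'localVar' in concurrently-executing code
            print(localVar)
        }
    }
}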

Related

Updating a @State var in SwiftUI from an async method doesn't work

I have the following View:
struct ContentView: View {
    @State var data = [SomeClass]()

    var body: some View {
        List(data, id: \.self) { item in
            Text(item.someText)
        }
    }

    func fetchDataSync() {
        Task.detached {
            await fetchData()
        }
    }

    @MainActor
    func fetchData() async {
        let data = await SomeService.getAll()
        self.data = data
        print(data.first?.someProperty)
        // > Optional(115)
        print(self.data.first?.someProperty)
        // > Optional(101)
    }
}
Now, the method fetchDataSync is a delegate callback that gets called in a synchronous context whenever there is new data. I noticed that the views don't change, so I added the printouts. You can see the printed values, which differ. How is this possible? I'm on the MainActor, and I even tried detaching the task; it didn't help. Is this a bug?
It should be mentioned that the objects returned by getAll are created inside that method and not given to any other part of the code. Since they are class objects, the value might be changed from elsewhere, but if so both references should still be the same and not produce different output.
My theory is that for some reason the state just stays unchanged. Am I doing something wrong?
Okay, wow, luckily I ran into the "Duplicate keys of type SomeClass were found in a Dictionary" crash. That led me to realize that SwiftUI is doing some fancy diffing stuff, and using the == operator of my class.
The operator wasn't used for actual equality in my code, but rather just compared a single field that I used in a NavigationStack. Lesson learned: don't ever implement == if it doesn't signify true equality, or you might run into really odd bugs later.
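For illustration, a hypothetical conformance of the kind described above (the field names are placeholders, not the original code):
final class SomeClass: Hashable {
    let navigationTag: Int   // the only field == was really written for
    var someText: String     // the field the row actually displays

    init(navigationTag: Int, someText: String) {
        self.navigationTag = navigationTag
        self.someText = someText
    }

    // Problematic: "equal" here does not mean the objects are interchangeable,
    // so SwiftUI's diffing (and any Dictionary/Set keyed on this class) can
    // treat two different objects as the same element.
    static func == (lhs: SomeClass, rhs: SomeClass) -> Bool {
        lhs.navigationTag == rhs.navigationTag
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(navigationTag)
    }
}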

Awaiting Task Completion in SwiftUI View

I am working with a view that displays a list of locations. When a user taps on a location, a didSet block containing a Task is triggered in a separate class, which is wrapped with the @ObservedObject property wrapper:
struct LocationSearch: View {
    @StateObject var locationService = LocationService()
    @ObservedObject var networking: Networking
    var savedLocations = SavedLocations()

    var body: some View {
        ForEach(locationService.searchResults, id: \.self) { location in
            Button(location.title) {
                getCoordinate(addressString: location.title) { coordinates, error in
                    if error == nil {
                        networking.lastLocation = CLLocation(latitude: coordinates.latitude, longitude: coordinates.longitude)
                        // wait for networking.locationString to update
                        // this smells wrong
                        // how to better await Task completion in didSet?
                        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                            savedLocations.all.append(networking.locationString!.locality!)
                            UserDefaults.standard.set(savedLocations.all, forKey: "savedLocations")
                            dismiss()
                        }
                    }
                }
            }
        }
    }
}
The Task that gets triggered after networking.lastLocation is set is as follows:
class Networking: NSObject, ObservableObject, CLLocationManagerDelegate {
    @Published public var lastLocation: CLLocation? {
        didSet {
            Task {
                getLocationString() { placemark in
                    self.locationString = placemark
                }
                getMainWeather(self.lastLocation?.coordinate.latitude ?? 0, self.lastLocation?.coordinate.longitude ?? 0)
                getAQI(self.lastLocation?.coordinate.latitude ?? 0, self.lastLocation?.coordinate.longitude ?? 0)
                locationManager.stopUpdatingLocation()
            }
        }
    }
}
What is a better way to ensure that the task has had time to complete and that the new string will be saved to UserDefaults versus freezing my application's UI for one second?
In case it isn't clear, if I don't wait for one second, instead of the new value of locationString being saved to UserDefaults, the former value is saved instead because the logic in the didSet block hasn't had time to complete.
The first thing I think is odd about your code is that you have a class called "Networking" which is storing a location. It seems that you are conflating storing location information with making network requests which is strange to me.
That aside, the way you synchronize work with async/await is by using a single task that can put the work in order.
I think you want to coordinate getting some information from the network and then storing that information in user defaults. In general the task structure you are looking for is something like:
Task {
    let netData = await makeNetworkRequest(/* parameters */)
    saveNetworkDataInUserDefaults(netData)
}
So instead of waiting for a second, or something like that, you create a task that waits just long enough to get info back from the network then stores the data away.
To do that, you're going to need an asynchronous context in which to coordinate that work, and you don't have one in didSet. You'll likely want to pull that functionality out into a function that can be made async. Without a lot more detail about what your code means, however, it's difficult to give more specific advice.
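As a rough sketch of that shape (the function names and the hard-coded result are stand-ins, not the asker's real API), one async function awaits the network result and only then persists it:
import Foundation

func fetchLocality(for query: String) async -> String {
    // stand-in for the real reverse-geocoding / networking call,
    // which would suspend here until the placemark comes back
    return "Cupertino"
}

func saveLocation(query: String) async {
    let locality = await fetchLocality(for: query)   // waits exactly as long as needed
    var saved = UserDefaults.standard.stringArray(forKey: "savedLocations") ?? []
    saved.append(locality)
    UserDefaults.standard.set(saved, forKey: "savedLocations")
}

// Called from the button action:
// Task { await saveLocation(query: location.title) }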

What is the appropriate strategy for using @MainActor to update UI?

Suppose you have a method that executes asynchronously in a global context. Depending on the result of the execution, you need to update the UI.
private func fetchUser() async {
    do {
        let user = try await authService.fetchCurrentUser()
        view.setUser(user)
    } catch {
        if let error = error {
            view.showError(message: error.message)
        }
    }
}
Where is the correct place to switch to the main thread?
Option 1: Assign @MainActor to the fetchUser() method:
@MainActor
private func fetchUser() async {
    ...
}
Option 2: Assign @MainActor to the view's setUser(_ user: User) and showError(message: String) methods:
class SomePresenter {
    private func fetchUser() async {
        do {
            let user = try await authService.fetchCurrentUser()
            await view.setUser(user)
        } catch {
            if let error = error {
                await view.showError(message: error.message)
            }
        }
    }
}

class SomeViewController: UIViewController {
    @MainActor
    func setUser(_ user: User) {
        ...
    }

    @MainActor
    func showError(message: String) {
        ...
    }
}
Option 3: Do not assign @MainActor. Use await MainActor.run or a Task with @MainActor instead to run setUser(_ user: User) and showError(message: String) on the main thread (like DispatchQueue.main.async):
private func fetchUser() async {
    do {
        let user = try await authService.fetchCurrentUser()
        await MainActor.run {
            view.setUser(user)
        }
    } catch {
        if let error = error {
            await MainActor.run {
                view.showError(message: error.message)
            }
        }
    }
}
Option 2 is logical, as you are letting functions that must run on the main queue declare themselves as such. Then the compiler can warn you if you incorrectly call them. Even simpler, you can declare the class that has these functions to be @MainActor itself, and then you don't have to declare the individual functions as such. E.g., because a well-designed view or view controller limits itself to just view-related code, it is safe for that whole class to be declared as @MainActor and be done with it.
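As a minimal sketch of that class-level annotation (User and the method bodies are placeholders, and a plain class stands in for the view controller so the snippet is self-contained):
struct User { let name: String }

@MainActor
final class UserView {
    // Every member of a @MainActor class is main-actor-isolated,
    // so the individual methods need no annotation of their own.
    func setUser(_ user: User) { /* update labels, etc. */ }
    func showError(message: String) { /* present an alert, etc. */ }
}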
Option 3 (in lieu of option 2) is brittle, requiring the app developer to remember to manually run these methods on the main actor. You lose compile-time warnings should you fail to do the right thing, and compile-time warnings are always good. But the WWDC 2021 video Swift concurrency: Update a sample app points out that even if you adopt option 2, you might still use MainActor.run when you need to call a series of MainActor methods and don't want to incur the overhead of awaiting one call after another; instead you can wrap the group of main-actor calls in a single MainActor.run block. (But you might still consider doing this in conjunction with option 2, not in lieu of it.)
In the abstract, option 1 is arguably a bit heavy-handed, designating a function that does not necessarily have to run on the main actor to do so. You should only use the main actor where it is explicitly needed/desired. That having been said, in practice, I have found that there is often utility in having presenters (or controllers or view models or whatever pattern you adopt) run on the main actor, too. This is especially true if you have, for example, synchronous UITableViewDataSource or UICollectionViewDataSource methods grabbing model data from the presenter. If you have the relevant presenter using a different actor, you cannot always return to the data source synchronously. So you might have your presenter methods running on the main actor, too. Again, this is best considered in conjunction with option 2, not in lieu of it.
So, in short, option 2 is prudent, but is often married with options 1 and 3 as appropriate. Routines that must run on the main actor should be designated as such, rather than placing that burden on the caller.
The aforementioned Swift concurrency: Update a sample app covers many of these practical considerations and is worth watching if you have not already.

Using Kotlin multiplatform classes in SwiftUI

I am building a small project using the Kotlin Multiplatform Mobile (KMM) framework and am using SwiftUI for the iOS application part of the project (both of which I have never used before).
In the boilerplate for the KMM application there is a Greeting class which has a greeting method that returns a string:
package com.example.myfirstapp.shared

class Greeting {
    fun greeting(): String {
        return "Hello World!"
    }
}
If the shared package is imported into the iOS project using SwiftUI, then the greeting method can be invoked and the string that's returned can be put into the View (e.g. Text(Greeting().greeting())).
I have implemented a different shared Kotlin class whose properties are modified/fetched using getters and setters, e.g. for simplicity:
package com.example.myfirstapp.shared

class Counter {
    private var count: Int = 0

    fun getCount(): Int {
        return count
    }

    fun increment() {
        count++
    }
}
In my Android app I can just instantiate the class and call the setters to mutate the properties of the class and use it as application state. However, I have tried a number of different ways but cannot seem to find the correct way to do this within SwiftUI.
I am able to create the class by creating a piece of state within the View that I want to use the Counter class in:
@State var counter: Counter = shared.Counter()
If I do this then using the getCount() method I can see the initial count property of the class (0), but I am not able to use the setter increment() to modify the property the same way that I can in the Android Activity.
Any advice on the correct/best way to do this would be greatly appreciated!
Here's an example of what I'd like to be able to do just in case that helps:
import shared
import SwiftUI

struct CounterView: View {
    @State var counter: shared.Counter = shared.Counter() // Maybe should be @StateObject?

    var body: some View {
        VStack {
            Text("Count: \(counter.getCount())")
            Button("Increment") { // Pressing this button updates the
                counter.increment() // UI state on the previous line
            }
        }
    }
}
I believe the fundamental issue is that there isn't anything notifying the SwiftUI layer when the count property is changed in the shared code (when increment is called). You can at least verify that the value is being incremented by doing something like the following (where we manually retrieve the updated count after incrementing it):
struct ContentView: View {
    @ObservedObject var viewModel = ViewModel(counter: Counter())

    var body: some View {
        VStack {
            Text("Count: \(viewModel.count)")
            Button("Increment") {
                viewModel.increment()
            }
        }
    }
}

class ViewModel: ObservableObject {
    @Published var count: Int32 = 0

    func increment() {
        counter.increment()
        count = counter.getCount()
    }

    private let counter: Counter

    init(counter: Counter) {
        self.counter = counter
    }
}

Does "let _ = ..." (let underscore equal) have any use in Swift?

Does using let _ = ... have any purpose at all?
I've seen the question and answers for "What's the _ underscore representative of in Swift References?" and I know that the underscore can be used to represent a variable that isn't needed.
This would make sense if I only needed one value of a tuple as in the example from the above link:
let (result, _) = someFunctionThatReturnsATuple()
However, I recently came across this code:
do {
    let _ = try DB.run(table.create(ifNotExists: true) { t in
        t.column(teamId, primaryKey: true)
        t.column(city)
        t.column(nickName)
        t.column(abbreviation)
    })
} catch _ {
    // Error thrown if table already exists
}
I don't get any compiler warnings or errors if I just remove the let _ =. It seems to me like this is simpler and more readable.
try DB.run(table.create(ifNotExists: true) { t in
    t.column(teamId, primaryKey: true)
    t.column(city)
    t.column(nickName)
    t.column(abbreviation)
})
The author of the code has written a book and a blog about Swift. I know that authors aren't infallible, but it made me wonder if there is something I am missing.
You will get a compiler warning if the method has been marked with warn_unused_result. From the developer documentation:
Apply this attribute to a method or function declaration to have the compiler emit a warning when the method or function is called without using its result.
You can use this attribute to provide a warning message about incorrect usage of a nonmutating method that has a mutating counterpart.
Using let _ = ... specifically tells the compiler that you know that the expression on the right returns a value but that you don't care about it.
In instances where the method has been marked with warn_unused_result, if you don't use the underscore then the compiler will complain with a warning. (Because in some cases it could be an error to not use the return value.)
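For a concrete illustration (makeGreeting is a made-up function; in Swift 3 and later this warning is the default for any non-Void function unless it is marked @discardableResult):
func makeGreeting(for name: String) -> String {
    return "Hello, \(name)!"
}

makeGreeting(for: "World")          // warning: result of call to 'makeGreeting(for:)' is unused
let _ = makeGreeting(for: "World")  // no warning: the discard is explicit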
Sometimes it is simpler and cleaner to use try? than do-catch when you call something that throws but decide not to handle any errors. If you leave the call with try? as-is, the compiler will warn you about the unused result, which is not good. So you can discard the result using _.
Example:
let _ = try? NSFileManager.defaultManager().moveItemAtURL(url1, toURL: url2)
Also, let _ = or _ = can be used when the right side of the expression is a lazy variable and you want it calculated right now but have no use for its value yet.
A lazy stored property is a property whose initial value is not calculated until the first time it is used. You indicate a lazy stored property by writing the lazy modifier before its declaration.
Lazy properties are useful when the initial value for a property is dependent on outside factors whose values are not known until after an instance's initialization is complete. Lazy properties are also useful when the initial value for a property requires complex or computationally expensive setup that should not be performed unless or until it is needed.
Example:
final class Example {
    private func deepThink() -> Int {
        // 7.5 million years of thinking
        return 42
    }

    private(set) lazy var answerToTheUltimateQuestionOfLifeTheUniverseAndEverything: Int = deepThink()

    func prepareTheAnswer() {
        _ = answerToTheUltimateQuestionOfLifeTheUniverseAndEverything
    }

    func findTheQuestion() -> (() -> Int) {
        // 10 million years of thinking
        let theQuestion = {
            // something
            return self.answerToTheUltimateQuestionOfLifeTheUniverseAndEverything
        }
        return theQuestion
    }
}
let example = Example()
// And you *want* to get the answer calculated first, but have no use for it until you get the question
example.prepareTheAnswer()
let question = example.findTheQuestion()
question()
It's also useful to print() something in SwiftUI.
struct ContentView: View {
    var body: some View {
        let _ = print("View was refreshed")
        Text("Hi")
    }
}
View was refreshed
You can use it to see the current value of properties:
struct ContentView: View {
    @State var currentValue = 0

    var body: some View {
        let _ = print("View was refreshed, currentValue is \(currentValue)")
        Button(action: {
            currentValue += 1
        }) {
            Text("Increment value")
        }
    }
}
View was refreshed, currentValue is 0
View was refreshed, currentValue is 1
View was refreshed, currentValue is 2
View was refreshed, currentValue is 3
View was refreshed, currentValue is 4
If you just did _ = print(...), it won't work:
struct ContentView: View {
    var body: some View {
        _ = print("View was refreshed") /// error!
        Text("Hi")
    }
}
Type '()' cannot conform to 'View'; only struct/enum/class types can conform to protocols
You can also use @discardableResult on your own functions if you sometimes don't need the result.
@discardableResult
func someFunction() -> String {
    return "some value" // placeholder body so the example compiles
}

someFunction() // Xcode will not complain in this case
if let _ = something {
    ...
}
is equal to
if something != nil {
    ...
}
If you replaced the underscore with a property name, the fix that Xcode would suggest is the second option (it would not suggest the first). And that's because the second option makes more programming sense. However—and this is something Apple themselves stress to developers—write the code that is the most readable. And I suspect the underscore option exists because in some cases it may read better than the other.
