I'm aware that Kotlin/Native has very specific rules regarding the mutability of objects shared between threads.
However, to my surprise, I've found that when running unit tests from commonMain (deployed to androidTestDebug), I am able to change mutable state from different threads. For example, this works fine when changing the value in MyData:
import kotlinx.coroutines.*

data class MyData(var value: Int = 0)

suspend fun main() = coroutineScope {
    val myData = MyData()
    val newContext1 = newSingleThreadContext("contextOne")
    val newContext2 = newSingleThreadContext("contextTwo")

    launch(newContext1) {
        myData.value = 1
    }
    launch(newContext2) {
        myData.value = 2
    }
}
However, if I run this while targeting iOS, it crashes with kotlin.native.concurrent.InvalidMutabilityException, which is what I would expect to happen on both platforms. I'm new to KMM, but why aren't concurrent mutability rules enforced when running commonMain code on the JVM?
Also, is there a way to force mutability rules on the JVM so that tests fail on Android just as they would on iOS? I think this would help ensure platform consistency.
No, it shouldn’t crash on Android.
Kotlin/Native compiles Kotlin to native code, which is why it needs its own concurrency model.
On Android this is plain Kotlin code running on the JVM, so the JVM concurrency model applies, and like any other Kotlin code that mutates a variable it shouldn't crash.
I don’t think there’s a way to make it crash on Android.
In Kotlin 1.6.0 this is expected to change (with the new Kotlin/Native memory model), so this code should no longer crash on either platform.
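There is no switch that makes the JVM enforce the Kotlin/Native rules, but if the goal is common code that behaves the same on both targets today, one option is to hold the shared state in an atomic instead of a plain var. A minimal sketch, assuming the kotlinx-atomicfu library is added to commonMain:

import kotlinx.atomicfu.atomic

class MyData {
    // Atomic state can be mutated from several threads on the JVM,
    // and on Kotlin/Native it stays mutable even if the object is frozen.
    private val _value = atomic(0)
    var value: Int
        get() = _value.value
        set(newValue) { _value.value = newValue }
}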
Folks, is it possible to obtain the currently used Scheduler within an operator?
The problem I have is that Mono.fromFuture() is executed on a native thread (the AWS CRT HTTP client in my case). As a result, all subsequent operators are also executed on that thread, and later code that wants to obtain the context class loader finds that it is null. I realize that I can call .publishOn(originalScheduler) after .fromFuture(), but I don't know which scheduler is used to materialize the Mono returned by my function.
Is there an elegant way to deal with this?
fun myFunction(): Mono<String> {
    return Mono.just("example")
        .flatMap { value ->
            Mono.fromFuture {
                // invocation of 3rd party library that executes the Future on a thread created in native code
            }
        }
        .map {
            val resource = Thread.currentThread().getContextClassLoader().getResources("META-INF/services/blah_blah")
            // NullPointerException because Thread.currentThread().getContextClassLoader() returns NULL
            resource.asSequence().first().toString()
        }
}
It is not possible, because there's no guarantee that there is a Scheduler at all.
The place where the subscription is made and the data starts flowing could simply be a Thread. There is no mechanism in Java that allows an external actor to submit a task to an arbitrary thread (you have to provide the Runnable at Thread construction).
So no, there's no way of "returning to the previous Scheduler".
Usually, this shouldn't be an issue at all. If your code is reactive it should also be non-blocking and thus able to "share" whichever thread it currently runs on with other computations.
If your code is blocking, it should off-load the work to a blocking-compatible Scheduler anyway, which you should explicitly choose. Typically: publishOn(Schedulers.boundedElastic()). This is also true for CPU-intensive tasks, by the way.
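Applied to the code in the question, a minimal sketch of that explicit approach could look like the following (thirdPartyCall() is a hypothetical stand-in for the library call that completes on a native thread):

import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers
import java.util.concurrent.CompletableFuture

// Hypothetical stand-in for the 3rd-party library call from the question.
fun thirdPartyCall(): CompletableFuture<String> = CompletableFuture.completedFuture("example")

fun myFunction(): Mono<String> =
    Mono.just("example")
        .flatMap { Mono.fromFuture { thirdPartyCall() } }
        // Explicitly choose the scheduler for everything downstream,
        // instead of staying on the native thread that completed the future.
        .publishOn(Schedulers.boundedElastic())
        .map {
            // Now runs on a boundedElastic worker, which has a normal context class loader.
            Thread.currentThread().contextClassLoader
                .getResources("META-INF/services/blah_blah")
                .nextElement()
                .toString()
        }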
What is the best approach to wrap Java 7 futures inside a Kotlin suspend function?
Is there a way to convert a method returning Java 7 futures into a suspending function?
The process is pretty straightforward for arbitrary callbacks or Java 8 CompletableFutures, as illustrated for example here:
* https://github.com/Kotlin/kotlin-coroutines/blob/master/kotlin-coroutines-informal.md#suspending-functions
In those cases there is a hook that is triggered when the future is done, so it can be used to resume the continuation as soon as the value of the future is ready (or an exception is thrown).
Java 7 futures, however, don't expose a method that is invoked when the computation is over.
Converting a Java 7 future to a Java 8 CompletableFuture is not an option in my codebase.
Of course, I can create a suspend function that calls future.get(), but this would be blocking, which defeats the overall purpose of using coroutine suspension.
Another option would be to submit a runnable to a new thread executor and, inside the runnable, call future.get() and invoke a callback. This wrapper makes the code look "non-blocking" from the consumer's point of view, and the coroutine can suspend, but under the hood we are still writing blocking code and creating a new thread just for the sake of blocking it.
A Java 7 Future is blocking. It is not designed for asynchronous APIs and does not provide any way to install a callback that is invoked when the future completes. That means there is no direct way to use suspendCoroutine with it, because suspendCoroutine is designed for use with asynchronous, callback-based APIs.
However, if your code is in fact running under JDK 8 or newer, there is a high chance that the actual Future instance in your code happens to implement the CompletionStage interface at run time. You can try to cast it to CompletionStage and use the ready-made CompletionStage.await extension from the kotlinx-coroutines-jdk8 module of the kotlinx.coroutines library.
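A minimal sketch of that approach (awaitCompat is a name introduced here for illustration, and it assumes kotlinx-coroutines-jdk8 is on the classpath), with a blocking fallback on Dispatchers.IO for Futures that do not implement CompletionStage:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.future.await
import kotlinx.coroutines.withContext
import java.util.concurrent.CompletionStage
import java.util.concurrent.Future

suspend fun <T> Future<T>.awaitCompat(): T =
    if (this is CompletionStage<*>) {
        // Many real-world Futures are CompletableFutures, which implement CompletionStage.
        @Suppress("UNCHECKED_CAST")
        (this as CompletionStage<T>).await()
    } else {
        // Fallback: still a blocking get(), but confined to a dispatcher meant for blocking calls.
        withContext(Dispatchers.IO) { get() }
    }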
Of course Roman is right that a Java Future does not let you provide a callback for when the work is done.
However, it does give you a way to check if the work is done, and if it is, then calling .get() won't block.
Luckily for us, we also have a cheap way to divert a thread to quickly do a poll check via coroutines.
Let's write that polling logic and also vend it as an extension method:
import kotlinx.coroutines.delay
import java.util.concurrent.Future

suspend fun <T> Future<T>.wait(): T {
    while (!isDone)
        delay(1) // or whatever you want your polling frequency to be
    return get()
}
Then to use:
fun someBlockingWork(): Future<String> { ... }

suspend fun useWork() {
    val result = someBlockingWork().wait()
    println("Result: $result")
}
So we have millisecond-response time to our Futures completing without using any extra threads.
And of course you'll want to add some upper bound to use as a timeout so you don't end up waiting forever. In that case, we can update the code just a little:
suspend fun <T> Future<T>.wait(timeoutMs: Int = 60000): T? {
    val start = System.currentTimeMillis()
    while (!isDone) {
        if (System.currentTimeMillis() - start > timeoutMs)
            return null
        delay(1)
    }
    return get()
}
You should now be able to do this by creating another coroutine in the same scope that cancels the Future when the coroutine is cancelled.
withContext(Dispatchers.IO) {
    val future = getSomeFuture()
    coroutineScope {
        // A helper coroutine whose only job is to cancel the Future
        // if the surrounding coroutine is cancelled.
        val cancelJob = launch {
            suspendCancellableCoroutine<Unit> { cont ->
                cont.invokeOnCancellation {
                    future.cancel(true)
                }
            }
        }
        // Block (on the IO dispatcher) until the Future completes,
        // then stop the helper so coroutineScope can return.
        future.get().also {
            cancelJob.cancel()
        }
    }
}
Update
It is fixed in Xcode 8.0
Environment
Xcode 7.3.1, iPhone SE, iOS 9.3.5
Problem
I bound a C function to a JavaScript function with JSObjectMakeFunctionWithCallback from the JavaScriptCore framework.
If I put a breakpoint in this C function, the Xcode debugger hangs when execution reaches it from JavaScript via JSEvaluateScript.
I want to know the reason for this issue, and an approach to implementing a C function that works correctly with breakpoints and is callable from the JavaScript environment.
I don't want a solution based on the Objective-C API, because I want to share this code between iOS and Android.
Conditions
From my experiments, I observed the following conditions.
It does not happen in the iOS Simulator.
It does not happen when the JS function is called via JSObjectCallAsFunction.
It happens when the JS function is called via JSEvaluateScript.
My opinion, from looking at the call stack while running in the Simulator, is that this issue comes from JavaScriptCore's JIT/LLINT embedded assembly.
I guess that this assembly lacks something related to the debugging mechanism, so I worry that there is no solution available to a programmer on the user side of JavaScriptCore.
Reproducible source code
I packaged small reproducible example in this repository.
https://github.com/omochi/jscore-debugger-hangup
Steps
Clone this repository.
Open it with Xcode.
Open test_main.c.
Put breakpoints at printf in TestNativeFunc and at JSEvaluateScript in TestMain.
Run the application.
It pauses at printf.
Continue.
It pauses at JSEvaluateScript.
Xcode hangs.
If you remove the two breakpoints, it runs correctly and prints two "func" messages.
Code
static JSValueRef TestNativeFunc(JSContextRef ctx,
                                 JSObjectRef function,
                                 JSObjectRef thisObject,
                                 size_t argumentCount,
                                 const JSValueRef arguments[],
                                 JSValueRef* exception)
{
    printf("func\n");
    return JSValueMakeNull(ctx);
}

void TestMain() {
    JSGlobalContextRef context = JSGlobalContextCreate(NULL);

    JSObjectRef func = JSObjectMakeFunctionWithCallback(context, NULL, &TestNativeFunc);
    JSValueProtect(context, func);

    JSObjectCallAsFunction(context, func, NULL, 0, NULL, NULL);

    JSObjectRef global_object = JSContextGetGlobalObject(context);
    JSStringRef f_str = JSStringCreateWithUTF8CString("f");
    JSObjectSetProperty(context, global_object, f_str, func, 0, NULL);

    JSStringRef script = JSStringCreateWithUTF8CString("f();");
    JSEvaluateScript(context, script, NULL, NULL, 1, NULL);

    JSStringRelease(f_str);
    JSValueUnprotect(context, func);
    JSGlobalContextRelease(context);
}
This is a known bug in Xcode 7.3.1, and it should be fixed in Xcode 8.0. If you find that it isn't when you get a chance to upgrade to 8.0, please file a bug with http://bugreporter.apple.com.
Forced unwrapping causes your application to crash if a nil is encountered. This is really cool during the development phase of your application, but it is a headache for your production build, especially if you were too lazy to do the if let nil check.
Has anyone tried any operator overloading/overriding that stops these crashes in production builds?
No, there was not, there is not, and there should never be.
The crash is INTENTIONAL. The implementers of the Swift language went out of their way, on purpose, to design the force unwrap operator (!) to crash.
This is by design.
When nil is encountered and not safely handled, there are two ways to proceed:
Allow the program to continue in an inconsistent state, and allow it to behave in an undefined, unforeseen manner.
or
Crash the program, preventing it from continuing in an inconsistent, undefined, unforeseen state. This will protect your file system, databases, web services, etc. from permanent damage.
Which of the two options do you think makes more sense?
To be honest, I'd gouge my eyes out if I had to maintain a codebase that used something like this, if it were possible. Swift features an easy way to solve your problem that you're actively avoiding because of laziness: optionals. You could probably put a guard around those variables, but it requires the same amount of effort as using if let statements. My suggested solution is to stop being lazy and use the language properly. Go through your codebase and fix this; it will save you hours in the long run.
Can't overload ! since it's reserved, but we can use ❗️
protocol Bangable {
    init()
}

postfix operator ❗️

postfix func ❗️<T: Bangable>(value: T?) -> T {
    #if DEBUG
    return value!
    #else
    return value ?? T.init()
    #endif
}

extension String: Bangable {}
extension Int: Bangable {}

let bangable: Int? = 8
let cantBangOnDebug: Int? = nil

print(bangable❗️) // 8
print(cantBangOnDebug❗️) // Crashes on Debug!
Please don't actually use this in production. This is just to give an idea of how it COULD be accomplished, not that it should be.
I have an app with a couple of thousand lines, and within that code there are a lot of println() calls. Does this slow the app down? It is obviously being executed in the Simulator, but what happens when you archive, submit, and download the app from the App Store/TestFlight? Is this code still "active", and what about code that is "commented out"?
Is it literally never read, or should I delete commented-out code when I submit to TestFlight/the App Store?
Yes, it does slow the code down.
Both print and println decrease performance of the application.
Println problem
println is not removed when Swift does code optimisation.
for i in 0...1_000 {
    println(i)
}
This code can't be optimised away, and after compilation the assembly would perform a loop of 1,000 iterations that don't actually do anything valuable.
Analysing Assembly code
The problem is that the Swift compiler can't optimally optimise code that contains print and println calls.
You can see this if you have a look at the generated assembly code.
You can view the assembly code with Hopper Disassembler or by compiling the Swift code to assembly using the swiftc compiler:
xcrun swiftc -emit-assembly myCode.swift
Swift code optimisation
Let's have a look at a few examples for better understanding.
The Swift compiler can eliminate a lot of unnecessary code, such as:
Empty function calls
Creating objects that are not used
Empty Loops
Example:
class Object {
    let x: Int
    init(x: Int) {
        self.x = x
    }
    func nothing() {
    }
}

for i in 0...1_000 {
    let object = Object(x: i)
    object.nothing()
    object.nothing()
}
In this example the Swift compiler would do the following optimisation:
1. Remove both nothing method calls
After this, the loop body would have only one statement:
for i in 0...1_000 {
    let object = Object(x: i)
}
2. Then it would remove the creation of the Object instance, because it's not actually used.
for i in 0...1_000 {
}
3. The final step would be removing the empty loop.
And we end up with no code to execute.
Solutions
Comment out print and println
This is definitely not the best solution.
//println("A")
Use DEBUG preprocessor statement
With this solution you can simply change the logic of your debug_println function.
debug_println("A")
func debug_println<T>(object: T) {
    #if DEBUG
    println(object)
    #endif
}
Conclusion
Always remove print and println from release applications!
If you add print and println instructions, the Swift code can't be optimised in the most effective way, and this can lead to big performance penalties.
Generally you should not leave any form of logging turned on in a production app; it will most likely not impact performance, but it is poor practice to leave it enabled when it is not needed.
As for commented code, this is irrelevant as it will be ignored by the compiler and not be part of the final binary.
See this answer on how to disable println() in production code; there is a variety of solutions: Remove println() for release version iOS Swift.
As you do not want to have to comment out all your println() calls just for a release, it is much better to just disable them; otherwise you'll be wasting a lot of time.
println shouldn't have much of an impact at all, as the bulk of the operation has already been carried out before that point.
Commented-out code is sometimes useful. Although it can make your source difficult to read, it has absolutely no bearing on performance whatsoever, and I've never had anything declined for commented-out code; my stuff is full of it.