I have an app with a couple of thousand lines, and within that code there are a lot of println() calls. Does this slow the app down? It is obviously executed in the Simulator, but what happens when you archive, submit, and download the app from the App Store/TestFlight? Is this code still "active", and what about code that is "commented out"?
Is it literally never read, or should I delete commented-out code when I submit to TestFlight/the App Store?
Yes, it does slow the code down.
Both print and println decrease the performance of the application.
Println problem
println calls are not removed when Swift does code optimisation.
for i in 0...1_000 {
println(i)
}
This code can't be optimised away. After compiling, the assembly code still performs every iteration of the loop, executing instructions that don't actually do anything valuable.
Analysing Assembly code
The problem is that the Swift compiler can't fully optimise code that contains print and println calls.
You can see this if you look at the generated assembly code.
You can inspect the assembly with Hopper Disassembler, or by compiling the Swift code to assembly using the swiftc compiler:
xcrun swiftc -O -emit-assembly myCode.swift
Swift code optimisation
Let's look at a few examples for a better understanding.
The Swift compiler can eliminate a lot of unnecessary code, such as:
Empty function calls
Creating objects that are not used
Empty Loops
Example:
class Object {
    let x: Int

    init(x: Int) {
        self.x = x
    }

    func nothing() {
    }
}

for i in 0...1_000 {
    let object = Object(x: i)
    object.nothing()
    object.nothing()
}
In this example the Swift compiler would perform the following optimisation steps:
1. Remove both nothing method calls.
After this, the loop body has only one statement:
for i in 0...1_000 {
    let object = Object(x: i)
}
2. Then it would remove the creation of the Object instance, because it is never actually used:
for i in 0...1_000 {
}
3. The final step would be removing the empty loop.
And we end up with no code to execute.
Solutions
Comment out print and println
This is definitely not the best solution.
//println("A")
Use DEBUG preprocessor statement
With this solution you can simply change the logic of your debug_println function:
debug_println("A")
func debug_println<T>(object: T) {
    #if DEBUG
    println(object)
    #endif
}
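Note that the DEBUG flag is not predefined anywhere. You have to define it yourself, typically by adding -D DEBUG to Other Swift Flags in the build settings, for the Debug configuration only.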
Conclusion
Always remove print and println from your release application!
If you add print and println instructions, the Swift code can't be optimised in the most optimal way, and this can lead to big performance penalties.
Generally you should not leave any form of logging turned on in a production app. It will most likely not impact performance, but it is poor practice to leave it enabled when it is not needed.
As for commented-out code, this is irrelevant, as it will be ignored by the compiler and will not be part of the final binary.
See this answer on how to disable println() in production code; there are a variety of solutions: Remove println() for release version iOS Swift
As you do not want to have to comment out all your println() calls just for a release, it is much better to just disable them; otherwise you'll be wasting a lot of time.
println shouldn't have much of an impact at all, as the bulk of the operation has already been carried out before that point.
Commented-out code is sometimes useful. Although it can make your source difficult to read, it has absolutely no bearing on performance whatsoever, and I've never had anything declined for commented-out code; my stuff is full of it.
Related
I'm currently facing a problem with some Swift source files when a crash occurs. On Crashlytics I get weird info about the line and the reason of the crash: it tells me the source crashed at line 0 and gives me a SIGTRAP error. I read that this error occurs when a thread hits a breakpoint. But the problem is that this error occurs while I'm not debugging (the app was being tested from TestFlight).
Here is an example where Crashlytics reports a SIGTRAP error at line 0:
// Method that crashes
private func extractSubDataFrom(writeBuffer: inout Data, chunkSize: Int) -> Data? {
    guard chunkSize > 0 else { // Prevent a division by zero
        return nil
    }

    // Get the number of chunks to write (then the number of bytes from it)
    let nbOfChunksToWrite: Int = Int(floor(Double(writeBuffer.count) / Double(chunkSize)))
    let dataCountToWrite = max(0, nbOfChunksToWrite * chunkSize)
    guard dataCountToWrite > 0 else {
        return nil // Not enough data to write for now
    }

    // Extract data
    let subData = writeBuffer.extractSubDataWith(range: 0..<dataCountToWrite)
    return subData
}
Here is another Swift file, to explain what happens at the line writeBuffer.extractSubDataWith(range: 0..<dataCountToWrite):
public extension Data {

    //MARK: - Public

    public mutating func extractSubDataWith(range: Range<Int>) -> Data? {
        guard range.lowerBound >= 0 && range.upperBound <= self.count else {
            return nil
        }
        // Get a copy of the data in the range and remove it from self
        let subData = self.subdata(in: range)
        self.removeSubrange(range)
        return subData
    }
}
Could you tell me what I'm doing wrong? Or what can cause this weird SIGTRAP error?
Thank you
Crashing with a line of zero is indeed weird, but common in Swift code.
The Swift compiler can do code generation on your behalf. This can happen quite a bit with generic functions, but may also happen for other reasons. When the compiler generates code, it also produces debug information for the code it generates. This debug information typically references the file that caused the code to be generated. But, the compiler tags it all with a line of 0 to distinguish it from code that was actually written by the developer.
These generic functions also do not have to be written by you - I've seen this happen with standard library functions too.
(Aside: I believe that the DWARF standard can, in fact, describe this situation more precisely. But, unfortunately Apple doesn't seem to use it in that way.)
Apple verified this line-zero behavior via a Radar I filed about it a number of years ago. You can also poke around in your app's own debug data (via, for example, dwarfdump) if you want to confirm.
One reason you might want to try to do this, is if you really don't trust that Crashlytics is labelling the lines correctly. There's a lot of stuff between their UI and the raw crash data. It is conceivable something's gone wrong. The only way you can confirm this is to grab the crashing address + binary, and do the lookup yourself. If dwarfdump tells you this happened at line zero, then that confirms this is just an artifact of compile-time code generation.
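For example, something like this, with a hypothetical crash address and dSYM path:
xcrun dwarfdump --lookup 0x1000481a4 MyApp.app.dSYM/Contents/Resources/DWARF/MyApp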
However, I would tend to believe there's nothing wrong with the Crashlytics UI. I just wanted to point it out as a possibility.
As for SIGTRAP - there's nothing weird about that at all. This is just an indication that the code being run has decided to terminate the process. This is different, for example, from a SIGBUS, where the OS does the terminating. This could be caused by Swift integer and/or range bounds checking. Your code does have some of that kind of thing in both places. And, since that would be so performance-critical - would be a prime candidate for inline code generation.
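As a rough illustration (a simplified sketch, not necessarily your exact failure), both of these lines stop the process with a trap rather than throwing an error or returning nil:
import Foundation

let data = Data([1, 2, 3])
// Out-of-bounds range: the runtime bounds check traps
let sub = data.subdata(in: 0..<10)

let x = Int.max
// Integer overflow: also traps instead of wrapping
let y = x + 1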
Update
It now seems that, at least in some situations, the compiler uses a file name of <compiler-generated>. I'm sure they did this to make this case clearer. So, with more recent versions of Swift, you may instead see <compiler-generated>:0. This might not help with tracking down a crash, but it will at least make things more obvious.
I hope this question isn't too open-ended. I ran into a memory issue with Rust, where I got an "out of memory" from calling next on an Iterator trait object. I'm unsure how to debug it. Prints have only brought me to the point where the failure occurs. I'm not very familiar with other tools such as ltrace, so although I could create a trace (231 MiB, pff), I didn't really know what to do with it. Is a trace like that useful? Would I do better to grab gdb/lldb? Or Valgrind?
In general I would take the following approach:
Boilerplate reduction: Try to narrow down the OOM so that you don't have too much additional code around. In other words: the quicker your program crashes, the better. Sometimes it is also possible to rip out a specific piece of code and put it into an extra binary, just for the investigation.
Problem size reduction: Lower the problem from an OOM to simply "too much memory", so that you can actually tell that some part wastes memory even when it does not lead to an OOM. If it is too hard to tell whether you see the issue or not, you can lower the memory limit. On Linux, this can be done using ulimit:
ulimit -Sv 500000 # that's 500MB
./path/to/exe --foo
Information gathering: If your problem is small enough, you are ready to collect information with a lower noise level. There are multiple ways you can try. Just remember to compile your program with debug symbols. It might also be an advantage to turn off optimization, since it usually leads to information loss. Both can be achieved by NOT using the --release flag during compilation.
Heap profiling: One way is to use gperftools:
LD_PRELOAD="/usr/lib/libtcmalloc.so" HEAPPROFILE=/tmp/profile ./path/to/exe --foo
pprof --gv ./path/to/exe /tmp/profile/profile.0100.heap
This shows you a graph visualizing which parts of your program allocate how much memory. See the official docs for more details.
rr: Sometimes it's very hard to figure out what is actually happening, especially after you've created a profile. Assuming you did a good job in step 2, you can use rr:
rr record ./path/to/exe --foo
rr replay
This will spawn a GDB with superpowers. The difference from a normal debug session is that you can not only continue but also reverse-continue. Basically, your program is executed from a recording where you can jump back and forth as you want. This wiki page provides some additional examples. One thing to point out is that rr only seems to work with GDB.
Good old debugging: Sometimes you get traces and recordings that are still way too large. In that case you can (in combination with the ulimit trick) just use GDB and wait until the program crashes:
gdb --args ./path/to/exe --foo
You should now get a normal debugging session where you can examine the current state of the program. GDB can also be launched with coredumps, as shown below. The general problem with this approach is that you cannot go back in time and you cannot continue with execution, so you only see the current state, including all stack frames and variables. Here you could also use LLDB if you want.
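Launching against a core dump looks like this (paths hypothetical):
gdb ./path/to/exe /path/to/coredump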
(Potential) fix + repeat: After you have a clue about what might be going wrong, you can try to change your code. Then try again. If it's still not working, go back to step 3 and try again.
Valgrind and other tools work fine and should work out of the box as of Rust 1.32. Earlier versions of Rust required changing the global allocator from jemalloc to the system allocator so that Valgrind and friends know how to monitor memory allocations.
In this answer, I use the macOS developer tool Instruments, as I'm on macOS, but Valgrind / Massif / Cachegrind work similarly.
Example: An infinite loop
Here's a program that "leaks" memory by pushing 1MiB Strings into a Vec and never freeing it:
use std::{thread, time::Duration};

fn main() {
    let mut held_forever = Vec::new();
    loop {
        held_forever.push("x".repeat(1024 * 1024));
        println!("Allocated another");
        thread::sleep(Duration::from_secs(3));
    }
}
You can see the memory growth over time, as well as the exact stack trace that allocated the memory.
Example: Cycles in reference counts
Here's an example of leaking memory by creating an infinite reference cycle:
use std::{cell::RefCell, rc::Rc};

struct Leaked {
    data: String,
    me: RefCell<Option<Rc<Leaked>>>,
}

fn main() {
    let data = "x".repeat(5 * 1024 * 1024);

    let leaked = Rc::new(Leaked {
        data,
        me: RefCell::new(None),
    });

    let me = leaked.clone();
    *leaked.me.borrow_mut() = Some(me);
}
See also:
Why does Valgrind not detect a memory leak in a Rust program using nightly 1.29.0?
Handling memory leak in cyclic graphs using RefCell and Rc
Minimal `Rc` Dependency Cycle
In general, to debug, you can use either a log-based approach (either by inserting the logs yourself, or by having a tool such as ltrace or ptrace generate the logs for you) or you can use a debugger.
Note that ltrace, ptrace or debugger-based approaches require that you be able to reproduce the problem; I tend to favor manual logs because I work in an industry where bug reports are generally too imprecise to allow immediate reproduction (and thus we use logs to create the reproducer scenario).
Rust supports both approaches, and the standard toolset that one uses for C or C++ programs works well for it.
My personal approach is to have some logging in place to quickly narrow down where the issue occurs, and if logging is insufficient, to fire up a debugger for a more fine-combed inspection. In this case I would recommend going straight for the debugger.
A panic is generated, which means that by breaking on the call to the panic hook, you get to see both the call stack and the memory state at the moment things go awry.
Launch your program with the debugger, set a breakpoint on the panic hook, run the program, profit.
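With GDB, that could look like the following. As far as I know, the rust_panic symbol exists in the standard library specifically as a stable breakpoint target for debuggers:
gdb --args ./path/to/exe --foo
(gdb) break rust_panic
(gdb) run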
Forced unwrapping crashes your application if a nil is encountered. This is really cool during the development phase of your application, but it is a headache for your production build, especially if you were too lazy to do the if let nil check.
Has anyone tried any operator overloading/overriding that stops these crashes in production builds?
No, there was not, there is not, and there should never be.
The crash is INTENTIONAL. The implementers of the Swift language went out of their way, on purpose, to design the force unwrap operator (!) to crash.
This is by design.
When nil is encountered and not safely handled, there are two ways to proceed:
Allow the program to continue in an inconsistent state, and allow it to behave in an undefined, unforeseen manner.
or
Crash the program, preventing it from continuing in an inconsistent, undefined, unforeseen state. This will protect your file system, databases, web services, etc. from permanent damage.
Which of the two options do you think makes more sense?
To be honest, I'd gouge my eyes out if I had to maintain a codebase that used something like this. Swift features an easy way to solve your problem that you're actively avoiding out of laziness: optionals. You could probably put a guard around those variables, but it requires the same amount of effort as using if let statements. My suggested solution is to stop being lazy and use the language properly. Go through your codebase and fix this; it will save you many hours in the long run.
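For illustration, here is a minimal sketch of the safe alternatives (function and variable names are made up):
func send(message: String?) {
    // guard let: bail out early instead of crashing on nil
    guard let message = message else {
        return
    }
    print("Sending: \(message)")
}

let maybeName: String? = nil
// if let: the body only runs when a value is present
if let name = maybeName {
    print("Hello, \(name)")
}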
You can't overload ! since it's reserved, but we can use ❗️:
protocol Bangable {
    init()
}

postfix operator ❗️

postfix func ❗️<T: Bangable>(value: T?) -> T {
    #if DEBUG
    return value!
    #else
    return value ?? T.init()
    #endif
}
extension String: Bangable {}
extension Int: Bangable {}
let bangable: Int? = 8
let cantBangOnDebug: Int? = nil
print(bangable❗️) // 8
print(cantBangOnDebug❗️) // Crashes on Debug!
Please don't actually use this in production. It is just to give an idea of how it COULD be accomplished, not that it should be.
In Xcode, is there a way for me run a single test case n times automatically?
Reason for doing this is that some of my beta testers are encountering random crashes in my app. I see the crash logs in TestFlight, along with the stack trace, but I can't reproduce the crash.
The crash happens infrequently but when it does, it always happens when users are trying to create a DB record, which then gets uploaded to a server. The problem with the crash logs is that my code does not make an appearance in their stack traces (all UIKit & CoreFoundation stuff - and different each time).
My solution is to run the test for that part of the app 100s of times, with the exception breakpoint set, to try to trigger the bug in my dev environment. But I don't know how to do this automatically.
It took 7 years, but as of Xcode 13, support for test repetition is now built-in.
From the Xcode 13 release notes:
Enable test repetition in your test plan, xcodebuild, or by running your test from the test diamond by Control-clicking and selecting Run Repeatedly to bring up the test repetition dialog.
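With xcodebuild, the repetition options look roughly like this (the scheme name is hypothetical; check xcodebuild's help for the exact flags):
xcodebuild test -scheme MyApp -test-iterations 100 -run-tests-until-failure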
You can read my answer here.
Basically, you need to override the invokeTest method:
override func invokeTest() {
    for time in 0...15 {
        print("test invocation #\(time)")
        super.invokeTest()
    }
}
In Xcode as such, no.
You can create an XCTestCase class that hooks into the test-running methods it inherits to return multiple runs, but that tends to be annoying and mostly undocumented.
It's probably easier to instead have a "meta-test" that calls out to your other test method repeatedly:
func testOnce() {}

func testManyTimes() {
    for _ in 0..<1000 { testOnce() }
}
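One caveat with this: XCTest runs setUp()/tearDown() only once around the whole meta-test, so all iterations share state. A sketch that drives the hooks manually (bypassing XCTest's own bookkeeping) could look like:
func testManyTimes() {
    for _ in 0..<1000 {
        setUp()      // reset per-test state by hand
        testOnce()
        tearDown()
    }
}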
As the sketch above shows, you might need to call out to some per-test setup/teardown methods. You could perhaps work around that by instead making the loop body be something like:
let test = XCTestCase(selector: #selector(testOnce))
test.invokeTest()
This would lean on the XCTest machinery that your standard tests use, but it might gripe about not being wired into an XCTestCaseRun (or not).
I have a couple of times been caught out putting functional code in NSAssert or NSParameterAssert, like:
NSParameterAssert( [self doSomeWork] );
The problem with this is that when you do a release build, not only is the code that aborts when the test fails omitted, but my code within the parentheses is omitted as well.
Obviously the fix is simple, but this still seems wrong to me: the logic of the code changes between test and release builds.
I should make it clear that I only use this pattern for situations where a failing assert indicates a programmer error.
Personally, I prefer the AssertMacros, which guarantee that the code will be executed, but don't use assertions.
http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/EXTERNAL_HEADERS/AssertMacros.h
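For example, something like __Require([self doSomeWork], bail); evaluates the call in every build configuration and jumps to a bail: label when the expression is false (the label name is just illustrative); in debug builds it additionally logs the failure.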
I use both assert and NSAssert, and so define the appropriate values to disable them in Distribution builds. (I also normally work with a ReleaseWithAsserts configuration to get optimized code with the asserts enabled, so I test as close to the actual distributed code as possible.)
What I do is this (you can use the other define if you want; NDEBUG is for assert()):
#ifndef NDEBUG
BOOL didWorkSucceed =
#endif
[self doSomeWork]; // returns a BOOL indicating success

assert(didWorkSucceed);
You can use the defines for all kinds of stuff too, such as logging. Mostly I do test the return codes in Distribution, and return a nil object if the call fails instead of ignoring it.