XCTest doesn't measure CPU and memory - ios

I have this performance test to check Memory usage during scrolling.
let app = XCUIApplication()

func testMemoryUsage() {
    let measureOptions = XCTMeasureOptions()
    measureOptions.invocationOptions = [.manuallyStop]

    measure(
        metrics: [XCTMemoryMetric(application: app)],
        options: measureOptions
    ) {
        app.buttons["Listing Page"].tap()
        swipeUp()
        stopMeasuring()
        tapBack()
    }
}

func tapBack() {
    app.navigationBars.buttons.element(boundBy: 0).tap()
}

func swipeUp() {
    collectionView.swipeUp(velocity: .fast)
}

func swipeDown() {
    collectionView.swipeDown(velocity: .fast)
}

var collectionView: XCUIElement {
    app.collectionViews["collectionViewId"]
}
But when I run the test, it doesn't display any metrics at all.
I tried changing
XCTMemoryMetric(application: app) to XCTMemoryMetric()
In this case it at least shows some result, but the result is incorrect: as seen in the screenshot below, the app consumes around 130 MB of memory, yet the test reports only 9 kB. (The real memory consumption is around 130-150 MB, because there are a lot of images in the collection view.)
My guess is that it doesn't show the correct result because the app is not passed as a parameter. Yet when I do pass the app, it shows no results at all 🙃
Same issue happens when I write the test to check CPU usage with XCTCPUMetric.
Questions:
How to write a performance test that will show memory and CPU usage of some UI tests?
(Optional) When I run the test in Debug mode, it shows that two processes are running: ExampleUITests (the UI test target) and Example (the main target). Is that normal? And when I measure memory consumption, I'm supposed to get the consumption of the main target, Example, right?

Related

Is there an option to capture memory footprint for the iOS app during XCUITest? Not for the whole test run, we need to capture for each user action

I have an XCUITest that executes a set of user actions, let's say:
launch app
Tap on login button
Enter credentials and login
I want to capture the memory metrics for each of these steps individually, to identify which step has more memory pressure.
I have tried XCTMemoryMetric, but it provides a single value for the entire test run.
Is there a way I can capture the memory footprint for each of these steps during the test run? FYI: we need to run the test on a real device, not the simulator.
Unfortunately, XCUITest does not provide a direct way to measure the memory footprint of individual steps during a test run. However, you can track memory usage yourself using the Mach task APIs. This requires a deeper understanding of the underlying iOS system, so be sure to familiarize yourself with the documentation before attempting this.
Update:
You can sample memory between steps by calling the task_info function from the Mach APIs. Keep in mind that task_info reports on the process that calls it, so to measure the app under test the call has to execute inside that app, not in the UI-test runner.
Here is an example implementation in Swift:
// Sample the resident memory of the calling process via the Mach task APIs.
// Note: mach_task_basic_info replaces the deprecated task_basic_info,
// which reports wrong values on 64-bit.
var taskInfo = mach_task_basic_info()
var count = mach_msg_type_number_t(MemoryLayout<mach_task_basic_info>.size / MemoryLayout<integer_t>.size)
let kerr: kern_return_t = withUnsafeMutablePointer(to: &taskInfo) {
    $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
        task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), $0, &count)
    }
}

if kerr == KERN_SUCCESS {
    let usedMemory = taskInfo.resident_size / 1024 / 1024
    print("Memory used: \(usedMemory) MB")
} else {
    let message = String(cString: mach_error_string(kerr), encoding: .ascii) ?? "unknown error"
    print("Error with task_info(): \(message)")
}
Note that this code captures the memory usage of whichever process runs it. In a UI test, code in the test method runs in the test-runner process (ExampleUITests), not in the app under test, so to measure the app itself you would need to execute this sampling code inside the app (for example, behind a debug-only hook) and report the value back to the test.
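To get a value per user action, one option is to wrap the snippet above in a helper and log it after each step. A minimal sketch, assuming you can call it from the process you want to measure; the name memoryFootprintMB is made up for illustration:

```swift
import Darwin

/// Returns the resident memory of the calling process in MB,
/// or nil if task_info fails. (Hypothetical helper name.)
func memoryFootprintMB() -> UInt64? {
    var info = mach_task_basic_info()
    var count = mach_msg_type_number_t(MemoryLayout<mach_task_basic_info>.size / MemoryLayout<integer_t>.size)
    let kerr = withUnsafeMutablePointer(to: &info) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), $0, &count)
        }
    }
    return kerr == KERN_SUCCESS ? info.resident_size / 1024 / 1024 : nil
}

// Hypothetical usage: sample after each user action.
// app.buttons["Login"].tap()
// print("after login: \(memoryFootprintMB() ?? 0) MB")
```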

Setting backpressure in OperationQueue (or alternative API, e.g. PromiseKit, Combine Framework)

I have 2 steps in a processing pipeline which runs over many images:
Step 1: Load (locally or by downloading) an image (IO bound).
Step 2: Run a machine learning model (CPU/GPU/compute bound; single threaded, because the model is big).
How do I limit the number of images stored in memory (from Step 1) queuing for the second step? This is called backpressure in reactive programming.
Without backpressure, all the work from Step 1 might pile up, leading to high memory usage just from keeping the images open.
I guess I could use a semaphore (e.g. of 5) which represents roughly the amount of memory I am willing to give to Step 1 (5 pictures). I guess this would make 5 of my background threads block, which is probably a bad thing? (That's a serious question: is it bad to block a background thread, since it consumes resources?)
If you're using Combine, flatMap can provide the back pressure. flatMap creates a publisher for each value it receives, but exerts back pressure once it reaches the specified maximum number of publishers that haven't completed.
Here's a simplified example. Assuming you have the following functions:
func loadImage(url: URL) -> AnyPublisher<UIImage, Error> {
    // ...
}

func doImageProcessing(image: UIImage) -> AnyPublisher<Void, Error> {
    // ...
}

let urls: [URL] = [...] // many image URLs

let processing = urls.publisher
    .flatMap(maxPublishers: .max(5)) { url in
        loadImage(url: url)
            .flatMap { uiImage in
                doImageProcessing(image: uiImage)
            }
    }
In the example above, it will load 5 images, and start processing them. The 6th image will start loading when one of the earlier ones is done processing.
If you really do want to use OperationQueue, then simply set the queue's maxConcurrentOperationCount to 5 to prevent more than 5 operations from being started simultaneously.
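A minimal, self-contained sketch of that OperationQueue approach. The sleep stands in for the load-plus-model work, and the counters exist only to demonstrate that no more than five operations (and hence five images) are ever in flight at once:

```swift
import Foundation

// Backpressure with OperationQueue: at most 5 operations run at once,
// so at most 5 images would be held in memory at a time.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 5

let lock = NSLock()
var inFlight = 0
var peak = 0

for _ in 0..<20 {
    queue.addOperation {
        // Track how many operations are running simultaneously.
        lock.lock(); inFlight += 1; peak = max(peak, inFlight); lock.unlock()

        Thread.sleep(forTimeInterval: 0.05)  // stands in for load + model

        lock.lock(); inFlight -= 1; lock.unlock()
    }
}
queue.waitUntilAllOperationsAreFinished()
print("peak concurrency: \(peak)")
```

Doing the load and the processing inside the same operation matters: it guarantees each image is released as soon as its operation finishes, so the queue's concurrency limit directly bounds memory.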

Extremely high Memory & CPU usage when uploading parsed JSON data to Firebase in loop function

This is my very first question here so go easy on me!
I'm a newbie coder and I'm currently trying to loop through JSON, parse the data and back up the information to my Firebase server, using Alamofire to request the JSON.
Swift 4, Alamofire 4.5.1, Firebase 4.2.0
The process works, but not without infinitely increasing device memory usage and up to 200% CPU usage. By commenting out lines, I narrowed the memory and CPU usage down to the Firebase upload setValue line in my data-pulling function, which iterates through a JSON database of unknown length (pulling a maximum of 1,000 rows of data at a time, hence the increasing offset values). The database I'm pulling information from is huge, and with the increasing memory usage the function grinds to a very slow pace.
The function detects if it's found an empty JSON (end of the results), and then either ends or parses the JSON, uploads the information to Firebase, increases the offset value by 1000 rows, and then repeats itself with the new offset value.
var offset: Int! = 0
var finished: Bool! = false

func pullCities() {
    print("step 1")
    let call = GET_CITIES + "&offset=\(self.offset!)&rows=1000"
    let cityURL = URL(string: call)!

    Alamofire.request(cityURL).authenticate(user: USERNAME, password: PASSWORD).responseJSON { response in
        let result = response.result
        print("step 2")

        if let dict = result.value as? [Dictionary<String, Any>] {
            print("step 3")
            if dict.count == 0 {
                self.finished = true
                print("CITIES COMPLETE")
            } else {
                print("step 4")
                for item in dict {
                    if let id = item["city"] as? String {
                        let country = item["country"] as? String
                        let ref = DataService.ds.Database.child("countries").child(country!).child("cities").child(id)
                        ref.setValue(item)
                    }
                }
                self.finished = false
                print("SUCCESS CITY \(self.offset!)")
                self.offset = self.offset! + 1000
            }
        }

        if self.finished == true {
            return
        } else {
            self.pullCities()
        }
    }
}
It seems to me like the data being uploaded to Firebase is being saved somewhere and not emptied once the upload completes? Although I couldn't find much information on this issue when searching through the web.
Things I've tried:
a repeat-while loop (no good, as I only want one active repetition of each loop; it still showed high memory and CPU usage)
performance monitoring (Xcode call tree found that "CFString (immutable)" and "__NSArrayM" were the main reason for the soaring memory usage - both relating to the setValue line above)
memory usage graphing (very clear that memory from this function doesn't get emptied when it loops back round - no decreases in memory at all)
autoreleasepool blocks (as per suggestions, unsuccessful)
Whole Module Optimisation already enabled (as per suggestions, unsuccessful)
Any help would be greatly appreciated!
UPDATE
Pictured below is the Allocations graph after a single run of the loop (1,000 rows of data). It suggests that Firebase is caching the data for every item in the result dict, but only de-allocates the memory as one whole chunk once every single upload has finished.
Ideally, it should be de-allocating after every successful upload and not all at once. If anyone could give some advice on this I would be very grateful!
FINAL UPDATE
If anyone comes across this with the same problem: I didn't find a solution. My requirements changed, so I switched the code over to Node.js, which works flawlessly. HTTP requests are also very easy to code for in JavaScript!
I had a similar issue working with data on external websites and the only way I could fix it was to wrap the loop in an autoreleasepool {} block which forced the memory to clear down on each iteration. Given ARC you might think such a structure is not needed in Swift but see this SO discussion:
Is it necessary to use autoreleasepool in a Swift program?
Hope that helps.
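Applied to the upload loop from the question, that suggestion would look roughly like this (a sketch; DataService, dict, and the key names come from the question's code):

```swift
for item in dict {
    autoreleasepool {
        // Temporary objects created while building the reference and
        // encoding the item are drained after every iteration instead
        // of accumulating until the enclosing scope ends.
        if let id = item["city"] as? String, let country = item["country"] as? String {
            let ref = DataService.ds.Database.child("countries")
                .child(country).child("cities").child(id)
            ref.setValue(item)
        }
    }
}
```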
Sometimes the compiler is not able to properly optimise your code unless you enable Whole Module Optimisation in the project build settings. This usually happens when generics are being used.
Try turning it on even for the Debug environment and test.

Elusive crash: terminated due to memory issue

I'm still trying to debug an elusive crash in my app. See here for my earlier post.
The app takes sound from the microphone, processes it, and continuously updates the display with the processed results. After running uneventfully for many minutes, the app will halt with Message from debugger: terminated due to memory issue. There is no stack trace.
The timing of the crash makes it appear that there is some finite resource that gets exhausted after so many minutes of running. The time to crash is quite uniform: it may vary unpredictably when I change something in my code, but as long as the code stays the same, it remains approximately constant. On a recent set of 10 test runs, the time to crash varied between 1014 and 1029 seconds.
The number of times the display gets updated is even more uniform. On that same set of 10 tests, the number of calls to UIView.draw varied from 15311 to 15322. That's a variation of 0.07 percent, as opposed to 1.5 percent in the time to crash.
It's not running out of memory. My code is written in Swift 3, so I'm not doing any explicit mallocs or frees. I've made my class references weak where needed. And I've tested under the Activity Monitor, Allocations, and Leaks instruments in Xcode. My program takes up 44.6 MiB, and it doesn't grow over time.
And I've been careful about thread safety when accessing shared data. All shared data is read and written on the same serial DispatchQueue.
I've traced the crash to a section of code that writes a byte array to disk, then reads in another array of bytes. Here's a simplified version of that code:
var inputBuf: Buffer = Buffer()
var outputBuf: Buffer = Buffer()
var fileHandle: FileHandle? = ...

struct Buffer {
    let bufferSize = 16384
    var fileIndex: Int = 0
    var bytes: [UInt8]

    init() {
        bytes = [UInt8](repeating: 0, count: bufferSize)
    }

    func save(fileHandle: FileHandle) {
        fileHandle.seek(toFileOffset: UInt64(fileIndex))
        fileHandle.write(Data(bytes))
    }
}

func bug() {
    outputBuf.save(fileHandle: fileHandle!)
    fileHandle!.seek(toFileOffset: UInt64(inputBuf.fileIndex))
    let data = fileHandle!.readData(ofLength: inputBuf.bufferSize)
    for i in 0..<data.count {
        inputBuf.bytes[i] = data[i] // May crash here
    }
}
Usually the crash occurs during the loop that copies data from the result of the readData to my buffer. But on one occasion, the loop completed before the crash. That leads me to suspect the actual crash occurs on another thread. There's no stack trace, so my only debugging technique is to insert print statements in the code.
fileIndex is always between 0 and 2592500. I modified the code to close the FileHandle after use and create a new FileHandle when next needed. It did not affect the outcome.
It was the zombie detector! I turned off zombie detection and the app runs forever.

Memory leak: steady increase in memory usage with simple device motion logging

Consider this simple Swift code that logs device motion data to a CSV file on disk.
let motionManager = CMMotionManager()
var handle: NSFileHandle? = nil

override func viewDidLoad() {
    super.viewDidLoad()
    let documents = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as NSString
    let file = documents.stringByAppendingPathComponent("/data.csv")
    NSFileManager.defaultManager().createFileAtPath(file, contents: nil, attributes: nil)
    handle = NSFileHandle(forUpdatingAtPath: file)

    motionManager.startDeviceMotionUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: { (data, error) in
        let data_points = [data.timestamp, data.attitude.roll, data.attitude.pitch, data.attitude.yaw,
                           data.userAcceleration.x, data.userAcceleration.y, data.userAcceleration.z,
                           data.rotationRate.x, data.rotationRate.y, data.rotationRate.z]
        let line = ",".join(data_points.map { $0.description }) + "\n"
        let encoded = line.dataUsingEncoding(NSUTF8StringEncoding)!
        self.handle!.writeData(encoded)
    })
}
}
I've been stuck on this for days. There appears to be a memory leak, as memory
consumption steadily increases until the OS suspends the app for exceeding resources.
It's critical that this app be able to run for long periods without interruption. Some notes:
I've tried using NSOutputStream and a CSV-writing library (CHCSVParser), but the issue is still present
Executing the logging code asynchronously (wrapping startDeviceMotionUpdatesToQueue in dispatch_async) does not remove the issue
Performing the sensor data processing in a background NSOperationQueue does fix the issue (only when maxConcurrentOperationCount >= 2). However, that causes concurrency issues in file writing: the output file is garbled with lines intertwined between each other.
The issue does not seem to appear when logging accelerometer data only, but does seem to appear when logging multiple sensors (e.g. accelerometer + gyroscope). Perhaps there's a threshold of file writing throughput that triggers this issue?
The memory spikes seem to be spaced out at roughly 10 second intervals (steps in the above graph). Perhaps that's indicative of something? (could be an artifact of the memory instrumentation infrastructure, or perhaps it's garbage collection)
Any pointers? I've tried to use Instruments, but I don't have the skills to use it effectively. It seems that the exploding memory usage is caused by __NSOperationInternal. Here's a sample Instruments trace.
Thank you.
First, see this answer of mine:
https://stackoverflow.com/a/28566113/341994
You should not be looking at the Memory graphs in the debugger; believe only what Instruments tells you. Debug builds and Release builds are memory-managed very differently in Swift.
Second, if there is still trouble, try wrapping the interior of your handler in an autoreleasepool closure. I do not expect that that would make a difference, however (as this is not a loop), and I do not expect that it will be necessary, as I suspect that using Instruments will reveal that there was never any problem in the first place. However, the autoreleasepool call will make sure that autoreleased objects are not given a chance to accumulate.
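Applied to the handler from the question (keeping the same Swift 1.x-era APIs the question uses), that wrapping would look like:

```swift
motionManager.startDeviceMotionUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: { (data, error) in
    // Drain autoreleased temporaries (the joined string, the NSData)
    // after every update rather than letting them accumulate.
    autoreleasepool {
        let data_points = [data.timestamp, data.attitude.roll, data.attitude.pitch, data.attitude.yaw,
                           data.userAcceleration.x, data.userAcceleration.y, data.userAcceleration.z,
                           data.rotationRate.x, data.rotationRate.y, data.rotationRate.z]
        let line = ",".join(data_points.map { $0.description }) + "\n"
        self.handle!.writeData(line.dataUsingEncoding(NSUTF8StringEncoding)!)
    }
})
```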
