My app is in TestFlight now, and I have only experienced this maybe once myself. How can I get to the bottom of a rare crash without the device being hooked up to the computer? The only information I have is that the app occasionally freezes, but it doesn't seem to happen on any specific action. How do I debug this?
Here are some approaches:
Write log files to disk, which you can later sync or send to a server, so you can investigate which operations succeeded before the crash (see the sketch after this list)
Integrate a third-party crash reporter (such as KSCrash or Crashlytics) that gives you a backtrace (note, though, that enabling Bitcode will make symbolication much harder)
Try to reproduce the problem in the simulator (try different builds, debug and release configurations, simulated memory warnings, etc.)
Try to reproduce with a computer connected. Interactive debugging makes things so much easier.
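A minimal sketch of the log-to-disk approach, assuming a hypothetical AppLogger type and an app.log file in the Documents directory (both names are illustrative, not part of any framework):

import Foundation

// Minimal file logger: appends timestamped lines to a file in Documents so
// the log survives a crash and can be synced or uploaded later.
final class AppLogger {
    static let shared = AppLogger()

    private let fileURL: URL
    private let queue = DispatchQueue(label: "app.logger") // serialize writes

    private init() {
        let docs = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
        fileURL = docs.appendingPathComponent("app.log")
    }

    func log(_ message: String) {
        queue.async {
            let line = "\(Date()) \(message)\n"
            guard let data = line.data(using: .utf8) else { return }
            if let handle = try? FileHandle(forWritingTo: self.fileURL) {
                handle.seekToEndOfFile()
                handle.write(data)
                handle.closeFile()
            } else {
                try? data.write(to: self.fileURL)
            }
        }
    }
}

// Usage: sprinkle breadcrumbs around the operations you suspect, e.g.
// AppLogger.shared.log("Did start image export")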
Good luck!
Related
We've recently released our app to our userbase and we are seeing a bunch of redacted exceptions in Sentry that we can't debug in any logical way.
The only thing these exceptions seem to have in common is that they never happen while the application is active.
And the available memory seems to be very low on these devices.
One theory we have is that the OS is terminating backgrounded applications due to low available memory.
But that's quite an assumption to make at this point, when I'm more inclined to believe we have made a mistake in our own code.
To my questions: how would we go about debugging these redacted exceptions? And are we right to believe that our app being terminated while it's not active is no cause for concern?
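One way to gather evidence for or against the low-memory theory is to record memory warnings and app-state transitions as breadcrumbs. The sketch below uses standard UIKit notifications; the breadcrumb callback is a stand-in for whatever logging or crash-reporting call is already in place, not part of Sentry's API:

import UIKit

// Keep the observer tokens alive so the observers stay registered.
var lifecycleObservers: [NSObjectProtocol] = []

// Records memory warnings and state transitions so a later crash report (or
// log file) shows whether the app was backgrounded and under memory pressure
// shortly before it died.
func installLifecycleBreadcrumbs(breadcrumb: @escaping (String) -> Void) {
    let center = NotificationCenter.default
    let events: [(Notification.Name, String)] = [
        (UIApplication.didReceiveMemoryWarningNotification, "Received memory warning"),
        (UIApplication.didEnterBackgroundNotification, "Entered background"),
        (UIApplication.willEnterForegroundNotification, "Will enter foreground")
    ]
    for (name, message) in events {
        let token = center.addObserver(forName: name, object: nil, queue: .main) { _ in
            breadcrumb(message)
        }
        lifecycleObservers.append(token)
    }
}

If the breadcrumbs consistently show a memory warning followed by backgrounding right before the redacted events, that supports the low-memory-termination theory.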
The on-premise version of Sentry has several issues related to this specific problem. According to the Sentry team, these will be fixed in an upcoming release of the on-premise version. But to summarize:
At first we had difficulties getting the upload scripts for the dSYMs to work. The Fastlane lane mentioned here did not work at all. Neither did the bash script suggested in the Sentry interface under debugging symbols.
What did work was using sentry-cli (latest version) and bumping up the accepted upload file size on the nginx server in front of our on-premise installation. But even after successfully getting our dSYM files to show up in Sentry, we had more problems.
The issues we've encountered are listed below:
A required debug symbol file was missing
@johan12345 Sorry for getting back to you so late. We've verified your debug symbols and can confirm they should process and symbolicate correctly. The issue you are referring to has been fixed a while back in both sentry-cli and sentry and will be available with the next release.
We have been preparing a major launch over the last couple of months which is why there have been no releases recently. However, since we've received a couple of requests regarding symbolication for on-premise customers, we will try to push a new release out soon. I cannot give you an exact timeline, though, so please stay tuned.
Again, I'm very sorry for the inconvenience this might have caused.
https://github.com/getsentry/sentry/issues/7595
Reprocessing 12 events …
Some users report sometimes getting stuck on reprocessing. This mostly happens with self-hosted installations, but we have also had two support issues.
This seems to be triggered by internal server errors in the processing pipeline in bad places.
Related: https://forum.sentry.io/t/stuck-there-are-x-events-pending-reprocessing/1518/6
https://github.com/getsentry/sentry/issues/5862
We've added a new button called "Discard all" which can be found above your processing issues list.
This will discard all processing issues and the corresponding events.
We've also found an error in our processing pipeline we've yet to fix.
I will close this issue for now and link new issues regarding processing errors later.
So the only thing I can advise right now is basically to deploy the master branch of Sentry, because our last release was in November and we have fixed a bunch of things since then.
I'm not sure if we will release a new version before Sentry 9 (which still needs some time).
https://forum.sentry.io/t/ios-exceptions-shows-up-as-redacted/3681
TLDR: We are switching to Crashlytics
I was wondering if there is a performance difference between launching an app on your iPhone from Xcode versus starting it from the phone itself. When you launch it from Xcode, it seems to run in a debug "lite" mode, in the sense that you are getting data in the console.
Are there any performance differences when you don't launch from Xcode?
There are performance differences. You wouldn't be able to put a general number on how big they are, because they depend on a number of factors. The differences can go from not noticeable to significantly slowing down even a fairly simple application.
First, you have three different operating modes:
A release build that is installed to the device directly
This will enable things such as compiler optimizations. For the exact settings, have a look at your project file.
A debug build run with a debugger attached (e.g. by launching from Xcode)
On top of your application missing compiler optimizations because of being built with the debug configuration, the debugger can be an additional performance drain.
From personal experience I know that having a lot of breakpoints (especially symbolic ones) or watchpoints can make things noticeably slower. You may experience similar problems if you are doing intensive logging.
In some projects, debug code is also compiled to behave slightly differently than in production. For example, if you use a logging framework such as CocoaLumberjack, you would set a more verbose log level, which again means more processing work (see the sketch after this list).
A debug build run without a debugger attached (e.g. by installing through Xcode and launching separately)
You will not have any compiler optimizations and will still run the Debug configuration logic, but your application isn't additionally slowed down by the debugger.
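To illustrate the point about debug-only behavior (independent of any particular logging framework), a common Swift pattern gates verbose work behind the DEBUG compilation condition, which Xcode's default Debug configuration defines via Active Compilation Conditions. This is a sketch with a made-up debugLog helper:

// Verbose logging that only exists in Debug builds. In a Release build the
// body compiles to nothing, so the string formatting and console I/O cost
// disappears entirely (the autoclosure argument is never evaluated).
func debugLog(_ message: @autoclosure () -> String) {
    #if DEBUG
    print("[DEBUG] \(message())")
    #endif
}

// Usage:
// debugLog("Parsed \(items.count) items: \(items)")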
Actually, there can be a lot of difference between debug and release builds.
By default, debug builds don't use optimizations but do add debug symbols to the binaries they produce. These two factors can have a dramatic effect on the app's performance and size, among other things.
You can look at the project settings file to understand exactly what the differences are.
But again, to your question: yes, there can be performance differences between debug and release.
First of all, I'm sorry to ask a somewhat vague question, but I'm only doing it because I'm clueless as to what might be causing the problem.
I've been building an app with Sprite Kit and it has worked great. I made some additions to the code and suddenly saw a dramatic decrease in performance. I have rolled back all my code changes, but performance didn't recover. I'm left clueless as to what's wrong.
After hideous debugging, I've noticed that the performance problem only affects Run and Test builds. If I do a Profile build, the app behaves as normal, i.e. with high performance. This would suggest that my problem is somewhere in the build configuration, but I'm completely new to iOS build configurations.
Can anybody suggest what might be the cause of this? Where should I start looking? My background is in Java 5, and the compiler and other settings are quite strange to me.
The default setup is to have a Debug config and a Release config, where Debug is configured with no optimizations, and Release does include optimizations. Optimizing often makes it hard to use the debugger, which is why unoptimized code is preferred there.
By default, Run, Test, and Analyze use the Debug config, Profile and Archive use the Release config. Your change in behavior could come from lots of differences: you may have had it configured before to build for Run in Release mode. You may have had a subproject built in Release mode that is now in Debug. You may have made a coding change that is very slow unless the optimizer is used. (Since you say you rolled back the code, this last one is unlikely, but check your version control and see if you changed any project settings.)
If you're seeing weird "it wasn't like that before" behavior when you think you've put everything back, make sure you're rebuilding everything. Delete your Derived Data directory. You can determine where it is in Preferences, Locations. "Delete your Derived Data directory" is the Xcode-equivalent of "try rebooting." It's the most common way to fix strange "Xcode isn't working right" problems. In fact, I put my Derived Data in /tmp/build so that it gets deleted every time I reboot (and so the path is easier for me to remember).
Sometimes I'm trying to track down a really rare bug in an iOS app. I'll hit it in the debugger after hours of trying to repro, only to have Xcode or lldb crash on me while I'm debugging (usually if I'm stepping through C++ code). This is beyond infuriating.
With gdb you can use generate-core-file to create a core dump of the process so that you can later re-load it in gdb and at least look at all of the memory. What I want is the ability to do something similar in lldb, so that when Xcode crashes (as it always tends to do at the worst times) I can recover my debugging session without having to reproduce the crash.
The app is running on a non-jailbroken iPhone, so I don't have much access to the OS to do something like dump the memory from there.
One possible answer is to just use gdb instead of lldb, but I think that causes some other problems that I'm not remembering at the moment, plus it doesn't have some of the features that are useful in lldb.
UPDATE: Xcode 6, released in fall 2014, includes a new process save-core command in lldb -- lldb can now generate a core dump of a user process. E.g. run (lldb) process save-core /tmp/corefile and wait a little bit.
The original answer for Xcode 5 and earlier lldb's, was:
This feature isn't implemented in lldb yet. It isn't implemented in the Apple version of gdb either, for that matter.
While not a commonly requested feature, it is something other people have said would be useful as well. Hopefully someone will be sufficiently motivated to add that capability to lldb. I'm not sure how well it would work on an iOS device because it's going to involve gigantic amounts of data being transferred up to the Mac over a protocol that isn't very efficient for large data transfers - I expect it would be remarkably slow.
The core file can be opened with lldb -c /tmp/corefile
It's worth noting that the process explorer tool for iOS can generate core dumps of any PID (assuming you have root, or the process runs as the same UID as you) without impacting the process.
This is a pretty silly situation: I'm using GHUnit to test an app and I'm running those tests outside the simulator according to the instructions.
Everything was great for a long time, but we're now getting into a situation where I see this mysterious log message in the console, coinciding with a pause of several seconds, rather frequently in my test suite:
Timed out trying to acquire capabilities data.
This is a little disconcerting if only because it's only happening on one machine; everything is as smooth as can be everywhere else I run this test suite. I can totally believe that there's hardware missing or failing on this machine, but does anybody have any idea where to go next in debugging this? Google has never heard the phrase before.
We found this to be a problem when the unit tests are the first thing run after a reboot.
While it would be nice to know the root cause, we were able to fix it by having the simulator launch at login on our continuous integration machine.