Xcodebuild - skip "Finished requesting crash reports. Continuing with testing" - iOS

I'm running a CI machine with Xcode.
The tests are triggered using fastlane gym. I see this line in the output:
2019-05-27 16:04:28.417 xcodebuild[54605:1482269] [MT]
IDETestOperationsObserverDebug: (A72DBEA3-D13E-487E-9D04-5600243FF617)
Finished requesting crash reports. Continuing with testing.
This operation takes some time (about a minute) to complete. As far as I understand, Xcode requests crash reports from Apple to show in the "Organizer" window.
Since this is a CI machine, the crash reports will never be viewed on it, so this step could be skipped entirely. How can I skip it?

Your mileage may vary, but after setting up a new machine with the following configuration, I encountered the same issue the OP describes:
macOS 10.15.2
Xcode 11.3
fastlane 2.139.0
iOS 13.3 simulators
When I ran my fastlane tests with 3 devices, I wound up at the same message, and the run sat idle for about four minutes before I terminated it.
I then took the steps I had outlined in a comment to the OP:
fastlane scan init
Edit my Scanfile (a sketch is shown after these steps)
I initially set disable_concurrent_testing(false), and when I ran the tests through fastlane, I got stuck again. Changing the value to disable_concurrent_testing(true) allowed the tests to run on my machine.
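For reference, a minimal Scanfile along those lines might look like the sketch below; the scheme and device names are placeholders, and only the disable_concurrent_testing setting comes from the answer above:

# Scanfile (fastlane scan configuration) -- illustrative sketch
scheme("MyApp")                                                   # placeholder scheme name
devices(["iPhone 8", "iPhone 11", "iPad Air (3rd generation)"])   # placeholder simulators
disable_concurrent_testing(true)                                  # run tests on one simulator at a time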

I think blaming "Finished requesting crash reports. Continuing with testing" may be a red herring. I was having several jobs stop at this step, but when I looked closer (I ran the lane locally and tailed the logs), I saw that my tests were failing due to something else. It looks like fastlane doesn't correctly show how long this step takes; in fact, if you're seeing that message, the step is already complete and your tests are running. That changing concurrency fixes it for you may indicate your tests are failing due to a race condition.
So, anyway: install fastlane locally, run your lane locally, tail -f the build output as well as the log file, and see if the problem is revealed there. It was for me, but, as with everything, YMMV.
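Concretely, something along these lines (the lane name is a placeholder; scan's default build log directory is ~/Library/Logs/scan, but check scan's own console output for the exact file name):

fastlane <your_test_lane>                          # run the lane locally
tail -f ~/Library/Logs/scan/<App>-<Scheme>.log     # follow the Xcode build/test log as it runs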

Related

Why do Jest tests sometimes fail on CircleCI?

I have Jest tests that run against a Dockerized Neo4j database, and sometimes they fail on CircleCI. The error message for all 25+ of them is:
thrown: "Exceeded timeout of 5000 ms for a hook.
#*******api: Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
Since they fail only sometimes, maybe once in 25 runs, I am wondering if jest.setTimeout will solve the issue. I was able to make them fail locally by setting jest.setTimeout(10), but I am not sure how to debug this further, or whether something other than the small timeout (default 5000 ms) could be the issue. I would understand if one test in 25 failed, or a few, or if other suites failed as well, but it is only a single file, with every test inside that file failing. And it is always the same file, never a different one.
Additional information: locally, that single file runs in under 1000 ms against the staging database, which is huge compared to the Dockerized one, which holds only a few files at the time of the run.
For anyone who sees this, I was able to solve this by adding the --maxWorkers=2 flag to the test command in my CircleCI config. See here for details: https://support.circleci.com/hc/en-us/articles/360005442714-Your-test-tools-are-smart-and-that-s-a-problem-Learn-about-when-optimization-goes-wrong-
Naman's answer is perfect! I couldn't believe it but it really solved my problem. Just to be extra clear on how to do it:
I changed the test script in my package.json from jest to jest --maxWorkers=2. Then I pushed, and it solved my error.
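Spelled out, the change to the scripts section of package.json is just:

Before:
  "scripts": {
    "test": "jest"
  }

After:
  "scripts": {
    "test": "jest --maxWorkers=2"
  }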

Missing crash logs for iOS unit test runs in test result bundle

I am running a large unit test suite on CircleCI using Xcode 10.1, with fastlane executing the test run.
After some recent code changes, it seems the tests crash once and the test run is restarted, after which it completes. The crashes have happened while running different tests each time, but only one test crashes per run.
There is little to no useful output in the console about why the crash occurred. Based on WWDC 2016 session 409, I would expect to see crash (diagnostic) logs in the xcresult bundle produced by the tests, but I can't find them.
Is this something specific to Xcode Server, which they use in the WWDC demo? Is there some flag or environment variable that needs to be set to make these get saved in other environments?

Electron build timing out on CircleCI

I am trying to get CircleCI to work with my Electron app, but I can't figure out how to stop the timeout error.
You can look at the app here: https://github.com/sauravyash/OutFlux
It fails on the npm test stage of the build with:
> outflux#1.0.0 test /home/ubuntu/OutFlux
> electron .
Xlib: extension "RANDR" missing on display ":99".
Xlib: extension "RANDR" missing on display ":99".
command ((npm :test)) took more than 10 minutes since last output
I'm new to the whole idea of CI so bear with me if the answer is obvious.
In case anyone sees this, you must understand that running Electron by itself isn't testing anything, which is the whole point of CI. You should use something like Spectron to test whether your app works in headless testing (testing without a real display); see the sketch below.
https://github.com/electron/spectron
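As a rough sketch (not from the original answer; Mocha, the window-count assertion, and the timeout value are just assumptions), a minimal Spectron test looks something like this:

const { Application } = require('spectron')
const assert = require('assert')

describe('application launch', function () {
  this.timeout(10000)

  beforeEach(function () {
    // require('electron') resolves to the Electron binary installed as a devDependency
    this.app = new Application({ path: require('electron'), args: ['.'] })
    return this.app.start()
  })

  afterEach(function () {
    if (this.app && this.app.isRunning()) {
      return this.app.stop()
    }
  })

  it('shows an initial window', async function () {
    const count = await this.app.client.getWindowCount()
    assert.strictEqual(count, 1)
  })
})

Point the package.json test script at the test runner instead of electron ., and CI gets real pass/fail output instead of an app that sits there until the timeout.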

Older Go versions on Travis are killed in the middle of running tests

I made a new commit and my builds for Go 1.5.4 and Go 1.6.3 started failing, but Go 1.7 was still working.
I then reverted this commit, but the builds kept failing even though the previous commit had passed.
Then I rebuilt an old commit which had passed before, and still these older versions consistently fail.
https://travis-ci.org/gogo/protobuf/builds/171003019
While running the tests, the builds fail with signal: killed:
/home/travis/.gimme/versions/go1.5.4.linux.amd64/pkg/tool/linux_amd64/compile: signal: killed
go build github.com/gogo/protobuf/test: /home/travis/.gimme/versions/go1.5.4.linux.amd64/pkg/tool/linux_amd64/compile: signal: killed
go build github.com/gogo/protobuf/test/combos/both: /home/travis/.gimme/versions/go1.5.4.linux.amd64/pkg/tool/linux_amd64/compile: signal: killed
We hit this issue on Travis, and adding the option sudo: required seems to fix it, possibly because more memory is available in the sudo-enabled environment: https://docs.travis-ci.com/user/reference/precise/
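In .travis.yml that is a single top-level line; the Go versions below are the ones from the failing builds, the rest of the file is left out:

sudo: required
language: go
go:
  - 1.5.4
  - 1.6.3
  - 1.7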

TFS build partially succeeded when calling a batch file, but no error in log

I'm building a solution which requires a batch file to be run after the build (there's a sequence in the workflow for this). TFS flags the build as partially succeeded, but there's no error in the log, even in full verbose mode ("diagnostic"). I'm checking the errorlevel after each line in the batch file and it's always 0. I've also tested redirecting stdout and stderr to a file after each line, and there's no clue there.
It’s got nothing to do with unit tests because I’m skipping them for the time being.
I’ve noticed that usually when an error occurs in a batch file (e.g. file not found) there’s a visual cue to indicate the error and this matches the partially succeeded status. But I don’t see any visual cue.
So how can TFS decide that the build is only partially succeeded?
Thank you,
Solved.
It turns out the GetImpactedTests activity is throwing an exception (I can see it in the Event Viewer on the TFS machine), but it doesn't show up at all in the build log.
I'm guessing that this exception makes the build partially succeed (because the compilation part succeeded), but I couldn't see the assignment explicitly in the build log. When I bypass the impact analysis (either by setting Analyze Test Impact to False or by removing the GetImpactedTests activity altogether), the error does not occur.
We experienced something similar here using the Lab Workflow (to kick off our CodedUI tests). Different build template, same symptoms.
I have noticed that the build process reports that it partially succeeded, highlighting what seems to be a successful step in the deploy script (batch file).
The command in question installs our mobile app on a mobile device (in order to test it at night):
adb install -d -r test.apk
I checked the errorlevel right after running the adb command, but it was 0.
Then I suspected that the command might be sending its output to stderr, and found an article on the Android Open Source Project site which confirmed my hypothesis.
Following is my fix:
adb install -r -d test.apk 2>&1
Appending 2>&1 simply redirects stderr to stdout and now my deploy script does not report an error anymore and the build now succeeds (when all tests pass!).
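So the relevant part of the deploy script now reads something like this (an illustrative excerpt; the file name and error handling around it are just an example):

rem install the app on the attached device; fold adb's stderr output into stdout
adb install -r -d test.apk 2>&1
if errorlevel 1 (
    echo adb install failed
    exit /b 1
)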
Conclusion: when a script writes anything to stderr, the build workflow will report it as an error (a partial success, since it does not prevent the workflow from completing).
I know this is not your particular issue but since we had the same symptoms, I thought the stderr information could help somebody else find out the reason why their build process is reporting a partial success even though everything seems to work.
