I have an Xcode 5 CI server running my XCTest unit tests.
My test cases rely on breakpoints to trigger specific actions. These actions are essential to the running of the tests.
Everything passes if I run the tests locally. The problem is that when a bot runs the tests on the server, breakpoints seem to be ignored.
I tried a sample breakpoint, with an alert sound, just for testing. I shared the breakpoint and committed the shared breakpoint to the project's git repository. The bot correctly checks out the project with the breakpoint included (I can verify this by examining the project in /Library/Server/Xcode/Data/BotRuns/Cache/...).
However, when the bot runs, the breakpoint is NOT triggered. I don't hear the sound and execution does not pause.
This behaviour obviously makes sense in most cases, but in my specific case, is there any way to configure the bot so that breakpoints are not ignored?
Whether or not you can enable this, having your tests rely on something external to the system under test, like a breakpoint, in order to pass seems like a broken design to me.
Ideally, your tests should be able to run on any machine, either interactively or non-interactively. Since you can't guarantee that breakpoints have the "Automatically continue after evaluating" flag set, they are clearly unsuited to a non-interactive run.
Using breakpoints for testing also adds a dependency on Xcode for running the tests, as command-line tools like xcodebuild and xctool might not even understand breakpoints in the project file.
I would refactor your tests to remove this dependency on breakpoints. If you need help with that, it sounds like a great Stack Overflow question ;)
Related
I am trying to use XCUITest and XCTest together. I found this Twitter post saying it was possible. However, in which section of the build settings do I put those new attributes?
I ask because I tried that method and put those settings in the user-defined section of the project target, and it would not let me run my tests because those settings were defined.
UI tests operate like this:
The app is launched.
The tests run in a separate process, external to the app, and tell the app what to do.
Unit tests operate like this:
The app is launched.
The test code is injected into the running app.
The tests are executed.
These are radically different. UI tests operate strictly from the outside. They have no access to the internals of the program. In the end, UI tests boil down to simulating user actions.
Unit tests, on the other hand, operate from the inside. They can reach anything that is non-private.
The only way for UI tests to perform something like a unit test would be to build the test functionality into the production code, accessible through gestures. There are better ways to unit test than that, namely using unit testing frameworks.
So… no. They shouldn't live together.
We have a TFS 2017 build agent running a Visual Studio Test task to execute our unit tests. This has worked fine for several years, but all of a sudden, without any code changes, the task gets stuck.
All the tests have finished running, we see summary information, and it will sit at what appears to be the place where it would normally publish the results... but then nothing happens. We've waited 12+ hours for it to finish. This step normally takes about 90 minutes.
I've confirmed that the TRX file is being created. It's about 4MB in size. We're running a bit over 3000 unit tests.
I've also tried disabling code coverage and attachments upload inside the test task, but it doesn't appear to make a difference.
Below is a screen cap of the log output when the step is stuck.
Lastly, we have lots of other projects on this server whose tests run and publish fine, as well as TFS Releases for this same build that also run tests (integration/system tests) without issue.
UPDATE: We ran this build on a different build server, and it published tests correctly. So this means there is something wrong with this specific build server...
UPDATE 2: So I'm no longer sure what is happening here. The original build server we were having issues on is now working fine with no changes whatsoever; it just started working again. The other build server was working, and then stopped, with the same issue. I broke up the 3000+ tests into two steps, roughly 50/50, and that worked a couple of times, but now it does not. So this does not appear to be server-specific, nor does it appear to be related to the quantity of tests. Debug logging offers nothing useful, as everything seems fine right up until it just stops doing anything after generating the TRX file.
UPDATE 3: Well, it's happening again. I'm not sure how to proceed. I even tried Fiddler on the build box to see if I could catch funky-looking traffic, but most of the traffic I'd expect to see isn't there. It's like a good chunk of the work (such as source downloads, progress reporting, or test result publishing) isn't being captured by Fiddler. Is it not over HTTP/HTTPS?
This was difficult to figure out due to the quantity of tests we're running, but I was able to narrow it down to a test that launched ping.exe:
[ExpectedException(typeof(TimeoutException))]
[TestMethod]
public void ProcessWillTimeout()
{
    // "ping 127.0.0.1 /t" pings forever, so Execute() is guaranteed
    // to hit its 500 ms timeout and throw the expected TimeoutException.
    const string command = "cmd";
    const string args = "/C ping 127.0.0.1 /t";
    var externalProcessService = new ExternalProcessService();
    externalProcessService.Execute(command, args, TimeSpan.FromMilliseconds(500));
}
For whatever reason, this test was leaving both conhost.exe and ping.exe "orphaned". The fact that these processes were not terminating was, for an unknown reason, preventing the tests from publishing their results back to TFS. There is probably something somewhere that waits for the process to finish, and that was never happening.
Indeed, we would see a bunch of conhost.exe and ping.exe processes in both Task Manager and Process Explorer:
You'll notice the tool tip there... "[Error opening process]". I couldn't even use Process Explorer to kill these processes - although Task Manager could. Sure enough, when I killed them, the TFS build task would immediately resume and finish publishing results.
So there is clearly some kind of bug in that ExternalProcessService code we were testing (despite carefully having a finally block that terminated the process), but we are at least able to have our build tests run again without issue.
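For anyone hitting something similar: the key seems to be tearing down the entire child process tree on timeout, not just the immediate cmd.exe. Below is a minimal sketch of that idea; this ExternalProcessService is a simplified, illustrative stand-in rather than our actual code, and Process.Kill(entireProcessTree: true) assumes .NET Core 3.0 or later (on the full .NET Framework you would have to walk the tree or shell out to taskkill /T /F yourself).

using System;
using System.Diagnostics;

// Simplified, illustrative stand-in; the interesting part is the
// cleanup in the finally block.
public class ExternalProcessService
{
    public void Execute(string command, string args, TimeSpan timeout)
    {
        var startInfo = new ProcessStartInfo(command, args)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            try
            {
                if (!process.WaitForExit((int)timeout.TotalMilliseconds))
                {
                    throw new TimeoutException(
                        $"'{command} {args}' did not exit within {timeout}.");
                }
            }
            finally
            {
                if (!process.HasExited)
                {
                    // Kill cmd.exe AND everything it spawned (ping.exe and
                    // its conhost.exe). A plain process.Kill() terminates
                    // only cmd.exe itself, which is exactly how the
                    // children end up orphaned.
                    process.Kill(entireProcessTree: true);
                }
            }
        }
    }
}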
I suggest you abandon this build and trigger it again, to narrow down whether the issue reproduces consistently.
According to your description, all your other builds work properly, and this one worked fine for several years. All tests pass and the test report is written; only the task hangs. Please double-check whether some other processes might not be closing down properly.
Besides that, use another build agent to test again. Also try creating a new build definition with the same settings and triggering that; this may do the trick.
Moreover, you could enable verbose logging for troubleshooting. To do this, simply add a build variable named system.debug and set its value to 'true'; the logs will then contain more detailed info.
When running a UI Test, how do I keep the simulator open so I can manually test additional steps?
After a UI Test completes, the simulator will shut down. I'd like to use the UI Test as an automation to reach a certain point in the app. Then I'll do a few additional things not covered by the test.
I don't think there's an option for that. You should separate your automated and manual testing: automated testing should ideally run on CI, and manual testing should be done separately from the UI tests.
I'm configuring continuous integration with TFS 2012, and I have one problem I need to solve.
I need to exclude some paths from triggering builds.
For e.g. I have:
$/Project1
$/Project2
I want a build to be triggered after each check-in to $/Project1, and that build must build both $/Project1 and $/Project2.
But after a check-in to $/Project2, I don't want a build to be triggered for that build definition.
The Source Settings of a build definition only offer the "Active" and "Cloaked" statuses, and that isn't what I need.
Thanks a lot in advance.
P.S. The worst-case solution is to add the comment ***NO_CI*** on check-in, which suppresses the CI trigger for that check-in. It would be great if there were some other way.
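For reference, the marker just needs to appear somewhere in the check-in comment for TFS to skip the CI trigger; for example, from the TFVC command line (the comment text here is illustrative):

    tf checkin /comment:"Content-only change ***NO_CI***"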
Based on the comments, this boils down to an X-Y problem. You can't do what you want to do, but the reason you want to do it is because you're trying to solve the wrong problem.
You're running the UI tests at the wrong point in your dev-test cycle. UI tests should not run during a build, they should run after a release. A change to your test project should absolutely result in a build.
Someone is developing UI tests against code that's not yet in source control, which makes no sense. If someone is writing tests against code, the code should be source controlled.
I'm guessing that someone is manually pushing uncommitted code out to a dev server, which is being used by someone else to write tests. Don't do this. Use a real release management solution so that as developers write code, each check-in is automatically deployed to a dev/QA environment. Then the folks writing the UI tests will have something to test against. What's the point in writing tests against code that's in such a state of flux that the developer responsible for it isn't even sure it's worth being source controlled? That just results in spending a lot of time rewriting tests as the code evolves.
Assuming you set everything up properly, every commit of the application code will result in the current set of tests being run against the latest commit. Every commit of the test code will result in the new set of tests being run against the existing application code. The two things (application code and test code) should be coupled, and should always build together.
And one last thing, mostly opinion: UI tests are awful and provide very little utility. They are brittle, slow, and hard to maintain. I have never seen a comprehensive UI test suite actually deliver value. UI tests are best used as a small set of post-release smoke tests. Business logic should be primarily unit tested, with a smaller suite of integration tests to back it up.
The Simperium Android GitHub page explains how to run the Android tests, but I can't find out how to run the iOS tests. I tried opening Simperium.xcodeproj, but Product->Test is grayed out.
Eventually I'd like to write my own unit tests that use Simperium, but I thought I'd start by looking into how Simperium structures its tests.
Thanks.
The process you describe adds Simperium's Integration Tests target to your own app's scheme.
Normally, you would want to switch to the third-party library's scheme first and run the tests right there. To do so, please click the scheme picker (right by the Play / Stop buttons) and select 'Simperium'.
Make sure to select a simulator as well, since the tests are not supported on a real device.
Regarding the failures: the integration tests simulate real interaction with the backend and have several timeouts.
Could it be that you're running them on a slow internet connection?
Thanks!
I figured out how to run the tests. In Xcode, I selected the Integration Tests scheme and edited that scheme. I selected 'Test' on the left side, then clicked the little plus at the bottom of the main pane. I added the 'Integration Tests' target. The list of tests to run appeared in the pane, and Product->Test could then be used to run the tests.
Unfortunately, 9 of the integration tests failed when I ran them.