I have an app with a login screen and the screens that appear after login (the authorized part).
What is the best approach to testing the screens in the authorized part?
I have several ideas:
1) Somehow remove all the data from the keychain before each test, then go through the entire flow each time to get to the first screen after login. Since logging in means sending a request to the backend, I wait for the main screen using
let nextGame = self.app.staticTexts["Main Screen Text"]
let exists = NSPredicate(format: "exists == true")
expectation(for: exists, evaluatedWith: nextGame, handler: nil)
waitForExpectations(timeout: 5, handler: nil)
2) I pass some values in the launch environment:
app = XCUIApplication(bundleIdentifier: …)
app.launchEnvironment = ["notEmptyArguments": "value"]
app.launch()
This way I can pass a fake token, and our backend will accept it, so the app knows it should route me straight to the Main Screen, and all requests will succeed because my Network Service has this fake token.
But I feel that this is not a very safe approach.
Do you have any ideas about which approach is best, or can you suggest a better one?
The second idea you mentioned is a good way to skip the login screen in tests. Furthermore, implementing token passing will be helpful to the developer team as well. Those launch arguments/environment values can be stored in the scheme's Run settings.
Also, if you implement deep linking in the same manner, it will bring even more speed improvements for both the QA and the developer team.
Of course, these "shortcuts" should only be accessible when running a debug configuration (using #if DEBUG...). A minimal sketch of what that could look like is below.
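Here is a rough sketch of how the app could pick up a fake token from the launch environment in debug builds only. The "UITestAuthToken" key and the TokenStore type are assumptions made up for illustration, not something from the question:
// App side, early during startup (e.g. in the AppDelegate):
#if DEBUG
if let fakeToken = ProcessInfo.processInfo.environment["UITestAuthToken"] {
    // TokenStore is a hypothetical store your app would already have.
    // Saving the injected token lets the network layer use it and the app
    // route straight to the Main Screen.
    TokenStore.shared.save(token: fakeToken)
}
#endif

// UI test side, before launching:
let app = XCUIApplication()
app.launchEnvironment["UITestAuthToken"] = "fake-token-accepted-by-test-backend"
app.launch()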
In my opinion, your Login Service, and whatever other services your app needs to perform or show its use cases, should all be mocked. That means that in your automated unit/UI testing environment your app talks to mocked service implementations; the login or authorization service response is then stubbed to be either success or failure, so you can test both cases.
To achieve that, your services should all be represented as interfaces/protocols, and the implementation details should live in either the production, development, or automated testing environment.
I would never involve any networking in automated testing. You should, for example, create a mock implementation of your authorization service that, in the automated test environment, returns either success or failure depending on the test you are running (this setup could be done in the setUp() method). A rough sketch of that shape follows.
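As a rough sketch of that idea (the AuthService protocol, the MockAuthService class, and the LoginViewModel mentioned in the comment are names made up for illustration, not part of any real SDK):
// The app depends only on this protocol, never on a concrete network client.
protocol AuthService {
    func logIn(username: String, password: String,
               completion: @escaping (Result<String, Error>) -> Void)
}

// Mock used only by the test target; configure it per test.
final class MockAuthService: AuthService {
    var resultToReturn: Result<String, Error> = .success("fake-token")
    func logIn(username: String, password: String,
               completion: @escaping (Result<String, Error>) -> Void) {
        completion(resultToReturn)
    }
}

// In a test's setUp(), inject the mock instead of the real implementation, e.g.:
// viewModel = LoginViewModel(authService: MockAuthService())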
The most authentic test suite would sign in at the beginning of each test (if needed) and would sign out if appropriate during teardown. This keeps each test self-contained and allows each test to use a different set of credentials without needing to check if it's already signed in/needs to change to a different user account, because tests will always sign out at the end.
This is not a fool-proof method, as it may not always be possible for teardown code to execute correctly if a test has failed (since the app may not be in the state that teardown expects, depending on your implementation), but if you are looking for end-to-end tests that use only the code paths used by production users, this is one way you could do it.
Introducing mocking/stubbing can make your tests more independent and reliable - it's up to you to choose how much you want to mirror the production user experience in your tests.
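For illustration, a minimal sketch of that self-contained sign-in/sign-out structure in a UI test; the element identifiers and the signOut() helper are assumptions about the app under test:
import XCTest

class AuthorizedAreaTests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app.launch()
        // Sign in through the real login screen, as a production user would.
        app.textFields["Username"].tap()
        app.textFields["Username"].typeText("test-user")
        app.secureTextFields["Password"].tap()
        app.secureTextFields["Password"].typeText("test-password")
        app.buttons["Sign In"].tap()
    }

    override func tearDown() {
        // Sign out so the next test starts from a clean, logged-out state.
        signOut()
        super.tearDown()
    }

    private func signOut() {
        // Navigate to wherever the app exposes sign-out; purely illustrative.
        app.buttons["Settings"].tap()
        app.buttons["Sign Out"].tap()
    }

    func testMainScreenIsShownAfterSignIn() {
        XCTAssertTrue(app.staticTexts["Main Screen Text"].waitForExistence(timeout: 5))
    }
}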
I have an iOS app with Cognito authentication implemented very similar to CognitoYourUserPoolsSample. Most important fragments are in SignInViewController.swift:
When the user taps Sign In, an async task is added:
var passwordAuthenticationCompletion: AWSTaskCompletionSource<AWSCognitoIdentityPasswordAuthenticationDetails>?
...
@IBAction func signInPressed<...> {
    ...
    let authDetails = AWSCognitoIdentityPasswordAuthenticationDetails(username: self.username.text!, password: self.password.text!)
    self.passwordAuthenticationCompletion?.set(result: authDetails)
    ...
Later we get either a success or an error response:
extension SignInViewController: AWSCognitoIdentityPasswordAuthentication {
    public func getDetails<...> {
        DispatchQueue.main.async {
            // do something in case of success
            ...
    public func didCompleteStepWithError<...> {
        DispatchQueue.main.async {
            // do something in case of failure
            ...
I also have a UI test, which fills in the username and password, taps Sign In, and validates the response:
class MyAppUITests: XCTestCase {
    ...
    func testLogin() {
        let usernameField = <...>
        usernameField.tap()
        usernameField.typeText("user#domain.com")
        ...
        // same for password field
        // then tap Sign In
        <...>.buttons["Sign In"].tap()
Currently this test is working against the actual AWS infrastructure, which is not ideal for many reasons. What I want is to simulate various responses from AWS instead.
How can I do that?
I think the best approach would be mocking or stubbing the task queue, but I'm not sure how to approach that. Any direction will be much appreciated. If you have handled a similar task in an alternative way, I'd like to hear your ideas too, thanks.
Okay, I am not that familiar with the AWS iOS SDK and how exactly it implements the auth flow, so take the following with a grain of salt. It's less a full answer and more a general "strategy", I hope. I implemented a similar approach in my current project, not just for the login, but for all remote connections I make.
There are three things you need to do:
1. Run a small local webserver inside your UI test target. I use Embassy and Ambassador for that in my current project. Configure it to return whatever response Cognito (or another endpoint) usually gives. I simply curled a request manually and saved the response somewhere, but in my case I received plain data (and not, for example, a complete login page to show in a webview...). My guess is that Cognito actually shows a login (web)view and on successful login uses a deep link to "go back" to your app, which eventually calls your AWSCognitoIdentityPasswordAuthentication methods (success or error). You could have the test target, i.e. the webserver, call the deep link directly if you know what it looks like (which should be possible to find out?).
2. Add some mechanism to switch the Cognito endpoint during the test. This unfortunately requires adding production code, but if done right it shouldn't be too difficult. I did it by using a launch environment variable that I set during the test (see below). Unless your webserver supports https (Embassy does not out of the box), this also requires somehow configuring App Transport Security. The hardest part is surely figuring out where in the SDK that endpoint is constructed and how to change it. A quick look at the documentation leads me to believe that webDomain is where it's saved, but I don't see how it's set up. That property is even read-only, which complicates things. I assume, though, that you can change it in some configuration in your project? Otherwise it smells like a case for method swizzling... Sorry I can't offer more solid direction here.
3. During your tests, ensure the real endpoints that are going to be accessed during the relevant app flows are switched to http://localhost/.... I did so by using XCUIApplication().launchEnvironment["somekey"] = "TESTINGKEY", which matched my production code preparation in the second step. In my case I could simply load different endpoints (which had the localhost domain and otherwise the same paths as the original domains). Configure your webserver's responses according to the test case (successful login, invalid credentials, etc.).
I admit this was/is a lot of work, but for me it was worth it since I could easily run the entire app flow (which involved a lot of outgoing requests) without any network access. I had to implement our auth system on my own anyway, which gave me a lot of control over what URLs were used and where, making it easy to have a single place to stub them out depending on the launch environment variable. The ugliest part in my case was actually enabling ATS exceptions in my tests only, for which I had to use a run script for various reasons.
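To make steps 2 and 3 concrete, here is a minimal sketch of the endpoint-switching part. The APIEnvironment type, the "UITestBaseURL" key, the port, and the production URL are assumptions for illustration; the local webserver (e.g. Embassy) is assumed to already be serving the canned responses:
// Production code: resolve the base URL once, honoring an override set by UI tests.
enum APIEnvironment {
    static var baseURL: URL {
        #if DEBUG
        if let override = ProcessInfo.processInfo.environment["UITestBaseURL"],
           let url = URL(string: override) {
            return url   // e.g. http://localhost:8080, served by the test's webserver
        }
        #endif
        return URL(string: "https://your-real-auth-endpoint.example.com")!
    }
}

// UI test code: point the app at the local webserver before launching.
let app = XCUIApplication()
app.launchEnvironment["UITestBaseURL"] = "http://localhost:8080"
app.launch()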
I am new to iOS development. I want to write unit tests for an app which uses an SDK where the authenticate method is of the form:
- (void)authenticate:(UIViewController *)presentingViewController clearCookies:(BOOL)clearCookies completionBlock:(AuthCompletionBlock)completionBlock
To authenticate the user, an embedded web browser needs to open in the UIViewController passed in the method parameters. Can unit tests access the app UI?
How do I make sure that the browser opens, the user authenticates through the app UI, and then the unit tests execute?
It depends on whether you want unit tests or UI tests.
Unit tests: Fast. Consistent. They confirm step-by-step. But they don't go end-to-end.
UI tests: Slow. Fragile. They confirm end-to-end.
For unit testing, you wouldn't write tests that actually bring up a browser or interact with it in any way. Instead, you'd write tests of the code that decides to bring up the browser, and tests that simulate different inputs returned from the browser.
This works as long as you're confident in the back-and-forth communication. If so, there's no need to test Apple's code. If not, then you can write a spike solution to understand the communication.
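As a sketch of that idea, you could hide the SDK call behind a small protocol and feed the unit tests a fake that simulates the browser's outcome. The Authenticator protocol, FakeAuthenticator, and the LoginCoordinator in the comment are names made up for illustration; only the production implementation would call the SDK's real authenticate method:
import UIKit

// What the rest of the app sees; the production implementation would call the
// SDK's authenticate(_:clearCookies:completionBlock:) and present the browser.
protocol Authenticator {
    func authenticate(from viewController: UIViewController,
                      completion: @escaping (Result<String, Error>) -> Void)
}

// Fake used by unit tests: no browser, just a canned outcome.
final class FakeAuthenticator: Authenticator {
    var result: Result<String, Error> = .success("fake-session-token")
    func authenticate(from viewController: UIViewController,
                      completion: @escaping (Result<String, Error>) -> Void) {
        completion(result)
    }
}

// A unit test then verifies how your own code reacts to each simulated outcome,
// without ever opening a browser, e.g.:
// let coordinator = LoginCoordinator(authenticator: FakeAuthenticator())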
I'd like to inject or fake a login into the TwitterKit iOS SDK. I'm trying to write some unit tests for my library (which is kind of a wrapper around the most important Twitter APIs).
Unfortunately Fabric requires the user to have a system account set up or to present the OAuth screen. Is there any way to fake that login to make my test API calls succeed?
Any advice would be really appreciated.
It depends on what kind of tests you are trying to write and whether you are actually trying to make network requests during your tests.
If you are writing integration tests that have to work through the login flow the easiest way to accomplish this is to just add an account to the ACAccountStore by creating your own ACAccountCredential. One downside to this approach is that it will make a network call behind the scenes which is managed by the system, so there is no way to intercept/mock it which will likely make your tests flaky. Once you have that account added to the store it will get picked up whenever you try to go through the login flow.
If you are making network requests using the -[TWTRAPIClient sendTwitterRequest:completion:] method and you actually need to be logged in during these calls because you are trying to hit the Twitter API you can add a session to the TWTRSessionStore directly by calling [TWTRSessionStore saveSessionWithAuthToken:authTokenSecret:completion:]. Again, this method will make a network call that is hard to mock/intercept but that shouldn't really matter if you are making actual network requests during your tests.
If you are writing unit tests that do not need to make network requests, but you need there to be a session in the TWTRSessionStore you can directly save a session in the TWTRSessionStore. You can call -[TWTRSessionStore saveSession:withVerification:completion:] with a session that you create and without verification. Note, this method is private and is subject to change without notice. With that said, I don't really see any reason why we would change it anytime soon so it should be safe for you to use.
If none of that works for you, let me know more specifically what you are trying to accomplish and I can suggest other options.
I used a function similar to this to create a sample Twitter test account inside unit or integration tests:
static func addTestAccount(completion: () -> ()) {
    // Build a credential from a pre-generated token/secret pair.
    let credential = ACAccountCredential(OAuthToken: TestAccountToken, tokenSecret: TestAccountSecret)
    let store = ACAccountStore()
    let type = store.accountTypeWithAccountTypeIdentifier(ACAccountTypeIdentifierTwitter)
    // Create the Twitter account and add it to the system account store.
    let newAccount = ACAccount(accountType: type)
    newAccount.credential = credential
    newAccount.username = TestAccountUsername
    store.saveAccount(newAccount, withCompletionHandler: { (success, error) -> Void in
        print(success ? "Saved new account" : "Failed to save account \(error)")
        completion()
    })
}
Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. A quick example: for login, if the password check fails the UI should show an alert, however, if the login is successful the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has directly to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
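As a minimal sketch of that switch (the HTTPClient protocol shape, URLSessionHTTPClient, and makeHTTPClient() are assumptions for illustration; only the MOCK_REQUESTS key and the MockableHTTPClient name come from the answer above):
import Foundation

// Abstraction the app's networking layer depends on.
protocol HTTPClient {
    func send(_ request: URLRequest, completion: @escaping (Data?, Error?) -> Void)
}

// Real client used in production.
final class URLSessionHTTPClient: HTTPClient {
    func send(_ request: URLRequest, completion: @escaping (Data?, Error?) -> Void) {
        URLSession.shared.dataTask(with: request) { data, _, error in
            completion(data, error)
        }.resume()
    }
}

// Canned-response client used when the MOCK_REQUESTS flag is present.
final class MockableHTTPClient: HTTPClient {
    var cannedData: Data? = Data("{\"status\": \"ok\"}".utf8)
    func send(_ request: URLRequest, completion: @escaping (Data?, Error?) -> Void) {
        completion(cannedData, nil)
    }
}

// Composition root: pick the implementation based on the launch arguments.
func makeHTTPClient() -> HTTPClient {
    if ProcessInfo.processInfo.arguments.contains("MOCK_REQUESTS") {
        return MockableHTTPClient()
    }
    return URLSessionHTTPClient()
}

// In the UI test, before launching:
// let app = XCUIApplication()
// app.launchArguments.append("MOCK_REQUESTS")
// app.launch()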
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the Map Local tool you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits endpoint yoursite.com/login
in Charles, using the Map Local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt
mappedlocal.txt contains the following text
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint your response will come back with a 404 error.
You can also use another option in Charles called "Map Remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.
I have two applications, say A and B, talking to each other via an API. Now I am writing Cucumber tests for A, and I have two options:
1) Just test that the API request is sent to B and stub the response from B
2) Set up test data on B from A (since I am testing A), send real requests to B, and record the request/response with VCR
I prefer option #1, but my coworker says it needs at least one real request to make sure the system (including A and B) is working.
My concern is:
How do I prepare test data on B from A's tests?
It's fragile to mix them together; anything changed on B may cause failures on A
Any comments?
For the majority of your tests, stub the request/response; that way the tests will pass when offline, or whatnot.
For one test, do a simple test that the external service is behaving as your stubs and mocks say it should.
E.g. doing a GET request still returns JSON with the attributes you expect, to ensure your mocks are valid.
For the most part, "Up time" for an external service shouldn't be monitored by your test suite. Just that it behaves how you expect it to.
For the uptime concern you should look at the sysadmin side with Nagios, Pingdom, Pagerduty or what not.
You are writing Cucumber tests, which means these are integration tests.
For integration tests, you'd better not mock anything; they are the last safety net that keeps your application safe.
So you'd better send at least one real request to make sure your request is correct, and what's more, you can repeat this real request at any time.
The problems with option 1:
You cannot tell when B changes its API implementation
You cannot make sure A sends the correct parameters to B
It's hard to mock complex requests
So I suggest creating a sandbox environment for B and making real requests.