I have an iOS app with Cognito authentication implemented very similarly to CognitoYourUserPoolsSample. The most important fragments are in SignInViewController.swift:
When the user taps Sign In, the result is set on an async completion source:
var passwordAuthenticationCompletion: AWSTaskCompletionSource<AWSCognitoIdentityPasswordAuthenticationDetails>?
...
@IBAction func signInPressed<...> {
...
let authDetails = AWSCognitoIdentityPasswordAuthenticationDetails(username: self.username.text!, password: self.password.text! )
self.passwordAuthenticationCompletion?.set(result: authDetails)
...
Later we get either success or error response:
extension SignInViewController: AWSCognitoIdentityPasswordAuthentication {
public func getDetails<...> {
DispatchQueue.main.async {
// do something in case of success
...
public func didCompleteStepWithError<...> {
DispatchQueue.main.async {
// do something in case of failure
...
I also have a UI test that fills in the username and password, taps Sign In, and validates the response:
class MyAppUITests: XCTestCase {
...
func testLogin() {
let usernameField = <...>
usernameField.tap()
usernameField.typeText("user@domain.com")
...
// same for password field
// then click Sign In
<...>.buttons["Sign In"].tap()
Currently this test runs against the actual AWS infrastructure, which is not ideal for many reasons. What I want instead is to simulate various responses from AWS.
How can I do that?
I think the best option would be mocking or stubbing the task queue, but I'm not sure how to approach that. Any direction would be much appreciated. If you've handled a similar task in an alternative way, I'd like to hear your ideas too, thanks.
Okay, I am not that familiar with the AWS iOS SDK and how exactly it implements the auth flow, so take the following with a grain of salt. It's less a full answer and more a general strategy, I hope. I implemented a similar approach in my current project, not just for the login but for all remote connections I make.
There are three things you need to do:
Run a small local webserver inside your UI test target. I use Embassy and Ambassador for that in my current project. Configure it to return whatever response Cognito (or another endpoint) usually gives. I simply curled a request manually and saved the response somewhere, but in my case I received plain data (and not, for example, a complete login page to show in a webview...). My guess is that Cognito actually shows a login (web)view and on successful login uses a deep link to "go back" to your app, which eventually calls your AWSCognitoIdentityPasswordAuthentication methods (success or error). You could have the test target, i.e. the webserver call the deep link directly if you know what it looks like (which should be possible to find out?).
Add some mechanism to switch the Cognito endpoint during the test. This unfortunately requires adding production code, but if done right it shouldn't be too difficult. I did it with a launch environment variable that I set during the test (see below). Unless your webserver supports HTTPS (Embassy does not out of the box), this also requires configuring App Transport Security somehow. The hardest part is surely figuring out where in the SDK that endpoint is constructed and how to change it. A quick look at the documentation leads me to believe that webDomain is where it's saved, but I don't see how it's set up. That property is even read-only, which complicates things. I assume, though, that you can change it in some configuration in your project? Otherwise it reeks of a case for method swizzling... Sorry I can't offer more sound direction here.
During your tests, ensure the real endpoints accessed during the relevant app flows are switched to http://localhost/.... I did so with XCUIApplication().launchEnvironment["somekey"] = "TESTINGKEY", which matched the production-code preparation in the second step. In my case I could simply load different endpoints (which had the localhost domain and otherwise the same paths as the original domains). Configure your webserver's responses according to the test case (successful login, invalid credentials etc.).
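The endpoint-switching part of the steps above can be sketched as follows. This is an assumption-laden sketch, not SDK configuration: `MOCK_SERVER_URL` is an invented environment key and the default URL is a placeholder for wherever your production code keeps its real endpoint.

```swift
import Foundation

// The app picks its auth base URL from the process environment,
// which the UI test target sets before launch.
// "MOCK_SERVER_URL" is an invented key; the default below is a
// placeholder, not a real Cognito endpoint.
func authBaseURL(environment: [String: String] = ProcessInfo.processInfo.environment) -> URL {
    if let override = environment["MOCK_SERVER_URL"],
       let url = URL(string: override) {
        return url
    }
    return URL(string: "https://auth.example.com")!
}

// In the UI test target (sketch):
// let app = XCUIApplication()
// app.launchEnvironment["MOCK_SERVER_URL"] = "http://localhost:8080"
// app.launch()
```

The point of routing everything through one function like this is that the test-only branch lives in a single place, which keeps the production-code footprint small.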
I admit this was/is a lot of work, but for me it was worth it since I could easily run the entire app flow (which involved a lot of outgoing requests) without any network access. I had to implement our auth system on my own anyway, which gave me a lot of control over what URLs were used and where, making it easy to have a single place to stub them out depending on the launch environment variable. The ugliest part in my case was actually enabling ATS exceptions in my tests only, for which I had to use a run script for various reasons.
Related
I have an app with a login screen and the screens that appear after login (the authorized part).
What is the best approach to testing the screens in the authorized part?
I have several ideas:
1) Somehow I remove all the data from the keychain before each test, then go through the entire flow each time to get to the first screen after login. After sending the login request to the backend, I wait for the main screen using
let nextGame = self.app.staticTexts["Main Screen Text"]
let exists = NSPredicate(format: "exists == true")
expectation(for: exists, evaluatedWith: nextGame, handler: nil)
waitForExpectations(timeout: 5, handler: nil)
2) I pass some arguments here
app = XCUIApplication(bundleIdentifier: ...)
app.launchEnvironment = ["notEmptyArguments": "value"]
app.launch()
So I can pass a fake token that our backend will accept; my app will then know to route me to the Main Screen, and all the requests will succeed because my Network Service has this fake token.
But I feel that it's not a very safe way.
Do you have any ideas about the best approach, or can you advise a better one?
The second idea you mentioned is a good way to skip the login screen in tests. Furthermore, implementing token passing will be helpful to the developer team as well. Those launch arguments can be stored in the run scheme's settings.
Also, if you implement deep linking in the same manner, it will bring even more speed enhancements for both the QA and developer teams.
Of course, these "shortcuts" should only be accessible when running a debug configuration (using #if DEBUG...).
In my opinion, your login service, or whatever service your app needs to perform or show some use case, should be mocked. That means that in your automated unit/UI testing environment your app talks to mocked service implementations: the login or authorization service response is mocked to be either success or failure, so you can test both.
To achieve that, your services should all be represented as interfaces/protocols, with the implementation details living in the production, development, or automated-testing environment.
I would never involve any networking in automated testing. Create a mock implementation of your authorization service, for example, that in the automated test environment returns either success or failure depending on the test you are running (you could do this setup in the setUp() method).
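A minimal sketch of the protocol-based approach described above; all names here are illustrative, not part of any SDK:

```swift
import Foundation

// The app depends on this protocol, never on a concrete service.
protocol AuthService {
    func signIn(username: String, password: String,
                completion: (Result<String, Error>) -> Void)
}

struct FakeAuthError: Error {}

// Mock used in the automated-test environment: returns a canned
// outcome so both the success and failure paths can be exercised.
struct MockAuthService: AuthService {
    let succeeds: Bool
    func signIn(username: String, password: String,
                completion: (Result<String, Error>) -> Void) {
        if succeeds {
            completion(.success("fake-session-token"))
        } else {
            completion(.failure(FakeAuthError()))
        }
    }
}
```

In a test you would inject `MockAuthService(succeeds: false)` to drive the failure UI, and the production target would register its real implementation of `AuthService` instead.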
The most authentic test suite would sign in at the beginning of each test (if needed) and would sign out if appropriate during teardown. This keeps each test self-contained and allows each test to use a different set of credentials without needing to check if it's already signed in/needs to change to a different user account, because tests will always sign out at the end.
This is not a fool-proof method as it may not always be possible for teardown code to execute correctly if a test has failed (since the app may not be in the state that teardown expects, depending on your implementation) but if you are looking for end-to-end tests, using only the codepaths used by production users, this is one way you could do it.
Introducing mocking/stubbing can make your tests more independent and reliable - it's up to you to choose how much you want to mirror the production user experience in your tests.
I'd like to inject or fake a login into the TwitterKit iOS SDK. I'm trying to write some unit tests for my library (which is kind of a wrapper around the most important Twitter APIs).
Unfortunately Fabric requires the user to have a system account set up or to present the OAuth screen. Is there any way to fake that login to make my test API calls succeed?
Any advice would be really appreciated.
It depends on what kind of tests you are trying to write and whether you are actually trying to make network requests during your tests.
If you are writing integration tests that have to work through the login flow the easiest way to accomplish this is to just add an account to the ACAccountStore by creating your own ACAccountCredential. One downside to this approach is that it will make a network call behind the scenes which is managed by the system, so there is no way to intercept/mock it which will likely make your tests flaky. Once you have that account added to the store it will get picked up whenever you try to go through the login flow.
If you are making network requests using the -[TWTRAPIClient sendTwitterRequest:completion:] method and you actually need to be logged in during these calls because you are trying to hit the Twitter API you can add a session to the TWTRSessionStore directly by calling [TWTRSessionStore saveSessionWithAuthToken:authTokenSecret:completion:]. Again, this method will make a network call that is hard to mock/intercept but that shouldn't really matter if you are making actual network requests during your tests.
If you are writing unit tests that do not need to make network requests, but you need there to be a session in the TWTRSessionStore you can directly save a session in the TWTRSessionStore. You can call -[TWTRSessionStore saveSession:withVerification:completion:] with a session that you create and without verification. Note, this method is private and is subject to change without notice. With that said, I don't really see any reason why we would change it anytime soon so it should be safe for you to use.
If none of that works for you, let me know more specifically what you are trying to accomplish and I can suggest other options.
I used a function similar to this to create a sample Twitter test account inside unit or integration tests:
static func addTestAccount(completion: () -> ()) {
    let credential = ACAccountCredential(OAuthToken: TestAccountToken, tokenSecret: TestAccountSecret)
    let store = ACAccountStore()
    let type = store.accountTypeWithAccountTypeIdentifier(ACAccountTypeIdentifierTwitter)
    let newAccount = ACAccount(accountType: type)
    newAccount.credential = credential
    newAccount.username = TestAccountUsername
    store.saveAccount(newAccount, withCompletionHandler: { (success, error) -> Void in
        print(success ? "Saved new account" : "Failed to save account \(error)")
        completion()
    })
}
Almost all of my GraphQL objects require the user to be authenticated to access them. If the user is not logged in, or their credentials are invalid, the server returns an error with a flag requireLogin set to true.
How can I intercept errors wherever they occur in Relay, capture this specific error, and then use it to update my state in redux (which will then show a message and a login box)?
The ideal place seems to be the NetworkLayer, but before I implement my own custom NetworkLayer is there a better existing solution (some sort of Relay-wide onError handler for example)?
You're likely looking for the renderFailure prop on your Relay Root Container. This gives you a place to handle errors that occur while fetching your data. For errors relating directly to Relay mutations, you can provide success and error handlers to Relay.Store.commitUpdate. I think those two should be capable of handling most scenarios.
You did mention using Redux along with Relay. I've not done any research into OSS projects combining these two tools, but Relay itself handles a lot of what Redux also handles and more. While Redux is great, I do think Relay is a more custom fit for GraphQL itself, and React, and I've not felt the need to find a spot for Redux in this stack, yet. It might complicate things.
Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. A quick example for login: if the password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
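Using the MOCK_REQUESTS key and the MockableHTTPClient/HTTPClient names from the paragraph above, the switch could look roughly like this sketch (the protocol and both implementations are illustrative, not an existing library):

```swift
import Foundation

// Abstraction the app's networking layer depends on.
protocol HTTPClientType {
    func get(_ path: String, completion: (Data?) -> Void)
}

// Real client; actual networking (e.g. URLSession) would live here.
struct RealHTTPClient: HTTPClientType {
    func get(_ path: String, completion: (Data?) -> Void) {
        completion(nil) // placeholder for a real request
    }
}

// Mock client returning a canned response, used only under test.
struct MockHTTPClient: HTTPClientType {
    let cannedResponse: Data
    func get(_ path: String, completion: (Data?) -> Void) {
        completion(cannedResponse)
    }
}

// Pick the client based on the launch arguments the UI test sets.
func makeHTTPClient(arguments: [String] = ProcessInfo.processInfo.arguments) -> HTTPClientType {
    if arguments.contains("MOCK_REQUESTS") {
        return MockHTTPClient(cannedResponse: Data("{\"ok\":true}".utf8))
    }
    return RealHTTPClient()
}
```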
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits endpoint yoursite.com/login
in Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt
mappedlocal.txt contains the following text
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint your response will come back with a 404 error.
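Assuming Charles serves the file's raw bytes as the response, a slightly fuller mapped file could also include headers and a body (the JSON shown is invented for illustration):

```
HTTP/1.1 401 Unauthorized
Content-Type: application/json

{"error": "invalid_credentials"}
```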
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.
I'm aware of Chris Fulstow's project log4net.signalr; it is a great idea if you want a non-production log, since it logs all messages from all requests. I would like something that discriminates log messages by the request that originated them and sends them back to the proper browser.
Here what I've done in the appender:
using System.Web;
using log4net.Appender;
using Microsoft.AspNet.SignalR;

public class SignalRHubAppender : AppenderSkeleton
{
    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        if (HttpContext.Current != null)
        {
            var cookie = HttpContext.Current.Request.Cookies["log-id"];
            if (null != cookie)
            {
                var formattedEvent = RenderLoggingEvent(loggingEvent);
                var context = GlobalHost.ConnectionManager.GetHubContext<Log4NetHub>();
                context.Clients[cookie.Value].onLog(new { Message = formattedEvent, Event = loggingEvent });
            }
        }
    }
}
I'm trying to attach the connection id to a cookie, but this does not work with several pages open on the same machine, because the cookie is overwritten.
here is the code I use on the client to attach the event:
//start hubs
$.connection.hub.start()
    .done(function () {
        console.log("hub subsystem running...");
        console.log("hub connection id=" + $.connection.hub.id);
        $.cookie("log-id", $.connection.hub.id);
        log4netHub.listen();
    });
As a result, only the last page connected shows the log messages. I would like to know whether there are strategies for obtaining the connection id of the browser that originated the current request, if any.
Also, I'm interested to know if there is a better design for achieving per-browser logging.
EDIT
I could make a convention-based cookie name (like log-id-someguid), but I wonder if there is something smarter.
BOUNTY
I decided to start a bounty on that question, and I would additionally ask about the architecture, in order to see if my strategy makes sense or not.
My doubt is this: I'm using the hub in a single "direction", from server to client, and I use it to log activities not originating from calls to the hub but from other requests (potentially requests raised on other hubs). Is that a correct approach, given the goal of a browser-visible log4net appender?
The idea about how to correctly target the right browser instance/tab, even when multiple tabs are open on the same SPA, is to differentiate them through the Url. One possible way to implement that is to redirect them at the first access from http://foo.com to http://foo.com/hhd83hd8hd8dh3, randomly generated each time. That url rewriting could be done in other ways too, but it's just a way to illustrate the problem. This way the appender will be able to inspect the originating Url, and from the Url through some mapping you keep server side you can identify the right SignalR ConnectionId. The implementation details may vary, but the basic idea is this one. Tracking some more info available in the HttpContext since the first connection you could also put in place additional strategies in order to prevent any hijacking.
About your architecture, I can tell you that this is exactly the way I used it in ElmahR. I have messages originating from outside the notification hub (errors posted from other web apps), and I do a broadcast to all clients connected to that hub (and subscribing certain groups): it works fine.
I'm not an authoritative source, but I also guess that such an architecture is ok, even with multiple hubs, because hubs at the end of the day are just an abstraction over a (one) persistent connection which allows you to group messaging by contexts. Behind the scenes (I'm simplifying) you have just a persistent connection with messages going back and forth, so whatever hub structure you define on top of it (which is there just to help you organizing things) you still insist on that connection, so you cannot do any harm.
SignalR is good at doing 2 things: massive broadcast (Clients), and one-to-one communication (Caller). As long as you do not try to do weird things like keeping server-side references to specific callers, you should be ok, whatever number of Hubs, and interactions among them, you have.
These are my conclusions, coming from the field. Maybe you can tweet @dfowler about this question and see if he has (much) more authoritative guidelines.