Currently, I am using AWS Cognito for user management and MFA. For my E2E tests (run in-band), I have a user (test@email.com) in Cognito with a Twilio phone number attached. For every E2E test, the test user needs to sign in using the standard MFA flow:
User inputs email + password
Cognito verifies + sends MFA to associated phone number
User enters MFA code in application
Cognito verifies it and logs in
This is fine when a single E2E test suite is run, since there are no race conditions with other E2E tests. However, if I were to scale this approach (multiple test suites run in parallel), different test suites would receive login PINs that can never be validated, since Cognito invalidates previously sent MFA codes.
Having N unique phone numbers from which each test suite picks one at random will also not work: if I have N+1 test suites running simultaneously, the race condition still exists.
Is there a more sound approach to scaling E2E tests using MFA?
I am a bit late to this question, but let's give it a try anyway for posterity.
I would personally “batch” your E2E tests.
For instance, let's say you'd like to run 120 E2E tests in parallel. In that case I'd run 12 sequential batches of 10 parallel tests, using 10 different users, each with its own phone number attached.
If you need a unique user for each E2E test (which I presume you don't at the moment), I would rotate the phone numbers at the end of each batch (through an admin-update-user-attributes call), as sketched below.
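A rough sketch of that rotation step, assuming boto3; the pool ID, username, and phone number are purely illustrative:

```python
import boto3

cognito = boto3.client("cognito-idp")

def rotate_phone_number(user_pool_id, username, new_phone):
    """Point a test user at a fresh phone number between batches."""
    cognito.admin_update_user_attributes(
        UserPoolId=user_pool_id,
        Username=username,
        UserAttributes=[
            {"Name": "phone_number", "Value": new_phone},
            # Mark the number verified so Cognito keeps using it for SMS MFA.
            {"Name": "phone_number_verified", "Value": "true"},
        ],
    )

rotate_phone_number("us-east-1_EXAMPLE", "test@email.com", "+15555550100")
```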
In our case we went with a more specialized service, GetMyMFA, which lets us provision and deprovision phone numbers quickly and easily for MFA testing. I'm not sure whether that kind of service would be helpful for you.
Let me know what you think :)
We have developed a combined keyword- and data-driven framework. The framework has a main class that looks for every test case marked "Yes" and executes the corresponding keywords. The keywords are basic modules such as login, logout, etc. The test data is mapped to each test case and passed at run time. Under the Execution entity we have only one class (i.e. scripts/main).
We need to perform parallel distributed testing across multiple machines. We created a job by following the documentation below:
https://support.smartbear.com/testcomplete/docs/working-with/integration/jenkins/pipeline.html
We were able to configure the machines successfully, but on execution the same test case is executed on both machines.
Sample scenario
TestcaseName | Execution | Keyword1 | Keyword2     | Keyword3
TC001        | Yes       | login    | Createorder  | Logout
TC002        | Yes       | login    | DeletedOrder | Logout
TC003        | No        | Login    | Logout       |
In the above scenario, TC001 is executed on both machines, followed by TC002.
I want to distribute the test cases across machines, so that for example TC001 is executed on machine 1 and TC002 on machine 2. Can someone help with this?
I have an app with a login screen and the screens that appear after login (the authorized part).
What is the best approach to testing the screens in the authorized part?
I have several ideas:
1) Somehow remove all the data from the keychain before each test, then go through the entire flow each time to get to the first screen after login. After sending the login request to the backend, I wait for the main screen using:
let nextGame = self.app.staticTexts["Main Screen Text"]
let exists = NSPredicate(format: "exists == true")
expectation(for: exists, evaluatedWith: nextGame, handler: nil)
waitForExpectations(timeout: 5, handler: nil)
2) I pass some launch parameters:
app = XCUIApplication(bundleIdentifier: …)
app.launchEnvironment = ["notEmptyArguments": "value"] // key-value pairs belong in launchEnvironment; launchArguments is a plain [String]
app.launch()
This way I can pass a fake token that our backend will accept, so my app knows it has to route me to the Main Screen, and all requests will succeed because my network service has this fake token.
But I feel that this is not a very safe way.
Do you have any ideas on what the best approach is? Maybe you can advise a better one?
The second idea you mentioned is a good way to skip the login screen in tests. Furthermore, implementing token passing will be helpful to the developer team as well. Those launch arguments can be stored in the scheme's run settings.
Also, if you implement deep linking in the same manner, it will bring even more speed improvements for both the QA and developer teams.
Of course, these "shortcuts" should only be accessible when running a debug configuration (using #if DEBUG…).
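A rough sketch of the idea; the FAKE_AUTH_TOKEN key and the AuthSession/router names are hypothetical:

```swift
// UI-test side: launch the app with a fake token the test backend accepts.
let app = XCUIApplication()
app.launchEnvironment = ["FAKE_AUTH_TOKEN": "e2e-fake-token"]
app.launch()
```

And on the app side:

```swift
// Early in app launch; compiled out of release builds entirely.
#if DEBUG
if let token = ProcessInfo.processInfo.environment["FAKE_AUTH_TOKEN"] {
    AuthSession.shared.token = token   // hypothetical token store
    router.route(to: .mainScreen)      // hypothetical router call
}
#endif
```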
In my opinion, your login service, or whatever service your app needs in order to perform or show certain use cases, should be mocked. That means that in your automated unit/UI testing environment, your app talks to mocked service implementations: the login or authorization service response is mocked to be either success or failure, so you can test both.
To achieve that, your services should all be represented as interfaces/protocols, and the implementation details should live in the production, development, or automated-testing environment as appropriate.
I would never involve any networking in automated testing. Create a mock implementation of, for example, your authorization service that in the automated test environment can be configured to return either success or failure, depending on the test you are running (you could do this setup in the setUp() method).
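A minimal sketch of that protocol split, with hypothetical names:

```swift
// The app depends only on this protocol, never on a concrete service.
protocol AuthorizationService {
    func logIn(email: String,
               password: String,
               completion: @escaping (Result<String, Error>) -> Void)
}

// Test double: each test decides whether login succeeds or fails,
// e.g. by configuring `result` in setUp().
final class MockAuthorizationService: AuthorizationService {
    var result: Result<String, Error> = .success("fake-token")

    func logIn(email: String,
               password: String,
               completion: @escaping (Result<String, Error>) -> Void) {
        completion(result)
    }
}
```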
The most authentic test suite would sign in at the beginning of each test (if needed) and sign out, where appropriate, during teardown. This keeps each test self-contained and lets each test use a different set of credentials, without having to check whether it's already signed in or needs to switch to a different user account, because tests always sign out at the end.
This is not a foolproof method, as teardown code may not always execute correctly when a test has failed (the app may not be in the state teardown expects, depending on your implementation). But if you are looking for end-to-end tests that use only the code paths exercised by production users, this is one way you could do it.
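A minimal sketch of that pattern; the sign-in/sign-out helpers are hypothetical:

```swift
import XCTest

final class AuthorizedFlowTests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app.launch()
        signIn(email: "qa+flow@example.com", password: "secret")  // per-test credentials
    }

    override func tearDown() {
        signOutIfPossible()  // best effort: a failed test may leave the app mid-flow
        super.tearDown()
    }

    // Hypothetical helpers that drive the real login/logout UI.
    private func signIn(email: String, password: String) { /* taps and typing */ }
    private func signOutIfPossible() { /* navigate to settings, tap sign out */ }
}
```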
Introducing mocking/stubbing can make your tests more independent and reliable - it's up to you to choose how much you want to mirror the production user experience in your tests.
Our CI system uses the Python module jenkinsapi to launch test jobs on Jenkins. But it's slow: launching a single job takes between 10 and 30 seconds. That really bogs the system down.
Our production Jenkins is tied into our corporate LDAP, so jenkinsapi requires a username/password. Without a doubt this contributes to the problem: I suspect that each time it launches a job it needs to log in to Jenkins. The issue is vastly reduced when I run the same setup against my local, unsecured instance of Jenkins.
Is there any way to work around this limitation? Can I speed up jenkinsapi? Or is there an alternative approach that works better with a secured Jenkins?
If the LDAP authentication really is the bottleneck, you may be able to get around it by using the user's API token instead of the password for API logins. It should be as simple as replacing the password with the API token (available on the user's configuration page in Jenkins) in your scripts.
You can try the lazy=True parameter, which defers loading data from Jenkins until it is actually needed:
from jenkinsapi.jenkins import Jenkins

server = Jenkins(
    JENKINS_HOST,
    username=JENKINS_USER,
    password=JENKINS_TOKEN,
    lazy=True,
)
I am trying to send email reminders to users who have not completed the sign-up process. The sign-up process has three different stages:
1. input for interested users (this will redirect them to a registration section)
2. registration section (this will redirect them to set-up profile)
3. set-up profile
If the user has not continued to the next stage of the process, I would like to send an email reminder:
1. after 18 hrs
2. after 1 day
3. after 4 days
I have heard about cron (the whenever gem) and Delayed Job but don't know which one to use. And most importantly, WHY should I choose one over the other?
Please provide an example if possible.
I would write a script with all the logic for the timing, which email to send, who to send it to, etc.
Then schedule a cron job every 24 hours to run the script. Don't try to use the cron jobs themselves to do the timing of how many days later to send each message.
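With the whenever gem, that could look roughly like this; the RegistrationReminder class and its method are hypothetical:

```ruby
# config/schedule.rb (whenever gem): fire the check once a day.
every 1.day, at: '4:30 am' do
  runner "RegistrationReminder.send_due_reminders"
end
```

The script then decides, per user, which reminder (18 hours, 1 day, 4 days) is due based on when they last advanced a stage, not on when cron happened to fire.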
Well, the reason you would choose one over the other should be based on what you're trying to do and how you are doing it. As a developer, I would create new branches and experiment with both gems to see which one works better for you and your app.
FYI, though: the whenever gem is not supported on Heroku, while Delayed Job, I believe, is. That might be your deciding factor.
I suggest you write a function that checks for unfinished registrations. Then, on your server, simply run a cron job at 18 hours, 1 day, and 4 days (one line of script each).
Each cron job calls the controller that triggers the function which sends the reminder emails.
You could also use Sidekiq as a background processor for sending the emails.
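A rough sketch of that, with hypothetical mailer and model methods:

```ruby
# Enqueued when the user enters a stage; checks again before sending,
# in case the user has moved on in the meantime.
class ReminderWorker
  include Sidekiq::Worker

  def perform(user_id, stage)
    user = User.find(user_id)
    ReminderMailer.stage_reminder(user, stage).deliver if user.stuck_at?(stage)
  end
end

# For example, right after the user starts registration:
ReminderWorker.perform_in(18.hours, user.id, 'registration')
```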
I have a Rails 3.1 app deployed to Heroku. This app makes heavy use of mailers. I'm looking for a way to run a sort of integration stress test: I would like to automate integration tests that cover everything from user action to email receipt (not simply delivery), and use these tests to stress-test the app. As Heroku runs everything in production mode, I can't run this server-side.
(I'm happy enough to script the actual user interaction, though I'm open to suggestions. What's really tripping me up is the actual email receipt. What would I use to monitor incoming emails? I'd rather not use a separate tool, and I'd prefer not to check that emails were received only after testing, since I would like my stress test to also measure the elapsed time between user interaction and email receipt.)
I don't think you can avoid using a separate tool if you actually want to check that the messages were received at the endpoint. I wrote a blog post on a number of options for receiving emails.
Since you're running things locally and don't necessarily need to be performant, it might actually be enough for your tool to connect via POP3 or IMAP and download the email to check that it was delivered.
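A rough sketch of that polling approach (the host, credentials, and subject are placeholders); returning the elapsed time also gives you the receipt latency you want to measure:

```python
import imaplib
import time

def wait_for_email(subject, timeout=60.0, poll_interval=2.0):
    """Poll a test inbox over IMAP; return seconds elapsed, or None on timeout."""
    start = time.time()
    while time.time() - start < timeout:
        imap = imaplib.IMAP4_SSL("imap.example.com")
        imap.login("test-inbox@example.com", "password")
        imap.select("INBOX")
        _, data = imap.search(None, '(SUBJECT "%s")' % subject)
        imap.logout()
        if data[0]:  # non-empty means at least one matching message arrived
            return time.time() - start
        time.sleep(poll_interval)
    return None
```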