We have developed a combined keyword- and data-driven framework. The framework has a main class that looks for every test case marked "Yes" and executes the corresponding keywords. The keywords are basic modules such as login, logout, etc. The test data is mapped to each test case and passed at run time. Under the Execution entity we have only one class (i.e. scripts/main).
Our need is to perform parallel distributed testing across multiple machines. We created a job by following the documentation below:
https://support.smartbear.com/testcomplete/docs/working-with/integration/jenkins/pipeline.html
We were able to configure the machines successfully, but on execution the same test case gets executed on both machines.
Sample scenario:

TestcaseName  Execution  Keyword1  Keyword2      Keyword3
TC001         Yes        Login     CreateOrder   Logout
TC002         Yes        Login     DeletedOrder  Logout
TC003         No         Login     Logout
In the above scenario, TC001 gets executed on both machines, and after that TC002.
I want to distribute the test cases across machines, so that for example TC001 is executed on machine 1 and TC002 on machine 2. Can someone help with this?
We do something called feature testing, like so -> https://blog.twitter.com/engineering/en_us/topics/insights/2017/the-testing-renaissance.html
TL;DR of that article: we send a request to the microservice (a REST POST with a body), mock GCP Storage, and mock the downstream API calls so the entire microservice can be refactored. Also, we can swap out our platforms/libs with no changes to our tests, which makes us extremely agile.
My first question is: can Dataflow (Apache Beam) receive a REST request to trigger a job? I see much of the API is around 'create job', but I don't see 'execute job' in the docs, while I do see that 'get status' returns the status of a job execution. I just don't see a way to trigger a job to:
read from my storage API (which is mockable and sits in front of GCP)
process the file, hopefully across many nodes
call the downstream APIs (which are also mockable)
Then, in my test, I simply want to simulate the HTTP call; when the file is read, return a real customer file; and when it's done, verify that all the correct requests were sent to the downstream APIs.
We are using Apache Beam in our feature tests, though I'm not sure it's the same version as Google's Dataflow :( as that would be the most ideal!!! Hmm, is there a reported Apache Beam version of Google's Dataflow we can get?
thanks,
Dean
Apache Beam's DirectRunner should be very close to Dataflow's environment, and it's what we recommend for this type of single-process pipeline test.
My advice would be the same: use the DirectRunner for your feature tests.
You can also use the Dataflow runner, but that sounds like it would be a full integration test. Depending on the data source / data sink, you may be able to pass it mocking utilities.
BigQueryIO is a good example. It has a withTestServices method that you can use to pass objects that mock the behavior of external services.
Currently, I am using AWS Cognito for user management + MFA. For my E2E tests (run in-band), I have a user (test#email.com) in Cognito with a Twilio phone number. For every E2E test, the test user needs to sign in using the standard MFA flow:
User inputs email + password
Cognito verifies + sends MFA to associated phone number
User enters MFA code in application
Cognito verifies it and logs in
This is fine when a single E2E test suite is run, since there will be no race conditions with other E2E tests. But if I were to scale this approach (multiple test suites run in parallel), different test suites would receive login PINs that will never be validated, since Cognito invalidates previously sent MFA codes.
Having N unique phone numbers from which a test suite picks one at random will also not work, because if I have N+1 test suites running simultaneously, the race condition still exists.
Is there a more sound approach to scaling E2E tests using MFA?
I am a bit late to this question, but let’s give it a try anyway for posterity.
I would personally “batch” your E2E tests.
For instance, let’s say you’d like to run 120 E2E tests in parallel. In that case I’d run 12 batches of 10 parallel tests, with 10 different users, each with a phone number attached.
In case you need a unique user for each E2E test (which I presume you don’t at the moment), I would rotate the phone numbers at the end of each test batch (through an admin-update-user-attributes call).
In our case we have gone with a more specialized solution, GetMyMFA, which allows us to provision and deprovision phone numbers quickly and easily for MFA testing. Not sure if this kind of service would be helpful for you.
Let me know what you think :)
I have an app with a login screen and the screens that appear after login (the authorized part).
What is the best approach to testing these screens from the authorized part?
I have several ideas:
1) Somehow I remove all the data from the keychain before each test, and then go through the entire flow each time to get to the first screen after login. When I need to send a login request to the backend, I wait for the main screen using:
// Wait (up to 5 seconds) for the post-login screen to appear.
let nextGame = self.app.staticTexts["Main Screen Text"]
let exists = NSPredicate(format: "exists == true")
expectation(for: exists, evaluatedWith: nextGame, handler: nil)
waitForExpectations(timeout: 5, handler: nil)
2) I pass some arguments here:
app = XCUIApplication() // or XCUIApplication(bundleIdentifier:) to target a specific bundle
app.launchEnvironment = ["notEmptyArguments": "value"] // key-value pairs go in launchEnvironment; launchArguments is a plain [String]
app.launch()
So I can pass a fake token that our backend will accept, so that my app knows it has to route me to the Main Screen, and all the requests will succeed because my Network Service has this fake token.
But I feel that it's not a very safe way.
Do you have any ideas about the best approach, or maybe you can advise a better one?
The second idea you mentioned is a good way to skip the login screen in tests. Furthermore, implementing token passing will be helpful to the developer team as well. Those launch arguments can be stored in the run scheme's settings.
Also, if you implement deep linking in the same manner, it will bring even more speed enhancements for both the QA and developer teams.
Surely, these "shortcuts" should only be accessible when running a debug configuration (using #if DEBUG...).
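As a minimal sketch of that debug-only shortcut, assuming a hypothetical UITEST_AUTH_TOKEN key and AuthSession type (neither comes from the question; they only illustrate the shape):

import Foundation
import XCTest

// Hypothetical stand-in for the app's real token storage.
final class AuthSession {
    static let shared = AuthSession()
    var token: String?
}

// App side, early in start-up. Guarded so it cannot ship in a Release build.
func applyUITestOverrides() {
    #if DEBUG
    if let token = ProcessInfo.processInfo.environment["UITEST_AUTH_TOKEN"] {
        AuthSession.shared.token = token   // skip the login flow entirely
    }
    #endif
}

// UI-test side: inject the fake token the test backend accepts.
func launchSignedIn() -> XCUIApplication {
    let app = XCUIApplication()
    app.launchEnvironment["UITEST_AUTH_TOKEN"] = "fake-token"
    app.launch()
    return app
}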
In my opinion, your login service, or whatever service your app needs to perform or show its use cases, should be mocked. That means that in your automated unit/UI testing environment your app talks to mocked service implementations; the login or authorization service response should be mocked to be either success or failure, so you can test both.
To achieve that, your services should all be represented as interfaces/protocols, and the implementation details should live in the production, development, or automated-testing environment as appropriate.
I would never involve any networking in automated testing. You should create a mock implementation of your authorization service, for example, which in the automated test environment can be set to return either success or failure depending on the test you are running (and this setup you can do in the setUp() method, maybe).
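A minimal sketch of that protocol-plus-mock shape (all names here are hypothetical, not from the answer above):

import Foundation

struct User { let id: String }

// The app depends only on this protocol, never on a concrete network client.
protocol AuthorizationService {
    func logIn(email: String, password: String,
               completion: @escaping (Result<User, Error>) -> Void)
}

// Test double: each test decides up front whether login succeeds or fails.
final class MockAuthorizationService: AuthorizationService {
    var result: Result<User, Error> = .success(User(id: "test-user"))
    func logIn(email: String, password: String,
               completion: @escaping (Result<User, Error>) -> Void) {
        completion(result)
    }
}

A test's setUp() would then inject MockAuthorizationService, setting result to the success or failure case that particular test needs.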
The most authentic test suite would sign in at the beginning of each test (if needed) and sign out, if appropriate, during teardown. This keeps each test self-contained and allows each test to use a different set of credentials, without needing to check whether it's already signed in or needs to change to a different user account, because tests will always sign out at the end.
This is not a fool-proof method, as teardown code may not always execute correctly when a test has failed (the app may not be in the state that teardown expects, depending on your implementation). But if you are looking for end-to-end tests that use only the code paths used by production users, this is one way you could do it.
Introducing mocking/stubbing can make your tests more independent and reliable - it's up to you to choose how much you want to mirror the production user experience in your tests.
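For illustration, a rough XCTest shape of that sign-in-per-test pattern (the helper bodies and element identifiers are hypothetical placeholders):

import XCTest

final class AuthenticatedFlowTests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app.launch()
        signIn(email: "user1@example.com", password: "secret")
    }

    override func tearDown() {
        signOutIfNeeded()   // may not complete if a failed test left the app elsewhere
        super.tearDown()
    }

    func testAuthorizedScreenLoads() {
        // exercise the authorized part of the app here
    }

    // Hypothetical helper that drives the real login UI.
    private func signIn(email: String, password: String) {
        let emailField = app.textFields["email"]
        emailField.tap()
        emailField.typeText(email)
        let passwordField = app.secureTextFields["password"]
        passwordField.tap()
        passwordField.typeText(password)
        app.buttons["Log In"].tap()
    }

    // Hypothetical helper that taps through the sign-out flow if a user is signed in.
    private func signOutIfNeeded() {
        if app.buttons["Sign Out"].exists {
            app.buttons["Sign Out"].tap()
        }
    }
}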
I have been trying to implement DI for Azure Functions where the function is triggered by Service Bus (topics/subscriptions in this case):
[Singleton]
[FunctionName("Alert")]
public static async Task Alert(
    [ServiceBusTrigger(Topic.Alert, Subscription.PowerBi, Connection = "servicebusconnectionstring")] Message message,
    [Inject] IPowerBiService powerBiService,
    [Inject] IQueueService queueService)
I have read about Azure Functions and DI on following sites:
https://mcguirev10.com/2018/04/03/service-locator-azure-functions-v2.html
https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/
https://github.com/introtocomputerscience/azure-function-autofac-dependency-injection
All the examples work fine using an HTTP trigger; I assume the IIS host is up and running and contains the services. But using a ServiceBus trigger, I can't get it to work. I have implemented the solutions mentioned above, and a few more, but they all hit the same issue: the code works, but the services are re-created for every message/trigger.
Has anyone out there managed to do this, or isn't it possible?
NOTE (update):
I got some more information that I haven’t had time to verify yet: I have been using a Consumption plan for my Azure Functions, and it may be that you need an App Service plan instead (I chose Consumption since that price model is more convenient). Does anyone know more about this?
I will look into this later this week.
I just want to confirm that it works fine now using an App Service plan instead of a Consumption plan. The difference is the "cold start" instead of a "warm" host.
I guess all the different DI implementations should work fine.
I have been using the following: https://github.com/MV10/Azure.FunctionsV2.Service.Locator
Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. Quick example: for login, if the password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has directly to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
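As a sketch of that switch in the app's composition root (MOCK_REQUESTS is from the answer above; the client types and canned response are hypothetical, and modern Swift syntax is used):

import Foundation

protocol HTTPClientProtocol {
    func get(_ url: URL, completion: @escaping (Data?, Error?) -> Void)
}

// Real networking client (body omitted).
final class HTTPClient: HTTPClientProtocol {
    func get(_ url: URL, completion: @escaping (Data?, Error?) -> Void) {
        // perform the actual request here
    }
}

// Canned-response client used only when the UI test asks for it.
final class MockableHTTPClient: HTTPClientProtocol {
    func get(_ url: URL, completion: @escaping (Data?, Error?) -> Void) {
        completion(Data("{\"status\":\"ok\"}".utf8), nil)
    }
}

// App side: pick the client based on the launch argument.
let client: HTTPClientProtocol =
    ProcessInfo.processInfo.arguments.contains("MOCK_REQUESTS")
        ? MockableHTTPClient()
        : HTTPClient()

// Test side:
//   let app = XCUIApplication()
//   app.launchArguments.append("MOCK_REQUESTS")
//   app.launch()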
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool, you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits the endpoint yoursite.com/login
In Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt
mappedlocal.txt contains the following text:
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint, your response will come back with a 404 error.
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.