How to test the GHA (GitHub Actions) workflow itself - devops

I would like to test the GitHub Actions (GHA) workflow itself on a continuous cycle.
The workflow contains logic, and I want to test that logic to make sure it is correct.
For example, I want to verify in a test that a value is set in a variable by an "if" statement.
Please let me know if you have any best practices.
Normally I would use mocks or something similar, but I have not been able to find anything like that in my searches.

Related

BDD - should each scenario be self contained?

I have a BDD feature containing multiple scenarios. Should each scenario be completely self contained and runnable individually?
Should be, yes. It is generally good practice in all forms of TDD (BDD included) to make sure that each "test" can run independently and isn't coupled to or dependent on another test having run first. This will help you avoid creating a brittle test suite (i.e. one that is prone to breaking).
That's not to say that you cannot chain scenarios together for readability. For a very cheap/quick example:
Feature: Users can register and log in

  Scenario: Should be able to register
    Given I am not registered
    When I complete the registration form
    Then I will be registered

  Scenario: Should be able to log in
    Given I am registered
    When I correctly sign in with my credentials
    Then I will be logged in

  Scenario: Should be able to log out
    Given I am logged in
    When I sign out
    Then I will be logged out
Each scenario indicates a test that can be automated, and each should be designed behind the scenes to run independently. But to a reader of the feature (say, a business stakeholder), the process reads as a complete flow and they can understand the entire picture more easily.
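To make the independence concrete, here is a hedged sketch of step definitions (assuming Cucumber-Ruby with Capybara and a hypothetical User model; the paths and form labels are illustrative). Each Given builds its own state directly rather than relying on an earlier scenario having run:

# features/step_definitions/authentication_steps.rb
# Illustrative only; model, labels and paths are placeholders.

Given('I am not registered') do
  # Nothing to set up: the test database starts empty for each scenario.
end

Given('I am registered') do
  # Create the user directly instead of re-running the registration scenario,
  # so this scenario does not depend on any other scenario having run first.
  @user = User.create!(email: 'user@example.com', password: 'secret123')
end

Given('I am logged in') do
  @user = User.create!(email: 'user@example.com', password: 'secret123')
  visit '/login'
  fill_in 'Email', with: @user.email
  fill_in 'Password', with: 'secret123'
  click_button 'Log in'
end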

How should I test my Rails app?

In the past couple of days I've been making slow progress adding tests to an existing Rails app I've been working on for a little bit.
I'm just trying to figure out how much and what kind of tests (unit, functional, integration) will be enough to save me time debugging and fixing deploys that break existing functionality.
I'm the only one working on this application. It's basically an inventory management database for a small company (~20 employees). I didn't add testing from the beginning because I didn't really see the point, but I've had a couple of deploys screw up existing functionality recently, so I thought it might be a good thing to add.
Do I need to test my models and controllers individually AND perform integration testing? There seem to be developers who believe that you should just integration test and backtrack to figure out what's wrong if you get an error from there.
So far I'm using RSpec + Factory Girl + Shoulda. That made it pretty easy to set up tests for the models.
I'm starting on the controllers now and am more than a little bit lost. I know how to test an individual controller but I don't know if I should just be testing application flow using integration tests as that would test the controllers at the same time.
I use integration tests to cover the successful paths and general failure paths; for edge cases or more in-depth scenarios I write model/view/controller tests.
I usually write the tests before writing the functionality in the application, but if you want to add tests afterwards I recommend using something like SimpleCov to see which areas of your application need more testing and slowly build up the coverage.
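Setting that up is a small change; a minimal sketch, assuming RSpec (SimpleCov has to be required before any application code is loaded):

# spec/spec_helper.rb - at the very top of the file, before anything else is required
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/'  # keep the specs themselves out of the coverage report
end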
Writing tests after the app is already written is going to be tedious. At a minimum you should have integration tests (which will exercise most controller actions and views) and separate model tests, so that your model calls work as you think they should.
For integration testing I find Capybara easy to use.
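For instance, a happy-path feature spec might look roughly like this (a sketch only; the Item model, route helper and form labels are made up for illustration):

# spec/features/create_item_spec.rb
require 'rails_helper'

RSpec.feature 'Creating an inventory item' do
  scenario 'with valid details' do
    visit new_item_path                      # hypothetical route helper
    fill_in 'Name', with: 'Widget'
    fill_in 'Quantity', with: '10'
    click_button 'Create Item'

    expect(page).to have_content 'Item was successfully created'
    expect(Item.count).to eq 1
  end
end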
As a rule of thumb, most of your tests should be unit tests, some functional, and a few integration.
Rails adheres to this wholeheartedly and you should do the same.
At a minimum you should have:
A unit test for each of your models: add a test case for each of your validations in each of them, and finally one to make sure the model is saved. If you have things like dependent: :destroy or touch: true on related models (has_many, belongs_to), add test cases for them as well (a hedged sketch follows below).
A functional test for each of your controllers, with test cases for each action. Some actions can have multiple responses (for example 422 errors), so add another case to test the error response. If you have an authorization filter, test it as well.
Then you can have an integration test for a typical new-user flow, like Sign Up -> Create Post -> View it -> Log out. Another one is to make sure that unauthorized users cannot access your resources.
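To make the model part concrete, here is a hedged sketch (assuming RSpec with the shoulda-matchers gem; the Post model, its user association and comments are hypothetical):

# spec/models/post_spec.rb
require 'rails_helper'

RSpec.describe Post do
  # one-liner validation and association cases via shoulda-matchers
  it { should validate_presence_of(:title) }
  it { should belong_to(:user) }
  it { should have_many(:comments).dependent(:destroy) }

  it 'is valid with valid attributes' do
    post = Post.new(title: 'Hello', user: User.new(email: 'a@example.com'))
    expect(post).to be_valid
  end
end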
And from now on, do not commit ANY code without the above-mentioned tests.
The best time to plant a tree was 10 years ago; the second best time is now. So start testing!

MTM Test Case run does not update in all suites

I have assigned one test case to two different user stories. I know it's not the cleanest method, but it helps in the case for which I created it.
In the test plan I added requirements and hence their respective test cases. Now this single test case is present in two different test suites since it tests two different user stories.
When I run this test case I expect it to either fail or succeed in both suites, but it seems that there are two totally different instances of that test case in the plan and I can have one passing and the other one failing.
Is this behavior intended, or is it unexpected and therefore a bug in MTM?
When you create new test plans in MTM you can specify the configurations for the plan and which one of them will be the default. So, when you add new requirements they automatically take the default configuration. However, you can always change it by assigning another available configuration to any requirement you want. My point is that a test case that belongs to two different User Stories carries an extra piece of information when it is added to the test plan: the configuration under which it will be tested.
So, if your test case is assigned to user stories A and B, and these requirements have been added to the same Test Plan but with different configurations, it is very possible for one test case instance to fail and the other to pass.

QTP Keyword driven basic example

I have been looking for a very basic keyword-driven test. I do not understand well how you can separate the test from the application so that it is reusable. In my understanding, QTP commands like "navigate" are keywords. But how do I create my own independent ones? I would be very grateful for an example of how to do that; the examples I found were either too complex or purely theoretical.
Thank you very much
In QTP jargon a keyword is a combination of a test object and method (see the available keywords pane).
Keyword driven testing is used to mean creating a test without recording. You can create test objects in one of the following ways and then construct a test from these test objects.
Descriptive programming
Manually create test objects in the object repository (using the create new command)
Using navigate and learn
Record and discard the script
Import from XML
Example of a test.
Go to a web store. Search for a product. Login. Buy. Logout.
(The test is already broken down into keywords)
The simplest approach.
Simply write a list of operations for the corresponding objects. E.g. a simplified variant:
Browser.Open(WebStoreURL)
Browser.Sync
Browser.Page.WebEdit(SearchBoxName).Type "something I want"
' then login, buy, logout using the same approach
' add verification points where needed
In the end you have quite a long script.
If you need to write another script that tests a similar case, you need to repeat most of the actions above.
Another approach.
To avoid duplication, you can, for example, create such functions/actions: Login, Logout, Search(product_name), etc. And then create scripts using these actions/functions, i.e. keywords:
Login
Search "something I want"
Buy
Logout
It is an example of a keyword driven approach. It works at a higher level of abstraction than QTP commands.
The approach is not limited to using QTP functions. Keywords can be implemented as words in an Excel file.
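QTP itself would implement these keywords as VBScript functions or reusable actions, but the dispatch idea is language-neutral. As a rough sketch, purely to illustrate reading keywords from a file and mapping them to reusable steps (shown here in Ruby; the file name and step bodies are placeholders):

# keyword_runner.rb - illustrative only, not QTP code
require 'csv'

# Each keyword maps to a small reusable step; the bodies are stand-ins.
KEYWORDS = {
  'Login'  => ->(user, pass) { puts "logging in as #{user}" },
  'Search' => ->(term)       { puts "searching for #{term}" },
  'Buy'    => ->()           { puts 'buying current item' },
  'Logout' => ->()           { puts 'logging out' },
}

# steps.csv holds one keyword per row, e.g.:  Search,"something I want"
CSV.foreach('steps.csv') do |keyword, *args|
  KEYWORDS.fetch(keyword).call(*args)
end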
I don't know about overloading of keywords, but when I was writing test cases in QTP for automation I used configurable navigation paths in a properties or config file. All I needed to do was call a generic function which took a source and a destination and, using those property files, navigated to the proper location.

Behavior Driven Development using existing software components

I'm currently reading through the beta version of The Rspec Book: http://www.pragprog.com/titles/achbd/the-rspec-book
It describes The Behaviour Driven Development Cycle (red, green, refactor) as taking small steps during the development process. This means adding features one at a time.
My question is:
If I were to describe a single feature of my software (e.g. a successful user login scenario in a Cucumber test) AND if I were using a modular component (such as Devise) which has many features (scenarios), how could I possibly follow Behaviour Driven techniques? Once my first step passes, I must reverse engineer my other tests to reflect the functionality of the software component I am using, thus violating the principle of BDD!
Edit (for clarity):
My first scenario is passing after implementing Devise. But now I must shape all my subsequent end-to-end tests (that I have not yet written) around the behaviour of Devise rather than around my stakeholder's requirements. So the BDD cycle can no longer be applied. I must reverse engineer Devise in my tests to make them pass, or not write the tests at all.
The BDD cycle involves creating scenarios, then having conversations around those scenarios to discover any more which are missing, any misunderstandings, etc.
If you use a BDD tool like Cucumber, then you can capture the scenarios you've discussed.
Ideally, the scenarios will be in terms of high-level steps, focused on the capabilities of the system and the value they provide to the users. They won't be about logging in or failing to authenticate.
BDD isn't really about testing. It's about learning, and keeping things easy to change. If you're never going to change the mechanisms for logging in, then it's enough to verify authentication manually. Focus instead on the things which make your application different from all the other applications out there. How does your software actually provide value? How does it communicate with other systems, applications and users? How is money made, or lives saved, or fun had?
If you can answer those questions and keep the steps focused on those, you'll have failing scenarios still, even with Devise. Your scenarios will look more like:
Given I have registered an account
When I <do this differentiating thing>
Then I <achieve this differentiating outcome>
Logging in will be implicit, and called as a lower level step as part of the first Given.
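For example, a hedged sketch of such a lower-level step (assuming Cucumber with Devise and Warden's test helpers, which would need to be enabled in features/support/env.rb; the attributes are illustrative):

# features/step_definitions/account_steps.rb
# Assumes env.rb includes: World(Warden::Test::Helpers) and Warden.test_mode!

Given('I have registered an account') do
  @user = User.create!(email: 'user@example.com', password: 'secret123')
  # Logging in happens here as a lower-level detail, not as its own scenario.
  login_as @user, scope: :user
end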
So I can understand better: You have an existing system and it includes tests but does not include login functionality. You decide to add login:
Given a visitor is not logged in
When a visitor goes to the admin page
Then the visitor should see the login page
Ok, so then you decide how you want to implement the login because this scenario is failing. Right? So you choose Devise and the scenario passes but all the tests or specs that relied on a security-free system now fail. Am I still on track? If that's the case, you've encountered a case where you are adding a feature that is so pervasive in your application that you need to touch a number of tests/specs to get them all running. You are still testing first because either the code is broken or the test doesn't recognize the new feature. In any case, you know where the work needs to be done.
Helpful?
