I have a quick question and am seeking some clarification regarding the documentation provided by Playwright.
Here are my queries:
Are the features mentioned under the "Playwright Test" section of the documentation available only when someone uses Playwright Test as the test runner?
If someone is using a different test runner, e.g. cucumber.js, are the same features not accessible directly (although, with some extra code, these features might still be possible)?
Please help me understand whether the above are true.
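For illustration, this is roughly the kind of extra code I mean: a minimal, untested sketch (the step definition is a hypothetical example) that drives the browser from cucumber.js step definitions using the plain playwright library instead of the Playwright Test runner:

// hooks.ts -- wiring the playwright library into cucumber.js by hand.
import { Before, After, Given, setDefaultTimeout } from '@cucumber/cucumber';
import { chromium, Browser, Page } from 'playwright';

setDefaultTimeout(30 * 1000); // browser launches can exceed cucumber's 5s default

let browser: Browser;
let page: Page;

Before(async () => {
  // With Playwright Test, this lifecycle management comes for free via fixtures.
  browser = await chromium.launch();
  page = await browser.newPage();
});

After(async () => {
  await browser.close();
});

// Hypothetical step definition using the library's page API directly.
Given('I open {string}', async (url: string) => {
  await page.goto(url);
});

What I can't tell from the docs is which runner-level features (parallel workers, retries, trace collection, the web-first expect assertions) can be recovered this way, and which are tied to Playwright Test itself.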
I am currently working in the test team of my company, and one of the managers gave me the task of reducing manual regression tests based on code coverage. Now, before anyone mentions that manual tests should be chosen depending on use cases and requirements, please consider that this task was not mine to choose, but instead a task I have to solve.
The tested application is deployed obfuscated via ClickOnce deployment, and since this makes it seem almost impossible for any application to get coverage out of a test run, I would also love an explanation of how the impact analysis works.
A short explanation of how to set up Test Impact Analysis for TFS 2017 would also be really appreciated, since the Microsoft documentation doesn't really explain TIA for manual testing properly.
Manual desktop application tests are not among the scenarios supported by TIA. See Impact Analysis supported scenarios for details.
Refer to Speed up testing by using Test Impact Analysis (TIA) to learn how TIA works and how to set it up.
Related link: TIA overview and VSTS integration
At my workplace, I've been tasked to look into some metrics that the Jenkins tool provides and somehow pull them programmatically and display them in some presentable format. The metrics that I need to pull are:
How many unit tests are passing? Failing? Skipping? The total % of passing?
How many integration tests are passing? Failing? Skipping? The total % of passing?
How many acceptance tests are passing? Failing? Skipping? The total % of passing?
How long does it take to execute the tests? To make the build?
What is the number of tests executing in pipelines?
... the list goes on
Now I have a very small 1000 ft understanding of Jenkins, and an even smaller understanding of the steps that I need to take to make this program come to life. I am an intern with not much programming experience either, but after some research, I learned that I can navigate through the Jenkins API by adding '.../api' to the link that I want to find API elements for, and I know that I'm going to need to develop a plugin. Aside from that I don't have much direction at all. I don't know what environment I need to develop these plugins (Maven? Never heard of it)... I don't know what languages are supported (I only know C++, Java, and JS)... I don't know how to even install a plugin or get to the plugin on the Jenkins site. I feel like I'm drinking from a firehose with this task and need some guidance.
Does anyone have good guides, advice, tips, tricks, videos... anything that might help me get started on Jenkins plugin development? Any insight into how I might solve this problem too would be much appreciated.
Thanks so much.
First of all, there are tools out there which generate HTML reports. You can start there.
For example, an MSTest report (.trx) can be converted to HTML by TRXER
and can be published using the HTML Publisher Plugin
However, if you're into building your own plugin, use NetBeans (I have tried it, and it works).
But for creating Jenkins graphs, you will have to google around and see.
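Also, for the raw counts in the question, you may not need a plugin at all: Jenkins exposes test results through the same JSON REST API you found (append /api/json to a build's testReport URL). A minimal sketch, assuming Node 18+ for the built-in fetch, a job that publishes JUnit-style results, and hypothetical placeholder values for the server URL and job name:

// Pull pass/fail/skip counts for the last completed build of a job.
const JENKINS_URL = 'https://jenkins.example.com'; // placeholder
const JOB = 'my-app-pipeline';                     // placeholder

interface TestReport {
  passCount: number;
  failCount: number;
  skipCount: number;
}

async function fetchTestCounts(): Promise<void> {
  const url = `${JENKINS_URL}/job/${JOB}/lastCompletedBuild/testReport/api/json`;
  const res = await fetch(url); // add an Authorization header if your Jenkins is secured
  if (!res.ok) throw new Error(`Jenkins returned ${res.status} for ${url}`);
  const report = (await res.json()) as TestReport;
  const total = report.passCount + report.failCount + report.skipCount;
  const passPct = total === 0 ? 0 : (100 * report.passCount) / total;
  console.log(`passed=${report.passCount} failed=${report.failCount} skipped=${report.skipCount}`);
  console.log(`pass rate: ${passPct.toFixed(1)}%`);
}

fetchTestCounts().catch(console.error);

Build duration is available the same way from .../lastCompletedBuild/api/json (its duration field is in milliseconds).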
Can anyone help me by describing how the function of FitNesse differs from that of JUnit?
I have been working with FitNesse for a few days. I am asking this question because I am still trying to understand the objective of the tool. I went through some web articles, and they were saying:
JUnit is to build the code right, and FitNesse is to build the right code. But I want to know: how?
Great if we can get some examples...
Thanks in Advance
---Sreenisha S---
The main difference for me is the target audience.
JUnit (and other unit test frameworks) are targeted at developers validating that their code (still) does what they expect it to do. The test definition, execution, and interpretation of output are expected to be done by programmers. The toolkit is designed for white-box testing, although it can be used for other goals as well.
FitNesse is intended for black-box testing, based on what the functionality has to be, not on how it is implemented. The intention is that domain knowledge suffices to create tests (or at least add test cases), execute them and get meaningful information from the test output.
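To make that concrete, here is essentially the decision table from FitNesse's own two-minute example. A domain expert can read, extend, and run this wiki page without ever touching the Java fixture (eg.Division) that backs it:

|eg.Division|
|numerator|denominator|quotient?|
|10|2|5.0|
|12.6|3|4.2|

The equivalent JUnit test would live in code, where only programmers see it.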
In my usage programmers also work with FitNesse, but they use it to define integration tests working against an installed system to validate end-to-end behavior. We for instance test web applications with a browser (using Selenium) and SOAP interfaces offered by the system.
Unit tests are used for component tests (often with external components replaced by fakes/mocks) in a (dedicated) test process.
Does this help at all?
How do ScalaTest and Spock differ? What is the added value of each? Which is better suited to Behavior-Driven Development (BDD)? Could you please share some thoughts on the matter?
I want to start BDD and pick one of the two, so I'd like to make an educated decision; hence I want to gather the maximum of information first, especially given that I'm a Java programmer and that Scala seems to have a significant learning curve.
Any advice, ideas, or reports from experience would be welcome.
Many thanks
In a nutshell, I would recommend using ScalaTest for testing Scala code, and Spock for testing Java or Groovy code. (Of course, it's also perfectly possible to test Java code with ScalaTest.) Why not give both tools a shot and stick with the one that you are more comfortable with?
Disclaimer: I'm the creator of Spock.
While I agree with Peter's answer, I'd like to give my perspective on this.
Both ScalaTest and Spock provide fluent BDD tests. However, I find that the best feature of Spock is that you can create a single scenario with multiple sets of data and expected results. This is very much like Cucumber's Scenario Outline, only run at the unit-test level.
I can't find another unit test framework/library that does that.
In summary, if you need to test multiple inputs/outputs for a single test scenario, use Spock; otherwise, feel free to choose whatever you feel comfortable with.
I LOVE cucumber, my clients love it too.
As far as I know, there currently isn't a nice way to share your features with your clients. Us nerds have TextMate or NetBeans bundles that give us nice syntax highlighting -- my clients, not so much.
What I would love is to have something hosted at features.myclientsapp.com that would be a nicely organized, marked-up view of the features of the application. Maybe, as a bonus, an overview page with % coverage and which steps are passing. Ideally this would be exposed as a Rack engine.
If I am getting greedy -- git integration to see version control, and a way to solicit feedback from the clients.
Does anyone know of anything that can do this? What other strategies do people have for sharing their feature files with their clients/users?
I have been working on this, and this is what I have come up with. It's a lesser-known feature that Cucumber can output its results as pretty nice HTML. I have this task namespaced as part of a bigger task list that is run with rake doc:features and includes all the RDoc for the app, the README for the app, etc.
desc "runs cucumber features and outputs them to doc/cucumber as html"
task :html do
Dir.glob("features/*.feature").each do |story|
system("cucumber",
"#{story}",
"--format=html",
"-o",
"doc/cucumber/#{story.gsub(/features\/(\w*).feature/, '\1.html')}")
end
end
Then it's up to you how you want to serve them. I've been writing some tasks that hook this task in with others to build the documentation and then serve it up with the serve gem (http://github.com/jlong/serve), but there are a lot of other options too, such as running the features on a CI server and putting the generated files in a directory to be browsed.
I agree with you: it would be nice if there were a dashboard page that gave pass/fail status and links to each feature file's output. If anyone would like to share the workload of implementing this as part of Cucumber core, I would be happy to contribute. I personally think the HTML formatting should be more robust and part of the central Cucumber feature set.
I really like this idea. What do you think about using this fork of metric_fu, which claims to combine Cucumber with rcov and other nice, pretty graphs?
As far as formatting the features themselves, I really like how Chargify uses Cucumber features as documentation. They appear to wrap them in a 'pre' tag so they are pre-formatted.
I just found Viewcucumber. I haven't used it yet since it currently does not support cucumber 0.10.0, but I will be monitoring it -- looks great.
A new service that looks promising is Relish.
It is still in closed beta, though, and I couldn't get access. But it's one to keep an eye on.
features2html is a script that creates a self-contained HTML file from all Cucumber feature files in a folder.
P.S. Self promotion alert :) D.S.