Is there any test framework in Dart that is as robust as Jasmine?
I did use pub test, but it left me wanting something like JavaScript's Jasmine.
Any suggestions?
The test package that you used is as robust as Jasmine. It is itself backed by tests, can run tests on multiple platforms (VM/server, browsers, Flutter), and allows tagging, skipping, grouping, filtering by name, and more.
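For example, here is a minimal sketch of how grouping, skipping and tagging look with the test package (the String.split scenario is purely illustrative):

import 'package:test/test.dart';

void main() {
  group('String.split()', () {
    test('splits on the separator', () {
      expect('a,b,c'.split(','), equals(['a', 'b', 'c']));
    });

    test('handles an empty string', () {
      expect(''.split(','), equals(['']));
    }, skip: 'pending a decision on empty-string semantics');
  });

  test('slow end-to-end check', () {
    expect(1 + 1, equals(2));
  }, tags: ['integration']);
}

Name and tag filtering then happen on the command line, e.g. pub run test -n "split" to select by name, or pub run test -x integration to exclude the tagged tests.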
In what areas do you feel that Jasmine is more robust?
Any frameworks available for writing test cases to support iOS applications developed using Appcelerator Titanium?
There are quite a few frameworks for Unit Testing:
MockTi
Titanium-Jasmine
Ti-Mocha
Ti02
TiUnit
TiShadow
But most of the frameworks listed above are either discontinued or no longer work. The frameworks that do still work mostly run inside the Titanium container/runtime, which means the Titanium project needs to be built and run first, making test execution very slow. Besides that, most of them don't provide a way to mock the Titanium namespace (e.g. manipulating/mocking Ti.Network).
We're using the TiUnit toolset for our unit testing, in combination with Istanbul (test/code coverage). TiUnit covers our unit-testing needs:
Fast execution outside of the Titanium container/runtime
Mocking all dependencies (e.g. required CommonJS modules)
Generating a mock for all functions, constants and properties in the Ti namespace (Ti)
Callback, L macro and $ testing
More information can be found on the TiUnit GitHub page.
We recommend ti-mocha (https://github.com/tonylukasavage/ti-mocha), which suited our needs: it supports test cases, asserts, skips, advanced validations and many more features.
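As a rough sketch of what a ti-mocha test looks like (formatCoords is only a stand-in for your real code under test, and the ti-mocha README pairs it with an assertion library such as should.js; a plain throw is used here to stay self-contained):

// app.js - minimal ti-mocha sketch
require('ti-mocha');

function formatCoords(lat, lon) { // stand-in for the code under test
  return lat + ',' + lon;
}

describe('geo helpers', function () {
  it('formats coordinates', function () {
    if (formatCoords(51.5, -0.12) !== '51.5,-0.12') {
      throw new Error('unexpected format');
    }
  });

  // skipped: would need the Titanium runtime / Ti.Network
  it.skip('talks to Ti.Network', function () {});
});

mocha.run();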
I'm trying to use SpecFlow for a .NET project. I'm new to SpecFlow. The development team are using NUnit, so it would seem that SpecFlow would be a good option in conjunction with Cucumber. However, the development team have come back saying that SpecFlow cannot be used, because they do not have an API/service available to use at the level required. Currently all of their automated tests go through the UI using TestComplete; I am keen to move to API-level testing.
Can anyone explain to me why SpecFlow cannot be used? I'm sorry, it's a newbie question, but no one can answer it and I've asked everyone I can think of. Surely the first step would be to see if we can use SpecFlow with NUnit, but perhaps not.
Can anyone give me a guide on my next steps? How can I be sure this isn't an option, rather than writing it off and being left with the concern that it's just being blocked?
Thank you
SpecFlow has a unit test generator that generates unit test code for a variety of unit test frameworks. SpecFlow generates NUnit tests in its default configuration. The getting started page on specflow.org explains a quick way to get up and running with SpecFlow and NUnit, http://www.specflow.org/getting-started/.
If the UI is HTTP based, SpecFlow can be used with WebDriver or another browser automation framework to test the UI. This blog post provides an overview of how to get started with SpecFlow, NUnit and WebDriver, http://blogs.lessthandot.com/index.php/enterprisedev/application-lifecycle-management/using-specflow-to/
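As a rough illustration (not taken from the blog post; the URL, element id and step wording below are placeholders for your own application), a SpecFlow step binding that drives a browser through WebDriver might look like this:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    private IWebDriver _driver;

    [BeforeScenario]
    public void StartBrowser()
    {
        _driver = new ChromeDriver();
    }

    [AfterScenario]
    public void StopBrowser()
    {
        _driver.Quit();
    }

    [When(@"I open the login page")]
    public void WhenIOpenTheLoginPage()
    {
        // Placeholder URL - point this at the application under test.
        _driver.Navigate().GoToUrl("http://localhost/login");
    }

    [Then(@"I should see the login form")]
    public void ThenIShouldSeeTheLoginForm()
    {
        Assert.IsTrue(_driver.FindElement(By.Id("login-form")).Displayed);
    }
}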
I am unclear on the API you want to test. If you could provide more information on the specific API and UI you are trying to test, I could possibly provide some code examples or references for you.
Is the API exposed through HTTP?
Is the UI a web, mobile, or desktop application?
Have you tried to use SpecFlow at all?
SpecFlow doesn't run tests. It simply maps readable language to tests. If their test can be written as an NUnit test, then SpecFlow is available for you to use. With no change, here is how it would look.
Scenario: Running 'testname'
Then I execute the test 'TestName'
You would map that to
[Then(#"I execute the test '(.*)'")]
public void ExecuteSpecificTest(string testname)
{
// Using reflection, execute the method listed
}
Obviously you would want to do better than that. You want a Given, When, Then so you clearly show the setup and the action, and then compare the expected versus the actual result, but it isn't strictly necessary. Best practices, however, are another discussion.
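For instance, a binding with an explicit setup, action and assertion could look something like the following (the calculator wording is purely illustrative, not part of your suite):

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    private int _first;
    private int _second;
    private int _result;

    [Given(@"I have entered (\d+) and (\d+)")]
    public void GivenIHaveEntered(int first, int second)
    {
        // Setup: record the inputs for the scenario
        _first = first;
        _second = second;
    }

    [When(@"I add them")]
    public void WhenIAddThem()
    {
        // Action: exercise the code under test
        _result = _first + _second;
    }

    [Then(@"the result should be (\d+)")]
    public void ThenTheResultShouldBe(int expected)
    {
        // Assertion: expected versus actual
        Assert.AreEqual(expected, _result);
    }
}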
To sum it up, code is code and SpecFlow simply maps to code. You can use WatiN, WebDriver, or anything else to hook into the UI or an API. SpecFlow doesn't care. It simply executes the methods without knowing what's inside.
I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on this without making the application under test depend on this (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my eunit tests, both between applications (apps/foo/test vs. apps/bar/test), and where I'm using similar functionality between the eunit and CT tests in the same application (apps/bar/suites vs apps/bar/test). Can I use the same solution for this case as well? Or do I need to ask another question about that?
Do you think ct:require/1,2 could help, so that the foo and bar suites would require test_stuff before they get executed? For more information: http://www.erlang.org/doc/man/ct.html#require-1
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with the other dependencies for testing purposes, but in my reltool.config I omit it, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without having it bundled as a lib with everything else or added as a dependency to the app I'm testing.
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.
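To make that concrete, a reltool.config fragment along those lines might look like the following (the release and app names are made up, and cowboy stands in for any test-only dependency):

%% reltool.config (fragment)
{sys, [
    {lib_dirs, ["../apps", "../deps"]},
    {rel, "myrel", "1", [kernel, stdlib, sasl, foo, bar]},
    {boot_rel, "myrel"},
    %% fetched for tests, but excluded from the packaged release
    {app, cowboy, [{incl_cond, exclude}]}
]}.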
G'day guys,
I've started a new role at a company and am working on a Rails application that's used internally for particular management purposes. At any rate, there are no unit tests associated with this system, and as I build the system out I've realised there is a serious need for unit tests so that I can have regression testing as I continue to add to this codebase.
It's a Rails 1.2.7 app that I cannot migrate for compatibility reasons, but I will be beginning to port components of it over to 3.0 in the background.
There are currently only the built-in tests that Rails generates, and I would love some insight into how to get started with building small tests into the system, and in particular which aspects would be good places to start.
This system builds a lot of individual config files, so I'm assuming a lot of the test generation would begin with testing those config files for individual aspects/elements and going from there.
I've had a look at Cucumber and RSpec, but just wanted to find something straightforward and easy to use that I could slowly build upon within the current system, building up a testing base over time.
Any insights, links or other pointers would be really appreciated.
Cheers!
A lot has changed since Rails 1.2.7. One of the first things would definitely be to write unit tests, but instead of going for an all-out unit test attack, attack the problem one component at a time:
Write tests
Upgrade the component
Run the tests (mostly they'll fail, or there will be errors)
Make them pass
Refactor (maybe using the new features at your disposal)
I usually do the above steps by writing the integration tests first, then functional tests and unit tests for the components covered by those integration tests.
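Since the system mostly builds config files, plain Test::Unit tests (which Rails 1.2.7 already generates stubs for) are a reasonable place to start. A minimal sketch, assuming a hypothetical ConfigFile class with a render method:

# test/unit/config_file_test.rb
require File.dirname(__FILE__) + '/../test_helper'

class ConfigFileTest < Test::Unit::TestCase
  def test_rendered_config_contains_the_hostname
    # ConfigFile and #render are placeholders for whatever builds your config files
    config = ConfigFile.new(:hostname => 'example.internal')
    assert_match /hostname example\.internal/, config.render
  end
end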
All the best, this ain't going to be a simple task :).
We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc. - thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there any best practices for allowing only a subset of the tests to run in Ruby/Rails, or for patching out specific tests when you're running on Windows?
Some specific situations:
Identify.exe does not support 64-bit Windows and throws a dialog.
Sethostname is not supported and throws an error (at least it's on the command line).
You can mock out the code to decouple the dependency on the other apps. Use Mocha to create the mocks dynamically.
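For example, a minimal sketch of stubbing out the call that shells out to Identify.exe (ImageWrapper, run_identify and dimensions are hypothetical names; the point is that Mocha replaces the real call, so no dialog can appear on 64-bit Windows):

require 'test/unit'
require 'mocha'  # with newer Mocha versions: require 'mocha/test_unit'

class ImageWrapperTest < Test::Unit::TestCase
  def test_dimensions_are_parsed_from_identify_output
    wrapper = ImageWrapper.new('photo.png')
    # Stub the method that would normally shell out to Identify.exe
    wrapper.stubs(:run_identify).returns('photo.png PNG 640x480')
    assert_equal [640, 480], wrapper.dimensions
  end
end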