How do I write a query to find test cases that were created by a specific person from copying other test cases?
If I add a "Created By" criterion then no results are displayed.
I'm trying to create a test suite query that pulls back only test cases whose state is not "Passed".
The setup I have spans various static suites in a tree structure similar to the below:
Functional Tests
Feature 1
Page 1 Tests (40)
Page 2 Tests (27)
etc...
Feature 2
Page 1 Tests (22)
Page 2 Tests (18)
etc...
Automated Tests
Area 1
Function 1 (73)
Function 2 (54)
etc...
I have created charts at the top level (Functional Tests and Automated Tests) that record the results of the tests after they have been run, including those that haven't been run, but what I want is an overall list of tests that haven't had a "Pass" result against them.
Is this possible? Unfortunately we don't have the MTM desktop application so all I can use is the TFS webview and query builders.
A work item query cannot filter on test results (passed or not).
To see the overall status of test cases in a test plan/test suite, and to identify which test cases are passing or failing at the test plan/test suite level, you would need to create an Excel report. Based on the data warehouse and Analysis Services cube, you can import the data into Excel and use Excel's tools to filter the results.
See this blog post for more details on how to build such an Excel report.
I'm writing a simple CanadaPost price shipping API, where the prices are given in a PDF document in table format (see page 3 of https://www.canadapost.ca/tools/pg/prices/SBParcels-e.pdf).
The data is extracted into csv files and imported into the database (table prices has columns weight, rate_code and date). The model and controller are very simple: they take a weight, rate_code and date and query the prices table.
I'm trying to create a suite of tests which tests the existence and correctness of every single price in that chart. It has to be generic and reusable because the data changes every year.
I thought of some ideas:
1) Read the same csv files that were used to import the data into the database and create a single RSpec example that loops around all rows and columns and tests the returned price.
2) Convert the csv files into fixtures for the prices controller and test each fixture.
I'm also having a hard time categorizing these tests. They are not unit tests, but are they really functional/feature tests? They sound more like integration tests, but I'm not sure if Rails differentiates functional tests from integration tests.
Maybe the answer is obvious and I'm not seeing it.
Program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence (Edsger Wybe Dijkstra)
First of all, I think you should assume that the CSV parser does its job correctly.
Once you've done that, take the same CSV file and put it in your specs (fixtures, require, load), then test that the output matches it.
If it is correct for every single line of that CSV file (I'm assuming it's a big one, given the chart in your link), then I would conclude that your code works. I wouldn't bother re-testing it every year, unless doing so would make you feel more secure about your code.
You might get an anomaly afterwards, but unfortunately your specs cannot detect that; they can only ensure that the code works for this year's data.
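The data-driven loop over the CSV could look roughly like this, in plain Ruby. The price_for method, the in-memory PRICES table, and the CSV contents are all hypothetical stand-ins for illustration; in the real app the lookup would hit the prices table through the model, and in RSpec the loop body would sit inside generated examples:

```ruby
require 'csv'

# Hypothetical rate table; in the real app this would be the prices table.
PRICES = {
  [0.5, "regular"] => 10.06,
  [1.0, "regular"] => 11.41,
}

# Stand-in for the model lookup the controller performs.
def price_for(weight, rate_code)
  PRICES[[weight, rate_code]]
end

# The same CSV that seeded the database drives the expectations.
csv_data = <<~CSV
  weight,rate_code,price
  0.5,regular,10.06
  1.0,regular,11.41
CSV

# Collect every row whose looked-up price differs from the CSV value.
failures = CSV.parse(csv_data, headers: true).reject do |row|
  price_for(Float(row["weight"]), row["rate_code"]) == Float(row["price"])
end

puts failures.empty? ? "all prices match" : "mismatches: #{failures.size}"
```

Because the expectations come from the same file that was imported, the suite stays reusable when next year's CSV replaces this one.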
I'm looking for a tool that fits the following task.
For example, the user selects the entity University in the interface, types in some IDs to search for, and gets back a list of universities matching his request; then he does the same with the entity Person, and finally he types in the maximum relationship length. The result of his request is, for example, a graph of relationships:
(:Person)-[:IS_BROTHER]->(:Person)-[:IS_STUDENT]->(:University)
or he might get several results that fit within the relationship length.
I'm not very experienced with neo4j and don't know if there is any tool that fits this task. Any other tool not related to neo4j would be fine too, but I doubt that SQL works well for relationship search. Thanks.
Edited
I'm looking for a user-friendly tool that will generate this query without the user knowing the Cypher language at all.
Here is a Cypher query that returns all paths that are at most 5 relationships deep between any Person whose ID is in a given list and any University whose ID is in another list:
MATCH path=(p:Person)-[*..5]->(u:University)
WHERE ID(p) IN [1,22,333] AND ID(u) IN [2,444,192,678]
RETURN path;
You could use the neo4j Browser to see the paths.
I have a fullDB, (a graph clustered by Country) that contains ALL countries and I have various single country test DBs that contain exactly the same schema but only for one given country.
My query's "start" node is identified via a match on a given value for a property, e.g.
match (country:Country{name:"UK"})
and then proceeds to the main query anchored on the variable country. So I am expecting the query times to be similar, given that we start from the same known node and traverse the same number of nodes related to it in both DBs.
But I am getting very different performance depending on whether I run the query against the full DB or a single-country DB.
I immediately thought that I must have some kind of Cartesian product issue going on, so I profiled the query in both the full DB and a single-country DB, but the profile is exactly the same for each step in the plan. I was assuming that the profile would reveal a marked increase in db hits at some point in the plan, but the values are the same. Am I mistaken about what PROFILE displays?
Some sizing:
The full DB has around 70k nodes and the test DB 672 nodes; the query takes 218764 ms to complete in the full DB versus circa 3407 ms in the test DB.
While writing this I realised that there will be an increase in the number of outgoing relationships on certain nodes (suppliers can supply different countries) which I think is probably the cause, but the question remains as to why I am not seeing any indication of this in the profiling.
Any thoughts welcome.
What version are you using?
Both query times are way too long for your dataset size.
So you might check your configuration / disk.
Did you create an index or constraint for :Country(name), e.g. CREATE INDEX ON :Country(name), and is that index online?
And please share your query and your query plans.
I am building a contact management system of sorts. I have a list page with several filters to narrow the results, such as "area", "category", etc., and also search fields for name, address and contact info.
Suppose I set area to "Chicago" and category to "Family" and then press "apply filters" (the filters and search fields are submitted); I get the result. Now if I had entered something in the name field, I'd attach a WHERE clause to the resulting ActiveRecord relation.
Suppose I've got a result with the above filters in one request. If I then want to search for a different name, I'll have to query the database with the area and category filters again, which is unnecessary. Is there a way to cache the results from the previous search?
I recommend not worrying about this until you can show you have a problem.
If you did have a problem you could:
Return all results and do the filtering in JavaScript
Cache all results on the server and do the filtering in Ruby there
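For the server-side option, here is a minimal in-memory sketch of the idea: memoize the filtered result set on its filter parameters, so a subsequent name search reuses the cached base result. CACHE, CONTACTS, filtered and search are hypothetical names invented for the sketch; in Rails this role is usually played by Rails.cache.fetch over an ActiveRecord query:

```ruby
# Cache filtered result sets keyed by their filter parameters.
CACHE = {}

# Stand-in data; in the real app this would come from the database.
CONTACTS = [
  { name: "Alice", area: "Chicago", category: "Family" },
  { name: "Bob",   area: "Chicago", category: "Family" },
  { name: "Carol", area: "Boston",  category: "Work"   },
]

# Memoize the expensive filter step on its parameters.
def filtered(area:, category:)
  CACHE[[area, category]] ||=
    CONTACTS.select { |c| c[:area] == area && c[:category] == category }
end

# The cheap name search runs against the cached base result.
def search(area:, category:, name:)
  filtered(area: area, category: category).select { |c| c[:name].include?(name) }
end

first  = search(area: "Chicago", category: "Family", name: "Al")
second = search(area: "Chicago", category: "Family", name: "Bo")  # base result comes from CACHE
```

Note that an in-process cache like this is per server process and goes stale when contacts change; Rails.cache with an expiry, or simply letting the database do the work, is usually the safer choice.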