Use both columns and previously defined values in a FitNesse ColumnFixture

The rows in my test table all repeat the same values, except for two columns which are different for each row. I would like to use values I defined earlier for the repeated values.
The fixture uploads files to FTP. Each row in the test table currently has username, password, host and so on, which are always the same; only the name of the file differs.

If your tests use Slim you can use constructor parameters to define the repeated values in the first (i.e. header) row of your table. In that case you only have to define the file names in the table's rows.
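For example, a decision table along these lines would pass the connection details to the fixture's constructor via its first row, leaving only the file names in the data rows (fixture name, column names and values here are illustrative):
|upload file to ftp|bob|secret|ftp.example.com|
|file name |result? |
|report1.xlsx|uploaded|
|report2.pdf |uploaded|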
If your table is a 'decision table' based on a 'scenario' you can also supply repeated parameters in the header row (using a 'having' syntax). More details can be found in FitNesse's own acceptance tests. For instance:
|scenario |Division _ _ _|numerator, denominator, quotient?|
|setNumerator |#numerator |
|setDenominator|#denominator |
|$quotient= |quotient |
|Division |having|numerator|9|
|denominator|quotient? |
|3 |3.0 |
|2 |4.5 |
Another option, though it seems less appropriate when the values really are the same for ALL rows, is to use a baseline decision table, where the first row defines values for all columns and subsequent rows only define the values that change.

You can use FitNesse variables:
!define username {bob}
!define password {secret}
|myfixture|
|username|password|other|stuff|
|${username}|${password}|a|b|
|${username}|${password}|c|d|
|${username}|${password}|x|y|

The answer by Fried Hoeben works for Slim; the following answer is for Fit:
If your fixture class extends Fixture, you can define extra parameters by adding extra columns to the header row.
|!-UploadFileToFtps-! |ftpPassword=${password} | ftpUserName=${userName}|
|host |ftpDir |localFile |result? |
|${ftpHost}|${ftpSrc}|${folder1}${file1}.xlsx |File '${folder1}${file1}.xlsx' successfully uploaded|
|${ftpHost}|${ftpSrc}|${folder2}${file2}.xlsx |File '${folder2}${file2}.xlsx' successfully uploaded|
|${ftpHost}|${ftpSrc}|${folder2}${file3}.pdf |File '${folder2}${file3}.pdf' successfully uploaded |
You can access the values in those columns with getArgs(), which returns a String array.
I use key-value pairs separated by '=', which gives me named parameters; otherwise I would have to reference the parameters by position, which I think is a bad idea.
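A minimal sketch of how such a fixture could turn those key=value cells into named parameters (the class name matches the table above; the helper method and its use are illustrative, not part of the original answer):
import java.util.HashMap;
import java.util.Map;

import fit.ColumnFixture;

public class UploadFileToFtps extends ColumnFixture {
    // Column bindings (host, ftpDir, localFile, result()) omitted for brevity.

    // Parse the "key=value" cells of the fixture's header row into named parameters.
    private Map<String, String> namedArgs() {
        Map<String, String> named = new HashMap<String, String>();
        for (String arg : getArgs()) {
            String[] pair = arg.split("=", 2);
            if (pair.length == 2) {
                named.put(pair[0].trim(), pair[1].trim());
            }
        }
        return named;
    }

    // Example: read the credentials supplied in the header row.
    protected String ftpUserName() {
        return namedArgs().get("ftpUserName");
    }

    protected String ftpPassword() {
        return namedArgs().get("ftpPassword");
    }
}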

Related

Combine log lines with awk

I have a log file that, simplified, looks like this (it has enough columns that addressing the columns directly is not feasible):
id,time,host,ip,user_uuid
foo1,2022-05-10T00:01.001Z,,,
foo1,2022-05-10T00:01.002Z,foo_host,,
foo1,2022-05-10T00:01.003Z,,192.168.0.1,
foo1,2022-05-10T00:01.004Z,,,foo_user
bar1,2022-05-10T00:02.005Z,,,
bar1,2022-05-10T00:03.006Z,bar_host,,
bar1,2022-05-10T00:04.007Z,,192.168.0.13,
bar1,2022-05-10T00:05.008Z,,,bar_user
Most of the fields appear only once per id, but not all of them (see time, for example).
What I want to achieve is to have one line per id that combines the columns of all records with the same id:
id,time,host,ip,user_uuid
foo1,2022-05-10T00:01.001Z,foo_host,192.168.0.1,foo_user
bar1,2022-05-10T00:03.006Z,bar_host,192.168.0.13,bar_user
For the columns that appear more than once per id, I don't care which one is returned as long as it relates to a record with the same id.
I would exploit GNU AWK's 2D arrays in the following way. Let the content of file.txt be
id,time,host,ip,user_uuid
foo1,2022-05-10T00:01.001Z,,,
foo1,2022-05-10T00:01.002Z,foo_host,,
foo1,2022-05-10T00:01.003Z,,192.168.0.1,
foo1,2022-05-10T00:01.004Z,,,foo_user
bar1,2022-05-10T00:02.005Z,,,
bar1,2022-05-10T00:03.006Z,bar_host,,
bar1,2022-05-10T00:04.007Z,,192.168.0.13,
bar1,2022-05-10T00:05.008Z,,,bar_user
then
awk 'BEGIN{FS=OFS=",";cols=5}NR==1{print}NR>1{for(i=1;i<=cols;i+=1){arr[$1][i]=arr[$1][i]?arr[$1][i]:$i}}END{for(i in arr){for(j in arr[i]){$j=arr[i][j]};print}}' file.txt
output
id,time,host,ip,user_uuid
bar1,2022-05-10T00:02.005Z,bar_host,192.168.0.13,bar_user
foo1,2022-05-10T00:01.001Z,foo_host,192.168.0.1,foo_user
Explanation: First I tell GNU AWK that both the field separator (FS) and the output field separator (OFS) are ',' and use the cols variable to hold the number of columns you wish to have. The first row I simply print. For each following row and each column, I use the ternary operator to check whether arr[id][column number] already holds a truthy value; if it does I keep it, otherwise I set it to the current field. In the END block I use nested for loops: for each id I assign its stored values to the fields of the current line, so GNU AWK rebuilds the record, which I then print. Disclaimer: this solution assumes that the number of columns is equal in all lines and known a priori, and that any output order is acceptable. If that does not hold, you will need to adapt the approach.
(tested in gawk 4.2.1)
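For readability, the same program can be written as a script file; the following is functionally identical to the one-liner above and can be run as gawk -f combine.awk file.txt (the script name is arbitrary):
# combine.awk - requires gawk, since it relies on true multidimensional arrays
BEGIN { FS = OFS = ","; cols = 5 }
NR == 1 { print }                 # pass the header row through unchanged
NR > 1 {
    for (i = 1; i <= cols; i += 1)
        # keep the first non-empty (truthy) value seen for this id and column
        arr[$1][i] = arr[$1][i] ? arr[$1][i] : $i
}
END {
    for (id in arr) {
        for (j in arr[id])
            $j = arr[id][j]       # rebuild the output record field by field
        print
    }
}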
You can use the Ruby CSV parser to group and then reduce the repeated entries:
ruby -r csv -e '
  data = CSV.parse($<.read, col_sep: ",")
  puts data[0].to_csv
  data[1..].group_by { |row| row[0] }.each { |k, arr|
    puts arr.transpose.map { |ta| ta.find { |x| !x.nil? } }.to_csv
  }
' file
Prints:
id,time,host,ip,user_uuid
foo1,2022-05-10T00:01.001Z,foo_host,192.168.0.1,foo_user
bar1,2022-05-10T00:02.005Z,bar_host,192.168.0.13,bar_user
This assumes the valid data is the first non-nil, nonblank encountered for that particular column.

Splunk join with an in-memory record

Sorry for the lame question; I am new to Splunk.
What I am trying to do is join my search result with a fake record declared in the search body, something like
index=...
| join type=outer <column>
[ | <here declare a record to join with>
......
The idea is to make sure there is at least one record in the resulting search. The following cases are expected:
the original search returns records
the original search does not return anything because the result is filtered
the original search does not return anything because the source is empty
I need to distinguish cases 2 and 3, which is what the join is for. The fake record would eliminate case 3, so I would only need to filter the result.
There's a better way to handle the case of no results returned. Use the appendpipe command to test for that condition and add fields needed in later commands.
| appendpipe [ stats count | eval column="The source is empty"
| where count=0 | fields - count ]

Auto-assigning objects to users based on priority in Postgres/Ruby on Rails

I'm building a rails app for managing a queue of work items. I have several types of users ("access levels") to whom I want to auto-assign these work items.
The end goal is an "Auto-assign" button on one of my views that will automatically grab the next work item based on a priority, which is defined by the user's access level.
I'm trying to set up a class method in my work_item model to automatically sort work items by type based on the user's access level. I am looking at something like this:
def self.auto_assign_next(access_level)
  case
  when access_level == 2
    where("completed = 'f'").order("requested_time ASC").limit(1)
  when access_level > 2
    where("completed = 'f'").order("CASE WHEN form='supervisor' THEN 1 WHEN form='installer' THEN 2 WHEN form='repair' THEN 3 WHEN form='mail' THEN 4 WHEN form='hp' THEN 5 ELSE 6 END").limit(1)
  end
end
This isn't very DRY, though. Ideally I'd like the sort order to be configurable by administrators, so setting up a separate table that holds the sort order might be best. The problem with that idea is that I have no idea how to get the priority order from that table into the [Postgre]SQL query. I'm new to SQL in general and somewhat lost with this one. Does anybody have any suggestions as to how this should be handled?
One fairly simple approach starts with turning your case statement into a new table that lists each form value and the precedence it should be sorted by:
 id | form       | precedence
----+------------+-----------
  1 | supervisor |          1
  2 | installer  |          2
(etc)
Create a model for this, say, FormPrecedences (not a great name, but I don't totally grok your data model so pick one that better describes it). Then, your query can look like this (note: I'm assuming your current model is called WorkItems):
when access_level > 2
  joins("LEFT JOIN form_precedences ON form_precedences.form = work_items.form")
    .where("completed = 'f'")
    .order("COALESCE(form_precedences.precedence, 6)")
    .limit(1)
The way this works isn't as complicated as it looks. A "left join" in SQL simply takes all the rows of the table on the left (in this case, work_items) and, for each row, finds all the matching rows from the table on the right (form_precedences, where "matching" is defined by the bit after the "ON" keyword: form_precedences.form = work_items.form), and emits one combined row. If no match is found, a LEFT JOIN will still emit a row, but with all the right-hand values being NULL. A normal join would skip any rows with no right-hand match found.
Anyway, with the precedence data joined on to our work items, we can just sort by the precedence value. But, in case no match was found during the join above, that value will be NULL -- so, I use COALESCE (which returns the first of its arguments that's not NULL) to default to a precedence of 6.
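Putting the pieces together, a rough sketch of the model and the refactored class method might look like this (shown with the conventional singular model name FormPrecedence, and carrying over the default precedence of 6 from above; treat it as a starting point rather than a drop-in implementation):
# app/models/form_precedence.rb  (columns: form [string], precedence [integer])
class FormPrecedence < ActiveRecord::Base
end

# app/models/work_item.rb
class WorkItem < ActiveRecord::Base
  def self.auto_assign_next(access_level)
    incomplete = where("completed = 'f'")
    case
    when access_level == 2
      incomplete.order("requested_time ASC").limit(1)
    when access_level > 2
      # Sort by the admin-configurable precedence table; unknown forms fall back to 6.
      incomplete.joins("LEFT JOIN form_precedences ON form_precedences.form = work_items.form")
                .order("COALESCE(form_precedences.precedence, 6)")
                .limit(1)
    end
  end
end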
Hope that helps!

Comparing values in two columns of two different Splunk searches

I am new to Splunk and facing an issue comparing values in two columns from two different queries.
Query 1
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" A_to="**" A_from="**" | transaction call_id keepevicted=true | search "xyz event:" | table _time, call_id, A_from, A_to | rename call_id as Call_id, A_from as From, A_to as To
Query 2
index="abc_ndx" source="*/ jkdhgsdjk.log" call_id="**" B_to="**" B_from="**" | transaction call_id keepevicted=true | search " xyz event:"| table _time, call_id, B_from, B_to | rename call_id as Call_id, B_from as From, B_to as To
These are my two different queries. I want to compare each value in the A_from column with each value in the B_from column, and if a value matches, display those values of A_from.
Is it possible?
I have run the two queries separately, exported the results of each to CSV and used the VLOOKUP function. But the problem is that there is a limit of at most 10000 rows that can be exported, so I miss out on a lot of data, as my search returns more than 10000 records.
Any help?
I haven't got any data to test this on at the moment; however, the following should point you in the right direction.
When you have the table for the first query sorted out, you should 'pipe' the search string to an appendcols command with your second search string. This command will allow you to run a subsearch and "import" its columns into your base search.
Once you have the two columns in the same table, you can use the eval command to create a new field that compares the two values and assigns a value as you desire.
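A rough sketch of the combined search, built from the two queries in the question, might look like the following (note that appendcols pastes result rows together positionally, so the comparison only lines up if both searches return their rows in the same order - treat this as a starting point rather than a finished query):
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" A_to="**" A_from="**"
| transaction call_id keepevicted=true
| search "xyz event:"
| table _time, call_id, A_from, A_to
| appendcols [ search index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" B_to="**" B_from="**"
    | transaction call_id keepevicted=true
    | search "xyz event:"
    | table B_from, B_to ]
| eval matched_from=if(A_from == B_from, A_from, null())
| where isnotnull(matched_from)
| table call_id, matched_from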
Hope this helps.
http://docs.splunk.com/Documentation/Splunk/5.0.2/SearchReference/Appendcols
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eval
I'm not sure why there is a need to keep this as two separate queries. Everything is coming from the same sourcetype, and is using almost identical data. So I would do something like the following:
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" (A_to="**" A_from="**") OR (B_to="**" B_from="**")
| transaction call_id keepevicted=true
| search "xyz event:"
| eval to=if(A_from == B_from, A_from, "no_match")
| table _time, call_id, to
This grabs all events from your specified index and source that have a call_id and either A_to and A_from or B_to and B_from. Then it runs the transaction over all of that and lets you filter based on "xyz event:" (whatever that is).
Then it creates a new field called 'to', which shows A_from when A_from == B_from; otherwise it shows "no_match" (a placeholder, since you didn't specify what should be done when they don't match).
There is also a way to potentially tackle this without using transactions, although without more details about the underlying data I can't say for sure. The basic idea is that if you have a common field (call_id in this case), you can just use stats to collect values associated with that field instead of an expensive transaction command.
For example:
index="abc_ndx" index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**"
| stats last(_time) as earliest_time first(A_to) as A_to first(A_from) as A_from first(B_to) as B_to first(B_from) as B_from by call_id
Using first() or last() doesn't actually matter if there is only one value per call_id. (You can even use min(), max() or avg() and you'll get the same thing.) Perhaps this will help you get to the output you need more easily.

Using SharePoint's Data Query Webpart to link two lists

I have two SharePoint Lists: A & B. List A has a column where the user can add multiple references (displayed as hyperlinks) from each entry to entries in B:
A:
... | RefB  | ...
-----------------
... | B1    | ...
... | B2,B3 | ...
... | B1,B3 | ...

B:
Name | OtherColumns...
----------------------
B1   |
B2   |
B3   |
Now I want to display all entries from list B that are referenced by a (specific) entry in A. I.e.: I set the filter to [Entry 2] and the web part displays all the stuff from entries B2 and B3. Is this even possible?
I think the problem you've got, which is ruining some of the ways I'm thinking of solving it, is that the RefB column is multi-valued. You may have some joy doing filtering with the DataView, but it might get messy fast as you try to split RefB on the comma and compare against the resulting array of values.
I think the problem could be made easier by having only a single value in the RefB column.
Three solutions come to mind.
Have only one value in RefB per item in Table A and repeat the other fields in Table A. You'd have to accept some data redundancy and would need to be careful with data entry.
The normal relational-database way of solving your data redundancy problem would be to have a third table joining table A to table B (a small sketch of such a link list is shown after these options). If you're not familiar with relational database techniques, there are lots of straightforward tutorials on data normalisation on the net. While it's some more work, it may lead to a cleaner solution. Be careful when trying to fake a relational database within SharePoint though - it's not meant for relational data. You may be better off using a SQL database.
Put everything in one table, though I think you've already ruled this one out.
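For illustration, the link list from option 2 might look something like this (list and column names are made up):
AtoB:
ItemA  | ItemB
--------------
Entry2 | B2
Entry2 | B3
Entry3 | B1
Each row links one item in A to one item in B, replacing the multi-valued RefB column.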
