FitNesse: Convert Fit fixture to Slim - fitnesse

I am looking for a way to convert a Fit fixture for a FitNesse test to Slim.
I have the command-line Fit fixture (CommandLineFixture).
Since my whole FitNesse test system runs on Slim, I need a Slim equivalent of CommandLineFixture so I can execute a bash script from my test.
Any other workaround for this would also work for me.
I am trying to execute a script from a FitNesse test; the script writes some text into a file on the server where my FitNesse server is running.
What I am observing with the provided fixture is that the file is opened but no text is written into it.
So I wanted to check whether FitNesse has any constraint on executing a script that writes into a file.
I have also given full rwx permissions to the text file.
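For reference, the kind of write-to-file script described might be as simple as this minimal sketch (the path and the text are hypothetical; the actual script is not shown in the question):
#!/bin/bash
# hypothetical: append a line to a file on the machine running the FitNesse server
echo "text written from the FitNesse test" >> /tmp/fitnesse-output.txt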
Below is my modified script:
!define TEST_SYSTEM {slim}
!path ../fixtures/*.jar
|Import |
| nl.hsac.fitnesse.fixture.slim.ExecuteProgramTest |
|script |
|set |-c |as argument|0 |
|set |ls -l / |as argument|1 |
|execute|/bin/bash |
|check |exit code |0 |
|show |standard out |
|check |standard error|!--! |
Executing the above test produces no output and gives this result:
Test Pages: 0 right, 0 wrong, 1 ignored, 0 exceptions
Assertions: 0 right, 0 wrong, 0 ignored, 0 exceptions
(0.456 seconds)

I already had a helper method to start a program in my fixture library, but I started work on a dedicated fixture today. Would the execute program test fixture work for you?
Example usage:
We can run a program with some arguments, check its exit code and show its output.
|script |execute program test |
|set |-c |as argument|0|
|set |ls -l / |as argument|1|
|execute|/bin/bash |
|check |exit code |0 |
|show |standard out |
|check |standard error|!--! |
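The table above is roughly equivalent to running the following from a shell; argument 0 and argument 1 are passed to the program in order:
/bin/bash -c 'ls -l /'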
The default timeout for program execution is one minute, but we can set a custom timeout. Furthermore, we can control the directory the program is invoked from, set all arguments using a list, and get its output 'unformatted'.
|script |execute program test |
|check |timeout |60000 |
|set timeout of |100 |milliseconds|
|set working directory|/ |
|set |-c, ls -l |as arguments|
|execute |/bin/bash |
|check |exit code |0 |
|show |raw standard out |
|check |raw standard error|!--! |
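Roughly the shell equivalent, using GNU timeout only as an approximation of the fixture's internal timeout handling:
cd / && timeout 0.1 /bin/bash -c 'ls -l'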
The timeout can also be set in seconds, and environment variables can be passed (the process's output is escaped to ensure it is displayed properly).
|script |execute program test |
|set timeout of|1 |seconds |
|set value |Hi <there> |for |BLA|
|set |-c |as argument|0 |
|set |!-echo ${BLA}-!|as argument|1 |
|execute |/bin/bash |
|check |exit code |0 |
|check |raw standard out |!-Hi <there>
-!|
|check|standard out|{{{Hi <there>
}}}|
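The environment-variable example corresponds roughly to this shell invocation, which prints Hi <there>:
BLA='Hi <there>' /bin/bash -c 'echo ${BLA}'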

Related

Grep finds the word and then gets stuck (bash)

I have a while loop (extracting 2 variables) in which one command is not working. Even when I run the command directly in the console (substituting the variable myself), it prints the result but then keeps running without making any progress.
The command's purpose is to search a big file.gct, specifically its first three lines, for a string obtained from another file, and then print the match together with everything before it on that line.
If someone knows why it gets stuck and how to fix it, or an alternative that works well in loops and does not demand more RAM, it would be appreciated.
head -3 file_2 | grep -E -o ".{0,1000}$variable."
An example of what the big file (file_2) looks like:
head -3 file_2
| #1.2 |
| 57000 | 17300 |
|Irrelevant|Irrelevant2| DATA-B12-18 | DATA-Y17-72 | DATA-A12-44 | .... |
When I run in the terminal: head -3 file_2 | grep -E -o ".{0,1000}DATA-B12-18"
the output is:
Irrelevant Irrelevant2 DATA-B12-18 and then it gets stuck.
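One possible alternative (a sketch, untested against this data) is to avoid the regex scan over the very long third line and use a plain substring search in awk, printing the match and everything before it on that line:
head -3 file_2 | awk -v pat="$variable" '{
  i = index($0, pat)                              # plain substring search, no regex
  if (i) print substr($0, 1, i + length(pat) - 1) # match plus everything before it on the line
}'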

Alternative to grep in a Lua script

I have the following text file output.txt that I created (it has 15 columns, counting the | symbols):
[66] | alert:n | 3.0 | 10/22/2020-14:45:50.066928 | local_ip | 123.123.123.123 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[67] | alert:n | 3.0 | 10/22/2020-14:45:51.096955 | local_ip | 12.12.12.11 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[68] | alert:n | 3.0 | 10/22/2020-14:45:53.144942 | 123.123.123.123 | local_ip | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[69] | alert:n | 3.0 | 10/22/2020-14:45:57.176956 | local_ip | 68.73.203.109 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[70] | alert:n | 3.0 | 10/22/2020-14:46:05.240953 | 123.123.123.123 | local_ip | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[71] | alert:n | 3.0 | 10/22/2020-14:46:21.624979 | local_ip | 68.73.203.109 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
I'm familiar with bash scripting; say I want to count how many times the specific IP 123.123.123.123 appears in the 9th column, I can implement it like this:
#!/bin/bash
ip="123.123.123.123"
report="output.txt"
# count occurrences of $ip in the 9th column, excluding the local_ip entries
src_ip_count=$(grep "${ip}" "${report}" | awk '{ print $9 }' | grep -v "local_ip" | uniq -c | awk '{ print $1 }')
echo "${src_ip_count}"
and the output is:
[root@me lua-output]# ./test.sh
2
How do I implement the same code above in Lua? I know the popen function can be used, but is there a native way to do this in Lua? Also, if I use popen, I need to pass the variables $ip and $report inside that command, and I'm not sure whether that's possible.
There's a bunch of ways to go about this, really. Assuming you read your data from stdin (though the same works for any file you manually open), you can do something like this:
local c = 0
for line in io.lines() do -- or file:lines() if you have a different file
  if line:find("123.123.123.123", 1, true) then -- only lines containing the IP we care about (plain find, so the dots are not pattern characters)
    if true then -- whatever other conditions you want to apply
      c = c + 1
    end
  end
end
print(c)
Lua doesn't have a concept of what a "column" is, so you have to build that yourself as well. Either use a pattern to count spaces, or split the string into a table and index it.
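For example, a small sketch that mirrors the awk '{ print $9 }' logic by splitting each line on whitespace and checking the 9th column (assuming the file is output.txt, as above):
-- count lines whose 9th whitespace-separated column is the target IP
local target = "123.123.123.123"
local count = 0
for line in io.lines("output.txt") do
  local cols = {}
  for col in line:gmatch("%S+") do
    cols[#cols + 1] = col
  end
  if cols[9] == target then
    count = count + 1
  end
end
print(count)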
You asked whether it is possible to use variables inside popen in Lua. It is possible, and you can use the grep command from Lua.
So in Lua you can do this:
-- lua script using grep example
local ip = "123.123.123.123"
local report = "output.txt"
local cmd = "grep -F " .. ip .. " " .. report .. " | awk '{ print $9 }' | grep -v 'local_ip' | uniq -c | awk '{ print $1 }'"
local handle = io.popen(cmd)
local src_ip_count = handle:read("*a")
print(src_ip_count)
handle:close()
output:
2

Conky cpubar filling in wrong way

I have a simple conky cpubar for monitoring CPU load, running on Debian 9 with KDE; here is the relevant part:
${image ~/script/conky/static/img/cpu.png -p 0,280 -s 26x26}\
${goto 40}${font monospace:bold:size=15}${color1}CPU ${font monospace:bold:size=10}(TOT: ${cpu cpu0}%) ${color0}${hr 5}${color white}
${font monospace:bold:size=11}\
${execi 99999 neofetch | grep 'CPU' | cut -f 2 -d ":" | sed 's/^[ \t]*//;s/[ \t]*$//' | sed 's/[\x01-\x1F\x7F]//g' | sed 's/\[0m//g' | sed 's/\[.*\]//'}\
[${execi 5 sensors | grep 'temp1' | cut -c16-22}]
${cpugraph cpu0 40,340 52ff00 6edd21}
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar 8,width_cpu_bar cpu1}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar 8,width_cpu_bar cpu2}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar 8,width_cpu_bar cpu3}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar 8,width_cpu_bar cpu4}
And this is the result:
Another example:
As you can see, the result looks good, but the cpubar fills don't work properly: all 4 bars show the same fill level. This is clearly visible in the last example, where CPU3 is at 100% core load and yet its bar is not completely full.
Where am I going wrong?
The cpu number comes before the height,width part, i.e. use
${cpubar cpu1 8,width_cpu_bar}
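Applied to the config in the question, the four bar lines become (width_cpu_bar is kept as the placeholder used above):
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar cpu1 8,width_cpu_bar}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar cpu2 8,width_cpu_bar}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar cpu3 8,width_cpu_bar}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar cpu4 8,width_cpu_bar}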

Does Release Management 2013 rollback across tags

We're heavy users of tags, and I'm confused about how tags and rollbacks interact.
I understand that rollbacks cascade (at least within a sequence) from this article:
http://incyclesoftware.com/2014/03/understanding-rollbacks-release-management/
But I'm not clear how this interacts with tags. We tag servers by the features installed on them (web, database, service) and vary the mix of features per environment (e.g. DEV might have web & services running on the same machine, while UAT & PROD would have separate machines).
So does the rollback go back across tag boundaries? For example, if your sequence looked like this:
+--Database tag --+
| Backup DB |
| | |
| Update DB |
| | | <- Runs against SQL server
| +--Rollback--+ |
| | Restore DB | |
| +------------+ |
+-----------------+
|
+---Web Tag-------+
| Do Stuff | <- Runs against WEB server
+-----------------+
|
+---Service tag----+
| Backup |
| | |
| Install new ver | <- Runs against Service server
| | |
| Smoke test |
| | |
| +--Rollback----+ |
| | Replace with | |
| | backup | |
| +--------------+ |
+------------------+
Would a rollback inside the service tag cause the database tag to execute its rollback? Do rollbacks cascade across sequences?
I haven't had time to set this up and test it yet, so I thought I'd ask the question instead.
By accident I've managed to test this out with a suitable release, and rollback does roll back across the tags, as @joerage says.
It appears I was wrong... faulty memory and all that. Rollbacks work across tag boundaries.
I generally recommend against using rollback blocks, since their behavior is generally backwards, unpredictable, and not immediately obvious. The current best practice is actually to not use agent-based releases at all, as they will not be portable to the forthcoming Release Management Service.

Cypher load CSV eager and long action duration

I'm loading a file with 85K lines (19 MB).
The server has 2 cores and 14 GB RAM, running CentOS 7.1 and Oracle JDK 8,
and the load can take 5-10 minutes with the following server config:
dbms.pagecache.memory=8g
cypher_parser_version=2.0
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
disk mounted in /etc/fstab:
UUID=fc21456b-afab-4ff0-9ead-fdb31c14151a /mnt/neodata ext4 defaults,noatime,barrier=0 1 2
added this to /etc/security/limits.conf:
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 40000
* hard nofile 40000
added this to /etc/pam.d/su
session required pam_limits.so
added this to /etc/sysctl.conf:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
disabled journal by running:
sudo e2fsck /dev/sdc1
sudo tune2fs /dev/sdc1
sudo tune2fs -o journal_data_writeback /dev/sdc1
sudo tune2fs -O ^has_journal /dev/sdc1
sudo e2fsck -f /dev/sdc1
sudo dumpe2fs /dev/sdc1
Besides that, when running the profiler I get lots of "Eager" operations, and I really can't understand why:
PROFILE LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line
FIELDTERMINATOR '|'
WITH line limit 0
MERGE (session :Session { wz_session:line.wz_session })
MERGE (page :Page { page_key:line.domain+line.page })
ON CREATE SET page.name=line.page, page.domain=line.domain,
page.protocol=line.protocol,page.file=line.file
Compiler CYPHER 2.3
Planner RULE
Runtime INTERPRETED
+---------------+------+---------+---------------------+--------------------------------------------------------+
| Operator | Rows | DB Hits | Identifiers | Other |
+---------------+------+---------+---------------------+--------------------------------------------------------+
| +EmptyResult | 0 | 0 | | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, page, session | MergeNode; Add(line.domain,line.page); :Page(page_key) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Eager | 9 | 0 | line, session | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, session | MergeNode; line.wz_session; :Session(wz_session) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +ColumnFilter | 9 | 0 | line | keep columns line |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Filter | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Extract | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +LoadCSV | 9 | 0 | line | |
+---------------+------+---------+---------------------+--------------------------------------------------------+
All the labels and properties have indexes / constraints.
Thanks for the help,
Lior
Hey Lior,
we tried to explain the Eager loading here:
And Mark's original blog post is here: http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
Rik tried to explain it in easier terms:
http://blog.bruggen.com/2015/07/loading-belgian-corporate-registry-into_20.html
Trying to understand the "Eager Operation"
I had read about this before, but did not really understand it until Andres explained it to me again: in all normal operations, Cypher loads data lazily. See for example this page in the manual - it basically just loads as little as possible into memory when doing an operation. This laziness is usually a really good thing. But it can get you into a lot of trouble as well - as Michael explained it to me:
"Cypher tries to honor the contract that the different operations
within a statement are not affecting each other. Otherwise you might
up with non-deterministic behavior or endless loops. Imagine a
statement like this:
MATCH (n:Foo) WHERE n.value > 100 CREATE (m:Foo {m.value = n.value + 100});
If the two statements would not be
isolated, then each node the CREATE generates would cause the MATCH to
match again etc. an endless loop. That's why in such cases, Cypher
eagerly runs all MATCH statements to exhaustion so that all the
intermediate results are accumulated and kept (in memory).
Usually
with most operations that's not an issue as we mostly match only a few
hundred thousand elements max.
With data imports using LOAD CSV,
however, this operation will pull in ALL the rows of the CSV (which
might be millions), execute all operations eagerly (which might be
millions of creates/merges/matches) and also keeps the intermediate
results in memory to feed the next operations in line.
This also
disables PERIODIC COMMIT effectively because when we get to the end of
the statement execution all create operations will already have
happened and the gigantic tx-state has accumulated."
So that's what's going on in my LOAD CSV queries: MATCH/MERGE/CREATE caused an Eager pipe to be added to the execution plan, and it effectively disables the batching of my operations with USING PERIODIC COMMIT. Apparently quite a few users run into this issue even with seemingly simple LOAD CSV statements. Very often you can avoid it, but sometimes you can't.
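As the linked posts describe, a common workaround is to make several passes over the CSV, each with a single MERGE, so no Eager operation is needed and USING PERIODIC COMMIT can batch the work. A sketch based on the query above (without the limit 0, which was only there for profiling):
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (session:Session { wz_session: line.wz_session });

USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (page:Page { page_key: line.domain + line.page })
ON CREATE SET page.name = line.page, page.domain = line.domain,
              page.protocol = line.protocol, page.file = line.file;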

Resources