I'm declaring a custom environment, which takes 6 arguments, in the LaTeX preamble set up in conf.py:
% REQUIREMENT STYLE/FORMAT
\newenvironment{sphinxclasscustomizedrequirement}[6]{
\rule{15cm}{1pt}\\
\fontfamily{qcr}\selectfont
\color{red}
ARG1 = #1\\[1ex]
\color{black}
ARG2 = #2\\[1ex]
ARG3 = #3\\[1ex]
ARG4 = #4\\[1ex]
ARG5 = #5\\[1ex]
\rule{15cm}{1pt}\\
}{}
I wrap some blocks in this environment using a container directive:
.. container:: customizedrequirement

   HELLO WORLD THIS IS A TEST
However, I cannot figure out how to specify the 6 arguments of this environment.
I want to generate this LaTeX code:
\begin{sphinxuseclass}{customizedrequirement}{123456}{LOREM}{IPSUM}{DOLOR}{SIT}
\sphinxAtStartPar
HELLO WORLD THIS IS A TEST
\end{sphinxuseclass}
But I can't figure out how to do that in rst.
If I specify the arguments as options, the way the code-block directive takes them:
.. container:: customizedrequirement
   :a: A
   :b: B
   :c: C
   :d: DEBUG

   HELLO WORLD THIS IS A TEST
then it generates this:
\begin{sphinxuseclass}{customizedrequirement}
\begin{sphinxuseclass}{a}
\begin{sphinxuseclass}{a}
\begin{sphinxuseclass}{b}
\begin{sphinxuseclass}{b}
\begin{sphinxuseclass}{c}
\begin{sphinxuseclass}{c}
\begin{sphinxuseclass}{d}
\begin{sphinxuseclass}{debug}
\sphinxAtStartPar
HELLO WORLD THIS IS A TEST
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
\end{sphinxuseclass}
How do I generate a call to my environment with the specified arguments?
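The only workaround I can think of is to emit the LaTeX verbatim with the raw directive (an untested sketch; the argument values are placeholders, the environment is called directly instead of through sphinxuseclass, and it only affects the LaTeX builder), but I would prefer a solution that goes through the container/class mechanism:

.. raw:: latex

   \begin{sphinxclasscustomizedrequirement}{123456}{LOREM}{IPSUM}{DOLOR}{SIT}{AMET}

HELLO WORLD THIS IS A TEST

.. raw:: latex

   \end{sphinxclasscustomizedrequirement}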
local code = [[
client_script 'Bait.lua'
client_script 'Test.lua' --Test
]]
How can I write a regex/pattern that captures everything between client_script ' and ' --Test?
The code seems to be Lua, and thus any pattern-based solution will fail if an equivalent but different piece of code is used instead (" instead of ', parentheses, line breaks, multi-line comments, etc.). Why not parse it as Lua?
local code = [[
client_script 'Bait.lua'
client_script 'Test.lua' --Test
]]
local scripts = {}
local newenv = {
    -- Replace client_script with a function that records the name
    client_script = function(name)
        table.insert(scripts, name)
    end
}
-- Compile the chunk with newenv bound as its _ENV, then run it
load("local _ENV=...;"..code)(newenv)
for i, v in ipairs(scripts) do
    print(v)
end
This parses and loads the code, but uses newenv as the environment, with a different definition of client_script that stores the value. Note that FiveM also uses client_scripts and a couple of other functions that will have to be present (but most of them can simply be stubbed as function()end); see the sketch below.
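A sketch of what newenv might look like with such stubs (which functions FiveM actually calls, and that client_scripts takes a table of names, are assumptions here; extend as needed):

local newenv = {
    client_script = function(name)
        table.insert(scripts, name)
    end,
    -- The plural form takes a table of names (assumed), so collect each entry
    client_scripts = function(names)
        for _, name in ipairs(names) do
            table.insert(scripts, name)
        end
    end,
    -- Functions whose effects we don't care about can be empty stubs
    server_script = function() end,
}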
Also, the load-based code above works only on Lua 5.2 and higher. The difference for Lua 5.1 is the line with load, which has to be changed to this:
setfenv(loadstring(code), newenv)()
The reason is that load and loadstring got merged in 5.2, and accessing the environment is only defined in terms of accessing the _ENV variable, so there is no specific environment attached to a function anymore.
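If the same script has to run on both 5.1 and 5.2+, the two variants can be folded into one helper; a sketch (the helper name is arbitrary):

-- Runs a chunk of source code with the given table as its environment,
-- branching on setfenv to detect Lua 5.1 vs. 5.2+
local function runWithEnv(code, env)
    if setfenv then
        -- Lua 5.1: compile first, then attach the environment
        local chunk = assert(loadstring(code))
        setfenv(chunk, env)
        chunk()
    else
        -- Lua 5.2+: the chunk receives the environment as its _ENV upvalue
        assert(load("local _ENV=...;" .. code))(env)
    end
end

runWithEnv(code, newenv)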
I am in the process of creating a Groovy email template for a Jenkins pipeline running Robot Framework tests. I intend to use Groovy's XMLSlurper to parse the output.xml created by Jenkins to extract the information I need. However, the template also relies on Robot Publisher, which I've now realized automatically deletes the output.xml. I would rather not archive the artifacts and access them that way. Is there a way to create, in the Jenkins pipeline before the Robot Publisher stage, a copy of the output.xml that will not be deleted by Robot Publisher and that I can parse in my email stage?
Please bear with me as I'm relatively new to Jenkins (and stackoverflow for that matter), so apologies if I've excluded vital information, but any ideas would be much appreciated! Thanks
I would approach your problem from a different angle. First of all, I would not suggest using Groovy's XMLSlurper or any other generic XML parser to extract the information you need from Robot Framework's output.xml.
What you should use instead is Robot Framework's own API, which already implements the parsers you need. You can easily access any information described in the robot.result.model module: suites, tests and keywords with all their attributes, such as test messages, failure messages, execution times, test results, etc.
All in all, this is the most future-proof parsing solution, as the parser will always match the version of the framework. Make sure to use the API documentation that matches your current framework version.
Now, back to your task: you should use the above-mentioned API via Robot Framework's listener interface. By implementing the output_file listener method, you can access the output.xml file before the Robot Publisher plugin moves it (you can even make a copy of it at this point). output_file is called automatically once the output.xml is ready, and the method receives the path to the XML file as input. You can pass this path straight to the ExecutionResult class from the API and then "visit" the results with your ResultVisitor to gather the information you need.
The last step is to write the data into a file that serves as the input to your e-mail stage. Note that this file won't be touched by Robot Publisher, since it is not a standard output but a custom one you just created using Robot Framework's API.
As this may sound like a lot, here is an example to demonstrate the idea. The listener and the result visitor live in EmailInputProvider.py:
from robot.api import ExecutionResult, ResultVisitor


class MyTestResultVisitor(ResultVisitor):
    def __init__(self):
        self.test_results = dict()

    def visit_test(self, test):
        self.test_results[test.longname] = test.status


class EmailInputProvider:
    ROBOT_LISTENER_API_VERSION = 3

    def output_file(self, path):
        output = 'EmailInput.txt'
        visitor = MyTestResultVisitor()  # Instantiate the result visitor
        result = ExecutionResult(path)   # Parse the execution result using the robot API
        result.visit(visitor)            # Visit the top-level suite to retrieve the needed data
        with open(output, 'w') as f:     # Write the retrieved data into a file
            for testname, result in visitor.test_results.items():
                print(f'{testname} - {result}', file=f)
        # You can make a copy of the output.xml here as well
        print(f'Email: Input saved into {output}')  # Log the custom output path to the console


# Replace the module with the class so `--listener EmailInputProvider` resolves to it
globals()[__name__] = EmailInputProvider
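Since the question is specifically about preserving output.xml before Robot Publisher runs, the commented line in output_file could be filled in with something like this (a minimal sketch; the destination name output_copy.xml is an arbitrary choice):

import shutil

# Inside EmailInputProvider.output_file, next to the existing comment:
shutil.copy(path, 'output_copy.xml')  # keep a copy that Robot Publisher won't remove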
The listener gives the following results for this dummy suite (SO2.robot):
*** Test Cases ***
Test A
    No Operation
Test B
    No Operation
Test C
    No Operation
Test D
    No Operation
Test E
    No Operation
Test F
    Fail
Console output:
$ robot --listener EmailInputProvider SO2.robot
==============================================================================
SO2
==============================================================================
Test A | PASS |
------------------------------------------------------------------------------
Test B | PASS |
------------------------------------------------------------------------------
Test C | PASS |
------------------------------------------------------------------------------
Test D | PASS |
------------------------------------------------------------------------------
Test E | PASS |
------------------------------------------------------------------------------
Test F | FAIL |
AssertionError
------------------------------------------------------------------------------
SO2 | FAIL |
6 critical tests, 5 passed, 1 failed
6 tests total, 5 passed, 1 failed
==============================================================================
Email: Input saved into EmailInput.txt
Output: ..\output.xml
Log: ..\log.html
Report: ..\report.html
Custom output file:
SO2.Test A - PASS
SO2.Test B - PASS
SO2.Test C - PASS
SO2.Test D - PASS
SO2.Test E - PASS
SO2.Test F - FAIL
I have used gflags in my test to define custom flags. How can I pass such a flag to my test when running it via the bazel test command?
For example: I can run a test multiple times using:
bazel test //xyz:my_test --runs_per_test 10
In the same command, I would like to pass a flag defined in my_test, say --use_xxx. How do I do so?
Use the --test_arg flag.
bazel test //xyz:my_test --runs_per_test=10 --test_arg=--use_xxx --test_arg=--some_number=42
From the docs:
--test_arg arg: Passes command-line options/flags/arguments to each test process. This
option can be used multiple times to pass several arguments, e.g.
--test_arg=--logtostderr --test_arg=--v=3.
You can also specify arguments for the test as part of the BUILD definition:
cc_test(
    name = "my_test",
    srcs = [".."],
    deps = [".."],
    args = ["--use_xxx", "--some_number=42"],
)
Alternatively, you can add a main function to your test and parse the flags there. It will look like this:
#include <gflags/gflags.h>
#include <gtest/gtest.h>

TEST(A, FUNC) {
    // Your test here.
}

int main(int argc, char** argv) {
    // Parse and strip the gflags flags first, so GoogleTest only sees its own
    gflags::ParseCommandLineFlags(&argc, &argv, /*remove_flags=*/true);
    testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
It works fine for me.
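For completeness, a flag such as --use_xxx would be defined in the test binary with gflags roughly like this (the flag name, default, and help text here just echo the question):

#include <gflags/gflags.h>

// Defines --use_xxx with default false, readable in code as FLAGS_use_xxx
DEFINE_bool(use_xxx, false, "Enables the xxx code path in the test.");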
let prefix prefixString baseString =
    prefixString + " " + baseString

prefix "Hello" "World"
With the code above I'm getting the error Stuff.fs(34,1): error FS0039: The value or constructor 'prefix' is not defined.
I'm not entirely sure why this is happening, as I'm watching a video series on F# in which literally the same code is compiled and run successfully. Is there something wrong with my environment?
In the comments, you mentioned that you are running the snippet using the "run selection" command. This command runs the selected piece of code in F# Interactive, which initially contains no definitions. So, if you select and run just the last line, you will get:
> prefix "Hello" "World";;
stdin(1,1): error FS0039: The value or constructor 'prefix' is not defined
This is because F# Interactive does not know what the definition of prefix is; it does not automatically look for it in your file. You can fix this by selecting everything and running all the code in a single interaction, or by first running the definition and then the last line, i.e.:
> let prefix prefixString baseString =
      prefixString + " " + baseString;;
val prefix : prefixString:string -> baseString:string -> string

> prefix "Hello" "World";;
val it : string = "Hello World"
Note that when you run the first command, F# Interactive will print the type of the defined functions, so you can see what has just been defined.
The fact that F# Interactive has its own separate "state of the world" is quite important, as it also means that you need to re-run functions after you change them so that subsequent commands use the new definition.
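For example, after changing prefix to use a different separator, you would send the new definition and re-run the call to pick up the change (a hypothetical session):

> let prefix prefixString baseString =
      prefixString + "-" + baseString;;
val prefix : prefixString:string -> baseString:string -> string

> prefix "Hello" "World";;
val it : string = "Hello-World"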
When working on compile-time features, it would be nice to echo something at compile time. If an echo is within a macro, it is already executed at compile time. But is it also possible to print something at compile time, e.g. from the global scope? I'm looking for a function like echoStatic in this:
echoStatic "Compiling 1. set of macros..."
# some macro definitions
echoStatic "Compiling 2. set of macros..."
# more macro definitions
There is no need for a special echoStatic. This is solved by the general solution of running code at compile time, which is to use a static block:
static:
  echo "Compiling 1. set of macros..."
# some macro definitions
static:
  echo "Compiling 2. set of macros..."
# more macro definitions
In languages like C, C++ and D, you can typically use a pragma for this job. This also works in Nim:
from strformat import `&`
const x = 3
{. hint: &"{$typeof(x)} x = {x}" .} # <file location> Hint: int x = 3
It also prints the file, the line and the column which can be useful for compile-time debugging.