How to change Saxon param=values

SAXON 6.5.4 from Michael Kay
Usage: java com.icl.saxon.StyleSheet [options] source-doc style-doc {param=value}...
Options:
-a Use xml-stylesheet PI, not style-doc argument
-ds Use standard tree data structure
-dt Use tinytree data structure (default)
-o filename Send output to named file or directory
-m classname Use specified Emitter class for xsl:message output
-r classname Use specified URIResolver class
-t Display version and timing information
-T Set standard TraceListener
-TL classname Set a specific TraceListener
-u Names are URLs not filenames
-w0 Recover silently from recoverable errors
-w1 Report recoverable errors and continue (default)
-w2 Treat recoverable errors as fatal
-x classname Use specified SAX parser for source file
-y classname Use specified SAX parser for stylesheet
-? Display this message

If your stylesheet declares a parameter:
<xsl:param name="iridescent"/>
then you can set it from the command line with (for example):
java com.icl.saxon.StyleSheet source.xml style.xsl iridescent=no
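Multiple param=value pairs can be mixed with the options listed above. A minimal sketch, with a hypothetical output file and a hypothetical second parameter named verbose:

java com.icl.saxon.StyleSheet -o out.xml source.xml style.xsl iridescent=no verbose=yes

Note the ordering implied by the usage line: options such as -o come before the source and stylesheet arguments, while param=value pairs come after them.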

Related

Use a file as a capture filter in Wireshark

Is it possible to use a file containing capture filters as a filter itself? Instead of writing each filter as -f ...... -f ......, can I have a file that contains all the filters I wish to capture with? What should the format of this file be, and how do I create it? Something like:
"Filter1" udp
"Filter2" ip6
........
What would the expression be when using this file from CMD? dumpcap -i 5 -???????? -w capture.pcapng
I expect an expression to type in CMD in order to use a file as a capture filter instead of manually writing all filters as -f ........ -f ........
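As far as I know, dumpcap has no built-in option to read capture filters from a file, but you can build a single BPF expression from such a file with command substitution. A sketch (using a Unix-style shell rather than CMD), assuming filters.txt holds one filter per line and that combining them with "or" is the semantics you want:

# join the lines of filters.txt into one expression: "udp or ip6 or ..."
dumpcap -i 5 -f "$(paste -sd'|' filters.txt | sed 's/|/ or /g')" -w capture.pcapng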

Does Mercurial have a template to capture output of "hg grep"?

I was searching for a change that included "foreach" so I used this Mercurial command:
$ hg grep -r "user(mjh) & public() & date(-30)" --diff -i foreach
and it does return the hits where "foreach" was added and removed.
However, I'd like to know the actual commit hashes too. If I add a template:
$ hg grep ... -T '{date|shortdate}\n{node|short}\n{desc|firstline}\n\n'
then I get the commit hash and description as expected, but then I don't see the changed files listed.
Is there a template to capture the output of hg grep? The {files} template lists the files associated with a commit, but that's not the actual grep output. Is there an iterable template keyword available for the grep results?
Please re-read hg help grep -v carefully (-v is the important option) and note the following part (it was new and unexpected for me too):
The following keywords are supported in addition to the common template keywords and functions. See also 'hg help templates'.
change String. Character denoting insertion "+" or removal "-".
Available if "--diff" is specified.
lineno Integer. Line number of the match.
path String. Repository-absolute path of the file.
texts List of text chunks.
With these keywords you can reproduce the default output of grep in your template (only approximately, because some details will differ slightly):
>hg grep --diff -i -r 1166 to_try
>hg grep --diff -i -r 1166 -T "{path}:{rev}:{change}:{texts}\n" to_try
hggit/compat.py:1166:-: for args in parameters_to_try:
hggit/compat.py:1166:+: for (args, kwargs) in parameters_to_try:
and after replacing {rev} with {node|short}:
>hg grep --diff -i -r 1166 -T "{path}:{node|short}:{change}:{texts}\n" to_try
hggit/compat.py:f6cef55e6aeb:-: for args in parameters_to_try:
hggit/compat.py:f6cef55e6aeb:+: for (args, kwargs) in parameters_to_try:
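The grep-specific keywords combine freely with the common ones, so a template along these lines (a sketch; adjust the fields to taste) also shows the line number of each hit:

>hg grep --diff -i -T "{node|short} {path}:{lineno}:{change}:{texts}\n" to_try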

Grepping list of phpass hashes against a file

I'm trying to grep multiple strings, which look like the ones below (there are a few hundred of them), against a file whose lines have the form data:string.
Example strings (no sensitive data is included; they have been modified):
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1
I've been researching how to grep a file of patterns against another file, and came across the following commands
grep -f strings.txt datastring.txt > output.txt
grep -Ff strings.txt datastring.txt > output.txt
But unfortunately, these commands do not work as expected: they only print a handful of results to my output file. I think it may be something to do with the symbols contained in strings.txt, but I'm unsure. Any help/advice would be great.
To further mention, I'm using Cygwin on Windows (if this is relevant).
Here's an updated example:
strings.txt contains the following:
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1
datastring.txt contains the following:
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1:53491
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91:03221
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/:20521
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1:30142
So technically, all lines should be included in the output file, but only this line is printed:
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1:30142
I just don't understand.
You have shown the output of cat -A strings.txt elsewhere, which includes ^M, representing a CR (carriage return) character, at the end of each line.
This indicates your file has Windows line endings (CR LF) instead of the Unix line endings (LF only) that grep expects.
You can convert files with dos2unix strings.txt and back with unix2dos strings.txt.
Alternatively, if you don't have dos2unix installed in your Cygwin environment, you can also do that with sed.
sed -i 's/\r$//' strings.txt # dos2unix
sed -i 's/$/\r/' strings.txt # unix2dos
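If you would rather not modify strings.txt in place, a process-substitution variant (bash, works under Cygwin) strips the carriage returns on the fly:

# remove trailing CRs from the pattern file before fixed-string matching
grep -Ff <(sed 's/\r$//' strings.txt) datastring.txt > output.txt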

How to see the GNU debuglink value of an ELF file?

So I can add a link to a debug symbol file like this objcopy --add-gnu-debuglink=$name.dbg $name, but how can I later retrieve that value again?
I checked with readelf -a and grepped for \.dbg without any luck. Similarly, I checked with objdump -sj .gnu_debuglink (.gnu_debuglink being the section) and could see the value there:
$ objdump -sj .gnu_debuglink helloworld|grep \.dbg
0000 68656c6c 6f776f72 6c642e64 62670000 helloworld.dbg..
However, is there a command that allows me to retrieve the exact value again (i.e. helloworld.dbg in the above example)? That is, the file name only ...
I realize I could use some shell foo here, but it seems odd that an option exists to set this value but none to retrieve it. So I probably just missed it.
You can use readelf directly:
$ readelf --string-dump=.gnu_debuglink helloworld
String dump of section '.gnu_debuglink':
[ 0] helloworld
[ 1b] 9
I do not know for certain what the second entry means (it seems to always be different), but it is presumably part of the four-byte CRC32 checksum stored after the file name. To get rid of the header and the offsets, you can use sed:
$ readelf --string-dump=.gnu_debuglink helloworld | sed -n '/]/{s/.* //;p;q}'
helloworld
Something like this should work:
objcopy --output-target=binary --set-section-flags .gnu_debuglink=alloc \
--only-section=.gnu_debuglink helloworld helloworld.dbg
--output-target=binary avoids adding ELF headers. --set-section-flags .gnu_debuglink=alloc is needed because objcopy only writes allocated sections by default (with the binary emulation). And --only-section=.gnu_debuglink finally selects just the section we want. See this earlier answer.
Note that the generated file may have a trailing NUL byte and four bytes of CRC, so some post-processing is needed to extract everything up to the first NUL byte (perhaps using head -z -n 1 helloworld.dbg | tr -d '\0' or something similar).
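Putting the pieces together, a minimal sketch of the complete extraction (file names as in the example above; assumes GNU binutils and coreutils):

tmp=$(mktemp)
objcopy --output-target=binary --set-section-flags .gnu_debuglink=alloc \
    --only-section=.gnu_debuglink helloworld "$tmp"
# the section layout is: file name, NUL padding to 4-byte alignment, 4-byte CRC32;
# keep only the bytes up to the first NUL
head -z -n 1 "$tmp" | tr -d '\0'
rm -f "$tmp"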

HDF5 integration with ROOT framework

I've worked extensively with ROOT, which has its own format for data files, but for various reasons we would like to switch to HDF5 files. Unfortunately we'd still require some way of translating files between the formats. Does anyone know of any existing libraries that do this?
You might check out rootpy, which has a facility for converting ROOT files into HDF5 via PyTables: http://www.rootpy.org/commands/root2hdf5.html
If this issue is still of interest to you, recently there have been large improvements to rootpy's root2hdf5 script and the root_numpy package (which root2hdf5 uses to convert TTrees into NumPy structured arrays):
root2hdf5 -h
usage: root2hdf5 [-h] [-n ENTRIES] [-f] [--ext EXT] [-c {0,1,2,3,4,5,6,7,8,9}]
[-l {zlib,lzo,bzip2,blosc}] [--script SCRIPT] [-q]
files [files ...]
positional arguments:
files
optional arguments:
-h, --help show this help message and exit
-n ENTRIES, --entries ENTRIES
number of entries to read at once (default: 100000.0)
-f, --force overwrite existing output files (default: False)
--ext EXT output file extension (default: h5)
-c {0,1,2,3,4,5,6,7,8,9}, --complevel {0,1,2,3,4,5,6,7,8,9}
compression level (default: 5)
-l {zlib,lzo,bzip2,blosc}, --complib {zlib,lzo,bzip2,blosc}
compression algorithm (default: zlib)
--script SCRIPT Python script containing a function with the same name
that will be called on each tree and must return a tree or
list of trees that will be converted instead of the
original tree (default: None)
-q, --quiet suppress all warnings (default: False)
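For example, a typical invocation (with a hypothetical input file) might look like this, producing data.h5 next to the input and overwriting it if it already exists:

root2hdf5 -f -c 9 -l blosc data.root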
As of when I last checked (a few months ago), root2hdf5 had a limitation: it could not handle TBranches that are arrays. For this reason I wrote a bash script, root2hdf (sorry for the uncreative name).
It takes a ROOT file and the path to the TTree in the file as input arguments and generates source code & compiles to an executable which can be run on ROOT files, converting them into HDF5 datasets.
It also has the limitation that it cannot handle compound TBranch types, but I don't believe root2hdf5 handles those either.
