Flip doxygen's graphs from top-to-bottom orientation to left-to-right orientation

The doxygen graphs for "includes" and "is included by" are created with nesting depth increasing from top to bottom (using 1.8.5).
Since we have mostly shallow graphs with many nodes, this leads to very wide graphs with ugly horizontal scroll bars. Is there a way to teach doxygen to create these graphs in a left-to-right orientation, the way it creates caller/call graphs?
I know that graphviz/dot supports this, but I can't find a way to tell doxygen my preference.

There is a similar question that was asked recently, to which I am posting the same answer:
Doxygen: Is it possible to control the orientation of dependency graphs?
After looking for the same myself and finding nothing, the best I can offer is a hack using the graph attribute rankdir.
Step 1) Make sure Doxygen keeps the dot files. Put DOT_CLEANUP = NO in your config file.
Step 2) Find the dot files that Doxygen generated. They should be of the form *__incl.dot. For the steps below I will refer to such a file as <source>.dot.
Step 3a) Assuming the dot file does not explicitly specify rankdir (usually it is "TB" by default), regenerate the output with this command:
dot -Grankdir="LR" -Tpng -o<source>.png -Tcmapx -o<source>.map <source>.dot
Step 3b) If for some reason rankdir is specified in the dot file, go into the file and change it to rankdir="LR" (by default rankdir is set to "TB"):
digraph "AppMain"
{
rankdir="LR";
...
Then regenerate the output with:
dot -Tpng -o<source>.png -Tcmapx -o<source>.map <source>.dot
You need to redo this after every run of Doxygen, so a batch file might be handy, especially if you want to process all files. For step 3b, batch-replacing text is outside the scope of this answer :). But this seems to be a good answer:
How can you find and replace text in a file using the Windows command-line environment?
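On Unix-like systems, a small shell script can automate step 3a. A minimal sketch (untested; it assumes you run it inside Doxygen's HTML output directory, where the *__incl.dot files end up):

#!/bin/sh
# Regenerate every include graph with a left-to-right layout.
for f in *__incl.dot; do
    base="${f%.dot}"
    dot -Grankdir="LR" -Tpng -o"${base}.png" -Tcmapx -o"${base}.map" "$f"
done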

Related

VIFM: Remapping control right/left in command-line mode

I'm not a big fan of the command-line readline keyboard shortcuts, so I'm hoping to remap C-Right/C-Left to navigate one word back/forward and C-BS and C-Del to delete one word back/forward. However, after reading the documentation and forums, I'm not able to figure out how to do this.
Currently, when C-left/C-right are typed in the command line, the cursor doesn't move and instead keycodes are inserted (C-left = [1;5D, C-right = [1;5C). I've tried many remappings but the mappings that I would think would work best for this are:
cmap <C-right> <A-f>
cmap <C-left> <A-b>
I was able to figure out how to delete one word back using the following mappings (on further review there is documentation in the VIFM manual regarding mapping BS):
cnoremap <BS> <C-w>
cnoremap <C-h> <C-w>
However, I'm still uncertain how to map deleting one word forward using C-Del. When I use the following remapping for C-Del, the result is that one character to the left of the cursor is deleted. Note that when I use other C-* combinations for delete word forward, the remapping actually works, making me think that it may not be possible to remap C-Del:
cmap <C-Del> <A-d>
I'm using VIFM version 0.12 on Arch Linux. Any suggestions?
The list of keys that are supported by angle-bracket notation is available in the documentation, and combinations of Ctrl with arrow keys are not among them.
See this GitHub issue for a discussion of why and an example of how to work around it:
" ctrl-right
cnoremap <esc>[1;5C <a-f>
" ctrl-left
cnoremap <esc>[1;5D <a-b>
" ctrl-del
cnoremap <esc>[3;5~ <a-d>
At this point I'm still not sure that these sequences are common enough among different terminal types to be hard-coded and not cause trouble.
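One quick sanity check before hard-coding the sequences: run cat with no arguments in your terminal and press the keys; the escape sequences are echoed literally. The output below is from one typical xterm-style terminal and may well differ on yours:

$ cat
^[[1;5C    (Ctrl-Right)
^[[1;5D    (Ctrl-Left)
^[[3;5~    (Ctrl-Del)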

How to define a multipage environment not interrupted by tables and figures?

I have defined a new LaTeX environment for excursions in a book chapter I am writing. The environment is multipage and often includes inline images. Moreover, I am using the shaded environment to give the environment a background colour to make it stand out a bit.
However, the environment, as shown below, is split up by floating tables and images, which makes the flow of the environment visually more difficult to follow. For example, it is now difficult to see whether a floating image or table is part of the excursion (the missing background colour does not help). So, I would like to extend my environment to disallow interruption by floating elements, but I do not know how to get that done.
\newcounter{bioclipse}
\def\thebioclipse{\thechapter-\arabic{bioclipse}}
\newenvironment{bioclipse}[2][]{\begin{small}\begin{shaded}\refstepcounter{bioclipse} \par\medskip\noindent%
\textbf{Bioclipse Excursion~\thebioclipse #1: #2
\vspace{0.1cm} \hrule \vspace{0.1cm}}
\rmfamily}{\medskip \end{shaded}\end{small}}
Any solution to disallow interruption is fine, even if the background colour is done differently.
The algorithm for insertions is rather complex. Basically you want any pending insertion not to be put onto the pages where the bioclipse environment applies. As a first quick solution, you could flush all the pending insertions first and then start the new chapter. If you want to put figures or whatever into the environment, and you want them to be "flushed" only after the last page where the environment is at work, a second quick solution is: put them directly after the environment! That way they won't "annoy" those pages at all (and, of course, avoid using footnotes).
The other solution (making it somewhat automatic) is a little bit tricky. Insertion places for "pending" material are chosen in the output routine, while the vertical list that is the page "candidate" is being constructed. This means that, at worst, you have to play with the output routine; but maybe that is too much, unless you're planning your own TeX format, and LaTeX may give easier choices...
Digging a bit into the LaTeX code, I see there is a conditional you can try to use: \@insert*, i.e. \@insertfalse and \@inserttrue. If you're lucky, they "drive" the possibility of placing insertions, so that at the beginning of your environment you can say \@insertfalse and at the end \@inserttrue. Try it; I am not saying it works.
As you may know, to use @ with letter catcode so that it can be part of a "command" name, you have to use \makeatletter, and \makeatother when you have finished (the default class/style preamble likely does it for you).
You could also be interested in having a look at the placeins package (it could already be in your installation; otherwise, see here), which apparently could solve (part of) your problem.
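Building on that last suggestion, a minimal sketch of the float-barrier approach (untested; it assumes the placeins package plus the shaded environment and bioclipse counter already set up in the question, and it replaces the original definition):

\usepackage{placeins}
\newenvironment{bioclipse}[2][]{%
  \FloatBarrier% flush every pending float before the excursion starts
  \begin{small}\begin{shaded}\refstepcounter{bioclipse}\par\medskip\noindent
  \textbf{Bioclipse Excursion~\thebioclipse #1: #2}%
  \vspace{0.1cm}\hrule\vspace{0.1cm}%
  \rmfamily
}{\medskip\end{shaded}\end{small}%
  \FloatBarrier% and keep later floats from drifting back onto these pages
}

This keeps floats from being placed inside the environment; it does not prevent page breaks within the environment itself.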

Transform a tex source so that all macros are replaced by their definition

Is it possible to see the output of the TeX ‘pre-processor’, i.e. the intermediate step before the actual output is produced, but with all user-defined macros replaced and only a subset of TeX primitives left?
Or is there no such intermediate step?
Write
\edef\xxx{Any text with any commands. For example, $\phantom x$.}
And then for output in the log-file
\show\xxx
or for output in your document
\meaning\xxx
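For example, a minimal complete document (the macro \greet and its argument are made up for illustration):

\documentclass{article}
\newcommand{\greet}[1]{Hello, #1!}
% \edef fully expands the body at definition time
\edef\xxx{\greet{world}}
\show\xxx % log shows: > \xxx=macro:->Hello, world!.
\begin{document}
\meaning\xxx % typesets: macro:->Hello, world!
\end{document}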
There is no "pre-processor" in TeX. The replacement text for any control sequence at any stage can vary (this is used for a lot of things!). For example
\def\demo{\def\demo{cde}}
\demo
will first define \demo in one way and then change it. In the same way, you can redefine TeX primitives. For example, the LaTeX kernel moves \input to an internal name and alters it. A simplified version:
\let\@@input\input
\def\input#1{\@@input#1 }
Try the Selective Macro Expander.
TeX has a lot of different tracing tools built in, including tracing of macro expansion. This only traces live macros as they are actually expanded, but it's still quite useful. Full details are in The TeXbook and probably elsewhere.
When I'm trying to debug a macro problem I generally just use the big hammer:
\tracingall\tracingonline
then I dig in the output or the .log file for what I want to know.
There's a lot of discussion of this issue on this question at tex.SE, and this question. But I'll take the opportunity to note that the best answer (IMO) is to use the de-macro program, which is a python script that comes with TeXLive. It's quite capable, and can handle arguments as well as simple replacements.
To use it, you move the macros that you want expanded into a <something>-private.sty file and include it in your document with \usepackage{<something>-private}, then run de-macro <mydocument>. It spits out <mydocument>-clean.tex, which is the same as your original, but with your private macros replaced by their definitions.
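A minimal sketch of that workflow (file names and the macro are placeholders):

% thesis-private.sty -- only the macros you want expanded
\newcommand{\R}{\mathbb{R}}

% thesis.tex
\documentclass{article}
\usepackage{amssymb}
\usepackage{thesis-private}
\begin{document}
$f \colon \R \to \R$
\end{document}

Running de-macro thesis then writes thesis-clean.tex, identical except that every \R has been replaced by \mathbb{R}.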

Define every symbol as a command in LaTeX

I'm working on a large project involving multiple documents typeset in LaTeX. I want to be consistent in my use of symbols, so it might be a nice idea to define a command for every symbol that has a specific meaning throughout the project. Does anyone have any experience with this? Are there issues I should pay attention to?
A little more specific: say that, throughout the document, I denote something called permeability by a script P; would it be an idea to define
\providecommand{\permeability}{\mathscr{P}}
or would this be more like the case "defining a command for $n$"?
A few tips:
Using \providecommand will define that command only if it's not been previously defined. So if you're not getting the results you expected, you may be trying to define a command that's been defined elsewhere.
If you wrap the math in your commands with \ensuremath, it will do the right thing regardless of whether you're in math mode when you issue the command:
\providecommand{\permeability}{\ensuremath{\mathscr{P}}}
Now I can easily use \permeability in text or $\permeability$ in math mode.
Using your own commands allows you to easily change the typographical representation of something later. For instance:
\newcommand{\vect}[1]{\ensuremath{\mathbf{#1}}}
would print \vect{x} as a boldfaced x. If you later decide you prefer arrows above your vectors, you could change the command to:
\newcommand{\vect}[1]{\ensuremath{\vec{#1}}}
I have been doing this for anything that has a specific meaning and is longer than a single symbol, mostly to save typing (note that these definitions rely on the xspace package):
\newcommand{\objId}{\mbox{$\mathit{objId}$}\xspace}
\newcommand{\insOp}[1]{#1\mbox{$^+$}\xspace}
\newcommand{\delOp}[1]{#1\mbox{$^-$}\xspace}
But then I noticed that I stopped making inconsistency errors (objId vs ObjId vs ObjID), so I agree that it is a nice idea.
However, I am not sure it is a good idea when the symbols in the output are, well, single Latin letters, as in:
\newcommand{\numOfObjs}{$n$}
It is too easy to type a single symbol and forget about it even though a command was defined for it.
EDIT: using your example, IMHO it'd be a good idea to define \permeability, because it is more than a single P that you have to type without the command. But it's a close call.
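For a multi-document project like the one described, one way to enforce the consistency is to collect all such definitions in a single shared style file and load it from every document. A minimal sketch (the package name and macros are placeholders; \mathscr needs mathrsfs or a similar package):

% projectsymbols.sty -- shared by every document in the project
\ProvidesPackage{projectsymbols}
\RequirePackage{mathrsfs}% for \mathscr
\RequirePackage{xspace}
\providecommand{\permeability}{\ensuremath{\mathscr{P}}\xspace}
\providecommand{\vect}[1]{\ensuremath{\mathbf{#1}}}

Each document then just says \usepackage{projectsymbols}, and a change of notation is made in one place.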

How do I "diff" multiple files against a single base file?

I have a configuration file that I consider to be my "base" configuration. I'd like to compare up to 10 other configuration files against that single base file. I'm looking for a report where each file is compared against the base file.
I've been looking at diff and sdiff, but they don't completely offer what I am looking for.
I've considered diff'ing the base against each file individually, but my problem then becomes merging those into a report. Ideally, if the same line is missing in all 10 config files (when compared to the base config), I'd like that reported in an easy-to-visualize manner.
Notice that some rows are missing in several of the config files (when compared individually to the base). I'd like to be able to put those on the same line (as above).
Note, the screenshot above is simply a mockup, and not an actual application.
I've looked at using some Delphi controls for this and writing my own (I have Delphi 2007), but if there is a program that already does this, I'd prefer it.
The Delphi controls I've looked at are TDiff, and the TrmDiff* components included in rmcontrols.
For people who are still wondering how to do this: diffuse is the closest answer; it does an N-way merge by displaying all the files and doing three-way merges among neighbours.
None of the existing diff/merge tools will do what you want. Based on your sample screenshot you're looking for an algorithm that performs alignments over multiple files and gives appropriate weights based on line similarity.
The first issue is weighting the alignment based on line similarity. Most popular alignment algorithms, including the one used by GNU diff, TDiff, and TrmDiff, do an alignment based on line hashes, and just check whether the lines match exactly or not. You can pre-process the lines to remove whitespace or change everything to lower-case, but that's it. Add, remove, or change a letter and the alignment thinks the entire line is different. Any alignment of different lines at that point is purely accidental.
Beyond Compare does take line similarity into account, but it really only works for 2-way comparisons. Compare It! also has some sort of similarity algorithm, but it is also limited to 2-way comparisons. Similarity scoring can slow down the comparison dramatically, and I'm not aware of any other component or program, commercial or open source, that even tries.
The other issue is that you also want a multi-file comparison. That means either running the 2-way diff algorithm a bunch of times and stitching the results together or finding an algorithm that does multiple alignments at once.
Stitching will be difficult: your sample shows that the original file can have missing lines, so you'd need to compare every file to every other file to get a bunch of alignments, and then you'd need to work out the best way to match those alignments up. A naive stitching algorithm is pretty easy to do, but it will get messed up by trivial matches (blank lines, for example).
There are research papers that cover aligning multiple sequences at once, but they're usually focused on DNA comparisons, so you'd definitely have to code it up yourself. Wikipedia covers a lot of the basics; then you'd probably need to switch to Google Scholar:
Sequence alignment
Multiple sequence alignment
Gap penalty
Try Scooter Software's Beyond Compare. It supports 3-way merge and is written in Delphi / Kylix for multi-platform support. I've used it pretty extensively (even over a VPN) and it's performed well.
for f in file1 file2 file3 file4 file5; do printf '%s\n\n' "$f" >> outF; diff "$f" baseFile >> outF; printf '\n\n' >> outF; done
Diff3 should help. If you're on Windows, you can get it from Cygwin or from GNU diffutils.
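A minimal usage sketch (file names are placeholders; diff3 expects the base file in the middle position):

diff3 variant1.conf base.conf variant2.conf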
I made my own diff tool, DirDiff, because I didn't want matching parts shown twice on screen, and I wanted differing parts placed above each other for easy comparison. You could use it in directory mode on a directory with an equal number of copies of the base file.
It doesn't render exports of diffs, but I'll list that as a feature request.
You might want to look at some merge components, as what you describe is exactly what merge tools do between the common base, the version-control file, and the local file. Except that you want more than 2 files (+ base)...
Just my $0.02
SourceGear Diffmerge is nice (and free) for windows based file diffing.
I know this is an old thread, but vimdiff does (almost) exactly what you're looking for, with the added advantage of being able to edit the files right from the diff perspective.
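A minimal usage sketch (file names are placeholders; all files are opened side by side in diff mode):

vimdiff base.conf variant1.conf variant2.conf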
Still, none of these solutions handles more than 3 files.
What I did is messier, but serves the same purpose (comparing the contents of multiple config files, with no limit except memory and Bash variables).
A while loop to read a file into an array:
loadsauce () {
# Read $SRC into the SRCCNT array, one line per element
index=0
while read -r "SRCCNT[$index]"
do let index=index+1
done < "$SRC"
}
Again for the target file
loadtarget () {
# Same again, reading $TRG into the TRGCNT array
index=0
while read -r "TRGCNT[$index]"
do let index=index+1
done < "$TRG"
}
A string comparison:
brutediff () {
# Brute-force string compare; probably duplicates what diff does.
# This is very ugly, but it compares every line in SRC against every line in TRG.
# Grep might do better; this version is included for completeness.
for selement in $(seq 0 $((${#SRCCNT[@]} - 1)))
do for telement in $(seq 0 $((${#TRGCNT[@]} - 1)))
   do [[ "${SRCCNT[$selement]}" == "${TRGCNT[$telement]}" ]] &&
      echo "${SRCCNT[$selement]} is in ${SRC} and ${TRG}" >> "$OUTMATCH"
   done
done
}
And finally a loop to run it against a list of files:
for sauces in $(cat "$SRCLIST")
do echo "Checking ${sauces}..."
   SRC=$sauces   # point loadsauce at the current file
   loadsauce
   loadtarget
   brutediff
   echo -n "Done, "
done
It's still untested/buggy and incomplete (it doesn't, for example, sort out duplicates or compile a list, for each line, of the files that share it), but it's definitely a move in the direction the OP was asking for.
I do think Perl would be better for this though.
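As a leaner take on the same brute-force idea, the whole inner comparison can be replaced with grep; a sketch (untested; file names are placeholders; -F takes the patterns literally, -x matches whole lines, -f reads the patterns from a file):

# lines of base.conf that also appear verbatim in other.conf
grep -Fxf base.conf other.conf
# lines of base.conf that are missing from other.conf
grep -vFxf other.conf base.conf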
