Use clang to extract documentation comments as XML

I've seen this presentation from 2012 on clang features for handling C++ documentation comments (e.g. for Doxygen). Slide 20 mentions a new libclang feature to export comments as XML.
I'd like to try that feature out. More specifically, I want to test producing an XML like shown on slide 31.
But I don't know how. Which libclang tool is this part of? Has the feature been removed in the meantime? Or is it just an extra compiler flag?

They might just be referring to…
CXString clang_FullComment_getAsXML(CXComment Comment)
… which returns an XML document for the given comment. You still need to traverse the node trees yourself.
It would be spectacular to have an option for dropping an XML file containing the extracted documentation during a regular compile, but it doesn't seem to be in the cards.
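For reference, here is a minimal sketch of how that function can be driven from libclang. This is not an official clang tool; the file name "example.cpp" and the compile flag are placeholders, and the traversal simply recurses over every cursor and prints the XML for each full documentation comment it finds.

// xml_comments.cpp -- hypothetical example built on the libclang C API.
#include <cstdio>
#include <clang-c/Index.h>
#include <clang-c/Documentation.h>  // clang_Cursor_getParsedComment, clang_FullComment_getAsXML

static CXChildVisitResult visit(CXCursor cursor, CXCursor /*parent*/, CXClientData /*data*/) {
  // A "full comment" is the parsed documentation block attached to a declaration.
  CXComment comment = clang_Cursor_getParsedComment(cursor);
  if (clang_Comment_getKind(comment) == CXComment_FullComment) {
    CXString xml = clang_FullComment_getAsXML(comment);
    std::printf("%s\n", clang_getCString(xml));
    clang_disposeString(xml);
  }
  return CXChildVisit_Recurse;
}

int main() {
  const char *args[] = { "-std=c++17" };  // example compiler flags
  CXIndex index = clang_createIndex(/*excludeDeclarationsFromPCH=*/0, /*displayDiagnostics=*/1);
  CXTranslationUnit tu = clang_parseTranslationUnit(
      index, "example.cpp", args, 1, /*unsaved_files=*/nullptr, 0, CXTranslationUnit_None);
  if (!tu) { std::fprintf(stderr, "failed to parse\n"); return 1; }

  clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, nullptr);

  clang_disposeTranslationUnit(tu);
  clang_disposeIndex(index);
  return 0;
}

Building is roughly c++ xml_comments.cpp -lclang, plus whatever include and library paths your LLVM installation needs.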

Related

What is the purpose of \summary in TeX?

I have to write a report using LaTeX for my final year project at university. I've been given some example documents to learn from, and a common command, \summary, keeps appearing in them. However, what's written inside the summary doesn't appear anywhere in the produced document. Is it some kind of internal documentation?
Based on a quick Google search, \summary is most likely a shortcut used to reuse the abstract text in one place.
Looking at a few templates (a UCL thesis template, a book template, and a Stack Overflow example), it tends to be a custom command defined for a repeated style. Search the different files for "summary" to see if it appears in the preamble somewhere.

Parsing comments with clang

I am trying to use the clang tooling library as the basis for a future tool of mine.
What I would like to do with this tool is:
1. parse all of the source code (including the #included files) and detect any of my keywords in the comments (the comments will act as an interface between the programmer and my tool, which will do various things with the rest of the source code according to commands placed in those comments).
2. according to commands from the source code, do some refactoring of it
The refactoring itself will be done using the clang AST, as in the example below:
http://eli.thegreenplace.net/2014/07/29/ast-matchers-and-clang-refactoring-tools
What I am currently looking for is how to parse the comments within the same clang tooling run. I do not want to add a separate pass just for parsing the source code, because that work already has to be done by the tooling library.
Do you know how to get at the comments contained in the source code I am parsing with the tooling library?
Try -Wdocumentation and its associated options (such as -fparse-all-comments). If you use tools such as clang-check or clang-tidy, add these options to the compile commands database.
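Those options are the key part. If the tool is built on libTooling, you can also pull the comment attached to each declaration out of the ASTContext yourself. Below is a minimal sketch, not the asker's actual tool: it assumes a reasonably recent LLVM/Clang (12 or newer, where CommonOptionsParser::create exists), and the "@mytool" keyword, the tool name, and the namedDecl matcher are illustrative choices only.

// comment_scanner.cpp -- hypothetical example, not part of clang.
// Prints every declaration in the main file whose attached comment
// contains the made-up keyword "@mytool".
#include "clang/AST/ASTContext.h"
#include "clang/AST/RawCommentList.h"
#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "clang/Tooling/ArgumentsAdjusters.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/raw_ostream.h"

using namespace clang;
using namespace clang::ast_matchers;
using namespace clang::tooling;

static llvm::cl::OptionCategory ToolCategory("comment-scanner options");

class CommentScanner : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    const auto *D = Result.Nodes.getNodeAs<NamedDecl>("decl");
    if (!D)
      return;
    ASTContext &Ctx = *Result.Context;
    // The comment block written directly above the declaration, if any.
    if (const RawComment *RC = Ctx.getRawCommentForDeclNoCache(D)) {
      llvm::StringRef Text = RC->getRawText(Ctx.getSourceManager());
      if (Text.contains("@mytool")) // hypothetical command keyword
        llvm::outs() << D->getQualifiedNameAsString() << ": " << Text << "\n";
    }
  }
};

int main(int argc, const char **argv) {
  auto ExpectedParser = CommonOptionsParser::create(argc, argv, ToolCategory);
  if (!ExpectedParser) {
    llvm::errs() << ExpectedParser.takeError();
    return 1;
  }
  ClangTool Tool(ExpectedParser->getCompilations(),
                 ExpectedParser->getSourcePathList());
  // Keep ordinary // and /* */ comments too, not just Doxygen-style ones.
  Tool.appendArgumentsAdjuster(getInsertArgumentAdjuster(
      "-fparse-all-comments", ArgumentInsertPosition::BEGIN));

  CommentScanner Scanner;
  MatchFinder Finder;
  Finder.addMatcher(namedDecl(isExpansionInMainFile()).bind("decl"), &Scanner);
  return Tool.run(newFrontendActionFactory(&Finder).get());
}

If you don't have a compile_commands.json yet, the usual libTooling convention of passing flags after a double dash still applies, e.g. ./comment-scanner foo.cpp -- -std=c++17.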

How to integrate Sphinx-generated Latex code in existing Latex documents?

I've used Sphinx to document a Python library. So far this works great; I get nice HTML and LaTeX output. Concerning LaTeX, Sphinx generates a complete standalone document with lots of special packages and configuration.
But I would like to integrate the generated LaTeX files into an already existing LaTeX project (more precisely, into the appendix of a book). In particular, I want the Sphinx-generated documentation pages to have the header, footer, and section heading styles of the parent document. I guess I could transfer the relevant parts by manually removing unneeded material and adjusting various options in the .tex files generated by Sphinx. However, that will probably be very tedious fiddling that takes too much of my time (think of the conflicting packages and options I would have to detect and fix).
Does Sphinx's LaTeX builder support such a use case? If not, is there a more general approach to merging independent LaTeX documents?
Thanks for any hints!
It seems there is no generally valid answer to this question. I asked on the Sphinx mailing list and received an answer that basically says one has to manually extract and partly convert the relevant parts of the LaTeX code generated by Sphinx; a less labour-intensive solution does not yet exist.

Getting "longnamesfirst" option to work with natbib in LaTeX - custom .bst

I'm writing an MSc dissertation and I'm having difficulty getting the longnamesfirst option working in natbib.
My university has a very specific referencing style, a little like APA but not quite the same. I've used the docstrip utility to build a basic framework and then edited it to fit my university's requirements.
I tested it with the simplest possible document, applying my .bst and then trying again with one of the defaults (\bibliographystyle{apacite}). I can see that natbib works as intended with apacite; it does not, however, produce correct results with my .bst.
So my question:
How does the .bst file link with natbib to enforce the "longnamesfirst" option?
I've come to a solution. It looks like my .bst file wasn't written to take advantage of natbib's longnamesfirst option. In particular, it was missing a few functions such as format.full.names, which natbib apparently needs to generate those crucial first references.
After regenerating the style with makebst and merging in my changes, I'm good to go.

Has anyone used Boost Serialization with CodeGear C++Builder 2009 successfully?

If you have been successful in persisting your data, which type of stream did you get to work:
text or binary?
ANSI or Unicode?
Did you have to use any BOOST_ASSERTs or some extra macro, or dance around the fairy ring at 4:00 am wearing your moose sweater backwards?
Thanks for your answers.
There is a posting for C++Builder 2010 (unfortunately I do not know of one for 2009) that shows the portions of Boost (1.39) included in the shipping product. The serialization library is listed as not supported. Note that the posting also includes a link to the source code they used, in case someone wants to experiment with the unsupported libraries.
I haven't tried it, so I can't answer directly. However, here are the Boost 1.37.0 test results for C++Builder 2009 (the column on the right, "borland-6.1.0").
You can see that most things in 'serialization' pass the tests. Some don't, so comparing what you're trying to do against those results should tell you what to avoid. The test suites may also be useful to you, because they can serve as code examples for the features they test.
You may find other resources on the C++ Builder Boost page to be useful too.
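Not C++Builder-specific, but if you just want to find out quickly which archive types your toolchain copes with, a tiny round trip like the sketch below is a cheap smoke test. It is plain standard C++ plus Boost.Serialization (you need to link the serialization library, e.g. -lboost_serialization); the Record type and the file name are just examples.

// serialization_smoketest.cpp -- generic example, not C++Builder-specific.
#include <fstream>
#include <iostream>
#include <string>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

struct Record {
  int id;
  std::string name;

  // Intrusive serialization: one member function handles both save and load.
  template <class Archive>
  void serialize(Archive &ar, const unsigned int /*version*/) {
    ar & id;
    ar & name;
  }
};

int main() {
  {
    Record out;
    out.id = 42;
    out.name = "example";
    std::ofstream ofs("record.txt");
    boost::archive::text_oarchive oa(ofs);  // the text/narrow case
    oa << out;
  } // archive and stream are flushed and closed here

  Record in;
  {
    std::ifstream ifs("record.txt");
    boost::archive::text_iarchive ia(ifs);
    ia >> in;
  }
  std::cout << in.id << " " << in.name << "\n";
  return 0;
}

Swapping in binary_oarchive/binary_iarchive, or the wide-character archive variants with wide streams, exercises the other combinations the question asks about.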
