I would be grateful for any help typesetting music in LaTeX. I've tried to use MusiXTeX but have been very frustrated.
As I understand it, the MusiXTeX notation has a steep learning curve, but I'm OK with that; the notation seems to be well documented. The hardest part is installation and getting a simple "hello world" example to work.
I'm not committed to MusiXTeX; I'll try anything that works with LaTeX. But I've tried other alternatives and been equally frustrated with them.
How about LilyPond? It uses its own plain-text notation but uses TeX for output. The engine itself applies a whole slew of rules to analyze the music and produce pretty sheet music, so it is automated to a much greater extent than MusiXTeX is.
LilyPond has a preprocessor called lilypond-book that lets you mix LaTeX and LilyPond code in one source file.
Sample usage: tsst.lytex contains this:
\documentclass{article}
\begin{document}
\begin{lilypond}[quote,fragment,staffsize=26]
c' d' e'
\end{lilypond}
\end{document}
It also supports inline notation (instead of a display), and reading from external files.
Compile it with lilypond-book --pdf tsst.lytex; this produces PDF images of each system along with a LaTeX file tsst.tex that includes the snippets, and tsst.tex then compiles as usual with pdflatex.
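From what I recall of the lilypond-book manual, the inline and external-file forms mentioned above look roughly like this (melody.ly is just a placeholder file name):
A short motif \lilypond[fragment,staffsize=16]{ c' e' g' } right in the running text.
% music pulled in from an external LilyPond file
\lilypondfile[quote,noindent]{melody.ly}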
If your needs are simple (folk tunes and the like), something like ABC notation might be a good fit: it is a simple markup-based notation that can be converted for use with LaTeX. Wikipedia has a good example:
X:1
T:The Legacy Jig
M:6/8
L:1/8
R:jig
K:G
GFG BAB | gfg gab | GFG BAB | d2A AFD |
GFG BAB | gfg gab | age edB |1 dBA AFD :|2 dBA ABd |:
efe edB | dBA ABd | efe edB | gdB ABd |
efe edB | d2d def | gfe edB |1 dBA ABd :|2 dBA AFD |]
which produces the engraved tune shown in this image: http://en.wikipedia.org/wiki/File:Legacy_jig.png
There is also lyluatex, which works with LuaLaTeX.
Sample usage (compile with lualatex -shell-escape DOCUMENT.TEX):
\usepackage{lyluatex}
% include file
\lilypondfile[staffsize=17]{PATH/TO/THE/FILE}
% direct input
\begin{lilypond}
\relative c' { c d e f g a b c }
\end{lilypond}
% short direct input
\lilypond[staffsize=12]{c' d' g'}
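For reference, a complete minimal document using the short form might look like this (an untested sketch; compile it with lualatex -shell-escape as above):
\documentclass{article}
\usepackage{lyluatex}
\begin{document}
\lilypond[staffsize=12]{c' d' g'}
\end{document}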
I am looking for a solution to extract the list of concepts that a text (or HTML) document is about. I'd like the concepts to be Wikidata topics (or Freebase or DBpedia).
For example, "Bad is a song by Mikael Jackson" should return Michael Jackson (the artist, Wikidata Q2831) and Bad (the song, Wikidata Q275422). As this example shows, the system should be robust to spelling mistakes (Mikael) and ambiguity (Bad).
Ideally the system should work across multiple languages, it should handle both short and long texts, and when it is unsure it should return multiple topics (e.g. Bad the song and Bad the album). Also, it should ideally be open source and have a Python API.
Yes, that sounds like a list for Santa Claus. Any ideas?
Edit
I checked out a few solutions, but no silver bullet so far.
NLTK parses text and extracts "named entities" (AFAIU, a part of a sentence that refers to a name), but it does not return Wikidata topics, just plain text. This means it will likely not understand that "I shot the sheriff" is the name of a song by Bob Marley; it will instead treat it as an ordinary sentence.
OpenNLP does roughly the same.
Wikidata has a search API, but it's just one term at a time, and it does not handle disambiguation.
There are a few commercial services (OpenCalais, AlchemyAPI, CogitoAPI...) but none really shines, IMHO.
You can use spaCy to retrieve named entities and then link them to Wikidata using the search API.
For the parts of the sentence that spaCy does not match as named entities, you can build a list of n-grams (starting with the longest n-gram) and look each one up with the Wikidata search API.
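A minimal sketch of the named-entity part (assuming spaCy with the en_core_web_sm model and the requests library are installed; the wikidata_candidates helper is my own name):
import requests
import spacy
# assumes the model was installed with: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
def wikidata_candidates(term, language="en", limit=3):
    """Look up a surface form with Wikidata's wbsearchentities API."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": term,
            "language": language,
            "format": "json",
            "limit": limit,
        },
    )
    return [(hit["id"], hit.get("description", "")) for hit in resp.json()["search"]]
doc = nlp("Bad is a song by Mikael Jackson")
for ent in doc.ents:
    # each entity surface form becomes a Wikidata search query
    print(ent.text, ent.label_, wikidata_candidates(ent.text))
The n-gram fallback described above can reuse the same wikidata_candidates helper for the spans that spaCy misses.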
POS tagging can be put to good use; that said, a syntactic parse is more powerful, since it tells you the relations between the words. For instance, given the following output from link-grammar:
Found 8 linkages (8 had no P.P. violations)
Linkage 1, cost vector = (UNUSED=0 DIS= 0.15 LEN=9)
+-------------------------Xp-------------------------+
+----------->WV---------->+ |
+-------Wd------+ +---------Osn--------+ |
| +---G---+----Ss---+----Os----+ | |
| | | | | | |
LEFT-WALL Bob.m Marley[!] wrote.v-d Natural[!] Mystic[!] .
You can tell that the subject is “Bob Marley” because:
“wrote” is connected to “Marley” with an S link, which connects subject nouns to finite verbs;
“Marley” is connected to “Bob” with a G link, which connects proper nouns together.
So “Bob Marley” is a good candidate for an entity (both words are also capitalized).
Given the above parse “tree”, it is difficult to tell whether “Natural” and “Mystic” are related, even though they are on the same side of the sentence.
The second parse provided by link-grammar has the same cost vector and links “Natural Mystic” together, again with a G.
Here it is:
Linkage 2, cost vector = (UNUSED=0 DIS= 0.15 LEN=9)
+-------------------------Xp-------------------------+
+----------->WV---------->+ |
+-------Wd------+ +---------Os---------+ |
| +---G---+----Ss---+ +----G----+ |
| | | | | | |
LEFT-WALL Bob.m Marley[!] wrote.v-d Natural[!] Mystic[!] .
So in my opinion “Bob Marley” and “Natural Mystic” are good candidates for a Wikidata search.
That was the easy problem where grammar and spelling are correct.
Here is one parse out of 11 of the same sentence written in lower case:
Linkage 1, cost vector = (UNUSED=1 DIS= 0.15 LEN=14)
+------------------------Xp------------------------+
+----------------------Wa---------------------+ |
| +------------------AN-----------------+ |
| | +-------------AN-------------+ |
| | | +----AN---+ |
| | | | | |
LEFT-WALL Bob.m marley[?].n [wrote] natural.n mystic.n .
LG doesn't even recognize the verb.
I am using Pandoc to generate a PDF from markdown, but am having trouble producing a table. The terminal command used is:
$ pandoc -s -o foo.pdf --latex-engine=xelatex --filter pandoc-citeproc bar.md
The grid table used in my markdown document looks like this:
+---------------+---------------+--------------------+
| Fruit | Price | Advantages |
+===============+===============+====================+
| Bananas | $1.34 | - built-in wrapper |
| | | - bright color |
+---------------+---------------+--------------------+
| Oranges | $2.10 | - cures scurvy |
| | | - tasty |
+---------------+---------------+--------------------+
I've tried switching the LaTeX engine, using different forms of markdown tables, everything I can think of. Infuriatingly, I got this to work once and have spent the past few hours trying to reproduce the results, but with no success. Instead, I just keep getting the following error message:
pandoc: Error producing PDF from TeX source.
! Undefined control sequence.
\y ->\LT#array
l.7128 }{}
Any ideas? I'm using Pandoc v. 1.12.0.2.
This doesn't look like a Pandoc issue per se. The error you are getting is a TeX error, and it says that your TeX interpreter doesn't recognize the longtable environment. Maybe you are using a document class that doesn't load the longtable package by default...
If you are using a custom LaTeX template, try adding the longtable package manually:
\usepackage{longtable}
This must be added right after the line that defines the document class:
\documentclass[some options]{some class}
This will solve the current issue, although TeX may then complain about other stuff.
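If you would rather not edit the template itself, pandoc can also inject extra preamble lines with --include-in-header (-H for short). As a sketch, put the line in a small file, say longtable.tex (the name is arbitrary):
% extra preamble to be injected into pandoc's LaTeX template
\usepackage{longtable}
and then call pandoc -s -o foo.pdf --latex-engine=xelatex --filter pandoc-citeproc -H longtable.tex bar.md.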
By the way, you might be better off asking this kind of question on TeX StackExchange.
I am testing out F# and using NUnit as my test library; I have discovered the use of double backticks to allow arbitrary method naming, which makes my method names even more human readable.
I was wondering, whether rightly or wrongly, if it is possible to parameterise the method name when using NUnit's TestCaseAttribute, for example:
[<TestCase("1", 1)>]
[<TestCase("2", 2)>]
let ``Should return #expected when "#input" is supplied`` input expected =
...
This might not be exactly what you need, but if you want to go beyond unit testing, then TickSpec (a BDD framework for F#) has a nice feature that lets you write parameterized scenarios based on backtick methods containing regular expressions as placeholders.
For example, in Phil Trelford's blog post, he uses this to define a tic-tac-toe scenario:
Scenario: Winning positions
Given a board layout:
| 1 | 2 | 3 |
| O | O | X |
| O | | |
| X | | X |
When a player marks X at <row> <col>
Then X wins
Examples:
| row | col |
| middle | right |
| middle | middle |
| bottom | middle |
The method that implements the When clause of the scenario is defined in F# using something like this:
let [<When>] ``a player marks (X|O) at (top|middle|bottom) (left|middle|right)``
(mark:string,row:Row,col:Col) =
let y = int row
let x = int col
Debug.Assert(System.String.IsNullOrEmpty(layout.[y].[x]))
layout.[y].[x] <- mark
This is a neat thing, but it might be overkill if you just want to write a simple parameterized unit test; BDD is useful if you want to produce human-readable specifications of different scenarios (and there are actually other people reading them!).
This is not possible.
The basic issue is that for every input and expected you would need to create a unique function, and you would then need to pick the correct function to call (otherwise your stack trace wouldn't make sense). As a result, this is not possible.
Having said that, if you hacked around with something like eval (which must exist inside fsi), it might be possible to create something like this, but it would be very slow.
If I run grep -C 1 match over the following file:
a
b
match1
c
d
e
match2
f
match3
g
I get the following output:
b
match1
c
--
e
match2
f
match3
g
As you can see, since the contexts around the nearby matches "match2" and "match3" overlap, they are merged. However, I would prefer to get one context description for each match, possibly duplicating lines from the input in the context reporting. In this case, what I would like is:
b
match1
c
--
e
match2
f
--
f
match3
g
What would be the best way to achieve this? I would prefer solutions which are general enough to be trivially adaptable to other grep options (different values for -A, -B, -C, or entirely different flags). Ideally, I was hoping that there was a clever way to do that just with grep....
I don't think it is possible to do that using plain grep.
The sed construct below works to some extent; now I only need to figure out how to add the "--" separator:
$ sed -n -e '/match/{x;1!p;g;$!N;p;D;}' -e h log
b
match1
c
e
match2
f
f
match3
g
I don't think this is possible using plain grep.
Have you ever used Python? In my opinion it's a perfect language for such tasks (this code snippet will work for both Python 2.7 and 3.x):
with open("your_file_name") as f:
lines = [line.rstrip() for line in f.readlines()]
for num, line in enumerate(lines):
if "match" in line:
if num > 0:
print(lines[num - 1])
print(line)
if num < len(lines) - 1:
print(lines[num + 1])
if num < len(lines) - 2:
print("--")
This gives me:
b
match1
c
--
e
match2
f
--
f
match3
g
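If you want the same idea but closer to grep's interface (an arbitrary pattern plus -B/-A style context counts), here is a slightly more general sketch; the function and file names are mine:
import re
import sys
def grep_context(path, pattern, before=1, after=1, sep="--"):
    # read the whole file once; fine for typical log sizes
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    regex = re.compile(pattern)
    first = True
    for num, line in enumerate(lines):
        if regex.search(line):
            if not first:
                print(sep)  # separator between context blocks
            first = False
            start = max(0, num - before)
            end = min(len(lines), num + after + 1)
            for context_line in lines[start:end]:
                print(context_line)
if __name__ == "__main__":
    # usage: python grep_context.py FILE PATTERN [BEFORE] [AFTER]
    args = sys.argv[1:]
    grep_context(args[0], args[1],
                 int(args[2]) if len(args) > 2 else 1,
                 int(args[3]) if len(args) > 3 else 1)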
I'd suggest patching grep instead of working around it. In GNU grep 2.9, in src/main.c around line 933:
/* We print the SEP_STR_GROUP separator only if our output is
   discontiguous from the last output in the file.  */
if ((out_before || out_after) && used && p != lastout && group_separator)
  {
    PR_SGR_START_IF(sep_color);
    fputs (group_separator, stdout);
    PR_SGR_END_IF(sep_color);
    fputc ('\n', stdout);
  }
A simple additional flag would suffice here.
Edit: Well, d'oh, it is of course not THAT simple since grep would not reproduce the context, just add a few more separators. Due to the linearity of grep, the whole patch is probably not that easy. Nevertheless, if you have a good case for the patch, it could be worth it.
This does not appear to be possible with grep or GNU grep. However, it is possible with standard POSIX tools, using a good shell like bash as leverage, to obtain the desired output.
Note: neither python nor perl should be necessary for the solution. Worst case, use awk or sed.
One solution I rapidly prototyped is something like this (it involves the overhead of re-reading the file, so it depends on whether that overhead is acceptable; the give-away is the original question's use of -1 as a fixed number of lines of context, which allows simple use of head & tail):
$ OIFS="$IFS"; lines=`grep -n match greptext.txt | /bin/cut -f1 -d:`;
for l in $lines;
do IFS=""; match=`/bin/tail -n +$(($l-1)) greptext.txt | /bin/head -3`;
echo $match; echo "---";
done; IFS="$OIFS"
This might have some corner cases associated with it, and it resets IFS when perhaps not necessary, but it is a hint at how to use the power of the POSIX shell & tools, rather than a high-level interpreter, to get the desired output.
Opinion: All good operating systems have: grep, awk, sed, tr, cut, head, tail, more, less, vi as built-ins. On the best operating systems, these are in /bin.
I've got a framed environment in a memoir class document, with content like this:
\begin{framed}
\subsection{Article 1}
Content of Article 1
\subsection{Article 2}
Content: Article 2
\end{framed}
This renders in the following way:
._________________.
| | <-- superfluous whitespace
| Article 1 |
| Content of Art- |
| icle 1 |
| |
| Article 2 |
| Content: Artic- |
| le 2 |
.-----------------.
The \subsection{} is introducing the whitespace preceding itself, which I'd prefer not to be there inside this framed environment, though I do want such whitespace in regular text (i.e. outside the framed environment) and for subsections after the first one.
When inside the framed environment, I'd like to have formatting essentially like this:
._________________.
| Article 1 |
| Content of Art- |
| icle 1 |
| |
| Article 2 |
| Content: Artic- |
| le 2 |
.-----------------.
Any thoughts or suggestions as to how one may achieve this modification to headings at the beginning of the framed environment would be much appreciated.
Edit: Based on mkluwe's comments, I've rooted out the \subsection command in memoir.cls (around line 3314):
\newcommand{\subsection}{%
  \subsechook%
  \@startsection{subsection}{2}%  level 2
    {\subsecindent}%              heading indent
    {\beforesubsecskip}%          skip before the heading
    {\aftersubsecskip}%           skip after the heading
    {\normalfont\subsecheadstyle}} % font
\newcommand{\subsechook}{}
\newcommand{\setsubsechook}[1]{\renewcommand{\subsechook}{#1}}
\newlength{\subsecindent}
\newcommand{\setsubsecindent}[1]{\setlength{\subsecindent}{#1}}
\setsubsecindent{\z@}
\newskip\beforesubsecskip
\newcommand{\setbeforesubsecskip}[1]{\setlength{\beforesubsecskip}{#1}}
\setbeforesubsecskip{-3.25ex \@plus -1ex \@minus -.2ex}
\newskip\aftersubsecskip
\newcommand{\setaftersubsecskip}[1]{\setlength{\aftersubsecskip}{#1}}
\setaftersubsecskip{1.5ex \@plus .2ex}
So a corollary to my question above would seem to be: How can one refine this subsection command such that e.g. if it's the first element in an environment (such as the framed environment) its \beforesubsecskip is very small?
Thank you for reading.
Sincerely,
Brian
If it happens infrequently enough, you could just use a \vspace command as the first entry inside each frame. You could even create a new frame environment to do it automatically. In either case you would need to tweak the \vspace to take away the right amount of padding. As you want, the new environment below removes the padding before the first subsection but not before subsequent ones:
\newenvironment{subsectframe}{\begin{framed}\vspace{-1.0\baselineskip}}{\end{framed}}
\begin{document}
\begin{subsectframe}
\subsection{Article 1}
Content of Article 1
\subsection{Article 2}
Content: Article 2
\end{subsectframe}
\end{document}
I do understand that the problem is "with the subsection". However, I think fixing it by creating a new environment is going to be a cleaner solution than trying to alter the subsection command so that it intelligently avoids adding space depending on where it is.
I don't know this environment, but in the documentation I find:
\FrameHeightAdjust: macro; height of frame above baseline at top of page
You might try diddling that...
As a quick and dirty solution I copied the definition of the \subsection command from article.cls and deleted the vertical skip:
\documentclass{article}
\usepackage{framed}
\makeatletter
\newcommand\subsectionx{\@startsection{subsection}{2}{\z@}%
  {0ex}%
  {1.5ex \@plus .2ex}%
  {\normalfont\large\bfseries}}
\makeatother
\begin{document}
\begin{framed}
\subsectionx{Article 1}
Content of Article 1
\subsection{Article 2}
Content: Article 2
\end{framed}
\end{document}