String representation of typed literals - jena

I am using Jena 2.6.4.
The following code
String v = "Parnell Square East";
Literal l = ModelFactory.createDefaultModel().createTypedLiteral(
v, XSDDatatype.XSDstring);
System.out.println(l.toString());
Produces the following output:
Parnell Square East^^http://www.w3.org/2001/XMLSchema#string
which looks wrong to me: I would have expected:
"Parnell Square East"^^http://www.w3.org/2001/XMLSchema#string
From a quick look at the source code of LiteralImpl.java I see:
@Override public String toString() {
    return asNode().toString( PrefixMapping.Standard, false );
}
Why is the second parameter (quoting) set to false?
If I do
String v = "Parnell Square East";
Literal l = ModelFactory.createDefaultModel().createTypedLiteral(v,
XSDDatatype.XSDstring);
System.out.println(l.asNode().toString(PrefixMapping.Standard, true));
I get the desired output
"Parnell Square East"^^http://www.w3.org/2001/XMLSchema#string
I just wonder why this is not the default behaviour?
Thanks,
marco

There's no guarantee, implicit or explicit, that toString on any Jena node produces output that fits any particular serialization (e.g. Turtle). You might just as well ask why toString does not produce an XML node, or why the datatype is not abbreviated to a q-name. The view that Jena takes is that toString produces enough information to be useful in debugging. Any requirements beyond that are application responsibilities.
So saying, if you have a good use case feel free to submit a patch to the Jena Jira. Bear in mind, though, that existing user code may have come to rely on the current behaviour, so switching would have some cost and you would need to make a strong case for change!
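In the meantime, if you want the quoted form routinely, a small helper in your own application code is enough. A minimal sketch, assuming the Jena 2.x package layout (com.hp.hpl.jena.*) and simply wrapping the asNode().toString(PrefixMapping.Standard, true) call shown in the question:

import com.hp.hpl.jena.datatypes.xsd.XSDDatatype;
import com.hp.hpl.jena.rdf.model.Literal;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.shared.PrefixMapping;

public class LiteralStrings {
    // Renders a literal with its lexical form quoted, e.g.
    // "Parnell Square East"^^http://www.w3.org/2001/XMLSchema#string
    static String quoted(Literal l) {
        return l.asNode().toString(PrefixMapping.Standard, true);
    }

    public static void main(String[] args) {
        Literal l = ModelFactory.createDefaultModel()
                .createTypedLiteral("Parnell Square East", XSDDatatype.XSDstring);
        System.out.println(quoted(l));
    }
}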

Related

Writing a text file containing LaTeX code from maxima expressions

Suppose in a (wx)Maxima session I have the following
f:sin(x);
df:diff(f,x);
Now I want to have it output a text file containing something like, for example
If $f(x)=\sin(x)$, then $f^\prime(x)=\cos(x)$.
I found the tex and tex1 functions but I think I need some additional string processing to be able to do what I want.
Any help appreciated.
EDIT: Further clarifications.
Auto Multiple Choice is software that helps you create and manage questionnaires. To declare questions one may use LaTeX syntax. From AMC's documentation, a question looks like this:
\element{geographie}{
\begin{question}{Cameroon}
Which is the capital city of Cameroon?
\begin{choices}
\correctchoice{Yaoundé}
\wrongchoice{Douala}
\wrongchoice{Abou-Dabi}
\end{choices}
\end{question}
}
As can be seen, it is just LaTeX. Now, with a little modification, I can turn this example into a math question
\element{derivatives}{
\begin{question}{trig_fun_diff_1}
If $f(x)=\sin(x)$ then $f^\prime(0)$ is
\begin{choices}
\correctchoice{$1$}
\wrongchoice{$-1$}
\wrongchoice{$0$}
\end{choices}
\end{question}
}
This is the sort of output I want. I'll have, say, a list of functions then execute a loop calculating their derivatives and so on.
OK, in response to your updated question. My advice is to work with questions and answers as expressions -- build up your list of questions first, and when you have the list in the structure that you want, output the TeX file as the last step. It is generally much clearer and simpler to work with expressions than with strings.
E.g. Here is a simplistic approach. I'll use defstruct to define a structure so that I can refer to its parts by name.
defstruct (question (name, datum, item, correct, incorrect));
myq1 : new (question);
myq1@name : "trig_fun_diff_1";
myq1@datum : f(x) = sin(x);
myq1@item : 'at ('diff (f(x), x), x = 0);
myq1@correct : 1;
myq1@incorrect : [0, -1];
You can also write
myq1 : question ("trig_fun_diff_1", f(x) = sin(x),
'at ('diff (f(x), x), x = 0), 1, [0, -1]);
I don't know which form is more convenient for you.
Then you can make an output function similar to this:
tex_question (q, output_stream) :=
   (printf (output_stream, "\\begin{question}{~a}~%", q@name),
    printf (output_stream, "If $~a$, then $~a$ is:~%", tex1 (q@datum), tex1 (q@item)),
    printf (output_stream, "\\begin{choices}~%"),
    /* make a list comprising correct and incorrect here */
    /* shuffle the list (see random_permutation) */
    /* output each correct or incorrect here */
    printf (output_stream, "\\end{choices}~%"),
    printf (output_stream, "\\end{question}~%"));
where output_stream is an output stream as returned by openw (which see).
It may take a little bit of trying different stuff to get derivatives to be output in just the format you want. My advice is to put the logic for that into the output function.
A side effect of working with expressions is that it is straightforward to output some representations other than TeX (e.g. plain text, XML, HTML). That might or might not become important for your project.
Well, tex is the TeX output function. It can be customized to some extent via texput (which see).
As to post-processing via string manipulation, I don't recommend it. However, if you want to go down that road, there are regex functions which you can access via load(sregex). Unfortunately it's not yet documented; see the comment header of sregex.lisp (somewhere in your Maxima installation) for examples.

What is perl experimental feature, postderef?

I see use experimental 'postderef' being used in Moxie here on line 8. I'm just confused about what it does. The man pages for experimental are pretty vague too,
allow the use of postfix dereferencing expressions, including in interpolating strings
Can anyone show what you would have to do without the pragma, and what the pragma makes easier or possible?
What is it
It's simple. It's syntactic sugar with ups and downs. The pragma is no longer needed as the feature is core in 5.24. But for the feature to be usable between 5.20 and 5.24, it had to be enabled with: use experimental 'postderef'. In the provided example, in Moxie, it's used in one line which has $meta->mro->@*; without it you'd have to write @{$meta->mro}.
Synopsis
These are straight from D Foy's blog, along with idiomatic Perl equivalents for comparison that I've written.
D Foy example                        Idiomatic Perl
$gimme_a_ref->()->@[0]->%*           %{ $gimme_a_ref->()[0] }
$array_ref->@*                       @{ $array_ref }
get_hashref()->@{ qw(cat dog) }      @{ get_hashref() }{ qw(cat dog) }
These examples totally provided by D Foy,
D Foy example                        Idiomatic Perl
$array_ref->[0][0]->@*               @{ $array_ref->[0][0] }
$sub->&*                             &some_sub
Arguments-for
postderef allows chaining.
postderef_qq makes complex interpolation into scalar strings easier.
Arguments-against
not at all provided by D Foy
Loses sigil significance. Whereas before you knew what the "type" was by looking at the sigil on the left-most side, now you don't know until you read the whole chain. This seems to undermine any argument for the sigil, by forcing you to read the whole chain before you know what to expect. Perhaps the days of arguing that sigils are a good design decision are over? But, then again, perl6 is still all about them. Lack of consistency here.
Overloads -> to mean, as type. So now you have $type->[0][1]->@* to mean dereference as $type, and also coerce to type.
Slices do not have a similar syntax on primitives.
my @foo = qw/foo bar baz quz quuz quuuz/;
my $bar = \@foo;
# Idiomatic perl array-slices with inclusive-range slicing
say @$bar[2..4]; # From reference; returns bazquzquuz
say @foo[2..4];  # From primitive; returns bazquzquuz
# Whizbang thing which has exclusive-range slicing
say $bar->@[2,4]; # From reference; returns bazquz
# Nothing.
Sources
Brian D Foy in 2014.
Brian D Foy in 2016.

Why doesn't Haskell's Prelude.read return a Maybe?

Is there a good reason why the type of Prelude.read is
read :: Read a => String -> a
rather than returning a Maybe value?
read :: Read a => String -> Maybe a
Since the string might fail to be parseable as a Haskell value, wouldn't the latter be more natural?
Or even an Either String a, where Left would contain the original string if it didn't parse, and Right the result if it did?
Edit:
I'm not trying to get others to write a corresponding wrapper for me. Just seeking reassurance that it's safe to do so.
Edit: As of GHC 7.6, readMaybe is available in the Text.Read module in the base package, along with readEither: http://hackage.haskell.org/packages/archive/base/latest/doc/html/Text-Read.html#v:readMaybe
Great question! The type of read itself isn't changing anytime soon because that would break lots of things. However, there should be a maybeRead function.
Why isn't there? The answer is "inertia". There was a discussion in '08 which got derailed by a discussion over "fail."
The good news is that folks were sufficiently convinced to start moving away from fail in the libraries. The bad news is that the proposal got lost in the shuffle. There should be such a function, although one is easy to write (and there are zillions of very similar versions floating around many codebases).
See also this discussion.
Personally, I use the version from the safe package.
Yeah, it would be handy to have a read function that returns Maybe. You can make one yourself:
readMaybe :: (Read a) => String -> Maybe a
readMaybe s = case reads s of
                  [(x, "")] -> Just x
                  _         -> Nothing
Apart from inertia and/or changing insights, another reason might be that it's aesthetically pleasing to have a function that can act as a kind of inverse of show. That is, you want read . show to be the identity (for types which are an instance of Show and Read) and show . read to be the identity on the range of show (i.e. show . read . show == show).
Having a Maybe in the type of read breaks the symmetry with show :: a -> String.
As @augustss pointed out, you can make your own safe read function. However, his readMaybe isn't completely consistent with read, as it doesn't ignore whitespace at the end of a string. (I made this mistake once; I don't quite remember the context.)
Looking at the definition of read in the Haskell 98 report, we can modify it to implement a readMaybe that is perfectly consistent with read, and this is not too inconvenient because all the functions it depends on are defined in the Prelude:
readMaybe :: (Read a) => String -> Maybe a
readMaybe s = case [x | (x,t) <- reads s, ("","") <- lex t] of
                  [x] -> Just x
                  _   -> Nothing
This function (called readMaybe) is now in the standard Text.Read module (as of the current base -- 4.6).

Point-free style with objects/records in F#

I'm getting stymied by the way "dot notation" works with objects and records when trying to program in a point-free functional style (which I think is a great, concise way to use a functional language that curries by default).
Is there an operator or function I'm missing that lets me do something like:
(.) object method instead of object.method?
(From what I was reading about the new ? operator, I think it works like this. Except it requires definition and gets into the whole dynamic binding thing, which I don't think I need.)
In other words, can I apply a method to its object as an argument like I would apply a normal function to its argument?
Short answer: no.
Longer answer: you can of course create let-bound functions in a module that call a method on a given type... For example in the code
let l = [1;2;3]
let h1 = l.Head
let h2 = List.hd l
there is a sense in which "List.hd" is the version of what you want for ".Head on a list". Or locally, you can always do e.g.
let AnotherWay = (fun (l:list<_>) -> l.Head)
let h3 = AnotherWay l
But there is nothing general, since there is no good way to 'name' an arbitrary instance method on a given type; 'AnotherWay' shows a way to "make a function out of the 'Head' property on a 'list<_>' object", but you need such boilerplate for every instance method you want to treat as a first-class function value.
I have suggested creating a language construct to generalize this:
With regards to language design
suggestions, what if
SomeType..Foo optArgs // note *two* dots
meant
fun (x : SomeType) -> x.Foo optArgs
?
In which case you could write
list<_>..Head
as a way to 'functionize' this instance property, but if we ever do anything in that arena in F#, it would be post-VS2010.
If I understand your question correctly, the answer is: no, you can't. Dot (.) is not an operator in F#; it is built into the language, so it can't be used as a function.

Gold Parsing System - What can it be used for in programming?

I have read the GOLD Homepage ( http://www.devincook.com/goldparser/ ) docs, FAQ and Wikipedia to find out what practical application there could possibly be for GOLD. I was thinking along the lines of having a programming language (easily) available to my systems such as ABAP on SAP or X++ on Axapta - but it doesn't look feasible to me, at least not easily - even if you use GOLD.
The final use of the parsed result produced by GOLD escapes me - what do you do with the result of the parse?
EDIT: A practical example (description) would be great.
Parsing really consists of two phases. The first is "lexing", which converts the raw string of characters into something that the program can more readily understand (commonly called tokens).
As a simple example, lex would convert:
if (a + b > 2) then
into:
IF_TOKEN LEFT_PAREN IDENTIFIER(a) PLUS_SIGN IDENTIFIER(b) GREATER_THAN NUMBER(2) RIGHT_PAREN THEN_TOKEN
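In code, that token stream is usually just a list of small objects pairing a token type with the text it was read from. A hypothetical Java sketch (the names are invented for illustration and are not part of GOLD or any particular lexer):

enum TokenType { IF_TOKEN, LEFT_PAREN, IDENTIFIER, PLUS_SIGN, GREATER_THAN, NUMBER, RIGHT_PAREN, THEN_TOKEN }

// A token is just a type plus the text it was read from.
final class Token {
    final TokenType type;
    final String text;
    Token(TokenType type, String text) { this.type = type; this.text = text; }
}

// "if (a + b > 2) then" becomes roughly:
// [IF_TOKEN, LEFT_PAREN, IDENTIFIER("a"), PLUS_SIGN, IDENTIFIER("b"),
//  GREATER_THAN, NUMBER("2"), RIGHT_PAREN, THEN_TOKEN]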
The parser takes that stream of tokens and attempts to make yet more sense out of them. In this case, it would try to match up those tokens to an IF_STATEMENT. To the parser, the IF_STATEMENT may well look like this:
IF ( BOOLEAN_EXPRESSION ) THEN
Where the result of the lexing phase is a token stream, the result of the parsing phase is a Parse Tree.
So, a parser could convert the above into:
if_statement
 |
 v
boolean_expression.operator = GREATER_THAN
      |                  |
      |                  v
      v          numeric_constant.string = "2"
expression.operator = PLUS_SIGN
      |           |
      |           v
      v           identifier.string = "b"
identifier.string = "a"
Here you see we have an IF_STATEMENT. An IF_STATEMENT has a single argument, which is a BOOLEAN_EXPRESSION. This was explained in some manner to the parser. When the parser is converting the token stream, it "knows" what a IF looks like, and know what a BOOLEAN_EXPRESSION looks like, so it can make the proper assignments when it sees the code.
For example, if you have just:
if (a + b) then
The parser could know that it's not a boolean expression (because the + is arithmetic, not a boolean operator) and the parser could throw an error at this point.
Next, we see that a BOOLEAN_EXPRESSION has three components: the operator (GREATER_THAN) and two sides, the left side and the right side.
On the left side, it points to yet another expression, the "a + b", while on the right it points to a NUMERIC_CONSTANT, in this case the string "2". Again, the parser "knows" this is a NUMERIC constant because we told it about strings of numbers. If it wasn't numbers, it would be an IDENTIFIER (like "a" and "b" are).
Note, that if we had something like:
if (a + b > "XYZ") then
That "parses" just fine (expression on the left, string constant on the right). We don't know from looking at this whether this is a valid expression or not. We don't know if "a" or "b" reference Strings or Numbers at this point. So, this is something the parser can't decide for us and can't flag as an error, as it simply doesn't know. That will happen when we evaluate (either execute or try to compile into code) the IF statement.
If we did:
if [a > b ) then
The parser can readily see that syntax error as a problem, and will throw an error. That string of tokens doesn't look like anything it knows about.
So, the point being that when you get a complete parse tree, you have some assurance that at first cut the "code looks good". Now during execution, other errors may well come up.
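Before looking at evaluation, it may help to pin down what a parse tree node could look like in code. This is only a hypothetical sketch; the accessor names simply mirror the calls made in the interpreter below.

// One possible shape for a parse tree node; the methods are exactly
// the ones the example interpreter relies on.
interface ParseTreeNode {
    boolean isConstant();                 // e.g. the numeric_constant "2"
    boolean isIdentifier();               // e.g. "a" or "b"
    char getOperator();                   // '+' or '>' in this example
    ParseTreeNode getLeftSide();          // left operand of an expression
    ParseTreeNode getRightSide();         // right operand of an expression
    ParseTreeNode getBooleanExpression(); // the condition of an if_statement
}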
To evaluate the parse tree, you just walk the tree. You'll have some code associated with the major nodes of the parse tree during the compile or evaluation part. Let's assuming that we have an interpreter.
public void execute_if_statement(ParseTreeNode node) {
    // We already know we have an IF_STATEMENT node
    Value value = evaluate_expression(node.getBooleanExpression());
    if (value.getBooleanResult() == true) {
        // we do the "then" part of the code
    }
}

public Value evaluate_expression(ParseTreeNode node) {
    Value result = null;
    if (node.isConstant()) {
        result = evaluate_constant(node);
        return result;
    }
    if (node.isIdentifier()) {
        result = lookupIdentifier(node);
        return result;
    }
    Value leftSide = evaluate_expression(node.getLeftSide());
    Value rightSide = evaluate_expression(node.getRightSide());
    if (node.getOperator() == '+') {
        if (!leftSide.isNumber() || !rightSide.isNumber()) {
            throw new RuntimeError("Must have numbers for adding");
        }
        int l = leftSide.getIntValue();
        int r = rightSide.getIntValue();
        int sum = l + r;
        return new Value(sum);
    }
    if (node.getOperator() == '>') {
        if (leftSide.getType() != rightSide.getType()) {
            throw new RuntimeError("You can only compare values of the same type");
        }
        if (leftSide.isNumber()) {
            int l = leftSide.getIntValue();
            int r = rightSide.getIntValue();
            boolean greater = l > r;
            return new Value(greater);
        } else {
            // do string compare instead
        }
    }
    return result;
}
So, you can see that we have a recursive evaluator here. You see how we're checking the run time types, and performing the basic evaluations.
What will happen is that execute_if_statement will evaluate its main expression. Even though we wanted only BOOLEAN_EXPRESSION in the parse, all expressions are mostly the same for our purposes. So, execute_if_statement calls evaluate_expression.
In our system, all expressions have an operator and a left and right side. Each side of an expression is ALSO an expression, so you can see how we immediately try and evaluate those as well to get their real value. The one note is that if the expression consists of a CONSTANT, then we simply return the constant's value; if it's an identifier, we look it up as a variable (and that would be a good place to throw a "I can't find the variable 'a'" message); otherwise we're back to the left side/right side thing.
I hope you can see how a simple evaluator can work once you have a parse tree from a parser. Note how during evaluation, the major elements of the language are in place, otherwise we'd have got a syntax error and never got to this phase. We can simply expect to "know" that when we have a, for example, PLUS operator, we're going to have 2 expressions, the left and right side. Or when we execute an IF statement, that we already have a boolean expression to evaluate. The parser is what does that heavy lifting for us.
Getting started with a new language can be a challenge, but you'll find once you get rolling, the rest become pretty straightforward and it's almost "magic" that it all works in the end.
Note, pardon the formatting, but underscores are messing things up -- I hope it's still clear.
I would recommend antlr.org for information; it's the "free" tool I would use for any parsing work.
GOLD can be used for any kind of application where you have to apply context-free grammars to input.
elaboration:
Essentially, CFGs apply to all programming languages. So if you wanted to develop a scripting language for your company, you'd need to write a parser -- or get a parsing program. Alternatively, if you wanted to have a semi-natural language for input for non-programmers in the company, you could use a parser to read that input and spit out more "machine-readable" data. Essentially, a context-free grammar allows you to describe far more inputs than a regular expression. The GOLD system apparently makes the parsing problem somewhat easier than lex/yacc (the standard UNIX parsing tools).
