I tried to parse some Java code using Rascal's Java15 grammar. However, it does not accept declarations of local variables with parameterized types. In more detail:
it does not recognize List<String> files = ...
it recognizes final List<String> files = ...
it recognizes List<String, String> files = ...
It seems to me that the problem is related to some ambiguity involving LocalVarDecStatements and expressions involving "<" and ">". However, I could not figure out how to fix the problem.
I'm not one to say "works for me", but it does :-) See:
rascal>import lang::java::\syntax::Java15;
ok
rascal>import ParseTree;
ok
rascal>parse(#LocalVarDec, "List\<String\> files = null")
LocalVarDec: (LocalVarDec) `List<String> files = null`
Could you provide the example or a simplified example which has the error in it?
I am new to F#/.NET and am trying to run the F# example provided in the accepted answer of How to translate the intro ML.Net demo to F#?, using F# in Visual Studio with Microsoft.ML (0.2.0).
When building it I get: error FS0039: The type 'TextLoader' is not defined.
To avoid this, I added the line
open Microsoft.ML.Data
to the source.
Then, however, the line
pipeline.Add(new TextLoader<IrisData>(dataPath,separator = ","))
triggers:
error FS0033: The non-generic type 'Microsoft.ML.Data.TextLoader' does not expect any type arguments, but here is given 1 type argument(s)
Changing to:
pipeline.Add(new TextLoader(dataPath,separator = ","))
yields:
error FS0495: The object constructor 'TextLoader' has no argument or settable return property 'separator'. The required signature is TextLoader(filePath: string) : TextLoader.
Changing to:
pipeline.Add(new TextLoader(dataPath))
makes the build successful, but the code fails when running with
ArgumentOutOfRangeException: Column #1 not found in the dataset (it only has 1 columns). I assume this is because the comma separator is not correctly picked up. (Incidentally, you can find and inspect the iris dataset at https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.)
Also
pipeline.Add(new TextLoader(dataPath).CreateFrom<IrisData>(separator: ','))
won't work.
I understand that there have been recent changes in TextLoader (see e.g. https://github.com/dotnet/machinelearning/issues/332). Can somebody point me to what I am doing wrong?
F# just has a slightly different syntax that can take some getting used to. It doesn't require the new keyword to instantiate a class, and for named parameters it uses = instead of the : you would use in C#.
So for this line in C#:
pipeline.Add(new TextLoader(dataPath).CreateFrom<IrisData>(separator: ','))
It would be this in F#:
pipeline.Add(TextLoader(dataPath).CreateFrom<IrisData>(separator=','))
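To put that line in context, here is roughly what the full pipeline from the linked demo looks like in F#. Treat this as a sketch only: it assumes the IrisData and IrisPrediction types defined in that answer, and the LearningPipeline API as it stood in Microsoft.ML 0.2.x, both of which have since changed.
open Microsoft.ML
open Microsoft.ML.Data
open Microsoft.ML.Trainers
open Microsoft.ML.Transforms

let dataPath = "iris.data" // adjust to wherever you saved the dataset

let pipeline = LearningPipeline()
// load the comma-separated file into IrisData records
pipeline.Add(TextLoader(dataPath).CreateFrom<IrisData>(separator = ','))
// concatenate the four numeric columns into a single "Features" vector
pipeline.Add(ColumnConcatenator("Features", "SepalLength", "SepalWidth", "PetalLength", "PetalWidth"))
// the multi-class trainer used by the original demo
pipeline.Add(StochasticDualCoordinateAscentClassifier())
let model = pipeline.Train<IrisData, IrisPrediction>()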
I am using the dependency parsing of CoreNLP for a project of mine. The basic and enhanced dependencies give different results for a particular dependency.
I used the following code to get enhanced dependencies.
val lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
lp.setOptionFlags("-maxLength", "80")
val rawWords = edu.stanford.nlp.ling.Sentence.toCoreLabelList(tokens_arr:_*)
val parse = lp.apply(rawWords)
val tlp = new PennTreebankLanguagePack()
val gsf:GrammaticalStructureFactory = tlp.grammaticalStructureFactory()
val gs:GrammaticalStructure = gsf.newGrammaticalStructure(parse)
val tdl = gs.typedDependenciesCCprocessed()
For the following example,
Account name of ramkumar.
I use the simple API to get basic dependencies. The dependency I get between
(account, name) is (compound). But when I use the above code to get the enhanced dependency, the relation I get between (account, name) is (dobj).
How can this be fixed? Is this a bug, or am I doing something wrong?
When I run this command:
java -Xmx8g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse -file example.txt -outputFormat json
With your example text in the file example.txt, I see compound as the relationship between both of those words for both types of dependencies.
I also tried this with the simple API and got the same results.
You can see what the simple API produces with this code:
package edu.stanford.nlp.examples;

import edu.stanford.nlp.semgraph.SemanticGraphFactory;
import edu.stanford.nlp.simple.*;

import java.util.*;

public class SimpleDepParserExample {

  public static void main(String[] args) {
    Sentence sent = new Sentence("...example text...");
    Properties props = new Properties();
    // use sent.dependencyGraph() or sent.dependencyGraph(props, SemanticGraphFactory.Mode.ENHANCED) to see enhanced dependencies
    System.out.println(sent.dependencyGraph(props, SemanticGraphFactory.Mode.BASIC));
  }
}
I don't know anything about any Scala interfaces for Stanford CoreNLP. I should also note that my results come from the latest code on GitHub, though I presume Stanford CoreNLP 3.8.0 would produce similar results. If you are using an older version of Stanford CoreNLP, that could be a potential cause of the error.
But running this example in various ways using Java, I don't see the issue you are encountering.
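If you want to double-check this along the same route as your Scala snippet, here is a minimal Java sketch (the class name is mine). Note that typedDependenciesCCprocessed() gives you collapsed/CC-processed dependencies rather than basic ones, so it is worth printing both:
package edu.stanford.nlp.examples;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.*;

import java.util.List;

public class DepComparisonExample {

  public static void main(String[] args) {
    LexicalizedParser lp =
        LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
    // same tokenization route as the Scala snippet in the question
    List<CoreLabel> rawWords =
        edu.stanford.nlp.ling.Sentence.toCoreLabelList("Account", "name", "of", "ramkumar", ".");
    Tree parse = lp.apply(rawWords);
    GrammaticalStructure gs =
        new PennTreebankLanguagePack().grammaticalStructureFactory().newGrammaticalStructure(parse);
    System.out.println(gs.typedDependencies());            // basic dependencies
    System.out.println(gs.typedDependenciesCCprocessed()); // collapsed/CC-processed dependencies
  }
}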
I'm trying to use the haskell-src-exts package to parse Haskell modules. Currently, I'm trying to parse the acme-io package's module, but I keep getting this error no matter what parse mode I try:
*** Exception: fromParseResult: Parse failed at [System/IO/Unsafe/Really/IMeanIt] (1:57): TemplateHaskell is not enabled
The module mentioned makes no reference to TemplateHaskell: it is not in its LANGUAGE pragma, nor is there a $ anywhere in the source file.
I'm wondering if my parse mode has something to do with it - here it is:
defaultParseMode { parseFilename = toFilePath m
, baseLanguage = Haskell2010
, extensions = []
, ignoreLanguagePragmas = True
, ignoreLinePragmas = True
, fixities = Nothing
}
I've also tried to replace the extensions field with knownExtensions from the parsing suite, without any luck.
This turned out to be a duplicate of this answer: using the parseFile function fixed the issue. However, the reader should note that haskell-src-exts does not parse exactly the same language as GHC. I ran into a similar issue right after this one: haskell-src-exts cannot handle multi-parameter contexts without -XMultiParamTypeClasses, while GHC can, which breaks the parser if you're scraping Hackage. Hint may be a better option, though I can't say for sure.
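For anyone hitting the same thing while scraping Hackage, here is a minimal sketch of the approach that worked for me. The file path is just an example, and pre-enabling MultiParamTypeClasses is my workaround for the second issue mentioned above:
import Language.Haskell.Exts

main :: IO ()
main = do
    -- like parseFile, parseFileWithMode reads the source itself, and with
    -- ignoreLanguagePragmas = False (the default) it honours LANGUAGE pragmas
    result <- parseFileWithMode mode "System/IO/Unsafe/Really/IMeanIt.hs"
    case result of
        ParseOk ast         -> print ast
        ParseFailed loc err -> putStrLn (show loc ++ ": " ++ err)
  where
    mode = defaultParseMode
        { -- GHC accepts multi-parameter contexts without this extension,
          -- haskell-src-exts does not, so enable it up front
          extensions = [EnableExtension MultiParamTypeClasses]
        }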
UPDATE
The general question is: how do I use F#'s verbose syntax correctly? Verbose syntax is the syntax that is close to OCaml's, i.e. with explicit keywords such as in and begin/end, semicolons, and so on.
OLD TEXT
I want to turn light syntax off in F# to have verbose syntax which is closer to OCaml.
I wrote the following code
#light "off"
let k=3.14;;
and got an error on let:
Unexpected keyword 'let' or 'use' in implementation file
What is correct implementation file structure without light syntax?
The problem is that you have written this inside a .fsi file, which is an F# interface definition file; it has nothing to do with fsi.exe (F# Interactive).
The message "Unexpected keyword 'let' or 'use' in implementation file" is the tell: interface definitions were expected. Simply use a .fs extension instead.
If you want to reuse ML code, consider changing the file extension to .ml and adding a #nowarn "62" directive at the beginning to suppress the warning about legacy constructs.
#nowarn "62"
#light "off"
let div2 = 2;;
let f x =
    let r = x % div2 in
    if r = 1 then
        begin "Odd" end
    else
        begin "Even" end
I don't see anything wrong, but... why the double semicolons (;;)? Are you compiling the file or running it in F# Interactive (fsi)?
In ANTLRWorks I get this error:
[18:21:03] Checking Grammar Grammar.g...
[18:21:26] Grammar.java:12: code too large
[18:21:26] public static final String[] tokenNames = new String[] {
[18:21:26] ^
[18:21:26] 1 error
Using the generated code in a Java project instead works normally. What could be causing this problem?
Thanks.
For larger grammars, it's easier to split your grammar into bite-sized chunks (at the very least, a separate lexer and parser, as sketched below). If you do so, ANTLRWorks will probably stop complaining as well.
Check out the wiki entry about "Composite grammars".
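As a minimal, hypothetical sketch of the split (grammar and rule names are mine; the syntax is the ANTLR 3 dialect that ANTLRWorks uses):
// MyLexer.g -- tokens get their own lexer grammar
lexer grammar MyLexer;

EQ   : '=' ;
SEMI : ';' ;
ID   : ('a'..'z' | 'A'..'Z')+ ;
INT  : '0'..'9'+ ;
WS   : (' ' | '\t' | '\r' | '\n')+ { $channel = HIDDEN; } ;

// MyParser.g -- the parser refers to the lexer's token vocabulary
parser grammar MyParser;

options { tokenVocab = MyLexer; }

stat : ID EQ INT SEMI ;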