I'm trying out a small piece of code, but I see a strange behavior that I can't explain.
Given a value, I want to return the corresponding key/value pair of a map, looked up by key.
My code works with positive values.
If the value is not in the map, it returns null.
It also works with negative values, but only if the value matches one of my keys.
If I pass a negative value lower than all of my keys, it does not return null but zero, which is wrong!
The keys in my map must be Strings.
Here is my code, which you can test on DartPad:
import 'dart:collection';
void main() {
int myVar = -360;
Map<String, dynamic> values = {
"-200" : 42,
"-100" : 21,
"0" : 0,
"100" : -22,
"150" : -30,
"200" : -43,
"300" : -64
};
Map<String, dynamic> filter(int myVar, Map<String, dynamic> values) {
SplayTreeMap<String, dynamic> newval = SplayTreeMap.of(values);
String convertString = myVar.toString();
if (values.containsKey(convertString)) {
return {convertString: values[convertString]};
}
String lowerKey;
String upperKey;
if(myVar > 0){
lowerKey = newval.lastKeyBefore(convertString);
upperKey = newval.firstKeyAfter(convertString);
}
else{
lowerKey = newval.firstKeyAfter(convertString);
upperKey = newval.lastKeyBefore(convertString);
}
print(lowerKey);
print(upperKey);
return {
if (lowerKey != null) lowerKey: values[lowerKey],
if (upperKey != null) upperKey: values[upperKey],
};
}
var result = filter(myVar, values);
print('============================');
print(result);
}
First, a minor complaint about the use of dynamic in the code. It is totally fine to use dynamic where the type cannot be known at compile time, such as when parsing JSON, but in this case all the types are known and dynamic is not necessary. So I have fixed the code to remove the use of dynamic and also dropped some unnecessary type annotations:
import 'dart:collection';
void main() {
const myVar = -360;
final values = {
"-200": 42,
"-100": 21,
"0": 0,
"100": -22,
"150": -30,
"200": -43,
"300": -64
};
Map<String, int> filter(int myVar, Map<String, int> values) {
final newVal = SplayTreeMap.of(values);
final convertString = myVar.toString();
if (values.containsKey(convertString)) {
return {convertString: values[convertString]};
}
String lowerKey;
String upperKey;
if (myVar > 0) {
lowerKey = newVal.lastKeyBefore(convertString);
upperKey = newVal.firstKeyAfter(convertString);
} else {
lowerKey = newVal.firstKeyAfter(convertString);
upperKey = newVal.lastKeyBefore(convertString);
}
print(lowerKey);
print(upperKey);
return {
if (lowerKey != null) lowerKey: values[lowerKey],
if (upperKey != null) upperKey: values[upperKey],
};
}
final result = filter(myVar, values);
print('============================');
print(result);
}
Your problem is that you are using a SplayTreeMap to sort the keys in values, but you have used Strings to represent your numbers. This is rather confusing, since numbers are valid keys, and it also means that your SplayTreeMap sorts its keys alphabetically rather than numerically. That is most likely why your code does not work as expected.
You can either change the type of your keys to int, or provide a compare function to your SplayTreeMap that changes how the keys are sorted.
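For the second option, here is a minimal sketch that keeps the String keys but gives the SplayTreeMap a comparator that orders the keys numerically (this assumes every key parses as an integer):
import 'dart:collection';

void main() {
  final values = {"-200": 42, "-100": 21, "0": 0, "100": -22};
  // Order the String keys by their numeric value instead of alphabetically.
  final newVal = SplayTreeMap<String, int>.of(
      values, (a, b) => int.parse(a).compareTo(int.parse(b)));
  // With numeric ordering the neighbour lookups behave as expected,
  // even for values below the smallest key.
  print(newVal.lastKeyBefore('-360')); // null
  print(newVal.firstKeyAfter('-360')); // -200
}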
In the following example I have instead changed the key type to int, which makes your code work:
import 'dart:collection';
void main() {
const myVar = -360;
final values = {
-200: 42,
-100: 21,
0: 0,
100: -22,
150: -30,
200: -43,
300: -64
};
Map<int, int> filter(int myVar, Map<int, int> values) {
final newVal = SplayTreeMap.of(values);
if (values.containsKey(myVar)) {
return {myVar: values[myVar]};
}
int lowerKey;
int upperKey;
if (myVar > 0) {
lowerKey = newVal.lastKeyBefore(myVar);
upperKey = newVal.firstKeyAfter(myVar);
} else {
lowerKey = newVal.firstKeyAfter(myVar);
upperKey = newVal.lastKeyBefore(myVar);
}
print(lowerKey);
print(upperKey);
return {
if (lowerKey != null) lowerKey: values[lowerKey],
if (upperKey != null) upperKey: values[upperKey],
};
}
final result = filter(myVar, values);
print('============================');
print(result);
}
Output
-200
null
============================
{-200: 42}
I am trying to add new data under the "translations" key of my items, for all files sitting in the directory.
However, as soon as I enable the commented-out section in the method translateRecipeMatching and try to merge my recipes, my script gets so slow/wonky that a simple log(recipeMatchingList.length.toString()) in that method crashes my Visual Studio Code window and I have to restart it. Without that block the script finishes in around 10 seconds, but with that block I can wait over 5 minutes and nothing happens.
For reference, every other section runs in a reasonable time, between 1 and 5 seconds.
Why is my script becoming so slow?
The translations list has around 23434 elements, recipeMatchingList has 1419 elements, and the recipe list contains around 700 elements.
The script is this:
import 'dart:convert';
import 'dart:developer';
import 'dart:io';
import 'package:translationMatcherNorviah/languagemaps.dart';
import 'package:translationMatcherNorviah/sourcepaths.dart';
enum Mergemode { normal, recipes }
List<Map<String, dynamic>> translationList = [];
List<String> clothingCategorys = [
"Accessories",
"Bags",
"Bottoms",
"Dress-Up",
"Headwear",
"Shoes",
"Socks",
"Tops",
"Clothing Other"
];
void setJsonFileAsString(Mergemode mergemode) {
List<String> jsonPaths = getJsonDataPaths(mergemode);
//Read translations
if (translationList.isEmpty) {
List<dynamic> translationData =
json.decode(File("json/sources/translations/translationsNew.json").readAsStringSync());
for (int j = 0; j < translationData.length; j++) {
Map<String, dynamic> tempMap = translationData[j];
translationList.add(tempMap);
}
}
List<Map<String, dynamic>> recipeMatchingList = [];
if (mergemode == Mergemode.recipes) {
//Read recipeMatchingList
recipeMatchingList = loadRecipeClothMatchingList();
}
//Loop through all files in the path List
for (int i = 0; i < jsonPaths.length; i++) {
String currentPath = jsonPaths[i];
print("Extracting from Filepath: ${jsonPaths[i]}");
//open the json file
List<dynamic> data = json.decode(File(currentPath).readAsStringSync());
List<Map<String, dynamic>> categoryItemList = [];
//Make sure that the dynamic type gets converted into a Map
//Not doing it this way causes an exception
for (int j = 0; j < data.length; j++) {
Map<String, dynamic> tempMap = data[j];
categoryItemList.add(tempMap);
}
List<Map<String, dynamic>> outPutList = [];
//Merge in translations depending on the file's properties
for (int j = 0; j < categoryItemList.length; j++) {
outPutList.add(
addTranslationsToItem(categoryItemList[j], recipeMatchingList),
);
}
//Create a new file at the output path
print("Writing to File: json/output/${getFileName(currentPath)}");
File newFile = File("json/output/${getFileName(currentPath)}");
//Write data to the new File
newFile.createSync();
newFile.writeAsStringSync(json.encode(categoryItemList));
}
}
String getFileName(String filePath) {
String fileName = "";
fileName = filePath.split("/").last;
return fileName;
}
Map<String, dynamic> addTranslationsToItem(Map<String, dynamic> item, List<Map<String, dynamic>> recipeMatchingList) {
Map<String, dynamic> translation = {};
bool hasVariants = item.containsKey("variations");
bool clothGroupMatching = (clothingCategorys.contains(item["sourceSheet"]));
bool fileNameMatching = (item["sourceSheet"] == "Sheet1");
bool iconFileNameMatching = (item["sourceSheet"] == "Sheet2");
bool recipeMatching = (item["sourceSheet"] == "Recipes");
bool hasPlural = (item["sourceSheet"] == "Other");
//The method will loop through the list of translations once and collect the necessary translations for the item
for (int i = 0; i < translationList.length; i++) {
translation = translationList[i];
//See if the item does not have any specific category and add the translations in the fitting style
if (!clothGroupMatching && !recipeMatching && !fileNameMatching && !iconFileNameMatching && !hasPlural) {
item = translateMatchInternalId(item, translation);
}
//See if the item is a Recipe and add the translations in the fitting style
if (!clothGroupMatching && recipeMatching && !fileNameMatching && !iconFileNameMatching && !hasPlural) {
item = translateRecipeMatching(item, translation, recipeMatchingList);
}
}
return item;
}
Map<String, dynamic> translateMatchInternalId(Map<String, dynamic> item, Map<String, dynamic> translation) {
bool hasVariants = item.containsKey("variations");
String itemInternalId = hasVariants ? item["variations"][0]["internalId"].toString() : item["internalId"].toString();
if (translation["id"].toString() == itemInternalId) {
Map<String, dynamic> tempMap = {};
translation.forEach((key, value) {
String oldLanguageString = getOldLanguageString(key);
tempMap[oldLanguageString] = value;
});
item["translations"] = tempMap;
}
return item;
}
Map<String, dynamic> translateMatchFileName(Map<String, dynamic> item, Map<String, dynamic> translation) {
String itemFileName = item["filename"];
if (translation["id"].toString() == itemFileName) {
Map<String, dynamic> tempMap = {};
translation.forEach((key, value) {
String oldLanguageString = getOldLanguageString(key);
tempMap[oldLanguageString] = value;
});
item["translations"] = tempMap;
}
return item;
}
Map<String, dynamic> translateRecipeMatching(
Map<String, dynamic> item, Map<String, dynamic> translation, List<Map<String, dynamic>> recipeMatchingList) {
//First we need to find out if the item is a cloth type and needs GroupID Matching or Internal ID Matching
bool needsGroupIdMatching = item["category"] == "Equipment";
String craftedItemId = item["craftedItemInternalId"].toString();
if (needsGroupIdMatching) {
for (int k = 0; k < recipeMatchingList.length; k++) {
Map<String, dynamic> matchingListItem = recipeMatchingList[k];
bool matchingItemHasVariations = matchingListItem.containsKey("variations");
String matchingListItemId = "";
if (matchingItemHasVariations) {
matchingListItemId = matchingListItem["variations"][0]["internalId"].toString();
} else {
matchingListItemId = matchingListItem["internalId"].toString();
}
if (craftedItemId == matchingListItemId) {
if (matchingItemHasVariations) {
craftedItemId = matchingListItem["variations"][0]["clothGroupId"].toString();
} else {
matchingListItemId = matchingListItem["clothGroupId"].toString();
}
}
}
}
//We need to load the target item files:
if (translation["id"].toString() == craftedItemId) {
Map<String, dynamic> tempMap = {};
translation.forEach((key, value) {
String oldLanguageString = getOldLanguageString(key);
tempMap[oldLanguageString] = value;
});
item["translations"] = tempMap;
}
return item;
}
List<Map<String, dynamic>> loadRecipeClothMatchingList() {
List<dynamic> clothingData = [];
for (int i = 0; i < jsonDataPaths.length; i++) {
String filename = getFileName(jsonDataPaths[i]);
if (clothingFiles.containsValue(filename)) {
clothingData.addAll(json.decode(File(jsonDataPaths[i]).readAsStringSync()));
}
}
List<Map<String, dynamic>> clothingDataMapList = [];
for (int j = 0; j < clothingData.length; j++) {
Map<String, dynamic> tempMap = clothingData[j];
clothingDataMapList.add(tempMap);
}
print("Clothmatchinglist length is: " + clothingDataMapList.length.toString());
return clothingDataMapList;
}
I'm trying to implement this logic: I have a deck with several cards. Each card has a suit and a value. However, there may be repeated cards in the deck. I want to count how many of each card are in the deck. Suits are an enum and there's also a Card class:
enum Suit { Red, Green, Blue }
class Card {
Suit suit;
int value;
Card(this.suit, this.value);
}
This would be the deck:
final deck = Map<Card, int>();
final addCardToDeck = (Card c) {
if (deck[c] != null) deck[c]++;
else deck[c] = 1;
};
So let's say I put 2 equal cards in the deck.
final cardA = Card(Suit.Red, 7);
final cardB = Card(Suit.Red, 7);
addCardToDeck(cardA);
addCardToDeck(cardB);
Since the two cards are equal, I would expect deck[cardA] and deck[cardB] to return 2, right? Wrong! Both returned 1. So I thought, ok, must be an object reference problem, I'll overload the == operator.
bool operator ==(otherCard) {
return otherCard is Card
&& suit == otherCard.suit
&& value == otherCard.value;
}
And it still doesn't work as expected. So, how would I correctly implement this? I know I could just make a Map of Maps, so I would access it like deck[suit][value], but I find this approach neater. Is this feasible?
Whole code below.
enum Suit { Red, Blue, Green }
class Card {
Suit suit;
int value;
Card(this.suit, this.value);
bool operator ==(otherCard) {
return otherCard is Card && suit == otherCard.suit && value == otherCard.value;
}
}
void main() {
final deck = Map<Card, int>();
final addCardToDeck = (Card c) {
if (deck[c] != null) deck[c]++;
else deck[c] = 1;
};
final cardA = Card(Suit.Red, 7);
final cardB = Card(Suit.Red, 7);
addCardToDeck(cardA);
addCardToDeck(cardB);
print(deck[cardA]); // Expected 2, got 1
print(deck[cardB]); // Expected 2, got 1
}
A Map uses hashCode to locate keys, so if you override == you should also override hashCode.
https://dart.dev/guides/language/effective-dart/design#equality
Here is the updated code:
enum Suit { Red, Blue, Green }
class Card {
Suit suit;
int value;
Card(this.suit, this.value);
@override
bool operator ==(otherCard) {
return otherCard is Card &&
suit == otherCard.suit &&
value == otherCard.value;
}
@override
int get hashCode => suit.hashCode ^ value.hashCode;
}
void main() {
final deck = <Card, int>{};
final addCardToDeck = (Card c) {
if (deck[c] != null) {
deck[c]++;
} else {
deck[c] = 1;
}
};
final cardA = Card(Suit.Red, 7);
final cardB = Card(Suit.Red, 7);
final cardC = Card(Suit.Green, 4);
addCardToDeck(cardA);
addCardToDeck(cardB);
addCardToDeck(cardC);
print(deck[cardA]); // Expected 2, got 2
print(deck[cardB]); // Expected 2, got 2
print(deck[cardC]); // Expected 1, got 1
}
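As a side note, the counting closure can also be written with Map.update, which handles the missing-key case through ifAbsent; a small sketch using the Card class above:
void addCardToDeck(Map<Card, int> deck, Card c) {
  // Increment the existing count, or insert 1 if the card is not in the map yet.
  deck.update(c, (count) => count + 1, ifAbsent: () => 1);
}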
I'm implementing a join algorithm in MapReduce. In the map phase, I emit joinColumn as the key and the tuple as the value, so in the reduce method I receive (columnname, row) pairs. In the reduce phase, I need to split the rows into two groups based on which table they belong to.
I used a Multimap to do this, but the Multimap is overwriting the existing values. To try to overcome this I overrode equals and hashCode, but that did not fix the problem.
public void reduce(Text key,Iterable<Text> values,Context context) throws IOException, InterruptedException{
Multimap<String,Table> entry=LinkedListMultimap.create();
for(Text val : values){
String[] row=val.toString().split(",");
Table t = new Table();
t.setTablename(row[0]);
t.setColumns(val);
entry.put(row[0],t);
}
for (String k: entry.keySet()){
System.out.println("Key : "+k);
Collection<Table> rows=entry.get(k);
Iterator<Table> i=rows.iterator();
while(i.hasNext()){
Table t=i.next();
System.out.println(t.getColumns());
}
}
}
public class Table {
private String tablename;
private Text columns;
public String getTablename() {
return tablename;
}
public void setTablename(String tablename) {
this.tablename = tablename;
}
public Text getColumns() {
return columns;
}
public void setColumns(Text columns) {
this.columns = columns;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((columns == null) ? 0 : columns.hashCode());
result = prime * result
+ ((tablename == null) ? 0 : tablename.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Table other = (Table) obj;
if (columns == null) {
if (other.columns != null)
return false;
} else if (!columns.equals(other.columns))
return false;
if (tablename == null) {
if (other.tablename != null)
return false;
} else if (!tablename.equals(other.tablename))
return false;
return true;
}
}
I'm getting the following output:
Key : S
R, 2, Don, Larson, Newark, 555-3221
R, 2, Don, Larson, Newark, 555-3221
Key : R
R, 2, Don, Larson, Newark, 555-3221
Key : S
R, 3, Sal, Maglite, Nutley, 555-6905
R, 3, Sal, Maglite, Nutley, 555-6905
Key : R
R, 3, Sal, Maglite, Nutley, 555-6905
Key : R
S, 4, 22000, 7000, part1
Key : S
S, 4, 22000, 7000, part1
It is overwriting the existing values. Can anyone help me sort out this problem?
Your problem is that the object returned by iterating over values is reused by the iterator. Instead of just assigning the value in setColumns(), you need to copy it. Something like:
public void setColumns(Text columns) {
this.columns = new Text(columns.toString());
}
This question is a continuation of a previous question. I wrote the following piece of code to determine if File.openRead() created a Stream that could be streamed line-by-line. It turns out that the answer is no. The entire file is read and then passed to the next transform. My question is then: How do you Stream a file line-by-line in Dart?
import 'dart:async';
import 'dart:convert';
import 'dart:io';
void main(List<String> arguments) {
Stream<List<int>> stream = new File('Data.txt').openRead();
stream
.transform(const Utf8InterceptDecoder())
.transform(const LineSplitterIntercept())
.listen((line) {
// stdout.writeln(line);
}).asFuture().catchError((_) => print(_));
}
int lineSplitCount = 0;
class LineSplitterIntercept extends LineSplitter {
const LineSplitterIntercept() : super();
// Never gets called
List<String> convert(String data) {
stdout.writeln("LineSplitterIntercept.convert : Data:" + data);
return super.convert(data);
}
StringConversionSink startChunkedConversion(ChunkedConversionSink<String> sink) {
stdout.writeln("LineSplitterIntercept.startChunkedConversion Count:"+lineSplitCount.toString()+ " Sink: " + sink.toString());
lineSplitCount++;
return super.startChunkedConversion(sink);
}
}
int utfCount = 0;
class Utf8InterceptDecoder extends Utf8Decoder {
const Utf8InterceptDecoder() : super();
//never gets called
String convert(List<int> codeUnits) {
stdout.writeln("Utf8InterceptDecoder.convert : codeUnits.length:" + codeUnits.length.toString());
return super.convert(codeUnits);
}
ByteConversionSink startChunkedConversion(ChunkedConversionSink<String> sink) {
stdout.writeln("Utf8InterceptDecoder.startChunkedConversion Count:"+ utfCount.toString() + " Sink: "+ sink.toString());
utfCount++;
return super.startChunkedConversion(sink);
}
}
I think this code is useful:
import 'dart:io';
import 'dart:convert';
import 'dart:async';
main() {
final file = new File('file.txt');
Stream<List<int>> inputStream = file.openRead();
inputStream
.transform(utf8.decoder) // Decode bytes to UTF-8.
.transform(new LineSplitter()) // Convert stream to individual lines.
.listen((String line) { // Process results.
print('$line: ${line.length} bytes');
},
onDone: () { print('File is now closed.'); },
onError: (e) { print(e.toString()); });
}
If a stream is necessary, you can create it from the future that readAsLines() returns:
Stream<List<String>> stream =
new Stream.fromFuture(new File('Data.txt').readAsLines());
However, it looks simpler to me to just process the lines one by one:
List<String> lines = new File('Data.txt').readAsLinesSync();
for (var line in lines) {
stdout.writeln(line);
}
The converter's startChunkedConversion is only called once, when the transformation is started. However, the returned sink's add method is invoked multiple times with parts of the file.
It's up to the source to decide how big the chunks are, but a 37MB file (as mentioned in your previous question) will definitely be sent in smaller chunks.
If you want to see the chunks you can either intercept startChunkedConversion and return a wrapped sink, or you can put yourself between the openRead and the transformer.
Intercept:
class InterceptSink {
static int lineSplitCount = 0;
final _sink;
InterceptSink(this._sink);
add(x) {
print("InterceptSink.add Count: $lineSplitCount");
lineSplitCount++;
_sink.add(x);
}
close() { _sink.close(); }
}
class LineSplitterIntercept extends Converter {
convert(x) { throw "unimplemented"; }
startChunkedConversion(outSink) {
var lineSink = new LineSplitter().startChunkedConversion(outSink);
return new InterceptSink(lineSink);
}
}
After openRead:
file.openRead()
.transform(UTF8.decoder)
.map((x) {
  print("chunk size: ${x.length}");
  return x;
})
.transform(new LineSplitter())
...
Because none of the other answers suited my situation, here is another technique:
import 'dart:io';
import 'dart:convert';
void main()
{
var file = File('/path/to/some/file.txt');
var raf = file.openSync(mode: FileMode.read);
String line;
while ((line = readLine(raf)) != null)
{
print(line);
}
}
String readLine(RandomAccessFile raf, {String lineDelimiter = '\n'}) {
var line = '';
int byte;
var priorChar = '';
var foundDelimiter = false;
while ((byte = raf.readByteSync()) != -1) {
var char = utf8.decode([byte]);
if (isLineDelimiter(priorChar, char, lineDelimiter)) {
foundDelimiter = true;
break;
}
line += char;
priorChar = char;
}
if (line.isEmpty && foundDelimiter == false) {
line = null;
}
return line;
}
bool isLineDelimiter(String priorChar, String char, String lineDelimiter) {
if (lineDelimiter.length == 1) {
return char == lineDelimiter;
} else {
return priorChar + char == lineDelimiter;
}
}
Adjusting Brett Sutton's answer for sound null safety and wider availability:
import 'dart:io';
import 'dart:convert';
bool isLineDelimiter(String priorChar, String char, String lineDelimiter)
{
if (lineDelimiter.length == 1) {
return char == lineDelimiter;
} else {
return priorChar + char == lineDelimiter;
}
}
/// Reads one line and returns its contents.
///
/// If end-of-file has been reached and the line is empty null is returned.
String? readLine(RandomAccessFile raf,
{String lineDelimiter = '\n', void Function()? onEOF}) {
String line = '';
int byte;
String priorChar = '';
byte = raf.readByteSync();
while (byte != -1) {
String char = utf8.decode([byte]);
if (isLineDelimiter(priorChar, char, lineDelimiter)) return line;
line += char;
priorChar = char;
byte = raf.readByteSync();
}
onEOF?.call();
if (line.isEmpty) return null;
return line;
}
EDIT 1:
I wanted to add some more line-specific functions I made:
/// Skips one line and returns the last byte read.
///
/// If end-of-file has been reached -1 is returned.
int skipLine(RandomAccessFile raf,
{String lineDelimiter = '\n', void Function()? onEOF}) {
int byte;
String priorChar = '';
byte = raf.readByteSync();
while (byte != -1) {
String char = utf8.decode([byte]);
if (isLineDelimiter(priorChar, char, lineDelimiter)) return byte;
priorChar = char;
byte = raf.readByteSync();
}
return byte;
}
/// Reads all lines in the file and executes [onLine] per each.
///
/// If [onLine] returns true the function terminates.
void processLines(
RandomAccessFile raf, {
String lineDelimiter = '\n',
required bool? Function(String line, bool eofReached) onLine,
}) {
bool _eofReached = false;
do {
String? _line;
_line = readLine(raf,
lineDelimiter: lineDelimiter, onEOF: () => _eofReached = true);
if (_line == null) return;
if (onLine(_line, _eofReached) == true) return;
} while (!_eofReached);
}
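To show how these helpers fit together, here is a hypothetical usage sketch (the file name is a placeholder, and readLine/processLines above are assumed to be in the same file):
import 'dart:io';

void main() {
  // Hypothetical input file; any text file will do.
  final raf = File('data.txt').openSync();
  try {
    processLines(raf, onLine: (line, eofReached) {
      print(line);
      return null; // return true to stop early
    });
  } finally {
    raf.closeSync();
  }
}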
I'm trying to write a piece of code that will take an ANTLR4 parser and use it to generate ASTs for inputs similar to the ones given by the -tree option on grun (misc.TestRig). However, I'd additionally like for the output to include all the line number/offset information.
For example, instead of printing
(add (int 5) '+' (int 6))
I'd like to get
(add (int 5 [line 3, offset 6:7]) '+' (int 6 [line 3, offset 8:9]) [line 3, offset 5:10])
Or something similar.
There aren't a tremendous number of visitor examples for ANTLR4 yet, but I am pretty sure I can do most of this by copying the default implementation for toStringTree (used by grun). However, I do not see any information about the line numbers or offsets.
I expected to be able to write super simple code like this:
String visit(ParseTree t) {
return "(" + t.productionName + t.visitChildren() + t.lineNumber + ")";
}
but it doesn't seem to be this simple. I'm guessing I should be able to get line number information from the parser, but I haven't figured out how to do so. How can I grab this line number/offset information in my traversal?
To fill in the few blanks in the solution below, I used:
List<String> ruleNames = Arrays.asList(parser.getRuleNames());
parser.setBuildParseTree(true);
ParserRuleContext prc = parser.program();
ParseTree tree = prc;
to get the tree and the ruleNames. program is the name for the top production in my grammar.
The Trees.toStringTree method can be implemented using a ParseTreeListener. The following listener produces exactly the same output as Trees.toStringTree.
public class TreePrinterListener implements ParseTreeListener {
private final List<String> ruleNames;
private final StringBuilder builder = new StringBuilder();
public TreePrinterListener(Parser parser) {
this.ruleNames = Arrays.asList(parser.getRuleNames());
}
public TreePrinterListener(List<String> ruleNames) {
this.ruleNames = ruleNames;
}
@Override
public void visitTerminal(TerminalNode node) {
if (builder.length() > 0) {
builder.append(' ');
}
builder.append(Utils.escapeWhitespace(Trees.getNodeText(node, ruleNames), false));
}
@Override
public void visitErrorNode(ErrorNode node) {
if (builder.length() > 0) {
builder.append(' ');
}
builder.append(Utils.escapeWhitespace(Trees.getNodeText(node, ruleNames), false));
}
@Override
public void enterEveryRule(ParserRuleContext ctx) {
if (builder.length() > 0) {
builder.append(' ');
}
if (ctx.getChildCount() > 0) {
builder.append('(');
}
int ruleIndex = ctx.getRuleIndex();
String ruleName;
if (ruleIndex >= 0 && ruleIndex < ruleNames.size()) {
ruleName = ruleNames.get(ruleIndex);
}
else {
ruleName = Integer.toString(ruleIndex);
}
builder.append(ruleName);
}
@Override
public void exitEveryRule(ParserRuleContext ctx) {
if (ctx.getChildCount() > 0) {
builder.append(')');
}
}
@Override
public String toString() {
return builder.toString();
}
}
The class can be used as follows:
List<String> ruleNames = ...;
ParseTree tree = ...;
TreePrinterListener listener = new TreePrinterListener(ruleNames);
ParseTreeWalker.DEFAULT.walk(listener, tree);
String formatted = listener.toString();
The class can be modified to produce the information in your output by updating the exitEveryRule method:
@Override
public void exitEveryRule(ParserRuleContext ctx) {
if (ctx.getChildCount() > 0) {
Token positionToken = ctx.getStart();
if (positionToken != null) {
builder.append(" [line ");
builder.append(positionToken.getLine());
builder.append(", offset ");
builder.append(positionToken.getStartIndex());
builder.append(':');
builder.append(positionToken.getStopIndex());
builder.append("])");
}
else {
builder.append(')');
}
}
}