Parsing CSV files with unknown delimiters and date formats in Java [duplicate]

For a school Java project I want to write a program that parses CSV files I do not know in advance. I do know the datatype of each column, although I do not know the delimiter.
The problem I do not even marginally know how to solve is parsing Date or even DateTime columns: they can be in one of many formats.
I found many libraries but have no clue which is the best for my needs:
http://opencsv.sourceforge.net/
http://www.csvreader.com/java_csv.php
http://supercsv.sourceforge.net/
http://flatpack.sourceforge.net/
The problem is that I am a total Java beginner. I am afraid none of those libraries can do what I need, or that I won't be able to convince them to do it.
I bet there are a lot of people here who have code samples that could get me started in no time. What I need:
automatically split into columns (delimiter unknown, column types are known)
cast to the column type (should cope with $, %, etc.)
convert dates to Java Date or Calendar objects
It would be nice to get as many code samples as possible by email.
Thanks a lot!
AS

You also have the Apache Commons CSV library; maybe it does what you need. See the guide. It was updated to release 1.1 in November 2014.
Also, for a foolproof solution, I think you'll need to code it yourself: with SimpleDateFormat you can choose your formats and try several patterns in turn; if the date doesn't match any of your pre-defined patterns, it isn't a date.
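To make that concrete, here is a minimal sketch of the try-each-format idea (the pattern list is an illustrative assumption; put whatever formats your data may contain, longest patterns first, since SimpleDateFormat happily matches a valid prefix):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateGuesser {
    // Candidate patterns are assumptions; extend with the formats you expect.
    // Longer patterns come first because parse() accepts a matching prefix.
    private static final String[] PATTERNS = {
        "yyyy-MM-dd'T'HH:mm:ss", "yyyy-MM-dd", "dd.MM.yyyy", "MM/dd/yyyy"
    };

    /** Returns the parsed Date, or null if no known pattern matches. */
    public static Date tryParse(String value) {
        for (String pattern : PATTERNS) {
            SimpleDateFormat fmt = new SimpleDateFormat(pattern);
            fmt.setLenient(false); // reject nonsense like month 13 instead of rolling over
            try {
                return fmt.parse(value);
            } catch (ParseException ignored) {
                // fall through and try the next pattern
            }
        }
        return null; // not a date in any of the known formats
    }
}

For example, tryParse("31.12.2014") matches the dd.MM.yyyy pattern, while tryParse("hello") returns null.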

There is a serious problem with using
String[] strArr=line.split(",");
to parse CSV files: data values can themselves contain commas, in which case they must be quoted, and commas inside quotes must not be treated as separators.
There is a very, very simple way to parse this:
import java.io.Reader;
import java.util.List;
import java.util.Vector;

/**
 * Returns a row of values as a list,
 * or null if you are past the end of the input stream.
 */
public static List<String> parseLine(Reader r) throws Exception {
    int ch = r.read();
    while (ch == '\r') {
        // ignore carriage-return chars wherever they appear, particularly just before end of file
        ch = r.read();
    }
    if (ch < 0) {
        return null;
    }
    Vector<String> store = new Vector<String>();
    StringBuffer curVal = new StringBuffer();
    boolean inquotes = false;
    boolean started = false;
    while (ch >= 0) {
        if (inquotes) {
            started = true;
            if (ch == '\"') {
                inquotes = false;
            } else {
                curVal.append((char) ch);
            }
        } else {
            if (ch == '\"') {
                inquotes = true;
                if (started) {
                    // if this is the second quote in a value, add a quote;
                    // this handles a doubled quote in the middle of a value
                    curVal.append('\"');
                }
            } else if (ch == ',') {
                store.add(curVal.toString());
                curVal = new StringBuffer();
                started = false;
            } else if (ch == '\r') {
                // ignore CR characters
            } else if (ch == '\n') {
                // end of a line, break out
                break;
            } else {
                curVal.append((char) ch);
            }
        }
        ch = r.read();
    }
    store.add(curVal.toString());
    return store;
}
There are many advantages to this approach. Note that each character is touched EXACTLY once: there is no reading ahead, no pushing back into the buffer, no searching ahead to the end of the line and then copying the line before parsing. The parser works purely from the stream and creates each string value once. It works on header lines and data lines alike; you just deal with the returned list as appropriate for each. You give it a reader, so the underlying stream has been converted to characters using any encoding you choose. The stream can come from any source (a file, an HTTP POST, an HTTP GET) and you parse the stream directly. And since this is a static method, there is no object to create and configure, and when it returns, no memory is being held.
You can find a full discussion of this code, and why this approach is preferred, in my blog post on the subject: The Only Class You Need for CSV Files.

My approach would not be to start by writing your own API. Life's too short, and there are more pressing problems to solve. In this situation, I typically:
Find a library that appears to do what I want. If one doesn't exist, then implement it.
If a library does exist, but I'm not sure it'll be suitable for my needs, write a thin adapter API around it, so I can control how it's called. The adapter API expresses the API I need, and it maps those calls to the underlying API.
If the library doesn't turn out to be suitable, I can swap another one in underneath the adapter API (whether it's another open source one or something I write myself) with a minimum of effort, without affecting the callers. A sketch of this follows.
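For instance, a minimal sketch of such an adapter (CsvSource is an invented name; the delegate is OpenCSV's CSVReader with its readNext() method, as in the pre-5.x API where readNext() throws only IOException):

import java.io.IOException;
import java.io.Reader;
import java.util.Arrays;
import java.util.List;

// The API the rest of my code programs against.
interface CsvSource extends AutoCloseable {
    // Returns the next row's column values, or null at end of input.
    List<String> nextRow() throws IOException;
}

// Adapter mapping CsvSource onto one concrete library (OpenCSV here).
class OpenCsvSource implements CsvSource {
    private final com.opencsv.CSVReader reader;

    OpenCsvSource(Reader in) {
        this.reader = new com.opencsv.CSVReader(in);
    }

    public List<String> nextRow() throws IOException {
        String[] row = reader.readNext();
        return row == null ? null : Arrays.asList(row);
    }

    public void close() throws IOException {
        reader.close();
    }
}

If OpenCSV proves unsuitable, only OpenCsvSource has to be rewritten; code written against CsvSource never changes.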
Start with something someone has already written. Odds are, it'll do what you want. You can always write your own later, if necessary. OpenCSV is as good a starting point as any.

I had to use a CSV parser about 5 years ago. It seems there are at least two CSV standards: http://en.wikipedia.org/wiki/Comma-separated_values and what Microsoft does in Excel.
I found this library, which eats both: http://ostermiller.org/utils/CSV.html, but AFAIK it has no way of inferring what data type the columns are.

You might want to have a look at this specification for CSV. Bear in mind that there is no officially recognized specification.
If you do not know the delimiter, it will not be possible to do this, so you have to find out somehow. If you can do a manual inspection of the file, you should quickly be able to see what it is and hard-code it in your program. If the delimiter can vary, your only hope is to deduce it from the formatting of the known data. When Excel imports CSV files it lets the user choose the delimiter, and this is a solution you could use as well.
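A crude sketch of that deduce-it-from-the-data idea: count candidate delimiters on a few sample lines and pick the one that occurs a consistent, nonzero number of times (the candidate set below is an assumption; add whatever separators you expect):

import java.util.List;

public class DelimiterSniffer {
    private static final char[] CANDIDATES = {',', ';', '\t', '|'};

    /**
     * Guesses the delimiter: the first candidate that occurs the same
     * nonzero number of times on every sample line wins.
     * Returns '\0' if no candidate fits; sampleLines must be non-empty.
     */
    public static char sniff(List<String> sampleLines) {
        for (char candidate : CANDIDATES) {
            int expected = count(sampleLines.get(0), candidate);
            if (expected == 0) {
                continue; // never appears, cannot be the delimiter
            }
            boolean consistent = true;
            for (String line : sampleLines) {
                if (count(line, candidate) != expected) {
                    consistent = false;
                    break;
                }
            }
            if (consistent) {
                return candidate;
            }
        }
        return '\0';
    }

    private static int count(String s, char c) {
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == c) n++;
        }
        return n;
    }
}

Note this is a heuristic: quoted fields containing the delimiter will throw the counts off, so inspect the result before trusting it.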

I agree with @Brian Clapper. I have used SuperCSV as a parser, though I've had mixed results. I enjoy its versatility, but there are some situations within my own CSV files for which I have not been able to reconcile it "yet". I have faith in this product and would recommend it overall; I'm just missing something simple, no doubt, that I'm doing wrong in my own implementation.
SuperCSV can parse the columns into various formats, do edits on the columns, etc. It's worth taking a look-see. It has examples as well, and is easy to follow.
The one/only limitation I'm having is catching an 'empty' column and parsing it into an Integer or maybe a blank, etc. I'm getting null-pointer errors, but the javadocs suggest each cellProcessor checks for nulls first. So, I'm blaming myself first, for now. :-)
Anyway, take a look at SuperCSV. http://supercsv.sourceforge.net/

At a minimum you are going to need to know the column delimiter.

Basically you will need to read the file line by line.
Then you will need to split each line by the delimiter, say a comma (CSV stands for comma-separated values), with
String[] strArr = line.split(",");
This will turn it into an array of strings which you can then manipulate, for example with
String name = strArr[0];
int yearOfBirth = Integer.valueOf(strArr[1]);
int monthOfBirth = Integer.valueOf(strArr[2]);
int dayOfBirth = Integer.valueOf(strArr[3]);
// note: GregorianCalendar months are zero-based, hence the -1
GregorianCalendar dob = new GregorianCalendar(yearOfBirth, monthOfBirth - 1, dayOfBirth);
Student student = new Student(name, dob); // let's pretend you are creating instances of Student
You will need to do this for every line, so wrap this code in a while loop. (If you don't know the delimiter, just open the file in a text editor.)

I would recommend that you start by pulling your task apart into its component parts:
Read string data from a CSV
Convert string data to the appropriate format
Once you do that, it should be fairly trivial to use one of the libraries you link to (which most certainly will handle task #1). Then iterate through the returned values, and cast/convert each String value to the value you want.
If the question is how to convert strings to different objects, it's going to depend on what format you are starting with and what format you want to end up with.
DateFormat.parse(), for example, will parse dates from strings. See SimpleDateFormat for quickly constructing a DateFormat for a certain string representation.
Integer.parseInt() will parse integers from strings.
Currency, you'll have to decide how you want to capture. If you just want to capture it as a float, then Float.parseFloat() will do the trick (use String.replace() to remove all $ and commas before you parse it). Or you can parse into a BigDecimal (so you don't have rounding problems). There may be a better class for currency handling; I don't do much of that, so I am not familiar with that area of the JDK.
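As a hedged sketch of that strip-then-parse idea for currency (the $-and-comma cleanup is the assumption here; adjust it to the symbols in your data):

import java.math.BigDecimal;

public class CurrencyParser {
    /** Parses strings like "$1,234.56" into an exact decimal value. */
    public static BigDecimal parseDollars(String raw) {
        // Strip the currency sign and grouping commas before parsing;
        // BigDecimal keeps exact cents, avoiding float/double rounding.
        String cleaned = raw.replace("$", "").replace(",", "").trim();
        return new BigDecimal(cleaned);
    }
}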

Writing your own parser is fun, but you should probably have a look at
Open CSV. It provides numerous ways of accessing the CSV and also allows you to generate CSV. And it does handle escapes properly. As mentioned in another post, there is also a CSV-parsing lib in Apache Commons, but that one isn't released yet.

Related

How to force nom to parse the whole input string?

I am working with nom version 6.1.2 and I am trying to parse strings like "A 2 1 2".
At the moment I would be happy to at least differentiate between inputs that fit the requirements and inputs that don't. (After that I would like to change the output to a tuple that has the "A" as its first value and, as its second value, a vector of the u16 numbers.)
The string always has to start with a capital A, followed by at least one space and then a number. Furthermore, there can be as many additional spaces and numbers as you want. It is just important to end with a number and not with a space. All numbers will be within the range of u16. I already wrote the following function:
extern crate nom;

use nom::IResult; // needed for the function's return type (missing from the original imports)
use nom::sequence::{preceded, pair};
use nom::character::streaming::{char, space1};
use nom::combinator::recognize;
use nom::multi::many1;
use nom::character::complete::digit1;

pub fn parse_and(line: &str) -> IResult<&str, &str> {
    preceded(
        char('A'),
        recognize(
            many1(
                pair(
                    space1,
                    digit1
                )
            )
        )
    )(line)
}
Also, I want to mention that there are answers for problems like this which use CompleteStr, but that isn't an option any more because it was removed some time ago.
People explained that the reason for this behavior is that nom doesn't know where the slice of a string ends, and that is why I get parse_and: Err(Incomplete(Size(1))) as the answer for the provided example input.
It turned out that one of the use declarations created the problem. The documentation (somewhere in a paragraph far enough down that I nearly missed it) says:
"Streaming / Complete
Some of nom's modules have streaming or complete submodules. They hold different variants of the same combinators.
A streaming parser assumes that we might not have all of the input data. This can happen with some network protocol or large file parsers, where the input buffer can be full and need to be resized or refilled.
A complete parser assumes that we already have all of the input data. This will be the common case with small files that can be read entirely to memory."
Therefore, the solution to my problem is to import use nom::character::complete::{char, space1}; instead of use nom::character::streaming::{char, space1};. That worked for me :)

How to prevent Flex from ignoring previous analysis?

I recently started using Lex. As a simple way to explain the problem I encountered: suppose I'm trying to build a lexical analyser with Flex that prints all the letters and also all the bigrams in a given text. That seems very easy and simple, but once I implemented it, I realised that it shows bigrams first and only shows letters when they stand alone. For example, for the following text
QQQZ ,JQR
The result is
Bigram QQ
Bigram QZ
Bigram JQ
Letter R
Done
This is my lex code:
%{
%}
letter [A-Za-z]
Separ  [ \t\n]
%%
{letter}    { printf(" Letter %c\n", yytext[0]); }
{letter}{2} { printf(" Bigram %s\n", yytext); }
%%
int main()
{
    yylex();
    printf("Done");
    return 0;
}
My question is: how can I realise the two analyses separately, knowing that my actual problem isn't as simple as this example?
Lexical analysers divide the source text into separate tokens. If your problem looks like that, then (f)lex is an appropriate tool. If your problem does not look like that, then (f)lex is probably not the correct tool.
Doing two simultaneous analyses of text is not really a use case for (f)lex. One possibility would be to use two separate reentrant lexical analysers, arranging to feed them the same inputs. However, that will be a lot of work for a problem which could easily be solved in a few lines of C.
Since you say that your problem is different from the simple problem in your question, I did not bother to either write the simple C code or the rather more complicated code to generate and run two independent lexical analysers, since it is impossible to know whether either of those solutions is at all relevant.
If your problem really is matching two (or more) different lexemes from the same starting position, you could use one of two strategies, both quite ugly (IMHO):
I'm assuming the existence of handler functions:
void handle_letter(char ch);
void handle_bigram(char* s); /* Expects NUL-terminated string */
void handle_trigram(char* s); /* Expects NUL-terminated string */
For historical reasons, lex implements the REJECT action, which causes the current match to be discarded. The idea was to let you process a match, and then reject it in order to process a shorter (or alternate) match. With flex, the use of REJECT is highly discouraged because it is extremely inefficient and also prevents the lexer from resizing the input buffer, which arbitrarily limits the length of a recognisable token. However, in this particular use case it is quite simple:
[[:alpha:]][[:alpha:]][[:alpha:]] handle_trigram(yytext); REJECT;
[[:alpha:]][[:alpha:]] handle_bigram(yytext); REJECT;
[[:alpha:]] handle_letter(*yytext);
If you want to try this solution, I recommend using flex's debug facility (flex -d ...) in order to see what is going on.
See debugging options and REJECT documentation.
The solution I would actually recommend, although the code is a bit clunkier, is to use yyless() to reprocess part of the recognised token. This is quite a bit more efficient than REJECT; yyless() just changes a single pointer, so it has no impact on speed. Without REJECT, we have to know all the lexeme handlers which will be needed, but that's not very difficult. A complication is the interface for handle_bigram, which requires a NUL-terminated string. If your handler didn't impose this requirement, the code would be simpler.
[[:alpha:]][[:alpha:]][[:alpha:]]  { handle_trigram(yytext);
                                     char tmp = yytext[2];
                                     yytext[2] = 0;
                                     handle_bigram(yytext);
                                     yytext[2] = tmp;
                                     handle_letter(yytext[0]);
                                     yyless(1);
                                   }
[[:alpha:]][[:alpha:]]             { handle_bigram(yytext);
                                     handle_letter(yytext[0]);
                                     yyless(1);
                                   }
[[:alpha:]]                        handle_letter(*yytext);
See yyless() documentation

What are %s and %d?

I am trying to learn iOS programming, and I suppose this is a bit of a reverse question.
I have just completed a tutorial on YouTube using Xcode to create a simple iPhone app that will allow you to store, list and delete data from an SQLite3 database (as the app I want to produce will need a database).
However, the bloke who put the video up didn't seem to explain 'why' he did what he did, so I am now trying to understand what each bit of code does.
(I come from a PHP and SQL web programming background, so I understand accessing databases, calling data rows etc. to show the content on a website.)
The one part of this iOS bit I don't quite understand is the %s and %d values used, as they didn't seem to be declared anywhere.
The code is:
if(sqlite3_open([dbPathString UTF8String], &personDB)==SQLITE_OK) {
    NSString *inserStmt = [NSString stringWithFormat:@"INSERT INTO PERSONS(NAME,AGE) values ('%s', '%d')",[self.nameField.text UTF8String],[self.ageField.text intValue]];
Now %s and %d clearly get their values from self.nameField and self.ageField. However, that implies that I could only ever submit two values into a table? Or are there other % placeholders for other values? But surely then there is a maximum of 26.
I would be grateful for any explanation you could give.
Also, in addition, does anyone have any suggestions about other fully explained ways to learn to code for iOS? Especially for a starter just learning iOS programming for the first time with limited C programming skills beforehand.
The area I am looking at is to create an app that will store some text fields and an image, where the text is stored in a database and the image either in the database or as a link, appropriately named.
I'd like to be able to manipulate the image to resize it so it is optimised for the iPhone display (I don't need an HD image in the app).
Later I'd like to work out how to either upload the local database (sqlite3) file to online storage (either my own server or Dropbox), or synchronise it to an SQL database (from an initial look, just exporting the file and embedding the images into a field would be better for this project, even though I know it is not the normal way of doing things).
%s and %d are format specifiers for a null-terminated array of characters and a signed 32-bit integer respectively. You can find the details about specifiers in the String Programming Guide. However, you should not format the string this way for a SQLite statement as it puts you at risk of SQL injection. Instead you should bind the values using ? and the appropriate sqlite3_bind* function. For your situation you would use sqlite3_bind_text for NAME and sqlite3_bind_int for AGE.
Have a look at the class reference:
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html
Here are the string format specifiers:
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html#//apple_ref/doc/uid/TP40004265
As you can see, %d is outputting an integer while %s is outputting a string.
Part 1:
The % convention of "string format specifiers" is a common standard for string substitution.
They are not variables, but typed substitution placeholders.
%s --> string
%d --> number
Part 2:
You might check out the iTunes U course:
iPhone Application Programming '11
by Prof. Jan Borchers
https://itunes.apple.com/us/itunes-u/iphone-application-programming/id474416629
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
%s
Null-terminated array of 8-bit unsigned characters. Because the %s specifier causes the characters to be interpreted in the system default encoding, the results can be variable, especially with right-to-left languages. For example, with RTL, %s inserts direction markers when the characters are not strongly directional. For this reason, it’s best to avoid %s and specify encodings explicitly.
%d, %D
Signed 32-bit integer (int).
That is what is called a format string; basically, it is a way to inject values into a string. The character after the % sign indicates the datatype the value should be formatted as. In your case, %s is used to indicate a string value and %d a decimal (integer) value.
This type of string formatting is extremely common; many programming languages provide some mechanism for performing it, and the formatting symbols are largely standardized. You can find more information on the C++ website.

What is parsing in terms that a new programmer would understand? [closed]

I am a college student getting my Computer Science degree. A lot of my fellow students really haven't done much programming. They've done their class assignments, but let's be honest: those questions don't really teach you how to program.
Several other students have asked me how to parse things, and I'm never quite sure how to explain it to them. Is it best to start by going line by line looking for substrings, or to give them the more complicated lecture about using proper lexical analysis to create tokens, BNF, and all that other stuff? They never quite understand it when I try to explain it.
What's the best approach to explain this without confusing them or discouraging them from actually trying?
I'd explain parsing as the process of turning some kind of data into another kind of data.
In practice, for me this is almost always turning a string, or binary data, into a data structure inside my program.
For example, turning
":Nick!User@Host PRIVMSG #channel :Hello!"
into (C)
struct irc_line {
    char *nick;
    char *user;
    char *host;
    char *command;
    char **arguments;
    char *message;
} sample = { "Nick", "User", "Host", "PRIVMSG",
             (char *[]){ "#channel", NULL },  /* compound literal so the char** member has a valid initializer */
             "Hello!" };
Parsing is the process of analyzing text made of a sequence of tokens to determine its grammatical structure with respect to a given (more or less) formal grammar.
The parser then builds a data structure based on the tokens. This data structure can then be used by a compiler, interpreter or translator to create an executable program or library.
If I gave you an English sentence, and asked you to break down the sentence into its parts of speech (nouns, verbs, etc.), you would be parsing the sentence.
That's the simplest explanation of parsing I can think of.
That said, parsing is a non-trivial computational problem. You have to start with simple examples, and work your way up to the more complex.
What is parsing?
In computer science, parsing is the process of analysing text to determine whether it belongs to a specific language or not (i.e. whether it is syntactically valid for that language's grammar). It is an informal name for the syntactic analysis process.
For example, consider the language a^n b^n (n characters A followed by the same number of characters B). A parser for that language would accept the input AABB and reject the input AAAB. That is what a parser does.
In addition, during this process a data structure could be created for further processing. In my previous example, it could, for instance, store the As and the Bs in two separate stacks.
Anything that happens after that, like giving meaning to AA or BB or transforming it into something else, is not parsing. Giving meaning to parts of an input sequence of tokens is called semantic analysis.
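As a concrete sketch of the a^n b^n example (counting instead of stacking, which is equivalent for this language; class and method names are invented):

public class AnBnParser {
    /** Accepts strings of n 'A's followed by exactly n 'B's, n >= 1. */
    public static boolean accepts(String input) {
        int i = 0;
        while (i < input.length() && input.charAt(i) == 'A') i++; // count the As
        int as = i;
        while (i < input.length() && input.charAt(i) == 'B') i++; // count the Bs
        int bs = i - as;
        // accepts "AABB" (2 As, 2 Bs), rejects "AAAB" (3 As, 1 B)
        return as >= 1 && as == bs && i == input.length();
    }
}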
What isn't parsing?
Parsing is not transforming one thing into another. Transforming A into B is, in essence, what a compiler does. Compiling takes several steps; parsing is only one of them.
Parsing is not extracting meaning from a text. That is semantic analysis, another step of the compiling process.
What is the simplest way to understand it?
I think the best way to understand the concept of parsing is to begin with simpler concepts. The simplest one in the language-processing subject is the finite automaton. It is a formalism for parsing regular languages, such as regular expressions.
It is very simple: you have an input, a set of states and a set of transitions. Consider the following language built over the alphabet { A, B }: L = { w | w starts with 'AA' or 'BB' }. The automaton below represents a possible parser for that language, all of whose valid words start with 'AA' or 'BB'.
   A-->(q1)--A-->(qf)
  /
(q0)
  \
   B-->(q2)--B-->(qf)
It is a very simple parser for that language. You start at (q0), the initial state, then you read a symbol from the input; if it is A you move to the (q1) state, otherwise (it is a B; remember the alphabet is only A and B) you move to the (q2) state, and so on. If you reach the (qf) state, the input was accepted.
As it is visual, you only need a pencil and a piece of paper to explain what a parser is to anyone, including a child. I think this simplicity is what makes automata the most suitable way of teaching language-processing concepts, such as parsing.
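For completeness, the automaton above transliterated directly into code, as a minimal sketch (the states q0, q1, q2 are encoded as integers; reaching qf returns true):

public class StartsWithAAorBB {
    /** Runs the automaton: accepts words over {A, B} that start with AA or BB. */
    public static boolean accepts(String w) {
        int state = 0;                        // q0, the initial state
        for (int i = 0; i < w.length(); i++) {
            char ch = w.charAt(i);
            if (state == 0) {                 // q0: the first symbol picks the branch
                state = (ch == 'A') ? 1 : 2;  // A -> q1, B -> q2
            } else if (state == 1) {          // q1: a second A reaches qf
                return ch == 'A';             // no transition on B: reject
            } else {                          // q2: a second B reaches qf
                return ch == 'B';
            }
        }
        return false;                         // input ended before reaching qf
    }
}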
Finally, being a Computer Science student, you will study such concepts in depth in theoretical computer science classes such as Formal Languages and Theory of Computation.
Have them try to write a program that can evaluate arbitrary simple arithmetic expressions. This is a simple problem to understand, but as you start getting deeper into it, a lot of basic parsing starts to make sense.
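The skeleton of that exercise might look like this minimal recursive-descent evaluator for +, -, *, / and parentheses (a sketch that assumes well-formed input; there is no error handling):

public class Arith {
    private final String src;
    private int pos = 0;

    private Arith(String expression) {
        this.src = expression.replaceAll("\\s+", ""); // ignore whitespace
    }

    public static double eval(String expression) {
        return new Arith(expression).expr();
    }

    // expr := term (('+' | '-') term)*
    private double expr() {
        double v = term();
        while (pos < src.length() && (src.charAt(pos) == '+' || src.charAt(pos) == '-')) {
            v = (src.charAt(pos++) == '+') ? v + term() : v - term();
        }
        return v;
    }

    // term := factor (('*' | '/') factor)*
    private double term() {
        double v = factor();
        while (pos < src.length() && (src.charAt(pos) == '*' || src.charAt(pos) == '/')) {
            v = (src.charAt(pos++) == '*') ? v * factor() : v / factor();
        }
        return v;
    }

    // factor := number | '(' expr ')'
    private double factor() {
        if (src.charAt(pos) == '(') {
            pos++;                 // consume '('
            double v = expr();
            pos++;                 // consume ')'
            return v;
        }
        int start = pos;
        while (pos < src.length()
                && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) {
            pos++;
        }
        return Double.parseDouble(src.substring(start, pos));
    }
}

Each grammar rule becomes one method, which is exactly the point of the exercise: once students see expr/term/factor mirror the precedence levels, grammars stop being abstract.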
Parsing is about READING data in one format so that you can use it for your needs.
I think you need to teach them to think like this. So, this is the simplest way I can think of to explain parsing to someone new to the concept.
Generally, we parse data one line at a time, because it is generally easier for humans to think this way (divide and conquer) and also easier to code.
We call a field every minimal, indivisible piece of data. Name is a field, Age is another field, and Surname is another field, for example.
In a line we can have various fields. In order to distinguish them, we can delimit fields with separators or assign a maximum length to each field.
For example:
By separating fields with a comma
Paul,20,Jones
Or by length (the name can have 20 letters max, the age up to 3 digits, the surname up to 20 letters)
Paul                020Jones
Any of the above sets of fields is called a record.
To separate records, we need a record delimiter. A dot will be enough (though you know you could also use CR/LF).
A list could be:
Michael,39,Jordan.Shaquille,40,O'neal.Lebron,24,James.
or, with CR/LF,
Michael,39,Jordan
Shaquille,40,O'neal
Lebron,24,James
You can ask them to list 10 NBA (or NFL) players they like. Then they should type the list according to a format, and make a program to parse it and display each record. One group can make the list in a comma-separated format and a program to parse a list in a fixed-size format, and vice versa.
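A tiny sketch of that exercise for the dot-and-comma format above (delimiters as assumed in the answer):

public class RecordSplitter {
    public static void main(String[] args) {
        String list = "Michael,39,Jordan.Shaquille,40,O'neal.Lebron,24,James.";
        // Records end with '.', fields are separated by ','.
        for (String record : list.split("\\.")) {
            String[] fields = record.split(",");
            System.out.printf("name=%s age=%s surname=%s%n",
                    fields[0], fields[1], fields[2]);
        }
    }
}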
Parsing to me is breaking down something into meaningful parts... using a definable or predefined known, common set of part "definitions".
For programming languages there would be keyword parts, usable punctuation sequences...
For pumpkin pie it might be something like the crust, filling and toppings.
For written languages there might be what a word is, a sentence, what a verb is...
For spoken languages it might be tone, volume, mood, implication, emotion, context
Syntax analysis (as well as common sense, after all) would tell you whether what you are parsing is a pumpkin pie or a programming language. Does it have a crust? Well, maybe it's pumpkin pudding, or perhaps a spoken language!
One thing to note about parsing stuff is there are usually many ways to break things into parts.
For example you could break up a pumpkin pie by cutting it from the center to the edge or from the bottom to the top or with a scoop to get the filling out or by using a sledge hammer or eating it.
And how you parse things would determine if doing something with those parts will be easy or hard.
In the "computer languages" world, there are common ways to parse text source code. These common methods (algorithims) have titles or names. Search the Internet for common methods/names for ways to parse languages. Wikipedia can help in this regard.
In linguistics, to parse is to divide language into small components that can be analyzed. For example, parsing this sentence would involve dividing it into words and phrases and identifying the type of each component (e.g., verb, adjective, or noun).
Parsing is a very important part of many computer science disciplines. For example, compilers must parse source code to be able to translate it into object code. Likewise, any application that processes complex commands must be able to parse the commands. This includes virtually all end-user applications.
Parsing is often divided into lexical analysis and semantic parsing. Lexical analysis concentrates on dividing strings into components, called tokens, based on punctuation and other keys. Semantic parsing then attempts to determine the meaning of the string.
http://www.webopedia.com/TERM/P/parse.html
Simple explanation: Parsing is breaking a block of data into smaller pieces (tokens) by following a set of rules (using delimiters, for example),
so that this data can be processed piece by piece (managed, analysed, interpreted, transmitted, etc.).
Examples: Many applications (like spreadsheet programs) use the CSV (Comma Separated Values) file format to import and export data. The CSV format makes it possible for the applications to process this data with the help of a special parser.
Web browsers have special parsers for HTML and CSS files. JSON parsers exist. All special file formats must have some parser designed specifically for them.

Decoding byte stream

I have a series of messages that are defined by independent structs. These structs share a common header and are sent between applications. I am creating a decoder that will take the raw data captured from messages built using these structs and decode/parse it into plain text.
I have over 1000 different messages that need to be decoded, so I am not sure if defining all the struct formats in XML and then using XSL or some translation is the way to go, or if there is a better way to do this.
There are times when I will need to decode logs containing over a million messages, so performance is a concern.
Any recommendations for techniques/tools/algorithms to go about creating the decoder/parser?
struct:
struct {
    dword messageid;
    dword datavalue1;
    dword datavalue2;
} struct1;
Example raw data:
010101010A0A0A0A0F0F0F0F
Decoded message (desired output):
message id: 0x01010101, datavalue1: 0x0A0A0A0A, datavalue2: 0x0F0F0F0F
I am using C++ to do this development.
Regarding "performance" - if you are using disk IO and possible display IO I doubt your parser/decoder will have much effect unless you use a truly horrible algorithm.
I am also unsure about what the problem is - Given the question right now - you have 3 DWORDs in a struct and you claim that there are over 1000 unique messages based on these values.
Your decoded message does not imply to me that you need any kind of parsing - just straight output seems to work (convert from bytes to ascii representation of a hex value)
If you do have a mapping from a value to a string, then a big switch statement is simple - or alternatively if you want to be able to have these added dynamically or change the display, then I would provide the key/value pairs (mapping) in a config file (text, xml, etc) and then do a lookup as the log file/raw data is read.
map is what I would use in that case.
Perhaps if you provide another specific example of the values and decoded output I can come up with a more appropriate suggestion.
If you already have the message definitions given in the syntax you've used in your example, you should definitely not try to convert them manually into some other syntax (XML or otherwise).
Instead, you should try to write a compiler that takes these message definitions and compiles them into a decoder function.
These days, the recommendation is to use ANTLR as the parser generator, using any of the ANTLR languages for the actual compiler (Java, Python, Ruby, C#, C++). That compiler should then output C code, which does the entire decoding and pretty-printing.
You can use yacc or ANTLR, add appropriate parsing rules, populate some data structure (a tree, maybe) while parsing, then traverse the data structure and do whatever you like.
I'm going to assume that all you need to do is format the records and output them.
Use a custom code generator. The generated code will look something like this:
typedef struct { word messageid; } Header;

//repeated for each record type
typedef struct {
    word messageid;
    // <members here>
} Record_##;
//END

void Process(Input inp, Output out) {
    char buffer[BIG_ENOUGH];
    char *offset;
    offset = &buffer[BIG_ENOUGH];
    while (notEnd) {
        if (&offset[sizeof(LargestStruct)] >= &buffer[BIG_ENOUGH])
            // move remaining buffer to start and fill tail from inp
        Header *hpt = (Header*)offset;
        switch (hpt->messageid)
        {
            //repeated for each record type
            case <record ID for given type>:
            {
                Record_##* rpt = (Record_##*)offset;
                out.format("name1: %t, ...\n", rpt->name1, ...);
                offset += sizeof(Record_##);
                break;
            }
            //END
        }
    }
}
Most of that's boilerplate, so writing a program to generate it shouldn't be too hard.
If you need more processing, I think this idea could be tweaked a bit to make that work as well.
Edit: after re-reading the question, it looks like you might have the structs defined already. In that case you can just #include them and use them directly. However, then you end up with the issue of how to parse the structs to generate the input to the formatting function. Awk or sed might be handy there.
