XML Serialization vs. XML Parsing

What is the difference between XML Serialization and XML Parsing? When should we use each one?

Parsing is, generally speaking, the processing of an input stream into meaningful data structures; in the XML context, parsing is the process of reading a sequence of characters conforming to the grammar and other constraints of the XML spec into whatever internal representation of XML your program uses.
Serialization is the opposite process: processing the internal data structures of a program (in this context, your internal representation of an XML document) and creating a character sequence (typically written to an output stream) that conforms to the angle-bracket syntax of the spec.
Use a parser to read XML from a character stream into data structures; use a serializer to write data structures out into a character stream.

I don't know much about XML, but here's what I know about serialization and parsing.
parsing - reading data in from storage (parse-in), and writing data back out to storage (parse-out)… such as a text file.
serializing - translating data into a readable format (serialize), and translating that format back into data (de-serialize)… e.g. you want to translate a struct into readable content, stream that content across a network, and translate it back into a struct on the other side.
here's a new one…
marshalling - (marshal and unmarshal) similar to serializing, except marshalling is used to translate data into a different format… e.g. you want to translate a stream of bytes into a 32-bit structure (one byte into four bytes).
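As a tiny illustration of that last point, here is a minimal Python sketch (not from the original answer) of packing and unpacking four raw bytes as a 32-bit integer:

    import struct

    value = 1_000_000
    packed = struct.pack(">I", value)          # 32-bit unsigned int -> 4 raw bytes (big-endian)
    print(packed)                              # b'\x00\x0fB@'
    (unpacked,) = struct.unpack(">I", packed)  # 4 raw bytes -> 32-bit unsigned int
    assert unpacked == value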

in easy terms (for beginners)
TL;DR
XML parsing (or XML deserialization) ==> input: valid XML, output: data structures
XML serialization ==> input: data structures, output: valid XML
XML parsing (a.k.a. XML de-serialization)
You take an .xml file (example.xml) as input and process it with your programming language of choice, so that your program can do something useful with the data in that file. Your program transforms the information from the file into data structures that your programming language can deal with (lists, arrays, objects, etc.).
XML serialization
Your program (in any programming language) transforms information represented as data structures (lists, arrays, objects, etc.) into valid XML output, which can be saved to a file or transmitted to another program.
NOTE: Technically, the input (when we are talking about parsing) and the output (when we are talking about serialization) do not have to be files. As stated in the more professional answer above, they can be any input/output stream, too. And files don't have to have the .xml extension; they can have any file extension that represents a valid XML format (e.g. .svg is also an XML-based format). The key to understanding is that when we do XML parsing we have valid XML on the input side and data structures on the output side, and when we do XML serialization we have data structures on the input side and valid XML on the output side.
To give an example from the Python world: you can use built-in packages (like xml.etree.ElementTree) or third-party libraries (like lxml (recommended) or xmltodict) to do both - parse (deserialize) and create (serialize) XML data.
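For illustration, a minimal sketch of both directions using the built-in xml.etree.ElementTree (the document structure and element names here are made up):

    import xml.etree.ElementTree as ET

    # Parsing (deserialization): valid XML in, data structures out.
    xml_text = '<books><book title="Dune" year="1965"/><book title="Emma" year="1815"/></books>'
    root = ET.fromstring(xml_text)
    books = [{"title": b.get("title"), "year": int(b.get("year"))} for b in root]
    print(books)  # [{'title': 'Dune', 'year': 1965}, {'title': 'Emma', 'year': 1815}]

    # Serialization: data structures in, valid XML out.
    root_out = ET.Element("books")
    for book in books:
        ET.SubElement(root_out, "book", title=book["title"], year=str(book["year"]))
    print(ET.tostring(root_out, encoding="unicode"))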

Related

Azure Data Factory encoding issue for characters with acute accents. For example: ú, á, é

Azure Data Factory is not encoding the special characters properly.
For example, the CSV file has the word sún, which gets converted into sÃºn after the transformation runs through the data flow and the result is written to the blob storage container.
There are many files with different encoding types in my container that the data flow picks up to apply the transformation, and these encoding types include UTF-8, ANSI, etc.
If I set the encoding in the DelimitedText dataset to WINDOWS-1252, it works fine for ANSI-encoded CSV files, but if the encoding type is UTF-8, I have to set it to UTF-8; only then does the data flow generate proper output for these special characters.
Is there any generic way so that, irrespective of the encoding type of the file, we can generate proper output for such characters?
If I understand you correctly: with Data Factory, we must choose one encoding type up front to read the file. If your files use many different encodings and you want to preserve the data across them, that is limited by the chosen encoding type, not by Data Factory. If the output encoding can't represent the data, it will be converted to another type. Data Factory only provides these encoding types for us to read/write data.
Data Factory can't detect the encoding type of the files, even with the Get Metadata activity. Maybe you can achieve that at the code level; try a function or a notebook, that's the only way.
HTH.
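For illustration, a minimal sketch of that code-level approach, assuming it runs in a notebook or function with access to the files and that the third-party chardet package is available for encoding detection (paths and names are hypothetical):

    import chardet

    def convert_to_utf8(src_path, dst_path):
        """Detect a file's encoding and rewrite it as UTF-8 before the data flow reads it."""
        with open(src_path, "rb") as f:
            raw = f.read()
        detected = chardet.detect(raw)           # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73}
        encoding = detected["encoding"] or "utf-8"
        text = raw.decode(encoding, errors="replace")
        with open(dst_path, "w", encoding="utf-8") as f:
            f.write(text)

    # convert_to_utf8("input/data.csv", "staging/data_utf8.csv")  # hypothetical paths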

How to parse dynamic XML with a SAX parser

Scenario:
Large (dynamic) XML files are being uploaded by users.
We need to map the xml to our own database structure.
We need to use a SAX parser (or something like it) because of memory issues when parsing large XML files.
We currently use https://github.com/craigambrose/sax_stream for parsing XML files that all have the same structure.
For a new feature, we need to parse XML with unknown contents.
How would one use a SAX parser when the XML nodes are different each time?
I've tried using https://github.com/soulcutter/saxerator; in particular, the at_depth() function could come in handy to collect the elements at a certain depth, and after that we could get the elements inside a node by using the for_tag() function. Based on this info we could maybe create a mapping on the fly.
If a SAX parser isn't an option, are there any alternatives for parsing very large (dynamic) XML files?
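As a language-agnostic illustration of the streaming idea (in Python with xml.etree.ElementTree.iterparse rather than the Ruby gems mentioned above), here is a minimal sketch that walks an XML file of unknown structure and counts which tags occur at a given depth, without loading the whole tree; the file name is hypothetical:

    import xml.etree.ElementTree as ET

    def count_tags_at_depth(path, target_depth):
        """Stream through an XML file of unknown structure and count which tag
        names occur at a given depth, without loading the whole tree into memory."""
        depth = 0
        counts = {}
        for event, elem in ET.iterparse(path, events=("start", "end")):
            if event == "start":
                depth += 1
                if depth == target_depth:
                    counts[elem.tag] = counts.get(elem.tag, 0) + 1
            else:  # "end"
                if depth == target_depth:
                    elem.clear()  # drop the finished subtree to keep memory usage flat
                depth -= 1
        return counts

    # print(count_tags_at_depth("upload.xml", 2))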

Fastest iOS data format for parsing

I need a data format that will allow me to reduce the time needed to parse it to a minimum. In other words, I'm looking for a format with as little overhead as possible that can be parsed in the shortest amount of time.
I am building an application which will pull a lot of data from an API, parse it, and display it to the user. So the format should be as small as possible so that the transmission is fast, and it should also be very efficient to parse. What are my options?
Here are a few formats that pop into my head:
XML (a lot of overhead and slow parsing IMO)
JSON (still too cumbersome)
MessagePack (looks interesting)
CSV (with a custom parser written in C)
Plist (fast parsing, a lot of overhead)
... any others?
So currently I'm looking at CSV the most. Any other suggestions?
As stated by Apple in the Property List Programming Guide, the binary plist representation should be the fastest:
Property List Representations
A property list can be stored in one of three different ways: in an XML representation, in a binary format, or in an “old-style” ASCII format inherited from OpenStep. You can serialize property lists in the XML and binary formats. The serialization API with the old-style format is read-only.
XML property lists are more portable than the binary alternative and can be manually edited, but binary property lists are much more compact; as a result, they require less memory and can be read and written much faster than XML property lists. In general, if your property list is relatively small, the benefits of XML property lists outweigh the I/O speed and compactness that comes with binary property lists. If you have a large data set, binary property lists, keyed archives, or custom data formats are a better solution.
You just need to set the correct format flag, NSPropertyListBinaryFormat_v1_0, when creating or reading the plist. Just be sure that the data you want to store in the plist can be represented by this format.
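Outside of iOS, the same trade-off is easy to see with any plist library; here is a minimal Python sketch using the standard-library plistlib, whose FMT_BINARY output is the same binary plist format that NSPropertyListBinaryFormat_v1_0 refers to (the sample data is made up):

    import plistlib

    data = {"items": [{"id": i, "name": f"item-{i}"} for i in range(1000)]}

    xml_bytes = plistlib.dumps(data, fmt=plistlib.FMT_XML)
    bin_bytes = plistlib.dumps(data, fmt=plistlib.FMT_BINARY)

    print(len(xml_bytes), len(bin_bytes))   # the binary plist is noticeably smaller
    roundtrip = plistlib.loads(bin_bytes)   # and parses back to the same structure
    assert roundtrip == data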

Convert Unicode characters to their respective language letters in iOS

I have Unicode character text for an Indian language (Telugu) like this:
పురాణాలు
I'm getting the above text from a database into an XML file. I'm reading the XML file, and
when I print the text it is showing as &#3114;&#3137;&#3120;&#3134;&#3107;&#3134;&#3122;&#3137;
Is there any way to print the text as it is, without any encoded characters like &#...?
How are you parsing the XML? A proper XML parser should decode the numeric references.
I'm guessing that you are attempting to hand-parse an XML document instead of relying on NSXMLParser. If so, you really should use an XML parser. Bad guess on my part; it's likely that the entities are being double-encoded.
To answer your question directly, Objective C HTML escape/unescape shows how to decode entities with a quick and dirty method.
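As a quick illustration that a conforming XML parser decodes numeric character references automatically (NSXMLParser does the same), here is a minimal Python sketch:

    import xml.etree.ElementTree as ET

    word = "పురాణాలు"
    # Build an XML snippet whose text is written as numeric character references,
    # which is how the word appears in the exported file.
    refs = "".join(f"&#{ord(ch)};" for ch in word)
    xml_doc = f"<text>{refs}</text>"

    parsed = ET.fromstring(xml_doc)   # the parser decodes &#...; automatically
    print(parsed.text)                # -> పురాణాలు
    print(parsed.text == word)        # -> True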

Adobe Alchemy returning C data structures

I have a lexer/parser (generated from an ANTLR grammar file) which (for performance reasons) I have compiled to C code that will be included in my ActionScript project using Adobe Alchemy.
The parser will generate an abstract syntax tree (in C) from an input string (passed from ActionScript). I wish to return the C AST back to ActionScript for further processing. How can I convert the tree structure of the AST to a format which I can return to ActionScript?
Thanks,
Unfortunately you can't just send a C data structure across. You've got three options, in increasing order of madness:
Serialize the data on the C side and reconstitute it on the AS3 side.
Pack up the data into Objects and return those.
Pass a pointer and size back to AS3 and pull out the data from Alchemy's ram ByteArray.
I only include #3 for completeness; I think it would be crazy to try it for any kind of complex data structure. The code would be fragile. Following pointers would be clunky. Bleah.
For #2 you could use dynamic Objects (via AS3_Object) or concrete ones (via AS3_Get, AS3_New). This is fairly complex code also and not so fast. Can be hard to maintain.
For #1, the type of serialization is what matters. You could have your C code render the structures to a binary 'file', return that, and have your AS3 parse the file format via ByteArray. Or you could render it to XML and have AS3's XML class parse it. This has the benefit of being fairly fast (since XML is implemented natively), at least on the de-serialization end. If you have a fast XML renderer on the C side (or, ahem, sprintfs), it's not so bad.
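To make option #1 a bit more concrete, here is a minimal, language-agnostic sketch (in Python rather than C, with a purely hypothetical node layout) of rendering a small AST to XML that the AS3 side could then parse with its native XML class:

    import xml.etree.ElementTree as ET

    def ast_to_xml(node):
        """Render a (hypothetical) AST node of the form
        {'type': ..., 'value': ..., 'children': [...]} as an XML element."""
        elem = ET.Element("node", {"type": node["type"]})
        if node.get("value") is not None:
            elem.set("value", str(node["value"]))
        for child in node.get("children", []):
            elem.append(ast_to_xml(child))
        return elem

    ast = {"type": "add", "children": [
        {"type": "int", "value": 1},
        {"type": "int", "value": 2},
    ]}
    print(ET.tostring(ast_to_xml(ast), encoding="unicode"))
    # <node type="add"><node type="int" value="1" /><node type="int" value="2" /></node>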

Resources