How to convert an ontology from DAML to OWL

I'm trying to convert this DAML ontology into OWL (or any other format supported by Protege). I found an online converter, but when I tried to open the converted file in Protege, I got the following error message:
org.semanticweb.owlapi.rdf.syntax.RDFParserException: [line=21:column=39] IRI '#To Interpret' cannot be resolved against curent base IRI file:/home/citxx/Downloads/musicV1.0.xml
Here is the result of the conversion.
What is wrong with the mentioned converter? Or is there another way to convert DAML ontologies into a Protege-compatible format?

Two things: spaces can't appear in an rdf:ID unless they are escaped, since those values have to end up forming valid URIs. Also, it doesn't look like there's a defined xml:base in that file, which is also a problem.
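For illustration, a minimal sketch of a repaired file: the space is dropped from the rdf:ID and an xml:base is declared. The base URI here is an assumption, not taken from the original file, and the term from the error message is shown as a class purely for the sake of the example:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xml:base="http://example.org/music">
  <!-- "To Interpret" becomes the valid fragment identifier "ToInterpret" -->
  <owl:Class rdf:ID="ToInterpret"/>
</rdf:RDF>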
That said, you might be better off just doing the conversion by hand; it's a very basic ontology, and it would not take you long to recreate it in Protege.

Related

how can I parse json-ld to markdown

Is there an existing parser to convert JSON-LD to Markdown? I want to generate documentation from my JSON-LD file. If such a thing doesn't exist, how should I go about writing one? Or perhaps I could use a JSON-to-Markdown converter? Any suggestions on how I could do this?
I was just googling for such a program, and found your question.
The closest things I could find are: ocxmd, which is an extension to Markdown; and md-ld, which does not even use proper Markdown - instead, it apparently creates an incompatible version of the format which can be parsed to JSON-LD.
If I were writing such a converter in Python, I would use:
pyld to parse JSON-LD files and expand them using the @context;
And a template engine, likely Jinja2, to generate Markdown representation of every node of the JSON-LD document.
The program would be based on recursion. You might have separate functions to display:
URIs,
Numbers,
Images,
...
The program would recurse over the JSON-LD document and convert each of its sections into Markdown, along the lines of the sketch below.
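The suggestion above is Python-based (pyld plus a template engine); purely to illustrate the recursive shape such a converter takes, here is a minimal sketch in Haskell using aeson 2.x, applied to an already-expanded document. All names are illustrative, and a real converter would need special handling for keywords like @id and @type:
{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (Value (..))
import qualified Data.Aeson.Key as Key
import qualified Data.Aeson.KeyMap as KM
import qualified Data.Text as T
import qualified Data.Vector as V

-- Recursively render a JSON value as Markdown: objects become headed
-- sections, arrays become bullet lists, scalars become plain text.
toMarkdown :: Int -> Value -> T.Text
toMarkdown depth val =
  let heading = T.replicate (depth + 1) "#" <> " "
  in case val of
       Object o -> T.unlines
         [ heading <> Key.toText k <> "\n" <> toMarkdown (depth + 1) v
         | (k, v) <- KM.toList o ]
       Array xs -> T.unlines [ "- " <> toMarkdown depth x | x <- V.toList xs ]
       String s -> s
       Number n -> T.pack (show n)
       Bool b   -> T.pack (show b)
       Null     -> ""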

Parsing and pretty printing the same file format in Haskell

I was wondering if there is a standard, canonical way in Haskell to write not only a parser for a specific file format, but also a writer.
In my case, I need to parse a data file for analysis. However, I also simulate data to be analyzed and save it in the same file format. I could write a parser using Parsec or something equivalent, and also write functions that produce the text output in the required form, but whenever I change my file format I would have to change two functions in my code. Is there a better way to achieve this goal?
Thank you,
Dominik
The BNFC-meta package (https://hackage.haskell.org/package/BNFC-meta-0.4.0.3) might be what you're looking for:
"Specifically, given a quasi-quoted LBNF grammar (as used by the BNF Converter) it generates (using Template Haskell) a LALR parser and pretty printer for the language."
Update: I found this package, which also seems to fulfill the objective (not tested yet): http://hackage.haskell.org/package/syntax
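If you do end up keeping a hand-written parser and printer as separate functions, a round-trip property test is one common way to keep them from drifting apart when the format changes. A minimal sketch with Parsec and QuickCheck, using a toy format of comma-separated non-negative integers (all names here are illustrative):
import Data.List (intercalate)
import Test.QuickCheck (NonNegative (..), quickCheck)
import Text.Parsec (char, digit, many1, parse, sepBy)
import Text.Parsec.String (Parser)

-- Toy "file format": comma-separated integers on one line.
parseInts :: Parser [Int]
parseInts = (read <$> many1 digit) `sepBy` char ','

printInts :: [Int] -> String
printInts = intercalate "," . map show

-- Printing then parsing must recover the original data, so the
-- parser and the printer cannot silently drift apart.
prop_roundTrip :: [NonNegative Int] -> Bool
prop_roundTrip xs = case parse parseInts "" (printInts ys) of
  Right zs -> zs == ys
  Left _   -> False
  where ys = map getNonNegative xs

main :: IO ()
main = quickCheck prop_roundTrip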

how do I convert DAQ-derived mxd file format to csv?

Background:
I was given a pile of Yokogawa "mxd" files without documentation or description, and told "convert it".
I have looked for documentation and found none. The OEM doesn't seem to "do" reproducibility in the sense of a "code book". (link)
I have looked for online code for converters and found none.
National Instruments has a connector, but only if I use the latest/greatest LabVIEW (link). I don't have that version.
The only other format I can find with the same suffix is from ArcGIS, but why would a DAQ vendor use a format like that?
Questions:
Is there a straightforward way to convert "mxd" to "csv"?
How do I work out the structure from the raw binary data? Eyeballing hex dumps seems slow/inefficient.
Is there any relationship between DAQ mxd and ArcGIS mxd?
Yokogawa supplies a program called MX100 Standard Software (https://y-link.yokogawa.com/YL008/?Download_id=DL00002238&Language_id=EN) that can read the *.mxd files and also export them to ASCII or Excel. See the well-hidden manual (http://web-material3.yokogawa.com/IMMX180-01E_040.pdf): page 105, chapter 3.7, covers converting data formats.

Extracting FOAF information from Jena

I'm new here, and I have a problem with FOAF. I used Jena to create a FOAF description like this:
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person"/>
<foaf:phone>12312312312</foaf:phone>
<foaf:nick>L</foaf:nick>
<foaf:name>zhanglu</foaf:name>
But I want the FOAF output to look like this:
<foaf:Person>
<foaf:phone>12312312312</foaf:phone>
<foaf:nick>L</foaf:nick>
<foaf:name>zhanglu</foaf:name>
</foaf:Person>
What can I do?
This is my source code:
// Imports added for completeness (older Jena releases use the
// com.hp.hpl.jena packages instead of org.apache.jena).
import org.apache.jena.rdf.model.*;
import org.apache.jena.sparql.vocabulary.FOAF;
import org.apache.jena.vocabulary.RDF;
import java.io.FileOutputStream;

Model m = ModelFactory.createDefaultModel();
m.setNsPrefix("foaf", FOAF.NS);
Resource r = m.createResource(NS); // NS: URI of the person being described
r.addLiteral(FOAF.name, "zhanglu");
r.addProperty(FOAF.nick, "L");
r.addProperty(FOAF.phone, "123123123");
r.addProperty(RDF.type, FOAF.Person);
FileOutputStream f = new FileOutputStream(fileName); // fileName defined elsewhere
m.write(f);
Can anyone tell me how? Thanks.
First thing to say is that the two forms that you quote have exactly the same meaning to RDF - that is, they produce exactly the same set of triples when parsed into an RDF graph. For this reason, it's generally not worth worrying about the exact syntax of the XML produced by the writer. RDF/XML is, in general, not a friendly syntax to read. If you just want to serialize the Model, so that you can read it in again later, I would suggest Turtle syntax as it's more compact and easier for humans to read and understand.
However, there is one reason you might want to care specifically about the XML serialization, which is if you want the file to be part of an XML processing pipeline (e.g. XSLT or similar). In this case, you can produce the format you want by changing the last line of your example:
m.write( f, "RDF/XML-ABBREV" );
or, equivalently,
m.write( f, FileUtils.langXMLAbbrev );
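If you do want the more readable serialization instead, the same model written out with m.write(f, "TURTLE") would look roughly like this; the subject URI depends on the NS constant in the question's code, so a placeholder is used here:
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/zhanglu>
    a foaf:Person ;
    foaf:name  "zhanglu" ;
    foaf:nick  "L" ;
    foaf:phone "123123123" .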

Using Haskell's Parsec to parse binary files?

Parsec is designed to parse textual information, but it occurs to me that Parsec could also be suitable for parsing binary file formats, particularly complex formats that involve conditional segments, out-of-order segments, etc.
Is there an ability to do this or a similar, alternative package that does this? If not, what is the best way in Haskell to parse binary file formats?
The key tools for parsing binary files are:
Data.Binary
cereal
attoparsec
Binary is the most general solution, Cereal can be great for limited data sizes, and attoparsec is perfectly fine for e.g. packet parsing. All of these are aimed at very high performance, unlike Parsec. There are many examples on hackage as well.
You might be interested in attoparsec, which was designed for this purpose, I think.
I've used Data.Binary successfully.
It works fine, though you might want to use Parsec 3, Attoparsec, or Iteratees. Parsec's reliance on String as its intermediate representation may bloat your memory footprint quite a bit, whereas the others can be configured to use ByteStrings.
Iteratees are particularly attractive because it is easier to ensure they won't hold onto the beginning of your input, and they can be fed chunks of data incrementally as they become available. This prevents you from having to read the entire input into memory in advance and lets you avoid other nasty workarounds like lazy IO.
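attoparsec supports this same incremental style directly: running a parser returns a Partial result, a continuation awaiting more input. A minimal sketch (the helper and its chunk-list interface are mine, not part of the library):
import qualified Data.Attoparsec.ByteString as A
import qualified Data.ByteString as BS

-- Drive a parser over a list of input chunks. A.Partial hands back a
-- continuation that waits for more input instead of demanding the whole
-- file up front; feeding it an empty chunk signals end of input.
parseChunks :: A.Parser a -> [BS.ByteString] -> Maybe a
parseChunks _ []     = Nothing
parseChunks p (c:cs) = go (A.parse p c) cs
  where
    go (A.Done _ r)  _      = Just r
    go (A.Fail {})   _      = Nothing
    go (A.Partial k) (x:xs) = go (k x) xs
    go (A.Partial k) []     = go (k BS.empty) []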
The best approach depends on the format of the binary file.
Many binary formats are designed to make parsing easy (unlike text formats that are primarily to be read by humans). So any union data type will be preceded by a discriminator that tells you what type to expect, all fields are either fixed length or preceded by a length field, and so on. For this kind of data I would recommend Data.Binary; typically you create a matching Haskell data type for each type in the file, and then make each of those types an instance of Binary. Define the "get" method for reading; it returns a "Get" monad action which is basically a very simple parser. You will also need to define a "put" method.
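To make that concrete, here is a minimal sketch of a Binary instance for a made-up record type with a one-byte discriminator and a length-prefixed field (the type and its layout are illustrative, not from any real format):
import Data.Binary
import Data.Binary.Get
import Data.Binary.Put
import qualified Data.ByteString as BS
import Data.Word (Word32)

-- A toy union type: the on-disk form starts with a one-byte tag
-- saying which variant follows.
data Record = IntRecord Word32 | BlobRecord BS.ByteString

instance Binary Record where
  -- put: write the discriminator, then the payload; variable-length
  -- data is preceded by a length field, as described above.
  put (IntRecord n)   = putWord8 0 >> putWord32be n
  put (BlobRecord bs) = do
    putWord8 1
    putWord32be (fromIntegral (BS.length bs))
    putByteString bs
  -- get: read the discriminator and dispatch on it.
  get = do
    tag <- getWord8
    case tag of
      0 -> IntRecord <$> getWord32be
      1 -> do
        len <- getWord32be
        BlobRecord <$> getByteString (fromIntegral len)
      _ -> fail "unknown record tag"
With that instance in place, encode and decode from Data.Binary convert between Record values and lazy ByteStrings.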
On the other hand, if your binary data doesn't fit into this kind of world, then you will need attoparsec. I've never used that, so I can't comment further, but this blog post is very positive.
