I am quite new to RDF and Jena. I want to load a .nt (N-Triples) file into a model. I have tried read(inputStream, "N-TRIPLE") but it did not help.
It throws
org.apache.jena.riot.RiotException: Element or attribute do not match QName production: QName::=(NCName':')?NCName.
Can anyone point out what is wrong?
Here is the link to the N-Triples file I tried to load: http://dbpedia.org/data/Berlin.ntriples
read(inputStream, string) uses the string argument as the base URI, not the syntax language. So it is trying the default syntax, which is RDF/XML, hence the RiotException about QNames. Check the javadoc for Model#read(InputStream in, String base) and Model#read(InputStream in, String base, String lang) for more information. Use either
model.read(inputStream, null, "N-TRIPLES") ;
or
RDFDataMgr.read(model, inputStream, Lang.NTRIPLES) ;
If you are just opening the stream from a file (or URL) then Apache Jena will sort out the details. E.g.,
RDFDataMgr.read(model, "file:///myfile.nt") ;
There are various related operations. See the javadoc for Model and RDFDataMgr.
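For completeness, here is a minimal, self-contained sketch of both approaches. The file name Berlin.nt is only a placeholder for wherever you saved the DBpedia dump:

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class ReadNTriples {
    public static void main(String[] args) throws Exception {
        // Variant 1: Model.read with an explicit syntax name and a null base URI.
        Model m1 = ModelFactory.createDefaultModel();
        try (InputStream in = new FileInputStream("Berlin.nt")) {
            m1.read(in, null, "N-TRIPLES");
        }

        // Variant 2: RDFDataMgr picks the parser from the Lang constant.
        Model m2 = ModelFactory.createDefaultModel();
        try (InputStream in = new FileInputStream("Berlin.nt")) {
            RDFDataMgr.read(m2, in, Lang.NTRIPLES);
        }

        System.out.println(m1.size() + " and " + m2.size() + " triples read.");
    }
}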
I am scripting with DM (DigitalMicrograph) and would like to read the HDF5 file format.
I borrowed Tore Niermann's gms_HDF5_Plug-In (hdf5_GMS2X_amd64.dll) and his CMD_import_hdf5.s script. It uses h5_read_dataset(filename, datapath) to read an image dataset.
I am trying to figure out how to read string info stored in the same file. I am particularly interested in reading the angle stored as a string, as shown in this figure (screenshot: demonstrated string to read). The h5_read_dataset(filename, datapath) function doesn't work for reading strings.
There is a help file (hdf5_plugin.chm) with a list of functions, but unfortunately I can't open it to see more info.
(Screenshot: hdf5_plugin.chm showing the function list.)
I suppose the right function to read strings would be something like h5_read_attr() or h5_info(), but when I try them DM always says the two functions don't exist.
After reading out the angle as a string, I will also need a bit of help converting the string to a double datatype.
Thank you.
Converting a string to a number is done with the Val() command.
There is no integer/double/float concept for variables in DM-script; all are just number. (This is different for images, where you can define the numeric type. Also, for file import/export a type differentiation can be made using the TagGroup streaming commands mentioned in the other answer.)
Example script:
string numStr = "1.234e-2"
number num = val( numStr )
ClearResults()
Result( "\n As string:" + numStr )
Result( "\n As value:" + num )
Result( "\n As value, formatted:" + Format(num,"%3.2f") )
Potential answer regarding the .chm files: when you download (or email) .chm files in Windows, the OS classifies them as "potentially dangerous" (because they could contain executable HTML code, I think). As a result, these files cannot be shown by default. However, you can right-click these files and "unblock" them in the file properties.
I think this is most likely a question specific to that plugin and not general DM scripting, so it might be better to contact the plugin author directly.
The alternative (not good) solution would be to "rewrite" your own HDF5 file reader, if you know the file format. For this you would need the "Streaming" commands of the DM script language, and you would have to browse through the (binary?) source file to the appropriate location. The starting point for reading up on this would be the F1 help documentation.
I am facing an issue while exporting Japanese text in CSV format: junk characters are exported instead of the original Japanese text. I am using the .NET MVC FileStreamResult to export records to a CSV file, with UTF-8 encoding (I have also tried some other encodings, but no luck). I debugged my code: I can convert the string to a memory stream and back, and I can see the original Japanese text being exported. Once the export completes and I open the CSV file, I only see junk characters instead of the expected text. If I open the CSV file in Notepad (opening the file in Notepad is not my requirement; I use it only to verify that the Japanese text is there), I can see the expected Japanese text. It would be really helpful if someone could help me find the root cause of this issue and provide a resolution.
For example, 東京都品川区大崎 is displayed as æ±äº¬éƒ½å“å·åŒºå¤§å´Ž.
Note: I can see the expected Japanese text if I open the sample .csv file using LibreOffice Calc or the Linux default gedit. The issue only occurs when opening the CSV file with MS Office.
Please find the code below.
Controller action executed when clicking the export-to-CSV button:
================================================================================
[HttpPost]
[ValidateInput(false)]
public FileStreamResult SaveCustomerInfo()
{
return ExportToCsv();
}
================================================================================
private static FileStreamResult ExportToCsv()
{
var exportedData = new StringBuilder();
exportedData
.AppendLine("実行日,口座番号,支店番号,アカウント名,支店名,の/受益秩序,ステートメント日,入力日,お問い合わせ番号, ,Date Range")
.Append(
"CS0001,Demo FName,Demo LName,8/20/2015,\"Demo User Address\",City,Country,08830,0123456789,15813,Absolute from 8/20/2015 to 8/22/2015");
var stream = PrintingHelper.StringToMemoryStream(Encoding.UTF8, exportedData.ToString());
var fileStreamResult = new FileStreamResult(stream, "text/csv")
{
FileDownloadName =
new StringBuilder("TestExportedFileInCsv")
.Append(".csv").ToString()
};
return fileStreamResult;
}
It sounds as though you haven't installed the language pack for MS Office on the machine on which you are trying to open the CSV.
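One further thing worth checking, which the answer above does not mention: Excel generally needs a UTF-8 byte order mark to detect the encoding of a CSV file it opens. PrintingHelper.StringToMemoryStream is the poster's own helper, so the following is only a hedged sketch of an alternative helper that prepends the BOM before the stream is handed to FileStreamResult:

// Hedged sketch (an assumption, not the original answer): prepend the UTF-8 BOM so that
// Excel recognises the encoding when it opens the downloaded CSV.
private static MemoryStream StringToMemoryStreamWithBom(string text)
{
    var stream = new MemoryStream();
    byte[] bom = Encoding.UTF8.GetPreamble();    // EF BB BF
    byte[] body = Encoding.UTF8.GetBytes(text);
    stream.Write(bom, 0, bom.Length);
    stream.Write(body, 0, body.Length);
    stream.Position = 0;                         // rewind so FileStreamResult reads from the start
    return stream;
}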
I need to read an XML file using DXL scripting and export it to Excel.
Is there a DXL script already available for this?
Please help me out.
Thanks,
Sri
It can be done, but there is no quick function built into DXL to read XML; you have to build your own. Exporting to Excel is also possible, either by going through comma-separated values or by using the OLE methods. A rough CSV sketch follows below.
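This is only an assumed illustration of the CSV route (the attribute name "Object Text", the output path, and the choice of columns are assumptions, not part of the original answer):

// Hedged sketch: dump the Object Identifier and Object Text of the current module to a CSV
// file that Excel can open. Adjust the attribute name and output path to your data.
Module m = current
Object o
string txt
Stream out = write "C:\\temp\\export.csv"
out << "ID,Object Text\n"
for o in m do {
    txt = o."Object Text" ""
    // Quote the text so embedded commas do not break the CSV columns.
    out << identifier(o) << ",\"" << txt << "\"\n"
}
close out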
There is a library created by Mathias Mamtsch; using it you can create a DOM from an input XML string, then read and write attributes and values, iterate over attributes and tags, and so on:
http://sourceforge.net/projects/dxlstandardlib
In the DXL world you have to write everything on your own.
The library uses some helper DXL files, so to use it you have to include more than one additional DXL file.
For example, here is one of the functions it provides:
/*! \memberof XMLDocument
\return true - if the XML document could be loaded and parsed correctly. 'false' if there was an error loading the document.
\param xd The XMLDocument into which the contents shall be loaded.
\param s the contents of the XML document to load
\brief This function loads the XML content of a string into an XMLDocument object.
*/
bool loadXML (XMLDocument xd, string s) {
bool result = false
checkOLE ( oleMethod (getOleHandle_ xd, "LoadXML", oleArgs <- s, result) )
return result
}
These functions use OLE objects, so I think they are Windows-specific and the code will not work under Linux. But I am not sure.
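Purely as an illustration of how loadXML might be called (the constructor name createXMLDocument() is a guess on my part; check the library's helper DXL files for the actual perm it provides):

// Hypothetical usage sketch only: createXMLDocument() is an assumed name for whatever
// perm the library provides to construct an XMLDocument object.
XMLDocument xd = createXMLDocument()
string xml = "<requirement id=\"REQ-1\">Some text</requirement>"
if (loadXML(xd, xml)) {
    // read/write attributes and values, iterate over tags with the library's accessors
    print "XML parsed OK\n"
} else {
    print "Could not parse the XML string\n"
}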
I am trying to create individuals and save them in an OWL file. The OWL file was created in Protégé. The size of the file was 10 KB, but after saving the individuals into the ontology the file size becomes 7 KB.
When I then try to open the OWL file using Protégé, it will not open.
The code is:
String SOURCE = "http://www.semanticweb.org/ontologies/2012/9/untitled-ontology-19";
String NS = SOURCE + "#";
OntModel onto = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, null);
onto.read("file:/home/tourism.owl", "RDF/XML");
OntClass place = onto.getOntClass(NS+"Mountains");
Individual am1 = onto.createIndividual(NS+Concept1, place);
FileOutputStream output = null;
try {
output = new FileOutputStream( "/home/tourism.owl" );
} catch(Exception e) {}
onto.writeAll(output, "RDF/XML-ABBREV","xmlbase");
Have you checked whether the new file is actually missing the new information? It may simply have been written out in a more compact form than the original file because you used the "RDF/XML-ABBREV" format.
PS: the "xmlbase" argument should be a URI, not the literal string "xmlbase".
Jena is a purely RDF-based API for accessing ontologies and models. Make a note of these things: what is the format when you save your OWL file after editing it with Protégé? Is it OWL/XML or RDF/XML? Jena, being RDF-centric, can read OWL files written in RDF/XML format, but cannot read OWL files written in OWL/XML syntax (specifically the OWL 2 syntax). Similarly, it can write a Model/OntModel from memory to an OWL or RDF file, but always in "RDF/XML", "RDF/XML-ABBREV", or "N3" syntax.
And since you are using "RDF/XML-ABBREV", which writes your triples in an abbreviated format, that is probably why your output file is smaller.
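Putting the two answers together, here is a minimal sketch of how the save step could look with a proper base URI. The base URI and file path are taken from the question; the individual name "Annapurna" is only a placeholder:

import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class AddIndividual {
    static final String SOURCE = "http://www.semanticweb.org/ontologies/2012/9/untitled-ontology-19";
    static final String NS = SOURCE + "#";

    public static void main(String[] args) throws Exception {
        OntModel onto = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, null);
        onto.read("file:/home/tourism.owl", "RDF/XML");

        OntClass place = onto.getOntClass(NS + "Mountains");
        Individual am1 = onto.createIndividual(NS + "Annapurna", place);

        try (OutputStream output = new FileOutputStream("/home/tourism.owl")) {
            // The third argument is the xml:base and must be a URI, not the literal string "xmlbase".
            onto.writeAll(output, "RDF/XML-ABBREV", SOURCE);
        }
    }
}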
I opened an Excel 2003 file with a text editor to look at its markup.
When I open the file in Excel it displays incorrect characters. On inspecting the file I see that the declared encoding is Windows-1252 or some such. If I manually replace this with UTF-8, my file opens fine. OK, so far so good; I can correct the thing manually.
Now the trick is that this file is generated automatically, and I need to process it automatically (no human interaction) with limited tools on my desktop (no Perl or other scripting languages).
Is there any simple way to open this XL file in VBA with the correct encoding (and ignore the encoding specified in the file)?
Note: Workbook.ReloadAs does not work for me; it bails out with an error (and requires manual action, as the file is already open).
Or is the only way to correct the file to jump through some hoops? Either: read the text in, check each line for the encoding string, replace it if required, and write each line to a new file; or export to CSV, then import from CSV again with a specific encoding and save as .xls?
Any hints appreciated.
EDIT:
ADODB did not work for me (Excel says "user-defined type not defined").
I solved my problem with a workaround:
name2 = Replace(name, ".xls", ".txt")
Set wb = Workbooks.Open(name, True, True) ' open read-only
Set ws = wb.Worksheets(1)
ws.SaveAs FileName:=name2, FileFormat:=xlCSV
wb.Close False ' close workbook without saving changes
Set wb = Nothing ' free memory
Workbooks.OpenText FileName:=name2, _
    Origin:=65001, _
    DataType:=xlDelimited, _
    Comma:=True
Well, I think you can do it from another workbook. Add a reference to Microsoft ActiveX Data Objects, then add this sub:
Sub Encode(ByVal sPath$, Optional SetChar$ = "UTF-8")
    Dim stream As ADODB.Stream
    Set stream = New ADODB.Stream
    With stream
        .Open
        .LoadFromFile sPath   ' Loads a File
        .Charset = SetChar    ' sets stream encoding (UTF-8)
        .SaveToFile sPath, adSaveCreateOverWrite
        .Close
    End With
    Set stream = Nothing
    Workbooks.Open sPath
End Sub
Then call this sub with the path to the file that has the wrong encoding.