In my application, I am trying to update the content of an ontology that is already stored in TDB. Before storing it, I convert the ontology to RDF/XML format. Loading the ontology data into an empty model with the same model.read method works fine, but updating an existing model throws the error below. Can someone help me with a solution?
Input arguments:
modelName -> named model saved in TDB
inputString -> ontology data converted to a String
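For context, the conversion to a String is done roughly like this (a sketch; sourceModel and the exact write call are assumptions, since the actual conversion code is not shown):
// Sketch: writing the source Jena Model to a String before the update call
StringWriter out = new StringWriter();
sourceModel.write(out, "RDF/XML-ABBREV");
String inputString = out.toString();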
My update code is as below:
public void updateModel(String modelName, String inputString) {
    dataset.begin(ReadWrite.WRITE);
    try {
        Model model = dataset.getNamedModel(modelName);
        model.removeAll();
        StringReader reader = new StringReader(inputString);
        model.read(reader, null, "RDF/XML-ABBREV");
        dataset.commit();
        logger.info("dataset committed");
    } catch (Exception e) {
        logger.error("Exception in reading : " + e);
    } finally {
        dataset.end();
    }
}
It throws the following error : org.apache.jena.riot.RiotException: [line: 1, col: 1 ] Content is not allowed in prolog.
I want to parse any complex Swagger API document (swagger.json) into Java objects, maybe a List<Map<String, Object>>.
What are the available options?
I am trying io.swagger.parser.SwaggerParser, but I want to make sure I know the other available options and use a parser that can handle any complex document.
Currently we are trying the following:
public List<Map<String, Object>> parse(String swaggerDocString) throws SwaggerParseException {
    try {
        Swagger swagger = new SwaggerParser().parse(swaggerDocString);
        return processSwagger(swagger);
    } catch (Exception ex) {
        String exceptionRefId = OSGUtil.getExceptionReferenceId();
        logger.error("exception ref id " + exceptionRefId + " : Error while loading swagger file " + ex);
        throw new SwaggerParseException("", ex.getLocalizedMessage(), exceptionRefId);
    }
}
public List<Map<String, Object>> processSwagger(Swagger swagger) {
    List<Map<String, Object>> finalResult = new ArrayList<>();
    Map<String, Model> definitions = swagger.getDefinitions();
    // loop over all the available paths of the swagger
    if (swagger.getPaths() != null && !swagger.getPaths().isEmpty()) {
        swagger.getPaths().keySet().forEach(group -> {
            // get the path
            Path path = swagger.getPath(group);
            // list all the operations of the path
            Map<HttpMethod, Operation> operationMap = path.getOperationMap();
            operationMap.forEach((httpMethod, operation) -> {
                processPathData(finalResult, operation, path, group, httpMethod, definitions, group);
            });
        });
    }
    return finalResult;
}
What are the differences between swagger-compat-spec-parser and swagger-parser?
Swagger has implementations for many technologies; the open-source integrations are listed here:
https://swagger.io/tools/open-source/open-source-integrations/
Details on parsing Swagger into Java are here:
https://github.com/swagger-api/swagger-parser/tree/v1
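For a quick start with swagger-parser v1, a minimal sketch (the file path is illustrative):
import io.swagger.models.Swagger;
import io.swagger.parser.SwaggerParser;

public class SwaggerReadExample {
    public static void main(String[] args) {
        // read(...) accepts a URL or file path; parse(...) accepts the raw JSON/YAML string
        Swagger swagger = new SwaggerParser().read("swagger.json");
        if (swagger != null && swagger.getPaths() != null) {
            // print each path with the HTTP methods it defines
            swagger.getPaths().forEach((url, path) ->
                    System.out.println(url + " -> " + path.getOperationMap().keySet()));
        }
    }
}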
I'm building a Dataflow pipeline to process Stackdriver logs; the data are read from Pub/Sub and the results are written into BigQuery.
When I read from Pub/Sub I get JSON strings of LogEntry objects, but what I'm really interested in is the protoPayload.line records, which contain the user log messages. To get those I need to parse the LogEntry JSON object, and I found a two-year-old Google example of how to do it:
import com.google.api.client.json.JsonParser;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.logging.model.LogEntry;

try {
    JsonParser parser = new JacksonFactory().createJsonParser(entry);
    LogEntry logEntry = parser.parse(LogEntry.class);
    logString = logEntry.getTextPayload();
} catch (IOException e) {
    LOG.error("IOException parsing entry: " + e.getMessage());
} catch (NullPointerException e) {
    LOG.error("NullPointerException parsing entry: " + e.getMessage());
}
Unfortunately this doesn't work for me: logEntry.getTextPayload() returns null. I'm not even sure it's supposed to work, as the com.google.api.services.logging library is not mentioned anywhere in the Google Cloud docs; the current logging library seems to be google-cloud-logging.
Can anyone suggest the right or simplest way of parsing LogEntry objects?
I ended up manually parsing the LogEntry JSON with the gson library, using the tree-traversal approach.
Here is a small snippet:
import com.google.gson.JsonArray;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.beam.sdk.transforms.DoFn;

static class ProcessLogMessages extends DoFn<String, String> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        String entry = c.element();
        JsonParser parser = new JsonParser();
        JsonElement element = parser.parse(entry);
        if (element.isJsonNull()) {
            return;
        }
        JsonObject root = element.getAsJsonObject();
        JsonArray lines = root.get("protoPayload").getAsJsonObject().get("line").getAsJsonArray();
        for (int i = 0; i < lines.size(); i++) {
            JsonObject line = lines.get(i).getAsJsonObject();
            String logMessage = line.get("logMessage").getAsString();
            // Do what you need with the logMessage here
            c.output(logMessage);
        }
    }
}
This is simple enough and works fine for me, since I'm interested only in the protoPayload.line.logMessage objects. But I guess this is not an ideal way of parsing LogEntry objects if you need to work with many attributes.
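If you do need many attributes, a hedged alternative to tree traversal is to map just the fields you care about onto small POJOs; the classes below are illustrative, not part of any Google library:
import com.google.gson.Gson;
import java.util.List;

public class LogEntryPojoExample {
    // Partial view of a LogEntry: Gson silently ignores JSON fields that have
    // no matching member, so only the attributes you need must be declared.
    static class LogEntryView { ProtoPayload protoPayload; }
    static class ProtoPayload { List<Line> line; }
    static class Line { String logMessage; }

    public static void main(String[] args) {
        String entry = "{\"protoPayload\":{\"line\":[{\"logMessage\":\"hello\"}]}}";
        LogEntryView view = new Gson().fromJson(entry, LogEntryView.class);
        view.protoPayload.line.forEach(l -> System.out.println(l.logMessage));
    }
}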
This is my first time getting my feet wet with serialization... in fact, I am developing for Autodesk Revit in C#.
Objective:
I need to record data to a new file on the HDD, so that the file can be opened from another computer through Revit.
Procedure:
1. Gather all the required data in the main class.
2. Instantiate a serializable class and pass the data to it.
3. Save the data from the main class to the file.
4. Dispose of the stream and set the serializable class to null.
5. Deserialize.
6. Do stuff in Revit on the basis of the acquired data.
Problem:
- The program works perfectly the first time, with no errors; everything is OK.
- Pressing the button again to rerun the program, it fails at deserialization with this error:
[A]Cons_Coor.ThrDviewData cannot be cast to [B]Cons_Coor.ThrDviewData. Type A originates from 'Cons_Coor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\mostafa\AppData\Local\Temp\RevitAddins\Cons_Coor-Executing-20140820_224454_4113\Cons_Coor.dll'. Type B originates from 'Cons_Coor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\mostafa\AppData\Local\Temp\RevitAddins\Cons_Coor-Executing-20140820_230011_0316\Cons_Coor.dll'.A first chance exception of type 'System.NullReferenceException' occurred in Cons_Coor.dll
Main Class:
///main class
.....
.....
ThrDviewData v3ddata = new ThrDviewData(); ///instantiate a serializable class
///collect all required data
string filename = UT_helper.conpaths(UT_constants.paths.Desktop) + "\\comment2" + DateTime.Today.ToShortTimeString().Replace(":", "") + ".a4h";
using (Stream stream = File.Open(filename, FileMode.Create))
{
    BinaryFormatter bformatter = new BinaryFormatter();
    Debug.WriteLine("Writting Data\r\n");
    bformatter.Serialize(stream, v3ddata);
    stream.Close();
}
v3ddata = null;
using (Stream stream = File.Open(filename, FileMode.Open))
{
    BinaryFormatter bformatter = new BinaryFormatter();
    Debug.WriteLine("Reading data from file");
    try
    {
        v3ddata = (ThrDviewData)bformatter.Deserialize(stream);
    }
    catch (Exception ex)
    {
        Debug.Write(ex.Message);
        // File.Delete(filename);
    }
    stream.Close();
}
....
....
///do some stuff with the acquired data
Serializable Class (the class declaration is restored here to match the closing brace; it follows from the ISerializable constructor and GetObjectData below):
[Serializable]
public class ThrDviewData : ISerializable, IDisposable
{
    public string myvariables;

    public ThrDviewData()
    {
        myvariables = null;
    }

    public ThrDviewData(SerializationInfo info, StreamingContext ctxt)
    {
        myvariables = (String)info.GetValue("name", typeof(string));
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", myvariables);
    }

    // Public implementation of Dispose pattern callable by consumers.
    public void Dispose()
    {
        GC.SuppressFinalize(this);
    }
}
so any hints?
The binary serializer you're using is very tightly tied to the class you're exporting.
When you use the Revit Add-in Manager to load your add-in, it makes a dynamic copy of your assembly (so that you can come back and load it again within the same session). When it does that, you wind up with duplicate types that have the same name (ThrDviewData). When you try to load a previously serialized binary that was written by a different copy, it still tries to map to the original type, not the new copy of the type.
Your choices are:
1. Don't use the Addin manager, just statically use your addin.
2. Use something other than the Binary serializer that is not so tightly coupled to the types (like an XML or JSON serializer - as you tried).
That's what happened...
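For option 2, a minimal sketch of swapping BinaryFormatter for XmlSerializer (filename and v3ddata come from the code above; ThrDviewData would only need public members and a parameterless constructor, so the ISerializable plumbing could be dropped):
// Requires System.IO and System.Xml.Serialization.
// XmlSerializer resolves the type by name rather than by assembly identity,
// so it is not tied to the exact dynamic copy of Cons_Coor.dll that wrote the file.
var serializer = new XmlSerializer(typeof(ThrDviewData));

using (Stream stream = File.Open(filename, FileMode.Create))
{
    serializer.Serialize(stream, v3ddata);
}

using (Stream stream = File.Open(filename, FileMode.Open))
{
    v3ddata = (ThrDviewData)serializer.Deserialize(stream);
}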
We are using JAXB in conjunction with the StAX XMLEventReader API to parse and extract XML data retrieved by making a REST call.
InputStream responseStream = response.getEntityInputStream();
if (responseStream != null) {
    XMLInputFactory xmlif = XMLInputFactory.newInstance();
    // StAX API
    XMLEventReader xmler = xmlif.createXMLEventReader(new InputStreamReader(responseStream));
    EventFilter filter = new EventFilter() {
        public boolean accept(XMLEvent event) {
            return event.isStartElement();
        }
    };
    XMLEventReader xmlfer = xmlif.createFilteredReader(xmler, filter);
    xmlfer.nextEvent();
    // use JAXB
    JAXBContext ctx = JAXBContext.newInstance(Summary.class);
    Unmarshaller um = ctx.createUnmarshaller();
    while (xmlfer.peek() != null) {
        JAXBElement<CustomObject> se = um.unmarshal(xmler, CustomObject.class);
        CustomObject customObject = se.getValue();
    }
    responseStream.close();
} else {
    logger.error("InputStream response from API is null. No data to process");
}
response.close();
}
So basically we parse using StAX first, then unmarshal the content using JAXB, which unmarshals it into the CustomObject type. We do other things with this CustomObject later.
However, we ran into an issue when this chunk of code executes on JBoss AS 6.1.0.Final.
We get an exception saying "The declaration for the entity "HTML.version" must end with '>'".
It appears that either StAX or JAXB is validating against a DTD/XSD. The XSD is defined on the same server to which the REST call is made.
Our understanding is that because we are using Sun's StAX implementation and not Woodstox, there is no inherent DTD/XSD validation that comes with it, so the error cannot come from the StAX call.
Is that correct?
If the issue is not a validation failure in StAX, it has got to be JAXB.
However I cannot do the following:
um.setValidating(false);
because setValidating is a deprecated method.
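One thing we are considering instead is turning off DTD processing at the StAX factory level; a sketch, assuming the standard javax.xml.stream properties are honored by the implementation JBoss provides:
// Disable DTD processing and external entity resolution on the factory
// before creating the XMLEventReader.
XMLInputFactory xmlif = XMLInputFactory.newInstance();
xmlif.setProperty(XMLInputFactory.SUPPORT_DTD, Boolean.FALSE);
xmlif.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);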
Any ideas or suggestions on how to go about this? Is our hypothesis correct? Is this a known JBoss issue, perhaps?
I need to access the encoded stream in OpenRasta before it gets sent to the client. I have tried using a PipelineContributor and registering it before KnownStages.IEnd, after KnownStages.IOperationExecution, and after KnownStages.IResponseCoding, but in all instances the context.Response.Entity stream is null or empty.
Anyone know how I can do this?
Also, I want to find out the requested codec fairly early on, yet when I register after KnownStages.ICodecRequestSelection it returns null. I just get the feeling I am missing something about these pipeline contributors.
Without writing your own codec (which, by the way, is really easy), I'm unaware of a way to get the actual stream of bytes sent to the browser. The way I'm doing this is to serialize the ICommunicationContext.Response.Entity before the IResponseCoding known stage. Pseudo-code:
class ResponseLogger : IPipelineContributor
{
    public void Initialize(IPipeline pipelineRunner)
    {
        pipelineRunner
            .Notify(LogResponse)
            .Before<KnownStages.IResponseCoding>();
    }

    PipelineContinuation LogResponse(ICommunicationContext context)
    {
        string content = Serialize(context.Response.Entity);
        // log "content" here, then let the pipeline carry on
        return PipelineContinuation.Continue;
    }

    string Serialize(IHttpEntity entity)
    {
        if ((entity == null) || (entity.Instance == null))
            return String.Empty;
        try
        {
            using (var writer = new StringWriter())
            {
                using (var xmlWriter = XmlWriter.Create(writer))
                {
                    Type entityType = entity.Instance.GetType();
                    XmlSerializer serializer = new XmlSerializer(entityType);
                    serializer.Serialize(xmlWriter, entity.Instance);
                }
                return writer.ToString();
            }
        }
        catch (Exception exception)
        {
            return exception.ToString();
        }
    }
}
This ResponseLogger is registered the usual way:
ResourceSpace.Uses.PipelineContributor<ResponseLogger>();
As mentioned, this doesn't necessarily give you the exact stream of bytes sent to the browser, but it is close enough for my needs, since the stream of bytes sent to the browser is basically just the same serialized entity.
By writing your own codec, you can, in no more than 100 lines of code, tap into the IMediaTypeWriter.WriteTo() method, which I would guess is the last line of defense before your bytes are transferred into the cloud. Within it, you basically just do something simple like this:
public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
    using (var writer = XmlWriter.Create(response.Stream))
    {
        XmlSerializer serializer = new XmlSerializer(entity.GetType());
        serializer.Serialize(writer, entity);
    }
}
If, instead of writing directly to the IHttpEntity.Stream, you write to a StringWriter and call ToString() on it, you'll have the serialized entity, which you can log and do whatever you want with before writing it to the output stream.
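A sketch of that variant (Log is a hypothetical logging hook, not an OpenRasta API):
public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
    // Serialize to a string first so it can be logged before being sent.
    string serialized;
    using (var stringWriter = new StringWriter())
    {
        using (var xmlWriter = XmlWriter.Create(stringWriter))
        {
            new XmlSerializer(entity.GetType()).Serialize(xmlWriter, entity);
        }
        serialized = stringWriter.ToString();
    }
    Log(serialized); // hypothetical logging hook

    // Write the already-serialized entity to the response; the writer is
    // flushed but not disposed, so the underlying response stream stays open.
    var streamWriter = new StreamWriter(response.Stream);
    streamWriter.Write(serialized);
    streamWriter.Flush();
}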
While all of the above example code is based on XML serialization and deserialization, the same principle should apply no matter what format your application is using.