Parsing a Swagger API doc (swagger.json) into Java objects

I want to parse any complex Swagger API document (swagger.json) into Java objects, maybe into a List<Map<String, Object>>.
What are the available options?
I am trying io.swagger.parser.SwaggerParser, but I want to make sure I know the other available options and that I use a parser that can handle any complex document.
Currently we are trying the following:
public List<Map<String, Object>> parse(String swaggerDocString) throws SwaggerParseException {
    try {
        Swagger swagger = new SwaggerParser().parse(swaggerDocString);
        return processSwagger(swagger);
    } catch (Exception ex) {
        String exceptionRefId = OSGUtil.getExceptionReferenceId();
        logger.error("exception ref id " + exceptionRefId + " : Error while loading swagger file " + ex);
        throw new SwaggerParseException("", ex.getLocalizedMessage(), exceptionRefId);
    }
}
public List<Map<String, Object>> processSwagger(Swagger swagger) {
    List<Map<String, Object>> finalResult = new ArrayList<>();
    Map<String, Model> definitions = swagger.getDefinitions();
    // loop over all the available paths of the swagger document
    if (swagger.getPaths() != null && !swagger.getPaths().keySet().isEmpty()) {
        swagger.getPaths().keySet().forEach(group -> {
            // get the path
            Path path = swagger.getPath(group);
            // list all the operations of the path
            Map<HttpMethod, Operation> operationMap = path.getOperationMap();
            operationMap.forEach((httpMethod, operation) ->
                processPathData(finalResult, operation, path, group, httpMethod, definitions, group));
        });
    }
    return finalResult;
}
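(processPathData is not shown in the question; a hypothetical stub consistent with the call above, purely for illustration, might be:)
import io.swagger.models.*;
import java.util.*;

// hypothetical stub: the real processPathData is not shown here
private void processPathData(List<Map<String, Object>> finalResult, Operation operation,
        Path path, String group, HttpMethod httpMethod,
        Map<String, Model> definitions, String tag) {
    Map<String, Object> row = new HashMap<>();
    row.put("path", group);
    row.put("method", httpMethod.name());
    row.put("operationId", operation.getOperationId());
    finalResult.add(row);
}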
What are the differences between swagger-compat-spec-parser and swagger-parser?

Swagger provides implementations for many languages and technologies; see the list of open-source integrations:
https://swagger.io/tools/open-source/open-source-integrations/
The details for parsing Swagger into Java are here:
https://github.com/swagger-api/swagger-parser/tree/v1
As for the difference: as I understand it, swagger-compat-spec-parser exists for backward compatibility with the older Swagger 1.x spec format and converts such documents to 2.0, while swagger-parser itself targets Swagger 2.0 (and OpenAPI 3.x in the 2.x line of the library).
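In case it helps, here is a minimal sketch of driving the v1 parser directly; the file name and the printed fields are placeholders for illustration:
import io.swagger.models.Swagger;
import io.swagger.parser.SwaggerParser;

public class SwaggerParseDemo {
    public static void main(String[] args) {
        // read() accepts a file path or URL; parse() accepts the raw JSON/YAML string
        Swagger swagger = new SwaggerParser().read("swagger.json"); // placeholder location
        if (swagger == null) {
            System.err.println("Document could not be parsed");
            return;
        }
        System.out.println(swagger.getInfo().getTitle());
        // list every path in the document
        swagger.getPaths().keySet().forEach(System.out::println);
    }
}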

Related

Cannot parse id for appbundles using Design Automation SDK

Here I am again, trying to use the Design Automation SDK, and I get this error when I try to retrieve bundle aliases, versions, or other information that requires the id.
I am testing that using one of the existing appbundles available...
public static async Task<dynamic> GetAppBundleVersionsAsync(ForgeService service, Token token, string id)
{
    try
    {
        if (token.ExpiresAt < DateTime.Now)
            token = Get2LeggedToken();

        AppBundlesApi appBundlesApi = new AppBundlesApi(service);
        Dictionary<string, string> headers = new Dictionary<string, string>();
        headers.Add("Authorization", "Bearer " + token.AccessToken);
        headers.Add("content-type", "application/json");
        var aliases = await appBundlesApi.GetAppBundleVersionsAsync(id, null, null, headers);
        return aliases;
    }
    catch (Exception ex)
    {
        Console.WriteLine(string.Format("Error : {0}", ex.Message));
        return null;
    }
}
I'm almost thinking of going back to my previous RestSharp implementation :)
There are 2 kinds of IDs:
Fully qualified (a string in the format owner.name+alias)
Unqualified (just name)
You are trying to list versions of your own AppBundle, so you need to use the unqualified form. It seems your ID is in fully qualified form; for example, if the fully qualified ID were mycompany.MyBundle+prod (a hypothetical name), the unqualified ID would be just MyBundle.
For more info, look at the API documentation's description of the id parameter of the endpoint you are using: https://forge.autodesk.com/en/docs/design-automation/v3/reference/http/design-automation-appbundles-id-versions-GET/#uri-parameters

Jena Riot Exception while updating ontology in Jena TDB dataset

In my application, I am trying to update the content of an ontology already present in TDB. Before it is stored in TDB, I convert the ontology to RDF/XML format. Loading the ontology data into an empty model using the same model.read method does not cause an issue, but updating gives this error. Can someone help me with a solution?
Input arguments: modelName -> named model saved in TDB
inputString -> ontology data converted to a String
My code is as below:
public void updateModel(String modelName, String inputString) {
    dataset.begin(ReadWrite.WRITE);
    try {
        Model model = dataset.getNamedModel(modelName);
        // clear the existing content, then re-read the new serialization
        model.removeAll();
        StringReader reader = new StringReader(inputString);
        model.read(reader, null, "RDF/XML-ABBREV");
        dataset.commit();
        logger.info("dataset committed");
    } catch (Exception e) {
        logger.error("Exception in reading : " + e);
        dataset.abort();
    } finally {
        dataset.end();
    }
}
It throws the following error: org.apache.jena.riot.RiotException: [line: 1, col: 1 ] Content is not allowed in prolog.
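A common cause of "Content is not allowed in prolog" at line 1, column 1 is stray characters ahead of the XML declaration, such as a UTF-8 byte order mark picked up while converting the ontology to a String. A minimal sketch of guarding against that before the read (the helper below is an illustrative assumption, not part of the original code):
private static String stripBom(String s) {
    // '\uFEFF' is how a byte order mark appears in a decoded Java String
    if (!s.isEmpty() && s.charAt(0) == '\uFEFF') {
        return s.substring(1);
    }
    return s;
}
It would then be called as model.read(new StringReader(stripBom(inputString)), null, "RDF/XML-ABBREV").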

Parse Stackdriver LogEntry JSON in Dataflow pipeline

I'm building a Dataflow pipeline to process Stackdriver logs; the data are read from Pub/Sub and the results are written into BigQuery.
When I read from Pub/Sub I get JSON strings of LogEntry objects, but what I'm really interested in is the protoPayload.line records, which contain the user log messages. To get those I need to parse the LogEntry JSON object, and I found a two-year-old Google example of how to do it:
import com.google.api.client.json.JsonParser;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.logging.model.LogEntry;

try {
    JsonParser parser = new JacksonFactory().createJsonParser(entry);
    LogEntry logEntry = parser.parse(LogEntry.class);
    logString = logEntry.getTextPayload();
} catch (IOException e) {
    LOG.error("IOException parsing entry: " + e.getMessage());
} catch (NullPointerException e) {
    LOG.error("NullPointerException parsing entry: " + e.getMessage());
}
Unfortunately this doesn't work for me: logEntry.getTextPayload() returns null. I'm not even sure it's supposed to work, as the com.google.api.services.logging library is not mentioned anywhere in the Google Cloud docs; the current logging library seems to be google-cloud-logging.
So could anyone suggest the right, or simplest, way of parsing LogEntry objects?
I ended up manually parsing the LogEntry JSON with the Gson library, using the tree-traversal approach in particular.
Here is a small snippet:
import com.google.gson.JsonArray;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.beam.sdk.transforms.DoFn; // Beam SDK; adjust if using the older Dataflow 1.x SDK

static class ProcessLogMessages extends DoFn<String, String> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        String entry = c.element();
        JsonParser parser = new JsonParser();
        JsonElement element = parser.parse(entry);
        if (element.isJsonNull()) {
            return;
        }
        // descend to protoPayload.line, which holds the user log messages
        JsonObject root = element.getAsJsonObject();
        JsonArray lines = root.get("protoPayload").getAsJsonObject().get("line").getAsJsonArray();
        for (int i = 0; i < lines.size(); i++) {
            JsonObject line = lines.get(i).getAsJsonObject();
            String logMessage = line.get("logMessage").getAsString();
            // Do what you need with the logMessage here
            c.output(logMessage);
        }
    }
}
This is simple enough and works fine for me, since I'm interested in the protoPayload.line.logMessage objects only. But I guess this is not the ideal way of parsing LogEntry objects if you need to work with many attributes.
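For context, here is a minimal sketch of how this DoFn might be wired into the pipeline; the subscription name is a placeholder, and the use of Beam's PubsubIO reflects my reading of the setup described above:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.ParDo;

public class LogPipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // read raw LogEntry JSON strings, then extract the log messages
        p.apply(PubsubIO.readStrings()
                .fromSubscription("projects/my-project/subscriptions/my-sub")) // placeholder
         .apply(ParDo.of(new ProcessLogMessages()));
        p.run();
    }
}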

Modifying XPath 2.0 result trees using Saxon

I would like to
add/remove/update elements/attributes/values to the "subTree"
be able to save the updated "targetDoc" back to the "target" file location.
determine which tree model would be best for this xpath + tree modification procedure.
I thought I should somehow be able to get a MutableNodeInfo object, but I don't know how to do this. I tried using processor.setConfigurationProperty(FeatureKeys.TREE_MODEL, Builder.LINKED_TREE), but this still gives me an underlying node of TinyElementImpl. I require XPath 2.0 to avoid having to enter default namespaces, which is why I am using Saxon's s9api instead of Java's default DOM model. I would also like to avoid using XSLT/XQuery if possible, because these tree modifications are made dynamically, which would make XSLT/XQuery more complicated in my situation.
public static void main(String[] args) {
    // XML file namespace URIs
    Hashtable<String, String> namespaceURIs = new Hashtable<>();
    namespaceURIs.put("def", "http://www.cdisc.org/ns/def/v2.0");
    namespaceURIs.put("xmlns", "http://www.cdisc.org/ns/odm/v1.3");
    namespaceURIs.put("xsi", "http://www.w3.org/2001/XMLSchema-instance");
    namespaceURIs.put("xlink", "http://www.w3.org/1999/xlink");
    namespaceURIs.put("", "http://www.cdisc.org/ns/odm/v1.3");
    // The source/target xml document
    String target = "Path to file.xml";
    // An xpath string
    String xpath = "/ODM/Study/MetaDataVersion/ItemGroupDef[@OID/string()='IG.TA']";
    Processor processor = new Processor(true);
    // I thought this tells the processor to use something other than TinyTree
    processor.setConfigurationProperty(FeatureKeys.TREE_MODEL, Builder.LINKED_TREE);
    DocumentBuilder builder = processor.newDocumentBuilder();
    XPathCompiler xpathCompiler = processor.newXPathCompiler();
    for (Entry<String, String> entry : namespaceURIs.entrySet()) {
        xpathCompiler.declareNamespace(entry.getKey(), entry.getValue());
    }
    try {
        XdmNode targetDoc = builder.build(Paths.get(target).toFile());
        XPathSelector selector = xpathCompiler.compile(xpath).load();
        selector.setContextItem(targetDoc);
        XdmNode subTree = (XdmNode) selector.evaluateSingle();
        // The following prints: class net.sf.saxon.tree.tiny.TinyElementImpl
        System.out.println(subTree.getUnderlyingNode().getClass());
        /*
         * Here is where I would like to modify the subtree and save the modified doc
         */
    } catch (SaxonApiException e) {
        e.printStackTrace();
    }
}
I think you can supply a DOM node to Saxon and run XPath against it, but in that case you don't use the DocumentBuilder for Saxon's native trees; you build a DOM using javax.xml.parsers.DocumentBuilder, and once you have a W3C DOM node you can supply it to Saxon using the wrap method of a Saxon DocumentBuilder. Here is sample code taken from the file S9APIExamples.java in the Saxon 9.6 resources file:
// Build the DOM document
File file = new File("data/books.xml");
DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance();
dfactory.setNamespaceAware(true);
javax.xml.parsers.DocumentBuilder docBuilder;
try {
    docBuilder = dfactory.newDocumentBuilder();
} catch (ParserConfigurationException e) {
    throw new SaxonApiException(e);
}
Document doc;
try {
    doc = docBuilder.parse(new InputSource(file.toURI().toString()));
} catch (SAXException e) {
    throw new SaxonApiException(e);
} catch (IOException e) {
    throw new SaxonApiException(e);
}
// Compile the XPath expression
Processor proc = new Processor(false);
DocumentBuilder db = proc.newDocumentBuilder();
XdmNode xdmDoc = db.wrap(doc);
XPathCompiler xpath = proc.newXPathCompiler();
XPathExecutable xx = xpath.compile("//ITEM/TITLE");
// Run the XPath expression
XPathSelector selector = xx.load();
selector.setContextItem(xdmDoc);
for (XdmItem item : selector) {
    XdmNode node = (XdmNode) item;
    org.w3c.dom.Node element = (org.w3c.dom.Node) node.getExternalNode();
    System.out.println(element.getTextContent());
}
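Since the goal is to modify the selected subtree and save the result, here is a minimal sketch of that step built on the wrapped-DOM approach above: the W3C DOM is mutable, so the node found by XPath can be changed in place and the document re-serialized with the JDK's Transformer. The attribute name and output path are illustrative placeholders:
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Element;

// 'node' is an XdmNode selected over the wrapped DOM, as in the loop above
Element element = (Element) node.getExternalNode();
element.setAttribute("Updated", "true"); // illustrative modification

// serialize the mutated DOM back to a file
Transformer t = TransformerFactory.newInstance().newTransformer();
t.setOutputProperty(OutputKeys.INDENT, "yes");
t.transform(new DOMSource(element.getOwnerDocument()),
        new StreamResult(new java.io.File("books-updated.xml"))); // placeholder path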
There are also samples showing how to use Saxon with JDOM and other mutable tree implementations, but I think you need Saxon PE or EE to have direct support for those.
The MutableNodeInfo interface in Saxon is designed very specifically to meet the needs of XQuery Update, and I would advise against trying to use it directly from Java; the implementation isn't likely to be robust when handling method calls other than those made by XQuery Update.
In fact, it's generally true that the Saxon NodeInfo interface is designed as a target for XPath, rather than for user-written Java code. I would therefore suggest using a third party tree model; the ones I like best are JDOM2 and XOM. Both of these allow you to mix direct Java navigation and update with use of XPath 2.0 navigation using Saxon.

Twitter data streaming - dynamic keyword value change

I am currently doing sentiment analysis on Twitter data in Hadoop.
I have configured Flume to bring in data based on certain keywords, which I maintain in the "flume.conf" file, as shown below:
TwitterAgent.sources.Twitter.keywords = Kitkat, Nescafe, Carnation, Milo, Cerelac
With that in place, I would like to know whether it is possible to change the keywords dynamically, based on a Java program that pops up asking the user for keywords.
Also, apart from this method, please suggest any other methods that would hide the complexity of updating the flume.conf file.
Best Regards,
Ram
Flume provides an Application class that lets you run a configuration file from a Java program.
public class Flume {
    public static void main(String[] args) throws Exception {
        // prompt for keywords and write the updated copy of the config
        new Conf().setConfiguration();
        // point the agent at the updated copy (flumee.conf) written by Conf,
        // not at the original flume.conf, so the new keywords take effect
        String[] args1 = new String[] { "agent", "-nTwitterAgent", "-fflumee.conf" };
        BasicConfigurator.configure();
        System.setProperty("hadoop.home.dir", "/");
        Application.main(args1);
    }
}
Here you get a chance to edit your conf file using some file utilities, like this:
class Conf {
    int ch;
    String keyword = "";
    Scanner sc = new Scanner(System.in);

    public void setConfiguration() {
        System.out.println("Enter the keyword");
        keyword = sc.nextLine();
        byte[] key = keyword.getBytes();
        FileOutputStream fp = null;
        FileInputStream src = null;
        try {
            // copy flume.conf to flumee.conf, then append the keyword
            fp = new FileOutputStream("flumee.conf");
            src = new FileInputStream("flume.conf");
            while ((ch = src.read()) != -1) {
                fp.write(ch);
            }
            fp.write(key);
        } catch (Exception e) {
            System.out.println("file Exception: " + e);
        } finally {
            try {
                if (fp != null) {
                    fp.close();
                }
                if (src != null) {
                    src.close();
                }
            } catch (Exception e) {
                System.out.println("file closing Exception: " + e);
            }
        }
    }
}
Now you need to keep the TwitterAgent.sources.Twitter.keywords= line at the end of your Flume configuration file, so that it is easy to append the keywords to the file. My flume.conf file looks like this:
TwitterAgent.sources= Twitter
TwitterAgent.channels= MemChannel
TwitterAgent.sinks=HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels=MemChannel
TwitterAgent.sources.Twitter.consumerKey=xxx
TwitterAgent.sources.Twitter.consumerSecret=xxx
TwitterAgent.sources.Twitter.accessToken=xxx
TwitterAgent.sources.Twitter.accessTokenSecret=
TwitterAgent.sinks.HDFS.channel=MemChannel
TwitterAgent.sinks.HDFS.type=hdfs
TwitterAgent.sinks.HDFS.hdfs.path=hdfs://localhost:9000/user/flume/direct
TwitterAgent.sinks.HDFS.hdfs.fileType=DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat=Text
TwitterAgent.sinks.HDFS.hdfs.batchSize=1000
TwitterAgent.sinks.HDFS.hdfs.rollSize=0
TwitterAgent.sinks.HDFS.hdfs.rollCount=10000
TwitterAgent.sinks.HDFS.hdfs.rollInterval=600
TwitterAgent.channels.MemChannel.type=memory
TwitterAgent.channels.MemChannel.capacity=10000
TwitterAgent.channels.MemChannel.transactionCapacity=1000
TwitterAgent.sources.Twitter.keywords=
When I run the program, it asks for the keywords first and then starts collecting tweets about those keywords.
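As a side note, the copy-and-append step in Conf could be written more compactly with java.nio.file; a sketch, keeping the same flume.conf / flumee.conf names used above:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// copy the template config, then append the user-supplied keyword line
Path template = Paths.get("flume.conf");
Path updated = Paths.get("flumee.conf");
Files.copy(template, updated, StandardCopyOption.REPLACE_EXISTING);
Files.write(updated, keyword.getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND);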
