I am trying to use Jena to write to a local free standalone GraphDB (version 8.5.0) repository.
What I have tried
(1) Direct use from Jena
I used this Jena 3.7.0 code snippet:
import org.apache.jena.update.UpdateExecutionFactory;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateProcessor;
import org.apache.jena.update.UpdateRequest;

String strInsert =
    "INSERT DATA {"
    + "<http://dbpedia.org/resource/Grace_Hopper> "
    + "<http://dbpedia.org/ontology/birthDate> "
    + "\"1906-12-09\"^^<http://www.w3.org/2001/XMLSchema#date> .}";
UpdateRequest updateRequest = UpdateFactory.create(strInsert);
UpdateProcessor updateProcessor = UpdateExecutionFactory.createRemote(updateRequest,
        "http://localhost:7200/repositories/PersonData");
updateProcessor.execute();
which results in the following exception
org.apache.jena.atlas.web.HttpException: 415 -
at org.apache.jena.riot.web.HttpOp.exec(HttpOp.java:1091)
at org.apache.jena.riot.web.HttpOp.execHttpPost(HttpOp.java:718)
at org.apache.jena.riot.web.HttpOp.execHttpPost(HttpOp.java:501)
at org.apache.jena.riot.web.HttpOp.execHttpPost(HttpOp.java:459)
at org.apache.jena.sparql.modify.UpdateProcessRemote.execute(UpdateProcessRemote.java:81)
at org.graphdb.jena.tutorial.SimpleInsertQueryExample.main(SimpleInsertQueryExample.java:91)
On the GraphDB side I get the following error:
[INFO ] 2018-06-29 11:33:05,605 [repositories/PersonData | o.e.r.h.s.ProtocolExceptionResolver] Client sent bad request ( 415)
org.eclipse.rdf4j.http.server.ClientHTTPException: Unsupported MIME type: application/sparql-update
(2) GraphDB via Jena Fuseki
As an alternative I explored the GraphDB documentation, which states that it is possible to access GraphDB using the Jena Joseki (now Fuseki) server. For that, Fuseki needs to be configured to read GraphDB as a Jena dataset, accessed via the Ontotext Jena adapter com.ontotext.jena.SesameDataset. However, I can find no GraphDB libraries that include this class.
(3) Accessing GraphDB using RDF4J
Accessing GraphDB from RDF4J works without issues:
Repository repository = new HTTPRepository(GRAPHDB_SERVER, REPOSITORY_ID);
repository.initialize();
RepositoryConnection repositoryConnection = repository.getConnection();
repositoryConnection.begin();
Update updateOperation = repositoryConnection.prepareUpdate(QueryLanguage.SPARQL, strInsert);
updateOperation.execute();
try {
    repositoryConnection.commit();
} catch (Exception e) {
    if (repositoryConnection.isActive())
        repositoryConnection.rollback();
}
My Question
Is there a way to access GraphDB efficiently from Jena? I have seen this related SO question, but I was hoping for a better approach.
GraphDB implements standard SPARQL 1.1 endpoints according to the RDF4J protocol:
http://localhost:7200/repositories/PersonData - SPARQL query endpoint, which does not support "application/sparql-update"
http://localhost:7200/repositories/PersonData/statements - SPARQL update endpoint
Try changing your code to point to the update endpoint:
UpdateProcessor updateProcessor = UpdateExecutionFactory.createRemote(updateRequest,
        "http://localhost:7200/repositories/PersonData/statements");
The Jena adapter to GraphDB is no longer supported.
FWIW not an answer to "how to connect with Jena", but the code you use to access GraphDB via the RDF4J API is more complicated than it needs to be. You can simply do this:
Repository repository = new HTTPRepository(GRAPHDB_SERVER, REPOSITORY_ID);
repository.initialize();
try (RepositoryConnection conn = repository.getConnection()) {
    conn.prepareUpdate(strInsert).execute();
}
It will auto-commit and also automatically roll back on connection close if necessary.
We are using SAP SDK 3.25.0 and calling a batch read request, passing some filters. The response contains all records, so the filter is evidently not being applied.
I have referred to this blog, but I am hitting the same URL-decoding issue: YY1_QuantityContractTracki?$filter=((CustomerName eq %27Ford%27) and (SalesSchedulingAgreement eq %270030000141%27)) and (PurchaseOrderByCustomer eq %27TEST%27)&$select=SalesSchedulingAgreement,PurchaseOrderByCustomer,Customer,CustomerName,SalesSchedulingAgreementItem,Material,MaterialByCustomer&$format=json
Below is the query program I am using.
Am I missing something here? Please let me know.
Thanks,
Arun Pai
final BatchRequestBuilder builder = BatchRequestBuilder.withService("/sap/opu/odata/sap/YY1_QUANTITYCONTRACTTRACKI_CDS");
for (Contract contract : contracts) {
    FilterExpression mainFilter = new FilterExpression("CustomerName", "eq", ODataType.of(contract.getCustomerName()))
            .and(new FilterExpression("SalesSchedulingAgreement", "eq", ODataType.of(contract.getSchAgrmntNo())))
            .and(new FilterExpression("PurchaseOrderByCustomer", "eq", ODataType.of(contract.getCustRefNo())));
    final ODataQuery oDataQuery = ODataQueryBuilder
            .withEntity(sapConfig.getEssentialsContractServiceUrl(),
                    sapConfig.getEssentialsContractListEntity())
            .select("SalesSchedulingAgreement", "PurchaseOrderByCustomer", "Customer", "CustomerName",
                    "SalesSchedulingAgreementItem", "Material", "MaterialByCustomer")
            .filter(mainFilter)
            .build();
    builder.addQueryRequest(oDataQuery);
}
final BatchRequest batchRequest = builder.build();
final BatchResult batchResult = batchRequest.execute(httpClient);
Update
I changed the version to 3.35.0 today, with connectivity version 1.40.11, but it didn't work either.
Below is the request that gets logged to the console:
2021-01-15 19:15:03.831 INFO 42640 --- [io-8084-exec-10] c.s.c.s.o.c.impl.BatchRequestImpl : --batch_123
Content-Type: application/http
Content-Transfer-Encoding: binary
GET YY1_QuantityContractTracki?%24filter%3D%28%28CustomerName+eq+%2527Ford%2527%29+and+%28SalesSchedulingAgreement+eq+%25270030000141%2527%29%29+and+%28PurchaseOrderByCustomer+eq+%2527TEST%2527%29%26%24select%3DSalesSchedulingAgreement%2CPurchaseOrderByCustomer%2CCustomer%2CCustomerName%2CSalesSchedulingAgreementItem%2CMaterial%2CMaterialByCustomer%26%24format%3Djson HTTP/1.1
Accept: application/json;odata=verbose
--batch_123--
For your information: with the release of SAP Cloud SDK 3.41.0, we enabled support for read operations in OData batch requests on the type-safe API. Please find the chapter in the respective documentation. You no longer need to use the Generic OData Client of the SAP Cloud SDK as suggested in the other response. Example:
BusinessPartnerService service;

BusinessPartnerFluentHelper requestTenEntities = service.getAllBusinessPartner().top(10);
BusinessPartnerByKeyFluentHelper requestSingleEntity = service.getBusinessPartnerByKey("bupa9000");

BatchResponse result =
    service
        .batch()
        .addReadOperations(requestTenEntities)
        .addReadOperations(requestSingleEntity)
        .executeRequest(destination);

List<BusinessPartner> entities = result.getReadResult(requestTenEntities);
BusinessPartner entity = result.getReadResult(requestSingleEntity);
Original response:
I'm from the SAP Cloud SDK team. Generally we recommend that users generate classes for their OData service interactions. This way you can easily make sure that requests conform to the specification, while type safety is taken care of.
Unfortunately I cannot help you with the API of BatchRequestBuilder, BatchRequest, or BatchResult, because they are not directly part of the SAP Cloud SDK and are not maintained by us. Instead we suggest our own request builders.
If generating classes, as linked above, is not an option for you, then I would suggest trying our expert API featuring the Generic OData Client of the SAP Cloud SDK. This is the code that we also use internally for our generated request builders:
String servicePath = "/sap/opu/odata/sap/YY1_QUANTITYCONTRACTTRACKI_CDS";
ODataRequestBatch requestBatch = new ODataRequestBatch(servicePath, ODataProtocol.V2);

Map<Contract, ODataRequestRead> batchedRequests = new HashMap<>();

// iterate over contracts, construct OData query objects and add them to the OData batch request builder
for (Contract contract : contracts) {
    String entityName = sapConfig.getEssentialsContractListEntity();
    String serviceUrl = sapConfig.getEssentialsContractServiceUrl();

    StructuredQuery structuredQuery = StructuredQuery.onEntity(entityName, ODataProtocol.V2);
    structuredQuery.select("SalesSchedulingAgreement", "PurchaseOrderByCustomer", "Customer", "CustomerName",
            "SalesSchedulingAgreementItem", "Material", "MaterialByCustomer");
    structuredQuery.filter(FieldReference.of("SalesSchedulingAgreement").equalTo(contract.getSchAgrmntNo()));
    structuredQuery.filter(FieldReference.of("PurchaseOrderByCustomer").equalTo(contract.getCustRefNo()));

    String encodedQuery = structuredQuery.getEncodedQueryString();
    ODataRequestRead requestRead = new ODataRequestRead(serviceUrl, entityName, encodedQuery, ODataProtocol.V2);

    batchedRequests.put(contract, requestRead);
    requestBatch.addRead(requestRead);
}

// execute the OData batch request
ODataRequestResultMultipartGeneric batchResult = requestBatch.execute(httpClient);

// extract information from the batch response by referring to the individual OData request references
for (Map.Entry<Contract, ODataRequestRead> requestMapping : batchedRequests.entrySet()) {
    ODataRequestResultGeneric queryResult = batchResult.getResult(requestMapping.getValue());
    List<Map<String, Object>> itemsForQuery = queryResult.asListOfMaps();
}
Kind regards
Alexander
I started the Kafka connector using the following command:
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-postgres/connect-postgres.properties
The serialization properties in connect-avro-standalone.properties are:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
I've created a Java backend which listens to this Kafka topic, and it is able to get the data from Postgres on each add/update/delete.
But the data arrives in some unknown encoding format, and that's why I can't read it correctly.
Here is the relevant code snippet:
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
        Serdes.String().getClass().getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
        Serdes.ByteArray().getClass().getName());

StreamsBuilder streamsBuilder = new StreamsBuilder();
final Serde<String> stringSerde = Serdes.String();
final Serde<byte[]> byteSerde = Serdes.ByteArray();

streamsBuilder.stream(Pattern.compile(getTopic()), Consumed.with(stringSerde, byteSerde))
        .mapValues(data -> {
            System.out.println("->" + new String(data));
            return data;
        });
I'm confused about what I need to change, and where: in the Avro connector properties or in the Java-side code?
Your Kafka Connect config here means that the messages on the Kafka topic will be Avro serialised:
value.converter=io.confluent.connect.avro.AvroConverter
That means you need to deserialise the data using Avro in your Streams app. See here for more details: https://docs.confluent.io/current/streams/developer-guide/datatypes.html#avro
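As a minimal sketch (assuming Confluent's kafka-streams-avro-serde artifact is on the classpath and the Schema Registry runs at http://localhost:8081, as in your converter config), the Streams app could consume the topic with a GenericAvroSerde instead of a byte-array serde:
import java.util.Collections;
import java.util.regex.Pattern;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;

// Configure a value serde that fetches the writer's schema from the Schema Registry.
final GenericAvroSerde valueSerde = new GenericAvroSerde();
valueSerde.configure(
        Collections.singletonMap("schema.registry.url", "http://localhost:8081"),
        false); // false = this serde is used for record values, not keys

StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream(Pattern.compile(getTopic()), Consumed.with(Serdes.String(), valueSerde))
        .mapValues(value -> {
            // value is now a GenericRecord, so it prints readable fields instead of raw bytes
            System.out.println("-> " + value);
            return value;
        });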
The URL .../SEMP/v2/config/msgVpns/default returns this data:
{
"data":{
"authenticationBasicEnabled":true,
"authenticationBasicProfileName":"default",
"authenticationBasicRadiusDomain":"",
"authenticationBasicType":"radius",
"authenticationClientCertAllowApiProvidedUsernameEnabled":false,
....
What is the Java API to return this data? Apparently there is no getMsgVpnsDefault(...) method.
Generally speaking, what is the translation of URLs into API calls? This doesn't seem to be addressed in the documentation.
What is the Java API to return this data? Apparently there is no getMsgVpnsDefault(...) method.
There's no API provided by Solace.
SEMP (v2 in your case) is a series of REST commands executed over the management port to manage the configuration of Solace routers.
This is not to be mistaken for the Java API that's provided for messaging over the messaging port/interface.
Generally speaking, what is the translation of URLs into API calls?
The complete list of URLs is documented here:
https://docs.solace.com/API-Developer-Online-Ref-Documentation/swagger-ui/index.html#/
In the Solace Samples repository on GitHub there's a gradle file which uses Swagger CodeGen to generate a POJO wrapper around SEMP v2.
This then gives you a Java API to interact with Solace routers.
With regard to your original question about getMsgVpnsDefault(...), I believe you'd use
MsgVpn defaultVPN = sempApiInstance.getMsgVpn("default", null);
Or you could grab the list of all VPNs
MsgVpnsResponse resp = sempApiInstance.getMsgVpns(1000, null, null, null);
List<MsgVpn> allVpns = resp.getData();
and then iterate over the list until you find the one whose name is "default", as sketched below.
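A sketch of that iteration (assuming the Swagger-generated MsgVpn POJO exposes a getMsgVpnName() getter):
// Find the VPN named "default" in the list returned by getMsgVpns(...)
MsgVpn defaultVpn = null;
for (MsgVpn vpn : allVpns) {
    if ("default".equals(vpn.getMsgVpnName())) {
        defaultVpn = vpn;
        break;
    }
}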
https://github.com/SolaceSamples/solace-samples-semp/tree/master/java
We are migrating from Watson Java SDK 3.8.0 to the latest one (4.2.1).
While doing the migration, I took the Watson Discovery code snippet given in this section
https://www.ibm.com/watson/developercloud/discovery/api/v1/?java#query-collection
Discovery discovery = new Discovery("2017-11-07");
discovery.setEndPoint("https://gateway.watsonplatform.net/discovery/api/");
discovery.setUsernameAndPassword("{username}", "{password}");
String environmentId = "{environment_id}";
String collectionId = "{collection_id}";
QueryRequest.Builder queryBuilder = new QueryRequest.Builder(environmentId, collectionId);
queryBuilder.query("{field}:{value}");
QueryResponse queryResponse = discovery.query(queryBuilder.build()).execute();
But it looks like the 4.2.1 jar does not contain the QueryRequest class; I am not able to find it. Is the code snippet given on the API reference page out of date?
Instead of QueryRequest, use QueryOptions, as the new SDK no longer contains QueryRequest.
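A minimal sketch of the equivalent 4.x-style call, assuming the generated QueryOptions builder accepts the environment and collection IDs as required arguments (check the 4.x javadoc):
// 4.x style: QueryOptions replaces QueryRequest
QueryOptions queryOptions = new QueryOptions.Builder(environmentId, collectionId)
        .query("{field}:{value}")
        .build();
QueryResponse queryResponse = discovery.query(queryOptions).execute();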
I have a Cosmos DB document database, and I am using the SDL OData framework to plug in Cosmos DB as the datasource of an OData service. Since Cosmos DB is schema-agnostic and any valid JSON can be stored/indexed, the OData service with a Cosmos DB datasource needs Open Type support, so that undeclared dynamic properties from the input JSON request can be saved in Cosmos DB.
I checked the EdmEntity annotation and found the option to flag an entity as open, for example:
@EdmEntity(namespace = "SDL.OData.Example", key = "id", containerName = "SDLExample", open = true)
However, when looking at ODataJsonParser in the odata_renderer module of SDL OData, I do not see any support for dynamic fields in the input JSON; they are just ignored.
How is Open Type supposed to work in SDL OData?
Any help/hints/suggestions will be appreciated.