My question is basically how to properly execute a SPARQL update using SailGraph created by Tinkerpop.
DELETE { ?i_id_iri rdfs:label "BII-I-1" }
INSERT { ?i_id_iri rdfs:label "BII-I-4" }
WHERE
{
?investigation rdf:type obi:0000011.
?i_id_iri rdf:type iao:0000577.
?i_id_iri iao:0000219 ?investigation.
}
I have this query so far, with the prefixes added on top from another file, but it does not work.
The code I run is as follows:
query = parser.parseUpdate(queryString, baseURI);
UpdateExpr expr = query.getUpdateExprs().get(0);
Dataset dataset = query.getDatasetMapping().get(expr);
GraphDatabase.getSailConnection().executeUpdate(expr, dataset, new EmptyBindingSet(), false);
I'm not particularly familiar with the Tinkerpop GraphSail, but assuming it implements the Sesame SAIL API correctly, executing a SPARQL query is far easier if you just wrap it in a SailRepository, like so:
TinkerGraph graph = new TinkerGraph();
Sail sail = new GraphSail(graph);
Repository rep = new SailRepository(sail);
rep.initialize();
You can then use Sesame's Repository API, which is far more user-friendly than trying to do operations directly on the SAIL (which is not designed for that purpose).
To open a connection and execute a SPARQL update, for example, you do:
RepositoryConnection conn = rep.getConnection();
try {
String sparql = "INSERT {?s a <foo:example> . } WHERE {?s ?p ?o } ";
Update update = conn.prepareUpdate(QueryLanguage.SPARQL, sparql);
update.execute();
}
finally {
conn.close();
}
See the Sesame documentation for more details on the Repository API, or check out the Javadoc.
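The question mentions prefixes being prepended from another file. Independent of Sesame, a minimal JDK-only sketch of gluing PREFIX declarations onto a SPARQL update string before handing it to prepareUpdate (the prefix IRIs shown are illustrative assumptions; use whatever your prefix file actually declares):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SparqlPrefixes {

    // Prepends one "PREFIX name: <iri>" line per map entry to the update body.
    static String withPrefixes(Map<String, String> prefixes, String body) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : prefixes.entrySet()) {
            sb.append("PREFIX ").append(e.getKey()).append(": <")
              .append(e.getValue()).append(">\n");
        }
        return sb.append(body).toString();
    }

    public static void main(String[] args) {
        Map<String, String> prefixes = new LinkedHashMap<>();
        prefixes.put("rdfs", "http://www.w3.org/2000/01/rdf-schema#");
        String update = withPrefixes(prefixes,
            "DELETE { ?s rdfs:label \"BII-I-1\" }\n"
          + "INSERT { ?s rdfs:label \"BII-I-4\" }\n"
          + "WHERE  { ?s rdfs:label \"BII-I-1\" }");
        System.out.println(update);
    }
}
```

The resulting string can then be passed straight to conn.prepareUpdate(QueryLanguage.SPARQL, update).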
Related
I'm trying to write a script that creates a Docker image using a Jenkins image as base, that is, the first line of my Dockerfile is...
FROM jenkins/jenkins:2.249.3
However, I want to be smart and write a script that gets the latest stable Jenkins version and seds it into my Dockerfile, like this:
Dockerfile:
FROM jenkins/jenkins:JENKINS_LATEST_STABLE_VER
$ export JENKINS_LATEST_STABLE_VER=`some_api_call`
$ sed -i "s/JENKINS_LATEST_STABLE_VER/$JENKINS_LATEST_STABLE_VER/g" Dockerfile
$ docker build -t docker_url/jenkins:$JENKINS_LATEST_STABLE_VER .
$ docker push docker_url/jenkins:$JENKINS_LATEST_STABLE_VER
I'm aware of jenkins/jenkins:lts but I NEED the actual version number, e.g., 2.249.3. What is "some_api_call"?
I saw this post and tried it. However, it returns Alpine images, and I need CentOS ones. For the life of me I can't figure out the URL for the CentOS images. I essentially want to get this list.
// Import the JsonSlurper class to parse Dockerhub API response
import groovy.json.JsonSlurper
// Set the URL we want to read from. The commented-out example below uses MySQL from the official library, limited to 20 results.
// docker_image_tags_url = "https://hub.docker.com/v2/repositories/library/mysql/tags/?page_size=20"
docker_image_tags_url = "https://hub.docker.com/v2/repositories/library/jenkins/tags?page_size=30"
try {
// Set requirements for the HTTP GET request, you can add Content-Type headers and so on...
def http_client = new URL(docker_image_tags_url).openConnection() as HttpURLConnection
http_client.setRequestMethod('GET')
// Run the HTTP request
http_client.connect()
// Prepare a variable where we save parsed JSON as a HashMap, it's good for our use case, as we just need the 'name' of each tag.
def dockerhub_response = [:]
// Check if we got HTTP 200, otherwise exit
if (http_client.responseCode == 200) {
dockerhub_response = new JsonSlurper().parseText(http_client.inputStream.getText('UTF-8'))
} else {
println("HTTP response error")
System.exit(0)
}
// Prepare a List to collect the tag names into
def image_tag_list = []
// Iterate the HashMap of all Tags and grab only their "names" into our List
dockerhub_response.results.each { tag_metadata ->
image_tag_list.add(tag_metadata.name)
}
// The returned value MUST be a Groovy type of List or a related type (inherited from List)
// It is necessary for the Active Choice plugin to display results in a combo-box
return image_tag_list.sort()
} catch (Exception e) {
// handle exceptions like timeout, connection errors, etc.
println(e)
}
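The script above returns the raw tag list. To get at the "I NEED the actual version number" part of the question, a JDK-only sketch (independent of the Groovy script; the tag values below are illustrative) that picks the highest purely numeric x.y.z tag, skipping aliases like lts or alpine:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

public class LatestStableTag {

    // Matches purely numeric tags like "2.249.3"; aliases such as "lts",
    // "alpine", or "2.249.3-lts" are filtered out.
    private static final Pattern STABLE = Pattern.compile("\\d+(\\.\\d+)*");

    // Returns the highest tag by numeric (not lexicographic) comparison,
    // so "2.249.3" beats "2.60.3". Returns null if no numeric tag exists.
    static String latestStable(List<String> tags) {
        return tags.stream()
                .filter(t -> STABLE.matcher(t).matches())
                .max(Comparator.comparing(LatestStableTag::parts, Arrays::compare))
                .orElse(null);
    }

    private static int[] parts(String tag) {
        return Arrays.stream(tag.split("\\."))
                .mapToInt(Integer::parseInt)
                .toArray();
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("lts", "2.249.3", "2.60.3", "alpine", "2.249.1");
        System.out.println(latestStable(tags)); // prints 2.249.3
    }
}
```

Note that a plain string sort would rank "2.60.3" above "2.249.3", which is why the tags are compared field by field as integers.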
Hi, I have a couple of queries whose results I want to run and save in sequence, one after another, using Apache Beam. I've seen some similar questions but couldn't find an answer. I'm used to designing pipelines in Airflow and I'm fairly new to Apache Beam; I'm using the Dataflow runner. Here's my code right now. I would like query2 to run only after the query1 results are saved to the corresponding table. How do I chain them?
PCollection<TableRow> resultsStep1 = getData("Run Query 1",
"Select * FROM basetable");
resultsStep1.apply("Save Query1 data",
BigQueryIO.writeTableRows()
.withSchema(BigQueryUtils.toTableSchema(resultsStep1.getSchema()))
.to("resultsStep1")
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
);
PCollection<TableRow> resultsStep2 = getData("Run Query 2",
"Select * FROM resultsStep1");
resultsStep2.apply("Save Query2 data",
BigQueryIO.writeTableRows()
.withSchema(BigQueryUtils.toTableSchema(resultsStep2.getSchema()))
.to("resultsStep2")
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
);
And here's my getData function definition:
private PCollection<TableRow> getData(final String taskName, final String query) {
return pipeline.apply(taskName,
BigQueryIO.readTableRowsWithSchema()
.fromQuery(query)
.usingStandardSql()
.withCoder(TableRowJsonCoder.of()));
}
Edit (Update): Turns out:
You can’t sequence the completion of a BigQuery write with other steps of your pipeline.
Which I think is a big limitation for designing pipelines.
Source: https://beam.apache.org/documentation/io/built-in/google-bigquery/#limitations
You can use the Wait transform to do this. A contrived example is below:
PCollection<Void> firstWriteResults = data.apply(ParDo.of(...write to first database...));
data.apply(Wait.on(firstWriteResults))
// Windows of this intermediate PCollection will be processed no earlier than when
// the respective window of firstWriteResults closes.
.apply(ParDo.of(...write to second database...));
You can find more details in the API documentation here: https://beam.apache.org/releases/javadoc/2.17.0/index.html?org/apache/beam/sdk/transforms/Wait.html
I am totally new to Jira. In fact I don't even know where to start. I went to the Atlassian Jira website but got nothing solid enough to help me. I would like to validate whether the information entered into a textbox already exists. I clicked around Jira and ended up on the screen below:
Now I would like to find out the following:
Which programming language should be used for validation? Is it Java?
If the name of the custom field (of type Textbox) is XYZ and I want to check whether the value entered into XYZ already exists, how do I go about doing that? Can I just write conditional statements in Java?
I wrote some stuff and nothing worked.
That's a screenshot from the Script Runner add-on.
There is documentation and examples for custom validators here.
You can also find an example here that shows how to query the JIRA (or an external) database from a Groovy script, e.g.:
import com.atlassian.jira.component.ComponentAccessor
import groovy.sql.Sql
import org.ofbiz.core.entity.ConnectionFactory
import org.ofbiz.core.entity.DelegatorInterface
import java.sql.Connection
def delegator = (DelegatorInterface) ComponentAccessor.getComponent(DelegatorInterface)
String helperName = delegator.getGroupHelperName("default");
def sqlStmt = """
SELECT project.pname, COUNT(*) AS kount
FROM project
INNER JOIN jiraissue ON project.ID = jiraissue.PROJECT
GROUP BY project.pname
ORDER BY kount DESC
"""
Connection conn = ConnectionFactory.getConnection(helperName);
Sql sql = new Sql(conn)
try {
StringBuffer sb = new StringBuffer()
sql.eachRow(sqlStmt) {
sb << "${it.pname}\t${it.kount}\n"
}
log.debug sb.toString()
}
finally {
sql.close()
}
For anything that gets a bit complex, it's easier to implement your script in a Groovy file and make it available to Script Runner via the file system. That also allows you to use a VCS like Git to easily push/pull your changes. More info about how to go about that is here.
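Jira and ScriptRunner APIs aside, the validator itself boils down to a membership test over the existing field values. A hedged, JDK-only sketch of that core check (fetching the existing values, e.g. via a JQL search or the SQL query above, is left out; the inputs here are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Locale;

public class DuplicateCheck {

    // Returns true when candidate already exists among the existing values,
    // ignoring case and surrounding whitespace.
    static boolean isDuplicate(String candidate, Collection<String> existing) {
        String needle = candidate.trim().toLowerCase(Locale.ROOT);
        for (String value : existing) {
            if (value != null && value.trim().toLowerCase(Locale.ROOT).equals(needle)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> existing = Arrays.asList("Alpha", "Beta ");
        System.out.println(isDuplicate("alpha", existing)); // prints true
        System.out.println(isDuplicate("Gamma", existing)); // prints false
    }
}
```

In a ScriptRunner validator you would run this kind of check and fail validation (e.g. by throwing an InvalidInputException) when it returns true.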
I have used Tinkerpop and the OpenRDF Sail to connect to a local Neo4j instance without problems.
String dB_DIR = "neo4j//data";
Sail sail = new GraphSail(new Neo4jGraph(dB_DIR));
sail.initialize();
so that I can import a TTL or RDF file and then query it.
But now I want to connect to a remote Neo4j instance.
How can I use the Neo4j JDBC driver in this case?
Or does the Tinkerpop Blueprints API have a way to do it?
(I did some searching but found no good answer.)
You should use the Sesame Repository API for accessing a Sail object.
Specifically, what you need to do is wrap your Sail object in a Repository object:
Sail sail = new GraphSail(new Neo4jGraph(dB_DIR));
Repository rep = new SailRepository(sail);
rep.initialize();
After this, use the Repository object to connect to your store and perform actions, e.g. to load a Turtle file and then do a query:
RepositoryConnection conn = rep.getConnection();
try {
// load data
File file = new File("/path/to/file.ttl");
conn.add(file, file.getAbsolutePath(), RDFFormat.TURTLE);
// do query and print result to STDOUT
String query = "SELECT * WHERE {?s ?p ?o} LIMIT 10";
TupleQueryResult result =
conn.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
while (result.hasNext()) {
System.out.println(result.next().toString());
}
}
finally {
conn.close();
}
See the Sesame documentation or Javadoc for more info and examples of how to use the Repository API.
(disclosure: I am on the Sesame development team)
I used exactly the query that http://docs.neo4j.org/chunked/milestone/query-using.html describes.
My Neo4j Kernel version is
Neo4j - Graph Database Kernel 2.0.0-M03
I don't know why it fails.
It's OK for me to run:
CREATE (_1 { `name`:"Emil" })
CREATE (_2:`German` { `name`:"Stefan", `surname`:"Plantikow" })
CREATE (_3 { `age`:34, `name`:"Peter" })
CREATE (_4:`Swedish` { `age`:36, `awesome`:true, `name`:"Andres", `surname`:"Taylor" })
CREATE _1-[:`KNOWS`]->_3
CREATE _2-[:`KNOWS`]->_4
CREATE _4-[:`KNOWS`]->_3
But I get an "Unknown error" when running:
match n:Swedish using index n:Swedish(surname)
where n.surname = 'Taylor'
return n
If your query explicitly mandates the use of an index, you need to make sure that the index actually exists.
So run this before querying:
CREATE INDEX ON :Swedish(surname)