What is the difference between "update" and "insert" in the Jena ARQ documentation when using Fuseki? For example, if I need to add a simple statement to a Fuseki database (or graph), should I use the UpdateExecutionFactory.createRemote method or ARQ's SPARQL Update?
An "Update" is a SPARQL Update operation.
An "Update Request" is a number of SPARQL Update operations, separate by ";" all sent at once.
"INSERT" is one of the SPARQL Update verbs - there are other things you can do in SPARQL Update like delete data or work on whole graphs.
In addition, adding data to a dataset or graph can informally be referred to as "inserting".
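For the concrete case in the question, a minimal sketch of sending an INSERT DATA update to Fuseki with UpdateExecutionFactory.createRemote might look like this. It assumes Jena 3.x package names (in Jena 2.x they live under com.hp.hpl.jena), a Fuseki dataset named "ds", and a made-up triple:

import org.apache.jena.update.UpdateExecutionFactory;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateProcessor;
import org.apache.jena.update.UpdateRequest;

public class FusekiInsert {
    public static void main(String[] args) {
        // One update request; several operations could be chained with ";".
        UpdateRequest request = UpdateFactory.create(
            "PREFIX ex: <http://example.org/> "
            + "INSERT DATA { ex:s ex:p \"a simple statement\" }");

        // Fuseki exposes SPARQL Update at http://host:port/<dataset>/update.
        UpdateProcessor processor = UpdateExecutionFactory.createRemote(
            request, "http://localhost:3030/ds/update");
        processor.execute();
    }
}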
I am trying to connect to my organisation's SQL database using Power Query to create some reports. I need to delete/edit some tables and join multiple tables to come up with the desired report output...
I don't want the changes or edits I make in Excel Power Query to be reflected in the live database, only in Excel.
The short answer is no: no button you press in the Power Query Editor interface modifies the source database. I must admit that I have not found any page in the Microsoft Docs on Power Query that states this clearly. The page What is Power Query? states that:
Power Query is a data transformation and data preparation engine. Power Query comes with a graphical interface for getting data from sources and a Power Query Editor for applying transformations.
Other pages contain similarly general and vague descriptions but let me reassure you that any data transformation you carry out by using the Power Query Editor interface will not modify your SQL database. All you see in Power Query is a view of the source database.
Seeing as you are connecting to a SQL database, it is likely that query folding is activated. This means that when you remove a column (or row), the SQL query used to extract the data from the database is updated accordingly. That query is written as a single SELECT statement that can contain multiple clauses like GROUP BY and WHERE. Transformations that add data (e.g. Add Custom Column, Fill Down) are not included in the query; they are carried out only within the Power Query engine. You can read more about this in the docs.
How to edit a database with Power Query when native SQL queries are supported
That being said, you can actually edit a database from within Power Query if the database supports the use of native SQL queries, if you have write permission for the database, and if you edit and run one of the two M functions that let you write native SQL queries. Here is an example using the Sql.Database function:
Sql.Database("servername", "dbname", [Query = "DROP TABLE tablename"])
And here is an example using the Value.NativeQuery function (these are the steps inside a query's let expression):
Source = Sql.Databases("servername"){[Name="dbname"]}[Data],
#"Native Query" = Value.NativeQuery(Source, "DROP TABLE tablename")
Unless you have changed the default Query Options, these functions should raise a warning message requiring you to permit running the query.
This prevents you from modifying the database without confirmation, so any database modification cannot happen just by accident.
I verified this using Excel Microsoft 365 (Version 2108) on Windows 10 64-bit connected to a local SQL Server 2019 (15.x) database.
I have a requirement to extract SQL queries from a Snowflake stored procedure. I have decided to extract the SQL queries via the Snowflake-JDBC API.
I have analyzed the Java documentation of the Snowflake-JDBC API but unfortunately could not find any methods to extract SQL queries from a stored procedure. I found a class named QueryExecDTO in the Snowflake-JDBC API, which has a getSqlText() method, but it is of no use here (I have to extract SQL from a stored procedure). I am also aware of the Snowflake JavaScript API's Statement object, which has a getSqlText() method to get the text of SQL queries, but it can only be used inside JavaScript, as it is part of the JavaScript API.
Is there any way to extract SQL from a stored procedure using the Snowflake-JDBC API?
You would need to run something like:
select get_ddl('procedure', '*proc_name*(*arg list*)');
This returns the text of the SP, which you would then need to parse to extract the SQL statements.
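A minimal sketch of running that through the Snowflake JDBC driver might look like the following (the account URL, credentials, and procedure signature are placeholders; the Snowflake JDBC driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExtractProcedureDdl {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a Snowflake account.
        String url = "jdbc:snowflake://myaccount.snowflakecomputing.com/?db=mydb&schema=public";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             // GET_DDL returns the full CREATE PROCEDURE statement as one string.
             ResultSet rs = stmt.executeQuery(
                 "select get_ddl('procedure', 'my_proc(varchar)')")) {
            if (rs.next()) {
                String ddl = rs.getString(1);
                // The SQL statements still have to be parsed out of this text.
                System.out.println(ddl);
            }
        }
    }
}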
If you just want to extract the SQL statements, that should be relatively straightforward; however, if you want to parse the statements to, for example, list the tables being used, then you are going to struggle.
Parsing SQL is incredibly complex (given how flexible the language is) which is illustrated by the fact that there are very few general SQL parsers available - and those that actually work are not cheap.
I am using Olingo 1.2 on top of Hibernate.
I have a request that returns 250 rows; each row links to another table in a one-to-many relationship.
I execute $expand to get all the data in the child table, but when I examine the queries executed against the database, it appears that 251 individual calls are being made: one for the master table returning 250 rows, and then one per row to return the child records.
Looking at the Olingo code, this lazy approach is by design.
I've tested $expand on the Microsoft OData processor and it uses a greedy approach in this case.
My Question is: How can I switch Olingo to use a greedy approach for $expand (i.e. push the join down into the database)?
The queries that you are seeing generated are a result of Hibernate, not Olingo. This is the default way Hibernate generates queries for the child table (one query per parent row, the classic N+1 problem). You need to look at the @Fetch(FetchMode.JOIN) annotation in Hibernate and apply it to your relation. Please take a look at this link for the explanation:
https://stackoverflow.com/a/11077041/3873392
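As a rough sketch of what that mapping might look like (the entity and field names here are made up for illustration):

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

import org.hibernate.annotations.Fetch;
import org.hibernate.annotations.FetchMode;

@Entity
public class Master {
    @Id
    private Long id;

    // FetchMode.JOIN asks Hibernate to load the children with a single
    // joined query instead of issuing one extra query per master row.
    @OneToMany(mappedBy = "master")
    @Fetch(FetchMode.JOIN)
    private List<Child> children;
}

@Entity
class Child {
    @Id
    private Long id;

    @ManyToOne
    private Master master;
}

Note that FetchMode.JOIN is honoured when Hibernate loads the entity itself; explicit HQL/JPQL queries may still need a join fetch clause to get the same effect.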
Can you please share any links/sample source code for generating a graph with Neo4j from Oracle database table data?
My use case is: Oracle schema table names as nodes and their columns as properties. I also need to generate the graph in a tree structure.
Make sure you commit the transaction after creating the nodes with tx.success() and tx.finish().
If you still don't see the nodes, please post your code and/or any exceptions.
Use JDBC to extract your Oracle DB data, then use the Neo4j Java API to build the corresponding nodes:
// Neo4j 2.x embedded API; obtain the database service once and reuse it.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase("path/to/db");
try (Transaction tx = db.beginTx()) {
    // Labels is a user-defined enum implementing org.neo4j.graphdb.Label.
    Node datanode = db.createNode(Labels.TABLENAME);
    datanode.setProperty("column name", "column value"); // do this for each column
    tx.success(); // mark the transaction successful so it commits on close
}
Also remember to batch your transactions. I tend to use around 1500 creates per transaction and it works fine for me, but you might have to play with it a little bit.
Just page through the table with something like SELECT * FROM table OFFSET X*1000 ROWS FETCH NEXT 1000 ROWS ONLY (Oracle 12c+ syntax; older Oracle versions need a ROWNUM-based query), with X being the number of times you've run the query before. Then keep those 1000 records stored in a collection of some kind so you can build your nodes from them. Repeat this until you've handled every record in your database.
Not sure what you mean by "and also need to generate graph in tree structure"; if you mean you'd like to convert foreign keys into relationships, remember to index the key and, instead of adding the FK as a property, create a relationship to the original node. You can find that node by doing an index lookup, or you could just create your own little in-memory index with a HashMap. But since you're already storing 1000 SQL records in memory, plus you are building the transaction, you need to be a bit careful with your memory depending on your JVM settings.
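Putting those pieces together, here is a rough sketch under the assumptions above (Oracle 12c+ pagination syntax, the Neo4j 2.x embedded API, and made-up table, column, and connection names):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class OracleToNeo4j {

    // One label per table, as in the snippet above.
    enum Labels implements Label { MYTABLE }

    private static final int PAGE_SIZE = 1000;

    public static void main(String[] args) throws Exception {
        // Neo4j 2.x embedded API; 3.x takes a java.io.File instead of a String.
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase("path/to/db");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//host:1521/service", "user", "password");
             Statement stmt = conn.createStatement()) {
            int page = 0;
            boolean more = true;
            while (more) {
                more = false;
                // Oracle 12c+ pagination; older versions need a ROWNUM-based query.
                try (ResultSet rs = stmt.executeQuery(
                         "SELECT * FROM mytable OFFSET " + (page * PAGE_SIZE)
                         + " ROWS FETCH NEXT " + PAGE_SIZE + " ROWS ONLY");
                     Transaction tx = db.beginTx()) {
                    ResultSetMetaData meta = rs.getMetaData();
                    while (rs.next()) {
                        more = true;
                        Node node = db.createNode(Labels.MYTABLE); // one node per row
                        for (int i = 1; i <= meta.getColumnCount(); i++) {
                            String value = rs.getString(i);
                            if (value != null) { // Neo4j properties cannot be null
                                node.setProperty(meta.getColumnName(i), value);
                            }
                        }
                    }
                    tx.success(); // commit one transaction per page
                }
                page++;
            }
        } finally {
            db.shutdown();
        }
    }
}

For the foreign keys, a second pass could look up the referenced node (via an index or a HashMap from primary key to Node) and call createRelationshipTo on it instead of storing the FK as a property.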
You need to code this ETL process yourself. Follow the below
Write your first Neo4j example by following this article.
Understand how to model with graphs.
There are multiple ways of talking to Neo4j using Java. Choose the one that suits your needs.
I am relatively new to OData services and I am trying to explore whether OData is feasible for my project.
In all the examples/demos that I have come across, every demo loads all the data into the repository and then applies the OData filters over that data.
Is there a way to avoid loading all the data from SQL (i.e. translate the OData filters into SQL)? Loading everything would obviously be highly inefficient for N requests coming in per second.
So for example if I had a movies service :
localhost:4502/OdataService/movies(55)
The above example is actually just filtering for movie id 55 from the "entire" set of movies. Is there a way to make this filter happen at the SQL level instead of bloating memory with all the movies first and then letting OData filter them?
Can anyone guide me in the right direction?
I found out after doing a small POC that Entity Framework takes care of building a dynamic query based on the request, so the filtering happens at the SQL level rather than in memory.