I have a requirement to extract SQL queries from a Snowflake stored procedure, and I have decided to do so via the Snowflake-JDBC API.
I have analyzed the Java documentation of the Snowflake-JDBC API but unfortunately could not find any method to extract SQL queries from a stored procedure. I found a class named QueryExecDTO in the Snowflake-JDBC API, which has a getSqlText() method, but it is of no use for my purpose (I have to extract SQL from a stored procedure). I am also aware of the Snowflake JavaScript API's Statement object, which has a getSqlText() method to get the text of SQL queries, but it can only be used inside JavaScript, as it is part of the JavaScript API.
Is there any way to extract SQL from a stored procedure using the Snowflake-JDBC API?
You would need to run something like:
select get_ddl('procedure', '<proc_name>(<arg list>)');
to get the text of the SP. You would then need to parse that text to extract the SQL statements.
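For example, a call for a hypothetical procedure my_db.my_schema.my_proc taking a single VARCHAR argument (the names here are assumptions for illustration) would look like:
-- get_ddl identifies a procedure by its argument data types,
-- because Snowflake allows overloading procedures by signature
select get_ddl('procedure', 'my_db.my_schema.my_proc(varchar)');
You can run this statement through JDBC like any other query and read the returned DDL text from the result set.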
If you just want to extract the SQL statements, that should be relatively straightforward; however, if you want to parse the statements to, for example, list the tables being used, then you are going to struggle.
Parsing SQL is incredibly complex (given how flexible the language is), which is illustrated by the fact that there are very few general SQL parsers available, and those that actually work are not cheap.
I am trying to connect to my organisation's SQL database using Power Query to create some reports. I need to delete/edit some tables and join multiple tables to come up with the desired report output...
I don't want any change or edit I make in Excel Power Query to be reflected in the live database, only in Excel.
The short answer is no: any button you press in the Power Query Editor interface does not modify the source database. I must admit that I have not found any page in the Microsoft Docs on Power Query that states this clearly. The page What is Power Query? states that:
Power Query is a data transformation and data preparation engine. Power Query comes with a graphical interface for getting data from sources and a Power Query Editor for applying transformations.
Other pages contain similarly general and vague descriptions but let me reassure you that any data transformation you carry out by using the Power Query Editor interface will not modify your SQL database. All you see in Power Query is a view of the source database.
Seeing as you are connecting to a SQL database, it is likely that query folding is active. This means that when you remove a column (or row), the SQL query used to extract the data from the database is updated accordingly. That query is written as a single SELECT statement that can contain multiple clauses like GROUP BY and WHERE. Transformations that add data (e.g. Add Custom Column, Fill Down) are not included in the query; they are carried out only within the Power Query engine. You can read more about this in the docs.
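For illustration only (the table and column names are assumptions), removing a column and then filtering rows might fold into a single statement like:
-- the kind of query Power Query sends to the server after folding
SELECT CustomerName, City
FROM dbo.Customers
WHERE City = 'London';
All of this runs as a read-only SELECT on the server; nothing is written back.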
How to edit a database with Power Query when native SQL queries are supported
That being said, you can actually edit a database from within Power Query if the database supports the use of native SQL queries, if you have write permission for the database, and if you edit and run one of the two M functions that let you write native SQL queries. Here is an example using the Sql.Database function:
Sql.Database("servername", "dbname", [Query = "DROP TABLE tablename"])
And here is an example using the Value.NativeQuery function:
let
    Source = Sql.Databases("servername"){[Name="dbname"]}[Data],
    #"Native Query" = Value.NativeQuery(Source, "DROP TABLE tablename")
in
    #"Native Query"
Unless you have changed the default Query Options, these functions should raise a warning message requiring you to permit running the query.
This prevents you from modifying the database without confirmation, so any database modification cannot happen just by accident.
I verified this using Excel Microsoft 365 (Version 2108) on Windows 10 64-bit connected to a local SQL Server 2019 (15.x) database.
I'm new to neo4j and graph databases in general.
Given a complex Cypher query that I don't want to store inside the application (or several applications) but would rather keep centralized, what options are left to me?
In a SQL database I would use a stored function. Are UDFs the way to go in neo4j?
From the docs it seems to me that they're more a way to extend the database functionality by being able to access the graph internals, but I've just started studying them.
Take a look at the custom functions and procedures available in the apoc library.
https://neo4j.com/docs/labs/apoc/current/cypher-execution/cypher-based-procedures-functions/
CALL apoc.custom.asProcedure('answer','RETURN 42 as answer')
CALL custom.answer() YIELD row RETURN row.answer
Suppose I have an application which fetches a custom XML packet from the server which represents a dataset. Then, suppose I wish to execute a SQL statement on that data via a dataset. What can I use to do this? I don't need to know the code necessarily, but just what to use to make this possible and a general explanation of how.
For example, I may fetch a list of customers in XML format from the server. Then, I can use any third-party parser to dump that XML data into some client dataset. Then, execute a query on that dataset, for example select * from customers where ZipCode = '12345' without fetching this data from the server again.
XML is not the only possible source; that's just an example. I might want to do the same with some application settings loaded from an INI file. Either way, the concept is that the original source of the data is unknown.
Whether the dataset stores its temporary data in memory or on disk doesn't matter, but it would be excellent if it could keep it on disk.
TXQuery (http://code.google.com/p/txquery/) is a component that provides a local SQL engine for executing SQL queries against one or more TDataSets. The only issue I have had with it is updating data via a TDBGrid of a query joining multiple tables (TDataSets), specifically determining which table is being updated.
AnyDAC v6 (now FireDAC) also has a local SQL engine. http://www.da-soft.com/anydac/docu/frames.html?frmname=topic&frmfile=Local_SQL.html
Edit: For the example SQL in your question, because it only involves a single table, you can do this with just a Filter on the dataset. For example:
ADataSet.Filtered := False;
ADataSet.Filter := 'ZipCode=' + QuotedStr('12345');
ADataSet.Filtered := True;
Such a feature can be achieved using a local database. You just insert the TDataSet result into a local in-memory (or file-based) stand-alone database; then you can use regular SQL queries on it, including JOINs.
You can for instance use SQLite3, or the free edition of NexusDB.
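As a minimal sketch (the table and column names are assumptions), the application would copy the fetched rows into a local table once, then query it locally as often as needed:
-- create a local table matching the dataset's structure
CREATE TABLE customers (CustomerId INTEGER, Name TEXT, ZipCode TEXT);
-- one INSERT per TDataSet row, executed by the application
INSERT INTO customers VALUES (1, 'Some Customer', '12345');
-- from here on, ordinary SQL works locally, including JOINs
SELECT * FROM customers WHERE ZipCode = '12345';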
NexusDB embedded has the benefit of being a native Delphi database, so it sticks to the DB.pas TDataSet paradigm.
Another option is to use the so-called Virtual Table mechanism of SQLite3, which allows you to expose any data (even from TDataSet, XML, JSON or in-memory objects) to the SQLite3 engine as if it were a regular table. Then you can run SQL statements on those "virtual" tables, including JOINs. With this approach, you do not need to INSERT the data into regular tables; the data remain in their original form. Of course, you will miss some performance features like indexes, which should be handled on the virtual table provider side. We use this feature as the database core of our mORMot ORM/SOA framework, and it is pretty powerful.
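A hedged sketch of what this looks like on the SQL side, assuming the host application has registered a virtual table module named dataset (the module name and its argument are assumptions; SQLite3 itself ships no such module):
-- expose an external TDataSet to SQLite3 through a registered virtual table module
CREATE VIRTUAL TABLE customers USING dataset('CustomerDataSet');
-- query it like any ordinary table; the data stays in its original form
SELECT * FROM customers WHERE ZipCode = '12345';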
The general process that you want to perform is complicated by the difference in data representation. SQL data is stored in tables made up of distinguishable records. XML is a structured representation of data, but in tree form rather than table/row form.
Each of these data forms may be qualified by a schema that provides a context for the data.
You have two general paths that you can follow:
Take the XML and, based on the schema, insert it into a set of interlinked tables, then perform the SQL query. If you have the schema, you can use code generators to make a parser, and then, based on the parse tree, you can insert into a local database with tables constructed on the fly. You can set up MySQL fairly easily from https://dev.mysql.com/doc/refman/5.7/en/installing.html and then, in your version of Delphi, make a connection to the database, fill it first, then query it. This would satisfy your desire to have the data stored on disk; unless you purge the tables when done, the data remain available in the local machine's database.
This seems like more work than:
Use XPath or XQuery and work directly on the XML. For this, a package like Saxon in your favorite environment, or expat in Python, would work nicely.
Let me know if either of these paths seems as if it may be fruitful.
I'm writing some stored procedures to do CRUD operations against some tables in a SQL Server database, which will be used in a FormView on an ASP.NET 2.0 page. I've already written the hardest one, which is the insert SP. Now I'm going to work on the select, update and delete SP's. What I'd like to know is, do the parameters to the SP's that are used by the SqlDataSource have to be in exactly the same order? For example, the insert operation requires about a dozen parameters, all of which are stored into the 3 tables that the insert handles. However, to retrieve the same data, all I need is the primary keys, which are just 2 parameters. Do I need to provide all of the parameters, in the same order, as I've specified for the insertion stored procedure?
No. Each command has its own parameter collection, e.g. SqlDataSource.SelectParameters, SqlDataSource.UpdateParameters and SqlDataSource.DeleteParameters, so the parameters for one command do not have to match those of another in number or order.
The SSMS Tools Pack has a CRUD generator that might be useful too.
Let's say I have 'myStoredProcedure' that takes in an Id as a parameter, and returns a table of information.
Is it possible to write a SQL statement similar to this?
SELECT
    MyColumn
FROM
    Table-ify('myStoredProcedure ' + @MyId) AS [MyTable]
I get the feeling that it's not, but it would be very beneficial in a scenario I have with legacy code & linked server tables.
Thanks!
You can use a table-valued function in this way.
Here are a few tricks...
No, it is not - at least not in any official or documented way - unless you change your stored procedure to a TVF.
There are, however, ways (read: hacks) to do it. All of them basically involve a linked server and OPENQUERY - for example, see here. Do note that this is quite fragile, as you need to hardcode the name of the server, so it can be problematic if you have multiple SQL Server instances with different names.
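A minimal sketch of the loopback trick (the linked server name LOCALSRV and the database and procedure names are assumptions):
-- OPENQUERY needs a linked server; here it points back at the same instance
SELECT MyColumn
FROM OPENQUERY([LOCALSRV], 'EXEC mydb.dbo.myStoredProcedure 42') AS MyTable;
This is exactly why it is fragile: the server name inside OPENQUERY cannot be parameterized.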
Here is a pretty good summary of the ways of sharing data between stored procedures: http://www.sommarskog.se/share_data.html.
Basically it depends on what you want to do. The most common ways are creating the temporary table prior to calling the stored procedure and having the procedure fill it, or having one permanent table that the stored procedure dumps the data into, which also contains the process id.
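A minimal sketch of the temp-table pattern (the procedure, parameter and column names are assumptions):
-- create the temp table first, with columns matching the procedure's result set
CREATE TABLE #Results (MyColumn INT);
-- INSERT ... EXEC captures the procedure's output into the table
INSERT INTO #Results
EXEC dbo.myStoredProcedure @MyId = 42;
SELECT MyColumn FROM #Results;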
Table Valued functions have been mentioned, but there are a number of restrictions when you create a function as opposed to a stored procedure, so they may or may not be right for you. The link provides a good guide to what is available.
SQL Server 2005 and SQL Server 2008 change the options a bit. SQL Server 2005+ makes working with XML much easier, so XML can be passed as an output variable and fairly easily "shredded" into a table using the XML methods nodes() and value(). I believe SQL Server 2008 allows table variables to be passed into stored procedures (although read-only). Since you cited SQL 2000, the 2005+ enhancements don't apply to you, but I mention them for completeness.
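For completeness, a hedged sketch of the 2005+ shredding approach (the XML shape and names are assumptions):
-- suppose a procedure returned this XML through an output variable
DECLARE @x XML = N'<rows><row id="1"/><row id="2"/></rows>';
-- nodes() turns each <row> element into a row; value() extracts the attribute
SELECT r.n.value('@id', 'int') AS Id
FROM @x.nodes('/rows/row') AS r(n);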
Most likely you'll go with a table-valued function, or creating the temporary table prior to calling the stored procedure and then having it populate that.
While working on a project, I used the following to insert the results of xp_readerrorlog (which, afaik, returns a table) into a temporary table created ahead of time.
INSERT INTO [tempdb].[dbo].[ErrorLogsTMP]
EXEC master.dbo.xp_readerrorlog
From the temporary table, select the columns you want.