How do you declare a variable in Denodo?

I'm having trouble declaring a variable in a query in Denodo.
I've tried writing it using SQL syntax, but I get an error on "declare".
declare #var1 varchar(6) = 'table1'
select column_name, column_description
from view('pb', '#var1')
order by column_name
I expected this to run using the variable "var1", but instead I get error code 1100 with the message: "Syntax Error: Exception parsing query near declare".

You can use variables in Denodo with SETVAR('name', value) and GETVAR('name', 'type', default). However, this only works for column names or for conditions in WHERE clauses; I have tested it and it does not work with view names.
Example:
select SETVAR('columnname','foo');
select GETVAR('columnname', 'text', 'asdf') from foobar
This returns the foo column of the foobar table.
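A value in a WHERE condition works the same way; here is a minimal sketch of that case, with foobar and somecol as placeholder names:
-- hypothetical view and column; GETVAR falls back to the default 'asdf' when the variable is not set
select *
from foobar
where somecol = GETVAR('filterval', 'text', 'asdf')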
This does not work:
select SETVAR('tablename','foobar');
select * from GETVAR('tablename', 'text', 'asdf')
I don't think using a variable as a table name is possible in Denodo. You might need to build the query in a script instead.

Related

bigquery sql table function with string interpolation

I am trying to write a BigQuery SQL function / stored procedure / table function that accepts as input:
an INT64 filter for the WHERE clause,
a table name (STRING type) as a fully qualified name, e.g. project_id.dataset_name.table_name
The idea is to dynamically figure out the table name and provide a filter to slice the data to return as a table.
However, if I try to write a table function (TVF) and use SET to start dynamically writing the SQL to execute, then I see this error:
Syntax error: Expected "(" or keyword SELECT or keyword WITH but got keyword SET at [4:5]
If I try to write a stored procedure, then it expects BEGIN and END and throws this error:
Syntax error: Expected keyword BEGIN or keyword LANGUAGE but got keyword AS at [3:1]
If I try to add those, then I get various validation errors, basically because I need to remove the WITH that introduces the CTEs (Common Table Expressions), the semicolons, etc.
But what I am really trying to do is use a table function:
to combine some CTEs dynamically with those inputs above (e.g. the input table name),
to PIVOT that data,
to then eventually return a table as a result of a SELECT.
A bit like producing a View that could be used in other SQL queries, but without creating the view (because the slice of data can be decided dynamically with the other INT64 input filter).
Once I dynamically build the SQL string I would like to EXECUTE IMMEDIATE that SQL and provide a SELECT as a final step of the table function to return the "dynamic table".
The thing is that:
I don't know before runtime the name of this table.
But I have all these tables with the same structure, so the SQL should apply to all of them.
Is this possible at all?
This is the not-yet-working SQL I am trying to get to work. Note what I am trying to inject with %s and num_days:
CREATE OR REPLACE TABLE FUNCTION `my_dataset.my_table_func_name`(num_days INT64, fqn_org_table STRING)
AS (
-- this SET breaks !!!
SET f_query = """
WITH report_cst_t AS (
SELECT
DATE(start) as day,
entity_id,
conn_sub_type,
FROM `%s` AS oa
CROSS JOIN UNNEST(oa.connection_sub_type) AS conn_sub_type
WHERE
DATE(start) > DATE_SUB(CURRENT_DATE(), INTERVAL num_days DAY)
AND oa.entity_id IN ('my-very-long-id')
ORDER BY 1, 2 ASC
),
cst AS (
SELECT * FROM
(SELECT day, entity_id, report_cst_t FROM report_cst_t)
PIVOT (COUNT(*) AS connection_sub_type FOR report_cst_t.conn_sub_type IN ('cat1', 'cat2','cat3' ))
)
""";
-- here I would like to EXECUTE IMMEDIATE !!!
SELECT
cst.day,
cst.entity_id,
cst.connection_sub_type_cat1 AS cst_cat1,
cst.connection_sub_type_cat2 AS cst_cat2,
cst.connection_sub_type_cat3 AS cst_cat3,
FROM cst
ORDER BY 1, 2 ASC
);
This might not be satisfying, but since procedural language statements and DDL are currently not allowed inside table functions, one possible workaround is simply to use a PROCEDURE, as below.
CREATE OR REPLACE PROCEDURE my_dataset.temp_procedure(filter_value INT64, table_name STRING)
BEGIN
EXECUTE IMMEDIATE FORMAT(CONCAT(
"SELECT year, COUNT(1) as record_count, ",
"FROM %s ",
"WHERE year = %d ",
"GROUP BY year ",
"; "
), table_name, filter_value);
END;
CALL my_dataset.temp_procedure(2002, 'bigquery-public-data.usa_names.usa_1910_current');
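Applying the same workaround to the pivot query from the question could look roughly like the sketch below. The procedure name is made up, the category list is hard-coded as in the question, and the COUNT(*) pivot is carried over as-is, so treat it as a starting point rather than tested code.
CREATE OR REPLACE PROCEDURE my_dataset.temp_pivot_procedure(num_days INT64, fqn_org_table STRING)
BEGIN
  -- Build the whole pivot query as one string, injecting the table name and the day filter;
  -- the result of the SELECT run by EXECUTE IMMEDIATE is returned to the caller.
  EXECUTE IMMEDIATE FORMAT("""
    WITH report_cst_t AS (
      SELECT
        DATE(start) AS day,
        entity_id,
        conn_sub_type
      FROM `%s` AS oa
      CROSS JOIN UNNEST(oa.connection_sub_type) AS conn_sub_type
      WHERE DATE(start) > DATE_SUB(CURRENT_DATE(), INTERVAL %d DAY)
    )
    SELECT *
    FROM report_cst_t
    PIVOT (COUNT(*) AS connection_sub_type FOR conn_sub_type IN ('cat1', 'cat2', 'cat3'))
    ORDER BY 1, 2
  """, fqn_org_table, num_days);
END;
CALL my_dataset.temp_pivot_procedure(30, 'project_id.dataset_name.table_name');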

How to declare a variable in Denodo?

There is a question related to this topic, but it is not the same as what I am going to ask. I need to do something similar to what we do in SQL, but this time in Denodo.
This is in SQL:
DECLARE @curr varchar(10);
SET @curr = 'USD';
SELECT
Country,
Currency
FROM
Currencies
WHERE
Currency = @curr;
I have tried something like this in Denodo
SELECT
Country,
Currency
FROM
Currencies
WHERE
Currency = GETVAR('curr', 'VARCHAR', 'USD');
But it is not showing results. Does anyone know how we can do something similar to an SQL variable declaration in Denodo?
The error is caused by the wrong data type.
You can use 'text' instead of 'varchar'.
This change will fix your query :)
Here is a sample usage
select
*
from
storm_storm_t001l
where
werks = GETVAR('werks', 'text', '1331')
;
Use the function GETVAR('variable_name', 'type', default_value) in your query.
Then declare it in the CONTEXT clause: CONTEXT('VAR variable_name' = literal)
For example
SELECT *
FROM (
SELECT 1 AS field1 FROM DUAL()
UNION ALL
SELECT 2 AS field1 FROM DUAL()
UNION ALL
SELECT 3 AS field1 FROM DUAL()
)
WHERE field1 > GETVAR('a', 'int', 3)
ORDER BY field1
CONTEXT('VAR a' = 1)
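Applied to the Currencies view from the original question, the same pattern would look something like this (assuming a string literal is accepted in the CONTEXT clause in the same way as the numeric literal above):
SELECT
Country,
Currency
FROM
Currencies
WHERE
Currency = GETVAR('curr', 'text', 'USD')
CONTEXT('VAR curr' = 'EUR')
Run without the CONTEXT clause, the query falls back to the default value 'USD'.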

Rails: Omit order by clause when using ActiveRecord .first query?

I'm having a problem with a .first query in Rails 4 ActiveRecord. The new behavior in Rails 4 is to add an ORDER BY on the id field so that all database systems return rows in the same order.
So this...
Foo.where(bar: baz).first
Will give the query...
select foos.* from foos order by foos.id asc limit 1
The problem I am having is that my select contains two SUM fields. With the ORDER BY id thrown into the query automatically, I'm getting an error that the id field must appear in the GROUP BY clause. The error is right: there is no need for the id field if I want the output to be the sums of these two fields.
Here is an example that is not working...
baz = Foo.find(77).fooviews.select("sum(number_of_foos) as total_number_of_foos, sum(number_of_bars) as total_number_of_bars").reorder('').first
Here is the error...
ActiveRecord::StatementInvalid: PG::GroupingError: ERROR: column "foos.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: ...bars FROM "fooviews" ORDER BY "...
Since the select is an aggregate expression, there is no need for the ORDER BY id, but AR is adding it automatically.
I found that I can add a reorder('') onto the end before the .first and that removes the ORDER BY id, but is that the right way to fix this?
Thank you
[UPDATE] What I neglected to mention is that I'm converting a large Rails 3 project to Rails 4, so the output from Rails 3 is an AR object. If possible, I would like the solution to keep that format so that there is less code to change in the conversion.
You will want to use take:
The take method retrieves a record without any implicit ordering.
For example:
baz = Foo.find(77).fooviews.select("sum(number_of_foos) as total_number_of_foos, sum(number_of_bars) as total_number_of_bars").take
The commit message here indicates that this was a replacement for the old first behavior.
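For reference, here is a rough sketch of the SQL each call produces (the association scoping from find(77) is left out, and the exact quoting depends on the adapter):
-- with .first: Rails 4 appends an implicit ordering, which PostgreSQL rejects for an aggregate-only select
SELECT sum(number_of_foos) AS total_number_of_foos, sum(number_of_bars) AS total_number_of_bars FROM "fooviews" ORDER BY "fooviews"."id" ASC LIMIT 1;
-- with .take: no implicit ORDER BY, so the aggregates run without needing a GROUP BY
SELECT sum(number_of_foos) AS total_number_of_foos, sum(number_of_bars) AS total_number_of_bars FROM "fooviews" LIMIT 1;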

.group not returning all columns

I have a .group query that is not returning all the columns in the select and I was wondering if someone could validate my syntax.
Here is a query with a .group and the result from my console:
Expense.select('account_number, SUM(credit_amount)').group(:account_number).first
Expense Load (548.8ms) EXEC sp_executesql N'SELECT TOP (1) account_number, SUM(credit_amount) FROM [expenses] GROUP BY account_number'
(36.9ms) SELECT table_name FROM information_schema.views
Even though I select two columns, only the first one is returned. I'm wondering if I may be dealing with a db adapter problem.
Try giving your sum an alias:
expense = Expense.select('account_number, SUM(credit_amount) AS credit_amount').group(:account_number).first
puts expense.credit_amount
ActiveRecord doesn't create a default alias for aggregation operations such as SUM, COUNT, etc.; you have to do it explicitly to be able to access the results, as shown above.
The SUM(credit_amount) column from the SQL has no alias and will not have a column name by default. If you change it to have an alias, SUM(credit_amount) AS 'A' for example, and select by the alias name, it should pick it up.
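In terms of the SQL sent to the server, the aliased version would look roughly like this, which gives the adapter a column name to map the sum onto:
SELECT TOP (1) account_number, SUM(credit_amount) AS credit_amount FROM [expenses] GROUP BY account_number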

union between requests with replacement variables in sqlplus

I have 14 similar fields and I search for the string 'A' in each of them. Afterwards, I would like to order by the "position" field.
-- some SET commands to suppress a lot of useless output
def col='col01'
select '&col' "Fieldname",
&col "value",
position
from oneTable
where &col like '%A%'
/
-- then for the second field, I only have to type two lines
def col='col02'
/
...
def col='col14'
/
This writes all the fields which contain 'A'. The problem is that those fields are not ordered by position.
If I use UNION between the queries, I cannot take advantage of the substitution variables (&col), and I would have to write a shell script in Unix to do the replacement in ksh. The problem, of course, is that the database connection would have to be hard-coded in this script (connecting is not easy).
If I use a REFCURSOR with OPEN, I cannot group the result sets together. I have only one query and cannot make a UNION of them (print refcursor1 union refcursor2 and print refcursor1+refcursor2 raise an exception, and select * from refcursor1 union select * from refcursor2 does not work either).
How can I concatenate the results into one big "REFCURSOR"? Or use a union between two distinct runs ('/') of my query, something like holding the query while typing a new definition of the variables?
Thank you for any advice.
Does this answer your question?
CREATE TABLE #containingAValueTable
(
FieldName VARCHAR(10),
FieldValue VARCHAR(1000),
position int
)
def col='col01'
INSERT INTO #containingAValueTable
(
FieldName , FieldValue, position
)
SELECT '&col' "Fieldname",
&col "value",
position
FROM yourTable
WHERE &col LIKE '%A%'
/
-- then for the second field, I only have to type two lines
def col='col02'
INSERT INTO...
/
def col='col14'
/
select * from #containingAValueTable order by position
DROP TABLE #containingAValueTable
But I'm not totally sure about your use of the 'substitution variable' called "col" (and I only have SQL Server to test my query, so I used explicit field names).
Edit: sorry for the '#' character; we use it so often in SQL Server for temporary tables that I didn't even know it was SQL Server specific (moreover, I think it's mandatory in SQL Server for creating a temporary table). Anyway, I'm happy I could be useful to you.
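Since the original question targets Oracle through SQL*Plus, where the # temporary-table syntax does not exist, roughly the same idea could be sketched with a global temporary table. This is untested, the table name is a placeholder, and the def/'/' repetition is kept from the question:
-- create once per schema; rows are kept for the duration of the session
CREATE GLOBAL TEMPORARY TABLE containing_a_value_tmp
(
fieldname VARCHAR2(10),
fieldvalue VARCHAR2(1000),
position NUMBER
) ON COMMIT PRESERVE ROWS;
def col='col01'
INSERT INTO containing_a_value_tmp (fieldname, fieldvalue, position)
SELECT '&col', &col, position
FROM oneTable
WHERE &col LIKE '%A%'
/
-- then for each remaining field, redefine the variable and re-run the buffer
def col='col02'
/
def col='col14'
/
SELECT * FROM containing_a_value_tmp ORDER BY position;
-- truncate (and drop, once no session is using it) when finished
TRUNCATE TABLE containing_a_value_tmp;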
