Is it possible to pass null values to parameter queries? For example
Sql = "insert into TableX values (?,?)".
Params = [{sql_integer, [Val1]}, {sql_float, [Val2]}].
% Val2 may be a float, or it may be the atom, undefined
odbc:param_query(OdbcRef, Sql, Params).
Now, of course odbc:param_query/3 is going to complain if Val2 is undefined when it tries to match it against sql_float, but my question is: is it possible to use a parameterized query, such as:
Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".
with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there.
Concatenating SQL together for each query, checking for null values, seems complex and the wrong way to do it.
Having a parameterized query for each permutation is simply not an option.
Any thoughts or ideas would be fantastic! Thank you!
You can use the atom null to denote a null value. For instance:
Sql = "insert into TableX values (?,?)".
Params = [{sql_integer, [Val1]}, {sql_float, [null]}].
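If some of your values arrive as undefined, a small wrapper can translate them before the call. A minimal Erlang sketch (the helper and function names are illustrative, not part of the odbc API):
%% to_param/2 maps Erlang's 'undefined' to the ODBC 'null' atom,
%% so nullable fields can be passed straight through.
to_param(Type, undefined) -> {Type, [null]};
to_param(Type, Value) -> {Type, [Value]}.

insert_row(OdbcRef, Val1, Val2) ->
    Sql = "insert into TableX values (?,?)",
    Params = [to_param(sql_integer, Val1),
              to_param(sql_float, Val2)],
    odbc:param_query(OdbcRef, Sql, Params).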
I'm having trouble defining both start and end dates as a query parameters. When the data gets pulled, it needs to return as a range of dates based on the query parameters. The GET URL would look like http://localhost:8081/test?FileType=Sales&StartDate=2022-10-01&EndDate=2022-10-26. This should return a date range of data from 10/1/2022-10/26/2022.
In my query, my where clause is set to:
where dp.Nid = 405 and fs.DDate=:DDate
**dp and fs are used in my joins, and 405 is an ID that I'll need to uniquely identify a product.
My input Parameters:
{ DDate : attributes.queryParams.StartDate, DDate : attributes.queryParams.EndDate }
What do I need to set to make a range of dates? Do I need to set StartDate to > and EndDate to <? Also, is it possible to define query parameters when using a stored procedure instead of the Select database method in Anypoint Studio?
Operations in Mule 4 (i.e. the boxes inside a flow) can have several inputs (payload, variables, attributes) and one output, but they are expected to be independent of each other. The Database query operation doesn't care if its inputs come from the query params or from somewhere else. You need to map inputs explicitly to parameters in the query.
Once you have the arguments you need to use them in the SQL query. Usually that means adding a greater-than and a less-than comparison, to ensure that the value is in range, or the same including equals, if the business logic requires it.
Depending on the data types and the SQL dialect you may need to convert the inputs to a date format that is compatible with the database type of the column. The inputs here are strings, because that's what query params are always parsed as. The column type is something that you will need to understand and decide how to transform to, in DataWeave or in the SQL query.
As an example:
<db:select config-ref="dbConfig">
<db:sql>SELECT ... WHERE dp.Nid = 405 AND fs.DDate >= :StartDate AND fs.DDate <= :EndDate</db:sql>
<db:input-parameters>
#[{
StartDate : attributes.queryParams.StartDate,
EndDate : attributes.queryParams.EndDate
}]
</db:input-parameters>
</db:select>
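If the DDate column is a date type, you may also need to coerce the query params first, since they always arrive as strings. A hedged DataWeave sketch (the yyyy-MM-dd format is assumed from the URL in the question, not confirmed by it):
<db:input-parameters>
#[{
    StartDate: attributes.queryParams.StartDate as Date {format: "yyyy-MM-dd"},
    EndDate: attributes.queryParams.EndDate as Date {format: "yyyy-MM-dd"}
}]
</db:input-parameters>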
I have a JSON field stored in the DB as text.
I want to look for records that contain Harvard as the value for these keys:
json_data['infoOnProgram']['universityName']
How should I construct my where clause to search on that JSON field?
Student.where(json_data...
Thank you!
If there are not very many records you can do this, even though it is very expensive:
harvard_students = Student.all.map { |stud|
  hash = JSON.parse(stud.json_data)
  hash.dig('infoOnProgram', 'universityName') == 'Harvard' ? stud : nil
}.compact
If you have lots and lots of records to iterate through, you can limit your list with a LIKE query:
Simple:
Student.where("json_data LIKE '%Harvard%'")
Slightly better:
Student.where("json_data LIKE '%infoOnProgram%universityName%Harvard%'")
The problem is that without knowing how the data is structured it is hard to create a specific pattern. You can use the where clause to limit the candidates and the iteration to confirm precisely that these records are what you are looking for, as in the sketch below.
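For example, combining both steps (a sketch, assuming the Student model with a json_data text column from the question):
# Coarse LIKE pre-filter in SQL, then an exact check in Ruby on the reduced set.
candidates = Student.where("json_data LIKE ?", '%Harvard%')
harvard_students = candidates.select do |stud|
  JSON.parse(stud.json_data).dig('infoOnProgram', 'universityName') == 'Harvard'
end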
If you're sure you really do have valid JSON in your text column then you could cast it to jsonb (or json) and use the usual JSON operators and functions that PostgreSQL provides:
Student.where(
"json_data::jsonb -> 'infoOnProgram' ->> 'universityName' = ?",
'Harvard'
)
Student.where(
'json_data::jsonb -> :info ->> :name = :uni',
info: 'infoOnProgram',
name: 'universityName',
uni: 'Harvard'
)
Student.where(
'cast(json_data as jsonb) #>> array[:path] = :uni',
path: %w[infoOnProgram universityName],
uni: 'Harvard'
)
This is going to be an expensive query though as you won't be able to use any indexes. You really should start the process of changing the column's type to jsonb, that would validate the JSON, avoid having to type cast in queries, allow indexing, ...
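A hedged sketch of that migration, assuming PostgreSQL and that every existing row holds valid JSON (the table and class names are illustrative):
class ChangeStudentsJsonDataToJsonb < ActiveRecord::Migration[6.1]
  def up
    # The USING cast converts the existing text in place; it will fail
    # loudly on any row that is not valid JSON.
    change_column :students, :json_data, :jsonb, using: 'json_data::jsonb'
  end

  def down
    change_column :students, :json_data, :text
  end
end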
I have an array of strings:
a = ['*@foo.com', '*@bar.com', '*@baz.com']
I would like to query my model to get all the records where the email doesn't match any of the above domains.
I could do:
Model.where.not(email: a)
if the list were a list of plain strings, but the list is more like a set of patterns.
It depends on your database adapter. You will probably be able to use raw SQL to write this type of query. For example, in Postgres you could do:
Model.where("email NOT SIMILAR TO '%@foo.com'")
I'm not saying that's exactly how you should be doing it, but it's worth looking up your database's query language and seeing if anything matches your needs.
In your example you would have to join your matchers together as a single string and interpolate it into the query:
a = ['%@foo.com', '%@bar.com', '%@baz.com']
Model.where("email NOT SIMILAR TO ?", a.join("|"))
Use this code (note that LIKE has no | alternation, so each pattern needs its own condition):
a = ['%@foo.com', '%@bar.com', '%@baz.com']
Model.where.not((["email LIKE ?"] * a.size).join(" OR "), *a)
Replace * with % in the array.
I'm trying to use CONTAINS() in a search procedure. The full-text indexes are created and working. The issue arises because you cannot use CONTAINS() on a NULL variable or parameter; it throws an error.
This takes 9 sec to run (passing in a non-null param):
--Solution I saw on another post
IF @FirstName is null OR @FirstName = '' SET @FirstName = '""'
...
Select * from [MyTable] m
Where
(@FirstName = '""' OR CONTAINS(m.[fname], @FirstName))
This runs instantly (passing in a non-null param):
IF @FirstName is null OR @FirstName = '' SET @FirstName = '""'
...
Select * from [MyTable] m
Where
CONTAINS(m.[fname], @FirstName)
Just adding that extra OR in front of the CONTAINS completely changed the query plan. I have also tried using a CASE expression instead of the OR, to no avail; I still get the slow query.
Has anyone solved this problem of null parameters in full-text searching, or experienced my issue? Any thoughts would help, thanks.
I'm using SQL Server 2012
You are checking the value of a bind variable in SQL. Even worse, you do it in an OR together with an access predicate. I am not an expert on SQL Server, but this is generally bad practice, and such predicates lead to full table scans.
If you really need to select all rows from the table when @FirstName is null, then check it outside of the SQL query:
IF @FirstName is null
<query-without-CONTAINS>
ELSE
<query-with-CONTAINS>
I believe that the majority of the time @FirstName is not null, so this way you will access the table using your full-text index most of the time. Getting all the rows from the table is a lost cause anyway.
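A minimal T-SQL sketch of that branching, using the table and column names from the question:
IF @FirstName IS NULL OR @FirstName = ''
    -- No usable search term: fall back to a plain select.
    SELECT * FROM [MyTable];
ELSE
    -- Keep CONTAINS as the only predicate so the optimizer can use the full-text index.
    SELECT * FROM [MyTable] m
    WHERE CONTAINS(m.[fname], @FirstName);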
From a logical standpoint, the first query takes longer to execute because it has to evaluate two conditions:
@FirstName = '""'
and, in case the first condition fails (which should be the majority of the time),
CONTAINS(m.[fname], @FirstName)
My guess is that in your table you don't have any null or empty FirstName; that's why the results are the same. Otherwise, you would have a few "" in the result set as FirstName.
Maybe you should try reversing the order to see if it makes any difference:
WHERE (CONTAINS(m.[fname], @FirstName) OR @FirstName = '""')
This is the first time I've seen this issue. I'm building up an SQL array to run through sanitize_sql_array and Rails is adding extra, unnecessary single quotes in the return value. So instead of returning:
SELECT DISTINCT data -> 'Foo' from products
it returns:
SELECT DISTINCT data -> ''Foo'' from products
which of course Postgres doesn't like.
Here is the code:
sql_array = ["SELECT DISTINCT %s from products", "data -> 'Foo'"]
sql_array = sanitize_sql_array(sql_array)
connection.select_values(sql_array)
Note the same thing happens when I use the shorter and more usual:
sql_array = ["SELECT DISTINCT %s from products", "data -> 'Foo'"]
connection.select_values(send(:sanitize_sql_array, sql_array))
Ever seen this before? Does it have something to do with using HStore? I definitely need that string sanitized since the string Foo is actually coming from a user-entered variable.
Thanks!
You're giving sanitize_sql_array a string that contains an hstore expression and expecting it to understand that the string contains some hstore stuff; that's asking far too much. sanitize_sql_array only knows about simple things like strings and numbers; it doesn't know how to parse PostgreSQL's SQL extensions or even standard SQL. How would you expect it to tell the difference between, say, a string that happens to contain '11 * 23' and a string that is supposed to represent the arithmetical expression 11 * 23?
You should split your data -> 'Foo' into two pieces so that sanitize_sql_array only sees the string part when it is sanitizing things:
sql_array = [ 'select distinct data -> ? from products', 'Foo' ]
sql = sanitize_sql_array(sql_array)
That will give you the SQL you're looking for:
select distinct data -> 'Foo' from products
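Putting it together with the user-entered value (the params key here is hypothetical):
key = params[:field] # the user-entered string, e.g. 'Foo'
sql = sanitize_sql_array(['SELECT DISTINCT data -> ? from products', key])
connection.select_values(sql)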