I'm building a project where the front end is React and the back end is Ruby on Rails with a Postgres DB. I want to export the response from the following query using PostgreSQL's built-in COPY command and send it to the front end.
query = <<-SQL
SELECT * FROM ORDERS WHERE ORDERS.STORE_ID = ? OFFSET ? LIMIT ?
SQL
query_result = Order.find_by_sql([query, store_id.to_i, offset.to_i, 600000])
As you can see, my Ruby code uses find_by_sql to execute the query, which lets me supply values for the ? placeholders. How do I fill in these values when using Postgres' COPY command? The query will return 600,000 records, so delegating the CSV generation to PostgreSQL itself seems like a good idea. The issue is that I don't know how to do this when my query is parameterized to avoid SQL injection. Any help with sample Ruby code would be great!
We can still use find_by_sql and simply wrap the query in a COPY statement, like this:
query = <<-SQL
COPY
(SELECT * FROM ORDERS WHERE ORDERS.STORE_ID = ? OFFSET ? LIMIT ?)
TO '/path/temp/output.txt' WITH CSV HEADER
SQL
query_result = Order.find_by_sql([query, store_id.to_i, offset.to_i, 600000])
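One caveat: COPY ... TO '/path/...' writes the file on the database server's filesystem and typically requires superuser privileges, so the Rails process never sees the data. If the goal is to send the CSV to the front end, a variant that streams the output over the connection might look like this (a rough sketch, assuming the pg gem's copy_data API and a controller context; store_id and offset come from the question):

# Fill in the placeholders safely, then run COPY ... TO STDOUT and
# collect the CSV over the wire instead of on the server's disk.
sql = ActiveRecord::Base.send(:sanitize_sql_array, [
  "COPY (SELECT * FROM orders WHERE orders.store_id = ? OFFSET ? LIMIT ?) TO STDOUT WITH CSV HEADER",
  store_id.to_i, offset.to_i, 600000
])

csv  = ""
conn = ActiveRecord::Base.connection.raw_connection # the underlying PG::Connection
conn.copy_data(sql) do
  while (row = conn.get_copy_data)
    csv << row
  end
end

# e.g. from a controller action:
send_data csv, filename: "orders.csv", type: "text/csv"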
In an attempt to summarise traffic data based on a time span, one cannot filter on a component of a datetime object like this:
txat0 = Transaction.where(['shop_id = ? AND created_at.hour = ?', shop, 0]).count
One could go via the SQL route (i.e. PostgreSQL):
select shop_id, extract(hour from created_at) from transactions
and filter from there.
But what is a succinct way of achieving this in Ruby or Rails (performance is not a concern for this query)?
I believe you could mix the two and run the SQL part inside an ActiveRecord query.
What about:
Transaction.where("DATE_PART('hour', created_at) = ?", 0)
PS: I've ignored the shop_id clause in the above example, but you can just chain it on afterwards, as shown below.
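For completeness, with the shop_id clause chained back on, the count could look something like this (a sketch, assuming shop holds the id as in the question):

txat0 = Transaction.where(shop_id: shop)
                   .where("DATE_PART('hour', created_at) = ?", 0)
                   .count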
I have some raw SQL and I'm not sure whether it would be better as an ActiveRecord call or if I should stick with raw SQL. Would this be easy to convert to AR?
select *
from logs t1
where
log_status_id = 2 and log_type_id = 1
and not exists
(
select *
from logs t2
where t2.log_version_id = t1.log_version_id
and t2.log_status_id in (1,3,4)
and log_type_id = 1
)
ORDER BY created_at ASC
So something like this?:
Log.where(:log_status_id=>2, log_type_id => 1).where.not(Log.where.....)
You could do this using Arel. See "Rails 3: Arel for NOT EXISTS?" for an example.
Personally, though, I often find raw SQL more readable and maintainable than Arel queries. And I'd guess most developers are more familiar with it in general, too.
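For reference, the Arel route might look roughly like this (a sketch built from a relation rather than hand-assembled Arel nodes, not tested against your schema):

not_exists = Log.select("1")
                .from("logs t2")
                .where("t2.log_version_id = logs.log_version_id")
                .where("t2.log_status_id IN (1, 3, 4)")
                .where("t2.log_type_id = logs.log_type_id")
                .arel.exists.not

Log.where(log_status_id: 2, log_type_id: 1).where(not_exists).order(:created_at)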
But in any case, your approach of separating the narrowing by log_status_id and log_type_id from the subquery is a good idea, even though your .where.not construct won't work as written.
This should do the trick, however:
Log.where(log_status_id: 2, log_type_id: 1)
.where("NOT EXISTS (
select *
from logs t2
where t2.log_version_id = logs.log_version_id
and t2.log_status_id in (1,3,4)
and t2.log_type_id = logs.log_type_id)")
.order(:created_at)
The only scenario where this might become problematic is when you try to join this query to other queries, because the outer table will likely receive a different alias than logs.
I'm using ActiveRecord 4.2 / Arel 6.0 / Postgres and have the following inputs:
An Arel::Attributes::Attribute from an Arel::Table (column)
Several Arel::Nodes::Ordering nodes (orders)
I want to build an Arel::Nodes::NamedFunction with an aggregate function that includes the column specified by the Attribute and is ordered by the Ordering nodes.
The resulting SQL could look something like:
array_agg("posts"."id" ORDER BY "posts"."published_at" DESC)
My current solution is to first build an Arel::Nodes::SelectStatement, add the column and orders to it, convert it to SQL, strip the leading SELECT keyword, wrap it in an Arel::Nodes::SqlLiteral and pass that to the NamedFunction node:
select = Arel::Nodes::SelectStatement.new
select.cores.last.projections << column
select.orders = orders
sql = select.to_sql.sub(/^SELECT /, '')
literal = Arel::Nodes::SqlLiteral.new(sql)
array_agg = Arel::Nodes::NamedFunction.new('array_agg', [literal])
Obviously, this is a huge hack.
Keeping the ORDER BY outside the aggregate function is not an option, because it would conflict with the GROUP BY used to aggregate.
So is there a cleaner way to achieve this without abusing SelectStatement / SelectManager?
I don't know if this works for you, but I found this to be a clean way to order in DESC order. You're using some different syntax than me, but it seems like this is what you're looking for.
arel = Post.arel_table
query = arel.project(Arel.sql('*'))
query.order(arel[:published_at].desc).to_sql
This is the best guess I have for what you're trying to do.
The solution is to use an Arel::Nodes::InfixOperation to build the "ORDER BY" node and an Arel::Nodes::StringJoin to build a comma-separated list of the Arel::Nodes::Ordering nodes:
ordering = Arel::Nodes::StringJoin.new(orders)
order_by = Arel::Nodes::InfixOperation.new('ORDER BY', column, ordering)
array_agg = Arel::Nodes::NamedFunction.new('array_agg', [order_by])
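Wired up with the inputs from the question, this renders the desired SQL (assuming a Post model; Arel::Nodes::Node#to_sql still exists in Arel 6 and renders against the default engine):

posts  = Post.arel_table
column = posts[:id]
orders = [posts[:published_at].desc]

# build ordering, order_by and array_agg as above, then:
array_agg.to_sql
# => array_agg("posts"."id" ORDER BY "posts"."published_at" DESC)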
I am testing an Ext JS application (client side) against a Play Framework application (server side).
I am using a grid in Ext JS with pagination.
The pagination part requires sending URL query parameters to my Play! server. This is no big deal, but how do I process these parameters in the SQL statement?
Example:
First request:
http://myDomain:9000/GetUsers?_dc=123456789&page=1&start=0&limit=25
Second request:
http://myDomain:9000/GetUsers?_dc=123456789&page=2&start=25&limit=25
My thoughts:
Normally in SQL Server you can cap the number of results with TOP:
SELECT TOP 25 * FROM USERS
But how do I translate the second request into a SQL query?
Thank you for taking time to help me out!
EDIT: I am developing on SQL Server 2008, but I want this to work on SQL Server 2005 or higher and Oracle 9 or higher :-)
Since you're using the Play! framework, what you should do is have a proper model, with entities representing your SQL tables. Then retrieving a range of results is built in:
// fetch up to 25 users, starting at offset 25
List<User> users = User.all().from(25).fetch(25);
You should also look at the pagination module. I haven't tested it, but it looks like exactly what you want.
You could try something like:
WITH Query_1 AS (
    SELECT
        Field1, Field2, etc,
        ROW_NUMBER() OVER (ORDER BY Field1, Field2, etc) AS RowID
    FROM Table
    WHERE x = y
)
SELECT * FROM Query_1
WHERE RowID > @start
  AND RowID <= @start + @limit
Since ROW_NUMBER() is 1-based, the second request (start=25, limit=25) maps to RowIDs 26 through 50. Of course, ROW_NUMBER() didn't exist back in SQL Server 2000, but since you're targeting 2005 and higher, you should be fine.
I have a custom query that looks like this:
self.account.websites.find(:all, :joins => [:group_websites => { :group => :users }], :conditions => ["users.id = ?", self])
where self is a User object.
I managed to generate the equivalent SQL, which looks like this:
sql = "select * from websites INNER JOIN group_websites on group_websites.website_id = websites.id INNER JOIN groups on groups.id = group_websites.group_id INNER JOIN group_users ON (groups.id = group_users.group_id) INNER JOIN users on (users.id = group_users.user_id) where (websites.account_id = #{account_id} AND (users.id = #{user_id}))"
With a decent understanding of SQL and ActiveRecord, I assumed (and most would agree) that the above ActiveRecord query would take longer than the find_by_sql(sql) version.
But surprisingly, when I ran the two, the ActiveRecord custom query beat find_by_sql in terms of load time.
Here are the test results:
ActiveRecord Custom Query load time
Website Load (0.9ms)
Website Columns(1.0ms)
find_by_sql load time
Website Load (1.3ms)
Website Columns(1.0ms)
I repeated the test again and again, and the results still came out the same (with the custom query winning the battle).
I know the difference isn't big, but I still can't figure out why a plain find_by_sql query is slower than the custom query.
Can anyone shed some light on this?
Thanks in advance.
With the find case, the query is parameterized; this means the database can cache the query plan and will not need to parse and compile the query again.
With the find_by_sql case the entire query is passed to the database as a string. This means there is no caching that the database can do on the structure of the query, and it needs to be parsed and compiled on each occasion.
I think you can test this: try find_by_sql in this way (parameterized):
Website.find_by_sql(["select * from websites INNER JOIN group_websites on group_websites.website_id = websites.id INNER JOIN groups on groups.id = group_websites.group_id INNER JOIN group_users ON (groups.id = group_users.group_id) INNER JOIN users on (users.id = group_users.user_id) where (websites.account_id = ? AND (users.id = ?))", account_id, user_id])
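To compare the two more reliably than by eyeballing single log lines, you could also time many runs of each (a sketch using Ruby's Benchmark module; user, account_id and user_id are placeholders from the question):

require 'benchmark'

sql = "select * from websites INNER JOIN group_websites on group_websites.website_id = websites.id INNER JOIN groups on groups.id = group_websites.group_id INNER JOIN group_users ON (groups.id = group_users.group_id) INNER JOIN users on (users.id = group_users.user_id) where (websites.account_id = ? AND (users.id = ?))"

Benchmark.bm(15) do |x|
  x.report('custom query:') do
    100.times { user.account.websites.find(:all, :joins => [:group_websites => { :group => :users }], :conditions => ["users.id = ?", user]) }
  end
  x.report('find_by_sql:') do
    100.times { Website.find_by_sql([sql, account_id, user_id]) }
  end
end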
Well, the reason is probably quite simple: with custom SQL, the query string is sent straight to the db server for execution.
Remember that Ruby is an interpreted language, so Rails first has to generate the SQL from the ORM meta-language you used before it can be sent to the actual db server for execution. I would say the additional 0.1 ms is the time taken by the framework to generate the query.