I have a Ruby on Rails application in which I call different stored procedures from a Postgres database. I want to know if there is a way to see the currently running queries inside a stored procedure when called. This is needed in order to create console logs in case of a stored procedure bottleneck.
I don't want to modify the stored procedure with lines like RAISE NOTICE; I only want to see the queries from the application level, if possible.
I'm currently using the PG gem: connection.exec(sql_query).to_a, where connection is a PG::Connection and sql_query is the stored procedure call (e.g. procedure1("1", true)).
I tried searching for some solutions in the pg-gem documentation but couldn't find any.
Any tips?
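One application-side option is to poll pg_stat_activity from a second connection while the procedure runs. Note the caveat: pg_stat_activity shows each session's top-level statement (the CALL itself); to see the individual statements inside the procedure you would need something like the pg_stat_statements extension with pg_stat_statements.track = 'all'. A rough sketch, where the DATABASE_URL environment variable and the output format are assumptions:

```ruby
# SQL that lists other sessions' currently running statements
ACTIVE_QUERIES_SQL = <<~SQL
  SELECT pid, state, now() - query_start AS runtime, query
  FROM pg_stat_activity
  WHERE state = 'active' AND pid <> pg_backend_pid()
  ORDER BY query_start
SQL

if ENV['DATABASE_URL'] # run only when a database is configured
  require 'pg'
  monitor = PG.connect(ENV['DATABASE_URL']) # separate monitoring connection
  monitor.exec(ACTIVE_QUERIES_SQL).each do |row|
    puts "#{row['pid']} #{row['runtime']} #{row['query']}"
  end
end
```

Running this in a loop (or from a rake task) while the procedure executes gives you something you can forward to your console logs.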
Related
I'm building a project where the front end is React and the backend is Ruby on Rails with a Postgres DB. A required piece of functionality is the ability for users to export large datasets.
I have the following code snippet that creates a CSV and stores it on the database server.
query = <<-SQL
COPY (SELECT * FROM ORDERS WHERE ORDERS.STORE_ID = ? OFFSET ? LIMIT ?) to '/temp/out.txt' WITH CSV HEADER
SQL
query_result = Order.find_by_sql([query, store_id.to_i, offset.to_i, 1000000])
How would I be able to retrieve that file to send to the front end? I've seen examples that use copy_data and get_copy_data, but I couldn't get them to work with a parameterized query. Any help would be great. Thanks!
There are two problems with your approach:
COPY doesn't support parameters, so you will have to construct the complete query string on the client side (beware of SQL injection).
COPY ... TO 'file' requires superuser rights or membership in the pg_write_server_files role.
Don't even think of running an application as a superuser.
Even without that, allowing client code to create files on the database server exposes you to the risk of denial of service through a full file system.
I think that the whole idea is ill-conceived. If you have a large query result, the database server will automatically use temporary files if an intermediate result won't fit into memory. Keep it simple.
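If you do want Postgres to produce the CSV, the usual pattern is COPY ... TO STDOUT, which streams the data to the client instead of writing a server-side file and needs no special privileges. A sketch under the assumption of an orders table with a store_id column; since COPY accepts no bind parameters, the value is cast to Integer on the Ruby side:

```ruby
# Build the COPY statement; Integer() raises on anything that is not a
# plain integer, which guards against SQL injection.
def copy_orders_sql(store_id)
  "COPY (SELECT * FROM orders WHERE store_id = #{Integer(store_id)}) " \
    "TO STDOUT WITH (FORMAT csv, HEADER)"
end

if ENV['DATABASE_URL'] # run only when a database is configured
  require 'pg'
  conn = PG.connect(ENV['DATABASE_URL'])
  csv = +''
  conn.copy_data(copy_orders_sql(42)) do
    while (row = conn.get_copy_data)
      csv << row # each chunk is one CSV line
    end
  end
  # csv now holds the export; a controller could return it with send_data
end
```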
I would like to know if there's any special requirement when calling a Snowflake stored procedure in an Informatica mapping. Concretely, I have a mapping in which the target is a Snowflake table, and as Post-SQL, I want to call a stored procedure that lives in the same database as my table.
I call my stored procedure in Post-SQL as follows:
CALL spname();
However, I get the following error when running:
SQL compilation error: Unknown function spname
Do you know which could be the problem here?
That error message is coming from Snowflake, so Informatica (is this PowerCenter on-prem?) is attempting to run the SP and it's getting a response back from Snowflake. Here are some things to check:
Does the Snowflake user PowerCenter runs as have the required grants to run the SP? The error message will be the same whether the SP does not exist or the user lacks privileges to run it.
Does the user running PowerCenter have the required grants on the database and schema containing the stored procedure?
You can ensure that PowerCenter is looking in the right namespace by specifying both the database and schema before the SP name, such as call "MY_DB"."MY_SCHEMA"."MY_PROC"();
I have a DSN (data source name) of the following format:
<driver>://<username>:<password>@<host>:<port>/<database>
and I am asked to retrieve rows from the corresponding database, which in this specific example has a single table and is hosted on AWS. I would like to do this via an endpoint in a Rails app.
I did some research online to look for an example about DSN, but couldn't find any help.
I am looking for a high-level explanation of how to work with a DSN, and ideally how to use Rails to communicate with the database.
I am not sure if it is going to be useful for anybody, but this is as much info as I could gather.
The DSN format is something like:
<driver>://<username>:<password>@<host>:<port>/<database>
Within Rails it can be used like this, assuming you want to count the rows of a users table:
require 'pg'

# PG.connect accepts the DSN directly as a connection string
conn = PG.connect('<driver>://<username>:<password>@<host>:<port>/<database>')
puts conn.exec("SELECT count(*) FROM users").to_a
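Once connected, any user-supplied values should go through exec_params rather than string interpolation; a small sketch (the table and column names are made up):

```ruby
# $1 is a bind placeholder; the pg gem sends the value separately from the
# SQL text, so no manual escaping is needed and injection is not possible.
USER_BY_ID_SQL = 'SELECT * FROM users WHERE id = $1'

if ENV['DATABASE_URL'] # run only when a database is configured
  require 'pg'
  conn = PG.connect(ENV['DATABASE_URL'])
  rows = conn.exec_params(USER_BY_ID_SQL, [42]).to_a
  puts rows.inspect
end
```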
Let's say I need to implement a search algorithm for a product catalog database. This would include multiple joins across the products table, manufacturers table, inventory table, etc. etc.
In .NET / MSSQL, I would isolate such logic in a DB stored procedure, then write a wrapper method in my data access layer of my .NET app to simply call this stored procedure.
How does something like this work in RoR? From my basic understanding, RoR uses its ORM by default. Does this mean, I have to move my search logic into the application layer, and write it using its ORM? The SQL stored proc is pretty intense... For performance, it needs to be in the stored procedure.
How does this work in RoR?
Edit: From the first two responses, I gather that ActiveRecord is the way to do things in Ruby. Does this mean that applications that require large complex queries with lots of joins, filtering and even dynamic SQL can (should) be re-written using ActiveRecord classes?
Thanks!
While it is possible to run raw SQL statements in Rails, using the execute method on a connection object, by doing so you will forfeit all the benefits of ActiveRecord. If you still want to go down this path, you can use it like so:
ActiveRecord::Base.connection.execute("call stored_procedure_name")
Another option to explore might be to create a "query object" to encapsulate your query logic. Inside, you could still use ActiveRecord query methods. ActiveRecord has become fairly proficient in optimizing your SQL queries, and there is still some manual tweaking you could do.
Below is a simple scaffold for such an object:
# app/queries/search_products.rb
class SearchProducts
  def initialize(search)
    @search = search
  end

  def call
    Product.where(...) # Plus additional search logic
  end
end
The third option would be to go with something like elasticsearch-rails or sunspot. This will require some additional setup time and added complexity, but might pay off down the line, if your search requirements change.
Stored procedures are one way to make an app faster in some cases, but they come at a high cost in developer time when debugging. Rails uses the ActiveRecord ORM, so leaning on stored procedures means the main features of ActiveRecord go underused.
There are some write-ups about this for Rails: stored-procedures-in-ruby-on-rails and using stored procedure.
I have a Rails 3 application where I need to ingest an XML file provided by an external system into a Postgres database. I would like to use something like ActiveRecord-Import but this does not appear to handle upsert capabilities for Postgres, and some of the records I will be ingesting will already exist, but will need to be updated.
Most of what I'm reading recommends writing SQL on the fly, but this seems like a problem that may have been solved already. I just can't find it.
Thanks.
You can do upserting on MySQL and PostgreSQL with the upsert gem.
If you're looking for raw speed, you could use nokogiri and upsert.
It might be easier to import the data using data_miner, which uses nokogiri and upsert internally.
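For reference, the upsert gem's row API takes a selector hash (which row to match) and a setter hash (which columns to write); a sketch with a made-up pets table:

```ruby
# Selector picks the row to match; setter holds the columns to insert/update.
def upsert_pet(upsert, name, breed)
  upsert.row({ name: name }, breed: breed)
end

if ENV['DATABASE_URL'] # run only when a database is configured
  require 'pg'
  require 'upsert'
  connection = PG.connect(ENV['DATABASE_URL'])
  # Upsert.batch reuses one database function across many rows
  Upsert.batch(connection, :pets) do |upsert|
    upsert_pet(upsert, 'Jerry', 'beagle')
  end
end
```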
If you are on PostgreSQL 9.1 or later, you should use writable common table expressions. Something like:
WITH updates (id) AS (
    UPDATE mytable SET .....
    WHERE ....
    RETURNING id
)
INSERT INTO mytable (....)
SELECT ...
FROM mytemptable
WHERE id NOT IN (SELECT id FROM updates);
In this case you bulk-load things into a temp table first; the statement then updates existing records from the temp table according to your logic and inserts the rest.
It's a two-step thing. First you need to fetch the XML file. If it's provided by a user via a form, that's lucky for you; otherwise you need to fetch it using Ruby's standard HTTP library or a gem like mechanize (which is actually really great).
The second thing is really easy. You read all the XML into a string, and then you can convert it into a hash with this piece of code:
Hash.from_xml(xml_string)
Then you can parse and work with the data...