Where is the database file created in this code? - ruby-on-rails

I have this code embedded in a .html.erb file:
require 'sqlite3'
# Open a SQLite 3 database file
db = SQLite3::Database.new 'file.db'
# Create a table
result = db.execute <<-SQL
  CREATE TABLE numbers (
    name VARCHAR(30),
    val INT
  );
SQL
# Insert some data into it
{ 'one' => 1, 'two' => 2 }.each do |pair|
  db.execute 'insert into numbers values (?, ?)', pair
end
# Find some records
db.execute 'SELECT * FROM numbers' do |row|
  p row
end
As you can see, the code creates a database, then creates a table, and so on.
My problem: I don't know where the file "file.db" is created. I have searched the whole Windows drive for it, but I cannot find it.
NOTE 1: It seems the file and the table are being created, because if I request the same .html.erb again, I get the error: the table 'numbers' already exists.
NOTE 2: I know a view is not the best place to put Ruby code that talks to a database. I know I should write it in the controller.
Here is the full code:
<%
  require 'sqlite3'
  # Open a SQLite 3 database file
  db = SQLite3::Database.new 'development.sqlite3'
  # Create a table
  result = db.execute <<-SQL
    CREATE TABLE numbers (
      name VARCHAR(30),
      val INT
    );
  SQL
  # Insert some data into it
  { 'one' => 1, 'two' => 2 }.each do |pair|
    db.execute 'insert into numbers values (?, ?)', pair
  end
  # Find some records
  db.execute 'SELECT * FROM numbers' do |row|
    p row
  end
%>
<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>

Look for file.db (in your full code it is development.sqlite3). SQLite creates the file relative to the current working directory of the process, which for a Rails server is usually the application's root directory. SQLite is a truly lightweight database. To view the tables and records, read the following:
https://www.tutorialspoint.com/sqlite/sqlite_create_database.htm
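If you want the code itself to tell you where the file ends up, here is a minimal sketch (plain Ruby, nothing Rails-specific assumed). Because SQLite resolves a relative filename against the process's working directory, printing that directory and the expanded path shows exactly where the .db file was created:
require 'sqlite3'

db = SQLite3::Database.new 'file.db'

# The relative name 'file.db' is resolved against the working directory,
# so these two lines print that directory and the file's absolute path.
puts Dir.pwd
puts File.expand_path('file.db')
In a Rails app you could instead pass an absolute path such as Rails.root.join('db', 'file.db').to_s to SQLite3::Database.new, so the file always lands in the db/ directory.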

Related

Add and edit data to postgresql with ruby on rails

I have Ruby code that talks to the database. I use this code:
user_id = params[:user_id].to_i
sql = "SELECT * FROM user_stats WHERE user_id = #{user_id}"
user_stats = ActiveRecord::Base.connection.execute(sql)
but adding rows to the database like this:
sql = "INSERT INTO user_custom_fields (id, user_id, name, value, created_at, updated_at) VALUES (#{last_id}, #{user_id}, 'user_field_#{user_field_id[0]["id"]}', '#{personality}', '#{formatted}', '#{formatted}')"
res = ActiveRecord::Base.connection.execute(sql)
does not increment the sequence, and I can't edit this table that way.
How should I add data to these tables so that the sequences increment automatically?
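One way to let the sequence do its job is to leave id out of the INSERT entirely, so PostgreSQL assigns the next value itself; quoting values through the connection instead of interpolating them raw also avoids SQL injection. A minimal sketch, reusing the column names from the question:
conn = ActiveRecord::Base.connection
sql = <<-SQL
  INSERT INTO user_custom_fields (user_id, name, value, created_at, updated_at)
  VALUES (#{user_id.to_i},
          #{conn.quote("user_field_#{user_field_id[0]['id']}")},
          #{conn.quote(personality)},
          #{conn.quote(formatted)},
          #{conn.quote(formatted)})
SQL
res = conn.execute(sql)
If there is an ActiveRecord model for user_custom_fields, calling create on it would also handle the id and the sequence for you.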

Getting the column type from a raw SQL statement for reporting purposes

I'm building a generalized reporting tool for rails, and I'd like to not only get the column names of a raw SQL query, but the converted ruby type as well. I'm going to use the type to make the interface a little better.
The following works, but surely there's a more "Rails" way to approach this? The query is just an example; it could potentially span every table, using whatever dynamic SQL the user wants.
sql = "SELECT * FROM SOME_TABLE"
results = ActiveRecord::Base.connection.raw_connection.exec(sql)
results.nfields.times do |i|
puts results.fname(i)
name = results.fname(i)
typename = DataSet.connection.raw_connection.
exec( "SELECT format_type($1,$2)", [results.ftype(i), results.fmod(1)] ).
getvalue( 0, 0 )
column = ActiveRecord::ConnectionAdapters::Column.new(name, nil, typename)
puts column.klass # gives a decent assumption of type
end
ModelX.columns.each do |column|
  puts column.sql_type # int(11)
  puts column.type # :integer
end
This assumes you have an ActiveRecord model for each table.
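If you don't have (or don't want) a model for every table, the connection can give you the same column metadata directly. A minimal sketch, assuming you know the table name ('some_table' here is just a placeholder):
ActiveRecord::Base.connection.columns('some_table').each do |column|
  puts column.name     # "id"
  puts column.sql_type # "integer"
  puts column.type     # :integer
end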

Produce a report by getting data from multiple tables

I have 4 tables:
key: id, name
project: id, name
project_key: id, key_id, project_id
project_report: id, status, project_id, key_id
project_c_report: id, status, project_id, key_id, c_id
I want to produce a report using those tables:
The output should be:
Key.name, project_report.status, project_c_report.status
I was able to do this by getting all the keys from a project and looping over them:
array = []
project.keys.each do |k|
  pr = ProjectReport.where(key_id: k.id, project_id: project.id).map(&:status)
  pcr = ProjectCReport.where(key_id: k.id, project_id: project.id, c_id: 1).map(&:status)
  array << { name: k.name, pr: pr, pcr: pcr }
end
array
The problem is that I am doing a lot of selects and everything is slow. Can someone please help me with a better way of doing this?
Thank you
First, create a function in your database. This is just a brief example, and it's done in PostgreSQL, but it shouldn't differ much from MySQL, SQL Server, etc.
FUNCTION get_myreport(key_id integer, project_id integer [as many params as you'd like the function to take])
  pKey ALIAS FOR $1;
  pProject ALIAS FOR $2;
BEGIN
  CREATE TEMP TABLE IF NOT EXISTS tmp_project_report(id integer, project_name character varying, [all the values you want to see in the report]);
  TRUNCATE tmp_project_report;
  INSERT INTO tmp_project_report([all the columns])
  SELECT a.table1_fields, b.table2_fields, c.table3_fields, d.table4_fields, e.table5_fields
  FROM table1 a, table2 b, table3 c, table4 d, table5 e
  WHERE
    a.key = pKey
    AND b.project_key = pProject;
END;
Then, in your controller's method, you call the function like this:
my_report = ActiveRecord::Base.connection.execute("SELECT get_myreport(param1, param2, ...)")
You will have to make a model that includes all the fields that are in the temp table you've made, and set that temp table as its self.table_name.
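A minimal sketch of such a model, assuming the temp table created by the function above is named tmp_project_report:
# app/models/temp_table.rb
class TempTable < ActiveRecord::Base
  self.table_name = 'tmp_project_report'
end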
Then, in your view, you'd only have to iterate over your collection and display the values accordingly:
@report = TempTable.all
<% @report.each do |report| %>
  <%= report.value1 %>
  <%= etc... %>
<% end %>
Figure out the database query, then query the database directly from your model:
def records
  connection = ActiveRecord::Base.connection
  records = connection.select_all(<<-SQL)
    SELECT key.name, project_report.status, project_c_report.status
    FROM ...
    JOIN ...
  SQL
  records
end
Here is something you can try if you choose to keep this within Rails (note that the following query is untested and is shown for concept only):
report_data = Project.joins(project_key: :key)
  .joins('left join project_reports on project_keys.project_id = project_reports.project_id and project_keys.key_id = project_reports.key_id
          left join project_c_reports on project_keys.project_id = project_c_reports.project_id and project_keys.key_id = project_c_reports.key_id')
  .where('project_c_reports.c_id = ?', 1)
  .select('projects.name, project_reports.status as report_status, project_c_reports.status as c_report_status')
This should give you a relation of Project objects, each carrying the three selected attributes: name, report_status and c_report_status. To get those values as an array of three-element arrays you could do:
report_data.map { |p| [p.name, p.report_status, p.c_report_status] }
The type of join to use depends on your requirements. With the right indexes in place, this single query should perform better than the loop of per-key selects in the question.

How can I remove unique constraints from a PostgreSQL DB?

I'm trying to write a (Ruby) script that will drop all the foreign key and unique constraints in my PostgreSQL DB and then re-add them.
The FK part seems to be working OK.
However, dropping and recreating unique constraints isn't working.
I think the reason is that when the unique constraint is created, PostgreSQL creates an index along with it, and that index doesn't get automatically dropped when the unique constraint is dropped. So then when the script tries to re-add the unique constraint, I get an error like...
PG::Error: ERROR: relation "unique_username" already exists
: ALTER TABLE users ADD CONSTRAINT unique_username UNIQUE (username)
And indeed when I look at the DB in the pgAdmin GUI utility, that index exists.
The question is, how do I find it in my script and drop it?
Here's my script...
manage_constraints.rake
namespace :journal_app do
  desc 'Drop constraints'
  task :constraints_drop => :environment do
    sql = %Q|
      SELECT
        constraint_name, table_catalog, table_name
      FROM
        information_schema.table_constraints
      WHERE
        table_catalog = 'journal_app_#{Rails.env}'
      AND
        constraint_name NOT LIKE '%_pkey'
      AND
        constraint_name NOT LIKE '%_not_null';
    |
    results = execute_sql(sql)
    results.each do |row|
      puts "Dropping constraint #{row['constraint_name']} from table #{row['table_name']}."
      execute_sql("ALTER TABLE #{row['table_name']} DROP CONSTRAINT #{row['constraint_name']}")
    end
  end

  # --------------------------------------------------------------------------------------------------------------------
  desc 'Drops constraints, then adds them'
  task :constraints_add => :environment do
    Rake::Task['journal_app:constraints_drop'].invoke

    UNIQUE_KEYS = [
      {
        :name => 'unique_username',
        :table => 'users',
        :columns => ['username']
      },
      {
        :name => 'unique_email',
        :table => 'users',
        :columns => ['email']
      }
    ]

    FKs = [
      {
        :name => 'fk_entries_users',
        :parent_table => 'users',
        :child_table => 'entries',
        :on_delete => 'CASCADE'
      },
      {
        :name => 'fk_entries_entry_tags',
        :parent_table => 'entries',
        :child_table => 'entry_tags',
        :on_delete => 'CASCADE'
      },
      # etc...
    ]

    UNIQUE_KEYS.each do |constraint|
      sql = "ALTER TABLE #{constraint[:table]} ADD CONSTRAINT #{constraint[:name]} UNIQUE (#{constraint[:columns].join(', ')})"
      puts "Adding unique constraint #{constraint[:name]} to table #{constraint[:table]}."
      puts ' SQL:'
      puts " #{sql}"
      execute_sql(sql)
    end

    FKs.each do |fk|
      sql = %Q|
        ALTER TABLE #{fk[:child_table]} ADD CONSTRAINT #{fk[:name]} FOREIGN KEY (#{fk[:parent_table].singularize}_id)
        REFERENCES #{fk[:parent_table]} (id)
        ON UPDATE NO ACTION ON DELETE #{fk[:on_delete]}|.strip!
      puts "Adding foreign key #{fk[:name]}."
      puts ' SQL:'
      puts " #{sql}"
      execute_sql(sql)
    end
  end
end

def execute_sql(sql)
  ActiveRecord::Base.connection.execute(sql)
end
First, why do such a thing? This has the feel of one of those "I've decided on solution Y to problem X, and am having a problem with solution Y that I'm asking about" situations, where the real answer is "use solution Z, not solution Y, to solve problem X". In other words, try explaining the underlying problem you are having; there might be a much better way to solve it.
If you must do it, query pg_catalog.pg_index joined to pg_class on pg_class.oid = pg_index.indexrelid for indexes that are not indisprimary, and exclude anything that has a matching row in pg_constraint (pg_index.indexrelid = pg_constraint.conindid).
eg:
SELECT pg_class.relname
FROM pg_index
INNER JOIN pg_class ON (pg_class.oid = pg_index.indexrelid)
INNER JOIN pg_namespace ON (pg_class.relnamespace = pg_namespace.oid)
WHERE NOT EXISTS (
  SELECT 1 FROM pg_constraint WHERE pg_index.indexrelid = pg_constraint.conindid
)
AND pg_index.indisprimary = 'f'
AND pg_namespace.nspname NOT LIKE 'pg_%';
Be aware that such queries may break in any major version transition, as the pg_catalog is not guaranteed to retain the same schema across versions. Query the Pg version and use version-specific queries if necessary. Sound painful? It is, but it shouldn't generally be necessary; you're just doing something kind of weird.
For most purposes the very stable information_schema is quite sufficient.
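If you do go the catalog route, here is a minimal sketch of wiring that query into the rake task from the question, reusing its execute_sql helper (the index names come back in the relname column; DROP INDEX IF EXISTS skips anything already gone):
# index_sql holds the pg_catalog query shown above
execute_sql(index_sql).each do |row|
  puts "Dropping leftover index #{row['relname']}."
  execute_sql(%Q|DROP INDEX IF EXISTS "#{row['relname']}"|)
end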

Sqlite where clause is not working (is this a bug?)

I was debugging a Ruby on Rails engine which has problems when running on SQLite: it cannot find records that the app itself creates. Everything works on MySQL, but the same query fails on SQLite.
I've tracked down the issue and found that the problem is a simple WHERE query which won't find the created record. Essentially, the table has a column called key which stores some MD5 hashes. The failing spec inserts a record with a given hash and then, in the following instruction, does a SELECT query for the same hash, but SQLite returns no record for that key. I've extracted the generated database and the failing query from the app; this is a copy of the app database:
http://dl.dropbox.com/u/2289657/combustion_test.sqlite
Here is a transcript of the queries executed by the software (made with the command line utility):
# Here I'm selecting all the records from the table
# there is a single record in it, the key is the third field
$ sqlite3 combustion_test.sqlite 'SELECT * FROM tr8n_translation_keys'
1||b56c67d10759f8012aff28fc03f26cbf|Hello World|We must start with this sentence!||||en-US|0|2012-03-14 11:49:50.335322|2012-03-14 11:49:50.335322|
# Here I'm selecting the record with that key and it doesn't return anything
$ sqlite3 combustion_test.sqlite "SELECT * FROM tr8n_translation_keys WHERE key = 'b56c67d10759f8012aff28fc03f26cbf'"
# Here I'm selecting the record with a LIKE clause and it finds the record
$ sqlite3 combustion_test.sqlite "SELECT * FROM tr8n_translation_keys WHERE key LIKE 'b56c67d10759f8012aff28fc03f26cbf'"
1||b56c67d10759f8012aff28fc03f26cbf|Hello World|We must start with this sentence!||||en-US|0|2012-03-14 11:49:50.335322|2012-03-14 11:49:50.335322|
Should I report this as a bug on the SQLite site?
P.S. I've also tried on a different system with a different SQLite version, but the results are the same.
Update
Here is the table schema
sqlite> .schema tr8n_translation_keys
CREATE TABLE "tr8n_translation_keys" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"type" varchar(255), "key" varchar(255) NOT NULL,
"label" text NOT NULL,
"description" text,
"verified_at" datetime,
"translation_count" integer,
"admin" boolean,
"locale" varchar(255),
"level" integer DEFAULT 0,
"created_at" datetime,
"updated_at" datetime,
"synced_at" datetime
);
CREATE UNIQUE INDEX "index_tr8n_translation_keys_on_key" ON "tr8n_translation_keys" ("key");
CREATE INDEX "index_tr8n_translation_keys_on_synced_at" ON "tr8n_translation_keys" ("synced_at");
Update 2
Here is the Rails code which computes the key value inserted into the table (I've removed some code; the full method is here):
def self.find_or_create(label, desc = "", options = {})
  key = generate_key(label, desc).to_s
  # IF I UNCOMMENT THIS LINE EVERYTHING WORKS
  #key = 'b56c67d10759f8012aff28fc03f26cbf'
  tkey = Tr8n::Cache.fetch("translation_key_#{key}") do
    existing_key = where(:key => key).first ### THIS IS THE FAILING WHERE
    existing_key ||= begin
      new_tkey = create(:key => key.to_s,
                        :label => label,
                        :description => desc,
                        :locale => locale,
                        :level => level,
                        :admin => Tr8n::Config.block_options[:admin])
      # rest of method...
And here is the generate_key method (the comment about sqlite is from the author, not mine):
def self.generate_key(label, desc = "")
  # TODO: there is something iffy going on with the strings from the hash
  # without the extra ~ = the strings are not seen in the sqlite database - wtf?
  "#{Digest::MD5.hexdigest("#{label};;;#{desc}")}"
end
This works:
SELECT * FROM tr8n_translation_keys WHERE LOWER(key)='b56c67d10759f8012aff28fc03f26cbf';
But this doesn't:
SELECT * FROM tr8n_translation_keys WHERE key='b56c67d10759f8012aff28fc03f26cbf' COLLATE NOCASE;
When I examine the database in SQLiteManager, it shows the key as this:
X'6235366336376431303735396638303132616666323866633033663236636266'
which implies it's treating the key as a BLOB (raw binary data) rather than TEXT. This is why the comparison fails. But LOWER(key) causes the field to be cast to text, hence the comparison succeeds.
So, we need to find out why the entry has been stored as a BLOB instead of TEXT. How were these values inserted into the database?
Following your update 2: I'm not a Ruby expert, but the value returned from generate_key is not being converted to a string in the way you expect. Try to_str instead of to_s when calling generate_key.
Based on the following Stack Overflow answer...
https://stackoverflow.com/a/6591427/18064
... you might want to update the generation of your key as follows:
def self.generate_key(label, desc = "")
  # TODO: there is something iffy going on with the strings from the hash
  # without the extra ~ = the strings are not seen in the sqlite database - wtf?
  "#{Digest::MD5.hexdigest("#{label};;;#{desc}").encode('UTF-8')}"
end
Note the addition of .encode('UTF-8').
This worked for me when I had the same problem as yourself.
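To confirm what actually got stored, SQLite's typeof() function reports whether the value in the key column is 'text' or 'blob'. A minimal sketch against the database linked above, using the sqlite3 gem:
require 'sqlite3'

db = SQLite3::Database.new 'combustion_test.sqlite'
# 'blob' means the key was stored as raw binary, so a plain = comparison with
# a text literal will not match; 'text' means the comparison should work.
db.execute("SELECT key, typeof(key) FROM tr8n_translation_keys") do |row|
  p row
end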
