Rails - TinyTds crashing Ruby

When querying an MSSQL 2008 database using FreeTDS and the tiny_tds gem with the syntax below:
db = TinyTds::Client.new(:username => ...)
select = db.execute("EXEC dbo.__stored_procedure__")
db.close
this line then causes Ruby to crash on Windows:
select.each {|x| p x}
Strangely, when running a simple select:
select = db.execute("SELECT field FROM table")
select.each doesn't crash, but it doesn't loop over anything either.
It doesn't crash WEBrick or the Rails console either.
But when I change the code to:
db = TinyTds::Client.new(:username => ...)
select = []
db.execute("EXEC dbo.__stored_procedure__").each { |x|
select << x
}
db.close
Then it works like a charm (even with the plain select).
I don't know how it behaves on operating systems other than Windows...

Your expectations are incorrect: a TinyTds result reads its rows lazily from the open connection, so iterating it after db.close uses a connection that no longer exists. I suggest you read over the TinyTDS usage documentation:
https://github.com/rails-sqlserver/tiny_tds#tinytdsclient-usage
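A minimal sketch of the safe pattern (connection options are placeholders; it assumes the result set fits in memory):
require 'tiny_tds'

# Read every row while the connection is still open, close it,
# and only then work with the data.
client = TinyTds::Client.new(username: "...", password: "...", host: "...")
rows = []
begin
  client.execute("EXEC dbo.__stored_procedure__").each { |row| rows << row }
ensure
  client.close
end
rows.each { |row| p row }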

PGconn.connect .... where is disconnect?

Environment:
psql (PostgreSQL) 9.6.3
Rails 5.1.1
Ruby 2.4.1p111
Question:
I may have a large group of (Devise) users each of whom is a separate Postgres user, e.g. SomePostgresRole01, SomePostgresRole02, etc.
I can successfully do:
conn = PGconn.connect("localhost", 5432,"","","db_development","SomePostgresRole01","SomePassword")
I cannot find a conn.disconnect method. Does such functionality exist?
close can be used to close the connection. Use ensure to make sure the connection is closed even if an exception is raised:
begin
  conn = PGconn.connect("localhost", 5432, "", "", "db_development", "SomePostgresRole01", "SomePassword")
rescue PG::Error => e
  puts e.message
ensure
  conn.close if conn
end
You can use #finish or #close; they're just aliases for the same thing.
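For completeness, the same pattern with the modern pg gem API (PGconn is a legacy alias for PG::Connection), as a minimal sketch:
require 'pg'

begin
  conn = PG.connect(host: "localhost", port: 5432, dbname: "db_development",
                    user: "SomePostgresRole01", password: "SomePassword")
  conn.exec("SELECT 1")  # do some work
rescue PG::Error => e
  puts e.message
ensure
  # #finish is the canonical name; #close does the same thing.
  conn.finish if conn && !conn.finished?
end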

Neo4j - too many connection resets - from the Ruby on Rails console

There is one and only one connection and user here.
d = l.descriptions.first
Language#descriptions 1200270ms MATCH language137, language137-[rel1:`DESCRIBED_IN`]->(result_descriptions:`Description`) WHERE (ID(language137) = {ID_language137}) RETURN result_descriptions | {:ID_language137=>137}
Faraday::TimeoutError: too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout) after 0 requests on 70156828873380, last used 1438628883.105085 seconds ago
After that, no other connections are accepted until the server is restarted.
What is wrong here?
In more detail, here is what I am trying to do: select a language (e.g. English), get the count of descriptions in English, then fetch the first description in English. The last step never returns, or it raises the connection error above; while it is running, no other connections can be opened to the database.
irb(main):001:0> l = Language.find_by(iso_639_2_code: 'eng')
CYPHER 316ms MATCH (n:`Language`) WHERE (n.iso_639_2_code = {n_iso_639_2_code}) RETURN n LIMIT {limit_1} | {:n_iso_639_2_code=>"eng", :limit_1=>1}
=> #<Language uuid: nil, english_name_of_language: "English", french_name_of_language: "anglais", german_name_of_language: "Englisch", iso_639_1_code: "en", iso_639_2_code: "eng", spoken_in: "English, a West Germanic language is the first language for about 309–400 million people. See: Countries by Languages - English Speaking Countries.">
irb(main):002:0>
irb(main):005:0* n = l.descriptions.count
Language#descriptions 17749ms MATCH language137, language137-[rel1:`DESCRIBED_IN`]->(result_descriptions:`Description`) WHERE (ID(language137) = {ID_language137}) RETURN count(result_descriptions) AS result_descriptions | {:ID_language137=>137}
=> 2107041
irb(main):006:0> d = l.descriptions.first
I think that we fixed this in version 5.0 of the gems. Could you try upgrading?
The issue was moved to the neo4jrb GitHub repo and resolved by the Neo4j maintainers, who recommended upgrading the core gem from version 5.0.9 to 5.0.11.
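For example, pinning the fixed versions in the Gemfile (a sketch; the gem names assume the standard neo4j / neo4j-core pair from neo4jrb):
# Gemfile
gem 'neo4j',      '~> 5.0'
gem 'neo4j-core', '>= 5.0.11'
and then running bundle update neo4j neo4j-core.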
You might also see this error if your Neo4j database server becomes unresponsive. To fix that, restart the server:
bundle exec rake neo4j:stop
bundle exec rake neo4j:start

DBIx::Class slow response

I have a DBIx::Class query that's taking too long to complete.
All the SQL below was generated by DBIx::Class.
First scenario (simple DBIx select):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me ORDER BY event_time DESC LIMIT 10;
DBIx query time: 0.390221s (ok)
Second scenario (simple DBIx select with a WHERE clause):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me WHERE ( proto_id = 7 ) ORDER BY event_time DESC LIMIT 10;
DBIx query time: 29.27025s!! :(
Third scenario (Using pgadmin3 to run the query above):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me WHERE ( proto_id = 7 ) ORDER BY event_time DESC LIMIT 10;
Pgadmin query time: 25ms (ok)
The same query is pretty fast in pgadmin.
Some info:
Catalyst 5.90091
DBIx::Class 0.082820 (latest)
Postgres 9.1
I did all tests on localhost, using the Catalyst internal server.
I have no problems with any other table/column combination; it's specific to proto_id.
The database schema was automatically generated by DBIx::Class::Schema::Loader.
proto_id definition:
"proto_id",
{ data_type => "smallint", is_foreign_key => 1, is_nullable => 0 },
Does anybody have a clue why DBIx::Class is taking so long to run this simple query?
Edit 1: The column is indexed (btree).
Edit 2: This is a partitioned table. I'm checking whether all the sub-tables have all the indexes, but that still doesn't explain why the same query is slower through DBIx::Class.
Edit 3: I wrote a simple standalone DBIx::Class script and got the same results, just to make sure the problem is not the Catalyst framework.
Edit 4: Using tcpdump I noticed Postgres is taking too long to respond; still investigating...
Edit 5: Plain DBI with raw SQL seems pretty fast; I'm almost convinced this is a DBIx::Class problem.
After some tests, I found the problem: when I ran the query using DBI's bind_param() (as DBIx::Class does), for some reason it became very slow:
use DBI qw(:sql_types);  # imports the SQL_INTEGER constant

my $sql = "SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me WHERE ( proto_id = ? ) ORDER BY event_time DESC LIMIT ?";
my $sth = $dbh->prepare($sql);
$sth->bind_param(1, 80, { TYPE => SQL_INTEGER });
$sth->bind_param(2, 10, { TYPE => SQL_INTEGER });
$sth->execute();
After some time searching CPAN, I noticed that my DBD::Pg was outdated (my bad). I downloaded the source from CPAN, compiled it, and the problem was gone. It must have been a bug in older versions.
TL;DR: If you're having problems with DBI or DBIx::Class, make sure your DBI database driver is up to date.

Mix debugger commands and ruby code evaluation

I'm currently upgrading an old project from an old version of Ruby (1.8.7) / Rails (3.0) to 1.9.3 / 3.1 (as a stepping stone to newer versions).
I'm using the debugger gem for 1.9.3 and ruby-debug for 1.8.7.
When I run the debugger, I can run commands like info variables to get the list and values of all currently scoped variables:
...
@current_phone = nil
@fields = {}
@global = {:source_type=>"pdf"}
@images = []
@index = {}
@lines = []
...
I can also run arbitrary Ruby code; a useful snippet I've been using is
File.open("/tmp/new_version", "w"){|f|f.write(#fields)}
which is useful for quickly comparing the old and new versions with a file diff program.
Can I link these together so I can write all the output of info variables to a file? It would be sufficient if I could do
tempvar = info variables
or something along those lines, of course, but that gives
*** NameError Exception: undefined local variable or method `variables' for <ClassWhatever>
instance_variables.map { |v| [v, instance_variable_get(v)] }
Not exactly a hash, but it should serve your purpose.
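Combining that snippet with the File.open trick from the question gives a sketch you can evaluate at the debugger prompt to dump everything for diffing:
# Write each instance variable and its inspected value, one per line.
File.open("/tmp/new_version", "w") do |f|
  instance_variables.each { |v| f.puts("#{v} = #{instance_variable_get(v).inspect}") }
end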

How to optimize adding new stock locations in Spree?

When I have a lot of products (3,000 products and 22,000 variants), adding a new stock location takes hours because Spree creates stock items for every variant.
During this time the variants table is locked and the whole system is unusable. Is there a workaround for this, or maybe it was fixed in some newer version of Spree?
I am using Spree 2.0.3.
I faced the same problem: with >400K variants it was impossible to add a new stock location. So I wrote a Ruby script that writes an INSERT statement for all the variants to a SQL file. The stock location itself must be created without propagate_all_variants; its id is then used below.
# lib/create_stock_items.rb
# Set this to the id of the stock location you created without
# propagating variants (the value here is just a placeholder).
stock_location_id = 1

begin
  file = File.open("stock_items.sql", "w")
rescue IOError => e
  puts e
  exit
end

file.write("INSERT INTO spree_stock_items (stock_location_id, variant_id, backorderable) VALUES \n")
variant_ids = Spree::Variant.pluck(:id)
length = variant_ids.count
variant_ids.each_with_index do |variant_id, index|
  # Every row ends with ',' except the last, which terminates the statement.
  if index + 1 == length
    file.write("(#{stock_location_id}, #{variant_id}, false);\n")
  else
    file.write("(#{stock_location_id}, #{variant_id}, false),\n")
  end
end
file.close
Then run bundle exec rails runner lib/create_stock_items.rb -e production. This creates a stock_items.sql file in the Rails root path; finally, load that SQL directly into the database (e.g. via rails dbconsole).
I know it's a bit of a hack, but it was a very fast solution for me.
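If you'd rather stay inside Rails, the generated file can also be executed through ActiveRecord (a sketch, assuming the SQL file fits in memory):
# Read the generated INSERT statement and run it on the app's database.
sql = File.read(Rails.root.join("stock_items.sql"))
ActiveRecord::Base.connection.execute(sql)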
