I am working on a Rails project that uses the Sunspot gem for Solr. I left the default behavior of auto-updating the index on model saves, but I was wondering if there is a way to temporarily disable the indexing when mass-creating objects, such as during a rake db:seed run. Ideally the seed task would add all of the objects and then perform one big reindex call for the entire table. Any ideas?
Thanks!
You could set Sunspot's session to a StubSessionProxy.
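A minimal sketch of what that could look like in db/seeds.rb (assuming a searchable Car model; adjust the names to your app):

original_session = Sunspot.session
# Stub the session so every save during seeding skips Solr entirely
Sunspot.session = Sunspot::Rails::StubSessionProxy.new(original_session)

100.times { |i| Car.create!(:name => "Car #{i}") }

# Restore the real session and do one bulk reindex at the end
Sunspot.session = original_session
Car.solr_reindex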
There's also this.
Basically, you should be able to add this to sunspot.yml:
development:
  disabled: true
This works great if you're running some tasks or queries directly on the DB. However, if you are running your app with this setting, and anywhere in your code you have something like:
Sunspot.config.pagination.default_per_page = 50
Then you'll hit an error like this:
undefined method `config' for #<Sunspot::Rails::StubSessionProxy:0x007ff6ee33df28>
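One way to guard against that (a sketch) is to only touch the config when the real session is active, since the stub proxy doesn't implement config:

unless Sunspot.session.is_a?(Sunspot::Rails::StubSessionProxy)
  Sunspot.config.pagination.default_per_page = 50
end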
I have an initializer that wants to fetch a variable from the database. This initialization is only needed when running the app with rails server or in production, as it is only used in the views.
config.default_country_id = Spree::Country.find_by(name: 'Netherlands').try(:id)
When running rake db:setup on a blank database (or any rake task) it fails, because the table where the value is being fetched does not exist (db:setup is trying to create it).
ActiveRecord::StatementInvalid: Mysql2::Error: Table 'revitalised_staging.spree_countries' doesn't exist:...
How should I write such an initializer instead? Can I wrap it in some generic condition that allows me to skip it unless the database is set up correctly? Or should I only run it when I know I will need it (i.e. whitelist a few commands or modes)?
You can ask ActiveRecord whether the table exists and set the option only if it does. I guess you can add some sort of fallback but since the value is used only in views, I wouldn't bother.
if ActiveRecord::Base.connection.table_exists? Spree::Country.table_name
  config.default_country_id = Spree::Country.find_by(name: 'Netherlands').try(:id)
end
config.default_country_id = Spree::Country.find_by(name: 'Netherlands').try(:id) if defined?(Spree::Country)
should do the trick :-)
Ref: https://www.ruby-forum.com/topic/171753
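A more defensive variant covers the case where the whole database is missing rather than just the table (a sketch; ActiveRecord::NoDatabaseError only exists on Rails 4.1+, so adjust the rescue list to your version):

config.default_country_id =
  begin
    Spree::Country.find_by(name: 'Netherlands').try(:id)
  rescue ActiveRecord::StatementInvalid, ActiveRecord::NoDatabaseError
    nil # table or database not there yet, e.g. during rake db:setup
  end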
I have a lot of spots in my code that actually call activerecord finders. For example, in a Blog engine, I might have a table of tags that correspond to an activerecord model Tag. Suppose, for some reason, that I want special logic to happen if a post is created with a tag where tag.description == 'humor'. Then I might have a method in the model:
class Tag < ActiveRecord::Base
  def self.humor_tag
    find_by_description('humor')
  end
end
Whether or not this is poor design, it causes insane amounts of problems for me when using rake commands to build a database. Say that later on, I've finished my development and I want to deploy to production. So I take the dumped schema.rb file, and then I want to load a new database structure from that schema.rb, or alternatively, just run my migrations to create a production database.
RAILS_ENV=production rake db:schema:load
The problem is that, in the production environment, the rake command seems to load every model. When it tries to load the Tag.humor_tag method, it throws an error that stops the process:
rake aborted!
Table 'production_database.tags' doesn't exist
Well of course it doesn't exist, it hasn't been created yet! I've googled around and people seem to solve this problem by either cloning the database in SQL or moving around their code just so they can run the rake task.
What are you supposed to do? It seems like there might be some configuration somewhere to let you tell rake to freaking ignore calls to database tables before any tables are created.
I would suggest replacing queries by class methods with scopes: http://guides.rubyonrails.org/active_record_querying.html#scopes
and if you have an initializer that is causing the models to load, use a proc in the scope definition, such as
class Post < ActiveRecord::Base
  scope :published, Proc.new { where(:published => true) }
end
to prevent the scope from running at initialization time.
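Applied to the Tag example from the question, the same idea would look something like this (a sketch; the humor scope name is mine):

class Tag < ActiveRecord::Base
  # Wrapped in a Proc so the query is built when the scope is called,
  # not when the class is loaded
  scope :humor, Proc.new { where(:description => 'humor') }
end

Tag.humor.first # lazy equivalent of the old Tag.humor_tag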
I'm not completely satisfied with this answer, but if anyone gets to this question and has a similar problem, this may be helpful. In order to move over a database in a situation where you would usually rake db:schema:load or just create it and run the migrations, you can alternatively load the database from SQL (or presumably other database technologies).
rake db:structure:dump
That command will dump the structure of the database into a file that will then be able to recreate it. For me, it created a file db/development_dump.sql, that contained calls to create all of the tables and indices, but didn't copy any of the data like on a normal sql dump. Then, I moved that file to my production database, and ran it:
mysql prod_database < development_dump.sql
This doesn't answer the question at hand, but it may be relevant for someone facing a similar problem.
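Relatedly, if you want Rails itself to maintain an SQL structure file instead of schema.rb, you can switch the schema format (standard Rails config; on newer versions you then load it with rake db:structure:load instead of db:schema:load):

# config/application.rb
config.active_record.schema_format = :sql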
I have some weird issues going on (for a very weird use case as I'll explain). I'm setting up a multi-tenant application using postgres schemas for data multi-tenancy.
Each company in my system will get its own schema. The way I accomplish this is with an after_commit on the model, on create, that creates a new Postgres schema and loads schema.rb into it (copied from the rake db:schema:load code) using Ruby's load.
You can see the gem here
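For context, a minimal sketch of that mechanism (assuming a schema_name attribute on Company; error handling omitted):

class Company < ActiveRecord::Base
  after_commit :create_pg_schema, :on => :create

  def create_pg_schema
    conn = ActiveRecord::Base.connection
    conn.execute(%{CREATE SCHEMA "#{schema_name}"})
    conn.schema_search_path = schema_name
    load Rails.root.join('db', 'schema.rb') # what rake db:schema:load does internally
  ensure
    conn.schema_search_path = 'public'
  end
end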
Anyway, all this works (in the console). Creating a company creates the new schema and I can switch to it etc... My problem lies in my integration tests. I have an rspec test that creates two companies like so:
before do
  @c1 = Factory :company
  @c2 = Factory :company
end
What's odd is that I start to get the logs about the db schema loading, but they're truncated. Almost as if they're happening in parallel. Here's a sample output:
>> create: database: unique_name1
-- create_table("first_table_in_schema_rb", {:force=>true})
>> create: database: unique_name2
create: database is my log line, the -- create_table is from schema.rb itself.
As you can see, the second create: database seems to happen while I'm loading schema.rb from the first company creation.
Does anyone know if load is somehow asynchronous? I know ruby doesn't have real threads, but could it be using fibres or something? This is really messing me up because when my test comes around, the postgres schema that was meant to be created doesn't seem to exist.
Rails 3.0.8
Ruby 1.9.2
I'm not 100% sure this is your problem, because I know what happens with require but not with load. The same thing happened to me once with require: require is not atomic, so loading code from a file with require can cause a race condition. Maybe that is what is happening with load, but I was not able to find any info about whether load is atomic or not.
Never mind... the issue had nothing to do with load. It was the fact that I was already connected to the wrong schema when importing schema.rb.
There was in fact an exception being thrown that was silently caught somewhere.
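For anyone debugging something similar: wrapping the load and re-raising makes a silently swallowed exception visible (a sketch):

begin
  load Rails.root.join('db', 'schema.rb')
rescue => e
  Rails.logger.error "schema.rb load failed: #{e.class}: #{e.message}"
  raise
end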
If I have the following code defined inside db/seeds.rb,
default_car = Car.create({:name => 'TOYOTA'})
User.create({:username => 'default_user', :car_id => default_car.id})
I know the default_car and user instances will be stored in the database when I run "rake db:seed".
My question is: if I run 'rake db:seed' again and again (multiple times), will the same instances be stored in the database as multiple copies, or will each instance be saved only once no matter how many times I run rake db:seed?
Better solution:
default_car = Car.find_or_create_by_name 'TOYOTA'
user = User.find_or_create_by_username 'default_user'
user.car = default_car
user.save
That way you can run "rake db:seed" multiple times without having to drop the database manually every time.
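On Rails 4+, where the dynamic find_or_create_by_name style finders are gone, the block form is the equivalent (a sketch with the same hypothetical models; note the block only runs when the record is created):

default_car = Car.find_or_create_by(name: 'TOYOTA')
User.find_or_create_by(username: 'default_user') do |u|
  u.car = default_car
end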
This is a limitation of having a single seed file. I was finding this frustrating: as the application grows you often want to add new seed data, so you end up either doing what Pascal suggests, or creating migrations with data in them, or writing rake tasks to load the seeds. To get around this I knocked up seedbank. I combine it with Pascal's approach, so I can re-run the seeds but can also target specific ones if I want to.
It depends on whether your models allow duplicate values. If they don't, seeding again will throw an error. What you can do is clear your db first before running the seeds, via rake db:reset.
I have a migration that runs an SQL script to create a new Postgres schema. When you create a new database, Postgres by default creates a schema called 'public', which is the main schema we use. The migration to create the new schema seems to work fine. However, after it has run, when Rails tries to update the 'schema_info' table it relies on, Postgres reports that the table does not exist, as if Rails is looking for it in the new schema rather than in the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to do. More information is needed: change the names if you have to, but post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that in general rails and postgresql schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it was fixed since the last time I played with it, but when I see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for dev and test: it would wipe both environments each time you run the unit tests!)
I actually started writing patches. The first one was for the indexes method in the adapter, which didn't take the search_path into account and ended up creating duplicated indexes in some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done, and I didn't have the extra time needed to fix the problems I was having with schemas.
I'm not sure I understand what you're asking exactly, but rake will be expecting to update the version of the Rails schema in the schema_info table. Check your database.yml config file; this is where rake will be looking to find the table to update.
Is it a possibility that you are migrating to a new Postgres schema and rake is still pointing to the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and its database config, it will expect to update that environment at the end of the migration.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
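For the original question ("tell rails to look at the public schema"), one standard knob is the postgresql adapter's schema_search_path setting in database.yml (a sketch; adjust the names to your setup):

production:
  adapter: postgresql
  database: my_app_production
  schema_search_path: public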
You can use pg_power. It provides an additional DSL for migrations, to create PostgreSQL schemas and more.
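Per its README, a pg_power migration looks roughly like this (a sketch; the schema name is an example):

class CreateDemographySchema < ActiveRecord::Migration
  def change
    create_schema 'demography'
  end
end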