I'm using https://github.com/elastic/elasticsearch-rails,
and want to change indexes.
I changed
indexes :status, type: :keyword
to
indexes :status_id, type: :integer
and executed in rails console
Diary.__elasticsearch__.create_index! force: true
Diary.import
But the mapping in Elasticsearch still shows status instead of status_id.
What do you think is wrong?
I am trying to index a particular model using Ruby on Rails and Elasticsearch, but even if I use index: false the email still shows up in the index. I do not want the email to be indexed; what else can I do to prevent the email attribute from being indexed?
mappings dynamic: false do
indexes :author, type: 'text'
indexes :title, type: 'text'
indexes :email, index: false
end
I then create the index and import records using:
Book.__elasticsearch__.create_index!
followed by Book.import force: true
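A likely explanation: index: false only stops Elasticsearch from building an index for the field; the value is still stored in the document's _source, so it still shows up in search hits. To keep email out of the document entirely, you can override as_indexed_json, the method elasticsearch-model uses to serialize records. A sketch, assuming a standard ActiveRecord Book model:

```ruby
class Book < ApplicationRecord
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  settings do
    mappings dynamic: false do
      indexes :author, type: 'text'
      indexes :title,  type: 'text'
    end
  end

  # elasticsearch-model calls this to build the JSON sent to Elasticsearch;
  # excluding :email here keeps it out of _source entirely.
  def as_indexed_json(options = {})
    as_json(except: [:email])
  end
end
```

After adding this, recreate the index and re-import as in the question so existing documents are rewritten without the field.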
I'm currently in the process of writing code to convert our databases default ID columns to a UUID format.
One example of the code that I have for our migration is
def up
remove_reference :examples, :user, null: false, foreign_key: true
add_column :users, :uuid, :uuid, default: "gen_random_uuid()", null: false
change_table :users do |t|
t.remove :id, type: :id
t.rename :uuid, :id
end
execute "ALTER TABLE users ADD PRIMARY KEY (id);"
add_reference :examples, :user, null: false, foreign_key: true, type: :uuid
end
Essentially this allowed me to convert my ID column to a UUID format.
I created a down function so I would be able to roll back, but it fails with this error: ERROR: column "user_id" of relation "examples" contains null values
I realize there would be an issue: once there is data in the database, it is unable to roll back and recreate the correct references. Does anyone have any ideas on how I should work on my down function?
def down
remove_reference :examples, :user, null: false, foreign_key: true, type: :uuid
execute 'ALTER TABLE users DROP CONSTRAINT users_pkey'
add_column :users, :new_id, :primary_key
change_table :users do |t|
t.remove :id, type: :uuid
t.rename :new_id, :id
end
add_reference :examples, :user, null: false, foreign_key: true
end
Does anyone have any suggestions on how I should proceed with this? The original migration was in one change function, but it would be unable to roll back due to the execute block.
Be careful doing this kind of database change in one shot. I would suggest breaking it into steps.
First step (new column uuid)
Create the new column for the uuid
def up
add_column :users, :uuid, :uuid, default: "gen_random_uuid()", null: false
add_column :examples, :user_uuid, :uuid
end
Adapt your code to populate the examples.user_uuid column from the newly created column; you can easily achieve this by creating a model callback that fills user_uuid automatically.
If your database has GBs of data, consider adding the uuid as nullable and populating the column using queues or in batches. New records will already be filled.
We now have the two new columns, with new data coming in and everything in sync.
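The callback mentioned above could look like this (a sketch; the Example model, the user_uuid column, and the batch size are assumptions — adapt them to your schema):

```ruby
class Example < ApplicationRecord
  belongs_to :user

  # Keep the new column in sync while both columns coexist.
  before_save :fill_user_uuid

  private

  def fill_user_uuid
    self.user_uuid = user&.uuid
  end
end

# One-off backfill for existing rows, done in batches to avoid long locks
# (Postgres correlated-subquery form):
#
# Example.in_batches(of: 10_000) do |batch|
#   batch.update_all(
#     "user_uuid = (SELECT uuid FROM users WHERE users.id = examples.user_id)"
#   )
# end
```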
Second step (the renaming)
Once the new columns are populated and working, it is time to rename the columns and associate the new keys.
def up
change_table :examples do |t|
t.rename :user_id, :old_user_id
t.rename :user_uuid, :user_id
end
change_table :users do |t|
t.rename :id, :old_id
t.rename :uuid, :id
end
execute "ALTER TABLE users ADD PRIMARY KEY (id);"
add_reference :examples, :user, null: false, foreign_key: true, type: :uuid
end
def down
# ...
end
Remember to remove or review the model code changed before, so it supports the new columns (or ignores the old ones).
Now we have changed the columns without losing the old reference.
Be careful here: if your database is big, this operation may lock your tables. Perhaps you will need a maintenance window here.
Third step (removing the old columns)
Now we can remove the old columns and everything should work fine
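The third step could be a small migration like this (a sketch; the column names follow the renames from the second step):

```ruby
def up
  # The old integer keys are no longer referenced anywhere, so drop them.
  remove_column :examples, :old_user_id
  remove_column :users, :old_id
end
```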
note: Always be careful when making this kind of change to your database. It is very risky to perform something like this. If you want to go on, simulate the renaming step several times. Take a snapshot of your database before performing it, and inform your clients that there might be downtime in your service.
I don't know why you want to change your primary keys to UUIDs; it costs the database a lot to query and join on them, since comparing a UUID is more expensive than comparing an integer. Consider just creating a new indexed uuid column in your tables and letting the database keep joining on the integer keys.
So I have been trying to make this work for a few hours now; nothing seems to work.
I have mappings defined in my model:
settings do
mappings dynamic: false do
indexes :title, type: 'text'
indexes :description, type: 'text'
indexes :user, type: 'text' do
indexes :name, type: 'text'
end
end
end
But when I do:
Podcast.__elasticsearch__.delete_index! force: true
Podcast.__elasticsearch__.create_index! force: true
Podcast.__elasticsearch__.import force: true
and visit: http://localhost:9200/podcasts/_search?pretty=true&q=*:*&size=1000
I see all of the model data poured into the index (I need only title, description and user name).
What is the problem here?
When indexing, elasticsearch-rails uses as_indexed_json. Here's my example:
def as_indexed_json(options = {})
as_json(include: {
user: {
only: [
:id,
:slug,
:name
]
}
})
end
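If only title, description and the user's name should be indexed, the top-level fields can be restricted the same way (a sketch; adjust the only: lists to your schema):

```ruby
def as_indexed_json(options = {})
  as_json(
    # Limit the record's own attributes to the fields in the mapping...
    only:    [:title, :description],
    # ...and limit the embedded user to just the name.
    include: { user: { only: [:name] } }
  )
end
```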
So I read this question, answer and the comments, but it doesn't answer my case, which is: what to do when one of the columns is a foreign key?
Here is my original migration to create the table in question:
class CreateTemplates < ActiveRecord::Migration[5.1]
def change
create_table :templates, id: :uuid do |t|
t.references :account, type: :uuid, foreign_key: true
t.string :name
t.text :info
t.string :title
t.timestamps
end
end
end
Since account_id is a foreign key (and identifies the customer), it will appear in almost all (99%) of queries on this table.
Now it has been decided that name should be unique to account, so the model has been updated:
validates_uniqueness_of :name, scope: [:account]
So once I add the joint index:
add_index :templates, [:name, :account_id], unique: true
should I delete the index on account_id?
I ask because in SQLite (see this), it seems the answer would be that I don't need the single index on account_id, and should create my new index with account_id in the first position:
add_index :templates, [:account_id, :name], unique: true
I'm using postgres, so does the same idea apply?
Whether you can drop it depends on the column order: a composite index can only serve queries that filter on its leftmost columns. So if you have this:
add_index :templates, [:name, :account_id], unique: true
then you should not delete the original :account_id index, because account_id is the second column and queries filtering only on account_id cannot use the composite index. With [:account_id, :name] instead, the composite index covers those queries and the single-column index becomes redundant.
I recommend reading about index implementations. It's pretty interesting and you can learn a lot from it.
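If you go with account_id in the first position, the change could look like this (a hypothetical migration; it assumes the single-column index was created by t.references in the original migration):

```ruby
class ReplaceTemplatesAccountIdIndex < ActiveRecord::Migration[5.1]
  def change
    # Unique constraint scoped to the account, with account_id leftmost.
    add_index :templates, [:account_id, :name], unique: true

    # Queries on account_id alone can use the leftmost prefix of the
    # composite index above, so the single-column index is redundant.
    remove_index :templates, :account_id
  end
end
```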
I have a Rails app running Mongoid on Heroku and I need to set up some indexes to make the database faster. I've tried to read about it in several places, and I know Mongoid has some built-in functions for indexing, but I'm not sure what to apply them to and how often to index.
It is mostly my Design model I want to index:
scope :full_member_and_show, where(full_member: true).and(show: true).desc(:created_at)
scope :not_full_member_and_show, where(full_member: false).and(show: true).desc(:created_at)
embeds_many :comments
belongs_to :designer
search_in :tags_array
attr_accessible :image, :tags, :description, :title, :featured, :project_number, :show
field :width
field :height
field :description
field :title
field :tags, type: Array
field :featured, :type => Boolean, :default => false
field :project_number, :type => Integer, :default => 0
field :show, :type => Boolean, :default => true
field :full_member, :type => Boolean, :default => false
field :first_design, :type => Boolean, :default => false
What do I need to index, how exactly do I do it with Mongoid and how often should I do it?
ERROR UPDATE
If try to index the below:
index({ full_member: 1, show: 1 }, { unique: true })
It throws me this error:
Invalid index specification {:full_member=>1, :show=>1}; should be either a string, symbol, or an array of arrays.
You don't need to index periodically: once you've added an index, MongoDB keeps that index up to date as the collection changes. This is the same as an index in MySQL or Postgres (you may have been thinking of something like Solr).
What to index depends on what queries you'll be making against your dataset. Indexes do carry some overhead when you do updates and consume disk space so you don't want to add them when you don't need them.
You tell Mongoid what indexes you want with index declarations, for example:
class Person
include Mongoid::Document
index :city
end
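For a compound index matching the scopes in the question: the error quoted there ("should be either a string, symbol, or an array of arrays") comes from older Mongoid versions, which expect an array of [field, direction] pairs rather than a hash. A sketch, assuming Mongoid 2.x:

```ruby
class Design
  include Mongoid::Document

  field :full_member, type: Boolean, default: false
  field :show,        type: Boolean, default: true

  # Mongoid 2.x compound-index syntax: an array of [field, direction] pairs.
  # unique: true is deliberately omitted, since many documents share the
  # same boolean values and a unique constraint would reject them.
  index [[:full_member, 1], [:show, 1], [:created_at, -1]]
end
```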
There are loads of examples in the mongoid docs for the various kinds of indexes mongo supports.
Then you run
rake db:mongoid:create_indexes
This works out which indexes you want (based on the calls to index in your models) and then ensures that they exist in the db, creating them if necessary. In development you'd run this after adding indexes to your models. In production it makes sense to run it as part of your deploy (you only need to if you've added indexes since the last deploy, but it's way easier to just do it systematically).
There's a lot of information about how MongoDB uses indexes in the documentation.