I have a json column in my Categories table and I want to update each category record with a translation from a json file. I have built the json file so that it contains a categories array and each category has a name and a translation, like so:
{
  "categories": [
    {
      "name": "starter",
      "message": "Abs/Анти блокираща система (система против боксуване)"
    },
    {
      "name": "alternator",
      "message": "Алтернатор"
    }
    ...
  ]
}
I want every category record to be updated with the language key as well as the translation from the file, like so:
{ bg: 'translation from file' }
I have this code
path = 'app/services/translations/files/bg.json'
file = File.read(path)
data = JSON.parse(file)
language = File.basename(path, '.json') # => "bg"
Translations::CategoriesMigrator.call(file: data, language: language)
module Translations
  class CategoriesMigrator < Service
    def initialize(category_repo: Category)
      @category_repo = category_repo
    end

    def call(file:, language:)
      file['categories'].each do |category|
        found_category = @category_repo.find_by(name: category['name'])
        found_category.translated_categories[language] = category['message']
        found_category.save
      end
    end
  end
end
Right now I end up having all categories in a single category record. What am I doing wrong?
Update
My db migration looks like this:
class AddTranslatedCategoriesToCategories < ActiveRecord::Migration[5.1]
  def change
    add_column :categories, :translated_categories, :jsonb, null: false, default: {}
    add_index :categories, :translated_categories, using: :gin
  end
end
JSON/JSONB is a good choice when you have data that does not fit the relational model. In most other cases it's an anti-pattern, since it makes the data much harder to query and provides no data integrity or normalization.
This case is definitely the latter, since the underlying structure is not dynamic: to keep track of translations we just need to know the subject, the language and the translation.
class Category
  has_many :category_translations
end

# rails g model category_translation category:belongs_to locale:string text:string
class CategoryTranslation
  belongs_to :category
end
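With this schema, common queries stay in plain ActiveRecord (illustrative, untested):

# normalized: find categories that have a Bulgarian translation
Category.joins(:category_translations).where(category_translations: { locale: 'bg' })

whereas with the jsonb column you would be dropping into SQL fragments:

Category.where("translated_categories ->> 'bg' IS NOT NULL")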
You can add a compound index on category_id and locale to enforce uniqueness.
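For example, a migration along these lines (illustrative, untested) enforces that at the database level:

class AddUniqueIndexToCategoryTranslations < ActiveRecord::Migration[5.1]
  def change
    # one translation per category/locale pair
    add_index :category_translations, [:category_id, :locale], unique: true
  end
end

You can mirror it in the model with validates :locale, uniqueness: { scope: :category_id }.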
See:
https://www.2ndquadrant.com/en/blog/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
I am using an enum in Rails and PostgreSQL. In my model tests I usually verify that my Rails validations are backed by database constraints where appropriate (e.g. presence: true in the model and null: false in the DB). I do this by deliberately making the model invalid, attempting to save it without validations, and making sure it raises an ActiveRecord::StatementInvalid error.
How do I test a PostgreSQL enum from MiniTest? My usual approach isn't working: everything I try in order to set my ActiveRecord model to an invalid enum value raises an ArgumentError, even calling write_attribute directly.
Is there a way to deliberately bypass the enum restrictions in Rails? Do I need to drop down out of ActiveRecord and send an AREL or SQL query direct to the database? Is there some other approach?
# Model
class SisRecord < ApplicationRecord
  enum record_type: {
    student: "student",
    staff: "staff",
    contact: "contact"
  }

  validates :record_type, presence: true
end
# Migration
class CreateSisRecords < ActiveRecord::Migration[7.0]
  def change
    create_enum :sis_record_type, %w(student staff contact)

    create_table :sis_records do |t|
      t.enum :record_type, enum_type: :sis_record_type, null: false
      t.timestamps
    end
  end
end
# Test
require "test_helper"

class SisRecordTest < ActiveSupport::TestCase
  test "a record_type is required" do
    record = sis_records(:valid_sis_record)
    record.record_type = nil
    assert_not record.save, "Saved the SIS Record without a record type"
  end

  test "a record_type is required by the database too" do
    record = sis_records(:valid_sis_record)
    record.record_type = nil
    assert_raises(ActiveRecord::StatementInvalid) {
      record.save(validate: false)
    }
  end

  test "record_type is restricted to accepted values" do
    accepted_values = %w(student staff contact)
    record = sis_records(:valid_sis_record)
    assert_nothing_raised {
      record.record_type = accepted_values.sample
    }
    assert_raises(ArgumentError) {
      record.record_type = "something else"
    }
  end

  test "record_type is restricted to accepted values by the database too" do
    accepted_values = %w(student staff contact)
    record = sis_records(:valid_sis_record)
    record.record_type = accepted_values.sample
    assert record.save, "Record didn't save despite accepted type value '#{record.record_type}'"
    record.write_attribute(:record_type, "nonsense") ### <-- ArgumentError
    assert_raises(ActiveRecord::StatementInvalid) {
      record.save(validate: false)
    }
  end
end
I have an answer to my own question, but I'm still open to better answers.
I found a comment on a gist that showed a fairly simple way to insert a record with Arel, so for now I am using this approach:
# Just the test in question
test "record_type is restricted to accepted values by the database too" do
  accepted_values = %w(student staff contact)
  table = Arel::Table.new(:sis_records)
  manager = Arel::InsertManager.new
  # the target table is inferred from the Arel attributes passed to #insert
  manager.insert [
    [table[:record_type], accepted_values.sample],
    [table[:created_at], Time.now],
    [table[:updated_at], Time.now],
  ]
  assert_nothing_raised {
    SisRecord.connection.insert(manager.to_sql)
  }

  manager.insert [
    [table[:record_type], "other type"],
    [table[:created_at], Time.now],
    [table[:updated_at], Time.now],
  ]
  assert_raises(ActiveRecord::StatementInvalid) {
    SisRecord.connection.insert(manager.to_sql)
  }
end
created_at and updated_at are required fields so we have to add a value for those.
In my real case (not the simplified version I posted above), SisRecord belongs to Person so I had to provide a valid person ID (UUID) too. I did this by grabbing an ID from my people fixtures:
manager.insert [
  [table[:record_type], "other type"],
  [table[:person_id], people(:valid_person).id], # <--------
  [table[:created_at], Time.now],
  [table[:updated_at], Time.now],
]
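For what it's worth, Rails 6+ also ships Model.insert!, which builds the INSERT directly and skips validations and callbacks, so the database constraint still fires. A sketch I haven't tried here (same fixtures and columns as above):

test "record_type is restricted to accepted values by the database too" do
  # insert! bypasses the model layer, so the PostgreSQL enum rejects the
  # bad value and ActiveRecord raises StatementInvalid
  assert_raises(ActiveRecord::StatementInvalid) {
    SisRecord.insert!(
      record_type: "other type",
      person_id: people(:valid_person).id,
      created_at: Time.now,
      updated_at: Time.now
    )
  }
end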
I have a model JournalDay with two associated models: journal_entries and completed_to_dos.
I have created a method journal_entries_and_completed_to_dos that combines all journal_entries and completed_to_dos for a given JournalDay, but I can't figure out how to sort the result, given that I need to sort by different attribute names: journal_entries.created_at and completed_to_dos.completed_at.
I've gotten as far as the code below to create the combined array. How could I go about sorting it by journal_entries.created_at and completed_to_dos.completed_at?
class JournalDay < ApplicationRecord
  has_many :journal_entries
  has_many :completed_to_dos, -> { where.not(completed_at: [nil, ""]) }, class_name: 'ToDo'

  def journal_entries_and_completed_to_dos
    combined = []
    self.journal_entries.each do |journal_entry|
      combined << journal_entry
    end
    self.completed_to_dos.each do |completed_to_do|
      combined << completed_to_do
    end
    combined # return the merged array, not the result of the last each
  end
end
It looks like the following is working:

combined.sort_by { |obj| obj.is_a?(ToDo) ? obj.completed_at : obj.created_at }

This uses sort_by with completed_at as the key when the object is a ToDo and created_at otherwise. Note that the check has to be against the class itself: obj.class == "ToDo" would compare a Class to a String, which is always false, so everything would fall back to created_at.
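Putting the pieces together, the whole helper could collapse to something like this sketch (untested, same names as above):

def journal_entries_and_completed_to_dos
  # merge both associations into one array, then sort by the relevant timestamp
  (journal_entries.to_a + completed_to_dos.to_a).sort_by do |obj|
    obj.is_a?(ToDo) ? obj.completed_at : obj.created_at
  end
end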
I'm serializing my models with ActiveModel::Serializers. In some cases nested associations are embedded in the JSON, in others they are not. So far so good; that behaves the way I want. But I want the IDs of the nested associations to still be emitted in the cases where they aren't embedded.
E.g.:
class FooSerializer < ActiveModel::Serializer
  attributes :id, :x, :y
  belongs_to :bar
end

class BarSerializer < ActiveModel::Serializer
  attributes :id, :z
end
When I serialize a Foo object without include: [:bar] I want the result to look like:
{
  "id": 123,
  "x": 1,
  "y": 2,
  "bar": 456
}
And if bar were a polymorphic association, I'd like something like this:
{
  "id": 123,
  "x": 1,
  "y": 2,
  "bar": {"id": 456, "schema": "Bar"}
}
Actually I would like the IDs to be strings ("id": "123") because they should be black boxes for the API consumer and definitely not use JavaScript's Number type (which is double precision floating point!).
How do I do that? I didn't find any information about that.
Define the id attribute this way in FooSerializer to get it as a string:

attribute :id do
  object.id.to_s
end
"When I serialize a Foo object without include: [:bar] I want the result to look like:"
attribute :bar do
  object.bar.id.to_s
end
"And if bar would be a polymorphic association I'd like something like that:"
attribute :bar do
  {id: object.barable_id.to_s, schema: object.barable_type}
end
NOTE: I haven't tested this.
I found a way to do that by using a BaseSerializer like this:
class BaseSerializer < ActiveModel::Serializer
  attributes :id, :type

  def id
    if object.id.nil?
      nil
    else
      object.id.to_s
    end
  end

  def type
    # I also want all the emitted objects to include a "type" field.
    object.class.name
  end

  def self.belongs_to(key, *args, &block)
    attribute key do
      assoc = object.class.reflect_on_association(key)
      foreign_id = object.send(assoc.foreign_key)
      if foreign_id.nil?
        nil
      elsif assoc.polymorphic?
        {
          type: object.send(assoc.foreign_type),
          id: foreign_id.to_s
        }
      else
        foreign_id.to_s
      end
    end
    super
  end
end
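With that base class in place, a serializer like the FooSerializer from the question would (untested sketch) reduce to:

class FooSerializer < BaseSerializer
  attributes :x, :y
  belongs_to :bar # emits "bar": "456", or {"type": ..., "id": ...} if polymorphic
end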
I'm facing a case where I need to display information contained in my join table. For example:
# == Schema Information
#
# Table name: quality_inspections
#
#  id, content
#
# =============================================================================
class QualityInspection
  has_many :inspection_inspectors
  has_many :inspectors, through: :inspection_inspectors, source: :user
end
# == Schema Information
#
# Table name: inspection_inspectors
#
#  quality_inspection_id, inspector_id, inspection_date
#
# =============================================================================
class InspectionInspector
  belongs_to :quality_inspection
  belongs_to :user, foreign_key: :inspector_id
end
Then, I'd like to have the following json:
{
  "quality_inspectors": [{
    "id": 1,
    "content": "foo",
    "inspectors": [{
      "id": 1, // which is the user's id
      "first_name": "Bar",
      "last_name": "FooFoo",
      "inspection_date": "random date"
    }]
  }]
}
For now, I'm doing the following in my serializer:
module Api::V1::QualityInspections
  class InspectorSerializer < ActiveModel::Serializer
    type :inspector

    attributes :id, :first_name, :last_name, :inspection_date

    def id
      inspector.try(:public_send, __callee__)
    end

    def first_name
      inspector.try(:public_send, __callee__)
    end

    def last_name
      inspector.try(:public_send, __callee__)
    end

    private

    def inspector
      @inspector ||= object.user
    end
  end
end
Do you have any better solution? Or maybe I'm not using the right methods on my serializer?
Anyway, I'm really stuck when it comes to displaying information from a join table. Oddly, I had the same issue when using cerebris/jsonapi-resources.
EDIT: Linked issue on GH: https://github.com/rails-api/active_model_serializers/issues/1704
I don't think you need to define additional methods such as id, first_name and last_name. Instead, use an association within your serializer to get the appropriate JSON data.
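As a rough sketch of that idea (untested, serializer names assumed): serialize the join records themselves and let the association pull in the user:

class QualityInspectionSerializer < ActiveModel::Serializer
  attributes :id, :content
  # expose the join records under the "inspectors" key
  has_many :inspection_inspectors, key: :inspectors
end

class InspectionInspectorSerializer < ActiveModel::Serializer
  attributes :inspection_date
  belongs_to :user # nests the user instead of flattening first_name/last_name
end

Note this nests the user as a sub-object rather than flattening its fields into the inspector, so it trades the exact JSON shape for less boilerplate.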
Rails 4, Mongoid instead of ActiveRecord (but this shouldn't change anything for the sake of the question).
Let's say I have a MyModel domain class with some validation rules:
class MyModel
  include Mongoid::Document

  field :text, type: String
  field :type, type: String

  belongs_to :parent

  validates :text, presence: true
  validates :type, inclusion: %w(A B C)
  validates_uniqueness_of :text, scope: :parent # important validation rule for the purpose of the question
end
where Parent is another domain class:
class Parent
  include Mongoid::Document

  field :name, type: String

  has_many :my_models
end
Also I have the related tables in the database populated with some valid data.
Now, I want to import some data from a CSV file, which can conflict with the existing data in the database. The easy thing to do is to create an instance of MyModel for every row in the CSV, verify it's valid, and then save it to the database (or discard it).
Something like this:
csv_rows.each do |data| # simplified
  my_model = MyModel.new(data) # data is the hash with the values taken from the CSV row
  if my_model.valid?
    my_model.save validate: false
  else
    # do something useful, but not interesting for the question's purpose
    # just know that I need to separate validation from saving
  end
end
Now, this works pretty smoothly for a limited amount of data. But when the CSV contains hundreds of thousands of rows, this gets quite slow, because (worst case) there's a write operation for every row.
What I'd like to do is store the list of valid items and save them all at the end of the file-parsing process. So, nothing complicated:
valids = []
csv_rows.each do |data|
  my_model = MyModel.new(data)
  if my_model.valid? # THE INTERESTING LINE: this "if" checks only against the database; what happens if it conflicts with some other my_models not saved yet?
    valids << my_model
  else
    # ...
  end
end

if valids.size > 0
  # bulk insert of all data
end
That would be perfect if I could be sure that the data in the CSV does not contain duplicated rows or data that goes against the validation rules of MyModel.
My question is: how can I check each row against the database AND the valids array, without having to repeat the validation rules defined into MyModel (avoiding to have them duplicated)?
Is there a different (more efficient) approach I'm not considering?
What you can do is validate each row as a model, grab its attributes hash and push it to the valids array, then do a bulk insert of the values using MongoDB's insert:
valids = []
csv_rows.each do |data|
  my_model = MyModel.new(data)
  if my_model.valid?
    valids << my_model.attributes
  end
end

MyModel.collection.insert(valids, continue_on_error: true)
This won't, however, prevent NEW duplicates within the CSV itself... for that you could do something like the following, using a hash and a compound key:
valids = {}
csv_rows.each do |data|
  my_model = MyModel.new(data)
  if my_model.valid?
    valids["#{my_model.text}_#{my_model.parent_id}"] = my_model.as_document
  end
end
Then either of the following will work, DB Agnostic:
MyModel.create(valids.values)
Or MongoDB'ish:
MyModel.collection.insert(valids.values, continue_on_error: true)
OR EVEN BETTER
Ensure you have a unique index on the collection:
class MyModel
  ...
  index({ text: 1, parent_id: 1 }, { unique: true, dropDups: true })
  ...
end
Then just do the following:
MyModel.collection.insert(csv_rows, continue_on_error: true)
http://api.mongodb.org/ruby/current/Mongo/Collection.html#insert-instance_method
http://mongoid.org/en/mongoid/docs/indexing.html
TIP: if you anticipate thousands of rows, I recommend doing this in batches of 500 or so.
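A minimal sketch of that batching, reusing the valids hash from above (untested):

valids.values.each_slice(500) do |batch|
  MyModel.collection.insert(batch, continue_on_error: true)
end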