Create partial index for postgresql jsonb field - ruby-on-rails

I am trying to add a partial index to a PostgreSQL jsonb column named email_provider. I have tried adding a partial index on the column as shown below, but it throws different errors: sometimes PG::UndefinedColumn: ERROR: column "email_povider" does not exist, and at other times PG::AmbiguousFunction: ERROR: operator is not unique: unknown -> unknown
The partial index in the Rails 5 migration looks like this:
add_index :accounts, :name, name: "index_accounts_on_email_provider_kind", using: :gin, where: "('email_provider' -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com') AND ('email_povider' -> 'amazon ses' ->> 'smtp_host' = 'www.awses.com')"
The json for the email_provider column looks like this:
{
  "email_provider": {
    "sparkpost": {
      "smtp_host": "www.sparkpost.com",
      "smtp_port": ""
    },
    "aws ses": {
      "smtp_host": "www.amazon/ses.com ",
      "smtp_port": " ",
      "username": " ",
      "password": " "
    }
  }
}
The table looks like this:
class CreateAccounts < ActiveRecord::Migration[5.0]
  def change
    create_table :accounts do |t|
      t.string :name, null: false
      t.jsonb :email_provider, null: false
      t.jsonb :social_account, default: '[]', null: false
      t.timestamps
    end
    add_index :accounts, :name, name: "index_accounts_on_email_provider_kind", using: :gin, where: "('email_provider' -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com') AND ('email_povider' -> 'amazon ses' ->> 'smtp_host' = 'www.awses.com')"
    add_index :accounts, :name, name: "index_accounts_on_social_account_type", using: :gin, where: " 'social_account' #> [{'type': 'facebook'}] AND 'social_account' #> [{'type': 'twitter'}]"
  end
end
Update
Based on a slight adjustment to the accepted answer below, the code I am using to create a btree (not gin) index in Rails ActiveRecord is shown below. It creates a btree index because we are indexing the name column, which is of string datatype and not jsonb, as described here:
add_index :accounts, :name, name: "index_accounts_on_name_email_provider", where: "email_provider -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com' AND email_provider -> 'amazon ses' ->> 'smtp_host' = 'www.awses.com' "
add_index :accounts, :name, name: "index_accounts_on_name_social_account", where: " social_account #> '[{\"type\": \"facebook\"}]'::jsonb AND social_account #> '[{\"type\": \"twitter\"}]'::jsonb"

I think you need something like this:
add_index :accounts, :email_provider, name: "index_accounts_on_email_provider_kind", using: :gin, where: "(email_provider -> 'email_provider' -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com') AND (email_provider -> 'email_povider' -> 'amazon ses' ->> 'smtp_host' = 'www.awses.com')"
I've changed the second argument of add_index to :email_provider because that's the column name. Also, for the where clause, I changed
'email_provider' -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com'
to
email_provider -> 'email_provider' -> 'sparkpost' ->> 'smtp_host' = 'www.sparkpost.com'
because the -> operator expects its left argument to be a json(b) value, but you provided a string. So, e.g., email_provider -> 'email_provider' extracts the value stored under the 'email_provider' key from the column called email_provider. Note that this last line can be written more compactly as:
email_provider #>> '{email_provider,sparkpost,smtp_host}' = 'www.sparkpost.com'
by using #>>, which extracts a "path" from a json(b) object. So the add_index statement can be written as:
add_index :accounts, :email_provider, name: "index_accounts_on_email_provider_kind", using: :gin, where: "(email_provider #>> '{email_provider,sparkpost,smtp_host}' = 'www.sparkpost.com') AND (email_provider #>> '{email_provider,amazon ses,smtp_host}' = 'www.awses.com')"
For your second index, you should try something like:
where: " social_account -> 'social_account' #> '[{\"type\": \"facebook\"}]'::jsonb AND social_account -> 'social_account' #> '[{\"type\": \"twitter\"}]'::jsonb"
In this case, I did the same thing with the column as in the first case. I also changed the right argument of #>, which has to be a jsonb value. So you must define it as a JSON string, which requires double quotes for strings (notice that we have to escape them for Ruby), and then type-cast that string to jsonb to get the desired type.
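As a rough Ruby analogy (purely illustrative; this is not what Postgres executes), the `#>>` path lookup behaves like `Hash#dig` on the parsed document:

```ruby
require "json"

# The question's email_provider payload, abbreviated.
doc = JSON.parse(<<~JSON)
  {
    "email_provider": {
      "sparkpost": { "smtp_host": "www.sparkpost.com" },
      "aws ses":   { "smtp_host": "www.amazon/ses.com" }
    }
  }
JSON

# Analogue of: email_provider #>> '{email_provider,sparkpost,smtp_host}'
smtp_host = doc.dig("email_provider", "sparkpost", "smtp_host")
puts smtp_host  # prints "www.sparkpost.com"
```

Each element of the `'{...}'` path literal corresponds to one key traversed into the document, which is why the first path element has to repeat the top-level "email_provider" key that is nested inside the column of the same name.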

Related

How to query records where jsonb string field is in array

I want to query an ActiveRecord collection and select records where a string value inside a jsonb field is included in a given array.
Model:
create_table "dishes", force: :cascade do |t|
  ...
  t.jsonb "params"
  ...
end
Content of params always has this structure:
{"procart_id"=>"4", "procart_config"=>{}}
I have a given array:
availabilities = ['4', '8', '11']
How can I query Dish models where params.procart_id is in the availabilities array?
I tried the following:
Dish.where("params::jsonb ->> 'procart_id' = any (array[?])", availabilities)
But it gave me the error:
ActiveRecord::StatementInvalid: PG::InvalidTextRepresentation: ERROR: invalid input syntax for type json
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1:
: SELECT "dishes".* FROM "dishes" WHERE (params::jsonb ->> 'procart_id' = any (array['4', '8', '11']))
Try this; it might work:
Dish.where("params::jsonb ->> 'procart_id' = any (array[?]::jsonb[])", availabilities)
You can modify the query params before querying the Dish model and query on procart_id as below:
availabilities = ['4', '8', '11']
query_params = availabilities.map { |availability| { procart_id: availability } }
Dish.where(params: query_params)
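For intuition only (plain Ruby, not SQL): `->> 'procart_id'` extracts the value as text, so the membership test is a plain string comparison against the array elements, which is what `= ANY (array[...])` performs on the database side:

```ruby
require "json"

availabilities = ["4", "8", "11"]
params_json = '{"procart_id":"4","procart_config":{}}'

# ->> 'procart_id' yields the value as text ("4"), so the membership
# check is string equality, like: = ANY (array['4', '8', '11'])
procart_id = JSON.parse(params_json)["procart_id"]
match = availabilities.include?(procart_id)
puts match  # prints true
```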

Why GraphQL-Ruby does not understand dynamically generated schema definitions?

In Rails app, I have the following part of the schema defined:
# frozen_string_literal: true

module Types
  module KVInfo
    def self.kv_value_scalar(typename, raw_type: String, typeid:)
      clazz = Class.new(BaseObject) do
        graphql_name "KVEntry#{typename}Value"
        field :value, raw_type, null: false
      end
      clazz.define_singleton_method(:typeid) { typeid }
      clazz.define_singleton_method(:typename) { typename }
      clazz
    end

    # typeids taken from enum in (.../kv_info.ts)
    KVScalars = [
      kv_value_scalar('String', typeid: 0),
      kv_value_scalar('Markdown', typeid: 1),
      kv_value_scalar(
        'Date',
        raw_type: GraphQL::Types::ISO8601DateTime,
        typeid: 2
      ),
      kv_value_scalar('Country', typeid: 3),
      kv_value_scalar('Address', typeid: 5)
    ].freeze
    KVScalars.each { |t| KVInfo.const_set(t.graphql_name, t) }

    class KVScalarValue < BaseUnion
      possible_types(*KVScalars)

      def self.resolve_type(obj, _ctx)
        KVScalars.select { |t| t.typeid == obj['type'] }.first
      end
    end

    def self.kv_value_array(subtype)
      clazz = Class.new(BaseObject) do
        graphql_name "KVEntryArray#{subtype.typename}"
        field :value, [subtype], null: false
      end
      clazz.define_singleton_method(:sub_typeid) { subtype.typeid }
      clazz
    end

    KVArrays = KVScalars.map { |s| kv_value_array(s) }
    KVArrays.each { |t| KVInfo.const_set(t.graphql_name, t) }

    class KVArrayValue < BaseUnion
      possible_types(*KVArrays)

      def self.resolve_type(obj, _ctx)
        KVArrays.select { |t| t.sub_typeid == obj['subtype'] }.first
      end
    end

    class KVValue < BaseUnion
      # PP HERE
      possible_types(KVArrayValue, KVScalarValue)

      def self.resolve_type(obj, _ctx)
        obj['type'] == 4 ? # typeid for array
          KVArrayValue :
          KVScalarValue
      end
    end

    class KVEntry < BaseObject
      field :name, String, null: false
      field :value, KVValue, null: false
    end
  end
end
While running a Rake task that dumps the whole schema to a file to be consumed by the frontend, I see the type denoted by the KVEntry class having only the name field.
If I put all possible types in the KVValue class like such:
pp(*KVScalars, *KVArrays)
possible_types(*KVScalars, *KVArrays)
it works and generates types correctly.
But note the pp line above: it does not work without that line (???).
Also, if I keep it as is (with nested unions), it does not work regardless of the number and positions of pp calls. When stepping through with the debugger, all classes are loaded correctly, including the generated ones, but the schema still lacks the required types.
So the question is: why are the types not processed, and how can pp affect this process in any sense?
P.S. The data format is fixed by frontend and no way subject to change.
The problem was in the nested unions. GraphQL does not support them. As for the plain union not working, I still have no idea of the reasons for such behavior, but it fixed itself after the N-th restart.
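The dynamic-class machinery itself (anonymous classes carrying metadata via define_singleton_method, selected at resolve time) is independent of graphql-ruby and can be sketched with the standard library alone; the names below are illustrative only:

```ruby
# Stdlib-only sketch of the pattern used in the question: build anonymous
# classes, attach metadata as singleton methods, then pick one by metadata.
def make_scalar(typename, typeid)
  clazz = Class.new
  clazz.define_singleton_method(:typeid)   { typeid }
  clazz.define_singleton_method(:typename) { typename }
  clazz
end

scalars = [make_scalar("String", 0), make_scalar("Date", 2)]

# Analogue of resolve_type: map the stored typeid back to a class.
resolved = scalars.find { |t| t.typeid == 2 }
puts resolved.typename  # prints "Date"
```

Note that `find` already returns a single class (or nil), which is what a resolve_type hook must return; `select` alone would return an array.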

Elixir not altering table in database migration

I have a table called leads. It stores items such as name, quote_amount, lead_source_id and specifications.
My current update method:
def update(conn, %{"id" => _id, "data" => data} = params, user) do
  # data has a structure like this: data["attributes"][value_to_be_stored]
  # Example: data["attributes"]["specifications"] for the specifications value
  lead =
    Lead.leads(user)
    |> Lead.with_id(params)
    |> Repo.one!()

  attrs = JaSerializer.Params.to_attributes(data)
  attrs = conversion_of_company_id_param_if_necessary(attrs, Map.get(attrs, "company_id"), lead)
  changeset = Lead.changeset(lead, attrs, user)

  case Repo.update(changeset) do
    {:ok, lead} ->
      lead = Repo.get_by(Lead, id: lead.id) |> Repo.preload([:zipcode])
      render(conn, "show.json-api", data: lead)

    {:error, changeset} ->
      conn
      |> put_status(:unprocessable_entity)
      |> render(PortalApi.ChangesetView, "error.json", changeset: changeset)
  end
end
schema "leads" do
  field(:first_name, :string)
  field(:last_name, :string)
  field(:email, :string)
  field(:next_step, :string)
  field(:activity_date, :date)
  field(:phone, :string)
  field(:quote_number, :string)
  field(:quote_amount, :integer)
  field(:po_number, :string)
  field(:po_amount, :integer)
  field(:closed_reason, :string)
  field(:specifications, :string)
  field(:zip, :string, virtual: true)
  field(:local_dealer, :boolean, default: false)
  field(:claimed, :boolean, default: false)
  field(:state, :string, default: "new")
  field(:message, :string)
  field(:uuid, :string)
  field(:reference_number, :string)
  field(:business_name, :string)
  field(:project_reference, :string)
  field(:quotable, :boolean, default: false)
  field(:lead_url, :string)
  field(:lead_details, :string)
  field(:details, :map)
  field(:printed, :boolean, default: false)
  belongs_to(:internal_rep, PortalApi.User)
  belongs_to(:external_rep, PortalApi.User)
  belongs_to(:company, PortalApi.Company)
  belongs_to(:proposed_company, PortalApi.Company)
  belongs_to(:lead_source, PortalApi.LeadSource)
  belongs_to(:zipcode, PortalApi.Zipcode, on_replace: :nilify)
  belongs_to(:quote_request, PortalApi.QuoteRequest)
  has_many(:attachments, PortalApi.Attachment)
  has_many(:activity_logs, PortalApi.ActivityLog)
  has_many(:notes, PortalApi.Note)
  timestamps()
end
Migration adding the specifications column:
def change do
  alter table(:leads) do
    add :specifications, :string
  end
end
This update seems to update everything normally except specifications.
I've checked everything from the model to the migrations and controllers, but it just doesn't store that column. I can assure you that the Postgres table is fine and all required columns exist.
What could be the issue? How can I solve it?
Thank you!

How to write a migration to convert JSON field to Postgres Array for querying in rails?

There is an old table with a column of type json, but only arrays are stored in this column.
Even though I am storing arrays, I am not able to query this field using the ANY keyword (which works on array-type columns in Postgres, as in this post).
E.g. let's say ["Apple", "Orange", "Banana"] is stored as JSON in the fruits column; I want to query like Market.where(":name = ANY(fruits)", name: "Orange") and get all the markets with oranges available.
Can anyone please help me write a migration to change the existing column (type: json) to array type?
One example assuming a json field:
\d json_test
              Table "public.json_test"
  Column   |  Type   | Collation | Nullable | Default
-----------+---------+-----------+----------+---------
 id        | integer |           |          |
 fld_json  | json    |           |          |
 fld_jsonb | jsonb   |           |          |
 fruits    | json    |           |          |
insert into json_test (id, fruits) values (1, '["Apple", "Orange", "Banana"] ');
insert into json_test (id, fruits) values (2, '["Pear", "Orange", "Banana"] ');
insert into json_test (id, fruits) values (3, '["Pear", "Apple", "Banana"] ');
WITH fruits AS
(SELECT
id, json_array_elements_text(fruits) fruit
FROM json_test)
SELECT
id
FROM
fruits
WHERE
fruit = 'Orange';
id
----
1
2
UPDATE: Method to convert a JSON array into a Postgres array:
SELECT
  array_agg(fruit)
FROM
  (SELECT
     id, json_array_elements_text(fruits) AS fruit
   FROM
     json_test) AS elements
GROUP BY
  id;
array_agg
-----------------------
{Pear,Apple,Banana}
{Pear,Orange,Banana}
{Apple,Orange,Banana}
This assumes the JSON array has homogeneous elements as that is a requirement for Postgres arrays.
A simpler method of finding rows that have 'Orange' in the json field:
SELECT
id, fruits
FROM
json_test
WHERE
fruits::jsonb ? 'Orange';
id | fruits
----+--------------------------------
1 | ["Apple", "Orange", "Banana"]
2 | ["Pear", "Orange", "Banana"]
class AddArrayFruitsToMarkets < ActiveRecord::Migration[6.0]
  def up
    rename_column :markets, :fruits, :old_fruits
    add_column :markets, :fruits, :string, array: true
    Market.reset_column_information
    # json_array_elements is a set-returning function and cannot be assigned
    # directly in an UPDATE; aggregate the elements into a Postgres array.
    Market.update_all('fruits = ARRAY(SELECT json_array_elements_text(old_fruits))')
  end
end
class RemoveJsonFruitsFromMarkets < ActiveRecord::Migration[6.0]
  def up
    remove_column :markets, :old_fruits
  end
end
But really, if you're going to do something, why not create tables instead? Otherwise you're not really improving anything:
class Fruit < ApplicationRecord
  validates :name, presence: true

  has_many :market_fruits
  has_many :markets, through: :market_fruits
end

class MarketFruit < ApplicationRecord
  belongs_to :market
  belongs_to :fruit
end

class Market < ApplicationRecord
  has_many :market_fruits
  has_many :fruits, through: :market_fruits

  def self.with_fruit(name)
    joins(:fruits)
      .where(fruits: { name: name })
  end

  def self.with_fruits(*names)
    left_joins(:fruits)
      .group(:id)
      .where(fruits: { name: names })
      .having('COUNT(fruits.*) >= ?', names.length)
  end
end

How to compare if a record exist with json data type field?

I want to check if a record already exists in the database, but I have one json data type field and I need to compare it too.
When I check using exists?, I get the following error:
SELECT 1 AS one FROM "arrangements"
WHERE "arrangements"."deleted_at" IS NULL AND "arrangements"."account_id" = 1
AND "arrangements"."receiver_id" = 19 AND "config"."hardware" = '---
category: mobile
serial: ''00000013''
vehicle:
' AND "arrangements"."recorded" = 't' LIMIT 1
PG::UndefinedTable: ERROR: missing FROM-clause entry for table "config"
LINE 1: ...id" = 1 AND "arrangements"."receiver_id" = 19 AND "config"."...
^
Code that I am using to check whether it exists:
@arrangement = Arrangement.new({account_id: receiver.account.id, receiver_id: receiver.id, config: params[:config], recorded: true})

if Arrangement.exists?(account_id: @arrangement.account_id, receiver_id: @arrangement.receiver_id, config: @arrangement.config, recorded: @arrangement.recorded)
  puts 'true'
end
I already tried:
if Arrangement.exists?(@arrangement)
  puts 'true'
end
But it always returns false.
Table:
create_table :arrangements do |t|
  t.references :account, index: true
  t.references :receiver, index: true
  t.json :config, null: false
  t.boolean :recorded, default: false
  t.datetime :deleted_at, index: true
  t.integer :created_by
  t.timestamps
end
You cannot compare json values directly. Try comparing specific values inside the json instead:
where("arrangements.config->>'category' = ?", params[:config][:category])
Look in the PostgreSQL docs for other JSON functions and operators.
This will convert both the field (in case it is plain json) and the parameter (which will be a JSON string) to jsonb, and then perform a comparison of everything they contain:
def existing_config?(config)
  Arrangement.where("config::jsonb = ?::jsonb", config.to_json).any?
end
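The reason the jsonb cast helps can be seen in plain Ruby (illustrative only, using JSON.parse as the analogue of the ::jsonb cast): jsonb normalizes whitespace and key order, so semantically equal documents compare equal even when their text forms differ:

```ruby
require "json"

a = '{"category":"mobile","serial":"00000013"}'
b = '{ "serial": "00000013", "category": "mobile" }'

# Compared as raw text (roughly how plain json behaves), they differ:
text_equal = (a == b)                           # false
# Parsed into structures (the jsonb-style comparison), they are equal:
jsonb_equal = (JSON.parse(a) == JSON.parse(b))  # true
puts jsonb_equal
```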
