In my Rails 5 app, I read in a feed of products. In the JSON, when the price is over $1,000, the value contains a comma, like "1,000".
My code seems to be truncating it, so it stores 1 instead of 1000.
All other fields are storing correctly. Can someone please tell me what I'm doing wrong?
In this example, reg_price saves as 2 instead of 2590.
json sample (for reg_price field):
[
  {
    "reg_price": "2,590"
  }
]
schema
create_table "products", force: :cascade do |t|
  t.decimal "reg_price", precision: 10, scale: 2
end
model
response = open_url(url_string).to_s
products = JSON.parse(response)

products.each do |product|
  record = Product.new(
    reg_price: product['reg_price']
  )
  record.save
end
You are not doing anything wrong; decimal columns simply don't understand a comma as a thousands separator. I'm not sure there is a particularly nice way to fix it, but one option is to override the attribute writer and strip the comma there:
def reg_price=(reg_price)
  self[:reg_price] = reg_price.gsub(',', '')
end
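As a quick sanity check (my own example, not from the answer), assigning the feed value through the overridden writer should now survive the decimal cast:

product = Product.new(reg_price: "2,590")
product.reg_price   # => 2590 (a BigDecimal; the comma was stripped before the decimal cast)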
The reason this is happening has nothing to do with Rails.
JSON is a pretty simple document structure and doesn't have any support for number separators. The values in your JSON document are strings.
When you receive a String as input and you want to store it as a number, you need to cast it to the appropriate type.
Ruby has built-in support for this, and Rails is using it: "1".to_i #=> 1
The particular heuristic Ruby uses to convert a string to an integer is to read digits up to the first non-numeric character and discard the rest. Commas are non-numeric, at least by default, in Ruby, so "2,590".to_i #=> 2, which is exactly the truncation you're seeing.
The solution is to convert the string value in your JSON to a number using another method. You can do this in any of these ways:
Cast the string to an integer before sending it to your ActiveRecord model.
Alter the string in such a way that the default Ruby casting will cast the string into the expected value.
Use a custom caster to handle the casting for this particular attribute (inside of ActiveRecord and ActiveModel).
The solution proposed by @Danil follows option 2 above, and it has some shortcomings (as @tadman pointed out).
A more robust way of handling this without getting down in the mud is to use a library like Delocalize, which will automatically handle numeric string parsing and casting with consideration for separators used by the active locale. See this excellent answer by Benoit Garret for more information.
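For completeness, here is a rough sketch of option 3 using the Rails 5 attribute API. The CommaTolerantDecimal class name is made up for illustration; this is an assumption about your setup rather than code from the answers above:

# a decimal type that tolerates thousands separators in incoming strings
class CommaTolerantDecimal < ActiveRecord::Type::Decimal
  def cast(value)
    value = value.delete(',') if value.is_a?(String)
    super
  end
end

class Product < ApplicationRecord
  # use the custom type only for this attribute
  attribute :reg_price, CommaTolerantDecimal.new
end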
I don't know how to solve this problem.
I've faced this problem 3 times, and each time I put it on my todo list, but even though I tried to find a solution, I couldn't.
For example, I'm trying to create a query from dynamic variables, as in this example:
User.search(first_name_start: 'K')
There are 3 parts in this example:
1) first_name - my model attribute
2) start - the query type (start/end/cont)
3) 'K' - the value
I was able to build dynamic ActiveRecord queries using static symbols, but how am I supposed to handle dynamic input?
Thanks in advance
EDIT: ADDITIONAL INFORMATION
Let me show you some pseudo-code:
varArray.each_with_index do |x, index|
  # varArray[index] is a model attribute / a column in my db, e.g. "first_name"
  # filterArray[index] is a filter type, e.g. start/end/cont
  # valArray[index] is a string value, e.g. 'geo' or 'paul'
  queryString = varArray[index] + "_" + filterArray[index]
  User.where(queryString => valArray[index])
end
I tried to use send(variable), but that didn't help me either, so I don't know how I should proceed.
This is one of the few cases where the fancy new Ruby 1.9 syntax for defining hashes doesn't cut it. You have to use the traditional hashrocket (=>), which allows you to specify not only literal symbols but arbitrary values as hash keys:
column = "#{first_name}_size_#{query_type}".to_sym
User.where( column => value )
AFAIK, ActiveRecord is able to accept strings instead of symbols as column names, so you don't even need to call to_sym.
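Putting it together with the question's start/end/cont filter types, a search class method could look something like this. This is a hedged sketch: the FILTERS map, the regexp, and the column whitelisting are illustrative, not from the answer above.

class User < ActiveRecord::Base
  FILTERS = {
    "start" => ->(v) { "#{v}%" },   # first_name_start: 'K' => first_name LIKE 'K%'
    "end"   => ->(v) { "%#{v}" },
    "cont"  => ->(v) { "%#{v}%" }
  }

  def self.search(params)
    params.reduce(all) do |scope, (key, value)|
      column, filter = key.to_s.split(/_(start|end|cont)\z/)
      # only allow real columns and known filter types
      next scope unless column_names.include?(column) && FILTERS.key?(filter)
      scope.where("#{column} LIKE ?", FILTERS[filter].call(value))
    end
  end
end

User.search(first_name_start: 'K')   # users whose first_name starts with 'K'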
For various reasons, I'm creating an app that takes a SQL query string as a URL parameter and passes it off to Postgres (similar to the CartoDB SQL API and CFPB's Qu). Rails then renders a JSON response of the results that come back from Postgres.
Snippet from my controller:
@table = ActiveRecord::Base.connection.execute(@query)
render json: @table
This works fine. But when I use Postgres JSON functions (row_to_json, json_agg), it renders the nested JSON property as a string. For example, the following query:
query?q=SELECT max(municipal) AS series, json_agg(row_to_json((SELECT r FROM (SELECT sch_yr,grade_1 AS value ) r WHERE grade_1 IS NOT NULL))ORDER BY sch_yr ASC) AS values FROM ed_enroll WHERE grade_1 IS NOT NULL GROUP BY municipal
returns:
{
series: "Abington",
values: "[{"sch_yr":"2005-06","value":180}, {"sch_yr":"2005-06","value":180}, {"sch_yr":"2006-07","value":198}, {"sch_yr":"2006-07","value":198}, {"sch_yr":"2007-08","value":158}, {"sch_yr":"2007-08","value":158}, {"sch_yr":"2008-09","value":167}, {"sch_yr":"2008-09","value":167}, {"sch_yr":"2009-10","value":170}, {"sch_yr":"2009-10","value":170}, {"sch_yr":"2010-11","value":153}, {"sch_yr":"2010-11","value":153}, {"sch_yr":"2011-12","value":167}, {"sch_yr":"2011-12","value":167}]"
},
{
series: "Acton",
values: "[{"sch_yr":"2005-06","value":353}, {"sch_yr":"2005-06","value":353}, {"sch_yr":"2006-07","value":316}, {"sch_yr":"2006-07","value":316}, {"sch_yr":"2007-08","value":323}, {"sch_yr":"2007-08","value":323}, {"sch_yr":"2008-09","value":327}, {"sch_yr":"2008-09","value":327}, {"sch_yr":"2009-10","value":336}, {"sch_yr":"2009-10","value":336}, {"sch_yr":"2010-11","value":351}, {"sch_yr":"2010-11","value":351}, {"sch_yr":"2011-12","value":341}, {"sch_yr":"2011-12","value":341}]"
}
So, it only partially renders the JSON, running into problems when I have nested JSON arrays created with the Postgres functions in the query.
I'm not sure where to start with this problem. Any ideas? I assume this is a problem on the Rails side.
ActiveRecord::Base.connection.execute doesn't know how to unpack database types into Ruby types, so everything – numbers, booleans, JSON, everything – you get back from it will be a string. If you want sensible JSON to come out of your controller, you'll have to convert the data in @table to Ruby types by hand and then convert the Ruby-ified data to JSON in the usual fashion.
Your @table will actually be a PG::Result instance, and those have methods such as ftype (get a column type) and fmod (get a type modifier for a column) that can help you figure out what sort of data is in each column of a PG::Result. You'd probably ask the PG::Result for the type and modifier for each column and then hand those to the format_type PostgreSQL function to get some intelligible type strings; then you'd map those type strings to conversion methods and use that mapping to unpack the strings you get back. If you dig around inside the ActiveRecord source, you'll see AR doing similar things. The AR source code is not for the faint-hearted, though; sorry, but this is par for the course when you step outside the narrow confines of how AR thinks you should interact with databases.
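As a rough sketch of that inspection step (my own illustration, assuming @table is the PG::Result from the controller above; format_type is a built-in PostgreSQL function):

conn = ActiveRecord::Base.connection.raw_connection

@table.nfields.times do |i|
  # ask PostgreSQL to render each column's type from its OID and modifier
  type = conn.exec_params("SELECT format_type($1::oid, $2::integer)",
                          [@table.ftype(i), @table.fmod(i)]).getvalue(0, 0)
  puts "#{@table.fname(i)} => #{type}"   # e.g. "values => json"
end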
You might want to rethink your "sling hunks of SQL around" approach. You'll probably have an easier time of things (and be able to whitelist what the queries can do) if you can figure out a way to build the SQL yourself.
The PG::Result class (the type of @table) uses type maps to cast result values to Ruby objects. For your example, you could use PG::TypeMapByColumn as follows:
@table = ActiveRecord::Base.connection.execute(@query)
# leave the first column (series) as text, decode the second column (values) as JSON
@table.type_map = PG::TypeMapByColumn.new [nil, PG::TextDecoder::JSON.new]
render json: @table
A more generic approach would be to use the PG::TypeMapByOid TypeMap class. This requires you to provide OIDs for each PG attribute type. A list of these can be found in pg_type.dat.
tm = PG::TypeMapByOid.new
tm.add_coder PG::TextDecoder::Integer.new(oid: 23)
tm.add_coder PG::TextDecoder::Boolean.new(oid: 16)
tm.add_coder PG::TextDecoder::JSON.new(oid: 114)
@table.type_map = tm
I have a Rails model that has a field array_field, which is a serialized text array. I want the combination of this array value and the value of another_field to be unique.
Should be straightforward, no?
class Foo < ActiveRecord::Base
  validates_uniqueness_of :array_field, scope: [:another_field]
  serialize :array_field, Array
end
This doesn't work. However, if I switch them around in the validations,
validates_uniqueness_of :another_field, scope: [:array_field] works as expected.
Can someone explain why this is the case? Is this expected behavior?
The Postgres error for the former setup when array_field's value is nil or [] is this:
PG::SyntaxError: ERROR: syntax error at or near ")"
LINE 1: ...other_field" = 103 AND "foo"."array_field" = ) LIMIT 1
When array_field is [[1, 2], [3, 4, 5]] (a sample multiarray I was using), it's:
PG::UndefinedFunction: ERROR: operator does not exist: text = integer
LINE 1: ...other_field" = 103 AND "foo"."array_field" = 1, 2, 3, 4, 5) LIMIT 1
It seems that Rails doesn't know how to translate the serialized object for this query. Am I missing something or is this a bug?
Edit: This is occurring in Rails 4.0.2.
Second Edit:
Clarification: I understand why this is happening (Rails has custom logic for list queries), and I'm using both a custom validator to manually perform the serialization before validating and a custom serializer to avoid problems with comparisons of Yaml strings (as detailed in my other question here).
At this point I'm mostly just wondering why validates_uniqueness_of treats the primary field differently from the scope fields, and am hoping someone can shed some light.
I can't explain why the validations work one way around, but not the other.
But I think your problems basically come down to the fact that serialize only specifies that an attribute is to be serialized using Yaml on save and deserialized upon load.
In other words, the only thing you say by doing serialize :array_field, Array is that:
when saving a Foo, serialize its array_field attribute using Yaml first,
when loading a Foo from the DB, make sure that the value of the array_field attribute is an Array after deserialization, otherwise raise an exception.
It does not affect how queries are constructed. Instead, Rails' usual rules for queries are used, so an array is converted into a comma-separated list of values. That makes sense when constructing, for example, an IN query, but it is the reason your query fails: the DB field is a string, yet you're trying to compare it to a list.
I haven't used native PostgreSQL array columns with Rails 4, but my guess is that these issues would be solved if you used those instead of a serialization-based solution. You get the added benefit of being able to search within the contents of arrays at the DB level.
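For illustration, a sketch of that native-array alternative; the migration and model details here are guesses based on your snippets, not code from the question:

class AddArrayFieldToFoos < ActiveRecord::Migration
  def change
    # a real Postgres integer[] column; no serialize call is needed
    add_column :foos, :array_field, :integer, array: true, default: []
  end
end

Foo.create!(another_field: 103, array_field: [1, 2, 3])
Foo.where("array_field @> ARRAY[?]", [2])   # rows whose array contains 2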
I'm having an issue with Rails 4's support for PostgreSQL's tsrange data type. Here is the code that I am trying to persist:
before_validation :set_appointment

attr_accessor :starting_tsrange, :ending_tsrange

def set_appointment
  self.appointment = convert_to_utc(starting_tsrange)...convert_to_utc(ending_tsrange)
end

def convert_to_utc(time_string)
  ActiveSupport::TimeZone.new("America/New_York").parse(time_string).utc
end
Basically I set instance variables for the beginning and end of the appointment tsrange using two strings representing datetimes. Before validation, they are converted to UTC and the resulting range is assigned to the appointment attribute, which should then be persisted. It sets things correctly, but when I try to retrieve the record, the appointment attribute is nil. Why is this code not working as expected?
Figured out the subtle bug in the code. The issue is with the triple-dot range operator: it excludes the end value, so if the two values are the exact same time, the range covers nothing and the stored attribute ends up empty. This can be visualized with the code below:
(1...1).to_a # []
(1..1).to_a # [1]
So the way to fix this is to avoid the triple-dot notation for ranges whose endpoints can be the same time; use the double-dot notation instead.
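Applied to the code in the question, the callback just needs the two-dot range:

def set_appointment
  self.appointment = convert_to_utc(starting_tsrange)..convert_to_utc(ending_tsrange)
end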
I am using Sphinx with the Thinking Sphinx plugin to search my data. I am using MySQL.
My data contains accented chars ("á", "é", "ã") and I want them to be equivalent to their non-accented counterparts ("a", "e", "a", for example) when searching and ordering.
I got the search working using a charset table (pastie.org/204316), and a search for "AGUA" returns "ÁGUA", but the ordering of the results is not working properly. In a search for "AGUA", "ÁGUA" comes after "MUITA ÁGUA", for example, but I wanted it to be sorted as if it were written with an "A", not an "Á".
The only solution I can think of is to index a new column containing the non-accented chars and use it for sorting, using the MySQL REPLACE function (http://dev.mysql.com/doc/refman/5.4/en/string-functions.html#function_replace) to strip the accented chars, but I would need one call to REPLACE for each possible accented char (and there are many), and that seems like a not very maintainable workaround.
Anybody know some better way to handle this issue?
Thanks!
Sphinx handles sorting on string fields by storing all the values in a list, sorting the list and then storing the index of each string as an int attribute. According to the docs the sorting of this list is done at a byte level and currently isn't configurable.
Ideally the strings should be sorted differently, depending on the encoding and locale. For instance, if the strings are known to be Russian text in KOI8R encoding, sorting the bytes 0xE0, 0xE1, and 0xE2 should produce 0xE1, 0xE2 and 0xE0, because in KOI8R value 0xE0 encodes a character that is (noticeably) after characters encoded by 0xE1 and 0xE2. Unfortunately, Sphinx does not support that at the moment and will simply sort the strings bytewise.
-- from http://www.sphinxsearch.com/docs/current.html
So, no easy way to achieve this within Sphinx. A modification to your REPLACE() based idea would be to have a separate column and populate it using a callback in your model. This would let you handle the replace in Ruby instead of MySQL, an arguably more maintainable solution.
# save an unaccented copy of your title. Normalise method borrowed from
# http://stackoverflow.com/questions/522715/removing-accents-diacritics-from-string-while-preserving-other-special-chars-tri
class MyModel < ActiveRecord::Base
  before_validation :update_sort_col

  private

  def update_sort_col
    self.sort_col = title.to_s.mb_chars.normalize(:kd).gsub(/[^\x00-\x7F]/n, '').to_s
  end
end
You can also use a special index for that, so you don't even need a new column in your db:
indexes "LOWER(title)", :as => :title, :sortable => true
It's raw SQL, so you can call your REPLACE there as well.
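For example, something along these lines should work (a hedged sketch; the list of accented characters is illustrative, not exhaustive):

# strip a few accented characters inside the index definition itself
# with nested MySQL REPLACE() calls
indexes "REPLACE(REPLACE(REPLACE(title, 'Á', 'A'), 'É', 'E'), 'Ã', 'A')",
        :as => :title_unaccented, :sortable => true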
Just build the index on a lower-cased version with the following syntax. It's a very simple and elegant solution for case-insensitive search using Sphinx:
indexes title, as: :title, sortable: :insensitive