Postgres: Column updated but not detected - ruby-on-rails

So I initially had a foreign key tutor_id of type string, so I ran the following migration:
change_column(:profiles, :tutor_id, 'integer USING CAST(tutor_id AS integer)')
The problem is that there was already data created while tutor_id was still a string. However, I did read that by using CAST, the existing data should be converted into integers.
Just to confirm, I went into heroku run rails console to check the tutor_id of the profiles, and tutor_id.is_a? Integer returns true.
However, I am currently getting this error:
ActionView::Template::Error (PG::UndefinedFunction: ERROR: operator does not exist: integer = text at character 66
Why is that so? Is the only way out to delete the data and recreate it?
(I'm assuming the information provided above is enough to draw a conclusion; otherwise I will add the relevant information too.)

You also have to update your code to use integers rather than strings. This error happens because your code somewhere still treats the column as a string, so the query is sent with the value as '123'. PostgreSQL doesn't do automatic type conversions, so it's telling you it can't do the comparison.
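For example, a minimal sketch (assuming the value arrives via params and that the app's cached column information went stale after the migration):
# Cast on the Ruby side so Postgres always compares integer = integer:
Profile.where(tutor_id: params[:tutor_id].to_i)
# After a column type change it can also help to reload the cached column info
# (or simply restart the app/dyno):
Profile.reset_column_information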

Related

"The provided key element does not match the schema"

I am building a Rails app and using DynamoDB for the database tables. I get the error:
The provided key element does not match the schema
In my helper/controller:
session[:id] = @record.id
In my view:
<% record_id = TableName.find(session[:id]) %>
I printed session[:id] and checked: it has the correct id of the particular record. I also checked the DB, and the record matches the one I want. It works fine in the rails console.
But when I run the application, I get the above error.
Kindly help.
When you get that error from DynamoDB, it is because the key you are providing doesn't match the type of the key in your table: either your table's key is defined as a String and you're passing in a Number, or vice versa, the table's key is a Number and you're passing it in as a String.
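A quick sketch of the idea, reusing the finder from the question (whether your table's hash key is a Number or a String is something you need to check against the table definition):
session[:id].class                  # check what you are actually passing (String vs Integer)
TableName.find(session[:id].to_i)   # if the table's hash key is a Number
TableName.find(session[:id].to_s)   # if the table's hash key is a String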

Rails test database values do not match fixture values

This seems like a very strange problem to me. I have a table which contained a boolean value field. I changed that field to be a string since I now want more possible values than just yes or no. I ran the migration and reran the tests. The database table structures show that the field type has been changed to a varchar(255). But every time I run the test, the database field values are still showing 'f' or 't'. But in my fixture file, I am now setting the values to "No" or "Yes":
one:
  value: No
two:
  value: Yes
I've tried purging the database and rerunning the test. But nothing helps. I have no idea where boolean values are coming from since I changed the type. I can't think of anything else that needs to be cleared out. I'm sure there is something simple I'm forgetting to do but I don't see it. If more information is needed to answer this, please let me know.
I'm running rails 4.1.5 with ruby 2.1.4. I'm running the tests using SQLite.
The values Yes and No are interpreted as booleans in YAML files. Try changing it to:
one:
  value: "No"
two:
  value: "Yes"

Mongoid: getting mongo error code from Moped::Errors

I'm building a Rails server with the model stored in MongoDB using Mongoid.
There are cases where a user can attempt to add a document to the mongo database with a duplicate index value. Is there a way to retrieve the MongoDB error code (in this case 11000) without parsing the error message so that I can make my exception handling more robust?
EDIT: Title had Mongoid::Errors instead of Moped::Errors
I developed the mongoid_token gem and encountered this exact problem, since the core functionality of this gem relies on being able to identify if a particular field (in this case, the token) is the cause of the key duplication.
If all you're after is the error code, then yes, you can get it. However, if you need more precise details (such as the field name), you will need to parse the error description.
Also, if you're testing for duplicate keys, I think you'll need to check for both error codes 11000 and 11001 (duplicate key on update). A partial list of the MongoDB error codes is here.
I've paraphrased some of the code from the gem below:
begin
  # ... do whatever
rescue Moped::Errors::OperationFailure => e
  description = e.details['err']
  if [11000, 11001].include?(e.details['code'])
    # Duplicate key error
  end
end
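If you do need the offending field, a rough, untested sketch like the following can pull the index name out of the description (the exact message format varies between MongoDB versions, so treat the regex as an assumption):
# description looks something like:
# "E11000 duplicate key error index: mydb.users.$email_1  dup key: { : \"a@b.com\" }"
if description =~ /\$([\w.]+)_\d+\s+dup key/
  duplicated_index = Regexp.last_match(1)   # => "email"
end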

Rails refuses to store a value in a column named `authorization`

I've fought for a few hours now trying to store a string in a database column in Rails.
I had to rename authorization to transaction so that Rails would store the value.
Why does Rails interfere while saving the value?
Example:
# Works
self.update_attribute(:transaction, result) rescue nil
# Does not work
self.update_attribute(:authorization, result) rescue nil
What is your underlying database? It might have authorization as a reserved word.
Look at the generated SQL and run it directly against your DB. If it runs without problems, then my assumption is invalid.
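One way to try that from a Rails console, with the column name quoted so the database cannot mistake it for a keyword (the table name and values here are illustrative):
conn = ActiveRecord::Base.connection
col  = conn.quote_column_name('authorization')
conn.execute("UPDATE my_table SET #{col} = #{conn.quote('some value')} WHERE id = 5")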
Both MySQL and SQL Server use authorization as a reserved word, so you'll just need to use a different word.
You could also use something close, like 'authorized' or 'auth'.
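If you do rename the column, a minimal migration sketch (the table and new column name are placeholders for whatever your schema uses):
class RenameAuthorizationColumn < ActiveRecord::Migration
  def change
    rename_column :some_table, :authorization, :authorization_code
  end
end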
Maybe try prefixing the column with the table name? For example:
UPDATE my_table
SET my_table.authorization = 'new authorization'
WHERE id = 5

Moving encoded data with Rails

I'm trying to move data from one database to another from within a rake task.
However, I'm getting some fruity encoding issues on some of the data:
rake aborted!
PGError: ERROR: invalid byte sequence for encoding "UTF8": 0x92
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
What can I do to resolve this error and get the data in? As far as I can tell (not knowing anything about encoding), the source DB is latin1.
If both databases are PG, then you can export and import the whole database using pg_dump's options to change the encoding; that would probably be the most performant way to do it.
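Something along these lines (a sketch only; the database names are placeholders, and you should check the flags against your pg_dump version):
pg_dump -E UTF8 old_latin1_db | psql new_utf8_db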
If you do this via a rake task, you can do the transcoding inside the task itself, which means you will have to touch every attribute and re-encode it.
As it seems your new database is utf8 whereas the old one is latin1, you could do it by re-encoding every string/text-like value. Checking for respond_to?(:encoding) makes sure the data is only touched if it has encoding information attached, i.e. numeric values won't be transcoded:
def transcode(data, to_enc = 'UTF-8')
  if data.respond_to?(:encoding) && data.encoding.name != to_enc
    return data.dup.force_encoding(to_enc)
  end
  data
end
Now you can just read a record from the old DB, run each attribute through this method, and then write it to the new database:
u = OldDBUser.first
u.attribute_names.each do |x|
  u[x.to_sym] = transcode(u[x.to_sym])
end
# ... whatever you do with the transcoded u
Well, I have not tested this, but please do; maybe it's all you need.
