I have a rails app where my users can manually set up products via a web form. This works fine and accepts foreign characters well, words like 'Svölk' for example.
I now have a need to bulk import products and am using FasterCSV to do so. Generally this works without issue, but when the CSV contains foreign characters it stalls at that point.
Am I correct to believe the file needs to be UTF-8 in the first instance?
Also, I'm running Ruby 1.8.7, so is Iconv my only option for converting the file? This could be a problem, as the format of the original file won't be known.
Have others encountered this issue and if so, how did you overcome it?
You have two alternatives:
Use the ensure_encoding gem to find the actual encoding of the strings.
Use Ruby (1.9+) to determine the file encoding:
File.open(source_file).read.encoding
I prefer the first approach because it tries to detect the encoding based on the strings themselves, and it can convert them to your desired encoding (UTF-8); you can then set that encoding in the FasterCSV options.
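For the Ruby 1.8.7 case in the question, a minimal sketch of the Iconv route could look like this (the ISO-8859-1 source encoding is only an assumption here; substitute whatever encoding you actually detect):

require 'iconv'
require 'fastercsv'

raw = File.read(source_file)
# Assumed source encoding; replace with what ensure_encoding (or your own inspection) reports.
utf8 = Iconv.conv('UTF-8//IGNORE', 'ISO-8859-1', raw)

FasterCSV.parse(utf8) do |row|
  # build each product from row here
end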
Related
I've learnt that you can declare a Ruby source file as UTF-8 so that you can type multibyte characters directly in it (e.g. ¤) instead of their HTML entities (e.g. &curren;):
# encoding: UTF-8
class Price < ActiveRecord::Base
def currency_symbol
'¤'
end
end
Without the encoding comment, I would need to write '&curren;'.html_safe as the body of the method.
I don't like the latter because it assumes I'm writing HTML (my app produces Excel output on top of HTML).
My question is: are there any problems or performance hits I should be aware of when doing this?
Note: Ruby 2.0 makes UTF-8 the default source encoding; does that mean all Ruby files will automatically support these characters?
Character chart: http://dev.w3.org/html5/html-author/charref
This is exactly the kind of thing that should go in the locales (config/locales). These are YAML files that define words and characters that will be used in the various parts of your application, including currency symbols. It also has the benefit of allowing you to easily introduce translations for other languages.
Take a look at the Ruby on Rails i18n guide for more.
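As a rough sketch (the key name currency_symbol is illustrative, not something Rails defines for you), a locale entry and its lookup could look like:

# config/locales/en.yml
en:
  currency_symbol: "¤"

# anywhere in the app
I18n.t(:currency_symbol)  # => "¤"

This keeps the symbol out of your Ruby source entirely and makes a per-locale override trivial.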
I code using Ruby on Rails, and the default database is SQLite, which is great for development: there is no setup time, no need to configure a connection, etc. What really sucks is the output on the command line. When I run a query to list all the contents of a table, for example, I just get a dump of text, whereas the MySQL CLI formats query results nicely (in a table with headers). I am aware of ".headers on" and the other commands you can type to format sqlite results, but those are temporary, and I am looking for a more permanent way to format the results so I don't have to do it every time.
It appears that you can put SQL statements and sqlite dot-commands in an init file and pass that file with the -init parameter.
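For example, the two dot-commands in question can live in a file that is loaded every time (sqlite3 also reads ~/.sqliterc automatically on startup, which is what makes the change permanent):

.headers on
.mode column

Then start the shell with something like (the db path here is just the usual Rails default):

sqlite3 -init ~/.sqliterc db/development.sqlite3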
I have a Rails application where I use regex-based rules to categorize transactions. In my seeds.rb, I create some categories and rules, then import transactions from a CSV file (also utf8-encoded) and allow them to be categorized. This process works fine on my development machine, but when I run it on Heroku, I get:
incompatible encoding regexp match (ASCII-8BIT regexp with UTF-8 string)
I am running the Cedar Stack, Rails 2.3.15. I have put
# encoding: utf-8
at the top of all my source files and I've set the encoding to utf-8 in my app config, so I'm not sure what else could be causing this problem. I'm wondering if has something to do with the Heroku configuration.
The issue could be caused by invisible characters that your local operating system ignores, so the encoding comes out right locally, whereas on Heroku they break the magic encoding comment at the top of the file and you end up with both ASCII-8BIT and UTF-8.
Since the file that is having issues contains the regex, it's probably your model class instead of seeds.rb.
There are many ways to view invisible characters in your file. In vi, just set the option :set list
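If you want to confirm where the mismatch is at runtime, a quick diagnostic (the rule/transaction names below are placeholders for your own objects, not from the original post) is to print the encoding of both sides before matching:

# Which side is ASCII-8BIT? Both of these should report UTF-8.
puts rule.pattern.encoding            # Regexp#encoding
puts transaction.description.encoding # String#encoding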
I'm using the DocSplit gem for Ruby 1.9.3 to create Unicode UTF-8 versions of Word documents. To my surprise, while running a test today on a particular piece of one of these documents, I started running into character encoding inconsistencies.
I have tried a number of different methods to resolve the issue, which I will list below, but the best success I've had so far is to remove all non-ASCII characters. This is far from ideal, as I don't think the characters are really going to be all that problematic in the DB.
gsub(/[^[:ascii:]]/, "")
This is a sample of what my output looks like vs. what I'm expecting:
My CODES'S APOSTROPHE
My CODES’S APOSTROPHE
The second apostrophe should look squiggly. If you paste it into irb, you get the following: \U+FFE2
I tried a regex specifically for this character, and it appears to work in Rubular. As soon as I put it in my model, however, I got a syntax error.
syntax error, unexpected $end, expecting ')'
raw_title = raw_title.gsub(/’/, "")
I also tried forcing the encoding to UTF-8, but everything is already in UTF-8 and this does not appear to have an effect. I tried forcing the output to US-ASCII, but I get a byte sequence error.
I also tried a few of the encoding options found in Ruby library. These basically did the same thing as the Regex.
This all comes down to that I'm trying to match output for testing purposes. Should I even be concerned about these special characters? Is there a better way to match these characters without blindly removing them?
Try adding:
# encoding: utf-8
at the top of the failing rspec file. This should ensure things like:
raw_title = raw_title.gsub(/’/, "")
in your spec work.
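If you'd prefer not to have the literal curly quote in your source file at all, a Unicode escape sidesteps the file-encoding question entirely (this assumes the character is U+2019, the right single quotation mark, which is what the curly apostrophe normally is):

raw_title = raw_title.gsub(/\u2019/, "")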
I tried the above example, but even after that it kept failing, so I used Iconv to convert that specific character. This is what I used:
require 'iconv'
Iconv.conv('ASCII//IGNORE', 'UTF-8', text_to_be_converted)
I tried what was given in the following link - How to get rid of non-ascii characters in ruby
I'm getting the following error with Ruby 1.9 & Rails 2.3.4. This happens when a user submits a non-ASCII character.
I read a lot of online resources but none seems to have a solution that worked.
I tried using (as some resources suggested)
string.force_encoding('utf-8')
but it didn't help.
Any ideas how to resolve this? Is there a way to eliminate such characters before saving to the DB? Or is there a way to make them display properly?
For Ruby 1.9 and Rails 3.0.x, use the mysql2 adapter.
In your Gemfile:
gem 'mysql2', '~> 0.2.7'
and update your database.yml to:
adapter: mysql2
http://www.rorra.com.ar/2010/07/30/rails-3-mysql-and-utf-8/
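A typical database.yml entry along those lines (the database name and credentials here are placeholders) would be:

development:
  adapter: mysql2
  encoding: utf8
  database: myapp_development
  username: root
  password:

The encoding: utf8 line is what tells the mysql2 adapter to talk to MySQL in UTF-8.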
I don't know much about Ruby (or Rails), but I imagine the problem is caused by a lack of control over your character encodings.
First, you should decide which encoding you're storing in your database. Then, you need to make sure to convert all text to that encoding before storing in the database. In order to do that, you first need to know which encoding it is to begin with.
One often repeated piece of advice is to decode all input from whatever encoding it uses, to unicode (if your language supports it) as soon as possible after you get control of it. Then you know that all the text you handle in your program is unicode. On the other end, encode the text to whatever output-encoding you want as a last step before outputting it.
The key is to always know which encoding a piece of text is using at any given place in your code.
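In Ruby 1.9 terms, that advice translates into something like this sketch (params[:comment] and the ISO-8859-1 source encoding are only examples; use your actual input and its real encoding):

# On the way in: convert external text to UTF-8 as soon as you receive it.
input = params[:comment].encode('UTF-8', 'ISO-8859-1',
                                :invalid => :replace, :undef => :replace)

# On the way out: encode to whatever the consumer expects, as the last step.
output = input.encode('Windows-1252', :invalid => :replace, :undef => :replace)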