RoR character class regex - ruby-on-rails

I have the following line of code in my Ruby on Rails app, which checks whether the given string contains Korean characters or not:
isKorean = !/\p{Hangul}/.match(word).nil?
It works perfectly in the console, but raises a syntax error for the actual app:
invalid character property name {Hangul}: /\p{Hangul}/
What am I missing and how can I get it to work?

This is a character encoding issue, you need to add:
# encoding: utf-8
to the top of the Ruby file where you use that regex. You can probably use any other encoding that supports the character class you're matching, instead of UTF-8, if you wish. Note that UTF-8 is the default source encoding in Ruby 2.0, so this is not needed in Ruby 2.0+.
This is known as a "magic comment". It is worth reading up on how encoding works in Ruby 1.9. Note that encoding in Rails views is handled automatically by config.encoding (set to UTF-8 by default in config/application.rb).
It was likely working in the console because your terminal is set to use UTF-8 already.
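For example, a minimal sketch of a helper file with the magic comment in place (the korean? helper name is made up here):
# encoding: utf-8
# Returns true if the given word contains at least one Hangul character.
def korean?(word)
  !/\p{Hangul}/.match(word).nil?
end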

Related

Rails admin encoding error when I try to use 'windows-1250'

I get the error incompatible character encodings: UTF-8 and Windows-1250
when I try to display Polish characters, e.g. 'ąęźć'.
In my application.rb I have:
config.encoding = "windows-1250"
In database.yml:
encoding: windows-1250
How can I show params in windows-1250 in the rails admin panel?
I would suggest you go with utf-8 encoding (which is ruby's default these days).
Your input 'ąęźć' is a valid utf-8 string, so you would face no problem in decoding it as a utf-8 string.
If you still want to hack around, you can use:
'ąęźć'.mb_chars.tidy_bytes.to_s
which should also give you the desired output.
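If you do switch to UTF-8 and still need to handle Windows-1250 bytes coming from an external source, plain Ruby can transcode between the two encodings; a minimal sketch (no Rails required):
polish  = "ąęźć"                          # UTF-8 source literal
win1250 = polish.encode("Windows-1250")   # transcode to Windows-1250 bytes
back    = win1250.encode("UTF-8")         # and back to UTF-8
puts back                                 # => "ąęźć"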

Avoid having to use # encoding: UTF-8 in every file

I ran into a problem with a Rails controller where it choked on a Unicode string:
syntax error, unexpected $end, expecting ']'
...conditions => ['url like ?', "%日本%"])
The solution to this problem was to set the encoding at the top of the controller file using
# encoding: UTF-8
Is there any way to set this globally? I keep on getting into trouble by forgetting to set it in files. Alternatively, is there a default somewhere that will make sure that all strings are thought of as Unicode? Are there any problems with setting everything to be Unicode?
In less than a month, Ruby 2.0 will be released, which will have UTF-8 as the default encoding. Then, you will not need to do that any more.
You can try setting the environment variable RUBYOPT to -Ku:
export RUBYOPT="-Ku"
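To check which source encoding is actually in effect for a given file, Ruby's __ENCODING__ keyword reports it; a quick sketch:
# encoding: UTF-8
puts __ENCODING__      # => UTF-8 (from the magic comment, -Ku, or the Ruby 2.0 default)
puts "日本".encoding    # => UTF-8 for string literals in this file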

Unicode symbol in Rails' view

I'm trying to include a degree symbol in my Rails view. If I put the degree symbol (°) straight into the .html.erb file, the browser displays it normally.
But this symbol needs to be passed to the view via a string, and here the problem begins.
If I put this:
<%= 176.chr %>
into view, or put
.... + 176.chr
into ruby source, I get
incompatible character encodings: UTF-8 and ASCII-8BIT
How to make Rails recognize all views as UTF-8 by default?
You can use the HTML entity for this symbol instead: &deg;.
http://www.w3schools.com/charsets/ref_html_entities_4.asp
You have to put it in the HTML, outside the <%= %>, or use the raw helper, or mark the string as html_safe. And by the way, did you try supplying an encoding to chr? For example 176.chr(__ENCODING__) (__ENCODING__ is not a placeholder here, it's actual Ruby) or 176.chr(Encoding::UTF_8). All of these approaches should work.
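A quick sketch of those chr variants in plain Ruby (the output comments assume a UTF-8 source file):
176.chr                   # => "\xB0" in ASCII-8BIT; concatenating this into a UTF-8 template raises
176.chr(Encoding::UTF_8)  # => "°"
176.chr(__ENCODING__)     # => "°" when the file's source encoding is UTF-8
"\u00B0"                  # => "°" as a plain UTF-8 string literal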
This should already be specified in your application.rb inside /config/.
The relevant section should look like this:
module App
  class Application < Rails::Application
    # Configure the default encoding used in templates for Ruby 1.9.
    config.encoding = "utf-8"
  end
end
I think the issue here is that you are generating an ASCII-8BIT string that is then inserted into the UTF-8 body.
If you want to use a UTF-8 String in your Ruby code you have to put this magic comment on the first line of your ruby file:
# encoding: UTF-8
Details on Encoding in Ruby 1.9 can be found here

Rails: encoding woes with serialized hashes despite UTF8

I've just updated from ruby 1.9.2 to ruby 1.9.3p0 (2011-10-30 revision 33570). My rails application uses postgresql as its database backend. The system locale is UTF8, as is the database encoding. The default encoding of the rails application is also UTF8. I have Chinese users who input Chinese characters as well as English characters. The strings are stored as UTF8 encoded strings.
Rails version: 3.0.9
Since the update some of the existing Chinese strings in the database are no longer displayed correctly. This does not affect all strings, but only those that are part of a serialized hash. All other strings that are stored as plain strings still appear to be correct.
Example:
This is a serialized hash that is stored as a UTF8 string in the database:
broken = "--- !map:ActiveSupport::HashWithIndifferentAccess \ncheckbox: \"1\"\nchoice: \"Round Paper Clips \\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n\"\ninfo: \"10\\xE7\\x9B\\x92\"\n"
In order to convert this string to a ruby hash, I deserialize it with YAML.load:
broken_hash = YAML.load(broken)
This returns a hash with garbled contents:
{"checkbox"=>"1", "choice"=>"Round Paper Clips ï¼\u0088å\u009B\u009Eå½¢é\u0092\u0088ï¼\u0089\r\n", "info"=>"10ç\u009B\u0092"}
The garbled stuff is supposed to be UTF8-encoded Chinese. broken_hash['info'].encoding tells me that ruby thinks this is #<Encoding:UTF-8>. I disagree.
Interestingly, all other strings that were not serialized before look fine. In the same record a different field contains Chinese characters that look just right in the rails console, the psql console, and the browser. Every string saved to the database since the update, whether a serialized hash or a plain string, looks fine, too.
I tried to convert the garbled text from a possible wrong encoding (like GB2312 or ANSI) to UTF-8 despite ruby's claim that this was already UTF-8 and of course I failed. This is the code I used:
require 'iconv'
Iconv.conv('UTF-8', 'GB2312', broken_hash['info'])
This fails because ruby doesn't know what to do with illegal sequences in the string.
I really just want to run a script to fix all the old, presumably broken serialized hash strings and be done with it. Is there a way to convert these broken strings to something resembling Chinese again?
I just played with the escaped UTF-8 bytes in the raw string (called "broken" in the example above). This is the Chinese string that is encoded inside the serialized string:
chinese = "\\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n"
I noticed that it is easy to convert this to a real UTF-8 encoded string by unescaping it (removing the escape backslashes).
chinese_ok = "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89\r\n"
This returns a proper UTF-8-encoded Chinese string: "(回形针)\r\n"
The thing falls apart only when I use YAML.load(...) to convert the string to a ruby hash. Maybe I should process the raw string before it is fed to YAML.load. Just makes me wonder why this is so...
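A rough sketch of that pre-processing idea (turning the literal \xEF-style escapes back into real bytes before parsing); whether this is actually the right fix is addressed by the answers below:
# Hypothetical helper: unescape \xNN sequences into raw bytes, then tag the result as UTF-8.
def unescape_hex(str)
  bytes = str.dup.force_encoding("ASCII-8BIT")
  bytes = bytes.gsub(/\\x([0-9A-Fa-f]{2})/) { [$1.hex].pack("C") }
  bytes.force_encoding("UTF-8")
end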
Interesting! This is likely due to the YAML engine "psych" that's used by default now in 1.9.3. I switched to the "syck" engine with YAML::ENGINE.yamler = 'syck' and the broken strings are correctly parsed.
This seems to have been caused by a difference in the behaviour of the two available YAML engines "syck" and "psych".
To set the YAML engine to syck:
YAML::ENGINE.yamler = 'syck'
To set the YAML engine back to psych:
YAML::ENGINE.yamler = 'psych'
The "syck" engine processes the strings as expected and converts them to hashes with proper Chinese strings. When the "psych" engine is used (default in ruby 1.9.3), the conversion results in garbled strings.
Adding the above line (the first of the two) to config/application.rb fixes this problem. The "syck" engine is no longer maintained, so I should probably only use this workaround to buy me some time to make the strings acceptable for "psych".
From the 1.9.3 NEWS file:
* yaml
* The default YAML engine is now Psych. You may downgrade to syck by setting
YAML::ENGINE.yamler = 'syck'.
Apparently the Syck and Psych YAML engines treat non-ASCII strings in different and incompatible ways.
Given a Hash like you have:
h = {
  "checkbox" => "1",
  "choice"   => "Round Paper Clips (回形针)\r\n",
  "info"     => "10盒"
}
Using the old Syck engine:
>> YAML::ENGINE.yamler = 'syck'
>> h.to_yaml
=> "--- \ncheckbox: "1"\nchoice: "Round Paper Clips \\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n"\ninfo: "10\\xE7\\x9B\\x92"\n"
we get the ugly double-backslash format that you currently have in your database. Switching to Psych:
>> YAML::ENGINE.yamler = 'psych'
=> "psych"
>> h.to_yaml
=> "---\ncheckbox: '1'\nchoice: ! "Round Paper Clips (回形针)\\r\\n"\ninfo: 10盒\n"
The strings stay in normal UTF-8 format. If we manually screw up the encoding to be Latin-1:
>> Iconv.conv('UTF-8', 'ISO-8859-1', "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89")
=> "ï¼\u0088å\u009B\u009Eå½¢é\u0092\u0088ï¼\u0089"
then we get the sort of nonsense that you're seeing.
The YAML documentation is rather thin so I don't know if you can force Psych to understand the old Syck format. I think you have three options:
Use the old, unsupported and deprecated Syck engine; you'd need to set YAML::ENGINE.yamler = 'syck' before you load or dump any YAML.
Load and decode all your YAML using Syck and then re-encode and save it using Psych (see the sketch after this list).
Stop using serialize in favor of manually serializing/deserializing using JSON (or some other stable, predictable, and portable text format) or use an association table so that you're not storing serialized data at all.
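A minimal sketch of option 2, meant to be run from a Rails console so that tags like ActiveSupport::HashWithIndifferentAccess can be resolved; the conversion itself is just load-with-Syck, dump-with-Psych:
require 'yaml'

# Convert one old Syck-era YAML string into clean Psych-era YAML.
# Run this over each affected column value and write the result back.
def convert_legacy_yaml(old_yaml)
  YAML::ENGINE.yamler = 'syck'
  data = YAML.load(old_yaml)    # Syck parses the old \xEF-escaped format correctly
  YAML::ENGINE.yamler = 'psych'
  YAML.dump(data)               # re-dump as plain UTF-8 Psych YAML
end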

In Ruby on Rails, are '#encoding: utf-8' and 'config.encoding = "utf-8"' different?

I can make any ruby file use a specific encoding by adding a comment line at its top:
#encoding: utf-8
But in Rails' config/application.rb, I found this:
config.encoding = "utf-8"
Are they different? If I have set config.encoding = "utf-8", do I still need #encoding: utf-8?
The config.encoding = "utf-8" part in config/application.rb is related to how rails should interpret content.
#encoding: utf-8 in a ruby file tells ruby that this file contains non-ascii characters.
These two cases are different. The first one (in config/application.rb) tells rails something, and has nothing at all to do with how ruby itself should interpret source files.
You can set the environment variable RUBYOPT=-Ku if you're lazy and want ruby to automatically set the default file encoding of .rb files to utf-8, but I'd rather recommend that you put your non-ascii bits in a translation file and reference that with I18n.t.
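As a sketch of that last suggestion (the locale key used here is made up):
# config/locales/en.yml (hypothetical key):
#   en:
#     degree: "°"
#
# In a view or helper, keeping the .rb/.erb source ASCII-only:
I18n.t('degree')   # => "°"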
