In a TextArea, I am using the ' character but it is not displaying properly. Instead, it is displaying something like this: –.
How do I get the ' character to display properly?
You are probably not using the ASCII apostrophe (') but some non-ASCII punctuation mark, such as the typographically correct punctuation apostrophe (’). The problem arises because your HTML document is (probably) UTF-8 encoded but the browser interprets it as windows-1252 encoded. If the encoding is not declared in the HTTP headers, adding the tag <meta charset=utf-8> into the head part should help. For general advice on encodings, see the W3C page Character encodings.
The textarea element is meant for user input. For presenting your content, other elements (possibly styled with CSS) are usually a better choice. However, the encoding issue is the same.
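A quick way to reproduce the symptom, sketched in Python under the assumption above (UTF-8 bytes being decoded as windows-1252):

>>> utf8_bytes = '\u2019'.encode('utf-8')   # U+2019 RIGHT SINGLE QUOTATION MARK as UTF-8
>>> utf8_bytes
b'\xe2\x80\x99'
>>> utf8_bytes.decode('windows-1252')       # misread as windows-1252: three mojibake characters
'â€™'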
If we type into Firefox or Chrome
http://☃.net/
It takes us to
http://xn--n3h.net/
Which is a mirror of unicodesnowmanforyou.com
What I don't understand is by what rules the Unicode snowman can be encoded as xn--n3h; it doesn't look anything like UTF-8 or URL encoding.
I think I found a hint while mucking around in python3, because:
>>> '☃'.encode('punycode')
b'n3h'
But I still don't understand the xn-- part. How are domain names internationalised, what is the standard and where is this stuff documented?
It uses an encoding scheme called Punycode (as you've already discovered from the Python testing you've done), capable of representing Unicode characters in ASCII-only format.
Each label (delimited by dots, so get.me.a.coffee.com has five labels) that contains Unicode characters is encoded in Punycode and prefixed with the string xn--.
The label encoding first copies all the ASCII characters, then appends the encoded Unicode characters. The Unicode characters are always after the final - in the label, so one is added after the ASCII characters if needed.
More detail can be found on this page over at the W3C site, and in RFC 3987. For details on how Punycode actually encodes labels, see the Wikipedia page.
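Putting the two pieces together, building on the punycode experiment from the question (Python's built-in idna codec implements the older IDNA 2003 rules, which are enough to show the mechanics here):

>>> 'xn--' + '☃'.encode('punycode').decode('ascii')   # Punycode plus the xn-- prefix, by hand
'xn--n3h'
>>> '☃.net'.encode('idna')                            # the idna codec does the per-label work
b'xn--n3h.net'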
I've had this problem for a long time but I've been implementing this ugly hack on the backend to get around it.
Now I've decided to act as a real developer and deal with it.
My problem is that when parsing an XML feed with any of the Norwegian characters æ, ø or å in the title node, all the letters appearing before these special characters are omitted.
So if the word is "Bålhuset" it only displays "ålhuset" - the funny thing is that æ, ø and å characters AFTER the initial problem character are included.
So if I put, for example, "ÅBålhuset", I will get "Bålhuset". So it seems it's only the first occurrence of one of these special characters that causes a problem.
Any help would be immensely appreciated!
-Chris
When creating the XML, try using CDATA sections, like:
<title><![CDATA[Transport "Bålhuset"Classic World's]]></title>
Also, here is a list of HTML entity codes, and a related question that may help: XML with those characters is invalid unless they are contained within a CDATA section.
Otherwise you need to use the character's entity code: if you want to represent ö, you need to type &#246;.
You can validate your XML while creating it and easily fix the bug.
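As a sketch of both options (in Python purely for illustration, with the title string taken from the example above):

from xml.sax.saxutils import escape

title = 'Transport "Bålhuset" Classic World\'s'

# Option 1: escape the XML metacharacters (&, <, >); æ, ø and å themselves
# are legal in a correctly declared UTF-8 document.
print('<title>%s</title>' % escape(title))

# Option 2: a CDATA section, so the parser takes the text verbatim.
print('<title><![CDATA[%s]]></title>' % title)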
What did it for me was getting the data in JSON and using the native JSON methods; no dropped characters or other sporadic behaviour.
So what that means to me is that there is an issue with NSXMLParser that makes it choke on international characters (the first occurrence of one, mind you), even though everything is in order with the encoding etc.
I have an HTML page which has the following line with some HTML entities like "&#146;".
# Not pasting the whole HTML page content here; just the line with the issue
html_file = "<html>....<body><p>they&#146;re originally intended to describe the spread of of viral diseases, but they&#146;re nice analogies for how web/SN apps grow.</p> ...</body></html>"
doc = Nokogiri::HTML(html_file)
body = doc.xpath('//body')
body_content = body[0].inner_html
puts body_content
Result:
These terms come from the fields of medicine and biology they\u0092re originally intended to describe the spread of of viral diseases, but they\u0092re nice analogies for how web/SN apps grow.
I want to leave these entities as they are instead of having them converted to Unicode.
Is there anything I'm missing?
Thanks
they&#146;re
is wrong and should be avoided. If you want to use a close-single-quote there, to reproduce the typographical practice of rendering apostrophes as a slanted quote, then the correct character is U+2019 RIGHT SINGLE QUOTATION MARK, which can be written as &#8217; or &#x2019;. Or, if you're using UTF-8, just included verbatim as ’.
&#146; should refer to character U+0092, a little-used and pointless control character that typically renders as blank or a missing-glyph box. And indeed in XML, it does.
But in HTML (other than XHTML, which uses XML rules), it's a long-standing browser quirk that character references in the range &#128; to &#159; are misinterpreted to mean the characters associated with bytes 128 to 159 in the Windows Western code page (cp1252) instead of the Unicode characters with those code points. The HTML5 standard finally documents this behaviour.
The problem is that Nokogiri doesn't know about this quirk, and takes character reference 146 at its word, ending up with the character 146 (\u0092) that you don't really want. I think Nokogiri is using libxml2 to parse HTML, so ultimately the proper fix would be to libxml2's htmlParseCharRef function, to substitute characters 128–159.
In the meantime you could perhaps try ‘fixing up’ character references manually with crude string substitution like &#146;->&#8217; before parsing. It's a bit wrong, but at least in HTML the only other place you can have the markup sequence &#146; without it being a character reference would be in a comment, so hopefully it wouldn't matter if you changed the content there accidentally too.
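A rough sketch of that kind of pre-parse fix-up, in Python for illustration (fix_char_refs is a hypothetical helper, not part of Nokogiri or libxml2):

import re

# Rewrite numeric character references in the 128-159 range to the code
# points Windows-1252 assigns to those bytes, mimicking the HTML5 quirk.
# Crude, as noted above: it would also rewrite matching text in comments.
def fix_char_refs(html):
    def repl(m):
        n = int(m.group(1), 16) if m.group(1) else int(m.group(2))
        if 128 <= n <= 159:
            try:
                return '&#%d;' % ord(bytes([n]).decode('windows-1252'))
            except UnicodeDecodeError:
                return m.group(0)   # a few bytes (0x81, 0x8D, ...) are unmapped
        return m.group(0)
    return re.sub(r'&#(?:[xX]([0-9A-Fa-f]+)|([0-9]+));', repl, html)

print(fix_char_refs('they&#146;re'))   # they&#8217;re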
Have you tried changing
&#146;
into
&#8217;?
I think the parser parses the ampersand first, then concatenates it with "#146" and parses them both. It's just an opinion though. (I wanted this to be just a comment, but I don't know how.)
Well I got the idea from focos in his answer post here, and the unicode from here.
I'm importing an RSS feed from Tumblr into a Kynetx app. It appears that the RSS feed has some encoding issues, as apostrophes appear like this: ’.
The feed (which you can find here) claims to be encoded in UTF-8.
Is there a way to specify the encoding or else replace those characters with regular apostrophes?
While not optimal, you could try to catch these encodings and replace them with the UTF-8 standard:
newstring = oldstring.replace(re/’/, "'");
This appears to be a case of a service that specifies UTF-8 but doesn't explicitly enforce it. I uploaded an image of the RSS feed that you provided. For comparison, I cut and pasted the text into a notepad document and then typed in the same text from my keyboard.
I don't know if you can tell from the image, but the apostrophe that is mangled is different from the apostrophe that is generated by my UTF-8 browser.
I suspect that this post was submitted via a Windows client. If you look at your encoding options, you will see an option for Western (Windows-1252).
Windows-1252 is a legacy encoding from Windows that resembles ISO 8859-1, but substitutes printable characters of its own for that standard's control characters and changes the codepage locations of some others.
A couple of quotes from the wikipedia page that I cite above:
It is very common to mislabel Windows-1252 text data with the charset label ISO-8859-1. Many web browsers and e-mail clients treat the MIME charset ISO-8859-1 as Windows-1252 characters in order to accommodate such mislabeling
Many Microsoft programs, such as Word will automatically substitute Windows-1252 characters when standard ASCII characters are entered, such as for "smart quotes" (e.g. substituting ’ for the apostrophe in a contraction) or substituting © for the three characters '(c)'.
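You can see both effects from Python by decoding the same byte under each label (0x92 is the byte a Windows-1252 client sends for a smart apostrophe):

>>> b'\x92'.decode('windows-1252')   # U+2019, a visible right single quote
'’'
>>> b'\x92'.decode('iso-8859-1')     # U+0092, an invisible control character
'\x92'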
KRL supports all of the language charsets supported by UTF-8, so it supports multi-byte international characters natively; however, that comes at the expense of being able to fudge encodings that is possible when you only have ISO-8859-1 or Windows-1252 to choose from.
What’s the difference between a URL Encode and an HTML Encode?
HTML Encoding escapes special characters in strings used in HTML documents, to prevent them from being confused with HTML markup; for example, changing
"<hello>world</hello>"
to
"<hello>world</hello>"
URL Encoding does a similar thing for string values in a URL like changing
"hello+world = hello world"
to
"hello%2Bworld+%3D+hello+world"
urlEncode replaces special characters with characters that can be understood by web browsers/web servers for the purpose of addressing... hence URL. For instance, spaces are replaced with %20, ' = %27 etc...
See these references:
http://www.blooberry.com/indexdot/html/topics/urlencoding.htm
http://www.degraeve.com/reference/urlencoding.php
HtmlEncode replaces special characters with character strings that are recognised by the HTML engine itself to render the content of the page - things like & become &amp;, < = &lt;, > = &gt;. This prevents the HTML engine from interpreting these characters as parts of the HTML markup, so they render as if they were plain strings.
See this reference:
http://msdn.microsoft.com/en-us/library/ms525347.aspx
Both HTML and URLs are essentially very constrained languages. As languages they add meaning to specific keywords or operators. For both of these languages, though, keywords are almost always single characters. For example:
HTML: > and <
URL: / and :
When using each language, though, you may want these characters to appear literally, without their language meaning. For instance, this post contains a > character. I do not want it to be interpreted as HTML, just as text.
This is where Encode and Decode methods come into play. These methods will respectively take a string and convert any of the characters that would otherwise be treated as keywords into an escaped form which will not be interpreted as part of the language.
For instance: Passing > into HtmlEncode will return &gt;
HTMLEncode and URLEncode deal with invalid characters in HTML and URLs, or more accurately, characters that need to be specially written to be interpreted correctly. For example, in HTML the < and > characters are used to indicate tags. Thus, if you wanted to write a math formula, something like 1+1 < 2+2, the '<' would normally be interpreted as the beginning of a tag. HTMLEncoding turns this character into "&lt;", which is the encoded representation of the less-than sign. URLEncoding does the same, but for URLs, for which the special characters are different, although there is some overlap.
I don't know what language you are working in, but the PHP manual for example provides good explanations.
URLEncode
Returns a string in which all non-alphanumeric characters except -_. have been replaced with a percent (%) sign followed by two hex digits, and spaces encoded as plus (+) signs. It is encoded the same way that the posted data from a WWW form is encoded, that is the same way as in the application/x-www-form-urlencoded media type. This differs from the RFC 1738 encoding (see rawurlencode()) in that for historical reasons, spaces are encoded as plus (+) signs.
Read on
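That last distinction (form-style urlencode vs. RFC-style rawurlencode) has direct equivalents in other languages; for instance, in Python:

import urllib.parse

s = "10% of $5"
print(urllib.parse.quote(s))        # like rawurlencode(): space -> %20
print(urllib.parse.quote_plus(s))   # like urlencode(): space -> +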