I'm running into issues uploading an iOS .strings file (English -> Japanese) to a Node server for parsing.
The file is UTF-16LE, but when it is parsed as a string, characters are lost. This may have something to do with Express reading the request file data as UTF-8, which mangles the file data.
When the file is loaded in Atom/Sublime with UTF-16 encoding it looks great.
When the file is loaded as UTF-8, things break down.
Any help would be awesome.
After doing some research and digging, it turns out that, using the npm module iconv-lite to parse the file buffer, one should:
1) parse the buffer as utf16le
2) convert it down to utf8
3) convert it to a string.
const iconv = require('iconv-lite');

if (encoding === 'utf-16le') {
  // decode the raw buffer as UTF-16LE, then re-encode the string as UTF-8
  str = iconv.decode(buffer, 'utf16le');
  body = iconv.encode(str, 'utf8').toString();
} else if (encoding === 'utf-16be') {
  str = iconv.decode(buffer, 'utf16be');
  body = iconv.encode(str, 'utf8').toString();
} else {
  // already UTF-8 / ASCII: just concatenate the chunks and stringify
  body = Buffer.concat(file.data).toString();
}
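In case it helps, here is a minimal sketch of how the encoding could be detected from the byte-order mark before decoding (the detectEncoding helper and the sample buffer are just for illustration; with something like express-fileupload the uploaded Buffer would typically be req.files.<name>.data):

const iconv = require('iconv-lite');

// Guess the encoding from the byte-order mark: FF FE = UTF-16LE, FE FF = UTF-16BE.
function detectEncoding(buf) {
  if (buf.length >= 2 && buf[0] === 0xff && buf[1] === 0xfe) return 'utf16le';
  if (buf.length >= 2 && buf[0] === 0xfe && buf[1] === 0xff) return 'utf16be';
  return 'utf8'; // no BOM: assume UTF-8
}

// Stand-in for the uploaded file data: a UTF-16LE buffer with a BOM.
const fileBuffer = Buffer.concat([
  Buffer.from([0xff, 0xfe]),
  iconv.encode('"greeting" = "こんにちは";\n', 'utf16le'),
]);

const body = iconv.decode(fileBuffer, detectEncoding(fileBuffer)); // iconv-lite strips the BOM by default
console.log(body);

iconv.decode also handles plain UTF-8 buffers, so the same call covers the fallback case.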
We have a link module that looks something like this:
const string lMod = "/project/_admin/somethingÜ" // Umlaut
We later use lMod like this to loop through the outlinks:
for a in obj->lMod do {}
But this only works when executing directly from DOORS, not from a batch script: for some reason the script doesn't recognize the umlaut, so the body of the loop is never run. Replacing lMod with "*" works and also shows the objects linked to via lMod.
We are already using UTF-8 encoding for the file:
pragma encoding, "UTF-8"
Any solutions are welcome.
Encode the file as UTF-8 in Notepad++ by going to Encoding > Convert to UTF-8. (Make sure it's not already set to UTF-8 before you do it).
I've got a string variable containing text that I need to encode and write to a file in the UTF-16LE code page.
Currently, the following code generates a UTF-8 file, and I don't see any option in the OPEN DATASET statement to generate the file in UTF-16LE.
REPORT zmyprogram.
DATA(filename) = `/tmp/myfile`.
OPEN DATASET filename IN TEXT MODE ENCODING DEFAULT FOR OUTPUT.
TRANSFER 'HELLO WORLD' TO filename.
CLOSE DATASET filename.
I guess one solution is to first encode the string in memory, then write the encoded bytes to the file.
Generally speaking, how to encode a string of characters into a given code page, in memory?
In the first part, I explain how to encode a string of characters into a given code page (all is done in memory), and in the second part, I explain specifically how to write files to the application server in a given code page.
1) General way (all in memory)
If a string of characters (type STRING) has to be encoded, the result has to be stored in a string of bytes, which corresponds to the built-in data type XSTRING.
There are several possibilities which depend on the ABAP version:
Since 7.53, use the class CL_ABAP_CONV_CODEPAGE:
DATA(xstring) = cl_abap_conv_codepage=>create_out( codepage = `UTF-16LE` )->convert( source = `ABCDE` ).
Since 7.02, use the class CL_ABAP_CODEPAGE:
DATA xstring TYPE xstring.
xstring = cl_abap_codepage=>convert_to( source = `ABCDE` codepage = `UTF-16LE` ).
Before 7.02, use the class CL_ABAP_CONV_OUT_CE (documentation provided with the class):
First, instantiate the conversion object, using an SAP code page number instead of the ISO name (list of values shown below):
DATA: conv TYPE REF TO CL_ABAP_CONV_OUT_CE, xstring TYPE xstring.
conv = CL_ABAP_CONV_OUT_CE=>CREATE( encoding = '4103' ). "4103 = utf-16le
Then encode the string and retrieve the bytes encoded:
conv->RESET( ).
conv->WRITE( data = `ABCDE` ).
xstring = conv->GET_BUFFER( ).
Alternatively, instead of using RESET, WRITE and GET_BUFFER, you may use the method CONVERT, which was added in 6.40 and backported:
conv->CONVERT( EXPORTING data = `ABCDE` IMPORTING buffer = xstring ).
With the class CL_ABAP_CONV_OUT_CE, you need to use the number of the SAP Code Page, not the ISO name. Here are the most common SAP code pages and their equivalent ISO names:
1100: ISO-8859-1
1101: US-ASCII
1160: Windows-1252 ("ANSI")
1401: ISO-8859-2
4102: UTF-16BE
4103: UTF-16LE
4104: UTF-32BE
4105: UTF-32LE
4110: UTF-8
Etc. (the possible values are defined in the table TCP00A, in lines with column CPATTRKIND = 'H').
2) Writing a file on the application server in a given code page
In ABAP, OPEN DATASET can directly specify the target code page. Most code pages are supported, including UTF-8, but not the other UTF code pages (41xx); those can be written only with the workaround explained in 2.3 below (encoding in memory first).
2.1) IN TEXT MODE ENCODING ...
Possible ENCODING values:
UTF-8: in this mode, it's possible to add the Byte Order Mark if needed, via the option WITH BYTE-ORDER MARK.
DEFAULT: will be UTF-8 in a SAP "Unicode" system (that you can check via the menu System > Status > Unicode System Yes/No), NON-UNICODE otherwise.
NON-UNICODE: will depend on the current ABAP linguistic environment; for language English it's the character encoding iso-8859-1, for language Polish it's iso-8859-2, etc. (the equivalences are shown in table TCP0C).
Example in ABAP version 7.52 to write to UTF-8 with the byte order mark:
REPORT zmyprogram.
DATA(filename) = `/tmp/dataset_utf_8`.
OPEN DATASET filename IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK FOR OUTPUT.
TRY.
TRANSFER `Witaj świecie` TO filename.
CATCH cx_sy_conversion_codepage INTO DATA(lx).
" Character not supported in language code page
ENDTRY.
CLOSE DATASET filename.
Example in ABAP version 7.52 to write to iso-8859-2 (Polish language here):
REPORT zmyprogram.
SET LOCALE LANGUAGE 'L'. " Polish
DATA(filename) = `/tmp/dataset_nonunicode_pl`.
OPEN DATASET filename IN TEXT MODE ENCODING NON-UNICODE FOR OUTPUT.
TRY.
TRANSFER `Witaj świecie` TO filename.
CATCH cx_sy_conversion_codepage INTO DATA(lx).
" Character not supported in language code page
ENDTRY.
CLOSE DATASET filename.
2.2) IN LEGACY TEXT MODE CODE PAGE ...
Use any code page number except code pages 41xx (i.e. UTF-8 and other UTF; see workaround in 2.3 below).
Example in ABAP version 7.52 to write to iso-8859-2 (code page 1401):
REPORT zmyprogram.
DATA(filename) = `/tmp/dataset_iso_8859_2`.
OPEN DATASET filename IN LEGACY TEXT MODE CODE PAGE '1401' FOR OUTPUT. " iso-8859-2
TRY.
TRANSFER `Witaj świecie` TO filename.
CATCH cx_sy_conversion_codepage INTO DATA(lx).
" Character not supported in language code page
ENDTRY.
CLOSE DATASET filename.
2.3) UTF = general way + IN BINARY MODE
Example in ABAP version 7.52:
REPORT zmyprogram.
TRY.
DATA(xstring) = cl_abap_codepage=>convert_to( source = `Witaj świecie` codepage = `UTF-16LE` ).
CATCH cx_sy_conversion_codepage INTO DATA(lx).
" Character not supported in language code page
BREAK-POINT.
ENDTRY.
DATA(filename) = `/tmp/dataset_utf_16le`.
OPEN DATASET filename IN BINARY MODE FOR OUTPUT.
TRANSFER xstring TO filename.
CLOSE DATASET filename.
This code:
var str = "ぴ" //Japanese: pi
Xcode reports a compilation error:
Unprintable ASCII character found in source file
Have you tried this?
func convertString(string: String) -> String {
    var data = string.dataUsingEncoding(NSASCIIStringEncoding, allowLossyConversion: true)
    return NSString(data: data!, encoding: NSASCIIStringEncoding) as! String
}
From How to deal with a user input string that gives an "unprintable ascii character found in source file" error when pasted into Xcode?
This answer is for people who are using a Windows keyboard with a Mac.
While typing, you might have unknowingly pressed the right-click (menu) key on your keyboard, which inserts the unprintable ASCII character that causes the "Unprintable ASCII character found in source file" error.
To check this, open Find (Cmd+F), press the right-click key in the search field, and you will find those characters in your class; delete them and the error will be gone.
SWIFT 4:
func convertString(string: String) -> String {
    let data = string.data(using: String.Encoding.ascii, allowLossyConversion: true)
    return NSString(data: data!, encoding: String.Encoding.ascii.rawValue)! as String
}
In a Playground your line won't give you an error as it is, since Playgrounds use UTF-8 encoding by default. In a regular project, the File inspector lets you set the text encoding in the Text Settings section. I recommend using some flavor of Unicode: probably UTF-8 if your app deals primarily with Western languages, and UTF-16 if you have a lot of text constants in Asian languages.
This is just for the record, since this is an old question and you probably found a working solution years ago.
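If you prefer to keep the source file ASCII-only so the file encoding never becomes an issue, a Unicode escape also works; a small sketch (ぴ is U+3074):

// Same string as "ぴ", written with a Unicode escape so the source file
// contains only ASCII characters.
var str = "\u{3074}"
print(str) // prints ぴ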
Here is a piece of Chinese UTF-8 text that is Base64-encoded by PHP on the server side, but when I decode it on iOS, it returns null.
I also tried this online tool, where the text decodes fine.
NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:content options:0];
content = [[NSString alloc] initWithData:decodedData encoding:NSUTF8StringEncoding];
5aW96ZuF77yM5ZKx5p2l5LiA5L+X55qE77yM5pS56Ieq5Lic5Y2X6KW/5YyX6aOO44CCCuS4juS9oOebuOmAou+8jOWFqOaYr+acuue8mOW3p+WQiOOAguWPr+Wtpui1t+adpeWNtOW/g+aGlOaCtOOAggrmgLvmmK/ovpPkuoborqnlho3ljrvlrabvvIzlrabkuobo
Here is the test code for debugging this issue in Xcode:
NSString *content = @"5aW96ZuF77yM5ZKx5p2l5LiA5L+X55qE77yM5pS56Ieq5Lic5Y2X6KW/5YyX6aOO44CCCuS4juS9oOebuOmAou+8jOWFqOaYr+acuue8mOW3p+WQiOOAguWPr+Wtpui1t+adpeWNtOW/g+aGlOaCtOOAggrmgLvmmK/ovpPkuoborqnlho3ljrvlrabvvIzlrabkuobo";
NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:content options:0];
content = [[NSString alloc] initWithData:decodedData encoding:NSUTF8StringEncoding];
NSLog(@"%@", content);
Your revised question features a base64 string of:
5aW96ZuF77yM5ZKx5p2l5LiA5L+X55qE77yM5pS56Ieq5Lic5Y2X6KW/5YyX6aOO44CCCuS4juS9oOebuOmAou+8jOWFqOaYr+acuue8mOW3p+WQiOOAguWPr+Wtpui1t+adpeWNtOW/g+aGlOaCtOOAggrmgLvmmK/ovpPkuoborqnlho3ljrvlrabvvIzlrabkuobo
This string has a length that is a multiple of four characters, so the lack of the =/== terminator at the end is not the problem. And, in fact, initWithBase64EncodedString decodes it successfully:
e5a5bde9 9b85efbc 8ce592b1 e69da5e4 b880e4bf 97e79a84 efbc8ce6 94b9e887
aae4b89c e58d97e8 a5bfe58c 97e9a38e e380820a e4b88ee4 bda0e79b b8e980a2
efbc8ce5 85a8e698 afe69cba e7bc98e5 b7a7e590 88e38082 e58fafe5 ada6e8b5
b7e69da5 e58db4e5 bf83e686 94e682b4 e380820a e680bbe6 98afe8be 93e4ba86
e8aea9e5 868de58e bbe5ada6 efbc8ce5 ada6e4ba 86e8
The issue here is that this appears not to be a valid UTF-8 string. In fact, when I run it through the http://base64decode.net site you referenced in your original question, it is also unable to convert it to a UTF-8 string (I notice that your screen snapshots are using a different converter web site). When I ran it through another converter, it converted what it could, but then complained about the character following 学了 (which is, coincidentally, the character at which your base64 converter web site stopped, too).
By the way, the UTF-8 representation of 了 is e4 ba 86. You'll see that near the end of the hex representation of your base64 string, followed by one more byte, e8. The thing is, e8 by itself is not a valid UTF-8 sequence. It almost looks like you took a base64-encoded string and just grabbed the first 200 characters, disregarding whether that cut a UTF-8 character off in the middle.
The original question featured a base64 string of:
5aW96ZuF77yM5ZKx5p2l5LiA5L+X55qE77yM5pS56Ieq5Lic5Y2X6KW/5YyX6aOO44CCCuS4juS9oOebuOmAou+8jOWFqOaYr+acuue8mOW3p+WQiOOAguWPr+Wtpui1t+adpeWNtOW/g+aGlOaCtOOAggrmgLvmmK/
That is not valid base64. Its length should be a multiple of four characters, but that is only 163 characters, so a character is missing. Either your server isn't properly terminating the base64 string, or it got cut off for some reason.
For example, if I add a = to get it up to 164 characters, I get a valid base64 string:
5aW96ZuF77yM5ZKx5p2l5LiA5L+X55qE77yM5pS56Ieq5Lic5Y2X6KW/5YyX6aOO44CCCuS4juS9oOebuOmAou+8jOWFqOaYr+acuue8mOW3p+WQiOOAguWPr+Wtpui1t+adpeWNtOW/g+aGlOaCtOOAggrmgLvmmK/=
Adding the = would be the right solution if the server simply neglected to terminate the base64 string properly. Anyway, that can be base64-decoded to:
好雅,咱来一俗的,改自东南西北风。
与你相逢,全是机缘巧合。可学起来却心憔悴。
总是
Is that what you were expecting?
Perhaps you should take a look at your base64 routine on your server? Or if it's getting truncated, look at how you are receiving it and compare the server's original base64 string length to what you have here.
For information about adding = or == to the end of a base64-encoded string, see the base64 Wikipedia page.
@Rob is right.
Check this: NSData won't accept valid base64 encoded string
But if your server is not returning a valid base64 string terminated with "=" or "==", then you need to use external methods to perform the base64 decode. Those methods can decode even if the base64 string does not have the "=" padding at the end.
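If you would rather just repair the missing padding yourself before decoding, something along these lines should work (decodePaddedBase64 is a made-up helper name, shown only as a sketch):

// Pad a base64 string with '=' up to a multiple of four characters,
// then decode it as UTF-8 text. Returns nil if decoding still fails.
static NSString *decodePaddedBase64(NSString *base64) {
    NSUInteger remainder = base64.length % 4;
    NSString *padded = base64;
    if (remainder > 0) {
        padded = [base64 stringByPaddingToLength:base64.length + (4 - remainder)
                                      withString:@"="
                                 startingAtIndex:0];
    }
    NSData *data = [[NSData alloc] initWithBase64EncodedString:padded options:0];
    return data ? [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding] : nil;
}

Note this only fixes missing padding; if the string was truncated in the middle of a UTF-8 character, as discussed above, the decode may still return nil.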
I have localised my app by adding the correct resource files for various European languages / dialects.
I have the required folder in my project: ./res/com/demo/localization
It contains the required files e.g. Demo.rrh, Demo.rrc, Demo_de.rrc etc.
I want to add support for 2 Chinese dialects, and I have the translations in an Excel file. On iPhone, they are referred to by the codes zh_TW & zh_CN. Following the pattern with German, I created 2 extra files called Demo_zh_TW.rrc & Demo_zh_CN.rrc.
I opened the file Demo_zh_CN.rrc using Eclipse's text editor and pasted in a line of the Chinese translation using the normal resource file format:
START_LOCATION#0="开始位置";
When I tried to save the file, I got Eclipse's error about the Cp1252 character encoding:
Save could not be completed.
Reason:
Some characters cannot be mapped using "Cp1252" character encoding.
Either change the encoding or remove the characters which are not
supported by the "Cp1252" character encoding.
It seems the Eclipse editor will accept the Chinese characters, but the resource tool expects these characters to be saved in the resource file as Java Unicode \u escapes.
How do I add language support for these 2 regions without manually copying and pasting in each string?
Is there maybe a tool that I can use to \u-escape the strings from Excel so they can be saved in code page 1252 using Latin characters only?
I'm not aware of any readily available tools for working with BlackBerry's peculiar localization style.
Here's a snippet of Java-SE code I use to convert the UTF-8 strings I get for use with BlackBerry:
private static String unicodeEscape(String value, CharsetEncoder encoder) {
    StringBuilder sb = new StringBuilder();
    for (char c : value.toCharArray()) {
        if (encoder.canEncode(c)) {
            sb.append(c);
        } else {
            sb.append("\\u");
            sb.append(hex4(c));
        }
    }
    return sb.toString();
}

private static String hex4(char c) {
    String ret = Integer.toHexString(c);
    while (ret.length() < 4) {
        ret = "0" + ret;
    }
    return ret;
}
Call unicodeEscape with the ISO-8859-1 encoder obtained from Charset.forName("ISO-8859-1").newEncoder(), as in the sketch below.
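For example (a rough sketch, assuming this main method lives in the same class as unicodeEscape and hex4 above):

import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

// Assumes this main method sits in the same class as unicodeEscape/hex4.
public static void main(String[] args) {
    CharsetEncoder encoder = Charset.forName("ISO-8859-1").newEncoder();
    // Characters that ISO-8859-1 cannot represent come out as backslash-u escapes:
    String escaped = unicodeEscape("START_LOCATION#0=\"开始位置\";", encoder);
    System.out.println(escaped);
    // prints: START_LOCATION#0="\u5f00\u59cb\u4f4d\u7f6e";
}

The escaped line can then be pasted into Demo_zh_CN.rrc and saved as Cp1252 without Eclipse complaining.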
I suggest you look at Blackberry Hindi and Gujarati text display
You need to use the resource editor to make these files with the right encoding. Eclipse will escape the characters automatically.
This is a problem with the encoding of your resource file. 1252 Code Page contains Latin characters only.
I have never worked with Eclipse, but there should be somewhere you can specify the encoding of the file; set your default encoding for files to UTF-8 if possible. This will handle your Chinese characters.
You could also use a good editor like Notepad++ or EMEditor to set the encoding of your file.
See here for how you can configure Eclipse to use UTF-8 by default.