Character encoding in Auto-py-to-exe

Is it possible to set the character encoding in auto-py-to-exe? That is, is there a command to set (for example) UTF-8 while converting the .py to an .exe?
Thank you very much in advance for any answers!

Related

Japanese encoding JIS_X_0208 codepage in python and C++

I am trying to encode and decode Japanese characters that are encoded in JIS_X_0208.
In Python (2.x) I use this command to convert my string from UTF-8 to the Japanese encoding:
string.decode('utf8').encode('iso2022_jp')
to encode the kanji properly.
In C++ I decode it to UTF-16 with this line:
MultiByteToWideChar(932, 0, &s[0], s.size(), &unicodeBuffer[0], s.size());
All the kanji are properly encoded/decoded.
But the problem is that it is not compliant with JIS_X_0208. I should specify that the usage of JIS_X_0208 is mandatory and I can't change it.
For instance, roman characters are supposed to be encoded as two bytes with the first one being 0x23; for example, the letter T should be encoded as 0x23 0x54 (according to both the JIS_X_0208 Wikipedia page and the sample I was given as an example).
I guess the only issue I have is finding the correct codepage for the encoding, but I can't find the one I need.
Does anyone know what the correct codepage is, or at least where I can find the available codepages for C++ and Python on Windows?
Thank you in advance.
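Incidentally, Unicode maps the JIS_X_0208 row-0x23 alphanumerics to the FULLWIDTH forms rather than plain ASCII, which is why an ordinary 'T' does not come out as 0x23 0x54. A minimal Python 3 sketch, illustrative only, using the standard iso2022_jp codec to reach the bare two-byte JIS code:
text = "\uFF34"                       # FULLWIDTH LATIN CAPITAL LETTER T
encoded = text.encode("iso2022_jp")   # ISO-2022-JP wraps JIS X 0208 in escape sequences
print(encoded)                        # b'\x1b$B#T\x1b(B'
raw = encoded[3:-3]                   # strip the ESC $ B ... ESC ( B wrapper
print(raw.hex())                      # '2354' -> 0x23 0x54, as the spec expects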

Trouble making a heart symbol in Lua?

I was wondering how to make the heart sign, "♥", in Lua. I have tried \003 because that is the ASCII code for it, but it does not print it out.
This has little to do with Lua.
You need to find out which character set and encoding is used in your environment and select a font that supports ♥ in that encoding.
Then you need to use an editor for your Lua script that saves in that encoding. If that part is not possible then you can determine the byte sequence required, code it as numeric escapes in a literal string and save in a compatible encoding such as CP437. For example, if you are outputting to a UTF-8 processor, "\xE2\x99\xA5".
Keep in mind that a Lua string is a counted sequence of bytes. It's up to you and your editor to put the right bytes in the file, up to your environment (e.g., the console) to interpret those bytes in a particular character encoding, and up to the font to display the glyph.
In a Windows console, you can select the Lucida Console font, run chcp 65001 to use UTF-8, and use Lua 5.1 like this: lua -e "print('\226\153\165')". As a comparison, chcp 437 selects IBM437, where Lua 5.1 can be used like this: lua -e "print('\003')".
In ASCII, only the range 0x20 to 0x7E is printable. Other values, including 0x03, aren't printable; what printing them produces is up to the implementation.
If the environment supports Unicode, you can simply call:
print("♥")
For instance, Lua Demo outputs ♥, same in ideone.
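As a cross-check of the escapes quoted above, a small illustrative Python 3 sketch (the byte values themselves are language-independent):
heart = "\u2665"                 # BLACK HEART SUIT
utf8 = heart.encode("utf-8")
print(utf8.hex())                # 'e299a5' -> "\xE2\x99\xA5"
print(list(utf8))                # [226, 153, 165] -> Lua's "\226\153\165"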

Encode ASCII Files

This question will probably have a very simple answer, which is yes or no, I guess?
If, from a 64-bit Unicode Delphi app, I encode my string list like this:
StringList.SaveToFile(FileName, TEncoding.ASCII);
is there any other limitation or difference in file layout when writing this file with the statement
StringList.SaveToFile(FileName);
or
StringList.SaveToFile(FileName, TEncoding.UTF8);
I'm worried about line-length and control-character issues between the versions... the answer NO would make me happy.
UTF-8 and the Windows 'Ansi' codepages are all supersets of ASCII. As such, if the string list only contains characters in the ASCII range, the three statements you listed will be equivalent if you prepend the last with this:
StringList.WriteBOM := False;
This is because by default, TStrings will write out a small marker (a BOM) to denote UTF-8 text.
The difference is simply in the encoding used. This in turn, of course, leads to differences in size. So ASCII files will be smaller than UTF-16 files (which is what you get with TEncoding.Unicode), and UTF-8 files could be the same size as ASCII files, or larger than UTF-16 files.
I guess you are asking if using ASCII or UTF-8 in any way damages the text that is written. Well, using ASCII will if the text contains non-ASCII characters. ASCII can only encode 128 characters.
On the other hand, UTF-8 is a full encoding of Unicode. Which means that
StringList.SaveToFile(FileName, TEncoding.UTF8);
StringList.LoadFromFile(FileName, TEncoding.UTF8);
results in the list having exactly the same content as it did before the save.
You ask if lines can be truncated by SaveToFile. They cannot.
Another point to make is that 32/64 bit is not relevant here. The code behaves in exactly the same way under 32 and 64 bit. The issues are always to do with encoding.
I would also note that the title of your question is somewhat misleading. When you encode with TEncoding.UTF8 you do not have an ASCII file.
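To make the equivalence concrete, a brief illustrative Python 3 sketch of the same byte-level facts (the behaviour in Delphi is analogous):
lines = "hello\nworld\n"
ascii_bytes = lines.encode("ascii")
utf8_bytes = lines.encode("utf-8")
print(ascii_bytes == utf8_bytes)       # True: identical streams for ASCII-range text
print(b"\xef\xbb\xbf" + utf8_bytes)    # a UTF-8 BOM merely prepends three bytes
print("caf\u00e9".encode("utf-8"))     # non-ASCII: 'é' becomes two bytes in UTF-8
# "café".encode("ascii") would raise UnicodeEncodeError: that is the data loss to fear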

lua string.upper not working with accented characters?

I'm trying to convert some French text to upper case in Lua, but it is not converting the accented characters. Any idea why?
test script:
print('échelle')
print(string.upper('échelle'))
print('ÉCHELLE')
print(string.lower('ÉCHELLE'))
output:
échelle
éCHELLE
ÉCHELLE
Échelle
It might be a bit overkill, but you can do this with slnunicode (which is available in LuaRocks).
require "unicode"
print(unicode.utf8.upper("échelle"))
-- ÉCHELLE
You may need to use unicode.ascii.upper or unicode.latin1.upper depending on the encoding of your source files.
You need to set a suitable locale, which depends how these strings are encoded in the source.
You seem to be using Latin 1 because of the output you gave.
In this case, try adding the line below at the top of your script:
os.setlocale("fr_FR.ISO8859-1")
This name is for Mac OS X. For Linux, try
os.setlocale("fr_FR.iso88591")
If you're using UTF-8, then setting a locale won't help, because string.lower converts the string one byte at a time.
Lua just uses the C library function toupper, which AFAIK doesn't support accented characters. You'd need to write a routine for that yourself.
To explain this more broadly: Lua does not have built-in support for non-ASCII strings. You can store a Latin-1 or UTF-8-encoded string, but none of the special string manipulation functions (upper, lower, etc.) will work on any non-ASCII character.
There are Lua libraries that add varying degrees of Unicode support. So you will have to use one of them.
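The byte-at-a-time behaviour is easy to reproduce outside Lua. An illustrative Python 3 sketch, assuming the source is UTF-8, that mimics a per-byte toupper and yields the same result as the question:
s = "\u00e9chelle"                    # 'échelle'; 'é' is the two bytes C3 A9 in UTF-8
utf8 = s.encode("utf-8")
print(utf8.hex())                     # 'c3a96368656c6c65'
# A per-byte toupper leaves C3 and A9 alone, since neither byte is a-z:
upper = bytes(b - 32 if 0x61 <= b <= 0x7A else b for b in utf8)
print(upper.decode("utf-8"))          # 'éCHELLE', as in the question
print(s.upper())                      # 'ÉCHELLE' from a Unicode-aware upper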

Can anyone tell me how to convert UTF-8 value to UCS-2 value in Objective-c?

I am trying to convert a UTF-8 string into a UCS-2 string.
I need to get a string like "\uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875".
I have googled for about a month by now, but still there is no reference about converting UTF-8 to UCS-2.
Please, someone help me.
Thanks in advance.
EDIT: okay, maybe my explanation was not good enough. Here is what I am trying to do.
I live in Korea, and I am trying to send an SMS message using CTMessageCenter. I tried to send simplified Chinese characters through my app, and I got ???? instead of the proper characters. So I tried UTF-8, UTF-16, BE and LE as well, but they all return ??. Finally I found out that SMS uses the UCS-2 and EUC-KR encodings in Korea. Weird, isn't it?
Anyway, I tried to send a string like \u4E3B\u9875 and it worked.
So I need to convert my string into the UCS-2 encoding first and get the string literal from those strings.
Wikipedia:
The older UCS-2 (2-byte Universal Character Set) is a similar character encoding that was superseded by UTF-16 in version 2.0 of the Unicode standard in July 1996. It produces a fixed-length format by simply using the code point as the 16-bit code unit and produces exactly the same result as UTF-16 for 96.9% of all the code points in the range 0-0xFFFF, including all characters that had been assigned a value at that time.
IBM:
Since the UCS-2 standard is limited to 65,535 characters, and the data processing industry needs over 94,000 characters, the UCS-2 standard is in the process of being superseded by the Unicode UTF-16 standard. However, because UTF-16 is a superset of the existing UCS-2 standard, you can develop your applications using the system's existing UCS-2 support as long as your applications treat the UCS-2 as if it were UTF-16.
unicode.org:
UCS-2 is obsolete terminology which refers to a Unicode implementation up to Unicode 1.1, before surrogate code points and UTF-16 were added to Version 2.0 of the standard. This term should now be avoided.
UCS-2 does not define a distinct data format, because UTF-16 and UCS-2 are identical for purposes of data exchange. Both are 16-bit, and have exactly the same code unit representation.
So, using the "UTF8toUnicode" transformation in most language libraries will produce UTF-16, which is essentially UCS-2. And simply extracting the 16-bit characters from an Objective-C string will accomplish the same thing.
In other words, the solution has been staring you in the face all along.
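For illustration, a short Python 3 sketch of that idea: decode the UTF-8 bytes to a string, then print each 16-bit UTF-16 code unit as a \uXXXX escape (in Objective-C you would read the unichar values of the NSString instead):
utf8_bytes = "\uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875".encode("utf-8")
text = utf8_bytes.decode("utf-8")     # UTF-8 bytes -> Unicode string
units = text.encode("utf-16-be")      # big-endian 16-bit code units (BMP-only here)
escapes = "".join("\\u%04X" % int.from_bytes(units[i:i + 2], "big")
                  for i in range(0, len(units), 2))
print(escapes)                        # \uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875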
UCS-2 is not a valid Unicode encoding. UTF-8 is.
It is therefore impossible to convert UTF-8 into UCS-2 — and indeed, also the reverse.
UCS-2 is dead, ancient history. Let it rot in peace.
