I'm using the Entypo font in my iPhone app, but it only works for some characters. I'm not able to display icons that use five-digit Unicode values.
I found some information on the Web saying this is due to the UTF encoding supported on iOS (and in other languages too), and that 5-digit Unicode values should be split into two values.
But I haven't found a clear how-to description or a code sample.
My code to display an Entypo symbol looks something like this:
myLabel.text = [NSString stringWithUTF8String:"\u25B6"];
myLabel.font = [UIFont fontWithName:@"Entypo" size:200];
If I replace the Unicode value with "\u1F342", which is the leaf icon in the Entypo font, an invalid character is displayed.
If you have already run into this issue, perhaps you can help me save some time.
Thanks very much
If you check out the Unicode page for that character, you'll see that its UTF-8 encoding is 0xF0 0x9F 0x8D 0x82 - that's what you should be using:
myLabel.text = [NSString stringWithUTF8String:"\xF0\x9F\x8D\x82"];
Note: totally untested.
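If you want to double-check the bytes, an equally untested sketch is to build the same character from its UTF-32 code point and compare the two strings:
uint32_t codePoint = 0x1F342; // the leaf character
NSString *fromBytes = [NSString stringWithUTF8String:"\xF0\x9F\x8D\x82"];
NSString *fromCodePoint = [[NSString alloc] initWithBytes:&codePoint
                                                   length:sizeof(codePoint)
                                                 encoding:NSUTF32LittleEndianStringEncoding];
NSLog(@"same string: %d", [fromBytes isEqualToString:fromCodePoint]); // expect 1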
After several searches I finally found a solution that is easy to use in both cases: symbols encoded with up to 4 digits and symbols encoded with more than 4 digits.
I defined an NSString category as follows:
#import "NSString+Extension.h"
#implementation NSString (Extension)
/**
* Convert a UTF8 symbol to a string which can directly be used as text in a label view for instance, for which the right font has been specified.
*
* The method can be used for both cases
* . the symbol is defined as a const char with a maximum of 4 digits. In this case the first parameter must be 0 and the second is used. Example: NSString *symbolString = [NSString symbolStringfromUnicode:0 orChar:"\uE766"]
* . the symbol is defined as an integer with hexadecimal notation. It can be have either less or more than 4 digits. In this case, only the first parameter is used. Example : NSString *prefixSymbol = [NSString symbolStringfromUnicode:0x1F464 orChar:nil];
*
* #param symbolUnicode symbol to convert defined as int
* #param symbolChar symbol to convert defined as const char *
*
*/
+ (NSString *)symbolStringfromUnicode:(int)symbolUnicode orChar:(const char *)symbolChar
{
NSString *symbolString;
if (symbolUnicode == 0) {
symbolString = [NSString stringWithUTF8String:symbolChar];
}
else {
int unicode = symbolUnicode;
symbolString = [[NSString alloc] initWithBytes:&unicode length:sizeof(unicode) encoding:NSUTF32LittleEndianStringEncoding];
}
return symbolString;
}
#end
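For reference, calling the category then looks like this (a small untested sketch; myLabel is just the label from the original question):
myLabel.font = [UIFont fontWithName:@"Entypo" size:200];
// 5-digit code point: pass it as the first parameter
myLabel.text = [NSString symbolStringfromUnicode:0x1F342 orChar:nil];
// up to 4 digits: pass 0 and use the char literal
myLabel.text = [NSString symbolStringfromUnicode:0 orChar:"\u25B6"];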
I'm working with C in an iOS project. I'm trying to convert my string to the expected type in C; the code below is supposed to be sent to a core library:
typedef uint16_t UniCharT;
static const UniCharT s_learnWord[] = {'H', 'e','l','\0'};
What I have done so far (string is the one I'm passing):
NSString * string = @"Hel";
static const UniCharT *a = (UniCharT *)[string UTF8String];
But the conversion fails when there is more than one character; if I pass a single character it works fine. Please tell me where I went wrong. How can I pass it like s_learnWord?
I searched Google and Stack Overflow, and none of the duplicates or answers worked for me, e.g.
Convert NSString into char array - I'm already doing it the same way.
Your question is a little ambiguous as the title says "c type char[]" but your code uses typedef uint16_t UniCharT; which is contradictory.
For any string conversions other than UTF-8, you normally want to use the method getCString:maxLength:encoding:.
As you are using uint16_t, you probably are trying to use UTF-16? You'll want to pass NSUTF16StringEncoding as the encoding constant in that case. (Or possibly NSUTF16BigEndianStringEncoding/NSUTF16LittleEndianStringEncoding)
Something like this should work:
#include <stdlib.h>
// ...
NSString * string = @"part";
NSUInteger stringBytes = [string maximumLengthOfBytesUsingEncoding:NSUTF16StringEncoding];
stringBytes += sizeof(UniCharT); // make space for \0 termination
UniCharT* convertedString = calloc(1, stringBytes);
[string getCString:(char*)convertedString
maxLength:stringBytes
encoding:NSUTF16StringEncoding];
// now use convertedString, pass it to library etc.
free(convertedString);
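If you want to sanity-check the result before handing it to the library, a quick untested sketch is to dump the UTF-16 code units; the little-endian variant mentioned above avoids a byte-order mark and matches the byte order of iOS devices:
NSString *string = @"Hel";
NSUInteger stringBytes = [string maximumLengthOfBytesUsingEncoding:NSUTF16LittleEndianStringEncoding];
stringBytes += sizeof(UniCharT); // space for the \0 termination
UniCharT *convertedString = calloc(1, stringBytes);
[string getCString:(char *)convertedString
         maxLength:stringBytes
          encoding:NSUTF16LittleEndianStringEncoding];
for (NSUInteger i = 0; convertedString[i] != 0; i++) {
    NSLog(@"unit %lu: 0x%04X", (unsigned long)i, convertedString[i]);
}
free(convertedString);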
I've developed an iOS app in which we can send emojis from iOS to a web portal and vice versa. All emojis sent from iOS to the web portal display perfectly except © and ®.
Here is the emoji encoding piece of code.
NSData *data = [messageBody dataUsingEncoding:NSNonLossyASCIIStringEncoding];
NSString *encodedString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
// This code returns \251 and \256 as the codes for the copyright and registered symbols; since these are not standard \uXXXX escapes, they don't display on the web portal.
So what should I do to convert them to standard Unicode escapes?
Test Run :
messageBody = @"Copy right symbol : © AND Registered Mark symbol : ®";
// Encoded string i get from the above encoding is
Copy right symbol : \\251 AND Registered Mark symbol : \\256
Where as it should like this (On standard unicodes )
Copy right symbol : \\u00A9 AND Registered Mark symbol : \\u00AE
First, I will try to provide the solution. Then I will try to explain why.
Escaping non-ASCII chars
To escape Unicode chars in a string, you shouldn't rely on NSNonLossyASCIIStringEncoding. Below is the code that I use to escape Unicode & non-ASCII chars in a string:
// NSMutableString category
- (void)appendChar:(unichar)charToAppend {
    [self appendFormat:@"%C", charToAppend];
}
// NSString category
- (NSString *)UEscapedString {
    char const hexChar[] = "0123456789ABCDEF";
    NSMutableString *outputString = [NSMutableString string];
    for (NSInteger i = 0; i < self.length; i++) {
        unichar character = [self characterAtIndex:i];
        if ((character >> 7) > 0) {
            [outputString appendString:@"\\u"];
            [outputString appendChar:(hexChar[(character >> 12) & 0xF])]; // hex character for the left-most 4 bits
            [outputString appendChar:(hexChar[(character >> 8) & 0xF])];  // hex for the second group of 4 bits from the left
            [outputString appendChar:(hexChar[(character >> 4) & 0xF])];  // hex for the third group
            [outputString appendChar:(hexChar[character & 0xF])];         // hex for the last group, i.e. the right-most 4 bits
        } else {
            [outputString appendChar:character];
        }
    }
    return [outputString copy];
}
(NOTE: I guess Jon Rose's method does the same but I didn't wanna share a method that I didn't test)
Now you have the following string: Copy right symbol : \u00A9 AND Registered Mark symbol : \u00AE
Escaping unicode is done. Now let's convert it back to display the emojis.
Converting back
This is gonna be confusing at first but this is what it is:
NSData *data = [escapedString dataUsingEncoding:NSUTF8StringEncoding];
NSString *converted = [[NSString alloc] initWithData:data encoding:NSNonLossyASCIIStringEncoding];
Now you have your emojis (and other non-ASCIIs) back.
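Put together, a round trip with the two snippets above would look something like this (untested, using the UEscapedString category from above):
NSString *original = @"Copy right symbol : © AND Registered Mark symbol : ®";
NSString *escaped = [original UEscapedString];
// escaped now contains the literal text \u00A9 and \u00AE instead of © and ®
NSData *data = [escaped dataUsingEncoding:NSUTF8StringEncoding];
NSString *converted = [[NSString alloc] initWithData:data encoding:NSNonLossyASCIIStringEncoding];
NSLog(@"round trip ok: %d", [converted isEqualToString:original]); // expect 1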
What is happening?
The problem
In your case, you are trying to create a common language between your server side and your app. However, NSNonLossyASCIIStringEncoding is a pretty bad choice for that purpose, because it is a black box created by Apple and we don't really know what exactly it is doing inside. As we can see, it converts Unicode chars into \uXXXX while converting non-ASCII chars into \XXX. That is why you shouldn't rely on it to build a multi-platform system; there is no equivalent of it on backend platforms or Android.
Mysteriously enough, NSNonLossyASCIIStringEncoding can still convert \u00AE back into ® even though it converts ® into \256 in the first place. I'm sure there are tools on other platforms to convert \uXXXX into Unicode chars, so that shouldn't be a problem for you.
messageBody is a string; there is no reason to convert it to data only to convert it back to a string. Replace your code with
NSString *encodedString = messageBody;
If the messageBody object is incorrect then the way to fix it is to change the way it was created. The server sends data, not strings. The data that the server sends is encoded in some agreed-upon way. Generally this encoding is UTF-8. If you know the encoding, you can convert the data to a string; if you don't, the data is gibberish that cannot be read. If messageBody is incorrect, the problem occurred when it was converted from the data the server sent. It seems likely that you are parsing it with the incorrect encoding.
The code you posted is just plain wrong. It converts a string to data using one encoding (ASCII) and then reads that data with a different encoding (UTF-8). That is like translating a book to Spanish and then having a Portuguese speaker translate it back - it might work for some words, but it is still wrong.
If you are still having trouble then you should share the code of where messageBody is created.
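For example, if the agreed-upon encoding is UTF-8, the decoding step would look like this (a sketch; responseData stands for whatever NSData your networking code actually received):
NSString *messageBody = [[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding];
if (messageBody == nil) {
    // The bytes were not valid UTF-8, so the server is using a different encoding.
    NSLog(@"Could not decode the response as UTF-8");
}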
If your server expects an ASCII string with all Unicode characters changed to \u00xx, then you should first yell at your server guy, because he is an idiot. But if that doesn't work, you can use the following code:
NSString *messageBody = @"Copy right symbol : © AND Registered Mark symbol : ®";
NSData *utf32Data = [messageBody dataUsingEncoding:NSUTF32StringEncoding];
uint32_t *bytes = (uint32_t *)[utf32Data bytes];
NSMutableString *escapedString = [[NSMutableString alloc] init];
// Start at 1 because the first 4 bytes are the byte-order mark
for (NSUInteger index = 1; index < utf32Data.length / 4; index++) {
    uint32_t charValue = bytes[index];
    if (charValue <= 127) {
        [escapedString appendFormat:@"%C", (unichar)charValue];
    } else {
        [escapedString appendFormat:@"\\u%04X", charValue];
    }
}
I really do not understand your problem.
You can simply convert ANY character into NSData and turn it back into a string.
You can simply pass a UTF-8 string including both emoji and other symbols using a POST request.
NSString* newStr = [[NSString alloc] initWithData:theData encoding:NSUTF8StringEncoding];
NSData* data = [newStr dataUsingEncoding:NSUTF8StringEncoding];
It has to work for both the server and the client side.
But, of course, there is the other problem that some fonts do not support all UTF-8 chars. That's why, e.g., in a terminal you might not see some of them. But this is beyond the scope of this question.
NSNonLossyASCIIStringEncoding is used only when you really want to convert a symbol into a chain of symbols. But it is not needed here.
I have a range of Unicode numbers and I want to show them in a UILabel. I can show them if I hardcode them, but that's too slow, so I want to substitute a variable, change the variable, and get the relevant character.
For example, I know the Unicode value is U+095F and I want to show the range U+095F to U+096F in a UILabel. I can do that by hardcoding it, like
NSString *str = [NSString stringWithFormat:@"\u095f"];
but I want to do that like
NSInteger hex = 0x095f;
[NSString stringWithFormat:@"\u%ld", (long)hex];
so that I can change hex automatically, just as with @"%ld", (long)hex. Does anybody know how to implement that?
You can initialize the string with a buffer containing the bytes of the hex value (you simply provide its pointer). The important point is that you specify the character encoding to be applied; in particular, you should pay attention to the byte order.
Here's an example:
UInt32 hex = 0x095f;
NSString *unicodeString = [[NSString alloc] initWithBytes:&hex length:sizeof(hex) encoding:NSUTF32LittleEndianStringEncoding];
Note that solutions like using the %C format are fine as long as you use them for 16-bit unicode characters; 32-bit unicode characters like emojis (for example: 0x1f601, 0x1f41a) will not work using simple formatting.
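For example, an emoji code point can be built with the same initWithBytes: approach (a small untested sketch):
UInt32 emoji = 0x1F601; // outside the 16-bit unichar range, so %C cannot print it
NSString *emojiString = [[NSString alloc] initWithBytes:&emoji
                                                 length:sizeof(emoji)
                                               encoding:NSUTF32LittleEndianStringEncoding];
NSLog(@"%@ has length %lu", emojiString, (unsigned long)emojiString.length); // length is 2 (a surrogate pair)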
You would have to use
[NSString stringWithFormat:@"%C", (unichar)hex];
or directly declare the unichar (unsigned short) as
unichar uni = 0x095f;
[NSString stringWithFormat:@"%C", uni];
A useful resource might be the String Format Specifiers, which lists %C as
16-bit Unicode character (unichar), printed by NSLog() as an ASCII character, or, if not an ASCII character, in the octal format \ddd or the Unicode hexadecimal format \udddd, where d is a digit.
Like this:
unichar charCode = 0x095f;
NSString *s = [NSString stringWithFormat:@"%C", charCode];
NSLog(@"String = %@", s); //Output: String = य़
I thought I had nailed converting an int to an NSString a while back, but each time I run my code, the program gets to the following lines and crashes. Can anyone see what I'm doing wrong?
NSString *rssiString = (int)self.selectedBeacon.rssi;
UnitySendMessage("Foo", "RSSIValue", [rssiString UTF8String] );
These lines should take the rssi value (which is an NSInt), convert it to a string, and then pass it to my Unity object in a format it can read.
What am I doing wrong?
NSString *rssiString = [NSString stringWithFormat:@"%d", self.selectedBeacon.rssi];
UPDATE: it is important to remember there is no such thing as NSInt. In my snippet I assumed that you meant NSInteger.
If you use a 32-bit environment, use this:
NSString *rssiString = [NSString stringWithFormat:@"%d", self.selectedBeacon.rssi];
But you can't use this in a 64-bit environment, because it will give the warning below:
Values of type 'NSInteger' should not be used as format arguments; add
an explicit cast to 'long'
So use the code below, but note that it will give a warning in a 32-bit environment:
NSString *rssiString = [NSString stringWithFormat:@"%ld", self.selectedBeacon.rssi];
If you want one line that works for both (32-bit & 64-bit), use the code below. Just cast:
NSString *rssiString = [NSString stringWithFormat:@"%ld", (long)self.selectedBeacon.rssi];
I'd like to provide a sweet way to do this job:
//For any numbers.
int iValue;
NSString *sValue = [@(iValue) stringValue];
//Even more concise!
NSString *sValue = @(iValue).stringValue;
NSString *rssiString = [self.selectedBeacon.rssi stringValue];
For simple conversions of basic number values, you can use a technique called casting. A cast forces a value to perform a conversion based on strict rules established for the C language. Most of the rules dictate how conversions between numeric types (e.g., long and short versions of int and float types) are to behave during such conversions.
Specify a cast by placing the desired output data type in parentheses before the original value. For example, the following changes an int to a float:
float myValueAsFloat = (float)myValueAsInt;
One of the rules that could impact you is that when a float or double is cast to an int, the numbers to the right of the decimal (and the decimal) are stripped off. No rounding occurs. You can see how casting works for yourself in Workbench by modifying the runMyCode: method as follows:
- (IBAction)runMyCode:(id)sender {
    double a = 12345.6789;
    int b = (int)a;
    float c = (float)b;
    NSLog(@"\ndouble = %f\nint of double = %d\nfloat of int = %f", a, b, c);
}
The console reveals the following log result:
double = 12345.678900
int of double = 12345
float of int = 12345.000000
original link is http://answers.oreilly.com/topic/2508-how-to-convert-objective-c-data-types-within-ios-4-sdk/
If self.selectedBeacon.rssi is an int, and it appears you're interested in providing a char * string to the UnitySendMessage API, you could skip the trip through NSString:
char rssiString[19];
sprintf(rssiString, "%d", self.selectedBeacon.rssi);
UnitySendMessage("Foo", "RSSIValue", rssiString );
I am generating a QR code and everything works fine if the text is only in English. When I want to generate a QR code with some Arabic text, it fails at NSString's getCString:maxLength:encoding: method.
Suppose, I have two strings:
NSString *englishText = @"Some text English";
NSString *englishArabicMixText = @"Some text بالعربي";
char strEng [[englishText length] + 1];
char strArb [[englishArabicMixText length] + 1];
1- [englishText getCString:strEng maxLength:[englishText length] + 1 encoding:NSUTF8StringEncoding];
2- [englishArabicMixText getCString:strArb maxLength:[englishArabicMixText length] + 1 encoding:NSUTF8StringEncoding];
In case #1, getCString returns true and the QR code is generated; in case #2 it returns false and fails to generate the code.
What should I do so that it also returns true in case #2? Thank you.
length returns the number of Unicode characters. You have to use lengthOfBytesUsingEncoding:, which returns the number of bytes required to store the receiver in a given encoding.
NSUInteger arbLength = [englishArabicMixText lengthOfBytesUsingEncoding:NSUTF8StringEncoding] + 1;
char strArb [arbLength];
[englishArabicMixText getCString:strArb maxLength:arbLength encoding:NSUTF8StringEncoding];
Case #2 is returning false for one of two possible reasons:
1) the string cannot be converted with the specified encoding.
2) the buffer to hold the encoded string is too small.
I'd guess (or at least I suggest you start investigating there) that the problem is no. 2.
When converting to UTF-8, a single unencoded character may result in more than one encoded byte. An 'A' is a single byte with value 65, but an Arabic character or some other symbol may require more bytes.
You are assuming your destination buffer requires the same number of bytes as the number of characters in your NSString.
So you should do something like this:
NSUInteger size = [englishArabicMixText lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
if (size > 0)
{
    size++;
    char *strArb = malloc(size); // NOTE: you should allocate space for your string at runtime!!
    [englishArabicMixText getCString:strArb maxLength:size encoding:NSUTF8StringEncoding];
}
You should do the same for the plain english string too.
And I'd recommend dynamically allocating the space for the C string at runtime with malloc and then freeing it when you don't need it anymore.
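A complete (untested) sketch of that allocate/convert/use/free pattern might look like this:
#include <stdlib.h>
// ...
NSString *englishArabicMixText = @"Some text بالعربي";
NSUInteger size = [englishArabicMixText lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
if (size > 0)
{
    size++; // room for the trailing '\0'
    char *strArb = malloc(size);
    if ([englishArabicMixText getCString:strArb maxLength:size encoding:NSUTF8StringEncoding]) {
        // strArb now holds the UTF-8 bytes; hand it to the QR code generator here
    }
    free(strArb); // release the buffer once the C string is no longer needed
}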