iOS - XML to NSString conversion

I'm using NSXMLParser to parse XML in my app and I'm having a problem with the encoding type. For example, here is one of the feeds coming in. It looks similar to this:
\U2026Some random text from the xml feed\U2026
I am currently using the encoding type:
NSData *data = [string dataUsingEncoding:NSUTF8StringEncoding];
Which encoding type am I supposed to use for converting \U2026 into an ellipsis (…)?

The answer here is you're screwed. They are using a non-standard encoding for XML, but what if they really want the literal \U2026? Let's say you add a decoder to handle all \UXXXX and \uXXXX encodings. What happens when another feed wants the data to be the literal \U2026?
Your first choice and best bet is to get this feed fixed. If they need to encode data, they need to use proper HTML entities or numeric character references.
As a fallback, I would isolate the decoder away from the XML parser. Don't create a non-conforming XML parser just because you're getting non-conforming data. Have a post-processor that is only run on the offending feed.
If you must have a decoder, then there is more bad news: there is no built-in decoder, so you will need to find a category online or write one yourself.
After some poking around, I think "Using Objective C/Cocoa to unescape unicode characters, ie \u1234" may work for you.

Alright, here's a snippet of code that should work for any \uXXXX escape (i.e., any code point in the Basic Multilingual Plane):
#import <Foundation/Foundation.h>
#include <ctype.h>
#include <stdlib.h>

NSString *stringByUnescapingUnicodeSymbols(NSString *input)
{
    NSMutableString *output = [NSMutableString stringWithCapacity:[input length]];
    // walk the UTF-8 representation of this string byte by byte...
    const char *UTF8Str = [input UTF8String];
    while (*UTF8Str) {
        if (*UTF8Str == '\\' && tolower(*(UTF8Str + 1)) == 'u')
        {
            // skip the next 2 chars '\' and 'u'
            UTF8Str += 2;
            // copy at most 4 hex digits into a NUL-terminated buffer
            char tmp[5] = { 0, 0, 0, 0, 0 };
            for (int i = 0; i < 4 && UTF8Str[i]; i++) {
                tmp[i] = UTF8Str[i];
            }
            long unicode = strtol(tmp, NULL, 16); // remember that the escape is base 16
            [output appendFormat:@"%C", (unichar)unicode];
            // move past the hex digits (making sure we don't run off the end of the string)
            for (int i = 0; i < 4 && *UTF8Str; i++) {
                UTF8Str++;
            }
        }
        else
        {
            // ordinary byte: copy it through unchanged
            // (note: this assumes the rest of the input is plain ASCII)
            [output appendFormat:@"%c", *UTF8Str];
            UTF8Str++;
        }
    }
    return output;
}
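For example, a quick usage sketch (the input here is just the sample text from the question):

NSString *raw = @"\\U2026Some random text from the xml feed\\U2026";
NSString *unescaped = stringByUnescapingUnicodeSymbols(raw);
// unescaped now starts and ends with a real ellipsis character "…"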

You could simply replace the literal '\U2026' with the ellipsis character first, and then encode the string to NSData with NSUTF8StringEncoding.
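As a rough sketch of that idea (feedString is a hypothetical NSString holding the raw feed text):

// Replace the non-standard escape with the real ellipsis character,
// then encode as usual.
NSString *fixed = [feedString stringByReplacingOccurrencesOfString:@"\\U2026"
                                                        withString:@"\u2026"];
NSData *data = [fixed dataUsingEncoding:NSUTF8StringEncoding];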

Related

Copyright/Registered symbol encoding not working

I've developed an iOS app in which we can send emojis from iOS to a web portal and vice versa. All emojis sent from iOS to the web portal display perfectly except “©” and “®”.
Here is the emoji encoding piece of code.
NSData *data = [messageBody dataUsingEncoding:NSNonLossyASCIIStringEncoding];
NSString *encodedString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
// This piece of code returns \251 and \256 as the codes for the copyright and registered symbols; since these are not standard Unicode escapes, they don't display on the web portal.
So what should I do to convert them to standard Unicode escapes?
Test Run :
messageBody = @"Copy right symbol : © AND Registered Mark symbol : ®";
// The encoded string I get from the above encoding is:
Copy right symbol : \\251 AND Registered Mark symbol : \\256
Whereas it should look like this (using standard Unicode escapes):
Copy right symbol : \\u00A9 AND Registered Mark symbol : \\u00AE
First, I will try to provide the solution. Then I will try to explain why.
Escaping non-ASCII chars
To escape Unicode chars in a string, you shouldn't rely on NSNonLossyASCIIStringEncoding. Below is the code that I use to escape Unicode and other non-ASCII chars in a string:
// NSMutableString category
- (void)appendChar:(unichar)charToAppend {
[self appendFormat:@"%C", charToAppend];
}
// NSString category
- (NSString *)UEscapedString {
char const hexChar[] = "0123456789ABCDEF";
NSMutableString *outputString = [NSMutableString string];
for (NSInteger i = 0; i < self.length; i++) {
unichar character = [self characterAtIndex:i];
if ((character >> 7) > 0) {
[outputString appendString:@"\\u"];
[outputString appendChar:(hexChar[(character >> 12) & 0xF])]; // append the hex character for the left-most 4-bits
[outputString appendChar:(hexChar[(character >> 8) & 0xF])]; // hex for the second group of 4-bits from the left
[outputString appendChar:(hexChar[(character >> 4) & 0xF])]; // hex for the third group
[outputString appendChar:(hexChar[character & 0xF])]; // hex for the last group, e.g., the right most 4-bits
} else {
[outputString appendChar:character];
}
}
return [outputString copy];
}
(NOTE: I guess Jon Rose's method does the same, but I didn't want to share a method that I haven't tested.)
Now you have the following string: Copy right symbol : \u00A9 AND Registered Mark symbol : \u00AE
Escaping unicode is done. Now let's convert it back to display the emojis.
Converting back
This is gonna be confusing at first but this is what it is:
NSData *data = [escapedString dataUsingEncoding:NSUTF8StringEncoding];
NSString *converted = [[NSString alloc] initWithData:data encoding:NSNonLossyASCIIStringEncoding];
Now you have your emojis (and other non-ASCIIs) back.
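Putting the two steps together, a hypothetical round trip would look like this:

NSString *original = @"Copy right symbol : © AND Registered Mark symbol : ®";
NSString *escaped = [original UEscapedString];   // "... \u00A9 ... \u00AE"
NSData *data = [escaped dataUsingEncoding:NSUTF8StringEncoding];
NSString *restored = [[NSString alloc] initWithData:data
                                           encoding:NSNonLossyASCIIStringEncoding];
// restored is equal to original again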
What is happening?
The problem
In your case, you are trying to create a common language between your server side and your app. However, NSNonLossyASCIIStringEncoding is a pretty bad choice for that purpose, because it is a black box created by Apple and we don't really know exactly what it does inside. As we can see, it converts some characters into \uXXXX while converting other non-ASCII chars into \XXX. That is why you shouldn't rely on it to build a multi-platform system: there is no equivalent of it on backend platforms or Android.
Yet, mysteriously, NSNonLossyASCIIStringEncoding can still convert ® back from \u00AE even though it converts it into \256 in the first place. I'm sure there are tools on other platforms to convert \uXXXX into Unicode chars, so that shouldn't be a problem for you.
messageBody is a string; there is no reason to convert it to data only to convert it back to a string. Replace your code with:
NSString *encodedString = messageBody;
If the messageBody object is incorrect then the way to fix it is to change the way it was created. The server sends data, not strings. The data that the server sends is encoded in some agreed-upon way. Generally this encoding is UTF-8. If you know the encoding you can convert the data to a string; if you don't, then the data is gibberish that cannot be read. If the messageBody is incorrect, the problem occurred when it was converted from the data that the server sent. It seems likely that you are parsing it with the incorrect encoding.
The code you posted is just plain wrong. It converts a string to data using one encoding (ASCII) and then reads that data with a different encoding (UTF-8). That is like translating a book to Spanish and then having a Portuguese speaker translate it back - it might work for some words, but it is still wrong.
If you are still having trouble then you should share the code of where messageBody is created.
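For reference, the decoding step described above is just this (a sketch; responseData is a hypothetical NSData received from the server, assumed to be UTF-8):

NSString *messageBody = [[NSString alloc] initWithData:responseData
                                              encoding:NSUTF8StringEncoding];
if (messageBody == nil) {
    // The bytes were not valid UTF-8, so the encoding assumption is wrong.
    NSLog(@"Could not decode the server response as UTF-8");
}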
If your server expects an ASCII string with all Unicode characters changed to \u00xx, then you should first yell at your server guy because he is an idiot. But if that doesn't work, you can use the following code:
NSString *messageBody = @"Copy right symbol : © AND Registered Mark symbol : ®";
NSData *utf32Data = [messageBody dataUsingEncoding:NSUTF32StringEncoding];
const uint32_t *bytes = (const uint32_t *)[utf32Data bytes];
NSMutableString *escapedString = [[NSMutableString alloc] init];
// Start at 1 because the first 4-byte unit is the byte-order mark
for (NSUInteger index = 1; index < utf32Data.length / 4; index++) {
    uint32_t charValue = bytes[index];
    if (charValue <= 127) {
        // Plain ASCII: copy it through unchanged
        [escapedString appendFormat:@"%C", (unichar)charValue];
    } else {
        // Non-ASCII: emit a \uXXXX escape
        [escapedString appendFormat:@"\\u%04X", charValue];
    }
}
I really do not understand your problem.
You can simply convert ANY character into NSData and back into a string.
You can simply pass a UTF-8 string, including both emoji and other symbols, using a POST request.
NSString* newStr = [[NSString alloc] initWithData:theData encoding:NSUTF8StringEncoding];
NSData* data = [newStr dataUsingEncoding:NSUTF8StringEncoding];
It has to work for both the server and client side.
But, of course, there is the other problem that some fonts do not support all UTF-8 chars. That's why, e.g., in a terminal you might not see some of them. But this is beyond the scope of this question.
NSNonLossyASCIIStringEncoding is used only when you really want to convert a symbol into a chain of symbols. But it is not needed here.
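A sketch of that plain-UTF-8 approach (endpointURL is a hypothetical NSURL for your web portal):

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:endpointURL];
request.HTTPMethod = @"POST";
[request setValue:@"text/plain; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
// Send the string as raw UTF-8 bytes; emoji and other symbols survive unchanged.
request.HTTPBody = [messageBody dataUsingEncoding:NSUTF8StringEncoding];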

convert unicode string to nsstring

I have a unicode string as
{\rtf1\ansi\ansicpg1252\cocoartf1265
{\fonttbl\f0\fswiss\fcharset0 Helvetica;\f1\fnil\fcharset0 LucidaGrande;}
{\colortbl;\red255\green255\blue255;}
{\*\listtable{\list\listtemplateid1\listhybrid{\listlevel\levelnfc23\levelnfcn23\leveljc0\leveljcn0\levelfollow0\levelstartat1\levelspace360\levelindent0{\*\levelmarker \{check\}}{\leveltext\leveltemplateid1\'01\uc0\u10003 ;}{\levelnumbers;}\fi-360\li720\lin720 }{\listname ;}\listid1}}
{\*\listoverridetable{\listoverride\listid1\listoverridecount0\ls1}}
\paperw11900\paperh16840\margl1440\margr1440\vieww22880\viewh16200\viewkind0
\pard\li720\fi-720\pardirnatural
\ls1\ilvl0
\f0\fs24 \cf0 {\listtext
\f1 \uc0\u10003
\f0 }One\
{\listtext
\f1 \uc0\u10003
\f0 }Two\
}
Here I have the Unicode escape \u10003, which is equivalent to the "✓" character. I have used
[NSString stringWithCharacters:"\u10003" length:NSUTF16StringEncoding], which throws a compilation error. Please let me know how to convert these Unicode escapes to "✓".
Regards,
Boom
I had the same problem and the following code solved my issue.
For Encode
NSData *dataenc = [yourtext dataUsingEncoding:NSNonLossyASCIIStringEncoding];
NSString *encodevalue = [[NSString alloc]initWithData:dataenc encoding:NSUTF8StringEncoding];
For decode
NSData *data = [yourtext dataUsingEncoding:NSUTF8StringEncoding];
NSString *decodevalue = [[NSString alloc] initWithData:data encoding:NSNonLossyASCIIStringEncoding];
Thanks
I have used the code below to convert a Unicode string to an NSString. This should work fine.
NSData *unicodedStringData =
[unicodedString dataUsingEncoding:NSUTF8StringEncoding];
NSString *emojiStringValue =
[[NSString alloc] initWithData:unicodedStringData encoding:NSNonLossyASCIIStringEncoding];
In Swift 4
let emoji = "😃"
let unicodedData = emoji.data(using: String.Encoding.utf8, allowLossyConversion: true)
let emojiString = String(data: unicodedData!, encoding: String.Encoding.utf8)
I assume that:
You are reading this RTF data from a file or other external source.
You are parsing it yourself (not using, say, AppKit's built-in RTF parser).
You have a reason why you're parsing it yourself, and that reason isn't “wait, AppKit has this built in?”.
You have come upon \u… in the input you're parsing and need to convert that to a character for further handling and/or inclusion in the output text.
You have ruled out \uc, which is a different thing (it specifies the number of non-Unicode bytes that follow the \u… sequence, if I understood the RTF spec correctly).
\u is followed by a number giving the Unicode code point of the character the sequence represents (per the RTF spec it is a decimal value, written as a signed 16-bit number). You need to parse that number and then create an NSString containing that character.
If you're using NSScanner to parse the input, then (assuming you have already scanned past the \u itself) you can simply ask the scanner to scanInt:. Pass a pointer to an int variable.
If you're not using NSScanner, do whatever makes sense for however you're parsing it. For example, if you've converted the RTF data to a C string and are reading through it yourself, you'll want to use strtol to parse the number. It'll interpret the number in whatever base you specify (in this case, 10) and then put the pointer to the next character wherever you want it.
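For instance, the NSScanner route might look like this (a sketch, assuming the scanner is positioned just past the \u keyword):

int scannedValue = 0;
unsigned int codePoint = 0;
if ([scanner scanInt:&scannedValue]) {
    // RTF writes code points above 0x7FFF as negative 16-bit values; undo that.
    codePoint = (scannedValue < 0) ? (unsigned int)(scannedValue + 65536)
                                   : (unsigned int)scannedValue;
}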
Either way, your variable will then contain the Unicode code point value for the specified character. In the example from your question, \u10003 is decimal 10003, which is U+2713, the check mark ✓.
Now, for a character like that, you could simply assign the value to a unichar variable and create an NSString from it. That won't work for every code point, though: unichars only go up to 0xFFFF, and some code points are higher than that (in technical terms, they're outside the Basic Multilingual Plane).
Fortunately, CFString has a function to help you:
unsigned int codePoint = /*…*/;
unichar characters[2];
NSUInteger numCharacters = 0;
if (CFStringGetSurrogatePairForLongCharacter(codePoint, characters)) {
numCharacters = 2;
} else {
characters[0] = codePoint;
numCharacters = 1;
}
You can then use stringWithCharacters:length: to create an NSString from this array of 16-bit characters.
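That final step, using the variables from the snippet above, is just:

NSString *result = [NSString stringWithCharacters:characters length:numCharacters];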
Use this:
NSString *myUnicodeString = @"\u10003";
Thanks to modern Objective C.
Let me know if it's not what you want.
NSString *strUnicodeString = @"\u2714";
NSData *unicodedStringData = [strUnicodeString dataUsingEncoding:NSUTF8StringEncoding];
NSString *emojiStringValue = [[NSString alloc] initWithData:unicodedStringData encoding:NSUTF8StringEncoding];

Obfuscating a number (in a string) Objective C

I'm using the following code to obfuscate a passcode for a test app of mine.
- (NSString *)obfuscate:(NSString *)string withKey:(NSString *)key
{
// Create data object from the string
NSData *data = [string dataUsingEncoding:NSUTF8StringEncoding];
// Get pointer to data to obfuscate
char *dataPtr = (char *) [data bytes];
// Get pointer to key data
char *keyData = (char *) [[key dataUsingEncoding:NSUTF8StringEncoding] bytes];
// Points to each char in sequence in the key
char *keyPtr = keyData;
int keyIndex = 0;
// For each character in data, xor with current value in key
for (int x = 0; x < [data length]; x++)
{
// Replace current character in data with
// current character xor'd with current key value.
// Bump each pointer to the next character
*dataPtr = *dataPtr++ ^ *keyPtr++;
// If at end of key data, reset count and
// set key pointer back to start of key value
if (++keyIndex == [key length])
keyIndex = 0, keyPtr = keyData;
}
return [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
}
This works like a charm with all strings, but I've run into a bit of a problem comparing the following results:
NSLog([self obfuscate:@"0000" withKey:@"maki"]); //Returns 0]<W
NSLog([self obfuscate:@"0809" withKey:@"maki"]); //Returns 0]<W
As you can see, the two strings, while different, return the same result! What's gone wrong in the code I've attached that makes it produce the same output for these two different numbers?
Another example:
NSLog([self obfuscate:@"8000" withKey:@"maki"]); //Returns 8U4_
NSLog([self obfuscate:@"8290" withKey:@"maki"]); //Returns 8U4_ as well
I may be misunderstanding the concept of obfuscation, but I was under the impression that each unique string returns a unique obfuscated string!
Please help me fix this bug/glitch
Source of Code: http://iosdevelopertips.com/cocoa/obfuscation-encryption-of-string-nsstring.html
The problem is your last line. You create the new string with the original, unmodified data object.
You need to create a new NSData object from the modified dataPtr bytes.
NSData *newData = [NSData dataWithBytes:dataPtr length:data.length];
return [[NSString alloc] initWithData:newData encoding:NSUTF8StringEncoding];
But you have some bigger issues.
The call to bytes returns a constant, read-only reference to the bytes in the NSData object. You should NOT be modifying that data.
The result of your XOR on the character data could, in theory, result in a byte stream that is no longer a valid UTF-8 encoded string.
The obfuscation algorithm that you have selected is based on XORing the data and the "key" values together. Generally, this is not very strong. Moreover, since XOR is symmetric, the results are very prone to producing duplicates.
Although your implementation is currently broken, fixing it would not be of much help in preventing the algorithm from producing identical results for different data: it is relatively straightforward to construct key/data pairs that produce the same obfuscated string - for example,
[self obfuscate:@"0123" withKey:@"vwxy"]
[self obfuscate:@"pqrs" withKey:@"6789"]
will produce identical results "FFJJ", even though both the strings and the keys look sufficiently different.
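You can check that collision by XORing the characters by hand:

// '0' ^ 'v' = 0x30 ^ 0x76 = 0x46 = 'F', and 'p' ^ '6' = 0x70 ^ 0x36 = 0x46 = 'F'
char a = '0' ^ 'v';
char b = 'p' ^ '6';
NSLog(@"%c %c", a, b); // both print F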
If you would like to "obfuscate" your strings in a cryptographically strong way, use a salted secure hash algorithm: it will produce very different results for even slightly different strings.
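A minimal sketch of that suggestion using CommonCrypto's SHA-256 (the salt handling here is illustrative, not anything from the original post):

#import <CommonCrypto/CommonDigest.h>

NSString *saltedHashOfString(NSString *string, NSString *salt)
{
    // Concatenate salt + input and hash the UTF-8 bytes with SHA-256.
    NSData *input = [[salt stringByAppendingString:string]
                     dataUsingEncoding:NSUTF8StringEncoding];
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(input.bytes, (CC_LONG)input.length, digest);

    // Render the digest as a hex string.
    NSMutableString *hex = [NSMutableString stringWithCapacity:CC_SHA256_DIGEST_LENGTH * 2];
    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", digest[i]];
    }
    return hex;
}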

how to read chinese from pdf in ios correctly

here is what I have done, but it appears disorderly. Thanks in advance.
1. Use CGPDFStringCopyTextString to get the text from the PDF.
2. Encode the NSString to char*:
NSStringEncoding enc = CFStringConvertEncodingToNSStringEncoding(kCFStringEncodingGB_18030_2000);
const char *char_content = [self.currentData cStringUsingEncoding:enc];
Below is how I get the currentData:
void arrayCallback(CGPDFScannerRef inScanner, void *userInfo)
{
BIDViewController *pp = (__bridge BIDViewController*)userInfo;
CGPDFArrayRef array;
bool success = CGPDFScannerPopArray(inScanner, &array);
for(size_t n = 0; n < CGPDFArrayGetCount(array); n += 1)
{
if(n >= CGPDFArrayGetCount(array))
continue;
CGPDFStringRef string;
success = CGPDFArrayGetString(array, n, &string);
if(success)
{
NSString *data = (__bridge NSString *)CGPDFStringCopyTextString(string);
[pp.currentData appendFormat:@"%@", data];
}
}
}
- (IBAction)press:(id)sender {
table = CGPDFOperatorTableCreate();
CGPDFOperatorTableSetCallback(table, "TJ", arrayCallback);
CGPDFOperatorTableSetCallback(table, "Tj", stringCallback);
self.currentData = [NSMutableString string];
CGPDFContentStreamRef contentStream = CGPDFContentStreamCreateWithPage(pagerf);
CGPDFScannerRef scanner = CGPDFScannerCreate(contentStream, table, (__bridge void *)(self));
bool ret = CGPDFScannerScan(scanner);
}
According to the Mac Developer Library
CGPDFStringCopyTextString returns a CFString object that represents a PDF string as a text string. The PDF string is given as a CGPDFString which is a series of bytes—unsigned integer values in the range 0 to 255; thus, this method already decodes the bytes according to some character encoding.
It is given no encoding explicitly, so it assumes one, most likely PDFDocEncoding or the UTF-16BE Unicode character encoding scheme, which are the two encodings that may be used to represent text strings in a PDF document outside the document's content streams; cf. section 7.9.2.2 Text String Type and Table D.1, Annex D in the PDF specification.
Now you have not told us from where you received your CGPDFString. I assume, though, that you received it from inside one of the document’s content streams. Text strings there, on the other hand, can be encoded with any imaginable encoding. The encoding used is given by the embedded data of the font the string is to be displayed with.
For more information on this you may want to read CGPDFScannerPopString returning strange result and have a look at PDFKitten.

How do i ignore illegal characters when parsing an rss feed using nsxmlparser?

When using NSXMLParser (indirectly, through Michael Waterfall's MWFeedParser library)
to parse the following RSS feed:
http://qdb.us/qdb.xml?action=latest
NSURL *feedURL = [NSURL URLWithString:@"http://qdb.us/qdb.xml?action=random"];
self.feedParser = [[MWFeedParser alloc] initWithFeedURL:feedURL];
self.feedParser.delegate = self;
self.feedParser.feedParseType = ParseTypeFull; // Parse feed info and all items
self.feedParser.connectionType = ConnectionTypeAsynchronously;
[self.feedParser parse];
I receive back an invalidly formatted XML document; there appears to be an illegal character in the feed.
http://validator.w3.org/check?uri=http%3A%2F%2Fqdb.us%2Fqdb.xml%3Faction%3Dlatest&charset=utf-8&doctype=Inline&group=0&user-agent=W3C_Validator%2F1.1
I've tried changing the documents encoding from ISO-8859-1 to UTF-8 but the problem still occurs.
How do I identify the illegal character and then how do I make it so parsing the RSS feed won't fall over when encountering these illegal characters?
References: (links I've already investigated)
HTML character decoding in Objective-C / Cocoa Touch
https://stackoverflow.com/users/106244/michael-waterfall
I don't know how to ignore illegal characters, but you might consider doing some regex correction to remove them before parsing. I'd also suggest using KissXML instead of NSXMLParser, since it may cope better with illegal characters; see "How To Choose The Best XML Parser for Your iPhone Project".
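For example, a rough sketch of that pre-parse cleanup idea (rawData is a hypothetical NSData holding the downloaded feed; UTF-8 is assumed for the example):

NSString *raw = [[NSString alloc] initWithData:rawData encoding:NSUTF8StringEncoding];
// Strip control characters that are illegal in XML 1.0 (everything below
// 0x20 except tab, CR and LF).
NSRegularExpression *illegal =
    [NSRegularExpression regularExpressionWithPattern:@"[\\u0000-\\u0008\\u000B\\u000C\\u000E-\\u001F]"
                                              options:0
                                                error:NULL];
NSString *cleaned = [illegal stringByReplacingMatchesInString:raw
                                                      options:0
                                                        range:NSMakeRange(0, raw.length)
                                                 withTemplate:@""];
NSData *cleanData = [cleaned dataUsingEncoding:NSUTF8StringEncoding];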
I found something like this while parsing EPG Data grabbed from the REST API of my Enigma2 receiver. In this case one service was pushing EPGInfo with the illegal character 0x05.
I have implemented a cleanup method for incoming NSData. This is the poor man's way to filter these 0x05 bytes from the NSData I receive from NSURLSession before passing it to the parser:
-(NSData *)DataCleaned:(NSData *)data {
NSData *clean = nil;
const char *old = (const char *)data.bytes;
char *flt = (char *)calloc( data.length, sizeof( char ) );
NSInteger cnt = 0;
for( NSInteger i = 0; i < data.length; i++ ) {
if ( old[i] != 0x05 )
flt[cnt++] = old[i];
}
clean = [NSData dataWithBytes:flt length:cnt];
free( flt );
return clean;
}
In my case, this solved the problem. But of course this requires to load the response into NSData prior to parsing it.
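Hypothetical usage of that cleanup method together with NSURLSession and NSXMLParser (feedURL is a placeholder NSURL):

NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:feedURL
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (data == nil) {
        return;
    }
    // Filter the offending bytes, then hand the cleaned data to the parser.
    NSData *clean = [self DataCleaned:data];
    NSXMLParser *parser = [[NSXMLParser alloc] initWithData:clean];
    [parser parse];
}];
[task resume];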
