Convert Large NSString in Hexadecimal to Decimal NSString iOS

As the title of the question states, I'm looking to take the following string in hexadecimal:
b9c84ee012f4faa7a1e2115d5ca15893db816a2c4df45bb8ceda76aa90c1e096456663f2cc5e6748662470648dd663ebc80e151d4d940c98a0aa5401aca64663c13264b8123bcee4db98f53e8c5d0391a7078ae72e7520da1926aa31d18b2c68c8e88a65a5c221219ace37ae25feb54b7bd4a096b53b66edba053f4e42e64b63
And convert it to its decimal equivalent string:
130460875511427281888098224554274438589599458108116621315331564625526207150503189508290869993616666570545720782519885681004493227707439823316135825978491918446215631462717116534949960283082518139523879868865346440610923729433468564872249430429294675444577680464924109881111890440473667357213574597524163283811
I've tried using this code, found at this link:
unsigned result = 0;
NSScanner *scanner = [NSScanner scannerWithString:hexString];
[scanner setScanLocation:1]; // bypass '#' character
[scanner scanHexInt:&result];
NSLog(@"%u", result);
However, I keep getting the following result: 4294967295. Any ideas on how I can solve this problem?

This sounds like a homework/quiz question, and SO isn't a code-writing service, so here are some hints in the hope they help.
Your number is BIG, far larger than any standard integer size, so you are not going to be able to do this with long long or even NSDecimal.
Now you could go and source an "infinite" precision arithmetic package, but really what you need to do isn't that hard (though if you are going to be doing more than this, using such a package would make sense).
Now think back to your school days, how were you taught to do base conversion? The standard method is long division and remainders.
Example: start with BAD in hex and convert to decimal:
BAD ÷ A = 12A remainder 9
12A ÷ A = 1D remainder 8
1D ÷ A = 2 remainder 9
2 ÷ A = 0 remainder 2
Now read the remainders back, last first, to give 2989 decimal.
Long division is a digit at a time process, starting with the most significant digit, and carrying the remainder as you move to the next digit. Sounds like a loop.
Your initial number is a string, the most significant digit is first. Sounds like a loop.
Processing characters one at a time from an NSString is, well, painful. So first convert your NSString to a standard C string. If you copy this into a C-array you can then overwrite it each time you "divide". You'll probably find the standard C functions strlen() and strcpy() helpful.
Of course you have characters in your string, not integer values. Include ctype.h in your code and use the digittoint() function to convert each character in your number to its numeric equivalent.
The standard library doesn't have the inverse of digittoint(), so to convert an integer back to its character equivalent you need to write your own code, think indexing into a suitable constant string...
Write a C function, something like int divide(char *hexstring) which does one long division of hexstring, writing the result into hexstring and returning the remainder. (If you wish to write more general code, useful for testing, write something like int divide(char *buf, int base, int divisor) - so you can convert hex to decimal and then back again to check you get back to where you started.)
Now you can loop calling your divide and accumulating the remainders (as characters) into another string.
How big should your result string be? Well a number written in decimal typically has more digits than when written in hex (e.g. 2989 v. BAD above). If you're being general then hex uses the fewest digits and binary uses the most. A single hex digit equates to 4 binary digits, so a working buffer 4 times the input size will always be long enough. Don't forget to allow for the terminating NUL in C strings in your buffer.
And as hinted above, for testing make your code general, convert your hex string to a decimal one, then convert that back to a hex one and check the result is the same as the input.
If this sounds complicated don't despair, it only takes around 30 lines of well spaced code.
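For the curious, here is a rough sketch of that shape in plain C (digittoint() is the BSD function mentioned above, available on macOS/iOS; the fixed-size working buffer and the function names are illustrative, not a vetted implementation):
#include <ctype.h>
#include <string.h>

// One long division of the number in buf (most significant digit first,
// written in the given base) by divisor; the quotient overwrites buf and
// the remainder is returned.
static int divide(char *buf, int base, int divisor) {
    int remainder = 0;
    for (size_t i = 0; buf[i] != '\0'; i++) {
        int value = remainder * base + digittoint(buf[i]);
        buf[i] = "0123456789abcdef"[value / divisor]; // inverse of digittoint()
        remainder = value % divisor;
    }
    return remainder;
}

// Convert src (a number written in base 'from') into dst (the same number
// written in base 'to'). dst must hold at least 4 * strlen(src) + 1 bytes.
static void convert(const char *src, int from, int to, char *dst) {
    char work[1024]; // assumes src fits; size to suit your input
    strcpy(work, src);
    size_t n = 0;
    while (strspn(work, "0") < strlen(work)) { // until the quotient is all zeros
        dst[n++] = "0123456789abcdef"[divide(work, from, to)];
    }
    if (n == 0) dst[n++] = '0';
    dst[n] = '\0';
    for (size_t i = 0, j = n - 1; i < j; i++, j--) { // remainders come out last first
        char t = dst[i]; dst[i] = dst[j]; dst[j] = t;
    }
}
Calling convert(hexString.UTF8String, 16, 10, buf) and wrapping buf with [NSString stringWithUTF8String:buf] gives the decimal NSString; convert(buf, 10, 16, buf2) takes you back for the round-trip test.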
If you get stuck coding it ask a new question showing your code, explain what goes wrong, and somebody will undoubtedly help you out.
HTH

Your result is the maximum value of a 32-bit unsigned int, the type you are using. As far as I can see in the NSScanner documentation, long long is the biggest supported type.
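For hex values that do fit in 64 bits, that larger type looks like this (a sketch only; it obviously cannot hold the 256-digit number in the question, the example value is just its first 16 hex digits):
unsigned long long result = 0;
NSScanner *scanner = [NSScanner scannerWithString:@"b9c84ee012f4faa7"];
[scanner scanHexLongLong:&result]; // widest integer type NSScanner can scan
NSLog(@"%llu", result);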

Related

GForth: Convert floating point number to String

A simple question that turned out to be quite complex:
How do I turn a float into a String in GForth? The desired behavior would look something like this:
1.2345e fToString \ takes 1.2345e from the float stack and pushes (addr n) onto the data stack
After a lot of digging, one of my colleagues found it:
f>str-rdp ( rf +nr +nd +np -- c-addr nr )
https://www.complang.tuwien.ac.at/forth/gforth/Docs-html-history/0.6.2/Formatted-numeric-output.html
Convert rf into a string at c-addr nr. The conversion rules and the
meanings of +nr +nd +np are the same as for f.rdp.
And from f.rdp:
f.rdp ( rf +nr +nd +np -- )
https://www.complang.tuwien.ac.at/forth/gforth/Docs-html/Simple-numeric-output.html
Print float rf formatted. The total width of the output is nr. For
fixed-point notation, the number of digits after the decimal point is
+nd and the minimum number of significant digits is np. Set-precision has no effect on f.rdp. Fixed-point notation is used if the number of
significant digits would be at least np and if the number of digits
before the decimal point would fit. If fixed-point notation is not
used, exponential notation is used, and if that does not fit,
asterisks are printed. We recommend using nr>=7 to avoid the risk of
numbers not fitting at all. We recommend nr>=np+5 to avoid cases where
f.rdp switches to exponential notation because fixed-point notation
would have too few significant digits, yet exponential notation offers
fewer significant digits. We recommend nr>=nd+2, if you want to have
fixed-point notation for some numbers. We recommend np>nr, if you want
to have exponential notation for all numbers.
In human-readable terms, these functions require a number on the float stack and three numbers on the data stack.
The first parameter tells it how long the string should be, the second how many digits after the decimal point you would like, and the third the minimum number of significant digits (which roughly translates to precision). A lot of implicit math is performed to determine the final string format that is produced, so some tinkering is almost required to make it behave the way you want.
Testing it out (we don't want to rebuild f., but to produce a format that Forth will accept as a floating-point number when we EVALUATE it again, so the 1.2345E0 notation is deliberate):
PI 18 17 17 f>str-rdp type \ 3.14159265358979E0 ok
PI 18 17 17 f.rdp \ 3.14159265358979E0 ok
PI f. \ 3.14159265358979 ok
I couldn't find the exact word for this, so I looked into Gforth sources.
Apparently, you could go with the represent word, which prints the most significant digits into a supplied buffer, but that's not exactly the final output. represent returns validity and sign flags, as well as the position of the decimal point. That word is then used in all variants of the floating-point printing words (f., fs., fe.).
Probably the easiest way would be to substitute emit with your own word (emit is a deferred word), save the data where you need it, use one of the available floating-point printing words, and then restore emit to its original value.
I'd like to hear the preferred solution too...

Does anyone know what this is actually called?

I have been wondering for a long time what this is actually called. A while ago (like 3 years ago) I thought it was called bytecode, but since then I have realized what bytecode actually is. I'll give an example because I don't really know what to call it.
It looks like this:
\234\22\21\65\22\76\54\87. It's basically each character's byte value, preceded by a backslash.
Does anyone know what this is called?
Thanks.
From the Lua reference manual:
We can specify any byte in a short literal string by its numeric value
(including embedded zeros). This can be done with the escape sequence
\xXX, where XX is a sequence of exactly two hexadecimal digits, or
with the escape sequence \ddd, where ddd is a sequence of up to three
decimal digits. (Note that if a decimal escape sequence is to be
followed by a digit, it must be expressed using exactly three digits.)
Also refer to https://en.wikipedia.org/wiki/String_literal#Escape_sequences
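So the \ddd notation is a string literal written with decimal escape sequences. If you ever need to decode such text outside Lua, a small hand-rolled C routine (a hypothetical helper, not from any library) might look like this:
#include <ctype.h>
#include <stdio.h>

// Decode Lua-style \ddd decimal escapes (up to three digits each) from in
// into raw bytes in out; returns the number of bytes written. out must be
// at least as long as in. Values above 255 are not checked here.
static size_t decode_decimal_escapes(const char *in, unsigned char *out) {
    size_t n = 0;
    while (*in) {
        if (in[0] == '\\' && isdigit((unsigned char)in[1])) {
            int value = 0, digits = 0;
            in++; // skip the backslash
            while (digits < 3 && isdigit((unsigned char)*in)) {
                value = value * 10 + (*in - '0');
                in++; digits++;
            }
            out[n++] = (unsigned char)value;
        } else {
            out[n++] = (unsigned char)*in++; // ordinary character
        }
    }
    return n;
}

int main(void) {
    unsigned char buf[64];
    size_t n = decode_decimal_escapes("\\72\\101\\108\\108\\111", buf);
    fwrite(buf, 1, n, stdout); // prints "Hello"
    return 0;
}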

swift 2.0 NSDecimalNumber possible discrepancy converting to long

I can't make heads or tails of this. I am using NSDecimalNumber to truncate
the fractional portion from a string. This works in most cases, but not apparently in the case of infinite decimals (or just too many). Here is an example:
print(NSDecimalNumber(string: "49.81666666666666666").longLongValue)
print(NSDecimalNumber(string: "49.816666666666666666").longLongValue)
print(NSDecimalNumber(string: "49.8166666666666666666").longLongValue)
The first line prints 49, the second -5, and the last one 0. I know I can use the rounding function to do the same thing, and that is what I will probably use instead, but doesn't this seem odd? I know it isn't just converting the float bit pattern into a long or else the results would be completely different.
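For reference, the rounding route the poster mentions looks something like this in Objective-C (a sketch; it sidesteps longLongValue entirely):
NSDecimalNumberHandler *handler =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundDown
                                                           scale:0
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:NO
                                                raiseOnUnderflow:NO
                                             raiseOnDivideByZero:NO];
NSDecimalNumber *n = [NSDecimalNumber decimalNumberWithString:@"49.816666666666666666"];
// NSRoundDown rounds toward -infinity, so for negative inputs pick another mode.
NSLog(@"%@", [n decimalNumberByRoundingAccordingToBehavior:handler]); // prints 49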

Parse long double from string

I need to parse floating-point literals in C code using OCaml.
OCaml's float type is 64 bit. I have the string of the literal, its numeric value rounded to 64 bits and its kind (float, double or long double).
The problem is literals whose numeric value needs more than 64 bits:
long double literals
float literals with an 'f' suffix, for which double rounding errors would occur if they didn't have the suffix.
OCaml's arbitrary-precision module can parse rational numbers from strings like "123/123", but not "123.123", "123e123", "0x1.23p-1" like they might appear in C.
Background: I do value analysis of C programs using CIL.
Double literals of any size, and float literals whose numeric value fits into 64 bits, are always represented correctly. By rounding from double to single precision I can also reproduce double rounding errors.
I wrote my answer in the form of a blog post
To summarize some of the points here: you could interface strtold() and strtof() from OCaml. For the former, you would have to consider how you are going to store the result it produces, since there only is a point if long double is larger than double on your host architecture. There remains the problem that these functions are buggy in one of the most widely used C library. Very slightly buggy, but buggy for exactly the examples that are going to be of interest if you are doing this to study double rounding.
Another way is to write your own function, starting from another post in the blog you refer to.
Finally, the phrase "Even getting single-precision floats right requires me to parse literals with values bigger than 64 bit" that you use in the comments is still a strange way to put it. The intermediate format(s) in which you can parse the representation of a single-precision float before you round it to single-precision have to be lossless, otherwise there will be double rounding. Double rounding may be more or less difficult to exhibit depending on the precision of the lossy intermediate format, but using 80 bits or 128 bits binary floating-point formats is not going to remove the problem, just make it more subtle. In the simple algorithm that I recommend, the intermediate format is a fraction of two multiprecision integers.
I don't see the question in this question :)
Assuming that you need an OCaml parser for "C float literals", the answer is: write one yourself. It is not very hard, and you will have strict control over the implementation details and over what "C float literal" actually means.

How to Remove exponent from formatted float in Delphi

Given a double value like 1.00500000274996E-8, how do I convert it to its non-scientific format with a maximum number of digits after the decimal point - in this case, with 8 digits, it would be 1.00500000?
The conversion should not pad with zeros, so 2007 would come out as 2007, and 2012.33 as 2012.33.
I've tried lots of combinations using Format, FormatFloat, FloatToStrF but can't quite seem to hit the jackpot. Many thanks for any help.
Edit: I should clarify that I am trying to convert it to a string representation, without the Exponent (E) part.
FormatFloat('0.######################', 1.00500000274996E-8) should do the trick.
Output is: 0,0000000100500000274996
It will not output more digits than absolutely necessary.
See John Herbster's Exact Float to String Routines in CodeCentral. Perhaps not exactly what you're after, but it might be a good starting point... The CC item's description:
This module includes
(a) functions for converting a floating binary point number to its *exact* decimal representation in an AnsiString;
(b) functions for parsing the floating point types into sign, exponent, and mantissa; and
(c) a function for analyzing an extended float number into its type (zero, normal, infinity, etc.)
Its intended use is for troubleshooting problems with floating point numbers.
His DecimalRounding routines might be of interest too.
