My application has to do operations on hexadecimal values.
For example,
If the input given by the user is '0010F750', then my application will tell the user which is the nearest value (from some set of predefined values) and will give the next value by adding '0000E500'.
How can we perform hexadecimal operations (find nearest, add, subtract) in Delphi?
Performing operations on hexadecimal values does not really mean anything. Numbers are numbers. Hexadecimal is merely a representation using base 16.
All you need to do is convert these hex strings to integers and you can use standard arithmetic operations.
uses System.SysUtils;

function HexStrToInt(const str: string): Integer;
begin
  // The '$' prefix tells StrToInt to parse the text as hexadecimal
  Result := StrToInt('$' + str);
end;
Add and subtract using + and -. Use IntToHex to express values as their hex representations.
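For example, a quick round trip (a minimal console sketch using the function above; the values come from the question):

var
  A, B: Integer;
begin
  A := HexStrToInt('0010F750');      // 1111888 in decimal
  B := A + HexStrToInt('0000E500');  // plain integer addition
  Writeln(IntToHex(B, 8));           // 0011DC50
end;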
Your application does not and cannot "do operations on hexadecimal values". Rather, it operates on binary values stored in chunks of data organized as bytes.
What the USER sees and what the PROGRAM works with are two completely unrelated things.
The number one (1) in binary is 00000001, in hex is 01, in decimal is 1, and in ASCII has the hexadecimal value of 31. Try printing the value of Ord('1').
You need to convert the external representation of your data, in Hex, to an internal representation as an Integer. That's what David was pointing to earlier.
Then you'd need to apply your "rounding" to the numeric value, then convert it back to a Hex string for the user to see.
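As a sketch of that flow (the predefined table below is an illustrative assumption based on the question, not a definitive implementation):

uses System.SysUtils;

const
  Predefined: array[0..2] of Integer = ($0010F000, $0010F800, $00110000); // hypothetical set
  Step = $0000E500; // the '0000E500' increment from the question

function NearestThenAdd(const HexInput: string): string;
var
  Value, Best, i: Integer;
begin
  Value := StrToInt('$' + HexInput);   // external hex text -> internal integer
  Best := Predefined[0];
  for i := 1 to High(Predefined) do
    if Abs(Predefined[i] - Value) < Abs(Best - Value) then
      Best := Predefined[i];           // keep the closest predefined value
  Result := IntToHex(Best + Step, 8);  // internal integer -> hex text for the user
end;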
Search around for examples that let you implement a simple calculator and you'll understand better.
I want to convert a double to a string and display only the needed decimals. So I cannot use
d := 123.4;
s := Format('%.2f', [d]);
as the result is 123.40 when I want 123.4.
Here is a table of samples and expected results:
|Double|Result as string|
|------|----------------|
|5     |5               |
|5.1   |5.1             |
|5.12  |5.12            |
|5.123 |5.123           |
You can use the %g format string:
General: The argument must be a floating-point value. The value is converted to the shortest possible decimal string using fixed or scientific format. The number of significant digits in the resulting string is given by the precision specifier in the format string; a default precision of 15 is assumed if no precision specifier is present. Trailing zeros are removed from the resulting string, and a decimal point appears only if necessary. The resulting string uses the fixed-point format if the number of digits to the left of the decimal point in the value is less than or equal to the specified precision, and if the value is greater than or equal to 0.00001. Otherwise the resulting string uses scientific format.
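Applied to the samples from the question (a quick sketch; output assumes '.' as the decimal separator):

Writeln(Format('%g', [5.0]));    // 5
Writeln(Format('%g', [5.12]));   // 5.12
Writeln(Format('%g', [123.4]));  // 123.4, not 123.40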
This is not as simple as you think. It all boils down to representability.
Let's consider a simple example of 0.1. That value is not exactly representable in double. This is because double is a binary representation rather than a decimal representation.
A double value is stored in the form s*2^e, where s and e are the significand and exponent respectively, both integers.
Back to 0.1. That value cannot be exactly represented as a binary floating point value. No combination of significand and exponent exists that represents it. Instead the closest representable value will be used:
0.10000 00000 00000 00555 11151 23125 78270 21181 58340 45410 15625
If this comes as a shock I suggest the following references:
Is floating point math broken?
http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html
http://floating-point-gui.de/
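You can observe the effect directly. A small console sketch (using a Double variable so the stored values are genuine doubles):

var
  Sum: Double;
  i: Integer;
begin
  Sum := 0;
  for i := 1 to 10 do
    Sum := Sum + 0.1;               // each addition accumulates representation error
  Writeln(Sum = 1.0);               // FALSE
  Writeln(Format('%.17g', [Sum]));  // 0.99999999999999989
end;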
So, what to do? An obvious option is to switch to a decimal rather than binary representation. In Delphi that typically means using the Currency type. Depending on your application that might be a good choice, or it might be a terrible choice. If you wish to perform scientific or engineering calculations efficiently, for instance, then a decimal type is not appropriate.
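To illustrate the decimal alternative, the same loop with Currency (a fixed-point type with four decimal digits, so 0.1 is stored exactly):

var
  C: Currency;
  i: Integer;
begin
  C := 0;
  for i := 1 to 10 do
    C := C + 0.1;     // 0.1 is exactly 1000 internal units of 0.0001
  Writeln(C = 1.0);   // TRUE
end;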
Another option would be to look at how Python handles this. The repr function is meant, where possible, to yield a string with the property that eval(repr(x)) == x. In older versions of Python repr produced very long strings of the form 1.1000000000000001 when in fact 1.1 would suffice. Python adopted an algorithm that finds the shortest decimal expression that represents the floating point value. You could adopt the same approach. The snag is that the algorithm is very complex.
Quoting the Delphi XE8 help:
For single-byte and multibyte strings, Length returns the number of bytes used by the string. Example for UTF-8:
Writeln(Length(Utf8String('1¢'))); // displays 3
For Unicode (WideString) strings, Length returns the number of bytes divided by two.
This raises important questions:
Why is there a difference in handling at all?
Why doesn't Length() do what it's expected to do and return just the length of the parameter (as in, the count of elements) instead of giving the size in bytes in some cases?
Why does it state that it divides the result by 2 for Unicode (UTF-16) strings? AFAIK UTF-16 characters can take up to 4 bytes, and thus this will give incorrect results.
Length returns the number of elements when considering the string as an array.
For strings with 8 bit element types (ANSI, UTF-8), Length gives you the number of bytes, since the number of bytes is the same as the number of elements.
For strings with 16 bit elements (UTF-16), Length is half the number of bytes, because each element is 2 bytes wide.
Your string '1¢' has two code points, but the second code point requires two bytes to encode it in UTF-8. Hence Length(Utf8String('1¢')) evaluates to three.
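A few more examples of Length as an element count (assumes a Unicode Delphi; #$D834#$DD1E is the surrogate pair encoding U+1D11E, a code point outside the BMP):

Writeln(Length(UTF8String('1¢')));            // 3: '1' is 1 byte, '¢' is 2 bytes in UTF-8
Writeln(Length(UnicodeString('1¢')));         // 2: each fits in a single UTF-16 element
Writeln(Length(UnicodeString(#$D834#$DD1E))); // 2: one code point, two character elements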
You mention SizeOf in the question title. Passing a string variable to SizeOf will always return the size of a pointer, since a string variable is, under the hood, just a pointer.
To your specific questions:
Why is there a difference in handling at all?
There is only a difference if you think of Length as relating to bytes. But that's the wrong way to think about it. Length always returns an element count, and when viewed that way, the behaviour is uniform across all string types, and indeed across all array types.
Why doesn't Length() do what it's expected to do and return just the length of the parameter (as in, the count of elements) instead of giving the size in bytes in some cases?
It does always return the element count. It just so happens that when the element size is a single byte, the element count and the byte count are the same. In fact the documentation that you refer to also contains the following just above the excerpt that you provided: Returns the number of characters in a string or of elements in an array. That is the key text. The excerpt that you included is meant as an illustration of the implications of that text.
Why does it state that it divides the result by 2 for Unicode (UTF-16) strings? AFAIK UTF-16 characters can take up to 4 bytes, and thus this will give incorrect results.
UTF-16 character elements are always 16 bits wide. However, some Unicode code points require two character elements to encode. These pairs of character elements are known as surrogate pairs.
You are hoping, I think, that Length will return the number of code points in a string. But it doesn't. It returns the number of character elements. And for variable length encodings, the number of code points is not necessarily the same as the number of character elements. If your string was encoded as UTF-32 then the number of code points would be the same as the number of character elements since UTF-32 is a constant sized encoding.
A quick way to count the code points is to scan through the string checking for surrogate pairs. When you encounter a surrogate pair, count one code point. Otherwise, when you encounter a character element that is not part of a surrogate pair, count one code point. In Delphi, using the Char helper from System.Character:

uses System.Character;

function CodePointCount(const S: string): Integer;
var
  C: Char;
begin
  Result := 0;
  for C in S do
    if C.IsSurrogate then
      Inc(Result)     // each half of a surrogate pair adds 1, so a full pair totals 2
    else
      Inc(Result, 2); // an element outside any surrogate pair adds 2 on its own
  Result := Result div 2;
end;
Another point to make is that the code point count is not the same as the visible character count. Some code points are combining characters and are combined with their neighbouring code points to form a single visible character or glyph.
Finally, if all you are hoping to do is find the byte size of the string payload, use this expression:
Length(S) * SizeOf(S[1])
This expression works for all types of string.
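For instance, reusing the '1¢' example:

var
  U: UnicodeString;
  U8: UTF8String;
begin
  U := '1¢';
  U8 := UTF8String('1¢');
  Writeln(Length(U) * SizeOf(U[1]));    // 4: two 2-byte elements
  Writeln(Length(U8) * SizeOf(U8[1]));  // 3: three 1-byte elements
end;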
Be very careful about the function System.SysUtils.ByteLength. On the face of it this seems to be just what you want. However, that function returns the byte length of a UTF-16 encoded string. So if you pass it an AnsiString, say, then the value returned by ByteLength is twice the number of bytes of the AnsiString.
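For example (the string content here is ASCII, so the implicit conversion is lossless):

var
  A: AnsiString;
begin
  A := 'abc';
  Writeln(Length(A) * SizeOf(A[1]));  // 3: the AnsiString payload is 3 bytes
  Writeln(ByteLength(A));             // 6: ByteLength measures the UTF-16 conversion
end;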
I have a string grid on a Delphi form and I am trying to divide the value of one cell by the value of a cell in another column.
But the problem is, the string grid cells are populated with different types of numbers, so I am getting EConvertError exceptions.
For example, the numbers in the cells can look like
0.37 or 34 or 0.0013 or 0.00 or 0.35 or 30.65 or 45.9108 or 0.0307 or 6854.93.
In other words, I never know whether it is going to be a real, float, integer or any other kind of type in those cells.
I have looked everywhere on the internet but no luck. Anyone any ideas? By the way, I am not exactly a Delphi expert. Thanks.
For each string, convert it first to a float value using the StrToFloat function in SysUtils. This should allow any numerical type to be dealt with (unless you have something unusual like complex numbers). As you have some zero values in your list above, you should also check for divide-by-zero conditions, as division by zero will also potentially throw an exception.
SysUtils has many functions such as TryStrToFloat, TryStrToInt, TryStrToInt64, etc. for this purpose. These functions accept an out parameter for returning the converted value, and the function itself returns True if the conversion is successful.
If you are sure that the string has a valid number then you can check the input string to see if it has a decimal point before deciding which function to use.
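A minimal sketch of that approach (the function name and its use of two cell strings are illustrative):

uses System.SysUtils;

function TryDivideCells(const CellA, CellB: string; out Ratio: Double): Boolean;
var
  A, B: Double;
begin
  // TryStrToFloat returns False on malformed input instead of raising an exception
  Result := TryStrToFloat(CellA, A) and TryStrToFloat(CellB, B) and (B <> 0);
  if Result then
    Ratio := A / B;
end;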
Treat all the numbers as floats. Use StrToFloat, divide the numbers, and then convert the result back to a string with FloatToStr. If the result is a whole number, no decimal point will be produced.
Given a double value like 1.00500000274996E-8, how do I convert it to its non-scientific format with a maximum number of digits after the decimal point - in this case, with 8 digits, it would be 1.00500000?
The conversion should not pad with zeros, so 2007 would come out as 2007, and 2012.33 as 2012.33.
I've tried lots of combinations using Format, FormatFloat and FloatToStrF but can't quite seem to hit the jackpot. Many thanks for any help.
Edit: I should clarify that I am trying to convert it to a string representation, without the exponent (E) part.
FormatFloat('0.######################', 1.00500000274996E-8) should do the trick.
Output is: 0,0000000100500000274996 (the comma here is the locale's decimal separator).
It will not output more digits than absolutely necessary.
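If you would rather cap the number of decimal places, use fewer # placeholders; trailing zeros are still omitted:

Writeln(FormatFloat('0.########', 2012.33)); // 2012.33, not 2012.33000000
Writeln(FormatFloat('0.########', 2007.0));  // 2007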
See John Herbster's Exact Float to String Routines in CodeCentral. Perhaps not exactly what you're after, but it might be a good starting point... The CC item's description:
This module includes
(a) functions for converting a floating binary point number to its *exact* decimal representation in an AnsiString;
(b) functions for parsing the floating point types into sign, exponent, and mantissa; and
(c) a function for analyzing an extended float number into its type (zero, normal, infinity, etc.)
Its intended use is for troubleshooting problems with floating point numbers.
His DecimalRounding routines might be of interest too.
I am storing a list of numbers (as Double) in a text file, then reading them out again.
When I read them out of the text file however, the numbers are placed into the text box as 1.59993499 for example, instead of 1.6.
var
  Pipe: TextFile;
  SavedValue: array[1..15] of Double;
  i: Integer;
begin
  AssignFile(Pipe, 'EconomicData.data');
  Reset(Pipe);
  for i := 1 to 15 do
    ReadLn(Pipe, SavedValue[i]);
  CloseFile(Pipe);
  Edit1.Text := FloatToStr(SavedValue[1]);
end;
The text in Edit1.Text, from the code above, would be 1.59999... instead of the 1.6 in the text file. How can I make the text box display the original value (1.6)?
You can use the FormatFloat function:
var
  d: Double;
begin
  d := 1.59993499;
  Edit1.Text := FormatFloat('0.0', d); // shows 1.6
end;
Sorry, I wasn't sure whether it would suit your requirements, but my original answer was to use:
Format('%n', [SavedValue[1]]);
Just be careful when using floating point. If you're going to be performing calculations with the values, then you're better off using either a Currency type or an integer with an implied decimal point prior to saving. As you have noticed, floating point values are approximations, and rounding errors are bound to occur eventually.
For instance, let's say you want to store tenths in your program (the 1.6): just create an integer variable and for all intents and purposes think of it as tenths. When you go to display the value, use the following:
Format('%n',[SavedValue[1]/10]);
Currency is a fixed-point type stored as a scaled integer with four implied decimal places (ten-thousandths).