SPSS: How to count number of digits in variable

I need to count the number of digits in an SPSS numeric variable, and assign it to a different variable.
I tried converting it to a string and counting the length of the string with char.length(), but this returns the defined width of the variable rather than the length of the actual value in each case.
Any ideas how this can be done?

When calculating the length of your string variable, use ltrim or rtrim (depending on how you created the string; to be safe you could use both) to get rid of the padding spaces and count only the digits:
compute Ndigits=char.length(ltrim(rtrim(YourString))).
You could also do away with the string variable altogether and just use this calculation:
compute Ndigits=trunc(lg10(YourNumber))+1.
(Note that this counts the digits of the integer part only, and that lg10 is undefined for zero and negative numbers.)

Note that the length will depend on how the number was converted to a string, i.e., the number of decimals specified in the conversion. Also, the decimal point character will contribute to the length.
Also, if you are in Unicode mode, which has been the default for several years, you don't need to use char.rtrim. Strings are automatically rtrimmed in that mode.

How do I keep my rails integer from being converted to binary?

I have a User model, and #user.zip is stored as an integer for validation purposes (i.e., so only digits are stored). I was troubleshooting an error when I discovered that my sample zip code (00100) was automatically being converted to binary, and ending up as the number 64.
Any ideas on how to keep this from happening? I am new to Rails, and it took me a few hours to figure out the cause of this error, as you might imagine :)
I can't imagine any other information would be helpful here, but please inform me if otherwise.
This is not binary; it is octal.
In Ruby, any integer literal starting with 0 is treated as an octal number, so 00100 is read as octal 100, which is 1×8² = 64 in decimal. You should check the Ruby documentation on number literals to learn more; here's a quote:
You can use a special prefix to write numbers in decimal, hexadecimal, octal or binary formats. For decimal numbers use a prefix of 0d, for hexadecimal numbers use a prefix of 0x, for octal numbers use a prefix of 0 or 0o, for binary numbers use a prefix of 0b. The alphabetic component of the number is not case-sensitive.
For your case, you should not store zip codes as numbers. Not only in the database: don't treat them as numeric values in variables either. Instead, store and treat them as strings.
The zip should probably be stored as a string since you can't have a valid integer with leading zeroes.

Length() vs SizeOf() on Unicode strings

Quoting the Delphi XE8 help:
For single-byte and multibyte strings, Length returns the number of bytes used by the string. Example for UTF-8:
Writeln(Length(Utf8String('1¢'))); // displays 3
For Unicode (WideString) strings, Length returns the number of bytes divided by two.
This raises important questions:
Why is there a difference in handling at all?
Why doesn't Length() do what it's expected to do and return just the length of the parameter (as in, the count of elements), instead of giving the size in bytes in some cases?
Why does it state it divides the result by 2 for Unicode (UTF-16) strings? AFAIK UTF-16 is 4-byte at most, and thus this will give incorrect results.
Length returns the number of elements when considering the string as an array.
For strings with 8 bit element types (ANSI, UTF-8) then Length gives you the number of bytes since the number of bytes is the same as the number of elements.
For strings with 16 bit elements (UTF-16) then Length is half the number of bytes because each element is 2 bytes wide.
Your string '1¢' has two code points, but the second code point requires two bytes to encode it in UTF-8. Hence Length(Utf8String('1¢')) evaluates to three.
You mention SizeOf in the question title. Passing a string variable to SizeOf will always return the size of a pointer, since a string variable is, under the hood, just a pointer.
To your specific questions:
Why is there a difference in handling at all?
There is only a difference if you think of Length as relating to bytes. But that's the wrong way to think about it. Length always returns an element count, and when viewed that way, the behaviour is uniform across all string types, and indeed across all array types.
Why doesn't Length() do what it's expected to do and return just the length of the parameter (as in, the count of elements), instead of giving the size in bytes in some cases?
It does always return the element count. It just so happens that when the element size is a single byte, the element count and the byte count are the same. In fact the documentation that you refer to also contains the following just above the excerpt that you provided: Returns the number of characters in a string or of elements in an array. That is the key text. The excerpt that you included is meant as an illustration of the implications of this text.
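For illustration, here is a small sketch of my own (assuming a Unicode Delphi compiler) showing Length as a uniform element count across types:
var
  U: string;          // UTF-16 string: 2-byte elements
  B: array of Byte;   // dynamic array: 1-byte elements
begin
  U := '1¢';
  SetLength(B, 4);
  Writeln(Length(U));  // 2: two UTF-16 elements (4 bytes)
  Writeln(Length(B));  // 4: four byte elements (4 bytes)
end.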
Why does it state it divides the result by 2 for Unicode (UTF-16) strings? AFAIK UTF-16 is 4-byte at most, and thus this will give incorrect results.
UTF-16 character elements are always 16 bits wide. However, some Unicode code points require two character elements to encode. These pairs of character elements are known as surrogate pairs.
You are hoping, I think, that Length will return the number of code points in a string. But it doesn't. It returns the number of character elements. And for variable length encodings, the number of code points is not necessarily the same as the number of character elements. If your string was encoded as UTF-32 then the number of code points would be the same as the number of character elements since UTF-32 is a constant sized encoding.
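For example (my own illustration), U+1D11E, a musical symbol outside the Basic Multilingual Plane, needs a surrogate pair in UTF-16, so its element count differs from its code point count:
var
  S: string;
begin
  S := #$D834#$DD1E;   // surrogate pair encoding the single code point U+1D11E
  Writeln(Length(S));  // 2 character elements, but only 1 code point
end.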
A quick way to count the code points is to scan through the string checking for surrogate elements: count 1 for each half of a surrogate pair (so the pair contributes 2 in total) and 2 for every other character element, then halve the total. In Delphi (IsSurrogate is the character helper from the System.Character unit):
N := 0;
for C in S do
  if C.IsSurrogate then
    Inc(N)      // each half of a surrogate pair counts 1
  else
    Inc(N, 2);  // an ordinary (non-surrogate) element counts 2
CodePointCount := N div 2;  // halve the total to get the code point count
Another point to make is that the code point count is not the same as the visible character count. Some code points are combining characters and are combined with their neighbouring code points to form a single visible character or glyph.
Finally, if all you are hoping to do is find the byte size of the string payload, use this expression:
Length(S) * SizeOf(S[1])
This expression works for all types of string.
Be very careful about the function System.SysUtils.ByteLength. On the face of it this seems to be just what you want. However, that function returns the byte length of a UTF-16 encoded string. So if you pass it an AnsiString, say, then the value returned by ByteLength is twice the number of bytes of the AnsiString.
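To illustrate the pitfall, a short sketch of my own (assuming a Unicode Delphi compiler and an ANSI code page that contains '¢', such as Windows-1252; ByteLength lives in System.SysUtils):
uses System.SysUtils;
var
  U: string;       // UTF-16
  A: AnsiString;   // 8-bit ANSI
begin
  U := '1¢';
  A := '1¢';
  Writeln(Length(U));                 // 2 elements
  Writeln(Length(U) * SizeOf(U[1]));  // 4: bytes of UTF-16 payload
  Writeln(SizeOf(U));                 // 4 or 8: size of a pointer, not of the string
  Writeln(Length(A));                 // 2 elements, i.e. 2 bytes in ANSI
  Writeln(ByteLength(A));             // 4: A is converted to UTF-16 before measuring
end.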

Operation on Hexadecimal DELPHI

My application has to perform operations on hexadecimal values.
For example,
if the input given by the user is '0010F750', then my application will tell the user which is the nearest value (from some set of predefined values) and will give the next value by adding '0000E500'.
How can we perform hexadecimal operations (find nearest, add, subtract) in Delphi?
Performing operations on hexadecimal values does not really mean anything. Numbers are numbers. Hexadecimal is merely a representation using base 16.
All you need to do is convert these hex strings to integers and you can use standard arithmetic operations.
function HexStrToInt(const str: string): Integer;
begin
  Result := StrToInt('$' + str);  // the '$' prefix tells StrToInt to parse hex
end;
Add and subtract using + and -. Use IntToHex to express values as their hex representations.
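For example, with the values from the question (a sketch using the HexStrToInt helper above; StrToInt and IntToHex live in System.SysUtils):
var
  Value, Step: Integer;
begin
  Value := HexStrToInt('0010F750');    // 1111888 in decimal
  Step  := HexStrToInt('0000E500');    // 58624 in decimal
  Writeln(IntToHex(Value + Step, 8));  // prints 0011DC50
end.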
Your application does not and cannot "operate on hexadecimal values". Rather, it operates on binary values stored in chunks of data organized as bytes.
What the USER sees and what the PROGRAM works with are two completely unrelated things.
The number one (1) in binary is 00000001, in hex is 01, in decimal is 1, and the character '1' in ASCII has the hexadecimal value 31. Try printing the value of Ord('1').
You need to convert the external representation of your data, in Hex, to an internal representation as an Integer. That's what David was pointing to earlier.
Then you'd need to apply your "rounding" to the numeric value, then convert it back to a Hex string for the user to see.
Search around for examples that let you implement a simple calculator and you'll understand better.
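A minimal sketch of the whole flow, assuming the predefined values sit in an integer array; the names and preset values below are made up for illustration:
uses System.SysUtils;

const
  Presets: array[0..2] of Integer = ($0010E000, $0010F000, $00110000);  // hypothetical set

function NearestValue(Target: Integer; const Values: array of Integer): Integer;
var
  I: Integer;
begin
  Result := Values[0];
  for I := 1 to High(Values) do
    if Abs(Values[I] - Target) < Abs(Result - Target) then
      Result := Values[I];  // keep whichever value lies closer to Target
end;

var
  Input, Nearest: Integer;
begin
  Input   := StrToInt('$0010F750');           // hex string -> Integer
  Nearest := NearestValue(Input, Presets);    // $0010F000 is closest here
  Writeln(IntToHex(Nearest + $0000E500, 8));  // prints 0011D500
end.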

How can I know where digits are on a double

I would like to display a number according to the position of the first uncertain digit.
Hence for a number such as 203.32134, with the first uncertain digit being 0.01, I would like to display 203.321 (all the certain digits, plus the first uncertain digit as is, and the second rounded).
But I do not know how I could write a format string with a %x.yf specifier in order to get the string I would like.
Could anyone help?
Use NSNumberFormatter and set the minimum/maximum number of fraction digits (its minimumFractionDigits and maximumFractionDigits properties).

How do I avoid errors when converting strings to numbers if I don't know whether I have floats or integers?

I have a string grid on a Delphi form and I am trying to divide the value of one cell by the value of another cell in another column.
But the problem is that the grid cells are populated with different kinds of numbers, so I am getting convert errors.
For example the numbers in cells can look like
0.37 or 34 or 0.0013 or 0.00 or 0.35 or 30.65 or 45.9108 or 0.0307 or 6854.93.
In other words, I never know whether it is going to be a real, float, integer or any other kind of type in those cells.
I have looked everywhere on the internet but no luck. Anyone have any ideas? By the way, I am not exactly a Delphi expert. Thanks.
For each string, convert it first to a float value using the StrToFloat function in SysUtils. This should allow any numerical type to be dealt with (unless you have something unusual like complex numbers). As you have some zero values in your list above, you should also check for divide-by-zero conditions, as these will otherwise raise an exception.
SysUtils has many functions such as TryStrToFloat, TryStrToInt, TryStrToInt64, etc. for this purpose. These functions accept an output parameter for returning the converted value, and the function itself returns True if the conversion is successful.
If you are sure that the string has a valid number then you can check the input string to see if it has a decimal point before deciding which function to use.
Treat all the numbers as floats. Use StrToFloat, divide the numbers, and then convert the result back to a string with FloatToStr. If the result is a whole number, no decimal point will be produced.
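A minimal sketch of that approach, assuming a TStringGrid named Grid with hypothetical column indices (0 and 1 as inputs, 2 for the result):
uses System.SysUtils;

var
  A, B: Double;
begin
  // TryStrToFloat returns False instead of raising EConvertError
  if TryStrToFloat(Grid.Cells[0, Grid.Row], A) and
     TryStrToFloat(Grid.Cells[1, Grid.Row], B) and
     (B <> 0) then                                  // guard against division by zero
    Grid.Cells[2, Grid.Row] := FloatToStr(A / B)
  else
    Grid.Cells[2, Grid.Row] := '';                  // blank when a cell is not numeric
end;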
