I have a Lua script where I'm trying to use hex numbers (0x..). If I run this script in the console, with the official Windows binaries, it works fine. But if I run it in my application (a simple dofile), I get
malformed number near '0x1F'
It doesn't matter what the hex value is, I always get that error, as if hex numbers weren't supported. The library I'm using is Lua 5.1.4, and I've tried 2 different builds (the first being one I compiled myself), so that shouldn't be the problem.
Does anyone have a clue what might be wrong here?
Edit:
It's not the script. No matter what I do, a simple "foo = 0xf" already triggers the error, even if there's nothing else in the file.
Update:
tonumber("0xf")
This returns nil, while
tonumber("15")
works fine. There's definitely something wrong with hex in my libs...
If hex literals aren't working for you (though they should), you can always use hex from Lua by doing tonumber("fe", 16).
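For example, here's a small helper (just a sketch, not from the original answer) that accepts hex strings with or without the 0x prefix:

-- Sketch: parse hex strings without relying on 0x literals in the source.
-- Strips an optional "0x"/"0X" prefix, then converts with base 16.
local function parsehex(s)
  return tonumber((s:gsub("^0[xX]", "")), 16)
end

print(parsehex("0x1F"))  --> 31
print(parsehex("fe"))    --> 254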
Why do functions have to be different in different compilers, ...why?
Alright, the problem was that Lua tries to convert numbers into a double by default. For this it uses the function strtod, which takes two arguments: the string, and a char pointer that on return points to the first character after the parsed number. For a hex number that would mean the 'x' after the leading '0'. If this isn't the case, Lua assumes an error, and gives us this nice little error message.
I've compiled Lua using DMC, because I need the lib to be in OMF format, and I assume others used DMC as well. But apparently DMC's strtod works differently, since the end pointer is left at the start of the string if the input is hex... or rather any number it considers invalid.
I've now added a little hack that checks for the 'x' if the conversion to double failed. Not pretty, but it works fine for now.
int luaO_str2d (const char *s, lua_Number *result) {
  char *endptr;
  *result = lua_str2number(s, &endptr);
  /* Hack for DMC: its strtod leaves endptr at the start of the
     string for "0x..." input, so don't treat that as a failure */
  if (endptr == s) {
    if (*(s+1) == 'x' || *(s+1) == 'X')
      endptr++;  /* step past the '0' so the stock hex handling below kicks in */
    else
      return 0;  /* conversion failed */
  }
  /* ... rest of the stock luaO_str2d (hex fallback via strtoul, etc.) unchanged ... */
}
I faced this bug with Lua 5.2; Lua 5.1 works fine.
I am trying to use the pipe character "|" in SPSS syntax with strange results:
In the syntax window it appears like this (with the pipe):
SELECT IF(SEX = 1 | SEX = 2).
But when I copy this line from the syntax window to here, this is what I get:
SELECT IF(SEX = 1 SEX = 2).
The pipe just disappears!
If I run this line, this is the output:
SELECT IF(SEX = 1 SEX = 2).
Error # 4007 in column 20. Text: SEX
The expression is incomplete. Check for missing operands, invalid operators,
unmatched parentheses or excessive string length.
Execution of this command stops.
So the pipe is invisible to the program too!
When I save this syntax and reopen it, the pipe is gone...
The only way I found to get SPSS to work with the pipe was to edit the syntax (adding the pipe) and save it in an alternative editor (Notepad++ in this case). Then, without opening that syntax file, I ran it from another syntax using the INSERT command, and it worked.
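For reference, the workaround looked roughly like this (file names are made up for illustration):

* Contents of select_with_pipe.sps, edited and saved in Notepad++ so the pipe survives.
SELECT IF(SEX = 1 | SEX = 2).

* From a separate syntax window, run the saved file without opening it in the editor.
INSERT FILE='C:\temp\select_with_pipe.sps'.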
EDIT: some background info:
I have SPSS version 23 (+ service pack 3), 64-bit.
The same thing happens whether I use my locale (encoding: windows-1255) or Unicode (encoding: UTF-8). Suspecting my Hebrew keyboard, I tried copying syntax from the web, with the same results.
Can anyone shed any light on this subject?
Turns out (according to SPSS support) that it's a version-specific (ver. 21) bug that was fixed in later versions.
I searched around a lot but found no answer:
I have to use a lot of command line options, which I planned to parse with optarg (which works well for single chars) - but if I count all the arguments, the alphabet has too few letters :(
So my plan was to use double-letter codes (-a, -ab, -ac). How can I parse them?
Would it be easier to solve this by using boost::program_options?
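One possibility (a sketch, not from the original thread) is GNU getopt_long_only(), which accepts long option names after a single dash, so two-letter codes like -ab can be parsed without Boost; the option names below are invented for illustration:

#include <stdio.h>
#include <getopt.h>

int main(int argc, char *argv[]) {
    /* invented option names, just to show the mechanism */
    static const struct option opts[] = {
        {"ab", required_argument, NULL, 1},
        {"ac", no_argument,       NULL, 2},
        {0, 0, 0, 0}
    };
    int c;
    /* getopt_long_only() also matches single-dash arguments like "-ab value" */
    while ((c = getopt_long_only(argc, argv, "", opts, NULL)) != -1) {
        switch (c) {
        case 1: printf("-ab = %s\n", optarg); break;
        case 2: printf("-ac\n");              break;
        default: return 1;  /* unknown option */
        }
    }
    return 0;
}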
I'm programming in the Vala language and I have a problem: I don't know how to read a char (although I know how to read a string and an integer).
string name = stdin.read_line();
int maximum = int.parse(stdin.read_line());
char option = ????;
One option is using scanf(), but if I use it I run into problems at run time.
If you just want to read a single character from a GLib.FileStream, you can use GLib.FileStream.getc, which "reads the next character from stream and returns it as an unsigned char cast to an int, or EOF on end of file or error." Actually, GLib.FileStream.read_line is implemented using getc. So, the "????" in your question would be something like (char) stdin.getc().
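For instance, a minimal sketch along those lines (prompts added just for illustration):

void main () {
    stdout.printf ("Name: ");
    string name = stdin.read_line ();
    stdout.printf ("Maximum: ");
    int maximum = int.parse (stdin.read_line ());
    stdout.printf ("Option: ");
    // getc() returns an int; cast it, and note it may be EOF (-1) on end of input
    char option = (char) stdin.getc ();
    stdout.printf ("%s %d %c\n", name, maximum, option);
}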
If you are trying to interact with a user, note that input is typically buffered by the terminal until the end of a line. If you want to get a character immediately after the user presses a key you'll need to use something like ncurses (for which Vala bindings are distributed with valac).
In C, I can tell printf to print the arguments in an order different than the order they are passed in:
printf("%2$d %1$d\n", 10, 20);
//prints 20 10
However, if I try to do the same in Lua I get an error:
print(string.format("%2$d %1$d\n", 10, 20))
invalid option '%$' to 'format'
Is there a way to create a Lua format string that causes string.format to write the second argument before the first? I am working with an internationalization and changing the format string is easy but changing the argument order is much more tricky.
I would have expected the technique that I used in C to work with Lua because, according to the manual, string.format should receive the same parameters as sprintf. Are %2$ directives not part of ANSI C or is the Lua manual just forgetting to mention that they are not supported?
In short, no. %2$ directives are a POSIX extension, thus not part of ANSI C or Lua. This has been brought up on the Lua mailing list before, and according to lhf, the feature was around in versions prior to Lua 5 but was removed with that version's release. In the same thread, a wiki page of alternatives was suggested.
If you really want the %2$ style, then it's not too difficult to cook up your own fix either.
local function reorder(fmt, ...)
  local args, order = {...}, {}
  fmt = fmt:gsub('%%(%d+)%$', function(i)
    table.insert(order, args[tonumber(i)])
    return '%'
  end)
  return string.format(fmt, table.unpack(order))
end
print(reorder('%2$d %1$d\n', 10, 20))
You cannot do this with string.format, but you can actually achieve almost the same result with string.gsub. The caveat here is that the last argument of string.gsub can be either a string or a table (with multiple values to replace).
So this code would do the trick:
local output = string.gsub("%2 %1 %2 %1\n", '%S+', {['%1'] = 10, ['%2'] = 20})
print(output)
> 20 10 20 10
I may have made some syntax mistakes, but I can't understand why my code isn't working:
At the top of the .m file I wrote:
#define kCountry "Country";
In this case I got a red warning from Xcode - expected ]
Then in the function body:
floatCountries = 74,2;
[[NSUserDefaults standardUserDefaults]setFloat:floatCountries forKey:kCountry];
float test= [[NSUserDefaults standardUserDefaults]floatForKey:kCountry];
NSLog(#"%f", test);
On that one: expected expression.
That may sound a bit stupid, but I want to simplify my code and use this approach. Please help me! :)
Remove the semicolon from the end of the define and add the '@':
#define kCountry #"Country"
Use:
#define kCountry #"Country"
You added a semicolon at the end.
And as this will be replaced with a string value, you need to put the @.
To explain why this didn't work:
The semicolon
#define kCountry "Country";
In this case I got a red warning from Xcode - expected ]
There's no such thing as a “red warning”. Warnings are yellow; errors are red.
(Unless you turn on “Treat Warnings as Errors”; then, in a sense, all the warnings are red—because then they're errors.)
Anyway.
Preprocessor directives such as #define do not require a semicolon. If you include one, it becomes part of the directive.
The preprocessor replaces any macro with whatever you defined it as. In this case, you defined a macro named “kCountry” with the value “"Country";”. Note that the semicolon is part of the value: the directive ends at the end of the line, not at a semicolon.
Thus, when you go to use the macro:
[[NSUserDefaults standardUserDefaults]setFloat:floatCountries forKey:kCountry];
float test= [[NSUserDefaults standardUserDefaults]floatForKey:kCountry];
The preprocessor replaces the macro as follows:
[[NSUserDefaults standardUserDefaults]setFloat:floatCountries forKey:"Country";];
float test= [[NSUserDefaults standardUserDefaults]floatForKey:"Country";];
A semicolon can't go inside a statement; it must come after. Thus, the above statements are invalid.
The fix is to remove the semicolon from the line where you #defined the macro, so that the semicolon does not appear in the output:
[[NSUserDefaults standardUserDefaults]setFloat:floatCountries forKey:"Country"];
float test= [[NSUserDefaults standardUserDefaults]floatForKey:"Country"];
By the way, Xcode has an option in its Jump Bar to show you the complete preprocessed version of the code, as the compiler will see it. That can be handy when investigating macro problems.
So, that's one of the problems. The other one was…
The kind of string you used
"Country" is a C string. It's only usable with the C string and stdio libraries (including the functions printf, strcat, etc.) and various other APIs that require C strings (such as +[NSString stringWithUTF8String:] and +[NSString stringWithCString:encoding:]).
NSUserDefaults, like everything else in Cocoa that requires a string, requires a Cocoa string object, an NSString. The syntax for an NSString literal is the same, but with an @ in front of it: @"Country".
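For instance (a quick sketch, not from the original code):

const char *cString = "Country";             // C string: for printf, strcat, and friends
NSString *cocoaString = @"Country";          // NSString literal: what Cocoa APIs expect
NSString *bridged = [NSString stringWithUTF8String:cString];  // converting when needed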
Hence the version that works:
#define kCountry #"Country"
Which produces the preprocessed output:
[[NSUserDefaults standardUserDefaults]setFloat:floatCountries forKey:@"Country"];
float test= [[NSUserDefaults standardUserDefaults]floatForKey:@"Country"];
With no syntax errors and the right kind of string in both places, this is the version that will work.
Oh, and, as Anoop Vaidya already pointed out:
The number syntax
You tried to assign a number to a variable, but I think you'll find a different number there than you were expecting.
floatCountries = 74,2;
If you print the value of this variable with a statement such as:
NSLog(#"%f", floatCountries);
You'll find that the output is 74.000000, not 74.2.
C has an operator called the comma operator, and it is simply x, y, where x and y can be any expression (ideally of types that are compatible with each other—e.g., both numbers).
The comma operator evaluates first the left-side expression, then the right-side expression, and itself evaluates to the right-side expression.
In isolation, 74,2 evaluates first the expression 74, then the expression 2, and yields 2. But assignment binds more tightly than the comma operator, so floatCountries = 74,2; parses as (floatCountries = 74), 2; you assign 74 (an int, which is converted automatically as needed) to the variable, and the trailing 2 is evaluated and thrown away.
It may seem kind of silly to use this with literal numbers, and it is. The comma operator exists to be used with expressions that have side effects, such as expressions involving the ++ and -- operators.
Use of the comma operator is generally discouraged, because the resulting code is unclear: as a rule, each line should do one thing, but a line such as x = ++y, --z; does three things.
Nonetheless, it is valid, as you found. You should get a warning, though, if you have the “unused value” warning turned on (as you should), because half of the expression is, in fact, unused: the trailing 2 is dropped on the floor. Harmless, but a symptom that this isn't what you meant to do.
What you want is:
floatCountries = 74.2;
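To see the difference in isolation (a plain C snippet, not from the original question):

#include <stdio.h>

int main(void) {
    float a, b, c;
    a = 74,2;    /* parses as (a = 74), 2; the trailing 2 is discarded */
    b = (74,2);  /* comma operator: evaluates 74, then yields 2        */
    c = 74.2f;   /* what was actually intended                         */
    printf("%f %f %f\n", a, b, c);  /* 74.000000 2.000000 74.2...      */
    return 0;
}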