I'm getting an exception when trying to use a decimal value with FunScript. It can be reproduced simply by using:
Globals.window.alert(Globals.JSON.stringify(3M))
The exception says:
System.Exception was unhandled
Message: An unhandled exception of type 'System.Exception' occurred in FunScript.dll
Additional information: Could not compile expression: Call (None, MakeDecimal, [Value (13), Value (0), Value (0), Value (false), Value (0uy)])
I suspect this is a FunScript limitation, but I just wanted to check. If so, how could a decimal value be used in FunScript code? Or how could FunScript be extended to fix this?
The decimal primitive is a TODO feature. I guess the best way to tackle it would be to reimplement the System.Decimal structure using the recently open-sourced .NET Framework reference source, and then add the appropriate expression replacements to the compiler, as is done for other types that do not translate directly from .NET to JavaScript, like DateTime, TimeSpan, Regex, F# option or list, etc.
I guess the feature hasn't been prioritized. If you need it, can you please open an issue on the GitHub page so one of the contributors (maybe myself) can start implementing it? If you think you can do it yourself, please feel free to submit a pull request. Thanks!
It is rather a JavaScript limitation, because JavaScript has only binary floating point.
One solution would be creating your own type containing two integers: one for the integer part and one for the fractional part, as sketched below.
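A minimal sketch of that idea in F# (the type name and its fixed two-digit scale are assumptions; sign handling and arithmetic are left out):

type Fixed2 =
    { IntPart : int        // digits before the decimal point
      FracPart : int }     // hundredths, i.e. a fixed scale of two decimal digits

    member x.ToFloat () =
        float x.IntPart + float x.FracPart / 100.0

    override x.ToString () =
        sprintf "%d.%02d" x.IntPart x.FracPart

// { IntPart = 3; FracPart = 0 } would play the role of 3.00M.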
I have a really long insert query with more than 40 fields (from an 'inherited' FoxPro database), processed using OleDb, that produces the exception 'Data type mismatch.' Is there any way to know which field of the query is producing this exception?
For now I'm using the brute-force method of reducing the number of fields in the insert until I locate the buggy one, but I guess there must be a more direct way to find it...
There isn't really any shortcut beyond taking a guess at which 20 might be the problem, chopping out the other 20 and testing, and repeating that reductive process until you hit it.
Or alternatively, look at the table structure(s) in the DBF and make sure the field types match the OleDb types you're using. The details of how .NET types are mapped to Visual FoxPro table field types are here.
If you have access to the Visual FoxPro IDE you could probably do that a lot quicker by knocking up a little program or even just doing it in the Command Window.
You are not telling us the language you use, so we can't give you a sample to handle it.
Basically what you would do is:
Get the structure,
Parse the insert statement and get values,
Compare data types.
It should only take a short piece of code to make this check; see the sketch below.
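Since no language was given, here is a hedged sketch in F# (any .NET language works the same way; findMismatches is a hypothetical helper): it fetches the table's schema with a zero-row query and compares each value you are about to insert against the .NET type OleDb reports for that column.

open System
open System.Data
open System.Data.OleDb

// Assumes 'values' is in the same column order as the table and flags
// exact-type mismatches only; OleDb may still accept convertible types.
let findMismatches (conn : OleDbConnection) (table : string) (values : obj[]) =
    use cmd = new OleDbCommand(sprintf "SELECT * FROM %s WHERE 1=0" table, conn)
    use reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly)
    let schema = reader.GetSchemaTable()
    [ for i in 0 .. values.Length - 1 do
        let expected = schema.Rows.[i].["DataType"] :?> Type
        let v = values.[i]
        if not (isNull v) && v.GetType() <> expected then
            // report column name plus expected vs. actual .NET type
            yield string schema.Rows.[i].["ColumnName"], expected, v.GetType() ]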
I am using Delphi XE2.
As a workaround for Delphi not supporting forward references to record types, I started using untyped parameters.
How can I obtain the Type of an untyped Parameter?
procedure TSomeRecord.TransformBy(const AUntypedParam);
begin
  // how can I ensure that AUntypedParam is of a specific record type?
end;
I need to make sure that AUntypedParam is of a specific type, otherwise an exception should be thrown.
Thank you!
How can I ensure that AUntypedParam is of a specific record type?
You cannot. That's pretty much the modus operandi of untyped parameters. When you tell the compiler, "don't check the type of the actual parameter", the compiler takes you at your word and lets you pass anything that you like. You cannot have it both ways.
@LURD astutely points out that you can use record helpers to work around this compiler limitation; a sketch follows below. I do hope that somebody from Embarcadero reads questions on Stack Overflow. This must be the third or fourth time in the past week that we've had a question due to the limitations of extended records.
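A minimal sketch of the record-helper workaround (all type names here are hypothetical):

type
  TFirst = record
    Value: Double;
  end;

  TSecond = record
    Factor: Double;
  end;

  // Declared after both records exist, so the method can take TSecond
  // without TFirst needing a forward reference to it.
  TFirstHelper = record helper for TFirst
    procedure TransformBy(const AParam: TSecond);
  end;

procedure TFirstHelper.TransformBy(const AParam: TSecond);
begin
  Value := Value * AParam.Factor;
end;

Note that only one helper can be active for a given type at any point in the code.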
Clarification (sorry, the question was not specific): both functions try to convert the item on the stack to a lua_Number, and lua_tonumber will also convert a string that represents a number. How does luaL_checknumber deal with something that's not a number?
There's also luaL_checklong and luaL_checkinteger. Are they the same as (int)luaL_checknumber and (long)luaL_checknumber respectively?
The reference manual does answer this question. I'm citing the Lua 5.2 Reference Manual, but similar text is found in the 5.1 manual as well. The manual is, however, quite terse. It is rare for any single fact to be restated in more than one sentence. Furthermore, you often need to correlate facts stated in widely separated sections to understand the deeper implications of an API function.
This is not a defect, it is by design. This is the reference manual to the language, and as such its primary goal is to completely (and correctly) describe the language.
For more information about "how" and "why" the general advice is to also read Programming in Lua. The online copy is getting rather long in the tooth as it describes Lua 5.0. The current paper edition describes Lua 5.1, and a new edition describing Lua 5.2 is in process. That said, even the first edition is a good resource, as long as you also pay attention to what has changed in the language since version 5.0.
The reference manual has a fair amount to say about the luaL_check* family of functions.
Each API entry's documentation block is accompanied by a token that describes its use of the stack, and under what conditions (if any) it will throw an error. Those tokens are described at section 4.8:
Each function has an indicator like this: [-o, +p, x]

The first field, o, is how many elements the function pops from the stack. The second field, p, is how many elements the function pushes onto the stack. (Any function always pushes its results after popping its arguments.) A field in the form x|y means the function can push (or pop) x or y elements, depending on the situation; an interrogation mark '?' means that we cannot know how many elements the function pops/pushes by looking only at its arguments (e.g., they may depend on what is on the stack). The third field, x, tells whether the function may throw errors: '-' means the function never throws any error; 'e' means the function may throw errors; 'v' means the function may throw an error on purpose.
At the head of Chapter 5, which documents the auxiliary library as a whole (all functions in the official API whose names begin with luaL_ rather than just lua_), we find this:
Several functions in the auxiliary library are used to check C function arguments. Because the error message is formatted for arguments (e.g., "bad argument #1"), you should not use these functions for other stack values.

Functions called luaL_check* always throw an error if the check is not satisfied.
The function luaL_checknumber is documented with the token [-0,+0,v] which means that it does not disturb the stack (it pops nothing and pushes nothing) and that it might deliberately throw an error.
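For illustration, a minimal sketch of a C function exposed to Lua (the function name is hypothetical):

#include <lua.h>
#include <lauxlib.h>

/* Doubles its first argument.  luaL_checknumber either returns the
 * number at argument position 1 or throws "bad argument #1 ..." and
 * never returns to this function. */
static int l_double(lua_State *L)
{
    lua_Number n = luaL_checknumber(L, 1);  /* [-0, +0, v] */
    lua_pushnumber(L, 2 * n);
    return 1;  /* one result left on the stack */
}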
The other functions that have more specific numeric types differ primarily in function signature. All are described similarly to luaL_checkint(): "Checks whether the function argument arg is a number and returns this number cast to an int", varying the type named in the cast as appropriate.
The function lua_tonumber() is described with the token [-0,+0,-] meaning it has no effect on the stack and does not throw any errors. It is documented to return the numeric value from the specified stack index, or 0 if the stack index does not contain something sufficiently numeric. It is documented to use the more general function lua_tonumberx() which also provides a flag indicating whether it successfully converted a number or not.
It too has siblings named with more specific numeric types that do all the same conversions but cast their results.
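And a fragment contrasting the lua_tonumberx() form mentioned above (assuming L and idx come from the enclosing function); the isnum out-parameter removes the ambiguity of a plain 0 return:

int isnum;
lua_Number n = lua_tonumberx(L, idx, &isnum);  /* [-0, +0, -] */
if (!isnum) {
    /* the value at idx was neither a number nor a numeric string */
}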
Finally, one can also refer to the source code, with the understanding that the manual is describing the language as it is intended to be, while the source is a particular implementation of that language and might have bugs, or might reveal implementation details that are subject to change in future versions.
The source to luaL_checknumber() is in lauxlib.c. It can be seen to be implemented in terms of lua_tonumberx() and the internal function tagerror() which calls typerror() which is implemented with luaL_argerror() to actually throw the formatted error message.
Both try to convert the item on the stack to a lua_Number; lua_tonumber will also convert a string that represents a number. luaL_checknumber throws a (Lua) error when it fails a conversion - it long-jumps and never returns, from the point of view of the C function. lua_tonumber merely returns 0 (which can be a valid return as well). So you could write this code, which should be faster than checking with lua_isnumber first:
double r = lua_tonumber(_L, idx);
if (r == 0 && !lua_isnumber(_L, idx))  // 0 may mean "not a number" or a genuine 0
{
    // Error handling code
}
return r;
I have a field ($P{ORDER}.permit) which is an Integer (0,1) and I'd like to display it as a String ("No", "Yes"). So I added the keys below to the ResourceBundle:
order.permit.0=No
order.permit.1=Yes
I wrote the expression $R{order.permit.$P{ORDER}.permit}, but it doesn't work. This exception is thrown:
net.sf.jasperreports.engine.JRException: Too many groovy classes were generated. Please make sure that you don't use Groovy features such as closures that are not supported by this report compiler.
I suspect that this exception is caused by nesting jasper expressions, or nesting them in the wrong way.
How should I write the expression to achieve desired result?
EDIT: str("order.permit." + $P{ORDER}.permit) is the answer. Details in the below post.
Use str() instead of $R{}.
See also http://jasperforge.org/plugins/espforum/view.php?group_id=102&forumid=103&topicid=54665:
$R{} and str() are largely the same thing. The functional difference is that $R{} can only be used with fixed/static keys, while str() can be used with dynamic message keys, e.g. str("message.prefix." + $P{message}).
I have a problem understanding the co-existence of "null" and Option in F#. In a book I read that null is not a proper value in F#, because this way F# eliminates excessive null checking. But it still allows null-initialized references in F#. In other words, you can have null values, but you don't have the weapons to defend yourself with. Why not completely replace nulls with Options? Is it because of compatibility issues with .NET libraries or languages that it's still there? If so, can you give an example that shows why it can't be replaced by Option?
F# avoids the use of null when possible, but it lives in the .NET eco-system, so it cannot avoid it completely. In a perfect world, there would be no null values, but you just sometimes need them.
For example, you may need to call a .NET method with null as an argument, and you may need to check whether the result of a .NET method call was null.
The way F# deals with this is:
Null is used when working with types that come from .NET (when you have a value or argument of a type declared in .NET, you can use null as a value of that type, and you can test whether it equals null).
Option is needed when working with F# types, because values of types declared in F# cannot be null (and the compiler prohibits using null as a value of these types).
In F# programming, you'll probably use option when working with .NET types as well (if you have control over how their values are created). You'll just never create a null value, and then use option to get a guarantee that you'll always handle missing values correctly.
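For example, a minimal sketch of that pattern (tryGetEnv is a hypothetical name): wrap the null-producing .NET call at the boundary, so the rest of the F# code only ever sees an option.

let tryGetEnv (name : string) =
    // GetEnvironmentVariable returns null when the variable is not set
    match System.Environment.GetEnvironmentVariable name with
    | null -> None
    | value -> Some value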
This is really the only option. If you wanted to view all types as options implicitly when accessing the .NET API, pretty much every method would look like this:
option<Control> GetNextChild(option<Form> form, option<Control> current);
...programming with an API like this would be quite painful.