The current version of Dart Editor shows the bitwise XOR operator as not defined for class bool.
I don't see it defined in num.dart either.
Ex:
bool x = a ^ b;
The editor shows the caret (^) as not defined.
Update:
Dart's API spec only allows bitwise XOR on integers. I fixed my code to work properly with bools.
You can use the XOR operator on booleans since Dart version 2.1
[...] since Dart 2.1, the bool class has had non-short-circuit operators &, | and ^.
They can be used where you want both sides to be evaluated, and, especially for ^, they can be used in assignments: bool parity = false; while (something) parity ^= checkSomething();.
Taken from the corresponding Github issue.
See the dart documentation for XOR here.
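For example, this compiles and runs on Dart 2.1 or later:
void main() {
  bool a = true;
  bool b = false;
  bool x = a ^ b; // true: exactly one operand is true
  print(x);
}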
(Copied from the question, so that this appears as answered...)
Dart's spec only allows bitwise XOR on integers.
Related
Lightuserdata is different from userdata, so what can I do with it? I mean: what operations does lightuserdata support in Lua? It looks like I cannot convert it to any other data type.
One of my case:
My C library returns a C pointer named 'c_pointer' (i.e. a lightuserdata) to Lua, and then I want:
my_pointer = c_pointer +4
and then pass 'my_pointer' back to the C library. Since I cannot do anything with 'c_pointer', the expression 'c_pointer + 4' is invalid.
I am wondering whether there are any practical solutions to this.
Lightuserdata are created by C libraries. They are simply C pointers.
For example, you can use them to refer to data you allocate with malloc, or statically allocate in your module. Your C library can transfer these pointers to the Lua side as a lightuserdata using lua_pushlightuserdata, and later Lua can give it back to your library (or other C code) on the stack. Lua code can use the lightuserdata as any other value, storing it in a table, for example, even as a table key.
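A minimal sketch of the C side (my_new_buffer is a hypothetical name, not from your code):
#include <stdlib.h>
#include <lua.h>

/* Hand a malloc'd buffer to Lua as a lightuserdata. */
static int my_new_buffer(lua_State *L)
{
    void *p = malloc(256);
    lua_pushlightuserdata(L, p);  /* Lua receives an opaque pointer value */
    return 1;                     /* one result left on the stack */
}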
ADDENDUM
To answer your revised question, if you want to add an offset to the pointer, do it on the C side. Pass the lightuserdata and the integer offset to C, and let C do the offset using ptr[n]:
void *ptr = lua_touserdata(L, idx1);
lua_Integer n = lua_tointeger(L, idx2);
// do something with
((char *)ptr)[n];
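Wrapped up as a complete Lua C function, a minimal sketch (offset_ptr is a hypothetical name):
#include <lua.h>
#include <lauxlib.h>

/* offset_ptr(ptr, n): returns a lightuserdata pointing n bytes past ptr. */
static int offset_ptr(lua_State *L)
{
    char *ptr = (char *)lua_touserdata(L, 1);
    lua_Integer n = luaL_checkinteger(L, 2);
    lua_pushlightuserdata(L, ptr + n);
    return 1;
}
Lua code can then write my_pointer = offset_ptr(c_pointer, 4) and hand the result back to the library.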
Plain Lua has no pointer arithmetic, so, as Doug Currie stated, you would need to do the pointer arithmetic on the C side.
LuaJIT on the other hand can do pointer arithmetic (via the FFI library), so consider using that instead.
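A minimal sketch of the LuaJIT route, assuming c_pointer is the lightuserdata received from the C library:
-- LuaJIT only: reinterpret the pointer as a byte pointer.
local ffi = require("ffi")
local p = ffi.cast("char *", c_pointer)
local my_pointer = p + 4  -- pointer arithmetic on FFI cdata, as in C
Note that the result is an FFI cdata pointer, not a lightuserdata.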
I'm learning compilers and creating a code generator for a simple language that deals with two types: characters and integers.
After the user input has been scanned by the scanner and then parsed by the parser, I get an AST representation of the input. I have written a code generator for an even simpler language, which only processes expressions with integers, operators and variables.
However with this new language I sometimes get a subtree for a type declaration, like this:
(IS TYPE (x) (INT))
which says x is of type INT.
Should there be a case in my code generator that deals with these type declarations? Or is this simply for the semantic analyzer to type-check, so that I should assume the types have already been checked, ignore this part of the tree, and simply assign the value for x?
Both situations are possible. You need to describe more about your language to see whether you really need to add that feature to your code generator or can skip it as unnecessary, and so avoid extra work in this difficult and interesting topic of designing a programming language.
Is your "code generator" a program that receives as input code in one (maybe small) programming language and outputs code in another (maybe small) programming language?
This tool is usually called a "translator".
Is your "code generator" a program that receives as input code in a programming language and outputs an assembler- or bytecode-like language?
This tool is usually called a "compiler".
Note: "pile" is a synonym for "stack".
Usually an AST stores the type of an operation or function call. For example, in C:
...
int a = 3;
int b = 5;
float c = (float)(a * b);
...
The last line generates an AST similar to this (ASTs for the other lines are skipped):
                      +--------------+
                      |    [root]    |
                      | (no type) =  |
                      +------+-------+
                             |
                 +-----------+------------+
                 |                        |
           +-----+-----+   +-------------+-------------+
           |  (int) c  |   |  (float) (cast operation) |
           +-----------+   +-------------+-------------+
                                         |
                                   +-----+-----+
                                   | (int) ()  |
                                   +-----+-----+
                                         |
                                   +-----+-----+
                                   |  (int) *  |
                                   +-----+-----+
                                         |
                             +-----------+-----------+
                             |                       |
                       +-----+-----+           +-----+-----+
                       |  (int) a  |           |  (int) b  |
                       +-----------+           +-----------+
Note that the "(float)" cast acts like an operator or a function call, similar to the type node in your question.
Good Luck.
If this is a declaration
(IS TYPE (x) (INT))
then x should be laid out in memory. In the case of C and automatic variables, local auto variables are allocated on the stack. To reserve the right amount of stack space you must know the sizes of all local variables, and the sizes come from their types.
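For example, a sketch of laying out locals on the stack (Local and sizeof_type are hypothetical names, not from your compiler):
/* Assign each local a stack offset; the size comes from its type. */
typedef struct { int type; int offset; } Local;

int sizeof_type(int type);  /* e.g. 4 for INT, 1 for CHAR */

int assign_offsets(Local *locals, int count)
{
    int frame_size = 0;
    for (int i = 0; i < count; i++) {
        frame_size += sizeof_type(locals[i].type);
        locals[i].offset = -frame_size;  /* below the frame pointer */
    }
    return frame_size;  /* reserve this much in the function prologue */
}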
If the variable is stored in a register, you should select a register of the needed size (think of x86, where AL, AX, EAX and RAX are the same register at different sizes), if your target has such registers.
Also, the type is needed when an operation in the AST is ambiguous, i.e. it can operate on different data sizes (e.g. char, short, int, or 8-bit, 16-bit, 32-bit, etc.). For some assemblers the size of the data is encoded into the instruction itself, so the code generator must remember the sizes of variables.
Or, if the type of an operation was not recorded in the AST, the ADD node:
(ADD (x) (y))
may mean either a float or an int addition (FADD or ADD instructions), so the types of x and y are needed in the code generator to select the right variant.
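For example, a sketch of instruction selection driven by the node type (Node, gen and emit are hypothetical names):
/* Choose the add instruction from the type recorded on the AST node. */
enum { TYPE_INT, TYPE_FLOAT };

typedef struct Node { int type; struct Node *left, *right; } Node;

void gen(Node *n);         /* generates code for a subtree */
void emit(const char *s);  /* writes one instruction */

void gen_add(Node *n)
{
    gen(n->left);
    gen(n->right);
    if (n->type == TYPE_FLOAT)
        emit("FADD");  /* floating-point addition */
    else
        emit("ADD");   /* integer addition */
}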
By definition, integer division returns the quotient.
Why does 4613.9145 div 100. give an error ("bad argument")?
For div the arguments need to be integers. / accepts arbitrary numbers as arguments, especially floats. So for your example, the following would work:
1> 4613.9145 / 100.
46.139145
To see the difference, try:
2> 10 / 10.
1.0
3> 10 div 10.
1
Documentation: http://www.erlang.org/doc/reference_manual/expressions.html
Update: Integer division, sometimes denoted \, can be defined as:
a \ b = floor(a / b)
So you'll need a floor function, which isn't in the standard lib.
% intdiv.erl
-module(intdiv).
-export([floor/1, idiv/2]).

floor(X) when X < 0 ->
    T = trunc(X),
    case X - T == 0 of
        true  -> T;
        false -> T - 1
    end;
floor(X) ->
    trunc(X).

idiv(A, B) ->
    floor(A / B).
Usage:
$ erl
...
Eshell V5.7.5 (abort with ^G)
> c(intdiv).
{ok,intdiv}
> intdiv:idiv(4613.9145, 100).
46
Integer division in Erlang, div, is defined to take two integers as input and return an integer. The link you give in an earlier comment, http://mathworld.wolfram.com/IntegerDivision.html, only uses integers in its examples, so it is not really useful in this discussion. Using trunc and round will allow you to use any arguments you wish.
I don't know quite what you mean by "definition." Language designers are free to define operators however they wish. In Erlang, they have defined div to accept only integer arguments.
If it is the design decisions of Erlang's creators that you are interested in knowing, you could email them. Also, if you are curious enough to sift through the (remarkably short) grammar, you can find it here. Best of luck!
Not sure what you're looking for, @Bertaud. Regardless of how it's defined elsewhere, Erlang's div only works on integers. You can convert the arguments to integers before calling div:
trunc(4613.9145) div 100.
or you can use / instead of div and convert the quotient to an integer afterward:
trunc(4613.9145 / 100).
And trunc may or may not be what you want: you may want round, or floor, or ceiling (which are not defined in Erlang's standard library, but aren't hard to define yourself, as miku did with floor above). That's part of the reason Erlang doesn't assume something and do the conversion for you. But in any case, if you want an integer quotient from two non-integers in Erlang, you have to have some sort of explicit conversion step somewhere.
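For completeness, a ceiling/1 in the same style as miku's floor/1 above (a sketch; like floor, it is not in the standard library):
%% Round towards positive infinity.
ceiling(X) when X < 0 ->
    trunc(X);
ceiling(X) ->
    T = trunc(X),
    case X - T == 0 of
        true  -> T;
        false -> T + 1
    end.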
I was using the Pow function of the BigInteger class in F# when my compiler told me :
This construct is deprecated. This member has been removed to ensure that this
type is binary compatible with the .NET 4.0 type System.Numerics.BigInteger
Fair enough, I guess, but I didn't find a replacement immediately.
Is there one? Should we only use our own Pow functions? And (how) will it be replaced in .NET 4.0?
You can use the pown function
let result = pown 42I 42
pown works on any type that 'understands' multiplication and 'one'.
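For instance:
let big = pown 42I 42   // bigint, as above
let f   = pown 2.0 10   // 1024.0 : float
let n   = pown 3 4      // 81 : int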
If you look at F# from the perspective of being based on OCaml, the OCaml Num module has power_num. Since OCaml's num type holds arbitrary-precision rational numbers, it can handle numbers of any size; it is not limited by the CPU register width because it can do the math symbolically. Also, since num is defined as
type num =
| Int of int
| Big_int of Big_int.big_int
| Ratio of Ratio.ratio
they can handle very small numbers without loss of precision, thanks to the Ratio case.
Since F# does not have the num type, Jack created the FSharp.Compatibility.OCaml module, which has num.fs and is available via NuGet.
So you can get all the precision you want using this, and the num functions can handle negative exponents.
let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave aside the fact that you can't really represent a negative value with a data type that can only store positive values (and zero, of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like any other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent positive values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the first bit is not used as part of the number itself; only the other 63 are. That bit is used to say whether the number is negative. If the number is unsigned, then all bits, including the first, are used as part of the number, and the number is always non-negative (i.e. positive or 0).
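For instance, with 64 bits this gives the following ranges:
printfn "%d" System.Int64.MaxValue   //  9223372036854775807  (2^63 - 1)
printfn "%d" System.Int64.MinValue   // -9223372036854775808  (-2^63)
printfn "%d" System.UInt64.MaxValue  //  18446744073709551615 (2^64 - 1)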
Well, you could assign it -1, and on most architectures the two's complement representation would be stored. The signed and unsigned distinctions are really only for type checking; there is no negative sign in hardware.
I have no idea whether F#'s type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;               /* wraps around to all bits set */
    printf("0x%" PRIx64 "\n", x);  /* prints 0xffffffffffffffff */
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
I don't have the F# tools installed at the moment, so I cannot verify this.
If you want to go with a signed int:
-1L : int64
but you can't match a negative number to a uint, as others have stated.
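Putting that together, a minimal sketch of the match from the question, using the unsigned literal for the bit pattern of -1 (the printed messages are illustrative only):
let myuint64 = 10UL

match myuint64 with
| 0xFFFFFFFFFFFFFFFFUL -> printfn "all bits set (the bit pattern of -1)"
| _ -> printfn "something else"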