The goal is to quickly extract the bit at position N.
So far I've found only this way to do it:
CREATE OR REPLACE FUNCTION test(x int, size int)
RETURNS int AS
$BODY$
DECLARE
  y int;
BEGIN
  y = get_bit(x::bit(size) >> size - 1, size - 1);
  return y;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
But this fails because of bit(size):
invalid input syntax for integer: "size"
So I'm forced to write the size as a constant, for example bit(3).
Is there any way to set the bit size dynamically? Alternatively, are there other ways to extract a specific bit of an int/text?
You don't need bitfield types for this. To test bit 6 of the value 200:
SELECT 200 & (1<<6) != 0
This binary-ANDs the value with a mask that has only bit 6 set (produced by left-shifting 1 by 6 places), then tests whether the result is nonzero.
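If you want the bit itself as an integer (0 or 1) rather than a boolean, shift first and mask instead; a minimal sketch using the same sample values:
SELECT (200 >> 6) & 1;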
Alternatively, you can cast to a suitable bit size and use get_bit, though I think there's no reason whatsoever to do that when you can use the extremely fast bitwise AND and shifts shown above. Because get_bit numbers bits from the left, starting at zero, you have to subtract the position from 31 (for a 32-bit int) to get the position counted from the right:
SELECT get_bit(200::bit(32), 31 - 6);
so it's much simpler to just use bitwise operations, as in the first example.
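If a reusable function is still wanted, a plain-SQL sketch of the same idea (the function name bit_at is illustrative) avoids the bit(n) cast, and with it the dynamic-size problem, entirely:
CREATE OR REPLACE FUNCTION bit_at(x int, pos int)
RETURNS int AS
$BODY$
  -- shift the wanted bit down to position 0, then mask everything else off
  SELECT (x >> pos) & 1;
$BODY$
LANGUAGE sql IMMUTABLE;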
Is there an easier way of comparing two floats in the Metal shading language?
I'm currently multiplying by 1000 then converting to an int to get an integer comparison.
int r = (int)(color.r * 1000);
int checkColorR = (int)(50.0/255.0 * 1000);
if (r == checkColorR) ...
I'm after something like Mathf.Approximately in Unity.
A common technique is to compare the absolute difference to some desired precision:
if (fabs(color.r - 50.0/255.0) < 1.0/1000.0) ...
You can, of course, define a function for this rather than repeating it all over the place.
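For instance, a minimal sketch (the function name is illustrative; fabs comes from Metal's standard library, or from math.h in plain C):
// true when a and b differ by less than epsilon
bool approximately(float a, float b, float epsilon) {
    return fabs(a - b) < epsilon;
}
...
if (approximately(color.r, 50.0/255.0, 1.0/1000.0)) ...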
Your current technique is bad because it does integer truncation. The two values could be very close, but on either side of an integer, and they would compare as not equal because one would be truncated to the integer below. For example, 4.9999 and 5.0001.
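In code, assuming those two scaled values:
int a = (int)(4.9999); // truncates to 4
int b = (int)(5.0001); // truncates to 5 -- a != b despite a difference of only 0.0002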
I am trying some things in Dafny. I want to code a simple data structure that holds an uncompressed image in memory:
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
Actually using it:
method Main() {
  var dat := [1,2,3];
  var im := image(1, 3, dat);
}
datatype image' = image(width: int, height: int, data: array<byte>)
newtype byte = b: int | 0 <= b <= 255
leads Dafny to complain:
stdin.dfy(3,24): Error: incorrect type of datatype constructor argument (found seq, expected array)
1 resolution/type errors detected in stdin.dfy
I might also want to demand that the byte array is not null, and that the size of the byte array equals width * height * 3 (three bytes per pixel for the RGB value).
How should I enforce this? I looked into newtype, which lets you put some constraints on variables of a certain type, but this works only for numeric types.
Dafny supports both immutable sequences (which are like mathematical sequences of elements) and mutable arrays (which are, like in C and Java, pointers to elements). The error you're getting is telling you that you're calling the image constructor with a seq<byte> value where an array<byte> value is expected.
You can fix the problem by replacing your definition of dat with:
var dat := new byte[3];
dat[0], dat[1], dat[2] := 1, 2, 3;
However, the more typical thing, if you're using a datatype (which is immutable), would be to use a sequence. So, you probably want to instead change your definition of image to:
datatype image = image(width: int, height: int, data: seq<byte>)
Btw, note that Dafny allows you to name a type and one of its constructors the same, so there's no reason to name one of them with a prime (unless you want to, of course).
Another matter of style is to use a half-open interval in your definition of byte:
newtype byte = b: int | 0 <= b < 256
Since half-open intervals are prevalent in computer science, Dafny's syntax favors them. For example, for a sequence s, the expression s[52..57] denotes a subsequence of s of length 5 (that is, 57 minus 52) starting in s at index 52. One more thing, you can also leave out the type int of b if you want, since Dafny will infer it:
newtype byte = b | 0 <= b < 256
You also asked about the possibility of adding a type constraint, so that the sequence in your datatype will always be of length 3. As you discovered, you cannot do this with a newtype, because newtype (at least for now) only works with numeric types. You can (almost) use a subset type, however. This would be done as follows:
type triple = s: seq<byte> | |s| == 3
(In this example, the first vertical bar is like the one in the newtype declaration and says "such that", whereas the next two denote the length operator on sequences.) The trouble with this declaration is that types must be nonempty and Dafny isn't convinced that there are any values that satisfy the constraint of triple. Well, Dafny is not trying very hard. The plan is to add a witness clause to the type (and newtype) declaration, so that a programmer can show Dafny a value that belongs to the triple type. However, this support is waiting for some implementation changes that will allow customized initial values, so you cannot use this constraint at this time.
Not that you want it here, but Dafny would let you give a weaker constraint that admits the empty sequence:
type triple = s: seq<byte> | |s| <= 3
So, instead, if you want to express that an image value has a data component of length 3, introduce a predicate:
predicate GoodImage(img: image)
{
  |img.data| == 3
}
and use this predicate in specifications like pre- and postconditions.
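For example, a hypothetical method could demand a well-formed image up front:
method Process(img: image)
  requires GoodImage(img)
{
  var first := img.data[0];  // safe: GoodImage gives |img.data| == 3
}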
Program safely,
Rustan
I know that storing a Single (or Double) value cannot be perfectly precise, so storing, for example, 125.12 can result in 125.1200074788. Now, Delphi has some useful functions like SameValue or CompareValue that take an epsilon as a parameter and report that 125.1200074788 and, for example, 125.1200087952 are equal.
But I often see code like if aSingleVar = 0 then ..., and as far as I can see this always works. Why? Why does storing, for example, 0 in a Single variable keep the exact value?
Only values of the form m * 2^e, where m and e are integers, can be stored in a floating-point variable (and not even all of those; it depends on the precision). 0 has this form; 125.12 does not, as it equals 3128/25, and 1/25 is not an integer power of 2.
Comparing 125.12 to a single- (or double-) precision variable will almost always return False, because the literal 125.12 is treated as an extended-precision number, and no single- (or double-) precision number has exactly that value.
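A minimal sketch of both effects (SameValue comes from the Math unit; the epsilon is illustrative):
program Demo;
{$APPTYPE CONSOLE}
uses Math;
var
  s: Single;
begin
  s := 125.12;
  Writeln(s = 125.12);                 // usually FALSE: the literal is Extended
  Writeln(SameValue(s, 125.12, 1E-4)); // TRUE: equal within the epsilon
  s := 0;
  Writeln(s = 0);                      // TRUE: 0 is exactly representable
end.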
Looks like a good use for the BigDecimals unit by Rudy Velthuis. Millions of decimal places of accuracy and precision.
I have this line:
const
  MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
The above does not compile, with error:
[Error] Sets may have at most 256 elements
But this line does compile ok:
var WS: WideString;
if WS[1] in [WideChar('A')..WideChar('Z')] then...
And this also compiles ok:
const
  MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
...
if WS[1] in MY_SET then...
Why is that?
EDIT: My question is: why does if WS[1] in [WideChar('A')..WideChar('Z')] compile, and why does MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')]; compile? Don't they also have to obey the set rules?
A valid set has to obey two rules:
Each element in a set must have an ordinal value less than 256.
The set must not have more than 256 elements.
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
Here you declare a set type (set of WideChar) whose base type has more than 256 possible values -> compiler error.
if WS[1] in [WideChar('A')..WideChar('Z')]
Here, the compiler sees WideChar('A') as an ordinal value. This value and all other values in the set are below 256, which satisfies rule 1.
The number of unique elements (Ord('Z') - Ord('A') + 1 = 26) is also within limits, so the second rule passes.
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
Here you declare a set that also fulfills the requirements as above. Note that the compiler sees this as a set of ordinal values, not as a set of WideChar.
A set can have no more than 256 elements.
Even with that few elements, a set already uses 32 bytes.
From the documentation:
A set is a bit array where each bit indicates whether an element is in the set or not. The maximum number of elements in a set is 256, so a set never occupies more than 32 bytes. The number of bytes occupied by a particular set is equal to
(Max div 8) - (Min div 8) + 1
For this reason only sets of byte, (ansi)char, boolean and enumerations with fewer than 257 elements are possible.
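As a worked example of the formula, a set of 'a'..'z' (ordinal values 97..122) occupies (122 div 8) - (97 div 8) + 1 = 15 - 12 + 1 = 4 bytes.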
Because WideChar uses 2 bytes, it has 65536 possible values.
A set of WideChar would take up 8 KB, which is too large to be practical.
type
  Capitals = 'A'..'Z';
const
  MY_SET: set of Capitals = [WideChar('A')..WideChar('Z')];
This will compile and work the same.
It does seem a bit silly to use WideChar if your code ignores Unicode.
As written, only the English capitals are recognized; you do not take different locales into account.
In this case it would be better to use code like
if (AWideChar >= 'A') and (AWideChar <= 'Z') ....
That will work no matter how many chars fall in between.
Obviously you can encapsulate this in a function to save on typing.
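For example (the function name is illustrative):
function IsLatinCapital(const C: WideChar): Boolean;
begin
  Result := (C >= 'A') and (C <= 'Z');
end;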
If you insist on having large sets, see this answer: https://stackoverflow.com/a/2281327/650492
let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent non-negative values.
To expand on other answers:
When a type starts with a u, it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the first (most significant) bit is not used as part of the magnitude itself; only the other 63 are, and that bit says whether the number is negative. If the number is unsigned, then all 64 bits are used as part of the number, and the number is always non-negative (i.e., positive or 0).
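For example, the extremes make the difference visible (an F# sketch; the hex values are in the comments):
let maxSigned   = System.Int64.MaxValue   // 0x7FFFFFFFFFFFFFFF: sign bit clear
let minSigned   = System.Int64.MinValue   // 0x8000000000000000: sign bit set
let maxUnsigned = System.UInt64.MaxValue  // 0xFFFFFFFFFFFFFFFF: all 64 bits count toward the value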
Well, you could assign it -1, and on most architectures the two's complement representation would be stored. Signed versus unsigned really only matters for type checking; there is no negative sign in hardware.
I have no idea whether the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care:
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
    uint64_t x = -1; /* wraps around to UINT64_MAX */
    printf("0x%" PRIx64 "\n", x); /* prints 0xffffffffffffffff */
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment as a reminder that it's -1.
I don't have the F# tools installed at the moment, so I cannot verify this.
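For reference, a minimal sketch that sticks to the hex literal, which needs no conversion at all:
let x = 0xFFFFFFFFFFFFFFFFUL   // all 64 bits set, i.e. System.UInt64.MaxValue
match x with
| 0xFFFFFFFFFFFFFFFFUL -> printfn "all bits set"
| _ -> printfn "something else"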
If you want to go with a signed int, use the int64 literal:
-1L
But you can't match a negative number against a uint, as others have stated.