Error with comparing CGPoint using XCTAssertEqual in Xcode 5.1 - ios

I just upgraded to Xcode 5.1 and am seeing a compilation error in my unit tests for code like this:
CGPoint p1 = CGPointMake(1,2);
CGPoint p2 = CGPointMake(2,3);
XCTAssertEqual(p1, p2, @"Points not equal");
This is the error message:
Invalid operands to binary expression ('typeof (p1)' (aka 'struct CGPoint') and 'typeof (p2)' (aka 'struct CGPoint'))
The same code worked in previous versions of Xcode. Is the code incorrect, or is this a bug in the latest Xcode?
Update
The error is triggered by the XCTAssertEqual macro doing a != on the two variables. As these are structs, this is not allowed. Did the macro change from 5.0 to 5.1, or did the compiler allow comparison of structs before?
Update 2
The code can be fixed by changing it to:
XCTAssertEqualObjects([NSValue valueWithCGPoint:p1],
[NSValue valueWithCGPoint:p2],
@"Points not equal");
I would still like to know what caused this to start failing. (Unfortunately, the old version of Xcode is removed by installing the new one.)

The functionality of this macro changed in Xcode 5.1: it can now compare scalar values of different types, but it no longer supports nonscalar types such as the struct CGPoint.
From the release notes:
The XCTAssertEqual macro (formerly STAssertEquals using OCUnit) correctly compares scalar values of different types without casting, for example, int and NSInteger. It can no longer accept nonscalar types, such as structs, for comparison. (14435933)

That test code:
XCTAssertEqual(p1, p2, @"Points not equal");
...definitely compiles fine in Xcode 5.0.1 (5A2034a). In 5.0.1, XCTAssertEqual evaluates to _XCTPrimitiveAssertEqual, which is not doing a !=. It's encoding the primitives into NSValues via value:withObjCType:, then comparing via isEqualToValue:.
Another option would be to use something like:
XCTAssertTrue(NSEqualPoints(NSPointFromCGPoint(p1), NSPointFromCGPoint(p2)),
              @"Points not equal");
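A third option (a sketch, assuming the usual XCTest setup) is CoreGraphics' own comparison function, which avoids boxing into NSValue entirely:

```objectivec
// CGPointEqualToPoint does an exact == on both components, so it has
// the same float-equality caveats as comparing the structs directly.
- (void)testPointsEqual {
    CGPoint p1 = CGPointMake(1, 2);
    CGPoint p2 = CGPointMake(1, 2);
    XCTAssertTrue(CGPointEqualToPoint(p1, p2), @"Points not equal");
}
```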


"The type of pattern could not be computed" when iterating over aliased rel[loc,str,int,int]

I have a rel[loc, str, int, int] which I aliased as ECCModel
Iterating over it in a function produces the error "Type of pattern could not be computed" when an element of the pattern is named location. With any other name, the error disappears. Is "location" a reserved word in Rascal? Despite the error, the code compiles and runs just fine.
The error can be produced with the following method
alias ECCModel = rel[loc, str, int, int];
map[str,int] Foo(ECCModel bar)
{
return (y : z | <location, y, z, _> <- bar);
}
Is this expected behavior?
This is not expected behavior, and location is not a reserved word (loc is, though).
I think this is a glitch of the current compiler.
For clarification: we currently have three implementations of Rascal:
The Rascal interpreter.
The current (also already sometimes called "old") compiler.
The new compiler we are working on that is not yet released.
Your example runs fine with the interpreter and the new compiler, so the problem you are reporting will disappear over time.
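Until then, a workaround implied by the question itself is simply to rename the pattern variable; any name other than location works (source below is an arbitrary choice):

```rascal
alias ECCModel = rel[loc, str, int, int];

map[str, int] Foo(ECCModel bar)
{
    // renaming 'location' to 'source' avoids the compiler glitch
    return (y : z | <source, y, z, _> <- bar);
}
```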

No '...' candidates produce 'Range<String.Index>'

While converting an old iOS app to Swift 3.0, I hit the following issue.
The code is:
cutRange = numberString.index(numberString.startIndex, offsetBy:2)...numberString.index(numberString.startIndex, offsetBy:5)
The error message I get is:
No '...' candidates produce the expected contextual result type 'Range<String.Index>' (aka 'Range<String.CharacterView.Index>')
I have seen a few posts related to the subject, but was not very satisfied with them.
So what is the simplest way to solve this problem?
In Swift 3, the two range operators generate different result types:
closed range operator ... -> ClosedRange (by default)
half-open range operator ..< -> Range (by default)
So, assuming your cutRange is declared as Range<String.Index>, you need to use the half-open range operator ..<:
cutRange = numberString.index(numberString.startIndex, offsetBy:2)..<numberString.index(numberString.startIndex, offsetBy:6)
(Do not miss that the last offset has changed from 5 to 6: a half-open range excludes its upper bound.)
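Alternatively, if you control the declaration, you can keep the ... operator and declare the range as ClosedRange<String.Index> instead. A minimal sketch (the string and offsets are made up for illustration):

```swift
let numberString = "0123456789"
let start = numberString.index(numberString.startIndex, offsetBy: 2)
let end = numberString.index(numberString.startIndex, offsetBy: 5)
// ClosedRange is what the ... operator produces, so no conversion is needed
let cutRange: ClosedRange<String.Index> = start...end
print(numberString[cutRange]) // characters at offsets 2 through 5: "2345"
```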

How to do the equivalent of pattern matching on native .NET types?

let print_type_c (x: Type)=
if x = typeof<Int32> then "int"
elif x = typeof<Single> then "float"
elif x = typeof<Tuple<_>> then "tuple"
elif x = typeof<FSharpFunc<Tuple<'inp>,'out>> then "function..."
else failwithf "Not supported(%A)" x
print_type_c (typeof<int>) // prints int
print_type_c (typeof<float32>) // prints float
print_type_c (typeof<int*int>) // throw an exception
I am trying to build a primitive F# quotations to Cuda compiler and I need to figure out how to translate tuple types to C struct types in function declarations, but I am really not familiar with how the .NET machinery works. It is disappointing that F# quotations do not return a nice AST based on discriminated unions, which would make everything explicit, but unless I want to do the typechecking myself, I'll have to work with this.
What I am asking is: is there a way to rewrite the crappy piece of code above with pattern matching on .NET Types? Because the above is clearly not working.
Edit: The question is not a duplicate because that linked question is asking how to print the properties of F# records, while I am asking how to print the types of native .NET types (including functions) inside F# quotations. A visual scan of the accepted answer here and there should show that they are completely different and not even using the same functions.
Runtime type inference? On types not known at compile time? No, sorry, doesn't exist, you'll have to build one yourself.
To check if a particular type is an instance of a specific generic type, you need to see if it's generic in the first place, and then get its generic definition, and compare that:
if x.IsGenericType && (x.GetGenericTypeDefinition() = typeof<System.Tuple<_>>.GetGenericTypeDefinition()) then "tuple" else "not"
Note that there is not just one Tuple type, but many - Tuple<_>, Tuple<_,_>, Tuple<_,_,_>, and so on - up to 8.
Same goes for FSharpFunc:
if x.IsGenericType && (x.GetGenericTypeDefinition() = typeof<FSharpFunc<_,_>>.GetGenericTypeDefinition()) then "function" else "not"
To get generic arguments of a type, use .GetGenericArguments():
if x.IsGenericType && (x.GetGenericTypeDefinition() = typeof<FSharpFunc<_,_>>.GetGenericTypeDefinition())
then
sprintf "function %s -> %s" (x.GetGenericArguments().[0].Name) (x.GetGenericArguments().[1].Name)
else
"not function"
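Putting the pieces above together, a hedged sketch of the corrected print_type_c might look like this; FSharpType.IsTuple and FSharpType.IsFunction from FSharp.Reflection are used here as a shortcut that covers every tuple arity at once:

```fsharp
open System
open Microsoft.FSharp.Reflection

let print_type_c (x: Type) =
    if x = typeof<Int32> then "int"
    elif x = typeof<Single> then "float"
    // FSharpType covers every tuple arity and curried function shapes
    elif FSharpType.IsTuple x then "tuple"
    elif FSharpType.IsFunction x then "function"
    else failwithf "Not supported(%A)" x

printfn "%s" (print_type_c typeof<int>)        // int
printfn "%s" (print_type_c typeof<float32>)    // float
printfn "%s" (print_type_c typeof<int * int>)  // tuple
printfn "%s" (print_type_c typeof<int -> int>) // function
```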

gfortran 4.8.0 bug? Return type mismatch of function

I used gfortran 4.1.2 and gfortran 4.8.0 to compile the following simple code:
function foo(a, b) result(res)
integer, intent(in) :: a, b
integer res
res = a+b
end function foo
program test
integer a, b, c
c = foo(a, b)
end program test
gfortran 4.1.2 succeeds, but gfortran 4.8.0 gives the weird error:
test.F90:14.11:
c = foo(a, b)
1
Error: Return type mismatch of function 'foo' at (1) (REAL(4)/INTEGER(4))
Any idea?
There is a bug in your code: you don't declare the return type of the function foo in the main program, so per the Fortran implicit typing rules it gets the default real type.
You should (1) always use implicit none, and (2) wherever possible use modules or contained procedures, which give you explicit interfaces.
The reason why GFortran 4.1 doesn't report this error is that older versions of GFortran always functioned in a 'procedure at a time' mode; thus the compiler is happily oblivious to any other functions in the same file. Newer versions work in 'whole file' mode (default since 4.6) where the compiler 'sees' all the procedures in a file at a time. This allows the compiler to catch errors such as the one in your code, and also provides some optimization opportunities.
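A sketch of the corrected program, using a contained function so the interface is explicit (the values of a and b are added for illustration, since the original never initializes them):

```fortran
program test
  implicit none
  integer :: a, b, c
  a = 1
  b = 2
  c = foo(a, b)   ! foo's interface is explicit because it is contained
  print *, c      ! prints 3
contains
  function foo(a, b) result(res)
    integer, intent(in) :: a, b
    integer :: res
    res = a + b
  end function foo
end program test
```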

Use of selected_real_kind in Fortran to achieve desired minimum precision

I am using gfortran 4.7.1 on Mac OS X 10.8 to compile a simple program using a specific precision:
program simple
implicit none
integer, parameter :: l = selected_real_kind(18,100)
real(kind=l) :: myNum
myNum = 0.123456789123456789
print '(f18.12)', myNum
end program simple
When I compile and run, I get 0.123456791043, which obviously indicates I'm not getting the precision I asked for.
However if I compile with -fdefault-real-8 I get the right answer. Obviously, I could just move on and use the option, but I can't help wondering whether this is a bug or I am missing some justification. Do I really have to specify my real size outside of the code itself when it seems that selected_real_kind intent is to allow the code to specify precision?
Sorry. You have to add the kind suffix _l to the literal in the assignment. Without it, 0.123456789123456789 is a default-kind real literal, so the extra digits are lost before the value is ever stored in myNum.
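Concretely, a sketch of the corrected program:

```fortran
program simple
  implicit none
  integer, parameter :: l = selected_real_kind(18, 100)
  real(kind=l) :: myNum
  ! the _l suffix makes the literal itself extended-precision
  myNum = 0.123456789123456789_l
  print '(f18.12)', myNum   ! now prints 0.123456789123
end program simple
```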
