Following on from this question, I still seem to be battling at the frontiers of what is possible, though I don't think that I'm doing anything particularly bleeding edge:
type Vector2d = { X: float<'u>; Y: float<'u> }
Gives me error FS0039: The unit-of-measure parameter 'u' is not defined.
And
type Vector2d = { X: float<_>; Y: float<_> }
Gives me error FS0191: anonymous unit-of-measure variables are not permitted in this declaration.
Is it the case that functions can handle 'generic' units of measure, but types can't?
type Vector2d<[<Measure>]'u> = { X: float<'u>; Y: float<'u> }
should do the trick
Note: this is correct as of the 1.9.6.2 CTP release, but this API is not currently viewed as stable.
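With that declaration in place, the record can be instantiated at any concrete unit. A minimal sketch, where the measure `m` is just an example introduced here for illustration:

```fsharp
[<Measure>] type m

type Vector2d<[<Measure>] 'u> = { X: float<'u>; Y: float<'u> }

// The measure parameter is inferred from the literals
let v : Vector2d<m> = { X = 3.0<m>; Y = 4.0<m> }

// Units propagate through arithmetic: this has type float<m^2>
let lengthSq = v.X * v.X + v.Y * v.Y
```

Mixing units (say, adding a `Vector2d<m>` component to a `Vector2d<s>` component) would then be rejected at compile time.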
In C++, I can create structures like these:
union Vector4
{
    struct { float x, y, z, w; };
    float data[4];
};
so I can easily access the data as fields or as a contiguous array. Alternatively, I can just create a pointer to the first field x and read from that pointer as a contiguous array.
I know that there are enums, but I can't pay their additional overhead. I also know I can create unions in Rust, but they require me to litter my code with unsafe wherever I access them. I feel I shouldn't have to, since the code is not actually unsafe: the underlying data is always represented as floats (and I need the C layout, #[repr(C)], so the compiler won't reorder the fields).
How would I implement this in Rust so that I can access the fields by name but also have easy and safe access to the whole struct's contiguous memory? If this is not possible, is there a way so I can safely take a slice of a struct?
There is no such thing as a safe union. Personally, I would argue that transmuting between fixed-size arrays of integer types should be considered safe, but at the moment there are no exceptions.
That being said, here is my totally 100% not-a-union Vector4. As you can see, Deref hides the unsafe code and lets you treat Vector4 as either a struct or an array depending on the context it is used in. The transmute isn't ideal either, but I feel it is justifiable in this case. If you choose to do something like this, you may want to implement DerefMut as well.
use std::ops::Deref;
// repr(C) is needed here: without it the compiler is free to reorder
// the fields, which would break the transmute below.
#[repr(C)]
pub struct Vector4<T> {
    pub x: T,
    pub y: T,
    pub z: T,
    pub w: T,
}
impl<T> Deref for Vector4<T>
where
    T: Copy + Sized,
{
    type Target = [T; 4];
    fn deref(&self) -> &Self::Target {
        use std::mem::transmute;
        unsafe { transmute(self) }
    }
}
pub fn main() {
    let a = Vector4 {
        x: 37,
        y: 21,
        z: 83,
        w: 94,
    };
    println!("{:?}", &a[..]);
    // Output: [37, 21, 83, 94]
}
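As noted above, a matching DerefMut makes the array view mutable as well. A self-contained sketch, using raw pointer casts instead of transmute (the layout guarantee from repr(C) is what makes both sound):

```rust
use std::ops::{Deref, DerefMut};

// repr(C) guarantees field order and layout, which the casts below rely on.
#[repr(C)]
pub struct Vector4<T> {
    pub x: T,
    pub y: T,
    pub z: T,
    pub w: T,
}

impl<T> Deref for Vector4<T> {
    type Target = [T; 4];
    fn deref(&self) -> &Self::Target {
        // Sound only because Vector4<T> and [T; 4] share the same layout.
        unsafe { &*(self as *const Vector4<T> as *const [T; 4]) }
    }
}

impl<T> DerefMut for Vector4<T> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        unsafe { &mut *(self as *mut Vector4<T> as *mut [T; 4]) }
    }
}

fn main() {
    let mut a = Vector4 { x: 37, y: 21, z: 83, w: 94 };
    a[0] = 1; // mutable access through DerefMut
    println!("{:?} {}", &a[..], a.x);
}
```

Writing to `a[0]` is visible through the field `a.x` afterwards, since both name the same memory.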
Note that I'm not trying to set the value in a CGRect. I'm mystified as to why the compiler is issuing this claim:
let widthFactor = 0.8
let oldWidth = wholeFrameView.frame.width
let newWidth = wholeFrameView.frame.width * widthFactor // Value of type '(CGRect) -> CGRect' has no member 'width'
let newWidth2 = wholeFrameView.frame.width * 0.8 // This is fine.
width is a CGFloat, while your multiplier is a Double. Explicitly declare the type of your multiplier:
let widthFactor: CGFloat = 0.8
All the dimensions of a CGRect are of type CGFloat, not Double, and because Swift is especially strict about types, you can't multiply a CGFloat by a Double.
The interesting thing, though, is that both CGFloat and Double implement ExpressibleByFloatLiteral. This means that 0.8, a "float literal", can be interpreted as either a Double or a CGFloat. Without context, it's always a Double, because of how the compiler is designed. Note that this only applies to float literals like 3.14, 3e8, etc., and not to identifiers of variables.
So the expression wholeFrameView.frame.width * 0.8 is valid because the compiler sees that width is a CGFloat, so it treats 0.8 as a CGFloat as well. No problems.
On the other hand, when you declare the variable widthFactor, it is automatically given the type Double, because there isn't any more context on that line to suggest to the compiler that you want any other type.
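The literal-defaulting behavior can be demonstrated without UIKit at all, using a hypothetical wrapper type (`MyFloat` is invented here for illustration) that also adopts ExpressibleByFloatLiteral:

```swift
struct MyFloat: ExpressibleByFloatLiteral {
    let value: Float
    // The literal arrives as a Double and is narrowed here
    init(floatLiteral value: Double) { self.value = Float(value) }
}

let a = 0.8           // no context: the literal defaults to Double
let b: MyFloat = 0.8  // context says MyFloat: the same literal becomes one

print(type(of: a))  // Double
print(type(of: b))  // MyFloat
```

The same float literal produces two different types purely from the surrounding context, which is exactly what happens with 0.8 next to a CGFloat.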
This can be fixed by directly telling the compiler that you want widthFactor to be a CGFloat:
let widthFactor: CGFloat = 0.8
Because, as others have noted, you can't multiply a Double and a CGFloat, the compiler doesn't know what you're intending.
So, instead of giving you an error about the frame property, which is what you think it's doing, it's actually making its best guess* and giving you an error related to a frame method of type (CGRect) -> CGRect. No such function has a width member, so what it tells you is technically true.
*Of course, its best guess is not good enough, hence a human being coming here to ask a question about it. So please file a bug!
Stepping onto my soapbox: this confusion would be avoided if Apple hadn't given the method the same name as the property. The convention of prefixing such methods with get would solve the problem. Some such convention is important in any language with first-class functions, to disambiguate between properties and methods.
wholeFrameView.frame has no member width. Also, you need widthFactor to be of type CGFloat. Try:
let newWidth = wholeFrameView.frame.size.width * CGFloat(widthFactor)
I'm trying to track down the errors in this github project.
https://github.com/gpbl/SwiftChart
The owner doesn't seem to answer any questions or respond.
I can't get this example to run:
// Create a new series specifying x and y values
let data = [(x: 0, y: 0), (x: 0.5, y: 3.1), (x: 1.2, y: 2), (x: 2.1, y: -4.2), (x: 2.6, y: 1.1)]
let series = new ChartSeries(data)
chart.addSerie(series)
Xcode gives this error in regards to the data
ViewController.swift:41:31: '(Double, Double)' is not identical to 'Float'
in the main file Chart.swift, there is this section of code
var labels: [Float]
if xLabels == nil {
    // Use labels from the first series
    labels = series[0].data.map( { (point: ChartPoint) -> Float in
        return point.x } )
}
else {
    labels = xLabels!
}
I'm not quite sure how to deal with the map .
There are typos in the README. It should read
let data = [(x: 0, y: 0), (x: 0.5, y: 3.1), (x: 1.2, y: 2), (x: 2.1, y: -4.2), (x: 2.6, y: 1.1)]
let series = ChartSeries(data)
chart.addSeries(series)
That being said, Swift by default infers 0.5 to be a Double, while the default init expects x and y to be of type Float.
I forked the repository, and added an init that will convert the doubles to float. This could obviously cause an issue if the double is too big, but for the small numbers it likely won't be an issue. I also added the example in question to the actual project. My fork is here.
I'll send a pull request if the owner wants to accept the changes. Otherwise, if I have time I may refactor it to all be Double and get rid of the extra init.
I added the following init in ChartSeries.swift; it saves you from having to annotate your array, since it converts an array of Double tuples to Float tuples.
init(data: Array<(x: Double, y: Double)>) {
    self.data = data.map({ (Float($0.x), Float($0.y)) })
}
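The map in that init is just a tuple-wise narrowing conversion; standalone, with the same sample data, it looks like this:

```swift
let doubles: [(x: Double, y: Double)] = [(0.0, 0.0), (0.5, 3.1), (1.2, 2.0)]

// Convert each Double tuple to a Float tuple, as the added init does
let floats: [(Float, Float)] = doubles.map { (Float($0.x), Float($0.y)) }

print(floats)
```

As noted, narrowing Double to Float loses precision for values outside Float's range, but for chart coordinates like these it is harmless.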
Is it possible to overload += operator in Swift to accept for example CGFloat arguments? If so, how?
My approach (below) does not work.
infix operator += { associativity left precedence 140 }
public func +=(inout left: CGFloat, right: CGFloat) {
    left = left + right
}
(Edit) Important:
The approach above actually works. Please see my answer below for an explanation of why I thought it did not.
I am sorry, my bad. The operator += does not need to be overloaded for CGFloat arguments as such overload is included in Swift. I was trying to do something like
let a: CGFloat = 1.5
a += CGFloat(2.1)
This failed because I cannot assign to a let constant, and the error displayed by Xcode confused me.
And of course, approach like in my original question (below) works for overloading operators.
infix operator += { associativity left precedence 140 }
public func +=(inout left: CGFloat, right: CGFloat) {
    left = left + right
}
Please feel free to vote to close this question.
The += operator for CGFloat is already available, so you just have to use it - the only thing you can do is override the existing operator if you want it to behave in a different way, but that's of course discouraged.
Moreover, the += operator itself already exists, so there is no need to declare it again with this line:
infix operator += { associativity left precedence 140 }
You should declare a new operator only if it's a brand new one, such as:
infix operator <^^> { associativity left precedence 140 }
However, if you want to overload the += operator for other type(s) for which it is not defined, this is the correct way:
func += (inout lhs: MyType, rhs: MyType) {
    lhs = // Your implementation here
}
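Note that the inout-before-the-name spelling above is pre-Swift 3 syntax. In Swift 3 and later, inout moves after the colon, and an operator for a specific type is usually declared as a static member of that type. A sketch with a hypothetical Vector type:

```swift
struct Vector {
    var x: Double
    var y: Double

    // Swift 3+ spelling: static member, `lhs: inout Vector`
    static func += (lhs: inout Vector, rhs: Vector) {
        lhs.x += rhs.x
        lhs.y += rhs.y
    }
}

var v = Vector(x: 1, y: 2)
v += Vector(x: 3, y: 4)
print(v.x, v.y)  // 4.0 6.0
```

Declaring it as a static member keeps the operator scoped to the type rather than polluting the global namespace.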
I came across a bug with 64-bit processors that I wanted to share.
CGFloat test1 = 0.58;
CGFloat test2 = 0.40;
CGFloat value;
value = fmaxf( test1, test2 );
The result would be:
value = 0.5799999833106995
This obviously is a rounding issue, but if you needed to check to see which value was picked you would get an erroneous result.
if( test1 == value ){
    // do something
}
however if you use either MIN( A, B ) or MAX( A, B ) it would work as expected.
I thought this was worth sharing.
Thanks
This has nothing to do with a bug in fminf or fmaxf. There is a bug in your code.
On 64-bit systems, CGFloat is typedef'd to double, but you're using the fmaxf function, which operates on float (not double), which causes its arguments to be rounded to single precision, thus changing the value. Don't do that.
On 32-bit systems, this doesn't happen because CGFloat is typedef'd to float, matching the argument and return type of fmaxf; no rounding occurs.
Instead, either include <tgmath.h> and use fmax without the f suffix, or use float instead of CGFloat.