DateTime conversion to Excel Date in F#

I read on Stack Overflow that the easiest way to convert a DateTime variable back to an Excel date is simply:
let exceldate = int(DateTime)
Admittedly this was in C# and not F#. This is supposed to work because the decimals represent the time and the integer part represents the date. I tried it and F# comes back with the error:
The type 'DateTime' does not support a conversion to the type 'int'
So how do I convert back to an Excel date?
More specifically, I am trying to create a vector of month firsts for the period between a start date and an end date. Both the vector output and the start and end dates are floats, i.e. Excel dates. Here is my clumsy first attempt:
let monthlies (std:float) (edd:float) =
    let stddt = System.DateTime.FromOADate std
    let edddt = System.DateTime.FromOADate edd
    let vecstart = new DateTime(stddt.Year, stddt.Month, 1)
    let vecend = new DateTime(edddt.Year, edddt.Month, 1)
    let nrmonths = 12 * (edddt.Year - stddt.Year) + edddt.Month - stddt.Month + 1
    let scaler = 1.0 - (float(stddt.Day) - 1.0) / float(DateTime.DaysInMonth(stddt.Year, stddt.Month))
    let dtsvec:float[] = Array.zeroCreate nrmonths
    dtsvec.[0] <- float(vecstart)
    for i = 1 to (nrmonths - 1) do
        let temp = System.DateTime.FromOADate dtsvec.[i-1]
        let temp2 = temp.AddMonths 1
        dtsvec.[i] = float temp2
    dtsvec
This doesn't work because of the conversion issue, and it is rather complicated and imperative.
How do I do the conversion? How can I do this more functionally? Thanks.

Once you have the DateTime object, just call ToOADate, like so:
let today = System.DateTime.Now
let excelDate = today.ToOADate()
So your example would end up like so:
let monthlies (std:float) (edd:float) =
    let stddt = System.DateTime.FromOADate std
    let edddt = System.DateTime.FromOADate edd
    let vecstart = new System.DateTime(stddt.Year, stddt.Month, 1)
    let vecend = new System.DateTime(edddt.Year, edddt.Month, 1)
    let nrmonths = 12 * (edddt.Year - stddt.Year) + edddt.Month - stddt.Month + 1
    let scaler = 1.0 - (float(stddt.Day) - 1.0) / float(System.DateTime.DaysInMonth(stddt.Year, stddt.Month))
    let dtsvec:float[] = Array.zeroCreate nrmonths
    dtsvec.[0] <- vecstart.ToOADate()
    for i = 1 to (nrmonths - 1) do
        let temp = System.DateTime.FromOADate dtsvec.[i-1]
        let temp2 = temp.AddMonths 1
        // note: <- is assignment; = would just compare and discard the result
        dtsvec.[i] <- temp2.ToOADate()
    dtsvec
In regards to getting rid of the loop, maybe something like this?
type Vector(x: float, y: float) =
    member this.x = x
    member this.y = y
    member this.xDate = System.DateTime.FromOADate(this.x)
    member this.yDate = System.DateTime.FromOADate(this.y)
    member this.differenceDuration = this.yDate - this.xDate
    member this.difference = System.DateTime.Parse(this.differenceDuration.ToString()).ToOADate()

type Program() =
    let vector = new Vector(34.0, 23.0)
    let difference = vector.difference

Related

FSCL error on a simple example

I am trying to use OpenCL with FSCL in F#, but I am getting some errors that I don't understand:
open FSCL.Compiler
open FSCL.Language
open FSCL.Runtime
open Microsoft.FSharp.Linq.RuntimeHelpers
open System.Runtime.InteropServices

[<StructLayout(LayoutKind.Sequential)>]
type gpu_point2 =
    struct
        val mutable x: float32
        val mutable y: float32
        new (q, w) = {x = q; y = w}
    end

[<ReflectedDefinition>]
let PointSum(a: gpu_point2, b: gpu_point2) =
    let sx = (a.x + b.x)
    let sy = (a.y + b.y)
    gpu_point2(sx, sy)

[<ReflectedDefinition; Kernel>]
let Modgpu(b: float32[], c: float32[], wi: WorkItemInfo) =
    let gid = wi.GlobalID(0)
    let arp = Array.zeroCreate<gpu_point2> b.Length
    let newpoint = gpu_point2(b.[gid], c.[gid])
    arp.[gid] <- newpoint
    arp

[<ReflectedDefinition; Kernel>]
let ModSum(a: gpu_point2[], b: gpu_point2[], wi: WorkItemInfo) =
    let gid = wi.GlobalID(0)
    let cadd = Array.zeroCreate<gpu_point2> a.Length
    let newsum = PointSum(a.[gid], b.[gid])
    cadd.[gid] <- newsum
    cadd

[<ReflectedDefinition; Kernel>]
let ModSum2(a: gpu_point2[], b: gpu_point2[], wi: WorkItemInfo) =
    let gid = wi.GlobalID(0)
    let cadd = Array.zeroCreate<gpu_point2> a.Length
    let newsum = gpu_point2(a.[gid].x + b.[gid].x, a.[gid].y + b.[gid].y)
    cadd.[gid] <- newsum
    cadd

let ws = WorkSize(64L)
let arr_s1 = <@ Modgpu([|0.f..63.f|], [|63.f..(-1.f)..0.f|], ws) @>.Run()
let arr_s2 = <@ Modgpu([|63.f..(-1.f)..0.f|], [|0.f..63.f|], ws) @>.Run()
With this code, when I try to use ModSum like this:
let rsum = <@ ModSum(arr_s1, arr_s2, ws) @>.Run()
it doesn't work, but when I use ModSum2 instead it works perfectly:
let rsum = <@ ModSum2(arr_s1, arr_s2, ws) @>.Run()
The error I get the first time I run it is:
FSCL.Compiler.CompilerException: Unrecognized construct in kernel body NewObject (gpu_point2, sx, sy)
and if I re-run it, the FSI console says:
System.NullReferenceException: Object reference not set to an instance of an object.
The only thing I know is that the error doesn't come from the use of another function, since I can define a dot product function that works:
[<ReflectedDefinition>]
let PointProd(a: gpu_point2, b: gpu_point2) =
    let f = (a.x * b.x)
    let s = (a.y * b.y)
    f + s
Thus, I guess the problem comes from the return type of PointSum. Is there a way to create such a function that sums two points and returns the point type? And why is it not working?
Edit/Update:
The same thing also happens with a record, if I define the type as:
[<StructLayout(LayoutKind.Sequential)>]
type gpu_point_2 = {x:float32; y:float32}
A function that directly sums two gpu_point_2 values works, but if I call it from a second function it raises the same error as with the struct.
Try to add [<ReflectedDefinition>] on the constructor of gpu_point2:
[<StructLayout(LayoutKind.Sequential)>]
type gpu_point2 =
    struct
        val mutable x: float32
        val mutable y: float32
        [<ReflectedDefinition>] new (q, w) = {x = q; y = w}
    end
Normally, any code that is called from the device needs this attribute, constructors included.

Savings account app

Hi, I'm trying to make an app that takes your capital times the interest rate raised to the number of years, so it calculates how much it has grown.
But I have encountered a problem with pow: I want it to raise the interest factor to the power of the number of years, but I only get 1 unless I use a higher value. I have tried using float and double with no luck. I'm really grateful for any help.
func dismissKeyboard() {
    // Resign the first responder status.
    view.endEditing(true)
    let myInt: Int? = Int(kapital.text!)
    let myInt1: Int? = Int(år.text!)
    let myInt2: Int? = Int(ränta.text!)
    let ab = 100.00000
    let a = 1.00000
    let faktor = Double(myInt2!) / Double(ab)
    let faktor1 = Double(faktor) + Double(a)
    let fx: Int = Int(pow(Double(faktor1), Double(myInt1!)))
    let result = Double(fx) * Double(myInt!)
    duhar.text = "\(result)"
}
You are converting the result of pow to an Int, here:
let fx: Int = Int(pow(Double(faktor1),Double(myInt1!)))
Doing that will drop the decimals and round down to the nearest integer. Try this instead:
let fx = pow(faktor1, Double(myInt1!))
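For context, here is a minimal sketch of the whole calculation with that fix applied, keeping the value as a Double until it is displayed (the outlet names kapital, år, ränta and duhar are taken from the question; reading the fields as Double directly is an assumption of this sketch):

func dismissKeyboard() {
    view.endEditing(true)
    // Read the text fields as Doubles so nothing is truncated on the way in.
    let capital = Double(kapital.text!) ?? 0
    let years = Double(år.text!) ?? 0
    let ratePercent = Double(ränta.text!) ?? 0

    let factor = 1.0 + ratePercent / 100.0   // e.g. 4 -> 1.04
    let growth = pow(factor, years)          // stays a Double, e.g. 1.04^30 ≈ 3.24
    let result = capital * growth
    duhar.text = "\(result)"
}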

NSDecimalRound in Swift

Trying to figure out the 'correct' way to round down decimal numbers in Swift and struggling to set up the C calls correctly (or something) as it is returning a weird result. Here's a snippet from Playground:
import Foundation
func roundTo2(result: UnsafePointer<Double>, number: UnsafePointer<Double>) {
    var resultCOP = COpaquePointer(result)
    var numberCOP = COpaquePointer(number)
    NSDecimalRound(resultCOP, numberCOP, 2, .RoundDown)
}
var from: Double = 1.54762
var to: Double = 0.0
roundTo2(&to, &from)
println("From: \(from), to: \(to)")
Output -> From: 1.54762, to: 1.54761981964356
I was hoping for 1.54. Any pointers would be appreciated.
The rounding process should be pretty straightforward, without any wrappers. All we need to do is call the function NSDecimalRound(_:_:_:_:), described here: https://developer.apple.com/documentation/foundation/1412204-nsdecimalround
import Cocoa
/// For example let's take any value with multiple decimals like this:
var amount: NSDecimalNumber = NSDecimalNumber(value: 453.585879834)
/// The mutable pointer reserves only "one cell" in memory for the
let uMPtr = UnsafeMutablePointer<Decimal>.allocate(capacity: 1)
/// Connect the pointer to the value of amount
uMPtr[0] = amount.decimalValue
/// Let's check the connection between variable/pointee and the pointer
Swift.print(uMPtr.pointee) /// result: 453.5858798339999232
/// One more pointer to the pointer
let uPtr = UnsafePointer<Decimal>.init(uMPtr)
/// Standard function call
NSDecimalRound(uMPtr, uPtr, Int(2), NSDecimalNumber.RoundingMode.bankers)
/// Check the result
Swift.print(uMPtr.pointee as NSDecimalNumber) /// result: 453.59
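If you would rather hide the pointer handling, a small helper along these lines (a sketch, assuming Swift 3 or later Foundation; the function name is made up here) wraps the same call:

import Foundation

// Convenience wrapper around NSDecimalRound for plain Decimal values.
func rounded(_ value: Decimal, scale: Int, mode: NSDecimalNumber.RoundingMode = .down) -> Decimal {
    var input = value          // local copies so we can pass them as pointers
    var result = Decimal()
    NSDecimalRound(&result, &input, scale, mode)
    return result
}

rounded(Decimal(1.54762), scale: 2)                          // 1.54
rounded(Decimal(453.585879834), scale: 2, mode: .bankers)    // 453.59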
My solution:
var from: Double = 1.54762
var to: Double = 0.0
let decimalSize = 2.0 //you want to round for 2 digits after decimal point, change to your right value
let k = pow(10.0, decimalSize) //k here is 100
let cent = from*k
/*
get floor (integer) value of this double,
equal to or less than 'cent'. You will get 154.
For a negative value, it will return -155.
If you want to get -154, you have to use ceil(cent) for cent < 0.
*/
let centRound = floor(cent)
to = centRound/k
println("From: \(from), to: \(to)")
As additional info to HoaParis's answer, you can make an extension for Double so you can call it easily again later:
extension Double {
    func roundDown(decimals: Int) -> Double {
        let k = pow(10.0, Double(decimals))   // for decimals = 2, k is 100
        let cent = self * k                   // e.g. 154.762
        let centRound = floor(cent)           // floor (integer) value of this double, e.g. 154.0
        return centRound / k
    }
}
var from: Double = 1.54762
from.roundDown(2)// 1.54
from.roundDown(3)// 1.547
Here's another approach (if you just want a fixed rounding to 2 digits):
extension Double {
    mutating func roundTo2Digits() {
        // Note: "%2.2f" rounds to the nearest value, it does not always round down.
        self = NSString(format: "%2.2f", self).doubleValue
    }
}
var a:Double = 12.3456
a.roundTo2Digits()
// Playground - noun: a place where people can play
import UIKit
// why rounding double (float) numbers is BAD IDEA
let d1 = 0.04499999999999999 // 0.045
let d2 = d1 + 5e-18 // 0.045 (a 'little bit' bigger)
let dd = d2 - d1 // 0.00000000000000000693889390390723
dd == 5e-18 // false
// this should work by mathematical theory
// and it works ...
// BUT!! the Double DOESN'T means Decimal Number
func round(d: Double, decimalNumbers: UInt) -> Double {
    let p = pow(10.0, Double(decimalNumbers))
    let s = d < 0.0 ? -1.0 : 1.0
    let dabs = p * abs(d) + 0.5
    return s * floor(dabs) / p
}
// this works as expected
let r1 = round(d1, 3) // 0.045
let r2 = round(d2, 3) // 0.045
r1 == r2 // true
// this works only in our heads, not in my computer
// as expected too ... :-)
let r11 = round(d1, 2) // 0.04
let r21 = round(d2, 2) // 0.05
r11 == r21 // false
// look at the difference, it is just about the decimal numbers required
// are you able to predict such a result?

Create Loop for Amortization Schedule in Swift

I'm looking to figure out a simple loop in order to calculate an amortization schedule in Swift.
So far, here is my setup on Playground:
let loanAmount: Double = 250000.00
let intRate: Double = 4.0
let years: Double = 30.0
var r: Double = intRate / 1200
var n: Double = years * 12
var rPower: Double = pow(1 + r, n)
var monthlyPayment: Double = loanAmount * r * rPower / (rPower - 1)
var annualPayment: Double = monthlyPayment * 12
For the actual loop, I'm unsure how to fix the code below.
for i in 0...360 {
    var interestPayment: Double = loanAmount * r
    var principalPayment: Double = monthlyPayment - interestPayment
    var balance: Double; -= principalPayment
}
Looking to generate a monthly schedule. Thanks in advance for any tip.
I'm guessing you mean to declare the balance variable outside the loop, and to decrement it inside the loop:
// stylistically, in Swift it's usual to leave
// off the types like Double unless you have a
// reason to be explicit
let loanAmount = 250_000.00
let intRate = 4.0
let years = 30.0
// since these are one-off calculations, you
// should use let for them, too. let doesn't
// just have to be for constant numbers, it just
// means the number can't change once calculated.
let r = intRate / 1200
let n = years * 12
let rPower = pow(1 + r, n)
// like above, these aren't changing. always prefer let
// over var unless you really need to vary the value
let monthlyPayment = loanAmount * r * rPower / (rPower - 1)
let annualPayment = monthlyPayment * 12
// this is the only variable you intend to "vary"
// so does need to be a var
var balance = loanAmount
// start counting from 1 not 0 if you want to use an open
// (i.e. including 360) range, or you'll perform 361 calculations:
for i in 1...360 {
    // you probably want to calculate interest
    // from balance rather than initial principal
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    balance -= principalPayment
    println(balance)
}
This should print out the correct balances going down to zero for the final balance (well actually 9.73727765085641e-09 – but that's a whole other question).
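As a rough sanity check on the figures above (a sketch using the question's inputs, values rounded):

import Foundation

// Standard annuity formula: 250,000 at 4% over 30 years (360 payments)
let monthlyRate = 0.04 / 12
let growthFactor = pow(1 + monthlyRate, 360.0)
let payment = 250_000.00 * monthlyRate * growthFactor / (growthFactor - 1)   // ≈ 1193.54 per month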
If you wanted to create a monthly balance, say in an array, you could add an additional array variable to store that in:
var balance = loanAmount
//array of monthly balances, with the initial loan amount to start with:
var monthlyBalances = [balance]
for i in 1...360 {
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    balance -= principalPayment
    monthlyBalances.append(balance)
}
Advanced version for anyone who's interested
You might wonder if there's a way to declare monthlyBalances with let rather than var. And there is! You could use reduce:
let monthlyBalances = reduce(1...360, [loanAmount]) { payments, _ in
    let balance = payments.last!
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    return payments + [balance - principalPayment]
}
However, this is a bit nasty for a couple of reasons. It would be much nicer if the Swift standard library had a slightly different version of reduce, called accumulate, that generated an array out of a running total, like this:
let monthlyBalances = accumulate(1...360, loanAmount) { balance, _ in
    let interestPayment = balance * r
    let principalPayment = monthlyPayment - interestPayment
    return balance - principalPayment
}
And here's a definition of accumulate:
func accumulate<S: SequenceType, U>
    (source: S, var initial: U, combine: (U, S.Generator.Element) -> U) -> [U] {
    var result: [U] = []
    result.append(initial)
    for x in source {
        initial = combine(initial, x)
        result.append(initial)
    }
    return result
}
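As a quick sanity check of accumulate with throwaway numbers (not the schedule above): starting from 100.0 and growing by 1% per step gives the running balances, including the starting value:

let runningBalances = accumulate(1...3, 100.0) { balance, _ in balance * 1.01 }
// [100.0, 101.0, 102.01, 103.0301]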

EntityFunctions.CreateDateTime issue with leap year in LINQ to Entities

When I have a leap-year date in my database (e.g. 29th Feb 2012), the EntityFunctions.CreateDateTime function throws System.Data.SqlClient.SqlException: Conversion failed when converting date and/or time from character string.
My code in my ASP.NET MVC (C#) application is as follows:
from u in _entities.tt_Users
let _start_date = u.Start_Date
let _startDate = _start_date.Day
let _startmonth = _start_date.Month
let _startyear = _start_date.Year
let _starthour = u.Start_Time.Value.Hours
let _startminutes = u.Start_Time.Value.Minutes
let _startseconds = u.Start_Time.Value.Seconds
let _startDateWithTime = EntityFunctions.CreateDateTime(_startyear, _startmonth, _startDate, _starthour, _startminutes, _startseconds)
let _startDateWithZeroTime = EntityFunctions.CreateDateTime(_startyear, _startmonth, _startDate, 0, 0, 0)
let _start_datetime = u.Is_Include_Time ? _startDateWithZeroTime : _startDateWithTime
let _end_date = u.End_Date
let _endDate = _end_date.Day
let _endmonth = _end_date.Month
let _endyear = _end_date.Year
let _endhour = u.End_Time.Value.Hours
let _endminutes = u.End_Time.Value.Minutes
let _endseconds = u.End_Time.Value.Seconds
let _endDateWithTime = EntityFunctions.CreateDateTime(_endyear, _endmonth, _endDate, _endhour, _endminutes, _endseconds)
let _endDateWithZeroTime = EntityFunctions.CreateDateTime(_endyear, _endmonth, _endDate, 0, 0, 0)
let _end_datetime = u.Is_Include_Time ? _endDateWithZeroTime : _endDateWithTime
let _cur_Start_date = u.Is_Include_Time ? _userStartDate : _gMTStartDate
let _cur_End_date = u.Is_Include_Time ? _userEndDate : _gMTEndDate
where u.User_Id == 1 && !u.Is_Deleted
&& _start_datetime >= _cur_Start_date && _end_datetime <= _cur_End_date
select new
{
    u.User_id,
    u.User_Name,
    u.Login_Name,
    u.Email_Address
};
Here _userStartDate, _userEndDate, _gMTStartDate and _gMTEndDate are parameters from my function.
If the column "Is_Include_Time" is true, then I have to include the TimeSpan from the table as well. But for the leap-year date it's throwing the error.
Any suggestions?
I just encountered the same problem. I have some values in database rows that I need to convert to a datetime. I found out that I can use the following construct:
DateTime startDate = new DateTime(1, 1, 1);
var counters = from counter in entities.Counter
               let date = SqlFunctions.DateAdd("day", counter.DayOfMonth - 1,
                              SqlFunctions.DateAdd("month", counter.Month - 1,
                                  SqlFunctions.DateAdd("year", counter.Year - 1, startDate)))
               where date >= dateFrom && date <= dateTo
               orderby date
               select new
               {
                   Value = counter.CounterValue,
                   Date = date
               };
I'm not sure of the performance impact, but it does work.
Best regards,
Tor-Odd
Try to use a declared variable outside the LINQ expression, as I suggested here.
This is the way that I deal with it:
var minDate = Convert.ToDateTime("1900-01-01 00:00:00");
return source.Where(x => EntityFunctions.DiffDays(x.ReviewedDate, minDate) > 0).ToList();
