Setup: Mono 4.5, Linux, F# 4.0, GTK#.
Here's my code, mostly copied from sample snippets:
open System
open Gtk
let (width, height) = (800, 600)
[<EntryPoint>]
let main argv =
    Application.Init ()
    let window = new Window ("helloworld")
    window.SetDefaultSize(width, height)
    window.DeleteEvent.Add(fun e -> window.Hide(); Application.Quit(); e.RetVal <- true)
    let drawing = new Gtk.DrawingArea ()
    drawing.ExposeEvent.Add(fun x ->
        let gc = drawing.Style.BaseGC(StateType.Normal)
        let allocColor (r, g, b) =
            let col = ref (Gdk.Color(r, g, b))
            let _ = gc.Colormap.AllocColor(col, true, true)
            !col
        gc.Foreground <- allocColor (255uy, 0uy, 0uy)
        drawing.GdkWindow.DrawLine(gc, 0, 0, 100, 100)
    )
    window.Add(drawing)
    window.ShowAll()
    window.Show()
    Application.Run ()
    0
It fails to compile with the following error:
The field, constructor or member 'ExposeEvent' is not defined
This turned out to be a GTK 2 -> GTK 3 difference: in GTK 3, DrawingArea emits Drawn (which supplies a Cairo context) rather than ExposeEvent. Here's the updated code:
open System
open Gtk
open Cairo
let (width, height) = (800, 600)
[<EntryPoint>]
let main argv =
    Application.Init ()
    let window = new Window ("helloworld")
    window.SetDefaultSize(width, height)
    window.DeleteEvent.Add(fun e -> window.Hide(); Application.Quit(); e.RetVal <- true)
    let drawing = new Gtk.DrawingArea ()
    // GTK 3: draw in the Drawn handler using the Cairo context it supplies
    drawing.Drawn.Add(fun args ->
        let cr = args.Cr
        cr.MoveTo(0.0, 0.0)
        cr.LineTo(100.0, 100.0)
        cr.LineWidth <- 1.0
        cr.Stroke ()
    )
    window.Add(drawing)
    window.ShowAll()
    window.Show()
    Application.Run ()
    0
I am trying to write a neural network in Rust + ArrayFire. Gradient descent works, but Adam does not.
fn back_propagate(
    &mut self,
    signals: &Vec<Array<f32>>,
    labels: &Array<u8>,
    learning_rate_alpha: f64,
    batch_size: i32,
) {
    let mut output = signals.last().unwrap();
    let mut error = output - labels;
    for layer_index in (0..self.num_layers - 1).rev() {
        let signal = Self::add_bias(&signals[layer_index]);
        let deriv = self.layer_activations[layer_index].apply_deriv(output);
        let delta = &(deriv * error).T();
        let matmul = matmul(&delta, &signal, MatProp::NONE, MatProp::NONE);
        let gradient_t = (matmul / batch_size).T();
        match self.optimizer {
            Optimizer::GradientDescent => {
                let weight_update = learning_rate_alpha * gradient_t;
                self.weights[layer_index] -= weight_update;
            }
            Optimizer::Adam => {
                let exponents = constant(2f32, gradient_t.dims());
                self.first_moment_vectors[layer_index] = (&self.beta1[layer_index]
                    * &self.first_moment_vectors[layer_index])
                    + (&self.one_minus_beta1[layer_index] * &gradient_t);
                self.second_moment_vectors[layer_index] = (&self.beta2[layer_index]
                    * &self.second_moment_vectors[layer_index])
                    + (&self.one_minus_beta2[layer_index]
                        * arrayfire::pow(&gradient_t, &exponents, true));
                let corrected_first_moment_vector = &self.first_moment_vectors[layer_index]
                    / &self.one_minus_beta1[layer_index];
                let corrected_second_moment_vector = &self.second_moment_vectors[layer_index]
                    / &self.one_minus_beta2[layer_index];
                let denominator = sqrt(&corrected_second_moment_vector) + 1e-8;
                let weight_update =
                    learning_rate_alpha * (corrected_first_moment_vector / denominator);
                self.weights[layer_index] -= weight_update;
            }
        }
        output = &signals[layer_index];
        let err = matmulTT(
            &delta,
            &self.weights[layer_index],
            MatProp::NONE,
            MatProp::NONE,
        );
        error = index(&err, &[seq!(), seq!(1, output.dims()[1] as i32, 1)]);
    }
}
I've stored beta1, beta2, 1-beta1, 1-beta2 in constant arrays for every layer just to avoid having to recompute them. It appears to have made no difference.
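For reference, the standard Adam update (Kingma & Ba, 2014) is, per parameter, with g_t the gradient at step t:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \alpha \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

Note that the bias-correction denominators use \beta_1^t and \beta_2^t (the decay rates raised to the current step count), and the paper's suggested defaults are \alpha = 0.001, \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8}.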
GradientDescent converges with a learning rate of alpha=2.0; with Adam, however, if I use alpha > ~0.02 the network appears to get locked in. Funnily enough, if I remove all the hidden layers it does work, which tells me something, but I'm not sure what.
I figured it out. For anyone else: my alpha=0.01 was still too high; once I reduced it to 0.001, it converged very quickly.
I have a rusoto_core::ByteStream which implements futures' Stream trait:
let chunks = vec![b"1234".to_vec(), b"5678".to_vec()];
let stream = ByteStream::new(stream::iter_ok(chunks));
I'd like to pass it to actix_web's HttpResponseBuilder::streaming method.
use actix_web::dev::HttpResponseBuilder; // 0.7.18
use rusoto_core::ByteStream; // 0.36.0
fn example(stream: ByteStream, builder: HttpResponseBuilder) {
    builder.streaming(stream);
}
When I try to do it I receive the following error:
error[E0271]: type mismatch resolving `<rusoto_core::stream::ByteStream as futures::stream::Stream>::Item == bytes::bytes::Bytes`
--> src/main.rs:5:13
|
5 | builder.streaming(stream);
| ^^^^^^^^^ expected struct `std::vec::Vec`, found struct `bytes::bytes::Bytes`
|
= note: expected type `std::vec::Vec<u8>`
found type `bytes::bytes::Bytes`
I believe the reason is that streaming() expects an S: Stream<Item = Bytes, Error> (i.e., Item = Bytes), but my ByteStream has Item = Vec<u8>. How can I fix it?
I think the solution is to flatmap my ByteStream somehow but I couldn't find such a method for streams.
Here's an example of how streaming() can be used:
let text = "123";
let (tx, rx_body) = mpsc::unbounded();
let _ = tx.unbounded_send(Bytes::from(text.as_bytes()));
HttpResponse::Ok()
    .streaming(rx_body.map_err(|e| error::ErrorBadRequest("bad request")))
How can I flatmap streams in Rust?
A flat map converts an iterator of iterators into a single iterator (or a stream of streams into a single stream).
Futures 0.3
Futures 0.3 doesn't have a direct flat map, but it does have StreamExt::flatten, which can be used after a StreamExt::map.
use futures::{stream, Stream, StreamExt}; // 0.3.1

fn into_many(i: i32) -> impl Stream<Item = i32> {
    stream::iter(0..i)
}

fn nested() -> impl Stream<Item = i32> {
    let stream_of_number = into_many(5);
    let stream_of_stream_of_number = stream_of_number.map(into_many);
    let flat_stream_of_number = stream_of_stream_of_number.flatten();
    // Returns: 0, 0, 1, 0, 1, 2, 0, 1, 2, 3
    flat_stream_of_number
}
Futures 0.1
Futures 0.1 doesn't have a direct flat map, but it does have Stream::flatten, which can be used after a Stream::map.
use futures::{stream, Stream}; // 0.1.25

fn into_many(i: i32) -> impl Stream<Item = i32, Error = ()> {
    stream::iter_ok(0..i)
}

fn nested() -> impl Stream<Item = i32, Error = ()> {
    let stream_of_number = into_many(5);
    let stream_of_stream_of_number = stream_of_number.map(into_many);
    let flat_stream_of_number = stream_of_stream_of_number.flatten();
    // Returns: 0, 0, 1, 0, 1, 2, 0, 1, 2, 3
    flat_stream_of_number
}
However, this doesn't solve your problem.
streaming() expects a S: Stream<Item = Bytes, Error> (i.e., Item = Bytes) but my ByteStream has Item = Vec<u8>
Yes, this is the problem. Use Bytes::from via Stream::map to convert your stream Item from one type to another:
use bytes::Bytes; // 0.4.11
use futures::Stream; // 0.1.25

fn example(stream: ByteStream, mut builder: HttpResponseBuilder) {
    builder.streaming(stream.map(Bytes::from));
}
I'm attempting some Newton-Raphson updates. Here is a piece of code that compiles and runs (warning: infinite loop).
let thetam = [|beta; sigSq|] |> DenseVector
let mutable gm = grad yt xt betah sigSqh // returns DenseVector
let hm = hess yt xt betah sigSqh // returns Matrix<float>

while gm*gm > 0.0001 do
    gm <- grad yt xt betah sigSqh
    thetam - (hess yt xt betah sigSqh).Inverse() * gm // unassigned, compiles
However, as soon as I assign the last value to the mutable variable thetam as follows...
while gm*gm > 0.0001 do
    gm <- grad yt xt betah sigSqh
    thetam <- thetam - (hess yt xt betah sigSqh).Inverse() * gm // gm here has problems
a squiggly red line appears under gm and the compiler complains: The type 'Vector<float>' is not compatible with the type 'DenseVector'
However, the function grad is explicitly told to return a DenseVector and ordinarily works as expected.
let grad (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let gradBeta = (yt - beta * xt)*xt / sigSq
    let gradSigSq = -0.5*T/sigSq + 0.5/sigSq**2.*(yt - beta * xt)*(yt - beta * xt)
    [|gradBeta; gradSigSq|] |> DenseVector
Why is the assignment to thetam causing problems? Is there a magic way to perform updates without mutability?
Here is the complete script:
open System
open System.IO
open System.Windows.Forms
open System.Windows.Forms.DataVisualization
open FSharp.Data
open FSharp.Charting
open FSharp.Core.Operators
open MathNet.Numerics
open MathNet.Numerics.LinearAlgebra
open MathNet.Numerics.LinearAlgebra.Double
open MathNet.Numerics.Random
open MathNet.Numerics.Distributions
open MathNet.Numerics.Statistics
let beta, sigSq = 3., 9.
let xt = DenseVector [|23.; 78.; 43.; 32.; 90.; 66.; 89.; 34.; 72.; 99.|]
let T = xt.Count
let genProc () =
    beta * xt + DenseVector [|for i in 1 .. T do yield Normal.Sample(0., Math.Sqrt(sigSq))|]

let llNormal (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let z = (yt - beta * xt) / Math.Sqrt(sigSq)
    -0.5 * log (2. * Math.PI) - 0.5 * log (sigSq) - z*z/2./T/sigSq

let grad (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let gradBeta = (yt - beta * xt)*xt / sigSq
    let gradSigSq = -0.5*T/sigSq + 0.5/sigSq**2.*(yt - beta * xt)*(yt - beta * xt)
    [|gradBeta; gradSigSq|] |> DenseVector

let hess (yt : Vector<float>) (xt : Vector<float>) (beta : float) (sigSq : float) =
    let T = (float yt.Count)
    let z = yt - beta * xt
    let h11 = -xt*xt/sigSq
    let h22 = T*0.5/sigSq/sigSq - z*z/sigSq/sigSq/sigSq
    let h12 = -1./sigSq**2.*((yt - beta * xt)*xt)
    array2D [[h11;h12];[h12;h22]] |> DenseMatrix.ofArray2
let yt = genProc()
// until convergence
let mutable thetam = [|beta; sigSq|] |> DenseVector
let mutable gm = grad yt xt beta sigSq
while gm*gm > 0.0001 do
    gm <- grad yt xt beta sigSq
    // 'gm' here is complaining upon equation being assigned to thetam
    thetam <- thetam - (hess yt xt beta sigSq).Inverse() * gm
You should change at least let mutable thetam = [|beta; sigSq|] |> DenseVector to
let mutable thetam = [|beta; sigSq|] |> DenseVector.ofArray (and possibly the other DenseVector references). For performance reasons, MathNet sometimes binds directly to the underlying storage rather than copying, which can trip you up when you use mutable references:
DenseVector(Double[] storage)
Create a new dense vector directly binding to a raw array. The array
is used directly without copying. Very efficient, but changes to the
array and the vector will affect each other.
Versus:
DenseVector OfArray(Double[] array)
Create a new dense vector as a copy of the given array. This new
vector will be independent from the array. A new memory block will be
allocated for storing the vector.
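A quick way to see the difference between the two (array values chosen just for illustration):

let storage = [| 1.0; 2.0 |]
let aliased = DenseVector(storage)         // binds directly to storage, no copy
let copied = DenseVector.OfArray(storage)  // independent copy of storage
storage.[0] <- 99.0
printfn "%f %f" aliased.[0] copied.[0]     // prints 99.000000 1.000000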
In fact we've seen this behavior in your previous question when Exponential.Samples behaved in a similar fashion.
The API docs (while not super user-friendly) are here.
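On the side question about avoiding mutability: a recursive loop does the same job with no mutable bindings at all. A minimal sketch, assuming the grad and hess functions from the script above and carrying the accumulator as Vector<float> so it unifies with the type of the subtraction (newtonLoop is just an illustrative name):

// Recursively apply the Newton-Raphson step until the gradient norm is small
let rec newtonLoop (theta : Vector<float>) =
    let g = grad yt xt theta.[0] theta.[1]
    if g * g <= 0.0001 then theta
    else newtonLoop (theta - (hess yt xt theta.[0] theta.[1]).Inverse() * g)

let thetaHat = newtonLoop (DenseVector [|beta; sigSq|] :> Vector<float>)

Unlike the while loop in the script, this re-evaluates grad and hess at the current theta on every step, which is what the Newton-Raphson update needs anyway.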
I am using the F# skeleton tracking template provided by KinectContrib. The template in C# that does the same thing works so I know the hardware is OK.
I am using Windows Kinect SDK v1.8.
The program will track every once in a while, but with no consistent pattern. I have been playing with the code since last night, so I am looking for someone to confirm the same behavior on another system, or for any pointers on how to change the code.
Thanks in advance.
This is the template code:
#light
open System
open System.Windows
open System.Windows.Media.Imaging
open Microsoft.Kinect
open System.Diagnostics
let sensor = KinectSensor.KinectSensors.[0]
//The main canvas that is handling the ellipses
let canvas = new System.Windows.Controls.Canvas()
canvas.Background <- System.Windows.Media.Brushes.Transparent
let ds : byte = Convert.ToByte(1)
let dummySkeleton : Skeleton = new Skeleton(TrackingState = SkeletonTrackingState.Tracked)
// Thanks to Richard Minerich (#rickasaurus) for helping me figure out
// some array concepts in F#.
let mutable pixelData : byte array = [| |]
let mutable skeletons : Skeleton array = [| |]
//Right hand ellipse
let rhEllipse = new System.Windows.Shapes.Ellipse()
rhEllipse.Height <- 20.0
rhEllipse.Width <- 20.0
rhEllipse.Fill <- System.Windows.Media.Brushes.Red
rhEllipse.Stroke <- System.Windows.Media.Brushes.White
//Left hand ellipse
let lhEllipse = new System.Windows.Shapes.Ellipse()
lhEllipse.Height <- 20.0
lhEllipse.Width <- 20.0
lhEllipse.Fill <- System.Windows.Media.Brushes.Red
lhEllipse.Stroke <- System.Windows.Media.Brushes.White
//Head ellipse
let hEllipse = new System.Windows.Shapes.Ellipse()
hEllipse.Height <- 20.0
hEllipse.Width <- 20.0
hEllipse.Fill <- System.Windows.Media.Brushes.Red
hEllipse.Stroke <- System.Windows.Media.Brushes.White
canvas.Children.Add(rhEllipse) |> ignore
canvas.Children.Add(lhEllipse) |> ignore
canvas.Children.Add(hEllipse) |> ignore
let grid = new System.Windows.Controls.Grid()
let winImage = new System.Windows.Controls.Image()
winImage.Height <- 600.0
winImage.Width <- 800.0
grid.Children.Add(winImage) |> ignore
grid.Children.Add(canvas) |> ignore
//Video frame is ready to be processed.
let VideoFrameReady (sender : obj) (args: ColorImageFrameReadyEventArgs) =
    let receivedData = ref false
    using (args.OpenColorImageFrame()) (fun r ->
        if (r <> null) then
            (
                pixelData <- Array.create r.PixelDataLength ds
                //Array.Resize(ref pixelData, r.PixelDataLength)
                r.CopyPixelDataTo(pixelData)
                receivedData := true
            )
        if (receivedData <> ref false) then
            (
                winImage.Source <- BitmapSource.Create(640, 480, 96.0, 96.0, Media.PixelFormats.Bgr32, null, pixelData, 640 * 4)
            )
    )
//Required to correlate the skeleton data to the PC screen
//IMPORTANT NOTE: Code for vector scaling was imported from the Coding4Fun Kinect Toolkit
//available here: http://c4fkinect.codeplex.com/
//I only used this part to avoid adding an extra reference.
let ScaleVector (length : float32, position : float32) =
    let value = (((length / 1.0f) / 2.0f) * position) + (length / 2.0f)
    if value > length then
        length
    elif value < 0.0f then
        0.0f
    else
        value
//This will set the ellipse positions depending on the passed instance and joint
let SetEllipsePosition (ellipse : System.Windows.Shapes.Ellipse, joint : Joint) =
    let vector = new Microsoft.Kinect.SkeletonPoint(X = ScaleVector(640.0f, joint.Position.X), Y = ScaleVector(480.0f, -joint.Position.Y), Z = joint.Position.Z)
    let mutable uJoint = joint
    uJoint.TrackingState <- JointTrackingState.Tracked
    uJoint.Position <- vector
    System.Windows.Controls.Canvas.SetLeft(ellipse, (float uJoint.Position.X))
    System.Windows.Controls.Canvas.SetTop(ellipse, (float uJoint.Position.Y))
//Triggered when a new skeleton frame is ready for processing
let SkeletonFrameReady (sender : obj) (args: SkeletonFrameReadyEventArgs) =
    let receivedData = ref false
    using (args.OpenSkeletonFrame()) (fun r ->
        if (r <> null) then
            (
                skeletons <- Array.create r.SkeletonArrayLength dummySkeleton
                r.CopySkeletonDataTo(skeletons)
                for i in skeletons do
                    Debug.WriteLine(i.TrackingState.ToString())
                receivedData := true
            )
        if (receivedData <> ref false) then
            (
                for i in skeletons do
                    if i.TrackingState <> SkeletonTrackingState.NotTracked then
                        (
                            let currentSkeleton = i
                            SetEllipsePosition(hEllipse, currentSkeleton.Joints.[JointType.Head])
                            SetEllipsePosition(lhEllipse, currentSkeleton.Joints.[JointType.HandLeft])
                            SetEllipsePosition(rhEllipse, currentSkeleton.Joints.[JointType.HandRight])
                        )
            )
    )
let WindowLoaded (sender : obj) (args: RoutedEventArgs) =
    sensor.Start()
    sensor.ColorStream.Enable()
    sensor.SkeletonStream.Enable()
    sensor.ColorFrameReady.AddHandler(new EventHandler<ColorImageFrameReadyEventArgs>(VideoFrameReady))
    sensor.SkeletonFrameReady.AddHandler(new EventHandler<SkeletonFrameReadyEventArgs>(SkeletonFrameReady))

let WindowUnloaded (sender : obj) (args: RoutedEventArgs) =
    sensor.Stop()
//Defining the structure of the test window
let window = new Window()
window.Width <- 800.0
window.Height <- 600.0
window.Title <- "Kinect Skeleton Application"
window.Loaded.AddHandler(new RoutedEventHandler(WindowLoaded))
window.Unloaded.AddHandler(new RoutedEventHandler(WindowUnloaded))
window.Content <- grid
window.Show()
[<STAThread()>]
do
    let app = new Application() in
    app.Run(window) |> ignore
I ended up rewriting it based on this post http://channel9.msdn.com/coding4fun/kinect/Kinecting-with-F and the skeleton tracking is now working. I'm still interested in why the original code doesn't work, though.
// Learn more about F# at http://fsharp.net
#light
open System
open System.Windows
open System.Windows.Media.Imaging
open System.Windows.Threading
open Microsoft.Kinect
open System.Diagnostics
[<STAThread>]
do
    let sensor = KinectSensor.KinectSensors.[0]
    sensor.SkeletonStream.Enable()
    sensor.Start()

    // Set-up the WPF window and its contents
    let width = 1024.
    let height = 768.
    let w = Window(Width=width, Height=height)
    let g = Controls.Grid()
    let c = Controls.Canvas()
    let hd = Shapes.Rectangle(Fill=Media.Brushes.Red, Width=15., Height=15.)
    let rh = Shapes.Rectangle(Fill=Media.Brushes.Blue, Width=15., Height=15.)
    let lh = Shapes.Rectangle(Fill=Media.Brushes.Green, Width=15., Height=15.)
    ignore <| c.Children.Add hd
    ignore <| c.Children.Add rh
    ignore <| c.Children.Add lh
    ignore <| g.Children.Add c
    w.Content <- g
    w.Unloaded.Add(fun args -> sensor.Stop())

    let getDisplayPosition w h (joint : Joint) =
        let x = ((w * (float)joint.Position.X + 2.0) / 4.0) + (w/2.0)
        let y = ((h * -(float)joint.Position.Y + 2.0) / 4.0) + (h/2.0)
        System.Console.WriteLine("X:" + x.ToString() + " Y:" + y.ToString())
        new Point(x,y)

    let draw (joint : Joint) (sh : System.Windows.Shapes.Shape) =
        let p = getDisplayPosition width height joint
        sh.Dispatcher.Invoke(DispatcherPriority.Render, Action(fun () -> System.Windows.Controls.Canvas.SetLeft(sh, p.X))) |> ignore
        sh.Dispatcher.Invoke(DispatcherPriority.Render, Action(fun () -> System.Windows.Controls.Canvas.SetTop(sh, p.Y))) |> ignore

    let drawJoints (sk : Skeleton) =
        draw (sk.Joints.Item(JointType.Head)) hd
        draw (sk.Joints.Item(JointType.WristRight)) rh
        draw (sk.Joints.Item(JointType.WristLeft)) lh

    let skeleton (sensor : KinectSensor) =
        let rec loop () =
            async {
                let! args = Async.AwaitEvent sensor.SkeletonFrameReady
                use frame = args.OpenSkeletonFrame()
                let skeletons : Skeleton[] = Array.zeroCreate(frame.SkeletonArrayLength)
                frame.CopySkeletonDataTo(skeletons)
                skeletons
                |> Seq.filter (fun s -> s.TrackingState <> SkeletonTrackingState.NotTracked)
                |> Seq.iter (fun s -> drawJoints s)
                return! loop ()
            }
        loop ()

    skeleton sensor |> Async.Start

    let a = Application()
    ignore <| a.Run(w)
In F#, any value bindings (e.g., let or do) you declare within a module itself will be executed the first time the module is accessed from other code. If you're familiar with C#, you can think of these value bindings as executing within a type constructor (i.e., a static constructor).
I suspect the reason the second version of your code works, but not the first, is that in the second version, you're creating the Window and drawing the shapes into it from within the STA thread running the application's message loop. In the first version, I'd guess that code is executing on some other thread, and that's why it isn't working as expected.
There's nothing wrong with the second version of your code, but a more canonical F# approach would be to lift your functions (getDisplayPosition, draw, etc.) out of the top-level do binding. That makes the code a bit easier to read by making it obvious that those functions aren't capturing any of the local values created within the do; a rough sketch follows.
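For illustration only, the lifted version might look roughly like this; the helper bodies are the ones from your working version, and the skeleton-reading loop is elided with a comment:

open System
open System.Windows
open System.Windows.Threading
open Microsoft.Kinect

// Module-level helpers take everything they need as parameters,
// so they no longer close over values created inside the do block.
let getDisplayPosition (w : float) (h : float) (joint : Joint) =
    let x = ((w * float joint.Position.X + 2.0) / 4.0) + (w / 2.0)
    let y = ((h * -(float joint.Position.Y) + 2.0) / 4.0) + (h / 2.0)
    Point(x, y)

let draw (w : float) (h : float) (joint : Joint) (sh : Shapes.Shape) =
    let p = getDisplayPosition w h joint
    sh.Dispatcher.Invoke(DispatcherPriority.Render, Action(fun () -> Controls.Canvas.SetLeft(sh, p.X))) |> ignore
    sh.Dispatcher.Invoke(DispatcherPriority.Render, Action(fun () -> Controls.Canvas.SetTop(sh, p.Y))) |> ignore

[<STAThread>]
do
    let width, height = 1024., 768.
    let sensor = KinectSensor.KinectSensors.[0]
    sensor.SkeletonStream.Enable()
    sensor.Start()
    let w = Window(Width = width, Height = height)
    let c = Controls.Canvas()
    let hd = Shapes.Rectangle(Fill = Media.Brushes.Red, Width = 15., Height = 15.)
    ignore <| c.Children.Add hd
    w.Content <- c
    w.Unloaded.Add(fun _ -> sensor.Stop())
    // ... the async skeleton loop from the working version goes here,
    //     calling e.g. draw width height (sk.Joints.Item JointType.Head) hd
    Application().Run(w) |> ignore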
I want to use FSharpChart with an .fsx script file in my project. I downloaded and referenced MSDN.FSharpChart.dll using NuGet, and my code looks like this:
#r @"..\packages\MSDN.FSharpChart.dll.0.60\lib\MSDN.FSharpChart.dll"
open System.Drawing
open MSDN.FSharp.Charting

[for x in 0.0 .. 0.1 .. 6.0 -> sin x + cos (2.0 * x)]
|> FSharpChart.Line
The path is correct because VS 2012 offers me IntelliSense and knows the MSDN.FSharp namespace. The problem is that when I run this script in FSI, nothing is shown.
What is wrong?
In order to make your chart show up from FSI, you should preload FSharpChart.fsx into your FSI session, as in the snippet below:
#load @"<your path here>\FSharpChart.fsx"

[for x in 0.0 .. 0.1 .. 6.0 -> sin x + cos (2.0 * x)]
|> MSDN.FSharp.Charting.FSharpChart.Line;;
UPDATE 08/23/2012:
For comparison, visualizing the same chart using MSDN.FSharpChart.dll directly would require some WinForms plumbing:
#r @"<your path here>\MSDN.FSharpChart.dll"
#r "System.Windows.Forms.DataVisualization.dll"

open System.Windows.Forms
open MSDN.FSharp.Charting

let myChart = [for x in 0.0 .. 0.1 .. 6.0 -> sin x + cos (2.0 * x)]
              |> FSharpChart.Line
let form = new Form(Visible = true, TopMost = true, Width = 700, Height = 500)
let ctl = new ChartControl(myChart, Dock = DockStyle.Fill)
form.Controls.Add(ctl)