I'm trying to use the following F# code to access a Xaml control.
let (?) (source: obj) (s: string) =
    match source with
    | :? ResourceDictionary as r -> r.[s] :?> 'T
    | :? Control as source ->
        match source.FindName(s) with
        | null -> invalidOp (sprintf "dynamic lookup of Xaml component %s failed" s)
        | :? 'T as x -> x
        | other -> invalidOp (sprintf "dynamic lookup of Xaml component %s failed because the component found was of type %A instead of type %A" s (other.GetType()) typeof<'T>)
    | _ -> invalidOp (sprintf "dynamic lookup of Xaml component %s failed because the source object was of type %A. It must be a control or a resource dictionary" s (source.GetType()))
This is from Daniel Mohl's excellent F# for Windows Phone template.
I've created a class to basically read the accelerometer and trigger an event when the phone is shaken. The event gets raised as expected but for some reason it's spawned in a second thread--which leads the CLR to throw an "invalid cross-thread access" exception when the event handler attempts to execute this code. The exception is thrown on the source.FindName(s) call. I can see a second thread of execution--which kind of surprises me because I didn't specifically spawn a secondary thread. I mean I didn't explicitly call async or do anything else that I can think of that would cause a secondary thread of execution to start.
So it seems like there are a few approaches I could take:

1. I could try to figure out why I'm spawning a secondary thread when I haven't specifically requested it, and modify the code to prevent a secondary thread of execution from being spawned.
2. I could try to modify the (?) function to account for multiple threads of execution, but I'd really like to understand why I'm getting a second thread started.
I think probably the second approach is best but I'd really like to understand what I'm doing that's causing a secondary thread to spawn. I realize how hard that is to answer without specific code but I don't mind researching this if someone can point me in the right direction. I believe this has something to do with the Windows Phone 7 platform because the code is, as far as I can tell, pretty much the idiomatic way to bind a Xaml control with F# code.
Any thoughts, comments, suggestions would be greatly appreciated.
Cross-posted to HubFS as well
Events in WP7 are generally delivered on async callbacks, and accessing the accelerometer is no exception.
You'll need to direct any code that results in a UI update to the dispatcher.
In C# this can be done like this:
Dispatcher.BeginInvoke( () => { /* your UI code */ } );
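In F# the same thing can be wrapped in a small helper. Here's a minimal sketch (the runOnUiThread name is mine, and it assumes the Silverlight/WP7 Deployment.Current.Dispatcher, which is reachable from any thread):

open System.Windows

// Marshal a piece of work onto the UI thread. Call this from the shake event
// handler before touching any XAML control (e.g. before using the (?) operator).
let runOnUiThread (uiWork: unit -> unit) =
    Deployment.Current.Dispatcher.BeginInvoke(System.Action(uiWork)) |> ignore

The shake handler would then call runOnUiThread (fun () -> (* UI code here *) ()) instead of touching the controls directly from the callback thread.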
The approach for sending results to the Dispatcher used in this post may also be helpful for you in F#, as it's more of a functional style than an imperative one, thanks to using Rx.
WP7 Code: Using the Accelerometer API - Dragos Manolescu's (work) blog
In the wonderful FBlazorShop repo, Onur Gumus is riffing off of Steve Sanderson’s Pizza Workshop with F# flavor. On line 128 of blob/master/FBlazorShop.Web.BlazorClient/Home/Home.fs [GitHub], Onur is passing an Elmish Message for the parent, HomeView, inheriting ElmishComponent<Model, Message>, to a child, PizzaConfigView, inheriting ElmishComponent<Model, PizzaConfigMsg>. By convention, we can see Message being converted (?) to PizzaConfigMsg with this:
(PizzaConfigMsg >> dispatch)
where dispatch is of type Message -> unit. At the time of this writing, I have no idea how this ‘conversion’ is happening (in part because I refuse to compile this repo by going back to .NET core 3.x). I am not familiar with this usage of the >> operator. Is this operation actually a conversion or is something else going on?
If you find >> confusing, but are comfortable with |>, then you can easily rewrite that line like this:
fun pizzaMsg -> pizzaMsg |> PizzaConfigMsg |> dispatch
The top-level message type is:
type Message =
    | SpecialsReceived of PizzaSpecial list
    | PizzaConfigMsg of PizzaConfigMsg
    | OrderMsg of OrderMsg
    | CheckoutRequested of Order
This tells us that the code is converting a value of type PizzaConfigMsg (which I've called pizzaMsg above) to a top-level Message via the Message.PizzaConfigMsg union case, then dispatching the result.
This style of coding (where function arguments become implicit, rather than explicit) is called "point-free programming". You can find more information about it here.
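As a standalone illustration of what >> is doing there, here is a minimal sketch with simplified stand-in types and a fake dispatch (not the repo's actual definitions):

type PizzaConfigMsg = ConfigDone | ConfigCancelled

type Message =
    | PizzaConfigMsg of PizzaConfigMsg
    | OtherMsg

// Stand-in for Elmish's dispatch: just print the top-level message it receives.
let dispatch (msg: Message) = printfn "%A" msg

// Point-free: compose the PizzaConfigMsg case constructor with dispatch...
let handler1 : PizzaConfigMsg -> unit = PizzaConfigMsg >> dispatch

// ...which behaves exactly like the explicit lambda:
let handler2 = fun pizzaMsg -> pizzaMsg |> PizzaConfigMsg |> dispatch

handler1 ConfigCancelled   // prints: PizzaConfigMsg ConfigCancelled

So there is no conversion magic: PizzaConfigMsg is just a function (the union case constructor) of type PizzaConfigMsg -> Message, and >> glues it onto dispatch.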
I'm starting to use Common Test as my test framework in Erlang.
Suppose that I have a function that I expect to accept only positive numbers, and that should blow up in any other case.
positive_number(X) when X > 0 -> {positive, X}.
and I want to test that
positive_number(-5).
will not complete successfully.
How do I test this behaviour? In other languages I would say that the test case expects some error or exception, and fails if the function under test doesn't throw any error for the invalid parameter. How do I do that with Common Test?
Update:
I can make it work with
test_credit_invalid_input(_) ->
    InvalidArgument = -1,
    try mypackage:positive_number(InvalidArgument) of
        _ -> ct:fail(not_failing_as_expected)
    catch
        error:function_clause -> ok
    end.
but I think this is too verbose, I would like something like:
assert_error(mypackage:positive_number, [-1], error:function_clause)
I assume that Common Test has this somewhere, and that it's my lack of proper knowledge of the framework that is making me take such a verbose solution.
Update:
Inspired by Michael's response I created the following function:
assert_fail(Fun, Args, ExceptionType, ExceptionValue, Reason) ->
    try apply(Fun, Args) of
        _ -> ct:fail(Reason)
    catch
        ExceptionType:ExceptionValue -> ok
    end.
and my test became:
test_credit_invalid_input(_) ->
    InvalidArgument = -1,
    assert_fail(fun mypackage:positive_number/1,
                [InvalidArgument],
                error,
                function_clause,
                failed_to_catch_invalid_argument).
but I think all it really buys me is that it is a little bit more readable to have the assert_fail call than to have the try...catch in every test case.
I still think that a better implementation should exist in Common Test; IMO it is ugly to have this test pattern reimplemented in every project.
Convert the exception to an expression and match it:
test_credit_invalid_input(_) ->
    InvalidArgument = -1,
    {'EXIT', {function_clause, _}}
        = (catch mypackage:positive_number(InvalidArgument)).
That turns your exception into a non-exception and vice versa, in probably about as terse a fashion as you can expect.
You could always use a macro or function too though to hide the verbosity.
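For example, here is a minimal sketch of a suite-local macro (the ?assertFail name is my own invention, not something Common Test ships; if you don't mind pulling in EUnit's header, its ?assertError macro from eunit/include/eunit.hrl covers the error class in one line):

%% Expands to the same try/catch pattern as above: fail the test case unless
%% Expr raises an exception matching Class:Reason.
-define(assertFail(Class, Reason, Expr),
        (fun() ->
             try (Expr) of
                 Value -> ct:fail({expected_exception, {Class, Reason},
                                   {got_value, Value}})
             catch
                 Class:Reason -> ok
             end
         end)()).

%% Usage in a test case:
test_credit_invalid_input(_) ->
    ?assertFail(error, function_clause, mypackage:positive_number(-1)).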
Edit (re: comment):
If you use a full try catch construct, as in your question, you lose information about any failure case, because that information is thrown away in favour of the atom 'not_failing_as_expected' or 'failed_to_catch_invalid_argument'.
Trying an expected fail value on the shell:
1> {'EXIT', {function_clause, _}}
   = (catch mypackage:positive_number(-4)).
{'EXIT',{function_clause,[{mypackage,positive_number,
                           [-4],
                           [{file,"mypackage.erl"},{line,7}]},
                          {erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,661}]},
                          {erl_eval,expr,5,[{file,"erl_eval.erl"},{line,434}]},
                          {erl_eval,expr,5,[{file,"erl_eval.erl"},{line,441}]},
                          {shell,exprs,7,[{file,"shell.erl"},{line,676}]},
                          {shell,eval_exprs,7,[{file,"shell.erl"},{line,631}]},
                          {shell,eval_loop,3,[{file,"shell.erl"},{line,616}]}]}}
Trying an expected success value on the shell:
2> {'EXIT', {function_clause, _}} = (catch mypackage:positive_number(3)).
** exception error: no match of right hand side value {positive,3}
In both cases you get a lot of information, but crucially, from both of them you can tell what parameters were used to call your function under test (albeit in the second case this is only because your function is determinate and one to one).
In a simple case like this these things don't matter so much, but as a matter of principle it's important, because later, with more complex cases where perhaps the value that fails your function is not hard coded as it is here, you may not know what value caused your function to fail, or what the return value was. This might then be the difference between looking at an ugly backtrace for a moment or two and realising exactly what the problem is, or spending 15 minutes setting up a test to really find out what's going on... OR, worse still, if it's a heisenbug you might spend hours looking for it!
I'm just learning F#, and setting up a FAKE build harness for a hello-world-like application. (Though the phrase "Hell world" does occasionally come to mind... :-) I'm using a Mac and emacs (generally trying to avoid GUI IDEs by preference).
After a bit of fiddling about with documentation, here's how I'm invoking the F# compiler via FAKE:
let buildDir = @"./build-app/"            // Where application build products go
Target "CompileApp" (fun _ ->             // Compile application source code
    !! @"src/app/**/*.fs"                 // Look for F# source files
    |> Seq.toList                         // Convert FileIncludes to string list
    |> Fsc (fun p ->                      // which is what the Fsc task wants
        { p with                          //
            FscTarget = Exe               //
            Platform = AnyCpu             //
            Output = (buildDir + "hello-fsharp.exe") })   // *** Writing to . instead of buildDir?
)                                         //
That uses !! to make a FileIncludes of all the sources in the usual way, then uses Seq.toList to change that to a string list of filenames, which is then handed off to the Fsc task. Simple enough, and it even seems to work:
...
Starting Target: CompileApp (==> SetVersions)
FSC with args:[|"-o"; "./build-app/hello-fsharp.exe"; "--target:exe"; "--platform:anycpu";
"/Users/sgr/Documents/laboratory/hello-fsharp/src/app/hello-fsharp.fs"|]
Finished Target: CompileApp
...
However, despite what the console output above says, the actual build products go to the top-level directory, not the build directory. The message above looks like the -o argument is being passed to the compiler with an appropriate filename, but the executable gets put in . instead of ./build-app/.
So, 2 questions:
Is this a reasonable way to be invoking the F# compiler in a FAKE build harness?
What am I misunderstanding that is causing the build products to go to the wrong place?
This, or a very similar problem, was reported in FAKE issue #521 and seems to have been fixed in FAKE pull request #601, which see.
Explanation of the Problem
As is apparently well-known to everyone but me, the F# compiler as implemented in FSharp.Compiler.Service has a practice of skipping its first argument. See FSharp.Compiler.Service/tests/service/FscTests.fs around line 127, where we see the following nicely informative comment:
// fsc parser skips the first argument by default;
// perhaps this shouldn't happen in library code.
Whether it should or should not happen, it's what does happen. Since the -o came first in the arguments generated by FscHelper, it was dutifully ignored (along with its argument, apparently). Thus the assembly went to the default place, not the place specified.
Solutions
The temporary workaround was to specify --out:destinationFile in the OtherParams field of the FscParams setter in addition to the Output field; the latter is the sacrificial lamb to be ignored while the former gets the job done.
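Concretely, in the Target from the question, the stop-gap looked something like this (a sketch; I'm assuming OtherParams takes a list of raw compiler arguments, which is how this workaround was applied):

    |> Fsc (fun p ->
        { p with
            FscTarget = Exe
            Platform = AnyCpu
            // Output is the argument the buggy FscHelper silently drops...
            Output = (buildDir + "hello-fsharp.exe")
            // ...so also pass --out explicitly; this one actually takes effect.
            OtherParams = [ "--out:" + buildDir + "hello-fsharp.exe" ] })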
The longer-term solution is to fix the arguments generated by FscHelper to have an extra throwaway argument at the front; then these 2 problems will annihilate in a puff of greasy black smoke. (It's kind of balletic in its beauty, when you think about it.) This is exactly what was just merged into master by @forki23:
// Always prepend "fsc.exe" since fsc compiler skips the first argument
let optsArr = Array.append [|"fsc.exe"|] optsArr
So that solution should be in the newest version of FAKE (3.11.0).
The answers to my 2 questions are thus:
Yes, this appears to be a reasonable way to invoke the F# compiler.
I didn't misunderstand anything; it was just a bug and a fix is in the pipeline.
More to the point: the actual misunderstanding was that I should have checked the FAKE issues and pull requests to see if anybody else had reported this sort of thing, and that's what I'll do next time.
Mainly I want to know if I can send a function in a message in a distributed Erlang setup.
On Machine 1:
F1 = fun() -> hey end,
gen_server:call(on_other_machine,F1)
On Machine 2:
handle_call(Function, From, State) ->
    {reply, Function(), State}.
Does it make sense?
Here's an interesting article about "passing fun's to other Erlang nodes". To summarize it briefly:
[...] As you might know, Erlang distribution works by sending the binary encoding of terms; and so sending a fun is also essentially done by encoding it using erlang:term_to_binary/1; passing the resulting binary to another node, and then decoding it again using erlang:binary_to_term/1. [...] This is pretty obvious for most data types; but how does it work for function objects?
When you encode a fun, what is encoded is just a reference to the function, not the function implementation.
[...]
[...] the definition of the function is not passed along; just exactly enough information to recreate the fun at an other node if the module is there.
[...] If the module containing the fun has not yet been loaded, and the target node is running in interactive mode; then the module is attempted loaded using the regular module loading mechanism (contained in the module error_handler); and then it tries to see if a fun with the given id is available in said module. However, this only happens lazily when you try to apply the function.
[...] If you never attempt to apply the function, then nothing bad happens. The fun can be passed to another node (which has the module/fun in question) and then everybody is happy.
Maybe the target node has a module loaded of said name, but perhaps in a different version; which would then be very likely to have a different MD5 checksum, then you get the error badfun if you try to apply it.
I would suggest you read the whole article, because it's extremely interesting.
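To make the quoted explanation concrete, here is a tiny sketch of the round trip the article describes, runnable on a single node (where the lists module is obviously available):

demo() ->
    F   = fun lists:reverse/1,    % an external fun: just a reference to lists:reverse/1
    Bin = term_to_binary(F),      % this is what distribution actually ships between nodes
    F2  = binary_to_term(Bin),    % the receiver rebuilds the reference, not the code
    [3,2,1] = F2([1,2,3]),        % works because the 'lists' module exists here too
    ok.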
You can send any valid Erlang term. Although you have to be careful when sending funs. Any fun referencing a function inside a module needs that module to exist on the target node to work:
(first@host)9> rpc:call(second@host, erlang, apply,
                        [fun io:format/1, ["Hey!~n"]]).
Hey!
ok
(first@host)10> mymodule:func("Hey!~n").
5
(first@host)11> rpc:call(second@host, erlang, apply,
                         [fun mymodule:func/1, ["Hey!~n"]]).
{badrpc,{'EXIT',{undef,[{mymodule,func,["Hey!~n"]},
                        {rpc,'-handle_call_call/6-fun-0-',5}]}}}
In this example, io exists on both nodes and it works to send a function from io as a fun. However, mymodule exists only on the first node and the fun generates an undef exception when called on the other node.
As for anonymous functions, it seems they can be sent and work as expected.
t1@localhost:
(t1@localhost)7> register(shell, self()).
true
(t1@localhost)10> A = me, receive Fun when is_function(Fun) -> Fun(A) end.
hello me you
ok
t2@localhost:
(t2@localhost)11> B = you.
you
(t2@localhost)12> Fn2 = fun (A) -> io:format("hello ~p ~p~n", [A, B]) end.
#Fun<erl_eval.6.54118792>
(t2@localhost)13> {shell, 't1@localhost'} ! Fn2.
I am adding coverage logic to an app built on riak-core, and the merge of results gathered can be tricky if anonymous functions cannot be used in messages.
Also check out riak_kv/src/riak_kv_coverage_filter.erl; riak_kv might be using it to filter results, I guess.
I keep running into this. I want to spawn processes and pass arguments to them without using the MFA form (module/function/arguments), so basically without having to export the function I want to spawn with arguments. I've gotten around this a few times using closures (funs) and having the arguments just be bound values outside the fun (that I then reference inside the fun), but it's limiting my code structure... I've looked at the docs and spawn only has the regular spawn/1 and the spawn/3 form, nothing else...
I understand that code reloading in spawned processes is not possible without the use of the MFA form but the spawned processes are not of the long running nature and finish relatively quickly so that's not an issue (I also want to contain all the code in one module-level function with sub-jobs being placed in funs inside that function).
much appreciated
thanks
Actually, Richard pointed me in the right direction to avoid the issue nicely (in a reply to the same post I put up on the Erlang GoogleGroups):
http://groups.google.com/group/erlang-programming/browse_thread/thread/1d77a697ec67935a
His answer:
By "using closures", I hope you mean something like this:
Pid = spawn(fun () -> any_function(Any, Number, Of, Arguments) end)
How would that be limiting to your code structure?
/Richard
thank you for promptly commenting on my question. Much appreciated.
Short answer: you can't. Spawn (in all its varying forms) only takes a 0-arity function. Using a closure and bringing in bound variables from the spawning function is the way to go, short of using some sort of shared data store like ETS (which is Monster Overkill).
I've never found using a closure to severely hamper my code structure, though; can you give an example of the problems you're having, and perhaps someone can tidy it up for you?
This is an old question but I believe it can be properly answered with a bit of creativity:
The goal of the question is to invoke a function, with the following limits:

1. No M:F/A formatting
2. No exporting of the invoked function

This can be solved as follows.
Ignoring the limitations for a moment, the conventional solution would be:
run() ->
    Module = module,
    Function = function,
    Args = [arg1, arg2, arg3],
    erlang:spawn(Module, Function, Args).
In this solution however, the function is required to be exported.
Applying the 2nd limitation (no exporting of the invoked function) alongside the 1st leads us to the following solution using conventional Erlang logic:
run() ->
    %% Generate an anonymous fun and execute it
    erlang:spawn(fun() -> function(arg1, arg2, arg3) end).
This solution generates an anonymous fun on every execution, which may or may not be wanted based on your design, due to the extra work that the Garbage Collector will need to perform (note that, generally, this will be negligible and issues will potentially only be seen in larger systems).
An alternative way to write the above without generating Anonymous Funs would be to spawn an erlang:apply/2 which can execute functions with given parameters.
By passing a Function Ref. to erlang:apply/2, we can reference a local function and invoke it with the given arguments.
The following implements this solution:
run() ->
    %% Function Ref. to a local (non-exported) function of matching arity
    Function = fun function/3,
    Args = [arg1, arg2, arg3],
    erlang:spawn(erlang, apply, [Function, Args]).
Edit: This type of solution can be found within the Erlang Src whereby erlang:apply/2 is being called to execute a fun() with args.
%% https://github.com/erlang/otp/blob/71af97853c40d8ac5f499b5f2435082665520642/erts/preloaded/src/erlang.erl#L2888
%% Spawn and atomically set up a monitor.
-spec spawn_monitor(Fun) -> {pid(), reference()} when
      Fun :: function().
spawn_monitor(F) when erlang:is_function(F, 0) ->
    erlang:spawn_opt(erlang,apply,[F,[]],[monitor]);
spawn_monitor(F) ->
    erlang:error(badarg, [F]).
First, there is no code, so we can't help you a lot. The best way to control your functions and their args in your spawned processes is to spawn the process with a function that receives; then you will be in contact with your process via send and receive. Try:
Pid = spawn(Node, ModuleName, functionThatReceive, []),
%% or just spawn(ModuleName, ...) if the program is not distributed
Pid ! {self(), {M1, f1, A1}},
receive
    {Pid, Reply} -> Reply
end,
Pid ! {self(), {M2, f2, A2}},
receive
    {Pid, Reply} -> Reply
end,
.......

functionThatReceive() ->
    receive
        {From, {M1, f1, A1}} -> From ! {self(), doSomething1};
        {From, {M2, f2, A2}} -> From ! {self(), doSomething2}
    end.