How does one inject dependencies like a logger, database connection, or SHA256 generator in Iron?

In writing my tests, I'd like to be able to inject a connection into the request so that I can wrap the entire test case in a transaction (even if there is more than one request in the test case).
I've attempted to do this using a BeforeMiddleware which I can link in my test cases to insert a connection, as such:
pub type DatabaseConnection = PooledConnection<ConnectionManager<PgConnection>>;

pub struct DatabaseOverride {
    conn: DatabaseConnection,
}

impl BeforeMiddleware for DatabaseOverride {
    fn before(&self, req: &mut Request) -> IronResult<()> {
        req.extensions_mut().entry::<DatabaseOverride>().or_insert(self.conn);
        Ok(())
    }
}
However, I'm encountering a compile error in trying to do this:
error: the trait bound `std::rc::Rc<diesel::pg::connection::raw::RawConnection>: std::marker::Sync` is not satisfied [E0277]
impl BeforeMiddleware for DatabaseOverride {
^~~~~~~~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::rc::Rc<diesel::pg::connection::raw::RawConnection>` cannot be shared between threads safely
note: required because it appears within the type `diesel::pg::PgConnection`
note: required because it appears within the type `r2d2::Conn<diesel::pg::PgConnection>`
note: required because it appears within the type `std::option::Option<r2d2::Conn<diesel::pg::PgConnection>>`
note: required because it appears within the type `r2d2::PooledConnection<r2d2_diesel::ConnectionManager<diesel::pg::PgConnection>>`
note: required because it appears within the type `utility::db::DatabaseOverride`
note: required by `iron::BeforeMiddleware`
error: the trait bound `std::cell::Cell<i32>: std::marker::Sync` is not satisfied [E0277]
impl BeforeMiddleware for DatabaseOverride {
^~~~~~~~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::cell::Cell<i32>` cannot be shared between threads safely
note: required because it appears within the type `diesel::pg::PgConnection`
note: required because it appears within the type `r2d2::Conn<diesel::pg::PgConnection>`
note: required because it appears within the type `std::option::Option<r2d2::Conn<diesel::pg::PgConnection>>`
note: required because it appears within the type `r2d2::PooledConnection<r2d2_diesel::ConnectionManager<diesel::pg::PgConnection>>`
note: required because it appears within the type `utility::db::DatabaseOverride`
note: required by `iron::BeforeMiddleware`
Is there a way around this with diesel's connections? I've found several examples on Github to do this using the pg crate, but I'd like to keep using diesel.

This answer will certainly solve the problem, but it's not optimal. As mentioned, you can't share a single connection because it isn't thread-safe. While wrapping it in a Mutex would make it thread-safe, that would force all the server threads to take turns on a single connection. Instead, you want a connection pool.
You can accomplish this with the r2d2 and r2d2-diesel crates. A pool establishes multiple connections as needed and reuses them when possible, in a thread-safe manner.
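As a minimal sketch of setting up such a pool (the connection string and pool size are assumptions on my part, and this uses the builder API of more recent r2d2 releases):

extern crate diesel;
extern crate r2d2;
extern crate r2d2_diesel;

use diesel::pg::PgConnection;
use r2d2_diesel::ConnectionManager;

fn main() {
    // Hypothetical connection string; substitute your own.
    let manager = ConnectionManager::<PgConnection>::new("postgres://localhost/my_db");

    // Unlike a single connection, the pool is Send + Sync, so it can be
    // stored in a middleware and shared across Iron's handler threads.
    let pool = r2d2::Pool::builder()
        .max_size(8)
        .build(manager)
        .expect("failed to create pool");

    // Each handler checks out its own connection for the request's duration;
    // dropping it returns the connection to the pool.
    let _conn = pool.get().expect("failed to get a connection from the pool");
}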

Since there isn't enough code provided for me to reproduce your issue, I've made this:
use std::cell::Cell;
trait Middleware: Sync {}
struct Unsharable(Cell<bool>);
impl Middleware for Unsharable {}
fn main() {}
which has the same error:
error: the trait bound `std::cell::Cell<bool>: std::marker::Sync` is not satisfied [E0277]
impl Middleware for Unsharable {}
^~~~~~~~~~
help: run `rustc --explain E0277` to see a detailed explanation
note: `std::cell::Cell<bool>` cannot be shared between threads safely
note: required because it appears within the type `Unsharable`
note: required by `Middleware`
You can solve the problem by changing the type to make it cross-thread compatible:
use std::sync::Mutex;
struct Sharable(Mutex<Unsharable>);
impl Middleware for Sharable {}
Note that Rust has done a very good thing for you: it has prevented you from using a type that is unsafe to share across multiple threads.
In writing my tests, I'd like to be able to inject a connection into the request so that I can wrap the entire test case in a transaction (even if there is more than one request in the test case).
I'd suggest that it's possible an architectural change would be even better. Separate the domains of "web framework" from your "database". The authors of Growing Object-Oriented Software, Guided by Tests (a highly recommended book) advocate for this style.
Pull apart your code such that there is a method that simply accepts some type that can start / end a transaction, write the interesting stuff there, and test it thoroughly. Then have just enough glue code in the web layer to create a transaction object, then call the next layer down.
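A minimal sketch of that layering (every name here is hypothetical, not part of Iron or diesel):

// Anything that can run a transaction; a test implementation can roll
// back at the end, while a production one commits.
trait TransactionSource {
    fn transaction<T, F: FnOnce() -> T>(&self, body: F) -> T;
}

// The interesting stuff lives here, testable without any web framework.
fn register_user<S: TransactionSource>(source: &S, name: &str) -> String {
    source.transaction(|| format!("registered {}", name))
}

// The web layer is just enough glue to build a source and delegate.
fn handler<S: TransactionSource>(source: &S) -> String {
    register_user(source, "alice")
}

fn main() {
    // Trivial source for illustration: runs the body with no transaction.
    struct NoopSource;
    impl TransactionSource for NoopSource {
        fn transaction<T, F: FnOnce() -> T>(&self, body: F) -> T {
            body()
        }
    }
    println!("{}", handler(&NoopSource));
}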

Related

Passing functions defined in Rcpp in each node through "foreach"

I'm trying to understand what is happening behind the Rcpp::sourceCpp() call on a parallelized environment. Recently, this was partially addressed in the question: Using Rcpp function in parLapply on Windows.
Within this post, Dirk said,
"You need to run the sourceCpp() call in each spawned process, or else get them your code."
This was in response to the questioner's attempt to distribute the Rcpp function to the worker processes. The questioner was sending the function via:
clusterExport(cl = cl, varlist = "payoff")
I'm confused as to why this doesn't work. My understanding was that distributing objects to the workers is exactly what clusterExport() is for.
The issue here is that the compiled code is not "exportable" to the spawned processes without being embedded in a package due to how binaries are linked into R's processes.
Traditionally, the clusterExport() statement allows for R specific code to be distributed to workers.
By using clusterExport() on an Rcpp function, you are only receiving the R declaration and not the underlying shared library. That is to say, the R CMD SHLIB given in Attributes.R is not shared with / exported to the workers. As a result, when a call is then made to an Rcpp function on the worker, R cannot find the correct shared library.
Take the previous question's function:
Rcpp::cppFunction("NumericVector payoff(double strike, NumericVector data) {
  return pmax(data - strike, 0);
}")
Note: I'm using cppFunction() instead of sourceCpp() but the results are equivalent since cppFunction() calls sourceCpp() to create the function.
Typing the function name:
payoff
Yields the R declaration with a shared library pointer.
function (strike, data)
.Primitive(".Call")(<pointer: 0x1015ec130>, strike, data)
This shared library is only available on the process that compiled the function.
Hence why it is always ideal to embed compiled code within a package and then distribute the package.
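Until the code lives in a package, one workable sketch (assuming the parallel package; the worker count here is arbitrary) is to recompile the function on each worker with clusterEvalQ() instead of exporting it:

library(parallel)

cl <- makeCluster(2)

# Compile the function in every spawned process so each worker gets its
# own shared library, rather than just the R-level stub.
clusterEvalQ(cl, {
  Rcpp::cppFunction("NumericVector payoff(double strike, NumericVector data) {
    return pmax(data - strike, 0);
  }")
})

res <- parLapply(cl, c(10, 20), function(strike) payoff(strike, 1:30))

stopCluster(cl)

Each worker pays the compilation cost once, which is another reason a package is the better long-term home for the code.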

How do I get the exported types of an erlang module?

I had cause to check the types exported by a module, and I immediately thought "right, module_info then" but was surprised to run into a few difficulties. I found I can get the exported types from modules I compile, but not from say modules in stdlib.
My (three) questions are: how do I reliably get the exported types of a module; why are the exported types in the attributes part of the module info on some modules; and why on some modules and not others?
I discovered that if I build this module:
-module(foo).
-export([bar/0]).
-export_types([baz/0]).
bar() -> bat .
And then use foo:module_info/0, I get this:
[{exports,[{bar,0},{module_info,0},{module_info,1}]},
{imports,[]},
{attributes,[{vsn,[108921085595958308709649797749441408863]},
{export_types,[{baz,0}]}]},
{compile,[{options,[{outdir,"/tmp"}]},
{version,"5.0.1"},
{time,{2015,10,22,10,38,8}},
{source,"/tmp/foo.erl"}]}]
Great, hidden away in 'attributes' is 'export_types'. Why this is in attributes I'm not quite sure, but... whatever...
I now know this will work:
4> lists:keyfind(export_types, 1, foo:module_info(attributes)).
{export_types,[{baz,0}]}
Great. So, I now know this will work:
5> lists:keyfind(export_types, 1, ets:module_info(attributes)).
false
Ah... it doesn't.
I know there are exported types, of course; if the documentation isn't good enough, the ets source shows:
-export_type([tab/0, tid/0, match_spec/0, comp_match_spec/0, match_pattern/0]).
In fact the exported type information for the ets module doesn't seem to be anywhere in the module info:
6> rp(ets:module_info()).
[{exports,[{match_spec_run,2},
{repair_continuation,2},
{fun2ms,1},
{foldl,3},
{foldr,3},
{from_dets,2},
{to_dets,2},
{test_ms,2},
{init_table,2},
{tab2file,2},
{tab2file,3},
{file2tab,1},
{file2tab,2},
{tabfile_info,1},
{table,1},
{table,2},
{i,0},
{i,1},
{i,2},
{i,3},
{module_info,0},
{module_info,1},
{tab2list,1},
{match_delete,2},
{filter,3},
{setopts,2},
{give_away,3},
{update_element,3},
{match_spec_run_r,3},
{match_spec_compile,1},
{select_delete,2},
{select_reverse,3},
{select_reverse,2},
{select_reverse,1},
{select_count,2},
{select,3},
{select,2},
{select,1},
{update_counter,3},
{slot,2},
{safe_fixtable,2},
{rename,2},
{insert_new,2},
{insert,2},
{prev,2},
{next,2},
{member,2},
{match_object,3},
{match_object,2},
{match_object,1},
{match,3},
{match,2},
{match,1},
{last,1},
{info,2},
{info,1},
{lookup_element,3},
{lookup,2},
{is_compiled_ms,1},
{first,1},
{delete_object,2},
{delete_all_objects,1},
{delete,2},
{delete,1},
{new,2},
{all,0}]},
{imports,[]},
{attributes,[{vsn,[310474638056108355984984900680115120081]}]},
{compile,[{options,[{outdir,"/tmp/buildd/erlang-17.1-dfsg/lib/stdlib/src/../ebin"},
{i,"/tmp/buildd/erlang-17.1-dfsg/lib/stdlib/src/../include"},
{i,"/tmp/buildd/erlang-17.1-dfsg/lib/stdlib/src/../../kernel/include"},
warnings_as_errors,debug_info]},
{version,"5.0.1"},
{time,{2014,7,25,16,54,59}},
{source,"/tmp/buildd/erlang-17.1-dfsg/lib/stdlib/src/ets.erl"}]}]
ok
I took things to extremes now and ran this, logging the output to a file:
rp(beam_disasm:file("/usr/lib/erlang/lib/stdlib-2.1/ebin/ets.beam")).
Not that I don't consider this absurd... but anyway: it's about 5,000 lines of output, and nowhere in it do I find an instance of the string "tid".
Up to Erlang 18 this information is not easily available.
Dialyzer, for example, extracts it from the abstract syntax tree of the core Erlang version of a module (see e.g. dialyzer_utils:get_record_and_type_info/1 used by e.g. dialyzer_analysis_callgraph:compile_byte/5)
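Along the same lines, here is a minimal sketch (my own, not taken from Dialyzer) that works whenever the .beam was compiled with debug_info, which the ets compile options above show is the case: read the abstract code chunk and scan it for export_type attributes.

-module(type_exports).
-export([of_beam/1]).

%% Returns the exported types of a module compiled with debug_info,
%% e.g. type_exports:of_beam("/usr/lib/erlang/lib/stdlib-2.1/ebin/ets.beam").
of_beam(BeamFile) ->
    {ok, {_Mod, [{abstract_code, {_Vsn, Forms}}]}} =
        beam_lib:chunks(BeamFile, [abstract_code]),
    lists:append([Types || {attribute, _Line, export_type, Types} <- Forms]).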
Regarding this part:
why are the exported types in the attributes bit of the module info on some modules, and why some modules and not others?
this is due to a bad definition in your module. The attribute should be -export_type, not -export_types. If you use the correct one (and define the baz/0 type and use it somewhere so that the module compiles), the exported types... vanish, as is expected.
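For reference, a corrected sketch of the module from the question (the definition of baz/0 is assumed, since the original never gave one):

-module(foo).
-export([bar/0]).
-export_type([baz/0]).

-type baz() :: bat.

-spec bar() -> baz().
bar() -> bat.

With this version, foo:module_info(attributes) no longer lists the exported types: the compiler consumes the recognised -export_type attribute rather than passing it through verbatim, which is why the misspelled -export_types survived into the attributes list in the first place.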

FAKE Fsc task is writing build products to wrong directory

I'm just learning F#, and setting up a FAKE build harness for a hello-world-like application. (Though the phrase "Hell world" does occasionally come to mind... :-) I'm using a Mac and emacs (generally trying to avoid GUI IDEs by preference).
After a bit of fiddling about with documentation, here's how I'm invoking the F# compiler via FAKE:
let buildDir = @"./build-app/"                     // Where application build products go

Target "CompileApp" (fun _ ->                      // Compile application source code
    !! @"src/app/**/*.fs"                          // Look for F# source files
    |> Seq.toList                                  // Convert FileIncludes to string list
    |> Fsc (fun p ->                               // which is what the Fsc task wants
        { p with
            FscTarget = Exe
            Platform = AnyCpu
            Output = buildDir + "hello-fsharp.exe" })  // *** Writing to . instead of buildDir?
)
That uses !! to make a FileIncludes of all the sources in the usual way, then uses Seq.toList to change that to a string list of filenames, which is then handed off to the Fsc task. Simple enough, and it even seems to work:
...
Starting Target: CompileApp (==> SetVersions)
FSC with args:[|"-o"; "./build-app/hello-fsharp.exe"; "--target:exe"; "--platform:anycpu";
"/Users/sgr/Documents/laboratory/hello-fsharp/src/app/hello-fsharp.fs"|]
Finished Target: CompileApp
...
However, despite what the console output above says, the actual build products go to the top-level directory, not the build directory. The message above looks like the -o argument is being passed to the compiler with an appropriate filename, but the executable gets put in . instead of ./build-app/.
So, 2 questions:
Is this a reasonable way to be invoking the F# compiler in a FAKE build harness?
What am I misunderstanding that is causing the build products to go to the wrong place?
This, or a very similar problem, was reported in FAKE issue #521 and seems to have been fixed in FAKE pull request #601, which see.
Explanation of the Problem
As is apparently well-known to everyone but me, the F# compiler as implemented in FSharp.Compiler.Service has a practice of skipping its first argument. See FSharp.Compiler.Service/tests/service/FscTests.fs around line 127, where we see the following nicely informative comment:
// fsc parser skips the first argument by default;
// perhaps this shouldn't happen in library code.
Whether it should or should not happen, it's what does happen. Since the -o came first in the arguments generated by FscHelper, it was dutifully ignored (along with its argument, apparently). Thus the assembly went to the default place, not the place specified.
Solutions
The temporary workaround was to specify --out:destinationFile in the OtherParams field of the FscParams setter in addition to the Output field; the latter is the sacrificial lamb to be ignored while the former gets the job done.
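Concretely, the workaround looks something like this (a sketch against FAKE 3.x's FscHelper, assuming OtherParams takes a list of raw compiler flags, and reusing the hypothetical paths from above):

|> Fsc (fun p ->
    { p with
        FscTarget = Exe
        Platform = AnyCpu
        // Output lands in the skipped first-argument slot and is ignored...
        Output = buildDir + "hello-fsharp.exe"
        // ...while --out, appearing later in the argument list, takes effect.
        OtherParams = [ "--out:" + buildDir + "hello-fsharp.exe" ] })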
The longer-term solution is to fix the arguments generated by FscHelper to include an extra throwaway argument at the front; then these 2 problems will annihilate in a puff of greasy black smoke. (It's kind of balletic in its beauty, when you think about it.) This is exactly what was just merged into master by @forki23:
// Always prepend "fsc.exe" since fsc compiler skips the first argument
let optsArr = Array.append [|"fsc.exe"|] optsArr
So that solution should be in the newest version of FAKE (3.11.0).
The answers to my 2 questions are thus:
Yes, this appears to be a reasonable way to invoke the F# compiler.
I didn't misunderstand anything; it was just a bug and a fix is in the pipeline.
More to the point: the actual misunderstanding was that I should have checked the FAKE issues and pull requests to see if anybody else had reported this sort of thing, and that's what I'll do next time.

Windows Shell and Citrix

I have this line of code in my Delphi app:
sh := CoShellWindows.Create;
When run through a Citrix session, this raises an exception "Not enough storage is available to complete this operation."
Can someone confirm my suspicion that I can't access this through Citrix? I'm running in Seamless mode if that makes any difference. Maybe there's something I need to change on the published icon to make it work?
I am guessing that there is no "Shell" in Citrix to create.
Thanks
EDIT
The CoShellWindows is simply a class which creates an object implementing the IShellWindows interface. This interface is then used to iterate through its items, looking for an instance of Internet Explorer (or, more specifically, an item which implements the IWebBrowser2 interface).
There are a few other use case scenarios using the CoShellWindows, but all ultimately are used to interact with the IWebBrowser2 interface (Internet Explorer 8). My requirement is to obtain this IWebBrowser2 object.
The call, behind the scenes, is calling the Windows API CoCreateInstance with the following parameters:
rclsid = {9BA05972-F6A8-11CF-A442-00A0C90A8F39} (CLSID of ShellWindows)
pUnkOuter = null (nil)
dwClsContext = CLSCTX_ALL (I've tried various combinations of these flags)
riid = {85CB6900-4D95-11CF-960C-0080C7F4EE85} (IID of IShellWindows)
ppv = a variable declared as type IShellWindows
e.g.: CoCreateInstance(CLASS_ShellWindows, nil, CLSCTX_ALL, IID_IShellWindows, sh)
Your exception "Not enough storage is available to complete this operation." should really read "Shell does not exist so no instance can be created"
Basically you are correct in your assumption that there is no shell to create in Citrix.
What are you using the shell for? If you provide more information, we may well be able to offer a full workaround.

What kind of types can be sent on an Erlang message?

Mainly I want to know if I can send a function in a message in a distributed Erlang setup.
On Machine 1:
F1 = fun() -> hey end,
gen_server:call(on_other_machine, F1)
On Machine 2:
handle_call(Function, _From, State) ->
    {reply, Function(), State}.
Does it make sense?
Here's an interesting article about passing funs to other Erlang nodes. To summarize it briefly:
[...] As you might know, Erlang distribution works by sending the binary encoding of terms; and so sending a fun is also essentially done by encoding it using erlang:term_to_binary/1; passing the resulting binary to another node, and then decoding it again using erlang:binary_to_term/1. [...]
This is pretty obvious for most data types; but how does it work for function objects?
When you encode a fun, what is encoded is just a reference to the function, not the function implementation.
[...]
[...]the definition of the function is not passed along; just exactly enough information to recreate the fun at an other node if the module is there.
[...] If the module containing the fun has not yet been loaded, and the target node is running in interactive mode; then the module is attempted loaded using the regular module loading mechanism (contained in the module error_handler); and then it tries to see if a fun with the given id is available in said module. However, this only happens lazily when you try to apply the function.
[...] If you never attempt to apply the function, then nothing bad happens. The fun can be passed to another node (which has the module/fun in question) and then everybody is happy.
Maybe the target node has a module loaded with that name, but perhaps in a different version, which would then be very likely to have a different MD5 checksum; in that case you get a badfun error if you try to apply it.
I would suggest you read the whole article, because it's extremely interesting.
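To see the "reference, not implementation" point in action, here is a minimal sketch (my own illustration, not from the article) that round-trips a fun through the same encoding the distribution layer uses:

-module(fun_roundtrip).
-export([demo/0]).

demo() ->
    %% Only a reference to io:format/1 is encoded, not its implementation,
    %% which is why the receiving node must have the module available.
    Bin = term_to_binary(fun io:format/1),
    F = binary_to_term(Bin),
    F("decoded and applied~n").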
You can send any valid Erlang term. Although you have to be careful when sending funs. Any fun referencing a function inside a module needs that module to exist on the target node to work:
(first@host)9> rpc:call(second@host, erlang, apply,
                        [fun io:format/1, ["Hey!~n"]]).
Hey!
ok
(first@host)10> mymodule:func("Hey!~n").
5
(first@host)11> rpc:call(second@host, erlang, apply,
                         [fun mymodule:func/1, ["Hey!~n"]]).
{badrpc,{'EXIT',{undef,[{mymodule,func,["Hey!~n"]},
                        {rpc,'-handle_call_call/6-fun-0-',5}]}}}
In this example, io exists on both nodes and it works to send a function from io as a fun. However, mymodule exists only on the first node and the fun generates an undef exception when called on the other node.
As for anonymous functions, it seems they can be sent and work as expected.
t1@localhost:
(t1@localhost)7> register(shell, self()).
true
(t1@localhost)10> A = me, receive Fun when is_function(Fun) -> Fun(A) end.
hello me you
ok
t2@localhost:
(t2@localhost)11> B = you.
you
(t2@localhost)12> Fn2 = fun (A) -> io:format("hello ~p ~p~n", [A, B]) end.
#Fun<erl_eval.6.54118792>
(t2@localhost)13> {shell, 't1@localhost'} ! Fn2.
I am adding coverage logic to an app built on riak-core, and the merge of results gathered can be tricky if anonymous functions cannot be used in messages.
Also check out riak_kv/src/riak_kv_coverage_filter.erl; riak_kv might be using this mechanism to filter results, I guess.
