I have a fairly simple use case:
Start a process serving an AspNetCore app with Swagger middleware for generating a swagger.json file.
Start a process that reads said swagger.json and generates a C# REST client.
Wait until the C# REST client has been generated, then shut down the AspNetCore app.
Currently my code looks like this:
try
    let createProc =
        CreateProcess.fromRawCommand "dotnet" ["run"; "--project"; "FakeProcMvc"]
        |> CreateProcess.addOnStarted (fun _ ->
            CreateProcess.fromRawCommand "dotnet" ["run"; "--project"; "FakeProcWriteFile"]
            |> Proc.run
            |> ignore)

    let procTask =
        createProc
        |> Proc.start

    let tokenSource = new CancellationTokenSource()
    procTask.Wait(tokenSource.Token)
    tokenSource.Cancel()
with
| Failure msg -> printfn "caught: %A" msg
But this is clearly not quite right. The file writing (the C# client generation) works fine, but I can't get the AspNetCore app to stop.
The cancellation does nothing, and I don't know how to get a reference to the actual process so I can use Process.kill or similar.
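For reference, a minimal sketch of one way to keep a handle on the host process so it can be terminated once generation finishes. It drops down to System.Diagnostics.Process instead of FAKE's CreateProcess wrapper (an assumption for illustration, not something FAKE prescribes), reusing the project names from above:

open System.Diagnostics

// Start the AspNetCore host and keep the Process handle so it can be killed later.
let hostInfo = ProcessStartInfo("dotnet", "run --project FakeProcMvc", UseShellExecute = false)
let host = Process.Start(hostInfo)

try
    // Run the client generator and block until it has finished.
    use generator =
        Process.Start(ProcessStartInfo("dotnet", "run --project FakeProcWriteFile", UseShellExecute = false))
    generator.WaitForExit()
finally
    // Kill the whole process tree ("dotnet run" launches the app as a child);
    // Process.Kill(true) requires .NET Core 3.0 or later.
    if not host.HasExited then host.Kill(true)
    host.Dispose()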
I have something like:
pathScan "/blah/%s" (fun x -> (sprintf "%A" x) |> json)
and what it shows me if I request /blah/AT%2BVER%3F is the URL-encoded data. Is there a way to get this decoded automatically, or do I need to decode all my parameters myself (which seems a bit odd)?
Older versions of Suave require decoding manually. Note that a pull request addressing this has been accepted and is now in the current release.
So the best option is either to upgrade to the latest Suave or to run the value through System.Web.HttpUtility.UrlDecode yourself (this is the same mechanism the current Suave uses internally).
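A small sketch of the manual route, assuming Suave's pathScan from Suave.Filters and OK from Suave.Successful (the route and format string are just the ones from the question):

open System.Web
open Suave
open Suave.Filters
open Suave.Successful

// Decode the captured segment before using it, e.g. "AT%2BVER%3F" -> "AT+VER?"
let app =
    pathScan "/blah/%s" (fun raw ->
        let decoded = HttpUtility.UrlDecode raw
        OK (sprintf "%A" decoded))

startWebServer defaultConfig app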
I have the following .fsx script:
#r "packages/FSharp.Data/lib/net40/FSharp.Data.dll"
open FSharp.Data
async { let! html = Http.AsyncRequestString("http://stackoverflow.com")
        printfn "%d" html.Length }
|> Async.Start
The code is correct, since it works as expected in fsharpi. I suspect what's happening is that the script exits before the async response is back. What's the easiest way to wait for the response to be back?
You can use |> Async.RunSynchronously in this case. See https://msdn.microsoft.com/en-us/library/dd233250.aspx or Chapter 11 of the Expert F# 4.0 book. Async.Start is more useful for kicking off work that reports its results back to the GUI without blocking.
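Applied to the script above, a minimal sketch that blocks until the response arrives:

#r "packages/FSharp.Data/lib/net40/FSharp.Data.dll"
open FSharp.Data

// Run the request synchronously so the script doesn't exit before the response is back.
async { let! html = Http.AsyncRequestString("http://stackoverflow.com")
        printfn "%d" html.Length }
|> Async.RunSynchronously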
One of my websites is using Nitrogen with a Cowboy server.
I would like to log every access to web pages, just like Apache does with access.log.
What would be the best way to do that?
You can use Cowboy middlewares (https://ninenines.eu/docs/en/cowboy/1.0/guide/middlewares/).
Just create a simple log module:
-module(app_web_log).
-behaviour(cowboy_middleware).
-export([execute/2]).

execute(Req, Env) ->
    {{Peer, _}, Req2} = cowboy_req:peer(Req),
    {Method, Req3} = cowboy_req:method(Req2),
    {Path, Req4} = cowboy_req:path(Req3),
    error_logger:info_msg("~p: [~p]: ~p ~p", [calendar:universal_time(), Peer, Method, Path]),
    {ok, Req4, Env}.
and add it to the list of middlewares:
{ok, _} = cowboy:start_http(http, 100, [{port, 8080}], [
    {env, [{dispatch, Dispatch}]},
    {middlewares, [cowboy_router, app_web_log, cowboy_handler]}]).
Try using Nitrogen on top of the Yaws web server instead, since it performs access logging by default.
Each underlying webserver does it differently (or not at all) - this is something simple_bridge does not yet have abstracted.
So in the case of cowboy, you'll likely have to rig it up yourself.
If you're using a newer build of Nitrogen (you'll have the file site/src/nitrogen_main_handler.erl), you can edit that file to log requests manually. For example, using Erlang's error_logger, you could add something simple like:
log_request() ->
    error_logger:info_msg("~p: [~p]: ~p", [{date(), time()}, wf:peer_ip(), wf:url()]).

run() ->
    handlers(),
    log_request(),  %% <--- insert before wf_core:run()
    wf_core:run().
Where the log ends up can then be controlled by configuring error_logger to write to disk (http://erldocs.com/17.0/kernel/error_logger.html?i=13&search=error_logger#logfile/1).
If you use an older Nitrogen (which would have site/src/nitrogen_cowboy.erl), then you would similarly edit that file, once again before the wf_core:run() call.
Alternatively, your hooks option with cowboy could work as well. I've not worked with them, so you're on your own there :)
Assume the following code:
let sw = new StreamWriter("out.txt", false)
sw.AutoFlush <- true
let proc = new Process()
proc.StartInfo.FileName <- path
proc.StartInfo.RedirectStandardOutput <- true
proc.StartInfo.UseShellExecute <- false
proc.OutputDataReceived.Add(fun b -> sw.WriteLine b.Data )
proc.Start() |> ignore
proc.BeginOutputReadLine()
I create a process and exit the main application. The process keeps running (as it should), but it stops redirecting the standard output. Is there any way to keep writing the standard output to the file even after the main application exits?
PS: I have to exit the main application and cannot wait for the process to finish
PPS: I would like to do a similar thing for the standard error output
I think desco's answer may work if RedirectStandardOutput is false. The output isn't being written to the file after your process exits because the OutputDataReceived event handler no longer exists. If possible, I'd recommend passing the output file path to your program (assume no path means write to stdout). With that in place it should be easy to do what you're trying to do.
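A sketch of that suggestion, assuming the child is also an F# program you control; the argument handling and the "doing work" output are hypothetical stand-ins:

open System
open System.IO

[<EntryPoint>]
let main argv =
    // Hypothetical child program: if an output path is supplied as the first
    // argument, write there; with no argument, keep writing to stdout.
    match argv with
    | [| path |] ->
        use sw = new StreamWriter(path, false)
        sw.AutoFlush <- true
        sw.WriteLine "doing work..."
    | _ ->
        Console.Out.WriteLine "doing work..."
    0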
Can you perform the redirect by starting the process with the proper command line? See: Redirect stdout and stderr to a single file in DOS.
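If changing the child program is not an option, one way to apply that idea from F# is to let cmd.exe own the redirection, so output keeps flowing to the file after the parent exits. A sketch, assuming paths without spaces (the executable path, out.txt and the use of cmd.exe are assumptions for illustration):

open System.Diagnostics

let path = "myapp.exe"  // hypothetical executable path, as in the question

// cmd.exe performs the redirection itself, so it keeps writing to the file
// even after this (parent) application exits; "2>&1" folds stderr into the same file.
let psi = ProcessStartInfo()
psi.FileName <- "cmd.exe"
psi.Arguments <- sprintf "/c %s > out.txt 2>&1" path
psi.UseShellExecute <- false
psi.CreateNoWindow <- true

Process.Start(psi) |> ignore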
Using the default Erlang installation, what is the minimum code needed for a web server that produces "Hello world"?
Taking "produce" literally, here is a pretty small one. It doesn't even read the request (but it does fork on every request, so it's not as minimal as possible).
-module(hello).
-export([start/1]).

start(Port) ->
    spawn(fun () ->
                  {ok, Sock} = gen_tcp:listen(Port, [{active, false}]),
                  loop(Sock)
          end).

loop(Sock) ->
    {ok, Conn} = gen_tcp:accept(Sock),
    Handler = spawn(fun () -> handle(Conn) end),
    gen_tcp:controlling_process(Conn, Handler),
    loop(Sock).

handle(Conn) ->
    gen_tcp:send(Conn, response("Hello World")),
    gen_tcp:close(Conn).

response(Str) ->
    B = iolist_to_binary(Str),
    iolist_to_binary(
      io_lib:fwrite(
         "HTTP/1.0 200 OK\nContent-Type: text/html\nContent-Length: ~p\n\n~s",
         [size(B), B])).
For a web server using only the built-in libraries, check out inets' http_server.
When you need a bit more power but still want simplicity, check out the mochiweb library. You can google for loads of example code.
Do you actually want to write a web server in Erlang, or do you want an Erlang web server so that you can create dynamic web content using Erlang?
If the latter, try YAWS. If the former, have a look at the YAWS source code for inspiration.
Another way, similar to the gen_tcp example above but with less code (and already offered as a suggestion), is to use the inets library.
%%%
%%% A simple "Hello, world" server in Erlang.
%%%
-module(hello_erlang).
-export([
    main/1,
    run_server/0,
    start/0
]).

main(_) ->
    start(),
    receive
        stop -> ok
    end.

run_server() ->
    ok = inets:start(),
    {ok, _} = inets:start(httpd, [
        {port, 0},
        {server_name, "hello_erlang"},
        {server_root, "/tmp"},
        {document_root, "/tmp"},
        {bind_address, "localhost"}
    ]).

start() -> run_server().
Keep in mind, this exposes your /tmp directory.
To run, simply:
$ escript ./hello_erlang.erl
For a very easy-to-use web server for building RESTful apps and the like, check out the gen_web_server behaviour: http://github.com/martinjlogan/gen_web_server.
Just one fix for Felix's answer, and it addresses the issues Martin is seeing: before closing the socket, all data sent by the client should be received (using, for example, the do_recv snippet from the gen_tcp documentation).
Otherwise there is a race: the browser or proxy may not have finished sending the HTTP request before the socket is closed.