port createNewDocument : Encode.Value -> Cmd msg
port printDocument : () -> Cmd msg

createNewDocument : Document -> Task err msg
printDocument : Task err msg
I want to chain the create and print steps into one, because sometimes I need both one after another; other times I need to create the document, make some updates, and then print.
someCmd : Cmd msg
someCmd =
    createNewDocument
        |> Task.andThen (\what -> {- what to add here? -} printDocument)
        |> Task.attempt (\result -> {- some result handler -} ...)
How can I chain port calls, given that ports return Cmd msg, not Task err msg?
There is no way to do this without introducing a message that lives in the middle. Ports are also unidirectional, so you need a subscription port to return the value from your external source.
I.e.: your first command triggers a JavaScript function, that function sends the result back via a subscription port, and in your update function you handle the resulting message by returning the second command.
type Msg
    = ...
    | CreateNewDocument Encode.Value
    | PrintDocument Document

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        ...

        CreateNewDocument value ->
            ( model, createNewDocument value )

        PrintDocument document ->
            ( model, printDocument document )

sub : Sub Msg
sub =
    receiveNewDocument PrintDocument
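For completeness, the receiveNewDocument subscription used in sub above is not declared anywhere in the question. A minimal sketch of the missing declaration might look like this (note this is an assumption, not code from the question):

```elm
-- Sketch only: the subscription side of the round trip. After JavaScript
-- finishes creating the document, it calls
-- app.ports.receiveNewDocument.send(doc), which arrives in Elm as a
-- PrintDocument message via `sub`.
port receiveNewDocument : (Document -> msg) -> Sub msg
```

In practice the payload flowing through the port would need to be a JSON-compatible value (e.g. a record of primitives or a Json.Decode.Value you decode yourself), since arbitrary Elm types cannot cross the port boundary.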
I am trying to use Cowboy to send a notification to multiple clients using the sockets connected to them. The problem is that I cannot find anything in the documentation about the argument to pass to the function; the one I used in the code seems to be incorrect.
The socket is saved in a variable called Req that is given when a new client connects in the init function:
% Called to know how to dispatch a new connection.
init(Req, _Opts) ->
    ?LOG_INFO("New client"),
    ?LOG_DEBUG("Request: ~p", [Req]),
    % "Upgrade" every request to websocket;
    % we're not interested in serving any other content.
    Req2 = Req,
    {cowboy_websocket, Req, #state{socket = Req2}}.
The sockets are used in this way:

send_message_to_sockets([Socket | Sockets], Msg) ->
    cowboy_websocket:websocket_send(Socket, {text, Msg}),
    send_message_to_sockets(Sockets, Msg).
This is the error:
Error in process <0.202.0> with exit value:
{undef,[{cowboy_websocket,websocket_send, [#{bindings => #{},...}
I have tried different arguments to pass to websocket_send, but nothing worked.
Here is the code of websocket_send:

transport_send(#state{socket=Stream={Pid, _}, transport=undefined}, IsFin, Data) ->
    Pid ! {Stream, {data, IsFin, Data}},
    ok;
transport_send(#state{socket=Socket, transport=Transport}, _, Data) ->
    Transport:send(Socket, Data).

-spec websocket_send(cow_ws:frame(), #state{}) -> ok | stop | {error, atom()}.
websocket_send(Frames, State) when is_list(Frames) ->
    websocket_send_many(Frames, State, []);
websocket_send(Frame, State) ->
    Data = frame(Frame, State),
    case is_close_frame(Frame) of
        true ->
            _ = transport_send(State, fin, Data),
            stop;
        false ->
            transport_send(State, nofin, Data)
    end.

websocket_send_many([], State, Acc) ->
    transport_send(State, nofin, lists:reverse(Acc));
websocket_send_many([Frame|Tail], State, Acc0) ->
    Acc = [frame(Frame, State)|Acc0],
    case is_close_frame(Frame) of
        true ->
            _ = transport_send(State, fin, lists:reverse(Acc)),
            stop;
        false ->
            websocket_send_many(Tail, State, Acc)
    end.
The socket is saved in a variable called Req that is given when a new client connects in the init function:

init(Req, _Opts) ->
    ?LOG_INFO("New client"),
    ?LOG_DEBUG("Request: ~p", [Req]),
    % "Upgrade" every request to websocket;
    % we're not interested in serving any other content.
    Req2 = Req,
    {cowboy_websocket, Req, #state{socket = Req2}}.
I don't think that is true. In Nine Nines' Cowboy User Guide, there is a section titled Getting Started, which shows this code:
Handling requests

...

init(Req0, State) ->
    Req = cowboy_req:reply(200,
        #{<<"content-type">> => <<"text/plain">>},
        <<"Hello Erlang!">>,
        Req0),
    {ok, Req, State}.
In that code, init() is passed a map that is bound to the variable Req0. You can read about the request map here:
https://ninenines.eu/docs/en/cowboy/2.9/guide/req/
The map contains the usual HTTP request information, e.g. the request method, the HTTP version number, scheme, host, port, path, etc. Then the docs say:
Any other field is internal and should not be accessed.
Generally, a socket is defined like this:
A socket is one endpoint of a two-way communication link between two
programs running on the network.
And:
An endpoint is a combination of an IP address and a port number.
https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html
In this line:
cowboy_websocket:websocket_send(Socket, {text, Msg}),
websocket_send() is defined to take a cow_ws:frame() type as the first argument:
-spec websocket_send(cow_ws:frame(), #state{}) -> ok | stop | {error, atom()}.
websocket_send(Frames, State) when is_list(Frames) ->
In the cow_ws module, the frame() type is defined like this:
frame() :: {text, iodata()}
| {binary, iodata()}
| ping | {ping, iodata()}
| pong | {pong, iodata()}
| close | {close, iodata()} | {close, close_code(), iodata()}
...which is a tuple whose first element is an atom and whose optional second element is an iodata(), a built-in Erlang type that is either a binary or a list (containing integers, binaries, and other lists).
I'm not sure how you go from a Request map to a two-tuple.
In Reactor Netty, when sending data to a TCP channel via out.send(publisher), one would expect any publisher to work. However, if instead of a simple immediate Flux we use a more complex one with delayed elements, it stops working properly.
For example, this hello-world TCP echo server works as expected:
import reactor.core.publisher.Flux;
import reactor.netty.DisposableServer;
import reactor.netty.tcp.TcpServer;

import java.time.Duration;

public class Reactor1 {
    public static void main(String[] args) throws Exception {
        DisposableServer server = TcpServer.create()
                .port(3344)
                .handle((in, out) -> in
                        .receive()
                        .asString()
                        .flatMap(s ->
                                out.sendString(Flux.just(s.toUpperCase()))
                        ))
                .bind()
                .block();

        server.channel().closeFuture().sync();
    }
}
However, if we change out.sendString to

out.sendString(Flux.just(s.toUpperCase()).delayElements(Duration.ofSeconds(1)))

then we would expect each received item to produce output after a one-second delay. Instead, if the server receives multiple items during the interval, it produces output only for the first one. For example, below we type aa and bb during the first second, but only AA is produced as output (after one second):
$ nc localhost 3344
aa
bb
AA <after one second>
Then, if we later type an additional line, we get output (after one second), but from the previous input:
cc
BB <after one second>
Any ideas how to make send() work as expected with a delayed Flux?
I think you shouldn't recreate the publisher for out.sendString(...).
This works:

DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> out
                .options(NettyPipeline.SendOptions::flushOnEach)
                .sendString(in.receive()
                        .asString()
                        .map(String::toUpperCase)
                        .delayElements(Duration.ofSeconds(1))))
        .bind()
        .block();

server.channel().closeFuture().sync();
Try using concatMap. This works:

DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> in
                .receive()
                .asString()
                .concatMap(s ->
                        out.sendString(Flux.just(s.toUpperCase())
                                .delayElements(Duration.ofSeconds(1)))
                ))
        .bind()
        .block();

server.channel().closeFuture().sync();
Delaying on the incoming traffic:

DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> in
                .receive()
                .asString()
                .timestamp()
                .delayElements(Duration.ofSeconds(1))
                .concatMap(tuple2 ->
                        out.sendString(
                                Flux.just(tuple2.getT2().toUpperCase() +
                                        " " +
                                        (System.currentTimeMillis() - tuple2.getT1())
                                ))
                ))
        .bind()
        .block();
I am struggling with how to set different cache response headers based on whether the result is an Ok or an Error. My code looks something like the following (but with other types in the result):
let resultToJson (result: Result<'a, string>) : HttpHandler =
    match result with
    | Ok o -> Successful.ok (json o)
    | Error s -> ServerErrors.internalError (text s)
I can add the headers by doing something like the following:
let resultToJson (result: Result<'a, string>) : HttpHandler =
    fun (next: HttpFunc) (ctx: HttpContext) ->
        let response =
            let headers = ctx.Response.Headers
            match result with
            | Ok o ->
                headers.Add("Cache-Control", new StringValues("public, max-age=10, stale-while-revalidate=2"))
                headers.Add("Vary", new StringValues("Origin"))
                Successful.ok (json o)
            | Error s ->
                headers.Add("Cache-Control", new StringValues("no-cache"))
                ServerErrors.internalError (text s)
        response next ctx
But this does not feel right. I would like to use the standard HttpHandlers from the ResponseCaching module to set the right cache headers:
publicResponseCaching 10 (Some "Origin") // For Ok: Add 10 sec public cache, Vary by Origin
noResponseCaching // For Error: no caching
How do I achieve this?
The response-caching handler is supposed to be piped into a normal pipeline. Your choice between Ok and Error acts like a choose function, so you can use choose, which takes a list of handlers to attempt: to reject a path, just return task { return None }; to move forward, return next ctx.
If you want to keep all the logic in one handler, like you have now, just keep your match and pipe your json/text response into one of the caching handlers:

let fn = (json o >=> publicResponseCaching 30 None) in fn next ctx

If it's nested inside a handler, instead of in a pipeline, you have to apply next and ctx yourself.
I found the solution to my problem.
Yes, I can chain the HttpHandlers as Gerard and Honza Brestan mentioned, using the fish operator (>=>). The reason I could not make that work in the first place was that I had also created a fish operator for the Result type in an opened module; basically, I had created a proper fish soup.
As soon as I refactored my code so that the module containing the Result fish operator was not open in this scope, everything worked as expected.
Another point to remember is that the response-caching handler needs to be called before the finalizing HttpHandler; otherwise it will not be called:
// Simplified code
let resultToJson =
    function
    | Ok o -> publicResponseCaching 10 (Some "Origin") >=> Successful.ok (json o)
    | Error e -> noResponseCaching >=> ServerErrors.internalError (text e)
I've written the following Haskell code to download the CSV file (daily prices) available on the Yahoo Finance web site. In the last part of the code there's a case statement. I would like to know when "rcode" actually contains a Left value; I've handled three cases, but all of them refer to Right values. I may be wrong. I'm referring to the HTTP response codes listed on the following web site.
downloadCSVFile :: String -> IO (Bool, String)
downloadCSVFile company_code = do
    let a = "http://ichart.finance.yahoo.com/table.csv?s=" ++ company_code
    let b = simpleHTTP $ getRequest a
    src <- (b >>= getResponseBody)
    rcode <- fmap rspCode <$> b
    case rcode of
        Right (2,_,_) -> return (True, src)
        Right (4,_,_) -> return (False, "Invalid URL..")
        Right (5,_,_) -> return (False, "Server Error")
https://support.google.com/webmasters/answer/40132?hl=en
The Result a type that gets threaded around is an alias for Either ConnError a.
You'll get a Left value if the HTTP client library had some actual problem connecting to the server. If it successfully connected and received an HTTP response code from the server, the result will always be a Right value.
See the Network.HTTP documentation for more details.
To handle the error cases, do something like this:
case rcode of
    Left err -> return (False, "Connection error: " ++ show err)
    Right (2,_,_) -> return (True, src)
    Right (4,_,_) -> return (False, "Invalid URL..")
    Right (5,_,_) -> return (False, "Server Error")
    Right code -> return (False, "Unexpected code: " ++ show code)
I also added a "catch-all" case in case you get an unexpected response from the server.
I was going through one of Don Syme's blog posts, Async and Parallel Design Patterns in F#: Agents. However, the following seemingly extremely simple code did not generate the output I expected.
type Agent<'T> = MailboxProcessor<'T>

let agent =
    Agent.Start(fun inbox ->
        async { while true do
                    let! msg = inbox.Receive()
                    printfn "got message '%s'" msg })

for i in 1 .. 10000 do
    agent.Post (sprintf "message %d" i)
Instead of the expected 10,000 messages, I only got around 3,000 messages using Mono 2.8.1 under Ubuntu, or 15 messages using Visual F# under Windows XP. Am I missing anything here? BTW, I tried replacing the printfn statement with the following file operation and got the same partial results.
open System.IO

type Agent<'T> = MailboxProcessor<'T>

let agent =
    Agent.Start(fun inbox ->
        async { while true do
                    let! msg = inbox.Receive()
                    use logger = new StreamWriter("a.log", true)
                    logger.WriteLine("got message '{0}'", msg.ToString())
                    logger.Close() })

for i in 1 .. 10000 do
    agent.Post (sprintf "message %d" i)
I just ran your code on a Windows machine and everything is OK. Try adding

ignore (System.Console.ReadKey())

as the last line: agent.Post is non-blocking, so after posting 10,000 messages the control flow moves forward, possibly exiting the program before the agent has processed everything.