I wish to use Scribe to export some data from an Erlang application, but I have a problem running the Thrift client. I installed Thrift (thrift-0.6.1) in the Erlang lib directory.
I found some example code that connects from Erlang to Scribe via Thrift:
{ok, C} = thrift_client:start_link("localhost", 1463, scribe_thrift,
[{strict_read, false},
{strict_write, false},
{framed, true}]),
but Erlang returns this error:
** exception error: undefined function thrift_client:start_link/4
When I run application:start(thrift) and then use tab completion in the shell, I can see the thrift_client functions:
7> thrift_client:
call/3 close/1 module_info/0 module_info/1 new/2
send_call/3
and there is no start_link function.
I think these days you want something like thrift_client_util:new(Host, Port, ProtoModule, Options)
which in your case would be:
thrift_client_util:new("localhost", 1463, scribe_thrift,
[{strict_read, false},
{strict_write, false},
{framed, true}]).
An important point to bear in mind with the Thrift API in Erlang is that every call returns a new client state value, which you must use for the subsequent call. Reusing an old client state value leads to wailing and gnashing of teeth.
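To illustrate that threading, here is a minimal sketch. It assumes the generated scribe_thrift module exposes the Log function from the Scribe IDL, and Entries / MoreEntries stand for whatever lists of log entries your generated code expects:
{ok, Client0} = thrift_client_util:new("localhost", 1463, scribe_thrift,
                                       [{strict_read, false},
                                        {strict_write, false},
                                        {framed, true}]),
%% every call hands back a new client state; always keep the latest one
{Client1, _Result1} = thrift_client:call(Client0, 'Log', [Entries]),
{Client2, _Result2} = thrift_client:call(Client1, 'Log', [MoreEntries]),
thrift_client:close(Client2).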
I got Thrift integrated into my project a couple of months back. There are a few initialization steps required to obtain a client:
{ok, TFactory} =
thrift_socket_transport:new_transport_factory(
"localhost", 8899, []),
{ok, PFactory} =
thrift_binary_protocol:new_protocol_factory(TFactory, []),
{ok, Protocol} = PFactory(),
{ok, Client} = thrift_client:new(Protocol, scribe_thrift),
For more context, you can probably take a look at a module from my git repo.
I have included cqerl, the Erlang Cassandra driver, in my Elixir project.
According to the documentation, the Erlang syntax to connect is:
{ok, Client} = cqerl:new_client({}).
I just do not know how to translate the above to Elixir syntax.
As you are using this Erlang library from Elixir, you have to call the Erlang module like this:
{:ok, client} = :cqerl.new_client({})
If you want to invoke Cassandra using a specific address you can create a new client as described in the cqerl documentation:
{:ok, client} = :cqerl.new_client({"127.0.0.1", 9042})
or, if you intend to pass in more options, such as authentication, as the second parameter (it is usually a bad idea to put your password in the code; prefer environment variables or a config file ignored by git):
{:ok, client} = :cqerl.new_client({"127.0.0.1", 9042}, , [{auth, {cqerl_auth_plain_handler, [{"Your-Username", "Your-Password"}]}}])
One of my websites uses Nitrogen with a Cowboy server.
I would like to log every access to web pages just like Apache does with access.log.
What would be the best way to do that?
You can use Cowboy middlewares: https://ninenines.eu/docs/en/cowboy/1.0/guide/middlewares/
Just create a simple log module:
-module(app_web_log).
-behaviour(cowboy_middleware).
-export([execute/2]).
execute(Req, Env) ->
{{Peer, _}, Req2} = cowboy_req:peer(Req),
{Method, Req3} = cowboy_req:method(Req2),
{Path, Req4} = cowboy_req:path(Req3),
error_logger:info_msg("~p: [~p]: ~p ~p", [calendar:universal_time(), Peer, Method, Path]),
{ok, Req4, Env}.
and add it to the list of middlewares:
{ok, _} = cowboy:start_http(http, 100, [{port, 8080}], [
{env, [{dispatch, Dispatch}]},
{middlewares, [cowboy_router, app_web_log, cowboy_handler]}]).
Try using Nitrogen on top of the Yaws web server instead, since it performs access logging by default.
Each underlying webserver does it differently (or not at all) - this is something simple_bridge has not yet abstracted.
So in the case of cowboy, you'll likely have to rig it up yourself.
If you're using a newer build of Nitrogen (if you have the file site/src/nitrogen_main_handler.erl), then you can edit that file to log manually yourself. For example, using Erlang's error_logger, you could add something simple like:
log_request() ->
error_logger:info_msg("~p: [~p]: ~p", [{date(), time()}, wf:peer_ip(), wf:url()]).
run() ->
handlers(),
log_request(), %% <--- insert before wf_core:run()
wf_core:run().
Then whatever happens with the log can be handled by configuring error_logger to write to disk (http://erldocs.com/17.0/kernel/error_logger.html?i=13&search=error_logger#logfile/1)
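For example, a minimal sketch of sending error_logger output to a file (the path is just an illustration):
error_logger:logfile({open, "log/access.log"}).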
If you use an older Nitrogen (which would have site/src/nitrogen_cowboy.erl), then you would similarly edit that file, once again before the wf_core:run() call.
Alternatively, your hooks option with cowboy could work as well. I've not worked with them, so you're on your own there :)
I want to pass some arguments to the supervisor's init/1 function, and ideally the application's interface would be:
redis_pool:start() % start all instances
redis_pool:start(Names) % start only given instances
Here is the application:
-module(redis_pool).
-behaviour(application).
...
start() -> % start without params
application:ensure_started(?APP_NAME, transient).
start(Names) -> % start with some params
% I want to pass Names to supervisor init function
% in order to do that I have to bypass application:ensure_started
% which is not GOOD :(
application:load(?APP_NAME),
case start(normal, [Names]) of
{ok, _Pid} -> ok;
{error, {already_started, _Pid}} -> ok
end.
start(_StartType, StartArgs) ->
redis_pool_sup:start_link(StartArgs).
Here is the supervisor:
init([]) ->
{ok, Config} = get_config(),
Names = proplists:get_keys(Config),
init([Names]);
init([Names]) ->
{ok, Config} = get_config(),
PoolSpecs = lists:map(fun(Name) ->
PoolName = pool_utils:name_for(Name),
{[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
PoolArgs = [{name, {local, PoolName}},
{worker_module, eredis},
{size, PoolSize},
{max_overflow, 0}],
poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db])
end, Names),
{ok, {{one_for_one, 10000, 1}, PoolSpecs}}.
As you can see, the current implementation is ugly and may be buggy. The question is: how can I pass some arguments and start the application and supervisor (with the params that were given to start/1)?
One option is to start the application and run the redis pools in two separate phases.
redis_pool:start(),
redis_pool:run([] | Names).
But what if I want to run supervisor children (redis pool) when my app starts?
Thank you.
The application callback Module:start/2 is not an API to call in order to start the application. It is called when the application is started by application:start/1,2. This means that overloading it to provide differing parameters is probably the wrong thing to do.
In particular, application:start will be called directly if someone adds your application as a dependency of theirs (in the foo.app file). At this point, they have no control over the parameters, since they come from your .app file, in the {mod, {Mod, Args}} term.
Some possible solutions:
Application Configuration File
Require that the parameters be in the application configuration file; you can retrieve them with application:get_env/2,3.
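For example, a minimal sketch (the pools key and its value shape are made up for illustration; they would mirror whatever get_config/0 reads today). In the application's .app file:
{env, [{pools, [{cache, {["127.0.0.1", 6379, 0], 10}}]}]}
and in the supervisor's init/1:
{ok, Config} = application:get_env(redis_pool, pools),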
Don't start a supervisor
This means one of two things: becoming a library application (removing the {mod, Mod} term from your .app file) -- you don't need an application behaviour; or starting a dummy supervisor that does nothing.
Then, when someone wants to use your library, they can call an API to create the pool supervisor, and graft it into their supervision tree. This is what poolboy does with poolboy:child_spec.
Or, your application-level supervisor can be a normal supervisor, with no children by default, and you can provide an API to start children of that, via supervisor:start_child. This is (more or less) what cowboy does.
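A minimal sketch of that last approach (add_pool/1 and pool_spec/1 are placeholder names, and the supervisor is assumed to be registered locally as ?MODULE):
%% top-level supervisor: no children by default
init([]) ->
    {ok, {{one_for_one, 10, 10}, []}}.

%% API used by callers to graft a pool under the running supervisor
add_pool(Name) ->
    supervisor:start_child(?MODULE, pool_spec(Name)).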
You can pass arguments in the AppDescr argument to application:load/1 (though it's a mighty big tuple already...) as {mod, {Module, StartArgs}} according to the docs ("according to the docs" as in, I don't recall doing it this way myself, ever: http://www.erlang.org/doc/apps/kernel/application.html#load-1).
application:load({application, some_app, [{mod, {Module, [Stuff]}}]})
Without knowing anything about the internals of the application you're starting, it's hard to say which way is best, but a common way to do this is to start up the application and then send it a message containing the data you want it to know.
You could make receipt of that message trigger a configuration assertion procedure, so that the same message you send on startup is also the same sort of thing you would send to reconfigure it on the fly. I find this more useful than one-shotting arguments on startup.
In any case, it is usually better to think in terms of starting something, then asking it to do something for you, than to try telling it everything in init parameters. This can be as simple as having it start up and wait for some message that will tell the listener to then spin up the supervisor the way you're trying to here -- isolated one step from the application inclusion issues RL mentioned in his answer.
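As a sketch of that "start first, configure by message" idea (the redis_pool_server name, the message shape and the configure/1 helper are all made up for illustration):
%% after application:ensure_started(redis_pool, transient):
gen_server:cast(redis_pool_server, {configure, Names}),

%% in the receiving gen_server:
handle_cast({configure, Names}, State) ->
    ok = configure(Names),   %% e.g. start the pool children on demand
    {noreply, State}.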
I am trying to write a first program in Erlang that passes messages between a client and a server. In theory the server exits when it receives no message from the client, but every time I edit the client code and run the server again, it executes the old code. I have to ^G>q>erl>[re-enter command] to get it to see the new code.
-module(srvEsOne).
%%
%% export functions
%%
-export([start/0]).
%%function definition
start()->
io:format("Server: Starting at pid: ~p \n",[self()]),
case lists:member(serverEsOne, registered()) of
true ->
unregister(serverEsOne); %if the token is present, remove it
false ->
ok
end,
register(serverEsOne,self()),
Pid = spawn(esOne, start,[self()]),
loop(false, false,Pid).
%
loop(Prec, Nrec,Pd)->
io:format("Server: I am waiting to hear from: ~p \n",[Pd]),
case Prec of
true ->
case Nrec of
true ->
io:format("Server: I reply to ~p \n",[Pd]),
Pd ! {reply, self()},
io:format("Server: I quit \n",[]),
ok;
false ->
receiveLoop(Prec,Nrec,Pd)
end;
false ->
receiveLoop(Prec,Nrec,Pd)
end.
receiveLoop(Prec,Nrec,Pid) ->
receive
{onPid, Pid}->
io:format("Server: I received a message to my pid from ~p \n",[Pid]),
loop(true, Nrec,Pid);
{onName,Pid}->
io:format("Server: I received a message to name from ~p \n",[Pid]),
loop(Prec,true,Pid)
after
5000->
io:format("Server: I received no messages, i quit\n",[]),
ok
end.
And the client code reads
-module(esOne).
-export([start/1, func/1]).
start(Par) ->
io:format("Client: I am ~p, i was spawned by the server: ~p \n",[self(),Par]),
spawn(esOne, func, [self()]),
io:format("Client: Now I will try to send a message to: ~p \n",[Par]),
Par ! {self(), hotbelgo},
serverEsOne ! {self(), hotbelgo},
ok.
func(Parent)->
io:format("Child: I am ~p, i was spawned from ~p \n",[self(),Parent]).
The server is failing to receive a message from the client, but I can't sensibly begin to debug that until I can try changes to the code in a more straightforward manner.
When you make a modification to a module you need to compile it.
If you do it in an Erlang shell using the command c(module) or c(module, [options]), the newly compiled version of the module is automatically loaded in that shell. It will be used by all the new processes you launch.
For the processes that are already alive and using it, the situation is more complex to explain, and I think it is not what you are asking about.
If you have several Erlang shells running, only the one where you compiled the module loads the new version. In the other shells, if the module was previously loaded, the new version is ignored, even if the corresponding processes have terminated.
Same thing if you use the command erlc to compile.
In all these cases, you need to explicitly load the module with the command l(module) in the shell.
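For example, in the shell where you edited the client module:
1> c(esOne).
{ok,esOne}
and, in any other running shell (or after compiling with erlc), explicitly reload it:
2> l(esOne).
{module,esOne}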
Your server loop contains only local function calls. Running code is replaced only when there is a remote (also called external, i.e. fully qualified) function call. So you have to export your loop function first:
-export([loop/3]).
and then change all the loop/3 calls in receiveLoop/3 to
?MODULE:loop(...)
Alternatively you can do the same thing with receiveLoop/3 instead. Best practice for serious applications is to do hot code swapping on demand, i.e. switch to the remote/external call of loop/3 only after receiving some special message.
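A minimal sketch of that change applied to the server module above (only the affected parts are shown):
-export([loop/3]).

receiveLoop(Prec, Nrec, Pid) ->
    receive
        {onPid, Pid} ->
            io:format("Server: I received a message to my pid from ~p \n", [Pid]),
            ?MODULE:loop(true, Nrec, Pid);   %% fully qualified call picks up newly loaded code
        {onName, Pid} ->
            io:format("Server: I received a message to name from ~p \n", [Pid]),
            ?MODULE:loop(Prec, true, Pid)
    after 5000 ->
        io:format("Server: I received no messages, i quit\n", []),
        ok
    end.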
Using the default Erlang installation what is the minimum code needed to produce a "Hello world" producing web server?
Taking "produce" literally, here is a pretty small one. It doesn't even read the request (but does fork on every request, so it's not as minimal possible).
-module(hello).
-export([start/1]).
start(Port) ->
spawn(fun () -> {ok, Sock} = gen_tcp:listen(Port, [{active, false}]),
loop(Sock) end).
loop(Sock) ->
{ok, Conn} = gen_tcp:accept(Sock),
Handler = spawn(fun () -> handle(Conn) end),
gen_tcp:controlling_process(Conn, Handler),
loop(Sock).
handle(Conn) ->
gen_tcp:send(Conn, response("Hello World")),
gen_tcp:close(Conn).
response(Str) ->
B = iolist_to_binary(Str),
iolist_to_binary(
io_lib:fwrite(
"HTTP/1.0 200 OK\nContent-Type: text/html\nContent-Length: ~p\n\n~s",
[size(B), B])).
For a web server using only the built-in libraries, check out inets' httpd.
When you need a bit more power but still want simplicity, check out the mochiweb library. You can google for loads of example code.
Do you actually want to write a web server in Erlang, or do you want an Erlang web server so that you can create dynamic web content using Erlang?
If the latter, try YAWS. If the former, have a look at the YAWS source code for inspiration
Another way, similar to the gen_tcp example above but with less code and already offered as a suggestion, is using the inets library.
%%%
%%% A simple "Hello, world" server in the Erlang.
%%%
-module(hello_erlang).
-export([
main/1,
run_server/0,
start/0
]).
main(_) ->
start(),
receive
stop -> ok
end.
run_server() ->
ok = inets:start(),
{ok, _} = inets:start(httpd, [
{port, 0},
{server_name, "hello_erlang"},
{server_root, "/tmp"},
{document_root, "/tmp"},
{bind_address, "localhost"}
]).
start() -> run_server().
Keep in mind, this exposes your /tmp directory.
To run, simply:
$ escript ./hello_erlang.erl
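Note that with {port, 0} an arbitrary free port should be picked for you. A sketch of how you could find out which one it chose (keeping the httpd pid instead of discarding it; Options stands for the same option list as above):
{ok, Pid} = inets:start(httpd, Options),
[{port, Port}] = httpd:info(Pid, [port]),
io:format("httpd listening on port ~p~n", [Port]).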
For a very easy to use webserver for building restful apps or such check out the gen_webserver behaviour: http://github.com/martinjlogan/gen_web_server.
Just one fix for Felix's answer, and it addresses the issues Martin is seeing: before closing the socket, all data sent by the client should be received (using, for example, do_recv from the gen_tcp documentation).
Otherwise there is a race condition: the browser/proxy may not have finished sending the HTTP request before the socket is closed.
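A minimal sketch of that fix applied to the handle/1 function above (the 100 ms receive timeout is an arbitrary choice; the sockets are in passive mode, so gen_tcp:recv/3 works here):
handle(Conn) ->
    gen_tcp:send(Conn, response("Hello World")),
    drain(Conn),                 %% read whatever the client sent before closing
    gen_tcp:close(Conn).

drain(Conn) ->
    case gen_tcp:recv(Conn, 0, 100) of
        {ok, _Data} -> drain(Conn);
        {error, _}  -> ok
    end.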