I want to clean up a temporary directory used for collecting resources.
The file module only has del_dir/1, which requires the directory to be empty, and there is no function that returns all the files in a directory with absolute paths.
My source code is as follows; how can I correct it?
delete_path(X) ->
    {ok, List} = file:list_dir_all(X), %% <--- return value has no absolute path here
    lager:debug("_229:~n\t~p", [List]),
    lists:map(fun(X) ->
                      lager:debug("_231:~n\t~p", [X]),
                      ok = file:delete(X)
              end, List),
    ok = file:del_dir(X),
    ok.
You can delete a directory with a shell command via os:cmd/1, though it's a rough approach. On a unix-like OS it would be:
os:cmd("rm -Rf " ++ DirPath).
If you want to delete a non-empty directory using the appropriate Erlang functions, you have to do it recursively. The following example from here shows how to do it:
-module(directory).
-export([del_dir/1]).

del_dir(Dir) ->
    lists:foreach(fun(D) ->
                          ok = file:del_dir(D)
                  end, del_all_files([Dir], [])).

del_all_files([], EmptyDirs) ->
    EmptyDirs;
del_all_files([Dir | T], EmptyDirs) ->
    {ok, FilesInDir} = file:list_dir(Dir),
    {Files, Dirs} = lists:foldl(fun(F, {Fs, Ds}) ->
                                        Path = Dir ++ "/" ++ F,
                                        case filelib:is_dir(Path) of
                                            true ->
                                                {Fs, [Path | Ds]};
                                            false ->
                                                {[Path | Fs], Ds}
                                        end
                                end, {[], []}, FilesInDir),
    lists:foreach(fun(F) ->
                          ok = file:delete(F)
                  end, Files),
    del_all_files(T ++ Dirs, [Dir | EmptyDirs]).
This is now possible with the latest version of Erlang/OTP (not yet released at the time of writing), using the file:del_dir/2 API.
Available options:
recursive: recursively delete the contents of a directory before deleting the directory itself
force: ignore errors when accessing or deleting files or directories
keeptop: the top-most directory is not deleted
Source: https://github.com/erlang/otp/pull/2565
A different approach in Erlang (the only real reason for using it is to maintain platform independence):
-spec rm_rf(file:filename()) -> ok.
rm_rf(Dir) ->
    Paths = filelib:wildcard(Dir ++ "/**"),
    {Dirs, Files} = lists:partition(fun filelib:is_dir/1, Paths),
    ok = lists:foreach(fun file:delete/1, Files),
    Sorted = lists:reverse(lists:sort(Dirs)),
    ok = lists:foreach(fun file:del_dir/1, Sorted),
    file:del_dir(Dir).
The example above makes a few assumptions about the current directory in relation to the target directory -- if this is an issue for your use case it may be advisable to explicitly set a working directory and fully qualify the path to the target.
It is worth noting that a similar approach can be used to recursively copy directories as well.
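For example, a minimal sketch of a recursive copy along the same lines (cp_r/2 is a hypothetical helper, not a library function; it assumes Src has no trailing slash and relies on string:prefix/2 from OTP 20+):
-spec cp_r(file:filename(), file:filename()) -> ok.
cp_r(Src, Dst) ->
    Paths = filelib:wildcard(Src ++ "/**"),
    lists:foreach(
      fun(Path) ->
              Rel = string:prefix(Path, Src ++ "/"),
              Target = filename:join(Dst, Rel),
              case filelib:is_dir(Path) of
                  true ->
                      %% ensure_dir/1 creates the parents of its argument,
                      %% so append "/" to create Target itself
                      ok = filelib:ensure_dir(Target ++ "/");
                  false ->
                      ok = filelib:ensure_dir(Target),
                      {ok, _} = file:copy(Path, Target)
              end
      end, Paths).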
Since Erlang/OTP 23.0 you can use the file:del_dir_r/1 function.
file:del_dir_r(File).
For those still using older Erlang versions, here is a replacement based on the source code of the new official del_dir_r, adjusted to work with older versions:
%% Include file.hrl to have access to the #file_info{} record
-include_lib("kernel/include/file.hrl").

-spec del_dir_r(File) -> ok | {error, Reason} when
      File :: file:name_all(),
      Reason :: file:posix() | badarg.
del_dir_r(File) ->
    case file:read_link_info(File) of
        {ok, #file_info{type = directory}} ->
            case file:list_dir_all(File) of
                {ok, Names} ->
                    lists:foreach(fun(Name) ->
                                          del_dir_r(filename:join(File, Name))
                                  end, Names);
                {error, _Reason} -> ok
            end,
            file:del_dir(File);
        {ok, _FileInfo} -> file:delete(File);
        {error, _Reason} = Error -> Error
    end.
Related
I have an Erlang program that can be compiled for multiple platforms.
What I want is to separate code for the different platforms.
What would be the most Erlang way to go about this?
I want to achieve something like this:
-module(platform_1).
-export([platform_init/0, platform_do_work/1, platform_stop/0]).
...
-module(platform_2).
-export([platform_init/0, platform_do_work/1, platform_stop/0]).
...
-module(main).
-export([start/0]).
main() ->
    platform::init(),
    platform::do_work("The work"),
    platform::stop().
(Obviously the above code does not work, it's missing the part I'm asking about.)
I could name both modules platform and provide only one of them during compilation.
I could use -ifdefs in a platform module to wrap the platform specific modules.
I could use -behavior to specify a common contract.
I could use a header file with -export macros to provide a common contract.
I'm sure there are other solutions out there. I just didn't find any idioms out there for this general use case.
My first instinct here would be to define a new behavior, like:
-module platform.

-type state() :: term().
-type work() :: {some, work} | ….

-callback init() -> {ok, state()}.
-callback do_work(work(), state()) -> {ok, state()}.
-callback stop(state()) -> ok.

-export([run/1]).

-spec run(module()) -> ok.
run(Module) ->
    {ok, State0} = Module:init(),
    {ok, State1} = Module:do_work({some, work}, State0),
    ok = Module:stop(State1).
Then, your modules can implement the behavior (I would probably place them in a folder like src/platforms), and they'll look something like…
-module first_platform.
-behavior platform.

-export [init/0, do_work/2, stop/1].

-spec init() -> {ok, platform:state()}.
init() ->
    …,
    {ok, State}.
…
And your main module can have a macro or an environment variable or something where it can retrieve the platform to use, and look something like…
-module main.

-export [start/0].

-spec start() -> ok.
start() ->
    Platform =
        application:get_env(your_app, platform, first_platform),
    platform:run(Platform).
or even…
-module main.

-export [start/0].

-spec start() -> ok.
start() ->
    platform:run().
…and you do the figuring out of which platform to use within platform itself.
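In that variant, platform:run/0 can be a thin wrapper around run/1 that reads the configured module itself; a minimal sketch, reusing the hypothetical your_app / first_platform names from above:
-spec run() -> ok.
run() ->
    Platform = application:get_env(your_app, platform, first_platform),
    run(Platform).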
One simple way to achieve what you want is to store the module in a variable and use this variable to call the right module/function.
For example you could start your application with an extra parameter which selects the right platform:
erl [your parameters] -extra "platform=platform_1"
Define your platform-dependent modules as you want
-module(platform_1).
-export([platform_init/0, platform_do_work/1, platform_stop/0]).
...
and
-module(platform_2).
-export([platform_init/0, platform_do_work/1, platform_stop/0]).
...
and retrieve the Platform parameter in the start function
-module(main).
-export([start/0]).

start() ->
    Args = init:get_plain_arguments(),
    [Platform] =
        [list_to_atom(lists:nthtail(9, Y))
         || Y <- Args, string:prefix(Y, "platform=") =/= nomatch],
    Platform:init(),
    Platform:do_work("The work"),
    Platform:stop().
I am not convinced at all that this is the best way, mainly because you need to know, when you use a module, whether it is platform dependent or not (and the code to get the platform is not clean).
I think it would be better if the modules themselves were responsible for the platform choice. I will try to complete this answer as soon as I have some spare time.
I am setting up an ejabberd + riak cluster, where I have to use the basic riak functions (get, put, delete, ...) in the file ejabberd/src/ejabberd_riak.erl.
The functions put, get, get_by_index etc. work great, and from the way the module is used in the file I could figure out what is what.
I'm facing an issue with the function delete_by_index, and also with get_keys_by_index, which is called by delete_by_index anyhow.
The error thrown when I do this:
ejabberd_riak:get_keys_by_index(game, <<"language">>,
                                term_to_binary("English")).
{error,<<"Phase 0: invalid module named in PhaseSpec function:\n must be a valid module name (failed to load ejabberd_r"...>>}
(ejabberd#172.43.12.133)57> 12:28:55.177 [error] database error:
** Function: get_keys_by_index
** Table: game
** Index = <<"language">>
** Key: <<131,107,0,7,69,110,103,108,105,115,104>>
** Error: Phase 0: invalid module named in PhaseSpec function:
must be a valid module name (failed to load ejabberd_riak: nofile)
You should probably load ejabberd_riak on the riak side.
You are currently using riak as a separate Erlang application, communicating with the database over protobuf. In this configuration the module sets loaded in the ejabberd and riak applications are independent from each other: the ejabberd_riak module is loaded in the ejabberd application, but not in the riak application.
However, get_keys_by_index uses mapred, which requires ejabberd_riak to be loaded on the riak side:
-spec get_keys_by_index(atom(), binary(),
                        any()) -> {ok, [any()]} | {error, any()}.
%% @doc Returns a list of primary keys of objects indexed by `Key'.
get_keys_by_index(Table, Index, Key) ->
    {NewIndex, NewKey} = encode_index_key(Index, Key),
    Bucket = make_bucket(Table),
    case catch riakc_pb_socket:mapred(
                 get_random_pid(),
                 {index, Bucket, NewIndex, NewKey},
                 [{map, {modfun, ?MODULE, map_key}, none, true}]) of
                           %% ^^^^^^^
                           %% here is the problem
        {ok, [{_, Keys}]} ->
            {ok, Keys};
        {ok, []} ->
            {ok, []};
        {error, _} = Error ->
            log_error(Error, get_keys_by_index, [{table, Table},
                                                 {index, Index},
                                                 {key, Key}]),
            Error
    end.
You can customize your riak and add ejabberd_riak to the riak application (you don't need to start the whole ejabberd application on the riak side, however).
With the monkeypatching approach, you should copy ejabberd_riak.erl, ejabberd.hrl and logger.hrl to riak/deps/riak_kv/src, then rebuild riak. You need to distribute your files over the whole cluster, as the map phase is executed on each cluster node.
I need to parse a text file. This file comes in a POST parameter. I have code like this:
upload_file('POST', []) ->
    File = Req:post_param("file"),
What should I do next? How do I parse it?
What's inside Req:post_param("file")?
You assume it's a path to a file: have you checked the value of File?
Anyway, Req:post_files/0 is probably what you are looking for:
[{_, _FileName, TempLocation, _Size}|_] = Req:post_files(),
{ok,Data} = file:read_file(TempLocation),
It's also probably a Bad Idea to leave the file at its temporary location; you'd better find a more suitable place to store it.
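For instance, a rough sketch that moves the upload somewhere permanent right away (the uploads directory is an assumption, and file:rename/2 does not work across filesystems, so adjust as needed):
[{_, FileName, TempLocation, _Size}|_] = Req:post_files(),
ok = file:rename(TempLocation, filename:join("priv/uploads", FileName)),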
It seems the uploaded_file record has 5 fields now (and has had for about 10 months by now).
This is the updated example, with the fifth field:
[{_, _FileName, TempLocation, _Size, _Name}|_] = Req:post_files(),
{ok,Data} = file:read_file(TempLocation),
Oh, and because it's a record, the following example should work even if the definition gets updated once again:
[File|_] = Req:post_files(),
{ok,Data} = file:read_file(File#uploaded_file.temp_file),
Another warning: the code above, as any erlanger will see, only deals with the first and, probably most of the time, only uploaded file. Should more files be uploaded at the same time, they would be ignored.
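If you need to handle every upload, a minimal sketch that iterates over the whole list (same assumptions about Req and the #uploaded_file record as above):
AllData = [begin
               {ok, Data} = file:read_file(F#uploaded_file.temp_file),
               Data
           end || F <- Req:post_files()],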
The answer really depends on the content of "File". For example, if the file contains a string respecting Erlang syntax, such as:
[{{20,4},0},
{{10,5},0},
{{24,1},0},
{{22,1},0},
{{10,6},0}].
it can be read with this code:
File = Req:post_param("file"),
{ok,B} = file:read_file(File),
{ok,Tokens,_} = erl_scan:string(binary_to_list(B)),
{ok,Term} = erl_parse:parse_term(Tokens),
%% at this point Term = [{{20,4},0},{{10,5},0},{{24,1},0},{{22,1},0},{{10,6},0}]
[Edit]
The Erlang libraries most of the time use tuples as return values; this helps to manage the normal case and the error cases. In the previous code, all lines are pattern matched to the success case only. That means it will crash if any of the operations fails. If the surrounding code catches the error, you will be able to manage the error case; otherwise the process will simply die reporting a badmatch error.
I chose this implementation because at this level of code there is nothing that can be done to deal with an error. {badmatch,{error,enoent}} simply means that the return value of file:read_file(File) is not of the form {ok,B} as expected, but is {error,enoent}, which means that the file File does not exist in the current path.
Extract of the documentation:
read_file(Filename) -> {ok, Binary} | {error, Reason}
Types:
Filename = name()
Binary = binary()
Reason = posix() | badarg | terminated | system_limit
Returns {ok, Binary}, where Binary is a binary data object that contains the contents of Filename, or {error, Reason} if an error occurs.
Typical error reasons:
enoent
The file does not exist.
eacces
Missing permission for reading the file, or for searching one of the parent directories.
eisdir
The named file is a directory.
enotdir
A component of the file name is not a directory. On some platforms, enoent is returned instead.
enomem
There is not enough memory for the contents of the file.
In my opinion the calling code should manage this case if it is a real use case (for example, if File comes from a user interface), or leave the error unmanaged if this case should not happen. In your case you could do:
try file_to_term(Params) % call the above code with the relevant Params
catch
    error:{badmatch,{error,enoent}} -> file_does_not_exist_management();
    error:{badmatch,{error,eacces}} -> file_access_management();
    error:{badmatch,{error,Error}} -> file_error(Error)
end
I have an Erlang application based on Cowboy and I would like to test it.
Previously I used wooga's library etest_http for this kind of task, but I would like to start using Common Test, since I noticed that this is what cowboy uses. I have tried to set up a very basic test but I am not able to run it properly.
Can anybody provide me a sample for testing the basic example echo_get and tell me the correct way to run the test from the console using the Makefile contained in the example?
The example Makefile is used only to build the echo_get app. So to test the echo_get app you can add a test suite and call make && rebar -v 1 skip_deps=true ct (rebar should be in PATH) from the shell.
You also need etest and etest_http in your Erlang path, or add them with rebar.config in your application. You can use httpc or curl with os:cmd/1 instead of etest_http :)
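A rebar.config deps entry could look roughly like this (repository URLs and branches are assumptions; adjust to the versions you actually use):
{deps, [
    {etest,      ".*", {git, "https://github.com/wooga/etest.git",      {branch, "master"}}},
    {etest_http, ".*", {git, "https://github.com/wooga/etest_http.git", {branch, "master"}}}
]}.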
test/my_test_SUITE.erl (full example)
-module(my_test_SUITE).
-compile(export_all).

-include_lib("common_test/include/ct.hrl").
% etest macros
-include_lib("etest/include/etest.hrl").
% etest_http macros
-include_lib("etest_http/include/etest_http.hrl").

suite() ->
    [{timetrap,{seconds,30}}].

init_per_suite(Config) ->
    %% start your app or release here
    %% for example start the echo_get release
    os:cmd("./_rel/bin/echo_get_example start"),
    Config.

end_per_suite(_Config) ->
    %% stop your app or release here
    %% for example stop the echo_get release
    os:cmd("./_rel/bin/echo_get_example stop"),
    ok.

init_per_testcase(_TestCase, Config) ->
    Config.

end_per_testcase(_TestCase, _Config) ->
    ok.

all() ->
    [my_test_case].

my_test_case(_Config) ->
    Response = ?perform_get("http://localhost:8080/?echo=saymyname"),
    ?assert_status(200, Response),
    ?assert_body("saymyname", Response),
    ok.
The following starts the hello_world application and its dependencies, but uses the most recently compiled versions rather than the one in ./_rel; that may or may not be what you want, but it does avoid the timer:sleep(1000)
-module(hello_world_SUITE).

-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_suite/1, end_per_suite/1]).
-export([http_get_hello_world/1]).

all() ->
    [http_get_hello_world].

init_per_suite(Config) ->
    {ok, App_Start_List} = start([hello_world]),
    inets:start(),
    [{app_start_list, App_Start_List}|Config].

end_per_suite(Config) ->
    inets:stop(),
    stop(?config(app_start_list, Config)),
    Config.

http_get_hello_world(_Config) ->
    {ok, {{_Version, 200, _ReasonPhrase}, _Headers, Body}} =
        httpc:request(get, {"http://localhost:8080", []}, [], []),
    Body = "Hello World!\n".

start(Apps) ->
    {ok, do_start(_To_start = Apps, _Started = [])}.

do_start([], Started) ->
    Started;
do_start([App|Apps], Started) ->
    case application:start(App) of
        ok ->
            do_start(Apps, [App|Started]);
        {error, {not_started, Dep}} ->
            do_start([Dep|[App|Apps]], Started)
    end.

stop(Apps) ->
    _ = [ application:stop(App) || App <- Apps ],
    ok.
Use https://github.com/extend/gun.
To run the provided example (assuming 'gun' is in the 'user' folder):
ct_run -suite twitter_SUITE.erl -logdir ./results -pa /home/user/gun/deps/*/ebin /home/user/gun/ebin
In lager.erl (the main module of https://github.com/basho/lager) there is no function named "debug", but I have an application that calls the debug function from the lager module like:
lager:debug(Str, Args)
I am a beginner in Erlang, but I know that when we call a function from a module like "mymodule:myfunction" there should be a function named "myfunction" in the file mymodule.erl. In this case, when I search lager.erl for the function "debug", I can't find it.
The reason you don't see a definition of lager:debug/2 is that lager uses a parse transform. When your code is compiled, it is fed through lager's parse transform, and the call to lager:debug/2 is replaced with a call to another module and function.
If you compile your code with the correct parse transform option for lager, the code will work.
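Enabling it typically means adding {parse_transform, lager_transform} to the compile options, for example:
%% per module:
-compile([{parse_transform, lager_transform}]).
%% or project-wide, in rebar.config:
{erl_opts, [{parse_transform, lager_transform}]}.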
"I GIVE CRAP ANSWERS" gave a good explanation of this strange behaviour. I post here a code that should show you what was the code generated in the beam file:
In the shell:
utility:decompile(["yourfile.beam"]).
%% Author: PCHAPIER
%% Created: 25 May 2010
-module(utility).

%%
%% Include files
%%

%%
%% Exported Functions
%%
-export([decompile/1, decompdir/1]).
-export([shuffle/1]).

%%
%% API Functions
%%
decompdir(Dir) ->
    Cmd = "cd " ++ Dir,
    os:cmd(Cmd),
    L = os:cmd("dir /B *.beam"),
    L1 = re:split(L, "[\t\r\n+]", [{return, list}]),
    io:format("decompdir: ~p~n", [L1]),
    decompile(L1).

decompile(Beam = [H|_]) when is_integer(H) ->
    io:format("decompile: ~p~n", [Beam]),
    {ok, {_, [{abstract_code, {_, AC}}]}} = beam_lib:chunks(Beam ++ ".beam", [abstract_code]),
    {ok, File} = file:open(Beam ++ ".erl", [write]),
    io:fwrite(File, "~s~n", [erl_prettypr:format(erl_syntax:form_list(AC))]),
    file:close(File);
decompile([H|T]) ->
    io:format("decompile: ~p~n", [[H|T]]),
    decompile(removebeam(H)),
    decompile(T);
decompile([]) ->
    ok.

shuffle(P) ->
    Max = length(P)*10000,
    {_, R} = lists:unzip(lists:keysort(1, [{random:uniform(Max), X} || X <- P])),
    R.

%%
%% Local Functions
%%
removebeam(L) ->
    removebeam1(lists:reverse(L)).

removebeam1([$m,$a,$e,$b,$.|T]) ->
    lists:reverse(T);
removebeam1(L) ->
    lists:reverse(L).
You don't see it in the lager.erl file because it's in the lager.hrl file that's included at the top of lager.erl. Erlang allows you to include a file with the -include("filename.hrl") directive. As a convention the include files end in an hrl extension but it could really be anything.
https://github.com/basho/lager/blob/master/include/lager.hrl
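For reference, a minimal sketch of how such includes look in a module (the file names here are hypothetical):
-module(mymodule).
%% pulls macro and record definitions from my_defs.hrl into this module
-include("my_defs.hrl").
%% -include_lib resolves the path relative to another application's lib directory
-include_lib("kernel/include/file.hrl").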