I am writing an hg (Mercurial) client. For the tags command, the response is as follows:
<<"adding a\na\ncommitted changeset
0:44108598f0ec643e7d90e9f18a2b6740401a510a\ntip
1:ce4daf41b6ae\nmy tags
0:44108598f0ec\ntest tag 0:44108598f0ec
local\n">>.
The related Python code is as follows:
t = []
for line in out.splitlines():
    taglocal = line.endswith(' local')
    if taglocal:
        line = line[:-6]
    name, rev = line.rsplit(' ', 1)
    rev, node = rev.split(':')
    t.append((name.rstrip(), int(rev), node, taglocal))
return t
I have to check every line for the " local" suffix, but the compiler gives a syntax error. How can I write this correctly and elegantly?
error message:
src/emercurial_client.erl:763: illegal pattern
Code
process_tags(List) ->
    process_tags(List, []).

process_tags([], Result) ->
    lists:reverse(Result);
process_tags([Line|Rest], Result) ->
    B = binary_to_list(Line),
    A = process_tags_line(B),
    process_tags(Rest, [A|Result]).

process_tags_line(New_list ++ "local") ->   %% <----- error here
    process_tags_line(New_list);
process_tags_line(New_list) ->
    %% case List of
    %%     Data ++ " local" ->   %% <----- also a match error
    %%         New_list = Data;
    %%     _ ->
    %%         New_list = List
    %% end,
    [Name, Part2] = string:tokens(New_list, " "),
    [Rev, Node] = string:tokens(Part2, ":"),
    {trim(Name), love_misc:to_integer(Rev), Node, New_list}.
After modification, it is as follows:
process_tags(List) ->
    List_b = binary:split(List, <<$\n>>, [global]),
    Result = process_tags(List_b, []),
    %% error_logger:info_report([client_process_tags, Result]),
    Result.

process_tags([], Result) ->
    lists:reverse(Result);
process_tags([<<>>], Result) ->
    lists:reverse(Result);
process_tags([Line|Rest], Result) ->
    B = binary_to_list(Line),
    A = process_tags_line(B),
    process_tags(Rest, [A|Result]).

process_tags_line(List) ->
    %% error_logger:info_report([client_tags_line_1, List]),
    case lists:suffix(" local", List) of
        true ->
            New_list = lists:sublist(List, 1, length(List) - 6);  %% " local" is 6 characters
        _ ->
            New_list = List
    end,
    {Name, Part2} = rsplit(New_list, $\s),
    {Rev, Node} = rsplit(Part2, $:),
    Rev_a = string:substr(Rev, 1, length(Rev) - 1),  %% drop the trailing $: kept by rsplit/2
    {love_misc:trim(Name), love_misc:to_integer(Rev_a), Node, New_list}.

rsplit(A, Char) ->
    Index = string:rchr(A, Char),
    lists:split(Index, A).
As pointed out in the documentation, you can only match prefixes this way (which is in fact just syntactic sugar).
I'd suggest you use the function lists:suffix/2.
So you can rewrite your code like this:
New_list =
    case lists:suffix(" local", List) of
        true ->
            lists:sublist(List, length(List) - 6);   %% the line with the trailing " local" stripped
        false ->
            List
    end
Note that a case expression returns a value, so you can bind the variable New_list only once, to the result of the case expression, instead of binding it in each branch of the case expression.
I think you cannot use this kind of pattern matching because of the underlying structure of a list ([A|[B|[...|[]]]]).
Reversing the list works, so you can do something like:
process_tags_line(List) ->
    process_tags_line1(lists:reverse(List)).

process_tags_line1("lacol " ++ L) -> process_tags_line1(L);   %% "lacol " is " local" reversed
process_tags_line1(L) ->
    New_list = lists:reverse(L),
    [Name, Part2] = string:tokens(New_list, " "),
    [Rev, Node] = string:tokens(Part2, ":"),
    {trim(Name), love_misc:to_integer(Rev), Node, New_list}.
But the simplest thing may be to use lists:suffix(L1,L2)...
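For completeness, here is a minimal sketch of that lists:suffix/2 approach; the helper name strip_local/1 is my own, and it also keeps the boolean flag that the Python version carries around as taglocal:

%% Sketch only: strip a trailing " local" marker and report whether it was there.
strip_local(Line) ->
    case lists:suffix(" local", Line) of
        true  -> {lists:sublist(Line, length(Line) - 6), true};   %% " local" is 6 characters
        false -> {Line, false}
    end.

process_tags_line/1 can then call strip_local/1 first and include the flag in its result tuple, mirroring the (name, rev, node, taglocal) tuples built by the Python code.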
I have the following functions:
search(DirName, Word) ->
    NumberedFiles = list_numbered_files(DirName),
    Words = make_filter_mapper(Word),
    Index = mapreduce(NumberedFiles, Words, fun remove_duplicates/3),
    dict:find(Word, Index).

list_numbered_files(DirName) ->
    {ok, Files} = file:list_dir(DirName),
    FullFiles = [filename:join(DirName, File) || File <- Files],
    Indices = lists:seq(1, length(Files)),
    lists:zip(Indices, FullFiles). % {Index, FileName} tuples

make_filter_mapper(MatchWord) ->
    fun (_Index, FileName, Emit) ->
        {ok, [Words]} = file:consult(FileName), %% <---- Line 20
        lists:foreach(fun (Word) ->
                          case MatchWord == Word of
                              true -> Emit(Word, FileName);
                              false -> false
                          end
                      end, Words)
    end.

remove_duplicates(Word, FileNames, Emit) ->
    UniqueFiles = sets:to_list(sets:from_list(FileNames)),
    lists:foreach(fun (FileName) -> Emit(Word, FileName) end, UniqueFiles).
However, when I call search(Path_to_Dir, Word) I get:
Error in process <0.185.0> with exit value:
{{badmatch,{error,{1,erl_parse,["syntax error before: ","wordinfile"]}}},
[{test,'-make_filter_mapper/1-fun-1-',4,[{file,"test.erl"},{line,20}]}]}
And I do not understand why. Any ideas?
The Words variable will match the whole content of the list, which might contain not just one term but many of them. Try matching {ok, Words} instead of {ok, [Words]}.
Besides the fact that file:consult/1 may return a list of several elements, so you should replace {ok, [Words]} (which expects a list of exactly one element, Words) with {ok, Words}, it is actually returning a syntax error, which means there is a syntax error in the file you are reading.
Remember that the file should contain only valid Erlang terms, each terminated by a dot. The most common error is to forget a dot or to replace it with a comma.
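For illustration, a hypothetical words.txt that file:consult/1 can read could look like this (every term ends with a dot):

%% words.txt -- one Erlang term per entry, each terminated by a dot
[wordinfile, anotherword, yetanotherword].
{source, "notes"}.

file:consult("words.txt") would then return {ok, [[wordinfile, anotherword, yetanotherword], {source, "notes"}]}, one list element per term in the file, which is why {ok, Words} is the safer match.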
I want to write an application that reads IP addresses from an XML file. The file looks like this:
<range>
  <start>192.168.40.1</start>
  <end>192.168.50.255</end>
  <subnet>255.255.255.0</subnet>
  <gateway>192.168.50.1</gateway>
</range>
I created a record type to store the IP addresses:
type Scope = { Start: IPAddress; End: IPAddress; Subnetmask: IPAddress; Gateway: IPAddress }
I wrote a function returning unit that prints the IPs:
loc
|> Seq.iter (fun e ->
    match e.Name.LocalName with
    | "start" -> printfn "Start %s" e.Value
    | "end" -> printfn "End %s" e.Value
    | "subnet" -> printfn "Subnet %s" e.Value
    | "gateway" -> printfn "Gateway %s" e.Value
    | _ -> ())
How can I return the Scope record type instead of unit?
As mentioned in the comments, the XML type provider makes this a lot easier. You can just point it at a sample file; it will infer the structure and let you read the file easily:
type RangeFile = XmlProvider<"sample.xml">
let range = RangeFile.Load("file-you-want-to-read.xml")
let scope =
  { Start = IPAddress.Parse(range.Start)
    End = IPAddress.Parse(range.End)
    Subnetmask = IPAddress.Parse(range.Subnet)
    Gateway = IPAddress.Parse(range.Gateway) }
That said, you can certainly implement this yourself too. The code you wrote is a good start. There are a number of ways to do this, but in any case you'll need to do some lookup based on the local name of the element (to find start, end, etc.).
One option is to load all the properties into a dictionary:
let lookup =
  loc
  |> Seq.map (fun e -> e.Name.LocalName, IPAddress.Parse(e.Value))
  |> dict
Now you have a lookup table that contains an IPAddress for each of the keys, so you can create the Scope value with just:
let scope =
  { Start = lookup.["start"]; End = lookup.["end"];
    Subnetmask = lookup.["subnet"]; Gateway = lookup.["gateway"] }
That said, the nice thing about the XML type provider is that it removes the need to do lookups based on string values, so you are less likely to make mistakes caused by typos.
I have the following code, which should return a seq of DownloadLink for those URLs that can be parsed.
type DownloadLink = { Url: string; Period: DateTime }
nodes |> Seq.map (fun n ->
    let url = n.Attributes.["href"].Value
    match url with
    | Helper.ParseRegex "[a-zA-Z](?<period>\d{4})\.txt" [period] ->
        { Url = url; Period = period }
    | _ ->
        printfn "Cannot parse %s" url // Error
)
However, I get the following error at the printfn. What's the right way to implement this? Should I map to an option first and then filter out the None items?
Error 1 Type mismatch. Expecting a
string -> DownloadLink
but given a
string -> unit
The type 'DownloadLink' does not match the type 'unit'
The basic problem is that if you have something like
match x with
| true -> A
| false -> B
the type of A and B must be the same.
There is actually a built-in function that combines the map and the filter-using-Some that you had thought of: use Seq.choose like so:
nodes |> Seq.choose (fun n ->
    let url = n.Attributes.["href"].Value
    match url with
    | Helper.ParseRegex "[a-zA-Z](?<period>\d{4})\.txt" [period] ->
        Some ({ Url = url; Period = period })
    | _ ->
        printfn "Cannot parse %s" url // Error
        None
)
Aside from Seq.choose, you can also solve the problem nicely using sequence expressions, where you can use yield to return a result in one branch but do not have to produce a value in the other branch:
seq { for n in nodes do
        let url = n.Attributes.["href"].Value
        match url with
        | Helper.ParseRegex "[a-zA-Z](?<period>\d{4})\.txt" [period] ->
            yield { Url = url; Period = period }
        | _ ->
            printfn "Cannot parse %s" url }
As an aside, I would not recommend performing a side effect (printing) as part of your processing code. If you want to report errors, it might be better to return an option (or define a type that is either Success or Error of string) so that error reporting is separated from processing.
get_currency() ->
    URL = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.xchange%20where%20pair%20in%20(%22GBPEUR%22)&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys",
    {Result, Info} = httpc:request(URL),
    case Result of
        error ->
            {Result, Info};
        ok ->
            {{_Protocol, Code, _CodeStr}, _Attrs, WebData} = Info,
            WebData
    end.

extract_text(Content) ->
    Item = hd(Content),
    case element(1, Item) of
        xmlText -> Item#xmlText.value;
        _ -> ""
    end.

analyze_info(WebData) ->
    ToFind = [rate],
    Parsed = element(1, xmerl_scan:string(WebData)),
    Children = Parsed#xmlElement.content,
    ElementList = [{El#xmlElement.name, extract_text(El#xmlElement.content)}
                   || El <- Children, element(1, El) == xmlElement],
    lists:map(fun(Item) -> lists:keyfind(Item, 1, ElementList) end, ToFind).
The above is the code I'm using to try to extract the contents of the <Rate> tag from the URL http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.xchange%20where%20pair%20in%20(%22GBPEUR%22)&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys.
Here is what I do in the shell:
inets:start().
XML = scrapetest:get_currency().
scrapetest:analyze_info(XML).
And the return I get is simply "false". I'm not sure what I'm doing wrong.
Just add some logs to your code.
E.g. adding io:format("~p~n", [ElementList]) will show you that ElementList contains only the results tag, and that you need to go one level deeper in your list comprehension to reach the tag named rate.
This is common advice.
In your case, it seems the better option is a recursive find function (if you want to write some code yourself), as sketched below, or to use some batteries, like xmerl_xpath.
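A minimal sketch of such a recursive find, assuming the #xmlElement{} / #xmlText{} records from xmerl.hrl; the function name find_text/2 is my own:

-include_lib("xmerl/include/xmerl.hrl").

%% Sketch: depth-first search for the first element named Name;
%% returns its text content, or undefined if no such element exists.
find_text(_Name, []) ->
    undefined;
find_text(Name, [#xmlElement{name = Name, content = Content} | _Rest]) ->
    lists:flatten([T#xmlText.value || T <- Content, is_record(T, xmlText)]);
find_text(Name, [#xmlElement{content = Content} | Rest]) ->
    case find_text(Name, Content) of
        undefined -> find_text(Name, Rest);
        Found -> Found
    end;
find_text(Name, [_Other | Rest]) ->
    find_text(Name, Rest).

With the document from the question, find_text('Rate', Parsed#xmlElement.content) should return the rate string.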
Here is an example of another analyze_info using xmerl_xpath:
analyze_info(WebData) ->
    Parsed = element(1, xmerl_scan:string(WebData)),
    xmerl_xpath:string("//Rate/text()", Parsed).
This will return:
[{xmlText,[{'Rate',2},{rate,1},{results,1},{query,1}],
1,[],"1.1813",text}]
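If you only need the number itself, a small follow-up sketch (again assuming xmerl.hrl is included) pulls the string out of that result; rate_from/1 is an illustrative name:

%% Sketch: return the text of the first Rate element, e.g. "1.1813".
rate_from(WebData) ->
    Parsed = element(1, xmerl_scan:string(WebData)),
    [#xmlText{value = Rate} | _] = xmerl_xpath:string("//Rate/text()", Parsed),
    Rate.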
I found myself in the position of needing to increment a value that was deeply nested in a series of Erlang records. My first attempts at doing this with list comprehensions were dismal failures. Originally, the list contained a number of records where the target value would be absent because the record that contained it would, at some level, be undefined.
I dealt with that easily enough by using lists:partition to filter out only those entries that actually needed incrementing, but I was still unable to come up with a list comprehension that would do such a simple operation.
The code sample below probably doesn't compile - it is simply to demonstrate what I was trying to accomplish. I put the "case (blah) of undefined" sections to illustrate my original problem:
-record(l3, {key, value}).
-record(l2, {foo, bar, a_thing_of_type_l3}).
-record(l1, {foo, bar, a_thing_of_type_l2}).
increment_values_recursive([], Acc) ->
    Acc;
increment_values_recursive([L1 | L1s], Acc) ->
    case L1#l1.a_thing_of_type_l2 of
        undefined -> NewRecord = L1;
        L2 ->
            case L2#l2.a_thing_of_type_l3 of
                undefined -> NewRecord = L1;
                #l3{key = Key, value = Value} ->
                    NewRecord = L1#l1{a_thing_of_type_l2 =
                                          L2#l2{a_thing_of_type_l3 =
                                                    #l3{key = Key, value = Value + 1}}}
            end
    end,
    increment_values_recursive(L1s, [NewRecord | Acc]).

increment_values(L1s) ->
    lists:reverse(increment_values_recursive(L1s, [])).
........
NewList = increment_values(OldList).
That was what I started with, but I'd be happy to see a list comprehension that would process this when the list didn't have to check for undefined members. Something like this, really:
increment_values_recursive([], Acc) ->
    Acc;
increment_values_recursive([L1 | L1s], Acc) ->
    %% I'm VERY SURE that this doesn't actually compile:
    #l1{l2 = #l2{l3 = #l3{_Key, Value} = L3} = L2} = L1,
    %% same here:
    NewRecord = L1#l1{l2 = L2#l2{l3 = L3#l3{value = Value + 1}}},
    increment_values_recursive(L1s, [NewRecord | Acc]).

increment_values(L1s) ->
    lists:reverse(increment_values_recursive(L1s, [])).
AKA:
typedef struct { int key, value; } l3;
typedef struct { int foo, bar; l3 m_l3; } l2;
typedef struct { int foo, bar; l2 m_l2; } l1;

for (int i = 0; i < NUM_IN_LIST; i++)
{
    objs[i].m_l2.m_l3.value++;
}
You can use a list comprehension, and you don't even need to filter out records that don't have the full nesting.
To avoid readability problems I shortened your record definitions:
-record(l3, {key, value}).
-record(l2, {foo, bar, al3}).
-record(l1, {foo, bar, al2}).
Define a helper function to increment the value:
inc_value(#l1{al2 = #l2{al3 = #l3{value = Value} = L3} = L2} = L1) ->
    L1#l1{al2 = L2#l2{al3 = L3#l3{value = Value + 1}}};
inc_value(R) ->
    R.
Note the last clause, which maps anything that doesn't match the pattern to itself.
Let's define example records to try this out:
1> R=#l1{foo=1, bar=2}.
#l1{foo = 1,bar = 2,al2 = undefined}
This is a record that doesn't have the full nesting defined.
2> R1=#l1{foo=1, bar=2, al2=#l2{foo=3, bar=4, al3=#l3{key=mykey, value=10}}}.
#l1{foo = 1,bar = 2,
al2 = #l2{foo = 3,bar = 4,
al3 = #l3{key = mykey,value = 10}}}
Another one that has the full structure.
Try out the helper function:
4> inc_value(R).
#l1{foo = 1,bar = 2,al2 = undefined}
It leaves alone the not fully nested record.
3> inc_value(R1).
#l1{foo = 1,bar = 2,
al2 = #l2{foo = 3,bar = 4,
al3 = #l3{key = mykey,value = 11}}}
It increments the fully nested record ok.
Now the list comprehension is simple and readable:
5> [ inc_value(X) || X <- [R, R1] ].
[#l1{foo = 1,bar = 2,al2 = undefined},
#l1{foo = 1,bar = 2,
al2 = #l2{foo = 3,bar = 4,
al3 = #l3{key = mykey,value = 11}}}]
This is waaaay messier than it would be in a language with destructive mutation, but it is definitely possible. Here's the dirt:
increment(Records) ->
    [L1#l1{l2 = (L1#l1.l2)#l2{l3 = ((L1#l1.l2)#l2.l3)#l3{value = ((L1#l1.l2)#l2.l3)#l3.value + 1}}}
     || L1 <- Records].
As you can see, this is ugly as hell; furthermore, it's difficult to immediately apprehend what this comprehension is doing. It's straightforward to figure out what's going on, but I'd have a talk with anyone in my shop who wrote something like this. Much better to simply accumulate and reverse - the Erlang compiler and runtime are very good at optimizing this sort of pattern.
It is not as hard as it seems. @Peer Stritzinger gave a good answer, but here is my take, with a clean list comprehension:
-record(l3, {key, value}).
-record(l2, {foo=foo, bar=bar, al3}).
-record(l1, {foo=foo, bar=bar, al2}).
increment(#l1{al2 = Al2}=L1) -> L1#l1{al2 = increment(Al2)};
increment(#l2{al3 = Al3}=L2) -> L2#l2{al3 = increment(Al3)};
increment(#l3{value = V}=L3) -> L3#l3{value = V + 1}.
test() ->
    List =
        [ #l1{al2 = #l2{al3 = #l3{key = 0, value = 100}}}
        , #l1{al2 = #l2{al3 = #l3{key = 1, value = 200}}}
        , #l1{al2 = #l2{al3 = #l3{key = 2, value = 300}}}
        , #l1{al2 = #l2{al3 = #l3{key = 3, value = 400}}}],
    [increment(L) || L <- List].
The best solution is probably to look into the concept of lenses in functional programming. A lens is a functional getter and setter pair for "mutating" records. Done correctly, you can then write higher-order lenses that compose primitive lenses.
The result is that you can construct a mutator for your purpose and then run that mutator over all the records in a comprehension; a rough sketch follows below.
It is one of those things I wanna write some day for Erlang but never really got the time to write up :)
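To make the idea concrete, here is a minimal sketch (not a full lens library) using the shortened al2/al3 record names from the earlier answer; a lens here is just a {Get, Put} pair of funs, and compose/2 nests two of them:

%% Sketch only: a lens is a {Get, Put} pair; compose/2 builds a lens that
%% reaches through two levels of nesting.
lens(Get, Put) -> {Get, Put}.

compose({GetA, PutA}, {GetB, PutB}) ->
    {fun(R) -> GetB(GetA(R)) end,
     fun(V, R) -> PutA(PutB(V, GetA(R)), R) end}.

modify({Get, Put}, Fun, R) -> Put(Fun(Get(R)), R).

al2_lens()   -> lens(fun(#l1{al2 = X}) -> X end,   fun(X, L1) -> L1#l1{al2 = X} end).
al3_lens()   -> lens(fun(#l2{al3 = X}) -> X end,   fun(X, L2) -> L2#l2{al3 = X} end).
value_lens() -> lens(fun(#l3{value = V}) -> V end, fun(V, L3) -> L3#l3{value = V} end).

increment_all(L1s) ->
    Deep = compose(al2_lens(), compose(al3_lens(), value_lens())),
    [modify(Deep, fun(V) -> V + 1 end, L1) || L1 <- L1s].

This sketch only handles fully nested records; supporting undefined intermediate records would need an extra case in each Get/Put fun.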