How does one set a queue to hold N values? When N is reached, remove the last item and add the new value to the front of the queue.
Should this be done with an if statement?
I also want to compute something over the values in the queue as each new item is added, e.g. add up all of the values in the queue.
I assume from your query that you both want to cap the length of the queue at some maximum and get the sum of all the values.
To answer your easiest question first: Erlang queues, however you wish to represent them, are normal Erlang data structures so there are no problems in storing them in a dictionary.
The OTP queue module is actually very simple but the plethora of interfaces easily makes it confusing to use. #Nathon's enqueue function can be made much more efficient by not using the queue data structure directly but by defining your own data structure which includes the queue and its current length, {Length,Queue}. If the sum is important then you could even include it as well.
The queue representations are very simple so it is very easy to write your own specialised form of it.
The simplest way is to keep the queue in a list, take elements from the head and add new elements to the end. So:
new(Max) when is_integer(Max), Max > 0 -> {0,Max,[]}.   %Length, Max and Queue list

take({L,M,[H|T]}) -> {H,{L-1,M,T}}.

add(E, {L,M,Q}) when L < M ->
    {L+1,M,Q ++ [E]};                 %Add element to end of list
add(E, {M,M,[_H|T]}) ->
    {M,M,T ++ [E]}.                   %Drop oldest, add element to end of list
When the queue becomes full the oldest member, which is at the front of the queue, is dropped. An empty queue generates an error. This is a very simple structure but it is inefficient as the queue is copied every time a new element is added. Reversing the list does not help as then the list is copied every time an element is removed from it. But it is simple, and it does work.
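A quick usage sketch of this version (values chosen only for illustration):
Q0 = new(3),
Q1 = add(1, Q0),     %{1,3,[1]}
Q2 = add(2, Q1),     %{2,3,[1,2]}
Q3 = add(3, Q2),     %{3,3,[1,2,3]}
Q4 = add(4, Q3),     %{3,3,[2,3,4]} - full, so the oldest element (1) is dropped
{H,_Q4a} = take(Q4). %H =:= 2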
A much more efficient structure is to split the queue into two lists, the front end of the queue and the rear end of the queue. The rear end is reversed and becomes the new front when the front is empty. So:
new(Max) when is_integer(Max), Max > 0 ->
    {0,Max,[],[]}.                          %Length, Max, Rear and Front

take({L,M,R,[H|T]}) -> {H,{L-1,M,R,T}};
take({L,M,R,[]}) when L > 0 ->
    take({L,M,[],lists:reverse(R)}).        %Move the rear to the front

add(E, {L,M,R,F}) when L < M ->
    {L+1,M,[E|R],F};                        %Add element to rear
add(E, {M,M,R,[_H|T]}) ->
    {M,M,[E|R],T};                          %Drop oldest, add element to rear
add(E, {M,M,R,[]}) ->
    add(E, {M,M,[],lists:reverse(R)}).      %Move the rear to the front
Again when the queue becomes full the oldest member, which is at the front of the queue, is dropped and an empty queue generates an error. This is the data structure used in the queue module.
It would be very easy to add the current sum of the elements to the structure and manage it directly.
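For example, a minimal sketch along those lines, extending the two-list representation above with a running Sum field (assuming the elements are numbers):
new(Max) when is_integer(Max), Max > 0 ->
    {0,Max,0,[],[]}.                        %Length, Max, Sum, Rear and Front

sum({_L,_M,Sum,_R,_F}) -> Sum.

take({L,M,Sum,R,[H|T]}) -> {H,{L-1,M,Sum-H,R,T}};
take({L,M,Sum,R,[]}) when L > 0 ->
    take({L,M,Sum,[],lists:reverse(R)}).    %Move the rear to the front

add(E, {L,M,Sum,R,F}) when L < M ->
    {L+1,M,Sum+E,[E|R],F};                  %Add to rear and update the sum
add(E, {M,M,Sum,R,[H|T]}) ->
    {M,M,Sum+E-H,[E|R],T};                  %Drop the oldest, add the new, adjust the sum
add(E, {M,M,Sum,R,[]}) ->
    add(E, {M,M,Sum,[],lists:reverse(R)}).  %Move the rear to the front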
Often, when working on simple data structures like this, it is just as easy to roll your own module as it is to use a provided one.
Given the comments, this will do it:
enqueue(Value, Queue) ->
    Pushed = queue:in(Value, Queue),
    Sum = fun (Q) -> lists:sum(queue:to_list(Q)) end,
    case queue:len(Pushed) of
        Len when Len > 10 ->
            Popped = queue:drop(Pushed),
            {Popped, Sum(Popped)};
        _ ->
            {Pushed, Sum(Pushed)}
    end.
If you don't actually want to sum the items, you can use lists:foldl instead, or just write a function to do the operation directly on a queue.
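For instance, a sketch of computing a product instead of a sum (assuming numeric items):
Product = fun (Q) -> lists:foldl(fun(X, Acc) -> X * Acc end, 1, queue:to_list(Q)) end.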
I often want to take one list and cons each of its elements onto an existing list.
MyList = [3,2,1],
MyNewElements = [4,5,6],
MyNewList = lists:foldl(fun(A, B) -> [A | B] end, MyList, MyNewElements).
%% [6,5,4,3,2,1]
Assuming MyList has 1M elements and MyNewElements only has a few, I want to do this efficiently.
I couldn't figure out which of these functions, if any, does what I'm trying to do:
https://www.erlang.org/doc/man/lists.html
Adding a short list to the beginning of a long list is cheap - the execution time of the ++ operator is proportional to the length of the first list. The first list is copied, and the second list is added as the tail without modification.
So in your example, that would be:
lists:reverse(MyNewElements) ++ MyList
(The execution time of lists:reverse/1 is also proportional to the length of the argument.)
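In the shell, this gives the result you wanted:
1> lists:reverse([4,5,6]) ++ [3,2,1].
[6,5,4,3,2,1]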
Another option, aside from those already provided, would be just to have
NewDeepList = [MyList | DeepList]
and modify the reading/traversing to be able to handle [[element()]] instead of [element()].
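For example, a sketch of summing over the nested structure without flattening it first (sum_deep/1 is a hypothetical helper name, and it assumes numeric elements):
sum_deep(DeepList) ->
    lists:foldl(fun(Chunk, Acc) -> Acc + lists:sum(Chunk) end, 0, DeepList).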
Because Erlang is a functional language, unlike C or JavaScript, you do not modify data in place: a new, modified copy is built instead. Therefore you cannot do better than O(A), where A is the number of newly added elements.
I have a tool that is using a Telegram chatbot to interact with its users.
Telegram limits the call rate, so I use a queue system that gets flushed at regular intervals.
So, the current code is very basic:
// flush the message queue
let flushMessageQueue() =
    if not messageQueue.IsEmpty then    // messageQueue is a ConcurrentQueue
        // get all the messages
        let messages =
            messageQueue
            |> Seq.unfold(fun q ->
                match q.TryDequeue () with
                | true, m -> Some (m, q)
                | _ -> None)
        // put all the messages in a single string
        let messagesString = String.Join("\n", messages)
        // send the data
        client.SendTextMessageAsync(chatId, messagesString, ParseMode.Markdown)
        |> Async.AwaitTask
        |> Async.RunSynchronously
        |> ignore
This is called at a regular interval, while the write side is:
// broadcast message
let broadcastMessage message =
    messageQueue.Enqueue(message)
    printfn "%s" (message.Replace ("```", String.Empty))
But as messages became more complex, two problems came at once:
Part of the output is formatted text with simple markdown:
Some blocks of lines are wrapped between ``` sections
There are some ``` sections as well inside some lines
The text is UTF-8 and uses a bunch of symbols
Some example of text may be:
```
this is a group of lines
with one, or many many lines
```
and sometimes there are things ```like this``` as well
And... I found out that Telegram limits message size to 4kb as well
So, I thought of two things:
I can maintain a state with the open / close ``` and pull from a queue, wrap each line in triple back ticks based on the state and push into another queue that will be used to make the 4kb block.
I can keep taking messages from the re-formatted queue and aggregate them until I reach 4kb, or the end of the queue and loop around.
Is there an elegant way to do this in F#?
I remember seeing a snippet where a collection function was used to aggregate data until a certain size but it looked very inefficient as it was making a collection of line1, line1+line2, line1+line2+line3... and then picking the one with the right size.
I'm writing code in Erlang which is supposed to generate a random number for a random amount of time and add each number to a list. I managed a function which can generate random numbers and I kind of managed a method for adding them to a list, but my main problem is restricting the number of iterations of the function. I would like the function to produce several numbers, add them to the list and then kill that process or something like that.
Here is my code so far:
generator(L1)->
    random:seed(now()),
    A = random:uniform(100),
    L2 = lists:append(L1,A),
    generator(L2),

producer(B,L) ->
    receive
        {last_element} ->
            consumer ! {lists:droplast(B)}
    end

consumer()->
    timer:send_after(random:uniform(1000),producer,{last_element,self()}),
    receive
        {Answer, Producer_PID} ->
            io:format("the last item is:~w~n",[Answer])
    end,
    consumer().

start() ->
    register(consumer,spawn(lis,consumer,[])),
    register(producer,spawn(lis,producer,[])),
    register(generator,spawn(lis,generator,[random:uniform(10)])).
I know it's a little bit sloppy and incomplete, but that's not the point here.
First, you should use rand to generate random numbers instead of random; it is an improved module.
In addition, when using rand:uniform/1 you won't need to set the seed every time you run your program. From the Erlang documentation:
If a process calls uniform/0 or uniform/1 without setting a seed
first, seed/1 is called automatically with the default algorithm and
creates a non-constant seed.
Finally, in order to create a list of random numbers, take a look at How to create a list of 1000 random number in erlang.
If I conclude all this, you can just do:
[rand:uniform(100) || _ <- lists:seq(1, 1000)].
There are some issues in your code:
In timer:send_after(random:uniform(1000),producer,{last_element,self()}), you send {last_element,self()} to the producer process, but in producer you only receive {last_element}; these messages do not match.
You can change
producer(B,L) ->
    receive
        {last_element} ->
            consumer ! {lists:droplast(B)}
    end.
to
producer(B,L) ->
    receive
        {last_element, FromPid} ->
            FromPid ! {lists:droplast(B)}
    end.
The same applies to consumer ! {lists:droplast(B)} and the {Answer, Producer_PID} -> pattern.
Maps, filters, folds and more: http://learnyousomeerlang.com/higher-order-functions#maps-filters-folds
The more I read, the more I get confused.
Can anybody help simplify these concepts?
I am not able to understand the significance of these concepts. In what use cases will these be needed?
I think it is mainly because of the syntax; I find it difficult to follow the flow.
The concepts of mapping, filtering and folding prevalent in functional programming actually are simplifications - or stereotypes - of different operations you perform on collections of data. In imperative languages you usually do these operations with loops.
Let's take map for an example. These three loops all take a sequence of elements and return a sequence of squares of the elements:
// C - a lot of bookkeeping
int data[] = {1,2,3,4,5};
int squares_1_to_5[sizeof(data) / sizeof(data[0])];
for (int i = 0; i < sizeof(data) / sizeof(data[0]); ++i)
    squares_1_to_5[i] = data[i] * data[i];

// C++11 - less bookkeeping, still not obvious
std::vector<int> data{1,2,3,4,5};
std::vector<int> squares_1_to_5;
for (auto i = begin(data); i != end(data); i++)
    squares_1_to_5.push_back((*i) * (*i));

// Python - quite readable, though still not obvious
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    squares_1_to_5.append(x * x)
The property of a map is that it takes a collection of elements and returns the same number of somehow modified elements. No more, no less. Is it obvious at first sight in the above snippets? No, at least not until we read loop bodies. What if there were some ifs inside the loops? Let's take the last example and modify it a bit:
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    if x % 2 == 0:
        squares_1_to_5.append(x * x)
This is no longer a map, though it's not obvious before reading the body of the loop. It's not clearly visible that the resulting collection might have fewer elements (maybe none?) than the input collection.
We filtered the input collection, performing the action only on some elements from the input. This loop is actually a map combined with a filter.
Tackling this in C would be even more noisy due to allocation details (how much space to allocate for the output array?) - the core idea of the operation on data would be drowned in all the bookkeeping.
A fold is the most generic one, where the result doesn't have to contain any of the input elements, but somehow depends on (possibly only some of) them.
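Summing a list is a simple fold, for instance - the result is a single number derived from all the elements (a small sketch with lists:foldl):
lists:foldl(fun (E, Acc) -> E + Acc end, 0, [1,2,3,4,5]).    %returns 15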
Let's rewrite the first Python loop in Erlang:
lists:map(fun (E) -> E * E end, [1,2,3,4,5]).
It's explicit. We see a map, so we know that this call will return a list as long as the input.
And the second one:
lists:map(fun (E) -> E * E end,
          lists:filter(fun (E) when E rem 2 == 0 -> true;
                           (_) -> false end,
                       [1,2,3,4,5])).
Again, filter will return a list at most as long as the input, map will modify each element in some way.
The latter of the Erlang examples also shows another useful property - the ability to compose maps, filters and folds to express more complicated data transformations. Plain imperative loops don't compose like this; you have to merge their bodies by hand.
They are used in almost every application, because they abstract different kinds of iteration over lists.
map is used to transform one list into another. Let's say you have a list of key-value tuples and you want just the keys. You could write:
keys([]) -> [];
keys([{Key, _Value} | T]) ->
    [Key | keys(T)].
Then you want to have values:
values([]) -> [];
values([{_Key, Value} | T]) ->
    [Value | values(T)].
Or a list of only the third element of each tuple:
third([]) -> [];
third([{_First, _Second, Third} | T]) ->
    [Third | third(T)].
Can you see the pattern? The only difference is what you take from the element, so instead of repeating the code, you can simply write what you do for one element and use map.
GetThird = fun({_First, _Second, Third}) -> Third end,
lists:map(GetThird, List).
This is much shorter, and the shorter your code is, the fewer bugs it has. Simple as that.
You don't have to think about corner cases (what if the list is empty?) and for an experienced developer it is much easier to read.
filter searches lists. You give it a function that takes an element; if it returns true, the element will be in the returned list, and if it returns false, it will not - for example, filtering logged-in users out of a list.
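A small sketch of that, assuming a hypothetical {Name, LoggedIn} tuple shape:
Users = [{"anna", true}, {"bob", false}, {"carol", true}],
LoggedIn = lists:filter(fun({_Name, IsLoggedIn}) -> IsLoggedIn end, Users).
%LoggedIn =:= [{"anna",true},{"carol",true}]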
foldl and foldr are used when you have to do additional bookkeeping while iterating over the list - for example summing all the elements or counting something.
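A sketch of the counting case with foldl, where the accumulator carries the bookkeeping:
lists:foldl(fun(X, Count) when X > 0 -> Count + 1;
               (_X, Count) -> Count
            end, 0, [1,-2,3,0,5]).
%returns 3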
The best explanations I've found of those functions are in books about Lisp: "Structure and Interpretation of Computer Programs" and "On Lisp", Chapter 4.
I want to measure the performance of my database by measuring the time taken to do something as the number of processes increases. The intention is to plot a graph of performance versus number of processes afterwards. Does anyone have an idea how? I am a beginner in Erlang, please help.
Assuming your database is mnesia, this should not be hard. One way would be to have a write function and a read function. However, note that there are several activity access contexts with mnesia. To test write times, you should NOT use the context transaction because it returns immediately to the calling process, even before a disc write has occurred. For disc writes, it is important that you look at the context called sync_transaction. Here is an example:
write(Record)->
    Fun = fun(R)-> mnesia:write(R) end,
    mnesia:activity(sync_transaction,Fun,[Record],mnesia_frag).
The function above will return only when all active replicas of the mnesia table have committed the record onto the data disc file. Hence, to test the speed as processes increase, you need a record generator, a process spawner, the write function and finally a timing mechanism. For timing, we have the built-in functions timer:tc/1, timer:tc/2 and timer:tc/3, which return the exact time it took to execute (completely) a given function. To cut the story short, this is how I would do it:
-module(stress_test).
-compile(export_all).

-define(LIMIT,10000).

-record(book,{
    isbn,
    title,
    price,
    version}).

%% ensure this table is {type,bag}
-record(write_time,{
    isbn,
    num_of_processes,
    write_time
}).

%% Assuming table (book) already exists
%% Assuming mnesia running already

start()->
    ensure_gproc(),
    tv:start(),
    spawn_many(?LIMIT).

spawn_many(0)-> ok;
spawn_many(N)->
    spawn(?MODULE,process,[]),
    spawn_many(N - 1).

process()->
    gproc:reg({n, l,guid()},ignored),
    timer:apply_interval(timer:seconds(2),?MODULE,write,[]),
    receive
        <<"stop">> -> exit(normal)
    end.

total_processes()->
    proplists:get_value(size,ets:info(gproc)) div 3.

ensure_gproc()->
    case lists:keymember(gproc,1,application:which_applications()) of
        true -> ok;
        false -> application:start(gproc)
    end.

guid()->
    random:seed(now()),
    MD5 = erlang:md5(term_to_binary([random:uniform(152629977),{node(), now(), make_ref()}])),
    MD5List = lists:nthtail(3, binary_to_list(MD5)),
    F = fun(N) -> f("~2.16.0B", [N]) end,
    L = [F(N) || N <- MD5List],
    lists:flatten(L).
generate_record()->
    #book{isbn = guid(),title = guid(),price = guid()}.

write()->
    Record = generate_record(),
    Fun = fun(R)-> ok = mnesia:write(R),ok end,

    %% Here is now the actual write we measure
    {Time,ok} = timer:tc(mnesia,activity,[sync_transaction,Fun,[Record],mnesia_frag]),

    %% Then we save that time and the number of processes
    %% at that instant
    NoteTime = #write_time{
        isbn = Record#book.isbn,
        num_of_processes = total_processes(),
        write_time = Time
    },
    mnesia:activity(transaction,Fun,[NoteTime],mnesia_frag).
Now there are dependencies here, especially gproc: download and build it into your Erlang lib path from here: Download Gproc. To run this, just call stress_test:start(). The write_time table will help you draw a graph of the number of processes against the time taken to write. As the number of processes increases from 0 to the upper limit (?LIMIT), we note the time taken to write a given record at that instant and we also note the number of processes at that time.

UPDATE
f(S)-> f(S,[]).
f(S,Args) -> lists:flatten(io_lib:format(S, Args)).
That is the missing function, apologies. Remember to study the write_time table: using the application tv, a window is opened in which you can examine the mnesia tables. Use this table to see increasing write times (i.e. decreasing performance) as the number of processes grows. One element I have left out is noting the actual time of the write action using time(), which may be an important parameter. You may add it to the definition of the write_time table.
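To get data out for the graph, one possible sketch is a helper (hypothetical, using mnesia:dirty_select/2 and the write_time record above) that pulls {NumberOfProcesses, WriteTime} pairs out of the table:
points() ->
    mnesia:dirty_select(write_time,
        [{#write_time{num_of_processes = '$1', write_time = '$2', _ = '_'},
          [],
          [{{'$1','$2'}}]}]).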
Also look at http://wiki.basho.com/Benchmarking.html
You might also look at Tsung: http://tsung.erlang-projects.org/