I try to iterate over an array which contains weather data. That already works, and I can also load the data from the array that matters to me. To do that I wrote a helper word which looks like this:
: get-value ( hsh str -- str ) swap at* drop ;
[ "main" get-value "temp" get-value ] each 9 [ + ] times
This code pushes the temperature values from the array onto the stack and builds the sum. "main" and "temp" are the keys used to look the values up.
I execute it with this command (get-weather-list generates the array):
"Vienna" get-weather-list [ "main" get-value "temp" get-value ] each 9 [ + ] times
The result is a number on the stack. Now I want to split this call into one or two words. For example:
: get-weather-information ( city -- str )
get-weather-list
[ "main" get-value "temp" get-value ] each 9 [ + ] times ;
The problem is that I don't really understand the word's stack effect. I always get "The input quotation to “each” doesn't match its expected effect". I have tried a lot but can't find a solution to this problem. Does anyone have an idea? I am grateful for any help :)
Cheers
Stefan
This is a very old question by now, but it still may be useful to someone.
First, about each: the stack effect of the quotation is (... x -- ...).
That means it consumes one input and outputs nothing. Your quotation worked in the interpreter because the interpreter lets you get away with "wrong" code, but to call each from a defined word, your quotation can't output anything.
So each is not what you want. If you try to push a variable number of values onto the stack, you'll run into the same kind of trouble again. Sequence words all output a fixed number of values.
What you want to do is one of two things:
Make a new sequence with just the values you want, and then call sum on it (a sketch of this follows the reduce example below).
Use something like reduce, to accumulate the sum as you process your list.
For example, with reduce:
: get-weather-information ( city -- n )
    get-weather-list 0 [ "main" get-value "temp" get-value + ] reduce ;
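And a minimal sketch of the first option (build a new sequence of the temperatures, then sum it), assuming the standard map and sum sequence words:
"Vienna" get-weather-list [ "main" get-value "temp" get-value ] map sum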
I'm trying to see if I can use Red (or Rebol) to implement a simple DSL. I want to compile my DSL to source code for another language, perhaps Red or C# or both - rather than directly interpreting and executing it.
The DSL has only a couple of simple statements, plus an if/else statement.
Statements can be grouped into rules. A rule would get translated into a function definition, with each statement the equivalent statement in the target language.
The parse capability in Red/Rebol is great and lets me implement a parser very easily - in effect it's basically just the definition of the grammar itself.
However I haven't been able to find any examples of how to take the next steps, specifically handling an if statement and translating it to other source code.
Translating an if statement seems a good example of something minimal but still slightly tricky - because in Red having an else means you need to change the if to an either, rather than just an extra optional else.
Traditionally, during parsing I would build an abstract syntax tree, and then have functions to operate on the AST and generate the new source code. Should I be following this same approach, or is there some other more idiomatic way in Red?
I've experimented with using collect/keep in my parse rules to return a block of nested blocks, which in effect forms the AST. Another approach would be to save data into specific objects representing the different statements etc.
I'm still getting to grips with collect/keep, as to when a new block will be created and what will be kept. I'd also like to keep my parser rules as "clean" as possible, with as little other code intertwined in them. So I'm still not sure how best to add Red code in round brackets in the parse rules. Adding code too early can cause the Red code to get executed even if the rule eventually fails. Adding code too late means the code may not be executed in the order you expect, especially when dealing with multi-level statements like if, which can contain other statements.
So, specifically, any help on how to translate my example DSL to Red source code would be appreciated. Also any links to implementing DSLs like this in Red or Rebol would be great! :)
Here are my parse rules :-
Red [
    Purpose: example rules for parsing a simple language
]

SimpleLanguageParser: make object! [
    Expr: [string! | integer! | block!]
    Data: ['Person.AGE | 'Person.INCOME]
    WriteMessageToLog: ['write 'message 'to 'log Expr]
    SetData: ['set 'data Data '= Expr]
    IfStatement: ['if Expr [any Statement] opt ['else [any Statement]] 'endif]
    Statement: [WriteMessageToLog | SetData | IfStatement]
    Rule: [
        'rule word!
        [any Statement]
        'endrule
    ]
    AnySimpleLanguage: [Rule | [any Statement]]
]

SL: function [slInput] [
    parse slInput SimpleLanguageParser/AnySimpleLanguage
]
An example of some source in the DSL :-
RULE TooYoung
    IF [Person.Age < 15]
        WRITE MESSAGE TO LOG "too young to earn an income"
        SET DATA Person.Income = 0
    ELSE
        WRITE MESSAGE TO LOG "old enough"
    ENDIF
ENDRULE
Translated to Red source code :-
TooYoung: function [] [
    either Person.Age < 15 [
        WriteMessageToLog "too young to earn an income"
        Person.Income: 0
    ] [
        WriteMessageToLog "old enough"
    ]
]
The data, i.e. Person.Age and Person.Income, and the function WriteMessageToLog are all things that would have been previously defined.
Note: for simplicity I've left Expr as block! etc., rather than defining Expr in any more detail in the DSL itself. Also, setting Person.Income in the function doesn't work as coded, as it sets a local - but that's OK for now :)
Always nice to see someone digging into language-oriented programming - keep it up, and welcome to Red! ;)
Specifying correct grammar rules is the trickiest part of the job, and you've already nailed that. What's left is to intersperse your PEG (parsing expression grammar) with set, copy, collect/keep combos and paren! expressions in the right places, and then either create an AST from that or, in simpler cases, emit code directly.
Example
Here's a quickly baked (and definitely buggy!) example of how I'd tackle your task. Basically, it's a slightly reworked version of your code, where matched patterns are set, copied or collected, and then bound to specific words, which are then pasted into a "template" (the compose function inside emit-rule) to produce the Red code.
It's not the only way, I believe. #rebolek might come up with a more industrial-strength solution, as he has experience with sophisticated parsers, which I'm lacking :P
Followup
As for the if/else dilemma, I followed the approach proposed above -- instead of using opt I wrapped the rule for the else-branch into a block and added an alternative match, which just sets the false-block to none.
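Purely as an illustration of that shape (false-block is a placeholder name, and copy stands in for whatever actually captures the branch), the opt ['else ...] part becomes an alternation like:
['else copy false-block [any Statement] | (false-block: none)]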
What to use for the AST -- anything that allows you to express a hierarchical structure, which is either a block! (though for a performance gain you might want to use hash! or map!) or an object!. The advantage of object! is that it provides a context to be bound to, but here we're approaching the realm of so-called Bindology ("scoping" rules of the Red language), which is another beast :)
Emitting C# code would be harder, but doable -- you'll need to assemble a string instead of Red code. I think, however, that as a newcomer you should stick with parsing directly at block-level (the way you did in your example), because it's a lot easier and more expressive.
Another interesting (but much hairier) approach would be to re-define all the words used in your DSL block to work the way you want.
Resources
We have a wiki entry about Red/Rebol dialects on GitHub, which you might find, if not useful, at least interesting to read.
Also, two articles (this and this) on the Red blog; I think you've skimmed over the first one already (if not, you should!).
Last, but not least, an exhaustive review of Parse principles and keywords (which has a couple of wrong parts in it though, so caveat emptor). It's written for Rebol, but you should be able to adapt the examples to Red rather easily.
As a relative newcomer to the language, I do agree that there's a lack of examples and tutorials about DSL development, but we're working on that (at least in our heads) :)
Taking 9214's answer as a starting point, I've coded one possible solution. My approach has been :-
try to keep the parse rules as "clean" as possible
use collect and keep to return a block as the result, rather than trying to build a more complex AST
do some minimal translation in the keeps
the resulting block should be valid Red code
which uses predefined functions, where any more complex processing needs to happen
Most simple statements are easily translated to functions, e.g. WRITE MESSAGE TO LOG becomes SL_WriteMessageToLog, which can then do whatever it needs to do.
More complicated statements with structure, e.g. If/Else, become functions which take block parameters and can then process the blocks as required.
For the If/Else complication, I've made this into two separate functions, SL_If and SL_Else. SL_If stores the result of the condition in a sequence, and SL_Else checks the latest result and removes it. This allows for nested If/Elses.
The presence of the final endrule can be checked for to ensure the input was correctly parsed. Once this is removed, we should have a valid function definition.
Here's the code :-
Red [
    Purpose: example rules for parsing and translating a simple language
]

; some data
Person.AGE: 0
Person.INCOME: 0

; functions to implement some simple SL statements
SL_WriteMessageToLog: function [value] [
    print value
]

SL_SetData: function [parmblock] [
    field: parmblock/1
    value: parmblock/2
    if word! = type? value [
        value: do value
    ]
    print ["old value" field "=" do field]
    set field value
    print ["new value" field "=" do field]
]

; hold the If condition results, to be used to determine whether or not to do Else
IfConditionResults: []

SL_If: function [cond stats] [
    cond_result: do cond
    head insert IfConditionResults cond_result
    if cond_result stats
]

SL_Else: function [stats] [
    cond_result: first IfConditionResults
    remove IfConditionResults
    if not cond_result stats
]

; parsing rules
SimpleLanguageParser: make object! [
    Expr: [logic! | string! | integer! | block!]
    Data: ['Person.AGE | 'Person.INCOME]
    WriteMessageToLog: ['write 'message 'to 'log set x Expr keep ('SL_WriteMessageToLog) keep (x)]
    SetData: ['set 'data set d Data '= set x Expr keep ('SL_SetData) keep (reduce [d x])]
    IfStatement: ['if keep ('SL_If) keep Expr collect [any Statement] opt ['else keep ('SL_Else) collect [any Statement]] 'endif]
    Statement: [WriteMessageToLog | SetData | IfStatement]
    Rule: [collect [
        'rule set fname word! keep (to set-word! fname) keep ('does)
        collect [any Statement]
        keep 'endrule
    ]]
    AnySimpleLanguage: [Rule | [any Statement]]
]

SL: function [slInput] [
    parse slInput SimpleLanguageParser/Rule
]
For the example in the original post, the output is :-
TooYoung: does [
    SL_If [Person.Age < 15] [
        SL_WriteMessageToLog "too young to earn an income"
        SL_SetData [Person.Income 0]
    ]
    SL_Else [
        SL_WriteMessageToLog "old enough"
    ]
]
ENDRULE
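As mentioned above, the trailing ENDRULE can be used as a sanity check and then dropped before evaluating the generated code. A minimal, untested sketch of that final step (reusing the SL word and the example source from above):
result: SL [
    RULE TooYoung
    IF [Person.Age < 15]
        WRITE MESSAGE TO LOG "too young to earn an income"
        SET DATA Person.Income = 0
    ELSE
        WRITE MESSAGE TO LOG "old enough"
    ENDIF
    ENDRULE
]
if 'ENDRULE = last result [
    do head clear back tail result    ; drop ENDRULE, then evaluate the definition
]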
Thanks for your help to get this far.
Feedback on this approach and solution would be appreciated :)
Rebol2 has an /ANY refinement on the FIND function that can do wildcard searches:
>> find/any "here is a string" "s?r"
== "string"
I use this extensively in tight loops that need to perform well. But the refinement was removed in Rebol3.
What's the most efficient way of doing this in Rebol3? (I'm guessing a parse solution of some sort.)
Here's a stab at handling the "*" case:
like: funct [
    series [series!]
    search [series!]
][
    rule: copy []
    remove-each s b: parse/all search "*" [empty? s]
    foreach s b [
        append rule reduce ['to s]
    ]
    append rule [to end]
    all [
        parse series rule
        find series first b
    ]
]
used as follows:
>> like "abcde" "b*d"
== "bcde"
I had edited your question for "clarity" and changed it to say 'was removed'. That made it sound like it was a deliberate decision. Yet it actually turns out it may just not have been implemented.
BUT if anyone asks me, I don't think it should be in the box...and not just because it's a lousy use of the word "ALL". Here's why:
You're looking for patterns in strings...so if you're constrained to using a string to specify that pattern you get into "meta" problems. Let's say I want to extract the word *Rebol* or ?Red?, now there has to be escaping and things get ugly all over again. Back to RegEx. :-/
So what you might actually want isn't a STRING! pattern like s?r but a BLOCK! pattern like ["s" ? "r"]. This would permit constructs like ["?" ? "?"] or [{?} ? {?}]. That's better than rehashing the string hackery that every other language uses.
And that's what PARSE does, albeit in a slightly-less-declarative way. It also uses words instead of symbols, as Rebol likes to do. [{?} skip {?}] is a match rule where skip is an instruction that moves the parse position past any single element of the parse series between the question marks. It could also do so if it were parsing a block as input, and would match [{?} 12-Dec-2012 {?}].
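For instance, parsing a block instead of a string (this is the result I'd expect, not a captured console session):
>> parse [{?} 12-Dec-2012 {?}] [{?} skip {?}]
== true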
I don't know entirely what the behavior of /ALL would-or-should be with something like "ab??cd e?*f"... if it provided alternate pattern logic or what. I'm assuming the Rebol2 implementation is brief? So likely it only matches one pattern.
To set a baseline, here's a possibly-lame PARSE solution for the s?r intent:
>> parse "here is a string" [
some [ ; match rule repeatedly
to "s" ; advance to *before* "s"
pos: ; save position as potential match
skip ; now skip the "s"
[ ; [sub-rule]
skip ; ignore any single character (the "?")
"r" ; match the "r", and if we do...
return pos ; return the position we saved
| ; | (otherwise)
none ; no-op, keep trying to match
]
]
fail ; have PARSE return NONE
]
== "string"
If you wanted it to be s*r you would change the skip "r" return pos into a to "r" return pos.
On an efficiency note, I'll mention that it is indeed the case that characters are matched against characters faster than strings. So using to #"s" and #"r" instead of their string equivalents makes a measurable difference in speed when parsing strings. Beyond that, I'm sure others can do better.
The rule is certainly longer than "s?r". But it's not that long when comments are taken out:
[some [to #"s" pos: skip [skip #"r" return pos | none]] fail]
(Note: It does leak pos: as written. Is there a USE in PARSE, implemented or planned?)
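Applying the substitution described above, the condensed form of the s*r variant would presumably be:
[some [to #"s" pos: skip [to #"r" return pos | none]] fail]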
Yet a nice thing about it is that it offers hook points at all the moments of decision, and without the escaping defects a naive string solution has. (I'm tempted to give my usual "Bad LEGO alligator vs. Good LEGO alligator" speech.)
But if you don't want to code in PARSE directly, it seems the real answer would be some kind of "Glob Expression"-to-PARSE compiler. It might be the best interpretation of glob Rebol would have, because you could do a one-off:
>> parse "here is a string" glob "s?r"
== "string"
Or if you are going to be doing the match often, cache the compiled expression. Also, let's imagine our block form uses words for literacy:
s?r-rule: glob ["s" one "r"]
pos-1: parse "here is a string" s?r-rule
pos-2: parse "reuse compiled RegEx string" s?r-rule
It might be interesting to see such a compiler for regex as well. These also might accept not only string input but also block input, so that both "s.r" and ["s" . "r"] were legal...and if you used the block form you wouldn't need escaping and could write ["." . "."] to match ".A."
Fairly interesting things would be possible. Given that in RegEx:
(abc|def)=\g{1}
matches abc=abc or def=def
but not abc=def or def=abc
Rebol could be modified to take either the string form or compile into a PARSE rule with a form like:
regex [("abc" | "def") "=" (1)]
Then you get a dialect variation that doesn't need escaping. Designing and writing such compilers is left as an exercise for the reader. :-)
I've broken this into two functions: one that creates a rule to match the given search value, and the other to perform the search. Separating the two allows you to reuse the same generated parse block where one search value is applied over multiple iterations:
expand-wildcards: use [literal][
    literal: complement charset "*?"

    func [
        {Creates a PARSE rule matching VALUE expanding * (any characters) and ? (any one character)}
        value [any-string!] "Value to expand"
        /local part
    ][
        collect [
            parse value [
                ; empty search string FAIL
                end (keep [return (none)])
                |
                ; only wildcard return HEAD
                some #"*" end (keep [to end])
                |
                ; everything else...
                some [
                    ; single char matches
                    #"?" (keep 'skip)
                    |
                    ; textual match
                    copy part some literal (keep part)
                    |
                    ; indicates the use of THRU for the next string
                    some #"*"
                    ; but first we're going to match single chars
                    any [#"?" (keep 'skip)]
                    ; it's optional in case there's a "*?*" sequence
                    ; in which case, we're going to ignore the first "*"
                    opt [
                        copy part some literal (
                            keep 'thru keep part
                        )
                    ]
                ]
            ]
        ]
    ]
]
like: func [
    {Finds a value in a series and returns the series at the start of it.}
    series [any-string!] "Series to search"
    value [any-string! block!] "Value to find"
    /local skips result
][
    ; shortens the search a little where the search starts with a regular char
    skips: switch/default first value [
        #[none] #"*" #"?" ['skip]
    ][
        reduce ['skip 'to first value]
    ]

    any [
        block? value
        value: expand-wildcards value
    ]

    parse series [
        some [
            ; we have our match
            result: value
            ; and return it
            return (result)
            |
            ; step through the string until we get a match
            skips
        ]
        ; at the end of the string, no matches
        fail
    ]
]
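Used as follows; these are the results I'd expect from the code above rather than verified console output:
>> like "here is a string" "s?r"
== "string"
>> like "here is a string" ["s" skip "r"]
== "string"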
Splitting the function also gives you a base to optimize the two different concerns: finding the start and matching the value.
I went with PARSE as, even though * and ? are seemingly simple rules, there is nothing quite as expressive and quick as PARSE for effectively implementing such a search.
It might yet be worth, as per #HostileFork's suggestion, considering a dialect instead of strings with wildcards, indeed to the point where RegEx is replaced by a compile-to-parse dialect, but that is perhaps beyond the scope of the question.
I want to filter log data by removing all newline characters in every log message. The following is my code, but it seems inefficient; how can I improve it?
character_drop_test_b() ->
    List = "AB\nC\nD\n",
    Result = re:replace(List, "[\n]", "", [global, {return, list}]) ++ "\n",
    Result.
Since you're replacing a fixed string rather than a pattern, you don't need to use regular expressions. Try this instead:
string:join(string:tokens(List, "\n"), "") ++ "\n"
By my measurements it's 3x faster than your approach on your small List, and 6x faster for a list composed of 1000 copies of the List data.
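For reference, here is that suggestion wrapped in a complete function (the function name is just for illustration):
character_drop_join(List) ->
    %% split on newlines, rejoin without them, then append a single trailing newline
    string:join(string:tokens(List, "\n"), "") ++ "\n".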
I want to use erlang datetime values in the standard format {{Y,M,D},{H,Min,Sec}} in a MNESIA table for logging purposes and be able to select log entries by comparing with constant start and end time tuples.
It seems that the matchspec guard compiler somehow confuses tuple values with guard sub-expressions. Evaluating ets:match_spec_compile(MatchSpec) fails for
MatchSpec = [
    {
        {'_','$1','$2'},
        [
            {'==','$2',{1,2}}
        ],
        ['$_']
    }
]
but succeeds when I compare $2 with any non-tuple value.
Is there a restriction that match guards cannot compare tuple values?
I believe the answer is to use double braces when using tuples (see the Variables and Literals section of http://www.erlang.org/doc/apps/erts/match_spec.html#id69408). So to use a tuple in a matchspec expression, surround that tuple with an extra pair of braces, as in,
{'==','$2',{{1,2}}}
So, if I understand your example correctly, you would have
22> M=[{{'_','$1','$2'},[{'==','$2',{{1,2}}}],['$_']}].
[{{'_','$1','$2'},[{'==','$2',{{1,2}}}],['$_']}]
23> ets:match_spec_run([{1,1,{1,2}}],ets:match_spec_compile(M)).
[{1,1,{1,2}}]
24> ets:match_spec_run([{1,1,{2,2}}],ets:match_spec_compile(M)).
[]
EDIT: (sorry to edit your answer but this was the easiest way to get my comment in a readable form)
Yes, this is how it must be done. An easier way to get the match-spec is to use the (pseudo) function ets:fun2ms/1 which takes a literal fun as an argument and returns the match-spec. So
10> ets:fun2ms(fun ({A,B,C}=X) when C == {1,2} -> X end).
[{{'$1','$2','$3'},[{'==','$3',{{1,2}}}],['$_']}]
The shell recognises ets:fun2ms/1. For more information see ETS documentation. Mnesia uses the same match-specs as ETS.
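Since the original question was about selecting log entries between two datetime tuples, here is a minimal, untested sketch of how that might look with ets:fun2ms and Mnesia; the log_entry record and table name are assumptions, not part of the question:
-include_lib("stdlib/include/ms_transform.hrl").

-record(log_entry, {timestamp, message}).  %% assumed table layout

%% Return {Timestamp, Message} pairs whose timestamp lies between Start and
%% End, both given as {{Y,M,D},{H,Min,Sec}} tuples. fun2ms imports Start and
%% End from the environment and wraps them as constants, so the double-brace
%% issue is handled for you.
select_logs(Start, End) ->
    MS = ets:fun2ms(fun(#log_entry{timestamp = T, message = M})
                          when T >= Start, T =< End ->
                              {T, M}
                    end),
    mnesia:dirty_select(log_entry, MS).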
I am relatively new to Maxima. I want to know how to write an array into a text file using Maxima.
I know it's late in the game for the original post, but I'll leave this here in case someone finds it in a search.
Let A be a Lisp array, Maxima array, matrix, list, or nested list. Then:
write_data (A, "some_file.data");
Let S be an output stream (created by openw or opena). Then:
write_data (A, S);
Entering ?? numericalio at the input prompt, or ?? write_ or ?? read_, will show some info about this function and related ones.
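A minimal, illustrative session combining the two forms (the array contents here are made up):
A : [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]$
write_data (A, "some_file.data")$

S : openw ("other_file.data")$
write_data (A, S)$
close (S)$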
I've never used Maxima (or even heard of it), but a little Google searching out of curiosity turned up this: http://arachnoid.com/maxima/files_functions.html
From what I can gather, you should be able to do something like this:
stringout("my_new_file.txt",values);
It says the second parameter to the stringout function can be one or more of these:
input: all user entries since the beginning of the session.
values: all user variable and array assignments.
functions: all user-defined functions (including functions defined within any loaded packages).
all: all of the above. Such a list is normally useful only for editing and extraction of useful sections.
So by passing values it should save your array assignments to file.
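So a session might look like this (the variable and its contents are made up, and the file's exact contents are whatever stringout produces):
a : [1, 2, 3]$
stringout("my_new_file.txt", values)$
/* my_new_file.txt should now contain the assignment of a in Maxima input form */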
A bit more necroposting, as Google leads here, but the answers above weren't quite enough for my case. I needed to export data in the following form:
-0.8000,-0.8000,-0.2422,-0.242
-0.7942,-0.7942,-0.2387,-0.239
-0.7776,-0.7776,-0.2285,-0.228
-0.7514,-0.7514,-0.2124,-0.212
-0.7168,-0.7168,-0.1912,-0.191
-0.6750,-0.6750,-0.1655,-0.166
-0.6272,-0.6272,-0.1362,-0.136
-0.5746,-0.5746,-0.1039,-0.104
So I've found how to do this with printf:
with_stdout(filename,
    for i:1 thru length(z_points) do
        printf(true, "~,4f,~,4f,~,4f,~,3f~%",
               bot_points[i], bot_points[i], top_points[i], top_points[i]));
A slightly cleaner variation on #ProdoElmit's answer:
list : [1,2,3,4,5]$
with_stdout("file.txt", apply(print, list))$
/* 1 2 3 4 5 is then what appears in file.txt */
Here the trick with apply is needed as you probably don't want to have square brackets in your output, as is produced by print(list).
For a matrix to be printed out, I would have done the following:
m : matrix([1,2],[3,4])$
with_stdout("file.txt", for row in args(m) do apply(print, row))$
/* 1 2
3 4
is what you then have in file.txt */
Note that in my solution the values are separated with spaces and the format of your values is fixed to that provided by print. Another caveat is that there is a limit on the number of function parameters: for example, for me (GCL 2.6.12) my method does not work if length(list) > 64.