First of all, sorry for my English.
I would like to use a backtracking algorithm in Erlang. It would serve as the guessing step for solving partially filled sudokus. A 9x9 sudoku is stored as a list of 81 elements, where every element stores the possible numbers that can go into that cell.
For a 4x4 sudoku my initial solution looks like this:
[[1],[3],[2],[4],[4],[2],[3],[1],[2,3],[4],[1],[2,3],[2,3],[1],[4],[2,3]]
This sudoku has 2 solutions, and I have to write out both of them. Once that initial solution is reached, I need to implement a backtracking algorithm, but I don't know how to write it.
My idea is to write out the fixed elements into a new list called fixedlist, replacing the multiple-candidate cells with [].
For the above-mentioned example, the fixedlist looks like this:
[[1],[3],[2],[4],[4],[2],[3],[1],[],[4],[1],[],[],[1],[4],[]]
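For illustration, deriving such a fixedlist could look something like this (a minimal sketch):

%% Keep cells that are already fixed (a single candidate);
%% blank out cells that still have multiple candidates.
fixedlist(Solution) ->
    [case Cell of [_] -> Cell; _ -> [] end || Cell <- Solution].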
From here I have a "sample": I look for the shortest candidate list in the solution list whose length is not 1, try the first possible number of that cell, and put it into the fixedlist. I have an algorithm that updates the cells and checks whether the sudoku is still solvable. If it is not, I don't know how to step back and try another candidate.
I know the pseudocode for this and can use it in imperative languages, but not in Erlang. (Prolog actually has backtracking built in, but Erlang doesn't.)
Any idea?
Re: My backtracking functions.
These are the general functions which provide a framework for handling backtracking and logical variables, similar to a Prolog engine. You must provide the functions (predicates) which describe the program logic. If you write them as you would in Prolog, I can show you how to translate them into Erlang. Very briefly, you translate something like:
p :- q, r, s.
in prolog into something like
p(Next0) ->
    Next1 = fun () -> s(Next0) end,  % after r succeeds, run s, then the original continuation
    Next2 = fun () -> r(Next1) end,  % after q succeeds, run r
    q(Next2).                        % start by trying q
Here I am ignoring all other arguments except the continuations.
I hope this gives some help. As I said, if you describe your algorithms I can help you translate them; I have been looking for a good example. You can, of course, just as well do it yourself, but this provides some help.
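To make the idea concrete, here is a minimal standalone sketch (my own illustration, not the framework referred to above) of how choice points and backtracking can be expressed with continuations, using a thrown fail to unwind to the most recent choice point:

choice([], _Next) ->
    throw(fail);                              % no alternatives left: fail upward
choice([Alt | Rest], Next) ->
    try Next(Alt)                             % commit to Alt and continue the search
    catch
        throw:fail -> choice(Rest, Next)      % backtrack: try the next alternative
    end.

%% Example: find X and Y in 1..3 with X + Y =:= 4.
solve() ->
    choice([1, 2, 3], fun(X) ->
        choice([1, 2, 3], fun(Y) ->
            case X + Y of
                4 -> {X, Y};                  % success bubbles up as the result
                _ -> throw(fail)              % dead end: backtrack
            end
        end)
    end).

Here solve() returns {1,3}; for a sudoku, the alternatives would be the candidate numbers of the chosen cell.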
I was just thinking about the different mapping functions in Common Lisp as described in the HyperSpec. I am pretty much used to mapcar and think it is the easiest to understand. But what is a real-world example of using mapc? The example in the HyperSpec uses it for a side effect, as far as I can tell. But why does it return the list argument?
Is there a general rule for when such a mapping is favourable over an iteration using loop etc.?
What is a real-world example of using mapc?
(mapc #'print my-list) is clearer than (dolist (x my-list) (print x))
Why does it return the list argument?
The functional heritage imposes the thinking that every function should return something useful; for mapc it is the original list.
I think mapc returns its list argument for the same reason print does - to simplify debugging by sprinkling your code with output. E.g., suppose you do something like
(mapcar #'important-processing
        list-with-weird-elements)
You want to see what's inside the list while preserving the logic:
(mapcar #'important-processing
        (mapc #'show-weird-object list-with-weird-elements))
Also, a lot of things in CL are for "hysterical reasons".
Is there a general rule for when such a mapping is favourable over an iteration using loop etc.?
Only if you already have a function which does what you need, like print above.
I'm writing a program that takes in input a straight play in a custom format and then performs some analysis on it (like number of lines and words for each character). It's just for fun, and a pretext for learning cool stuff.
The first step in that process is writing a parser for that format. It goes:
####Play
###Act I
##Scene 1
CHARACTER 1. Line 1, he's saying some stuff.
#Comment, stage direction
CHARACTER 2, doing some stuff. Line 2, she's saying some stuff too.
It's quite a simple format. I have read extensively about basic parser stuff like CFGs, so I am now ready to get some work done.
I have written my grammar in EBNF and started playing with flex/bison, but it raises some questions:
Is flex/bison too much for such a simple parser? Should I just write it myself, as described here: "Is there an alternative for flex/bison that is usable on 8-bit embedded systems?"?
What is good practice regarding the respective tasks of the tokenizer and the parser itself? There is never a single solution, and for such a simple language they often overlap. This is especially true for flex/bison, where flex can perform some intense stuff with regex matching. For example, should "#" be a token? Should "####" be a token too? Should I create types that carry semantic information so I can directly identify, for example, a character? Or should I just process it with flex in the simplest way and then let the grammar defined in bison decide what is what?
With flex/bison, does it make sense to perform the analysis while parsing, or is it more elegant to parse first and then operate on the file again with some other tool?
This has got me really confused. I am looking for an elegant, perhaps simple, solution. Any guidelines?
By the way, I don't care much about the programming language. For now I am using C because of flex/bison, but feel free to advise me on anything more practical, as long as it is a widely used language.
It's very difficult to answer those questions without knowing what your parsing expectations are. That is, an example of a few lines of text does not provide a clear understanding of what the intended parse is; what the lexical and syntactic units are; what relationships you would like to extract; and so on.
However, a rough guess might be that you intend to produce a nested parse, where #{i} indicates the nesting level (inversely), with i > 1, since a single # is not structural. That violates one principle of language design ("don't make the user count things which the computer could count more accurately"), which might suggest a structure more like:
#play {
  #act {
    #scene {
      #location: Elsinore. A platform before the castle.
      #direction: FRANCISCO at his post. Enter to him BERNARDO
      BERNARDO: Who's there?
      FRANCISCO: Nay, answer me: stand, and unfold yourself.
      BERNARDO: Long live the king!
      FRANCISCO: Bernardo?
or even something XML-like. But that would be a different language :)
The problem with parsing either of these with a classic scanner/parser combination is that the lexical structure is inconsistent; the first token on a line is special, but most of the file consists of unparsed text. That will almost inevitably lead to spreading syntactic information between the scanner and the parser, because the scanner needs to know the syntactic context in order to decide whether or not it is scanning raw text.
You might be able to avoid that issue. For example, you might require that a continuation line start with whitespace, so that every line not otherwise marked with #'s starts with the name of a character. That would be more reliable than recognizing a dialogue line just because it starts with the name of a character and a period, since it is quite possible for a character's name to be used in dialogue, even at the end of a sentence (which consequently might be the first word in a continuation line.)
If you do intend for dialogue lines to be distinguished by the fact that they start with a character name and some punctuation then you will definitely have to give the scanner access to the character list (as a sort of symbol table), which is a well-known but not particularly respected hack.
Consider the above a reflection about your second question ("What are the roles of the scanner and the parser?"), which does not qualify as an answer but hopefully is at least food for thought. As to your other questions, and recognizing that all of this is opinionated:
Is flex/bison too much for such a simple parser? Should I just write it myself...
The fact that flex and bison are (potentially) more powerful than necessary to parse a particular language is a red herring. C is more powerful than necessary to write a factorial function -- you could easily do it in assembler -- but writing a factorial function is a good exercise in learning C. Similarly, if you want to learn how to write parsers, it's a good idea to start with a simple language; obviously, that's not going to exercise every option in the parser/scanner generators, but it will get you started. The question really is whether the language you're designing is appropriate for this style of parsing, not whether it is too simple.
With flex/bison, does it make sense to perform the analysis while parsing, or is it more elegant to parse first and then operate on the file again with some other tool?
Either can be elegant, or disastrous; elegance has more to do with how you structure your thinking about the problem at hand. Having said that, it is often better to build a semantic structure (commonly referred to as an AST -- abstract syntax tree) during the parse phase and then analyse that structure using other functions.
Rescanning the input file is very unlikely to be either elegant or effective.
Let's say in the example lower case is constant and upper case is variable.
I'd like to have programs that can "intelligently" do specified tasks like algebra, but teaching the program new methods should be easy, using symbols understood by humans. For example, if the program is told these facts:
aX+bX=(a+b)X
if a=bX then X=a/b
Then it should be able to perform these operations:
2a+3a=5a
3x+3x=6x
3x=1 therefore x=1/3
4x+2x=1 -> 6x=1 therefore x = 1/6
I was trying to do similar things with Prolog, as it can easily "understand" variables, but then I had too many complications, mainly because describing a relationship in both directions results in a crash (not easy to sort out).
To summarise: I want to know if a program can be taught algebra by using mathematical symbols only. I'd like to know if other people have tried this and how complicated it is expected to be. The purpose of this is to make programming easier (runtime is not so important).
It depends on what you want the machine to do and how intelligent it should be.
Your question is mostly about AI, not ML. AI deals with the formalization of "human" tasks, while ML (though a subset of AI) is about building models from data.
The described program may be implemented like this:
Each fact forms a pattern. Given an expression and some patterns, the program can try to apply some of them to the expression and see what happens. If you want your program to be able to, for example, solve quadratic equations given a rule like ax² + bx + c = 0 → x = (-b ± sqrt(b² - 4ac))/(2a), then it would be designed as follows:
Somebody gives a set of rules. A rule consists of a pattern and an outcome (a solution or an equivalent form). Think of the pattern as a kind of regular expression.
Then the program is asked to show some intelligence and prove its knowledge by doing something with a given expression. Here comes the major part:
you build a graph of expressions by applying the possible rules (if a pattern is applicable to an expression, you add a new vertex with the corresponding outcome);
then you run some path-search algorithm (A*, for example) to find a sequence of transformations leading to a form like x = ... (see the sketch below).
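As a minimal sketch of the rule-application step (my own illustration, written in Erlang for concreteness; the tuple representation of expressions is just an assumption), each rule can be a function clause that matches a pattern and returns the outcome:

%% Expressions as terms: {plus, E1, E2}, {times, Coeff, Var}, {eq, L, R}.
rewrite({plus, {times, A, V}, {times, B, V}}) when is_number(A), is_number(B) ->
    {times, A + B, V};                % aX + bX -> (a+b)X
rewrite({eq, {times, B, V}, A}) when is_number(A), is_number(B) ->
    {eq, V, A / B};                   % bX = a  ->  X = a/b
rewrite(Expr) ->
    Expr.                             % no rule applies: leave unchanged

For instance, rewrite({plus, {times, 4, x}, {times, 2, x}}) gives {times, 6, x}, and rewriting the resulting {eq, {times, 6, x}, 1} binds x to 1/6 (as a float); the graph search then just explores repeated rewrite applications until it reaches a form like {eq, Var, Number}.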
I think this is an interesting question, although it is off-topic on SO (tool recommendation).
But nevertheless, because it captured my imagination, I wrote a couple of functions in R that can solve stuff like that quite easily.
First, you'll have to install R; afterwards you'll need to download a package called stringr.
So in the R console run
install.packages("stringr")
library(stringr)
And then you can define the following functions that I wrote
# Evaluate the numeric part (variable letters stripped out), then re-append the variable letter.
FirstFunc <- function(temp){
  paste0(eval(parse(text = gsub("[A-Z]", "", temp))),
         unique(str_extract_all(temp, "[A-Z]")[[1]]))
}

# Evaluate the right-hand side and divide by the evaluated coefficient expression on the left.
SecondFunc <- function(temp){
  eval(parse(text = strsplit(temp, "=")[[1]][2])) /
    eval(parse(text = gsub("[[:alpha:]]", "", strsplit(temp, "=")[[1]][1])))
}
Now, the first function will solve equations like
aX+bX=(a+b)X
While the second will solve equations like
4x+2x=1
For example
FirstFunc("3X+6X-2X-3X")
will return
"4X"
Now, these functions are pretty primitive (mostly for the purpose of illustration) and will only solve equations that contain one variable type; something like FirstFunc("3X-2X-2Y") won't give the correct result (but the function could easily be modified).
The second function will solve stuff like
SecondFunc("4x-2x=1")
will return
0.5
or
SecondFunc("4x+2x*3x=1")
will return
0.1
Note that this function also works only for one unknown variable (x), but it too could easily be modified.
I have a module which does some unconstrained minimization. I'd like to keep its interface as simple as possible, so the best choice would be to reduce it to a single function, something like min_of( F ).
But since it is a brute-force computation, I would have to deal with at least two constants: the precision of the minimization algorithm and the maximum number of iterations, so it would not hang if the target function doesn't have a local minimum at all.
Anyway, the next best choice is min_of( F, Iterations, Eps ). It's OK, but I don't like it. I would still like to have another min_of( F ) defined something like this:
min_of( F ) ->
    min_of( F, 10000, 0.0001).
But without magic numbers.
I'm new to Erlang, so I don't know how to deal with this properly. Should I define a macro, a variable, or maybe a function returning a constant? Or even something else? I find Erlang quite expressive, so this seems to be more a question of good practice than a technical one.
You can define macros like this
-define(ITERATIONS, 10000).
-define(EPS, 0.0001).
and then use them as
min_of(F, ?ITERATIONS, ?EPS).
You can use macros, but you can also use inlined functions.
-compile({inline, [iterations/0, eps/0]}).
iterations() -> 10000.
eps() -> 0.0001.
and then use it in the way
min_of(F) ->
    min_of(F, iterations(), eps()).
The benefit is that you can use all the syntax tools without needing epp. In this case the function call is not performance-critical anyway, so you can even go without the inline directive.
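For completeness, a minimal sketch of how the whole module might be laid out with this approach (the module name and the min_of/3 stub are my own placeholders):

-module(minimize).
-export([min_of/1, min_of/3]).
-compile({inline, [iterations/0, eps/0]}).

iterations() -> 10000.    % default maximum number of iterations
eps() -> 0.0001.          % default precision

%% Convenience interface with the defaults filled in.
min_of(F) ->
    min_of(F, iterations(), eps()).

%% Full interface; the real minimization loop goes here.
min_of(_F, _Iterations, _Eps) ->
    erlang:error(not_implemented).    % placeholder body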
I'm trying to learn how to think in a functional programming way; to that end, I'm learning Erlang and solving easy problems from codingbat. I came upon the common problem of comparing elements inside a list, for example comparing the value of the element at position i with the value at position i+1. So I have been thinking and searching for how to do this in a functional way in Erlang (or any functional language).
Please be gentle with me, I'm very much a newbie in this functional world, but I want to learn.
Thanks in advance.
Define a list:
L = [1,2,3,4,4,5,6]
Define a function f, which takes a list
If it matches a list of one element or an empty list, return the empty list.
If the first element matches the second element, take the first element and construct a new list by recursing on the rest of the list.
Otherwise skip the first element of the list.
In Erlang code:
f([]) -> [];                        % empty list: nothing to compare
f([_]) -> [];                       % single element: no neighbour to compare with
f([X, X | Rest]) -> [X | f(Rest)];  % two equal neighbours: keep one and recurse
f([_ | Rest]) -> f(Rest).           % otherwise drop the first element
Apply function
f(L)
This should work... I haven't compiled and run it, but it should get you started, and you can modify it in case you need it to behave differently.
Welcome to Erlang ;)
I'll try to be gentle ;-) The main thing in the functional approach is thinking in these terms: what is the input? What should the output be? There is no such thing as comparing the i-th element with the (i+1)-th element in isolation; there always has to be a purpose behind it, which leads to a data transformation. Even Mazen Harake's example does this: it is a function which returns only the elements that are followed by the same value, i.e. it filters the given list. Typically there are very different ways to do similar things, depending on the purpose. The list is the basic functional structure, and you can do amazing things with it, as Lisp shows us, but you have to remember that it is not an array.
Each time you need repeated access to the i-th element, it indicates that you are using the wrong data structure. You can build different data structures from lists and tuples in Erlang which can serve your purposes better. So when you face the problem of comparing the i-th with the (i+1)-th element, you should stop and think: what is the purpose? Do you need to perform some stream data transformation, as Mazen Harake does, or do you need random access? If the latter, you should use a different data structure (an array, for example). Even then you should think about the characteristics of your task. If you will mostly read and almost never write, you can use list_to_tuple(L) and then read using element/2. When you need to write occasionally, you will start thinking about partitioning it into several tuples, and as your write ratio grows you will end up with an array implementation.
So you can use lists:nth/2 if you will do it only once, or several times but on a short list, and you are not a performance freak like me. You can improve it using [X1,X2|_] = lists:nthtail(I-1, L) (L = lists:nthtail(0, L) works as expected), as sketched below. If you are facing bigger lists and you want to call it many times, you have to rethink your approach.
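As a small illustration of that lists:nthtail/2 trick (pair_at is a hypothetical helper name):

%% Fetch the I-th and (I+1)-th elements (1-based, like lists:nth/2)
%% in a single traversal instead of two lists:nth/2 calls.
pair_at(I, L) ->
    [X1, X2 | _] = lists:nthtail(I - 1, L),
    {X1, X2}.

For example, pair_at(4, [1,2,3,4,4,5,6]) returns {4,4}.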
P.S.: There are many other fascinating data structures besides lists and trees. Zippers, for example.