As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I want to develop a logical expression evaluator that computes the applicability of a given logical expression against another expression. For example,
An expression could be of the form
(A AND B) NOT C
this expression should then be evaluated with another expression like
(B AND C) OR D
The result of the evaluation in the above case is FALSE, as the second expression doesn't fulfill the first.
The expressions can be more complex as well: they can contain numerical ranges such as R(1-100), which means the expression is applicable throughout that range, much like [A-Za-z0-9] in a regular expression.
So the expression can be complex like
(A AND B) OR C AND R(1-100) NOT R(80-100)
and this has to be then evaluated by an expression like
(C OR D) AND B NOT R(1-7) AND R(25-100)
There are clear rules on when an expression satisfies another expression. So, if one has to write such an expression evaluator, what is the best way to go about it? Since I haven't done anything like this before, I would like a head start. Any relevant pointers or similar implementations would be a great help.
You can evaluate boolean expressions fairly easily using a stack.
Basically, as you see values you push them onto the stack, and as you see operators you pop their operands, apply the operator, and push the result. Googling "boolean expression stack" will give you plenty of hits.
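Here is a minimal sketch of that stack approach. The expression is assumed to already be in postfix (RPN) form; converting infix input such as "(A AND B) NOT C" to postfix (e.g. with the shunting-yard algorithm) is a separate step. The tokenization and the treatment of NOT as ordinary unary negation are my own assumptions; the question's binary use of NOT ("X NOT Y") would need its own operator rule.

```python
def eval_postfix(tokens, env):
    """tokens: postfix token list, e.g. ["A", "B", "AND", "C", "NOT", "AND"]
    env:    truth value of each variable, e.g. {"A": True, ...}"""
    stack = []
    for tok in tokens:
        if tok == "AND":
            b, a = stack.pop(), stack.pop()
            stack.append(a and b)
        elif tok == "OR":
            b, a = stack.pop(), stack.pop()
            stack.append(a or b)
        elif tok == "NOT":
            stack.append(not stack.pop())
        else:                      # a variable: push its truth value
            stack.append(env[tok])
    return stack.pop()

# (A AND B) AND (NOT C), with A = B = True and C = False:
print(eval_postfix(["A", "B", "AND", "C", "NOT", "AND"],
                   {"A": True, "B": True, "C": False}))  # -> True
```

Range atoms like R(1-100) would just be another kind of value pushed on the stack, with AND/OR/NOT interpreted as interval intersection, union, and complement.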
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Both Agda and Idris effectively prohibit pattern matching on values of type Type. It seems that Agda always matches on the first case, while Idris just throws an error.
So, why is typecase a bad thing? Does it break consistency? I haven't been able to find much information regarding the topic.
It's really odd that people think pattern matching on types is bad. We get a lot of mileage out of pattern matching on data which encode types, whenever we do a universe construction. If you take the approach that Thorsten Altenkirch and I pioneered (and which my comrades and I began to engineer), the types do form a closed universe, so you don't even need to solve the (frankly worth solving) problem of computing with open datatypes to treat types as data. If we could pattern match on types directly, we wouldn't need a decoding function to map type codes to their meanings, which at worst reduces clutter, and at best reduces the need to prove and coerce by equational laws about the behaviour of the decoding function. I have every intention of building a no-middleman closed type theory this way. Of course, you need that level 0 types inhabit a level 1 datatype. That happens as a matter of course when you build an inductive-recursive universe hierarchy.
But what about parametricity, I hear you ask?
Firstly, I don't want parametricity when I'm trying to write type-generic code. Don't force parametricity on me.
Secondly, why should types be the only things we're parametric in? Why shouldn't we sometimes be parametric in other stuff, e.g., perfectly ordinary type indices which inhabit datatypes but which we'd prefer not to have at run time? It's a real nuisance that quantities which play a part only in specification are, just because of their type, forced to be present.
The type of a domain has nothing whatsoever to do with whether quantification over it should be parametric.
Let's have (e.g. as proposed by Bernardy and friends) a discipline where both parametric/erasable and non-parametric/matchable quantification are distinct and both available. Then types can be data and we can still say what we mean.
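The universe-construction idea mentioned above can be sketched even in a dynamically typed language: type codes are ordinary data, a decoding function maps each code to its meaning, and type-generic operations are written by pattern matching on the codes. The encoding below is my own toy illustration, not Agda or Idris:

```python
# A toy closed universe: type codes as plain data, plus a decoding
# function mapping each code to (a recognizer for) its meaning.

NAT = ("nat",)
BOOL = ("bool",)
def PAIR(a, b): return ("pair", a, b)

def decode(code):
    """The "middleman": maps a type code to a membership predicate."""
    tag = code[0]
    if tag == "nat":
        return lambda x: isinstance(x, int) and not isinstance(x, bool) and x >= 0
    if tag == "bool":
        return lambda x: isinstance(x, bool)
    if tag == "pair":
        _, a, b = code
        return lambda x: (isinstance(x, tuple) and len(x) == 2
                          and decode(a)(x[0]) and decode(b)(x[1]))

def generic_eq(code, x, y):
    """A type-generic operation, defined by matching on the code."""
    tag = code[0]
    if tag in ("nat", "bool"):
        return x == y
    if tag == "pair":
        _, a, b = code
        return generic_eq(a, x[0], y[0]) and generic_eq(b, x[1], y[1])

print(generic_eq(PAIR(NAT, BOOL), (1, True), (1, True)))  # -> True
```

If the language let us match on types directly, decode would disappear, which is exactly the clutter reduction described above.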
Many people see matching on types as bad because it breaks parametricity for types.
In a language with parametricity for types, when you see a variable
f : forall a . a -> a
you immediately know a lot about the possible values of f. Intuitively: Since f is a function, it can be written:
f x = body
The body needs to be of type a, but a is unknown, so the only available value of type a is x. If the language allows nontermination, f could also loop. But can it choose between looping and returning x based on the value of x? No: because a is unknown, f doesn't know which functions to call on x in order to make that decision. So there are really just two options: f x = x and f x = f x. This is a powerful theorem about the behavior of f that we get just by looking at the type of f. Similar reasoning works for all types with universally quantified type variables.
Now if f could match on the type a, many more implementations of f are possible. So we would lose the powerful theorem.
In Agda, you cannot pattern match on Set because it isn't an inductive type.
Closed 10 years ago.
I think that to add an object we need to: create a new array with a bigger size + make a copy of the older array + add the element. So the final complexity is O(N), where N is the final number of elements.
Removing is O(N) as well.
Am I wrong?
Thanks.
I think that to add an object we need: create new array with a bigger size + make a copy of older array + add an element.
No. A new array is not created and copied on every add. NSMutableArray stores pointers to objects, not the objects themselves, and like most dynamic-array implementations it keeps spare capacity and grows it only occasionally, so the objects are never copied and the pointer buffer is reallocated rarely.
Adding an object should therefore be amortized O(1).
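The amortized argument can be illustrated with a toy dynamic array that doubles its capacity when full, which is how most array-list implementations behave (this is a sketch of the general technique, not NSMutableArray's actual code):

```python
class DynamicArray:
    """Toy growable array: doubles capacity when full, so the total
    number of element copies over n appends stays below 2n, i.e.
    append is amortized O(1)."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.slots = [None]
        self.copies = 0                  # element copies caused by regrowth

    def append(self, item):
        if self.size == self.capacity:
            new_slots = [None] * (self.capacity * 2)
            for i in range(self.size):   # the occasional O(N) copy...
                new_slots[i] = self.slots[i]
                self.copies += 1
            self.slots = new_slots
            self.capacity *= 2
        self.slots[self.size] = item     # ...but the common case is O(1)
        self.size += 1

arr = DynamicArray()
for k in range(1000):
    arr.append(k)
print(arr.copies)  # -> 1023 copies for 1000 appends: amortized O(1)
```

The copies form the geometric series 1 + 2 + 4 + ... + 512 = 1023, which is why the average cost per append is constant even though individual appends are occasionally O(N).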
Closed 10 years ago.
Hello, I am trying to implement a translator.
Since it is getting more and more complicated, I will try to explain better what I'd like to implement.
I need to specify a new Java-like language.
This language must support all the structures of a Java method: variable declarations, expressions, conditional expressions, parenthesized expressions and so on...
The language will work with vectors, constants and booleans.
It has different functions: log, avg, sqrt, as well as sum, diff, shift and so on.
This language must be translated into PL/SQL and other languages. So a defined method will become a stored procedure, a C++ function, or whatever.
I also need to consider mathematical constraints such as operator precedence (+, -, *, /, <<, >> and so on...).
I already got this hint: decompose the expression into base operations: ANTLR + StringTemplate.
I need to know the best solution for achieving my task. I suppose I have to use all of these pieces in a pipelined fashion, but I don't want to find the solution by trial and error.
I tried different (separate) solutions, but putting them all together is hard for me.
My last problem is to distinguish an expression between a vector and a constant from an expression between a vector and a vector. In fact, in PL/SQL I have different functions for handling these situations, i.e. an expression vector1+5 (or 5+vector1) must be translated as PKG_FUN.constant_sum(cursor1, 5), whereas vector1+vector2 must be translated as PKG_FUN.vector_sum(vector1, vector2). Moreover, I can have functions or expressions that produce a vector and others that produce a constant, and this must be considered when analyzing an expression (i.e. vector a = vector1 + ((5+var2)*ln(vector2)*2)^2).
An example of this language can be:
DEFINE my_new_method(date date_from, date date_to, long variable1, long variable2){
    vector result;
    vector out1;
    vector out2;
    int max = -5+(4);
    out1 = GET(date_from, date_to, variable1, 20);
    out2 = GET(date_from, date_to, variable2);
    if(avg(out1) > max)
    {
        result = sqrt(ln(out2) + max)*4;
    }else
    {
        result = out1 + ln(out1) + avg(out2);
    }
    for(int i=0; i<result.length ; i++)
    {
        int num = result.get(i);
        result.set(num*5, i);
    }
    return result;
}
I should translate it into PL/SQL, C, C++, or other languages.
Any help would be appreciated.
What you need is "type inference". For every expression, you need to know the types of its operands, and the types of the results of each operator symbol.
You get this by a few steps:
1) by building a symbol table that records the type of each declared entity in your variable scopes
2) by walking each expression, computing the types of the leaf nodes first: in your language, at least, all constant values are scalars, and any identifier has a type you can look up in the symbol table. For most languages, the type of an operator's result can be computed from the language rules for that operator, given its operand types. (Some languages require the types to be computed by constraint propagation.) Having computed all of these types, you need to associate each tree node with its type (or at least be able to compute the type for a node on demand).
With this computed type information, you can differentiate between different operators (e.g., + on two vectors, + with a vector first operand and a scalar second, etc.) and so choose which target language construct to generate.
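That bottom-up type computation can be sketched as follows. The target names PKG_FUN.constant_sum and PKG_FUN.vector_sum come from the question; the AST encoding, the symbol-table shape, and restricting to a single + operator are my own simplifying assumptions:

```python
# Bottom-up type computation over a tiny expression AST, used to pick
# between the vector and scalar versions of "+" when generating PL/SQL.

SCALAR, VECTOR = "scalar", "vector"

def infer(node, symtab):
    """node: ("const", v) | ("var", name) | ("+", left, right)."""
    kind = node[0]
    if kind == "const":
        return SCALAR                    # all constants are scalars here
    if kind == "var":
        return symtab[node[1]]           # declared type from the symbol table
    if kind == "+":
        lt, rt = infer(node[1], symtab), infer(node[2], symtab)
        return VECTOR if VECTOR in (lt, rt) else SCALAR

def gen(node, symtab):
    """Emit target code, choosing the construct from the operand types."""
    if node[0] == "const":
        return str(node[1])
    if node[0] == "var":
        return node[1]
    lt, rt = infer(node[1], symtab), infer(node[2], symtab)
    l, r = gen(node[1], symtab), gen(node[2], symtab)
    if lt == rt == VECTOR:
        return f"PKG_FUN.vector_sum({l}, {r})"
    if lt == rt == SCALAR:
        return f"({l} + {r})"
    vec, sca = (l, r) if lt == VECTOR else (r, l)
    return f"PKG_FUN.constant_sum({vec}, {sca})"

symtab = {"vector1": VECTOR, "vector2": VECTOR}
print(gen(("+", ("var", "vector1"), ("const", 5)), symtab))
# -> PKG_FUN.constant_sum(vector1, 5)
```

The same dispatch extends to the other operators and to functions whose result type differs from their argument types (avg: vector to scalar, ln: vector to vector, and so on).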
ANTLR doesn't offer you any support in building and managing symbol tables, or in computing the type information, other than giving you a tree. Once you have the tree and all the type information, you can choose which string template to use to generate code, giving you an on-the-fly style translator. So doing this is just a lot of sweat. (Doing an on-the-fly translator has a downside: you had better generate the exact code you want at that point, because you have no chance to optimize the generated result, and that likely means huge case analyses of the tree to choose what to generate.)
Our DMS Software Reengineering Toolkit does offer such additional support for constructing symbol tables, and for computing inferences over trees with its attribute-grammar evaluators, along with additional means to write explicit transformations, easily made conditional on such type lookups. The transformations map from a tree in the source language to a tree in the target language. You can then do "simpler" translations to the target language, and apply optimizations in the target language using additional explicit transforms. This can greatly simplify the translation process.
But in any case, building a full translator for one language (let alone 3) is a lot of work, for those that have experience and background for doing it. The fact that you have asked this question suggests you likely don't understand lots of issues related to analyzing and transforming code. I suggest you read a good compiler book (e.g., Aho/Ullman/Sethi "Compilers") before you proceed, or you are likely to run into other troubles like this.
Closed 10 years ago.
I have a template of a letter and many of its variations (see below), which I acquire from a digital pen:
[Images: Template, Test 1, Test 2]
These letters are scaled to be in the same bounding box.
I want to detect the mistakes in the letter. For example, the mistake in Test 1 is that there is an extra line, and the mistake in Test 2 is that there is a missing segment. Similarly, there can be a mistake where there is a curve instead of a line segment. I want to find the parts which need to be corrected. How should I go about doing it?
One ambiguity is whether you only want to know the difference between your template and the test image, or whether you want to detect the letter A using your template.
As you mentioned, the difference between your template and the test image is that extra line, but I think there are more differences, e.g. the template A is not made only of straight lines but includes some curves as well, while the Test 1 image is made up of approximately straight lines.
These two are different problems in image processing and must be treated differently. First you have to decide what you want to do.
However, one solution is that you can divide the template and the test image into sub-blocks and try to find the correlation between them. If that gives a match up to a predefined threshold (you should define it intelligently), then there is no difference; otherwise mark that block as a difference between the template and the test image.
You can use the xcorr2 function in MATLAB; the MATLAB help is sufficient to understand how this function works.
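The block-wise comparison can be sketched outside MATLAB as well. The version below splits two equal-sized binary images into blocks and flags blocks whose correlation falls below a threshold; the block size, the threshold, and the mean-based fallback for flat blocks are assumptions you would need to tune:

```python
import math

def corr(a, b):
    """Pearson correlation of two equal-length pixel lists; flat
    blocks (zero variance) are compared by their means instead."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:
        return 1.0 if ma == mb else 0.0
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(va * vb)

def block_differences(template, test, block=2, threshold=0.9):
    """Return (block_row, block_col) indices of the blocks where the
    two images disagree, i.e. correlation falls below the threshold."""
    h, w = len(template), len(template[0])
    flagged = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            idx = [(i, j) for i in range(r, min(r + block, h))
                          for j in range(c, min(c + block, w))]
            a = [template[i][j] for i, j in idx]
            b = [test[i][j] for i, j in idx]
            if corr(a, b) < threshold:
                flagged.append((r // block, c // block))
    return flagged

# Template with one horizontal stroke; the test image has an extra pixel.
template = [[0, 0, 0, 0],
            [1, 1, 1, 1],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]
test = [row[:] for row in template]
test[3][3] = 1                            # the "extra line" defect
print(block_differences(template, test))  # -> [(1, 1)]
```

The flagged block coordinates are exactly "the parts which need to be corrected" that the question asks for, at block granularity.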
Closed 11 years ago.
I understand well how postfix and prefix increments/decrements work. But my question is, in a for loop, which is more efficient or faster, and which is more commonly used and why?
Prefix?
for(i = 0; i < 3; ++i) {...}
Or postfix?
for(i = 0; i < 3; i++) {...}
For ints in this context there is no difference -- the compiler will emit the same code under most optimization levels (I'd venture to say even in the case of no optimization).
In other contexts, like with C++ class instances, there is some difference.
In this particular case, neither is actually more efficient than the other. I would expect ++i to be more commonly used, because that's what is more efficient for other kinds of iterations, like iterator objects.
In my opinion, choosing prefix or postfix in a for loop depends on the language itself. In C++, prefix is more efficient and consistent, because with the prefix form the compiler does not need to make a copy of the unincremented value. Moreover, your value need not be an integer; if it is an object (such as an iterator), the prefix form is the better choice.
Either works, and neither is more efficient or faster than the other in this case. It's common for people to use i++, maybe because that is what was used in K&R and other influential books.