Example of combinatorial FSM? - parsing

The Wikipedia page on finite state machines shows a graphic of the automata types:
I've never heard of combinational logic being included in automata theory; normally it's just the Chomsky hierarchy, which starts with FSMs. How, then, would combinational logic be written using a state machine?
For example, if we have an AND gate, I'd see it in a circuit diagram as something like:
           ______
A ------- |      |
          | AND  |------- C
B ------- |______|
And the states would be: 1(A) & 1(B) --> 1(C), 1&0->0, 0&1->0, 0&0->0. But this involves two initial states rather than one, and the input to a 'gate' is the combination of two inputs rather than one, so how would this be shown using an FSM? I suppose it might be possible to do something like the following, with the input symbols being {0,1} and the output {0,1}, like a Moore machine.
      1           1
s0 -------> s2 -------> s3:1
 |           |
 | 0         | 0
 +-------> s3:0 <---+
             |      | 0,1
             +------+
But this seems a bit useless to me, so maybe I'm getting it wrong. What, then, would be a proper way to model combinational logic in a state diagram?
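For concreteness, here is a minimal executable sketch of the machine drawn above (a hypothetical Python illustration; it assumes the two operand bits A and B arrive one per step, and that both output states loop on any further input):

# Moore machine computing A AND B over a two-symbol input sequence.
TRANSITIONS = {
    ("s0", 0): "s3:0",   # A = 0 already forces the output to 0
    ("s0", 1): "s2",
    ("s2", 0): "s3:0",
    ("s2", 1): "s3:1",
    ("s3:0", 0): "s3:0", ("s3:0", 1): "s3:0",   # absorbing output states
    ("s3:1", 0): "s3:1", ("s3:1", 1): "s3:1",
}
OUTPUT = {"s0": 0, "s2": 0, "s3:0": 0, "s3:1": 1}   # Moore output per state

def run(bits):
    state = "s0"
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return OUTPUT[state]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, run([a, b]))   # reproduces the AND truth table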

Here is a simpler way to diagram the above, where the input and output states are either ON (1) or OFF (0), to make it more intuitive.

How to reference a particular row for an existing variable in SPSS syntax?

I have 2 variables, one for raw p-values and another for adjusted p-values. I need to compute a new variable based on the values of these two variables. What I need to do isn't too complicated, but I have a hard time doing it in SPSS because I can't figure out how I can reference a particular row for an existing variable in SPSS syntax.
The first column lists raw p-values in ascending order. The next column lists adjusted p-values, but these adjusted p-values are still incomplete. I need to compare two adjacent p-values in the adjusted p-values column (e.g., rows 1 and 2, rows 2 and 3, rows 3 and 4, and so forth), take whichever p-value is smaller in each comparison, and enter those values into the following column as a new variable.
However, that's not the end of the story. One more condition has to be met: the new p-values have to be in the same order as the raw p-values. I cannot ensure this if I start the comparisons from the top row. You can see that (i') is greater than (h') and (g'), and (d') is greater than (c'), (b'), and (a') in the table below.
To solve this, I would need to start the comparison of the adjusted p-values from the bottom, and compare each adjusted p-value to the new p-value one row below. One exception is that I can simply use the value of (a) as the value of (a'), since (a) should always be the greatest of all the p-values as a rule. Then, for (b'), I need to compare (b) and (a') and enter whichever is smaller as (b'). For (c'), I need to compare (c) and (b') and enter whichever is smaller as (c'), and so forth. Done this way, (d') would be 0.911 and (i') would be 0.017.
Sorry for this long post, but I would really appreciate if I can get some help to do this task in SPSS.
Thank you in advance for your help.
Raw p-values | Adjusted p-values (temporary) | New p-values (final)
-------------|-------------------------------|---------------------
0.002        | 0.030 (i)                     | 0.025 (i')
0.003        | 0.025 (h)                     | 0.017 (h')
0.004        | 0.017 (g)                     | 0.017 (g')
0.005        | 0.028 (f)                     | 0.028 (f')
0.023        | 0.068 (e)                     | 0.068 (e')
0.450        | 1.061 (d)                     | 1.061 (d')
0.544        | 1.145 (c)                     | 0.911 (c')
0.850        | 0.911 (b)                     | 0.911 (b')
0.974        | 0.974 (a)                     | 0.974 (a')
Another tool that may be convenient is the SHIFT VALUES command. It creates new variables whose values are those of existing variables shifted forward or backward by a given number of cases.
I wonder whether the purpose of this is to adjust p-values for multiple testing, as with the Benjamini-Hochberg FDR or similar corrections. If that is the case, you might find the STATS PADJUST (Analyze > Descriptives > Calculate adjusted p values) extension command useful. It offers six adjustment methods. You can install it from the Utilities (pre-V24) or Extensions (V24+) menu.
To get you started, here are a few tools that can help you with this task:
The LAG function
With LAG you can compare values in the current line and the previous one. For example, the following will compare the Pval in each line to the one in the previous line and put the smaller of the two in NewPVal:
compute NewPVal=min(Pval, lag(Pval)).
If you want to do the same process but start from the bottom, you can simply sort your data in reverse order and do the same.
CREATE + LEAD
If you want to make comparisons to the next line instead of the previous line, you should first create a "lead" variable and then compare to it.
For example, the following syntax creates a new variable that, for each line, contains the value of Pval in the next line, and then chooses the smaller of the two for NewPVal:
create /LeadPval=LEAD(Pval 1).
compute NewPVal=min(Pval, LeadPval).
Using case numbers
You can use case numbers (line numbers) in calculations and in conditions. For example, the following syntax will let you make different calculations in the first line and the following ones:
if $casenum=1 NewPval=Pval.
if $casenum>1 NewPVal=min(Pval, lag(Pval)).
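For reference, the bottom-up rule described in the question can be sketched in a few lines of Python (purely illustrative, not SPSS), using the posted adjusted p-values listed top row first:

adjusted = [0.030, 0.025, 0.017, 0.028, 0.068, 1.061, 1.145, 0.911, 0.974]

new = adjusted[:]                      # (a') is simply (a), the last row
for i in range(len(new) - 2, -1, -1):  # walk upward from the second-to-last row
    new[i] = min(adjusted[i], new[i + 1])

print(new)   # (d') becomes 0.911 and (i') becomes 0.017, as stated in the question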

How many 5-input LUT functions are mappable to two cascaded 4-input LUTs?

The question can be made more precise:
Out of the 2^(2^5) = 0x100000000 different functions realizable in a 5-input LUT, are 0x3c1d3c82 of them realizable with two cascaded 4-input LUTs?
Background
One of the standard building blocks in FPGAs is the 4-input look-up table (LUT). One such table can realize 2^(2^4) = 65536 functions. A standard method to construct an arbitrary 5-input LUT is to use three 4-input LUTs in two levels of logic, with the second-level LUT acting as a 2-to-1 multiplexer. An arbitrary function f5(i4,i3,i2,i1,i0) may be represented as ff5(i4, g4(i3,i2,i1,i0), h4(i3,i2,i1,i0)), as illustrated below.
      ___                       ___
i3 --|I3 |                  ---|I3 |
i2 --|I2 |              i4 ---|I2 |--- ff5
i1 --|I1 |----- h4 -----------|I1 |
i0 --|I0_|             +- g4 -|I0_|
      ___              |
i3 --|I3 |             |
i2 --|I2 |-------------+
i1 --|I1 |
i0 --|I0_|
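The construction works because of the Shannon expansion on i4: one first-level LUT holds the i4 = 0 cofactor, the other holds the i4 = 1 cofactor, and the second level selects between them. A minimal sketch (hypothetical Python, assuming bit k of the 32-bit truth table f5 is the output at i4*16 + i3*8 + i2*4 + i1*2 + i0):

def three_lut_f5(f5, i4, i3, i2, i1, i0):
    idx4 = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0
    g4 = (f5 >> idx4) & 1             # first-level LUT: the i4 = 0 cofactor
    h4 = (f5 >> (16 + idx4)) & 1      # first-level LUT: the i4 = 1 cofactor
    return h4 if i4 else g4           # second-level LUT acting as a 2:1 mux

f5 = 0x3c1d3c82                       # any 32-bit truth table works here
ok = all(three_lut_f5(f5, (k >> 4) & 1, (k >> 3) & 1, (k >> 2) & 1, (k >> 1) & 1, k & 1)
         == ((f5 >> k) & 1) for k in range(32))
print(ok)                             # True: the construction reproduces f5 everywhere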
More compact representation
However, many 5-input functions can be represented by two cascaded 4-input LUTs, gg(hh4(j4,j3,j2,j1), j2, j1, j0), where the jn are a permutation of the in (n from 0 to 4). Graphically, this can be shown as:
     lut1
      ___                lut2
j4 --|I3 |                ___
j3 --|I2 |---- hh -------|I3 |
j2 --|I1 |          j2 --|I2 |---- gg
j1 --|I0_|          j1 --|I1 |
                    j0 --|I0_|
Result, to be verified
I have written a brute-force program to find how many true 5-input functions can be represented by the above construction; my results are:
0-input LUT ..... : 0x2           0.0000%   (constant 0 and constant 1)
1-input LUT ..... : 0xa           0.0000%   (buffered signal and inverted signal)
2-input LUT ..... : 0x64          0.0000%
3-input LUT ..... : 0x884         0.0001%
4-input LUT ..... : 0x4ed9a       0.0075%
Chained LUTs .... : 0x3c1d3c82   23.4821%
Uncovered ....... : 0xc3ddccf0   76.5103%
Total ........... : 0x100000000 100%
The 5-input functions are a superset of the 4-input functions; I do not count those 5-input functions that can be represented in a single LUT (or with no LUT at all: the constants 0x00000000 and 0xffffffff, and arguably the five single-input buffers 0xffff0000, 0xff00ff00, 0xf0f0f0f0, 0xcccccccc, 0xaaaaaaaa).
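For cross-checking, here is a small brute-force sketch (hypothetical Python, same assumed bit ordering as above) that enumerates every 5-input function realizable by a single 4-input LUT; by construction, the size of the resulting set should equal the sum of the first five rows of the listing above:

from itertools import combinations

def lut4_as_5input(table4, used):
    # Expand a 16-bit 4-input LUT whose inputs are the variable indices in
    # `used` (a 4-element subset of {0..4}) into a 32-bit 5-input truth table.
    out = 0
    for k in range(32):
        bits = [(k >> v) & 1 for v in used]
        idx = sum(b << p for p, b in enumerate(bits))
        if (table4 >> idx) & 1:
            out |= 1 << k
    return out

single_lut = set()
for used in combinations(range(5), 4):   # which 4 of the 5 inputs feed the LUT
    for table4 in range(1 << 16):        # every possible 4-input LUT content
        single_lut.add(lut4_as_5input(table4, used))

print(hex(len(single_lut)))              # functions needing at most one 4-input LUT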
I wonder if someone else has conducted the same exercise and obtained the same results?
This does not answer your question directly, but looking at your first 3-LUT example: when implemented on any 'modern' FPGA device (Xilinx Spartan 3, Altera Cyclone V, Lattice ECP2 or newer), the 'multiplexer' does not actually use another LUT; rather, there is a dedicated multiplexer resource specifically for the purpose of creating wider logic functions.
These multiplexers avoid any routing delay that would be incurred by using a third LUT for the multiplexer, or using a chained LUT as in your second example, so the resulting wide LUT should operate at a higher frequency than the chained-LUT approach. They are essentially 'for free', in that they can usually only be used for logic associated with the neighboring LUTs in the same ALM/Slice.
I would also say that even if the chained-LUT approach used less logic resource, I would still favor code written plainly, even if it uses more LUTs, over code that directly instantiates LUTs in this way, with their associated totally unreadable LUT initialisation values.

Multiset Partition Using Linear Arithmetic and Z3

I have to partition a multiset into two sets whose sums are equal. For example, given the multiset:
1 3 5 1 3 -1 2 0
I would output the two sets:
1) 1 3 3
2) 5 -1 2 1 0
both of which sum to 7.
I need to do this using Z3 (smt2 input format) and "Linear Arithmetic Logic", which is defined as:
formula : formula /\ formula | (formula) | atom
atom : sum op sum
op : = | <= | <
sum : term | sum + term
term : identifier | constant | constant identifier
I honestly don't know where to begin with this and any advice at all would be appreciated.
Regards.
Here is an idea:
1- Create a 0-1 integer variable c_i for each element. The idea is that c_i is 0 if the element is in the first set and 1 if it is in the second set. You can accomplish that by asserting 0 <= c_i and c_i <= 1.
2- The sum of the elements in the first set can then be written as 1*(1 - c_1) + 3*(1 - c_2) + ...
3- The sum of the elements in the second set can be written as 1*c_1 + 3*c_2 + ...
4- Finally, assert that these two sums are equal and ask the solver for a model (see the sketch below).
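A minimal sketch of this encoding, using Z3's Python bindings rather than raw SMT-Lib2 (an assumption made purely for illustration; the names below are hypothetical):

# Partition a multiset into two sets with equal sums, via 0-1 variables.
from z3 import Int, Solver, Sum, sat

values = [1, 3, 5, 1, 3, -1, 2, 0]
c = [Int(f"c_{i}") for i in range(len(values))]   # c_i = 0 -> set 1, c_i = 1 -> set 2

s = Solver()
for ci in c:
    s.add(0 <= ci, ci <= 1)                       # keep every c_i in {0, 1}

sum1 = Sum([v * (1 - ci) for v, ci in zip(values, c)])   # sum of the first set
sum2 = Sum([v * ci for v, ci in zip(values, c)])         # sum of the second set
s.add(sum1 == sum2)

if s.check() == sat:
    m = s.model()
    in_set2 = [m.eval(ci, model_completion=True).as_long() for ci in c]
    set1 = [v for v, flag in zip(values, in_set2) if flag == 0]
    set2 = [v for v, flag in zip(values, in_set2) if flag == 1]
    print(set1, set2)   # one valid partition; each list sums to half the total (7)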
While SMT-Lib2 is quite expressive, it's not the easiest language to program in. Unless you have a hard requirement to code directly in SMT-Lib2, I'd recommend looking into other languages that have higher-level bindings to SMT solvers. For instance, both Haskell and Scala have libraries that allow you to script SMT solvers at a much higher level. Here's how to solve your problem using Haskell, for instance: https://gist.github.com/1701881.
The idea is that these libraries allow you to code at a much higher level, and then perform the necessary translation and querying of the SMT solver for you behind the scenes. (If you really need to get your hands onto the SMTLib encoding of your problem, you can use these libraries as well, as they typically come with the necessary API to dump the SMTLib they generate before querying the solver.)
While these libraries may not offer everything that Z3 gives you access to via SMTLib, they are much easier to use for most practical problems of interest.

Logical OR vs Logical AND: which should be more binding?

I'm writing a small parser, which will have an OR operator and an AND operator. When you see a series of ORs and ANDs, which do you expect will be more binding? Given the expression a & b | c, do you expect it to mean (a&b)|c or a&(b|c)? Can you give any reason to prefer one over the other?
Do what everyone else does; AND binds tighter than OR (see e.g. C Operator Precedence Table). This is the convention that everyone expects, so adopt the principle of least surprise.
This choice isn't arbitrary. It stems from the fact that AND and OR follow a similar relationship to multiply and add, respectively; see e.g. http://en.wikipedia.org/wiki/Boolean_logic#Other_notations.
Note also that users of your language should be heavily encouraged to use parentheses to make their intentions clear to readers of their code. But that's up to them!
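One way to make this concrete: a small recursive-descent sketch (hypothetical Python, single-character identifiers only, no parentheses) in which AND binds tighter than OR, so a & b | c parses as (a & b) | c:

def parse_or(tokens, pos=0):
    node, pos = parse_and(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "|":
        rhs, pos = parse_and(tokens, pos + 1)   # OR combines whole AND-terms
        node = ("|", node, rhs)
    return node, pos

def parse_and(tokens, pos):
    node, pos = parse_atom(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "&":
        rhs, pos = parse_atom(tokens, pos + 1)  # AND combines bare atoms
        node = ("&", node, rhs)
    return node, pos

def parse_atom(tokens, pos):
    return tokens[pos], pos + 1                 # identifiers only, for brevity

print(parse_or(list("a&b|c"))[0])   # ('|', ('&', 'a', 'b'), 'c')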
AND and OR in Boolean algebra are analogous to * and + in ordinary algebra, so it makes sense that AND binds tighter than OR, just as * binds tighter than + (reading any nonzero arithmetic result as true):

A  B   A*B  A&B   A+B      A|B
0  0    0    0     0        0
0  1    0    0     1        1
1  0    0    0     1        1
1  1    1    1     2 (>0)   1
If you approach it as you would discrete maths, PEMDAS-style precedence suggests that AND is more binding. That's not always the case in practice, though.
I recommend encouraging your users to use parentheses wherever there's ambiguity.
Usually & takes precedence over | in most languages. But you can also require expressions to be written in fully parenthesized form.

How does the Erlang compiler handle pattern matching? What does it output?

I just asked a question about how the Erlang compiler implements pattern matching, and I got some great responses, one of which is the compiled bytecode (obtained with a parameter passed to the c() directive):
{function, match, 1, 2}.
{label,1}.
{func_info,{atom,match},{atom,match},1}.
{label,2}.
{test,is_tuple,{f,3},[{x,0}]}.
{test,test_arity,{f,3},[{x,0},2]}.
{get_tuple_element,{x,0},0,{x,1}}.
{test,is_eq_exact,{f,3},[{x,1},{atom,a}]}.
return.
{label,3}.
{badmatch,{x,0}}
It's all just plain Erlang tuples; I was expecting some cryptic binary thing, but apparently not. I'm asking this on impulse (I could look at the compiler source, but asking questions always ends up yielding extra insight): how is this output translated at the binary level?
Take {test,is_tuple,{f,3},[{x,0}]}, for example. I assume this is one instruction, called 'test'. So is this output essentially the AST of the bytecode-level language, from which the binary encoding is just a 1-to-1 translation?
This is all very exciting; I had no idea I could so easily see what the Erlang compiler breaks things into.
OK, so I dug into the compiler source code to find the answer, and to my surprise the asm file produced with the 'S' parameter to the compile:file() function is actually read in as-is with file:consult(), and then the tuples are checked one by one for further action (line 661, beam_consult_asm(St) ->, in compile.erl). Further on, there is a generated mapping table (in the compile folder of the Erlang source) that gives the serial number of each bytecode label, and I'm guessing this is used to generate the actual binary signature of the bytecode.
Great stuff. And you just have to love the consult() function: you can almost have a Lisp-like syntax for an arbitrary language, skip writing a parser/lexer entirely, and just consult source code into the compiler and work with it. Code as data, data as code...
The compiler has a so-called pattern match compiler which takes a pattern and compiles it down to what is essentially a series of branches, switches and such. The code for Erlang is in v3_kernel.erl in the compiler. It uses the approach from Simon Peyton Jones, "The Implementation of Functional Programming Languages", available online at http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/
Another worthy paper is the one by Peter Sestoft, http://www.itu.dk/~sestoft/papers/match.ps.gz, which derives a pattern match compiler by inspecting partial evaluation of a simpler system. It may be an easier read, especially if you know ML.
The basic idea is that if you have, say:
f(a, b) -> 1;   % clause 1 (placeholder bodies)
f(a, c) -> 2;   % clause 2
f(b, b) -> 3;   % clause 3
f(b, c) -> 4.   % clause 4
Suppose now we have a call f(X, Y), and say X = a. Then only clauses 1 and 2 are applicable, so we check Y = b and then Y = c. If, on the other hand, X /= a, then we know we can skip clauses 1 and 2 and begin testing 3 and 4. The key is that a failed match tells us something about where matching can continue, just as a successful match does. It is a set of constraints which we can solve by testing.
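To illustrate, here is the kind of decision tree such a pattern match compiler might produce for the four clauses of f/2 above, sketched in Python purely for illustration (branch once on X, then on Y):

def f(x, y):
    if x == "a":
        if y == "b":
            return 1                          # clause 1
        if y == "c":
            return 2                          # clause 2
    elif x == "b":
        if y == "b":
            return 3                          # clause 3
        if y == "c":
            return 4                          # clause 4
    raise ValueError("function_clause")       # no clause matched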
Pattern match compilers seek to minimize the number of tests needed before we reach a conclusion. Statically typed languages have some advantages here, since the compiler may know that:
-type foo() :: a | b | c.
and then, if we have
-spec f(foo()) -> any().
f(a) -> 1;
f(b) -> 2;
f(c) -> 3.
and we did not match f(a) or f(b), then f(c) must match. Erlang has to test the last clause as well, and fail if it doesn't match.
