Given a signal x(n), is this concept of shifting and folding correct? - signal-processing

x(n) is given
need x(-n+3)
so to solve it:
first advance the x(n) signal by 3 units (time)
then fold it, or make a reflection of it
are the above steps correct, or is the following correct:
first fold the x(n) signal
then advance the signal by 3 units
?

Yes, this is a common source of confusion when learning about signals. Here's what I usually do.
Let y[n] = x[-n+3]. Because of -n, y[n] is obviously a time-reversed version of x[n]. But the question about the shift remains.
Notice that y[3] = x[0]. Therefore, y[n] is achieved by first reflecting x[n] about n=0 and then delaying the reflected signal by 3.
For example, let x[n] be the unit step function u[n]. Draw x[n], then draw y[n].
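If you'd rather check this numerically than by drawing, here is a quick sketch in Python (the index window and helper names are mine, just for illustration):

# Check that y[n] = x[-n+3] equals "fold x[n] about n = 0, then delay by 3",
# using x[n] = u[n], the unit step.
def x(k):                                    # unit step u[n]
    return 1 if k >= 0 else 0

window = range(-5, 6)

y_direct = [x(3 - k) for k in window]        # y[n] = x[-n+3], written directly

def folded(k):                               # step 1: fold about n = 0 -> x[-n]
    return x(-k)

def fold_then_delay(k):                      # step 2: delay the folded signal by 3
    return folded(k - 3)                     # x[-(n-3)] = x[-n+3]

y_two_step = [fold_then_delay(k) for k in window]

print(y_direct == y_two_step)                # True
print(fold_then_delay(3), x(0))              # 1 1 -> y[3] = x[0], as noted above

The same check works for any finite x[n]; only the window and the definition of x change.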

Actually here is what I do:
Let
x(n) = {1, -1, 2, 4, -3, 0, [6], -3, -1, 2, 7, 9, -7, 5}
Suppose the origin, n = 0, is at the sample 6; the square brackets mark the origin sample. First we find the folded sequence x(-n) from x(n), i.e. we fold (reverse) x(n), and we get
x(-n) = {5, -7, 9, 7, 2, -1, -3, [6], 0, -3, 4, 2, -1, 1}
Then we shift the sequence x(-n) towards the right-hand side by 3 units. The sample values stay the same; only the origin marker moves three positions back along the list:
x(-n+3) = {5, -7, 9, 7, [2], -1, -3, 6, 0, -3, 4, 2, -1, 1}
Now the sample 2 (the one that was x(3) in the original signal) is at the origin.
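If you want to double-check this example numerically, here is a small Python sketch (the list index of the origin is my own bookkeeping; it assumes, as stated above, that x(0) is the sample 6):

# The sequence above, with x(0) taken to be the sample 6 (list index 6).
x_vals   = [1, -1, 2, 4, -3, 0, 6, -3, -1, 2, 7, 9, -7, 5]
x_origin = 6                                   # list index of x(0)

def x(n):
    i = n + x_origin
    return x_vals[i] if 0 <= i < len(x_vals) else 0

# y(n) = x(-n+3): same sample values as the folded sequence, shifted right by 3.
print([x(3 - n) for n in range(-4, 10)])       # 5, -7, 9, 7, 2, -1, -3, 6, 0, -3, 4, 2, -1, 1
print(x(3))                                    # 2 -> the sample at the origin of x(-n+3)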

The above steps are correct.
The second set of steps from the question also works if it is corrected like this:
first fold the x(n) signal
then delay (not advance) the signal by 3 units; this will yield x(-n+3).

Related

Design a Turing machine to accept {1^n : n is a prime number}

Design a Turing machine to accept {1^n : n is a prime number}.
I have this homework to make a recognizer Turing machine that accepts its input if the number of occurrences of 1 is a prime number. As of now, I still have no idea how to handle the prime-number part.
How should I go about this?
Because we're making a Turing machine and we haven't explicitly said we care about performance, odds are we just care about showing that TMs can solve this problem - so, any solution, no matter how dumb, should suffice. What is a correct, if needlessly tedious, way to show that a number in unary format (e.g., 1^p) is a prime number? One way is to check whether p is evenly divisible by any number between 2 and p - 1, inclusive. This is actually pretty easy to do for a Turing machine. Since the problem doesn't tell us not to, we can make it even simpler by using a multi-tape Turing machine for our construction.
Let the input be on tape #1 and use tape #2 to record the current thing we are trying to divide the input by. Before we begin, we can verify that our p is greater than 2, as follows:
see if the current tape square is 1, if so, move right, else halt-reject since 0 is not prime
see if the current tape square is 1, if so, move right, else, halt-reject since 1 is not prime
see if the current tape square is 1, if so, reset the tape head and continue on with our division process, knowing p > 2; else, halt-accept since 2 is prime.
If we're continuing at this point, that means we've verified we are looking at the unary encoding of a number greater than 2. We need to do this because 2 is the first number for which we need to check divisibility, and we don't want to say 2 is composite since 2 divides it. At this stage, we can write 11 (unary 2) on tape #2. If you like, you can do this as you are resetting the tape head as mentioned above. Otherwise, you can use some new states specifically for that part of the setup.
We are now looking at a TM configuration like this:
tape #1: #1111111111111111111111#   (head on the first 1)
tape #2: #11#                       (head on the first 1)
We want to see if the number represented on the second tape evenly divides the number represented on the first tape. To do this, we can "cross out" 1s on the first tape repeatedly, in groups the size of the second tape, until we run out of 1s on the first tape. If we run out in the middle of crossing out a whole group, then the number represented on the first tape is not evenly divisible by the number represented on the second tape, and we can proceed to check increasing numbers on the second tape. If we run out after having crossed out an entire group, then it is evenly divisible by a number other than 1 and itself, so we can halt-reject as the number is not prime. Processing an example (a shorter input, 1^7, so the trace stays readable) would look like:
tape #1: #1111111#   tape #2: #11#   (both heads on the first 1)
tape #1: #x111111#   tape #2: #11#   (one 1 crossed off, both heads moved right)
tape #1: #xx11111#   tape #2: #11#   (a whole group of two crossed off; tape #2's head has reached the blank)
tape #1: #xx11111#   tape #2: #11#   (reset tape #2's head back to the start of its 1s)
tape #1: #xxx1111#   tape #2: #11#   (cross off another pair of 1s ...)
tape #1: #xxxx111#   tape #2: #11#
tape #1: #xxxxx11#   tape #2: #11#   (... reset tape #2's head again and keep going)
tape #1: #xxxxxx1#   tape #2: #11#
tape #1: #xxxxxxx#   tape #2: #11#   (tape #1's head is on a blank; tape #2's head is still on a 1)
At this stage we see a 1 on the second tape and a blank on the first tape; this means the number was not divisible by our current guess. If we had seen blank/blank, we could have halt-rejected immediately. Instead, we need to continue checking larger possible divisors. At this stage we need to:
1. reset the first tape head to the beginning of the input, replacing the x's with 1's again;
2. add an extra 1 to the second tape to increment its value, then reset its head to the beginning of its value;
3. repeat the divisibility check described above.
If we continue this process, we will eventually find a divisor of the number represented on the input tape, if that number is composite. As it stands, however, we would also halt-reject on prime numbers: once the second tape grows to the same number as the input, we would find that the prime number evenly divides itself. We need to check for this, and a good place is between steps 2 and 3 above: compare tapes #1 and #2 and see if they match exactly. This comparison is just like the divisibility check, except that it crosses off at most one copy of tape #2 from tape #1, and it halt-accepts if it reaches blank/blank rather than halt-rejecting.
Obviously, there are a lot of details to fill out if you want to formally define the TM by giving its transition table. But, it can be done and a procedure like the one outlined here can be used to solve this problem. Again, this is not the most efficient way to solve this, but it is a way to solve it, which generally is good enough when looking for TMs for some problem.
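If it helps to see the procedure end to end before writing states and transitions, here is a rough Python mock-up of the same idea (this is only a sketch of the logic, not a transition table; the crossing-off of 1s is simulated with an index instead of actual x's):

def crosses_off_evenly(length, group):
    # Cross off `length` ones in groups of `group` (one pass over tape #2 per group).
    # True iff we run out of 1s exactly at a group boundary, i.e. group divides length.
    i = 0
    while i < length:
        for _ in range(group):
            if i == length:
                return False          # ran out of 1s mid-group: not divisible
            i += 1                    # "write an x" on tape #1 and move right
    return True

def tm_accepts(tape1):
    # Accept iff tape1, a string of 1s, encodes a prime number in unary.
    assert set(tape1) <= {"1"}
    p = len(tape1)
    if p < 2:                         # the first two head moves: 0 and 1 are rejected
        return False
    if p == 2:                        # third square is blank: the input is 2, accept
        return True
    divisor = 2                       # tape #2 starts out holding 11
    while True:
        if divisor == p:              # tapes #1 and #2 match exactly: no proper divisor
            return True
        if crosses_off_evenly(p, divisor):
            return False              # a proper divisor was found: composite
        divisor += 1                  # append a 1 to tape #2 and repeat

print([n for n in range(2, 20) if tm_accepts("1" * n)])   # [2, 3, 5, 7, 11, 13, 17, 19]

Each loop here corresponds to one of the reset/cross-off passes described above, so turning it into states is mostly mechanical.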

Can't interpret SPSS Error Message in MATRIX code

A program to run a Schmid-Leiman transformation using SPSS's Matrix language was published in 2005 by Woolf & Preising in Behavior Research Methods (volume 37, pages 48 to 58). It is probably not important for you to know what a Schmid-Leiman transformation is, but I'll explain in comments if you feel it is necessary.
In modifying the program for my own data, I'm getting an error I can't figure out:
Error # 12302 in column 12. Text: ,
Syntax error.
Execution of this command stops.
Error in RIGHT HAND SIDE of COMPUTE command.
The MATRIX statement skipped.
Here is the beginning of the code. The error is showing as coming in Line 6:
* Encoding: UTF-8.
* Schmid-Leiman Solution for 2 level higher-order Factor analysis.
Matrix.
* ENTER YOUR SPECIFICATIONS HERE.
* Enter first-order pattern matrix.
Compute F1={.461, .253, -.058, -.069;
.241, .600, .143, .033;
.582, .047, -.077, -.125;
.327, .297, -.120, -.166;
.176, .448, -.240, -.099;
.680, .069, -.036, -.138;
.415, .228, -.091, -.153;
.
.
.
.390, .205, .002, -.098;
.164, .369, -.170, -.047
}.
As shown above, the text generating the error is shown as a comma (,), but the actual text (following the COMPUTE statement) in column 12 is an open bracket ({). So I have no idea what is going on. Can someone help?
For reference, the original code as proposed by Woolf & Preising (2005) is found here.
The Woolf & Preising article is found here
PS: The sample program given in the link above does run on my copy of SPSS. Here's the beginning of that code:
* Schmid-Leiman Solution for 2 level higher-order Factor analysis.
Matrix.
* ENTER YOUR SPECIFICATIONS HERE.
* Enter first-order pattern matrix.
Compute F1={0.099, 0.5647, -0.1521;
0.0124, 0.9419, -0.1535;
-0.1501, 0.6177, 0.4218;
0.7441, -0.0882, 0.1425;
0.6241, 0.2793, -0.1137;
0.8693, -0.0331, 0.0289;
-0.0154, -0.2706, 0.6262;
-0.0914, 0.0995, 0.7216;
0.1502, 0.0835, 0.398}.

Is it appropriate for a parser DCG to not be deterministic?

I am writing a parser for a query engine. My parser DCG query is not deterministic.
I will be using the parser in a relational manner, to both check and synthesize queries.
Is it appropriate for a parser DCG to not be deterministic?
In code:
If I want to be able to use query/2 both ways, does it require that
?- phrase(query, [q,u,e,r,y]).
true;
false.
or should I be able to obtain
?- phrase(query, [q,u,e,r,y]).
true.
nevertheless, given that the first snippet would require me to use it as such
?- bagof(X, phrase(query, [q,u,e,r,y]), [true]).
true.
when using it to check a formula?
The first question to ask yourself is: is your grammar deterministic or, in the terminology of grammars, unambiguous? This is not asking whether your DCG is deterministic, but whether the grammar is unambiguous. That can be answered with basic parsing concepts; no use of DCG is needed to answer it. In other words, is there only one way to parse a valid input? The standard book for this is "Compilers : principles, techniques, & tools" (WorldCat)
Now you are actually asking about three different uses for parsing.
A recognizer.
A parser.
A generator.
If your grammar is unambiguous then:
For a recognizer, the answer should be true only for valid input that can be parsed and false for invalid input.
For the parser, it should be deterministic, as there is only one way to parse the input. The difference between a parser and a recognizer is that a recognizer only returns true or false, while a parser returns something more, typically an abstract syntax tree.
For the generator, it should be non-deterministic so that it can generate multiple results.
Can all of this be done with one DCG? Yes. The three different ways depend on how you use the input and output of the DCG.
Here is an example with a very simple grammar.
The grammar is just an infix binary expression with one operator and two possible operands. The operator is (+) and the operands are either (1) or (2).
expr(expr(Operand_1,Operator,Operand_2)) -->
    operand(Operand_1),
    operator(Operator),
    operand(Operand_2).

operand(operand(1)) --> "1".
operand(operand(2)) --> "2".

operator(operator(+)) --> "+".

recognizer(Input) :-
    string_codes(Input,Codes),
    DCG = expr(_),
    phrase(DCG,Codes,[]).

parser(Input,Ast) :-
    string_codes(Input,Codes),
    DCG = expr(Ast),
    phrase(DCG,Codes,[]).

generator(Generated) :-
    DCG = expr(_),
    phrase(DCG,Codes,[]),
    string_codes(Generated,Codes).

:- begin_tests(expr).

recognizer_test_case_success("1+1").
recognizer_test_case_success("1+2").
recognizer_test_case_success("2+1").
recognizer_test_case_success("2+2").

test(recognizer, [ forall(recognizer_test_case_success(Input)) ]) :-
    recognizer(Input).

recognizer_test_case_fail("2+3").

test(recognizer, [ forall(recognizer_test_case_fail(Input)), fail ]) :-
    recognizer(Input).

parser_test_case_success("1+1",expr(operand(1),operator(+),operand(1))).
parser_test_case_success("1+2",expr(operand(1),operator(+),operand(2))).
parser_test_case_success("2+1",expr(operand(2),operator(+),operand(1))).
parser_test_case_success("2+2",expr(operand(2),operator(+),operand(2))).

test(parser, [ forall(parser_test_case_success(Input,Expected_ast)) ]) :-
    parser(Input,Ast),
    assertion( Ast == Expected_ast ).

parser_test_case_fail("2+3").

test(parser, [ forall(parser_test_case_fail(Input)), fail ]) :-
    parser(Input,_).

test(generator, all(Generated == ["1+1","1+2","2+1","2+2"])) :-
    generator(Generated).

:- end_tests(expr).
The grammar is unambiguous and has only 4 valid strings which are all unique.
The recognizer is deterministic and only returns true or false.
The parser is deterministic and returns a unique AST.
The generator is non-deterministic and returns all 4 valid unique strings.
Example run of the test cases.
?- run_tests.
% PL-Unit: expr ........... done
% All 11 tests passed
true.
To expand a little on the comment by Daniel
As Daniel notes
1 + 2 + 3
can be parsed as
(1 + 2) + 3
or
1 + (2 + 3)
So 1+2+3 is an example of what, as you said, is specified by a recursive DCG, and as I noted, a common way out of the problem is to use parentheses to start a new context. Starting a new context means getting a clean slate to start over again. If you are creating an AST, you just put the new context (the items in between the parentheses) as a new subtree at the current node.
With regards to write_canonical/1, this is also helpful but be aware of left and right associativity of operators. See Associative property
e.g.
+ is left associative
?- write_canonical(1+2+3).
+(+(1,2),3)
true.
^ is right associative
?- write_canonical(2^3^4).
^(2,^(3,4))
true.
i.e.
2^3^4 = 2^(3^4) = 2^81 = 2417851639229258349412352
2^3^4 != (2^3)^4 = 8^4 = 4096
The point of this added info is to warn you that grammar design is full of hidden pitfalls; if you have not had a rigorous class in it and done some of it yourself, you could easily create a grammar that looks great and works great and then, years later, is found to have a serious problem. While Python was not ambiguous AFAIK, it did have enough grammar issues that when Python 3 was created, many of them were fixed, so Python 3 is not backward compatible with Python 2 (differences). Yes, they have made changes and libraries to make it easier to use Python 2 code with Python 3, but the point is that the grammar could have used a bit more analysis when it was designed.
The only reason code should be non-deterministic is that the question you are asking has multiple answers. In that case, you'd of course want your query to have multiple solutions. Even then, however, you'd like it to not leave a choice point after the last solution, if at all possible.
Here is what I mean:
"What is the smaller of two numbers?"
min_a(A, B, B) :- B < A.
min_a(A, B, A) :- A =< B.
So now you ask, "what is the smaller of 1 and 2" and the answer you expect is "1":
?- min_a(1, 2, Min).
Min = 1.
?- min_a(2, 1, Min).
Min = 1 ; % crap...
false.
?- min_a(2, 1, 2).
false.
?- min_a(2, 1, 1).
true ; % crap...
false.
So that's not bad code but I think it's still crap. This is why, for the smaller of two numbers, you'd use something like the min() function in SWI-Prolog.
Similarly, say you want to ask, "What are the even numbers between 1 and 10"; you write the query:
?- between(1, 10, X), X rem 2 =:= 0.
X = 2 ;
X = 4 ;
X = 6 ;
X = 8 ;
X = 10.
... and that's fine, but if you then ask for the numbers that are multiple of 3, you get:
?- between(1, 10, X), X rem 3 =:= 0.
X = 3 ;
X = 6 ;
X = 9 ;
false. % crap...
The "low-hanging fruit" are the cases where you as a programmer would see that there cannot be non-determinism, but for some reason your Prolog is not able to deduce that from the code you wrote. In most cases, you can do something about it.
On to your actual question. If you can, write your code so that there is non-determinism only if there are multiple answers to the question you'll be asking. When you use a DCG for both parsing and generating, this sometimes means you end up with two code paths. It feels clumsy but it is easier to write, to read, to understand, and probably to make efficient. As a word of caution, take a look at this question. I can't know that for sure, but the problems that OP is running into are almost certainly caused by unnecessary non-determinism. What probably happens with larger inputs is that a lot of choice points are left behind, there is a lot of memory that cannot be reclaimed, a lot of processing time going into book keeping, huge solution trees being traversed only to get (as expected) no solutions.... you get the point.
For examples of what I mean, you can take a look at the implementation of library(dcg/basics) in SWI-Prolog. Pay attention to several things:
The documentation is very explicit about what is deterministic, what isn't, and how non-determinism is supposed to be useful to the client code;
The use of cuts, where necessary, to get rid of choice points that are useless;
The implementation of number//1 (towards the bottom) that can generate or extract a number.
(Hint: use the primitives in this library when you write your own parser!)
I hope you find this unnecessarily long answer useful.

Why does Octave output g = [... ...]?

When I run this code (in a programming assignment for Coursera):
J = 1/m * [-y.*log(sigmoid((theta)'*X))-(1-y).*log(1-sigmoid((theta)'*X))]
where m = length(y), y is an m-dimensional vector, X is an m*2 matrix, and theta = 0.1, Octave outputs:
g =
[long (#rows)*2 matrix, each entry <1 but extremely close to 1]
g =
[another long (#rows)*2 matrix as before]
J =
[(#rows)*2 matrix with entries such as 3.4932e-002 and 7.8914e-005]
What is g? I never defined it, and it does not appear in my code, yet it is output along with some seemingly unrelated numbers. (I know that the function itself may have problems, but that is a separate issue from what I'm interested in here. I figured that if I knew what g is, I might be able to troubleshoot better. If you have any comments on the function, please don't hesitate to point out what's wrong.)
Whenever you have a statement (inside a function or otherwise) which is not terminated with a semicolon, the output of that statement will display on the terminal.
Assuming that this is the only code you're running, then my guess is that inside your sigmoid function there is a statement of this kind:
g = dosomething() % note: not semicolon terminated!
resulting in terminal output during its execution.
The fact that g is reported twice in the terminal also makes sense, since you are calling the sigmoid function twice in that expression you just wrote.
Also, for the sake of clarity, please do not refer to your one-liner as a function, since that means something entirely different in the context of programming.

Pascal's triangle and Fibonacci sequence explanation

Okay, I need to redraw Pascal's triangle and explain the Fibonacci sequence embedded in it. And I need to observe over 12 rows of the triangle (which ends on the number 144 in the Fibonacci sequence) -- I understand this part, as I am just explaining how the diagonals of the rows sum to the Fibonacci numbers.
But I need to use the fact that the rth number in the nth row of the triangle is
C(n, r) = n! / (r! (n - r)!)
This last part is what's confusing me. How can I use C(n, r) to explain the Fibonacci sequence in the triangle?
Please help. Thanks
Consider the following problem :
In how many ways can you go up a ladder of n steps if you can take either a single step at a time or 2 steps at a time?
Solution 1: Let's construct a recurrence relation for this problem. It's pretty clear that the recurrence would be a(n) = a(n-1) + a(n-2), where a(1) = 1 and a(2) = 2.
Thus, the answer for n would be the (n+1)th Fibonacci term.
Solution 2: Each unique way of climbing up the ladder corresponds to a unique sequence of 1's and 2's which adds up to n. The number of such sequences is therefore our answer. Let's start counting such sequences:
Number of sequences without a 2 = $ {n \choose 0 } $.
Number of sequences with one 2 = $ {n-1 \choose 1 } $.
.
.
.
and so on.
In case of even n, the last term would be $ {n/2 \choose n/2 } $.
And for odd n, it would be $ {(n+1)/2 \choose (n-1)/2 } $.
As you can see, these are the diagonal terms of Pascal's triangle.
Since these two solutions count the same thing, they must be equal. Thus we get the relation between the Fibonacci numbers and the diagonals of Pascal's triangle.
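For a quick numerical sanity check of the identity, here is a short Python sketch (the helper names are mine; it just compares the diagonal sums of the triangle with the recurrence from Solution 1):

from math import comb

# Sum along a "shallow diagonal" of Pascal's triangle: C(n,0) + C(n-1,1) + ...
# By Solution 2 this counts the ways to climb the n-step ladder, so by
# Solution 1 it should be the (n+1)th Fibonacci term.
def ladder_ways(n):
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

fib = [1, 1]                          # F(1), F(2)
while len(fib) < 14:
    fib.append(fib[-1] + fib[-2])

for n in range(1, 13):                # the 12 rows mentioned in the question
    assert ladder_ways(n) == fib[n]   # ladder_ways(n) == F(n+1)

print([ladder_ways(n) for n in range(1, 13)])
# [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]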
Refer to the link
http://ms.appliedprobability.org/data/files/Articles%2033/33-1-5.pdf
if you have any more doubts.

Resources