GoogleQuery returning output in one row only - google-sheets

I'm using QUERY but the output is weird. I just copied this sheet and the original file works properly. As you can see, the output is all stuck in one row, but it should be three or more rows based on the reference. The code is literally copy-pasted; in fact the whole sheet was made with Google Sheets' "Make a copy" function and I only renamed the file. Is there any fix for this? Thank you!
My code:
=QUERY(Main!A:AO, "Select A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, AN, AO where J ='A'")

Use the (optional) headers argument: 1 if you want a header row, 0 if not.
See if this works:
=QUERY(Main!A:AO, "Select A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, AN, AO where J ='A'", 1)

Or you can leave it empty (after the comma), which is equivalent to putting 0 there:
=QUERY(Main!A:AO, "select A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,AN,AO where J ='A'", )


What is the difference between Parallel and Branch combinators in Trax?

I don't understand the difference between the Branch and the Parallel combinators.
They both seem to apply a list of layers in parallel; the only difference is that Branch applies them to copies of the inputs -- what does that mean?
From the Trax documentation:
For example, suppose one has three layers:
F: 1 input, 1 output
G: 3 inputs, 1 output
H: 2 inputs, 2 outputs (h1, h2)
Then Branch(F, G, H) will take 3 inputs and give 4 outputs:
inputs: a, b, c
outputs: F(a), G(a, b, c), h1, h2 where h1, h2 = H(a, b)
Then Parallel(F, G, H) will take 6 inputs and give 4 outputs:
inputs: a, b, c, d, e, f
outputs: F(a), G(b, c, d), h1, h2 where h1, h2 = H(e, f)
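The routing described above can be sketched in plain Python (the layer tuples and helper names here are mine, not Trax's API):

```python
# Each "layer" is (n_in, fn), where fn returns a tuple of outputs.
def parallel(layers, inputs):
    # each layer consumes its own consecutive slice of the inputs
    outputs, i = [], 0
    for n_in, fn in layers:
        outputs.extend(fn(*inputs[i:i + n_in]))
        i += n_in
    return outputs

def branch(layers, inputs):
    # every layer sees a copy of the same leading inputs
    outputs = []
    for n_in, fn in layers:
        outputs.extend(fn(*inputs[:n_in]))
    return outputs

F = (1, lambda a: (f"F({a})",))
G = (3, lambda a, b, c: (f"G({a},{b},{c})",))
H = (2, lambda a, b: (f"h1({a},{b})", f"h2({a},{b})"))

branch_out = branch([F, G, H], ["a", "b", "c"])            # 3 inputs, 4 outputs
parallel_out = parallel([F, G, H], ["a", "b", "c", "d", "e", "f"])  # 6 inputs, 4 outputs
```

So Branch needs only as many inputs as its widest layer (it copies them), while Parallel needs the sum of all the layers' input counts.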

Finding the result after N clocks states

This is the diagram of 5 D flip-flops. At first, EDCBA = 00000; after 6 clock beats, EDCBA = ?
I drew the timeline values of E, D, C, B, A but got the wrong result. My teacher told me that the answer is EDCBA = 01111, but I got 11110. Please help me find the way to solve this exercise.
After the first cycle where not A = 1 is switched to E: 10000
After the second cycle where not A = 1 is switched to E, and E is switched to D: 11000
After the third cycle where not A = 1 is switched to E, and E is switched to D, and D is switched to C: 11100
After the fourth cycle, where not A = 1 is switched to E, E is switched to D, D is switched to C, and C is switched to B: 11110
After the fifth cycle where not A = 1 is switched to E, and E is switched to D, D is switched to C, C is switched to B, and B is switched to A: 11111
After the sixth cycle, where not A = 0 is switched to E (A is now 1), E is switched to D, D is switched to C, C is switched to B, and B is switched to A: 01111
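Assuming the diagram is a shift register where each flop passes its value to the next and E captures not A (which the trace above implies), a quick Python simulation reproduces the teacher's answer:

```python
def step(state):
    # state is the string "EDCBA"; on each clock edge every flop
    # captures its input: E <= not A, D <= E, C <= D, B <= C, A <= B
    e, d, c, b, a = (int(x) for x in state)
    return f"{1 - a}{e}{d}{c}{b}"

state = "00000"
trace = []
for _ in range(6):
    state = step(state)
    trace.append(state)
# trace: 10000, 11000, 11100, 11110, 11111, 01111
```

After the fifth cycle A finally becomes 1, so the sixth cycle shifts not A = 0 into E, giving 01111.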

Create a DCG Parser in PROLOG

I have to implement a context-free parser in Prolog that uses a grammar that can generate:
I saw a tutorial.
I went in a library.
In library a tutorial I saw.
(I know it's not correct grammatically, but I need to see how to match the pattern)
I receive an input as a query - let's suppose it is the first sentence - and I have to print the number of rule applications for a successful parse, or false otherwise.
In order to achieve this, I found these grammars:
s(X,Z):- vp(Y,Z), np(X,Y).
np(X,Z):- det(X,Y), n(Y,Z).
np(X,Z):- det(X,Y), n(Y,Z), np(X,Z).
vp(X,Z):- det(X,Y), v(Y,Z).
det([i|W],W).
det([a|W],W).
det([in|W],W).
n([tutorial|W],W).
n([library|W],W).
v([went|W],W).
v([saw|W],W).
It works for the first 2 sentences, but I don't know how to make it work for the last one and I don't know how to print the number of applications of rules for a successful parsing.
Thank you!
This will help with the number of rule applications for a successful parse. However, as you can see, it will always be the same number as the quantity of words in the sentence. What I did was to implement a 'counter' in the parameters of each rule; each time a 'base rule' succeeds, it increases the value of the 'counter'.
det([i|W], W, A, R) :- R is A + 1.
det([a|W], W, A, R) :- R is A + 1.
det([in|W], W, A, R) :- R is A + 1.
n([tutorial|W], W, A, R) :- R is A + 1.
n([library|W], W, A, R) :- R is A + 1.
v([went|W], W, A, R):- R is A + 1.
v([saw|W], W, A, R):- R is A + 1.
np([], R, R).
np(X, A, R2):- det(X, Y, A, R), np(Y, R, R2).
np(X, A, R3):- det(X, Y, A, R), n(Y, Z, R, R2), np(Z, R2, R3).
vp(X, Z, R2):- det(X, Y, 0, R), v(Y, Z, R, R2).
s(X, R2):- atomic_list_concat(L,' ', X), vp(L, Z, R), np(Z, R, R2), !.
Here are the results.
As you can see, the last sentence still fails. If you follow the flow of the algorithm, the rule 's' calls the rule 'vp', which only admits a 'det' followed by a 'v'. The first word of the third sentence, 'In', will match the 'det' in 'vp', but the next word, 'library', will not succeed on 'v', because 'library' is not a verb; that is why it fails. To conclude, if you want the third sentence to succeed you will have to change your grammar.
By the way, there is a better way, probably a little more complex to learn, but once you understand how to use it, it will be faster to work with and easier to build a complex grammar: Prolog DCGs, https://www.swi-prolog.org/pldoc/man?section=DCG. I mention it in case you did not know about it.
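Not Prolog, but here is a rough Python analogue of the same counting scheme (the lexicon tables and helper names are mine), which may make the control flow easier to trace:

```python
# Each time a "base rule" (det/n/v) consumes a word, the counter goes up.
DET = {"i", "a", "in"}
NOUNS = {"tutorial", "library"}
VERBS = {"went", "saw"}

def parse(sentence):
    """Return the number of base-rule applications, or None on failure."""

    def base(ws, n, lexicon):
        # consume one word if it is in the lexicon, bumping the counter
        return (ws[1:], n + 1) if ws and ws[0] in lexicon else None

    def vp(ws, n):
        # vp --> det, v
        r = base(ws, n, DET)
        return base(*r, VERBS) if r else None

    def np(ws, n):
        # np --> [] | det, n, np | det, np
        if not ws:
            return ws, n
        r = base(ws, n, DET)
        if r:
            r2 = base(*r, NOUNS)
            if r2:
                out = np(*r2)
                if out:
                    return out
            return np(*r)
        return None

    # s --> vp, np  (mirrors the clause order of the Prolog version)
    r = vp(sentence.split(), 0)
    if r:
        r = np(*r)
    return r[1] if r and not r[0] else None
```

As in the Prolog version, the count for a successful parse equals the number of words, and the third sentence fails because 'library' is not a verb.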

Find similar images using Geometric Min Hash: How to calculated theoretical matching probabilities?

I'm trying to match images based on visual words (labeled key points within images). When comparing the simulated results to my theoretical results I get significant deviations, therefore I guess there must be a mistake in my theoretical probability calculation.
You can imagine two images as set of visual words (visual word names range from A to Z):
S1=SetImage1={A, B, C, D, E, F, G, H, I, J, L, M, N, O, Y, Z}
S2=SetImage2={A, L, M, O, T, U, V, W, X, Y, Z}
You can already see that some visual words occur in both sets (e.g. A, Z, Y,...). Now we separate the visual words into primary words and secondary words (see the provided image). Each primary word has a neighborhood of secondary words. You can see the primary words (red rectangles) and their secondary words (words within ellipse). For our example the primary word sets are as follows:
SP1=SetPrimaryWordsImage1={A, J, L}
SP2=SetPrimaryWordsImage2={A, L}
We now randomly select a visual word img1VAL1 from the set SP1 and one word from the neighborhood of img1VAL1, i.e. img1VAL2=SelFromNeighborhood(img1VAL1) resulting into a pair PairImage1={img1VAL1, img1VAL2}. We do the same with the second image and get PairImage2={img2VAL1, img2VAL2}.
Example:
from Image1 we select A as primary visual word and C as secondary word since C is within the neighborhood of A. We get the pair {A, C}
from Image2 we select also A as primary visual word and Z as secondary word. We get the pair {A, Z}
{A,C} != {A,Z} and therefore we have no match. But what is the probability that randomly selected pairs are equal?
The probability works out like this. Take
A={1, 2, 3, 4}, B={1, 2, 3}
with intersection C = A int B = {1, 2, 3}.
Number of matching pairs that can be drawn from the intersection = |C|-choose-2 (binomial).
Number of all possibilities = |A|-choose-2 * |B|-choose-2.
Therefore the probability is
|C|-choose-2 / (|A|-choose-2 * |B|-choose-2)
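A quick check of that formula in Python, assuming both pairs are drawn uniformly from all 2-subsets of each set:

```python
from math import comb

A = {1, 2, 3, 4}
B = {1, 2, 3}
C = A & B  # intersection {1, 2, 3}

# pairs that both images can produce (both elements in the intersection)
matching_pairs = comb(len(C), 2)
# all (pair1, pair2) combinations across the two sets
all_pairs = comb(len(A), 2) * comb(len(B), 2)
probability = matching_pairs / all_pairs  # 3 / 18
```

Note this assumes uniform pair selection; with the primary/secondary neighborhood scheme above, the second element is only drawn from the primary word's neighborhood, which changes the counts.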

Match 2 columns, but bring along all associated rows from the second column

What is the easiest way to convert Example1 to Example2 (I would be doing this with much longer lists)? Columns C and D should stay associated with Col B in the output of Example2. This is not just about making Col B replicate Col A, although that is part of the solution. Thank you in advance!
Example1:
Col A Col B Col C Col D
a e d c
l l o a
e x g t
x a s s
Example2:
Col A Col B Col C Col D
a a s s
l l o a
e e d c
x x g t
It is not totally clear what you want to achieve and what the data qualities are, so a few assumptions:
all items in Col A are also in Col B
items in Col A are unique
Consider the following screenshot. Column A has been copied into column F. The formula in G1 is
=INDEX(B$1:B$4,MATCH($F1,$B$1:$B$4,0))
Copy the formula across to I1, then copy G1:I1 down.
If that does not do what you need, please edit your question, add a better data sample and more explanation.
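Outside the spreadsheet, the same lookup can be sketched in Python (the literal lists mirror Example1; the dict stands in for INDEX/MATCH):

```python
example1 = {
    "A": ["a", "l", "e", "x"],
    "B": ["e", "l", "x", "a"],
    "C": ["d", "o", "g", "s"],
    "D": ["c", "a", "t", "s"],
}

# index each (B, C, D) row by its B value, then re-emit rows in A's order
rows_by_b = {
    b: (b, c, d)
    for b, c, d in zip(example1["B"], example1["C"], example1["D"])
}
example2 = [(a, *rows_by_b[a]) for a in example1["A"]]
```

This relies on the same assumptions as the formula: every item in Col A also appears in Col B, and the items are unique.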
