I have two tables, x and y. I want to join them on column b such that I get the output z shown below.
x:([a:1 2 1 3]; b:`a`a`b`b)
q) a | b
-----
1 | a
2 | a
1 | b
3 | b
y:([b:`a`a`a`b]; c:7 8 9 10)
q) b | c
------
a | 7
a | 8
a | 9
b | 10
Desired output:
q) a | b | c
-----------
1 | a | 7
1 | a | 8
1 | a | 9
2 | a | 7
2 | a | 8
2 | a | 9
1 | b | 10
3 | b | 10
Is this some sort of cross join?
An equi join (ej) will produce the result you want:
q)ej[`b;x;y]
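ej matches rows on the named column and returns one row per matching pair, so the two x rows with b=`a each combine with all three y rows with b=`a, and the two rows with b=`b each combine with the single y row with b=`b: 2*3 + 2*1 = 8 rows, which is exactly the eight rows of your desired output.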
I have a table like this:
+---+---------+----------+--------+
| | A | B | C |
+---+---------+----------+--------+
| 1 | John | Judy | Team A |
| 2 | Brandon | Bethany | Team B |
| 3 | Agnes | | Team A |
| 4 | William | Welma | Team B |
| 5 | Tom | Theresa | Team B |
| 6 | Peter | | Team A |
+---+---------+----------+--------+
Counting the total number of people is easy:
=COUNTA(A1:B6) -> 10
How can I count the number of people on each of the teams? i.e., Team A is 4, Team B is 6.
You can combine COUNTA() with FILTER() to achieve this result:
=COUNTA(FILTER(A:B, C:C="Team A")) -> 4
=COUNTA(FILTER(A:B, C:C="Team B")) -> 6
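If FILTER isn't available in your spreadsheet (it only exists in Google Sheets and newer Excel versions), a SUMPRODUCT variant should give the same counts - this assumes the data sits exactly in A1:C6 as shown:
=SUMPRODUCT((C1:C6="Team A")*(A1:B6<>"")) -> 4
=SUMPRODUCT((C1:C6="Team B")*(A1:B6<>"")) -> 6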
Hey folks: I have the following table in a Vertica DB:
+-----+------+----------+
| Tid | item | time_sec |
+-----+------+----------+
| 1 | A | 1 |
| 1 | B | 2 |
| 1 | C | 4 |
| 1 | D | 5 |
| 1 | E | 6 |
| 2 | A | 5 |
| 2 | E | 5 |
+-----+------+----------+
My goal is to create new item groups that lie within a time window deltaT, meaning that the difference between the first and the last item's timestamp is less than or equal to deltaT. Example: if deltaT = 2 sec, we would get the new table:
+-----+------+
| Tid | item |
+-----+------+
| 11 | A |
| 11 | B |
| 12 | B |
| 12 | C |
| 13 | C |
| 13 | D |
| 13 | E |
| 14 | D |
| 14 | E |
| 15 | E |
| 21 | A |
| 21 | E |
+-----+------+
Here is a walk-through of the table:
First we inspect all items with Tid 1 and create sub-groups with Tid 1n, where n is a counter.
Our first sub-group, with Tid 11, consists of items A and B, since deltaT between the last and first item is <= 2. The next group, Tid 12, has items B and C. The group after that has Tid 13 and items C, D and E, since all of them lie within a time span of 2 seconds. This goes on until the last item with Tid 1. Then we start over with the group that has Tid 2.
The new Tid numbering for the sub-groups can be continuous (1...6); I just chose this kind of numbering to show the relation to the original table.
I have looked at the Vertica functions LAG and TIME_SLICE but cannot figure out how to handle this problem elegantly.
This is how far I got - it does not really answer your question, but it might give you a few pointers:
WITH
-- your input
input(Tid,item,time_sec) AS (
              SELECT 1,'A',1
    UNION ALL SELECT 1,'B',2
    UNION ALL SELECT 1,'C',4
    UNION ALL SELECT 1,'D',5
    UNION ALL SELECT 1,'E',6
    UNION ALL SELECT 2,'A',5
    UNION ALL SELECT 2,'E',5
)
-- end of your input, start your "real" WITH clause here
,
input_w_ts AS (
    SELECT
      *
    , TIMESTAMPADD('SECOND',time_sec-1,TIMESTAMP '2000-01-01 00:00:00') AS ts
    FROM input
)
SELECT
  TS_LAST_VALUE(Tid)      AS Tid
, item
, TS_LAST_VALUE(time_sec) AS time_sec
, tsr
FROM input_w_ts
TIMESERIES tsr AS '2 SECONDS' OVER (PARTITION BY item ORDER BY ts)
ORDER BY 1,4
;
Output:
Tid|item|time_sec|tsr
1|A | 1|2000-01-01 00:00:00
1|B | 2|2000-01-01 00:00:00
1|A | 1|2000-01-01 00:00:02
1|C | 4|2000-01-01 00:00:02
1|D | 5|2000-01-01 00:00:04
1|E | 6|2000-01-01 00:00:04
2|A | 5|2000-01-01 00:00:04
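For the grouping itself, a plain self-join (no TIMESERIES) may be closer to what you describe. This is only a sketch - I have not run it against a live Vertica instance - and it assumes deltaT = 2 seconds and that every row starts its own group containing all rows of the same Tid within deltaT of it:
WITH
input(Tid,item,time_sec) AS (
              SELECT 1,'A',1
    UNION ALL SELECT 1,'B',2
    UNION ALL SELECT 1,'C',4
    UNION ALL SELECT 1,'D',5
    UNION ALL SELECT 1,'E',6
    UNION ALL SELECT 2,'A',5
    UNION ALL SELECT 2,'E',5
)
SELECT
  -- continuous group number: one group per "anchor" row a
  DENSE_RANK() OVER (ORDER BY a.Tid, a.time_sec, a.item) AS grp
, a.Tid
, b.item
, b.time_sec
FROM input a
JOIN input b
  ON  b.Tid      = a.Tid
  AND b.time_sec BETWEEN a.time_sec AND a.time_sec + 2   -- deltaT = 2
ORDER BY grp, b.time_sec, b.item;
Note that this numbers the groups 1, 2, 3, ... rather than 11, 12, ..., 21, and for Tid 2 it produces two identical groups (both A and E at 5 sec act as anchors), so you may still need to deduplicate groups that contain the same set of items.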
I'm struggling with creating my first chart.
I have a dataset of ordinal-scaled data from a survey.
It has several questions with possible answers from 1 to 5.
So I have around 110 answers from different persons, which I want to collect and show in a stacked bar chart.
The data looks like:
| taste | region | brand | price |
| 1 | 3 | 4 | 2 |
| 1 | 1 | 5 | 1 |
| 1 | 3 | 4 | 3 |
| 2 | 2 | 5 | 1 |
| 1 | 1 | 4 | 5 |
| 5 | 3 | 5 | 2 |
| 1 | 5 | 5 | 2 |
| 2 | 4 | 1 | 3 |
| 1 | 3 | 5 | 4 |
| 1 | 4 | 4 | 5 |
...
To display that in a stacked bar chart, I need to sum it up.
So I know that at the end it needs to be calculated like this:
| | taste | region | brand | price |
| 1 | 60 | 20 | 32 | 12 |
| 2 | 23 | 32 | 54 | 22 |
| 3 | 24 | 66 | 36 | 65 |
| 4 | 55 | 68 | 28 | 54 |
| 5 | 10 | 10 | 12 | 22 |
(this is just to demonstrate; the values are not correct)
Or maybe there is already a function for this in SPSS, but I have no idea where and how.
Any advice on how to do that?
I can't think of a single command but there are many ways to get to where you want. Here's one:
First, recreating your sample data:
data list list/ taste region brand price .
begin data
1 3 4 2
1 1 5 1
1 3 4 3
2 2 5 1
1 1 4 5
5 3 5 2
1 5 5 2
2 4 1 3
1 3 5 4
1 4 4 5
end data.
Now counting the values for each row:
vector t(5) r(5) b(5) p(5).
* the vector command is only necessary so the new variables will be ordered comfortably for the following parts.
do repeat vl= 1 to 5/t=t1 to t5/r=r1 to r5/b=b1 to b5/p=p1 to p5.
compute t=(taste=vl).
compute r=(region=vl).
compute b=(brand=vl).
compute p=(price=vl).
end repeat.
Now we can aggregate and restructure to arrive at the exact data structure you specified:
aggregate /outfile=* /break= /t1 to t5 r1 to r5 b1 to b5 p1 to p5 = sum(t1 to p5).
varstocases /make taste from t1 to t5 /make region from r1 to r5
/make brand from b1 to b5/ make price from p1 to p5/index=val(taste).
compute val = char.substr(val,2,1).
alter type val(f1).
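If everything ran as intended, you should end up with five rows (val = 1 to 5) and the summed counts in taste, region, brand and price - the same layout as the target table in your question - which you can then use to build the stacked bar chart (for example via the Chart Builder).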
I'm not able to get a meaningful accuracy estimate: every dataset I provide gives 100% accuracy for every classifier algorithm I apply. My dataset consists of 10 people.
It gives the same accuracy for the Naive Bayes, J48 and JRip classifiers.
+----+-------+----+----+----+----+----+-----+----+------+-------+-------+-------+
| id | name | q1 | q2 | q3 | m1 | m2 | tut | fl | proj | fexam | total | grade |
+----+-------+----+----+----+----+----+-----+----+------+-------+-------+-------+
| 1 | abv | 5 | 5 | 5 | 13 | 13 | 4 | 8 | 7 | 40 | 100 | p |
| 2 | ca | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 40 | 48 | f |
| 3 | ga | 4 | 2 | 3 | 5 | 10 | 4 | 5 | 6 | 20 | 59 | f |
| 4 | ui | 5 | 4 | 4 | 12 | 13 | 3 | 7 | 7 | 39 | 94 | p |
| 5 | pa | 4 | 1 | 1 | 4 | 3 | 2 | 4 | 5 | 22 | 46 | f |
| 6 | la | 2 | 3 | 1 | 1 | 2 | 0 | 4 | 2 | 11 | 26 | f |
| 7 | ka | 5 | 4 | 1 | 3 | 3 | 1 | 6 | 4 | 24 | 51 | f |
| 8 | ma | 5 | 3 | 3 | 9 | 8 | 4 | 8 | 0 | 20 | 60 | p |
| 9 | ash | 2 | 5 | 5 | 11 | 12 | 3 | 7 | 6 | 30 | 81 | p |
| 10 | opo | 4 | 2 | 1 | 13 | 1 | 3 | 7 | 3 | 35 | 69 | p |
+----+-------+----+----+----+----+----+-----+----+------+-------+-------+-------+
Make sure to not include any unique identifier column.
Also don't include the total.
Most likely, the classifiers learned that "name" is a good predictor and/or that you need total > 59 points to pass.
Because of that, I suggest you even withhold at least one exercise - otherwise some classifiers will simply re-learn that the sum of the individual points determines whether you pass.
I assume you want to find out whether one part is most indicative of passing, i.e. "if you do well on part 3, you will likely pass". But to answer this question, you need to account for, e.g., the different number of points per question - otherwise your predictor will just identify which question has the most points...
Also, 10 is much too small a sample size!
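As a practical note (the details depend on how you load the data): in the Weka Explorer you can drop id, name and total on the Preprocess tab by ticking them and clicking Remove, or by applying the unsupervised attribute filter weka.filters.unsupervised.attribute.Remove, before training any classifier.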
You can see from the displayed output that the tree J48 generated used only the variable fl, so I do not think you have the problem that @Anony-Mousse referred to.
I notice that you are testing on the training set (see the "Test Options" radio buttons at upper left of the GUI). That almost always overestimates the accuracy. What you are seeing is overfitting. Instead, use cross-validation to get a better estimate of the accuracy you could expect on new data. With only 10 data points, you should use either 10 folds or 5.
Try testing your model with cross-validation (k folds) or a percentage split.
Generally, in a percentage split the training set is 2/3 of the dataset and the test set is 1/3.
Also, I feel that your dataset is very small... there is a good chance of inflated accuracy in that case.
INVOICE
So I have to put this into 1NF, 2NF and 3NF.
PROD_NUM PROD_LABEL PROD_PRICE
AA-E3422QW ROTARY SANDER 49.95
AA-E3422QW ROTARY SANDER 49.95
QD-300932X 0.25IN. DRILL BIT 3.45
RU-95748G BAND SAW 33.99
GH-778345P POWER DRILL 87.75
VEN_CODE VEN_NAME
211 NEVERFAIL, INC
211 NEVERFAIL, INC
211 NEVERFAIL, INC
309 BEGOOD, INC
157 TOUGHGO, INC
So far I have these as my 2NF. Am I on the right track? And how do I put the table into 3NF?
So my 2NF would look like this? [2NF table image]
I think the picture you were given is considered 1NF.
And you initially showed 3NF, but you'll need an additional table to record which product comes from which vendor, as well as a modified invoice table.
Vendor - Unique list of vendors
VEN_ID | VEN_CODE | VEN_NAME
-------|----------|---------------
1 | 211 | NEVERFAIL, INC
2 | 309 | BEGOOD, INC
3 | 157 | TOUGHGO, INC
Product - Unique list of products
PROD_ID | PROD_NUM | PROD_LABEL | PROD_PRICE
--------|------------|-------------------|-----------
1 | AA-E3422QW | ROTARY SANDER | 49.95
2 | QD-300932X | 0.25IN. DRILL BIT | 3.45
3 | RU-95748G | BAND SAW | 33.99
4 | GH-778345P | POWER DRILL | 87.75
Vendor_Product - the mapping between products and vendors
VEN_ID | PROD_ID
-------|----------
1 | 1
1 | 2
2 | 3
3 | 4
Purchases - The transactions that happened
PURCH_ID | INV_NUM | SALE_DATE | PROD_ID | QUANT_SOLD
---------|---------|-------------|---------|------------
1 | 211347 | 15-JAN-2006 | 1 | 1
2 | 211347 | 15-JAN-2006 | 2 | 8
3 | 211347 | 15-JAN-2006 | 3 | 1
4 | 211348 | 15-JAN-2006 | 1 | 2
5 | 211349 | 16-JAN-2006 | 4 | 1
I think that is good, but it can be split again.
Invoices - A unique list of invoices
INV_ID | INV_NUM | SALE_DATE
--------|---------|-------------
1 | 211347 | 15-JAN-2006
2 | 211348 | 15-JAN-2006
3 | 211349 | 16-JAN-2006
Purchases - The transactions that happened
PURCH_ID | INV_ID | PROD_ID | QUANT_SOLD
---------|--------|---------|---------
1 | 1 | 1 | 1
2 | 1 | 2 | 8
3 | 1 | 3 | 1
4 | 2 | 1 | 2
5 | 3 | 4 | 1
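If it helps to see the 3NF design above as an actual schema, here is a rough DDL sketch (generic SQL; the data types are my own guesses, so adjust them for your DBMS):
-- unique list of vendors
CREATE TABLE Vendor (
    VEN_ID    INTEGER PRIMARY KEY,
    VEN_CODE  INTEGER     NOT NULL,
    VEN_NAME  VARCHAR(50) NOT NULL
);
-- unique list of products
CREATE TABLE Product (
    PROD_ID     INTEGER PRIMARY KEY,
    PROD_NUM    VARCHAR(20)   NOT NULL,
    PROD_LABEL  VARCHAR(50)   NOT NULL,
    PROD_PRICE  DECIMAL(10,2) NOT NULL
);
-- mapping between products and vendors
CREATE TABLE Vendor_Product (
    VEN_ID   INTEGER NOT NULL REFERENCES Vendor(VEN_ID),
    PROD_ID  INTEGER NOT NULL REFERENCES Product(PROD_ID),
    PRIMARY KEY (VEN_ID, PROD_ID)
);
-- unique list of invoices
CREATE TABLE Invoices (
    INV_ID     INTEGER PRIMARY KEY,
    INV_NUM    INTEGER NOT NULL,
    SALE_DATE  DATE    NOT NULL
);
-- the transactions that happened
CREATE TABLE Purchases (
    PURCH_ID    INTEGER PRIMARY KEY,
    INV_ID      INTEGER NOT NULL REFERENCES Invoices(INV_ID),
    PROD_ID     INTEGER NOT NULL REFERENCES Product(PROD_ID),
    QUANT_SOLD  INTEGER NOT NULL
);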
To get 2NF, combine the vendor information back into the Product table, with these columns:
PROD_ID | PROD_NUM | PROD_LABEL | PROD_PRICE | VEN_CODE | VEN_NAME
In this case, the Vendor and Vendor_Product tables aren't needed. This version is only 2NF rather than 3NF because VEN_NAME depends on VEN_CODE, which is itself a non-key attribute - exactly the transitive dependency that splitting the vendor out into its own table removes.