Spark join - match any column from a long list

I need to join two tables, where the condition is that one column of a table matches any column from a very long list, i.e., the following:
columns = ['name001', 'name002', ..., 'name298']
df = df1.join(df2, (df1['name']==df2['name1']) | (df1['name']==df2['name2']) | ... | df1['name']==df2['name298'])
How can I implement this join in PySpark without writing out the long condition? Many thanks!

You can loop over the columns list to build a join expression:
join_expr = (df1["name"] == df2[columns[0]])
for c in columns[1:]:
    join_expr = join_expr | (df1["name"] == df2[c])
Or using functools.reduce:
from functools import reduce

join_expr = reduce(
    lambda e, c: e | (df1["name"] == df2[c]),
    columns[1:],
    df1["name"] == df2[columns[0]])
Now use join_expr to join:
df = df1.join(df2, on=join_expr)
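For a self-contained illustration, here is a minimal runnable sketch; the sample data and the three-column list are made-up stand-ins for the real name001..name298 columns:
from functools import reduce
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data: df1 has a single "name" column,
# df2 has several candidate columns that "name" may match.
df1 = spark.createDataFrame([("alice",), ("bob",)], ["name"])
df2 = spark.createDataFrame(
    [("alice", "x", "y"), ("z", "bob", "w")],
    ["name1", "name2", "name3"])

columns = ["name1", "name2", "name3"]  # stands in for the full list

# OR together one equality test per candidate column.
join_expr = reduce(
    lambda e, c: e | (df1["name"] == df2[c]),
    columns[1:],
    df1["name"] == df2[columns[0]])

df1.join(df2, on=join_expr).show()
One caveat: an OR of equalities is not an equi-join, so Spark will typically fall back to a broadcast nested-loop join here; with 298 columns it may scale better to unpivot df2 into (column, value) rows and do a plain equi-join on the value.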

Related

Apache Beam | Python | Dataflow - How to join BigQuery collections with different keys?

I've faced the following problem. I'm trying to use an INNER JOIN with two tables from Google BigQuery on Apache Beam (Python) for a specific situation. However, I haven't found a native way to deal with it easily.
The output of this query will fill a third table on Google BigQuery; for this situation I really need to run it on Google Dataflow. The first table's (client) key is the "id" column, and the second table's (purchase) key is the "client_id" column.
1. Tables example (consider 'client_table.id = purchase_table.client_id'):
client_table
| id | name        | country |
|----|-------------|---------|
| 1  | first user  | usa     |
| 2  | second user | usa     |
purchase_table
| id | client_id | value |
|----|-----------|-------|
| 1  | 1         | 15    |
| 2  | 1         | 120   |
| 3  | 2         | 190   |
2. Code I'm trying to develop (problem in the second line of 'output'):
options = {'project': PROJECT,
           'runner': RUNNER,
           'region': REGION,
           'staging_location': 'gs://bucket/temp',
           'temp_location': 'gs://bucket/temp',
           'template_location': 'gs://bucket/temp/test_join'}
pipeline_options = beam.pipeline.PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)

query_results_1 = (
    pipeline
    | 'ReadFromBQ_1' >> beam.io.Read(beam.io.ReadFromBigQuery(
        query="select id as client_id, name from client_table",
        use_standard_sql=True)))
query_results_2 = (
    pipeline
    | 'ReadFromBQ_2' >> beam.io.Read(beam.io.ReadFromBigQuery(
        query="select * from purchase_table",
        use_standard_sql=True)))

output = ({'query_results_1': query_results_1, 'query_results_2': query_results_2}
          | 'join' >> beam.GroupBy('client_id')
          | 'writeToBQ' >> beam.io.WriteToBigQuery(
              table=TABLE,
              dataset=DATASET,
              project=PROJECT,
              schema=SCHEMA,
              create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
              write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
pipeline.run()
3. Equivalent desired output in SQL:
SELECT a.name, b.value FROM client_table AS a INNER JOIN purchase_table AS b ON a.id = b.client_id;
You could use either a CoGroupByKey or side inputs (as a broadcast join) depending on your key cardinality. If you have a few keys with many elements each, I suggest the broadcast join.
The first thing you'd need to do is to add a key to your PCollections after the BQ read:
kv_1 = query_results_1 | Map(lambda x: (x["id"], x))
kv_2 = query_results_2 | Map(lambda x: (x["client_id"], x))
Then you can just do the CoGBK or broadcast join. As an example (since it would be easier to understand), I am going to use the code from this session of Beam College. Note that in your example the Value of the KV is a dictionary, so you'd need to make some modifications; a sketch of that adaptation follows the examples below.
Data
jobs = [
    ("John", "Data Scientist"),
    ("Rebecca", "Full Stack Engineer"),
    ("John", "Data Engineer"),
    ("Alice", "CEO"),
    ("Charles", "Web Designer"),
    ("Ruben", "Tech Writer")
]
hobbies = [
    ("John", "Baseball"),
    ("Rebecca", "Football"),
    ("John", "Piano"),
    ("Alice", "Photoshop"),
    ("Charles", "Coding"),
    ("Rebecca", "Acting"),
    ("Rebecca", "Reading")
]
Join with CoGBK
def inner_join(element):
    name = element[0]
    jobs = element[1]["jobs"]
    hobbies = element[1]["hobbies"]
    joined = [{"name": name,
               "job": job,
               "hobbie": hobbie}
              for job in jobs for hobbie in hobbies]
    return joined

jobs_create = p | "Create Jobs" >> Create(jobs)
hobbies_create = p | "Create Hobbies" >> Create(hobbies)
cogbk = {"jobs": jobs_create, "hobbies": hobbies_create} | CoGroupByKey()
join = cogbk | FlatMap(inner_join)
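Note that CoGroupByKey emits one element per key, shaped like ('John', {'jobs': ['Data Scientist', 'Data Engineer'], 'hobbies': ['Baseball', 'Piano']}), which is exactly what inner_join unpacks.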
Broadcast join with Side Inputs
def broadcast_inner_join(element, side_input):
    name = element[0]
    job = element[1]
    hobbies = side_input.get(name, [])
    joined = [{"name": name,
               "job": job,
               "hobbie": hobbie}
              for hobbie in hobbies]
    return joined

hobbies_create = (p | "Create Hobbies" >> Create(hobbies)
                    | beam.GroupByKey())
jobs_create = p | "Create Jobs" >> Create(jobs)
broadcast_join = jobs_create | FlatMap(broadcast_inner_join,
                                       side_input=pvalue.AsDict(hobbies_create))
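Coming back to the question's schema, where each element is a dictionary rather than a plain (key, value) tuple, a minimal sketch of the CoGBK adaptation could look like this (kv_1 and kv_2 are the keyed PCollections built above; the field names are taken from the question's tables):
import apache_beam as beam

def join_client_purchases(element):
    # element is (client_id, {'clients': [...], 'purchases': [...]})
    _, grouped = element
    return [{'name': client['name'], 'value': purchase['value']}
            for client in grouped['clients']
            for purchase in grouped['purchases']]

joined = ({'clients': kv_1, 'purchases': kv_2}
          | 'JoinOnClientId' >> beam.CoGroupByKey()
          | 'InnerJoin' >> beam.FlatMap(join_client_purchases))
The result can then be fed to WriteToBigQuery as in the original pipeline.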

Transpose rows into columns in BigQuery using standard sql [duplicate]

Good morning,
I'm trying to transpose some data in BigQuery. I've looked at a few other people who have asked this on Stack Overflow, but the way to do it seems to be to use legacy SQL (using group_concat_unquoted) rather than standard SQL. I would use legacy, but I've had issues with nested data in the past, so I've since used standard only.
Here's my example. To give some context, I'm trying to map out some customer journeys, which I have below:
uniqueid | page_flag  | order_of_pages
A        | Collection | 1
A        | Product    | 2
A        | Product    | 3
A        | Login      | 4
A        | Delivery   | 5
B        | Clearance  | 1
B        | Search     | 2
B        | Product    | 3
C        | Search     | 1
C        | Collection | 2
C        | Product    | 3
However I'd like to transpose the data so it looks like this:
uniqueid | 1          | 2          | 3       | 4     | 5
A        | Collection | Product    | Product | Login | Delivery
B        | Clearance  | Search     | Product | NULL  | NULL
C        | Search     | Collection | Product | NULL  | NULL
I've tried using multiple left joins but get the following error:
select a.uniqueid,
b.page_flag as page1,
c.page_flag as page2,
d.page_flag as page3,
e.page_flag as page4,
f.page_flag as page5
from
(select distinct uniqueid,
(case when uniqueid is not null then 1 end) as page_hit1,
(case when uniqueid is not null then 2 end) as page_hit2,
(case when uniqueid is not null then 3 end) as page_hit3,
(case when uniqueid is not null then 4 end) as page_hit4,
(case when uniqueid is not null then 5 end) as page_hit5
from `mytable`) a
LEFT JOIN (
SELECT *
from `mytable`) b on a.uniqueid = b.uniqueid
and a.page_hit1 = b.order_of_pages
LEFT JOIN (
SELECT *
from `mytable`) c on a.uniqueid = c.uniqueid
and a.page_hit2 = c.order_of_pages
LEFT JOIN (
SELECT *
from `mytable`) d on a.uniqueid = d.uniqueid
and a.page_hit3 = d.order_of_pages
LEFT JOIN (
SELECT *
from `mytable`) e on a.uniqueid = e.uniqueid
and a.page_hit4 = e.order_of_pages
LEFT JOIN (
SELECT *
from `mytable`) f on a.uniqueid = f.uniqueid
and a.page_hit5 = f.order_of_pages
Error: Query exceeded resource limits for tier 1. Tier 13 or higher required.
I've looked at using the ARRAY function as well, but I've never used it before and I'm not sure if it's just for transposing the other way around. Any advice would be grand.
Thank you
For BigQuery Standard SQL:
#standardSQL
SELECT
  uniqueid,
  MAX(IF(order_of_pages = 1, page_flag, NULL)) AS p1,
  MAX(IF(order_of_pages = 2, page_flag, NULL)) AS p2,
  MAX(IF(order_of_pages = 3, page_flag, NULL)) AS p3,
  MAX(IF(order_of_pages = 4, page_flag, NULL)) AS p4,
  MAX(IF(order_of_pages = 5, page_flag, NULL)) AS p5
FROM `mytable`
GROUP BY uniqueid
You can play/test with the below dummy data from your question:
#standardSQL
WITH `mytable` AS (
  SELECT 'A' AS uniqueid, 'Collection' AS page_flag, 1 AS order_of_pages UNION ALL
  SELECT 'A', 'Product', 2 UNION ALL
  SELECT 'A', 'Product', 3 UNION ALL
  SELECT 'A', 'Login', 4 UNION ALL
  SELECT 'A', 'Delivery', 5 UNION ALL
  SELECT 'B', 'Clearance', 1 UNION ALL
  SELECT 'B', 'Search', 2 UNION ALL
  SELECT 'B', 'Product', 3 UNION ALL
  SELECT 'C', 'Search', 1 UNION ALL
  SELECT 'C', 'Collection', 2 UNION ALL
  SELECT 'C', 'Product', 3
)
SELECT
  uniqueid,
  MAX(IF(order_of_pages = 1, page_flag, NULL)) AS p1,
  MAX(IF(order_of_pages = 2, page_flag, NULL)) AS p2,
  MAX(IF(order_of_pages = 3, page_flag, NULL)) AS p3,
  MAX(IF(order_of_pages = 4, page_flag, NULL)) AS p4,
  MAX(IF(order_of_pages = 5, page_flag, NULL)) AS p5
FROM `mytable`
GROUP BY uniqueid
ORDER BY uniqueid
The result is:
uniqueid p1 p2 p3 p4 p5
A Collection Product Product Login Delivery
B Clearance Search Product null null
C Search Collection Product null null
Depending on your needs, you can also consider the below approach (not a pivot though):
#standardSQL
SELECT
  uniqueid,
  STRING_AGG(page_flag, '>' ORDER BY order_of_pages) AS journey
FROM `mytable`
GROUP BY uniqueid
ORDER BY uniqueid
If run with the same dummy data as above, the result is:
uniqueid journey
A Collection>Product>Product>Login>Delivery
B Clearance>Search>Product
C Search>Collection>Product
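Finally, since hard-coding five MAX(IF(...)) columns gets tedious as journeys grow longer, here is a small Python sketch (my addition, not part of the original answer) that generates the same pivot SQL for any number of positions:
def build_pivot_sql(table, n_pages):
    # Build the MAX(IF(...)) pivot query for positions 1..n_pages.
    cols = ",\n  ".join(
        f"MAX(IF(order_of_pages = {i}, page_flag, NULL)) AS p{i}"
        for i in range(1, n_pages + 1))
    return (f"SELECT\n  uniqueid,\n  {cols}\n"
            f"FROM `{table}`\nGROUP BY uniqueid\nORDER BY uniqueid")

print(build_pivot_sql("mytable", 5))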

How to list most frequent text values within a range?

I'm an intermediate Excel user trying to solve an issue that feels a little over my head. Basically, I'm working with a spreadsheet which contains a number of orders associated with customer account #s, each with up to 5 metadata "tags" attached. I want to use that customer account # to pull the 5 most commonly occurring metadata tags, in order.
Here is a mock up of the first set of data
Account Number Order Number Metadata
5043 1 A B C D
4350 2 B D
4350 3 B C
5043 4 A D
5043 5 C D
1204 6 A B
5043 7 A D
1204 8 D B
4350 9 B D
5043 10 A C D
and the end result I'm trying to create
Account Number Most Common Tag 2nd 3rd 4th 5th
5043 A C B N/A
4350 B D C N/A N/A
1204 B A C N/A N/A
I was trying to work with the formula suggested here:
=ARRAYFORMULA(INDEX(A1:A7,MATCH(MAX(COUNTIF(A1:A7,A1:A7)),COUNTIF(A1:A7,A1:A7),0)))
But I don't know a) how to use the customer account # as a precondition for counting the text values within the range, b) how to circumvent the fact that the MATCH formula only wants to work with a single column of data, and c) how to read the 2nd, 3rd, 4th, and 5th most common values from this range.
The way I'm formatting this data isn't set in stone. I suspect the way I'm organizing this information is holding me back from simpler solutions, so any suggestions on re-thinking my organization would be just as helpful as insights on how to create a formula to do this.
Implementing this kind of frequency analysis using built-in functions is likely to be a frustrating exercise. Since you are working with Google Sheets, take advantage of the custom functions, written in JavaScript and placed into a script bound to the sheet (Tools > Script Editor).
The function I wrote for this purpose is below. Entering something like =tagfrequency(A2:G100) in the sheet will produce the desired output:
+----------------+-----------------+-----+-----+-----+-----+
| Account Number | Most Common Tag | 2nd | 3rd | 4th | 5th |
| 5043 | D | A | C | B | N/A |
| 4350 | B | D | C | N/A | N/A |
| 1204 | B | A | D | N/A | N/A |
+----------------+-----------------+-----+-----+-----+-----+
Custom function
function tagFrequency(arr) {
  var dict = {}; // the object in which to store tag counts
  for (var i = 0; i < arr.length; i++) {
    var acct = arr[i][0];
    if (acct == '') {
      continue; // ignore empty rows
    }
    if (!dict[acct]) {
      dict[acct] = {}; // new account number
    }
    for (var j = 2; j < arr[i].length; j++) {
      var tag = arr[i][j];
      if (tag) {
        if (!dict[acct][tag]) {
          dict[acct][tag] = 0; // new tag
        }
        dict[acct][tag]++; // increment tag count
      }
    }
  }
  // end of recording, begin sorting and output
  var output = [['Account Number', 'Most Common Tag', '2nd', '3rd', '4th', '5th']];
  for (acct in dict) {
    var tags = dict[acct];
    var row = [acct].concat(Object.keys(tags).sort(function (a, b) {
      return (tags[a] < tags[b] ? 1 : (tags[a] > tags[b] ? -1 : (a > b ? 1 : -1)));
    })); // sort by tag count descending, then tag name
    while (row.length < 6) {
      row.push('N/A'); // pad with N/A if needed
    }
    output.push(row); // add row to output
  }
  return output;
}
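If it helps to see the algorithm outside Apps Script, here is the same count-then-sort logic as a short Python sketch (my illustration using the question's mock data, not part of the original answer):
from collections import Counter, defaultdict

rows = [
    (5043, 1, ["A", "B", "C", "D"]),
    (4350, 2, ["B", "D"]),
    (4350, 3, ["B", "C"]),
    (5043, 4, ["A", "D"]),
    (5043, 5, ["C", "D"]),
    (1204, 6, ["A", "B"]),
    (5043, 7, ["A", "D"]),
    (1204, 8, ["D", "B"]),
    (4350, 9, ["B", "D"]),
    (5043, 10, ["A", "C", "D"]),
]

counts = defaultdict(Counter)
for acct, _order, tags in rows:
    counts[acct].update(tags)

for acct, tags in counts.items():
    # Sort by descending count, then tag name, and pad to five slots.
    ranked = sorted(tags, key=lambda t: (-tags[t], t))
    print(acct, ranked + ["N/A"] * (5 - len(ranked)))
For account 5043 this prints D, A, C, B, N/A, matching the table above.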
You could also get this report:
Account Number Tag count
1204 B 2
1204 A 1
1204 D 1
4350 B 3
4350 D 2
4350 C 1
5043 D 5
5043 A 4
5043 C 3
5043 B 1
with the formula:
=QUERY(
{TRANSPOSE(SPLIT(JOIN("",ArrayFormula(REPT(FILTER(A2:A,A2:A<>"")&",",5))),",")),
TRANSPOSE(SPLIT(ArrayFormula(CONCATENATE(FILTER(C2:G,A2:A<>"")&" ,")),",")),
TRANSPOSE(SPLIT(rept("1,",counta(A2:A)*5),","))
},
"select Col1, Col2, Count(Col3) where Col2 <>' ' group by Col1, Col2
order by Col1, Count(Col3) desc label Col1 'Account Number', Col2 'Tag'")
The formula will count the number of occurrences of any tag.

How to query a table which has a parent-child relation

Situation:
I have a table called "word" which contains a word with the associated translations.
| ID | name | lang_id | parent_id |
|----|----------|---------|-----------|
| 1 | screw | 1 | null |
| 2 | schraube | 2 | 1 |
| 3 | vis | 3 | 1 |
So screw is the main word which has no parent. The other data sets have an association to the parent with the parent_id.
What I want:
I need a query which returns both the word I typed in and its translation in the target language.
I want to get the datasets 2 and 3, if I query the word "schraube" from german to french.
I want to get the datasets 1 and 3, if I query the word "screw" from english to french.
...
What I tried:
select word.id, word.name, word.lang_id, word.parent_id
from word
left join word w2 on word.parent_id = w2.parent_id
WHERE w2.name = 'screw';
-- and word.lang_id = 2
Unfortunately, the result doesn't contain the word I typed. Also, this displays all datasets, not only the ones for the specified language.
You can modify the below query to get your answer.
DECLARE @FromLanguageId SMALLINT = 2; --german
DECLARE @ToLanguageId SMALLINT = 3;   --french
DECLARE @Name NVARCHAR(300) = 'schraube';

--DECLARE @FromLanguageId SMALLINT = 1; --english
--DECLARE @ToLanguageId SMALLINT = 3;   --french
--DECLARE @Name NVARCHAR(300) = 'screw';

--Get the matching record
;WITH ctematch AS (
    --gets the matching record (child or parent)
    SELECT match.*
    FROM [word] match
    WHERE match.[name] LIKE @Name
),
--Join its siblings, parent and children
ctefamilydata AS (
    SELECT *
    FROM ctematch
    UNION
    --Parent
    SELECT parent.*
    FROM ctematch match
    INNER JOIN [word] parent ON match.[parent_id] = parent.[id]
    UNION
    --Children
    SELECT child.*
    FROM ctematch match
    INNER JOIN [word] child ON child.[parent_id] = match.[id]
    UNION
    --Siblings
    SELECT siblings.*
    FROM ctematch match
    INNER JOIN [word] siblings ON match.[parent_id] = siblings.[parent_id]
)
--Filter and get the data
SELECT *
FROM ctefamilydata cte
WHERE cte.[lang_id] = @ToLanguageId
   OR cte.[lang_id] = @FromLanguageId;
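To make the CTE's logic concrete, here is a tiny Python sketch (an illustration, not part of the answer) of the same match/parent/children/siblings expansion over the sample rows:
rows = [
    # (id, name, lang_id, parent_id)
    (1, "screw", 1, None),
    (2, "schraube", 2, 1),
    (3, "vis", 3, 1),
]

def translate(name, from_lang, to_lang):
    match = [r for r in rows if r[1] == name]
    family = set(match)
    for m in match:
        family |= {r for r in rows if r[0] == m[3]}  # parent
        family |= {r for r in rows if r[3] == m[0]}  # children
        family |= {r for r in rows
                   if m[3] is not None and r[3] == m[3]}  # siblings
    return sorted(r for r in family if r[2] in (from_lang, to_lang))

print(translate("schraube", 2, 3))  # rows 2 and 3
print(translate("screw", 1, 3))     # rows 1 and 3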

Linq query with join, group and count (extension format)

I have a table that stores some information and a reference (column parent_ID) to the parent row in the same table.
I need to get a list of all records (using LINQ extension format) with the count of child records. This is the SQL query that gives me the desired information:
Select a.ID, a.Name, Count(b.ID) from Table a
left join Table b on b.parent_ID=a.ID
group by a.ID, a.Name
Example
| ID | Name | parent_ID |
----------------------------------
| 1 | First | null |
| 2 | Child1 | 1 |
| 3 | Child2 | 1 |
| 4 | Child1.1 | 2 |
The result should be:
| 1 | First | 2 |
| 2 | Child1 | 1 |
| 3 | Child2 | 0 |
| 4 | Child1.1 | 0 |
I tried to at least make the child counting work, but it doesn't...
var model = _db.Table
    .GroupBy(r => new { r.parent_ID })
    .Select(r => new {
        r.Key.parent_ID,
        ChildCount = r.GroupBy(g => g.parent_ID).Count()
    });
Shouldn't this query be equivalent to something like this:
select parent_ID, count(parent_ID) from Table group by parent_ID
but it returns count = 1 for each row...
How can I make such a query using LINQ extension format?
What I believe you are looking for is a group join, which can be easier to understand using LINQ query syntax instead of the LINQ extensions, so for ease of understanding I will post both methods.
I'm self-joining the table on ID to parent_ID into a group, which keeps the referential integrity for you, and selecting an anonymous object containing the parent and all of its children.
This is using straight LINQ query syntax.
var model = from t1 in _db.Test1
            join t2 in _db.Test1 on t1.ID equals t2.parent_ID into c1
            select new { Parent = t1, Children = c1 };
And here is the code using LINQ extensions
var model2 = _db.Test1.GroupJoin(_db.Test1,
    t1 => t1.ID,
    t2 => t2.parent_ID,
    (t1, c1) => new { Parent = t1, Children = c1 });
I used a quick test program I threw together to post the results into a textbox for both methods, but the code was the same, so I'll just post it once.
foreach (var test in model)
{
    textBox1.AppendTextAddNewLine(String.Format("{0}: {1}",
        test.Parent.Name,
        test.Children.Count()));
}
And the results of both of those tests were the same, as shown below:
First: 2
Child1: 1
Child2: 0
Child1.1: 0
First: 2
Child1: 1
Child2: 0
Child1.1: 0
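For comparison, the counting logic itself is tiny; here is a Python sketch (illustration only) that mirrors the SQL at the top of the question:
from collections import Counter

rows = [
    # (ID, Name, parent_ID)
    (1, "First", None),
    (2, "Child1", 1),
    (3, "Child2", 1),
    (4, "Child1.1", 2),
]

# Count how many rows point at each parent, then left-join the count back.
child_counts = Counter(parent for _, _, parent in rows if parent is not None)
for id_, name, _ in rows:
    print(id_, name, child_counts[id_])
This prints 2, 1, 0, 0 for the four rows, matching the expected result table.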
