How do I do a loop while a condition is equal to another condition with an exception? - do-while

While the member enrollment month is equal to the provider effective month, I want to pull the AssignedProvider1 for the maximum provider effective month for that member. But if an enrollment month doesn't find a matching provider effective month, I want to keep the AssignedProvider1 from the last one pulled and carry it forward through the remaining enrollment months.
TABLE 1
MemberNbr  ENROLLMENT_MONTH  Desired Outcome  ProviderEffectiveDate
ABCD       1/1/2019          A203             20190110
ABCD       2/1/2019          A015             20190208
ABCD       3/1/2019          A015             20190208
ABCD       4/1/2019          A015             20190208
ABCD       5/1/2019          A015             20190208
ABCD       6/1/2019          A015             20190208
ABCD       7/1/2019          A015             20190208
ABCD       8/1/2019          A015             20190208
TABLE 2
AssignedProvider1  ProviderEffectiveDate  MemberNbr
A018               20190108               ABCD
A203               20190110               ABCD
A046               20190205               ABCD
A015               20190208               ABCD
I tried doing a while loop within a while loop but couldn't get it to work, and I tried using RANK but couldn't figure out how to add the carry-forward piece.
select F.MEMBERNBR, F.ELIGIBILITYSTARTDATE, P.PROVIDEREFFECTIVEDATE, P.ASSIGNEDPROVIDER1
, RANK() OVER (PARTITION BY P.PROVIDEREFFECTIVEDATE, P.ASSIGNEDPROVIDER1, P.MEMBERNBR, F.ELIGIBILITYSTARTDATE ORDER BY P.PROVIDEREFFECTIVEDATE DESC) AS 'RANK'
INTO #HELPMELORD
FROM #FINAL F
LEFT JOIN MEMBERPROVIDERHISTORY P ON F.MEMBERNBR = P.MEMBERNBR
I don't have any output to show because I can't get it to work.
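For what it's worth, this usually doesn't need a loop at all; a correlated OUTER APPLY can express both rules at once. A minimal sketch, assuming #FINAL holds one row per member enrollment month (with ELIGIBILITYSTARTDATE being the first of the month) and that ProviderEffectiveDate is a DATE; if it is stored as yyyymmdd, convert it first, e.g. CONVERT(date, CAST(PROVIDEREFFECTIVEDATE AS char(8)), 112):
SELECT F.MEMBERNBR,
       F.ELIGIBILITYSTARTDATE,
       P.ASSIGNEDPROVIDER1,
       P.PROVIDEREFFECTIVEDATE
FROM #FINAL F
OUTER APPLY (
    -- Latest provider row effective before the end of this enrollment month.
    -- Months with no new provider row naturally keep the previous provider,
    -- which matches the carry-forward rule in TABLE 1.
    SELECT TOP (1) H.ASSIGNEDPROVIDER1, H.PROVIDEREFFECTIVEDATE
    FROM MEMBERPROVIDERHISTORY H
    WHERE H.MEMBERNBR = F.MEMBERNBR
      AND H.PROVIDEREFFECTIVEDATE < DATEADD(MONTH, 1, F.ELIGIBILITYSTARTDATE)
    ORDER BY H.PROVIDEREFFECTIVEDATE DESC
) P;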

Related

Kusto join two tables based on latest available date

Is there a way to join two tables in Kusto, joining values based on the latest available date in the second table?
Let's say we get distinct names from the first table and want to join values from the second table based on the latest available dates.
I would also keep only matches from the left table.
table1
names
-----
Alex
John
Mary
table2
name   weight  date
-----  ------  ----------
Alex   160     2023-01-20
Alex   168     2023-01-28
Mary   120     2022-08-28
Mary   140     2020-09-17
Sample code:
table1
|distinct names
|join kind=inner table2 on $left.names==$right.name
let table1 = datatable(names:string)
[
"Alex"
,"John"
,"Mary"
];
let table2 = datatable(name:string, weight:real ,["date"]:datetime)
[
"Alex" ,160 ,datetime(2023-01-20)
,"Alex" ,168 ,datetime(2023-01-28)
,"Mary" ,120 ,datetime(2022-08-28)
,"Mary" ,140 ,datetime(2020-09-17)
];
table1
| distinct names
| join kind=inner (table2 | summarize arg_max(['date'], *) by name) on $left.names==$right.name
names  name  date                  weight
Mary   Mary  2022-08-28T00:00:00Z  120
Alex   Alex  2023-01-28T00:00:00Z  168

Many-to-many SELECT divided into multiple rows for some reason

I have two tables joined via a third in a many-to-many relationship. To simplify:
Table A
ID-A (int)
Name (varchar)
Score (numeric)
Table B
ID-B (int)
Name (varchar)
Table AB
ID-AB (int)
A (foreign key ID-A)
B (foreign key ID-B)
What I want is to display the B-Name and a sum of the "Score" values of all the As belonging to the given B. However, the following code:
WITH "Data" AS(
SELECT "B."."Name" As "BName", "A"."Name", "Score"
FROM "AB"
LEFT OUTER JOIN "A" ON "AB"."A" = "A"."ID-A"
LEFT OUTER JOIN "B" ON "AB"."B" = "B"."ID-B")
SELECT "BName", SUM("Score") AS "Total"
FROM "Data"
GROUP BY "Name", "Score"
ORDER BY "Total" DESC
The results display several rows for every "BName", with the "Score" divided into seemingly random increments between these rows. For example, if the desired result for Johnny is 12 and for April it's 25, the query may show something like:
Johnny | 7
Johnny | 3
Johnny | 2
April | 19
April | 5
April | 1
etc.
Even after trying to nest the query and doing another SELECT with SUM("Score"), the results are the same. What am I doing wrong?
Remove Score from the GROUP BY clause:
SELECT BName, SUM(Score) AS Total
FROM Data
GROUP BY BName
ORDER BY Total DESC;
The purpose of your query is to summarize by name, so name alone should appear in the GROUP BY clause. By also including the score, you will get a record in the output for each unique name/score combination.
Okay, I figured out my problem. Indeed, I had to GROUP BY "Name" only; I thought Firebird wasn't letting me do that, but it turns out it was just a typo. Oops.
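For reference, the full corrected query, with the stray dot in "B."."Name" removed, the unused "A"."Name" dropped, and the GROUP BY reduced to the name alone, would look something like:
WITH "Data" AS (
    SELECT "B"."Name" AS "BName", "Score"
    FROM "AB"
    LEFT OUTER JOIN "A" ON "AB"."A" = "A"."ID-A"
    LEFT OUTER JOIN "B" ON "AB"."B" = "B"."ID-B"
)
SELECT "BName", SUM("Score") AS "Total"
FROM "Data"
GROUP BY "BName"
ORDER BY "Total" DESC;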

SPSS side-by-side frequency table

I have a file where each record is about a student. For example:
Student  ServiceA  ServiceB
Bob      ABC       ABC
Jane     ABC
Jim                XYZ
Henry    EFG       ABC
Laura    EFG
Code for the above table:
data list list/student ServiceA ServiceB (3a10).
begin data
"Bob","ABC","ABC"
"Jane","ABC",
"Jim",,"XYZ"
"Henry","EFG","ABC"
"Laura","EFG",
end data.
I want to show a simple table that has the count of services for Service A and B, side by side, like this:
         ServiceA  ServiceB
[Blank]  2         1
ABC      1         3
EFG      1         1
XYZ      1         0
I've tried Custom Tables and Report Summaries in Rows / Columns, but can't seem to produce this simple table. Any ideas?
This will put the analysis of both variables in the same table:
VARSTOCASES /make Service from ServiceA ServiceB/index src(service)/null=keep.
sort cases by src.
split file by src.
freq service.
split file off.
This still won't put the frequencies side by side, but you can now do that easily using the pivoting trays.

How to insert data from one table column to another table

I have a file in something like the format below.
test.txt
1 | ABC | A, B, C, D
I need a stored procedure that inserts records into the details table on a row-by-row basis, e.g.:
ID  Name  Type
1   ABC   A
1   ABC   B
1   ABC   C
1   ABC   D
Is this possible through a stored procedure in SQL? Any help will be appreciated. Thanks in advance.
You can either:
Split it in your code and then insert them
Bulk insert them into a temporary table and split them all in one query, like this:
-- SAMPLE Data
declare @data table(id int, name varchar(10), type varchar(100))
insert into @data(id, name, type) values
  (1, 'ABCD', 'A, B, C, D')
, (2, 'EFG', 'E, F, G')
, (3, 'HI', 'H, I')
-- Split all rows and types: Replace/Cast turns each comma list into an
-- XML fragment, and nodes() shreds it into one row per value
Select ID, Name, ltrim(rtrim(value)) As Type
From (
    Select *, Cast('<x>' + Replace(d.type, ',', '</x><x>') + '</x>' As XML) As types
    From @data d
) x
Cross Apply (
    Select types.x.value('.', 'varchar(10)') as value
    From x.types.nodes('x') as types(x)
) c
Output:
ID  Name  Type
1   ABCD  A
1   ABCD  B
1   ABCD  C
1   ABCD  D
2   EFG   E
2   EFG   F
2   EFG   G
3   HI    H
3   HI    I
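On SQL Server 2016 and later, the built-in STRING_SPLIT function can replace the XML trick. A minimal sketch against the same sample data (note STRING_SPLIT does not guarantee the order of the returned values):
-- STRING_SPLIT tokenizes on the comma; ltrim/rtrim strips the spaces
-- that follow each comma in the source values
Select d.id, d.name, ltrim(rtrim(s.value)) As Type
From @data d
Cross Apply string_split(d.type, ',') s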

Hive outer join: how to change the default NULL value

For a Hive outer join, if a joining key does not exist in one table, Hive will put NULL. Is it possible to use another value instead? For example:
Table1:
user_id  name  age
1        Bob   23
2        Jim   43

Table2:
user_id  txn_amt  date
1        20.00    2013-12-10
1        10.00    2014-07-01
If I do a LEFT OUTER JOIN on user_id:
INSERT INTO TABLE user_txn
SELECT
Table1.user_id,
Table1.name,
Table2.txn_amt,
Table2.date
FROM
Table2
LEFT OUTER JOIN
Table1
ON
Table1.user_id = Table2.user_id;
I want the output to be like this:
user_id  name  txn_amt  date
1        Bob   20.00    2013-12-10
1        Bob   10.00    2014-07-01
2        Jim   0.00     2099-12-31
Note the txn_amt and date columns for Jim. Is there any way in hive to define default values like this?
You can use COALESCE for this: instead of selecting Table2.txn_amt on its own, use
COALESCE(Table2.txn_amt, 0.0)
COALESCE returns the first value in its list that is not null. So if txn_amt is null, it moves on to the second value in the list; 0.0 is never null, so it picks that. If txn_amt has a value, it returns that value.
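Putting it together with the query in the question, and extending the same idea to the date column (whose desired default above is 2099-12-31), a sketch might look like the following. Note the join direction is flipped to Table1 LEFT OUTER JOIN Table2 so that Jim, who has no transactions, is kept in the result; depending on the date column's type you may need to cast the literal:
INSERT INTO TABLE user_txn
SELECT
    Table1.user_id,
    Table1.name,
    COALESCE(Table2.txn_amt, 0.0),
    COALESCE(Table2.date, '2099-12-31')
FROM
    Table1
LEFT OUTER JOIN
    Table2
ON
    Table1.user_id = Table2.user_id;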
