I am inserting data from a user-defined table type (UDT) into another table, but if the UDT contains duplicate rows it updates them as well

I want the MERGE to skip duplicate rows coming from the UDT instead of updating them:
MERGE INTO Contact AS T
USING #NameChangeSchedulerUDT AS S
    ON CAST(T.EMPLOYEENUMBER AS INT) = CAST(S.EMPLOYEENUMBER AS INT)
WHEN MATCHED THEN
    UPDATE SET
        T.FirstName = S.FirstName,
        T.MiddleName = S.MiddleName,
        T.LastName = S.LastName;
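A common way to make the MERGE skip duplicates (a sketch, assuming it is acceptable to keep one arbitrary row per EMPLOYEENUMBER) is to deduplicate the UDT in the USING clause with ROW_NUMBER():
MERGE INTO Contact AS T
USING (
    SELECT EMPLOYEENUMBER, FirstName, MiddleName, LastName
    FROM (
        SELECT EMPLOYEENUMBER, FirstName, MiddleName, LastName,
               -- keeps one arbitrary row per employee; use a real ORDER BY if a specific row should win
               ROW_NUMBER() OVER (PARTITION BY EMPLOYEENUMBER ORDER BY (SELECT NULL)) AS rn
        FROM #NameChangeSchedulerUDT
    ) AS d
    WHERE d.rn = 1
) AS S
    ON CAST(T.EMPLOYEENUMBER AS INT) = CAST(S.EMPLOYEENUMBER AS INT)
WHEN MATCHED THEN
    UPDATE SET
        T.FirstName = S.FirstName,
        T.MiddleName = S.MiddleName,
        T.LastName = S.LastName;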

Merging Two Files Together and Retaining All Columns

I think this surely must be a simple thing to achieve, but I have tried various appends and merges and can't seem to get it right.
I have two files, one titled 'Previous' and one titled 'Current'. Both show near-identical data, like so:
ID Status Date_Changed
1 Closed 10/11/21
2 Open 10/01/21
3 Closed 10/03/21
4 Pending 10/15/21
I'd like to merge both files together, but retain all columns so that it is structured as below. This will allow me to show tables of what has changed etc.
ID Previous.Status Current.Status Previous.Date_Changed Current.Date_Changed
1 Closed Open 10/11/21 10/15/21
2 Open Closed 10/01/21 10/15/21
3 Closed Pending 10/03/21 10/14/21
I am aware this is probably due to my own naivety with Power BI. I have tried combining the data by connecting to the folder, but that seems to create a new dataset with the data stacked on top (i.e. with duplicate ID values). I tried using merge queries as new and joining by ID, but that didn't seem to give me the right output either.
You can start from the Current table, merge in the Previous table joining on ID, and then expand the columns. Rename and reorder columns as desired.
Here's an example you can paste into the Advanced Editor:
let
    CurrentSource = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUfIvSM0DUoYG+oam+kYGRoZKsTrRSkZAIeec/OLUFEw5Y6BQQGpeSmZeOlTSBCFpglMyFgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Status = _t, Date_Changed = _t]),
    Current = Table.TransformColumnTypes(CurrentSource, {{"ID", Int64.Type}, {"Status", type text}, {"Date_Changed", type date}}),
    PreviousSource = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXLOyS9OTQEyDA30DQ31jQyMDJVidaKVjIBC/gWpeRAZAyQZYzRdBsYIOROgUEBqXkpmXjrUSFOoZCwA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Status = _t, Date_Changed = _t]),
    Previous = Table.TransformColumnTypes(PreviousSource, {{"ID", Int64.Type}, {"Status", type text}, {"Date_Changed", type date}}),
    #"Merged Queries" = Table.NestedJoin(Current, {"ID"}, Previous, {"ID"}, "Previous", JoinKind.LeftOuter),
    #"Expanded Previous" = Table.ExpandTableColumn(#"Merged Queries", "Previous", {"Status", "Date_Changed"}, {"Previous.Status", "Previous.Date_Changed"}),
    #"Renamed Columns" = Table.RenameColumns(#"Expanded Previous", {{"Status", "Current.Status"}, {"Date_Changed", "Current.Date_Changed"}}),
    #"Reordered Columns" = Table.ReorderColumns(#"Renamed Columns", {"ID", "Previous.Status", "Current.Status", "Previous.Date_Changed", "Current.Date_Changed"})
in
    #"Reordered Columns"
Note: I've defined Previous within the query above so that it's self-contained. Ordinarily, it would be a separate query.
You can try the following steps:
create a new column called "Source Name" in each table with a constant value indicating whether the data is from "Previous" or "Current"
Then append both tables in Power Query (see the sketch after these steps). This will still stack the tables on top of each other.
But now you can use the "Source Name" column to differentiate them in the Matrix visual. You can add a Matrix visual with:
"Source Name" in the Column Field
"Date" & "Status" in values field
"ID" in Rows field
This is ideal because you get all the data in a tabular format which will help you in further calculations if necessary
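The tagging and append steps might look like this in Power Query (a sketch; Previous and Current are assumed to be your two existing queries):
let
    // Tag each table with a constant "Source Name" column
    PreviousTagged = Table.AddColumn(Previous, "Source Name", each "Previous", type text),
    CurrentTagged = Table.AddColumn(Current, "Source Name", each "Current", type text),
    // Append (stack) the tagged tables on top of each other
    Appended = Table.Combine({PreviousTagged, CurrentTagged})
in
    Appended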
(The original answer included an example screenshot of the resulting Matrix visual.)

How to delete an ets entry based on a non-key part

I have an ets table 'table' as {key, [val1, val2]}
I selected this data from the table using:
ets:select(table, [{{'$1','$2'}, [], ['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
I want to delete an entry matching the part [val1, val2] using this:
ets:select_delete(table, [{{'$1','$2'}, [{'==','$2',["val1",<<"12">>]}], ['$$']}]).
0
But when I run select again, I still get:
ets:select(table, [{{'$1','$2'}, [], ['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
How can I delete this entry based on the non-key part?
The ets:select_delete documentation says:
The match specification has to return the atom true if the object is to be deleted. No other return value gets the object deleted. So one cannot use the same match specification for looking up elements as for deleting them.
So try this:
ets:select_delete(table, [{{'$1','$2'}, [{'==','$2',["val1",<<"12">>]}], [true]}]).
(Note that the match specification body must be a list, so the body is [true] rather than a bare true.) ets:select_delete returns the number of objects it deleted, so it should return 1 this time.
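For reference, the same shape with a wildcard match head is the documented idiom for deleting every object in a table:
%% The '_' head matches any object; the [true] body marks each match for deletion
ets:select_delete(table, [{'_', [], [true]}]).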

How to add a column to a mnesia table

I am trying to add a new column to an existing mnesia table. For that, I use the following code.
test() ->
    Transformer =
        fun(X) ->
            #users{name = X#user.name,
                   age = X#user.age,
                   email = X#user.email,
                   year = 1990}
        end,
    {atomic, ok} = mnesia:transform_table(user, Transformer, record_info(fields, users), users).
The two records I have:
-record(user,{name,age,email}).
-record(users,{name,age,email,year}).
My problem is that when I get values from my user table, they come back as:
{atomic,[{users,sachith,28,sachith#so,1990}]}
Why do I get the users record name when I retrieve data from the user table?
The table name and the record name are not necessarily the same. You started out with a table called user holding user records, and then you transformed all the user records into users records. So when you read from the table, it will return users records, since that's what the table now contains.
If you look at the internal representation of a record,
-record(Name, {Field1, ..., FieldN}). is represented by {Name, Value1, ..., ValueN}.
So, basically you are converting {user,name,age,email} to {users,name,age,email,year} in your table.
But there is a better approach for such migrations, which will come in handy as you update your records later. Looking at production codebases, a better snippet for the transformer function is:
%% -record(user, {name, age, email}).        %% old
%% -record(user, {name, age, email, year}).  %% new
Transformer =
    fun(X) ->
        #user{name = element(2, X),
              age = element(3, X),
              email = element(4, X),
              year = 1990}
    end,
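You would then apply it keeping the same record name (a sketch, assuming the module is compiled against the new four-field user record):
%% transform_table(Tab, Fun, NewAttributeList, NewRecordName)
{atomic, ok} = mnesia:transform_table(user, Transformer, record_info(fields, user), user).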

Neo4J Insertion taking time

I have a query which is taking a long time to insert in Neo4j. Roughly, the query looks like the following:
create index on :symaccess_symdev(dir_port);
create index on :symaccess_symdev(host_lun);
create index on :symaccess_symdev(ini_tiator_group_name);
create index on :symaccess_symdev(sym_dev);
CALL apoc.load.json('file:////root/output/1530115956414/dev.json') YIELD value AS row
UNWIND row.symdev AS symdevs
MERGE (accesssymdev:symaccess_symdev {
    sym_dev: symdevs.sym_dev,
    host_lun: symdevs.host_lun,
    ini_tiator_group_name: symdevs.ini_tiator_group_name,
    dir_port: symdevs.dir_port
})
ON CREATE SET
    accesssymdev.attr_percentage = symdevs.attr_percentage,
    accesssymdev.cap_mb = toFloat(symdevs.cap_mb),
    accesssymdev.physicaldevicename = symdevs.physicaldevicename;
Assuming that the sym_dev property value is unique for every symaccess_symdev node, this query may be faster:
CALL apoc.load.json('file:////root/output/1530115956414/dev.json') YIELD value AS row
UNWIND row.symdev AS symdevs
MERGE (a:symaccess_symdev {sym_dev: symdevs.sym_dev})
ON CREATE SET
    a.host_lun = symdevs.host_lun,
    a.ini_tiator_group_name = symdevs.ini_tiator_group_name,
    a.dir_port = symdevs.dir_port,
    a.attr_percentage = symdevs.attr_percentage,
    a.cap_mb = toFloat(symdevs.cap_mb),
    a.physicaldevicename = symdevs.physicaldevicename;
A MERGE will use at most one index, so your current query will cause the Cypher planner to pick one index (out of the 4 that are applicable). After using that index to generate a set of candidate nodes, it would still need to check the other 3 properties for each candidate node. If it had picked an index that is not very selective (because many nodes tend to share the same property value), then a lot of work would need to be done per MERGE.
Assuming that the sym_dev property value is unique, the above query simplifies the MERGE so that it will quickly discover whether the wanted symaccess_symdev node exists, without needing to check any other properties.
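If sym_dev really is unique, you could also enforce that with a uniqueness constraint in place of the plain index (a suggestion beyond the original answer; Neo4j 3.x syntax, and the existing index on sym_dev must be dropped first, since the constraint creates its own backing index):
DROP INDEX ON :symaccess_symdev(sym_dev);
CREATE CONSTRAINT ON (s:symaccess_symdev) ASSERT s.sym_dev IS UNIQUE;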

Rails SQLite check returning double

I'm using Rails to search through a SQLite table (for other reasons I can't use the standard database-model system) using a SELECT query like so:
info = ActiveRecord::Base.connection.execute("SELECT * FROM #{form_name} WHERE EmailAddress = \"#{user_em}\";")
This returns the correct values, but for some reason the output is duplicated, the difference being that the 2nd set doesn't have column titles as hash keys, instead going from 0 to [num columns]. For example:
{"id"=>1, "Timestamp"=>"2/27/2017 14:26:03", "EmailAddress"=>"-snip-", 0=>1, 1=>"2/27/2017 14:26:03", 2=>"-snip-"}
(I'll note the obvious: there's only one row in the table with that information in it.)
While it's not exactly a fatal problem, I'm interested in why it's doing this and whether it's possible to prevent it. Thanks!
This comes from the sqlite3 gem's results_as_hash setting, which the Rails SQLite adapter enables: each row hash contains both the column names and the column indexes as keys. This allows you to read the values by either column index or column name:
id = row[0]
timestamp = row["Timestamp"]
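If the duplicate integer keys get in the way, one alternative (a sketch, not from the original answer) is exec_query, which returns an ActiveRecord::Result whose rows are keyed by column name only:
# quote and exec_query are standard connection methods; form_name and user_em are the question's variables
result = ActiveRecord::Base.connection.exec_query(
  "SELECT * FROM #{form_name} WHERE EmailAddress = #{ActiveRecord::Base.connection.quote(user_em)}"
)
rows = result.to_a  # => [{"id" => 1, "Timestamp" => "...", "EmailAddress" => "..."}]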
