How to select all columns from a join table in Vapor 4?

It seems that Vapor 4 added a new eager-loading feature (eagerLoad) and removed alsoDecode. That is convenient for models with parent-child or sibling relationships, but not for models without a relationship.
I want to implement a tree structure whose nodes cannot be (or I don't know how to make them) part of a Fluent relationship. Each node has a parent and many children, which are nodes too.
So I have three tables for this structure.
Tree:
| Field | Type |
| ----------- | --------------- |
| id | UUID? |
| name | String |
| nodes | [Node] |
| paths | [Path] |
Nodes:
| Field | Type |
| ------------- | -------------------------- |
| id | UUID? |
| type | NodeType(root, leaf, node) |
| tree | Tree |
Path:
| Field | Type |
| ------------ | --------- |
| id | UUID? |
| distance | Int |
| ancestorID | UUID |
| descendantID | UUID |
| tree | Tree |
The question is: if I want to run
SELECT Nodes.id, Nodes.type, Path.ancestorID FROM Nodes
INNER JOIN Path
ON Nodes.id = Path.descendantID
how do I write the code?

You could also choose to cast to SQLDatabase.
Make sure to import SQLKit too; a Fluent database can always be cast to SQLDatabase.
For example, let sqlDb = req.db as? SQLDatabase gives you the power to use custom queries like:
sqlDb?.select().from("Nodes").join("Path", on: "Nodes.id = Path.descendantID").all()
For more info on SQLKit see: https://github.com/vapor/sql-kit
Reference: https://docs.vapor.codes/4.0/fluent/advanced/ (Any Fluent Database can be cast to a SQLDatabase. This includes req.db, app.db, the database passed to Migration, etc.)
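Independent of Fluent, the raw join itself behaves as below; a minimal SQLite sketch with made-up sample rows (the Tree table and its columns are omitted for brevity):

```python
import sqlite3

# In-memory sketch of the Nodes and Path tables from the question.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Nodes (id TEXT PRIMARY KEY, type TEXT);
CREATE TABLE Path  (id TEXT PRIMARY KEY, distance INT,
                    ancestorID TEXT, descendantID TEXT);
INSERT INTO Nodes VALUES ('n1', 'root'), ('n2', 'leaf');
INSERT INTO Path  VALUES ('p1', 1, 'n1', 'n2');
""")

# The exact query from the question: nodes joined to their path rows.
rows = db.execute("""
SELECT Nodes.id, Nodes.type, Path.ancestorID FROM Nodes
INNER JOIN Path
ON Nodes.id = Path.descendantID
""").fetchall()
print(rows)  # only n2 has a path row, paired with its ancestor n1
```

Whatever SQLKit builder (or raw SQL) you use on the Vapor side should produce rows of this shape, which you can then decode into a small Codable struct.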


How to create a relationship using Cypher

I have been learning Neo4j/Cypher for the last week. I have finally been able to upload two CSV files and create a relationship, "captured". However, I am not fully confident in my understanding of the code, as I was following the tutorial on the Neo4j site. Could you please help me confirm that what I did is correct?
I have two CSV files, "cap.csv" and "survey.csv". The survey table contains data for each unique survey conducted at the survey sites; the cap table contains data for each unique organism captured. In the cap table I have a foreign key, "survey_id", which in the Postgres db you would join to the primary key in the survey table.
I want to create a relationship, "captured", showing each unique organism that was captured, based on the "date" column in the survey table.
Survey table
| lake_id | date     | survey_id | duration |
| ------- | -------- | --------- | -------- |
| 1       | 05/27/14 | 1         | 7        |
| 2       | 03/28/13 | 2         | 10       |
| 2       | 06/29/19 | 3         | 23       |
| 3       | 08/21/21 | 4         | 54       |
| 1       | 07/23/18 | 5         | 23       |
| 2       | 07/22/23 | 6         | 12       |
Capture table
| cap_id | species | capture_life_stage | weight | survey_id |
| ------ | ------- | ------------------ | ------ | --------- |
| 1      | a       | adult              | 10     | 1         |
| 2      | a       | adult              | 10     | 2         |
| 3      | b       | juv                | 23     | 3         |
| 4      | a       | adult              | 54     | 4         |
| 5      | b       | juv                | 23     | 5         |
| 6      | c       | juv                | 12     | 6         |
LOAD CSV WITH HEADERS FROM 'file:///cap.csv' AS row
WITH
  row.id AS id,
  row.species AS species,
  row.capture_life_stage AS capture_life_stage,
  toInteger(row.weight) AS weight,
  row.survey_id AS survey_id
MATCH (c:cap {id: id})
MERGE (s)-[rel:captured {survey_id: survey_id}]->(c)
RETURN count(rel)
I am struggling to understand the code I wrote above. I followed the Neo4j tutorial exactly but used my own data (https://neo4j.com/developer/desktop-csv-import/).
I am fairly confident from data checks, but did the above code create the "captured" relationship showing each unique organism captured on that unique survey date? Based on the visual I can see, I believe it did, but I don't fully understand each step in the code.
What is the purpose of MATCH (c:cap {id: id}) in the code?
The code below
MATCH (c:cap {id: id})
is the same as
MATCH (c:cap)
WHERE c.id = id
It is a shorter way of finding a cap node by its id; the MERGE then creates a relationship to it.
One question back at you: s is not defined anywhere in your query. Where is it? Because s is unbound, the first run of the MERGE has no existing pattern to match, so it creates a new, label-less node for every row instead of linking to your existing survey nodes.
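A version that binds s first might look like this (a sketch only: it assumes your survey nodes carry a :survey label with an id property and that the CSV headers are cap_id and survey_id as in your tables; adjust names to match your graph):

```cypher
LOAD CSV WITH HEADERS FROM 'file:///cap.csv' AS row
MATCH (c:cap {id: row.cap_id})
MATCH (s:survey {id: row.survey_id})
MERGE (s)-[rel:captured]->(c)
RETURN count(rel)
```

With both ends MATCHed before the MERGE, re-running the import is idempotent: the relationship is only created once per survey/capture pair.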

How to Create Two Foreign Keys in One Table?

I'd like to build a Rails app to compare distances. Based on a comment about having a Distance table contain two foreign keys to Location, I've created the example below.
The Locations table will keep a record of the location name. The Distance table will keep two locations and a distance between them.
# Locations
| id | name |
| :----- | :----------- |
| 1 | location_1 |
| 2 | location_2 |
| 3 | location_3 |
# Distance
| id | location_id | location_id | distance |
| :----- | :----------- | :----------- | :------- |
| 1 | 1 | 2 | 6 |
| 2 | 1 | 3 | 13 |
How do I create two foreign keys in one table? By running rails generate model Distance distance:integer location:references I only get one location_id.
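For reference, one common pattern (a sketch only; the names location_a/location_b and the migration version are my own assumptions, not from the question) is to give each reference its own name and point both at the locations table:

```ruby
# db/migrate/xxxx_create_distances.rb
class CreateDistances < ActiveRecord::Migration[7.0]
  def change
    create_table :distances do |t|
      # Two differently named columns, both constrained to locations(id)
      t.references :location_a, null: false, foreign_key: { to_table: :locations }
      t.references :location_b, null: false, foreign_key: { to_table: :locations }
      t.integer :distance
      t.timestamps
    end
  end
end

# app/models/distance.rb
class Distance < ApplicationRecord
  belongs_to :location_a, class_name: "Location"
  belongs_to :location_b, class_name: "Location"
end
```

The key point is that the two columns cannot both be called location_id; each association gets its own column name, with class_name telling Rails both still point at Location.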

Can you use parameter/variables/placeholders for values for future use in a Specflow scenario?

In a previous job I used DBFit, with parameters (variables/placeholders) for values.
example:
|Key? |
|>>Key|
!|Query|SELECT Status FROM Confirm WHERE Name='xyz' |
| Status | Key |
| Confirmed | <<Key |
I am now using SpecFlow and wondered if it has similar functionality.
Example (I have used << and >> here just for explanation):
Given I get Initial for
And the 1st response should contain
| Name | string | "xyz" |
| Key | string | >>{Key} |
When I get Confirm for
Then the 1st response should contain
| Name | string | "xyz" |
| Key | string | <<{Key} |
I think you are looking for Scenario Outlines.
With them, you can specify a table of your parameters. In your case, it would look something like this:
Scenario Outline: Title for your Scenario Outline
Given I get Initial for
And the 1st response should contain
| field | type | assertion |
| Name | string | "xyz" |
| Key | string | <Key> |
When I get Confirm for
Then the 1st response should contain
| field | type | assertion |
| Name | string | "xyz" |
| Key | string | <Key> |
Examples:
| Key |
| example1 |
| example2 |
| example3 |
Be aware that there are two different types of tables here. The table attached to a step is an argument for that step.
The Examples table at the end holds the concrete examples; the scenario is executed once for each entry in this table.
You can use the parameters from the Examples table with a simple <COLUMN_NAME> placeholder.
Docs: https://docs.specflow.org/projects/specflow/en/latest/Gherkin/Gherkin-Reference.html#scenario-outline
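One caveat: Scenario Outlines substitute values that are known when the feature file is written. If, as in the DBFit example, the value is only produced at runtime (the >>Key case), a common SpecFlow approach is to stash it in ScenarioContext inside the step definitions; a sketch with made-up step texts and a stand-in helper:

```csharp
[Binding]
public class ConfirmSteps
{
    private readonly ScenarioContext _scenarioContext;

    public ConfirmSteps(ScenarioContext scenarioContext)
        => _scenarioContext = scenarioContext;

    [When("I get Initial for")]
    public void WhenIGetInitialFor()
    {
        var key = CallInitialService();     // hypothetical service call
        _scenarioContext["Key"] = key;      // the >> side: store for later steps
    }

    [Then("the Confirm response should contain the stored key")]
    public void ThenConfirmContainsStoredKey()
    {
        var expected = (string)_scenarioContext["Key"];  // the << side: read it back
        // assert here that the Confirm response contains 'expected'
    }

    private string CallInitialService() => "xyz-key";    // stand-in for the real call
}
```

ScenarioContext is scoped to one scenario, so the stored key cannot leak between test cases.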

How can I capture and store data from a repeating HL7 segment?

We currently capture data from HL7 messages like the example below and then insert it into the database. This is easy, as it is a value from a single segment:
var vACC_NO = checkSize("ACC", msg['PID']['PID.3']['PID.3.1'].toString(), 20);
INSERT INTO adt_tab (SITEID, ACC_NO) VALUES (vSITEID, vACC_NO);
Now I need to capture DG1 segment data, where we have multiple DG1 segments in the HL7 message, and I also need to store those in the database:
DG1|1|ICD10|I22.8^MYOCARDIAL INFARCT^ICD10|MYOCARDIAL INFARCTION|201702010437|B|||||||||7|
DG1|2|ICD10|A44.9^ORGANISM^ICD10|ORGANISM|201702010437|B|||||||||7|
So in my database table I now have more columns: SITEID, ACC_NO, CODE1, CODE2, ...
From the above message I need to insert I22.8 into CODE1, A44.9 into CODE2, and so on.
How should I first capture these codes in a loop over the multiple DG1 segments in the message?
And then how should I store them in the database?
Thanks
You can iterate over the segments like this:
for each (dg1 in msg['DG1']) {
    var variable1 = dg1['DG1.3']['DG1.3.1'];
    var variable2 = dg1['DG1.3']['DG1.3.2'];
    // database call with the previous values
    databaseCall(variable1, variable2, ...);
}
For each segment you then do one insert.
Apart from this, I do not think it is a good idea to add more columns to the same row (CODE1, CODE2, CODE3, ...): it is not normalized, and it is not good database design practice.
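Outside of Mirth, the same loop-and-insert idea can be sketched in plain Python; the table and column names here (dg1_tab, seq, code, descr) are illustrative, with one normalized row per DG1 segment instead of CODE1, CODE2, ... columns:

```python
import sqlite3

# Two DG1 segments from a simplified pipe-delimited HL7 message.
hl7 = (
    "MSH|^~\\&|SENDING_APP\r"
    "DG1|1|ICD10|I22.8^MYOCARDIAL INFARCT^ICD10\r"
    "DG1|2|ICD10|A44.9^ORGANISM^ICD10\r"
)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dg1_tab (siteid TEXT, acc_no TEXT, seq INT, code TEXT, descr TEXT)")

# One INSERT per DG1 segment, looping over the repeating segments.
for segment in hl7.split("\r"):
    fields = segment.split("|")
    if fields[0] != "DG1":
        continue
    code, descr = fields[3].split("^")[:2]  # DG1-3.1 (code) and DG1-3.2 (text)
    db.execute("INSERT INTO dg1_tab VALUES (?, ?, ?, ?, ?)",
               ("SITE1", "ACC1", int(fields[1]), code, descr))

codes = [r[0] for r in db.execute("SELECT code FROM dg1_tab ORDER BY seq")]
print(codes)  # ['I22.8', 'A44.9']
```

Querying the diagnoses for one visit then becomes a simple SELECT over dg1_tab instead of scanning an open-ended list of CODEn columns.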

Concatenating nodes from a query into a single line for export to csv in Neo4J using Cypher

I have a neo4J graph that represents a chess tournament.
Say I run this:
MATCH (c:ChessMatch {m_id: 1})-[:PLAYED]-(p:Player) RETURN *
This gives me the results of the two players who played in a chess match.
The graph looks like this:
And the properties are something like this:
|--------------|------------------|
| (ChessMatch) | |
| m_id | 1 |
| date | 1969-05-02 |
| comments | epic battle |
|--------------|------------------|
| (player) | |
| p_id | 1 |
| name | Roy Lopez |
|--------------|------------------|
| (player) | |
| p_id | 2 |
| name | Aron Nimzowitsch |
|--------------|------------------|
I'd like to export this data to a csv, which would look like this:
| m_id | date | comments | p_id_A | name_A | p_id_B | name_B |
|------|------------|-------------|--------|-----------|--------|------------------|
| 1 | 1969-05-02 | epic battle | 1 | Roy Lopez | 2 | Aron Nimzowitsch |
Googling around, surprisingly, I didn't find any solid answers. The best I could think of is to just use py2neo, pull down all the data as separate tables, and merge them in Pandas, but this seems uninspiring. Any ideas on how to do it in Cypher would be greatly illuminating.
APOC has a procedure for that:
apoc.export.csv.query
Check https://neo4j-contrib.github.io/neo4j-apoc-procedures/index32.html#_export_import for more details. Note that you'll have to add the following to neo4j.conf:
apoc.export.file.enabled=true
Hope this helps.
Regards,
Tom
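To get one row per match, the two player nodes can first be gathered with collect() so they land in separate columns; a sketch of the query you would pass to apoc.export.csv.query, assuming exactly two :PLAYED players per match (column names taken from the desired CSV above):

```cypher
MATCH (c:ChessMatch)-[:PLAYED]-(p:Player)
WITH c, collect(p) AS players
RETURN c.m_id          AS m_id,
       c.date          AS date,
       c.comments      AS comments,
       players[0].p_id AS p_id_A,
       players[0].name AS name_A,
       players[1].p_id AS p_id_B,
       players[1].name AS name_B
```

Note that collect() gives no guaranteed ordering of the two players; add an ORDER BY before the collect if the A/B assignment matters.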
