I am trying Spring Data JDBC with @Query and I ran into a question: if I have a query that joins two
tables like this:
@Query("select a.*, b.* from master a, detail b where b.master_id=a.id and a.id=:id")
how do I get the response?
The official documentation (Spring Data JDBC, References, and Aggregates) doesn't give any hint on it.
Any suggestions?
You have multiple options:
Use the default RowMapper
Use as the return type a class (or a collection of that class) that has properties for all the fields you are selecting.
This will return a single element for each row.
Since you are referring to the two tables as master and detail you probably don't want that in this case.
Use a custom RowMapper
The @Query annotation allows you to specify your own rowMapperClass, which will get instantiated and passed to a NamedParameterJdbcTemplate together with your query.
Again, this will result in one result element per row selected and therefore probably isn't what you want.
Use a custom ResultSetExtractor
This can again be specified in the @Query annotation, and it allows you to construct the result in an arbitrary way.
See the documentation for ResultSetExtractor for more details.
Remark: If you are using a ResultSetExtractor to create a single Master with multiple Detail instances from multiple rows of your query result, make sure to add an ORDER BY master.id. Otherwise the order of rows is likely to be as desired, but it is not actually guaranteed.
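For illustration, here is a minimal sketch of such an extractor. The Master and Detail classes, their constructors and getters, and the column aliases detail_id and detail_name are assumptions; since a.* and b.* can produce duplicate column names, the real query would need matching aliases.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.ResultSetExtractor;

public class MasterDetailExtractor implements ResultSetExtractor<List<Master>> {

    @Override
    public List<Master> extractData(ResultSet rs) throws SQLException, DataAccessException {
        List<Master> result = new ArrayList<>();
        Master current = null;
        while (rs.next()) {
            long masterId = rs.getLong("id");
            // Start a new Master whenever the id changes.
            // This relies on the rows being ordered by master id (see the remark above).
            if (current == null || current.getId() != masterId) {
                current = new Master(masterId);
                result.add(current);
            }
            current.getDetails().add(new Detail(rs.getLong("detail_id"), rs.getString("detail_name")));
        }
        return result;
    }
}

The extractor is then registered on the repository method, e.g. @Query(value = "...", resultSetExtractorClass = MasterDetailExtractor.class).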
I have a generic table in the global area and I want to use it in a SELECT FROM. Is this possible, or is there a way to do this?
Example Code:
FIELD-SYMBOLS: <gt_data> TYPE STANDARD TABLE.
CLASS-DATA: mo_data TYPE REF TO data.
CREATE DATA mo_data LIKE lt_data.
ASSIGN mo_data->* TO <gt_data>.
<gt_data> = lt_data.
SELECT data~matnr,
       mbew~malzeme_deger
  FROM zmm_ddl_mbew AS mbew
  INNER JOIN @<gt_data> AS data ON data~matnr EQ mbew~matnr
  INTO TABLE @DATA(lt_mbew).
If the generic table you are asking about is an internal table, which the code snippet suggests, then:
No, I don't think you can build a join that works on two different sources.
Unless there are some new kernel developments, SELECT statements are converted to database SQL statements.
The ABAP 7.5 documentation of the SELECT statement lists dbtab, view, or cds_entity as the possible sources for the FROM data_source.
Even if it were possible, there are other generic options that may make more sense. If the source internal table is small enough, you can build a generic WHERE clause to solve the problem, as sketched below:
SELECT ... FROM dbtab WHERE (string_cond).
If the internal table is so large that you end up with half the data in memory and half in the database, there is probably a better generic solution anyway.
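A minimal sketch of that dynamic-WHERE approach, assuming the generic table has a MATNR component (the table and field names come from the question; everything else is an assumption):

DATA lt_cond TYPE TABLE OF string.

" Build a dynamic WHERE clause from the generic table's MATNR values.
LOOP AT <gt_data> ASSIGNING FIELD-SYMBOL(<ls_row>).
  ASSIGN COMPONENT 'MATNR' OF STRUCTURE <ls_row> TO FIELD-SYMBOL(<lv_matnr>).
  IF lt_cond IS NOT INITIAL.
    APPEND `OR` TO lt_cond.
  ENDIF.
  APPEND |matnr = '{ <lv_matnr> }'| TO lt_cond.
ENDLOOP.

SELECT matnr,
       malzeme_deger
  FROM zmm_ddl_mbew
  WHERE (lt_cond)
  INTO TABLE @DATA(lt_result).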
No, it is not possible. From the SELECT datasource help:
If the FROM clause is specified statically, the internal table cannot be a generically typed formal parameter or a generically typed field symbol. Objects like this can only be specified in a dynamic FROM clause and must represent a matching internal table at runtime.
The above rule remains valid whether or not the internal table is joined with a database table.
I have a problem:
My PCollection is made of rows with this format:
{'word': 'string', 'table': 'string'}
I want to write only the words into BigQuery; however, I need the table field to be able to select the right table in BigQuery.
This is how my pipeline looks:
tobq = (input
        | 'write names to BigQuery' >> beam.io.gcp.bigquery.WriteToBigQuery(
            table=compute_table_name,
            schema=compute_schema,
            insert_retry_strategy='RETRY_ON_TRANSIENT_ERROR',
            create_disposition=beam.io.gcp.bigquery.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.gcp.bigquery.BigQueryDisposition.WRITE_APPEND)
        )
The function compute_table_name accesses an element and returns the table field. Is there a way to write into BQ just the words while still having this table selection mechanism based on rows?
Many thanks!
Normally the best approach with a situation like this in BigQuery is to use the ignoreUnknownValues parameter in ExternalDataConfiguration. Unfortunately Apache Beam doesn't yet support enabling this parameter while writing to BigQuery, so we must find a workaround, as follows:
Pass a mapping of IDs to tables via table_side_inputs
This solution only works if identical word values are guaranteed to map to the same table each time, or if there is some kind of unique identifier for your elements. This method is a bit more involved, but it relies only on the Beam model instead of having to touch the BigQuery API.
The solution makes use of the table_side_inputs parameter of WriteToBigQuery to dynamically pick the table for an element even if the element is missing the table field. The basic idea is to create a dict of ID:table (where ID is either the unique ID or just the word field). Creating this dict can be done with CombineGlobally by combining all elements into a single dict.
Meanwhile, you use a transform to drop the table field from your elements before the WriteToBigQuery transform. Then you pass the dict as a side input to WriteToBigQuery and write a callable table parameter that looks the destination up in the dict instead of reading the table field.
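A minimal sketch of that approach (the sample rows and the schema are assumptions; here beam.pvalue.AsDict turns the (word, table) pairs into the side-input dict):

import apache_beam as beam

with beam.Pipeline() as p:
    rows = p | 'Read' >> beam.Create([
        {'word': 'hello', 'table': 'project:dataset.table_a'},  # sample data (assumed)
        {'word': 'world', 'table': 'project:dataset.table_b'},
    ])

    # Side input: a dict mapping each word to its destination table.
    word_to_table = beam.pvalue.AsDict(
        rows | 'ToPairs' >> beam.Map(lambda row: (row['word'], row['table'])))

    # Drop the table field so only the word gets written.
    words_only = rows | 'DropTable' >> beam.Map(lambda row: {'word': row['word']})

    _ = words_only | 'WriteToBQ' >> beam.io.gcp.bigquery.WriteToBigQuery(
        # The callable receives each element plus the side input and
        # returns the destination table for that element.
        table=lambda row, mapping: mapping[row['word']],
        table_side_inputs=(word_to_table,),
        schema='word:STRING',
        insert_retry_strategy='RETRY_ON_TRANSIENT_ERROR',
        create_disposition=beam.io.gcp.bigquery.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.gcp.bigquery.BigQueryDisposition.WRITE_APPEND)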
I would like to know if this is possible. I have a query that produces a nice report showing a relationship between two entities through two other nodes; there can be more than one path. I now want to create a direct relationship between those two nodes, with a count of the paths and a sum based on data in the nodes in between. The report query is below.
match (bo:BuyerAgency)<-[:IS_FOR_BO]-(sol:Solicitation)-[:SELECTED]->(prop:Proposal)<-[:OWNS_BID]-(so:VendorOrg)
where sol.currStatus='Awarded'
return bo.AgencyName, count(sol.Number) as awards, so.orgName, sum(prop.finalPrice) as awardVolume;
What I want to do is similar to the query below, which does not work.
match (bo:BuyerAgency)<-[:IS_FOR_BO]-(sol:Solicitation)-[:SELECTED]->(prop:Proposal)<-[:OWNS_BID]-(so:VendorOrg)
where sol.currStatus='Awarded'
create (bo)-[:HAS_AWARDED{awardCount: count(sol.Number), awardVolume: sum(prop.finalPrice)}]->(so);
If I remove the properties from the relationship it works, but I want to add the properties without too much programming.
I am using the most recent version of Neo4j 3.2.
Thanks
The problem here is that you are trying to use the count() and sum() functions in an invalid context. The query below should work:
match (bo:BuyerAgency)<-[:IS_FOR_BO]-(sol:Solicitation)-[:SELECTED]->(prop:Proposal)<-[:OWNS_BID]-(so:VendorOrg)
where sol.currStatus='Awarded'
with bo, so, count(sol.Number) as count_sol, sum(prop.finalPrice) as sum_finalPrice
create (bo)-[:HAS_AWARDED{awardCount: count_sol, awardVolume: sum_finalPrice}]->(so);
This query uses WITH to pass bo, so, and the results of the aggregation functions count(sol.Number) and sum(prop.finalPrice) to the next context. Afterwards, these values are used to create the new relationship between bo and so.
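As a design note: if this query may run more than once, a variant using MERGE plus SET (sketched below, not part of the original answer) avoids creating duplicate relationships between the same pair of nodes:

match (bo:BuyerAgency)<-[:IS_FOR_BO]-(sol:Solicitation)-[:SELECTED]->(prop:Proposal)<-[:OWNS_BID]-(so:VendorOrg)
where sol.currStatus='Awarded'
with bo, so, count(sol.Number) as count_sol, sum(prop.finalPrice) as sum_finalPrice
merge (bo)-[r:HAS_AWARDED]->(so)
set r.awardCount = count_sol, r.awardVolume = sum_finalPrice;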
I am using Olingo 1.2 on top of Hibernate.
I have a request that returns 250 rows; each row links to another table in a one-to-many relationship.
I execute $expand to get all the data in the child table, but when I examine the queries executed in the database, it appears that 251 individual calls are being made: one for the master table returning 250 rows, and then one for each of those rows to return the child records.
Looking at the Olingo code, this lazy approach is by design.
I've tested $expand on the Microsoft OData processor, and it uses a greedy approach in this case.
My Question is: How can I switch Olingo to use a greedy approach for $expand (i.e. push the join down into the database)?
The queries that you are seeing generated are a result of Hibernate, not Olingo. This is the default way Hibernate generates queries for the child table. You need to look at the @Fetch(FetchMode.JOIN) annotation in Hibernate and apply it to your relation. Please take a look at this link for the explanation:
https://stackoverflow.com/a/11077041/3873392
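For illustration, a minimal sketch of such a mapping (the entity and field names are assumptions, not taken from the question):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Fetch;
import org.hibernate.annotations.FetchMode;

@Entity
public class Master {

    @Id
    private Long id;

    // FetchMode.JOIN tells Hibernate to load the children with a single
    // outer join instead of issuing one extra query per parent row.
    @OneToMany(mappedBy = "master")
    @Fetch(FetchMode.JOIN)
    private List<Child> children;
}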
I'm creating a history page. So I was wondering if there is any way to fetch all rows from multiple tables and then sort them by time. Every table has a field called "created_at".
So is there any way to fetch from all tables and sort without having Rails sort them for me?
You may get a better answer, but I would presume you would need to:
1. Create a History table with a created-date column, an autogenerated id column, and any other contents you would like to expose (e.g. name, description).
2. Modify all tables that generate a "history" item to consume this new table via a foreign-key relationship on History.Id.
"Mashing up" tables (i.e. merging different result sets into a single result set) is a very difficult problem, but you would effectively be doing the above anyway, just in the application layer; so why not do it correctly and more efficiently in the data layer? A migration sketch follows.
Hope this helps :)
You would need to perform SQL like:
SELECT * FROM table ORDER BY created_at ASC
Store this in an array. Do this for each of the data sources, and then perform a merge sort on all the arrays in Ruby. Of course this will work well for small data sets, but once you get a data set that is large (i.e. greater than will fit into memory), you will have to use a different collect/merge algorithm.
So I guess the answer is that you do need to do some sorting in Ruby, unless you resort to the UNION method described in another answer.
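A minimal sketch of the in-memory variant (the model names are assumptions, and as noted above this only suits small data sets):

# Fetch each source already ordered, then merge and sort in Ruby.
posts    = Post.order("created_at").to_a
comments = Comment.order("created_at").to_a

history = (posts + comments).sort_by(&:created_at)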
Depending on whether these databases are all on the same machine or not:
On the same machine: use ORDER BY and UNION statements in your SQL to return your result set.
On different machines: you'll want to test this for performance, but you could use linked servers with UNION and ORDER BY. Alternatively, you could have Ruby get the results from each DB and then combine and sort them.
EDIT: From your last comment about different tables, not DBs, use something like this:
SELECT created_at FROM table1
UNION
SELECT created_at FROM table2
ORDER BY created_at