Dynamically adding preloads in Ecto query - join

I have a table A and has_one associations to tables B and C.
I am querying A, but depending on the columns requested, I want the option to join and preload columns from B and/or C.
For joins, I think it's rather easy: they can be dynamically chained onto the query before invoking Repo.all. But what to do with the preload? Depending on whether I need tables B and C in the query, preload should have different arguments, or shouldn't be there at all.

The solution I propose is to build the preload list dynamically and pass it to preload().
The structure of the request sent to the backend looks like this:
{
  limit: 10,
  offset: 0,
  preloads: {
    "research_status": null,
    "assets": null,
    "notes": null,
    "job_applications": {"job_position": null, "status": null, "discarded_reason_var": null}
  }
}
In the request example above, the User entity has associations with research_status, assets, notes and job_applications. The job_applications in turn have associations with job_position, status and discarded_reason_var.
I have created a helper function to parse the 'preloads' key in the params (you can do this wherever you want; I am doing it in the controller):
def parse_query_preloads(%{"preloads" => preloads} = params) do
  preloads
  |> Jason.decode!()
  |> build_preloads()
  |> (&Map.put(params, "preloads", &1)).()
end

def parse_query_preloads(params), do: params

defp build_preloads(preload_map) do
  Enum.map(preload_map, fn
    # nested association: emit a {atom, nested_preloads} tuple
    {preload, sub_preload} when is_map(sub_preload) ->
      {String.to_atom(preload), build_preloads(sub_preload)}

    # leaf association: just the atom
    {preload, nil} ->
      String.to_atom(preload)
  end)
end

Note that String.to_atom/1 on untrusted input can exhaust the atom table; prefer String.to_existing_atom/1 if the association names already exist as atoms.
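For the sample request above, this produces a nested preload list. A sketch of the transformation (build_preloads/1 is private, so this transcript is illustrative only, and map key order is not guaranteed):

iex> build_preloads(%{
...>   "research_status" => nil,
...>   "job_applications" => %{"job_position" => nil, "status" => nil}
...> })
[{:job_applications, [:job_position, :status]}, :research_status]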
Parse the params in the controller like this:
params =
  params
  |> format_pagination()
  |> parse_query_preloads()
And finally, in my query, I simply take params["preloads"] and pass it to the preload function.
User
|> preload(^params["preloads"])
|> limit(^params["limit"])
|> offset(^params["offset"])
|> Repo.all()
This makes the preloads in the query dynamic. I hope this solves your problem; let me know about errors/improvements.
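As for the join half of the question, conditional chaining can look like this minimal sketch (the join_b? flag and the :b association name are hypothetical; assumes import Ecto.Query):

query = from(a in A)

# join and preload B only when requested
query =
  if join_b? do
    from(a in query, join: b in assoc(a, :b), preload: [b: b])
  else
    query
  end

Repo.all(query)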

Related

What is the equivalent of SELECT first(column), last(column) in Flux Query Language?

What would be the equivalent Flux queries for
SELECT first(column) as first, last(column) as last FROM measurement ?
SELECT last(column) - first(column) as column FROM measurement ?
(I am referring to Flux, the new query language developed by InfluxData.)
There are first() and last() functions, but I am unable to find an example that uses both in the same query.
Here is the Flux documentation for reference:
https://docs.influxdata.com/flux/v0.50/introduction/getting-started
https://v2.docs.influxdata.com/v2.0/query-data/get-started/
If you (or someone else who lands here) just wanted the difference between the min and max values, you would want the built-in spread() function, which outputs the difference between the minimum and maximum values in a specified column.
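For example (a sketch, reusing the bucket and field names from the query below):

from(bucket: "example-bucket")
  |> range(start: -1d)
  |> filter(fn: (r) => r._field == "field-you-need")
  |> spread(column: "_value")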
However, you're asking for the difference between the first and last values in a stream, and there doesn't seem to be a built-in function for that (probably because most streams are expected to be dynamic in range). To achieve this, you could either write a custom aggregate function, as in a similar answer, or union two queries together and then take the difference:
data = from(bucket: "example-bucket")
  |> range(start: -1d)
  |> filter(fn: (r) => r._field == "field-you-need")

temp_earlier_number = data
  |> first()
  |> set(key: "_field", value: "delta")

temp_later_number = data
  |> last()
  |> set(key: "_field", value: "delta")

union(tables: [temp_later_number, temp_earlier_number])
  |> difference()
What this does is create two tables with a field named delta and then union them together, resulting in a table with two rows: one representing the first value and the other representing the last. Then we take the difference between those two rows. If you don't want negative numbers, just be sure to subtract in the correct order for your data (or use math.abs).

JSONB query in Rails for a key that contains an array of hashes

I have a Rails 5 project with a Page model that has a JSONB column content. So the structure looks like this (reduced to the bare minimum for the question):
#<Page id: 46, content: {..., "media" => [{ "resource_id" => 143, "other_key" => "value", ...}, ...], ...}>
How would I write a query to find all pages that have a resource_id of some desired number under the media key of the content JSONB column? This was an attempt that I made which doesn't work (I think because there are other key/value pairs in each item of the array):
Page.where("content -> 'media' #> ?", {resource_id: '143'}.to_json)
EDIT: This works, but will only check the first hash in the media array:
Page.where("content -> 'media' -> 0 ->> 'resource_id' = ?", '143')
Using SQL, this should give you all pages that have resource_id 143:
select *
from pages p
where '{"resource_id": 143}' <@ ANY (
  ARRAY(
    select jsonb_array_elements(content -> 'media')
    from pages
    where id = p.id
  )
);
PostgreSQL has a construct called ANY (see the Postgres docs) which takes the form expression operator ANY (array). The left-hand expression is evaluated and compared to each element of the array using the given operator.
Since the right-hand parameter to ANY has to be an array (not a JSON array), we use the jsonb_array_elements function to convert the content -> 'media' JSON array into a set of rows, which are then turned into an array by ARRAY().
The <@ operator checks whether the expression on the left is contained in the expression on the right. Ex: '{"a": 1}'::jsonb <@ '{"b": 2, "a": 1}'::jsonb returns true.
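As a side note, for running this from Rails, jsonb containment on the array itself may be simpler, since @> matches when any element of a jsonb array contains the given object. A sketch (this variant is not from the original answer):

# content -> 'media' @> '[{"resource_id": 143}]' is true when any element
# of the media array contains {"resource_id": 143}
Page.where("content -> 'media' @> ?", [{ resource_id: 143 }].to_json)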

List to Keyword List in Elixir

I have a list of results that I pulled in using Ecto. I want to end up with a keyword list that I can then use to populate a <select> inside of Phoenix, but I'm unsure how to turn this list into a keyword list like ["1": "Author #1", "2": "Author #2"].
authors = Repo.all(Author)
# How would I create ["1": "Author #1", "2": "Author #2"]
A keyword list expects atoms as keys. The good news is that you don't need a keyword list to pass to select. Here are two approaches:
Do it directly in the query:
authors = Repo.all from a in Author, select: {a.name, a.id}
Do it on the data:
authors = Repo.all Author
Enum.map(authors, fn a -> {a.name, a.id} end)
The advantage of the first one is that you will load only the data you need from the table.
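Either way, Phoenix's select helper accepts a list of {label, value} tuples directly, so the result of either approach can be passed straight in. A sketch, assuming a form f with an :author_id field:

<%= select f, :author_id, authors %>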
Select just the author names using Enum.map:
author_names = authors |> Enum.map(fn a -> a.name end)
then use Enum.zip to set up the key/value pairs:
1..Enum.count(authors) |> Enum.map(&to_string/1) |> Enum.zip(author_names)
This will produce something like:
[{"1", "Author #1"}, {"2", "Author #2"}]
If you want it to be a true keyword list, the first element of each tuple needs to be an atom, because keyword lists only use atoms as keys:
1..Enum.count(authors) |> Enum.map(fn x -> x |> to_string() |> String.to_atom() end) |> Enum.zip(author_names)
which will produce
["1": "Author #1", "2": "Author #2"]
But atoms are never garbage-collected, and converting a large number of strings to atoms isn't a best practice. Unless you know how many authors your query will return, be careful when converting the key numbers to atoms.
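For completeness, the same result in one pass with Enum.with_index (a sketch; it carries the same atom-creation caveat):

authors
|> Enum.with_index(1)
|> Enum.map(fn {a, i} -> {i |> Integer.to_string() |> String.to_atom(), a.name} end)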

F# parent/child lists to Excel

I'm still new to F#, so hopefully my question isn't too dumb. I'm creating an Excel file. I've done a lot of Excel work with C#, so that isn't a problem. I have a list of parent rows and a list of child rows. What's the best way to write that out to Excel while keeping track of the Excel row each item belongs in?
Assuming my list rowHdr is a list of Row types, I have something like this:
let setCellText (x : int) (y : int) (text : string) =
    let range = sprintf "%c%d" (char (x + int 'A')) (y + 1)
    sheet.Range(range).Value(Missing.Value) <- text

type Row =
    { row_id : string
      parent_id : string
      text : string }

let printRowHdr (rowIdx : int) (colIdx : int) (rowToPrint : Row) rows =
    setCellText colIdx rowIdx rowToPrint.text

List.iteri (fun i x -> printRowHdr (i + 1) 0 x rows) rowHdr
I still have trouble at times deciding on the best functional approach. Somewhere in the printRowHdr function I need to iterate through the child rows whose parent_id equals the parent row's id. My trouble is knowing which row in Excel each one belongs in. Maybe this is totally the wrong approach, but I appreciate any suggestions.
Thanks for any help, I sincerely appreciate it.
Thanks,
Nick
Edited to add:
Tomas - Thanks for the help. Let's say I have two lists, one with US states and another with cities. The cities list also contains the state abbreviation. I would want to loop through the states and then get the cities for each state. So it might look something like this in Excel:
Alabama
Montgomery
California
San Francisco
Nevada
Las Vegas
etc...
Given those two lists could I join them somehow into one list?
I'm not entirely sure if I understand your question - giving a concrete example with some inputs and a screenshot of the Excel sheet that you're trying to get would be quite useful.
However, the idea of using ID to model parent/child relationship (if that's what you're trying to do) does not sound like the best functional approach. I imagine you're trying to represent something like this:
First Row
Foo Bar
Foo Bar
Second Row
More Stuff Here
Some Even More Nested Stuff
This can be represented using a recursive type that contains the list of items in the current row and then a list of children rows (that themselves can contain children rows):
type Row = Row of list<string> * list<Row>
You can then process the structure using a recursive function. An example of a value (representing the first three lines from the example above) might be:
Row( ["First"; "Row"],
[ Row( ["Foo"; "Bar"], [] )
Row( ["Foo"; "Bar"], [] ) ])
EDIT: The Row type above would be useful if you had arbitrary nesting. If you have just two layers (states and cities), then you can use a list of lists: the outer list contains each state name together with a nested list of all cities in that state.
If you start with two lists, then you can use a couple of F# functions to turn the input into a list of lists:
let states = [ ("WA", "Washington"); ("CA", "California") ]
let cities = [ ("WA", "Seattle"); ("WA", "Redmond"); ("CA", "San Francisco") ]

cities
// Group cities by the state
|> Seq.groupBy (fun (id, name) -> id)
|> Seq.map (fun (id, cities) ->
    // Find the state name for this group of cities
    let _, name = states |> Seq.find (fun (st, _) -> st = id)
    // Return state name and list of city names
    name, cities |> Seq.map snd)
Then you can recursively iterate over the nested lists (in the above, they are actually sequences, so you can turn them to lists using List.ofSeq) and keep an index of the current row and column.
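That last step, iterating with a running row index, might look like this sketch (again reusing setCellText from the question; states go in column A, cities in column B):

let writeGroups groups =
    groups
    |> Seq.fold
        (fun rowIdx (state, cityNames) ->
            // state name in column A
            setCellText 0 rowIdx state
            // each city in column B on its own row; return the next free row
            cityNames
            |> Seq.fold
                (fun r city ->
                    setCellText 1 r city
                    r + 1)
                (rowIdx + 1))
        0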

Create several Mnesia tables with the same columns

I want to create the following schema in Mnesia: three tables, called t1, t2 and t3, each of them storing elements of the following record:
-record(pe, {pid, event}).
I tried creating the tables with:
Attrs = record_info(fields, pe),
Tbls = [t1, t2, t3],
[mnesia:create_table(Tbl, [{attributes, Attrs}]) || Tbl <- Tbls],
and then write some content using the following line (P and E have values):
mnesia:write(t1, #pe{pid=P, event=E}, write)
but I got a bad type error. (Relevant commands were passed to transactions, so it's not a sync problem.)
All the textbook examples of Mnesia show how to create different tables for different records. Can someone please reply with an example for creating different tables for the same record?
regarding your "DDT" for creating the tables, I don't see any mystake at first sight, just remember that using tables with names different from the record names makes you lose the "simple" commands (like mnesia:write/1) because they use element(1, RecordTuple) to retrieve table name.
When defining tables, you can use the option {record_name, RecordName} (in your case {record_name, pe}) to tell Mnesia that the first atom in the tuples stored in the table is not the table name, but the atom you passed as record_name; so for your table t1, Mnesia will expect 'pe' records when inserting or looking up records.
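Applied to your list comprehension, that is the same code with one extra option:

Attrs = record_info(fields, pe),
Tbls = [t1, t2, t3],
%% record_name makes each table store #pe{} tuples despite its own name
[mnesia:create_table(Tbl, [{attributes, Attrs}, {record_name, pe}]) || Tbl <- Tbls],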
If you want to insert a record into all tables, you might use a comprehension similar to the one used to create the tables (wrapped in a function for the Mnesia transaction context):
insert_record_in_all_tables(Pid, Event, Tables) ->
    mnesia:transaction(fun() ->
        [mnesia:write(T, #pe{pid = Pid, event = Event}, write) || T <- Tables]
    end).
Hope this helps!
