I have a list of results that I pulled in using Ecto. I want to end up with a keyword list that I can use to populate a <select> in Phoenix, but I'm unsure how to turn this list into a keyword list like ["1": "Author #1", "2": "Author #2"].
authors = Repo.all(Author)
# How would I create ["1": "Author #1", "2": "Author #2"]
A keyword list expects atoms as keys. The good news is that you don't need a keyword list to pass to select. Here are two approaches:
Do it directly in the query:
authors = Repo.all from a in Author, select: {a.name, a.id}
Do it on the data:
authors = Repo.all Author
Enum.map(authors, fn a -> {a.name, a.id} end)
The advantage of the first one is that you will load only the data you need from the table.
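Either way you end up with a plain list of {label, value} tuples, which the Phoenix select helper accepts directly. A minimal sketch (f and :author_id are illustrative names, not from the question):

```elixir
# Sketch, assuming a typical Phoenix form. `f` and `:author_id`
# are illustrative names for the form and the field being set.
authors = Repo.all(from a in Author, select: {a.name, a.id})

# In the template:
select(f, :author_id, authors)
```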
Select just the author names using Enum.map:
authorNames = authors |> Enum.map(fn a -> a.name end)
then use Enum.zip to set up the key-value pairs:
1..Enum.count(authors) |> Enum.map(fn x -> to_string(x) end) |> Enum.zip(authorNames)
this will produce something like:
[{"1", "Author #1"}, {"2", "Author #2"}]
If you want a true keyword list, the first element of each pair must be an atom, because keyword lists only use atoms as keys:
1..Enum.count(authors) |> Enum.map(fn x -> x |> to_string() |> String.to_atom() end) |> Enum.zip(authorNames)
which will produce
["1": "Author #1", "2": "Author #2"]
But I've always heard that you should manage the number of atoms you create carefully, and that converting a large number of strings to atoms isn't best practice (atoms are never garbage-collected). Unless you know how many authors your query will return, you may need to be careful when converting the key numbers to atoms.
Related
I have a table A and has_one tables B and C.
I am doing a query on A, but, depending on columns requested, I want the possibility to join and preload columns from B and/or C.
For joins, I think it's rather easy, they can be dynamically chained to the query before invoking Repo.all. But what to do with the preload? Depending on whether I need tables B and C in the query, preload should have different arguments, or shouldn't be there at all.
The solution I propose is to build the preload list dynamically and pass it to preload().
The structure of the request sent to the backend looks like this:
{
  limit: 10,
  offset: 0,
  preloads: {"research_status": null, "assets": null, "notes": null, "job_applications": {"job_position": null, "status": null, "discarded_reason_var": null}}
}
In the above request example, the User entity has associations with research_status, assets, notes and job_applications. The job_applications in turn have associations with job_position, status and discarded_reason_var.
I have created a helper function to parse the 'preloads' in the params (you can do it wherever you want; I'm doing it in the controller). Note that String.to_atom/1 on user input can grow the atom table indefinitely; String.to_existing_atom/1 is safer if the association names are already known atoms:
def parse_query_preloads(%{"preloads" => preloads} = params) do
  preloads
  |> Jason.decode!()
  |> build_preloads()
  |> (&Map.put(params, "preloads", &1)).()
end

def parse_query_preloads(params), do: params
defp build_preloads(preload_map) do
  Enum.map(preload_map, fn
    # is_map/1 already rules out nil, and returning a plain tuple
    # keeps the result a valid (nested) keyword list
    {preload, sub_preload} when is_map(sub_preload) ->
      {String.to_atom(preload), build_preloads(sub_preload)}

    {preload, nil} ->
      String.to_atom(preload)
  end)
end
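For reference, the target shape is a nested keyword list, which is what Ecto's preload accepts. For the example request above, the parsed result should look roughly like this (element order may differ, since maps are unordered):

```elixir
# Hypothetical parsed form of the example "preloads" map above:
[
  :research_status,
  :assets,
  :notes,
  job_applications: [:job_position, :status, :discarded_reason_var]
]
```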
I parse the params like this in the controller:
params =
  params
  |> format_pagination()
  |> parse_query_preloads()
And finally in my query I simply take params["preloads"] and use it in the preload function.
User
|> preload(^params["preloads"])
|> limit(^params["limit"])
|> offset(^params["offset"])
|> Repo.all()
This makes the preloads in the query dynamic. I hope this solves your problem; let me know about errors/improvements.
I'm attempting to lookup a list of stores using .where but I'm also trying to keep them sorted as the same array of ids.
i.e.
ids = ["4", "15", "10", "20", "1"]
stores = Store.published.where(id: ids)
It looks like stores is being returned in descending order of id, like
[{id: 20}, {id: 15}, {id: 10}, {id: 4}, {id: 1}]. I'd like to keep the returned stores ordered the same way the ids array is ordered. Also note that the ids in each store are ints, whereas the ids in the ids array are strings.
Because the numbers are the IDs of the records you can simply use:
ids = ["4", "15", "10", "20", "1"]
stores = Store.find(ids)
This works because find accepts a list of IDs, and when a list is provided, find returns the records in the same order as the list.
And in Ruby on Rails 7.0 you can use where in combination with in_order_of, which has the benefit of not raising an error if a record is not found, and it lets you order by other columns and their values too:
ids = ["4", "15", "10", "20", "1"]
stores = Store.where(id: ids).in_order_of(:id, ids)
When doing a where(id: ids) query, the database simply runs a query limiting the results to those IDs. It won't return the results based on the ordering of the ids you give it, but you can explicitly order results by id (or any other attribute) with where(id: ids).order('id desc') to return them in decreasing order of id. Another option is ActiveRecord's find, which accepts multiple ids and returns the results in the order you request them:
Store.published.find(ids)
However, where will silently ignore any ids that are not found, whereas find will raise if any id passed in is missing.
Figured it out. I wanted to stick with using .where() so it wouldn't throw an error if an id didn't exist. Since the store ids are integers while the ids array holds strings, convert before looking up the index. This solution worked for me:
sorted_stores = stores.sort_by { |store| ids.index(store[:id].to_s) }
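The same idea can be sketched in plain Ruby (no Rails needed) with hypothetical store hashes standing in for records. The key detail is converting the integer id to a string before calling Array#index; otherwise index returns nil and sort_by raises:

```ruby
# Hypothetical data: integer record ids, string ids in the ordering array.
ids = ["4", "15", "10", "20", "1"]
stores = [{ id: 1 }, { id: 4 }, { id: 10 }, { id: 15 }, { id: 20 }]

# Sort records by the position of their (stringified) id in the ids array.
sorted_stores = stores.sort_by { |store| ids.index(store[:id].to_s) }
# => stores ordered 4, 15, 10, 20, 1
```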
What would be equivalent flux queries for
SELECT first(column) as first, last(column) as last FROM measurement ?
SELECT last(column) - first(column) as column FROM measurement ?
(I am referring to FluxQL, the new query language developed by InfluxData)
There are first() and last() functions but, I am unable to find the example to use both in same query.
These are the documentation for FluxQL for better reference:
https://docs.influxdata.com/flux/v0.50/introduction/getting-started
https://v2.docs.influxdata.com/v2.0/query-data/get-started/
If you (or someone else who lands here) just wanted the difference between the min and max values you would want the built-in spread function.
The spread() function outputs the difference between the minimum and maximum values in a specified column.
However, you're asking for the difference between the first and last values in a stream, and there doesn't seem to be a built-in function for that (probably because most streams are expected to be dynamic in range). To achieve this, you could either write a custom aggregator function (as in a similar answer), or join two queries together and then take the difference:
data = from(bucket: "example-bucket") |> range(start: -1d) |> filter(fn: (r) => r._field == "field-you-need")
temp_earlier_number = data |> first() |> set(key: "_field", value: "delta")
temp_later_number = data |> last() |> set(key: "_field", value: "delta")
union(tables: [temp_later_number, temp_earlier_number])
|> difference()
What this does is create two tables with a field named delta and then union them together, resulting in a table with two rows: one representing the first value and the other representing the last. Then we take the difference between those two rows. If you don't want negative numbers, just be sure to subtract in the correct order for your data (or use math.abs).
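For the first question (first and last as two fields of the same result), the same union trick works without taking a difference; just rename the fields instead. A sketch using the same hypothetical bucket and field names as above:

```flux
// Sketch: emit first() and last() of the same stream as two fields.
data = from(bucket: "example-bucket")
  |> range(start: -1d)
  |> filter(fn: (r) => r._field == "field-you-need")

f = data |> first() |> set(key: "_field", value: "first")
l = data |> last() |> set(key: "_field", value: "last")

union(tables: [f, l])
```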
I use a list comprehension for transforming database rows from a list of tuples to a list of maps. One day I added a new column to my database table and forgot to change the code everywhere.
Because of that I discovered a strange effect: the database rows became an empty list.
Example of code in the erl console:
> DbRows = [{1, 1, 1}, {2, 2, 2}].
[{1,1,1},{2,2,2}]
> [#{<<"col1">> => Col1, <<"col2">> => Col2} ||{Col1, Col2} <- DbRows].
[]
Why does Erlang not raise an exception (error: no match of right hand side value) in this case?
Is this code OK, or is some other syntax preferred for this kind of data transformation?
Erlang does not raise any exception because that's valid syntax. A generator like {Col1, Col2} <- DbRows is a filter at the same time, so any element that does not match the pattern is simply skipped.
In your case I would do something like this:
-define(FIELDS, [id, some1, some2]).
DbRows = [{1, 1, 1}, {2, 2, 2}].
Prepare = fun(X) ->
maps:from_list(lists:zip(?FIELDS, tuple_to_list(X)))
end.
[ Prepare(Row) || Row <- DbRows].
And when you add a new field, you only need to add it to the macro.
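For the DbRows in the question, the comprehension above produces one map per row, with keys taken from the ?FIELDS macro:

```erlang
%% Expected result for DbRows = [{1, 1, 1}, {2, 2, 2}]:
%% [#{id => 1, some1 => 1, some2 => 1},
%%  #{id => 2, some1 => 2, some2 => 2}]
```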
I don't like this "feature", since in my experience it tends to mask bugs, but nikit's answer is correct about the reason for the result you see.
You can get the exception by moving the pattern matching to the left side of the list comprehension:
[ case Row of {Col1, Col2} -> #{<<"col1">> => Col1, <<"col2">> => Col2} end || Row <- DbRows ]
I'm still new to F# so hopefully my question isn't too dumb. I'm creating an Excel file. I've done a lot of Excel with C# so that isn't a problem. I have a list of parent rows and then a list of child rows. What's the best way to spit that into Excel and keep track of the row in Excel that it belongs in.
Assuming my list rowHdr is a list of Row types, I have something like this:
let setCellText (x : int) (y : int) (text : string) =
    let range = sprintf "%c%d" (char (x + int 'A')) (y + 1)
    sheet.Range(range).Value(Missing.Value) <- text
type Row =
    { row_id : string
      parent_id : string
      text : string }
let printRowHdr (rowIdx : int) (colIdx : int) (rowToPrint : Row) rows =
    setCellText colIdx rowIdx rowToPrint.text

List.iteri (fun i x -> printRowHdr (i+1) 0 x rows) <| rowHdr
I still have trouble thinking about what the best functional approach is at times. Somewhere in the printRowHdr function I need to iterate through the child rows for the rows where the parent_id is equal to parent row id. My trouble is knowing what row in Excel it belongs in. Maybe this is totally the wrong approach, but I appreciate any suggestions.
Thanks for any help, I sincerely appreciate it.
Thanks,
Nick
Edited to add:
Tomas - Thanks for the help. Let's say I have two lists, one with US states and another with cities. The cities list also contains the state abbreviation. I would want to loop through the states and then get the cities for each state. So it might look something like this in Excel:
Alabama
Montgomery
California
San Francisco
Nevada
Las Vegas
etc...
Given those two lists could I join them somehow into one list?
I'm not entirely sure if I understand your question - giving a concrete example with some inputs and a screenshot of the Excel sheet that you're trying to get would be quite useful.
However, the idea of using ID to model parent/child relationship (if that's what you're trying to do) does not sound like the best functional approach. I imagine you're trying to represent something like this:
First Row
Foo Bar
Foo Bar
Second Row
More Stuff Here
Some Even More Nested Stuff
This can be represented using a recursive type that contains the list of items in the current row and then a list of children rows (that themselves can contain children rows):
type Row = Row of list<string> * list<Row>
You can then process the structure using recursive function. An example of a value (representing first three lines from the example above) may be:
Row( ["First"; "Row"],
     [ Row( ["Foo"; "Bar"], [] )
       Row( ["Foo"; "Bar"], [] ) ])
EDIT: The Row type above would be useful if you had arbitrary nesting. If you have just two layers (states and cities), then you can use list of lists. The other list containing state name together with a nested list that contains all cities in that state.
If you start with two lists, then you can use a couple of F# functions to turn the input into a list of lists:
let states = [ ("WA", "Washington"); ("CA", "California") ]
let cities = [ ("WA", "Seattle"); ("WA", "Redmond"); ("CA", "San Francisco") ]
cities
// Group cities by the state
|> Seq.groupBy (fun (id, name) -> id)
|> Seq.map (fun (id, cities) ->
    // Find the state name for this group of cities
    let _, name = states |> Seq.find (fun (st, _) -> st = id)
    // Return state name and list of city names
    name, cities |> Seq.map snd)
Then you can recursively iterate over the nested lists (in the above, they are actually sequences, so you can turn them to lists using List.ofSeq) and keep an index of the current row and column.
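With the two-layer structure above, keeping track of the Excel row can be done by threading the row index through folds. A sketch, assuming the setCellText helper from the question (writeGroups is an illustrative name):

```fsharp
// Sketch: write each state in column 0 and its cities in column 1,
// threading the Excel row index through the folds so each group
// starts on the first free row. Returns the next unused row index.
let writeGroups (groups : seq<string * seq<string>>) =
    groups
    |> Seq.fold (fun rowIdx (state, cities) ->
        setCellText 0 rowIdx state
        cities
        |> Seq.fold (fun idx city ->
            setCellText 1 idx city
            idx + 1) (rowIdx + 1)) 0
```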