I am executing a query and getting the following data back from the database as an array of rows (a Mysql2 result object):
+-----------+---------------+---------------+------+------+---------------+
| build | platform_type | category_name | pass | fail | indeterminate |
+-----------+---------------+---------------+------+------+---------------+
| 10.0.1.50 | 8k | UMTS | 10 | 2 | 5 |
| 10.0.1.50 | 8k | UMTS | 10 | 2 | 5 |
| 10.0.1.50 | 8k | IP | 10 | 2 | 5 |
| 10.0.1.50 | 8k | IP | 14 | 1 | 3 |
| 10.0.1.50 | 9k | IP | 14 | 1 | 3 |
| 10.0.1.50 | 9k | IP | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | Stability | 9 | 4 | 0 |
| 10.0.1.50 | 9k | Stability | 15 | 1 | 0 |
I want to display it in a table on my UI, something like this:
+-----------+---------------+---------------+------+------+---------------+
| build | platform_type | category_name | pass | fail | indeterminate |
+-----------+---------------+---------------+------+------+---------------+
| | | UMTS | 20 | 4 | 10 |
| | 8k |---------------------------------------------|
| | | IP | 24 | 3 | 8 |
| |---------------|---------------------------------------------|
| 10.0.1.50 | | IP | 26 | 2 | 4 |
| | |---------------------------------------------|
| | 9k | UMTS | 36 | 3 | 3 |
| | |---------------------------------------------|
| | | Stability | 24 | 5 | 0 |
---------------------------------------------------------------------------
I did try using a hash to find the unique platform types for each build, but as I am very new to Ruby, I am having trouble using the hash properly. I would appreciate it if someone could help me parse the data.
Assuming you have an array of arrays:
@data = sql_results.group_by(&:first).map do |b, bl|
  platforms = bl.group_by(&:second).map do |p, pl|
    [p, pl.group_by { |r| r[2] }.map { |c, cl| [c, *cl.map { |r| r[3..-1] }.transpose.map { |v| v.inject(:+) }] }]
  end
  [b, platforms.sort_by(&:first)]
end.sort_by(&:first)
Here is how the logic breaks down:
Group the rows by the first column (build). This returns a hash whose keys are the build names and whose values are arrays of rows.
Group each build's rows by the second column (platform type).
Within each platform, group the rows by category name and sum the remaining columns (pass, fail, indeterminate).
Sort the platform lists by platform type.
Sort the build lists by build name.
The resultant structure would look like this:
[
  [
    "10.0.1.50", [
      [
        "8k", [
          ["UMTS", 20, 4, 10],
          ["IP", 24, 3, 8]
        ]
      ],
      [
        "9k", [
          ["IP", 26, 2, 4],
          ["UMTS", 36, 3, 3],
          ["Stability", 24, 5, 0]
        ]
      ]
    ]
  ]
]
You can use this in your view; for example, in Haml:
%table
  %tr
    - %w(build platform_type category_name pass fail indeterminate).each do |name|
      %th= name
  - @data.each do |build, platform_list|
    %tr
      %td= build
      %td{:colspan => 5}
        %table
          - platform_list.each do |platform, category_list|
            %tr
              %td= platform
              %td{:colspan => 4}
                %table
                  - category_list.each do |row|
                    %tr
                      - row.each do |attr|
                        %td= attr
If you use an ActiveRecord model, here is what you can do:
class Build < ActiveRecord::Base
  def self.builds_by_platform
    reply = Hash.new { |h, k| h[k] = Hash.new { |h2, k2| h2[k2] = [] } }
    Build.order("build ASC, platform_type ASC").each do |row|
      reply[row.build][row.platform_type] << row
    end
    reply.map { |b, bh| [b, bh.sort_by(&:first)] }.sort_by(&:first)
  end
end
In your controller you can then build the normalized data as:
@report_list = Build.builds_by_platform
You can use the @report_list variable for rendering the table.
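As an aside, since the data already comes from MySQL, you could also push the summing into the query itself and keep the Ruby side to just the nesting. A minimal sketch, assuming the rows live in a table named test_results (the table name is an assumption):
-- Sketch only: let MySQL aggregate per build/platform/category (test_results is an assumed name).
SELECT build,
       platform_type,
       category_name,
       SUM(pass)          AS pass,
       SUM(fail)          AS fail,
       SUM(indeterminate) AS indeterminate
FROM test_results
GROUP BY build, platform_type, category_name
ORDER BY build, platform_type, category_name;
Each result row then maps directly to one category line in the nested structure above.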
I have a Cypher query that shows the following output:
+----------------
| usid | count |
+----------------
| "000" | 1 |
| "000" | 0 |
| "000" | 0 |
| "001" | 1 |
| "001" | 1 |
| "001" | 0 |
| "002" | 2 |
| "002" | 2 |
| "002" | 0 |
| "003" | 4 |
| "003" | 2 |
| "003" | 2 |
| "004" | 4 |
| "004" | 4 |
| "004" | 4 |
+----------------
How can I get the result below with the condition SUM(count) <= 9?
+----------------
| usid | count |
+----------------
| "000" | 1 |
| "001" | 2 |
| "002" | 4 |
| "003" | 8 |
+----------------
Note: I used the query below to get the data in the first table.
MATCH (us:USER)
WITH us
WHERE us.count <= 4
RETURN us.id as usid, us.count as count;
I don't know how you get your original data, so I will just use a WITH clause and assume the data is there:
// original data
WITH usid, count
// aggregate and filter
WITH usid, sum(count) as new_count
WHERE new_count <= 9
RETURN usid, new_count
Based on the updated question, the new query would look like:
MATCH (us:USER)
WHERE us.count <= 4
WITH us.id as usid, sum(us.count) as new_count
WHERE new_count <= 9
RETURN usid, new_count
How do I combine DISTINCT and MAX when joining the tables below?
Table_details_usage
UID | VE_NO | START_MILEAGE | END_MILEAGE
------------------------------------------------
1 | ASD | 410000 | 410500
2 | JWQ | 212000 | 212350
3 | WYS | 521000 | 521150
4 | JWQ | 212360 | 212400
5 | ASD | 410520 | 410600
Table_service_schedule
SID | VE_NO | SV_ONMILEAGE | SV_NEXTMILEAGE
------------------------------------------------
1 | ASD | 400010 | 410010
2 | JWQ | 212120 | 222120
3 | WYS | 511950 | 521950
4 | JWQ | 212300 | 222300
5 | ASD | 410510 | 420510
How do I get the result below (only the max values)? That is, for each vehicle, get the max SV_NEXTMILEAGE from Table_service_schedule and the max END_MILEAGE from Table_details_usage:
SID | VE_NO | SV_NEXTMILEAGE | END_MILEAGE
--------------------------------------------
5 | ASD | 420510 | 410600
4 | JWQ | 222300 | 212400
3 | WYS | 521950 | 521150
Something along the lines of:
SELECT
    SID,
    VE_NO,
    SV_NEXTMILEAGE,
    (SELECT MAX(END_MILEAGE) FROM Table_details_usage d WHERE d.VE_NO = s.VE_NO) END_MILEAGE
FROM Table_service_schedule s
WHERE SID = (SELECT MAX(SID) FROM Table_service_schedule s2 WHERE s2.VE_NO = s.VE_NO)
You would probably also need to change the direct SV_NEXTMILEAGE value to a MAX() if the IDs aren't in order.
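For example, a minimal sketch of that variant, assuming you want the per-vehicle maxima regardless of which SID they belong to (same table and column names as above):
-- Sketch only: take per-vehicle maxima directly instead of relying on the latest SID.
SELECT
    s.VE_NO,
    MAX(s.SV_NEXTMILEAGE) AS SV_NEXTMILEAGE,
    (SELECT MAX(d.END_MILEAGE)
       FROM Table_details_usage d
      WHERE d.VE_NO = s.VE_NO) AS END_MILEAGE
FROM Table_service_schedule s
GROUP BY s.VE_NO
Note that SID is dropped here, because the row holding the maximum SV_NEXTMILEAGE is no longer necessarily the one with the highest SID.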
I have a tree structure like node(1)->node(2)->node(3), and a name property used to retrieve a node.
Given a node, say node(3), I want to retrieve node(1).
Query tried:
MATCH (p:Node)-[:HAS*]->(c:Node) WHERE c.name = "node 3" RETURN p LIMIT 5
But I am not able to get node 1.
Your query does not return only "node 1": it matches every ancestor of node 3, so "node 1" should be among the results. It is possible, however, to filter the matches so that only the root is kept:
MATCH (c:Node {name: "node 3"})<-[:HAS*0..]-(p:Node)
// The root does not have any incoming relationship
WHERE NOT (p)<-[:HAS]-()
RETURN p
Note the use of the 0 lower bound on the path length, which covers all cases, including the one where the start node is itself the root.
Fun fact: even if you have an index on :Node(name), it won't be used automatically (unless you're using Neo4j 3.1, where it seems to be fixed since 3.1 Beta2 at least); you have to specify it explicitly:
MATCH (c:Node {name: "node 3"})<-[:HAS*0..]-(p:Node)
USING INDEX c:Node(name)
WHERE NOT (p)<-[:HAS]-()
RETURN p
Using PROFILE on the first query (with a numerical id property instead of name):
+-----------------------+----------------+------+---------+-------------------------+----------------------+
| Operator | Estimated Rows | Rows | DB Hits | Variables | Other |
+-----------------------+----------------+------+---------+-------------------------+----------------------+
| +ProduceResults | 0 | 1 | 0 | p | p |
| | +----------------+------+---------+-------------------------+----------------------+
| +AntiSemiApply | 0 | 1 | 0 | anon[23], c -- p | |
| |\ +----------------+------+---------+-------------------------+----------------------+
| | +Expand(All) | 1 | 0 | 3 | anon[58], anon[67] -- p | (p)<-[:HAS]-() |
| | | +----------------+------+---------+-------------------------+----------------------+
| | +Argument | 1 | 3 | 0 | p | |
| | +----------------+------+---------+-------------------------+----------------------+
| +Filter | 1 | 3 | 3 | anon[23], c, p | p:Node |
| | +----------------+------+---------+-------------------------+----------------------+
| +VarLengthExpand(All) | 1 | 3 | 5 | anon[23], p -- c | (c)<-[:HAS*]-(p) |
| | +----------------+------+---------+-------------------------+----------------------+
| +Filter | 1 | 1 | 3 | c | c.id == { AUTOINT0} |
| | +----------------+------+---------+-------------------------+----------------------+
| +NodeByLabelScan | 3 | 3 | 4 | c | :Node |
+-----------------------+----------------+------+---------+-------------------------+----------------------+
Total database accesses: 18
and on the second one:
+-----------------------+----------------+------+---------+-------------------------+------------------+
| Operator | Estimated Rows | Rows | DB Hits | Variables | Other |
+-----------------------+----------------+------+---------+-------------------------+------------------+
| +ProduceResults | 0 | 1 | 0 | p | p |
| | +----------------+------+---------+-------------------------+------------------+
| +AntiSemiApply | 0 | 1 | 0 | anon[23], c -- p | |
| |\ +----------------+------+---------+-------------------------+------------------+
| | +Expand(All) | 1 | 0 | 3 | anon[81], anon[90] -- p | (p)<-[:HAS]-() |
| | | +----------------+------+---------+-------------------------+------------------+
| | +Argument | 1 | 3 | 0 | p | |
| | +----------------+------+---------+-------------------------+------------------+
| +Filter | 1 | 3 | 3 | anon[23], c, p | p:Node |
| | +----------------+------+---------+-------------------------+------------------+
| +VarLengthExpand(All) | 1 | 3 | 5 | anon[23], p -- c | (c)<-[:HAS*]-(p) |
| | +----------------+------+---------+-------------------------+------------------+
| +NodeUniqueIndexSeek | 1 | 1 | 2 | c | :Node(id) |
+-----------------------+----------------+------+---------+-------------------------+------------------+
Total database accesses: 13
What should one specify when creating a GraphLab recommender model so that items a user already owns are not recommended to them again? Can this be done directly by specifying certain parameters, or do I need to write a recommender from scratch? The data would look something like this:
| user_id | item_id | othercolumns |
|:-----------|------------:|:------------:|
| 1 | 21 | This |
| 2 | 22 | column |
| 1 | 23 | will |
| 3 | 24 | hold |
| 2 | 25 | other |
| 1 | 26 | values |
Since items 21, 23, and 26 are already owned by user 1, they should not be recommended to him.
This behaviour is controlled by the exclude_known parameter of the recommender.recommend method (doc).
exclude_known : bool, optional
By default, all user-item interactions previously seen in the training
data, or in any new data provided using new_observation_data, are
excluded from the recommendations. Passing in exclude_known = False
overrides this behavior.
Example
>>> import graphlab as gl
>>> sf = gl.SFrame({'user_id':[1,2,1,3,2,1], 'item_id':[21,22,23,24,25,26]})
>>> print sf
+---------+---------+
| item_id | user_id |
+---------+---------+
| 21 | 1 |
| 22 | 2 |
| 23 | 1 |
| 24 | 3 |
| 25 | 2 |
| 26 | 1 |
+---------+---------+
[6 rows x 2 columns]
>>> rec_model = gl.recommender.create(sf)
>>> # we recommend items not owned by user
>>> rec_wo_own_item = rec_model.recommend(sf['user_id'].unique())
>>> rec_wo_own_item.sort('user_id').print_rows(100)
+---------+---------+----------------+------+
| user_id | item_id | score | rank |
+---------+---------+----------------+------+
| 1 | 22 | 0.0 | 1 |
| 1 | 24 | 0.0 | 2 |
| 1 | 25 | 0.0 | 3 |
| 2 | 21 | 0.0 | 1 |
| 2 | 23 | 0.0 | 2 |
| 2 | 24 | 0.0 | 3 |
| 2 | 26 | 0.0 | 4 |
| 3 | 21 | 0.333333333333 | 1 |
| 3 | 23 | 0.333333333333 | 2 |
| 3 | 26 | 0.333333333333 | 3 |
| 3 | 22 | 0.166666666667 | 4 |
| 3 | 25 | 0.166666666667 | 5 |
+---------+---------+----------------+------+
[12 rows x 4 columns]
>>> # we recommend items owned by user
>>> rec_w_own_item = rec_model.recommend(sf['user_id'].unique(), exclude_known=False)
>>> rec_w_own_item.sort('user_id').print_rows(100)
+---------+---------+----------------+------+
| user_id | item_id | score | rank |
+---------+---------+----------------+------+
| 1 | 21 | 0.666666666667 | 1 |
| 1 | 23 | 0.666666666667 | 2 |
| 1 | 26 | 0.666666666667 | 3 |
| 1 | 22 | 0.0 | 4 |
| 1 | 24 | 0.0 | 5 |
| 1 | 25 | 0.0 | 6 |
| 2 | 26 | 0.0 | 6 |
| 2 | 24 | 0.0 | 5 |
| 2 | 23 | 0.0 | 4 |
| 2 | 21 | 0.0 | 3 |
| 2 | 25 | 0.5 | 2 |
| 2 | 22 | 0.5 | 1 |
| 3 | 24 | 0.0 | 6 |
| 3 | 25 | 0.166666666667 | 5 |
| 3 | 22 | 0.166666666667 | 4 |
| 3 | 26 | 0.333333333333 | 3 |
| 3 | 23 | 0.333333333333 | 2 |
| 3 | 21 | 0.333333333333 | 1 |
+---------+---------+----------------+------+
[18 rows x 4 columns]
>>> # we add recommended items not owned by user to the original SFrame
>>> rec = rec_wo_own_item.groupby('user_id', {'reco':gl.aggregate.CONCAT('item_id')})
>>> sf = sf.join(rec, 'user_id', 'left')
>>> print sf
+---------+---------+----------------------+
| item_id | user_id | reco |
+---------+---------+----------------------+
| 21 | 1 | [24, 25, 22] |
| 22 | 2 | [24, 26, 23, 21] |
| 23 | 1 | [24, 25, 22] |
| 24 | 3 | [21, 23, 26, 25, 22] |
| 25 | 2 | [24, 26, 23, 21] |
| 26 | 1 | [24, 25, 22] |
+---------+---------+----------------------+
[6 rows x 3 columns]
Is it possible to do a union of two queries (on the same entity) in Core Data? In SQL terms, if the entity is called t, consider that t has the following data:
+------+------+------+
| x | y | z |
+------+------+------+
| 1 | 11 | 2 |
| 1 | 12 | 3 |
| 2 | 11 | 1 |
| 3 | 12 | 3 |
Then I am trying to run the equivalent of the following query (using Core Data, not SQLite):
select x, y, sum(z)
from t
group by 1, 2
union
select x, 1 as y, sum(z)
from t
group by 1, 2
order by x, y, 1
;
+------+------+--------+
| x | y | sum(z) |
+------+------+--------+
| 1 | 1 | 5 |
| 1 | 11 | 2 |
| 1 | 12 | 3 |
| 2 | 1 | 1 |
| 2 | 11 | 1 |
| 3 | 1 | 3 |
| 3 | 12 | 3 |
+------+------+--------+
7 rows in set (0.00 sec)
Is it possible?
Thanks!