How to represent a sorted (order-by) query in Falcor?

Suppose the model is structured as
{
  events: [
    {
      date: '2016-06-01',
      name: 'Children Day'
    },
    {
      date: '2016-01-01',
      name: 'New Year Day'
    },
    {
      date: '2016-12-25',
      name: 'Christmas'
    }
  ]
}
There could be a lot of events in our storage. From the client side, we want to issue a query that gets 10 events ordered by date, ascending.
How can this kind of query be represented in Falcor?

Here's what I would do: first, promote your events to entities, i.e. give them ids:
id | date       | name
1  | 2016-06-01 | Children Day
2  | 2016-01-01 | New Year Day
3  | 2016-12-25 | Christmas
Then provide a route that serves these events by id:
route: 'eventsById[{integers:ids}]["date","name"]'
which returns the original data. Now you can create a new route for orderings:
route: 'orderedEvents["date","name"]["asc","desc"][{ranges:range}]'
which returns references into the eventsById route. This way your client could even request the same data sorted in different ways within the same request!
model.get(
  "orderedEvents.date.asc[0..2]",
  "orderedEvents.date.desc[0..2]");
which would return
{
  'eventsById': {
    1: { 'date': '2016-06-01', 'name': 'Children Day' },
    2: { 'date': '2016-01-01', 'name': 'New Year Day' },
    3: { 'date': '2016-12-25', 'name': 'Christmas' }
  },
  'orderedEvents': {
    'date': {
      'asc': [
        { '$type': 'ref', 'value': ['eventsById', 2] },
        { '$type': 'ref', 'value': ['eventsById', 1] },
        { '$type': 'ref', 'value': ['eventsById', 3] }
      ],
      'desc': [
        { '$type': 'ref', 'value': ['eventsById', 3] },
        { '$type': 'ref', 'value': ['eventsById', 1] },
        { '$type': 'ref', 'value': ['eventsById', 2] }
      ]
    }
  }
}
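For completeness, here is a minimal sketch of what the two server-side routes might look like with falcor-router. This is only one way to wire it up; getEventById and getSortedEventIds are hypothetical data-access helpers (both returning promises), not part of Falcor itself.
var Router = require('falcor-router');
var jsonGraph = require('falcor-json-graph');

var router = new Router([
  {
    // Serves the raw entities by id.
    route: 'eventsById[{integers:ids}]["date","name"]',
    get: function (pathSet) {
      var fields = pathSet[2];
      return Promise.all(pathSet.ids.map(function (id) {
        // getEventById is a hypothetical helper returning { date, name }
        return getEventById(id).then(function (event) {
          return fields.map(function (field) {
            return { path: ['eventsById', id, field], value: event[field] };
          });
        });
      })).then(function (nested) {
        return [].concat.apply([], nested);
      });
    }
  },
  {
    // Serves orderings as refs into eventsById.
    route: 'orderedEvents["date","name"]["asc","desc"][{ranges:indices}]',
    get: function (pathSet) {
      var work = [];
      pathSet[1].forEach(function (field) {   // requested sort fields
        pathSet[2].forEach(function (dir) {   // requested directions
          // getSortedEventIds is a hypothetical helper returning event ids
          // sorted by `field` in direction `dir`
          work.push(getSortedEventIds(field, dir).then(function (ids) {
            var pathValues = [];
            pathSet.indices.forEach(function (range) {
              for (var i = range.from; i <= range.to; i++) {
                pathValues.push({
                  path: ['orderedEvents', field, dir, i],
                  value: jsonGraph.ref(['eventsById', ids[i]])
                });
              }
            });
            return pathValues;
          }));
        });
      });
      return Promise.all(work).then(function (nested) {
        return [].concat.apply([], nested);
      });
    }
  }
]);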

Related

Returning tasks for each session in Neo4J (list of lists)

I'm using Neo4J for a mentor platform I'm building and I'm stumped by the following:
Given the following nodes and properties:
Mentor{ login, ... }
Mentee{ login, ... }
Session{ notes, ... }
Task{ complete, name }
And the following associations:
// each session has 1 mentor and 1 mentee
(Mentor)<-[:HAS]-(Session)-[:HAS]->(Mentee)
// each task is FOR one person (a Mentor or Mentee)
// each task is FROM one Session
(Session)<-[:FROM]-(Task)-[:FOR]->(Mentor or Mentee)
What's the best way to query this data to produce an API response in the following shape? Similarly, is this a reasonable way to model the data? Maybe something with coalesce?
{
  mentor: { login: '...', /* ... */ },
  mentee: { login: '...', /* ... */ },
  sessions: [
    {
      notes,
      /* ... */
      mentorTasks: [{ id, name, complete }],
      menteeTasks: [{ id, name, complete }]
    }
  ]
}
I first tried:
MATCH (mentor:Mentor{ github: "mentorlogin" })
MATCH (session:Session)-[:HAS]->(mentee:Mentee{ github: "menteelogin" })
OPTIONAL MATCH (mentor)<-[:FOR]-(mentorTask:Task)-[:FROM]->(session)
OPTIONAL MATCH (mentee)<-[:FOR]-(menteeTask:Task)-[:FROM]->(session)
RETURN
mentor,
mentee,
session,
COLLECT(DISTINCT mentorTask) as mentorTasks,
COLLECT(DISTINCT menteeTask) as menteeTasks
ORDER BY session.date DESC
But that's janky: the mentor and mentee data are returned many times, and they disappear entirely if the mentee has no sessions.
This seems more appropriate, but I'm not sure how to fold in the tasks:
MATCH (mentor:Mentor{ github: "mentorlogin" })
MATCH (mentee:Mentee{ github: "menteelogin" })
OPTIONAL MATCH (session:Session)-[:HAS]->(mentee)
OPTIONAL MATCH (mentor)<-[:FOR]-(mentorTask:Task)-[:FROM]->(session)
OPTIONAL MATCH (mentee)<-[:FOR]-(menteeTask:Task)-[:FROM]->(session)
RETURN
mentor,
mentee,
COLLECT(DISTINCT session) as sessions
EDIT: Working, thanks to a prompt response from Graphileon. I made a few modifications:
changed the MATCH statements so they return the mentor and mentee even if there are no sessions
sorted sessions by date (most recent first)
returned all node properties, instead of whitelisting
MATCH (mentor:Mentor{ github: $mentorGithub })
MATCH (mentee:Mentee{ github: $menteeGithub })
RETURN DISTINCT {
  mentor: mentor{ .*, id: toString(id(mentor)) },
  mentee: mentee{ .*, id: toString(id(mentee)) },
  sessions: apoc.coll.sortMaps([
    (mentor:Mentor)<-[:HAS]-(session:Session)-[:HAS]->(mentee:Mentee) |
      session{
        .*,
        id: toString(id(session)),
        mentorTasks: [
          (session)<-[:FROM]-(task:Task)-[:FOR]->(mentor) |
            task{ .*, id: toString(id(task)) }
        ],
        menteeTasks: [
          (session)<-[:FROM]-(task:Task)-[:FOR]->(mentee) |
            task{ .*, id: toString(id(task)) }
        ]
      }
  ], "date")
} AS result
Presuming you have data shaped as described above, you can do something along these lines, with nested pattern comprehensions:
MATCH (mentor:Mentor)<-[:HAS]-(:Session)-[:HAS]->(mentee:Mentee)
RETURN DISTINCT {
  mentor: { id: id(mentor), name: mentor.name },
  mentee: { id: id(mentee), name: mentee.name },
  sessions: [(mentor:Mentor)<-[:HAS]-(session:Session)-[:HAS]->(mentee:Mentee) |
    {
      id: id(session),
      name: session.name,
      mentorTasks: [(session)<-[:FROM]-(task:Task)-[:FOR]->(mentor) |
        { id: id(task), name: task.name }
      ],
      menteeTasks: [(session)<-[:FROM]-(task:Task)-[:FOR]->(mentee) |
        { id: id(task), name: task.name }
      ]
    }
  ]
} AS myResult
returning
{
  "mentor": {
    "name": "Mentor Jill",
    "id": 211
  },
  "sessions": [
    {
      "menteeTasks": [
        { "id": 223, "name": "Task D" },
        { "id": 220, "name": "Task C" },
        { "id": 219, "name": "Task B" }
      ],
      "name": "Session 1",
      "id": 208,
      "mentorTasks": [
        { "id": 213, "name": "Task A" }
      ]
    }
  ],
  "mentee": {
    "name": "Mentee Joe",
    "id": 212
  }
}
Note that by using pattern comprehensions you can avoid the OPTIONAL MATCHes: if a pattern comprehension does not find anything, it simply returns [].

How can I get duplicate nodes in Cypher (Neo4j)?

I have a question: how can I get nodes with the same property (for example, the same name property)? In SQL I would use GROUP BY, but in Cypher I have no idea what to use to group them. Below I added my simple input and an example output to visualize my problem.
[
  { id: 1, name: 'name1' },
  { id: 2, name: 'name2' },
  { id: 3, name: 'name2' },
  { id: 4, name: 'name3' },
  { id: 5, name: 'name3' },
  { id: 6, name: 'name3' },
  { id: 7, name: 'name4' },
  { id: 8, name: 'name5' },
  { id: 9, name: 'name6' },
  { id: 10, name: 'name6' }
]
My solution should give me this:
[
  { count: 2, name: 'name2' },
  { count: 3, name: 'name3' },
  { count: 2, name: 'name6' }
]
Thank you in advance for your help
In Cypher, when you aggregate (in the straightforward cases), the grouping key is formed from the non-aggregation terms.
If nodes have already been created from the input (let's say they're using the label :Entry), then we can get the output you want with this:
MATCH (e:Entry)
RETURN e.name as name, count(e) as count
The grouping key here is name, which becomes distinct as the result of the aggregation. The result is a row for each distinct name value and the count of nodes with that name.
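One detail worth noting: your expected output keeps only the names that occur more than once. A sketch of how you might filter the aggregated rows, using WITH ... WHERE on the count before returning:
MATCH (e:Entry)
WITH e.name AS name, count(e) AS count
WHERE count > 1
RETURN name, count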

Combining results of two tables in mongoid/mongo

Hi guys, what would be the best way to combine the results of two Mongoid queries?
My issue is that I would like to know active users. A user can send a letter and a notification; these are separate collections, and a user who sends either a letter or a notification is considered active. What I want to know is how many active users there were per month.
Right now what I can think of is doing this:
Letter.collection.aggregate([
  { '$match': {}.merge(opts) },
  { '$sort': { 'created_at': 1 } },
  {
    '$group': {
      _id: '$customer_id',
      first_notif_sent: {
        '$first': {
          'day': { '$dayOfMonth': '$created_at' },
          'month': { '$month': '$created_at' },
          'year': { '$year': '$created_at' }
        }
      }
    }
  }
])
Notification.collection.aggregate([
  { '$match': {}.merge(opts) },
  { '$sort': { 'created_at': 1 } },
  {
    '$group': {
      _id: '$customer_id',
      first_notif_sent: {
        '$first': {
          'day': { '$dayOfMonth': '$created_at' },
          'month': { '$month': '$created_at' },
          'year': { '$year': '$created_at' }
        }
      }
    }
  }
])
What I am looking for is to get the minimum of the dates and then combine the results and get the count. Right now I can get the results and loop over each of them and create a new list. But I wanted to know if there is a way to do it in mongo directly.
EDIT
For letters
def self.get_active(tenant_id)
  map = %{
    function() {
      emit(this.customer_id, new Date(this.created_at))
    }
  }
  reduce = %{
    function(key, values) {
      return new Date(Math.min.apply(null, values))
    }
  }
  where(tenant_id: tenant_id).map_reduce(map, reduce).out(reduce: "#{tenant_id}_letter_notification")
end
Notifications
def self.get_active(tenant_id)
  map = %{
    function() {
      emit(this.customer_id, new Date(this.updated_at))
    }
  }
  reduce = %{
    function(key, values) {
      return new Date(Math.min.apply(null, values))
    }
  }
  where(tenant_id: tenant_id, transferred: true).map_reduce(map, reduce).out(reduce: "#{tenant_id}_outgoing_letter_standing_order_balance")
end
This is what I am thinking of going with; one of the reasons is that $lookup does not work with my version of Mongo.
The customer created a new notification, or a new letter, and I would like to get the first created_at of either.
Let's address this first, as a foundation. Given example document schemas as below:
Document schema in Letter collection:
{
  _id: <ObjectId>,
  customer_id: <integer>,
  created_at: <date>
}
And, document schema in Notification collection:
{
  _id: <ObjectId>,
  customer_id: <integer>,
  created_at: <date>
}
You can utilise the aggregation pipeline stage $lookup to join the two collections. For example, using the mongo shell:
db.letter.aggregate([
  { "$group": { "_id": "$customer_id", "tmp1": { "$max": "$created_at" } } },
  { "$lookup": {
      "from": "notification",
      "localField": "_id",
      "foreignField": "customer_id",
      "as": "notifications"
  }},
  { "$project": {
      "customer_id": "$_id",
      "_id": 0,
      "latest_letter": "$tmp1",
      "latest_notification": { "$max": "$notifications.created_at" }
  }},
  { "$addFields": {
      "latest": {
        "$cond": [
          { "$gt": [ "$latest_letter", "$latest_notification" ] },
          "$latest_letter",
          "$latest_notification"
        ]
      }
  }},
  { "$sort": { "latest": -1 } }
], { cursor: { batchSize: 100 } })
The output of the above aggregation pipeline is a list of customers, sorted by the most recent created_at from either Letter or Notification. Example output documents:
{
  "customer_id": 0,
  "latest_letter": ISODate("2017-12-19T07:00:08.818Z"),
  "latest_notification": ISODate("2018-01-26T13:43:56.353Z"),
  "latest": ISODate("2018-01-26T13:43:56.353Z")
},
{
  "customer_id": 4,
  "latest_letter": ISODate("2018-01-04T18:55:26.264Z"),
  "latest_notification": ISODate("2018-01-25T02:05:19.035Z"),
  "latest": ISODate("2018-01-25T02:05:19.035Z")
}, ...
What I want to know is how many active users were there per month
To achieve this, you can just replace the last stage ($sort) of the above aggregation pipeline with $group. For example:
db.letter.aggregate([
  { "$group": { "_id": "$customer_id", "tmp1": { "$max": "$created_at" } } },
  { "$lookup": {
      "from": "notification",
      "localField": "_id",
      "foreignField": "customer_id",
      "as": "notifications"
  }},
  { "$project": {
      "customer_id": "$_id",
      "_id": 0,
      "latest_letter": "$tmp1",
      "latest_notification": { "$max": "$notifications.created_at" }
  }},
  { "$addFields": {
      "latest": {
        "$cond": [
          { "$gt": [ "$latest_letter", "$latest_notification" ] },
          "$latest_letter",
          "$latest_notification"
        ]
      }
  }},
  { "$group": {
      "_id": { "month": { "$month": "$latest" }, "year": { "$year": "$latest" } },
      "active_users": { "$sum": 1 }
  }}
], { cursor: { batchSize: 10 } })
Where the example output would be as below:
{
  "_id": { "month": 10, "year": 2017 },
  "active_users": 9
},
{
  "_id": { "month": 1, "year": 2018 },
  "active_users": 18
}, ...

Ordering a Rails DataTable column by another value

I am building a helpdesk application. I have a model called TicketDetail, with a table that uses DataTables to get its data via JSON, so that the time a ticket has been open can be recalculated periodically. The time taken is formatted by a simple helper into the format "dd:hh:mm", but it should be sorted by the time (stored as a decimal) multiplied by a weighting. Here's the DataTables definition:
var table = $('#ticket_details').DataTable({
  order: [[ 8, "desc" ], [ 9, "desc" ], [ 2, "asc" ]],
  stateSave: true,
  deferRender: true,
  ajax: $('#ticket_details').data('source'),
  "columns": [
    { "data": "reference_number" },
    { "data": "location" },
    { "data": "title" },
    { "data": "parent", className: "hidden-md hidden-sm hidden-xs" },
    { "data": { _: "time_display.time", sort: "time_display.decimal_time" } },
    { "data": "created_by", className: "hidden-md hidden-sm hidden-xs" }
  ]
});
setInterval(function () {
  table.ajax.reload(null, false);
}, 60000);
Here's a simplified sample record, where the ticket has been open 3 days and 6 hours, with a weighting of x2 (i.e. 3.25 * 2 = 6.5):
{
  data: [
    {
      id: 140,
      parent: null,
      title: "[",
      location: "Bond St",
      ticket_sla: "16 Hours",
      reference_number: "1606210001",
      ticket_sla_weighting: 2,
      time_display: {
        time: "<span class=\"label label-danger\">03:06:00</span>",
        decimal_time: 6.5
      }
    }
  ]
}
The problem is that the DataTable sorts correctly if I display the decimal_time, but as soon as I put the formatted time (with its span markup) in the cell, it sorts simply by the number of days, immediately to the left of the first colon. (So 03:06:00 and 03:18:00 would not get sorted properly.)
For date/time sorting in DataTables you need to use its sorting plug-ins.
For example, you need to include these JS files:
//cdnjs.cloudflare.com/ajax/libs/moment.js/2.8.4/moment.min.js
//cdn.datatables.net/plug-ins/1.10.12/sorting/datetime-moment.js
and then, in your jQuery, use this:
$.fn.dataTable.moment( 'HH:mm MMM D, YY' ); // pass your date/time format as the param
For deeper reference, please check:
Sorting Plugins
Ultimate date / time sorting plugin
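Alternatively, since your JSON already carries a numeric decimal_time, you could keep the HTML for display and sort on the number via a render function with orthogonal data. A sketch, assuming the time_display shape shown in your sample record:
{
  "data": "time_display",
  "render": function (data, type) {
    // sorting and type detection use the raw number; display gets the HTML
    return (type === 'sort' || type === 'type') ? data.decimal_time : data.time;
  }
}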

Mongodb querying for aggregation with count of multiple values

I am using Mongoid in one of my Rails apps for MongoDB:
class Tracking
  include Mongoid::Document
  include Mongoid::Timestamps
  field :article_id, type: String
  field :action, type: String # like | comment
  field :actor_gender, type: String # male | female | unknown
  field :city, type: String
  field :state, type: String
  field :country, type: String
end
Here I want to grab the records in this tabular format:
article_id | state      | male_like_count | female_like_count | unknown_gender_like_count | date
juhkwu2367 | California | 21              | 7                 | 1                         | 11-20-2015
juhkwu2367 | New York   | 62              | 23                | 3                         | 11-20-2015
juhkwu2367 | Vermont    | 48              | 27                | 3                         | 11-20-2015
juhkwu2367 | California | 21              | 7                 | 1                         | 11-21-2015
juhkwu2367 | New York   | 62              | 23                | 3                         | 11-21-2015
juhkwu2367 | Vermont    | 48              | 27                | 3                         | 11-21-2015
Here the input for the query would be:
article_id
country
date range (from and to)
action (is `like` in this scenario)
sort_by [ date | state | male_like_count | female_like_count ]
This is what I am trying, referring to an example at https://docs.mongodb.org/v3.0/reference/operator/aggregation/group/:
db.trackings.aggregate(
  [
    {
      $group: {
        _id: { month: { $month: "$created_at" }, day: { $dayOfMonth: "$created_at" }, year: { $year: "$created_at" }, article_id: "$article_id", state: "$state", country: "$country" },
        article_id: "$article_id",
        country: ??,
        state: "$state",
        male_like_count: { $sum: ?? },
        female_like_count: { $sum: ?? },
        unknown_gender_like_count: { $sum: ?? },
        date: ??
      }
    }
  ]
)
So what should I put in place of ?? to compare the counts by gender, and how do I add a clause for the sorting option?
You are largely looking for the $cond operator, in order to evaluate conditions and decide whether a particular counter should be incremented or not, but there are also some other aggregation concepts you are missing here:
db.trackings.aggregate([
  { "$match": {
    "created_at": { "$gte": startDate, "$lt": endDate },
    "country": "US",
    "action": "like"
  }},
  { "$group": {
    "_id": {
      "date": {
        "month": { "$month": "$created_at" },
        "day": { "$dayOfMonth": "$created_at" },
        "year": { "$year": "$created_at" }
      },
      "article_id": "$article_id",
      "state": "$state"
    },
    "male_like_count": {
      "$sum": {
        "$cond": [
          { "$eq": [ "$actor_gender", "male" ] },
          1,
          0
        ]
      }
    },
    "female_like_count": {
      "$sum": {
        "$cond": [
          { "$eq": [ "$actor_gender", "female" ] },
          1,
          0
        ]
      }
    },
    "unknown_like_count": {
      "$sum": {
        "$cond": [
          { "$eq": [ "$actor_gender", "unknown" ] },
          1,
          0
        ]
      }
    }
  }},
  { "$sort": {
    "_id.date.year": 1,
    "_id.date.month": 1,
    "_id.date.day": 1,
    "_id.article_id": 1,
    "_id.state": 1,
    "male_like_count": 1,
    "female_like_count": 1
  }}
])
First, you want $match, which is how you supply "query" conditions to an aggregation pipeline. It can be used at any pipeline stage, but when used first it filters the input considered by the following operations. In this case, that means the required date range as well as the country, plus removal of anything that is not a "like", since you are not worried about those counts.
Then all items are grouped by the respective "key" in _id. This is a compound field here, mostly because all of these field values are considered part of the grouping key, and also for a little organization.
You also seem to ask in your output for "distinct fields" outside of the _id itself. DON'T DO THAT. The data is already there, so there is no point in copying it. You can produce the same things outside of _id via $first as an aggregation operator, or you could even use a $project stage at the end of the pipeline to rename the fields. But it's really best that you lose the habit of thinking you need that, as it just costs time and/or space in getting a response.
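For illustration, a sketch of that trailing $project approach (field names assumed from the pipeline above):
{ "$project": {
  "_id": 0,
  "date": "$_id.date",
  "article_id": "$_id.article_id",
  "state": "$_id.state",
  "male_like_count": 1,
  "female_like_count": 1,
  "unknown_like_count": 1
}}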
If anything, though, you seem to be after a "pretty date" more than anything else. I personally prefer working with "date math" for most manipulation, so an altered listing suitable for Mongoid would be:
Tracking.collection.aggregate([
  { "$match" => {
    "created_at" => { "$gte" => startDate, "$lt" => endDate },
    "country" => "US",
    "action" => "like"
  }},
  { "$group" => {
    "_id" => {
      "date" => {
        "$add" => [
          { "$subtract" => [
            { "$subtract" => [ "$created_at", Time.at(0).utc.to_datetime ] },
            { "$mod" => [
              { "$subtract" => [ "$created_at", Time.at(0).utc.to_datetime ] },
              1000 * 60 * 60 * 24
            ]}
          ]},
          Time.at(0).utc.to_datetime
        ]
      },
      "article_id" => "$article_id",
      "state" => "$state"
    },
    "male_like_count" => {
      "$sum" => {
        "$cond" => [
          { "$eq" => [ "$actor_gender", "male" ] },
          1,
          0
        ]
      }
    },
    "female_like_count" => {
      "$sum" => {
        "$cond" => [
          { "$eq" => [ "$actor_gender", "female" ] },
          1,
          0
        ]
      }
    },
    "unknown_like_count" => {
      "$sum" => {
        "$cond" => [
          { "$eq" => [ "$actor_gender", "unknown" ] },
          1,
          0
        ]
      }
    }
  }},
  { "$sort" => {
    "_id.date" => 1,
    "_id.article_id" => 1,
    "_id.state" => 1,
    "male_like_count" => 1,
    "female_like_count" => 1
  }}
])
This really just comes down to getting a DateTime object that corresponds to the epoch date, suitable for use as a driver argument, and then working the various operations. $subtract with one BSON Date and another produces a numeric value (milliseconds), which can subsequently be rounded down to the current day with the applied math. Then, when $add is used with a numeric timestamp value and a BSON Date (again representing the epoch), the result is again a BSON Date object, with the adjusted and rounded value.
Then it's all just a matter of applying $sort as an aggregation pipeline stage, as opposed to an external modifier. Much like the $match principle, an aggregation pipeline can sort anywhere, but at the end it is always dealing with the final result.
