Couchbase N1QL query running, but no data retrieved (wrong query, I guess) - join

I am trying to retrieve data from two buckets. There is no error, but nothing shows up (the documents I need do exist in these buckets).
1st bucket: a_bucket
Here is the document I am interested in (I have 3 different docs in total).
author_ID document:
{
  "author_ID": 1,
  "profil_creation_date": "2017/01/01/01:23:05/+5",
  "prefix": "Mr.",
  "first_name": "Dylan",
  "middle_name_s": "Alfred",
  "last_name": "Kerr",
  "date_of_birth": "1974/01/02",
  "sex": "M",
  "marital_status": "Single",
  "mobile_phone": "(860) 231-3336",
  "address": [
    {
      "address_1": {
        "address_ID": 1,
        "home_address": "338 Counts Lane",
        "city": "West Hartford",
        "province/state": "CT",
        "postal_code": "06105"
      }
    },
    {
      "address_2": {
        "address_ID": 2,
        "work_address": "977 Copperhead Rd",
        "city": "Newington",
        "province/state": "CT",
        "postal_code": "06111"
      }
    }
  ]
}
2nd bucket: b_bucket
Here are the 2 docs I am interested in:
p_output_ID document:
{
  "p_output_ID": 1,
  "author_ID": 2,
  "overall_score": 4.41,
  "status": {
    "r_status_first": "TRUE",
    "r_status_second": "FALSE",
    "r_status_third": "YES",
    "y_status_second": "TRUE",
    "y_status_third": "FALSE",
    "g_status_third": "TRUE"
  }
}
timing_ID document:
{
  "timing_ID": 1,
  "p_output_ID": 1,
  "author_ID": 1,
  "date_and_time": "2017-06-06/23:45:25.25/+5",
  "time_in_seconds": 12525,
  "incremental_time_in_seconds": "time_in_seconds",
  "current_state_and_duration": {
    "state": "RED",
    "duration_in_seconds": 33333
  }
}
My goal is to grab this information in one query:
prefix, first_name, middle_name_s, last_name (from author_ID document in a_bucket)
overall_score (from p_output_ID document in b_bucket)
date_and_time, state (from timing_ID document in b_bucket)
Here is my query:
select p2.current_state_and_duration.state, p1.overall_score, p2.date_and_time
from proc_data_bucket p1 USE KEYS "p_output_ID"
JOIN proc_data_bucket p2 ON KEYS "author_ID";
The syntax is accepted, but I am getting no data. Please help me with that.

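The problem is that USE KEYS and ON KEYS take actual document keys (document IDs), not field names: USE KEYS "p_output_ID" looks for a document whose key is the literal string p_output_ID, and ON KEYS "author_ID" does the same. Assuming your documents are keyed along the lines of author_1, p_output_1, and timing_1 (adjust the prefixes to your real key pattern), you can start from the timing documents and join outwards: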
CREATE INDEX ix1 ON b_bucket(timing_ID);

SELECT p1.prefix, p1.first_name, p1.middle_name_s, p1.last_name,
       p2.date_and_time, p2.current_state_and_duration.state,
       p3.overall_score
FROM b_bucket p2
JOIN a_bucket p1 ON KEYS ("author_" || TO_STRING(p2.author_ID))
JOIN b_bucket p3 ON KEYS ("p_output_" || TO_STRING(p2.p_output_ID))
WHERE p2.timing_ID BETWEEN 10 AND 50;
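Note the index: the query starts from b_bucket with the WHERE p2.timing_ID BETWEEN 10 AND 50 predicate, and the secondary index on timing_ID is what allows that scan to find the timing documents in the first place.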

Related

Access Nested Netsuite Ruby Hash

I am trying to access a NetSuite Ruby hash (netsuite gem) and return orders which need updating from the db. The process uses two service objects: one to get the orders from the db (ListOrdersService), and this file to compare those against modified ones in NetSuite. It is all working, except I am having problems getting some of the nested values in NetSuite into the output. Code is below with a troublesome item commented out. It's just an each method which compares dates, then puts the needed orders in the returned value.
def process_order_updates
  get_order_updates = []
  # Get all our open orders from DB
  bj_open_orders = ListOrdersService.new.call
  # Get all identical open orders from Netsuite
  bj_open_orders.each do |item|
    netsuite_sales_orders = NetSuite::Records::SalesOrder.get(item['sales_order_internal_id'])
    # Compare the last modified date from Netsuite to the last checked date from app DB
    if netsuite_sales_orders.present? && netsuite_sales_orders.last_modified_date > item['last_checked_date']
      # If the last modified date is newer, then we create a new hash with the updated order info
      get_order_updates << {
        sales_order_internal_id: item['sales_order_internal_id'],
        order_status: item['order_status']
        # quantity_fulfilled: item['items_list']['item']['quantity_fulfilled']
      }
      puts "still open order #{item['sales_order_internal_id']} needs to be updated, it was last checked at #{item['last_checked_date']} but it was just modified, on #{netsuite_sales_orders.last_modified_date}"
    end
  end
  puts "Here are the orders that need to be updated: #{get_order_updates}"
end
The NetSuite data I am referencing is below; I am trying to get quantity_fulfilled, quantity_billed, and some others in the file. item_list is a top-level item:
"item_list": {
"list": [
{
"attributes": {
"item": {
"internal_id": "110",
"external_id": null,
"type": null,
"attributes": {
"name": "000002 Kerosene (UN1223) 3.PGIII (D/E)"
}
},
"expand_item_group": false,
"quantity": "1000.0",
"units": {
"internal_id": "1",
"external_id": null,
"type": null,
"attributes": {
"name": "ltr"
}
},
"description": "Kerosene (UN1223) 3.PGIII (D/E)",
"price": {
"internal_id": "-1",
"external_id": null,
"type": null,
"attributes": {}
},
"rate": "0.81",
"amount": "810.0",
"is_closed": false,
"gross_amt": "850.5",
"line": "1",
"cost_estimate_type": "_averageCost",
"cost_estimate": "900.79",
"quantity_back_ordered": "0.0",
"quantity_billed": "0.0",
"quantity_committed": "1000.0",
"quantity_fulfilled": "0.0",
"tax1_amt": "40.5",
"tax_code": {
"internal_id": "2214",
"external_id": null,
"type": null,
"attributes": {
"name": "VAT:RDR-5%"
}
},
Any tips on how to get those items, directly or with a hash map, are welcome. Thanks!
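In the meantime, here is a minimal sketch of digging those quantities out, assuming the order is serialized to a plain Ruby hash shaped like the JSON above (the netsuite gem's record objects may need converting to a hash first; `order` is a hypothetical variable name for one such hash):

# order: assumed to be a hash mirroring the "item_list" JSON shown above
lines = order.dig('item_list', 'list') || []
quantities = lines.map do |line|
  attrs = line['attributes']
  {
    name:               attrs.dig('item', 'attributes', 'name'),
    quantity_fulfilled: attrs['quantity_fulfilled'],
    quantity_billed:    attrs['quantity_billed']
  }
end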

How to create dynamic node relation in neo4j for dynamic data?

I was able to create Author nodes directly from the JSON file, but the challenge is how to link the data, i.e. linking "Author" to "organization". Since the data is dynamic, we cannot generalize it. I have tried using a CSV file, but it fails when dynamic data comes in. For example, one JSON record contains 2 organizations and 3 authors; the next record will be different. Different JSON records have different authors and organizations to link. organization/1 represents organization 1 and organization/2 represents organization 2. Any help or hint will be great. Thank you. Please find the JSON file below.
"Author": [
{
"seq": "3",
"type": "abc",
"identifier": [
{
"idtype:auid": "10000000"
}
],
"familyName": "xyz",
"indexedName": "MI",
"givenName": "T",
"preferredName": {
"familyName": "xyz1",
"givenName": "a",
"initials": "T.",
"indexedName": "bT."
},
"emailAddressList": [],
"degrees": [],
"#id": "https:abc/2009127993/author/person/3",
"hasAffiliation": [
"https:abc/author/organization/1"
],
"organization": [
[
{
"identifier": [
{
"#type": "idtype:uuid",
"#subtype": "idsubtype:affiliationInstanceId",
"#value": "aff2"
},
{
"#type": "idtype:OrgDB",
"#subtype": "idsubtype:afid",
"#value": "12345"
},
{
"#type": "idtype:OrgDB",
"#subtype": "idsubtype:dptid"
}
],
"organizations": [],
"addressParts": [],
"sourceText": "",
"text": " Medical University School of Medicine",
"#id": "https:abc/author/organization/1"
}
],
[
{
"identifier": [
{
"#type": "idtype:uuid",
"#subtype": "idsubtype:affiliationInstanceId",
"#value": "aff1"
},
{
"#type": "idtype:OrgDB",
"#subtype": "idsubtype:afid",
"#value": "7890"
},
{
"#type": "idtype:OrgDB",
"#subtype": "idsubtype:dptid"
}
],
"organizations": [],
"addressParts": [],
"sourceText": "",
"text": "K University",
"#id": "https:efg/author/organization/2"
}
]
Hi, I see that Organisation is part of the Author data, so you have to model it likewise, for instance (Author)-[:AFFILIATED_WITH]->(Organisation).
When you use apoc.load.json, which supports a stream of author objects, you can load the data.
I did some checks on your JSON structure with this Cypher query:
call apoc.load.json("file:///Users/keesv/work/check.json") yield value
unwind value as record
WITH record.Author as author
WITH author.identifier[0].`idtype:auid` as authorId, author, author.organization[0] as organizations
return authorId, author, organizations
To get this working you will need to put APOC in the plugins directory, and add the following two lines to the apoc.conf file (create one if it is not there) in the 'conf' directory:
apoc.import.file.enabled=true
apoc.import.file.use_neo4j_config=false
I also see a nested array for the organisations in the output; why is that, and what is the meaning of it?
And finally, I also see in the JSON that an organisation can have a reference to other organisations.
Explanation
In my query I use UNWIND to unwind the base Author array. This means you get a 'record' to work with for every author.
With a MERGE or CREATE statement you can now create an Author node with the correct properties. With the FOREACH construct you can walk over all the Organization entries, create/merge an Organization node, and create the relation between the Author and the Organization.
Here is a 'pseudo' example:
call apoc.load.json("file:///Users/keesv/work/check.json") yield value
unwind value as record
WITH record.Author as author
WITH author.identifier[0].`idtype:auid` as authorId, author, author.organization[0] as organizations
// creating the Author node
MERGE (a:Author { id: authorId })
SET a.familyName = author.familyName
...
// walk over the organizations
FOREACH (org in organizations |
  MERGE (o:Organization { id: ... })
  SET o.name = org.text
  ...
  MERGE (a)-[:AFFILIATED_WITH]->(o)
  // if needed you can also do a nested FOREACH here to process the Org-Org relationship
)
Here is the JSON file I used (I had to change something at the start and the end):
[
  {
    "Author": {
      "seq": "3",
      "type": "abc",
      "identifier": [
        {
          "idtype:auid": "10000000"
        }
      ],
      "familyName": "xyz",
      "indexedName": "MI",
      "givenName": "T",
      "preferredName": {
        "familyName": "xyz1",
        "givenName": "a",
        "initials": "T.",
        "indexedName": "bT."
      },
      "emailAddressList": [],
      "degrees": [],
      "#id": "https:abc/2009127993/author/person/3",
      "hasAffiliation": [
        "https:abc/author/organization/1"
      ],
      "organization": [
        [
          {
            "identifier": [
              {
                "#type": "idtype:uuid",
                "#subtype": "idsubtype:affiliationInstanceId",
                "#value": "aff2"
              },
              {
                "#type": "idtype:OrgDB",
                "#subtype": "idsubtype:afid",
                "#value": "12345"
              },
              {
                "#type": "idtype:OrgDB",
                "#subtype": "idsubtype:dptid"
              }
            ],
            "organizations": [],
            "addressParts": [],
            "sourceText": "",
            "text": " Medical University School of Medicine",
            "#id": "https:abc/author/organization/1"
          }
        ],
        [
          {
            "identifier": [
              {
                "#type": "idtype:uuid",
                "#subtype": "idsubtype:affiliationInstanceId",
                "#value": "aff1"
              },
              {
                "#type": "idtype:OrgDB",
                "#subtype": "idsubtype:afid",
                "#value": "7890"
              },
              {
                "#type": "idtype:OrgDB",
                "#subtype": "idsubtype:dptid"
              }
            ],
            "organizations": [],
            "addressParts": [],
            "sourceText": "",
            "text": "K University",
            "#id": "https:efg/author/organization/2"
          }
        ]
      ]
    }
  }
]
IMPORTANT: create unique constraints for Author.id and Organization.id!
In this way you can process any JSON file with an unknown number of author elements and an unknown number of affiliated organisations.
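For reference, a minimal sketch of those constraints (Neo4j 3.x syntax; Neo4j 4.x+ uses CREATE CONSTRAINT ... FOR (a:Author) REQUIRE a.id IS UNIQUE instead):

CREATE CONSTRAINT ON (a:Author) ASSERT a.id IS UNIQUE;
CREATE CONSTRAINT ON (o:Organization) ASSERT o.id IS UNIQUE;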

How to remove multiple attributes from a json using ruby

I have a JSON object. It has multiple "passthrough_fields" attributes which are unnecessary for me, and I want to remove them. Is there a way to get all those attributes filtered out?
JSON:
{
  "type": "playable_item",
  "id": "p06s0lq7",
  "urn": "urn:bbc:radio:episode:p06s0mk3",
  "network": {
    "id": "bbc_radio_five_live",
    "key": "5live",
    "short_title": "Radio 5 live",
    "logo_url": "https://sounds.files.bbci.co.uk/v2/networks/bbc_radio_five_live/{type}_{size}.{format}",
    "passthrough_fields": {}
  },
  "titles": {
    "primary": "Replay",
    "secondary": "Bill Shankly",
    "tertiary": null,
    "passthrough_fields": {}
  },
  "synopses": {
    "short": "Bill Shankly with Sue MacGregor in 1979 - five years after he resigned as Liverpool boss.",
    "medium": null,
    "long": "Bill Shankly in conversation with Sue MacGregor in 1979, five years after he resigned as Liverpool manager.",
    "passthrough_fields": {}
  },
  "image_url": "https://ichef.bbci.co.uk/images/ic/{recipe}/p06qbz1x.jpg",
  "duration": {
    "value": 1774,
    "label": "29 mins",
    "passthrough_fields": {}
  },
  "progress": null,
  "container": {
    "type": "series",
    "id": "p06qbzmj",
    "urn": "urn:bbc:radio:series:p06qbzmj",
    "title": "Replay",
    "synopses": {
      "short": "Colin Murray unearths classic sports commentaries and interviews from the BBC archives.",
      "medium": "Colin Murray looks back at 90 years of sport on the BBC by unearthing classic commentaries and interviews from the BBC archives.",
      "long": null,
      "passthrough_fields": {}
    },
    "activities": [],
    "passthrough_fields": {}
  },
  "availability": {
    "from": "2018-11-16T16:18:54Z",
    "to": null,
    "label": "Available for over a year",
    "passthrough_fields": {}
  },
  "guidance": {
    "competition_warning": false,
    "warnings": null,
    "passthrough_fields": {}
  },
  "activities": [],
  "uris": [
    {
      "type": "latest",
      "label": "Latest",
      "uri": "/v2/programmes/playable?container=p06qbzmj&sort=sequential&type=episode",
      "passthrough_fields": {}
    }
  ],
  "passthrough_fields": {}
}
Is there a way I can remove all those fields and store the updated JSON in a new variable?
You can do this recursively to tackle nested occurrences of passthrough_fields, whether they're found in an array or a sub-hash. Inline comments explain things a little as it goes:
hash = JSON.parse(input) # convert the JSON to a hash

def remove_recursively(hash, *to_remove)
  hash.each do |key, val|
    hash.except!(*to_remove) # the heavy lifting: remove all keys that match `to_remove` (Hash#except! comes from ActiveSupport)
    remove_recursively(val, *to_remove) if val.is_a? Hash # if a nested hash, run this method on it
    if val.is_a? Array # if a nested array, loop through it checking for hashes to run this method on
      val.each { |el| remove_recursively(el, *to_remove) if el.is_a? Hash }
    end
  end
end
remove_recursively(hash, 'passthrough_fields')
To demonstrate, with a simplified example:
hash = {
  "test" => { "passthrough_fields" => [1, 2, 3], "wow" => '123' },
  "passthrough_fields" => [4, 5, 6],
  "array_values" => [{ "to_stay" => "I am", "passthrough_fields" => [7, 8, 9] }]
}

remove_recursively(hash, 'passthrough_fields')
#=> {"test"=>{"wow"=>"123"}, "array_values"=>[{"to_stay"=>"I am"}]}

remove_recursively(hash, 'passthrough_fields', 'wow', 'to_stay')
#=> {"test"=>{}, "array_values"=>[{}]}
This will tackle any arrays, and will dig for nested hashes however deep it needs to go.
It takes any number of fields to remove, in this case a single 'passthrough_fields'.
Hope this helps, let me know how you get on.
I think that the easiest solution would be to:
1. Convert the JSON into a hash (JSON.parse(input)).
2. Extend Hash with an except_nested method (save it in config/initializers/except_nested.rb); a sketch is shown below.
3. On the hash from step 1, call: without_passthrough = your_hash.except_nested('passthrough_fields').
4. Convert the hash back to JSON (without_passthrough.to_json).
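Since the extension itself isn't included here, this is a hypothetical minimal sketch of what it could look like (it only recurses into hashes, which is exactly the limitation described below):

# config/initializers/except_nested.rb -- hypothetical sketch
class Hash
  def except_nested(key)
    result = except(key) # Hash#except: built in since Ruby 3.0, or via ActiveSupport
    result.each { |k, v| result[k] = v.except_nested(key) if v.is_a?(Hash) }
  end
end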
Please keep in mind that it will only work for passthrough_fields nested directly in hashes. In your JSON, you have the following part:
"uris" => [
{
"type"=>"latest",
"label"=>"Latest",
"uri"=>"/v2/programmes/playable?container=p06qbzmj&sort=sequential&type=episode",
"passthrough_fields"=>{}
}
]
In this case, the passthrough_fields will not be removed. You have to find a more sophisticated solution :)
You can do something like this:
def nested_except(hash, except_key)
  sanitized_hash = {}
  hash.each do |key, value|
    next if key == except_key
    sanitized_hash[key] = value.is_a?(Hash) ? nested_except(value, except_key) : value
  end
  sanitized_hash
end
json = JSON.parse(json_string)
sanitized = nested_except(json, 'passthrough_fields')
See example:
json = { :a => 1, :b => 2, :c => { :a => 1, :b => { :a => 1 } } }
nested_except(json, :a)
# => {:b=>2, :c=>{:b=>{}}}
This helper can easily be converted to support multiple keys to except, simply with except_keys = Array.wrap(except_key) and next if except_keys.include?(key).
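For instance, one way to write that variant (using a splat instead of Array.wrap, which comes from ActiveSupport):

def nested_except(hash, *except_keys)
  sanitized_hash = {}
  hash.each do |key, value|
    next if except_keys.include?(key)
    sanitized_hash[key] = value.is_a?(Hash) ? nested_except(value, *except_keys) : value
  end
  sanitized_hash
end

nested_except(json, :a, :b)
# => {:c=>{}}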

Couchdb Reference Document

I'm new to CouchDB and struggling to implement a basic example. I have three documents (Customer, Contact, Address) and I want to join them into a single document.
Account Document
{
  "_id": "CST-1",
  "_rev": "8-089da95f148b446bd3b33a3182de709f",
  "name": "Customer",
  "code": "CST-001",
  "contact_Key": "CNT-001",
  "address_Key": "ADD-001",
  "type": "Customer"
}
Contact Document
{
  "_id": "CNT-001",
  "_rev": "8-079da95f148b446bd3b33a3182de709g",
  "fullname": "Happy Swan",
  "type": "Contact"
}
Address Document
{
  "_id": "ADD-001",
  "_rev": "8-179da95f148b446bd3b33a3182de709c",
  "street1": "9 Glass View",
  "street2": "Street 2",
  "city": "USA",
  "type": "Address"
}
Map/Query:
var map = function (doc) {
  if (doc.type === 'Customer') {
    emit(doc._id, { contact_Key: doc.contact_Key, address_Key: doc.address_Key });
  }
};

db.query({ map: map }, { include_docs: true }, function (err, res) {
});
I want all 3 documents in a single document when I query the account, e.g.:
Expected result
{
  "_id": "CST-1",
  "_rev": "8-089da95f148b446bd3b33a3182de709f",
  "name": "Customer",
  "code": "CST-001",
  "contact_Key": "CNT-001",
  "address_Key": "ADD-001",
  "type": "Customer",
  "Contact": {
    "_id": "CNT-001",
    "_rev": "8-079da95f148b446bd3b33a3182de709g",
    "fullname": "Happy Swan",
    "type": "Contact"
  },
  "Address": {
    "_id": "ADD-001",
    "_rev": "8-179da95f148b446bd3b33a3182de709c",
    "street1": "9 Glass View",
    "street2": "Street 2",
    "city": "USA",
    "type": "Address"
  }
}
I don't see any better solution than querying the account document first and then querying the other two once you know their IDs. If you think about it, it makes sense because the only link between these documents is the IDs stored in the account document, so to get all three at the same time, internally the DB would have to do two queries: first the account document, then the other two. And by design CouchDB only does one query at a time.
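For reference, a minimal sketch of that two-step approach, assuming the PouchDB-style API used in the question and the field names from the documents above:

db.get('CST-1').then(function (account) {
  // fetch both referenced docs in one round trip
  return db.allDocs({ keys: [account.contact_Key, account.address_Key], include_docs: true })
    .then(function (res) {
      account.Contact = res.rows[0].doc;
      account.Address = res.rows[1].doc;
      return account;
    });
});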
If you had the account doc ID stored in the contact and address documents, however, you could use a list function to merge them all into one.
First you would need a view:
function(doc) {
  if (doc.type === 'Customer') {
    emit(doc._id, doc);
  }
  if (doc.type === 'Contact' || doc.type === 'Address') {
    emit(doc.account_id, doc);
  }
}
Then a list function:
function(head, req) {
  var row, account, contact, address;
  while (row = getRow()) {
    if (row.value.type === 'Customer') {
      account = row.value;
    } else if (row.value.type === 'Contact') {
      contact = row.value;
    } else if (row.value.type === 'Address') {
      address = row.value;
    }
  }
  account['Contact'] = contact;
  account['Address'] = address;
  provides("json", function() {
    return { 'json': account };
  });
}
And you would query it with:
GET /db/_design/foo/_list/the-list/the-view?key="CST-1"

How do I add JSON data in the child table against the parent table in Ruby on Rails?

I have been struggling with something in Ruby on Rails.
I have four tables which are interlinked: A, B, C, and D. A is the parent of B, and B is the parent of C and D.
I have records already existing in table B and want to add multiple entries in the C and D tables against a particular record, for example id 3.
The data format is:
[{\"waypoint\":{\"latitude\":37.3645616666667,\"timestamp\":\"2012-10-16T09:58:50Z\",\"background\":false,\"estimated_speed\":17.4189262390137,\"journey_id\":null,\"longitude\":-112.850676666667}},{\"waypoint\":{\"latitude\":37.3648733333333,\"timestamp\":\"2012-10-16T09:58:54Z\",\"background\":false,\"estimated_speed\":17.076057434082,\"journey_id\":null,\"longitude\":-112.85077}},{\"waypoint\":{\"latitude\":37.3651116666667,\"timestamp\":\"2012-10-16T09:58:57Z\",\"background\":false,\"estimated_speed\":15.4269437789917,\"journey_id\":null,\"longitude\":-112.850766666667}},{\"waypoint\":{\"latitude\":37.36547,\"timestamp\":\"2012-10-16T09:59:02Z\",\"background\":false,\"estimated_speed\":17.1007328033447,\"journey_id\":null,\"longitude\":-112.85072}},{\"waypoint\":{\"latitude\":37.3658433333333,\"timestamp\":\"2012-10-16T09:59:11Z\",\"background\":false,\"estimated_speed\":10.3052024841309,\"journey_id\":null,\"longitude\":-112.850738333333}}]"
I get this data from a web service. But I see journey_id as null, whereas I want it to be 3, as I want to make the entry against this id.
How can I save this data in a child table using this id?
Your JSON string isn't opened correctly in your sample, as it's missing the leading '"'. Fixing that and moving on, here's what the JSON looks like "prettified":
[
  {
    "waypoint": {
      "latitude": 37.3645616666667,
      "timestamp": "2012-10-16T09:58:50Z",
      "background": false,
      "estimated_speed": 17.4189262390137,
      "journey_id": null,
      "longitude": -112.850676666667
    }
  },
  {
    "waypoint": {
      "latitude": 37.3648733333333,
      "timestamp": "2012-10-16T09:58:54Z",
      "background": false,
      "estimated_speed": 17.076057434082,
      "journey_id": null,
      "longitude": -112.85077
    }
  },
  {
    "waypoint": {
      "latitude": 37.3651116666667,
      "timestamp": "2012-10-16T09:58:57Z",
      "background": false,
      "estimated_speed": 15.4269437789917,
      "journey_id": null,
      "longitude": -112.850766666667
    }
  },
  {
    "waypoint": {
      "latitude": 37.36547,
      "timestamp": "2012-10-16T09:59:02Z",
      "background": false,
      "estimated_speed": 17.1007328033447,
      "journey_id": null,
      "longitude": -112.85072
    }
  },
  {
    "waypoint": {
      "latitude": 37.3658433333333,
      "timestamp": "2012-10-16T09:59:11Z",
      "background": false,
      "estimated_speed": 10.3052024841309,
      "journey_id": null,
      "longitude": -112.850738333333
    }
  }
]
You have an array of waypoint objects. Parsing that JSON into a Ruby object:
obj = JSON["[{\"waypoint\":..."] # purposely truncated for brevity
returns an array of hashes:
[{"waypoint"=>
{"latitude"=>37.3645616666667,
"timestamp"=>"2012-10-16T09:58:50Z",
"background"=>false,
"estimated_speed"=>17.4189262390137,
"journey_id"=>nil,
"longitude"=>-112.850676666667}},
{"waypoint"=>
{"latitude"=>37.3648733333333,
"timestamp"=>"2012-10-16T09:58:54Z",
"background"=>false,
"estimated_speed"=>17.076057434082,
"journey_id"=>nil,
"longitude"=>-112.85077}},
{"waypoint"=>
{"latitude"=>37.3651116666667,
"timestamp"=>"2012-10-16T09:58:57Z",
"background"=>false,
"estimated_speed"=>15.4269437789917,
"journey_id"=>nil,
"longitude"=>-112.850766666667}},
{"waypoint"=>
{"latitude"=>37.36547,
"timestamp"=>"2012-10-16T09:59:02Z",
"background"=>false,
"estimated_speed"=>17.1007328033447,
"journey_id"=>nil,
"longitude"=>-112.85072}},
{"waypoint"=>
{"latitude"=>37.3658433333333,
"timestamp"=>"2012-10-16T09:59:11Z",
"background"=>false,
"estimated_speed"=>10.3052024841309,
"journey_id"=>nil,
"longitude"=>-112.850738333333}}]
You can walk through that array and access, or change, the value for journey_id:
row = 3
obj.each { |h| h['waypoint']['journey_id'] = row } # each mutates the hashes in place; map here would replace them with the assigned value
obj.first
Looking at the first hash shows the value was changed, as were all the rest:
{
  "waypoint" => {
    "latitude" => 37.3645616666667,
    "timestamp" => "2012-10-16T09:58:50Z",
    "background" => false,
    "estimated_speed" => 17.4189262390137,
    "journey_id" => 3,
    "longitude" => -112.850676666667
  }
}
At that point, you need to recreate the JSON string. You can figure that out by reading the JSON documentation.
You could do all this by modifying the received string directly, but you don't want to get into the habit of directly modifying JSON strings because you can inadvertently damage the payload. It's better to let the parser give you the structure, modify that, then let JSON recreate the string.
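For completeness, that last step is a single call with the stdlib json gem:

require 'json'
updated_json = obj.to_json # or JSON.generate(obj)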
How you store it to your database is left as an exercise for you also.
