Indexing into JSON - ruby-on-rails

This seems like it should be a very easy question, but I'm having some trouble with it. I'm creating my own JSON and I need to index into it in order to seed my database. I've indexed into JSON before with very little difficulty, but for some reason I can't index into my own. That makes me think there might be an issue with the JSON itself, but I can't see anything that would cause a problem. I appreciate your assistance!
My JSON:
{
"workouts": [
{
"level": "1",
"exercises": [
{
"name": "box jumps",
"difficulty": "3",
"reps": "10",
"sets": "3",
"requirements": [
"sturdy box at least two feet high"
],
"body-part": "quadriceps",
"description": "Plant both feet should length apart and jump onto the box. Once on the box, stand fully upright.",
"pounds": "1"
},
{
"name": "v-press",
"difficulty": "4",
"reps": "12",
"sets": "3",
"requirements": [
"mat"
],
"body-part": "abdominals",
"description": "Lie flat on the ground, then raise your legs and arms slightly off the matt.",
"pounds": "1"
}
]
},
{
"level": "2",
"exercises": [
{
"name": "assisted pullups",
"difficulty": "1",
"reps": "12",
"sets": "3",
"requirements": [
"Assisted Pullup Machine"
],
"body-part": "biceps",
"description": "Kneel on the machine and adjust the weight to your needs",
"pounds": "50"
},
{
"name": "assisted dips",
"difficulty": "1",
"reps": "12",
"sets": "3",
"requirements": [
"Assisted Dips Machine"
],
"body-part": "triceps",
"description": "Kneel on the machine and adjust the weight to your needs",
"pounds": "50"
}
]
}
]
}
In pry, I do the following:
require "json"
f = File.open("workout.json")
mylist = JSON.parse(f.read)
When I try to index in, I get various errors (syntax error, no method errors, nil). Below are some examples of indexing I have attempted.
mylist.workouts
mylist[:workouts]
mylist[0]
mylist[:workouts][0][:level]
Thanks in advance!

The keys in the Hash you get from parsing the JSON are strings, not symbols. Try this:
mylist['workouts']
mylist['workouts'][0]['level']
A couple of points to remember:
Strings and Symbols are not interchangeable as keys in a Hash. They are different objects and hence different keys.
To get the behaviour of params in a Rails controller, where strings and symbols are interchangeable, you need to instantiate an ActiveSupport::HashWithIndifferentAccess. It is a separate utility class provided by Rails, not part of the Ruby stdlib (see the sketch after this list).
The jbuilder gem is not a JSON parser; it is a JSON builder. It is used to create JSON structures from Ruby objects, mostly when writing views for JSON responses. It is analogous to how ERB is used for HTML responses.
JSON has been part of the Ruby stdlib for some time now (i.e. JSON parsing and serialization does not require any additional gems).
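A minimal sketch of the options above, assuming the question's JSON is saved as workout.json (symbolize_names is a standard JSON.parse option; with_indifferent_access requires ActiveSupport):
require "json"

mylist = JSON.parse(File.read("workout.json"))
mylist["workouts"][0]["level"]     # => "1"

# JSON.parse can symbolize keys at parse time:
symbolized = JSON.parse(File.read("workout.json"), symbolize_names: true)
symbolized[:workouts][0][:level]   # => "1"

# Inside a Rails app, ActiveSupport accepts either kind of key:
indifferent = mylist.with_indifferent_access
indifferent[:workouts][0]["level"] # => "1"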

Related

How to parse JSON to an entity that has a relationship to itself using Sync?

I am using Sync to try to parse some JSON into Core Data.
My "Creature" entity has a parent-children relationship, and the JSON has a format similar to this:
[
{
"id": 1,
"name": "Mad king",
"parent": null,
"children": [
5
]
},
{
"id": 2,
"name": "Drogon",
"parent": 5,
"children": []
},
{
"id": 3,
"name": "Rhaegal",
"parent": 5,
"children": []
},
{
"id": 4,
"name": "Viserion",
"parent": 5,
"children": []
},
{
"id": 5,
"name": "Daenerys",
"parent": 1,
"children": [
2,
3,
4
]
}
]
The Mad king has one child, Daenerys, who has three children (Drogon, Rhaegal and Viserion).
Now, I know that Sync supports this sort of setup (where the JSON contains only the ids of parents/children instead of whole objects), and I suspect I have to parse the file twice: once just to create all the objects, and a second time to create the relationships among them. For the second pass to work, I need to rename children to children_ids and parent to parent_id (as described in their README).
However, I can't understand exactly how I would do that. Is it possible to ignore the parent/children keys during the first pass and then take them into account (using the modified keys) during the second?
Or could someone maybe propose a better solution that would (ideally) require just one pass?
According to the documentation:
"For example, in the one-to-many example, you have a user that has many notes. If you already have synced all the notes then your JSON would only need the notes_ids; this can be an array of strings or integers. As a side note, only do this if you are 100% sure that all the required items (notes) have been synced, otherwise these relationships will get ignored and an error will be logged."
So you can, in theory, just blindly perform a full sync to get all the models (letting it fail on the relationships), and then sync again immediately afterwards to pick up the relationships.
If you want to avoid the errors, you could write helper functions that create two sets of JSON for these models: one to define the objects, and a second to define the relationships. Either way, you'd need to do two passes; a rough sketch of such helpers follows.
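A minimal Ruby sketch of such helper functions, assuming the JSON above is saved as creatures.json (the file names are placeholders; the parent_id/children_ids key names come from the Sync README):
require "json"

creatures = JSON.parse(File.read("creatures.json"))

# Pass 1: objects only -- strip the relationship keys entirely.
pass_one = creatures.map { |c| c.reject { |k, _| %w[parent children].include?(k) } }

# Pass 2: rename the relationship keys to the _id form Sync expects
# for id-only relationships.
pass_two = creatures.map do |c|
  c.map do |k, v|
    case k
    when "parent"   then ["parent_id", v]
    when "children" then ["children_ids", v]
    else [k, v]
    end
  end.to_h
end

File.write("creatures_pass1.json", JSON.pretty_generate(pass_one))
File.write("creatures_pass2.json", JSON.pretty_generate(pass_two))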

Ruby on rails: Substring with quotes search inside JSON object

I have retrieved a JSON object using the Typhoeus gem.
url = 'www.example.com'
request = ::Typhoeus::Request.get(url, userpwd: username + ":" + pass)
content = JSON.parse(request.body)
I would like to count the occurrences of "Priority":"high", including the quotes, inside the JSON response. How do I go about doing this?
Edit:
"priority":"high" is a key value pair. It is deeply nested inside the json tree.(Don't how deeply it is nested). All I need is count of occurence of "priority":"high"
Any and all suggestion is welcome.
Sample data:
"tickets": [{
"url": "https://.zendesk.com/api/v2/tickets/xxxx.json",
"id": xxxxx,
"external_id": null,
"via": {
"channel": "email",
"source": {
"from": {
"address": "#compli.com",
"name": ""
},
"to": {
"name": "organization Global Support",
"address": "support#organization.zendesk.com"
},
"rel": null
}
},
"created_at": "2016-08-04T16:23:13Z",
"updated_at": "2016-08-08T20:26:01Z",
"type": "problem",
"subject": "Problems with abc Connect",
"raw_subject": "Problems with abc Connect",
"description": "Hi – our Tenet ID is 5675.\n\n \n\nThe abc report is not providing the full data when I run the billing preview. I am running it using Chrome. Attached are snapshots of what I’m doing plus the report generated.\n\n \n\nA perfect example of the problem is shown at the bottom of the report generated. Garber Automotive Group, account number A00000490 does not display the data for all of their products. Their data is shown on rows 5658 thru 5712 on the excel file BillingPreviewResult_201620 report run 08.04.16.\n\n \n\nHowever the EXACT same report (all the parameters are the same) run on 07/01/16 included all of Garber’s information. The excel file abc report run 07.01.16 10.13 AM has the data for Garber on rows 6099 – 6182.\n\n \n\nThe report is cutting off a lot of data for some reason. As you can see by comparing the amount of data between the two excel reports there are much fewer lines on the report run on today as opposed to the one run on 07/01, 6182 rows vs 5712 rows.\n\n \n\nThis is a business critical report for us. It is used for cash forecasting, monthly financial reporting, rolling budgeting and ad hoc reporting.\n\n \n\nWe need this problem identified and fixed immediately. It is already causing a problem with finalizing our July results.\n\n \n\nLet me know if you have any questions or need any additional data.\n\n \n\n \n\nRegards,\n\n \n\n \n\n \n\n| Controller\ndesk: 503.963-4239 | fax: 503.294.1200 | \n\nCompli - Cool, Calm and Compliant. TM\n\nVisit() to learn more.\n\n \n\nFollow us on LinkedIn () and Twitter",
"priority": "normal",
"status": "open",
"recipient": "support#organization.zendesk.com",
"requester_id": 1336424406,
"submitter_id": 1336424406,
"assignee_id": null,
"organization_id": 224504969,
"group_id": 21606503,
"collaborator_ids": [560973773, 786229209, 421597631, 539566717, 707192615, 1336424406, 31365392, 719608577, 1817633993],
"forum_topic_id": null,
"problem_id": null,
"has_incidents": false,
"due_at": null,
"tags": ["1_price", "best_practice_advise", "engage_global_services__email_", "escalate", "hard", "internal_escalation", "p0", "yes_escalated", "xxxxx", "zhub"],
"custom_fields": [{
"id": 22024091,
"value": "p0"
}, {
"id": 24212576,
"value": "best_practice_advise"
}, {
"id": 22035048,
"value": "xxx and so on.....

Getting the Highway name - Skobbler

I need to get the name of the highway on which the user is currently navigating.
That can be done in navigation mode, getting it from
-(void)routingService:(SKRoutingService *)routingService didChangeCurrentStreetName:(NSString *)currentStreetName streetType:(SKStreetType)streetType countryCode:(NSString *)countryCode
So, when I was testing my app yesterday, I was on a highway, and yes, Skobbler did recognise that I was on one, and yes, I got the highway name back.
It was "Brooklyn-Queens Expressway".
But Brooklyn-Queens Expressway is actually the name of the I-278 Interstate highway, and all the functions I will have to use later need the highway name in that I-nnn format.
Here is the map photo of what I mean
So, is there a way to get streetName in that I-nnn format when the streetType is recognised as an interstate highway?
Or is there any OpenStreetMap database we could consult? I wasn't able to find anything on the OSM wiki.
I don't know about the Skobbler SDK, but if an online query is an option and you have the approximate geographical area and the name of the motorway, you can use the Overpass API (http://wiki.openstreetmap.org/wiki/Overpass_API) to query the OpenStreetMap database for the highway reference.
For example, the following query (for a particular bbox which contains a small section of the highway):
[out:json]
[timeout:25]
;
(
way
["highway"="motorway"]
["name"="Brooklyn-Queens Expressway"]
(40.73483602685421,-73.91463160514832,40.73785205632046,-73.9096748828888);
);
out body qt;
returns (with some key-value pairs omitted for simplicity):
{
"version": 0.6,
"generator": "Overpass API",
"osm3s": {
"timestamp_osm_base": "2015-09-18T20:21:02Z",
"copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
},
"elements": [
{
"type": "way",
"id": 46723482,
"nodes": [
488264429,
488264444,
488264461,
488264512,
488264530,
488264541,
597315979
],
"tags": {
"bicycle": "no",
"bridge": "yes",
"foot": "no",
"hgv": "designated",
"highway": "motorway",
"horse": "no",
"lanes": "3",
"layer": "1",
"name": "Brooklyn-Queens Expressway",
"oneway": "yes",
"ref": "I 278",
"sidewalk": "none",
}
},
{
"type": "way",
"id": 46724225,
"nodes": [
597315978,
488242888,
488248526,
488248544,
488248607
],
"tags": {
"bicycle": "no",
"bridge": "yes",
"foot": "no",
"hgv": "designated",
"highway": "motorway",
"horse": "no",
"lanes": "3",
"layer": "1",
"name": "Brooklyn-Queens Expressway",
"oneway": "yes",
"ref": "I 278",
"sidewalk": "none",
}
}
]
}
These are two sections of the road in the OSM database. In the US, the "ref" tag for Interstates takes the form "I XXX" (see http://wiki.openstreetmap.org/wiki/Interstate_Highways and note the format for co-location). You can retrieve the Interstate name accordingly.
You can try the above query in overpass-turbo (a UI for the service) at http://overpass-turbo.eu/s/bxi (press RUN, open the DATA tab for the returned data, and pan the map to query another bbox).
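If you end up scripting this, a minimal Ruby sketch of the same query against the public Overpass endpoint (the endpoint URL is the standard public instance; the bbox and name are carried over from the query above):
require "net/http"
require "json"
require "uri"

query = <<~OVERPASS
  [out:json][timeout:25];
  (
    way["highway"="motorway"]["name"="Brooklyn-Queens Expressway"]
      (40.73483602685421,-73.91463160514832,40.73785205632046,-73.9096748828888);
  );
  out body qt;
OVERPASS

uri = URI("https://overpass-api.de/api/interpreter")
response = Net::HTTP.post_form(uri, "data" => query)
elements = JSON.parse(response.body)["elements"]

# Collect the distinct "ref" values, e.g. ["I 278"]
refs = elements.map { |e| e.dig("tags", "ref") }.compact.uniq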
The "ref" information is not exposed in the SDK (will put this on the TODO list).
A workaround would be to look in the text advices (when using TTS) as this information is there (if you look at the $ref parameter, that contains the information you are looking for).
For more details regarding the text advices structure, see this blog article.

Submitting an array of arrays to grape rails

I managed to submit a simple array to my Grape API following this tip:
testing rails grape API with curl, params array
I'm building a simple workout tracker that generates a graph at the end from this array of workouts, which I guess should be passed with their keys.
But since what I'm trying to submit is a 2D array (the type is set to Array[Array]), I get the output below. This is the call I'm currently using:
curl --data 'workouts_array[]=1&workouts_array[]=2&workouts_array[]=3' http://localhost:3000/api/v1/workouts/workout.json
And it returns
{
"workouts_array": [
[
"1"
],
[
"2"
],
[
"3"
]
]
}
But I wish to pass something like workouts_array[]=[1][2][3]&workouts_array[]=[4][5][6]
so it returns
{
"workouts_array": [
[
"time": "1", "distance": "2", "calories": "3",
],
[
"time": "4", "distance": "5", "calories": "6",
]
]
}
Thank you for any help. I guess it's just my poor way of using curl.
I'm not sure that I correctly understood you, but for your case you can use this query:
workouts_array[0][]=1&workouts_array[0][]=2&workouts_array[0][]=3
&workouts_array[1][]=4&workouts_array[1][]=5&workouts_array[1][]=6
It should return something similar to:
[
[
"1",
"2",
"3"
],
[
"1",
"2",
"3"
]
]
This is an array of arrays.
You say you set the type to Array[Array], but you want to see an array of hashes; that's something different.
BTW, I prefer to use a JSON payload for this kind of thing, for example:
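A hypothetical version of that, reusing the endpoint from the question. Declare the Grape parameter as an array of hashes rather than Array[Array]:
params do
  requires :workouts_array, type: Array do
    requires :time
    requires :distance
    requires :calories
  end
end
and POST the payload as JSON:
curl -X POST http://localhost:3000/api/v1/workouts/workout.json \
  -H 'Content-Type: application/json' \
  -d '{"workouts_array": [{"time": "1", "distance": "2", "calories": "3"}, {"time": "4", "distance": "5", "calories": "6"}]}'
Grape parses JSON request bodies natively, so workouts_array arrives as an array of hashes with the time/distance/calories keys.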

Solr CollapsingQParserPlugin with group.facet=on style facet counts

I have a Solr index of about 5 million documents (8 GB) running on Solr 4.7.0. I require grouping in Solr, but find it to be too slow. Here is the group configuration:
group=on
group.facet=on
group.field=workId
group.ngroups=on
The machine has ample memory at 24 GB, 4 GB of which is allocated to Solr itself. Queries generally take about 1200 ms, compared to 90 ms with grouping turned off.
I ran across a plugin called the CollapsingQParserPlugin, which uses a filter query to remove all but one member of each group.
fq={!collapse field=workId}
It's designed for indexes that have a lot of unique groups; I have about 3.8 million. This approach is much, much faster, at about 120 ms. It's a beautiful solution for me except for one thing: because it filters out the other members of the group, only facets from the representative document are counted. For instance, if I have the following three documents:
"docs": [
{
"id": "1",
"workId": "abc",
"type": "book"
},
{
"id": "2",
"workId": "abc",
"type": "ebook"
},
{
"id": "3",
"workId": "abc",
"type": "ebook"
}
]
once collapsed, only the top one shows up in the results. Because the other two get filtered out, the facet counts look like
"type": ["book":1]
instead of
"type": ["book":1, "ebook":1]
Is there a way to get group.facet counts using the collapse filter query?
According to Yonik Seeley, the correct group facet counts can be gathered using the JSON Facet API. His comments can be found at:
https://issues.apache.org/jira/browse/SOLR-7036?focusedCommentId=15601789&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15601789
I tested out his method and it works great. I still use the CollapsingQParserPlugin to collapse the results, but I exclude that filter when counting up the facets, like so:
fq={!tag=workId}{!collapse field=workId}
json.facet={
type: {
type: terms,
field: type,
facet: {
workCount: "unique(workId)"
},
domain: {
excludeTags: [workId]
}
}
}
And the result:
{
"facets": {
"count": 3,
"type": {
"buckets": [
{
"val": "ebook",
"count": 2,
"workCount": 1
},
{
"val": "book",
"count": 1,
"workCount": 1
}
]
}
}
}
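For reference, a hedged sketch of issuing that request from Ruby with the rsolr gem (the Solr URL and collection name are placeholders):
require "rsolr" # gem install rsolr

solr = RSolr.connect(url: "http://localhost:8983/solr/mycollection")

response = solr.get("select", params: {
  q: "*:*",
  fq: "{!tag=workId}{!collapse field=workId}",
  "json.facet" => '{type:{type:terms,field:type,facet:{workCount:"unique(workId)"},domain:{excludeTags:[workId]}}}'
})

response["facets"]["type"]["buckets"]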
I was unable to find a way to do this with Solr or plugin configuration alone, so I developed a workaround to effectively get group facet counts while still using the CollapsingQParserPlugin.
I do this by making a duplicate of each field I'll be faceting on and making sure all the facet values for the entire group are present in every document, like so:
"docs": [
{
"id": "1",
"workId": "abc",
"type": "book",
"facetType": [
"book",
"ebook"
]
},
{
"id": "2",
"workId": "abc",
"type": "ebook",
"facetType": [
"book",
"ebook"
]
},
{
"id": "3",
"workId": "abc",
"type": "ebook",
"facetType": [
"book",
"ebook"
]
}
]
When I ask Solr to generate facet counts, I use the new field:
facet.field=facetType
This ensures that all facet values are accounted for and that the counts represent groups. But when I use a filter query, I revert to using the original field:
fq=type:book
This way the correct document is chosen to represent the group.
I know this is a dirty, complex way to make it work, but it does work, and that's what I needed. It also requires the ability to preprocess your documents before insertion into Solr, which calls for some development; a sketch of that step follows. If anyone has a simpler solution, I would still love to hear it.
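A minimal Ruby sketch of that preprocessing step, assuming docs is an array of hashes shaped like the sample documents above (the file name and the facetType field name mirror the example):
require "json"

# Documents shaped like the sample above.
docs = JSON.parse(File.read("docs.json"))

# Give every document in a group the union of the group's "type" values.
docs.group_by { |d| d["workId"] }.each_value do |group|
  types = group.map { |d| d["type"] }.uniq
  group.each { |d| d["facetType"] = types }
end

# docs is now ready to be indexed with the extra facetType field.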
