Submitting an array of arrays to grape rails - ruby-on-rails

I managed to submit a simple array to my Grape API following this tip:
testing rails grape API with curl, params array
I'm building a simple workout tracker that generates a graph at the end from an array of workouts, which I guess should be passed with their keys.
But since what I'm trying to do is a 2D array, I get the output below. The type is set to Array[Array], and this is the call I'm currently using:
curl --data 'workouts_array[]=1&workouts_array[]=2&workouts_array[]=3' http://localhost:3000/api/v1/workouts/workout.json
And it returns
{
"workouts_array": [
[
"1"
],
[
"2"
],
[
"3"
]
]
}
But I wish to pass something like workouts_array[]=[1][2][3]&workouts_array[]=[4][5][6]
so that it returns:
{
"workouts_array": [
{
"time": "1", "distance": "2", "calories": "3"
},
{
"time": "4", "distance": "5", "calories": "6"
}
]
}
Thank you for any help; I guess it's just my poor way of using curl.

I'm not sure that I correctly understood you, but
for your case you can use this query:
workouts_array[0][]=1&workouts_array[0][]=2&workouts_array[0][]=3
&workouts_array[1][]=4&workouts_array[1][]=5&workouts_array[1][]=6
(note the extra [] so repeated values are appended rather than overwritten)
It should return something similar to:
[
[
"1",
"2",
"3"
],
[
"4",
"5",
"6"
]
]
This is an array of arrays.
You say you set the type to Array[Array], but you want to see an array of hashes; those are two different things.
BTW, I prefer to use a JSON payload for this kind of thing.
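For example, a minimal sketch of such a JSON payload (the workouts_array key and the time/distance/calories field names come from the question):

```ruby
require "json"

# Build the structure the asker wants: an array of hashes, one per workout.
payload = {
  "workouts_array" => [
    { "time" => "1", "distance" => "2", "calories" => "3" },
    { "time" => "4", "distance" => "5", "calories" => "6" }
  ]
}.to_json

# Sent with e.g.:
#   curl -H 'Content-Type: application/json' --data "$payload" \
#     http://localhost:3000/api/v1/workouts/workout.json
# Grape parses the JSON body, so params[:workouts_array] becomes an
# array of hashes rather than an array of arrays.
parsed = JSON.parse(payload)
puts parsed["workouts_array"].first["time"]   # => "1"
```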

Related

geoJSON location/geometry instance

I have a JSON file describing a business, and the location is specified as GeoJSON. Without using a Feature, I just want to specify that the location is always a GeoJSON geometry. So for the following instance, do I need to use the key "geometry", or can I use any key such as "GeometryInstance" or "InstanceLocation" instead?
"geometry":{"type": "Polygon", "coordinates": [
[
[
143.6033048818323,
-38.76987023813212
],
[
143.605236072323,
-38.76966947968941
],
[
143.60497858025758,
-38.76839799643248
],
[
143.6029830167505,
-38.768615488596204
],
[
143.6033048818323,
-38.76987023813212
]
]
]}

Getting the Highway name - Skobbler

I need to get the highway name on which the user is currently navigating.
That can be done in navigation mode, getting it from
-(void)routingService:(SKRoutingService *)routingService didChangeCurrentStreetName:(NSString *)currentStreetName streetType:(SKStreetType)streetType countryCode:(NSString *)countryCode
So, when I was testing my app yesterday, I was on the highway, and yes, Skobbler did recognise that I was on one, and yes, I got the highway name back.
It was "Brooklyn-Queens Expressway".
But Brooklyn-Queens Expressway is actually the name of the I-278 Interstate highway, and all the functions I will later have to use need the highway name in the I-nnn format.
Here is the map photo of what I mean
So, is there a way to get streetName in that I-nnn format when the streetType is recognised as an interstate highway?
Or is there any Open Streetmap database we could consult? I wasn't able to find anything on OSM Wiki.
I don't know about the Skobbler SDK, but if an online query is available and you have the approximate geographical area and the name of the motorway, you can use the Overpass API (http://wiki.openstreetmap.org/wiki/Overpass_API) to query the OpenStreetMap database for the highway reference.
For example, the following query (for a particular bbox which contains a small section of the highway):
[out:json]
[timeout:25]
;
(
way
["highway"="motorway"]
["name"="Brooklyn-Queens Expressway"]
(40.73483602685421,-73.91463160514832,40.73785205632046,-73.9096748828888);
);
out body qt;
returns (with some key-value pairs omitted for simplicity):
{
"version": 0.6,
"generator": "Overpass API",
"osm3s": {
"timestamp_osm_base": "2015-09-18T20:21:02Z",
"copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
},
"elements": [
{
"type": "way",
"id": 46723482,
"nodes": [
488264429,
488264444,
488264461,
488264512,
488264530,
488264541,
597315979
],
"tags": {
"bicycle": "no",
"bridge": "yes",
"foot": "no",
"hgv": "designated",
"highway": "motorway",
"horse": "no",
"lanes": "3",
"layer": "1",
"name": "Brooklyn-Queens Expressway",
"oneway": "yes",
"ref": "I 278",
"sidewalk": "none",
}
},
{
"type": "way",
"id": 46724225,
"nodes": [
597315978,
488242888,
488248526,
488248544,
488248607
],
"tags": {
"bicycle": "no",
"bridge": "yes",
"foot": "no",
"hgv": "designated",
"highway": "motorway",
"horse": "no",
"lanes": "3",
"layer": "1",
"name": "Brooklyn-Queens Expressway",
"oneway": "yes",
"ref": "I 278",
"sidewalk": "none",
}
}
]
}
These are two sections of the road in the OSM database. In the US, the "ref" tag for Interstates is in the form "I XXX" (see http://wiki.openstreetmap.org/wiki/Interstate_Highways and note the format for co-location). You can retrieve the Interstate name accordingly.
You can try the above query in overpass-turbo (a UI for the service) at http://overpass-turbo.eu/s/bxi (Press RUN and the DATA tab for the returned data, and pan the map for query in another bbox).
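Extracting the reference from such a response is then a matter of reading the "ref" tag; a sketch in Ruby, using a trimmed copy of the response above (the "I 278" to "I-278" reformatting assumes a plain space separator, as in the OSM tagging convention):

```ruby
require "json"

# A trimmed version of the Overpass API response shown above.
response = <<~JSON
  {
    "elements": [
      { "type": "way", "id": 46723482,
        "tags": { "highway": "motorway",
                  "name": "Brooklyn-Queens Expressway",
                  "ref": "I 278" } }
    ]
  }
JSON

# Pull the "ref" tag out of each returned way, de-duplicated.
elements = JSON.parse(response)["elements"]
refs = elements.map { |e| e.dig("tags", "ref") }.compact.uniq

# OSM stores Interstates as "I 278"; converting to the "I-nnn" form the
# question asks for is a simple substitution.
formatted = refs.map { |r| r.sub(/\AI /, "I-") }
puts formatted.inspect   # => ["I-278"]
```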
The "ref" information is not exposed in the SDK (will put this on the TODO list).
A workaround would be to look at the text advices (when using TTS), as this information is there: the $ref parameter contains the information you are looking for.
For more details regarding the text advices structure, see this blog article.

How to print Grails params without the flattened keys

params in Grails is a GrailsParameterMap that automatically builds up sub-Maps by splitting parameter names that contain dots.
For example, if my query string is ?one.two.three=hello then Grails gives me a params variable that contain both the flattened (original) and the re-structured values:
params == [
"one.two.three": "hello",
one: [
"two.three": "hello",
two: [
three: "hello",
],
],
// plus "controller parameters"
]
If we ignore the additional "controller parameters", such as controller and action, how can I get a clean version of this, without the original flattened parameters?
[
one: [
two: [
three: "hello"
]
]
]
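Outside of Grails, the transformation itself is easy to sketch: assuming the nested map has already been built (as in the example above), recursively dropping every key that contains a dot leaves only the re-structured version. A Ruby sketch of that walk (in Grails you would do the same over the GrailsParameterMap in Groovy):

```ruby
# Recursively drop the flattened "a.b.c"-style keys, keeping only the
# nested maps that GrailsParameterMap built from them.
def unflattened(params)
  params.each_with_object({}) do |(key, value), clean|
    next if key.to_s.include?(".")            # skip flattened originals
    clean[key] = value.is_a?(Hash) ? unflattened(value) : value
  end
end

params = {
  "one.two.three" => "hello",
  "one" => {
    "two.three" => "hello",
    "two" => { "three" => "hello" }
  }
}

p unflattened(params)
# => {"one"=>{"two"=>{"three"=>"hello"}}}
```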

Indexing into JSON

This seems like it should be a very easy question, but I'm having some trouble with it. I'm creating my own JSON and I need to index into it in order to seed my database. I've indexed into JSON before with very little difficulty, but for some reason I can't index into my own. That makes me think there might be an issue with my JSON itself, but I can't see anything that would cause a problem. I appreciate your assistance!
My JSON:
{
"workouts": [
{
"level": "1",
"exercises": [
{
"name": "box jumps",
"difficulty": "3",
"reps": "10",
"sets": "3",
"requirements": [
"sturdy box at least two feet high"
],
"body-part": "quadriceps",
"description": "Plant both feet should length apart and jump onto the box. Once on the box, stand fully upright.",
"pounds": "1"
},
{
"name": "v-press",
"difficulty": "4",
"reps": "12",
"sets": "3",
"requirements": [
"mat"
],
"body-part": "abdominals",
"description": "Lie flat on the ground, then raise your legs and arms slightly off the matt.",
"pounds": "1"
}
]
},
{
"level": "2",
"exercises": [
{
"name": "assisted pullups",
"difficulty": "1",
"reps": "12",
"sets": "3",
"requirements": [
"Assisted Pullup Machine"
],
"body-part": "biceps",
"description": "Kneel on the machine and adjust the weight to your needs",
"pounds": "50"
},
{
"name": "assisted dips",
"difficulty": "1",
"reps": "12",
"sets": "3",
"requirements": [
"Assisted Dips Machine"
],
"body-part": "triceps",
"description": "Kneel on the machine and adjust the weight to your needs",
"pounds": "50"
}
]
}
]
}
In pry, I do the following:
require "json"
f = File.open("workout.json")
mylist = JSON.parse(f.read)
When I try to index in, I get various errors (syntax error, no method errors, nil). Below are some examples of indexing I have attempted.
mylist.workouts
mylist[:workouts]
mylist[0]
mylist[:workouts][0][:level]
Thanks in advance!
The keys in the Hash after parsing the JSON data are strings, not symbols. Try this:
mylist['workouts']
mylist['workouts'][0]['level']
A couple of points to remember:
Strings and Symbols are not interchangeable as keys in a Hash. They are different objects and hence different keys.
To get the behaviour of params in a Rails controller, where strings and symbols are interchangeable, you need to instantiate an instance of HashWithIndifferentAccess. It is a separate utility class provided by Rails and is not part of the Ruby stdlib.
The gem jbuilder is not a JSON parser; it is a JSON creator. It is used to build JSON structures from Ruby objects, mostly in views for JSON responses. It is analogous to how ERB is used for HTML responses.
JSON has been part of the Ruby stdlib for some time now (i.e. JSON parsing and serialization does not require any additional gems).
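To illustrate the string-vs-symbol point, here is a minimal session using a stripped-down version of the JSON above:

```ruby
require "json"

json = '{"workouts":[{"level":"1"}]}'

mylist = JSON.parse(json)
puts mylist["workouts"][0]["level"]   # => "1"
puts mylist[:workouts].inspect        # => nil (a Symbol is a different key)

# If you prefer symbol keys, JSON.parse accepts symbolize_names:
symlist = JSON.parse(json, symbolize_names: true)
puts symlist[:workouts][0][:level]    # => "1"
```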

Solr CollapsingQParserPlugin with group.facet=on style facet counts

I have a Solr index of about 5 million documents at 8GB using Solr 4.7.0. I require grouping in Solr, but find it to be too slow. Here is the group configuration:
group=on
group.facet=on
group.field=workId
group.ngroups=on
The machine has ample memory at 24GB and 4GB is allocated to Solr itself. Queries are generally taking about 1200ms compared to 90ms when grouping is turned off.
I ran across a plugin called CollapsingQParserPlugin, which uses a filter query to remove all but one document of each group.
fq={!collapse field=workId}
It's designed for indexes that have a lot of unique groups. I have about 3.8 million. This approach is much much faster at about 120ms. It's a beautiful solution for me except for one thing. Because it filters out other members of the group, only facets from the representative document are counted. For instance, if I have the following three documents:
"docs": [
{
"id": "1",
"workId": "abc",
"type": "book"
},
{
"id": "2",
"workId": "abc",
"type": "ebook"
},
{
"id": "3",
"workId": "abc",
"type": "ebook"
}
]
once collapsed, only the top one shows up in the results. Because the other two get filtered out, the facet counts look like
"type": ["book":1]
instead of
"type": ["book":1, "ebook":1]
Is there a way to get group.facet counts using the collapse filter query?
According to Yonik Seeley, the correct group facet counts can be gathered using the JSON Facet API. His comments can be found at:
https://issues.apache.org/jira/browse/SOLR-7036?focusedCommentId=15601789&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15601789
I tested out his method and it works great. I still use the CollapsingQParserPlugin to collapse the results, but I exclude the filter when counting up the facets like so:
fq={!tag=workId}{!collapse field=workId}
json.facet={
type: {
type: terms,
field: type,
facet: {
workCount: "unique(workId)"
},
domain: {
excludeTags: [workId]
}
}
}
And the result:
{
"facets": {
"count": 3,
"type": {
"buckets": [
{
"val": "ebook",
"count": 2,
"workCount": 1
},
{
"val": "book",
"count": 1,
"workCount": 1
}
]
}
}
}
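Reading the group-level counts back out of that response is straightforward; a Ruby sketch, using the field and key names from the response above:

```ruby
require "json"

# A compact copy of the JSON Facet API response shown above.
response = <<~JSON
  { "facets": { "count": 3, "type": { "buckets": [
      { "val": "ebook", "count": 2, "workCount": 1 },
      { "val": "book",  "count": 1, "workCount": 1 } ] } } }
JSON

# "workCount" (unique workId per bucket) is the per-group facet count,
# which is what group.facet=on would have reported.
buckets = JSON.parse(response).dig("facets", "type", "buckets")
counts  = buckets.to_h { |b| [b["val"], b["workCount"]] }
p counts   # => {"ebook"=>1, "book"=>1}
```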
I was unable to find a way to do this with Solr or plugin configurations, so I developed a workaround to effectively create group facet counts while still using the CollapsingQParserPlugin.
I do this by making a duplicate of the fields I'll be faceting on and making sure all facet values for the entire group are in each document like so:
"docs": [
{
"id": "1",
"workId": "abc",
"type": "book",
"facetType": [
"book",
"ebook"
]
},
{
"id": "2",
"workId": "abc",
"type": "ebook",
"facetType": [
"book",
"ebook"
]
},
{
"id": "3",
"workId": "abc",
"type": "ebook",
"facetType": [
"book",
"ebook"
]
}
]
When I ask Solr to generate facet counts, I use the new field:
facet.field=facetType
This ensures that all facet values are accounted for and that the counts represent groups. But when I use a filter query, I revert back to using the old field:
fq=type:book
This way the correct document is chosen to represent the group.
I know this is a dirty, complex way to make it work, but it does work and that's what I needed. Also it requires the ability to query your documents before insertion into Solr, which calls for some development. If anyone has a simpler solution I would still love to hear it.
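The duplication step this workaround describes can be sketched as a small pre-processing pass over the documents before they are sent to Solr (document shape taken from the examples above):

```ruby
docs = [
  { "id" => "1", "workId" => "abc", "type" => "book"  },
  { "id" => "2", "workId" => "abc", "type" => "ebook" },
  { "id" => "3", "workId" => "abc", "type" => "ebook" }
]

# Collect every "type" value seen within each workId group...
types_by_work = docs.group_by { |d| d["workId"] }
                    .transform_values { |ds| ds.map { |d| d["type"] }.uniq.sort }

# ...and copy the group-wide set onto every document as "facetType",
# so faceting on facetType counts groups rather than documents.
docs.each { |d| d["facetType"] = types_by_work[d["workId"]] }

p docs.first["facetType"]   # => ["book", "ebook"]
```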
