Parsing a multi-level JSON file in Go

I need to parse and get values from fields in a JSON file.
[{"id": 27}, {"id": 0, "label": "Label 0"}, null, {"id": 93}, {"id": 85}, {"id": 54}, null, {"id": 46, "label": "Label 46"}]
Though I can work with a single level, I am at a loss as to how to iterate through the levels here.
I have tried looking for an answer on Google, various help sites, and even Stack Overflow, but I could not find any example that might help me work with a multi-level JSON byte array.
I hope somebody can lead me to understand and work with it.
Thanks in advance.

Just parse the JSON into an array of structs:

package main

import (
	"encoding/json"
	"fmt"
)

type Item struct {
	Id    int
	Label string
}

func main() {
	data := []byte(`[{"id": 27}, {"id": 0, "label": "Label 0"}, null, {"id": 93}, {"id": 85}, {"id": 54}, null, {"id": 46, "label": "Label 46"}]`)

	// A slice of pointers lets the null entries come through as nil.
	var val []*Item
	if err := json.Unmarshal(data, &val); err != nil {
		fmt.Printf("Error: %s\n", err)
		return
	}
	for _, it := range val {
		fmt.Printf("%#v\n", it)
	}
}
I hope this helps.
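Since the title mentions multiple levels: nested JSON is handled the same way, by nesting struct (or slice) types so that the shape of the types mirrors the shape of the document. Here is a minimal sketch using a made-up outer object (the name/items fields are hypothetical, not from the question):

package main

import (
	"encoding/json"
	"fmt"
)

type Item struct {
	Id    int    `json:"id"`
	Label string `json:"label"`
}

// Document is a hypothetical outer level that contains a list of items.
type Document struct {
	Name  string  `json:"name"`
	Items []*Item `json:"items"`
}

func main() {
	data := []byte(`{"name": "example", "items": [{"id": 27}, {"id": 46, "label": "Label 46"}]}`)

	var doc Document
	if err := json.Unmarshal(data, &doc); err != nil {
		fmt.Printf("Error: %s\n", err)
		return
	}

	fmt.Println(doc.Name)
	for _, it := range doc.Items {
		// Each deeper level is reached through the field of its parent struct.
		fmt.Printf("%#v\n", it)
	}
}

json.Unmarshal fills the whole tree in one call, so there is no need to iterate level by level yourself.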

Related

OPA masking a dynamic array field

I'm trying to apply masking to an input and result field that is part of an array, and the size of the array is dynamic. The documentation instructs you to provide an absolute array index, which is not possible in this use case. Is there an alternative?
For example, how would one mask the age field of all the students in the input document?
Input:
"students" : [
{
"name": "Student 1",
"major": "Math",
"age": "18"
},
{
"name": "Student 2",
"major": "Science",
"age": "20"
},
{
"name": "Student 3",
"major": "Entrepreneurship",
"age": "25"
}
]
If you just want to generate a copy of the input that has a field (or set of fields) removed, you can use json.remove. The trick is to use a comprehension to compute the list of paths to remove. For example:
paths_to_remove := [sprintf("/students/%v/age", [x]) | some x; input.students[x]]
result := json.remove(input, paths_to_remove)
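For the three-student input above, the comprehension evaluates to the following list of paths, and json.remove returns a copy of the input with those values stripped out:
["/students/0/age", "/students/1/age", "/students/2/age"]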
If you are trying to mask fields from the input document in the decision log using the Decision Log Masking feature then you would write something like:
package system.log

mask[x] {
    some i
    input.input.students[i]
    x := sprintf("/input/students/%v/age", [i])
}
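Against the same input, this rule generates the paths /input/students/0/age, /input/students/1/age, and /input/students/2/age, so each student's age is erased from the input that gets recorded in the decision log.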

Rego Validation Array Compare

I am new to Rego and I am trying to write a policy to check whether a set of rules already exists on certain Azure NSGs.
Input test:
{
  "name": "<name>",
  "id": "<id>",
  "etag": "<etag>",
  "type": "<resourcetype>",
  "location": "<location>",
  "properties": {
    "provisioningState": "Succeeded",
    "resourceGuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "securityRules": [
      {
        "name": "<rule name>",
        "id": "<id>",
        "etag": "<etag>",
        "type": "<type>",
        "properties": {
          "provisioningState": "Succeeded",
          "description": "....",
          "protocol": "*",
          "sourcePortRange": "*",
          "destinationPortRange": "53",
          "sourceAddressPrefix": "*",
          "access": "Allow",
          "priority": 1,
          "direction": "Outbound",
          "sourcePortRanges": [],
          "destinationPortRanges": [],
          "sourceAddressPrefixes": [],
          "destinationAddressPrefixes": [
            "10.0.0.1",
            "10.0.0.2",
            "10.0.0.3"
          ]
        }
      }
    ]
  }
}
I wrote a custom function to check the values. Below is the code that I am testing in the Rego Playground:
existRule(rule) = true {
    input.properties.securityRules[i].name == rule.name
    input.properties.securityRules[i].properties.provisioningState == rule.provisioningState
    input.properties.securityRules[i].properties.description == rule.description
    input.properties.securityRules[i].properties.protocol == rule.protocol
    input.properties.securityRules[i].properties.access == rule.access
    input.properties.securityRules[i].properties.priority == rule.priority
    input.properties.securityRules[i].properties.direction == rule.direction
}
rule = {
    "name": "name",
    "provisioningState": "Succeeded",
    "description": "description",
    "protocol": "*",
    "sourcePortRange": "*",
    "destinationPortRange": "1",
    "sourceAddressPrefix": "*",
    "access": "Allow",
    "priority": 1,
    "direction": "Outbound",
    "destinationAddressPrefix": "",
    "sourcePortRanges": [],
    "destinationPortRanges": [],
    "sourceAddressPrefixes": [],
    "destinationAddressPrefixes": [
        "10.0.0.1",
        "10.0.0.2",
        "10.0.0.3",
        "10.0.0.4"
    ]
}
rules {
    existRule(rule)
}
This is working for the properties that I defined above; however, I am having an issue when trying to compare arrays, in this example destinationAddressPrefixes.
I have tried the following:
test1 { input.properties.securityRules[i].properties.destinationAddressPrefixes == rule.destinationAddressPrefixes }
This always returns false.
With the following line I can check one destination address from the input against a specific IP; however, I cannot compare all of the addresses in the input against the ones in the rule defined in the example:
onerule {input.properties.securityRules[i].properties.destinationAddressPrefixes[_] == "10.0.0.1"}
test2 {input.properties.securityRules[i].properties.destinationAddressPrefixes[_] == rule.destinationAddressPrefixes[j]}
test3 {input.properties.securityRules[i].properties.destinationAddressPrefixes[j] == rule.destinationAddressPrefixes[k]}
test2 and test3 always return true, even when the rule contains an address that is not in the input. I also tried an array difference:
x := input.properties.securityRules[i].properties.destinationAddressPrefixes - rule.destinationAddressPrefixes
but I get the following error:
rego_type_error: minus: invalid argument(s) have: (any, array<string,
string, string, string, string, string, string, string, string,
string, string, string, string, string>, ???) want: (any<number,
set[any]>, any<number, set[any]>, any<number, set[any]>)
Do you know if it is feasible to achieve what I am looking for? Or is there a different way to loop over the array and compare the values one by one?
What does rule3400.destinationAddressPrefixes look like?
If you want to compare for exact equality between two arrays, == should suffice.
If all elements are known to be unique and order doesn't matter (which seems to be the case in your example) you could convert the arrays to sets using a set comprehension. This makes it possible to subtract one set from another such as you tried to do with arrays directly.
to_set(arr) = {x | x := arr[_]}
input_prefixes := to_set(input.properties.securityRules[i].properties.destinationAddressPrefixes)
destination_prefixes := to_set(rule3400.destinationAddressPrefixes)
x := input_prefixes - destination_prefixes
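With both converted to sets, x holds any prefixes that appear in the input but not in the expected rule; if you also check the difference in the other direction and both come back empty, the two lists contain exactly the same addresses regardless of order.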

How to get only matched data from a nested class using query builder in Elasticsearch?

I am trying to get only the matched data from a nested array in an Elasticsearch document, but I am not able to: the whole nested array is being returned in the output.
This is my query:
QueryBuilders.nestedQuery("questions",
QueryBuilders.boolQuery()
.must(QueryBuilders.matchQuery("questions.questionTypeId", quesTypeId)), ScoreMode.None)
.innerHit(new InnerHitBuilder());
I am using QueryBuilders to fetch data from the nested class. It works, but I am not able to get only the matched data.
Request body:
{
  "questionTypeId": "MCMC"
}
When questionTypeId = "MCMC", this is the output I am getting. Here I want to exclude the nested entry for which questionTypeId = "SCMC".
Output:
{
  "id": "46",
  "subjectId": 1,
  "topicId": 1,
  "subtopicId": 1,
  "languageId": 1,
  "difficultyId": 4,
  "isConceptual": false,
  "examCatId": 3,
  "examId": 1,
  "usedIn": 1,
  "questions": [
    {
      "id": "46_31",
      "pid": 31,
      "questionId": "QID41336691",
      "childId": "CID1",
      "questionTypeId": "MCMC",
      "instruction": "This is a single correct multiple choice question.",
      "question": "Who holds the most english premier league titles?",
      "solution": "Manchester United",
      "status": 1000,
      "questionTranslation": []
    },
    {
      "id": "46_33",
      "pid": 33,
      "questionId": "QID41336677",
      "childId": "CID1",
      "questionTypeId": "SCMC",
      "instruction": "This is a single correct multiple choice question.",
      "question": "Who holds the most english premier league titles?",
      "solution": "Manchester United",
      "status": 1000,
      "questionTranslation": []
    }
  ]
}
As you have tagged this with spring-data-elasticsearch:
Support for returning inner hits was recently added in version 4.1.M1 and will be included in the next released version. In a SearchHit you will then get the complete top-level document, but the innerHits property will contain only the matching inner hits.

Parse JSON - Google Apps Script

I am trying to collect the JSON response from a webhook in Google Apps Script and output the result to Google Sheets.
Ideally I want to parse the JSON data and output it to columns in the spreadsheet (each time the webhook fires, the data is entered on the next row).
Here is the example data that is collected from the webhook:
{parameter={environment=prod, payload={"TEST":"WARNING: THIS IS A TEST PAYLOAD ONLY. THESE IDs ARE NOT VALID FOR THIS RETAILER","id":"b53744ce-2d21-11e2-8057-080027706aa2","retailer_id":"9a5521c3-2d20-11e2-8057-080027706aa2","customer_code":"JOEB","balance":"0.000","points":0,"note":"<p>This is a <em>really<\/em> nice customer<\/p>","year_to_date":"6750.00000","sex":"M","date_of_birth":"1986-10-12","custom_field_1":"foo","custom_field_2":"bar","custom_field_3":"baz","custom_field_4":"","updated_at":"2012-11-14 17:47:57","created_at":"2012-11-13 12:35:57","contact":{"first_name":"Joe","last_name":"Bloggs","company_name":"Acme Limited","phone":"021 555 55555","mobile":"","fax":"","email":"joe.bloggs#example.com","twitter":"#joebloggs","website":"http:\/\/www.example.com\/","physical_address1":"12 Jimmy Street","physical_address2":"","physical_suburb":"Johnsonville","physical_city":"Jamestown","physical_postcode":"65225","physical_state":"WA","physical_country_id":"AU","postal_address1":"12 Jimmy Street","postal_address2":"","postal_suburb":"Johnsonville","postal_city":"Jamestown","postal_postcode":"65225","postal_state":"WA","postal_country_id":"AU"},"contact_first_name":"Joe","contact_last_name":"Bloggs"}, retailer_id=b8ca3a6d-ea18-11e4-ee41-1ca4a74ff9da, type=customer.update, domain_prefix=123}, contextPath=, contentLength=1859, queryString=null, parameters={environment=[Ljava.lang.Object;#53432cae, payload=[Ljava.lang.Object;#2af01d0d, retailer_id=[Ljava.lang.Object;#4282d52a, type=[Ljava.lang.Object;#5a178fb1, domain_prefix=[Ljava.lang.Object;#107bfe01}, postData=FileUpload}
Here is my existing Apps Script code, which simply dumps the raw data into the next row:
function doPost(e) {
  return handleResponse(e);
}

function handleResponse(e) {
  var ss = SpreadsheetApp.openById('XYZ');
  var sheet = ss.getActiveSheet();
  var lastRow = sheet.getLastRow() + 1;
  sheet.getRange('A' + lastRow).setValue(e);
}
I have experimented with JSON.parse but seem to be getting nowhere fast.
Many Thanks
Not all of it is in JSON format. You can check it here. Below I've separated what I believe to be the JSON formatted data.
{
  "TEST": "WARNING: THIS IS A TEST PAYLOAD ONLY. THESE IDs ARE NOT VALID FOR THIS RETAILER",
  "id": "b53744ce-2d21-11e2-8057-080027706aa2",
  "retailer_id": "9a5521c3-2d20-11e2-8057-080027706aa2",
  "customer_code": "JOEB",
  "balance": "0.000",
  "points": 0,
  "note": "<p>This is a <em>really</em> nice customer</p>",
  "year_to_date": "6750.00000",
  "sex": "M",
  "date_of_birth": "1986-10-12",
  "custom_field_1": "foo",
  "custom_field_2": "bar",
  "custom_field_3": "baz",
  "custom_field_4": "",
  "updated_at": "2012-11-14 17:47:57",
  "created_at": "2012-11-13 12:35:57",
  "contact": {
    "first_name": "Joe",
    "last_name": "Bloggs",
    "company_name": "Acme Limited",
    "phone": "021 555 55555",
    "mobile": "",
    "fax": "",
    "email": "joe.bloggs#example.com",
    "twitter": "#joebloggs",
    "website": "http://www.example.com/",
    "physical_address1": "12 Jimmy Street",
    "physical_address2": "",
    "physical_suburb": "Johnsonville",
    "physical_city": "Jamestown",
    "physical_postcode": "65225",
    "physical_state": "WA",
    "physical_country_id": "AU",
    "postal_address1": "12 Jimmy Street",
    "postal_address2": "",
    "postal_suburb": "Johnsonville",
    "postal_city": "Jamestown",
    "postal_postcode": "65225",
    "postal_state": "WA",
    "postal_country_id": "AU"
  },
  "contact_first_name": "Joe",
  "contact_last_name": "Bloggs"
}
I ran this little script just to see if I could get the data out properly.
function messingWithJson() {
  var html = '';
  var br = '<br />';
  var text = myUtilities.loadFile('jsonquestion.json'); // This just loads the text from a file
  var data = JSON.parse(text);
  html += br + Utilities.formatString('%s<br/>%s %s<br />%s', data.created_at, data.contact.first_name, data.contact.last_name, data.contact.company_name);
  html += '<br /><input type="button" value="Close" onClick="google.script.host.close();" />';
  var userInterface = HtmlService.createHtmlOutput(html).setWidth(800);
  SpreadsheetApp.getUi().showModelessDialog(userInterface, 'JSON Question');
}
And this was my output.
2012-11-13 12:35:57
Joe Bloggs
Acme Limited
And I was able to recover what I wanted. If you want some help with JSON you might like to check out these videos. Look towards the middle of the playlist for JSON.
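If the dump above is the raw event object passed to doPost, the JSON itself appears to arrive in the payload form field, so something along the lines of JSON.parse(e.parameter.payload) (assuming that field name) should give you an object whose properties you can write into individual columns instead of dumping the whole event into one cell.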

Importing relationships with Core Data and Magical Record

I am getting JSON data from a web service and am trying to store it in Core Data with MagicalRecord. I read the great post (and only documentation?) "Importing data made easy" by Saul Mora, but I still do not really understand what I need to do to get all the data into my entities.
Here is the JSON the web service returns:
{
  "ApiVersion": 4,
  "AvailableFileSystemLibraries": [
    {
      "Id": 10,
      "Name": "Movie Shares",
      "Version": "0.5.4.0"
    },
    {
      "Id": 11,
      "Name": "Picture Shares",
      "Version": "0.5.4.0"
    },
    {
      "Id": 5,
      "Name": "Shares",
      "Version": "0.5.4.0"
    },
    {
      "Id": 9,
      "Name": "Music Shares",
      "Version": "0.5.4.0"
    }
  ],
  "AvailableMovieLibraries": [
    {
      "Id": 3,
      "Name": "Moving Pictures",
      "Version": "0.5.4.0"
    },
    {
      "Id": 7,
      "Name": "MyVideo",
      "Version": "0.5.4.0"
    }
  ],
  "AvailableMusicLibraries": [
    {
      "Id": 4,
      "Name": "MyMusic",
      "Version": "0.5.4.0"
    }
  ],
  "AvailablePictureLibraries": [
    {
      "Id": 8,
      "Name": "Picture Shares",
      "Version": "0.5.4.0"
    }
  ],
  "AvailableTvShowLibraries": [
    {
      "Id": 6,
      "Name": "MP-TVSeries",
      "Version": "0.5.4.0"
    }
  ],
  "DefaultFileSystemLibrary": 5,
  "DefaultMovieLibrary": 3,
  "DefaultMusicLibrary": 4,
  "DefaultPictureLibrary": 0,
  "DefaultTvShowLibrary": 6,
  "ServiceVersion": "0.5.4"
}
The entities I want to store the data in are ServerInfo and BackendLibrary.
There is also a Server entity with a 1:1 relationship to ServerInfo.
What I want to do:
Store basic data (ApiVersion, ...) in ServerInfo. This I already got to work.
Store each object in AvailableXYLibraries in BackendLibrary (1:n relationship from ServerInfo).
Set type based on the XY part of AvailableXYLibraries, for example "movie" for AvailableMovieLibraries.
Set defaultLibrary to true if this library is referenced by DefaultXYLibrary.
Set providerId to servername + LibraryId as there are multiple servers that can have BackendLibraries with the same numeric ID.
Is this possible with MagicalRecord? I guess I need to implement some of the import hooks and set some user info keys, but everything I read doesn't really tell me where to set which user info key, or which method to implement where and how.
I hope this made sense and that you can give me some hints :) Thanks!
The structure of this data is quite a bit different from your Core Data model. What you'll most likely have to do is iterate a bit over the dictionary. That is, there are various collections of library data, e.g. AvailableFileSystemLibraries, AvailableMovieLibraries, etc. You'll have to get the array out of each of those keys, and then map your entities as I described in the article. In order to launch the process, you'll have to call
[BackendLibrary importFromArray:arrayFromDownloadedDictionary];
where the arrayFromDownloadedDictionary is each array in the example dictionary you've posted. Once you give the array to MagicalRecord, and provided the proper field mapping, MagicalRecord will then import and create all the entities for you at that point.
Make sure you map "Id" to BackendLibrary.id, "Name" to BackendLibrary.name, and "Version" to BackendLibrary.version.
