What's the `location` key in the opa rego resultset expression? can I get locations in input json that caused policy violation? - open-policy-agent

I'm using the Go rego package, and the rego.ResultSet, when marshalled, gives this:
[
  {
    "expressions": [
      {
        "value": {...},
        "text": "data",
        "location": { "row": 1, "col": 1 }
      }
    ]
  }
]
I intend to output the location(s) in the input JSON whose keys were responsible for failures, so that I can use this to build context for the errors.
We used JSON Schema earlier for validating JSON, and it would return the keys from the input that we could map to errors. https://www.jsonschemavalidator.net/
I suppose that since Rego supports far more complex decision making, where more than one key can be responsible for the final outcome, that could be the reason it doesn't point to a location in the input for failure context. Or am I missing something?

To answer the first question:
Every value parsed by OPA retains "location" information identifying where it came from in the source string/file. The location in the ResultSet is the location of the expression in the query that was passed when creating the rego.Rego object.
In your case, the query was "data", i.e., you referred to ALL of the documents in OPA (both base documents, which could have been loaded from outside, as well as virtual documents generated by any rules you loaded into OPA). The location of the expression in this case is not very interesting: row 1, column 1.
To answer your second question:
OPA does not currently have a reliable way of returning the location of JSON values in the input; however, this is something that would be valuable and could be added in the future.
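In the meantime, one client-side workaround is to index the input yourself before evaluation. The sketch below (plain Python, not part of the OPA API) builds a map from JSON Pointer-style paths to values; if your policy reports the offending path as part of a violation (a convention you would have to define in your own rules), you can then resolve it back to the value that triggered it:

```python
import json

def index_paths(value, path=""):
    """Recursively map JSON Pointer-style paths to the values in a document."""
    paths = {path or "/": value}
    if isinstance(value, dict):
        for key, child in value.items():
            paths.update(index_paths(child, f"{path}/{key}"))
    elif isinstance(value, list):
        for i, child in enumerate(value):
            paths.update(index_paths(child, f"{path}/{i}"))
    return paths

doc = json.loads('{"settings": {"isUserActive": false, "rollNumber": 10}}')
index = index_paths(doc)
# A policy that reports the offending path lets you recover the
# value (and surrounding context) that caused the violation:
print(index["/settings/rollNumber"])  # -> 10
```

This only gives you paths, not source row/column positions; recovering those would require a position-tracking parser on the client side.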

Related

What is the use of Data Model while API Parsing in swift

Why should we use a data model while parsing an API response, when we can simply get the response in the ViewController class itself?
Can someone tell me why we should use a data model to parse an API response?
Thanks in advance.
Imagine that you have below json response from server after calling an API:
{
  "settings": {
    "isUserActive": false,
    "isUserAdmin": false,
    "rollNumber": 10,
    "userId": 2,
    "userName": "John"
  },
  "status": 200,
  "message": "Success"
}
Now, how will you access a value if you are not using a data model? It will be like:
let name = response["settings"]["userName"]
(assuming that you have converted the JSON into a dictionary).
1) If you have to use the username in multiple places, you have to do the same thing again each time.
2) The JSON response above is simple, so it is easy to get a particular value, but imagine a JSON with deeply nested objects; trying to retrieve a value manually can be a pain.
3) If you are working in a team, there is a chance that a developer misspells a key, and that can take hours to debug. With a data model, the compiler throws an error if a property name is misspelled, avoiding such bugs.
4) You have to typecast every time you retrieve data from the dictionary. With a data model, you only need to typecast once, i.e., when parsing the JSON.
All of this pain can be avoided simply by using a data model: you parse the JSON once and can then access values as properties.
For example, see the settings JSON above; once you parse it into a data model it can be used like this:
let data = DataModel(json: jsonResponse)
data.settings.userName // "John"
data.settings.rollNumber // 10
data.status // 200
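The same pattern, sketched in Python purely for illustration (the field names mirror the JSON above; this is not Swift, just the idea of parsing once at the boundary into a typed model):

```python
from dataclasses import dataclass

@dataclass
class Settings:
    isUserActive: bool
    isUserAdmin: bool
    rollNumber: int
    userId: int
    userName: str

@dataclass
class Response:
    settings: Settings
    status: int
    message: str

    @classmethod
    def from_json(cls, raw: dict) -> "Response":
        # Parse (and typecast) once, at the boundary; afterwards every
        # access is a typed attribute instead of a stringly-keyed lookup.
        return cls(settings=Settings(**raw["settings"]),
                   status=raw["status"], message=raw["message"])

raw = {"settings": {"isUserActive": False, "isUserAdmin": False,
                    "rollNumber": 10, "userId": 2, "userName": "John"},
       "status": 200, "message": "Success"}
data = Response.from_json(raw)
print(data.settings.userName)  # -> John
```

A misspelled attribute here fails loudly at the access site, which is exactly the debugging benefit point 3) describes.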
This is a good tool for converting JSON into data models: Link
Hope it helps.

How to keep the single resource representation approach using OpenAPI spec

Reading this post (see: 3 How to use a single definition when...) about describing a REST API using the OpenAPI (Swagger) specification, you can see how to keep a single resource representation for adding/updating and getting a resource by using the readOnly property, instead of having one representation for getting (GET a collection item) and another for adding (POST to a collection). For example, in the following User representation, id is a read-only property, meaning it won't be sent in the representation when a user is created but will be there when a user is retrieved.
"User":
{
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64",
"readOnly": true
},
"company_data": {
"type": "object",
"properties": {
.
.
.
},
"readOnly": false
}
}
}
It is really clean and nice to keep the list of resource representations as short as possible, so I want to keep the single-representation approach. The problem I am facing is: how do I manage required when a property is mandatory for input only? Suppose I need to make company_data required when a user is created (POST /users/) but optional when a user is retrieved (GET /users/{user_id}). Is there any way in the OpenAPI specification to satisfy this need without losing the single resource representation?
From the Swagger-OpenAPI 2.0 spec, readOnly is defined as follows:
Declares the property as "read only". This means that it MAY be sent
as part of a response but MUST NOT be sent as part of the request.
Properties marked as readOnly being true SHOULD NOT be in the required
list of the defined schema. Default value is false.
Since the specification says that a read-only property should not be required, there are no defined semantics for what readonly + required should mean.
(It might have been reasonable to say that readonly + required means it's required in the response, but still excluded from the request. In fact there is an open issue to make this change, and it looks like it's under consideration for OpenAPI 3.0.)
Unfortunately there is no way for a single schema to make properties required in the request, but optional (or disallowed) in the response.
(Again, there's an open issue proposing a "write-only" modifier, possibly under consideration for the next release.)
For now, you would need to create different schemas for these different cases. As described here, you might be able to make these schemas a bit more DRY using allOf composition.
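As a sketch of that allOf approach (the UserBase/UserCreate names are invented for illustration; the Swagger 2.0 `#/definitions` reference style is assumed), the shared fields live in one base schema and the request-only required list is layered on top:

```json
{
  "UserBase": {
    "type": "object",
    "properties": {
      "id": { "type": "integer", "format": "int64", "readOnly": true },
      "company_data": { "type": "object" }
    }
  },
  "UserCreate": {
    "allOf": [
      { "$ref": "#/definitions/UserBase" },
      { "required": ["company_data"] }
    ]
  }
}
```

GET responses can reference UserBase directly, while POST bodies reference UserCreate, so the property definitions themselves are never duplicated.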

Change OrgUnit type via Valence

I'm attempting to change the type of one custom orgunit to another to correct an error that was made previously.
Doing: GET /d2l/api/lp/1.4/orgstructure/6770
Results in:
{
  "Identifier": "6770",
  "Name": "Art",
  "Code": "ART",
  "Type": {
    "Id": 101,
    "Code": "Department",
    "Name": "Department"
  }
}
I then take that data and run it through PUT /d2l/api/lp/1.4/orgstructure/6770 as per the documentation however I change the data to be:
{
  "Identifier": "6770",
  "Path": "/content/",
  "Name": "Art",
  "Code": "ART",
  "Type": {
    "Id": 103,
    "Code": "Discipline",
    "Name": "Discipline"
  }
}
Essentially, I am only adding a "Path" property (because the call returns a 404 without it) and changing the type from Department to Discipline. However, the object returned is identical to the original, without any of the type information updated.
Any suggestions on how to fix this? Deletion and recreation is not a feasible option at this point. Because both of these are "custom" org unit types, I would imagine an update like this shouldn't be difficult.
This is an oversight in the documentation, combined with a somewhat awkward evolution of the API. The documentation has now been updated to be more clear on this situation:
The update orgunit properties call can only update the Name, Code, or Path properties of an orgunit, not its Identifier (sensibly) or its Type. (I do not believe there is a way to update the type of an org unit once it is created, even in the Web UI for the LMS -- you likely have to create a new org unit, re-assign parent and child relationships as appropriate, and then drop the old unit.)
Unfortunately, you must provide a valid, good Path for the org unit, and the simple call to fetch a single org unit's properties won't tell you what the current one is.
If you don't already know what the path is (and should be), you'll need to call the route that fetches back a list of org unit records, find the exact one that matches yours (by Identifier, or by matching on several properties like Code and Name), and then send back the Path provided in that record. (Note that you're strongly advised to scope the list call by filtering on type, code, and/or name; the call is paged, so if you don't scope it down enough, you may have to page through it several times to find the particular org unit record in question.)
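The "find the matching record and reuse its Path" step can be sketched as a plain function, independent of the HTTP calls. This is illustrative Python, not Valence client code; the record shape and the example Path value are assumptions based on the GET response above and the answer's description of the list route:

```python
def build_update_body(org_units, identifier, new_name=None, new_code=None):
    """Find the org unit record matching `identifier` in a fetched list and
    build a PUT body that reuses its existing Path (which the API requires).

    `org_units` is assumed to be the list of record dicts returned by the
    list-org-units route; field names mirror the GET response above.
    """
    for unit in org_units:
        if unit["Identifier"] == identifier:
            return {
                "Identifier": unit["Identifier"],
                "Name": new_name or unit["Name"],
                "Code": new_code or unit["Code"],
                "Path": unit["Path"],   # must be the current, valid path
                "Type": unit["Type"],   # Type cannot be changed via this PUT
            }
    return None  # not found on this page; fetch the next page and retry

units = [{"Identifier": "6770", "Name": "Art", "Code": "ART",
          "Path": "/content/art/",
          "Type": {"Id": 101, "Code": "Department", "Name": "Department"}}]
body = build_update_body(units, "6770", new_name="Art & Design")
```

Because the call is paged, the `None` return is the signal to fetch the next page and search again.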

Will RestKit's dynamic mapping solve this complex JSON mapping?

I am using RestKit in my app, which needs to use an existing synchronization service that structures the incoming data this way:
{
  "timestamp": 000000000001,
  "status": 0,
  "syncData": [
    {
      "errors": [],
      "rows": [ {"name": "AAA", ...},
                {"name": "BBB", ...},
                ... ],
      "rtype": "FOO"
    },
    {
      "errors": [],
      "rows": [ {"id": 1, "description": "ZZZ", ...},
                {"id": 2, "description": "YYY", ...},
                ... ],
      "rtype": "BAR"
    },
    ...
  ]
}
I'm new to RestKit and trying to figure out the best way to solve this problem, and the complementary problem of sending this same structure of data back to the server. I'm using Core Data with RestKit.
I've mapped a SyncResponse entity to hold the top level data, and what I want to get out of this is a collection of FOO objects, "AAA", "BBB", etc., and a collection of BAR objects, "ZZZ", "YYY", etc., and a few dozen other collections of objects whose Class is indicated by the "rtype" field.
I've read the doc section on dynamic mapping and some example code and postings here, but I don't see how dynamic mapping works in this case as it is not of the {"a":{is A}, "b":{is B}} format. Is this possible using dynamic mapping, and if so, what concepts am I missing here?
Assuming it is possible, how do I, starting with collections of FOOs and BARs send data back, of course replacing the SyncResponse with something like a SyncUpdateRequest wrapper?
I don't think you'll be able to do this using a set of mappings alone.
Your best option may be to create your mappings for each item and one for the overall structure. The overall mapping just extracts the array as an NSArray of dictionaries. Once you have the array you can iterate over it yourself, check the type and then apply an RKMapperOperation to perform the mappings.
For sending your update request, I'd look at it as a quite separate thing. I'd build an array of dictionaries where the dictionaries have 'plain' key / value pairs for some information and 'complex' key / value pairs for the rows. Your request mapping is then in terms of this array of dictionaries (which cover the custom parts) and the rows (which should be the inverse of your response mapping for the class). Then RestKit should be able to handle it in the standard way (compared to the complexity of your response mapping above).
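The iterate-and-dispatch idea from the response side can be sketched in a few lines. This is language-agnostic pseudologic written as Python, not RestKit code; in the real app each dictionary branch would run an RKMapperOperation with the mapping registered for that class:

```python
sync_response = {
    "timestamp": 1, "status": 0,
    "syncData": [
        {"errors": [], "rtype": "FOO",
         "rows": [{"name": "AAA"}, {"name": "BBB"}]},
        {"errors": [], "rtype": "BAR",
         "rows": [{"id": 1, "description": "ZZZ"}]},
    ],
}

# Map each rtype to a constructor (stand-ins for the per-class mappings).
MAPPINGS = {
    "FOO": lambda row: ("Foo", row["name"]),
    "BAR": lambda row: ("Bar", row["id"], row["description"]),
}

def map_sync_data(response):
    """Iterate the outer array ourselves and dispatch on the rtype field."""
    objects = []
    for group in response["syncData"]:
        mapper = MAPPINGS[group["rtype"]]  # pick the mapping for this type
        objects.extend(mapper(row) for row in group["rows"])
    return objects
```

The key point is that the outer structure is handled by plain iteration, and only the per-row mapping is delegated to the framework.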

Indirection in dust.js

Is it possible to achieve variable indirection in dust.js - and therefore to be able to use map-like functionality?
Imagine I have the following context to pass to Dust:
{
"keys": [ "Foo", "Bar", "Baz" ],
"data": [{
"date": "20130101",
"values": {
"Foo": 1,
"Bar": 2,
"Baz": 3
}
}, {
"date": "20130102",
"values": {
"Foo": 4,
"Bar": 5,
"Baz": 6
}
}]
}
And I want to achieve the following output (it would actually be a table, but I've skipped the <tr><td> tags for brevity and replaced them with spaces and newlines):
Date Foo Bar Baz
20130101 1 2 3
20130102 4 5 6
I'm not sure how to loop over the keys property, and use each value x to look up data[i].values[x]. I can get the desired output by hardcoding the keys:
Date{~s}
{#keys}
{.}{~s}
{/keys}
{~n}
{#data}
{date}{~s}
{values.Foo}{~s}
{values.Bar}{~s}
{values.Baz}{~s}
{~n}
{/data}
but the keys will be determined dynamically, so I can't hardcode them into the template. Is there a way to replace the lines that say values.Foo etc., with something like the following:
{#data}
{date}{~s}
{#keys outerMap=values}
{outerMap.{.}}{~s}
{/keys}
{~n}
{/data}
This doesn't work as written; can I capture the output of {.} (the value of the current key) and dynamically use it as (part of) the property name to resolve?
So, the short answer is no, you can't do that in Dust.
Dust is meant to be a logicless language, and this is bordering on too much logic. Generally the answer to problems like this one is to update your JSON so it works correctly in Dust. This could be done easily in the example you have given, but could be much more difficult in a real world situation. Part of the power of Dust lies in its limitations.
If this answer doesn't work for you, you are welcome to submit a pull request on GitHub: https://github.com/linkedin/dustjs
As smfoote says, this is not supported out-of-the-box in Dust.
However, I've realised that handlers can involve some elements of logic, and so it was relatively straightforward to write a handler to do dereferencing:
deref: function(chunk, context, bodies, params) {
  return chunk.write(params.obj[params.prop]);
}
This handler takes an obj parameter, the object to use as an associative array, and a prop parameter, the key to look up in that object. With this handler function added to the context, I was then able to write the data part of the template as:
{#data}
{date}{~s}
{#keys}
{#deref obj=values prop=./}{~s}
{/keys}
{~n}
{/data}
This produces the correct output, by iterating over every key and passing it in as the property to read from the values object.
I appreciate that some might consider this inappropriate given Dust's philosophy. However I don't think it constitutes particularly complex logic; indeed, it's something that could quite conceivably be part of a templating framework. Given that smfoote's suggested alternative of updating my JSON isn't an option (the template does not know what the keys will be in advance, so there is no way to write "static" references here), this seems like a reasonable approach.
