I have a use case where, if a variable is already defined, I should return that value; otherwise I need to invoke a REST endpoint to get it.
get_value = value {
    value := data.value
} else = value {
    value := <> # invoke rest
}
I will be running OPA as a server, and my expectation is that on the first invocation it will go to the else block, and for the rest of the calls it will use the first block. Could you please help me?
You cannot write to the in-memory store from a policy, no. Except for that though, your code looks fine, and I'd say it's a pretty common pattern to first check for the existence of a value in data before fetching it via http.send. One thing to note though is that you may use caching on the http.send call, which would effectively do the same thing you're after:
value := http.send({
    "url": "https://example.com",
    "method": "get",
    # Cached for one hour
    "force_cache": true,
    "force_cache_duration": 3600
})
If the server responds with caching headers, you won't even need force_cache but can simply set "cache": true, and the http.send client will cache according to what the server suggests.
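Putting the two together, a minimal sketch of the data-first-then-http.send pattern from your question could look like this (the endpoint URL and the response field name are made up for illustration):
package policy

get_value = value {
    value := data.value
} else = value {
    response := http.send({
        "url": "https://example.com/value",  # hypothetical endpoint
        "method": "get",
        "force_cache": true,
        "force_cache_duration": 3600
    })
    value := response.body.value
}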
Related
I have defined a function that is not idempotent; it can return different results for the same inputs. Does Rego memoize the result of the function on each query? In other words, given the following policy:
val := myFunc(...) # Returns an object with "a" and "b" fields.

foo {
    val.a
}

bar {
    val.b
}
Rules foo and bar would operate on the same val which resulted from a single call to myFunc. Does Rego guarantee this?
Except for http.send, I don't think there are any builtins that would allow you to return different data provided the same input. I'd be curious to find out though! :) To answer your question, a rule/function is cached in the scope of a single query, so multiple calls to the same rule/function won't be re-evaluated. The http.send builtin additionally allows you to cache results across queries, which can be very useful when requesting data which is rarely updated.
Speaking of http.send, it's a pretty useful builtin to test things like this. Just fire up a local webserver, e.g. python3 -m http.server and then have a policy that queries that:
package policy

my_func(x) {
    http.send({
        "method": "GET",
        "url": "http://localhost:8000"
    })
}

boo := my_func("x")

foo := my_func("x")
Then evaluate the policy:
opa eval -f pretty -d policy.rego data.policy
{
  "boo": true,
  "foo": true
}
Checking the logs of the web server, you'll see that only one request was sent despite two rules calling the my_func function:
::1 - - [29/Jun/2022 19:27:01] "GET / HTTP/1.1" 200 -
I use Gatling to send data to ActiveMQ. The payload is generated in a separate method, and the response should also be validated. However, how can I access the session data within the checks, i.e. check(bodyString.is()) or simpleCheck(...)? I have also thought about storing the current payload in a separate global variable, but I don't know if this is the right approach. My code's setup looks like this at the moment:
val scn = scenario("Example ActiveMQ Scenario")
.exec(jms("Test").requestReply
.queue(...)
.textMessage{ session => val message = createPayload(); session.set("payload", payload); message}
.check(simpleCheck{message => customCheck(message, ?????? )})) //access stored payload value, alternative: check(bodystring.is(?????)
def customCheck(m: Message, string: String) = {
// check logic goes here
}
Disclaimer: providing the example in Java, as you don't seem to be a Scala developer, so Java would be a better fit for you (supported since Gatling 3.7).
The way you want to do things can't possibly work.
.textMessage(session -> {
    String message = createPayload();
    session.set("payload", message); // returns a new Session that is immediately discarded
    return message;
})
As explained in the documentation, Session is immutable, so in a function that's supposed to return the payload, you can't also return a new Session.
What you would have to do is first store the payload in the session, then fetch it:
.exec(session -> session.set("payload", createPayload()))
...
.textMessage("#{payload}")
Regarding writing your check, simpleCheck doesn't have access to the Session. You have to use check(bodyString.is()) and pass a function to is, again as explained in the documentation.
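Putting those pieces together, a rough sketch in the Gatling 3.7+ Java DSL could look like the following (the queue name and payload generator are placeholders, and the JMS protocol configuration and setUp are omitted):
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.jms.JmsDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;

public class ExampleActiveMqSimulation extends Simulation {

    // stand-in for your own payload generator
    private String createPayload() {
        return "payload-" + System.currentTimeMillis();
    }

    ScenarioBuilder scn = scenario("Example ActiveMQ Scenario")
        // 1. store the generated payload in the Session first...
        .exec(session -> session.set("payload", createPayload()))
        // 2. ...then reference it both as the message body and inside the check
        .exec(jms("Test").requestReply()
            .queue("example.queue")
            .textMessage("#{payload}")
            .check(bodyString().is(session -> session.getString("payload"))));
}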
I have been working on an API which programmatically creates/updates work items in Azure DevOps. I have been able to create a work item and populate almost all of the fields, but I have a problem with setting the state.
When I create a POST request to the Azure DevOps REST API with any state name like "Active", "Closed", or "Rejected", it throws a 400 Bad Request error.
I don't know if I am missing anything or if there is something wrong with the way I am trying to set the value.
{
"op" : "add",
"path": "/fields/System.State",
"value"="Active",
}
I have found the solution to this problem and hence I am answering it here.
I was getting a 400 Bad Request error whenever I tried creating an item and setting the state in the same call. I debugged the code and caught the exception. I found out that there are validation rules for some of the fields, and State is one of them.
The rule for the System.State field is that whenever a work item is created, it takes its configured default value (in my case it was "Proposed"; in your case it could be "New"). If you try altering the value at the time of work item creation, it throws a 400 Bad Request.
What should I do if I have to Create a Work Item with a specific State?
As of now, the solution I have found is to make two calls: one to create the work item and another to change the state of the work item to the desired state.
public async Task<IActionResult> CreateWorkItem()
{
    var result = await _client.Post(url, jsonData);
    var result2 = await _client.Put(result.id, jsonData); // or maybe just the state
    return Ok(result2);
}
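For illustration, the two calls could look roughly like this (organization, project, work item type, field values and api-version are placeholders; note that the work item update endpoint uses the PATCH verb, and both calls use the application/json-patch+json content type):
POST https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/$Task?api-version=6.0
Content-Type: application/json-patch+json

[
  { "op": "add", "path": "/fields/System.Title", "value": "My new work item" }
]

PATCH https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/{id}?api-version=6.0
Content-Type: application/json-patch+json

[
  { "op": "add", "path": "/fields/System.State", "value": "Active" }
]
Here {id} is taken from the response of the first call.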
Check the example here: Update a field
You have to use "value":"Active" in the request body.
[
  {
    "op": "add",
    "path": "/fields/System.State",
    "value": "Active"
  }
]
I am making a worklist application using SAPUI5. The problem is that when I create an entry and then create another one right after that, I get the following error:
Default changeset implementation allows only one operation.
I checked the $batch request and I see that there is a MERGE and a POST, with the MERGE updating the previous entry for some reason. Can anyone shed some light? Could it be a backend error rather than a UI5 error?
Creating the new entry:
_onMetadataLoaded: function() {
var oModel = this.getView().getModel();
var that = this;
// ...
oModel.read("/USERS_SET", {
success: function(oData) {
var oProperties = {
Qmnum: "0",
Otherstuff: "cool"
};
that._oContext = that._oView.getModel().createEntry("/ENTITYSET", {
properties: oProperties
});
that.getView().setBindingContext(that._oContext);
// ...
}
});
},
handleSavePress: function(oEvent) {
// ...
this.getView().getModel().submitChanges({
success: function(oData) {
// ...
},
error: function(oError) {
// ...
}
});
},
tl;dr: Apparently you must be using the SAP Gateway. If you do not need to process those requests in one transaction, then send them in different changesets. If you do not need batch calls at all, consider turning batch off by supplying your model with "useBatch": false upon instantiation. However, if you need to process the requests together in one transaction, then you have to read the details below.
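For reference, a minimal sketch of turning batch mode off on a v2 ODataModel (the service URL is a placeholder):
// disable $batch when instantiating the model...
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/sap/MY_SERVICE_SRV/", {
    useBatch: false
});

// ...or on an existing model instance
oModel.setUseBatch(false);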
In order to understand the problem you have to understand how the gateway and the batch and changeset requests work.
Batch requests consist of multiple requests bundled together. The purpose is to open only one connection and group relevant requests together so that the overhead is minimized. Changesets form smaller blocks inside batch requests, where modification requests can be bundled and processed together in order to ensure an all-or-nothing characteristic.
So on the gateway side: there are two relevant classes for your OData service, assuming that you have used the SAP Gateway Service Builder (transaction SEGW). There is one ending in ...DPC and one ending in ...DPC_EXT. Don't touch the former; it will always be regenerated when you update your service in the service builder. The latter is the one that we will need in this example. You will have to redefine at least two methods:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS
By default the changeset_begin method only allows changeset processing for changesets where the number of requests equals one. A single request can be handled automatically, which is why this limitation exists: if there were more requests, their processing could not be ensured automatically, as they could have business dependencies on each other.
So make sure to allow bundled (deferred mode) processing of changesets under the desired conditions:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN: first call the super->/iwbep/if_mgw_appl_srv_runtime~changeset_begin method in a TRY ... CATCH block, then loop at it_operation_info to decide which cases you want to support, allow cv_defer_mode only for those selected cases, and otherwise raise a /iwbep/cx_mgw_tech_exception=>changeset_not_supported exception.
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_PROCESS: all requests will be available in the it_changeset_request. Make sure to fill the ct_changeset_response table with the responses.
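A rough sketch of the CHANGESET_BEGIN redefinition described above could look like this (the actual decision logic over it_operation_info is left as a comment, since it depends on your entity sets):
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_begin.

  " Let the standard implementation handle changesets with a single operation.
  TRY.
      CALL METHOD super->/iwbep/if_mgw_appl_srv_runtime~changeset_begin
        EXPORTING
          it_operation_info = it_operation_info
        CHANGING
          cv_defer_mode     = cv_defer_mode.
    CATCH /iwbep/cx_mgw_tech_exception.
      " More than one operation in the changeset: decide here whether
      " bundled (deferred) processing in CHANGESET_PROCESS is allowed.
      " Inspect it_operation_info and only defer for the cases you support;
      " otherwise raise /iwbep/cx_mgw_tech_exception=>changeset_not_supported.
      cv_defer_mode = abap_true.
  ENDTRY.

ENDMETHOD.
The CHANGESET_PROCESS redefinition could then look like this: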
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_process.

  DATA:
    lv_operation_counter  TYPE i VALUE 0,
    lr_context            TYPE REF TO /iwbep/cl_mgw_request,
    lr_entry_provider     TYPE REF TO /iwbep/if_mgw_entry_provider,
    lr_message_container  TYPE REF TO /iwbep/if_message_container,
    lr_entity_data        TYPE REF TO data,
    ls_context_details    TYPE /iwbep/if_mgw_core_srv_runtime=>ty_s_mgw_request_context,
    ls_changeset_response LIKE LINE OF ct_changeset_response.

  FIELD-SYMBOLS:
    <fs_ls_changeset_request> LIKE LINE OF it_changeset_request.

  LOOP AT it_changeset_request ASSIGNING <fs_ls_changeset_request>.

    lr_context           ?= <fs_ls_changeset_request>-request_context.
    lr_entry_provider     = <fs_ls_changeset_request>-entry_provider.
    lr_message_container  = <fs_ls_changeset_request>-msg_container.

    ls_context_details = lr_context->get_request_details( ).

    CASE ls_context_details-target_entity.
      WHEN 'SomeEntity'.
        "Do the processing here
      WHEN OTHERS.
    ENDCASE.

  ENDLOOP.

ENDMETHOD.
From the error I can tell you must be using SAP GW :-) This happens only for batch requests containing more than one create/delete/update call, and it's related to transaction security ("all or nothing"). What you have to do is redefine the corresponding GW method; I think it was CHANGESET_BEGIN. See https://archive.sap.com/discussions/thread/3562720 for some details (can't offer more for now...).
I have a simple service worker that updates its cache when it receives a message, as shown below. This works fine and the cache is updated. The next step is to make one of the files inaccessible and try to get some notification that it is failing. How do I list the results of the requests?
Looking at https://developer.mozilla.org/en-US/docs/Web/API/Cache/addAll, it states:
"The request objects created during retrieval become keys to the stored response operations."
How is this to be interpreted, and how is it accessed in code?
self.addEventListener('message', event => {
console.log('EventListener message ' + event.data);
event.waitUntil(
caches.open('minimal_sw').then(cache => {
return cache.addAll(fileList).then(function(responseArray){
console.log('EventListener responseArray ' + responseArray);
self.skipWaiting();
});
})
)
});
Earlier this year, the addAll() behavior was changed so that it will reject if any of the underlying requests return responses that do not have a 2xx status code. This new behavior is implemented in the current version of all browsers that support the Cache Storage API.
So, if you want to detect when one (or more) of the requests fail, you can chain a .catch() to the end of your addAll() call.
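For example, applied to the message handler from your question (same cache name and fileList), that could look like this:
self.addEventListener('message', event => {
  event.waitUntil(
    caches.open('minimal_sw')
      .then(cache => cache.addAll(fileList))
      .then(() => self.skipWaiting())
      .catch(error => {
        // One or more of the underlying requests failed, e.g. because a
        // response had a non-2xx status code or the file was inaccessible.
        console.error('cache.addAll() failed: ' + error);
      })
  );
});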
But, to answer your question more generally, when you pass an array of strings to addAll(), they're implicitly converted (section 6.4.4.4.1) to Request objects using all of the defaults implied by the Request constructor.
Those Request objects that are created are ephemeral, though, and aren't saved anywhere for use in the subsequent then(). If, for some reason, you really need the actual underlying Request object that was used to make the network request inside of the then(), you can explicitly construct an array of Request objects and pass that to addAll():
var requests = urls.map(url => new Request(url));
caches.open('my-cache').then(cache => {
return cache.addAll(requests).then(() => {
// At this point, `cache` will be populated with `Response` objects,
// and `requests` contains the `Request` objects that were used.
}).catch(error => console.error(`Oops! ${error}`));
});
Of course, if you have a Cache object and want to get a list of the keys (which correspond to the request URLs), you can do that at any point via the keys() method:
caches.open('my-cache')
.then(cache => cache.keys())
.then(keys => console.log(`All the keys: ${keys}`));
There's no need to keep a reference to the original Requests that were used to populate the cache.