Adding a custom analyzer to Elasticsearch via the Grails plugin

I'm trying to add a custom analyzer to Elasticsearch via the Grails plugin. I was able to change the analyzer in use to a common one using "searchable" on the domain class:
static searchable = {
    all = [analyzer: 'snowball']
}
but I cannot get it to recognize a custom analyzer. It is unclear how to translate the following JSON from the REST API into a Groovy closure:
PUT /my_index
{
    "settings": {
        "analysis": {
            "filter": {
                "my_synonym_filter": {
                    "type": "synonym",
                    "synonyms": [
                        "british,english",
                        "queen,monarch"
                    ]
                }
            },
            "analyzer": {
                "my_synonyms": {
                    "tokenizer": "standard",
                    "filter": [
                        "lowercase",
                        "my_synonym_filter"
                    ]
                }
            }
        }
    }
}
This question seems to describe the same problem, but the answer doesn't work, and this answer suggests it might not be possible. That doesn't seem reasonable, though, since setting a custom analyzer is pretty basic.
Any suggestions?

There are two ways I can see to achieve that.
The first is to go through the low-level API, using the injected elasticSearchHelper to access the ES client directly:
elasticSearchHelper.withElasticSearch { client ->
    // Do some stuff with the Elasticsearch client
    client.admin()
          .indices()
          .prepareCreate(indexName)
          .setSettings(settings) // <-- your settings/analyzers go here
          .execute()
          .actionGet()
}
The second way uses an undocumented feature of the ElasticSearchAdminService, namely its createIndex() method, which lets you pass in the settings and analyzers you need when creating a new index. It does essentially the same thing as the first option above, but you get to use the Grails service directly.
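For the synonym analyzer from the question, the settings are just that JSON translated into a Groovy map. Here is a minimal sketch of both approaches; the exact createIndex() parameters are an assumption (check the plugin source for the signature your version exposes), and whether setSettings() accepts a Map or needs a JSON string depends on the ES client version bundled with the plugin.
// Groovy translation of the JSON settings from the question
def settings = [
    analysis: [
        filter: [
            my_synonym_filter: [
                type    : 'synonym',
                synonyms: ['british,english', 'queen,monarch']
            ]
        ],
        analyzer: [
            my_synonyms: [
                tokenizer: 'standard',
                filter   : ['lowercase', 'my_synonym_filter']
            ]
        ]
    ]
]

// Option 1: the low-level client
elasticSearchHelper.withElasticSearch { client ->
    client.admin()
          .indices()
          .prepareCreate('my_index')
          .setSettings(settings) // or groovy.json.JsonOutput.toJson(settings), depending on client version
          .execute()
          .actionGet()
}

// Option 2: the undocumented admin service method (parameter order is an assumption)
// elasticSearchAdminService.createIndex('my_index', settings)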

Related

How to modify a GraphQL response in Hasura?

I want to modify the response of a Hasura fetch query.
The current response is this:
{
    "data": {
        "ids": [
            {
                "id_object": {
                    "id": 33102
                }
            },
            {
                "id_object": {
                    "id": 33104
                }
            }
        ]
    }
}
I want to remove "id_object" and get just an array of ids, like this:
{
    "data": {
        "ids": [
            {
                "id": 33102
            },
            {
                "id": 33104
            }
        ]
    }
}
A GraphQL server exposes an exact set of operations and the shape of the allowed responses for those operations. When interacting with any GraphQL server (Hasura or otherwise), it is therefore not possible to arbitrarily modify the shape of the returned data.
You're free to map it into a new form when you receive the data on the client side.
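For instance, a small client-side reshaping could look like this (a sketch in TypeScript; the response type simply mirrors the example above):
// Shape of the Hasura response shown above.
interface HasuraResponse {
    data: {
        ids: { id_object: { id: number } }[];
    };
}

// Flatten each { id_object: { id } } wrapper into a plain { id }.
function flattenIds(response: HasuraResponse): { id: number }[] {
    return response.data.ids.map(({ id_object }) => ({ id: id_object.id }));
}

// flattenIds(response) => [{ id: 33102 }, { id: 33104 }]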
If you really need the server itself to respond using this shape, you'll need to extend Hasura's schema to specifically support this query pattern.
There are a number of different ways that you could accomplish this:
You could write a custom Hasura Action
You could expose this query from your own GraphQL server and then stitch it together with Hasura using Remote Schemas
You could use a Postgres View or Function to shape the data as required and expose it as a new operation

How do you configure Destructurama.Attributed .Destructure.UsingAttributes() in JSON from appsettings?

We use appsettings to configure Serilog via JSON, a fairly standard enterprise practice.
But I'm having trouble enabling UsingAttributes. How does one enable .Destructure.UsingAttributes() via JSON config?
I started with a pure Serilog approach, but they indicate that Serilog.Extras.Attributed is deprecated in favor of Destructurama.Attributed. And looking at the Destructurama.Attributed GitHub example, I don't understand how to convert it into a JSON configuration. Their example:
var log = new LoggerConfiguration().Destructure.UsingAttributes()
The Serilog documentation for the "Destructure" option is straightforward:
"Destructure": [
{
"Name": "With",
"Args": { "policy": "Sample.CustomPolicy, Sample" }
},
],
However, I don't know what I would use in place of "Sample.CustomPolicy, Sample" to get Destructurama enabled.
"Destructure": [
{
"Name": "With",
"Args": { "UsingAttributes": WHAT_GOES_HERE}
},
],
I feel like I'm missing something obvious.
I just encountered the same problem, and after a few experiments I figured out how to get it working with Destructurama.Attributed version 2.0.0:
You must add the following "Using" entry to the JSON configuration so that Serilog can see the library:
"Serilog":
{
"Using":
[
"Destructurama.Attributed"
],
....
}
Then, inside the Serilog configuration, you have to add this to "Destructure" to enable the attributes:
"Destructure":
[
{
"Name": "UsingAttributes"
},
....
]
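Putting the two fragments together, a minimal "Serilog" section would look something like this (sinks, enrichers, and minimum levels omitted for brevity):
{
    "Serilog": {
        "Using": [ "Destructurama.Attributed" ],
        "Destructure": [
            { "Name": "UsingAttributes" }
        ]
    }
}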

What is the exact difference between "violation" and "deny" in OPA/Rego?

In Open Policy Agent (https://www.openpolicyagent.org/) for Kubernetes, there are two engines that can be used:
Gatekeeper: https://github.com/open-policy-agent/gatekeeper
OR
Plain OPA with kube-mgmt: https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/#how-does-it-work-with-plain-opa-and-kube-mgmt
They define validation rules in different ways:
In Gatekeeper, violation is used. See sample rules here: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general
In the plain OPA samples, the deny rule is used. See the sample here: https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/#how-does-it-work-with-plain-opa-and-kube-mgmt
The OPA constraint framework also seems to define it as violation: https://github.com/open-policy-agent/frameworks/tree/master/constraint#rule-schema
So what is the exact "story" behind this? Why is it not consistent between the different engines?
Notes:
This post reflects on the topic: https://www.openshift.com/blog/better-kubernetes-security-with-open-policy-agent-opa-part-2
This comment mentions how to support interoperability in the script: https://github.com/open-policy-agent/gatekeeper/issues/1168#issuecomment-794759747
The migration is mentioned in this issue: https://github.com/open-policy-agent/gatekeeper/issues/168. Is it just because of "dry run" support?
Plain OPA has no opinion on how you choose to name your rules. Using deny is just a convention in the tutorial. The real Kubernetes admission review response is going to look something like this:
{
    "kind": "AdmissionReview",
    "apiVersion": "admission.k8s.io/v1beta1",
    "response": {
        "allowed": false,
        "status": {
            "reason": "container image refers to illegal registry (must be hooli.com)"
        }
    }
}
So whatever you choose to name your rules the response will need to be transformed into a response like the above before it's sent back to the Kubernetes API server. If you scroll down a bit in the Detailed Admission Control Flow section of the Kubernetes primer docs, you'll see how this transformation is accomplished in the system.main rule:
package system

import data.kubernetes.admission

main = {
    "apiVersion": "admission.k8s.io/v1beta1",
    "kind": "AdmissionReview",
    "response": response,
}

default response = {"allowed": true}

response = {
    "allowed": false,
    "status": {
        "reason": reason,
    },
} {
    reason = concat(", ", admission.deny)
    reason != ""
}
Note in particular how the "reason" attribute is just built by concatenating all the strings found in admission.deny:
reason = concat(", ", admission.deny)
If you'd rather use violation or some other rule name using plain OPA, this is where you would change it.
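As a rough sketch (the registry check below is illustrative, mirroring the reason string from the example response above), a rule written against the violation name would look like this:
package kubernetes.admission

# Same convention as the deny rules, just named "violation".
violation[msg] {
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "hooli.com/")
    msg := sprintf("container image refers to illegal registry (must be hooli.com): %v", [container.image])
}
and the only change needed in system.main would be:
reason = concat(", ", admission.violation)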

In Power Automate, is there a way to filter on a Custom Field using DevOps' Send HTTP Request?

I'm trying to use Power Automate to return a custom work item in Azure DevOps using the "workitemsearch" API (via the "Send HTTP Request" action). Part of this requires me to filter based on the value of a Custom Field; however, I have not been able to get it to work. Here is a copy of my HTTP request body:
{
    "searchText": "ValueToSearch",
    "$skip": 0,
    "$top": 1,
    "filters": {
        "System.TeamProject": ["MyProject"],
        "System.AreaPath": ["MyAreaPath"],
        "System.WorkItemType": ["MyCustomWorkItem"],
        "Custom.RequestNumber": ["ValueToSearch"]
    },
    "$orderBy": [
        {
            "field": "system.id",
            "sortOrder": "ASC"
        }
    ],
    "includeFacets": true
}
I have been able to get it to work by removing "Custom.RequestNumber": ["ValueToSearch"], but am hesitant to do that in case my ValueToSearch is found in other places, like the comments of other work items.
Any help on this would be appreciated.
Cheers!
From WorkItemSearchResponse, we can see that the facets (a dictionary storing an array of Filter objects against each facet) only support the following fields:
"System.TeamProject"
"System.WorkItemType"
"System.State"
"System.AssignedTo"
If you want to filter on RequestNumber, you can just put it in the searchText using the following syntax:
"searchText": "RequestNumber:ValueToSearch"

Elasticsearch: Creating an Analyzer and Adding Stop Words

I would like to create an analyzer and add a few stop words to Elasticsearch. I am having trouble finding documentation on doing so.
Every tutorial I find (including on the Elasticsearch website) simply says to configure the analyzer...
PUT /my_index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "standard",
                    "stopwords": [ "and", "the" ]
                }
            }
        }
    }
}
... but I don't know how to actually get to the analyzer and pass in my own stop words, filters, etc.
If you could help me out, or point me in the direction of some in-depth documentation, that would be awesome.
