I would like to create an analyzer and add a few stop words to Elasticsearch. I am having trouble finding documentation on doing so.
Every tutorial I find (including on the Elasticsearch website) simply says to configure the analyzer...
PUT /my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"type": "standard",
"stopwords": [ "and", "the" ]
}
}
}
}
}
... but I don't know how to actually get to the analyzer and pass in my own stop words, filters, etc.
If you could help me out, or point me in the direction of some in-depth documentation, that would be awesome.
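Not an authoritative answer, but a minimal sketch of how such an analyzer is typically hooked up and exercised may help (the field name "title" is made up, and the typeless mapping shape below is for Elasticsearch 7+; older versions nest it under a type name). Once the analyzer is defined in the index settings, you reference it by name in a field mapping, and you can test it directly with the _analyze endpoint:
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "standard",
          "stopwords": [ "and", "the" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": { "type": "text", "analyzer": "my_analyzer" }
    }
  }
}

GET /my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "the cat and the hat"
}
If the analyzer is wired up correctly, the second request should return only the tokens cat and hat.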
I wrote a task a while back that would run a collect flow and collect an image, following the model in the docs for doing so (https://www.twilio.com/docs/autopilot/actions/collect#questions). It ran flawlessly, and I tested it to make sure it behaved as expected.
I made a new account using the company email to migrate my work over and continue implementing the code, and eventually reached the portion where I needed to integrate that media collection. I used the same code, but it didn't work. The collect flow keeps triggering the validate portion and telling me the image isn't an accepted type. I have tried the exact code from before as well as the exact same image, but it still isn't working. The only thing I can think of is that the phone number was set up differently somehow. The message logs show the image as sent, and it looks fine; I can't find any other differences.
Is there anything that might be causing this? Here is the code for reference:
{
"actions": [
{
"collect": {
"name": "image_collect",
"questions": [
{
"question": "Please upload an image",
"name": "image",
"type": "Twilio.MEDIA",
"validate": {
"on_failure": {
"messages": [
{
"say": "We do not accept this format. Please send another image."
}
]
},
"allowed_types": {
"list": [
"image/jpeg",
"image/gif",
"image/png",
"image/bmp",
"application/pdf"
]
}
}
}
],
"on_complete": {
"redirect": "https://4894-100-33-3-193.ngrok.io/image_processing"
}
}
}
]
}
Generally, media size causes this issue; just make sure the file size is within the limits.
For more info, see https://www.twilio.com/docs/sms/accepted-mime-types
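If it helps to confirm what Twilio actually recorded for the inbound message, here is a hedged sketch of pulling the media metadata via the REST API (the account SID, auth token, and message SID are placeholders):
curl -X GET "https://api.twilio.com/2010-04-01/Accounts/ACXXXX/Messages/MMXXXX/Media.json" \
     -u ACXXXX:your_auth_token
Each media record in the response carries a content_type field, which you can compare against the allowed_types list in the collect flow.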
I'm relatively new to CouchDB (more specifically Cloudant if it matters) and I'm having a hard time wrapping my head around something.
Assume the following (simplified) document examples:
{ "docType": "school", "_id": "school1", "state": "CA" }
{ "docType": "teacher", "_id": "teacher1", "age": "40", "school": "school1" }
I want to find all the teachers aged $age (e.g. 40) in state $state (e.g. CA).
Views only consider one document at a time; that is, queries can't directly combine data from different documents. You can query across multiple fields of the same document using Cloudant Query (note that in your example age and state live in different document types, so a single selector can't join them as-is). You can write a selector directly in the Cloudant dashboard, something like:
"selector": {
"age": {
"$gte": 40
},
"state": {
"$eq": "CA"
}
}
See https://cloud.ibm.com/docs/services/Cloudant/tutorials?topic=cloudant-creating-an-ibm-cloudant-query
with the full reference here: https://cloud.ibm.com/docs/services/Cloudant/tutorials?topic=cloudant-query
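For completeness, a hedged sketch of running that selector over HTTP against the _find endpoint (the account and database names are made up; also note that Cloudant falls back to a full scan, with a warning, if no index covers the selector's fields):
curl -X POST "https://$ACCOUNT.cloudant.com/mydb/_find" \
     -H "Content-Type: application/json" \
     -d '{ "selector": { "age": { "$gte": 40 }, "state": { "$eq": "CA" } } }'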
You could also use a so-called linked document to emulate basic joins, as outlined in the CouchDB docs https://docs.couchdb.org/en/stable/ddocs/views/joins.html
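As a sketch of the linked-document approach under the question's schema (the design document and view names are made up): the map function emits a value containing the _id of the referenced school document, and querying the view with include_docs=true makes CouchDB fetch that linked document into each row's doc field:
// map function of a view, e.g. _design/teachers, view by_age
function (doc) {
  if (doc.docType === "teacher") {
    // the value's _id points at the school doc, so with include_docs=true
    // each row's doc is the school (with its state), not the teacher
    emit(doc.age, { "_id": doc.school });
  }
}
Querying /mydb/_design/teachers/_view/by_age?key="40"&include_docs=true then returns the linked school documents, and you can filter on state from there.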
I'm trying to add a custom analyzer to Elasticsearch via the Grails plugin. I was able to switch to a common analyzer using "searchable" on the domain class:
static searchable = {
all = [analyzer: 'snowball']
}
but I cannot get it to recognize a custom analyzer. It is unclear how to translate the following JSON from the REST API into a Groovy closure:
PUT /my_index
{
"settings": {
"analysis": {
"filter": {
"my_synonym_filter": {
"type": "synonym",
"synonyms": [
"british,english",
"queen,monarch"
]
}
},
"analyzer": {
"my_synonyms": {
"tokenizer": "standard",
"filter": [
"lowercase",
"my_synonym_filter"
]
}
}
}
}
}
This question seems to have the same problem, but the answer doesn't work; and this answer suggests it might not be possible, but that doesn't seem reasonable, since setting a custom analyzer is pretty basic.
Any suggestions?
There are two ways I can see to achieve that.
The first is to go through the low-level API, using the injected elasticSearchHelper to access the ES client directly:
elasticSearchHelper.withElasticSearch { client ->
    // create the index, passing your settings/analyzers via setSettings()
    client.admin()
          .indices()
          .prepareCreate(indexName)
          .setSettings(settings) // your settings/analyzers go here
          .execute()
          .actionGet()
}
A second way involves an undocumented feature of the ElasticSearchAdminService, namely its createIndex() method, which allows you to pass in the settings and analyzers you need when creating a new index. It basically does exactly the same as the first option above, but you get to use the Grails service directly.
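For illustration, a hedged sketch of what the settings value for the first option could look like, assuming an older ES Java client where prepareCreate(...).setSettings(...) accepts a JSON string (it simply mirrors the REST payload from the question):
def settings = '''
{
  "analysis": {
    "filter": {
      "my_synonym_filter": {
        "type": "synonym",
        "synonyms": ["british,english", "queen,monarch"]
      }
    },
    "analyzer": {
      "my_synonyms": {
        "tokenizer": "standard",
        "filter": ["lowercase", "my_synonym_filter"]
      }
    }
  }
}
'''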
I am building an app that gives users the ability to construct their own graphs. I have been using parameters for all queries and creates. But I want to give users the ability to create a node and label it anything they want (respecting Neo4j's restrictions on empty-string labels). How would I parameterize this type of transaction?
I tried this:
.CREATE("(a:{dynamicLabel})").WithParams(new {dynamicLabel = dlabel})...
But this yields a syntax error from Neo4j. I am tempted to concatenate, but I am worried that this may expose my application to an injection risk.
I am also tempted to build my own class that reads the intended string and rejects any kind of Neo4j syntax, but this would limit my users a bit, and I would rather not.
There is an open neo4j issue, 4334, which is a feature request for adding the ability to parameterize labels during CREATE. So, this is not yet possible.
That issue contains a comment suggesting that you generate CREATE statements with hardcoded labels, which will work; a sketch follows. It is, unfortunately, not as performant as using parameters would be (should they ever be supported in this case).
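As a sketch of that workaround in the Neo4jClient style from the question (the regex whitelist and variable names are my own): validate the user-supplied label, escape it with backticks, and bake it into the query text, while still passing the node properties as parameters.
// Regex lives in System.Text.RegularExpressions;
// reject anything that isn't a plain identifier before interpolating
if (!Regex.IsMatch(dlabel, @"^[A-Za-z_][A-Za-z0-9_]*$"))
    throw new ArgumentException("Invalid label: " + dlabel);

graphClient.Cypher
    .Create("(a:`" + dlabel + "` {props})")
    .WithParam("props", nodeProperties)
    .ExecuteWithoutResults();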
I searched like hell and finally figured it out.
You can do it like this:
// create or update nodes with dynamic label from import data
WITH "file:///query.json" AS url
CALL apoc.load.json(url) YIELD value AS u
UNWIND u.cis AS ci
CALL apoc.merge.node([ci.label], {Id: ci.Id}, {}, {}) YIELD node
RETURN node;
The JSON looks like this:
{
"cis": [
{
"label": "Computer",
"Id": "1"
},
{
"label": "Service",
"Id": "2"
},
{
"label": "Person",
"Id": "3"
}
],
"relations": [
{
"end1Id": "1",
"Id": "4",
"end2Id": "2",
"label": "USES"
},
{
"end1Id": "3",
"Id": "5",
"end2Id": "1",
"label": "MANAGED_BY"
}
]
}
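The relations array in that JSON can be handled the same way; here is a sketch using apoc.create.relationship for the dynamic relationship types (property names follow the JSON above, and you may want to add labels to the MATCH so it can use an index):
WITH "file:///query.json" AS url
CALL apoc.load.json(url) YIELD value AS u
UNWIND u.relations AS r
MATCH (a {Id: r.end1Id}), (b {Id: r.end2Id})
CALL apoc.create.relationship(a, r.label, {Id: r.Id}, b) YIELD rel
RETURN rel;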
If you are using the embedded Java API, you can do it like this.
// assumes this code runs inside a transaction, with graphDb being an
// existing GraphDatabaseService instance
Node node = graphDb.createNode();
Label label = new Label() {
    @Override
    public String name() {
        return dynamicLabelVal;
    }
};
node.addLabel(label);
You can then keep a label cache to avoid creating a new Label object for every node.
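On Neo4j 3.x and later there is also the static factory Label.label(String), which is more concise than the anonymous class (same assumptions as above):
node.addLabel(Label.label(dynamicLabelVal));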
I am working with RavenDB documents. I need to change a field in all the documents at once. I read that there is something called set-based updates in the RavenDB documentation. I need a little help to put me in the right direction here.
A Patron document looks something like this:
{
"Privilege": [
{
"Level": "Gold",
"Code": "12312",
"EndDate": "12/12/2012"
}
],
"Phones": [
{
"Cell": "123123",
"Home": "9783041284",
"Office": "1234123412"
}
]
}
In the Patrons document collection, there is a Privilege.Level field in each doc. I need to write a query to update it to "Gold" for all documents in the Patrons collection. This is what I know so far: I need to create an index (ChangePrivilegeIndex) first:
from patron in docs.Patrons
select new { Privilege_Level = patron.Privilege.Select(p => p.Level) }
and then write a curl statement to patch all the documents at once, something like this:
PATCH http://localhost:8080/bulk_docs/ChangePrivilegeIndex
[
{ "Type": "Set", "Name": "Privilege.Level", "Value": "Gold"}
]
I need help getting this to work, thank you. I know there are lots of loose ends in the actual scripts; that's why it's not working. Can someone look at the scenario and the script above and point me in the right direction?
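Not a definitive answer, but a sketch of how this was commonly wired up on older RavenDB builds, in case it points you in the right direction: the set-based PATCH is issued against an index together with a query that picks the documents to touch, and because Privilege is an array, the old patch API changed its entries through a Modify command carrying a Nested patch and a Position (please verify all of this against the docs for your server version):
curl -X PATCH "http://localhost:8080/bulk_docs/ChangePrivilegeIndex?query=*:*" \
     -d '[{
           "Type": "Modify",
           "Name": "Privilege",
           "Position": 0,
           "Nested": [
             { "Type": "Set", "Name": "Level", "Value": "Gold" }
           ]
         }]'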