I am creating an ELK stack to fetch tweets and analyse them. When I start my ELK stack, I get this error message from Logstash:
Failed to install template {:message=>"Got response code '400' contacting Elasticsearch at URL 'http://elasticsearch:9200/_index_template/twitter'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:84:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:324:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:311:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:398:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:310:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:412:in `template_put'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:85:in `template_install'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/template_manager.rb:29:in `install'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch.rb:578:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch.rb:344:in `finish_register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/outputs/elasticsearch.rb:300:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.9.3-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:154:in `block in after_successful_connection'"]}
I think there is an error inside my index template, but even after searching online I didn't find what is wrong with it.
I am using:
logstash:8.5.3
elasticsearch:8.5.3
kibana:8.5.3
This is my template:
{
"template": "twitter-*",
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0,
"index.mapping.total_fields.limit": 2000
},
"mappings": {
"_default_": {
"_all": {
"enabled": true
},
"properties": {
"#timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"created_at": {
"type": "date",
"format": "EEE MMM dd HH:mm:ss Z YYYY"
},
"text": {
"type": "text"
},
"user": {
"type": "object",
"properties": {
"description": {
"type": "text"
}
}
},
"coordinates": {
"type": "object",
"properties": {
"coordinates": {
"type": "geo_point"
}
}
},
"entities": {
"type": "object",
"properties": {
"hashtags": {
"type": "object",
"properties": {
"text": {
"type": "text",
"fielddata": true
}
}
}
}
},
"retweeted_status": {
"type": "object",
"properties": {
"text": {
"type": "text"
}
}
}
},
"dynamic_templates": [
{
"string_template": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "keyword"
}
}
}
]
}
}
}
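Note that this template is written in the legacy format, which Elasticsearch 8.x no longer accepts at the _index_template endpoint: the _default_ mapping type and the _all field were both removed in 7.x, "template" was renamed to "index_patterns", and the settings and mappings now live under a "template" wrapper. That mismatch is very likely what triggers the 400. As a hedged sketch (not a tested drop-in: I am assuming "#timestamp" was meant to be "@timestamp", and I have swapped the Joda-era "dateOptionalTime" and "YYYY" formats for their Java-time equivalents "date_optional_time" and "yyyy"), the composable equivalent would look roughly like this:

{
  "index_patterns": ["twitter-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "index.mapping.total_fields.limit": 2000
    },
    "mappings": {
      "dynamic_templates": [
        {
          "string_template": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ],
      "properties": {
        "@timestamp": { "type": "date", "format": "date_optional_time" },
        "created_at": { "type": "date", "format": "EEE MMM dd HH:mm:ss Z yyyy" },
        "text": { "type": "text" },
        "user": {
          "properties": {
            "description": { "type": "text" }
          }
        },
        "coordinates": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        },
        "entities": {
          "properties": {
            "hashtags": {
              "properties": {
                "text": { "type": "text", "fielddata": true }
              }
            }
          }
        },
        "retweeted_status": {
          "properties": {
            "text": { "type": "text" }
          }
        }
      }
    }
  }
}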
And this is my Logstash config. I send my tweets via TCP because I have a Python bot that fetches them for me.
input {
tcp {
port => 50000
}
}
filter {
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
user => "elastic"
password => "${LOGSTASH_INTERNAL_PASSWORD}"
index => "twitter-%{+yyyy.MM.dd}"
document_type => "tweets"
template => "./templates/twitter.json"
template_name => "twitter"
template_overwrite => true
}
}
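One more thing worth flagging: mapping types no longer exist in Elasticsearch 8, so the document_type => "tweets" setting should be dropped (recent versions of the logstash-output-elasticsearch plugin deprecate it). A minimal sketch of the same output without it, assuming the template file has been converted to the composable format shown above:

output {
  elasticsearch {
    hosts    => "elasticsearch:9200"
    user     => "elastic"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    index    => "twitter-%{+yyyy.MM.dd}"
    # document_type removed: mapping types no longer exist in ES 8
    template           => "./templates/twitter.json"
    template_name      => "twitter"
    template_overwrite => true
  }
}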
Thanks for any help <3
What did you try?
I tried to modify some attributes of my template, but I still get this message.
What were you expecting?
I was expecting Logstash to create my index template for the incoming tweets.
Related
I have an elasticsearch index and am using the following query:
"_source": [
"title",
"content"
],
"size": 15,
"from": 0,
"query": {
"bool": {
"must": {
"multi_match": {
"query": "{{query}}",
"fields": [
"title",
"content"
],
"operator": "or"
}
},
"should": [
{
"multi_match": {
"query": "{{query}}",
"fields": [
"title.standard^16",
"content.standard^2"
],
"operator": "and"
}
},
{
"match_phrase": {
"content.standard": {
"query": "{{query}}",
"_name": "Phrase on title",
"boost": 1000
}
}
}
]
}
},
"highlight": {
"fields": {
"content": {}
},
"fragment_size": 100
}
}
Here is the mapping I set:
{
"settings": {
"index": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"filter": [
"lowercase",
"my_metaphone"
]
}
},
"filter": {
"my_metaphone": {
"type": "phonetic",
"encoder": "metaphone",
"replace": true
}
}
}
}
},
"mappings": {
"properties": {
"title": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "my_analyzer",
"fields": {
"standard": {
"type": "text"
},
"stemmer": {
"type": "text",
"analyzer": "english"
}
}
},
"content": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "my_analyzer",
"fields": {
"standard": {
"type": "text"
},
"stemmer": {
"type": "text",
"analyzer": "english"
}
}
}
}
}
}
Here is my logic with the query:
1) It will give the highest precedence to a phrase if it appears.
2) If not it will use the standard analyzer (that is the text, as is) and give it the highest precedence.
3) If all else doesn't match up, it will use the phonetic analyzer to get the results, that is the least precedence.
But obviously there is some fault in this, as it seems to give higher precedence to the phonetic analyzer than to the standard or phrase matches. For example, if I search for "Person of Indian Origin", the top results highlight "pursuant" and "pursuing", and very few results actually contain "person of Indian origin", although I know a large number of them exist. How do I solve this?
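One way to encode that precedence more directly (a sketch only, not a tested fix) is to require a match on the standard subfields in the must clause and demote the phonetic fields to a low-boost should clause, so phonetic-only hits can never outscore literal ones. The trade-off is that documents matching only phonetically are excluded rather than ranked last:

{
  "size": 15,
  "query": {
    "bool": {
      "must": {
        "multi_match": {
          "query": "{{query}}",
          "fields": ["title.standard^16", "content.standard^2"],
          "operator": "or"
        }
      },
      "should": [
        {
          "match_phrase": {
            "content.standard": { "query": "{{query}}", "boost": 1000 }
          }
        },
        {
          "multi_match": {
            "query": "{{query}}",
            "fields": ["title^0.1", "content^0.1"]
          }
        }
      ]
    }
  }
}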
I'm currently writing OpenAPI (Swagger) 3.0 documentation and using ReDoc to render a nice UI for it. I have a few scenarios in my documentation where, based on a previous property's enum value, I want to display different schema object properties. Sadly, I can't seem to figure out how to wire this together properly. So far I have the following test endpoint:
{
"post": {
"operationId" : "test",
"summary": "test",
"description": "test",
"tags": [ "test" ],
"consumes": "application/json",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"oneOf": [
{
"$ref": "./schemas/test1.json"
},
{
"$ref": "./schemas/test2.json"
}
],
"discriminator": {
"propertyName": "pet_type",
"mapping": {
"click": "./schemas/test1.json",
"open": "./schemas/test2.json"
}
}
}
}
}
},
"responses": {
"200": {
"description": "Success"
}
}
}
}
The test1.json looks like this:
{
"Cat": {
"type": "object",
"properties": {
"pet_type": {
"type": "string"
},
"hunts": {
"type": "boolean"
},
"age": {
"type": "integer"
}
},
"discriminator": {
"propertyName": "pet_type"
}
}
}
And the test2.json like this:
{
"Dog": {
"type": "object",
"properties": {
"pet_type": {
"type": "string"
},
"bark": {
"type": "boolean"
},
"breed": {
"type": "string",
"enum": [
"Dingo",
"Husky",
"Retriever",
"Shepherd"
]
}
},
"discriminator": {
"propertyName": "pet_type"
}
}
}
The desired outcome would be to toggle between the two "test" JSONs based on an enum (the dropdown seen in the ReDoc sample). What am I missing to get this result?
You can see an example of the discriminator result here under the feature section (the first gif)
After more digging I was able to figure out the issue... it was my structure, for the most part.
In my index.json file I updated my components section to point at my components folder containing the schemas, like so:
"components": {
"$ref": "./components/test.json"
},
The test.json looks like the following:
{
"schemas": {
"Refinance": {
"description": "A representation of a cat",
"allOf": [
{
"$ref": "#/schemas/Pet"
},
{
"type": "object",
"properties": {
"huntingSkill": {
"type": "string",
"description": "The measured skill for hunting",
"default": "lazy",
"enum": [
"clueless",
"lazy",
"adventurous",
"aggressive"
]
}
},
"required": [
"huntingSkill"
]
}
]
},
"Purchase": {
"description": "A representation of a dog",
"allOf": [
{
"$ref": "#/schemas/Pet"
},
{
"type": "object",
"properties": {
"packSize": {
"type": "integer",
"format": "int32",
"description": "The size of the pack the dog is from",
"default": 1,
"minimum": 1
},
"foobar": {
"type": "string",
"description": "some ol bullshit"
}
},
"required": [
"packSize"
]
}
]
},
"Pet": {
"type": "object",
"discriminator": {
"propertyName": "petType"
},
"properties": {
"petType": {
"description": "Type of a pet",
"type": "string"
}
},
"xml": {
"name": "Pet"
}
}
}
}
And finally the schema for the endpoint gets referenced as follows:
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "../../index.json#/components/schemas/Pet"
}
}
}
},
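If you also want explicit control over which petType value selects which schema (instead of relying on schema-name matching), the discriminator on Pet can carry a mapping. A sketch, where the keys "refinance" and "purchase" are assumed enum values of petType:

"Pet": {
  "type": "object",
  "discriminator": {
    "propertyName": "petType",
    "mapping": {
      "refinance": "#/components/schemas/Refinance",
      "purchase": "#/components/schemas/Purchase"
    }
  },
  "properties": {
    "petType": {
      "description": "Type of a pet",
      "type": "string"
    }
  }
}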
When calling my swagger.json from Swagger UI, I get an error:
Maximum call stack size exceeded
I guess it is because I have:
Token, which has an owner of type User
User, which has a token of type Token
When using the online version of the Swagger Editor, it can resolve the types. How can I configure Swagger UI to resolve the types correctly?
The full swagger.json:
{
"swagger": "2.0",
"info": {
"description": "Descr",
"version": "1.0.0",
"title": "Skeleton"
},
"host": "1.1.1.1:11",
"basePath": "/api",
"tags": [{
"name": "auth"
}
],
"schemes": ["http"],
"paths": {
"/auth/local": {
"post": {
"tags": ["auth"],
"summary": "Authenticates User",
"description": "This auths only local users",
"operationId": "authenticateUser",
"consumes": ["application/json"],
"produces": ["application/json"],
"parameters": [{
"in": "body",
"name": "body",
"required": false,
"schema": {
"$ref": "#/definitions/Credentials"
}
}
],
"responses": {
"200": {
"description": "successful operation",
"schema": {
"$ref": "#/definitions/AuthResponse"
}
}
}
}
},
"/auth/ldap": {
"post": {
"tags": ["auth"],
"operationId": "authenticateLdapUser",
"produces": ["application/json"],
"parameters": [{
"in": "body",
"name": "body",
"required": false,
"schema": {
"$ref": "#/definitions/Credentials"
}
}
],
"responses": {
"default": {
"description": "successful operation"
}
}
}
}
},
"definitions": {
"AuthResponse": {
"type": "object",
"properties": {
"issued": {
"type": "string",
"format": "date-time"
},
"responseType": {
"type": "string",
"enum": ["RESPONSE", "ERROR", "UNAUTHORIZED", "OK"]
},
"responseDescription": {
"type": "string"
},
"accessToken": {
"$ref": "#/definitions/Token"
},
"resourceName": {
"type": "string"
}
}
},
"Note": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int32"
},
"content": {
"type": "string"
},
"modified": {
"type": "string",
"format": "date-time"
}
}
},
"Token": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"expirationDate": {
"type": "string",
"format": "date-time"
},
"issued": {
"type": "string",
"format": "date-time"
},
"expired": {
"type": "boolean"
},
"owner": {
"$ref": "#/definitions/User"
}
}
},
"User": {
"type": "object",
"properties": {
"username": {
"type": "string"
},
"password": {
"type": "string"
},
"email": {
"type": "string"
},
"displayName": {
"type": "string"
},
"notes": {
"type": "array",
"items": {
"$ref": "#/definitions/Note"
}
},
"accessToken": {
"$ref": "#/definitions/Token"
}
}
},
"Credentials": {
"type": "object",
"properties": {
"user": {
"type": "string"
},
"password": {
"type": "string"
}
}
}
}
}
I had the same problem; I removed format: date-time and the error was gone.
I still don't know what causes the error, but without that format everything works.
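If the overflow really is caused by the Token/User cycle, another option (a sketch, not verified against this exact spec) is to break the loop in one direction, for example by referencing the owner by username instead of embedding the full User object:

"Token": {
  "type": "object",
  "properties": {
    "id": { "type": "string" },
    "expirationDate": { "type": "string", "format": "date-time" },
    "issued": { "type": "string", "format": "date-time" },
    "expired": { "type": "boolean" },
    "ownerUsername": {
      "type": "string",
      "description": "Username of the owning User (replaces the circular $ref)"
    }
  }
}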
In FastAPI, which uses Swagger UI, I was receiving the same error. I updated the FastAPI package to get the latest version of Swagger UI and then set the value of 'syntaxHighlight' to False, like below:
app = FastAPI(swagger_ui_parameters={'syntaxHighlight': False})
Just search for how to set this parameter directly in Swagger UI. This may fix your issue.
I am trying to fully understand indexing with multiple mapping types in Elasticsearch. The docs give this example code:
PUT my_index
{
"mappings": {
"user": {
"_all": { "enabled": false },
"properties": {
"title": { "type": "string" },
"name": { "type": "string" },
"age": { "type": "integer" }
}
},
"blogpost": {
"properties": {
"title": { "type": "string" },
"body": { "type": "string" },
"user_id": {
"type": "string",
"index": "not_analyzed"
},
"created": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
}
}
}
}
}
With this mapping, how would I then create and search on an object?
For create would it be:
POST my_index/user/blogpost
or
POST my_index/user,blogpost
For searching would it be:
GET my_index/user/blogpost
or
GET my_index/user,blogpost
or something else?
An example of a POST and GET with multiple mapping types would really help me out. Thank you so much!
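For reference (a sketch against the 2.x-era API shown in the docs snippet; mapping types were later removed entirely in Elasticsearch 7): each document is created under exactly one type, while a search can target one type or several, comma-separated:

POST my_index/user
{ "title": "Mr", "name": "John Smith", "age": 34 }

POST my_index/blogpost
{ "title": "First post", "body": "Hello world", "user_id": "1", "created": "2015-01-01" }

GET my_index/user/_search
GET my_index/user,blogpost/_search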
I'm using Elasticsearch in Rails 4 through elasticsearch-rails (https://github.com/elasticsearch/elasticsearch-rails).
I have a User model, with an email attribute.
I'm trying to use the 'uax_url_email' tokenizer described in the docs:
class User < ActiveRecord::Base
include Elasticsearch::Model
include Elasticsearch::Model::Callbacks
settings analysis: { analyzer: { whole_email: { tokenizer: 'uax_url_email' } } } do
mappings dynamic: 'false' do
indexes :email, analyzer: 'whole_email'
end
end
end
I followed the examples in the wiki (https://github.com/elasticsearch/elasticsearch-rails/wiki) and the elasticsearch-model docs (https://github.com/elasticsearch/elasticsearch-rails/wiki) to arrive at this.
It doesn't work. If I query Elasticsearch directly:
curl -XGET 'localhost:9200/users/_mapping'
It returns:
{
"users": {
"mappings": {
"user": {
"properties": {
"birthdate": {
"type": "date",
"format": "dateOptionalTime"
},
"created_at": {
"type": "date",
"format": "dateOptionalTime"
},
"email": {
"type": "string"
},
"first_name": {
"type": "string"
},
"gender": {
"type": "string"
},
"id": {
"type": "long"
},
"last_name": {
"type": "string"
},
"name": {
"type": "string"
},
"role": {
"type": "string"
},
"updated_at": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
}
}
}
This ended up being an issue with how I was creating the index. I was trying:
User.__elasticsearch__.client.indices.delete index: User.index_name
User.import
I expected this to delete the index and then re-import the values. However, I needed to do:
User.__elasticsearch__.create_index! force: true
User.import
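After recreating the index with force: true and re-importing, the analyzer should be visible on the index itself. A quick way to verify (assuming the default users index name):

# the analysis settings should now include the whole_email analyzer
curl -XGET 'localhost:9200/users/_settings?pretty'
# and the email field mapping should list "analyzer": "whole_email"
curl -XGET 'localhost:9200/users/_mapping?pretty'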