Stopping pipeline in logstash - docker

An error line is printed when I try to run Logstash in Docker. Here is the error output:
[LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
Below is my configuration for running Logstash in Docker:
input {
  jdbc {
    jdbc_driver_library => "/config-dir/mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://172.17.0.5:3306/data1"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * from COMPANY"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "172.17.0.2:9200"
    index => "test-migrate"
    document_type => "data"
  }
}
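Worth noting (an addition, not part of the original question): without a schedule option, the jdbc input executes the statement once and then lets the pipeline shut down, so this WARN can simply mean a one-shot import finished normally. A minimal sketch of keeping the pipeline alive with a cron-style schedule:

input {
  jdbc {
    # ... same jdbc_* settings as above ...
    statement => "SELECT * from COMPANY"
    # Re-run the query every minute (cron-like syntax).
    schedule => "* * * * *"
  }
}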

Related

Ruby psych - YAML conversion referencing another resource

I recently started using Ruby Psych to generate YAML strings from hashes.
One concern: when I need to !Ref or !GetAtt another resource, the value ends up wrapped in double quotes in the resulting YAML. For example:
{
  'FooPolicy' => {
    'Type' => 'AWS::SQS::QueuePolicy',
    'Properties' => {
      'Queues' => '!Ref FooQueue',
      'PolicyDocument' => {
        'Statement' => {
          'Action' => 'SQS:*',
          'Effect' => 'Allow',
          'Resource' => '!GetAtt FooQueue.Arn',
          'Principal' => {
            'AWS' => '${AWS::AccountId}'
          }
        }
      }
    }
  }
}
The output is
FooPolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues: "!Ref FooQueue"
    PolicyDocument:
      Statement:
        Action: SQS:*
        Effect: Allow
        Resource: "!GetAtt FooQueue.Arn"
        Principal:
          AWS: "${AWS::AccountId}"
which throws a malformed-template error in CloudFormation validation. Has anyone run into this issue?
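A possible workaround (not from the original post, a minimal sketch): Psych double-quotes plain strings that begin with !, but it emits a bare YAML tag when an object requests one through encode_with. The CfnTag wrapper class below is hypothetical:

require 'yaml'

# Hypothetical wrapper: dumps its value with a CloudFormation
# short-form tag (e.g. !Ref, !GetAtt) instead of a quoted string.
class CfnTag
  def initialize(tag, value)
    @tag, @value = tag, value
  end

  # Psych calls encode_with while dumping; represent_scalar attaches
  # a custom YAML tag to the scalar value.
  def encode_with(coder)
    coder.represent_scalar("!#{@tag}", @value)
  end
end

puts YAML.dump('Queues' => CfnTag.new('Ref', 'FooQueue'))
# Expected output (assumption): Queues: !Ref FooQueue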

Logstash container stopped because of an error creating action from filter

Hello, I'm new to Elasticsearch.
I'm working with log files coming from Filebeat and Logstash, and I'm trying to add a field "response_time" and then assign the difference between two timestamps to it.
So I created a Logstash filter and added it to the Logstash configuration file, but when I restarted the container I got the error below.
This is my logstash configuration file:
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
  ruby {
    code => "event.set('indexDay', event.get('[#timestamp]').time.localtime('+01:00').strftime('%Y%m%d'))"
  }
  aggregate {
    add_field => {
      "response_time" => "timestamp2-timestamp1"
    }
  }
  grok {
    match => ["message","%{LOGLEVEL:loglevel},%{DATESTAMP_RFC2822:timestamp},%{NOTSPACE:event_type},%{NUMBER:capture_res_id},%{NUMBER:capture_pid},%{NUMBER:mti},%{NUMBER:node_id}
    ,%{UUID:msg_uuid},%{NOTSPACE:module},%{NUMBER :respCode}"]
  }
  if [event_type] == "request_inc" {
    aggregate {
      msg_uuid => "%{UUID}"
      timestamp1 => event.get('DATESTAMP_RFC2822')
      code => "map['response_time'] = 0"
      map_action => "create"
    }
  }
  if [event_type] == "response_outg" {
    aggregate {
      msg_uuid => "%{UUID}"
      event_type => event.set('event_type')
      timestamp2 => "%{DATESTAMP_RFC2822}"
      code => "map['response_time']"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    template => "/usr/share/logstash/templates/testblogstash.template.json"
    template_name => "testblogstash"
    template_overwrite => true
    index => "testblogstash-%{indexDay}"
    codec => json
  }
  stdout {
    codec => rubydebug
  }
}
And this is an example of my log file:
{"log_level":"INFO","timestamp":"2021-12-15T16:06:24.400087Z","event_type":"s_tart","ca_id":"11","c_pid":"114","mti":"00","node_id":"00","msg_uuid":"1234","module":"cmde"}
{"log_level":"INFO","timestamp":"2021-12-15T16:06:31.993057Z","event_type":"e_nd","mti":"00","node_id":"00","msg_uuid":"1234","module":"PWC-cmde","respCode":"1"}
This is the error from the Docker logs:
[2022-06-01T14:43:24,529][ERROR][logstash.agent ] Failed to execute
action {:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError", :message=>"Expected one of
[A-Za-z0-9_-], [ \t\r\n], "#", "{", [A-Za-z0-9_], "}" at line
25, column 24 (byte 689) after filter {\r\n json {\r\n source =>
"message"\r\n }\r\n ruby {\r\n code => "event.set('indexDay',
event.get('[#timestamp]').time.localtime('+01:00').strftime('%Y%m%d'))"\r\n
}\r\n aggregate {\r\n add_field => {\r\n "response_time" =>
"timestamp2-timestamp1"\r\n\t\t }\r\n\t\t}\r\n grok {\r\n match =>
["message","%{LOGLEVEL:loglevel},%{DATESTAMP_RFC2822:timestamp},%{NOTSPACE:event_type},%{NUMBER:capture_res_id},%{NUMBER:capture_pid},%{NUMBER:mti},%{NUMBER:node_id}\r\n\t,%{UUID:msg_uuid},%{NOTSPACE:module},%{NUMBER
:respCode}"]}\r\n if [event_type] == "request_inc" {\r\n aggregate
{\r\n\t msg_uuid => "%{UUID}"\r\n\t timestamp1 => event",
:backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in
compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in initialize'",
"org/logstash/execution/JavaBasePipelineExt.java:72:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in
execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:383:in block
in converge_state'"]}
...
[2022-06-01T14:43:29,460][INFO ][logstash.runner ] Logstash shut down.
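For reference (an addition, not from the original thread): the parser stops at timestamp1 => event because a bare Ruby expression is not a valid option value, and msg_uuid, timestamp1, timestamp2, and event_type are not options of the aggregate filter; its real options include task_id, code, map_action, end_of_task, and timeout. A minimal sketch of the two aggregate blocks in valid syntax, assuming the timestamp and msg_uuid fields from the sample logs:

filter {
  if [event_type] == "request_inc" {
    aggregate {
      task_id => "%{msg_uuid}"
      # Remember the request timestamp in the shared map.
      code => "map['timestamp1'] = event.get('timestamp')"
      map_action => "create"
    }
  }
  if [event_type] == "response_outg" {
    aggregate {
      task_id => "%{msg_uuid}"
      # Set response_time to the difference between the two timestamps, in seconds.
      code => "require 'time'; event.set('response_time', Time.parse(event.get('timestamp')) - Time.parse(map['timestamp1']))"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}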

Logstash shuts down after start

I'm new to the ELK Stack and I can't figure out why Logstash keeps shutting down after I execute it. I'm trying to gather information from Twitter.
input {
  twitter {
    consumer_key => "XXX"
    consumer_secret => "XXX"
    oauth_token => "XXX"
    oauth_token_secret => "XXX"
    keywords => ["portugal", "game", "movie"]
    ignore_retweets => true
    full_tweet => true
  }
}
filter {}
output {
  stdout {
    codec => dots
  }
  elasticsearch {
    hosts => "localhost:9200"
    index => "twitterind"
  }
}
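No answer was recorded in the source; as a first step, a sketch of how the shutdown reason can be surfaced, assuming the config is saved as twitter.conf:

# Check the pipeline config for errors without starting it.
bin/logstash -f twitter.conf --config.test_and_exit

# If the config is valid, rerun with debug logging so the cause of the
# shutdown (e.g. a Twitter authentication failure) is printed.
bin/logstash -f twitter.conf --log.level=debug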

Not able to fetch the index-related data in Kibana

I am able to create the indices using logstash.conf. My input type is gelf.
I am sending the Logstash logs to Kibana.
Here is my logstash.conf:
input {
  gelf { }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    index => "templeton-math-%{+YYYY.MM.dd}"
  }
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    index => "templeton-science-%{+YYYY.MM.dd}"
  }
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    index => "templeton-bio-%{+YYYY.MM.dd}"
  }
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    index => "templeton-lang-%{+YYYY.MM.dd}"
  }
}
Issue: logs are sent to all the indices. I would like to send each log only to its respective index.
I added a conditional like this:
if [tag] == "templeton-math" {
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    index => "templeton-math-%{+YYYY.MM.dd}"
  }
}
It gives an error:
INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"67f7a48e-fc7c-499b-85a0-3fd6979f88f6", :path=>"/var/lib/logstash/uuid"}
14:58:14.308 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, => at line 22, column 9 (byte 179) after output \n\n{\n\n elasticsearch {\n hosts "}
2017-10-11 14:58:14,355 Api Webserver ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Try this.
output {
  stdout { codec => rubydebug }
  if [tag] == "templeton-math" {
    elasticsearch {
      hosts => ["elk.lera.com:80"]
      index => "templeton-math-%{+YYYY.MM.dd}"
    }
  }
  if [tag] == "templeton-science" {
    elasticsearch {
      hosts => ["elk.lera.com:80"]
      index => "templeton-science-%{+YYYY.MM.dd}"
    }
  }
  if [tag] == "templeton-bio" {
    elasticsearch {
      hosts => ["elk.lera.com:80"]
      index => "templeton-bio-%{+YYYY.MM.dd}"
    }
  }
  if [tag] == "templeton-lang" {
    elasticsearch {
      hosts => ["elk.lera.com:80"]
      index => "templeton-lang-%{+YYYY.MM.dd}"
    }
  }
}
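Since the four branches differ only in the index name, they could also collapse into a single output using sprintf substitution in the index option (a sketch, assuming [tag] always holds exactly one of the four templeton-* prefixes):

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["elk.lera.com:80"]
    # %{tag} expands to the event's tag field when the event is written.
    index => "%{tag}-%{+YYYY.MM.dd}"
  }
}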

Logstash twitter input unauthorized error

Does anyone have experience with the error below? Please help me.
Logstash startup completed
exception=>Twitter::Error::Unauthorized, :backtrace=>["C:/logstash-1.5.1 ...
I'm using the twitter config below:
input {
  twitter {
    consumer_key => ""
    consumer_secret => ""
    oauth_token => ""
    oauth_token_secret => ""
    keywords => [""]
    full_tweet => true
  }
}
output {
  stdout { codec => dots }
  elasticsearch {
    host => "localhost:9200"
  }
}
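One way to narrow this down (a sketch, not from the original post): Twitter::Error::Unauthorized is raised by the twitter input itself, so the elasticsearch output can be dropped while testing. If the minimal pipeline below still fails, the OAuth credentials themselves (or a skewed system clock, which breaks OAuth signing) are at fault:

input {
  twitter {
    consumer_key => "..."        # credentials elided, as in the question
    consumer_secret => "..."
    oauth_token => "..."
    oauth_token_secret => "..."
    keywords => ["logstash"]     # assumed placeholder keyword
  }
}
output {
  stdout { codec => dots }
}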
