Configuring Logstash for Docker

I'm new to Docker and I'm having trouble running a simple logstash.conf in Docker.
My Dockerfile:
FROM docker.elastic.co/logstash/logstash:5.0.0
RUN rm -f ~/desktop/docker_logstash/logstash.conf
Logstash.conf:
input {
  file {
    path => "~/desktop/filename.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Docker commands:
docker build -t logstashexample .
docker run logstashexample
I can build the image, but when I run it, it gets stuck on:
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties.
[2017-11-22T11:08:23,040][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-11-22T11:08:24,501][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-11-22T11:08:24,520][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-22T11:08:24,593][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-11-22T11:08:25,054][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
What am I doing wrong? Thanks.
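The startup log gives a hint: that beats listener on port 5044 is the image's stock pipeline, which suggests your own config was never loaded. Both ~/desktop/... paths are host paths that don't exist inside the container; the official image reads pipeline configs from /usr/share/logstash/pipeline/. A minimal sketch of the usual approach (the /var/log/filename.log location is just an illustration):

FROM docker.elastic.co/logstash/logstash:5.0.0
# Replace the image's default pipeline (a beats input on 5044) with our own
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf
# Put the log file at a path that exists inside the container
COPY filename.log /var/log/filename.log

Then point the file input at the container path, e.g. path => "/var/log/filename.log".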

Related

Logstash does not process files sent by filebeat

I have set up an ELK stack infrastructure with Docker, but I can't see files being processed by Logstash.
Filebeat is configured to send .csv files to Logstash, and from Logstash to Elasticsearch. I can see the Logstash beats listener starting, and the Logstash-to-Elasticsearch pipeline works, however no document/index is written.
Please advise.
filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - logs/sms/*.csv
  document_type: sms
  paths:
    - logs/voip/*.csv
  document_type: voip
output.logstash:
  enabled: true
  hosts: ["logstash:5044"]
logging.to_files: true
logging.files:
logstash.conf
input {
  beats {
    port => "5044"
  }
}
filter {
  if [document_type] == "sms" {
    csv {
      columns => ['Date', 'Time', 'PLAN', 'CALL_TYPE', 'MSIDN', 'IMSI', 'IMEI']
      separator => " "
      skip_empty_columns => true
      quote_char => "'"
    }
  }
  if [document_type] == "voip" {
    csv {
      columns => ['Date', 'Time', 'PostDialDelay', 'Disconnect-Cause', 'Sip-Status','Session-Disposition', 'Calling-RTP-Packets-Lost','Called-RTP-Packets-Lost', 'Calling-RTP-Avg-Jitter','Called-RTP-Avg-Jitter', 'Calling-R-Factor', 'Called-R-Factor', 'Calling-MOS', 'Called-MOS', 'Ingress-SBC', 'Egress-SBC', 'Originating-Trunk-Group', 'Terminating-Trunk-Group']
      separator => " "
      skip_empty_columns => true
      quote_char => "'"
    }
  }
}
output {
  if [document_type] == "sms" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "smscdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
  if [document_type] == "voip" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "voipcdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
}
Logstash partial logs
[2019-12-05T12:48:38,227][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-12-05T12:48:38,411][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4ffc5251 run>"}
[2019-12-05T12:48:38,949][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-12-05T12:48:39,077][INFO ][org.logstash.beats.Server] Starting server on port: 5044
==========================================================================================
[2019-12-05T12:48:43,518][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-12-05T12:48:43,745][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x46e8e60c run>"}
[2019-12-05T12:48:43,780][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-12-05T12:48:45,770][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
filebeat log sample
2019-12-05T12:55:33.119Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_1595.csv
2019-12-05T12:55:33.126Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_2004.csv
2019-12-05T12:55:33.130Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_2810.csv
======================================================================================================
2019-12-05T13:00:44.002Z INFO log/harvester.go:280 File is inactive: /usr/share/filebeat/logs/voip/voip_cdr_563.csv. Closing because close_inactive of 5m0s reached.
2019-12-05T13:00:44.003Z INFO log/harvester.go:280 File is inactive: /usr/share/filebeat/logs/voip/voip_cdr_2729.csv. Closing because close_inactive of 5m0s reached.
2019-12-05T13:00:44.004Z INFO log/harvester.go:280 File is inactive: /usr/share/filebeat/logs/voip/voip_cdr_2308.csv. Closing because close_inactive of 5m0s reached.
2019-12-05T13:00:49.218Z INFO log/harvester.go:280 File is inactive: /usr/share/filebeat/logs/voip/voip_cdr_981.csv. Closing because close_inactive of 5m0s reached.
docker-compose ps
docker-compose -f docker-compose_stash.yml ps
The system cannot find the path specified.
Name               Command                         State  Ports
---------------------------------------------------------------------------------------------------------------------
elasticsearch_cdr  /usr/local/bin/docker-entr ...  Up     0.0.0.0:9200->9200/tcp, 9300/tcp
filebeat_cdr       /usr/local/bin/docker-entr ...  Up
kibana_cdr         /usr/local/bin/kibana-docker    Up     0.0.0.0:5601->5601/tcp
logstash_cdr       /usr/local/bin/docker-entr ...  Up     0.0.0.0:5000->5000/tcp, 0.0.0.0:5044->5044/tcp, 9600/tcp
In Logstash you have conditionals checking the field document_type, but this field is not generated by filebeat; you need to correct your filebeat config.
Try this config for your inputs.
filebeat.prospectors:
- input_type: log
  paths:
    - logs/sms/*.csv
  fields:
    document_type: sms
  paths:
    - logs/voip/*.csv
  fields:
    document_type: voip
This will create a field named fields with a nested field named document_type, like the example below.
{ "fields" : { "document_type" : "voip" } }
And change your logstash conditionals to check against the field fields.document_type, like the example below.
if [fields][document_type] == "sms" {
  your filters
}
If you want, you can use the option fields_under_root: true in filebeat to create document_type at the root of your document, so you will not need to change your logstash conditionals.
filebeat.prospectors:
- input_type: log
  paths:
    - logs/sms/*.csv
  fields:
    document_type: sms
  fields_under_root: true

Logstash jdbc not sending data

I'm trying to export data from a MySQL table to Elasticsearch with Logstash and the JDBC MySQL driver, with every process in a Docker container. My problem is that, with no error shown, nothing is sent to Elasticsearch.
My Dockerfile:
FROM elastic/logstash:6.3.0
ENV https_proxy=
ENV http_proxy=
COPY ./mysql-connector-java-5.1.46/mysql-connector-java-5.1.46.jar /tmp/mysql-connector-java-5.1.46.jar
COPY ./logstash.conf /tmp/logstash.conf
COPY ./logstash.yml /usr/share/logstash/config/logstash.yml
RUN logstash-plugin install logstash-input-jdbc
I run it with this command:
docker run -d --rm --name=logstach -v /data/logstash:/home/logstash logstash bin/logstash -f /tmp/logstash.conf
And here is my logstash.conf:
input {
  jdbc {
    jdbc_driver_library => "/tmp/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://0.0.2.22:3306/itop_db"
    jdbc_user => "admin"
    jdbc_password => "password"
    statement => "SELECT * FROM contact”
  }
}
output {
  elasticsearch {
    index => "contact"
    document_type => "data"
    document_id => "%{id}"
    hosts => "127.0.0.1:9200"
  }
  stdout { codec => json_lines }
}
Everything seems to go well, except there is no new index in Elasticsearch (http://localhost:9200/_cat/indices?v).
This is the output I have when I run Logstash:
[screenshot: logstash execution output]
[screenshot: logstash error 2]
"SELECT * FROM contact” <-- this could be the problem. I imagine you copied this from the internet? Change ” to "
In addition to the error in my SQL statement, I needed to specify the host IP of the container (172.17.0.2 in my case) instead of using 127.0.0.1:9200.
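Putting both fixes together, the corrected logstash.conf would look roughly like this (172.17.0.2 stands in for whatever address your Elasticsearch container is reachable at, e.g. as reported by docker inspect):

input {
  jdbc {
    jdbc_driver_library => "/tmp/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://0.0.2.22:3306/itop_db"
    jdbc_user => "admin"
    jdbc_password => "password"
    statement => "SELECT * FROM contact"   # straight double quotes, not ”
  }
}
output {
  elasticsearch {
    index => "contact"
    document_type => "data"
    document_id => "%{id}"
    hosts => "172.17.0.2:9200"   # container-reachable ES address, not 127.0.0.1
  }
  stdout { codec => json_lines }
}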

Appending to file not showing up on logstash or elasticsearch output

I spun up Logstash and Elasticsearch Docker containers using images from elastic.co. When I append to the file that I have set as my input file, I don't see any output from Logstash or Elasticsearch. This page didn't help much, and I couldn't find my exact problem on Google or Stack Overflow.
This is how I started my containers:
docker run \
  --name elasticsearch \
  -p 9200:9200 \
  -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.3.1

docker run \
  --name logstash \
  --rm -it -v $(pwd)/elk/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v $(pwd)/elk/pipeline.conf:/usr/share/logstash/pipeline/pipeline.conf \
  docker.elastic.co/logstash/logstash:6.3.1
This is my pipeline configuration file:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
Logstash and Elasticsearch seem to have started fine.
Sample logstash startup output:
[INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6dc306e7 sleep>"}
[INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[INFO ][org.logstash.beats.Server] Starting server on port: 5044
[INFO ][logstash.inputs.metrics ] Monitoring License OK
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Sample elasticsearch startup output:
[INFO ][o.e.c.r.a.AllocationService] [FJImg8Z] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-6-2018.07.10][0]] ...]).
So when I make changes to logstash-tutorial.log, I don't see any terminal output from Logstash or Elasticsearch. How do I get output, or configure Logstash and Elasticsearch correctly?
The answer is on the same page that you referred to. Take a look at start_position:
Choose where Logstash starts initially reading files: at the beginning or at the end. The default behavior treats files like live streams and thus starts at the end. If you have old data you want to import, set this to beginning.
Set the start position as below:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
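Once events are flowing, a quick sanity check (assuming the Elasticsearch HTTP port is published as in the docker run above) is to list the indices and watch the document counts grow:

curl 'http://x.x.x.x:9200/_cat/indices?v'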

problems running logstash with -f flag in docker

I'm trying to run the logstash container on Red Hat 7 with the command:
docker run -v /home/logstash/config:/conf -v /home/docker/logs:/logs logstash logstash -f /conf/logstash.conf --verbose
and the response received is:
{:timestamp=>"2016-05-05T10:21:20.765000+0000", :message=>"translation missing: en.logstash.runner.configuration.file-not-found", :level=>:error}
{:timestamp=>"2016-05-05T10:21:20.770000+0000", :message=>"starting agent", :level=>:info}
and the logstash container is not running.
If I execute the following command:
docker run -dit -v /home/logstash/config:/conf -v /home/docker/logs:/logs --name=logstash2 logstash logstash -e 'input { stdin { } } output { stdout { } }'
enter the container with the command:
docker exec -ti logstash2 bash
and execute:
logstash -f /conf/logstash.conf
A new logstash process is now running in the container and I can manage the log files set up in the config file.
Any idea why I am having this strange behaviour?
Thanks to all.
Problem solved. It was a directory permissions issue on the host machine. Thanks for helping @NOTtardy
Try putting double quotes around logstash -f /conf/logstash.conf --verbose
Edit: I just tried your command myself and it works fine. It could be that your logstash.conf is the reason for the error.
I've used a very simple logstash.conf
input {
  udp {
    port => 5000
    type => syslog
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
And got the following output
{:timestamp=>"2016-05-05T18:24:26.294000+0000", :message=>"starting agent", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.332000+0000", :message=>"starting pipeline", :id=>"main", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.370000+0000", :message=>"Starting UDP listener", :address=>"0.0.0.0:5000", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.435000+0000", :message=>"Starting pipeline", :id=>"main", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_inflight=>125, :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.441000+0000", :message=>"Pipeline main started"}

Logstash Removed from cluster after joining with elasticsearch in docker

I am using Docker to host my Logstash and Elasticsearch.
Logstash joins the cluster and then disconnects after 2 seconds.
Below is the exception I am getting.
[2015-08-31 23:30:40,880][INFO ][cluster.service ] [Ms. MODOK] removed {[logstash-da1b6e0a073b-1-11622][G_hYr0mcTZ6G-IOia1g5Cg][da1b6e0a073b][inet[/172.17.5.146:9300]]{data=false, client=true},}, reason: zen-disco-node_failed([logstash-da1b6e0a073b-1-11622][G_hYr0mcTZ6G-IOia1g5Cg][da1b6e0a073b][inet[/172.17.5.146:9300]]{data=false, client=true}), reason transport disconnected
My logstash configuration file:
input {
  stdin { }
}
output {
  elasticsearch {
    host => elasticsearch
  }
  stdout { codec => rubydebug }
}
I missed it.
I needed to add the log file location: I created volumes in Docker, pointed the Logstash file input at them, and everything started working.
file {
  path => [ "/opt/logs/test_log.json" ]
  codec => json {
    charset => "UTF-8"
  }
}
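For completeness, that file block goes inside an input section, and the log directory has to be mounted into the Logstash container with something like -v /opt/logs:/opt/logs on the docker run command (the host-side path here is an assumption):

input {
  file {
    # /opt/logs must be a volume mounted from the host
    path => [ "/opt/logs/test_log.json" ]
    codec => json {
      charset => "UTF-8"
    }
  }
}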
