Filebeat failing to send logs to logstash with log/harvester error - docker

I'm following this tutorial (link to tutorial) to get logs from my docker containers stored in Elasticsearch via Filebeat and Logstash.
However, nothing shows up in Kibana, and when I run docker logs on my filebeat container, I get the following error:
2019-03-30T22:22:40.353Z ERROR log/harvester.go:281 Read line error: parsing CRI timestamp: parsing time "-03-30T21:59:16,113][INFO" as "2006-01-02T15:04:05Z07:00": cannot parse "-03-30T21:59:16,113][INFO" as "2006"; File: /usr/share/dockerlogs/data/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e-json.log
My containers are hosted on a Linux virtual machine, and the virtual machine runs on a Windows machine (not sure whether this could be causing the error, given the paths specified).
Below I'll describe what's running, and include some of the files in case the article is deleted in the future.
One container is running, which simply executes the following command, printing lines that Filebeat should be able to read:
CMD while true; do sleep 2 ; echo "{\"app\": \"dummy\", \"foo\": \"bar\"}"; done
My filebeat.yml file is as follows
filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
  # decode the log field (sub JSON document) if JSON encoded, then map its fields to Elasticsearch fields
  - decode_json_fields:
      fields: ["log", "message"]
      target: ""
      # overwrite existing target Elasticsearch fields while decoding JSON fields
      overwrite_keys: true
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# set up filebeat to send output to logstash
output.logstash:
  hosts: ["logstash"]

# Write Filebeat's own logs only to file, to avoid Filebeat capturing them from the docker log files
logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none
Any suggestions on why filebeat is failing to forward my logs and how to fix it would be appreciated. Thanks

Related

How to collect docker logs using Filebeats?

I am trying to collect this kind of logs from a docker container:
[1620579277][642e7adc-74e1-4b89-a705-d271846f7ebc][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set] ex02 set
[1620579277][ac9f99b7-0126-45ed-8a74-6adc3a9d6bc5][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction] Aval
=201 Bval =301 after performing the transaction
[1620579277][9211a9d4-3fe6-49db-b245-91ddd3a11cd3][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction]
Transaction makes payment of X units from A to B
[1620579280][0391d2ce-06c1-481b-9140-e143067a9c2d][channel1]
[1f5752224da4481e1dc4d23dec0938fd65f6ae7b989aaa26daa6b2aeea370084][usecase_cc][get] Query Response:
{"Name":"a","Amount":"200"}
I have set the filebeat.yml in this way:
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/container-id/container-id.log'
  processors:
    - add_docker_metadata:
        host: "unix:///var/run/docker.sock"
    - dissect:
        tokenizer: '{"log":"[%{time}][%{uuid}][%{channel}][%{id}][%{chaincode}][%{method}] %{specificinfo}\"\n%{}'
        field: "message"
        target_prefix: ""

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false
Although elasticsearch and kibana are deployed successfully, I am getting this error when a new log is generated:
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"}],"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"},"status":404}
Note: I am using version 7.12.1, and Kibana, Elasticsearch and Logstash are deployed in docker.
I ended up using Logstash as an alternative to Filebeat. The underlying mistake was that the path the logs are read from was mapped incorrectly in the Filebeat configuration file. To solve the issue:
I created an environment variable to point to the right place:
I passed the environment variable as part of the docker volume:
I pointed the path in the configuration file to the path of the volume inside the container:
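The code snippets for those three steps aren't preserved here; below is a minimal sketch of what they might look like, where the variable name LOGS_PATH and all paths are my assumptions, not the original answer's values:

```yaml
# Sketch only - LOGS_PATH and every path here are hypothetical.
# 1) Environment variable on the host:  export LOGS_PATH=/var/lib/docker/containers
# 2) docker-compose.yml - pass it as part of the volume:
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    volumes:
      - ${LOGS_PATH}:/usr/share/dockerlogs:ro   # host logs mounted read-only
# 3) filebeat.yml - point the input at the volume path inside the container:
#    filebeat.inputs:
#    - type: container
#      paths:
#        - /usr/share/dockerlogs/*/*.log
```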

Why does my metricbeat extension ignore my ActiveMQ broker host configuration in Kibana docker?

I'm trying to set up a local Kibana instance with ActiveMQ for testing purposes. I've created a docker network called elastic-network. I have 3 containers in my network: elasticsearch, kibana and finally activemq. In my kibana container, I downloaded Metricbeat using the following shell command:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
In the configuration file metricbeat.reference.yml, I've changed the host for my ActiveMQ instance running under the container activemq:
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
When I run Metricbeat with the verbose parameter, ./metricbeat -e, I get an error saying that the ActiveMQ API is unreachable. My problem is that Metricbeat ignores my ActiveMQ broker configuration and tries to connect to localhost instead.
Is there a reason why my configuration could be ignored?
After looking through the documentation, I saw that, unlike on the other operating systems, on Linux you also have to change the configuration in the module directory modules.d/activemq.yml, not just metricbeat.reference.yml:
# Module: activemq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.11/metricbeat-module-activemq.html
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
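A side note (my assumption, not something stated in the answer): module files under modules.d/ only take effect if the module is enabled, so it is worth confirming that the file is not still named activemq.yml.disabled:

```shell
# From the extracted metricbeat directory
./metricbeat modules enable activemq   # renames modules.d/activemq.yml.disabled to activemq.yml
./metricbeat modules list              # shows which modules are enabled
```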

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch filebeat using docker-compose (I intend to add other services later on) but every time I execute the docker-compose.yml file, the filebeat service always ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using version 7.2 of Filebeat, and I know that "filebeat.prospectors" isn't used anywhere in my configuration. I also know for sure that this setting isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container accesses a different configuration file instead of the one copied in by the Dockerfile, but so far I haven't been able to figure out how, why, or how to fix it...
Here's my docker-compose.yml file:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
- paths:
    - '/usr/share/filebeat/*.json'
  fields_under_root: true
  fields:
    tags: ['json']

output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile file:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, taken from the successful runs I get when executing it as a single container.
The ERROR is expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the locations of the config file and the logs directory in the docker-compose file, using the volumes tag:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d
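To double-check which configuration the container actually picks up, Filebeat's built-in config test can be run inside the service container (a sketch; the service name filebeat comes from the compose file above):

```shell
# Run the config test against the mounted configuration file
docker-compose exec filebeat filebeat test config -c /usr/share/filebeat/filebeat.yml
```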

Filebeat 7.2 - Save logs from Docker containers to Logstash

I have a few Docker containers running on my ec2 instance.
I want to save logs from these containers directly to Logstash (Elastic Cloud).
When I tried to install Filebeat manually, everything worked all right.
I have downloaded it using
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
I have unpacked it and changed the filebeat.yml configuration to:
filebeat.inputs:
- type: log
  enabled: true
  fields:
    application: "myapp"
  fields_under_root: true
  paths:
    - /var/lib/docker/containers/*/*.log

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
This worked just fine, I could find my logs after searching application: "myapp" in Kibana.
However, when I tried to run Filebeat from Docker, no success.
This is filebeat part of my docker-compose.yml
filebeat:
  image: docker.elastic.co/beats/filebeat:7.2.0
  container_name: filebeat
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock # needed for autodiscover
My previous filebeat.yml from the manual execution doesn't work here, so I have tried many examples, but nothing worked. This is one example which I think should work, but it doesn't; the docker container starts without problems, but somehow it can't read from the log files.
filebeat.autodiscover:
  providers:
    - type: docker

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*.log
  json.keys_under_root: true
  json.add_error_key: true
  fields_under_root: true
  fields:
    application: "myapp"

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have also tried something like this
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        config:
          - type: docker
            containers.ids:
              - "*"

filebeat.inputs:
- type: docker
  containers.ids:
    - "*"
  processors:
    - add_docker_metadata:
  fields:
    application: "myapp"
  fields_under_root: true

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have no clue what else to try; the filebeat log still shows
"harvester":{"open_files":0,"running":0}}
I am 100% sure that logs from containers are under /var/lib/docker/containers/*/*.log ... as I said, Filebeat worked when installed manually, but not as a docker image.
Any suggestions?
Output log from Filebeat
2019-07-23T05:35:58.128Z INFO instance/beat.go:292 Setup Beat: filebeat; Version: 7.2.0
2019-07-23T05:35:58.128Z INFO [index-management] idxmgmt/std.go:178 Set output.elasticsearch.index to 'filebeat-7.2.0' as ILM is enabled.
2019-07-23T05:35:58.129Z INFO elasticsearch/client.go:166 Elasticsearch url: https://123456789.us-east-1.aws.found.io:443
2019-07-23T05:35:58.129Z INFO [publisher] pipeline/module.go:97 Beat name: e3e5163f622d
2019-07-23T05:35:58.136Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2019-07-23T05:35:58.142Z INFO instance/beat.go:421 filebeat start running.
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:104 No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:112 Initialize registry meta file
2019-07-23T05:35:58.144Z INFO registrar/registrar.go:108 No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:145 Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2019-07-23T05:35:58.146Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-07-23T05:35:58.146Z WARN [cfgwarn] docker/input.go:49 DEPRECATED: 'docker' input deprecated. Use 'container' input instead. Will be removed in version: 8.0.0
2019-07-23T05:35:58.150Z INFO log/input.go:148 Configured paths: [/var/lib/docker/containers/*/*.log]
2019-07-23T05:35:58.150Z INFO input/input.go:114 Starting input of type: docker; ID: 11882227825887812171
2019-07-23T05:35:58.150Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-07-23T05:35:58.150Z WARN [cfgwarn] docker/docker.go:57 BETA: The docker autodiscover is beta
2019-07-23T05:35:58.153Z INFO [autodiscover] autodiscover/autodiscover.go:105 Starting autodiscover manager
2019-07-23T05:36:28.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s
{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":17}},"total":{"ticks":40,"time":{"ms":52},"value":40},"user":{"ticks":30,"time":{"ms":35}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"4427db93-2943-4a8d-8c55-6a2e64f19555","uptime":{"ms":30111}},"memstats":{"gc_next":4194304,"memory_alloc":2118672,"memory_total":6463872,"rss":28352512},"runtime":{"goroutines":34}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":1},"load":{"1":0.31,"15":0.03,"5":0.09,"norm":{"1":0.31,"15":0.03,"5":0.09}}}}}}
Hmm, I don't see anything obvious in the Filebeat config on why it's not working; I have a very similar config running for a 6.x Filebeat.
I would suggest doing a docker inspect on the container and confirming that the mounts are there; maybe also check on permissions, though errors for those would probably have shown in the logs.
Also could you try looking into using container input? I believe it is the recommended method for container logs in 7.2+: https://www.elastic.co/guide/en/beats/filebeat/7.2/filebeat-input-container.html
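A minimal container-input configuration along those lines might look like this (a sketch of the suggestion, not a config taken from the question):

```yaml
filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
  fields:
    application: "myapp"
  fields_under_root: true
  processors:
    - add_docker_metadata: ~

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
```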

Filebeat not pushing logs to Elasticsearch

I am new to docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks for helping in advance. I have ELK running in a docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log

output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'

output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"

path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to use a custom index in Elasticsearch, but for now I'm just trying to get the default working. The jetty logs all have global read permissions.
The docker container logs show no errors, and after running I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts. You need to start it manually, or have it run in its own container with an ENTRYPOINT instruction.
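A sketch of the dedicated-container approach (the image tag and paths are illustrative, not taken from the question):

```dockerfile
FROM docker.elastic.co/beats/filebeat:6.2.2
COPY resources/filebeat/filebeat.yml /usr/share/filebeat/filebeat.yml
# Unlike RUN, ENTRYPOINT executes when the container starts, so Filebeat
# runs in the foreground and keeps the container alive
ENTRYPOINT ["filebeat", "-e", "-d", "publish"]
```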
