Filebeat: Failed to start crawler: starting input failed: Error while initializing input: Can only start an input when all related states are finished - docker

I have a job that starts several Docker containers periodically, and for each container I also start a Filebeat Docker container to gather the logs and save them in Elasticsearch.
Filebeat version 7.9 is being used.
The Docker containers are started from a Java application using the Spotify Docker client and are terminated when the job finishes.
The Filebeat configuration is the following; it monitors only one specific Docker container:
filebeat.inputs:
- paths: ${logs_paths}
  include_lines: ['^{']
  json.message_key: log
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  type: log
  scan_frequency: 10s
  ignore_older: 15m
- paths: ${logs_paths}
  exclude_lines: ['^{']
  json.message_key: log
  type: log
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  scan_frequency: 10s
  ignore_older: 15m
  max_bytes: 20000000

processors:
- decode_json_fields:
    fields: ["log"]
    target: ""

output.elasticsearch:
  hosts: ${elastic_host}
  username: "something"
  password: "else"

logs_paths:
- /var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log
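For reference, each Filebeat sidecar is started along these lines (a simplified shell sketch of what the Java code does; the config path, image tag, and mount layout here are illustrative, not the exact code):

docker run -d \
  -v /path/to/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  docker.elastic.co/beats/filebeat:7.9.0

The container log directory is mounted read-only so that the inputs can reach the <container-id>-json.log file that logs_paths points to.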
From time to time we observe that one Filebeat container crashes immediately after starting, with the following error. Although the job runs the same Docker images each time, the error can appear on any of them:
2020-12-09T16:00:15.784Z INFO instance/beat.go:640 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2020-12-09T16:00:15.864Z INFO instance/beat.go:648 Beat ID: 03ef7f54-2768-4d93-b7ca-c449e94b239c
2020-12-09T16:00:15.868Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-12-09T16:00:15.868Z INFO [beat] instance/beat.go:976 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "03ef7f54-2768-4d93-b7ca-c449e94b239c"}}}
2020-12-09T16:00:15.869Z INFO [beat] instance/beat.go:985 Build info {"system_info": {"build": {"commit": "b2ee705fc4a59c023136c046803b56bc82a16c8d", "libbeat": "7.9.0", "time": "2020-08-11T20:11:11.000Z", "version": "7.9.0"}}}
2020-12-09T16:00:15.869Z INFO [beat] instance/beat.go:988 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.14.4"}}}
2020-12-09T16:00:15.871Z INFO [beat] instance/beat.go:992 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-10-28T10:03:29Z","containerized":true,"name":"638de114b513","ip":["someIP"],"kernel_version":"4.4.0-190-generic","mac":["someMAC"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":8,"patch":2003,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2020-12-09T16:00:15.876Z INFO [beat] instance/beat.go:1021 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter"}, "start_time": "2020-12-09T16:00:14.670Z"}}}
2020-12-09T16:00:15.876Z INFO instance/beat.go:299 Setup Beat: filebeat; Version: 7.9.0
2020-12-09T16:00:15.876Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'someIndex' as ILM is enabled.
2020-12-09T16:00:15.877Z INFO eslegclient/connection.go:99 elasticsearch url: someURL
2020-12-09T16:00:15.878Z INFO [publisher] pipeline/module.go:113 Beat name: 638de114b513
2020-12-09T16:00:15.885Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-12-09T16:00:15.886Z INFO instance/beat.go:450 filebeat start running.
2020-12-09T16:00:15.893Z INFO memlog/store.go:119 Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2020-12-09T16:00:15.893Z INFO memlog/store.go:124 Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=0
2020-12-09T16:00:15.893Z INFO [registrar] registrar/registrar.go:108 States Loaded from registrar: 0
2020-12-09T16:00:15.893Z INFO [crawler] beater/crawler.go:71 Loading Inputs: 2
2020-12-09T16:00:15.894Z INFO log/input.go:157 Configured paths: [/var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log]
2020-12-09T16:00:15.895Z INFO [crawler] beater/crawler.go:141 Starting input (ID: 3906827571448963007)
2020-12-09T16:00:15.895Z INFO log/harvester.go:297 Harvester started for file: /var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log
2020-12-09T16:00:15.902Z INFO beater/crawler.go:148 Stopping Crawler
2020-12-09T16:00:15.902Z INFO beater/crawler.go:158 Stopping 1 inputs
2020-12-09T16:00:15.902Z INFO [crawler] beater/crawler.go:163 Stopping input: 3906827571448963007
2020-12-09T16:00:15.902Z INFO input/input.go:136 input ticker stopped
2020-12-09T16:00:15.902Z INFO log/harvester.go:320 Reader was closed: /var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log. Closing.
2020-12-09T16:00:15.902Z INFO beater/crawler.go:178 Crawler stopped
2020-12-09T16:00:15.902Z INFO [registrar] registrar/registrar.go:131 Stopping Registrar
2020-12-09T16:00:15.902Z INFO [registrar] registrar/registrar.go:165 Ending Registrar
2020-12-09T16:00:15.903Z INFO [registrar] registrar/registrar.go:136 Registrar stopped
2020-12-09T16:00:15.912Z INFO [monitoring] log/log.go:153 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":80,"time":{"ms":80}},"total":{"ticks":230,"time":{"ms":232},"value":0},"user":{"ticks":150,"time":{"ms":152}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"cae44857-494c-40e7-bf6a-e06e2cf40759","uptime":{"ms":290}},"memstats":{"gc_next":16703568,"memory_alloc":8518080,"memory_total":40448184,"rss":73908224},"runtime":{"goroutines":11}},"filebeat":{"events":{"added":2,"done":2},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0,"filtered":2,"total":2}}},"registrar":{"states":{"current":1,"update":2},"writes":{"success":2,"total":2}},"system":{"cpu":{"cores":4},"load":{"1":1.79,"15":1.21,"5":1.54,"norm":{"1":0.4475,"15":0.3025,"5":0.385}}}}}}
2020-12-09T16:00:15.912Z INFO [monitoring] log/log.go:154 Uptime: 292.790204ms
2020-12-09T16:00:15.912Z INFO [monitoring] log/log.go:131 Stopping metrics logging.
2020-12-09T16:00:15.913Z INFO instance/beat.go:456 filebeat stopped.
2020-12-09T16:00:15.913Z ERROR instance/beat.go:951 Exiting: Failed to start crawler: starting input failed: Error while initializing input: Can only start an input when all related states are finished: {Id: native::4096794-64769, Finished: false, Fileinfo: &{40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log 0 416 {874391692 63743126415 0x608b880} {64769 4096794 1 33184 0 0 0 0 0 4096 0 {1607529615 874391692} {1607529615 874391692} {1607529615 874391692} [0 0 0]}}, Source: /var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log, Offset: 0, Timestamp: 2020-12-09 16:00:15.896210395 +0000 UTC m=+0.302799924, TTL: -1ns, Type: log, Meta: map[], FileStateOS: 4096794-64769}
Exiting: Failed to start crawler: starting input failed: Error while initializing input: Can only start an input when all related states are finished: {Id: native::4096794-64769, Finished: false, Fileinfo: &{40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log 0 416 {874391692 63743126415 0x608b880} {64769 4096794 1 33184 0 0 0 0 0 4096 0 {1607529615 874391692} {1607529615 874391692} {1607529615 874391692} [0 0 0]}}, Source: /var/lib/docker/containers/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5/40c453871c01f0581b832e0452659553b6be2ac4dc1ac8bfaf2b5478bca1cec5-json.log, Offset: 0, Timestamp: 2020-12-09 16:00:15.896210395 +0000 UTC m=+0.302799924, TTL: -1ns, Type: log, Meta: map[], FileStateOS: 4096794-64769}
Does anyone have an idea what might cause this?

Related

WDIO docker run: [1643987609.767][SEVERE]: bind() failed: Cannot assign requested address (99)

There is an error when running the wdio test in Docker using Jenkins. I have no idea how to solve this problem :(
The same config runs successfully in my local environment (Windows + Docker).
This is the wdio config; I used the default dockerOptions.
wdio.conf
import { config as sharedConfig } from './wdio.shared.conf'

export const config: WebdriverIO.Config = {
  ...sharedConfig,
  ...{
    host: 'localhost',
    services: ['docker'],
    dockerLogs: './logs',
    dockerOptions: {
      image: 'selenium/standalone-chrome:4.1.2-20220131',
      healthCheck: {
        url: 'http://localhost:4444',
        maxRetries: 3,
        inspectInterval: 7000,
        startDelay: 15000
      },
      options: {
        p: ['4444:4444'],
        shmSize: '2g'
      }
    },
    capabilities: [{
      acceptInsecureCerts: true,
      browserName: 'chrome',
      browserVersion: 'latest',
      'goog:chromeOptions': {
        args: ['--verbose', '--headless', '--disable-gpu', 'window-size=1920,1800', '--no-sandbox', '--disable-dev-shm-usage', '--disable-extensions'],
      }
    }]
  }
}
After that, I try to run the UI test via Jenkins:
19:37:34 Run `npm audit` for details.
19:37:34 + npm run test:ci -- --spec ./test/specs/claim.BNB.spec.ts
19:37:34
19:37:34 > jasmine-boilerplate@1.0.0 test:ci
19:37:34 > wdio run wdio.ci.conf.ts
And got an error.
Logs attached:
wdio.log
2022-02-04T16:59:20.725Z DEBUG @wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:20.758Z INFO @wdio/cli:launcher: Run onPrepare hook
2022-02-04T16:59:20.760Z DEBUG wdio-docker-service: Docker command: docker run --cidfile /home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/selenium_standalone_chrome_latest.cid --rm -p 4444:4444 -p 5900:5900 --shm-size 2g selenium/standalone-chrome:latest
2022-02-04T16:59:20.769Z WARN wdio-docker-service: Connecting dockerEventsListener: 6283
2022-02-04T16:59:20.772Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:20.834Z INFO wdio-docker-service: Launching docker image 'selenium/standalone-chrome:latest'
2022-02-04T16:59:20.841Z INFO wdio-docker-service: Docker container is ready
2022-02-04T16:59:20.841Z DEBUG @wdio/cli:utils: Finished to run "onPrepare" hook in 82ms
2022-02-04T16:59:20.842Z INFO @wdio/cli:launcher: Run onWorkerStart hook
2022-02-04T16:59:20.843Z DEBUG @wdio/cli:utils: Finished to run "onWorkerStart" hook in 0ms
2022-02-04T16:59:20.843Z INFO @wdio/local-runner: Start worker 0-0 with arg: run,wdio.ci.conf.ts,--spec,./test/specs/claim.BNB.spec.ts
2022-02-04T16:59:22.034Z DEBUG @wdio/local-runner: Runner 0-0 finished with exit code 1
2022-02-04T16:59:22.035Z INFO @wdio/cli:launcher: Run onComplete hook
2022-02-04T16:59:22.036Z INFO wdio-docker-service: Shutting down running container
2022-02-04T16:59:32.372Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:32.373Z INFO wdio-docker-service: Docker container has stopped
2022-02-04T16:59:32.374Z WARN wdio-docker-service: Disconnecting dockerEventsListener: 6283
2022-02-04T16:59:32.374Z DEBUG @wdio/cli:utils: Finished to run "onComplete" hook in 10339ms
2022-02-04T16:59:32.430Z INFO @wdio/local-runner: Shutting down spawned worker
2022-02-04T16:59:32.681Z INFO @wdio/local-runner: Waiting for 0 to shut down gracefully
wdio-0-0.log
2022-02-04T16:59:21.223Z INFO @wdio/local-runner: Run worker command: run
2022-02-04T16:59:21.513Z DEBUG @wdio/config:utils: Found 'ts-node' package, auto-compiling TypeScript files
2022-02-04T16:59:21.714Z DEBUG @wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.717Z DEBUG @wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:21.828Z DEBUG @wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.840Z INFO devtools:puppeteer: Initiate new session using the DevTools protocol
2022-02-04T16:59:21.841Z INFO devtools: Launch Google Chrome with flags: --enable-automation --disable-popup-blocking --disable-extensions --disable-background-networking --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-sync --metrics-recording-only --disable-default-apps --mute-audio --no-first-run --no-default-browser-check --disable-hang-monitor --disable-prompt-on-repost --disable-client-side-phishing-detection --password-store=basic --use-mock-keychain --disable-component-extensions-with-background-pages --disable-breakpad --disable-dev-shm-usage --disable-ipc-flooding-protection --disable-renderer-backgrounding --force-fieldtrials=*BackgroundTracing/default/ --enable-features=NetworkService,NetworkServiceInProcess --disable-features=site-per-process,TranslateUI,BlinkGenPropertyTrees --window-position=0,0 --window-size=1200,900 --headless --disable-gpu --window-size=1920,1800 --no-sandbox --disable-dev-shm-usage --disable-extensions
2022-02-04T16:59:21.911Z ERROR @wdio/runner: Error:
at new LauncherError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/utils.ts:31:18)
at new ChromePathNotSetError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/dist/utils.js:33:9)
at Object.linux (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-finder.ts:153:11)
at Function.getFirstInstallation (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:182:61)
at Launcher.launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:252:37)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:74:18)
at launchChrome (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:80:55)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:179:16)
at Function.newSession (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/index.js:50:54)
at remote (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/webdriverio/build/index.js:67:43)
wdio-chromedriver.log
Starting ChromeDriver 97.0.4692.71 (adefa7837d02a07a604c1e6eff0b3a09422ab88d-refs/branch-heads/4692#{#1247}) on port 9515
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1643987609.767][SEVERE]: bind() failed: Cannot assign requested address (99)
docker-log.txt
2022-02-04 16:59:21,482 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2022-02-04 16:59:21,484 INFO supervisord started with pid 7
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
2022-02-04 16:59:22,487 INFO spawned: 'xvfb' with pid 9
2022-02-04 16:59:22,489 INFO spawned: 'vnc' with pid 10
2022-02-04 16:59:22,491 INFO spawned: 'novnc' with pid 11
2022-02-04 16:59:22,492 INFO spawned: 'selenium-standalone' with pid 12
2022-02-04 16:59:22,493 WARN received SIGTERM indicating exit request
2022-02-04 16:59:22,493 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
Setting up SE_NODE_GRID_URL...
2022-02-04 16:59:22,501 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: novnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
Selenium Grid Standalone configuration:
[network]
relax-checks = true
[node]
session-timeout = "300"
override-max-sessions = false
detect-drivers = false
max-sessions = 1
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "97.0", "platformName": "Linux"}'
max-sessions = 1
Starting Selenium Grid Standalone...
16:59:22.930 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
16:59:22.939 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
16:59:23.452 INFO [NodeOptions.getSessionFactories] - Detected 4 available processors
16:59:23.493 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "97.0","browserName": "chrome","platformName": "Linux","se:vncEnabled": true} 1 times
16:59:23.505 INFO [Node.<init>] - Binding additional locator mechanisms: name, id, relative
16:59:23.526 INFO [LocalDistributor.add] - Added node 150c2c05-2b08-4ba9-929a-45fef66bb193 at http://172.17.0.2:4444. Health check every 120s
16:59:23.540 INFO [GridModel.setAvailability] - Switching node 150c2c05-2b08-4ba9-929a-45fef66bb193 (uri: http://172.17.0.2:4444) from DOWN to UP
16:59:23.645 INFO [Standalone.execute] - Started Selenium Standalone 4.1.2 (revision 9a5a329c5a): http://172.17.0.2:4444
2022-02-04 16:59:26,091 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:29,095 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:32,097 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die

Kibana not connecting to Elasticsearch (["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"})

Context: I have been struggling the whole week to get this stack up and running: filebeat -> kafka -> logstash -> elasticsearch -> kibana, each one in its own Docker container (you will find three or four other unanswered questions of mine here, resulting from the different attempts). I decided to downsize the stack and then move block by block until I reach a final docker-compose. So I tried the simplest stack I can imagine, pushing forward the simplest log I can imagine, and I am facing the issue mentioned in the title.
Issue: I am trying to run three Docker containers straight from the command line: filebeat, elasticsearch and kibana. When I try to start Kibana I get "No living connections". I am carefully following the answer provided in another Stack Overflow question. Any clue why I am not able to connect from the Kibana container to the Elasticsearch container?
Here are all three docker commands:
docker run -d -p 9200:9200 -e "discovery.type=single-node" --volume C:\Dockers\simplest-try\esdata:/usr/share/elasticsearch/data --name elasticsearch_container docker.elastic.co/elasticsearch/elasticsearch:7.5.2
docker run -d --mount type=bind,source=C:\Dockers\simplest-try\filebeat.yml,target=/usr/share/filebeat/filebeat.yml --volume C:\Dockers\simplest-try\mylogs:/mylogs docker.elastic.co/beats/filebeat:7.5.2
docker run -d --name kibana -p 5601:5601 --link elasticsearch_container:elasticsearch_alias -e "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" docker.elastic.co/kibana/kibana:7.5.2
Elasticsearch is up and running:
C:\Dockers\simplest-try>curl localhost:9200
{
  "name" : "ffaa2d39a8b2",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "QWYLaAqwSqu76fNwFtZ5AA",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Kibana container console:
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins-system"],"pid":6,"message":"Setting up [15] plugins: [security,licensing,code,timelion,features,spaces,translations,uiActions,newsfeed,inspector,embeddable,advancedUiActions,expressions,eui_utils,data]"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","security"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","licensing"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","code"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","timelion"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","features"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","spaces"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","translations"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","data"],"pid":6,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-02-06T14:53:41Z","tags":["error","elasticsearch","data"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2020-02-06T14:53:42Z","tags":["warning","legacy-plugins"],"pid":6,"path":"/usr/share/kibana/src/legacy/core_plugins/visualizations","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/visualizations"}
{"type":"log","@timestamp":"2020-02-06T14:53:42Z","tags":["info","plugins-system"],"pid":6,"message":"Starting [8] plugins: [security,licensing,code,timelion,features,spaces,translations,data]"}
{"type":"log","@timestamp":"2020-02-06T14:53:42Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2020-02-06T14:53:42Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-02-06T14:53:42Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
{"type":"log","@timestamp":"2020-02-06T14:53:43Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2020-02-06T14:53:43Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana_task_manager => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2020-02-06T14:53:44Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2020-02-06T14:53:44Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-02-06T14:53:44Z","tags":["warning","migrations"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: No Living connections"}
Although not directly related to my question title, here are details about Filebeat.
Filebeat tries to harvest my log files:
2020-02-06T14:32:23.782Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-06T14:32:23.782Z INFO log/input.go:152 Configured paths: [/mylogs/*.log]
2020-02-06T14:32:23.782Z INFO input/input.go:114 Starting input of type: log; ID: 4094557846902174710
2020-02-06T14:32:23.782Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-06T14:32:23.788Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-06T14:32:23.790Z INFO log/harvester.go:251 Harvester started for file: /mylogs/x.log
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - '/mylogs/*.log'
  json.message_key: log
  json.keys_under_root: true

processors:
- add_docker_metadata: ~

output.elasticsearch:
  hosts: ["localhost:9200"]
*** Edited: logs after Ibexit's suggestion
2020-02-12T21:33:03.575Z INFO instance/beat.go:610 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2020-02-12T21:33:03.588Z INFO instance/beat.go:618 Beat ID: d0c71c07-23e0-44e5-b497-195ee9552fe8
2020-02-12T21:33:03.588Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:941 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "d0c71c07-23e0-44e5-b497-195ee9552fe8"}}}
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:950 Build info {"system_info": {"build": {"commit": "a9c141434cd6b25d7a74a9c770be6b70643dc767", "libbeat": "7.5.2", "time": "2020-01-15T11:13:22.000Z", "version": "7.5.2"}}}
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:953 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.12.12"}}}
2020-02-12T21:33:03.590Z INFO [beat] instance/beat.go:957 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-02-12T20:32:39Z","containerized":true,"name":"fcfaea4080e7","ip":["127.0.0.1/8","172.17.0.3/16"],"kernel_version":"4.19.76-linuxkit","mac":["02:42:ac:11:00:03"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":7,"patch":1908,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2020-02-12T21:33:03.590Z INFO [beat] instance/beat.go:986 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":null,"effective":null,"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-02-12T21:33:02.690Z"}}}
2020-02-12T21:33:03.590Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-12T21:33:03.590Z INFO [index-management] idxmgmt/std.go:182 Set output.elasticsearch.index to 'filebeat-7.5.2' as ILM is enabled.
2020-02-12T21:33:03.591Z INFO elasticsearch/client.go:171 Elasticsearch url: http://elasticsearch:9200
2020-02-12T21:33:03.591Z INFO [publisher] pipeline/module.go:97 Beat name: fcfaea4080e7
2020-02-12T21:33:03.593Z INFO instance/beat.go:429 filebeat start running.
2020-02-12T21:33:03.593Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-12T21:33:03.594Z INFO registrar/migrate.go:104 No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2020-02-12T21:33:03.594Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-12T21:33:03.600Z INFO registrar/registrar.go:108 No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-12T21:33:03.611Z INFO registrar/registrar.go:145 Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2020-02-12T21:33:03.611Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-12T21:33:03.612Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-12T21:33:03.612Z INFO log/input.go:152 Configured paths: [/mylogs/*.log]
2020-02-12T21:33:03.612Z INFO input/input.go:114 Starting input of type: log; ID: 4094557846902174710
2020-02-12T21:33:03.612Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-12T21:33:03.640Z INFO log/harvester.go:251 Harvester started for file: /mylogs/b.log
2020-02-12T21:33:03.640Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '\'' looking for beginning of object key string
2020-02-12T21:33:03.642Z INFO log/harvester.go:251 Harvester started for file: /mylogs/c.log
2020-02-12T21:33:03.644Z INFO log/harvester.go:251 Harvester started for file: /mylogs/w.log
2020-02-12T21:33:03.645Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'q' looking for beginning of value
2020-02-12T21:33:03.645Z INFO log/harvester.go:251 Harvester started for file: /mylogs/x.log
2020-02-12T21:33:03.652Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-12T21:33:04.654Z INFO pipeline/output.go:95 Connecting to backoff(elasticsearch(http://elasticsearch:9200))
2020-02-12T21:33:04.684Z INFO elasticsearch/client.go:753 Attempting to connect to Elasticsearch version 7.5.2
2020-02-12T21:33:04.720Z INFO [index-management] idxmgmt/std.go:256 Auto ILM enable success.
2020-02-12T21:33:04.724Z INFO [index-management.ilm] ilm/std.go:138 do not generate ilm policy: exists=true, overwrite=false
2020-02-12T21:33:04.724Z INFO [index-management] idxmgmt/std.go:269 ILM policy successfully loaded.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:408 Set setup.template.name to '{filebeat-7.5.2 {now/d}-000001}' as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:413 Set setup.template.pattern to 'filebeat-7.5.2-*' as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:447 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.5.2 {now/d}-000001} as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:451 Set settings.index.lifecycle.name in template to {filebeat-7.5.2 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2020-02-12T21:33:04.730Z INFO template/load.go:89 Template filebeat-7.5.2 already exists and will not be overwritten.
2020-02-12T21:33:04.730Z INFO [index-management] idxmgmt/std.go:293 Loaded index template.
2020-02-12T21:33:04.734Z INFO [index-management] idxmgmt/std.go:304 Write alias successfully generated.
2020-02-12T21:33:04.736Z INFO pipeline/output.go:105 Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
2020-02-12T21:33:33.595Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":50}},"total":{"ticks":100,"time":{"ms":107},"value":100},"user":{"ticks":50,"time":{"ms":57}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":30060}},"memstats":{"gc_next":8351264,"memory_alloc":4760176,"memory_total":12037984,"rss":43970560},"runtime":{"goroutines":42}},"filebeat":{"events":{"added":8,"done":8},"harvester":{"open_files":5,"running":5,"started":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3,"batches":1,"total":3},"read":{"bytes":2942},"type":"elasticsearch","write":{"bytes":2545}},"pipeline":{"clients":1,"events":{"active":0,"filtered":5,"published":3,"retry":3,"total":8},"queue":{"acked":3}}},"registrar":{"states":{"current":5,"update":8},"writes":{"success":7,"total":7}},"system":{"cpu":{"cores":2},"load":{"1":0.02,"15":0.08,"5":0.1,"norm":{"1":0.01,"15":0.04,"5":0.05}}}}}}
2020-02-12T21:33:58.657Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'E' looking for beginning of value
2020-02-12T21:33:58.657Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'a' looking for beginning of value
2020-02-12T21:34:03.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":60,"time":{"ms":13}},"total":{"ticks":120,"time":{"ms":16},"value":120},"user":{"ticks":60,"time":{"ms":3}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":60059}},"memstats":{"gc_next":8351264,"memory_alloc":5345000,"memory_total":12622808},"runtime":{"goroutines":42}},"filebeat":{"events":{"added":2,"done":2},"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2,"batches":1,"total":2},"read":{"bytes":351},"write":{"bytes":1062}},"pipeline":{"clients":1,"events":{"active":0,"published":2,"total":2},"queue":{"acked":2}}},"registrar":{"states":{"current":5,"update":2},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.01,"15":0.08,"5":0.09,"norm":{"1":0.005,"15":0.04,"5":0.045}}}}}}
2020-02-12T21:34:33.599Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":{"ms":10}},"total":{"ticks":130,"time":{"ms":14},"value":130},"user":{"ticks":60,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":90059}},"memstats":{"gc_next":8351264,"memory_alloc":5714936,"memory_total":12992744,"rss":380928},"runtime":{"goroutines":42}},"filebeat":{"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":5}},"system":{"load":{"1":0.07,"15":0.08,"5":0.1,"norm":{"1":0.035,"15":0.04,"5":0.05}}}}}}
2020-02-12T21:34:33.686Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:35:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":80,"time":{"ms":16}},"total":{"ticks":140,"time":{"ms":21},"value":140},"user":{"ticks":60,"time":{"ms":5}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":120059}},"memstats":{"gc_next":8351264,"memory_alloc":6130552,"memory_total":13408360},"runtime":{"goroutines":46}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":6,"running":6,"started":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":6,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.15,"15":0.09,"5":0.12,"norm":{"1":0.075,"15":0.045,"5":0.06}}}}}}
2020-02-12T21:35:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":100,"time":{"ms":14}},"total":{"ticks":170,"time":{"ms":23},"value":170},"user":{"ticks":70,"time":{"ms":9}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":150060}},"memstats":{"gc_next":7948720,"memory_alloc":4110408,"memory_total":13866968},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.09,"15":0.08,"5":0.11,"norm":{"1":0.045,"15":0.04,"5":0.055}}}}}}
2020-02-12T21:36:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":110,"time":{"ms":7}},"total":{"ticks":190,"time":{"ms":9},"value":190},"user":{"ticks":80,"time":{"ms":2}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":180059}},"memstats":{"gc_next":7948720,"memory_alloc":4399584,"memory_total":14156144},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.38,"15":0.11,"5":0.18,"norm":{"1":0.19,"15":0.055,"5":0.09}}}}}}
2020-02-12T21:36:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":120,"time":{"ms":11}},"total":{"ticks":200,"time":{"ms":15},"value":200},"user":{"ticks":80,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":210059}},"memstats":{"gc_next":7948720,"memory_alloc":4776320,"memory_total":14532880},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.23,"15":0.1,"5":0.16,"norm":{"1":0.115,"15":0.05,"5":0.08}}}}}}
2020-02-12T21:37:03.600Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":120,"time":{"ms":9}},"total":{"ticks":210,"time":{"ms":16},"value":210},"user":{"ticks":90,"time":{"ms":7}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":240059}},"memstats":{"gc_next":7948720,"memory_alloc":5142416,"memory_total":14898976},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.14,"15":0.1,"5":0.14,"norm":{"1":0.07,"15":0.05,"5":0.07}}}}}}
2020-02-12T21:37:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":{"ms":12}},"total":{"ticks":240,"time":{"ms":24},"value":240},"user":{"ticks":100,"time":{"ms":12}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":270060}},"memstats":{"gc_next":7946160,"memory_alloc":4111832,"memory_total":15348288},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.08,"15":0.09,"5":0.13,"norm":{"1":0.04,"15":0.045,"5":0.065}}}}}}
2020-02-12T21:38:03.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":150,"time":{"ms":11}},"total":{"ticks":250,"time":{"ms":12},"value":250},"user":{"ticks":100,"time":{"ms":1}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":300060}},"memstats":{"gc_next":7946160,"memory_alloc":4489960,"memory_total":15726416},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.1,"15":0.09,"5":0.13,"norm":{"1":0.05,"15":0.045,"5":0.065}}}}}}
2020-02-12T21:38:08.676Z INFO log/harvester.go:276 File is inactive: /mylogs/w.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.676Z INFO log/harvester.go:276 File is inactive: /mylogs/c.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.678Z INFO log/harvester.go:276 File is inactive: /mylogs/b.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.678Z INFO log/harvester.go:276 File is inactive: /mylogs/y.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:13.706Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-12T21:38:33.594Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":150,"time":{"ms":5}},"total":{"ticks":250,"time":{"ms":9},"value":250},"user":{"ticks":100,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":330059}},"memstats":{"gc_next":7946160,"memory_alloc":5014240,"memory_total":16250696},"runtime":{"goroutines":34}},"filebeat":{"events":{"added":5,"done":5},"harvester":{"closed":4,"open_files":3,"running":3,"started":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":5,"total":5}}},"registrar":{"states":{"current":6,"update":5},"writes":{"success":5,"total":5}},"system":{"load":{"1":0.88,"15":0.15,"5":0.31,"norm":{"1":0.44,"15":0.075,"5":0.155}}}}}}
2020-02-12T21:39:03.595Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":160,"time":{"ms":6}},"total":{"ticks":270,"time":{"ms":8},"value":270},"user":{"ticks":110,"time":{"ms":2}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":360059}},"memstats":{"gc_next":7946160,"memory_alloc":5284712,"memory_total":16521168},"runtime":{"goroutines":34}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.68,"15":0.16,"5":0.31,"norm":{"1":0.34,"15":0.08,"5":0.155}}}}}}
2020-02-12T21:39:03.676Z INFO log/harvester.go:276 File is inactive: /mylogs/x.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:39:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":160,"time":{"ms":5}},"total":{"ticks":270,"time":{"ms":12},"value":270},"user":{"ticks":110,"time":{"ms":7}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":390059}},"memstats":{"gc_next":7666032,"memory_alloc":3879448,"memory_total":16793464},"runtime":{"goroutines":30}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"closed":1,"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":6,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.48,"15":0.16,"5":0.3,"norm":{"1":0.24,"15":0.08,"5":0.15}}}}}}
2020-02-12T21:39:38.705Z INFO log/harvester.go:276 File is inactive: /mylogs/d.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:39:43.714Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:39:43.715Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2020-02-12T21:39:49.724Z INFO log/harvester.go:264 File was truncated. Begin reading file from offset 0: /mylogs/d.log
2020-02-12T21:39:53.720Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:39:53.721Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2020-02-12T21:40:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":190,"time":{"ms":30}},"total":{"ticks":320,"time":{"ms":46},"value":320},"user":{"ticks":130,"time":{"ms":16}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":420059}},"memstats":{"gc_next":7666032,"memory_alloc":4930512,"memory_total":17844528},"runtime":{"goroutines":30}},"filebeat":{"events":{"added":8,"done":8},"harvester":{"closed":2,"open_files":2,"running":2,"started":2},"input":{"log":{"files":{"truncated":1}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":4,"batches":2,"total":4},"read":{"bytes":702},"write":{"bytes":2270}},"pipeline":{"clients":1,"events":{"active":0,"filtered":4,"published":4,"total":8},"queue":{"acked":4}}},"registrar":{"states":{"current":6,"update":8},"writes":{"success":6,"total":6}},"system":{"load":{"1":0.59,"15":0.17,"5":0.33,"norm":{"1":0.295,"15":0.085,"5":0.165}}}}}}
The problem is that the three containers are separated from each other in terms of networking and/or misconfigured. Let us discuss what is actually happening and how to fix it:
1. Elasticsearch
You are starting an elasticsearch container named elasticsearch_container:
docker run -d -p 9200:9200 -e "discovery.type=single-node" --volume C:\Dockers\simplest-try\esdata:/usr/share/elasticsearch/data --name elasticsearch_container docker.elastic.co/elasticsearch/elasticsearch:7.5.2
So far, so good.
2. Filebeat
As mentioned at the beginning, the containers are separated from each other. In order to make elasticsearch visible to filebeat, you need to create a link:
docker run -d --link elasticsearch_container:elasticsearch --mount type=bind,source=C:\Dockers\simplest-try\filebeat.yml,target=/usr/share/filebeat/filebeat.yml --volume C:\Dockers\simplest-try\mylogs:/mylogs docker.elastic.co/beats/filebeat:7.5.2
Please note the container link --link elasticsearch_container:elasticsearch, which is the key here. Now that elasticsearch_container is visible to filebeat under the name elasticsearch, we need to change filebeat.yml accordingly:
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
Using localhost here would be interpreted from the perspective of the filebeat container, which is unaware of the Docker host, so localhost within the filebeat container addresses the filebeat container itself. With the configuration change above, we point it at the name of the linked elasticsearch container instead, which should do the trick.
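To quickly verify that the link works, you can query Elasticsearch by name from inside the filebeat container (curl should be available in the CentOS-based beats images; the container name or ID is whatever docker ps shows for filebeat):

docker exec <filebeat-container> curl -s http://elasticsearch:9200

If the link is in place, this returns the same cluster-info JSON as your curl against localhost:9200 on the host.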
3. Kibana
Kibana is complaining about a missing connection to elasticsearch:
Unable to revive connection: http://elasticsearch:9200
Here it is the same situation as for filebeat: elasticsearch is visible to the kibana container not under the name elasticsearch but under elasticsearch_alias. Additionally, ELASTICSEARCH_URL is not an expected setting in the version you are using; elasticsearch.hosts is the correct one, and it defaults to http://elasticsearch:9200. This is the root of the error message: Kibana does not recognise ELASTICSEARCH_URL, falls back to the default value, and fails because elasticsearch_container is linked as elasticsearch_alias and not as elasticsearch. Fixing this is easy: we just remove ELASTICSEARCH_URL and let Kibana fall back to the default. To make elasticsearch visible to kibana, we apply the same link as we already did for filebeat:
docker run -d --name kibana -p 5601:5601 --link elasticsearch_container:elasticsearch docker.elastic.co/kibana/kibana:7.5.2
Important:
Please dispose of (stop and remove) the old container instances before applying the changes discussed above, as they are still claiming the container names.
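For example (the names elasticsearch_container and kibana come from the commands above; the filebeat container was started without --name, so look its ID up with docker ps first):

docker rm -f elasticsearch_container kibana
docker ps        # find the filebeat container ID, then:
docker rm -f <filebeat-container-id>

As a side note, --link is a legacy Docker feature; a user-defined bridge network (docker network create some-net, then starting each container with --network some-net and a --name) provides the same name-based visibility without links.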

Management page won't load when using RabbitMQ docker container

I'm running RabbitMQ locally using:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Some log:
narley@brittes ~ $ docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: list of feature flags found:
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] drop_unroutable_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] empty_basic_get_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] implicit_default_bindings
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] quorum_queue
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] virtual_host_metadata
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2020-01-08 22:31:52.160 [info] <0.268.0> ra: meta data store initialised. 0 record(s) recovered
2020-01-08 22:31:52.162 [info] <0.273.0> WAL: recovering []
2020-01-08 22:31:52.164 [info] <0.277.0>
Starting RabbitMQ 3.8.2 on Erlang 22.2.1
Copyright (c) 2007-2019 Pivotal Software, Inc.
Licensed under the MPL 1.1. Website: https://rabbitmq.com
  ##  ##      RabbitMQ 3.8.2
  ##  ##
  ##########  Copyright (c) 2007-2019 Pivotal Software, Inc.
  ######  ##
  ##########  Licensed under the MPL 1.1. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: <stdout>
Config file(s): /etc/rabbitmq/rabbitmq.conf
Starting broker...2020-01-08 22:31:52.166 [info] <0.277.0>
  node           : rabbit@1586b4698736
  home dir       : /var/lib/rabbitmq
  config file(s) : /etc/rabbitmq/rabbitmq.conf
  cookie hash    : bwlnCFiUchzEkgAOsZwQ1w==
  log(s)         : <stdout>
  database dir   : /var/lib/rabbitmq/mnesia/rabbit@1586b4698736
2020-01-08 22:31:52.210 [info] <0.277.0> Running boot step pre_boot defined by app rabbit
...
...
...
2020-01-08 22:31:53.817 [info] <0.277.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.827 [info] <0.277.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step routing_ready defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step pre_flight defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step notify_cluster defined by app rabbit
2020-01-08 22:31:53.829 [info] <0.277.0> Running boot step networking defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.624.0> started TCP listener on [::]:5672
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step cluster_name defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step direct_client defined by app rabbit
2020-01-08 22:31:53.922 [info] <0.674.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2020-01-08 22:31:53.922 [info] <0.780.0> Statistics database started.
2020-01-08 22:31:53.923 [info] <0.779.0> Starting worker pool 'management_worker_pool' with 3 processes in it
completed with 3 plugins.
2020-01-08 22:31:54.316 [info] <0.8.0> Server startup complete; 3 plugins started.
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
Then I go to http://localhost:15672 and the page doesn't load. No error is displayed.
The interesting thing is that it worked the last time I used it (about 3 weeks ago).
Can anyone give me some help?
Cheers!
Have a try:
Step 1, go into the Docker container:
docker exec -it rabbitmq bash
Step 2, run this inside the container:
rabbitmq-plugins enable rabbitmq_management
It works for me.
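If the page still does not load, you can check from the host whether the plugin is enabled and the listener answers (the container name rabbitmq comes from the docker run command above):

docker exec rabbitmq rabbitmq-plugins list | grep management
curl -i http://localhost:15672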
I got it working by simply upgrading Docker: I was running Docker 18.09.7 and upgraded to 19.03.5.
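To confirm which version you are on before and after the upgrade (docker version is part of the standard CLI):

docker version --format '{{.Server.Version}}'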
In my case, clearing the browser cookies fixed this issue instantly.

Hadoop docker: Cannot connect to resource manager

I am super new to Docker and I am trying to configure single-node Hadoop using Docker on an Ubuntu server. Here is what I have done so far.
$ docker pull sequenceiq/hadoop-docker:2.7.1
......
$ docker run -it sequenceiq/hadoop-docker:2.7.1 /etc/bootstrap.sh -bash
Starting sshd: [ OK ]
18/06/27 12:59:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [cb46e163e0be]
cb46e163e0be: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-cb46e163e0be.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-cb46e163e0be.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-cb46e163e0be.out
18/06/27 12:59:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-cb46e163e0be.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-cb46e163e0be.out
bash-4.1# jps
532 ResourceManager
204 DataNode
118 NameNode
371 SecondaryNameNode
918 Jps
620 NodeManager
jps shows that the ResourceManager is running. Now I tried to test Hadoop:
bash-4.1# cd $HADOOP_PREFIX
bash-4.1# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+'
18/06/27 13:02:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/06/27 13:02:25 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/06/27 13:02:27 INFO input.FileInputFormat: Total input paths to process : 31
18/06/27 13:02:27 INFO mapreduce.JobSubmitter: number of splits:31
18/06/27 13:02:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1530118774059_0001
18/06/27 13:02:28 INFO impl.YarnClientImpl: Submitted application application_1530118774059_0001
18/06/27 13:02:28 INFO mapreduce.Job: The url to track the job: http://cb46e163e0be:8088/proxy/application_1530118774059_0001/
18/06/27 13:02:28 INFO mapreduce.Job: Running job: job_1530118774059_0001
18/06/27 13:02:44 INFO mapreduce.Job: Job job_1530118774059_0001 running in uber mode : false
18/06/27 13:02:44 INFO mapreduce.Job: map 0% reduce 0%
18/06/27 13:05:56 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
18/06/27 13:05:57 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
18/06/27 13:05:58 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
18/06/27 13:05:59 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
18/06/27 13:06:00 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
18/06/27 13:06:01 INFO ipc.Client: Retrying connect to server: cb46e163e0be/172.17.0.2:42698. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
^C
bash-4.1# jps
532 ResourceManager
204 DataNode
1671 Jps
371 SecondaryNameNode
bash-4.1#
Now there are two things I don't understand here.
This is a semi-official Hadoop image, so why is it not running correctly? Did I make a mistake? If yes, then what?
When I ran jps before running the example code, the NodeManager and NameNode were listed. But after running the example and quitting it, these two processes were no longer returned by jps. Why is that?
Please help. Thanks

Kafka on Minikube: Back-off restarting failed container

I need to bring up Kafka and Cassandra in Minikube.
The host OS is Ubuntu 16.04:
$ uname -a
Linux minikuber 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Minikube started normally:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Services list:
$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   1d
Zookeeper and Cassandra are running, but Kafka keeps crashing with "CrashLoopBackOff":
$ kubectl get pods
NAME                        READY   STATUS             RESTARTS   AGE
zookeeper-775db4cd8-lpl95   1/1     Running            0          1h
cassandra-d84d697b8-p5wcs   1/1     Running            0          1h
kafka-6d889c567-w5n4s       0/1     CrashLoopBackOff   25         1h
View logs:
kubectl logs kafka-6d889c567-w5n4s -p
Output:
waiting for kafka to be ready
...
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
...
INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server '' with timeout of 6000 ms
...
INFO shutting down (kafka.server.KafkaServer)
INFO shut down completed (kafka.server.KafkaServer)
FATAL Exiting Kafka. (kafka.server.KafkaServerStartable)
Can anyone help with how to solve the problem of the container restarting?
kubectl describe pod kafka-6d889c567-w5n4s
Output of the describe command:
Name: kafka-6d889c567-w5n4s
Namespace: default
Node: minikube/192.168.99.100
Start Time: Thu, 23 Nov 2017 17:03:20 +0300
Labels: pod-template-hash=284457123
run=kafka
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"kafka-6d889c567","uid":"0fa94c8d-d057-11e7-ad48-080027a5dfed","a...
Status: Running
IP: 172.17.0.5
Created By: ReplicaSet/kafka-6d889c567
Controlled By: ReplicaSet/kafka-6d889c567
Info about Containers:
Containers:
  kafka:
    Container ID:   docker://7ed3de8ef2e3e665ba693186f5125c6802283e1fabca8f3c85eb584f8de19526
    Image:          wurstmeister/kafka
    Image ID:       docker-pullable://wurstmeister/kafka@sha256:2aa183fd201d693e24d4d5d483b081fc2c62c198a7acb8484838328c83542c96
    Port:           <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 27 Nov 2017 09:43:39 +0300
      Finished:     Mon, 27 Nov 2017 09:43:49 +0300
    Ready:          False
    Restart Count:  1003
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bnz99 (ro)
Info about Conditions:
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Info about volumes:
Volumes:
  default-token-bnz99:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bnz99
    Optional:    false
QoS Class:       BestEffort
Info about events:
Events:
  Type     Reason      Age                   From               Message
  ----     ------      ----                  ----               -------
  Normal   Pulling     38m (x699 over 2d)    kubelet, minikube  pulling image "wurstmeister/kafka"
  Warning  BackOff     18m (x16075 over 2d)  kubelet, minikube  Back-off restarting failed container
  Warning  FailedSync  3m (x16140 over 2d)   kubelet, minikube  Error syncing pod
