I ran the following command to bring the Kafka cluster up:
sudo docker compose up kafka-cluster
I successfully accessed the Landoop UI portal a day ago, but after shutting down the system and performing the same steps again, I am now unable to access the Landoop UI at this local URL:
http://127.0.0.1:3030
I am using Ubuntu 20.04, and the following logs were generated in the terminal.
[sudo] password for pc-11:
[+] Running 1/0
⠿ Container code-kafka-cluster-1 Created 0.0s
Attaching to code-kafka-cluster-1
code-kafka-cluster-1 | Setting advertised host to 127.0.0.1.
code-kafka-cluster-1 | Starting services.
code-kafka-cluster-1 | This is landoop’s fast-data-dev. Kafka 0.11.0.0, Confluent OSS 3.3.0.
code-kafka-cluster-1 | You may visit http://127.0.0.1:3030 in about a minute.
code-kafka-cluster-1 | 2022-07-14 08:48:34,716 CRIT Supervisor running as root (no user in config file)
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,731 INFO supervisord started with pid 7
code-kafka-cluster-1 | 2022-07-14 08:48:35,735 INFO spawned: 'sample-data' with pid 91
code-kafka-cluster-1 | 2022-07-14 08:48:35,753 INFO spawned: 'zookeeper' with pid 93
code-kafka-cluster-1 | 2022-07-14 08:48:35,766 INFO spawned: 'caddy' with pid 94
code-kafka-cluster-1 | 2022-07-14 08:48:35,770 INFO spawned: 'broker' with pid 95
code-kafka-cluster-1 | 2022-07-14 08:48:35,773 INFO spawned: 'smoke-tests' with pid 97
code-kafka-cluster-1 | 2022-07-14 08:48:35,776 INFO spawned: 'connect-distributed' with pid 98
code-kafka-cluster-1 | 2022-07-14 08:48:35,779 INFO spawned: 'logs-to-kafka' with pid 99
code-kafka-cluster-1 | 2022-07-14 08:48:35,782 INFO spawned: 'schema-registry' with pid 100
code-kafka-cluster-1 | 2022-07-14 08:48:35,785 INFO spawned: 'rest-proxy' with pid 101
code-kafka-cluster-1 | 2022-07-14 08:48:36,262 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:37,264 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,264 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,266 INFO spawned: 'caddy' with pid 381
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,268 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,280 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:39,285 INFO spawned: 'caddy' with pid 389
code-kafka-cluster-1 | 2022-07-14 08:48:39,348 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:42,444 INFO spawned: 'caddy' with pid 403
code-kafka-cluster-1 | 2022-07-14 08:48:42,450 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:42,508 INFO gave up: caddy entered FATAL state, too many start retries too quickly
code-kafka-cluster-1 | 2022-07-14 08:49:04,090 INFO exited: schema-registry (exit status 1; not expected)
code-kafka-cluster-1 | 2022-07-14 08:49:04,099 INFO spawned: 'schema-registry' with pid 485
code-kafka-cluster-1 | 2022-07-14 08:49:05,124 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:49:35,818 INFO exited: smoke-tests (exit status 0; expected)
code-kafka-cluster-1 | 2022-07-14 08:51:35,933 INFO exited: logs-to-kafka (exit status 0; expected)
code-kafka-cluster-1 | 2022-07-14 08:52:53,146 INFO exited: sample-data (exit status 0; expected)
I figured out the solution. Since the pinned fast-data-dev image is not maintained, we can make the change in the configuration, i.e. in docker-compose.yml: I replaced landoop/fast-data-dev:cp3.3.0 with landoop/fast-data-dev:latest. My final docker-compose.yml is as follows:
version: '2'

services:
  # this is our kafka cluster.
  kafka-cluster:
    image: landoop/fast-data-dev:latest
    environment:
      ADV_HOST: 127.0.0.1        # Change to 192.168.99.100 if using Docker Toolbox
      RUNTESTS: 0                # Disable running tests so the cluster starts faster
    ports:
      - 2181:2181                # Zookeeper
      - 3030:3030                # Landoop UI
      - 8081-8083:8081-8083      # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585      # JMX ports
      - 9092:9092                # Kafka broker

  # we will use elasticsearch as one of our sinks.
  # This configuration allows you to start elasticsearch
  elasticsearch:
    image: itzg/elasticsearch:2.4.3
    environment:
      PLUGINS: appbaseio/dejavu
      OPTS: -Dindex.number_of_shards=1 -Dindex.number_of_replicas=0
    ports:
      - "9200:9200"

  # we will use postgres as one of our sinks.
  # This configuration allows you to start postgres
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres        # define credentials
      POSTGRES_PASSWORD: postgres    # define credentials
      POSTGRES_DB: postgres          # define database
    ports:
      - 5432:5432                    # Postgres port
After just updating the image to the latest tag, I was able to reach the Landoop UI at 127.0.0.1:3030.
I can also still access the Landoop UI after shutting down the cluster and bringing it up again.
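To confirm the UI is actually reachable after a restart (and not just that the container shows as "Created"), a quick TCP probe of the published port helps. This is a minimal sketch, not part of the original setup; the host and port are the ones published in the compose file above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

if __name__ == "__main__":
    # 3030 is the Landoop UI port from docker-compose.yml
    print("Landoop UI reachable:", port_open("127.0.0.1", 3030))
```

If this prints False while the container is up, the problem is the service inside the container (e.g. caddy exiting, as in the logs above) rather than Docker's port mapping.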
File /usr/share/logstash/config/ports.conf:

input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "hello-logstash-docker"
  }
}
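The tcp input above accepts newline-delimited events on port 5000, so one way to exercise the pipeline is to send a test event by hand. This is a hypothetical helper, not from the original post; the host and port assume the setup above:

```python
import json
import socket

def send_event(message: str, host: str = "127.0.0.1", port: int = 5000) -> bytes:
    """Send one newline-delimited JSON event to a Logstash tcp input and return the bytes sent."""
    payload = (json.dumps({"message": message}) + "\n").encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
    return payload
```

Once Logstash has flushed the event to Elasticsearch, the hello-logstash-docker index should show up in GET /_cat/indices; if nothing ever arrives on port 5000, no index is created at all.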
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby8545812400624390168jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2022-03-07T11:37:31,708][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
logstash_1 | [2022-03-07T11:37:32,881][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2022-03-07T11:37:32,884][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2022-03-07T11:37:33,968][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:34,325][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:34,388][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,391][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,540][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2022-03-07T11:37:34,541][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2022-03-07T11:37:36,344][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 22 keys and 45 values
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,908][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,914][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,919][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,920][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,924][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,930][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,969][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,972][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,984][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2022-03-07T11:37:37,058][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2022-03-07T11:37:37,109][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x7bdfd22b@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
logstash_1 | [2022-03-07T11:37:37,148][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/ports.conf"], :thread=>"#<Thread:0x675f8073@/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:141 run>"}
logstash_1 | [2022-03-07T11:37:37,152][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2022-03-07T11:37:38,151][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.0}
logstash_1 | [2022-03-07T11:37:38,152][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.04}
logstash_1 | [2022-03-07T11:37:38,206][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2022-03-07T11:37:38,465][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2022-03-07T11:37:38,474][INFO ][logstash.inputs.tcp ][main][2dd5b8304a815578c4e06e3aec9e54f0316a8b63a07cd77090a1ddb785d8c617] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2022-03-07T11:37:38,533][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2022-03-07T11:37:38,923][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I run GET /_cat/indices in Kibana, but the "hello-logstash-docker" index is not found. Please tell me where the error is.
I followed the instructions here: https://www.youtube.com/watch?v=I2ZS2Wlk1No
My Dockerfile is:
FROM openjdk:8
VOLUME /tmp
ADD target/demo-0.0.1-SNAPSHOT.jar app.jar
#RUN bash -c 'touch /app.jar'
#EXPOSE 8080
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongo/players","-jar","/app.jar"]
And the docker-compose.yml is:
version: "3"

services:
  spring-docker:
    build: .
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - db

  db:
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27000:27017"
    restart: always
I have the Docker image, and when I run docker-compose up, everything goes well without any error.
But in Postman, when I send a GET request to localhost:8080/player, I get no output at all, so I tried the IP of the docker-machine instead, e.g. 192.168.99.101:8080, but then I get a 404 Not Found error in Postman.
What is my mistake?!
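"No output" on localhost and a 404 on the docker-machine IP are different failures: the first means nothing answered at all, the second means a server is reachable but has no such route. A small probe can separate the two cases; this is a hypothetical helper, and the URLs are the ones from the question:

```python
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> str:
    """Classify an endpoint: 'http <code>' if a server answered, else 'unreachable'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"http {resp.status}"
    except urllib.error.HTTPError as e:
        # A server answered, but with an error status (e.g. 404: no such route)
        return f"http {e.code}"
    except (urllib.error.URLError, OSError):
        # Nothing listening, or the host/port is not reachable from here
        return "unreachable"

if __name__ == "__main__":
    for url in ("http://localhost:8080/player", "http://192.168.99.101:8080/player"):
        print(url, "->", probe(url))
```

"unreachable" on localhost with a Docker Toolbox/docker-machine setup usually means the ports are published on the VM's IP rather than on the host's localhost; an "http 404" means the request reached Tomcat but no controller is mapped at that path.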
The docker-compose logs:
$ docker-compose logs
Attaching to thesismongoproject_spring-docker_1, thesismongoproject_db_1
spring-docker_1 |
spring-docker_1 | . ____ _ __ _ _
spring-docker_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
spring-docker_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
spring-docker_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
spring-docker_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
spring-docker_1 | =========|_|==============|___/=/_/_/_/
spring-docker_1 | :: Spring Boot :: (v2.2.6.RELEASE)
spring-docker_1 |
spring-docker_1 | 2020-05-31 11:36:39.598 INFO 1 --- [ main] thesisMongoProject.Application : Starting Application v0.0.1-SNAPSHOT on e81ccff8ba0e with PID 1 (/demo-0.0.1-SNAPSHOT.jar started by root in /)
spring-docker_1 | 2020-05-31 11:36:39.620 INFO 1 --- [ main] thesisMongoProject.Application : No active profile set, falling back to default profiles: default
spring-docker_1 | 2020-05-31 11:36:41.971 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
spring-docker_1 | 2020-05-31 11:36:42.216 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 225ms. Found 4 MongoDB repository interfaces.
spring-docker_1 | 2020-05-31 11:36:44.319 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4810 ms
spring-docker_1 | 2020-05-31 11:36:46.183 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[db:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
spring-docker_1 | 2020-05-31 11:36:46.781 INFO 1 --- [null'}-db:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to db:27017
spring-docker_1 | 2020-05-31 11:36:46.802 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=db:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 7]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=5468915}
spring-docker_1 | 2020-05-31 11:36:48.829 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
spring-docker_1 | 2020-05-31 11:36:49.546 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
spring-docker_1 | 2020-05-31 11:36:49.581 INFO 1 --- [ main] thesisMongoProject.Application : Started Application in 11.264 seconds (JVM running for 13.615)
spring-docker_1 | 2020-05-31 11:40:10.290 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
db_1 | 2020-05-31T11:36:35.623+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2020-05-31T11:36:35.639+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
db_1 | 2020-05-31T11:36:35.645+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=1a0e5bc0c503
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] db version v4.2.7
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] modules: none
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten] build environment:
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distmod: ubuntu1804
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distarch: x86_64
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     target_arch: x86_64
db_1 | 2020-05-31T11:36:35.648+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
db_1 | 2020-05-31T11:36:35.649+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2020-05-31T11:36:35.650+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1 | 2020-05-31T11:36:37.046+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:46670][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.231+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:231423][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.294+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:294858][1:0x7f393f9a0b00], txn-recover: Main recovery loop: starting at 9/6016 to 10/256
db_1 | 2020-05-31T11:36:37.447+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:447346][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.564+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:564841][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.645+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:645216][1:0x7f393f9a0b00], txn-recover: Set global recovery timestamp: (0, 0)
db_1 | 2020-05-31T11:36:37.681+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2020-05-31T11:36:37.703+0000 I STORAGE [initandlisten] Timestamp monitor starting
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1 | 2020-05-31T11:36:37.705+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.712+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.722+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
db_1 | 2020-05-31T11:36:37.722+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.724+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.726+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.729+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2020-05-31T11:36:37.740+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on 0.0.0.0
db_1 | 2020-05-31T11:36:37.749+0000 I NETWORK [listener] waiting for connections on port 27017
db_1 | 2020-05-31T11:36:38.001+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
db_1 | 2020-05-31T11:36:46.536+0000 I NETWORK [listener] connection accepted from 172.19.0.3:40656 #1 (1 connection now open)
db_1 | 2020-05-31T11:36:46.653+0000 I NETWORK [conn1] received client metadata from 172.19.0.3:40656 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.14.154-boot2docker" }, platform: "Java/Oracle Corporation/1.8.0_252-b09" }
db_1 | 2020-05-31T11:40:10.302+0000 I NETWORK [conn1] end connection 172.19.0.3:40656 (0 connections now open)
db_1 | 2020-05-31T11:40:10.523+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
db_1 | 2020-05-31T11:40:10.730+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
db_1 | 2020-05-31T11:40:10.731+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:40:10.731+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
db_1 | 2020-05-31T11:40:10.796+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
db_1 | 2020-05-31T11:40:10.800+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
db_1 | 2020-05-31T11:40:10.803+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
db_1 | 2020-05-31T11:40:10.811+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [TimestampMonitor] Timestamp monitor is stopping due to: interrupted at shutdown
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.916+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.935+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
db_1 | 2020-05-31T11:40:10.942+0000 I CONTROL [signalProcessingThread] now exiting
db_1 | 2020-05-31T11:40:10.943+0000 I CONTROL [signalProcessingThread] shutting down with code:0
To solve this problem, I had to add the @EnableAutoConfiguration(exclude={MongoAutoConfiguration.class}) annotation.
I am trying to run the ELK stack using the popular Docker image on DockerHub, sebp/elk.
In my project dir, I have the following two files:
logstash.conf (I just want to see if logstash works, so I'm reading from stdin and writing to stdout rather than Elasticsearch):
input { stdin {} }
output { stdout {} }
docker-compose.yml:
elk:
  image: sebp/elk
  ports:
    - "5605:5601"
    - "9200:9200"
    - "9300:9300"
    - "5044:5044"
  volumes:
    - /path/to/project/dir/logstash.conf:/usr/share/logstash/config/logstash.conf
When I run docker-compose up elk, the following output is displayed:
elk_1 | * Starting periodic command scheduler cron
elk_1 | ...done.
elk_1 | * Starting Elasticsearch Server
elk_1 | ...done.
elk_1 | waiting for Elasticsearch to be up (1/30)
elk_1 | waiting for Elasticsearch to be up (2/30)
elk_1 | waiting for Elasticsearch to be up (3/30)
elk_1 | waiting for Elasticsearch to be up (4/30)
elk_1 | waiting for Elasticsearch to be up (5/30)
elk_1 | waiting for Elasticsearch to be up (6/30)
elk_1 | waiting for Elasticsearch to be up (7/30)
elk_1 | waiting for Elasticsearch to be up (8/30)
elk_1 | waiting for Elasticsearch to be up (9/30)
elk_1 | waiting for Elasticsearch to be up (10/30)
elk_1 | waiting for Elasticsearch to be up (11/30)
elk_1 | Waiting for Elasticsearch cluster to respond (1/30)
elk_1 | logstash started.
elk_1 | * Starting Kibana5
elk_1 | ...done.
elk_1 | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1 | [2018-08-11T17:34:41,530][INFO ][o.e.g.GatewayService ] [pIJHFdO] recovered [0] indices into cluster_state
elk_1 | [2018-08-11T17:34:41,926][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.watches] for index patterns [.watches*]
elk_1 | [2018-08-11T17:34:42,033][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.watch-history-7] for index patterns [.watcher-history-7*]
elk_1 | [2018-08-11T17:34:42,099][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.triggered_watches] for index patterns [.triggered_watches*]
elk_1 | [2018-08-11T17:34:42,205][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
elk_1 | [2018-08-11T17:34:42,288][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
elk_1 | [2018-08-11T17:34:42,338][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
elk_1 | [2018-08-11T17:34:42,374][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
elk_1 | [2018-08-11T17:34:42,431][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
elk_1 | [2018-08-11T17:34:42,523][INFO ][o.e.l.LicenseService ] [pIJHFdO] license [f28743a3-8cc3-46ad-8c75-7c096c7afaa7] mode [basic] - valid
elk_1 |
elk_1 | ==> /var/log/logstash/logstash-plain.log <==
elk_1 |
elk_1 | ==> /var/log/kibana/kibana5.log <==
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:kibana@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:elasticsearch@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:xpack_main@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:searchprofiler@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:ml@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:tilemap@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:watcher@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:license_management@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:index_management@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:timelion@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:graph@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:monitoring@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:searchprofiler@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:ml@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:tilemap@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:watcher@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:index_management@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:graph@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:security@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:grokdebugger@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:logstash@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:reporting@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":247,"message":"Starting all Kibana monitoring collectors"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["license","info","xpack"],"pid":247,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
elk_1 |
elk_1 | ==> /var/log/logstash/logstash-plain.log <==
elk_1 | [2018-08-11T17:35:08,371][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
elk_1 | [2018-08-11T17:35:08,380][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
elk_1 | [2018-08-11T17:35:08,990][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
elk_1 | [2018-08-11T17:35:09,025][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"aa287931-643e-47ae-bd8e-f982c75b2105", :path=>"/opt/logstash/data/uuid"}
elk_1 | [2018-08-11T17:35:09,779][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
elk_1 | [2018-08-11T17:35:13,753][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost], manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", id=>"c4ee5abcf701afed0db36d4aa16c4fc10da6a92bbd615d837cccdf2f368b7802", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_21596240-07d7-4d2e-b4e5-bb68516e5a61", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
elk_1 | [2018-08-11T17:35:13,823][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
elk_1 | [2018-08-11T17:35:15,074][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
elk_1 | [2018-08-11T17:35:15,090][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1 | [2018-08-11T17:35:15,360][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
elk_1 | [2018-08-11T17:35:15,518][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
elk_1 | [2018-08-11T17:35:15,525][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
elk_1 | [2018-08-11T17:35:15,569][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
elk_1 | [2018-08-11T17:35:16,370][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
elk_1 | [2018-08-11T17:35:16,445][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2c697fd4 run>"}
elk_1 | [2018-08-11T17:35:16,602][INFO ][org.logstash.beats.Server] Starting server on port: 5044
elk_1 | [2018-08-11T17:35:16,643][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
elk_1 | [2018-08-11T17:35:17,096][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
elk_1 |
elk_1 | ==> /var/log/kibana/kibana5.log <==
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:35:20Z","tags":["listening","info"],"pid":247,"message":"Server running at http://0.0.0.0:5601"}
Now, Kibana and Elasticsearch seem to be perfectly fine, but Logstash isn't doing anything: when I type something in the terminal, I get no response.
Running ps aux in the container bash terminal, I get the following:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 21332 3592 ? Ss 17:50 0:00 /bin/bash /usr/local/bin/start.sh
root 20 0.0 0.0 29272 2576 ? Ss 17:50 0:00 /usr/sbin/cron
elastic+ 86 18.0 4.4 5910168 1479108 ? Sl 17:50 0:46 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -X
elastic+ 112 0.0 0.0 135668 7328 ? Sl 17:50 0:00 /opt/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
logstash 226 43.6 2.2 5714032 726940 ? SNl 17:50 1:47 /usr/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djav
kibana 243 20.0 0.4 1315812 155744 ? Sl 17:50 0:49 /opt/kibana/bin/../node/bin/node --max-old-space-size=250 --no-warnings /opt/kibana/bin/../src/cli -l /var/log/kibana/kibana5.log
root 245 0.0 0.0 7612 672 ? S 17:50 0:00 tail -f /var/log/elasticsearch/elasticsearch.log /var/log/logstash/logstash-plain.log /var/log/kibana/kibana5.log
root 323 1.3 0.0 21488 3544 pts/0 Ss 17:54 0:00 bash
root 340 0.0 0.0 37656 3300 pts/0 R+ 17:54 0:00 ps aux
Running ll /var/log/logstash/ in the container bash terminal, I get the following:
total 16
drwxr-xr-x 1 logstash logstash 4096 Aug 11 17:51 ./
drwxr-xr-x 1 root root 4096 Jul 26 14:27 ../
-rw-r--r-- 1 root root 0 Aug 11 17:50 logstash.err
-rw-r--r-- 1 logstash logstash 3873 Aug 11 17:51 logstash-plain.log
-rw-r--r-- 1 logstash logstash 0 Aug 11 17:51 logstash-slowlog-plain.log
-rw-r--r-- 1 root root 3964 Aug 11 17:51 logstash.stdout
Now, I did change logstash.conf to have the following:
input { stdin {} }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Still, when I type something in the terminal, nothing appears in the Discover section of Kibana, nor has any index pattern been created...
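(Aside: a stdin input only produces events when Logstash has a terminal attached to it, which a background service does not. A file-based input would sidestep that requirement; a minimal sketch, where /tmp/test.log is a hypothetical path inside the container:)

```conf
input {
  # Tail a file instead of reading stdin; any process can append to it
  file {
    path => "/tmp/test.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

With that config, echo 'hello' >> /tmp/test.log inside the container would be enough to generate an event.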
Running ps aux in the container bash terminal, I get the following:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 21332 3600 ? Ss 17:40 0:00 /bin/bash /usr/local/bin/start.sh
root 21 0.0 0.0 29272 2568 ? Ss 17:40 0:00 /usr/sbin/cron
elastic+ 87 12.0 4.5 5912216 1484068 ? Sl 17:40 0:52 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -X
elastic+ 113 0.0 0.0 135668 7332 ? Sl 17:40 0:00 /opt/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
logstash 224 27.8 2.3 5714032 771528 ? SNl 17:40 1:58 /usr/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djav
kibana 241 12.0 0.5 1322444 181228 ? Sl 17:40 0:50 /opt/kibana/bin/../node/bin/node --max-old-space-size=250 --no-warnings /opt/kibana/bin/../src/cli -l /var/log/kibana/kibana5.log
root 246 0.0 0.0 7612 692 ? S 17:40 0:00 tail -f /var/log/elasticsearch/elasticsearch.log /var/log/logstash/logstash-plain.log /var/log/kibana/kibana5.log
root 317 1.0 0.0 21488 3744 pts/0 Ss 17:47 0:00 bash
root 334 0.0 0.0 37656 3356 pts/0 R+ 17:48 0:00 ps aux
Running ll /var/log/logstash/ in the container bash terminal, I get the following:
total 16
drwxr-xr-x 1 logstash logstash 4096 Aug 11 17:41 ./
drwxr-xr-x 1 root root 4096 Jul 26 14:27 ../
-rw-r--r-- 1 root root 0 Aug 11 17:40 logstash.err
-rw-r--r-- 1 logstash logstash 3873 Aug 11 17:41 logstash-plain.log
-rw-r--r-- 1 logstash logstash 0 Aug 11 17:41 logstash-slowlog-plain.log
-rw-r--r-- 1 root root 3964 Aug 11 17:41 logstash.stdout
I have been spending a good amount of time with no luck here, so any help would be highly appreciated!
So, I did find a solution thanks to the owner of the elk image repo.
I followed the instructions from this page. That is, I entered the container's bash shell by running docker exec -it <container-name> bash, and then (inside the container terminal) I ran the command /opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'.
The problem was that although the Logstash service had been started, it did not have an interactive terminal attached, so the stdin input had nothing to read from. The command above addresses this problem by starting a second Logstash instance in the foreground.
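To summarize the fix as commands (the container name elk is a placeholder; use whatever docker ps shows for yours):

```shell
# Open a shell inside the running ELK container
docker exec -it elk bash

# Inside the container: start a second, foreground Logstash instance with an
# interactive stdin input. --path.data points at a separate data directory so
# it does not collide with the lock held by the already-running service
# instance, and -e supplies the pipeline configuration inline.
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
  -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
```

Because this instance owns the terminal, anything typed into it becomes an event and is shipped to Elasticsearch.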
The following logs were displayed inside the container terminal:
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-12T06:28:28,941][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-12T06:28:28,948][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-12T06:28:29,592][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-12T06:28:29,656][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"29cb946b-2bed-4390-b0cb-9aad6ef5a2a2", :path=>"/tmp/logstash/data/uuid"}
[2018-08-12T06:28:30,634][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-12T06:28:32,911][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-12T06:28:33,646][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-12T06:28:33,663][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-12T06:28:34,107][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-12T06:28:34,205][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-12T06:28:34,212][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-12T06:28:34,268][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2018-08-12T06:28:34,364][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-12T06:28:34,442][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-12T06:28:34,496][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5dcf75c7 run>"}
[2018-08-12T06:28:34,602][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
The stdin plugin is now waiting for input:
[2018-08-12T06:28:34,727][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-12T06:28:35,607][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
And the following inside my server terminal:
elk_1 | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1 | [2018-08-12T06:28:34,777][INFO ][o.e.c.m.MetaDataIndexTemplateService] [jqTz2zS] adding template [logstash] for index patterns [logstash-*]
elk_1 | [2018-08-12T06:28:35,214][INFO ][o.e.c.m.MetaDataCreateIndexService] [jqTz2zS] [logstash-2018.08.12] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
elk_1 | [2018-08-12T06:28:36,207][INFO ][o.e.c.m.MetaDataMappingService] [jqTz2zS] [logstash-2018.08.12/hiLssj14TMKd5lzBq6tvrw] create_mapping [doc]
Doing so, an index pattern was indeed created inside Kibana, and I started to receive messages in the Discover tab.
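If you want to confirm that documents are arriving without going through the Kibana UI, you can query Elasticsearch directly (this assumes it is reachable on localhost:9200, as in the logs above):

```shell
# List the daily logstash-* indices and their document counts
curl -s 'http://localhost:9200/_cat/indices/logstash-*?v'

# Peek at the most recently indexed documents
curl -s 'http://localhost:9200/logstash-*/_search?size=3&sort=@timestamp:desc&pretty'
```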