Logstash MySQL JDBC LoadError: no such file to load -- <file-path> - docker

I want to build Docker-ELK from this repository.
This is my logstash.conf file:
input {
  jdbc {
    jdbc_driver_library => "/home/edsoft/IdeaProjects/docker-elk/resources/mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/yazilimokulu"
    jdbc_user => "root"
    jdbc_password => "1"
    schedule => "* * * * *"
    statement => "select * from posts"
  }
  tcp {
    port => 5000
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "posts"
    document_type => "post"
    document_id => "%{id}" ## must be lower case
  }
}
I run Docker with docker-compose. Kibana and Elasticsearch start successfully, but Logstash throws this error:
LoadError: no such file to load -- /home/edsoft/IdeaProjects/docker-elk/resources/mysql-connector-java-5.1.36-bin
logstash_1 | require at org/jruby/RubyKernel.java:1040
logstash_1 | require at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
logstash_1 | load_drivers at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:134
logstash_1 | each at org/jruby/RubyArray.java:1613
logstash_1 | load_drivers at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:132
logstash_1 | prepare_jdbc_connection at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:146
logstash_1 | register at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/inputs/jdbc.rb:191
logstash_1 | register_plugin at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:282
logstash_1 | register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:293
logstash_1 | each at org/jruby/RubyArray.java:1613
logstash_1 | register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:293
logstash_1 | start_inputs at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:436
logstash_1 | start_workers at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:337
logstash_1 | run at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:232
logstash_1 | start_pipeline at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:387
I wrote the path ending in ...bin.jar, but the error drops the .jar from the filename. If I write ...bin.jar.jar, the error shows ...bin.jar, but the file is still not found.
Please help me
Thank you

The path you set for the jdbc_driver_library parameter doesn't exist within your container. You have to include the library file inside your Docker image or mount it from your host when you run the Logstash container.
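For example, a minimal sketch of the bind-mount approach, assuming the docker-elk layout from the question (the in-container path /usr/share/logstash/mysql-connector-java-5.1.36-bin.jar is just a choice; any path that actually exists inside the container works):
# docker-compose.yml -- relevant part of the logstash service: mount the driver JAR into the container
logstash:
  volumes:
    - ./resources/mysql-connector-java-5.1.36-bin.jar:/usr/share/logstash/mysql-connector-java-5.1.36-bin.jar:ro

# logstash.conf -- point the plugin at the path as seen from inside the container
jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.36-bin.jar"
After recreating the container, the JDBC input should be able to load the driver from that path.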

Logstash can't connect to ElasticSearch, install using docker?

File /usr/share/logstash/config/ports.conf:
input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "hello-logstash-docker"
  }
}
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby8545812400624390168jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2022-03-07T11:37:31,708][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
logstash_1 | [2022-03-07T11:37:32,881][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2022-03-07T11:37:32,884][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2022-03-07T11:37:33,968][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:34,325][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:34,388][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,391][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,540][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2022-03-07T11:37:34,541][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2022-03-07T11:37:36,344][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 22 keys and 45 values
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,908][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,914][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,919][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,920][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,924][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,930][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,969][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,972][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,984][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2022-03-07T11:37:37,058][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2022-03-07T11:37:37,109][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x7bdfd22b#/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
logstash_1 | [2022-03-07T11:37:37,148][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/ports.conf"], :thread=>"#<Thread:0x675f8073#/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:141 run>"}
logstash_1 | [2022-03-07T11:37:37,152][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2022-03-07T11:37:38,151][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.0}
logstash_1 | [2022-03-07T11:37:38,152][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.04}
logstash_1 | [2022-03-07T11:37:38,206][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2022-03-07T11:37:38,465][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2022-03-07T11:37:38,474][INFO ][logstash.inputs.tcp ][main][2dd5b8304a815578c4e06e3aec9e54f0316a8b63a07cd77090a1ddb785d8c617] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2022-03-07T11:37:38,533][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2022-03-07T11:37:38,923][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I run GET /_cat/indices in Kibana, but the "hello-logstash-docker" index is not found. Please tell me where the error is.
I followed the instructions here: https://www.youtube.com/watch?v=I2ZS2Wlk1No

stdout put 404 in logstash

I'm new to the ELK stack, and I'm trying to do a very basic experiment: send a message to Logstash's stdout with a PUT request, based on this repo: link
Logstash's port is 9600, and I use Postman to send a PUT request. It returns 404.
My logstash.conf is very simple:
input {
  http {
  }
}
output {
  stdout {
  }
}
As for the settings in the docker-compose file, here they are:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
A GET request works, and here is the result:
{
  "host": "b32085c40331",
  "version": "7.10.2",
  "http_address": "0.0.0.0:9600",
  "id": "0079f53f-1d2e-4278-85eb-0817fa95506c",
  "name": "b32085c40331",
  "ephemeral_id": "d0c18df3-9a0b-48c9-abb4-9e41543ed7ac",
  "status": "green",
  "snapshot": false,
  "pipeline": {
    "workers": 4,
    "batch_size": 125,
    "batch_delay": 50
  },
  "monitoring": {
    "hosts": [
      "http://elasticsearch:9200"
    ],
    "username": "elastic"
  },
  "build_date": "2021-01-13T02:43:06Z",
  "build_sha": "7cebafee7a073fa9d58c97de074064a540d6c317",
  "build_snapshot": false
}
As for Logstash, with docker-compose logs logstash I get a large log, and I don't even know where to start:
logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby5118775578707886457jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2021-01-29T12:00:35,199][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
logstash_1 | [2021-01-29T12:00:35,412][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2021-01-29T12:00:35,440][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2021-01-29T12:00:37,687][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"0079f53f-1d2e-4278-85eb-0817fa95506c", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2021-01-29T12:00:38,657][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2021-01-29T12:00:42,951][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-01-29T12:00:46,669][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-01-29T12:00:50,290][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-01-29T12:00:50,515][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2021-01-29T12:00:50,518][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-01-29T12:00:50,694][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2021-01-29T12:00:50,695][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2021-01-29T12:00:53,243][INFO ][org.reflections.Reflections] Reflections took 606 ms to scan 1 urls, producing 23 keys and 47 values
logstash_1 | [2021-01-29T12:00:54,045][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-01-29T12:00:54,339][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-01-29T12:00:54,417][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-01-29T12:00:54,500][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2021-01-29T12:00:54,500][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-01-29T12:00:54,627][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2021-01-29T12:00:54,691][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2021-01-29T12:00:54,953][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x37941be run>"}
logstash_1 | [2021-01-29T12:00:55,984][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3e7f065e run>"}
logstash_1 | [2021-01-29T12:01:00,012][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>5.05}
logstash_1 | [2021-01-29T12:01:00,013][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>4.03}
logstash_1 | [2021-01-29T12:01:00,142][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2021-01-29T12:01:01,027][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
logstash_1 | [2021-01-29T12:01:01,209][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2021-01-29T12:01:01,245][INFO ][logstash.inputs.http ][main][2d26a22d7786b5d1d6a62684242754061f0e7699167308954d8cf88e52c80903] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
logstash_1 | [2021-01-29T12:01:01,217][INFO ][logstash.inputs.tcp ][main][6ca97606e772405a9e65bc09f9b369d784557cb3e3fea379b981c5d16a9573f1] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2021-01-29T12:01:01,306][INFO ][org.logstash.beats.Server][main][d704d487716580c50daa3a9bb4e99ad2bfa9542e31e8b0b06a9e0ea687e6f15a] Starting server on port: 5044
logstash_1 | [2021-01-29T12:01:01,340][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2021-01-29T12:01:02,200][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
How can this problem be fixed?
Port 9600 is the Logstash monitoring API port, not the port of the http input.
If you want to use the http input, and since you didn't specify a port in the configuration, you should use port 8080, which is the default port for this input.
You will also need to expose this port in your Docker configuration, as shown in the sketch below.
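A minimal sketch of both changes, based on the files in the question (pinning port => 8080 is optional since it is the plugin's default):
# logstash.conf -- the http input listens on 8080 by default; pinning it makes the intent explicit
input {
  http {
    port => 8080
  }
}

# docker-compose.yml -- add the port mapping to the logstash service
ports:
  - "8080:8080"
Then send the PUT request to http://localhost:8080 instead of port 9600.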

Import JSON-file to Elasticsearch and Kibana via Logstash (Docker ELK stack)

I'm trying to import data stored in a JSON file into Elasticsearch/Kibana via Logstash. I've unsuccessfully tried to resolve the issue by searching.
I'm using the ELK stack with Docker as provided here [git/docker-elk].
My logstash.conf currently looks like this:
input {
  tcp {
    port => 5000
  }
  file {
    path => ["/export.json"]
    codec => "json"
    start_position => "beginning"
  }
}
filter {
  json {
    source => "message"
  }
}
## Add your filters / logstash plugins configuration here
output {
  stdout {
    id => "stdout_test_id"
    codec => json
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logstash-indexname"
  }
}
The JSON file is formatted like this:
[{fields},{fields},{fields},...]
Full JSON-structure: https://jsoneditoronline.org/?id=3d49813d38e641f6af6bf90e9a6481e3
I'd like to import everything inside each pair of braces as-is into Elasticsearch.
Shell output after running docker-compose up:
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2018-10-24T13:21:54,602][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2018-10-24T13:21:54,612][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2018-10-24T13:21:54,959][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or commandline options are specified
logstash_1 | [2018-10-24T13:21:55,003][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"4a572899-c7ac-4b41-bcc0-7889983240b4", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2018-10-24T13:21:55,522][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.0"}
logstash_1 | [2018-10-24T13:21:57,552][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1 | [2018-10-24T13:21:58,018][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2018-10-24T13:21:58,035][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
logstash_1 | [2018-10-24T13:21:58,272][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2018-10-24T13:21:58,377][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
logstash_1 | [2018-10-24T13:21:58,381][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
logstash_1 | [2018-10-24T13:21:58,419][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2018-10-24T13:21:58,478][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1 | [2018-10-24T13:21:58,529][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2018-10-24T13:21:58,538][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
logstash_1 | [2018-10-24T13:21:58,683][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
elasticsearch_1 | [2018-10-24T13:21:58,785][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
elasticsearch_1 | [2018-10-24T13:21:59,036][WARN ][o.e.d.i.m.MapperService ] [_default_] mapping is deprecated since it is not useful anymore nowthat indexes cannot have more than one type
elasticsearch_1 | [2018-10-24T13:21:59,041][INFO ][o.e.c.m.MetaDataIndexTemplateService] [riEmfTq] adding template [logstash] for index patterns [logstash-*]
logstash_1 | [2018-10-24T13:21:59,158][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_1ed00aa8bbe3029ead0818433d122587", :path=>["/export.json"]}
logstash_1 | [2018-10-24T13:21:59,210][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4b7995b9 sleep>"}
logstash_1 | [2018-10-24T13:21:59,337][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2018-10-24T13:21:59,357][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2018-10-24T13:21:59,760][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
The problem is that this file contains all documents inside a JSON array wrapped on a single line. Logstash cannot easily read that kind of file.
What I suggest is to transform that file into another one where each JSON document sits on its own line, so that Logstash can consume it easily.
First, run this command (you might have to install the jq utility first):
cat export.json | jq -c '.[]' > export_lines.json
Then change your file input to
path => ["/export_lines.json"]
Re-run Logstash and enjoy!
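For reference, a sketch of the adjusted file input (the sincedb_path line is an optional addition, not part of the original suggestion; it makes Logstash forget its read position so the file is re-ingested on every restart):
file {
  path => ["/export_lines.json"]
  codec => "json"
  start_position => "beginning"
  sincedb_path => "/dev/null"  # optional: don't persist the read offset
}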

docker-compose persisting folder empty

I would like to use bitnami-docker-redmine with docker-compose and persistent volumes on Windows.
If I run the first example docker-compose.yml, without persistence, Redmine starts and runs perfectly.
But I would like to use the example with persistent volumes:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - './mariadb:/bitnami/mariadb'
  redmine:
    image: bitnami/redmine:latest
    ports:
      - 80:3000
    volumes:
      - './redmine:/bitnami/redmine'
But only MariaDB runs, with this error message:
$ docker-compose up
Creating bitnamidockerredmine_redmine_1
Creating bitnamidockerredmine_mariadb_1
Attaching to bitnamidockerredmine_mariadb_1, bitnamidockerredmine_redmine_1
mariadb_1 |
mariadb_1 | Welcome to the Bitnami mariadb container
mariadb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb_1 | Send us your feedback at containers@bitnami.com
mariadb_1 |
mariadb_1 | WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb_1 | nami INFO Initializing mariadb
mariadb_1 | mariadb INFO ==> Configuring permissions...
mariadb_1 | mariadb INFO ==> Validating inputs...
mariadb_1 | mariadb WARN Allowing the "rootPassword" input to be empty
redmine_1 |
redmine_1 | Welcome to the Bitnami redmine container
redmine_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redmine
redmine_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redmine/issues
redmine_1 | Send us your feedback at containers@bitnami.com
redmine_1 |
redmine_1 | nami INFO Initializing redmine
redmine_1 | redmine INFO Configuring Redmine database...
mariadb_1 | mariadb INFO ==> Initializing database...
mariadb_1 | mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb_1 | mariadb INFO ==> Enabling remote connections...
mariadb_1 | mariadb INFO
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO Installation parameters for mariadb:
mariadb_1 | mariadb INFO Root User: root
mariadb_1 | mariadb INFO Root Password: Not set during installation
mariadb_1 | mariadb INFO (Passwords are not shown for security reasons)
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO
mariadb_1 | nami INFO mariadb successfully initialized
mariadb_1 | INFO ==> Starting mariadb...
mariadb_1 | nami ERROR Unable to start com.bitnami.mariadb: Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 | Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 |
bitnamidockerredmine_mariadb_1 exited with code 1
redmine_1 | mysqlCo INFO Trying to connect to MySQL server
redmine_1 | Error executing 'postInstallation': Failed to connect to mariadb:3306 after 36 tries
bitnamidockerredmine_redmine_1 exited with code 1
My ./mariadb folder is populated, but ./redmine is empty.
Do you have any idea why persistence does not work completely? Without persistence, it works :(
Docker version: 1.13.0 (client/server)
Platform: Windows 10 (sorry, not tested on Linux)
Thank you!

docker-compose generating duplicate entries in /etc/hosts

I have a fairly simple docker-compose.yml:
db:
  build: docker/db
  env_file:
    - .env
  ports:
    - "5432"
web:
  build: .
  env_file:
    - .env
  volumes:
    - .:/home/app/emerson
  ports:
    - "80:80"
  links:
    - db
The web container launches a Rails app. Everything goes smoothly, but there is one thing that confuses me. Looking inside /etc/hosts on the web container, I see the following entries:
172.17.0.10 db_1
172.17.0.10 emerson_db_1
172.17.0.10 db
I would expect db, since that's the service I'm linking to the web container, but where did the other entries come from? FYI, here's the output of docker-compose up:
Creating emerson_db_1...
Creating emerson_web_1...
Attaching to emerson_db_1, emerson_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
web_1 | *** Running /etc/my_init.d/00_configure_nginx.sh...
web_1 | *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
web_1 | No SSH host key available. Generating one...
db_1 | ok
db_1 | initializing pg_authid ... ok
web_1 | Creating SSH2 RSA key; this may take some time ...
db_1 | initializing dependencies ... ok
web_1 | Creating SSH2 DSA key; this may take some time ...
web_1 | Creating SSH2 ECDSA key; this may take some time ...
web_1 | Creating SSH2 ED25519 key; this may take some time ...
db_1 | creating system views ... ok
db_1 | loading system objects' descriptions ... ok
db_1 | creating collations ... ok
db_1 | creating conversions ... ok
db_1 | creating dictionaries ... ok
db_1 | setting privileges on built-in objects ... ok
web_1 | invoke-rc.d: policy-rc.d denied execution of restart.
db_1 | creating information schema ... ok
web_1 | *** Running /etc/my_init.d/30_presetup_nginx.sh...
web_1 | *** Running /etc/rc.local...
db_1 | loading PL/pgSQL server-side language ... ok
web_1 | *** Booting runit daemon...
web_1 | *** Runit started as PID 98
db_1 | vacuuming database template1 ... ok
db_1 | copying template1 to template0 ... ok
db_1 | copying template1 to postgres ... ok
web_1 | Apr 24 02:44:26 1d3b7bb27612 syslog-ng[105]: syslog-ng starting up; version='3.5.3'
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | postgres -D /var/lib/postgresql/data
db_1 | or
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | ****************************************************
db_1 | WARNING: No password has been set for the database.
db_1 | This will allow anyone with access to the
db_1 | Postgres port to access your database. In
db_1 | Docker's default configuration, this is
db_1 | effectively any other container on the same
db_1 | system.
db_1 |
db_1 | Use "-e POSTGRES_PASSWORD=password" to set
db_1 | it in "docker run".
db_1 | ****************************************************
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: ALTER USER "postgres" WITH SUPERUSER ;
db_1 |
web_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 118) 0s
db_1 | backend>
db_1 | No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
db_1 |
db_1 | backend> *******************************************
db_1 | LOG: database system was shut down at 2015-04-24 02:44:28 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | [ 2015-04-24 02:44:27.9386 119/7f4c07f13780 agents/Watchdog/Main.cpp:538 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'passenger_version' => '4.0.58', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_passenger_version' => '4.0.58', 'web_server_pid' => '107', 'web_server_type' => 'nginx', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
web_1 | [ 2015-04-24 02:44:27.0007 122/7f0c3eb9a780 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/request
web_1 | [ 2015-04-24 02:44:28.1065 127/7f5e5b4377c0 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/logging
web_1 | [ 2015-04-24 02:44:28.1072 119/7f4c07f13780 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
But docker ps -a shows only two containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d3b7bb27612 emerson_web:latest "/sbin/my_init" About an hour ago Up About an hour 443/tcp, 0.0.0.0:80->80/tcp emerson_web_1
0c047c3ce103 emerson_db:latest "/docker-entrypoint. About an hour ago Up About an hour 0.0.0.0:49156->5432/tcp emerson_db_1
In addition, I see duplicate environment variables in the web container, with db, db_1 and emerson_db_1 prefixes.
They are coming from pre-1.0 docker-compose behavior, where multiple instances of a service were named with the _1, _2 suffix pattern.
PR 364 introduced the link name (by default, the name of the linked service) as the hostname to connect to, instead of using environment variables.
There are still aliases with _x added for each container instance, and that can be an issue (Issue 472: Hostnames with underscores fail Ruby URI validation).
The current answer is:
You can use the name of the service in the docker-compose.yml as the hostname; it doesn't contain any underscores.
You can also add an alias to your link to the container, which lets you access it as just the alias (see the sketch below).
In the 1.3 release of Compose there should be support for naming your containers anything you want, which will make this more obvious.
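As a sketch of the alias approach mentioned above (docker-compose v1 syntax matching the question's file; the alias name database is just an example):
web:
  links:
    - "db:database"  # reachable from the web container as plain "database", no underscores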
