stdout put 404 in logstash - docker

I'm new to the ELK stack, and I'm trying a very basic experiment: send a message to Logstash's stdout with a PUT request, based on this repo: link
Logstash's port is 9600, and I use Postman to send a PUT request. It returns a 404.
My logstash.conf is very simple:
input {
  http {
  }
}
output {
  stdout {
  }
}
As for the settings in the docker-compose file, here they are:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
A GET request works, and here is the result:
{
  "host": "b32085c40331",
  "version": "7.10.2",
  "http_address": "0.0.0.0:9600",
  "id": "0079f53f-1d2e-4278-85eb-0817fa95506c",
  "name": "b32085c40331",
  "ephemeral_id": "d0c18df3-9a0b-48c9-abb4-9e41543ed7ac",
  "status": "green",
  "snapshot": false,
  "pipeline": {
    "workers": 4,
    "batch_size": 125,
    "batch_delay": 50
  },
  "monitoring": {
    "hosts": [
      "http://elasticsearch:9200"
    ],
    "username": "elastic"
  },
  "build_date": "2021-01-13T02:43:06Z",
  "build_sha": "7cebafee7a073fa9d58c97de074064a540d6c317",
  "build_snapshot": false
}
As for Logstash itself, docker-compose logs logstash gives me a large log, and I don't even know where to start:
logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby5118775578707886457jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2021-01-29T12:00:35,199][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
logstash_1 | [2021-01-29T12:00:35,412][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2021-01-29T12:00:35,440][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2021-01-29T12:00:37,687][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"0079f53f-1d2e-4278-85eb-0817fa95506c", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2021-01-29T12:00:38,657][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2021-01-29T12:00:42,951][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-01-29T12:00:46,669][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-01-29T12:00:50,290][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-01-29T12:00:50,515][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2021-01-29T12:00:50,518][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-01-29T12:00:50,694][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2021-01-29T12:00:50,695][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2021-01-29T12:00:53,243][INFO ][org.reflections.Reflections] Reflections took 606 ms to scan 1 urls, producing 23 keys and 47 values
logstash_1 | [2021-01-29T12:00:54,045][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
logstash_1 | [2021-01-29T12:00:54,339][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2021-01-29T12:00:54,417][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2021-01-29T12:00:54,500][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2021-01-29T12:00:54,500][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2021-01-29T12:00:54,627][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2021-01-29T12:00:54,691][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2021-01-29T12:00:54,953][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x37941be run>"}
logstash_1 | [2021-01-29T12:00:55,984][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3e7f065e run>"}
logstash_1 | [2021-01-29T12:01:00,012][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>5.05}
logstash_1 | [2021-01-29T12:01:00,013][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>4.03}
logstash_1 | [2021-01-29T12:01:00,142][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2021-01-29T12:01:01,027][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
logstash_1 | [2021-01-29T12:01:01,209][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2021-01-29T12:01:01,245][INFO ][logstash.inputs.http ][main][2d26a22d7786b5d1d6a62684242754061f0e7699167308954d8cf88e52c80903] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
logstash_1 | [2021-01-29T12:01:01,217][INFO ][logstash.inputs.tcp ][main][6ca97606e772405a9e65bc09f9b369d784557cb3e3fea379b981c5d16a9573f1] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2021-01-29T12:01:01,306][INFO ][org.logstash.beats.Server][main][d704d487716580c50daa3a9bb4e99ad2bfa9542e31e8b0b06a9e0ea687e6f15a] Starting server on port: 5044
logstash_1 | [2021-01-29T12:01:01,340][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2021-01-29T12:01:02,200][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
How can this problem be fixed?

Port 9600 is the port of the Logstash API, used for monitoring Logstash itself; it is not the port of the http input.
If you want to use the http input, and since you didn't specify a port in the configuration, you should use port 8080, the default port for this input. Your own log confirms it: Starting http input listener {:address=>"0.0.0.0:8080"}.
You will also need to publish this port in your Docker configuration.
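For example (a sketch, assuming you keep the input's default port; if you set port => ... in the http input, adjust accordingly), publish 8080 alongside the other ports in the docker-compose file:

ports:
  - "5044:5044"
  - "5000:5000/tcp"
  - "5000:5000/udp"
  - "9600:9600"
  - "8080:8080" # default port of the http input

Then, after recreating the container, a PUT from Postman or curl should reach the pipeline and show up on stdout:

curl -X PUT http://localhost:8080 -H 'Content-Type: application/json' -d '{"message": "hello"}'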

Related

Logstash can't connect to ElasticSearch, install using docker?

File /usr/share/logstash/config/ports.conf:
input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "hello-logstash-docker"
  }
}
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby8545812400624390168jopenssl.jar) to field java.security.MessageDigest.provider
logstash_1 | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2022-03-07T11:37:31,708][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
logstash_1 | [2022-03-07T11:37:32,881][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2022-03-07T11:37:32,884][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2022-03-07T11:37:33,968][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:34,325][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:34,388][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,391][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:34,540][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash_1 | [2022-03-07T11:37:34,541][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2022-03-07T11:37:36,344][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 22 keys and 45 values
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,885][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2022-03-07T11:37:36,908][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,914][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2022-03-07T11:37:36,919][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,920][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,924][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,930][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2022-03-07T11:37:36,969][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,972][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2022-03-07T11:37:36,984][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1 | [2022-03-07T11:37:37,058][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2022-03-07T11:37:37,109][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x7bdfd22b@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
logstash_1 | [2022-03-07T11:37:37,148][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/ports.conf"], :thread=>"#<Thread:0x675f8073@/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:141 run>"}
logstash_1 | [2022-03-07T11:37:37,152][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2022-03-07T11:37:38,151][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.0}
logstash_1 | [2022-03-07T11:37:38,152][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.04}
logstash_1 | [2022-03-07T11:37:38,206][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2022-03-07T11:37:38,465][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2022-03-07T11:37:38,474][INFO ][logstash.inputs.tcp ][main][2dd5b8304a815578c4e06e3aec9e54f0316a8b63a07cd77090a1ddb785d8c617] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2022-03-07T11:37:38,533][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2022-03-07T11:37:38,923][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I run GET /_cat/indices in Kibana, but the "hello-logstash-docker" index is not found. Please tell me where the error is.
I followed these instructions: https://www.youtube.com/watch?v=I2ZS2Wlk1No

ActiveMQ login screen is not showing when runs as a Docker image

I have created an ActiveMQ Dockerfile, and when I start the image I cannot get to the login screen. The URL is http://127.0.0.1:8161
Here is my Dockerfile; you can also see the URL in the log.
# Using jdk as base image
FROM openjdk:8-jdk-alpine
# Copy the whole directory of activemq into the image
COPY activemq /opt/activemq
# Set the working directory to the bin folder
WORKDIR /opt/activemq/bin
# Start up the activemq server
ENTRYPOINT ["./activemq","console"]
And here is the log from the console:
INFO: Using java '/usr/lib/jvm/java-1.8-openjdk/bin/java'
INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
INFO: Creating pidfile /opt/activemq//data/activemq.pid
Java Runtime: IcedTea 1.8.0_212 /usr/lib/jvm/java-1.8-openjdk/jre
Heap sizes: current=390656k free=386580k max=5779968k
JVM args: -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq//conf/login.config -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -Dactivemq.classpath=/opt/activemq//conf:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq//conf -Dactivemq.data=/opt/activemq//data
Extensions classpath:
[/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
ACTIVEMQ_HOME: /opt/activemq
ACTIVEMQ_BASE: /opt/activemq
ACTIVEMQ_CONF: /opt/activemq/conf
ACTIVEMQ_DATA: /opt/activemq/data
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@6be46e8f: startup date [Mon Nov 23 15:32:26 GMT 2020]; root of context hierarchy
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
INFO | KahaDB is version 7
INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) is starting
INFO | Listening for connections at: tcp://afee6bfb43ba:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector openwire started
INFO | Listening for connections at: amqp://afee6bfb43ba:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector amqp started
INFO | Listening for connections at: stomp://afee6bfb43ba:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://afee6bfb43ba:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
INFO | Starting Jetty server
INFO | Creating Jetty connector
WARN | ServletContext@o.e.j.s.ServletContextHandler@ab7395e{/,null,STARTING} has uncovered http methods for path: /
INFO | Listening for connections at ws://afee6bfb43ba:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) started
INFO | For help or more information please see: http://activemq.apache.org
INFO | ActiveMQ WebConsole available at http://127.0.0.1:8161/
INFO | ActiveMQ Jolokia REST API available at http://127.0.0.1:8161/api/jolokia/
What have I done wrong? Thanks
As of ActiveMQ 5.16.0, the Jetty endpoint host value was changed from 0.0.0.0 to 127.0.0.1; see AMQ-7007.
To overcome this in my Dockerfile, I use CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"]
ActiveMQ startup is done by ENTRYPOINT in your Dockerfile, so CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"] won't work.
The correct usage with ENTRYPOINT is:
ENTRYPOINT ["./activemq","console","-Djetty.host=0.0.0.0"]
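Putting it together, the Dockerfile from the question only needs its last line changed (a minimal sketch; the port mapping in the run command below is an assumption, since the question doesn't show how the container is started):

# Using jdk as base image
FROM openjdk:8-jdk-alpine
# Copy the whole directory of activemq into the image
COPY activemq /opt/activemq
# Set the working directory to the bin folder
WORKDIR /opt/activemq/bin
# Start the server with the web console bound to all interfaces
ENTRYPOINT ["./activemq","console","-Djetty.host=0.0.0.0"]

Rebuild the image and run it with the console port published (e.g. docker run -p 8161:8161 ...), and http://127.0.0.1:8161 should show the login screen.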

Import JSON-file to Elasticsearch and Kibana via Logstash (Docker ELK stack)

I'm trying to import data stored in a JSON file via Logstash into Elasticsearch/Kibana. I've tried unsuccessfully to resolve the issue by searching.
I'm using the ELK stack with Docker as provided here [git/docker-elk].
My logstash.conf currently looks like this:
input {
  tcp {
    port => 5000
  }
  file {
    path => ["/export.json"]
    codec => "json"
    start_position => "beginning"
  }
}
filter {
  json {
    source => "message"
  }
}
## Add your filters / logstash plugins configuration here
output {
  stdout {
    id => "stdout_test_id"
    codec => json
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logstash-indexname"
  }
}
The JSON file is formatted like this:
[{fields},{fields},{fields},...]
Full JSON-structure: https://jsoneditoronline.org/?id=3d49813d38e641f6af6bf90e9a6481e3
I'd like to import everything under each bracket as-is into Elasticsearch.
Shell output after running docker-compose up:
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2018-10-24T13:21:54,602][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1 | [2018-10-24T13:21:54,612][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1 | [2018-10-24T13:21:54,959][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or commandline options are specified
logstash_1 | [2018-10-24T13:21:55,003][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"4a572899-c7ac-4b41-bcc0-7889983240b4", :path=>"/usr/share/logstash/data/uuid"}
logstash_1 | [2018-10-24T13:21:55,522][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.0"}
logstash_1 | [2018-10-24T13:21:57,552][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1 | [2018-10-24T13:21:58,018][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2018-10-24T13:21:58,035][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
logstash_1 | [2018-10-24T13:21:58,272][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2018-10-24T13:21:58,377][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
logstash_1 | [2018-10-24T13:21:58,381][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
logstash_1 | [2018-10-24T13:21:58,419][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2018-10-24T13:21:58,478][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1 | [2018-10-24T13:21:58,529][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2018-10-24T13:21:58,538][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
logstash_1 | [2018-10-24T13:21:58,683][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
elasticsearch_1 | [2018-10-24T13:21:58,785][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
elasticsearch_1 | [2018-10-24T13:21:59,036][WARN ][o.e.d.i.m.MapperService ] [_default_] mapping is deprecated since it is not useful anymore now that indexes cannot have more than one type
elasticsearch_1 | [2018-10-24T13:21:59,041][INFO ][o.e.c.m.MetaDataIndexTemplateService] [riEmfTq] adding template [logstash] for index patterns [logstash-*]
logstash_1 | [2018-10-24T13:21:59,158][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_1ed00aa8bbe3029ead0818433d122587", :path=>["/export.json"]}
logstash_1 | [2018-10-24T13:21:59,210][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4b7995b9 sleep>"}
logstash_1 | [2018-10-24T13:21:59,337][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2018-10-24T13:21:59,357][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2018-10-24T13:21:59,760][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
The problem is that this file contains all documents inside a JSON array wrapped on a single line. Logstash cannot easily read that kind of file.
What I suggest is to transform that file into another one where each JSON document sits on its own line, so that Logstash can consume it easily.
First, run this command (you might have to install the jq utility first):
cat export.json | jq -c '.[]' > export_lines.json
Then change your file input to
path => ["/export_lines.json"]
Re-run Logstash and enjoy!
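To illustrate what the jq command does: it streams each element of the top-level array as a compact, single-line JSON document (the sample data below is made up):

$ echo '[{"id":1},{"id":2}]' | jq -c '.[]'
{"id":1}
{"id":2}

Each line is then a separate event for the file input, and the json codec/filter takes care of parsing it.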

Logstash MySQL JDBC LoadError: no such file to load -- <file-path>

I want to build Docker-ELK from this repository.
This is my logstash.conf file:
input {
  jdbc {
    jdbc_driver_library => "/home/edsoft/IdeaProjects/docker-elk/resources/mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/yazilimokulu"
    jdbc_user => "root"
    jdbc_password => "1"
    schedule => "* * * * *"
    statement => "select * from posts"
  }
  tcp {
    port => 5000
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "posts"
    document_type => "post"
    document_id => "%{id}" ## must be lower case
  }
}
I run Docker with docker-compose. Kibana and Elasticsearch start successfully, but Logstash throws an error:
LoadError: no such file to load -- /home/edsoft/IdeaProjects/docker-elk/resources/mysql-connector-java-5.1.36-bin
logstash_1 | require at org/jruby/RubyKernel.java:1040
logstash_1 | require at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
logstash_1 | load_drivers at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:134
logstash_1 | each at org/jruby/RubyArray.java:1613
logstash_1 | load_drivers at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:132
logstash_1 | prepare_jdbc_connection at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/plugin_mixins/jdbc.rb:146
logstash_1 | register at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.3/lib/logstash/inputs/jdbc.rb:191
logstash_1 | register_plugin at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:282
logstash_1 | register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:293
logstash_1 | each at org/jruby/RubyArray.java:1613
logstash_1 | register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:293
logstash_1 | start_inputs at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:436
logstash_1 | start_workers at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:337
logstash_1 | run at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:232
logstash_1 | start_pipeline at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:387
I wrote the path ending in ...bin.jar, but the error drops .jar from the filename. If I write ...bin.jar.jar, the error shows ...bin.jar, but it still doesn't find the file.
Please help me
Thank you
The path you set for the jdbc_driver_library parameter doesn't exist within your container. You have to include the library file inside your Docker image or mount it from your host when you run the Logstash container.
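For example, with the docker-elk layout from the question, you could bind-mount the driver into the container and point the plugin at the in-container path (the target path below is illustrative, not mandated by Logstash):

In docker-compose.yml, under the logstash service:

volumes:
  - ./resources/mysql-connector-java-5.1.36-bin.jar:/usr/share/logstash/mysql-connector-java-5.1.36-bin.jar:ro # host jar -> path inside the container

And in logstash.conf:

jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.36-bin.jar"

Either way, the path given to jdbc_driver_library must be the path as seen from inside the container, not from the host.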

docker-compose persisting folder empty

I would like to use bitnami-docker-redmine with docker-compose persistence on Windows.
If I run the first example docker-compose.yml, without application persistence, Redmine starts and runs perfectly.
But I would like to use the example with application persistence:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - './mariadb:/bitnami/mariadb'
  redmine:
    image: bitnami/redmine:latest
    ports:
      - 80:3000
    volumes:
      - './redmine:/bitnami/redmine'
But only MariaDB runs, with this error message:
$ docker-compose up
Creating bitnamidockerredmine_redmine_1
Creating bitnamidockerredmine_mariadb_1
Attaching to bitnamidockerredmine_mariadb_1, bitnamidockerredmine_redmine_1
mariadb_1 |
mariadb_1 | Welcome to the Bitnami mariadb container
mariadb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb_1 | Send us your feedback at containers@bitnami.com
mariadb_1 |
mariadb_1 | WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb_1 | nami INFO Initializing mariadb
mariadb_1 | mariadb INFO ==> Configuring permissions...
mariadb_1 | mariadb INFO ==> Validating inputs...
mariadb_1 | mariadb WARN Allowing the "rootPassword" input to be empty
redmine_1 |
redmine_1 | Welcome to the Bitnami redmine container
redmine_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redmine
redmine_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redmine/issues
redmine_1 | Send us your feedback at containers@bitnami.com
redmine_1 |
redmine_1 | nami INFO Initializing redmine
redmine_1 | redmine INFO Configuring Redmine database...
mariadb_1 | mariadb INFO ==> Initializing database...
mariadb_1 | mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb_1 | mariadb INFO ==> Enabling remote connections...
mariadb_1 | mariadb INFO
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO Installation parameters for mariadb:
mariadb_1 | mariadb INFO Root User: root
mariadb_1 | mariadb INFO Root Password: Not set during installation
mariadb_1 | mariadb INFO (Passwords are not shown for security reasons)
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO
mariadb_1 | nami INFO mariadb successfully initialized
mariadb_1 | INFO ==> Starting mariadb...
mariadb_1 | nami ERROR Unable to start com.bitnami.mariadb: Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 | Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 |
bitnamidockerredmine_mariadb_1 exited with code 1
redmine_1 | mysqlCo INFO Trying to connect to MySQL server
redmine_1 | Error executing 'postInstallation': Failed to connect to mariadb:3306 after 36 tries
bitnamidockerredmine_redmine_1 exited with code 1
My ./mariadb folder is populated, but ./redmine is empty.
Do you have any idea why persistence does not work completely? Without persistence, it works :(
Docker version: 1.13.0 (client/server)
Platform: Windows 10 (sorry, not tested on Linux)
Thank you !