Logstash jdbc not sending data - docker

I'm trying to export data from a MySQL table to Elasticsearch with Logstash and the JDBC MySQL driver, with every process running in a Docker container. My problem is that nothing is sent to Elasticsearch, and there is no error.
My Dockerfile:
FROM elastic/logstash:6.3.0
ENV https_proxy=
ENV http_proxy=
COPY ./mysql-connector-java-5.1.46/mysql-connector-java-5.1.46.jar /tmp/mysql-connector-java-5.1.46.jar
COPY ./logstash.conf /tmp/logstash.conf
COPY ./logstash.yml /usr/share/logstash/config/logstash.yml
RUN logstash-plugin install logstash-input-jdbc
I run it with this command:
docker run -d --rm --name=logstach -v /data/logstash:/home/logstash logstash bin/logstash -f /tmp/logstash.conf
And here is my logstash.conf:
input {
  jdbc {
    jdbc_driver_library => "/tmp/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://0.0.2.22:3306/itop_db"
    jdbc_user => "admin"
    jdbc_password => "password"
    statement => "SELECT * FROM contact”
  }
}
output {
  elasticsearch {
    index => "contact"
    document_type => "data"
    document_id => "%{id}"
    hosts => "127.0.0.1:9200"
  }
  stdout { codec => json_lines }
}
Everything seems to run fine, except there is no new index in Elasticsearch at http://localhost:9200/_cat/indices?v
This is the output I get when I run Logstash:
(screenshots of the Logstash execution output and error omitted)

"SELECT * FROM contact” <-- this could be the problem. I imagine you copied this from the internet? Change ” to "

In addition to the error in my SQL statement, I needed to specify the container's host IP (172.17.0.2 in my case) instead of using 127.0.0.1:9200.
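For reference, a sketch of the corrected input/output sections; the straight quote and the container IP are the only two changes, and 172.17.0.2 is just the bridge-network address from my setup, so yours may differ:
input {
  jdbc {
    jdbc_driver_library => "/tmp/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://0.0.2.22:3306/itop_db"
    jdbc_user => "admin"
    jdbc_password => "password"
    # straight double quote instead of the curly ” pasted from the web
    statement => "SELECT * FROM contact"
  }
}
output {
  elasticsearch {
    index => "contact"
    document_type => "data"
    document_id => "%{id}"
    # Elasticsearch reached over the Docker bridge network, not the container's own loopback
    hosts => "172.17.0.2:9200"
  }
  stdout { codec => json_lines }
}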

Related

Error 'could not translate host name' running Laravel migration in Docker PostgreSQL container

Well, I have a container with PostgreSQL that I connect to from Laravel, showing records on the screen. The Laravel container accesses the PostgreSQL data ok. This is the result:
object(Illuminate\Support\Collection)#463 (2) {
  ["items":protected]=>
  array(2) {
    [0]=>
    object(stdClass)#466 (3) {
      ["pru_id"]=>
      int(1)
      ["pru_name"]=>
      string(5) "george"
      ["pru_years"]=>
      int(45)
    }
    [1]=>
    object(stdClass)#467 (3) {
      ["pru_id"]=>
      int(2)
      ["pru_name"]=>
      string(5) "paul"
      ["pru_years"]=>
      int(30)
    }
  }
}
So far very good!
But when I try to run migrations, with php artisan migrate, I get:
SQLSTATE[08006] [7] could not translate host name "pgcontainer" to address: Name or service not known
Checking the connection config, the db host is pgcontainer:
'pgsql' => [
    'driver' => 'pgsql',
    'url' => env('DATABASE_URL'),
    'host' => env('DB_HOST', 'pgcontainer'),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'mydatabase'),
    'username' => env('DB_USERNAME', 'myuser'),
    'password' => env('DB_PASSWORD', 'mypass'),
    'charset' => 'utf8',
    'prefix' => '',
    'prefix_indexes' => true,
    'schema' => 'public',
    'sslmode' => 'prefer',
],
I tried to modify the host, since I understand it is the name that is not being recognized, changing pgcontainer to 127.0.0.1, but when I do that two interesting things happen:
ON THE SCREEN: It no longer shows the records, because the Laravel container can no longer reach PostgreSQL:
SQLSTATE[08006] [7] could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432
SSH - PHP ARTISAN MIGRATE: Now Laravel can see the PostgreSQL container, but it cannot authenticate (the password is correct, since it showed the records on screen before):
SQLSTATE[08006] [7] FATAL: password authentication failed for user "mydatabase"
Any idea?
If I understand correctly, you have one container with Postgres, and a different container with Laravel. The connection from the Laravel -> Postgres containers works fine.
The error you are seeing suggests you are running php artisan migrate on your host, not in a container. But your host does not know what pgcontainer means - Docker resolves containers by name, but your host can't.
You should be running php artisan <anything> from inside the container where PHP/Laravel is, e.g.:
docker exec your_laravel_container_name php artisan migrate
This way artisan runs inside the container, and host name resolution works just the same as it does for Laravel. Do not change your .env or config; artisan is part of Laravel and needs the same config as the Laravel application itself.
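If the containers are managed by docker-compose, the equivalent would be (assuming your Laravel service is named app in docker-compose.yml; adjust the name to yours):
docker-compose exec app php artisan migrate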

Define elasticsearch mapping during Docker build

I have a MySQL, Logstash, and ES setup, but I need to set some fields to the keyword type instead of text. I've read that it is not possible to do this in Logstash (logstash.conf), so it needs to be done in ES. I've followed a similar question here and slightly modified it to PUT a mapping, but I get this error:
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [es.path.data] please check that any required plugins are installed, or check the breaking changes documentation for removed settings",
I am using docker-compose to start all the services at once under the same network, so the mapping must be specified before Logstash ships the data to ES (a mapping can't be changed on a non-empty index).
I have seen other questions and they do seem a bit old so I wanted to ask if there is a better approach to doing this now.
My mapping.json
{
  "mappings": {
    "properties": {
      "authors": {"type": "keyword"},
      "tags": {"type": "keyword"}
    }
  }
}
Dockerfile
FROM elasticsearch:7.5.1
COPY ./docker-entrypoint.sh .
COPY ./mapping.json .
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
RUN ./docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- curl -X PUT 'http://localhost:9200/cnas_publications' -d @./mapping.json; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
Edit: I've used the docker-entrypoint.sh from the official repo here
It seems that I was mistaken, and it is actually possible to define the mapping via Logstash. Assuming you're using the official elasticsearch image, create an ES template and mount it as a volume into the logstash container.
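For example, a volume entry along these lines in the logstash service of your docker-compose.yml should do it (the paths are just an illustration, matching the template path used below):
volumes:
  - ./mapping.json:/logstash/mapping.json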
Here's a sample of the output section of my logstash.conf:
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => "http://elasticsearch:9200"
    index => "test"
    template => "/logstash/mapping.json"
    template_name => "mapping"
    document_id => "%{[@metadata][_id]}"
  }
}
and don't forget to set index_patterns in your ES template.
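As a sketch, the template file itself could look something like this; the test* pattern and the authors/tags fields are assumptions carried over from the question's mapping:
{
  "index_patterns": ["test*"],
  "mappings": {
    "properties": {
      "authors": { "type": "keyword" },
      "tags": { "type": "keyword" }
    }
  }
}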

Configuring Logstash for Docker

I'm new to Docker and I'm having problems running a simple logstash.conf with Docker.
My Dockerfile:
FROM docker.elastic.co/logstash/logstash:5.0.0
RUN rm -f ~/desktop/docker_logstash/logstash.conf
Logstash.conf:
input {
  file {
    path => "~/desktop/filename.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Docker commands:
docker build -t logstashexample .
docker run logstashexample
I can build the image, but when I run it, it gets stuck on:
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties.
[2017-11-22T11:08:23,040][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-11-22T11:08:24,501][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-11-22T11:08:24,520][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-22T11:08:24,593][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-11-22T11:08:25,054][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
What am I doing wrong? Thanks.

Docker apps logging with Filebeat and Logstash

I have a set of dockerized applications scattered across multiple servers, and I'm trying to set up production-level centralized logging with ELK. I'm OK with the ELK part itself, but I'm a little confused about how to forward the logs to my logstashes.
I'm trying to use Filebeat, because of its loadbalance feature.
I'd also like to avoid packing Filebeat (or anything else) into all my dockers, and keep it separated, dockerized or not.
How can I proceed?
I've been trying the following. My containers log to stdout, so with a non-dockerized Filebeat configured to read from stdin I do:
docker logs -f mycontainer | ./filebeat -e -c filebeat.yml
That appears to work at the beginning. The first logs are forwarded to my Logstash (the cached ones, I guess), but at some point it gets stuck and keeps sending the same event.
Is that just a bug or am I headed in the wrong direction? What solution have you setup?
Here's one way to forward docker logs to the ELK stack (requires docker >= 1.8 for the gelf log driver):
Start a Logstash container with the gelf input plugin that reads from gelf and outputs to an Elasticsearch host (ES_HOST:PORT):
docker run --rm -p 12201:12201/udp logstash \
logstash -e 'input { gelf { } } output { elasticsearch { hosts => ["ES_HOST:PORT"] } }'
Now start a Docker container and use the gelf Docker logging driver. Here's a dumb example:
docker run --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 busybox \
/bin/sh -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Load up Kibana and things that would've landed in docker logs are now visible. The gelf source code shows that some handy fields are generated for you (hat-tip: Christophe Labouisse): _container_id, _container_name, _image_id, _image_name, _command, _tag, _created.
If you use docker-compose (make sure to use docker-compose >= 1.5), add the appropriate settings to docker-compose.yml after starting the logstash container:
log_driver: "gelf"
log_opt:
  gelf-address: "udp://localhost:12201"
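With the version 2+ compose file format, the equivalent is a per-service logging block; the service name here is just a placeholder:
services:
  myapp:
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"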
Docker allows you to specify the logDriver in use. This answer does not care about Filebeat or load balancing.
In a presentation I used syslog to forward the logs to a Logstash (ELK) instance listening on port 5000.
The following command constantly sends messages through syslog to Logstash:
docker run -t -d --log-driver=syslog --log-opt syslog-address=tcp://127.0.0.1:5000 ubuntu /bin/bash -c 'while true; do echo "Hello $(date)"; sleep 1; done'
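On the Logstash side, a minimal syslog input listening on that port might look like this (a sketch, not taken from the presentation):
input {
  syslog {
    # listen for syslog messages on TCP/UDP 5000 instead of the default 514
    port => 5000
  }
}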
Using filebeat you can just pipe docker logs output as you've described. The behavior you are seeing definitely sounds like a bug, but it can also be the partial-line-read configuration hitting you (resend partial lines until a newline symbol is found).
A problem I see with piping is possible back pressure in case no logstash is available. If filebeat cannot send any events, it will buffer up events internally and at some point stop reading from stdin. No idea how/if docker protects from stdout becoming unresponsive. Another problem with piping might be the restart behavior of filebeat + docker if you are using docker-compose. docker-compose by default reuses images + image state, so when you restart you will ship all old logs again (given the underlying log file has not been rotated yet).
Instead of piping, you can try to read the log files written by docker to the host system. The default docker log driver is the json-file log driver. You can and should configure the json log driver to do log rotation and keep some old files (for buffering up on disk); see the max-size and max-file options. The json driver puts one line of 'json' data for every line to be logged. On the docker host system the log files are written to /var/lib/docker/containers/container_id/container_id-json.log. These files will be forwarded by filebeat to logstash. If logstash or the network becomes unavailable, or filebeat is restarted, it continues forwarding log lines where it left off (given the files have not been deleted due to log rotation). No events will be lost. In logstash you can use the json_lines codec or the json filter to parse the json lines and a grok filter to gain some more information from your logs.
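As a sketch, rotation for the json-file driver can be configured daemon-wide in /etc/docker/daemon.json; the sizes here are only example values:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}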
There has been some discussion about using libbeat (used by filebeat for shipping log files) to add a new log driver to docker. Maybe it is possible to collect logs via dockerbeat in the future by using the docker logs api (I'm not aware of any plans about utilising the logs api, though).
Using syslog is also an option. Maybe you can get some syslog relay on your docker host load balancing log events. Or have syslog write log files and use filebeat to forward them. I think rsyslog has at least some failover mode. You can use logstash syslog input plugin and rsyslog to forward logs to logstash with failover support in case the active logstash instance becomes unavailable.
I created my own docker image using the Docker API to collect the logs of the containers running on the machine and ship them to Logstash thanks to Filebeat. No need to install or configure anything on the host.
Check it out and tell me if it suits your needs: https://hub.docker.com/r/bargenson/filebeat/.
The code is available here: https://github.com/bargenson/docker-filebeat
Just to help others that need to do this: you can simply use Filebeat to ship the logs. I would use the container by @brice-argenson, but I needed SSL support so I went with a locally installed Filebeat instance.
The prospector from filebeat is (repeat for more containers):
- input_type: log
  paths:
    - /var/lib/docker/containers/<guid>/*.log
  document_type: docker_log
  fields:
    dockercontainer: container_name
It sucks a bit that you need to know the GUIDs as they could change on updates.
On the logstash server, setup the usual filebeat input source for logstash, and use a filter like this:
filter {
  if [type] == "docker_log" {
    json {
      source => "message"
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      rename => { "log" => "message" }
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}
This will parse the JSON from the Docker logs, and set the timestamp to the one reported by Docker.
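For context, each line in those json-file logs looks roughly like the following, which is why the filter renames log to message and parses time (the content shown is only an illustration):
{"log":"192.168.1.10 - - [22/Nov/2017:11:08:24 +0000] \"GET / HTTP/1.1\" 200 612 ...\n","stream":"stdout","time":"2017-11-22T11:08:24.000000000Z"}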
If you are reading logs from the nginx Docker image, you can add this filter as well:
filter {
  if [fields][dockercontainer] == "nginx" {
    grok {
      match => { "message" => "(?m)%{IPORHOST:targethost} %{COMBINEDAPACHELOG}" }
    }
    mutate {
      convert => { "[bytes]" => "integer" }
      convert => { "[response]" => "integer" }
    }
    mutate {
      rename => { "bytes" => "http_streamlen" }
      rename => { "response" => "http_statuscode" }
    }
  }
}
The converts/renames are optional, but they fix an oversight in the COMBINEDAPACHELOG expression, which does not cast these values to integers, making them unavailable for aggregation in Kibana.
I verified what erewok wrote above in a comment:
According to the docs, you should be able to use a pattern like this
in your prospectors.paths: /var/lib/docker/containers/*/*.log – erewok
Apr 18 at 21:03
The Docker container GUIDs, represented by the first '*', are correctly resolved when filebeat starts up. I do not know what happens as containers are added.
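So the prospector above can presumably be generalized like this (a sketch; the per-container fields entry no longer applies once a wildcard is used):
- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log
  document_type: docker_log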

Logstash Removed from cluster after joining with elasticsearch in docker

I am using docker to host my logstash and elasticsearch.
Logstash joins the cluster and then it disconnects after 2 seconds.
Below is the exception I am getting.
[2015-08-31 23:30:40,880][INFO ][cluster.service] [Ms. MODOK] removed {[logstash-da1b6e0a073b-1-11622][G_hYr0mcTZ6G-IOia1g5Cg][da1b6e0a073b][inet[/172.17.5.146:9300]]{data=false, client=true},}, reason: zen-disco-node_failed([logstash-da1b6e0a073b-1-11622][G_hYr0mcTZ6G-IOia1g5Cg][da1b6e0a073b][inet[/172.17.5.146:9300]]{data=false, client=true}), reason transport disconnected
My logstash configuration file.
input {
  stdin { }
}
output {
  elasticsearch {
    host => elasticsearch
  }
  stdout { codec => rubydebug }
}
I missed it. I needed to add the log file location, create volumes in Docker, and point the Logstash file input at them; then everything started working (see the run command sketch after the snippet below):
file {
  path => [ "/opt/logs/test_log.json" ]
  codec => json {
    charset => "UTF-8"
  }
}
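For example, the volume mount could be set up along these lines when starting the container; the host paths and names here are only illustrative:
docker run -d --name logstash \
  -v /path/on/host/logs:/opt/logs \
  -v /path/on/host/logstash.conf:/opt/logstash.conf \
  logstash logstash -f /opt/logstash.conf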
