I am fairly new to Docker and I am trying out an ELK setup with Filebeat. I have a Filebeat container set up on machine 1, and I am trying to ship logs from /mnt/logs/temp.log (which are non-container logs) to the ELK containers on machine 2. Here's my Filebeat configuration:
filebeat.config:
modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
filebeat.autodiscover:
providers:
- type: docker
hints.enabled: true
hints.default_config:
type: container
paths:
- /mnt/logs/temp.log
processors:
- add_cloud_metadata: ~
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:42.23.12.131:9042}'
Even if I change the filebeat.yml config to the one below, it doesn't seem to send any logs to Elasticsearch:
filebeat.inputs:
- type: log
paths:
- /mnt/logs/temp.log
output.elasticsearch:
hosts: ["42.23.12.131:9042"]
Can someone please help me out, or point me to any articles or documentation on this? The Filebeat and ELK container version is 7.14.0.
Edit: The docker-compose file for ELK is:
version: '2.2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
volumes:
- type: bind
source: ./elasticsearch/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
environment:
ES_JAVA_OPTS: "-Xmx512m -Xms512m"
discovery.type: single-node
ports:
- "9200:9200"
- "9300:9300"
networks:
- elk
logstash:
image: docker.elastic.co/logstash/logstash:7.14.0
volumes:
- type: bind
source: ./logstash/config/logstash.yml
target: /usr/share/logstash/config/logstash.yml
read_only: true
- type: bind
source: ./logstash/pipeline.conf
target: /usr/share/logstash/pipeline.conf
read_only: true
ports:
- "5044:5044/udp"
- "9600:9600"
environment:
LS_JAVA_OPTS: "-Xmx512m -Xms512m"
networks:
- elk
depends_on:
- elasticsearch
kibana:
image: docker.elastic.co/kibana/kibana:7.14.0
volumes:
- type: bind
source: ./kibana/kibana.yml
target: /usr/share/kibana/config/kibana.yml
read_only: true
ports:
- "5601:5601"
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
elasticsearch:
In your docker-compose file, only these ports are exposed outside the container (assuming port 9042 is the one you have configured on the Elasticsearch side):
ports:
- "9200:9200"
- "9300:9300"
So if you add the targeted port 9042, it should work. It would look like this:
ports:
- "9200:9200"
- "9300:9300"
- "9042:9042"
If port 9042 is not the one you have configured in your Elasticsearch, that means you have to change your Filebeat agent's configuration to use the correct port (probably 9200).
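To sanity-check the connection before touching Filebeat, you can verify from machine 1 that the Elasticsearch HTTP port on machine 2 is actually reachable. A minimal check, assuming the default 9200 mapping and no authentication:

# From machine 1: hit the Elasticsearch HTTP endpoint on machine 2
curl -s http://42.23.12.131:9200
# A JSON reply with the cluster name and version means the port is published correctly;
# "connection refused" usually means the port mapping is missing in docker-compose.

If that works, point output.elasticsearch.hosts at the same host:port.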
I'm trying to build my first ELK Stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume.
A Logstash container linked to Elasticsearch, along with its volume as well.
A Filebeat container to send data to Logstash.
And finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern, I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing is visible in Index Management.
Here's the docker-compose.yml:
version: '3.6'
services:
Elasticsearch:
image: elasticsearch:8.5.3
container_name: elasticsearch
restart: always
volumes:
- elastic_data:/usr/share/elasticsearch/data/
environment:
ES_JAVA_OPTS: "-Xmx750m -Xms750m"
discovery.type: single-node
xpack.security.enabled: false
ports:
- '9200:9200'
- '9300:9300'
networks:
- elk
Logstash:
image: logstash:8.5.3
container_name: logstash
restart: always
volumes:
- ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
command: logstash -f /etc/logstash/conf.d/logstash.conf
depends_on:
- Elasticsearch
ports:
- '9600:9600'
environment:
LS_JAVA_OPTS: "-Xmx750m -Xms750m"
networks:
- elk
Kibana:
image: kibana:8.5.3
container_name: kibana
restart: always
ports:
- '5601:5601'
environment:
- ELASTICSEARCH_URL=http://elasticsearch:9200
depends_on:
- Elasticsearch
networks:
- elk
Filebeat:
container_name: filebeat
image: "docker.elastic.co/beats/filebeat:8.5.3"
restart: always
user: root
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- /var/lib/docker:/var/lib/docker:ro
- /var/run/docker.sock:/var/run/docker.sock
networks:
- elk
volumes:
elastic_data:
networks:
elk:
driver: bridge
logstash/logstash.conf:
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => "elasticsearch"
codec => "json"
index => "logstash-%{indexDay}"
}
stdout { codec => rubydebug }
}
And finally the filebeat/filebeat.yml:
filebeat.inputs:
- type: log
enabled: true
paths:
- '/var/lib/docker/containers/*/*.log'
processors:
- add_docker_metadata:
host: "unix:///var/run/docker.sock"
output.logstash:
hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
version: '3.2'
services:
elasticsearch:
container_name: elasticsearch
build:
context: elasticsearch/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
LOGSPOUT: ignore
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks.
# see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- elk
networks:
elk:
driver: bridge
volumes:
elasticsearch:
I want to use Elasticsearch and persist its data in the folder /WDC1TB/docker/volumes/elasticsearch/data, but I cannot set it up correctly; I keep getting a "volume is not a string" or similar error.
How do I correctly use docker-compose to persist data in the external folder /WDC1TB/docker/volumes/elasticsearch/data?
With your current configuration:
services:
elasticsearch:
container_name: elasticsearch
build:
context: elasticsearch/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
LOGSPOUT: ignore
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks.
# see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- elk
networks:
elk:
driver: bridge
volumes:
elasticsearch:
You are creating a volume on top of the container directory /usr/share/elasticsearch/data, specified in:
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
At the same time, you are telling Docker to create this volume with:
volumes:
elasticsearch:
This means the Docker engine creates a volume named elasticsearch and mounts the specified container directory there.
You can check the volume mountpoint with:
$ docker volume inspect elasticsearch
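The output is a JSON document whose Mountpoint field shows where the data actually lives on the host; roughly (values are illustrative and will differ on your machine):

[
    {
        "Name": "elasticsearch",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/elasticsearch/_data",
        ...
    }
]

Note that compose usually prefixes the project name to the volume name, so it may appear as <project>_elasticsearch in docker volume ls.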
To mount the data to a directory on the host filesystem, you should use a bind mount, as in the following compose file:
services:
elasticsearch:
container_name: elasticsearch
build:
context: elasticsearch/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: bind
source: /WDC1TB/docker/volumes/elasticsearch/data
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
LOGSPOUT: ignore
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks.
# see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- elk
networks:
elk:
driver: bridge
With that bind mount, the directory /WDC1TB/docker/volumes/elasticsearch/data on the host machine is mounted into the container.
In contrast, your current setup uses a volume that is completely managed by Docker, and the engine itself is responsible for assigning a mount point to the volume.
When you use a volume with type: volume, it is saved wherever the Docker service decides; in my case that is /var/lib/docker/volumes/{projectname_containername}/_data/.
If you want to save it in a specific folder, you need a type: bind volume that points to the desired folder on your host, in your case /WDC1TB/docker/volumes/elasticsearch/data.
You should replace:
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
with
- type: bind
source: /WDC1TB/docker/volumes/elasticsearch/data
target: /usr/share/elasticsearch/data
If /WDC1TB/docker/volumes/elasticsearch/data doesn't exist, Docker will create it. In that case, check the file permissions afterwards.
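For example, a minimal way to prepare the host directory up front (the uid/gid 1000:1000 here is an assumption based on the user the official Elasticsearch image typically runs as; adjust it to whatever your container uses):

sudo mkdir -p /WDC1TB/docker/volumes/elasticsearch/data
# give the directory to the container user so Elasticsearch can write to it
sudo chown -R 1000:1000 /WDC1TB/docker/volumes/elasticsearch/data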
Also, since you will no longer use named volumes, you can delete:
volumes:
elasticsearch:
I have installed an elk docker image on a Linux server using the following command:
sudo docker pull sebp/elk
This pulls the latest version of the elk docker image, which is 7.8.0, and each service in the stack (elasticsearch, logstash, and kibana) also has version 7.8.0.
I need to upgrade elasticsearch to 7.9.0 for security reasons. How can I do this while continuing to use the sebp/elk docker image?
The sebp/elk image comes packaged with all three services and links them by default. With this setup, you can't split out and upgrade only Elasticsearch.
I recommend running all three services independently using docker-compose, so that each service can use an image of your choice.
Sample docker-compose for your reference:
version: '3.2'
services:
elasticsearch:
image: IMAGE_GOES_HERE
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- elk
logstash:
image: IMAGE_GOES_HERE
volumes:
- type: bind
source: ./logstash/config/logstash.yml
target: /usr/share/logstash/config/logstash.yml
read_only: true
- type: bind
source: ./logstash/pipeline
target: /usr/share/logstash/pipeline
read_only: true
ports:
- "5000:5000/tcp"
- "5000:5000/udp"
- "9600:9600"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
depends_on:
- elasticsearch
kibana:
image: IMAGE_GOES_HERE
volumes:
- type: bind
source: ./kibana/config/kibana.yml
target: /usr/share/kibana/config/kibana.yml
read_only: true
ports:
- "5601:5601"
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
elasticsearch:
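As a sketch of what could go in the IMAGE_GOES_HERE placeholders above, pinning the images to 7.9.0 might look like the lines below (the tags are an example; Kibana and Logstash are normally kept on the same version as Elasticsearch, so upgrading all three together is the safer route):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
logstash:
  image: docker.elastic.co/logstash/logstash:7.9.0
kibana:
  image: docker.elastic.co/kibana/kibana:7.9.0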
I'm using docker-elk and I'd like to clean out all the log files, but I'm not sure where they're stored. The funny thing is, when I stop and remove all the Docker containers and then start them again from the docker-compose file, the ELK server still contains all the old logs. Why is that?
Here's my docker-compose.yml for reference:
version: '3.2'
services:
elasticsearch:
build:
context: elasticsearch/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
network_mode: "host"
# networks:
# - elk
logstash:
build:
context: logstash/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./logstash/config/logstash.yml
target: /usr/share/logstash/config/logstash.yml
read_only: true
- type: bind
source: ./logstash/pipeline
target: /usr/share/logstash/pipeline
read_only: true
ports:
- "5000:5000/tcp"
- "5000:5000/udp"
- "9600:9600"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
network_mode: "host"
# networks:
# - elk
depends_on:
- elasticsearch
kibana:
build:
context: kibana/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./kibana/config/kibana.yml
target: /usr/share/kibana/config/kibana.yml
read_only: true
ports:
- "5601:5601"
network_mode: "host"
# networks:
# - elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
elasticsearch:
You have mounted a volume:
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
I think if you remove this volume and rebuild your docker-compose setup, you'll get a fresh container with no data.
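A minimal sketch of that, assuming the named volume elasticsearch from the compose file above (the exact volume name usually gets the compose project name as a prefix):

docker-compose down                    # stop and remove the containers
docker volume ls                       # find the volume, e.g. <project>_elasticsearch
docker volume rm <project>_elasticsearch
docker-compose up -d --build           # start again with an empty data volume

Alternatively, docker-compose down -v removes the project's named volumes in one step.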
While a non-Docker Elasticsearch logs to /var/log/elasticsearch/elasticsearch.log by default (on Linux), the Docker containers write their logs to STDOUT, which is generally a Docker best practice.
Those logs should be in /var/lib/docker/containers/, but note that on Mac this is inside the small VM layer that Docker is using, so you can't access it directly.
How do you "stop and remove all the docker containers" and still find that "the ELK server still contains all the old logs"? docker-compose down -v should remove everything. Do you see the logs in docker logs, or somewhere else?
How do I set up login credentials for the Kibana GUI with Docker ELK stack containers?
What arguments and environment variables must be passed in the docker-compose.yaml file to get this working?
To set Kibana user credentials for the Docker ELK stack, you have to set xpack.security.enabled: true either in elasticsearch.yml or pass it as an environment variable in the docker-compose.yml file.
Pass the username and password as environment variables in docker-compose.yml like below:
version: '3.3'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
ports:
- "9200:9200"
- "9300:9300"
configs:
- source: elastic_config
target: /usr/share/elasticsearch/config/elasticsearch.yml
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_USERNAME: "elastic"
ELASTIC_PASSWORD: "MyPw123"
http.cors.enabled: "true"
http.cors.allow-origin: "*"
xpack.security.enabled: "true"
networks:
- elk
deploy:
mode: replicated
replicas: 1
logstash:
image: docker.elastic.co/logstash/logstash:6.6.1
ports:
- "5044:5044"
- "9600:9600"
configs:
- source: logstash_config
target: /usr/share/logstash/config/logstash.yml:rw
- source: logstash_pipeline
target: /usr/share/logstash/pipeline/logstash.conf
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
xpack.monitoring.elasticsearch.url: "elasticsearch:9200"
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "MyPw123"
networks:
- elk
deploy:
mode: replicated
replicas: 1
kibana:
image: docker.elastic.co/kibana/kibana:6.6.1
ports:
- "5601:5601"
configs:
- source: kibana_config
target: /usr/share/kibana/config/kibana.yml
networks:
- elk
deploy:
mode: replicated
replicas: 1
configs:
elastic_config:
file: ./elasticsearch/config/elasticsearch.yml
logstash_config:
file: ./logstash/config/logstash.yml
logstash_pipeline:
file: ./logstash/pipeline/logstash.conf
kibana_config:
file: ./kibana/config/kibana.yml
networks:
elk:
driver: overlay
Then add the following lines to kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "MyPw123"
I did not manage to get it working without adding the XPACK_MONITORING and SECURITY flags to Kibana's container, and there was no need for a config file.
However, I was not able to use the kibana user, even after logging in with the elastic user and changing kibana's password through the UI.
NOTE: it looks like you can't set up built-in users other than the elastic superuser through docker-compose's environment. I've tried several times with kibana and kibana_system with no success.
version: "3.7"
services:
elasticsearch:
image: elasticsearch:7.4.0
restart: always
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=123456
kibana:
image: kibana:7.4.0
restart: always
ports:
- 5601:5601
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_COLLECTION_ENABLED=true
- XPACK_SECURITY_ENABLED=true
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD="123456"
depends_on:
- elasticsearch
NOTE: it looks like this won't work with 8.5.3; Kibana won't accept the elastic superuser.
Update
I was able to set up 8.5.3, but with a couple of twists. I would build the whole environment, then run the password setup inside the Elasticsearch container:
bin/elasticsearch-setup-passwords auto
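From the host, that can be run with docker exec; the container name elasticsearch here is an assumption based on the compose files above, so substitute whatever docker ps shows:

docker exec -it elasticsearch bin/elasticsearch-setup-passwords auto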
Grab the auto-generated password for the kibana_system user, replace it in docker-compose, then restart only Kibana's container.
Kibana 8.5.3 with environment variables:
kibana:
image: kibana:8.5.3
restart: always
ports:
- 5601:5601
environment:
- ELASTICSEARCH_USERNAME="kibana_system"
- ELASTICSEARCH_PASSWORD="sVUurmsWYEwnliUxp3pX"
Restart kibana's container:
docker-compose up -d --build --force-recreate --no-deps kibana
NOTE: make sure to use the --no-deps flag; otherwise it will also restart the Elasticsearch container if it is listed as a dependency of Kibana's.
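After the restart, one way to confirm that Kibana came up and can reach Elasticsearch (assuming the default published port 5601) is to query its status API:

curl -s http://localhost:5601/api/status
# the reply should report an overall status of "available" once Kibana has connected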