Filebeat container does not send logs to Elastic - docker

On my local machine, running Ubuntu 18.04 via Windows Subsystem for Linux 2 (WSL2) on Windows 10, I am running Elasticsearch 7.3, Kibana 7.3 and Filebeat 7.3 Docker containers.
Setup is successful and Filebeat seems to monitor the containers correctly. However, Kibana does not show any logs.
Setup
To set up Elasticsearch and Kibana I use the following commands:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.3.1
docker run --network=lognetwork --name=elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.3.1
docker pull docker.elastic.co/kibana/kibana:7.3.1
docker run --name=kibana --network=lognetwork -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 -p 5601:5601 docker.elastic.co/kibana/kibana:7.3.1
After these two commands, the Kibana container's logs show that it successfully connects to Elasticsearch:
{"type":"log","#timestamp":"2019-09-01T13:22:18Z","tags":["status","plugin:spaces#7.3.1","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
I can also reach the Kibana dashboard at http://localhost:5601 and Elasticsearch at http://localhost:9200; both function properly.
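A quick way to double-check both endpoints from the shell (illustrative checks; both should return 200):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/api/status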
I then set up filebeat:
docker run --network=lognetwork docker.elastic.co/beats/filebeat:7.3.1 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["elasticsearch:9200"]
I can see both the Elasticsearch and Kibana container logs returning 200. The logs of the Filebeat container show:
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Loaded machine learning job configurations
Loaded Ingest pipelines
Finally, I pull the default config from the Elastic site, launch Filebeat and attach to the container:
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.3/deploy/docker/filebeat.docker.yml
docker run -d --network=lognetwork --name=filebeat --user=root --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" --volume="/var/run/docker.sock:/var/run/docker.sock:ro" docker.elastic.co/beats/filebeat:7.3.1 filebeat -e -strict.perms=false -E output.elasticsearch.hosts=["elasticsearch:9200"]
docker attach filebeat
I can see Filebeat sending its monitoring pulse, but when it does, the Elasticsearch logs do not show anything new.
To test, I launch the Docker "hello-world" image, which generates several lines of logs:
docker run hello-world
Filebeat shows the following log entries:
2019-09-01T13:30:40.624Z INFO log/input.go:148 Configured paths: [/var/lib/docker/containers/460cc8c215ff69ecf28685c9cf89c0e56d0b3e4f680b8bf29beb5b570ebb7a14/*-json.log]
2019-09-01T13:30:40.624Z INFO input/input.go:114 Starting input of type: container; ID: 16402101064670842079
I then go to http://localhost:5601
Results:
Kibana shows no logs. Clicking "Check for new data" does not show anything either.
The folder /var/lib/docker/containers is also empty; the path reported in the Filebeat log (/var/lib/docker/containers/460cc8c215ff69ecf28685c9cf89c0e56d0b3e4f680b8bf29beb5b570ebb7a14/) does not seem to exist.
Expected:
- Kibana to show the "hello world" docker container logs
- To see a log file under /var/lib/docker/containers
What am I missing?
Thank you,
Olivier

Well, it took me many hours before asking on SO, and of course, 30 minutes after asking I found the answer.
The trick was to check where the logs were actually created, since running Docker Desktop on WSL2 is slightly different from running Docker on Linux:
docker inspect filebeat | grep LogPath
returns:
"LogPath": "/var/data/docker-desktop/default/daemon-data/containers/fd56c5e43c9206baaadd33d3a711e523107622450d0deafb498e7940d809f779/fd56c5e43c9206baaadd33d3a711e523107622450d0deafb498e7940d809f779-json.log
Then changing the volume mapping accordingly to --volume="/var/data/docker-desktop/default/daemon-data/containers:/var/lib/docker/containers:ro" when launching Filebeat did the job:
docker run -d \
  --network=lognetwork \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/data/docker-desktop/default/daemon-data/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.3.1 filebeat -e -strict.perms=false -E output.elasticsearch.hosts=["elasticsearch:9200"]
The logs are now properly shown in Kibana.

In my case:
Docker Desktop installed on Windows 10, with WSL2 enabled in Docker.
I was trying to use Filebeat to collect the logs of all Docker containers.
ELK + Filebeat were also running as Docker containers.
The pipeline: Filebeat -> Logstash -> Elasticsearch -> Kibana
Problem: Filebeat was not finding the Docker logs, although logs from a locally mounted folder were sent to ELK and showed up in Kibana.
Solution: I had been running docker-compose up from the WSL bash shell. When I ran the same command from Windows PowerShell or cmd instead, the logs from the Docker containers started to appear in Kibana.
In the docker-compose file:
filebeat:
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker:/var/lib/docker
    - ./MYLOG_TEST:/usr/share/filebeat/mylog
    - ./MY_filebeat.yml:/usr/share/filebeat/filebeat.yml
and in MY_filebeat.yml:
filebeat.inputs:
  # for docker logs
  - type: container  # on older Filebeat versions, use docker as the input type
    enabled: true
    paths:
      - /var/lib/docker/containers/**/*.log
  # for my test log files
  - type: log  # on recent Filebeat versions (8.1+), use filestream as the input type
    enabled: true
    paths:
      - /usr/share/filebeat/mylog/*.log

Related

Volume data does not fill when running a bamboo container on the server

I am trying to run Bamboo on a server using Docker containers. When I run it on my local machine it works normally and the volume saves data successfully. But when I run the same Docker Compose file on the server, the volume does not save my data.
docker-compose.yml
version: '3.2'
services:
  bamboo:
    container_name: bamboo-server_test
    image: atlassian/bamboo-server
    volumes:
      - ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
    ports:
      - 8085:8085
volumes:
  bamboo_test_vol:
Running this compose file on the local machine:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Running the same compose file on the server:
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process for jira-software. Why doesn't it work for the Bamboo server even though I use the exact same compose file?
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'
services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      - ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never picked up the files from the host system. So I first tried it without docker-compose, following the Atlassian Bamboo instructions, with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I followed the error message and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I switched to the Docker container via SSH and all the files were in the Docker directory as usual.
I transferred the whole thing to the docker-compose file and took the absolute path in the volumes section. Subsequently it also worked with the docker-compose file.
My docker-compose file then looked like this:
[...]
    init: true
    volumes:
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
[...]
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
- there is no Docker capability added on the Bamboo server by default, and
- the setup would require running Docker in Docker.

Connect Kibana container with Elasticsearch

I have a VM which contains Docker and Elasticsearch (OS: CentOS 7). I would like to create a Kibana Docker container and connect it to my ES.
The ES contains indices; if I run curl -s http://localhost:9200/_cat/indices I get the list of indices.
I used a Dockerfile to create my Kibana image:
docker build -t="kibana_test" .
docker run --name kibana -e
ELASTICSEARCH_URL=http://#IP:9200 -e
XPACK_SECURITY_ENABLED=false -p 5600:5601 -d kibana_test
Well, if I put the IP address of my machine, I get this:
plugin:elasticsearch#6.2.4 Request Timeout after 3000ms
And in my Docker logs I get this message:
License information from the X-Pack plugin could not be obtained from
Elasticsearch for the [data] cluster
How can I resolve this problem?
Thanks in advance!
So, configure the following in the elasticsearch.yml file:
network.host: 0.0.0.0
transport.host: localhost
transport.tcp.port: 9300
Then restart the Elasticsearch service first.
When building the Kibana container, use this:
-e ELASTICSEARCH_URL=http://172.17.0.1:9200
Then check again.
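Putting it together, a minimal sketch of the full run command with the Docker bridge IP (image, container name and ports reuse the ones from the question and are illustrative):
docker run --name kibana \
  -e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
  -e XPACK_SECURITY_ENABLED=false \
  -p 5600:5601 -d kibana_test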

Kibana container to elasticsearch cloud auth err

I have a production instance of Elasticsearch 5.6.9 deployed on Elastic Cloud.
With a plain HTTP Elasticsearch everything is OK, but I would like to run a local Kibana connected to that HTTPS instance.
I have tried:
docker run --name kibana-prod-user \
  -e ELASTICSEARCH_URL=https://####.eu-west-1.aws.found.io:9243 \
  -e ELASTICSEARCH_PASSWORD=#### \
  -v /host/workspace/cert:/usr/share/elasticsearch/config/certificates \
  -p 3501:5601 --rm kibana
but I get an error.
In my mounted directory I have put the cert.cer of Elastic Cloud.
Any ideas?
Thank you very much
I have found the solution, after understanding that the error wasn't a certificate problem.
The right command for Kibana 5.6.10 is:
docker run --name kibana-prod-provider -v "$(pwd)":/etc/kibana/ -p 3502:5601 --rm kibana
because the ELASTICSEARCH_PASSWORD env var is not handled by the Dockerfile; only the URL is.
Then in the $(pwd) directory I put this kibana.yml file:
server.host: '0'
elasticsearch.url: 'https://###.eu-west-1.aws.found.io:9243'
elasticsearch.username: elastic
elasticsearch.password: ###

How can I Publish a jupyter tmpnb server?

I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by typing the machine's IP address:8000 but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers run on the same machine, I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook and was able to access it remotely at the machine's IP address:8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content, then launch the containers with docker-compose up. Since the images will be pulled directly, pull-related errors are avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
A solution is available from the github.com/jupyter/tmpnb README.md file. At the end of the file under the heading "Development" three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target of the makefile contained in the repository. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting Docker containers as part of the launching process.
Some insight into how this works can be gained by looking inside the makefile. When the configurable-http-proxy image is run on Docker, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
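To make that concrete, here is a rough sketch of the equivalent docker run commands (names, token generation and exact flags are illustrative; the tmpnb makefile is the authoritative source):
# generate an auth token shared by the proxy and the orchestrator (illustrative)
TOKEN=$(openssl rand -hex 24)
# proxy: publish both the public port (8000) and the internal API port (8001)
docker run -d --name=proxy -p 8000:8000 -p 8001:8001 \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
# orchestrator: linked to the proxy so the "proxy" hostname resolves, pointed at the proxy API
docker run -d --name=tmpnb --link=proxy:proxy \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN -e CONFIGPROXY_ENDPOINT=http://proxy:8001 \
  -v /var/run/docker.sock:/docker.sock \
  jupyter/tmpnb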

Weave + Ansible Docker Module

I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using Ansible I can get the above to work using the command module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
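For reference, this is essentially what eval $(weave env) boils down to (exact output may differ between Weave versions):
# point the Docker CLI at Weave's proxy socket
export DOCKER_HOST=unix:///var/run/weave/weave.sock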
But when I use the docker module for ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
OR
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. The DNS does not resolve, so the servers never start. I do have other server options (like SERVER_ID for neo4j) set, just not shown here for simplicity.
Anyone run into this? I know the docker module for ansible uses docker-py and stuff. I wonder if there's some type of incompatibility with weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as it's on the same host. From one host, though, I cannot ping the containers on the other host. This is despite them registering in WeaveDNS (weave status dns) and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
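For what it's worth, a minimal sketch of that kludge from the shell, reusing the run command from the question (the helper is the undocumented one mentioned above):
# find the Docker bridge IP via weave's helper, then use it as the container's DNS server
BRIDGE_IP=$(weave docker-bridge-ip)
docker run --name neo-2 -d -P --dns=$BRIDGE_IP ... my/neo4j-cluster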
