We are using microservices, and for Filebeat to work I have created a log file for each service. When I test Filebeat from the zip download on Windows it works fine. The filebeat.yml configuration is as follows.
Now I want to move this setup to production by containerizing all the moving parts with a docker-compose file. Elasticsearch, Kibana and Logstash are working perfectly fine. But when I use the same filebeat.yml file in the Docker environment, it does not trace the log files the way the zip-download Filebeat does.
Approaches I used to test:
Stopping the Filebeat container and sending the logs from the zip-file Filebeat to port 5044 of the Docker container running on localhost; it works fine in this case.
The Filebeat segment of the docker-compose file is given below.
I would love a solution for this problem with the Filebeat Docker container.
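For reference, a Filebeat container can only see paths that are mounted into it, so a docker-compose segment along these lines is typical; the image tag, host paths, mount points and the logstash service name here are illustrative, not the asker's actual configuration:

  # under services: in docker-compose.yml
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.17.0        # illustrative version
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./logs:/var/log/services:ro                       # per-service log files must be visible at this path
    depends_on:
      - logstash

With a mount like this, the paths entries in filebeat.yml have to point at the in-container location (e.g. /var/log/services/*.log), not at the host paths that worked for the zip download.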
Summary
I'm on a Mac. I have several Docker containers, and I can run all of them using docker-compose up and everything works as expected: for instance, I can reach my backend container by browsing to http://localhost:8882/, since port 8882 is mapped to the same port on the host with:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so that I can develop "from inside" that container.
I've tried vscode's "Remote - Containers" plugin following this tutorial, and also PyCharm Professional, which can run Docker configurations out of the box. In both cases I had the same result: I run the IDE configuration to attach to the container, and the local website suddenly stops working, showing "this site can't be reached".
When using pycharm, I noticed that Docker Desktop shows that the backend container changed its port to 54762. But I also tried that port with no luck.
I also used this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container IP (172.18.0.4) and tried that with both ports; again, same result.
Any ideas?
Pycharm configuration
Interpreter:
This works in the sense that I can browse the code of the libraries installed inside the container:
Run/Debug configuration. This configuration succeeds in the sense that I can start it and it seems to attach correctly to the backend container... though the problem described earlier appears.
So, there were many things at play here, since this is already a huge project full of technical debt.
But the main one is that the docker-compose file I was using ran the server with uWSGI in production mode, which interfered with many things... among them PyCharm's ability to successfully attach to the running container, debug, and so on.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run Flask in development mode. That fixed everything.
Be mindful that, for some reason, the flask run command inside a Docker container does not let you reach the website until you pass the --host=0.0.0.0 option to it. More at https://stackoverflow.com/a/30329547/5750078
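As a rough sketch of what such an override file can look like (the service name backend and the module app.py are assumptions about this particular project; port 8882 is taken from the question):

# docker-compose.dev.yml - only overrides the backend command for development
services:
  backend:
    command: flask run --host=0.0.0.0 --port=8882
    environment:
      - FLASK_APP=app.py
      - FLASK_DEBUG=1

It is applied on top of the main file with docker-compose -f docker-compose.yml -f docker-compose.dev.yml up.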
I have a Django application running inside a Docker container on an EC2 instance. When I ssh into my EC2, I run docker ps to find my container and then run docker exec -it [container] sh. Once inside, there is a directory called /logs that has a log I want to send to a dashboard I'm running on a remote server. I was able to scp the log.file to the remote server from inside the Docker container. But I want to be able to send this log.file to my user directory on the EC2, for example /home/ec2-user, so that I can use rsync or some other method to continuously update the logs on the remote server.
I'm a bit of a n00b with Docker. Can someone explain how I can get the log.file inside my container out of it, for example to my EC2 home directory? From there, I would just write a bash script that runs on a cron and scps the log.file to my remote server every minute (or whatever interval of time).
Second, I cannot, for the life of me, find where my nginx log files are. On the EC2, I don't have an nginx directory in /var/log or in /etc. Does anyone know where the nginx access.log files are inside of the Docker container?
Thanks!
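Two common ways to get a file like this out of a container, sketched with made-up container and image names (the /logs path comes from the question):

# one-off copy from the running container to the EC2 home directory
docker cp my-django-container:/logs/log.file /home/ec2-user/logs/

# or start the container with the log directory bind-mounted,
# so /logs is always visible on the host without copying
docker run -v /home/ec2-user/logs:/logs my-django-image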
I'm looking for the appropriate way to monitor application logs produced by nginx, Tomcat and embedded Spring Boot running in Docker, using Filebeat and ELK.
In the container strategy, a container should be used for only one purpose.
One nginx per container and one Tomcat per container, meaning we can't run an additional Filebeat inside an nginx or Tomcat container.
From what I have read on the Internet, we could have the following setup (sketched in the compose fragment after this list):
a volume dedicated to storing logs
an nginx container that mounts the dedicated logs volume
a Tomcat / Spring Boot container that mounts the dedicated logs volume
a Filebeat container that also mounts the dedicated logs volume
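A minimal compose fragment for that shared-volume setup might look like this (image names, tags and paths are illustrative, and it assumes nginx and the app are configured to write log files rather than using the images' default stdout logging):

volumes:
  app-logs:

services:
  nginx:
    image: nginx:1.25
    volumes:
      - app-logs:/var/log/nginx          # nginx writes access.log / error.log here
  api:
    image: my-springboot-api             # illustrative image name
    volumes:
      - app-logs:/app/logs               # the app logs here
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.17.0
    volumes:
      - app-logs:/logs:ro                # Filebeat reads everything in the shared volume
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro

Note that all three mount points refer to the same volume root, so the files from nginx and the app end up side by side, which is exactly the naming question raised below.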
This works fine, but when it comes to scaling out the nginx and Spring Boot containers, it is a little more complex for me.
Which pattern should I use to push my logs to Logstash with Filebeat if I have the following configuration:
several nginx containers in load balancing with the same configuration (the log configuration is the same: same path)
several Spring Boot REST API containers behind the nginx containers with the same configuration (the log configuration is the same: same path)
Should I create one volume per set of nginx + Spring Boot REST API containers and add a Filebeat container to each set?
Or should I create a global log volume shared by all my containers, with a different log filename per container (putting the container name in the log filename?), and have only one Filebeat container?
In the second proposal, how would I scale Filebeat?
Is there another way to do this?
Many thanks for your help.
The easiest thing to do, if you can manage it, is to set each container process to log to its own stdout (you might be able to specify /dev/stdout or /proc/1/fd/1 as a log file). For example, the Docker Hub nginx Dockerfile specifies
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
so the ordinary nginx logs become the container logs. Once you do that, you can plug in the Filebeat container input to read those logs and process them. You can also see them from outside the container with docker logs; they are the same logs.
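A minimal filebeat.yml sketch for that, assuming a recent Filebeat (7.2+), /var/lib/docker/containers mounted read-only into the Filebeat container, and a compose service named logstash:

filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["logstash:5044"]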
What if you have to log to the filesystem? Or there are multiple separate log streams you want to be able to collect?
If the number of containers is variable, but you have good control over their configuration, then I'd probably set up a single global log volume as you describe and use the filebeat log input to read every log file in that directory tree.
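With that layout the input is just a recursive glob over the mounted volume; a sketch, assuming the shared volume is mounted at /logs inside the Filebeat container:

filebeat.inputs:
  - type: log
    paths:
      - /logs/**/*.log        # ** expands directories recursively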
If the number of containers is fixed, then you can set up a volume per container and mount it in each container's "usual" log storage location. Then mount all of those directories into the filebeat container. The obvious problem here is that if you do start or stop a container, you'll need to restart the log manager for the added/removed volume.
If you're actually on Kubernetes, there are two more possibilities. If you're trying to collect container logs out of the filesystem, you need to run a copy of filebeat on every node; a DaemonSet can manage this for you. A Kubernetes pod can also run multiple containers, so your other option is to set up pods with both an application container and a filebeat "sidecar" container that ships the logs off. Set up the pod with an emptyDir volume to hold the logs, and mount it into both containers. A template system like Helm can help you write the pod specifications without repeating the logging sidecar setup over and over.
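A stripped-down pod spec for the sidecar variant, with illustrative names and image tags:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  volumes:
    - name: logs
      emptyDir: {}                 # shared scratch space for the log files
  containers:
    - name: app
      image: my-app:latest         # illustrative application image
      volumeMounts:
        - name: logs
          mountPath: /app/logs     # the app writes its log files here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.0
      volumeMounts:
        - name: logs
          mountPath: /app/logs
          readOnly: true           # the sidecar only reads and ships the files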
We need some help with a Rancher / Docker issue encountered while trying to set up Logstash to parse our IIS logs.
There's a container with one service, Logstash (not a web service but a system service, part of the ELK stack), that we want to use to ingest files from a given input (or inputs) and parse them into fields before sending them to the configured output(s), in this case Elasticsearch.
We need the service to be accessible from an outside system (namely our web server, which is going to send the IIS logs over for processing).
The problem is that we can't work out the endpoint configuration.
There is a load balancer host running on Rancher with two open ports that are supposed to route all requests to the inner service containers by path name and target, but we can't get a path configured for the Logstash service.
I have been digging through the Logstash configs and there is a setting for node.name in the logstash.conf file, but... I haven't managed to do anything with it yet.
Hoping someone who is more familiar with this stuff can offer some insight.
Basically, I can get the Logstash service on Rancher to connect to the AWS Elasticsearch, but I cannot get our web box (with the IIS logs) to connect to the Logstash service on its input port.
The solution was not to use the standard image but to customize it. The steps involved:
Create a local repo with the folder structure we need to emulate; only the folders we are going to replace are needed.
Add a Dockerfile which will be used to build the image with a docker build command.
In the Dockerfile, reference the ready-made / standard image as the base in the first line, i.e. FROM <base image>.
In a Dockerfile RUN command, remove the directories and files that need to be customized; in this case it was the logstash/pipeline directory and the logstash/config directory.
Use ADD commands to replace the removed directories with our customized versions.
Use the EXPOSE command to expose the port the service is listening on.
Build the image with docker build, then run the container with docker run and the -p flag to publish the ports we want open, mapping them to ports on the host.
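Reconstructing those steps as a Dockerfile sketch (image tag, paths and port are assumptions, not the exact files used here):

FROM docker.elastic.co/logstash/logstash:7.17.0

# remove the stock pipeline and config so they can be replaced
RUN rm -rf /usr/share/logstash/pipeline /usr/share/logstash/config

# add the customized versions kept in the local repo
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

# port the input (e.g. beats) listens on
EXPOSE 5044

Build with docker build -t custom-logstash . and run with docker run -p 5044:5044 custom-logstash.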
I have one question: is there any way to ship the logs of each container when the log files are located inside the containers? Currently the flow only ships the log files located in the default path (/var/lib/docker/containers/*/*.log). I want to customize filebeat.yaml to ship the logs from inside each container to Logstash instead of from the default path.
If you can set your containers to log to stdout rather than to files, it looks like filebeat has an autodiscover mode which will capture the docker logs of every container.
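A sketch of that autodiscover setup, assuming the Filebeat container has the Docker socket and /var/lib/docker/containers mounted, and that Logstash is reachable as logstash:5044:

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true        # launch a log input for each container's docker logs

output.logstash:
  hosts: ["logstash:5044"]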
Another common setup in an ELK world is to configure logstash on your host, and set up Docker's logging options to send all output on containers' stdout into logstash. This makes docker logs not work, but all of your log output is available via Kibana.
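The answer doesn't name a specific driver; one common way to wire this up is the gelf logging driver pointed at a Logstash gelf input, roughly like this (service name, address and port are illustrative):

services:
  backend:
    image: my-app                      # illustrative
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"

Logstash then needs a matching gelf input listening on that port, and as noted, docker logs for this container will no longer show anything.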
If your container processes always write to log files, you can use the docker run -v option or the Docker Compose volumes: option to mount a host directory onto an individual container's /var/log directory. Then the log files will be visible on the host, and you can use whatever file-based collector you like to capture them. This is in the realm of routine changes that will require you to stop and delete your existing containers before restarting them with different options.
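In compose form that mount is just (host path and service name are illustrative):

services:
  app:
    image: my-app                       # illustrative
    volumes:
      - ./logs/app:/var/log             # log files the container writes under /var/log show up on the host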