remove timestamp from log line with Promtail - docker

I am scraping logs from Docker with Promtail and shipping them to Loki.
It works very well, but I would like to remove the timestamp from the log line once it has been extracted by Promtail.
The reason is that I end up with a log panel where half of the screen is occupied by the timestamp. If I want to display the timestamp in the panel, I can do that, so I don't really need it in the log line.
I have been reading the documentation, but I'm not sure how to approach it. logfmt? replace? timestamp?
https://grafana.com/docs/loki/latest/clients/promtail/stages/logfmt/
promtail-config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # local machine logs
  - job_name: local logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
  # docker containers
  - job_name: containers
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 15s
    pipeline_stages:
      - docker: {}
    relabel_configs:
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        regex: '(.*)'
        target_label: 'service'
Thank you

Actually, I just realized I was looking for the wrong thing. I just wanted to display less in the Grafana log panel; the logs were formatted properly. I just had to select which fields to display.
Thanks!
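As a side note, for anyone who really does want to strip the timestamp out of the log line at ingestion time, a replace stage after the docker stage is one option. This is only an untested sketch and assumes the line starts with a single whitespace-delimited ISO 8601 timestamp:

pipeline_stages:
  - docker: {}
  # strip a leading timestamp token from the log line; the named
  # capture group is replaced with an empty string (assumption:
  # the timestamp is the first whitespace-delimited token)
  - replace:
      expression: '^(?P<ts>\S+\s+)'
      replace: ''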

Related

Collecting Docker events logs with Promtail

How can I get logs from docker events into Promtail?
I'm using Docker to run a set of containers on my server, and I would like to collect and centralize their logs using Promtail. Specifically, I would like to capture the output of the docker events command (logs from the Docker daemon about when a container is started, etc.) and send it to Promtail.
How can I achieve this? What are the steps and configurations I need to set up in order to get logs from docker events into Promtail?
Note that my Docker host is running on a Windows machine, and I'm using the latest version of Promtail.
My promtail.yaml file:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: flog_scrape
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
        filters:
          - name: label
            values: ["logging=promtail"]
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_logging_jobname']
        target_label: 'job'
Any help or advice would be greatly appreciated. Thank you!
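One possible approach (only a sketch, not a verified setup): docker events does not write to a file by itself, so you could redirect its output to a log file yourself (e.g. docker events >> docker-events.log, or the PowerShell equivalent on Windows) and let Promtail tail that file with a plain static job. The path below is hypothetical:

scrape_configs:
  - job_name: docker_events
    static_configs:
      - targets:
          - localhost
        labels:
          job: docker_events
          # hypothetical path of the file fed by `docker events`
          __path__: /var/log/docker-events.log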

Promtail and Grafana - json log file from docker container not displayed

My application's services are deployed via docker-compose. Currently, I have also deployed Grafana, Loki, and Promtail within the same docker-compose network.
Following the getting-started guide, collecting and displaying the log files from /var/log with the config
- job_name: system
  entry_parser: raw
  static_configs:
    - targets:
        - localhost
      labels:
        job: varlogs
        __path__: /var/log/*log
works fine.
However, my backend (NestJS) logs into a log file which is stored in a Docker volume.
Example log entry:
{"message":"Mapped {/api/drink, POST} route","context":"RouterExplorer","level":"info","timestamp":"2021-03-23T17:08:16.334Z"}
The path to the log is
/var/lib/docker/volumes/my_volume/_data/general.log
When I add the following config to Promtail
- job_name: backend
  pipeline_stages:
    - json:
        expressions:
          level: level
          message: message
          timestamp: timestamp
          context: context
  static_configs:
    - targets:
        - localhost
      labels:
        job: backend
        __path__: /var/lib/docker/volumes/my_volume/_data/general.log
and use the query {job="backend"} in Grafana, nothing is displayed.
Furthermore, the logs of the Promtail container don't give any information.
What am I missing?
Thank you in advance!
In your pipeline stages you need to store the extracted values:
pipeline_stages:
  - json:
      expressions:
        level: level
        message: message
        timestamp: timestamp
        context: context
  - timestamp:
      source: timestamp
      format: RFC3339  # the timestamp stage requires a format; this matches the example log line
  - labels:
      level:
      context:
  - output:
      source: message
This will set the timestamp, add context and level as labels, and the message will become the log line.
Documentation can be found in the Promtail pipeline stages documentation.
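For reference, folding those stages into the backend job from the question would look roughly like this (same content as above, just merged; RFC3339 for the timestamp format is an assumption based on the example log line):

- job_name: backend
  pipeline_stages:
    - json:
        expressions:
          level: level
          message: message
          timestamp: timestamp
          context: context
    - timestamp:
        source: timestamp
        format: RFC3339  # assumption: matches "2021-03-23T17:08:16.334Z"
    - labels:
        level:
        context:
    - output:
        source: message
  static_configs:
    - targets:
        - localhost
      labels:
        job: backend
        __path__: /var/lib/docker/volumes/my_volume/_data/general.log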

How to change my instance names on Prometheus

I'm monitoring multiple computers in the same cluster; for that I'm using Prometheus.
Here is my config file prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "Server-monitoring-Api"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
      - targets: ["localhost:9182"]
      - targets: ["192.168.1.71:9182"]
      - targets: ["192.168.1.84:9182"]
I'm new to Prometheus and I want to show the name of each target, i.e. rather than showing for example 192.168.1.71:9182, I only want the target's name to be shown. I did some research and found this:
relabel_configs:
  - source_labels: [__meta_ec2_tag_Name]
    target_label: instance
But I don't know how to use it to relabel my targets (instances). Any help will be appreciated, thanks.
The snippet that you found should only work if you're using the EC2 service discovery features of Prometheus (which doesn't seem to be your case, since you're using static targets).
I see a couple of options. You could expose an additional metric (e.g. hostname) directly from your targets, with the hostname as its value. Or you could use the textfile collector to expose the same metric as a static value (on a different port).
I recommend reading this post, which explains why having a separate metric for the "name" or "role" of the machine is usually a better approach than having a hostname label in your metrics.
It is also possible to add a custom label directly in the Prometheus config, something like the example below (since you have static targets anyhow). Finally, if you are already using the Prometheus node exporter, you could use the node_uname_info metric (its nodename label).
- job_name: 'Kafka'
  metrics_path: /metrics
  static_configs:
    - targets: ['10.0.0.4:9309']
      labels:
        hostname: hostname-a
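Applied to the targets from the question, the same idea with the instance label itself would look something like this (the names are made up; Prometheus only fills instance from the target address when the label isn't already set):

- job_name: "Server-monitoring-Api"
  static_configs:
    - targets: ["localhost:9090"]
    - targets: ["localhost:9182"]
      labels:
        instance: "local-node"      # hypothetical name
    - targets: ["192.168.1.71:9182"]
      labels:
        instance: "worker-71"       # hypothetical name
    - targets: ["192.168.1.84:9182"]
      labels:
        instance: "worker-84"       # hypothetical name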

With Prometheus how to monitor a scaled Docker service where each instance serves its own /metrics?

I have a Prometheus setup that monitors metrics exposed by my own services. This works fine for a single instance, but once I start scaling them, Prometheus gets completely confused and starts tracking incorrect values.
All services are running on a single node, through docker-compose.
This is the job in the scrape_configs:
- job_name: 'wowanalyzer'
  static_configs:
    - targets: ['prod:8000']
Each instance of prod tracks metrics in its own memory and serves them at /metrics. I'm guessing Prometheus picks a random container each time it scrapes, which leads to the huge increase in recorded counts, building up over time. Instead, I'd like Prometheus to read /metrics on all instances, regardless of the number of instances active at that time.
docker-gen (https://github.com/jwilder/docker-gen) was developed for this purpose.
You would need to create a sidecar container running docker-gen that generates a new set of targets.
If I remember correctly, the host names generated are prod_1, prod_2, prod_X, etc.
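A typical way to wire that up (a sketch under assumptions; the file path is made up) is to have docker-gen render a file_sd target file from its template and let Prometheus watch it:

scrape_configs:
  - job_name: 'wowanalyzer'
    file_sd_configs:
      - files:
          # hypothetical path written by the docker-gen sidecar
          - /etc/prometheus/targets/wowanalyzer.json
        refresh_interval: 30s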
I tried hard to find something to help us with this issue, but it looks like an unsolved problem.
So I decided to create this tool, which helps with this service discovery:
https://github.com/juliofalbo/docker-compose-prometheus-service-discovery
Feel free to contribute and open issues!
You can use the DNS service discovery feature. For example:
docker-compose.yml:
version: "3"
services:
myapp:
image: appimage:v1
restart: always
networks:
- back
prometheus:
image: "prom/prometheus:v2.32.1"
container_name: "prometheus"
restart: "always"
ports: [ "9090:9090" ]
volumes:
- "./prometheus.yml:/etc/prometheus/prometheus.yml"
- "prometheus_data:/prometheus"
networks:
- back
prometheus.yml sample:
global:
  scrape_interval: 15s
  evaluation_interval: 60s

scrape_configs:
  - job_name: 'monitoringjob'
    dns_sd_configs:
      - names: [ 'myapp' ]  # service name from docker-compose
        type: 'A'
        port: 8080
    metrics_path: '/actuator/prometheus'
You can check your DNS records using the nslookup utility from any container in this network:
docker exec -it myapp bash
bash-4.2# yum install bind-utils
bash-4.2# nslookup myapp
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: myapp
Address: 172.22.0.2
Name: myapp
Address: 172.22.0.7

Docker prom/Prometheus container exits

When I run this command, it creates the Docker container, but the container shows an exited status and I am not able to get it started.
My goal is to be able to replace the prometheus.yml file with a custom prometheus.yml in order to monitor nginx running at http://localhost:70/nginx_status.
docker run -it -d --name prometheus3 -p 9090:9090 \
  -v /opt/docker/prometheus:/etc/prometheus \
  prom/prometheus --config.file=/etc/prometheus/prometheus.yml
Here is my prometheus.yml file:
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    scrape_timeout: 5s
    static_configs:
      - targets: ['localhost: 9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost: 70/nginx_status']
You should be able to see the logs of the stopped container by running:
docker logs prometheus3
Anyway, there are (at least) two issues with your configuration:
1. The prometheus.yml file is invalid, so the prometheus process immediately exits. The scrape_interval and scrape_timeout need to be in a global section, and the indentation was off. See below for an example of a correctly formatted yml file.
2. You can't just scrape the /nginx_status endpoint; you need to use an nginx exporter, which extracts the metrics for you. The Prometheus server then scrapes the nginx exporter to retrieve the metrics. You can find a list of exporters here and pick one that suits you.
Once you have the exporter running, you need to point Prometheus to the address of the exporter so it can be scraped.
Working prometheus.yml:
global:
  scrape_interval: 5s
  scrape_timeout: 5s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['<< host name and port of nginx exporter >>']
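For the nginx side, assuming something like the official nginx/nginx-prometheus-exporter is running and pointed at your stub_status URL (it listens on port 9113 by default), the 'node' job above would then target the exporter, for example:

  - job_name: 'nginx'
    static_configs:
      # assumption: an nginx exporter container listening on 9113,
      # itself scraping http://localhost:70/nginx_status
      - targets: ['localhost:9113']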
