How to configure uwsgi.yml for Metricbeat properly?

I have enabled the uwsgi module for Metricbeat, but no logs appear in Kibana. I am using the default uwsgi.yml for Metricbeat, which looks like this:
# Module: uwsgi
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/6.5/metricbeat-module-uwsgi.html
- module: uwsgi
  #metricsets:
  #  - status
  period: 10s
  hosts: ["tcp://127.0.0.1:9090"]
I've even tried to find the port uWSGI is running on, but failed. How should I change the uwsgi.yml file to make it work properly? I've never used uWSGI myself and my Linux knowledge is pretty basic. Thanks in advance.

You need to add this line to your uWSGI config (normally in /etc/uwsgi/apps-enabled/yourapp.ini):
stats = 127.0.0.1:9090
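After restarting uWSGI you can confirm the stats server answers on the port that uwsgi.yml points at; the stats socket simply returns JSON when you connect to it. A minimal check, assuming uWSGI runs under systemd and netcat is installed:

sudo systemctl restart uwsgi
# should dump the stats JSON that Metricbeat's status metricset reads:
nc 127.0.0.1 9090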

Related

Landoop: how to change Kafka Connect worker properties

landoop/fast-data-dev:2.6
I want to change the default batch.size using 'producer.override.batch.size=65536' when creating a new connector.
But in order to do that, the override policy has to be applied on the worker side:
connector.client.config.override.policy=All
Otherwise there is an exception:
"producer.override.batch.size" : The 'None' policy does not allow 'batch.size' to be overridden in the connector configuration.
It's not clear how exactly:
- to change the default worker properties
- where Landoop expects them to be placed
- what name they should have
so that Landoop sees them.
I start Landoop using the following docker-compose file:
version: '2'
services:
  kafka-cluster:
    image: landoop/fast-data-dev:2.6
    environment:
      ADV_HOST: 127.0.0.1
      RUNTESTS: 0
    ports:
      - 2181:2181             # Zookeeper
      - 3030:3030             # Landoop UI
      - 8081-8083:8081-8083   # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585   # JMX ports
      - 9092:9092             # Kafka broker
    volumes:
      - ./connectors/news/crypto-panic-connector-1.0.jar:/connectors/crypto-panic-connector-1.0.jar
The distributed worker properties file at /connect/connect-avro-distributed.properties, generated by Landoop:
offset.storage.partitions=5
key.converter.schema.registry.url=http://127.0.0.1:8081
value.converter.schema.registry.url=http://127.0.0.1:8081
config.storage.replication.factor=1
offset.storage.topic=connect-offsets
status.storage.partitions=3
offset.storage.replication.factor=1
key.converter=io.confluent.connect.avro.AvroConverter
config.storage.topic=connect-configs
config.storage.partitions=1
group.id=connect-fast-data
rest.advertised.host.name=127.0.0.1
port=8083
value.converter=io.confluent.connect.avro.AvroConverter
rest.port=8083
status.storage.replication.factor=1
status.storage.topic=connect-statuses
access.control.allow.origin=*
access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
jmx.port=9584
plugin.path=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors
bootstrap.servers=PLAINTEXT://127.0.0.1:9092
crypto-panic-connector-1.0 connector directory structure:
/config:
  > worker.properties
/src:
  > ...
UPDATE
Adding these environment properties:
CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
doesn't work for landoop/fast-data-dev:2.6. In the logs it's still
'connector.client.config.override.policy = None'
and there is a warning:
WARN The configuration 'connect.client.config.override.policy' was supplied but isn't a known config.
Changing this to
CONNECTOR_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
removes the warning, but the override policy is still 'None' and it's not possible to override client properties when creating a connector.
Changing to
CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
has the same effect: policy 'None'.
The batch size override is not applied either, so I assume those override features are not supported in Landoop:
WARN The configuration 'producer.override.batch.size' was supplied but isn't a known config.
I assume 'confluentinc/cp-kafka-connect' doesn't have a UI built in, and for learning purposes it seems better to have one, so doing this in Landoop is preferable. But thanks for the recommendation to use 'confluentinc/cp-kafka-connect'; I will try this config overriding there as well.
For starters, that image is very old and no longer maintained. I'd recommend you use confluentinc/cp-kafka-connect.
In any case, for both images, you can use:
environment:
  CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
  CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
"It's not clear how exactly ... to change the default worker properties"
Look at the source code.
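For reference, these images map environment variables to worker properties by upper-casing the property name, replacing dots with underscores, and prefixing CONNECT_. Since the worker property is connector.client.config.override.policy, the variable would presumably be CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY, which also matches the "isn't a known config" warning quoted in the update. With the policy set to All, the batch-size override then goes into the connector's own config at creation time, not into the worker. A sketch against the Connect REST API from the compose file above, with a placeholder connector class:

curl -X POST http://127.0.0.1:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "crypto-panic-source",
        "config": {
          "connector.class": "CryptoPanicSourceConnector",
          "tasks.max": "1",
          "producer.override.batch.size": "65536"
        }
      }'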

Lua core.log in HAProxy does not log to the default HAProxy log file

I have set up a Lua script to process requests in HAProxy, and I am using the core class to log information to the log file.
Here is my config file, /etc/haproxy/haproxy.cfg:
global
    lua-load /etc/haproxy/route_req.lua
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

#HAProxy for web servers
frontend web-frontend
    bind 10.122.0.2:80
    bind 139.59.75.106:80
    mode http
    use_backend %[lua.routeIP]
Here is my route_req.lua file:
local function getIP(txn)
    local clientip = txn.f:src()
    backend = ""
    -- MY CODE GOES HERE
    core.log(core.info, "This is an example\n")
    return backend
end

core.register_fetches('routeIP', getIP)
I don't see any logging in my log file, /var/log/haproxy.log. There is also nothing related in /var/log/syslog.
Make sure to include log global in your frontend stanza.
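Log targets declared in the global section are not used by a proxy unless that proxy opts in with log global, so the frontend from the question would become (a minimal sketch):

frontend web-frontend
    log global
    bind 10.122.0.2:80
    bind 139.59.75.106:80
    mode http
    use_backend %[lua.routeIP]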

Why does Metricbeat ignore my ActiveMQ broker host configuration in my Kibana Docker setup?

I'm trying to set up a local Kibana instance with ActiveMQ for testing purposes. I've created a Docker network called elastic-network. I have 3 containers in my network: elasticsearch, kibana and finally activemq. In my kibana container, I downloaded Metricbeat using the following shell command:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
In the configuration file metricbeat.reference.yml, I've changed the host to point at my ActiveMQ instance running in the container activemq:
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
When I run Metricbeat with the verbose parameter (./metricbeat -e), I get an error saying the ActiveMQ API is unreachable. My problem is that Metricbeat ignores my ActiveMQ broker configuration and tries to connect to localhost instead.
Is there a reason why my configuration could be ignored?
After looking through the documentation, I saw that on Linux, unlike the other OSes, you also have to change the configuration in the modules directory (modules.d/activemq.yml), not just metricbeat.reference.yml:
# Module: activemq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.11/metricbeat-module-activemq.html
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
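A quick way to verify the module file is actually picked up, assuming the tarball layout from the question (these are standard Metricbeat subcommands, but check your version's help output):

./metricbeat modules enable activemq   # activates modules.d/activemq.yml
./metricbeat test modules activemq     # runs the module once and prints what it fetched
./metricbeat test output               # checks the connection to Elasticsearch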

How to check the number of workers running inside a Docker container of a Kubernetes pod?

I am using a Flask + uWSGI architecture for a production service and have set the master flag in the uWSGI config to false. While running the service, I pass NUM_WORKERS of uWSGI as 2 to the Docker container. Based on this doc on uWSGI config, the master flag is necessary to re-spawn and pre-fork workers, so I wonder if my service containers within the pods are actually using 2 workers.
I want to exec into a pod and see the number of uWSGI workers that are actually being used.
Not directly related, but here is my uWSGI config:
[uwsgi]
socket = 0.0.0.0:9999
protocol = http
module = my_app.server.wsgi
callable = app
master = false
thunder-lock = true
Add a Prometheus exporter to your app, and curl the /metrics endpoint manually:
https://github.com/timonwong/uwsgi_exporter
It has a num-workers metric: https://github.com/timonwong/uwsgi_exporter/blob/9f88775cc1a600e4038bb1eae5edfdf38f023dc4/exporter/exporter.go#L50
Further, you can collect this using Prometheus to monitor and alert automatically.
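For a one-off check without an exporter, you can also exec into the pod and count the uWSGI processes directly; a sketch with placeholder pod and container names, assuming ps exists in the image:

kubectl exec <pod-name> -c <container-name> -- ps aux
# or count worker processes in one go ([u]wsgi avoids matching the grep itself):
kubectl exec <pod-name> -c <container-name> -- sh -c 'ps aux | grep -c "[u]wsgi"'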

Finding the next available port with Ansible

I'm using Ansible to deploy a Ruby on Rails application using Puma as a web server. As part of the deployment, the Puma configuration binds to the IP address of the server on port 8080:
bind "tcp://{{ ip_address }}:8080"
This is then used in the nginx vhost config to access the app:
upstream {{ app_name }} {
    server {{ ip_address }}:8080;
}
All of this is working fine. However, I now want to deploy multiple copies of the app (staging, production) onto the same server, and obviously having several bindings on 8080 causes issues, so I need to use different ports.
The simplest solution would be to put the port in a group var and drop it in when the app is deployed. However, this would require background knowledge of the apps already running on the server, and it feels like the deployment should be able to "discover" the port to use.
Instead, I was considering iterating through ports, starting at 8080, and checking each until one is not in use. netstat -anp | grep 8080 returns exit code 0 if the port is in use, so perhaps I could use that command as the test (though I'm not sure how to do the looping bit).
Has anyone come up against this problem before? Is there a more graceful solution that I'm overlooking?
I'd define a list of allowed ports and compare it to the ports already in use. Something like this:
- hosts: myserver
  vars:
    allowed_ports:
      - 80
      - 8200
  tasks:
    - name: Gather occupied tcp v4 ports
      shell: netstat -nlt4 | grep -oP '(?<=0.0.0.0:)(\d+)'
      register: used_ports

    - name: Set bind_port as first available port
      set_fact:
        bind_port: "{{ allowed_ports | difference(used_ports.stdout_lines | map('int') | list) | first | default(0) }}"
      failed_when: bind_port | int == 0

    - name: Show bind port
      debug: var=bind_port
You may want to tune 0.0.0.0 in the regexp if you need to check ports on a specific interface.
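With bind_port discovered, the hard-coded 8080 in the templates from the question can then be replaced by the new fact, e.g.:

bind "tcp://{{ ip_address }}:{{ bind_port }}"

upstream {{ app_name }} {
    server {{ ip_address }}:{{ bind_port }};
}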
