I am building a monitoring system. collectd generates metrics into files with the format below; my next step is to import that data into TDengine and use Grafana as the front-end dashboard to display the collected metrics. What is the best practice for converting this data format into a TDengine-compatible format and importing it into the database?
[root@nas01]# head cpu-load-2021-10-15
epoch,min,max,avg
1470731947.726,0.000000,0.002500,0.012500
1470731957.724,0.000000,0.002500,0.012500
1470731967.724,0.000000,0.002500,0.012500
1470731977.724,0.000000,0.002500,0.012500
1470731987.724,0.000000,0.002500,0.012500
You can use collectd to write data into TDengine via taosAdapter.
Use the "direct collection" way:
Modify the collectd configuration /etc/collectd/collectd.conf. taosAdapter listens on port 6045 for collectd direct-collection data writes by default.
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
Or use the "tsdb writer" way:
Modify the collectd configuration /etc/collectd/collectd.conf. taosAdapter listens on port 6047 for collectd write_tsdb data by default.
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "localhost"
Port "6047"
HostTags "status=production"
StoreRates false
AlwaysAppendDS false
</Node>
</Plugin>
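The taosAdapter routes above cover metrics collected from now on. For the existing CSV files, one sketch (not an official import path) is to turn each row into a TDengine INSERT statement; the table name `cpu_load` and its schema are assumptions here, e.g. created beforehand with `CREATE TABLE cpu_load (ts TIMESTAMP, vmin DOUBLE, vmax DOUBLE, vavg DOUBLE)`:

```shell
# Sample lines in the collectd CSV format from the question.
cat > cpu-load-2021-10-15 <<'EOF'
epoch,min,max,avg
1470731947.726,0.000000,0.002500,0.012500
1470731957.724,0.000000,0.002500,0.012500
EOF

# Skip the header and emit one INSERT per row. The epoch seconds are
# converted to the millisecond timestamps TDengine expects; int(x + 0.5)
# rounds rather than truncates the float product.
tail -n +2 cpu-load-2021-10-15 | awk -F, \
  '{ printf "INSERT INTO cpu_load VALUES (%d, %s, %s, %s);\n", int($1 * 1000 + 0.5), $2, $3, $4 }' \
  > cpu-load.sql

head -n 1 cpu-load.sql
```

The resulting cpu-load.sql can then be replayed against the database, for example with the taos shell (check `taos --help` for the exact option in your version, e.g. `taos -f cpu-load.sql`).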
I have a Tomcat WAR project running in AWS Elastic Beanstalk EC2 instances. I have configured the instances to ensure that they have an environment variable CLUSTER_NAME. I can verify that the variable is available in the EC2 instance.
[ec2-user@ip-10* ~]$ cat /etc/environment
export CLUSTER_NAME=sandbox
[ec2-user@ip-10* ~]$ echo $CLUSTER_NAME
sandbox
This variable is looked up in a Log4j2 XML file like this:
<properties>
<property name="env-name">${env:CLUSTER_NAME}</property>
</properties>
The env-name property is used in a Coralogix appender like this:
<Coralogix name="Coralogix" companyId="--" privateKey="--"
applicationName="--" subSystemName="${env-name}">
<PatternLayout>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}{GMT+0}\t%p\t%c\t%m%n</pattern>
</PatternLayout>
</Coralogix>
This lookup is not working: env-name is shown literally as ${env:CLUSTER_NAME} in the Coralogix dashboard. The value works if I hardcode it.
What can be done to fix this lookup? There are several related questions, but they seem to refer to Log4j 1.x (e.g. https://stackoverflow.com/a/22296362); I have verified that this project uses Log4j 2.
The solution was to add the CLUSTER_NAME variable in /etc/profile.d/env.sh. The variable is then available in log4j2.xml with the following lookup:
<property name="env-name">
${env:CORALOGIX_CLUSTER_NAME}
</property>
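A minimal /etc/profile.d/env.sh is just an export line (the name and value below are illustrative). Part of the difference: files under /etc/profile.d are sourced by login shells at startup, whereas /etc/environment is read by PAM (pam_env) as plain KEY=VALUE pairs without shell processing.

```shell
# /etc/profile.d/env.sh -- sourced by login shells at startup.
# Variable name mirrors the lookup used in log4j2.xml.
export CLUSTER_NAME=sandbox
```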
I am still not clear on the difference between adding a variable to /etc/environment versus /etc/profile.d/env.sh.
I am trying to make my docker-compose file write its logging to a Graylog server, using the GELF protocol. This works fine, using the following configuration (snippet of docker-compose.yml):
logging:
driver: gelf
options:
gelf-address: ${GELF_ADDRESS}
The Graylog server receives the messages I log in the JBoss instance in my Docker container. It also adds some extra GELF fields, like container_name and image_name.
My question is, how can I add extra GELF fields myself? I want it to pass _username as an extra field. I have this field available in my MDC context.
I could add the information to the message by using a formatter (Conversion Pattern) in my CONSOLE logger, by adding the following to this logger:
%X{_user_name}
But this is not what I want, as it would end up inside the GELF message field rather than being added as a separate extra field.
Any thoughts?
It does seem impossible in the current docker-compose version (1.8.0) to include the extra fields.
I ended up removing the logging configuration from the docker-compose file and instead integrating GELF logging into the Docker container's application. Since I am using JBoss AS 7, I followed the steps described here: http://logging.paluch.biz/examples/jbossas7.html
To log the container id, I have added the following configuration:
<custom-handler name="GelfLogger" class="biz.paluch.logging.gelf.jboss7.JBoss7GelfLogHandler" module="biz.paluch.logging">
  <level name="INFO" />
  <properties>
    <property name="host" value="udp:${GRAYLOG_HOST}" />
    <property name="port" value="${GRAYLOG_PORT}" />
    <property name="version" value="1.1" />
    <property name="additionalFields" value="dockerContainer=${HOSTNAME}" />
    <property name="includeFullMdc" value="true" />
  </properties>
</custom-handler>
Field dockerContainer is substituted by the HOSTNAME environment variable on the docker container and contains the containerId. The other placeholders are substituted by docker-compose environment variables.
By including the full MDC, I was able to put the username (and some other fields) as an additional GELF field. (For more information about MDC, see http://logback.qos.ch/manual/mdc.html)
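With includeFullMdc enabled, each MDC entry surfaces as an additional GELF field; per the GELF spec, additional fields are underscore-prefixed, so a message on the wire looks roughly like this (all values illustrative):

```json
{
  "version": "1.1",
  "host": "web01",
  "short_message": "user logged in",
  "_dockerContainer": "3f4e2a1b9c0d",
  "_username": "jdoe"
}
```

This is exactly the distinction from the question: _username arrives as its own field in Graylog instead of being baked into the message text by a conversion pattern.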
I need to get some metrics from WildFly/Undertow, specifically open/max HTTP connections and used threads, and correlate them with the open database connection count, which I am able to read using jboss-cli:
/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(recursive=true,include-runtime=true)
Is there a way to obtain the HTTP connection statistics in Wildfly 8.2?
In WildFly you configure the HTTP connector thread pool by specifying a worker, which is configured via the IO subsystem.
IO Subsystem config example:
<subsystem xmlns="urn:jboss:domain:io:1.1">
<worker name="my-worker" io-threads="24" task-max-threads="30" stack-size="20"/>
<worker name="default" />
<buffer-pool name="default"/>
</subsystem>
The worker then gets added to the http-listener (or ajp-listener) using the worker attribute:
<http-listener name="default" worker="my-worker" socket-binding="http"/>
The IO subsystem uses the XNIO API, which exposes the statistics in the MBean org.xnio/Xnio/nio/my-worker. You can have a look at them with a JMX client or jvisualvm.
But I have no idea how you can read them via jboss-cli.
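For the datasource side of the correlation, the jboss-cli command from the question can at least be scripted for periodic collection; a sketch (assuming $JBOSS_HOME points at a running WildFly installation, and guarded so it is a no-op elsewhere):

```shell
# CLI path to the pool statistics of ExampleDS, as given in the question.
DS_STATS='/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true)'

# Run jboss-cli non-interactively; cron or a collectd exec plugin could
# invoke this periodically and parse the output.
if [ -x "$JBOSS_HOME/bin/jboss-cli.sh" ]; then
  "$JBOSS_HOME/bin/jboss-cli.sh" --connect --command="$DS_STATS"
fi
```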
We want to publish Docker container metrics using collectd; our Puppet script is based on https://github.com/cloudwatt/docker-collectd-plugin.
Here is our Puppet snippet:
collectd::plugin { 'collectd-docker-plugin' :
plugin => 'docker',
content => template('test-iops/dockerplugin.erb'),
}
And here is dockerplugin.erb
LoadPlugin python
<Plugin python>
ModulePath "/usr/sbin/collectd"
Import "dockerplugin"
<Module dockerplugin>
BaseURL "unix://var/run/docker.sock"
</Module>
</Plugin>
The collectd log message is:
plugin_load: Could not find plugin "docker" from /usr/lib64/collectd
I think the problem is that there is no docker plugin per se in collectd: the docker-collectd-plugin is a Python-based plugin.
Try with:
collectd::plugin { 'collectd-docker-plugin':
plugin => 'python',
content => template('test-iops/dockerplugin.erb'),
}
By doing this, you also don't need to put LoadPlugin python in your .erb file; I believe that Puppet snippet will do that for you already (although it doesn't hurt if it's there twice).
May I also suggest using the plugin version from https://github.com/lebauce/docker-collectd-plugin, which seems to be the "true" upstream repository? I just happened to have contributed a whole bunch of fixes and improvements to it!
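Independent of Puppet, the rendered configuration can be sanity-checked before restarting the daemon: collectd's -t flag parses the config and exits non-zero on errors like the plugin_load failure above. A sketch (the paths are the ones from the question; the temp file stands in for Puppet's rendered output):

```shell
# Write the configuration the .erb should render.
cat > /tmp/docker-plugin.conf <<'EOF'
LoadPlugin python
<Plugin python>
  ModulePath "/usr/sbin/collectd"
  Import "dockerplugin"
  <Module dockerplugin>
    BaseURL "unix://var/run/docker.sock"
  </Module>
</Plugin>
EOF

# -t tests the configuration without starting the daemon; guarded so the
# snippet is harmless on hosts without collectd installed.
command -v collectd >/dev/null 2>&1 && collectd -t -C /tmp/docker-plugin.conf || true
```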
I have an Ubuntu server with Elasticsearch, MongoDB, and Graylog2 running in Azure, and an ASP.NET MVC 4 application I am trying to send logs from (using Gelf4Net / Log4Net as the logging component). To cut to the chase, nothing is being logged.
(skip to the update to see what is wrong)
The setup
1 Xsmall Ubuntu VM running the needed software for graylog2
everything is running as a daemon
1 Xsmall cloud service with the MVC4 app (2 instances)
A virtual network setup so they can talk.
So what have I tried?
From the Linux box, the following command will cause a message to be logged:
echo "<86>Dec 24 17:05:01 foo-bar CRON[10049]: pam_unix(cron:session):" | nc -w 1 -u 127.0.0.1 514
I can change the IP address to use the public IP and it works fine as well.
Using this PowerShell script I can log the same message from my dev machine as well as from the production web server.
With the Windows firewall turned off, it still doesn't work.
I can log to a Log4Net FileAppender, so I know Log4Net is working.
Tailing graylog2.log shows nothing of interest, just a few warnings about my plugin directory.
So I know everything is working, but I can't get the Gelf4Net appender to work. I'm at a loss here. Where can I look? Is there something I am missing?
GRAYLOG2.CONF
#only showing the connection stuff here. If you need something else let me know
syslog_listen_port = 514
syslog_listen_address = 0.0.0.0
syslog_enable_udp = true
syslog_enable_tcp = false
web.config/Log4Net
//application_start() has log4net.Config.XmlConfigurator.Configure();
<log4net>
<root>
<level value="ALL" />
<appender-ref ref="GelfUdpAppender" />
</root>
<appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
<remoteAddress value="public.ip.of.server"/>
<remotePort value="514" />
<layout type="Gelf4net.Layout.GelfLayout, Gelf4net">
<param name="Facility" value="RandomPhrases" />
</layout>
</appender>
</log4net>
update
For some reason it didn't occur to me to run Graylog in debug mode :) Doing so shows these messages:
2013-04-09 03:00:56,202 INFO : org.graylog2.inputs.syslog.SyslogProcessor - Date could not be parsed. Was set to NOW because allow_override_syslog_date is true.
2013-04-09 03:00:56,202 DEBUG: org.graylog2.inputs.syslog.SyslogProcessor - Skipping incomplete message.
So it is sending an incomplete message. How can I see what is wrong with it?
I was using the wrong port (DOH!)
I should have been using the port specified in graylog2.conf: gelf_listen_port = 12201.
So my web.config/log4net GELF appender should have had:
<appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
...
<remotePort value="12201" />
...
</appender>
For anyone who may have the same problem, make sure Log4Net reloads the configuration after you change it. I don't have it set to watch the config file for changes, so it took me a few minutes to realize what was going on: when I changed the port from 514 to 12201 the first time, messages still weren't getting through. I had to restart the server for Log4Net to pick up the new config, and then it started to work.
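A quick smoke test for this class of problem is to send a minimal GELF message by hand and watch for it in the Graylog dashboard; sent to the syslog port 514 instead, the same payload would just be skipped as an incomplete message, as in the debug log above. A sketch (host field illustrative, address as in the question's local test):

```shell
# Minimal GELF 1.1 payload: version, host, and short_message are the
# required fields; the GELF UDP input listens on gelf_listen_port (12201).
PAYLOAD='{"version":"1.1","host":"smoke-test","short_message":"gelf port check"}'

# Guarded so the snippet is harmless where nc is unavailable.
command -v nc >/dev/null 2>&1 && echo -n "$PAYLOAD" | nc -w 1 -u 127.0.0.1 12201 || true
```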