I'm using Log4j 2 to send log messages to a remote syslog server.
The appender configuration is:
<Syslog name="CLSYSLOG" host="xxx.xxx.xxx.xxx" port="514" protocol="TCP" facility="LOCAL4" format="RFC5424" appName="CEP" id="ES" includeMDC="false" enterpriseNumber="18060" newLine="true" messageId="Audit" mdcId="mdc" />
The message makes it to the remote server, but a garbage string, "fe80: 0:0:0:801:24ff:fe62:8910%2", is added after the application name in all the messages.
Any idea how I can get rid of that string?
It turned out to be the IPv6 address of the source. Configuring syslog replaced it with the regular IP address.
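If you would rather fix it on the client side, one common approach (an assumption on my part, not necessarily what was done above) is to make the JVM prefer IPv4 so the source address resolves to the regular IPv4 one:
# hypothetical launch line; only the preferIPv4Stack flag matters here
java -Djava.net.preferIPv4Stack=true -jar your-app.jar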
I'm trying to get logs from some Cisco Meraki MX firewalls pointed to our Kubernetes cluster using fluentd pods. I'm using the syslog source plugin and am able to get the logs generated, but I keep getting this error:
2022-06-30 16:30:39 -0700 [error]: #0 invalid input data="<134>1 1656631840.701989724 838071_MT_DFRT urls src=10.202.11.05:39802 dst=138.128.172.11:443 mac=90:YE:F6:23:EB:T0 request: UNKNOWN https://f3wlpabvmdfgjhufgm1xfd6l2rdxr.b3-4-eu-w01.u5ftrg.com/..." error_class=Fluent::TimeParser::TimeParseError error="invalid time format: value = 1 1656631840.701989724 838071_ME_98766, error_class = ArgumentError, error = string doesn't match"
Everything seems to be fine, but it seems as though the Meraki is sending its logs in epoch time, and the fluentd syslog plugin is not liking it.
I have a vanilla config:
<source>
  @type syslog
  port 5140
  tag meraki
</source>
Is there a way to transform the time strings into something fluentd will accept? Or what am I missing here?
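One possible direction (a sketch under assumptions, not a verified fix) would be to skip the syslog parser entirely: receive the raw UDP stream with in_udp and parse the Meraki line with a custom regexp that treats the timestamp as a floating-point epoch value. The port and tag match the config above; the expression and field names are guesses based on the message shown in the error:
<source>
  @type udp
  port 5140
  tag meraki
  <parse>
    # parse "<pri>version epoch hostname message" and read the epoch as a float
    @type regexp
    expression /^<(?<pri>\d+)>(?<version>\d+) (?<time>\d+\.\d+) (?<hostname>\S+) (?<message>.*)$/
    time_key time
    time_type float
  </parse>
</source>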
I'm able to connect to SQL Server successfully using SQL Server authentication; however, it does not work with Windows authentication. Is this a bug, or am I missing something in the configuration?
<source>
  @type sql
  host HOSTNAME
  database db_name
  adapter sqlserver
  username WindowsUser
  password WindowsPwd
  <table>
    table tbl_name
    update_column insert_timestamp
  </table>
</source>
<match **>
  @type stdout
</match>
I get the error below:
[warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2021-09-01 22:12:40 238620126384680326147/703687441776640000000 +0530 chunk="5caf1c0f1dfbb6d0ca989ce4ffd28fa3" error_class=TinyTds::Error error="Adaptive Server connection failed (localhost)
The issue is resolved: make sure to add the schema name along with the table name.
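For example, with the default dbo schema the table section would look like this (dbo.tbl_name is just a placeholder for the schema-qualified name, assuming the table lives under dbo):
<table>
  # schema-qualified table name instead of the bare table name
  table dbo.tbl_name
  update_column insert_timestamp
</table>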
I have a Java application whose JMX attributes I want to monitor using the Telegraf tool.
The tool provides a Jolokia plugin to monitor JMX attributes. I have added the following dependencies to my app's pom.xml file, per the Maven section of the Jolokia documentation:
<dependency>
  <groupId>org.jolokia</groupId>
  <artifactId>jolokia-core</artifactId>
  <version>1.3.7</version>
</dependency>
<dependency>
  <groupId>org.jolokia</groupId>
  <artifactId>jolokia-client-java</artifactId>
  <version>1.3.7</version>
</dependency>
This is my /etc/telegraf/telegraf.conf file:
[[inputs.jolokia]]
  context = "/jolokia/"

  [[inputs.jolokia.servers]]
    name = "wr-core"
    host = "192.168.100.175"
    port = "1998"

  [[inputs.jolokia.metrics]]
    name = "send_success"
    mbean = "wr-core:type=monitor,name=execution"
    attribute = "MessageSendSuccessCount"
The application is up on the provided IP/port (I can connect to it with JConsole). The application has a monitoring section whose object name (as shown in JConsole) is wr-core:type=monitor,name=execution, and it has the attribute MessageSendSuccessCount. But when I start the Telegraf service, the following error occurs:
Jan 14 14:30:32 ZiZi telegraf[17258]: 2018-01-14T11:00:32Z E! Error in plugin [inputs.jolokia]: error performing request: Error decoding JSON response: invalid character '\x00' looking for beginning of value:
Note that 1998 is my app's JMX port. I also tried using 8778, which is the Jolokia agent port; I got:
Jan 14 14:40:03 ZiZi telegraf[9150]: 2018-01-14T11:10:03Z E! Error in plugin [inputs.jolokia]: error performing request: Post http://192.168.100.175:8778/jolokia/: dial tcp 192.168.100.175:8778: getsockopt: connection refused
EDIT 1:
I have checked my CLASSPATH, and both jolokia-client and jolokia-core are listed: ../lib/jolokia-client-java-1.3.7.jar:../lib/jolokia-core-1.3.7.jar.
EDIT 2:
I have put the following lines into my app's launch script:
JOLOKIA_OPTS=-javaagent:$LIB_PATH/jolokia-core-java-1.3.7.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
But when I run the script, I get this error (even though ../lib/jolokia-core-java-1.3.7.jar is listed in the CLASSPATH):
Error opening zip file or JAR manifest missing : ../lib/jolokia-core-java-1.3.7.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Found the solution.
I skipped the Maven approach and tried the javaagent approach, but I had misunderstood the usage of javaagent previously; I should point it at the Jolokia JVM agent (this helped):
JOLOKIA_OPTS=-javaagent:/root/jolokia-jvm-1.3.7-agent.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
Now my app starts successfully, with this log:
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://192.168.100.175:8778/jolokia/
On the other side, there is no longer any Jolokia error in the Telegraf console.
All the observations imply that the Jolokia JVM agent has started and is working successfully.
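A quick way to double-check the agent from the Telegraf host (assuming the same URL as in the startup log) is to hit Jolokia's REST endpoints directly:
# agent and protocol version info
curl http://192.168.100.175:8778/jolokia/version
# read the attribute that Telegraf is collecting
curl http://192.168.100.175:8778/jolokia/read/wr-core:type=monitor,name=execution/MessageSendSuccessCount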
I have also found the Jolokia JMX documentation for using it as a dependency in the project; but since I'm not a Java expert (I'm testing the app), I prefer to use the javaagent approach for now and leave that for future study. It may help others, though.
EDIT 1:
I have found and deployed the Jolokia JVM agent using its Spring support.
By configuring it in the Spring XML file, I can now have the Jolokia JVM agent start listening at my app's startup.
I need to get some metrics from WildFly/Undertow, specifically open/max HTTP connections and used threads, and correlate them with the open database connection count, which I am able to read using jboss-cli:
/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(recursive=true,include-runtime=true)
Is there a way to obtain the HTTP connection statistics in Wildfly 8.2?
In WildFly you configure the HTTP connector thread pool by specifying a worker configured via the IO subsystem:
IO Subsystem config example:
<subsystem xmlns="urn:jboss:domain:io:1.1">
  <worker name="my-worker" io-threads="24" task-max-threads="30" stack-size="20"/>
  <worker name="default"/>
  <buffer-pool name="default"/>
</subsystem>
The worker then gets added to the http-listener (or ajp-listener) using the worker attribute:
<http-listener name="default" worker="my-worker" socket-binding="http"/>
The IO subsystem uses the XNIO API, which exposes the statistics in the MBean org.xnio/Xnio/nio/my-worker. You can have a look at them with a JMX client or JVisualVM.
But I have no idea how you can read them via jboss-cli.
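For what it's worth, later WildFly versions expose the worker statistics through the IO subsystem itself, so a CLI read along these lines may work there (this is an assumption about newer releases, not something I know to be available in 8.2):
/subsystem=io/worker=my-worker:read-resource(include-runtime=true,recursive=true)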
I have an Ubuntu server with Elasticsearch, MongoDB, and Graylog2 running in Azure, and I have an ASP.NET MVC 4 application I am trying to send logs from. (I am using Gelf4Net / Log4Net as the logging component.) To cut to the chase, nothing is being logged.
(skip to the update to see what is wrong)
The setup
1 Xsmall Ubuntu VM running the needed software for graylog2
everything is running as a daemon
1 Xsmall cloud service with the MVC4 app (2 instances)
A virtual network set up so they can talk.
So what have I tried?
From the Linux box, the following command will cause a message to be logged:
echo "<86>Dec 24 17:05:01 foo-bar CRON[10049]: pam_unix(cron:session):" | nc -w 1 -u 127.0.0.1 514
I can change the IP address to use the public IP and it works fine as well.
Using this PowerShell script, I can log the same message from my dev machine as well as the production web server.
With the Windows firewall turned off, it still doesn't work.
I can log to a Log4Net FileAppender, so I know Log4Net is working.
Tailing graylog2.log shows nothing of interest, just a few warnings about my plugin directory.
So I know everything is working, but I can't get the Gelf4Net appender to work. I'm at a loss here. Where can I look? Is there something I am missing?
GRAYLOG2.CONF
#only showing the connection stuff here. If you need something else let me know
syslog_listen_port = 514
syslog_listen_address = 0.0.0.0
syslog_enable_udp = true
syslog_enable_tcp = false
web.config/Log4Net
//application_start() has log4net.Config.XmlConfigurator.Configure();
<log4net>
  <root>
    <level value="ALL" />
    <appender-ref ref="GelfUdpAppender" />
  </root>
  <appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
    <remoteAddress value="public.ip.of.server" />
    <remotePort value="514" />
    <layout type="Gelf4net.Layout.GelfLayout, Gelf4net">
      <param name="Facility" value="RandomPhrases" />
    </layout>
  </appender>
</log4net>
update
For some reason it didn't occur to me to run Graylog in debug mode :) Doing so shows these messages:
2013-04-09 03:00:56,202 INFO : org.graylog2.inputs.syslog.SyslogProcessor - Date could not be parsed. Was set to NOW because allow_override_syslog_date is true.
2013-04-09 03:00:56,202 DEBUG: org.graylog2.inputs.syslog.SyslogProcessor - Skipping incomplete message.
So it is sending an incomplete message. How can I see what is wrong with it?
I was using the wrong port (DOH!).
I should have been using the port specified in graylog2.conf: gelf_listen_port = 12201.
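On the Graylog side, the GELF input settings in graylog2.conf look roughly like this (the port is the one quoted above; the listen address is an assumption mirroring the syslog block shown earlier):
gelf_listen_address = 0.0.0.0
gelf_listen_port = 12201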
So my web.config/Log4Net GELF appender should have had:
<appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
  ...
  <remotePort value="12201" />
  ...
</appender>
For anyone who may have the same problem, make sure Log4Net reloads the configuration after you change it. I don't have it set to watch the config file for changes, so it took me a few minutes to realize what was going on: when I changed the port from 514 to 12201 the first time, messages still weren't getting through. I had to restart the server for Log4Net to pick up the new config, and then it started to work.
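As a side note, a quick way to test the GELF input independently of Log4Net (assuming GELF UDP on 12201, same placeholder address as above) is to send a minimal GELF JSON message with nc, much like the syslog test earlier:
echo '{"version": "1.1", "host": "test-host", "short_message": "hello from nc"}' | nc -w 1 -u public.ip.of.server 12201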