Fluentd - Not able to connect to SQL Server using Windows authentication - ruby-on-rails

I'm able to connect to SQL Server successfully using SQL Server authentication; however, it does not work with Windows authentication. Is this a bug, or am I missing something in the configuration?
<source>
  @type sql
  host HOSTNAME
  database db_name
  adapter sqlserver
  username WindowsUser
  password WindowsPwd
  <table>
    table tbl_name
    update_column insert_timestamp
  </table>
</source>
<match **>
  @type stdout
</match>
I get the following error:
[warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2021-09-01 22:12:40 238620126384680326147/703687441776640000000 +0530 chunk="5caf1c0f1dfbb6d0ca989ce4ffd28fa3" error_class=TinyTds::Error error="Adaptive Server connection failed (localhost)

The issue is resolved: make sure to add the schema name along with the table name.
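For example (a minimal sketch; "dbo" is an assumed schema name, substitute the schema your table actually lives under), the <table> block would reference the schema-qualified table:

<table>
  # "dbo" is assumed here; use the schema the table was created under
  table dbo.tbl_name
  update_column insert_timestamp
</table>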

Related

fluentd TimeParser Error - Invalid Time Format

I'm trying to get some Cisco Meraki MX firewall logs pointed at our Kubernetes cluster using Fluentd pods. I'm using the syslog source plugin and am able to get the logs generated, but I keep getting this error:
2022-06-30 16:30:39 -0700 [error]: #0 invalid input data="<134>1 1656631840.701989724 838071_MT_DFRT urls src=10.202.11.05:39802 dst=138.128.172.11:443 mac=90:YE:F6:23:EB:T0 request: UNKNOWN https://f3wlpabvmdfgjhufgm1xfd6l2rdxr.b3-4-eu-w01.u5ftrg.com/..." error_class=Fluent::TimeParser::TimeParseError error="invalid time format: value = 1 1656631840.701989724 838071_ME_98766, error_class = ArgumentError, error = string doesn't match"
Everything seems to be fine, but it appears the Meraki is sending its logs with an epoch timestamp, and the Fluentd syslog plugin does not like it.
I have a vanilla config:
<source>
  @type syslog
  port 5140
  tag meraki
</source>
Is there a way to transform the time strings into something Fluentd will accept, or what am I missing here?
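One possible workaround (a sketch only, not verified against real Meraki output): receive the raw stream with the udp input instead of the syslog input and parse the epoch timestamp yourself. The regular expression below is an assumption based on the sample line above; time_type float tells the parser to treat the extracted value as fractional epoch seconds.

<source>
  @type udp                # bypass the strict syslog header parser
  port 5140
  tag meraki
  <parse>
    @type regexp
    # Assumed layout: <pri>version, epoch timestamp, device name, rest of message
    expression /^<(?<pri>\d+)>\d+ (?<time>\d+\.\d+) (?<device>\S+) (?<message>.*)$/
    time_key time
    time_type float        # Meraki sends fractional epoch seconds
  </parse>
</source>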

Fluentd configuration for Oracle CLOB datatype

I'm using a Fluentd configuration to read data from a text file and push it to an Oracle database. Some columns are of the CLOB and NCLOB datatypes; when Fluentd pushes data, those columns are always null and I don't see any errors. I'm not sure how to resolve this issue in Fluentd; below is the configuration I have.
I'm using the Oracle enhanced adapter and the SQL plugin:
https://github.com/rsim/oracle-enhanced
https://github.com/fluent/fluent-plugin-sql/issues
#
# Fluentd configuration file
#

# Config input
<source>
  @type forward
  port 24224
</source>

# Config output
<match cpu_*>
  @type stdout
</match>

<match foo_*>
  @type stdout
</match>

<match memory_*>
  @type sql
  host {DATABASE_HOSTNAME}
  port 1521
  database {DATABASE_NAME}
  adapter oracle_enhanced
  username {DATABASE_USERNAME}
  password {DATABASE_PASSWORD}
  <table>
    table fluentd_log
    column_mapping 'timestamp:created_at,Mem.text:mem_text,Mem.used:mem_used'
    # This is the default table because it has no "pattern" argument in <table>;
    # if all non-default <table> blocks fail to match, the default one is chosen.
    # The default table is required.
  </table>
</match>
CREATE TABLE FLUENTD_LOG
(
  ID         NUMBER(8),
  CREATED_AT VARCHAR2(50 BYTE),
  MEM_TEXT   CLOB,
  MEM_USED   VARCHAR2(50 BYTE)
)
ID  CREATED_AT  MEM_TEXT  MEM_USED
1   29-08-99    null      test
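One thing worth checking (an assumption on my part, not a confirmed fix): fluent-plugin-sql's column_mapping looks up record keys literally, so 'Mem.text' only matches a top-level key named exactly Mem.text. If the forward input delivers Mem as a nested object, the lookup silently finds nothing and the CLOB column stays NULL, which matches the symptom. A sketch that flattens the record first with record_transformer:

<filter memory_*>
  @type record_transformer
  enable_ruby true
  <record>
    # Copy the nested values to literal "Mem.text"/"Mem.used" keys
    # so column_mapping can find them (assumes nested input JSON)
    Mem.text ${record.dig("Mem", "text")}
    Mem.used ${record.dig("Mem", "used")}
  </record>
</filter>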

Fluentd sql output plugin configuration for auto-incremented column

I have a Fluentd configuration that pulls data from a file and pushes it to SQL Server. However, the table has a primary key on an auto-incremented column. If I don't mention that column in my Fluentd configuration, it throws an error saying the field is missing; if I do include the column, it gives an identity error. In the configuration below, "Id" is the primary-key, auto-incremented column. Also, let me know whether "sqlserver" is the right adapter to use.
<filter record.**>
  @type record_transformer
  enable_ruby true
  <record>
    Id ${id}
  </record>
  <record>
    timestamp ${time}
  </record>
</filter>

<filter record.**>
  @type stdout
</filter>
<match record.**>
  @type sql
  host myhost
  username myuser
  password mypassword
  database mydb
  adapter sqlserver
  <table>
    table simple_table
    column_mapping 'Id:Id,timestamp:timestamp'
  </table>
  flush_interval 1s
  # disable_retry_limit
  # num_threads 8
  # slow_flush_log_threshold 40.0
</match>
Well, I figured this out: it's mandatory to send the column name in column_mapping even if it is the primary key and auto-incremented. If you log in with some other SQL credential it gives you an error; however, if you log in with the same account that was used at the time of table creation, it works.
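A plausible explanation for the credential behavior (my assumption; the plugin docs don't spell this out): ActiveRecord's SQL Server adapter has to wrap explicit identity-column inserts in SET IDENTITY_INSERT, and enabling that setting requires ownership of, or ALTER permission on, the table, which the creating account has by default. Roughly:

-- Hypothetical illustration of what an explicit identity insert requires;
-- "dbo" is an assumed schema name. Needs ownership or ALTER permission.
SET IDENTITY_INSERT dbo.simple_table ON;
INSERT INTO dbo.simple_table (Id, [timestamp]) VALUES (1, '2021-09-01 22:12:40');
SET IDENTITY_INSERT dbo.simple_table OFF;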

Fluentd: hide or encrypt password in configuration

For security reasons, we can't keep SQL credentials in plain text. Is there a way to hide or encrypt passwords?
<source>
  @type sql
  @id output_sql
  host "sqlserverhost.aws_region.rds.amazonaws.com"
  database db_name
  adapter sqlserver
  username user
  password pwd
  tag_prefix myrdb      # optional, but recommended
  select_interval 60s   # optional
  select_limit 500      # optional
  state_file /var/run/fluentd/sql_state
  <table>
    table tbl_name
    update_column insert_timestamp
  </table>
</source>

<match **>
  @type stdout
</match>
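One common approach (a minimal sketch, assuming Fluentd's v1 configuration syntax; SQL_USERNAME and SQL_PASSWORD are hypothetical variable names): double-quoted config values are evaluated as embedded Ruby when the file is loaded, so secrets can be pulled from the environment instead of being written into the file.

<source>
  @type sql
  @id output_sql
  host "sqlserverhost.aws_region.rds.amazonaws.com"
  database db_name
  adapter sqlserver
  # Export these in the environment that starts fluentd,
  # e.g. via systemd, Docker secrets, or a wrapper script
  username "#{ENV['SQL_USERNAME']}"
  password "#{ENV['SQL_PASSWORD']}"
  state_file /var/run/fluentd/sql_state
  <table>
    table tbl_name
    update_column insert_timestamp
  </table>
</source>

This keeps the secret out of the config file itself; protecting whatever supplies the environment variables is still up to the deployment.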

Log4j2 Syslog Appender adds garbage field

I'm using log4j2 to send the log messages to a remote syslog server.
The appender configuration is:
<Syslog name="CLSYSLOG" host="xxx.xxx.xxx.xxx" port="514" protocol="TCP" facility="LOCAL4" format="RFC5424" appName="CEP" id="ES" includeMDC="false" enterpriseNumber="18060" newLine="true" messageId="Audit" mdcId="mdc" />
The message makes it to the remote server, but a garbage string, "fe80:0:0:0:801:24ff:fe62:8910%2", is added after the application name in every message.
Any idea how I can get rid of that string?
It turned out to be the IPv6 link-local address of the source host. Configuring syslog replaced it with the regular IP address.
