I have a problem with adding a timestamp to the Flume event header. Here is a snippet from my conf file:
agent.sources.avrosource.interceptors.addTimestamp.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
When I debug with Maven, I see that the timestamp is not added to the header. Here is the debug output:
2013-12-05 10:56:34,963 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - com.btoddb.flume.sinks.cassandra.CassandraSink.process(CassandraSink.java:135)] event: [Event headers = {key=value}, body.length = 12 ]
FYI, I also tried adding the timestamp like this, but again it does not work:
agent.sources.avrosource.interceptors = addTime
agent.sources.avrosource.interceptors.addTime.type = timestamp
Any help would be appreciated. Thanks.
If you have useLocalTimeStamp set on the sink, it has to work:
agent.sinks.sinkdest.hdfs.useLocalTimeStamp = true
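For context, useLocalTimeStamp is usually combined with a time-escaped HDFS path; a minimal sketch (the sink name sinkdest is taken from the line above, the path itself is an assumption):
agent.sinks.sinkdest.type = hdfs
agent.sinks.sinkdest.hdfs.path = /flume/events/%Y-%m-%d
agent.sinks.sinkdest.hdfs.useLocalTimeStamp = true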
I'm trying to get data from my Kafka topic into InfluxDB using the Confluent/Kafka stack. At the moment, the messages in the topic have the form {"tag1":"123","tag2":"456"} (I have relatively good control over the message format; I chose the JSON above, but could include a timestamp etc. if necessary).
Ideally, I would like to add many tags without needing to specify a schema or column names in the future.
I followed https://docs.confluent.io/kafka-connect-influxdb/current/influx-db-sink-connector/index.html (the "Schemaless JSON tags example") as this matches my use case quite closely. The "key" of each message is currently just the MQTT topic name (the topic's source is an MQTT connector), so I set "key.converter" to StringConverter (instead of JsonConverter as in the example).
Other examples I've seen online seem to suggest the need for a schema to be set, which I'd like to avoid. Using InfluxDB v1.8, everything on Docker/maintained on Portainer.
I cannot seem to start the connector and never get any data to move across.
Below is the config for my InfluxDBSink Connector:
{
  "name": "InfluxDBSinkKafka",
  "config": {
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "name": "InfluxDBSinkKafka",
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "topics": "KAFKATOPIC1",
    "influxdb.url": "http://URL:PORT",
    "influxdb.db": "tagdata",
    "measurement.name.format": "${topic}"
  }
}
The connector fails, and each time I click "start" (the play button) the following pops up in the connect container's logs:
[2022-03-22 15:46:52,562] INFO [Worker clientId=connect-1, groupId=compose-connect-group]
Connector InfluxDBSinkKafka target state change (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2022-03-22 15:46:52,562] INFO Setting connector InfluxDBSinkKafka state to STARTED (org.apache.kafka.connect.runtime.Worker)
[2022-03-22 15:46:52,562] INFO SinkConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.influxdb.InfluxDBSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class org.apache.kafka.connect.storage.StringConverter
name = InfluxDBSinkKafka
predicates = []
tasks.max = 1
topics = [KAFKATOPIC1]
topics.regex =
transforms = []
value.converter = class org.apache.kafka.connect.json.JsonConverter
(org.apache.kafka.connect.runtime.SinkConnectorConfig)
[2022-03-22 15:46:52,563] INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.influxdb.InfluxDBSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class org.apache.kafka.connect.storage.StringConverter
name = InfluxDBSinkKafka
predicates = []
tasks.max = 1
topics = [KAFKATOPIC1]
topics.regex =
transforms = []
value.converter = class org.apache.kafka.connect.json.JsonConverter
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig)
I am feeling a little out of my depth and would appreciate any and all help.
The trick here is getting the data to Kafka in the right format in the first place. My MQTT source connector needed to have the value converter set to ByteArray with a schema URL and schema = true. The InfluxDB sink then started working when I used the JsonConverter with schema = false. This is deceptive because the message queue looks the same with different value converters on the MQTT source connector, so it took a while to figure out that this was the problem.
After getting this working, and realising the Confluent stack was perhaps a little overkill for this task, I went with the (much) easier route of pushing MQTT directly to Telegraf and having Telegraf push into InfluxDB. I would recommend this.
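For anyone taking the same route, here is a minimal Telegraf sketch (the broker address, MQTT topic filter, and hostnames are assumptions; the database name matches the sink config above, and the JSON parser options, e.g. which keys become tags versus fields, would still need tuning):
# telegraf.conf (minimal sketch, assumed hostnames and topics)
[[inputs.mqtt_consumer]]
  servers = ["tcp://mqtt-broker:1883"]   # assumed MQTT broker address
  topics = ["sensors/#"]                 # assumed MQTT topic filter
  data_format = "json"                   # parse the JSON payloads from the topic

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]        # assumed InfluxDB v1.8 endpoint
  database = "tagdata"                   # same database name as in the sink config above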
I have what should be a simple question but I can't figure it out. What is the correct syntax to use multiple appenders of the same type (RollingFile) with a single logger in Log4j2 properties file format?
For background, I am using Karaf 4.2.7 which uses pax logging. My logging config file is in the properties format.
log4j2.appender.fileapp1.type = RollingRandomAccessFile
log4j2.appender.fileapp1.name = FileApp1
...
log4j2.appender.fileapp2.type = RollingRandomAccessFile
log4j2.appender.fileapp2.name = FileApp2
...
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.appenderRef.RollingFile.ref = FileApp1, FileApp2
Putting both appenders on that last line separated by a comma does not work. It works if I have only one appender or the other. I also tried
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [FileApp1, FileApp2]
log4j2.logger.myloggername.appenderRef.RollingFile.ref = {FileApp1, FileApp2}
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [{FileApp1}, {FileApp2}]
None of those works. I can't seem to find any examples online of how to do this.
I referred to two web pages (thanks):
log4j 2 log4j2.properties(Configuration option)
Log4J 2 Configuration: Using the Properties File
The key is to add and define the plural "...s" properties (appenders, appenderRefs, loggers); these declare the names of what will be defined next.
name=PropertiesConfig
property.filename_fileapp1 = ./logs/fileapp1.log
property.filename_fileapp2 = ./logs/fileapp2.log
appenders = console, fileapp1, fileapp2
appender.console.type = Console
appender.console.name = STDOUT
...
appender.fileapp1.type = RollingRandomAccessFile
appender.fileapp1.name = fileapp1_AppenderName
appender.fileapp1.fileName = ${filename_fileapp1}
appender.fileapp1.filePattern = ${filename_fileapp1}.%d{yyyy-MM-dd}.log
...
appender.fileapp2.type = RollingRandomAccessFile
appender.fileapp2.name = fileapp2_AppenderName
appender.fileapp2.fileName = ${filename_fileapp2}
appender.fileapp2.filePattern = ${filename_fileapp2}.%d{yyyy-MM-dd}.log
...
loggers = mylogger1
logger.mylogger1.name = com.jornathan.sample.log4j2PropertyTest
logger.mylogger1.level = info
#keep this value for testing.
logger.mylogger1.additivity = true
#Here is what you need.
logger.mylogger1.appenderRefs = fileapp1Appender, fileapp2Appender
logger.mylogger1.appenderRef.fileapp1Appender.ref = fileapp1_AppenderName
logger.mylogger1.appenderRef.fileapp2Appender.ref = fileapp2_AppenderName
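Since the question is about Karaf/pax-logging, where essentially the same keys are prefixed with log4j2., the equivalent for the logger from the question would look roughly like this (a sketch reusing the appender names FileApp1 and FileApp2 from the question; the plural appenderRefs declaration is optional on recent Log4j2 versions):
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.level = INFO
log4j2.logger.myloggername.appenderRefs = fileapp1Ref, fileapp2Ref
log4j2.logger.myloggername.appenderRef.fileapp1Ref.ref = FileApp1
log4j2.logger.myloggername.appenderRef.fileapp2Ref.ref = FileApp2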
How can I create a separate log file for each bundle deployed in Karaf 4.2.3, using pax-logging with its Log4j2 native-style config?
I've tried the routing appender, but with no results.
I am expected to write each bundle's logs to a separate log file for easier debugging.
I don't know any way of doing this automatically, but what you could do is create a separate logger configuration for each module based on its root package name:
log4j2.logger.xy.name = com.company.module.xy
log4j2.logger.xy.level = INFO
log4j2.logger.xy.additivity = false
log4j2.logger.xy.appenderRef.inovel.ref = XyFile
log4j2.logger.zz.name = com.company.module.zz
log4j2.logger.zz.level = INFO
log4j2.logger.zz.additivity = false
log4j2.logger.zz.appenderRef.inovel.ref = ZzFile
log4j2.logger.keycloak.name = org.keycloak
log4j2.logger.keycloak.level = INFO
log4j2.logger.keycloak.additivity = false
log4j2.logger.keycloak.appenderRef.keycloak.ref = KeycloakFile
And a referenced appender could look like this:
# keycloak file appender
log4j2.appender.keycloak.type = RollingRandomAccessFile
log4j2.appender.keycloak.name = KeycloakFile
log4j2.appender.keycloak.fileName = ${karaf.data}/log/keycloak.log
log4j2.appender.keycloak.filePattern = ${karaf.data}/log/keycloak.log.%i
log4j2.appender.keycloak.append = true
log4j2.appender.keycloak.layout.type = PatternLayout
log4j2.appender.keycloak.layout.pattern = %d{ISO8601}
log4j2.appender.keycloak.policies.type = Policies
log4j2.appender.keycloak.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.keycloak.policies.size.size = 8MB
log4j2.appender.keycloak.strategy.type = DefaultRolloverStrategy
log4j2.appender.keycloak.strategy.max = 10
This is a lot of manual work, so maybe someone will come up with an automatic configuration.
Sincerely
Just have a look at the official Log4j 2.x configuration that comes with every Karaf distribution, in particular the commented-out "Routing" section.
E.g. I've used the following in one of my projects:
# Root logger
log4j2.rootLogger.level = INFO
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.level = WARN
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Enable log routing...
log4j2.rootLogger.appenderRef.Routing.ref = Routing
# Loggers configuration
...
# Configure the routing (pay close attention to the escapes)...
log4j2.appender.routing.type = Routing
log4j2.appender.routing.name = Routing
log4j2.appender.routing.routes.type = Routes
log4j2.appender.routing.routes.pattern = \$\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.type = Route
log4j2.appender.routing.routes.bundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.bundle.appender.name = Bundle-\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.appender.fileName = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log
log4j2.appender.routing.routes.bundle.appender.filePattern = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log.%d{yyyy-MM-dd}
log4j2.appender.routing.routes.bundle.appender.append = true
log4j2.appender.routing.routes.bundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.bundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.bundle.appender.policies.type = Policies
log4j2.appender.routing.routes.bundle.appender.policies.time.type = TimeBasedTriggeringPolicy
log4j2.appender.routing.routes.bundle.appender.strategy.type = DefaultRolloverStrategy
log4j2.appender.routing.routes.bundle.appender.strategy.max = 31
That clearly worked for me. I wouldn't even think about a static configuration in OSGi! ;-)
The commented-out Log4j configuration section at the link below
https://github.com/apache/karaf/blob/master/assemblies/features/base/src/main/resources/resources/etc/org.ops4j.pax.logging.cfg
will log messages from each bundle to a separate file, but by default Karaf comes with many bundles, so this results in one log file per bundle and a lot of log files are generated.
How can it be done only for the specific bundles that the user has deployed in the deploy folder?
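One way to limit this (a sketch based on the Routing appender's key matching, not something I have tested on Karaf): replace the keyless catch-all Route shown above with one keyed Route per bundle you care about (the key is the bundle's symbolic name) plus a keyless default Route that simply references the existing RollingFile appender, so all other bundles keep going to the main log. With com.mycompany.mybundle as a placeholder bundle name, the extra routes could look roughly like this:
# Route only this bundle to its own file (key = bundle symbolic name)
log4j2.appender.routing.routes.mybundle.type = Route
log4j2.appender.routing.routes.mybundle.key = com.mycompany.mybundle
log4j2.appender.routing.routes.mybundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.mybundle.appender.name = MyBundleFile
log4j2.appender.routing.routes.mybundle.appender.fileName = ${karaf.data}/log/mybundle.log
log4j2.appender.routing.routes.mybundle.appender.filePattern = ${karaf.data}/log/mybundle.log.%i
log4j2.appender.routing.routes.mybundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.mybundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.mybundle.appender.policies.type = Policies
log4j2.appender.routing.routes.mybundle.appender.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.routing.routes.mybundle.appender.policies.size.size = 8MB

# Keyless default route: every other bundle falls back to the main appender
log4j2.appender.routing.routes.other.type = Route
log4j2.appender.routing.routes.other.ref = RollingFile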
I receive error message 8017 ("The UserId, Password or account is invalid") while trying to load data using FastLoad.
Fast Load Script:
logon 10.61.59.93/796207,Wpwp123;
drop table DATAMDL_SNDBX.QA_FL_PD;
drop table DATAMDL_SNDBX.ERROR_TABLE_ucv;
drop table DATAMDL_SNDBX.ERROR_TABLE_TV;
CREATE SET TABLE DATAMDL_SNDBX.QA_FL_PD ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
NAME VARCHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC,
INITIAL VARCHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC)
PRIMARY INDEX CRO_FLIGHT_LEG_DEP_NUPI ( NAME );
SET RECORD VARTEXT'~';
DEFINE
NAME (VARCHAR(10)),
INITIAL (VARCHAR(10))
FILE = C:\Users\Scarlet\Desktop\FL_Data.TXT;
BEGIN LOADING DATAMDL_SNDBX.QA_FL_PD ERRORFILES teradata fastload.ERROR_TABLE_UCV, teradata fastload.ERROR_TABLE_TV;
INSERT INTO DATAMDL_SNDBX.QA_FL_PD
VALUES (:NAME,
:INITIAL);
END LOADING;
LOGOFF;
File containing data (with only 1 record):
NAME~INITIAL
PRASHANT~PD
Error Message:
C:\Windows\system32>cd\
C:\>fastload<C:\Users\Scarlet\Desktop\FL_Script.TXT
===================================================================
= =
= FASTLOAD UTILITY VERSION 14.10.00.03 =
= PLATFORM WIN32 =
= =
===================================================================
===================================================================
= =
= Copyright 1984-2013, Teradata Corporation. =
= ALL RIGHTS RESERVED. =
= =
===================================================================
**** 14:08:03 Processing starting at: Thu Mar 10 14:08:02 2016
===================================================================
= =
= Logon/Connection =
= =
===================================================================
0001 logon 10.61.59.93/796207,
**** 14:08:03 RDBMS error 8017: The UserId, Password or Account is
invalid.
**** 14:08:03 Unable to log on Main SQL Session
**** 14:08:03 FastLoad cannot continue. Exiting.
===================================================================
= =
= Exiting =
= =
===================================================================
**** 14:08:03 Total processor time used = '0.124801 Seconds'
. Start : Thu Mar 10 14:08:02 2016
. End : Thu Mar 10 14:08:03 2016
. Highest return code encountered = '12'.
**** 14:08:03 FDL4818 FastLoad Terminated
The problem was that I had not specified which logon mechanism to use to connect. Adding the line below at the top of the script solved my problem:
logmech LDAP;
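With that, the top of the script becomes (the rest of the script stays unchanged):
logmech LDAP;
logon 10.61.59.93/796207,Wpwp123;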
Problem solved.. :)
I do not know how to close this thread.
Thanks
Prashant
I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My carbon-cache runs on ports 2003/2004 and the relay on 2013/2014.
Here are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers would be helpful.
As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of the carbon-relay should be the carbon-cache's pickle receiver port instead:
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify relay-rules.conf with the same destinations specified in carbon.conf:
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b