I am facing a logging issue in a Maven application that uses a dependency jar from another, non-Maven application; both use Log4j 2.18.0. However, once a call reaches a method in the dependency jar, all the logs of the main application are also written to the dependency jar's log file location.
I have included the content of both log4j2.properties files below:
Main application log4j2.properties:
status = warn
appender.console.type = Console
appender.console.name = LogToConsole
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
# Rotate log file
appender.rolling.type = RollingFile
appender.rolling.name = LogToRollingFile
appender.rolling.fileName = C:/LOG/Main.log
appender.rolling.filePattern = C:/LOG/Main.log.%d{yyyy-MM-dd}
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.strategy.type = DefaultRolloverStrategy
# Log to console and rolling file
logger.app.name = com.indra
logger.app.level = debug
logger.app.appenderRef.rolling.ref = LogToRollingFile
logger.app.appenderRef.console.ref = LogToConsole
logger.app.additivity=false
rootLogger.level = info
rootLogger.appenderRef.rolling.ref = LogToRollingFile
Dependency jar log4j2.properties:
status = warn
appender.console.type = Console
appender.console.name = LogToConsole
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %5p | %d | %t | %F | %L | %m%n
# Rotate log file
appender.rolling.type = RollingFile
appender.rolling.name = LogToRollingFile
appender.rolling.fileName = C:/LOG2/dependency.log
appender.rolling.filePattern = C:/LOG2/dependency.log.%d{yyyy-MM-dd}
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %5p | %d | %t | %F | %L | %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.strategy.type = DefaultRolloverStrategy
# Log to console and rolling file
logger.app.name = com.indra
logger.app.level = info
logger.app.appenderRef.rolling.ref = LogToRollingFile
logger.app.appenderRef.console.ref = LogToConsole
logger.app.additivity=false
rootLogger.level = debug
rootLogger.appenderRef.rolling.ref = LogToRollingFile
Please help me with this!
All the loggers of an application use the same logger context with a single configuration (cf. architecture).
Since logging configuration is up to the user of the application, not the developer, library jars should not contain any Log4j2 configuration file, nor should they depend upon log4j-core (they should only depend upon log4j-api, cf. API separation).
Your application can contain a Log4j2 configuration and log4j-core to provide a default configuration to the application user. However, the configuration should not use paths specific to your system.
Conventionally, a logger's name is equal to the fully qualified name of the class that uses it, which allows you to easily split your library's logs from your main application's logs. If all the packages in your library start with com.indra.dependency, you can use a single configuration similar to this:
appender.1.type = RollingFile
appender.1.name = main
...
appender.2.type = RollingFile
appender.2.name = dependency
...
logger.1.name = com.indra
logger.1.appenderRef.1.ref = main
logger.2.name = com.indra.dependency
logger.2.appenderRef.1.ref = dependency
logger.2.additivity = false
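As a quick illustration of that naming convention (the class and package names here are hypothetical), a class obtains its logger through the Log4j2 API, and the logger name defaults to the fully qualified class name, which is what the configuration above matches against:
package com.indra.app; // hypothetical main-application package

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class MainService {
    // Logger name = "com.indra.app.MainService", which matches the
    // "com.indra" logger above and is written by the "main" appender.
    private static final Logger LOG = LogManager.getLogger(MainService.class);

    public void doWork() {
        LOG.info("goes to the main log file");
    }
}
A class under com.indra.dependency would instead match the more specific com.indra.dependency logger and, with additivity = false, its events would only reach the dependency appender.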
kafka_2.12-2.4.0, Confluent 5.4.1
I am trying to use Confluent's Schema Registry.
But when I start the Schema Registry and connect-distributed, the Connect logs do not report any errors.
connect-avro-distributed.properties
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://k2:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://k2:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
plugin.path=/usr/local/tools/confluent-5.4.1/share/java,/usr/local/tools/kafka/kafka_2.12-2.4.0/plugin
I have configured the Confluent jar location (plugin.path) so that Connect can find the classes.
But when I POST the connector request:
{
"name": "dbz-mysql-avro-connector",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"tasks.max": "1",
"database.hostname": "xx.xx.xx.xx",
"database.port": "3306",
"database.user": "debezium",
"database.history.kafka.topic": "dbhistory.debezium.mysql.avro",
"database.password": "123456",
"database.server.id": "184124",
"database.server.name": "debezium",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "http://k2:8081",
"value.converter.schema.registry.url": "http://k2:8081",
"table.whitelist": "debeziumdb.hosttable",
"database.history.kafka.bootstrap.servers": "k1:9092,k2:9092,k3:9092"
}
}
The following exception is thrown:
[2020-04-23 10:37:00,064] INFO Creating task dbz-mysql-avro-connector-0 (org.apache.kafka.connect.runtime.Worker:419)
[2020-04-23 10:37:00,065] INFO ConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.mysql.MySqlConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class io.confluent.connect.avro.AvroConverter
name = dbz-mysql-avro-connector
tasks.max = 1
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.ConnectorConfig:347)
[2020-04-23 10:37:00,065] INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.mysql.MySqlConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class io.confluent.connect.avro.AvroConverter
name = dbz-mysql-avro-connector
tasks.max = 1
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:347)
[2020-04-23 10:37:00,067] INFO TaskConfig values:
task.class = class io.debezium.connector.mysql.MySqlConnectorTask
(org.apache.kafka.connect.runtime.TaskConfig:347)
[2020-04-23 10:37:00,067] INFO Instantiated task dbz-mysql-avro-connector-0 with version 1.1.0.Final of type io.debezium.connector.mysql.MySqlConnectorTask (org.apache.kafka.connect.runtime.Worker:434)
[2020-04-23 10:37:00,067] ERROR Failed to start task dbz-mysql-avro-connector-0 (org.apache.kafka.connect.runtime.Worker:470)
java.lang.NoClassDefFoundError: io/confluent/connect/avro/AvroConverterConfig
at io.confluent.connect.avro.AvroConverter.configure(AvroConverter.java:61)
at org.apache.kafka.connect.runtime.isolation.Plugins.newConverter(Plugins.java:293)
at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:440)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1140)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1700(DistributedHerder.java:125)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:1155)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:1151)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2020-04-23 10:37:00,071] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1125)
All the jars are in that directory.
Now, what can I do so that the class can be found, or does this class not exist in this version of Confluent?
Thanks.
I finally solved this exception.
I did not use the full Confluent Platform; to be precise, I installed only the community version and activated only the schema-registry component.
Then I downloaded the Avro converter jar package from the official website, put all of it into the plugin directory, and started Connect successfully.
Confluent Avro jar address
I also executed the following statement so that the jars can be read:
export CLASSPATH=/usr/local/tools/kafka/kafka_2.12-2.4.0/plugin/*
It looks like your kafka-connect-avro-converter is not compatible with the other Confluent jars. Your question also does not list the kafka-connect-avro-converter jar. Can you add the correct version of the kafka-connect-avro-converter jar to your classpath?
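As a quick sanity check (a rough sketch; the directory is the plugin path from the question, adjust as needed), you could look for the missing class inside the jars on the plugin path:
for jar in /usr/local/tools/kafka/kafka_2.12-2.4.0/plugin/*.jar; do
  unzip -l "$jar" | grep -q 'io/confluent/connect/avro/AvroConverterConfig.class' && echo "found in $jar"
done
If nothing is printed, the jar that provides AvroConverterConfig (kafka-connect-avro-converter and its dependencies) is missing from, or not readable on, the path that Connect actually scans.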
I have what should be a simple question but I can't figure it out. What is the correct syntax to use multiple appenders of the same type (RollingFile) with a single logger in Log4j2 properties file format?
For background, I am using Karaf 4.2.7 which uses pax logging. My logging config file is in the properties format.
log4j2.appender.fileapp1.type = RollingRandomAccessFile
log4j2.appender.fileapp1.name = FileApp1
...
log4j2.appender.fileapp2.type = RollingRandomAccessFile
log4j2.appender.fileapp2.name = FileApp2
...
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.appenderRef.RollingFile.ref = FileApp1, FileApp2
Putting both appenders on that last line separated by a comma does not work. It works if I have only one appender or the other. I also tried
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [FileApp1, FileApp2]
log4j2.logger.myloggername.appenderRef.RollingFile.ref = {FileApp1, FileApp2}
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [{FileApp1}, {FileApp2}]
None of those works. I can't seem to find any examples online of how to do this.
I referred to two web pages (thanks):
log4j 2 log4j2.properties(Configuration option)
Log4J 2 Configuration: Using the Properties File
Add and define the plural "...s" keys (appenders, appenderRefs, loggers).
These declare up front which items will be defined next.
name=PropertiesConfig
property.filename_fileapp1 = ./logs/fileapp1.log
property.filename_fileapp2 = ./logs/fileapp2.log
appenders = console, fileapp1, fileapp2
appender.console.type = Console
appender.console.name = STDOUT
...
appender.fileapp1.type = RollingRandomAccessFile
appender.fileapp1.name = fileapp1_AppenderName
appender.fileapp1.fileName = ${filename_fileapp1}
appender.fileapp1.filePattern = ${filename_fileapp1}.%d{yyyy-MM-dd}.log
...
appender.fileapp2.type = RollingRandomAccessFile
appender.fileapp2.name = fileapp2_AppenderName
appender.fileapp2.fileName = ${filename_fileapp2}
appender.fileapp2.filePattern = ${filename_fileapp2}.%d{yyyy-MM-dd}.log
...
loggers = mylogger1
logger.mylogger1.name = com.jornathan.sample.log4j2PropertyTest
logger.mylogger1.level = info
#keep this value for testing.
logger.mylogger1.additivity = true
#Here is what you need.
logger.mylogger1.appenderRefs = fileapp1Appender, fileapp2Appender
logger.mylogger1.appenderRef.fileapp1Appender.ref = fileapp1_AppenderName
logger.mylogger1.appenderRef.fileapp2Appender.ref = fileapp2_AppenderName
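Applied to the Karaf/pax-logging style used in the question (the level line is an assumption), the same pattern would look roughly like this, with one appenderRef.<label>.ref entry per appender instead of a comma-separated list:
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.level = INFO
log4j2.logger.myloggername.appenderRef.fileapp1.ref = FileApp1
log4j2.logger.myloggername.appenderRef.fileapp2.ref = FileApp2
The token after appenderRef. is just an arbitrary label; what matters is that each referenced appender gets its own .ref key.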
How can I create a separate log file for each bundle deployed in Karaf 4.2.3 using pax logging, which has a log4j2 native-style config?
I've tried the routing appender, but with no results.
I expect to write each bundle's logs to a separate log file for easy debugging.
I don't know any way of doing this automatically. But what you could do is create a separate logger configuration for each module, based on its root package name:
log4j2.logger.xy.name = com.company.module.xy
log4j2.logger.xy.level = INFO
log4j2.logger.xy.additivity = false
log4j2.logger.xy.appenderRef.inovel.ref = XyFile
log4j2.logger.zz.name = com.company.module.zz
log4j2.logger.zz.level = INFO
log4j2.logger.zz.additivity = false
log4j2.logger.zz.appenderRef.inovel.ref = ZzFile
log4j2.logger.keycloak.name = org.keycloak
log4j2.logger.keycloak.level = INFO
log4j2.logger.keycloak.additivity = false
log4j2.logger.keycloak.appenderRef.keycloak.ref = KeycloakFile
And a referenced appender could look like:
# keycloak file appender
log4j2.appender.keycloak.type = RollingRandomAccessFile
log4j2.appender.keycloak.name = KeycloakFile
log4j2.appender.keycloak.fileName = ${karaf.data}/log/keycloak.log
log4j2.appender.keycloak.filePattern = ${karaf.data}/log/keycloak.log.%i
log4j2.appender.keycloak.append = true
log4j2.appender.keycloak.layout.type = PatternLayout
log4j2.appender.keycloak.layout.pattern = %d{ISO8601}
log4j2.appender.keycloak.policies.type = Policies
log4j2.appender.keycloak.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.keycloak.policies.size.size = 8MB
log4j2.appender.keycloak.strategy.type = DefaultRolloverStrategy
log4j2.appender.keycloak.strategy.max = 10
This is a lot of manual work, so maybe someone will come up with an automatic configuration.
Sincerely
Just have a look at the official Log4j 2.x configuration that ships with every Karaf distribution, in particular the commented-out "Routing" section.
E.g. I've used the following in one of my projects:
# Root logger
log4j2.rootLogger.level = INFO
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.level = WARN
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Enable log routing...
log4j2.rootLogger.appenderRef.Routing.ref = Routing
# Loggers configuration
...
# Configure the routing (pay close attention to the escapes)...
log4j2.appender.routing.type = Routing
log4j2.appender.routing.name = Routing
log4j2.appender.routing.routes.type = Routes
log4j2.appender.routing.routes.pattern = \$\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.type = Route
log4j2.appender.routing.routes.bundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.bundle.appender.name = Bundle-\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.appender.fileName = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log
log4j2.appender.routing.routes.bundle.appender.filePattern = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log.%d{yyyy-MM-dd}
log4j2.appender.routing.routes.bundle.appender.append = true
log4j2.appender.routing.routes.bundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.bundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.bundle.appender.policies.type = Policies
log4j2.appender.routing.routes.bundle.appender.policies.time.type = TimeBasedTriggeringPolicy
log4j2.appender.routing.routes.bundle.appender.strategy.type = DefaultRolloverStrategy
log4j2.appender.routing.routes.bundle.appender.strategy.max = 31
That clearly worked for me. I wouldn't even think about a static configuration in OSGi! ;-)
The commented Log4j configuration section at the link below
https://github.com/apache/karaf/blob/master/assemblies/features/base/src/main/resources/resources/etc/org.ops4j.pax.logging.cfg
will log messages for each bundle to a separate file. However, Karaf by default comes with many bundles, so this will produce one log file per bundle and a large number of log files will be generated.
How can it be done only for the specific bundles that the user has deployed in the deploy folder?
I am trying to transfer a 9 GB file to HDFS from a spool directory using Flume. I have the following Flume configuration:
#initialize agent's source, channel and sink
wagent.sources = wavetronix
wagent.channels = memoryChannel2
wagent.sinks = flumeHDFS
# Setting the source to spool directory where the file exists
wagent.sources.wavetronix.type = spooldir
wagent.sources.wavetronix.spoolDir = /johir/WAVETRONIX/output/Yesterday
wagent.sources.wavetronix.fileHeader = false
wagent.sources.wavetronix.basenameHeader = true
#agent.sources.wavetronix.fileSuffix = .COMPLETED
# Setting the channel to memory
wagent.channels.memoryChannel2.type = memory
# Max number of events stored in the memory channel
wagent.channels.memoryChannel2.capacity = 50000
agent.channels.memoryChannel2.batchSize = 1000
wagent.channels.memoryChannel2.transactioncapacity = 1000
# Setting the sink to HDFS
wagent.sinks.flumeHDFS.type = hdfs
#agent.sinks.flumeHDFS.useLocalTimeStamp = true
wagent.sinks.flumeHDFS.hdfs.path =/user/root/WAVETRONIXFLUME/%Y-%m-%d/
wagent.sinks.flumeHDFS.hdfs.useLocalTimeStamp = true
wagent.sinks.flumeHDFS.hdfs.filePrefix= %{basename}
wagent.sinks.flumeHDFS.hdfs.fileType = DataStream
# Write format can be text or writable
wagent.sinks.flumeHDFS.hdfs.writeFormat = Text
# use a single csv file at a time
wagent.sinks.flumeHDFS.hdfs.maxOpenFiles = 1
wagent.sinks.flumeHDFS.hdfs.rollCount=0
wagent.sinks.flumeHDFS.hdfs.rollInterval=0
wagent.sinks.flumeHDFS.hdfs.rollSize = 6400000
wagent.sinks.flumeHDFS.hdfs.batchSize =1000
# never rollover based on the number of events
wagent.sinks.flumeHDFS.hdfs.rollCount = 0
# rollover file based on max time of 1 min
#agent.sinks.flumeHDFS.hdfs.rollInterval = 0
# agent.sinks.flumeHDFS.hdfs.idleTimeout = 600
# Connect source and sink with channel
wagent.sources.wavetronix.channels = memoryChannel2
wagent.sinks.flumeHDFS.channel = memoryChannel2
But I am getting the following exception.
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor"
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1043)
at java.util.concurrent.ConcurrentHashMap.putIfAbsent(ConcurrentHashMap.java:1535)
at java.lang.ClassLoader.getClassLoadingLock(ClassLoader.java:463)
at java.lang.ClassLoader.loadClass(ClassLoader.java:404)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.log4j.spi.LoggingEvent.<init>(LoggingEvent.java:165)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:479)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:461)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Can anyone help me to solve this problem?
Please edit the file ${FLUME_HOME}/conf/flume-env.sh and add the following line:
export JAVA_OPTS="-Xms1000m -Xmx12000m -Dcom.sun.management.jmxremote"
You can adjust the -Xmx and -Xms options as needed.
I have configured my Flume agent as shown below. Somehow, the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header named state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
### # # Define a source on agent and connect to channel memory-channel.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
##Define sinks that outputs to hdfs.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
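That is, the source's channel list should reference the channels you actually defined:
foo.sources.s1.channels = chn-az chn-oth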
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
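For illustration, here is a minimal sketch of such an interceptor on the source, assuming each line of country.txt begins with a two-letter state code (the regex is an assumption about your file format):
# extract the leading two-letter code from the event body into the "state" header
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2})
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state
The captured group is written into the state header of every event, which is what the multiplexing selector then matches against.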