I want to create Quartz jobs that use a JDBC job store, as described in Burt's example in the clustering section of the docs.
The example shows how to configure Quartz using a quartz.properties file.
Now, I'd like my JDBC store to be the same database as my Grails application, so that I have fewer settings to duplicate.
So, assuming I manually create the required tables in my database, is it possible to use the default dataSource configured in DataSource.groovy with the Quartz plugin?
I'm using Grails 2.4.4 and quartz 1.0.2.
In other words, can I add my settings to QuartzConfig.groovy rather than creating a new quartz.properties file? At least I could then benefit from the per-environment settings.
Would something like this be valid in QuartzConfig.groovy?
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
    exposeSchedulerInRepository = true
    props {
        scheduler.skipUpdateCheck = true
        threadPool.class = 'org.quartz.simpl.SimpleThreadPool'
        threadPool.threadCount = 50
        threadPool.threadPriority = 9
        jobStore.misfireThreshold = 60000
        jobStore.class = 'org.quartz.impl.jdbcjobstore.JobStoreTX'
        jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
        jobStore.useProperties = false
        jobStore.tablePrefix = 'QRTZ_'
        jobStore.isClustered = true
        jobStore.clusterCheckinInterval = 5000
        plugin.shutdownhook.class = 'org.quartz.plugins.management.ShutdownHookPlugin'
        plugin.shutdownhook.cleanShutdown = true
        jobStore.dataSource = 'myDS'
        // [...]
    }
}
I managed to tweak all my settings in QuartzConfig.groovy. The only things I had to remove to make it work were the database-specific options.
Also, I had to add the property scheduler.idleWaitTime = 1000, as advised on page 12 of http://www.quartz-scheduler.org/generated/2.2.1/pdf/Quartz_Scheduler_Configuration_Guide.pdf, because despite my job being triggered with MyJob.triggerNow(paramsMap), there was a 20 to 30 second delay before it actually started.
With scheduler.idleWaitTime set to 1 second, the job indeed triggers 1 second after it has been submitted.
QuartzConfig.groovy actually accepts all the properties described in the Quartz configuration docs (e.g. http://quartz-scheduler.org/documentation/quartz-2.1.x/configuration/ConfigJobStoreTX). Just put them inside the props {...} block and remove the org.quartz prefix.
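For instance, the quartz.properties entry org.quartz.jobStore.misfireThreshold = 60000 becomes the following inside the props block (a minimal illustration of the prefix stripping, using a value from my config):
props {
    // same setting as org.quartz.jobStore.misfireThreshold in quartz.properties
    jobStore.misfireThreshold = 60000
}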
Here is my final config, as an example:
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
    // Allows monitoring in Java Melody (if you have the java melody plugin installed in your grails app)
    exposeSchedulerInRepository = true
    props {
        scheduler.skipUpdateCheck = true
        scheduler.instanceName = 'my_reporting_quartz'
        scheduler.instanceId = 'AUTO'
        scheduler.idleWaitTime = 1000
        threadPool.'class' = 'org.quartz.simpl.SimpleThreadPool'
        threadPool.threadCount = 10
        threadPool.threadPriority = 7
        jobStore.misfireThreshold = 60000
        jobStore.'class' = 'org.quartz.impl.jdbcjobstore.JobStoreTX'
        jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
        jobStore.useProperties = false
        jobStore.tablePrefix = 'QRTZ_'
        jobStore.isClustered = true
        jobStore.clusterCheckinInterval = 5000
        plugin.shutdownhook.'class' = 'org.quartz.plugins.management.ShutdownHookPlugin'
        plugin.shutdownhook.cleanShutdown = true
    }
}
Don't forget to create the SQL tables with the appropriate script, which is located at /path/to/your/project/target/work/plugins/quartz-1.0.2/src/templates/sql/...
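Since the question mentioned per-environment settings: QuartzConfig.groovy is an ordinary Grails config script, so per-environment overrides should look roughly like this (a hedged sketch, assuming the plugin merges the file through the standard environments mechanism):
environments {
    development {
        quartz {
            // assumption: fall back to the in-memory RAMJobStore for dev runs
            jdbcStore = false
        }
    }
    production {
        quartz {
            jdbcStore = true
            props {
                jobStore.isClustered = true
            }
        }
    }
}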
I'm trying to write my own Neovim Lua-based config, stripped down to the minimum I need, with the goal of understanding at least most of the config. The LSP setup is working fine and is the only source I configured for nvim-cmp:
local cmp = require("cmp")
cmp.setup {
    sources = {
        { name = 'nvim_lsp' }
    }
}
local capabilities = vim.lsp.protocol.make_client_capabilities()
capabilities = require('cmp_nvim_lsp').update_capabilities(capabilities)
After some startup delay the completion is working in the sense that I see popups with proposed completions, based on information from LSP.
But I cannot select any of the proposed completions. I can just continue to type, which narrows down the proposals, but I cannot use Tab, the arrow keys, ... to select an entry from the popup. I saw in the docs that one can define key mappings, but I cannot make sense of them. They are all rather sophisticated, require a snippet package to be installed, ...
I would prefer to select the next completion via Tab and to navigate via the arrow keys; Enter should select the current one.
Could somebody show me a minimal configuration for this setup or point me to more "basic" docs?
nvim-cmp requires you to set the mappings for Tab and the other keys explicitly; here is my working example:
local cmp = require'cmp'
local lspkind = require'lspkind'
cmp.setup({
    snippet = {
        expand = function(args)
            -- For `ultisnips` users.
            vim.fn["UltiSnips#Anon"](args.body)
        end,
    },
    mapping = cmp.mapping.preset.insert({
        ['<Tab>'] = function(fallback)
            if cmp.visible() then
                cmp.select_next_item()
            else
                fallback()
            end
        end,
        ['<S-Tab>'] = function(fallback)
            if cmp.visible() then
                cmp.select_prev_item()
            else
                fallback()
            end
        end,
        ['<CR>'] = cmp.mapping.confirm({ select = true }),
        ['<C-e>'] = cmp.mapping.abort(),
        ['<Esc>'] = cmp.mapping.close(),
        ['<C-d>'] = cmp.mapping.scroll_docs(-4),
        ['<C-f>'] = cmp.mapping.scroll_docs(4),
    }),
    sources = {
        { name = 'nvim_lsp' },                   -- for nvim-lsp
        { name = 'ultisnips' },                  -- for ultisnips users
        { name = 'nvim_lua' },                   -- for the Neovim Lua API
        { name = 'path' },                       -- for path completion
        { name = 'buffer', keyword_length = 4 }, -- for buffer word completion
        { name = 'omni' },
        { name = 'emoji', insert = true },       -- emoji completion
    },
    completion = {
        keyword_length = 1,
        completeopt = "menu,noselect",
    },
    view = {
        entries = 'custom',
    },
    formatting = {
        format = lspkind.cmp_format({
            mode = "symbol_text",
            menu = {
                nvim_lsp = "[LSP]",
                ultisnips = "[US]",
                nvim_lua = "[Lua]",
                path = "[Path]",
                buffer = "[Buffer]",
                emoji = "[Emoji]",
                omni = "[Omni]",
            },
        }),
    },
})
This works great for me. The mapping table is where the key bindings are configured; you can tweak this config from there to make it work for you.
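If you want something closer to the minimal setup asked for, here is a stripped-down sketch (an untested assumption on my part) that keeps only the nvim_lsp source from the question and reuses cmp.mapping.preset.insert from the config above:
local cmp = require('cmp')

cmp.setup({
    -- no snippet engine configured here; plain LSP completion items only
    mapping = cmp.mapping.preset.insert({
        ['<Tab>'] = cmp.mapping.select_next_item(),   -- Tab: next entry
        ['<S-Tab>'] = cmp.mapping.select_prev_item(), -- Shift-Tab: previous entry
        ['<Down>'] = cmp.mapping.select_next_item(),  -- arrow keys also navigate
        ['<Up>'] = cmp.mapping.select_prev_item(),
        ['<CR>'] = cmp.mapping.confirm({ select = true }), -- Enter confirms the highlighted entry
    }),
    sources = {
        { name = 'nvim_lsp' },
    },
})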
I have what should be a simple question but I can't figure it out. What is the correct syntax to use multiple appenders of the same type (RollingFile) with a single logger in Log4j2 properties file format?
For background, I am using Karaf 4.2.7 which uses pax logging. My logging config file is in the properties format.
log4j2.appender.fileapp1.type = RollingRandomAccessFile
log4j2.appender.fileapp1.name = FileApp1
...
log4j2.appender.fileapp2.type = RollingRandomAccessFile
log4j2.appender.fileapp2.name = FileApp2
...
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.appenderRef.RollingFile.ref = FileApp1, FileApp2
Putting both appenders on that last line separated by a comma does not work. It works if I have only one appender or the other. I also tried:
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [FileApp1, FileApp2]
log4j2.logger.myloggername.appenderRef.RollingFile.ref = {FileApp1, FileApp2}
log4j2.logger.myloggername.appenderRef.RollingFile.ref = [{FileApp1}, {FileApp2}]
None of those works. I can't seem to find any examples online of how to do this.
I referred to two web pages (thanks):
log4j 2 log4j2.properties (Configuration option)
Log4J 2 Configuration: Using the Properties File
The trick is to add and define the plural declaration properties (appenders, appenderRefs, loggers); each one announces the names that will be defined next.
name=PropertiesConfig
property.filename_fileapp1 = ./logs/fileapp1.log
property.filename_fileapp2 = ./logs/fileapp2.log
appenders = console, fileapp1, fileapp2
appender.console.type = Console
appender.console.name = STDOUT
...
appender.fileapp1.type = RollingRandomAccessFile
appender.fileapp1.name = fileapp1_AppenderName
appender.fileapp1.fileName = ${filename_fileapp1}
appender.fileapp1.filePattern = ${filename_fileapp1}.%d{yyyy-MM-dd}.log
...
appender.fileapp2.type = RollingRandomAccessFile
appender.fileapp2.name = fileapp2_AppenderName
appender.fileapp2.fileName = ${filename_fileapp2}
appender.fileapp2.filePattern = ${filename_fileapp2}.%d{yyyy-MM-dd}.log
...
loggers = mylogger1
logger.mylogger1.name = com.jornathan.sample.log4j2PropertyTest
logger.mylogger1.level = info
#keep this value for testing.
logger.mylogger1.additivity = true
#Here is what you need.
logger.mylogger1.appenderRefs = fileapp1Appender, fileapp2Appender
logger.mylogger1.appenderRef.fileapp1Appender.ref = fileapp1_AppenderName
logger.mylogger1.appenderRef.fileapp2Appender.ref = fileapp2_AppenderName
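Translated back to the Karaf / pax-logging keys from the question, the same pattern should presumably look like this (a sketch reusing the question's appender names; the appender definitions themselves stay unchanged):
log4j2.logger.myloggername.name = com.acme
log4j2.logger.myloggername.appenderRefs = fileapp1, fileapp2
log4j2.logger.myloggername.appenderRef.fileapp1.ref = FileApp1
log4j2.logger.myloggername.appenderRef.fileapp2.ref = FileApp2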
How can I create a separate log file for each bundle deployed in Karaf 4.2.3 using pax-logging, which uses the Log4j2 native-style config?
I've tried the routing appender, but got no results.
I expect to write each bundle's logs to a separate log file for easier debugging.
I don't know of any way to do this automatically, but what you could do is create a separate logger configuration for each module, based on its root package name:
log4j2.logger.xy.name = com.company.module.xy
log4j2.logger.xy.level = INFO
log4j2.logger.xy.additivity = false
log4j2.logger.xy.appenderRef.inovel.ref = XyFile
log4j2.logger.zz.name = com.company.module.zz
log4j2.logger.zz.level = INFO
log4j2.logger.zz.additivity = false
log4j2.logger.zz.appenderRef.inovel.ref = ZzFile
log4j2.logger.keycloak.name = org.keycloak
log4j2.logger.keycloak.level = INFO
log4j2.logger.keycloak.additivity = false
log4j2.logger.keycloak.appenderRef.keycloak.ref = KeycloakFile
And a referenced appender could look like this:
# keycloak file appender
log4j2.appender.keycloak.type = RollingRandomAccessFile
log4j2.appender.keycloak.name = KeycloakFile
log4j2.appender.keycloak.fileName = ${karaf.data}/log/keycloak.log
log4j2.appender.keycloak.filePattern = ${karaf.data}/log/keycloak.log.%i
log4j2.appender.keycloak.append = true
log4j2.appender.keycloak.layout.type = PatternLayout
log4j2.appender.keycloak.layout.pattern = %d{ISO8601}
log4j2.appender.keycloak.policies.type = Policies
log4j2.appender.keycloak.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.keycloak.policies.size.size = 8MB
log4j2.appender.keycloak.strategy.type = DefaultRolloverStrategy
log4j2.appender.keycloak.strategy.max = 10
This is a lot of manual work, so maybe someone will come up with an automatic configuration.
Sincerely
Just have a look at the official Log4j 2.x configuration that ships with every Karaf distribution, in particular the commented-out "Routing" section.
E.g. I've used the following in one of my projects:
# Root logger
log4j2.rootLogger.level = INFO
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.level = WARN
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Enable log routing...
log4j2.rootLogger.appenderRef.Routing.ref = Routing
# Loggers configuration
...
# Configure the routing (pay close attention to the escapes)...
log4j2.appender.routing.type = Routing
log4j2.appender.routing.name = Routing
log4j2.appender.routing.routes.type = Routes
log4j2.appender.routing.routes.pattern = \$\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.type = Route
log4j2.appender.routing.routes.bundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.bundle.appender.name = Bundle-\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.appender.fileName = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log
log4j2.appender.routing.routes.bundle.appender.filePattern = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log.%d{yyyy-MM-dd}
log4j2.appender.routing.routes.bundle.appender.append = true
log4j2.appender.routing.routes.bundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.bundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.bundle.appender.policies.type = Policies
log4j2.appender.routing.routes.bundle.appender.policies.time.type = TimeBasedTriggeringPolicy
log4j2.appender.routing.routes.bundle.appender.strategy.type = DefaultRolloverStrategy
log4j2.appender.routing.routes.bundle.appender.strategy.max = 31
That clearly worked for me. I wouldn't even think about a static configuration in OSGi! ;-)
The commented-out routing section in the Log4j configuration at the link below
https://github.com/apache/karaf/blob/master/assemblies/features/base/src/main/resources/resources/etc/org.ops4j.pax.logging.cfg
will log messages from each bundle to a separate file, but by default Karaf ships with many bundles, so this results in one log file per bundle and a great many log files being generated.
How can this be done only for the specific bundles the user has deployed in the deploy folder?
I have configured my Flume agent as below. Somehow, the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header called state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
# Define a source on the agent and connect it to the channels.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
# Define the memory channels on the agent.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
# Define sinks that output to HDFS.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
For example:
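A sketch of such an interceptor, assuming the state code is the first comma-separated field of each line in country.txt (adjust the regex to your actual file format):
# Hypothetical interceptor: populate the "state" header from each event body,
# assuming the state code is the first comma-separated field of the line.
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([^,]+),
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state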
Config.groovy:
// ...
grails.variable1 = "a"
grails.varibale2 = "${grails.variable1}bc"
//...
UPDATE 1
The approach shown above works with Grails 2.2.3. For older versions of Grails, please use the solution tim_yates suggested.
You need to declare a variable:
def rootVar = 'a'
grails.variable1 = rootVar
grails.varibale2 = "${rootVar}bc"
Or you might be able to do it via a closure (not tested):
grails.variable1 = 'a'
grails.variable2 = { -> "${grails.variable1}bc" }()