Log4j2: programmatically modifying the current configuration, synchronizing addAppender and addLogger

I want to modify the Log4j2 configuration after initialization (which is done with a log4j2.xml file), i.e. at runtime. The official docs say (point 2 of their numbered list) that "Modification to the running configuration requires that all the methods being called (addAppender and addLogger) be synchronized."
I don't understand whether they mean only those two calls or other calls as well, or whether I, as the caller, need to synchronize the methods myself. My reason for asking is that addLogger() is already synchronized (implemented in AbstractConfiguration) and uses a ConcurrentHashMap (for the loggerConfigs field), so I see no reason why I would need to synchronize these calls.
P.S. I need runtime modification partly because users can configure messages to be sent to syslog (with the host, facility, and port of their choice), so I am adding syslog appenders at runtime. I don't see any way to do this other than programmatic changes.
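For concreteness, here is the kind of change I'm making — the pattern from the manual's "Programmatically Modifying the Current Configuration" example, wrapped in a lock of my own in case the synchronization really is my responsibility. A rough sketch only (the builder setter names vary a little between Log4j2 2.x releases, and RuntimeSyslog / CONFIG_LOCK / addUserSyslogAppender are my own names):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.SyslogAppender;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.net.Facility;
import org.apache.logging.log4j.core.net.Protocol;

public final class RuntimeSyslog {
    // One lock guarding every runtime configuration change, per the docs' advice.
    private static final Object CONFIG_LOCK = new Object();

    public static void addUserSyslogAppender(String name, String host, int port, Facility facility) {
        synchronized (CONFIG_LOCK) {
            LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
            Configuration config = ctx.getConfiguration();
            // Setter names (setHost vs. withHost, etc.) differ across 2.x versions.
            SyslogAppender appender = SyslogAppender.newSyslogAppenderBuilder()
                    .setName(name)
                    .setHost(host)
                    .setPort(port)
                    .setProtocol(Protocol.UDP)
                    .setFacility(facility)
                    .build();
            appender.start();
            config.addAppender(appender);
            // Attach to the root logger; use config.addLogger(...) instead for a dedicated LoggerConfig.
            config.getRootLogger().addAppender(appender, Level.INFO, null);
            ctx.updateLoggers();
        }
    }
}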

How to find/define JMX key for ActiveMQ Artemis monitoring

I'm trying to set up monitoring of ActiveMQ Artemis with Zabbix. My intention is to monitor the availability of Artemis, monitor the size and number of messages accumulating in queues, and set up alerts.
I enabled JMX on Artemis as the documentation instructs, and I built the JMX example. From what I can tell, this only involves adding the following lines to two files in the broker:
management.xml
<connector connector-port="1099" connector-host="192.168.56.101" />
Opened the port:
sudo ufw allow 1099
broker.xml
<jmx-management-enabled>true</jmx-management-enabled>
So I think JMX is enabled, although I haven't managed to confirm this.
In Zabbix I added the "host" (a system to monitor), but the next step is creating an "item" (a thing on the system). To do this I need a JMX key, something similar to jmx["java.lang:type=Memory","HeapMemoryUsage.used"], which defines the MBean to call. (I tried this one, but I don't get any data back.)
So where can I find the keys for the available things to monitor on Artemis? Or have I screwed something up here and am not looking for the right thing?
In the example there is a JMXExample.java program. It connects to Artemis, publishes a message, uses JMX to count the messages, then removes the message -- but I don't see any keys to MBeans.
Also, in the admin console for Artemis there is a JMX tab, which lists what I think is all the available things to monitor. For example, I have a queue called "test.queue". Under the JMX tab I find:
org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And there are numerous methods listed, including countMessages(). Have I answered my own question here? Is this what I'm looking for?
If so, how does it fit into this key format: jmx[object_name,attribute_name]?
{EDIT}
I'm looking at the JMX tab on the console. If I understand correctly, the key should have a format like this: jmx[object_name,attribute_name]
So I see the object name under the JMX tab for one of my test queues is: org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And it has an attribute of: MessageCount
So I tried this, which doesn't work. I also tried replacing 0.0.0.0 with the IP address.
jmx[org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue",MessageCount]
The default value for <jmx-management-enabled> is true, so you don't need to configure it explicitly.
You can confirm that JMX is enabled by connecting to the broker with a tool like JConsole or JVisualVM, which ship with the JDK. Ideally you would do this locally to avoid any network configuration issues.
The broker exposes lots of different MBeans for managing all parts of the broker. Here are the different "control" objects with their default MBean object naming pattern:
ActiveMQServerControl: <domain>:broker=<brokerName>
AddressControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>
QueueControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=queues,routing-type=<routingType>,queue=<queueName>
DivertControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=diverts,divert=<divertName>
ClusterConnectionControl: <domain>:broker=<brokerName>,component=cluster-connections,name=<clusterConnectionName>
AcceptorControl: <domain>:broker=<brokerName>,component=acceptors,name=<acceptorName>
BroadcastGroupControl: <domain>:broker=<brokerName>,component=broadcast-groups,name=<broadcastGroupName>
BridgeControl: <domain>:broker=<brokerName>,component=bridges,name=<bridgeName>
The "key" that you use will depend on the name of the attribute from the control that you want to inspect. That name will correspond to the "getter" of the attribute. You can see all the names of all the getters in the linked JavaDoc. For example, if you want to get the number of messages from a queue you'd use the key MessageCount since the getter is named getMessageCount().
The domain by default is org.apache.activemq.artemis and the default broker name is localhost so if you didn't explicitly configure either of these and you wanted to get the message count of the anycast queue "myQueue" on the address "myAddress" you would use something like this:
jmx["org.apache.activemq.artemis:broker=\"localhost\",component=addresses,address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"",MessageCount]
This formatting is based on this Zabbix blog post, which is also discussed in this Zabbix forum thread.
To be clear, the JMXExample you cited uses a handy helper method named getQueueObjectName to construct the MBean's object name.
If you need to quickly get a broker up and running which supports remote JMX clients, do the following:
Open a terminal in the directory examples/features/standard/jmx.
Run the example using mvn clean verify.
This will create a full broker instance in target/server0 which you can use as a template to configure your own. It includes modifications to broker.xml, management.xml, and artemis.profile (to set the java.rmi.server.hostname system property).
If you start this broker instance manually you can connect to it with JConsole or JVisualVM using service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi.
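To tie it together, here is a minimal standalone sketch that reads MessageCount over that same service URL (assuming the default broker name localhost and the example queue names from above); this is essentially what the Zabbix Java gateway does when it evaluates the jmx[...] key:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // The same object name as in the Zabbix key, minus the Zabbix-level escaping.
            ObjectName queue = new ObjectName(
                    "org.apache.activemq.artemis:broker=\"localhost\",component=addresses,"
                    + "address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"");
            System.out.println("MessageCount = " + mbs.getAttribute(queue, "MessageCount"));
        }
    }
}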

Spring Cloud DataFlow Rabbit Source: how to intercept and enrich messages in a Source

I have been successfully evaluating Spring Cloud DataFlow with a typical, simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline, which I can do using DataFlow labels. All well and good.
Each source is a different RabbitMQ instance, and because the processor needs to know where a message came from (it has to call back to the source system to get further information), my strategy was to enrich each message with header details about the source system, which would then be passed along transparently to the processor.
Now, I'm well-versed in Spring, Spring Boot and Spring Integration but I cannot find out how to enrich each message in a dataflow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In the RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you could possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end, what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see the next bullet) with details of where the message came from (note: this is the publisher of the messages, not the RabbitMQ queue). The health check is needed to validate the additional enriching information (which is provided via configuration), to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start, and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map toHeadersFromRequest(final MessageProperties source); a sketch follows below.
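For reference, the mapper subclass boils down to something like this (a sketch against the Spring Integration 4.x version of the class used by the app starters; the header names and values here are illustrative, and in my case came from configuration):

import java.util.Map;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper;

public class SourceSystemHeaderMapper extends DefaultAmqpHeaderMapper {
    @Override
    public Map<String, Object> toHeadersFromRequest(MessageProperties source) {
        // Keep the standard AMQP header mapping, then add the enrichment headers.
        Map<String, Object> headers = super.toHeadersFromRequest(source);
        headers.put("sourceSystem", "crm-eu-1"); // illustrative value
        headers.put("sourceSystemBaseUrl", "https://crm.example.com"); // illustrative value
        return headers;
    }
}

The overridden adapter bean then just calls setHeaderMapper(new SourceSystemHeaderMapper()) on the AmqpInboundChannelAdapter before returning it.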
For me, the inability of Stream/DataFlow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle with the underlying message broker API in the way I did; I should be able to do it with, for example, Spring Integration. Indeed, I can register a global message interceptor, but I cannot change the headers of the message there.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.

Grails - restarting Rabbitmq plugin consumers by calling method on plugin class

I am using the Grails RabbitMQ Native plugin. When I launch the application, I don't want the RMQ consumers to be started automatically, so in my Config.groovy I have defined:
rabbitmq.enabled = false
The code within doWithSpring() (https://github.com/budjb/grails-rabbitmq-native/blob/master/RabbitmqNativeGrailsPlugin.groovy#L114) means that certain wiring isn't carried out if this flag is false.
At some point, I want to be able to start the RMQ system up. I'd like to call a method defined within the plugin class, such as restartRabbitContext() (https://github.com/budjb/grails-rabbitmq-native/blob/master/RabbitmqNativeGrailsPlugin.groovy#L231) to start up the RMQ consumers. I think I will need to carry out some of the wiring myself.
Is there a way to do this? What is the import required to be able to access the plugin class's methods?
Your best bet is to use the GrailsPluginManager to access your plugin by name using getGrailsPlugin. From there you should be able to access the plugin as a GrailsPlugin and call the public methods defined in the plugin itself.
The GrailsPluginManager can be obtained through the grailsApplication, e.g. grailsApplication.pluginManager. In the very rare event that you can't use dependency injection, you can fall back to Holders to get the GrailsPluginManager.
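As a rough sketch (assuming the plugin name is rabbitmq-native — check the plugin descriptor for the exact name — and using the Grails 2.x package names), from anywhere that has grailsApplication available:

import org.codehaus.groovy.grails.plugins.GrailsPlugin;
import org.codehaus.groovy.grails.plugins.GrailsPluginManager;

GrailsPluginManager pluginManager = grailsApplication.getMainContext()
        .getBean("pluginManager", GrailsPluginManager.class);
GrailsPlugin plugin = pluginManager.getGrailsPlugin("rabbitmq-native");
// From Groovy you can simply call plugin.instance.restartRabbitContext(), since
// dispatch is dynamic; from plain Java the descriptor must be invoked reflectively.
Object descriptor = plugin.getInstance();
descriptor.getClass().getMethod("restartRabbitContext").invoke(descriptor);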

Capture outgoing HTTP request from Controller / Service

So I have the following scenario (it's a Grails 2.1 app):
I have a Controller that can be accessed via http://localhost:8080/myController
This controller in turn calls another URL, opening a connection using new URL("https://my.other.url").openConnection()
I want to capture that request so I can log its information.
I already have a Filter in my web.xml which does the job well for controllers mapped in my app. But as soon as a request is fired at an external URL, I don't get anything.
I understand that my filter is only invoked for URLs inside my app, and that depends on my filter mapping, which is fine.
I'm struggling to see how a solution inside the app is actually viable. I'm thinking of a mixed approach with the DevOps team: capture such outgoing calls from the container and then log them into a separate file.
I guess my questions are:
Is there a way to do it inside the app itself?
Is the approach I'm planning a sensible one?
Cheers!
Any reason why you don't want to use http-builder? There's a Grails plugin for it, and it makes remote XML calls much easier than handling the plumbing yourself. At the bottom of the linked page they describe how you can enable request logging via log4j configuration.
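For example, since HTTPBuilder sits on top of Apache HttpClient, you can switch on wire and header logging from Config.groovy in a Grails 2.x app with something like this (a sketch; the exact logger categories depend on the HttpClient version the plugin pulls in):

log4j = {
    debug 'org.apache.http.headers',
          'org.apache.http.wire'
}

That logs every outgoing request and response without touching the controller code.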

Using syslog in rails application

I am thinking of using syslog in my Rails applications. The process is outlined in this blog post:
Add gem 'SyslogLogger' to your Gemfile
Add require 'syslog_logger' to the top of config/environments/production.rb
Also uncomment the config.logger = line in the same file.
On the production box I have 4 Rails applications running under Passenger. If I switch all 4 of my applications to syslogger, I'm afraid the log messages from all 4 applications will go to a single file, with the messages interleaved. Of course, I could use Splunk, but first I wanted to check whether it is possible to get one log file for each of my Rails applications. That would be desirable in my situation.
Is that possible?
#cite's answer covers one option for distinguishing the apps. However, the syslog message framing actually has two fields that make it even easier: hostname and tag (more commonly known and used as the program name).
hostname is set by the system syslog daemon before it forwards the message to a centralized server. It will be the same for all apps on the same system, but may be handy as you grow past one server.
The more interesting one is tag. Your app defines tag when it instantiates SyslogLogger. For example:
SyslogLogger.new('app1')
The logger class will send to the system syslogd as app1, and that tag will appear in both the local log file and any remote syslog destinations (without needing to modify the log message itself). The default is rails. All modern syslog daemons can filter based on tag; see program() for syslog-ng and $programname for rsyslog.
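For example, a per-tag split looks something like this (a sketch; the file paths are illustrative). With rsyslog:

:programname, isequal, "app1"    /var/log/railsapp_1.log
& stop

(older rsyslog versions use & ~ instead of & stop). With syslog-ng, where s_src is whatever your local message source is named:

filter f_app1 { program("app1"); };
destination d_app1 { file("/var/log/railsapp_1.log"); };
log { source(s_src); filter(f_app1); destination(d_app1); };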
Also, it's worth noting that SyslogLogger essentially wraps the C openlog() and syslog() functions, so basically all post-log configuration happens in the system daemon. Generally that's desirable, but sometimes you may want your Rails app to log directly to a specific destination (for example, to simplify automated deployment, to change attributes not allowed by syslog(), or to run in environments without access to the system daemon).
We ran into a couple of those cases and made a drop-in Logger replacement that generates the UDP packets itself. The remote_syslog_logger gem is on GitHub.
Yes. By default, almost all Unix syslogds will write messages logged with the user or local* facilities to the same file. However, every syslogd I know of allows you to specify log files on a per-facility basis, so you can have your first application log to local1.*, the second to local2.*, and so on.
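For example, the per-facility split in a classic /etc/syslog.conf looks like this (a sketch; the paths are illustrative):

local1.*    /var/log/railsapp_1.log
local2.*    /var/log/railsapp_2.log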
Furthermore, newer syslog daemons like syslog-ng allow splitting messages into different files by matching the message against a regular expression (e.g. write log strings containing railsapp_1 to /var/log/railsapp_1.log, and so on).
So, configure your syslogd appropriately and you are done. (The gory details of changing that configuration are a question for serverfault.com if your system's man pages don't help.)
