I'm trying to setup monitoring of ActiveMQ Artemis with Zabbix. My intention is to monitor the availability of Artemis and also monitor the size and number of messages accumulating in queues, and setup alerts.
I enabled JMX on Artemis as the documentation instructs, and I built the JMX example. From what I can tell, this only involves adding the following lines to these two files in the broker:
management.xml
<connector connector-port="1099" connector-host="192.168.56.101" />
Opened the port:
sudo ufw allow 1099
broker.xml
<jmx-management-enabled>true</jmx-management-enabled>
So I think JMX is enabled, although I haven't managed to confirm this.
In Zabbix I added the "host" (a system to monitor), but the next step is creating an "item" (a thing on the system). To do this I need a JMX key, something similar to jmx["java.lang:type=Memory","HeapMemoryUsage.used"]. (I tried this one but I don't get any data back.) This defines the MBean to call.
So where can I find the keys for the available things to monitor on Artemis? Or have I screwed something up here and am not looking for the right thing?
In the example there is a JMXExample.java program. It connects to Artemis, publishes a message, uses JMX to count the messages, then removes the message -- but I don't see any keys to MBeans.
Also, in the admin console for Artemis there is a JMX tab, which lists what I think is all the available things to monitor. For example, I have a queue called "test.queue". Under the JMX tab I find:
org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And there are numerous methods listed, including countMessages(). Have I answered my own question here? Is this what I'm looking for?
If so, how does it fit into the key format jmx[object_name,attribute_name]?
EDIT:
I'm looking at the JMX tab on the console. If I understand correctly, the key should have a format like this: jmx[object_name,attribute_name]
So I see that the object name under the JMX tab for one of my test queues is: org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And it has an attribute of: MessageCount
So I tried this, which doesn't work. I also tried replacing 0.0.0.0 with the IP address.
jmx[org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue",MessageCount]
The default value for <jmx-management-enabled> is true so you don't need to explicitly configure that.
You can confirm that JMX is enabled by connecting to the broker using a tool like JConsole or JVisualVM which ship with the JVM. Ideally you would do this locally to avoid any network configuration issues.
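If you'd rather confirm it programmatically, here's a minimal Java sketch (assuming the connector host/port from your management.xml and the default org.apache.activemq.artemis domain) that connects over JMX and lists the broker's MBeans:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListArtemisMBeans {
    public static void main(String[] args) throws Exception {
        // Host and port here are assumptions taken from the management.xml above
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://192.168.56.101:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // List every MBean in the Artemis domain; if this prints nothing,
            // JMX is reachable but the broker's MBeans are not where expected
            Set<ObjectName> names =
                mbsc.queryNames(new ObjectName("org.apache.activemq.artemis:*"), null);
            names.forEach(System.out::println);
        }
    }
}

If the connection itself fails, the problem is in the RMI/network setup rather than the broker's management configuration.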
The broker exposes lots of different MBeans for managing all parts of the broker. Here are the different "control" objects with their default MBean object naming pattern:
ActiveMQServerControl: <domain>:broker=<brokerName>
AddressControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>
QueueControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=queues,routing-type=<routingType>,queue=<queueName>
DivertControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=diverts,divert=<divertName>
ClusterConnectionControl: <domain>:broker=<brokerName>,component=cluster-connections,name=<clusterConnectionName>
AcceptorControl: <domain>:broker=<brokerName>,component=acceptors,name=<acceptorName>
BroadcastGroupControl: <domain>:broker=<brokerName>,component=broadcast-groups,name=<broadcastGroupName>
BridgeControl: <domain>:broker=<brokerName>,component=bridges,name=<bridgeName>
The "key" that you use will depend on the name of the attribute from the control that you want to inspect. That name will correspond to the "getter" of the attribute. You can see all the names of all the getters in the linked JavaDoc. For example, if you want to get the number of messages from a queue you'd use the key MessageCount since the getter is named getMessageCount().
The domain by default is org.apache.activemq.artemis and the default broker name is localhost so if you didn't explicitly configure either of these and you wanted to get the message count of the anycast queue "myQueue" on the address "myAddress" you would use something like this:
jmx["org.apache.activemq.artemis:broker=\"localhost\",component=addresses,address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"",MessageCount]
This formatting is based on this Zabbix blog post which is also discussed on this Zabbix forum thread.
To be clear, the JMXExample you cited uses a handy helper method named getQueueObjectName to construct the MBean's object name.
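If you want to construct those names in your own code the same way the example does, a sketch like this (using the management API from the Artemis client libraries; note the exact getQueueObjectName signature has changed between Artemis versions) prints the ObjectName you would embed in a Zabbix key:

import javax.management.ObjectName;
import org.apache.activemq.artemis.api.core.RoutingType;
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class PrintQueueObjectName {
    public static void main(String[] args) throws Exception {
        // ObjectNameBuilder.DEFAULT assumes the default domain and broker name;
        // overloads exist for custom values
        ObjectName name = ObjectNameBuilder.DEFAULT.getQueueObjectName(
            SimpleString.toSimpleString("myAddress"),
            SimpleString.toSimpleString("myQueue"),
            RoutingType.ANYCAST);
        System.out.println(name);
    }
}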
If you need to quickly get a broker up and running which supports remote JMX clients do the following:
Open the directory examples/features/standard/jmx in a terminal.
Run the example using mvn clean verify.
This will create a full broker instance in target/server0 which you can use as a template to configure your own. It includes modifications to broker.xml, management.xml, and artemis.profile (to set the java.rmi.server.hostname system property).
If you start this broker instance manually you can connect to it with JConsole or JVisualVM using service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi.
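Put together, the whole flow is roughly this (run from the root of the Artemis distribution where the examples live):

cd examples/features/standard/jmx
mvn clean verify
# afterwards, start the generated instance manually...
target/server0/bin/artemis run
# ...and point JConsole at it
jconsole service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi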
I am a beginner with mosquitto (on an Alpine Linux machine).
After several searches I have not found an answer.
I would like to authorize MQTT messages from only one device on the network.
I tried renaming "aclfile.example" to "acl.acl" containing:
user "equipment IP"
topic test
But this did not restrict the connection to only this equipment (the server can still receive messages from others).
Ideas?
There are several things that probably need covering here:
Mosquitto ACLs deal in users and topics, not IP addresses.
By default (at least until v2.0.0 shipped this week) mosquitto allows clients to connect without specifying a username/password. You can disable this by adding allow_anonymous false to the config file.
Just renaming the example ACL file will not cause it to be loaded; you need to explicitly point to it in the config file with the acl_file directive.
You will also need to specify a password file with the password_file directive if you want to ensure that a specific username can only be used by authorised clients.
If you really want to limit access to a single local machine then you may do better using the firewall to accept external connections only from that IP address, e.g. iptables on Linux.
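For example, something along these lines (192.168.1.50 stands in for your device's IP) would drop MQTT connections from every other host:

# allow MQTT (1883) only from the one device, drop everything else
iptables -A INPUT -p tcp --dport 1883 -s 192.168.1.50 -j ACCEPT
iptables -A INPUT -p tcp --dport 1883 -j DROP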
There are a couple of ways to do this. The easiest would be to define one user, and disable anonymous access. Your mosquitto.conf file would look like this:
port 1883
allow_anonymous false
password_file /etc/mosquitto/pwfile
You might have other options in your config file for things like logging and persistence, but these lines would only let clients that had the user/password connect. You then set your one username/password up in the pwfile file. Here's a great blog post about how to do that: http://steves-internet-guide.com/mqtt-username-password-example/
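In short, you create the file with the mosquitto_passwd tool that ships with mosquitto (the username here is just an example):

# -c creates the file; omit it when adding further users
mosquitto_passwd -c /etc/mosquitto/pwfile myuser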
Keep in mind that your client node now has to also provide the username/password on the CONNECT packet, or be denied access.
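With the stock command-line clients that looks something like this (host, topic, and credentials are placeholders):

mosquitto_pub -h broker.local -t test -m "hello" -u myuser -P mypassword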
Another way would be to issue an SSL cert to your client, and only allow that cert in. Again, Steve has a great blog post about how to set that up: http://www.steves-internet-guide.com/creating-and-using-client-certificates-with-mqtt-and-mosquitto/
Currently, I have a docker container sending logs to a Logstash using gelf. Pretty standard configuration set in the docker-compose file used to create the container.
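For reference, the relevant part of my compose file looks roughly like this (the address is a placeholder):

services:
  app:
    image: my-app
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.local:12201"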
I'm investigating the feasibility of sending the logs of a docker container to more than one instance of ELK. This is not needed for production, but will greatly improve the quality of life of our dev teams.
Reading the docs, it seems that what I need is not possible (at least, they don't mention whether the gelf-address property accepts a list of URIs or not, and I must assume it doesn't while I look for more info).
Does anyone know if this can be achieved? Thanks!
Looking at docker's code here and here I'd say that currently only a single target address is supported.
// New creates a gelf logger using the configuration passed in on the
// context. The supported context configuration variable is gelf-address.
func New(info logger.Info) (logger.Logger, error) {
// parse gelf address
address, err := parseAddress(info.Config["gelf-address"])
Potentially, this project andviro/grayproxy looks like it could help you take a single GELF input, and forward it to multiple GELF collectors:
By default grayproxy configures the input at udp://0.0.0.0:12201 and no outputs. Outputs are added using -out flag and may be specified multiple times
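Going by that README, fanning one input out to two collectors would look something like this (the output addresses are placeholders):

grayproxy -in udp://0.0.0.0:12201 -out udp://logstash-dev:12201 -out udp://logstash-qa:12201

Your containers would then point their gelf-address at grayproxy instead of at Logstash directly.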
I have a two part question.
First, what Dart command should I use to start the VM service listening for requests, possibly giving it the host and port number to use? I'm using Windows, and I don't need the Observatory possibly interfering.
I'm currently trying to use this, after I cd into the project's directory:
dart --pause_isolates_on_start bicycle
And the second part of the question is: is it possible to verify that the VM service is there and listening on whatever port? I want to be able to send a request to the VM service from a WebSocket client and get back a response. After I give the above command, if I do a netstat it doesn't look like there is anything listening. And any attempt at connecting to the VM service gets a connection refused exception, same as if I hadn't even tried to start the VM service.
UPDATE:
I was looking at the IntelliJ plugin code to see how they did their connect, and saw that they used "ws://localhost:8181/ws". I was trying to use "ws://localhost:8181", and now it's finally getting past the handshake; the server was returning "200 OK" instead of "101" before. I'm assuming that I'm talking to the Observatory at this point, and not the VM service. I'm not sure, but at least I'm further along.
When it worked, I was using:
dart --enable-vm-service --pause_isolates_on_start bicycle.dart
Thanks!!
dart --help -v prints
--observe[=<port>[/<bind-address>]]
The observe flag is a convenience flag used to run a program with a
set of options which are often useful for debugging under Observatory.
These options are currently:
--enable-vm-service[=<port>[/<bind-address>]]
--pause-isolates-on-exit
--pause-isolates-on-unhandled-exceptions
--warn-on-pause-with-no-debugger
This set is subject to change.
Please see these options for further documentation.
It depends on what exactly you want to do, but as far as I know the Observatory just uses this service and if you don't access any of its features, it won't add additional load to the process.
There is a Dart client API https://pub.dartlang.org/packages/vm_service_client and documentation about the protocol at https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md
Perhaps this is what you are looking for
enum EventKind {
// Notification that VM identifying information has changed. Currently used
// to notify of changes to the VM debugging name via setVMName.
VMUpdate,
// Notification that a new isolate has started.
IsolateStart,
used with Events https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md#events
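As a quick connectivity check, a minimal sketch like this (assuming a Dart 2 SDK, the service on port 8181, and the /ws suffix you discovered) sends a getVM request over the WebSocket and prints the JSON-RPC response:

import 'dart:convert';
import 'dart:io';

main() async {
  // The VM service speaks JSON-RPC 2.0 over this WebSocket endpoint
  var socket = await WebSocket.connect('ws://localhost:8181/ws');
  socket.listen((data) => print(data));
  // getVM returns basic information about the VM and its isolates
  socket.add(jsonEncode({'jsonrpc': '2.0', 'id': '1', 'method': 'getVM', 'params': {}}));
}

If this prints a result object rather than an error, you are talking to the VM service itself.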
This seems to have something to do with the subnet/availability zone, but I'm new to using a VPC and it's eluding me.
VPC: 10.80.0.0/16
subnet: 10.80.1.0/24 (us-east-1b)
subnet: 10.80.2.0/24 (us-east-1a)
All instances are Windows Server 2012.
I have an internet facing ELB created within my VPC (10.80.0.0/16). There is one instance added from AZ us-east-1a, which is on subnet 10.80.2.0/24. The instance is running IIS 7.5, with an app running on port 80 and /health.aspx set up for use as the ELB health check.
Internal traffic on the VPC is flowing normally (unrestricted). I can request health.aspx from this instance from another instance in us-east-1b (10.80.1.0/24). I can also copy files from one instance to another.
Outbound traffic is unrestricted. I can RDP to the instance (when connected to our VPN) and open a browser and request a web page and get it.
The ELB says the instance is healthy and I can see the requests to health.aspx in the IIS logs. Both the ELB and the instance are configured with a security group that allows 80 and 443.
But if I try to request {elb-url}/health.aspx over the open internet the request just times out. Similarly, with an elastic IP associated to the instance, a request to {elastic-ip}/health.aspx times out.
@Chris, thanks for the response... as it happens, I've already worked it out with some help from a friend. I'll post my findings here for posterity (in case anybody else was similarly confused about how ELB works).
This would be more clear with a diagram. But the summary is that in each availability zone, you need to create both a public and a private subnet. When you add availability zones to your ELB, you need to select the public subnet for the zone. This had already been done in us-east-1b before I got to this setup, and I had simply missed this nuance of ELB configuration. So for the new availability zone, I had to do this...
us-east-1c
private subnet 10.1.3.0/24 (using nat instance as default route)
public subnet 10.1.4.0/24 (using internet gateway as default route)
Then my instance goes in the private subnet as expected.
And the linchpin of this whole thing is (drum roll...)
When I add us-east-1c to my ELB, I have to select the public subnet...10.1.4.0. Otherwise the instances will pass the health check (since the ELB can communicate with any instance within my entire VPC) but the responses from the servers cannot make it back out to the public internet.
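In CLI terms the fix amounts to something like this (the classic ELB API; the name and subnet ID are placeholders):

aws elb attach-load-balancer-to-subnets --load-balancer-name my-elb --subnets subnet-0abc123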
This is what is so confusing, and I still don't fully understand it. The instance can make a request for, say, www.google.com. I can RDP to it, open a browser, and get the web page. But a request from an outside host (like my laptop at my house) will die. Strange.
PS: another note... make sure you have enough NAT instance capacity for your load. I think we ran into an issue where our NAT instance simply ran out of ports because too many web servers were trying to route outbound connections to 3rd party APIs through it. Quite honestly, I'm not good enough at this level of network/OS troubleshooting to be sure. But my theory is that our 8 instances of IIS were holding too many connections open to the NAT instance. We were also abusing the NIC on that micro instance. I upped us to two large instances, one per AZ, and things smoothed back out. Both NAT instances are humming and we're not seeing the hung processes in IIS anymore.
Debugging this kind of issue is always a challenge. I have a few ideas to suggest based on what you have written (and generally apply to trying to solve this problem) that come from dealing with this a number of times.
Have you checked both the security groups and network ACLs? Bear in mind that all network ACLs need to be specified in both directions, as they are stateless. Also bear in mind that ELBs are a bit unique in this regard. While they are associated with your VPC, they sometimes need extra rules to ensure connectivity. In the past I have debugged this by opening all network ACLs on all ports, then removing these rules until it has stopped working in order to identify where the block was.
Security groups should be checked too. They are stateful but ensure that your load balancer has permissions to be hit from the web.
Have you checked this isn't an application configuration problem? I don't know how IIS comes out of the box but I would check it is set up to respond to all hostnames.
Check the ELB isn't an internal one, as that wouldn't be publicly addressable.
You say the ELB is configured with the health check, but it's worth checking you also have the listener set up for port 80? It's in a separate tab on the dashboard and you will need this in addition to the health check for connectivity through the ELB.
Hope one of these tips is useful to you.
I am thinking of using syslog in my rails applications. The process is outlined in this blog post:
Add gem 'SyslogLogger' to your Gemfile
Add require 'syslog_logger' to the top of config/environments/production.rb
Also uncomment the config.logger = line in the same file.
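So, if I understand the post, production.rb would end up looking something like this sketch (the application and program names are placeholders):

# config/environments/production.rb
require 'syslog_logger'

YourApp::Application.configure do
  # ... other production settings ...
  config.logger = SyslogLogger.new('railsapp_1')
end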
On my production box I have 4 Rails applications running under Passenger. If I switch to syslogger for all 4 of my applications then I am afraid that the log messages from all 4 applications will go to a single file and the log messages will be interleaved. Of course, I could use Splunk, but first I wanted to check if it is possible to get one log file for each of my Rails applications. That would be desirable for my situation.
Is that possible?
@cite's answer covers one option for distinguishing the apps. However, the syslog message framing actually has 2 fields that make it even easier: hostname and tag (more commonly known and used as program name).
hostname is set by the system syslog daemon before it forwards the message to a centralized server. It will be the same for all apps on the same system but may be handy as you grow past 1 server.
The more interesting one is tag. Your app defines tag when it instantiates SyslogLogger. For example:
SyslogLogger.new('app1')
The logger class will send to the system syslogd as app1, and that tag appears in both the local log file and any remote syslog destinations (without needing to modify the log message itself). The default is rails. All modern syslog daemons can filter based on tag; see program() for syslog-ng and $programname for rsyslog.
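For example, the rsyslog side of the split could look like this (the path and file name are placeholders; this uses rsyslog v7+ syntax):

# /etc/rsyslog.d/30-rails.conf
# write app1's messages to their own file, then discard them
if $programname == 'app1' then /var/log/app1.log
& stop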
Also, it's worth noting that SyslogLogger is basically wrapping the C openlog() and syslog() functions, so basically all post-log configuration happens on the system daemon. Generally that's desirable, but sometimes you may want your Rails app to log directly to a specific destination (like to simplify automated deployment, change attributes not allowed by syslog(), or run in environments without access to the system daemon).
We ran into a couple of those cases and made a drop-in Logger replacement that generates the UDP packets itself. The remote_syslog_logger gem is on GitHub.
Yes, by default, almost all Unix syslogds will write messages given in the user or local* facilities to the same file. However, every syslogd I know will allow you to specify logfiles on a per-facility basis, so you can have your first application log to local1.*, your second to local2.*, and so on.
Furthermore, newer syslog daemons like syslog-ng allow for splitting messages to different files by evaluating the message against a regular expression (write log strings which have railsapp_1 in them to /var/log/railsapp_1.log and so on).
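For example, the facility-based split in a classic /etc/syslog.conf looks like this (paths are placeholders):

# /etc/syslog.conf
local1.*    /var/log/railsapp_1.log
local2.*    /var/log/railsapp_2.log

and the content-based split in syslog-ng would be along these lines (the source name varies by distribution):

filter f_rails1 { match("railsapp_1" value("MESSAGE")); };
destination d_rails1 { file("/var/log/railsapp_1.log"); };
log { source(s_src); filter(f_rails1); destination(d_rails1); };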
So, configure your syslogd appropriately and you are done (the gory details of changing that configuration should be asked on serverfault.com if your system's man pages don't help you do it).