I have this component that publishes messages to a broker, and I want to export the same messages to Prometheus.
public class ModuleAMessagePublisher {

    @Inject
    @InternalBroker
    private MessagePublisher messagePublisher;

    public void publish(String topic, final String message) {
        log.info("<><><><> (MQ) PUBLISH MODULEA MESSAGE: {} <><><><>", message);
        messagePublisher.publish(topic, message);
        // code for prometheus to be added here
    }
}
I'm new to Prometheus and I'm not sure whether this is possible or how it can be done.
You can use the HiveMQ Prometheus extension to expose all the metrics from your HiveMQ server, as described here:
https://www.hivemq.com/extension/prometheus-extension/
This will enable a /metrics endpoint on the HiveMQ server which can be scraped by your Prometheus server.
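If you would rather instrument the publisher itself (the spot marked "code for prometheus to be added here"), a minimal sketch with the Prometheus Java simpleclient could look like the following; the metric name, the port 9091, and the simpleclient_httpserver dependency are my assumptions, not something from your setup:

import java.io.IOException;

import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class ModuleAMetrics {

    // Counts every message published by ModuleAMessagePublisher, labelled by topic.
    static final Counter PUBLISHED_MESSAGES = Counter.build()
            .name("modulea_published_messages_total")
            .help("Number of messages published to the internal broker.")
            .labelNames("topic")
            .register();

    // Exposes /metrics on port 9091 so a Prometheus server can scrape this JVM directly.
    public static void startExporter() throws IOException {
        new HTTPServer(9091);
    }
}

Inside publish(...) you would then call ModuleAMetrics.PUBLISHED_MESSAGES.labels(topic).inc() right next to messagePublisher.publish(topic, message).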
My application is deployed in the Azure cloud with an Azure Service Bus configuration.
When I check the logs I can see many entries related to "connection aborted", at info, warning, and error level:
com.azure.core.amqp.exception.AmqpException: org.apache.qpid.proton.engine.TransportException: connection aborted
I tried to track down this error, but couldn't find any specific explanation of why this log appears.
Using the following approach, I wasn't getting any exceptions while using Azure Service Bus with Spring Boot JMS.
I configured the Service Bus in the application.properties file as below:
spring.jms.servicebus.connection-string=Endpoint=<Connection String>
spring.jms.servicebus.pricing-tier=<Price Tier>
I have a simple REST API which just sends a message to the Azure Service Bus:
@RestController
public class PostController {

    private static final String DESTINATION_NAME = "<queueName>";

    @Autowired
    private JmsTemplate jmsTemplate;

    @PostMapping("/messages")
    public String postMessage(@RequestParam String message) {
        jmsTemplate.convertAndSend(DESTINATION_NAME, new Test(message));
        return message;
    }
}
Here Test is just a class with a single variable, name, and we are sending an object of class Test.
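For completeness, a minimal sketch of what that Test payload class could look like; the single field name is my assumption. Making it Serializable is what lets JmsTemplate's default SimpleMessageConverter turn it into a JMS ObjectMessage in convertAndSend:

import java.io.Serializable;

// Hypothetical payload class for the controller above; only one field is assumed.
public class Test implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String name;

    public Test(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}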
I added an MQTT interceptor to my Artemis broker in order to intercept MQTT client connections:
public class SimpleMQTTInterceptor implements MQTTInterceptor
{
    @Override
    public boolean intercept(final MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException
    {
        System.out.println("MQTT Interceptor gets called");
        if (mqttMessage instanceof MqttConnectMessage)
        {
            System.out.println("MQTT connection intercepted");
        }
        return true;
    }
}
My Apache Paho client connects to the broker via "ws://0.0.0.0:61614".
My problem is that only messages published to topics are intercepted.
Why doesn't this intercept the CONNECT message?
The current version of ActiveMQ Artemis (2.2.0 at the time I write this response) only supports intercepting MQTT PUBLISH control packets. I opened a pull request adding that feature, so it should be present in future versions.
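For reference, the interceptor class also has to be registered with the broker; a minimal sketch of the broker.xml snippet, assuming SimpleMQTTInterceptor is packaged on the broker's classpath (the package name here is a placeholder):

<!-- inside <core> in broker.xml -->
<remoting-incoming-interceptors>
    <class-name>com.example.SimpleMQTTInterceptor</class-name>
</remoting-incoming-interceptors>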
I am trying to monitor my Dropwizard web service using Ganglia. I ran gmond and gmetad on my local machine and was able to see basic metrics (e.g. CPU, memory usage) in ganglia-web.
I also added a Ganglia reporter to my service according to this, but nothing shows up in ganglia-web.
private static final MetricRegistry metrics = new MetricRegistry();
private final Timer ingest = metrics.timer("MyApp");

try {
    final GMetric ganglia = new GMetric("localhost", 8649, GMetric.UDPAddressingMode.MULTICAST, 1);
    final GangliaReporter gangliaReporter = GangliaReporter.forRegistry(metrics)
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build(ganglia);
    gangliaReporter.start(1, TimeUnit.MINUTES);
} catch (Exception e) {
    LOGGER.error("Can not initiate GangliaReporter", e);
}
It looks to me like you entered a normal network address but told GMetric to expect a multicast address. Here is what I used (and it works):
GMetric ganglia = new GMetric("192.168.0.40", 8649, UDPAddressingMode.UNICAST, 1);
If this does not help you, please show your gmond.conf (the UDP channel config).
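For comparison, here is a minimal sketch of the whole wiring with unicast addressing, assuming the Dropwizard metrics-ganglia module and a gmond reachable at 192.168.0.40:8649 as in the line above; it also records a sample so the timer has non-zero data to report:

import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.codahale.metrics.ganglia.GangliaReporter;
import info.ganglia.gmetric4j.gmetric.GMetric;
import info.ganglia.gmetric4j.gmetric.GMetric.UDPAddressingMode;

public class GangliaWiring {

    public static void main(String[] args) throws Exception {
        MetricRegistry metrics = new MetricRegistry();
        Timer ingest = metrics.timer("MyApp");

        // Unicast mode, matching a plain host address rather than a multicast group.
        GMetric ganglia = new GMetric("192.168.0.40", 8649, UDPAddressingMode.UNICAST, 1);
        GangliaReporter reporter = GangliaReporter.forRegistry(metrics)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(ganglia);
        reporter.start(1, TimeUnit.MINUTES);

        // Record a sample so the "MyApp" timer shows up with real values in ganglia-web.
        Timer.Context context = ingest.time();
        Thread.sleep(100);
        context.stop();
    }
}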
I am using Spring AMQP to publish my messages to RabbitMQ through an outbound gateway. I have set publisher confirms on the connection factory and added my custom callback listener.
The problem is that my CorrelationData is always null, and I can't add any correlation data on an outbound gateway; that is only possible for an outbound channel adapter.
Do publisher confirms even work for an outbound gateway?
EDIT
My configuration is below. I looked through the Spring Integration code and yes, publisher confirms are enabled. The problem is: what do I do when I receive a NACK?
Because of the outbound gateway I don't need a correlation id to handle the response; there is already a thread listening on a temporary reply queue for it.
What exactly is the point of using publisher confirms with an outbound gateway? If no response comes back or my Rabbit nodes go down I will get exceptions anyway. Is there a scenario in which I will lose messages?
<rabbit:connection-factory id="rabbitConnectionFactory"
                           host="someip" port="5672"
                           username="username"
                           password="password"
                           virtual-host="vhost"
                           publisher-confirms="true"/>

<rabbit:admin connection-factory="rabbitConnectionFactory"/>

<rabbit:template id="amqpTemplate" connection-factory="rabbitConnectionFactory"
                 confirm-callback="messagesConfirmCallback"/>

<int-amqp:outbound-gateway
        request-channel="channel"
        amqp-template="amqpTemplate"
        exchange-name="exchange"
        routing-key-expression="headers['queueSpecific']+'.queue'">
    <amqp:request-handler-advice-chain>
        <ref bean="retryAdvice"/>
    </amqp:request-handler-advice-chain>
</int-amqp:outbound-gateway>
And my callback is also simple:
@Component
public class MessagesConfirmCallback implements RabbitTemplate.ConfirmCallback {

    private final static Logger LOGGER = LoggerFactory.getLogger(MessagesConfirmCallback.class);

    @Override
    public void confirm(CorrelationData correlationData, boolean ack) {
        if (ack) {
            LOGGER.info("ACK received");
        } else {
            LOGGER.info("NACK received");
        }
    }
}
Unfortunately, I don't see an easy workaround with the gateway; the underlying RabbitTemplate only supports adding correlation data on the send() methods, not the sendAndReceive() methods.
The two options I can think of are: (1) use a pair of outbound and inbound adapters (instead of the gateway), but you'll have to do your own request/reply correlation in that case;
or (2) use RabbitTemplate.execute() and, in the doInRabbit callback, add code similar to that in RabbitTemplate.doSendAndReceive(), while setting the correlation data as is done in doSend().
I opened a JIRA Issue.
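For reference, this is the send-side API mentioned above; a small sketch using the template directly (not through the gateway), where the exchange and routing key are placeholders:

import java.util.UUID;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.support.CorrelationData;

public class DirectSendSketch {

    // Only the send-style RabbitTemplate methods accept CorrelationData; whatever is
    // passed here is what the ConfirmCallback above receives on ack/nack.
    // (In newer Spring AMQP versions CorrelationData lives in
    // org.springframework.amqp.rabbit.connection instead.)
    public void send(RabbitTemplate template, String payload) {
        CorrelationData correlation = new CorrelationData(UUID.randomUUID().toString());
        template.convertAndSend("exchange", "someKey.queue", payload, correlation);
    }
}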
I am using SignalR for my chat application. I wanted to play with Redis and SignalR, but I cannot find a working example where I can send a message to a specific connectionId. The code below works for a single server instance, but when I make it a web garden with 3 processes it stops working, because the server instance that receives the message cannot find the connectionId for the destination UserId to send the message to.
private readonly static ConnectionMapping<string> _connections = new ConnectionMapping<string>();

public void Send(string sendTo, string message, string from)
{
    string fromclientid = Context.QueryString["clientid"];
    foreach (var connectionId in _connections.GetConnections(sendTo))
    {
        Clients.Client(connectionId).send(fromclientid, message);
    }
    Clients.Caller.send(sendTo, "me: " + message);
}

public override Task OnConnected()
{
    int clientid = Convert.ToInt32(Context.QueryString["clientid"]);
    _connections.Add(clientid.ToString(), Context.ConnectionId);
    return base.OnConnected();
}
I have used the examples below to set up my box and code, but none of them show how to send from one client to a specific client or to a group of specific clients.
http://www.asp.net/signalr/overview/performance-and-scaling/scaleout-with-redis
https://github.com/mickdelaney/SignalR.Redis/tree/master/Redis.Sample
The ConnectionMapping instance in your Hub class will not be synced across different SignalR server instances. You need to use permanent external storage such as a database or an Azure table. Refer to this link for more details:
http://www.asp.net/signalr/overview/hubs-api/mapping-users-to-connections