One message got stuck in one of the queues on Solace

In one of the queues in Solace, one message got stuck while the rest of the messages were consumed. Please help me troubleshoot this.

One common possibility is that the Solace appliance/VMR has already delivered the message to a consumer, and the consumer has failed to acknowledge it. In that case, the Solace appliance/VMR cannot deliver the message to another consumer until the original consumer acknowledges the message or disconnects its flow.
The "show queue <name> message-vpn <vpn> detail" command helps identify whether this is the case:
solace1> show queue q1 message-vpn default detail
Name : q1
Message VPN : default
Durability : Durable
Id : 3813
Type : Primary
Admin Ingress : Up
Admin Egress : Up
Access Type : Non-Exclusive
Owner :
Created by mgmt : Yes
All Others Permission : Delete (1111)
Quota (MB) : 700000
Respect TTL : No
Reject Msg to Sender on Discard : Yes
Bind Time Forwarding Mode : Store-And-Forward
Current Messages Spooled : 1
Current Spool Usage (MB) : 0.0001
High Water Mark (MB) : 0.0006
Total Delivered Unacked Msgs : 1 <=================== 1 Message has been delivered to an application but is unacknowledged.
Max Delivered Unacked Msgs Per Flow : 10000
Total Acknowledgments In-Progress : 0
Max Redelivery : 1
Consumer Ack Propagation : Yes
Reject Low-Priority-Msg : No
Reject Low-Priority-Msg Limit : 0
Low-Priority-Msg Congestion State : Disabled
Oldest Msg Id in Spool : 457000639
Newest Msg Id in Spool : 457000639
Max Msg Size Allowed (B) : 10000000
Bind Count : 1
Max Bind Count : 1000
Topic Subscription Count : 2
Network Topic : #P2P/QUE/q1
Egress Selector Present : No
Event Threshold                    Set Value        Clear Value
---------------------------------- ---------------- ----------------
Bind count                         80%(800)         60%(600)
Spool usage (MB)                   80%(560000)      60%(420000)
Reject Low-Priority-Msg Limit      80%(0)           60%(0)
Egress Flows
Client Name : perfhost/6588/#000b0001
Flow Status : Active-Consumer
Deliver From : input stream
Status Updates : Not Requested
No Local Delivery : No
Request Redelivery : No
Selector :
Window Size : 255
Last Connect Time : 2016-01-05 13:51:44 SGT
Activation Time : 2016-01-05 13:51:44 SGT
Flow Id : 4191
Last Msg Id Delivered : 457000639
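The consumer shown in the egress flow above is holding the message without acknowledging it, so the fix belongs in the consuming application. If it consumes via JMS (which Solace supports) in CLIENT_ACKNOWLEDGE mode, it must call acknowledge() on each processed message. A minimal sketch, assuming a JMS setup; the queue name q1 comes from the question, and obtaining the ConnectionFactory is environment-specific:

import javax.jms.*;

public class AckingConsumer {
    public static void consume(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        // CLIENT_ACKNOWLEDGE: the broker keeps the message in the
        // delivered-unacknowledged state until we acknowledge it.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("q1"));
        connection.start();
        Message msg = consumer.receive(5000); // wait up to 5 seconds
        if (msg != null) {
            // ... process the message ...
            msg.acknowledge(); // lets the broker remove it from the queue
        }
        connection.close();
    }
}

Alternatively, disconnecting the stuck consumer's flow (for example by closing that client) returns the unacknowledged message to the queue so it can be redelivered.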


Connecting to Janus server always hangs with a hangup message from Janus

I have a problem connecting to the janus.plugin.videoroom plugin from an iOS device using Swift.
Although every step appears to take place correctly, the Janus server sends the following message:
{
    "janus": "hangup",
    "session_id": 3201104494179497,
    "sender": 7759980289270843,
    "reason": "ICE failed"
}
and then disconnects.
Debugging the connection messages leads me to the following observations:
1- RTCIceGatheringState never changes to Completed
2- The generated candidates look like the following:
candidate:3215141415 1 udp 1686052607 w.x.y.z 57168 typ srflx raddr w.x.y.z rport 57168 generation 0 ufrag 340a network-id 1 network-cost 10
As you can see, the words video and audio are replaced by 1 and 0 respectively in the generated candidate.
Do you have any idea about these two observations?
And why does Janus send the "ICE failed" message?
I found that the reason for the "hangup" message was that I did not set the JSEP answer received from Janus on my peer connection.
After setting the answer JSEP, the "hangup" message was gone! A sketch of the call follows below.
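The question uses Swift, but the idea is the same in any WebRTC binding: apply the answer SDP carried in the jsep field of the Janus event to the peer connection. A sketch using the org.webrtc Java API, shown in Java only for illustration (the iOS equivalent is RTCPeerConnection's setRemoteDescription):

import org.webrtc.PeerConnection;
import org.webrtc.SdpObserver;
import org.webrtc.SessionDescription;

public class JanusAnswerHandler {
    // Apply the answer from the Janus "jsep" to the peer connection.
    // Without this, ICE never completes and Janus eventually sends
    // "hangup" with reason "ICE failed".
    public static void applyAnswer(PeerConnection pc, SdpObserver observer, String sdp) {
        SessionDescription answer =
                new SessionDescription(SessionDescription.Type.ANSWER, sdp);
        pc.setRemoteDescription(observer, answer);
    }
}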
1- RTCIceGatheringState never changes to Completed
The problem of RTCIceGatheringState never reaching the Completed state was caused by the continualGatheringPolicy option in the peer connection configuration, which was set to gatherContinually. After setting it to gatherOnce, the Completed state was seen! :) A sketch of the equivalent setting follows below.
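In org.webrtc Java terms again (for illustration only; the Swift configuration field is analogous), the setting looks like this:

import org.webrtc.PeerConnection;
import java.util.Collections;

public class IceConfig {
    // Gather ICE candidates once so RTCIceGatheringState can reach
    // Completed, instead of gathering for the connection's lifetime.
    public static PeerConnection.RTCConfiguration gatherOnceConfig() {
        PeerConnection.RTCConfiguration config =
                new PeerConnection.RTCConfiguration(Collections.emptyList());
        config.continualGatheringPolicy =
                PeerConnection.ContinualGatheringPolicy.GATHER_ONCE;
        return config;
    }
}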
2- The generated candidates are like following:
It seems this is normal; candidates may contain 0/1 in place of audio/video.

SQS - why limit the maximum message size?

Is there any reason why I should set the "Maximum message size" in AWS SQS to anything lower than the maximum? I'm not able to find a good one...
SQS provides many pros like batch message sending, delayed messages, polling, etc. Since it provides all of these, it definitely needs to limit message size. Here is how we handled it (a sketch follows below):
We check the message size, and if it is above 256 KB, we upload the message body to S3 with a unique ID as the file name and publish a pointer message to the queue such as { largeFile: true, id: (S3 file name) }. The consumer then checks whether largeFile is true and, if so, fetches the body from S3 and processes the data. Simple :)
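A minimal sketch of that pattern with the AWS SDK for Java v1 (the queue URL and bucket name are placeholders; the official Amazon SQS Extended Client Library for Java packages up the same idea):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class LargeMessageSender {
    private static final int MAX_SQS_BYTES = 256 * 1024; // SQS hard limit

    public static void send(String queueUrl, String bucket, String body) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        if (body.getBytes(StandardCharsets.UTF_8).length <= MAX_SQS_BYTES) {
            sqs.sendMessage(queueUrl, body); // small enough: send directly
            return;
        }
        // Too big for SQS: park the payload in S3 and send a pointer.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String key = UUID.randomUUID().toString();
        s3.putObject(bucket, key, body);
        sqs.sendMessage(queueUrl, "{\"largeFile\":true,\"id\":\"" + key + "\"}");
    }
}

The consumer does the reverse: it parses the message, and if largeFile is true, fetches the object with the given id from S3 before processing.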
Or, if you only need a queue, go with another message broker such as RabbitMQ, which does not impose such a strict size limit.

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
Consider the scenario below.
There are 3 clients: A, B, and C.
1)
All clients register the figure-1 listener to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp")
    .queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        // do work for the message: print, save to storage, etc.
        let timeOfSnapshot = snapshot.childSnapshot(forPath: "timestamp").value as? Double ?? startTime
        // save startTime to storage for next open.
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They send their messages at almost exactly the same moment.
All clients have fast Wi-Fi.
So, finally, the server data is saved as in figure-2:
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener code, I keep messages in storage together with the next listening timestamp, to avoid downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callbacks in order, as below?
Message 1
Message 2
Message 3
If it is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client receives messages as below:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client then has no chance to ever get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger the events in order for those, because that is the role of the queryOrdered functionality.
However, what happens when there are no nodes before the listener is registered, and new nodes are only added after that?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer.
I have spent almost one week on this. Please give me some advice.
The order in which the data for a query is returned is consistent and determined by the server, so all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will also receive it in the same order. The local client, however, will see events for its own write operations right away, before the data even reaches the database server.
In figure-2 it is actually quite simple: each node has a unique timestamp, so the nodes will be returned in the order of those timestamps. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.
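For illustration, here is a minimal sketch of such a listener with the Firebase Admin Java SDK (the question uses the iOS SDK, but the listener semantics are the same; the "messages" path is a placeholder). Note that onChildAdded also reports the key of the preceding child, which is how the server-determined order is communicated:

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class MessageListener {
    public static void listen(double startTime) {
        DatabaseReference ref = FirebaseDatabase.getInstance().getReference("messages");
        ref.orderByChild("timestamp").startAt(startTime)
           .addChildEventListener(new ChildEventListener() {
               @Override
               public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
                   // previousChildName is the key of the sibling this child
                   // sorts after; all remote clients see the same order.
               }
               @Override public void onChildChanged(DataSnapshot snapshot, String prev) {}
               @Override public void onChildRemoved(DataSnapshot snapshot) {}
               @Override public void onChildMoved(DataSnapshot snapshot, String prev) {}
               @Override public void onCancelled(DatabaseError error) {}
           });
    }
}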

housekeeper [deleted 105926 hist/trends, 0 items, 0 events, 0 sessions, 0 alarms, 0 audit items in 3.718012 sec, idle for 1 hour(s)]

My Zabbix server logs the following:
# sudo tail -f /var/log/zabbix/zabbix_server.log
housekeeper [deleted 105926 hist/trends, 0 items, 0 events, 0 sessions, 0 alarms, 0 audit items in 3.718012 sec, idle for 1 hour(s)]
and after this it fails to send alerts:
5243:20171213:180658.517 Failed sending data to the peer: DATA failed: 550
5243:20171213:180702.182 Failed sending data to the peer: DATA failed: 550
5243:20171213:180705.812 Failed sending data to the peer: DATA failed: 550
Can you help me understand why this occurs and give me a solution?
I solved this.
The cause was that I had configured email alerts so that each user received multiple notifications, in the sense that a user would get more than 3 alerts for a problem plus an OK alert. I reduced this to a single alert per problem per person.
This error occurs when the Zabbix alerter processes are more than 75% busy ("Zabbix alerter processes more than 75% busy"), at which point Zabbix is not able to send alerts to all recipients. (The "DATA failed: 550" lines are SMTP rejections from the mail server, which can happen when too many messages are pushed at it.)
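If the alert volume cannot be reduced, another knob to consider (my suggestion, not part of the original fix; it assumes Zabbix 3.4 or later, where the alerter process count became configurable) is to raise the number of alerter processes in zabbix_server.conf and restart the server:

# /etc/zabbix/zabbix_server.conf
# Number of pre-forked alerter processes (default: 3).
# More workers give headroom before the "alerter processes
# more than 75% busy" condition is reached.
StartAlerters=5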

Wanted to Get The Corresponding Message Details Against Message ID

As we know, we cannot read a message's contents on the Solace appliance; however, we can see the message ID.
So I want to get the corresponding message details for a message ID.
How can I get those details?
As we know, we cannot read a message's contents on the Solace appliance; however, we can see the message ID.
This is not accurate.
In order to protect confidential data, management users cannot view the content of messages. However, application users (with the necessary permissions) can create a browser to view the contents of a message without deleting it.
So I want to get the corresponding message details for a message ID. How can I get those details?
Use a queue browser to view the full contents of the message.
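For example, a minimal JMS sketch (Solace supports JMS; obtaining the ConnectionFactory is environment-specific, and myqueue is the queue name from the CLI example below):

import javax.jms.*;
import java.util.Enumeration;

public class BrowseQueue {
    public static void browse(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueBrowser browser = session.createBrowser(session.createQueue("myqueue"));
        connection.start();
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            Message msg = (Message) messages.nextElement();
            // Browsing is non-destructive: the message stays on the queue.
            System.out.println(msg.getJMSMessageID());
        }
        browser.close();
        connection.close();
    }
}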
Alternatively, as a management user, you can view basic information about the spooled messages:
solace> show queue myqueue message-vpn default messages detail
Name: myqueue
Message Id: 160443684
Date spooled: Jul 11 2016 12:34:02 UTC
Publisher Id: 19456
Sequence Number: n/a
Dead Message Queue Eligible: no
Content: 0.0000 MB
Attachment: 0.0001 MB
Replicated: no
Replicated Mate Message Id: n/a
Sent: no
Redeliveries: 0
