I am using OpenSIPS 1.6 on CentOS 5.8.
Under certain conditions I see a lot of packets queued in the receive queue that never get processed.
I am monitoring this with the "netstat" command.
Looking at the SIP trace, I found that OpenSIPS could not reply to incoming messages, and when it did reply, the reply came very late.
What parameters should I observe and optimize to handle this kind of situation (very high traffic hitting the switch)?
Thanks.
Possible reasons for slow UDP/TCP queue processing:
your OpenSIPS processes are in a deadlock (is CPU usage at 100%?)
not enough memory! Watch the log file for any memory-related errors!
opensipsctl fifo get_statistics shmem: => to monitor shared memory usage
opensipsctl fifo get_statistics tm: => to see how many transactions are building up
not enough processes! Consider increasing the number of children.
To conclude: OpenSIPS 1.6 is old (released in 2009) and no longer supported, so some of the above MI commands might not even work. You should consider upgrading to 1.11; it is stable, has lots of great features, and is an LTS release.
I work on an IOCP server on Windows, and I have to send a buffer to all connected sockets.
The buffer is small, up to 10 bytes. When I get the completion notification for each WSASend from GetQueuedCompletionStatus, is there a guarantee that the buffer was sent in one piece by a single WSASend? Or should I add code that checks whether all 10 bytes were sent, and posts another WSASend if necessary?
There is no guarantee but it's highly unlikely that a send that is less than a single operating system page size would partially fail.
Failures are more likely if you're sending a buffer longer than a single operating-system page and you're not actively managing how many outstanding operations you have versus how many your system can support before running out of "non-paged pool" or hitting the "I/O page lock limit".
It's only possible to recover from a partial failure if you never have any other sends pending on that connection.
I tend to check that the value is as expected in the completion handler and abort the connection with an RST if it's not. I've never had this code execute in production and I've been building lots of different kinds of IOCP based client and server systems for well over 10 years now.
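For illustration, here is the same defensive pattern sketched in Java using NIO.2's AsynchronousSocketChannel (which is itself built on an IOCP on Windows). This is only a minimal sketch of "check the completion count and re-post the remainder", not the Win32 code from the question; the endpoint and port are hypothetical.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;

    public class PartialWriteDemo {

        // Writes the whole buffer, re-posting a write whenever a completion
        // reports fewer bytes than remained (the analogue of posting WSASend again).
        static void writeFully(AsynchronousSocketChannel ch, ByteBuffer buf) {
            ch.write(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesWritten, ByteBuffer b) {
                    if (b.hasRemaining()) {
                        // Partial completion: post another write for the rest.
                        ch.write(b, b, this);
                    }
                    // Otherwise all bytes were accepted; nothing more to do.
                }

                @Override
                public void failed(Throwable exc, ByteBuffer b) {
                    // Abort the connection rather than risk corrupting the stream.
                    try { ch.close(); } catch (IOException ignored) { }
                }
            });
        }

        public static void main(String[] args) throws Exception {
            AsynchronousSocketChannel ch = AsynchronousSocketChannel.open();
            ch.connect(new InetSocketAddress("localhost", 9000)).get(); // hypothetical endpoint
            writeFully(ch, ByteBuffer.wrap(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }));
            Thread.sleep(1000); // crude wait for the async completion in this demo
        }
    }

Note that this API allows only one outstanding write per channel, which lines up with the point above: recovering from a partial completion is only possible when no other sends are pending on that connection.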
We operate two dual-node brokers, each broker having quite different queues and workloads. Each box has 24 cores (with hyper-threading) of Xeon E5645 @ 2.4GHz with 48GB RAM, connected by gigabit LAN with ~150μs latency, running RHEL 5.6, RabbitMQ 3.1, and Erlang R16B with HiPE off. We tried with HiPE on, but it made no noticeable performance impact and was very crashy.
We appear to have hit a ceiling on our message rates of between 1,000/s and 1,400/s, both in and out. This is broker-wide, not per-queue. Adding more consumers doesn't improve overall throughput; it just gives that particular queue a bigger slice of this apparent "pool" of resource.
Every queue is mirrored across the two nodes that make up the broker. Our publishers and consumers connect to both nodes equally, in a persistent way. We notice an ADSL-like asymmetry in the rates too: if we manage to publish messages at a high rate, the delivery rate drops to high double digits. Testing with an unmirrored queue gives much higher throughput, as expected. Queues and exchanges are durable; messages are not persistent.
We'd like to know what we can do to improve the situation. The CPU on the box is fine: beam takes a core and a half for one process, and another couple of processes take about 80% of a core each. The rest of the box is essentially idle. We are using ~20GB of RAM in userland, with the system cache filling the rest. IO rates are fine. The network is fine.
Is there any Erlang/OTP tuning we can do? delegate_count is at the default of 16; could someone explain what this does in a bit more detail, please?
This is difficult to answer without knowing more about how your producers and consumers are configured, which client library you're using, and so on. As discussed on IRC (http://dev.rabbitmq.com/irclog/index.php?date=2013-05-22) a minute ago, I'd suggest you attempt to reproduce the topology using the MulticastMain Java load-test tool that ships with the RabbitMQ Java client. You can configure multiple producers/consumers, message sizes, and so on. I can certainly get 5k messages/s out of a two-node cluster with HA on my desktop, so this may be a client (or application code) related issue.
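For reference, below is a minimal hand-rolled analogue of such a load test using the RabbitMQ Java client. The node name and queue are hypothetical; it declares a durable queue and publishes non-persistent messages (matching the setup described above), and it only measures the client-side publish rate, so treat it as a rough probe rather than a substitute for MulticastMain.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class RateProbe {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("broker-node-1"); // hypothetical node name

            Connection conn = factory.newConnection();
            Channel ch = conn.createChannel();

            // Durable queue (survives broker restart), as in the setup above.
            ch.queueDeclare("rate.test", true, false, false, null);

            byte[] body = new byte[256];
            int n = 100000;
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                // Null properties: no delivery mode set, i.e. non-persistent messages.
                ch.basicPublish("", "rate.test", null, body);
            }
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("published %d messages at %.0f msg/s%n", n, n / secs);

            conn.close();
        }
    }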
How is receive implemented internally in the Erlang runtime?
When a process is waiting for a message, execution hangs on the receive.
Is the receive done via blocking I/O or asynchronous I/O?
If the former, it means an OS thread is blocked; if many processes hang on receiving, performance would suffer from thread context switching, and the operating system's thread limit might even be reached.
Erlang processes do not correspond to OS threads or processes. They are implemented as internal structures of the Erlang VM and are scheduled by the Erlang VM. By default, the number of OS threads started by the Erlang VM equals the number of CPUs. When an Erlang process is waiting for a message, no OS process or thread is blocked.
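Erlang's mailbox and scheduler are internal to the VM, but a rough analogy in another runtime may help. The sketch below (assuming a Java 21 runtime) uses virtual threads: take() parks only the lightweight virtual thread, and the carrier OS thread is released to run other work, much like an Erlang process waiting in receive. The BlockingQueue here merely stands in for Erlang's per-process mailbox.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MailboxDemo {
        public static void main(String[] args) throws Exception {
            BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

            // A virtual thread "receiving": take() blocks this virtual thread only;
            // the underlying OS thread is freed to schedule other virtual threads.
            Thread receiver = Thread.ofVirtual().start(() -> {
                try {
                    String msg = mailbox.take();
                    System.out.println("got: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            mailbox.put("hello"); // the "send"
            receiver.join();
        }
    }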
Hi, all.
First of all, please excuse any English mistakes in the following description; I'm not a native speaker, so I can't write it perfectly.
I'm trying to create a .NET (4.0) service for remote, transactional, asynchronous reception of recoverable messages from several queues. First I call the BeginPeek method and then the Receive method inside a TransactionScope (which implicitly uses MSDTC).
The problem is the mqsvc.exe of the host machine (Win7/2k8r2 SP1) running my service, a machine which does nothing else (and certainly nothing related to the reception/hosting of messages; its MSMQ is empty and clean). The memory allocation of mqsvc.exe grows and it never releases any memory. All the MSMQ registry keys for the cache-cleaning intervals are set to a short value (about one minute).
I tried several options:
with local and remote MSDTC (remote meaning, obviously, the machine hosting the messages).
with the COM library mqoa.dll instead of .NET, in order to use explicit MSDTC transactions for MSMQ.
with several different machines (all Win7/2k8r2 SP1).
There are no exceptions during the execution of my service, and all resources that can be closed and/or disposed are closed/disposed as soon as possible. The memory allocation of my service itself is stable.
In all cases it is the same problem. How can I solve it?
Thanks in advance.
Vincent.
Problem solved on MSDN.
MSDN Thread
The following hotfix addresses this problem:
High memory usage by the Message Queuing service when you perform a remote transactional read on a Message Queuing 5.0 queue in Windows 7 or in Windows Server 2008 R2
May I know what typical metrics application developers usually find interesting to monitor via JMX, other than:
CPU Utilization
Memory consumption
Nicholas
I would add:
Class loader behaviour
Threads
Memory usage over time (you can see GC runs and detect memory leaks)
Stack trace of a specified thread
JVM uptime, OS information
All JMX data exposed by your application
Garbage Collector Activity (duration and frequency)
Deadlock Detection
Connector Traffic (in / out)
Request processing time
Number of sessions (in relation to max configured)
Average Session duration
Number of sessions rejected
Is the web module running?
Uptime (if less than 5 minutes, then someone restarted the JVM)
Connector threads relative to the max. available connector threads
Datasource Pools: Usage (relative), Lease time
JMS: Queue size, DLQ size
See also Jmx4Perl's predefined Nagios checks for further metrics; a sketch of reading a few of the values above follows.
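A minimal sketch of reading several of the metrics listed above (uptime, GC activity, deadlock detection, heap usage) through the standard platform MXBeans:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadMXBean;

    public class JvmMetrics {
        public static void main(String[] args) {
            // Uptime (the "restarted in the last 5 minutes?" check from the list above).
            RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
            System.out.println("uptime ms: " + rt.getUptime());

            // Garbage collector activity: cumulative count and duration per collector.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }

            // Deadlock detection.
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long[] deadlocked = threads.findDeadlockedThreads(); // null if none
            System.out.println("deadlocked threads: "
                    + (deadlocked == null ? 0 : deadlocked.length));

            // Heap usage snapshot (sample this over time to spot leaks).
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("heap used/max: " + heap.getUsed() + "/" + heap.getMax());
        }
    }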
JMX can be used to expose any of the platform MXBean metrics.
Refer to the Java documentation, section "Method Summary": http://docs.oracle.com/javase/7/docs/api/java/lang/management/ManagementFactory.html
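As a complement, the platform MBeanServer can be queried generically, which also picks up application-specific MBeans ("all JMX data exposed by your application" from the list above). A small sketch:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class MBeanDump {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();

            // A null/null query matches every registered MBean, platform and application alike.
            for (ObjectName name : server.queryNames(null, null)) {
                System.out.println(name);
            }

            // Reading a single attribute generically, e.g. the loaded class count:
            Object loaded = server.getAttribute(
                    new ObjectName("java.lang:type=ClassLoading"), "LoadedClassCount");
            System.out.println("loaded classes: " + loaded);
        }
    }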