Monitoring performance of opensips presence server - opensips

I have to do some performance testing of the OpenSIPS server, but I am not able to get started.
For generating traffic I'll be using SIPp. I have not been able to find out how to monitor the performance of OpenSIPS in real time.
I know there is a tool, opensipsctl, but I am not able to run it. It gives the error below:
ERROR: Error opening OpenSIPS's FIFO /tmp/opensips_fifo
ERROR: Make sure you have the line 'modparam("mi_fifo", "fifo_name", "/tmp/opensips_fifo")' in your config
ERROR: and also have loaded the mi_fifo module.
And this is from the config file:
#### FIFO Management Interface
loadmodule "mi_fifo.so"
modparam("mi_fifo", "fifo_name", "/tmp/opensips_fifo")
modparam("mi_fifo", "fifo_mode", 0666)
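Given that config, it is worth checking from the shell that the file at that path really is a named pipe; a regular file at the same path produces exactly this error. A quick check, assuming the /tmp/opensips_fifo path from the config above:

```shell
# A named pipe shows up with type "p" in ls -l; test -p checks for that type directly.
if test -p /tmp/opensips_fifo; then
    echo "FIFO exists"
else
    echo "missing or not a FIFO"
fi
```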
I am trying to find the cause on the forums.
I also tried to install Nagios, but I was not able to add a service for OpenSIPS; basically, I could not understand how to do it.
I have another doubt regarding memory management. As I understand it, OpenSIPS uses a pre-configured amount of memory no matter how much memory is available, which I guess means I won't be able to measure the actual memory consumption. I even ran some load tests and saw spikes in CPU usage but none in memory usage. Please correct me if I have understood this wrong.
I really need some help to understand how to go about doing this.
Thanks

To resolve the FIFO-related error, first confirm whether /tmp/opensips_fifo exists and is actually a named pipe; note that touch would create a regular file, which OpenSIPS cannot use, so create it with mkfifo. If it is missing (or is a regular file, in which case delete it first), do this:
mkfifo /tmp/opensips_fifo
chmod 666 /tmp/opensips_fifo
/etc/init.d/opensips restart
And regarding your memory doubt: private memory is the memory used by one process, while shared memory is memory accessible by all processes (it is an IPC mechanism, see http://en.wikipedia.org/wiki/Shared_memory). Private memory is used for the temporary storage a process needs during certain processing, while shared memory is used to store data that must be accessible to all processes. The OpenSIPS init script has parameters for both memory pools.
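Those pool sizes are fixed at startup, which is why a load test shows CPU spikes but a flat memory line: OpenSIPS allocates its pools up front rather than growing with load. In Debian-style packaging they are usually set in the init defaults file and passed to the binary as -m (shared, in MB) and -M (private, in MB). The variable names below are an assumption based on common packaging; check your own init script:

```shell
# /etc/default/opensips (sketch; variable names and paths vary by distribution)
S_MEMORY=64   # shared memory pool in MB, passed to opensips as: -m 64
P_MEMORY=4    # private (per-process) memory in MB, passed as: -M 4
```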
Hope this helps.

Related

Memory usage of keycloak docker container

When we start the Keycloak container, it uses almost 700 MB of memory right away. I was not able to find more details on how and where it is using this much memory. I have a couple of questions:
Is there a way to find out which processes are taking more memory inside the container? I was looking at the file /sys/fs/cgroup/memory/memory.stat inside the container, which didn't give much info.
Is it normal for the Keycloak container to use this much memory, or do we need to do any tweaking in the configuration file for better performance?
I would also appreciate it if anyone has more findings that can be leveraged to improve the overall performance of the application.
Keycloak is a Java app, so you need to understand the Java VM's memory footprint first: What is the memory footprint of the JVM and how can I minimize it?
If you want to analyze Java memory usage, then Java VisualVM is a good starting point.
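If you do decide to cap the footprint, the knobs are the standard JVM heap and metaspace flags. In the WildFly-based Keycloak distribution these are typically set via JAVA_OPTS in bin/standalone.conf; the values below are illustrative, not recommendations:

```shell
# bin/standalone.conf (sketch) — lower the heap/metaspace ceilings for the JVM
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m"
```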
700 MB for Keycloak is normal. There is an initiative to move Keycloak to Quarkus (https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc), which will also reduce the memory footprint; it is still in preview, not generally available.
In theory you can switch to a different runtime (e.g. GraalVM), but then you may have different issues; it isn't an officially supported setup.
IMHO it would be overengineering to optimize your Keycloak memory usage; it is a Java app.

AWS server became slow after traffic increase

I have a single-page Angular app that makes requests to a Rails API service. Both are running on a t2.2xlarge Ubuntu instance. I am using a Postgres database.
We had an increase in traffic, and my Rails API became slow. Sometimes I get an error saying the Passenger queue is full for the Rails application.
Auto scaling on the server is working; three more instances are created. But I cannot trace this issue. I need root access to upgrade, which I do not have. Please help me with this.
As you mentioned, you are using the t2.2xlarge instance type. Firstly, I want to tell you that you should not use the T2 instance type for a production environment, because T2 instances run on CPU credits. Let's take a look at this:
What happens if I use all of my credits?
If your instance uses all of its CPU credit balance, performance remains at the baseline performance level. If your instance is running low on credits, your instance’s CPU credit consumption (and therefore CPU performance) is gradually lowered to the base performance level over a 15-minute interval, so you will not experience a sharp performance drop-off when your CPU credits are depleted. If your instance consistently uses all of its CPU credit balance, we recommend a larger T2 size or a fixed performance instance type such as M3 or C3.
I'm not sure you will actually run out of CPU credits, since you are using a 2xlarge size, but I think you should consider the fixed-performance instance types; the instance's performance may be one part of your problem. Use CloudWatch to monitor two metrics, CPUCreditUsage and CPUCreditBalance, to confirm whether this is the problem.
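Those two credit metrics can also be pulled with the AWS CLI rather than the console. A sketch (the instance ID and time range are placeholders, and the command requires configured AWS credentials):

```shell
# Hourly average CPU credit balance for one instance over one day
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2021-01-01T00:00:00Z \
  --end-time   2021-01-02T00:00:00Z \
  --period 3600 \
  --statistics Average
```

A balance that keeps trending toward zero under load is the signal to move off the T2 family.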
Secondly, how about your ASG? After the scale-out, did your service become stable? If so, I don't think you need to worry about this any more, because the ASG did its job.
Please check the following
If you are opening a connection to Database, make sure you close it.
If you are using jQuery, Bootstrap, DataTables, or other CSS/JS libraries, use the CDN links, like:
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/css/bootstrap-select.min.css">
This will reduce a great amount of load on your server. Do not copy jQuery or other external libraries onto your own server when you can fetch them directly from other servers.
There are a number of factors that can cause an EC2 instance (or any system) to appear to run slowly.
CPU Usage. The higher the CPU usage the longer to process new threads and processes.
Free Memory. Your system needs free memory to process threads, create new processes, etc. How much free memory do you have?
Free Disk Space. Operating systems tend to thrash when the file systems on system drives run low on free disk space. How much free disk space do you have?
Network Bandwidth. What is the average bytes in / out for your instance?
Database. Monitor connections, free memory, disk bandwidth, etc.
Amazon has CloudWatch which can provide you with monitoring for everything except for free disk space (you can add an agent to your instance for this metric). This will also help you quickly see what is happening with your instances.
Monitor your EC2 instances and your database.
You mention T2 instances. These have burstable CPUs, which means that if you have consistently high CPU usage, you will want to switch to fixed-performance EC2 instances. CloudWatch should help you figure out what you need (CPU, memory, disk, or network performance).
This is largely independent of AWS itself. It looks like your software needs more resources (RAM, storage I/O, network) and a single machine is not sufficient. You need to evaluate the metrics using CloudWatch and adjust the resources based on what the software requires.
It could also be memory leaks or processing leaks that lead to this. You may need to create a cluster or server farm to handle the load.
Hope it helps.

What determines the mosquitto.db file size limit

I am new to mosquitto and have a few questions I hope you all can help me with:
What determines the limit on the size of the persistence file in mosquitto? Is it system memory or disk space?
What happens when the persistence file gets larger than that limit? Can I transfer it to another server for temporary storage?
How would mosquitto use the transferred file to publish messages when it restarts?
I appreciate any feedback.
Thanks,
Probably a combination of both the filesystem's maximum file size and system/process memory, whichever is smaller. But I would expect the performance problems that would appear before you reached those limits to be a bigger concern.
Mosquitto probably crashes. If mosquitto exceeds the system/process memory limits then it's going to get killed by the OS or crash outright. I doubt there would be any benefit to moving the file to a different machine: if mosquitto crashes after hitting either of those limits, the file is likely to be corrupted and unreadable even if restarted on the same machine.
See answer 2.
In reality you should never come close to these limits; having that many in-flight messages means there are some very serious issues with the design of your whole system.
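The practical lever is to keep the persisted state small in the first place. The relevant mosquitto.conf options look roughly like this (a sketch; the values are illustrative, see the mosquitto.conf man page for defaults):

```
# mosquitto.conf — persistence-related settings
persistence true
persistence_location /var/lib/mosquitto/
autosave_interval 1800       # seconds between writes of the in-memory store to disk
max_queued_messages 1000     # per-client queue cap, so queued state cannot grow unbounded
```

With a sane max_queued_messages cap, the persistence file stays bounded by the number of clients rather than by how far behind they fall.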

Digital Ocean server memory usage above 50%

I am deploying a Flask-based website on a Digital Ocean server. The deployed website is mainly static pages, config files, and JSON.
This morning I found the memory usage has exceeded 51%. Here is the snapshot.
My memory is 512 MB. Would someone please instruct me how to lower the memory usage? Thanks so much!
Update: I've used the "top" command in the shell as suggested. Here is the snapshot; does it mean that the server itself has eaten up that memory?
The memory issue is not related to my application.
I just received the answer from Digital Ocean. Here it is:
Hi there!
Thank you for contacting us! We can help with any memory issues you're having!
Since the Droplet is set up with only 512MB of RAM, once the system and any installed services start, it doesn't take much to push it past 50%. As a result, I don't think what you're seeing is necessarily abnormal under the circumstances. This leaves a few options: the Droplet can be resized and made larger to provide more memory (see https://www.digitalocean.com/community/tutorials/how-to-resize-your-droplets-on-digitalocean), you can add swap space to use part of the Droplet's file system as RAM (see https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04), or you can review the applications and services running on the Droplet and attempt to optimize them to reduce memory use.
We hope this is helpful! Please let us know if there is anything else we can do!
Regards,
I am assuming you are running a Linux server. If so, you can use the top command. It shows you all of the running processes and the system resources they are using. You would then be able to optimize from there.
I found out the cause! Linux borrows unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine! If your application, or any other process needs more memory, Linux will automatically clear the cache and give memory for your application. Linux does this to speed up the system for you.
If, however, you find yourself needing to clear some RAM quickly to work around another issue, like a VM misbehaving, you can force Linux to nondestructively drop caches using:
echo 3 | sudo tee /proc/sys/vm/drop_caches
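The same effect is visible directly in /proc/meminfo: MemFree can be tiny while MemAvailable, the kernel's estimate of memory reclaimable for new workloads with cache included, stays large:

```shell
# MemFree counts only untouched pages; MemAvailable adds reclaimable cache,
# so it is the realistic number for "memory left for my application".
grep -E '^(MemTotal|MemFree|MemAvailable|Cached)' /proc/meminfo
```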

ejabberd: Memory difference between erlang and Linux process

I am running ejabberd 2.1.10 server on Linux (Erlang R14B 03).
I am creating XMPP connections in batches using a tool and sending messages randomly.
ejabberd is accepting most of the connections.
Even though the number of connections increases continuously, the value of erlang:memory(total) is observed to stay within a range.
But if I check the memory usage of ejabberd process using top command, I can observe that memory usage by ejabberd process is increasing continuously.
I can see that difference between the values of erlang:memory(total) and the memory usage shown by top command is increasing continuously.
Please let me know the reason for the difference in the memory shown.
Is it because of a memory leak? Is there any way I can debug this issue?
What is the additional memory (the difference between the erlang value and the top value) used for, if it is not a memory leak?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang VM's heap may have become fragmented. The gaps between allocations would count towards the OS process's size; it doesn't look like they are included in erlang:memory(total).
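One way to see where the gap goes is to compare the VM's own accounting with its allocator-level totals from a shell attached to the running node. This is a sketch: ejabberdctl debug attaches an Erlang shell, and recon is a third-party library whose presence on your node is an assumption:

```shell
ejabberdctl debug   # attach an Erlang shell to the running ejabberd node
# Then, at the Erlang prompt:
#   erlang:memory(total).           % bytes in use by Erlang terms
#   recon_alloc:memory(allocated).  % bytes the VM's allocators hold from the OS
# A large, growing difference between the two points at allocator overhead or
# fragmentation rather than a leak in Erlang data.
```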
