I have several websites hosted on the same domain.
Server configuration is 8 vCPUs, 30 GB memory.
Centos 7.
MySQL is hosted on a separate server.
I ran several load tests with JMeter from 4 computers: 200 users per computer, 800 users in total. The tested page normally loads in under 3 seconds.
About 10% of the requests failed, even though the web server's CPU peaked below 25% and the SQL server's CPU never went above 10%.
Another problem: I sometimes get the following error even when the server does not have many connections.
Is there any server or DNS setting I should check? Thank you.
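A 10% failure rate while both CPUs stay nearly idle often points at a connection or file-descriptor limit rather than raw capacity. As a first check (standard Linux locations; nothing here is specific to your stack):

```shell
# If any of these limits is below the test's concurrency, the kernel
# refuses or resets connections even though the CPU is idle.
ulimit -n                                    # per-process open file/socket limit
cat /proc/sys/net/core/somaxconn             # max accept-queue (listen backlog)
cat /proc/sys/net/ipv4/tcp_max_syn_backlog   # half-open (SYN) queue depth
```

Also look at the actual error JMeter records for the failed samples (timeout vs. connection refused); that distinguishes a full backlog from a slow application.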
Screenshot
Locally (outside of Docker) it takes less than 5 seconds on average, for the same code and the same endpoint (MongoDB Atlas).
As a workaround I significantly increased the JVM arguments MaxMetaspaceSize, Xmx and Xms, but that made no difference. I'm using openjdk:17-jdk-alpine.
I used VisualVM to identify Socket.read() as the bottleneck; I just don't know a viable solution.
We have multiple bare-metal servers as part of our Docker and OpenShift (Kubernetes) cluster. For some reason the pods are extremely slow only on the bare-metal nodes; the traditional VMs hosted on ESXi servers work flawlessly. The pods take very long to come up, and liveness probes fail often. The bare-metal nodes have 72 cores, 600 GB RAM and 2 network ports, and are underutilised: load average stays around 10-20 and free RAM above 300-400 GB at all times. sar output looks normal, and /var/log/messages has nothing unusual. I'm not able to nail down what's causing the slowness.
Is there a Linux/Docker command that would help here, and what do I look for? Could this be a noisy-neighbour problem, or do I need to tweak some kernel parameter(s)? The slowness is constant, not intermittent. We have worked closely with RH support and got nothing from that exercise. Any suggestions welcome.
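Since these are bare-metal nodes, hypervisor steal time can't be the culprit; two other common causes of "fast hardware, slow pods" are worth ruling out. A minimal sketch, assuming standard sysfs/cgroup paths (works for cgroup v1 or v2):

```shell
# Check 1: CPU frequency governor. Bare-metal boxes often default to
# "powersave", keeping mostly-idle cores at low clocks, while ESXi
# guests always run at whatever speed the host grants them.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
  || echo "cpufreq not exposed"
# Check 2: CFS quota throttling. A pod with a tight CPU limit stalls
# even on an idle 72-core node; a growing nr_throttled is the clue.
grep -H . /sys/fs/cgroup/cpu.stat /sys/fs/cgroup/cpu/cpu.stat 2>/dev/null | head
```

If the governor reads powersave, try cpupower frequency-set -g performance; if nr_throttled keeps growing, raise or remove the pods' CPU limits.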
My InfluxDB measurement has 24 field keys and 5 tag keys.
I run 'select last(cpu) from mymeasurement' and found:
When no client is writing data into it, the query takes around 2 seconds to return a result.
But when 95 clients are writing data (one batch per 5 seconds each), the query takes more than 10 seconds to return. Is that normal?
Note:
My system is a CentOS 7 VM on XenServer with 4 vCPU cores and 8 GB RAM; top shows about 30% CPU while the clients are writing.
Some ideas:
Check the vCPU configuration of the other VMs running on the same host. VMs that don't need extra vCPUs should be configured with a single vCPU, which can reduce scheduling latency.
If your DB server really needs 4 vCPUs and the host shows very little CPU use during queries, check the VM's storage and memory configuration in case the slowness comes from swap use, especially if the swap partition sits on a virtual disk accessed over the network via iSCSI or NFS.
It might also be a memory allocation issue between the VM and the server application. If XenTools is installed in the VM, try a system without XenTools to rule out latency caused by the XenTools drivers.
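To test the swap theory from inside the VM, the standard /proc files are enough (a quick sketch, nothing InfluxDB-specific):

```shell
# Is the VM actually swapping? SwapTotal vs SwapFree shows current
# usage; pswpin/pswpout are cumulative pages swapped in/out since
# boot, so take two readings during a slow query and compare.
grep -i '^Swap' /proc/meminfo
grep -E '^pswp(in|out)' /proc/vmstat
```

If pswpin/pswpout climb while the queries are slow, the VM is thrashing even though its CPU looks idle.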
I run a CentOS 5.7 machine (64-bit) with 24 GB RAM and 4x SAS drives in a RAID 10 setup.
This machine runs nginx/1.0.10, php-fpm and XCache. About a month ago the RAM usage pattern of this machine changed.
Every few hours the 'cache' portion of RAM is flushed, and this happens exactly when the 'inode table usage' drops. I'm pretty sure these drops are related (see the 2 attached images).
This server hosts quite a lot of small files (20M files, each a few KB). Not many files are deleted (maybe 100 per hour, a few MB in total), not enough to account for the huge inode table drops.
I also have no cron jobs running that could cause these drops.
sar -r output: http://pastebin.com/C4D0B79i
My question: why are these huge RAM/inode usage drops happening, and how can I get nginx/PHP to use all of my server's RAM?
EDIT: I have put my configs here: http://pastebin.com/iEWJchc4 and the output of lsof here: http://hostlogr.com/lsof.txt. The thing I do notice is the very large number of php-fpm processes that map /dev/zero, which is specified in my XCache configuration. Could that possibly be wrong?
Solved it by setting vm.zone_reclaim_mode = 0.
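For anyone hitting the same cache/inode drops: on NUMA machines, zone reclaim discards a node's page cache under local memory pressure. To make the fix survive a reboot, a sysctl fragment like this should work (stock CentOS location; load it with 'sysctl -p'):

```
# /etc/sysctl.conf - keep NUMA zone reclaim off so the page cache and
# inode/dentry caches are not dropped per-node under local pressure
vm.zone_reclaim_mode = 0
```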
This seems so basic that people would be screaming about it, but searching the web turned up nothing, and I have tested it on several networks and computers. We are having an issue where accessing resources via their .local URL is very slow; if we use the direct IP address we don't see these delays.
In our stripped-down test setup, the device and the computer are on the same switch and are the only devices on it. The same thing occurs when we are not in this very limited network configuration. On Mac OS X Lion, on the command line, we get these results:
With direct ip:
curl 10.101.62.42 0.01s user 0.00s system 18% cpu 0.059 total
With bonjour name:
curl http://xrx0000aac0fefd.local 0.01s user 0.00s system 0% cpu 5.063 total
It is consistently just above 5 seconds per request to resolve. It does not matter which device we try to connect to; the same thing seems to happen in our iPhone app, and it is just as slow from Python scripts. Safari, however, seems to resolve the names quickly.
We could resolve once and then use the IP address but that first request will still be unacceptably slow, and I don’t think this is the way Bonjour is supposed to work.
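One way to confirm the 5 seconds is spent in name lookup rather than in the transfer is curl's timing variables; a small sketch (the helper name time_dns is ours):

```shell
# Print curl's timing breakdown for a URL. If "dns" is ~5 s while
# "connect" and "total" add only milliseconds, the delay is entirely
# in resolving the .local name.
time_dns() {
  curl -o /dev/null -s \
    -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
    "$1"
}
```

Compare time_dns http://xrx0000aac0fefd.local/ against time_dns http://10.101.62.42/ to see where the time goes.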
We are not exactly sure when this started happening, but it was not always this way.
Edit: Another data point. On Snow Leopard it is not slow at resolving:
$ time curl http://hp1320.local > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
101 2848 0 2848 0 0 15473 0 --:--:-- --:--:-- --:--:-- 36512
real 0m0.201s
user 0m0.005s
sys 0m0.009s
This is resolved in iOS 5 and Lion 10.7.2, which is a huge relief. It's unfortunate that users still on iOS 4.3 will get the slow behavior. I guess it's another reason to upgrade.
Do the hosts you mentioned show up when you browse for them? Enumeration should be pretty quick:
dns-sd -B _http._tcp
Maybe something is slowing down name resolution. If you query the mDNS responder directly with dig, it should return the correct address almost instantly:
dig A xrx0000aac0fefd.local @224.0.0.251 -p 5353
Failing that, run tcpdump and see whether some device is spewing multicast packets onto the network.