This seems so basic that people would be screaming about it, yet searching the web turned up nothing; I have tested it on several networks and computers. We are having an issue where accessing resources via the .local URL is very slow. If we use the direct IP address we don't see these delays.
In our stripped-down test setup, the device and the computer are on the same switch and are the only devices on it. The same thing occurs when we are not in this very limited network configuration. On Mac OS X Lion, from the command line, we get these results:
With the direct IP:
curl 10.101.62.42 0.01s user 0.00s system 18% cpu 0.059 total
With the Bonjour name:
curl http://xrx0000aac0fefd.local 0.01s user 0.00s system 0% cpu 5.063 total
Resolution consistently takes just over 5 seconds per request. It does not matter which device we try to connect to; the same thing happens in our iPhone app, and it is just as slow from Python scripts. Safari, however, seems to resolve the names quickly.
We could resolve once and then use the IP address, but that first request would still be unacceptably slow, and I don't think this is how Bonjour is supposed to work.
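For completeness, a minimal sketch of that resolve-once idea (the hostname and address are the ones from the timings above; dns-sd -G keeps running until interrupted):
# Resolve the Bonjour name once (press Ctrl-C after the address prints)
$ dns-sd -G v4 xrx0000aac0fefd.local
# ...then reuse the reported address directly for subsequent requests, e.g.:
$ curl http://10.101.62.42/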
We are not exactly sure when this started happening but it was not always this way.
Edit: another data point. On Snow Leopard, resolution is not slow:
$ time curl http://hp1320.local > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101  2848    0  2848    0     0  15473      0 --:--:-- --:--:-- --:--:-- 36512
real 0m0.201s
user 0m0.005s
sys 0m0.009s
This is resolved in iOS 5 and Lion 10.7.2, which is a huge relief. It is unfortunate that users still on iOS 4.3 will get this slow behavior; I guess it is another reason to upgrade.
Do the hosts you mentioned show up when you browse for them? Enumeration should be pretty quick:
dns-sd -B _http._tcp
Maybe there's something slowing the name resolution. If you query the name directly with dig it should return the correct address pretty much instantly:
dig A xrx0000aac0fefd.local @224.0.0.251 -p 5353
Failing that, try running tcpdump and see if there's a device spewing out multicast packets on the network.
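For example, something like this (assuming en0 is the active interface) shows all mDNS traffic on the segment:
# Watch multicast DNS packets on the local link
$ sudo tcpdump -i en0 -n udp port 5353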
My problem is as follows.
From time to time, the Xorg process starts consuming heavy CPU (50-100% on a single thread) and on-screen animation briefly freezes (for example, typed letters appear with a delay of about 0.5-2 seconds, or video playback has micro-freezes).
Please help me figure out what this process is doing and how to fix the graphics delay it causes.
Below is the full command line of the process (taken from top):
Xorg -nolisten tcp -background none -seat seat0 vt1 -auth /var/run/sddm/{efd6b83b-791f-4698-bfmc-7a0badf342c7} -noreset -displayfd 17
I'm using Manjaro (KDE) and this problem has been happening for at least a year despite regular updates.
What I managed to do myself:
If I partially restart the graphical shell by pressing Ctrl+Alt+F2 (which shows a terminal) and then returning with Ctrl+Alt+F1 (graphics), the system reports "Desktop effects were restarted due to a graphics reset", and sometimes the runaway Xorg process disappears (not always).
Sometimes the Xorg process disappears on its own.
There is a possibility that this may have something to do with Firefox, which is always open.
Additional Information:
There have been big problems with graphics from the very beginning of using this laptop with Linux. From time to time I have to restart the graphical shell because of various glitches that appear when the computer has been running for a long time without a reboot (sleep only), say 20 days (for example, some pop-ups are displayed as solid black with about 50% probability).
Also, plasmashell sometimes takes gigabytes of RAM, which leads to a freeze and can only be cured with "killall plasmashell" and a restart of the shell.
Computer: Lenovo Legion Y540
Video card: NVIDIA GeForce GTX 1650 4GB GDDR5
OS: 5.10.161-1-MANJARO (KDE)
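A minimal diagnostic sketch for the next time it happens, assuming xrestop is installed from the repos (it is not part of the base system):
# Per-thread CPU usage inside the running Xorg process
$ top -H -p "$(pgrep -x Xorg)"
# Which X client holds the most server-side resources (pixmaps, GCs, ...)
$ xrestop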
I have several websites built on the same domain.
The server has 8 vCPUs and 30 GB of memory.
CentOS 7.
MySQL is hosted on a separate server.
I ran several tests with JMeter from 4 computers: 200 users per computer, 800 users in total. The tested link takes less than 3 seconds to load.
About 10% of the requests fail, even though the web server's CPU peaks below 25% and the SQL server's CPU never goes above 10%.
Another problem: sometimes I get the error shown below even when the server does not have many connections.
Is there any server or DNS setting I should check? Thank you.
[Screenshot of the error]
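A rough sketch of checks worth running on the web server while the JMeter test is going (ss, ulimit and sysctl are stock CentOS 7 tools; which limits actually matter depends on the web server in front):
# Socket summary: look for growing SYN-RECV / TIME-WAIT counts during the test
$ ss -s
# Per-process open-file limit for the account the web server runs as
$ ulimit -n
# Kernel listen-queue ceiling; a small value can drop connections under burst load
$ sysctl net.core.somaxconn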
We have multiple bare-metal servers in our Docker and OpenShift (Kubernetes) cluster. For some reason the pods are extremely slow only on the bare-metal nodes; the traditional VMs hosted on ESXi servers work flawlessly. The pods take very long to come up, and liveness probes fail often. The bare-metal nodes have 72 cores, 600 GB of RAM, and 2 network ports, and are underutilised: the load average is only about 10-20 and free RAM stays above 300-400 GB at all times. sar output looks normal, and /var/log/messages shows nothing unusual. I am not able to nail down what is causing the slowness.
Is there a Linux/Docker command that will help here, and what do I look for? Could this be a noisy-neighbour problem, or do I need to tweak some kernel parameter(s)? The slowness is always there; it is not intermittent. We have worked closely with Red Hat support and got nothing from that exercise. Any suggestions welcome.
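A rough sketch of node-level checks that might narrow this down (mpstat and iostat come from the sysstat package; nothing here is specific to OpenShift):
# Per-core utilisation; look for a few cores pinned at 100% while the rest idle
$ mpstat -P ALL 1 5
# Disk latency; high await/%util can stall container start-up even with a low load average
$ iostat -x 1 5
# CPU clock speeds; bare-metal BIOS power-saving profiles can keep cores at low frequencies
$ grep MHz /proc/cpuinfo | sort | uniq -c
# Recent kernel messages with timestamps, in case something is throttling or erroring
$ dmesg -T | tail -50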
I want to run an IP scan using Nmap, but the run time varies for some reason. The command can finish in 2 seconds, and if I launch it again right after, it can take 30 seconds.
This is the command I use:
nmap -n -sn -T5 --max-rtt-timeout 1s
-n: no DNS resolution
-sn: disable port scan
-T5: fast mode
--max-rtt-timeout 1s: round-trip timeout for the probes
I don't know whether my optimisation is good, or how to make it better.
Thank you
Some debug output (-d --packet-trace would be a good start) would be very helpful to diagnose this problem. My first thought was that you were telling Nmap to use timeouts that were too short, leading to retransmissions when they don't need to happen. But that probably wouldn't lead to 30-second run times; the response would be seen and accepted as soon as it came in, even if the probe was retransmitted first.
More helpful information would be the version and platform of Nmap (nmap --version), whether your command is run with root or administrator privileges, and what kind of network you are scanning on. The question is tagged wifi, but you don't say whether the target is on the same link as you, or several network hops away.
Most importantly for you, you should learn what -T5 really means so that you can make rational decisions when tweaking performance variables. Not only is -T5 not properly "fast mode," but you have set the round-trip timeout to be 3 times longer than -T5 defaults to, which is probably a good signal that the rest of the variables are not where they need to be. Try -T4 or even the default -T3 and see if the timing stabilizes. I would not be surprised if it turns out to be nearly as fast as your best -T5 times.
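For example, a quick way to compare the templates (the target range is only a placeholder; add -d --packet-trace to each run to see any retransmissions):
$ time nmap -n -sn -T5 --max-rtt-timeout 1s 172.27.192.0-255
$ time nmap -n -sn -T4 172.27.192.0-255
# default timing template (-T3)
$ time nmap -n -sn 172.27.192.0-255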
In my case, adding --min-parallelism to the original command improved the scan time from 26.54 seconds to 3.26 seconds:
nmap -n -sn -T5 --max-rtt-timeout 1s --min-parallelism 100 172.27.192.0-255
Does the Erlang TCP/IP library have some limitations? I've done some searching but can't find any definitive answers.
I have set the ERL_MAX_PORTS environment variable to 12000 and configured Yaws to use unlimited connections.
I've written a simple client application that connects to an appmod I've written for Yaws, and I am testing the number of simultaneous connections by launching X clients all at the same time.
I find that when I get to about 100 clients, the Yaws server stops accepting more TCP connections and the client errors out with
Error in process with exit value: {{badmatch,{error,socket_closed_remotely}}
I know there must be a limit to the number of simultaneously open connections, but 100 seems really low. I've looked through all the Yaws documentation and have removed any limit on connections.
This is on a 2.16 GHz Intel Core 2 Duo iMac running Snow Leopard.
A quick test on a Vista machine shows that I get the same problem at about 300 connections.
Is my test unreasonable? I.e. is it silly to open 100+ connections simultaneously to test Yaws' concurrency?
Thanks.
It seems you have hit a system limit; try increasing the maximum number of open files using:
$ ulimit -n 500
See also: Python on Snow Leopard, how to open >255 sockets?
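Before raising it, it may be worth confirming what the current limits actually are on that Mac (the per-shell value and the launchd-level ceiling can differ):
# Current per-shell open-file limit
$ ulimit -n
# System-wide file-descriptor limits applied by launchd on Mac OS X
$ launchctl limit maxfiles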
Erlang itself has a limit of 1024:
From http://www.erlang.org/doc/man/erlang.html
The maximum number of ports that can be open at the same time is 1024 by default, but can be configured by the environment variable ERL_MAX_PORTS.
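A quick sketch to confirm the setting actually reaches the emulator; this assumes a release where erlang:system_info(port_limit) is available (current releases use the +Q emulator flag instead of ERL_MAX_PORTS):
# Start the node with the limit raised, then verify it took effect from the Erlang shell
$ ERL_MAX_PORTS=12000 erl
1> erlang:system_info(port_limit).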
EDIT:
The listen() system call has a backlog parameter which determines how many pending connection requests can be queued. Please check whether adding a delay between connection attempts helps; this could be your problem.
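If that is the direction it points, one quick check on the Snow Leopard test machine is the kernel's ceiling on the listen queue (the application-side backlog is whatever gets passed to gen_tcp:listen/2):
# Maximum listen-queue length the kernel will allow on Mac OS X
$ sysctl kern.ipc.somaxconn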
All Erlang system limits are reported in the Erlang Efficiency Guide:
http://erlang.org/doc/efficiency_guide/advanced.html#id2265856
Reading from the open ports section:
The maximum number of simultaneously open Erlang ports is by default 1024. This limit can be raised up to at most 268435456 at startup (see environment variable ERL_MAX_PORTS in erlang(3)). The maximum limit of 268435456 open ports will at least on a 32-bit architecture be impossible to reach due to memory shortage.
After trying out everybody's suggestions and scouring the Erlang docs, I've come to the conclusion that my problem is with Yaws not being able to keep up with the load.
On the same machine, an Apache HttpComponents web server (non-blocking I/O) does not have the same problems handling connections at these thresholds.
Thanks for all your help. I'm going to move on to other Erlang-based web servers, like Mochiweb.