Is it possible to run a WebSocket connection test for 100k users on a single node? If yes, how?
I am wondering how the 1 million connections per node test claimed on the EMQX official site was carried out, given that the OS port limit itself is 65536.
The server only needs one port.
The client-side port limit of the operating system can be bypassed (for example, by using multiple network cards or binding multiple IP addresses). You can use JMeter to run the connection test; to simulate that many clients, multiple load-generating machines may be required. Here is an official EMQ video that shows some of the operating procedure:
https://www.bilibili.com/video/BV1yp4y1S7zb
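To illustrate the client-side point, here is a minimal sketch in plain Python sockets (the local and broker addresses are assumptions): binding outbound connections to different local source IPs gives each source address its own ~64k ephemeral port range, so the total number of connections scales with the number of addresses you bind from.

    import socket

    # Assumptions: these addresses are configured on the load generator's NICs,
    # and the broker's WebSocket listener is reachable at BROKER.
    LOCAL_IPS = ["192.168.1.10", "192.168.1.11"]
    BROKER = ("192.168.1.1", 8083)

    conns = []
    for local_ip in LOCAL_IPS:
        for _ in range(5):           # tiny count here; a real test opens thousands per IP
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind((local_ip, 0))    # port 0 = let the OS pick an ephemeral port on this source IP
            s.connect(BROKER)
            conns.append(s)
    print(f"opened {len(conns)} TCP connections")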
I'm programming a chat app with the MQTT protocol (see mqtt.org for details about MQTT). The broker I'm using is Mosquitto. What struck my mind is: how many users can connect to a Mosquitto broker? I saw 100k on some website, but I am not sure.
This is dependent on a number of factors:
What OS you're running on
The size of the machine(s) you run the broker on
What broker you choose to use
How much and what type of load you are generating:
How many clients subscribed to each topic
How big the messages are
How many retained messages you are generating
What the message rate is
Whether you are queuing messages for offline clients
You also need to configure the broker/OS to get the most out of it; e.g. for Mosquitto on Linux you need to raise the limit on open file handles to the maximum.
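For example, the equivalent of raising the open-file limit from inside a Python load-generation script (a sketch, assuming a Unix-like OS) looks like this:

    import resource

    # Raise this process's soft open-file limit up to the hard limit so it can
    # hold more sockets open; the same effect as `ulimit -n` in the shell.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"current limits: soft={soft}, hard={hard}")
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))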
For a large-scale app you may want to look at one of the brokers that supports federation/clustering, to spread the load and allow failover.
Short version: How would you recommend going about connecting a client to a server that are on the same local network, without manually entering the ip, when broadcast is disabled?
Further details: I am working on an educational multiplayer game for children. Many schools appear to be blocking broadcasting for security reasons. The children will be rather young, so it could be difficult for them and error-prone to have to enter the IP manually. They will all be in the same room and will all see the server screen. The game is made in Unity (C#).
Potential solutions: Here's what I thought about:
Connecting both the local server and the clients to an external server, communicating the local server IP to the clients through the external server, then connecting directly and disconnecting from the external server. Not ideal because of the extra hosting costs.
Sending a regular UDP message periodically to all IPs on the subnet? This will probably be picked up and blocked by any decent firewall though, right?
Putting a QR code on the server that kids would take a picture of with the client app and have it connect that way? May be more of a hassle.
Having the server play random tones corresponding to numbers that the client is listening for? (Speakers may not always work though)
Sounds like the first one is the most sane and easy solution. Do you have any other ideas on what someone in this situation could try?
Is UDP multicast possible?
If yes, then a common solution is that all participants join the same multicast group and the server listens on a well-known port. If a client wants to know the address of the server, it sends a packet to the multicast group; the server receives it and answers with another packet, which the client can then use to determine the server's address.
In addition to that, servers can also announce their presence at regular intervals by sending a suitable message to the multicast group.
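A minimal sketch of this idea in Python (the multicast group, port, and message strings are arbitrary choices for illustration):

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 9999   # assumed: any locally agreed multicast group and port

    def serve():
        """Server side: join the group and answer discovery probes."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            if data == b"WHO_IS_SERVER":
                sock.sendto(b"I_AM_SERVER", addr)   # unicast reply back to the asking client

    def discover(timeout=2.0):
        """Client side: probe the group and read the server's IP from the reply's source address."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(b"WHO_IS_SERVER", (GROUP, PORT))
        data, (server_ip, _) = sock.recvfrom(1024)
        return server_ip if data == b"I_AM_SERVER" else None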
What I can think of is an ad-hoc discovery protocol between all the devices. Say you have 1 server and 10 clients. All the devices run a discovery service (say, server-discovery) that binds to a fixed port, say 9999. Whenever a client wants to connect to the server but doesn't know the IP, it starts a scan: it loops through the addresses on the subnet and tries to connect to port 9999 on each. If it hits the server, it gets the server IP directly (the server knows its own IP) and caches it. If it hits another client instead, it asks that client for the server IP; if the other client knows it, it shares the information, otherwise it declines.
I agree there is a lot of overhead, but I think this would be robust, unlike sound, and would avoid the cost of printing QR codes every time.
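A sketch of the client-side scan under those assumptions (the port, subnet, and one-line request/response protocol are all hypothetical):

    import ipaddress
    import socket

    DISCOVERY_PORT = 9999           # assumed fixed port of the discovery service
    SUBNET = "192.168.1.0/24"       # assumed school LAN subnet

    def find_server(subnet=SUBNET, timeout=0.2):
        """Scan the subnet for a discovery service and ask it where the server is.

        Serial scan: simple but slow; a real implementation would probe in parallel.
        """
        for host in ipaddress.ip_network(subnet).hosts():
            try:
                with socket.create_connection((str(host), DISCOVERY_PORT), timeout=timeout) as s:
                    s.sendall(b"WHERE_IS_SERVER\n")     # hypothetical request line
                    reply = s.recv(256).decode().strip()
                    if reply and reply != "UNKNOWN":
                        return reply                    # the server's IP as a string
            except OSError:
                continue                                # host down or port closed; keep scanning
        return None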
On the local network, traffic goes directly from host to host. I don't understand which device would be blocking local broadcasts. If there aren't too many peers on the LAN (fewer than 100), I think UDP broadcasts work fine and you don't pollute the network. To get an idea of your "pond", I suggest you sniff your local traffic: there are already many broadcasts there, such as ARP, Windows, IPP, Dropbox...
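If you want to see that chatter for yourself, here is a tiny sketch using the scapy package (assumes scapy is installed and you have capture privileges):

    from scapy.all import sniff

    # Print a one-line summary of the next 20 broadcast/multicast frames on the LAN
    # (ARP, NetBIOS, mDNS, SSDP, and similar background traffic).
    sniff(filter="broadcast or multicast", prn=lambda pkt: print(pkt.summary()), count=20)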
I am looking to write a program that will connect to many computers from a single computer. Sort of like "Command Center" where you can monitor all the remote system remotely on a single PC.
My plan is to have multiple Client Sockets on a form. They will connect to individual PCs remotely. So, they can request information from them to display on the Window. Remote PCs will be hosts. Is this possible?
Direct answer to your question: Yes, you can do that.
Long answer: Yes, you can do that, but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? You probably don't! If you do, then you probably want to run them in separate threads.
If you want to send some commands from time to time (and you are not doing some kind of constant video monitoring) why don't you just use one connection and 'switch' between clients?
I can't tell you more about the design because your question is not clear about what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notices to take into account before designing your app (both relevant only if the remote computers are not on the LAN, i.e. you connect to them via the Internet):
If the remote computers are running as servers, you will have lots of problems explaining to your customers (if they are connected to the Internet via a router, and they probably are) how to set up the router and the software firewall. For example, if a remote computer is listening for commands from you on port 1234, the router's firewall will BY DEFAULT block any connection attempt from a 'foreign' computer (you) to that port.
If the remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all the remote PCs connect through their individual ClientSockets.
You can also design it the other way round, by putting a ClientSocket in the module running on your machine and a ServerSocket in the module running on each remote machine.
But then you end up creating one ClientSocket per ServerSocket, which does not scale well as the number of remote servers increases.
If you still want to have multiple ClientSockets on your machine, then, as Altar said, you need a multi-threaded application where each thread is responsible for one ClientSocket.
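A minimal sketch of the recommended layout in Python (asyncio, with a made-up line-based status protocol and an arbitrary port): one listening socket on the command center, many remote PCs connecting in.

    import asyncio

    async def handle_remote(reader, writer):
        """One coroutine per connected remote PC: ask for its status and log the answer."""
        peer = writer.get_extra_info("peername")
        writer.write(b"STATUS?\n")                 # hypothetical request understood by the remote agent
        await writer.drain()
        status = (await reader.readline()).decode().strip()
        print(f"{peer}: {status}")
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handle_remote, "0.0.0.0", 4555)   # 4555: arbitrary port
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())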
I would recommend Internet Direct (Indy), as its components work well in threads, and you can specify a connect timeout per connection, so that your monitoring app will get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing them on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially but easier to keep independent from each other.
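The same idea in a Python sketch (plain threads rather than Indy; hosts, port, and intervals are placeholders): each monitored client is wrapped in a class with its own polling thread and a short connect timeout, so one dead host does not stall the others.

    import socket
    import threading
    import time

    class MonitoredHost:
        """Wraps one remote host and polls it from an internal thread."""

        def __init__(self, host, port, interval=10.0, connect_timeout=2.0):
            self.host, self.port = host, port
            self.interval, self.connect_timeout = interval, connect_timeout
            self.alive = None
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

        def _run(self):
            while True:
                try:
                    with socket.create_connection((self.host, self.port),
                                                  timeout=self.connect_timeout):
                        self.alive = True     # connected: host is reachable
                except OSError:
                    self.alive = False        # fast negative result instead of the OS default timeout
                time.sleep(self.interval)

    monitors = [MonitoredHost(ip, 4555) for ip in ("10.0.0.2", "10.0.0.3")]   # placeholder hosts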
I'm building a chat server with .NET. I have tried opening about 2000 client connections, and my Linksys WRT54GL router (with Tomato firmware) drops dead each time. The same thing happens when I have several connections open in my Azureus BitTorrent client.
I have three questions:
Is there a limit on the number of open sockets I can have in Windows Server 2003?
Is the Linksys router the problem? If so is there better hardware recommended?
Is there a way to possibly share sockets so that I can handle more open client connections with fewer resources?
As I've mentioned before, Raymond Chen has good advice on this sort of question: if you have to ask about OS limits, you're probably doing something wrong. TCP only allows for a maximum of 65535 ports per IP address, and many of these are reserved and not available for general use. I would suggest that your messaging protocol needs to be thought out in more detail so that OS limits are not an issue. I'm sure there are many good resources describing such systems, and there are certainly people here who would have good ideas about it.
EDIT: I'm going to put some thoughts about implementing a scalable chat server.
First off, designate a single port on the server for clients to communicate through. Whenever a client needs to update the chat state (a new user message for example) do the following:
create message packet
open port to server
send packet
close port
The server then does the following:
connection request received
get packet
close connection
process packet
for each client that requires updating
open connection to the client
send update packet
close connection
When a new chat session is started, the client starting the session sends a 'new session' message to the server with the client's user details and IP address for responses. The server creates a new chat session and responds with the session ID. The client then sends packets containing the messages the user types; the server processes them and forwards them to the other clients in the same session. When a client leaves the chat, it sends an 'end session' message to the server. The server removes the client from the session and destroys the session when there are no more clients in it.
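A sketch of the client-side "open port / send packet / close port" step from the list above (the server address and the JSON framing are just illustrative assumptions):

    import json
    import socket

    SERVER = ("chat.example.invalid", 5050)   # assumed well-known address/port of the chat server

    def send_message(session_id, user, text):
        packet = json.dumps({"session": session_id, "user": user, "text": text}).encode()
        with socket.create_connection(SERVER) as s:   # open port to server
            s.sendall(packet)                         # send packet
        # leaving the with-block closes the socket (close port)

    if __name__ == "__main__":
        send_message("abc123", "alice", "hello everyone")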
Hope that gets you thinking.
I have found some answers to this that I feel I should share:
Windows Server 2003 has a limit on the number of ports that may be used, but this is configurable via a registry tweak: change the MaxUserPort setting from 5000 up to its maximum of about 64k.
Exploring further, I realized that the 64k port restriction is actually per IP address, hence a single server can obtain many more ports, and hence TCP connections, by either installing multiple network cards or binding more than one IP address to a network card. That way, you can scale your system to handle n x 64k ports.
For days I had a problem with the available sockets on my Windows 7 machine. After reading some articles about socket leaks in Windows 7, I applied a Windows patch; nothing changed.
The article below describes Windows connection problems in great detail:
http://technet.microsoft.com/en-us/magazine/2007.12.network.aspx
The following worked for me:
Open Regedit
Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters, create TcpNumConnections, REG_DWORD, decimal value 500 (set this according to your needs), and EnableConnectionRateLimiting, REG_DWORD, value 0
Under the same key, create MaxUserPort, REG_DWORD, decimal value 65534
Restart Windows
Does the Erlang TCP/IP library have some limitations? I've done some searching but can't find any definitive answers.
I have set the ERL_MAX_PORTS environment variable to 12000 and configured Yaws to use unlimited connections.
I've written a simple client application that connects to an appmod I've written for Yaws, and I am testing the number of simultaneous connections by launching X clients all at the same time.
I find that when I get to about 100 clients, the Yaws server stops accepting more TCP connections and the client errors out with
Error in process with exit value: {{badmatch,{error,socket_closed_remotely}}
I know there must be a limit to the number of open simultaneous connections, but 100 seems really low. I've looked through all the yaws documentation and have removed any limit on connections.
This is on a 2.16Ghz Intel Core 2 Duo iMac running Snow Leopard.
A quick test on a Vista Machine shows that I get the same problems at about 300 connections.
Is my test unreasonable? I.e. is it silly to open 100+ connections simultaneously to test Yaws' concurrency?
Thanks.
It seems you have hit a system limit; try increasing the maximum number of open files using
$ ulimit -n 500
See also: Python on Snow Leopard, how to open >255 sockets?
Erlang itself has a limit of 1024:
From http://www.erlang.org/doc/man/erlang.html
The maximum number of ports that can be open at the same time is 1024 by default, but can be configured by the environment variable ERL_MAX_PORTS.
EDIT:
The system call listen() has a backlog parameter which determines how many pending connection requests can be queued. Please check whether adding a delay between connection attempts helps; this could be your problem.
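For reference, the backlog is the argument to listen(); a quick Python illustration (arbitrary local port):

    import socket

    # With a small backlog and a slow accept loop, a burst of simultaneous connects
    # can be refused even though no per-process socket limit has been reached.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8888))   # arbitrary test port
    srv.listen(5)                   # at most ~5 not-yet-accepted connections are queued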
All Erlang system limits are reported in the Erlang Efficiency Guide:
http://erlang.org/doc/efficiency_guide/advanced.html#id2265856
Reading from the open ports section:
The maximum number of simultaneously open Erlang ports is by default 1024. This limit can be raised up to at most 268435456 at startup (see environment variable ERL_MAX_PORTS in erlang(3)). The maximum limit of 268435456 open ports will, at least on a 32-bit architecture, be impossible to reach due to memory shortage.
After trying out everybody's suggestions and scouring the Erlang docs, I've come to the conclusion that my problem is Yaws not being able to keep up with the load.
On the same machine, an Apache HttpComponents web server (non-blocking I/O) does not have the same problems handling connections at the same thresholds.
Thanks for all your help. I'm going to move on to other Erlang-based web servers, like Mochiweb.