Is there a way to pack msg when using ejabberd? - erlang

I am load-testing ejabberd MUC rooms. The test client is Tsung.
The test conditions:
one ejabberd server (4 cores, 16 GB RAM)
3000 users join one MUC room
each user sends 5 messages, at random times within 1 minute
Server OS CPU usage: 90%
The result is below my expectations.
I suspect the cause is that the server has to broadcast too many messages: when one user sends a message, the server must broadcast it to the other 2999 users.
I captured the traffic with Wireshark and found that every message is sent in its own packet.
Is there a way to pack multiple messages into one packet?
Sorry, I made a mistake.
ejabberd already packs several messages into one packet, as the Wireshark screenshot shows.
I believe ejabberd can handle more users on a server with 4 cores and 16 GB of RAM.
Is there any other reason why the result falls short of expectations?

3,000 users in the same room, all of them chatting? Obviously those are not human beings, they are machines. Maybe MUC is not the protocol that best suits your use case: MUC involves checking each room occupant's role and privileges, their presence, etc.
Maybe you should consider MucSub, PubSub (or MIX), or MQTT.
Is there a way to pack multiple messages into one packet?
You could apply these patches and experiment with them:
https://github.com/processone/ejabberd/pull/3844
https://github.com/processone/xmpp/pull/63

Related

What is the maximum number of connections for Mosquitto?

I'm programming a chat app using the MQTT protocol.
You can read about MQTT at mqtt.org.
The broker I use is Mosquitto.
What I am wondering is:
How many users can connect to a Mosquitto broker?
I saw 100k on some website, but I am not sure.
This is dependent on a number of factors:
What OS you're running on
The size of the machine(s) you run the broker on
What broker you choose to use
How much and what type of load you are generating:
How many clients subscribed to each topic
How big the messages are
How many retained messages you are generating
What the message rate is
Whether you are queuing messages for offline clients
You also need to configure the broker/OS to get the most out of it, e.g. for Mosquitto you need to raise the open file handle limit on Linux to the maximum.
For a large scale app you may want to look at one of the brokers that supports federation/clustering to spread the load and allow fail over.

Mosquitto: fire only one subscriber for each topic

I set up an MQTT message broker using Mosquitto on my network. I have one web app publishing things to the broker and several servers subscribed to the same topic, so I have a redundancy scenario.
My question is: using Mosquitto alone, is there any way to configure it to deliver data only to the first subscriber? Otherwise, all of them will do the same thing.
I don't think that is possible.
But you can do this.
Have the first subscriber program respond with an ack on the channel as soon as it gets the message, and have the redundancy program look for the ack for a small time after the initial message.
If the ack is received, the redundant subscriber should not do anything.
So if the first subscriber gets and uses the message, the others won't do anything even if they receive it.
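For illustration, a minimal sketch of that ack scheme with the Python paho-mqtt client might look like this; the topic names jobs/incoming and jobs/ack, the payload-as-job-id convention, and the 2-second wait are placeholders invented for the example, not part of the original answer:

```python
# Sketch of the "first subscriber acks, the backup stands down" idea,
# using paho-mqtt (1.x callback style). Topic names, the job-id payload
# convention and the ack timeout are assumptions for illustration only.
import queue
import time
import paho.mqtt.client as mqtt

JOB_TOPIC = "jobs/incoming"   # hypothetical topic the web app publishes to
ACK_TOPIC = "jobs/ack"        # hypothetical topic used for the ack
ACK_WAIT = 2.0                # how long the backup waits for the primary's ack
PRIMARY = False               # True on the primary server, False on the backup

acked = set()                 # job ids the primary has already claimed
jobs = queue.Queue()          # jobs handed from the network thread to the main loop

def on_message(client, userdata, msg):
    payload = msg.payload.decode()
    if msg.topic == ACK_TOPIC:
        acked.add(payload)    # runs on the network thread, so acks arrive while we wait
    else:
        jobs.put(payload)

def handle(job_id):
    print("processing", job_id)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe([(JOB_TOPIC, 1), (ACK_TOPIC, 1)])
client.loop_start()           # network loop runs in a background thread

while True:
    job_id = jobs.get()
    if PRIMARY:
        client.publish(ACK_TOPIC, job_id)   # claim the job so the backup stands down
        handle(job_id)
    else:
        time.sleep(ACK_WAIT)                # give the primary time to ack
        if job_id not in acked:             # no ack seen: primary is down, take over
            handle(job_id)
```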
No, this is not possible with Mosquitto at the moment (without communication between the two subscribers as described in the other answer).
For the new release of the MQTT spec (v5)* there is a new mode called "Shared Subscriptions". This allows multiple clients to subscribe to a single topic, and messages will be delivered round-robin to each client. This is intended more for load balancing than for master/slave failover.
*There are some brokers (HiveMQ, IBM MessageSight) that already support some version of Shared Subscriptions on MQTT v3.1.1, but they implement it in slightly different ways (different topic prefixes), so they are not cross-compatible.
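As a rough illustration, with a broker that supports MQTT v5 a shared subscription in paho-mqtt could look like the sketch below; the share group name workers and the topic chat/messages are placeholders, and the broker you run against must actually implement v5 shared subscriptions:

```python
# Sketch of an MQTT v5 shared subscription with paho-mqtt.
# The share group "workers" and topic "chat/messages" are placeholder names.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Each message published to chat/messages is delivered to only ONE member
    # of the "workers" group; the broker load-balances across the group.
    print("got:", msg.payload.decode())

client = mqtt.Client(protocol=mqtt.MQTTv5)   # MQTT v5 is required for $share
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("$share/workers/chat/messages", qos=1)
client.loop_forever()
```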

How to display delivered and read receipts in MQTT broker Mosquitto?

I want to display delivered and read receipts to users in my messaging platform. I am using Eclipse's Paho library with Mosquitto as the broker. Since Mosquitto does not store messages, what is the best way/plugin to:
Display delivery receipts - can QoS 2 acknowledgement receipts be used for this?
Display read receipts - please suggest a way to do this.
How should messages be stored so that users can view their chat history? Any architectural insights for MySQL would be very helpful.
The quick answers to your questions:
High QoS (1/2) is not end-to-end delivery confirmation; it is only confirmation between the broker and a client. E.g. for a publisher publishing at QoS 2, the confirmation is only between the publisher and the broker, not onward to the subscriber (who may be subscribed at a different QoS anyway). The only way to do this is to send a separate message from the receiving end back to the sender. Also, there may be more than one subscriber to any given topic, so you have to think about how this would work.
Again, the only way to do this is with a separate message sent when the message is read.
You will have to implement this yourself. The only thing that may help is something like the built-in support for storing messages in a database present in some brokers (this is not part of the spec, so it is totally proprietary to the implementation), e.g. HiveMQ.
Hardlib's answers are 100% on target, but I'll add some thoughts on implementation.
I think a common misunderstanding with MQTT is that it is really an M2M (machine-to-machine) protocol, not a system for exchanging messages between users. That isn't to say you can't use it for messaging (Facebook did), but that messaging exists in a layer on top of MQTT. Put another way, MQTT is designed to route messages between machines with little care about the content of those messages. What this means is that user-level niceties (delivery confirmations etc.) aren't really part of it but instead something that you implement on top of MQTT.
So here are some thoughts about how to implement what you propose on top of MQTT:
Consider a situation in which you have two clients (X & Z) which both have access to the same broker (Y). To have client X confirm that it has received a message from client Z, simply have client X send a message to a topic (let's say confirmations/z) that client Z is subscribed to. This is trivial to implement in Python or whatever you are writing your application in. (For example, I use that basic procedure to measure round-trip time on my broker.)
However, given that QoS can guarantee that a broker has received the message (and it could be retained or otherwise held for other clients), I would question if this is really necessary unless it is critical that client Z know exactly when client X receives the message.
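To make that concrete, the receiving client could look roughly like the sketch below; the confirmations/z topic follows the example above, while the chat/z-to-x topic and the msg_id field in the JSON payload are assumptions made for the illustration:

```python
# Sketch: client X acknowledges every chat message back to a confirmation topic
# that client Z watches. The chat topic and the msg_id payload field are assumed.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    chat = json.loads(msg.payload)
    # ... display the message to the user here ...
    # then tell the sender we received it
    client.publish("confirmations/z", json.dumps({"delivered": chat["msg_id"]}), qos=1)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("chat/z-to-x", qos=1)
client.loop_forever()
```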
Depending on your needs there are any number of ways of providing a history for a topic. See the answers here and here for details on MySQL. But if all you need is a local history of a chat or a record of the activity on a few topics, consider simply outputting payloads with timestamps to a text file or JSON. MySQL feels like overkill unless you are dealing with a high volume of messages or need to compose complex queries.
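For the lightweight history option, a subscriber that appends every payload with a timestamp to a JSON-lines file might look like this sketch; the chat/# topic filter and the file name are placeholders:

```python
# Sketch: log every chat message with a timestamp to a JSON-lines file
# instead of a database. Topic filter and file name are placeholders.
import json
import time
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    record = {
        "ts": time.time(),
        "topic": msg.topic,
        "payload": msg.payload.decode(errors="replace"),
    }
    with open("chat_history.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("chat/#", qos=1)
client.loop_forever()
```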

What's the upper bound on connections of TServerSocket in Delphi? [duplicate]

I'm building a chat server with .NET. I have tried opening about 2000 client connections, and my Linksys WRT54GL router (with Tomato firmware) drops dead each time. The same thing happens when I have several connections open on my Azureus BitTorrent client.
I have three questions:
Is there a limit on the number of open sockets I can have in Windows Server 2003?
Is the Linksys router the problem? If so is there better hardware recommended?
Is there a way to possibly share sockets so that I can handle more open client connections with fewer resources?
As I've mentioned before, Raymond Chen has good advice on this sort of question: if you have to ask about OS limits, you're probably doing something wrong. TCP only allows for a maximum of 65535 ports per IP address, and many of these are reserved and not available for general use. I would suggest that your messaging protocols need to be thought out in more detail so that OS limits are not an issue. I'm sure there are many good resources describing such systems, and there are certainly people here who would have good ideas about it.
EDIT: I'm going to put some thoughts about implementing a scalable chat server.
First off, designate a single port on the server for clients to communicate through. Whenever a client needs to update the chat state (a new user message for example) do the following:
create message packet
open port to server
send packet
close port
The server then does the following:
connection request received
get packet
close connection
process packet
for each client that requires updating
open a connection to the client
send update packet
close connection
When a new chat session is started, the client starting the session sends a 'new session' message to the server with the client's user details and IP address for responses. The server creates a new chat session and responds with the session ID. The client then sends packets containing the messages the user types; the server processes them and forwards each message to the other clients in the same session. When a client leaves the chat, it sends an 'end session' message to the server. The server removes the client from the session and destroys the session when there are no more clients in it.
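A bare-bones sketch of that per-message connect/send/close flow in Python might look like this; the host, port and newline-delimited framing are arbitrary choices made for the illustration, not part of the original answer:

```python
# Minimal sketch of the flow described above: the client opens a connection,
# sends one packet, and closes; the server accepts, reads the packet, closes
# the connection, then processes it. Host/port and framing are assumptions.
import socket

SERVER = ("127.0.0.1", 5050)   # placeholder host/port

def send_packet(payload: bytes) -> None:
    """Client side: open a connection to the server, send one packet, close."""
    with socket.create_connection(SERVER) as sock:
        sock.sendall(payload + b"\n")

def serve() -> None:
    """Server side: accept a connection, read one packet, close, then process."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(SERVER)
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            with conn:
                packet = conn.makefile("rb").readline().rstrip(b"\n")
            # Connection is closed here; now process the packet and, for each
            # client that needs updating, push the update out by opening a
            # connection toward that client's own listening address.
            print("received:", packet)
```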
Hope that gets you thinking.
I have found some answers to this that I feel I should share:
Windows Server 2003 has a limit on the number of ports that may be used, but this is configurable via a registry tweak: change the MaxUserPort setting from 5000 to, say, 64k (the maximum).
Exploring further, I realized that the 64k port restriction is actually per IP address, so a single server can obtain many more ports, and hence TCP connections, by either installing multiple network cards or binding more than one IP address to a network card. That way, you can scale your system to handle n x 64k ports.
For days I had a problem with the available sockets on my Windows 7 machine. After reading some articles about socket leaks in Windows 7, I applied a Windows patch - nothing changed.
Below is an article describing Windows connection problems in great detail:
http://technet.microsoft.com/en-us/magazine/2007.12.network.aspx
For me, the following worked:
Open Regedit
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters: Create TcpNumConnections, REG_DWORD, decimal value 500 (this can be set according to your needs); EnableConnectionRateLimiting, REG_DWORD, value 0;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters: Create MaxUserPort, REG_DWORD, decimal value 65534
Restart Windows
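If you prefer to script those registry changes rather than editing them by hand, a small sketch with Python's winreg module (run as Administrator on Windows, then reboot as noted above) could look like this; the values simply mirror the steps listed above:

```python
# Sketch: apply the registry tweaks above programmatically with winreg.
# Windows only; must be run as Administrator, and a restart is still needed.
import winreg

PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TcpNumConnections", 0, winreg.REG_DWORD, 500)
    winreg.SetValueEx(key, "EnableConnectionRateLimiting", 0, winreg.REG_DWORD, 0)
    winreg.SetValueEx(key, "MaxUserPort", 0, winreg.REG_DWORD, 65534)
```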

Reading from multiple imap.gmail.com accounts from the same fetchmail client

For my portfolio software I have been using fetchmail to read from a Google email account over IMAP and life has been great. Thanks to the miracle of idle connection supported by imap3, my triggers fire in near-realtime due to server push, much sooner than periodic polling would allow otherwise.
In my basic .fetchmailrc setup, in which a brokerage customer's account emails trade notifications to a dedicated Gmail/Google Apps box, I've had
poll imap.gmail.com proto imap user "youraddress@yourdomain-OR-gmail.com" pass "yoMama" keep nofetchall ssl idle mimedecode limit 29000 no rewrite mda "myCustomSpecialMDAhandler.sh %F %T"
Trouble is, now I need to support reading from multiple email boxes and hand the emails off to other specialized MDA scripts I wrote. No problem, just add more poll lines to .fetchmailrc, right? Well, that doesn't work when the other accounts also use imap.gmail.com. What ends up happening is that while one account reads fine (and not necessarily the first one listed, though usually yes), the other gets "socket error" all day and its emails remain untouched, unread. I can't figure out why, and I'm not even sure if it's some mechanism at imap.gmail.com or not, e.g. limiting to one IMAP connection from a host. That doesn't seem right, since I have kept IMAP connections to many separate Gmail & Google Apps accounts from the same client (like Thunderbird) for years and never noticed this exclusivity problem.
I haven't tried launching multiple fetchmail daemons using separate -f config files (assuming they wouldn't conflict), or deploying getmail or other similar email fetchers in addition. I'm still trying to avoid that kind of mess - it's unscalable the more boxes I have to monitor.
I don't have the reference offhand, but somewhere in fetchmail's docs I recall reading that idle is not so much an IMAP feature as an optional fetchmail trick, which has a (nasty for me) side effect of choking off all other defined accounts from polling until the connection is cut off by some external event or timeout. So at least that would vindicate Google.
Credit to Carl's Whine Rack blog for some tips.
For now I periodically run killall fetchmail; fetchmail -f fetcher.$[$RANDOM % $numaccounts].rc from crontab to cycle through the accounts, each defined individually in fetcher.1.rc, fetcher.2.rc, etc. This is acceptable while email events are relatively infrequent.
