I am currently developing an application that essentially offers multiple conferences/rooms to each user at once.
In other words, an individual user can be in as many rooms as they would like at any one time...
However, whilst this currently works for 10 or 11 rooms (it varies), as soon as the user attempts to join more rooms, it fails.
Originally I thought this was a result of the iOS front end, but having now tested 10+ rooms with InstantBird (a macOS XMPP client), I can confirm that the issue seems to be at the ejabberd end...
Thus far, I have been unable to extract any error message. I have checked the logs and everything seems fine (I may be missing something, as I am not really sure what to look for).
I have a couple of ideas as to what it might be but I don't really know where to start:
Does Ejabberd limit the number of rooms a user can join at any one time?
Does Ejabberd have a timeout that may be exceeded when trying to join multiple rooms (this would explain why it varies between 10 and 12 rooms max)?
Your answer is here:
https://docs.ejabberd.im/admin/guide/configuration/#modmuc
There is a config property
max_user_conferences
and as the documentation tells us:
This option defines the maximum number of rooms that any given user can join. The default value is 10. This option is used to prevent possible abuses. Note that this is a soft limit: some users can sometimes join more conferences in cluster configurations.
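If you need to allow more rooms per user, you can raise that limit in the mod_muc section of your config and restart ejabberd (or reload the configuration). A minimal sketch for a YAML-style ejabberd.yml (older releases use the Erlang-style ejabberd.cfg syntax instead); the value 100 is just an example:

```yaml
# ejabberd.yml (excerpt) -- illustrative values only
modules:
  mod_muc:
    ## Raise the per-user room limit from the default of 10
    max_user_conferences: 100
```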
This might be a very specific problem, or just ignorance on my side, but I can't seem to figure it out.
Within our organization, we have a FreeRadius Accounting system logging sessions from Wi-Fi usage. Our team is responsible for the data analysis of this accounting data.
Recently, we had to dump the RADIUS accounting database to make a freeze frame of it. While doing so, we found some weird behavior.
Running the same query before and after the dump (a query that retrieves the total number of sessions for a single day) gave a different result, with a difference of around 5-10%.
Looking a bit deeper we discovered that several updates were being issued that altered the start time of sessions after they had been first registered in the accounting database.
We then found that data we had collected previously showed discrepancies even after weeks or months (around 2-10%).
TLDR:
Does FreeRADIUS adjust the start times of sessions as part of some maintenance? Are Wi-Fi controllers allowed to do this? Is it a bug?
Overall, we just want to understand the rationale so we can justify the data and adjust our processing accordingly; currently, we cannot trust the values we collect daily or even weekly for these stats!
Any help or insight would be great!!!
FreeRADIUS only updates the database as a result of data in an incoming RADIUS packet, using the SQL queries in the local configuration. The only real way to understand this is to look at your SQL queries and the incoming requests (via radiusd -X) and see what is making changes to the data. It is possible that the NAS is broken and sending invalid or changing data, or possibly re-using session IDs, which would overwrite existing records.
It is also possible to configure FreeRADIUS to create a "fake" accounting start entry in the database in post-auth, which will then be updated when the real Start packet arrives. If you are doing this then you should check the values that are being written, and also if the session never starts up (or the Start is lost) then bad things might happen.
But in all circumstances the only solution you really have is to look at the debug output and see what is happening and why data is being written in the way that it is. There is nothing in FreeRADIUS that randomly updates the database without being sent that data from the NAS.
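If you want to pin down which rows are changing between your freeze frame and the live table, a small comparison script can narrow it to specific sessions and NAS devices before you dig through radiusd -X output. A rough sketch in Python with pymysql; it assumes MySQL, the default FreeRADIUS radacct schema (acctuniqueid, acctstarttime, nasipaddress), and a hypothetical radacct_snapshot table loaded from your dump:

```python
# Compare live radacct against a snapshot table to find sessions whose
# start time was rewritten after it was first recorded.
# Assumes MySQL, the default FreeRADIUS schema, and a snapshot table
# named radacct_snapshot created from your dump.
import pymysql

conn = pymysql.connect(host="localhost", user="radius",
                       password="secret", database="radius")

QUERY = """
    SELECT live.acctuniqueid,
           live.nasipaddress,
           snap.acctstarttime AS original_start,
           live.acctstarttime AS current_start
    FROM   radacct AS live
    JOIN   radacct_snapshot AS snap USING (acctuniqueid)
    WHERE  live.acctstarttime <> snap.acctstarttime
"""

with conn.cursor() as cur:
    cur.execute(QUERY)
    for uid, nas, original, current in cur.fetchall():
        # Grouping by NAS often shows whether one controller is the culprit.
        print(f"{uid}  nas={nas}  start changed {original} -> {current}")

conn.close()
```

Grouping the affected rows by NAS usually shows quickly whether a single controller is re-sending Start packets or re-using session IDs, which tells you where to point the debug output.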
I'm developing an iOS application similar to Uber or EasyTaxi. My scenario has two types of users: driver and customer. When a customer pushes an order, it goes to all drivers. My problem is that I have to keep pulling orders from the server on the driver side every few seconds to catch new orders or any updates to a particular order, which I think is not the correct approach, as it will eventually overload the server. Is there a way for the server to update drivers as soon as an order is pushed to the server?
Or is polling the only way here?
I'm kinda lost here, so even a pointer in the right direction would be great.
Is there a way in JIRA to run a report to see how many issues were "resolved" by which users, and how quickly after the issue was reported? It needs to be per user.
Thanks
You can build arbitrary reports yourself with a Report Plugin Module, but my experience is that it's quite a hassle. Note that plugins will only work in self-hosted Jira installations, not in Atlassian's hosted service.
Another way would be to leverage the REST API in order to fetch worklogs and process them externally.
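For the REST route, here is a rough sketch of what that external processing could look like, using Python and the requests library against the standard /rest/api/2/search endpoint. The base URL, credentials and JQL are placeholders, and it uses the issues' created and resolutiondate fields rather than worklogs, since that is closer to the "how quickly was it resolved" requirement; it also treats the assignee as the resolver (walking the changelog with expand=changelog would be more precise):

```python
# Pull resolved issues via the Jira REST API and compute time-to-resolution
# per assignee. Base URL, credentials and JQL below are placeholders.
from collections import defaultdict
from datetime import datetime

import requests

JIRA = "https://jira.example.com"
AUTH = ("reporting-user", "api-token-or-password")
JQL = "resolved >= -30d"          # last 30 days, adjust as needed
FIELDS = "assignee,created,resolutiondate"

def parse(ts):
    # Jira timestamps look like 2013-05-01T12:34:56.000+0000
    return datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S")

per_user = defaultdict(list)
start_at = 0
while True:
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": JQL, "fields": FIELDS,
                "startAt": start_at, "maxResults": 100},
        auth=AUTH,
    )
    resp.raise_for_status()
    data = resp.json()
    for issue in data["issues"]:
        f = issue["fields"]
        if not f.get("resolutiondate"):
            continue
        user = (f.get("assignee") or {}).get("displayName", "Unassigned")
        hours = (parse(f["resolutiondate"]) - parse(f["created"])).total_seconds() / 3600
        per_user[user].append(hours)
    start_at += len(data["issues"])
    if start_at >= data["total"] or not data["issues"]:
        break

for user, durations in sorted(per_user.items()):
    avg = sum(durations) / len(durations)
    print(f"{user}: {len(durations)} resolved, avg {avg:.1f} h to resolution")
```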
Your requirement needs some clarification, I think. It seems you want to see the number of issues that were moved from some status to another status, or perhaps the last time the resolution field was set to a value (any value?). Then group those results by JIRA user.
A second requirement is to track the time from issue creation to the last time the resolution field was set. Again grouping by user.
I'd try using the Vertygo SLA plugin from Valiantsys to do this. It lets you define custom fields to track the time between two JIRA events, such as a field update or a status change. I believe it can sum those fields and display grouped results in the JIRA statistics and two-dimensional gadgets.
Reports that group by user often become quite large as the number of users increases.
I am new to DynamoDB and I am very confused about provisioned throughput. I am creating an iPhone game in which the users can chat within the game. I have a Chat table, which contains GameID, UserID and Message. How do I find the size of an item to calculate throughput? The size of the item entirely depends on the Message, right? How do I calculate the size of an item?
Amazon says that we can modify the throughput either by using the UpdateTable API or manually from the console. If I want to change it from code, how will I know that the provisioned throughput has been exceeded for a certain table? How do I check that from code?
I am also confused about CloudWatch. How do I make sense of it?
Could anyone please help me? Please don't point me to the documentation.
Thanks.
I will do my best to help with the confusion.
DynamoDB is a key:value database
CloudWatch is Amazon's monitoring tool for its products
Provisioned throughput is roughly the number of KB of items you plan to read/write per second
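To make the item-size question concrete: an item's size is roughly the UTF-8 length of all attribute names plus all attribute values, so yes, the Message attribute will dominate for a Chat item. A back-of-the-envelope sketch in Python; the attribute values, the 200-character message and the 50 writes/second are made-up numbers, while the 1 KB-per-write and 4 KB-per-read unit sizes are DynamoDB's published figures:

```python
# Rough item-size and capacity estimate for a Chat item.
# Item size is (approximately) the UTF-8 length of every attribute name
# plus the length of every attribute value, so the Message dominates.
# All names, values and rates below are illustrative.
import math

item = {
    "GameID": "game-12345",
    "UserID": "user-67890",
    "Message": "x" * 200,          # a 200-character chat message
}

size_bytes = sum(len(k.encode("utf-8")) + len(str(v).encode("utf-8"))
                 for k, v in item.items())
size_kb = size_bytes / 1024.0

writes_per_second = 50             # how many messages/sec you expect

# 1 write capacity unit = one 1 KB write per second
# 1 read capacity unit  = one strongly consistent 4 KB read per second
wcu = writes_per_second * math.ceil(size_kb / 1.0)
rcu = writes_per_second * math.ceil(size_kb / 4.0)

print(f"item ~{size_bytes} bytes -> {wcu} WCU / {rcu} RCU "
      f"for {writes_per_second} ops/sec")
```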
Whenever you exceed your provisioned throughput,
DynamoDB answers with ProvisionedThroughputExceededException
DynamoDB notifies CloudWatch
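As for knowing this from code: the SDK surfaces that error on the failed call, so you can catch it and react, for example by backing off or by calling UpdateTable. A sketch with boto3, the current AWS SDK for Python; the "Chat" table name and capacity numbers are placeholders, and the SDK already retries throttled requests a few times before raising:

```python
# Detect throttling on a request and bump provisioned throughput.
# Table name and capacity numbers are placeholders.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def put_message(game_id, user_id, message):
    try:
        dynamodb.put_item(
            TableName="Chat",
            Item={
                "GameID": {"S": game_id},
                "UserID": {"S": user_id},
                "Message": {"S": message},
            },
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ProvisionedThroughputExceededException":
            # This is how your code "knows" the table is being throttled.
            # You could back off and retry, and/or raise the limits:
            dynamodb.update_table(
                TableName="Chat",
                ProvisionedThroughput={
                    "ReadCapacityUnits": 10,
                    "WriteCapacityUnits": 20,
                },
            )
        else:
            raise
```

UpdateTable only succeeds while the table is in the ACTIVE state, and there are limits on how often you can adjust throughput (especially decreases), so treat it as an occasional adjustment rather than something to call on every throttled request.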
What CloudWatch basically does is record and aggregate data points. For most applications, it will only keep track of data aggregated over consecutive 5-minute periods.
You can then access this data for "manual" monitoring or set up "alarms".
There was a really interesting question on SO a couple of weeks ago about DynamoDB auto-scaling using alarms. You might be interested in reading it: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ErrorHandling.html
Knowing this, you can start building your application.
As with every AWS service, one needs credentials to access DynamoDB. Even though they can be restricted to a specific table or set of actions, it is very dangerous to bundle them in an application. Would you give MySQL or MongoDB credentials, even read-only ones, to untrusted people?
May I suggest you build your application to rely on a server of your own? Since that server is trusted and built by you, you can safely perform any authorization checks there and grant it full access to your table.
I hope this helps. Feel free to ask for more precisions.
My organization is building a new version of our ticketing site and is looking for the best way to build an online waiting room when the number of users in our purchase path exceeds a certain limit. The best version of this queue would let new users in after existing users have either completed their purchase or have exceeded a timeout limit after entering the path.
I'm trying to get an idea of how this has been implemented by other organizations. Has anyone out there done something similar or have any experience with this? We have some ideas, but I'd like to get a sense of what solutions have been tried and what problems those solutions have run up against.
Just to be complete, this site is being built in Ruby on Rails, though I'd love to hear about how people have solved this regardless of platform.
Edit: To clarify: The need for the queue is not primarily to reduce load, but to limit the speed at which tickets are purchased on the web relative to people buying in other ways, like over the phone.
Before I outline one method for this, I want to point out that what you want to do doesn't make a lot of sense. Services on the web aren't like a physical store, where I can walk up and see that it's crowded and decide to stay or not. Queueing people on your site strikes me as shifting the blame from you (unable or unwilling to adequately provision resources) to me (punishing me for trying to use your site).
If you're selling something like show tickets, where quantity is limited and each item is tied to a seat, I think it's better to reserve items and time out those reservations if they aren't paid for in a timely manner. Ticketmaster does this, and I think it's a much better solution than blocking people at the door.
If you still want to go down this path, then I'd design the system like this:
As customers come to your site, record their arrival time. As they interact with the site, record a "last seen" time. "Last seen" will be used to determine activeness. You'll need a background job running very frequently to expire sessions quickly.
Once your limit is hit, you have an ordered queue of people who are blocked. As customers complete their transaction or time out, you'll mark the next person in the queue for entry into the purchase path.
For queued users, their browsers will make a request on a regular basis, checking to see if you've let them in yet. If yes, they proceed to the purchase path. If no, they continue to wait.
The purchase path needs a mechanism to check whether someone is trying to circumvent your waiting area, and send them back if so.
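A rough sketch of that bookkeeping, written here in Python on top of Redis sorted sets purely for illustration (the same logic ports directly to Rails); the key names, the 500-buyer limit and the 300-second inactivity timeout are all made-up numbers:

```python
# Waiting-room bookkeeping: active buyers in one sorted set (scored by
# last-seen time), queued visitors in another (scored by arrival time).
# Key names, the 500-user limit and 300 s timeout are illustrative.
import time
import redis

r = redis.Redis()

MAX_ACTIVE = 500        # how many buyers may be in the purchase path at once
INACTIVE_AFTER = 300    # seconds without a request before a slot is freed

def heartbeat(session_id):
    """Call on every request from a user already in the purchase path."""
    r.zadd("active", {session_id: time.time()})

def enqueue(session_id):
    """Call when a new visitor arrives and the path is full."""
    r.zadd("waiting", {session_id: time.time()}, nx=True)  # keep arrival time

def promote():
    """Background job: expire idle buyers, then admit queued users in order."""
    cutoff = time.time() - INACTIVE_AFTER
    r.zremrangebyscore("active", "-inf", cutoff)       # drop timed-out buyers
    free = MAX_ACTIVE - r.zcard("active")
    if free <= 0:
        return
    for session_id in r.zrange("waiting", 0, free - 1):  # oldest arrivals first
        r.zrem("waiting", session_id)
        r.zadd("active", {session_id: time.time()})

def admitted(session_id):
    """Polled by queued browsers; also the guard on the purchase path."""
    return r.zscore("active", session_id) is not None
```

The circumvention check in the last step is then just a call to admitted() at the top of every purchase-path action, redirecting back to the waiting page when it returns False, while promote() is the frequently running background job mentioned above.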
You might find the Online queuing for ticketing guide helpful. Check their repository on GitHub.
They have integrations with Ruby on Rails, PHP, .NET, iOS, Android and similar platforms.
Queue-it enables you to gain control of website overload during extreme traffic peaks by offloading end users into an online queue.
When a peak traffic event occurs on a website, the online queue system sends users to the virtual waiting room environment where the users wait and are redirected back to the website at a rate it can handle.