I've been working on a web app that could be prone to user abuse, especially spam comments/accounts. I know that reCAPTCHA will take care of bots as far as fake users are concerned, but it won't do anything for those users who create an account and somehow put their spam comments on autopilot (like I've seen on Twitter countless times).
The solution that I've thought up is to enable any user to flag another user and then have a list of flagged users (boolean attribute) come up on a users index action only accessible by the admin. Then the users that have been flagged can become candidates for banning (another boolean attribute) or unflagging. Banned users will still be able to access the site but will have greatly reduced privileges. For certain reasons, I don't want to delete users entirely.
However, as I thought it through, I realized that going through a list of flagged users to decide which ones should be banned or unflagged could potentially be very time-consuming for an admin. Short of hiring someone to do the unflagging/banning of users, is there a more automated and elegant way to go about this?
I would create a table named abuses, containing both the reported user and the one that filed the report. Instead of the flagged boolean field, I suggest having a counter cache column such as "abuse_count". When this column reaches a predefined value, you could automatically "ban" the user.
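A minimal sketch of what that might look like in Rails (the threshold, column names, and the auto-ban callback are illustrative assumptions, not a drop-in implementation):

```ruby
# Migration: one row per report, plus a counter cache and ban flag on users.
class CreateAbuses < ActiveRecord::Migration[7.0] # adjust to your Rails version
  def change
    create_table :abuses do |t|
      t.references :reporter,      null: false, foreign_key: { to_table: :users }
      t.references :reported_user, null: false, foreign_key: { to_table: :users }
      t.timestamps
    end
    add_column :users, :abuse_count, :integer, default: 0, null: false
    add_column :users, :banned, :boolean, default: false, null: false
  end
end

class Abuse < ApplicationRecord
  belongs_to :reporter,      class_name: "User"
  # counter_cache keeps users.abuse_count up to date automatically
  belongs_to :reported_user, class_name: "User", counter_cache: :abuse_count

  BAN_THRESHOLD = 5 # arbitrary example value

  after_create_commit :auto_ban_if_needed

  private

  def auto_ban_if_needed
    user = reported_user.reload # reload to pick up the fresh counter value
    user.update(banned: true) if user.abuse_count >= BAN_THRESHOLD
  end
end
```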
Before "Web 2.0", web sites were moderated by administrators. Now, the goal is to get communities to moderate themselves. StackOverflow itself is a fantastic case study. The reputation system enables users to take on more "administrative" tasks as they prove themselves trustworthy. If you're allowing users to flag each other, you're already on this path. As for the details of the system (who can flag, unflag, and ban), I'd say you should look at various successful online communities (like StackOverflow) to see how they work, and how successful they are. In the end it will probably take some trial and error, since all communities differ.
If you want to write some code, you might create a script that looks for usage patterns typical of spammers (e.g., the same comment posted on multiple pages), though I think the goal should be to grow a community that does this for you. This may be more about planning than programming.
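If you do go the scripted route, something like the query below could surface comment bodies that appear across many pages (the Comment model and page_id column are assumptions; adjust to your own schema):

```ruby
# Flag comment bodies that appear on more than, say, 3 distinct pages.
suspicious = Comment
  .group(:body)
  .having("COUNT(DISTINCT page_id) > ?", 3)
  .count("DISTINCT page_id")

suspicious.each do |body, page_count|
  Rails.logger.info "Possible spam (#{page_count} pages): #{body.truncate(60)}"
end
```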
Some sophisticated spammers are happy to spend their time breaking your CAPTCHA if they feel that the reward is high enough. You should also consider looking at a spam-detection service such as Akismet, for which there's a great Rails plugin (https://github.com/joshfrench/rakismet).
There are other alternatives such as Defensio (https://github.com/thewebfellas/defensio-ruby), as well as a gem that I found once which worked pretty well at detecting common blog spam, but I can't for the life of me find it any more.
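For reference, wiring a model up to Akismet via rakismet looks roughly like this; I'm going from memory of the gem's README, so treat the config keys and attribute mapping as assumptions to verify:

```ruby
# config/application.rb (the API key and site URL are placeholders)
config.rakismet.key = ENV["AKISMET_API_KEY"]
config.rakismet.url = "https://example.com/"

# app/models/comment.rb
class Comment < ApplicationRecord
  include Rakismet::Model
  # Map Akismet's expected fields to your own columns (column names assumed).
  rakismet_attrs author:       :author_name,
                 author_email: :author_email,
                 content:      :body
end

# e.g. in the controller, after building the comment:
#   comment.flagged = true if comment.spam?
```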
Apple's latest changes, which allow users to hide their IP, hide their email, etc., are creating problems for my web-based app (non-native), which relies upon these things to build a sense of who a person is.
In most situations, I can see why these are great "features" to have; however, in my use case I have a voting platform that uses things like email address and IP to do a decent job of detecting duplicate or fraudulent votes (e.g., logins from other countries, etc.).
Now, before anyone says "These aren't foolproof ways of identifying a person" and derail my actual question: I know. I'm not looking for perfection, but these methodologies shed light on the 95%+ of people who might be trying to circumvent our voting system.
Apple putting the ability to circumvent these measures right in front of the user as a first-class feature shoots major holes in my existing strategy.
Is there a way to detect whether a user is using these features, so that I could prompt them to sign up without them?
I think it would be easily justifiable to explain that, due to the nature of the application being a voting website, the ability to create multiple aliases would directly undermine the purpose of the site.
Perhaps there is an email address pattern to look for (I know in my test cases, I was getting email addresses @icloud.com).
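Something like the check below is what I have in mind; the relay domain is the one I've seen reported for Hide My Email, so treat the list as an assumption, and whether to also treat plain @icloud.com addresses (what I saw in my tests) as relays is a judgment call, since plenty of legitimate users have those:

```ruby
# Domains assumed to indicate a relay / "Hide My Email" address; extend as needed.
RELAY_DOMAINS = %w[privaterelay.appleid.com].freeze

def relay_email?(email)
  domain = email.to_s.split("@").last.to_s.downcase
  RELAY_DOMAINS.include?(domain)
end

relay_email?("abc123@privaterelay.appleid.com") # => true
relay_email?("someone@example.com")             # => false
```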
If there is no reasonable way, I need to rethink the entire process of identifying individuals and preventing aliases (phone/text confirmation, etc.).
I'm planning to make a simple one- or two-page website on travel experiences. Guests can send me those details through a form and I can post them on the website.
The short answer is, yes you can.
From what I understand you want any visitor to your site to be able to type up a travel experience on the site, submit it, you then moderate and check it, and decide to publish it or not.
As much as that describes a "simple one or two page website", there is a lot that needs to happen for you to accomplish that:
You will need a database to store the user submissions in;
You probably want some kind of protection mechanism so that a malicious user or bot cannot just submit millions of rubbish entries;
You will want to send commands to your database in a way that prevents "SQL injection", whereby a user can hide malicious actions (like deleting all the data in your database) inside their submission (see the sketch below).
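To illustrate that last point, here is the difference in an ActiveRecord context, if you end up building this with something like Rails (model and column names are made up for the example):

```ruby
# UNSAFE: user input is interpolated straight into the SQL string.
Submission.where("title = '#{params[:title]}'")

# SAFE: the value is passed as a bind parameter, so it can't alter the query.
Submission.where("title = ?", params[:title])

# Also safe, and more idiomatic:
Submission.where(title: params[:title])
```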
I can carry on, but I think you get the point: what you want to do is a simple technical exercise for someone who already knows how to build dynamic websites, but quite a challenge for someone with little or no experience.
That does not mean that it won't be a worthwhile exercise and a most valuable learning experience, but it won't be a quick couple of days' work for someone without the experience and knowledge.
There are tons of free resources on the web that you can use to learn to do exactly what you envision, so I encourage you to go for it. Good luck!
There is no need for users to log in to send posts to you. You can simply design a submit-post page and collect the posts in your admin view. After that you can publish or reject the submitted posts.
But there are some problems:
You cannot verify the users who are submitting the posts;
The quality of the posts will be reduced because the requests are unauthenticated.
I'm a new junior developer joining this awesome community. I'm developing my first big personal project, and I'm stuck on this specific part.
I would like to build a feed notification system like Facebook with the following features:
Track different models and relationships, for example: new badges earned, new comments in subscribed models, new posts by followed users, new comments on my posts, new likes on my posts...
Group the activities; for example, instead of having 400 activities for each like on my post, have just one notification that says "User X and 399 more like your post".
Be able to mark notifications as read so they don't show up again, unless you explore past notifications.
Scalability, good performance, and possible future integration in an app developed, for example, with the Ionic framework.
Push notifications are optional; it's OK if the user needs to refresh the page to see the new notifications.
So for that, I have read a lot. I have watched some RailsCasts videos and followed tutorials, but I'm still not really sure how to begin.
I have considered the following methods:
Use the public_activity gem, adding a new "read" field to the migration, and think about how to manage grouped activities (see the sketch after this list). But I have seen a lot of complaints about performance. I'm expecting to have around 50,000 users on my website in the first month (I already have the users), with peaks of 500-1000 users online. So maybe this is not the best way to go, as I would have a lot of activities, a lot of "notifications", and a lot of users.
Use a system like https://getstream.io/, because they also have integrations available for RoR and Ruby. The main concern here is pricing: if I'm not wrong, with that number of users and around 10 notifications per user per day, I would probably be paying more than $200/month, and it would keep growing as the user base grows.
Build my own system, maybe using Redis. But maybe this would be too complex and require a lot of time to produce good, efficient, working code.
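For option 1, this is roughly what I'm picturing; the read column and the grouping query are my own additions on top of public_activity, so treat them as untested assumptions:

```ruby
# app/models/like.rb -- tracking likes with public_activity.
# The Like model, its post association, and the lambdas here are assumptions.
class Like < ApplicationRecord
  include PublicActivity::Model
  tracked owner:     ->(controller, _model) { controller&.current_user },
          recipient: ->(_controller, model) { model.post }
end

# Migration adding a read flag to public_activity's activities table
# (my own addition, not part of the gem):
#   add_column :activities, :read, :boolean, default: false, null: false

# Grouped, unread like-notifications on the current user's posts, e.g.
# "User X and 399 more like your post" (assumes current_user has_many :posts):
grouped = PublicActivity::Activity
  .where(recipient_type: "Post", recipient_id: current_user.post_ids, read: false)
  .group(:recipient_id, :key)
  .count
# => { [42, "like.create"] => 400, ... }  # 400 unread likes on post 42
```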
So, having considered these options, I still don't know which one is best for me, or whether there are other possibilities.
If someone has dealt with these questions before, please let me know your thoughts and what you think is the correct way to go.
Thank you !! :)
I am creating a site where anyone is able to upvote and downvote content.
For the launch, I wish to not force people to create accounts in order to do this. However, without accounts, what is a reliable way to ensure people don't vote on the same content more than once?
The methods that I've looked at are ip based tracking and cookie/session based tracking.
Both have problems.
I am targeting a college campus, so many users could potentially share the same IP (through their dorm or apartment), whereas cookies/sessions are very easily circumvented if the user clears them or even uses a script to vote.
(Being a college campus, there's probably many tech savvy students who may do this)
As far as technology goes, are there more reliable ways to accomplish this?
You have very few options here. Cookies were invented for just this kind of thing, but as you know they can be deleted or altered by those who know how. If there were a reliable, easy way to do this, it would have a catchy name and be well documented all over the web.
What's the best way to keep users from sharing session cookies in Rails?
I think I have a good way to do it, but I'd like to run it by the stack overflow crowd to see if there's a simpler way first.
Basically I'd like to detect if someone tries to share a paid membership with others. Users are already screened at the point of login for logging in from too many different subnets, but some have tried to work around this by sharing session cookies. What's the best way to do this without tying sessions to IPs (lots of legitimate people use rotating proxies)?
The best heuristic I've found is the number of Class B subnets per unit of time (some ISPs use rotating proxies on different Class Cs). This has generated the fewest false positives for us, so I'd like to stick with this method.
Right now I'm thinking of applying a before filter to each request that keeps track, in memcached, of which subnets and session_ids a user has used, and applies the heuristic to that data to determine if the cookie is being shared.
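Roughly what I have in mind, as an untested sketch (the cache keys, threshold, and flagging hook are placeholders, and it only tracks subnets, not session_ids):

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :track_subnet_usage, if: :current_user

  SUBNET_LIMIT = 3        # max distinct Class B subnets per window (example value)
  WINDOW       = 1.hour   # sliding window for the heuristic

  private

  def track_subnet_usage
    # Class B subnet = first two octets (IPv4 only, for simplicity)
    subnet = request.remote_ip.split(".").first(2).join(".")
    key    = "subnets:#{current_user.id}"

    subnets = Rails.cache.fetch(key, expires_in: WINDOW) { [] }
    subnets |= [subnet]
    Rails.cache.write(key, subnets, expires_in: WINDOW)

    flag_possible_sharing(current_user) if subnets.size > SUBNET_LIMIT
  end

  def flag_possible_sharing(user)
    Rails.logger.warn "Possible account sharing: user #{user.id}"
    # e.g. notify an admin or mark the account for review
  end
end
```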
Any simpler / easier to implement ideas? Any existing plugins that do this?
You could tie the session information to browser information. If people are coming in from 3 or 4 different browser types within a certain time period, you can infer that something suspicious may be going on.
An alternative answer relies on a bit of social-engineering. If you have some heuristic that you trust, you can warn users (at the top of the page) that you suspect they are sharing their account and that they are being watched closely. A "contact us" link in the warning would allow legitimate users to explain themselves (and thus be permanently de-flagged). This may minimize the problem enough to take it off your radar.
One way I can think of would be to set the same random value in both the session and a cookie with every page refresh. Check the two to make sure they are the same. If someone shares their session, the cookie and session will get out of sync.
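An untested sketch of that idea (the login_path redirect and token length are made up):

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :verify_sync_token
  after_action  :rotate_sync_token

  private

  def verify_sync_token
    return if session[:sync_token].blank? # first request, nothing to compare yet
    if cookies[:sync_token] != session[:sync_token]
      reset_session
      redirect_to login_path, alert: "Please sign in again."
    end
  end

  def rotate_sync_token
    token = SecureRandom.hex(16)
    session[:sync_token] = token
    cookies[:sync_token] = token
  end
end
```

One caveat: concurrent requests (multiple tabs, AJAX calls) can rotate the token mid-flight and log out legitimate users, so you may want to rotate less aggressively than on every request.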