Malicious entity slowly creating accounts on asp.net server - asp.net-mvc

I'm not sure this is a Stack Overflow-appropriate question. If not, I'd appreciate a pointer to a more suitable forum, as I haven't been able to find one.
I have a small website project that gets a few hundred daily unique users, and on average one or two people create an account per day. Yesterday I noticed that more users were signing up (about 50), and today another 150 signed up. Wonderful, right? Except that I then noticed that while the emails look legitimate, all of the usernames end in the same letters. My site requires that email be confirmed before a user gets any additional access, and none of these accounts have confirmed their email. There is no apparent regularity to the creation of these accounts other than that it is happening with slowly increasing frequency.
My first question is: what is the most effective way to prevent this with the least user impact? The only thing I can think of is adding a captcha step to account registration. I really dislike captchas, so if anyone has a better idea for a general solution I'd appreciate it.
I'm also interested in this: what could this malicious user be gaining by doing this? It's not yet anything other than a minor nuisance to me. The accounts are easily identifiable, and they're not (yet) being created at a rate that could represent anything like a denial-of-service attack. The only thing I can think of is that they're trying to confirm that these emails are registered on my site, but I can't think why that would be useful. Also, if the email addresses are real, they're using my site to spam those addresses, although the spam is just a registration confirmation for my site. So I guess they might eventually get my email provider to shut me down if they keep this up.
Thanks in advance for any help, even if that's a redirect to a different forum.
Other possibly useful information:
My site is hosted on Azure using ASP.NET MVC 5 with Identity Framework.
I believe that the emails are legitimate because my email provider shows a very small bounce rate (around 1%) on these emails.

There are two more options: SMS confirmation (limiting signups per phone number) and IP restriction.
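The IP-restriction option can be sketched as a simple per-IP rate limit on the registration endpoint. This is a minimal, framework-agnostic Ruby sketch (the class and parameter names are invented for illustration); a production version would live in middleware and share state across servers, e.g. via Redis:

```ruby
# Minimal sliding-window rate limiter for signups, keyed by IP.
# Hypothetical sketch: in production you'd back this with Redis
# or a middleware like rack-attack rather than an in-process Hash.
class SignupRateLimiter
  def initialize(limit:, window_seconds:)
    @limit = limit
    @window = window_seconds
    @attempts = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if this IP is still allowed to register.
  def allow?(ip, now = Time.now)
    timestamps = @attempts[ip]
    timestamps.reject! { |t| now - t > @window } # drop expired entries
    return false if timestamps.size >= @limit
    timestamps << now
    true
  end
end
```

For example, `SignupRateLimiter.new(limit: 3, window_seconds: 3600)` would allow at most three registrations per IP per hour.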

Related

Email clients access my webapp with changed / strange URLs

For a few years now I have observed some strange behaviour, most likely triggered by end users' email clients, in my webapp (a Ruby on Rails system, though that doesn't matter much).
I run a mid-sized business and send out thousands of emails each month to customers who buy leads from me.
The emails include two links, one to buy the lead and the other to give feedback. There is a dynamic part in both URLs which is a UUID, example:
offer/968ec0c1-e105-4c70-95b2-fd0c799b58f3
and
feedback/968ec0c1-e105-4c70-95b2-fd0c799b58f3
Every now and then, my webapp gets accessed on both links at the same time (which makes me confident it is not the user, since they get accessed within the very same second) but with different dynamic parts in the URL, so I see in my logs
offer/NGVjZjA0YT
and
feedback/NGVjZjA0YT
It is always a random string with a length of 10 chars.
So this is not a big deal, since it happens only once or twice per week and as far as I can tell no user is really affected, but I still wonder what's behind it. Did any of you experience something similar?
Maybe an email client trying to crawl the link or load a preview sees a UUID pattern in the URL and changes it for some reason?
As a side note, I disabled link click tracking in my email sending provider (SendGrid), so they shouldn't be replacing the links in the email. I also experienced this when sending links via AWS SES.
I'm just curious. Any ideas or experiences? Thanks in advance & have a great day!
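Whatever the cause, one cheap defence is to validate the dynamic segment before it ever reaches the database: the legitimate links always carry a UUID, so the 10-character strings can be rejected with a 404 immediately. A minimal Ruby sketch (the helper name is invented for illustration):

```ruby
# Guard for the dynamic URL segment: only well-formed UUIDs reach the
# database lookup; anything else (like the 10-char strings in the logs)
# can be answered with a 404 straight away. \h matches a hex digit.
UUID_RE = /\A\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\z/

def valid_offer_id?(segment)
  !!(segment =~ UUID_RE)
end
```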

Rails/Devise - preventing spam signups?

We have been noticing a large number (~3400) of fake signups over the last year and have not been able to determine where they are coming from. Common parameters:
They often come from weird, yet validly formatted, email addresses (many in the .ru TLD or from thefmail.com)
Some use Cyrillic or Arabic characters in their name (our content is basically aimed only at US English speakers)
They do NOT trigger the Intercom.io javascript for account signup notifications
They somehow defeat reCaptcha 3
They sometimes use URLs for their username
They don't confirm their accounts (Devise :confirmable)
We've been handling these by disabling the accounts, and there are obviously a few items above we could use to identify these before they even get created, but I was wondering if someone has cracked this nut already, or if there are some simple best practices (a pwned DB check?) that might cut this down to a dull roar or eliminate it entirely.
The two big "I don't get its" are bypassing the JS and defeating reCAPTCHA. Is this just mechanical-Turking?
Do you know whether these users were created from the same IP address? (Probably not.) Are the accounts created sporadically or in batches? The rack-attack gem could be used to mitigate this issue, especially if at least one of those conditions holds. It also comes with a fail2ban-style filter which could be helpful, as it is designed to detect suspicious requests from 'misbehaving' clients.
I also can't understand how they possibly bypassed recaptcha.
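For reference, the rack-attack suggestion could look roughly like this in an initializer. This is a hypothetical config sketch assuming the rack-attack gem is installed and Devise's default `POST /users` registration route; the path, limit, and period would all need tuning:

```ruby
# config/initializers/rack_attack.rb (illustrative sketch)
# Allow at most 3 signup attempts per IP per hour.
Rack::Attack.throttle("signups/ip", limit: 3, period: 3600) do |req|
  # The block's return value is the throttle discriminator;
  # returning nil skips throttling for this request.
  req.ip if req.path == "/users" && req.post?
end
```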

Is AmazonAWS creating non-genuine hits? How can I verify?

I have a site that logs a "hit" whenever a user brings up the detail page for a particular item (by saving a record to a Hits table that captures the date/time and the IP of the machine), so that admins can see how many hits each item gets.
We get random instances where items are being hit multiple times a day, in twos. So, in the data, it looks like a user is viewing an item, but the site is logging the hit twice in the database (same item, same date/time, same IP address, etc.). Most hits are recorded only once, and all my testing has led me to believe the site is working correctly.
I'm noticing that particular IP addresses are causing the double hits. When I do reverse IP searches, all the "double hits" are tied to IP addresses that trace back to Amazon AWS in northern Virginia, on the other side of the country. Our site is used locally, and the single hits come from IPs that trace back to local areas.
Is there a bot hitting my site from afar? Should I block Amazon AWS in Azure (which is where my site is hosted), or would that lock out genuine users? Is there a way I can detect in my code whether a hit is genuine (my site is in .NET MVC)? Has anyone faced a similar situation?
Note: this IS relevant to software engineering, because part of the question is asking how I can verify in my code that a hit is genuine!
Basically, what I found out (no thanks to the elitist user who downvoted my question and offered no contribution) is that my hit counter is being inflated by web crawlers. The quick and dirty solution is to add a robots.txt file to block crawlers from hitting that page. Of course, that comes with the sacrifice that my client's site will no longer come up should the public do a Google search for the product being offered.
One alternative is the hidden link method; in which we put a hidden page on the site that no human user would ever access. When a bot hits that page, we record the IP in a "blacklist" table. Then, before our real hit counter logs a hit, it checks the user's IP against the blacklist.
Another alternative is to maintain a blacklist of known bot User-Agent strings. We check each request's User-Agent against that list to determine whether the visitor is a bot.
Neither of these solutions is 100% reliable, though.
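The User-Agent check can be sketched in a few lines. The signature substrings below are common crawler tokens, but any real list is necessarily incomplete, and User-Agents can be spoofed, which is why this is only a partial filter:

```ruby
# Sketch of the User-Agent blacklist idea: before recording a hit,
# check the request's User-Agent against known crawler signatures.
# The list is illustrative and never complete.
BOT_SIGNATURES = ["bot", "crawler", "spider", "slurp", "curl"].freeze

def likely_bot?(user_agent)
  ua = user_agent.to_s.downcase
  BOT_SIGNATURES.any? { |sig| ua.include?(sig) }
end
```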
These are fairly adequate responses to my question. Of course, since this is StackExchange (or StackOverflow or StackYourMomma or whatever it is), people are just going to downvote your question and act like you're beneath a response because you didn't follow all the little bull crap rules that come along with being a member of the SE/SO/SYM community.

How should I secure a private rails application from outside users?

I need to build an application that will only serve people in my workplace. Currently, everyone has a specific company email, which has a unique domain and format.
I created a regular expression that only validates our company email addresses, and configured the application to require email confirmation. This seems like it should be sufficient, unless a malicious person:
Finds a flaw in my expression.
Finds a way around confirmation.
Somehow gets a company email address.
I feel like this isn't secure enough though. Maybe I need to take it one more step, with some kind of pre-approved email list or something?
I'm curious if anyone else has faced this problem. (Most likely.)
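The domain-restricted email check from the question might look like the sketch below ("example-corp.com" is a placeholder for the real company domain). One easy-to-miss flaw worth guarding against in Ruby specifically: anchoring with `^` and `$` instead of `\A` and `\z` lets multiline input slip past, since `^`/`$` match at line boundaries:

```ruby
# Hypothetical sketch of the company-email validation.
# \A and \z anchor the whole string, not just a line, so a payload
# like "jane@example-corp.com\nattacker@evil.com" is rejected.
COMPANY_EMAIL_RE = /\A[\w+\-.]+@example-corp\.com\z/i

def company_email?(address)
  !!(address =~ COMPANY_EMAIL_RE)
end
```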
Ok, here is my solution:
This will enable a second level of security:
On the User model, create a boolean field called user_active.
Then, create an Admin page that will only allow your admins to check/uncheck accounts.
Then, you can call User.user_active? before logging your users in.
This makes it much harder for somebody who manages to sneak around your security to access your app.
This would be a pain with tons of users, but if you only have 200 or so, this will work.
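As plain Ruby, the gate described above amounts to checking the flag before login (the Struct below stands in for the Rails model; this is a sketch, not Devise integration code):

```ruby
# user_active defaults to false, so every new account needs an
# admin to approve it before login succeeds.
User = Struct.new(:email, :user_active, keyword_init: true)

def can_log_in?(user)
  !!user.user_active
end
```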

Handling user abuse in rails

I've been working on a web app that could be prone to user abuse, especially spam comments and accounts. I know that reCAPTCHA will take care of bots as far as fake users are concerned, but it won't do anything about users who create an account and then put their spam comments on autopilot (as I've seen on Twitter countless times).
The solution I've thought up is to let any user flag another user, and then have a list of flagged users (a boolean attribute) come up in a users index action accessible only by the admin. The users that have been flagged then become candidates for banning (another boolean attribute) or unflagging. Banned users will still be able to access the site but with greatly reduced privileges. For certain reasons, I don't want to delete users entirely.
However, when I thought of it, I realized that going through a list of flagged users to decide which ones should be banned or unflagged could be potentially very time consuming for an admin. Short of hiring someone to do the unflagging/banning of users, is there a more automated and elegant way to go about this?
I would create a table named abuses, containing both the reported user and the one who filed the report. Instead of the flagged boolean field, I suggest a counter cache column such as abuse_count. When this column reaches a predefined value, you could automatically "ban" the user.
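The counter-cache rule can be sketched like this (the threshold and names are illustrative, and the Struct stands in for the ActiveRecord model):

```ruby
# Each abuse report increments abuse_count; crossing the threshold
# bans the user automatically, replacing manual admin review.
BAN_THRESHOLD = 5

ReportedUser = Struct.new(:abuse_count, :banned, keyword_init: true)

def record_abuse_report!(user)
  user.abuse_count += 1
  user.banned = true if user.abuse_count >= BAN_THRESHOLD
  user
end
```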
Before "Web 2.0", web sites were moderated by administrators. Now, the goal is to get communities to moderate themselves. StackOverflow itself is a fantastic case study. The reputation system enables users to take on more "administrative" tasks as they prove themselves trustworthy. If you're allowing users to flag each other, you're already on this path. As for the details of the system (who can flag, unflag, and ban), I'd say you should look at various successful online communities (like StackOverflow) to see how they work, and how successful they are. In the end it will probably take some trial and error, since all communities differ.
If you want to write some code, you might create a script that looks for usage patterns typical of spammers (eg, same comment posted on multiple pages), though I think the goal should be to grow a community that does this for you. This may be more about planning than programming.
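A script for the "same comment posted on multiple pages" pattern might start as something like this sketch (the data shapes are invented; a real version would run as a periodic job over the comments table):

```ruby
# Group a user's comments by normalized body and flag any body that
# appears on at least min_pages distinct pages. Comments are hashes
# with :body and :page_id keys for illustration.
def repeated_comment_bodies(comments, min_pages: 3)
  comments
    .group_by { |c| c[:body].strip.downcase }
    .select { |_body, group| group.map { |c| c[:page_id] }.uniq.size >= min_pages }
    .keys
end
```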
Some sophisticated spammers are happy to spend their time breaking your captcha if they feel the reward is high enough. You should also consider a spam-detection service such as Akismet, for which there's a great Rails plugin (https://github.com/joshfrench/rakismet).
There are other alternatives such as Defensio (https://github.com/thewebfellas/defensio-ruby), as well as a gem I found once that worked pretty well at detecting common blog spam, but I can't for the life of me find it anymore.