Security for Web Apps - ruby-on-rails

I'm working on a web application and we are getting ready to launch it. Because it will hold sensitive data for users, I want this to be as secure as possible. Here is a list of what we are currently doing...
Running the app on Heroku (Ruby on Rails)
The site is encrypted with 256-bit SSL (with forced SSL turned on)
Cookies are encrypted and we pass the Firesheep test
Passwords and everything in the database are one-way hashed, so even if someone got access to the database it would be useless.
We do not store any keys or passwords openly in the source code but rather use config vars (a sketch of these points follows below)
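For concreteness, here is a minimal sketch of what the forced-SSL, password-hashing, and config-var points above typically look like in a Rails app; the bcrypt gem is assumed, and the key name is a placeholder:

```ruby
# config/environments/production.rb
# Redirect all HTTP traffic to HTTPS and flag cookies as secure.
Rails.application.configure do
  config.force_ssl = true
end

# app/models/user.rb
# has_secure_password stores only a bcrypt digest (a one-way hash with a
# per-password random salt), never the plaintext password.
class User < ApplicationRecord
  has_secure_password # requires gem "bcrypt" and a password_digest column
end

# Secrets come from the environment (Heroku config vars), not the repo.
stripe_key = ENV.fetch("STRIPE_SECRET_KEY") # hypothetical key name
```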
Other than that, what else should or could we be doing? We are considering McAfee's site scan, but they quoted us $2,500 a year and I'm not sure it's worth it.
Does anyone have any suggestions at all?

Make sure to read the OWASP Top 10. Also, $2,500 is a rip-off; Sitewatch is free. You should also consider running a Web Application Firewall (WAF) like mod_security, but keep in mind this will cause problems for testing tools like McAfee or Sitewatch. Either configure mod_security to allow the scanners' specific IP addresses, or test your application before enabling the WAF.

After ruling out the usual suspects (XSS, SQL injection, mass assignment, etc.), the client side is where most problems come from, and this is often overlooked. I don't know what your site is about, but things like telling your users not to follow links in emails they did not explicitly request usually deliver the highest bang for the buck.
Best regards,
-- J. Fernandes

I'd recommend checking out the OWASP Top 10: http://owasptop10.googlecode.com/files/OWASP%20Top%2010%20-%202010.pdf
The OWASP Top Ten provides a powerful awareness document for web application security. The OWASP Top Ten represents a broad consensus about what the most critical web application security flaws are. Project members include a variety of security experts from around the world who have shared their expertise to produce this list.
To verify your SSL configuration, you can try https://www.ssllabs.com/ssldb/index.html.
If you're curious about the sheer variety of attacks, check out Jeremiah Grossman's post titled Top Ten Web Hacking Techniques of 2010 and scroll down until you see "The Complete List".
If you want to fire off a few web app vulnerability scanning tools to catch the low-hanging fruit, you can try:
skipfish: http://code.google.com/p/skipfish/ (free)
netsparker community: http://www.mavitunasecurity.com/communityedition/ (free)
Look here for more: https://security.stackexchange.com/questions/32/what-tools-are-available-to-assess-the-security-of-a-web-application/
If you're really concerned about security, then adopting a secure development plan and working with someone trained in app security would obviously boost your confidence that things are being done right.
Regarding development, you may like the ideas presented in Microsoft's simplified SDL:
"The Security Development Lifecycle (SDL) is a security assurance process that is focused on software development."
"The process outlined in this paper sets a minimum threshold for SDL compliance. That said, organizations aren’t uniform – development teams should apply the SDL in a way that is suitable to the human talent and resources available, but doesn’t compromise organizational security goals."
Also, it is important to note that automated vulnerability scanning tools fail to identify most logical vulnerabilities, so don't rely solely on automated tools. For example (taken from OWASP):
"Setting the quantity of a product on an e-commerce site as a negative number may result in funds being credited to the attacker. The countermeasure to this problem is to implement stronger data validation, as the application permits negative numbers to be entered in the quantity field of the shopping cart."
Human intelligence is key to spotting logical issues.
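In Rails terms, the countermeasure in that example could be a one-line model validation. A minimal sketch, assuming a hypothetical LineItem model with a numeric quantity column:

```ruby
# app/models/line_item.rb
class LineItem < ApplicationRecord
  # Reject zero, negative, and fractional quantities at the model layer.
  validates :quantity, numericality: { only_integer: true, greater_than: 0 }
end
```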
Security is also all about maintenance. It is important to assign someone, or a team, the responsibility of astutely playing continuous defense.
Note: hashing the passwords doesn't imply infallible security. Dictionary, password-list, and brute-force attacks reveal weak passwords all the time. A very common attack is to use SQL injection to dump the user table (with its password hashes) and then run a password cracker against it to discover legitimate user/password pairs.
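To make the injection half of that attack concrete, here is roughly how the vulnerable and safe variants look in ActiveRecord, assuming a User model and controller params:

```ruby
# Vulnerable: user input is interpolated straight into the SQL fragment.
# An email value like "' OR '1'='1" changes the meaning of the query.
User.where("email = '#{params[:email]}'")

# Safe: ActiveRecord binds the value as a query parameter.
User.where("email = ?", params[:email])
# or, equivalently:
User.where(email: params[:email])
```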

You can find information about common Ruby on Rails application vulnerabilities and their countermeasures in the Zen Rails Security Checklist, which covers most of the OWASP Top 10 items.

Related

Keycloak for IDM

First and foremost, this post is not intended to disparage any of the parties mentioned in my question.
In fact, I wasn't sure whether I should ask this question on this forum, but after some thoughtful consideration I decided to post it here out of curiosity.
In short, I'm building an IAM platform for one of my customers. I set it up with Keycloak within a day, including a custom provider to connect to their legacy internal user database.
But I got a pretty shocking statement from my customer: they don't trust Keycloak because it's free and open source. They only trust commercial products, and they suggested I go with either ForgeRock or Okta.
I have my own way of answering that statement, but I would also like to hear some feedback from the experts here on the matter. Thanks in advance.
Maybe the customer's concern is that there is no commercial support for Keycloak. That's a very practical concern, e.g. if you are not available at some future time and all apps break when something strange happens after upgrading the authorization server.
Of course, on the technical side of things, keep your code portable by implementing standards-based solutions so that you can switch providers. Avoid vendor-specific pieces such as the Keycloak Adapters. A sketch of the standards-based approach follows.
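For example, every OpenID Connect provider publishes the same discovery document, so code written against it stays portable across Keycloak, ForgeRock, and Okta. A minimal sketch, with a placeholder issuer URL:

```ruby
require "json"
require "net/http"
require "uri"

# Standard OIDC discovery: the same endpoint shape works for any
# spec-compliant provider. The issuer URL below is a placeholder.
issuer = "https://login.example.com/realms/demo"
config = JSON.parse(
  Net::HTTP.get(URI("#{issuer}/.well-known/openid-configuration"))
)

puts config["authorization_endpoint"]
puts config["token_endpoint"]
puts config["jwks_uri"]
```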
DEPLOYMENT
As a containerized solution, Keycloak's deployment model supports multi-cloud setups, meaning you can run it with any cloud provider.
Then again, the Platform as a Service (PaaS) model of some providers is often attractive: no infrastructure to manage and the hope of high availability. With some PaaS providers the trade-off may be less control over behavior.
WHAT ARE THE REAL REQUIREMENTS?
Commercial support
Guidance on app scenarios
High Availability
Ease of management
Extensibility
Portability
Different customers have different viewpoints and there is no right answer. The usual thing software architects do is understand their audience and make recommendations, but let the customer decide - they are the boss, after all.

How can I ensure that my Ruby code is encrypted and invisible?

My product needs to be deployed and installed on the client's server. How can I ensure that my Ruby code is encrypted and invisible?
You can't. In order to execute the code, the CPU needs to understand the code. CPUs are much stupider than humans, so if the CPU can understand the code, then so can a human.
There are only two possibilities:
Don't give your client the code. (The "Google" model.) Instead, give them a service that runs your code under your control.
Give your client a sealed box. (The "XBox" model.) Give your client the code, pre-installed on a hardened, tamper-proof, secure computer under your control, running hardened, tamper-proof, secure firmware under your control, and a hardened, tamper-proof, secure OS under your control. Note that this is non-trivial: Microsoft employed some of the most brilliant hardware security, information security, and cryptography experts on the planet, and they still made a mistake that made the XBox easy to crack.
Unfortunately, you have excluded both those possibilities, so the answer is: you can't.
Note, however, that copying your code is illegal. So, if you don't do business with criminals, then it may not even be necessary to protect your code.
Here are some examples how other companies solve this problem:
Have a good relationship with your clients. People are less likely to steal from friends they like than from strangers they don't know, or people they actively dislike.
Make the product so good that clients want to pay.
Make the product so cheap that clients have no incentive to copy the code.
Offer additional value that cannot be obtained by copying the code, e.g. support, services, maintenance, training, customization, and consulting.
Especially in the corporate world, clients often prefer to pay, simply for having someone to sue in case something goes wrong. (You can see this as a special case of the last point.)
Note that copy protection schemes are not free. At a minimum you have to integrate one into your product, which takes developer time and resources. And this assumes the protection scheme itself is gratis, which is typically not the case: they are either pretty expensive, or you have to develop your own (which is also pretty expensive, because experienced cryptographers and infosec specialists are not cheap, and cheap cryptographers and infosec specialists will not be able to create a secure system).
This in turn increases the price of your product, which makes it more likely that someone can't afford it and will copy it.
Also, I have never seen a copy protection scheme that works. There's always something wrong with them. The hardware dongle is only available with an interface the client doesn't have. (For example, when computers stopped having serial and parallel ports in favor of USB, a lot of copy protection schemes still required serial or parallel ports and didn't work with USB-to-serial or USB-to-parallel adapters.) Or, the client uses a VM, so there is no hardware to plug the dongle into. Or, the copy protection scheme requires Internet access, but that is not available. Or, the driver of the dongle crashes the client's machine. Or, the license key contains characters that can't easily be typed on the client's keyboard. Or, the copy protection scheme has a bug that doesn't allow non-ASCII characters, but you are using the client's name as part of the key. Or, the manufacturer of the copy protection scheme changes the format of the dongle to an incompatible one without telling you, and without changing the type number or the color and physical form of the dongle, so you don't notice.
Note that none of this is hypothetical: all of these have happened to me as a user. Several of these happened at vendors I know.
This means that your support department will need a significant amount of resources to deal with those problems, which increases the cost of your product even further. It also decreases client satisfaction when they have problems with your product. (Again, I know some companies that use copy protection and get a significant number of support tickets because of it.)
There are industries where it is quite common that people buy the product but then use a cracked version anyway, because the copy protection schemes are so bad that the risk of losing your data to a cracked version from an untrusted source is lower than the risk of losing your data to the badly implemented copy protection scheme.
There is a company that is very successful, and much loved by its users, that does not use any copy protection in a market where everybody else does. This is how they do it:
Because they don't have to invest development resources into copy protection, their products are at least as good as their competition's for less development effort.
Because they don't have to invest development resources into copy protection, their products are cheaper than their competition's.
Because their products are not burdened with the overhead of copy protection, they are more stable and more efficient than their competition's.
They have fair pricing, based on income levels in their target countries, meaning they charge lower prices in poorer countries. This makes it less likely that someone copies their product because they can't afford it.
A single license can be used on as many machines as you like, both Windows and macOS.
There is a no-questions-asked, full-refund return policy.
The lead-developer and the lead-designer personally respond to every single support issue, feature request, and enhancement suggestion.
Of course, they know full well that people abuse their return policy. They buy the product, use it for a project, then give it back. But, they have received messages from people saying "Hey, I copied your software and used it in a project. During this project, I realized how awesome your software is, here's your money, and here's something extra as an apology. Also, I showed it to my friends and colleagues, and they all bought a copy!"
Another example is switch manufacturers. Most of them have very strict license enforcement. However, one of them goes a different route: there is one version of the firmware, and it always has all features enabled. Nothing bad will happen if you use features that you haven't paid for. However, when you need support, they will compare your config to your account, and then say "Hey, we noticed that you are using some features you haven't paid for. We are sure that this is an honest mistake on your part, so we will help you this once, but please don't forget to send us a purchase order as soon as possible, thanks!"
Guess which manufacturer I prefer to work with, and prefer to recommend?

Best method for protecting IP data downloaded to an iOS App?

I'm enhancing a commercial App which until now has used cloud AI models to analyse data and make predictions.
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients and it is essential that we secure any data downloaded to a device from theft.
The App is iOS only for now and I was intrigued by WWDC2020's CoreML update including support for encrypting models. This would be ideal but we can't use CoreML at the moment due to its API not supporting the methods our models require.
Nice to know though that this is a recognised issue with in-app ML model usage.
What is the best method and available options in iOS (>11.0) right now that won't run foul of encryption export laws or even Apple's app store rules etc?
Our models are JavaScript, which we run in a JavaScriptCore VM, with additional data files loaded from JSON string files.
My current thinking is to use something like iOS's AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain, and decrypt the data strings in memory before loading them into the JS VM.
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this.
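For illustration, the server-side half of the scheme described in the question might look roughly like this sketch in Ruby, assuming AES-256-GCM (so tampering is detectable at decrypt time); the file name and payload shape are placeholders:

```ruby
require "base64"
require "json"
require "openssl"

# Hypothetical server-side step: encrypt the model JSON the app will download.
model_json = File.read("model.json") # placeholder file name

key = OpenSSL::Random.random_bytes(32) # later sent to the app over HTTPS
iv  = OpenSSL::Random.random_bytes(12) # fresh random nonce per encryption

cipher = OpenSSL::Cipher.new("aes-256-gcm")
cipher.encrypt
cipher.key = key
cipher.iv  = iv
ciphertext = cipher.update(model_json) + cipher.final

# The app stores this payload and decrypts it in memory with the key
# it fetched over HTTPS and kept in the keychain.
payload = {
  iv:   Base64.strict_encode64(iv),
  tag:  Base64.strict_encode64(cipher.auth_tag),
  data: Base64.strict_encode64(ciphertext),
}.to_json
```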
The Data
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients and it is essential that we secure any data downloaded to a device from theft.
From the moment you make the data/secrets public, in the sense that you include them with your mobile app binary or later download them to the device and store them encrypted, you need to consider them compromised. There is no bulletproof way around this; no matter what you try, you can only make the data harder to steal. With all the instrumentation frameworks available to introspect and instrument code at runtime, your encrypted data can be extracted from the function that decrypts it:
Decrypt the data strings in memory before loading them into the JS VM.
An example of a very popular instrumentation framework is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
The Private Key
My current thinking is to use something like iOS's AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain.
While not hard-coding the private key in the app is a wise decision, it doesn't prevent an attacker from performing a man-in-the-middle (MitM) attack to steal it, or from using an instrumentation framework to hook into the code that stores it in the keychain. You may already be aware of this, but it's not clear from:
I can see the obvious weaknesses with this approach...
In my opinion, and as a side note, I think that you and the business first need to consider whether the benefit to the user of having the predictions made locally on their device outweighs the huge risk of moving the data from the cloud onto the device. Data protection laws also need to be taken into consideration, because the fines when a data breach occurs can have a huge impact on the organization's future.
iOS Solutions
What is the best method and available options in iOS (>11.0) right now that won't run foul of encryption export laws or even Apple's app store rules etc?
I am not an expert in iOS, thus I cannot help you much here, other than recommending that you use as many obfuscation techniques and runtime application self-protections (RASP) as possible on top of the solution you have already devised to protect your data, so that you make an attacker's life harder.
RASP:
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering.
You can also try to use advanced biometric solutions to ensure that a real user is present while the mobile app is being used, but bear in mind that the more skilled attackers will always find a way to extract the data to a command-and-control server. It's not a question of whether they will be able to, but of when it will happen, and when it happens it's a data breach, and you need to have planned ahead to deal with its business and legal consequences.
So after you apply the most suitable in-app defenses you still have one issue left to resolve, which boils down to ensuring your API server knows what is making the request, because it seems you have already implemented user authentication to establish on whose behalf the request is being made.
The Difference Between WHO and WHAT is Accessing the API Server
When downloading the data to the device you need to consider how you will ensure that your API server accepts download requests only from what you expect, a genuine instance of your mobile app, not from a script, a bot, etc., and I need to alert you that user authentication only says on whose behalf the request is being made, not what is making it.
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize, and identify in several ways, like using OpenID Connect or OAuth2 flows.
Think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
I see this misconception arise over and over, even among experienced developers, DevOps, and DevSecOps people, because our industry is more geared towards identifying the who than the what.
Others' Approaches
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this.
As I said previously, I am not an expert in iOS and I don't have more to offer you than what I have already mentioned in the iOS Solutions section, but if you want to learn how you can lock your mobile app to the API server so that it replies, with a very high degree of confidence, only to requests from a genuine instance of your mobile app, then I recommend you read my accepted answer to the question How to secure an API REST for mobile app?, specifically the sections Securing the API server and A Possible Better Solution, where you will learn how the Mobile App Attestation concept may be a possible solution for this problem.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the amazing work from the OWASP foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.

Test an iOS app security

I am working on an app. Suppose it should be secure and safe for the end user to the degree of a matter of life and death in the most extreme case. In reality it's not that critical, but let's assume it is.
Thus, I want to make sure that if serious bad guys get hold of the iPhone and do their tricky work to disassemble it, jailbreak it, whatever, to get the data from the app, they learn as little as possible.
I want to build and test the app and its environment in the safest way possible.
The questions are:
Are there official tools from Apple or other sources to test not only the app itself but all the security stuff?
How much should I be worried about bad guys gaining access to the filesystem? How can I prevent data from being revealed?
How reliable, e.g. free of backdoors, are existing encryption libraries?
For help with security testing an iOS app, I would recommend checking out OWASP's Mobile Security Project. It offers a lot of resources about common vulnerabilities in mobile applications, as well as guidance on the steps for testing a mobile application.
For your specific questions:
Xcode has a built-in Analyze feature that looks for problems within the source code of your application; this is a form of static analysis. There are also third-party tools that help with dynamic analysis, i.e. testing the running application. OWASP ZAP and Burp Suite are examples of tools in this category.
If a user has a jailbroken phone, they'll likely have access to the whole filesystem. It's also not possible to protect completely against reverse engineering. This post from the Information Security community might be helpful in that regard. You can, however, limit the sensitive information you store on the device. Be careful about what information is stored in log files, cached files, plist files, basically anything stored on the device. If the information is very sensitive, it might be better to store it on the server rather than the device, since you own the server and don't have direct control over a user's device.
I would consult the Developer's Guide to Encrypting and Hashing Data as well as the iOS Security Guide. I don't know about specific encryption libraries, but in general the most common problem is poor implementation of an encryption library rather than a problem with the library itself. Also, using existing libraries is generally better practice than trying to create your own.
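To make the "poor implementation" point concrete, a classic mistake (shown in Ruby for brevity; the same applies to any platform's crypto APIs) is reusing a fixed IV instead of generating a fresh one per message:

```ruby
require "openssl"

key = OpenSSL::Random.random_bytes(32)

# DON'T: a hard-coded IV reused across messages means identical plaintexts
# encrypt to identical ciphertexts, leaking structure even though the
# library itself is sound.
bad = OpenSSL::Cipher.new("aes-256-cbc")
bad.encrypt
bad.key = key
bad.iv  = "0" * 16

# DO: generate a fresh random IV per message and store it alongside
# the ciphertext.
good = OpenSSL::Cipher.new("aes-256-cbc")
good.encrypt
good.key = key
iv = good.random_iv # generates, sets, and returns a random IV
ciphertext = good.update("sensitive payload") + good.final
```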
I'd also consult the Information Security Community, they'll have more guidance on how to security test iOS applications.

How reliable is Heroku for a sensitive app?

How reliable is Heroku for a sensitive app?
Can they be trusted for a very important app?
Have you used it for a long time? What's your opinion?
Thanks
Heroku provides information about its security policy in its legal section. According to the security documents, it seems to have a really reliable infrastructure, and I have been using it for a year without any issues. I also haven't heard of any notable security flaws in its system.
Some technical restrictions, such as the read-only file-system, can be a hassle at first glance but increase the security of the platform.
It is indeed much more secure than many other VPS providers and, unless you have the benefit of a team of sysadmins and security experts, you can probably trust them more than you can trust your own infrastructure.
A good infrastructure doesn't mean bulletproof software. Your first priority should be to make sure your software doesn't include any security flaws. Stress test your software, and use unit and integration tests to make sure it is stable and that you are not reintroducing issues that have already been fixed.
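As a minimal sketch of that kind of regression testing in a Rails app (routes and expected responses are placeholders):

```ruby
# test/integration/authentication_test.rb
require "test_helper"

class AuthenticationTest < ActionDispatch::IntegrationTest
  test "login is rejected with a wrong password" do
    post "/login", params: { email: "user@example.com", password: "wrong" }
    assert_response :unauthorized
  end

  test "profile requires an authenticated session" do
    get "/profile"
    assert_redirected_to "/login"
  end
end
```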
