Signing .NET assemblies: does this really protect my assembly from tampering?

I am implementing a "locking" system in my app which protects it against being copied and used illegally. The system checks the signature of a hardware-based code and expects it to be signed with a private key that only my company owns. (The app contains the public key to validate the signature.)
I want to make sure that no one changes my locking mechanism in the app, so I want to sign my app's assembly, and I think that makes sense.
Since I have never seen the CLR complain about an assembly's signature being invalid, I want to make sure this system really works. Does it? What should I do to make it work?
Can an attacker concentrate his efforts on the CLR to make it ignore my signature? That is, if he can't tamper with my code because I've signed it, can he tamper with the CLR?
Generally, I would like to know about your experience with such safeguards and protection technologies. Can anyone suggest anything else?

Assembly signing is designed to let applications/assemblies reference an assembly and be sure that they get the assembly they originally referenced. If someone wanted to, they could in theory decompile your entire app and recompile it without signing (i.e. they could recompile the referencing assembly so that it references an unsigned version of the referenced assembly).
They would then be able to modify the code as they wanted, because the client (exe) would now reference an unsigned (or 're-signed') dll.
To make the process of decompilation and recompilation more difficult, you could try creating a mixed-mode C++/CLI assembly containing both managed and native code. But yeah... ultimately people have all your binaries to hand, and with enough effort they can probably get around any licensing system you think up.

There's a certain amount of misconception about signed assemblies. Assembly signing is not, as mackenir pointed out, a secure mechanism for preventing your assemblies from being tampered with. The following article on CodeProject gives a pretty good treatment of the subject:
http://www.codeproject.com/KB/security/StrongNameExplained.aspx

Signing your code only allows tamper detection; it doesn't prevent tampering. Somebody who knows what they are doing can remove your signature and, if necessary, add their own.
Really, most copy protection schemes are a waste of time and can be subverted, and they also tend to annoy the hell out of your paying customers. Ultimately you can't prevent somebody from modifying and running your code on hardware that they control. Just make it sufficiently difficult that it is easier to go to the purchasing department and get a check written, and difficult to forget that you haven't got a licensed copy. Those who care will eventually pay, and those who don't never will.
Also note that even if you think most people won't bother cracking your scheme, or haven't the skill to do it, it doesn't matter: once one person has subverted your copy protection scheme, they can make it available on a torrent site for those without the skill to do it themselves, and it is game over.

One technique you can use to deter tampering is to use the public key of your assembly to encrypt essential parts of your software, such as application/algorithm parameters. If the public key has been changed, the decryption will not work and your app will crash.
Some obfuscators, such as Crypto Obfuscator, use this technique with their string encryption feature: the public key of your assembly is used to encrypt all strings, so if the public key has been changed or removed, decryption will fail and your app won't even start.

Related

How can I make sure that my Ruby code is encrypted and invisible?

My product needs to be deployed and installed on the client's server. How can I make sure that my Ruby code is encrypted and invisible?
You can't. In order to execute the code, the CPU needs to understand the code. CPUs are much stupider than humans, so if the CPU can understand the code, then so can a human.
There are only two possibilities:
Don't give your client the code. (The "Google" model.) Instead, give them a service that runs your code under your control.
Give your client a sealed box. (The "XBox" model.) Give your client the code, pre-installed on a hardened, tamper-proof, secure computer under your control, running hardened, tamper-proof, secure firmware under your control, and a hardened, tamper-proof, secure OS under your control. Note that this is non-trivial: Microsoft employed some of the most brilliant hardware security, information security, and cryptography experts on the planet, and they still made a mistake that made the XBox easy to crack.
Unfortunately, you have excluded both those possibilities, so the answer is: you can't.
Note, however, that copying your code is illegal. So, if you don't do business with criminals, then it may not even be necessary to protect your code.
Here are some examples of how other companies solve this problem:
Have a good relationship with your clients. People are less likely to steal from friends they like than from strangers they don't know, or people they actively dislike.
Make the product so good that clients want to pay.
Make the product so cheap that clients have no incentive to copy the code.
Offer additional value that clients cannot get by copying the code, e.g. support, services, maintenance, training, customization, and consulting.
Especially in the corporate world, clients often prefer to pay, simply for having someone to sue in case something goes wrong. (You can see this as a special case of the last point.)
Note that copy protection schemes are not free. You at least have to integrate them into your product, which takes developer time and resources. And this assumes that the protection scheme itself is gratis, which is typically not the case: commercial schemes are either pretty expensive, or you have to develop your own (which is also pretty expensive, because experienced cryptographers and infosec specialists are not cheap, and cheap cryptographers and infosec specialists will not be able to create a secure system).
This in turn increases the price of your product, which makes it more likely that someone can't afford it and will copy it.
Also, I have never seen a copy protection scheme that works. There's always something wrong with them. The hardware dongle is only available with an interface the client doesn't have. (For example, when computers stopped having serial and parallel ports in favor of USB, a lot of copy protection schemes still required serial or parallel ports and didn't work with USB-to-serial or USB-to-parallel adapters.) Or the client uses a VM, so there is no hardware to plug the dongle into. Or the copy protection scheme requires Internet access, but that is not available. Or the driver of the dongle crashes the client's machine. Or the license key contains characters that can't easily be typed on the client's keyboard. Or the copy protection scheme has a bug that doesn't allow non-ASCII characters, but you are using the client's name as part of the key. Or the manufacturer of the copy protection scheme changes the format of the dongle to an incompatible one without telling you, and without changing the type number, or the color and physical form of the dongle, so you don't notice.
Note that none of this is hypothetical: all of these have happened to me as a user. Several of these happened at vendors I know.
This means that your support department will need a significant amount of resources to deal with those problems, which increases the cost of your product even further. It also decreases client satisfaction when they have problems with your product. (Again, I know some companies that use copy protection and get a significant number of support tickets because of it.)
There are industries where it is quite common that people buy the product but then use a cracked version anyway, because the copy protection schemes are so bad that the risk of losing your data to a cracked version from an untrusted source is lower than the risk of losing your data to the badly implemented copy protection scheme.
There is a company that is very successful, and very loved by its users that does not use any copy protection in a market where everybody uses copy protection. This is how they do it:
Because they don't have to invest development resources into copy protection, their products are at least as good as their competition's for less development effort.
Because they don't have to invest development resources into copy protection, their products are cheaper than their competition's.
Because their products are not burdened with the overhead of copy protection, they are more stable and more efficient than their competition's.
They have fair pricing, based on income levels in their target countries, meaning they charge lower prices in poorer countries. This makes it less likely that someone copies their product because they can't afford it.
A single license can be used on as many machines as you like, both Windows and macOS.
There is a no-questions-asked, full-refund return policy.
The lead-developer and the lead-designer personally respond to every single support issue, feature request, and enhancement suggestion.
Of course, they know full well that people abuse their return policy. They buy the product, use it for a project, then give it back. But, they have received messages from people saying "Hey, I copied your software and used it in a project. During this project, I realized how awesome your software is, here's your money, and here's something extra as an apology. Also, I showed it to my friends and colleagues, and they all bought a copy!"
Another example is switch manufacturers. Most of them have very strict license enforcement. However, one of them goes a different route: there is one version of the firmware, and it always has all features enabled. Nothing bad will happen if you use features that you haven't paid for. However, when you need support, they will compare your config to your account, and then say "Hey, we noticed that you are using some features you haven't paid for. We are sure that this is an honest mistake on your part, so we will help you this once, but please don't forget to send us a purchase order as soon as possible, thanks!"
Guess which manufacturer I prefer to work with, and prefer to recommend?

iOS bitcode - security concerns

We are distributing a software module for iOS online. Apple is advocating bitcode and even making it mandatory for apps on some platforms (watchOS/tvOS), which forces us to deliver this software module (a static library) with bitcode.
The concern is how secure bitcode is against reverse engineering and decompilation (like Java bytecode), and how to protect against it. It is easy for anyone to download the library from the website, extract the bitcode (IR) from it, and decompile it. Some valuable information on this is here:
https://lowlevelbits.org/bitcode-demystified/
Bitcode may not be a concern for apps, as Apple will strip it, but it definitely appears to be a concern for static libraries.
Any insights?
As the link notes, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’" Yep. Totally true.
Also, if you ship non-bitcode libraries, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’"
Also, if you ship non-bitcode apps, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’"
There is no situation where this is not true. Tools as cheap as Hopper (my tool of choice, but there are also some cheaper solutions) and as elaborate as IDA can decompile your functions into passable C code.
If you're working with Cocoa (ObjC or Swift), you have made it even easier to reverse engineer because it's so easy to dynamically introspect Cocoa.
This is not a solvable problem. Both apps and libraries can try to employ obfuscation techniques, but they are complex, fragile, and typically require significant expense or expertise (and often both). In any case, you will need to continually improve your obfuscation as people break it. This is fairly pointless for a library, since there's very little you could re-protect once it leaks, but you can try.
It will leak. That's not solvable. Bitcode doesn't change a whole lot about that. It might be somewhat simpler to read IR than ARM assembly, but not that much, and certainly not if the thing you're protecting is small (like a small algorithm or a key).
There are some obfuscation vendors out there. Product recommendations are off-topic for Stack Overflow (because they attract spam), but search for "ios obfuscation" and you'll find them. In this space, since it's just "tricky hiding" (not security or encryption) you generally get what you pay for. Open source solutions make little sense, since the whole point is to be tricky and hide how you're doing it. I've worked with some open source obfuscation libraries that make it easier to extract secrets from your code (because they're trivial to reverse, and their use marks the parts of the code where you're hiding things).
If this is important to your business plan, then budget for that, and expect it to introduce some challenging bugs, and expect it to be broken anyway (but maybe take longer).
@Rob Napier, you are wrong; you are comparing apples to oranges. Reading assembly code or disassembling with IDA is worlds apart from reading code generated by decompiling intermediate code. Bitcode is a total nuisance.

iOS lock Cocoapod/Framework

I have to develop a library for a 3rd party. This library has to be secure so that only parties that have credentials can use it.
I am going in the direction of providing an API key that the 3rd-party app will have to supply in order for the library to work.
Is there any way to do some sort of locking of a CocoaPod? Or is a framework a better solution for this kind of problem?
Does anyone have any other solution/suggestion?
It would be better if you elaborated a bit on what exactly you are trying to achieve and what type of task this library performs, maybe with some examples. If I understand you correctly, you need to prevent usage of the library by those who did not pay for the service, etc.
The key approach will be OK if the library is a client for some webservice. In this case, you should have API keys anyway to protect the API itself, so the client library will just forward this key to the webservice. This approach is widely used in lots of client libraries.
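As a rough illustration of that approach, here is a minimal sketch of a client library that just forwards a caller-supplied API key with every request. The endpoint URL, header name, and type names are hypothetical; the real enforcement happens on the server, which validates the key.

    import Foundation

    /// Sketch of a client library that forwards a caller-supplied API key.
    /// The base URL and header name below are placeholders.
    public final class AcmeClient {
        private let apiKey: String
        private let baseURL = URL(string: "https://api.example.com/v1")!

        public init(apiKey: String) {
            self.apiKey = apiKey
        }

        /// Fetches a resource; the server rejects the call if the key is missing or revoked.
        public func fetchReport(completion: @escaping (Result<Data, Error>) -> Void) {
            var request = URLRequest(url: baseURL.appendingPathComponent("report"))
            request.setValue(apiKey, forHTTPHeaderField: "X-Api-Key")   // key travels with every call
            URLSession.shared.dataTask(with: request) { data, _, error in
                if let error = error {
                    completion(.failure(error))
                } else {
                    completion(.success(data ?? Data()))
                }
            }.resume()
        }
    }

Note that the library itself contains no secret worth protecting in this model; the API key is just passed through, and the server decides who gets service.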
If the library only works locally (for example, it performs some science-heavy computation / computer vision / etc.), then you can just give out the compiled library and a license to those who have already paid. You can protect it with a key of course, but that is not very useful, as the key will likely be validated locally, and therefore it can easily be compromised or reverse engineered. So the only good way is to distribute the library to those who purchased / requested a trial, and bind them to a license which restricts the library's usage.
EDIT
If by "Cocoapod" you mean "distributing as source code" and by "Framework" you mean "distributing as a binary", then it depends on what exactly you do in the library. If it just connects to the endpoint and marshals data (e.g. parsing), you can distribute the source version, as there is no "know-how" in that. On the other hand, if something business-related and specific is done besides contacting the API, use closed-source distribution (a binary).
Source distribution has the benefit that you don't have to recompile when new target architectures appear. It is also easier to distribute via CocoaPods, and your library's users will like it more (for a variety of reasons).

Testing an iOS app's security

I am working on an app. Say it should be secure and safe for the end user, to the degree of a matter of life and death in the most extreme case. In reality it's not that serious, but let's assume it is.
Thus, I want to make sure that if serious bad guys get this iPhone and do their tricky work to disassemble it, jailbreak it, whatever, to get the data from the app, they end up with as little useful information as possible.
I want to build and test the app and its environment in the safest way possible.
The questions are:
Are there official tools from Apple or other sources to test not only the app itself but all the security stuff?
How much should I be worried about bad guys gaining access to the filesystem? How can I prevent data from being revealed?
How reliable, e.g. backdoor-free, are existing encryption libraries?
For help with security testing an iOS app, I would recommend checking OWASP's Mobile Security Project. There are a lot of resources about common vulnerabilities in mobile applications, but also guidance on the steps to test a mobile application.
For your specific questions:
Xcode has a built-in Analyze feature that looks for problems within the source code of your application. This is a form of static analysis. There are also third-party tools that help with dynamic analysis, i.e. testing the running application; OWASP ZAP and Burp Suite are examples of tools in this category.
If a user has a jailbroken phone, they'll likely have access to the whole filesystem. It's also not possible to protect completely against reverse engineering. This post from the Information Security community might be helpful in that regard. You can, however, limit the sensitive information you store on the device. Be careful about what information is stored in log files, cached files, plist files, basically anything stored on the device. If the information is very sensitive, it might be better to store it on the server rather than on the device, since you own the server and don't have direct control over a user's device.
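As one small mitigation (a sketch only; the file name and directory choice are placeholders), anything you do have to cache locally can at least be written with iOS data protection enabled rather than dropped into UserDefaults or an unprotected plist:

    import Foundation

    // Hypothetical example: cache a server response on disk with iOS data
    // protection, instead of storing it in UserDefaults or a plain plist.
    func cacheSensitiveData(_ data: Data) throws {
        let cachesDir = try FileManager.default.url(for: .cachesDirectory,
                                                    in: .userDomainMask,
                                                    appropriateFor: nil,
                                                    create: true)
        let fileURL = cachesDir.appendingPathComponent("session-cache.bin")
        // .completeFileProtection keeps the file encrypted whenever the device is locked.
        try data.write(to: fileURL, options: [.atomic, .completeFileProtection])
    }

This does not help against an attacker who controls an unlocked, jailbroken device, but it raises the bar for casual filesystem browsing and lost-device scenarios.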
I would consult the Developer's Guide to Encrypting and Hashing Data as well as the iOS Security Guide. I don't know about specific encryption libraries, but in general the most common problem is poor implementation of encryption libraries rather than problems with the libraries themselves. Also, generally using existing libraries is a better practice than trying to create your own.
I'd also consult the Information Security Community, they'll have more guidance on how to security test iOS applications.

Decompilation possibilities in iOS and how to prevent them

I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app that is distributed through the App Store. This can be done with a jailbreak, and I think by copying the app from memory to disk. With some tools it is possible to:
read out strings (with the strings tool)
dump the header files
reverse engineer to assembly code
It seems NOT to be possible to reverse engineer to Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with key or log in to websites). So I came up with the following questions:
Can someone reconstruct my saving and encryption or login methods from assembly? I mean, can he understand what exactly is going on (what is saved to which path at which time, which key is used, etc., and with what credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings which cannot be read out with strings or read in assembly? I know one can obfuscate strings, but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently motivated person with skills will be able to find out whatever information you don't want them to find, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps by using the purchased or downloaded binary. This is static inspection and is facilitated with standard disassembly tools, although you need a tool good enough to resolve symbols from the linker and understand method calls well enough to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specifically to your secure login question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on being able to send any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before getting the data into the pipe (i.e. you can't depend on sending any plain text to standard SSL libraries). You can compile your own set of SSL libraries and link them directly into your app, which means you don't get any system performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly-tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
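As a minimal sketch of that "encrypt before the data reaches the pipe" idea, using Apple's CryptoKit: how the symmetric key is agreed on with the server is deliberately left out, and all names here are placeholders, so treat this as an illustration rather than a complete protocol.

    import Foundation
    import CryptoKit

    // Sketch: encrypt the payload before it ever reaches the networking layer,
    // so a shimmed/intercepted transport only sees ciphertext.
    // How `sharedKey` is established (e.g. via a key exchange with the server)
    // is out of scope for this sketch.
    func sealedPayload(for message: Data, using sharedKey: SymmetricKey) throws -> Data {
        let sealedBox = try AES.GCM.seal(message, using: sharedKey)
        // `combined` packs nonce + ciphertext + tag into one blob for transmission;
        // it is non-nil for the default 12-byte nonce used here.
        return sealedBox.combined!
    }

    func openPayload(_ blob: Data, using sharedKey: SymmetricKey) throws -> Data {
        let sealedBox = try AES.GCM.SealedBox(combined: blob)
        return try AES.GCM.open(sealedBox, using: sharedKey)
    }

A shimmed transport then only ever sees the sealed blob; of course the key exchange itself still has to be protected.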
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text, or a key stored in a plist under a name that contains "key", are both classics)
avoid simple passwords wherever possible
But most important is creating systems where any keys stored in the application itself are useless without information the user has to enter themselves (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
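A minimal sketch of that idea, assuming CryptoKit (HKDF requires iOS 14+): the blob bundled with the app is useless on its own, because the decryption key only comes into existence after the user enters their passphrase. A real implementation would prefer a slow password KDF such as PBKDF2 or scrypt; the function and parameter names here are purely illustrative.

    import Foundation
    import CryptoKit

    // Sketch: the blob shipped with the app cannot be decrypted without the
    // passphrase the user types at runtime, combined with a per-install salt.
    // A production implementation would use a slow password KDF (PBKDF2/scrypt)
    // rather than plain HKDF.
    func decryptConfig(blob: Data, passphrase: String, perInstallSalt: Data) throws -> Data {
        let inputKey = SymmetricKey(data: Data(passphrase.utf8))
        let key = HKDF<SHA256>.deriveKey(inputKeyMaterial: inputKey,
                                         salt: perInstallSalt,
                                         info: Data("config-encryption".utf8),
                                         outputByteCount: 32)
        let box = try AES.GCM.SealedBox(combined: blob)
        return try AES.GCM.open(box, using: key)
    }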
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
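For reference, a minimal sketch of storing and reading a token with the Keychain Services API; the service and account strings are placeholders, and error handling is trimmed down to status checks.

    import Foundation
    import Security

    // Sketch: store and read an OAuth token via Keychain Services.
    // "com.example.myapp.oauth" and "access-token" are placeholder identifiers.
    func saveToken(_ token: Data) -> Bool {
        let baseQuery: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: "com.example.myapp.oauth",
            kSecAttrAccount as String: "access-token"
        ]
        _ = SecItemDelete(baseQuery as CFDictionary)   // replace any existing item

        var addQuery = baseQuery
        addQuery[kSecValueData as String] = token
        // Only readable while the device is unlocked, never migrated to another device.
        addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
        return SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess
    }

    func loadToken() -> Data? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: "com.example.myapp.oauth",
            kSecAttrAccount as String: "access-token",
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var result: AnyObject?
        let status = SecItemCopyMatching(query as CFDictionary, &result)
        return status == errSecSuccess ? result as? Data : nil
    }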
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.
