Upgrading Encryption Algorithms for Swift Dependencies - iOS

I am using the open source MobSF security framework to scan my Swift project's source code and its dependencies for vulnerabilities. Most things look pretty good; however, I'm concerned that it is flagging encryption algorithms (MD5, SHA-1) in my dependencies as not sufficiently secure.
What would be standard practice for solving this? I made sure to pull the latest branches for most of these, but they seem to insist on using outdated algorithms. I am reluctant to go in and change their source code only to have my changes wiped out each time I reinstall from the Podfile.

First, it depends on why they're using these algorithms. For certain uses, there are no security problems with MD5 or SHA-1, and they may be necessary for compatibility with existing standards or backward compatibility.
As an example, PBKDF2 is perfectly secure using SHA-1 as its hash. It doesn't require a very strong hash function to maintain its own security. It's even secure using MD5. Switching to SHA-2 with PBKDF2 doesn't improve security; it's just "security hygiene," which means "avoid algorithms that have known problems even being in your code, even if they cause no problems in your particular use case." Security hygiene is a good practice, but it's not the same thing as security.
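For instance, here is a minimal sketch of PBKDF2 key derivation via CommonCrypto (the function name and parameter values are illustrative); switching the PRF between HMAC-SHA-1 and HMAC-SHA-256 is a one-constant change, and both are secure in this construction:

```swift
import Foundation
import CommonCrypto

// Sketch: derive a 32-byte key with PBKDF2. The PRF constant is the only
// thing a scanner would flag; kCCPRFHmacAlgSHA1 would be equally secure here.
func pbkdf2Key(password: String, salt: Data,
               rounds: UInt32 = 100_000, keyLength: Int = 32) -> Data? {
    var derived = Data(count: keyLength)
    let status = derived.withUnsafeMutableBytes { derivedBytes -> Int32 in
        salt.withUnsafeBytes { saltBytes -> Int32 in
            CCKeyDerivationPBKDF(
                CCPBKDFAlgorithm(kCCPBKDF2),
                password, password.utf8.count,
                saltBytes.bindMemory(to: UInt8.self).baseAddress, salt.count,
                CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                rounds,
                derivedBytes.bindMemory(to: UInt8.self).baseAddress, keyLength)
        }
    }
    return status == Int32(kCCSuccess) ? derived : nil
}
```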
For other use cases, the security of the hash function is critical. If a framework is authenticating arbitrary messages using MD5, that's completely broken. Don't take this answer to suggest that algorithms don't matter. They do! But not in every use case. And if you want to decode credit card swipe transactions, you're probably going to need DES to be in your code, which is horribly broken, but you're still going to need it because that's how magnetic stripes are encrypted. It doesn't make your framework "insecure."
When you say "but they seem to insist on using outdated algos," I assume you mean you opened a PR and they rejected it, in which case I assume they have a good reason (such as backward compatibility when there is no actual security problem). If you haven't, then obviously the first step would be to open a PR.
That said, if you want to change this because you feel there is an actual security problem that they will not resolve, or purely for hygiene, then with CocoaPods you would fork the project, modify it, and point to your own version by passing a source option (such as :git) to the pod keyword in your Podfile.
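For example, the Podfile entry might look something like this (the pod name, URL, and branch are hypothetical placeholders):

```ruby
# Point CocoaPods at your fork instead of the upstream release.
pod 'SomeCryptoPod', :git => 'https://github.com/yourname/SomeCryptoPod.git',
                     :branch => 'sha256-migration'
```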
Maintaining a cryptography framework myself, I often get bug reports from developers using these scanners, and the reports are simply wrong. Make sure that you know what the scanner is telling you and how to evaluate the findings. False positives are extremely common with these tools. They are useful, but you need to have some expertise to read their reports.

Related

iOS bitcode - security concerns

We are distributing a software module for iOS online. Apple is advocating bitcode and even making it mandatory for apps on some platforms (watchOS/tvOS), which forces us to deliver this software module (a static library) with bitcode.
The concern is how secure bitcode is against reverse engineering and decompilation (like Java bytecode), and how to protect against that. It is easy for anyone to download libraries from the website, extract the bitcode (IR) from them, and decompile it. There is some valuable information on this here:
https://lowlevelbits.org/bitcode-demystified/
Bitcode may not be a concern for apps, as Apple will strip it, but it definitely appears to be a concern for static libraries.
Any insights?
As the link notes, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’" Yep. Totally true.
Also, if you ship non-bitcode libraries, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’"
Also, if you ship non-bitcode apps, a "malefactor can obtain your app or library, retrieve the [code] from it and steal your ‘secret algorithm.’"
There is no situation where this is not true. Tools as cheap as Hopper (my tool of choice, but there are also some cheaper solutions) and as elaborate as IDA can decompile your functions into passable C code.
If you're working with Cocoa (ObjC or Swift), you have made it even easier to reverse engineer because it's so easy to dynamically introspect Cocoa.
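As a quick illustration (a sketch; the class chosen is arbitrary), the Objective-C runtime will enumerate any class's selectors at run time, no disassembler required:

```swift
import Foundation
import ObjectiveC

// Print every selector a class responds to. This works just as well
// against your own shipped classes as against NSString.
func dumpSelectors(of cls: AnyClass) {
    var count: UInt32 = 0
    guard let methods = class_copyMethodList(cls, &count) else { return }
    defer { free(methods) }
    for i in 0..<Int(count) {
        print(NSStringFromSelector(method_getName(methods[i])))
    }
}

dumpSelectors(of: NSString.self)
```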
This is not a solvable problem. Both apps and libraries can try to employ obfuscation techniques, but they are complex, fragile, and typically require significant expense or expertise (and often both). In any case, you will need to continually improve your obfuscation as people break it. This is fairly pointless for a library, since there's very little you could re-protect once it leaks, but you can try.
It will leak. That's not solvable. Bitcode doesn't change a whole lot about that. It might be somewhat simpler to read IR than ARM assembly, but not that much, and certainly not if the thing you're protecting is small (like a small algorithm or a key).
There are some obfuscation vendors out there. Product recommendations are off-topic for Stack Overflow (because they attract spam), but search for "ios obfuscation" and you'll find them. In this space, since it's just "tricky hiding" (not security or encryption) you generally get what you pay for. Open source solutions make little sense, since the whole point is to be tricky and hide how you're doing it. I've worked with some open source obfuscation libraries that make it easier to extract secrets from your code (because they're trivial to reverse, and their use marks the parts of the code where you're hiding things).
If this is important to your business plan, then budget for it, expect it to introduce some challenging bugs, and expect it to be broken anyway (though perhaps it will take longer).
@Rob Napier, you are wrong; you're comparing apples to oranges. Reading assembly code or disassembling with IDA is worlds apart from reading code generated by decompiling intermediate code. Bitcode is a total nuisance.

Rolling your own code instead of using libraries, avoiding the common approach

I have seen a plethora of projects roll their own things instead of using well-tested libraries.
In some other instances I have seen people re-implement elliptic curves and random number generators, refusing to use tested libraries because their code is "better".
Why do people do this and choose to spend their time implementing something, instead of using something that has already been done, tested, and deployed in a plethora of systems?
For example, the Signal Android messenger app has a whole, full copy of OpenSSL embedded in it for encryption. Ref
Why not use BouncyCastle or java.security.*?
Is it an ego thing? Is it a trust thing, i.e. they don't trust libraries?
It can be for a host of different reasons.
Build vs. buy (or use by reference) should come down to a thorough analysis. That said, many folks get into programming because they like building things. Sometimes it's rewarding to build your own code (even when a third party library exists).
That said, I'll try to list some reasons why you might not want to use third party libraries:
Licensing: Does the third party library licensing conflict or restrict your intended usage of your code? For example, GPL-licensed code may not be the best pick for something used commercially.
Security: Has the third party code been thoroughly analyzed for any security vulnerabilities? If it's public-facing, then have there been exploits in the past that have targeted this code? If so, then how quickly have the contributors fixed things (or have they even bothered to issue a patch?).
Ease of use: For example, I may not want to try to use a C++ library in C# code. It's possible, but it's less straightforward than using a C# library.
Bug fixes: Is development ongoing on the third party library? If there's a bug, then how easily can you get it fixed?
Domain knowledge: We can't specialize in everything. Using your example of encryption, I'd strongly discourage attempting to build an encryption library from scratch unless you have an encryption background.
Simplicity: Your use case may be much smaller than what a third party library is built to provide. For example, if you needed to build a Point class to represent an X,Y,Z point, then you could reference a third party graphics library. But if you don't need the ability to do graphics calculations in 3D space, then referencing an entire graphics library might be overkill (see the sketch below).
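A minimal sketch of that trade-off in Swift (the type and operator are illustrative, not from any particular library):

```swift
// A self-contained 3D point: a few lines cover many apps that will
// never need matrix transforms, shaders, or a full graphics framework.
struct Point3D {
    var x, y, z: Double

    static func + (lhs: Point3D, rhs: Point3D) -> Point3D {
        Point3D(x: lhs.x + rhs.x, y: lhs.y + rhs.y, z: lhs.z + rhs.z)
    }
}

let origin = Point3D(x: 0, y: 0, z: 0)
let moved = origin + Point3D(x: 1, y: 2, z: 3)
```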
All this said, there are many times when using a third party library works (and is the appropriate choice). Using your example, I'd never try to implement an encryption stack on my own -- there's no reason to do so with the plethora of open-source options available.

SWF and ActionScript 3 security

How do I secure SWF files and ActionScript 3 code, mostly while communicating with a server-side program?
Which is more secure: encoded encryption or normal encryption?
I really don't know about this encryption stuff, so correct me if I'm wrong.
Use a code obfuscator. There are several of them on the internet. The good ones cost money. If you have enough uberness, make your own.
Sockets are a tough nut to crack. You can also use SHA or other security protocols, but nothing is 100% safe. Nothing.
There is no such thing as "most secure". It depends on your needs and the nature of your application.
If you don't know about IT security in general, then don't use Flash, a highly insecure platform, for whatever you seek to do. Otherwise, expect to be intercepted, and either make it not worth the "hacker"'s time to hack apart your stuff, or use lossy techniques.
I want to add that you should avoid storing important information on the client side, such as access tokens, secret passwords, or other things that must remain secret. One way to avoid this is to call a script on the server side. As mentioned in the other response, you can use obfuscation, but I have noticed that it is problematic if you have the project split into libraries.

Creating PGP keys in iOS app

I need to build an iOS application in which PGP keys will be created in order to encrypt and decrypt certain messages.
Since I'm new to PGP encryption on iOS: is there some library that will allow me to create, keep, and access the PGP keys, as well as do the encryption and decryption using the keys?
I've implemented a backend and an Android version using the RSA algorithm with Bouncy Castle and OpenPGP in Java; however, I will need to do the same in the iOS version. That means that the keys created on iOS should be in the same format as, and compatible with, the ones created in the Android version.
Check out these projects: UNNetPGP or ObjectivePGP; they may do the job for you.
OpenPGP keys have a standard format defined in RFC 4880 (two formats, in fact: binary and base64-encoded). As far as I know, it's BouncyCastle that can create keys in some custom non-standard format.
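For a feel for the ObjectivePGP route, here is a sketch along the lines of that library's documented usage; treat the exact signatures as assumptions and verify them against the version you install:

```swift
import ObjectivePGP  // https://github.com/krzyzanowskim/ObjectivePGP

// Sketch: generate a key and export it in the standard RFC 4880 format,
// which Bouncy Castle on the Android side can import.
// The email address and passphrase are placeholders.
let key = KeyGenerator().generate(for: "user@example.com", passphrase: "secret")
let exportedKey = try key.export()

// Encrypt a message to that key (signing and armoring are further
// options in the real API; check the project README).
let ciphertext = try ObjectivePGP.encrypt(Data("hello".utf8),
                                          addSignature: false,
                                          using: [key])
```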
One option is to use our SecureBlackbox (C++ edition) on iOS; it offers the full scope of OpenPGP functionality, including key generation and management.
If you really need Bouncy Castle as-is, consider using j2objc
We recently encountered the same situation and so far have had luck using j2objc to convert both Bouncy Castle and the code that was using it to Objective-C. We needed strong compatibility between the iOS and Android versions of the app and didn't want to risk finding out there were incompatibilities with our solution down the road.
In order to convert Bouncy Castle we had to remove a handful of LDAP-related classes (which we didn't have a need for anyway) but beyond that it was pretty straightforward. We did this through trial and error, seeing what it couldn't convert and then just removing the file and trying again.
Using j2objc also had the advantage of letting us port over a lot of the business logic and avoid having to re-implement it in Swift/Objective-C. We just created some simple wrappers in Swift for the classes we needed to use directly and used those throughout the app.
Important Caveats
It's worth noting that this isn't a solution for everyone, though. As mentioned in this comment on an issue, there are some potential ramifications to using Bouncy Castle this way, so make sure you know what you're doing. It's also something that takes time and know-how to set up, between understanding potential Java classpath issues and figuring out how to pull in and convert everything you need (ideally using shell scripts or something similar to automate the process for when you have updates).
So unless you're using a lot of Bouncy Castle features this may come with additional complexities that make it not worthwhile, particularly the US Export Compliance piece.
I just did a cursory search (which I'm thinking you may have also done) and found the "GPGTools" project, which is basically an open-source OpenPGP implementation.
And since it implements OpenPGP, the keys you create should be compatible with the keys created on the Android side. They have an old (circa 2011) project page here, but the current code (which is in a state of flux) can be found on GitHub.

Decompilation possibilities in iOS and how to prevent them

I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app that is distributed through the App Store. This can be done with a jailbreak, and I think by copying the app from memory to disk. With some tools it is possible to:
read out strings (strings tools)
dump the header files
reverse engineer to assembly code
It seems NOT to be possible to reverse engineer to Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with key or log in to websites). So I came up with the following questions:
Can someone reconstruct my saving and encryption or login methods with assembly? I mean, can he understand what exactly is going on (what is saved to which path at which time, which key is used, etc., and with what credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings which cannot be read out with strings or read in assembly? I know one can obfuscate strings, but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently-motivated person with skills will be able to find ways to find out whatever information you don't want them to find out, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps by using the purchased or downloaded binary. This is static inspection, and it is facilitated with standard disassembly tools, although you need a tool good enough to add symbols from the linker and to understand method calls well enough to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specifically to your secure log-in question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on being able to send any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before getting the data into the pipe (i.e. you can't depend on sending any plain text to standard SSL libraries).

You can compile your own set of SSL libraries and link them directly into your app, which means you don't get any system performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly-tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
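As a sketch of the "encrypt before it enters the pipe" idea (key agreement is a separate problem; sharedKey here is a placeholder for a key established out of band or via a proper exchange), with CryptoKit you might seal the payload before handing it to any networking code:

```swift
import Foundation
import CryptoKit

// Seal a payload with AES-GCM so the (possibly shimmed) networking
// stack only ever sees ciphertext.
func sealedPayload(_ plaintext: Data, sharedKey: SymmetricKey) throws -> Data {
    let box = try AES.GCM.seal(plaintext, using: sharedKey)
    // combined = nonce + ciphertext + tag; non-nil for the default nonce size
    return box.combined!
}
```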
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text or a key stored in a plist with a name which contains key are both classics)
avoid simple passwords wherever possible
But most important is creating systems where any keys stored in the application are useless without information the user has to enter (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
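For instance, a minimal sketch of stashing a token with the Security framework (the service and account strings are placeholders):

```swift
import Foundation
import Security

// Store an OAuth token as a generic-password Keychain item.
func storeToken(_ token: Data) -> OSStatus {
    var query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "oauth-token"
    ]
    SecItemDelete(query as CFDictionary)  // replace any existing item
    query[kSecValueData as String] = token
    // Unavailable until first unlock, and never migrated to another device.
    query[kSecAttrAccessible as String] =
        kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly
    return SecItemAdd(query as CFDictionary, nil)
}
```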
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.
