Which encryption algorithms does AWS Amplify use for iOS apps? - ios

I am publishing my mobile app to the App Store, and it asks what type of encryption algorithm my application uses. The options are:
Encryption algorithms that are proprietary or not accepted as standard by international standard bodies (IEEE, IETF, ITU, etc.)
Standard encryption algorithms instead of, or in addition to, using or accessing the encryption within Apple's operating system
Both algorithms mentioned above
None of the algorithms mentioned above
I have a very simple application which only uses AWS Amplify for authentication to process user interactions; none of the sensitive user information is stored, and I don't use any HTTPS requests or anything besides AWS.
I am expecting it to be the second option, Standard encryption algorithms instead of, or in addition to, using or accessing the encryption within Apple's operating system, but I am not sure, which is why I am asking this question.

Related

Best method for protecting IP data downloaded to an iOS App?

I'm enhancing a commercial App which until now has used cloud AI models to analyse data and make predictions.
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients and it is essential that we secure any data downloaded to a device from theft.
The App is iOS only for now and I was intrigued by WWDC2020's CoreML update including support for encrypting models. This would be ideal but we can't use CoreML at the moment due to its API not supporting the methods our models require.
Nice to know though that this is a recognised issue with in-app ML model usage.
What is the best method, and what options are available in iOS (>11.0) right now, that won't run afoul of encryption export laws or even Apple's App Store rules, etc.?
Our models are JavaScript, which we run in a JavaScriptCore VM, with additional data files loaded from JSON string files.
My current thinking is to use something like iOS AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain. Decrypt the data strings in memory before loading into the JS VM.
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this?
The Data
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients and it is essential that we secure any data downloaded to a device from theft.
From the moment you make the data/secrets public, in the sense that you include them with your mobile app binary or later download them to the device and store them encrypted, you need to consider them compromised. There is no bulletproof defense here; no matter what you try, you can only make the data harder to steal, and with all the instrumentation frameworks available to introspect and instrument code at runtime, your encrypted data can be extracted from the function that decrypts it:
Decrypt the data strings in memory before loading into the JS VM.
An example of a very popular instrumentation framework is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
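To make that concrete, here is a minimal Swift sketch of the kind of in-memory decryption being proposed, assuming CryptoKit AES-GCM and a hypothetical keychain entry written earlier (after the key arrived over HTTPS at login). A function like decryptModelData is exactly the sort of target a tool like Frida would hook to grab the plaintext:

```swift
import Foundation
import CryptoKit
import Security

// Load the model key that was previously stored in the keychain after login.
// The account name is a hypothetical identifier for this sketch.
func loadModelKey() throws -> SymmetricKey {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: "com.example.model-key",
        kSecReturnData as String: true
    ]
    var item: CFTypeRef?
    let status = SecItemCopyMatching(query as CFDictionary, &item)
    guard status == errSecSuccess, let data = item as? Data else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return SymmetricKey(data: data)
}

// Decrypt an AES-GCM sealed blob (nonce + ciphertext + tag packed together).
// The plaintext JSON only ever lives in memory before being handed to the JS VM,
// which is precisely the point where an instrumentation hook can read it.
func decryptModelData(_ sealedBlob: Data) throws -> Data {
    let key = try loadModelKey()
    let box = try AES.GCM.SealedBox(combined: sealedBlob)
    return try AES.GCM.open(box, using: key)
}
```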
The Private Key
My current thinking is to use something like iOS AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain.
While not hard-coding the private key in the app is a wise decision, it doesn't prevent an attacker from performing a man-in-the-middle (MitM) attack to steal it, or from using an instrumentation framework to hook into the code that stores it in the keychain. You may already be aware of this, but it's not clear from:
I can see the obvious weaknesses with this approach...
As a side note, I think you and the business first need to consider whether the benefit to the user of having predictions made locally on their device outweighs the huge risk taken by moving the data from the cloud onto the device. Data protection laws also need to be taken into consideration, because the fines when a data breach occurs can have a huge impact on the organization's future.
iOS Solutions
What is the best method, and what options are available in iOS (>11.0) right now, that won't run afoul of encryption export laws or even Apple's App Store rules, etc.?
I am not an expert in iOS, so I cannot help you much here, other than recommending that you use as many obfuscation techniques and runtime application self-protections (RASP) as you can on top of the solution you have already devised to protect your data, so that you make an attacker's life harder.
RASP:
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering.
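To give an idea of what the simplest runtime self-checks look like on iOS, here is a rough Swift sketch: debugger detection via sysctl plus a naive scan for common jailbreak artifacts. These are only illustrative and are easily bypassed by a skilled attacker; commercial RASP products go much further.

```swift
import Foundation
import Darwin

// Returns true when a debugger is attached to the current process (P_TRACED flag).
func isDebuggerAttached() -> Bool {
    var info = kinfo_proc()
    var size = MemoryLayout<kinfo_proc>.stride
    var mib: [Int32] = [CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid()]
    guard sysctl(&mib, UInt32(mib.count), &info, &size, nil, 0) == 0 else { return false }
    return (info.kp_proc.p_flag & P_TRACED) != 0
}

// Looks for a few well-known jailbreak artifacts (an incomplete, illustrative list).
func looksJailbroken() -> Bool {
    let suspiciousPaths = ["/Applications/Cydia.app", "/usr/sbin/sshd", "/bin/bash"]
    return suspiciousPaths.contains { FileManager.default.fileExists(atPath: $0) }
}

// A single check the app could consult before decrypting or loading sensitive data.
func runtimeEnvironmentLooksSafe() -> Bool {
    return !isDebuggerAttached() && !looksJailbroken()
}
```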
You can also try to use advanced biometrics solutions to ensure that a real user is present while the mobile app is being used, but bear in mind that the more skilled attackers will always find a way to extract the data to a command-and-control server. It's not a question of whether they will be able to, but of when it will happen, and when it happens it's a data breach, and you need to have planned ahead to deal with its business and legal consequences.
So after you apply the most suitable in-app defenses, you still have one issue left to resolve, which boils down to ensuring your API server knows what is making the request, because it seems you have already implemented user authentication to establish on whose behalf the request is being made.
The Difference Between WHO and WHAT is Accessing the API Server
When downloading the data to the device, you need to consider how you will ensure that your API server is indeed accepting the download requests from what you expect, a genuine instance of your mobile app, and not from a script, bot, etc. I also need to alert you that user authentication only says on whose behalf the request is being made, not what is making it.
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
Think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
I see this misconception arise over and over, even among experienced developers, DevOps and DevSecOps engineers, because our industry is more geared towards identifying the who than the what.
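To illustrate the distinction, a request from the mobile app might carry both identifiers, roughly like this in Swift (the endpoint and header names are made up for the example):

```swift
import Foundation

// The user's OAuth token answers "who"; a static app-level API key is only a weak
// hint about "what", since it can be extracted from the binary and replayed by a bot.
func buildDownloadRequest(userAccessToken: String, appApiKey: String) -> URLRequest {
    var request = URLRequest(url: URL(string: "https://api.example.com/models/latest")!)
    // WHO: identifies the authenticated user on whose behalf the call is made.
    request.setValue("Bearer \(userAccessToken)", forHTTPHeaderField: "Authorization")
    // WHAT: hints that the caller is the genuine mobile app, which is the part
    // that Mobile App Attestation tries to establish with much higher confidence.
    request.setValue(appApiKey, forHTTPHeaderField: "x-api-key")
    return request
}
```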
Others' Approaches
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this?
As I said previously, I am not an expert in iOS and I don't have more to offer than what I have already mentioned in the iOS Solutions section, but if you want to learn how you can lock your mobile app to the API server, so that it only replies with a very high degree of confidence to requests from a genuine instance of your mobile app, then I recommend you read my accepted answer to the question How to secure an API REST for mobile app?, specifically the sections Securing the API Server and A Possible Better Solution, where you will learn how the Mobile App Attestation concept may be a possible solution to this problem.
Do you want to go the Extra Mile?
In any response to a security question, I always like to reference the excellent work of the OWASP Foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.

What is the best practice to deploy a CoAP-DTLS server that can support multiple PSK identity/secret sets?

We're evaluating the practicability of replacing our conventional HTTPS/RESTful setup over a cellular network (4G LTE) with CoAP/DTLS over NB-IoT, to prolong the battery life of remote devices. The IoT application we've deployed only takes a tiny proportion of the 4G LTE data bandwidth, and UDP over NB-IoT is good enough, so transmission performance is not our main concern.
But the problem is, we're currently using mutual authentication in the SSL/TLS layer and we assign different client certificates to different sub-groups, and I'm not sure how to do that in CoAP/DTLS.
I've learned that the default credential model of CoAP/DTLS is Pre-Shared Key (PSK), and I also learned from RFC 4279 that I may use the PSK identity/shared-key pair as an easy alternative to a username, which could fit my needs. But when I tried to figure out how to implement this, I found that internet resources are very limited. So far I've surveyed node-coap.js and libcoap, but I can't find any hints in the documents; both seem to support only one credential at a time.
What is the best practice for deploying a CoAP-DTLS server that can support multiple PSK identity/shared-key sets? Or do I need to implement the whole authentication mechanism in the application layer?
One option for server/cloud-side CoAP is Eclipse Californium. I am involved in that project and may thus be biased. That said, we have actually built Californium for exactly this purpose.

What is meant by the phrase adapter/connector?

This is a basic question. I want to apply to an entry-level Java developer position with the following requirement:
Familiarity with the Sailpoint Identity IQ standard adapters/connectors
By standard connectors, do they basically mean how SailPoint exchanges data with third-party tools? And by adapter, do they mean that the adapter pattern would be used? Thanks
This is probably going to appear well after your interview, but to answer the question:
1) Standard adapters/connectors:
SailPoint ships with a "standard" set of connectors which are included in the purchase price; there are others, e.g. EPIC, which do not ship as part of the standard product and must be enabled separately. To give you a deeper view into connectors:
Connectivity Methods:
Direct Connectivity - This is where a connector communicates directly with a system using APIs or data sources. Some advantages of using direct connectivity are that you don't have to generate or transmit files, and you can be more efficient by processing only things that have changed. Some disadvantages are that they are subject to availability and downtime concerns, like any connected system, and they are also subject to whatever advantages and disadvantages the APIs themselves might impose.
Some people also refer to this as an 'online' method of connectivity.
File-Based Connectivity - This is where a connector reads from a snapshot of data presented in a file, rather than connecting directly to the system. Some advantages of using a file are that files are portable, easily inspected for data issues, and not typically subject to availability concerns. Some disadvantages are that files are usually processed in their entirety, and may require processing or transformation in order to work effectively.
Some people also refer to this as a 'decoupled' or 'offline' method of connectivity.
Connector Implementations
Source-Specific Implementation - These are connectors built with a specific target system in mind. They typically use specific APIs targeted to the system they are integrated with. Because the systems and APIs are known, these typically require less configuration to get working.
Examples of these are Active Directory, Workday, Salesforce, SAP, etc.
General Implementation - These are general-purpose connectors which can be used to connect to a variety of sources or systems. These tend to be more flexible in general, but typically do require a bit more setup and configuration to meet needs.
Examples of these are Web Services, SCIM, JDBC, Delimited Files, etc.
Custom Implementation - These are completely custom connectors and tailored to the system and API of your choice. This approach offers the most flexibility of all connector options, however making custom connectors is definitely a development-level activity, and is not to be taken lightly. The code written for custom connectors is maintained and supported by the customer who owns the connector.
Examples of these are custom in-house applications, etc.
Understanding these connector implementations is important, because if a source-specific implementation isn't available, another general or custom connector implementation may be used instead.

iOS lock Cocoapod/Framework

I have to develop a library for a 3rd party. This library has to be secured so that only parties that have credentials can use it.
I am going in the direction of providing an API key that the 3rd-party app will have to supply in order for the library to work.
Is there any way to do some sort of locking of a CocoaPod? Or is a framework a better solution for this kind of problem?
Does anyone have any other solution/suggestion?
It would be better if you elaborated a bit on what exactly you are trying to achieve and what type of task this library performs, and maybe gave some examples. If I understand you correctly, you need to prevent usage of the library by those who did not pay for the service, etc.
The key approach will be OK if the library is a client for some webservice. In this case, you should have API keys anyway to protect the API itself, so the client library will just forward this key to the webservice. This approach is widely used in lots of client libraries.
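As a rough Swift sketch of that approach (the class, endpoint, and header names are hypothetical), the library takes the key at configuration time and simply forwards it with every call, leaving enforcement to the web service:

```swift
import Foundation

// Minimal client library that forwards an API key supplied by the integrating app.
// The server validates the key; the library itself enforces nothing.
public final class ExampleServiceClient {
    private let apiKey: String
    private let baseURL: URL
    private let session: URLSession

    public init(apiKey: String,
                baseURL: URL = URL(string: "https://api.example.com")!,
                session: URLSession = .shared) {
        self.apiKey = apiKey
        self.baseURL = baseURL
        self.session = session
    }

    public func send(event: [String: Any],
                     completion: @escaping (Result<Void, Error>) -> Void) {
        var request = URLRequest(url: baseURL.appendingPathComponent("events"))
        request.httpMethod = "POST"
        request.setValue(apiKey, forHTTPHeaderField: "x-api-key")            // forwarded key
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try? JSONSerialization.data(withJSONObject: event)

        session.dataTask(with: request) { _, response, error in
            if let error = error { return completion(.failure(error)) }
            guard let http = response as? HTTPURLResponse, (200..<300).contains(http.statusCode) else {
                // The server rejected the request, e.g. because the key is missing or invalid.
                return completion(.failure(URLError(.badServerResponse)))
            }
            completion(.success(()))
        }.resume()
    }
}
```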
If the library only works locally (for example, it performs some science-heavy computation / computer vision / etc.), then you can just give out the compiled library and a license to those who have already paid. You can protect it with a key of course, but that is not too useful, as the key will likely be validated locally, and can therefore easily be compromised or reverse engineered. So the only good way will be to distribute the library to those who have purchased it or requested a trial, and impose a license on them which restricts the library's usage.
EDIT
If by "Cocoapod" you mean "distributing as source code" and by "Framework" you mean "distributing as a binary", then it depends on what exactly you do in the library. If it is just connecting to the endpoint and marshalling data (e.g. parsing), you can just distribute the source version, as there is no "know-how" to that. On the other hand, if there is something business-related and specific done besides contacting the API, use closed source distribution (binary).
Source distribution has a benefit that you don't have to recompile it if new target architectures appear. It is also easier to distribute via CocoaPods, and your library users will like it more (for a variety of reasons).

Decompilation possibilities in iOS and how to prevent them

I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app which is distributed through the App Store. This can be done with a jailbreak and, I think, by copying the app from memory to disk. With some tools it is possible to
read out strings (with the strings tool)
dump the header files
reverse engineer to assembly code
It seems NOT to be possible to reverse engineer to Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with key or log in to websites). So I came up with the following questions:
Can someone reconstruct my saving and encryption or login methods with assembly? I mean, can they understand what exactly is going on (what is saved to which path at which time, which key is used, etc., and with what credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings which cannot be read out with the strings tool or read in assembly? I know one can obfuscate strings, but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently motivated and skilled person will be able to find ways to uncover whatever information you don't want them to find, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps by using the purchased or downloaded binary. This is static inspection and is facilitated with standard disassembly tools, although you need a tool which is good enough to add symbols from the linker and understand method calls sufficiently to be able to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specifically to your secure login question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on being able to send any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before getting the data into the pipe (i.e. you can't depend on sending any plain text to standard SSL libraries). You can compile your own set of SSL libraries and link them directly into your app, which means you don't get any system performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly-tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
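As a minimal Swift sketch of the "encrypt before the data ever enters the pipe" idea, assuming CryptoKit AES-GCM, an application-level key already established after login, and a made-up endpoint:

```swift
import Foundation
import CryptoKit

// Seal the payload at the application layer so that only ciphertext reaches the
// networking stack; even a shimmed TLS layer then sees nothing useful.
func sendSealedPayload(_ plaintext: Data,
                       using applicationKey: SymmetricKey,
                       completion: @escaping (Error?) -> Void) {
    do {
        // AES-GCM provides confidentiality and integrity; `combined` packs nonce + ciphertext + tag.
        let sealed = try AES.GCM.seal(plaintext, using: applicationKey)

        var request = URLRequest(url: URL(string: "https://api.example.com/secure-inbox")!)
        request.httpMethod = "POST"
        request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
        request.httpBody = sealed.combined

        URLSession.shared.dataTask(with: request) { _, _, error in
            completion(error)
        }.resume()
    } catch {
        completion(error)
    }
}
```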
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text or a key stored in a plist with a name which contains key are both classics)
avoid simple passwords wherever possible
But most important is creating systems where the keys (if any) stored in the application are themselves useless without information the user has to enter (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
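For reference, a bare-bones Swift sketch of stashing such a token with the raw Security API looks roughly like this (the service and account strings are hypothetical):

```swift
import Foundation
import Security

enum TokenStore {
    private static let service = "com.example.myapp.oauth"
    private static let account = "access-token"

    // Store (or overwrite) the token as a generic password item.
    static func save(token: String) -> Bool {
        let baseQuery: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        SecItemDelete(baseQuery as CFDictionary)   // remove any previous token first

        var addQuery = baseQuery
        addQuery[kSecValueData as String] = Data(token.utf8)
        // Only readable after first unlock, and never migrated to another device.
        addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly
        return SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess
    }

    // Read the token back, returning nil if it is missing or unreadable.
    static func loadToken() -> String? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: CFTypeRef?
        guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
              let data = item as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }
}
```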
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.

Resources