I'm an iOS Developer and I have one question about security that I can't answer.
Security experts say that using the "certificate pinning" will make your app more secure (for example against man in the middle attacks).
I agree that with this technique you can guarantee that your app is communicating with your backend (and that no-one "in the middle" can sniff the traffic), but as we are using HTTPS, the traffic is already encrypted, so how could someone see the traffic?
There is one possible way: you get the attacker's certificate and install it on your iPhone. But is this really something that can happen? Or are there other ways to mount this kind of attack?
HTTPS and Certificate Pinning
Security experts say that using the "certificate pinning" will make your app more secure (for example against man in the middle attacks).
That's a best practice in terms of security in order to avoid the MitM attacks you already know about, but do you know how certificate pinning works?
I know that your question is about iOS, but in my article Securing HTTPS with Certificate Pinning on Android you can learn what certificate pinning is for and why it is needed, because this is agnostic of the mobile platform being used. Please read the article and feel free to ignore the part about implementing pinning in Android.
To give you some context I will quote some of the more relevant parts of my article, which will help clarify your doubts.
Let's start with the part about why we need Certificate Pinning:
While HTTPS gives you confidentiality, integrity and authenticity in the communication channel between the mobile app and the API server, certificate pinning will protect these same guarantees from being broken.
Let's see two examples from the article on how the HTTPS guarantees can be broken.
First Example:
To prevent trust based assumptions
Incorrectly issuing leaf certificates to the wrong domain names by Root and Intermediate Certificate Authorities (CAs) would allow an attacker to intercept any HTTPS traffic using them, without the end user noticing anything.
Do you think it is very unlikely for this to happen? Just take a look at the famous cases of DigiNotar, GlobalSign and Comodo.
Second Example:
Another scenario where the HTTPS guarantees are usually broken is when the device is running in hostile environments:
A good example of a hostile environment is public WiFi, where users can be tricked by an attacker into installing a self signed root certificate authority into the trusted store of the device as a requirement for them to have internet for free. This will allow the attacker to perform a MitM attack and intercept, modify or redirect all HTTPS traffic, because the device will now accept all intercept traffic which is now signed by the root CA of the attacker - now trusted by the device.
Both examples would allow attackers to see the encrypted HTTPS traffic, thus I hope it answers your question:
I agree that with this technique you can guarantee that your app is communicating with your backend (and that no-one "in the middle" can sniff the traffic), but as we are using HTTPS, the traffic is already encrypted, so how could someone see the traffic?
So, HTTPS will encrypt your traffic in transit and certificate pinning will try to prevent it from being decrypted. Wait, did you say try? Yes, because pinning can also be bypassed on a device the attacker controls. I have several articles (1, 2) on it, but for Android; it can also be done on iOS, and it's on my backlog, thus I will update this answer when done.
Possible Attacks on Your Mobile App
There is one possible way: you get the attacker's certificate and install it on your iPhone. But is this really something that can happen? Or are there other ways to mount this kind of attack?
An attacker will reverse engineer your mobile app in order to understand how everything fits together and will then try to exploit flaws in your logic and security. For example, the attacker can use MobSF - Mobile Security Framework to statically reverse engineer your mobile app binary:
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Attackers will often attack your code at runtime to modify its behaviour or bypass certificate pinning, and a popular tool used for this purpose is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Bear in mind that the attacker can be a legitimate user of your mobile app trying to bypass some of the limitations of the plan they are on in order to get more than what they are entitled to.
Want to Implement Certificate Pinning on iOS?
If you want to go ahead and implement certificate pinning in your iOS mobile app then you can use the Mobile Certificate Pinning Generator free tool to get the iOS configuration generated for you.
First you need to provide the API domains you want to pin in the form on that page. After you submit the form, go to the iOS tab to see the generated iOS configuration and copy/paste it into your mobile app project.
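The generator gives you a ready-to-use pinning configuration, but to make the mechanics concrete, here is a minimal sketch of how pinning can be wired up by hand with a URLSession delegate. The class name and the Base64 hash below are placeholders of mine, not output from that tool, so treat this as an illustration only:

    import Foundation
    import Security
    import CryptoKit

    // Minimal sketch: pin the SHA-256 of the server's leaf certificate (its DER bytes).
    // "BASE64_SHA256_OF_YOUR_SERVER_CERT=" is a placeholder you must replace.
    final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
        private let pinnedHashes: Set<String> = ["BASE64_SHA256_OF_YOUR_SERVER_CERT="]

        func urlSession(_ session: URLSession,
                        didReceive challenge: URLAuthenticationChallenge,
                        completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
            guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
                  let trust = challenge.protectionSpace.serverTrust,
                  SecTrustEvaluateWithError(trust, nil),                       // normal X.509 validation first
                  let chain = SecTrustCopyCertificateChain(trust) as? [SecCertificate],  // iOS 15+
                  let leaf = chain.first else {
                completionHandler(.cancelAuthenticationChallenge, nil)
                return
            }

            // Hash the leaf certificate's DER encoding and compare it with the pinned value.
            let leafData = SecCertificateCopyData(leaf) as Data
            let hash = Data(SHA256.hash(data: leafData)).base64EncodedString()
            if pinnedHashes.contains(hash) {
                completionHandler(.useCredential, URLCredential(trust: trust))
            } else {
                completionHandler(.cancelAuthenticationChallenge, nil)         // pin mismatch: abort
            }
        }
    }

    // Usage:
    // let session = URLSession(configuration: .default, delegate: PinnedSessionDelegate(), delegateQueue: nil)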
Do You Want To Go The Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
Yes, you need to install a root CA certificate on the iOS device and trust it in order to mount a man-in-the-middle attack on an HTTPS connection used by an iOS app.
But you forget one major case: what happens if the attacker and the iPhone user are the same person? Having access to the transmitted data makes it easier to analyze your app and find potential flaws. So an attacker can install your app on a prepared device, analyze it, find all the mistakes you made, and afterwards use this knowledge to attack your server(s) and/or your app's users.
Another scenario is that people are forced to install root CA certificates, maybe because the phone is used in a company that does its own network inspection, or at a border control.
Last but not least, the recent attack on the KlaySwap crypto exchange shows you why certificate pinning is so important: in that attack, BGP was hijacked so that traffic for a specific web server hosting a JavaScript file was redirected to a malicious server. After redirecting the traffic, the attackers simply used ZeroSSL to create a new, 100% valid SSL/TLS certificate, which they installed on their malicious web server.
Related
There are two prior questions leading to this question (if you're interested):
ssl-for-intranet-applications-deployed-at-multiple-companies
distributing-ssl-certificates-to-all-browsers-in-an-active-directory-environment
In Electron, do you have the ability to override SSL certificate warnings that you'd typically get when using self-signed certificates via a modern web browser?
Typically, in desktop applications, you do not have to adhere to the strict online-banking-level certificate standards that web browsers warn about. The data I'm transferring isn't that sensitive.
As a matter of fact, one of the only reasons I moved my app from http to https is that certain web standard APIs won't function unless the protocol is https. The Notification API is one example.
Otherwise, the data I'm transferring over the intranet just isn't that sensitive. Yet the browser attempts to burden me (and my users) with online-banking-level certificate authentication.
I'm trying to avoid this somehow and thought that maybe Electron could give me more client-side control for pre-approving my self-signed certificate. Is this doable in Electron?
I had some trouble with https and untrusted proxy certificates, and I have this in my index.js:
// We have to deal with self-signed and therefore untrusted root certificates.
global.process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
It's an intranet application that never reaches out to the internet, so it's ok for me here.
I tried to use Fiddler to capture some iOS apps traffic, ex: Facebook, SnapChat, Gmail, and Instagram.
Instagram is not using https, so I can get all the traffic and see the cookies I sent out, but Fiddler cannot decrypt the other three apps. It only shows something like this:
    A SSLv3-compatible ClientHello handshake was found. Fiddler extracted the parameters below.
    Version: 3.3 (TLS/1.2)
    Random: 54 3F 49 C4 20 08 09 BC A8 84 24 92 08 BF B4 38 39 C9 BB 1C B2 7B 95 6A 39 34 E7 AC FE 0F 62 67
    SessionID: empty
    Extensions: server_name graph.facebook.com elliptic_curves
Could anyone help me understand how they do this, so I can use the same technique to protect my app?
Your question revolves around preventing HTTPS man in the middle (MITM) attacks against iOS applications. Using Fiddler or other HTTPS proxies is a form of naive MITM attack that, unfortunately, often works.
HTTPS is HTTP run over a secure transport protocol called TLS (and before that, SSL). The connection is encrypted using public and private keys between trusted parties. And that is where things tend to go wrong. The concept of trust is central to the security of TLS and SSL before it. The server your application connects to provides cryptographic credentials that must be evaluated to establish trust.
Think of this like a passport or driver's license. In most cases, the license checks out. Then you get one with the name McLovin. If you don't actually look at the name, date of birth, number, photo, hologram, etc. you may just blindly trust that McLovin is who they say they are. And then you're in trouble.
Don't trust McLovin.
Most applications trust McLovin :(
To protect your application against these kinds of attacks you should implement a stricter set of credential and trust evaluations. Apple has a tech note, Technical Note TN2232: HTTPS Server Trust Evaluation, that details this quite well.
A good start is to implement SSL Pinning. Pinning checks the credential of the remote host against a known value - all or part of that certificate. The iOS application has some copy of that certificate, and when connecting to that host it checks the credential the host provides against this "known good" certificate. Some applications just check the meta information, others attempt to checksum the certificate (AFNetworking does this), and others perform a full trust evaluation using the known good certificate against the credential. Apple details this process in the WWDC 2014 session Building Apps for Enterprise and Education. If the remote host is not using the expected credentials, the connection is aborted and there is no traffic for an attacker to intercept. If your server's certificate changes often this can be a problem - which is one of several reasons it's preferred to check the server's public key instead of meta information or a hash. Unfortunately, some server administrators change public keys often. Some think this is more secure. It's not.
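To make the point about pinning the public key (rather than certificate metadata or a hash) concrete, here is a rough sketch of how such a check could look with the Security framework; the function name and the pinned key data are placeholders of mine, not how AFNetworking or any particular library implements it:

    import Foundation
    import Security

    // Sketch: compare the server's public key against a pinned copy of that key.
    // `pinnedPublicKeyData` would be the external (DER) representation of your server's
    // public key, shipped inside the app; how you store and protect it is up to you.
    func serverTrustMatchesPinnedKey(_ trust: SecTrust, pinnedPublicKeyData: Data) -> Bool {
        // Run the normal X.509 evaluation first, then apply the pin on top of it.
        guard SecTrustEvaluateWithError(trust, nil),
              let serverKey = SecTrustCopyKey(trust),
              let serverKeyData = SecKeyCopyExternalRepresentation(serverKey, nil) as Data? else {
            return false
        }
        // Accept the connection only when the key bytes match exactly.
        return serverKeyData == pinnedPublicKeyData
    }

A check like this belongs in the URL loading system's authentication challenge callback, cancelling the connection whenever it returns false.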
Now, obviously this requires the iOS application to have a copy of the "good" certificate, or some part of it. You can include the certificate in your application, or implement your own method of key exchange. Secure key exchange has long been the subject of cryptographic research and is not something to be taken lightly.
Including the certificate in your application is the solution most people use. You may decide it's important to secure this certificate from someone who may have compromised or jailbroken the device. You have a number of different options for doing so. Obviously you could include it as a resource, and encrypt that. You can also include it directly within the application binary, which can be much more difficult for an attacker to access. This can be done by using the xxd tool with Xcode, as a script build phase or as a build rule. Obviously, you can implement additional protection on top of that.
If the device has been compromised or the application has been tampered with it's possible the "known good" credential has been altered. This is where the iOS application sandbox can work to your advantage. You can detect many of these scenarios by implementing receipt validation for your application. Assuming your application is being distributed through an Apple channel such as the iOS App Store, when it's installed it includes a receipt. That can be validated, and that can be used to implement tamper-proofing for many common scenarios.
These are all methods that can be implemented in the client to protect communications over HTTPS from MITM attacks. The server can also expose the client in many more ways, and the server should be regularly audited for vulnerabilities. Use only known strong cryptographic algorithms, stay up to date with current public vulnerabilities, etc.
Of course, if your application is something that can connect to random HTTPS services you have no control over, like a web browser, your options are more limited. In those cases, the best you can do when a remote host's credentials are in doubt is give the user the choice to trust or not trust the credentials. On iOS there is no system-provided UI for doing so; that would be something your application would have to implement.
This is only one, small facet of securing an iOS application, but your question was specific about man in the middle attacks.
The way in which Fiddler can decrypt HTTPS traffic is by using its own certificate. However, when Facebook/Snapchat/Gmail detects that the certificate is not trusted by the system (and in some cases apps are stricter and limit which certificates they accept, so even a third-party trusted cert might be rejected), it will refuse to connect.
Fiddler can generate certs for the iOS to accept and install onto the system, but you first need to follow these instructions:
Install CertMaker
Generate the certificate from Fiddler; it should then be on your desktop
Visit the certificate from your Safari browser (Safari only, others will not work)
Install the certificate
From this, you should then be able to sniff traffic from these applications.
So, to answer the question again: it's not that they're actively preventing it; it's common for SSL applications to refuse responses when the server provides an untrusted certificate. What Fiddler does is replace part of the certificate chain with its own, so that when you are communicating over SSL, Fiddler can use its cert to decrypt your traffic.
To answer the second part of your question, please check out this question for details. Essentially, you can force the app to accept only a specific certificate and thus prevent it from trusting user-installed certs.
However, they can still get around this -- just in a slightly sneakier way. Be advised: this is on the client side, so anything goes.
I'm fairly new to SSL and secure connections in general. What are the major steps required for an iOS app to talk to a server over a secure communications channel?
I'm aware that an SSL certificate will probably be necessary. I'm planning to purchase one from a trusted certificate authority. However I'm not sure if both the app and the server need certificates or if it's just the server. Also I'm not sure how to handle SSL errors. Perhaps there's a library that can help with this like ASIHTTPRequest or similar.
If you are using HTTPS as your protocol for communication and have valid certificates on your server, all that should be required is changing your http:// to https:// on your client. For HTTP libraries, a very popular option now is AFNetworking. It is a bit better maintained than ASI and has some nice block features not supported by ASI.
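To show how little the client code changes, here is a minimal Swift sketch using Apple's URLSession (AFNetworking wraps the same underlying machinery); the URL is a placeholder. If the server presents a valid certificate the TLS handshake is handled for you, and if not the request simply fails with an error:

    import Foundation

    // Minimal sketch: the only client-side change is the https:// scheme.
    // "https://api.example.com/profile" is a placeholder URL.
    let url = URL(string: "https://api.example.com/profile")!
    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        if let error = error {
            // TLS problems (e.g. an untrusted or expired certificate) surface here.
            print("Request failed: \(error.localizedDescription)")
            return
        }
        print("Received \(data?.count ?? 0) bytes over TLS")
    }
    task.resume()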
As far as SSL errors go, it is usually a good idea to present the warnings to end users (through alert views or some other means). They could point to real security attacks (but more likely will point to misconfigured or expired certificates).
We are building an end to end solution that will allow our customers to access their ERP data hosted in their own servers through mobile applications. Version 1 will be an iOS app.
We need to make sure the traffic between the client and the server is encrypted with SSL. The problem is that we want the installation of the server to be as seamless as possible, hence we don't want the customer to go through the process of buying and installing SSL certificates. Not to mention having to renew that certificate every year.
We were thinking of creating a self-signed CA certificate and using it to create child certificates for each client to install on their servers (along with the public CA certificate). We would automate the process of creating the child certificate and include it as part of the setup process. The certificate would also have a very long expiration date so we don't have to deal with renewals. But if we use this certificate, the requests from the client won't be trusted, as the CA won't be trusted.
Can the CA be added to the iOS app or device?
Is there a security concern with this implementation?
I have a very similar situation. So far I have just created self-signed certificates and programmed the clients to allow untrusted SSL certificates. If there is a better answer I'd love to hear it.
It is 2017 and letsencrypt now exists, which provides free domain validation and signing of TLS certificates such that browsers / OS HTTPS or TLS libraries and frameworks will accept them, and through certbot it is relatively easy to set up auto-renewal. I won't describe it here because it's deployment specific, but they have good docs. Combined they're probably the best solution out there.
Bundling and using self-signed certificates is seriously sub-optimal for various reasons, and there's no reason to do it anymore (except perhaps gross laziness), so don't.
Free is only for basic domain-validated certificates, i.e. where letsencrypt.org validates that you own the domain that you say you do (and certbot is used to automate that process). You still need to pay for extra verification steps if you want them. However, for internal TLS connections between your app and your server, you only really need domain validation, because you only have to be sure you are talking to your server. The extra steps are more focussed on giving a customer confidence in a company, so they can hand over sensitive data with greater peace of mind. Generally speaking, if they are using your app, that suggests they trust your company already, so the extra validation is not important (and probably invisible to the customer anyway).
In development if you want to use self-signed certificates this may still make sense. Check out my answer to this question on how to install self-signed certificates for all apps on your iOS device.
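If you do go down the bundled private CA route from the question, note that the CA does not have to be installed on the device at all: the app can evaluate the server's certificate chain against a CA certificate shipped inside the app bundle. Below is a rough sketch of that idea in Swift; the resource name my_private_ca.der and the function name are placeholders of mine, and the Let's Encrypt route above remains the simpler option:

    import Foundation
    import Security

    // Sketch: trust a private CA shipped inside the app bundle instead of installing
    // it on the device. "my_private_ca.der" is a placeholder resource name.
    func serverTrustIsSignedByBundledCA(_ trust: SecTrust) -> Bool {
        guard let caURL = Bundle.main.url(forResource: "my_private_ca", withExtension: "der"),
              let caData = try? Data(contentsOf: caURL),
              let caCertificate = SecCertificateCreateWithData(nil, caData as CFData) else {
            return false
        }
        // Evaluate the presented chain against our CA only, ignoring the system roots.
        SecTrustSetAnchorCertificates(trust, [caCertificate] as CFArray)
        SecTrustSetAnchorCertificatesOnly(trust, true)
        var error: CFError?
        return SecTrustEvaluateWithError(trust, &error)
    }

You would call this from the URLSession authentication challenge callback, accepting the server trust only when it returns true.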
I'm setting up a server to do receipt verification for IAP on the App Store.
My question is: should I make the connection between the iOS device and my server an https connection, or does http suffice? In all the examples I've seen, people are just using http.
It seems that if I use http, then it's vulnerable to someone redirecting the DNS. Or does that not matter? Seems like it would.
Of course, I'm such small potatoes that it's probably not worth the hassle.
It is always desirable to use an https (encrypted) connection when you are passing credentials or sensitive information such as financial transactions. Maybe it is not possible for anyone to mangle the transaction itself, but still, you are breaching the confidentiality of financial transactions, which your client might not like.
However, it is not just https which can help; you can also implement your own custom encryption in the application to make the communication more secure (maybe the security is not as strong, but it does work in cases where you really do not need overkill). Try encrypting the data with a pre-shared key and decrypting it on the server (which I have done myself many times).
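As a rough sketch of that last suggestion, this is what an extra application-layer encryption step could look like with Apple's CryptoKit and a pre-shared symmetric key. Bear in mind that a key baked into the app can be extracted by a determined attacker, which is why this complements HTTPS rather than replaces it; the key handling here is deliberately simplified:

    import Foundation
    import CryptoKit

    // Sketch: encrypt the payload with AES-GCM under a pre-shared 256-bit key before
    // sending it, so a server holding the same key can decrypt (and authenticate) it.
    // For the sketch the key is generated here; in a real app the same key would be
    // provisioned to both the app and the server out of band.
    let preSharedKey = SymmetricKey(size: .bits256)

    func encryptPayload(_ plaintext: Data, using key: SymmetricKey) throws -> Data {
        // combined = nonce + ciphertext + authentication tag (non-nil for the default nonce).
        let sealedBox = try AES.GCM.seal(plaintext, using: key)
        return sealedBox.combined!
    }

    func decryptPayload(_ combined: Data, using key: SymmetricKey) throws -> Data {
        let sealedBox = try AES.GCM.SealedBox(combined: combined)
        return try AES.GCM.open(sealedBox, using: key)
    }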