How to prove user identity when the user WANTS others to impersonate them? - ios

I have an interesting problem. Let's say we have a user, Bob, who logs in to some service. How can that service prove Bob's identity, assuming Bob actively wants others to try and impersonate him? i.e. How can we be sure that the user logging in is indeed Bob?
Using the MAC/IP address of Bob wouldn't work as these can be easily spoofed.
A username/password as means of authentication wouldn't work since Bob could just give these credentials to anyone.
A public-key system (e.g. using RSA for signing) wouldn't work as Bob could just share his private key with anyone.
What I essentially need is Bob to have some proof of ID that he cannot share (or is at least hard for someone else to replicate, given all information that Bob has).
Edit (in case this is useful): I'm working with an iOS app (Bob) and a Python web server (the service).

Alternatives:
A hardware token that the user must present during authentication, such as a USB token or a cryptographic smartcard.
Biometrics, which cannot be shared: for example fingerprint, voice, ear, or iris recognition. In some cases you will need a dedicated reader (note that raw fingerprint data is not exposed to apps on mobile devices), you have to work with confidence ranges, and you need a large database for comparison; a biometric match is never 100% reliable.
Public-key cryptographic systems that manage non-extractable keys. The client-side cryptographic provider can generate or import keys marked as non-extractable, which can never be exported, e.g. the Web Cryptography API, AndroidKeyStore, the iOS Keychain, or the Windows key store. During user registration a key pair is generated: the public key is sent to the server and associated with the user account, while the private key is stored securely on the device. Authentication is then performed with a digital signature made with the private key (see the sketch after the Keychain note below).
See FIDO UAF (Universal Authentication Framework) and FIDO U2F (Universal Second Factor):
https://fidoalliance.org/about/what-is-fido/
The iOS Keychain allows you to mark a key as non-extractable. See the Apple documentation:
Important
If you do not set the CSSM_KEYATTR_EXTRACTABLE bit, you cannot extract the imported key from the keychain in any form, including in wrapped form.
Take a look also at Store and retrieve private key from Mac keychain programmatically.
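On current iOS you can take the non-extractable-key idea one step further and generate the private key inside the Secure Enclave. A minimal sketch, assuming a Secure-Enclave-capable device; the application tag "com.example.bob.device-key" and the server nonce are placeholders:
import Foundation
import Security

// Generate a non-extractable EC P-256 key pair inside the Secure Enclave.
let access = SecAccessControlCreateWithFlags(kCFAllocatorDefault,
                                             kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
                                             .privateKeyUsage,
                                             nil)!
let attributes: [String: Any] = [
    kSecAttrKeyType as String:       kSecAttrKeyTypeECSECPrimeRandom,
    kSecAttrKeySizeInBits as String: 256,
    kSecAttrTokenID as String:       kSecAttrTokenIDSecureEnclave, // private key never leaves the device
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String:    true,
        kSecAttrApplicationTag as String: Data("com.example.bob.device-key".utf8),
        kSecAttrAccessControl as String:  access
    ]
]

var error: Unmanaged<CFError>?
guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error),
      let publicKey = SecKeyCopyPublicKey(privateKey),
      let publicKeyData = SecKeyCopyExternalRepresentation(publicKey, &error) as Data? else {
    fatalError("key generation failed: \(String(describing: error?.takeRetainedValue()))")
}
// Registration: send publicKeyData to the server and associate it with Bob's account.

// Login: sign a server-supplied nonce; the server verifies it with the stored public key.
let challenge = Data("nonce-from-server".utf8)
let signature = SecKeyCreateSignature(privateKey,
                                      .ecdsaSignatureMessageX962SHA256,
                                      challenge as CFData,
                                      &error)! as Data
Bob can still hand over his unlocked phone, but he cannot copy the key itself off the device, which is about as close to an unshareable credential as a pure software check gets.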

Related

When can ADAL tokens be shared? (iOS)

In my application, I am getting an access token via ADAL's acquireTokenSilent() for one resource, which succeeds, and then I try to get an access token for another resource and it says it was not found, so I have to call the API to explicitly prompt for credentials. This is a problem since the user then has to log in twice with the same credentials in order to access two different resources.
I am using the same authority for each resource. Here is the message that shows there is no hit in the cache for the second resource.
May 4 13:22:37 iPad MyApp[290] : ADAL 2.4.1 iOS 10.2.1 [2017-05-04 20:22:37 - XXXX] INFO: No items were found for query: (resource https://MYRESOURCE + client + authority https://login.windows.net/common)
So my question is: under what circumstances will tokens be shared across resources, and are there any special allowances (ways to use the APIs) that allow this?
If you are building two native clients (public clients) and you want to enable single sign on across the two, one option is to share the App ID between the apps versus passing the actual token from one service to another service.
For example, let's say your company name is Contoso. You have a Calendar mobile app and a Document Editor app.
You can create a single Native Client Application with:
A common application name, like "Contoso Apps"
Redirect URIs for both apps
Permissions required for the sum of the two applications
Then when a user signs into either application, they will see a login screen with the generic name "Contoso Apps" and be prompted to consent to the permissions for both apps at the same time. Now this might be a bit of a bad experience, since the combined permissions will probably be more than each individual app requires, but that could be fixed in the future with Incremental Consent.
Then, assuming you are using our authentication libraries, which automatically cache the access tokens, when the user opens the second application they will not be prompted to consent because you already have a token cached for that Application ID.
This obviously is not the best solution, but one that has been used in the past for large enterprise applications.

Authorize users at a machine level?

Is it possible to authorize users at a machine level? For example, only when using authorized computers (my personal laptop or other managers' PCs) can one get access to the admin page; any other computer should get an access-denied message or something else. Authorized computers may still have to provide an admin username and password, in case someone could fake a machine's identity. I'm not a security expert, though.
Correct me if I misunderstand, but you are asking to only allow visitors on specific machines to access your website?
Jumping right into a solution here. The first question is: how do you know which machines are the managers' machines? Do you have a list of their IP addresses? Do you have some other ID for them?
If you have their IP addresses, then IP-whitelist them and block all other IP addresses.
If you do not have their IP addresses, then you are limited. There is no machine ID that can be accessed through a web browser, so you'll need to create your own ID by setting a long-lived cookie and adding a registration process.
Since you already have a login process, this next part is fairly easy, and you've used this solution before: when you sign in to Google Mail, click "remember me", and don't need to sign in again after your computer restarts, Google has basically marked your machine as yours (by setting a cookie).
Now, if you want to get super fancy, enterprises have NAC set up: every system is identified before being allowed to connect to the network, and certain systems are given more access than others. For example, at a software development company, engineers may be given access to a production network while sales staff are not; when they connect, sales staff are moved to a restricted VLAN after identifying who they are and who the machine belongs to. If that were the case for your company, then you would whitelist an entire subnet block.
Last point. Chase bank uses the machine-cookie concept like so: the first time you log in they ask for your username and password, then they send a code to your phone or some other third-party channel. After you enter the code, they set a machine cookie (the same old cookie). The next time you log in, they ask for username and password, then look for the machine cookie; if it is there, they don't make you enter the code again.
You could make that your registration process, except you provide the manager with a code they can enter. I don't think you want to get much more complex than a static password to register the machine, but if you did, you can generate one-time tokens following the spec in RFC 4226 (a minimal sketch follows).
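If you do go the one-time-token route, here is a minimal HOTP sketch (RFC 4226) in Swift, since the surrounding questions are iOS-centric: HMAC-SHA1 over an 8-byte big-endian counter, dynamic truncation, then reduction modulo 10^digits. Provisioning the shared secret and tracking the counter per machine are left to your registration process:
import CryptoKit
import Foundation

// HOTP value for a given shared secret and moving counter (RFC 4226).
func hotp(secret: Data, counter: UInt64, digits: Int = 6) -> String {
    var counterBE = counter.bigEndian
    let counterData = Data(bytes: &counterBE, count: MemoryLayout<UInt64>.size)
    let mac = Data(HMAC<Insecure.SHA1>.authenticationCode(for: counterData,
                                                          using: SymmetricKey(data: secret)))
    let offset = Int(mac[mac.count - 1] & 0x0f)              // dynamic truncation offset
    let binCode = (UInt32(mac[offset] & 0x7f) << 24) |
                  (UInt32(mac[offset + 1]) << 16) |
                  (UInt32(mac[offset + 2]) << 8) |
                  UInt32(mac[offset + 3])
    var code = String(binCode % UInt32(pow(10.0, Double(digits))))
    while code.count < digits { code = "0" + code }          // left-pad with zeros
    return code
}
The server stores the same secret and counter, computes the expected value, and compares it with what the user submits.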
You can't restrict access to a specific computer (there are many types of devices in use and there's no universal identifier to bind to), but depending on your application design you can still solve your problem. You need to bind not to a computer but to another hardware device that cannot be duplicated.
One such device is a hardware cryptotoken or cryptocard with a certificate and a private key in it. The user plugs the device into a USB port or a card reader respectively, then authenticates to the server using the certificate and private key stored on the device. Client-side authentication using certificates is a large but well-known topic, so I won't discuss it here.
While it's possible to move the cryptographic device to another computer, it's not possible to duplicate it or extract the private key from it. So you can assume, with a reasonably high level of reliability, that there exists only one copy of the private key and that it's stored on one particular device.
Of course you would need to create another certificate for each device, but this is not a problem - the only purpose of these certificates is to be accepted by the server, so the server can issue new certificates when needed.

How to store a secret API key in an application's binary?

I am creating a Twitter client for Mac OS X and I have a consumer secret. It's my understanding that I should not share this secret key. The problem is that when I put it as a string literal into my application and use it, like this:
#define QQTwitterConsumerSecret @"MYSECRETYOUMAYNOTKNOW"
[[QQTwitterEngine alloc] initWithConsumerKey:QQTwitterConsumerKey consumerSecret:QQTwitterConsumerSecret];
It is in the data section of my application's binary. Hackers can read this, disassemble the application, etcetera.
Is there any safe way of storing the Consumer secret? Should I encrypt it?
There is no real perfect solution. No matter what you do, someone dedicated to it will be able to steal it.
Even Twitter for iPhone/iPad/Android/mac/etc. has a secret key in there, they've likely just obscured it somehow.
For example, you could break it up into different files or strings, etc.
Note: using a hex editor you can read ASCII strings in a binary, which is the easiest way to find the key. Breaking it up into different pieces or building the secret key with function calls usually makes that process more difficult (a Swift sketch of this idea appears below).
You could just base64-encode it to obfuscate it. Or, better idea, generate the key instead of just storing it - write something like this:
char key[100] = {0};  /* zero-initialize so the increments below start from 0 and the string stays NUL-terminated */
++key[0]; /* ... */ ++key[0]; // increment as many times as necessary to reach the ASCII code of the first character
// ... and so on for the remaining characters, you get the idea.
However, a really good hacker will find it no matter what; the only way to really protect it from others' eyes is to use a secure hash function, but then you won't be able to retrieve it either :)
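In the same spirit, a small Swift sketch of assembling the secret at runtime instead of storing one readable literal. The byte values below are placeholders (each byte of a made-up secret XOR-ed with the mask 0x5A), not a real key, and a determined attacker can still recover the result from memory or a debugger:
import Foundation

private let mask: UInt8 = 0x5A
// XOR-obfuscated bytes of the (placeholder) secret; nothing readable shows up under `strings`.
private let obfuscatedSecret: [UInt8] = [0x17, 0x03, 0x09, 0x1F, 0x19, 0x08, 0x1F, 0x0E]

func consumerSecret() -> String {
    // Rebuild the plaintext only in memory, when it is actually needed.
    let bytes = obfuscatedSecret.map { $0 ^ mask }
    return String(decoding: bytes, as: UTF8.self)
}
This only raises the bar; it does not make the secret safe, as the other answers here point out.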
You should not use a secret API key in an application that does not run solely on your server.
Even if it's perfectly hidden, you can always snoop on the data going over the wire. And since it's your device, you could even tamper with SSL (a man in the middle using a certificate created by a custom CA added to the device's trusted CA list), or hook into the SSL library to intercept the data before it is actually encrypted.
A really late answer...
If you set up your own server, you can use it to help your desktop app get authorized by users on Twitter without sharing (i.e. embedding) your secret key.
You can use this approach:
1) When a user installs your desktop app, she must register it with Twitter and with your server
2) The app asks the server to generate the token request URL
3) The server sends the generated URL to the app
4) The app directs the user to the authorization URL
5) The user authorizes your app on Twitter and pastes the generated PIN into it
6) Using the PIN, your app obtains the token
7) All further communication uses the token and does not involve your server
Note: the app logs in to your server using the user's credentials (e.g. id and password) for your server. A rough client-side sketch of the first steps follows.
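A rough sketch of steps 2 and 3 from the desktop app's side. The helper-server endpoint "/oauth/request-url" is a made-up name for illustration; substitute whatever your server actually exposes:
import Foundation

// Ask the helper server (which holds the consumer secret) to build the Twitter token-request URL.
func requestAuthorizationURL(completion: @escaping (URL?) -> Void) {
    let endpoint = URL(string: "https://your-server.example.com/oauth/request-url")!
    URLSession.shared.dataTask(with: endpoint) { data, _, _ in
        // The server returns the authorization URL it generated with its secret key.
        let authorizeURL = data.flatMap { String(data: $0, encoding: .utf8) }.flatMap { URL(string: $0) }
        completion(authorizeURL)
    }.resume()
}
The app then opens that URL in a browser; the user authorizes it on Twitter, receives a PIN, and the app exchanges the PIN for the token, so the consumer secret never leaves the server.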

iPhone: is there any secure way to establish 2-way SSL from an application?

I need to establish an HTTPS 2-way SSL connection from my iPhone application to the customer's server.
However I don't see any secure way to deliver the client side certificates to the application (it's an e-banking app, so security is really an issue).
From what I have found so far, the only ways the app could access the certificate are to ship it pre-bundled with the application itself, or to expose a URL from which it can be fetched (iPhone app with SSL client certs).
The thing is that neither of these two approaches prevents some third party from getting the certificate, which, if accepted as a risk, eliminates the need for 2-way SSL (since anyone can have the client certificate).
The whole security protocol should look like this:
- HTTPS 2-way SSL to authenticate the application
- OTP (token) based user registration (client side key pair generated at this step)
- SOAP / WSS XML-Signature (requests signed by the keys generated earlier)
Any idea on how to establish the first layer of security (HTTPS) ?
Ok, so to answer my own question...
It turned out that security has no fixed scale of measurement.
The security requirements are satisfied as long as the cost of breaking the system is significantly above the reward one would get for doing so.
In my situation we are talking about an e-banking system, but with fairly low monthly limits (a couple of thousand USD).
As I mentioned in my question, there is another layer of security above HTTPS, which features WSS XML-Signatures. The process of registering the user and accepting his public key is also done in several steps. In the first step the user sends his telephone number together with a code retrieved somehow from my client. Then an SMS with a confirmation code is sent to the user. The user enters the confirmation code into an OTP calculator, which produces an OTP code identifying the user. The public key is then sent to the server together with the OTP code. From here on, every request is signed with the private counterpart of the public key sent to the server earlier.
So the biggest weakness of the whole process is if someone reverse-engineers the application and retrieves the client certificate used for the SSL. The only problem arising from this is that someone might observe users' transactions. However, in order for someone to make a transaction he would need the user's private key, which is generated, encrypted, and stored in the keychain, and the price of breaking this security level is VERY HIGH.
We will additionally think about how to protect the users' data at a higher level (e.g. using WSS Encryption), but for a start I think we are good with the current solution.
Any opinions?
Regards
https doesn't really work this way. In a nutshell, you attach to a secure server where the certificates are signed by a well known authority.
If you use Apple's (iPhone) classes for this, they will only accept 'good' certificates, meaning what Apple deems acceptable. If you don't use them (there are alternatives in the SDK), you won't be able to connect (except, maybe, in the case where you have an 'Enterprise' developer license, but I can't say that with 100% certainty as I haven't looked closely enough at that license to be sure).
To continue, use your https connection to your correctly signed website and then institute some sort of login with a built in username/password, or challenge/response based upon the unique ID of the iPhone (for example) and exchange keys using that connection.
Note that this means your application will have to query for new certificates at some interval (each connection, every X connections, every month, or application-specified intervals) to keep them up to date. You can then use these certificates to connect to the more secure server.
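For the client side of the 2-way SSL part, a minimal sketch of presenting a client certificate with URLSession. It assumes you already have a SecIdentity (for example, imported from a PKCS#12 delivered after the registration step); how you obtain and store that identity is outside this sketch:
import Foundation
import Security

final class MutualTLSDelegate: NSObject, URLSessionDelegate {
    private let clientIdentity: SecIdentity

    init(identity: SecIdentity) {
        self.clientIdentity = identity
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        // Only answer the client-certificate challenge; let the system validate the server as usual.
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        let credential = URLCredential(identity: clientIdentity,
                                       certificates: nil,
                                       persistence: .forSession)
        completionHandler(.useCredential, credential)
    }
}

// Usage: URLSession(configuration: .default, delegate: MutualTLSDelegate(identity: identity), delegateQueue: nil)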
[edit]
Check this post - may have more information about what you're asking to do
[/edit]
[edit2]
Please note that the request is iphone, not OSX - app store approval is an issue
[/edit2]

How do I send and receive encrypted email in Ruby on Rails?

I have a rails application that triggers Emails on certain events. These emails are sent to a separate company who will add some additional data to the email when replying. This is all understood and working, I am parsing the replies, extracting the data and it works fine.
I have now been asked to encrypt the emails.
Does anyone have any experience/ideas on the best way to do this?
I cannot guarantee which email client the third party will be using, so I need a solution that works generically across many email clients. The encryption must be applied both by my application when I send the email and by the client application (Outlook, Thunderbird, Entourage, etc.) when it replies. I will then need to receive the encrypted reply, decrypt it, and parse it to extract the new information I need.
Can anyone point me at plugins/documents that would help me achieve this?
If the other end doesn't use your application, you should use S/MIME or PGP.
Most desktop email clients support S/MIME out of the box, and PGP is usually available as a plugin (for Thunderbird there's Enigmail, for Apple Mail there's GPGMail, etc.).
Also, S/MIME needs certificates, which you can create yourself or purchase from a Certificate Authority (like Verisign or Thawte), depending on your needs.
I'm sure there are S/MIME and PGP libraries for Ruby, but a quick search didn't reveal the "one true library" for me. However, you can always let OpenSSL (for S/MIME) or GPG do the heavy-lifting for you.
I think Güder's answer is excellent, but keep in mind that all that necessitates that the user already have something like GPG installed and an associated key available. This grueling setup process is about 95% of the obstacle to getting email encryption more widespread.
Are you certain that the individuals who commissioned this project understand that it's not as simple as flipping a switch in the code to send encrypted emails?
One option is to incorporate in the install process for your program a key management routine that depends on (and includes) GPG. Then the user could select a very difficult passphrase (make sure to run checks on it so it's at the very least alphanumeric, etc.), a public key could be generated from that, and uploaded to the popular keyservers.
The generated key could be used for the emails the program generates, and most importantly, the key would be unique to each user. Then you can do a regular external call to the default email client on the user's OS to open the email.
To make sure that the email gets opened up encrypted, I would check the environment, get the default email client, and then send the email from your program with the flags necessary to have the generated email be encrypted. This means it's going to be different for Thunderbird's Enigmail than it is for Apple's Mail, for example.
But don't forget about OpenSSL, certainly....
