Mobile API design - flexibility vs performance - iOS

I work for a startup where we have an iOS product that interfaces with a backend API. Initially, as the product was being developed, the API was designed to be flexible so that the client could always access the data it needed for a given view, especially as the views were evolving.
As we’re starting to scale, we’re now finding that there are a lot of performance bottlenecks due to the amount of data we’re passing to the client — some of which is unneeded on a given endpoint.
My question is: in the case of a private API, where you're also building the only client that will consume it, is it common (or acceptable) to couple the front-end requirements directly to what the backend serves, so that the backend provides exactly what the client needs for a given endpoint / view?

Yes. The goal of an API is to provide a reasonable service to all the clients you want to support. If you only have one client, it's both common and acceptable (desirable, even) to optimize your API to support the one client.
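For illustration, a minimal sketch of the difference as seen from the iOS client, with invented endpoint and field names: the flexible design returns a general-purpose resource, while the coupled design gives each view its own tailored payload.

```swift
import Foundation

// Flexible design: one general-purpose resource that over-fetches
// for most screens.
struct UserResource: Codable {
    let id: Int
    let displayName: String
    let email: String
    let preferences: [String: String]
    let orderHistory: [String]   // heavy, and rarely needed by any one view
}

// Coupled design: a per-view endpoint (e.g. GET /v1/profile-view,
// invented here) returns exactly what the profile screen renders.
struct ProfileViewResponse: Codable {
    let displayName: String
    let avatarURL: URL
    let unreadCount: Int
}
```

This is essentially the backend-for-frontend (BFF) pattern.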

Related

Best method for protecting IP data downloaded to an iOS App?

I'm enhancing a commercial App which until now has used cloud AI models to analyse data and make predictions.
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients, and it is essential that we protect any data downloaded to a device from theft.
The App is iOS-only for now, and I was intrigued by WWDC 2020's Core ML update, which includes support for encrypting models. This would be ideal, but we can't use Core ML at the moment because its API doesn't support the methods our models require.
Nice to know, though, that this is a recognised issue with in-app ML model usage.
What are the best methods and available options in iOS (>11.0) right now that won't run afoul of encryption export laws or Apple's App Store rules?
Our models are JavaScript, which we run in a JavaScriptCore VM, with additional data files loaded from JSON string files.
My current thinking is to use something like iOS's AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain. Decrypt the data strings in memory before loading them into the JS VM.
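For concreteness, a rough sketch of what I have in mind, using CryptoKit for AES-GCM (iOS 13+; on iOS 11/12 CommonCrypto would be the equivalent) and placeholder names throughout:

```swift
import CryptoKit
import Foundation
import Security

// Store the key received over HTTPS in the keychain (placeholder attributes).
func storeModelKey(_ keyData: Data) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: "model-decryption-key",
        kSecValueData as String: keyData
    ]
    SecItemDelete(query as CFDictionary) // replace any existing item
    return SecItemAdd(query as CFDictionary, nil)
}

// Decrypt an AES-GCM sealed model blob in memory, just before handing
// the plaintext JSON/JS strings to the JavaScriptCore VM.
func decryptModel(_ sealedBlob: Data, keyData: Data) throws -> Data {
    let key = SymmetricKey(data: keyData)
    let box = try AES.GCM.SealedBox(combined: sealedBlob)
    return try AES.GCM.open(box, using: key)
}
```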
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this?
The Data
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP to our clients, and it is essential that we protect any data downloaded to a device from theft.
From the moment you make the data or secrets public, in the sense that you ship them in your mobile app binary or later download them to the device and store them encrypted, you need to consider them compromised. There is no bulletproof way around this: no matter what you try, you can only make the data harder to steal, and with all the instrumentation frameworks available to introspect and instrument code at runtime, your encrypted data can be extracted from the very function that decrypts it:
Decrypt the data strings in memory before loading them into the JS VM.
An example of a very popular instrumentation framework is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
The Private Key
My current thinking is to use something like iOS's AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the keychain.
While not hard-coding the private key in the app is a wise decision, it doesn't prevent an attacker from performing a man-in-the-middle (MitM) attack to steal it, or from using an instrumentation framework to hook into the code that stores it in the keychain. You may already be aware of this, but it's not clear from:
I can see the obvious weaknesses with this approach...
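If you are not already doing so, the usual mitigation for the MitM vector on the key delivery is certificate pinning on the HTTPS channel. A minimal sketch that pins the SHA-256 hash of the server's leaf certificate (the hash value is a placeholder):

```swift
import CryptoKit
import Foundation
import Security

final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // Placeholder: base64-encoded SHA-256 of the server's DER certificate.
    private let pinnedCertHash = "REPLACE_WITH_REAL_HASH="

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let cert = SecTrustGetCertificateAtIndex(trust, 0) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let certData = SecCertificateCopyData(cert) as Data
        let hash = Data(SHA256.hash(data: certData)).base64EncodedString()
        // Accept the connection only when the presented certificate matches.
        if hash == pinnedCertHash {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```

Bear in mind that the same instrumentation frameworks mentioned above can disable pinning at runtime, so this only raises the bar.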
As a side note, I think you and the business first need to consider whether the benefit to the user of having predictions made locally on their device outweighs the huge risk of moving the data from the cloud onto the device. Data protection laws also need to be taken into consideration, because the fines when a data breach occurs can have a huge impact on the organization's future.
iOS Solutions
What are the best methods and available options in iOS (>11.0) right now that won't run afoul of encryption export laws or Apple's App Store rules?
I am not an expert in iOS, so I cannot help you much here, other than recommending that you use as many obfuscation techniques and runtime application self-protection (RASP) measures as possible on top of the solution you have already devised, so that you can make an attacker's life harder. A minimal sketch of one such runtime check follows the RASP definition below.
RASP:
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering.
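Purely to illustrate the flavour of such checks, and not as a substitute for a proper RASP product, here is a minimal Swift sketch of one naive runtime self-check, a jailbreak probe; skilled attackers bypass checks like this easily, so treat it as one small layer among many:

```swift
import Foundation

// Naive runtime self-check: probe for common jailbreak artifacts.
// Trivial to bypass with instrumentation; commercial RASP solutions
// layer many such checks and obfuscate them.
func deviceLooksJailbroken() -> Bool {
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/usr/sbin/sshd",
        "/bin/bash"
    ]
    for path in suspiciousPaths where FileManager.default.fileExists(atPath: path) {
        return true
    }
    // The sandbox should forbid writing outside the app container;
    // if this write succeeds, the sandbox is broken.
    let probePath = "/private/\(UUID().uuidString)"
    if FileManager.default.createFile(atPath: probePath, contents: Data()) {
        try? FileManager.default.removeItem(atPath: probePath)
        return true
    }
    return false
}
```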
You can also try advanced biometrics solutions to ensure that a real user is present while the mobile app is being used, bearing in mind that more skilled attackers will always find a way to extract the data to a command-and-control server. It's not a question of whether they will be able to, but of when it will happen; and when it happens it's a data breach, and you need to have planned ahead to deal with its business and legal consequences.
So after you apply the most suitable in-app defenses you still have one issue left to resolve: ensuring your API server knows what is making the request. It seems you have already implemented user authentication, which establishes on whose behalf the request is being made.
The Difference Between WHO and WHAT is Accessing the API Server
When downloading the data to the device you need to consider how you will ensure that your API server is indeed accepting the download requests from what you expect, a genuine instance of your mobile app, and not from a script, bot, etc. I need to alert you that user authentication only says on whose behalf the request is being made, not what is making it.
I wrote a series of articles on API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
Think about the who as the user your API server will be able to Authenticate and Authorize access to the data for, and think about the what as the software making that request on behalf of the user.
I see this misconception arise over and over, even among experienced developers, DevOps and DevSecOps engineers, because our industry is more geared towards identifying the who, not the what.
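As a trivial illustration of the distinction, a mobile app commonly sends an app-level API key alongside the user's token (a hedged Swift sketch with invented names; note that a static key is itself extractable from the binary, which is why the article goes on to discuss stronger approaches):

```swift
import Foundation

// The Authorization header answers WHO is making the request;
// the x-api-key header is a (weak) claim about WHAT is making it.
func makeAPIRequest(path: String, userToken: String) -> URLRequest {
    var request = URLRequest(url: URL(string: "https://api.example.com\(path)")!)
    request.setValue("Bearer \(userToken)", forHTTPHeaderField: "Authorization")
    request.setValue("INVENTED-APP-KEY", forHTTPHeaderField: "x-api-key")
    return request
}
```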
Others' approaches
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this?
As I said previously, I am not an iOS expert and don't have more to offer than what I have already mentioned in the iOS Solutions section. But if you want to learn how to lock your mobile app to the API server, so that it replies with a very high degree of confidence only to requests from a genuine instance of your mobile app, then I recommend you read my accepted answer to the question How to secure an API REST for mobile app?, specifically the sections Securing the API Server and A Possible Better Solution, where you will learn how the Mobile App Attestation concept may solve this problem.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the amazing work from the OWASP foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.

Why is OData not widely adopted by developers for RESTful development?

The community of developers using OData for their REST implementations seems to be the smallest of all the REST approaches I usually come across.
Any reasons?
There is virtually no contract. A service consumer has no idea how to use the service (for example, what are valid Command arguments, encoding expectations, and so on).
The interface errs on the side of being too liberal in what it will accept.
The contract does not provide enough information to consumers on how to use the service. If a consumer must read something other than the service’s signature to understand how to use the service, the factoring of the service should be reviewed.
Consumers are expected to be familiar with the database and table structures prior to consuming the Web service. This results in a tight coupling between service providers and consumers.
Performance will suffer due to dependencies on late binding and encoding/decoding between boundaries within the same service.
Source: https://docs.servicestack.net/why-not-odata
OData is a great standard to expose datasets with good tool support (Excel, Tableau, PowerBI...).
As far as I'm concerned it has saved me a lot of time and effort: projecting, sorting, filtering and so on are available out of the box without having to code anything (especially with .NET). It's my go-to option for RESTful APIs on table-like structures.
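For a flavour of what you get out of the box, the standard OData system query options compose directly in the URL; a sketch of a client call against a hypothetical service (entity set and field names invented):

```swift
import Foundation

// $filter, $orderby, $select and $top are part of the OData standard,
// so the server implements them once and every client gets projection,
// sorting and paging for free.
func fetchCheapProducts() async throws -> Data {
    var components = URLComponents(string: "https://example.com/odata/Products")!
    components.queryItems = [
        URLQueryItem(name: "$filter", value: "Price lt 10"),
        URLQueryItem(name: "$orderby", value: "Name"),
        URLQueryItem(name: "$select", value: "Name,Price"),
        URLQueryItem(name: "$top", value: "20")
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return data
}
```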
I had an interesting conversation with a contractor from one of the major outsourcing companies the other day. He has built restful APIs for many customers and when I asked if he used OData sometimes, he replied 'we don't do OData, we prefer Json' (sigh...).
So I guess one of the possible answers to your question is ignorance, many simply don't know OData or understand it...

Web service documentation (schemas, locations) discovery in SOA

I need to make a recommendation on approaches for allowing web service (WCF) documentation (wsdl, schemas, locations etc.) to be stored and found. Being able to monitor the services would be a definite bonus.
This needs to be considered in the wider context of moving to an SOA built, where possible, with Microsoft technologies that should be accessible by clients from other frameworks. The aim is to develop a system in which clients do not need to change if a service is moved or new versions are brought online - it should be possible to write the client 'knowing' just one address / location which is capable of directing them appropriately.
Having a central location for the service documentation is important too; our Business Analysts should be able to find everything they need to know about the services we provide in a central place. We would also want (potentially) to expose that repository of service information to partners as well. I know we could generate WSDLs and manage them manually (create a folder somewhere and zip them up before sending them out), but that seems very labour-intensive and prone to error (on my part).
As I see it at the moment there are two broad approaches:
Write something bespoke that uses WS-Discoverability and a dynamic routing service which can respond to the client requests.
Get an off-the-shelf solution.
I have to say that an off-the-shelf solution is the most likely approach to be accepted, but I have to at least consider the alternatives. For the off-the-shelf solutions I have identified
BizTalk
WSO2 ESB and WSO2 Governance Registry
as possibly providing the features.
What I need to know
Am I right with my understanding of the broad approaches?
Are there any other approaches I should consider evaluating?
Specifically I also need to know pros and cons of any approach I consider and have an idea of how it could be implemented.
To start with, I would definitely not go with BizTalk or any WS-Whatever SOAP-based protocol.
Go simpler and you'll be a happy man in the end.
For the middleware I would go with MassTransit,
or if you prefer, NServiceBus, which I'm not a big fan of, but which provides another level of enterprise support. If you choose to go with event-driven SOA you get async operations as a bonus.
With the middleware layer defined, it is time to define the API layer. I would not expose my services to the outside world, and if the middleware is event-based, the services within it can only respond to events placed on the bus. So I would use ASP.NET Web API with a REST interface to receive requests from the outside and, based on the request type, create the related message (command) and place it on the bus.
Way too high-level, but I hope it helps.

A public web API: What do developers prefer to consume?

We've got a bunch of data that we'd like to expose to the world, hosted on an ASP.NET MVC website. I'd like to ensure that we deliver it using technology that is easy for end developers to implement and not tied to any particular platform, rather than technology that is unpopular or incompatible with developers.
The kind of requests we expect are mainly to retrieve search results (not many parameters), but down the line we'd like to be able to provide catalogue lookups and the like, which may be more complex.
Bearing this in mind, what is the preferred means of doing this?
Windows Communication Foundation can be used to create both SOAP services (great if your consumers are businesses using Visual Studio/.NET or Java) and REST services (for people on other platforms). Those are the preferred means of exposing public APIs.
If you want maximum exposure, probably best to use the REST approach, since it is easier to consume from "web" languages like JavaScript. Microsoft has extensive resources on putting together a REST API using WCF.
Honestly, for the kinds of requests you say you need to handle, which all seem to be looking up data as opposed to modifying it, the difference is almost trivial - you can switch from SOAP to REST simply by changing a few attributes/configuration options and you could technically even host both at the same time using very little additional code. As long as you stick to WCF and don't use outdated technology like ASMX/WSE then you will be fine.
Reasons to use REST:
Consumable from almost anywhere, including JavaScript, RSS readers, etc. (see the sketch after this comparison);
It's popular (in use by Google, Twitter, etc.)
Supports many different data formats (JSON, Atom, etc.)
Reasons to use SOAP:
Standardized security protocol (encryption, non-repudiation, etc.)
Distributed transactions
Message Queuing
That's not an exhaustive list but it should give you an idea of who the target markets are for each. If you're hosting a very open, very public site designed to be consumed by anyone and everyone, go with REST. If the service is part of a business system and you need to guarantee reliability, security, and consistency of data, you'll want to go with SOAP. Choose the appropriate technology based on your target market.
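To make the "consumable from almost anywhere" point concrete, this is roughly all a REST+JSON consumer needs; a hedged Swift sketch against an invented search endpoint:

```swift
import Foundation

struct SearchResult: Codable {
    let id: Int
    let title: String
}

// No proxy generation or SOAP tooling: build a URL, issue a GET,
// decode the JSON body.
func search(for term: String) async throws -> [SearchResult] {
    var components = URLComponents(string: "https://example.com/api/search")!
    components.queryItems = [URLQueryItem(name: "q", value: term)]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return try JSONDecoder().decode([SearchResult].self, from: data)
}
```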
Create a RESTful API. As a developer who often consumes web services, it's what I would expect and prefer.
Many popular services (Digg, Twitter, Netflix, Google) are moving to REST over SOAP, so you would be wise to follow suit.
If you do create a REST API you should also create a WADL file. It's WSDL for REST. They're not well supported yet, but they're not hard to create, and they'll become more useful as support increases.
You will want to check out OData. Look at odata.org and live.visitmix.com/videos.
This will give you REST access, metadata support like in SOAP, and interoperability with the whole Office stack, and if you are using WCF Data Services you can implement it in a matter of hours, days at most.
Take a look at netflix.com; they have done it right (IMHO).

Separating presentation/web-services

Is it good practice to develop the web service and the website in two different languages, on two different servers? For example, right now I have a Java web service running on GlassFish and a Ruby on Rails presentation layer running on the same server.
I'd like to leave the web service on the same server but use Ruby 1.9, running in Passenger.
Is it a good idea? I don't have experience in web-app architecture.
If you write a contract-first web service that consumes and produces XML, you can talk to any client that can make an HTTP GET or POST request in the appropriate format. SOAP or REST, it doesn't matter.
I've written Java/Spring web services that started with an XSD. A Yahoo UI RIA client took the WSDL, made an HTTP POST to send the request document, and displayed the XML response in a nice data grid.
Technically, yes you can most certainly do that. That is one of the advantages of using WS. They are interoperable.
However, I would give some consideration to whether someone else who might maintain it has expertise in only one of the two platforms (RoR or Java). It is always best to ask :-)
In terms of the architecture of the system, yes, this is a "good practice". By good, I mean that it achieves the goals, does no harm, and enforces separation of concerns.
I've been developing on an architecture that has a similar structure. The user interface is .NET and uses Java web services. The web services are then responsible for all interaction with the persistence media, third-party components, etc.
I'd say in any system you should be working to abstract your user interface logic from your business logic. It's just good separation of concerns. Using web services to do that is just one way to achieve that goal. I'd recommend using web services in the case that you will re-use those business services in other use cases in your system.
One more thing: after using two different technologies for the UI and WS over the last 8 years, I've learned that most of the challenges are organizational, not technical. For example, it's harder to find new developers who have both of the skills you're looking for to maintain your app. You end up having to find an expert in one and then train them on the other technology.
It depends on how similar they are.
If your web service basically mirrors your website in functionality - then it makes a lot of sense to reuse existing code and thus to make them the same thing on the same server.
Note: this is not the same thing as entangling tiers, as your views are still separate from your business logic.
From the Ruby-on-Rails perspective, the "web service" and "website" are often interchangeable, as they are exactly the same code, with only the view template differing (HTML for the website, XML for the web service).
If you build with a RESTful architecture in mind from the beginning, then you can achieve this with the minimum of duplication and with all application layers correctly decoupled.
