Fusion Tables Experimental - google-fusion-tables

What exactly does it mean that Google's Fusion Tables is still "Experimental"? I am not familiar with any other Google services that are labeled as "Experimental". Does this mean that it's OK to invest time and money developing for this service, or does it mean that it will probably get swept up in the next spring cleaning?

Fusion Tables is provided by the Data Management team in Google Research. This means we do not provide enterprise-level support, and currently there is no way to purchase increased support, API quota, or storage.
Commercial use is allowed as long as it is within usage governed by the Google Terms of Service and Google APIs Terms of Service.
For non-commercial use, increased API quota can be requested from the Google API Console, and exceptions for storage quota and maximum mappable points can be requested by sending an email to googletables-feedback@google.com.

Related

Best method for protecting IP data downloaded to an iOS App?

I'm enhancing a commercial App which until now has used cloud AI models to analyse data and make predictions.
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP for our clients, and it is essential that we protect any data downloaded to a device from theft.
The App is iOS-only for now, and I was intrigued by WWDC 2020's Core ML update, which includes support for encrypting models. This would be ideal, but we can't use Core ML at the moment because its API doesn't support the methods our models require.
Nice to know though that this is a recognised issue with in-app ML model usage.
What are the best methods and available options in iOS (>11.0) right now that won't fall foul of encryption export laws or Apple's App Store rules, etc.?
Our models are JavaScript, which we run in a JavaScriptCore VM, with additional data files loaded from JSON string files.
My current thinking is to use something like iOS AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the Keychain. Decrypt the data strings in memory before loading them into the JS VM.
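Roughly what I have in mind, as a minimal sketch (all names are placeholders, and CryptoKit needs iOS 13+, so an iOS 11/12 target would need a CommonCrypto fallback):

```swift
import Foundation
import CryptoKit        // iOS 13+; iOS 11/12 would need a CommonCrypto fallback
import JavaScriptCore
import Security

/// Reads the AES key that was stored in the Keychain after login.
/// (Placeholder names throughout; nothing here is from an existing codebase.)
func loadKeyFromKeychain(account: String) -> SymmetricKey? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return SymmetricKey(data: data)
}

/// Decrypts an AES-GCM sealed model blob in memory and evaluates it in a JS VM.
func loadEncryptedModel(ciphertext: Data, account: String) throws -> JSContext? {
    guard let key = loadKeyFromKeychain(account: account) else { return nil }
    let box = try AES.GCM.SealedBox(combined: ciphertext)
    let plaintext = try AES.GCM.open(box, using: key)
    guard let script = String(data: plaintext, encoding: .utf8) else { return nil }
    let context = JSContext()
    context?.evaluateScript(script)   // the model only exists decrypted in memory
    return context
}
```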
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this.
The Data
The enhancement is moving the models onto the app for applications with no or limited network access.
These models represent significant IP for our clients, and it is essential that we protect any data downloaded to a device from theft.
From the moment you make the data/secrets public, in the sense that you include them with your mobile app binary or later download them to the device and store them encrypted, you need to consider them compromised. There is no bulletproof way around this; no matter what you try, you can only make the data harder to steal. With all the instrumentation frameworks available to introspect and instrument code at runtime, your encrypted data can be extracted from the function that decrypts it:
Decrypt the data strings in memory before loading them into the JS VM.
An example of a very popular instrumentation framework is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
The Private Key
My current thinking is to use something like iOS AES encryption: not hardwire the private key in the app, but instead pass it via HTTPS after a user logs in, storing it in the Keychain.
While not hard-coding the private key in the app is a wise decision, it doesn't prevent an attacker from performing a man-in-the-middle (MitM) attack to steal it, or from using an instrumentation framework to hook into the code that stores it in the Keychain. You may already be aware of this, but it's not clear from:
I can see the obvious weaknesses with this approach...
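One mitigation that at least raises the bar against a MitM while the key is in transit is certificate pinning, so that the app refuses TLS connections whose certificate doesn't match a copy it ships with. A minimal sketch, assuming a DER copy of the server certificate is bundled with the app (the file name is invented for illustration):

```swift
import Foundation
import Security

// Sketch only: rejects any TLS connection whose leaf certificate differs
// from the "pinned_cert.der" file shipped in the app bundle (hypothetical name).
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let localURL = Bundle.main.url(forResource: "pinned_cert", withExtension: "der"),
              let localData = try? Data(contentsOf: localURL) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        if SecCertificateCopyData(serverCert) as Data == localData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```

Bear in mind that pinning can itself be bypassed on a compromised device with the same instrumentation frameworks mentioned above; it raises the cost of an attack, it does not eliminate it.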
In my opinion, and as a side note, I think that you and the business first need to consider whether the benefit to the user of having predictions made locally on their device outweighs the huge risk taken by moving the data from the cloud onto the device. Data protection laws also need to be taken into consideration, because the fines when a data breach occurs can have a huge impact on the organization's future.
iOS Solutions
What are the best methods and available options in iOS (>11.0) right now that won't fall foul of encryption export laws or Apple's App Store rules, etc.?
I am not an expert in iOS, thus I cannot help you much here, other than recommending that you use as many obfuscation techniques and runtime application self-protections (RASP) as possible on top of the solution you have already devised to protect your data, so that you make an attacker's life harder; a small sketch of what such self-checks look like follows the definition below.
RASP:
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering.
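To give this some shape, here is a tiny and deliberately non-exhaustive sketch of two runtime self-checks that are common on iOS, a debugger probe and a jailbreak-artifact probe; commercial RASP products perform far deeper integrity checks than this:

```swift
import Foundation
import Darwin

/// True when a debugger is attached to the current process (P_TRACED flag).
func isDebuggerAttached() -> Bool {
    var info = kinfo_proc()
    var size = MemoryLayout<kinfo_proc>.stride
    var mib: [Int32] = [CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid()]
    sysctl(&mib, UInt32(mib.count), &info, &size, nil, 0)
    return (info.kp_proc.p_flag & P_TRACED) != 0
}

/// True when a well-known jailbreak artifact is present on the filesystem.
func looksJailbroken() -> Bool {
    FileManager.default.fileExists(atPath: "/Applications/Cydia.app")
}
```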
You can also try to use advanced biometrics solutions to ensure that a real user is present while the mobile app is being used, but bear in mind that the more skilled attackers will always find a way to extract the data to a command and control server. It's not a question of whether they will be able to, but of when it will happen, and when it happens it's a data breach, and you need to have planned ahead to deal with its business and legal consequences.
So after you apply the most suitable in-app defenses, you still have an issue left to resolve, which boils down to ensuring your API server knows what is making the request, because it seems you have already implemented user authentication to establish on behalf of whom the request is being made.
The Difference Between WHO and WHAT is Accessing the API Server
When downloading the data to the device, you need to consider how you will ensure that your API server is indeed accepting the download requests from what you expect, a genuine instance of your mobile app, not from a script, bot, etc., and I need to alert you that user authentication only says on behalf of whom the request is being made, not what is making it.
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract here the main takeaways from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAuth2 flows.
Think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
I see this misconception arise over and over, even among experienced developers, DevOps and DevSecOps engineers, because our industry is more geared towards identifying the who than the what.
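To make the distinction concrete, the most naive "what" signal is a request signature computed by the app itself over each call. A sketch, assuming a shared secret delivered to the app (which, as discussed above, can itself be extracted from a running app, and that is precisely the gap that Mobile App Attestation tries to close):

```swift
import Foundation
import CryptoKit   // iOS 13+

// Illustrative only: an HMAC over the request path and a timestamp, keyed
// with a secret the app holds. The API server recomputes and compares it
// to gain (weak) confidence about WHAT is calling, on top of the user
// token that says WHO is calling.
func signedRequest(url: URL, appSecret: Data) -> URLRequest {
    var request = URLRequest(url: url)
    let timestamp = String(Int(Date().timeIntervalSince1970))
    let message = Data((url.path + timestamp).utf8)
    let mac = HMAC<SHA256>.authenticationCode(for: message,
                                              using: SymmetricKey(data: appSecret))
    request.setValue(timestamp, forHTTPHeaderField: "X-Timestamp")
    request.setValue(Data(mac).base64EncodedString(), forHTTPHeaderField: "X-Signature")
    return request
}
```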
How Others Have Approached It
I can see the obvious weaknesses with this approach and would be keen to hear how others have approached this.
As I said previously, I am not an expert in iOS and I don't have more to offer you than what I have already mentioned in the iOS Solutions section, but if you want to learn how you can lock your mobile app to the API server so that it only replies, with a very high degree of confidence, to requests from a genuine instance of your mobile app, then I recommend you read my accepted answer to the question How to secure an API REST for mobile app?, specifically the sections Securing the API server and A Possible Better Solution, where you will learn how the Mobile App Attestation concept may be a possible solution to this problem.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the amazing work from the OWASP foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.

GDPR liability of IoT data platforms

Thinking of developing IoT applications that might contain personally identifiable data on MS Azure IoT Hub or AWS. Although these provide guidance as to how to make the data GDPR-compliant, I couldn't find information about their role in the process.
Are they data controllers? Data processors? Obviously, as far as GDPR is concerned, they might have some liability over personal data.
Can you please point me to some Ts&Cs/disclaimers which might cover MS/AWS liability?
Thanks

What are the main benefits of Azure Digital Twin?

I was going through Azure's new offering, Digital Twins, and trying to understand what makes it different and unique, such that organizations will use it rather than invest in developing their own DT systems.
I also posted the question on https://channel9.msdn.com/Shows/Internet-of-Things-Show/Getting-started-with-Azure-Digital-Twins?term=%22Azure%20Digital%20Twins%22&lang-en=true and got a response pointing to https://techcommunity.microsoft.com/t5/azure-iot/at-the-end-of-the-day-what-is-digital-twins-about/m-p/759502#.XS9eIPFF0Pc.link
All the links explain the functional details, but nowhere is it mentioned what the core components for building ADT are, what the internal details are, or how much effort it takes to build one compared to using the Azure offering. Obviously, it is a PaaS offering, so it will always be beneficial to use it rather than create one, but what components would be needed to create one?
Thanks
Azure Digital Twins provides its own Swagger and API and a Spatial Graph.
Those are the benefits of Azure Digital Twins.
In my company we made a POC of ADT, and now we are looking to implement another solution to replace it, because in the Preview version it has some limits (100 messages/second is not enough for us).
But to replace it we will need to completely redesign the Spatial Graph and redefine every single item that Azure Digital Twins allows you to use (Devices/Sensors/Spaces).
It's really complicated to do that.
I think that the solution provided by MS is clearly great for working with IoT objects and smart buildings, because of that Spatial Graph and API.
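For context, in the preview the spatial graph was populated through ADT's management REST API, one object at a time, which is why rebuilding it elsewhere is so costly. The sketch below shows the general shape of such a call; the instance URL and payload fields are assumptions for illustration, not the exact preview contract:

```swift
import Foundation

// Illustrative only: endpoint path and body fields are assumed, not the exact
// ADT preview API contract. The point is that every Space/Device/Sensor is an
// individual object you must create (and re-create) via REST.
func createSpace(name: String, parentId: String?, token: String) {
    let url = URL(string: "https://yourInstance.yourRegion.azuresmartspaces.net/management/api/v1.0/spaces")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    var body: [String: Any] = ["name": name]
    if let parentId = parentId { body["parentSpaceId"] = parentId }
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)
    URLSession.shared.dataTask(with: request) { _, _, _ in
        // One call per graph node: a large spatial graph means many such requests.
    }.resume()
}
```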

Mobile API design - flexibility vs performance

I work for a startup where we have an iOS product that interfaces with a backend API. Initially, as the product was being developed, the API was designed to be flexible so that the client could always access the data it needed on a given view, especially as the views were evolving.
As we’re starting to scale, we’re now finding that there are a lot of performance bottlenecks due to the amount of data we’re passing to the client — some of which is unneeded on a given endpoint.
My question is: in the case of a private API, where you’re also building the only client that will consume the API, is it common (or acceptable) to couple the front end requirements directly to what the backend serves, so that the backend is only providing exactly what the client needs for a given endpoint / view?
Yes. The goal of an API is to provide a reasonable service to all the clients you want to support. If you only have one client, it's both common and acceptable (desirable, even) to optimize your API to support the one client.
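To illustrate, a hedged sketch of what that coupling can look like from the iOS side: a per-screen endpoint whose response type carries only the fields that screen renders. The endpoint URL and all names are invented for illustration:

```swift
import Foundation

// One response type per view: the backend returns exactly these fields and
// nothing more, so payload size tracks what the screen actually shows.
struct ProfileScreenResponse: Codable {
    let displayName: String
    let avatarURL: URL
    let unreadCount: Int
}

func loadProfileScreen(completion: @escaping (ProfileScreenResponse?) -> Void) {
    let url = URL(string: "https://api.example.com/ios/v1/profile-screen")!   // hypothetical
    URLSession.shared.dataTask(with: url) { data, _, _ in
        completion(data.flatMap { try? JSONDecoder().decode(ProfileScreenResponse.self, from: $0) })
    }.resume()
}
```

The trade-off is that every view change now implies a backend change, which is usually acceptable when the same team owns both sides.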

Windows Azure for web developers vs Amazon EC2

I just watched the Windows Azure intro video and it left me feeling like it was a front-end shell for hosted IIS instances. Can anyone who knows more (possibly someone who was part of the beta) shed light on why you would use this vs. EC2?
It seemed easy enough, but the video really didn't give specifics on how it works, why it works, or why you would use it vs. the traditional solutions out there.
According to the vision (and I can only talk about the vision here since the product isn't really out yet), here are a couple of reasons you might consider Azure over EC2.
Azure includes built-in load balancing abilities. If you want to do that in Amazon, you have to roll your own solution or buy a third-party solution like www.RightScale.com.
Azure-friendly-coded apps can be delivered internally or in Microsoft's cloud. If you write apps that have confidential information like financial data or health care data, not all of your clients will be willing to put their data in the public cloud. In that case, they can deploy your apps internally on Windows. That's sold as a skillset win, because you can go from public to private projects. Don't get me wrong - if you master Amazon EC2 development, then you can deploy your apps internally with Linux virtual servers in your datacenter, but it's not as turnkey. (Hard to describe a tech preview as turnkey when it's not licensed yet, hahaha.)
Having said that, it wasn't clear that the load balancing functionality is included in the box with internal deployments. If you have to do a combination of Azure plus ISA Server, that'll be a tougher deployment and management sell.
AppHarbor is a .NET cloud hosting environment that sits on Amazon EC2. The nice thing is they offer a free plan (much like Heroku does) so you can check it out yourself with very little friction.
My company is using Amazon EC2 now and I am down at the PDC watching the details on Azure unfold. I have not seen anything yet that would convince us to move away from Amazon. Azure definitely looks compelling, but the fact is I can now utilize Windows and SQL Server on Amazon with SLAs in place. Ray Ozzie made it clear that Azure will be changing A LOT based on feedback from the developer community. However, Azure has a lot of potential and we'll be watching it closely.
Also, Amazon will be adding load balancing, autoscaling and dashboard features in upcoming updates to the service (see this link: http://aws.amazon.com/contact-us/new-features-for-amazon-ec2/). Never underestimate Amazon as they have a good head start on cloud computing and a big user base already helping refine their offerings. Never underestimate Microsoft either, as they have a massive developer community and global reach.
Overall I do not think the cloud services of one company are mutually exclusive from one another. The great thing is that we can leverage all of them if we want to.
Microsoft should offer up the ability to host Linux based servers in their cloud. That would really turn the world upside down!
Well, it's more than just web services. It will also allow you to host other types of connected applications. Plus, it provides integrated access to other MS software in the cloud, i.e. SharePoint, Exchange, CRM, and SQL Data Services, and it will allow you to fully customize and extend those offerings in the same way you would be able to customize and extend them if they were hosted on-premises.
At the Architect Insight Conference last year they mentioned that they have started to alter core server products to deal with the large-scale failover environment, which is very interesting to me at least.
It's a bunch of stuff that is coming into the cloud. I think of this as more of a Platform in the Cloud:
SQL Server
CRM
MOSS
Exchange
BizTalk
Geneva (identity)
The terms that are mentioned here are "STORE" and "COMPUTE".
For me this gets really interesting around the idea of an Internet Service Bus.
It is also about changing the development workflow process:
Oslo DSLs and Quadrant - moving to a model-driven view
Entity Framework - giving developers a strongly typed model in code at the click of a button
ADO.NET Data Services and Dynamic Data web templates using MVC
Then, with the Azure templates and the new "Web Roles", moving to deployment of the applications to the cloud.
Then, for the admins, one-click provisioning of servers is awesome.
On the data privacy rules, which are the one big elephant in the room and have already been mentioned: typically there is a ruling in each country about information security.
UK RIPA
US Patriot Act
Are these really conceptually different? And these two countries do share information anyway... IMHO, legally they are different, but to a customer both laws give access to customer data; it's just a question of who.
At this point, information on Windows Azure is pretty scarce. I was in the keynote during the announcement, and my best guess at this point is that they're trying to provide a more extensive virtualization environment than simply hosted IIS instances.
At this point, though, I can't say more than that.
We use S3 for storage very successfully, and I've always kept an eye on EC2 for Windows and SQL Server support. So now that these are available, I dug further.
I was pretty worried when I read this:
http://www.brentozar.com/archive/2008/11/bad-storage-performance-on-amazon-ec2-windows-servers/
Perhaps, as we're developing what will hopefully become a very popular website, we should be considering the new data store models - Azure's or Amazon's SimpleDB. Hmmmmm - complete rewrite!
The major difference going forward is that Amazon EC2 has a free tier starting today, Nov 1. Check this out:
http://www.buzzingup.com/2010/10/amazon-announces-free-cloud-services-for-new-developers/
