Secrets in Azure Stream Analytics - azure-keyvault

In our Stream Analytics Job we have some constant values that are required for further computations. Those are considered "secret" by our customer, so it would be good to not have them set directly in the Query or the User Defined Function we're using. Is there any best practice how to deal with these, e.g. can we somehow retrieve these values from Azure Key Vault?

Unfortunately, Azure Stream Analytics doesn't support Azure Key Vault bindings.
I would suggest voting up these ideas submitted by other Azure customers:
https://feedback.azure.com/forums/270577-stream-analytics/suggestions/35328418-enhance-security-for-asa-managed-services-identit
https://feedback.azure.com/forums/270577-stream-analytics/suggestions/40530247-azure-key-vault-as-reference-data-input
All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.

You could check whether the Reference Data option in Stream Analytics could be the place where you store the constants and update them as often as needed. If the concern is having control over how your queries get encrypted while using ASA, you can use your own storage account to store all private data assets related to the job, and then encrypt that storage account in whatever way you see fit.
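To make the reference-data idea concrete, here is a minimal Node.js/TypeScript sketch of an out-of-band refresh job. Everything in it is an assumption for illustration: the vault URL, secret name, storage account, container, and blob path, plus the use of the @azure/identity, @azure/keyvault-secrets, and @azure/storage-blob packages. It reads the constants from Key Vault and publishes them as the JSON blob that the ASA job consumes as reference data:

```typescript
// Sketch: copy "secret" constants from Key Vault into the blob that the
// Stream Analytics job uses as reference data input. All names (vault URL,
// storage account, container, blob path, secret name) are illustrative.
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";
import { BlobServiceClient } from "@azure/storage-blob";

async function refreshReferenceData(): Promise<void> {
  const credential = new DefaultAzureCredential();

  // Read the constant from Key Vault instead of hard-coding it in the query.
  const secrets = new SecretClient("https://my-vault.vault.azure.net", credential);
  const threshold = (await secrets.getSecret("computation-threshold")).value;

  // Publish it as a JSON reference-data blob in your own storage account.
  const blobService = new BlobServiceClient(
    "https://mystorageacct.blob.core.windows.net",
    credential
  );
  const blob = blobService
    .getContainerClient("refdata")
    .getBlockBlobClient("constants/constants.json");
  const body = JSON.stringify([{ name: "threshold", value: threshold }]);
  await blob.upload(body, Buffer.byteLength(body));
}

refreshReferenceData().catch(console.error);
```

The ASA query can then JOIN against that reference data input, so the secret values never appear in the query text or the UDF itself.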

Related

Does Firebase support read-only collections, even from server?

I've been integrating Firebase for a client's back-end dashboard system, and I'm generating timestamps on activities which, for FINRA/SEC purposes, they would like to be immutable.
Users have zero control over the data; all of it is handled by API calls to Node, but through Node the server is given full access permissions. Does Firebase provide a way to create a collection such that not even Node nor the Web UI can make changes to the info? Similar to the way an S3 vault lock works.

disadvantages of storing secrets in Blob Storage

My current customer has secrets stored in Blob Storage, and we want to propose that they migrate to Key Vault. May I know what the benefits of storing secrets in Key Vault are, as compared to Blob Storage?
When I read the documentation, Key Vault uses HSMs to protect the keys and secrets, but Blob Storage also uses encryption, which is also secure. So what are the other advantages?
In general they look very similar; however, I'd say the most important difference between the two is the authorization model.
Access to a storage account is done with one of the two available connection strings/keys. Access to a Key Vault can be assigned directly to users or groups (from AAD), and access to resources within the Key Vault can be configured with more granularity. Next to that, it is very easy to limit which types of resources within Azure may or may not retrieve data from a Key Vault, reducing the attack surface.
Storage accounts do have AAD integration, currently in preview, but from what I gather it mostly focuses on the Azure file share functionality (https://learn.microsoft.com/en-us/azure/storage/files/storage-files-active-directory-overview).
Another nice differentiator is definitely the integrations that are already available when using Key Vault (e.g. retrieving Azure DevOps secrets directly from a Key Vault, or automatically retrieving certificates for VMs).
FYI, I'm by no means a Key Vault expert, but that's just my 2 cents. :)
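To illustrate that authorization-model difference in code, here is a hedged Node.js/TypeScript sketch (every resource name and environment variable in it is made up): the storage account is opened with a connection string, which is an all-or-nothing shared key, while the Key Vault call authenticates as a specific AAD identity whose access can be granted and revoked with much finer granularity.

```typescript
// Illustrative contrast of the two authorization models (all names made up).
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// Blob Storage via connection string: whoever holds either account key gets
// the whole storage account; there is no per-user or per-item granularity.
const blobs = BlobServiceClient.fromConnectionString(
  process.env.STORAGE_CONNECTION_STRING!
);

// Key Vault via AAD: the caller authenticates as a specific user, group, or
// managed identity, and that identity's access to the vault can be granted,
// scoped, audited, and revoked without rotating any shared key.
const vault = new SecretClient(
  "https://my-vault.vault.azure.net",
  new DefaultAzureCredential()
);

async function main(): Promise<void> {
  console.log(`blob access covers the whole account: ${blobs.accountName}`);
  const secret = await vault.getSecret("db-password"); // logged per identity
  console.log(`fetched secret version ${secret.properties.version}`);
}
main().catch(console.error);
```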

Azure Blob Storage authorization with SAS

I have a web application (ASP.NET MVC) which uses Azure Blob Storage for storing documents and images. Each user has specific access rights to the blobs, and these rights are stored in the web application's database.
Currently I have a quick temporary solution which uses the web application as a middle layer: it runs the authorization check, and if the client has read access to the blob, the blob is first retrieved from Azure and then delivered to the client. This is of course not the optimal way of doing it, for many reasons.
I have started to rebuild this part using SAS (Shared Access Signatures), but I can't find a good source for setting up a system that will scale well as the number of users and files grows. I am expecting around 100 users and around 100,000 blobs.
As I see it I have two options.
1) All files have one signature stored in the web application's database, and this signature is used for all users who have access to the file. This would be the easy way to do it, but if a user for some reason no longer has access to the file, they will still be able to reach it if they kept the link from an earlier access.
2) All files have a specific signature for each user who has access to the file. This makes it easy to revoke access to files, but the number of signatures will be massive; will that have any side effects?
Are there any more options?
Any thoughts on this are greatly appreciated!
Rather than having a SAS for each user, it would be better to group the files by role and map users to roles; that scales easily regardless of the number of users.
Also, giving users direct access to blobs is not recommended, as you want to distribute your blob content through your application. So grant access to the application, scoped to the role of the user.
See the article below for generating a two-minute SAS, which expires after two minutes so that users holding the link do not retain access to the image for long.
http://www.dotnetcurry.com/windows-azure/901/protect-azure-blob-storage-shared-access-signature
Hope this helps. :)
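A minimal sketch of that short-lived-SAS pattern with the current @azure/storage-blob package follows; the account name, the way the key is supplied, and the userMayRead() hook are all illustrative. The web application keeps authorizing against its own database, but instead of proxying the bytes it hands out a per-request, read-only SAS that expires after two minutes:

```typescript
// Sketch: per-request, short-lived, read-only SAS issuance. Account name,
// key handling, and the userMayRead() authorization hook are illustrative.
import {
  BlobSASPermissions,
  generateBlobSASQueryParameters,
  StorageSharedKeyCredential,
} from "@azure/storage-blob";

const accountName = "mystorageacct";
const credential = new StorageSharedKeyCredential(
  accountName,
  process.env.STORAGE_ACCOUNT_KEY!
);

// Hypothetical check against the web application's own database.
declare function userMayRead(userId: string, blobName: string): boolean;

function getDownloadUrl(userId: string, containerName: string, blobName: string): string {
  if (!userMayRead(userId, blobName)) throw new Error("Forbidden");

  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),      // read-only
      expiresOn: new Date(Date.now() + 2 * 60 * 1000), // two minutes
    },
    credential
  ).toString();

  return `https://${accountName}.blob.core.windows.net/${containerName}/${blobName}?${sas}`;
}
```

Because every link dies on its own after two minutes, there are no stored signatures to manage at all, which sidesteps the scaling concerns of both option 1) and option 2).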

Google Cloud Storage: is it OK to expose an API key?

I'm developing an application which lets users upload pictures, and I'd like to use Google Cloud services to store them. I am creating a unique GUID for each image in the database and would like to store the images in the cloud under that name. It makes sense to me to make an AJAX request for a GUID and then upload the image from the same page directly to Google Cloud Storage.
https://github.com/GoogleCloudPlatform/storage-getting-started-javascript/blob/master/index.html
As shown in this example.
My first question is: should I be sending this to my back end (C# code) and uploading it from there, or is this the correct approach?
And my second question, if this is the correct approach: wouldn't exposing my details like that in JavaScript allow other people to upload from outside my application as well?
An API key, by itself, identifies a call as being associated with a certain project for purposes of billing. It's only necessary for anonymous calls. An API key does not grant any sort of authorizations. If there's an object in a bucket in your project that only your project members can see, the API key won't give anyone permission to read it.
That said, it's not a great idea to share your API key if you can help it, and if you need to share it, you should lock it down as much as possible. API keys can be limited to use with only certain IP addresses, only with certain web referrers (for instance, it will only work with JavaScript clients on www.yoursite.com), or only when run from a particular iPhone or Android app. These precautions aren't cryptographically fool-proof (there's no reason a hacker couldn't spoof a referer header), but they do make them pretty much useless for someone else who just wants to paste an API key somewhere to enable a web app and doesn't want to pay for it themselves.
The problem with using the JavaScript client approach for your application is that individual users would end up uploading objects either completely anonymously or with their own Google accounts. Neither is great, since the anonymous option would basically require you to create a bucket with anonymous writes enabled, and you don't want to do that.
There is a great approach to letting users upload pictures, though: signed URLs. Signed URLs allow your server to securely sign, in advance, a request to upload an object with your credentials. This is your best option for letting anonymous end users securely upload objects to your buckets.
Documentation on signed URLs: https://cloud.google.com/storage/docs/accesscontrol#Signed-URLs
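Here is a minimal server-side sketch of that approach with the @google-cloud/storage Node.js package (the bucket name and object naming scheme are illustrative): the server mints a short-lived V4 signed URL for an upload, and the browser then PUTs the image bytes directly to that URL, so no credential ever ships in your JavaScript.

```typescript
// Server-side sketch (Node.js, @google-cloud/storage); the bucket name and
// object naming scheme are illustrative.
import { randomUUID } from "crypto";
import { Storage } from "@google-cloud/storage";

const storage = new Storage(); // authenticates with the server's credentials

async function createUploadUrl(): Promise<{ objectName: string; url: string }> {
  const objectName = `${randomUUID()}.jpg`; // the GUID you store in your DB
  const [url] = await storage
    .bucket("my-picture-bucket")
    .file(objectName)
    .getSignedUrl({
      version: "v4",
      action: "write",
      expires: Date.now() + 15 * 60 * 1000, // link is valid for 15 minutes
      contentType: "image/jpeg",            // upload must use this type
    });
  return { objectName, url };
}
```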

CloudKit - no server-side logic?

With CloudKit, you can focus on your client-side app development and let iCloud eliminate the need to write server-side application logic. CloudKit provides you with Authentication, private and public database, structured and asset storage services — all for free with very high limits.
You cannot upload any code to run on Apple's servers?
I've heard it being compared to Google App Engine and other cloud computing platforms, but without the ability to run your own code, isn't the whole thing pretty limited and not really comparable?
For example, if I want to build a news app which periodically pushes stories on topics that the user is interested in, then this can't be done just using CloudKit, because I would need scheduled jobs and data processing on the server.
Any thoughts?
Server-side
As you said, CloudKit doesn't allow server-side code.
But there are possibilities.
Crons
You don't want to connect to the iCloud Dashboard every day in order to perform the push by adding a record. One solution here is to code an app on a Mac server (I guess the Mac mini as a server will become more popular with CloudKit) that adds a new Daily CKRecord every day.
Subscriptions
The subscription concept is that the client registers for specific updates. You can create a record type called Daily, for instance, and make users register to it. You should check the Apple documentation and the WWDC14 videos (even if subscriptions are not detailed there, it's a good starting point).
The good thing is that push notifications are tied to the subscription concept. So basically you say: send me a notification for each new CKRecord of type Daily that is added.
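As a hedged sketch of that "new Daily record" subscription using CloudKit JS (which, per the WWDC15 update below, came later): the dictionary shape follows my reading of the CloudKit Web Services reference, so treat the exact field names, along with the container identifier and token, as assumptions to verify against the current docs.

```typescript
// Hedged sketch with CloudKit JS; verify the subscription dictionary's exact
// field names against the CloudKit Web Services reference. The container ID
// and API token are placeholders.
declare const CloudKit: any; // global provided by Apple's cloudkit.js script

CloudKit.configure({
  containers: [{
    containerIdentifier: "iCloud.com.example.news",
    apiTokenAuth: { apiToken: "YOUR_API_TOKEN", persist: true },
    environment: "development",
  }],
});

const database = CloudKit.getDefaultContainer().publicCloudDatabase;

// "Send me a notification for each new CKRecord of type Daily added."
database
  .saveSubscriptions([{
    subscriptionType: "query",
    query: { recordType: "Daily" },
    firesOn: ["create"],
    notificationInfo: { alertBody: "A new Daily story is available." },
  }])
  .then((response: any) => {
    if (response.hasErrors) console.error(response.errors);
  });
```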
BaaS party
What is the point of using CloudKit (vs. Parse and others)?
Price: CloudKit has really nice pricing
Ready to go: two clicks inside Xcode and you are ready to go
User consistency: you get free user login across all of a user's devices through their iCloud account, with a very good privacy system, and you can model relationships between users with a smart system.
But:
You are stuck on the Apple platform. We don't even know if we could export the data.
Only data-centered for now (no server-side code)
The CloudKit dashboard is too limited
The future
CloudKit is still pretty new. At WWDC, some of the people behind it gave me to understand that they are still working on it heavily. My bet is that they are working on two important points:
Server side code execution through remote scheduled tasks
CloudKit for Analytics (Visualization side)
Edit: the Apple people are fully aware of, and concerned about, the lack of web access to the data. That means it may one day be accessible from other platforms. I read in a comment that Apple probably would have bought Parse if CloudKit weren't better; AFAIK they did try to buy Parse (an acqui-hire, it's said, but we don't really know).
Update WWDC15
CloudKit is now available in JS, and some dashboards are available now. Wait and see.
Update February 2016
CloudKit Now Supports Server-to-Server Web Service Requests
Web Services Reference
In some cases we do not need server-side logic, and just storing static data can cover the whole usage scenario.
In those cases it is very helpful to have freely accessible storage where you can keep things, and CloudKit provides exactly that, rather than a full service platform.
Yes, it is limited, but it can still be useful for some people. For example, your case actually can be supported by CloudKit: though CloudKit is just static storage, it supports subscriptions, which monitor a set of conditions and push event notifications to the client. It's fortunate that the one background-job feature CloudKit does support is just what you need.
Anyway, if you need more, then you might need to consider full-fledged servers. Usually, even simple web services with simple server-side code execution support are also limited.
You cannot upload any code to run on Apple's servers?
You can and you can't. You can't upload code or SOAP-based web services to the server; instead, you can store observers, called subscriptions, on the server.
whole thing pretty limited and not really comparable?
I would say that in CloudKit, as in MBaaS generally, the client communicates with the server through a narrower, more robust interface: you cannot upload an exotic web service that parses XML, manipulates the database, and triggers push notifications based on the result, but the RESTful architecture lets you perform the four basic operations on the data store, and with subscriptions the client can be notified about INSERT / UPDATE / DELETE operations performed on tables.
I think MBaaS is just the next step in the evolution of client-server architecture. At first it seems limiting, but you can do everything you could in the SOAP-based web services world. Development is extremely fast, scalable, and comfortable, and it is easier to control things like permissions and setup; maintaining the server and meeting security needs takes almost no effort.
Believe it or not, you can actually get REALLY far with this approach.
I've not used CloudKit, but I can describe for you my application stack:
AngularJS (or your favorite client side HTML rendering framework): A single page will host a series of templates/controllers selected by the router and driven by users changing the anchor to select which page they're on.
Firebase.io (or your favorite cloud storage): Any dynamic data goes into the cloud document store. The controller needs to load the data and render the template on the client and, when the data changes, send the data back. This also provides authentication and authorization, since you can limit access to the data.
Now you need a place to serve the HTML/CSS/JS/images... which requires no 'server side code execution', just a web server where you can put the assets.
Using this technique you could store all of the user's topics in the database for that user and, when the page loads, aggregate all the sources for those topics (also stored in the database) completely client side. There's nothing in your example application that actually requires server-side execution, as far as I can see, so long as you have cloud storage that provides authentication and authorization services, plus a 'dumb' web server for serving up the static assets.
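A minimal sketch of that load-and-listen loop with the modern modular Firebase Web SDK (the project config, the users/{uid}/topics data layout, and renderTopics() are all illustrative):

```typescript
// Sketch of the load-and-listen loop with the modular Firebase Web SDK.
// The project config, the users/{uid}/topics layout, and renderTopics()
// are illustrative.
import { initializeApp } from "firebase/app";
import { getAuth, onAuthStateChanged } from "firebase/auth";
import { getDatabase, onValue, ref } from "firebase/database";

declare function renderTopics(topics: string[]): void; // client-side render

const app = initializeApp({ /* your Firebase project config */ });
const db = getDatabase(app);

onAuthStateChanged(getAuth(app), (user) => {
  if (!user) return; // security rules gate every read by auth state

  // Load this user's topics and re-render whenever they change.
  onValue(ref(db, `users/${user.uid}/topics`), (snapshot) => {
    renderTopics(Object.keys(snapshot.val() ?? {}));
  });
});
```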
CloudKit isn't a full-fledged web hosting service. Instead, it's an SDK for iCloud. You shouldn't be putting a web site up there, just storing user data that you may want to use in multiple applications or platforms.
iCloud APIs enable your apps to store app data in iCloud, keeping your apps up to date automatically. Use iCloud to give your users a consistent and seamless experience across iCloud-enabled devices.
