Hosting node pools in a different subscription than the control plane? - azure-aks

We received a subscription with additional credits from Microsoft through the Microsoft Partner program.
Now we'd like to use it and reduce operational costs by hosting our AKS node pools in that subscription.
However, it seems moving an existing AKS cluster/node pool isn't supported. Also, I couldn't find anything in the Azure CLI documentation.
https://learn.microsoft.com/en-us/azure/aks/faq#can-i-movemigrate-my-cluster-between-azure-tenants
Any idea?

Related

How can we compare the services of different cloud service providers and decide which one is best for us as a company?

For example, my company requires 4 VMs, 2 TB and 256k memory. Which cloud service provider is better for me among AWS, Azure and Google and why?
Do you care about data location, price, security, network, or what?
If you give no information except some random server specs that even say "256k memory", maybe you should just get a cheap server from Hetzner or another bulk VPS provider.

AML Notebook VM auto shutdown policy

Can an auto-shutdown policy be set to turn off a Notebook VM, or does the user need to shut down the Notebook VM manually?
In the meantime, a team of MSFT architects developed the following Azure Samples solution to fill this gap:
https://github.com/Azure-Samples/AzureMLResourceGovernance
It uses Logic Apps to orchestrate Container Instances to perform Azure ML resource governance related tasks (such as scheduled shutdowns).
Auto-shutdown is currently not enabled for Compute Instances or Notebook VM but it is on our road map. Details will be provided in Azure Updates when this feature is in preview, in development, or becomes available. Thanks.
Currently auto-shutdown is on the roadmap. Workaround: you can see and change settings for the underlying IaaS VM, so if you use Azure VM auto-shutdown (or an Automation Runbook) to stop it, start the VM from the Azure VM resource blade first, then restart it from the Azure ML workspace UI.
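If you want to script that workaround, the underlying IaaS VM can also be stopped and started programmatically. Below is a minimal sketch using the Azure SDK for Python (azure-identity plus a recent azure-mgmt-compute with the begin_* methods), assuming the underlying VM is visible in your own resource group as the workaround above describes; the subscription ID, resource group, and VM name are hypothetical placeholders.

# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"       # hypothetical
RESOURCE_GROUP = "my-ml-resource-group"     # hypothetical
VM_NAME = "my-notebook-vm"                  # hypothetical

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def stop_notebook_vm():
    # Deallocate releases the compute so the VM stops accruing compute charges.
    compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

def start_notebook_vm():
    # Per the workaround above: start the IaaS VM first, then restart it from
    # the Azure ML workspace UI so the workspace sees a consistent state.
    compute.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()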
Azure Notebooks starts the underlying virtual machine whenever you run a notebook or other file. The server automatically saves files and shuts down after 60 minutes of inactivity. You can also stop the server at any time with the Shutdown command (keyboard shortcut: h). Ref: https://learn.microsoft.com/en-us/azure/notebooks/configure-manage-azure-notebooks-projects.
Azure Machine Learning Compute Instance Auto shutdown is now in private preview - contact your local Microsoft representative to have it enabled.
Now there is a new Auto-Shutdown preview feature for the compute instance in Azure Machine Learning Service. More details here
Make sure to enable this preview feature as shown in the image below.
Then check the compute instance's Schedules option to see the new feature, as shown below.
Note also that we can control the auto-shutdown time at the subscription level using a built-in policy. Details here.

Cloud Run Inter Zone Egress

I have a question on inter-zone egress charges on Google Cloud Run (managed). As I understand there is no control over which zones Cloud Run chooses. So potentially when deploying several microservices talking to each other, there could be significant charges.
In Kubernetes this can be alleviated via service topology (preferring the same zone or even the same host if available). Is there any way to achieve this with Cloud Run?
https://kubernetes.io/docs/concepts/services-networking/service-topology/
According to Cloud Run pricing and internet egress pricing, the cost stays the same regardless of whether the apps are in the same zone or not.
Now if you plan to have heavy traffic between your apps, you should consider a different setup. Either GKE or Cloud Run for Anthos will allow you to set up communication between your apps through internal IP addresses, which is free of charge assuming they are in the same zone. Refer to this table.

Sharing a graph database between Microservices

Is there any way to share a Neo4j / AWS Neptune graph database between microservices while restricting access to specific parts of the graph to only a specific microservice? If so, will there be any performance impact?
In Amazon Neptune, there is no way to have ACLs for a portion of a graph at the moment. You can have IAM users who have full access to a cluster or no access at all (Allow All or Deny All). You would need to handle this at the application layer. Fine-grained access control would be a good feature to have, so you may want to place a feature request for that (via AWS Forums, for example).
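To illustrate the "handle it at the application layer" point, one common convention is to give each microservice its own connection and only let it start traversals from vertices carrying its own label. A minimal sketch with gremlinpython, assuming a Gremlin workload; the endpoint and the 'billing' label are hypothetical placeholders, not anything Neptune enforces for you.

# pip install gremlinpython
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Hypothetical Neptune endpoint; in practice each microservice reads its own from config.
NEPTUNE_ENDPOINT = "my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com"

conn = DriverRemoteConnection(f"wss://{NEPTUNE_ENDPOINT}:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Application-layer convention: this service only touches vertices with its own label.
SERVICE_LABEL = "billing"   # hypothetical label for this microservice's slice of the graph

def vertices_for_this_service(limit=10):
    return g.V().hasLabel(SERVICE_LABEL).limit(limit).valueMap(True).toList()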
If you rule out access control, and the only thing you need is to keep the microservices from impacting each other, then you can create read replicas and use them in your microservices (whether sharing a database across microservices is a good choice or not is a separate discussion). There are two approaches (a short endpoint-discovery sketch follows them):
Add enough replicas to your cluster and use the cluster-ro (reader) endpoint in your read-only microservices. All microservices would share the read replicas, but with DNS round robin.
Add replicas for various use cases, and then use specific instance endpoints with specific microservices. The microservices would not impact each other; however, a drawback of this approach is that an instance can get promoted to master in the event of a crash, and that may be something you'd need to handle or be ready for.
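As a rough sketch of how the two options differ, here is a hedged boto3 example that looks up the cluster's shared reader endpoint (option 1) and the per-instance endpoints (option 2); the cluster identifier is a hypothetical placeholder.

# pip install boto3
import boto3

neptune = boto3.client("neptune")

CLUSTER_ID = "my-neptune-cluster"   # hypothetical cluster identifier

# Option 1: the shared cluster-ro (reader) endpoint, DNS round-robin across replicas.
cluster = neptune.describe_db_clusters(DBClusterIdentifier=CLUSTER_ID)["DBClusters"][0]
reader_endpoint = cluster["ReaderEndpoint"]

# Option 2: dedicated per-instance endpoints, one per microservice or use case.
instance_endpoints = {}
for member in cluster["DBClusterMembers"]:
    instance_id = member["DBInstanceIdentifier"]
    instance = neptune.describe_db_instances(DBInstanceIdentifier=instance_id)["DBInstances"][0]
    instance_endpoints[instance_id] = instance["Endpoint"]["Address"]

print("shared reader endpoint:", reader_endpoint)
print("per-instance endpoints:", instance_endpoints)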

Is It Possible to Apply SQS Limits for IAM Users?

I'm currently working on a project which has a large amount of IAM users, each of whom need limited access to particular SQS queues.
For instance, let's say I have an IAM user named 'Bob' and an SQS queue named 'BobsQueue'. What I'd like to do is grant Bob full permission to manage his queue (BobsQueue), but I'd like to restrict his usage such that:
Bob can make only 10 SQS requests per second to BobsQueue.
Bob cannot make more than 1,000,000 SQS requests per month.
I'd essentially like to apply arbitrary usage restrictions to this SQS queue.
Any ideas?
Off the top of my head, none of the available AWS services offers resource usage limits at all, except where this is built into the service's basic modus operandi (e.g. Provisioned Throughput in Amazon DynamoDB), and Amazon SQS is no exception, insofar as the Available Keys supported by all AWS services that adopt the access policy language for access control currently lack such resource limit constraints.
While I can see your use case, I think something like this is actually more likely to see the light of day as an accounting/billing feature, insofar as it would make sense to allow cost control by setting (possibly fine-grained) limits on AWS resource usage - this isn't available yet either, though.
Please note that this feature is frequently requested (see e.g. How to limit AWS resource consumption?), and its absence actually makes it possible to launch what Christofer Hoff aptly termed an Economic Denial of Sustainability attack (see The Google attack: How I attacked myself using Google Spreadsheets and I ramped up a $1000 bandwidth bill for a somewhat ironic and actually non-malicious example).
Workaround
You might be able to achieve an approximation of your specification by using Shared Queues with an IAM policy granting access to user Bob, as outlined in Example AWS IAM Policies for Amazon SQS, and monitoring this queue with Amazon CloudWatch by Creating Amazon CloudWatch Alarms for one or more of the Amazon SQS Dimensions and Metrics you want to limit, e.g. NumberOfMessagesSent. Once the limit is reached, you could revoke the IAM grant for user Bob for this shared queue until he is in compliance again.
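A rough boto3 sketch of that workaround follows. The user name, queue name and ARN, policy name, threshold, and SNS topic ARN are all hypothetical placeholders, and the alarm uses a one-day bucket since a per-month window isn't directly expressible in a single alarm period.

# pip install boto3
import json
import boto3

iam = boto3.client("iam")
cloudwatch = boto3.client("cloudwatch")

QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:BobsQueue"   # hypothetical

# Grant Bob access to his shared queue via an inline IAM policy.
iam.put_user_policy(
    UserName="Bob",
    PolicyName="BobsQueueAccess",                            # hypothetical policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "sqs:*", "Resource": QUEUE_ARN}],
    }),
)

# Alarm on messages sent to the queue; ~1,000,000 per month is roughly 33,000 per day.
cloudwatch.put_metric_alarm(
    AlarmName="BobsQueue-daily-send-budget",                 # hypothetical alarm name
    Namespace="AWS/SQS",
    MetricName="NumberOfMessagesSent",
    Dimensions=[{"Name": "QueueName", "Value": "BobsQueue"}],
    Statistic="Sum",
    Period=86400,                 # one-day buckets
    EvaluationPeriods=1,
    Threshold=33000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:queue-usage-alerts"],  # hypothetical topic
)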
Obviously it is not necessarily trivial to implement the 'per-second'/'per-month' specification based on this metric alone without some thorough bookkeeping, nor will you be able to 'pull the plug' precisely when the limit is reached; rather, you'll need to account for processing time and API delays.
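The revocation itself, once your own bookkeeping decides the budget is spent, could be as simple as deleting the inline policy; the names below match the hypothetical sketch above.

import boto3

iam = boto3.client("iam")

def revoke_queue_access(user_name="Bob", policy_name="BobsQueueAccess"):
    # Remove the inline grant; re-create it with put_user_policy once Bob is
    # back within his budget.
    iam.delete_user_policy(UserName=user_name, PolicyName=policy_name)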
Good luck!
