Terraform: How to bind Azure Service Bus Namespace to a VNet? - terraform-provider-azure

Using Terraform, we have set up a service endpoint from our VNet to Service Bus. However, we now want to bind the Service Bus namespace to our VNet so that no other networks can access that Service Bus namespace.
Microsoft describes how to do the binding with an ARM template here.
How do you accomplish this using native terraform (no ARM template)?

This is now possible using the azurerm_servicebus_namespace_network_rule_set resource, which was released in v2.7.0 of the provider.
You can also assign private endpoints.
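As a minimal sketch (resource names are illustrative, and the argument names have shifted between provider versions, so check the provider docs for the version you are on), the rule set might look like:

```hcl
resource "azurerm_servicebus_namespace_network_rule_set" "example" {
  # Bind the rule set to an existing namespace; network rule sets
  # require the Premium SKU on the namespace.
  namespace_name      = azurerm_servicebus_namespace.example.name
  resource_group_name = azurerm_resource_group.example.name

  # Deny all traffic except what the rules below allow.
  default_action = "Deny"

  network_rules {
    # Subnet that already has the Microsoft.ServiceBus service endpoint.
    subnet_id                            = azurerm_subnet.example.id
    ignore_missing_vnet_service_endpoint = false
  }
}
```

With default_action = "Deny", only the listed subnets (and any IP rules you add) can reach the namespace.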

Judging by the module reference, this is not yet possible with native Terraform. This is not uncommon; many Azure features are not yet supported by the Terraform provider.

Related

access google cloud TPU in self-managed k8s cluster

Is it possible to access Google Cloud TPU resources in a self-managed k8s cluster (not GKE)? Is there a plugin of any sort to access the TPU resources from within the Docker containers?
Cloud TPU support has been built into GKE: a custom resource was defined, with separate control logic to handle that resource. This code is built into GKE, so if you wanted to self-manage a k8s cluster, you'd probably have to write this yourself. Personally, my recommendation would be to use TPUs through GKE, as they're best supported that way.

Integrate cloud run with existing service mesh

We have an existing service mesh built using Envoy and an internal service control and discovery stack. We want to offer Cloud Run to our developers. How can we integrate Cloud Run into the mesh so that:
1. The Cloud Run containers can talk to the mesh services.
2. The services built using Cloud Run can be discovered and used by other mesh services (each with an Envoy sidecar).
The GCP docs cover this with the Cloud Run for Anthos services using Istio doc.
In a nutshell, you will need to:
Create a GKE cluster with Cloud Run enabled.
Deploy a sample service to Cloud Run for Anthos on Google Cloud.
Create an IAM service account.
Create an Istio authentication policy.
Create an Istio authorization policy.
Test the solution.
But things change depending on how your existing service mesh is configured. Elaborating on that portion would allow the community to assist you better.
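For the Istio authorization step in the list above, a minimal sketch of an AuthorizationPolicy might look like this (the namespace, app label, and service-account principal are placeholders for your own mesh):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-mesh-callers        # hypothetical policy name
  namespace: default
spec:
  # Apply only to the pods backing the Cloud Run for Anthos service.
  selector:
    matchLabels:
      app: my-cloudrun-service    # placeholder label
  rules:
  - from:
    - source:
        # Only allow calls from workloads running as this service account.
        principals: ["cluster.local/ns/default/sa/caller-sa"]
```

Anything in the mesh not running as caller-sa would then be denied access to the service.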

Run a web page similar to the kubernetes dashboard

I want to run a web page similar to the Kubernetes dashboard. The web page takes input from the user and generates a small file, but I want the web page to be loaded without using any server. Kubernetes deploys a pod and brings up the web page; I want to do the same. If Kubernetes is also using a server, how is it using it (is it downloading it directly with the OS in the pod, or how is Kubernetes doing it)?
Overview: I want to know how the Kubernetes dashboard gets deployed. Is it using a server? If so, how does the server get installed in the Kubernetes pod? If not, how does it bring up the UI?
Actually, Kubernetes plays the role of an orchestrator: it provides the communication channels between containers in the cluster, and by default uses Docker as the container runtime.
Containers are the run-time environment for images, and images consist of an OS layer plus application binaries; a good explanation can be found here. To build your own image, you have two options: start from an existing image on Docker Hub, or compose an image from a Dockerfile. To store the customized image, you can either push it to a Docker Hub repository or run a private, isolated repository by deploying a Registry server.
When the image is ready and you plan to run the application in a Kubernetes cluster, it's time to create your first microservice. Although there are tons of materials about Kubernetes cluster architecture and its run-time engine, I will focus on the application deployment lifecycle.
A Deployment is the main mechanism that defines how Pods should be run within a cluster and provides the configuration for the application's run-time workflow.
A Service describes how a particular set of Pods communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
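As an illustration of the Deployment/Service pair (all names and the image are placeholders), a web app exposed inside the cluster might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # any web-serving image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # routes traffic to the Pods above
  ports:
  - port: 80
    targetPort: 80
```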
In the usual Kubernetes Dashboard scenario, kubectl proxy exposes the application by proxying between the host and the Kubernetes API; this is meant for testing purposes and is not secure. By comparison, a NodePort-type Service is a more convenient way to make an application accessible outside the cluster, as described in this Stack thread.
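To contrast with kubectl proxy, a NodePort variant of such a Service (names and port values are illustrative) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # must fall in the cluster's NodePort range (default 30000-32767)
```

The application then answers on port 30080 of every node's IP, without any proxy process on the client side.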
I encourage you to get some more learning stuff in the official Kubernetes documentation.

Kubernetes - Automatically populating CloudDNS records from service endpoints

When running a Kubernetes cluster on Google Cloud Platform, is it possible to have the IP address from service endpoints automatically assigned to a Google Cloud DNS record? If so, can this be done declaratively within the service YAML definition?
Simply put, I don't trust that the IP address of my type: LoadBalancer service will stay the same.
One option is to front your services with an ingress resource (load balancer) and attach it to a static IP that you have previously reserved.
I was unable to find this documented in either the Kubernetes or GKE documentation, but I did find it here:
https://github.com/kelseyhightower/ingress-with-static-ip
Keep in mind that the value you set for the kubernetes.io/ingress.global-static-ip-name annotation is the name of the reserved IP resource, and not the IP itself.
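For illustration, an Ingress using that annotation might look like the following (my-static-ip and the backend names are placeholders; at the time this was written the Ingress API group was extensions/v1beta1, shown here in the current networking.k8s.io/v1 form):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Name of the reserved global static IP resource, NOT the IP itself.
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  defaultBackend:
    service:
      name: web          # placeholder backend Service
      port:
        number: 80
```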
Before that was available, you had to create a global IP yourself and attach it to a GCE load balancer with a global forwarding rule targeting the nodes of your cluster.
I do not believe there is a way to make this work automatically, today, if you do not wish to front your services with a k8s Ingress or GCP load balancer. That said, the Ingress is pretty straightforward, so I would recommend you go that route, if you can.
There is also a Kubernetes Incubator project called "external-dns" that looks to be an add-on that supports this more generally, and entirely from within the cluster itself:
https://github.com/kubernetes-incubator/external-dns
I have not yet tried that approach, but mention it here as something you may want to follow.
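As a sketch of how external-dns is typically wired up (the hostname and labels are placeholders; see the project's README for the authoritative tutorial), you annotate the Service and external-dns creates the matching record in Cloud DNS:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # external-dns watches for this annotation and creates/updates
    # the corresponding DNS record pointing at the LoadBalancer IP.
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

This is the closest thing to the declarative, in-YAML approach the question asks for.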
GKE uses Deployment Manager to spin up new clusters, as well as other resources like load balancers. At the moment, Deployment Manager does not integrate with Cloud DNS. Nevertheless, there is a feature request to support that. If this feature is implemented in the future, it might allow tighter integration between Cloud DNS, Kubernetes, and GKE.

Connecting IBM Containers (Dockers) to Watson IoT service instance

I wonder if I can connect my Docker containers running in the IBM Containers service with a Watson IoT service instance (of course, running in the same organization and space).
I can always assign a public IP to my container and connect through the public IP, but I think that makes no sense, and there should be an alternative like the one I use with other services, something like
-e "CCS_BIND_SRV=My-IoT-Service"
when starting the Docker.
Basically, you can directly connect to IBM Watson IoT from your Docker container. All you need are a couple of credentials. You can obtain them by reading the VCAP_SERVICES JSON property, which can be injected into your container:
Here is a link explaining this. (Search for VCAP_SERVICES)
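As a minimal sketch of the VCAP_SERVICES route (the service label iotf-service is the usual label for the Watson IoT Platform, but verify it against your own VCAP_SERVICES contents):

```python
import json
import os


def iot_credentials(vcap_json, label="iotf-service"):
    """Return the credentials dict of the first bound Watson IoT instance.

    The label "iotf-service" is an assumption; check your own
    VCAP_SERVICES to confirm the label your binding actually uses.
    """
    services = json.loads(vcap_json)
    instances = services.get(label, [])
    if not instances:
        raise KeyError("no %s binding found in VCAP_SERVICES" % label)
    return instances[0]["credentials"]


if __name__ == "__main__":
    # Inside the container, the platform injects VCAP_SERVICES as an env var.
    vcap = os.environ.get("VCAP_SERVICES")
    if vcap:
        creds = iot_credentials(vcap)
        print(creds["org"], creds["apiKey"])
```

From there you would pass the org, API key, and token to whatever MQTT or HTTP client you use to talk to the IoT platform.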
What you also can do is just obtain the credentials from the Bluemix UI and use them accordingly.
Here is a Python example of how to do this.
Finally, I can recommend this course, since it explains connectivity in detail.
