I can only select a region but no zone for Google Cloud Run (fully managed) services: which zone should I choose for my Google Cloud SQL server? - google-cloud-run

I have a fully managed Google Cloud Run service running in Frankfurt. I was not able to choose a zone but only a region, so I took "europe-west3". For my Google Cloud SQL server I can and have to choose a zone. I want to select the same data center for my SQL server and my service to keep the distance short and connections fast but I don't know which zone I should use (a, b, c). Do you know a way to determine which zone fits best to a fully managed Cloud Run Service?

Unfortunately, you cannot choose a zone for a fully managed Cloud Run service; control only goes down to the region level. However, this is not something you should worry about, as you can see in this documentation:
A zone is a deployment area for Google Cloud resources within a region
That means that even though the resources might not be on the same cluster or VM, they are still geographically very close and quite likely in the same data center, and, as mentioned in the same documentation:
Locations within regions (zones) tend to have round-trip network latencies of under 1 ms on the 95th percentile.
So you are looking at very low latency between your resources anyway, to the point that it might not even be noticeable.
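If it helps to make this concrete, here is a minimal sketch (the project, instance, and service names are placeholders) of creating a Cloud SQL instance in one zone of europe-west3 and deploying a Cloud Run service to the same region with the instance attached:

    # Create the Cloud SQL instance in europe-west3; any zone (a, b or c) works,
    # since round trips between zones of the same region are sub-millisecond.
    gcloud sql instances create my-db \
        --database-version=POSTGRES_13 \
        --tier=db-f1-micro \
        --region=europe-west3 \
        --zone=europe-west3-a

    # Deploy the Cloud Run service to the same region and attach the instance.
    # Cloud Run only accepts a region, never a zone.
    gcloud run deploy my-service \
        --image=gcr.io/my-project/my-image \
        --region=europe-west3 \
        --add-cloudsql-instances=my-project:europe-west3:my-db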

Related

how to make my amazon-connect instance multi-region or multi-AZ?

I have amazon-connect instances in a specific region and I want to implement failover. I would like to make my amazon-connect instances multi-region or multi-AZ, so that if the primary region fails, the secondary instances in the other region can pick up the workload without downtime.
You don't need to do anything; Amazon takes care of Connect resiliency as part of the service. See: https://docs.aws.amazon.com/connect/latest/adminguide/reliability-bp.html
Amazon Connect Global Resiliency provides a set of APIs that you use to:
Provision a linked Amazon Connect instance in another AWS Region.
Provision and manage phone numbers that are global and accessible in both Regions.
Distribute telephony traffic between the instances and across Regions in 10% increments.
For example, you can distribute traffic 100% in US East (N. Virginia) / 0% in US West (Oregon), or 50% in each Region.
Access reserved capacity across Regions.
https://docs.aws.amazon.com/connect/latest/adminguide/setup-connect-global-resiliency.html
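For illustration only, that workflow maps roughly onto AWS CLI calls like the following (the instance ID, traffic distribution group ID, alias, and Regions are placeholders; the exact prerequisites are described in the linked guide):

    # Provision a linked replica instance in another Region.
    aws connect replicate-instance \
        --instance-id 11111111-2222-3333-4444-555555555555 \
        --replica-region us-west-2 \
        --replica-alias my-connect-replica

    # Shift telephony traffic in 10% increments, e.g. 50% in each Region.
    aws connect update-traffic-distribution \
        --id <traffic-distribution-group-id> \
        --telephony-config 'Distributions=[{Region=us-east-1,Percentage=50},{Region=us-west-2,Percentage=50}]'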

Best way to run volatile containers on Google Cloud

I have scripts that collect data all the time on Google Cloud VMs, but there are times when I have more or less data to collect, so I need to allocate CPU and memory dynamically and automatically so I don't spend so much money. Searching, I saw that the best way is to create containers and orchestrate them correctly. Google offers Kubernetes, Cloud Run, and Google Compute Engine: which is the simplest and best for this problem? Or, if there is another platform that solves it better, which one?
P.S. I'm new to cloud computing, sorry if I made a mistake or said something that doesn't exist.
Definitely forget about GCE (Compute Engine).
That leaves GKE or Cloud Run; you have to choose depending on your needs. Here is the best article I have found:
https://cloud.google.com/blog/products/containers-kubernetes/when-to-use-google-kubernetes-engine-vs-cloud-run-for-containers
However, if you choose to use k8s, you can manage resources within the Deployment manifests in the "resources" section. The requests are the minimum resources allocated to your deployment and the limits are the maximum resources the deployment can use. You may want to experiment with these values.
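As a rough sketch of what that section looks like (the names and numbers below are made up), a Deployment with requests and limits could be applied like this:

    # Hypothetical Deployment showing the "resources" section; the values are examples only.
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: data-collector
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: data-collector
      template:
        metadata:
          labels:
            app: data-collector
        spec:
          containers:
          - name: collector
            image: gcr.io/my-project/collector:latest
            resources:
              requests:        # minimum guaranteed to the container
                cpu: 250m
                memory: 256Mi
              limits:          # maximum the container may use
                cpu: "1"
                memory: 512Mi
    EOF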

Cloud Run Inter Zone Egress

I have a question about inter-zone egress charges on Google Cloud Run (managed). As I understand it, there is no control over which zones Cloud Run chooses. So potentially, when deploying several microservices talking to each other, there could be significant charges.
In Kubernetes this can be alleviated via service topology (preferring the same zone, or even the same host if available). Is there any way to achieve this with Cloud Run?
https://kubernetes.io/docs/concepts/services-networking/service-topology/
According to Cloud Run pricing and internet egress pricing, the cost stays the same regardless of whether the apps are within the same zone or not.
Now, if you plan to have heavy traffic between your apps, you should consider a different setup. Either GKE or Cloud Run for Anthos will allow you to set up communication between your apps through internal IP addresses, which is free of charge assuming they are in the same zone. Refer to this table.
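For completeness, the service topology mechanism linked in the question looked roughly like this at the time (it was an alpha feature behind the ServiceTopology gate and has since been replaced by topology aware routing); it only applies to workloads you run yourself on GKE, not to fully managed Cloud Run:

    # Hypothetical Service preferring same-node, then same-zone endpoints (legacy topologyKeys).
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      selector:
        app: backend
      ports:
      - port: 80
        targetPort: 8080
      topologyKeys:
      - "kubernetes.io/hostname"
      - "topology.kubernetes.io/zone"
      - "*"
    EOF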

Problem: empty graphs in GKE cluster node detail ("No data for this time interval"). How can I fix it?

I have a cluster in Google Cloud, and I need information about resource usage.
In the interface for each node there are three graphs, for CPU, memory, and disk usage. But all of these graphs, on every node, show the warning "No data for this time interval" for any time interval.
I upgraded all clusters and nodes to the latest version 1.15.4-gke.22 and changed "Legacy Stackdriver Logging" to "Stackdriver Kubernetes Engine Monitoring".
But it didn't help.
In the Stackdriver Workspace only "disk_read_bytes" has a graph; any other query in Metrics Explorer only shows the message "No data for this time interval".
If I run "kubectl top nodes" on the command line, I see current data for CPU and memory. But I need to see it on the node detail page to understand peak load. How can I configure this?
In my case, I was missing permissions on the IAM service account associated with the cluster - make sure it has the following roles:
Monitoring Metrics Writer (roles/monitoring.metricWriter)
Logs Writer (roles/logging.logWriter)
Stackdriver Resource Metadata Writer (roles/stackdriver.resourceMetadata.writer)
This is documented here
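If the roles are indeed what is missing, a sketch of granting them with gcloud looks like this (the project ID and the node service account email are placeholders):

    # Grant the monitoring and logging roles to the service account used by the GKE nodes.
    PROJECT_ID=my-project
    NODE_SA=my-node-sa@my-project.iam.gserviceaccount.com

    for ROLE in roles/monitoring.metricWriter \
                roles/logging.logWriter \
                roles/stackdriver.resourceMetadata.writer; do
      gcloud projects add-iam-policy-binding "$PROJECT_ID" \
          --member="serviceAccount:$NODE_SA" \
          --role="$ROLE"
    done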
Actually, it sounds strange: if you can get the metrics on the command line but the Stackdriver interface doesn't show them, it may be a bug.
I recommend this: if you are able to, create a cluster with minimal resources and check the same Stackdriver metrics. If the metrics show up there, it is probably a bug and you can report it through the appropriate GCP channel.
Check the documentation about how to get support within GCP:
Best Practices for Working with Cloud Support
Getting support for Google Cloud

How do I change the timezone of my db2 service on Bluemix

The title should say it all. I have a service on Bluemix. From what I see, there are two regions where I can set up an application: US South or UK. Because it was set up quite some time ago and has grown substantially, I don't want to move my service from US South to UK, or change all of my time-related queries either. Is there a way I could change the timezone of my DB2 service to match my current timezone / region (Irish Summer Time) without moving the whole application "overseas"?
A DB2 instance (including currently both "SQL Database" and "dashDB" services on Bluemix) takes its timezone setting from the underlying operating system. Unless there is an option to select the timezone when you provision either service (which, in my opinion, there ought to be), I'm afraid you'll have to move the service physically.
There is no timezone setting for Bluemix dashDB instances, users, tables, or columns.
There is also no information about the timezone setting in the web GUI; you have to query the DB.
MAJOR FAIL
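For example, you can check the offset the instance is actually using with the CURRENT TIMEZONE special register (shown here through the db2 command line processor, connection steps omitted):

    # Returns the server's offset from UTC as an hhmmss-style decimal, e.g. 0 for UTC or 10000 for UTC+1.
    db2 "SELECT CURRENT TIMEZONE FROM SYSIBM.SYSDUMMY1"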
