Scaling Cloud Launcher Apps On Google Cloud

I created a MongoDB cluster using the google cloud launcher.
Is there any way to add more nodes using the cloud launcher?

Unfortunately, once you create the MongoDB cluster with the Google Cloud Launcher, it is not possible to (re)scale the cluster through the launcher. You will have to adjust the cluster manually to scale it. The MongoDB docs are pretty helpful in this regard; for example, here's the doc on how to scale a sharded cluster.
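As a rough sketch of what the manual step looks like (the hostnames here are hypothetical; follow the linked MongoDB docs for the full procedure), you would provision a new VM yourself and then register it from the mongo shell:

# Add a member to an existing replica set (run against the primary):
mongo --host mongodb-primary-vm:27017 --eval 'rs.add("mongodb-new-node-vm:27017")'

# Or, for a sharded cluster, register a new shard (a replica set) through mongos:
mongo --host mongos-vm:27017 --eval 'sh.addShard("rs1/mongodb-new-node-vm:27017")'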

Related

Apply cql to cassandra on bare metal during application deployment to kubernetes cluster

Our customer has a running Cassandra cluster (not in a container).
We are currently trying to find a way to apply CQL scripts in an automated way during the rollout of our application, which is deployed in a Kubernetes cluster.
Our application is a Quarkus application.
Does anyone have a hint on how to handle this properly?
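One pattern that comes to mind (a sketch only, not from the original thread; the image tag, hostname, and resource names below are hypothetical) is a Kubernetes Job that runs cqlsh against the external cluster as part of the rollout, with the CQL scripts shipped in a ConfigMap:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: apply-cql-scripts        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: cqlsh
        image: cassandra:4.0     # the official image ships with cqlsh
        command: ["cqlsh", "cassandra.customer.example", "9042", "-f", "/scripts/migration.cql"]
        volumeMounts:
        - name: cql-scripts
          mountPath: /scripts
      volumes:
      - name: cql-scripts
        configMap:
          name: cql-scripts      # ConfigMap holding your .cql files
EOF

The same container could instead run as an init container on the Quarkus Deployment if the schema must be in place before the application starts.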

Integrate Cloud Run with existing service mesh

We have an existing service mesh built using Envoy and an internal service control and discovery stack. We want to offer Cloud Run to our developers. How can we integrate Cloud Run into the mesh network so that:
1. The Cloud Run containers can talk to the mesh services.
2. The services built using Cloud Run can be discovered and used by other mesh services (each of which has an Envoy sidecar)?
The GCP docs cover this in the Cloud Run for Anthos services using Istio doc.
In a nutshell, you will need to:
Create a GKE cluster with Cloud Run enabled.
Deploy a sample service to Cloud Run for Anthos on Google Cloud.
Create an IAM service account.
Create an Istio authentication policy.
Create an Istio authorization policy.
Test the solution.
But things change depending on how your existing service mesh is configured. Elaborating on that would allow the community to better assist you.
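As a sketch of the first step, creating the cluster with the Cloud Run add-on enabled looks roughly like this (the cluster name and zone are placeholders; the flags follow the GCP docs as they stood at the time of writing):

gcloud container clusters create my-mesh-cluster \
  --zone=us-central1-a \
  --addons=HttpLoadBalancing,CloudRun \
  --machine-type=n1-standard-4 \
  --enable-stackdriver-kubernetes

The Istio authentication and authorization policies from the later steps are then applied with kubectl against that cluster.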

Connect external workers to Cloud Composer airflow

Is it possible to connect an external worker that is not part of the Cloud Composer Kubernetes cluster? Use case would be connecting a box in a non-cloud data center to a Composer cluster.
Hybrid clusters are not currently supported in Cloud Composer. If you attempt to roll your own solution on top of Composer, I'd be very interested in hearing what did or didn't work for you.
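Purely as a sketch of what rolling your own might look like (untested; every hostname and credential below is hypothetical, and the external box would need network access to the environment's Celery broker, e.g. over VPN), you could try starting a plain Airflow worker against the same queue:

# On the external box, matching the Airflow version your Composer environment runs:
pip install "apache-airflow[celery]==1.10.*"
export AIRFLOW__CORE__EXECUTOR=CeleryExecutor
export AIRFLOW__CELERY__BROKER_URL=redis://composer-redis.internal:6379/0
export AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql://airflow:PASSWORD@composer-sql.internal/airflow
airflow worker

Whether the scheduler actually dispatches tasks to such a worker is exactly the unsupported territory mentioned above.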

How is Amazon Elastic Container Service different from Kubernetes when we want to deploy our Dockerized application on it?

I am planning to start my project but am a bit confused about choosing between Amazon ECS and Kubernetes, as I am really a beginner with microservices architecture.
I would really appreciate it if someone could point me toward a platform that makes deploying my Docker container fast and easy to handle.
Thanks
Here is a list of differences off the top of my head:
AWS ECS / Kubernetes:
Proprietary AWS implementation / Open-source solution
Runs on AWS / Supported by most cloud providers and on premises
Task Definitions / Pods, which have different features
Runs on your EC2 machines, or serverless with Fargate (in beta at the time of writing) / Runs on any cluster of (physical/virtual/cloud) machines running Kubernetes
Support for AWS VPCs / Support for multiple networking models
I would also argue that Kubernetes has a slightly steeper learning curve, but it ultimately provides more freedom and is probably a safer bet for the future given its wide adoption.
Features supported in both systems:
Horizontal application scalability
Cluster Scalability
Load Balancing
Rolling upgrades
Logging (with additional logging systems)
Container Health Checks
APIs
Amazon has bowed to customer pressure and currently has managed Kubernetes support in beta (EKS).
*edit: EKS has since been released, but with an upcharge for the cluster control-plane nodes, as compared to Google's GKE, for example.
Here is one article about the topic.
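To make the Task Definitions / Pods row concrete, here is roughly what deploying the same container looks like on each platform (all names and the image are placeholders):

# Kubernetes: a Deployment wraps a Pod template
kubectl create deployment myapp --image=myrepo/myapp:1.0
kubectl scale deployment myapp --replicas=3

# ECS: register a task definition, then run it as a service on a cluster
aws ecs register-task-definition --family myapp \
  --container-definitions '[{"name":"myapp","image":"myrepo/myapp:1.0","memory":512}]'
aws ecs create-service --cluster my-cluster --service-name myapp \
  --task-definition myapp --desired-count 3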

How many WordPress instances can I run on Google Compute Engine in a single Google Cloud Platform project?

I was wondering if anyone knows how many instances of a WordPress site I can potentially run on an n1-standard-4 VM instance on Google Compute Engine, either using Compute Engine directly or Container Engine with Docker. Is there a way to run benchmarks to figure this out?
An easy way to get a rough estimate for Google Container Engine would be to run the WordPress tutorial with a minor modification: in the cluster create command, use --machine-type=n1-standard-4. Then, after following the guide to spin up a replication-controller-backed WordPress pod, scale it by running
kubectl scale rc wordpress --replicas=100
and see how many replicas get scheduled. Keep in mind that this will just give you a rough estimate of how many instances you can run on an n1-standard-4, gated by CPU/memory. If you actually wanted to run multiple WordPress instances with persistent storage backing each one, you would follow this guide and repeat it n times for as many instances as you want.
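Putting those steps together (the cluster name is arbitrary, and the replication controller name wordpress is assumed from the tutorial):

gcloud container clusters create wp-bench --machine-type=n1-standard-4 --num-nodes=1
kubectl scale rc wordpress --replicas=100
kubectl get rc wordpress    # compare DESIRED vs. READY to see how many replicas fit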
