How to authenticate with Google Cloud from a Rails application deployed in k8s - ruby-on-rails

We use the method in the first code block in Java, but I don't see a corresponding method in the Rails documentation, only the second code block:

Storage storage = StorageOptions.getDefaultInstance().getService();

storage = Google::Cloud::Storage.new(
  project: "my-todo-project",
  keyfile: "/path/to/keyfile.json"
)
If we use an application-specific service account in the Kubernetes cluster, how do we configure the Rails application to work in the local developer environment and also run in a k8s cluster?
Also, I would prefer not to use a project_id and a keyfile to initialize, since I will have to manage multiple such JSON files during the initialization process in dev, qa, staging, production environments.

I would recommend initializing without arguments and using the default discovery of credentials as discussed in the Authentication guide.
When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run, the credentials will be discovered automatically.
For the local developer environment, we always set environment variables and still initialize without arguments, relying on the default discovery.
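For example, here is a minimal sketch, assuming the google-cloud-storage gem and a placeholder bucket name: locally you export environment variables pointing at a dev keyfile, while on GCP the credentials are discovered automatically, so the same no-argument call works everywhere.

# Local development only (the keyfile path is a placeholder):
#   export GOOGLE_CLOUD_PROJECT=my-todo-project
#   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/dev-keyfile.json
# On GKE/GCE/GAE/Cloud Run no variables are needed; credentials are discovered automatically.
require "google/cloud/storage"

storage = Google::Cloud::Storage.new   # no project or keyfile arguments
bucket  = storage.bucket "my-todo-bucket"
bucket.files.each { |f| puts f.name } if bucket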

Before moving your app to multiple environments, you should set up your deployment pipeline, which will handle how your app is configured for the different environments, including the configuration of service accounts.
Below you can find two official Google Cloud tutorials on how to do it, plus one example on GitLab, so you can follow whichever better suits you.
Continuous deployment to Google Kubernetes Engine using Jenkins
Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
GitLab - continuous-deployment-on-kubernetes
Also, regarding the parameters used to instantiate the cloud storage object: as you can see in the same documentation you cited in your question, the project parameter is the identifier of your storage project in the cloud, so if you do not set it, your app will not be able to find it. As for the keyfile, it is what allows your service account to authenticate, so you can't make it work without it either.
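That said, if you want to pass these values explicitly without hard-coding a JSON path per environment, one hedged option is a Rails initializer that reads them from the environment; the file name and variable names below are assumptions, not from the documentation.

# config/initializers/google_cloud_storage.rb (hypothetical file name)
require "google/cloud/storage"

Google::Cloud::Storage.configure do |config|
  # Set per environment, e.g. in the Kubernetes manifest or a dotenv file.
  config.project_id  = ENV["GCS_PROJECT_ID"] if ENV["GCS_PROJECT_ID"]
  config.credentials = ENV["GCS_KEYFILE"]    if ENV["GCS_KEYFILE"]
end

With that in place, Google::Cloud::Storage.new can still be called without arguments throughout the app.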
I hope this information helps you.

Related

serverless framework: local kafka as event source

I'm trying to build a local development environment in order to run my local tests.
I need to use Kafka as the event source.
I've deployed a self-managed cluster into my local environment using Docker.
One issue keeps nagging at me: according to the documentation, I need to provide authentication.
That in itself is not a problem; the issue is the kind of values the documentation requires me to provide: AWS secrets.
What do those kinds of secrets, AWS secrets, have to do with my self-managed, self-deployed Kafka cluster?
How can I provide my Kafka cluster as a local event source?
I mean, I thought I only needed to provide bootstrap servers, a consumer group and a topic... something like what the Knative serverless documentation says.
Any ideas about how to connect to my local Kafka?

Create service or container from another container, on Google Cloud Run or Cloud Run on GKE

Can I create a service or container from another container, on Google Cloud Run or Cloud Run on GKE?
I basically want to manage my containers/services dynamically from another container and I'm not sure how to go about this.
Adding more details:
One of my microservices needs to create new isolated containers that will run some user-land code. I would like to have full life-cycle control of these containers: run the code, and then destroy them as needed.
I also looked at the Cloud Run APIs, but I'm not sure how to run something like 'kubectl create ...' through those APIs. Is that the right approach?
Yes, you should be able to deploy Cloud Run services from Cloud Run services.
On Cloud Run (hosted): services by default run with Editor permissions, so this should be possible without any extra configuration.
Note that if you deploy apps with --allow-unauthenticated, which requires setting IAM permissions, the Editor role will not be enough; you need the Owner role on the GCP project for that.
On Cloud Run on GKE: services by default run with limited scopes (as they inherit the GKE node's permissions/scopes by default). You should add a service account to the Kubernetes Pod and use it to authenticate.
From there, you have several options:
Use the REST API directly: since run.googleapis.com behaves like a Kubernetes API server, you can directly apply JSON objects of Knative Services; a rough sketch follows this list. (You can use gcloud ... --log-http to learn how deployments are made as REST API requests.)
Use gcloud: you can ship your container image with gcloud and invoke it from your process.
Use Google Cloud Client Libraries: you can use the client libraries that are available for Cloud Run (for example this Go library) to construct in-memory Service objects and send them to the API using a higher-level client (this is the recommended approach).
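As a rough illustration of the first option, here is a hedged Ruby sketch that applies a Knative Service JSON object through the v1 Cloud Run API; the project, region, service name and image are placeholders, and it assumes the googleauth gem is available for Application Default Credentials.

# Rough sketch of the "REST API directly" option; all names below are placeholders.
require "googleauth"
require "net/http"
require "json"

project = "my-project"
region  = "us-central1"

# Application Default Credentials (e.g. the service account attached to the calling service).
creds = Google::Auth.get_application_default(["https://www.googleapis.com/auth/cloud-platform"])
token = creds.fetch_access_token!["access_token"]

service = {
  apiVersion: "serving.knative.dev/v1",
  kind: "Service",
  metadata: { name: "user-code-runner", namespace: project },
  spec: { template: { spec: { containers: [{ image: "gcr.io/#{project}/user-code:latest" }] } } }
}

uri = URI("https://#{region}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/#{project}/services")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json",
                               "Authorization" => "Bearer #{token}")
req.body = service.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code, res.body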

Routing in cloud run: equivalent of dispatch.yaml / replacement service for cloud run

Any suggestions for an equivalent routing service for Cloud Run, similar to dispatch.yaml for App Engine?
We'd like the flexibility of (temporarily) sending traffic to a different service based on URL.
If you want to route certain paths to certain Cloud Run services, I recommend using Firebase Hosting; it integrates with Cloud Run.
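For example, a hedged sketch of a firebase.json (the service IDs, region and paths are placeholders) that sends /api/** to one Cloud Run service and everything else to another:

{
  "hosting": {
    "rewrites": [
      { "source": "/api/**", "run": { "serviceId": "api-service", "region": "us-central1" } },
      { "source": "**", "run": { "serviceId": "frontend", "region": "us-central1" } }
    ]
  }
}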

I have set up the sample app on Google Cloud Platform successfully. After running a test, I am now wondering where the assets that I create are stored.

I have created a Hyperledger Google Cloud Platform installation.
Secondly, I installed the Hyperledger sample network. All this went fine. The asset creation also went fine after I created the static IP on the VM. I now am wondering where my "hello world" asset ended up.
I saw that a verification peer should have a /var/hyperledger...
With the default Google Cloud Platform installation, what are my peers? This all seems to be hidden. Does that mean that the data is just "out there"?
I am now looking into how to tweak the Google Cloud Platform installation to have private data storage.
When you are using Google Cloud Platform and using a VM to run everything, all your information is stored on the persistent disk you selected while installing the platform.
Regarding assets, you cannot physically see the assets in Fabric; they are stored in LevelDB or CouchDB. The default configuration of Fabric is LevelDB.
If you configure CouchDB, then you can see the data at a URL. Hope this helps.
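For example, a hedged sketch of the peer environment overrides that switch the state database from LevelDB to CouchDB (the address and credentials are placeholders); once the peer points at CouchDB, you can browse the state data in its Fauxton UI at http://<couchdb-host>:5984/_utils:

CORE_LEDGER_STATE_STATEDATABASE=CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw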

How to deploy docker app using docker-compose.yml in cloud foundry

I have a docker-compose.yml file which has environment variables and certificates. I would like to deploy these on the Cloud Foundry dev version.
I want to deploy Microgateway on Cloud Foundry; the link for Microgateway is below:
https://github.com/CAAPIM/Microgateway
In the cloud-native world, you instantiate the services on your foundation beforehand. You can use prebuilt services (e.g. the auto-scaler) available from the marketplace.
If the service you want is not available, you can install a tile (e.g. Redis, MySQL, RabbitMQ), which will add services to the marketplace. Lots of vendors provide tiles that can be installed on PCF (check network.pivotal.io for the full list).
If you have services that are outside of Cloud Foundry (e.g. Oracle, Mongo, or MS SQL Server) and you wish to inject them into your Cloud Foundry foundation, you can do that by creating User Provided Services (CUPS).
Once you have a service, you have to create a service instance. Think of it as provisioning the service for you. After you have provisioned it, i.e. created a service instance, you can bind it to one or more apps.
A service instance is scoped to an org and a space. All apps within an org and space can be bound to that service instance.
You deploy your app individually, by itself, to Cloud Foundry (jar, war, zip). You then bind any needed services to your app (e.g. db, scaling, caching, etc.).
Use a manifest file to do all these steps in one deployment.
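For instance, a hedged sketch of a manifest.yml (the app name, Docker image, environment value and service instance name are placeholders) that pushes a Docker image, sets an environment variable and binds a service in a single cf push -f manifest.yml:

applications:
- name: microgateway
  memory: 1G
  instances: 1
  docker:
    image: your-registry/microgateway:latest   # placeholder image
  env:
    SOME_SETTING: "some-value"                 # placeholder environment variable
  services:
  - my-redis   # service instance created earlier, e.g. with cf create-service or cf cups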
PCF 2.0 is introducing PKS, the Pivotal Container Service. It is an implementation of Kubo within PCF. It is still not GA.
Kubo, Kubernetes, and PKS allow you to deploy your containerized applications.
I have played with MiniKube and little bit of Kubo. Still getting my hands wet on PKS.
Hope this helps!
