The best place to store a Google service account JSON file? - docker

Current setup: we use Docker to build the application image, then deploy the image via Google Container Engine (GKE).
It is not ideal to keep the Google service account JSON file in the code and later bake it into the image. Is there any other way to store the JSON file?
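One common pattern (not given in the question, so treat this as a sketch) is to keep the key out of the image entirely: store it as a Kubernetes Secret, mount the Secret into the pod, and point GOOGLE_APPLICATION_CREDENTIALS at the mounted file. The application then loads the key at runtime; the mount path and scopes below are assumptions:

```python
import os
from google.oauth2 import service_account

# GOOGLE_APPLICATION_CREDENTIALS is set in the pod spec to the Secret's
# mount path, e.g. /var/secrets/google/key.json (hypothetical path).
key_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]

credentials = service_account.Credentials.from_service_account_file(
    key_path,
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
```

This way the image contains no credentials, and rotating the key only means updating the Secret, not rebuilding the image.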

Related

Google App Engine 32MB max request size limit - how to upload large files?

We've got a setup using Google App Engine with a Docker container running a Laravel application. Our users need to upload large video files (max 1028MB) to the server, which are in turn stored in GCS. But GAE gives a 413 Request Entity Too Large error from nginx. I've confirmed this is not an issue with our server configs but a restriction on GAE.
This is a pretty common requirement. How do you guys get around this?
What I've tried:
Chunking using this package https://github.com/pionl/laravel-chunk-upload and dropzone.js to break down the file when sending (still results in a 413).
The Blobstore API is not applicable for us, as we need to constantly retrieve and play the files.
As mentioned by @GAEfan, you can't change this limit on GAE. The recommended approach would be to upload your files directly to Google Cloud Storage and then process them from there.
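One way to implement that direct upload, sketched with the google-cloud-storage Python client (bucket and object names here are assumptions): the server signs a short-lived V4 upload URL, and the browser PUTs the video straight to GCS, so the large request never passes through GAE's nginx.

```python
from datetime import timedelta
from google.cloud import storage

# The server signs a short-lived upload URL; the browser uploads the
# file directly to GCS, bypassing the 32MB GAE request limit entirely.
client = storage.Client()
blob = client.bucket("my-video-uploads").blob("videos/clip-123.mp4")  # hypothetical names

upload_url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=30),
    method="PUT",
    content_type="video/mp4",
)
# Hand upload_url to the browser, which uploads with a plain HTTP PUT
# and the matching Content-Type header.
```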

Pulling a Google Container Registry container into Google Kubernetes Engine from another GCP project

I am looking to pull a container image from Google Container Registry that exists in one Google Cloud Platform project into a Google Kubernetes Engine cluster that exists in a separate GCP project.
There's a good resource on this here: https://medium.com/hackernoon/today-i-learned-pull-docker-image-from-gcr-google-container-registry-in-any-non-gcp-kubernetes-5f8298f28969, but it includes the extra complexity of a non-GCP cluster. My guess is that there's an easier approach, since everything here resides in Google Cloud Platform.
Thanks,
https://medium.com/google-cloud/using-single-docker-repository-with-multiple-gke-projects-1672689f780c
This Medium post from a while back seems to describe what you are trying to do. In short: you need to grant the "Storage Object Viewer" IAM role to the service account of the cluster that wants to pull images from the other project's registry. The name of the role isn't exactly intuitive, but it sort of makes sense when you consider that the images are stored in Cloud Storage.
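Because GCR backs each registry with a GCS bucket named artifacts.<PROJECT-ID>.appspot.com, the grant can be made on that bucket. A minimal sketch with the Python client, assuming hypothetical project IDs and a Compute Engine default service account for the cluster:

```python
from google.cloud import storage

registry_project = "registry-project-id"  # hypothetical
cluster_sa = ("serviceAccount:1234567890-compute"
              "@developer.gserviceaccount.com")  # hypothetical

client = storage.Client(project=registry_project)
bucket = client.bucket(f"artifacts.{registry_project}.appspot.com")

# Append a Storage Object Viewer binding for the cluster's service account.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {cluster_sa}}
)
bucket.set_iam_policy(policy)
```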

Video Upload using .NET Core MVC to Google Cloud Storage

I'm really struggling here, as I'm super new to .NET Core as well as Google Cloud Storage. I have looked over a lot of the available documentation online, but I still can't understand how to build the architecture.
What I'm trying to build is a .NET Core MVC application that has a form to upload a video file to Google Cloud Storage (a GCS bucket, probably?). The controller will take the data from the form, and the model layer is Google Cloud Storage.
Some pointers on how I can proceed with this task would be really helpful, as would links to tutorials or any documentation you think would be useful. Thanks a lot!!
It sounds like you're trying to get end users to upload files into Google Cloud Storage from their web browser. The trick here is that allowing any random anonymous user write access to your GCS bucket is a bad idea, but you also don't want to require that your users have Google Cloud accounts, either.
To resolve this, Google Cloud Storage offers a feature called "signed URLs." Your server uses its credentials to create a URL that is valid for a limited amount of time and, when presented to GCS by the end user, allows them to do one very specific thing as if they were your application's service account (in this case, uploading an object).
The flow goes like this:
1. Your app signs a URL for uploading an object to GCS and serves it as part of the page to the user.
2. The user uploads the file to GCS using whatever JavaScript libraries you prefer.
If you want the user to use a literal POST web form, the signature is a little different from the other cases. Look at the "policy document" section here: https://cloud.google.com/storage/docs/xml-api/post-object#usage_and_examples
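A sketch of that POST-form variant with the google-cloud-storage Python client (the asker's stack is .NET Core, where the Google.Cloud.Storage.V1 package plays the same role; bucket and object names below are assumptions):

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()

# Build a signed policy document; the browser then POSTs the file to
# policy["url"] with policy["fields"] included as hidden form inputs.
policy = client.generate_signed_post_policy_v4(
    "my-video-uploads",     # hypothetical bucket
    "videos/clip-123.mp4",  # hypothetical object name
    expiration=timedelta(minutes=30),
    conditions=[["content-length-range", 0, 1073741824]],  # cap at ~1 GB
)
```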
Here's a sample that helps answer half your question. It demonstrates how to upload a file to Google Cloud Storage:
https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/master/storage/api/Storage/Program.cs#L117
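For comparison, the server-side upload that the linked C# sample performs looks roughly like this in the Python client (bucket and file names are assumptions):

```python
from google.cloud import storage

# Upload a local file to a bucket from the server itself (as opposed
# to the signed-URL flows above, where the browser uploads directly).
client = storage.Client()
bucket = client.bucket("my-video-uploads")  # hypothetical bucket
bucket.blob("videos/clip-123.mp4").upload_from_filename("clip-123.mp4")
```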

Is all WSO2 API Manager's configuration saved in the database?

Say one runs a WSO2 API Manager Docker instance connected to a separate database (like MySQL) which is not dockerized, and some API configuration is made within API Manager (like referencing a Swagger file on GitHub).
If someone rebuilds the WSO2 API Manager Docker image (to modify CSS files, for example), will the past configuration still be available from the separate database? Or does one have to reconfigure everything in the new Docker instance?
To put it another way: if one does need to reconfigure everything, is there an easy way to do it? Something automatic?
All the configurations are stored in the database. (Some are stored in the internal registry, but the registry persists its data to the database in the end.)
API artifacts (Synapse files) are saved on the file system [1]. You can use API Manager's API import/export tool to migrate API artifacts (and all other related files, such as Swagger definitions, images, sequences, etc.) from one server to another.
[1] <APIM_HOME>/repository/deployment/server/synapse-configs/default/api/

Storing and displaying image files in a Grails application in Cloud Foundry

I have a Grails application which allows users to upload image files, and these images can be displayed. The application actually only stores the file path in the database and saves the image files on the file system.
I've read some posts which say Cloud Foundry doesn't support local file system access. So my question is: what modifications should I make if I want to deploy my application to Cloud Foundry? I hope images can still be displayed directly on the web page, so users don't have to download them to their own computers just to view them.
Images stored on the file system can disappear when your application stops, crashes, or moves, so the file system should not be used for content that you want to persist. Further, file system storage is not scalable: if more than one instance of your app is running, the local storage is visible only to a specific instance of the app and is not shared across all instances.
To meet your requirements, a service such as MongoDB GridFS or MySQL with a BLOB data type, or an external blob store such as Box.net or Amazon S3, can be used.
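A minimal sketch of the external-store option, using Python and boto3 against S3 (bucket name and key scheme are assumptions; the same idea applies from Grails via the AWS SDK for Java):

```python
import boto3

s3 = boto3.client("s3")

def save_image(data: bytes, filename: str) -> str:
    """Store an uploaded image in S3 and return a time-limited view URL."""
    key = f"uploads/{filename}"  # hypothetical key scheme
    s3.put_object(Bucket="my-app-images", Key=key, Body=data,
                  ContentType="image/jpeg")
    # Persist only `key` in the database; render images with a
    # presigned GET URL so the browser fetches them directly from S3.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-app-images", "Key": key},
        ExpiresIn=3600,
    )
```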
