Environment variables in Cloud Run referencing the Google Cloud project id

Even though it was not documented here, I think the GOOGLE_CLOUD_PROJECT environment variable used to be defined inside the container, but it is not anymore.
I can even find it referenced in code here:
project := os.Getenv("GOOGLE_CLOUD_PROJECT")
Where can I find my project id?

If we look at the Container instance metadata server page in the Cloud Run docs, we see an explicit reference to this being the mechanism for determining your own environment, including the project ID and service accounts.
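For illustration, here is a minimal Go sketch (not from the original post) that queries the metadata server for the project ID; the endpoint path and the Metadata-Flavor header follow the documented metadata-server convention:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

// projectID asks the container instance metadata server for the
// Google Cloud project ID the service is running in.
func projectID() (string, error) {
    req, err := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/project/project-id", nil)
    if err != nil {
        return "", err
    }
    // The metadata server rejects requests that lack this header.
    req.Header.Set("Metadata-Flavor", "Google")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(body), nil
}

func main() {
    id, err := projectID()
    if err != nil {
        log.Fatal(err) // only resolves inside a Google Cloud environment
    }
    fmt.Println("project id:", id)
}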

Related

Is there a way to reference deploymentBucket in serverless?

I have a serverless project whose deploymentBucket was created using the default naming with random characters. Other solutions now depend on this name, so I can't change it. However, I now need to create resources such as event notifications and related permissions, so as part of the serverless project itself I need to know the name of the bucket. I've tried referencing
${self:provider.deploymentBucket}
But this does not seem to be supported. How can I get the name of the deployment bucket as a variable within the same project? Thanks.
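For illustration, a hedged sketch of one alternative: going through the CloudFormation logical ID of the generated bucket instead of a framework variable. ServerlessDeploymentBucket as the logical ID is an assumption about the generated template, not something confirmed in this post; Ref on an S3 bucket resolves to its name:

resources:
  Outputs:
    DeploymentBucketName:
      Value:
        Ref: ServerlessDeploymentBucket  # assumed logical ID of the generated bucket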

Spring Cloud Data Flow - Task Properties

I'm using SCDF and I was wondering if there is any way to configure default properties for one application.
I have a task application registered in SCDF, and this application needs some JDBC properties to access the business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these properties in a properties file like this? (It's a bit weird to define them at launch.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, as the credentials would appear in the URL.
Or is there another way/place to define default business properties for an application? These properties will only be used by this task.
The purpose is to have one place where the OPS team can configure the URL and credentials without playing with the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put the other way round, you can't fully install/configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
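For example, the flow in the SCDF shell looks like this (fooTask and aaa.properties are the names from the question); the first launch supplies and persists the properties, and the second one, without arguments, reuses the stored configuration:

task launch fooTask --propertiesFile aaa.properties
task launch fooTask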
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and continues to use the old one. No idea whether this is intended by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the weird configuration issues mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and recreating it with the new values.
With this operator we have achieved what you are looking for: all our SCDF configuration is in a git repository and all changes are made through merge requests. Thanks to CI/CD, the new configuration is used on the next launch.
However, a Kubernetes operator should be part of the product. Without it, SCDF on Kubernetes feels quite "alien".

How do you reuse the same openapi.yaml file for production and development?

We are using a GitOps model for deploying our software. Everything in the dev branch goes to the dev environment and everything in main gets deployed to production. All good and fine, except that we use Google Cloud Endpoints, which rely on the host parameter of the openapi.yaml. There is only room for a single value, so we have to remember to change it for each deployment, which keeps us from doing a fully automated deploy.
How do you manage the same openapi.yaml definition when using Google Cloud Endpoints?
There is one example given in the official documentation; see if it helps your use case.
Basic structure of an OpenAPI document, notice how the "host" is parameterized with "YOUR-PROJECT-ID.appspot.com"
Deploying the Endpoints configuration, using the provided script "./deploy_api.sh"
Source code for deploy_api.sh
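The core of such a script is a substitution step before deployment. A minimal sketch of the idea (this is not the actual deploy_api.sh; PROJECT_ID and openapi_rendered.yaml are made-up names):

# Substitute the real project id into the parameterized host, then deploy.
sed "s/YOUR-PROJECT-ID/${PROJECT_ID}/g" openapi.yaml > openapi_rendered.yaml
gcloud endpoints services deploy openapi_rendered.yaml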
One common solution for managing environment-specific properties is to create different build profiles and environment-specific files such as openapi_dev.yaml, openapi_qa.yaml, and openapi_prod.yaml, and supply the right one based on the profile (dev/qa/prod) being used. Refer here for more details.
Another way is documented in GitOps-style continuous delivery with Cloud Build, where a multi-branch, multi-repository approach is suggested.
Under the FAQ section of the Swagger OpenAPI guide, it is clearly mentioned that you can specify multiple hosts, e.g. development, test and production, but only in OpenAPI 3.0. OpenAPI 2.0 supports only one host per API specification (or two, if you count HTTP and HTTPS as different hosts). A possible way to target multiple hosts is to omit the host and schemes from your specification and serve a copy of it from each host; each copy of the specification will then target the corresponding host.
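For reference, a sketch of the difference (hostnames are placeholders):

# OpenAPI 3.0: multiple hosts via the servers list
servers:
  - url: https://dev-api.example.com/v1
  - url: https://api.example.com/v1

# OpenAPI 2.0: a single host per specification
host: api.example.com
basePath: /v1
schemes:
  - https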
As per the Google documentation, Cloud Endpoints currently supports OpenAPI version 2.0. A feature request has been filed for version 3.0 support, but there have been no releases yet. You can follow the updates here.

Azure Cloud Shell - Storage Creation Failed

Each time I try to use an existing share for Cloud Shell, it gives me the annoying error:
Error: 400 {"error":{"code":"AccountPropertyCannotBeUpdated","message":"The property 'kind' was specified in the input, but it cannot be updated."}}
I have tried creating a Resource Group and then a Storage Account beforehand and then selecting to create a new file share, but this too fails. I wanted to use a single share for storing the Cloud Shell img files for each member of my team so we could easily share files.
This seems to be bad behavior. Please use the standard options to initialize Cloud Shell and verify your Azure account type.

WSO2 loses APIs after changes in Docker container

I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in Docker using three containers (one for APIM, one for Analytics and one for MySQL) and I replaced some configuration files with my custom versions (e.g. DB, server name, gateway setup...).
Both APIM and Analytics are configured to save data in the MySQL container and I am able to see changes in the DB.
The issue is that I cannot find my APIs in either the publisher or the store after the container has been rebuilt. Changes in the DB persist: I can see the statistics for all my APIs, and I get an error if I try to create a new API using the same name or context, but the store is always empty after a new build.
I have also tried putting both /repository/deployment/server/synapse-config/default and /repository/tenants/ in two volumes, and I can see the files created in /.../default/api/ for my APIs, but I cannot figure out the issue.
Should I persist some additional directory not mentioned in the guide?
I don't want to put the whole APIM and Analytics homes in volumes if possible.
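For reference, a minimal sketch of the volume setup described above; the image name and the in-container WSO2 home path are illustrative assumptions, not taken from the post:

docker run -d --name apim \
  -v apim-synapse:/home/wso2carbon/wso2am-2.0.0/repository/deployment/server/synapse-config/default \
  -v apim-tenants:/home/wso2carbon/wso2am-2.0.0/repository/tenants \
  my-wso2am:2.0.0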
First, check whether the artifacts can be located in the Resources Browser.
If you can find the API related files, then the issue is related to indexing.
Do the following to re-index the artifacts in the registry:
Rename the <lastAccessTimeLocation> element in the <APIM_2.0.0_HOME>/repository/conf/registry.xml file. If you use a clustered/distributed API Manager setup, change the file in the API Publisher node. For example, change the /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime registry path to /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1 (a sketch follows these steps).
Shut down the API Manager, back up and delete the <APIM_2.0.0_HOME>/solr directory.
Finally start the API Manager.
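A sketch of that change in registry.xml; the surrounding indexingConfiguration element follows the stock file layout (an assumption worth verifying against your installation), and only the trailing _1 is the actual edit:

<indexingConfiguration>
    <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1</lastAccessTimeLocation>
    <!-- other indexing settings unchanged -->
</indexingConfiguration>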
The API information resides in the DB and in the file system (/repository/deployment/server/synapse-config/default/api). It is possible that the registry artifacts are not indexed properly. Can you try the following?
Delete the solr directory.
Open registry.xml and change the following line as shown below: <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime-1</lastAccessTimeLocation>
Now restart the server. The server will re-index all the files again.
Also make sure the databases are properly configured, especially the registry mounting related configurations.
