Google Cloud Logging fails for one project but not others (using Serilog)

I'm using the Serilog sink for Google Cloud Logging. There are two GCP projects: Calvin and Hobbes. Calvin is a new GCP project; Hobbes is an existing one. I am (trying to) write to both from a web API running locally. Hobbes already has logs being written to it by existing GCP-deployed apps.
When I use the JSON creds file for the Calvin project, I can connect to Firestore databases in the Calvin project and I can log to Cloud Logging (visible in Calvin's Log Explorer view). To ensure logging works, I also write to a local file (and it does work).
If I then swap in the JSON creds file for the Hobbes project (and adjust the Serilog configuration's projectID value), I can connect to Firestore databases in the Hobbes project, but nothing gets logged to Hobbes' Cloud Logging. The local file still gets written to.
What am I missing? Do I need to adjust more than the JSON creds files and the project ID? Could the Hobbes project be configured to block logging from non-GCP sources?
I was expecting that just swapping the credentials file and the project ID would let me switch logging from one project to the other.
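For reference, this is roughly how the logger is wired up (a minimal sketch assuming the Serilog.Sinks.GoogleCloudLogging and Serilog.Sinks.File packages; the project ID, key path, and log-file path are placeholders, not my real values):

```csharp
using System;
using Serilog;
using Serilog.Sinks.GoogleCloudLogging;

// Point Application Default Credentials at the service-account key for the
// target project (this is the file that gets swapped when switching Calvin <-> Hobbes).
Environment.SetEnvironmentVariable(
    "GOOGLE_APPLICATION_CREDENTIALS",
    @"C:\keys\hobbes-service-account.json"); // placeholder path

Log.Logger = new LoggerConfiguration()
    .WriteTo.GoogleCloudLogging(new GoogleCloudLoggingSinkOptions
    {
        ProjectId = "hobbes-project-id" // placeholder; should match the key's project
    })
    .WriteTo.File("logs/local.log")     // the local sink that keeps working either way
    .CreateLogger();

Log.Information("Test entry from the local web API");
Log.CloseAndFlush(); // flush the sink's background batch before the process exits
```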

Related

Deploy an Azure Function using azurerm_function_app_function

I have created a new C# Azure Function (EventHub trigger) and added my own logic to handle the events that come in. My issue is that I am not sure how to deploy this new app via Terraform. I have created an azurerm_function_app_function resource, and from reading the Terraform docs I can see that I can add a file (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app_function#file); however, for a C# project that's not using csx, I don't see how this can be used.
Is there a way to deploy a C# application using Terraform, or should I split these components up and have Terraform create the azurerm_function_app_function, then have a pipeline that uploads my code to the function?

save/load thingsboard configuration

Is it possible to somehow serialize the current Thingsboard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which will be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration is recovery: if for some reason the TBoard container crashes or somehow gets corrupted so it can't be started again, would I have to click through everything again to recreate all the device profiles and dashboards, configure the rule chains, etc.?
Regarding this line:
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my Thingsboard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save in a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
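If it helps, here is a rough sketch of the same export idea written against the plain REST API rather than the Python client; this one is in C#, and the host, credentials, page size, and output folder are assumptions, not values from my setup:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;
using System.Threading.Tasks;

class TbDashboardExport
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };

        // Log in and attach the JWT token that ThingsBoard expects in the X-Authorization header.
        var login = await http.PostAsJsonAsync("/api/auth/login",
            new { username = "tenant@thingsboard.org", password = "tenant" });
        login.EnsureSuccessStatusCode();
        var token = JsonDocument.Parse(await login.Content.ReadAsStringAsync())
            .RootElement.GetProperty("token").GetString();
        http.DefaultRequestHeaders.Add("X-Authorization", $"Bearer {token}");

        // List tenant dashboards, then fetch each full definition and save it as JSON.
        Directory.CreateDirectory("export/dashboards");
        var page = JsonDocument.Parse(
            await http.GetStringAsync("/api/tenant/dashboards?pageSize=100&page=0"));
        foreach (var d in page.RootElement.GetProperty("data").EnumerateArray())
        {
            var id = d.GetProperty("id").GetProperty("id").GetString();
            var title = d.GetProperty("title").GetString();
            var full = await http.GetStringAsync($"/api/dashboard/{id}");
            // A real exporter should sanitize the title before using it as a file name.
            File.WriteAllText($"export/dashboards/{title}.json", full);
        }
    }
}
```

Restoring is the reverse: read each saved JSON file from the repository and POST it back to the matching save endpoint (e.g. /api/dashboard for dashboards) on the fresh instance.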

Azure Cloud Shell - Storage Creation Failed

Seems each time I try to use an existing share for Cloud Shell, it gives me the annoying error: Error: 400 {"error":{"code":"AccountPropertyCannotBeUpdated","message":"The property 'kind' was specified in the input, but it cannot be updated."}}. I have tried just creating a Resource Group and then a Storage Account beforehand and then selecting to create a new File share, but this too fails. I wanted to use a single share for storing the Cloud Shell img files for each of the members of my team so we could easily share files.
It seems to be buggy behavior. Please use the standard options to initialize Cloud Shell and verify your Azure account type.

Storing configuration settings in Azure Service Fabric and MVC apps

I have reached the point where I have to get my Service Fabric Cluster deployed to Azure :) Besides the stateful/stateless services I have 2 MVC applications. I currently have a few settings in the web.config files (mostly connection strings).
I plan to configure continuous build / deploy using Visual Studio Online, but have not dug into doing that yet.
Where is the recommended place to store the configuration settings? I will need settings for 3 different environments (dev/test/prod).
I found a reference, at some point, to storing the settings on the build definition, which sounds like a better place for production credentials than config files that are part of the source code for the applications. I need to limit access to the values for the production environment, and having them in config files that all developers have access to does not sound like the best way to do this.
Any white papers or best practices regarding this I should be aware of?
You can use the publish profiles and application parameters of the Service Fabric project to store your settings for each environment.
In my case I have a dev, a homolog, and a production environment with different database connection strings, so I created publish profiles named Cloud.Homolog.xml and Cloud.Production.xml, and for the dev environment I'm still using Local.5Node.xml.
Then, when I want to deploy to one of these environments, I choose the corresponding publish profile.
Here is the documentation for multiple environment management:
Link
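As a rough illustration of where those values end up (the section and parameter names here are hypothetical), a service can read the setting that the application parameters override from its Config package at runtime:

```csharp
using System.Fabric;

static class ConfigReader
{
    // Reads a setting from the service's Config package (Config/Settings.xml).
    // The per-environment value comes from the ApplicationParameters file that
    // the chosen publish profile (e.g. Cloud.Production.xml) points at.
    public static string GetConnectionString()
    {
        var context = FabricRuntime.GetActivationContext();
        var config = context.GetConfigurationPackageObject("Config");
        return config.Settings
            .Sections["Database"]             // hypothetical section name
            .Parameters["ConnectionString"]   // hypothetical parameter name
            .Value;
    }
}
```

This only works inside a process that Service Fabric activates, since the activation context comes from the runtime.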

WSO2 loses APIs after changes in docker container

I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in Docker using three containers (one for APIM, one for Analytics, and one for MySQL) and I replace some configuration files with my custom versions (e.g. DB, server name, gateway setup...).
Both APIM and Analytics are configured to save data in the MySQL container and I am able to see changes in the DB.
The issue is that I cannot find my APIs in either the publisher or the store after the container has been rebuilt. Changes in the DB persist, I can see the statistics for all my APIs, and I get an error if I try to create a new API using the same name or context, but the store is always empty after a new build.
I have also tried to put both /repository/deployment/server/synapse-config/default and /repository/tenants/ in two volumes and I can see the files created in /.../default/api/ for my APIs, but I cannot figure out the issue.
Should I persist some additional directory not mentioned in the guide?
I don't want to put the whole APIM and Analytics homes in volumes if possible.
First, check whether artifacts can be located in Resources Browser.
If you can find the API related files, then the issue is related to indexing.
Do the following to re-index the artifacts in the registry:
Rename the <lastAccessTimeLocation> element in the <APIM_2.0.0_HOME>/repository/conf/registry.xml file. If you use a clustered/distributed API Manager setup, change the file in the API Publisher node. For example, change the /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime registry path to /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1.
Shut down the API Manager, back up and delete the <APIM_2.0.0_HOME>/solr directory.
Finally start the API Manager.
The API information resides in the DB and in the file system (/repository/deployment/server/synapse-config/default/api). It is possible that the registry artifacts are not indexed properly. Can you try the following?
Delete the solr directory.
Open registry.xml and change the following line as shown below: <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime-1</lastAccessTimeLocation>
Now restart the server. The server will re-index all the files.
Also make sure the databases are properly configured, especially the registry mounting related configurations.
