WSO2 loses APIs after changes in Docker container - docker

I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in Docker using three containers (one for APIM, one for Analytics and one for MySQL), and I replace some configuration files with my custom versions (e.g. DB, server name, gateway setup...).
Both APIM and Analytics are configured to save data in the MySQL container and I am able to see changes in the DB.
The issue is that I cannot find my APIs in either the Publisher or the Store after the container has been rebuilt. Changes in the DB persist: I can see the statistics for all my APIs and I get an error if I try to create a new API using the same name or context, but the Store is always empty after a new build.
I have also tried to put both /repository/deployment/server/synapse-config/default and /repository/tenants/ in two volumes, and I can see the files created in /.../default/api/ for my APIs, but I still cannot figure out the issue.
Should I persist some additional directory not mentioned in the guide?
I don't want to put the whole APIM and Analytics homes in volumes if possible.

First, check whether the artifacts can be located in the Resources Browser.
If you can find the API related files, then the issue is related to indexing.
Do the following to re-index the artifacts in the registry:
Rename the registry path given in the <lastAccessTimeLocation> element in the <APIM_2.0.0_HOME>/repository/conf/registry.xml file. If you use a clustered/distributed API Manager setup, change the file on the API Publisher node. For example, change the /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime registry path to /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1.
Shut down the API Manager, back up and delete the <APIM_2.0.0_HOME>/solr directory.
Finally start the API Manager.
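A minimal sketch of those steps as a script, in case you want to bake the re-index into your container build; the APIM home path and the exact element text are assumptions, so check them against your own registry.xml first:

```python
# Run while the API Manager is stopped. APIM_HOME is an assumed install path.
import shutil
from pathlib import Path

APIM_HOME = Path("/home/wso2carbon/wso2am-2.0.0")      # adjust to your container layout
registry_xml = APIM_HOME / "repository/conf/registry.xml"

# Append a suffix to the lastaccesstime registry path so indexing starts from scratch.
text = registry_xml.read_text()
text = text.replace("indexing/lastaccesstime<", "indexing/lastaccesstime_1<")
registry_xml.write_text(text)

# Back up the solr directory by moving it aside; it is rebuilt on the next start-up.
solr_dir = APIM_HOME / "solr"
if solr_dir.exists():
    shutil.move(str(solr_dir), str(APIM_HOME / "solr.bak"))
```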

The API information resides in the DB and in the file system (/repository/deployment/server/synapse-config/default/api). It is possible that the registry artifacts are not indexed properly. Can you try the following?
Delete the solr directory.
Open registry.xml and change the following line as shown below: <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime-1</lastAccessTimeLocation>
Now restart the server. The server will re-index all the files again.
Also make sure the databases are properly configured, especially the registry mounting related configurations.

Related

Google Cloud Logging fails for one project but not others (using Serilog)

I'm using the Serilog sink for Google Cloud Logging. There are two GCP projects: Calvin and Hobbes. Calvin is a new GCP project, Hobbes is an existing GCP project. I am (trying to) write to both from a web API running locally. Hobbes has logs being written to it from existing GCP-deployed apps.
When I use the JSON creds file for the Calvin project, I can connect to Firestore databases in the Calvin project and I can log to Cloud Logging (visible in Calvin's Log Explorer view). To ensure logging works, I also write to a local file (and it does work).
If I then swap out the JSON creds file for the Hobbes project (and adjust the Serilog configuration's projectID value) then I can connect to Firestore databases in the Hobbes project but nothing gets logged to Hobbes' Cloud Logging. The local file does get logged to.
What am I missing? Do I need to adjust more than the JSON creds files and the project ID? Could the Hobbes project be configured to block logging from non-GCP sources?
I was expecting that just swapping the credentials file and the project ID would let me switch between logging to one project and then the other.
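One way to narrow this down (a diagnostic sketch, not Serilog itself): use the Google Cloud Logging client library directly with the Hobbes credentials and see whether that service account can write log entries to the project at all. The file name and project ID below are placeholders.

```python
# pip install google-cloud-logging
from google.cloud import logging

# Placeholders: the Hobbes service-account key file and project ID.
client = logging.Client.from_service_account_json(
    "hobbes-creds.json", project="hobbes-project-id"
)
logger = client.logger("serilog-debug")
logger.log_text("test entry written from the local web API host")
```

If this entry also never shows up in Hobbes' Logs Explorer, the problem is on the project/IAM side (for example the account lacking the Logs Writer role) rather than in the Serilog configuration.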

Save/load ThingsBoard configuration

Is it possible to somehow serialize the current ThingsBoard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which would be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if for some reason the TBoard container crashes or somehow gets corrupted so it can't be started again? Would I have to click through everything again to re-create all the device profiles and dashboards, configure the rule chains, etc.?
Regarding these lines:
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my ThingsBoard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into json files which I save into a github repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
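For reference, here is a minimal sketch of the export side using the plain REST API rather than the Python REST client package. The endpoint paths, field names, and credentials are assumptions on my part, so verify them against your instance's Swagger UI.

```python
import json
import os
import requests

BASE = "http://localhost:8080"   # ThingsBoard URL (placeholder)

# Log in and grab a JWT token.
login = requests.post(f"{BASE}/api/auth/login",
                      json={"username": "tenant@thingsboard.org", "password": "tenant"})
headers = {"X-Authorization": f"Bearer {login.json()['token']}"}

# Page through the tenant's dashboards and dump each full dashboard to a JSON file.
os.makedirs("dashboards", exist_ok=True)
page = requests.get(f"{BASE}/api/tenant/dashboards",
                    params={"pageSize": 100, "page": 0}, headers=headers).json()
for info in page["data"]:
    dashboard = requests.get(f"{BASE}/api/dashboard/{info['id']['id']}", headers=headers).json()
    with open(os.path.join("dashboards", f"{info['title']}.json"), "w") as fh:
        json.dump(dashboard, fh, indent=2)
```

The same pattern should work for rule chains and device profiles, and the import direction is the matching create/update POST endpoints run against the fresh environment (again, check the Swagger UI for the exact paths).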

Azure DevOps secure file guids

In my ADO build pipeline, I have a secure file download step. When we branch versions, we use PowerShell to do the heavy lifting of cloning build definitions and updating settings/info in the cloned pipeline.
One issue I've run into is that the Secure File Download step doesn't accept variables, and in the UI you can only select names of files that already exist, so we've had to manually update it after every new branch we create.
I've grabbed the definition task step in PowerShell (as $step) and was hoping I could set $step.inputs.fileInputs to a variable I assign to something like cert-$newVersion; however, it is currently set to a GUID.
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Yes. This API exists.
You could try to use the following Rest API:
GET https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/distributedtask/securefiles?api-version=6.1-preview.1
You could get the secure file GUID based on the file name.
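For example, a small sketch that maps secure file names to GUIDs using that endpoint (Python with a PAT here; the same call works from PowerShell with Invoke-RestMethod, and the id/name fields in the response are my assumption from the usual ADO list format):

```python
import requests

org, project = "MyOrg", "MyProject"        # placeholders
pat = "<personal access token>"            # needs a scope allowed to read secure files
new_version = "1.2.3"                      # placeholder for the newly branched version

url = (f"https://dev.azure.com/{org}/{project}"
       "/_apis/distributedtask/securefiles?api-version=6.1-preview.1")
resp = requests.get(url, auth=("", pat))   # ADO accepts the PAT as the basic-auth password
resp.raise_for_status()

# Assumed response shape: {"count": n, "value": [{"id": "<guid>", "name": "<file name>", ...}]}
guid_by_name = {f["name"]: f["id"] for f in resp.json()["value"]}
print(guid_by_name.get(f"cert-{new_version}"))
```

The GUID returned is the value you would then write back into the cloned definition's task input.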

Changing path in a config file stored in TFS

We have a solution stored in TFS that deploys to SharePoint. As part of the solution, we have a config file that has a path to a specific site. The problem is that this path changes depending on the user's dev machine, e.g.
<site>devmachine1/somesite</site>
<site>devmachine2/somesite</site>
This can obviously be updated to work locally after a checkout; however, when the file gets checked back in, it will be incorrect on the next user's machine when they do a Get. Is there a way that the file can be excluded, or that a script can be run to update the path when it is checked back in or out?
The best option would be to rationalise all of the developer workstations.
I would do this by adding an identical entry to the hosts file that hard-codes the name of the SharePoint server, allowing the same config file to work on every dev machine.
Make it dynamic by having a pre-build instruction that adds the host entry; that way any developer can get and build.
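A rough sketch of that pre-build idea (the alias name is invented; the checked-in config would then always reference something like <site>sharepoint-dev/somesite</site>, and writing to the hosts file needs admin rights, so in practice this runs elevated or is baked into the workstation image):

```python
import platform
from pathlib import Path

ALIAS = "sharepoint-dev"    # shared hostname referenced by the checked-in config file
TARGET_IP = "127.0.0.1"     # each developer points the alias at their own SharePoint box

hosts = (Path(r"C:\Windows\System32\drivers\etc\hosts")
         if platform.system() == "Windows" else Path("/etc/hosts"))

content = hosts.read_text()
if ALIAS not in content:
    hosts.write_text(content.rstrip("\n") + f"\n{TARGET_IP}\t{ALIAS}\n")
```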
You can use a custom check-in policy to update the file when it is checked in. See here.

Jenkins: Make file publicly available

I am creating files with a custom version number during the build that I want to be publicly available over HTTP.
Assuming I am building the project "MyTestApp", I want the version number text file I created to be available at a location like http://jenkins.company/job/MyTestApp/revision.txt
Any idea how to achieve this?
David, this depends on what you mean by "publicly available". If your Jenkins instance is secured (jenkins.company/configureSecurity/), then access to artifacts requires that your HTTP session be authenticated. If all users who need access have accounts on the Jenkins server, then you just need to use the post-build action "archive the artifacts", and your text file will be available here:
jenkins.company/job/MyTestApp/<build number>/artifact/revision.txt
Or here:
jenkins.company/job/MyTestApp/lastSuccessfulBuild/artifact/revision.txt
See this screenshot: http://note.io/17oiykI
If you need unauthenticated access, you could publish your artifacts to another web server on the same or a different host. Or you could upload them to an Amazon S3 bucket.
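When authenticated access is acceptable, downloading the archived file is then just an HTTP GET with credentials; a small sketch (the user name and API token are placeholders):

```python
import requests

url = "http://jenkins.company/job/MyTestApp/lastSuccessfulBuild/artifact/revision.txt"
# A Jenkins user name and that user's API token (placeholders), sent as basic auth.
resp = requests.get(url, auth=("builduser", "11abc0apitoken"))
resp.raise_for_status()
print(resp.text)
```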
