Save/load ThingsBoard configuration

Is it possible to somehow serialize the current ThingsBoard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) plus a single TBoard instance with the saved configuration, which will be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration is recovery: if for some reason the TBoard container crashes or gets corrupted so that it can't be started again, would I have to click through everything again to recreate all device profiles and dashboards, configure rule chains, etc.?

Regarding these lines:
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my ThingsBoard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save into a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
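For anyone looking for a starting point, below is a minimal sketch of the export half, using plain requests against ThingsBoard's REST API (the same API the Python REST client wraps). The host, credentials, and the dashboards/ output folder are placeholders, and the exact endpoints may differ slightly between ThingsBoard versions.

import json
import os
import requests

TB_URL = "http://localhost:8080"  # placeholder ThingsBoard host
LOGIN = {"username": "tenant@thingsboard.org", "password": "tenant"}  # placeholder credentials

# Authenticate and build the JWT header used by all subsequent calls
token = requests.post(f"{TB_URL}/api/auth/login", json=LOGIN).json()["token"]
headers = {"X-Authorization": f"Bearer {token}"}

# Page through the tenant's dashboards and dump each full dashboard to a JSON file
os.makedirs("dashboards", exist_ok=True)
page = 0
while True:
    resp = requests.get(f"{TB_URL}/api/tenant/dashboards",
                        params={"pageSize": 50, "page": page},
                        headers=headers).json()
    for info in resp["data"]:
        dashboard = requests.get(f"{TB_URL}/api/dashboard/{info['id']['id']}",
                                 headers=headers).json()
        with open(f"dashboards/{info['title']}.json", "w") as f:
            json.dump(dashboard, f, indent=2)
    if not resp.get("hasNext"):
        break
    page += 1

The import direction is the mirror image: read each saved file and POST it back to /api/dashboard on the fresh instance.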

Related

Spring Cloud Data Flow - Task Properties

I'm using SCDF and I was wondering if there is any way to configure default properties for one application.
I have a task application registered in SCDF, and this application gets some JDBC properties to access a business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these props in a properties file like this? (It's a bit weird to define them at launch time.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, because the credentials would appear in the URL.
Or is there another way/place to define default business props for an application? These props will only be used by this task.
The purpose is to have one place where the OPS team can configure the URL and credentials without playing with the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put the other way round: you can't fully install/configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
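In other words, after the first fully configured run, a bare relaunch from the Data Flow shell is enough:
task launch fooTask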
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and continues to use the old one. No idea whether this is by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and we are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the weird configuration issues mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and then recreating it with the new values.
With this operator we have reached what you are looking for: all our SCDF configuration is in a git repository and all changes are done through merge requests. Thanks to CI/CD, on the next launch, the new configuration is used.
However, a Kubernetes operator should be part of the product. Without it, SCDF on Kubernetes feels quite "alien".
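For illustration only (this is a sketch, not the operator itself), launching a task through SCDF's REST interface looks roughly like this; the /tasks/executions endpoint and its name/properties parameters come from the SCDF REST API docs, while the host, task name, and property values are placeholders (the password is deliberately left out, since everything here ends up in the request URL):

import requests

SCDF_URL = "http://scdf-server:9393"  # placeholder SCDF server address

# Properties are passed as a comma-separated key=value list; note they travel in the
# query string, which is exactly why secrets are better kept out of this call
props = ",".join([
    "app.foo.export.datasource.url=jdbc:db2://blablabla",
    "app.foo.export.datasource.username=testuser",
    "app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver",
])

resp = requests.post(f"{SCDF_URL}/tasks/executions",
                     params={"name": "fooTask", "properties": props})
resp.raise_for_status()
print("task execution id:", resp.text)  # SCDF returns the new execution id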

Fitnesse wiki file persistence options

What are the persistence options for FitNesse files? So far it seems like the file system is the only thing supported. There does appear to be an out-of-date database plugin. Is there anything else that is supported (S3, database, etc.)? Is there a way to control where files are persisted if using the filesystem?
I believe there is very little in that area. The location of the files can be controlled using a command line option. See http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
-d /path/to/fitnesse/root
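For example, a typical standalone launch that pins both the port and the root directory looks roughly like this (the jar name and path are placeholders; see the reference guide above for the full option list):
java -jar fitnesse-standalone.jar -p 8080 -d /path/to/fitnesse/root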
How I've used the FitNesse wiki is as a local development tool, with the pages on the file system. Once I'm satisfied with the tests I commit them to version control (e.g. git) so that they become part of the (integration) test pipeline setup (e.g. they are run as part of the CI/CD pipeline of the project).
There is a plugin I believe that will automatically commit any save actions to Git, but I've never used that. Saving each edit action just pollutes version control in my opinion. I only want to see tests after they have been checked/completed, and that tends not to be each save.
Working in a shared wiki environment (where I would expect a non-file-system approach would fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates 'noise' in the test set: which tests form the current reference set that should pass, and which are still work in progress?
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.

WSO2 loses APIs after changes in Docker container

I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in Docker using three containers (one for APIM, one for Analytics, and one for MySQL) and I replace some configuration files with my custom versions (e.g. DB, server name, gateway setup...).
Both APIM and Analytics are configured to save data in the MySQL container and I am able to see changes in the DB.
The issue is that I cannot find my APIs in either the publisher or the store after the container has been rebuilt. The changes in the DB persist: I can see the statistics for all my APIs and I get an error if I try to create a new API using the same name or context, but the store is always empty after a new build.
I have also tried to put both /repository/deployment/server/synapse-config/default and /repository/tenants/ in two volumes and I can see the files created in /.../default/api/ for my APIs, but I cannot figure out the issue.
Should I persist some additional directory not mentioned in the guide?
I don't want to put the whole APIM and Analytics homes in volumes if possible.
First, check whether artifacts can be located in Resources Browser.
If you can find the API related files, then the issue is related to indexing.
Do the following to re-index the artifacts in the registry:
Rename the registry path in the <lastAccessTimeLocation> element in the <APIM_2.0.0_HOME>/repository/conf/registry.xml file (see the example after these steps). If you use a clustered/distributed API Manager setup, change the file in the API Publisher node. For example, change the /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime registry path to /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1.
Shut down the API Manager, back up and delete the <APIM_2.0.0_HOME>/solr directory.
Finally start the API Manager.
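For reference, the change from step 1 looks roughly like this in registry.xml (assuming the default indexing configuration; only the path value changes):
Before:
<lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime</lastAccessTimeLocation>
After:
<lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1</lastAccessTimeLocation>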
The API information resides in the DB and in the file system (/repository/deployment/server/synapse-config/default/api). It is possible that the registry artifacts are not indexed properly. Can you try the following?
Delete the solr directory.
Open registry.xml and change the following line as shown below: <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime-1</lastAccessTimeLocation>
Now restart the server. The server will re-index all the files.
Also make sure the databases are properly configured, especially the registry mounting related configurations.

How to make WiX install a service in the context of a newly created user

I am creating an MSI package of a Windows service using WiX. I want to run the service under a regular user account without administrative privileges. For better security I want to put the files of the service in the personal user folders (such as AppData\Local\Programs\CompanyName... for binaries and AppData\Local\CompanyName... for config and data files) with the appropriate file access permissions for the user. I imagine the following scenario:
Start the MSI in the per-machine context.
During the client stage of the installation ask for the user name and password.
During the server stage of the installation:
a) create the user
b) change to its context and install the program files to ProgramFilesFolder and the data files to LocalAppDataFolder
c) change back to the admin context and install and configure the service to be run under the user account
I am stuck at step 3 b), since from what I've learned I can't change the installation context after switching to the server side of the installation. Could you please advise me on how I could achieve the goal described in the first lines? In particular, if I have to copy files to another user's personal folders, what would be the most reliable way to get their paths? Or maybe I am wrong and installing a service into a personal user folder is simply bad practice?
I am aware of the presence of the built-in Local Service account but would like to narrow the service context even more.
The local appdata folder is the problem. If you create a user account, the user folders aren't created until the user does an interactive login, and even then in some environments they may be redirected via policy. I am unaware of any reason that local data is better (in a security sense) than the ProgramFiles folder, which is write-restricted to administrators. I'd just install the service binaries to ProgramFiles. In the UI you can collect credentials and use them when the service is installed. A problem with using external credentials is that things like repair and sometimes patching will fail unless you have the credentials available (having saved them somewhere safe), because otherwise the property values you use will be empty on repair. If LocalService works, then use it.
It normally doesn't matter what privileges a service has because it usually knows what it's doing. It's only an issue if it calls unknown external code that may try to do something bad, or if it gets asked to do random things such as "run this program" or "copy this file" without doing any internal validation or having a whitelist of what it's allowed to do. So it might be useful to know if there's a specific problem you're trying to address or just following good practices.
I don't think you're being overcautious, service isolation is definitely a good goal. If you can require Win7/2008R2 or later, then you can run the service under a virtual account. There is no password required for virtual accounts, and they don't have the ability to completely wreck the machine like SYSTEM does. You should be able to use it like this:
<ServiceInstall Account="NT SERVICE\$(var.ServiceName)" Name="$(var.ServiceName)".../>
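For context, a slightly fuller sketch of the component carrying the service executable might look like this (standard WiX v3 elements; the Id values, variables, and service name are placeholders):

<!-- Sketch: install and start the service under the per-service virtual account (WiX v3; ids, variables and names are placeholders) -->
<Component Id="ServiceExeComponent" Guid="*">
  <File Id="ServiceExe" Source="$(var.ServiceExePath)" KeyPath="yes" />
  <ServiceInstall Id="ServiceInstaller"
                  Name="$(var.ServiceName)"
                  DisplayName="$(var.ServiceName)"
                  Type="ownProcess"
                  Start="auto"
                  ErrorControl="normal"
                  Account="NT SERVICE\$(var.ServiceName)" />
  <ServiceControl Id="ServiceController"
                  Name="$(var.ServiceName)"
                  Start="install"
                  Stop="both"
                  Remove="uninstall"
                  Wait="yes" />
</Component>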
It's actually better for the service executables to be in Program Files, that way the service can't modify its own exe.

ASP.NET MVC and multiple environments

How does ASP.NET MVC, if at all, deal with or provide ways to create your application using multiple environments? For example:
Development environment (local machine, probably run via the built-in web server and talking to a local database)
Testing (runs against a preloaded database with example data, although this part could be skipped and mocks could be used)
Production database on a real server with real data
Ruby on Rails has the concept of environments and "automagically" can deduce if you're in development or production, so you can specify your connection information (connection string) in a config file and the framework dynamically pulls the appropriate one. Is there a similar way of doing things with .NET MVC? If not then how are professional developers using .NET MVC handling different environments?
The only way I can think of is to manually add an "environment" global method (or use an enum, or something like that; maybe this is a use for something like the State pattern?), store the different connection strings in the web.config file, and then create a base class from which all data access classes derive and which provides a way to obtain the connection string for the current environment; this would then have to be set to production when the time comes to put the application live.
Is there another way? Most of the .NET MVC videos and articles I've seen don't even bother with separate environments but only use a development database and don't indicate how you do it in production.
I'd say this is really a question of your company's internal processes. Since every company is a little bit different it's hard to have a "right" generic way to support dev/test/alpha/production and/or other environments.
One way: Create a setup program that supplies the correct connection string based on the environment chosen during the setup process.
Another way: System Admin edits web.config file to supply correct connection string during install.
Yet another way: Connection strings are stored in the system registry.
Even another odd way: You have all your connection strings for all environments in web.config, plus a setting in appSettings that tells you which one to use (a sketch of this appears after this answer).
Depending on the client, I've done all of these. There are more but these are the more popular.
(One client wanted to store the connection string in the database itself. Really.)
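A sketch of the appSettings approach mentioned above, with all environments kept in web.config and a switch that tells the code which one to read (the ActiveEnvironment key name and the connection strings are made-up placeholders):

<appSettings>
  <!-- Made-up key: code reads this value and picks the matching connection string -->
  <add key="ActiveEnvironment" value="Development" />
</appSettings>
<connectionStrings>
  <add name="Development" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyApp;Integrated Security=True" />
  <add name="Production" connectionString="Data Source=prod-sql;Initial Catalog=MyApp;Integrated Security=True" />
</connectionStrings>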
You can use an alias for your database. You just point these aliases to different servers in the different environments. They are stored in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo, if I remember right. Then you use the alias in the connection string.
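With an alias in place, the connection string stays identical across environments, for example (MyDbAlias and MyApp being placeholders):
Data Source=MyDbAlias;Initial Catalog=MyApp;Integrated Security=True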
In response to Jason's answer:
We use Enterprise Library Environments to configure the different environment parameters, and via MSBuild we invoke the Merge Configuration Tool that generates the different configs for each environment. The deploy process picks the right config file depending on which environment to install to.
I was able to solve a similar situation following these steps:
In Visual Studio, go to Build > Configuration Manager
Click "New"
Choose a name for your configuration and copy the settings from an existing config. After it is created, the configuration will be available to target as a build configuration
Create a Web.{env-name-you-chose}.config in your application folder, along with the original Web.config file.
Open your .csproj file with Visual Studio or any text editor
Search for the section that lists the Web.config transform files and add an entry for the config file name you gave previously (a sketch of that section appears after these steps)
Open Visual Studio, reload the projects if required, and you can now choose your configuration via the CLI or a manual publish from Visual Studio.
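For reference, the csproj section mentioned in the steps above typically ends up looking roughly like this once the extra transform is added (shown here assuming the new configuration was named Staging; use whatever name you chose):

<Content Include="Web.config" />
<Content Include="Web.Debug.config">
  <DependentUpon>Web.config</DependentUpon>
</Content>
<Content Include="Web.Release.config">
  <DependentUpon>Web.config</DependentUpon>
</Content>
<Content Include="Web.Staging.config">
  <DependentUpon>Web.config</DependentUpon>
</Content>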
There is a Publishing Wizard (in Visual Studio) which lets you change parts of web.config for the release build automatically. Which happens to be the feature you are asking about. No magic though.
What we have done is during our automated build process (Hudson), we alter values in web.config depending on which environment the build is for. Unfortunately there isn't a magical way to do this.
For deployment, which I assume is what the OP was asking about, one creates multiple configurations and picks a different configuration in the publish. These are called transforms and they operate on web.config. One would have at least three publish profiles: one each for dev, test, and prod. One can change more than just the connection string this way: one can turn on custom errors, turn off debugging, and change the values of configuration variables. I highly recommend it.
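As a sketch of what such a transform looks like, a Web.Release.config that swaps the connection string and turns off debugging might contain something like this (the connection name and server are placeholders):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the connection string whose name matches "DefaultConnection" (placeholder name) -->
    <add name="DefaultConnection"
         connectionString="Data Source=prod-sql;Initial Catalog=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip the debug attribute for the production build -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>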
I have a similar question. I have a log table reader. I want it to read log tables in the development, test, and production databases. The major difficulty is that my user account doesn't have permission to look at test and production. It's some silly security thing. The user that I'm impersonating in the application does have permission. I'm struggling to tell MVC to build the test and production models using the impersonated user.
