Azure Cloud Shell - Storage Creation Failed

Each time I try to use an existing share for Cloud Shell, it fails with this error:
Error: 400 {"error":{"code":"AccountPropertyCannotBeUpdated","message":"The property 'kind' was specified in the input, but it cannot be updated."}}
I have also tried creating a Resource Group and a Storage Account beforehand and then selecting the option to create a new file share, but this fails too. I wanted to use a single share for storing the Cloud Shell image files of all my team members so we could easily share files.

This looks like a misbehavior in the portal. Try initializing Cloud Shell with the standard options (letting it create the storage account and file share for you) and verify your Azure account type.
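The error suggests the portal is trying to update the 'kind' of the existing storage account, which Azure does not allow, so it is worth confirming the kind of the account that holds your share. A minimal sketch with the Azure SDK for Python (the subscription, resource group, and account names are placeholders; azure-identity and azure-mgmt-storage must be installed):

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholders: substitute your own subscription, resource group, and account.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<storage-account>"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
account = client.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT_NAME)

# Cloud Shell mounts its file share from a general-purpose account
# ("Storage" or "StorageV2"); an account of another kind cannot simply
# be converted in place, which would explain the 400 above.
print(account.name, account.kind, account.location)

If the kind is not general-purpose, creating a fresh StorageV2 account and pointing Cloud Shell at a share inside it avoids the update entirely.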

Related

Save/load ThingsBoard configuration

Is it possible to somehow serialize the current ThingsBoard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which will be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if for some reason the TBoard container crashes or somehow gets corrupted so it can't be started again? Would I have to click through everything again to create all the device profiles and dashboards, configure the rule chains, etc.?
Regarding this line
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my ThingsBoard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save into a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before, so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
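Not the answerer's actual script, but a minimal sketch of the export half against ThingsBoard's REST API (the host, credentials, and output directory are placeholders; these are the same endpoints the Python REST client wraps):

import json
import pathlib
import requests

TB_URL = "http://localhost:8080"                # placeholder ThingsBoard host
OUT_DIR = pathlib.Path("tb-config/dashboards")  # placeholder output directory

# Log in and obtain a JWT token.
token = requests.post(
    f"{TB_URL}/api/auth/login",
    json={"username": "tenant@thingsboard.org", "password": "tenant"},
).json()["token"]
headers = {"X-Authorization": f"Bearer {token}"}

# List the tenant's dashboards, then save each full definition as JSON.
OUT_DIR.mkdir(parents=True, exist_ok=True)
page = requests.get(
    f"{TB_URL}/api/tenant/dashboards",
    params={"pageSize": 100, "page": 0},
    headers=headers,
).json()
for info in page["data"]:
    dashboard = requests.get(
        f"{TB_URL}/api/dashboard/{info['id']['id']}", headers=headers
    ).json()
    (OUT_DIR / f"{info['title']}.json").write_text(json.dumps(dashboard, indent=2))

The import direction is the mirror image: read each saved file and POST it back (for dashboards, to /api/dashboard); rule chains and device profiles follow the same list/fetch/save pattern with their own endpoints.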

Azure DevOps secure file guids

In my ADO build pipeline, I have a secure file download step. When we branch versions, we use PowerShell to do the heavy lifting of cloning build definitions and updating settings/info in the cloned pipeline.
One issue I've run into is that the Secure File Download step doesn't accept variables, and in the UI you can only select names of files that already exist, so we've had to update it manually after every new branch we create.
I've grabbed the definition's task step in PowerShell (as $step) and was hoping I could set $step.inputs.fileInputs to a variable I assign to something like cert-$newVersion; however, it is currently set to a GUID.
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Yes, this API exists. You can use the following REST API:
GET https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/distributedtask/securefiles?api-version=6.1-preview.1
The response lists each secure file with its name and id, so you can look up the GUID by file name.
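A quick sketch of that lookup in Python (the organization, project, PAT, and file-name pattern are placeholders; ADO accepts a personal access token as the password in basic auth with an empty username):

import requests

ORG = "myorg"            # placeholder organization
PROJECT = "myproject"    # placeholder project
PAT = "<personal-access-token>"
NEW_VERSION = "1.2.3"    # placeholder for the newly branched version

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/distributedtask/securefiles"
resp = requests.get(url, params={"api-version": "6.1-preview.1"}, auth=("", PAT))
resp.raise_for_status()

# Build a name -> GUID map, then resolve the file for this branch; the GUID
# can then be written into $step.inputs of the cloned build definition.
guids = {f["name"]: f["id"] for f in resp.json()["value"]}
print(guids[f"cert-{NEW_VERSION}"])

The same call works from PowerShell with Invoke-RestMethod if you'd rather keep everything in the existing cloning script.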

Sawtooth - configure-onchain-perms problem

I want to do some specific tasks using Sawtooth in combination with Ansible. I am using this Ansible project: https://github.com/hyperledger/sawtooth-ansible. The problem appears when I run "Configure onchain permissions": the configure-onchain-perms role always times out in the task Create Transaction Access Policy. I also tried to install everything manually, without Ansible, but the output is the same. I get the same result with the simple command sawtooth identity policy create policy_1 "PERMIT_KEY *". Could anyone guide me on how to use the identity family in the right way?
Is the identity transaction processor, identity-tp, running on all nodes?
A quote from:
https://sawtooth.hyperledger.org/docs/core/releases/latest/cli/identity-tp.html
This process is required to apply any changes to on-chain permissions
used by the Sawtooth platform.
There is also an active chat forum for Sawtooth at https://chat.hyperledger.org/channel/sawtooth

WSO2 loses APIs after changes in Docker container

I'm having another problem using WSO2 API Manager 2.0.0: I have installed it in docker using three containers (one for APIM, one for Analytics and one for MySQL) and I replace some configuration files with my custom version (e.g. DB, server name, gateway setup...).
Both APIM and Analytics are configured to save data in the MySQL container and I am able to see changes in the DB.
The issue is that I cannot find my APIs in either the publisher or the store after the container has been rebuilt. Changes in the DB persist: I can see the statistics for all my APIs, and I get an error if I try to create a new API using the same name or context, but the store is always empty after a new build.
I have also tried to put both /repository/deployment/server/synapse-config/default and /repository/tenants/ in two volumes and I can see the files created in /.../default/api/ for my APIs, but I cannot figure out the issue.
Should I persist some additional directory not mentioned in the guide?
I don't want to put the whole APIM and Analytics homes in volumes if possible.
First, check whether artifacts can be located in Resources Browser.
If you can find the API related files, then the issue is related to indexing.
Do the following to re-index the artifacts in the registry:
1. Rename the <lastAccessTimeLocation> element in the <APIM_2.0.0_HOME>/repository/conf/registry.xml file. If you use a clustered/distributed API Manager setup, change the file in the API Publisher node. For example, change the /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime registry path to /_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1.
2. Shut down the API Manager, then back up and delete the <APIM_2.0.0_HOME>/solr directory.
3. Finally, start the API Manager.
The API information resides in the DB and in the file system (/repository/deployment/server/synapse-config/default/api). It is possible that the registry artifacts are not indexed properly. Can you try the following?
1. Delete the solr directory.
2. Open registry.xml and change the following line as shown below: <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime-1</lastAccessTimeLocation>
3. Restart the server. It will re-index all the files again.
Also make sure the databases are properly configured, especially the registry mounting related configurations.

How to make WiX install a service in the context of a newly created user

I am creating an MSI package of a Windows service using WiX. I want to run the service under a regular user account without administrative privileges. For better security I want to put the files of the service in the personal user folders (such as AppData\Local\Programs\CompanyName... for binaries and AppData\Local\CompanyName... for config and data files) with the appropriate file access permissions for the user. I imagine the following scenario:
1. Start the MSI in the per-machine context.
2. During the client stage of the installation, ask for the user name and password.
3. During the server stage of the installation:
a) create the user;
b) change to its context and install the program files to ProgramFilesFolder and the data files to LocalAppDataFolder;
c) change back to the admin context and install and configure the service to be run under the user account.
I am stuck at step 3 b), as from what I've learned I can't change the installation context after switching to the server side of the installation. Could you please advise me on how I could achieve the goal described above? In particular, if I have to copy files to another user's personal folders, what would be the most reliable way to get their paths? Or am I wrong, and is installing a service into a personal user folder bad practice altogether?
I am aware of the presence of the built-in Local Service account but would like to narrow the service context even more.
The local AppData folder is the problem. If you create a user account, the user folders aren't created until the user does an interactive login, and even then in some environments they may be redirected via policy. I am unaware of any reason that local data is better (in a security sense) than the ProgramFiles folder, which is write-restricted to administrators, so I'd just install the service binaries to ProgramFiles. In the UI you can collect credentials and use them when the service is installed. A problem with using external credentials is that things like Repair (and sometimes patching) will fail unless you have the credentials available, having saved them somewhere safe, because otherwise the property values you use will be empty on repair. If Local Service works, then use it.
It normally doesn't matter what privileges a service has because it usually knows what it's doing. It's only an issue if it calls unknown external code that may try to do something bad, or if it gets asked to do random things such as "run this program" or "copy this file" without doing any internal validation or having a whitelist of what it's allowed to do. So it might be useful to know if there's a specific problem you're trying to address or just following good practices.
I don't think you're being overcautious; service isolation is definitely a good goal. If you can require Win7/2008R2 or later, you can run the service under a virtual account. Virtual accounts require no password, and they don't have the ability to completely wreck the machine the way SYSTEM does. You should be able to use it like this:
<ServiceInstall Account="NT SERVICE\$(var.ServiceName)" Name="$(var.ServiceName)".../>
It's actually better for the service executables to be in Program Files, that way the service can't modify its own exe.
