XDT transformation for JSON in VSTS?

Is there any tool that can transform JSON files, similar to what XDT transformations do for XML files?

The Azure App Service Deploy task has a variable substitution option that supports JSON files. You provide the JSON file paths (wildcards are allowed in the path) and define the substitution variables and values in the task, and they are applied during the deployment. However, for on-premises IIS deployments there is currently no task that supports JSON variable substitution. More information on JSON variable substitution is here
The source code for this VSTS task can be found here; you can inspect the implementation logic there and write your own component to perform the JSON variable substitution.
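For a rough idea of what such a component does, here is a minimal Python sketch, assuming the task's documented convention of naming variables as dotted paths into the JSON document; the file name and variable name below are illustrative, not taken from the task:

import json

def substitute(json_path, variables):
    # Load the target JSON file.
    with open(json_path) as f:
        doc = json.load(f)
    # Each variable name is a dotted path, e.g. "Data.DefaultConnection.ConnectionString".
    for dotted_name, value in variables.items():
        keys = dotted_name.split(".")
        node = doc
        # Walk to the parent of the leaf key, skipping paths that don't exist.
        for key in keys[:-1]:
            if not isinstance(node, dict) or key not in node:
                node = None
                break
            node = node[key]
        if isinstance(node, dict) and keys[-1] in node:
            node[keys[-1]] = value
    # Write the substituted document back in place.
    with open(json_path, "w") as f:
        json.dump(doc, f, indent=2)

substitute("appsettings.json", {"Data.DefaultConnection.ConnectionString": "Server=prod;..."})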

Related

Jenkins JSON API - Locate Build, Build Environment, and Build Trigger API data

I am working with the Jenkins JSON API.
I understand the format to retrieve API data in JSON:
<Jenkins_URL>/job/<job_name>/api/json
Within the job/<job_name>/configure UI we can configure/add build triggers, build environment, and build data.
I want to be able to view the Build, Build Environment, and Build Triggers data through a JSON API.
Is it even possible to get that data? What are alternative ways to get all the data available on the configure page of a job?
I think the most straightforward way is to access <Jenkins_URL>/job/<job_name>/config.xml.
Yes, it's not JSON, but you can be sure that it contains everything that was configured on the configuration page.
The XML file is the "native" serialized version of the job configuration. The JSON API will always require some additional glue that may or may not exist.
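For example, a short Python sketch that fetches the file; the host, job name, and credentials are placeholders:

import requests

JENKINS_URL = "https://jenkins.example.com"  # placeholder host
JOB_NAME = "my-job"                          # placeholder job name

# config.xml contains the complete job configuration, including build
# triggers, build environment, and build steps.
resp = requests.get(
    f"{JENKINS_URL}/job/{JOB_NAME}/config.xml",
    auth=("user", "api-token"),  # Jenkins username and API token
)
resp.raise_for_status()
print(resp.text)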

Parsing environment variables into a configuration YAML file

I am using dask on the GFDL analysis cluster to analyze large climate model output.
I am trying to set my temporary-directory configuration to a temporary directory that can change depending on the node I am logged into (it is always identified by the environment variable $TMPDIR).
Is there a way to parse environment variables in the dask configuration files?
Cheers.
Not as of today, but that could be done. I recommend raising an issue on the Dask issue tracker.
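In the meantime, one workaround is to expand the variable yourself and set the option programmatically rather than in the YAML file. A minimal sketch using the session-level dask.config.set API:

import os
import dask

# os.path.expandvars replaces $TMPDIR with the node-specific value
# taken from the environment.
tmpdir = os.path.expandvars("$TMPDIR")

# Apply the temporary-directory option for this Python session only.
dask.config.set({"temporary-directory": tmpdir})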

Jenkins job to read data from SQL DB

I'm new to Jenkins. I have a task where I need to create a Jenkins job to automate builds of certain projects. The build job parameters are stored in a SQL database, so the job has to query the DB, load the data, and perform the build operation.
How can this be done? Examples would be greatly appreciated.
You have to transform the data from the available source into the format expected by the destination.
Here your source data is in a database and you want to use it in Jenkins.
There may be numerous ways, but an efficient way of reading the data is the EnvInject plugin.
If you can provide the data in properties-file format to the EnvInject plugin, it is exposed as environment variables, and you can use those variables in the job configuration.
The EnvInject plugin can read the properties file from the Jenkins job workspace; you provide its path in the Properties File Path input.
To read the data from the source and make it available as a properties file, you can either write an executable, or use an API if your application provides one to download the properties data.
Either way, it has to run before the SCM step, for which you use a pre-SCM build step.
Get the data and inject it in the pre-SCM step only, so that it is available as environment variables; a sketch follows.
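A Python sketch of such a pre-SCM script, using the standard sqlite3 module as a stand-in for whatever SQL database actually holds the parameters; the table and column names are hypothetical:

import sqlite3

# Connect to the parameter database (swap in your real DB driver and DSN).
conn = sqlite3.connect("build_params.db")
rows = conn.execute("SELECT name, value FROM build_parameters").fetchall()
conn.close()

# Write the rows in properties format (NAME=VALUE per line) into the job
# workspace, where the EnvInject plugin can pick the file up.
with open("build.properties", "w") as f:
    for name, value in rows:
        f.write(f"{name}={value}\n")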
This is one thought to give you a starting point; while implementing it you may find other approaches that better fit your requirements.

Combine Swagger definition files

I am generating a Swagger definition for all my APIs by annotating the source code.
I was wondering if there is any way to merge all the APIs into one single JSON file?
Note: I am using Swagger 2.0 definitions.
If you deploy those apps in a WebSphere Liberty server with the apiDiscovery-1.0 feature defined in your server.xml, then you can simply issue a GET against /ibm/api/docs and retrieve your aggregated JSON file. You can also retrieve it as YAML by adding the Accept header "application/yaml".
You can download Liberty for free at wasdev.net, then just run the installUtility command to grab the feature (wlp/bin/installUtility install apiDiscovery-1.0).
More information in this blog: https://developer.ibm.com/wasdev/blog/2016/04/13/deploying-swagger-enabled-endpoints-websphere-liberty-bluemix-api-connect/
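If Liberty is not an option, merging the files by hand is workable for simple cases. A hedged Python sketch that combines the paths and definitions sections of several Swagger 2.0 files, assuming no conflicting path or model names; the file names and title are placeholders:

import json

def merge_swagger(files, title="Aggregated API"):
    # Start from a minimal Swagger 2.0 skeleton.
    merged = {
        "swagger": "2.0",
        "info": {"title": title, "version": "1.0"},
        "paths": {},
        "definitions": {},
    }
    for path in files:
        with open(path) as f:
            spec = json.load(f)
        # Copy endpoint and model definitions across; on a name clash,
        # later files silently overwrite earlier ones.
        merged["paths"].update(spec.get("paths", {}))
        merged["definitions"].update(spec.get("definitions", {}))
    return merged

with open("combined.json", "w") as f:
    json.dump(merge_swagger(["api-a.json", "api-b.json"]), f, indent=2)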

How to set chef data bag values via Jenkins?

I posted "How to set chef attributes via Jenkins?", which was answered correctly: use the "-j" option. However, what if I want to set the load version in the data bag from Jenkins so that ALL cookbooks can use it? That is, I don't want to use the "-j" option, and instead want to look the value up in the data bag. How do I set Chef data bag values via Jenkins?
Best way: use knife commands to upload predefined or generated data bags.
knife data bag from file BAG json_file_for_item
The file must have a defined format; the extended documentation is HERE.
There are requirements on the file-system hierarchy and on the file format, so copying the doc here sounds like a bad idea.
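For illustration only, a minimal item file; the bag name (deploy), item name (load_version), and version field are hypothetical, but the id field is required and must match the item name:

data_bags/deploy/load_version.json:
{
  "id": "load_version",
  "version": "1.2.3"
}

knife data bag from file deploy data_bags/deploy/load_version.json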
In addition to knife, you can also write your own scripts using the Chef REST API. There are clients for Ruby (Chef-API), Python (PyChef), JavaScript (chef-js), and many others.
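For the Python route, a sketch using PyChef, assuming its documented dict-style DataBagItem interface; the bag and item names are the same hypothetical ones as above:

from chef import autoconfigure, DataBag, DataBagItem

# autoconfigure() reads your knife.rb/config.rb for the server URL and key.
api = autoconfigure()

# This assumes the data bag already exists on the Chef server.
bag = DataBag('deploy')
item = DataBagItem(bag, 'load_version')
item['version'] = '1.2.3'  # dict-style access to the item's attributes
item.save()                # push the updated item to the Chef server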
