XL Deploy nesting dictionaries - automated-deployment

I am preparing an application for deployment using the XebiaLabs XL Deploy tool.
It is a .NET Windows Service, with environment-specific configuration (multiple instances across several QA/UAT/prod servers) in the app.config.
These config values have been migrated to XL Deploy dictionaries, and the app.config uses placeholders to refer to each required value.
Is there a way to nest dictionaries in XL Deploy? That is, can Dictionary 1 have a key whose value is Dictionary 2?
There is no real information about this in the docs, and not much else to be found elsewhere.

In case anyone else is looking for this, the solution I came up with was pretty simple: use grouped keys as values in dictionaries.
For example:
XLD Dictionary 1:
Key - Value
Key1D1 - Value1
Key2D1 - Value2
Key3D1 - {{Key1D2}} {{Key2D2}} {{Key3D2}}
XLD Dictionary 2:
Key - Value
Key1D2 - Value3
Key2D2 - Value4
Key3D2 - Value5
Therefore, in my config file I can have:
Dbconnection = {{Key1D1}}
Dbpassword = {{Key1D2}}
ConfigRules = {{Key3D1}}
And use the same config for each version, with all the instance-specific ConfigRules set only in XL Deploy dictionaries.
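The resolution is effectively nested: {{Key3D1}} resolves to "{{Key1D2}} {{Key2D2}} {{Key3D2}}" from Dictionary 1, and those placeholders resolve in turn against Dictionary 2, so the deployed value ends up as "Value3 Value4 Value5". This relies on XL Deploy resolving placeholders that appear inside dictionary values, which is what made the grouped-key approach work here.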

Unable to query Cosmos DB using numeric partition key

I am trying to use a numeric field as a partition key but I am unable to run stored procedures on them. I am not sure if I am doing something wrong or if it is not possible.
I have two collections with two different partition keys.
A sample document from collection 1
{
    "id": "1",
    "group": "a"
}
A sample document from collection 2
{
    "id": "1",
    "group": 1
}
The difference is that the group field in the second collection does not have double quotes around its value.
Both the collections have group as the partition key and I am trying to execute the sample stored procedure provided in the Azure Portal for Cosmos DB.
While I am able to run it successfully against the first collection, the second collection produces an error of "no docs found".
I am importing the data from a SQL Server database, and the int fields are imported without double quotes. I searched quite a lot but could not find documentation relating to numeric partition keys without double quotes.
Is it possible to use numeric fields as partition keys? Any help is appreciated, as this field is the best fit for partitioning my collection.
Don't worry, numeric partition keys are definitely supported when executing Cosmos DB stored procedures, and you did everything correctly. You got the unexpected result because the Azure Portal treats the partition key input as a String type even when you enter a Number.
You can verify this with the Cosmos DB SDK or the REST API. For example, here is a Java SDK sample:
import com.microsoft.azure.documentdb.*;

public class ExecuteSPTest {

    static private String YOUR_COSMOS_DB_ENDPOINT = "https://***.documents.azure.com:443/";
    static private String YOUR_COSMOS_DB_MASTER_KEY = "***";

    public static void main(String[] args) throws DocumentClientException {
        DocumentClient client = new DocumentClient(
                YOUR_COSMOS_DB_ENDPOINT,
                YOUR_COSMOS_DB_MASTER_KEY,
                new ConnectionPolicy(),
                ConsistencyLevel.Session);

        RequestOptions options = new RequestOptions();
        // Pass the partition key as the number 1, not the string "1"
        PartitionKey partitionKey = new PartitionKey(1);
        options.setPartitionKey(partitionKey);

        StoredProcedureResponse queryResults = client.executeStoredProcedure(
                "/dbs/db/colls/group/sprocs/sample",
                options,
                null);

        String document = queryResults.getResponseAsString();
        System.out.println(document);
    }
}
Output:

How can I write FlowFile attributes to Avro metadata inside the FlowFile's content?

I am creating FlowFiles that are manipulated and split downstream after being emitted by an ExecuteSql processor. I have populated the FlowFiles' attributes with data that I want to put into the Avro metadata contained within each FlowFile's content.
How can I do this?
I've tried using an UpdateRecord processor configured with an AvroReader and AvroRecordSetWriter and a property with a key of /canary that should be writing a FlowFile attribute to that key somewhere in the Avro document. It does not appear anywhere in the output, though.
It would be acceptable to move the records in the Avro data to a subkey and have a metadata section be a part of the record data. I would prefer not to do this, though, because it does not seem like the correct solution and because it sounds much more complex than simply modifying the Avro metadata.
The record-aware processors (and the Readers/Writers) are not metadata-aware, meaning they cannot currently (as of NiFi 1.5.0) act on metadata in any way (inspect, create, delete, etc.), so UpdateRecord won't work for metadata per se. With your /canary property key, it will try to insert a field named canary into your Avro record at the top level, and it should have the value you specify. However, I believe your output schema needs to have the canary field added at the top level, or it may be ignored (I'm not positive of this; you can check the output schema to see if it is added automatically).
There is currently no NiFi processor that can update Avro metadata explicitly (MergeContent does some of this when merging various Avro files together, but you can't choose to set a value, for example). However, I have an unpolished Groovy script you could use in ExecuteScript to add metadata to Avro files in NiFi 1.5.0+. In ExecuteScript you would set the language to Groovy and use the following as the Script Body, then add user-defined (aka "dynamic") properties to ExecuteScript, where each property key will be a metadata key and the evaluated value (the properties support Expression Language) will be the metadata value:
@Grab('org.apache.avro:avro:1.8.1')
import org.apache.avro.*
import org.apache.avro.file.*
import org.apache.avro.generic.*
import org.apache.nifi.processor.io.StreamCallback

def flowFile = session.get()
if(!flowFile) return
try {
    // Save off dynamic property values for metadata key/values later
    def metadata = [:]
    context.properties.findAll {e -> e.key.dynamic}.each {k,v -> metadata.put(k.name, context.getProperty(k).evaluateAttributeExpressions(flowFile).value.bytes)}

    flowFile = session.write(flowFile, {inStream, outStream ->
        DataFileStream<GenericRecord> reader = new DataFileStream<>(inStream, new GenericDatumReader<GenericRecord>())
        DataFileWriter<GenericRecord> writer = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>())

        def schema = reader.schema
        def inputCodec = reader.getMetaString(DataFileConstants.CODEC) ?: DataFileConstants.NULL_CODEC

        // Forward the existing (non-reserved) metadata to the output
        reader.metaKeys.each { key ->
            if (!DataFileWriter.isReservedMeta(key)) {
                byte[] metadatum = reader.getMeta(key)
                writer.setMeta(key, metadatum)
            }
        }

        // For each dynamic property, set the key/value pair as Avro metadata
        metadata.each {k,v -> writer.setMeta(k,v)}

        writer.setCodec(CodecFactory.fromString(inputCodec))
        writer.create(schema, outStream)
        writer.appendAllFrom(reader, false)
        writer.flush()
    } as StreamCallback)

    session.transfer(flowFile, REL_SUCCESS)
} catch(e) {
    log.error('Error adding Avro metadata, penalizing flow file and routing to failure', e)
    flowFile = session.penalize(flowFile)
    session.transfer(flowFile, REL_FAILURE)
}
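For example, adding a dynamic property named originalFilename (a name chosen here purely for illustration) with a value of ${filename} to ExecuteScript should cause the flow file's standard filename attribute to be written into the Avro file's metadata under the key originalFilename.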
Note that this script can work with versions of NiFi prior to 1.5.0, but the @Grab at the top is not supported until 1.5.0, so for earlier versions you'd have to download Avro and its dependencies into a flat folder and point to that folder in the Module Directory property of ExecuteScript.

Lua script for scan with 'match' and 'count' constraints

I am using Jedis. I need a Lua script to scan for a pattern with a specified limit. I don't know how to pass the parameters into the Lua script.
Sample Code:
String script="return {redis.call('SCAN',KEYS[1],'COUNT',KEYS[2],'MATCH',KEYS[3]}";
List<String> response = (List<String>)jedis.eval(script,cursor,COUNT,pattern);
How do I pass these parameters to the script?
Your code has several points to fix:
1. In the SCAN command, the 'MATCH' parameter should be placed before 'COUNT'.
2. You should only use KEYS for values that are actually Redis keys; everything else should be passed via ARGV.
3. You forgot to specify the key count when calling Jedis.eval().
So the fixed version of your code is:
String script="return {redis.call('SCAN',ARGV[1],'MATCH',ARGV[2],'COUNT',ARGV[3])}";
List<String> response = (List<String>)jedis.eval(script, 0, cursor, pattern, COUNT);
But I agree with Itamar that it's better to use Jedis.scan() instead.
Hope this helps.
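For reference, here is a rough sketch of the Jedis.scan() approach (assuming a Jedis 2.x-style API; the pattern and count values below are just placeholders, and the cursor accessor name varies slightly between Jedis versions):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

import java.util.ArrayList;
import java.util.List;

public class ScanWithMatchAndCount {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // MATCH and COUNT are set on ScanParams instead of being passed to a Lua script
            ScanParams params = new ScanParams().match("user:*").count(100);
            List<String> keys = new ArrayList<>();
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                ScanResult<String> page = jedis.scan(cursor, params);
                keys.addAll(page.getResult());
                cursor = page.getStringCursor(); // getCursor() in newer Jedis versions
            } while (!"0".equals(cursor));
            System.out.println(keys);
        }
    }
}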

FSharp.Data.JsonProvider - Getting json from types

The FSharp.Data.JsonProvider provides a means to go from JSON to an F# type. Is it possible to go in the reverse direction, i.e. declare an instance of one of the types created by FSharp.Data.JsonProvider, set the field values to be what I need, and then get the equivalent JSON?
I've tried things like this,
type Simple = JsonProvider<""" { "name":"John", "age":94 } """>
let fred = Simple(
Age = 5, // no argument or settable property 'Age'
Name = "Fred")
The latest version of F# Data now supports this. See the last example in http://fsharp.github.io/FSharp.Data/library/JsonProvider.html.
Your example would be:
type Simple = JsonProvider<""" { "name":"John", "age":94 } """>
let fred = Simple.Root(age = 5, name = "Fred")
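To get the JSON text back out, calling fred.JsonValue.ToString() should give you the serialized JSON for the value you constructed (the generated types wrap an underlying JsonValue); see the linked JsonProvider documentation for the exact members in your FSharp.Data version.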
This is one area where C# has an edge over F#, at least in Visual Studio. You can copy your JSON example to the clipboard and, in Visual Studio, use Edit -> Paste Special -> Paste JSON As Classes, which will create a class to match the JSON example. From there you can easily use the class in F#.
More details on paste special here
Hopefully a matching feature will come for F# soon too.

Couchbase queries using composite keys in F#

How would one translate the following composite key query:
?stale=false&connection_timeout=60000&limit=10&skip=0&startkey=["Default",{}]&endkey=["Default"]&descending=true
to the Couchbase .NET API when using F#? I found a similar question using C# LINQ here:
Couchbase .Net Library complex startKey/endKey types, but how can I accomplish the same using F#?
The missing parts are the ???
let result = myView.Descending(true).Stale(StaleMode.False).Limit(limit).StartKey( ??? ).EndKey( ??? )
Any help would be appreciated.
It appears that you're asking how to create an array in F#. To declare an object array in F#, do this:
open System

let (startKey: Object array) = [|35; 23; new Object()|]
let (endKey: Object array) = [|35; 23|]
Note that normally the type specification isn't needed, but since you're mixing types in the array, the compiler will assume the type of the first object in the array (int) and so the new Object() would cause a compile error. Adding the type specification fixes that issue.
let result = myView.Descending(true).Stale(StaleMode.False).Limit(limit).StartKey( startKey ).EndKey( endKey )
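Following the same pattern for the query in the question, startKey would be an object array containing "Default" plus a new Object() (standing in for the {} upper bound), and endKey an object array containing just "Default", matching the startkey=["Default",{}] and endkey=["Default"] parts of the original URL.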
