Serilog Filter Expressions: What is the syntax to filter against dictionaries?

I'm struggling to find the correct syntax to use in a Serilog filter expression to find a particular key/value pair in a dictionary within the event properties.
This is a contrived example, but illustrates the issue.
The relevant setup looks like this:
string exceptionFilter = "SomeDictionary.Other = 'Nope'";

LoggerConfiguration config = new LoggerConfiguration()
    .Destructure.JsonNetTypes()
    .MinimumLevel.ControlledBy(LevelSwitch)
    .Enrich.FromLogContext()
    .Enrich.With(new ExceptionEnricher())
    .Filter.ByExcluding(exceptionFilter)
    .WriteTo.MSSqlServer(
        connectionString: ConnectionString,
        sinkOptions: SinkOptions,
        columnOptions: ColumnOptions
    );

Logger = config.CreateLogger();
Then I log an event like this:
Log.Fatal(
    ex,
    "Real Bad {@SomeDictionary}",
    new Dictionary<string, string>() {
        { "Test", "Test" },
        { "Other", "Nope" }
    }
);
What should the expression to exclude SomeDictionary['Other'] = 'Nope' be?
If I pass in the event details as an object:
Log.Fatal(
    ex,
    "Real Bad {@SomeObject}",
    new {
        Test = "Test",
        Other = "Nope"
    }
);
Then the following expression works as expected:
SomeObject.Other = 'Nope'
Unfortunately, I am enriching our logs with a number of dictionaries that I need to be able to filter by, and that syntax doesn't seem to work with them.
EDIT
If I serialize the log event properties, it appears that the issue is probably due to escaped quotation marks within the dictionary keys:
"SomeDictionary": {
"Elements": {
"\"Test\"": {
"Value": "Test"
},
"\"Other\"": {
"Value": "Nope"
}
}
}
Still not sure what I need to do to be able to filter on one of the elements.
Resolution
I switched from using Serilog.Filters.Expressions to Serilog.Expressions and the issue resolved itself. Now I can filter on dictionaries using the expected syntax:
SomeDictionary['Other'] = 'Nope'
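For anyone landing here, a minimal sketch of the working setup (assuming the Serilog.Expressions package, which supersedes Serilog.Filters.Expressions; the enricher and MSSqlServer sink from the question are elided):

using System;
using System.Collections.Generic;
using Serilog;

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    // Serilog.Expressions understands indexer syntax against dictionary properties
    .Filter.ByExcluding("SomeDictionary['Other'] = 'Nope'")
    .WriteTo.Console() // stands in for WriteTo.MSSqlServer(...)
    .CreateLogger();

// An event like this is now excluded:
Log.Fatal(new Exception("boom"), "Real Bad {@SomeDictionary}",
    new Dictionary<string, string> { { "Test", "Test" }, { "Other", "Nope" } });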

Related

Retrieving "$ref" field in swagger.json

I am trying to use the Swagger parser (io.swagger.parser.SwaggerParser) to parse and retrieve information from "swagger.json".
Below is an excerpt of the "swagger.json".
I am trying to retrieve "$ref" : "#/definitions/abc".
"responses" : {
"200" : {
"description" : "abc",
"schema" : {
"$ref" : "#/definitions/abc"
}
},
This is the code to parse it.
SwaggerParser sparse = new SwaggerParser();
Swagger swagger = sparse.read("swagger.json");
// This next line is what I am having a problem with.
swagger.getPath("/endpointurl").getGet().getResponses().get("200").getSchema();
At this point, ".getSchema()" in the above line only exposes "getType()" for me to call; it doesn't have "get$ref()". This is because ".getSchema()" returns a "Property" (io.swagger.models.properties.Property), which doesn't have "get$ref()".
get$ref() is available in "RefProperty" (io.swagger.models.properties.RefProperty),
but ".getSchema()" doesn't return a "RefProperty".
Typecasting the result of ".getSchema()" to a "RefProperty" also doesn't work; it ends up in this error:
java.lang.ClassCastException: io.swagger.models.properties.ArrayProperty cannot be cast to io.swagger.models.properties.RefProperty
Has anyone tried parsing a "swagger.json" and been able to retrieve the "$ref" line under "schema" in the "response" block?
Any idea how I might be able to do that?
I figured out a way to do that. Maybe not the best way, but it retrieves the information in "$ref".
// xxxxx is whatever your code is that gives you ".getSchema()".
// Mine is in a map and I don't want to distract people.
Object obj = xxxxx.getSchema();
ArrayProperty arrProp = (ArrayProperty) obj; // .getSchema() returns an ArrayProperty here
// .getItems() returns the RefProperty you need in order to call ".get$ref()"
RefProperty refProperty = (RefProperty) arrProp.getItems();
String refStr = refProperty.get$ref(); // voila, there's your "$ref" content
String simpleRefStr = refProperty.getSimpleRef();
I had to do a few type castings. If you have a more elegant way of doing this, please post here.
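A slightly more defensive variant (a sketch against the io.swagger.models 1.x types; RefResolver and resolveRef are made-up names) avoids the blind casts by checking the concrete Property type first:

import io.swagger.models.properties.ArrayProperty;
import io.swagger.models.properties.Property;
import io.swagger.models.properties.RefProperty;

class RefResolver {
    // Returns the $ref (e.g. "#/definitions/abc") whether the schema is a
    // direct reference or an array whose items are a reference; null otherwise.
    static String resolveRef(Property schema) {
        if (schema instanceof RefProperty) {
            return ((RefProperty) schema).get$ref();
        }
        if (schema instanceof ArrayProperty) {
            Property items = ((ArrayProperty) schema).getItems();
            if (items instanceof RefProperty) {
                return ((RefProperty) items).get$ref();
            }
        }
        return null; // not a $ref-based schema
    }
}

This way a response whose schema is neither shape yields null instead of a ClassCastException.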

Deserialize JSON in Hack strict mode

I have a nested JSON file, consisting of keys and values which are strings only. But the structure of the JSON file is not fixed, so sometimes it could be nested 3 levels, sometimes only 2 levels.
I wonder how I could deserialize this in strict mode?
"live" : {
"host" : "localhost",
"somevalue" : "nothing",
"anobject" : {
"one" : "two",
"three" : "four",
"five" : {
"six" : "seven"
}
}
}
If I knew the structure of the JSON, I would simply write my own class for it, but since the keys are not fixed and the nesting can go several levels deep, I really wonder how I could put such an object into a specific type.
Any help or hints are appreciated.
I think invariants will serve you well here. First off, it might be helpful to know that you can type a keyed tree strictly in Hack:
<?hh // strict
class KeyedTree<+Tk as arraykey, +T> {
  public function __construct(
    private Map<Tk, KeyedTree<Tk, T>> $descendants = Map{},
    private ?T $v = null
  ) {}
}
(It must be a class because cyclic shape definitions are sadly not allowed)
I haven't tried them yet, but type_structures and Fred Emmott's TypeAssert also look to be of interest. If some part of your JSON blob is known to be fixed, then you could isolate the nested, uncertain part and build a tree out of it with invariants. In the limiting case where the whole blob is unknown, you could excise the TypeAssert, since there's no interesting fixed structure to assert:
use FredEmmott\TypeAssert\TypeAssert;

class JSONParser {
  const type Blob = shape(
    'live' => shape(
      'host' => string,      // fixed
      'somevalue' => string, // fixed
      'anobject' => KeyedTree<arraykey, mixed> // nested and uncertain
    )
  );

  public static function parse_json(string $json_str): this::Blob {
    $json = json_decode($json_str, true);
    invariant(array_key_exists('anobject', $json['live']), 'JSON is not properly formatted.');
    // replace the uncertain array with a `KeyedTree`
    $json['live']['anobject'] = self::DFS($json['live']['anobject']);
    return TypeAssert::matchesTypeStructure(
      type_structure(self::class, 'Blob'),
      $json
    );
  }

  public static function DFS(array<arraykey, mixed> $tree): KeyedTree<arraykey, mixed> {
    $descendants = Map{};
    foreach ($tree as $k => $v) {
      if (is_array($v)) {
        $descendants[$k] = self::DFS($v);
      } else {
        $descendants[$k] = new KeyedTree(Map{}, $v); // leaf node
      }
    }
    return new KeyedTree($descendants);
  }
}
Down the road, you'll still have to supplement containsKey invariants on the KeyedTree, but that's the reality with unstructured data in Hack.
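Such an invariant-guarded lookup, added inside KeyedTree, might look like this (a sketch; childAt is a made-up name):

// Inside KeyedTree: assert the key exists, then descend.
public function childAt(Tk $k): KeyedTree<Tk, T> {
  invariant(
    $this->descendants->containsKey($k),
    'Expected a child for key %s',
    (string)$k,
  );
  return $this->descendants->at($k);
}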

Passing query parameters in Dapper using OleDb

This query produces the error "No value given for one or more required parameters":
using (var conn = new OleDbConnection("Provider=..."))
{
    conn.Open();
    var result = conn.Query(
        "select code, name from mytable where id = ? order by name",
        new { id = 1 });
}
If I change the query string to ... where id = @id ..., I will get the error: Must declare the scalar variable "@id".
How do I construct the query string and how do I pass the parameter?
The following should work:
var result = conn.Query(
    "select code, name from mytable where id = ?id? order by name",
    new { id = 1 });
Important: see newer answer
In the current build, the answer to that would be "no", for two reasons:
the code attempts to filter unused parameters - and is currently removing all of them because it can't find anything like @id, :id or ?id in the sql
the code for adding values from types uses an arbitrary (well, ok: alphabetical) order for the parameters (because reflection does not make any guarantees about the order of members), making positional anonymous arguments unstable
The good news is that both of these are fixable:
we can make the filtering behaviour conditional
we can detect the category of types that has a constructor that matches all the property names, and use the constructor argument positions to determine the synthetic order of the properties - anonymous types fall into this category
Making those changes to my local clone, the following now passes:
// see https://stackoverflow.com/q/18847510/23354
public void TestOleDbParameters()
{
    using (var conn = new System.Data.OleDb.OleDbConnection(
        Program.OleDbConnectionString))
    {
        var row = conn.Query("select Id = ?, Age = ?", new DynamicParameters(
            new { foo = 12, bar = 23 } // these names DO NOT MATTER!!!
        ) { RemoveUnused = false }).Single();
        int age = row.Age;
        int id = row.Id;
        age.IsEqualTo(23);
        id.IsEqualTo(12);
    }
}
Note that I'm currently using DynamicParameters here to avoid adding even more overloads to Query / Query<T> - because this would need to be added to a considerable number of methods. Adding it to DynamicParameters solves it in one place.
I'm open to feedback before I push this - does that look usable to you?
Edit: with the addition of a funky smellsLikeOleDb (no, not a joke), we can now do this even more directly:
// see https://stackoverflow.com/q/18847510/23354
public void TestOleDbParameters()
{
    using (var conn = new System.Data.OleDb.OleDbConnection(
        Program.OleDbConnectionString))
    {
        var row = conn.Query("select Id = ?, Age = ?",
            new { foo = 12, bar = 23 } // these names DO NOT MATTER!!!
        ).Single();
        int age = row.Age;
        int id = row.Id;
        age.IsEqualTo(23);
        id.IsEqualTo(12);
    }
}
I've been trialling the use of Dapper within my software product, which is using ODBC connections (at the moment). However, one day I intend to move away from ODBC and use a different pattern for supporting different RDBMS products. My problem with the solution implementation is twofold:
I want to write SQL code with parameters that conform to different back-ends, so I want to be writing named parameters in my SQL now so that I don't have to go back and re-do it later.
I don't want to rely on getting the order of my properties in line with my ?. This is bad. So my suggestion is to please add support for named parameters for ODBC.
In the meantime I have hacked together a solution that allows me to do this with Dapper. Essentially I have a routine that replaces the named parameters with ? and also rebuilds the parameter object, making sure the parameters are in the correct order.
However, looking at the Dapper code, I can see that I've repeated some of what Dapper is doing anyway; effectively each parameter value is now visited once more than necessary. This becomes more of an issue for bulk updates/inserts.
But at least it seems to work for me OK...
I borrowed a bit of code from here to form part of my solution; a sketch of the approach follows...
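The rewriting routine boils down to something like this (a sketch, not the poster's actual code; it assumes each @-prefixed name is a parameter and appears once):

using System.Collections.Generic;
using System.Text.RegularExpressions;

static class OdbcSqlRewriter
{
    // Turns "where a = @x and b = @y" into "where a = ? and b = ?" and
    // reports the parameter names in the order they must be bound.
    public static (string Sql, List<string> Order) ToPositional(string sql)
    {
        var order = new List<string>();
        string rewritten = Regex.Replace(sql, @"@(\w+)", m =>
        {
            order.Add(m.Groups[1].Value); // remember binding order
            return "?";
        });
        return (rewritten, order);
    }
}

The captured order is then used to add the values to a DynamicParameters object positionally before handing the rewritten SQL to Dapper.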
The ? for parameters was part of the solution for me, but it only works with integers, like ID. It still fails for strings because the parameter length isn't specified.
OdbcException: ERROR [HY104] [Microsoft][ODBC Microsoft Access Driver]Invalid precision value
System.Data.Odbc.OdbcParameter.Bind(OdbcStatementHandle hstmt, OdbcCommand command, short ordinal, CNativeBuffer parameterBuffer, bool allowReentrance)
System.Data.Odbc.OdbcParameterCollection.Bind(OdbcCommand command, CMDWrapper cmdWrapper, CNativeBuffer parameterBuffer)
System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, string method, bool needReader, object[] methodArguments, SQL_API odbcApiMethod)
System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, string method, bool needReader)
System.Data.Common.DbCommand.ExecuteDbDataReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken)
Dapper.SqlMapper.QueryAsync(IDbConnection cnn, Type effectiveType, CommandDefinition command) in SqlMapper.Async.cs
WebAPI.DataAccess.CustomerRepository.GetByState(string state) in Repository.cs
var result = await conn.QueryAsync(sQuery, new { State = state });
WebAPI.Controllers.CustomerController.GetByState(string state) in CustomerController.cs
return await _customerRepo.GetByState(state);
For Dapper to pass string parameters to ODBC I had to specify the length.
var result = await conn.QueryAsync<Customer>(sQuery, new { State = new DbString { Value = state, IsFixedLength = true, Length = 4} });
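For a variable-length column the same idea presumably applies without the fixed-length flag (an assumption; the Length of 50 here is a placeholder for the actual column size):

var result = await conn.QueryAsync<Customer>(sQuery,
    new { State = new DbString { Value = state, Length = 50 } });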

Breeze.js OData - Complex Type fails --> Cannot call method '_createInstanceCore' of null

We are currently developing a small HTML/JavaScript application with Breeze.js (version 1.3.4). We configured it to use the OData protocol to query the entities.
With a simple entity it just works fine. If we query a complex entity (a contact entity with two complex-type properties for phone numbers and addresses), we receive the following error:
"TypeError: Cannot call method '_createInstanceCore' of null
at ctor.startTracking (<ServerAddress>/scripts/breeze.debug.js:14086:49)
at Array.forEach (native)
at ctor.startTracking (<ServerAddress>/scripts/breeze.debug.js:14069:12)
at new ctor (<ServerAddress>/scripts/breeze.debug.js:2952:52)
at proto._createEntityCore (<ServerAddress>/scripts/breeze.debug.js:6478:9)
at mergeEntity (<ServerAddress>/scripts/breeze.debug.js:12458:39)
at processMeta (<ServerAddress>/scripts/breeze.debug.js:12381:24)
at visitAndMerge (<ServerAddress>/scripts/breeze.debug.js:12361:16)
at <ServerAddress>/scripts/breeze.debug.js:12316:33
at Array.map (native)
From previous event:
at executeQueryCore (<ServerAddress>/scripts/breeze.debug.js:12290:77)
at proto.executeQuery (<ServerAddress>/scripts/breeze.debug.js:11243:23)
at DataContext.executeCachedQuery (<ServerAddress>/App/services/datacontext.js:138:33)
at DataContext.getContactsBySearchParams (<ServerAddress>/App/services/datacontext.js:111:25)
at Search.searchCmd.ko.asyncCommand.execute (<ServerAddress>/App/viewmodels/search.js:34:38)
at Search.ko.asyncCommand.self.execute (<ServerAddress>/scripts/knockout.command.js:57:29)
at HTMLButtonElement.ko.bindingHandlers.event.init (<ServerAddress>/scripts/knockout-2.2.1.debug.js:2318:66)"
While debugging the code, we see that the dataType field of the complex property instance is null:
val = prop.dataType._createInstanceCore(entity, prop.name);
We can also see that the complexTypeName has a strange value format like:
<ComplexTypeName>):#<NameSpace>
Another thing we noticed concerning the strange complex type name: the entity's property is a collection of complex types (a contact may have multiple addresses). The check on line 14085 always returns isScalar = true, but a complex array should be created instead.
Is there a problem with the OData Metadata for complex types? How could we solve this issue?
Thank you in advance for your answer.
Cheers,
Marc
Breeze currently does support both scalar complex types and arrays of complex types.
But there is a bug with using EntityManager.createEntity to create an entity and its complex type values in a single pass. This will be fixed in the next release in about a week.
So for now the following does NOT work. (Assume 'location' in the examples below is a complex property of type 'Location', itself with several other properties.)
var supplier = em.createEntity("Supplier",
{ companyName: "XXX", location: { city: "LA" } }
);
but the following will (assuming you are using the Breeze Angular/backingStore implementation; the Knockout code would look a bit different):
var supplier = em.createEntity("Supplier", { companyName: "XXX" });
supplier.location.city = "San Francisco";
supplier.location.postalCode = "91333";
or the following:
var supplier = em.createEntity("Supplier", { companyName: "XXX" });
var locationType = em.metadataStore.getEntityType("Location");
supplier.location = locationType.createInstance(
{ city: "Boston", postalCode: "12345" }
);
I am seeing the same problem with breeze 1.4.5.
My metadata looks like:
{ "shortName":"Phone",
...
"dataProperties":[ {"name":"phoneNumber",
"complexTypeName":"PhoneNumber#mynamespace",
"isScalar":true }]
...
},
{"shortName":"PhoneNumber",
"namespace":"mynamespace",
"isComplexType":true,
"dataProperties":[ ... ]
}
My client code makes a call:
var newPhone = manager.createEntity('Phone', {phoneNumber:{num: "234-2342"}});
(there are more properties in the PhoneNumber complex type, but you get the picture).
The Breeze code (same call stack as the original poster's) tries to dereference the dataType field, which is not defined, and throws an exception:
if (prop.isDataProperty) {
    if (prop.isComplexProperty) {
        if (prop.isScalar) {
            val = prop.dataType._createInstanceCore(entity, prop);
        } else {
            val = breeze.makeComplexArray([], entity, prop);
        }
I went through the Zza sample's schema and found no examples of complex data properties. The Northwind schema included with the samples bundle does, but I'm not sure how to get it to work with my schema.
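One detail worth checking (an observation, not a confirmed fix): Breeze structural type names are conventionally written as shortName:#namespace, so the complexTypeName in the metadata above would normally read:

"complexTypeName": "PhoneNumber:#mynamespace"

The missing ":#" separator would be consistent with the strange type-name formatting reported by the original poster.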

Grails validation fails after interchanging unique attribute values

Hi, I am trying to create an interface where users can create some custom enumerations with translations for different languages. For example, the user can create an enumeration "Movie Genre". For this enumeration there might be an enumeration-value "Comedy", for which there might exist one or more enumeration-value-translations for several languages.
As there must only be one translation for a specific language, I added a unique constraint to the enumeration-value-translation domain class. These are my domain classes right now:
class Enumeration {
    String label

    List<EnumerationValue> enumerationValues = new ArrayList<EnumerationValue>()

    static hasMany = [enumerationValues: EnumerationValue]

    static constraints = {
        label(nullable: false, blank: false)
        enumerationValues(nullable: true)
    }
}

class EnumerationValue {
    String label

    List<EnumerationValueTranslation> enumerationValueTranslations = new ArrayList<EnumerationValueTranslation>()

    static belongsTo = [enumeration: Enumeration]
    static hasMany = [enumerationValueTranslations: EnumerationValueTranslation]

    static constraints = {
        label(nullable: false, blank: false, unique: 'enumeration')
        enumeration(nullable: false)
        enumerationValueTranslations(nullable: false)
    }
}

class EnumerationValueTranslation {
    String value
    Language language

    static belongsTo = [enumerationValue: EnumerationValue]

    static constraints = {
        value(nullable: false, blank: false)
        // unique constraint as mentioned in description text
        language(nullable: true, unique: 'enumerationValue')
        enumerationValue(nullable: false)
    }
}
This works pretty well so far. My problem occurs when I update two enumeration-value-translations of the same enumeration-value in a way that the languages interchange. For example, I have an
enumeration-value: "Comedy"
and some translations where the language is "accidentally" mixed up:
translations for "Comedy"
language: German, value: "Comedy"
language: English, value: "Komödie"
If the user recognizes that he mixed up the languages, he might want to swap them and save the enumeration again. And this is where my error occurs, because after swapping the languages, the enumeration-value-translations' unique constraint validates to false.
To debug this I simply tried to print out the error-causing translations before and after I processed the params:
Enumeration enumeration = Enumeration.get(params['id']);

println "before:"
enumeration.enumerationValues.each() { enumValue ->
    enumValue.enumerationValueTranslations.each() { enumValueTr ->
        println enumValueTr;
        if (!enumValueTr.validate()) {
            // print errors...
        }
    }
}

// swap languages:
// (these are the lines of code that are actually executed and cause the
// error; the actual processing of params looks different, of course...)
// sets the language of "Comedy" to English
EnumerationValueTranslation.get(5).language = Language.get(1);
// sets the language of "Komödie" to German
EnumerationValueTranslation.get(6).language = Language.get(2);

println "after:"
enumeration.enumerationValues.each() { enumValue ->
    enumValue.enumerationValueTranslations.each() { enumValueTr ->
        println enumValueTr;
        if (!enumValueTr.validate()) {
            // print errors...
        }
    }
}
which results in:
before:
EnumerationValueTranslation(value: Fantasy, language: en_US, enumerationValue: Fantasy)
EnumerationValueTranslation(value: Phantasie, language: de_DE, enumerationValue: Fantasy)
EnumerationValueTranslation(value: Comedy, language: de_DE, enumerationValue: Comedy)
EnumerationValueTranslation(value: Komödie, language: en_US, enumerationValue: Comedy)
after:
EnumerationValueTranslation(value: Fantasy, language: en_US, enumerationValue: Fantasy)
EnumerationValueTranslation(value: Phantasie, language: de_DE, enumerationValue: Fantasy)
EnumerationValueTranslation(value: Comedy, language: en_US, enumerationValue: Comedy)
validation fails: Property [language] of class [Translation] with value [Language(code: en_US)] must be unique
EnumerationValueTranslation(value: Komödie, language: de_DE, enumerationValue: Comedy)
validation fails: Property [language] of class [Translation] with value [Language(code: de_DE)] must be unique
At this state I haven't deleted or saved (or flushed in any way) anything; this is just the result after altering the objects. As you can see, there really is no inconsistency in the actual data, and the validation shouldn't fail.
Might there be a mistake in the way I change the translations? I just fetched them by ID and simply updated the language. I tried that out in a minimalistic example and it worked there...
It also works if I just create a deep copy of all enumeration-values and enumeration-value-translations and store that instead (which means that the validation really shouldn't fail), but I think this is really not the way it should be done...
Another strange thing is that the validation only fails if I iterate through the data. If I don't touch the data at all, no error occurs, but the data isn't saved either, meaning that the following lines cause the validations to be evaluated at all:
enumeration.enumerationValues.each() { ev ->
    ev.enumerationValueTranslations.each() { evt ->
    }
}
That's why I strongly believe that there must be some non-trivial problem... Please let me know if there is anything else you need to know.
Thanks for any help.
Let me take another try :)
I'm looking at UniqueConstraint.processValidate(), and I suppose that its logic does not consider the exchange case.
Particularly, the code
boolean reject = false;
if (id != null) {
    Object existing = results.get(0);
    Object existingId = null;
    try {
        existingId = InvokerHelper.invokeMethod(existing, "ident", null);
    }
    catch (Exception e) {
        // result is not a domain class
    }
    if (!id.equals(existingId)) {
        reject = true;
    }
}
else {
    reject = true;
}
should iterate the obtained results and verify that the field value STILL violates uniqueness. In case of exchange, the other instance should be picked from a cache and have a new field value.
So I'd suggest you create your own descendant of UniqueConstraint and use it (see the sketch below), unless anyone's going to patch Grails.
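A rough sketch of that approach (assumptions: Grails 2.x validation APIs; the class name, constraint name and override outline are made up, not working code):

import org.codehaus.groovy.grails.validation.ConstrainedProperty
import org.codehaus.groovy.grails.orm.hibernate.validation.UniqueConstraint

// Hypothetical subclass: instead of rejecting as soon as results[0] has a
// different id, it would re-check every returned instance against its current
// (in-session) field value, so a pending swap no longer counts as a violation.
class SwapAwareUniqueConstraint extends UniqueConstraint {
    // override processValidate(target, propertyValue, errors) here:
    // iterate all query results and only reject when another instance
    // still holds the conflicting value.
}

// Register it once at startup (e.g. in BootStrap.groovy) under its own name,
// then use it in the domain class: language(swapAwareUnique: 'enumerationValue')
ConstrainedProperty.registerNewConstraint('swapAwareUnique', SwapAwareUniqueConstraint)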
