When configuring Serilog via a JSON config, it is possible to define log level switches like this:
"LevelSwitches": {
"$appLogLevel": "Debug",
"$netLogLevel": "Information",
"$sysLogLevel": "Error"
},
"MinimumLevel": {
"ControlledBy": "$appLogLevel",
"Override": {
"Microsoft": "$netLogLevel",
"System": "$sysLogLevel"
}
}
The purpose of the switches (when instantiated in code) is to be able to access them later and change the minimum log levels at run-time. However, when they are configured via the JSON config, I can't find a way to access those switch instances. Does anyone know how to access them?
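For context, this is what the code-instantiated variant looks like (a minimal sketch, assuming the console sink package is referenced):

```csharp
using Serilog;
using Serilog.Core;
using Serilog.Events;

var levelSwitch = new LoggingLevelSwitch(LogEventLevel.Debug);

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(levelSwitch)
    .WriteTo.Console()
    .CreateLogger();

// Later, at run-time, without rebuilding the logger:
levelSwitch.MinimumLevel = LogEventLevel.Warning;
```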
My current project required highly configurable logging as well as the ability to adjust any of the configured log levels at runtime.
So I had actually already written a work-around (in a more generalized way) by simply processing the "MinimumLevel" section of the config manually in my Program.cs, like this:
Requires a static dictionary for later reference:
public static Dictionary<String, LoggingLevelSwitch> LogLevel = null;
And a code block to bind the LoggingLevelSwitches:
//Configure logger (optional)
if (appConfig.GetSection("Serilog").Exists()) {
    //Configure Serilog
    LoggerConfiguration logConfig = new LoggerConfiguration().ReadFrom.Configuration(appConfig);
    //If Serilog config parsed okay acquire LoggingLevelSwitches
    LogLevel = LoadLoggingLevelSwitches(appConfig);
    //Bind LoggingLevelSwitches
    foreach (String name in LogLevel.Keys) {
        if (String.Equals(name, "Default", StringComparison.InvariantCultureIgnoreCase)) {
            logConfig.MinimumLevel.ControlledBy(LogLevel[name]);
        } else {
            logConfig.MinimumLevel.Override(name, LogLevel[name]);
        }
    }
    //Build logger from config
    Log.Logger = logConfig.CreateLogger();
}
which utilizes a routine that instantiates all those switches (based on the config file):
public static Dictionary<String, LoggingLevelSwitch> LoadLoggingLevelSwitches(IConfiguration cfg) {
    Dictionary<String, LoggingLevelSwitch> levels = new Dictionary<String, LoggingLevelSwitch>(StringComparer.InvariantCultureIgnoreCase);
    //Set default log level
    if (cfg.GetSection("Serilog:MinimumLevel:Default").Exists()) {
        levels.Add("Default", new LoggingLevelSwitch((LogEventLevel)Enum.Parse(typeof(LogEventLevel), cfg.GetValue<String>("Serilog:MinimumLevel:Default"))));
    }
    //Set log level override(s)
    if (cfg.GetSection("Serilog:MinimumLevel:Override").Exists()) {
        foreach (IConfigurationSection levelOverride in cfg.GetSection("Serilog:MinimumLevel:Override").GetChildren()) {
            levels.Add(levelOverride.Key, new LoggingLevelSwitch((LogEventLevel)Enum.Parse(typeof(LogEventLevel), levelOverride.Value)));
        }
    }
    return levels;
}
I have a separate class that handles applying runtime logging-level changes via these switches, but this was the easiest way to get everything I needed. However...
After writing all that code and then finding out there was a way to add the switches directly from the config with the "LevelSwitches" section, I realized I was probably doubling up on the work. Obviously Serilog must be instantiating and binding its own switches defined in the config... it just doesn't appear to give an easy way to access them later. Which is counter-intuitive, because the whole point of a LoggingLevelSwitch is to reference it later at runtime.
It seems that if switches can be created through the config, we should be given an easy way to access them. Perhaps I should add this as a feature request over on the Serilog GitHub.
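For reference, with the dictionary above, a run-time adjustment is then a one-liner (a sketch; `Program` is assumed to be the class holding the static LogLevel field):

```csharp
using Serilog.Events;

// Tighten the "Microsoft" override at run-time; no logger rebuild needed.
Program.LogLevel["Microsoft"].MinimumLevel = LogEventLevel.Warning;
```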
If you want to access your level switches from code, it probably means that you have a way to control them somehow, so you probably don't need them in the config file in the first place...
I believe it makes more sense to keep that part entirely in code and have the configuration partly in code and partly in the config file, which would look like this:
// in C# code
var appLevelSwitch = new LoggingLevelSwitch(LogEventLevel.Debug);
var netLevelSwitch = new LoggingLevelSwitch(LogEventLevel.Information);
var systemLevelSwitch = new LoggingLevelSwitch(LogEventLevel.Error);

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

Log.Logger = new LoggerConfiguration()
    // load config from config file ...
    .ReadFrom.Configuration(configuration)
    // ... and complete it in C# code
    .MinimumLevel.ControlledBy(appLevelSwitch)
    .MinimumLevel.Override("Microsoft", netLevelSwitch)
    .MinimumLevel.Override("System", systemLevelSwitch)
    .CreateLogger();
and in your config file
{
  "Serilog": {
    "Using": ["Serilog.Sinks.Console"],
    "WriteTo": [
      { "Name": "Console" },
      { "Name": "File", "Args": { "path": "%TEMP%\\Logs\\serilog-configuration-sample.txt" } }
    ],
    "Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"],
    "Destructure": [
      { "Name": "With", "Args": { "policy": "Sample.CustomPolicy, Sample" } },
      { "Name": "ToMaximumDepth", "Args": { "maximumDestructuringDepth": 4 } },
      { "Name": "ToMaximumStringLength", "Args": { "maximumStringLength": 100 } },
      { "Name": "ToMaximumCollectionCount", "Args": { "maximumCollectionCount": 10 } }
    ],
    "Properties": {
      "Application": "Sample"
    }
  }
}
For the sake of completeness, though, here is how you could access the defined level switches (be warned that this is kind of a hack!).
Write a configuration method (i.e. an extension method that can appear after .WriteTo.xxx) that accepts LoggingLevelSwitches as arguments and stores them in static members. That configuration method will register a dummy ILogEventSink that does nothing (and for performance's sake, we can even specify restrictedToMinimumLevel: LogEventLevel.Fatal so that it is almost never called). Then invoke that extension method from the config file (Serilog.Settings.Configuration knows how to find extension methods and pass them parameters) and voilà, you can now access the static switches from your code!
Here is what it would look like:
public static class LevelSwitches
{
    private static LoggingLevelSwitch _switch1;
    private static LoggingLevelSwitch _switch2;
    private static LoggingLevelSwitch _switch3;

    public static LoggingLevelSwitch Switch1 => _switch1 ?? throw new InvalidOperationException("Switch1 not initialized!");
    public static LoggingLevelSwitch Switch2 => _switch2 ?? throw new InvalidOperationException("Switch2 not initialized!");
    public static LoggingLevelSwitch Switch3 => _switch3 ?? throw new InvalidOperationException("Switch3 not initialized!");

    public static LoggerConfiguration CaptureSwitches(
        this LoggerSinkConfiguration sinkConfig,
        LoggingLevelSwitch switch1,
        LoggingLevelSwitch switch2,
        LoggingLevelSwitch switch3)
    {
        _switch1 = switch1;
        _switch2 = switch2;
        _switch3 = switch3;
        return sinkConfig.Sink(
            restrictedToMinimumLevel: LogEventLevel.Fatal,
            logEventSink: new NullSink());
    }
}

public sealed class NullSink : ILogEventSink
{
    public void Emit(LogEvent logEvent)
    {
        // nothing here, that's a useless sink!
    }
}
Then in your JSON config file:
"LevelSwitches": {
"$appLogLevel": "Debug",
"$netLogLevel": "Information",
"$sysLogLevel": "Error"
},
"MinimumLevel": {
"ControlledBy": "$appLogLevel",
"Override": {
"Microsoft": "$netLogLevel",
"System": "$sysLogLevel"
}
},
"WriteTo":[
{
"Name": CaptureSwitches"",
"Args": {
"switch1": "$appLogLevel",
"switch2": "$netLogLevel",
"switch3": "$sysLogLevel",
}
}
]
(you may need a "Using" directive with the name of the assembly containing the LevelSwitches class)
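That is, alongside the "LevelSwitches" section (where "MyAssembly" is a placeholder for your actual assembly name):

```json
"Using": ["MyAssembly"]
```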
Then configure your logger from the config file:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json")
.Build();
var logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
From that point, you should be able to access the switches through LevelSwitches.Switch1, LevelSwitches.Switch2 and LevelSwitches.Switch3.
Related
I upgraded my Umbraco project from 9 to 10 and now my integration tests don't run.
I created a new Umbraco 10 project to do a test and see if I missed some upgrade steps, but the issue also occurs in the brand new project.
I created the following simple test to reproduce the problem:
[TestFixture]
[UmbracoTest(Database = UmbracoTestOptions.Database.NewEmptyPerTest)]
public class TestClass : UmbracoIntegrationTest {
    [Test]
    public void Test1() {
        Assert.AreEqual(1, 1);
    }
}
The test fails to run with the following output:
Test1
Source: TestClass.cs line 21
Duration: 17 ms
Message:
System.ArgumentNullException : Value cannot be null. (Parameter 'config')
TearDown : System.NullReferenceException : Object reference not set to an instance of an object.
Stack Trace:
ChainedBuilderExtensions.AddConfiguration(IConfigurationBuilder configurationBuilder, IConfiguration config, Boolean shouldDisposeConfiguration)
ChainedBuilderExtensions.AddConfiguration(IConfigurationBuilder configurationBuilder, IConfiguration config)
UmbracoIntegrationTest.<CreateHostBuilder>b__5_0(HostBuilderContext context, IConfigurationBuilder configBuilder)
HostBuilder.BuildAppConfiguration()
HostBuilder.Build()
UmbracoHostBuilderDecorator.Build()
UmbracoIntegrationTest.Setup()
--TearDown
UmbracoIntegrationTest.TearDownAsync()
Make sure you have added appsettings.Tests.json in your integration test project, with test database config as follows:
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Umbraco.Cms.Tests": "Information"
    },
    "Console": {
      "DisableColors": true
    }
  },
  "Tests": {
    "Database": {
      "DatabaseType": "SQLite",
      "PrepareThreadCount": 4,
      "SchemaDatabaseCount": 4,
      "EmptyDatabasesCount": 2,
      "SQLServerMasterConnectionString": ""
    }
  }
}
I have added
CustomUmbracoIntegrationTest.cs (from: https://github.com/umbraco/Umbraco-CMS/blob/v10/contrib/tests/Umbraco.Tests.Integration/Testing/UmbracoIntegrationTest.cs)
&
CustomGlobalSetupTeardown.cs (from: https://github.com/umbraco/Umbraco-CMS/blob/v10/contrib/tests/Umbraco.Tests.Integration/GlobalSetupTeardown.cs)
in my project instead of these files from the package.
My integration test classes inherit from CustomUmbracoIntegrationTest instead of Umbraco.Cms.Tests.Integration.Testing.UmbracoIntegrationTest; the tests work after this change.
I have configuration properties for the Supplier:
@Data
@NoArgsConstructor
@ConfigurationProperties("sybase.supplier")
public class SybaseSupplierProperties {
    private short canal = 0;
    private int pollSize = 10;
}
I am injecting it on the application:
@SpringBootApplication
@EnableConfigurationProperties(SybaseSupplierProperties.class)
public class SybaseSupplier {
    private final DataSource dataSource;
    private final SybaseSupplierProperties properties;

    @Autowired
    public SybaseSupplier(DataSource dataSource,
                          SybaseSupplierProperties properties) {
        this.dataSource = dataSource;
        this.properties = properties;
    }
}
I have the maven dependency to generate it:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
It is generated as spring-configuration-metadata.json
{
  "groups": [
    {
      "name": "sybase.supplier",
      "type": "br.com.clamed.daflow.apps.sybasesupplier.SybaseSupplierProperties",
      "sourceType": "br.com.clamed.daflow.apps.sybasesupplier.SybaseSupplierProperties"
    }
  ],
  "properties": [
    {
      "name": "sybase.supplier.canal",
      "type": "java.lang.Short",
      "sourceType": "br.com.clamed.daflow.apps.sybasesupplier.SybaseSupplierProperties",
      "defaultValue": 0
    },
    {
      "name": "sybase.supplier.poll-size",
      "type": "java.lang.Integer",
      "sourceType": "br.com.clamed.daflow.apps.sybasesupplier.SybaseSupplierProperties",
      "defaultValue": 10
    }
  ],
  "hints": []
}
application.properties
spring.cloud.stream.function.bindings.intControleSupplier-out-0=output
spring.cloud.function.definition=intControleSupplier
The internal Maven repo is registered.
The app is imported:
app register --name jdbc-sybase-supplier --type source --uri maven://br.com.clamed.cloud.dataflow.apps:jdbc-sybase-supplier:1.0.0-SNAPSHOT
When I use it, the properties do not show. Why?
Not all the properties from spring-configuration-metadata.json are available when the SCDF server retrieves the application properties. This is to limit the number of properties loaded into the UI. But this doesn't mean you can't set those properties as application properties; it just means only the listed ones are offered in the SCDF web UI and in shell completion as application properties for you to choose from.
In your case, to make your SybaseSupplierProperties available, you need to add a dataflow configuration file that specifies what properties should be available for SCDF to retrieve when loading the app.
You need to specify either spring-configuration-metadata-whitelist.properties (deprecated in the recent releases) or dataflow-configuration-metadata-whitelist.properties inside classpath*:/META-INF/, listing the properties classes you want to include as application configuration properties.
For instance, in your case you would need the following content in /META-INF/dataflow-configuration-metadata-whitelist.properties:
configuration-properties.classes=br.com.clamed.daflow.apps.sybasesupplier.SybaseSupplierProperties
You can also check out the documentation on this here
How can I register the scope as InstancePerMatchingLifetimeScope in configuration (JSON/XML), as provided by Autofac? As of now it is throwing an exception: Invalid Scope.
Autofac configuration does not currently support instance per matching lifetime scope. You can see the code here where lifetime scope values are parsed and there is a table in the documentation showing what is currently supported - be sure to scroll to the right in the table so you can see the list of valid values.
I have not tested this solution.
Make sure you used the proper string in the configuration; try the below:
{
"instanceScope": "per-matching-lifetime"
}
I guess you could do it by creating a custom module:
public class CustomModule : Module
{
    public string TagName { get; set; }

    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<CustomType>().InstancePerMatchingLifetimeScope(TagName);
    }
}
Configuration:
{
  "modules": [{
    "type": "MyNamespace.CustomModule, MyAssembly",
    "properties": {
      "TagName": "customRequest"
    }
  }]
}
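For reference, a module declared this way is typically loaded with Autofac.Configuration's ConfigurationModule (a sketch; the file name autofac.json is an assumption):

```csharp
using Autofac;
using Autofac.Configuration;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("autofac.json")
    .Build();

var builder = new ContainerBuilder();
// Registers the modules (e.g. CustomModule) declared in the JSON file.
builder.RegisterModule(new ConfigurationModule(config));
var container = builder.Build();
```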
If this does not help, please provide additional details about the project type (asp.net core/asp.net mvc), exception details, and stack trace.
I have one class which is my token class. In this class I get the token, which is a String. Here is my getToken.dart:
class GetToken {
  String token;

  Future<Null> getData() async {
    var url = "http://192.168.1.39:7070/api/v2/token";
    http.post(url, body: {
      "grant_type": "string",
      "branchcode": "string",
      "password": "string",
      "username": "string",
      "dbname": "string",
      "dbuser": "string",
      "dbpassword": "string",
      "dbtype": "string"
    }).then((response) {
      print("Response Status: ${response.statusCode}");
      //print("Response Body: ${response.body}");
      print('access token is -> ${json.decode(response.body)['access_token']}');
      token = json.decode(response.body)['access_token'];
    });
  }
}
I want to use this token in my GetCari class to get the JSON values from my REST API. Here is my getCari.dart class:
class GetCari {
  getCari() async {
    final response = await http.get("http://192.168.1.39:7070/api/v2/ARPs",
        headers: {HttpHeaders.AUTHORIZATION: token});
    if (response.statusCode == 200) {
      return Cari.fromJson(json.decode(response.body));
    } else {
      throw Exception("Failed to Load");
    }
  }
}
I would like to ask how I can use the token (obtained in getToken.dart) in my getCari.dart class. How can I pass the token variable to other classes?
Please use Dart's top-level functions instead of classes that do not need instantiation.
This is mentioned in the Effective Dart documentation.
token_manager.dart
String _token;

String get token => _token; // To ensure read-only access

Future<Null> setToken() async {
  // set your _token here
}
get_cari.dart
import 'token_manager.dart' as token_manager; // use your import path here

getCari() async {
  // You can now access the token from token_manager in any class like this.
  final String token = token_manager.token;
}
Excerpt
In Java and C#, every definition must be inside a class, so it’s
common to see “classes” that exist only as a place to stuff static
members. Other classes are used as namespaces—a way to give a shared
prefix to a bunch of members to relate them to each other or avoid a
name collision.
Dart has top-level functions, variables, and constants, so you don’t
need a class just to define something. If what you want is a
namespace, a library is a better fit. Libraries support import
prefixes and show/hide combinators. Those are powerful tools that let
the consumer of your code handle name collisions in the way that works
best for them.
If a function or variable isn’t logically tied to a class, put it at
the top level. If you’re worried about name collisions, give it a more
precise name or move it to a separate library that can be imported
with a prefix.
You can pass data to the getCari method by creating an object from the GetCari class. See this:
class GetToken {
  String token;

  Future<Null> getData() async {
    var url = "http://192.168.1.39:7070/api/v2/token";
    http.post(url, body: {
      "grant_type": "string",
      "branchcode": "string",
      "password": "string",
      "username": "string",
      "dbname": "string",
      "dbuser": "string",
      "dbpassword": "string",
      "dbtype": "string"
    }).then((response) {
      print("Response Status: ${response.statusCode}");
      //print("Response Body: ${response.body}");
      print('access token is -> ${json.decode(response.body)['access_token']}');
      token = json.decode(response.body)['access_token'];
      GetCari().getCari(token);
    });
  }
}
class GetCari {
  getCari(String token) async {
    final response = await http.get("http://192.168.1.39:7070/api/v2/ARPs",
        headers: {HttpHeaders.AUTHORIZATION: token});
    if (response.statusCode == 200) {
      return Cari.fromJson(json.decode(response.body));
    } else {
      throw Exception("Failed to Load");
    }
  }
}
You can use a service locator like get_it to reuse values anywhere in your app.
Create the class that will hold your shared information
class GetToken{
String token;
...
}
In your main.dart file, set up your GetIt instance:
final getIt = GetIt.instance;
void setup() {
getIt.registerSingleton<GetToken>(GetToken());
}
Now, from anywhere in your app, you can call this class and get your token.
var mytoken = getIt.get<GetToken>().token;
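Putting it together (a sketch; setup() must run before the first lookup, and GetToken is the holder class from above):

```dart
void main() {
  setup(); // register the GetToken singleton first

  // Anywhere later in the app:
  getIt.get<GetToken>().token = "my-access-token";
  var mytoken = getIt.get<GetToken>().token;
  print(mytoken);
}
```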
I have a huge Neo4j database that I created using the batch import tool. Now I want to expose certain parts of the data via APIs (that will run a query in the backend) to my users. My requirements are pretty general:
1. Latency should be minimum
2. Support qps of about ~10-20.
Can someone give me recommendations on what I should use for this, and any documentation on how to go about it? I see several examples of Ruby/Rails and REST APIs; however, they are specific to exposing the data as-is, without any complex queries in the backend. I am not sure how to translate that into the specific APIs that I want. Any help would be appreciated.
Thanks.
I wrote a simple Flask API example that interfaces with Neo4j for a simple demo (backend for a messaging iOS app).
You might find it a helpful reference: https://github.com/johnymontana/messages-api
There are also a few resources online for using Flask with Neo4j:
http://nicolewhite.github.io/neo4j-flask/
http://neo4j.com/blog/building-python-web-application-using-flask-neo4j/
https://github.com/nicolewhite/neo4j-flask
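To make the Flask shape concrete, here is a minimal sketch of such an endpoint. The find_person helper and its fake data are assumptions standing in for a real parameterized Cypher call through the official Neo4j driver (e.g. session.run("MATCH (p:Person {name: $name}) RETURN p", name=name)):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real Neo4j lookup; a real backend would run a parameterized
# Cypher query via the official driver instead of reading from a dict.
FAKE_DB = {"Michal": {"name": "Michal", "age": 32}}

def find_person(name):
    return FAKE_DB.get(name)

@app.route("/person/<name>")
def get_person(name):
    person = find_person(name)
    if person is None:
        return jsonify(error="not found"), 404
    return jsonify(person)
```

Swapping find_person for a driver-backed query keeps the HTTP layer unchanged, which makes the endpoint easy to test.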
Check out the GraphAware Framework. You can build the APIs directly on top of Neo4j (same JVM), but you have to use Cypher, Java, or Scala.
I'd start with Cypher because you can write it very quickly, then optimise for performance, and finally, if all else fails and your latency is still too high, convert to Java.
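For instance, a person's immediate graph (matching the WORKS_FOR test data further down) is a few lines of parameterized Cypher:

```cypher
MATCH (p:Person {name: $name})-[w:WORKS_FOR]->(c:Company)
RETURN p.name AS person, type(w) AS rel, c.name AS company
```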
You can expose subgraphs (or even partially hydrated nodes and relationships, i.e. only certain properties) very easily. Check out the stuff in the api package. Example code:
You'd write a controller to return a person's graph, but only include the nodes' names (not ages or anything else):
@RestController
public class ApiExample {

    private final GraphDatabaseService database;

    @Autowired
    public ApiExample(GraphDatabaseService database) {
        this.database = database;
    }

    @RequestMapping(path = "person/{name}")
    public JsonGraph getPersonGraph(@PathVariable(value = "name") String name) {
        JsonGraph<?> result = new JsonGraph() {
            @Override
            protected JsonGraph self() {
                return this;
            }
        };

        try (Transaction tx = database.beginTx()) {
            Node person = database.findNode(label("Person"), "name", name);
            if (person == null) {
                throw new NotFoundException(); //eventually translate to 404
            }

            result.addNode(person, IncludeOnlyNameNodeTransformer.INSTANCE);
            for (Relationship worksFor : person.getRelationships(withName("WORKS_FOR"), Direction.OUTGOING)) {
                result.addRelationship(worksFor);
                result.addNode(worksFor.getEndNode(), IncludeOnlyNameNodeTransformer.INSTANCE);
            }
            tx.success();
        }

        return result;
    }

    private static final class IncludeOnlyNameNodeTransformer implements NodeTransformer<LongIdJsonNode> {

        private static final IncludeOnlyNameNodeTransformer INSTANCE = new IncludeOnlyNameNodeTransformer();

        private IncludeOnlyNameNodeTransformer() {
        }

        @Override
        public LongIdJsonNode transform(Node node) {
            return new LongIdJsonNode(node, new String[]{"name"});
        }
    }
}
Running this test
public class ApiExampleTest extends GraphAwareApiTest {

    @Override
    protected void populateDatabase(GraphDatabaseService database) {
        database.execute("CREATE INDEX ON :Person(name)");
        database.execute("CREATE (:Person {name:'Michal', age:32})-[:WORKS_FOR {since:2013}]->(:Company {name:'GraphAware', est:2013})");
    }

    @Test
    public void testExample() {
        System.out.println(httpClient.get(baseUrl() + "/person/Michal/", 200));
    }
}
would return the following JSON
{
  "nodes": [
    {
      "properties": {
        "name": "GraphAware"
      },
      "labels": [
        "Company"
      ],
      "id": 1
    },
    {
      "properties": {
        "name": "Michal"
      },
      "labels": [
        "Person"
      ],
      "id": 0
    }
  ],
  "relationships": [
    {
      "properties": {
        "since": 2013
      },
      "type": "WORKS_FOR",
      "id": 0,
      "startNodeId": 0,
      "endNodeId": 1
    }
  ]
}
Obviously you can roll your own using frameworks like Rails/Sinatra. If you want a standard for the way your API is formatted, I quite like the JSON API standard:
http://jsonapi.org/
Here is an episode of The Changelog podcast talking about it:
https://changelog.com/189/
There's also a gem for creating resource objects which determine what is exposed and what is not:
https://github.com/cerebris/jsonapi-resources
I tried it out a bit with the neo4j gem and it works at a basic level, though once you start getting into includes there seem to be some dependencies on ActiveRecord. I'd love to see issues like that worked out, though.
You might also check out the GraphQL standard which was created by Facebook:
https://github.com/facebook/graphql
There's a Ruby gem for it:
https://github.com/rmosolgo/graphql-ruby
And, of course, another episode of The Changelog ;)
http://5by5.tv/changelog/149
Various other API resources for Ruby:
https://github.com/webmachine/webmachine-ruby
https://github.com/ruby-grape/grape
Use grest.
You can simply define your primary model(s) and their relation(s) (as secondary) and build an API with minimal coding, as quickly as possible!