I would like Logger to have my custom logging level method implemented. For example, I would like to call log.custom("custom level log"). According to the documentation it is possible, but there are not enough hints for me. Can someone help me understand what this command does, exactly?
java -cp log4j-core-2.8.jar \
org.apache.logging.log4j.core.tools.Generate$ExtendedLogger \
com.mycomp.ExtLogger DIAG=350 NOTICE=450 VERBOSE=550 > com/mycomp/ExtLogger.java
What steps should I take after this command exits successfully? What exactly should I swap and where?
What the tool does is generate source code that you can include in your project. The intention is that you use the generated class instead of the standard Log4j2 Logger.
Before running the tool, you need to decide what to call your custom levels and where they rank relative to the existing levels. The manual page shows a table with the int values of the built-in levels; the int value of your custom level will probably fall between two of these values.
In the quoted example, the tool will generate a class named ExtLogger in the com.mycomp package that extends the standard Log4j2 Logger with three custom levels (DIAG, NOTICE and VERBOSE). DIAG's int value is 350 so it sits between WARN (300) and INFO (400).
The tool writes the generated source code to the console. The example shows how you can redirect that output to a file. You can then include this file in your project.
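Once the generated file is compiled into your project, you use the generated class in place of the standard Logger. Per the custom log levels page of the Log4j2 manual, usage looks like this:

package com.mycomp;

public class MyService {
    // ExtLogger.create() plays the role of LogManager.getLogger() for the
    // generated class.
    private static final ExtLogger logger = ExtLogger.create(MyService.class);

    public void doSomething() {
        // The generator emits one family of methods per custom level, next to
        // the usual trace/debug/info/warn/error methods.
        logger.diag("a diagnostic message");
        logger.notice("a notice-level message");
        logger.verbose("a verbose message");
    }
}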
We're logging to Elastic through the Serilog dotnet package and have noticed that properties we add to the Properties part of the log events seem to be suffixed with _ (underscore) and then some type identifier. For example:
This causes some issues for us in the indexing process. I've been looking through the Serilog library code to try to find the source of this but without any luck. Can someone point me in the right direction as to where the suffix is added to the property names?
It looks like you're overriding ElasticsearchSinkOptions.CustomFormatter with code that does this when constructing the sink. Using the default formatter (leaving this unset) should not result in this style of property name.
If this doesn't cover it, please post as much as you can to show how the sink is being configured in your app.
See also:
https://github.com/serilog-contrib/serilog-sinks-elasticsearch#a-note-about-fields-inside-elasticsearch
https://github.com/serilog-contrib/serilog-sinks-elasticsearch/issues/184
I want to get access to an external YAML file which I specify through a command-line argument:
java -jar target/app-thorntail.jar -s./test.yaml
I need to use this file to read my custom properties tree with SnakeYAML.
You can use @Inject @ConfigurationValue for your custom properties, and you can @Inject a ConfigView to read the entire configuration tree. I believe that should be enough for your use case. This approach will also provide correct values in case multiple configuration files are used.
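A minimal sketch of that injection style, assuming the WildFly Swarm/Thorntail SPI (the import paths below are my assumption; adjust them to the version you use):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.wildfly.swarm.spi.api.annotations.ConfigurationValue;
import org.wildfly.swarm.spi.api.config.ConfigView;

@ApplicationScoped
public class MyProperties {

    // A single value resolved from the merged configuration, including any
    // file passed with -s on the command line.
    @Inject
    @ConfigurationValue("my.custom.property")
    private String myProperty;

    // ConfigView exposes the entire merged configuration tree.
    @Inject
    private ConfigView configView;
}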
I'm not sure if you can get access to the file itself, except maybe by providing a custom main method and parsing the command-line arguments yourself.
I am busy making a Sublime Text plugin/package that will ease development of Lua scripts in my workplace.
We have several Lua files with different extensions depending on their purpose. I want ST3 to apply the proper Lua syntax to these files.
I know you can set ST3 to remember the syntax for a specific file extension, and this is saved as (in my case) a Lua.sublime-settings file in AppData\Roaming\Sublime Text 3\Packages\User.
However, if I put this file in my new plugin's folder, it's ignored.
Am I doing something wrong or is what I want not possible?
This doesn't work because syntax-specific settings are only loaded from the package that defines the syntax and from the User package (so the user can customize them).
The following is a quote from the official documentation on settings:
Settings files are consulted in this order:
1. Packages/Default/Preferences.sublime-settings
2. Packages/Default/Preferences (<platform>).sublime-settings
3. Packages/User/Preferences.sublime-settings
4. <Project Settings>
5. Packages/<syntax>/<syntax>.sublime-settings
6. Packages/User/<syntax>.sublime-settings
7. <Buffer Specific Settings>
The only places where <syntax> is referenced are the package itself and the User package.
If I had to guess, I would say this is because, outside of the original package author that defined the syntax, all other settings are considered user customizations, and those need to be in the User package (specifically in the root of it) to ensure they're loaded last.
A simple (but undesirable) solution would be to document for the user that they have to take this step manually.
Another approach would be to add some plugin code that extends the settings when your plugin is loaded:
import sublime

def plugin_loaded():
    # Add our extension to the user's syntax-specific Lua settings.
    settings = sublime.load_settings("Lua.sublime-settings")
    extensions = settings.get("extensions", [])
    if "blarb" not in extensions:
        extensions.append("blarb")
        settings.set("extensions", extensions)
        sublime.save_settings("Lua.sublime-settings")
If you go this route you may want to include an extra sentinel setting somewhere (in settings specific to your package or some such) that records whether you've already done this, instead of just forcing the setting in as the example above does.
In practice you would then check whether that sentinel is set rather than forcing the extension in, so that if the user decides to use some other syntax for your files you're not forcing them back into the Lua syntax.
It's also possible to define your own syntax that just embeds the standard Lua syntax, which allows this to Just Work™ without having to write any code or have the user do anything:
%YAML 1.2
---
name: Blarb
scope: source.lua
file_extensions:
  - blarb
contexts:
  main:
    - include: scope:source.lua
When you do this, the scope in the file will still be source.lua, because that's what the scope key in the syntax file says, and the status line will show the syntax name as Blarb. You could modify either of those to change the top-level scope or the displayed name, if desired.
An example would be to change the scope to source.blarb so that you could create key bindings/snippets that only apply to Lua files of your specific variant.
A potential downside (or feature) of this is that since the name of the syntax-specific settings file comes from the name of the file that provides the syntax, any Lua-specific settings the user has won't apply to your Blarb files by default.
Similarly, anything that's specific to Lua by checking for a scope of source.lua won't work in Blarb files for the same reason, which may or may not be an issue.
I am attempting to build a Dataflow pipeline to process a text file which contains events that span multiple lines. The Dataflow SDK TextIO class assumes each line is a new event.
My plan is to create a new TextReader and register it with the DataPipelineRunner. This new reader will know how to aggregate the multiple lines into a single line.
I am pretty sure that this approach will work but I am wondering if this is the right way to do it or if there is a simpler solution?
The text I am trying to parse is:
==============> len:45 pktype:4 mtype:2
SYMBOL: USOCSTIA151632.00
OPEN_INT: 212
PR_OPEN_INTEREST: 212
TIME_STAMP: 04/10/2015 06:30:17:420 val:1428661817
The result should be the last 4 lines concatenated together and the first line dropped.
Best regards,
Peter
Note that TextReader is an internal implementation detail class, so subclassing it would be highly discouraged and challenging to do properly.
The recommended way to define a new file-based format like yours is to subclass FileBasedSource using the user-defined source API.
In your case, I would recommend basing your class on the LineIO example from the documentation, and wrapping the LineReader defined there into your own class which uses LineReader as a helper for reading individual lines, but:
In startReading() it would skip ahead to the first line starting with "====>".
In readNextRecord() it would read lines until the next "====>" and bundle them into a single record (sketched below).
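To illustrate just the line-handling part (plain java.io here as a stand-in; in the actual FileBasedReader this logic would run incrementally inside readNextRecord(), using the LineReader helper rather than a BufferedReader):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class MultiLineRecordDemo {
    // Aggregates the lines between "====>" delimiters into single records,
    // dropping the delimiter lines themselves.
    static List<String> aggregate(BufferedReader in) throws IOException {
        List<String> records = new ArrayList<>();
        StringBuilder record = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            if (line.startsWith("==============>")) {
                // Delimiter starts a new event: emit the record collected so far.
                if (record.length() > 0) {
                    records.add(record.toString());
                    record.setLength(0);
                }
                continue;
            }
            if (record.length() > 0) record.append(' ');
            record.append(line);
        }
        if (record.length() > 0) records.add(record.toString());
        return records;
    }

    public static void main(String[] args) throws IOException {
        String input = "==============> len:45 pktype:4 mtype:2\n"
                + "SYMBOL: USOCSTIA151632.00\n"
                + "OPEN_INT: 212\n";
        System.out.println(aggregate(new BufferedReader(new StringReader(input))));
    }
}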
Please make sure to carefully read the documentation for FileBasedSource and FileBasedReader: the parallelization mechanism relies on the consistency properties described there, which your format has to satisfy to ensure that records are not duplicated or omitted on the boundaries between adjacent processing shards. The XmlSource tests are a good example of how to unit-test these properties.
Please tell us how it goes and report back with any problems or questions - we are very interested in feedback on this API.
I use Java with saxonee-9.5.1.6.jar included in the build path; when I run, I get these errors at different times.
Error at xsl:import-schema on line 6 column 169 of stylesheet.xslt:
XTSE1650: net.sf.saxon.trans.LicenseException: Requested feature (xsl:import-schema)
requires Saxon-EE
Error on line 1 column 1
SXXP0003: Error reported by XML parser: Content is not allowed in prolog.
javax.xml.transform.TransformerConfigurationException: Failed to compile stylesheet. 1 error detected.
I opened the .xslt file in a hex editor and don't see any unexpected characters at the beginning. Also, I use TransformerFactory in a different project and don't get any errors there.
Check what the implementation class of tFactory is. My guess is it is probably net.sf.saxon.TransformerFactoryImpl - which is basically the Saxon-HE version.
When you use JAXP like this, you're very exposed to configuration problems, because it loads whatever it finds sitting around on the classpath, or is affected by system property settings which could be set in parts of the application you know nothing about.
If your application depends on particular features, it's best to load a specific TransformerFactory, e.g. tFactory = new com.saxonica.config.EnterpriseTransformerFactory().
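For example (a minimal sketch; the stylesheet path is a placeholder):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class SaxonFactoryCheck {
    public static void main(String[] args) throws Exception {
        // Instantiate the Saxon-EE factory directly instead of relying on
        // the JAXP classpath lookup.
        TransformerFactory tFactory = new com.saxonica.config.EnterpriseTransformerFactory();

        // Print the implementation class to confirm what is actually in use.
        System.out.println(tFactory.getClass().getName());

        Transformer transformer = tFactory.newTransformer(new StreamSource("stylesheet.xslt"));
    }
}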
I don't know whether your stylesheet expects the source document to be validated against the schema, but if it does, note that this isn't automatic: you can set properties on the factory to make it happen.
I would recommend using Saxon's s9api interface rather than JAXP for this kind of thing. The JAXP interface was designed for XSLT 1.0, and it's a real stretch to use it for some of the new 2.0 features like schema-awareness: it can be done, but you keep running into limitations.