I'm using log4j2 to create logs for my Java application.
In the log4j2 properties file I have, among others, the following settings:
appender.console.layout.type = JSONLayout
appender.console.layout.charset = UTF-8
appender.console.layout.complete = false
appender.console.layout.compact = true
Log entries are currently written as follows:
{entry0}, {entry1}, ...
I would like each entry to be logged on its own line, separated by a newline character, like this:
{entry0}
{entry1}
...
How can I make log4j2 separate JSON entries with newline characters, while still maintaining compact mode?
Use eventEol:
appender.console.layout.eventEol = true
According to the docs:
eventEol: If true, the appender appends an end-of-line after each record. Defaults to false. Use with eventEol=true and compact=true to get one record per line.
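Combined with the settings from the question, the layout section of the properties file becomes:
appender.console.layout.type = JSONLayout
appender.console.layout.charset = UTF-8
appender.console.layout.complete = false
appender.console.layout.compact = true
appender.console.layout.eventEol = true
With compact=true each event is serialized without pretty-printing, and eventEol=true appends a newline after it, giving exactly one JSON record per line.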
I have a similar issue.
I copied filetype_extensions.conf into my ~/.config/geany directory and edited it, adding:
CALIBRE=*.rul;*.svrf;*.SVRF;*.cal;
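For reference, this entry belongs under the [Extensions] group of that file, so the edited section looks roughly like:
[Extensions]
CALIBRE=*.rul;*.svrf;*.SVRF;*.cal;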
Then under ~/.config/geany/filedefs I created the following files:
filetypes.CALIBRE.conf ==> my custom filetype definition
filetypes.common ==> I wanted specific colored named_styles
# For complete documentation of this file, please see Geany's main documentation
[styling]
comment=svrf_comment
key=svrf_keyword_comment,bold
[settings]
# default extension used when saving files
extension=svrf
lexer_filetype=NONE
[keywords]
# all items must be in one line
svrf=EXT ENC INT EXPAND
# the following characters are these which a "word" can contains, see documentation
#wordchars=_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789
# single comments, like / in this file
comment_single=//
# multiline comments
#comment_open=/*
#comment_close=*/
# set to false if a comment character/string should start at column 0 of a line, true uses any
# indentation of the line, e.g. setting to true causes the following on pressing CTRL+d
#command_example();
# setting to false would generate this
# command_example();
# This setting works only for single line comments
comment_use_indent=true
# context action command (please see Geany's main documentation for details)
context_action_cmd=
[indentation]
#width=4
# 0 is spaces, 1 is tabs, 2 is tab & spaces
#type=1
[build-menu]
# %f will be replaced by the complete filename
# %e will be replaced by the filename without extension
# (use only one of it at one time)
#FT_02_LB=_Lint
#FT_02_CM=jshint "%f"
#FT_02_WD=
#error_regex=([^:]+): line ([0-9]+), col ([0-9]+)
However, when I open an svrf file, my custom filetype is not recognized (no specific colors, although I chose some styling).
If I choose [styling=C] and lexer_filetype=C, I get coloring for "C" code...
I also tried [styling] and lexer_filetype=NONE, but once again my custom highlighting is not recognized.
I already read the Geany manual and looked at some posts, but none of them completely answers this subject (in the second Stack Overflow link below, the user mapped to an existing filetype, hence he did not get the behavior he wanted):
geany custom filetype .sass for syntax highlighting
Geany: Syntax highlighting for custom filetype for SOME words
Do you have any idea how to solve this issue?
What I am doing:
I am using the gmail gem in a Rails 4 app to fetch email attachments from a specific account at regular intervals. Here is an extract from the core part (for simplicity, only considering the first email and its first attachment):
require 'gmail'
require 'stringio'

Gmail.connect(@user_email, @user_password) do |gmail|
  if gmail.logged_in?
    emails = gmail.inbox.emails(:from => @sender_email)
    email = emails[0]
    attachment = email.message.attachments[0]
    File.open("~/temp.csv", 'w') do |file|
      file.write(
        StringIO.new(attachment.decoded.to_s[2..-2].force_encoding("ISO-8859-15").encode!('UTF-8')).read
      )
    end
  end
end
The encoding of the attached file can vary. The particular one that I am currently having issues with is in Finnish. It contains Finnish characters and a superscript three (³).
This is what I expect to get when I run the above code (and what I get when I download the attachment manually through the Gmail user interface):
What the problem is:
However, I am getting the following odd results.
From cat temp.csv (this looks good to me):
With nano temp.csv (here I have no idea what I am looking at):
This is what temp.csv looks like opened in Sublime Text (directly via WinSCP); the first line and small parts look OK, but then come Chinese/Japanese characters:
This is what temp.csv looks like in Notepad (after downloading via WinSCP); it looks OK except that a blank space has been inserted between each character and the newlines seem to be missing:
What I have tried:
I have without success tried:
.force_encoding(...) with all the different "ISO-8859-x" character sets
putting the force_encoding("ISO-8859-15").encode!('UTF-8') outside the .read (runs, but doesn't solve the problem)
encoding to UTF-8 without first forcing another encoding, but this leads to Encoding::UndefinedConversionError: "\xC4" from ASCII-8BIT to UTF-8
writing as binary with 'wb' and 'w+b' in File.open() (which, oddly, doesn't seem to make a difference to the outcome)
searching Stack Overflow and the web for other ideas.
Any ideas would be much appreciated!
Not beautiful, but it will work for me for now.
After re-encoding, I convert the string to a character array, remove the characters I do not want, and join the remaining array elements back into a string:
decoded_att = attachment.decoded
# Re-encode, replacing anything that cannot be converted, and normalize line endings.
data = decoded_att.encode("UTF-8", "ISO-8859-1", invalid: :replace, undef: :replace).gsub("\r\n", "\n")
# Drop the unwanted characters, then rebuild the string.
data_as_array = data.chars
data_as_array = data_as_array.delete_if { |i| i == "\u0000" || i == "ÿ" || i == "þ" }
data = data_as_array.join
File.write("~/temp.csv", data)
This will work for me for now. However, I have no idea how these characters ended up in the attachment ("ÿ" and "þ" at the start of the document, and "\u0000" between all remaining characters).
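For what it's worth, those symptoms look like UTF-16LE: the bytes 0xFF 0xFE at the start are the UTF-16LE byte order mark (displayed as "ÿþ" when read as Latin-1), and the "\u0000" between characters are the high bytes of UTF-16LE code units. If that is the case, decoding the attachment as UTF-16LE would be cleaner than deleting characters (a sketch, not tested against the actual attachment):
# Assumes the attachment really is UTF-16LE with a BOM.
data = attachment.decoded.force_encoding("UTF-16LE").encode("UTF-8")
data = data.sub(/\A\uFEFF/, "").gsub("\r\n", "\n") # strip BOM, normalize line endings
File.write("temp.csv", data)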
It seems like you need to use attachment.body.decoded instead of attachment.decoded.
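That is, with the variables from the question:
attachment = email.message.attachments[0]
raw = attachment.body.decoded # rather than attachment.decoded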
When I run my code, I get the following error:
Syntax only allowed with -v Eval.EnableHipHopSyntax=true in /var/web/site/myfile.php on line 26
myfile.php has a function at that line, which looks like this:
public static function set (
string $theme // <str> The theme to set as active.
, string $style = "default" // <str> The style that you want to set.
, string $layout = "default" // <str> The layout that you want to assign.
): string // RETURNS <str>
The last line, ): string, is valid syntax in the Hack language, but for some reason HHVM decided to brilliantly disable its own syntax by default.
I can't seem to find any HHVM documentation that explains how to set that configuration option. How can one go about this?
Edit:
It turns out my HHVM conversion tool was not converting <?php to <?hh as I had instructed it to, because it had already converted itself. In other words, it was attempting to convert <?hh to <?hh, which did me no good.
I had mistakenly assumed that HHVM was disabling it for <?hh tags, which was not the case.
This syntax is part of Hack, but you have a PHP file. If you change the opening tag from <?php to <?hh, it'll work.
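For example, a minimal sketch of the function from the question with the tag changed (the class name and body are made up for illustration):
<?hh // was: <?php
class Theme {
  // With the <?hh tag, Hack parameter and return type annotations
  // parse without Eval.EnableHipHopSyntax.
  public static function set(
    string $theme,
    string $style = "default",
    string $layout = "default"
  ): string {
    return $theme . "/" . $style . "/" . $layout;
  }
}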
Alternatively, you can add hhvm.enable_hip_hop_syntax = true to /etc/hhvm/php.ini.
I'd like to create a type using the FSharp.Data.CsvProvider (v1.1.10) to process CSV files with a ";" separator and a predefined schema.
The following line reports an error:
type CsvType1 = CsvProvider<Sample="1;2;3", Separator=";", Schema="category (string), id (string), timestamp (string)">
The error is:
Specified argument is neither a file, nor well-formed CSV: Could not find file '...\1;2;3'.
Setting Sample to "" or null, or not setting it at all, produces other errors.
Using a separator of "," and a sample of "1,2,3" works fine, but then it cannot read my CSV files.
What am I doing wrong?
This is a bug in FSharp.Data (fixed in 2.0.0-alpha3): it thinks 1;2;3 is a file name and doesn't try to parse it as a CSV snippet. You can use the following instead, which will work:
CsvProvider<Sample="category (string); id (string); timestamp (string)", Separator=";">
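A quick usage sketch on top of that definition (the file name is illustrative, and this assumes the v1.x API where rows are exposed via .Data; in 2.x this became .Rows):
type CsvType1 = CsvProvider<Sample="category (string); id (string); timestamp (string)", Separator=";">

let csv = CsvType1.Load("data.csv")
for row in csv.Data do
    printfn "%s %s %s" row.Category row.Id row.Timestamp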
Looks like a bug in the CSV provider: the text parser doesn't support custom separators for sample texts.
"," is not allowed in CSV file URIs, so 1,2,3 is correctly treated as a text sample; ";" is allowed, so 1;2;3 is treated as a file name.
I am using Tika to extract text from a PDF file that has a lot of tables.
java -jar tika-app-0.9.jar -t https://s3.amazonaws.com/centraldoc/alg1.pdf
It returns some invalid text, and sometimes it trims the whitespace between two words; for example, it returns
"qu inakli fmyathematical ideas to the real world" instead of "Link mathematical ideas to the real world".
Is there a way to minimize this kind of error? Or is there another library that I can use? Does it make sense to use OCR to process this kind of PDF?
Try to control the ordering when using the PDFBox parser: PDFTextStripper has a flag that controls the order of lines in the document. By default (in PDFBox) it is set to false for performance reasons (no order preserved), but Tika has changed its behavior between releases, switching this flag on and off.
More details on exactly this problem are in my blog post Extracting text from PDF files with Apache Tika 0.9 (and PDFBox under the hood).
To get the text of a PDF in the right order, I had to set the sortByPosition flag to true (tika-app-1.19.jar):
import java.io.FileInputStream;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.pdf.PDFParser;
import org.apache.tika.parser.pdf.PDFParserConfig;
import org.apache.tika.sax.BodyContentHandler;

BodyContentHandler handler = new BodyContentHandler();
Metadata metadata = new Metadata();
ParseContext context = new ParseContext();
PDFParser pdfParser = new PDFParser();
PDFParserConfig config = pdfParser.getPDFParserConfig();
config.setSortByPosition(true); // needed for text in correct order
pdfParser.setPDFParserConfig(config);
FileInputStream is = new FileInputStream("alg1.pdf"); // 'is' was undefined in the original snippet
pdfParser.parse(is, handler, metadata, context);