Is there any way to add more colors to Log4j2's output via log4j2.xml? - log4j2

I know that I can already specify ANSI colors via %style, which has a limited set of supported colors:
The color names are ANSI names defined in the AnsiEscape class.
I would like to add additional colors that are normally available at my Linux shell to the live terminal console via log4j2.xml. Is there any way to output additional color explicitly, possibly with RGB or hex, by modifying only log4j2.xml?
I'm not concerned with portability. I tried using \e[1;34m and \e[31m, but the escape sequence doesn't pass through to the output.

This feature was introduced in version 2.18.0 (cf. LOG4J2-3538): you can use the specifiers #RRGGBB, FG_#RRGGBB or BG_#RRGGBB in both the %style and %highlight patterns (cf. documentation).
E.g.:
%style{Hello world!}{#abcdef}
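For instance, a log4j2.xml along these lines (the appender and the rest of the pattern are only illustrative) styles the level and the message with hex colors:

<Configuration>
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss} %style{%-5level}{FG_#ffa500} %style{%msg}{#abcdef}%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

On versions before 2.18.0 only the named ANSI colors from the AnsiEscape class are available, so the hex specifiers will not work there.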

Related

Why harfbuzz shape 2 single char into one glyph?

I'm new to both Skia and HarfBuzz; my project relies on Skia to render text (and Skia relies on HarfBuzz to shape it).
If I try to render the text "ff", "fl" or "fi" (and maybe some other combinations), then instead of rendering two separate characters, Skia renders one glyph composed of the two characters. This becomes much more obvious if I set the letter-spacing property.
By following breakpoints, I tracked this down to HarfBuzz's shaping result: HarfBuzz returns a single glyph when the text is "ff", "fl" or "fi".
It seems I can avoid this with some HarfBuzz configuration, but I don't know how; please give me some hints.
PS: The shaping result differs with different font files, so this is also related to the font I use to shape.
What you are observing is the result of ligature glyph substitutions that occur during text layout.
Harfbuzz is performing advanced text layout using OpenType Layout features in a font. OpenType features are a mechanism for enabling different typographic capabilities that are supported in the font.
Some features are required for correct display of a script. For example, certain features are used to trigger word-contextual forms in Arabic script, or to trigger positioning of vowel marks in Bangla script or diacritic marks in Latin script. These are mandatory for correct display of these scripts.
Other features trigger optional typographic capabilities of a font—they're not essential for correct display of the script, but may be desired for high quality typography. Small caps or superscript forms are two examples of optional features. Many optional features should not be used in applications by default. For instance, small caps should only be used when the content author explicitly wants them.
But in OpenType some optional features are recommended for use by default since they are intended to provide good quality typography for any body text. One such feature is "Standard Ligatures".
Your specific cases, "ff", "fi", etc., are considered standard ligatures. In any publication that has high quality typography, these will always be used in body text. Because the OpenType spec recommends that Standard Ligatures be applied by default, that's exactly what Harfbuzz is doing.
You can read the Harfbuzz documentation to find out more about how to enable or disable OpenType features. And you can find descriptions of all OpenType features in the OpenType Layout Tag Registry (part of the OpenType spec).
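For example, if you are calling HarfBuzz directly (Skia's shaping wrappers expose their own feature controls), a minimal sketch that turns the Standard Ligatures feature ("liga") off for the whole run could look like this; how you create the hb_font_t and consume the glyphs is up to you:

#include <hb.h>

void shape_without_standard_ligatures(hb_font_t *font, const char *text)
{
    hb_buffer_t *buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, text, -1, 0, -1);
    hb_buffer_guess_segment_properties(buf);

    /* "liga=0" disables Standard Ligatures only; required ligatures and other
       substitutions the font needs for correct display stay active. */
    hb_feature_t no_liga;
    hb_feature_from_string("liga=0", -1, &no_liga);

    hb_shape(font, buf, &no_liga, 1);

    /* ... read the results with hb_buffer_get_glyph_infos()/_positions() ... */

    hb_buffer_destroy(buf);
}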
OpenType features use data contained directly in the fonts. Harfbuzz will enable the Standard Ligatures feature by default, but not all fonts necessarily have data that pertains to that feature. That's why you see the behaviour with some fonts but not others.
When a font does support features, the font data describe glyph-level actions to be performed. Harfbuzz (or any OpenType layout engine) will read the data and then perform the described actions. There are several types of actions that can be performed. One is a ligature substitution—that is, substituting a sequence of glyphs with a single glyph, the ligature glyph. Ligature substitution actions can be used in fonts for a variety of purposes. Forming a "ff" ligature is one example. But a font might also substitute the default glyphs for a base letter and a following combining mark with a single glyph that incorporates the base letter and the mark, with the precise positioning of the mark for that combination. That is something that would be essential for correct display of the script, not something that should be optional.
Thus, it would be a bad idea to disable all ligature substitutions. That's why OpenType has features as a trigger/control mechanism: features are organized around distinct typographic results, not the specific glyph-level actions used to achieve those results. So, you could disable a feature like Standard Ligatures without blocking ligature substitution actions that get used by the font for other typographic purposes.

Are there Ansi escape sequences for superscript and subscript?

I'm playing around with ANSI escape sequences, e.g.
echo -e "\e[91mHello\e[m"
on a Linux console to display colored text.
Now I try to use superscript and subscript output like a=b².
I read here and here about Partial Line Down (subscript) and Partial Line Up (superscript), but I'm not sure about the exact syntax, or even which terminal clients might support this.
Any suggestions about this?
Possibly some commercial product supports it, but it's not supported by any terminal emulator you'll encounter (unless someone modifies one just to prove a point).
The standard describes possible escape sequences, but there is no requirement that any given sequence is supported by any terminal. There are commonly supported (and assumed) sequences such as clearing the screen, but even for that, not all terminals have supported the feature.
The reason is that terminal emulators are generally used with applications (such as text editors) which assume a regular set of rows/columns, and that the text is shown compactly (no extra space such as would be needed to allow for partial-line movement). Back in the day when people used typewriters, it was common to have 1.5 or 2.0 line spacing and get no more than 33 lines on a page. That changed long ago.
The need for subscripts/superscripts didn't go away; Unicode provides a usable set of characters with that representation (see the Superscripts and Subscripts block, range U+2070–U+209F).
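For example, in a UTF-8 terminal you can just print those characters (the \u escapes need a reasonably recent bash/printf, and your font must have the glyphs):

printf 'a=b\u00b2\n'    # a=b²   (U+00B2 SUPERSCRIPT TWO)
printf 'H\u2082O\n'     # H₂O    (U+2082 SUBSCRIPT TWO)
printf 'x\u207F\n'      # xⁿ     (U+207F SUPERSCRIPT LATIN SMALL LETTER N)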
Further reading:
Your New Royal Portable (1953).
Line Spacing - Butterick's Practical Typography
console_codes - Linux console escape and control sequences

Erlang and Elixir's colorful REPL shells

How does Learn some Erlang or IEx colorize the REPL shell? Is kjell a stable drop-in replacement?
The way this is done in LYSE is to use a javascript plugin called highlight.js, so LYSE isn't actually doing it, your browser is. There are plugins/modes for most mainstream(ish) languages available for highlight.js. If the web is what you are interested in, this is one way to do it (except for when a user can't use JS or has it turned off).
This isn't actually the shell being highlighted at all, nor is it useful anywhere outside of browsers. I've been messing around with a way to do this more generically, initially by inserting static formatting in HTML and XML documents (feed it a document, and it outputs one with Erlang syntax highlighted a certain way whenever this is detected/tagged). I don't yet have a decent project to publish for this (very low on my priority list atm), but I can point you in the direction of some solid inspiration: the source for wx:demo.
Pay particular attention to the function demo:code_area/1. There you will see how the tokenization routines are used to provide highlight hints for the source code text display area in the wx:demo application. This can provide a solid foundation to build your own source highlighting/display utility. (I think it wouldn't be impossible, considering every terminal in common use today responds correctly to ANSI color codes, to write a plugin to the shell that highlights terminal input directly -- not that there is a big clamor for this feature at the moment.)
EDIT (Prompted by a comment by Fred the Magic Wonder Dog)
On the subject of ANSI color codes, if this is what you are actually after, they are easy to implement as a prepend to any string value you are returning within a terminal. The terminal interprets them, so you won't see the characters; it will instead perform whatever action the code represents. There is no termination (it's not like a markup tag that encloses the text) and typically no concept of a "default color to go back to" (though there are a gajillion-jillion extensions to telnet and terminal modes that enable all sorts of nonsense like this).
An example of basic colorization is the telcon:greet/0 and telcon:sys_help/0 functions in the v0.1 code of ErlMUD (along with a slew of other places -- colorization in games is sort of a thing). What you see there is a pre-built list per color, but this could be represented any way that would get those values at the front of the string. (I just happened to remember the code value sequences, but didn't remember the characters that make them up; the next version of the code represents this somewhat differently.) Here is a link to a list of ANSI color codes and a discussion about colorizing the shell. Play around! It's nerdy fun, 1980s style!
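As a minimal sketch of the prepend-a-code idea (the module is made up; "\e[0m" is the SGR reset so later output goes back to the terminal defaults):

-module(color_demo).
-export([greet/0]).

%% Prepend an ANSI SGR color code, print the text, then reset.
greet() ->
    io:format("\e[32mWelcome to the colorful shell!\e[0m~n").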
Oh, I almost forgot... if you really want to go down the rabbit hole without silly little child toys like ncurses to help you, take a look at termcap.
I don't know if kjell is a stable drop-in replacement for Erl but it wouldn't be for IEx.
As far as how the colors are done: to the best of my knowledge it's done with ANSI escape sequences.

Delphi routine to display arbitrary bytes in arbitrary encoding in arbitrary language

I have some byte streams that may or may not be encoded as 1) extended ASCII, 2) UTF-8, or 3) UTF-16. And they may be in English, French, or Chinese. I would like to write a simple program that allows the user to enter a byte stream and then pick one of the encodings and one of the languages and see what the string would look like when interpreted in that manner. Or simply interpret each string in each of the 9 possible ways and display them all. I would like to avoid having to switch regionalizations repeatedly. I'm using Delphi 2007. Is what I am trying to do even possible?
In Delphi 2009 or later, this would be easier, since it supports Unicode and can do most of this transparently. For older versions, you have to do a bit more manual work.
The first thing you want to do is convert the text to a common codepage; preferably UTF-16, since that's the native codepage on Windows. For that, you use the MultiByteToWideChar function. For UTF-8 to UTF-16, the language doesn't matter; for "extended ASCII", you will need to choose an appropriate source code page (e.g. Windows-1252 for English and French, and GB2312 or Big5 or some other Chinese code page - that depends on what you expect to receive). To store these, you can use a WideString, which stores UTF-16 directly.
Once you have that, you have to draw the text somehow - and that requires you to either get a Unicode-capable control (a label is likely sufficient), or write one, or just call the appropriate Windows API function directly to draw - and that's where it can get a bit messy, because there are several functions for doing that. TextOutW is probably the simplest choice here, but another option would be DrawText. Make sure you explicitly call the W version of these functions in order to work with Unicode. (See also the related question How do I draw Unicode text?).
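A rough sketch of both steps for Delphi 2007 (the code-page numbers are examples, and BytesToWide/DrawWide are just illustrative helper names):

uses Windows;

// Code pages you might pass in: 65001 = UTF-8, 1252 = Windows-1252 (English/French),
// 936 = GB2312 (Simplified Chinese), 950 = Big5 (Traditional Chinese).
// UTF-16 input needs no conversion at all; copy the bytes straight into a WideString.
function BytesToWide(const Bytes: AnsiString; CodePage: UINT): WideString;
var
  Len: Integer;
begin
  // First call asks how many UTF-16 code units are needed.
  Len := MultiByteToWideChar(CodePage, 0, PAnsiChar(Bytes), Length(Bytes), nil, 0);
  SetLength(Result, Len);
  if Len > 0 then
    MultiByteToWideChar(CodePage, 0, PAnsiChar(Bytes), Length(Bytes),
                        PWideChar(Result), Len);
end;

// Draw with the explicit W variant so the UTF-16 text is not converted back to ANSI.
procedure DrawWide(DC: HDC; X, Y: Integer; const S: WideString);
begin
  TextOutW(DC, X, Y, PWideChar(S), Length(S));
end;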
Take note: Due to CJK unification - the encoding of equivalent Chinese Hanzi, Japanese Kanji, and Korean Hanja characters at the same code points in Unicode - you need to pick a font that matches the expected kind of Chinese, traditional or simplified, in order to get expected rendering. To quote a somewhat related post by Michael Kaplan:
What it comes down to is that there are many characters which can have four different possible looks:
Japanese will default to using MS UI Gothic (fallback to PMingLIU, then SimSun, then Gulim)
Korean will default to using Gulim (fallback to PMingLiu, then MS UI Gothic, then SimSun)
Simplified Chinese will default to using SimSun (fallback to PMingLiu, then MS UI Gothic, then Batang)
Traditional Chinese will default to using PMingLiu (fallback to SimSun, then MS Mincho, then Batang)
Unless you have a specific font you want/need to use, pick the first font in the list for the language variant you want to use, since these are standard fonts (on XP you will need to enable East Asian Language support before they are available; on Vista and above they are always included). If you do not do this, then Windows may either not render the characters at all (showing the missing-character glyph instead), or it may use an inappropriate fallback (e.g. PMingLiu for Simplified Chinese); the exact behavior depends on the API function you use to render the text.

Transform a tex source so that all macros are replaced by their definition

Is it possible to see the output of the TeX ‘pre-processor’, i. e. the intermediate step before the actual output is done but with all user-defined macros replaced and only a subset of TeX primitives left?
Or is there no such intermediate step?
Write
\edef\xxx{Any text with any commands. For example, $\phantom x$.}
And then for output in the log-file
\show\xxx
or for output in your document
\meaning\xxx
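A complete toy example (the macro names are arbitrary):

\def\name{World}
\edef\greeting{Hello \name!}% \edef expands \name at definition time
\show\greeting    % log/terminal: > \greeting=macro:->Hello World!.
\meaning\greeting % typeset in the document as: macro:->Hello World!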
There is no "pre-processor" in TeX. The replacement text for any control sequence at any stage can vary (this is used for a lot of things!). For example
\def\demo{\def\demo{cde}}
\demo
will first define \demo in one way and then change it. In the same way, you can redefine TeX primitives. For example, the LaTeX kernel saves \input under an internal name and alters it. A simplified version:
\let\@@input\input
\def\input#1{\@@input#1 }
Try the Selective Macro Expander.
TeX has a lot of different tracing tools built in, including tracing of macro expansion. This only traces macros live, as they are actually expanded, but it's still quite useful. Full details are in The TeXbook and probably elsewhere.
When I'm trying to debug a macro problem I generally just use the big hammer:
\tracingall\tracingonline
then I dig in the output or the .log file for what I want to know.
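For instance (\problematicmacro is just a placeholder for whatever you are debugging):

\tracingall\tracingonline
\problematicmacro
\tracingnone % LaTeX's switch to turn the firehose off again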
There's a lot of discussion of this issue on this question at tex.SE, and this question. But I'll take the opportunity to note that the best answer (IMO) is to use the de-macro program, which is a python script that comes with TeXLive. It's quite capable, and can handle arguments as well as simple replacements.
To use it, you move the macros that you want expanded into a <something>-private.sty file, and include it into your document with \usepackage{<something>-private}, then run de-macro <mydocument>. It spits out <mydocument>-clean.tex, which is the same as your original, but with your private macros replaced by their more basic things.
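A toy round trip, with made-up file names:

% mymacros-private.sty
\newcommand{\vect}[1]{\mathbf{#1}}

% paper.tex
\documentclass{article}
\usepackage{mymacros-private}
\begin{document}
The velocity is $\vect{v}$.
\end{document}

% Running "de-macro paper" writes paper-clean.tex, in which
% $\vect{v}$ has been replaced by $\mathbf{v}$.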

Resources