Compile dust template with line breaks - dust.js

Compiling such a template with dustc:
$ cat <<EOF | ./node_modules/.bin/dustc -
<p>Hi there!</p>
<p>I'm a {! dust !} template.</p>
EOF
outputs:
(function(){dust.register("-",body_0);function body_0(chk,ctx){return chk.write("<p>Hi there!</p><p>I'm a template.</p>");}return body_0;})();
but the \n between the lines is stripped; I'd like to keep it, e.g.: "<p>Hi there!</p>\n<p>I'm a template.</p>"
Is there any way to change this?
Thank you

You can use {~n} to create line breaks in your Dust templates. It is especially useful within <pre> tags.
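For example, a minimal sketch:
<pre>First line{~n}Second line</pre>
renders with a real newline between the two lines.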

You can disable whitespace compression with
dust.optimizers.format = function(ctx, node) { return node };
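If you're compiling programmatically, set the optimizer before calling dust.compile. A minimal sketch, assuming the dustjs-linkedin package (the template name "greeting" is arbitrary):
var dust = require('dustjs-linkedin');
// Replace the format optimizer with a no-op so whitespace is kept
dust.optimizers.format = function(ctx, node) { return node; };
// Newer dust versions expose this as a config flag instead:
// dust.config.whitespace = true;
var compiled = dust.compile("<p>Hi there!</p>\n<p>I'm a template.</p>", "greeting");
dust.loadSource(compiled); // registers the template under "greeting"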
If you're precompiling with gulp-dust, there's a preserveWhitespace option that does just that:
var gulp = require('gulp');
var compile = require('gulp-dust');

gulp.src('templates/**/*.dust')
    .pipe(compile({ preserveWhitespace: true }))
    .pipe(gulp.dest('build')); // destination directory is up to you


(Nearley.js) How can I import files whose content I want to be the input of my grammars, as in LaTeX?

What I want:
First, assume I am able to successfully parse the following with my grammar.ne:
\begin{chapter}
...content
\end{chapter}
The desired behavior is for my grammar.ne file to be able to parse a source file containing the above text. For example, in LaTeX one can write something like this:
\chapter{filename}
The problem is simply that I don't really know how to do it, but I'll explain what I'm trying.
What I am trying
# chapter grammar works well
chapter -> "\\begin{junior}" _ chapterContent _ "\\end{junior}" {% (data) => data[2] %}

# chapterTag below is able to read a file and return a string.
# What I don't know how to do is make the grammar parse such a string as a chapter.
chapterTag -> "\\chapter{" anyCharacters "}"
{% (data) => {
    // Assume a function readfile exists.
    const juniorText = readfile(data[1]);
    // How do I tell Nearley to parse this string as a chapter?
    return juniorText;
} %}
If you've used LaTeX before, you know what the content of the referenced file would be (\begin{chapter} ... \end{chapter}).

print certain words that begin with x from one line

I want to somehow print only the words that start with, for example, srcip and srcintf, from lines like this one in /var/log/syslog:
Jul 21 13:13:35 some-name date=2020-07-21 time=13:13:34 devname="devicename" devid="deviceid" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" eventtime=1595330014 srcip=1.2.3.4 srcport=57324 srcintf="someinterface" srcintfrole="wan" dstip=5.6.7.8 dstport=80 dstintf="anotherinterface" dstintfrole="lan" sessionid=supersecretid proto=6 action="deny" policyid=0 policytype="policy" service="HTTP" dstcountry="Sweden" srccountry="Sweden" trandisp="noop" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
so that the result looks something like this:
date=2020-07-21 time=13:13:34 devname="devicename" action="deny" policyid=0 srcintf="someinterface" dstintf="anotherinterface" srcip=1.2.3.4 srcport=57324 -----> dstip=5.6.7.8 dstport=80
Currently I'm using awk to do it; the scalability of this is pretty bad for obvious reasons:
cat /var/log/syslog | awk '{print $5,$6,$7,$25,$26,$17,$21,$15,$16,"-----> "$19,$20}'
Also, not all lines have srcip in the same field, so some lines come out really skewed.
Or would a syslog message rewriter be better for this purpose? How would you go about solving this? Thanks in advance!
$ cat tst.awk
{
    delete f
    for (i=5; i<=NF; i++) {
        split($i,tmp,/=/)
        f[tmp[1]] = $i
    }
    print f["date"], f["time"], f["devname"], f["action"], f["policyid"], f["srcintf"], \
          f["dstintf"], f["srcip"], f["srcport"], "----->", f["dstip"], f["dstport"]
}
$ awk -f tst.awk file
date=2020-07-21 time=13:13:34 devname="devicename" action="deny" policyid=0 srcintf="someinterface" dstintf="anotherinterface" srcip=1.2.3.4 srcport=57324 -----> dstip=5.6.7.8 dstport=80
The above assumes your quoted strings do not contain spaces, as shown in your sample input.
I present an awk answer which is flexible and, instead of a simple one-liner, a bit more programmatic. Your log file has lines that look in general like:
key1=value1 key2=value2 key3=value3 ...
The idea in this awk is to break each line down into an associative array, so that the elements can be accessed as:
a[key1]=>value1 a[key2]=>value2 ... a[key2,"full"]=>key2=value2 ...
Using a function which is explained in this answer, you can write:
awk '
function str2map(str,fs1,fs2,map,    n,tmp) {
    n=split(str,map,fs1)
    for (;n>0;n--) {
        split(map[n],tmp,fs2);
        map[tmp[1]]=tmp[2]; map[tmp[1],"full"]=map[n]
        delete map[n]
    }
}
{ str2map($0," ","=",a) }
{ print a["date","full"],a["time","full"],a["devname","full"],a["action","full"] }
' file
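Run against your sample line, this prints:
date=2020-07-21 time=13:13:34 devname="devicename" action="deny"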
This method is very flexible. There is also no dependency on the order of the fields in the line.
Note: the above method does not take care of quoting, so if a space appears within a quoted string, it might mess things up.
If you have filter.awk:
BEGIN{
    split(filter,a,",");
    for (i in a){
        f[a[i]]=1;
    }
}
{
    for (i=1; i<=NF; i++) {
        split($i,b,"=");
        if (b[1] in f){
            printf("%s ", $i);
        }
    }
    printf("\n");
}
you can do:
awk -v filter="srcip,srcintf" -f filter.awk /var/log/syslog
In the filter variable you specify, comma separated, the keywords it has to find.
Note: this script also assumes the file is of the form key1=value key2=value and that there are no spaces in the values.
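For your sample line, this would print:
srcip=1.2.3.4 srcintf="someinterface"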

Highlight one specific author when generating references in Pandoc

I am using Pandoc to generate a list of publications for my website. I'm using it solely to generate the HTML with the publications so that I can then paste the raw HTML into Jekyll. This part works fine.
The complications arise when I try to generate the HTML so that my name appears boldfaced in all entries. I'm trying to use this solution for that, which works when I apply it to a pure LaTeX document I am generating. However, when I try to apply the same thing through Pandoc, the HTML is generated without any boldface.
Here's my Pandoc file:
---
bibliography: /home/tomas/Dropbox/cv/makerefen4/selectedpubs.bib
nocite: '@*'
linestretch: 1.5
fontsize: 12pt
header-includes: |
  \usepackage[
    backend=biber,
    dashed=false,
    style=authoryear-icomp,
    natbib=true,
    url=false,
    doi=true,
    eprint=false,
    sorting=ydnt, % Year (Descending) Name Title
    maxbibnames=99
  ]{biblatex}
  \renewcommand{\mkbibnamegiven}[1]{%
    \ifitemannotation{highlight}
      {\textbf{#1}}
      {#1}}
  \renewcommand*{\mkbibnamefamily}[1]{%
    \ifitemannotation{highlight}
      {\textbf{#1}}
      {#1}}
...
And here's the relevant part of my Makefile:
PANDOC_OPTIONS=--columns=80
PANDOC_HTML_OPTIONS=--filter=pandoc-citeproc --csl=els-modified.csl --biblatex
Again: this code generates the references fine. It just doesn't boldface anything as it is supposed to.
Any ideas?
EDIT
Bib entries look like this:
@MISC{test,
  AUTHOR = {Last1, First1 and Last2, First2 and Last3, First3},
  AUTHOR+an = {2=highlight},
}
And the versions are:
- BibLaTeX 3.12
- Biber 2.12
You can use a Lua filter to modify the AST. The following works for me to get the surname and initials (Smith, J.) highlighted in the references (see here). You can replace pandoc.Strong with pandoc.Underline or pandoc.Emph. Replace Smith and J. with your name/initials, save it as myname.lua, and use something like:
pandoc --citeproc --bibliography=mybib.bib --csl=mycsl.csl --lua-filter=myname.lua -o refs.html refs.md
i.e. put the filter last.
local highlight_author_filter = {
  Para = function(el)
    if el.t == "Para" then
      for k,_ in ipairs(el.content) do
        if el.content[k].t == "Str" and el.content[k].text == "Smith,"
            and el.content[k+1].t == "Space"
            and el.content[k+2].t == "Str" and el.content[k+2].text:find("^J.") then
          local _,e = el.content[k+2].text:find("^J.")
          local rest = el.content[k+2].text:sub(e+1)
          el.content[k] = pandoc.Strong { pandoc.Str("Smith, J.") }
          el.content[k+1] = pandoc.Str(rest)
          table.remove(el.content, k+2)
        end
      end
    end
    return el
  end
}

function Div (div)
  if 'refs' == div.identifier then
    return pandoc.walk_block(div, highlight_author_filter)
  end
  return nil
end
Notes:
The above works if you use an author-date format CSL. If you want to use a numeric format CSL (e.g. ieee.csl or nature.csl) you will need to substitute Span for Para in the filter, i.e.:
Span = function(el)
  if el.t == "Span" then
If you also want to use the multiple-bibliographies Lua filter, it should go before the author highlight filter. And 'refs' should be 'refs_biblio1' or 'refs_biblio2' etc., depending on how you have defined them:
function Div (div)
  if div.identifier == 'refs' or div.identifier == 'refs_biblio1' or div.identifier == 'refs_biblio2' then
For PDF output, you will also need to add -V csl-refs to the pandoc command if you use a numeric format CSL.
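For example (with the same placeholder file names as above):
pandoc --citeproc --bibliography=mybib.bib --csl=ieee.csl --lua-filter=myname.lua -V csl-refs -o refs.pdf refs.md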
The filter highlights Smith, J. if formatted in this order by the CSL. Some CSLs will use this format for the first author and then switch to J. Smith for the rest, so you will have to adjust the filter accordingly, adding an extra if el.content[k].t == "Str"… etc. Converting to .json first will help to check the correct formatting in the AST.
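For example, to dump the document as JSON (pandoc picks the writer from the output extension; -t native gives a more readable dump of the same AST):
pandoc --citeproc --bibliography=mybib.bib --csl=mycsl.csl -o refs.json refs.md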

How to show String new lines in a Grails GSP file?

I've stored a string in the database. When I save and retrieve the string, the result I get is the following:
This is my new object
Testing multiple lines
-- Test 1
-- Test 2
-- Test 3
That is what I get from a println command when I call the save and index methods.
But when I show it on screen, it's shown like this:
This is my object Testing multiple lines -- Test 1 -- Test 2 -- Test 3
I already tried showing it like the following:
${adviceInstance.advice?.encodeAsHTML()}
But still the same thing.
Do I need to replace \n with <br> or something like that? Is there any easier way to show it properly?
Common problems have a variety of solutions:
1> You could replace \n with <br>,
either in your controller/service or, if you like, in the GSP:
${adviceInstance.advice?.replace('\n','<br>')}
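The controller/service variant might look like this (a sketch; the show action and the adviceHtml model key are hypothetical):
def show() {
    def adviceInstance = Advice.get(params.id)
    // Hand the view a converted copy rather than mutating the domain object
    [adviceInstance: adviceInstance, adviceHtml: adviceInstance.advice?.replace('\n', '<br>')]
}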
2> display the content in a read-only textarea
<g:textArea name="something" readonly="true">
${adviceInstance.advice}
</g:textArea>
3> Use the <pre> tag
<pre>
${adviceInstance.advice}
</pre>
4> Use CSS white-space (http://www.w3schools.com/cssref/pr_text_white-space.asp):
<div class="space">
    ${adviceInstance.advice}
</div>
//css code:
.space {
    white-space: pre;
}
Also note: if you have a strict configuration for the storage of such fields, a value submitted via a form may contain extra hidden characters, most likely carriage returns (\r), as explained in the comments below. A good rule is to define a setter that trims the value each time it is received, i.e.:
class Advice {
    String advice

    static constraints = {
        advice(nullable: false, minSize: 1, maxSize: 255)
    }

    /*
     * In this scenario, with a maxSize value, ensure you
     * define your own setter to trim any hidden \r
     * that may be posted back as part of the form request
     * by the end user. Trust me, I got to know the hard way.
     */
    void setAdvice(String adv) {
        advice = adv.trim()
    }
}
Another option is to escape the HTML first and only then insert the line breaks:
${raw(adviceInstance.advice?.encodeAsHTML().replace("\n", "<br>"))}
This is how I solved the problem. First, make sure the string contains \n to denote line breaks.
For example:
String test = "This is first line. \n This is second line";
Then in gsp page use:
${raw(test?.replace("\n", "<br>"))}
The output will be:
This is first line.
This is second line.

Performance implications of using :coffeescript filter inside HAML templates?

So HAML 4 includes a :coffeescript filter, which allows us coffee-loving Rails people to do neat things like this:
- word = "Awesome."
:coffeescript
  $ ->
    alert "No semicolons! #{word}"
My question: For the end user, is this slower than using the equivalent :javascript filter? Does using the coffeescript filter mean the coffeescript will be compiled to javascript on every page load (which would obviously be a performance disaster), or does this only happen once when the application is started?
It depends.
When Haml compiles a filter it checks to see if the filter text contains any interpolation (#{...}). If there isn’t any then it will be the same text to transform on each request, so the conversion is done once at compile time and the result included in the template.
If there is interpolation in the filter text, then the actual text to transform will vary on each request, so the Coffeescript will need to be compiled each time.
Here’s an example. First with no interpolation:
:coffeescript
  $ ->
    alert "No semicolons! Awesome"
This generates the code (use haml -d to see the generated Ruby code):
_hamlout.buffer << "<script>\n (function() {\n $(function() {\n return alert(\"No semicolons! Awesome\");\n });\n \n }).call(this);\n</script>\n";
This code simply adds a string to the buffer, so no Coffeescript is being recompiled.
Now with interpolation:
- word = "Awesome."
:coffeescript
  $ ->
    alert "No semicolons! #{word}"
This generates:
word = "Awesome."
_hamlout.buffer << "#{
find_and_preserve(Haml::Filters::Coffee.render_with_options(
"$ ->
alert \"No semicolons! #{word}\"\n", _hamlout.options))
}\n";
Here, since Haml needs to wait to see what the value of the interpolation is, the Coffeescript is recompiled each time.
You can avoid compiling the Coffeescript on each request by not having any interpolation inside your :coffeescript filters.
The :javascript filter behaves similarly, checking to see if there is any interpolation, but since the :javascript filter only outputs some text to the buffer when it runs there is much less of a performance hit using it. You could possibly combine :javascript and :coffeescript filters, putting interpolated data in :javascript and keeping your :coffeescript static:
- word = "Awesome"
:javascript
  var message = "No semicolons! #{word}";
:coffeescript
  alert message
matt's answer is clear on what is going on. I made a helper to add locals to :coffeescript filters from a hash, so you don't need to use global JavaScript variables. As a side note: on Linux the slowdown is really negligible, but on Windows the impact on performance is quite significant (easily more than 100 ms per block to compile).
module HamlHelper
  def coffee_with_locals locals={}, &block
    block_content = capture_haml do
      block.call
    end
    return block_content if locals.blank?

    javascript_locals = "\nvar "
    javascript_locals << locals.map{ |key, value| j(key.to_s) + ' = ' + value.to_json.gsub('</', '<\/') }.join(",\n    ")
    javascript_locals << ";\n"

    content_node = Nokogiri::HTML::DocumentFragment.parse(block_content)
    content_node.search('script').each do |script_tag|
      # This will match the '(function() {' at the start of coffeescript's compiled code
      split_coffee = script_tag.content.partition(/\(\s*function\s*\(\s*\)\s*\{/)
      script_tag.content = split_coffee[0] + split_coffee[1] + javascript_locals + split_coffee[2]
    end
    content_node.to_s.html_safe
  end
end
It allows you to do the following:
= coffee_with_locals "test" => "hello ", :something => ["monde", "mundo", "world"], :signs => {:interogation => "?", :exclamation => "!"} do
  :coffeescript
    alert(test + something[2] + signs['exclamation'])
Since there is no interpolation, the code is compiled once, as normal.
