As described in the Pandoc documentation, Pandoc records SyncTeX-like source-position information when the input format is commonmark+sourcepos. For example, with this CommonMark input,
---
title: "Sample"
---
This is a sample document.
the output in native format starts like this:
Pandoc
Meta
{ unMeta =
fromList [ ( "title" , MetaInlines [ Str "Sample" ] ) ]
}
[ Div
( "" , [] , [ ( "data-pos" , "Sample.knit.md#5:1-6:1" ) ] )
[ Para
[ Span
( ""
, []
, [ ( "data-pos" , "Sample.knit.md#5:1-5:5" ) ]
)
[ Str "This" ]
, Span
( ""
, []
, [ ( "data-pos" , "Sample.knit.md#5:5-5:6" ) ]
)
[ Space ]
, Span
( ""
, []
, [ ( "data-pos" , "Sample.knit.md#5:6-5:8" ) ]
)
[ Str "is" ]
but all that appears in the .tex file is this:
{This}{ }{is}...
As a step towards SyncTeX support, I'd like to insert the data-pos information as LaTeX markup, i.e. change the .tex output to look like this:
{This\datapos{Sample.knit.md#5:1-5:5}}{ \datapos{Sample.knit.md#5:5-5:6}}{is\datapos{Sample.knit.md#5:6-5:8}}...
This looks like something a Lua filter could accomplish fairly easily: look for the data-pos attributes and copy the location information into the corresponding Str records. However, I know neither Lua nor Pandoc's native format. Could someone help with this? Handling just the Span records would be enough for my purposes. I'm using Pandoc 2.18 and Lua 5.4.
Here is an attempt that appears to work. Comments or corrections would still be welcome!
-- Append a \datapos{...} raw LaTeX command to every Span
-- that carries a data-pos attribute.
Span = function(span)
  local datapos = span.attributes['data-pos']
  if datapos then
    -- pandoc.RawInline passes the LaTeX through to the .tex output unchanged.
    table.insert(span.content, pandoc.RawInline('tex', "\\datapos{" .. datapos .. "}"))
  end
  return span
end
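If the filter is saved as, say, datapos.lua (the filename is just an assumption), it can be applied on the command line like this:

```sh
pandoc --from=commonmark+sourcepos --to=latex --lua-filter=datapos.lua Sample.knit.md
```

Note that the resulting .tex file will only compile once \datapos is defined on the LaTeX side, e.g. as a no-op with \newcommand{\datapos}[1]{} until it is wired up to actual SyncTeX machinery.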
When writing a Chinese paper, both Chinese and English papers may be cited. However, the citation styles differ slightly. For example:
Cite an English article (Smith et al. 2022), and cite a Chinese article (张三 等 2018).
In other words, for papers with multiple authors, et al. is used for English references, while 等 is used for Chinese ones. Since the Citation Style Language cannot handle multiple languages, I'd like to ask for help with a Lua filter.
A Markdown file named test.md as an example:
Cite an English article [@makarchev2022], and cite a Chinese article [@luohongyun2018].
Then run the command below:
pandoc -C -t native test.md
And the output of the main body:
[ Para
[ Str "Cite"
, Space
, Str "an"
, Space
, Str "English"
, Space
, Str "article"
, Space
, Cite
[ Citation
{ citationId = "makarchev2022"
, citationPrefix = []
, citationSuffix = []
, citationMode = NormalCitation
, citationNoteNum = 1
, citationHash = 0
}
]
[ Str "(Makarchev"
, Space
, Str "et"
, Space
, Str "al."
, Space
, Str "2022)"
]
, Str ","
, Space
, Str "and"
, Space
, Str "cite"
, Space
, Str "a"
, Space
, Str "Chinese"
, Space
, Str "article"
, Space
, Cite
[ Citation
{ citationId = "luohongyun2018"
, citationPrefix = []
, citationSuffix = []
, citationMode = NormalCitation
, citationNoteNum = 2
, citationHash = 0
}
]
[ Str "(\32599\32418\20113"
, Space
, Str "et"
, Space
, Str "al."
, Space
, Str "2018)"
]
, Str "."
]
Because @luohongyun2018 is a Chinese reference, I want to replace the English et al. that follows it, i.e. change:
, Str "et"
, Space
, Str "al."
to the Chinese word 等:
, Str "\31561"
Is it possible to do this with a Lua filter? Following the examples on the Lua filters page, I have tried but could not manage it myself.
Any suggestions would be appreciated. Thanks in advance.
The filter below does two things: it checks whether the citation text contains Chinese characters and, if so, replaces the et al. with 等.
The test for Chinese characters is a bit fragile; it could be made more robust by using the utf8.codepoint function from the standard Lua library instead.
function Cite (cite)
  return cite:walk{
    Inlines = function (inlines)
      local has_cjk = false
      inlines:walk {
        Str = function (s)
          has_cjk = has_cjk or
            pandoc.layout.real_length(s.text) > pandoc.text.len(s.text)
        end
      }
      -- do nothing if this does not contain wide chars.
      if not has_cjk then
        return nil
      end
      local i = 1
      local result = pandoc.Inlines{}
      while i <= #inlines do
        if i + 2 <= #inlines and
            inlines[i].text == 'et' and
            inlines[i+1].t == 'Space' and
            inlines[i+2].text == 'al.' then
          result:insert(pandoc.Str '等')
          i = i + 3
        else
          result:insert(inlines[i])
          i = i + 1
        end
      end
      return result
    end
  }
end
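A sketch of the sturdier utf8-based test mentioned above, assuming Lua 5.4's built-in utf8 library (the range below covers only the main CJK Unified Ideographs block, which is enough for names like 罗红云):

```lua
-- Returns true if the string contains a character from the
-- CJK Unified Ideographs block (U+4E00..U+9FFF).
local function contains_cjk(s)
  for _, cp in utf8.codes(s) do
    if cp >= 0x4E00 and cp <= 0x9FFF then
      return true
    end
  end
  return false
end
```

This could replace the real_length/len comparison inside the filter's Str handler.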
I'm encountering a problem in a Lua script that I'm learning from (I am new to Lua). This error has me heavily confused; when I run the code, it gives me the following error:
attempt to index global "zoneName" (a nil value)
This is my code:
local zoneName = zoneName:gsub ( "'", "" )
if dbExec ( handler, "INSERT INTO `safeZones` (`rowID`, `zoneName`, `zoneX`, `zoneY`, `zoneWidth`, `zoneHeight`) VALUES (NULL, '".. tostring ( zoneName ) .."', '".. tostring ( zoneX ) .."', '".. tostring ( zoneY ) .."', '".. zoneWidth .."', '".. zoneHeight .."');" ) then
createSafeZone ( { [ "zoneName" ] = zoneName, [ "zoneX" ] = zoneX, [ "zoneY" ] = zoneY, [ "zoneWidth" ] = zoneWidth, [ "zoneHeight" ] = zoneHeight } )
outputDebugString ( "Safe Zones: Safe zone created name: ".. tostring ( zoneName ) )
return true
else
return false, "Unable to create the safe zone"
end
You reference zoneName in its own definition; your code is equivalent to
local zoneName = nil:gsub("'", "")
hence the error (zoneName is not yet defined when Lua tries to execute zoneName:gsub()).
Either define zoneName before the gsub() call, or use string.gsub().
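The problem can be reproduced in plain Lua: the initializer of a local declaration runs before the new local is in scope, so the zoneName on the right-hand side still refers to the (undefined) global. A minimal sketch of one fix, assuming the value actually arrives through an existing variable such as a function parameter (sanitizeZoneName is a hypothetical helper, not part of the original script):

```lua
-- local zoneName = zoneName:gsub("'", "")
-- ^ fails: the right-hand zoneName is the global, which is nil.

local function sanitizeZoneName(zoneName)
    -- Here zoneName is the parameter, so indexing it is safe
    -- as long as the caller passes in a string.
    local cleaned = zoneName:gsub("'", "")
    return cleaned
end

print(sanitizeZoneName("King's Zone")) -- prints: Kings Zone
```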
We are regularly setting up new DOORS installations on standalone networks, and each of these networks uses slightly different drive mappings and installation directories. We have a set of DXL scripts that we copy over to each network that uses DOORS, but these DXL scripts reference some Microsoft Word templates that are used as the basis for custom-developed module export scripts.
We no longer have a DXL expert in-house, and I'm trying to make the scripts more portable so that they no longer contain hard-coded file paths. Because we copy all of the templates and DXL files in a pre-defined directory structure, I can use the dxlHere() function to figure out the execution path of the DXL script, which prints something like this:
<C:\path\to\include\file\includeFile.inc:123>
<C:\path\to\include\file\includeFile.inc:321>
<Line:2>
<Line:5>
<Line:8>
What I'd like to do is extract everything before file\includeFile.inc:123>, excluding the starting <, and then append templates\template.dotx.
For example, the final result would be:
C:\path\to\include\templates\template.dotx
Are there any built-in DXL functions to handle string manipulation like this? Is regex the way to go? If so, what regexp would be appropriate to handle this?
Thanks!
I got this... kind of working.
dxlHere is something I don't work with much, but this seems to work, as long as it's saved to an actual .dxl or .inc file (i.e. not just run from the editor):
string s = dxlHere()
string s2 = null
string s3 = null
// Match the trailing ".ext:line>" part of the first dxlHere() line
Regexp r = regexp2 ( "\\..*:.*> $" )
Regexp r2 = regexp2 ( "/" )
if ( r s ) {
    // Keep two copies: s2 is consumed while counting separators,
    // s3 is kept intact for indexing afterwards.
    s2 = s[ 1 : ( ( start ( 0 ) ) - 1 ) ]
    s3 = s[ 1 : ( ( start ( 0 ) ) - 1 ) ]
    // Count the path separators
    int x = 0
    while ( r2 s2 ) {
        x++
        s2 = s2[ ( ( start ( 0 ) ) + 1 ) : ]
    }
    // Cut the path off before the last two separators
    int z = 0
    int y = 0
    for ( y = 0; y < length( s3 ); y++ ) {
        if ( s3[y] == '/' ) {
            z++
            if ( z == ( x - 2 ) ) {
                s = s3[ 0 : y ]
                break
            }
        }
    }
}
print s
So we're doing a single regexp to check whether we have a valid 'location', then running through it to find every '/' character, and leaving off the last two of them.
Hope this helps!
Given two lists of lists:
A = [ [1,2,3],[4,5,6] ].
B = [ [a,b,c],[d,e,f] ].
The output should be:
[ [{1,a},{2,b},{3,c}],[{4,d},{5,e},{6,f}]].
This is what I have got so far.
Input: [ [{Y} || Y<-X ] || X<-A].
Output: [[{1},{2},{3}],[{4},{5},{6}]]
I think this is what you need:
[lists:zip(LA, LB) || {LA, LB} <- lists:zip(A, B)].
You need to zip both lists to be able to work with their elements together.
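Seen step by step in the shell (using A and B from the question): the outer zip pairs up the corresponding sublists, and the comprehension then zips each pair of sublists elementwise.

```erlang
1> A = [[1,2,3],[4,5,6]].
[[1,2,3],[4,5,6]]
2> B = [[a,b,c],[d,e,f]].
[[a,b,c],[d,e,f]]
3> lists:zip(A, B).
[{[1,2,3],[a,b,c]},{[4,5,6],[d,e,f]}]
4> [lists:zip(LA, LB) || {LA, LB} <- lists:zip(A, B)].
[[{1,a},{2,b},{3,c}],[{4,d},{5,e},{6,f}]]
```

Note that lists:zip requires both lists (and both sublists) to have the same length; it fails otherwise.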
In my CKEditor config I have this:
CKEDITOR.editorConfig = function( config ) {
config.toolbar = [
{ name: 'styles', items : [ 'Format' ] },
]
};
I want this config to show only Heading 2 and Heading 3, but it shows all headings.
How can I do that?
Thanks a lot.
Everything is described in the documentation; see config.format_tags.
E.g.:
config.format_tags = 'p;h2;h3;pre';
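Combined with the toolbar from the question, a minimal editorConfig might look like this (a sketch; drop pre if you really only want normal paragraphs plus the two headings):

```js
CKEDITOR.editorConfig = function( config ) {
    config.toolbar = [
        { name: 'styles', items: [ 'Format' ] }
    ];
    // Offer only normal paragraphs plus Heading 2 and Heading 3
    // in the Format drop-down:
    config.format_tags = 'p;h2;h3';
};
```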