SlamData: embed query result into markdown

I'm trying to embed a query result into Markdown.
The reference documentation suggests it's possible to embed a query into Markdown using
!`SELECT COUNT(*) FROM "/col"`
However, this does not work for me!
I just see the SELECT text rendered in red (like inline code embedded in text), and the query does not run.

I am a committer on this project. As of July 20, 2015, evaluated queries are not yet supported. However, the syntax you are using is correct, and when evaluated queries are supported, it should evaluate to a number which will be embedded into the rendered Markdown with the same position and styling as the inline code element.
In the meantime, although it's not quite the same visually, you can create a query block which follows your Markdown block, and executes the same query. This will give you the count in a table, which is less than ideal, but a passable workaround until someone gets around to implementing evaluated queries.
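For illustration, the workaround is just two cells in the same notebook, with the query cell directly after the Markdown cell (a sketch; the "notebook"/"cell" naming follows the SlamData UI of that era, and the path is from your example):

Markdown cell:
The document count for "/col" appears below.

Query cell:
SELECT COUNT(*) FROM "/col"

The query cell renders the count as a one-row table immediately under the prose.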
EDIT: The feature has been implemented and works as described.

Related

How to properly do custom markdown markup

I currently work on a personal writing project which has ended up with me maintaining a few different versions, due to differences between the relevant platforms and output formats I want to support that are not trivially solved. After several instances of glancing at pandoc and the sheer forest that it represents, I have concluded that mere templates don't do what I need, and worse, that I seem to need a combination of a custom filter and writer... suffice it to say: messing with the AST is where I feel way out of my depth. Enough so that, rather than asking specific questions of 'how do I do X' here, this is a question of 'is X the right way to go about it, or what is the proper way to do it, and can you give an example of how it ties together?'... so if this question is rather lengthy: my apologies.
My current goal is to have custom markup like the following which is supposed to 'track' which character says something:
<paul|"Hi there">
If I convert to HTML, I'd want something similar to:
<span class="speech paul">"Hi there"</span>
to pop out (and perhaps the <p> tags), whereas if it is just pure markdown / plain text, I'd want it to silently disappear:
"Hi there"
Looking at the JSON AST structures, it would make sense that I'd want a new structure type similar to the 'Emph' tag, called 'Speech', which allows whole blobs of text to be put inside of it with a bit of extra information attached (the person speaking). Something like this:
{"t":"Speech","speaker":"paul","c":[ ... ] }
Problem #1: At the point a Lua filter sees the document, it has obviously already been distilled to an AST. This means replacing the items in a manner similar to what most macro-expander samples do cannot really work, since it would require reading forward. With this method, I would just replace bits and pieces in place (<NAME| becomes a StartSpeech and the first solitary > that follows becomes an EndSpeech), but that would make malformed input a bigger potential problem because of silent-ish failures. Additionally, these tags would be completely out of sorts with how an AST is supposed to look.
To complicate matters even further, some of my characters end up learning a secondary language throughout the story, for which I apply a different format that pairs the spoken text with the perspective character's understanding of what was said. Example:
<paul|"Heb je goed geslapen?"|"Did you ?????">
I could probably add a third 'UnderstoodSpeech' group to my filter, but (problem #2) at this point, the relationship between the speaker, the original speech, and the understood translation is completely gone. As long as the final documents need these values in these respective orders and only in these orders, it is fine... but what if I want my HTML version to look like
"Did you?????"
with a tool-tip / hover-over effect containing the original speech? That would be near impossible to achieve because the AST does not contain that kind of relational detail.
Whatever kind of AST I create in the filter is what I need to understand in my custom writer. Ideally, I want to re-use as much stock functionality of pandoc as possible for the writer, but I don't even know if that is feasible at this point.
So now my question: could someone with a good understanding of pandoc please give me an example of how to keep the relevant data-bits together and apply them in the correct manner? By this I mean a basic example of what needs to go in the lua-filter and lua-writer scripts in the following toolchain:
[CUSTOMIZED MARKDOWN INPUT] -> lua-filter -> lua-writer -> [CUSTOMIZED HTML5 OUTPUT]
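For what it's worth, here is a minimal sketch of the filter half (pandoc 2.9.2 or later, which allows filtering whole Inlines lists). Rather than inventing a new AST node type, it reuses pandoc's stock Span element, carrying the speaker as both a class and an attribute; the stock HTML writer then already produces something like <span class="speech paul" data-speaker="paul">"Hi there"</span>, and the plain-text writer silently drops the wrapper. The markup handling is deliberately naive and assumes well-formed, non-nested tags:

-- speech.lua: sketch of a pandoc Lua filter that rewrites
-- <speaker|"text"> into a Span with class "speech" plus the speaker.
function Inlines(inlines)
  local result = {}
  local i = 1
  while i <= #inlines do
    local el = inlines[i]
    local speaker = el.t == "Str" and el.text:match("^<(%w+)|") or nil
    if speaker then
      local content, j, done = {}, i, false
      while j <= #inlines and not done do
        -- on the first word, strip the '<speaker|' prefix
        local cur = (j == i) and pandoc.Str(el.text:sub(#speaker + 3)) or inlines[j]
        if cur.t == "Str" and cur.text:sub(-1) == ">" then
          table.insert(content, pandoc.Str(cur.text:sub(1, -2)))  -- drop the closing '>'
          done = true
        else
          table.insert(content, cur)
        end
        j = j + 1
      end
      table.insert(result, pandoc.Span(content,
        pandoc.Attr("", {"speech", speaker}, {{"speaker", speaker}})))
      i = j
    else
      table.insert(result, el)
      i = i + 1
    end
  end
  return result
end

For the secondary-language case, the understood translation could be stored as a second attribute on the same Span (e.g. understood="Did you ?????"), which keeps the speaker/speech/translation relationship together in one node for a writer to turn into a tool-tip.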

Is there a solution for transpiling Lua labels to ECMAScript3?

I'm re-building a Lua to ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I spend more effort on this transpiler, I want to ask whether it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to separate each body of statements into parts, e.g., when there's a label, its following statements must be stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each label's part should have its name stored somewhere (e.g., in a property of its own part object).
Each part would be a function, or would store a function on itself, to be executed sequentially in relation to the others.
A goto statement would look up its specific label, run that label's statements, and invoke an ES return statement to stop execution of the current statements.
The limitation of separating the body statements this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have stable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone already solved the problem: JavaScript allows labelled continues, which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
I hope that's enough to get things running, good luck!
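To make that concrete, here's a small ES3 sketch (the helper function names are placeholders) showing both directions:

// Forward goto: a labelled break out of a dummy do/while(false).
function forwardExample() {
    label: do {
        before();       // runs
        break label;    // acts as: goto label
        skipped();      // never runs
    } while (false);
    after();            // ::label:: -- control resumes here
}

// Backward goto: a labelled continue on a dummy while(true) loop.
function backwardExample() {
    label: while (true) {   // ::label::
        body();
        if (shouldRepeat()) {
            continue label; // acts as: goto label
        }
        break;
    }
}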

How can I search for <item1> AND <item2> using the Delphi XE2 IDE search?

I use searching all the time to locate stuff within my (huge) application source, so search effectiveness is very important to me. Presently in the Delphi XE2 IDE I like to use:
Find in Files
Include subdirectories.
Nothing else fancy, just a text keyword. This works OK, but what I would really like to do is extend what I'm doing now to include lines that contain 'A' AND 'B', where A and B are any group of characters (one type of boolean search). Exact matches against A and B are fine, because this allows you to put in two very partial keywords and still find a unique occurrence. I've been using this method in my own search engine for years. Is there an easy way of doing this in the Delphi IDE, please?
Thanks
You can use regular expressions (just check the regular expressions checkbox on the right side of the Find window). The regex support is somewhat limited - it's documented for XE2 on the XE2 docwiki here.
I use GExperts Grep Search instead (part of the GExperts IDE experts set), which offers fuller regex support (although still not great) and a better display (IMO) of the search results. The Grep Search dialog accepts a regular expression that will match WordA and WordB in either order in the file, so it satisfies your search logic within GExperts' limited regex support. Depending on the expression, it may match lines containing a single word as well, but the results dialog makes it easy to find the lines you're interested in, and double-clicking a line will take you to that match in the IDE's code editor.
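For example (my own illustration, not taken from GExperts' documentation), an alternation that requires both keywords on the same line, in either order, is:

KeywordA.*KeywordB|KeywordB.*KeywordA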

Sanitize pasted text from MS-Word

Here's my wild and wacky pseudo-code. Anyone know how to make this real?
Background:
This dynamic content comes from a CKEditor, and a lot of folks paste Microsoft Word content into it. No worries: if I just render the attribute untouched, it loads fine. But the catch is that I want it abbreviated to just 125 characters. When I add truncation, all of the Microsoft Word markup starts popping up. So I added simple_format, and sanitize, and truncate, and even made my controller spot specific strings that MS Word produces and gsub them out. But there are too many of them, and it seems like an awfully messy way to accomplish this. Realizing that the text by itself is clean, I thought: why not just slice it? However, the Microsoft Word text becomes blank but still holds its numbered position in the string. So I came up with this (probably awful) solution below.
It's in three steps:
1. When the text is parsed, it doesn't display any of the MS Word junk, but that junk still holds numbered positions in the string. So I want to use a regexp to find the first actual character.
2. Take that character and find its numbered position in the total string.
3. Use a slice statement to cut from that position.
def about_us_truncated
  # find the index of the first real character (letter or digit),
  # skipping the blank positions left by the stripped MS Word junk
  first_char = about_us =~ /[[:alnum:]]/
  about_us[first_char, 125] if first_char
end
The only other idea I've got is a regex statement that explicitly slices only actual characters, like so:
about_us([a-zA-Z][0..125]), but that is definitely not how it would be written.
Here is some sample text of MS Word junk :
&Lt;! [If Gte Mso 9]>&Lt;Xml>&Lt;Br /> &Lt;O:Office Document Settings>&Lt;Br /> &Lt;O:Allow Png/>&Lt;Br /> &Lt;/O:Off...
You haven't provided much information to go off of, but don't be too leery of trying to build this regex on your own before you seek help...
Take your sample text, paste it into the test string area on Rubular, and start building your regex. It has a great quick reference at the bottom.
Stumbled across this:
http://gist.github.com/139987
It looks like it requires the sanitize gem.
This is technically not a straight answer, but it seems like the best one you're likely to find.
To keep MS Word markup out, you should be using CKEditor's built-in MS Word sanitizer. Writing regex for this can get very complicated, and you can very easily break tags in half and destroy your site with it.
What I did as a workaround is force paste-as-plain-text in CKEditor.
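If you also want a server-side fallback (a different technique from the CKEditor one above, using stock Rails helpers): strip the markup first, then truncate, so the 125-character limit applies to visible text rather than Word junk. A minimal sketch, assuming the model has an about_us attribute:

# inside the model; strip_tags and truncate are standard ActionView helpers
include ActionView::Helpers::SanitizeHelper
include ActionView::Helpers::TextHelper

def about_us_truncated
  # remove tags and comments (where much of the Word junk lives),
  # then truncate what's actually visible
  truncate(strip_tags(about_us), :length => 125)
end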

Parsing text with simple wildcards logic in Java / C / Objective-C

I'm looking for a fast library/class to parse plain text using expressions like below:
Text is: <b>Name:</b>John<br><i>Age</i>32<br>
Pattern is: {*}Name:</b>{%}<br>{*}Age</i>{%}<br>
And it will find me two values: John and 32.
The intent is to parse simple HTML web pages without involving heavy-duty tools. It should not use string operations or regexes internally, but probably do char-by-char parsing.
Since you appear to be asking the user to specify the HTML content you want, it's probably alright to use regular expressions here (why do you have an aversion to them?). It's not HTML parsing, anymore, just simple text matching, which is what regular expressions are designed for.
Here's an example:
# Turn the pattern into a regex: {*} skips text, {%} captures it.
$match =~ s/\{\*\}/.*?/g;    # {*} -> .*?   (non-greedy skip)
$match =~ s/\{%\}/(.*?)/g;   # {%} -> (.*?) (non-greedy capture)
$html =~ /$match/;           # $1 is "John", $2 is "32"
Which will leave what you need in your capturing groups.
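Since the question also mentions Java, here is the same placeholder-to-regex idea there (a sketch; the class and method names are mine), with Pattern.quote keeping literal parts like </b> from being interpreted as regex syntax:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WildcardMatch {
    // Convert the {*}/{%} template into a regex, quoting literal parts.
    static String toRegex(String template) {
        StringBuilder sb = new StringBuilder();
        Matcher m = Pattern.compile("\\{\\*\\}|\\{%\\}").matcher(template);
        int last = 0;
        while (m.find()) {
            sb.append(Pattern.quote(template.substring(last, m.start())));
            sb.append(m.group().equals("{*}") ? ".*?" : "(.*?)");
            last = m.end();
        }
        sb.append(Pattern.quote(template.substring(last)));
        return sb.toString();
    }

    public static void main(String[] args) {
        String html = "<b>Name:</b>John<br><i>Age</i>32<br>";
        Matcher m = Pattern.compile(toRegex("{*}Name:</b>{%}<br>{*}Age</i>{%}<br>"))
                           .matcher(html);
        if (m.find()) {
            System.out.println(m.group(1)); // John
            System.out.println(m.group(2)); // 32
        }
    }
}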
A regex replacement would work. Just get it to return both values together like "John%32" and then split the response to get the two separate values.
There's really no advantage to a manually implemented character-by-character parser here; problems of this type have by and large been solved.
If you're dealing with an extremely normalized set of data (i.e. the template you described above is formatted exactly the same in every circumstance with no possibility of missing closing tags, HTML being inserted in odd places, etc.), regular expressions are a perfectly appropriate tool to parse this sort of data.
If the HTML can not be guaranteed to be perfect, then the most straightforward solution is to use a tool to load the HTML structure into a DOM and find the appropriate elements in the document tree.
Developing a character-by-character approach will probably end up being equivalent to manually implementing one of the above two options, which is not a trivial undertaking.
