Looking for unused symbols in Lua

I'm trying to embed Lua in my host program but my host program doesn't allow users to type '\', '{' and '}' symbols.
I need to use '{' and '}' for table construction so I'm looking for alternative symbols that are unused in Lua so I can replace these symbols internally before sending the code to an interpreter.
I'm a beginner in Lua and would like to know if there is any symbol that is never used in the Lua programming language (except inside a string).
My personal guess is that the grave accent symbol (`) is not used in Lua.
I would appreciate it if anyone could confirm this.
Thanks!

Specifically, the grave accent doesn't seem to be used, and you can check this yourself by looking at the "Complete Syntax of Lua" section in the reference manual.

As far as I know the grave accent is not currently used in Lua, so you are fine. It might also be useful to know that Lua, like most programming languages, has some special (magic) characters that convey a special meaning when used in a specific context. Take a look at the official documentation.
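If you go this route, the pre-processing step is just a textual substitution before the code reaches the interpreter. A minimal sketch in Python, assuming you pick the grave accent (`) for '{' and the acute accent (´) for '}' (the closing symbol is a hypothetical choice; any character unused by Lua and typeable in your host program would do):

```python
# Translate placeholder symbols into Lua's table-constructor braces before
# handing the source to the Lua interpreter.  Note: this naive version also
# rewrites occurrences inside string literals; skip those spans if your
# users can write strings containing these characters.
def to_lua(source):
    return source.replace("`", "{").replace("´", "}")

print(to_lua("t = `1, 2, nested = `3´´"))  # t = {1, 2, nested = {3}}
```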

Related

Using tree-sitter as compiler's main parser

Can a parser generated by tree-sitter be used both for syntax highlighting and for the compiler itself? If not, why?
It would be counterproductive to write and maintain two different parsers.
Note: I haven't used tree-sitter yet, but I am considering it for syntax highlighting in my own programming language. Because of that, I may misunderstand how its parser actually works.

Removing Fparsec Pipes from a F# project

So I found a somewhat abandoned open-source project that uses FParsec to parse some GraphQL syntax. I need it in .NET Core, so I tried to port it. The problem is that it uses the FParsec-Pipes library, which has not been ported to .NET Core. So I want to fork the repo and remove the Pipes library, so that I can port the rest to .NET Core, even though I don't know F# and have no experience with FParsec.
This is the F# parser:
https://github.com/Lauchi/graphql-net/blob/master/GraphQL.Parser/Parsing/Parser.fs
I think I managed to translate some of the stuff back to fparsec, what I got so far is:
%[...] is choice [...]
%% '#' is pchar '#'
%% 'hello' is pstring 'hello'
%% '[' -..- is something like pchar '[' >>.
But by now you might have figured out that I actually have no clue what I am doing, because the syntax is very confusing to me. I understand what a parser and a combinator are in theory, and I have some understanding of functional programming and how to pass functions and parameters around, but I just cannot read the syntax and feel very lost.
Can anybody with some more experience give me some tips on how I can change this library back to normal fparsec code?
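To make the primitives above less mysterious, here is a minimal parser-combinator sketch, written in Python rather than F# purely to illustrate what `choice`, `pchar`, `pstring`, and `>>.` do (this is my own toy model, not FParsec's actual implementation). Each parser takes a string and a position and returns either `(value, next_position)` or `None` on failure:

```python
def pchar(c):
    # Parser that matches a single literal character.
    def parse(s, i):
        return (c, i + 1) if i < len(s) and s[i] == c else None
    return parse

def pstring(lit):
    # Parser that matches a literal string.
    def parse(s, i):
        return (lit, i + len(lit)) if s.startswith(lit, i) else None
    return parse

def choice(parsers):
    # Try each alternative in order; first success wins.
    def parse(s, i):
        for p in parsers:
            r = p(s, i)
            if r is not None:
                return r
        return None
    return parse

def then_right(p1, p2):
    # FParsec's >>. : run both parsers, keep only the right result.
    def parse(s, i):
        r1 = p1(s, i)
        return None if r1 is None else p2(s, r1[1])
    return parse
```

So `choice [p1; p2]` tries alternatives in order, and `pchar '[' >>. p` matches the bracket, throws it away, and keeps `p`'s result.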

Common Lisp lexer generator that allows state variables

Neither of the two commonly referenced lexer generators, cl-lex and lispbuilder-lexer, allows state variables in the "action blocks", making it impossible to recognize a C-style multi-line comment, for example.
What is a lexer generator in Common Lisp that can recognize a C-style multi-line comment as a token?
Correction: this lexer actually needs to recognize nested, balanced multi-line comments (not exactly C-style), so I can't do away with state variables.
You can recognize a C-style multiline comment with the following regular expression:
[/][*][^*]*[*]+([^*/][^*]*[*]+)*[/]
It should work with any library that uses POSIX-compatible extended regex syntax; although it is a bit hard to read, because * is used extensively both as an operator and as a literal character, it uses no non-regular features. It does rely on inverted character classes ([^*], for example) matching the newline character, but as far as I know that is pretty well universal, even in regex engines whose wildcard does not match newline.
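You can check the expression quickly; a sketch in Python, whose `re` module accepts this POSIX-style subset unchanged:

```python
import re

# The answer's regex, verbatim.  Character classes like [^*] match
# newlines even though Python's default "." does not.
C_COMMENT = re.compile(r"[/][*][^*]*[*]+([^*/][^*]*[*]+)*[/]")

print(bool(C_COMMENT.fullmatch("/* hello */")))           # True
print(bool(C_COMMENT.fullmatch("/* spans\ntwo lines */")))  # True
print(bool(C_COMMENT.fullmatch("/* unterminated")))       # False
```

Note that, as the correction above points out, this cannot handle nested comments: balanced nesting is not a regular language, so a depth counter or state variable is unavoidable there.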

How to write a language with Python-like indentation in syntax?

I'm writing a tool with its own built-in language, similar to Python. I want to make indentation meaningful in the syntax (so that tabs and spaces at the beginning of a line represent nesting of commands).
What is the best way to do this?
I've written recursive-descent and finite automata parsers before.
The current CPython parser seems to be generated from a grammar file (its AST node definitions use something called ASDL).
Regarding the indentation you're asking about: it's done using special lexer tokens called INDENT and DEDENT. To replicate that, implement those tokens in your lexer (which is fairly easy if you use a stack to store the starting columns of previous indented lines), and then plug them into your grammar as usual (like any other keyword or operator token).
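The stack-based scheme described above can be sketched like this (a toy Python version, not CPython's actual tokenizer; it ignores tabs and silently accepts inconsistent dedents, which a real lexer must report as errors):

```python
def indent_tokens(lines):
    """Turn leading-whitespace changes into INDENT/DEDENT tokens."""
    stack = [0]           # columns where the currently open blocks started
    tokens = []
    for line in lines:
        if not line.strip():
            continue      # blank lines don't affect indentation
        col = len(line) - len(line.lstrip(" "))
        if col > stack[-1]:            # deeper than the enclosing block
            stack.append(col)
            tokens.append("INDENT")
        while col < stack[-1]:         # back out to an enclosing block
            stack.pop()
            tokens.append("DEDENT")
        tokens.append(("LINE", line.strip()))
    while stack[-1] > 0:               # close blocks still open at EOF
        stack.pop()
        tokens.append("DEDENT")
    return tokens
```

A grammar can then treat INDENT/DEDENT exactly like an opening and closing brace.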
Check out the python compiler and in particular compiler.parse.
I'd suggest ANTLR for any lexer/parser generation ( http://www.antlr.org ).
Also, this website ( http://erezsh.wordpress.com/2008/07/12/python-parsing-1-lexing/ ) has some more information, in particular:
Python’s indentation cannot be solved with a DFA. (I’m still perplexed at whether it can even be solved with a context-free grammar).
PyPy produced an interesting post about lexing Python (they intend to solve it using post-processing the lexer output)
CPython’s tokenizer is written in C. It’s ad-hoc, hand-written, and complex. It is the only official implementation of Python lexing that I know of.

What's a Rails plugin, or Ruby gem, to automatically fix English grammar?

Facebook just re-launched Comments with an automatic grammar-fixing feature.
What does the grammar filter do?
Adds punctuation (e.g. periods at the end of sentences)
Trims extra whitespace
Auto-cases words (e.g. capitalizes the first word of a sentence)
Expands slang words (e.g. plz becomes please)
Adds a space after punctuation (e.g. Hi,Cat would become Hi, Cat)
Fixes common grammar mistakes (e.g. converts 'dont' to 'don't')
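Several of the listed fixes are plain text transformations rather than deep NLP. A rough Python sketch of a few of them (my own naive rules, not Facebook's actual filter; it will happily "fix" things like decimal numbers):

```python
import re

def tidy(text):
    text = re.sub(r"\s+", " ", text).strip()         # trim extra whitespace
    text = re.sub(r"([,.!?])(?=\S)", r"\1 ", text)   # space after punctuation
    text = text[:1].upper() + text[1:]               # capitalize first word
    if text and text[-1] not in ".!?":
        text += "."                                  # final punctuation
    return text

print(tidy("hi,cat   how are you"))  # Hi, cat how are you.
```

Anything beyond that (slang expansion, 'dont' to 'don't') is essentially a dictionary of rewrite rules.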
What is an equivalent plugin or gem?
I don't know of anything with those particular features.
However, you might look at Ruby LinkParser, which is a Ruby wrapper for the Link Grammar parser, developed by academics and used by the AbiWord project for grammar checking. (Note that "link" in Link Grammar parser doesn't refer to HTML links, but rather to a structure that describes English syntax as a set of links between words.)
Here's another interesting checker, written in Ruby, which is designed to check LaTeX files for some of the problems you mention (plus others).
"After the Deadline" is a complete (free) grammar checking service. Someone has already written a Ruby wrapper for it.
https://github.com/msepcot/after_the_deadline
You may be interested in Gingerice, which seems to do what you are looking for!
