Computing a label from a label and relative offset - bazel

I have a macro that generates two rules to avoid circularity issues. For a call like yaspl_bootstrap_library(name = "foo", deps = [":bar"]) I want to generate the following rules:
yaspl_library(name = "foo", deps = [":bar"])
yaspl_srcs(name = "foo_srcs", deps = [":bar_srcs"])
Thus I need a function to turn ":bar" into ":bar_srcs". While the obvious string concatenation works in this example, it fails in the case where "//lib/foo" needs to be turned into "//lib/foo:foo_srcs".
This seems like a common thing that would happen in macros yet I cannot seem to find anything that does it easily.

First, I'll point out that this kind of string manipulation will not work with the select function (https://docs.bazel.build/versions/master/be/functions.html#select).
If that's not an issue for you, you can go ahead. Such a function can be written in a .bzl file. I agree that label-manipulation functions like this should become available in Bazel itself. In the meantime, you can try this function:
def explicit_label(label):
  if ":" in label or "//" not in label:
    return label
  return label + ":" + label[label.rfind("/") + 1:]
Then use explicit_label(dep) + "_srcs" in your macro.
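For context, here is a minimal sketch of how the macro might apply this helper; the yaspl_library and yaspl_srcs rule names come from the question, and the keyword handling is only illustrative:

def yaspl_bootstrap_library(name, deps = [], **kwargs):
  yaspl_library(
      name = name,
      deps = deps,
      **kwargs
  )
  yaspl_srcs(
      name = name + "_srcs",
      # Map each dep label to its generated *_srcs counterpart.
      deps = [explicit_label(dep) + "_srcs" for dep in deps],
  )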

Related

Has anyone found a nicer way to write "smth if smth.present?"?

I often come across lines like
result = 'Some text'
result += some_text_variable if some_text_variable.present?
Every time, I want to replace that with something cleaner, but I don't know how. Any ideas?
result += some_text_variable.to_s
This works if some_text_variable is nil or an empty string, for example, but it will always concatenate an empty string onto the original string.
You can also use
result += some_text_variable.presence.to_s
This handles every case that presence covers (for example, a " " whitespace-only string).
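Here is a small illustration of the difference; it assumes ActiveSupport is available (the require line is only needed outside Rails):

require "active_support/core_ext/object/blank"   # provides present? / presence

some_text_variable = "   "                        # blank, but not empty
"Some text" + some_text_variable.to_s             # => "Some text   " (whitespace kept)
"Some text" + some_text_variable.presence.to_s    # => "Some text" (presence returns nil for blanks)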
You could "compact" and join an array, e.g.
['Some text', some_text_variable].select(&:present?).join
I realise this is a longhand form; I'm just offering it as an alternative to mutating strings.
This can look a bit nicer if you have a large number of variables to munge together, or if you want to join them in some other way, e.g.
[
var_1,
var_2,
var_3,
var_4
].select(&:present?).join("\n")
Again, nothing gets mutated - which may or may not suit your coding style.
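For example, with some hypothetical values (and ActiveSupport loaded for present?), the joined form reads naturally:

require "active_support/core_ext/object/blank"  # present? outside Rails

var_1 = "first line"
var_2 = nil
var_3 = "  "          # blank, so it is dropped by select(&:present?)
var_4 = "last line"

[var_1, var_2, var_3, var_4].select(&:present?).join("\n")
# => "first line\nlast line"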

Building LaTeX/TeX arguments in Lua

I use Lua to do some complex work preparing arguments for macros in TeX/LaTeX.
Part I
Here is a stupid minimal example:
\newcommand{\test}{\luaexec{tex.print("11,12")}}% aim to create 11,12
\def\compare#1,#2.{\ifthenelse{#1<#2}{less}{more}}
\string\compare11,12. : \compare11,12.\\ %answer is less
\string\test : \test\\ % answer is 11,12
\string\compare : \compare\test. % generate an error
The last line produces an error. Obviously, TeX did not detect the "," included in \test.
How can I make \test be understood as 11 followed by , followed by 12, rather than as the string 11,12, so that it is finally used as a correctly formed argument for \compare?
There are several misunderstandings of how TeX works.
Your \compare macro wants to find something followed by a comma, then something followed by a period. However when you call
\compare\test
no comma is found, so TeX keeps looking for it until finding either the end of file or a \par (or a blank line as well). Note that TeX never expands macros when looking for the arguments to a macro.
You might do
\expandafter\compare\test.
provided that \test immediately expands to tokens in the required format, which it doesn't, because the expansion of \test is
\luaexec{tex.print("11,12")}
and the comma is hidden by the braces, so it doesn't count. But that wouldn't help anyway.
The problem is the same: when you do
\newcommand{\test}{\luaexec{tex.print("11,12")}}
the argument is not expanded. You might use “expanded definition” with \edef, but the problem is that \luaexec is not fully expandable.
If you do
\edef\test{\directlua{tex.sprint("11,12")}}
then
\expandafter\compare\test.
would work.
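Putting it together, here is a minimal compilable sketch; it assumes LuaLaTeX and the ifthen package (for \ifthenelse), reusing the definitions from the question and answer:

\documentclass{article}
\usepackage{ifthen}
\def\compare#1,#2.{\ifthenelse{#1<#2}{less}{more}}
% \edef forces full expansion, so \test holds the literal tokens 11,12
\edef\test{\directlua{tex.sprint("11,12")}}
\begin{document}
\expandafter\compare\test. % prints ``less''
\end{document}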

(F)lex, how do I match negation?

Some language grammars use negations in their rules. For example, in the Dart specification the following rule is used:
~('\'|'"'|'$'|NEWLINE)
Which means: match anything that is not one of the alternatives inside the parentheses. Now, I know in flex I can negate character rules (e.g. [^ab]), but some of the rules I want to negate could be more complicated than a single character, so I don't think I could use character classes for that. For example, I may need to negate the sequence '"""' for multiline strings, but I'm not sure what the way to do it in flex would be.
(TL;DR: Skip down to the bottom for a practical answer.)
The inverse of any regular language is a regular language. So in theory it is possible to write the inverse of a regular expression as a regular expression. Unfortunately, it is not always easy.
The """ case, at least, is not too difficult.
First, let's be clear about what we are trying to match.
Strictly speaking "not """" would mean "any string other than """". But that would include, for example, x""".
So it might be tempting to say that we're looking for "any string which does not contain """". (That is, the inverse of .*""".*). But that's not quite correct either. The typical usage is to tokenise an input like:
"""This string might contain " or ""."""
If we start after the initial """ and look for the longest string which doesn't contain """, we will find:
This string might contain " or "".""
whereas what we wanted was:
This string might contain " or "".
So it turns out that we need "any string which does not end with " and which doesn't contain """", which is actually the conjunction of two inverses: (~.*" ∧ ~.*""".*)
It's (relatively) easy to produce a state diagram for that (the diagram itself is not reproduced here).
(Note that the only difference between the above and the state diagram for "any string which does not contain """" is that in that state diagram, all the states would be accepting, and in this one states 1 and 2 are not accepting.)
Now, the challenge is to turn that back into a regular expression. There are automated techniques for doing that, but the regular expressions they produce are often long and clumsy. This case is simple, though, because there is only one accepting state and we need only describe all the paths which can end in that state:
([^"]|\"([^"]|\"[^"]))*
This model will work for any simple string, but it's a little more complicated when the string is not just a sequence of the same character. For example, suppose we wanted to match strings terminated with END rather than """. Naively modifying the above pattern would result in:
([^E]|E([^N]|N[^D]))* <--- DON'T USE THIS
but that regular expression will match the string
ENENDstuff which shouldn't have been matched
The real state diagram we're looking for (not reproduced here) has a separate state for each matched prefix of END, and one way of writing it as a regular expression is:
([^E]|E(E|NE)*([^EN]|N[^ED]))*
Again, I produced that by tracing all the ways to end up in state 0:
[^E] stays in state 0
E goes to state 1:
  (E|NE)*: stay in state 1
  [^EN]: back to state 0
  N[^ED]: back to state 0 via state 2
This can be a lot of work, both to produce and to read. And the results are error-prone. (Formal validation is easier with the state diagrams, which are small for this class of problems, rather than with the regular expressions which can grow to be enormous).
A practical and scalable solution
Practical Flex rulesets use start conditions to solve this kind of problem. For example, here is how you might recognize python triple-quoted strings:
%x TRIPLEQ
start \"\"\"
end \"\"\"
%%
{start}        { BEGIN( TRIPLEQ ); /* Note: no return, flex continues */ }
<TRIPLEQ>.|\n  { /* Append the next token to yytext instead of
                  * replacing yytext with the next token
                  */
                 yymore();
                 /* No return yet, flex continues */
               }
<TRIPLEQ>{end} { /* We've found the end of the string, but
                  * we need to get rid of the terminating """
                  */
                 yylval.str = malloc(yyleng - 2);
                 memcpy(yylval.str, yytext, yyleng - 3);
                 yylval.str[yyleng - 3] = 0;
                 BEGIN( INITIAL ); /* Leave the string start condition */
                 return STRING;
               }
This works because the . rule in start condition TRIPLEQ will not match " if the " is part of a string matched by {end}; flex always chooses the longest match. It could be made more efficient by using [^"]+|\"|\n instead of .|\n, because that would result in longer matches and consequently fewer calls to yymore(); I didn't write it that way above simply for clarity.
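If you want that optimization, the rule might look like this (a sketch based on the suggestion above, not part of the original ruleset):

<TRIPLEQ>[^"]+|\"|\n { /* Same action as the .|\n rule, but each match
                        * consumes a whole run of non-quote characters,
                        * so yymore() is called far less often.
                        */
                       yymore();
                     }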
This model is much easier to extend. In particular, if we wanted to use <![CDATA[ as the start and ]]> as the terminator, we'd only need to change the definitions
start "<![CDATA["
end "]]>"
(and possibly the optimized rule inside the start condition, if using the optimization suggested above.)

mysql_real_escape_string when echoing out?

I know I have to use mysql_real_escape_string when running it in a query, for example:
$ProjectHasReservationQuery = ("
SELECT *
FROM reservelist rl
INNER JOIN project p on rl.projectid = p.projectid
WHERE rl.projectid = ". mysql_real_escape_string($record['projectid']) ."
AND restype = 'res'
");
But how about echoing it out, like:
$query1 = mysql_query("SELECT * FROM users");
while ($record = mysql_fetch_array($query1))
{
echo "".stripslashes(mysql_real_escape_string($record['usersurname']))."";
// OR
echo "".$record['usersurname']."";
}
Which one is it? Personally I think echo "".$record['usersurname'].""; since this is coming FROM a query and not going INTO one. But I want to be 100% sure.
(I am aware about PDO and mysqli)
I know I have to use mysql_real_escape_string when running it in a query
Quite the contrary: you should not use mysql_real_escape_string on a query like this.
It will do no good, but leave you with a false feeling of safety.
As the function name says, it is used to escape strings, while you are adding a number. So the function becomes useless, and your query remains wide open to injection.
You should use this function only to format quoted strings in an SQL query.
From that rule you can conclude the answer: no, there is no point in using this function for output.
As for protection, either treat your number as a string (by quoting and escaping it) or cast it using the intval() function.
Or, the best choice, get rid of this manual formatting and start using placeholders to represent dynamic data in the query. This doesn't necessarily mean prepared statements - the same escaping could be used, but encapsulated in a placeholder-handling function.
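For concreteness, here is a minimal sketch of the two options for the numeric case, using the same legacy mysql_* API as the question (the variable names are only illustrative):

<?php
// Option 1: cast the value to an integer before interpolating it.
$projectId = intval($record['projectid']);
$query = "SELECT * FROM reservelist rl
          INNER JOIN project p ON rl.projectid = p.projectid
          WHERE rl.projectid = $projectId AND restype = 'res'";

// Option 2: treat it as a string - quote it AND escape it.
$projectId = mysql_real_escape_string($record['projectid']);
$query = "SELECT * FROM reservelist rl
          INNER JOIN project p ON rl.projectid = p.projectid
          WHERE rl.projectid = '$projectId' AND restype = 'res'";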

Labeled constants in LaTeX

I have several lemmas in which I specify constants $C_1$, $C_2$, and so forth for later reference. Naturally, this is annoying when I later insert a new constant definition in the middle. What I'd like is a macro that lets me assign labels to constants and handles the numbering for me. I'm thinking something along the lines of
%% Pseudocode
\begin{lemma}
\newconstant{important-bound}
We will show that $f(x) \le \ref{important-bound} g(x)$ for all $x$.
\end{lemma}
Is this possible?
Expanding on rcollyer's suggestion of using a counter:
%counter of current constant number:
\newcounter{constant}
%defines a new constant, but does not typeset anything:
\newcommand{\newconstant}[1]{\refstepcounter{constant}\label{#1}}
%typesets named constant:
\newcommand{\useconstant}[1]{C_{\ref{#1}}}
(This code was edited to allow labels longer than one character)
And here is a code snippet that seems to work:
I want to define two constants:\newconstant{A}\newconstant{B}$\useconstant{A}$ and
$\useconstant{B}$. Then I want to use $\useconstant{A}$ again.
What you're looking for is to create your own counter.
Expanding on Aniko's answer, I used this layered macro so that it creates a shorthand for the label:
\newcounter{constant}
\newcommand{\newconstant}[1]{\refstepcounter{constant}\label{#1}}
\newcommand{\useconstant}[1]{C_{\ref{#1}}}
\newcommand{\defconstant}[1]{\newconstant{c_#1}\expandafter\newcommand\csname c#1\endcsname{\useconstant{c_#1}}}
So to use this, you would then do
\defconstant{a}
\defconstant{b}
There exist constants $\ca$ and $\cb$ such that ....
Be careful not to overwrite existing commands (I'm sure it would warn you anyhow).
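Putting the pieces together, here is a minimal compilable sketch in the spirit of the question's pseudocode; the lemma environment is declared with \newtheorem, which is an assumption rather than part of the answers above:

\documentclass{article}
\newtheorem{lemma}{Lemma}

\newcounter{constant}
\newcommand{\newconstant}[1]{\refstepcounter{constant}\label{#1}}
\newcommand{\useconstant}[1]{C_{\ref{#1}}}

\begin{document}
\begin{lemma}
\newconstant{important-bound}%
We will show that $f(x) \le \useconstant{important-bound}\, g(x)$ for all $x$.
\end{lemma}
Later, $\useconstant{important-bound}$ refers to the same constant.
\end{document}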
