Kong header-based rate limiting

I want to apply rate limiting by header:
if X-yunus-api-key == fasdasd231jnde32e32e, apply a rate limit.
Does Kong have this feature?
# placeholder proxy URL; substitute the actual Kong route
curl --request POST \
  --url http://localhost:8000/example-route \
  --header 'Cache-Control: no-cache' \
  --header 'X-yunus-api-key: fasdasd231jnde32e32e' \
  --header 'Content-Type: application/json' \
  --data-raw '{ }'

Setting config.limit_by to header and specifying config.header_name as X-yunus-api-key in the rate-limiting plugin should be enough for your use case.
If you also want to validate the consumer, using the key-auth plugin and setting credentials for consumers is another valid option.
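For example, with the Admin API that could look like this (a minimal sketch; the service name, the Admin API address, and the limit of 5 requests per minute are assumptions):
# enable rate-limiting on a service, keyed by the X-yunus-api-key header
curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=5" \
  --data "config.limit_by=header" \
  --data "config.header_name=X-yunus-api-key"
Each distinct value of X-yunus-api-key then gets its own rate-limit counter.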


How to create a blank but valid EPS?

One of our printing applications runs an external program which does some magic and sometimes returns a barcode in EPS format to be printed on the document.
if [ ... some magic ]
then
    # generate a Code 39 barcode and convert the PostScript output to EPS
    gnu-barcode -b "$1" -c -e code39 -u mm -t 1x3 > "$TMP.ps"
    ps2epsi "$TMP.ps" "$TMP.eps"
    cat "$TMP.eps"
    rm -f "$TMP.eps" "$TMP.ps"
else
    # no barcode available: currently emits nothing, which is not valid EPS
    cat /dev/null
fi
This works OK. However, it generates an annoying warning on the printing application side about not receiving a valid EPS when the else ... runs and we do cat /dev/null. I need to return a blank but valid EPS instead of the cat /dev/null. How can I accomplish this?
The EPS format is defined in Adobe Technical Note 5002; it's available on the web, but it moves around so much that I won't attempt to post a URL. However, unless you are a PostScript programmer, that probably won't help you.
The simplest possible valid EPS would be something like:
%!PS-Adobe-2.0 EPSF-3.0
%%BoundingBox:0 0 0 0
That's the only required content in an EPSF. Of course, a real printing application might not like a BoundingBox of 0 0 0 0.
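In the script above, the else branch could therefore emit that minimal EPS with a here-document instead of cat /dev/null. A sketch (the 1x1 point BoundingBox is an assumption, picked because of the caveat about 0 0 0 0 boxes):
# emit a blank but structurally valid EPS
cat <<'EOF'
%!PS-Adobe-2.0 EPSF-3.0
%%BoundingBox: 0 0 1 1
EOF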

Character position from the start of a line

In flex, what's the way to get the character position from the start of a line?
I found a post about getting the position from the start of the file, but I want it from the start of a line.
It should also handle a case like this:
/** this is
a comment
*/int x,y;
output:
Here, the position of "int" = 3
Please give me some hints to implement this.
I presume that the "post regarding position from start of a file" is this one, or something similar. The following is based on that answer.
To track the current column offset, you only need to add the code to reset the offset value when you hit a newline character. If you follow the model presented in the linked answer, you can use a YY_USER_ACTION macro like this to do the adjustment:
#define YY_USER_ACTION \
    /* The current token starts where the previous token ended. */ \
    yylloc.first_line = yylloc.last_line; \
    yylloc.first_column = yylloc.last_column; \
    if (yylloc.first_line == yylineno) \
        /* No newline in the token: just advance the column. */ \
        yylloc.last_column += yyleng; \
    else { \
        /* The token contains a newline: count back to the last one. */ \
        int col; \
        for (col = 1; yytext[yyleng - col] != '\n'; ++col) {} \
        yylloc.last_column = col; \
        yylloc.last_line = yylineno; \
    }
The above code assumes that the starting line/column of the current token is the end line/column of the previous token, which means that yylloc needs to be correctly initialized. Normally you don't need to worry about this, because bison will automatically declare and initialize yylloc (to {1,1,1,1}), as long as it knows that you are using location information.
The if test in the macro optimizes the common case where there was no newline in the token, in which case yylineno will not have changed since the beginning of the token. In the else clause, we know that a newline will be found in the token, which means we don't have to check for buffer underflow. (If you call input() or otherwise manipulate yylineno yourself, then you'll need to fix the for loop.)
Note that the code will not work properly if you use yyless or yymore, or if you call input.
With yymore, yylloc will report the range of the last token segment, rather than the entire token; to fix that, you'll need to save the real token beginning.
To correctly track the token range with yyless, you'll probably need to rescan the token after the call to yyless (although the rescan can be avoided if there is no newline in the token).
And after calling input, you'll need to manually update yylloc.last_column for each character read. Don't adjust yylineno; flex will deal with that correctly. But you do need to update yylloc.last_line if yylineno has changed.

How to escape special characters in a curl POST

How do I escape these kinds of special characters in curl? The value below is passed in the query part. I am trying to query Splunk data, and this is a regular expression used with the rex command. Without the characters below it works fine, but I need them to be part of the query.
(?i)^(?:[^-]*-){}\s+\d+
I tried giving -g to stop globbing, but that doesn't work. Is there a simpler way to do this? I am passing --data-urlencode in curl, so the encoding is automatically taken care of.
I recently encountered the exact same problem statement. Splunk was having an issue with my regex when it contained [ or ] characters. I was also using curl, and I was sending my search query with curl's -d parameter for post data. I tried several variations of encoding of the brackets (escaping them, percent-encoding them, etc.), but to no avail.
The solution in my case was to use the --data-urlencode parameter instead of the --data parameter.
From the curl help:
$ curl --help | grep "\-\-data"
-d/--data <data> HTTP POST data (H)
--data-ascii <data> HTTP POST ASCII data (H)
--data-binary <data> HTTP POST binary data (H)
--data-urlencode <name=data/name#filename> HTTP POST data url encoded (H)
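Applied to the question, that looks something like this (a sketch: the Splunk host, credentials, index, and search-jobs endpoint are illustrative, and the regex is the one from the question with its elided repetition count left as-is):
# let curl percent-encode the whole search string, brackets included
curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
  --data-urlencode 'search=search index=main | rex "(?i)^(?:[^-]*-){}\s+\d+"'
Because everything after search= is percent-encoded by curl itself, the brackets, braces, and backslashes reach Splunk untouched.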

Custom paper size for label printer (Brother QL 570)

I have a new label printer (Brother QL 570) which supports endless paper. My thought was that I would be able to save paper by printing just as much as I need - wrong!
The printer comes with paper sizes of 63mm x 100mm and 63mm x 29mm (and some others), but I need 63mm x 'flexible length', or something like 63mm x 40mm.
How can I change that? I will print from OpenOffice.
Thanks!
(Driver is CUPS, using Mint 17.1)
The CUPS drivers that come with the printer include a utility for adding custom paper sizes.
Open a terminal and type:
brpapertoollpr_ql570
And you will see the usage examples for adding and removing custom sizes:
===========================
LPRng Paper Size Tool (v 0.1) Copyright by 2005
Usage: brpapertoollpr_xxxx -P QL-xxxx Printer Name [-n add a Lable Format Name (<=32 bytes) -w Media Width(unit:mm) -h Media Height(unit:mm)]/[-d delete Lable Format Name]
For example:
1. Add a new Label Format with "New Label Format" name and 29mm width and 70mm length:
"brpapertoollpr_ql570 -P QL-570 -n New\ Label\ Format -w 29 -h 70" [enter]
2. Remove the Label Format with "New Label Format" name:
"brpapertoollpr_ql570 -P QL-570 -d New\ Label\ Format" [enter]
===========================
As an example, I did
brpapertoollpr_ql570 -P QL-570 -n long_label -w 62 -h 200
to set up a custom paper size called "long_label", which prints a 200mm-long label on the 62mm continuous roll.
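For the 63mm x 40mm size asked about in the question, something like this should work (a sketch; it assumes the same 62mm printable width used in the example above, and the label name is arbitrary):
# add a 40mm-long label format for the 62mm continuous roll
brpapertoollpr_ql570 -P QL-570 -n label_62x40 -w 62 -h 40
The new size should then show up as a selectable paper format in the printer's CUPS options, e.g. in OpenOffice's print dialog.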

seq2sparse seems to be ignoring the value of my "-x" parameter

I'm using Mahout 0.7 on a pseudo-distributed Hadoop installation for testing purposes.
A lot of what I'm doing is guided by Mahout in Action, which I know covers 0.5, but as far as I can tell nothing major has changed with seq2sparse.
I'm having a problem with the tfidf vectors generated by seq2sparse. No matter what I set "-x" (maximum document frequency percentage) to, I end up with the same number of terms in my dictionary and vectors of the same size.
I found one posting about Mahout 0.6 where -x was parsed as an absolute number of documents rather than a percentage of documents. That was supposed to have been fixed in 0.7, but I tried using it that way too, just to see if it would help; the number of terms did not change. Here are the values I've tried and the number of terms I ended up with. My data set is 4850 Wikipedia articles from: http://dumps.wikimedia.org/enwiki/20110803/
The exact file is: pages-articles1.xml.bz2
The xml file was turned into a seqfile with:
mahout seqwiki -all -i <path to xml file> -o <path to output directory>
My calls to seq2sparse look like this:
mahout seq2sparse -i <seq directory> -o <out dir> -ow -wt tfidf -x 4800 -nv
My results:
| -x value | # of terms |
|----------|------------|
| 4800     | 256623     |
| 4600     | 256623     |
| 2500     | 256623     |
| 99       | 256623     |
| 90       | 256623     |
| 25       | 256623     |
| 5        | 256623     |
Any ideas on what I'm doing wrong?
I ended up asking this question on the mahout user mailing list and got an answer. I'll reproduce it here for anybody wondering the same thing I was:
Dave Byrne - "maxDFPercent won't actually remove the terms from the dictionary, or reduce the size of the tfidf vectors. It simply sets the value of the vector to 0 for that term.
In other words, the dictionary size and vector length will remain the same, with fewer non-zero terms."
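One way to see this for yourself is to dump a few of the resulting vectors together with the dictionary and look at the non-zero entries (a sketch; the paths are illustrative, and the vectordump options should be double-checked against your Mahout 0.7 install):
# dump the tfidf vectors with terms resolved through the dictionary
mahout vectordump \
  -i <out dir>/tfidf-vectors \
  -d <out dir>/dictionary.file-0 \
  -dt sequencefile \
  -o /tmp/tfidf-dump.txt
Lowering -x should reduce the number of non-zero entries in the dumped vectors, even though the dictionary size and vector cardinality stay at 256623.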
