I've tried following this gist, and I've found that once the user reaches the limit, the values for input octets and output octets in the DB take on random values.
API in question: https://api.slack.com/methods/team.accessLogs
The maximum page is 100 and the maximum number of records per page (count) is 1,000, so a total of 100,000 records could potentially be returned. Since there is no way to limit the starting date for the access log, the results will continue to grow as more unique user/IP/user-agent combinations are used, until the limit is reached, at which point it would no longer be possible to return all records. Is this correct?
Also, how are the results ordered? The documentation does not specify.
You are correct that you can typically fetch at most 100,000 records.
But there is a way to limit the starting date.
The before argument in the API lets you set the time before which you want the records:
https://api.slack.com/methods/team.accessLogs#arg_before
The records are fetched in reverse chronological order, i.e. latest record first, and by default the value of the before argument is 'now'.
After fetching the first 100,000 records, set the before argument to the "date_last" value from the last record.
(Keep in mind that the before argument is inclusive of the value provided, so the last record will be repeated. To avoid this, you can reduce the "date_last" value by 1.)
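A minimal Python sketch of that pagination loop, calling the HTTP endpoint with the requests library (the token placeholder, the loop structure, and the stop conditions are my own assumptions for illustration, not something from the Slack docs):

import requests

TOKEN = "xoxp-your-admin-token"   # assumption: a token with the admin scope
URL = "https://slack.com/api/team.accessLogs"

def fetch_all_access_logs():
    logs = []
    before = None                 # defaults to "now" on the first window
    while True:
        last_batch = []
        page = 1
        # walk the 100-page x 1,000-record window for the current "before" value
        while page <= 100:
            params = {"count": 1000, "page": page}
            if before is not None:
                params["before"] = before
            resp = requests.get(URL, params=params,
                                headers={"Authorization": f"Bearer {TOKEN}"}).json()
            if not resp.get("ok"):
                raise RuntimeError(resp.get("error"))
            batch = resp.get("logins", [])
            if not batch:
                break
            logs.extend(batch)
            last_batch = batch
            page += 1
        if not last_batch or page <= 100:
            break                 # the window was not full, nothing older is left
        # before is inclusive, so subtract 1 second to avoid repeating the last record
        before = last_batch[-1]["date_last"] - 1
    return logs

Each pass shifts the window one step further into the past, so the loop eventually walks the whole history.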
When trying to put some data into InfluxDB (1.7.3) I am getting an error that max-series-per-database has been reached:
("error": "partial write: max-series-per-database limit exceeded: (1000000) dropped=2")
Meanwhile, SHOW SERIES EXACT CARDINALITY for the specific database shows that there are only around 510,000 series.
Also, select count(*) on the database gives the same result.
Any idea why I am getting the error that max-series-per-database has been reached?
Update:
I am using the open-source version of InfluxDB without clustering.
SHOW SERIES CARDINALITY shows almost the same result as the exact cardinality query.
Try increasing the max-series-per-database configuration option and see if the error persists.
If you are using Enterprise clustering, the exact cardinality may only count series on one node, while another node holds the remaining ~490k series.
Are there other retention policies in the same database?
Also note that the error may be generated based on the approximate cardinality.
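For reference, on the 1.x line this limit is set in the [data] section of influxdb.conf; the value below is only an example, and 0 disables the check entirely:

[data]
  # default is 1000000; raise it, or set it to 0 to remove the per-database series limit
  max-series-per-database = 2000000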
I have a requirement to fetch and store the response time and the count of 2xx, 4xx and 5xx requests from access logs in InfluxDB (for graphing and alerting purposes).
I know I can use Telegraf to parse the log files and keep sending data to InfluxDB, and then get these counts by running queries on that data.
But this way I will be sending a lot of data points to the InfluxDB server.
What I am trying to find out is whether there is any way to send only processed data to InfluxDB, such as the number of requests/sec and the number of 2xx/4xx/5xx requests/sec.
I have been reading various threads and blogs but couldn't find anything matching.
Any help would be appreciated.
Thanks.
"c:\Program Files (x86)\Log Parser 2.2\LogParser.exe"
SELECT sc-status, COUNT(*) FROM [logfile]
GROUP BY sc-status
You can add math to the SELECT statement to calculate per-second response counts.
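For example (my own sketch, assuming IIS-style W3C logs with separate date and time fields, which the original answer does not state), grouping on a one-second QUANTIZE bucket yields per-second request counts per status code:

"c:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -i:IISW3C
SELECT QUANTIZE(TO_TIMESTAMP(date, time), 1) AS Second,
       sc-status, COUNT(*) AS Requests
FROM [logfile]
GROUP BY QUANTIZE(TO_TIMESTAMP(date, time), 1), sc-status

From there, Telegraf's exec or file input (or a small script) could push only these aggregated rows to InfluxDB instead of every raw log line.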
Is there a limit on table rows in additional databases in abas ERP?
If there is a limit: what factor is the limit based on, how can I calculate it, and what happens if I try to add more rows via the GUI, FO, or EDP/EPI?
Is this documented in the abas online help? I haven't found it.
Yes, there is a limit, which unfortunately is not customizable.
You can see a full list of known limitations under help/hd/html/49B.1.4.html
In your specific case, the limit on rows in additional databases is 65535.
If you reach the limit, the abas core will show an error message and terminate your current FOP. You can (and should) get the current number of rows by evaluating the variable tzeilen (currTabRow).
In this case I'm also not aware of any source other than the one you mentioned, but you can query ozeilen in a selection list (for master files, not e.g. for sales and purchasing, because the rows there aren't physically 'rows'). tzeilen (currTabRow) is buffer-related.
Through a script I can collect a sequence of videos that search.list returns. The maxResults parameter was set to 50. The total number of items is large, but there are not enough nextPageTokens to retrieve all the desired results. Is there any way to get all the returned items, or is this a YouTube restriction?
Thank you.
No, retrieving the results of a search is limited in size.
The total number of results that you are allowed to retrieve seems to have been reduced to 500 (in the past it was limited to 1,000). The API does not allow you to retrieve more from a single query. To try to get more, run a number of queries with different parameters, like publishedAfter, publishedBefore, order, type, or videoCategoryId, or vary the query terms and keep track of the distinct video IDs returned; a sketch of this approach follows below.
See for a reference:
https://code.google.com/p/gdata-issues/issues/detail?id=4282
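A minimal Python sketch of that approach, using the google-api-python-client library (the API key placeholder, the query string, and the particular order values are illustrative assumptions):

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"          # assumption: a Data API v3 key
youtube = build("youtube", "v3", developerKey=API_KEY)

def search_video_ids(**params):
    # Page through one search.list query; the API stops handing out
    # nextPageToken after roughly 500 results per query.
    page_token = None
    while True:
        resp = youtube.search().list(
            part="id", type="video", maxResults=50,
            pageToken=page_token, **params
        ).execute()
        for item in resp.get("items", []):
            yield item["id"]["videoId"]
        page_token = resp.get("nextPageToken")
        if not page_token:
            return

# Vary the parameters across queries and merge the distinct IDs.
all_ids = set()
for order in ("date", "viewCount", "rating"):
    all_ids.update(search_video_ids(q="some topic", order=order))
print(len(all_ids), "distinct videos collected")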
By the way, "totalResults" is an estimate and its value can change on the next page call.
See: YouTube API v3 totalResults field is returning 1 000 000 when it shouldn't