SNMP operations on MIB - monitoring

Hello, I am creating a MIB, and I have a table with attributes of files: name, file type, etc., plus a DateAndTime object representing the time at which each file was created.
In order to delete elements of said table, one column has to be of the RowStatus type.
Now my question is: if I wanted to get all files that were created in the last 12 hours, what command sequence would the SNMP manager use to select them?
To my knowledge it is not possible to select rows by a timestamp attribute inside a table.

There is no way to select data by timestamp in SNMP the way you would in a SQL query.
With a table you have to read all the data (i.e. walk the column) and, if needed, keep only the rows whose creation time falls within the timeframe you are looking for.
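For illustration, here is a minimal manager-side sketch in Python of that walk-and-filter approach, assuming the pysnmp library; the column OID, host, and community string below are placeholders for your own MIB's values:

from datetime import datetime, timedelta

from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, nextCmd)

def parse_date_and_time(octet_string):
    # Decode the first 7 fields of an RFC 2579 DateAndTime OCTET STRING.
    b = octet_string.asOctets()
    year = (b[0] << 8) | b[1]                       # 2-byte big-endian year
    return datetime(year, b[2], b[3], b[4], b[5], b[6])

CREATION_TIME_COLUMN = '1.3.6.1.4.1.99999.1.1.1.4'  # placeholder column OID
cutoff = datetime.now() - timedelta(hours=12)
recent_rows = []

for error_indication, error_status, _, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData('public'),
        UdpTransportTarget(('agent.example.com', 161)),
        ContextData(),
        ObjectType(ObjectIdentity(CREATION_TIME_COLUMN)),
        lexicographicMode=False):                   # stop at the end of the column
    if error_indication or error_status:
        break                                       # walk failed or returned an error
    for oid, value in var_binds:
        if parse_date_and_time(value) >= cutoff:
            recent_rows.append(oid)                 # row index is the OID suffix

print(len(recent_rows), 'files created in the last 12 hours')

Note that the filtering happens entirely on the manager side, so every row's timestamp is transferred regardless of how few rows actually match.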

Related

Essbase Hyperion Add data inside Rule Files Bulk more than 100 rows

I already have a rule file (e.g. rule MM01), and I need to add more data rows to one dimension in rule MM01, as below.
For example, I want to add 100 more rows of data to the "Replace" and "With" columns.
Do I have to add the 100 rows one by one, typing them in manually? Or is there another way to add bulk data to a rule file?
Nope, you just have to type them in.
If new items keep popping up in your source data, you might consider one of the following:
put your source text file into a SQL table and make your load rule read from the table (or, even better, try to load directly from the tables that generated the text file)
(assuming you have the data load automated via MaxL) add a PowerShell script that does the renames before you load the data; the idea is sketched below
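A minimal sketch of that rename-before-load step, written in Python purely for illustration (the answer above suggests PowerShell; the file names and the CSV mapping format are assumptions):

import csv

def apply_renames(src_path, dst_path, mapping_path):
    # mapping_path: CSV of old_name,new_name pairs (the "Replace"/"With" values)
    with open(mapping_path, newline='') as f:
        mapping = dict(csv.reader(f))
    with open(src_path) as src, open(dst_path, 'w') as dst:
        for line in src:
            for old, new in mapping.items():
                line = line.replace(old, new)
            dst.write(line)

apply_renames('export.txt', 'export_renamed.txt', 'replacements.csv')

Keeping the pairs in an external mapping file means a new rename is a one-line edit instead of a change to the rule file itself.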

Google Data Studio: how to create time series chart with custom Big Query query

I have a Data Studio report with a Time Series added. The data source is from a custom query using the Big Query connector:
select user_dim.app_info.app_version, count(1) as count
from [my_app_domain_ANDROID.app_events_20160929]
group by 1
According to the Data Studio documentation at: https://support.google.com/360suite/datastudio/answer/6370296?hl=en
BigQuery supports querying across multiple tables, where each table has a single day of data. The tables have the format of YYYYMMDD. When Data Studio encounters a table that has the format of YYYYMMDD, the table will be marked as a multi-day table and only the name prefix_YYYYMMDD will be displayed in the table select.
When a chart is created to visualize this table, Data Studio will automatically create a default date range of the last 28 days, and properly query the last 28 tables. You can configure this setting by editing the report, selecting the chart, then adjust the Date Range properties in the chart's [...]
However, in the Time Series Properties DATA tab, there is no valid "Time Dimension" to select. According to the documentation, I should not need to select a Time Dimension; it should query the right table automatically.
Is there something I am not understanding yet?
There are 2 issues with the query in the question:
To get a time series, you'll need to add a time based column to the custom query.
For example:
SELECT created_at, COUNT(*) c
FROM [githubarchive:day.20160930]
WHERE type='WatchEvent'
GROUP BY 1
Data Studio won't do the 28-day expansion with custom queries. To get the expansion described in the documentation, you need to point to an actual table (and Data Studio will figure out the prefix and date expansion).
I left a working example at:
https://datastudio.google.com/open/0ByGAKP3QmCjLSjBPbmlkZjA3aUU

Run workflow several times to load data from source to single target

I have a relational source with 60 records in it.
I want to run a workflow for this mapping so that the first run loads only 20 records (1-20), the second run the next 20 (21-40), and the third run the last 20 (41-60), all into a single target.
How can we do this?
One solution would be to:
use a mapping variable, e.g. $$rowAmount, with an initial value of 20
add a Sequence transformation to generate row numbers
use a Filter with the condition RowId > $$rowAmount - 20 AND RowId <= $$rowAmount (note the <=, so each run picks up exactly 20 rows)
use the SETVARIABLE function to increase $$rowAmount by 20 and store it in the repository; the logic is sketched below
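To see why those pieces add up to three clean batches, here is a quick simulation of the logic in Python (the names mirror the Informatica objects above; this is not Informatica syntax):

rows = list(range(1, 61))      # RowId values produced by the Sequence
row_amount = 20                # initial value of $$rowAmount

for run in (1, 2, 3):
    batch = [r for r in rows if row_amount - 20 < r <= row_amount]
    print('run', run, ':', batch[0], '-', batch[-1])   # 1-20, 21-40, 41-60
    row_amount += 20           # what SETVARIABLE persists for the next run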

SQL Server table ID approaches max length

I have code first MVC 4 application with SQL Server 2008.
One of my tables is used heavily: a lot of data is stored in it every day, and after some time I delete the old data. That is why the IDs increase quickly.
I defined the ID as int type in the model. I am worried the ID range will be exhausted after some time.
What should I do if the table's ID reaches its maximum value? I have never met this situation before.
My second question: if I change the ID's type from int to long (bigint) and then export/import the database, will the long type affect (reduce) the speed of the site?
If you use an INT IDENTITY starting at 1, and you insert a row every second, every day of the year, all year long -- then you need roughly 68 years before you hit the 2,147,483,647 limit ...
If you're afraid - what does this command tell you?
SELECT IDENT_CURRENT('your-table-name-here')
Are you really getting close to 2,147,483,647 - or are you still a ways away??
If you should be getting "too close": you can always change the column's datatype to BIGINT - if you use a BIGINT IDENTITY starting at 1, and you insert one thousand rows every second, you need a mind-boggling 292 million years before you hit the 9,223,372,036,854,775,807 (roughly 9.2 quintillion) limit ....
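For anyone who wants to check the arithmetic, a quick back-of-the-envelope in Python:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

int_max = 2**31 - 1            # 2,147,483,647
bigint_max = 2**63 - 1         # 9,223,372,036,854,775,807

print(int_max / SECONDS_PER_YEAR)             # ~68 years at 1 row/second
print(bigint_max / 1000 / SECONDS_PER_YEAR)   # ~292 million years at 1,000 rows/second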
As far as I know, there isn't any effective way to prevent reaching the limit of an auto-incremented identity. You can set it to a data type big enough to last long when you create the table. But here's one solution I can think of.
Create a new temp table with the same data structure, with the auto-increment column already included and set as the primary key. Then, inside Management Studio, import the data into the new table from the old table. When you are asked to copy the data or write your own query, choose to write the query, and select everything from your old table except the ID. This resets the identity to start back at 1. You can delete the old table after that and rename the new temp table. Note that you have to right-click the database itself to access the Import and Export command, but you can set the source and destination database to the same one in the options.
This method is pretty easy, and I've done it myself a couple of times.

Append Query That Also Selects A Lookup Table Value Based On Text Parsing?

I've posted a demo Access db at http://www.derekbeck.com/Database0.accdb . I'm using Access 2007.
I am importing an Excel spreadsheet, which my organization gets weekly, into Access. It gets imported into the table [imported Task list]. From there, an append query reformats it and appends it to my [Master Task List] table.
Previously, we had a form where we would manually go through the newest imports and select whether our department was the primary POC for a tasking. I want to automate this.
What syntax do I require such that the append query will parse the text from [imported Task list].[Department], searching for the divisions listed in the [OurDepartments] table (those parts of our company for which we are tracking these tasks), and then select the appropriate Lookup field (connected to the [OurDepartments] table) in our [Master Task List] table?
I know that's a mouthful... Put another way, I want the append query to update [Master Task List].[OurDepartments], which is a lookup, based on parsing the text of [imported Task list].[Department].
Note the tricky element: we have to parse the text for "BA" as well as "BAD", "BAC", etc. The shorter "BA" might be an interesting issue for this query.
Hoping for a Non-VBA solution.
Thanks for taking a look!
Derek
PS: Would be very helpful if anyone might be able to respond within the work week. Thx!
The answer is here: http://www.utteraccess.com/forum/Append-Query-Selects-L-t1984607.html
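The linked thread covers the Access query syntax; the heart of it is the matching rule itself, which has to try the longer division codes before the shorter ones so that "BAD" or "BAC" wins over plain "BA". A sketch of that rule, in Python purely for illustration (the division list is a stand-in for the [OurDepartments] rows):

divisions = ['BAD', 'BAC', 'BA']            # stand-in for [OurDepartments]

def match_division(department_text):
    # Try the longest codes first, so "BAD"/"BAC" are found before plain "BA".
    for code in sorted(divisions, key=len, reverse=True):
        if code in department_text:
            return code
    return None

print(match_division('Tasking for BAD office'))   # -> BAD
print(match_division('Assigned to BA group'))     # -> BA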
