Cognos cache for story, dashboard and report - Cognos 11

Q1: If I set "Data cache = On" in a dashboard or story, what is the default cache expiry duration for a story or dashboard? I don't see anything about it in the documentation. I need a number.
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.0.0/com.ibm.swg.ba.cognos.ug_ca_dshb.doc/ca_enable_data_caching.html
Q2: If I set "Use local cache = Yes" in a report, what is the default cache expiry duration for a report? The documentation says 60 minutes. Really? And where can I modify it?
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_queryreuse.html

I know this isn't a 100% answer to your question, but if you use data modules you can select a table in the module, open its properties, and adjust the data cache behaviour under Advanced.

Related

Microsoft Graph API - What is the page size for every delta query on users and groups?

What is the page size for every delta query?
https://developer.microsoft.com/en-us/graph/docs/concepts/delta_query_users
https://developer.microsoft.com/en-us/graph/docs/concepts/delta_query_groups
My understanding is that $top doesn't work with delta queries on users and groups, so we cannot set a custom page size.
TL;DR: For delta queries the page size is not fixed or guaranteed. There is a Prefer: odata.maxpagesize=X header, but it doesn't work for all queries.
If you try to reproduce this situation in Graph Explorer using the following:
https://graph.microsoft.com/v1.0/me/calendarView/delta?startDateTime=2010-01-01 00:00:00&endDateTime=2020-01-01 00:00:00&$top=5
It'll give you the following error:
The '$top' parameter is not supported with change tracking over the 'CalendarView' resource as page size cannot be guaranteed. Express page size preferences using the Prefer: odata.maxpagesize= header instead.
As stated in this error, the page size is not guaranteed. However, by adding the additional header
Prefer: odata.maxpagesize=10
you will be able to see that only 10 results are returned.
One remark: this header is not supported for some resources, including the ones you asked about (user and group).
To see whether it is supported for another resource's delta query, go to this page, choose the API, and check the Request headers section of its documentation. If you find a header with "odata.maxpagesize={x}. Optional." in the description, it is supported.
As of today (July '18), the following APIs support the odata.maxpagesize header: event, mailFolder, message, contactFolder, and contact; the following do not: group, user, driveItem, and plannerUser.
Feel free to play with Graph Explorer as it might be very helpful in troubleshooting.

Volusion API - Export Orders by Date Range

On a scheduled basis, I would like to export Volusion orders by a date range:
select * from orders o where o.OrderDate >= '7/20/2015' AND o.OrderDate <= '7/23/2015'
Is this possible? It appears my URL can only do an equals sign:
https://www.XX.net/net/WebService.aspx?Login=shopxperts#yahoo.com&EncryptedPassword=XX&EDI_Name=Generic\Orders&SELECT_Columns=*&WHERE_Column=o.orderdate&WHERE_Value=7/18/2015 10:58:09 AM
I looked at the SQL saved query feature. Is there a way to save a query with parameters, then fill them in?
Yes, you can insert parameters into the query and execute it from the Generic folder.
Using a custom ASP page, this is a summary of what you would have to do:
1- Read the querystring(s) and sanitize them.
2- Construct the SQL query with the sanitized parameters (see the example below).
3- Write the SQL and XSD files to the "Generic" folder.
4- Execute the newly written file in the Generic folder by making an HTTP request:
https://www.XX.net/net/WebService.aspx?Login=shopxperts#yahoo.com&EncryptedPassword=XX&EDI_Name=Generic\xyz
5- Delete the files, since they are no longer needed once the query is complete.
6- Return the data from the query to the page requesting it.
Obviously, this is a very abridged version of what you would need to do, but it is certainly possible.
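To make step 2 concrete: the "parameters" are filled in by your page before the file is written, so what lands in the Generic folder is a complete query. For the date range in the question it would look something like this (a sketch only; the Orders table and OrderDate column come from the question's query, and the exact file layout Volusion expects may differ):
-- Query the custom page would write out after substituting the sanitized dates
SELECT *
FROM Orders o
WHERE o.OrderDate >= '7/20/2015'
  AND o.OrderDate <= '7/23/2015'
Because the values are injected directly into the SQL text, the sanitizing in step 1 is what protects you from SQL injection.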

How can the getLoadTime plugin be implemented in Adobe DTM?

Where do I make the initial function call to s_getLoadTime()? My library is being managed by Adobe.
https://marketing.adobe.com/resources/help/en_US/sc/implement/getLoadTime.html
Step 1: Add the plugin and timer start code
First, you need a Page Load Rule that is set to trigger at "Top of Page". If you already have an existing rule that triggers every page load at top of page, you can use that. If you do not, then create a new one.
Then, in the Javascript / Third Party Tags section, click on "Add New Script". Set the Type to "Sequential Javascript" and check the Execute Globally option.
In the code box, paste the following code:
// this is for older browser support
var inHeadTS=(new Date()).getTime();
// plugin
function s_getLoadTime(){if(!window.s_loadT){var b=new Date().getTime(),o=window.performance?performance.timing:0,a=o?o.requestStart:window.inHeadTS||0;s_loadT=a?Math.round((b-a)/100):''}return s_loadT}
// call plugin first time
s_getLoadTime();
Click on Save Code and then Save Rule.
Step 2: Make the 2nd call to the plugin and assign the result to Adobe Analytics variables
Next, you need a Page Load Rule that is set to trigger at "Bottom of Page". If you already have an existing rule that triggers every page load at bottom of page, you can use that. If you do not, then create a new one.
Then, go to Conditions > Rule Conditions > Criteria and from the dropdown select Data > Custom and click "Add Criteria". In the code box, add the following:
_satellite.setVar('loadTime',s_getLoadTime());
return true;
Then, within the Adobe Analytics section of the rule, you can set your prop and/or eVar to %loadTime%.
Note: Using a rule set to trigger at "Onload" would technically be more accurate. However, DTM does not currently offer the ability to trigger Adobe Analytics on Onload (the options are only top or bottom of page), so if you set the rule to "Onload" it will fire after AA has already made its request, and your variables will not be populated and sent in that request. If you really want that accuracy, you will need to explore other options, such as implementing AA as a 3rd-party script so that you have more control over when it triggers.
Click on Save Rule and then Approve/Publish once you have tested.
The question should really be, "Why should the getLoadTime() plugIn be used, ever?". Yasho, I started with the same question that you had and blindly implemented the plugIn in Adobe DTM following the instructions at https://marketing.adobe.com/resources/help/en_US/sc/implement/getLoadTime.html
Only after starting to analyze the data did I look into the plugIn to see what it does.
Below is the beautified code of the plugIn:
function s_getLoadTime() {
    if (!window.s_loadT) {
        var b = new Date().getTime(),
            o = window.performance ? performance.timing : 0,
            a = o ? o.requestStart : window.inHeadTS || 0;
        s_loadT = a ? Math.round((b - a) / 100) : '';
    }
    return s_loadT;
}
So, basically, the function records s_loadT once and only once. The first call (way at the top of the page) sets the value, and any subsequent call to the function returns that same value, since it has been persisted in window.s_loadT.
Scratch your head a bit and ask the obvious question, "So what does this measure anyway?" Best case, it measures the difference between window.performance.timing.requestStart and the timestamp when the function was first called. Worst case, it measures the difference from a timestamp set in the head of the document by JavaScript (and that difference could very well be a negative number). And if 'a' resolves to 0, s_loadT is just set to an empty string, so you get no measurement at all.
If you are following the directions and calling getLoadTime() up high in the document (DTM page top rule), you're really just measuring how long it takes to fire a page top rule. If you put the first call at the top of your s_code.js, you're just measuring how long it takes to load (and execute) s_code.js.

How to export all issues and their contents (full content) to Excel in JIRA?

At the moment I am able to download only the fields, or I can export the contents of only one particular issue to Word.
JIRA: using the latest version.
Logged in as Administrator.
I searched Google but couldn't find anything.
Go to Issues and make a filter that returns all the issues you want
In the top right corner, there is a Views menu item. Open it.
Select the Excel (all fields) option to export all issues to Excel
@user1747116 you can use the method described by Whim, but you do not get all of the information out of an issue.
You do have a couple of options:
If you are versed in XML, you can go to System -> Import / Export -> Backup, which does a full backup of your JIRA instance in XML, as described in this help post.
You can use the method described by Whim of simply going to the issues list and clicking the export function, but ALSO, before doing that, install one of the add-ons that allow you to export comments as well. Plug-ins specifically mentioned in this help article are "All Comments", "JIRA Utilities", and "Last Comment".
Write a Crystal Report formatted in a way that exports cleanly into Excel. We have done this to make the information accessible to those not versed in SQL.
Write an SQL query that goes directly at the database, and save the output to CSV. Note that between JIRA 4 and 6 the schema changed and we had to redo several of our queries, so keep this in mind. Here is one to get you started in JIRA 6. Note that the time log is in [worklog], file attachments are in [fileattachment], and comments are in [jiraaction]. Each of these tends to have multiple entries per issue, so you will need further joins to get them all into the same query (see the comment sketch after the query below). This is also useful know-how if you are doing it in a Crystal Report and then exporting to Excel.
SELECT TOP 1000 _JI.ID
,_JI.pkey
,_JI.PROJECT
,_PRJ.pname
,_JI.REPORTER
,_JI.ASSIGNEE
,_JI.issuetype
,_IT.pname
,_JI.SUMMARY
,_JI.DESCRIPTION
,_JI.ENVIRONMENT
,_JI.PRIORITY
,_PRI.pname
,_JI.RESOLUTION
,_RES.pname
,_JI.issuestatus
,_IS.Pname
,_JI.CREATED
,_JI.UPDATED
,_JI.DUEDATE
,_JI.RESOLUTIONDATE
,_JI.VOTES
,_JI.WATCHES
,_JI.TIMEORIGINALESTIMATE
,_JI.TIMEESTIMATE
,_JI.TIMESPENT
,_JI.WORKFLOW_ID
,_JI.SECURITY
,_JI.FIXFOR
,_JI.COMPONENT
,_JI.issuenum
,_JI.CREATOR
FROM jiraissue _JI (NOLOCK)
LEFT JOIN PROJECT _PRJ ON _JI.Project = _PRJ.ID
LEFT JOIN ISSUESTATUS _IS ON _JI.issuestatus = _IS.ID
LEFT JOIN ISSUETYPE _IT ON _JI.issuetype = _IT.ID
LEFT JOIN PRIORITY _PRI ON _JI.Priority = _PRI.ID
LEFT JOIN RESOLUTION _RES ON _JI.Resolution = _RES.ID
Note: You could get rid of the redundant fields, but I left both in so you can see where they came from. You can also add a WHERE clause for a single issue ID, or limit the output to a particular project. The TOP 1000 only returns the first 1000 results; remove it if you are comfortable with the query returning everything. (We have tens of thousands of issues in our database, so I put that in there.)
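To illustrate the extra joins mentioned above, here is a sketch that pulls comments out of [jiraaction]. The column names (issueid, actiontype, actionbody, author, created) are what our JIRA 6 schema used, so verify them against your own database; expect one row per comment, so an issue with several comments will repeat.
SELECT _JI.pkey
,_JI.SUMMARY
,_JA.author
,_JA.created
,_JA.actionbody
FROM jiraissue _JI (NOLOCK)
LEFT JOIN jiraaction _JA ON _JA.issueid = _JI.ID AND _JA.actiontype = 'comment'
ORDER BY _JI.pkey, _JA.created
The same pattern applies to [worklog] and [fileattachment], each of which also joins back to jiraissue on its issue id column.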
Exporting all details to Excel using the built-in export feature is simply impossible. The Excel export will not give you the comments, attachments, change history, etc. As other answers mention, the Excel output produced by JIRA is in fact an HTML file, which works in many situations, but not if you need a precise representation of the data.
Our company built a commercial add-on called the Better Excel Plugin, which generates native Excel exports (in XLSX format) from JIRA data.
It is a powerful alternative to the built-in feature, with major advantages and extensive customization. It supports real Excel analysis functionality, including formulas, charts, pivot tables, and pivot charts.
This was my solution.
I downloaded the file like this:
"Issues" > "Search for Issues"
"Export" button > "Excel (HTML, All Fields)"
After downloading the file, Excel (Microsoft Office Professional Plus 2013) would not open the downloaded Jira.xls file for me.
I worked around that by doing the following:
Change the ".xls" to ".html"
Open the new "Jira.html" file in Chrome
Highlight/Select the table contents of the exported Jira Issues
Copy and then paste into a new Excel file
The Better Excel add-on is great (we use it) but it cannot do attachments (AFAIK). Another add-on, JExcel Pro, can.

Elmah log file deletion: manually, or is there a setting?

How do I delete the log files that Elmah generates on the server?
Is there a setting within Elmah that I can use to delete log files? I would prefer to specify some criteria (e.g. log files that are older than 30 days).
Or should I write my own code for that?
You can set the maximum number of log entries, but there isn't a native function for clearing out logs older than a given date. It's a good feature request, though!
If you are storing your error logs in memory, the maximum number stored is 500 by default, and this requires no additional configuration. Alternatively, you can define the number using the size attribute:
<elmah>
<errorLog type="Elmah.MemoryErrorLog, Elmah" size="50" />
</elmah>
Setting a fixed size is obviously more important for in-memory or XML-based logging, where resources need to be closely managed. You can define a fixed size for any log type though.
This SQL script deletes all rows except the newest "size" rows:
DECLARE @size INT = 50;
DECLARE @countno INT;
SELECT @countno = COUNT([ErrorId]) FROM [dbo].[ELMAH_Error];
/*
-- Optional: preview the rows that will be deleted
SELECT TOP (@countno - @size)
[ErrorId]
,[TimeUtc]
,[Sequence]
FROM [dbo].[ELMAH_Error]
ORDER BY [Sequence] ASC
*/
-- Delete everything except the newest @size rows
DELETE FROM [dbo].[ELMAH_Error]
WHERE [ErrorId] IN (
SELECT t.[ErrorId] FROM (
SELECT TOP (@countno - @size)
[ErrorId]
,[TimeUtc]
,[Sequence]
FROM [dbo].[ELMAH_Error]
ORDER BY [Sequence] ASC
) t
)
/*
-- Print remaining rows
SELECT * FROM [dbo].[ELMAH_Error]
ORDER BY [Sequence] DESC
*/
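If you specifically want the "older than 30 days" criterion from the question rather than keeping a fixed number of rows, a date-based delete against the same table should work. This is only a sketch, assuming you log to SQL Server with the standard ELMAH_Error schema, which stores the timestamp in [TimeUtc]:
-- Delete SQL-logged errors older than 30 days
DELETE FROM [dbo].[ELMAH_Error]
WHERE [TimeUtc] < DATEADD(DAY, -30, GETUTCDATE());
Scheduled as a SQL Server Agent job, that gives you the periodic cleanup the question asks about. It only helps for the SQL error log, though; for the XML-based log you would still have to delete the files yourself, as described below.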
In response to what Phil Wheeler mentioned above: on a current MVC 4 project, we have an MVC area called SiteAdmin. This area is responsible for all site administration duties, including Elmah.
To get around the lack of delete functionality, I implemented a function that deletes all of the current log entries in Elmah (we're using the XML-based version).
The SiteAdmin index view has two actions:
View Error Log - Opens the Elmah UI in a new window.
Clear Error Log - Presents a confirmation popup, then deletes all entries if the user confirms.
If anyone needs the code as an example, I'd be happy to send it along.
My mechanics could be modified pretty easily to provide selective deletion by criteria if needed (by date, by status code, etc.).
The point of my answer here is that you can provide the delete functionality on your own AND not fork the open-source code of the Elmah project.
At least within an MVC project, you can go to the App_Data/Elmah.Errors directory and delete the error XML files that you want to remove.
