Scheduling a Cognos report based on Dates from a data warehouse table - cognos-11

We have a report already written for Student Services, but we need to schedule it for specific times in the term; these dates come from the date table in our data warehouse. For example, we need it on the first day of the term (one of the MANY dates defined in our date table) and again two weeks prior to the first day of the term. If the current date matches either of these dates, the report should run; otherwise it should not. Should I use trigger-based Cognos reporting? Is there a way to do it with regular Cognos scheduling? Or should I drive it from an external (Oracle) stored procedure?

We were able to set up an Event Studio agent that performs a daily check to see whether the current date is 14 days before the start of term (we had to add that flag to the date table) or 2 weeks after the start of term (also in our date table). We set up the run condition, set up tasks for the required reports, then set up the email. We could not get Run Agent working inside Event Studio (IBM was singularly unhelpful here), so we scheduled the agent in Cognos. It runs like a charm.
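For reference, the run condition boils down to a date-table lookup. A hedged sketch of that check in Oracle SQL, with all table and column names hypothetical (the agent's tasks fire when the condition returns a non-zero count):

```sql
-- Does today match either trigger date flagged in the date table?
SELECT COUNT(*) AS should_run
FROM   dw_date_dim                       -- hypothetical date table
WHERE  calendar_date = TRUNC(SYSDATE)
AND    (   is_14_days_before_term = 'Y'  -- hypothetical flag columns
        OR is_2_weeks_after_term_start = 'Y');
```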

Related

Spreadsheet Data Not Refreshing Prior to Running Google Script

I have a spreadsheet which imports stock prices from Google Finance and other sources, then calculates portfolio value.
There is also a script which saves daily valuation data.
This has been running well for nearly 2 years, but since early May it seems to be saving the same data every day, as if it is not refreshing the stock prices.
Of course, if I open the sheet manually and run the script, it all works OK.
If I don't open the sheet, the script now saves unrefreshed stock prices. What's the best way to force a refresh?
You can use a time-driven trigger in your Apps Script project to run your script function daily or hourly. Triggers are one of the most powerful features of Apps Script, and far easier to set up than an RPA tool.
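For example, a minimal Apps Script sketch; saveDailyValuation is a hypothetical name for your existing daily-save function, and SpreadsheetApp.flush() is used to apply any pending spreadsheet changes before values are read:

```javascript
// Run once manually to install a daily time-driven trigger.
function installDailyTrigger() {
  ScriptApp.newTrigger('saveDailyValuation') // your function's name here
    .timeBased()
    .everyDays(1)
    .atHour(6) // fires between 6am and 7am in the script's time zone
    .create();
}

function saveDailyValuation() {
  SpreadsheetApp.flush(); // apply pending changes before reading prices
  // ... your existing logic that reads prices and saves the daily valuation ...
}
```

Note that formulas like GOOGLEFINANCE may still recalculate on their own schedule while the sheet is closed, so if flushing is not enough you may need to fetch the prices directly in the script (e.g. with UrlFetchApp) instead of reading them from cells.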

BigQuery API Connection in Sheets (Potential Caching)

My team is currently running a Flask API script from the Google Cloud Console that updates a BigQuery table every 15 minutes.
We are using Sheets to pull the results from that table and display them with visual tools in Sheets.
This full Sheets ecosystem looks like this:
A Sheets document has a linked project containing a JS-like file that has been saved as a shared library.
All other sheets in the ecosystem that are used for displaying tables use the above-mentioned shared library to work with BigQuery.
The shared library facilitates the connection to BigQuery and the uploading of its statically formatted response to a fixed location in the sheet, which then updates the visuals of the sheet.
Outside the shared library, each sheet has its own timezone, BigQuery dataset name, and BigQuery table name defined as variables; the shared library uses these variables to fetch the correct BigQuery table's information.
All of the sheets have been checked to confirm the BigQuery connection API is turned on, and all are verified to be linked to the same Cloud Console project where the BigQuery dataset is held.
This is the issue I am facing:
For a few weeks all was working as expected in this ecosystem. Then one day, inexplicably, the BigQuery table as pulled into the sheet (which updates every minute) completely stopped updating. I first went to check the Cloud Console server where my Flask project runs (outside of Sheets) to update the BigQuery source table. The Flask server was on time and the BigQuery data was up to date.
The data tracks metrics throughout the day, so I can tell the BigQuery table is updated if it has data for the current hour. So if it's 4:36 PM, I can expect the table to have data for around 4:15 PM at the very least. In this case the BigQuery table was actually up to date, but the sheet was not (4-5 hours behind).
This prompted me to check the BigQuery connection in Sheets and log the results. This is where things got very weird. I was getting a snapshot of the BigQuery table from 4 hours earlier, even though, as I said, I had manually checked and the actual BigQuery table was up to date.
I fixed this error by copying and pasting my exact code into a new function and renaming it. When I ran the new function, which had identical code but a different function name, it no longer pulled an old "cached" version of the BigQuery table but instead pulled the correct values.
I can only imagine that there is some sort of caching going on in the native Sheets-BigQuery API integration. I am also confused by the fact that I have 9 of these environments running in parallel and only 2 of them were affected. They all run exactly the same code and all have been verified to be pulling from correctly updated BigQuery tables, yet only 2 randomly fell behind, pulling the same table snapshot for 4 hours. This looks entirely like a caching problem, but I have no way to actually test, as I have no insight into the BigQuery API library that is native to Sheets.
What I would like to know is: can I somehow add a cache buster or something to the BigQuery request to be sure this doesn't happen?
I have the sheet checking for updates on a cron every 1 minute, to make sure it updates as often as possible, as this is a realtime visual-update application. Would lowering the cron frequency help with this caching issue?
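One concrete thing to try, assuming the shared library queries BigQuery through the Apps Script advanced BigQuery service: the query request accepts a useQueryCache flag that disables BigQuery's server-side result cache. A hedged sketch (project, dataset, table, and column names are placeholders):

```javascript
// Query BigQuery with the advanced service, bypassing the query cache.
function fetchLatestRows() {
  var request = {
    query: 'SELECT * FROM `PROJECT_ID.DATASET.TABLE` ' +
           'ORDER BY event_time DESC LIMIT 100', // placeholder column
    useLegacySql: false,
    useQueryCache: false // do not serve cached query results
  };
  var response = BigQuery.Jobs.query(request, 'PROJECT_ID');
  var rows = response.rows || [];
  return rows.map(function (row) {
    return row.f.map(function (cell) { return cell.v; });
  });
}
```

If the staleness comes from somewhere other than BigQuery's query cache (for example inside the Sheets runtime itself), this flag won't help, but it rules out the server side cheaply; changing the cron frequency seems unlikely to matter either way.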

how should I do the scheduling

I have never written a Windows service or any scheduler before, so I couldn't figure out what to do.
I need to write a Windows service. There is a Report table in my DB, and I need to check it every day to see if new reports have been added. Reports have recipients and time settings, such as the 15th of every month at 14:00, daily at 12:35, or weekly on Wednesdays at 13:00. I need to send emails with the reports at these times.
I have decided to use Quartz.NET, but there are a couple of things I don't understand. I think I will have 2 jobs: one that checks the DB every day to see if there are new reports that users want. Then, when I find new reports, do I create some number of new jobs with new triggers based on the times in the DB? Do I create those triggers inside the daily-check job? I didn't understand this part.
Also, when the time of a report is updated, or a report is deleted, do I need to remove the job and its trigger from the scheduler? I'd appreciate the help. I am using VS 2015 with C#.
And when I write the Windows service, do I just start this Quartz scheduler I have set up? Sorry, I couldn't understand what I have read so far.
I would recommend Hangfire over Quartz.NET:
http://hangfire.io/
It's a more modern approach to scheduled jobs. In the past I've used Quartz.NET as well. First of all, using Hangfire requires no Windows service. The jobs are persistent, and retries are built in. The syntax is also easier.
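To illustrate, a minimal Hangfire sketch; EmailService, the job IDs, and the schedules are hypothetical stand-ins for the contents of your Report table:

```csharp
using Hangfire;

public class EmailService
{
    public void SendReport(int reportId)
    {
        // ... look up recipients and send the report email ...
    }
}

public static class ReportScheduling
{
    // Call once at application startup, after configuring Hangfire storage.
    public static void Register()
    {
        // 15th of every month at 14:00
        RecurringJob.AddOrUpdate<EmailService>(
            "monthly-report-42", svc => svc.SendReport(42), Cron.Monthly(15, 14));

        // Daily at 12:35
        RecurringJob.AddOrUpdate<EmailService>(
            "daily-report-7", svc => svc.SendReport(7), Cron.Daily(12, 35));
    }
}
```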
I've used Hangfire and it's wonderful and simple.
However, Hangfire does not support Oracle DB so far. Also, Quartz provides more flexibility in terms of scheduling (calendars, end dates, etc.).
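For the Quartz route the question describes, a hedged Quartz.NET 3.x sketch of the two-job pattern: a daily sync job re-reads the Report table and re-creates one cron-triggered job per row. Report, ReportDb, and SendReportEmailJob are hypothetical stand-ins; deleting and re-adding the job is also the simplest answer to the update/delete question, since replacing a job picks up changed times.

```csharp
using System.Threading.Tasks;
using Quartz;

public class Report { public int Id; public string CronExpression; }

public static class ReportDb // placeholder data access
{
    public static Task<Report[]> LoadReportsAsync() =>
        Task.FromResult(new Report[0]); // read from your DB in real code
}

public class SendReportEmailJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        int reportId = context.MergedJobDataMap.GetInt("reportId");
        // ... look up recipients and send the email for this report ...
        return Task.CompletedTask;
    }
}

public class SyncReportsJob : IJob
{
    // Scheduled to run once a day; refreshes the per-report schedules.
    public async Task Execute(IJobExecutionContext context)
    {
        IScheduler scheduler = context.Scheduler;
        foreach (var report in await ReportDb.LoadReportsAsync())
        {
            var jobKey = new JobKey($"report-{report.Id}", "reports");
            await scheduler.DeleteJob(jobKey); // drop any stale schedule first
            var job = JobBuilder.Create<SendReportEmailJob>()
                .WithIdentity(jobKey)
                .UsingJobData("reportId", report.Id)
                .Build();
            var trigger = TriggerBuilder.Create()
                .WithCronSchedule(report.CronExpression) // e.g. "0 0 14 15 * ?" = 14:00 on the 15th
                .Build();
            await scheduler.ScheduleJob(job, trigger);
        }
    }
}
```

In the Windows service itself you would, roughly, build a scheduler with StdSchedulerFactory in OnStart, schedule SyncReportsJob with a daily trigger, and call Start(); OnStop shuts the scheduler down.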

Jira - report on bug resolution per user

Is there a way in JIRA to run a report to see how many issues were "resolved" by which users, and how quickly after each issue was reported? It needs to be per user.
Thanks
You can build arbitrary reports yourself with a Report Plugin Module, but my experience is that it's quite a hassle. Note that plugins only work in self-hosted Jira installations, not in Atlassian's hosted service.
Another way would be to leverage the REST API in order to fetch worklogs and process them externally.
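For the REST route, a hedged C# sketch against Jira's /rest/api/2/search endpoint. The base URL, credentials, and JQL are placeholders, and it uses the assignee as a rough proxy for who resolved each issue; exact attribution would need the issue changelog.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class JiraResolutionReport
{
    static async Task Main()
    {
        var client = new HttpClient();
        var credentials = Convert.ToBase64String(
            Encoding.UTF8.GetBytes("user:api-token")); // placeholder
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        // Resolved issues only; request just the fields we need.
        var url = "https://jira.example.com/rest/api/2/search" +
                  "?jql=resolution%20is%20not%20EMPTY" +
                  "&fields=assignee,created,resolutiondate&maxResults=100";
        var json = JObject.Parse(await client.GetStringAsync(url));

        foreach (var issue in json["issues"])
        {
            var assigneeToken = issue["fields"]["assignee"];
            var assignee = assigneeToken != null && assigneeToken.Type != JTokenType.Null
                ? (string)assigneeToken["displayName"]
                : "(unassigned)";
            // Jira returns ISO 8601 timestamps for these fields.
            var created = (DateTime)issue["fields"]["created"];
            var resolved = (DateTime)issue["fields"]["resolutiondate"];
            Console.WriteLine($"{assignee}: {(resolved - created).TotalHours:F1} h to resolve");
        }
    }
}
```

Beyond 100 results you would page with the startAt parameter; the per-user grouping can then be done in whatever tool processes the output.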
Your requirement needs some clarification, I think. It seems you want to see the number of issues that were moved from some status to another, or perhaps the last time the resolution field was set to a value (any value?), and then group those results by JIRA user.
A second requirement is to track the time from issue creation to the last time the resolution field was set, again grouped by user.
I'd try the Vertygo SLA plugin from Valiantys for this. It lets you define custom fields that track the time between two JIRA events, such as a field update or a status change. I believe it can sum those fields and display grouped results in the JIRA statistics and two-dimensional gadgets.
Bear in mind that reports grouped by user often become quite large as the number of users increases.

Managing timezones for different users

I have a website with many users from different countries. Users can schedule a task based on their timezone. A cron job runs on the server every minute and executes a script which checks whether any user has a scheduled task due; if so, it runs the task.
Since my server is based in the US, the script executed by the cron job uses the US timezone. What do I have to do in my script so that it executes each user's task based on the user's timezone instead of the server's timezone?
Thanks in advance for any ideas
Look up the user's timezone.
Compute the current time in the user's timezone.
For each job, look up the last time it was run and compute the next time it should run.
For any job whose next run time is now or in the past, run that job and update the record of the last time it was run.
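A hedged C# sketch of that loop using NodaTime (which the answer below also recommends); UserJob, the job source, and RunJob are hypothetical:

```csharp
using System.Collections.Generic;
using NodaTime; // NodaTime 2.x API

public class UserJob
{
    public string TimeZoneId;      // e.g. "Europe/London"
    public LocalTime RunAt;        // e.g. 08:00, as entered by the user
    public LocalDate? LastRunDate; // local date of the last run
}

public static class Scheduler
{
    // Called by cron every minute with all users' jobs.
    public static void Tick(IEnumerable<UserJob> jobs)
    {
        Instant now = SystemClock.Instance.GetCurrentInstant();
        foreach (var job in jobs)
        {
            DateTimeZone zone = DateTimeZoneProviders.Tzdb[job.TimeZoneId];
            ZonedDateTime local = now.InZone(zone);
            bool dueNow = local.TimeOfDay >= job.RunAt
                          && job.LastRunDate != local.Date; // not run yet today
            if (dueNow)
            {
                RunJob(job);                  // hypothetical task execution
                job.LastRunDate = local.Date; // persist this in real code
            }
        }
    }

    static void RunJob(UserJob job) { /* ... */ }
}
```

The key point is that the server's own timezone never enters the comparison: everything is derived from a UTC instant plus the user's zone.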
I did something similar on the iPhone a few months ago.
My solution was to capture the time as a string. So if the user selected 8am, I would just capture 08:00 along with their time zone, e.g. Europe/London.
Every 5 minutes or so on my server, I could then convert this 08:00 into the current UTC time based on the timezone. If this time was "present", I would carry out a check on the user's transport status and issue alerts.
To help me with the time zones, I used NodaTime: http://noda-time.blogspot.co.uk/
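For the conversion step described here, a short hedged NodaTime sketch (08:00 and Europe/London are the example values from the answer):

```csharp
using NodaTime;
using NodaTime.Text;

// Resolve the stored "08:00" in the user's zone to a UTC instant for today.
LocalTime runAt = LocalTimePattern.CreateWithInvariantCulture("HH:mm")
    .Parse("08:00").Value;
DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/London"];
LocalDate today = SystemClock.Instance.GetCurrentInstant().InZone(zone).Date;
Instant dueUtc = today.At(runAt).InZoneLeniently(zone).ToInstant();
// Compare dueUtc with the current instant on each 5-minute tick.
```

InZoneLeniently handles the awkward DST cases where a local 08:00 is skipped or repeated.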
