Adobe Analytics | Merge data from multiple report suites - adobe-analytics

We are capturing data for several consumer sites in separate report suites.
Is it possible to merge all of this data into a parent report suite without adding that parent report suite's ID to the s_account variable?
For example:
Site 1 uses report-suite1:
s_account = "report-suite1";
Site 2 uses report-suite2:
s_account = "report-suite2";
Instead of using
s_account = "report-suite1,report-suite2";
is it possible to merge the data into a third, "virtual" report suite from the reporting console itself?

The only way you can route data to a separate, fully-fledged report suite is either via JavaScript (e.g. setting s_account as you have shown in your post) or by asking Adobe to create a VISTA rule.
You didn't state your reasons for not wanting to put a "global" rsid into your JavaScript code. Is it because you don't have the technical resources/ability to do it? If so, and if you want a full third rsid for all the data to go to, then you can ask Adobe to create a VISTA rule. It should be fairly easy for them to set up, but they will charge you for it, and I believe they will create one rule for each report suite. I don't generally recommend going this route unless you really have to, though, mostly because of the cost, but also because you don't have personal visibility into it.
Alternatively, if you do have the technical resources to update the JavaScript code, but the cost of throwing another rsid into the mix is an issue (because of the extra server calls), then you may want to consider replacing all of your report suites with a single global report suite, e.g.
s_account='report-global';
Then, create a Virtual Report Suite for each site. You can go to Components > Virtual Report Suites to set them up. The TL;DR is that you create them by pointing at your report-global rsid as the source and then creating a segment based on something unique to the site (e.g. the domain, or maybe an eVar with a site-specific value).
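As a rough illustration (the eVar number and the site values below are placeholders, not a prescribed setup), the per-site tagging could look something like this:
// Every site sends to the same global report suite, and a site identifier
// is set in an eVar so each Virtual Report Suite can segment on it.
// eVar1 is only a placeholder here; use whichever eVar you have free.
s_account = "report-global";
s.eVar1 = "site1";   // "site2" on the second site, and so on
Each Virtual Report Suite is then simply the report-global source plus a segment where that eVar (or the domain) equals the site's value.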
The major downside to going the Virtual Report Suite route is that historical data from your previous report suites will not be available in the same place as the new global report suite and its virtual report suites. But it's a one-time migration thing, and the historical data won't be lost; you'll just have some extra work on your end referencing it in the old rsids, especially if you want to compare historical data to current data in the new (virtual) rsids.
The second major thing to consider is unique value limits. I'm not sure how much traffic your sites get or how many unique values your variables receive, but there is a monthly unique value limit you may have to consider with all of the sites going to the same report suite. Beyond tricks to make values less unique on a case-by-case basis (e.g. removing the query string from URLs), there isn't a good way to solve this except to stick with separate rsids. Well, Adobe will increase the unique value limit on certain variables if you ask them, but it will cost you.
Another alternative to consider is a Rollup report suite. Go to Admin > Report Suites, where your current report suites are listed; to the left you should see Rollups with an Add link next to it. This lets you create a Rollup report suite made up of data from one or more report suites.
Note though that a Rollup report suite is not the same as a full-fledged report suite. Refer to Adobe's documentation for full details and limitations, but the main benefit is that it won't cost you anything except the couple of minutes it takes to set up in the interface. As for the limitations, the main points of note are that you only get aggregated data, data is not deduplicated between the rsids, and many reports are limited or not available. In practice, I rarely see anybody actually go this route because it's too limited. But hey, maybe it's good enough for you.

Related

Allow User to Extract Data Dumps From DW

We use Synapse in Azure as our warehouse and create reports in Power BI for our users on top of it. We currently have a request to move all of the data dumps from our production system onto our warehouse DB, as some of them cause performance issues in production when run. We've been looking to redo these as reports in Power BI; however, in some instances we still need to provide the "raw" data in CSV/Excel format. This has thrown up an issue, as some of these extracts are above 150k rows, so we can't use Power BI to provide the extract because of its limit on the number of rows it can export.
Our solution would be to build a process that runs against the DB and spits out a file into SharePoint for the user to consume, which we can do; however, we're unsure how to provide a method for the user to trigger the extract. One way I was thinking of doing it would be Power Apps, but I'm wondering if someone here can suggest an easier way. I just need to provide pages with various buttons that trigger extracts from Azure to SharePoint when clicked, which can be controlled by security in some way. Any advice would be appreciated.
Paginated report export doesn't have that row limit.
See, e.g.:
https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-automate-paginated-integration
Or you can use an ADF Copy activity to create .csv extracts.

Creating a structured Jenkins Failing Test Report

The situation right now:
Every Monday morning I manually check the JUnit results of the Jenkins jobs that ran over the weekend; using the Project Health plugin I can filter on the timeboxed runs. I then copy-paste this table into Excel and go over each test case's output log to see what failed, noting down the failure cause. Every weekend gets another tab in Excel. All this makes traceability a nightmare and involves time-consuming manual labor.
What I am looking for (and hoping that already exists to some degree):
A database that stores all failed tests for all jobs I specify. It parses the output log of a failed test case and, based on some regex, applies a 'tag', e.g. 'Audio' if a test regarding audio is failing. Since everything is in a database, I could build or use a frontend that can apply filters at will.
For example, if I want to see all tests regarding audio failing over the weekend (over multiple jobs and multiple runs) I could run a query that returns all entries with the Audio tag.
I'm OK with manually tagging failed tests and their causes, as well as writing my own frontend. Is there a way (the Jenkins API, perhaps?) to grab the failed tests (JUnit format, via the Jenkins plugin) so I can create such a system myself if it does not exist?
A good question. Unfortunately, it is very difficult in Jenkins to get such "meta statistics" that span several jobs. There is no existing solution for that.
Basically, I see two options for getting what you want:
Post-processing Jenkins-internal data to get the statistics that you need.
Feeding a database on-the-fly with build execution data.
The first option basically means automating the tasks that you do manually right now.
you can use external scripting (Python, Perl, ...) to process Jenkins-internal data (via the REST or CLI APIs, or by directly reading the on-disk data); a sketch of the REST approach is shown at the end of this answer
or you run Groovy scripts internally (which will be faster and more powerful)
It's the most direct way to go. However, depending on the statistics that you need and on your requirements regarding data persistence, you may want to go for...
The second option: more flexible and completely decoupled from Jenkins' internal data storage. You could implement it by
introducing a Groovy post-build step for all your jobs
having that script parse the job results and put the data of interest into a custom, external database
You'd then get your statistics by querying that database.
Typically, you'd start with the first option. Once requirements grow, you'd slowly migrate to the second one (e.g., by collecting internal data via explicit post-processing scripts, putting that data into a database, and then running queries on it). You'll want to keep this migration phase as short as possible, as it eventually requires the effort of implementing both options.
You may want to have a look at couchdb-statistics. It is far from a perfect fit, but it at least seems to do part of what you want to achieve.
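To make the first option more concrete, here is a minimal sketch of pulling failed JUnit cases over the Jenkins JSON API and tagging them with regexes. The server URL, job name, credentials and tag patterns are placeholders, and the field names are the ones the JUnit plugin's testReport endpoint exposes, so details may differ in your setup:
// Sketch only: fetch one build's JUnit report, keep the failures, tag by regex.
// Assumes Node 18+ (global fetch). Replace URL, job and credentials with your own.
const JENKINS = 'https://jenkins.example.com';
const JOB = 'my-job';
const AUTH = 'Basic ' + Buffer.from('user:apitoken').toString('base64');

const TAGS = [
  { tag: 'Audio',   pattern: /audio/i },
  { tag: 'Network', pattern: /timeout|connection refused/i },
];

async function failedTests(build = 'lastCompletedBuild') {
  const res = await fetch(`${JENKINS}/job/${JOB}/${build}/testReport/api/json`,
    { headers: { Authorization: AUTH } });
  if (!res.ok) throw new Error(`Jenkins returned ${res.status}`);
  const report = await res.json();

  const failures = [];
  for (const suite of report.suites ?? []) {
    for (const c of suite.cases ?? []) {
      if (c.status === 'FAILED' || c.status === 'REGRESSION') {
        const text = `${c.errorDetails ?? ''}\n${c.errorStackTrace ?? ''}`;
        const tags = TAGS.filter(t => t.pattern.test(text)).map(t => t.tag);
        failures.push({ suite: suite.name, test: c.name, tags });
      }
    }
  }
  return failures;
}

failedTests().then(rows => console.log(rows));
The rows returned here are what you would insert into your external database; looping over a range of build numbers instead of lastCompletedBuild would cover all of the weekend's runs.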

Test suite to generate page faults and LLC miss events

I need to profile a virtual machine's memory accesses in terms of the number of page faults generated per second and the number of last-level cache (LLC) misses encountered per second. Is there a standard test suite that helps me achieve this?
Below I describe the exact scenario I need to achieve:
Run a program / test suite on a virtual machine to generate an enormous number of page faults.
Run a program / test suite on a virtual machine to generate a large number of last-level cache misses.
Monitor the number of page faults per second and last-level cache misses per second on the virtual machine.
Monitor the corresponding number of page faults and last-level cache misses on the hosting bare-metal machine.
Beyond this, there is a set of analysis results I need to generate.
Query 1:
Is there a standard test suite which helps me achieve my objective? Please point out a reference if so. I browsed through the SPEC benchmarks, but I did not find anything of much use for my work.
Query 2:
If there is no such suite, is there a way I can write a program to emulate the scenario described above?
Any pointers in either direction are appreciated.
Thanks!
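A minimal sketch of the kind of program Query 2 describes, purely as an illustration rather than a standard suite (the buffer size and iteration count are arbitrary):
// Allocate a buffer far larger than a typical LLC and read it at random offsets,
// so most reads should miss the LLC and the first touch of each freshly allocated
// page triggers (minor) page faults. Sustained major faults would need a working
// set larger than available RAM. Watch counters on guest and host, e.g. on Linux:
//   perf stat -e page-faults,LLC-load-misses node thrash.js
const MiB = 1024 * 1024;
const buf = new Float64Array((1024 * MiB) / 8);  // 1 GiB of doubles
let sink = 0;
for (let i = 0; i < 200_000_000; i++) {
  const idx = Math.floor(Math.random() * buf.length); // random index defeats prefetching
  sink += buf[idx];                                   // likely an LLC miss
}
console.log(sink); // keep the loop from being optimized away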

Acceptance Tests for a Windows Service

I'm writing a Windows service that processes a number of different RSS news feeds at regular intervals. These news items will be saved into our database and associated with different objects in the system.
Although there is a set specification of what needs to happen, there is no UI component for the customer to verify.
What's the best way to write acceptance tests for something like this?
Should I create some simple web pages that display a summary of data that needs to be verified?
Since the data is stored in a database, the customer can verify it by reading the database with an IDE or by dumping the data to Excel/CSV.
I would recommend against doing a lot of extra work to make it possible for them to verify the results because they may end up testing the verification procedure more than the real underlying program.
For internal testing, we often rely on logging. We tell testers which log entries to look for to identify good/bad results.
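As a rough sketch of that log-based check (the log path and the message format here are made up for illustration, not an existing convention):
// Scan the service log and count how many items were saved per feed, so a tester
// can compare the counts against the source feeds. Assumes lines like:
//   Saved item 'Some headline' from feed 'BBC Tech'
const fs = require('fs');
const log = fs.readFileSync('C:\\logs\\FeedService.log', 'utf8');
const counts = {};
for (const line of log.split(/\r?\n/)) {
  const m = line.match(/Saved item '.+' from feed '(.+)'/);
  if (m) counts[m[1]] = (counts[m[1]] ?? 0) + 1;
}
console.log(counts); // per-feed counts to spot-check against the feeds themselves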

Using MSTest as site/environment monitoring tool

We currently use HP SiteScope for monitoring synthetic transactions across some of our web apps. This works pretty well, except that the licensing cost for each synthetic transaction makes it prohibitive to ensure adequate coverage across our applications.
So, an alternative would be to use SiteScope's URL monitoring, which can basically call a URL and then provide some basic checks for certain strings. With that approach, I'd like to create a page that either calls a bunch of pages or somehow taps into an MSTest group to run tests.
In the end, I'd like a set of test cases that can be run against multiple environments for production verification, uptime, status checks, etc.
Thanks,
Matt
Have you taken a look at System Center Operations Manager 2007?
I'm just getting started, but it appears to do what you are describing in your question.
We are looking to monitor our data center and a web application; from the few things I have found on the web, it looks like it will fit our needs.
Update
I've since moved to Application Insights. A great overview can be found here: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/
There are two methods one can use: a simple ping, or recording a multi-step synthetic user "experience". Basically, you act as a user and, using IE and a Visual Studio Web Test project, you record yourself navigating around your site and then upload that file to Azure.
For example, I record logging in, navigating a few pages, and then logging out. As long as all of those events happen in a timely manner, the site is in a good operating state.
If the tests fail (take too long to respond, for example), I'll get an email alerting me that something isn't quite right.
