Mondrian/Saiku - Vertica query translation error

Hi, I'm trying to use Saiku with Vertica.
Vertica has the concept of database -> schemas -> tables, so in the XML file, instead of just the table name, I am specifying schemaName.tableName:
<?xml version="1.0"?>
<Schema name="Sales" metamodelVersion='3.6' quoteSql='false'>
    <Cube name="Sales" defaultMeasure="sales">
        <Table name="schemaName.factName"></Table>
        <Dimension name="date_mysql">
            <Hierarchy hasAll="true">
                <Level name="date" column="date" type="Date" uniqueMembers="false"/>
            </Hierarchy>
        </Dimension>
        <Measure name="sales" aggregator="sum" column="sales" formatString="#,###" />
        <Measure name="orders" aggregator="sum" column="orders" formatString="#,###" />
    </Cube>
</Schema>
This seems to work, and Mondrian is able to pick up the measure and dimension properly. The problem is that the generated SQL query is syntactically wrong:
select "schemaName"."tableName"."date" as "c0"
from "schemaName"."tableName" as "schemaName"."tableName"
group by "schemaName"."tableName"."date"
order by CASE WHEN "schemaName"."tableName"."date" IS NULL THEN 1 ELSE 0 END, "schemaName"."tableName"."date" ASC
There are two problems here:
1. Vertica treats quoted identifiers as case-sensitive, so "tableName" and tableName can resolve to distinct objects. (quoteSql='false' doesn't work, as I am using metamodel 3.6.)
2. Mondrian seems to generate the alias from the table name specified (which here is schema.table), which goes wrong here.
Is there any other way to specify the schema? And how do I get rid of the double quotes?

The Table tag carries a schema attribute as well (thanks to Paul Stoellberger for pointing this out). So:
<Table name="factName" schema="schemaName"></Table>
This takes care of both the aliasing and quoting problems.
http://mondrian.pentaho.com/documentation/xml_schema.php#Table
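For completeness, here is the schema from the question rewritten to use the schema attribute (a sketch using the placeholder names from the question, trimmed to one measure):

```xml
<?xml version="1.0"?>
<Schema name="Sales" metamodelVersion='3.6'>
    <Cube name="Sales" defaultMeasure="sales">
        <!-- schema attribute instead of a dotted table name -->
        <Table name="factName" schema="schemaName"></Table>
        <Dimension name="date_mysql">
            <Hierarchy hasAll="true">
                <Level name="date" column="date" type="Date" uniqueMembers="false"/>
            </Hierarchy>
        </Dimension>
        <Measure name="sales" aggregator="sum" column="sales" formatString="#,###" />
    </Cube>
</Schema>
```

The alias is then derived from the bare table name rather than the dotted string, so the generated FROM clause comes out valid.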


Turning on Saxon performance analysis in eXist-db

I've used the performance analysis tool in Saxon (https://www.saxonica.com/documentation11/index.html#!using-xsl/performanceanalysis) to analyze stylesheets, and it's quite useful. I'd like to do the analysis from within eXist-db rather than from the command line. For one, the performance could be different. But mainly, some stylesheets open documents in eXist-db, and I can't run those from the command line. Is there a way to configure Saxon to output the profile.html document when it's run via eXist-db?
I was hoping there would be an attribute in conf.xml, or attributes that could be sent via transform:transform(), but I don't see any options related to the performance analysis tool.
I can't be certain this will work, but it's worth a try.
The $attributes parameter of eXist-db's transform:transform() function allows you to set Saxon configuration properties. The available properties are listed at https://www.saxonica.com/documentation11/index.html#!configuration/config-features . You could try setting the properties TRACE_LISTENER_CLASS (to "net.sf.saxon.trace.TimingTraceListener") and TRACE_LISTENER_OUTPUT_FILE (to the required "profile.html" output file).
Incidentally, note that the performance is going to be different when you run with a trace listener. The profile generated using -TP is useful because it tells you which templates and functions are accounting for the most time and therefore need attention; the absolute numbers are not important or accurate. The hot spots will almost certainly be the same whether you are running within eXist or from the command line.
I can confirm this did work. Here's the XQuery I used:
let $params :=
    <parameters>
    </parameters>
let $attributes :=
    <attributes>
        <attr name="http://saxon.sf.net/feature/traceListenerClass" value="net.sf.saxon.trace.TimingTraceListener"/>
        <attr name="http://saxon.sf.net/feature/traceListenerOutputFile" value="/tmp/profile.html"/>
    </attributes>
let $serialization := 'method=html5 media-type=text/html indent=no'
return transform:transform($xml, $xsl, $params, $attributes, $serialization)

How to log to 2 instances of the same type of sink (Seq)?

Is this possible? I cannot find a "sink forwarder", where one sink can forward to several other sinks, possibly of the same type.
Serilog's documentation (https://github.com/serilog/serilog/wiki/AppSettings)
clearly states that
NOTE: When using serilog: keys need to be unique.*
so adding the same Seq sink several times doesn't seem to be a good idea.
I'm looking for the same concept as in log4net, where one logger can hold several appenders.
Unfortunately the <appSettings> config provider for Serilog doesn't support this case; the appSettings.json one does, if you're able to use it. Otherwise, configuring the sinks in code (WriteTo.Seq(...).WriteTo.Seq(...)) is the way to go.
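If switching to JSON configuration is an option, a sketch of what that looks like with the Serilog.Settings.Configuration package (server URLs and the API key are placeholders):

```json
{
  "Serilog": {
    "WriteTo": [
      { "Name": "Seq", "Args": { "serverUrl": "http://localhost:5341" } },
      { "Name": "Seq", "Args": { "serverUrl": "http://10.107.14.57:5341", "apiKey": "your-api-key" } }
    ]
  }
}
```

Because WriteTo is an array rather than a set of unique keys, the same sink type can appear any number of times.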
Semi-workaround style of solution:
Put a "read these keys" key in appSettings.
Example 1: Read one key
<add key="SerilogToHttpKeys" value="MyMachineA" />
Example 2 (which solves the problem): Read many keys
<add key="SerilogToHttpKeys" value="MyMachineA, MyLocalMachine, MachineOnTheMoon" />
Both cases point to any number of keys, which are then read via code (see the loop below) and can hence be tweaked without recompiling:
<add key="MyLocalMachine" value="http://localhost:5341/;juzOPqqqqqqqq" />
<add key="MyMachineA" value="http://10.107.14.57:5341/;m8QVnDaqqqqqqqqqqqqq" />
<add key="MachineOnTheMoon" value="http://10.107.14.62:5341/;Ah0tSzqqqqqqqqqqqq" />
Loop over the keys in code - each key points to an HTTP address with an API key, which is used for logging to Seq. Change the structure of each entry and you could log to file etc.
foreach (var aKey in System.Configuration.ConfigurationManager.AppSettings.Get("SerilogToHttpKeys")
    .Split(',') // use ',' as separator
    .Select(s => s.Trim()))
{
    // Each key resolves to an entry of the form "serverUrl;apiKey"
    var fields = System.Configuration.ConfigurationManager.AppSettings.Get(aKey);
    var separator = ';';
    string serverUrl = fields.Split(separator)[0];
    string apiKey = fields.Split(separator)[1];
    loggerConfiguration = loggerConfiguration.WriteTo.Seq(serverUrl: serverUrl, apiKey: apiKey);
}
I use this for logging both to my server and my dev machine at the same time - it's easier to keep the localhost Seq open when errors occur and see if I can find them there instead of logging into the server. However, in case my dev machine is not online, I have the logs on the server as well. Of course, if more than one person accesses the server, licenses are needed for Seq, but in a simple "one dev, one dev machine, one server" setup it works.

Invalid object name '#Results' SSIS when using sp in dataflow

I have a stored procedure that runs fine in SQL Management Studio, but I'm having problems running it in SSIS 2008 R2. If I run it in an Execute SQL Task, it runs fine without any errors, but when I use it as an ADO NET Source in a Data Flow Task, I get an error message:
Invalid object name #Results (Microsoft SQL Server, Error:208)
However when I click Preview, I do get rows of data displayed.
I don't have access rights to modify the stored procedure, so I'm not sure what is going on inside it, but as I said previously, I can run the stored procedure in Management Studio and when it is used in an Execute SQL Task in SSIS.
One of the steps in SSIS is validation of metadata - the contract says we should get an integer and then a character column of size 8. When the data flow database source components (ADO or OLE) attempt to get their metadata, it basically boils down to the first query that is found.
The approach here is the same hack we use with dynamic tables in stored procedures: change the stored procedure (which you've said you cannot do) so that it provides a hint to SSIS about the expected metadata.
CREATE PROCEDURE dbo.Sample
AS
BEGIN
    SET NOCOUNT ON;

    -- Any condition that will never evaluate to true
    IF NULL = NULL
    BEGIN
        -- SSIS will key off of this query even
        -- though it is impossible for this branch to ever execute.
        -- So, define our metadata here
        SELECT
            CAST(NULL AS int) AS MyFirstColumn
        ,   CAST(NULL AS char(8)) AS SomeCodeColumn;
    END

    -- Assume complex queries here that banjax the metadata
    -- yet ultimately return the actual data
    SELECT TOP 1000
        CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS int) AS MyFirstColumn
    ,   CAST(LEFT(AC.name, 8) AS char(8)) AS SomeCodeColumn
    INTO
        #RubeG
    FROM
        sys.all_columns AS AC;

    SELECT
        RG.MyFirstColumn
    ,   RG.SomeCodeColumn
    FROM
        #RubeG AS RG;
END
For SQL Server 2012+ sources, you can try specifying the WITH RESULT SETS clause on your EXECUTE call.
EXECUTE dbo.Sample
WITH RESULT SETS
(
    (
        c1 bigint
        , c2 varchar(8)
    )
);
Biml
Sample Biml package definition. To use it:
1. Download and install BIDS Helper
2. Open or create an Integration Services project
3. Add a new Biml file
4. Paste the following definition
5. Adjust the connection string value in line 5 (for OLE) and line 8 (for ADO.NET)
6. Ensure the stored procedure dbo.Sample exists
7. Remove the DFT Sample RESULTS SET dataflows if using a 2008 database
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <Connection
            Name="tempdb"
            ConnectionString="Data Source=.\dev2014;Initial Catalog=tempdb;Provider=SQLNCLI10.1;Integrated Security=SSPI;"
        />
        <AdoNetConnection
            Name="CM_ADO"
            ConnectionString="Data Source=localhost\dev2014;Integrated Security=SSPI;Connect Timeout=30;Database=tempdb;"
            Provider="SQL"
        />
    </Connections>
    <Packages>
        <Package Name="so_31206473">
            <Tasks>
                <Dataflow Name="DFT Sample">
                    <Transformations>
                        <OleDbSource ConnectionName="tempdb" Name="OLESRC dbo_Source">
                            <DirectInput>EXECUTE dbo.Sample</DirectInput>
                        </OleDbSource>
                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
                <Dataflow Name="DFT Sample RESULTS SET">
                    <Transformations>
                        <OleDbSource ConnectionName="tempdb" Name="OLESRC dbo_Source RS">
                            <DirectInput>
                                <![CDATA[EXECUTE dbo.Sample
WITH RESULT SETS
(
    (
        c1 bigint
        , c2 varchar(8)
    )
);]]>
                            </DirectInput>
                        </OleDbSource>
                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
                <Dataflow Name="DFT SampleADO">
                    <Transformations>
                        <AdoNetSource ConnectionName="CM_ADO" Name="ADOSRC dbo_Sample">
                            <DirectInput>EXECUTE dbo.Sample</DirectInput>
                        </AdoNetSource>
                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
                <Dataflow Name="DFT SampleADO RESULTS SET">
                    <Transformations>
                        <AdoNetSource ConnectionName="CM_ADO" Name="ADOSRC dbo_Sample">
                            <DirectInput>
                                <![CDATA[EXECUTE dbo.Sample
WITH RESULT SETS
(
    (
        c1 bigint
        , c2 varchar(8)
    )
);]]>
                            </DirectInput>
                        </AdoNetSource>
                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
            </Tasks>
        </Package>
    </Packages>
</Biml>
Sample metadata for an OLE Source
WITH RESULTS SET metadata for an OLE Source
The results are the same for ADO.NET providers; I simply didn't notice that nuance of the question when I built my screenshots. The updated Biml makes it trivial to add those in, though.

Where can I get an AdWords report response XML fixture/mock?

Hi fellow stackoverflowers! :)
I am wiring my application up to the AdWords API and want it to display reports based on the retrieved data. My problem is that I am using a test account that has no data that could be used for reporting, and as far as I understand, test accounts don't provide any. According to https://developers.google.com/adwords/api/docs/test-accounts#developing_with_test_accounts I should fake the data. I am totally fine writing tests and feeding them with fixtures, except I can't find any relevant example of what the response XML will look like so I can create my own fixtures.
For example:
I want to pull a campaign performance report and segment it by Week:
<reportDefinition>
    <selector>
        <fields>CampaignId</fields>
        <fields>Clicks</fields>
        <fields>Impressions</fields>
        <fields>Week</fields>
        <predicates>
            <field>CampaignId</field>
            <operator>EQUALS</operator>
            <values>111111</values>
        </predicates>
        <dateRange>
            <min>20150201</min>
            <max>20150601</max>
        </dateRange>
    </selector>
    <reportName>Campaign Performance Report NAme</reportName>
    <reportType>CAMPAIGN_PERFORMANCE_REPORT</reportType>
    <dateRangeType>CUSTOM_DATE</dateRangeType>
    <downloadFormat>XML</downloadFormat>
    <includeZeroImpressions>true</includeZeroImpressions>
</reportDefinition>
Which gives me this response:
<report>
    <report-name name="Campaign Performance Report NAme" />
    <date-range date="Feb 1, 2015-Jun 1, 2015" />
    <table>
        <columns>
            <column name="campaignID" display="Campaign ID" />
            <column name="clicks" display="Clicks" />
            <column name="impressions" display="Impressions" />
            <column name="week" display="Week" />
        </columns>
    </table>
</report>
What will the response look like with actual data? And how will it look if segmentation is set to Date, Month, Quarter, or Year?
I have tried to find an XML example on the web and GitHub without luck. Can you please share response examples or point me to the doc where it says how I can "generate" data for my test account?
Thank you!
eolexe, I am assuming you did not receive an error for the Campaign Performance Report you tried to fetch. If that is the case, then it means there was no match for the predicates and date range you entered; this is why you do not have the rows element within the XML. Also, some attributes are not filterable, but I am pretty sure CampaignID is. Take a close look at the API docs just to be sure. You also need to migrate to version 201502, because 201402 is deprecated and 201409's sunset date is in July.
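There is no official fixture, but going by the column definitions in the empty response above, a populated report would add row elements inside table, one per returned record, with one attribute per selected column. The following is a hand-written sketch with invented values, not an official sample:

```xml
<table>
    <columns>
        <column name="campaignID" display="Campaign ID" />
        <column name="clicks" display="Clicks" />
        <column name="impressions" display="Impressions" />
        <column name="week" display="Week" />
    </columns>
    <row campaignID="111111" clicks="12" impressions="3456" week="2015-02-02" />
    <row campaignID="111111" clicks="7" impressions="2890" week="2015-02-09" />
</table>
```

If this shape is right, changing the segmentation field (Date, Month, Quarter, Year) should only change which attribute appears on each row, not the overall structure.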
I have been working with the AdWords API for reporting purposes for years, and I can tell you that up till now there is still no official sample XML or CSV given. So I simply test the program using real account data...
I don't pick XML as the response type so I cannot really address your question, but you may wish to know that the XML response does not return a total row. That's why I always pick CSV/TSV format.

What is the second value of the Quartz.NET configuration job-type parameter?

I'm trying to use Quartz.net in my web project. I configured my application like this:
<job>
    <name>CRMMoreThanOneJob</name>
    <group>jobGroup1</group>
    <job-type>ReportingPortalBLL.Jobs.CRMCalledMoreThanOneJob, ReportingPortalBLL.Jobs</job-type>
    <durable>true</durable>
    <recover>false</recover>
    <job-data-map>
        <entry>
            <key>MessageToLog</key>
            <value>Hello from MyJob</value>
        </entry>
    </job-data-map>
</job>
But it did not work because of the job-type element. My job class is defined like below, and its namespace is ReportingPortalBLL.Jobs:
namespace ReportingPortalBLL.Jobs
{
    public class CRMCalledMoreThanOneJob : IJob
    {
        // ...
    }
}
After I changed it to ReportingPortalBLL.Jobs.CRMCalledMoreThanOneJob, ReportingPortalBLL (dropping the trailing .Jobs from the second value) it worked well.
I looked at the documentation but couldn't find what the second value of the job-type parameter represents. What should I write as the second value? What does the second value in the representation below mean? I will be using Quartz in my other projects, so it would be nice to know how to configure it easily.
<job-type>Namespace.Job1, secondValue</job-type>
The secondValue corresponds to the assembly name.
If you go through the source code of Quartz.NET, you can see that the job-type value is passed to Type.GetType as a parameter, and Type.GetType accepts an assembly-qualified name. The assembly-qualified name of a type consists of the type name, including its namespace, followed by a comma, followed by the display name of the assembly.
Refer to these links for more info:
http://msdn.microsoft.com/en-us/library/c5cf8k43.aspx
http://msdn.microsoft.com/en-us/library/system.type.assemblyqualifiedname.aspx
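Putting it together, the working value from the question, with the parts of the assembly-qualified name annotated (the comment line is just for illustration):

```xml
<!-- pattern: <job-type>Namespace.TypeName, AssemblyName</job-type> -->
<job-type>ReportingPortalBLL.Jobs.CRMCalledMoreThanOneJob, ReportingPortalBLL</job-type>
```

Here the namespace happens to end in .Jobs while the assembly is just ReportingPortalBLL, which is why appending .Jobs to the assembly name failed.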
