Adding comments to a SQL query in FileNet Content Engine

Is it possible to add comments to a query, in FileNet?
I refer, in particular, to queries written via FEM.
I have searched the documentation, but I have found nothing about it.
I have a large query, and I wish to add comments for its subqueries in order to keep it documented.
I am trying the "classic" approach with the double dash (--), but with no luck.

Related

is it possible to implement query rewrite & materialized view in generated OBIEE query

I have read a paper about materialized view selection in data warehouses, and I have been trying to implement it in OBIEE, but the queries generated by OBIEE use the WITH clause / CTE / subquery factoring.
These queries are not rewritten to use the materialized view as I expected; it is a similar issue to the one in this Oracle community forum. Query rewrite only works when I create the materialized view on the base table referenced in the WHERE clause, as discussed in the paper above, but our generated OBIEE queries use extremely complex subquery factoring, so this won't give a significant increase in query speed.
I just want to know whether it is possible to implement query rewrite and materialized views for the generated OBIEE query, in terms of the discussion in the paper above? Thanks.
Short answer: No. Longer answer: No, because that's not how the product is built. It's meant to be a source-agnostic query generator and analytics platform, not a programming platform that lets you develop queries on your own. You need to understand how the product works, how it optimizes queries, and how it chooses its sources. There are literally dozens of ways to influence how your query gets written, but they are all semantic/structural configuration options. You do not write code in OBI.

Riak: search by key prefix

I'm a newcomer to Riak and I've been reading this chapter from Riak's docs. It shows that by adding structured information to buckets and keys, one can overcome some of the limitations of key/value operations.
The article gives an example of how such a key would be structured:
sensor data keys could be prefaced by sensor_ or temp_sensor1_ followed by a timestamp
(e.g. sensor1_2013-11-05T08:15:30-05:00)
no method is mentioned for querying the data by key prefix (e.g. sensor1_). Looking around Stack Overflow I found this question. In it, MapReduce and key filtering are mentioned as possible solutions. But the documentation on key filters states that they are a soon-to-be-deprecated feature. I also checked out Riak Search as a possible way, but wasn't able to find a way to query data by key prefix.
My question is: What is the best way to search data by key prefix? I would greatly appreciate an example.
The best way to search for a key prefix is to not do it if you don't need to, i.e. design around that search pattern if you can. The primary way to do that is to use deterministic keys that your application can easily compute. That said, if you cannot avoid building your application to require searching on key prefixes, there are a couple of things you can do (all of which have their drawbacks).
Key Filters - http://docs.basho.com/riak/latest/dev/references/keyfilters/ - as you noted already these are marked as deprecated and not recommended at this point.
MapReduce - http://docs.basho.com/riak/latest/dev/advanced/mapreduce/ - a good option if you can query in batches, but not really suited for real-time querying. You could cache the query results if precomputing the queries is helpful.
Riak Search 2.0 (Solr) - http://docs.basho.com/riak/latest/dev/using/search/ - this is probably the easiest method to implement from an application perspective and allows you to query your keys using a query along the lines of: curl "$RIAK_HOST/search/sensor?wt=json&q=_yz_rk:sensor1_*" (see the sketch after this list). Using search does come with a performance hit over straight key-based queries, but you can cache queries.
Data Modeling - querying by key directly is always going to provide the best performance, as mentioned above. One option is to take advantage of Riak's Data Types (CRDTs) and create a bucket that uses sets. You could create a set for each sensor that contains the list of keys associated with that sensor in the first bucket. Then you can iterate over the keys in the set and do a multi-get to return all of the associated records.
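For the Riak Search option, here is a minimal sketch of the same prefix query issued from application code rather than curl. It is only an illustration: it assumes a Search index named sensor exists, that RIAK_HOST points at a Riak node's HTTP port, and it uses the standard .NET HttpClient; the URL itself mirrors the curl example above.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RiakPrefixSearch
{
    static async Task Main()
    {
        // Assumption: RIAK_HOST is set (e.g. http://localhost:8098) and a Search
        // index called "sensor" is attached to the bucket holding the sensor data.
        var riakHost = Environment.GetEnvironmentVariable("RIAK_HOST") ?? "http://localhost:8098";
        using (var http = new HttpClient())
        {
            // _yz_rk is the Search field that holds the Riak key, so this matches
            // every object whose key starts with "sensor1_".
            var url = riakHost + "/search/sensor?wt=json&q=" + Uri.EscapeDataString("_yz_rk:sensor1_*");
            var json = await http.GetStringAsync(url);
            Console.WriteLine(json); // raw Solr-style JSON; the matching keys are in response.docs
        }
    }
}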
Hope this gives you some ideas.

solr join vs lucene join

I am trying to find out how the Solr join compares to the Lucene join. Specifically, whether the Lucene join uses any filter cache during the JOIN operation. I looked into the code, and it seems that in the QParser there is a reference to a cache, but I am not sure if it's a filter cache. If somebody has any experience with this, please do share, or please tell me how I can find that.
The Solr join wiki states
Fields or other properties of the documents being joined "from" are not available for use in processing of the resulting set of "to" documents (ie: you can not return fields in the "from" documents as if they were a multivalued field on the "to" documents).
I am finding it hard to understand the above limitation of the Solr join. Does it mean that, unlike traditional RDBMS joins, which can return columns from both the TO and FROM tables, Solr joins will only return fields from the TO documents? Is my understanding correct? If yes, then why this limitation?
Also, there is some difference with respect to scoring, and on that the wiki says
The Join query produces constant scores for all documents that match -- scores computed by the nested query for the "from" documents are not available to use in scoring the "to" documents
Does it mean the subquery's score is not available to the main query? If so, again, why did Solr scoring take this approach?
If there are any other differences that are worth considering when moving from Lucene join to Solr, please share.
This post is quite old, but I'll jump on it. Sorry if it's not active any more.
To tell the truth, it's far better to avoid the join strategy in Solr/Lucene. You have to think of the object as a whole; joining is very much an SQL approach that is not close to the philosophy of Solr.
Despite that, Solr enables very limited join operations. Take a look at this very good reference: join solr lucene! And also this document about block join support in Solr.
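To make the "fields come only from the to side" limitation from the question concrete, here is a small, hypothetical sketch of a Solr join query issued over HTTP. The host, collection, and field names (parent_id, id, type, name) are invented for illustration; the point is that fl= can only name fields of the "to" documents, and all matches come back with a constant score.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SolrJoinSketch
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // {!join from=X to=Y}subquery : run the subquery, collect field X from the
            // matching "from" documents, and return the documents whose field Y holds
            // one of those values. Only "to"-side fields can be requested in fl.
            var q = Uri.EscapeDataString("{!join from=parent_id to=id}type:child");
            var url = "http://localhost:8983/solr/mycollection/select?q=" + q + "&fl=id,name&wt=json";
            Console.WriteLine(await http.GetStringAsync(url));
        }
    }
}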

Linq-to-SQL query - Need to filter by IDs returned by Full-Text Search sql functions - Hitting limit for Contains

My objective:
I have built a working controller action in MVC which takes user input for various filter criteria and, using PredicateBuilder (part of LinqKit - sorry, I'm not allowed enough links yet) builds the appropriate LINQ query to return rows from a "master" table in SQL with a couple hundred thousand records. My implementation of the predicates is totally inelegant, as I'm new to a lot of this, and under a very tight deadline, but it did make life easier. The page operates perfectly as-is.
To this, I need to add a Full-Text search filter. Understanding the way LINQ translates Contains to LIKE(%%), using the advice in Simon Blog: LINQ-to-SQL - Enabling Full-Text Searching, I've already prepared Table Functions in SQL to run Freetext queries on the relevant columns. I have 4 functions, to match the query against 4 separate tables.
My approach:
At the moment, I'm building the predicates (I'll spare you the details) for the initial IQueryable data object and applying them with a LINQ query, like so:
var MyData = DB.Master_Items.Where(outer);
Then, I'm attempting to further filter MyData on the Keys returned by my full-text search functions:
var FTS_Matches_Subtable_1 = (from tbl in DB.Subtable_1
                              join fts in DB.udf_Subtable_1_FTSearch(KeywordTerms)
                                  on tbl.ID equals fts.ID
                              select tbl.ForeignKey);
... I have 4 of those sets of matches which I've tried to use to filter my original dataset in several ways with no success. For instance:
MyNewData = MyData.Where(d => FTS_Matches_Subtable_1.Contains(d.Key) ||
FTS_Matches_Subtable_2.Contains(d.Key) ||
FTS_Matches_Subtable_3.Contains(d.Key) ||
FTS_Matches_Subtable_4.Contains(d.Key));
I just get the error: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.
I get that it's because I'm trying to pass a relatively large set of data into the Contains function and LINQ is converting each record into a separate parameter, exceeding the limit.
I just don't know how to get around it.
I found another post linq expression to return property value which seemed SO promising. I tried ifwdev's solution (2nd highest ranked answer): using LinqKit to build an extension that will break up the queries into manageable chunks. But I can't figure out how to implement it. Out of my depth right now maybe?
Is there another approach that I'm missing? Some simpler way to accomplish this that I've overlooked?
Sorry for the long post. But thank you for any help you can provide!
This is a perfect time to go back to raw ADO.NET.
Twisting things around just to use LINQ to SQL is probably just as time-consuming as writing the query and hydration by hand.
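If dropping to raw ADO.NET isn't an option, the chunking idea the asker already found is also workable. Below is a minimal, hedged sketch of that approach using the names from the question (MyData, FTS_Matches_Subtable_1..4, d.Key); the Batch helper and the chunk size of 2000 are assumptions. Each chunk produces its own IN (...) clause with at most 2000 parameters, staying under SQL Server's 2100-parameter cap.

using System;
using System.Collections.Generic;
using System.Linq;

// Splits the full list of matching keys into fixed-size chunks.
static IEnumerable<List<TKey>> Batch<TKey>(List<TKey> keys, int size)
{
    for (int i = 0; i < keys.Count; i += size)
        yield return keys.GetRange(i, Math.Min(size, keys.Count - i));
}

// 1. Run the four full-text table functions once and collect the distinct matching keys.
var ftsKeys = FTS_Matches_Subtable_1
    .Union(FTS_Matches_Subtable_2)
    .Union(FTS_Matches_Subtable_3)
    .Union(FTS_Matches_Subtable_4)
    .ToList();

// 2. Filter the already-predicated MyData query chunk by chunk; each Contains call
//    is translated to an IN list with at most 2000 parameters per round trip.
var results = Batch(ftsKeys, 2000)
    .SelectMany(chunk => MyData.Where(d => chunk.Contains(d.Key)).ToList())
    .ToList();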

Is it possible to alias or rename fields in YQL?

I'm making a bunch of YQL queries at once & have a standard way of accessing the fields on the server. Unfortunately one of the feeds uses a different name than the rest for a field so I was assuming I could alias it within YQL.
Something like:
SELECT title, link, encoded AS description FROM...
But it looks like YQL's parser doesn't like that as I get this error:
Syntax error(s) [line 1:37 expecting field got 'AS']
So, is it possible to alias fields in YQL like you can in SQL? I don't see anything in the YQL docs or on the internet at large.
Tacking another (small) question on as well, is there a spec anywhere for YQL's syntax?
No, it's not possible to do an alias in a YQL query. (As @codeulike mentioned, it's really not true "SQL" like you might find in MySQL or other databases.)
One capability that might help your needs is the ability in Open Tables to create an alias for parameter names. See the YQL Open Tables documentation and search for "alias".
I think YQL corresponds to SQL in only a metaphorical sort of way; although it superficially uses things like SELECT, it doesn't try to cover much of the breadth of SQL. Hence, if it's not in the documentation, it's probably not possible.
In this guide: http://developer.yahoo.com/yql/guide/select_statement.html ... aliasing of fields is not mentioned, so I figure it's not a feature.
Although, if you run your YQL query through Yahoo Pipes, you can use their Rename module to rename elements of the data.
