Can we have multiple group filters created in BI Publisher?
I want to have more than one group filter, e.g. SAL > 5000 AND DEPT_ID IN (10, 20). Right now I'm able to add only one group filter. Will BI Publisher allow me to add multiple group filters?
Yes, you can have multiple group filters.
Write your data model, and when you save it, add all the parameters you want to keep in the Parameters tab.
In your case you don't have any parameters, so all you need to do is write the SQL query and put both conditions in the WHERE clause, i.e.
WHERE SAL > 5000 AND DEPT_ID IN (10, 20)
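For example, a complete data set query with both conditions might look like the sketch below (EMPLOYEES is just an assumed table name; use whatever table actually holds SAL and DEPT_ID):

-- Both filters sit in a single WHERE clause of the data set query.
SELECT *
FROM   EMPLOYEES
WHERE  SAL > 5000
AND    DEPT_ID IN (10, 20)

If you later want the salary threshold or the department list to come from the user, those are the values you would define on the Parameters tab and bind into the WHERE clause instead of the literals.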
Hope this helps.
I have this table:
User
Name      Role
-------   ----------
Mason     Engineer
Jackson   Engineer
Mason     Supervisor
Jackson   Supervisor
Graham    Engineer
Graham    Engineer
There can be exact duplicates (same Name/Role combination). Ignore comments about primary key.
I am writing a query that will give the distinct values from the 'Name' column, along with a corresponding 'Role'. For the 'Role': if a name has a 'Supervisor' record, that record should be returned; otherwise an 'Engineer' record should be returned if it exists.
For the above table, the expected result is:
Name      Role
-------   ----------
Mason     Supervisor
Jackson   Supervisor
Graham    Engineer
I tried ordering 'Role' in descending order so that I can group by Name, Role and pick the first item; it will be the 'Supervisor' role if present, else the 'Engineer' role, which matches my expectation.
I also tried User.select('DISTINCT ON (name) *').order(Role: :desc), but I am not seeing the DISTINCT ON clause in the SQL query that gets executed.
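For reference, the raw SQL I am trying to get generated is roughly the following (PostgreSQL-specific sketch, using the names from the sample table):

-- DISTINCT ON keeps the first row per Name; ordering Role descending makes
-- 'Supervisor' win over 'Engineer' when both exist for a name.
-- "User" is quoted because user is reserved in PostgreSQL; in Rails the
-- underlying table would normally be called users.
SELECT DISTINCT ON (Name) Name, Role
FROM "User"
ORDER BY Name, Role DESC;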
Also, I tried another approach: get all valid Name, Role combinations and then process them in Ruby, iterating the result set and using if-else to decide which row to display.
However, I am interested in anything that is efficient and does not overdo the handling.
I am new to Ruby and therefore reaching out.
If I wanted to do this in pure SQL, I would have to use GROUP BY; MAX(Role) works here only because 'Supervisor' happens to sort after 'Engineer' alphabetically.
SELECT Name, MAX(Role) FROM User GROUP BY Name
So one method would be to execute this SQL statement against the base connection.
ActiveRecord::Base.connection.execute("SELECT Name, MAX(Role) FROM User GROUP BY Name")
That would provide exactly the data you need, though it wouldn't be returned as ActiveRecord models. If you need those models then I would use find_by_sql and do an inner join to provide the records.
User.find_by_sql("SELECT User.* FROM User INNER JOIN (SELECT Name AS n, MAX(Role) AS r FROM User GROUP BY Name) U2 ON User.Name = U2.n AND User.Role = U2.r")
Unfortunately that would provide both records for Graham.
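If those exact duplicates are the only remaining problem, one possible workaround (a sketch, not tested against your schema; table and column names follow the query above) is to deduplicate in the outer query:

-- SELECT DISTINCT collapses the two identical Graham/Engineer rows into one.
SELECT DISTINCT User.Name, User.Role
FROM User
INNER JOIN (SELECT Name AS n, MAX(Role) AS r FROM User GROUP BY Name) U2
  ON User.Name = U2.n AND User.Role = U2.r

The trade-off is that you only get the deduplicated columns back rather than full ActiveRecord models.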
I am using Google Sheets and have a connected query where I am using parameters. When one of the parameters is configured to be a subquery, the query will run, but no results are returned.
For example, here is my (simplified) query:
SELECT *
FROM table
WHERE campaign IN (@CAMPAIGN);
In this example, I have the @CAMPAIGN parameter in the Google Sheet configured as:
SELECT DISTINCT campaign FROM table2
If I manually substitute the parameter in the BQ console, it runs fine and returns the expected results. Is there a reason this functionality does not work with parameter substitution in the Google Sheet? Is there a way around this?
Depending on how many SQL SELECT-type lookups you do, it may help to use a custom function that I wrote. You need to place my SQL .js file in your Google Sheets project, and the =gsSQL() custom function will then be available.
The one requirement for this versus using =QUERY() is that unique column titles are required for each column.
It is available on github:
gsSQL github project
This example works if each sheet is a table, so it would be entered something like
=gsSQL("SELECT books.id, books.title, books.author_id
FROM books
WHERE books.author_id IN (SELECT id from authors)
ORDER BY books.title")
In this example, I have a sheet named 'books' and another sheet named 'authors'.
If you need to specify a named range or an A1 notation range as a table, this can also be done with a little more work...
=gsSQL("SELECT books.id, books.title, books.author_id
FROM books
WHERE books.author_id IN (SELECT id from authors)
ORDER BY books.title", {{'books', 'books!$A$1:$I', 60};
{'authors', 'authors!$A$1:$J30', 60}}, true)
In this example, the books and authors come from specific ranges, the data will be cached for 60 seconds and column titles are output.
TL;DR
How to model data into fields vs. tags in case you want to perform both GROUP BY and count(distinct())
So currently this is my influxdb data model:
api_requests (database)
- requests_stats (measurement)
- api_path (tag)
- app_version (tag)
- host (tag)
- platform (tag)
- account_id (field)
- user_id (field)
- function_name (field)
- network (field)
- network_type (field)
- time_to_execute (field)
So now I want to find out the number of distinct accounts (active accounts).
So I can run the following query:
SELECT count(distinct("account_id")) AS "active_accounts"
FROM "api_requests"."autogen"."requests_stats"
This works fine, as account_id is a field.
Now suppose I want to perform a group by operation on account_id, for example to find the number of requests received per account:
SELECT count("function_name") AS "request_count"
FROM "api_requests"."autogen"."requests_stats"
GROUP BY "account_id"
I cannot do this, because GROUP BY only works on tags, not fields.
How would one manage this kind of scenario?
One solution is to store the value as both a field and a tag, but that would be data redundancy.
The other, and most optimal, way would be for count(distinct()) to work on tags. Is this possible? This was actually a feature request in their GitHub repo.
Or can something be done about the data model to achieve the same?
Use a tag for account_id. Instead of the count query:
SELECT count(distinct("account_id")) AS "active_accounts"
FROM "api_requests"."autogen"."requests_stats"
use the query that calculates the exact tag value cardinality:
SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "account_id"
This will work only for your use case, because you don't want to use any additional (time, tag) filter in your distinct count query.
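And with account_id stored as a tag, the per-account request count from your question should then work as a plain GROUP BY (same shape as the query you already have):

SELECT count("function_name") AS "request_count"
FROM "api_requests"."autogen"."requests_stats"
GROUP BY "account_id"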
I have a stored procedure that assigns a "totalstype_number" based on the field (1 = AGS, 2 = invoice count, etc.). When I want to display a given totals type, I group by the totalstype and filter it to totalstype = "1" (or whatever number I am trying to populate). What I need to do is create a field that takes totalstype = 1 and divides it by totalstype = 2. I am not really sure how to go about this. I have tried doing it at the tablix level, but I am not sure how to set up my row group's grouping. Column groups are by channel (#1-5), with two adjacent child groups underneath for value1 (year1) and value2 (year2). Thanks!
You do not need to do this in SSRS; modify the query in your stored procedure instead. Add GROUP BY totalstype and COUNT(*) AS total_Number_this_salesType to your original SP. The easiest way is to have two more parameters that store the total number for each sales type, along the lines of:
@parameter1 = (query to get total number of sales type 1)
@parameter2 = (query to get total number of sales type 2)
then return @parameter1 / @parameter2.
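A minimal T-SQL sketch of that idea (dbo.SalesTotals and the column names are placeholders for whatever your procedure actually selects from):

DECLARE @parameter1 DECIMAL(18, 4), @parameter2 DECIMAL(18, 4);

-- Hypothetical source table; replace with the SP's real query.
SELECT @parameter1 = COUNT(*) FROM dbo.SalesTotals WHERE totalstype = 1;  -- e.g. AGS
SELECT @parameter2 = COUNT(*) FROM dbo.SalesTotals WHERE totalstype = 2;  -- e.g. invoice count

-- Return the ratio; guard against division by zero.
SELECT CASE WHEN @parameter2 = 0 THEN NULL
            ELSE @parameter1 / @parameter2
       END AS type1_over_type2;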
I am trying to generate an on-screen report of accounting transaction history. In most situations it is one display row per record in the AccountingTransaction table. But occasionally there are transactions that I wish to display to the end user as one transaction, even though behind the scenes they are really two accounting transactions. This is caused by deferral of revenues and fund splitting, since this app is a fund accounting app.
If I display all rows one by one, those double entries look odd to the user since the fund splitting and deferral is "behind the scenes". So I want to roll up all the related transactions into one display row on screen.
I have my query now using group by to group the related transactions
@history = AccountingTransaction.where("customer_id in (?) AND no_download <> 1", customers_in_account).group(:transaction_type_id, :reference_id).order(:created_at)
As I loop through, I get the transactions grouped as I want, but I am struggling with how to display the total sum of the 'credit' field for all records in the group (it only shows the credit of the first record in the group). If I add .sum(:credit) to my query, it returns the sums just as I want, but none of the other data.
Is there a way for me to group these records as in my @history query and also get the sum of the credit field for each respective group?
* Addition *
What I really want is what the following SQL query would give me.
SELECT transaction_type_id, reference_id, SUM(credit)
FROM accounting_transactions
WHERE customer_id IN (21, 22, 23, 24) AND no_download <> 1
GROUP BY reference_id, transaction_type_id
ORDER BY created_at
I'm not sure you can do "ORDER BY created_at" and not include it in the select fields, but here is an example.
@history = AccountingTransaction.
select([:reference_id, :transaction_type_id, :created_at]).
select(AccountingTransaction.arel_table[:credit].sum.as("credit_sum")).
where("customer_id in (?) AND no_download <> 1", customers_in_account).
group(:transaction_type_id, :reference_id).
order(:created_at)
To access the credit_sum you could do:
@history[0].attributes["credit_sum"]
I guess if you'd like, you could create a method:
def credit_sum
attributes["credit_sum"]
end
* EDIT *
As stated in comments you can access the attribute directly:
@history[0].credit_sum