Greetings beloved comrades,
I am building a series of Power BI dashboards, and as they go into production I'd like to put them into TFS. However, due to the large datasets involved, some of these report files are quite large (1.6 GB).
It doesn't seem like a good idea to force TFS to store all of the actual data, when only the definition really matters.
Is there a simple way to remove the data from a .pbix file or save only the definition?
Edit: Looks like Microsoft has rendered this question obsolete with the creation of Power BI templates (see the April update for Power BI).
Nevertheless, the workaround in the answer could be used for other purposes.
I would add a "Parameters" Query (a table with a single row - created using Edit Queries / Edit Data) with a column called [Data Load], with a single row containing "Yes".
Then I would add a Filter step to the end of all the other Queries, referring to that "Parameters" Query. The Filter syntax would be:
Parameters{0}[Data Load] = "Yes"
That syntax is a bit obscure - it means:
Go to the Parameters Query, get the value from the 1st row, in the [Data Load] column, test if it equals "Yes".
When you want to empty all the data from the .pbix file, edit the Source step in the "Parameters" Query and change the [Data Load] value to "No", Apply and Refresh.
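For reference, here is a minimal sketch of what the two queries might look like in the Advanced Editor, assuming the switch query is named "Parameters"; the CSV source, file path, and earlier steps in the second query are made up for illustration:

// The "Parameters" query: a one-row switch table (what Enter Data produces).
let
    Source = Table.FromRecords({[Data Load = "Yes"]})
in
    Source

// Any other data query, with the extra filter appended as its final step.
let
    Source = Csv.Document(File.Contents("C:\Data\Sales.csv")),
    PromotedHeaders = Table.PromoteHeaders(Source),
    // Returns no rows once Parameters[Data Load] is switched to "No"
    FilteredByParameter = Table.SelectRows(PromotedHeaders, each Parameters{0}[Data Load] = "Yes")
in
    FilteredByParameter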
I've built a working example of this which you can download from my OneDrive and try out:
http://1drv.ms/1AzPAZp
It's the file: Power BI Demo - Dynamically filter all data.pbix
Convert the .pbix files to .pbit files using the "Save As..." option, and then version those .pbit files in TFS using Visual Studio, keeping them under source control on the server.
This approach is a bit interesting: when you commit a .pbix, it gets uploaded to a Premium workspace, the JSON metadata for the model is extracted, and that metadata is committed back to DevOps next to the .pbix. That way you can see a diff of the model metadata over time, including Power Query changes, measure changes, etc.
Related
My company is using BI Publisher for some data dumps and I know BI Publisher isn't really designed for that, but this is what I have to use.
I have two files with over 100 fields each. Is there a way to add every field to the report or do I have to add each field individually?
If you have a data model export, you could use the BIP designer in Word to add a table and select multiple fields for the output. The wizard will generate the XDO code for you.
Use E-Text output. It is designed for EFT (Electronic Funds Transfer) in the Payments module, but you can use it to export a CSV (comma-separated values) file or a fixed-width file, both of which can be opened in Excel. The E-Text output is fully documented in the BI Publisher user guides. Some advanced stuff is a bit harder to accomplish, but it is not terribly difficult. A simple CSV file should be quick and easy to create. You will need to list every field; there's no "give me everything" command.
It's actually to your benefit to use these for larger data extracts: when you use the Word RTF tables, the output Excel files are VERY large due to the way BI Publisher formats the cells.
I've just started using SPSS after using R for about five years (I'm not happy about it, but you do what your boss tells you). I'm just trying to do a simple count based on a categorical variable.
I have a data set where I know a person's year of birth. I've recoded into a new variable so that I have their generation as a categorical variable, named Generation. I also have a question that allows for multiple responses. I want a frequency of how many times each response was collected.
I've created a multiple response variable (Analyze > Multiple Response > Define Variable Sets). However, when I go to create crosstabs, the Generation variable isn't an option to select. I've tried googling, but the videos I have watched have the row variables as numeric.
Here is a google sheet that shows what I have and what I'm looking to achieve:
https://docs.google.com/spreadsheets/d/1oIMrhYv33ZQwPz3llX9mfxulsxsnZF9zaRf9Gh37tj8/edit#gid=0
Is it possible to do this?
First of all, to double check, when you say you go to crosstabs, is this Analyze > Multiple Response > Crosstabs (and not Analyze > Descriptive Statistics > Crosstabs)?
Second, with multiple response data, you are much better off working with Custom Tables. Start by defining the set with Analyze > Custom Tables > Multiple Response Sets. If you save your data file, those definitions are saved with it (unlike the Mult Response Procedure).
Then you can just use Custom Tables to tabulate mult response data pretty much as if it were a regular variable, but you have more choices about appropriate statistics, tests of significance etc. No need in the CTABLES code to explicitly list the set members.
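If you prefer syntax over the menus, a rough sketch of the set definition and the table might look like the following (the variable names q1_a, q1_b, q1_c and the counted value 1 are assumptions for illustration):

* Define the multiple response set (multiple dichotomy, counted value 1).
MRSETS
  /MDGROUP NAME=$Responses LABEL='Responses' VARIABLES=q1_a q1_b q1_c VALUE=1.

* Cross Generation against the set and show counts.
CTABLES
  /TABLE Generation [C] BY $Responses [C][COUNT].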
Try CUSTOM TABLES, although this is an additional add-on module that you need to have a licence for:
CTABLES /TABLE Generation[c] by (1_a+ 1_b + 1_c)[s][sum f8.0 'Count'].
TFS supports querying on tags, but I can't find a way to create a TFS work item query to return results that have no tags or any tag.
It is not possible to sort the Tag column in the TFS web access, so I can't even return all work items and then sort on the column to focus on those with tags and those without tags.
For example, I was hoping to be able to use a wildcard or an "any" value for Tags in the query editor.
I'm about a year too late, but I just accomplished this by doing Tags, Does Not Contain, "". I know that makes no sense, but it worked. I was able to pull in a task that had no tag.
This is not exactly helpful inside the tool, but it works to get the information: I select all, copy, and paste into a worksheet, where I can format it as a table and sort the results to get what I need. Another option is to install the TFS plugin for Excel; with it you can create and modify items using an existing query, sort items, add items, filter to show only items that do not have a tag, etc.
https://msdn.microsoft.com/en-us/library/ms181675.aspx?f=255&MSPPError=-2147217396
Can you do Tags=""?
I am not sure I understand the use case, but you can suggest new features at http://visualstudio.uservoice.com
I have two variables in the detail band of a report built with Delphi's ReportBuilder, and their running totals are correct. The report gets its data from two client datasets in the Delphi code.
Now I need the grand total of the totals that come from the variables in the detail band to be calculated in the summary band. How would I go about doing this? Would I declare a global variable and set that? And if so, how would I use that global variable with two different totals? Thanks in advance.
(I assume you mean Digital Metaphors' ReportBuilder - not sure if that's bundled with Delphi now or not...)
To create your grand totals:
Just create a summary band in the designer: Report->Summary, and put two DBCalc components in there. Assign their data fields to the fields you want to summarize as grand totals. This works because the DBCalc component is context aware - it knows what sort of band it lives in: If it's in a group footer, it aggregates for the group, if it's in the report summary section, it aggregates for the whole report.
Important: Consider if you need to summarize your subtotals, or directly aggregate all the data in the report. Depending on the data types and how you handle rounding, truncating, etc, there could be a difference between the two that causes you to come up with results that you aren't expecting.
BTW, I'm not sure why you're referring to variables - you should use DBCalc components for all your summations - they work automatically and are very easy to use, provided you put them in the correct band and feed them the correct data.
(All this is readily available in the Report Builder docs: Report Builder - Documentation | Developer's Guide, which is probably why this question was downvoted. You are supposed to do some preliminary research before posting a question here. )
I've posted a demo Access db at http://www.derekbeck.com/Database0.accdb . I'm using Access 2007.
I am importing an Excel spreadsheet, which my organization gets weekly, into Access. It gets imported into the table [imported Task list]. From there, an append query reformats it and appends it to my [Master Task List] table.
Previously, we have had a form, where we would manually go through the newest imports, and manually select whether our department was the primary POC for a tasking. I want to automate this.
What syntax do I require, such that the append query will parse the text from [imported Task list].[Department], searching for those divisions listed on [OurDepartments] table (those parts of our company for which we are tracking these tasks), and then select the appropriate Lookup field (connected to [OurDepartments] table) in our [Master Task List] table?
I know that's a mouthful... Put another way, I want the append query to update [Master Task List].[OurDepartments], which is a lookup field, based on parsing the text of [imported Task list].[Department].
Note the tricky element: we have to parse the text for "BA" as well as "BAD", "BAC", etc. The shorter "BA" might be an interesting issue for this query.
Hoping for a non-VBA solution.
Thanks for taking a look!
Derek
PS: Would be very helpful if anyone might be able to respond within the work week. Thx!
The answer is here: http://www.utteraccess.com/forum/Append-Query-Selects-L-t1984607.html
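In case that link goes stale, here is a rough sketch of the kind of append query the thread describes, assuming [OurDepartments] has a numeric key [ID] and a text field [DeptCode], and that the lookup field in [Master Task List] stores that key (those field names, and [Task], are assumptions for illustration):

INSERT INTO [Master Task List] ( [Task], [OurDepartments] )
SELECT t.[Task], d.[ID]
FROM [imported Task list] AS t, [OurDepartments] AS d
WHERE " " & t.[Department] & " " LIKE "* " & d.[DeptCode] & " *";

The space-padding is a crude word-boundary test, so "BA" only matches when it appears as a separate word and not inside "BAD" or "BAC"; it assumes the codes in [Department] are space-separated, so adjust the delimiters if they are comma-separated. Note also that a task whose [Department] text mentions more than one of your codes will be appended once per match.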