Building a BI Portal for the Tabular Model

Thanks to the folks who braved the snow and freezing rain to see the session on building a BI portal for the tabular model. Here is the deck: Building a BI Portal.

There is some reference material in this deck that is useful to those who were not there this morning. I walked through four major Microsoft reporting/data visualization tools (Excel, Power View, Reporting Services, and PerformancePoint) and looked at the strengths and weaknesses of each. I sliced the decision making on two axes: technological considerations (requirements such as SharePoint, mobile, ad hoc reporting, etc.) and data visualizations (availability of the different types of charts and graphs, Analysis Services capabilities, etc.) in each tool. A number of slides stack the four tools against each other in each of these areas and compare/contrast the capabilities of each tool in one place. Check it out, and if you have questions let me know.

Bottom line: Power View just adds to the offerings in the Microsoft BI stack. No existing tools, including older tools like PerformancePoint, are killed or displaced by Power View. In some ways, Power View actually makes BI pros' jobs harder. The slick data visualizations that people want to see front and centre are actually incredibly difficult to integrate into a full-fledged BI dashboard, as there is no custom web part for Power View and no way to link in child reports from Power View reports. BI pros wanting to make use of the full set of reporting capabilities available for the tabular model will have to do a delicate and creative dance between four tools and try to glue the bits and pieces together.

This presentation was based on the data set for the Hans Rosling project. I have extended this data set and added arms import and export data (to get some additive measures); I will post an updated data set in the future. I will also post some DAX learning resources for building the tabular model driving the Hans Rosling project. Links to the date/time functions in particular will be very helpful for those approaching DAX for the first time.

Looking forward to doing this one again at SQLBits.

Upcoming speaking engagements

Just a quick note to let you know about a few upcoming events. Tomorrow I will be speaking at a Vancouver PASS BI event at the Microsoft Canada office. The all-new talk is called Building a BI Portal for Tabular Models. You can view the abstract on the PASS site. Basically I will talk about using Excel, Power View, Reporting Services, and PerformancePoint with the tabular model, and when to use each technology when building a BI portal.

Next, it’s SQL Saturday 114 in Vancouver on March 17th. The topic is modeling with DirectQuery. See the abstract on the SQL Saturday site.

Last but not least, it’s off to London at the end of March for SQLBits. I’ve got a full day pre-con on tabular modeling, plus a session on Friday repeating my BI Portal talk. Abstracts are here and here. See you then.

Creating a tabular partition in code

Following up on my previous post on managing tabular models, here is a small AMO example for adding a partition in code. This will be the first in a series of posts on tabular model automation.

This example will show how to add a partition to the Demographic Facts table of the Hans Rosling Project database. To create that database, simply create a new tabular project by importing from the Hans Rosling PowerPivot workbook, set the name of the database to hans_rosling_project, and deploy.

Notice that the Analysis Services database only has data from 1960 onward. However, the underlying relational data source has data points going back to 1800. Let’s write a C# program that creates a partition with the data from the year 1800 and processes it fully (we will cover more advanced processing techniques in another post).

First, create a C# project (you could use VB, I just don’t). You must add a reference to the Microsoft.AnalysisServices.dll assembly. Since this assembly is not in the GAC and its location on disk is not enumerated by the Add Reference dialog, you must browse to the location of the DLL (by default C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Assemblies) to add the reference.
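If you prefer, you can also add the reference by editing the project file directly. A sketch of the csproj entry, assuming the default SQL Server 2012 install path shown above:

```xml
<!-- goes inside an <ItemGroup> in the .csproj file -->
<Reference Include="Microsoft.AnalysisServices">
  <HintPath>C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Assemblies\Microsoft.AnalysisServices.DLL</HintPath>
</Reference>
```

The HintPath tells MSBuild where to find the assembly at build time, since it is not in the GAC.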

Now you can go ahead and write the code. This example has zero error handling, but gets the point across:

//this example assumes "using Microsoft.AnalysisServices;" at the top of the file

//connect to the server, substituting your server/instance name as necessary
Server s = new Server();
s.Connect(@"localhost\tabular");

//navigate to the Demographic Facts measure group, we must add the partition to the measure group
Database db = s.Databases.FindByName("hans_rosling_project");
DataSourceView dsv = db.DataSourceViews.FindByName("Sandbox");
Cube c = db.Cubes.FindByName("Model");
MeasureGroup mg = c.MeasureGroups.FindByName("Demographic Facts");

//define the new partition
Partition p = mg.Partitions.Add("Partition 1800");
p.Source = new QueryBinding(dsv.DataSourceID, "SELECT [dbo].[DemographicFacts].* FROM [dbo].[DemographicFacts] WHERE ([Year] = 1800)");
p.StorageMode = StorageMode.InMemory;

//update the server with the new partition, process it, and disconnect
p.Update(UpdateOptions.ExpandFull);
p.Process(ProcessType.ProcessFull);
s.Disconnect();

The most interesting part of the code to me is the part where you navigate to the measure group to add the partition. There are a few things to note about this code:

  • There are DataSourceViews, Cubes, and MeasureGroups. These things did not go away with the tabular model, they’re just hidden out of sight in the code of the BIM file or in the script of the database.
  • Just as for multidimensional models, partitions are added to measure groups.
  • The FindByName method is used liberally throughout this code. Using the ID, while more reliable, is definitely not as user friendly. IDs are of the form prefix+GUID where the prefix is the name of the item when it is first created. The ID is rarely (if ever) displayed in the UI. The ID is never updated, even when the object is renamed. So, if you choose to use IDs in your code, your code will be quite unreadable.
  • The DataSourceView is always called Sandbox. This is a hangover from PowerPivot, where the DataSourceView was always hidden from view. Now you will be referring to this mysterious Sandbox object from code.
  • The Cube name is the same as the name supplied in the tabular project properties; if you do not change the default cube name, use the value “Model” here.
  • The Measure Group name is the same as the table name. The Measure Group name is automatically updated whenever the table is renamed, so you can safely use the values you see in the UI for the FindByName call.

The rest of the code is pretty straightforward. Once you have found the place to add the partition, you set its query and storage mode, then update the server and process it. I do have some suggestions for processing strategies in Processing tabular models 101, but more guidance will come on that topic in the future. Stay tuned.
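For comparison, the processing step can also be issued as an XMLA script from Management Studio instead of AMO. A sketch, assuming the object IDs match the names used above; in practice, check the IDs by scripting the database, since (as noted above) generated IDs can differ from the names shown in the UI:

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>hans_rosling_project</DatabaseID>
    <CubeID>Model</CubeID>
    <MeasureGroupID>Demographic Facts</MeasureGroupID>
    <PartitionID>Partition 1800</PartitionID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

This does the same ProcessFull on the single partition, which is handy for scheduling through a SQL Agent job without writing any C#.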

Tabular post round-up

When I worked at Microsoft on the Analysis Services team, I posted a great deal of content about the tabular model on my MSDN blog. For those of you who haven’t seen those posts before, they’re definitely worth a review. Here they are by category.

All about the tabular designer

This is about how the tools are designed. Once you understand how we built the tabular designer, and how tabular projects work under the covers, you can use the tools most effectively.

Core tabular modeling functionality

This is some core conceptual material around tabular model features (security, scale, DirectQuery, etc).

Management, administration, and automation

Obviously this is a space where the Analysis Services team could improve the documentation. I get asked for information about how to manage models all the time. Here is my collection of posts on the topic so far.

Tips and tricks

Finally, a grab bag

Bonus: the DAX Editor

Not a blog post, but a project. The DAX Editor is a Visual Studio add-in for editing those DAX expressions and queries. Pick it up from CodePlex.