Understanding Analysis Services Relationships using Dimension Usage

As a part of designing an Analysis Services cube you must define how each dimension is related to each measure group in your cube through the Dimension Usage tab.  The Dimension Usage tab is found when editing the cube in the cube designer.  The screenshot below shows dimensions on rows and measure groups on columns.  The points at which they intersect define how they are related.  These relationships are automatically defined when the two objects (measure groups and dimensions) are added to the cube, but to manually change a relationship click the ellipsis in the intersecting relationship box.


The first relationship type you should know about really is not a relationship at all.  This type is called No Relationship, as shown in the image below.  Anywhere you find this relationship type it will appear as a greyed-out box at the intersecting points on the Dimension Usage tab.  This relationship type would be used when a dimension has no relationship to a measure group.  For example, if you have a measure group that holds nothing but sales that occur on the internet and a dimension that has employee data, then you are likely going to use the No Relationship type.  Sales that transpire on the internet likely have no employee associated with them, so in this case it makes sense that the cube has no relationship defined.


Keep in mind that having no relationship can cause confusion to end users if they try to browse two unrelated objects.  That’s why it is important to properly train users how to browse the cube.  The measure group property IgnoreUnrelatedDimensions can also help keep end users from getting confused when browsing unrelated cube objects.

The Regular relationship type is the most traditional relationship that exists in data warehouses.  Think of this as a typical one-to-many relationship.  For example, a product dimension holds a unique list of products, but the fact table that it relates to has many instances of that product, one for each time it is sold.  In a traditional data warehouse design the product dimension has a unique key that represents each distinct instance of a product, while the fact table may store that product key several times, once for each transaction that sold the same product.  In this case the product key is a primary key in the product dimension and a foreign key in the fact table.
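As a rough sketch of what this looks like in the relational source (the table and column names here are hypothetical, not from any specific database):

```sql
-- Dimension table: one row per unique product
CREATE TABLE DimProduct (
    ProductKey  INT PRIMARY KEY,      -- surrogate key, unique per product
    ProductName NVARCHAR(50)
);

-- Fact table: one row per sale; the same ProductKey repeats
-- for every transaction that sold that product
CREATE TABLE FactSales (
    SalesKey    INT PRIMARY KEY,
    ProductKey  INT FOREIGN KEY REFERENCES DimProduct(ProductKey),
    SalesAmount MONEY
);
```

The Regular relationship in the Dimension Usage tab simply maps this key pair: the granularity attribute on the dimension side and the matching foreign key column on the fact side.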

The screenshot below shows the FactInternetSales measure group related to the DimCustomer dimension using the CustomerKey.  The diagram in the image depicts the relationship with the yellow table as the fact table and the blue table as the dimension table.


The Fact relationship type is used when a measure group is also used as a dimension.  Sounds a little confusing, right?  Sometimes there are business cases when you not only want to aggregate the data in a measure group but also want to slice by values in it.  When this is a requirement you create the measure group and dimension separately and then relate them in the Dimension Usage tab with a Fact relationship.  This type of relationship is also known as a degenerate dimension.

An example of when you may use this relationship type is a measure group that stores sales orders.  In this case not only do you want to aggregate sales order data but you also want to slice by the Order Numbers in the same table.
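Once the Fact relationship is defined, the same table can be aggregated and sliced at once.  A sketch in MDX (the measure, dimension, and cube names below are assumptions modeled on the Adventure Works sample):

```mdx
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    -- slice the measure group by its own order numbers (the degenerate dimension)
    NON EMPTY [Internet Sales Order Details].[Sales Order Number].Members ON ROWS
FROM
    [Adventure Works]
```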


Referenced relationship types are often used when your data warehouse utilizes a snowflake schema design.  Without getting into a lengthy data warehouse design discussion, a snowflake design differs from a star schema design because it is a more normalized view of the data, with dimensions that “branch” off of each other.  A star schema is a more denormalized view of the data with dimensions all directly relating to the fact table.


Imagine you run a data warehouse for a movie store (yes, I realize movie stores are pretty much extinct).  The fact table (measure group) holds sales that occur in the store, and a dimension holds all the movies that could possibly be sold.  There is another dimension with the movie genre that is related to the movie dimension instead of the fact table.  If I want to have an independent dimension in my cube for genre then I must use a Referenced relationship type to relate it to the appropriate measure groups.  There is no direct relationship between genre and the fact table, which is where the “referenced” part of this relationship becomes relevant.  The cube “hops” through the movie dimension to simulate a direct relationship between genre and the fact table.  The dotted line in the diagram in the images below represents this.  Referenced dimensions are not optimal for performance, so if you can avoid them you should do so.
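A hypothetical relational layout for this snowflake (all names invented for illustration):

```sql
-- Genre branches off the movie dimension rather than the fact table
CREATE TABLE DimGenre (
    GenreKey  INT PRIMARY KEY,
    GenreName NVARCHAR(50)
);

CREATE TABLE DimMovie (
    MovieKey   INT PRIMARY KEY,
    GenreKey   INT FOREIGN KEY REFERENCES DimGenre(GenreKey), -- the "branch"
    MovieTitle NVARCHAR(100)
);

CREATE TABLE FactStoreSales (
    SalesKey    INT PRIMARY KEY,
    MovieKey    INT FOREIGN KEY REFERENCES DimMovie(MovieKey), -- no GenreKey here
    SalesAmount MONEY
);
```

Because FactStoreSales carries no GenreKey of its own, the cube must reach genre through DimMovie, which is exactly what the Referenced relationship describes.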


A Many-to-Many relationship is used when a data warehouse design implements a bridge table between dimensions to appropriately represent all combinations of data.  For example, a dating website has members and members have hobbies.  A member can have many hobbies and a hobby can be associated with multiple members.  To accomplish this in a data warehouse design a bridge table can be used to relate each combination of hobby and member.  This bridge table is often called a factless fact table because it contains no measures of its own, yet when it is brought into the cube it is necessary to define it as a measure group.
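A sketch of the bridge table for the dating-site example (hypothetical names):

```sql
CREATE TABLE DimMember (
    MemberKey  INT PRIMARY KEY,
    MemberName NVARCHAR(50)
);

CREATE TABLE DimHobby (
    HobbyKey  INT PRIMARY KEY,
    HobbyName NVARCHAR(50)
);

-- One row per member/hobby combination; it holds no measures of its own,
-- which is why it is called a factless fact table
CREATE TABLE BridgeMemberHobby (
    MemberKey INT FOREIGN KEY REFERENCES DimMember(MemberKey),
    HobbyKey  INT FOREIGN KEY REFERENCES DimHobby(HobbyKey),
    PRIMARY KEY (MemberKey, HobbyKey)
);
```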


When defining this relationship in the cube, the factless fact table serves as the intermediate measure group that is used to relate the two dimensions to the measure group.


The rarely used Data Mining relationship relates a mining model that was created from a cube dimension to the appropriate measure group.


If you have any other questions about data warehouse design or cube creation feel free to email me at dknight@pragmaticworks.com

Choosing the Right Analysis Services: MOLAP vs. Tabular Recording and Q&A

I hope you were able to attend my free webinar on Choosing the Right Analysis Services: MOLAP vs. Tabular on February 11, 2014 that I delivered with Dustin Ryan.  If you weren’t you can now watch the recording here.

Traditionally following a webinar I try to post a quick Q&A session for everything I was not able to answer in the allotted time.

Q: Since Tabular solutions are in many ways better than Multidimensional, when should you go for a Multidimensional solution?

You would likely still choose Multidimensional for features where Tabular does not have full parity.  For example, Tabular does not have the ability to do writeback.

Some still choose Multidimensional because their data model is very complex and Multidimensional is more forgiving in this regard.  Last, Multidimensional is more scalable for larger datasets because, unlike Tabular, it is less reliant on pure memory to get the performance you need (it still uses memory, CPU, etc. heavily).  As we mentioned in the webinar, Yahoo chose Multidimensional because it would better scale their 26 terabyte cube.

Q: When deciding whether to use Tabular or Multidimensional, do you have to have SSAS installed / set up specifically for one or the other?

Yes, SSAS Multidimensional and Tabular are separate installs, meaning a single instance can’t run in both modes; the SQL Server installer configures one or the other.  Because of their resource demands you traditionally want to install these instances on separate servers if possible.

Q: How do I link tables in Tabular if the key is made up of more than one column?

Great question.  Usually, if you have tables that need to be joined together on multiple columns, you have to create a key column that combines the values into a single column, either through DAX or through the source query extract.
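For example, a sketch of such a combined key as a DAX calculated column (the column names are hypothetical; the delimiter guards against ambiguous concatenations like "12" & "3" versus "1" & "23"):

```dax
= [SalesOrderNumber] & "|" & [SalesOrderLineNumber]
```

Create the same calculated column in both tables and then define the relationship on it.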

SQL Server Analysis Services Webinar

Join me tomorrow, April 2, 2013, at 11:00 PM EST for a free webinar which is part of an entire month of Back to Basics themed free training.  This webinar is for anyone trying to understand the basics behind Analysis Services and how to get started designing a cube.  With the release of SQL Server 2012 there are now two forms of Analysis Services, so we’ll also spend time discussing Multidimensional and Tabular and why you would choose one over the other.

Also, during this webinar I reference a Decision Matrix file to help you with making the decision on which Analysis Services is right for you.  You can find that file here.  If you would like the PowerPoint slides for this session you can download them here.

Register for the webinar here and if you won’t be able to make it at this time all of the webinars are recorded and can be watched at a later date by using the same link.

OLAP PivotTable Extensions Update

OLAP PivotTable Extensions is a really cool free add-in for Excel that’s been around for quite a while now.  It gives you some exposure to what’s going on under the covers (among many other features) while you browse a cube using an Excel PivotTable.  This weekend an updated version was released, so check out the features and download it now!


The new features in this release are:

  • Support for Excel 2013
  • Upgrading to the AS2012 version of ADOMD.NET to now support connections to .bism files. (This upgrade required adding the .NET Framework v3.5 SP1 as a prerequisite. Use a prior version of OLAP PivotTable Extensions if this prerequisite is a problem for your organization.)

SSRS – Using Analysis Services Dynamic Management Views to Build Dynamic Reports

Dynamic Management Views are a very powerful tool that is part of Analysis Services in 2008 and 2008 R2.  They give you the ability to monitor Analysis Services usage and metadata.

MSDN articles on SSAS DMVs – http://msdn.microsoft.com/en-us/library/ms126079.aspx

Good post on using SSAS DMVs – http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/

A report developer could leverage the capability to read Analysis Services metadata to build a report that can dynamically change the type of information that is displayed on a report.

Let’s walk through a basic example of how we can use Dynamic Management Views to build a dynamic Reporting Services report.  I’m starting with a basic report that has already been built off of an Analysis Services cube that looks like this:


As you can see it shows Internet Sales by category over several calendar years.  The MDX query used for this is pretty standard as you can see.

    Select
        [Measures].[Internet Sales Amount] ON Columns
        ,NON EMPTY ([Date].[Calendar].[Calendar Year]
        ,[Product].[Product Categories].[Category]) ON Rows
    From
        [Adventure Works]

I start by changing the query to use an in-session calculation with the WITH clause so I can basically alias it as [Measures].[Generic Measure].  Aliasing the measure will make more sense later, because the field name doesn’t identify with any specific measure.

    With
        Member [Measures].[Generic Measure]
            AS STRTOVALUE("[Measures].[Internet Sales Amount]")
    Select
        [Measures].[Generic Measure] ON Columns
        ,NON EMPTY ([Date].[Calendar].[Calendar Year]
        ,[Product].[Product Categories].[Category]) ON Rows
    From
        [Adventure Works]

The next step is to create a parameter in the report so I can pass in the measure name to the query dynamically.  Eventually I will populate this parameter using a DMV but for now I’ll just have it as a text field that I can type in the measure name.

Now to make this query dynamic with my newly created @MeasureName parameter I replace the dataset query with an expression that looks like this:

="With "
+"    Member [Measures].[Generic Measure] "
+"        AS STRTOVALUE("""+Parameters!MeasureName.Value+""")  "

+"Select "
+"    [Measures].[Generic Measure] ON Columns "
+"    ,NON EMPTY ([Date].[Calendar].[Calendar Year] "
+"    ,[Product].[Product Categories].[Category]) ON Rows "
+"From "
+"    [Adventure Works] "


You’ll notice that I’ve basically changed the query to a string in the expression, except for the measure name, which I’ve changed to use my new parameter.  This report will now work dynamically, but the user would have to type in the measure they would like to see in the report.  Not only would they have to type it but they must know the MDX for it.  For example, users would have to type [Measures].[Internet Tax Amount].  Obviously, that’s not going to happen so we need to make this simpler for the user.

This is where Dynamic Management Views help.  We can use the mdschema_measures DMV to return a list of measures from our cube.  The following query will return all of my measure names, unique MDX names, and format strings:

SELECT Measure_Name, Measure_Unique_Name, Default_Format_String
FROM $System.mdschema_measures
WHERE Cube_Name = 'Adventure Works'
ORDER BY Measure_Name

Unfortunately, not all format strings translate to Reporting Services, but this could be manipulated using expressions.  Using this query we can create a new dataset and have that dataset populate our @MeasureName parameter.  The query cannot be entered into the dataset using the Query Builder because it’s not really MDX.  So you must choose to build an expression on the new dataset and paste in the query.  It does not require an actual expression, so remove the equal sign before the query.


With the dataset completed go back to the @MeasureName properties and change the available values to use the Measure_Unique_Name and Measure_Name fields to populate the parameter dropdown box.


Now that the parameter dropdown box is populated by the DMV query, go ahead and preview the report.  You’ll see you can select any measure from the dropdown box and it will change the information shown on the report as shown below:


SSAS – Display Folders

Display folders are an often-used tool in Analysis Services that helps end users navigate through a cube or dimension so they can find what they need to build a report.  Imagine you develop a cube that has dozens of measures.  For end users to find what they need they would be forced to dig through every measure.  If an end user gets lost while browsing the cube they are a lot less likely to continue using it.

This is where display folders come in handy.  They make it easy for end users to find what they need, and I’ve learned a few tricks that make them even more effective.

Whether it is a measure, calculation or attribute you can view the properties and give the object a display folder as shown below. 


When the user browses the cube there will now be folders that, when organized properly, make it easy to find measures.


Now most people that have developed Analysis Services cubes already know about display folders but the little piece I have to add here is how to nest folders and how one item can be in multiple folders.

To put a measure or an attribute in a nested folder you provide a folder name with a backslash separating each level, for example Folder1\SubFolder:



To put a measure or an attribute in multiple folders you provide a folder name with a semicolon separating each folder, for example Folder1;Folder2:



I hope this helps make life easier for you and your end users so they can find what they want faster when building reports off your cube.

SSAS – Deployment Wizard

The Analysis Services Deployment Wizard is an often underutilized but important tool when moving your OLAP database through your environments.  Often I see developers deploy directly from BIDS to the Development, QA, and Production environments by changing the project properties to point at the different instances.  For multiple reasons this is a bad practice.

  • Being a sloppy developer myself, I’m likely to accidentally deploy to the wrong server.
  • You could unintentionally deploy over Production security
  • You could unintentionally overwrite a data source connection string
  • It doesn’t allow you to script out deployment

The better solution is to use the deployment wizard which will take care of all these problems for you. 

To get started open the deployment wizard by navigating to it in your start menu: Start -> All Programs -> Microsoft SQL Server 2008 R2 (or your current version of SQL Server) -> Analysis Services -> Deployment Wizard.

After the wizard opens hit Next past the welcome screen.  It will then ask you for a database file.  This is a .asdatabase file which can be found in the file structure where you developed your Analysis Services project.  Just look under the /bin folder and you should see a file with the .asdatabase extension.  This will only appear if you’ve built your project, so if you don’t see it then do a build in Visual Studio.  Once you’ve selected the database file hit Next.

The wizard will then ask you which Analysis Server you wish to deploy to.  Simply tell it the server and what you desire the new database to be named.  Note that just because the database field is a dropdown box does not mean you cannot type a different database name into it.  Once this is complete hit Next.


The next screen allows you to control some of the objects that are deployed.  For example the partitions section allows you to either deploy the partitions that are part of current project or you can retain the existing partitions if you are deploying on top of an existing database.  This is useful if you have developed partitions on Production that accommodate a larger dataset than exists on Development.

You can also control security through the Roles and Members section.  For example, it is likely that you have more stringent security on Production than Development.  This section will allow you to deploy all objects either including or excluding the security.  It can even be more granular than that.  Maybe your security roles are the same but the membership is different.  You can optionally choose to deploy just the membership changes.


The Specify Configuration Properties window will allow you to make just about any configuration change you would need to before deploying.  For example, when I deploy to Production the source database may be on a different server than it was on Development.  This screen will allow you to change the connection string of the Data Source.  It can also allow you to change where the data files will be stored.  So if you have everything stored on the C drive in Development but want it stored somewhere else on Production, this wizard will help with that.  You can also use it to change details of Actions.  So if you have a report action that uses a Reporting Services report on your Development server, then you can change it to the Production server here also.  This is a very useful screen.  When you’re happy with the changes click Next.


Next you will be asked to select processing options.  You can either do the actual processing of the Analysis Services database or just deploy metadata changes.  Click Next after deciding if you want to process or not.


The Confirm Deployment window is a little more helpful than it sounds.  Here you can click Next to begin the deployment or check Create deployment script and provide a file location to actually script out the entire deployment so it can be done later.  This is extremely helpful for those of you that must script out all deployment and hand it to Production DBAs.  If you want to do the actual deployment without scripting it out first just click Next without checking the box.


After hitting the final Next the Deployment Wizard will perform the operation you’ve specified with the configuration you provided.  The output will be either a deployed cube or a .xmla file that has scripted the deployment for you to run at a later time.


SSAS – Understanding KeyColumns Dimension Property

For many Analysis Services developers, building dimensions with many attributes and hierarchies is something that comes as second nature.  However, I’ve found that for those who are new to Analysis Services it can be particularly difficult to understand why and when changing the KeyColumns property is necessary.

In a previous post I discussed the need for defining attribute relationships in a dimension.  The side effect of having attribute relationships is that you will often, depending on your data, need to define multiple KeyColumns for the selected attribute.  For example, think about a date dimension, which just about every cube will have.  Your date dimension likely has a hierarchy of Year-Quarter-Month in it, and after you define attribute relationships the dimension will no longer process successfully.  The error message you receive looks something like:

Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: 'dbo_DimDate', Column: 'CalendarQuarter', Value: '4'. The attribute is 'Quarter'.

So what is this telling us is wrong with the Quarter attribute?  Let’s take a look at the data to find out.
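A quick query against the relational source confirms the problem (the table and column names are taken from the error message above; adjust for your own schema):

```sql
-- Quarter values that appear under more than one year cause
-- the duplicate attribute key error
SELECT CalendarQuarter, COUNT(DISTINCT CalendarYear) AS YearsSeen
FROM dbo.DimDate
GROUP BY CalendarQuarter
HAVING COUNT(DISTINCT CalendarYear) > 1;
```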

Notice in the table that the Quarters 1, 2, 3 and 4 appear multiple times in different years.  Quarter 1 appears in both 2006 and 2007.  This is a problem because after defining attribute relationships we told Analysis Services that Quarter relates to Year, but because our data shows a quarter can relate to multiple years we have to be more specific.  For this date dimension that means having both Year and Quarter as the key.  This is where KeyColumns come in.  KeyColumns allow you to define how each attribute will work internally within Analysis Services.  Because Quarter is not unique by itself, we have to create a composite key between Year and Quarter to no longer receive the duplicate attribute key error.  To make the composite key you select the attribute to alter, in this case Quarter, and go to the properties (F4).  Navigate to the KeyColumns property and then hit the ellipsis.

This opens the Key Columns dialog box.  Simply add Year to your key column and then hit OK.  You can change the order of the Key Columns but that is generally done just for sorting purposes.

After making a composite key you will see a red line under the attribute (shown below), which is stating that you must define the NameColumn property.  Because we now have a composite key Analysis Services doesn’t know what to show the users, so this property defines what the users should see.  Select the desired column to display to users then hit OK.

The duplicate attribute key warning will also occur with the Month attribute in my situation.  I would have to follow the same method that I did above to solve the problem for Month.

The other way to solve this problem would be to change the data.  So instead of having 1, 2, 3 and 4 for Quarter I could instead have 012005, 022005, 032005, 042005, 012006, 022006, 032006, 042006, etc.
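A sketch of how those combined values could be produced in the source query or view (assuming the DimDate columns from the earlier error message):

```sql
-- Build a quarter value that is already unique across years, e.g. 012005
SELECT
    CalendarYear,
    CalendarQuarter,
    RIGHT('0' + CAST(CalendarQuarter AS VARCHAR(2)), 2)
        + CAST(CalendarYear AS VARCHAR(4)) AS UniqueQuarter
FROM dbo.DimDate;
```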

SSAS – Creating and Using a Writeback Measure Group

Imagine you have spent the last 9 months developing a sales Data Warehouse and an Analysis Services cube.  Your end users love what you have built for them so far but it’s never good enough right?  You have a new requirement to allow your users to enter in projected or target sales for future months.  You think about your options for a while and narrow it down to 3 possible ways to solve the problem.

Option 1:  Have the users enter the targets into an excel spreadsheet and then import the data into your warehouse using a tool like SSIS.

Option 2:  Have the users enter the targets into a SharePoint list and then import the data into your warehouse using a tool like SSIS.

Option 3:  Enable writeback on the measure group’s partition so the users can enter the data in Excel and it will automatically write the data back to the cube and a separate writeback table in a SQL Server database also.

Each of these options has benefits and disadvantages, but for the purpose of this post I will focus on option 3 (even writeback has disadvantages).  The goal is to allow users to enter targets with the least amount of effort and maintenance, and option 3 accomplishes this for the most part.

To get started you need to first add the table, view, or even a named query that stores the granularity at which targets will be entered.  For example, sales are probably recorded at a daily level but budgeting for target sales is likely done at a month level.  So the data source view may look something like the screenshot below.

The highlighted object is a named query (this could be a physical table or view) called TargetCategorySales that stores the proper grain at which targets will be entered, plus a placeholder column for the soon-to-be-entered targets.  The other non-highlighted tables are the supporting dimensions and the actual sales.

Next, add the TargetCategorySales table as a measure group to the cube.  After adding the new measure group in the Cube Structure tab in the cube designer, make your way over to the Dimension Usage tab and ensure the relationships are properly defined between the dimensions and the new measure group.

Notice here you will see the granularity is different for each measure group.  The actual sales can be tracked all the way down to an individual product and a day, while the targets are only tracked to a category and month.  It’s probably a good idea to go ahead and reprocess now to ensure everything is still working properly.

Next, go to the Partitions tab and right click on the target measure group and click Writeback Settings as shown below.

This brings up a dialog box that will create a new SQL Server table that will store the data changes made by the user in Excel.  Name the table whatever you want then click OK.  This table will not be actually created until the next time you process.  So go ahead and process and you will see the new table in the database you selected in the dialog box earlier.  This table will be empty until the user enters in targets in Excel.

The user now will open Excel and create a pivot table with the proper granularity, in this case the month and category level.  It’s probably a good idea to create a template for the users that has the appropriate granularity already set in the pivot table.  After creating a proper Excel template as shown below you can enable writeback in Excel 2010 by clicking Enable What-If-Analysis under the PivotTable Options menu.  You’ll also probably want to go ahead and click Automatically Calculate Changes under the same menu, which will apply your changes in Excel as you go.

Now you’re ready to start adding some targets!  All you have to do is type right on top of the existing numbers in the pivot table.  After typing in your target values go back to the What-If-Analysis menu and click Publish Changes.

Once these changes are published, the data will appear in the cube for browsing and comparing to actuals.  You can go back and look at the writeback table to see all of the changes that were recorded as shown below.

Now after a while this writeback table can get very large so you will eventually want to integrate these targets back into the data warehouse using some kind of ETL process.
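That ETL step could be as simple as this T-SQL sketch (all table and column names here are hypothetical):

```sql
-- Fold the accumulated writeback deltas into the warehouse target table,
-- then clear the writeback table so it starts collecting fresh changes
INSERT INTO dbo.TargetCategorySales (MonthKey, CategoryKey, TargetAmount)
SELECT MonthKey, CategoryKey, SUM(TargetAmount)
FROM dbo.WritebackTargetSales
GROUP BY MonthKey, CategoryKey;

TRUNCATE TABLE dbo.WritebackTargetSales;
```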

SSAS – Using Dimension OrderBy AttributeKey

Often when building dimensions there is a need to order your dimension attributes by something other than the NameColumn or KeyColumn properties.  An example of this may be an accounting structure that is preferred to be viewed in a certain order.  The data doesn’t represent that order well, so you create a sort column that does it for you.  The sort column can be in a table or just in a SQL Server view if you prefer.
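For example, a view supplying the sort column might look like this (the account names and ordering are hypothetical):

```sql
-- Expose an explicit sort order alongside the natural account columns
CREATE VIEW dbo.vwDimAccount AS
SELECT
    AccountKey,
    AccountName,
    CASE AccountName
        WHEN 'Revenue'  THEN 1
        WHEN 'Expenses' THEN 2
        WHEN 'Profit'   THEN 3
        ELSE 99
    END AS SortOrder
FROM dbo.DimAccount;
```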

Once you’ve created this field in the table or view you can utilize it in your dimension design by bringing it in as an attribute.  Because this field is used for nothing more than sorting, you can set the new attribute’s AttributeHierarchyVisible and AttributeHierarchyEnabled properties to False.  Next go to the Attribute Relationships tab in the dimension designer and right-click on the attribute you wish to sort.  Select New Attribute Relationship, which will bring up the screen shown below.  Set the Related Attribute to the column you want to sort by, then click OK.

Create Attribute Relationship

The Attribute Relationships should now look something like this:


The last step is to return to the Dimension Structure tab and select the field you want to sort.  In my case I selected English Product Name and then opened the properties menu by hitting F4.  The two properties you need to change are the OrderBy property, which you will change to AttributeKey, and the OrderByAttribute property, which you will set to the Sort field you created earlier.


Now after processing, when you browse this attribute it will be sorted by the sorting attribute you created.

