Preparing a Technical Session Part 1: Picking a Topic

So you’ve decided, or perhaps were told, to do a technical presentation. If this is something new for you, you may be going through a variety of emotions. You may start excited in anticipation of the event, but that quickly changes to anxiety when you realize all the work ahead of you.

Delivering a presentation, regardless of the subject matter, can be a challenge. Even if you’re a seasoned speaker, there are several steps on the way to completing a successful presentation.

In this blog series on preparing a technical session I’ll cover the following steps:

  1. Picking a Topic
  2. Coming up with a Title
  3. Writing an Abstract
  4. Building the PowerPoint
  5. Building the Demos
  6. Delivering the Presentation

While discussing these I’ll be sharing not only how I personally go through this process but also feedback I’ve gathered from peers. The good news is that with more experience these steps will likely flow more naturally for you, hopefully with less stress.

My goal is to guide those who are new to presenting through the process, help them understand what to expect, and hopefully help grow a larger pool of speakers at events.

Picking a Topic

When you submit to a major conference, deciding which topic to focus on is your first step. This step is critical because the idea you come up with will impact the rest of your preparation. If you’re in need of a topic and have the equivalent of “writer’s block”, here are some tips to help your brainstorming process.

Talk About Your Passion

Have you ever been assigned a presentation topic that you’re not really passionate about? This may happen more in a corporate environment, when you’re given a topic that simply has to be covered with co-workers. When you’re not excited about a topic, it can often show in your preparation and delivery of the content.

If you are passionate about a topic, you’ll more than likely write a more compelling abstract, be more proactive about content development, and even deliver the information in a way that connects better with the audience. Keep in mind that some people may not be passionate about the same things you are, so try not to be offended when others don’t share your excitement about a topic.

Present On What You Know

This seems obvious, but you’d be surprised how many new speakers pick topics that are completely out of their comfort zone. If you’re a new presenter, this whole process may already be foreign to you, so don’t add extra pressure on yourself by trying to learn a completely new topic at the same time. Having said that, I do see some experienced presenters occasionally pick topics that aren’t completely new to them but will certainly challenge them to learn a few new skills. In short: if you’re new at this, pick a topic you know well, and if you’ve been doing this for a number of years, do what works for you!

Use Things You’ve Done At Work

Give yourself some credit. You’re smart and pretty good at what you do! I bet you’ve come up with some pretty inventive ways of solving problems while at work. Why not share some of the design patterns you’ve used to help others?

Don’t worry, I’m not suggesting that you do anything that would hurt your company and potentially cause you to lose your job. I bet the problems you experience at work are the same kinds of problems others are experiencing. Take your solution and generalize the details, including the data, so that it doesn’t matter where you work. The other benefit is that these topics are often the most popular because they’re based on real-world problem solving. My number one goal when I attend a session is to figure out how I can use what I just learned when I get back to the office. What better way to deliver that than by showing problems you’ve actually solved at work?

Is Anyone Else Interested?

If you’re debating whether or not a topic would get much interest, then ask! Take to social media with a poll of topics you’re thinking about presenting on and see what people like best. Not only do you get valuable information back, but you’re also doing a little early promotion for your session.

Journal Topic Ideas

Ideas can come at any moment. If you’re not prepared, you could have a stroke of brilliance and, before you know it, forget it. Be prepared and keep a pen and paper handy, or if you’re living in this decade, sign up for Evernote or OneNote and log your topic ideas in a digital journal.

PASS Summit 2014 in Review

I’m finally completely recovered from a great week in Seattle last week for PASS Summit.  Now that it’s back to the grind of regular work I thought I’d put together some thoughts and tell you about my experiences from the week. 

You may have read others’ blogs about their experiences during the conference.  I always love how everyone may do very different things while at this conference but still have a great time!

My week started early as I arrived in Seattle on Saturday to prepare for MVP Summit, which was going on at the same time.  There were some mixed opinions about these events overlapping, but I liked it.  I think this helped many of those from out of the country justify coming to both events in a single trip rather than making two expensive and time-consuming trips to Seattle.  MVP Summit started Sunday for SQL Server MVPs and ended a bit early for me, on Monday, because I delivered a Pre-Con on Tuesday.  Once our day in Redmond was done on Monday I went and registered for the PASS Summit in Seattle and called it an early night, knowing that I had a full day of talking during my Pre-Con coming up the next day.


This was a very full day, as Brian Knight (Blog | Twitter) and I taught a full day of SSIS: Problem, Design, Solution in our Pre-Con.  The Pre-Con was well attended with about 140 attendees.  I really enjoyed teaching, and even attending, sessions like these that focus on solving problems.  There are many reasons why I attend PASS Summit, but one of the biggest is that I want to learn things I can immediately use to help me solve problems when I get back to work.  I think we accomplished that with this session.  It was very focused on problems and different ways to solve them using SSIS.

Brian and I have done many presentations together over time, but we’re still always learning.  One of the things we learned from feedback last year was to provide a PowerPoint deck with a little more substance.  PASS prints these decks for attendees to keep and take notes on.  While a single-image slide with verbal discussion comes across great in a presentation, it’s not effective when the slides are printed.  By the way, I still like the single-image slide for presentations, just not when you know the slides are printed for attendees.

Feedback on the session this year seemed to be pretty good but I’ll know for sure when the reviews are released.

Once the Pre-con was done I dropped my things off at the hotel and came back to the conference for the Welcome Reception.

Later that night I went to the Networking Party that was moved to Yard House.  I stayed and talked for a bit and met some new people.  Unfortunately it looked like they weren’t ready to handle that volume of people at once, so I ordered a meal from somewhere else to end my night.


On Wednesday I started my morning after breakfast by attending the Keynote featuring T.K. “Ranga” Rengarajan, James Phillips, and Joseph Sirosh from Microsoft.  There were definitely some interesting things announced during this Keynote, including the Power BI Dashboard Preview coming soon.  You can still watch the Keynote on PASStv.

Wednesday I had to deliver my session towards the end of the day, so I wasn’t able to attend as many sessions as I would have liked while I did some practice runs of my own. I did, however, attend Bradley Ball’s (Blog | Twitter) session on Using PowerShell to Manage Cloud Integrated Data Platforms. Brad always does a great job and shows his passion for the technology.  He showed many of the capabilities that PowerShell has for integrating with Azure.

Towards the end of the conference day I sat in on the session prior to mine because I was a bit worried there would be a lot of crossover with my topic, since it was Power View related too.  Luckily there was very little overlap.  I did my session on Creating an End to End Reporting Solution with Power View.  I think it went pretty well, even with Excel crashing on me.  I always have a backup ready :).  There was a lot of interest in Power View with SharePoint and how Multidimensional cubes interact with Power View, so I saved some time towards the end for that discussion.

Immediately after my session I ran down to the Expo Hall for the Exhibitor Party where I was doing a book signing for Pragmatic Works.  We do this every year and it’s always a blast to meet so many people and give away free books!

Here’s a video Jeremy from our team took showing the line that built up.

After the Exhibitor Party Pragmatic Works hosted our annual karaoke party at the Hard Rock with a live band.  I always have fun here and was able to catch up with friends.



Thursday I started my day by attending the Keynote by Dr. Rimma Nehme. This was possibly one of the best Keynotes I’ve attended.  Her explanation of cloud computing was done in a way that I think even my sales team would be comfortable with.

You can watch the Keynote here or read Steve Jones’ (Blog | Twitter) summary here.

In between networking with a few new folks I also attended two sessions: The BI Power Hour and Top Five Power Query M Functions That You Don’t Know.

The BI Power Hour is always a fun “infotainment” type session with many of the members of the Power BI product team.

I also really enjoyed the Top Five Power Query M Functions session.  I’ve always really liked Power Query but Reza Rad (Blog | Twitter) showed me a few new tricks!

I ended my day by attending the PASS Community Appreciation Party at the EMP (Experience Music Project) Museum.  This place is always fun, and of course they had live band-led karaoke.


Friday was my day to work the Pragmatic Works booth, so my morning was eaten up by that. To end the day I sat in on the Speaker Idol competition. The winner of the final round would be guaranteed a session at PASS Summit 2015. I was curious to watch this because we do a version of it locally for the user group in Jacksonville. Here’s Jason Carter (Blog | Twitter) doing his presentation.


I had a late flight home and actually didn’t end up back in Jacksonville until 11 AM Saturday morning… Long day!

If you’ve never been to a PASS Summit before, I highly recommend it.  It is an event run by the community, which makes it unique.  While there are sponsorships and partnerships, it’s community run, so you hear from and meet a lot of people who are going through the same things you are.  Next year’s PASS Summit will be Oct 27-30. I hope to see you there!

Creating a Reporting Services Subscription

Subscriptions are a great feature in Reporting Services that will run a report unattended and deliver it to users either by email or Windows File Share. Subscriptions can also be scheduled so if your end users need a report monthly you can automate the delivery process. The steps given in this article will show you how to configure subscriptions using the Windows File Share delivery method.

Before you start to configure your first subscription make sure to have the following taken care of first:

Start the SQL Server Agent

When you installed Reporting Services it also created the ReportServer database. This database stores all subscription related data among other things. Ensure the SQL Server Agent is running on the database engine that has the ReportServer database.

To start the SQL Server Agent, either log in to the database engine in SQL Server Management Studio or open SQL Server Configuration Manager; in either tool, right-click the SQL Server Agent and select Start.


Enable Sharing on the Delivery Folder

If you have decided to deliver reports via Windows File Share then you must configure the folder that will accept the reports for sharing. Navigate to the folder that you want to be used with the Subscription and right-click on it then select Properties. Go to the Sharing property tab to enable sharing.


Use SQL Server Authentication on data sources used in report

To enable Subscriptions on a report you must have a SQL Server Authenticated account used to access the data sources. In SQL Server Management Studio, create a new SQL login that has db_datareader access to the databases used in the report data sources.
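As a sketch, the login could be created with T-SQL like the following; the login name ReportReader and the database name AdventureWorks are hypothetical placeholders for your own names.

```sql
-- Hypothetical names: replace ReportReader and AdventureWorks with your own.
USE [master];
CREATE LOGIN [ReportReader] WITH PASSWORD = N'StrongP@ssw0rd!';

-- Map the login into the database used by the report data source
-- and grant it read-only access.
USE [AdventureWorks];
CREATE USER [ReportReader] FOR LOGIN [ReportReader];
EXEC sp_addrolemember N'db_datareader', N'ReportReader';
```

Repeat the CREATE USER and sp_addrolemember steps for each database referenced by the report's data sources.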


After you have created this account change the report data sources to make sure it is being used in the Report Manager.


Now that these prerequisites have been taken care of, you can create a new Subscription. Click on the report you want to add a Subscription to, then click New Subscription in the report toolbar. If you do not see the New Subscription option, you may need to have your privileges elevated.


There are several options you have when creating a Subscription:

Delivered by – Can be either Windows File Share or Email. If you have not set up your SMTP server to send email, then your only option is Windows File Share. The SMTP configuration is set up in the Reporting Services Configuration Manager.

File Name – What you want the created file to be called. There is also a checkbox for “Add a file extension when the file is created”, which is checked by default. There may be some special circumstances where you do not want the file extension, such as a work email system that does not accept attachments with an .xml extension.

Path – The file path where the file should be sent to. It must be written as the full UNC path (\\ServerName\FolderName). For example, my path would be \\Devin-PC\Reports.

Render Format – How you want the format to be rendered. Your options are XML file with report data, CSV (comma delimited), Acrobat (PDF) file, HTML 4.0, MHTML (web archive), Excel, RPL renderer, TIFF file, or Word.

Credentials used to access the file share – This must be a login that you set up to have access to the file share in the earlier prerequisites.

Overwrite options – Overwrite an existing file with a newer version is set by default and will always replace a file if they are named the same. Do not overwrite the file if a previous version exists will keep the original file and cause the subscription to fail. Increment file names as newer versions are added will continue to add a new file if one already exists with the same name but will increment it (ReportName_1.pdf, ReportName_2.pdf, ReportName_3.pdf).


You can also click Select Schedule to design a set time for the Subscription to run, and configure when the schedule should start and how long it should continue. Once you have set the schedule you want, click OK.


This will return you to the previous screen, where you can lastly decide to pass different values to the report parameters before you finalize the Subscription by clicking OK.

On the Subscription tab of the report you should see all the Subscriptions that have been created for this report.


When the Subscription runs you should see the file now in your shared folder. If you open Management Studio and take a look at the SQL Agent jobs you will see there is a new one for the Subscription you created. Here you can run the Subscription manually so you do not have to wait for the scheduled time.
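If you'd rather find and run the job from T-SQL, a sketch like this should work; the report name 'Sales Summary' is a hypothetical example, and the Subscriptions, ReportSchedule and Catalog tables live in the ReportServer catalog database.

```sql
-- Find the Agent job that backs a subscription. The ScheduleID GUID
-- doubles as the SQL Agent job name.
SELECT  c.Name        AS ReportName,
        s.LastStatus,
        s.LastRunTime,
        rs.ScheduleID AS AgentJobName
FROM    ReportServer.dbo.Subscriptions  AS s
JOIN    ReportServer.dbo.ReportSchedule AS rs ON rs.SubscriptionID = s.SubscriptionID
JOIN    ReportServer.dbo.Catalog        AS c  ON c.ItemID = s.Report_OID
WHERE   c.Name = 'Sales Summary';

-- Run the subscription immediately instead of waiting for the schedule:
EXEC msdb.dbo.sp_start_job @job_name = 'paste-the-ScheduleID-guid-here';
```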


WARNING: Do not change any settings of this job. Even just changing the name could cause the Subscription to fail.

Optimizing Power BI Q&A with Synonyms and Phrasing using Cloud Modeling

If you’ve used or even just seen a demo of Power BI Q&A, you’ve likely seen there’s great potential in the feature, even for low-tech users. I usually say, “If the user knows how to use a search engine, then they can interact with Q&A.” You can read the basics of how Power BI Q&A works in my previous post here.

While having this capability is great, it can take some fine tuning to perfect for the users interacting with it. Power BI now provides you with the ability to optimize your models for Q&A, and it can all be done directly from the Power BI site with what Microsoft is calling Cloud Modeling.

Cloud Modeling gives you the ability to add synonyms and phrasings to your Power Pivot workbook from the web interface in Power BI sites.  Let’s look at a couple of scenarios that show why synonyms and phrasings are necessary and how these features can solve the problem.

The Problem

Let’s discuss the problem through a simple example.


This data model is designed to show US Presidential Election data. When this is added to Power BI Q&A users will likely ask questions like “How many votes by party and state” or “Which candidate won the election”.

Data consumers that interact with Power BI Q&A will often ask questions that do not correlate to the exact table or column names that are actually in your Power Pivot data model. Sometimes users will also use linguistic terms that are difficult for a computer to comprehend.

For example, take the sample questions I gave a moment ago.  Let’s start with the question “How many votes by party and state”. The issue with this question is that it references terms like votes and party. Looking at the data model, you’ll find that no columns with those names exist, so asking this question will return no results.

The question “Which candidate won the election” has a different issue to address. In this case we’re using the verb ‘won’ to describe a relationship between a candidate and an election. Because this verb appears nowhere in the data model, we would get zero results returned.

Now let’s look at how these problems can be solved with synonyms and phrasings.

The Solution


To solve our first problem with the question “How many votes by party and state”, we need to create synonyms on the appropriate columns in the data model. Synonyms are like aliases that can be created for both tables and columns inside your Power Pivot data model.

You should create synonyms on your tables and columns for the different ways that people would ask for your data. For example, in my question “How many votes by party and state” I would need to create the following synonyms in my data model:


Power Pivot Model Name – Synonym to Create
Total Popular Vote – Votes
Party Name – Party

There are two locations that you can create synonyms on your model. You can either launch the Excel desktop client (Office 365 only) and make the change in the Power Pivot model or you can do it using the Cloud Modeling approach on the Power BI site under the Power BI Site Settings.

Synonyms in Excel

Let’s look first at the Excel approach.  When looking at your Power Pivot model in Excel, you can add synonyms from the Advanced tab by clicking the Synonyms button.  This button only appears if you are using the Office 365 version of Excel.


This will launch the diagram view and a Synonyms pane.  Select the table you wish to create your alias on and then type the synonyms for either the table or columns you wish.


Once you’ve done this the next time you deploy your changes to Power BI you can ask questions that utilize the synonyms you’ve created.

Synonyms in Cloud Modeling

While defining synonyms inside of Excel is nice, I honestly could see people more often defining them using the Cloud Modeling approach. To use this approach you must first deploy a workbook to Power BI and enable it for Q&A.  I discussed how to do these steps in my previous post here.

Once the workbook is in Power BI, go to the Power BI Site Settings page as shown below.  This is also where you will go to configure Phrasing, which we’ll discuss later.


Select Q&A.


Then find your workbook and click the ellipsis next to it.  Select Optimize for Q&A to launch the Cloud Modeling window.


You can use this window to ask questions using Q&A, but also to enhance the model using the Cloud Modeling options in the right pane. Notice here I asked my question “How many votes by party and state” but did not get any results. In fact, part of my question, “How many votes…”, is grayed out because Q&A was not able to find anything in my Power Pivot model that matched it.

I know I have data for votes in my model, but the original column likely had a different name. To fix this I can add a synonym by clicking the Synonyms tab in the Cloud Modeling pane.

Just like we saw earlier in Excel, you select the column that you wish to create a synonym for and type a comma-separated list of values you would like as aliases.  You’ll notice here that as soon as you type in the synonyms and click away, Power BI Q&A is now able to answer the question.

When you’re happy with the synonyms you have created, you will need to hit Save to send the changes back into the Power Pivot workbook.


Phrasing

The other problem we discussed earlier was the extra words we may use to describe the relationship between things. Take the example from earlier: “Which candidate won the election”. The problem is how to define what it means for someone to ‘win’ an election.

Creating synonyms would not help solve this problem because ‘won’ is not simply an alias for something else. In this case the term is used when talking about our data to define a relationship between two entities: candidate and election. To solve this problem we must create a Phrasing.

Phrasing can only be done from the Cloud Modeling pane (not Excel) on the Power BI site that we just looked at in the previous example. So from the Cloud Modeling pane I’ll click Phrasing and ask the question “Which candidate won the election”.

This does return results, but you’ll notice that the word ‘won’ is grayed out, meaning Power BI Q&A can’t figure out what to do with it. So the results just show me all candidates that have values in my election table.


We need to add a Phrasing to define what ‘won’ means. In the Cloud Modeling pane I click Add Phrasing and type “Candidates win elections” then click OK. This helps and defines the relationship but to take it a step further I can click on Show Advanced Options to define the threshold for a ‘win’.

To win a US presidential election a candidate must have greater than 270 electoral votes (sorry, non-US citizens, no time to explain the dynamics of our election process). So I set up a condition defining that for someone to ‘win’ an election they must have more than 270 electoral votes.


This changes my results to only show candidates who have won elections.  To verify this is correct, I change my question to “Which candidate won the election in 2008” and, as you would expect, the only result returned is Barack Obama.


With these changes made, click Save to ensure they’re pushed back to the workbook. You’ll also notice there’s an option here to export or import the configuration you create. Perhaps you could use this to create your Synonyms and Phrasings on a development site and import them into production.

With these changes implemented your users should be able to ask typical questions of the data without you worrying about poor results.  Hope you enjoyed this tour of Cloud Modeling.

Analysis Services Partitions

Partitions are a great feature in Analysis Services that allow you to split measure groups into separate physical files. All measure groups have one partition by default, but by splitting that partition you will gain improved query and processing performance.

Partitions can be split in any way you see fit. Many people choose to separate measure group data by date. This makes sense because just about every fact table is going to have a date dimension associated with it. It also makes for a clear way to draw a line in the sand where each partition can be separated. For example, say your company has three years’ worth of sales data, so you decide to split the sales measure group into three partitions, one for each year. Depending on the sales volume you may decide to split it even further, down to each quarter or even each day.

Multiple partitions can only be deployed to a server running SQL Server Enterprise edition. However, if you are developing on a machine that uses Standard but will be deploying to Enterprise, you can emulate developing in Enterprise by right-clicking on the project file in the Solution Explorer and clicking Properties. Here you can change the edition of SQL Server you are deploying to, which will also change the restrictions in BIDS on what you can and cannot do.


So why would you want to add partitions to your cube? The leading reason is to increase performance. How does adding partitions actually help? It helps in several ways.

Query Performance

Query performance is increased because rather than scanning an entire measure group, Analysis Services can isolate a single partition to search. It can also search multiple partitions in parallel if need be.

Processing Performance

Partitions will also increase processing performance, for the same reason they help querying. Instead of processing the entire measure group, you can process just your most current data, skipping the partitions that store data that is years old and doesn’t need to be reprocessed. Analysis Services will also allow you to process partitions in parallel, and when processing the cube you can select exactly which partitions to process.


My recommendation for setting up partitions is to not let a measure group exceed 50 million rows before creating a second partition. This is very subjective though; 50 million rows is the maximum you would ever want to see in a partition, and there is nothing wrong with creating a partition before you reach that number. You could decide to create a new partition after one million rows. I also generally recommend using the default MOLAP storage mode.
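One way to check where you stand against that threshold is to count fact rows by year. This sketch assumes a FactInternetSales table with an OrderDateKey stored as an integer in YYYYMMDD form, like the examples later in this article:

```sql
-- Row counts per year, for deciding where to draw partition boundaries.
-- OrderDateKey is assumed to be an integer such as 20021231 (YYYYMMDD).
SELECT  OrderDateKey / 10000 AS OrderYear,
        COUNT(*)             AS RowsInYear
FROM    dbo.FactInternetSales
GROUP BY OrderDateKey / 10000
ORDER BY OrderYear;
```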


Here are the very basic differences between the different storage modes:

MOLAP – Data and aggregations are stored in multidimensional format. This makes for slower processing time but faster query time. The cube must be reprocessed to get updated data.

Scheduled MOLAP – Same as MOLAP, but the cube refreshes every 24 hours.

Automatic MOLAP – Same as MOLAP, but updates in the relational database raise events that trigger a cube refresh.

Medium Latency MOLAP – Same as MOLAP, but updates in the relational database trigger a switch to Real-time ROLAP while the cube is processing. When the cube completes processing it returns to MOLAP. Default latency is 4 hours.

Low Latency MOLAP – Same as Medium Latency, but the default latency is 30 minutes.

Real-time HOLAP – Data is stored in the relational data source but aggregations are stored in multidimensional format. Fast for processing but slower for querying.

Real-time ROLAP – Data and aggregations are stored in the relational data source. Fastest for processing but very slow for querying. No data latency.


Now that you know what partitions are and why they are helpful, let’s go through the steps of splitting a measure group into two partitions. In this example you want to place internet sales that occurred before 2003 into a partition called Internet Sales Old, and everything after that date into a partition called Internet Sales New.

Step One – Restrict Rows to Original Partition

Remember, all measure groups have at least one partition by default. Before creating a new partition you must first change the old partition to restrict which rows are brought back. If you don’t do this before trying to create a new partition, you will get the following warning:


Click on the Source for the original partition so you can restrict the rows that it stores.


· Change the Binding type from Table Binding to Query Binding. You could leave this as Table Binding if you separate your measure group data into multiple tables or views.

· When you change this property to Query Binding, it automatically provides the query that returns the table, with a blank WHERE clause at the end. Remember, we want this partition to hold all data prior to 2003, so the WHERE clause needs to be changed to only return data prior to that date (e.g. WHERE OrderDateKey <= 20021231). Once the query has been changed, hit OK.


Step Two – Create a Second Partition

Rows are now being restricted from your first partition so you can now create a new one.

· Select New Partition under the measure group that is ready for a second partition and hit Next to start the Wizard.


· Check the table(s) used for this measure group that should be used in this partition then hit Next.

· Check the box that reads Specify a query to restrict rows and add to the WHERE clause like you did in step one, this time bringing back all the dates after 2002 (WHERE OrderDateKey > 20021231). Be very careful when writing these WHERE clauses. If you accidentally used >= instead of >, there would be overlapping sales data for 20021231. You could also accidentally exclude a day if the first partition used < and the second partition used >, which would leave one day’s worth of data out of the measure group. At the bottom of the dialog box you will see a warning describing the possibility of overlap and missing days. Hit Next when you finish typing the query.


· You can select a storage location other than the default if you would like, then hit Next.

· On the last screen you can give the partition a name, like Internet Sales New, and decide whether you want to design aggregations now or later. I will write a second article explaining aggregations, so select Design aggregations later. After you have named the partition, hit Finish.

· Rename the original partition Internet Sales Old
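Putting the two steps together, the pair of partition queries should split the fact table with no overlap and no gap. Here is a sketch against the FactInternetSales table the example implies:

```sql
-- Partition "Internet Sales Old": everything through 2002-12-31.
SELECT * FROM dbo.FactInternetSales
WHERE OrderDateKey <= 20021231;

-- Partition "Internet Sales New": everything from 2003-01-01 on.
SELECT * FROM dbo.FactInternetSales
WHERE OrderDateKey > 20021231;

-- Pairing <= with > covers every key exactly once. Pairing < with >
-- would drop 20021231, while <= with >= would double count it.
```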


Now you can make optional changes to the Storage Settings that were discussed earlier in the article under the recommendations section. You may also find an option called Enable Proactive Caching in the Storage Settings, which will be discussed in a future article. This is a great first step to performance tuning your cube.

Analysis Services Dynamic Security

Imagine you have a requirement to capture the user accounts that log into your cube, and based on their username they will only be permitted to see specific data.  This can be accomplished by creating dynamic, or table based, security in Analysis Services, which is much less difficult to set up than it sounds.  In this article I will walk you through the setup needed on both the relational database side and the Analysis Services side to get this working in your environment.

You will start by identifying which attributes you want to base your security on.  For example, maybe I want User1 to only be able to see the sales for the states Florida, Georgia and Louisiana, while User2 can see California, Oregon and Washington.   Typically, to create this security I would create a role for each grouping of states and add the appropriate users.  An easier way to maintain security like this is to store all the information in relational tables, so that a DBA who may not be as familiar with Analysis Services can easily handle adding new users or modifying existing ones.  The table structure would look something like this:


FactSales stores all the company sales, while DimState has all the possible distinct states that sales can occur in.  This is a very small scale example, but here’s in detail what the DimUser and UserStateBridge tables store:


DimUser simply has a key, which is an identity column in this case, and an account column holding the user account you wish to grant access.  Example records would look like this:

UserID Account
1 Domain\User1
2 Domain\User2


UserStateBridge is a table that has a many-to-many relationship between the DimUser and DimState tables.  This many-to-many relationship allows one user to see many states and one state to accept many users.  Here's a sample of what this table may look like:

UserID StateID
1 5
1 27
1 8
1 14
2 42
2 27
2 6

I’m not going to show you what the State dimension would look like because it is just a typical dimension you see day to day.  The StateID here is the identity column from the DimState table.
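Before moving into Analysis Services, it can be worth sanity checking the relational side. A query along these lines (the StateName column is assumed from the sketch above) returns the states a given account should be able to see:

```sql
-- Which states is Domain\User1 allowed to see?
SELECT s.StateID, s.StateName
FROM dbo.DimUser u
JOIN dbo.UserStateBridge b ON b.UserID = u.UserID
JOIN dbo.DimState s       ON s.StateID = b.StateID
WHERE u.Account = 'Domain\User1';
```

If this query returns the wrong rows, the cube security built on top of these tables will be wrong too, so it pays to verify it first.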

Once this table structure is created your next step is to bring it into Analysis Services.  Here are the steps to follow inside BIDS to leverage these tables (I’ll assume you’ve already created a basic cube first):

· Add these new tables (DimUser and UserStateBridge) to your Data Source View.

· Create a new dimension based off the DimUser table.  Hide all the columns by changing AttributeHierarchyVisible to False.  You don’t want users to be able to view this dimension.  Process this dimension.

· Create a new Measure Group in the cube based off the UserStateBridge table.  Delete all measures it creates except for the automatically generated Count measure.  Hide the remaining measure by selecting it and changing its Visible property to False.  This will hide the entire Measure Group from your users, but you can still use it when writing MDX.

· Ensure these two have a Regular relationship set up on the Dimension Usage tab of the cube designer.  Often this relationship is created automatically.


· In the Solution Explorer, right-click on Roles and select New Role.  Give Read access in the General, Data Sources and Cubes tabs.  On the Membership tab add any groups or user accounts this Role will affect.

· On the Dimension Data tab select the State dimension from the dropdown list.  Then select the StateID (you would likely have this column hidden in the dimension) in the Attribute Hierarchy dropdown list.

· Inside the Dimension Data tab select the Advanced tab.  In the Allowed member set area enter this MDX:

Template:

NonEmpty (
    [Dimension To Secure Name].[Dimension Key].Members,
    (
        StrToMember ("[User Dimension].[User Key].[" + UserName () + "]"),
        [Measures].[Bridge Table Count Measure]
    )
)

Sample:

NonEmpty (
    [State].[State ID].Members,
    (
        StrToMember ("[User].[User ID].[" + UserName () + "]"),
        [Measures].[User State Bridge Count]
    )
)

The UserName() function will grab the user account that attempts to access the cube.  StrToMember converts a literal string to a dimension member.

· Last, in the bottom left of the screen, check the box called Enable Visual Totals.


· Reprocess the entire SSAS database.  You can then navigate to the cube browser and test that your security is working properly by emulating one of the users you’ve added to the role.
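Outside of the cube browser, you can also test from a client tool by overriding the connection properties. For example, a connection string like the following forces the connection to evaluate as a specific role and user (the server, database and role names here are placeholders, and EffectiveUserName requires administrator rights on the SSAS instance):

```
Data Source=localhost;Initial Catalog=MySSASDatabase;Roles=DynamicSecurityRole;EffectiveUserName=Domain\User1
```

This is a convenient way to confirm the allowed member set resolves correctly for each account before handing the cube to real users.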


After verifying your security in the browser, if you still find problems with the results then your next step is to check the user and bridge tables.  Again, this is a very basic example of creating dynamic security, so if you did something more complicated, like basing a role off of two bridge tables, then you may run into conflicts between the two.  For example, say you want to base your security not only off of state but also on the store the item was sold at.  In that case you would create two bridge tables: one for state, which we’ve already done, and a second for store, built the same way the first one was.  So imagine your fact table had some results similar to those below:

StateID StoreID SalesAmount
3 4546 1000
23 6434 1200
14 1554 900
43 7664 500
3 4578 1100
35 6789 3000

When using multiple bridge tables in a single role you need to make sure that if User1 has rights to StateID 3 in the UserStateBridge table, then he also needs rights to StoreID 4546 and 4578 in the store bridge table to be able to see each record when browsing.  This means the bridge tables can actually overrule each other if a user does not have permission for both keys.  This is especially important if you’re using inferred members in your data warehouse with some kind of indicator like -1.  If a sale occurs in an unknown store and you assign unknowns a -1 key value, then you will want to make sure your users have access to that -1 store.
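Following the same template as before, the allowed member set for the store attribute would look something like this (the Store dimension, attribute, and bridge count measure names are hypothetical; yours will match however you built the second bridge measure group):

```mdx
NonEmpty (
    [Store].[Store ID].Members,
    (
        StrToMember ("[User].[User ID].[" + UserName () + "]"),
        [Measures].[User Store Bridge Count]
    )
)
```

Each attribute gets its own allowed member set inside the same role, and a user only sees a fact row when every secured attribute on that row is in his allowed sets.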

Adding new accounts is very simple now.  Add them to the DimUser table and then reprocess.  If you need additional states associated with users, simply add them to the bridge table and then reprocess the cube.
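For example, granting a hypothetical new user access to two states could look like this (the account, state names and StateName column are illustrative, following the sketch schema from earlier):

```sql
-- New user account
INSERT INTO dbo.DimUser (Account) VALUES ('Domain\User3');

-- Grant the new user access to two states via the bridge table
INSERT INTO dbo.UserStateBridge (UserID, StateID)
SELECT u.UserID, s.StateID
FROM dbo.DimUser u
JOIN dbo.DimState s ON s.StateName IN ('Florida', 'Georgia')
WHERE u.Account = 'Domain\User3';
```

No role changes are needed in BIDS; a reprocess picks up the new rows.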

After reading this article if you still need more information on this topic I highly recommend the book Expert Cube Development with SSAS Multidimensional Models.  This book shows many different types of dynamic security including security based off of a stored procedure. 

MDX Time Calculations Built to Scale

When creating time calculations in Analysis Services you may take several approaches. People who have just begun learning SSAS have likely spent several days writing calculations that return things like year to date, rolling twelve months, and period over period for every measure they have.

More adventurous developers have tried the Business Intelligence wizard, which automates the process of writing time calculations on selected measures. It creates an attribute in the date dimension that can then be used to view the calculations. This sounds like a great idea, but it is very inflexible when changes are needed and is still specific to the measures that were selected during the wizard configuration. Another problem with the BI wizard is that it can only be run against one date dimension hierarchy at a time. With many dimensional models, date can mean many different things: a calendar date, fiscal date, ship date, order date, and many others. There is likely only one date table in the data warehouse, but it plays the role of many different dates, making it a role playing dimension. If you wanted those same time calculations for each role playing dimension using the BI wizard, you would have to run through the wizard multiple times and then enjoy managing that mess later.

The best way to handle time calculations is detailed in the book Expert Cube Development with SSAS Multidimensional Models. You may have heard me sing the praises of this book before because the content is that useful for real world problems. What I’m going to show you in this article is my variation of what they show in that book. The benefits of this method are many. It is a lot easier to manage than the BI wizard because you have total control over it. It’s dynamic because the formula will work no matter what measure is being viewed. Also, it is easy to filter down to just the calculations you want because the calculation is just like any other attribute you would filter by.

To follow this example you can download the sample database. When you download and install this sample database you also receive sample files for deploying an Analysis Services cube, which can be found here: C:\Program Files\Microsoft SQL Server\100\Tools\Samples.

To start this example, create a SQL Server view that will simply store the name of each calculation you want to create along with an ID column.

CREATE VIEW [dbo].[DateCalcs] AS

SELECT ID = 1, Calculation = 'Actuals'
UNION ALL
SELECT ID = 2, Calculation = 'Calendar Year To Date'
UNION ALL
SELECT ID = 3, Calculation = 'Fiscal Year To Date'
UNION ALL
SELECT ID = 4, Calculation = 'Calendar Previous Period'
UNION ALL
SELECT ID = 5, Calculation = 'Fiscal Previous Period'
UNION ALL
SELECT ID = 6, Calculation = 'Calendar Previous Period Diff'
UNION ALL
SELECT ID = 7, Calculation = 'Fiscal Previous Period Diff'
The resulting view looks like this:

ID Calculation
1  Actuals
2  Calendar Year To Date
3  Fiscal Year To Date
4  Calendar Previous Period
5  Fiscal Previous Period
6  Calendar Previous Period Diff
7  Fiscal Previous Period Diff

Create as many values in the Calculation column as you will need time calculations in the cube. Here I have created a set of calculations for both Calendar and Fiscal because it is likely I will be required to have these calculations for both my Calendar and Fiscal dates.

Once this is created, add the view to your Data Source View. Notice after adding the view that it has no relationships to any other object, which is fine because no relationships are needed.

Create a new dimension based off the DateCalcs view with the ID column as the Key column and the Calculation column as a regular attribute. Select the ID attribute and change the AttributeHierarchyVisible property to False to hide it from users. Select the Calculation attribute and change the IsAggregatable property to False. This will remove the All level you are used to seeing in dimensions, which isn’t necessary here.


The last step is to select the Calculation attribute and change the DefaultMember property to the Actuals member: [Date Calcs].[Calculation].&[Actuals]. When all these changes have been made, process the dimension. After processing completes, add the new dimension to the cube by going to the Cube Structure tab in the cube designer. Right-click in the Dimensions pane in the bottom left, select Add Cube Dimension, and choose Date Calcs from the list to make it usable in this cube. You will notice on the Dimension Usage tab that it has no relationship to any Measure Group, which is not a problem.


With the Date Calcs dimension prepped you can now open the cube designer and go to the Calculations tab. Here you will use the MDX SCOPE statement to handle each of the calculations you wish to create. It does not matter in what order we write these calculations, but the first one we will tackle is Calendar Year To Date. Hit the New Script Command button to get started.

Calendar Year To Date

SCOPE ([Date Calcs].[Calculation].[Calendar Year To Date]);

    THIS = Aggregate (
        YTD ([Order Date].[Calendar Date].CurrentMember),
        [Date Calcs].[Calculation].[Actuals]
    );

END SCOPE;



Fiscal Year To Date (Using a different method to calculate YTD)

SCOPE ([Date Calcs].[Calculation].[Fiscal Year To Date]);

    THIS = Aggregate (
        PeriodsToDate (
            [Order Date].[Fiscal Date].[Fiscal Year],
            [Order Date].[Fiscal Date].CurrentMember
        ),
        ([Date Calcs].[Calculation].[Actuals])
    );

END SCOPE;




Calendar Previous Period

SCOPE ([Date Calcs].[Calculation].[Calendar Previous Period]);

    THIS = ([Order Date].[Calendar Date].CurrentMember.PrevMember,
            [Date Calcs].[Calculation].[Actuals]);

END SCOPE;


Fiscal Previous Period

SCOPE ([Date Calcs].[Calculation].[Fiscal Previous Period]);

    THIS = ([Order Date].[Fiscal Date].CurrentMember.PrevMember,
            [Date Calcs].[Calculation].[Actuals]);

END SCOPE;


Calendar Previous Period Diff

SCOPE ([Date Calcs].[Calculation].[Calendar Previous Period Diff]);

    THIS = ([Order Date].[Calendar Date].CurrentMember,
            [Date Calcs].[Calculation].[Actuals])
         - ([Order Date].[Calendar Date].CurrentMember.PrevMember,
            [Date Calcs].[Calculation].[Actuals]);

END SCOPE;


Fiscal Previous Period Diff

SCOPE ([Date Calcs].[Calculation].[Fiscal Previous Period Diff]);

    THIS = ([Order Date].[Fiscal Date].CurrentMember,
            [Date Calcs].[Calculation].[Actuals])
         - ([Order Date].[Fiscal Date].CurrentMember.PrevMember,
            [Date Calcs].[Calculation].[Actuals]);

END SCOPE;


This gives you a good start on a Time Calculation dimension. Once these calculations are written you will need to reprocess the cube. You may have noticed that in each of these calculations I never actually identify a measure that exists in the cube. That is the beautiful thing about this technique: it works across all measures! So instead of having to create a calculation for Sales, Profit, and every other measure in the cube, you only have to create each calculation once and it will work on all measures. Try it out yourself!
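To see the calculations in action, a query along these lines puts the Date Calcs members on columns against any measure (the cube name and [Sales Amount] measure are placeholders for whatever exists in your deployment):

```mdx
SELECT
    { [Date Calcs].[Calculation].[Actuals],
      [Date Calcs].[Calculation].[Calendar Year To Date],
      [Date Calcs].[Calculation].[Calendar Previous Period] } ON COLUMNS,
    [Order Date].[Calendar Date].[Month].Members ON ROWS
FROM [Adventure Works]
WHERE ([Measures].[Sales Amount])
```

Swap in any other measure in the WHERE slicer and the same calculation columns still work, which is exactly the scaling benefit described above.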


