Writing Parametrized MDX for Reporting Services

Writing MDX inside of Reporting Services has been the nemesis of many report writers for far too long.  If all you need is the Query Designer to create drag-and-drop datasets, then it's a piece of cake, but have you ever wondered what's going on with all the MDX that the Query Designer creates?  Or how the parameters it creates inside the MDX actually work?  I've heard many report writers that use Analysis Services as a data source (including myself) say it is too difficult to create parameters, and that's why they use the Query Designer.  In reality, I think the real problem is that the MDX the Query Designer produces makes it look a lot more difficult than it really is.  If you know some rudimentary MDX, you can probably write your datasets yourself instead of using the Query Designer (though there's no problem with sticking with it if it fits your needs).  The goal of this article is to guide you through writing a basic MDX statement that can be used for a report query and, more importantly, show you how to parameterize it.

The two MDX functions that we are going to use for this are:

StrToSet

Returns the set specified in the string expression (MSDN definition).  Essentially, converts the text you write into an MDX set.

StrToMember

Returns the member specified in the string expression (MSDN definition).  Converts the text you write into an MDX member.

First we will start with an example that uses StrToSet and then add to it using StrToMember.  To follow along with this example you'll need the Adventure Works cube, which can be found at www.codeplex.com.  I'll assume you know some basics of Reporting Services and will not define each component of the tool that is not new to this example.

Create a new Report Server Project and Report that uses the Adventure Works cube as a Data Source.  Next, create a Dataset; for this example you can call it CategorySales (this Dataset should use the Adventure Works cube data source).  Select Query Designer and click the Design Mode button to begin writing your query.  Use the following query to return results without a parameter:


Select
[Measures].[Internet Sales Amount] on Columns
From [Adventure Works]

All this query did was return the total sales, which is useless for a report by itself.  Let's say we want to add the product category that the sales belong to.  If we were to do this normally in MDX, it would look something like this:

Select
[Measures].[Internet Sales Amount] on Columns,
[Product].[Category].Children on Rows
From [Adventure Works]

That's not good enough for our pretend end users though.  They not only want to see the list of product category sales, but they also want it made into a parameter so they can select from a dropdown box the category or categories they wish to view.  To do this, first create the parameter by selecting the Query Parameters button.

  • Assign the parameter a name. Ours will be called ProductCategory
  • Identify the Dimension and Hierarchy it associates with. Ours will be from the Product dimension and the Category attribute hierarchy
  • Multiple values identifies whether it should accept multiple values.
  • Last, give the parameter a default value. This can be set to Bikes for our example


After you hit OK to confirm the Query Parameter screen you will need to modify your MDX to take advantage of the parameter you have created.

Select
[Measures].[Internet Sales Amount] on Columns,
StrToSet(@ProductCategory, Constrained) on Rows
From [Adventure Works]

Here we use StrToSet to convert whatever is brought in from the parameter values selected by the end users into something MDX can understand.  The Constrained flag ensures that the parameter string contains actual member names rather than an arbitrary MDX expression, which also helps protect against injection.
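To see what StrToSet is actually doing, it helps to substitute a literal string for the parameter. If an end user selected two categories, the query the data provider runs would be roughly equivalent to the sketch below (the member keys &[1] and &[3] are illustrative only; your cube's key values may differ):

```
Select
[Measures].[Internet Sales Amount] on Columns,
StrToSet("{ [Product].[Category].&[1], [Product].[Category].&[3] }", Constrained) on Rows
From [Adventure Works]
```

Because the string contains only qualified member names, it satisfies the Constrained flag.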

Hit OK twice to confirm the query and make a simple tabular report to view your results.  You will notice that a dropdown parameter was automatically created for the ProductCategory parameter we defined. You may already be familiar with this if you have ever used the drag-and-drop interface for making parameters. Reporting Services automatically creates a Dataset for this parameter dropdown box, which you can view if you right-click on the Datasets folder and select Show Hidden Datasets.


Now let's take this example one step further.  We will add two more parameters to our query to create a date range.  Hit the Query Parameters button again and create the parameters StartDate and EndDate, both referencing the Date dimension and the Date attribute.  These can be defaulted to any date for now.


Change the MDX to add a Where clause that restricts dates, this time using StrToMember.

Select
[Measures].[Internet Sales Amount] on Columns,
StrToSet(@ProductCategory, Constrained) on Rows
From [Adventure Works]
Where StrToMember(@StartDate, Constrained) : StrToMember(@EndDate, Constrained)

Hit OK twice to confirm the changes to the Dataset.  Remember that these parameters will automatically create new Datasets and report parameters, so you should just need to preview the report to see the changes.
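Substituting literal values again shows what StrToMember receives from the date parameters. In the Adventure Works cube, date members are keyed in a YYYYMMDD style, so a hard-coded version of the range might look like this sketch (the exact key format is an assumption; check your own cube):

```
Select
[Measures].[Internet Sales Amount] on Columns,
StrToSet(@ProductCategory, Constrained) on Rows
From [Adventure Works]
Where StrToMember("[Date].[Date].&[20070101]", Constrained) : StrToMember("[Date].[Date].&[20071231]", Constrained)
```

The colon operator builds a range set between the two members, which is why a single Where clause can cover the whole date span.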


I hope this makes writing your own MDX with parameters inside Reporting Services a little easier. 

New Book: Building Performance Dashboards and Balanced Scorecards with SQL Server Reporting Services

I'm very excited to announce that you can now purchase my latest book: Building Performance Dashboards and Balanced Scorecards with SQL Server Reporting Services. This book, published by Wiley, takes you through designing effective balanced scorecards using Reporting Services. You can get your copy here!

SSRS – Using Lookup Function for Multiple Column Join

When SQL Server 2008 R2 was released, several new Reporting Services expression functions were made available to help join multiple datasets together: the Lookup function, the LookupSet function and the MultiLookup function.  My general recommendation is not to use them when you have two datasets from the same data source.  They're really intended for connecting datasets that are not directly related; otherwise the join could simply be handled in the source query.

My experience with them had been fairly academic until recently when I had a real scenario where the Lookup Function would be beneficial.  I was working with a customer that had a report requirement that needed data from DB2 and Oracle in a single tabular report.  What made this circumstance more difficult was that the join between the DB2 dataset and the Oracle dataset was not a single column but two.

Seems basic enough, right?  If we were writing T-SQL we would simply join on multiple columns with an AND between them.  Because these are two different data sources in Reporting Services, I had to get a little more creative.  If you've used the Lookup function in SSRS expressions (template below), then you know it can only accept one field from the source and one from the destination.  So what if you have two columns to join on?

Lookup(source_expression, destination_expression, result_expression, dataset)

The approach I took was to concatenate the two join columns together and then place the concatenated value in the Lookup function.  I also placed a literal string between the two columns to protect myself from coincidental matches.  To make the solution a little cleaner, I created a calculated field on each dataset that handled the expression.  So both datasets had a calculated field I called JoinColumn, defined similar to this:

=Fields!JoinColumn1.Value + "||" + Fields!JoinColumn2.Value

Notice the double pipes that are used here to protect from coincidental matches.  With these calculated fields created on each dataset, my Lookup function becomes fairly standard.  Here's my end result:

Lookup(Fields!JoinColumn.Value, Fields!JoinColumn.Value, Fields!ReturnColumn.Value, "DataSet2")
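One caveat: if either join column is numeric or a date, the concatenation can fail or behave unexpectedly, so it can be safer to convert explicitly. A variation of the calculated field (JoinColumn1 and JoinColumn2 are the same hypothetical field names used above) might look like:

```
=CStr(Fields!JoinColumn1.Value) & "||" & CStr(Fields!JoinColumn2.Value)
```

The & operator is Visual Basic's string concatenation operator, which avoids the numeric-addition ambiguity that + can introduce.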

I hope this helps as you find practical uses for the Lookup Function in Reporting Services.

Choosing the Right Microsoft Reporting Technology Part 6: Making the Decision

We've completed reviewing the major Microsoft tools, and now you have to make "The Decision".  Which tool(s) are the best fit for your reporting needs?  Remember, just picking one tool and using it for all reports may not be the best decision, because you would miss out on a lot of the benefits of the other tools.

One tool I like to use to help make a decision like this is a decision matrix.  With a decision matrix you provide the possible options, which for us have been the following tools:

  • Reporting Services
  • Excel
  • PowerPivot
  • PerformancePoint
  • Power View

You also provide the factors that are important to you in making a decision.  You can see in the screenshot below I’m considering tools based on:

  • Ad Hoc Reporting
  • Content Sharing
  • User Flexibility
  • Developer Flexibility
  • Time to Develop


Within the matrix you rate each of the tools (0-100) on their capabilities in the factors you're considering.  This rating should only be done once.

Some factors in a reporting decision may be more important than others depending on the need.  For example, for one report Ad Hoc Reporting may be the most important thing, while for another report Content Sharing may matter most.  That's why you assign a relative weight (0-10) to each factor for each decision you need to make.  That way, if Ad Hoc Reporting is most important, you rate it higher than the other factors.  Again, this weighting would be changed for each report you need to choose a tool for.
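As a quick illustration of the math (all ratings and weights below are made-up numbers), each tool's total is the sum of rating × weight across the factors:

```
Ad Hoc Reporting  (weight 8):  rating 40  ->  40 x 8 = 320
Content Sharing   (weight 5):  rating 90  ->  90 x 5 = 450
Time to Develop   (weight 3):  rating 60  ->  60 x 3 = 180
                               Weighted score        = 950
```

The tool with the highest weighted score for that particular report is the candidate to choose.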

Download a sample Decision Matrix here:


Using a tool like a Decision Matrix can help you make decisions quickly and with more confidence.  I hope this series has been helpful.  If you need to catch up on previous posts follow the links below.

Choosing the Right Microsoft Reporting Technology Part 1: Report Services

As Business Intelligence has evolved over the years, the number of tools we have to choose from for presenting data has advanced drastically.  With so many tools to choose from, it can be rather confusing (especially when your company is just getting its feet wet in BI) to determine which tool is the right one for an organization's reporting needs.  Just within the Microsoft suite of tools (not including third-party tools) you have Reporting Services, Excel, PowerPivot, PerformancePoint and Power View.

Many companies try to marry themselves to one or two reporting tools and fit their needs into the restrictions of the tool(s) they have chosen.  The truth is, not a single one of these tools can solve all reporting needs.  While each one of these tools by itself may be able to present your data, you will find that a combination approach will result in a much more well-rounded and impressive reporting solution.

That’s why in this blog series I hope to demystify the decision process and educate you a little bit on each of the tools so that you can easily determine which tool would be best in different scenarios.  Like I mentioned earlier the Microsoft reporting tools we will cover will be:

  • Reporting Services
  • Excel (From the perspective of building PivotTables without PowerPivot)
  • PowerPivot
  • PerformancePoint
  • Power View (Yes Power View is supposed to be considered part of Reporting Services in SQL Server 2012, but it really is an entirely different tool than SSRS is traditionally thought of as)

After discussing each of the tools individually, I'll bring it all together by showing you how to use a decision matrix to determine which tool makes sense for your own scenarios.

Reporting Services

What it is

Reporting Services is traditionally used for developing static reports.  The word static here is referring to the report layout rather than the report data (obviously the data will change).  For example, I want my sales team to be able to review a line item detail of each of their sales from the prior day.  With Reporting Services I could have the sales team either log onto a web front end to view a live version of the report or I could have the report emailed to them daily.


The reports developed with the tool are highly customizable.  In fact, just about any object on the report can be manually changed or made dynamic with the expression language it uses.

If you're looking for a tool for creating dashboards, then Reporting Services also has the capability of filling that need.  With charts, gauges, indicators, sparklines and maps, you have many of the necessary tools for creating an impressive dashboard.  There are other tools, like PerformancePoint, that can create more impressive interactive dashboards, but Reporting Services can get the job done.

Reporting Services is also highly scalable.  Without much effort, the workload for report processing can be shared across multiple instances of Reporting Services.  After installing multiple instances, the native configuration tools guide you through creating a farm of report servers.

Another thing worth noting, because it will not be true of all the other tools I discuss in this series, is that it can connect to virtually any data source you have.

What it isn’t

While there are some parts of Reporting Services that can provide ad hoc reporting capabilities, the core of the tool is static reporting.  If you really need ad hoc reporting and don't care much about report layout, then you would likely pick another tool like Excel.

Who Uses it

While Reporting Services does have functionality for a power user to create their own reports, it is typically thought of as a developer's tool.  Report developers use Business Intelligence Development Studio (BIDS), which is part of the SQL Server install, to create incredibly detailed reports with almost endless flexibility when it comes to visualizations.

The tool that power users can use for creating reports is called Report Builder.  It has almost all the same functionality as BIDS but has been simplified a little for end users, with more wizards and easier ways to bring in datasets.  Because it shares so much functionality with the developer's tool, you will find that it is best suited to highly technical end users.

How is it consumed

Like I mentioned earlier, Reporting Services reports can be consumed through the web front end of the native Report Manager, through SharePoint, or even embedded in a custom application you've written with the Report Viewer control.

In addition to viewing the reports from a web browser, they can also be delivered directly to the users via email or in a shared folder that they can access.  This is done using an incredibly useful part of the tool called Subscriptions.


Again, Reporting Services is a static reporting tool, so if your users need to change report layout frequently, that may eliminate this tool as an option for that particular report.

Another limitation you may run into is that some of the Subscriptions functionality, the report delivery feature mentioned earlier, consists of Enterprise-only features.  For example, if you wanted to drive report delivery dynamically from a list of email addresses in a database, that must be done using Data-Driven Subscriptions, which are Enterprise only.


As we go through this series remember these high level characteristics about Reporting Services:

  • For static report development
  • Extremely customizable
  • Dashboard capable
  • Highly scalable
  • Traditionally thought of as a developer's tool
  • Has a native web front end, but reports can also be viewed through SharePoint or delivered directly with subscriptions

I hope you've found this helpful; stay tuned for Part 2 in this series, on Excel.  To read any of the other parts of this series, follow the links below.

SSRS – Creating a Reporting Services Web Farm

Reporting Services performance problems can often be narrowed down to report design or resource constraints and sometimes a combination of the two.  In this post I’ll review the process of scaling-out Reporting Services so server resource constraints can be shared across multiple servers instead of relying on a single server for all report processing.

If you are new to Reporting Services architecture, it is important to understand the general steps that occur behind the scenes. From the moment a user makes a request, the steps are:

  • Client makes a request to the report server
  • Report server makes a request to the report catalog
  • Report catalog sends the report definition back to the report server and tells it which data sources to hit
  • Report server requests data from the source database
  • Source database sends the data, and the report server renders the report metadata and data for the client

Understanding Scaling-Out Architecture

If you have ever installed SQL Server and the BI stack locally on your machine for testing purposes, then you've likely done a single-server install.  This means you have the SQL Server, Reporting Services, Analysis Services and Integration Services engines all running on the same machine.


While this might be fine in a local testing environment, it obviously doesn't scale well on a server that requires thousands of users to connect.  In an environment with this many users, a scaled-out deployment is ideal and can support a heavier workload.

In a scaled-out deployment, multiple Report Server instances are installed that all use the same repository, or ReportServer database.  So when a thousand client machines make requests to the report server, the report rendering can be split across multiple servers.  The instances must share the same repository database so they all see the same report metadata, but the report processing load is spread out.  The screenshot below depicts a scaled-out deployment with three instances of Reporting Services and a single SQL Server database repository returning report metadata.  This is a common setup, but of course not the only way to do this.


With this approach we could have our west coast customers use a URL that points to Server A, Midwest customers use a URL that points to Server B, and all east coast customers use Server C.

Another approach is to use a Network Load Balancer (NLB).  The idea behind a Network Load Balancer is that you provide a single virtual URL for the report server, and the NLB sends each user to the Report Server that is least utilized.  So if you have two servers, and Server A is only 20 percent utilized while Server B is 80 percent utilized, the NLB would choose Server A for the next incoming user instead of you manually providing a URL for each user.  The Network Load Balancer can either be a physical device on your network or a feature you enable in Windows Server.

Setup and Configuration

The setup and configuration for scaling Reporting Services into a Web Farm is actually fairly easy.  In fact to oversimplify it here is an overview of the steps required:

  • Install SQL Server (Database Engine) on one server.
  • Install Reporting Services on two servers without configuring either.
  • Pick one of the Report Server installations and configure it using the Reporting Services Configuration Manager.  Go through each page of the configuration manager except the Execution Account and E-mail Settings pages, which are optional depending on your required setup.
  • Configure the second Report Server, but use the same report catalog database as the first and set its URLs.  On the database page, ensure it points to the same database that was created during the configuration of the first Report Server.


  • Go back to the first Report Server you configured and click Add Server on the Scale-out Deployment page to join the instances as shown below.


Without overcomplicating this, that is really all it takes.  Repeat this for as many Report Servers as you would like to add to the farm.  Hope this helps.

SSRS – Embedding Refreshable Reports in Office Tools

Have you ever experienced this?

You have developed a set of Reporting Services reports that your users love, and they often show them off. The most common way users show off the reports is by taking screenshots of them and embedding those images in a PowerPoint slide deck along with other information they plan to discuss.

I personally have seen this several times, and I've always thought wouldn't it be better to have a live version of the report available in the PowerPoint slide deck so it would always be up to date. That way, new screenshots never have to be taken when visual aspects or data of the report change, because the slide deck would always show a live view of the report.

While this is a feature that has been touted in Power View, you don't have to wait, because it is possible with Reporting Services now. Of course, the new Power View method of embedding reports is much more impressive because the reports remain interactive, as if they were being viewed in SharePoint.

For this post, let's focus on how this can be done with Reporting Services, because many people do not realize it is possible. Many of the Office products can embed images that come from a URL. With this in mind, we can embed the URL of a report image, and any time the file is opened the report will be updated.

Open a PowerPoint deck and go to the Insert menu in the Office Ribbon to insert a Picture. For the file name, use the template below, replacing the bold text with the relevant information about the report you want to embed.


**Note** The plus signs between the folder name and report name stand in for any spaces you have between each word.
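As a sketch, an SSRS URL access string that renders a report as an image looks something like the following. The server, folder and report names are placeholders; rc:OutputFormat is optional (the IMAGE rendering extension defaults to TIFF):

```
http://MyServer/ReportServer?/Sales+Reports/Daily+Sales&rs:Command=Render&rs:Format=IMAGE&rc:OutputFormat=PNG
```

PNG tends to display more reliably in Office documents than the default TIFF output.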

Then, before you click Insert, hit the dropdown arrow and select Insert and Link. Using this option retains the link to the image so that it can be refreshed each time the PowerPoint file is opened.


You should now see an image of your report embedded in your document, and any time the file is reopened the image will be updated to reflect new data and visualization changes. My example used PowerPoint, but this could be done in other tools like Word or Excel too.

I hope this proves to be helpful to the reporting solutions you build.

Whitepaper – Extending Reporting Services with Custom Code

Today Pragmatic Works released a whitepaper I wrote and I thought I would share it with you.  Prepare for Marketing email…


A Whitepaper by Devin Knight

SQL Server Reporting Services is capable of creating eye-popping visualizations that can be consumed by a variety of business users. The tool has built-in functionality for developers to produce reports that users will find not only functional, but also visually appealing. This whitepaper focuses on how to overcome the gaps that expressions leave by using custom code. You will also learn many other advantages of using custom code, like creating consistency across all reports developed. As you read this paper, you will walk through many examples of using custom code. These examples emphasize the use of custom code and therefore skip topics like creating data sources, datasets and basic report toolbox items.

Download the Whitepaper Here

Interested in more SSRS training? Sign up for Devin’s Master SSRS Online course

What You’ll Learn:

  • Overcome the gaps that expressions don’t cover by using custom code
  • Create consistency across all reports developed
  • Learn how to use Visual Basic and Reporting Services in real-world examples
  • And much more!


SSRS – Using Analysis Services Dynamic Management Views to Build Dynamic Reports

Dynamic Management Views are a very powerful feature of Analysis Services 2008 and 2008 R2.  They give you the ability to monitor Analysis Services usage and metadata.

MSDN articles on SSAS DMVs – http://msdn.microsoft.com/en-us/library/ms126079.aspx

Good post on using SSAS DMVs – http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/

A report developer could leverage this capability to read Analysis Services metadata and build a report that dynamically changes the type of information displayed.

Let's walk through a basic example of how we can use Dynamic Management Views to build a dynamic Reporting Services report.  I'm starting with a basic report that has already been built off an Analysis Services cube and looks like this:


As you can see, it shows Internet Sales by category over several calendar years.  The MDX query used for this is pretty standard.

Select
    [Measures].[Internet Sales Amount] ON Columns
    ,NON EMPTY ([Date].[Calendar].[Calendar Year]
    ,[Product].[Product Categories].[Category]) ON Rows
From
    [Adventure Works]

I start by changing the query to use an in-session calculation with the With clause so I can basically alias the measure as [Measures].[Generic Measure].  Aliasing the measure will make more sense later, because the field name doesn't identify with any specific measure.

With
    Member [Measures].[Generic Measure]
        AS STRTOVALUE("[Measures].[Internet Sales Amount]")
Select
    [Measures].[Generic Measure] ON Columns
    ,NON EMPTY ([Date].[Calendar].[Calendar Year]
    ,[Product].[Product Categories].[Category]) ON Rows
From
    [Adventure Works]

The next step is to create a parameter in the report so I can pass in the measure name to the query dynamically.  Eventually I will populate this parameter using a DMV but for now I’ll just have it as a text field that I can type in the measure name.

Now to make this query dynamic with my newly created @MeasureName parameter I replace the dataset query with an expression that looks like this:

="With "
+"    Member [Measures].[Generic Measure] "
+"        AS STRTOVALUE("""+Parameters!MeasureName.Value+""")  "

+"Select "
+"    [Measures].[Generic Measure] ON Columns "
+"    ,NON EMPTY ([Date].[Calendar].[Calendar Year] "
+"    ,[Product].[Product Categories].[Category]) ON Rows "
+"From "
+"    [Adventure Works] "


You'll notice that I've basically turned the query into a string in the expression, except for the measure name, which now uses my new parameter.  The report will now work dynamically, but the user would have to type in the measure they would like to see, and not only type it but know the MDX for it.  For example, users would have to type [Measures].[Internet Tax Amount].  Obviously, that's not going to happen, so we need to make this simpler for the user.

This is where Dynamic Management Views help.  We can use the mdschema_measures DMV to return a list of measures from our cube.  The following query returns all of my measure names, their unique (MDX) names, and their format strings:

SELECT Measure_Name, Measure_Unique_Name, Default_Format_String
FROM $System.mdschema_measures
WHERE Cube_Name = 'Adventure Works'
ORDER BY Measure_Name

Unfortunately, not all format strings translate to Reporting Services, but this could be manipulated using expressions.  Using this query we can create a new dataset and have it populate our @MeasureName parameter.  The query cannot be entered through the Query Builder because it's not really MDX, so you must choose to build an expression on the new dataset and paste in the query.  It does not require an actual expression, so remove the equal sign before the query.
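If the dropdown ends up cluttered with measures users should never pick, the same DMV can filter them. Assuming the rowset's Measure_Is_Visible column behaves as documented, a variation would be:

```
SELECT Measure_Name, Measure_Unique_Name, Default_Format_String
FROM $System.mdschema_measures
WHERE Cube_Name = 'Adventure Works'
    AND [Measure_Is_Visible]
ORDER BY Measure_Name
```

This hides measures that were marked not visible in the cube design.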


With the dataset completed, go back to the @MeasureName parameter properties and change the available values to use the Measure_Unique_Name and Measure_Name fields to populate the parameter dropdown box.


Now that the parameter dropdown box is populated by the DMV query, go ahead and preview the report.  You'll see you can select any measure from the dropdown box, and it will change the information shown on the report, as shown below:


SSRS – Custom Code with External Assemblies

In a previous post I wrote about how you can use embedded custom code to extend the capabilities of Reporting Services.  This week I will show you another method of using custom code, this time with external assemblies.  Ideally, when using custom code, you would choose to do so with external assemblies.  External assemblies help developers manage code from outside Reporting Services and share the exact same code across multiple reports.  Here are a few of the pros and cons of using external assemblies for custom code.


Pros

  • Can be any .NET language
  • Updates to code can be managed from outside of SSRS
  • Best way to standardize custom code

Cons

  • Deployment is a bit tedious
  • Assemblies have restricted access to system resources
In this example I'm going to walk you through, beginning to end, how to create an external assembly and then use it in Reporting Services.  Our goal is to compartmentalize our code into an assembly so every developer does an age calculation the same way.  Calculating age can easily be done in the expression language that Reporting Services provides, but using the assembly ensures it's done the exact same way every time.  These steps will walk you through:

  1. Creating an assembly with VB.Net (could be done with C# as well)
  2. Signing the assembly with a strong name.  This must be done because an assembly is deployed to the Global Assembly Cache (GAC)
  3. Provide the proper permissions to the assembly so Reporting Services can use it via the <Assembly: AllowPartiallyTrustedCallers()> declaration
  4. Deploying the assembly to Reporting Services and the GAC
  5. Using the Assembly in Reporting Services.

Creating an External Assembly

  • Open Visual Studio and create a new Visual Basic Class Library project called AgeAssembly. This project can be stored in any location you would like.
  • Ensure the project is set to .NET Framework 2.0 then click OK.  Currently only .NET Framework 2.0 is supported for Reporting Services assemblies.


  • Rename the Class1.vb file to Age.vb and use the following code:

Public Class Age

    ' Returns the difference in calendar years between BirthDate and today.
    ' Note: DateDiff with "yyyy" counts calendar-year boundaries crossed,
    ' not full years elapsed.
    Public Shared Function CalculateAge(ByVal BirthDate As Date) As Integer

        Return DateDiff("yyyy", BirthDate, DateTime.Now())

    End Function

End Class


  • Next, set up a strong name to sign and version the assembly.  This must be done before the assembly can be deployed to the GAC.  To set up a strong name, right-click on the Project in the Solution Explorer and select Properties.
  • Select the Signing page and check Sign the assembly.
  • Select <New…> from the Choose a strong name key file dropdown box.
  • Type SNsnk for the Key file name and uncheck Protect my key file with a password before hitting OK. You can close the properties after this is complete.


  • To allow the Reporting Services engine to call this code, you must apply a new assembly attribute. In the Solution Explorer click the Show All Files button to expose the My Project folder.


  • Inside the My Project folder find and open the AssemblyInfo.vb file.
  • Add the namespace to the top of the code called Imports System.Security
  • Add the assembly attribute <Assembly: AllowPartiallyTrustedCallers()>
  • This will allow Reporting Services to have access to the assembly. The last step is to build the project. Right-click on the project in the Solution Explorer and click Build.
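Putting the last three bullets together, the lines added to AssemblyInfo.vb end up looking like this (only the added lines are shown; the generated attributes in the file stay as they are):

```vb
' Added near the top of AssemblyInfo.vb
Imports System.Security

' Allows partially trusted callers, such as the Reporting Services engine,
' to use this strong-named assembly
<Assembly: AllowPartiallyTrustedCallers()>
```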


  • Navigate to the folder that contains the project you just created. Once you find it, open the Debug folder (…AgeAssembly\bin\Debug).
  • Copy the AgeAssembly.dll to the “C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\bin” folder then drag and drop the assembly to the C:\Windows\assembly too.
  • Open BIDS and create a new Reporting Services project.
  • Create a new Shared Data Source that points to AdventureWorksDW2008R2 on your local SQL Server database instance and name the data source AdventureWorksDW2008R2
  • Create a basic new report that uses the DimEmployee table, which has a birth date and a hire date, in a tablix.  We can use our assembly to calculate how old someone is and how long they've been working for us.  I'm not focusing on the exact report design for this example, so go wild with however you would like this to look!
  • After creating the basics of a report go to the Report menu in the toolbar and select Report Properties. Go to the References page and click Add to add a new assembly.
  • Click the ellipsis then click the browse tab to add the assembly we just created. Browse to the C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\bin folder and select AgeAssembly then click OK. Click OK again once you return to the Report Properties window.  Even though we’ve selected the assembly from the Reporting Services folder in actuality the dll deployed to the GAC is what is used. 


  • Change the HireDate field in your report tablix to an expression using the following code: =AgeAssembly.Age.CalculateAge(Fields!HireDate.Value) Then click OK.
  • Rename the column header above the expression to Service Years.
  • Change the BirthDate field to an expression using the following code: =AgeAssembly.Age.CalculateAge(Fields!BirthDate.Value) Then click OK.
  • Rename the column header above the expression to Age.
  • Preview the report to see the result of adding the custom assembly. Results may vary from my screen shot depending on when you run the report because they may have gotten older!


I hope this step-by-step helps.  If this topic is something that interests you, then you may also be interested in the SSRS Master (Advanced) Class I'm teaching.  You can find when the next one is here.

