JavaScript testing – first impressions of Wallabyjs and Visual Studio

I’ve been a big fan of NCrunch for a long time now; when it comes to .Net test runners I don’t think there are any that beat it. So when I heard about the JavaScript test runner Wallaby I was intrigued enough to give it a try, even if it did mean writing JavaScript tests.

This week I started to write some new AngularJS functionality at DrDoctor and thought that now was as good a time as any to start writing a few JavaScript tests.

This post isn’t about writing Jasmine tests for Angular – there are already plenty of tutorials on the internet – but rather about highlighting some of the features of Wallabyjs.

A few of the features that stood out:

  • Inline code coverage – when you have failing tests, the inline coverage markers for that block turn red
  • Show covering tests – a failing test will show up here as well, with some details; Wallabyjs will also display the reasons your tests are failing
  • Highlighting of uncovered lines of code in a file

Nancy and NCrunch – Playing nicely together…at last

If you haven’t tried NCrunch before then stop reading this and go download the trial and try it out; it will forever change your testing habits* – for the better.

NCrunch is the breakfast of champions


At DrDoctor we use Nancy (rather than the typically used ASP.Net MVC framework); the main reason is summed up by the overall goal of the framework:

The goal of the framework is to stay out of the way as much as possible and provide a super-duper-happy-path to all interactions.

One of the greatest things about Nancy vs ASP.Net MVC is that everything can be tested, and we have tests for pretty much all of our Modules (read: Controllers) as well as our Views. NCrunch does some serious magic when it comes to running your tests intelligently in parallel. Up until a few days ago I couldn’t figure out how to get my Nancy tests to work with it.

Testing Nancy Views

Testing Nancy Views is very easy; for demonstration purposes I created a new Nancy project from the standard ASP.Net and Razor Nancy template.


The template creates a module called IndexModule which returns a view called Index.

namespace TestingDemo
{
    using Nancy;

    public class IndexModule : NancyModule
    {
        public IndexModule()
        {
            Get["/"] = parameters =>
            {
                return View["index"];
            };
        }
    }
}

I then created an MS Test project called Specs, and added the following test to make sure that when someone browses to “/” they would be served the Index view:

[TestMethod]
public void Should_Return_IndexView()
{
    var bootstrapper = new ConfigurableBootstrapper(with =>
    {
        // register the module under test and the custom root path provider
        with.Module<IndexModule>();
        with.RootPathProvider(new TestRootPathProvider());
    });

    var browser = new Browser(bootstrapper);

    var response = browser.Get("/");

    Assert.AreEqual("index", response.GetViewName());
}

To make this work I needed to implement my own custom IRootPathProvider:

public class TestRootPathProvider : IRootPathProvider
{
    public string GetRootPath()
    {
        string assemblyFilePath = new Uri(typeof(TestingDemo.IndexModule).Assembly.CodeBase).LocalPath;
        var assemblyPath = System.IO.Path.GetDirectoryName(assemblyFilePath);

        // Walk up from the test bin folder to the solution root, then into the web project
        var path = PathHelper.GetParent(assemblyPath, 3);
        path = System.IO.Path.Combine(path, "TestingDemo");

        return path;
    }
}

In the Specs project I referenced the TestingDemo project and then added the Nancy.Testing and Nancy Razor view engine NuGet packages (since I’m testing Razor views).

At this point the test passes in the Visual Studio test runner, but fails in NCrunch.

Eating Nancy tests with NCrunch

To get the tests to pass in NCrunch you will need to make a few changes to the NCrunch configuration for the test project.

  1. Open the NCrunch configuration window from the NCrunch menu and select the test project (mine is called Specs)
  2. Under General select “Additional files to include”
  3. Click the button with three dots, then on the Additional Files To Include dialog click “Add Directory”
  4. Browse to and select the root folder of your Nancy project, then click OK
  5. You will see the folder appear in the Additional Files To Include window; click OK
  6. Back in the NCrunch configuration window, under Build Settings set “Copy referenced assemblies to workspace” to True
  7. You can now close the NCrunch configuration window

You should see that the test now passes in the NCrunch Tests window.


* May not actually change your testing habits.

Nancy Testing – Razor View with Referenced Models

The last few weeks I’ve been working with Nancy (an awesome web framework for .Net); the core goal of the framework is the super-duper-happy path:

The goal of the framework is to stay out of the way as much as possible and provide a super-duper-happy-path to all interactions.

This means that everything in Nancy is setup to have sensible defaults and conventions, instead of making you jump through hoops and go through configuration hell just to get up and running. With Nancy you can go from zero to website in a matter of minutes. Literally.
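To give a feel for how little ceremony is involved, a minimal module is just a class and a route – this is an illustrative sketch rather than code from the template:

```csharp
using Nancy;

public class HelloModule : NancyModule
{
    public HelloModule()
    {
        // Nancy discovers this module automatically and wires up the route
        Get["/hello"] = parameters => "Hello from Nancy!";
    }
}
```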

From the experience that I’ve had with Nancy to date, I would say it easily lives up to this goal for both development and testing. It is very easy to get started writing tests and most of what you need to know is covered in the docs.

However, on Friday I started experiencing a problem with some of our existing tests which check that the Razor view has been rendered correctly – by counting the number of rows in a <table>.

The cause of the failing test was that I had changed the ViewModel to include a List of objects which live outside of the Web App assembly. To get my Nancy project to work when deployed, I had to update the web.config file to include a list of assemblies for Nancy to reference when rendering the Razor views.

This morning I came back to this problem afresh and found the solution 😀 Reading through the Razor View Engine docs on GitHub, there is a mention of IRazorConfiguration:

You can specify assemblies and default namespaces that Razor needs to use whilst compiling the views by bootstrapping your own IRazorConfiguration implementation, thus removing the need to add the @using statements to each view. This step is totally optional if you don’t require additional references or namespaces in your view.

After trying various things I came across two different ways to get my tests to pass:

1. Update the App.Config file

This is the simple and easy solution; unfortunately I discovered it after the harder one below. On Friday I made the mistake of updating the web.config file in my Tests project, and my tests kept failing – no surprise there.

What I should have done was update my App.Config file, after I did this all the tests passed.

Add the following section to the configSections group:

<section name="razor" type="Nancy.ViewEngines.Razor.RazorConfigurationSection, Nancy.ViewEngines.Razor" />

Then add the following to the end of the config file, just before </configuration> (the assembly entries live inside a razor section):

<razor>
  <assemblies>
    <add assembly="Domain" />
    <add assembly="Some.OtherAssembly" />
  </assemblies>
</razor>

With that simple change my view tests passed. I guess the web.config file was added when I added the Nancy NuGet package.

2. Implement the IRazorConfiguration interface

The slightly harder solution is to implement the IRazorConfiguration interface and reference it when setting up the ConfigurableBootstrapper. The advantage of this is that you have finer control over what is going on; the downside is that you will need to include it everywhere you use the ConfigurableBootstrapper.

You will need to create a RazorConfigurationTestingStub which implements the IRazorConfiguration, mine looked like this:

internal class RazorConfigurationTestingStub : IRazorConfiguration
{
    public bool AutoIncludeModelNamespace
    {
        get { return true; }
    }

    public IEnumerable<string> GetAssemblyNames()
    {
        return new List<string> { "Domain", "Some.OtherAssembly" };
    }

    public IEnumerable<string> GetDefaultNamespaces()
    {
        return Enumerable.Empty<string>();
    }
}

As you can see the list of Assembly names is the same list which was included in the App.Config file (in my Tests project) and the same list which is included in the web.config file (in my Nancy Web App project).

When setting up the ConfigurableBootstrapper you will need to include the stub as the implementation for the IRazorConfiguration dependency:

var bootstrapper = new ConfigurableBootstrapper(cfg =>
{
    cfg.ApplicationStartup((container, pipelines) =>
    {
        // any application startup configuration goes here
    });
    cfg.RootPathProvider(new TestRootPathProvider());
    cfg.Dependency<IRazorConfiguration>(new RazorConfigurationTestingStub());
});

After I did this all the tests passed.

Building an SSIS Testing Framework

If you compared the Google results for “testing c#” and “testing SSIS” you would quickly realize that testability isn’t one of the strengths of SSIS. I’ve seen a few different frameworks which were either too complicated or didn’t really work, so in this post I’m going to give you my take on building an SSIS testing framework.

The Goal

The overall goal of what I wanted to achieve from my test framework:

  • Simplicity – writing tests needs to be as simple and easy as possible
  • The ability to load sample data into database tables
  • The ability to run a “setup” sql script
  • The ability to run an assert script to determine if the test has passed/failed
  • Auditing/history of previous test runs

My Solution

I didn’t initially set out to build a testing framework; it just evolved into one over the couple of days I was working on it. I started off by manually testing my SSIS packages and realized I was following the same steps over and over again:

  1. Load sample data
  2. Run SSIS Package
  3. Run some scripts to check if it worked

I could see that this was going to get repetitive, so I started to think about how to automate it. After a few iterations this is what I’ve ended up with (at the time of writing):


The TestController package follows the well-trodden AAA (assemble, act, assert) approach that is found among TDD (test-driven development) practitioners in the software development industry.

This package does the following:

  1. Gets a list of tests to execute
  2. Then in the foreach loop does the following for each of the test cases:

     Assemble:

       1. A script task is used to get a bunch of test parameters from a SQL table and update the associated variables in the TestController package; at the moment these are only used to control which tables have data loaded
       2. A pre-data-load setup SQL script is then executed
       3. Load Sample Data then loads any sample data for this test run
       4. A post-data-load setup SQL script is then executed

     Act:

       1. Execute the package under test (PUT)

     Assert:

       1. Run an assert SQL script

The Downsides

The solution isn’t perfect; here are a few of the problems it has at the moment:

  • Unable to pass package parameters to the package under test in the Act phase
  • Makes assumptions about how you want to do your setup routine
  • Is purely focused on packages that touch the database
  • Writing assertions isn’t as easy as I would like, it involves writing lots of T-SQL and copy/pasting code throughout the different assert scripts
  • All your databases need to be on the same server
  • Maintaining the variables and connection managers is painful

Some of these I’m not terribly bothered about – for example, requiring all of your databases to be on a single server is acceptable since this is for testing purposes – whereas the duplication of assertion code could easily be fixed by creating some reusable stored procedures (e.g. Assert.TableEqual, Assert.TableEmpty etc.) similar to those offered by tSQLt.
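As a sketch of what one of those reusable assertions might look like – note this is hypothetical; the procedure name and error format are my own rather than part of tSQLt or the framework:

```sql
-- Hypothetical reusable assertion: fail loudly if a table still contains rows.
create procedure dbo.stp_assert_table_empty
  @table_name nvarchar(256)
as
begin
  declare @count int
  declare @sql nvarchar(MAX) = N'select @c = count(*) from ' + @table_name

  -- Count rows in the named table via dynamic SQL
  exec sp_executesql @sql, N'@c int output', @c = @count output

  if @count <> 0
    raiserror('Assert failed: table %s contains %d rows', 16, 1, @table_name, @count)
end
```

An assert script could then call this for each table it expects the package under test to have left untouched.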

One of the more painful areas of the framework is setting up tables to have sample data loaded; the process involves adding variables, setting up connection managers, adding components to the data flow task and updating the script component. One way of fixing this that I thought of is generating the TestController package with BIML, so that all of this would be generated automatically.

Wrap Up

As a starting point I’m really happy with how the TestController is working at the moment. During its development and subsequent use I picked up a few bugs in the packages that I wanted to test, so it is paying for itself already 🙂

If you’re interested in trying it out for yourself then leave a comment and I’ll put it up on GitHub.

Setting up SSDT Database Projects and tSQLt

Previously I’ve written about the database unit testing framework tSQLt, you can read about it here, there is also an excellent Pluralsight course by Dave Green (blog | twitter) which you can find here.

In this post I’m going to show you a method of version controlling your database and unit tests with SQL Server Database Projects in SQL Server Data Tools (SSDT).

Setting up the Solution

In my solution I’ve created two SQL Server Database Projects

The first project, SimpleDB, will represent the actual database project (i.e. the one that is being tested) and will contain all of the tables, stored procedures, functions etc. The second project, SimpleDB_Tests, will contain all of the tSQLt objects and associated tests.

Once this is done, the next step is to create some database references, right click on References and select Add Database Reference.

The first reference is to SimpleDB, ensure that you select Same Database under the Database Location.

The second reference is to the master database; ensure that you enter sys as the Database name.


Version Controlling tSQLt

The next step is to install the tSQLt framework into your local database, download the latest release from their website and follow the install instructions.

After you have installed tSQLt, go back to SSDT and start a new Schema Comparison, selecting your local development database as the source and your SimpleDB_Tests project as the target.

To make the results easier to understand I like to group the results by schema, you can do this by clicking on the Group results icon and selecting schema.

Exclude everything except the objects in the tSQLt schema and the tSQLt assembly.


Then write the results into the SimpleDB_Tests project by clicking the Update button.


The next step is to create a Pre Deployment Script, right click on the SimpleDB_Tests project and under Add click Script. This will bring up the Add New Item dialog, click Pre-Deployment Script and give it the name Script.PreDeployment.Setup_tSQLt.sql

Into this script copy and paste the contents from the SetClrEnabled.sql script which is included with tSQLt.

Follow the above step, but this time add a Post-Deployment Script containing the following:

  EXEC tSQLt.RunAll

This script will be executed each time the SimpleDB_Tests project is deployed and will run all of the tSQLt tests.

Adding Unit Tests

There are two ways you can go about adding unit tests.

The first and probably easiest is to create them in SQL Server Management Studio, you can read about how to do that in my previous post on tSQLt.

After you have created a new test class and some tests, simply use Schema Compare in SSDT to sync the changes into the project – just make sure you don’t include any of the objects that belong to the main database project. Only objects related to your tests should go into the _Tests project!

The second way is to create everything inside of SSDT yourself.

If you do this, then note that when the tSQLt.NewTestClass proc is used to generate a test schema it adds an extended property, so make sure you do the same:

CREATE SCHEMA [stp_create_new_person]
GO

EXECUTE sp_addextendedproperty @name = N'tSQLt.TestClass', @value = 1, @level0type = N'SCHEMA', @level0name = N'stp_create_new_person';

Writing a test is actually very easy

CREATE PROCEDURE [stp_create_new_person].[test Check that a person is created]
AS
BEGIN
  DECLARE @expected int = 1

  EXEC tSQLt.FakeTable 'Person', 'dbo'

  EXEC stp_create_new_person @first_name = 'bob', @last_name = 'brown', @email = ''
  DECLARE @actual int = (SELECT COUNT(*) FROM dbo.Person)

  EXEC tSQLt.AssertEquals @expected = @expected, @actual = @actual, @message = 'Actual didn''t match expected'
END


Notice that since the reference to the SimpleDB was set up as ‘same database’ the test can just refer to the Person table and stp_create_new_person stored procedure directly.

Deploy and Test

Once you’ve made some changes, (e.g. adding new tests, changing stored procs etc…) it is time to deploy and test.

To do this we will be deploying the SimpleDB_Tests project. Right click on SimpleDB_Tests in the solution explorer and click Publish. You will need to set the Target Database Connection and ensure that the database name given doesn’t already exist.


Then click Generate Script (or if you are brave you can just go ahead and click Publish).

Once the script has been generated you can give it a quick glance over then click the Play icon to execute it.

If you clicked Generate Script (instead of Publish) then you will see the status being updated in the messages section. Here you can see that one of my tests passed and the other failed, so the status of the publish is ‘completed with errors’.


If instead you clicked Publish, you will see the status in the Data Tools Operations section being updated. Here you can see that it says ‘an error has occurred’.


To see the details click on the View Results link. This will bring up the script that was executed, and a very similar Messages section will appear with the error.

Some Helpful Tips

If you use Schema Compare a lot to keep your projects in sync, then I suggest saving a schema compare file into each of your database projects, set up to ignore either the test objects (for your main database project, e.g. SimpleDB) or the main database objects (for your test project, e.g. SimpleDB_Tests). This streamlines the process of keeping your projects in sync.

Likewise save the Publishing Profile, unless you enjoy entering in the connection details over and over again (you can do this from the Save Profile As button on the Publish Database dialog).

One other idea that I have, but haven’t tried yet is having a post deployment script to drop the Tests database after all the tests have been executed.

An Introduction to Database Unit Testing with tSQLt

Last week I was looking through the Recently Published Courses list on Pluralsight and noticed one on Database Unit Testing (check it out). Given that it was quiet at work as not everyone was back from holidays I thought it would be a good time to look into it.

What is tSQLt?

tSQLt (website) is a testing framework for SQL Server, that basically means it provides (almost) everything you need to write tests against your database.

All the tests are written in T-SQL, which means that anyone working with SQL Server can write them, and they are executed by a stored procedure. This means you stay in SQL Server Management Studio, with no need to context-switch to another tool to write or execute your tests.

What tSQLt provides

tSQLt is a very rich testing framework and provides the following features:

  • Isolation and setup functionality
  • Assertions
  • Expectations (for exceptions)

Anatomy of a Unit Test (The Three A’s)

In case you aren’t familiar with unit testing theory then the three A’s are a very simple way of thinking about how to write easily understood tests.

The three A’s are Arrange, Act, Assert.

Arrange: This is where any setup for running the test is done, including mocking objects if necessary.

Act: This is where the piece of functionality we are testing is executed (this is often referred to as the system under test).

Assert: This is where the result from the Act stage is compared to the expectation (i.e. did the test do what we were expecting it to do?)

Example tSQLt Unit Test

Time for an example.

Take the following two tables:

create table dbo.[person]
(
  person_id INT Identity(1,1) Primary Key,
  first_name nvarchar(50) not null,
  last_name nvarchar(50) not null
)

create table dbo.[audit_log]
(
  audit_id INT Identity(1,1) Primary Key,
  audit_message nvarchar(MAX) NOT NULL
)

and the following stored procedure

create procedure stp_create_new_person
  @first_name nvarchar(50),
  @last_name nvarchar(50)
as
begin
  insert into dbo.[person] ( first_name, last_name ) values ( @first_name, @last_name )
  insert into dbo.[audit_log] ( audit_message ) values ( 'Created user with first_name ' + @first_name + ' and last_name ' + @last_name )
end


To test that an entry in the audit_log table is made after a user has been created, we can write the following stored procedure to act as our tSQLt unit test:

create procedure [stp_create_new_person].[test Check that an entry to the audit_log table is made with correct first and last name]
as
begin
  declare @expected nvarchar(MAX) = 'Created user with first_name bob and last_name brown'

  exec tSQLt.FakeTable @TableName = 'person'
  exec tSQLt.FakeTable @TableName = 'audit_log'

  exec dbo.stp_create_new_person @first_name = 'bob', @last_name = 'brown'
  declare @actual nvarchar(MAX) = (SELECT top 1 audit_message from [audit_log])

  exec tSQLt.AssertEqualsString @expected = @expected, @actual = @actual, @message = 'Audit message didn''t match expected result'
end

Let’s dissect this piece by piece. In the Arrange stage, we set our expectation, which will be used in the Assert stage:

declare @expected nvarchar(MAX) = 'Created user with first_name bob and last_name brown'

Next is setting up the database. FakeTable is a tSQLt procedure which, under the hood, creates a fresh, empty copy of the table with all constraints, triggers, identities etc. removed, after renaming the existing table to a temporary name (it is renamed back after the test has finished).

This allows the test to run in isolation.

exec tSQLt.FakeTable @TableName = 'person'
exec tSQLt.FakeTable @TableName = 'audit_log'
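Because the fake strips the constraints, a test can insert only the columns it actually cares about – for example (illustrative, not part of the original test):

```sql
exec tSQLt.FakeTable @TableName = 'person'

-- last_name is NOT NULL on the real table, but the fake drops that
-- constraint, so a partial row is fine inside a test
insert into dbo.person (first_name) values ('bob')
```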

In the Act stage the stored procedure that is being tested is executed.

exec dbo.stp_create_new_person @first_name = 'bob', @last_name = 'brown'

Then from the audit_log table we set the actual result, which will then be compared to the expected value:

declare @actual nvarchar(MAX) = (SELECT top 1 audit_message from [audit_log])

Finally, in the Assert stage, we use the tSQLt.AssertEqualsString stored procedure to check whether the actual result matches the expected value:

exec tSQLt.AssertEqualsString @expected = @expected, @actual = @actual, @message = 'Audit message didn''t match expected result'

And that’s all there is to writing a simple unit test.

tSQLt likes you to group all the tests that relate to one database object into their own schema, as this makes it easier to find all related tests in the object explorer.

tSQLt provides a stored procedure that can be used to create a new schema (test class):

  exec tSQLt.NewTestClass '[stp_create_new_person]'

tSQLt also provides a way to execute only the unit tests that belong to the same test class (schema); to execute all the stp_create_new_person tests:

  exec tSQLt.Run '[stp_create_new_person]'
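And if you want to execute every test in the database in one go, tSQLt provides RunAll:

```sql
  exec tSQLt.RunAll
```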

Wrap Up

As you can see, using tSQLt makes writing unit tests very easy. This should mean that you no longer have an excuse to not write them!

In my next post I’m going to cover off the issue of source control when using SQL Server Data Tools and the SQL Server project type.

In this post I have only just scratched the surface of what can be achieved with tSQLt.

Microsoft Fakes and TeamCity (and XUnit)

This post is a note to my future self on how to configure a TeamCity build to run tests that use Microsoft Fakes. If you haven’t come across Microsoft Fakes before, then take a look at the post Testing the untestable with Microsoft Fakes for a good introduction.

Setting up the Build Agent

You will need to install Visual Studio 2012 and make sure that Update 3 is applied; no additional installation is required, as Microsoft Fakes comes bundled.

If your tests are written using XUnit, as mine were, then the next step is very important (if you are using MSTest then you can skip this step and go to Configuring the Build Step).

You will then need to log on to the build agent machine as the user that the build agent runs as (i.e. not you!). Once logged on, launch Visual Studio and install the XUnit test runner extension. The reason for this is that Visual Studio extensions are installed per user. This caught me out!

Next up, you will need to download the TeamCity vstest logger from GitHub, build it, and follow the instructions from the project page as to where the built DLL needs to go.

At this point, everything on the build server is setup and it is time to configure the TeamCity project.

Configuring the Build Step

Go into the setup of your project and add a new Build step.

Runner type: Command line

Step name: <can be whatever you like>

Execute step: Only if all previous steps were successful

Working directory: Set to the bin\Release (or bin\Debug) folder for your solution, depending on which one you are building/testing, this will most likely be Release

Run: Custom Script

Custom Script: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe myinsertingproject.tests.dll /logger:TeamCityLogger /UseVsixExtensions:true

Update 21/11/2013 –

Turns out you do need to set the working directory, also updated the custom script to include the full path to the vstest.console.exe (thanks to @kevindrouvin)

Summary and Resources

That should be all you need to do to get the tests running and reported back into TeamCity. Depending on how your environment PATH is set up, you might need to give the full path to vstest.console.exe in the Custom Script setting.


Testing SSRS Reports

In this post I’ll outline how I used a project from Codeplex to write tests against SSRS reports.

From the description on the project page: “Allows testing reports deployed to reporting server 2008R2 or 2012 in Native or SharePoint mode. The test cases are created in an xml file in declarative form.”

Getting Started

1. Download and extract the latest release from the Codeplex site

2. Open the Settings.xml file, in here you will need to set a few different options:

  • Path – the path to the report server; it will probably look something like this: http://my-reportserver/ReportServer
  • Mode – this can be Native or Sharepoint
  • UserName and UserPassword – these only need to be set if the report server requires authentication, otherwise you can keep the defaults
  • HttpClientCredentialType – this will be either Windows or Ntlm

Configuring Tests

Test cases are written in the Settings.xml file. The basic structure is this:

  <ReportServer Path="[Path]" Mode="[Mode]" UserName="@UserName" UserPassword="@UserPassword" HttpClientCredentialType="[CredentialType]">
    <Folder Name="[NameOfFolderOnReportServer]">
      <Report Name="[NameOfReportInReportFolder]">
          <Param Name="[ParamFromReport]" Value="[Value]" />
          <!-- More parameters can be specified -->
          <!-- Test cases go here -->
      </Report>
    </Folder>
  </ReportServer>
A few notes:

  1. The settings for the <ReportServer> attributes are outlined in the Getting Started section.
  2. I initially had problems with my reports because a few in the same folder had similar names, which caused the project to throw an exception. I didn’t figure this out until I downloaded the source code and stepped through it line by line.

Setting up a sample report

At present the only supported assertions are IsNotNull and AreEqual, and I noticed that the AreEqual assertion only works on numeric data – so forget trying to test whether columns contain text. You can also only test data that appears in the body of the report, which for most people probably isn’t going to pose any problems.

To write test cases you need to know or understand XPath, and how it relates to your SSRS reports.

All the reports that I’m testing use a Matrix (or Tablix as they are also known).

  1. As an example I created a new report which connects to an SSAS cube and pulls out four measures. I then added a matrix to the report and dragged in the measures from my data set. The important parts to note are the data set, the matrix/tablix in the design window, and the Row Groups and Column Groups – these matter when it comes to writing the XPath for the test cases.
  2. The next step is to run the report, and then save the output as XML.
  3. Then open the XML output. If you take a close look you will see a <Tablix2> node – this is the name that SSDT gave the matrix when I added it to the report – and you will also notice the <RowGroup_Collection> and <ColumnGroup_Collection> nodes, which correspond to the row and column groups from the first step. Using this XML output we can now write some tests!
  4. Open the Settings.xml file from the SSRS Unit Test project; you should already have the <ReportServer> node attributes configured. Add a new <Report> node to the <Folder> section. To test the report I’ve created for this post I added this:
     <Report Name="SummaryReport_ForAutomatedTests"></Report>
  5. Next we add a <TestCases> node, so we now have:
     <Report Name="SummaryReport_ForAutomatedTests"><TestCases></TestCases></Report>

Now we are ready to add a <TestCase>

NB: You will also need to deploy the report to the report server and folder you specified in the Settings.xml file.

Writing Test Cases

This is where knowing a bit of XPath comes in handy; however, we are helped a lot by having the report XML output.

Example: Checking that the Total matches the expected value (AreEqual):

Looking at the XML output I can see that Textbox14 corresponds to the field representing my Total. If I wanted to I could go back to SSDT and give this field a better name, but for now I’m happy keeping it as Textbox14. I also know from looking at the XML output that the value I expect to have returned is 3233.

So, I will add the following to the <TestCases> collection:

<TestCase Assert="AreEqual" Path="//Tablix2/RowGroup_Collection/RowGroup[1]/ColumnGroup_Collection/ColumnGroup[1]/Textbox14/@Total" Value="3233" />

  • Assert="AreEqual" tells SSRS Unit Test that we want to check the value of a field.
  • Path="//Tablix2/RowGroup_Collection/RowGroup[1]/ColumnGroup_Collection/ColumnGroup[1]/Textbox14/@Total" tells SSRS Unit Test how to traverse the XML to get the value of the field.
  • Value="3233" is the value that we are expecting.

Example: Checking that the Answered matches the expected value (AreEqual):

Next up we can do the same thing for the Answered field in the report. Once again looking at the XML output, we can take a guess that Textbox25 corresponds to the Answered field, so we can add the following:

<TestCase Assert="AreEqual" Path="//Tablix2/RowGroup_Collection/RowGroup[1]/ColumnGroup_Collection/ColumnGroup[1]/Textbox25/@Answered" Value="1000" />

Example: Checking that Failed has a value (IsNotNull):

Lastly, we can test that the Failed field has a value (IsNotNull). By once more looking at the XML output, we can see that Textbox27 corresponds to the Failed field. Our test case will look like this:

<TestCase Assert="IsNotNull" Path="//Tablix2/RowGroup_Collection/RowGroup[1]/ColumnGroup_Collection/ColumnGroup[1]/Textbox27/@Failed" />

Test Output

We can now run SSRS Unit Test and, if it is configured correctly, it should produce a file called something like “TestSuite yyyy-mm-dd hh-mm-ss.xml”. This file looks a lot like the Settings.xml file, but with an extra attribute.

You will see that a Passed attribute has been added to each <TestCase> we defined in the Settings.xml file; this will say either True or False, depending on whether the test case passed.

There is also an xsl transform that you can apply to the test results XML file, which produces a nice HTML file.