Rigorous management of Entity Framework migrations for multiple app deployments

This post follows a question that I asked on Stack Overflow several months ago. I did not receive a satisfactory answer at the time, so I will present the solution that we came up with at Keluro for this problem.

At Keluro, our client app products (for example the VSTO add-ins KMailAssistant and KBilling, and the SPA web apps) communicate with a REST web api. This web api uses Entity Framework 6.0 on top of SQL Server for persistence. However, our web api deployments are not necessarily multitenant. Indeed, we have multiple clients who do not want to share their infrastructure; they demand an isolated deployment, mainly for security and confidentiality reasons. To keep things simple, it was important for us to be sure that all of our client deployments share the same database schema, even if they went into production at different stages of the products' development. Consequently, we have N deployments of our web api with as many database catalogs. We also want all our web api deployments to be up-to-date with respect to a stable revision of the source code. To this end, a continuous build is in charge of updating all these web apis. Naturally, the associated databases also need to be updated automatically. Entity Framework is able, when the web app starts, to handle the migration process (if needed); the topic of this post is to propose a rigorous methodology for managing the migrations.

Entity Framework Code First supports migrations (see the documentation here). When reading the documentation, it is not clear how we should use migrations to answer the problem explained above. Indeed, we are told to call Update-Database from Visual Studio or to enable AutomaticMigrations. Let us explain how to use the set of features proposed by Entity Framework to handle multiple deployments in a clean and rigorous way. Note also that this approach works well for local databases that are deployed with your ‘rich client’ application, for example with SQL Server CE, which we use for our VSTO add-ins KMailAssistant and KBilling.

TLDR; Determine a “stable production schema”: the database schema corresponding to the web app code for a stable branch/tag in your source control. Avoid the so-called AutomaticMigrations and always create new code-based migrations using Add-Migration against an empty database that has been brought to the “stable production schema” by applying all existing migrations. Do not use the Update-Database command to update your database in production; let the framework do it for you at startup using the MigrateDatabaseToLatestVersion initializer. Then, when you release a new environment, start the application with an empty database. You will also have to take care with version control when working with feature branches.

Generating clean code based migrations

In the following, I will assume that you have read the Entity Framework documentation. It is very important that you decide what the “stable database schema” is. It corresponds to the schema determined by the source code (recall that we use EF Code First) for the selected revision in your stable branch or tag. We advise you to avoid AutomaticMigrations. Actually, AutomaticMigrations does not mean that the migrations created will be applied automatically (we will discuss how to do that later). It means that the needed migration, which is the piece of SQL required to change the database between its actual state and what it should be, will be generated and applied on the fly. This is dangerous in our situation: think of our multiple deployments, which have not all been started at the same time. With automatic migrations, some migrations could have been generated and applied on older client environments while you have to push a new environment right now. Consequently, the history of automatically generated migrations could differ even for the same revision of the source code.

The best solution to avoid this situation is that all deployments share the same series of code-based migrations. On a stable source code revision, the succession of existing code migrations applied to an empty database produces a database with the so-called “stable database schema” introduced above. Then, if a new client is deployed, an empty database (no tables, no data) is created and all existing migrations are applied automatically by Entity Framework when the web app starts for the first time. For example, suppose that we have the following list of migrations: 201401050000000_MigrationA (January 5th 2014), 201503000000000_MigrationB, 201504120000000_MigrationC, 201511150000000_MigrationD. This means that, if a client web app and its database are put in production in March 2015, all the migrations existing at that date will be applied (i.e. MigrationA and MigrationB).

When using code-based migrations, it is important to keep in mind that the migrations are C# files that represent the SQL instructions to be applied (e.g. drop a table, add a column, etc.). In addition, a given database also has a table _MigrationHistory which records all the migrations that have been applied to it. Then, if all your web apps are up-to-date with respect to the same web app source code, you will get exactly the same rows in the _MigrationHistory table of all your databases.
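To fix ideas, here is a sketch of what such a generated migration file can look like (the table and column names are purely illustrative, not taken from our actual schema):

```csharp
// Illustrative example of a code-based migration generated by Add-Migration.
public partial class MigrationD : DbMigration
{
    public override void Up()
    {
        // SQL instructions applied when migrating the database forward
        AddColumn("dbo.Invoices", "DueDate", c => c.DateTime(nullable: true));
        CreateIndex("dbo.Invoices", "DueDate");
    }

    public override void Down()
    {
        // SQL instructions that revert the migration
        DropIndex("dbo.Invoices", new[] { "DueDate" });
        DropColumn("dbo.Invoices", "DueDate");
    }
}
```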

When an application starts, to automatically migrate the associated database to the latest migration, you have to run this code at startup (e.g. in Global.asax.cs for an ASP.NET web app).

Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());

Code based list of migrations in Visual Studio

Keep a clean list of code based migrations

Let us explain how to keep a clean list of code migrations. I suggest creating a clean code-based migration any time a database schema change is required. To do so, you will use the Add-Migration command in the Package Manager Console in Visual Studio. Remember that if you do not specify a database connection string in the PowerShell command, the connection string used will be the first one found in your app.config or web.config file. This selected database may not have the proper “production schema”, which is error prone. My advice is to create, solely for the generation of this new code migration, a database with no data but with the actual “production schema”. This is extremely simple and is also a sanity check of your existing migrations: create a new empty database, in one click in Visual Studio with SQL Server Express (see picture below).

Create empty database from Visual Studio

To update this database schema, take the connection string (right click on your database > Properties), then update the database by targeting the last migration, e.g. 201511150000000_MigrationD, with the following command:

Update-Database MigrationD -ConnectionString "<yourConnectionString>" -ConnectionProviderName "System.Data.SqlClient" -Verbose

Now this local database is “up-to-date” and you can generate your new migration named MigrationE (choose something more meaningful in your case) with the command:

Add-Migration MigrationE -ConnectionString "<yourConnectionString>" -ConnectionProviderName "System.Data.SqlClient" -Verbose

Once the migration files are generated, it is recommended to read them and make sure they correspond to the changes you intended to introduce. Now they are ready to be committed in a single, clean commit. As we have seen, the migration is prefixed with a number which corresponds to its generation date (e.g. 201505010000000_MigrationE). This number is used by the MigrateDatabaseToLatestVersion database initializer and it can be a problem when not handled carefully with version control.

Migrations and version control

Trouble may arise when branching; to see this, let us explain how Entity Framework applies the code-based migrations. Have a look at the _MigrationHistory table: the rows are the migrations, and the date when each migration was generated is also there, because it is included in the name of the migration. Entity Framework takes the date of the most recent migration applied in the _MigrationHistory table and applies all migrations in the web app code that have been generated later.

The _MigrationHistory table generated by Entity Framework

Do you see the potential problem? Say that you have created two feature branches, X and Y, and that you have generated a migration in each of them: first in X, then in Y. If, for some reason, you merge Y into your stable branch before X, then the migration of X, whose generation date is older than Y's, will never be applied!
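Conceptually, the behavior described above amounts to the following selection logic (a simplification for illustration, not Entity Framework's actual code), which shows why a migration merged late with an older timestamp is skipped:

```csharp
// Simplified sketch of pending-migration selection; migration ids sort by their date prefix.
var lastApplied = historyRows.Max(); // most recent row of _MigrationHistory, e.g. "201511150000000_MigrationY"
var pending = codeMigrations
    .Where(id => string.CompareOrdinal(id, lastApplied) > 0) // only later timestamps
    .OrderBy(id => id);
// A migration from branch X stamped earlier than lastApplied never shows up in 'pending'.
```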

To avoid this as much as possible, I suggest that you generate a minimum of migrations and, for each newly generated migration, put it in a dedicated commit with nothing but the code of the migration and a clear indicator in the commit log (e.g. a “[MIGRATION]” tag). Note that git's interactive rebase command can be useful here (take care when rebasing pushed commits!). For example, you can remove all intermediate [MIGRATION] commits and regenerate a single one, or, if you decide that a migration (not yet deployed!) is no longer needed, you can drop the commit, etc. I think it is wise to name a “database master” in your dev team. This person should be the one responsible for merging branches involving database migrations; they will be aware of the potential problems with migration generation dates and will know how to fix them.

TeamCity on Windows Azure VM part 2/2 – enabling SSL with reverse proxy

In the previous post we explained how to install a TeamCity server on a Windows Azure Virtual Machine, using an external SQL Azure database for TeamCity's internal database. The first post ended with a functional TeamCity web app that was not visible from the outside. The objective of this second post is to show how to secure the web app by serving the pages over SSL. As in the previous post, I will detail the whole procedure from scratch, so this tutorial is accessible to an IIS beginner. We are going to use a reverse proxy technique on IIS (Internet Information Services, the Microsoft web server). I would like to thank my friend Gabriel Boya, who suggested the reverse proxy trick to me. Let us remark that it could also be a good solution for serving any other non-IIS website running on Windows over SSL.

If you have followed the steps of the previous post, then you should have a TeamCity server that is served on port 8080. You should be able to view it with the IE browser, inside the VM's Remote Desktop session, at localhost:8080. Let us start this post by answering the following question:

What is a reverse proxy and why use it?

If you try to enable SSL in the Tomcat server that serves TeamCity, you are going to suffer for a mediocre result. See for instance this tutorial and all the questions it brought up. Even if you manage to import your certificate into the Java certificate store, you will have problems with WebSockets…

So I suggest implementing a reverse proxy. The name sounds complicated but it is a very basic concept: it is simply another website that acts as an intermediary communicating with your primary website (in our case the Tomcat server). Here, we are going to create an empty IIS website, visible from the outside on ports 80 and 443. It will redirect the incoming requests to our TeamCity server, which is listening on port 8080.

Representation of the reverse proxy for our situation

Install IIS and the required components

First we have to install and set up IIS with the following features on our Windows Server 2012. It is a very easy thing to do. Search for the Server Manager in Windows, then choose to configure this local server (this virtual machine) with a role-based installation.


Add roles and features

Then, check the “Web Server (IIS)” as a role to install.


Install IIS

Keep the default features.


Keep default installation features

Recent versions of TeamCity use WebSockets. To make them work, our reverse proxy server will need them too: check the WebSocket Protocol.


Check the WebSocket Protocol

Check that everything is right there… and press Install.


Check that everything is prepared for installation

Now that we have installed IIS with our required features (WebSocket), it is accessible from the menu. I suggest pinning it to the taskbar if you do not want to search for it each time you need it.


IIS well installed

Install URL rewrite module

The simplest way to set up the reverse proxy is to have the IIS URL Rewrite module installed. Microsoft web modules should be installed using the Microsoft Web Platform Installer. If you do not have it yet, install it from there.

Then, in the Web Platform Installer look for the URL Rewrite 2.0 and install it.


URL Rewrite 2.0 installation with Web Platform Installer

The Reverse proxy website

OK, now we are going to create our proxy website. IIS has kindly created a default website and its associated pool. Delete them both without any mercy and create a new one (called TeamcityFront) with the following parameters. Remark: there is no website and nothing under C:\inetpub\wwwroot; this is just the default IIS website directory.


New TeamCityFront IIS website that point to the inetpub/wwwroot folder

Create the rewrite rule

We are going to create a rule that basically transforms every incoming HTTP request into a request targeting localhost:8080 locally. Open the URL Rewrite menu that should be visible in the window when you click on your site, and create a blank rule with the following parameters.


URL Rewrite for our TeamcityFront website


second part of the rewrite rule
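For reference, the rule created through the GUI ends up in the site's web.config. A sketch of what it may look like is below; exact attribute values may differ from your setup, and rewriting to another host:port like this assumes the proxy feature (Application Request Routing) is enabled on the server:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- ReverseProxy: forward every incoming request to the local Tomcat/TeamCity -->
      <rule name="ReverseProxy" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://localhost:8080/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```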

Now let us go back to the Azure management portal and add two new endpoints in Windows Azure: HTTP on port 80 and HTTPS on port 443.


Add HTTP on port 80 and HTTPS on port 443 with the Azure Management Portal for our VM

Now check that you are able to browse your site at testteamcity.cloudapp.net from the outside. But you could object: what was the point? Indeed, we could have set up TeamCity on port 80 and added the HTTP endpoint on Azure, and the result would have been the same. You would be right, but remember, our goal is to serve securely with SSL!

Enabling SSL


Certificate installation

To enable SSL you need a certificate; you can buy one at Gandi.net for example. When you get the .pfx file, install it on your VM by double-clicking it and put the certificate in the Personal store.

An SSL certificate is bound to a domain, and you do not own cloudapp.net, so you cannot use the testteamcity.cloudapp.net subdomain of your VM. You will have to create an alias, for example build.keluro.com, and create a CNAME record that points to the VM.

Here is the procedure if you manage your domains in Office365.


Creating a CNAME subdomain in Office365 that points to the *.cloudapp.net address of your VM

Now, in IIS, click on your site and edit the SSL bindings: add the newly created subdomain build.keluro.com and use the SSL certificate, which should be recognized automatically by IIS.


Create an HTTPS binding for the proxy server under IIS

At this stage, you should be able to browse your site on https from the outside with a clean green lock sign.

Browsing in security with https

Redirecting HTTP to HTTPS

You do not want your users to continue accessing your web app insecurely over plain HTTP, so a good practice is to redirect the HTTP traffic to the secure HTTPS endpoint. Once again, we will benefit from the reverse proxy on IIS: simply create a new URL rewrite rule.


An HTTP to HTTPS redirect rewrite rule
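As a sketch, such a redirect rule might appear as follows in web.config (the rule name and host are illustrative for this tutorial's build.keluro.com alias):

```xml
<!-- HttpsRedirect: must be evaluated before the reverse proxy rule -->
<rule name="HttpsRedirect" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="https://build.keluro.com/{R:1}" redirectType="Permanent" />
</rule>
```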

Then place your HTTPS redirection rule before the ReverseProxy rule.


Place the HTTPS redirection rule before the ReverseProxy rewrite rule

From now on, when you type http://build.keluro.com or simply build.keluro.com, you'll be automatically redirected to HTTPS and the job is done!


A working website that redirects automatically to https

TeamCity on Windows Azure VM part 1/2 – server setup

TeamCity is a continuous integration (CI) server developed by Jetbrains. I have worked with CruiseControl and Hudson/Jenkins and, from my point of view, TeamCity is beyond comparison. I will not detail all the features that made me love this product; let me sum up by saying that TeamCity is both powerful and easy to use. It is so easy to use that you can let non-technical colleagues visit the web app, trigger their own builds, collect artifacts, etc.
TeamCity 9.1 is developed on the Tomcat/JEE tech stack. However, it works on Windows and, in this post, I will explain how to set up a proper installation on a Windows VM hosted on Azure using Azure SQL. Precisely, I will detail the following points:

  • Installing TeamCity 9.1 on a Windows Server 2012 Virtual Machine hosted on Azure with an SQL Azure database.
  • Serving your TeamCity web app securely through HTTPS. To this end, we will set up a reverse proxy on IIS (Internet Information Services). I will also detail how to make the WebSockets, used by the TeamCity GUI, work with IIS 8.0.

This first post is dedicated only to the first point. The second one will be the topic of the next post.

Installing TeamCity 9.1 on an Azure VM

Creating the VM

First of all, start by creating the VM. I recommend using manage.windowsazure.com; I personally think that portal.azure.com is not very clear for creating resources at the time of writing. You can use the ‘Quick create’ feature. There is nothing particular, no special tricks here: Azure will create a virtual machine with Windows Server 2012. Note that I recommend provisioning an instance with more resources while you are configuring. Indeed, we are going to use Remote Desktop for configuring the TeamCity web app and the IIS server, and when you allocate more resources (D1 or D3 for example), the Remote Desktop will be more responsive, so you'll save a lot of time. Of course, you can downgrade later to minimize hosting costs.

‘Quick create’ a Windows Server 2012 VM on Azure

Creating the Azure SQL database and server

Then you will have to create an Azure SQL server with an SQL database. This time, on the management portal, go to SQL databases and click ‘Custom create’. For the SQL server, choose ‘new SQL server’. You can select the minimal configuration: a Basic tier with a maximum size of 2GB. Important: choose Enable V12 for your SQL server; non-V12 servers are not supported by TeamCity. At the time of writing, it seems impossible to upgrade a database server to V12 after it is created, so do not forget to create a V12 instance. You will be asked to provide credentials for the administrator of the database. Keep your password: you will need it later when configuring the database settings in TeamCity.

TeamCity installation

When this is done, connect to your Azure VM using Remote Desktop. Now we are going to install TeamCity on the Azure VM. Precisely, we will install it to run under the admin account that we created along with the VM. I strongly recommend running TeamCity with admin privileges, otherwise you may encounter many problematic errors.

On the VM, via Remote Desktop, start Internet Explorer and visit the Jetbrains TeamCity website: https://www.jetbrains.com/teamcity. You may have to add this site as a trusted site in order to visit it from IE; the same goes for downloading the file. To change the Internet security settings, click the gear at the top right corner of IE, then Internet Options > Security, etc.

Once downloaded, start the installer. You may install it almost anywhere on the VM disk; the default, C:\TeamCity, is fine.

On the next installation screen, you will be asked which components to install. You have to install both the Build agent and the Server to have a working configuration. Indeed, in this post we will perform a basic TeamCity installation where the build agent and the continuous integration server are installed on the same machine. However, keep in mind that TeamCity is a great product that enables you to distribute many build agents over several machines; they will be in charge of executing heavy builds simultaneously.

Install agent and build server

Later, you'll be asked for the port: set it to 8080 (for example) instead of the default 80. Indeed, we do not want our TeamCity integration website to be served without SSL, but serving the website through https/443 will be the topic of the next post… Finally, choose to run TeamCity under a user account (the administrator of the VM; no need to create a new user).

After that, the TeamCity server starts and an Internet Explorer window on localhost:8080 opens. It will ask you for the DataDirectory location on the TeamCity server machine (choose the default).

Database configuration

Now comes the tricky part: configuring the database. As explained by Jetbrains in many places, do not keep the internal database for your installation. To use an Azure SQL database, choose MSSQL Server.

Database connection settings in TeamCity

You will have to install the JDBC 4.0 driver that is needed by TeamCity to work with an SQL Server database. It is a very easy task that consists of putting a jar file (sqljdbc4.jar) under the DataDirectory. It is well documented in the Jetbrains link; consequently, I will just add that the DataDirectory is a hidden folder and the jar goes by default into C:\ProgramData\JetBrains\TeamCity\lib\jdbc.

Now we have to fill in the form for the database settings. We will need the connection string that is available on the Azure portal. If you select the ADO.NET representation, it should look like Server=tcp:siul9kyl03.database.windows.net,1433;Database=teamcitydb;User ID=teamcitydbuser@siul9kyl03;Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;


Entering the database connection settings.

Using a connection string formatted as above, you will enter the following entries when filling in the TeamCity form (see screenshot).
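Equivalently, the values end up in the database.properties file under the DataDirectory. Assuming the example connection string above, it would look roughly like this (the host, database, and user names are those of the example, not real ones):

```properties
# Sketch of database.properties for an external MSSQL database
connectionUrl=jdbc:sqlserver://siul9kyl03.database.windows.net:1433;databaseName=teamcitydb
connectionProperties.user=teamcitydbuser@siul9kyl03
connectionProperties.password={your_password_here}
```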

Note that you may be rejected because your SQL database server does not allow Windows Azure services (and thus this VM) to access it. You have to grant this manually in the Azure management portal, in the Configure menu of the SQL server: set ‘Allowed Windows Azure services’ to ‘yes’.

Then you will meet a last step that asks you to create an admin user for your TeamCity web application.


Final step! create an administrator for TeamCity web app

Now the first step of our installation tutorial is completed and we have a TeamCity server set up on our Azure VM. Visiting your TeamCity website at teamcityserver.cloudapp.net or even teamcityserver.cloudapp.net:8080 is not possible yet: we set up the server on port 8080 and this endpoint is blocked by your Windows Azure VM. In the next part, we will see how to serve our integration web app properly and securely through port 443.

Before going to the second part, I suggest that you check that TeamCity works well locally on the VM by creating a very simple project. In our case, we created a build configuration that just checks out source from a public GitHub repository and performs a PowerShell task that says “hello world”. When triggering the build manually (Run button), the build executes successfully. This means that the setup of the TeamCity server is complete; configuring its web access will be the rest of our job.


Our first build has passed

PS: do not forget to downgrade the resources associated with your VM instance when you are done with the configuration. There is no need to pay for a large VM if you do not need such performance.

My WPF/MVVM “must have” part 3/3 – On view model unit tests organization

One of the great advantages of the MVVM pattern is that it allows you to test the graphical user interface logic of the application through the view models. The latter are in charge of ‘presenting’ the data to your screen controls, the so-called views. For a very short introduction to MVVM, I recommend this article. This post is the last one of the series on WPF/MVVM and it deals with the organization of the view model unit tests in a large project. I have written a blog post on how MVVM can let you test parts of your logic that you might not have thought testable before. The present post does not aim at presenting ‘how’ to write unit tests but rather at presenting a way to organize the numerous unit tests.

When you implement unit tests for view models, you must test end-user actions. For example, you will implement test assertions for the click command on a button. Unit tests should be organized so that when they fail you can easily find out the user action or presentation state of the data that you are testing. To this end, we will use a structure of unit tests that relies extensively on class inheritance. Although MSTest is not known to be the best framework for tests with inheritance, it remains the framework that I use; I do believe that one can find an even more elegant approach with other test frameworks. The approach presented here is mine. I do not pretend it is the best one; it is just an example of an organization that has proven to work in large projects.

The mocking framework used is Moq. A mocking framework is not 100% mandatory for TDD. However, every time I have seen a C# project that was not using a mocking framework, the unit tests were not really unit tests: sometimes they were badly written and did not assert relevant things, and sometimes they were more or less integration tests.

Let us get back to our organization of view model tests: the basic idea is taken from the book Professional Test Driven Development in C#. I recommend it if you are unfamiliar with unit testing; it is one of the best references that I have found on the subject of TDD and it contains many relevant real-life examples.

So the idea is to have a base class called Specification.

public abstract class Specification
{
    protected Specification()
    {
        Extablished_Context();
        Because_Of();
    }

    public abstract void Because_Of();

    public abstract void Extablished_Context();
}

This is the base class that all view model unit test classes should inherit from. I acknowledge that there are virtual calls in the constructor, which is not a very neat pattern. But as you probably know, C# differs from Java and C++: a virtual call always invokes the most derived implementation of a virtual method, even in constructors.
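A tiny self-contained sketch of this C# behavior (the class names here are purely illustrative, not part of the test framework):

```csharp
public abstract class Base
{
    protected Base()
    {
        // In C#, this call dispatches to the most derived override,
        // even though we are still inside the base constructor.
        Describe();
    }

    public abstract void Describe();
}

public class Derived : Base
{
    public override void Describe() => Console.WriteLine("Derived.Describe called from Base ctor");
}
```

Constructing `new Derived()` prints the message from the derived override; in C++, by contrast, the base version would be called during base construction.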

Another thing that you must know: for each unit test execution, a new instance of the test class is created, even if the tests belong to the same test class. Consequently, we get good test isolation without even using the [TestInitialize], [TestCleanup], or [ClassInitialize] attributes.

Let us go back to our Specification class; you noticed that it is abstract. To continue the organization, for each view model you create a directory under the ViewModels directory of the test project. In this directory, create a new abstract class derived from the Specification class. This will be the base class for all tests of a given view model; we prefix this base class with ‘General’, e.g. for a view model called AdminViewModel, we created the GeneralAdminViewModelTests class.

// NB: the generic type parameters of the mocks were lost when this listing was
// published; the interface names below are illustrative reconstructions.
public abstract class GeneralAdminViewModelTests : Specification
{
    protected AdminViewModel AdminViewModel;
    protected Mock<IBusinessObjectFactory> BusinessObjectFactory = new Mock<IBusinessObjectFactory>();
    protected Mock<IIoService> IIoservice = new Mock<IIoService>();
    protected Mock<ISettingsProvider> SettingsProvider = new Mock<ISettingsProvider>();
    protected Mock<ISubVmFactory> SubVmFactory = new Mock<ISubVmFactory>();

    protected Mock<IFileNameGeneratorViewModel> FileNameGeneratorViewModel = new Mock<IFileNameGeneratorViewModel>();
    protected Mock<ISummaryViewModel> SummaryViewModel = new Mock<ISummaryViewModel>();
    protected Mock<IMainViewModel> MainViewModel = new Mock<IMainViewModel>();
    protected Mock<IBootstrapWindow> Bootstrapwindow = new Mock<IBootstrapWindow>();

    protected abstract void SetupMocks();

    public override void Extablished_Context()
    {
        SetupMocks();
        AdminViewModel = new AdminViewModel(MainViewModel.Object,
            /* … the other dependencies, taken from the mocks above … */);
    }
}

In this GeneralAdminViewModelTests class, we instantiate the System Under Test (SUT), in this example AdminViewModel. It is assigned to a protected field with the same name; this is the only ‘true’ implementation. The role of GeneralAdminViewModelTests is to pass the dependencies, which can be other view models (parent view models) or other services. The dependencies are “fresh” mock objects, without any setup. The configuration of the setups depends on their usage, which will be specified in the ‘true’ derived child test class; to this end we declare a protected abstract method SetupMocks. There is a big benefit to performing the instantiation of the SUT in one base class: whenever you add or remove a dependency, you have only this base class implementation to change. This would be a tedious operation if the SUT were instantiated in multiple places.

Personally, I did not manage to use one fake implementation for all tests and inject them using an IoC container. Actually, when you write unit tests you always need a different behavior from your mocks depending on the test specification. For example, if you have an IoService that abstracts disk I/O operations, sometimes you will want to fake a failure, an exception being thrown, etc. Therefore, my approach is to set up the mocks with only the behavior required for the tests. However, it is possible to factorize your mock configuration by using some ‘usual mocks’ setups.

Let us detail the actual implementation of the unit tests. You must separate your tests according to “existing” configurations. In my example, the AdminViewModel must deal with two situations (among others): when there is some existing data (called matters) and when there is none. That is why, under the AdminViewModelTests directory, we created two files, “WhenThereAreThreeMatters.cs” and “WhenThereAreNoMatter.cs”. See the resulting file tree structure in the picture below.

An example of view model test organization for the ‘AdminViewModel’

Once again, the trick is to play with inheritance. At the top of the file, we declare once again a base class that will be the mother class of all test classes in this file. In this situation, we set up the mocks so that they fake the existence of three matters in the database.

Once the mocks are set up, we can implement the actual test classes, which act as the specification of the view models. They are the most derived classes (they could be marked as sealed); they are decorated with the [TestClass] attribute, and with the [TestMethod] attribute for the tests. In the code listing below, there are two examples. First WhenNothingIsDone, which asserts the following facts: when nothing is done, there are still three matters displayed, the view model is not ‘dirty’ (no changes from the end-user), and it is still possible to delete a matter. Then we have another class that tests the behavior of our AdminViewModel, with three existing matters, when we click Delete (on a matter): then there are only two matters displayed and the view model is dirty. You may think of other test classes asserting the behavior of this view model under these circumstances; for example, you may assert what is going on when the end-user clicks Save (after clicking Delete), etc.

Remark that we needed more setups for the mocks in the click-Delete implementation, so it is possible to override the SetupMocks method to add some behavior to the mocks. See the examples in the C# code listing below.

public abstract class AdminViewModelWithThreeExistingMatters : GeneralAdminViewModelTests
{
    public const string FirstMatter = "QSG vs Rapid Vienna";
    public const string SecondMatter = "FCN vs Juventus";
    public const string ThirdMatter = "Bordeaux vs Milan";
    protected Guid SecondMatterId;
    protected Guid FirstMatterId;

    protected override void SetupMocks()
    {
        var matter1 = MocksHelpers.GetMockMatter(FirstMatter);
        var matter2 = MocksHelpers.GetMockMatter(SecondMatter);
        var matter3 = MocksHelpers.GetMockMatter(ThirdMatter);
        SecondMatterId = matter2.Item1.Id;
        FirstMatterId = matter1.Item1.Id;
        BusinessObjectFactory.Setup(c => c.GetOrderedSharedMattersInfos())
            .Returns(new[] { matter1.Item2, matter2.Item2, matter3.Item2 });
        BusinessObjectFactory.Setup(c => c.GetMatterFromId(It.IsAny<Guid>())).Returns((Guid guid) =>
        {
            if (guid == matter1.Item1.Id) return matter1.Item1;
            if (guid == matter2.Item1.Id) return matter2.Item1;
            if (guid == matter3.Item1.Id) return matter3.Item1;
            throw new NotImplementedException();
        });

        this.Bootstrapwindow.Setup(c => c.ShowCancelableDeleteMatterWindow()).Returns(true);
    }
}

[TestClass]
public class AdminViewModelWhenNothingIsDone : AdminViewModelWithThreeExistingMatters
{
    public override void Because_Of() { /* nothing is done by the end-user */ }

    [TestMethod]
    public void ThenTheSelectedMatterShouldBeTheFirstOne()
    { Assert.AreEqual(FirstMatter, AdminViewModel.SelectedMatter.Name); }

    [TestMethod]
    public void ThenTheAvailableMattersShouldBeThree()
    { Assert.AreEqual(3, AdminViewModel.MatterInfos.Count()); }

    [TestMethod]
    public void ThenTheHasChangedShouldBeFalse()
    { Assert.IsFalse(AdminViewModel.HasChanged); }

    [TestMethod]
    public void ThenTheCanClickDeleteIsSetToTrue()
    { Assert.IsTrue(this.AdminViewModel.CanClickDelete); }
}

[TestClass]
public class AdminViewModelWhenClickDelete : AdminViewModelWithThreeExistingMatters
{
    public override void Because_Of()
    { AdminViewModel.ClickDelete.Execute(null); }

    protected override void SetupMocks()
    {
        base.SetupMocks();
        this.BusinessObjectFactory.Setup(c => c.DeleteSharedMatterAsync(It.IsAny<Guid>()))
            .Returns(() => UsualMocks.GetSuccesfullSyncTask());
        this.BusinessObjectFactory.Setup(c => c.UpsertSharedMatterAsync(It.IsAny<Guid>()))
            .Returns(() => UsualMocks.GetSuccesfullSyncTask());
    }

    [TestMethod]
    public void ThenTheDataBaseDeleteMethodShouldBeInvokedIfClickingSave()
    {
        AdminViewModel.ClickSave.Execute(null);
        BusinessObjectFactory.Verify(c => c.DeleteSharedMatterAsync(It.IsAny<Guid>()), Times.Once);
    }

    [TestMethod]
    public void ThenNumberOfAvailableMattersShouldBeSmaller()
    { Assert.AreEqual(2, AdminViewModel.MatterInfos.Count()); }

    [TestMethod]
    public void HasChangedIsSetToTrue()
    { Assert.IsTrue(AdminViewModel.HasChanged); }
    [TestMethod]
    public void ThenTheSelectedMatterShouldBeTheSecondOne()
    { Assert.AreEqual(SecondMatter, AdminViewModel.SelectedMatter.Name); }

    [TestMethod]
    public void ThenTheFactoryShouldBeInvokedForAllTabs()
    {
        // once when the view model is built, once more after the deletion
        // changes the selected matter (mock name and argument types inferred,
        // the original listing was truncated)
        ViewModelFactory.Verify(
            c => c.GetFileNameGeneratorViewModel(It.IsAny<Matter>(), It.IsAny<bool>()), Times.Exactly(2));
        ViewModelFactory.Verify(
            c => c.GetSummaryViewModel(It.IsAny<Matter>(), It.IsAny<bool>()), Times.Exactly(2));
    }
}

Finally, one particularly helpful trick is to use the folder namespacing convention: the namespace of each class follows its folder path. Remark that a Resharper feature enables you to enforce this convention for all files in a given folder, in the current project, or even in the whole solution. Then our class WhenDeletingAMatter has the complete name .ViewModels.AdminViewModelTests.WhenThereAreThreeMatters.WhenDeletingAMatter. In your Resharper Test Session or Visual Studio Test Explorer, you can select ‘group by namespaces’; then you have a very neat and readable organization of your unit tests, as shown in the screenshot below.

When selecting the option ‘GroupByNamespaces’ then the readability of the test organization becomes very clear

Implementing the OAuth 2.0 flow in the app for Office sandboxed environment

EDIT: this approach is outdated. You can have a look at the sample we released, and especially at the README section where we compare the different approaches.

In this post I will explain how I implemented the OAuth 2.0 flow in the app for Office sandboxed environment. The objective of this post is to discuss with others whether the proposed implementation is satisfactory. Do not hesitate to criticize it and propose alternatives.

Following the MSDN documentation, the app for Office model is the new model for

engaging new consumer and enterprise experiences for Office client applications. Using the power of the web and standard web technologies like HTML5, XML, CSS3, JavaScript, and REST APIs, you can create apps that interact with Office documents, email messages, meeting requests, and appointments.

Personally, I find this extremely promising; I even released my first app last year: Keluro web analytics, which is distributed here on the store. I also wrote a small article dealing with apps for Office on the Keluro blog.

Let us get straight to the point of this tech article. Apps for Office aim at building connected services. For example, Keluro web analytics fetches data from Google Analytics within MS Excel. I am working on a new project that must communicate with other Office 365 APIs. In all cases, you must use the OAuth 2.0 flow to grant your app the right to communicate with those APIs. Sometimes there are client libraries that handle the technicalities of the OAuth flow: for my first app I used the Google JavaScript API, and for the second one I used the Azure Active Directory adal.js library.

However, there is a problem. The apps for Office run in a sandboxed environment that prevents cross-domain navigation. In the case of the Office web apps, this sandboxed environment is an iframe with sandbox attributes. I asked for more information in this MSDN thread. In all cases, the consequence is that the usual OAuth 2.0 flow, which redirects you to https://login.microsoft.com/etc. or https://apis.google.com to prompt the user to allow access, will not work. In addition, the pages served by those domains cannot be ‘iframed’: either they are served with the X-Frame-Options: DENY header or some JS ‘frame busting’ is embedded.
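As an aside, the ‘frame busting’ technique mentioned here can be sketched in a few lines of JavaScript; the `isFramed` helper below is my own naming for illustration, not code taken from any of those pages:

```javascript
// Classic 'frame busting': a page that refuses to be displayed in an iframe
// checks whether it is the topmost window, and escapes the frame otherwise.
// (isFramed is a hypothetical helper name, used here for illustration.)
function isFramed(win) {
    // inside an iframe, win.top is the outermost window and differs from win.self
    return win.top !== win.self;
}

// in the protected page, one would then run something like:
// if (isFramed(window)) { window.top.location = window.self.location; }
```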

I will get straight to the only working solution that I found. Actually, it is the same recipe that I used for both the Google APIs and ADAL.

It looks like the apps for Office run with the following sandbox attributes: sandbox=”allow-scripts allow-forms allow-same-origin ms-allow-popups allow-popups”. Therefore, we are allowed to use popups, and such a popup is not restricted by the cross-origin limitations. What we will do is open a popup and carry out the OAuth flow in this popup; when it is done, we will get back the necessary information in the main app iframe. Here is a summary with a small graphic.

An overview of the solution found for the OAUTH2.0 in sandboxed env

Then, you got it: we will use a special callback.html file (basically blank, containing only JavaScript) as the return URL. Here is its code.

<!--usual html5 doctype and meta-->
<body>
    <div id="oauthurl"></div>
    <script type="text/javascript">
        var closeWindowCheck = function () {
            //transmit the hash of the return url to the parent iframe
            window.oauthHash = location.hash;
            //wait until the parent window gives its approbation to close
            if (window.canclosewindow === true) {
                window.close();
            }
        };
        var interval = setInterval(closeWindowCheck, 200);
    </script>
</body>

Then, in the main iframe, when it is time to start the OAuth flow, you have a piece of JavaScript that looks like the following.

//this javascript lives in the sandboxed environment
//urlNavigate is the OAuth 2.0 url of the api you are targeting, with its parameters
//use callback.html as redirect uri.
var childWindow = window.open(urlNavigate, 'Auth window', 'height=500,width=500');
var interval;
var pollChildWindow = function () {
    if (childWindow.closed === false) {
        var oauthHash;
        try {
            //this throws an exception until the popup returns to
            //callback.html served in our domain.
            oauthHash = childWindow.oauthHash;
        } catch (e) {
            return; //the popup is still on the third-party domain, keep polling
        }
        if (typeof oauthHash === "string") {
            //childWindow.close(); // does not work with Chrome
            // -> that is why we have the setInterval in callback.html javascript
            childWindow.canclosewindow = true; // allow the child window to close itself
            var startWith = oauthHash.indexOf('#id_token');
            var currentRef = window.location.href.split('#')[0];
            //you get the id_token and other information from OAuth 2.0:
            //do whatever you have to do with it in your app.
            //YOUR LOGIC HERE
            clearInterval(interval);
        }
    } else {
        clearInterval(interval); //the end-user closed the popup
    }
};
interval = setInterval(pollChildWindow, 200);
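For completeness, once `oauthHash` has been read from the popup, it still has to be split into its parameters (`id_token`, `state`, etc.). Here is a minimal sketch of such a parser; the helper name `parseOAuthHash` is mine, not part of the post’s code:

```javascript
// parse an OAuth 2.0 implicit-flow fragment such as
// '#id_token=eyJhbGci...&state=xyz&expires_in=3600' into a plain object
function parseOAuthHash(hash) {
    var result = {};
    hash.replace(/^#/, '').split('&').forEach(function (pair) {
        if (!pair) { return; } // skip empty segments
        var kv = pair.split('=');
        result[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
    });
    return result;
}
```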

I personally find this implementation unsatisfactory because of the use of the popup (which may be blocked). What do you think of it? Can you think of an alternative? I tried many things using the postMessage API, but it does not work with IE in the sandboxed environment.

This is the implementation discussed in this ADAL.js issue. You may also find a setup of a sandboxed environment and some implementation attempts (without apps for Office, for simplicity) on a remote branch of my Adal.js fork.
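To illustrate the postMessage variant mentioned above (which, again, I could not get to work with IE in the sandboxed environment): callback.html would post its hash to the opener, and the app would have to validate the message origin before trusting it. The helper name and the origin value below are placeholders of mine, not code from the fork:

```javascript
// in callback.html, one would post the fragment back to the opener:
// window.opener.postMessage(location.hash, 'https://yourapp.example.com');

// in the app, never trust a message without checking where it comes from
function isTrustedOAuthMessage(event, expectedOrigin) {
    return event.origin === expectedOrigin && typeof event.data === 'string';
}

// window.addEventListener('message', function (e) {
//     if (isTrustedOAuthMessage(e, 'https://yourapp.example.com')) {
//         // parse e.data (the OAuth hash) here
//     }
// });
```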