
My TransientFaultHandling utility classes for DocumentDB

Keluro uses DocumentDB extensively for data persistence. However, its impressive scaling capabilities come at a price: with your queries or your commands you may exceed the amount of request units (RU) you are granted. In that case you will receive a 429 error “Request rate too large”, or a DocumentClientException if you use the .NET SDK. It is then your responsibility to implement retry policies to avoid such failures and to wait the proper amount of time before retrying.

Edit: see the comment below. Release v1.8.0 of the .NET SDK introduced settings for these retry policies.
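
For reference, here is a minimal sketch of how such SDK-level retry settings can be configured, assuming the RetryOptions exposed by recent versions of the .NET SDK; the endpoint and key below are placeholders.

using System;
using Microsoft.Azure.Documents.Client;

// Sketch only: let the SDK retry throttled (429) requests automatically.
var connectionPolicy = new ConnectionPolicy();
connectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 10; // retries on 429
connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;           // total back-off budget

var client = new DocumentClient(
    new Uri("https://your-account.documents.azure.com:443/"), // placeholder endpoint
    "<your-authorization-key>",                               // placeholder key
    connectionPolicy);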

Some samples are provided by Microsoft on how to handle this 429 “Request rate too large” error, but they only concern commands, such as inserting or deleting a document; there is no sample on how to implement retry policies for common queries. A NuGet package is also available, “Microsoft.Azure.Documents.Client.TransientFaultHandling”, but even though integrating it is as quick as an eye blink, it has no logging capabilities. In my case it did not really solve my exceeded-RU problem, I even doubt that I made it work, and the code is not open source. So I decided to integrate the ideas from the samples into my own utility classes on top of the DocumentDB .NET SDK.

The idea is similar to the “TransientFaultHandling” package: wrap the DocumentClient inside another class exposed only through an interface. By all accounts, it is a good thing to abstract the DocumentClient behind an interface for testability purposes. In our case this interface is named IDocumentClientWrapped.

public interface IDocumentClientWrapped : IDisposable
{
    /* Some command like methods*/
    Task DeleteDocumentAsync(Uri uri);

    Task CreateDocumentAsync(Uri uri, object obj);

    /* Some query like methods*/
    IRetryQueryable<T> CreateDocumentQuery<T>(Uri documentUri, FeedOptions feedOptions = null);

    IRetryQueryable<T> CreateDocumentQuery<T>(Uri documentCollectionUri, SqlQuerySpec sqlSpec);

    /* Copy here all other public method signature from DocumentClient but return IRetryQueryable instead of IOrderedQueryable or IQueryable */
}

Instead of returning an IQueryable<T> instance as DocumentClient would do, we return an IRetryQueryable<T>. This latter type, whose definition follows, is also a wrapper around the IQueryable<T> instance returned by the DocumentDB client. However, this wrapper explicitly retries when the enumeration fails because of a 429 “request rate too large” exception raised by the database engine, DocumentDB in our case.

public interface IRetryQueryable<T>
{
    IRetryQueryable<T> Where(Expression<Func<T, bool>> predicate);

    IDocumentQuery<T> AsDocumentQuery();

    IEnumerable<T> AsRetryEnumerable();

    IRetryQueryable<TResult> SelectMany<TResult>(Expression<Func<T, IEnumerable<TResult>>> predicate);

    IRetryQueryable<TResult> Select<TResult>(Expression<Func<T, TResult>> predicate);
}

In this interface we only expose the extension methods that are actually supported by the “real” IQueryable<T> instance returned by DocumentDB: Select, SelectMany, Where, etc. For example, at the time of writing GroupBy is not supported: you would get a runtime exception if you used it directly on the IQueryable<T> instance returned by DocumentClient.

Now look at how we use these interfaces in the calling code.

IDocumentClientWrapped client = /* instantiation */;
IRetryQueryable<MyType> query = client.CreateDocumentQuery<MyType>(/* get the usual uri using the UriFactory */);
IEnumerable<string> results = query.Where(c => c.Member1 == "SomeValue").Select(c => c.StringMember2).AsRetryEnumerable();
/* manipulate your in memory enumeration as you wish*/

Independently of the retry policy classes and the “request rate too large” errors, let me emphasize that LINQ can be tricky here. Indeed, the piece of code above is completely different from this one:

IDocumentClientWrapped client = /* instantiation */;
IRetryQueryable<MyType> query = client.CreateDocumentQuery<MyType>(/* get the usual uri using the UriFactory */);
IEnumerable<string> results = query.AsRetryEnumerable().Where(c => c.Member1 == "SomeValue").Select(c => c.StringMember2);
/* manipulate your in memory enumeration as you wish*/

In this case the Where constraint is performed “in memory” by your .NET application server. It means that you have fetched all the data from DocumentDB to your app server. If MyType contains a lot of data, all of it will have been transferred from DocumentDB to your application server, and if the Where constraint filters out many documents you will probably have a bottleneck.

Let us get back to our problem. Now that we have seen that adding a retry policy to a query only means calling AsRetryEnumerable() instead of AsEnumerable(), let us jump to the implementation of those classes.

The idea is to use an IEnumerator that “retries”, together with two utility methods: ExecuteWithRetry and ExecuteWithRetriesAsync. The former is for basic single-threaded calls while the latter is for the async/await context. Most of this code is verbose because it is only wrapping implementation. I hope it will be helpful to others.

public class DocumentClientWrapped : IDocumentClientWrapped
{
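    // Enumerable wrapper: obtaining the enumerator is retried, and enumeration goes through RetriableEnumerator<T>.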
    private class RetriableEnumerable<T> : IEnumerable<T>
    {
        private readonly IEnumerable<T> _t;
        public RetriableEnumerable(IEnumerable<T> t)
        {
            _t = t;
        }

        public IEnumerator<T> GetEnumerator()
        {
            return new RetriableEnumerator<T>(ExecuteWithRetry(() => _t.GetEnumerator()));
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return this.GetEnumerator();
        }
    }

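    // Enumerator wrapper: MoveNext and Current are executed through the retry policy.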
    private class RetriableEnumerator<T> : IEnumerator<T>
    {
        private IEnumerator<T> _t;
        public RetriableEnumerator(IEnumerator<T> t)
        {
            _t = t;
        }

        public T Current
        {
            get
            {
                return ExecuteWithRetry(()=> _t.Current);
            }
        }

        object IEnumerator.Current
        {
            get
            {
                return ExecuteWithRetry(() => _t.Current);
            }
        }

        public void Dispose()
        {
            _t.Dispose();
        }

        public bool MoveNext()
        {
            return ExecuteWithRetry(() => _t.MoveNext());
        }

        public void Reset()
        {
            _t.Reset();
        }
    }

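    // IQueryable<T> wrapper exposing only the LINQ operators supported by the DocumentDB provider; enumeration is retriable.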
    private class RetryQueryable<T> : IRetryQueryable<T>
    {
        private readonly IQueryable<T> _queryable;
        public RetryQueryable(IQueryable<T> queryable)
        {
            _queryable = queryable;
        }

        public IDocumentQuery<T> AsDocumentQuery()
        {
            return this._queryable.AsDocumentQuery();
        }

        public IEnumerable<T> AsRetryEnumerable()
        {
            return new RetriableEnumerable<T>(this._queryable.AsEnumerable());
        }

        public IRetryQueryable<T> Where(Expression<Func<T, bool>> predicate)
        {
            var queryable = this._queryable.Where(predicate);
            return new RetryQueryable<T>(queryable);
        }

        public IRetryQueryable<TResult> SelectMany<TResult>(Expression<Func<T, IEnumerable<TResult>>> predicate)
        {
            var queryable = this._queryable.SelectMany(predicate);
            return new RetryQueryable<TResult>(queryable);
        }

        public IRetryQueryable<TResult> Select<TResult>(Expression<Func<T, TResult>> predicate)
        {
            var queryable = this._queryable.Select(predicate);
            return new RetryQueryable<TResult>(queryable);
        }
    }


    private const int MaxRetryCount = 10;

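    // Retries an awaited DocumentDB call when it is throttled (HTTP 429), honoring the RetryAfter hint, up to MaxRetryCount times.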
    private static async Task<V> ExecuteWithRetriesAsync<V>(Func<Task<V>> function)
    {
        TimeSpan sleepTime = TimeSpan.FromSeconds(1.0);
        int count = 0;
        while (true)
        {

            try
            {
                return await function();
            }
            catch (DocumentClientException de)
            {
                if ((int)de.StatusCode != 429)
                {
                    throw;
                }
                if (++count > MaxRetryCount)
                {
                    throw new MaxRetryException(de, count);
                }
                sleepTime = de.RetryAfter;
                Trace.TraceInformation("DocumentDB async retry count: {0} - retry in: {1} - RUs: {2}", count, sleepTime.TotalMilliseconds, de.RequestCharge);
            }
            catch (AggregateException ae)
            {
                if (!(ae.InnerException is DocumentClientException))
                {
                    throw;
                }

                DocumentClientException de = (DocumentClientException)ae.InnerException;
                if ((int)de.StatusCode != 429)
                {
                    throw;
                }
                if (++count > MaxRetryCount)
                {
                    throw new MaxRetryException(de, count);
                }
                sleepTime = de.RetryAfter;
                Trace.TraceInformation("DocumentDB async retry count: {0} - retry in: {1} - RUs: {2}", count, sleepTime.TotalMilliseconds, de.RequestCharge);
            }
            await Task.Delay(sleepTime);
        }
    }

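    // Synchronous counterpart of ExecuteWithRetriesAsync, used by the retriable enumerable/enumerator wrappers.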
    private static V ExecuteWithRetry<V>(Func<V> function)
    {
        TimeSpan sleepTime = TimeSpan.FromSeconds(1.0);
        int count = 0;
        while (true)
        {
            try
            {
                return function();
            }
            catch (DocumentClientException de)
            {
                if ((int)de.StatusCode != 429)
                {
                    throw;
                }
                if (++count > MaxRetryCount)
                {
                    throw new MaxRetryException(de, count);
                }
                sleepTime = de.RetryAfter;
                Trace.TraceInformation("DocumentDB sync retry count: {0} - retry in: {1} - RUs: {2}", count, sleepTime.TotalMilliseconds, de.RequestCharge);
            }
            catch (AggregateException ae)
            {
                if (!(ae.InnerException is DocumentClientException))
                {
                    throw;
                }

                DocumentClientException de = (DocumentClientException)ae.InnerException;
                if ((int)de.StatusCode != 429)
                {
                    throw;
                }
                if (++count > MaxRetryCount)
                {
                    throw new MaxRetryException(de, count);
                }
                sleepTime = de.RetryAfter;
                Trace.TraceInformation("DocumentDB sync retry count: {0} - retry in: {1} - RUs: {2}", count, sleepTime.TotalMilliseconds, de.RequestCharge);
            }
            Thread.Sleep(sleepTime);
        }
    }

    private readonly DocumentClient _client;
    public DocumentClientWrapped(string endpointUrl, string authorizationKey)
    {
        _client = new DocumentClient(new Uri(endpointUrl), authorizationKey);
    }

    public async Task CreateDocumentAsync(Uri documentCollectionUri, object obj)
    {
        ResourceResponse<Document> response = await ExecuteWithRetriesAsync(()=> _client.CreateDocumentAsync(documentCollectionUri, obj));
        LogResponse<Document>("CreateDocumentAsync", response);/* Do something with the response if you want to use it */
    }

    public async Task DeleteDocumentAsync(Uri documentUri)
    {
        ResourceResponse<Document> response = await ExecuteWithRetriesAsync( () => _client.DeleteDocumentAsync(documentUri));
        LogResponse<Document>("DeleteDocumentAsync", response); /* Do something with the response if you want to use it */
    }

    public async Task ReplaceDocumentAsync(Uri documentCollectionUri, object document)
    {
        var response = await ExecuteWithRetriesAsync(()=>  _client.ReplaceDocumentAsync(documentCollectionUri, document));
        LogResponse<Document>("ReplaceDocumentAsync", response);
    }

    public IRetryQueryable<T> CreateDocumentQuery<T>(Uri documentCollectionUri, SqlQuerySpec sqlSpec)
    {
        var queryable = _client.CreateDocumentQuery<T>(documentCollectionUri, sqlSpec);
        return new RetryQueryable<T>(queryable);
    }
}
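
The MaxRetryException thrown above is not an SDK type; here is a minimal sketch of what it could look like (the exact shape is up to you).

using System;
using Microsoft.Azure.Documents;

// Sketch only: thrown when a throttled request keeps failing after MaxRetryCount attempts.
public class MaxRetryException : Exception
{
    public MaxRetryException(DocumentClientException inner, int retryCount)
        : base(string.Format("Giving up after {0} retries on a throttled DocumentDB request.", retryCount), inner)
    {
        RetryCount = retryCount;
    }

    public int RetryCount { get; private set; }
}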

TeamCity on Windows Azure VM part 1/2 – server setup

TeamCity is a continuous integration (CI) server developed by JetBrains. I have worked with CruiseControl and Hudson/Jenkins and, from my point of view, TeamCity is beyond comparison. I will not detail all the features that made me love this product; let me just say that TeamCity is both powerful and easy to use. It is so easy to use that you can let non-tech people visit the web app, trigger their own builds, collect artifacts, etc.
TeamCity 9.1 is built on the Tomcat/JEE tech stack. However, it works on Windows and, in this post, I will explain how to set up a proper installation on a Windows VM hosted on Azure using Azure SQL. Precisely, I will detail the following points:

  • Installing TeamCity 9.1 on a Windows Server 2012 virtual machine hosted on Azure with an Azure SQL database.
  • Serving your TeamCity web app securely through https. To this aim, we will set up a reverse proxy on IIS (Internet Information Services). I will also detail how to make the websockets used by the TeamCity GUI work with IIS 8.0.

This first post is dedicated only to the first point. The second one will be the topic of the next post.

Installing TeamCity 9.1 on an Azure VM

Creating the VM

First of all, start by creating the VM. I recommend using manage.windowsazure.com; I personally find that portal.azure.com is not very clear for creating resources at the time of writing. You can use the ‘Quick create’ feature. There is nothing particular, no special tricks here: Azure will create a virtual machine with Windows Server 2012. Note that I recommend provisioning an instance with more resources while you are configuring. Indeed, we are going to use Remote Desktop to configure the TeamCity web app and the IIS server, and when you allocate more resources (D1 or D3 for example) Remote Desktop is more responsive, so you will save a lot of time. Of course, you can downgrade later to minimize hosting costs.


‘Quick create’: a Windows Server 2012 VM is created on Azure

Creating the Azure SQL database and server

Then you will have to create an Azure SQL server with a SQL database. This time, in the Azure management portal, go to SQL databases and click ‘Custom create’. For the SQL server, choose ‘New SQL server’. You can select the minimal configuration: a Basic tier with a maximum size of 2 GB. Important: choose ‘Enable V12’ for your SQL server; non-V12 servers are not supported by TeamCity. At the time of writing, it seems impossible to upgrade a database server to V12 after it is created, so do not forget to create a V12 instance. You will be asked to provide credentials for the administrator of the database. Keep your password; you will need it later when configuring the database settings in TeamCity.

TeamCity installation

When it is done, connect to your Azure VM using Remote Desktop. Now we are going to install TeamCity on the Azure VM. Precisely, we will install it to run under the admin account that was created along with the VM. I strongly recommend running TeamCity with admin privileges, otherwise you may encounter many problematic errors.

In the Remote Desktop session, start Internet Explorer and visit the JetBrains TeamCity website: https://www.jetbrains.com/teamcity. You may have to add this site as a trusted source in order to visit it from IE, and the same goes for downloading the file. To change the Internet security settings, click the gear at the top right corner of IE, then Internet Options, Security, etc.

Once downloaded, start the installer. You may install it almost anywhere on the VM disk; the default, C:\TeamCity, is fine.

On the next installation screen, you will be asked which components to install. You have to install both components, Build agent and Server, to have a working configuration. Indeed, in this post we perform a basic TeamCity installation where the build agent and the continuous integration server are installed on the same machine. However, keep in mind that TeamCity is a great product that lets you distribute many build agents over several machines; they will be in charge of executing heavy builds simultaneously.


Install agent and build server

Later, you will be asked for the port; set it to 8080 (for example) instead of the default 80. Indeed, we do not want our TeamCity integration website to be served without SSL, but serving the website through https/443 will be the topic of the next post… Finally, choose to run TeamCity under a user account (the administrator of the VM, no need to create a new user).

After that, the TeamCity server starts and an Internet Explorer window on localhost:8080 opens. It will ask you for the Data Directory location on the TeamCity server machine (choose the default).

Database configuration

Now comes the tricky part: configuring the database. As explained in many places by JetBrains, do not keep the internal database for your installation. To use an Azure SQL database, choose MS SQL Server.


Database connection settings in TeamCity

You will have to install the JDBC 4.0 driver that TeamCity needs to work with a SQL Server database. It is a very easy task that consists of putting a jar file (sqljdbc4.jar) under the Data Directory; it is well documented in the JetBrains link. Consequently, I will just add that the Data Directory is a hidden folder and that the driver ends up by default in C:\ProgramData\JetBrains\TeamCity\lib\jdbc.

Now we have to fill in the database settings form. We will need the connection string, which is available on the Azure portal. If you select the ADO.NET representation, it should look like Server=tcp:siul9kyl03.database.windows.net,1433;Database=teamcitydb;User ID=teamcitydbuser@siul9kyl03;Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;


Entering the database connection settings.

If you use a connection string formatted as above, you will enter the corresponding entries when filling in the TeamCity form (see screenshot).

Note that you may be rejected because your SQL database server does not allow Windows Azure services (and therefore this VM) to access it. You have to grant access manually in the Azure management portal, in the Configure section of the SQL server: set ‘Allowed Windows Azure services’ to ‘Yes’.

Then you will reach a last step that asks you to create an admin user for your TeamCity web application.


Final step: create an administrator for the TeamCity web app

Now the first step of our installation tutorial is complete and we have a TeamCity server set up on our Azure VM. Visiting your TeamCity website at teamcityserver.cloudapp.net, or even teamcityserver.cloudapp.net:8080, is not yet possible. Indeed, we set up the server on port 8080 and this endpoint is blocked by your Windows Azure VM. In the next part, we will see how to serve our integration web app properly and securely through port 443.

Before going to the second part, I suggest that you check that TeamCity works well locally on the VM by creating a very simple project. In our case, we created a build configuration that just checks out sources from a public GitHub repository and performs a PowerShell task that says “hello world”. When triggering the build manually (Run button), the build executes properly. This means that the setup of the TeamCity server is complete; configuring its web access will be the rest of our job.


Our first build passed

PS: do not forget to downgrade the resources associated with your VM instance when you are done with configuration. There is no need to pay for a large VM if you do not need such performance.

Organize a multilanguage Jekyll site

In this post I will describe a simple organization of Jekyll sources to get a multilanguage website.

EDIT: In the post below I suggest using a ‘translated’ tree structure under the /fr directory, e.g. /about/about.html becomes /fr/apropos/apropos.html. This leads to complications and nobody cares whether the URL is translated. I suggest you keep the exact same tree structure in the base site, _layouts and fr directories.

What is Jekyll actually? Jekyll describes itself as “a blog-aware static site generator”. If you know the product you will find this description perfectly fitting, but it remains obscure if you are a newcomer. Let me introduce Jekyll in my own terms. Jekyll helps you build .html pages; they are created when you publish (or update) your site. Contrary to traditional dynamic web servers (e.g. ASP.NET, PHP, etc.), the pages are not created and served “on demand” when the user visits the website. With Jekyll it is simpler: the pages are just files stored on the server disk, as if you had created them “by hand”. The magic lies in the fact that you do not have to duplicate all the HTML everywhere. Jekyll is a “generator”, which means that by using its basic features (includes, layouts, etc.) you will not have to repeat yourself, so your site will be easily maintainable. What does “blog-aware” mean? Jekyll can create pages, but it was also built for blogging in the first place. At its core you will find many things related to blogging, such as pagination, ordering, tagging, etc. To conclude this presentation of Jekyll, let us mention that it is supported natively by GitHub Pages.

The multilanguage organization described in this post will illustrate some of the features of Jekyll. If you create a website with Jekyll, you should have a clean site structure such as the following one. You will notice the usual directories _layouts and _site.


├── _config.yml
├── _layouts
|    ├── default.html
|    └── post.html
├── _site
|    └── ...
└── index.html

_layouts is extremely powerful; it is a form of inheritance. When you declare a page such as index.html and you reference the default layout in its YAML front matter (a kind of file header recognized by Jekyll), your page will use all the HTML defined in the default layout. We will use the layout pattern extensively for our localized website structure. The main idea is simple: the content of the pages is stored in a layout file located under the _layouts directory, and the rest of the site only “calls” layouts with the proper language to use.

Let us take a simple example to illustrate this. Suppose that you have a website whose default language is English, plus another language: French. Naturally, the default language is located at the root of the website while other languages are located in subfolders (e.g. www.myexample.com/fr). Suppose now that you have another subdirectory “about” (“à propos” in French); then your hosted website will have the following structure:


├── fr
|    ├── à propos
|    |    └── apropos.html
|    └── bonjour.html
├── about
|    └──about.html
└── hello.html

The question now is: how do we achieve this easily with Jekyll such that the translations of the sentences are kept side by side in the same file? This latter requirement is important for working with your translators. The trick is to replicate the website structure in _layouts. We use the YAML front matter to define a key/value variable that holds the translations. The “true” pages only call the layout with the right language to use.

Then, for the example mentioned above, the Jekyll source structure looks like:


├── _site
|    └── ...
├── _layouts
|    ├── about
|    |    └──about_lyt.html
|    └── hello_lyt.html
├── fr
|    ├── à propos
|    |    └── apropos.html
|    └── bonjour.html
├── about
|    └──about.html
└── hello.html

The about_lyt.html layout contains the HTML content of the page and the translations, as shown below. This new layout itself uses the default layout, which contains the content common to all website pages.


---
layout: default
t:
  first:
    en: "This is my first sentence"
    fr: "C'est ma première phrase"
  second:
    en: "My second sentence"
    fr: "Ma deuxième phrase"
  third:
    en: "Last sentence"
    fr: "Dernière phrase"
---
<div>
  <ul>
    <li>{{ page.t.first[page.lang] }}</li>
    <li>{{ page.t.second[page.lang] }}</li>
    <li>{{ page.t.third[page.lang] }}</li>
  </ul>
</div>

Then the two localized files simply contain a very basic YAML header.
Let us have a look at the file about.html

---
layout: about/about_lyt
title: "about"
lang: en
---

And now let us examine its French alter ego, apropos.html

---
layout: about/about_lyt
title: "A propos"
lang: fr
---

This organization is very simple and maintainable; it is the one that I have chosen for my own company’s website, www.keluro.com. It is also very easy to send the YAML to the translators and to put their work back into your files to update your website. If you are interested in a script to extract the YAML, you may use this one, written in PowerShell.

A sample Wix installer using the ActiveSetup feature


With a local-machine install all Windows users get the software, contrary to a per-user install

In this post I will explain the technical details of the Wix sample installer for local-machine installs of Excel-DNA add-ins. Wix is a popular free toolset to create .msi installers. I used the ActiveSetup feature to address a “per user” install problem, and this is the part I will cover in this post. Therefore, it may be useful both for those who do not know ActiveSetup at all and for people looking to write their own Wix installer leveraging ActiveSetup. Even if I think ActiveSetup should be used only when there is no alternative, this barely known feature may save your life, so it is good to be aware of its existence. We will use the terminology of the Windows Registry, which you can check on the Wikipedia page.

First let us present the problem and why we needed this ActiveSetup feature. Automation add-ins, and consequently Excel-DNA ones, are registered by setting an OPEN value in the current user registry hive (HKCU). This OPEN value contains the path to the packaged Excel add-in, which is a file with the .xll extension. If there are two or more add-ins to register then the values should be named OPEN, OPEN2, OPEN3… see this StackOverflow discussion or this MS documentation. This is a per-user registration: if you create a new user, he will not have the add-in starting with Excel. There is a WixSample available on the Excel-DNA GitHub. This is fine as long as you only need a per-user install; however, for enterprise deployment you most of the time need a local-machine install. Here start the troubles: the OPEN value mentioned above cannot be set as an HKLM (machine) subkey, only in the current user hive (HKCU). Therefore we have to find a way to set up this OPEN value for all users, and this is where ActiveSetup comes into play. We also tried to put the add-in in the machine XLSTART directory, but this has drawbacks too; see this discussion.
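
For illustration, here is roughly what such a per-user registration looks like in code. This is a sketch only: the Office version (“15.0”), the /R flag and the add-in path are assumptions to adapt to your setup, not the sample’s actual implementation.

using Microsoft.Win32;

internal static class OpenKeyRegistration
{
    // Sketch of a per-user OPEN registration under the current user hive.
    // "15.0" (Office 2013), the /R flag and the add-in path are placeholders.
    public static void RegisterOpenValue()
    {
        const string optionsKeyPath = @"Software\Microsoft\Office\15.0\Excel\Options";
        using (RegistryKey options = Registry.CurrentUser.CreateSubKey(optionsKeyPath))
        {
            // OPEN for the first add-in, then OPEN2, OPEN3... as described above.
            options.SetValue("OPEN", @"/R ""C:\Program Files\MyAddin\MyAddin.xll""", RegistryValueKind.String);
        }
    }
}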

So what is this ActiveSetup feature? ActiveSetup is a simple yet powerful mechanism built into Windows that allows you to execute a custom action at logon for each new user. How does it work? Actually it is quite simple. You create a subkey under the HKLM hive at Software\Microsoft\Active Setup\Installed Components. Typically this subkey is a GUID and contains a StubPath REG_EXPAND_SZ value whose data points to the executable of your choice. This value is mirrored in the HKCU registry. Precisely, at logon Windows checks whether this ActiveSetup key is present in the current user HKCU hive and, if not, the executable in StubPath is invoked. If the key is already present in the HKCU registry then nothing happens. The nice thing is that localization, versioning and uninstall are handled. Let us detail the uninstall mechanism: if you need to remove your feature, you can set the IsInstalled value to 0 and change the StubPath so that the command there executes a cleaning task of your choice. At logon, if the HKCU subkey of Active Setup was registered previously, the command in StubPath will be executed. The versioning allows you to re-execute the ActiveSetup command even if it has already been executed by the user, i.e. if the HKLM ActiveSetup key is already mirrored in the HKCU hive. ActiveSetup is poorly (read: not) documented but you may find information on the web, for example in this good article.


ActiveSetup set in the HKLM registry for the Wix sample installer

OK, now how will we use this for our Wix installer? In addition to the common HKLM registration, our sample builds a .NET 4.0 executable called manageOpenKey.exe. This .exe, when invoked with the /install parameter, looks at the HKCU hive and creates the OPEN value mentioned in the second paragraph, so that the add-in will be opened when Excel starts. When the .exe is invoked with /uninstall it removes the OPEN value, if present, from HKCU. Then it is simple: at install/repair/upgrade our Wix setup creates the ActiveSetup HKLM key with a StubPath of the form “manageOpenKey.exe /install”. For uninstall, we set the IsInstalled value to zero and change the StubPath to “manageOpenKey.exe /uninstall”. To do so we use some custom actions, written in .NET 4.0, in the Wix installer; they must be invoked without impersonation so that we are allowed to write to HKLM.

We wished to allow the usage of the add-in without a reboot. Therefore, at install/repair/upgrade the MSI quietly invokes the same manageOpenKey.exe, so that for the current user performing the install we do not need to wait for Active Setup to create the HKCU OPEN value. But there is a trick: we must force the mirroring of the ActiveSetup key in the HKCU registry. Indeed, take the following example: user1 installs the add-in; the OPEN value is registered because we invoked the .exe directly from the .msi. Now user1 logs off and user2 logs on; ActiveSetup is triggered for him, the add-in is registered and everyone is happy. But now user2 decides to uninstall the add-in and does so. When user1 logs back in and opens Excel, he will get an error message saying that the add-in is not found. What happened? The ActiveSetup key was never mirrored in user1’s HKCU, therefore Active Setup does not consider that the feature was installed for him, and so it does not trigger the StubPath cleaning command even though IsInstalled has been set to false. Tricky, isn’t it? These manual creations of HKCU keys are made by C# custom actions that must be impersonated, because you want to write the key in the HKCU of the user who performed the install, not of the local administrator account that runs the non-impersonated custom actions mentioned in the previous paragraph.

There is also a trick regarding OS bitness. If you do not take care and write the ActiveSetup key in the HKLM hive from a 32-bit process on a 64-bit OS, then your Active Setup will be written under the Wow6432Node path. However, when performing the HKCU mirroring mentioned above, the .NET API will write to HKCU\Software\Microsoft\Active Setup because there is not supposed to be a Wow6432Node key in HKCU. Yet there is a Wow6432Node HKCU key used only by ActiveSetup, so you should be cautious: the ActiveSetup under Wow6432Node and the one directly under HKLM\Software\Microsoft do not interact. Therefore, in our Wix custom actions we have to check the OS bitness and write the HKLM part in the proper location, with the following kind of code snippet.
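
A minimal sketch of the idea, using the .NET 4.0 RegistryView API; the component GUID, paths and values below are placeholders, not the sample’s actual code.

using System;
using Microsoft.Win32;

internal static class ActiveSetupRegistration
{
    // Sketch: target the 64-bit view of HKLM on a 64-bit OS so that the Active Setup
    // key does not land under Wow6432Node. The GUID and StubPath are placeholders.
    public static void WriteActiveSetupKey()
    {
        RegistryView view = Environment.Is64BitOperatingSystem ? RegistryView.Registry64 : RegistryView.Registry32;
        using (RegistryKey hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, view))
        using (RegistryKey component = hklm.CreateSubKey(
            @"Software\Microsoft\Active Setup\Installed Components\{11111111-2222-3333-4444-555555555555}"))
        {
            component.SetValue("StubPath", @"""C:\Program Files\MyAddin\manageOpenKey.exe"" /install", RegistryValueKind.ExpandString);
            component.SetValue("IsInstalled", 1, RegistryValueKind.DWord);
            component.SetValue("Version", "1,0,0,0", RegistryValueKind.String);
        }
    }
}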

To conclude, I would like to thank my colleague Sébastien Cadorel who helped me a lot in writing this install sample. If you encounter any trouble with it, feel free to submit a bug or a pull request on GitHub.

No HDMI sound without an OS reboot? Restart the drivers…

Edit: as of the end of July 2015 the problem is fixed for me with Windows 10.

This blog post follows a problem that I encountered when I upgraded my laptop to Windows 8.1. Precisely, the HDMI sound does not work when I connect the HDMI cable to my television. The hardware is fine because, if the OS is rebooted with the HDMI cable plugged in, the TV is available as a sound playback device. This is terribly annoying: you do not want to reboot each time you want to use your TV as a display device. While searching the internet for a solution, I discovered that I am not the only one with this problem. Therefore, I will try to describe my solution in detail in order to make it reusable by others who stumble on this post with the same problem.

Assuming that you are in the same, or a similar, situation: the HDMI sound works well when Windows is rebooted with the cable plugged in. Consequently, when everything is working, if you right-click on the sound icon in the tray bar and select “Playback devices” you should be able to see the TV among the playback devices.


The list of available playback devices.

Before going any further, the first thing to do is to update the drivers. This is quite simple: right-click on the Windows menu at the bottom left of your desktop screen and select Device Manager. My advice is to update all drivers for the two entries in the tree view, “Display adapters” and “Sound, video and game controllers”.


The device manager

Once all drivers have been updated, check that the problem still occurs (restart the OS at least once); if it does, you may be interested in the following.


Upgrading driver software

So even after updating the drivers, you still need a reboot to get the sound. Actually, the drivers managing the HDMI output need to be restarted, and that is what we are going to do manually, without restarting the OS. To perform such an action, we need devcon.exe, a tiny program distributed freely by Microsoft with the Windows Driver Kit. Download it here (you do not need Visual Studio).

After the (silent) installation of the WDK you will find devcon.exe in the path “C:\Program Files (x86)\Windows Kits\8.1\Tools\x64”. You will notice that you have two directories in Tools, “/x64” and “/x86” (i.e. 32-bit); use the one corresponding to your system (to know more about 32-bit vs 64-bit check this).

WARNING: devcon.exe error messages are extremely misleading (read: completely wrong). Indeed, whether you are using the x86 version instead of the x64 one, or trying to restart the drivers (see below) without administrator rights, you will get the same error: “No matching devices found”. Consequently, do not try to over-interpret the errors raised by devcon.exe.


“devcon.exe listclass media” executed in PowerShell using the ConEmu terminal

Now we are going to restart the two drivers using devcon.exe. First, let us list all drivers dedicated to sound: type in your command prompt (or, better, in PowerShell) run as administrator: <pathToDevCon>/devcon.exe listclass media

The response here is a list of two items; we are going to use the first one, “Intel(R) display”, which is the audio adapter used for HDMI (see the first screenshot of this post). We will need its driver ID: HDAUDIO\FUNC_01&VEN_8086&DEV_2807&SUBSYS_80860101&REV_1000\4&36161A1D&0&0001. If you are using PowerShell you may use the Out-File command to write it down in a text file.

You can try to restart it directly by typing (in a command line run as administrator): <pathToDevCon>/devcon.exe restart “@<DRIVER_ID>”.
Note that the @ symbol goes inside the quotes, which may surprise you if you are a .NET developer…

Unfortunately, this does not seem to be sufficient: the system also needs the display adapter driver to be restarted after that. So we are going to do the same thing, but for the display adapter driver; we type in an admin command line:
<pathToDevCon>/devcon.exe listclass display


devcon.exe listclass display

Now, if you restart the driver listed here as “Intel(R) HD Graphics Family”, the HDMI sound playback device should become available when the cable is connected. To do so, execute:

<pathToDevCon>/devcon.exe restart “@PCI\VEN_8086&DEV_0A16&SUBSYS_130D1043&REV_09\3&11583659&0&10”

The ID above is mine; obviously you should use yours.

Fine, but this is not over yet. Indeed, you do not want to do all this each time you connect your HDMI cable to your TV. What we are going to do right now is add a PowerShell script that automates this process, so that you will just have to click on one icon when you want your TV sound. If you have never run a PowerShell script on your machine, the execution of a .ps1 file (PowerShell script) will not be allowed. Read this to enable PowerShell script execution (basically you just have to type, in a PowerShell console run as admin, Set-ExecutionPolicy Unrestricted).

Here is the script that you can use. I have added two instructions to enable the drivers; it seems to be sometimes useful when you plug/unplug the HDMI cable several times. All you have to do to make it work is to change the path to devcon.exe and the IDs. You can save it to the place of your choice, such as the Desktop, with a .ps1 extension (e.g. restart_drivers.ps1).
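
A minimal sketch of such a restart_drivers.ps1, along the lines described above; the devcon.exe path and the two device IDs are placeholders that you must replace with yours.

# restart_drivers.ps1 - sketch only: adjust the devcon path and the two device IDs to yours.
$devcon = "C:\Program Files (x86)\Windows Kits\8.1\Tools\x64\devcon.exe"
$audioId   = '@HDAUDIO\FUNC_01&VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XXXX\X&XXXXXXXX&X&XXXX'
$displayId = '@PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XX'

# Enable both devices first (sometimes needed after several plug/unplug cycles).
& $devcon enable $audioId
& $devcon enable $displayId

# Restart the audio driver, then the display adapter driver.
& $devcon restart $audioId
& $devcon restart $displayId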

Remember that, to be effective, this .ps1 file must be executed with administrator rights. Remember also that devcon.exe will only tell you that the drivers cannot be found if you do not run it with admin rights. So, to make the script easier to run in admin mode, you can put a powershell.exe shortcut next to the .ps1 script file. Right-click on the shortcut file, go to Properties and fill in the Target property as follows: “powershell.exe restart_drivers.ps1”. Then, in Advanced…, select Run as administrator.


A shortcut to run restart_drivers.ps1 directly with admin right

That is all: now when you connect your HDMI cable you will only have to click on the shortcut and enjoy the sound from your TV.