Tag Archives: azure

Using Analytics in Application Insights to monitor DocumentDB Requests

According to Wikipedia, DocumentDB is

Microsoft’s multi-tenant distributed database service for managing JSON documents at Internet scale.

The throughput of the database is billed and measured in Request Units per second (RU/s). Therefore, when building an application on top of DocumentDB, this is a very important dimension that you should pay attention to and monitor carefully.

Unfortunately, at the time of writing, the Azure portal tools for measuring your RU usage are very poor and not really usable. You only have access to tiny charts whose granularity cannot really be changed.

DocumentDB monitoring charts in Azure Portal

These are the only monitoring charts available in the Azure Portal

In this blog post, I show how Application Insights Analytics can be used to monitor RU consumption efficiently. This is how we monitor our collections at Keluro today.

Let us start by presenting Application Insights, which defines itself here as

an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app.

Let us show how to use it in a C# application that is using the DocumentDB .NET SDK.

First you need to install the Application Insights NuGet package. Then, you need to track the queries using a TelemetryClient object; see the sample code below.

public static async Task<FeedResponse<T>> LoggedFeedResponseAsync<T>(this IQueryable<T> queryable, string infoLog, string operationId)
{
	var docQuery = queryable.AsDocumentQuery();
	var now = DateTimeOffset.UtcNow;
	var watch = Stopwatch.StartNew();
	var feedResponse = await docQuery.ExecuteNextAsync<T>();
	watch.Stop();
	TrackQuery(now, watch.Elapsed, feedResponse.RequestCharge, "read", new TelemetryClient(), infoLog, operationId, feedResponse.ContentLocation);
	return feedResponse;
}

public static void TrackQuery(DateTimeOffset start, TimeSpan duration, double requestCharge, string kind, TelemetryClient tc, string infolog, string operationId, string contentLocation)
{
	var dependency = new DependencyTelemetry(
			"DOCDB",
			"",
			"DOCDB",
			"",
			start,
			duration,
			"0", // Result code : we can't capture 429 here anyway
			true // We assume this call is successful, otherwise an exception would be thrown before.
			);
	dependency.Metrics["request-charge"] = requestCharge;
	dependency.Properties["kind"] = kind;
	dependency.Properties["infolog"] = infolog;
	dependency.Properties["contentLocation"] = contentLocation ?? "";
	if (operationId != null)
	{
		dependency.Context.Operation.Id = operationId;
	}
	tc.TrackDependency(dependency);
}
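As a usage sketch, the extension method above can be invoked on any DocumentDB queryable. The document type, database/collection names and infolog label below are illustrative assumptions, not part of the original sample:

```csharp
// Hypothetical calling code: one tracked round trip to DocumentDB.
var collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "invoices"); // assumed names
var queryable = client.CreateDocumentQuery<Invoice>(
    collectionUri,
    new FeedOptions { MaxItemCount = 100 });

var operationId = Guid.NewGuid().ToString();
FeedResponse<Invoice> page = await queryable
    .Where(i => i.Year == 2016)
    .LoggedFeedResponseAsync("invoices-by-year", operationId); // tracked as a DOCDB dependency
```

Each call produces one DependencyTelemetry entry whose request-charge metric can then be queried in Analytics.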

The good news is that you can now effectively keep a record of every request made to DocumentDB. Thanks to Analytics, a great component of Application Insights, you can browse the queries and see their precise request charges (the amount of RUs consumed).

You can also pass identifiers from your calling code (such as the kind and infolog variables in the sample above) to better identify the requests. Keep in mind that the request payload itself is not saved by Application Insights.
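Concretely, a query along the following lines lists the tracked calls together with their charges. The property names are the ones set by the C# sample above; the 12-hour window is an arbitrary choice:

```kusto
dependencies
| where timestamp > ago(12h)
| where type == "DOCDB"
| extend requestCharge = todouble(customMeasurements["request-charge"])
| extend infolog = tostring(customDimensions["infolog"])
| project timestamp, infolog, requestCharge, duration, operation_Id
| order by requestCharge desc
```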

As the screenshot below shows, you can list and filter the requests tracked for DocumentDB in Application Insights Analytics, thanks to its powerful query language.

Getting all requests to DocumentDB in a timeframe using Application Insights Analytics


One problem with this approach is that, for now, using this technique with the DocumentDB .NET SDK, we do not have access to the number of retries (the 429 responses). This is an open issue on GitHub.

Finally, Analytics allows us to create a very important chart: the accumulated RUs per second over a given time range.
The query looks like the following.

dependencies
| where timestamp > ago(10h)
| where type == "DOCDB"
| extend requestCharge = todouble(customMeasurements["request-charge"])
| extend docdbkind = customDimensions["kind"]
| extend infolog = customDimensions["infolog"]
| order by timestamp desc
| project  timestamp, target, data, resultCode , duration, customDimensions, requestCharge, infolog, docdbkind , operation_Id 
| summarize sum(requestCharge) by bin(timestamp, 1s)
| render timechart 

And the rendered chart looks as follows

Accumulated Request-Charge per second (RUs)


Hosting Jekyll website on Azure Web app using TeamCity automated deployment

The public website of my company Keluro is built with Jekyll. If you are a Jekyll user you are probably aware of GitHub Pages. To me, the best feature of GitHub Pages is the automated deployment of your website when pushing to a specific branch (namely ‘gh-pages’). We hosted the website on GitHub Pages for almost a year. However, we suffered many inconveniences with it:

  • HTTPS is not supported by GitHub Pages. As Google announced, HTTP over SSL is now a bonus in terms of ranking. At Keluro we do have a wildcard SSL certificate; it is a shame that we could not use it for our public corporate website!
  • We could not tune some server caching configuration (ETag, Cache-Control, etc.), resulting in poor scores from Google PageSpeed Insights.
  • With GitHub Pages you cannot use custom gems. Gems are Ruby extensions, Ruby being the technology Jekyll is built on. I have already blogged about our solution to support a multi-language website. Even if it works, I am more and more convinced that a Jekyll gem would do a better job…
  • We had problems with Facebook scraping our Open Graph meta tags and there was nothing we could do about it: see the issue here.
  • You do not control the version of Jekyll run by GitHub Pages. We found out that some breaking changes were introduced when migrating from Jekyll 2 to Jekyll 3. We do not want our website to go down because of a silent release of a new Jekyll revision.
  • At Keluro, due to our business model, we are Windows users. However, the servers running GitHub Pages are Linux servers, so you face the technicalities that come with switching between Linux and Windows: CRLF vs LF, etc. Even if there are solutions, such as a .gitattributes file, these are extra technicalities that our non-tech teammates working on the website are not aware of, and we do not want them to spend time on this.
  • We use a project page rather than a personal page, and it was complicated to configure DNS names, etc. There are always exceptions when it comes to project pages with GitHub Pages.

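As an aside, the line-ending issue mentioned above can be mitigated with a small .gitattributes file at the repository root; this is only a sketch, and the patterns should be adapted to your content:

```
# Let Git normalize text files to LF in the repository
* text=auto
# Keep binary assets untouched
*.png binary
*.jpg binary
```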
For all these reasons I wanted to quit GitHub Pages. As the CTO of Keluro, I did not have a lot of time to investigate all the alternatives and wanted a simple solution. We are already a BizSpark member; our web apps, APIs, VMs, etc. are hosted on Azure, and we are familiar with configuring IIS servers. Consequently, hosting the website on Azure with all our other resources was a reasonable solution. On top of that, we already had an automated deployment tool: our continuous integration server, TeamCity, which is also hosted on Azure.

The solution proposed here is then very simple. Similarly to the automated deployment provided by GitHub Pages, we use TeamCity to watch a given branch of the Git repository. On each change, the website is built by Jekyll on the integration server's virtual machine. Finally, it is synchronized with the Azure web app over FTP using the WinSCP sync library. In the end, our static HTML pages are served by the Azure web app.

Once the TeamCity build configuration is created, you just have to write two PowerShell build steps (see the code below). You can also use two Psake build targets and chain them, as I wrote here.
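For the Psake variant, the two steps can be chained as dependent targets. This is only a sketch: the task names and the sync.ps1 script path are illustrative assumptions, not our actual build script.

```powershell
# Psake build script sketch: 'Deploy' runs only after 'Build' succeeds.
Properties {
    $repoDir = $env:RepoDir
}

Task Default -depends Deploy

Task Build {
    Set-Location $repoDir
    Exec { & jekyll build }  # Exec fails the build on a non-zero exit code
}

Task Deploy -depends Build {
    # Hypothetical script wrapping the WinSCP synchronization step
    & (Join-Path $repoDir "ignored\build\sync.ps1")
}
```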

The prerequisite is to install Jekyll on the Windows server VM, with jekyll.exe available on the PATH environment variable. You can add WinSCP.exe and its .dll in a folder within your source code, under the ‘ignored\build’ location. Make sure that ‘ignored’ is indeed a folder ignored by Jekyll (you do not want Jekyll to copy these files into your deployed _site folder).
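To make sure Jekyll really skips that folder, the exclusion can be declared in _config.yml, for instance:

```yaml
# _config.yml: do not copy the build tools into the generated _site folder
exclude:
  - ignored
```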

In the TeamCity build configuration you can set up environment variables that will be consumed by the PowerShell scripts ($env:VARNAME). This is an acceptable solution for keeping hardcoded passwords, paths, etc. out of the sources. For example, the variable $env:RepoDir is set to %system.teamcity.build.checkoutDir%. You can use such environment variables to store the FTP settings. To recover the FTP settings of an Azure web app, see this Stack Overflow question.

REMARK: We did not manage to redirect WinSCP's sync log output to TeamCity in real time; we only log the results once the syncing is completed. If someone has a solution, we will be glad to hear it.
We tried the WinSCP PowerShell cmdlets but they seemed heavily bugged at the time of writing.

Build the website with Jekyll

$repoDir = $env:RepoDir
cd $repoDir # go to the repository checkout directory
# Exec is a Psake helper that throws if the command exits with a non-zero code;
# without Psake, a plain "& jekyll build" plus a $LASTEXITCODE check does the same job.
Exec { Invoke-Expression "& jekyll build" } # invoke Jekyll from the PowerShell command line

Sync the website with WinSCP

$hostName = $env:FtpHostName
$repoDir = $env:RepoDir
$userName = $env:FtpUserName
$pwd = $env:FtpUserPwd

$sitePath = Join-Path $repoDir -ChildPath "_site"
$windscpPath = Join-Path $repoDir -ChildPath "ignored\build\WinSCP"

$dllPath = Join-Path $windscpPath -ChildPath "WinSCPnet.dll"
$exePath = Join-Path $windscpPath -ChildPath "WinSCP.exe"
if(!(Test-Path $dllPath)){
    Write-Error "No dll path found: $dllPath"
}
if(!(Test-Path $exePath)){
    Write-Error "No exe path found: $exePath"
}

Add-Type -Path $dllPath

$sessionOptions = New-Object WinSCP.SessionOptions
$sessionOptions.Protocol = [WinSCP.Protocol]::Ftp
$sessionOptions.HostName = $hostName
$sessionOptions.UserName = $userName
$sessionOptions.Password = $pwd
$sessionOptions.FtpSecure = [WinSCP.FtpSecure]::Explicit

$session = New-Object WinSCP.Session
$session.ExecutablePath = $exePath
# Session log file, written next to the repository checkout
$session.SessionLogPath = Join-Path $repoDir -ChildPath "winscp-sync.log"

try
{
    $session.Open($sessionOptions)

    Write-Host "Start syncing..."

    $transferResult = $session.SynchronizeDirectories([WinSCP.SynchronizationMode]::Remote, $sitePath, "/site/wwwroot", $True, $False,[WinSCP.SynchronizationCriteria]::Size)

    $transferResult.Check()

    foreach ($transfer in $transferResult.Downloads)
    {
        Write-Host ("Download of {0} succeeded" -f $transfer.FileName)
    }
    foreach ($transfer in $transferResult.Uploads)
    {
        Write-Host ("Upload of {0} succeeded" -f $transfer.FileName)
    }
    foreach ($transfer in $transferResult.Removals)
    {
        Write-Host ("Removal of {0} succeeded" -f $transfer.FileName)
    }
    Write-Host "... finish syncing."
}
finally
{
    # Disconnect, clean up
    $session.Dispose()
}

TeamCity on Windows Azure VM part 2/2 – enabling SSL with reverse proxy

In the previous post we explained how to install a TeamCity server on a Windows Azure virtual machine. We used an external SQL Azure database as the internal database used by TeamCity. The first post ended with a functional TeamCity web app that was not visible from the outside. The objective of this second post is to show how to secure the web app by serving the pages over SSL. As in the previous post, I will detail the whole procedure from scratch, so this tutorial is accessible to an IIS beginner. We are going to use a reverse proxy technique on IIS (Internet Information Services, the Microsoft web server). I would like to thank my friend Gabriel Boya, who suggested the reverse proxy trick to me. Let us remark that it could also be a good solution for serving any other non-IIS website running on Windows under SSL.

If you have followed the steps of the previous post, you should have a TeamCity server served on port 8080. You should be able to view it in the IE browser, inside the Remote Desktop session on the VM, at localhost:8080. Let us start this post by answering the following question:

What is a reverse proxy and why use it?

If you try to enable SSL in the Tomcat server that serves TeamCity, you are going to suffer a lot for a mediocre result. See for instance this tutorial and all the questions it raised. Even if you manage to import your certificate into the Java certificate store, you will have problems with WebSockets…

So I suggest implementing a reverse proxy instead. The name sounds complicated but it is a very basic concept: it is simply another website that acts as an intermediary, relaying communication to your primary website (in our case the Tomcat server). Here, we are going to create an empty IIS website, visible from the outside on ports 80 and 443, that redirects incoming requests to our TeamCity server listening on port 8080.

Representation of the reverse proxy for our situation


Install IIS and the required components

First, we have to install and set up IIS with the required features on our Windows Server 2012. It is a very easy thing to do. Search for Server Manager in Windows, then choose to configure this Local Server (this virtual machine) with a role-based installation.

servermanager

Add roles and features

Then, check the “Web Server (IIS)” as a role to install.

servermanager1

Install IIS

Keep the default features.

servermanager2

Keep default installation features

In its recent versions, TeamCity uses WebSockets. To make them work, our reverse proxy server will need them too: check WebSocket Protocol.

servermanager3

Check the WebSocket Protocol

Check that everything is right there… and press Install.

servermanager4

Check that everything is prepared for installation

Now that we have installed IIS with the required WebSocket feature, it is accessible from the menu. I suggest pinning it to the taskbar if you do not want to search for it each time you need it.

iis

IIS well installed

Install URL rewrite module

The simplest way to set up the reverse proxy is to install the IIS URL Rewrite module. Any Microsoft web module should be installed using the Microsoft Web Platform Installer. If you do not have it yet, install it from there.

Then, in the Web Platform Installer look for the URL Rewrite 2.0 and install it.

urlrewrite

URL Rewrite 2.0 installation with Web Platform Installer

The Reverse proxy website

OK, now we are going to create our proxy website. IIS has gently created a default website and its associated pool. Delete them both without any mercy and create a new one (called TeamcityFront) with the following parameters. Remark: there is no website and nothing under C:\inetpub\wwwroot; this is just the default IIS website directory.

TeamcityFront

New TeamcityFront IIS website that points to the inetpub/wwwroot folder

Create the rewrite rule

We are going to create a rule that basically transforms every incoming HTTP request into a request targeting localhost:8080 locally. Open the URL Rewrite menu, which should be visible when you click on your site, and create a blank rule with the following parameters.

inboundRule1

URL Rewrite for our TeamcityFront website

inboundRule2

second part of the rewrite rule
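For reference, the rule built in the screenshots corresponds roughly to the following web.config fragment. This is only a sketch: the rule name is arbitrary, and rewriting to another host/port also requires Application Request Routing to be enabled as a proxy on the server.

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward every incoming request to the local Tomcat/TeamCity server -->
        <rule name="ReverseProxy" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:8080/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```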

Now let us go back to the Azure management portal and add two new endpoints for our VM: HTTP on port 80 and HTTPS on port 443.

endpoints3

Add HTTP on port 80 and HTTPS on port 443 with the Azure Management Portal for our VM

Now check that you are able to browse your site at testteamcity.cloudapp.net from the outside. You could object: what was the point? Indeed, we could have set up TeamCity on port 80, added the HTTP endpoint on Azure, and the result would have been the same. You would be right, but remember, our goal is to serve securely with SSL!

Enabling SSL

PersonalPfx

Certificate installation

To enable SSL you need a certificate; you can buy one at Gandi.net, for example. When you get the .pfx file, install it on your VM by double-clicking it and put the certificate in the Personal store.

An SSL certificate is bound to a domain, and you do not own cloudapp.net, so you cannot use the testteamcity.cloudapp.net subdomain of your VM. You will have to create an alias, for example build.keluro.com, and create a CNAME record that redirects to the VM.

Here is the procedure if you manage your domains in Office365.

office365

Creating a CNAME subdomain in Office365 that points to the *.cloudapp.net address of your VM

Now, in IIS, click on your site, edit the bindings and add an HTTPS binding for this newly created subdomain build.keluro.com, using the SSL certificate that should be recognized automatically by IIS.

SSLBindings

Create an HTTPS binding for the proxy server under IIS

At this stage, you should be able to browse your site on https from the outside with a clean green lock sign.

Browsing securely with HTTPS


Redirection Http to Https

You do not want your users to keep accessing your web app insecurely over plain HTTP. So a good, even mandatory, practice is to redirect HTTP traffic to the secure HTTPS endpoint. Once again, we benefit from the reverse proxy on IIS: simply create a new URL Rewrite rule.

redirecturlrewrite

An HTTP to HTTPS redirect rewrite rule

Then place your HTTPS redirection rule before the ReverseProxy rule.

urlrewrite2

Place the HTTPS redirection rule before the ReverseProxy rewrite rule
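Expressed in web.config, such a redirection rule could look like the sketch below; the rule name is arbitrary and the host name is the subdomain used in this post. It must appear before the reverse proxy rule, as in the screenshot.

```xml
<!-- Redirect any plain HTTP request to its HTTPS equivalent -->
<rule name="HttpsRedirect" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="https://build.keluro.com/{R:1}" redirectType="Permanent" />
</rule>
```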

Now, when you type http://build.keluro.com or simply build.keluro.com, you will be automatically redirected and the job is done!

browsinghttp

A working website that redirects automatically to https

TeamCity on Windows Azure VM part 1/2 – server setup

TeamCity is a continuous integration (CI) server developed by JetBrains. I have worked with CruiseControl and Hudson/Jenkins and, in my view, TeamCity is beyond comparison. I will not detail all the features that made me love this product; let me sum up by saying that TeamCity is both powerful and easy to use. It is so easy to use that you can let non-tech colleagues visit the web app, trigger their own builds, collect artifacts, etc.
TeamCity 9.1 is built on the Tomcat/JEE tech stack. However, it works on Windows and, in this post, I will explain how to set up a proper installation on a Windows VM hosted on Azure, using Azure SQL. Precisely, I will detail the following points:

  • Installing TeamCity 9.1 on a Windows Server 2012 virtual machine hosted on Azure with an SQL Azure database.
  • Serving your TeamCity web app securely through HTTPS. To this end, we will set up a reverse proxy on IIS (Internet Information Services). I will also detail how to make the WebSockets, used by the TeamCity GUI, work with IIS 8.0.

This first post is dedicated only to the first point. The second one will be the topic of the next post.

Installing TeamCity 9.1 on an Azure VM

creating the VM

First of all, start by creating the VM. I recommend using manage.windowsazure.com; I personally think that portal.azure.com is not very clear for creating resources at the time of writing. You can use the ‘Quick create’ feature. There is nothing particular, no special tricks here: Azure will create a virtual machine with Windows Server 2012. Note that I recommend provisioning an instance with more resources while you are configuring. Indeed, we are going to use Remote Desktop to configure the TeamCity web app and the IIS server, and when you allocate more resources (a D1 or D3 instance, for example) Remote Desktop is more responsive, so you will save a lot of time. Of course, you can downgrade later to minimize hosting costs.

Quick create azure VM

A Windows Server 2012 VM created on Azure with ‘Quick create’

creating the Azure SQL Database and server

Then you have to create an Azure SQL server with an SQL database. This time, on manage.windowsazure.com, go to SQL Databases and click ‘Custom create’. For the SQL server, choose ‘New SQL server’. You can select the minimal configuration: a Basic tier with a maximum size of 2 GB. Important: choose Enable V12 for your SQL server. Non-V12 servers are not supported by TeamCity. At the time of writing, it seems impossible to upgrade a database server to V12 after it is created, so do not forget to create a V12 instance. You will be asked to provide credentials for the administrator of the database. Keep your password: you will need it later when configuring the database settings in TeamCity.

TeamCity installation

When that is done, connect to your Azure VM using Remote Desktop. Now we are going to install TeamCity on the Azure VM. Precisely, we will install it to run under the admin account that was created with the VM. I strongly recommend running TeamCity with admin privileges, otherwise you may encounter many problematic errors.

On the VM, via Remote Desktop, start Internet Explorer and visit the JetBrains TeamCity website: https://www.jetbrains.com/teamcity. You may have to add this site as a trusted source in order to visit it from IE, and the same goes for downloading the file. To change the Internet security settings, click the gear at the top right corner of IE, then Internet Options, Security, etc.

Once downloaded, start the installer. You may install it almost anywhere on the VM disk; the default, C:\TeamCity, is fine.

On the next installation screen, you will be asked which components to install. You have to install both the Build Agent and Server components to have a working configuration. Indeed, in this post we perform a basic TeamCity installation where the build agent and the continuous integration server are installed on the same machine. However, keep in mind that TeamCity is a great product that lets you distribute many build agents across several machines; they will be in charge of executing heavy builds simultaneously.

Install agent and build server


Later, you will be asked for the port; set it to 8080 (for example) instead of the default 80. Indeed, we do not want our TeamCity integration website to be served without SSL, but serving the website through HTTPS/443 will be the topic of the next post… Finally, choose to run TeamCity under a user account (the administrator of the VM; no need to create a new user).

After that, the TeamCity server starts and an Internet Explorer window on localhost:8080 opens. It will ask you for the DataDirectory location on the TeamCity server machine (choose the default).

Database configuration

Now comes the tricky part: configuring the database. As explained by JetBrains in many places, do not keep the internal database for your installation. To use an Azure SQL database, choose MS SQL Server.

Database connection settings

Database connection settings in TeamCity

You will have to install the JDBC 4.0 driver, which TeamCity needs to work with an SQL Server database. It is a very easy task that consists of putting a jar file (sqljdbc4.jar) under the DataDirectory. It is well documented in the JetBrains link; I will just add that the DataDirectory is a hidden folder and the jar goes, by default, into C:\ProgramData\JetBrains\TeamCity\lib\jdbc.

Now we have to fill in the form for the database settings. We need the connection string, which is available on the Azure portal. If you select the ADO.NET representation, it should look like Server=tcp:siul9kyl03.database.windows.net,1433;Database=teamcitydb;User ID=teamcitydbuser@siul9kyl03;Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;

databaseconnectionsettings

Entering the database connection settings.

If you use a connection string formatted as above, you will enter the following entries when filling in the TeamCity form (see the screenshot).

Remark that you may be rejected because your SQL database server does not allow Windows Azure services (and hence this VM) to access it. You have to grant access manually in the Azure management portal, in the Configure menu of the SQL server: select ‘Yes’ for Allowed Windows Azure services.
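For the record, what the form fills in ends up in the database.properties file under the DataDirectory; with the sample connection string above it would look roughly like this sketch (server, database and user names are the sample ones, not real values):

```properties
connectionUrl=jdbc:sqlserver://siul9kyl03.database.windows.net:1433;databaseName=teamcitydb
connectionProperties.user=teamcitydbuser@siul9kyl03
connectionProperties.password=your_password_here
```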

Then you will reach one last step, which asks you to create an admin user for your TeamCity web application.

createfinalstep

Final step: create an administrator for the TeamCity web app

Now the first step of our installation tutorial is completed and we have a TeamCity server set up on our Azure VM. Visiting your TeamCity website at teamcityserver.cloudapp.net, or even teamcityserver.cloudapp.net:8080, is not possible yet. Indeed, we set up the server on port 8080 and this endpoint is blocked by your Windows Azure VM. In the next part, we will see how to serve our integration web app properly and securely through port 443.

Before going to the second part, I suggest that you check that TeamCity works well locally on the VM by creating a very simple project. In our case, we create a build configuration that just checks out sources from a public GitHub repository and performs a PowerShell task that says “hello world”. When the build is triggered manually (Run button), it executes successfully. This means that the setup of the TeamCity server is complete; configuring its web access will be the rest of our job.

firstbuild

Our first build has passed

PS: do not forget to downgrade the resources associated with your VM instance when you are done with the configuration. There is no need to pay for a large VM if you do not need that much performance.