Tag Archives: powershell

Powershell srcset image generator

If you have a website and SEO matters to you, then you have probably had to optimize images. To this end, you may want to use responsive images. As explained here,

a responsive image is an image which is displayed in its best form on a web page, depending on the device your website is being viewed from.

One of the modern ways to serve responsive images quickly is to leverage the srcset HTML attribute. In short, depending on its parameters and your viewport (i.e. the browser window), the srcset attribute tells the browser to download the most appropriate image for the current display.

For example, if you put the following HTML element

<img src="images/fcnantes-champions-95.jpg"
     srcset="images/fcnantes-champions-95.jpg 200w,
             images/fcnantes-champions-95-400.jpg 400w,
             images/fcnantes-champions-95-600.jpg 600w,
             images/fcnantes-champions-95-800.jpg 800w">

your server logic can serve up to four different images representing the same picture.

You may guess that creating all these resized pictures manually can be painful. In this blog post we propose the following Powershell script to help you automate this task.

Param (
    [Parameter(Mandatory=$True)] [ValidateNotNull()] $imageSource,
    [Parameter(Mandatory=$True)] [ValidateNotNull()] $quality
)

if (!(Test-Path $imageSource)){throw( "Cannot find the source image")}
if ($quality -lt 0 -or $quality -gt 100){throw( "quality must be between 0 and 100.")}

Add-Type -AssemblyName System.Drawing
$resolvedPath = Join-Path $PWD -ChildPath $imageSource
$bmp = [System.Drawing.Image]::FromFile($resolvedPath)

#hardcoded canvas widths...
$canvasWidths = @(200, 400, 600, 800)

foreach($canvasWidth in $canvasWidths){
    #Encoder parameter for image quality
    $myEncoder = [System.Drawing.Imaging.Encoder]::Quality
    $encoderParams = New-Object System.Drawing.Imaging.EncoderParameters(1)
    $encoderParams.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter($myEncoder, $quality)
    # get codec
    $myImageCodecInfo = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders()|where {$_.MimeType -eq 'image/jpeg'}

    #compute the ratio so that the resized image fits in a $canvasWidth x $canvasWidth square
    $ratioX = $canvasWidth / $bmp.Width
    $ratioY = $canvasWidth / $bmp.Height
    $ratio = $ratioY
    if($ratioX -le $ratioY){
        $ratio = $ratioX
    }

    #create resized bitmap
    $newWidth = [int] ($bmp.Width*$ratio)
    $newHeight = [int] ($bmp.Height*$ratio)
    $bmpResized = New-Object System.Drawing.Bitmap($newWidth, $newHeight)
    $graph = [System.Drawing.Graphics]::FromImage($bmpResized)

    $graph.Clear([System.Drawing.Color]::White)
    $graph.DrawImage($bmp,0,0 , $newWidth, $newHeight)

    $targetFileName = [System.IO.Path]::GetFileNameWithoutExtension($imageSource) + "-" + $canvasWidth + ".jpg"
    $dir = [System.IO.Path]::GetDirectoryName($resolvedPath)
    $targetFilePath = Join-Path $dir -ChildPath $targetFileName
    Write-Host "Saving file" $targetFilePath
    #save to file
    $bmpResized.Save($targetFilePath,$myImageCodecInfo, $($encoderParams))
    $graph.Dispose()
    $bmpResized.Dispose()
}
$bmp.Dispose()

Now you can simply invoke the script like this: .\SrcsetBuilder.ps1 "..\images\MyImage.jpg" 85. Then all the generated images (MyImage-200.jpg, MyImage-400.jpg, MyImage-600.jpg, MyImage-800.jpg) are located next to MyImage.jpg.
You can modify the generated image widths by changing the values in the $canvasWidths array.
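If you need to process a whole folder of pictures at once, you can wrap the script in a loop. Here is a sketch, assuming SrcsetBuilder.ps1 sits in the current directory and your pictures in an images subfolder (the quality value 85 is just an example):

Get-ChildItem .\images -Filter *.jpg | ForEach-Object {
    # the script expects a path relative to the current directory
    $relativePath = Resolve-Path -Relative $_.FullName
    .\SrcsetBuilder.ps1 $relativePath 85
}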

Hosting Jekyll website on Azure Web app using TeamCity automated deployment

The public website of my company Keluro is built with Jekyll. If you are a Jekyll user you are probably aware of Github pages. To me, the best feature of Github pages is the automated deployment of your website when pushing to a specific branch (namely 'gh-pages'). We hosted the website on Github pages for almost a year. However, we suffered many inconveniences with it:

  • Https is not supported by Github pages. As Google announced, HTTP over SSL now gives a bonus in terms of ranking. At Keluro we do have a wildcard SSL certificate; it's a shame that we could not use it for our public corporate website!
  • We could not tune some server caching configuration (ETag, Cache-Control, etc.), resulting in poor scores from Google Page Speed Insights.
  • With Github pages you cannot use custom advanced gems. These are Ruby extensions, Ruby being the technology on which Jekyll is based. I have already blogged about our solution to support a multi-lang website. Even if it works, I am more and more convinced that a Jekyll gem would do a better job…
  • We had a problem with Facebook scraping our Open Graph meta tags and there was nothing we could do about it: see the issue here.
  • You do not control the version of Jekyll run by Github pages. We found out that some breaking changes were introduced when migrating from Jekyll 2 to Jekyll 3. We do not want our website to go down because of a silent release of a new Jekyll revision.
  • At Keluro, due to our business model, we are Windows users. However, the servers running Github pages are Linux servers, so you face the technicalities that come with switching between Linux and Windows: CRLF vs. LF, etc. Even if there are solutions such as a .gitattributes file, these are extra technicalities that our non-tech teammates working on the website are not aware of, and we do not want them to spend time on this.
  • We use a project page rather than a personal page, and it was complicated to configure DNS names etc. There are always exceptions when it comes to project pages with Github pages.

For all these reasons I wanted to quit Github pages. As the CTO of Keluro, I did not have a lot of time to investigate all the alternatives and wanted a simple solution. We are already BizSpark members; our web apps, APIs, VMs, etc. are hosted on Azure. We are familiar with IIS server configuration. Consequently, it was a reasonable solution to host the website on Azure with all our other resources. On the other hand, we already had an automated deployment solution: our continuous integration server, TeamCity, which is also hosted on Azure.

The solution proposed here is then very simple. Similarly to the automated deployment provided by Github pages, we use TeamCity to watch for changes on a given branch of the Git repository. Then, the website is built by Jekyll on the integration server virtual machine. Finally, it is synchronized with the Azure web app FTP using the sync library WinSCP. In the end, our static html pages are served by the Azure Web app.

Once the TeamCity build configuration is created, you just have to write two Powershell build steps (see the code below). You can also use two Psake build targets and chain them, as I wrote here.

The prerequisite is to install Jekyll on the Windows server VM, with jekyll.exe in the $PATH environment variable. You can add WinSCP.exe and its .dll in a folder within your source code under the 'ignored\build' location. Make sure that 'ignored' is indeed a folder ignored by Jekyll (you do not want Jekyll to output these to your deployed _site folder).

In the TeamCity build configuration you can set up environment variables that will be consumed by the Powershell scripts ($env:VARNAME). It is an acceptable solution for avoiding hardcoded passwords, location paths, etc. in the sources. For example, the TeamCity variable env.RepoDir is set to %system.teamcity.build.checkoutDir% and read in the script as $env:RepoDir. You can use such environment variables to store the FTP settings. To recover the FTP settings of an Azure Web App, see this stackoverflow question.

REMARK: We did not manage to redirect the WinSCP output of the sync logs in real time to TeamCity; we log the results once the syncing is completed. If someone has a solution we will be glad to hear it.
We tried the WinSCP Powershell cmdlets but they seemed heavily bugged at the time of writing.

Build the website with Jekyll

$repoDir = $env:RepoDir
cd $repoDir # go to the repository checkout directory
# Exec is Psake's helper that fails the build on a non-zero exit code;
# in a plain Powershell step, use "& jekyll build" and check $LASTEXITCODE instead
Exec{Invoke-Expression "& jekyll build"} # invoke Jekyll from the Powershell command line

Sync the website with WinSCP

$hostName = $env:FtpHostName
$repoDir = $env:RepoDir
$userName = $env:FtpUserName
$ftpPassword = $env:FtpUserPwd # do not name this variable $pwd: it would shadow the automatic variable

$sitePath = Join-Path $repoDir -ChildPath "_site"
$winscpPath = Join-Path $repoDir -ChildPath "ignored\build\WinSCP"
$logPath = Join-Path $repoDir -ChildPath "winscp.log" # file where WinSCP writes its session log

$dllPath = Join-Path $winscpPath -ChildPath "WinSCPnet.dll"
$exePath = Join-Path $winscpPath -ChildPath "WinSCP.exe"
if(!(Test-Path $dllPath)){
    throw "No dll found at $dllPath"
}
if(!(Test-Path $exePath)){
    throw "No exe found at $exePath"
}

Add-Type -Path $dllPath

$sessionOptions = New-Object WinSCP.SessionOptions
$sessionOptions.Protocol = [WinSCP.Protocol]::Ftp
$sessionOptions.HostName = $hostName
$sessionOptions.UserName = $userName
$sessionOptions.Password = $ftpPassword
$sessionOptions.FtpSecure = [WinSCP.FtpSecure]::Explicit

$session = New-Object WinSCP.Session
$session.ExecutablePath = $exePath
$session.SessionLogPath = $logPath

try
{
    $session.Open($sessionOptions)

    Write-Host "Start syncing..."

    # push the local _site folder to the remote /site/wwwroot: removeFiles=$True deletes
    # remote orphans, mirror=$False, and files are compared by Size
    $transferResult = $session.SynchronizeDirectories([WinSCP.SynchronizationMode]::Remote, $sitePath, "/site/wwwroot", $True, $False, [WinSCP.SynchronizationCriteria]::Size)

    $transferResult.Check()

    foreach ($transfer in $transferResult.Downloads)
    {
        Write-Host ("Download of {0} succeeded" -f $transfer.FileName)
    }
    foreach ($transfer in $transferResult.Uploads)
    {
        Write-Host ("Upload of {0} succeeded" -f $transfer.FileName)
    }
    foreach ($transfer in $transferResult.Removals)
    {
        Write-Host ("Removal of {0} succeeded" -f $transfer.FileName)
    }
    Write-Host "... finished syncing."
}
finally
{
    # Disconnect, clean up
    $session.Dispose()
}

Resize image and preserve ratio with Powershell

My recently created company website Keluro has two blogs: one in French and one in English. The main blog pages are a preview list of the existing posts. I gave the writer the possibility to put a thumbnail image in the preview. It's simply an <img /> tag where a css class is responsible for resizing and displaying the image in a 194px × 194px box while keeping the original aspect ratio. Most of the time this preview is a reduction of an image that is displayed in the blog post. Everything was fine until I found out that these blog pages did not receive a good score when inspected with PageSpeed Insights. It basically said that the thumbnails were not optimized enough… For SEO reasons I want these blog pages to load quickly, so I needed to resize these images once and for all, even if it means duplicating the image.

Resizing and keeping aspect ratio

Resizing two pictures with landscape and portrait aspect ratios to make them fit a given canvas.

I think most of us have already had to do this kind of image resizing task. You can use many available programs to do it: Paint, Office Picture Manager, Gimp, Inkscape, etc. However, when it comes to manipulating many pictures, it can be really useful to use a script. Let me share with you this Powershell script that you can use to resize your .jpg pictures. Note that there is also a quality parameter (from 1 to 100) that you can use if you need to compress the image further.

Param (
    [Parameter(Mandatory=$True)] [ValidateNotNull()] $imageSource,
    [Parameter(Mandatory=$True)] [ValidateNotNull()] $imageTarget,
    [Parameter(Mandatory=$True)] [ValidateNotNull()] $quality
)

if (!(Test-Path $imageSource)){throw( "Cannot find the source image")}
if(!([System.IO.Path]::IsPathRooted($imageSource))){throw("please enter a full path for your source path")}
if(!([System.IO.Path]::IsPathRooted($imageTarget))){throw("please enter a full path for your target path")}
if ($quality -lt 0 -or $quality -gt 100){throw( "quality must be between 0 and 100.")}

Add-Type -AssemblyName System.Drawing
$bmp = [System.Drawing.Image]::FromFile($imageSource)

#hardcoded canvas size...
$canvasWidth = 194.0
$canvasHeight = 194.0

#Encoder parameter for image quality
$myEncoder = [System.Drawing.Imaging.Encoder]::Quality
$encoderParams = New-Object System.Drawing.Imaging.EncoderParameters(1)
$encoderParams.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter($myEncoder, $quality)
# get codec
$myImageCodecInfo = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders()|where {$_.MimeType -eq 'image/jpeg'}

#compute the ratio so that the image fits in the canvas while preserving its aspect ratio
$ratioX = $canvasWidth / $bmp.Width
$ratioY = $canvasHeight / $bmp.Height
$ratio = $ratioY
if($ratioX -le $ratioY){
  $ratio = $ratioX
}

#create resized bitmap
$newWidth = [int] ($bmp.Width*$ratio)
$newHeight = [int] ($bmp.Height*$ratio)
$bmpResized = New-Object System.Drawing.Bitmap($newWidth, $newHeight)
$graph = [System.Drawing.Graphics]::FromImage($bmpResized)

$graph.Clear([System.Drawing.Color]::White)
$graph.DrawImage($bmp,0,0 , $newWidth, $newHeight)

#save to file
$bmpResized.Save($imageTarget,$myImageCodecInfo, $($encoderParams))
$graph.Dispose()
$bmpResized.Dispose()
$bmp.Dispose()

Now suppose that you have saved the script above as "MakePreviewImages.ps1". You may use it in a loop statement such as the following one, where we assume that MakePreviewImages.ps1 is located in the current directory and the images are in a subfolder called "images".

Get-ChildItem .\images -Recurse -Include *.jpg | Foreach-Object{
    $newName = $_.FullName.Substring(0, $_.FullName.Length - 4) + "_resized.jpg"
    .\MakePreviewImages.ps1 $_.FullName $newName 75
}

Executing PSake build script with Teamcity

For my build tasks I had been using MSBuild for a while. I found out that it is fine for standard build tasks such as invoking msbuild.exe itself. However, when it comes to really custom actions it is painful. I did not want to struggle any longer with XML config files and wanted to stay as close as possible to a simple procedural programming language. Most of all, I was fed up with the quoting issues when invoking external executables. When writing such actions I wanted to stick to a shell scripting language that I already knew. Still, basic scripting can be improved a little when it comes to build processes. To be precise, the notion of task (or target) is extremely handy because you can split a build step into subtasks that you may combine differently depending on the build config. While it looks like overkill for a small project, it will ease your life in large projects where packaging can be complex.

The tool that I will present in this post is called PSake (pronounced saké, like the Asian alcohol) and is described by Wikipedia as

a domain-specific language and build automation tool written in PowerShell to create builds using a dependency pattern similar to Rake or MSBuild.

This was all I needed: a build tool in Powershell.

Even if I will not cover this in the present post, one of the main benefits of using Powershell for build scripts is the possibility to directly use Powershell extensions such as the Azure or Sharepoint cmdlets. Even if you do not need them right now, they may become extremely useful in the future of your Windows-based software project.

In this post I will describe how to use PSake for a very simple build task: patching the AssemblyInfo.cs files of a Visual Studio .NET solution. I will also show you the organization of the targets that I have chosen.

Let us recall that the version of the .NET assembly produced by the compilation of your C# project comes from the AssemblyInfo.cs file that you can find under the /Properties folder. The lines automatically generated in the AssemblyInfo.cs file regarding the assembly version are

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Consequently, the task has to patch the AssemblyVersion so that it contains major.minor, while the AssemblyFileVersion will be major.minor.build. Once the task has finished, all AssemblyInfo.cs files in the source repository will be patched similarly, for example with [assembly: AssemblyVersion("6.3")] and [assembly: AssemblyFileVersion("6.3.567")]. If you are not familiar with product versioning you may read the wikipedia page. Here we chose a major.minor.build sequence where the major and minor numbers are fully handled by the owner of the project, while the build number is provided by the continuous integration system, TeamCity in our case. Similarly to the food industry where, given the number of a ready-made meal, you are supposed to be able to trace back where the ingredients came from, in software you should be able to trace back, from the product version, the source of your product and its external dependencies. This is why both the source code and the build history are precious, and why the TeamCity database should also be backed up.

The task presented here may not be the best advocate for PSake over MSBuild XML configs, because the <FileUpdate> command performs the same kind of patch easily; in addition, the MSBuildExtensionPack has a built-in AssemblyInfo task for this purpose. But if you are a .NET developer, I am pretty sure you will be at ease with it, so take it as an illustration of PSake.

Here is the source code for the task.
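In substance, and up to the implementation details of the helper functions, it looks like the following sketch (the relative path from the /build directory to the solution root is an assumption):

task PatchAssemblyInfo {
    # AssemblyVersion holds major.minor, AssemblyFileVersion holds major.minor.build
    $assemblyVersion = GetAssemblyVersion          # e.g. "6.3"
    $assemblyFileVersion = GetAssemblyFileVersion  # e.g. "6.3.567"

    # $PSScriptRoot makes the task independent of the invocation directory;
    # the relative path to the solution root is an assumption
    $solutionDir = Join-Path $PSScriptRoot "..\.."

    # patch every AssemblyInfo.cs found in the solution
    Get-ChildItem $solutionDir -Recurse -Filter "AssemblyInfo.cs" | ForEach-Object {
        PatchFile $_.FullName $assemblyVersion $assemblyFileVersion
    }
}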

The outer braces of task PatchAssemblyInfo define the scope of the target. The command does not depend on where the script is run from because we use the $PSScriptRoot variable (available since Powershell 3.0). It basically looks for all AssemblyInfo.cs files in the solution directory and patches them with the appropriate version numbers. The detailed implementations of the functions GetAssemblyVersion, GetAssemblyFileVersion and PatchFile can be found here.

Now let us look at how to combine and invoke the PSake targets from Teamcity. I have a /build directory at the root of the repository which looks like this:

The organisation of build targets with PSake

The PSake project is located under the /build folder. In the /targets directory we put all the targets, one target per file with the same name. The bootstrapTargets.ps1 script loads all the build targets that are put in the subdirectory of the same name. In bootstrapTargets.ps1 I also put the high-level targets, the ones that will be called by the continuous integration. Here, I have defined a high-level target Example that executes first our PatchAssemblyInfo, then another target that I have called RenameMSI (only for the example). You can also put in bootstrapTargets.ps1 some functions that will be reused by the different targets. A possible sketch is shown below.
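For illustration, bootstrapTargets.ps1 could look like the following sketch, assuming the /build/targets layout shown above (Include and task are PSake functions):

# load every target defined under /build/targets (one task per file)
Get-ChildItem (Join-Path $PSScriptRoot "targets") -Filter "*.ps1" | ForEach-Object {
    Include $_.FullName
}

# high-level target called by the continuous integration server
task Example -depends PatchAssemblyInfo, RenameMSI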

Now let us see how to invoke the PSake Example task within Teamcity. Create a Powershell build step using at least version 3.0.

Invoke the Example PSake task within TeamCity

The script in the Teamcity editor (and only this one) assumes the current directory is the repository root. The first line imports the PSake module, while the second one invokes the Example task using PSake. The third line below is important so that exceptions thrown in the script appear as build failures.
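Based on the layout above, the first two lines could look like the following sketch, where the exact paths are assumptions:

Import-Module .\build\psake\psake.psm1 # the module location is an assumption
Invoke-psake .\build\bootstrapTargets.ps1 Example # run the high-level Example task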
if ($psake.build_success -eq $false) { exit 1 } else { exit 0 }

To conclude, I would say that PSake is a good alternative for those who know Powershell and want to use less XML in their builds.

Tools for javascript TDD with Visual Studio

In this post I describe the tools that I have selected for the efficient development of javascript tests within Visual Studio. Indeed, if you are developing sites with ASP.NET or Apps for Office, then you are more or less committed to using Visual Studio. Therefore, you probably do not want to use another editor for javascript development.

What were my requirements when I chose the tools that I will describe in this post? Actually, I needed tools that could provide the same comfort that I have when developing in TDD with .NET. Precisely, the tests should be runnable individually (or at least individually for a given set). They should run on the continuous integration build, which is TeamCity in my case. The test runner may use any browser for executing the tests, though flexibility in the choice of browsers would be appreciated. In addition, the tests should be easy to debug, and this is a very important aspect for me.

Let me detail this latter requirement a little. Indeed, I recently read this blog post where it is written that

Debugging the tests
Wiseman said: “If you need to debug unit test then something is wrong with it’s unitness.”
PS: If it’s really needed you can debug it in browser as you normally debug JavaScript code.

Even if the rest of the blog is brilliant, I do not agree at all with this sentence. If you practice TDD regularly, then you'll see that you have to debug; that's part of the game. If you never have to debug, maybe it's because you are asserting too many trivialities and not enough of your own complex logic. In addition, if you have to create the webpage on your own for debugging, that is not acceptable to me either. You should be able to put your breakpoint anywhere in your code and hit it in less than five seconds. We will see that the tool chosen permits such quick debugging, and that is the main reason why I am using it.

At the time of writing, the popular Visual Studio add-in Resharper, in its version 8, supports javascript tests with the Jasmine and QUnit frameworks. Unfortunately, the debugger cannot be properly attached while debugging the scripts. Sadly, I had to reject Resharper, which I appreciate so much for .NET development.

Finally, the best tool that I found is Chutzpah (which means audacity in Yiddish). Firstly, there is the Chutzpah runner, a javascript test runner that uses the headless browser PhantomJS; we will invoke it from the command line in the continuous build. Secondly, there is the Chutzpah Visual Studio extension, which enables quick runs and debugging sessions from Visual Studio. Both the runner and the Visual Studio extension are fully compatible with the JS test framework Jasmine 2.0 that I have chosen. Note that I also use the mocking library SinonJS, but I won't discuss it here.

You may retrieve the code samples below on my github repository.

To illustrate these tools we will use a very simple mockup project, and we start its description with the following Visual Studio solution.

The Visual Studio solution describing the example of this post.

As usual there are two projects: WebApplication1, the core project, containing the app folder with our javascript logic, especially the file that we would like to test: sut.js (for System Under Test). There is also the test project, WebApplication1.Tests. The file testing the logic contained in sut.js is simply sutTests.js. The testing framework Jasmine is added to the solution by referencing the folder lib/jasmine-2.0.0.

We will take a very simple example, where the object app.sut has a function for computing the factorial of an integer. The code of sut.js can be found in the following snippet.
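In substance it looks like this sketch (the exact implementation is an assumption; note that negative inputs are deliberately not handled yet):

app.namespace("app.sut"); // assumes the namespacing helper mentioned below creates app.sut
app.sut.factorial = function (n) {
    // negative inputs are not handled yet: see the failing test further down
    if (n <= 1) {
        return 1;
    }
    return n * app.sut.factorial(n - 1);
};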

(Note that in app we have implemented a basic namespacing function such as this one)

Let us now focus on the test suite in the sutTests.js code.
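Based on the description below, the suite could look like this sketch (the reference path is an assumption):

/// <reference path="../../WebApplication1/app/sut.js" />

describe("app.sut.factorial", function () {
    it("returns 1 when the input is 0", function () {
        expect(app.sut.factorial(0)).toBe(1);
    });
    it("returns 1 when the input is 1", function () {
        expect(app.sut.factorial(1)).toBe(1);
    });
    it("returns 120 when the input is 5", function () {
        expect(app.sut.factorial(5)).toBe(120);
    });
    it("throws when a negative value is passed", function () {
        expect(function () { app.sut.factorial(-1); }).toThrow();
    });
});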

This piece of code is very basic Jasmine syntax for a test suite. We assert the values of the factorial function for the two corner cases where the input is 0 or 1, and we also test the situation where n=5. Remark also that we have a test which asserts that an exception is thrown when a negative value is passed to the function. We have not covered this situation in the snippet above; consequently, we expect this test to fail… Remark that all the needed references to the js files are handled with Visual Studio's ///<reference comments. By the way, we benefit from Visual Studio Intellisense here.

The javascript Intellisense with Visual Studio

In the next screenshot, we run the tests with the Visual Studio Test Explorer, which is compatible with Jasmine thanks to Chutzpah.

Run all our tests with the Visual Studio Test Explorer. As expected we encounter a failure.

If we were in the situation where we did not know what was going wrong (which is often the case), we would need to debug the test. Unfortunately, as with Resharper, you cannot debug the test directly in Visual Studio: the debugger does not attach properly. Even Chutzpah's creator does not know how to do that, so I believe we will have to wait before we can debug javascript tests as easily as .NET ones within Visual Studio. For now, we have to debug with a web browser. However, Chutzpah creates the webpage for bootstrapping your failing test, so just by clicking on Open in browser you will have the web page loaded and a link to run the failing test within the browser (see screenshots below).

Chutzpah Visual Studio extension creates a web page for running/debugging tests in the web browser.

Then the debugging session happens in your browser (here Chrome). Clicking the links again re-executes the tests.

Now we have all the material for efficiently editing tests in Visual Studio. Let us say a few words about running the tests from the command line on the server. Personally, I embed the Chutzpah runner in a /build directory within my sources and execute the following Powershell script.
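In substance it looks like this sketch, where the runner location and the test project path are assumptions:

# locate the Chutzpah console runner relative to this script (paths are assumptions)
$chutzpah = Join-Path $PSScriptRoot "chutzpah\chutzpah.console.exe"
$testDir = Join-Path $PSScriptRoot "..\WebApplication1.Tests"

# the /teamcity switch makes the runner emit TeamCity service messages
& $chutzpah $testDir /teamcity
if ($LASTEXITCODE -ne 0) { exit 1 }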

Here is what it looks like when run in a Powershell console with the teamcity option.

The execution of all javascript tests of the solution with the Chutzpah runner using Powershell

To conclude, I would say that Chutzpah is a great project, and if you need a simple and ready-to-use test runner I recommend it. The only limitation for now is that the runner only supports PhantomJS. However, you may use another test runner for executing the tests with different browsers and keep the Chutzpah Visual Studio extension for development. One last important thing to note is that Chutzpah 3.0 supports RequireJs.