Quick TeamCity Build Status with AngularJS

So, this isn’t supposed to be the ultimate guide to AngularJS or anything like that – I’m not even using the latest version – this is just some notes on my return to The World of the View Model after a couple of years away from WPF. Yeah, that’s right, I just said WPF while talking about JavaScript development. They may be different technologies from different eras: one may be the last hurrah of bloated fat-client development and the other may be the latest and greatest addition to the achingly-cool, tie-dyed hemp tool belt of the Single Page App hipster, but under the hood they’re very, very similar. Put that in your e-pipe and vape it, designer-bearded UX developers!

[Image: BuildStatus]

Anyway, when I started, I knew nothing about SPA development. I’d last written JavaScript several years ago and had never really used it as a real language. I still contend that JavaScript isn’t a real language (give me Scala or C# any day of the week) but you can’t ignore the fact that this is how user interfaces are developed these days… so, yeah, I started with a tutorial on YouTube.

I decided to do an Information Radiator to show build status from TeamCity on the web. Information Radiators are my passion – at least they’re one of the few passions I’m allowed to pursue at work – and we use TeamCity for all our continuous integration, release builds, automated tests and so on. Our old radiators are coded in WPF, which looks awesome on the big TVs dotted around the office, but doesn’t translate well for remote workers.

There is no sunshine and there are no rainbows in this article. I found JavaScript to be a hateful language, filled with boilerplate and confusion. Likewise, though TeamCity is doubtless the best enterprise CI platform on planet earth, its REST APIs are pretty painful to consume. With that in mind, let’s get into the weeds and see how this thing works…

Enable cross-origin resource sharing (CORS) on your TeamCity server

You can’t hit a server from a web page unless that server is the server that served the web page you’re hitting the server with… unless of course you tell the server you want to hit that the web page you want to hit it with, served from a different server, is allowed to hit it. Got that? Thought so. This is all because of a really logical thing called “Cross-Origin Resource Sharing”, which you can enable pretty easily in TeamCity as long as you have admin permissions.

Check out Administration -> Server Administration -> Diagnostics -> Internal Properties. From there you should be able to edit the internal.properties file, or at least find out its location. Weirdly, if the file doesn’t exist there is no option to edit it, so you have to go and create it yourself. Since my TeamCity server is running on a Windows box, I created the new file here:

C:\ProgramData\JetBrains\TeamCity\config\internal.properties

and added the following:

rest.cors.origins=*

You might want to be a little more selective on who you allow to access the server this way – I guess it depends on how secure your network is, how many clients access the dashboard and so on.
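
For example, if your dashboard is only ever served from one known host, you can list the allowed origins explicitly, comma-separated (the hostnames below are made up for illustration):

rest.cors.origins=http://dashboard.example.com,https://dashboard.example.com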

Tool Chain

This article is about AngularJS and it’s about TeamCity. It’s not about NPM or Bower or any of that nonsense. I’m not going to minify my code or use any crazy new-fangled pseudo-cosmic CSS. So setting up the build environment for me was pretty easy: create a folder, add a file called “index.html”, fire up the fantastic Fenix Web Server and configure it to serve up the folder we just created. Awesome.

If you’re already confused, or if you just want to play with the code, you can download the lot from GitHub: https://github.com/DanteLore/teamcity-status-with-angular

I promise to do my best

Hopefully you’ve watched the YouTube tutorial I mentioned above, so you know the basics of an AngularJS app. If not, do so now. Then maybe Google around the subject of promises and HTTP requests in AngularJS. Done that? OK, good.

Web requests take a while to run. In a normal app you might run them on another thread, but not in JavaScript. JavaScript is all about callbacks. A Promise is basically a callback that promises to get called some time in the future. They are actually pretty cool, and they form the spinal column of the build status app. That’s because the TeamCity API is so annoying: in order to find out the status (OK or broken) and state (running, finished) of each build configuration you need to make roughly six trillion HTTP requests, as follows:

  1. Fetch a list of the build configurations in the system. These are called “Build Types” in the API and have properties like “name”, “project” and “id”
  2. For each Build Type, make a REST request to get information on the latest running Build with a matching type ID. This will give you the “name”, “id” and “status” of the last finished build for the given Build Type.
  3. Fetch a list of the currently running builds.
  4. Use the list of finished builds and the list of running builds to create a set of status tiles (more on this later)
  5. Add the tiles to the angular $scope and let the UI render them

Here’s how that looks in code. Hopefully not too much more complicated than above!

buildFactory.getBuilds()
	.then(function(responses) {
		// Keep only the successful responses that actually contain builds,
		// then take the most recent build from each
		$scope.buildResponses = responses
			.filter(function(r) { return r.status === 200 && r.data.build.length > 0; })
			.map(function(r) { return r.data.build[0]; });
	})
	.then(buildFactory.getRunningBuilds)
	.then(function(data) {
		// Remember the IDs of the build types that are currently running
		$scope.runningBuilds = data.data.build.map(function(row) { return row.buildTypeId; });
	})
	.then(function() {
		// Merge finished and running builds into simple view model objects
		$scope.builds = $scope.buildResponses.map(function(b) { return buildFactory.decodeBuild(b, $scope.runningBuilds); });
	})
	.then(function() {
		$scope.tiles = buildFactory.generateTiles($scope.builds);
	})
	.then(function() {
		// All done - hide the "loading" spinner
		$scope.statusVisible = false;
	});

Most of the REST access has been squirrelled away into a factory. And yes, our build server is called “tc” and guest access is allowed to the REST APIs and I have enabled CORS too… because sometimes productivity is more important than security!

angular.module('buildApp').factory('buildFactory', function($http) {
	var factory = {};
	  
	var getBuildTypes = function() {
		return $http.get('http://tc/guestAuth/app/rest/buildTypes?locator=start:0,count:100');
	};
	
	var getBuildStatus = function(id) {
		return $http.get('http://tc/guestAuth/app/rest/builds?locator=buildType:' + id + ',start:0,count:1&fields=build(id,status,state,buildType(name,id,projectName))');
	};
	
	factory.getRunningBuilds = function() {
		return $http.get('http://tc/guestAuth/app/rest/builds?locator=running:true');
	};

// etc
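
One thing the snippet above doesn’t show is getBuilds itself, which kicks off the whole chain. The full version is in the GitHub repo; a minimal sketch (assuming the factory is declared as function($http, $q) so $q is available) might look something like this:

	factory.getBuilds = function() {
		// Fetch all the build types, then fire one status request per type
		// and gather the whole batch of responses with $q.all
		return getBuildTypes().then(function(response) {
			return $q.all(response.data.buildType.map(function(bt) {
				return getBuildStatus(bt.id);
			}));
		});
	};

That array of responses from $q.all is exactly what the promise chain at the top of this section filters and maps.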

Grouping and Tiles

We have over 100 builds. Good teams have lots of builds. Not too many, just lots. Every product (basically every team) has CI builds, release/packaging builds, continuous deployment builds, continuous test builds, metrics builds… we have a lot of builds. Builds are good.

But a screen with 100+ builds on it means very little. This is an information radiator, not a formal report. So, I use a simple (but messy) algorithm to convert a big list of Builds into a smaller list of Tiles (there’s a code sketch after the list):

  1. Take the broken builds (hopefully not many) and turn each one into a Tile
  2. Take the successful builds and group them by “project” (basically the category, which is basically the team or product name)
  3. Turn each group of successful builds into a Tile, using the “project” as the tile name
  4. Mark any “running” build with a flag so we can give feedback in the UI
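
In code, the grouping comes out something like this – a sketch rather than the exact repo version, assuming each decoded build has name, project, status and running fields:

	factory.generateTiles = function(builds) {
		// Every broken build gets a tile of its own so it stands out
		var tiles = builds
			.filter(function(b) { return b.status !== 'SUCCESS'; })
			.map(function(b) {
				return { name: b.name, project: b.project, status: b.status, running: b.running, buildCount: 0 };
			});

		// Successful builds collapse into one tile per project
		var groups = {};
		builds
			.filter(function(b) { return b.status === 'SUCCESS'; })
			.forEach(function(b) {
				var tile = groups[b.project] ||
					(groups[b.project] = { name: b.project, project: '', status: 'SUCCESS', running: false, buildCount: 0 });
				tile.buildCount++;
				tile.running = tile.running || b.running; // flag groups with a build in progress
			});

		Object.keys(groups).forEach(function(k) { tiles.push(groups[k]); });
		return tiles;
	};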

[Image: BuildStatus2]

Displaying It

Nothing very exciting here. I used Bootstrap – well, a derivative of Bootstrap – to make the UI look nice. I bound some content to the View Model and that’s about it. Download the code and have a look if you like.

Here’s my index.html (which shows all the libraries I used):

<html ng-app="buildApp">
<head>
  <title>Build Status</title>
  
  <link href="https://bootswatch.com/cyborg/bootstrap.min.css" rel="stylesheet">
  <!--link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet"-->
</head>

<body>
  <div ng-view>
  </div>

  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular.min.js"></script>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular-route.js"></script>
  <script src="https://code.jquery.com/jquery-2.2.3.min.js"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
  <script src="utils.js"></script>
  <script src="app.js"></script>
  <script src="build-factory.js"></script>
</body>
</html>

Here’s the “view” HTML for the list (in “templates/list.html”). I love the Angular way of specifying Views and Controllers by the way. Note the cool animated CSS for the “in progress” icon.

<div>
  <style>
	.glyphicon-refresh-animate {
		animation: spin 1s infinite linear;
		-webkit-animation: spin2 1s infinite linear;
	}

	@-webkit-keyframes spin2 {
		from { -webkit-transform: rotate(0deg);}
		to { -webkit-transform: rotate(360deg);}
	}

	@keyframes spin {
		from { transform: scale(1) rotate(0deg);}
		to { transform: scale(1) rotate(360deg);}
	}
  </style>
  
	<div class="page-header">
		<h1>Build Status <small>from TeamCity</small></h1>
	</div>
	  
    <div class="container-fluid">
		<div class="row">
    		<div class="col-md-3" ng-repeat="tile in tiles | orderBy:'status' | filter:nameFilter">
        		<div ng-class="getPanelClass(tile)">
               <h5><span ng-class="getGlyphClass(tile)" aria-hidden="true"></span>   {{ tile.name | limitTo:32 }}{{tile.name.length > 32 ? '...' : ''}}   {{ tile.buildCount > 0 ? '(' + tile.buildCount + ')' : ''}} </h5>
               <p class="panel-body">{{ tile.project }}</p>
              </div>
        	</div>
    	</div>
    </div>
	<br/><br/><br/><br/><br/><br/>
  
  <nav class="navbar navbar-default navbar-fixed-bottom">
  <div class="container-fluid">
    <p class="navbar-text navbar-left">
		<input type="text" ng-model="nameFilter"/>  <span class="glyphicon glyphicon-filter" aria-hidden="true"></span>  
		<span class="glyphicon glyphicon-refresh glyphicon-refresh-animate" ng-hide="!statusVisible"></span>
	</p>
  </div>
</nav>
</div>

That’s about it!

I think I summarized how I feel about this project in the introduction. It looks cool and the MVC/MVVM View Model vibe is a good one. The data binding is simple and works very well. All my gripes are with JavaScript as a language, really. I want LINQ-style methods and I want classes and objects with sensible scope. I want less syntactic nonsense, maybe the odd => every now and again. I think some or all of that is possible with libraries and new language specs… but I want it without any effort!

One thing I will say: that whole page is less than 300 lines of code. That’s pretty darned cool.

Feel free to download and use the app however you like – just bung in a link to this page!

[Image: BuildStatus]

Shape Files and SQL Server

Over the last couple of weeks I have been doing a lot of work importing polygons into a SQL Server database, using them for some data processing tasks and then exporting the results as KML for display. I thought it’d be worth a post to record how I did it.

Inserting polygons (or any other geometry type) from a shape file into the database can be done with the ogr2ogr tool, which ships with the GDAL libraries (and with MapServer for Windows). I knocked up a little batch file to do it:

SET InputShapeFile="D:\Dropbox\Data\SingleView\Brazillian Polygons\BRA_adm3.shp"

SET SqlConnectionString="MSSQL:Server=tcp:yourserver.database.windows.net;Database=danTest;Uid=username@yourserver.database.windows.net;Pwd=yourpassword;"

SET TEMPFILE="D:\Dropbox\Data\Temp.shp"
SET OGR2OGR="C:\ms4w\tools\gdal-ogr\ogr2ogr.exe"
SET TABLENAME="TestPolygons"

%OGR2OGR% -overwrite -simplify 0.01 %TEMPFILE% %InputShapeFile% -progress

%OGR2OGR% -lco "SHPT=POLYGON" -f "MSSQLSpatial" %SqlConnectionString% %TEMPFILE% -nln %TABLENAME% -progress

The first ogr2ogr call simplifies the polygons; the value 0.01 is the simplification tolerance (in degrees in this case), so detail smaller than that is thrown away. The results of this command are pushed to a temporary shape file set. The second call to ogr2ogr pushes the polygons from the temp file up to a database in Windows Azure. The same code would work for a local SQL Server; you just need to tweak the connection string.
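
For what it’s worth, the GDAL MSSQLSpatial driver also accepts Windows authentication, so a local-server connection string looks something like this (server and database names are placeholders):

SET SqlConnectionString="MSSQL:server=localhost;database=danTest;trusted_connection=yes;"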

You can use SQL Server Management Studio to show the spatial results of your query, which is nice! Here I just did a “select * from testPolygons” to see the first 5000 polygons from my file.

[Image: PolygonsInSqlServer]

SQL Server contains all sorts of interesting data processing options, which I’ll look at another time. Here I’ll just skip to the final step – exporting the polygon data from the database to a local KML file.

[Image: polygonsInKml]

SET KmlFile="D:\Dropbox\Data\Brazil.kml"

SET SqlConnectionString="MSSQL:Server=tcp:yourserver.database.windows.net;Database=danTest;Uid=username@yourserver.database.windows.net;Pwd=yourpassword;"

SET TEMPFILE="D:\Dropbox\Data\Temp.shp"
SET OGR2OGR="C:\ms4w\tools\gdal-ogr\ogr2ogr.exe"
SET SQL="select * from TestPolygons"

%OGR2OGR% -lco "SHPT=POLYGON" -f "KML" %KmlFile% -sql %SQL% %SqlConnectionString%  -progress

Obviously you can make the SQL in that command as complex as you like.

Polygons here are from this site which allows you to download various polygon datasets for various countries.

Serial on Raspberry Pi Arch Linux

So the new version of Arch Linux doesn’t have runlevels, rc.d or any of that nonsense any more. It just has systemd. Super simple if you know how to use it, but a right pain in the backside if you don’t.

I have a little serial GPS module hooked up to my Raspberry Pi via the hardware serial port (ttyAMA0). My old instructions for getting this to work aren’t much use any more. Here’s the new procedure for getting serial data with the minimum of fuss:

1. Disable serial output during boot

Edit /boot/cmdline.txt using your favourite editor. I like nano these days.

sudo nano /boot/cmdline.txt

Remove all chunks of text that mention ttyAMA0 but leave the rest of the line intact. Bits to remove look like:

console=ttyAMA0,115200 kgdboc=ttyAMA0,115200

2. Disable the console on the serial port

This was the new bit for me. The process used to involve commenting out a line in /etc/inittab, but that file is long gone.

Systemd uses links in /etc to decide what to start up, so once you find the right one, removing it is easy. You can find the files associated with consoles by doing:

ls /etc/systemd/system/getty.target.wants/

One of the entries clearly refers to ttyAMA0. It can be removed using the following command:

sudo systemctl disable serial-getty@ttyAMA0.service

3. Check you’re getting data

I used minicom for this as it’s very simple to use. First of all, make sure you plug in your device (with the power off, if you’re as clumsy as me!).

sudo pacman -S minicom
minicom -b 4800 -o -D /dev/ttyAMA0

You should see a lovely stream of data. In my case it was a screen full of NMEA sentences. Great stuff!
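
If you’ve not met NMEA before, sentences look like this – this is the classic example GGA (fix data) sentence from the spec, not my actual output:

$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47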

WordPress: Oh deary deary me!

This evening I was innocently setting up a wireless dongle for my darling wife. I casually typed the address of this very web page into her browser to check it was working, only to find that all the posts were missing!

404 errors on every page but the front page. Poo! I desperately dived into the WordPress settings; everything was set up fine. I updated WordPress but it made no difference. In the end, I went back to the permalink (post URL) settings and clicked “Apply” again. That fixed the problem!

Looking at the stats, Logical Genetics seems to have been off the air since Independence Day. Almost a month. Miserable.

Back now though, and soon to be posting an article on my Build Status Traffic Lights.

Using a BufferBlock to Read and Process in Parallel

I wrote an app this week – top secret of course – to load data from a database and process the contents. The reading from the database is the slow part, and the processing takes slightly less time. I decided it might help if I could read a batch of results into memory and process it while loading the next batch.

Batching was dead easy: I found an excellent extension method on the internet that batches up an enumerable and yields a sequence of arrays. The code looks like this:

public static IEnumerable<T[]> Batch<T>(this IEnumerable<T> sequence, int batchSize)
{
    var batch = new List<T>(batchSize);

    foreach (var item in sequence)
    {
        batch.Add(item);

        // Emit a full batch and start collecting the next one
        if (batch.Count >= batchSize)
        {
            yield return batch.ToArray();
            batch.Clear();
        }
    }

    // Don't forget the final, partially-filled batch
    if (batch.Count > 0)
    {
        yield return batch.ToArray();
    }
}

That works really well, but it doesn’t give me the parallel read and process I’m looking for. After a large amount of research, some help from an esteemed colleague and quite a bit of inappropriate language, I ended up with the following. It uses the BufferBlock class from Microsoft’s TPL Dataflow library (which provides all sorts of very useful stuff that I may well write an article on at a later date). The BufferBlock marshals data over thread boundaries in a very clean and simple way.

// Needs the TPL Dataflow library (System.Threading.Tasks.Dataflow namespace)
public static IEnumerable<T[]> BatchAsync<T>(this IEnumerable<T> sequence, int batchSize)
{
    BufferBlock<T[]> buffer = new BufferBlock<T[]>();

    // Producer: read and batch the sequence on a background thread
    var reader = new Thread(() =>
        {
            foreach (var batch in sequence.Batch(batchSize))
            {
                buffer.Post(batch);
            }
            buffer.Post(null); // null marks the end of the stream
            buffer.Complete();
        }) { Name = "Batch Reader Async" };
    reader.Start();

    // Consumer: block on the calling thread until each batch arrives
    T[] blockToProcess;
    while ((blockToProcess = buffer.Receive()) != null)
    {
        yield return blockToProcess;
    }
}

The database read is done on a new thread and data is pulled back to the calling thread in batches. This makes for nice clean code on the consumer side!
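
To give a feel for the consumer side, usage boils down to something like this (LoadRows and Process are hypothetical stand-ins for the database read and the processing step):

foreach (var batch in LoadRows().BatchAsync(1000))
{
    Process(batch); // this batch is processed while the reader thread fetches the next
}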

Tracking Kanban with TFS

Kanban is a great way to manage your bug backlog.  It’s much better than Scrum simply because of the nature of bugs as compared to user stories. Scrum is all about making firm commitments based on estimates but bugs are very hard to estimate up-front. Generally when you’ve looked hard enough into the code to find the problem, you are in a position to fix it very quickly. Bug fixing is essentially a research task – like a spike – so time-boxing the work makes much more sense.

Set up a prioritised backlog and blast as many bugs as possible off the top of it in the time you’ve set aside – Kanban style.  This works very well but, as with most agile approaches, it leaves old fashioned managers a bit grumpy.  They want to track your productivity and it’s fair to say that you should too, because that’s how you spot impediments (plus it’s always good to show off).

Scrum-style burn downs don’t work with Kanban because they track progress against some committed target.  The answer is the Cumulative Flow Diagram:

[Image: CumulativeFlowDiagram3]

So I did some tweaking to my Information Radiator to add a page showing the CFD for the last 60 days of one of our projects.  The data comes out of TFS via the C# API and a WIQL query – which has a very nice historical query feature which I’ll explain below.

Cumulative Flow Diagrams Explained

Cumulative flow diagrams couldn’t be simpler.  Like a burn-up chart they show a running total of the bugs fixed over time.  Since bugs aren’t estimated, the Y axis shows the bug count.  In the chart above the X axis is in days but I guess you could do weeks or even hours if you like.  In addition to the “fixed bugs” series, there are also stacked series for other states: “committed”, “in development” and “in QA”.

The benefit of showing the other issue states is that it gives you a readout on how the process is working.  The QA and development series should generally be the same thickness.  If the QA area gets fatter than the development area then you have a bottleneck in QA.  If the development series gets too fat then you’re spread too thinly – you have an impediment in development or need to think about your Kanban limit.

Note how there are a couple of “steps” on the left of my graph.  Those correspond to the first couple of sprints in which we used TFS. The team weren’t familiar with it, so work item states were generally changed at the end of the sprint.  As time went on we got better at updating the system and the steps turned into a nice looking slope.

Historical Queries in TFS 2012

It’s not every day that I openly applaud Microsoft for doing something brilliant and until now I’ve never been that cheerful about TFS.  But… the historical querying in WIQL (work item query language) is bloody brilliant!

Drawing a CFD chart depends on an ability to get the historical state of any issue in the system at a specified point in time.  In WIQL this is done using the “AsOf” keyword:

Select [ID], [Title], [Effort - Microsoft Visual Studio Scrum 2_0], [Assigned To]
From WorkItems
Where
  [Team Project] = 'Project'
And
  [Work Item Type] = 'Bug'
And
  [Iteration Path] under 'Project\Release'
AsOf '21/01/2013'

So the algorithm for drawing the CFD is pretty simple (there’s a rough code sketch after the list):

  • Grab the sprints for the project in question and use them to get the start and end dates for your chart
  • For each day on the X axis
    • Run a WIQL statement to get the state of all the bugs in the project on that date
    • Use LINQ to count issues in the various states you’re showing on the graph series
    • Populate a list of view model/data objects (one for each X value)
  • Throw the values at the chart

The only complications were the fact that the WPF Toolkit chart doesn’t support stacked area series (so I had to do it myself in the view model) and that getting data on group membership from TFS is very hard and very slow (so I built a cache of dev and QA group members up front and do the comparisons on display name).

Circular Polarisation the Easy Way

Antennas?  Antennae?  I’m not an expert on pluralisation, but I know how to search the internet.  Seems English authors don’t differentiate between metallic apparatus and sensory appendages, so antennae it is.

Anyway, a couple of months ago I carefully soldered together a couple of circular polarised FPV antennae.  I followed the guide on rcexplorer.se, carefully measured some 0.8mm welding wire and fiddled for hours getting it soldered together.  The end result was a pair of incredibly fragile antennae in which I had very little confidence.  At 5.8GHz the tolerances are incredibly small and even the smallest blob of errant solder can cause issues, not to mention bashing into the ground and ruining everything.

[Image: IMG_7613]

The answer?  Buy something!  These fantastic plastic-potted circular polarised antennae are built to plug into Fat Shark goggles but can be made to fit the HobbyKing FPV stuff with a very cheap adapter that can be found on eBay for about £1 each (search for an SMA female to RP-SMA male adapter).

HobbyKing just added a version that doesn’t need the adapter too!  I found it while looking for a link for the previous paragraph.  I’m not too fussed that I missed out though – the ones I got are great!

[Image: IMG_7615]

These little antennae are much smaller than I expected.  They are also very light and very tough.  The flexible but stiff cable allows them to be bent into the right position for your model too.

Can’t wait to give these a try – when it stops raining.