Amazon Athena – First Look

Amazon recently launched Athena – their answer to Google’s BigQuery. It’s basically an SQL query engine which runs over files in S3. It reminds me of Apache Drill, but people round the office say it looks more like Hive.

AWS Athena is in no way associated with the ancient goddess of wisdom. Any similarity is purely coincidental.

The barrier to entry is very low. Upload the data files (CSV, Parquet and JSON are supported, amongst others), define a table, run a query. All this is done using a simple query editor.

Quick “Hello World”

To test Athena I uploaded some Parquet files containing data from the open house price dataset to an S3 bucket (I had wanted to load the CSV files “as is” but, due to limitations in the CSV reader, I couldn’t). I then declared a table like so:

CREATE EXTERNAL TABLE IF NOT EXISTS house_prices.price_paid (
  `id` string,
  `price` int,
  `date` string,
  `postcode` string,
  `property_type` string,
  `old_or_new` string,
  `tenure_duration` string,
  `address1` string,
  `address2` string,
  `street` string,
  `locality` string,
  `town` string,
  `district` string,
  `county` string,
  `ppd_category` string,
  `record_status` string,
  `month` string 
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
) LOCATION 's3://logicalgenetics.data/price-paid/'

And a few seconds later we’re ready to go:

select town, avg(price) as price 
from house_prices.price_paid 
group by town 
order by price desc
1	GATWICK	2683329.6666666665
2	THORNHILL	985000.0
3	VIRGINIA WATER	741140.2347652348
4	CHALFONT ST GILES	731333.515394913
5	COBHAM	610556.8430019713
6	BEACONSFIELD	587652.6552173913
7	KESTON	584417.7181571815
8	ESHER	551595.5002180074
9	GERRARDS CROSS	513740.5765843979
10	ASCOT	461468.9531164819

Good Stuff

The ease of setup in simple cases makes this technology very lightweight. If you already have data in S3, you can just start using Athena straight away. It’s perfect for ad-hoc querying, sanity checking and QA/test activities.

Athena uses a “serverless” model – you pay for the data you scan – no need to set up a cluster etc. At the time of writing, it’s something like $5 per TB of data scanned. As with everything on AWS, this is bearable but not exactly cheap.

Not Good Stuff

At the time of writing, Athena is very new. There are many missing features at the moment, which I hope Amazon will be adding in future.

Firstly, CSV read is limited to pure comma-separated data. Quotes are not supported. This is painfully annoying, as almost all CSV data has quotes around string fields. If you have to transform existing CSV data to remove quotes, the cost is going to outweigh any benefit you might have got from doing the direct queries.
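
For small, one-off datasets the conversion is easy enough to script yourself; here’s a rough sketch in Python, assuming pandas and pyarrow are installed and using made-up file names:

import pandas as pd

# Hypothetical paths - point these at your own raw CSV and output location
csv_path = "price_paid.csv"
parquet_path = "price_paid.parquet"

# pandas copes happily with quoted string fields, unlike Athena's CSV reader,
# so read the CSV here and write it back out as Parquet for Athena to query
df = pd.read_csv(csv_path)
df.to_parquet(parquet_path, engine="pyarrow", index=False)

Upload the resulting Parquet files to S3 and they can be queried with a table definition like the one above.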

The other annoyance to me is the lack of options for saving data back to S3. SELECT INTO and CREATE TABLE AS SELECT style statements are not (yet) supported. This breaks a key use-case for me: the ability to do one-off transforms of legacy or 3rd party data to new file formats. Wouldn’t it be nice to take a CSV file, uploaded by a 3rd party, change a few field names, transform to Parquet (or JSON or whatever) and save back into your data warehouse? Yes it would. But you can’t. Sorry.

Conclusion

Athena is pretty good if you want a simple tool for doing basic ad-hoc querying over data stored in S3 – provided that data is in a compatible format.

Sadly though, Athena is just not ready for the big time, as yet. With the addition of support for more data formats and the ability to save data back to S3, it could be an incredibly useful tool, but right now I could count the number of use-cases on one hand.

One to watch!

Evolutionary Algorithm: Playable Demo

Here I’m combining a bit of visualisation with my other favourite subject – the Evolutionary Algorithm (or Genetic Algorithm if you prefer). I’m not going to write anything about the properties of the algorithm – you can just play with the controls below the chart and see how the different settings affect its ability to find a good solution, adapt to changes and explore the problem space.

The problem: Find a value of x which maximises the value of y. The function is a set of sinusoidal waves of varying frequency and amplitude. The blue line shows the “fitness” for each value of x.

Basically, a population of different solutions is maintained – in this case, each solution is simply a value for x. Every individual has a fitness which can be calculated based on its value. Each iteration (every 100ms here), a solution is removed from the population – killed by selective pressure. Fitter individuals have a greater chance of surviving; less fit individuals have less of a chance.

A replacement solution is “bred” each iteration, to replace the solution killed-off by selective pressure. This new individual is generated by combining the “genetic material” of one or more parents. In this case, just by taking the x value of a single parent. Importantly, a mutation is applied to the new solution – this is key to exploring the problem space effectively.

And that’s all there is to an Evolutionary Algorithm – it’s just a way of finding the right combination of input variables to maximise some arbitrarily complex fitness function. It does this through a guided random search.
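
If you’d rather read code than prose, here’s a minimal Python sketch of the same loop – the fitness function is an arbitrary stand-in for the sum-of-sines curve in the demo, and the population size and mutation rate are just illustrative:

import math
import random

def fitness(x):
    # Arbitrary bumpy curve standing in for the blue line in the demo
    return math.sin(x) + 0.5 * math.sin(3 * x) + 0.25 * math.sin(7 * x + 1)

# The population is nothing more than a list of candidate x values
population = [random.uniform(0, 10) for _ in range(50)]

for _ in range(2000):
    # Selective pressure: pick two individuals at random and kill the less fit one
    a, b = random.sample(range(len(population)), 2)
    victim = a if fitness(population[a]) < fitness(population[b]) else b
    del population[victim]

    # Breed a replacement from a single parent and apply a small mutation
    parent = random.choice(population)
    population.append(parent + random.gauss(0, 0.1))

print("Best x found:", max(population, key=fitness))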


Well travelled or just plain old?

A friend of mine has always said that young cars with high mileage are better than old cars with low mileage. The theory being that company cars, which have spent their time cruising on the motorways, have had a much easier life than their stay-at-home cousins who’ve done short hops around town and sat on their driveways seizing up.

So I pointed some very simple Spark queries at the UK government’s open MOT data to see what I could find (you can read about the last time I did this here). First factoid to note is that both mileage and age are relevant when it comes to predicting pass rates. The following two charts show pass rate vs mileage and age.

Pass Rate vs Mileage

Pass Rate vs Age

To look at all three variables together I created the following chart, which shows age on the x axis and mileage on the y. Pass rate is a colour scale with red being the worst and green the best. Green squares show combinations of mileage and age at which vehicles are more likely to pass their MOT on the first attempt. Red squares show combinations where a first-try failure is likely.

Pass Rate vs Mileage and Age
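
The Scala for the proper stats is further down; purely as an illustration of how a chart like this gets built, the binning boils down to a grouped aggregation – here’s a PySpark sketch with a hypothetical input path and an assumed result coding (not the code that produced the chart):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical path; assumes the MOT results are already in Parquet
df = spark.read.parquet("mot-tests.parquet")

heatmap = (df
    .withColumn("pass", (F.col("testResult") == "P").cast("int"))  # assumes "P" codes a pass
    .withColumn("mileageBin", (F.col("testMileage") / 20000).cast("int") * 20000)
    .groupBy("age", "mileageBin")
    .agg(F.avg("pass").alias("passRate"), F.count("*").alias("testCount"))
    .orderBy("age", "mileageBin"))

heatmap.show()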

There is some truth to my mate’s theory – at least if this chart is to be believed – the pass rate for 3-5 year old cars looks pretty good even at very high mileages. Looking horizontally at very-low-mileage cars of increasing age, there seems to be something quite odd going on for vehicles on less than 20k miles. For the 20k-40k range there does seem to be a green stripe across the ages, but it is not as apparent as its vertical counterpart.

So should we all be buying a four-year-old car with 180k miles on the clock? Well, no. At least not if we want to keep it for more than a year or two. Cars with high mileages on the clock go into the red much earlier than those with low mileage (based on the fact that vehicles can only move right and up through the chart as they get older and drive further).

Pass Rate vs Mileage and Age… to the MAX

That last chart shows the same heat-matrix view, but to the full extents of the data. There are some interesting facts hidden in that chart… but I’ll leave them as an exercise for the reader!

UPDATE: Proper Stats:

So it turns out that calculating correlation and covariance with Spark is pretty easy. Here’s the results and the code:

For cars < 20 years and < 250,000 miles
cov(testMileage, pass) = -3615.011
corr(testMileage, pass) = -0.195
cov(age, pass) = -0.401
corr(age, pass) = -0.235
For all data
cov(testMileage, pass) = -3680.0456
corr(testMileage, pass) = -0.177
cov(age, pass) = -0.383
corr(age, pass) = -0.152

Looking at cars in the “normal” range (i.e. less than 20 years old and less than 250k miles) there’s a stronger correlation between age and pass rate than between mileage and pass rate. Interestingly, looking over the full range of the data this relationship is inverted, with mileage being very slightly better. There’s little to separate the two as a predictor for pass or fail – not least because age and mileage are correlated with each other (0.277 across all data).

Basic statistical functions are available under DataFrame.stat. See the calls hidden in the println lines below:

  it should "calculate covariance and correlation for normal cars" in {
    val motTests = Spark.sqlContext.read.parquet(parquetData).toDF()
    motTests.registerTempTable("mot_tests")

    val df = motTests
      .filter("testClass like '4%'") // Cars, not buses, bikes etc
      .filter("testType = 'N'") // only interested in the first test
      .filter("age <= 20")
      .filter("testMileage <= 250000")
      .withColumn("pass", passCodeToInt(col("testResult"))) // helper UDF (not shown) converting the result code to 1 or 0

    println("For cars < 20 years and < 250,000 miles")
    println(s"cov(testMileage, pass) = ${df.stat.cov("testMileage", "pass")}")
    println(s"corr(testMileage, pass) = ${df.stat.corr("testMileage", "pass")}")

    println(s"cov(age, pass) = ${df.stat.cov("age", "pass")}")
    println(s"corr(age, pass) = ${df.stat.corr("age", "pass")}")
  }

Moving data around with Apache NiFi

I’ve been playing around with Apache NiFi in my spare time (on the train) for the last few days. I’m rather impressed so far so I thought I’d document some of my findings here.

NiFi is a tool for collecting, transforming and moving data. It’s basically an ETL tool with a graphical interface and a number of pre-made processing elements. Stuffy corporate architects might call it a “mediation platform” but for me it’s more like ETL coding with Lego Mindstorms.

This is not a new concept – Talend have been around for a while doing the same thing. Something just never worked with Talend though; perhaps they abstracted at the wrong level, or perhaps they tried to be too general. Either way, the difference between Talend and NiFi is like night and day!


Garmin Track Data

So I don’t have access to a huge amount of “big data” on my laptop, and I’ve done articles on MOT and National Rail data recently, so I decided to use a couple of gigs of Garmin Track data to test NiFi. The track data is a good test as it’s XML: exactly the sort of data you don’t want going into your big data system and therefore exactly the right use-case for NiFi.

<?xml version="1.0" encoding="UTF-8"?>
<TrainingCenterDatabase xsi:schemaLocation="blah blah blah">
  <Activities>
    <Activity Sport="Biking">
      <Id>2015-04-06T13:26:53.000Z</Id>
      <Lap StartTime="2015-04-06T13:26:53.000Z">
        <TotalTimeSeconds>3159.267</TotalTimeSeconds>
        <DistanceMeters>12408.35</DistanceMeters>
        <MaximumSpeed>8.923999786376953</MaximumSpeed>
        <Calories>526</Calories>
        <Track>
          <Trackpoint>
            <Time>2015-04-06T13:26:53.000Z</Time>
            <Position>
              <LatitudeDegrees>51.516099665910006</LatitudeDegrees>
              <LongitudeDegrees>-1.244160421192646</LongitudeDegrees>
            </Position>
            <AltitudeMeters>91.80000305175781</AltitudeMeters>
            <DistanceMeters>0.0</DistanceMeters>
          </Trackpoint>

          <!-- ... -->

          <Trackpoint>
            <Time>2015-04-06T13:26:54.000Z</Time>
            <Position>
              <LatitudeDegrees>51.516099665910006</LatitudeDegrees>
              <LongitudeDegrees>-1.244160421192646</LongitudeDegrees>
            </Position>
            <AltitudeMeters>91.80000305175781</AltitudeMeters>
            <DistanceMeters>0.0</DistanceMeters>
          </Trackpoint>
        </Track>
      </Lap>
    </Activity>
  </Activities>
</TrainingCenterDatabase>

The only data in the file I’m particularly interested in is “where I went”. The calorie counts and suchlike are great on the day, but don’t tell us much after the fact. So, the plan is to extract the Latitude and Longitude fields from the Track element. Everything else is just noise.

Working with NiFi

NiFi uses files (“FlowFiles”, in NiFi terminology) as the fundamental unit of work. Files are collected, processed and output by a flow of processors. Files can be transformed, split or combined into more files as needed. The links between processors act as buffers, queuing files between processing stages.


The first part of the flow gathers the XML files from their location on disk (since Garmin charge an obscene amount for access to your own data via their API), splits the XML into multiple files, then uses a simple XPath expression to extract out the Latitude and Longitude.

A GetFile processor reads each whole XML file. Next, a SplitXml processor takes the XML in each file and splits it at a specified level (in this case 5), making a set of new files, one per Trackpoint element. Following that, an EvaluateXPath processor extracts the Lat and Long and stores them as attributes on each individual file.
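
As an aside, here’s what that part of the flow boils down to, expressed as a few lines of Python – just an illustration, and it cheats by ignoring the namespace that real TCX files declare:

import xml.etree.ElementTree as ET

# Hypothetical file name; real TCX files declare a default namespace, ignored here
tree = ET.parse("activity.tcx")

for trackpoint in tree.iter("Trackpoint"):
    position = trackpoint.find("Position")
    if position is None:
        continue  # no location on this Trackpoint - the NiFi flow routes these away too
    print(position.findtext("LatitudeDegrees"), position.findtext("LongitudeDegrees"))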


The rather naive XML split will return all elements at the specified level within the document tree. XPath is fine with that: it will either match a Lat and Long or it won’t. The issue is that we’ll end up with a large number of files where no location was found. The RouteOnAttribute processor can be used to discard all these empty files. Settings shown below:

[Screenshot: RouteOnAttribute processor settings]

So, now we have a stream of files (actually empty files!), each of which is decorated with attributes for Latitude and Longitude. The last part of the flow is all about saving these to a file.


The first processor in this part of the flow takes the attributes of each file and converts them to JSON, dropping the JSON string into the file body. We could just save the file at this stage, but that would be a lot of files. The second block takes a large number of single-record JSON files and joins them together to create a single line-delimited JSON file which could be read by something like Storm or Spark. I had all sorts of trouble escaping a carriage return within the MergeContent block, so in the end I stored a carriage return character in a file called “~/newLine.txt” and referenced that in the processor settings. Not pretty, but it works. The last block in the flow saves files – not much more to say about that!

Drawbacks and/or Benefits

It took a little over one train journey to get this workflow set up and working, and most of that was spent on Google! Compared to using Talend for the same job it was an absolute dream!

Perhaps the only shortcoming of the system is that it can’t do things like aggregations – so I can’t convert the stream of locations to a “binned” map with counts per 50x50m square, for example. De-duplication doesn’t seem possible either… but if you think about how these operations would have to be implemented, you realise how complicated and resource-hungry they would make the system. If you want to do aggregations, de-duplication and all that jazz, you can plug NiFi into Spark Streaming.

Most data integration jobs I’ve seen are pretty simple: moving data from a database table to HDFS, pulling records from a REST API, downloading things from a dropzone… and for all of these jobs, NiFi is pretty much perfect. It has the added benefit that it can be configured and maintained by non-technical people, which makes it cheaper to integrate into a workflow.

I like it!

Thatcham Trains

This is the final article in my brief series on the National Rail API. As usual, the code can be found on github:

https://github.com/DanteLore/national-rail

The Idea

There are a million and one different websites and apps which will tell you the next direct train from London Paddington to Thatcham (or between any other two railway stations) but all those apps are very general. You have to struggle through the crowds on the Circle Line while selecting the stations from drop-downs and clicking “Submit”, for example. Wouldn’t it be good if there was a simple way to see the information you need without any user input? Even better, what if you could get notifications when the direct trains are delayed or cancelled?

Enter stage left, the Twitter API. This article is all about a simple mash-up of the National Rail and twitter APIs to show information on direct trains between London and Thatcham. You can use it for other stations too – it’s all in the command line parameters.

People who live in Thatcham can use my twitter feed @ThatchamTrains or you can set up your own feed and run the python script to populate it with the stations you’re interested in.

The script also sends direct messages if the trains are more than 15 minutes late or cancelled.

Using the script

I host my instance of the script on my Raspberry Pi, which is small, cheap, quiet and can be left on 24×7 without much hassle. These instructions are therefore specific to setup on the Pi, but the script will work on Windows and other versions of Linux too.

1. Install the python libraries you need. You may already have these installed.

$ sudo easy_install argparse
$ sudo easy_install requests
$ sudo easy_install xmltodict
$ sudo easy_install flask

2. Get a twitter account and a set of API keys by following the steps on the Twitter developers page. You’ll need four magic strings in total, which you pass to the script as command line parameters.

3. Get a national rail API key from their website. You just need one key for this API, which is nice!

4. Clone the source and run the script using the three commands below… simples!

$ git clone https://github.com/DanteLore/national-rail.git
$ cd national-rail
$ python twitterrail.py --rail-key YOUR_NATIONAL_RAIL_KEY --consumer-key YOUR_CUST_KEY --consumer-secret YOUR_CUST_SECRET --access-token YOUR_ACCESS_TOKEN --access-token-secret YOUR_ACCESS_TOKEN_SECRET --users YourTwitterName --forever

When run with the --forever option, the script will query the NR API and post to twitter every 5 minutes. Note that there are some basic checks to prevent annoying behaviour and duplicate messages. You can specify one or more usernames who you’d like to receive direct messages when there are delays and cancellations; note that only users who follow you can receive DMs on twitter.
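
There’s no magic to the forever mode, by the way – it’s just a polling loop. A stripped-down sketch of the idea (function names hypothetical, not the ones in the repo):

import time

def run_forever(check_trains, post_to_twitter, interval_seconds=300):
    # Poll the National Rail API every five minutes and tweet anything new.
    # The duplicate-message checks and quiet rules are left out of this sketch.
    while True:
        for message in check_trains():
            post_to_twitter(message)
        time.sleep(interval_seconds)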

You can use other stations by specifying the three character station codes (CRS) for “home” and “work” on the command line. Here are the command line options:

$ python twitterrail.py --help

usage: twitterrail.py [-h] [--home HOME] [--work WORK] [--users USERS]
                      [--forever] --rail-key RAIL_KEY --consumer-key
                      CONSUMER_KEY --consumer-secret CONSUMER_SECRET
                      --access-token ACCESS_TOKEN --access-token-secret
                      ACCESS_TOKEN_SECRET

Tweeting about railways

optional arguments:
  -h, --help            show this help message and exit
  --home HOME           Home station CRS (default "THA")
  --work WORK           Work station CRS (default "PAD")
  --users USERS         Users to DM (comma separated)
  --forever             Use this switch to run the script forever (once ever 5 mins)
  --rail-key RAIL_KEY   API Key for National Rail
  --consumer-key CONSUMER_KEY
                        Consumer Key for Twitter
  --consumer-secret CONSUMER_SECRET
                        Consumer Secret for Twitter
  --access-token ACCESS_TOKEN
                        Access Token for Twitter
  --access-token-secret ACCESS_TOKEN_SECRET
                        Access Token Secret for Twitter

The Code

There’s not much to say about the code, since I’ve covered the National Rail API in graphic detail in a previous article. The only real difference between this script and my previous adventures with the API is that this time I did unit testing.

There’s a fair bit of business logic in the twitter app: rules about when to post and when to be quiet, duplicate message detection and all sorts of time- and data-based rules which can’t be tested using real data. It’s also pretty bad form to test code like this against a live API, so I mocked out the NR query and the Twitter API and wrote a small suite of tests to check that the behaviour is right.
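
The snippet below shows the shape of those tests as a self-contained toy – the logic and names are made up for illustration and aren’t the code from the repo:

import unittest
from unittest.mock import Mock

def tweet_delays(rail, twitter, threshold_minutes=15):
    # Toy stand-in for the real logic: DM when a train is badly delayed or cancelled
    for service in rail.get_departures():
        if service["cancelled"] or service["delay_minutes"] > threshold_minutes:
            twitter.send_direct_message("Trouble on the line: " + service["std"])

class TweetLogicTests(unittest.TestCase):
    def test_badly_delayed_train_triggers_direct_message(self):
        # Fake National Rail client returning one heavily delayed service
        rail = Mock()
        rail.get_departures.return_value = [
            {"std": "17:18", "delay_minutes": 20, "cancelled": False}
        ]
        twitter = Mock()

        tweet_delays(rail, twitter)

        twitter.send_direct_message.assert_called_once()

if __name__ == "__main__":
    unittest.main()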

Like I said, all the code is on GitHub, so I won’t bang on about it here.

https://github.com/DanteLore/national-rail

Live Train Route Animation

The code for this article is available on my github, here: https://github.com/DanteLore/national-rail

Building on the Live Departures Board project from the other day, I decided to try out mapping some departure data. The other article shows pretty much all the back-end code, which wasn’t changed much.


The AngularJS app takes the routes of imminent departures from various stations and displays them on a Leaflet map as polylines. I used this great Snake library to animate the lines as they appear. Map tiles come from CartoDB, which is free, unlike Mapbox.


Here’s the code-behind for the Angular app:

var mapApp = angular.module('mapApp', ['ngRoute']);

mapApp
    .config(function($routeProvider){
	    $routeProvider
		    .when('/',
		    {
		    	controller: 'MapController',
			    templateUrl: 'map.html'
		    })
		    .otherwise({redirectTo: '/'});
	})
	.controller('MapController', function($scope, $http, $timeout, $routeParams) {

        var mymap = L.map('mapid').fitBounds([ [51.3933180851, -1.24174419711], [51.5154681995, -0.174688620494] ]);
        L.tileLayer('http://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png', {
            attribution: '© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> © <a href="http://cartodb.com/attributions">CartoDB</a>',
            subdomains: 'abcd',
            maxZoom: 19
        }).addTo(mymap);

        $scope.routeLayer = L.featureGroup().addTo(mymap);
        $scope.categoryScale = d3.scale.category10();

        $scope.doStation = function(data) {
            data.forEach(function(route){
                var color = $scope.categoryScale(route[0].crs)
                var path = []

                route.filter(function(x) {return x.latitude && x.longitude}).forEach(function(station) {
                    var location = [station.latitude, station.longitude];
                    path.push(location);
                });

                var line = L.polyline(path, {
                    weight: 4,
                    color: color,
                    opacity: 0.5
                }).addTo($scope.routeLayer);

                line.snakeIn();
            });
        };

        $scope.refresh = function() {
            $scope.routeLayer.clearLayers();

            $scope.crsList.forEach(function(crs) {
                $http.get("/routes/" + crs).success($scope.doStation);
            });

            $timeout(function(){
                $scope.refresh();
            }, 10000)
        };

        $http.get("/loaded-crs").success(function(crsData) {
            $scope.crsList = crsData;

            $scope.refresh();
        })
    });

Quick TeamCity Build Status with AngularJS

So, this isn’t supposed to be the ultimate guide to AngularJS or anything like that – I’m not even using the latest version – this is just some notes on my return to The World of the View Model after a couple of years away from WPF. Yeah, that’s right, I just said WPF while talking about Javascript development. They may be different technologies from different eras: one may be the last hurrah of bloated fat-client development and the other may be the latest and greatest addition to the achingly-cool, tie dyed hemp tool belt of the Single Page App hipster, but under the hood they’re very very similar. Put that in your e-pipe and vape it, designer-bearded UX developers!


Anyway, when I started, I knew nothing about SPA development. I’d last done JavaScript several years ago and never really used it as a real language. I still contend that JavaScript isn’t a real language (give me Scala or C# any day of the week) but you can’t ignore the fact that this is how user interfaces are developed these days… so, yeah, I started with a tutorial on YouTube.

I decided to do an Information Radiator to show build status from TeamCity on the web. Information Radiators are my passion – at least they’re one of the few passions I’m allowed to pursue at work – and we use Team City for all our continuous integration, release builds, automated tests and so on. Our old radiators are coded in WPF, which looks awesome on the big TVs dotted around the office, but doesn’t translate well for remote workers.

There is no sunshine and there are no rainbows in this article. I found JavaScript to be a hateful language, filled with boilerplate and confusion. Likewise, though TeamCity is doubtless the best enterprise CI platform on planet earth, the REST APIs are pretty painful to consume. With that in mind, let’s get into the weeds and see how this thing works…

Enable cross-origin resource sharing (CORS) on your TeamCity server

You can’t hit a server from a web page unless that server is the server that served the web page you’re hitting the server with… unless of course you tell the server you want to hit that the web page you want to hit it with, served from a different server, is allowed to hit it. Got that? Thought so. This is all because of a really logical thing called “Cross Origin Resource Sharing”, which you can enable pretty easily in TeamCity as long as you have admin permissions.

Check out Administration -> Server Administration -> Diagnostics -> Internal Properties. From there you should be able to edit, or at least get the location of the internal.properties file. Weirdly, if the file doesn’t exist, there is no option to edit, so you have to go and create the file. Since my TeamCity server is running on a Windows box, I created the new file here:

C:\ProgramData\JetBrains\TeamCity\config\internal.properties

and added the following:

rest.cors.origins=*

You might want to be a little more selective on who you allow to access the server this way – I guess it depends on how secure your network is, how many clients access the dashboard and so on.

Tool Chain

This article is about AngularJS and it’s about TeamCity. It’s not about NPM or Bower or any of that nonsense. I’m not going to minify my code or use any crazy new-fangled pseudo-cosmic CSS. So setting up the build environment for me was pretty easy: create a folder, add a file called “index.html”, fire up the fantastic Fenix Web Server and configure it to serve up the folder we just created. Awesome.

If you’re already confused, or if you just want to play with the code, you can download the lot from GitHub: https://github.com/DanteLore/teamcity-status-with-angular

I promise to do my best

Hopefully you’ve watched the video I linked above, so you know the basics of an AngularJS app. If not, do so now. Then maybe Google around the subject of promises and http requests in AngularJS. Done that? OK, good.

Web requests take a while to run. In a normal app you might fetch them on another thread, but not in JavaScript. JavaScript is all about callbacks. A Promise is basically an object that promises to deliver a result some time in the future – you attach callbacks to be run when it does. They are actually pretty cool, and they form the spinal column of the build status app. This is because the TeamCity API is so annoying. Let me explain why. In order to find out the status (OK or broken) and state (running, finished) of each build configuration you need to make roughly six trillion HTTP requests as follows:

  1. Fetch a list of the build configurations in the system. These are called “Build Types” in the API and have properties like “name”, “project” and “id”
  2. For each Build Type, make a REST request to get information on the latest running Build with a matching type ID. This will give you the “name”, “id” and “status” of the last finished build for the given Build Type.
  3. Fetch a list of the currently running builds.
  4. Use the list of finished builds and the list of running builds to create a set of status tiles (more on this later)
  5. Add the tiles to the angular $scope and let the UI render them

Here’s how that looks in code. Hopefully not too much more complicated than above!

buildFactory.getBuilds()
	.then(function(responses) {
		$scope.buildResponses = responses
			.filter(function(r) { return (r.status == 200 && r.data.build.length > 0)})
			.map(function(r){ return r.data.build[0] })
	})
	.then(buildFactory.getRunningBuilds)
	.then(function(data) {
		$scope.runningBuilds = data.data.build.map(function(row) { return row.buildTypeId })
	})
	.then(function() {
		$scope.builds = $scope.buildResponses.map(function(b) { return buildFactory.decodeBuild(b, $scope.runningBuilds); });
	})
	.then(function() {
		$scope.tiles = buildFactory.generateTiles($scope.builds)
	})
	.then(function() {
		$scope.statusVisible = false;
	});

Most of the REST access has been squirrelled away into a factory. And yes, our build server is called “tc” and guest access is allowed to the REST APIs and I have enabled CORS too… because sometimes productivity is more important than security!

angular.module('buildApp').factory('buildFactory', function($http) {
	var factory = {};
	  
	var getBuildTypes = function() {
		return $http.get('http://tc/guestAuth/app/rest/buildTypes?locator=start:0,count:100');
	};
	
	var getBuildStatus = function(id) {
		return $http.get('http://tc/guestAuth/app/rest/builds?locator=buildType:' + id + ',start:0,count:1&fields=build(id,status,state,buildType(name,id,projectName))');
	};
	
	factory.getRunningBuilds = function() {
		return $http.get('http://tc/guestAuth/app/rest/builds?locator=running:true');
	};

// etc

Grouping and Tiles

We have over 100 builds. Good teams have lots of builds. Not too many, just lots. Every product (basically every team) has CI builds, release/packaging builds, continuous deployment builds, continuous test builds, metrics builds… we have a lot of builds. Builds are good.

But a screen with 100+ builds on it means very little. This is an information radiator, not a formal report. So, I use a simple (but messy) algorithm to convert a big list of Builds into a smaller list of Tiles:

  1. Take the broken builds (hopefully not many) and turn each one into a Tile
  2. Take the successful builds and group them by “project” (basically the category, which is basically the team or product name)
  3. Turn each group of successful builds into a Tile, using the “project” as the tile name
  4. Mark any “running” build with a flag so we can give feedback in the UI
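
Ignoring the language for a moment, the grouping itself is tiny. Here’s a Python sketch of the same idea (the real version is the JavaScript in build-factory.js):

from collections import defaultdict

def generate_tiles(builds):
    # Each build is assumed to be a dict with "name", "project", "status" and "running" keys
    tiles = []

    # 1. Every broken build gets a tile of its own
    for build in builds:
        if build["status"] != "SUCCESS":
            tiles.append({"name": build["name"], "project": build["project"],
                          "status": build["status"], "running": build["running"]})

    # 2. Successful builds are grouped by project...
    by_project = defaultdict(list)
    for build in builds:
        if build["status"] == "SUCCESS":
            by_project[build["project"]].append(build)

    # 3. ...and each group becomes a single tile named after the project
    for project, group in by_project.items():
        tiles.append({"name": project, "project": project, "status": "SUCCESS",
                      "buildCount": len(group),
                      "running": any(b["running"] for b in group)})  # 4. running flag for the UI

    return tiles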


Displaying It

Nothing very exciting here. I used Bootstrap – well, a derivative of Bootstrap – to make the UI look nice. I bound some content to the View Model and that’s about it. Download the code and have a look if you like.

Here’s my index.html (which shows all the libraries I used):

<html ng-app="buildApp">
<head>
  <title>Build Status</title>
  
  <link href="https://bootswatch.com/cyborg/bootstrap.min.css" rel="stylesheet">
  <!--link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet"-->
</head>

<body>
  <div ng-view>
  </div>

  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular.min.js"></script>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular-route.js"></script>
  <script src="https://code.jquery.com/jquery-2.2.3.min.js"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
  <script src="utils.js"></script>
  <script src="app.js"></script>
  <script src="build-factory.js"></script>
</body>
</html>

Here’s the “view” HTML for the list (in “templates/list.html”). I love the Angular way of specifying Views and Controllers by the way. Note the cool animated CSS for the “in progress” icon.

<div>
  <style>
	.glyphicon-refresh-animate {
		animation: spin 1s infinite linear;
		-webkit-animation: spin2 1s infinite linear;
	}

	@-webkit-keyframes spin2 {
		from { -webkit-transform: rotate(0deg);}
		to { -webkit-transform: rotate(360deg);}
	}

	@keyframes spin {
		from { transform: scale(1) rotate(0deg);}
		to { transform: scale(1) rotate(360deg);}
	}
  </style>
  
	<div class="page-header">
		<h1>Build Status <small>from TeamCity</small></h1>
	</div>
	  
    <div class="container-fluid">
		<div class="row">
    		<div class="col-md-3" ng-repeat="tile in tiles | orderBy:'status' | filter:nameFilter">
        		<div ng-class="getPanelClass(tile)">
               <h5><span ng-class="getGlyphClass(tile)" aria-hidden="true"></span>   {{ tile.name | limitTo:32 }}{{tile.name.length > 32 ? '...' : ''}}   {{ tile.buildCount > 0 ? '(' + tile.buildCount + ')' : ''}} </h5>
               <p class="panel-body">{{ tile.project }}</p>
              </div>
        	</div>
    	</div>
    </div>
	<br/><br/><br/><br/><br/><br/>
  
  <nav class="navbar navbar-default navbar-fixed-bottom">
  <div class="container-fluid">
    <p class="navbar-text navbar-left">
		<input type="text" ng-model="nameFilter"/>  <span class="glyphicon glyphicon-filter" aria-hidden="true"></span>  
		<span class="glyphicon glyphicon-refresh glyphicon-refresh-animate" ng-hide="!statusVisible"></span>
	</p>
  </div>
</nav>
</div>

That’s about it!

I think I summarized how I feel about this project in the introduction. It looks cool and the MVC MVVM ViewModel vibe is a good one. The data binding is simple and works very well. All my gripes are with JavaScript as a language really. I want Linq-style methods and I want classes and objects with sensible scope. I want less syntactic nonsense, maybe the odd => every now and again. I think some or all of that is possible with libraries and new language specs… but I want it without any effort!

One thing I will say: that whole page is less than 300 lines of code. That’s pretty darned cool.

Feel free to download and use the app however you like – just bung in a link to this page!
