SysAdmin, DevOps Engineer, Head Of Robots, Operations Engineer, Backend Engineer, Infrastructure Engineer. I like to make things identical, repeatable and disposable.
I joined Demand Logic at the beginning of a project to shift their infrastructure away from Rackspace Cloud and into Google Cloud. As part of this, I introduced Terraform, which now describes every GCP resource at DL in a repeatable way.
I was also responsible for introducing Concourse CI (to replace Jenkins) and have built extensive (Python) tooling around this to, among other things, bring the platform towards Continuous Delivery.
I primarily worked as a Ruby dev at OpenCorporates, maintaining their Rails apps, although I also turned my hand to occasional bits of SysAdmin-ish work when required. I was also heavily involved in a project to map corporate networks, initially using Neo4j and later TigerGraph.
I was eventually lured away by the prospect of getting back into more straight-up Ops work.
I joined Moo at the beginning of a massive project to migrate from hosted tin to AWS. This involved jumping in at the deep end with Terraform and Ansible, and was pretty much all I worked on until July 2017, when we pulled the trigger on a remarkably smooth cut-over.
The rest of the summer was spent cleaning up some things we’d kicked down the road during the migration project, and since September I’ve been on a project to build a Kubernetes-based self-service platform.
My role at the ODI encompassed many things - initially, leading on the building and maintenance of all of the test-driven Chef infrastructure for the ODI’s first round of core tools, primarily Open Data Certificates and CSV Lint, and our adoption of GDS’s CMS suite.
Subsequent highlights include:
Everything we ever built at the ODI was built in the open, and all of our code is up on GitHub.
Working closely with AMEE’s relatively small team of Java and Ruby devs, I was responsible for:
alongside the usual SysAdmin work of backup-and-restore, capacity planning, etc.
My role at VisualDNA was pretty much DevOps before I knew that DevOps was even a thing – as a busy startup with diverse client projects there were often multiple deploys per day, meaning I had to work very closely with the developers to make sure we were all on the same page.
This close working relationship was particularly fruitful during the gradual transfer of many of VisualDNA’s core services from the legacy platform (a couple of racks of Linux boxes in a London datacentre) to Amazon Web Services. The back-and-forth between myself and the development team was invaluable as we iterated through various combinations of EC2 Instance Types to find the setup that best fitted our requirements.
The transfer to AWS accelerated rapidly during 2011; the setup I left them with included:
This platform is still in production use, including an all-new Quiz Engine which as far as I know is still performing extremely well.
During my time at VisualDNA I also:
My work at Rex covered the usual gamut of Sysadmin tasks, including: backup and recovery, webserver administration, DNS management, plenty of scripting (mostly in bash), patching servers, and writing and maintaining documentation. There was also some SQL Server admin, and a certain amount of desktop support – Rex is a company of ~80 employees, supported by an IT department of four. All new server hardware passed through my hands for installation and configuration.
When I joined Rex, the IT department consisted of two very busy people. The IT infrastructure had been growing rapidly, deployment had happened on a seemingly ad-hoc basis, and documentation was fairly sparse. My initial tasks included:
Subsequently, I was directly involved in:
Rex went on to acquire another picture agency in Los Angeles, which brought about a project to integrate their image archive into Rex’s, modifying the server software to enable them to use the Rex client application, and deploying a new set of servers to support all of this. The final setup consisted of a redundant pair of Microsoft SQL servers, a set of NetApp filers and shelves, and a group of FreeBSD servers (running Apache and mod_perl) serving up three websites and a range of internal webservices, all sitting behind a pair of Alteon load-balancers.
Having joined Empower at its inception, my initial responsibilities were to design and implement the IT infrastructure necessary to support the operations of the fledgling business. This included: network planning; server acquisition and installation (various internal servers, external mail server, firewall, etc); deploying a backup and recovery scheme; managing the website; and a great deal of user education. I also got involved in many other aspects of the business – this was a tiny start-up, so I found myself doing testing, writing user manuals for Empower’s products, and even doing a little Java.
I was solely responsible for supporting this infrastructure for the first year, until the business expanded to the extent that further IT staff were required. I was asked to set up an IT department, and recruited another Sysadmin who specialised in Windows; the team continued to expand over the following years. I gained experience with Solaris 8 and HP-UX, and qualified as an Oracle administrator in order to install some of Empower’s products onto carrier-grade hardware; I also assisted with deploying the hardware into telcos.
Empower unfortunately ceased trading in November 2006.
The canonical version of this document is at sam.pikesley.org/cv/. Accept no substitutes