sam.pikesley.org
My CV
- Kubernetes
- ConcourseCI
- GCP
- Ruby
- Python
- Git
- Linux
- FreeBSD
- AWS
- Arduino
- Terraform
- Open Data
- RabbitMQ
- Docker
- Salt
- Redis
- Postgres
- make
-
October 2021 - present
Senior Infrastructure Engineer at Cervest
Cervest is architecting a new era of climate intelligence.
-
February 2019 - September 2021
Senior Infrastructure Engineer at Demand Logic
Demand Logic is a software tool that provides actionable intelligence to property managers and building contractors. It is intended to deliver quantifiable benefits in a short space of time and to make the management of buildings easier.
I joined Demand Logic at the beginning of a project to shift their infrastructure away from Rackspace Cloud and into Google Cloud. As part of this, I introduced Terraform, which now describes every GCP resource at DL in a repeatable way. I was also responsible for introducing Concourse CI (to replace Jenkins), and built extensive Python tooling around it to, among other things, move the platform towards Continuous Delivery.
-
March 2018 - February 2019
Backend Engineer at OpenCorporates
OpenCorporates is the largest open database of companies and company data in the world, covering in excess of 100 million companies across a large number of jurisdictions.
I primarily worked as a Ruby dev at OpenCorporates, maintaining their Rails apps, although I also turned my hand to occasional bits of SysAdmin-ish work when required. I was also heavily involved in a project to map corporate networks, initially using Neo4j and later TigerGraph.
I was eventually lured away by the prospect of getting back into more straight-up Ops work.
-
January 2017 - March 2018
Operations Engineer at Moo
Moo is a print-on-demand company printing business cards, postcards, stickers and other material.
I joined Moo at the beginning of a massive project to migrate from hosted tin to AWS. This involved jumping in at the deep-end with Terraform and Ansible, and was pretty much all I worked on until July 2017, when we pulled the trigger on a remarkably smooth cut-over.
The rest of the summer was spent cleaning up some things we'd kicked down the road during the migration project, and from September I worked on a project to build a Kubernetes-based self-service platform.
-
January 2013 - December 2016
Head Of Robots at The Open Data Institute
The ODI connects, equips and inspires people around the world to innovate with data.
My role at the ODI encompassed many things - initially, leading the building and maintenance of all the test-driven Chef infrastructure for the ODI's first round of core tools (primarily Open Data Certificates and CSV Lint), and our adoption of GDS's CMS suite.
Subsequent highlights include:
- Gaining a comprehensive grasp of HTTP and REST (mainly from working under the guidance of Jeni Tennison)
- Learning to love TDD, primarily through heavy exposure to RSpec and Cucumber
- Embracing the GitHub -> Travis -> Heroku continuous deployment chain
- Building front-end apps (e.g. the TfL Train Data Demonstrator) and so getting a handle on Bootstrap, JavaScript, Sass and D3
- Providing technical mentoring for one of our PhD students from the WDAqua programme
- Occasional speaking engagements, primarily on the subject of Working In The Open
Everything we built at the ODI was developed in the open, and all of our code is up on GitHub.
-
June 2011 - January 2013
DevOps Engineer at AMEE UK
AMEE is a start-up whose mission is to measure the carbon footprint of everything on the planet.
Working closely with AMEE's relatively small team of Java and Ruby devs, I was responsible for:
- Putting in a huge amount of Chef plumbing (running off AMEE's own Chef server) - AMEE's config management previously consisted of a handful of bash scripts
- Deploying and configuring Splunk
- Migrating several of AMEE's legacy apps from leased iron in a DC to AWS
alongside the usual SysAdmin work of backup-and-restore, capacity planning, etc.
-
August 2009 - June 2011
Systems Administrator at VisualDNA
VisualDNA is a dynamic startup based in Soho. The company generates profiles for users through the use of visual quizzes, working with clients including the LA Times, match.com and the Daily Mirror.
My role at VisualDNA was pretty much DevOps before I knew that DevOps was even a thing – as a busy startup with diverse client projects there were often multiple deploys per day, meaning I had to work very closely with the developers to make sure we were all on the same page.
This close working relationship was particularly fruitful during the gradual transfer of many of VisualDNA’s core services from the legacy platform (a couple of racks of Linux boxes in a London datacentre) to Amazon Web Services. The back-and-forth between myself and the development team was invaluable as we iterated through various combinations of EC2 Instance Types to find the setup that best fitted our requirements.
The transfer to AWS accelerated rapidly during 2011; the setup I left them with included:
- A 16-node Cassandra cluster
- A 6-node Hadoop cluster
- Several groups of Elastic-Load-Balanced web servers
This platform is still in production use, including an all-new Quiz Engine which as far as I know is still performing extremely well.
-
August 2003 - August 2009
Systems Administrator at Rex Features
Rex Features is Britain’s leading independent photographic press agency and picture library. Rex supplies a daily service of news, celebrity, features, and stock photos to all national newspapers, magazines, TV, web and other media in the UK and in more than 30 countries worldwide.
My work at Rex covered the usual gamut of Sysadmin tasks, including: backup and recovery, webserver administration, DNS management, plenty of scripting (mostly in bash), patching servers, and writing and maintaining documentation. There was also some SQL Server admin, and a certain amount of desktop support – Rex is a company of ~80 employees, supported by an IT department of four. All new server hardware passed through my hands for installation and configuration.
When I joined Rex, the IT department consisted of two very busy people. The IT infrastructure had been growing rapidly, deployment had happened on a seemingly ad-hoc basis, and documentation was fairly sparse. My initial tasks included:
- Installing a CVS server (yes, this was 2003) and gathering code and scripts into it
- Rolling out the Amanda backup system and setting up a proper backup and recovery scheme
- Getting the RT ticketing system up and running (we later moved to Jira)
- Beginning the process of documenting everything in Twiki (we subsequently migrated to Mediawiki)
Subsequently, I was directly involved in:
- Setting up the Nagios network monitoring system
- Migrating the internal mail from Novell to Exchange, and later outsourcing this function to Cobweb’s hosted Exchange platform
- Configuring VPNs between Rex’s headquarters and various locations – initially using isakmpd on OpenBSD, and latterly on a Watchguard Firebox
- Overseeing the transfer of Rex’s image data – 5 terabytes of jpegs at the time of writing – from a cluster based on a number of FreeBSD servers to a set of Network Appliance 3050 filers
- Configuring and deploying Alteon load-balancers for the Rex website, which gets ~1.5 million hits and shifts ~13 gigs of data a day
- Migrating Rex’s code from CVS to Subversion
- Specifying and documenting a “standard Rex server install” – except for a handful of Windows servers, the whole of Rex’s server room and colo are running FreeBSD, so the standard install is a set of common ports and a number of scripts
Rex went on to acquire another picture agency in Los Angeles, which brought about a project to integrate their image archive into Rex’s, modifying the server software to enable them to use the Rex client application, and deploying a new set of servers to support all of this. The final setup consisted of a redundant pair of Microsoft SQL servers, a set of NetApp filers and shelves, and a group of FreeBSD servers (running apache and mod_perl) serving up three websites and a range of internal webservices, all sitting behind a pair of Alteon load-balancers.
-
April 2000 - July 2003
Systems Administrator at Empower Interactive
Empower Interactive was a telecoms software startup, founded in 2000 in the City of London.
Having joined Empower at its inception, my initial responsibilities were to design and implement the IT infrastructure necessary to support the operations of the fledgling business. This included: network planning; server acquisition and installation (various internal servers, external mail server, firewall, etc); deploying a backup and recovery scheme; managing the website; and a great deal of user education. I also got involved in many other aspects of the business – this was a tiny start-up, so I found myself doing testing, writing user manuals for Empower’s products, and even doing a little Java.
I was solely responsible for supporting this infrastructure for the first year, until the business expanded to the extent that further IT staff were required. I was asked to set up an IT department, and recruited another Sysadmin who specialised in Windows; the team continued to expand over the following years. I gained experience with Solaris 8 and HP-UX, and qualified as an Oracle administrator in order to install some of Empower’s products onto carrier-grade hardware; I also assisted with deploying the hardware into telcos.
Empower unfortunately ceased trading in November 2006.