Pirate-Jim Here We Come

Amongst the diving and working it seems this year I’ll also be partaking in some sailing too..

  • May – Quick whiz around the Solent with Commodore Yachting to reacquaint myself with a yacht
  • June – Sailing around Norway from Bergen to Aalesund
  • November – All being well, I may be sailing the Atlantic Rally for Cruisers from Las Palmas (Spain) to St Lucia

Simple AWS Diagrams

So my most recent post included some Visio-style diagrams but not done in Visio.. try draw.io out, it’s pretty good. Basic, but good.

My First VPC

[Diagram: my first VPC]

My first attempt at building some basic infrastructure constructs in AWS.. Keep in mind that this is all part of the learning curve, so it in no way represents a best-practice deployment!

The Building Blocks

  • Internet Gateway
  • Single VPC in the London Region (eu-west-2)
  • Two subnets, one in each availability zone (eu-west-2a and eu-west-2b) for Web Servers
  • Two subnets, one in each availability zone for Bastion hosts
  • Two Launch Configurations – one for bastion hosts, one for webservers
  • One Auto Scaling Group for Web Servers – min instances 2, linked to webserver Launch Configuration
  • One Auto Scaling Group for Bastion host – min instances 1, linked to Bastion Launch Configuration
  • Elastic Load Balancer – inbound HTTP/HTTPS, connected to the Web Server auto-scaling group
  • One Security Group for Web Servers – enables inbound HTTP/HTTPS from anywhere, and SSH from the Bastion subnets
  • One Security Group for Bastion hosts – enables inbound SSH from anywhere
  • One IAM Role – to enable Read-Only access to S3
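
Most of this was clicked together in the console, but to give a rough idea of the plumbing involved, the equivalent CLI calls look something like this – a sketch only, the CIDR blocks and IDs are placeholders rather than my actual lab values:

 # Illustrative only – substitute the real IDs returned by each call
 aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region eu-west-2
 aws ec2 create-internet-gateway --region eu-west-2
 aws ec2 attach-internet-gateway --internet-gateway-id igw-12345678 --vpc-id vpc-12345678 --region eu-west-2
 aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.1.0/24 --availability-zone eu-west-2a
 aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.2.0/24 --availability-zone eu-west-2b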

Web Server Launch Configuration

Each web server is built using a Launch configuration which has a bootstrap script to do the following:

  • Update standard AMI packages
  • Install Apache and PHP
  • Start Apache
  • Set Apache to start on bootup
  • Copy custom index.php from S3  (this is why it needs an IAM role to access S3!)
  • Copy health-check HTML from S3
  • Make index.php executable

The index.php is a basic “Hello World” which also shows the internal IP of the host serving it.. this way when tweaking with load-balancers I can tell which instance has served the request.  The two pages are stored in an S3 bucket and the IAM role applied to the Launch Configuration allows the instances to copy the files down to the web server.

 #! /bin/bash
 yum update -y
 yum install httpd php -y
 service httpd start
 chkconfig httpd on
 aws s3 --region eu-west-2 cp s3://e02-lab-scripts/index.php /var/www/html/
 aws s3 --region eu-west-2 cp s3://e02-lab-scripts/healthcheck.html /var/www/html/
 chmod +x /var/www/html/index.php
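
For what it's worth, the same sort of Launch Configuration can be created from the CLI with the script above passed in as user-data – again just a sketch, with placeholder AMI, security group and instance profile names:

 aws autoscaling create-launch-configuration \
   --launch-configuration-name web-lc \
   --image-id ami-12345678 \
   --instance-type t2.micro \
   --iam-instance-profile s3-readonly \
   --security-groups sg-12345678 \
   --user-data file://bootstrap.sh   # the bootstrap script above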


This stuff is bloody complicated – but – certainly not impossible.  Once you know what all the components are, how they work and interact with each other, it’s easy to start building services and constructs based on them.

And it all started so well

Well – my first foray into AWS was going so well.. I’ve been using A Cloud Guru for my AWS Architect Associate training and following each of the labs and doing a bit of playing in the background myself.  Since getting through the initial S3 and EC2 videos, I brought up a Bitnami WordPress instance to host this site and did the relevant Route53 transfer and what have you.  I also moved our wedding website to an S3-Static Site bucket as post-wedding it didn’t need PHP or anything fancy to run.

All seemed well and good.

Then I decided that since I was now hosting ‘live’ sites from my AWS account, I didn’t really want to be meddling with labs and accidentally destroy something.. So I created a new account and went about building a little lab environment: new VPC, subnets for web services, subnets for bastion hosts, scaling groups.. the works!  I thought I was doing so well until I tried connecting to the first bastion instance I’d created and got a dreaded SSH time-out!

Cut to the chase: If you don’t want to read the rest, just remember this: Don’t forget a default route in the routing table – it doesn’t add itself!

The idiot network engineer

“User error” – I thought.  Obviously.  So I walked through my configuration:

  • ElasticIP associated.. check.
  • Instance attached to Subnet.. duh, check.
  • Subnet associated in routing table.. check.
  • VPC associated with Internet Gateway.. check.
  • Security group has a sensible ruleset (SSH and HTTP/HTTPS inbound from anywhere).. check.

So I still couldn’t see what the issue was.  Now, at this point, I hadn’t done any training on CloudWatch, but I noticed in the VPC configuration I could do something with Flow Logs.  So, after a bit of playing around and following the on-screen “You’re trying to do something you’re not ready for yet“-type instructions, I got flow information for the VPC going into a log and I could drill down on a per-instance basis.

Now I could see what was going on.. Nothing wrong with the rules, my pings and SSH attempts clearly had ‘ACCEPT’ next to them, so the inbound path was fine.

Then it dawned on me… stuff is getting in, but how is it getting out – or more precisely – does it know HOW to get out?  So back to look at the routing table I went, with the sudden realisation that there was no default route!  Doh!  And I’m supposed to be a network engineer!

So what’s the lesson here: an AWS Internet Gateway isn’t given the default route by default.. which makes sense, as you might want to use a vASA or something instead.
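
For reference, the missing piece is a single route – 0.0.0.0/0 pointing at the Internet Gateway – which from the CLI would look something like this (the IDs are placeholders):

 aws ec2 create-route --route-table-id rtb-12345678 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-12345678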

What good came out of this? I learnt how to use CloudWatch.

The House has Moved

That’s right.. in bowing to the mentality that you should practise what you preach, the 23,333 blog has moved home to AWS!

More in the next post on how this was accomplished…

Moving Markets and upskilling

I had absolutely no idea that AWS in its contemporary form came into existence in 2006, right around the time I started at Cisco and made my first venture into the network industry. I was aware of AWS as a platform a few years later but, like many (and with my lack of experience and insight), didn’t realise its impact or potential.  Having worked in networking for 10 years now (Oct 2006-2016) and seen the dramatic change from old C6500 switching to modern SoC-based/merchant-silicon platforms, as well as the more recent influx of ‘SDx’ technologies, I can see clearly now that platforms like AWS and Azure are quickly becoming the de facto choice for future IT strategies, for both infrastructure and services.

With this in mind, it’s time to upshift the skill set and move on.  I had originally planned to complete the VCDX-NV qualification and I may well still do this over the long term, but in the short term I’m going to focus on retaining my CCIE R&S until it becomes Emeritus and putting significant effort into training for AWS, Azure and some more general architecture specialisations such as TOGAF.

2017 will be the year of Architecture for me.

Docker-Pi – DB and Visualisation

During my docker trials and tribulations, I found two great tools for storing measurements and then displaying them..


InfluxDB

InfluxDB isn’t a complex relational database like MySQL – it’s a simple way of storing time-series measurements.  I’ll later use it for storing temperature and humidity measurements, but for now we’ll get it set up and drop in some resource stats from the Pi.

Thankfully, someone’s already compiled Influx for the Raspberry Pi and Docker..

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -v /var/docker_data/influxdb:/data --name influxsrv sbiermann/rpi-influxdb

InfluxDB exposes two web ports:

  • 8083 – a web-based UI for basic DB administration and querying
  • 8086 – an HTTP API for posting/getting data

The default username and password for influx is root/root.
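
As a quick sanity check of the API – this assumes the image is running a 0.9+/1.x-style InfluxDB, and the database name and measurement below are just examples – you can create a database and write a point with curl:

 # Create a database and write a single point over the HTTP API
 curl -G 'http://<pi-ip>:8086/query' -u root:root --data-urlencode 'q=CREATE DATABASE pidata'
 curl -i -XPOST 'http://<pi-ip>:8086/write?db=pidata' -u root:root --data-binary 'cpu_temp,host=black-pearl value=47.2'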

Getting System Stats

It’s useful to know what your Pi is up to and how the resource utilisation looks, especially if you start pushing some heavy scripts or apps to it.  Telegraf has been compiled for the Pi architecture here.  Don’t follow the instructions about creating a dedicated data folder.. let Docker do this for you.

Now – the default rpi-telegraf configuration tries to send data to influx using localhost:8086 – this will fail as we’re not running influx inside the same container.  To fix this we need to do two things..

Firstly – add the '--link' flag to the docker run CLI to link the influxdb container to the telegraf container.

  • --link influxsrv:influxsrv – docker will create a DNS entry internally and map the influxsrv hostname to the dynamic IP of the influx container

Secondly – modify the telegraf configuration to point to the right influx hostname. To do this, you’ll need to run telegraf once and then use docker inspect to find the data directory and edit the telegraf.conf file.

Run telegraf with the link:

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ docker run -ti -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf

And then kill the process

Find the config file:

As we’ve been creating a dedicated store for our container’s data, you should find the telegraf data in /var/docker_data/telegraf
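
If you want to confirm exactly where Docker has mounted the container's /data volume, docker inspect will show the mounts:

 docker inspect -f '{{ json .Mounts }}' telegraf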

Edit the telegraf.conf file and update the influxdb output section:

[[outputs.influxdb]]
  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval.
  # urls = ["udp://localhost:8089"] # UDP endpoint example
  urls = ["http://influxsrv:8086"] # required
  ## The target database for metrics (telegraf will create it if not exists).
  database = "telegraf" # required

Now telegraf can be run as a daemon container:

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf
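
To check telegraf is actually writing data, query the telegraf database over the same HTTP API (again assuming a 1.x-style /query endpoint and the default root/root credentials):

 curl -G 'http://<pi-ip>:8086/query' -u root:root --data-urlencode 'db=telegraf' --data-urlencode 'q=SHOW MEASUREMENTS'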

Docker-Pi – Getting Started

Setup Raspberry Pi2 with HypriotOS

This gives us the basic docker platform to start from.. saves the aggro of trying to work it all out for ourselves.  The guys over at Hypriot have put together a baseline OS for the ARM/Pi architecture that boots, DHCPs for its network config and then automatically starts the Docker components.
Download from here, and if you run a Mac, follow these instructions.
The default SSH credentials for HypriotOS are pirate (username) / hypriot (password).
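
Once it’s booted and picked up a DHCP address you can SSH straight in with those defaults – black-pearl is the image’s default hostname, so if mDNS/.local resolution works on your network:

 ssh pirate@black-pearl.local   # or use the IP your DHCP server handed out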

Assign static addressing

If you’re like me, you can’t remember half of the distribution variations for setting a static address.. HypriotOS is based on Debian – so the following works fine (the addresses below are examples – use your own LAN details): edit /etc/network/interfaces (or drop a file in /etc/network/interfaces.d/):
auto eth0
iface eth0 inet static
 address 192.168.1.50
 netmask 255.255.255.0
 gateway 192.168.1.1
 dns-search jimleach.co.uk

Getting Persistence

Docker containers are non-persistent, but for most of the things I want to use them for, I need some consistent storage I can present to them.  Don’t do this if you want your containers to be portable! A better way would be to present some storage via NFS and map that instead.. something a bit less host-centric.
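
As a rough sketch of that alternative – the NAS hostname and export path here are made up, adjust for your own NFS server:

 # Mount an NFS export on the Pi and bind that into the container instead
 sudo apt-get install -y nfs-common
 sudo mkdir -p /mnt/docker_data
 sudo mount -t nfs nas.local:/export/docker_data /mnt/docker_data
 docker run -d -v /mnt/docker_data/influxdb:/data --name influxsrv sbiermann/rpi-influxdb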

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ pwd
/var/docker_data

Create directories for:

  • dockerui
  • influxdb
  • telegraf
  • grafana

We’ll need these later as we build up our stack of containers..
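
Creating them all in one go (assuming /var/docker_data as the parent directory, as used throughout):

 sudo mkdir -p /var/docker_data/{dockerui,influxdb,telegraf,grafana}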


Docker-UI

Docker itself doesn’t have a web front-end – it’s all CLI driven – but Docker-UI is a containerised app that lets you see all the images and containers in your docker engine and view the connectivity between them.  Hypriot have pre-compiled the UI for their OS, so you can grab it directly from the Docker Hub and run it manually or, by the power of docker, just run it (without pulling first) and let docker do the hard work:
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /var/docker_data/dockerui:/data --name dockerui hypriot/rpi-dockerui
Unable to find image 'hypriot/rpi-dockerui:latest' locally
latest: Pulling from hypriot/rpi-dockerui
f550508d4a51: Pull complete
a3ed95caeb02: Pull complete
Status: Downloaded newer image for hypriot/rpi-dockerui:latest

By trying to run the image without first downloading, you prompt docker into pulling it automatically from the Docker Hub and then starting it.

  • -d – puts the container into detached (daemon) mode
  • -p 9000:9000 – maps port 9000 on the localhost (the RPi) to port 9000 in the container
  • -v – maps our local storage to a volume/directory in the container (local:container)
  • --name – gives us a recognisable name to reference the container with

Now if you browse to the Pi’s address on port 9000 – you should get the Docker UI:


Pi-Docker Failure

Arrrgh!  Total disappointment this weekend.  I’d spent a few days learning the basics of docker and installed HypriotOS to my RPi2.  I added a temperature/humidity sensor and wrote some scripts that bound a set of containers running pigpiod, influxdb, grafana and a python script to grab the sensor data and push it to influx. I had it all working perfectly and was ready to start playing with Git and trying out some rapid script development when a hasty and ungraceful shutdown lost the lot!
Even though I didn’t have any backups (it was early days), I still have plenty of notes, so I’ll rebuild it this week and post up as I go.

Cisco Live 365

Cisco Live 365 is an unbelievable resource that’s totally free!  For those who haven’t come across it before – it’s a library of all the presentations from the Cisco Live conferences from the last four years (starting London 2012).  For most of the sessions, the presentation material is available for download in PDF form and for many the actual session has been recorded and hosted as well.
The library of resources isn’t just there for those who couldn’t make the Cisco Live conferences in person; it also provides excellent reference and training material. I just can’t recommend it enough for those from both Sales and Technical backgrounds.
For those who are technically focused.. just search ‘Deep Dive’ and you’ll find a mass of material that’s not on cisco.com’s documentation/support pages, most of it written by Technical Marketing Engineers or those who just live and breathe specific technology stacks.