Moving Markets and Upskilling

I had absolutely no idea that AWS in its contemporary form came into existence in 2006, right around the time I started at Cisco and made my first venture into the network industry. I became aware of AWS as a platform a few years later but, like many (and with my lack of experience and insight), didn’t realise its impact or potential. Having worked in networking for 10 years now (Oct 2006-2016) and seen the dramatic change from old C6500 switching to modern SoC-based/merchant-silicon platforms, as well as the more recent influx of ‘SDx’ technologies, I can see clearly now that platforms like AWS and Azure are quickly becoming the de facto choice for future IT strategies, for both infrastructure and services.

With this in mind, it’s time to upshift the skill set and move on. I had originally planned to complete the VCDX-NV qualification, and I may still do so over the long term, but in the short term I’m going to focus on retaining my CCIE R&S until it becomes Emeritus and put significant effort into training for AWS, Azure and some more general architecture specialisations such as TOGAF.

2017 will be the year of Architecture for me.


Docker-Pi – DB and Visualisation

During my Docker trials and tribulations, I found two great tools for storing measurements and then displaying them.

InfluxDB

It’s not a complex relational database like MySQL – it’s a simple way of storing time-series measurements. I’ll later use it for storing temperature and humidity readings, but for now we’ll get it set up and drop in some resource stats from the Pi.

Thankfully, someone’s already compiled Influx for the Raspberry Pi and Docker..

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 -v /var/docker_data/influxdb:/data --name influxsrv sbiermann/rpi-influxdb

InfluxDB exposes two web ports:

  • 8083 – a web-based UI for basic DB administration and querying
  • 8086 – an HTTP API for posting/getting data

The default username and password for Influx are root/root.
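As a quick sanity check, you can poke the HTTP API with curl. This is just a sketch – it assumes the image ships a 0.9-or-later Influx build, that you’ve kept the root/root defaults, and that the Pi sits at 192.168.1.111 (the static address I assign later):

$ # list databases - confirms influx is up and answering on 8086
$ curl -G 'http://192.168.1.111:8086/query' -u root:root --data-urlencode 'q=SHOW DATABASES'
$ # write a single made-up point (the telegraf database will exist once telegraf has run)
$ curl -i -XPOST 'http://192.168.1.111:8086/write?db=telegraf' -u root:root --data-binary 'cpu_temp,host=black-pearl value=48.2'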

Getting System Stats

It’s useful to know what your Pi is up to and how the resource utilisation looks, especially if you start pushing some heavy scripts or apps to it.  Telegraf has been compiled for the Pi architecture here.  Don’t follow the instructions about creating a dedicated data folder.. let Docker do this for you.

Now – the default rpi-telegraf configuration tries to send data to Influx using localhost:8086; this will fail, as we’re not running Influx inside the same container. To fix it we need to do two things:

Firstly – add the ‘--link’ flag to the docker run CLI to link the influxdb container to the telegraf container:

  • --link influxsrv:influxsrv – Docker will create a DNS entry internally and map the influxsrv hostname to the dynamic IP of the Influx container

Secondly – modify the Telegraf configuration to point at the right Influx hostname. To do this, you’ll need to run Telegraf once, then find the data directory (docker inspect will confirm where it is) and edit the telegraf.conf file.

Run telegraf with the link:

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ docker run -ti -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf

And then kill the process.

Find the config file:

As we’ve been creating a dedicated store for our containers’ data, you should find the Telegraf config in /var/docker_data/telegraf.
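If you want to double-check where Docker has mounted the volume, a quick sketch – the Go template below just dumps the container’s mount points as JSON:

$ docker inspect -f '{{ json .Mounts }}' telegraf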

Edit the telegraf.conf file, specifically the influxdb output section:

[[outputs.influxdb]]
  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval.
  # urls = ["udp://localhost:8089"] # UDP endpoint example
  urls = ["http://influxsrv:8086"] # required
  ## The target database for metrics (telegraf will create it if not exists).
  database = "telegraf" # required

Now Telegraf can be run as a daemon container (remove the stopped interactive one with docker rm telegraf first, so the name is free):

HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -v /var/docker_data/telegraf:/data --link influxsrv:influxsrv --name telegraf apicht/rpi-telegraf
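To confirm metrics are actually landing in Influx, you can ask it what measurements Telegraf has created – again a sketch, assuming the 192.168.1.111 address and default credentials from earlier:

$ curl -G 'http://192.168.1.111:8086/query' -u root:root --data-urlencode 'db=telegraf' --data-urlencode 'q=SHOW MEASUREMENTS'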


Docker-Pi – Getting Started

Set up Raspberry Pi2 with HypriotOS

This gives us the basic Docker platform to start from and saves the aggro of trying to work it all out for ourselves. The guys over at Hypriot have put together a baseline OS for the ARM/Pi architecture that boots, DHCPs for network and then automatically starts the Docker components.
Download from here, and if you run a mac, follow these instructions.
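If you’d rather flash the SD card by hand on a mac, this minimal sketch works – but the disk device (/dev/disk2) and image filename here are examples only, so check diskutil list carefully before you dd anything:

$ # identify the SD card first, e.g. /dev/disk2
$ diskutil list
$ diskutil unmountDisk /dev/disk2
$ sudo dd if=hypriotos-rpi.img of=/dev/rdisk2 bs=1m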
The default SSH credentials for HypriotOS are pirate (username) / hypriot (password).

Assign static addressing

If you’re like me, you can’t remember half of the distribution variations for setting a static address. HypriotOS is based on Debian, so the following works fine: edit /etc/network/interfaces.d/eth0
auto eth0
iface eth0 inet static
 address 192.168.1.111/24
 gateway 192.168.1.1
 dns-nameservers 192.168.1.1
 dns-search jimleach.co.uk
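To apply the change, either reboot or bounce the interface – bearing in mind that if you’re SSH’d in over eth0, taking it down will drop your session:

$ sudo ifdown eth0 && sudo ifup eth0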

Getting Persistence

Docker instances are non-persistent, but for most of the things I want to use them for, I need some consistent storage I can present to them. Don’t do this if you want your containers to be portable! A better way would be to present some storage via NFS and map that instead – something a bit less host-centric.
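Purely as a sketch of that NFS route – the NAS address (192.168.1.20) and export path below are made up for illustration:

$ sudo apt-get install -y nfs-common
$ sudo mount -t nfs 192.168.1.20:/export/docker /var/docker_data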

HypriotOS/armv7: pirate@black-pearl in /var/docker_data
$ pwd
/var/docker_data

Create directories for:

  • dockerui
  • influxdb
  • telegraf
  • grafana

We’ll need these later as we build up our stack of containers..
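All four can be knocked out in one go:

$ sudo mkdir -p /var/docker_data/{dockerui,influxdb,telegraf,grafana}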

Docker-UI

Docker itself doesn’t have a web frontend – it’s all CLI driven – but Docker-UI is a containerised app that lets you see all the images and containers in your Docker engine and view the connectivity between them. Hypriot have pre-compiled the UI for their OS; you can grab it directly from the Docker Hub and run it manually or, by the power of Docker, just run it (without downloading first) and let Docker do the hard work:
HypriotOS/armv7: pirate@black-pearl in ~
$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v /var/docker_data/dockerui:/data --name dockerui hypriot/rpi-dockerui
Unable to find image 'hypriot/rpi-dockerui:latest' locally
latest: Pulling from hypriot/rpi-dockerui
f550508d4a51: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:6e245629d222e15e648bfc054b9eb24ac253b1f607d3dd513491dd9d5d272cfb
Status: Downloaded newer image for hypriot/rpi-dockerui:latest
34d0b3f00a25e847743fd04b59952d7870f2bebbd3b7524e009afd6d5fd0404c

By trying to run the image without first downloading, you prompt docker into pulling it automatically from the Docker Hub and then starting it.

  • -d – puts the instance into daemon mode
  • -p 9000:9000 – maps port 9000 on the localhost (the RPi) to port 9000 in the container
  • -v – maps our local storage to a volume/directory in the container (local:container)
  • --name – gives us a recognisable name to reference the container with

Now if you browse to the Pi’s address on port 9000 – you should get the Docker UI:

[Screenshot: the Docker UI]


Pi-Docker Failure

Arrrgh! Total disappointment this weekend. I’d spent a few days learning the basics of Docker and installed HypriotOS on my RPi2. I added a temperature/humidity sensor and wrote some scripts that bound together a set of containers running pigpiod, InfluxDB, Grafana and a Python script to grab the sensor data and push it to Influx. I had it all working perfectly, and was ready to start playing with Git and trying out some rapid script development, when a hasty and ungraceful shutdown lost the lot!
I don’t have any backups (it was early days), but I still have plenty of notes, so I’ll rebuild it this week and post up as I go.

Cisco Live 365

Cisco Live 365 is an unbelievable resource that’s totally free! For those who haven’t come across it before, it’s a library of all the presentations from the Cisco Live conferences of the last four years (starting with London 2012). For most of the sessions the presentation material is available for download in PDF form, and for many the actual session has been recorded and hosted as well.
The library of resources isn’t just there for those who couldn’t make the Cisco Live conferences in person; it provides excellent reference and training material. I just can’t recommend it enough, for those of both sales and technical backgrounds.
For those who’re technically focused, just search ‘Deep Dive’ and you’ll find a mass of material that’s not on cisco.com’s documentation/support pages, most of it written by Technical Marketing Engineers or those who just live and breathe specific technology stacks.

The Problem with NSX and ACI

Let’s face it, if there’s even the slightest whiff of someone in a business somewhere mentioning or even thinking about ‘SDN’, Cisco and VMware will be knocking on that door… with a sledgehammer!
The problem is, neither vendor’s product is perfect and as yet, they don’t talk to each other.
NSX doesn’t manage infrastructure. Period. It has not a care in the world for what is going on with the underlay. And you might say, “Well, that’s how it’s designed – to be underlay agnostic”. My problem with this is: if you’re doing a greenfield DC or a refresh, you still have to consider the physical infrastructure. How are you going to manage that infrastructure, monitor it and maintain it? NSX won’t make it go away. What NSX is good at is the logical stuff – it’s easy to understand the concepts of an edge firewall, distributed firewall, dLR and logical networks, and it’s easy to create the tenant spaces within those constructs.
ACI is infrastructure; it is not virtualisation. The super-cool thing about ACI is just how easy it is to deploy, configure and manage large-scale network infrastructure. It’s unbelievable how easy it is! Where it fails – not abysmally, just badly – is in delivering the infrastructure constructs into the hypervisor space. Cisco need to create a hypervisor component capable of everything a physical leaf does – sending traffic up to a physical leaf for processing and then returning it to the hypervisor is just clumsy. Even worse, assigning VLANs (whose limits we’re trying to get away from) to port-groups on the VDS and using that for [micro-]EPG separation is clunky.
Are these two competitors? Cisco and VMware believe so, but in reality they are solving different problems, expensively.
What’s the answer? Working together. Which is tricky – NSX has come a long way in terms of VXLAN/Logical Switching/dLR development, and of course ACI is doing the same at the physical layer in the leaf(s) (leaves?). I like NSX’s ability to provide a limited set of basic network functions (Edge, SSL/VPN/SLB) in an easy-to-consume way; what I don’t like is its total ignorance of physical infrastructure and physical workloads.

NSX Ninja

In the latter half of 2015 I was lucky enough to be invited to the NSX Ninja partner course at VMware in Staines. This is a course specifically designed to build the knowledge base of partner consultants and architect-types, enabling them to seek out and position NSX opportunities. With two weeks of training on the agenda and the assumption that you’ve already spent some time on a training course (ICM or Fast Track) and earned the VCP-NV, this course focuses first on low-level troubleshooting of components and packet flows, then on the design side, with the intention of preparing students for the VCIX-NV.



Cloud and the Reseller

Attending an event at Cisco today, I was blindsided by the lack of foresight shown by some ‘fellow’ VAR representatives.

When asked the question ‘Do you feel that cloud is a direct competitor or threat to your business?’, one plainly answered: ‘Yes – if the customer is buying cloud then they’re not buying hardware’.
I think most of us are in agreement that the days of being a pure box-seller (even if it’s a solution of boxes) are long gone. VARs should be looking to exploit cloud services and create offerings around them – incorporate them into solution designs; sell services such as DRaaS in the cloud, or global availability/localisation, or flexible compute for development. Sure, you might not get as big a slice of the Capex budget, but you’ll get ongoing Opex monies instead. If you’re smart, you’ll sell some form of managed service or cloud support too.

vLab Beast

Virtualisation of common workloads is the norm, and virtualised networks soon will be too. Back in 2009 I took my CCIE R&S, and at the time I had the full resources of being a Cisco employee behind me. Now, however, I find myself more of a lone wolf, and in order to keep up with the new trends of virtualised infrastructure and software-defined/developed-everything, I’ve decided to invest in a small virtual lab box.

Now – given that I live in a flat with no dedicated study/office space and a partner who (quite rightly) enjoys having an aesthetically pleasing home, I’ve come up with a box I can hide in a cupboard that’s relatively low-powered and almost silent:

  • Supermicro X10SDV-TLN4F 
  • onboard – Intel Xeon D-1540/1541 SoC (System on Chip) – 8c + HT
  • onboard – Dual 1GE, and Dual 10GE Base-T LAN
  • onboard – Dedicated IPMI interface
  • 64GB DDR4 RDIMM (16GB x 4)
  • 256GB M.2 PCI-e 3.0 x4 SSD (Samsung SM951)
  • Two 1TB Seagate Barracuda (ST1000DM00) slow disks
  • 16GB USB stick (to boot from)
  • All wrapped up in a Cooler Master Elite 120 Advanced

All the kit is on order and hopefully I’ll have some updates as to the build and performance as the weeks go by.


Full Stack Engineers

In an article titled “Places the CCIE can’t take me”, Ethan Banks recently wrote that network engineers increasingly need to be aware of ‘the complete stack’; in my eyes this means the compute, the storage, the virtualisation, the applications and the management.

I’ve been lucky in that I was introduced to VMware in 2002 – you know, before ESX and vSphere, when you still had to compile Workstation’s kernel modules from source. So when Cisco dropped the UCS bomb in 2009, setting up vSphere wasn’t alien to me – I was one of the few network engineers who could understand the interaction between all the components. It was a good time to be an engineer!

This holistic knowledge I’ve carried forward to today; I am a network engineer at heart and will always start there, but I talk to other engineers and customers about all the other components too: what are you running on the network, how is it hosted, what hypervisor or bare-metal OS are you using, what type of storage is it and how is it accessed, and a hundred other questions that lead me to some idea of what they’re trying to achieve.

In the last few years I’ve also started to ask questions around managing infrastructure: what do you monitor, and how? How do you control and back up configuration? These questions were spawned by exposure to financial customers, where availability, integrity and latency are high on the agenda. Infrastructure engineers have been scripting configuration tools for years, but now application developers are trying to do it as well, and they get called DevOps.

In the future, I think there’ll still be a need for the specialist engineers we have today – network engineers, storage engineers, compute guys etc. – but they’re all going to need to understand more about the wider picture than they do now. The scariest thing for me recently: talking to a DC network guy who didn’t know the damnedest thing about vSwitches, and in the same five minutes, a Nutanix engineer who didn’t know whether he needed a port-channel for his vSwitch uplinks or not.