List of my blogs/articles/videos


These are my blogs


The difference between Open and open source –

On LinkedIn

Dissecting openstack contribution statistics –

What has building a TV cabinet got to do with #OpenSource software?


Managing OpenStack with Open Source Tools


How telcoms can escape vendor lock-in with open source software

On Intel blog site

How You Can Build a Cost Effective Telecommunications Cloud Using Open Source Software on Intel Architecture

Intel® Network Builders

On Red Hat blog sites

Telco and NFV






YouTube videos

How to deploy OpenStack using the RedHat OpenStack Platform director Graphical User Interface in 3 videos:

  1. Prerequisites for installing Red Hat OpenStack Platform – video 1 of 3
  2. Registering nodes with Red Hat OpenStack Platform Director – video 2 of 3
  3. Deploying OpenStack using Red Hat OpenStack Platform Director – video 3 of 3

Networking primer for NFV (Network Functions Virtualization)

What comprises a network?

This is the start of a series of blog posts focusing on Network Functions Virtualization (NFV) and the transformation of the telecommunications industry. This blog post can also be heard at

A large network comprises an IP backbone plus Enterprise, SMB, residential Wi-Fi and public cellular networks.




  • IP backbone – the core / central / backbone network
  • Distribution networks – connect access networks to the core
  • Access networks – provide access to computers, servers, mobile devices.

Networks comprise:

  1. Ethernet – Cables used to connect servers and workstations, buildings and campuses, at speeds up to 100 Gbps today, with 400 Gbps on the way.
  2. Wide Area Networks – Typically T1 and ATM, though in the future these will likely be supplanted by Ethernet.
  3. Wi-Fi – Wireless networks used in the home, in public spaces and in the Enterprise.
  4. Cellular – Four generations so far (1G, 2G, 3G, 4G); each generation becomes faster and adds encryption and channels. One day 5G will add more intelligence: Quality of Service, adaptive network reconfiguration, etc.
  5. Unified communications – One platform that unifies web, video, audio, telephony, instant messaging, etc.

There are essentially two types of network traffic:

  • Elastic traffic adapts to changes in delay and throughput in the network – for example FTP, HTTP and SSH. Though painful, I can wait for a webpage to load over HTTP or for a file to download over FTP. TCP is an example of a protocol that controls congestion in the network.
  • Inelastic traffic does not adapt to changes in delay and throughput – for example voice, media, video and airline pilot simulation. Anyone who has watched a choppy or blurry video online, or heard the sound break up and words get skipped in an audio broadcast or a voice over IP phone call, has experienced inelastic traffic.

Reliable network traffic communication needs:

  • A minimum throughput
  • Minimal delay – for example, the pricing of stocks on an exchange cannot be delayed; traders need real-time communication

Delay can be described by:

  • delay jitter – the magnitude of delay variation. Incoming packets are buffered to compensate for Internet delays and then steadily streamed to the application expecting the traffic.
  • packet loss – real-time applications generally expect no traffic loss. Again, think of those lost words in an Internet phone call, or imagine data lost by a financial trading network.
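To make jitter concrete, here is a small illustrative sketch (mine, not from the original post) that measures jitter as the variation in the gaps between consecutive packet arrivals:

```python
def inter_arrival_jitter(arrival_times):
    """Return the spread (max minus min) of inter-arrival gaps,
    in the same units as the input timestamps."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return max(gaps) - min(gaps)

# Packets sent every 20 ms; variable network delay makes arrivals uneven.
arrivals_ms = [0, 21, 39, 62, 80]
print(inter_arrival_jitter(arrivals_ms))  # gaps are 21, 18, 23, 18 -> jitter of 5 ms
```

A receive buffer smooths this variation out before handing packets to the application, at the cost of added end-to-end delay.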

Quality of Service

How good/bad/average is the service I am providing? This is the perspective of the service provider.

  1. What is the throughput? – bytes per second for a logical traffic flow
  2. Is there any delay? – average or maximum delay (latency)
  3. What is the packet jitter? – maximum allowable jitter
  4. What is the error rate? – what fraction of bits are delivered in error
  5. Packet loss – what fraction of packets are lost?
  6. Priority – how do I prioritize flows of traffic?
  7. Availability – what % of the time is my service available? Over 365 days, was my service up for 363?
  8. Security – is my service secure? What levels of security do I offer?
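The availability question above is simple arithmetic; a quick sketch (the numbers are the illustrative ones from the list, not a real SLA):

```python
def availability_pct(up_days, total_days):
    """Availability as the percentage of time the service was reachable."""
    return 100.0 * up_days / total_days

# A service that was up 363 days out of 365:
print(round(availability_pct(363, 365), 2))  # -> 99.45
```

Two days of downtime per year already puts a service below the "three nines" (99.9%) availability that many providers advertise.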

Quality of Experience

What is the user experience? This is the perspective of the end user of the service.

  1. Perceptual – Sensory experience of the user. How the user perceives the video: sharpness, brightness, flicker and the audio: clarity and timbre.
  2. Psychological – What is the ease of use? Is the service a joy to use? Is it useful? What is the perceived quality and satisfaction? Is the user annoyed by poor service quality?
  3. Interactive – If the application requires interaction, for example the user speaking, how responsive, natural, communicative and expressive is the user experience?


Creating containers in Linux – post 2 of several

Linux containers

In blog post 1, I defined containers and how they are distinguished from virtualization.
This post focuses on Linux containers.
LXC is an acronym for LinuX Containers. LXC is not a container itself, but a set of tools that interfaces with kernel namespaces, cgroups, etc. to create and manage containers.
There are two types of containers: privileged, where lxc commands run as root, and unprivileged, where lxc commands run as a non-root user.

Container concepts

  1. cgroups
  2. Namespaces
  3. Templates
  4. Networking


cgroups, Linux control groups, control how resources are used in a container: they limit memory, disk and network I/O, guarantee CPU time, can lock a container to a specific CPU, and provide resource accounting.


Namespaces allow containers to be isolated from one another. Groups of processes are separated and cannot “see” resources in other groups. There are several namespaces:

  1. The PID namespace isolates process identifiers and their details.
  2. Network namespace isolates physical or virtual NICs, firewall rules, routing tables etc.
  3. “UTS” namespace allows different hostnames
  4. The Mount namespace allows a different file system layout, or makes specific mount points read-only.
  5. User namespace isolates user IDs between namespaces.
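On a Linux host you can see which namespaces a process belongs to as symlinks under /proc/&lt;pid&gt;/ns; two processes in the same namespace report the same identifier, while a container's processes report different ones from the host. A small, Linux-only sketch (the helper name is mine):

```python
import os

def namespace_id(ns_name, pid="self"):
    """Return the namespace identifier (e.g. 'uts:[4026531838]'),
    or None if the namespace file is not available (e.g. not Linux)."""
    try:
        return os.readlink(f"/proc/{pid}/ns/{ns_name}")
    except OSError:
        return None

# Print the current process's namespace ids; inside a container,
# several of these would differ from the host's values.
for ns in ("pid", "net", "uts", "mnt", "user"):
    print(ns, "->", namespace_id(ns))
```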


Templates define:

  • root filesystem
  • selinux on or off
  • hostname
  • networking
  • root password
  • services – define some; remove others that are superfluous
  • etc

Templates are available per Linux distribution and those shipped with lxc are found here: /usr/share/lxc/templates.

Creating and networking containers

jonathan@ubuntu:~$ ls /usr/share/lxc/templates
lxc-alpine lxc-archlinux lxc-centos lxc-debian lxc-fedora lxc-openmandriva lxc-oracle lxc-sshd lxc-ubuntu-cloud lxc-altlinux lxc-busybox lxc-cirros lxc-download lxc-gentoo lxc-opensuse lxc-plamo lxc-ubuntu

Let’s see cirros (or any other template) details.

more /usr/share/lxc/templates/lxc-cirros

This command creates a cirros container

sudo lxc-create --template /usr/share/lxc/templates/lxc-cirros --name c2

Another alternative is to download a template from a server.

sudo lxc-create --template download --name c1
This provides a list of images to choose from and I chose:
  • Distribution: ubuntu
  • Release: trusty
  • Architecture: amd64

Now I have two containers, c1 and c2

$ sudo lxc-ls -f
c1    STOPPED  -     -     NO         
c2    STOPPED  -     -     NO

Where are the containers c1 and c2 stored on the host?

$ sudo ls -alt /var/lib/lxc
drwx------  4 root root 4096 Jul  1 10:59 .
drwxrwx---  3 root root 4096 Jul  1 10:58 c2
drwxrwx---  3 root root 4096 Jul  1 10:20 c1
drwxr-xr-x 41 root root 4096 Jul  1 09:56 ..


LXC creates a private network namespace for each container, which includes a layer 2 networking stack. A container can exist without a private network namespace, and will only have access to the host network.

Physical NICs: Containers connect to the outside world via a physical NIC or a veth tunnel endpoint connected to the container. But a NIC can only exist in one namespace at a time, so a physical NIC cannot simultaneously connect to the host and a container.

Bridge: LXC creates a bridge, lxcbr0, at host startup.

Containers created using the default configuration will have one veth NIC with the remote end connected to the lxcbr0 bridge.

Thus my cirros container has an IP which is only accessible from the host ‘ubuntu’,

jonathan@ubuntu:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.060 ms

not other machines.

us-jonathan-mac:~ jonathan$ ping
PING ( 56 data bytes
Request timeout for icmp_seq 0

While it can resolve remote servers, it cannot reach them

$ ip addr show
12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
 link/ether 00:16:3e:aa:df:dd brd ff:ff:ff:ff:ff:ff
 inet brd scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::216:3eff:feaa:dfdd/64 scope link 
 valid_lft forever preferred_lft forever
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default UG 0 0 0 eth0
* U 0 0 0 eth0
$ ping
PING ( 56 data bytes

The container config file tells me that it is currently set to bridge to the host only. Network type veth means virtual ethernet and it is linked to the host by br0

$ sudo more /var/lib/lxc/c2/config
# Network configuration = veth = up = br0 = 00:16:3e:13:38:b8

So I need to set up a bridge, br0, between the host and the container. On the host I set up:

$ more /etc/network/interfaces
# loopback
auto lo
iface lo inet loopback
# primary interface, attached to the bridge
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet dhcp
bridge_ports eth0

So now, from my cirros container, I can ping the outside world, since the bridge picked up a DHCP address.

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
$ ip addr show
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
 link/ether 00:16:3e:aa:df:dd brd ff:ff:ff:ff:ff:ff
 inet brd scope ...
$ ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=112 time=88.989 ms

Check container configuration

# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-229.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

I can show the bridge info

$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c290a1957 no eth0

And get info on the running container to see how it is connected to the bridge.

jonathan@ubuntu:~$ sudo lxc-info --name c2
Name:           c2
State:          RUNNING
PID:            1717
IP:             2601:647:4b00:c3:216:3eff:feaa:dfdd
CPU use:        0.15 seconds
BlkIO use:      3.98 MiB
Memory use:     2.62 MiB
KMem use:       0 bytes
Link:           vethY0OLAB
TX bytes:      4.41 KiB
RX bytes:      348.96 KiB
Total bytes:   353.37 KiB

Next post… containers in public clouds, AWS and Google

Diving into containers – post #1 of several

In this series of posts I will explore containers and their related technologies.

The problems that containers attempt to solve

  • Allow organizations to respond quickly to new business requirements by speeding application delivery.
  • Help keep systems and data secure by isolating applications.
  • Lower development costs by increasing developer agility.
  • Reduce maintenance time and cost, since an entire container can be replaced rather than patched and updated.
  • Use existing infrastructure to adopt an innovative new development and hosting discipline.

How do containers achieve these goals?

Containers enable software to run reliably when moved from one computing environment to another, for example:

  • From a developer’s laptop to a test environment, to a QA environment and then into production.
  • From a server in a datacenter to a virtual machine in a private or public cloud.

Containers are easy to deploy and portable across host systems because the complete application environment is included in the container.

What is a container?

  • Containers are an application isolation capability on a host operating system.
  • A container consists of an entire runtime environment: an application plus its dependencies, libraries, binaries and configuration files, packaged together. Bundling the application with its dependencies abstracts away differences in Operating System distributions and infrastructure.
  • All containers on a host must use the same kernel.

Advantages of containers

  • Containers are easy to deploy and portable across host systems because the complete application environment is included in the container.
  • Containers enable server consolidation by running multiple applications on a host, similar to virtualization.

Doesn’t virtualization also enable concurrent applications to run in an isolated environment? What’s the difference between containers and virtualization?

  • Virtualization virtualizes CPU, storage and networking to enable a guest operating system to run, and applications run on the guest – essentially a virtual server.
  • A container host provides a logically isolated runtime environment within the same Operating System of a physical or virtual server.

Containers can run on virtual machines.

Containers compared to Virtual Machines (VMs)

  • Kernel: containers share the host’s kernel; each VM is its own operating system, running on a hypervisor.
  • Startup: containers start in seconds; VMs take minutes to boot.
  • Size: containers are tens of megabytes; VMs are gigabytes.
  • Security: containers are arguably less secure, because root access on a Linux host has access to all containers; VMs are arguably more secure because they are isolated.
  • Availability: containers are available on Linux, Solaris and Windows Operating Systems and on the AWS, Azure and Google clouds. VMs run on hypervisors such as VMware ESXi, KVM and Hyper-V, and also on the public clouds: AWS, Azure and Google.


In the next blog post, I will explore containers in Linux and other environments.

The business value of an OpenStack cloud

There is a lot of hype around OpenStack today. But OpenStack seems so complex to deploy, manage and upgrade.


What is OpenStack and what value does it bring to an organization?

OpenStack enables an organization to build applications quickly.

  • Elasticity: Applications that can expand as demand grows; shrink as demand lessens.
  • High Availability: Applications that can survive failure of the underlying hardware without any interruption to the end user.
  • Automation: Add compute, networking and storage using an API or a web dashboard.
  • Self-Service: Enable end users to serve their own needs for compute, networking and storage without waiting for the IT department to fulfill their requests.

This agility, automation and self-service allows organizations to evolve to meet the demands of users and customers who want applications delivered quickly.

  •  OpenStack is open-source which means any organization can download the software and contribute to the project.
  • OpenStack is hardware independent and not tied to any particular vendor.

To learn more about the business value of OpenStack, attend the OpenStack summit in Vancouver, but first help my three sessions get nominated for the summit as follows:

  1. How to save money and improve agility with Network Functions Virtualization (NFV)
  2. Comparing your choices, OpenStack or VMware to build a private cloud
  3. What’s the difference between AWS public cloud and OpenStack

Thank you!

Why and how you can and should understand your healthcare costs

As a consumer, when I make a new purchase the price is clearly marked, whether it is a washing machine, a bar of chocolate or a menu item in a restaurant. The price is fixed and not negotiable. Indeed, the .com era heralded many price-comparison websites.

Two exceptions: (1) the car market, where the “sticker price” varies from what you finally pay after haggling; (2) the housing market, which is driven by offers and counter-offers until a deal is signed. But in both cases the consumer at least has a starting price point.

So why is the price of a healthcare procedure unavailable to the patient?

When a patient goes to the doctor there is no discussion of price. The patient (and often the doctor) has no idea in advance what the visit and procedures will cost. Charges and payments are “contracted”, or negotiated, between the healthcare provider and the various insurance companies. Often the patient only finds out the cost after the bill has been submitted to the insurance company, when the patient is faced with a deductible, co-payment or co-insurance – or, worst case, no insurance!



Before a Dr visit, I believe the patient should be entitled to know:

  1. Cost of the visit to the doctor’s office
  2. Cost of associated procedures such as labs, blood tests and x-rays
  3. Amount the insurance will pay
  4. Amount the patient is responsible to pay

This would allow the patient to shop around for prices and not accept the de-facto charge. (Of course, this only applies to patients with PPO insurance, not an HMO.)

There are efforts to provide healthcare cost transparency:

  • Anthem Blue Cross does not have a cost estimator.
  • United Healthcare has a cost estimator.
  • Cigna has a cost estimate application (login required).

Here is a non-partisan effort to allow patients to shop for healthcare by price:

Patients – understand the breakdown of your healthcare bill and how you can shop for alternatives and even negotiate the cost. When you shop for a car, you learn to understand miles per gallon, frequency between oil changes, road handling; if you buy a washing machine you may want to know gallons/liters of water used per load.  

Thus you should learn the terminology and key terms used in medical costs.

Medical professionals use numerical codes when they diagnose patients and write up the diagnosis or medical procedure. Every diagnosis has a code, from a dental filling to a heart transplant. There are two major code systems in use, CPT and ICD.

  1. CPT codes – Current Procedural Terminology.
  2. ICD-9 – International Classification of Diseases.

Armed with this information you the patient can understand your healthcare costs as follows using a CPT lookup. Ask your healthcare provider for the CPT code of the procedure, then before undergoing the procedure find out from your health insurance how much they cover and what you owe. Use the CPT code to shop around from other providers and get the best price.
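The arithmetic behind "how much do I owe?" is straightforward once you have the billed price for a CPT code and your plan's terms. A sketch with purely hypothetical numbers (real prices come from your provider and insurer):

```python
def patient_owes(billed, insurance_rate, deductible_remaining):
    """Out-of-pocket cost: the remaining deductible is paid in full,
    then co-insurance applies to the rest of the billed amount."""
    deductible_part = min(billed, deductible_remaining)
    remainder = billed - deductible_part
    coinsurance_part = remainder * (1 - insurance_rate)  # patient's share
    return deductible_part + coinsurance_part

# e.g. a $400 procedure, insurance pays 80% after a $100 remaining deductible:
print(patient_owes(400, 0.80, 100))  # $100 deductible + 20% of $300 -> 160.0
```

Running the same numbers against two or three providers' prices for the same CPT code is exactly the "shop around" step described above.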

Building an IaaS cloud with Red Hat OpenStack – 2 of several posts

This is the second in a series that will step through the process of building an OpenStack IaaS cloud in your datacenter. (The first post introduced Infrastructure as a Service).

What are cloud workloads? What applications are suited to the cloud?

Physical servers that run email, ERP, CRM or web applications are reasonably static and their load is predictable. ERP may have a predictable spike in traffic at month end; CRM or email may use extra compute resources when a marketing campaign is launched. Growing this infrastructure incurs time, labour and cost to purchase and rack additional servers, storage and networking.

Virtualization provides some gains, as existing physical servers and their apps can be consolidated and higher utilization extracted from them by running several virtual machines on the same physical hardware. To grow this infrastructure, a virtualization administrator creates additional virtual machines.

Cloud workloads are applications that are primarily elastic: they grow and shrink dynamically and automatically, with no user intervention – no racking of new servers, no creation of additional virtual machines. Furthermore, they can be deployed automatically by the end user using APIs, with no intervention required by an IT administrator.

Overview of OpenStack

OpenStack is an open-source Infrastructure-as-a-Service project that was started by NASA and Rackspace but now has over 200 contributors. OpenStack offers an Infrastructure-as-a-Service cloud that can be deployed as a private or public offering. To fully understand OpenStack, let’s dispense with some misunderstandings:

OpenStack is a product

OpenStack is not a product, per se. Comparing OpenStack to a commercial product like VMware vCloud or Microsoft Private Cloud is an apples-to-oranges comparison. OpenStack is an open-source project used to build an IaaS cloud. OpenStack is a cloud operating system that controls virtual compute, software-defined networking and storage.

You can download the distribution directly from the GitHub source and deploy, test and run it on your own, or purchase a supported distribution such as Red Hat’s, where most of the testing is done for you and deployment services are included.

OpenStack is a replacement for virtualization

OpenStack does not replace your virtualization initiative. OpenStack runs alongside the hypervisor and supports several: Xen, VMware, Hyper-V, LXC and of course KVM. Moreover, OpenStack supports container technologies such as Docker (which will be available on Red Hat Enterprise Linux 7), and OpenStack even runs on bare metal (still in development).

So how is OpenStack used? What are some use cases?

OpenStack is a cloud operating system. OpenStack provides a platform to deploy virtual servers, software defined networking and storage. It is suitable for workloads that dynamically grow (and shrink). This takes advantage of the elastic nature of a cloud infrastructure.

  1. Automatic deployments – where scripts and programs use APIs to deploy virtual servers, networking and storage. This is a faster and more efficient alternative to using a web GUI or contacting an IT admin.
  2. Reusable scenarios – where applications demand an identical configuration be created and reused. Example: QA testing on an identical configuration.
  3. Servers and applications based on templates – a prepackaged set of services that is deployed from an image or template. Example: a webserver, application server and database.
  4. Automatic scale – accommodate growth in demand automatically. Example: an eCommerce application that has spikes during peak shopping periods.
  5. Big Data – analysis of datasets that vary in size and thus require automatic addition of nodes/servers as the data grows and the need for compute expands. Example: Hadoop performs data analysis on the end node rather than at a central server; as data analysis grows, new nodes need to spin up automatically for the analysis to be performed.
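The "automatic scale" case above boils down to a simple decision loop: measure load, compute how many nodes would bring it back to target, and scale out or in. A toy sketch of that decision (mine, not OpenStack code – a real cloud would drive the scaling action through the OpenStack APIs):

```python
import math

def desired_nodes(current_nodes, load_per_node_pct, target_load_pct):
    """Return how many nodes would keep the average load per node
    at or below the target, never scaling below one node."""
    total_load = current_nodes * load_per_node_pct
    return max(1, math.ceil(total_load / target_load_pct))

# 4 nodes each running at 90% load, targeting 60% average load per node:
print(desired_nodes(4, 90, 60))  # -> 6, so scale out by two nodes
```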

OpenStack offers:

  • Distributed object storage
  • Persistent block-level storage
  • Storage for provisioning virtual machines and their images
  • Role-Based Access Control (RBAC) and authentication
  • Software-defined networking
  • A web browser-based GUI, a command line, and APIs for users and administrators

OpenStack components/projects

The projects that comprise OpenStack are listed below and described in detail.

  • Dashboard (Horizon) – Web-based GUI for using and administering OpenStack
  • Identity (Keystone) – Authentication and authorization (roles/privileges)
  • Networking (Neutron) – Software-defined networking for connectivity between OpenStack components
  • Block Storage (Cinder) – Persistent block storage/volumes/virtual disks for instances/virtual machines
  • Compute (Nova) – Launch and schedule instances/virtual machines on servers/nodes
  • Image (Glance) – A registry for virtual machine images
  • Object Storage (Swift) – Storage of files for users
  • Metering (Ceilometer) – Usage and measurement of cloud resources
  • Orchestration (Heat) – Template-based engine for automatically creating resources (compute/storage/networking)

OpenStack Compute service: (PROJECT NOVA)

OpenStack Compute is the core of the OpenStack cloud. Compute provisions and manages on-demand virtual machines, and schedules them to run on a set of physical servers (nodes). Virtual machines can be started, stopped, suspended, created and deleted. They run on hypervisors such as Xen, ESXi, Hyper-V and of course KVM.

Compute interfaces with the Identity service to authenticate a user who requests a compute action (create/delete/suspend/copy a virtual machine), with the Dashboard service (project Horizon) for the user web GUI, and with the Image service (project Glance) to provision an image. There are security/access controls to govern which images are accessible by users, and a quota on how many instances can be created per project.

The Compute service scales horizontally on standard hardware: to grow an OpenStack cloud you add more servers/nodes (horizontal scaling), rather than adding memory, CPU and disk to an existing server (vertical scaling).
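The appeal of horizontal scaling is that capacity grows linearly with node count, with no per-server upgrade work. A trivial sketch (the per-node figure is a made-up example):

```python
def cluster_capacity(nodes, vms_per_node):
    """Horizontal scaling: total capacity grows linearly with node count."""
    return nodes * vms_per_node

# Adding two more identical servers to a 10-node cloud:
print(cluster_capacity(10, 20))  # -> 200 VMs
print(cluster_capacity(12, 20))  # -> 240 VMs
```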

OpenStack Image service: (PROJECT GLANCE)

The OpenStack Image Service provides discovery, registration and delivery services for disk and server images. Images can be used as templates to create new virtual servers. The service can also be used to store and catalog backups.

The image service stores images in a variety of formats:

  • AMI – Amazon Machine Image
  • ISO – (virtual CDROM)
  • qcow2 (Qemu/KVM)
  • OVF (Open Virtualization Format)
  • RAW (unstructured)
  • VDI (VirtualBox)
  • VHD (Hyper-V, XEN, Microsoft, VMware)
  • VMDK (VMWare)

OpenStack Object storage: (PROJECT SWIFT)

Object storage provides virtual containers that allow users to store and retrieve files (images, documents, video files, graphics, etc.). Object storage supports asynchronous, eventually consistent replication and uses the concepts of:

  • Replicas – maintain the state of objects in the case of an outage.
  • Zones – used to host replicas and ensure that each replica of a given object is stored separately. A zone might represent a disk, a disk array, a rack of servers or an entire datacenter.
  • Regions – a group of zones sharing a location
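The point of zones is that replicas of the same object never share a failure domain. A toy placement sketch (real Swift places replicas with a consistent-hashing "ring", not this round-robin; the function and zone names are mine):

```python
def place_replicas(obj_name, zones, n_replicas=3):
    """Assign each replica of an object to a distinct zone, so that
    losing one zone (disk, rack, datacenter) never loses every copy."""
    if n_replicas > len(zones):
        raise ValueError("need at least as many zones as replicas")
    start = hash(obj_name) % len(zones)  # toy placement, not Swift's ring
    return [zones[(start + i) % len(zones)] for i in range(n_replicas)]

zones = ["zone-a", "zone-b", "zone-c", "zone-d"]
print(place_replicas("photos/cat.jpg", zones))  # three distinct zones
```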

OpenStack Block storage: (PROJECT CINDER)

Block storage provides persistent block storage comprising the virtual hard drives, or volumes, used by OpenStack virtual machines. These volumes are integrated into the Dashboard and Compute services so that users can manage their own storage: a user can create, list or delete volumes and attach them to, or detach them from, virtual machines. Virtual machine snapshots are also stored on block storage volumes.

OpenStack Metering service: (PROJECT CEILOMETER)

The Metering service provides user level statistics that can be used for alerting, billing or monitoring. There is a plugin system to add new monitors.

OpenStack Orchestration service: (PROJECT HEAT)

The Orchestration service provides a template-based engine for the OpenStack cloud, used to create and manage cloud resources – storage, networking, instances (virtual machines) and applications – as a repeatable running environment. Templates are used to create stacks: collections of resources such as instances, floating IPs, volumes, security groups or users. The service offers access to the OpenStack core services via a single modular template.

OpenStack Networking service: (PROJECT NEUTRON)

OpenStack provides networking models to accommodate different applications. It is a scalable, API-driven system for providing network connectivity. As a software-defined network, OpenStack Networking can create networks, assign IP addresses, route traffic and connect servers. Various network services are supported: flat networks, VLANs, GRE (Generic Routing Encapsulation – a tunneling protocol) tunnels, multi-tier topologies, etc.

OpenStack Networking manages IP addresses, allocating static or DHCP addresses. Floating IP addresses allow traffic to be dynamically rerouted to any compute resource, for example to redirect traffic during maintenance or in the case of a failure. OpenStack Networking has a plugin extension framework to add intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPNs).
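A floating IP is essentially a public address that can be re-pointed at a different instance without clients noticing. A toy model of that failover (mine, not the Neutron API; the address is from the documentation range and the instance names are invented):

```python
# Map of floating IPs to the instances currently serving them.
floating_ips = {"203.0.113.10": "web-1"}

def failover(fip, new_instance):
    """Re-point a floating IP at a new instance; return the old target."""
    old = floating_ips.get(fip)
    floating_ips[fip] = new_instance
    return old

# web-1 fails (or goes down for maintenance); reroute its traffic to web-2:
old = failover("203.0.113.10", "web-2")
print(old, "->", floating_ips["203.0.113.10"])  # web-1 -> web-2
```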

The diagram below shows:

  1. Horizon – providing a web user interface to manage and use Cinder (block), Swift (object), Glance (images), Nova (compute) and Quantum (networking).
  2. Keystone – providing authentication/authorization to Cinder/Swift (storage), Glance (images), Nova (compute) and Quantum (networking).
  3. Nova (compute) – scheduling and provisioning virtual machines.
  4. Ceilometer – monitoring Cinder/Swift (storage), Glance (images), Nova (compute) and Quantum (networking).
  5. Cinder(block storage) – providing volumes for Virtual Machines and storing backups in Swift.
  6. Glance – storing images in Swift and providing them to Virtual Machines.
  7. Quantum – providing network connectivity to Virtual Machines.
  8. Heat – orchestrating the OpenStack cloud.
OpenStack architecture

OpenStack architecture – courtesy of