List of my blogs/articles/videos

 

These are my blogs

On IDG

The difference between Open and open source –
http://www.idgconnect.com/blog-abstract/16909/the-difference-source

On LinkedIn

Dissecting openstack contribution statistics – https://www.linkedin.com/pulse/dissecting-openstack-contribution-statistics-who-jonathan-gershater

What has building a TV cabinet got to do with #OpenSource software ? https://www.linkedin.com/pulse/my-tv-cabinet-project-open-source-software-jonathan-gershater

On Linux.com

Managing OpenStack with Open Source Tools

On opensource.com

How telecoms can escape vendor lock-in with open source software – https://opensource.com/business/15/6/telecoms-escape-vendor-lock-in-open-source-nfv

On Intel blog site

How You Can Build a Cost Effective Telecommunications Cloud Using Open Source Software on Intel Architecture

Intel® Network Builders

On Red Hat blog sites

Telco and NFV

  1. http://verticalindustriesblog.redhat.com/tune-in-how-to-build-an-open-source-cloud-for-telcos/
  2. http://verticalindustriesblog.redhat.com/inside-the-open-networking-summit-2016/
  3. http://verticalindustriesblog.redhat.com/the-fcc-is-about-to-free-up-your-selection-of-tv-channels/
  4. http://verticalindustriesblog.redhat.com/addressing-telco-service-providers-requirements-with-open-source/
  5. http://verticalindustriesblog.redhat.com/why-upstream-contributions-matter-when-developing-open-source-nfv-solutions/

OpenStack

  1. http://redhatstackblog.redhat.com/2015/08/31/how-red-hats-openstack-partner-networking-solutions-offer-choice-and-performance/
  2. http://redhatstackblog.redhat.com/2015/08/06/how-to-choose-the-best-fit-hardware-for-your-openstack-deployment/
  3. http://redhatstackblog.redhat.com/2015/05/13/public-vs-private-amazon-compared-to-openstack/
  4. http://redhatstackblog.redhat.com/2015/03/27/an-ecosystem-of-integrated-cloud-products/
  5. http://redhatstackblog.redhat.com/2015/03/26/an-openstack-cloud-that-frees-you-to-pursue-your-business/

Virtualization

  1. http://rhelblog.redhat.com/2017/01/04/five-reasons-to-switch-from-vsphere-to-red-hat-virtualization/

YouTube videos

How to deploy OpenStack using the Red Hat OpenStack Platform director Graphical User Interface, in 3 videos:

  1. Prerequisites for installing Red Hat OpenStack Platform – video 1 of 3
  2. Registering nodes with Red Hat OpenStack Platform Director – video 2 of 3
  3. Deploying OpenStack using Red Hat OpenStack Platform Director – video 3 of 3

Networking primer for NFV (Network Functions Virtualization)

What comprises a network?

This is the start of a series of blog posts focusing on Network Functions Virtualization (NFV) and the transformation of the telecommunications industry. This blog post can also be heard at https://thenfvpodcast.org/what-comprises-a-network/

A large network comprises an IP backbone plus enterprise, SMB, residential Wi-Fi and public cellular networks.

 

[Image: example enterprise network architecture diagram – courtesy of smartdraw.com]

  • IP backbone – the core / central / backbone network
  • Distribution networks – connect access networks to the core
  • Access networks – provide access to computers, servers and mobile devices.

Networks comprise:

  1. Ethernet – cables used to connect servers and workstations, buildings and campuses, at speeds up to 100 Gbps today and 400 Gbps soon.
  2. Wide Area Networks – typically T1 and ATM, though these will likely be supplanted by Ethernet.
  3. Wi-Fi – wireless networks used in the home, in public spaces and in the enterprise.
  4. Cellular – four generations so far: 1G, 2G, 3G, 4G. Each generation becomes faster and adds encryption and channels. One day 5G will add more intelligence: Quality of Service, adaptive network reconfiguration, etc.
  5. Unified communications – one platform that unifies web, video, audio, telephony, instant messaging, etc.

There are essentially two types of network traffic:

  • Elastic traffic adapts to changes in delay and throughput in the network – for example FTP, HTTP and SSH. Though painful, I can wait for a webpage to load over HTTP or for a file to download over FTP. TCP is an example of a protocol that controls congestion in the network.
  • Inelastic traffic does not adapt to changes in delay and throughput – for example voice, media, video and airline pilot simulation. Anyone who has watched a choppy or blurry video online, or heard the sound break up and words get skipped in an audio broadcast or a voice over IP phone call, has experienced inelastic traffic.

Reliable network traffic communication needs:

  • A minimum throughput

AND

  • No delay – for example, the pricing of stocks on an exchange cannot be delayed; traders need real-time communication

Delay can be characterized by:

  • Delay jitter – the magnitude of delay variation. Incoming packets are buffered to compensate for Internet delays and then steadily streamed to the application expecting the traffic.
  • Packet loss – real-time applications generally expect no traffic loss. Again, think of those lost words in an Internet phone call, or imagine data lost by a financial trading network.

Quality of Service

How good/bad/average is the service I am providing? Perspective of the service provider.

  1. What is the throughput? – bytes per second for a logical traffic flow
  2. Is there any delay? – average or maximum delay (latency)
  3. What is the packet jitter? – the maximum allowable jitter
  4. What is the error rate? – what fraction of bits are delivered in error
  5. Packet loss – what fraction of packets are lost
  6. Priority – how do I prioritize flows of traffic?
  7. Availability – what percentage of the time is my service available? For example, a service that was up 363 of 365 days had 363/365 ≈ 99.5% availability.
  8. Security – is my service secure? What levels of security do I offer?

Quality of Experience

What is the user experience? Perspective of the end user who is using the service.

  1. Perceptual – the sensory experience of the user: how the user perceives the video (sharpness, brightness, flicker) and the audio (clarity and timbre).
  2. Psychological – the ease of use: Is the service a joy to use? Is it useful? What is the perceived quality/satisfaction? Is the user annoyed by poor service quality?
  3. Interactive – if the application requires interaction, for example the user speaking, how responsive, natural, communicative and expressive is the experience?

 

Creating containers in Linux post 2 of several

Linux containers

In blog post 1, I defined containers and how they are distinguished from virtualization.
This post focuses on Linux containers.
LXC stands for LinuX Containers. LXC is not itself a container, but a set of tools that interfaces with kernel namespaces, cgroups, etc., to create and manage containers.
There are two types of containers: privileged, where lxc commands run as root, and unprivileged, where lxc commands run as a non-root user.

Container concepts

  1. cgroups
  2. Namespaces
  3. Templates
  4. Networking

Cgroups

Linux control groups (cgroups) control how resources are used in a container: they can limit memory, disk and network I/O, guarantee CPU time, lock a container to a specific CPU and provide resource accounting.
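As a hedged illustration (the limits below are assumptions, not recommendations), cgroup limits can be set with lxc.cgroup keys in a container's config file, using the same LXC 1.x-style config syntax shown later in this post:

# hypothetical excerpt from /var/lib/lxc/c2/config
lxc.cgroup.memory.limit_in_bytes = 256M  # cap the container's memory at 256 MB
lxc.cgroup.cpuset.cpus = 0,1             # lock the container to CPUs 0 and 1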

Namespaces

This allows containers to be isolated from one another. Groups of processes are separated and cannot “see” resources in other groups. There are several namespaces:

  1. The PID namespace isolates process identifiers and their details.
  2. Network namespace isolates physical or virtual NICs, firewall rules, routing tables etc.
  3. The UTS namespace allows each container its own hostname.
  4. The Mount namespace allows a different file system layout, or makes specific mount points read-only.
  5. User namespace isolates user IDs between namespaces.
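A quick, hedged way to observe this isolation (assuming a running container named c2, as created later in this post):

sudo lxc-attach --name c2 -- hostname  # UTS namespace: the container's own hostname
sudo lxc-attach --name c2 -- ps aux    # PID namespace: only the container's processes are visible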

Templates

Templates define:

  • root filesystem
  • SELinux on or off
  • hostname
  • networking
  • root password
  • services – define some; remove others that are superfluous
  • etc

Templates are available per Linux distribution and those shipped with lxc are found here: /usr/share/lxc/templates.

Creating and networking containers

jonathan@ubuntu:~$ ls /usr/share/lxc/templates
lxc-alpine lxc-archlinux lxc-centos lxc-debian lxc-fedora lxc-openmandriva lxc-oracle lxc-sshd lxc-ubuntu-cloud lxc-altlinux lxc-busybox lxc-cirros lxc-download lxc-gentoo lxc-opensuse lxc-plamo lxc-ubuntu

Let’s look at the details of the cirros template (or any other):

more /usr/share/lxc/templates/lxc-cirros

This command creates a cirros container:

sudo lxc-create --template /usr/share/lxc/templates/lxc-cirros --name c2

An alternative is to download a template from a server:

sudo lxc-create --template download --name c1
This provides a list of images to choose from, and I chose:
  • Distribution: ubuntu
  • Release: trusty
  • Architecture: amd64

Now I have two containers, c1 and c2

$ sudo lxc-ls -f
NAME  STATE    IPV4  IPV6  AUTOSTART  
------------------------------------
c1    STOPPED  -     -     NO         
c2    STOPPED  -     -     NO
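Both containers are stopped. As a hedged sketch, a container can be started in the background, entered and stopped again with:

sudo lxc-start --name c2 --daemon  # start the container in the background
sudo lxc-attach --name c2          # open a shell inside the running container
sudo lxc-stop --name c2            # stop it when finished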

Where are the containers c1 and c2 stored on the host?

$ sudo ls -alt /var/lib/lxc
drwx------  4 root root 4096 Jul  1 10:59 .
drwxrwx---  3 root root 4096 Jul  1 10:58 c2
drwxrwx---  3 root root 4096 Jul  1 10:20 c1
drwxr-xr-x 41 root root 4096 Jul  1 09:56 ..

Networking

LXC creates a private network namespace for each container, which includes a layer 2 networking stack. A container can exist without a private network namespace, and will only have access to the host network.

Physical NICs: containers connect to the outside world through a physical NIC or a veth tunnel endpoint connected to the container. A NIC can only exist in one namespace at a time, so a physical NIC cannot simultaneously connect to the host and a container.

Bridge: LXC creates a bridge, lxcbr0, at host startup.

Containers created using the default configuration will have one veth NIC with the remote end connected to the lxcbr0 bridge.

Thus my cirros container has the IP 10.0.2.120, which is only accessible from the host ‘ubuntu’:

jonathan@ubuntu:~$ ping 10.0.2.120
PING 10.0.2.120 (10.0.2.120) 56(84) bytes of data.
64 bytes from 10.0.2.120: icmp_seq=1 ttl=64 time=0.060 ms

but not from other machines:

us-jonathan-mac:~ jonathan$ ping 10.0.2.120
PING 10.0.2.120 (10.0.2.120): 56 data bytes
Request timeout for icmp_seq 0

While the container can resolve remote servers, it cannot reach them:

$ ip addr show
12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
 link/ether 00:16:3e:aa:df:dd brd ff:ff:ff:ff:ff:ff
 inet 10.0.2.120/24 brd 10.0.2.255 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::216:3eff:feaa:dfdd/64 scope link 
 valid_lft forever preferred_lft forever
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.1 0.0.0.0 UG 0 0 0 eth0
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
$ ping cnn.com
PING cnn.com (157.166.226.26): 56 data bytes

The container config file tells me that it is currently set to bridge to the host only. The network type veth means virtual Ethernet, and the container is linked to the host by the bridge br0:

$ sudo more /var/lib/lxc/c2/config
<snip>
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:16:3e:13:38:b8

So I need to set up a bridge, br0, between the host and the container. On the host I configure:

$ more /etc/network/interfaces
# loopback
auto lo
iface lo inet loopback
# primary
auto eth0
iface eth0 inet static
address 10.0.1.16
netmask 255.255.255.0
gateway 10.0.1.1
#bridge
auto br0
iface br0 inet dhcp
bridge_ports eth0
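To apply the new bridge configuration, a hedged sketch (the exact commands vary by distribution, and restarting networking may interrupt connectivity):

sudo ifup br0                      # bring up the new bridge
sudo lxc-stop --name c2            # restart the container so its veth
sudo lxc-start --name c2 --daemon  # endpoint re-attaches to br0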

Now, from my cirros container, I can ping the outside world, since it picked up a DHCP address.

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ ip addr show
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
 link/ether 00:16:3e:aa:df:dd brd ff:ff:ff:ff:ff:ff
 inet 10.0.1.16/24 brd 10.0.1.255 scope ...
$ ping cnn.com
PING cnn.com (157.166.226.25): 56 data bytes
64 bytes from 157.166.226.25: seq=0 ttl=112 time=88.989 ms

Check container configuration

# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-229.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

I can show the bridge info

$ brctl show
bridge name  bridge id          STP enabled  interfaces
br0          8000.000c290a1957  no           eth0
                                             vethY0OLAB
lxcbr0       8000.000000000000  no

And get info on the running container to see how it is connected to the bridge.

jonathan@ubuntu:~$ sudo lxc-info --name c2
Name:           c2
State:          RUNNING
PID:            1717
IP:             10.0.1.16
IP:             2601:647:4b00:c3:216:3eff:feaa:dfdd
CPU use:        0.15 seconds
BlkIO use:      3.98 MiB
Memory use:     2.62 MiB
KMem use:       0 bytes
Link:           vethY0OLAB
TX bytes:      4.41 KiB
RX bytes:      348.96 KiB
Total bytes:   353.37 KiB
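
When a container is no longer needed, it can be stopped and deleted (a hedged sketch):

sudo lxc-stop --name c2     # stop the running container
sudo lxc-destroy --name c2  # delete the container and its root filesystem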

Next post… containers in public clouds, AWS and Google

Diving into containers – post #1 of several

In this series of posts I will explore containers and their related technologies.

The problems that containers attempt to solve

  • Allow organizations to respond quickly to new business requirements by speeding application delivery.
  • Keep systems and data secure by isolating applications.
  • Lower development costs by increasing developer agility.
  • Reduce maintenance time and cost, since an entire container can be replaced rather than patched and updated.
  • Use existing infrastructure to adopt a new, innovative development and hosting discipline.
How do containers achieve these goals?

Containers enable software to run reliably when moved from one computing environment to another, for example:

  • From a developer’s laptop to a test environment, to a QA environment and then into production.
  • From a server in a datacenter to a virtual machine in a private or public cloud.

Containers are easy to deploy and portable across host systems because the complete application environment is included in the container.

What is a container?

  • Containers are an application isolation capability on a host operating system.
  • A container consists of an entire runtime environment: an application plus its dependencies, libraries, binaries and configuration files, packaged together. Bundling the application with its dependencies abstracts away differences in operating system distributions and infrastructure.
  • All containers on a host must use the same kernel (see the sketch below).
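
That last point is easy to verify. As a hedged sketch using LXC (the tooling explored in the next post in this series; the container name c2 is an assumption), the kernel version reported inside a container matches the host’s:

uname -r                               # kernel version on the host
sudo lxc-attach --name c2 -- uname -r  # the same version, seen from inside the container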
Advantages of containers

  • Containers are easy to deploy and portable across host systems because the complete application environment is included in the container.
  • Containers enable server consolidation by running multiple applications on a host, similar to virtualization.

Doesn’t virtualization also enable concurrent applications to run in an isolated environment? What’s the difference between containers and virtualization?
  • Virtualization virtualizes CPU, storage and networking to enable a guest operating system to run, and applications run on the guest – essentially a virtual server.
  • A container host provides a logically isolated runtime environment within the same operating system on a physical or virtual server.

Containers can run on virtual machines.

Containers compared to Virtual Machines (VMs)

  • Kernel: containers share the same kernel; each VM is its own operating system, running on a hypervisor.
  • Startup: containers start in seconds; VMs take minutes to boot.
  • Size: containers are tens of megabytes; VMs are gigabytes.
  • Security: containers are arguably less secure, because root access on a Linux host has access to all containers; VMs are arguably more secure because they are isolated from one another.
  • Platforms: containers are available on Linux, Solaris and Windows operating systems and on the AWS, Azure and Google clouds. VMs run on hypervisors such as VMware ESXi, KVM and Hyper-V, and also on the public clouds: AWS, Azure and Google.

[Image: kernel/VM architecture diagram]

In the next blog post, I will explore containers in Linux and other environments.

Building an IaaS cloud with Red Hat OpenStack – 2 of several posts

This is the second in a series that will step through the process of building an OpenStack IaaS cloud in your datacenter. (The first post introduced Infrastructure as a Service).

What are cloud workloads? What applications are suited to the cloud?

Physical servers that run email, ERP, CRM or web applications are reasonably static, and their load is predictable. ERP may have a predictable spike in traffic at month end; CRM or email may use extra compute resources when a marketing campaign is launched. Growing this infrastructure incurs time, labour and cost to purchase and rack additional servers, storage and networking.

Virtualization provides some gains, as existing physical servers and their apps can be consolidated and higher utilization extracted from them by running several virtual machines on the same physical hardware. To grow this infrastructure, a virtualization administrator creates additional virtual machines.

Cloud workloads are applications that are primarily elastic: they grow and shrink dynamically and automatically, with no user intervention – no racking of new servers, no creation of additional virtual machines. Furthermore, they can be deployed automatically by the end user via APIs, with no intervention required by an IT administrator.

Overview of OpenStack

OpenStack is an open-source Infrastructure-as-a-Service project that was started by NASA and Rackspace and now has over 200 contributors. OpenStack offers an Infrastructure-as-a-Service cloud that can be deployed as a private or public offering. To fully understand OpenStack, let’s dispense with some misunderstandings:

OpenStack is a product

OpenStack is not a product, per se. Comparing OpenStack to a commercial product like VMware vCloud or Microsoft Private Cloud is an apples-to-oranges comparison. OpenStack is an open-source project used to build an IaaS cloud: a cloud operating system that controls virtual compute, software-defined networking and storage.

You can download the source directly from GitHub and deploy, test and run it on your own, or purchase a supported distribution such as Red Hat’s, where most of the testing is done for you and deployment services are included.

OpenStack is a replacement for virtualization

OpenStack does not replace your virtualization initiative. OpenStack runs alongside the hypervisor and supports several: Xen, VMware, Hyper-V, LXC and of course KVM. Moreover, OpenStack supports container technologies such as Docker (which will be available on Red Hat Enterprise Linux 7), and OpenStack can even run on bare metal (still in development).

So how is OpenStack used? What are some use cases?

OpenStack is a cloud operating system. OpenStack provides a platform to deploy virtual servers, software-defined networking and storage. It is suitable for workloads that dynamically grow (and shrink), taking advantage of the elastic nature of a cloud infrastructure.

  1. Automatic deployments – scripts and programs use APIs to deploy virtual servers, networking and storage. This is a faster and more efficient alternative to using a web GUI or contacting an IT admin.
  2. Reusable scenarios – applications demand that an identical configuration be created and reused. Example: QA testing on an identical configuration.
  3. Servers and applications based on templates – a prepackaged set of services deployed from an image or template. Example: a web server, application server and database.
  4. Automatic scaling – accommodating growth in demand automatically. Example: an e-commerce application that has spikes during peak shopping periods.
  5. Big Data – analysis of datasets that vary in size and thus require the automatic addition of nodes/servers as the data grows and the need for compute expands. Example: Hadoop performs data analysis on the end node rather than at a central server, so as data analysis grows, new nodes need to spin up automatically for analysis to be performed.

OpenStack offers:

  • Distributed object storage
  • Persistent block-level storage
  • Storage for provisioning virtual machines and their images
  • RBAC – Role-Based Access Control – and authentication
  • Software-defined networking
  • A web browser-based GUI, a command line and APIs for users and administrators

OpenStack components/projects

The projects that comprise OpenStack are listed in this table and described in detail below.

Service         Project code name  Description
Dashboard       Horizon            Web-based GUI for using and administering OpenStack
Identity        Keystone           Authentication and authorization (roles/privileges)
Networking      Neutron            Software-defined networking for connectivity between OpenStack components
Block Storage   Cinder             Persistent block storage (volumes/virtual disks) for instances/virtual machines
Compute         Nova               Launches and schedules instances (virtual machines) on servers/nodes
Image           Glance             A registry for virtual machine images
Object Storage  Swift              Storage of files for users
Metering        Ceilometer         Usage and measurement of cloud resources
Orchestration   Heat               Template-based engine for automatically creating resources (compute/storage/networking)

OpenStack Compute service: (PROJECT NOVA)

OpenStack Compute is the heart, or core, of the OpenStack cloud. Compute provisions and manages on-demand virtual machines, scheduling them to run on a set of physical servers (nodes). Virtual machines can be started, stopped, suspended, created and deleted. The virtual machines run on hypervisors such as Xen, ESXi, Hyper-V and, of course, KVM.

Compute interfaces with the Identity service to authenticate a user who requests a compute action (create/delete/suspend/copy a virtual machine), with the Dashboard service (project Horizon) for the user’s web GUI, and with the Image service (project Glance) to provision an image. There are security/access controls to govern which images are accessible by users, and a quota on how many instances can be created per project.

The Compute service scales horizontally on standard hardware: to grow an OpenStack cloud you add more servers/nodes (horizontal scaling), rather than adding memory, CPU and disk to an existing server (vertical scaling).
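
As a hedged sketch of how a user drives Compute, using the nova command-line client of this era (the flavor and image names below are assumptions):

nova boot --flavor m1.small --image cirros-0.3.2 my-instance  # schedule a new virtual machine
nova list                                                     # show instances and their state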

OpenStack Image service: (PROJECT GLANCE)

The OpenStack Image Service provides discovery, registration and delivery services for disk and server images. Images can be used as a template to create new virtual servers. It can also be used to store and catalog backups.

The image service stores images in a variety of formats:

  • AMI – Amazon Machine Image
  • ISO – (virtual CDROM)
  • qcow2 (Qemu/KVM)
  • OVF (Open Virtualization Format)
  • RAW (unstructured)
  • VDI (VirtualBox)
  • VHD (Hyper-V, XEN, Microsoft, VMware)
  • VMDK (VMWare)
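
A hedged sketch of registering an image with the glance client of this era (the file and image names are assumptions):

glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --file cirros-0.3.2-x86_64-disk.img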

OpenStack Object storage: (PROJECT SWIFT)

Object storage provides virtual containers that allow users to store and retrieve files (images, documents, video files, graphics, etc.). Object storage supports asynchronous, eventually consistent replication and uses the concepts of:

  • Replicas – maintain the state of objects in the case of an outage.
  • Zones – host replicas and ensure that each replica of a given object is stored separately. A zone might represent a disk, a disk array, a rack of servers or an entire datacenter.
  • Regions – a group of zones sharing a location.

OpenStack Block storage: (PROJECT CINDER)

Block storage provides persistent block storage that comprises the virtual hard drives, or volumes, used by OpenStack virtual machines. These volumes are integrated into the Dashboard and Compute services so that users can manage their own storage needs: users can create, list and delete volumes and attach them to, or detach them from, virtual machines. Virtual machine snapshots are also stored on block storage volumes.
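
A hedged sketch with the cinder and nova clients of this era (the volume name, instance name and device path are assumptions):

cinder create --display-name my-volume 10            # create a 10 GB volume
nova volume-attach my-instance <volume-id> /dev/vdb  # attach it to an instance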

OpenStack Metering service: (PROJECT CEILOMETER)

The Metering service provides user-level statistics that can be used for alerting, billing or monitoring. A plugin system allows new monitors to be added.

OpenStack Orchestration service: (PROJECT HEAT)

The Orchestration service provides a template-based engine for the OpenStack cloud, used to create and manage cloud resources – storage, networking, instances (virtual machines) and applications – as a repeatable running environment. Templates are used to create stacks: collections of resources such as instances, floating IPs, volumes, security groups and users. The service offers access to the OpenStack core services via a single modular template.
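
As a hedged illustration, a minimal template in Heat’s HOT format might look like the following (the image and flavor names are assumptions); it could be launched with the era’s client as heat stack-create -f server.yaml my-stack:

heat_template_version: 2013-05-23
resources:
  my_server:
    type: OS::Nova::Server  # a single Nova instance
    properties:
      image: cirros-0.3.2   # assumed image name
      flavor: m1.small      # assumed flavor name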

OpenStack Networking service: (PROJECT NEUTRON)

OpenStack provides networking models to accommodate different applications. It is a scalable, API-driven system for providing network connectivity. As a software-defined network, OpenStack Networking can create networks, assign IP addresses, route traffic and connect servers. Various network services are supported: flat networks, VLANs, GRE (Generic Routing Encapsulation, a tunneling protocol), multi-tier topologies, etc.

OpenStack Networking manages IP addresses, allocating static or DHCP addresses. Floating IP addresses allow traffic to be dynamically rerouted to any compute resource, for example to redirect traffic during maintenance or in the case of a failure. OpenStack Networking has a plugin extension framework to add intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPNs).
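
A hedged sketch of creating a network and subnet with the neutron client of this era (the names and CIDR are assumptions):

neutron net-create private-net
neutron subnet-create private-net 192.168.1.0/24 --name private-subnet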

The diagram below shows:

  1. Horizon – providing a web user interface to manage and use Cinder (block) / Swift (object) storage, Glance (images), Nova (compute) and Quantum (networking; Quantum has since been renamed Neutron).
  2. Keystone – providing authentication/authorization to Cinder/Swift (storage), Glance (images), Nova (compute) and Quantum (networking).
  3. Nova (compute) – scheduling and provisioning virtual machines.
  4. Ceilometer – monitoring Cinder/Swift (storage), Glance (images), Nova (compute) and Quantum (networking).
  5. Cinder (block storage) – providing volumes for virtual machines and storing backups in Swift.
  6. Glance – storing images in Swift and providing them to virtual machines.
  7. Quantum – providing network connectivity to virtual machines.
  8. Heat – orchestrating the OpenStack cloud.
[Image: OpenStack architecture – courtesy of openstack.org]

Building an Infrastructure as a Service cloud in your datacenter – first of several articles

Infrastructure as a Service (IaaS)

IaaS is one of the three delivery methods of cloud computing (the other two are Platform as a Service and Software as a Service).

[Image: cloud delivery models – IaaS]

Infrastructure as a Service delivers compute, networking and storage as software on commodity hardware, typically rack-mounted servers that can be added as required to scale a cloud horizontally.

  1. Compute – virtual machines of different sizes, with differing numbers of CPUs and/or amounts of memory.
  2. Networking – software-defined networking: networks, routers and switches defined in software that also provide networking services such as load balancing, firewalls and VPNs.
  3. Storage – blocks of storage as virtual disks, or storage for storing/retrieving files.

These three components are managed using a dashboard, command-line interface or API.
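
As a hedged example of the API route, this is roughly what requesting an authentication token from OpenStack’s Identity v2.0 API looked like in this era (the host, tenant and credentials are assumptions):

curl -s -X POST http://<keystone-host>:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "secret"}}}'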

[Image: OpenStack dashboard]

Characteristics of IaaS:

  • Elasticity: A user can provision (add) or de-provision (remove) cloud instances to scale their cloud up or down.
  • Multi-tenancy: The cloud servers are hosted on a shared infrastructure. This means that your cloud instances co-exist on the same hardware as another user’s cloud instances. To understand multi-tenancy, think of an apartment building (or block of flats). The renters/tenants have their own apartments, but share an elevator or stairway, foundation and roof. The owner of the building rents out apartments as needed and is responsible for the plumbing, etc., while each tenant is responsible for their own furniture and interior decorations. Similarly, an IaaS customer is responsible for their own applications; the cloud provider simply provides the infrastructure.
  • User self-service: Users can create their own cloud instances/virtual servers, provision their own storage and networks. This is one of the most compelling reasons to use a cloud, users are not beholden to an IT organization to provision their infrastructure for them.
  • Utility billing: The cloud provider bills the cloud user for the resources used. Infrastructure as a Service is akin to a utility company providing and billing for electricity, water and natural gas. You share electricity with everyone on the power grid provided by the power station, and only pay for what you use.
  • Virtual Machines: The servers, also called “cloud instances”, are delivered to customers as virtual machines. A virtual machine is a server or workstation, with operating system and applications that appears to the user as a physical server.

Infrastructure as a Service is typically offered in three forms:

  1. Private cloud, also called on-premises
  2. Public cloud
  3. Hosted private cloud

An organization can build a private IaaS cloud and then provide infrastructure services to their internal departments or partners. To build a private IaaS cloud, you need virtualization software to run a hypervisor.

Examples of hypervisor software are:

  • Hyper-V, VMware, Xen.
  • KVM – Kernel-based Virtual Machine, available with most Linux distributions and as open-source software. Red Hat offers KVM virtualization.

Once you have a virtualization or hypervisor layer, you need cloud software to provide the on-demand, user self-service and elasticity features of cloud computing.

Examples of IaaS private cloud software are:

  • Eucalyptus, Microsoft, VMware.
  • OpenStack: OpenStack is an open-source project with over 200 contributors.

[Image: OpenStack software diagram]

This series of articles will focus on building a private cloud using Red Hat OpenStack, which is offered as a free version or a paid subscription.

Next… Concepts and architecture of OpenStack