My last blog was on the benefits, challenges, and supplier considerations for hosting and co-location. In this blog I thought I would share my thoughts and experience with Infrastructure as a Service (IaaS).
IaaS sparked into life around the beginning of the millennium and, after a relatively sedate start, has grown into a huge market, with an estimated annual global value of $44b, fundamentally changing the IT service landscape.
But let us start at the beginning. Back in the day, if an organisation wanted to run business applications, store data and provide services to users such as printing, it needed servers. Sound manageable? Well, a typical business application with its associated database, in a high-availability configuration with production and test platforms, could require upwards of 10 servers, plus backup and DR. It was therefore not unusual for mid-size enterprises to have hundreds of physical servers housed in large computer rooms or data centres, requiring careful resource management and capacity planning, with associated lead times for change.
Standing up a new service could take months. Improvements in hardware performance and reduced physical size helped, but greater standardisation and virtualisation were the game changers. The ability to operate multiple logical servers on shared physical infrastructure, and to treat server builds as images (files), enabled more efficient and flexible service provision.
If an organisation could run multiple autonomous servers on the same physical platform, then a service provider could run multiple organisations' workloads in the same way, achieving even greater economies of scale. Increased network bandwidth enabled access, while growth in applications, data generation and storage fuelled demand.
So, at its essence, Infrastructure as a Service does “what it says on the tin”: another organisation provides you with a service that fully or partially removes the need for you to provide your own infrastructure, typically compute, storage and the associated network and security. You consume it like any other utility, e.g. telecoms, water or electricity.
This capability not only enabled IT leaders to unshackle their organisations from the constraints of infrastructure but also enabled new organisations to grow bigger and faster than more established businesses. Entrepreneurs with an idea for a social platform or streaming media service were able to scale to a global audience quickly and without significant capital investment, allowing them to focus on the development of their core business and underpinning technology.
While IaaS may have started as compute and storage services, it has grown significantly. It spawned Platform as a Service (PaaS) and Software as a Service (SaaS), which I'll cover in another blog, and also expanded to include IT services like firewalls, server load balancers, analytics, developer tools and more. Some of these services are provided directly by the service providers; other suppliers have embraced wider engagement and created a marketplace ecosystem to cater for a wider audience.
IaaS comes in two main forms: private and public cloud. A private cloud is infrastructure dedicated to a specific client, normally comprising several servers, storage, networks and other associated hardware. The provider supplies the hardware, the hosting environment, hardware break/fix and often the connectivity and virtualisation software. This can also be referred to as a bare metal service.
Public cloud is based on the provision of services from a shared pool of resources. Thousands of clients' applications can be running on the same platform, sharing the same hardware resources. The virtualisation layer maintains segregation and ensures clients get the level of resources they have subscribed to. If this sounds a bit scary, do not worry; it is more common than you think. After all, your bank account is maintained by your bank on the same hardware as millions of other accounts. The large public cloud providers are often collectively referred to as hyperscalers.
As with most things, there are pros and cons to each. Private cloud is inherently less scalable than public cloud, both in capacity and in minimum contracted period. While it may offer a degree of flexibility, the physical hardware providing the services typically needs to be sourced and provisioned for the duration of the contract period.
Costs are generally based on the number of physical servers and their specification in terms of memory and processing power, the number and capacity of storage devices and their performance, network connectivity between the devices and the Internet, and any other associated hardware appliances and services.
The client is effectively leasing the hardware, data centre space and connectivity services, thereby incurring a consistent regular cost. It is possible to get short minimum-term private cloud services, but they usually attract a setup fee. Most private cloud services are based on a minimum one-year term, and they do not usually become cost-effective until you commit to three years.
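To make that concrete, here is a rough back-of-the-envelope comparison in Python. All the figures (setup fee, monthly charges, the discount for a longer commitment) are invented for illustration, not any provider's actual pricing:

```python
# Hypothetical private cloud pricing, for illustration only.
SETUP_FEE = 5_000     # one-off setup fee attached to a short-term contract
MONTHLY_1YR = 4_000   # monthly charge on a 1-year term
MONTHLY_3YR = 3_000   # discounted monthly charge on a 3-year commitment

def total_cost(monthly, months, setup_fee=0):
    """Total contract cost: recurring charges plus any one-off setup fee."""
    return setup_fee + monthly * months

one_year = total_cost(MONTHLY_1YR, 12, SETUP_FEE)   # 53,000 over the term
three_year = total_cost(MONTHLY_3YR, 36)            # 108,000 over the term

# Compared per month, the longer commitment works out cheaper.
print(one_year / 12)     # ~4,417 per month
print(three_year / 36)   # 3,000 per month
```

The point is not the numbers themselves but the shape of the trade-off: the setup fee and higher monthly rate make short terms expensive per month, which is why the three-year commitment tends to be where private cloud becomes cost-effective.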
It’s worth noting that public cloud service providers come in all shapes and sizes. The hyperscalers are global multi-billion-dollar businesses whose capability and reach are extensive, but unless you are a top-spending customer you are unlikely to get a personal service. Mid-size organisations tend to operate at a national level and have usually developed their service from an existing complementary product portfolio. There are also a lot of smaller, often regional, providers that are more approachable and flexible.
While many providers support automation via an Application Programming Interface (API), enabling, amongst other things, on-demand scaling, it is only the larger providers and hyperscalers that can offer true scalability. While not typically a requirement for most business applications, it is a key benefit for those that experience highly variable workloads. Retailers, online bookmakers and streaming media platforms can experience 1,000% spikes in activity around key events or campaigns. The ability for the platform to size itself based on demand has huge value.
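As a sketch of what that self-sizing looks like behind an API, here is some illustrative Python. The function name, target utilisation and fleet limits are all hypothetical, not any real provider's interface; the point is the shape of the logic a scaling API lets you automate:

```python
# A minimal sketch of demand-based scaling logic. In a real deployment the
# result would be sent to the provider's scaling API; here we just compute it.

def desired_instances(current_load_pct, instances, target_pct=60,
                      min_instances=2, max_instances=100):
    """Return the fleet size that brings average load back towards target_pct,
    clamped to the agreed minimum and maximum."""
    if instances <= 0:
        raise ValueError("need at least one running instance")
    # How many instances would spread the current load at the target level?
    needed = -(-current_load_pct * instances // target_pct)  # ceiling division
    return max(min_instances, min(max_instances, int(needed)))

# A spike: load equivalent to 600% lands on a 4-instance fleet.
print(desired_instances(600, 4))   # grow to 40 instances to absorb it
# When demand drops to 30% across 40 instances, shrink back down.
print(desired_instances(30, 40))   # 20 instances
```

This is the value of short billing periods combined with an API: the platform can grow for the event and shrink again afterwards, and you only pay for the peak while it lasts.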
Public cloud is usually charged on a resource basis; typical resources are processing power (virtual central processing unit, or vCPU, cycles), memory and storage. It’s basically a breakdown of the components you need to build a platform. Providers will also typically charge for data transferred in and out of their cloud, and for other associated services.
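For illustration, a monthly bill might be assembled along these lines. The unit rates below are made up for the example, not any provider's actual price list, but the breakdown into chargeable resources is typical:

```python
# Hypothetical unit rates, for illustration only.
RATES = {
    "vcpu_hour": 0.04,        # per vCPU per hour
    "ram_gb_hour": 0.005,     # per GB of memory per hour
    "storage_gb_month": 0.10, # per GB of storage per month
    "egress_gb": 0.08,        # per GB transferred out of the cloud
}

def monthly_bill(vcpus, ram_gb, storage_gb, egress_gb, hours=730):
    """Break a month's usage down into the typical chargeable resources."""
    return round(
        vcpus * hours * RATES["vcpu_hour"]
        + ram_gb * hours * RATES["ram_gb_hour"]
        + storage_gb * RATES["storage_gb_month"]
        + egress_gb * RATES["egress_gb"], 2)

# A modest server: 4 vCPUs, 16 GB RAM, 200 GB storage, 50 GB egress.
print(monthly_bill(4, 16, 200, 50))   # 199.2
```

Note how compute and memory dominate while egress looks trivial at this scale; for data-heavy workloads the transfer charges can become the surprise line item.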
The hyperscalers typically offer very short minimum charging periods, down to the minute in some cases, which is ideal for transactional workloads. Providers not focused on that market, typically the smaller ones, have longer minimum periods, from one hour to a month.
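The effect of billing granularity on a short-lived workload is easy to see with a little arithmetic (the hourly rate here is hypothetical):

```python
# Illustrative only: how billing granularity changes the cost of a short job.
RATE_PER_HOUR = 0.60   # hypothetical cost of one instance-hour

def charge(runtime_minutes, granularity_minutes):
    """Bill usage rounded up to the provider's minimum charging period."""
    periods = -(-runtime_minutes // granularity_minutes)  # ceiling division
    return round(periods * granularity_minutes * RATE_PER_HOUR / 60, 4)

# A 7-minute transactional workload:
print(charge(7, 1))    # per-minute billing: 0.07
print(charge(7, 60))   # one-hour minimum: 0.6
```

For a job that runs for minutes, an hourly minimum costs nearly nine times as much, which is why the per-minute billing of the hyperscalers suits transactional workloads so well.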
There is no hard and fast rule, but due to their inherent characteristics, private clouds will suit organisations that have a consistent workload, prefer a more personal service, are subject to strict industry or government compliance, require fixed-cost assurance, or simply like the comfort of knowing where their data resides.
Public cloud will work for most organisations. The hyperscalers are a good fit for those requiring scalability, transient workloads, access to development tools, multi-regional reach, and anywhere in between, provided you are self-sufficient or have holistic support services in place. The smaller public cloud providers might not match that capability and reach, but if that’s not an issue you can benefit from a more personal service, an element of bespoke tailoring, and possibly more flexibility around the commercials.