Co-location ... or HaaS if it had been invented today

Updated: Sep 4, 2020

Matt Johnson

Operations Director

Co-location is the provision of the environment and services required for the secure and reliable operation of core IT equipment. Before co-location became widely available, only larger organisations could fund purpose-built facilities, i.e. Data Centres. These are multi-million pound investments, so for most organisations the requirement was met by converting office space into a server room, sometimes a hideaway for overworked IT staff, a location for inappropriate calendars (well, it was the 80s) and often the epicentre of unplanned, service-impacting incidents.

Usually sited in an organisation’s HQ, their design and build varied considerably. Many were compromised by their location within the building, the building’s suitability to support outside plant equipment, the building’s exposure to external risks, insufficient investment in redundancy, and the close proximity of other functions within the building, including visitors, flammable goods, the gas boiler and so on.

Like any functional space, they must be designed and built to a specification that defines their capability and capacity, including physical space, security, access, fire detection and suppression, floor loading, power and cooling. In support of their digital agendas many organisations have increased their IT equipment footprint; however, ordering additional hardware is a lot easier and cheaper than scaling a server room and all its associated systems. It was therefore not unusual for the integrity of server rooms to be compromised by oversubscription.

Physical overcrowding is an often-overlooked root cause of unplanned service outages. When space becomes an issue, a logical and sensible rack layout goes out of the window and equipment is “wedged in” wherever there is room, in extreme cases on top of racks and in the access spaces around them, which puts not only services at risk but also people’s safety.

Overpopulated and unstructured racks also typically suffer from poor cable management. This results in difficulty with patching, damage to patch lead connections, poor access to equipment, early equipment failure due to impeded cooling and the use of undesirable, and in some cases unsafe, power distribution, not to mention the obvious hindrance when troubleshooting.

It can also lead to oversubscribing the power and cooling capacity. These systems are typically deployed in an N+1 configuration, so the failure of any one system does not lead to a total loss of service: for example, fitting three air conditioning units when only two are required to maintain the desired room temperature. Service can then continue while the faulty unit is repaired.

If the design limits are exceeded, that reserved capacity is used up. When one of the systems then fails, the room temperature rises rapidly, which often leads to equipment shutting down, or needing to be shut down, before it sustains damage.
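To illustrate how quickly N+1 headroom disappears, here is a minimal Python sketch using entirely hypothetical figures: a room designed for a 20 kW IT load, cooled by three 10 kW air conditioning units (two are sufficient at the design load, the third is the spare).

# Hypothetical figures for illustration only.
UNIT_CAPACITY_KW = 10   # cooling capacity of each air conditioning unit
UNITS_INSTALLED = 3     # N+1: two required at design load, one spare

def cooling_margin(it_load_kw, units_available):
    """Return spare cooling capacity in kW (negative means the room overheats)."""
    return units_available * UNIT_CAPACITY_KW - it_load_kw

for load in (20, 28):   # design load vs. an oversubscribed room
    normal = cooling_margin(load, UNITS_INSTALLED)
    after_failure = cooling_margin(load, UNITS_INSTALLED - 1)
    print(f"{load} kW load: margin {normal:+} kW normally, "
          f"{after_failure:+} kW with one unit failed")

At the 20 kW design load the room survives the loss of a unit; at 28 kW the margin with one unit failed is negative and the temperature climbs, which is exactly the scenario described above.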

Resilience can also be compromised by a failure to monitor and test redundant systems; deliberately creating failure scenarios to test them is neither a natural nor a comfortable thing to do, as my prematurely grey hair can attest!

In short, office buildings rarely make ideal Data Centres. Suitable infrastructure is unlikely to be cost effective at smaller scale, scaling up can be a big challenge, IT support staff cannot be expected to be custodians of plant equipment, and the critical role played by IT in most organisations requires a reliable, secure, robust and scalable environment.

Co-location services give organisations the ability to site their equipment in fit-for-purpose Data Centres. Typically, the service is priced on the amount of physical space or space in a rack, internet connectivity and electrical power consumption. Rack space of less than 10u is often provided in shared racks, i.e. your equipment and other organisations’ equipment share the same rack, so it may potentially be accessed by third parties. From an information security perspective this is not ideal and can cause challenges around ISO 27001 certification.

Requirements of more than 10u can often be satisfied by dedicated rack space, meaning the space housing your equipment has its own front and rear doors, ensuring no one else has access without your permission. Dedicated racks are typically available in 10u, 20u, 42u and 48u sizes. These suit most organisations; however, some requirements mandate the use of cages around the racks to provide an additional layer of physical security.

Power is usually priced in amps; at UK mains voltage there are roughly 4 amps to a kilowatt. Some providers monitor the power used and charge based on actual consumption; others provide a power budget for a fixed cost and alert you if you exceed it. Most Data Centres can provide around 16 to 32 amps per rack, which will meet most typical requirements; some offer up to 100 amps for ultra-high density, but such requirements are very unusual. Check for additional charges if you exceed your power allocation (overage).
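For a quick sanity check on a power budget, the conversion is simply watts = volts × amps. The short Python sketch below, assuming a UK single-phase supply at 230 V, shows where the “roughly 4 amps per kilowatt” rule of thumb comes from and what a typical per-rack feed equates to.

# Assumes a UK single-phase supply at 230 V; figures are illustrative.
VOLTAGE = 230

def amps_to_kw(amps, voltage=VOLTAGE):
    return amps * voltage / 1000

def kw_to_amps(kw, voltage=VOLTAGE):
    return kw * 1000 / voltage

print(f"1 kW draws about {kw_to_amps(1):.1f} A")   # ~4.3 A, hence the rule of thumb
for feed in (16, 32):                               # common per-rack feeds
    print(f"A {feed} A feed supports roughly {amps_to_kw(feed):.1f} kW of equipment")

A 16 amp feed works out at around 3.7 kW and a 32 amp feed at around 7.4 kW, which is why those feed sizes cover the vast majority of rack deployments.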

The reliability of a Data Centre’s services is often referenced against the Tier structure maintained by The Uptime Institute or the TIA-942 standard. Facilities are graded from Tier 1 to Tier 4: Tier 1 offers only the most basic level of resilience and therefore typically offers an availability of 99.67%, rising to 99.995% for Tier 4. Some providers are creating their own Tier 5 specifications. In theory you are best going for the highest Tier facility; in practice, the law of diminishing returns, and in my view the wisdom of splitting your systems across two geographically diverse Data Centres, means a Tier 3 facility will meet most organisations’ requirements.
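Those headline availability figures are easier to compare when converted into permitted downtime per year. A minimal Python sketch follows, using the availability percentages commonly quoted for the four tiers.

# Convert quoted availability percentages into allowable downtime per year.
HOURS_PER_YEAR = 365 * 24

tiers = {
    "Tier 1": 99.671,   # percentages as commonly quoted for the Uptime Institute tiers
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% availability = about {downtime_hours * 60:.0f} "
          f"minutes ({downtime_hours:.1f} hours) of downtime per year")

Tier 1 allows roughly 29 hours of downtime a year, while Tier 4 allows around 26 minutes; the gap between Tier 3 and Tier 4 is well under two hours, which is part of why diminishing returns set in so quickly.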

Data Centres can be classified into two main types: Peering Data Centres and Proprietary Data Centres. Peering Data Centres, or Carrier Neutral Data Centres, focus on co-locating organisations that want to interconnect with other organisations, and typically host large numbers of Service Providers. The most prominent of these in the UK is probably TeleHouse in London. Established 30 years ago, it hosts over 160 service providers; with a peering agreement it is possible to link to Service Providers via fibre patching (termed cross-connects) provided by the Data Centre, which negates the need for a data circuit. However, space is at a premium, so the cost of hosting is higher.

Proprietary or Private Data Centres are typically operated by Service Providers that offer a broad range of services, from co-location to Cloud, and are more cost effective. Many offer their own high-bandwidth connectivity into Peering Data Centres, so they can provide good peering options if required. This type of Data Centre is ideal for most organisations’ hosting requirements.

I have run out of blog space and feel I have not done the subject justice; in essence, there is no one size fits all and some Data Centre providers are better than others. I would consider Tier 3 and ISO 27001 as minimum requirements, table stakes you might say! I would also advise a dual Data Centre deployment for critical applications, ideally with diverse Data Centre providers. However, before you dive into any long-term co-location service, serious consideration should be given to IaaS and PaaS, which I will cover in my next blog.
