Roundtable: Data centres (part 1)


By GovTechReview Staff
Friday, 30 August, 2013


Data centre evolution has taken major steps forward as an ever more intense focus on energy and operating efficiency drives physical and virtual consolidation. These two super-trends have in turn spawned a wave of innovation by vendors offering new perspectives on everything from the physical plant of the data centre up to the management techniques used to handle previously unthinkable volumes of virtual machines.

GTR recently assembled technical experts from all levels of data centre infrastructure to find out what they see as the most significant opportunities – and the biggest threats – moving forward. Participants include Ross Dewar, managing director of Emantra, a Microsoft-centric provider of enterprise-class software infrastructure; Malcolm Roe, general manager of data centre operator Metronode; David Blumanis, data centre advisor for Asia-Pacific & Japan with data-centre infrastructure provider Schneider Electric; and Bevan Slattery, executive deputy chairman of IPO darling and data-centre market newcomer NextDC.

GTR: What’s driving demand in the data-centre market these days?

SLATTERY: There’s a lot of demand, particularly for independent data centres. A lot of systems integrators have sold boxes and software licences, and they’re now moving into the services arena. For them to do that they need a data-centre capability, so a big part of our business is provisioning a data-centre capability for managed service providers (MSPs) and cloud providers.

An area where we’re seeing a lot of growth is virtual desktop infrastructure (VDI). If you’re operating a VDI environment you really need to have it sitting in the same state as its users, if only from a latency perspective. The good news is that you’ve got hubs for different industries in different cities – so some people are targeting certain markets and succeeding in winning clients. Most of our space is taken up by MSPs, and the rest by corporate clients and sometimes government.

DEWAR: To own and operate a data centre these days is a specialist, highly capital-intensive business with its own market and established players. A decent data centre of the type that would find itself on AGIMO’s Data Centre Facilities panel is a multi-hundred-million-dollar investment in its own right – so I think governments are looking to be offered models that give them an alternative to building their own. They’re looking for a flexible centre with varying levels of requirement, varying levels of high availability, and varying levels of security through differently defined information-classification capabilities. They want to be able to match facilities to requirements more closely than they can at the moment.

ROE: The larger government agencies are moving from ownership of data centres to colocation, and they are making the IT services provided out of those facilities contestable.

In the past, they have tended to move into facilities owned or operated by IT providers. What that doesn’t give them is contestability of the IT platform and managed IT services; moving into an open, third-party-owned facility gives them that flexibility. In the past, IT was seen as a strategic differentiator. Now they’re saying that they can still maintain their differentiation around IT and the intellectual property behind their IT systems, but they don’t necessarily need to tie up large wads of capital in facilities or, increasingly, in owning IT platforms at all; all they need to own is the applications that run on those platforms.

GTR: So that’s the demand side; how is the industry responding?

BLUMANIS: Seven years ago we saw a dynamic coming into the data-centre space: the design and operation of data centres was going to change forever. So we positioned ourselves in that space with in-row cooling technologies like hot-aisle containment, and modular, scalable UPS and battery solutions. In the last three or four years, it became clear the next dynamic was going to be the energy dilemma: the industry has been through the Internet age, is going through the digital age now, and the next stage that’s going to hit the industry is the energy age.

[quote style="1"]Now, with cloud computing, it very much does impact the infrastructure and design, and the way you operate the digital infrastructure.[/quote]

Demand for electricity is increasing dramatically with the amount of data and information that everyone wants to receive – and this means more storage and more demand in data centres. Even though data centres are becoming more efficient, their electricity consumption is still increasing because they’re processing more per square foot of space.

Now, with cloud computing, it very much does impact the infrastructure and design, and the way you operate the digital infrastructure. We’ve also got the dynamic that this infrastructure needs to live for the next 15 to 20 years – and we’re doing technology refreshes every three years, so it’s still a bit of an unknown what’s coming in the next 10 to 15.

SLATTERY: It’s worth mentioning telecommunications, too. In 2005-2006, intercapital pricing – between Brisbane and Sydney, for example – was around $80 to $100 per megabit per second per month. In 2012, it’s down to $2 to $3. It’s not that [carriers are] making any less money, but people’s bandwidth consumption has increased dramatically.

Two to three years ago, people weren’t even thinking of doing near-real-time data replication between Sydney, Melbourne, Perth and so on. Now they’re doing it; I can buy a 1Gbps link for $3000 to $4000 a month, which is amazing. That bandwidth is a massive enabler because people don’t have to think about all these edge devices and can deliver a massive amount of service on their infrastructure. When you can throw 1Gbps between states, it makes things interesting – and it really drives the market by letting data move so it doesn’t have to be Sydney-centric.
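Those two sets of figures line up; a quick back-of-the-envelope check (assuming, as is conventional for wholesale capacity, that prices are quoted per megabit per second per month):

    # Sanity check: what does a 1Gbps intercapital link cost per Mbps?
    # Prices are the ones quoted above; the per-Mbps-per-month
    # pricing convention is an assumption.
    link_capacity_mbps = 1000                 # a 1Gbps link
    for monthly_cost_aud in (3000, 4000):     # quoted monthly price range
        per_mbps = monthly_cost_aud / link_capacity_mbps
        print(f"${monthly_cost_aud}/month -> ${per_mbps:.2f} per Mbps per month")

That works out to $3 to $4 per Mbps per month – the same order as the quoted 2012 intercapital pricing.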

GTR: How are data-centre operators capitalising on the controversy over offshore data centres?

BLUMANIS: Because of the pressures of the GFC, many industries are moving from capex to opex models; for the first time, they’re being allowed to put these workloads in a colocation facility. Another issue is the question of ‘where is my data?’. I’m seeing governments like Singapore’s building dedicated networks with cloud providers, so they can be guaranteed their data lives only in Singapore and doesn’t cross borders. There’s no way the Australian government would want its data sitting in China or the US; from the government perspective it’s another level of risk.

DEWAR: Over the last 18 months we’ve been studying the Canberra market and closely following AGIMO’s pronouncements, and our conclusion was that it’s mature enough to invest in a capability that we’re now happily selling in Canberra. We offer Data Centre as a Service (DCaaS) aimed at smaller FMA agencies – the ones with 50 to 500 or so users that can’t afford to do it themselves, or currently have it done for them by their lead department or agency.

The modern CIO in these departments needs to be much less of a technician than a business and risk specialist – someone who can look at the risk and weigh up the least-cost alternative. This gives them an alternative to going to a data centre and putting all the components of a service together themselves. If an agency rang me up and said ‘can I have 500 seats of MS Exchange by this afternoon?’, we can deliver that at low cost and high scale, on a zero-usage, zero-pay basis. We’re not selling a technology; we’re selling an outcome.

GTR: There has been a lot of data-centre construction of late. How does the business case compare between refurbishing an existing facility and building a new one?

DEWAR: There have been huge advances in efficient data-centre designs and technologies of late. In my view, Australia does not have any really third-generation facilities; most of the data centres we have here are traditional, enclosed, Fort Knox-type things with huge overheads in terms of power and cooling. They’re probably not terribly efficiently designed for modern use of virtualised servers, or in their ability to power a square unit of data-centre real estate. You’ll find the huge data centres being built by Google and others are totally different, third-generation designs that are much more efficient. So, to some extent, refitting an old data centre might be like putting new springs on your horse-drawn buggy. Why not just get a car?

[quote style="1"]In my view, Australia does not have any really third-generation facilities; most of the data centres we have here are traditional, enclosed, Fort Knox-type things with huge overheads in terms of power and cooling.[/quote]

BLUMANIS: Data-centre audits really come down to finding the low-hanging fruit: where you can put Band-Aids to buy yourself some time before potentially building a new data centre. A new data centre is a large capital outlay; and when organisations deploy new IT they often haven’t considered the power and cooling infrastructure, so they find they have hotspots and major capacity issues within their facilities. So we come along, find the things you can do now to buy yourself some time, and identify what you can do to retrofit the facility.

Remember that the cost is not just in dollar terms, but in downtime. The amount of risk mitigation needed to do clean work inside a live data centre is enormous, and Schneider Electric has built that skill. If customers can’t afford the massive capital outlay to build a new data centre, it makes sense for them to spend maybe 25% of the cost of a new DC to retrofit their existing one.

The other consideration is whether you are in leased premises or your own. If you’re in leased premises and you tell the building manager you need to put another chiller on the roof, the building could be constrained, or the owner won’t want to spend the money to provide it.

But if it’s your own building and you’re going to do the retrofit to make it more energy efficient, we would advise you on the best PUE you can get, and how to structure the capital expenditure to get that PUE. You have to do it in a staged fashion because you can’t shut down for a week; with some of these facilities a retrofit may take six months. It’s a risk, and managing that risk is a big part of why the work has to be staged.
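PUE (power usage effectiveness) is the ratio of total facility power to the power reaching the IT equipment, so a figure closer to 1.0 means less energy lost to cooling and power distribution. A minimal sketch of the calculation, using hypothetical before-and-after numbers rather than figures from any particular facility:

    # PUE = total facility power / IT equipment power (hypothetical numbers).
    it_load_kw = 500.0

    overhead_before_kw = 500.0   # cooling, UPS losses, lighting pre-retrofit
    overhead_after_kw = 200.0    # assumed overhead after a cooling/UPS retrofit

    pue_before = (it_load_kw + overhead_before_kw) / it_load_kw   # 2.0
    pue_after = (it_load_kw + overhead_after_kw) / it_load_kw     # 1.4
    print(f"PUE before: {pue_before:.1f}, after: {pue_after:.1f}")

On those assumptions, the retrofit cuts non-IT overhead from 100% to 40% of the IT load – the kind of improvement a staged energy-efficiency retrofit is chasing.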

GTR: How has data-centre planning changed with the push towards higher-density computing?

ROE: The challenge we’ve always grappled with is: what will the average power density look like in five years’ time? The trend today is certainly towards higher power densities, and it’s increasingly common to see 10kW racks. The IT industry is responding by coming up with lower-voltage, more energy-efficient servers.

If you strike an average power density across the data centre, it will invariably be too high or too low. If it’s too high, you end up running out of floor space before you run out of power and cooling infrastructure. If you set it too low, you will run out of power and cooling infrastructure before you run out of floor space. We’ve chosen a modular facility deliberately, so we can trim our average power densities. That is the hardest question we have to answer today.
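The trade-off Roe describes can be made concrete with a small capacity-planning sketch. All of the numbers here are hypothetical – a fixed power budget and a fixed number of rack positions, with the actual average draw per rack varied:

    # Which constraint binds first: floor space or power/cooling capacity?
    # Hypothetical facility: 400 rack positions, 3MW of power/cooling.
    floor_rack_positions = 400
    power_budget_kw = 3000.0

    for avg_kw_per_rack in (5.0, 7.5, 10.0):
        racks_by_power = power_budget_kw / avg_kw_per_rack
        usable = min(racks_by_power, floor_rack_positions)
        binding = ("floor space" if racks_by_power > floor_rack_positions
                   else "power/cooling")
        print(f"{avg_kw_per_rack:>4} kW/rack: {usable:.0f} racks usable, "
              f"limited by {binding}")

A modular build lets the operator adjust the provisioned kW per rack as the real tenant mix becomes clear, rather than locking in one average on day one.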

And we are seeing very large users prepared to abandon chilled-water cooling altogether in the facility; increasingly, the server platforms can be run a lot hotter than OH&S comfort requirements would otherwise dictate. So one of the approaches we’re looking at is increasing hot-aisle temperatures to effectively double the cooling capacity we have in the facility.

That’s not going to be for everyone, but certainly those with large-scale platforms are open to running the hot side at temperatures that we would never normally consider. We’re even seeing supercomputing platforms that previously used chilled water going to straight air cooling. We’ve eliminated chilled water altogether in our new-generation facilities.
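The ‘double the cooling capacity’ arithmetic follows from the sensible-heat relation Q = ṁ · cp · ΔT: for a fixed airflow, the heat an air stream carries away scales linearly with the temperature rise across the equipment. A minimal sketch with hypothetical figures:

    # Sensible heat removed by an air stream: Q = m_dot * cp * delta_T.
    # At fixed airflow, doubling the allowable hot-aisle temperature rise
    # doubles the heat carried away. Figures are hypothetical.
    air_mass_flow_kg_s = 10.0    # fan-limited airflow through the racks
    cp_air_kj_per_kg_k = 1.005   # specific heat of air at constant pressure

    for delta_t_k in (10.0, 20.0):   # cold-aisle to hot-aisle temperature rise
        q_kw = air_mass_flow_kg_s * cp_air_kj_per_kg_k * delta_t_k
        print(f"dT = {delta_t_k:>4} K -> {q_kw:.0f} kW removed")

Running the hot aisle 10 K hotter at the same airflow roughly doubles the heat removed – which is why operators willing to tolerate hotter hot sides can stretch the same cooling plant much further.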

And that’s one of the themes we’re seeing: we’re moving towards a smart environment, with the data centre being as smart as some of the IT equipment – more automation around heating and ventilation control, asset management, change management, and even portal access for tenants to give them full reporting on consumption and environmental statistics. – David Braue

This is the first part of a roundtable that ran in the June/July 2012 issue of Government Technology Review. Part 2 will run tomorrow.
