IT/ICT 100% Cloud: a NBR special report

The National Business Review (NBR) recently ran a special report on how businesses can utilise cloud services to their full advantage. Nick Shier, iViis Managing Director, was interviewed for the feature by Chris Keall.

The cloud: a reality check

The cloud computing revolution is continuing apace.

Half of New Zealand’s computing infrastructure is now in data centres rather than on hardware in company offices. And there are good reasons for that. Cloud computing outsources many of the headaches of IT, allows an organisation to be agile and is far better for business continuity.
Yet the cloud is not perfect. Here are some of the gotchas.

It’s no cheaper

Cloud computing certainly changes the capital expenditure (capex) vs operating expenditure (opex) model, says Ian Forrester, a one-time accountant and investment banker who now heads Plan B. There is no money to be paid upfront for hardware or software, and a five-person startup can gain instant access to the same apps, online storage and computing power as a big company.

And while individual circumstances vary, he says cloud computing is no cheaper – especially over time. You can amortise the cost of your own hardware. A cloud computing provider will charge you full whack each month. For some companies, colocating their own gear inside a data centre will be the best solution.
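
To see why the monthly model can work out dearer over time, here is a rough back-of-the-envelope comparison. The figures are purely illustrative assumptions, not Plan B’s or NBR’s numbers; any business doing this sum should plug in its own.

    # Illustrative only: hypothetical figures for owned hardware vs an equivalent cloud service.
    OWN_SERVER_CAPEX = 20_000             # one-off purchase, amortised over its useful life (assumed)
    OWN_SERVER_LIFE_YEARS = 5
    OWN_SERVER_RUNNING_PER_MONTH = 150    # power, colocation, maintenance (assumed)

    CLOUD_PER_MONTH = 600                 # monthly charge for comparable cloud capacity (assumed)

    months = OWN_SERVER_LIFE_YEARS * 12
    own_total = OWN_SERVER_CAPEX + OWN_SERVER_RUNNING_PER_MONTH * months
    cloud_total = CLOUD_PER_MONTH * months

    print(f"Owned hardware over {months} months: ${own_total:,}")
    print(f"Cloud over {months} months:          ${cloud_total:,}")
    # With these assumed numbers the cloud costs more over five years,
    # but there is no upfront outlay and capacity can be changed at will.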

You can’t give the IT dept the boot

Moving to the cloud means a business will still need inhouse expertise, or an IT services contractor, to navigate the myriad online options.

And the more software and services move to the cloud, the more this is true. If a business wants to go all-cloud, it will need “a significant investment in skills that can manage change and agility of delivering cloud-based services”.

“Organisations that rely 100% on the cloud also need to ensure they have the appropriate skills attuned to ensuring these services continue to align to business needs,” says Veeam’s Nathan Steiner.

It’s worth noting the most flexible cloud computing services tend to be the most expensive. Earlier in the report, e-tailer and Amazon Web Services (AWS) client Fishpond told NBR it was far cheaper to buy a fixed amount of server capacity from the cloud giant, even though AWS is best known for its pay-as-you-go scalability.

A business needs a staffer or contractor who can keep doing the sums, because the landscape and the most cost-effective options keep shifting.

There are upfront costs

Sure, a business doesn’t have to pay any hardware or software costs upfront. But most small to medium businesses seek external help when moving to the cloud, according to IDC’s latest NZ study, and NBR has been there and can confirm it does cost money. A company might be able to forget about spending $20,000 on another server upgrade, but it can spend just as much on consultants to help choose the right cloud solution and move existing data to the cloud.

Monthly costs can get out of control

It’s easy to add more computing power in the cloud or to add more users to an app. Some bean counters would say too easy. “The costs can run away on you,” Mr Forrester says. Make sure someone on staff is keeping close tabs.

There can be a place for legacy systems

Don’t get caught up in the cloud hype and move all systems online for the sake of it. “Be selective. Not everything needs to be moved. Make sure there is a business imperative,” iViis managing director Nick Shier says.

His company focuses on enterprise-level apps, and its clients are large organisations such as Chorus, IAG and Southern Response.

Most invested a large amount of money in IT systems created in pre-cloud times (legacy systems, in industry speak), which they still use.

“We encourage companies to retain legacy systems and gain better value from them by applying the power of the cloud to their existing business processes,” Mr Shier says.

“Legacy systems should not be replaced just to get them in the cloud. They should be replaced only if they no longer support the functions required of the business or there is a risk in retaining them because they are so old and support is not available. These decisions should be made first, then the decision whether or not to use cloud should be made.”

Often, a hybrid model can work best, he says. “For example, an old legacy procurement system integrated with the latest requisition system in the cloud. The result is modern functions with the legacy back-end.”

Veeam’s Mr Steiner adds, “Rather than moving, redeveloping or migrating [legacy] systems, evaluate what processes can be redefined in order to consume and leverage new services, applications and systems that are already cloud based. This can provide a faster time to market at lower cost.”

He also says ageing systems should be virtualised first, as a priority: this is the first step towards a flexible migration to the cloud, because it abstracts the applications and platform from the underlying storage, network and compute infrastructure.

The cloud isn’t 100% secure

It’s unlikely the computers and servers in an office are secured as tightly as those in data centres run by the major cloud providers.

“But the bigger they are, the harder they fall,” Mr Forrester says. “The big players are a more attractive target for hackers.”

Veeam’s Steiner says cyber criminals are getting more sophisticated.

He also notes that many companies are now unaware of where and how their data is being accessed as staff use tablets, smartphones and other devices to access cloud-hosted data.

“Control represents the biggest and newest challenge with cloud today. Control of data, control of access, control of distribution. Cloud services and data are being pushed and distributed to the edge of millions of devices, instruments and systems that do not fall within the traditional constructs of control,” he says.

The answer, as ever, is to assume you’ll get hit at some point, which means having working backups.

“Mitigating cybercrime will need to centre around processes that ensure recoverability from attack, protection of digital assets such as a data source and technology that provides for visibility and monitoring in real-time of data, applications and services,” Mr Steiner says.

Business continuity can be complicated

The Plan B boss – whose company began in backup and business continuity before expanding, through its acquisitions of Turnstone and Iconz, into a full cloud computing and network connectivity services provider with five data centres nationwide – cautions that while a lot of cloud software providers store multiple copies of data and can give a company a copy, it is often in machine-readable rather than human-readable format.

That is, you can’t do anything with it until the cloud software provider comes back online. A business needs a local copy of, say, its customer relationship management (CRM) data, so salespeople can still call contacts if the internet connection or the cloud provider goes offline for a few hours.
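
One way to keep such a local copy is a scheduled export of key records to a plain file the business can open without the provider. The sketch below is generic and assumes a hypothetical contacts_from_crm() helper standing in for whatever export feature or API a particular CRM actually offers.

    import csv
    from datetime import date

    def contacts_from_crm():
        """Hypothetical placeholder: fetch contacts via the cloud CRM's export or API."""
        return [
            {"name": "Example Contact", "phone": "+64 21 000 0000", "company": "Example Ltd"},
        ]

    def export_local_copy(path=None):
        """Write a dated, human-readable CSV so sales staff can still ring contacts offline."""
        path = path or f"crm-contacts-{date.today():%Y-%m-%d}.csv"
        rows = contacts_from_crm()
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["name", "phone", "company"])
            writer.writeheader()
            writer.writerows(rows)
        return path

    if __name__ == "__main__":
        print("Local copy written to", export_local_copy())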

A local cloud provider can be better because it can talk a business through its options, he says, including whether it should have a daily, hourly or even real-time backup schedule.

Veeam offers backup, disaster recovery and virtualisation management services to around 250,000 customers worldwide, including airlines and top-tier financial services companies in New Zealand.

Its head of systems engineering for Australia and New Zealand, Nathan Steiner, says there are three features of a good backup system:

  1. First and foremost, it keeps three copies of data, on two different media types, with one copy stored offsite. This 3-2-1 rule is the single most crucial component of a good backup plan; a business that doesn’t meet it doesn’t have a good backup plan (see the sketch after this list).
  2. It relies on automation for real-time protection, failover recoverability, ongoing verification, and visibility and monitoring of application assets and data.
  3. It offers the flexibility to provide both continuous and near-continuous availability, recoverability, management and protection of data in line with business criticality.
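
As a rough illustration of the first point, the 3-2-1 rule can be checked mechanically: count the copies, the distinct media types and the offsite copies in a backup plan. The sketch below is a generic illustration only, not Veeam’s software, and the example plan is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class BackupCopy:
        media: str      # e.g. "disk", "tape", "cloud object storage"
        offsite: bool   # stored away from the primary site?

    def meets_3_2_1(copies: list[BackupCopy]) -> bool:
        """True if there are >= 3 copies, on >= 2 media types, with >= 1 offsite."""
        return (
            len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies)
        )

    # Hypothetical plan: production data, a local disk backup and an offsite cloud copy.
    plan = [
        BackupCopy(media="disk", offsite=False),                  # primary copy
        BackupCopy(media="disk", offsite=False),                  # local backup appliance
        BackupCopy(media="cloud object storage", offsite=True),   # offsite copy
    ]
    print("Meets 3-2-1:", meets_3_2_1(plan))   # True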

“Having technology that can underpin and align to business processes when it comes to backup and recoverability of critical data assets is true alignment of technology enablement against business outcomes,” he says.

The cloud doesn’t have 100% uptime

When on-premise servers have problems, it is the business that has to scramble, identify the fault and get them back online – often a major hassle, and more so if it happens after hours.

All of the major cloud providers have multiple redundant systems and excellent uptime records overall. But again, nothing is perfect.

A number of major websites and services, including Xero and Instagram, went offline for up to 11 hours on March 3 as the giant global cloud provider Amazon Web Services (AWS) suffered technical problems – later traced to human error. An AWS staff member had mistyped a command, inadvertently taking a swathe of servers offline and affecting clients around the world.

And on June 5 last year, a massive storm on Australia’s East Coast knocked out power and AWS’ Sydney data centre went offline as backup power systems failed to kick in. Australasian service providers whose apps run on AWS, including Westpac’s online banking, Domino’s pizza ordering app and Spark’s Lightbox streaming video-on-demand service, were offline for about five hours.

Click here to read the full cloud services NBR report.