
Will Iceland become the next global data centre hub?

23 Jul 2014
by
Puni Rajah
Puni built her reputation for fact-based decision support as a business analyst at Deloitte. Since then, she has conducted IT consumption, management and sales research in the European and Asia/Pacific markets. She is an expert in understanding enterprise software and services use and buying behaviour, and has coached marketing and sales executives. Puni is the Content Analyst for Data Centre EXPO, which takes place alongside IP EXPO Europe at ExCeL London on 8 - 9 October 2014.
Iceland - Global Data Centre Hub

London, Amsterdam and Frankfurt are big data centre markets, but their high land values make further expansion expensive. One alternative being mentioned increasingly often is Iceland, which on initial inspection appears very well placed to become the next global data centre hub.

Data Centre Hub Fundamentals: Low cost and high quality
Unlikely as it may sound, Iceland has huge advantages as a potential data centre hub. First of all, it has relatively cheap electricity: most of Iceland’s power comes from hydro-electric or geothermal plants, drawing on readily available natural resources. Early estimates suggest that the cost of a kilowatt of usable power is about $125 in Iceland, compared with $500 in Manhattan, which makes a huge difference to the running costs of data centres. The power-hungry aluminium smelting industry has traditionally taken advantage of this, but Icelanders are hoping that the almost equally power-hungry data centre sector might do so too.
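
To put those estimates in perspective, here is a minimal sketch (Python) of how the per-kilowatt figures quoted above translate into provisioning costs; the 5 MW facility size is an assumption chosen purely for illustration.

```python
# Illustrative only: compares the article's rough per-kilowatt figures for a
# hypothetical facility. The $125 / $500 numbers are the estimates quoted above;
# the 5 MW load is an assumption made for the sake of the example.
COST_PER_KW_ICELAND = 125    # USD per kW of usable power (early estimate)
COST_PER_KW_MANHATTAN = 500  # USD per kW of usable power (early estimate)

def facility_cost(load_kw: float, cost_per_kw: float) -> float:
    """Return the cost of provisioning `load_kw` of usable power."""
    return load_kw * cost_per_kw

load_kw = 5_000  # a hypothetical 5 MW data centre
iceland = facility_cost(load_kw, COST_PER_KW_ICELAND)
manhattan = facility_cost(load_kw, COST_PER_KW_MANHATTAN)
print(f"Iceland:   ${iceland:,.0f}")
print(f"Manhattan: ${manhattan:,.0f}")
print(f"Saving:    ${manhattan - iceland:,.0f} ({1 - iceland / manhattan:.0%})")
```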


Secondly, the climate is relatively cool all year round. There’s no need for air-conditioning to keep servers cool, as you can simply open up the plant to the outside air. This keeps costs down as well as making operations more environmentally friendly. The third advantage is location. Strategically placed in the middle of the Atlantic, Iceland is likely to attract customers from both Europe and North America. It also has relatively low crime and corruption, and a good supply of potential data centre employees and IT engineers. Taking all of this together, PricewaterhouseCoopers concluded as far back as 2007 that Iceland could deliver relatively low-cost but high-quality data centre services.

Learn about Europe’s Data Centre Hubs at Data Centre EXPO, 8 – 9 October, ExCeL London

Natural disadvantages
So what’s the problem? Why isn’t Iceland already a data centre hub? There have been two main issues: bandwidth and taxes. Until quite recently, Iceland didn’t have the connectivity infrastructure that data centres require, but more suitable undersea cabling has now been installed. The previous 25% tax on imported servers has also been abolished, meaning that cost savings are realisable, rather than swallowed up by taxes.

There are other issues, of course, largely related to nature. Iceland is a fairly remote island, although some commentators have suggested that geographical isolation could be a plus from the physical security point of view. And then there are the volcanoes and earthquakes. Even if data centres are located well away from these areas, it’s still quite hard to persuade your potential customers that you won’t let them down when the earth moves until you’ve demonstrated it in practice.

The future for Icelandic data centres
The two companies already running data centres in Iceland, Verne and Advania Thor, report that customers are gently starting to arrive. Many of them are local start-ups, such as Green Qloud in Reykjavik, but a steady trickle of customers from North America and Europe is starting to show an interest, including Datapipe, a large hosting and colocation provider. Google has established a company in Iceland, attributing the decision to Iceland’s location and its potential as a data centre hub.

The challenge now for Iceland is to expand before other locations like India and China get there first. And to do so, it needs to demonstrate that it’s got the infrastructure. Perhaps more importantly, it needs to demonstrate to potential customers that they won’t lose their data if there’s an earthquake, or one of the cables gets damaged. In these days of instant connectivity, everyone needs to know that there’s a back-up plan. And if your disaster recovery plan is the cloud, you need to know that your cloud-providing data centre has its own rock-solid back-up plan.




How the CIO, CISO and CSO roles are changing

21 Jul 2014
by
Paul Fisher
Paul Fisher is the founder of pfanda - the only content agency for the information security industry. He has worked in the technology media and communications business for the last 22 years. In that time he has worked for some of the world’s best technology media companies, including Dennis Publishing, IDG and VNU. He edited two of the biggest-selling PC magazines during the PC boom of the 1990s, Personal Computer World and PC Advisor. He has also acted as a communications adviser to IBM in Paris and was the Editor-in-chief of DirectGov.co.uk (now Gov.uk) and technology editor at AOL UK. In 2006 he became the editor of SC Magazine in the UK and successfully repositioned its focus on information security as a business enabler. In June 2012 he founded pfanda as a dedicated marketing agency for the information security industry, with a focus on content creation, customer relationship management and social media. Paul is the Editorial Programmer for Cyber Security EXPO, which runs alongside IP EXPO Europe, 8-9 October, ExCeL, London.
Rick Howard CSO Palo Alto Networks CISO Role

Rick Howard, CSO at Palo Alto Networks, will be appearing at Cyber Security EXPO in a keynote presentation that analyses how the CISO and CIO fit into the C-suite.

As a taster, here are some of his thoughts on the changing roles of senior enterprise security people.

1. What has changed about the job description of the CISO in 2014 compared to recent years?

The job description for the people who are responsible for security within an organization has been in a state of flux for over a decade. Since Steve Katz became the first CISO back in 1995, the security industry specifically, and business leadership in general, have been thinking and rethinking the need for such a person and the responsibilities they should have.

Citigroup became the first commercial company to recognize the need for the brand new corporate CISO role when they responded to a highly publicized Russian malware incident. As cyber threats continued to grow in terms of real risk to the business and in the minds of the general public, business leaders recognized the need to dedicate resources to manage that risk.

The first practitioners came out of the technical ranks: the IT shops. Vendor solutions to mitigate the cyber threat ran on networks and workstations, and in order to manage those solutions, it was helpful to have people who understood that world. But this was a new thing for the techies, and trying to translate technical risk to a business leader did not always go very well. It became convenient to tuck these kinds of people underneath the CIO organization.

CISOs began working for the CIO because, from the C-Suite perspective, all of that technical stuff belonged in one basket. As business leaders began applying resources to mitigate cyber risk, other areas of security risk started to emerge: physical security, compliance, fraud prevention, business continuity, safety, ethics, privacy, brand protection, etc.

The Chief Security Officer (CSO) role began to get popular with business leaders because they needed somebody to look at the entire business; not just cyber security risk to the business but general security risk to the business. CSO Magazine launched in 2002 to cater to that crowd. Since then, the industry has been in flux. Not every company organizes the same way. While the Chief Information Officer (CIO) has made its way to the executive suite in some companies (Intel Corp and McAfee to name two), that is by no means the norm.

2. Will the CISO (Chief Information Security Officer) become a distinct role? Will it become more or less common and why? What does this role now encompass?
The CISO role has emerged in the last five years as the de facto role for managing cyber security. If there isn’t somebody in the organization with the title of CISO, there is somebody in charge of IT security. This person generally works for the CIO, but not in all cases.

From speaking with many CISOs, CSOs, and CIOs, it seems the community has decided that the IT groups handle the day-to-day IT operations while the security groups have much more of an oversight role: Risk Assessment, Incident Response, Policy, etc. This means that the IT groups keep the firewalls up and running while the security groups are monitoring the logs and advising the CIO on security architecture and policy. Let me just say that I don’t think this is the right model either.

In this modern world, I do not believe that security should be subservient to operations in all cases. Yes, the company has to keep its servers operational, but that does not imply that if push comes to shove, security is the first thing that we turn off in order to maintain operations. For companies that understand risk to the business, security and operations are peers.

Read More

3. Is it right that physical and digital security should be merged under one organizational umbrella or should they be kept separate?
I understand why organizations have these two separate security groups. Before the Internet days, we did not have a CISO function. We did have a physical security function, but it was usually relegated to the bottom of the leadership chain.

You needed guards and fences and things like that, but those kinds of operations were more like commodity items, like power to the building or trash pickup. You needed them, but once you established them, they did not materially affect the business even if they failed for a day or two. Because of this, physical security tended to fall under the Facilities Management groups.

With the Internet of Things, though, the situation has changed. Everything is interconnected. Just like every other organization in the business, the physical security groups have a lot of IT security components (badges, surveillance cameras, etc.). These groups and their electronic tools could still operate by themselves, but it makes sense that business leadership tasks somebody in the company with making sure that these tools are compatible with the approved security architecture plan.

In my mind, that is the CSO organization. Just like the idea that there is no such thing as cyber risk to the business, only risk to the business, I don’t think there is a need for separate cyber security and physical security teams. It is all security. Just for ease of management, it makes sense to keep it all under one umbrella. My perfect organization would have a CSO in charge of all security of the company. The CISO would work for him with a dotted line to the CIO.  The Physical Security Director would also work for the CSO but would have a close working relationship with the CISO.

4. What skills and qualities should companies be looking for in a CSO going forward? Is the next generation about to enter the workforce going to be equipped for the role? Is the skill set broadening or narrowing?
I still believe the CSO should come up from the technical ranks. Today’s world is so complicated technically that if you do not have that background, you will be completely overrun by the latest security trend. The CSO skill that has to be learned though is how to translate that technical knowledge into something that a business leader will understand or care about.

Join Rick on Wednesday 8th October, 14:20 – 14:50 in the Cyber Security EXPO Keynote Theatre.

 

 

IP EXPO Europe Launches “Futures Den” Start-Up Fund

21 Jul 2014
by
Olivia Shannon
Olivia is an award-winning writer and technology PR and social media expert. She has worked on PR, social media and content marketing campaigns in multiple industries, but mostly information technology. American by birth, Olivia earned a bachelor's degree in Creative Writing from Beloit College, summa cum laude and with English departmental honours, and she is a member of the Phi Beta Kappa academic honorary society. Before launching her PR career, Olivia worked as a writing tutor and in the elections division of the Office of the Missouri Secretary of State. She is interested in writing about enterprise technology, start-ups, and the way technology transforms business and communication. Olivia Shannon is an editorial specialist and the co-director of Shannon Communications, an enterprise technology public relations firm.
Futures Den @ IP EXPO

For the first time, enterprise technology start-ups have a chance to exhibit at IP EXPO Europe free of charge. The organisers of Europe’s leading cloud and IT infrastructure event have created a “start-up fund” that will give early-stage businesses free exhibition and marketing packages in the start-up focused “Futures Den” feature at this year’s event, held on the 8th and 9th of October at the ExCeL Centre London. To apply, start-ups should enter by 31st July by filling out the start-up fund application form.

Altogether, the estimated prize value of IP EXPO Europe’s new start-up fund is nearly £60,000. Ten enterprise technology start-ups will receive complete exhibition and marketing packages valued at over £5,000 each. A further five start-ups will receive a free Futures Den speaking opportunity worth £1,500 each. Five more start-ups will receive free entries to the Tech Trailblazers Awards worth £175 each.
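
For the curious, the “nearly £60,000” figure follows directly from the package values listed above; a quick worked check, using the nominal prices quoted:

```python
# Quick check of the prize-fund total quoted above, using the listed package values.
packages = [
    (10, 5_000),  # exhibition and marketing packages
    (5, 1_500),   # Futures Den speaking opportunities
    (5, 175),     # Tech Trailblazers Awards entries
]
total = sum(count * value for count, value in packages)
print(f"Total prize value: £{total:,}")  # £58,375 -- i.e. nearly £60,000
```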

Access to the start-up fund is open to privately funded companies under five years old, whose products or services fall into one of the following enterprise technology sectors: big data, cloud, mobile, networking, security, storage, sustainable IT and virtualization.

Launched last year, IP EXPO Europe’s Futures Den gives enterprise technology start-ups opportunities “to connect with potential partners, distributors and end-users, and for IT decision-makers to gain insight into newly developed and future technology,” says a press release about the new start-up fund. According to the event organisers, the Futures Den puts start-ups in front of “over 15,000 visitors responsible for building, running and protecting IT infrastructures at European businesses and governments.”

This year’s Futures Den agenda will include panel discussions, open networking and five-minute start-up pitches, with panellists including experts from VCs and accelerators, successful enterprise technology start-ups, and marketing, legal and accounting firms.

To enter IP EXPO Europe’s start-up fund, please visit http://www.ipexpo.co.uk/Futures-Den-Form.



IP EXPO Europe On Demand Panel: Public, Private or Hybrid Cloud

18 Jul 2014
by
Mike England
Mike is the Content Director at Imago Techmedia, which runs IP EXPO Europe, Cyber Security EXPO and Data Centre EXPO.
IP EXPO Europe On Demand PANEL - Public, Private or Hybrid Cloud Front Page

In the age of ‘cloud-first’ IT strategies, how are businesses deciding where and how to host their workloads? How do they decide between public and private cloud – and how can they avoid making the wrong choice? Finding answers to these tricky questions looks set to be a popular theme at IP EXPO Europe 2014.

View the first in this exclusive series of IP EXPO Europe panel debates, in which Consulting Editor Jessica Twentyman is joined by Kate Craig-Wood, co-founder and managing director of cloud hosting company Memset, and Peter Mansell, sales manager for HP Helion at systems, software and services giant Hewlett-Packard, as they explore public, private and hybrid cloud.

Viewers will learn:


- the difference between public and private cloud computing and how organisations choose between them;
- why the hybrid cloud model may offer the best of both worlds, and how a cloud platform like HP Helion can help;
- how leading cloud hosting companies such as Memset are able to advise customers on the best cloud model for their own workloads – and offer all three;
- how smart companies are tackling the cloud integration challenge, stitching cloud systems together to create a coherent whole.




Is it time to think about disaster recovery as a service?

16 Jul 2014
by
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16-year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.
Disaster recovery via Hybrid Cloud

Continuing the series on Building the Hybrid Cloud, we speak to IP EXPO Europe 2014 speaker, Joe Baguley, chief technology officer for EMEA at VMware, about why organisations are increasingly turning to hybrid cloud technology for disaster recovery.

In mid-April, virtualisation specialist VMware announced VMware vCloud Hybrid Service – Disaster Recovery, a service that provides customers with a continuously available recovery site if their own on-premise VMware-based environments run into trouble.

The new disaster recovery service offers a recovery point objective (RPO) of 15 minutes, at prices starting at $835 per month. The aim, according to VMware executives, is to provide customers with “a simple, automated process for replicating and recovering critical applications and data in a warm standby environment at a fraction of the cost of duplicating infrastructure or maintaining an active tertiary data centre.”
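
As a brief aside on what a 15-minute RPO means in practice: the recovery point objective caps how much recent data can be lost in a disaster, and it is typically met by replicating changes at least once per RPO window. The sketch below is purely illustrative and does not describe VMware’s actual implementation; the replicate_changes stub is a hypothetical stand-in.

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)  # recovery point objective: max tolerable window of lost data

def replicate_changes() -> datetime:
    """Hypothetical stand-in for shipping changed VM data to the recovery site."""
    print("replicated changes to recovery site")
    return datetime.now(timezone.utc)

# Replicating at least once per RPO window keeps the recovery site no more than
# 15 minutes behind the protected environment.
last_replication = replicate_changes()
next_replication_due = last_replication + RPO
print(f"last replication: {last_replication:%H:%M:%S} UTC")
print(f"next one due by:  {next_replication_due:%H:%M:%S} UTC")
print(f"worst-case data loss if disaster strikes just before then: {RPO}")
```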


For some customers, it may also represent an opportunity to protect applications that have previously been omitted from Disaster Recovery plans for reasons of cost and/or complexity. As we heard in Article 1 of this chapter, disaster recovery is fast emerging as an important hybrid-cloud use case.

Technology.info sat down with Joe Baguley, chief technology officer for EMEA at VMware, to discuss the new service and to ask him: is the way that companies look at hybrid cloud deployment starting to mature?

Q: So, Joe, what’s the thinking behind VMware vCloud Hybrid Service – Disaster Recovery?

A: Our thinking here is simple: hybrid cloud should be seamless. It shouldn’t be a migration – just a seamless click within your existing environment. The whole point of this service is to provide an easy and simple way to back up an existing VMware environment to the cloud, so that in the event of a disaster hitting their own data centre, a customer can quickly spin up those environments in the cloud.

Q: And where exactly will those systems run – in VMware’s own data centres?

A: Sort of. What we do as a company is rent data centre space from the likes of Equinix and Savvis, but we use that space to provide a managed service to VMware customers that is entirely operated by us. Plus, there’ll also be a wide range of VMware partners offering the service to customers.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

Q: So what does the introduction of this new disaster recovery service tell us about wider market demand for hybrid cloud services?

A: I think it’s fair to say that, initially, hybrid cloud was largely seen as a good place for dev/test environments [development and testing environments where software developers would build and try out new applications before deploying them in-house]. But many of our customers have quickly identified the hybrid cloud’s potential for disaster recovery (DR), too. We’ve actually found some interesting cases where customers come and stand up DR in our hybrid cloud, do a failover test and then realise that the system runs better in our hybrid cloud than it does on their own premises! A few have even switched to using our cloud as their primary site and their own for Disaster Recovery.
Others have said they were interested in the idea of ‘cloud-bursting’, so that on-premise apps can take advantage of our capacity during periods of peak demand – but many have quickly found that an awful lot of recoding of applications was needed to enable them to do that. But where we are now with hybrid cloud is that, because we’re standing up exactly the same technology stack in our data centres as we sell to customers to run in theirs, it’s a relatively trivial thing to migrate workloads between the two – they’re effectively the same environment. As a company, VMware has around 45 million VMs [virtual machines] running in customer sites today – and we’re giving those customers a place where they can ‘drag and drop’ those VMs if they need to do so. As we discussed in this chapter’s article on Hybrid Cloud Considerations, cloud bursting is another emerging hybrid-cloud use case.

Q: So where does that leave the whole issue of cloud interoperability and the industry standards effort that’s going on around hybrid cloud computing?

A: Well, it’s an interesting question. My answer would be that there are still many cloud interoperability issues to tackle and several major standards efforts underway – but when I sit down day to day with our customers, a lot of them say to me: “We’d love to pick an industry standard and go with it – but we can’t choose one yet, it’s just far too early.”
The one standard they do know – or at least, a de facto standard – is that they already run VMware today, so for now, that’s the technology they’ll stick with. They don’t want to pick an industry standard that’s not going to win the race in the long term.
But that’s not to say that VMware isn’t part of the wider IT industry standards effort. We’re proud to say we have a very high commit rate to OpenStack and we are 100% backing OpenStack as we go forward. Customers will see us develop more and more interoperability as that platform develops but, at the moment, it’s still early days. That’s just the way it is in this industry: technology development moves faster than any standards body can.

Q: But isn’t that luring customers into putting all their eggs in one basket – a VMware branded basket?

A: It’s also about having ‘one throat to choke’: customers can buy their licenses for on-premise and their credits for hybrid cloud in a single transaction, get the same support line and avoid the need to make changes in personnel and skills. We’re working to make hybrid cloud a natural extension of all the stuff they already do.

Joe Baguley will be presenting as part of the IP EXPO Europe seminar programme.







Action for Children cuts IT costs through hybrid cloud

15 Jul 2014
by
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16-year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.
Action for Children

Using a mix of private- and public-cloud resources enables the charity to achieve its goals of data confidentiality and scalable IT.

Action for Children (AfC) has been in the news recently. The charity is campaigning for an update to UK child neglect legislation and the introduction of a so-called ‘Cinderella Law’.

Behind the scenes, meanwhile, and on a day-to-day basis, the charity handles some of the most confidential data imaginable, relating to some of the UK’s most vulnerable and disadvantaged children and young people and their families and carers.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

Much of that data – around 60 percent of AfC’s overall data storage – must, by law, be kept on the systems held on the charity’s own premises, explains Darren Robertson, data scientist and head of digital communications at AfC. As we discussed in the Hybrid Cloud Considerations article of this chapter on Building the Hybrid Cloud, this is an area of concern – and of hybrid-cloud potential – for organisations across a wide range of sectors, including charities and government departments.


Other kinds of data – details of donations, fundraising activities and projects underway around the country – must still be kept safe, but can at least be hosted by a trusted third party. Here, AfC uses a private-cloud environment provided by Rackspace, which also hosts AfC’s website on its public-cloud infrastructure. Many of the databases held in the private-cloud environment, Robertson says, feed the public website, allowing visitors to look up, for example, the locations of children’s centres and projects around the country.

This hybrid environment enables AfC to balance its need to maintain confidentiality with its focus on costs.

“As a charity, we have to keep a very close eye on costs – and, in this sector, we’re far from alone in that. It’s become quite apparent to charities that internal servers are expensive to run – so why would we want to do that, when it’s not always necessary?”

“By working with a provider to host certain types of information, we don’t have to worry whether a crowded server room is running at the right temperature, are systems patched regularly, does a particular component need replacing? We only need to ask those questions about the systems that host data that we’re absolutely obliged to keep in-house. We can devolve responsibility to Rackspace for the rest.”

AfC began looking at options for a hosting environment in April 2012 and completed its migration to Rackspace’s data centre in October of that year. Using an entirely public-cloud environment, says Robertson, was out of the question: “There’s a lot of nervousness within the charity sector around the public cloud,” he says, “but a hybrid cloud environment enables us to address those concerns by mixing public and private cloud.”

It also means that, if traffic to AfC’s public-facing website suddenly spikes – at times when it is actively campaigning for changes in legislation, for example, or if a celebrity tweets about its work – it can quickly tap into Rackspace’s extra hardware resources for that period, paying only for the extra capacity it consumes, rather than lifting the whole website to a larger dedicated server. This is what it previously had to do, Robertson says, and it meant that the charity was unable to update the website during those peak periods.

As for the charity’s on-premise IT investment, “it’s still pretty similar for now, but things are winding down,” says Robertson. “We have over 72 data systems within the organisation and you need to remember that, as an organisation, we’re 150 years old, so some of it is pretty antiquated.”

The goal now, he says, is to get some of those on-premise systems into a sufficiently stable state to migrate them to the Rackspace private-cloud environment, “but the idea is that, over time, now we’ve proven that hybrid cloud works for us, we can start to move more and more stuff across. Looking after it ourselves involves too much time and cost, and as a charity, we will never be able to employ the same numbers or quality of staff that Rackspace has on its server team – so why would we not let them take the strain?”

“I strongly believe that it’s our early use of hybrid cloud that will allow us to transform our IT environment over the next few years, freeing up time and money at Action for Children that might be better spent on changing children’s lives for the better.”

Read on to find out: Why organisations are increasingly turning to hybrid cloud technology to address their disaster recovery needs




Hybrid cloud deployment – considerations

14 Jul 2014
by
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16-year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.
Planning for a Hybrid Cloud Deployment

Following on from our introduction to Building the Hybrid Cloud, before identifying use cases for hybrid cloud deployment, IT teams need to take security, connectivity and portability into account.

It’s now almost a year since Microsoft executive Marco Limena, vice president of the software giant’s hosting service providers business, declared that 2013 and 2014 would be the “era of the hybrid cloud”.

And in March this year, the company unveiled a survey of over 2,000 IT decision-makers worldwide that seems to validate his claim. According to the study, conducted on Microsoft’s behalf by IT analyst firm the 451 Group, 49% of respondents said that they had already implemented a hybrid cloud deployment.

Of these, 60 percent had integrated an on-premise private cloud with a hosted private cloud. Forty-two percent, meanwhile, had combined an on-premise private cloud with a public cloud, and 40 percent had combined a hosted private cloud with public cloud resources.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

According to Limena, “hosted private cloud is a gateway to hybrid cloud environments for many customers.” The study shows, he added, that “it’s clear we’ve reached a tipping point where most companies have moved beyond the discovery phase and are now moving forward with cloud deployments.”

So what do organisations need to bear in mind when considering a hybrid cloud deployment? According to IT experts, success is largely a matter of identifying the specific workloads that will benefit most from running in a hybrid cloud. But before that, there are three considerations to take into account.


The first is security: IT organisations should ensure that the security extended to a workload running in a private cloud can be replicated in the public cloud.

The second is connectivity: data flowing between private and public cloud resources should be kept confidential. A typical hybrid cloud deployment would achieve this using a VPN [virtual private network] connection or dedicated WAN link.

The third is portability – and in the absence of truly mature standards in hybrid cloud computing, this may prove the toughest nut to crack. It’s about ensuring technological compatibility between private and public cloud environments. In other words, organisations must be able to ‘lift and shift’ workloads between the two, knowing that they share the same operating systems and application programming interfaces [APIs], for example.
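
One simple way to reason about the portability point is to compare what a workload needs with what each environment exposes before attempting a ‘lift and shift’. A minimal sketch follows; the environment descriptions and requirement names are invented purely for the example.

```python
# Illustrative only: compare what a workload needs with what each cloud offers.
# The environment descriptions and requirement names are made up for this example.
workload_requirements = {"os": "ubuntu-22.04", "hypervisor": "kvm", "apis": {"object-storage", "dns"}}

environments = {
    "private-cloud": {"os": {"ubuntu-22.04", "rhel-9"}, "hypervisor": "kvm",
                      "apis": {"object-storage", "dns", "load-balancer"}},
    "public-cloud":  {"os": {"ubuntu-22.04"}, "hypervisor": "kvm",
                      "apis": {"object-storage"}},
}

def portability_gaps(req: dict, env: dict) -> list[str]:
    """Return the compatibility gaps that would block a lift-and-shift move."""
    gaps = []
    if req["os"] not in env["os"]:
        gaps.append(f"missing OS image {req['os']}")
    if req["hypervisor"] != env["hypervisor"]:
        gaps.append(f"hypervisor mismatch: need {req['hypervisor']}")
    gaps.extend(f"missing API: {api}" for api in req["apis"] - env["apis"])
    return gaps

for name, env in environments.items():
    gaps = portability_gaps(workload_requirements, env)
    print(name, "-> portable" if not gaps else f"-> blocked: {gaps}")
```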

Current use cases for hybrid cloud deployment, meanwhile, include:

1. Development/testing workloads

Early stage applications are a great use case for hybrid cloud deployment, says Stuart Bernard, European cloud sales leader at IT services company CSC. “You may have an application development team that needs an environment it can spin up, use, consume and spin down quite quickly,” he says. “The frustration they have with traditional IT services is that, by the time they’ve requested that environment and it’s been made available to them, there’s been a considerable delay – and the one thing that application developers hate is a delay.” That, he adds, leads to ‘shadow IT’ procurement, as development teams get their credit cards out and buy the environment they need in the public cloud. But once they’re completely satisfied with the application they’ve built there, they then face the challenge of shifting it back into the private cloud, in order to give that application the governance and control in production that the organisation requires. A hybrid cloud environment makes it possible to use public cloud resources for a new, untested application, before migrating it back into a private cloud, once a steady-state workload pattern has been established.
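
The spin-up/use/spin-down pattern Bernard describes maps naturally onto an ephemeral-environment abstraction. A minimal sketch, in which the provisioning and teardown calls are simulated stand-ins rather than any particular cloud API:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_environment(name: str, cloud: str):
    """Simulated ephemeral dev/test environment: provision, yield, always tear down."""
    print(f"provisioning '{name}' in the {cloud} cloud")  # stand-in for a real provisioning call
    try:
        yield {"name": name, "cloud": cloud}
    finally:
        print(f"tearing down '{name}' ({cloud})")          # release capacity when finished

# Developers burst into the public cloud for a quick test, then tear it down;
# a steady-state production deployment would instead target the private cloud.
with ephemeral_environment("feature-branch-test", cloud="public") as env:
    print(f"running test suite in {env['name']}")
```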

2. Disaster recovery

Duplicating a private cloud environment in a secondary data centre comes with considerable costs, requiring at least twice the expenditure on IT equipment and data-centre space. According to Joe Baguley, EMEA chief technology officer at VMware, more companies are now turning to hybrid cloud for disaster recovery, so that if their on-premise private-cloud environment hits the skids, they can quickly migrate virtual machines from that environment to run in a geo-redundant cloud set-up delivered by a third-party provider.

3. Cloudbursting

The term ‘cloudbursting’ is increasingly used to refer to a situation where workloads are migrated to a different cloud environment to meet capacity demands. This might happen to an ecommerce shopping app in the run-up to Christmas, for example, or a charity running a high-profile campaign that gets a lot of media coverage. In a hybrid cloud environment, the steady-state application would be handled by a private-cloud environment, with spikes in processing requirement passed to on-demand resources located in the public cloud.
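
A minimal sketch of the placement decision behind cloudbursting, with invented capacity figures: steady-state demand stays on the private cloud, and only the overflow is pushed to on-demand public-cloud capacity.

```python
# Illustrative only: split incoming demand between fixed private-cloud capacity
# and elastic public-cloud capacity. The numbers are invented for the example.
PRIVATE_CAPACITY = 100  # requests/sec the private cloud can absorb

def place_load(demand: int) -> dict[str, int]:
    private = min(demand, PRIVATE_CAPACITY)
    public = max(0, demand - PRIVATE_CAPACITY)  # the "burst" handled on demand
    return {"private": private, "public": public}

for demand in (60, 100, 240):  # e.g. a quiet day, a normal peak, a Christmas spike
    print(demand, "->", place_load(demand))
```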

4. Meeting regulatory requirements

Many organisations handle data that must be kept in-house, according to legal or regulatory requirements – but parts of the application that collect and/or process that data (online forms, for example) can run in a public cloud to improve performance and scalability, while keeping costs low.

As we see in the next article, this is an issue that charity Action for Children tackled head-on in their hybrid cloud deployment.










Sir Tim Berners-Lee to keynote IP EXPO Europe

10 Jul 2014
by
Mike England
Sir Tim Berners-Lee

Sir Tim Berners-Lee to open IP EXPO Europe

In his first address to the core IT community of IP EXPO Europe, Sir Tim will outline his 2050 vision for the web and how businesses will use it for competitive advantage.

Above all, he’ll look at how businesses – and the people who lead them – will shape the next phase of the web’s remarkable development.

The speech will mark the 25th anniversary of the first draft of a proposal for what would become the World Wide Web. To mark this milestone, Sir Tim Berners-Lee will share his vision for successful business on the Web – from predicted challenges and the technology businesses will use to overcome them, through to the key innovations that will help drive future success, improve customer experience and create new markets.


Sir Tim comments: “I am greatly looking forward to addressing key decision makers in European business at IP EXPO Europe. The issues that will shape the future of the web – from privacy and data regulation, to sustainability and responsibility – don’t just touch our businesses, they touch our lives, and it’s thrilling that we are all at the very epicentre.”

Sir Tim’s opening keynote will be followed by an impressive programme from leading technology influencers, including Cloudera’s Doug Cutting, creator of Hadoop, and Mark Russinovich, Technical Fellow at Microsoft.

To share in the excitement and to mark the 25th anniversary of the World Wide Web, REGISTER NOW for this free-to-attend, one-off session to hear Sir Tim Berners-Lee’s keynote.

About Sir Tim Berners-Lee

Sir Tim Berners-Lee invented the World Wide Web in 1989 while working as a software engineer at CERN, the large particle physics laboratory near Geneva, Switzerland. With many scientists participating in experiments at CERN and then returning to their laboratories around the world, they were eager to exchange data and results but had difficulty doing so. Tim understood this need, and understood the unrealized potential of millions of computers connected together through the Internet.

Tim documented what was to become the World Wide Web with the submission of a proposal specifying a set of technologies that would make the Internet truly accessible and useful to people. Despite initial setbacks and with perseverance, by October of 1990, he had specified the three fundamental technologies that remain the foundation of today’s Web (and which you may have seen appear on parts of your Web browser): HTML, URI, and HTTP.

He also wrote the first Web page editor/browser (“WorldWideWeb”) and the first Web server (“httpd”). By the end of 1990, the first Web page was served. By 1991, people outside of CERN joined the new Web community, and in April 1993, CERN announced that the World Wide Web technology would be available for anyone to use on a royalty-free basis.

Since that time, the Web has changed the world, arguably becoming the most powerful communication medium the world has ever known. Whereas only roughly one-third of the people on the planet are currently using the Web (and the Web Foundation aims to accelerate this growth substantially), the Web has fundamentally altered the way we teach and learn, buy and sell, inform and are informed, agree and disagree, share and collaborate, meet and love, and tackle problems ranging from putting food on our tables to curing cancer.

In 2007, Tim recognized that the Web’s potential to empower people to bring about positive change remained unrealized by billions around the world. Announcing the formation of the World Wide Web Foundation, he once again confirmed his commitment to ensuring an open, free Web accessible and meaningful to all where people can share knowledge, access services, conduct commerce, participate in good governance and communicate in creative ways.

A graduate of Oxford University, Tim teaches at Massachusetts Institute of Technology as a 3Com Founders Professor of Engineering and in a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL. He is a professor in the Electronics and Computer Science Department at the University of Southampton, UK, Director of the World Wide Web Consortium (W3C), and author of Weaving the Web and many other publications.

Sir Tim Berners-Lee will present the opening keynote at IP EXPO Europe on Wednesday 8th October at 10am.

Building the Hybrid Cloud

10 Jul 2014
by
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16-year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.
Hybrid Cloud

In this chapter on Building the Hybrid Cloud, we take a look at some of the practical considerations that organisations embarking on the hybrid cloud journey need to bear in mind.

 

We also speak to one organisation, charity Action for Children, that has already taken the plunge. And we talk to leading IT expert, VMware’s EMEA CTO Joe Baguley, about the conversations he’s having with customers around hybrid cloud.

Could combining private- and public-cloud resources into a hybrid cloud prove the best recipe for companies looking to achieve the goal of optimal scalability, flexibility and IT efficiency?

First they went public, then they went private. Now, organisations are looking to hybrid cloud deployment as a way to mix both public and private cloud resources to get the maximum benefits of almost limitless scalability, increased flexibility and the most efficient use of IT.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

According to a recent Gartner study, nearly half of large enterprises will have deployed a hybrid cloud by the end of 2017. The growth of hybrid cloud computing, say analysts at the firm, will only serve to fuel more general adoption of cloud computing as a model for IT deployment – and they expect this overall cloud spend to account for the bulk of IT spending by 2016.


In the US, the National Institute of Standards and Technology (NIST) defines hybrid cloud as a composition of at least one private cloud and at least one public cloud. At most companies, this involves integrating a private-cloud deployment (located on their own premises or hosted on their behalf by a third-party provider) with the resources of a public-cloud provider.

The market itself, then, is a complex mix, largely comprising the major IT vendors that provide companies with data-centre infrastructure; hosting providers; and systems integration firms that help them knit private and public cloud resources together.

That third group is important: while a hybrid approach promises “cost savings and significant gains in IT and business flexibility”, according to a 2013 report from analysts at Forrester Research, “some concerns remain around how to manage and integrate on-premises infrastructure with cloud services in a hybrid cloud architecture.”

In terms of the challenges that IT decision-makers face, Forrester’s survey of over 300 companies in the US and Europe shows that two security challenges are ‘top of mind’ for respondents: ensuring the consistency of security policies between the on-premises environment and the service provider (cited by 46 percent) and securing communication and data-sharing between the two (cited by 45 percent). In addition, some applications may need to be re-architected to run in a hybrid environment – and the challenges around the connectivity needed to link public and private clouds are substantial.

“IT decision-makers will look to find solutions to these challenges with existing tools and skills — or explore new offerings that make it easier to address the challenges of using a hybrid cloud strategy,” the Forrester analysts predict.

At Gartner, meanwhile, analyst Dave Bartoletti remains adamant: “Hybrid is indeed the cloud architecture that will dominate… there will likely be very few private clouds that don’t have a hybrid component.”

Read on to find out: What considerations do organisations need to take into account before identifying suitable use cases for hybrid cloud deployment?








Big Data: Time for a new approach to analysis

13 Jun 2014
by
Kevin Spurway
big data

The Big Data problem is accelerating, as companies get better at collecting and storing information that might bring business value through insight or improved customer experiences.  It used to be a small, specialist group of analysts that was responsible for extracting that insight, but this is no longer the case.  We are standing at a nexus between Big Data and the demands of thousands of users – something that we call “global scale analytics” at MicroStrategy.  The old architectural approaches are no longer up to the task, and this new problem needs radical new technology.  If companies continue with the old approach, Big Data will fail to reach its true potential and will just become a big problem.

Analytics applications now regularly serve the needs of thousands of employees to help them do their jobs; an employee can need access to hundreds of visualisations, reports and dashboards.  The application must be ready for a query at any time, from any location, and the results must be served to the user with ‘Google-like’ response times; their experience of the web is the benchmark by which they judge application responses in the work environment.

With this huge rise in data and user demands, the traditional technology stack simply can’t cope: it is becoming too slow and expensive to build and maintain an analytics application environment.  Sure, there are some great point solutions, but the problem is the integration between every part of the stack – the stack only performs as well as its weakest link.

The industry has only been working to solve half the problem – data collection and storage – rather than looking at the full picture, which also includes analytics and visualisation.  Loosely coupled stacks scale poorly and have a huge management and resource overhead for IT departments, making them uneconomical and lacking in agility.

Solving the end-to-end Big Data analytics problem requires an architecture that tightly integrates each level of the analytics stack and takes advantage of the commoditisation of computing hardware, so that analytics can scale with near-perfect linearity and economies of scale, delivering sub-second response times on multi-terabyte datasets.
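
The scaling argument is essentially that aggregation over an in-memory, partitioned dataset parallelises almost perfectly: each node (or core) scans only its own partition, and just the small partial results are combined. The toy sketch below illustrates that pattern with Python's multiprocessing; it is not a description of MicroStrategy PRIME or any other vendor engine.

```python
from multiprocessing import Pool
from collections import Counter
import random

def aggregate_partition(rows: list[tuple[str, float]]) -> Counter:
    """Scan one in-memory partition and return partial sums per key."""
    totals = Counter()
    for key, value in rows:
        totals[key] += value
    return totals

if __name__ == "__main__":
    # Invented data: rows partitioned across workers, as they would be across nodes.
    rows = [(random.choice("ABCD"), random.random()) for _ in range(400_000)]
    partitions = [rows[i::4] for i in range(4)]

    with Pool(processes=4) as pool:
        partials = pool.map(aggregate_partition, partitions)

    result = sum(partials, Counter())  # combine the small partial aggregates
    print({key: round(total, 1) for key, total in result.items()})
```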

MicroStrategy has become the first, and currently only, company to make this approach a commercial reality, tightly integrating data, analytics and visualisation.  PRIME is a massively parallel, distributed, in-memory architecture with a tightly integrated dashboard engine.  Companies like Facebook are using PRIME to analyse billions of rows in real-time, proving that this new approach is pushing the point solutions of the past out of the spotlight.

Regardless of your application, if you have thousands of users, exploding data collection, highly dimensional data, complex visualisation or a globally distributed user base, then the big data problem will keep getting bigger.  With every day it grows, you are playing a game of diminishing returns.  Businesses need to look at how to make their analysis as efficient as their data gathering.  We are in a new era of data exploration that demands a jump in the scale and performance of analytics applications to achieve global scale analytics – what’s the point in collecting all that data if you can’t use it?

Kevin Spurway is Senior Vice President of Marketing at MicroStrategy

The evolution of in-memory computing technology

11 Jun 2014
by
David Akka
computers

Although it might appear to be an emerging technology because of all the recent hype about big data, in-memory computing has already been in use by large organizations for several years. For example, financial institutions have been using in-memory computing for credit card fraud detection and robotic trading, and Google has been using it to support searching huge quantities of data.

The need for in-memory technology is growing rapidly due to the huge explosion in the sheer quantity of data being collected, the addition of unstructured data including pictures, video and sound, and the abundance of meta-data including descriptions and keywords.  In addition, vendors are pushing predictive analytics as an important competitive advantage, for which implementing in-memory technology is a must.

The reduced cost of memory (RAM) hardware means that smaller organizations, with annual revenues as low as one million dollars, now also have access to in-memory technology, and are getting into the game.  The pace of adoption will continue to speed up as packaged software vendors incorporate in-memory computing into industry-leading solutions.

In-Memory Computing in the Enterprise Software Market

SAP took an all-or-nothing approach, deciding to embed in-memory computing across their entire ERP line with their SAP HANA solution.  Being first to market among their competitors with an in-memory computing product, SAP took on the role of market educator.  They aggressively marketed their HANA solution as a differentiator, and they also benefit from the fact that the overlapping data layer helps prevent modules of their solution from being replaced by other leading industry players such as Oracle, Salesforce and Microsoft.  SAP bet on the idea that customer upgrades to HANA would not be much more costly or complex than other major SAP upgrades.

Other database vendors – Oracle, IBM, and Microsoft – are adding in-memory features to conventional databases one module at a time.  Although this approach is less disruptive and quicker and less expensive to implement, it can create bottlenecks as high speed processing is limited to a single function, and the full benefits can’t be experienced across all parts of the application.

Enterprises still have many options when it comes to implementing in-memory technology.  In addition to the traditional database vendors providing in-memory technology, there are in-memory-computing first vendors like GigaSpaces.  GigaSpaces has already been providing in-memory functionality for several years.  An application-agnostic vendor like GigaSpaces also provides the advantage of enabling multiple vendors’ data to be incorporated into a single data grid. Still other options that enterprises can consider are integration solutions that embed in-memory computing technology. The focus here would be to support scenarios that combine data from multiple systems.

Implementation Strategies

In general, CIOs shouldn’t limit their choice of suppliers of in-memory technology based on their incumbent solutions, but should instead pick a solution based on their organization’s specific objectives and priorities.  CIOs should look at the scenarios that they want to enable – for example, identifying potential fraud for insurance companies or predicting crimes for law enforcement – and then determine the most cost-effective in-memory technology solution that will enable them to achieve their goals.

Once they have decided on the data they want to use in-memory, they should do an ROI analysis based on the full cost of the solution including consultancy, the software cost, the amount of work required to modify the applications, and how efficiently the solution uses the hardware.

In some cases it may be wiser to use in-memory technology only for certain parts of applications.  For example, retailers might see the value in using in-memory computing to call up data about previous purchases and customer profiles to present targeted offers to customers while shopping, but decide to store employee work hours using more traditional methods since this data is less time sensitive.
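
That “hot data in memory, colder data on slower storage” split can be as simple as a cache in front of the system of record. A minimal sketch, in which a slow lookup function stands in for the traditional store and an in-process cache stands in for the in-memory layer:

```python
from functools import lru_cache
import time

def load_customer_profile_from_warehouse(customer_id: int) -> dict:
    """Hypothetical stand-in for a slow query against the system of record."""
    time.sleep(0.2)  # simulate disk/warehouse latency
    return {"id": customer_id, "segment": "frequent-shopper"}

@lru_cache(maxsize=10_000)  # hot profiles stay in memory; cold ones are fetched on demand
def customer_profile(customer_id: int) -> dict:
    return load_customer_profile_from_warehouse(customer_id)

start = time.perf_counter()
customer_profile(42)  # first call: hits the slow store
print(f"cold lookup: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
customer_profile(42)  # second call: served from memory
print(f"warm lookup: {time.perf_counter() - start:.6f}s")
```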

Use Cases

Imagine how in-memory technology can change the way data is used by retailers.  In the traditional BI model, they would scan a loyalty card every time a customer made a purchase and put this data into a data warehouse where it would be analyzed to decide which products to offer that customer, based on his/her historical purchases.

With in-memory computing, all purchases are tracked, and the data is analyzed in real-time and used to predict future purchase patterns and provide offers that shoppers are most likely to accept.  This system could therefore determine that people who bought a specific belt are most likely to also purchase cuff links, for instance.  The model can be built with structured and unstructured data including purchase histories, information from social media, and images of advertisements online and in newspapers.
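
The belt-and-cuff-links example comes down to keeping co-purchase counts in memory and updating them as each basket arrives, so that a “customers who bought X also bought Y” lookup is instant. A toy sketch with invented baskets:

```python
from collections import defaultdict
from itertools import combinations

# In-memory co-purchase counts, updated as each basket is processed in real time.
co_purchases: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

def record_basket(items: list[str]) -> None:
    for a, b in combinations(set(items), 2):
        co_purchases[a][b] += 1
        co_purchases[b][a] += 1

def also_bought(item: str, top_n: int = 3) -> list[str]:
    related = co_purchases.get(item, {})
    return sorted(related, key=related.get, reverse=True)[:top_n]

# Invented purchase histories, just for the example.
for basket in (["belt", "cuff links"], ["belt", "cuff links", "shirt"], ["belt", "shoes"]):
    record_basket(basket)

print(also_bought("belt"))  # e.g. ['cuff links', 'shirt', 'shoes']
```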

Supermarkets offer another obvious potential use case.  Supermarkets today provide self-service scanners to customers to speed checkout time and avoid queues.  The next step can be to use in-memory computing to provide highly personalized offers to customers as they shop, providing real-time relevant coupons or promotions at the exact place and time where decisions are made. This promises to be significantly more effective than current methods.

In-memory technology is a game changer that will create a competitive advantage for early adopters and will force other companies to follow. Once industry leaders implement in-memory technology and establish a competitive advantage as a result, it will only be a matter of time before other companies in their market space are forced to do the same.

David Akka is the Managing Director of Magic Software Enterprises UK

Technology in education: Then, now and the future

09 Jun 2014
by
Benjamin Vedrenne-Cloquet
School technology

The use of technology in education is not exactly a new or recent phenomenon. It dates back to the 1960s, when Stanford University professors experimented with using computers to teach maths and reading to young children in elementary schools in east Palo Alto. What started as an experiment in delivering education has since evolved into the specialist field we have today, with potential for an even brighter future – reports claim that the education technology market will be worth $107bn by 2015 and will experience 15-fold growth over the next decade.

Education technology has moved on, especially in the last few years, from the stage of experimentation to a stage of adoption, with tried and tested products, measurable results and burgeoning commercial opportunities. But with the share of digital in the total education market still only at 2 per cent (compared with 30 to 40 per cent in other ‘content’ industries), there is still a massive growth opportunity, with many reasons for education technology companies and innovators to be excited.

Recent advancements in technology and innovation in education technology have not only improved access to education, they have also enhanced the learning process itself, as well as making it more affordable. For example, reliable broadband services and adoption of other technologies like the cloud have facilitated policies like BYOD (Bring Your Own Device) in schools, allowing pupils to use their own devices (smartphones and tablets). As a result, less money is spent on hardware equipment for schools – which currently takes up 60% of the IT budget.  More money can then be spent on smart learning software, providing tailored lesson plans and innovative digital content such as engaging video materials – all aiding and facilitating a better learning experience for the pupil.

The learning experience has also been key for higher education and vocational training. As a result of initially high drop-out rates and limited e-learning course completion (a 2013 Times Higher Education article, based on a study of 29 MOOC courses, highlighted completion rates below 7%), emphasis on user engagement and interactive content has been fundamental for e-learning providers, together with the introduction of incentives for completion.

Indeed, engagement is fast becoming the main differentiator among the ever-growing field of e-learning options. Providers are putting a lot of effort into ensuring their content is immersive enough to not only attract users but also keep them engaged – all the way to the end.

Commentators have predicted the emergence of tools, interactive content and rewards designed to improve completion rates. From MOOCs to bespoke vocational training, the emphasis will be on how to make more use of this digital learning environment.

Increased use of technology in education has also granted teachers and assessors greater access to data on achievements and progress, allowing them to identify individualised learning programmes and any knowledge gaps that might exist. The arrival of data analytics in education has driven adaptive learning, where data is fed back into the system to influence learning programmes and structures. An independent report commissioned by e-learning provider SAM Learning, and conducted by the Fischer Trust, into the relationship between e-learning and the GCSE results of 258,599 students between 2009 and 2011 concluded that there is a positive relationship between the use of SAM Learning and students’ progress. On average, students using SAM Learning for 10+ task hours achieved 12.3 capped points more than expected.

Education technology is also creating a stronger link between what happens in the classroom and outside it (at home, in transit, etc.), making teacher-endorsed digital educational resources such as assignments and test prep material available at all times, and creating a continuum of touch points in the learning experience for pupils. This is changing the way pupils consume education in the same way cloud technology has changed the way we consume music and television.

The future of education technology holds bright prospects for everyone involved. There is no way to know how quickly things will progress from here; however, we can be sure that as technology continues to advance, it also has the power to drive better learning experiences.

How to improve enterprise messaging in mission critical situations

06 Jun 2014
by
Mark Hay
mobile

Effective communication is critical to businesses of any size. Employers are challenged with finding not only the most effective and secure communications platform that can reach an increasingly mobile workforce but, for many, one that also delivers in mission critical scenarios. As business operations are affected by unexpected events such as natural disasters or adverse weather conditions, businesses need to make sure they minimise that impact. Effective mission critical communications are key to achieving this.

This is not as simple as it sounds however. With the rise of BYOD in the enterprise, the number of devices each employee has access to is increasing, as is the number of methods of communication they use. Employees could be using SMS, telephone, email, instant messaging and OTT apps. Enterprises need to be confident that despite this complexity, a mission critical message will reach the recipient no matter what device they are using and where they are located. There are a number of critical scenarios that require getting a message delivered quickly and securely, either to an employee, customer or specific supplier. For example, an IT system problem that requires immediate attention, informing employees of office closures or alerting team members on call to an emergency situation. Enterprises need to have a robust communications policy in place to make sure the message gets to the recipient regardless of the impact of BYOD.

Enterprises also need to ensure the communication system can manage responses to a message; for example, IT staff en route to fix a system problem should be able to alert the business and avoid duplication of effort. There has to be a system in place for managing incoming communications in a mission critical situation, ranging from a simple acknowledgement to a more complex response workflow. Key to this is that employees are aware of the required process and responses, so that the system works effectively.
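
As a simple illustration of that response-management step, the sketch below shows a first-acknowledgement-wins rule that stops two engineers being dispatched to the same problem. The incident identifiers and responder names are made up for the example; a real system would sit behind a messaging platform rather than an in-memory dictionary.

# Illustrative sketch: the first responder to acknowledge an incident "claims" it,
# so colleagues who respond later are told to stand down rather than duplicate effort.

acknowledged = {}  # incident id -> responder who claimed it

def acknowledge(incident_id, responder):
    """Record an acknowledgement; return False if someone has already claimed the incident."""
    if incident_id in acknowledged:
        return False  # duplicate response - this responder is told to stand down
    acknowledged[incident_id] = responder
    return True

print(acknowledge("INC-42", "alice"))  # True  - alice is en route
print(acknowledge("INC-42", "bob"))    # False - bob is told alice already has it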

Enterprise communications in mission critical situations also need to be scalable: depending on the situation, it is sometimes crucial to notify a very wide range of people at short notice, for example when a natural disaster strikes. By their nature these situations come with little warning, so enterprises need to be flexible, adaptable and responsive in order to react as quickly as possible. Even for events that are anticipated, such as transport strikes, it is impossible to predict the full impact until it happens. Businesses need to be able to quickly and reliably reach all their employees to notify them of the situation and tell them whether they should work from home or make their way into the office. By notifying employees ahead of time, businesses can protect productivity and spare staff from battling the crowds to make it into work.

In terms of selecting a universal technology, anyone with a mobile device and SIM card can receive an SMS, regardless of the type of device or their mobile network operator. Furthermore, SMS is the last service to keep working when signal is limited, making it the most widely accessible form of communication in critical situations. Despite the longevity of SMS, however, IM is growing in popularity: a recent study from Deloitte revealed that IM traffic overtook SMS for the first time in the UK last year, with 160 billion instant messages compared with 152 billion SMS. Whilst IM is only available over a data network, it provides additional benefits, such as read receipts that allow the sender to verify whether a message has been read. To take advantage of both methods of communication, enterprises should adopt a solution that combines the two, so the appropriate channel can be chosen for security, responsiveness, availability and reliability depending on the criticality of the message.
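
A combined approach might look something like the sketch below. It is illustrative only: send_im(), send_sms() and was_read() are hypothetical placeholders for whatever IM and SMS gateway APIs an enterprise actually uses, and the read-receipt timeout is an assumed value.

# Sketch of an IM-first, SMS-fallback dispatcher for mission critical messages.
import time

READ_RECEIPT_TIMEOUT = 120  # assumed seconds to wait for an IM read receipt before falling back

def send_im(recipient, text):
    """Placeholder for an IM/OTT gateway call; returns a message id, or None if undeliverable."""
    return None

def send_sms(recipient, text):
    """Placeholder for an SMS gateway call; SMS reaches any device with a SIM card."""
    print("SMS to %s: %s" % (recipient, text))

def was_read(message_id):
    """Placeholder for a read-receipt lookup on the IM channel."""
    return False

def send_critical_message(recipient, text):
    """Try IM first for its read receipts; fall back to SMS if undelivered or unread in time."""
    message_id = send_im(recipient, text)
    if message_id is not None:
        deadline = time.time() + READ_RECEIPT_TIMEOUT
        while time.time() < deadline:
            if was_read(message_id):
                return "delivered-via-im"
            time.sleep(5)
    send_sms(recipient, text)
    return "delivered-via-sms"

send_critical_message("+447700900123", "Office closed tomorrow due to flooding - please acknowledge.")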

Ultimately, these solutions not only enhance communication within the enterprise during critical situations, they also support business continuity by enabling a rapid response to unforeseen events. Enterprises need to decide what they want to achieve through their communications policy in order to deploy a tailored solution that delivers the full benefits of mission critical enterprise messaging.

 

Top tips for improving mission critical communications

  • Have a standard communications policy in place
  • Ensure all employees are aware of the standard policy in mission critical situations
  • React to situations in a timely and efficient manner, and use tools that react in the same way
  • Ensure employees can be contacted regardless of the device they are using or their location
  • Establish where a two-way communication process is needed and enable this capability accordingly

 

Securing the Internet of Things

04 Jun 2014
by
Allen Storey

The Internet of Things is already with us, and it requires a paradigm shift in the way organisations think about security. While protecting sensitive data will continue to be of the utmost importance, the rise in connected devices raises a new security concern: how to trust the identity of those devices. Enterprises and other organisations must shake themselves out of the mind-set that online security is simply about protecting data. With the rise of the Internet of Things, they must also ensure they can protect and verify the identity of every device that connects to their environments.

This is the view of Allen Storey, Product Director at Intercede. Storey warns that the growth in connected devices gives criminals access to greater and more diverse opportunities for extortion, theft and fraud, which are likely to be even more damaging – and potentially life-endangering – than today’s malware that holds data to ransom, such as Cryptolocker. These attacks do not just threaten to disrupt corporations. As critical national infrastructure such as power generation and electricity grids is brought online, those systems become potentially vulnerable to attack from terrorists or hostile nations. A successful attack on infrastructure control systems has the potential to wreak massive disruption, and even death, in its wake.

To combat the threats inherent in the Internet of Things, organisations must have absolute confidence that the devices which connect to their networks are the devices they claim to be.

Establishing the true identity of any machine or device is a critical element in preventing criminals gaining control of, or access to, a company’s network. Anything that is Internet-connected but unprotected can be compromised, and can provide a wealth of valuable data to criminals. One example would be the ability to monitor staff movements to track or target a particular employee for some nefarious purpose, from theft to blackmail.

The most important first step is to ensure that any Internet-connected device is properly identified, so it can be trusted. One of the best methods of protecting a device is with a secure element embedded within it which can’t be copied or tampered with, and which can hold cryptographic keys that are unique to that one device. Combined with authentication that verifies each user or device that attempts to engage with it, embedded security provides an essential element of any defence against criminals seeking to exploit the Internet of Things.
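
A generic challenge-response check along these lines can be sketched with the widely used Python cryptography package. This is not Intercede’s product or any specific secure element’s API; in a real deployment the private key would be generated and held inside the secure element and never exported, whereas here it is created in software purely to illustrate the flow.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device side: in practice this key pair lives in the embedded secure element and the
# private key never leaves it; it is generated in software here only for illustration.
device_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_key.public_key()  # registered with the enterprise at enrolment

# Server side: issue a fresh random challenge for the device to sign.
challenge = os.urandom(32)

# Device side: the secure element signs the challenge with its unique private key.
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server side: verify the signature against the enrolled public key before trusting the device.
try:
    device_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Device identity verified - allow it onto the network")
except InvalidSignature:
    print("Unknown or tampered device - reject the connection")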

As with so much related to security, one of the biggest vulnerabilities is not technology but humans. What is key to securing a world of connected devices is a big push to educate people, corporations and other organisations that while the Internet of Things will radically change our lives in many ways, the biggest change will come in the way that we need to think about security. This education should not be based on fear, uncertainty and doubt; instead we need a calm and collaborative approach to securing one of the biggest technological leaps forward in our lifetime.

What starts within the enterprise often expands to the consumer. “What I want five years from now is to be sitting in my self-driving car, checking my home surveillance system on my smart watch, when my fridge tells me I have run out of milk and automatically directs me to the supermarket,” says Storey. “But I want to be absolutely sure that it is my fridge, my car, my surveillance camera and my watch talking to me. Embedded secure elements combined with device and person identity management can make this a reality; without it our fridges may be full of spam in a way we had not predicted.”

The biggest threat to your company’s data security could be your employees

02 Jun 2014
by
Paul Evans
Paul is the managing director of Redstor.

In today’s work culture of ‘always on, always accessible’, employees have expectations of seamless access to documents and files both on and off the company network and on mobiles or tablets, without concern for potential security issues.

According to a Cisco study, the average number of connected devices an employee will own for work is expected to be 3.5 by the end of 2014. The increasing popularity of mobile devices, as well as company-wide BYOD (Bring Your Own Device) policies, offers businesses many opportunities but, if not managed carefully, also comes with great risks.

A study by EE found that in the UK alone nearly 10 million work devices were lost or stolen in 2013. This, of course, presents a challenge to managers who want to maintain a work environment that is rich in features and accessibility yet not detrimental to security.

With only seven per cent of stolen devices ever recovered, the potential for data leakage is huge. The majority of work-related electronic devices contain sensitive or confidential information, but how many of them can be traced or remotely wiped in the event of loss or theft? Without adequate protection in place, company data is vulnerable to exposure. However, research shows that only 14 per cent of companies currently have a mobile device security policy in place. So what can be done to maintain an always-accessible work environment despite device proliferation, whilst ensuring protection against data leakage?

A common approach to data management is to undertake the arduous and ultimately counter-productive task of trying to lock down the corporate network and mobile devices in order to limit and control employees’ use of the many cloud services available today. Whilst this often has a limited degree of success in ensuring that employees don’t use consumer cloud services on the corporate network, it is far from effective.

There are now services that address data management and leakage whilst allowing IT administrators to give staff user-friendly remote and mobile access along with file sync and share functionality, so they can access and collaborate on their files from anywhere. Managers should be considering these secure, scalable services, which enable the same level of access and productivity as their unsecure, consumer-focussed counterparts without detrimentally impacting security.

These services eliminate risk through features such as remote data wipe. Once a device has been reported lost or stolen, IT staff can administer the device via a cloud portal and wipe confidential information, eliminating the risk of business data being misappropriated. Additional features such as two-factor authentication, device trace and high level encryption provide further protection against data leakage and theft.
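
For illustration, a device-side agent might check for such a wipe instruction along the lines of the sketch below. The portal URL, response fields and wipe_local_data() helper are hypothetical placeholders rather than any vendor’s real API.

# Illustrative sketch of a managed device polling its cloud portal for a remote-wipe command.
import time
import requests  # assumes the widely used 'requests' HTTP library is installed

PORTAL_URL = "https://mdm.example.com/api/devices"  # hypothetical management portal endpoint

def wipe_local_data():
    """Placeholder for securely erasing the business data held on this device."""
    print("Business data wiped from device")

def poll_for_commands(device_id, interval=300):
    """Periodically ask the portal whether this device has been reported lost or stolen."""
    while True:
        response = requests.get("%s/%s/commands" % (PORTAL_URL, device_id), timeout=10)
        response.raise_for_status()
        if response.json().get("action") == "wipe":  # assumed response shape
            wipe_local_data()
            break
        time.sleep(interval)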

Having this level of control and management in place will ultimately allow IT managers to sleep more soundly, safe in the knowledge that data is more secure and easily managed through an intuitive web interface. At the same time, it will stop employees having to resort to consumer cloud storage services to access their files remotely and share them with other parties, making everyone a winner.