Brought to you by IP EXPO Europe

BYOD deployment in the public sector: Solihull Council

29 Jul 2014
Mike England
Mike is the Content Director at Imago Techmedia which runs IP EXPO Europe, Cyber Security EXPO and Data Centre EXPO

Steve Halliday, CIO at Solihull Metropolitan Borough Council, continues our chapter on ‘Managing and securing mobile devices‘ with a discussion on the BYOD technologies he’s implemented to ensure that employees have more choice in the devices they use to get work done.

When Steve Halliday joined Solihull Metropolitan Borough Council as CIO, back in 2006, certain council employees were already clamouring for permission to bring their personally owned mobile devices into work. “The trend of consumerisation was already making itself felt,” says Halliday, “but it was clear from the start that this would be a tricky challenge to address.”

In particular, he says, the local authority needed to find a way to meet employees’ demands, but at the same time satisfy the strict security requirements set by the UK Government’s information assurance advisor, CESG.

Finding a way forward took a great deal of negotiation, he recalls: “I needed to pitch a story that went way beyond allowing a few key stakeholders to access their council email on the move. It needed to be a vision of how we could introduce fundamental changes in the way we provide IT to employees. But I needed to build a good relationship with those key stakeholders first, in order that I could talk to them about that wider picture. I knew we couldn’t continue delivering IT in the old way – especially if we were going to get the best from younger council employees.”


As the mobile device management (MDM) market matured, however, a number of options emerged that could help Halliday and his team deliver ‘bring your own device’ and remote working policies, “and we considered most of them,” he says. Good Technology’s MDM system, however, was the first he saw that satisfied the CESG requirements: “That was a critical thing for me.”

Halliday started with an early 2012 pilot project of the Good for Enterprise product, across a group of around 50 elected members, senior directors and some general council officers. “We worked with volunteers who had campaigned for this provision for some time, on the basis that one volunteer is better than ten pressed people,” he says. “We concluded that pilot project with the volunteers all telling me that there was no way I could take that provision away from them, now that they’d tried it. The experience, for them, was transformative.”

The council has been rolling out BYOD ever since. It’s not a mandatory requirement, but is provided to staff on request, as long as they have an identified business need for BYOD. These requests are currently approved by the human resources department, but approval will soon be devolved to line managers instead. The ‘containerisation’ capabilities of Good for Enterprise, meanwhile, mean that Halliday and his team can be confident that all council information is kept in a secure, encrypted location on employees’ mobile devices – and wiped centrally by them, if necessary.

Today, at Solihull Council, Good for Enterprise supports standard BYOD across smartphones and tablets, giving employees access to their personal information – their email, their calendar, their to-do list and access to the intranet – on personally owned devices.

For those employees who require a full desktop experience – they may work from home two days per week, for example – a combination of a VPN [virtual private network] link from Juniper Networks and a VDI [virtual desktop infrastructure] from Citrix is used to deliver the applications and other information they require to home PCs or laptops. This second initiative is known at the council as ‘Yodah’, which stands for ‘your own device at home’.

Around 3,000 members of council staff now have access to BYOD, Yodah or a combination of the two, says Halliday. And, for some staff, the council is starting to offer a COPE policy, which stands for ‘corporately owned, personally enabled’. This allows certain employees, with the approval of their managers, to be issued with a council-owned tablet, for example, in order to perform certain work tasks.

The main challenge now, says Halliday, is keeping a firm eye on costs while still allowing employees plenty of choice. With that in mind, the council is putting in place a points-based device allocation policy, which enables employees to have, for example, a council-owned laptop and a personally owned smartphone – but stops them from expecting the IT department to provide or support an unreasonably large collection of laptops, tablets and smartphones on their behalf.
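To illustrate the idea, a points-based allocation can be reduced to a simple budget check. The device types, point costs and allowance below are invented for this sketch; the council's actual weightings aren't given in the article:

```python
# Hypothetical sketch of a points-based device allocation check.
# Point costs and the per-employee budget are illustrative assumptions.

DEVICE_POINTS = {"laptop": 3, "tablet": 2, "smartphone": 1}
POINTS_BUDGET = 5  # assumed per-employee allowance

def allocation_allowed(requested_devices):
    """Return True if the requested mix of devices fits the points budget."""
    cost = sum(DEVICE_POINTS[device] for device in requested_devices)
    return cost <= POINTS_BUDGET

# A council-owned laptop plus a personally owned smartphone fits the budget...
print(allocation_allowed(["laptop", "smartphone"]))  # True
# ...but a laptop, a tablet and two smartphones does not.
print(allocation_allowed(["laptop", "tablet", "smartphone", "smartphone"]))  # False
```

The appeal of a scheme like this is that it caps IT support cost without dictating a single device mix to every employee.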

“The way we provide IT has fundamentally changed – but there still have to be limits,” says Halliday. “On the whole, I’m very satisfied that we’ve got the balance right. We’re not imposing council-owned equipment on employees who prefer their brand-new iPhone, for example, but everyone still gets the equipment they need to do their jobs effectively – and at a lower overall cost.”

Why is DCIM not being adopted more rapidly?

28 Jul 2014
Puni Rajah
Puni built her reputation for fact-based decision support as a business analyst at Deloitte. Since then, she has conducted IT consumption, management and sales research in the European and Asia/Pacific markets. She is expert at understanding enterprise software and services use and buying behaviour, and has coached marketing and sales executives. Puni is the Content Analyst for Data Centre EXPO which takes place alongside IP EXPO Europe in London ExCel on 8 - 9 October 2014

The Data Centre Infrastructure Management (DCIM) market is predicted to grow from $307 million in 2011 to more than $3 billion by 2017, according to a recent report by market research firm MarketsandMarkets. But if growth is expected to be this big, where are the users who are going to start using DCIM, and why is it taking them so long?

DCIM drivers

The biggest driver of DCIM use is the push towards greener data centres. Traci Yarbrough, Product Manager for Savvis, a data centre provider, points to the interconnected issues of energy consumption and the need to improve use of space. The European Union and the U.S. Environmental Protection Agency have both indicated interest in effective energy management in IT. This has meant that power and cooling have become more important to data centre managers, and these are issues which cannot be looked at in a holistic way with just a spreadsheet.


Steve Hassell, President of Emerson Network Power’s Avocent Business, sees the big advantage of DCIM as enabling data centre managers to bring together facilities and IT management, not traditionally two areas that have worked closely together, and look at the data centre as a whole. The ability to look across the whole space and manage it sensibly is crucial as resources, including energy, are recognised as being scarcer. DCIM, he suggests, can help data centre managers avoid doing the wrong thing for the right reasons, because it gives them the information that they need to balance conflicting demands.

So why aren’t more companies adopting DCIM more quickly?

Notwithstanding MarketsandMarkets’ estimates of likely growth, it’s not clear that the DCIM market is actually currently growing at anything like that speed. So what’s holding companies back from investing in DCIM solutions?

The main reason seems to be that it may be difficult to get value quickly. DCIM is especially useful for making decisions about current use and future requirements at the same time. When considering a DCIM solution, companies should be focusing on what’s going on right now in the data centre, historic information to support modelling, energy efficiency, arrangement of space, managing cooling, and flexibility to shift power use and cooling loads.

Robert Cowham from Square Mile Systems echoes the point about value. Although DCIM solutions can provide great 3D-modelling and simulations of heat maps, you need to put in an awful lot of data to get anything worthwhile out of them. If you don’t have that kind of quality data to hand, then implementation could take a long time. Real-time monitoring is only useful if you know what equipment is where, and who owns it, and not every data centre is that efficient.

Looking ahead

A high proportion of well-managed data centres continue to use models around power consumption and heat that keep them within specified limits. Real-time monitoring is just a bit too expensive at present.

Despite analysts’ forecasts, the market remains unconvinced. Advocates have an uphill struggle to demonstrate that DCIM is a genuine necessity, and not just a nice-to-have. And until economic recovery is more certain, spending on ‘nice-to-haves’ is likely to remain low.

9 considerations for creating a rock-solid BYOD policy

28 Jul 2014
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16 year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.

Continuing our chapter on managing & securing mobile devices we look at considerations for developing a rock-solid BYOD policy.

Before any organisation implements technology to secure mobile devices, it should already have a full set of rules that it wishes that technology to enforce. After all, in the event that company information goes missing, the blame sits firmly with the employer and not the employee.

Make no mistake: if employees are bringing their own smartphones and tablets to work, with or without the IT department’s blessing, then, as an employer, “you have a legal obligation to do something about it, whether you have established industry guidance to draw on or not.”

That’s the view of David Johnson, an analyst with IT market research company Forrester Research – but an outright ban on personally owned devices seldom provides an adequate answer, he writes in a blog post from earlier this year on navigating the legal and audit aspects of a BYOD policy initiative.


“The more restrictions you put in place, the more incentive people will have to work around them and the more sophisticated and clandestine their efforts will be,” he warns.

Johnson is based in Denver, Colorado, but the general advice to UK employers isn’t much different. Earlier this year, the Information Commissioner’s Office (ICO), the body charged with enforcing data protection laws in this country, issued new guidelines on ‘bring your own device’ (BYOD) policies.

“As the line between our personal and working lives becomes increasingly blurred, it is critical employers have a clear policy about personal devices being used at work,” said Simon Rice, group technology manager at the ICO. Employers, he added, “should not underestimate the level of effort which may be required… Remember, it is the employer who is held liable for any breaches under the Data Protection Act.”

Visit Cyber Security EXPO co-located with IP EXPO Europe to learn more about managing and securing mobile devices.

But a “fair and reasonable” BYOD policy can go a long way to ensure that an employer is viewed more favourably by the authorities, should it run into legal problems further down the line, says Jo Davis, an employment partner at law firm BP Collins. “A reasonable, binding policy on BYOD [can] protect both businesses and employees, ensuring that all risks are addressed and managed effectively,” she says.

Technology can go a long way to help, with software such as mobile device management (MDM) offering a way to apply rules to how smartphones and tablets are used, whether they’re company issued or personally owned – but a clear BYOD policy is a prerequisite to knowing which rules should be applied and how. (For more information on the products and vendors available in this crowded market, see the first article in this series on Managing and Securing Mobile Devices.)

Or, as management consultants at Deloitte put it: “An approach that starts with defining your BYOD objectives and assessing your risks can help you navigate the multitude of BYOD management pitfalls.”

They have identified nine BYOD policy considerations that should be agreed on, before the IT team gets to work on identifying and implementing the technology it needs to support the policy:

1. Activation: What is the process for enabling a new employee with a device?
2. Device management: How will devices be remotely managed? What level of centralised control will exist? What level of management will be done at the end-point (for example, containerisation)? How will devices be locked, wiped and restored?
3. Lost/stolen devices: What happens when a device is lost, stolen or damaged? What process should the employer follow for reporting the event and obtaining support? Will the device be remotely wiped?
4. Support: What kind of support, and how much, can a user expect from your organisation?
5. Acceptable use: What kinds of devices, platforms, applications, services and accessories are allowed under the BYOD programme?
6. Reimbursement: Who pays for the initial device? What level of stipend is available? Is it consistent across all eligible users? Is it available recurrently – in other words, is it refreshed every two years, for example? What will be reimbursed?
7. Privacy: How will employee privacy be protected? Will your support group have access to personal information?
8. Policy violations: How will policy violators be dealt with? Will BYOD policies contradict or conflict with other policies (for example, HR policies on employee responsibilities, overtime and so on)?
9. Eligibility: Who is eligible for the BYOD programme? What roles, levels and so on are eligible and in what way (for example, is there tiered eligibility)?
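As a rough illustration, the nine areas above can be treated as a checklist that a draft policy is audited against before any technology work begins. This sketch (with invented draft entries) simply flags the areas a draft doesn't yet cover:

```python
# Hypothetical sketch: Deloitte's nine BYOD policy areas as a checklist.
# Area names follow the list above; the draft policy content is invented.

POLICY_AREAS = [
    "activation", "device management", "lost/stolen devices", "support",
    "acceptable use", "reimbursement", "privacy", "policy violations",
    "eligibility",
]

def missing_areas(draft_policy):
    """Return the policy areas the draft does not yet address."""
    return [area for area in POLICY_AREAS if area not in draft_policy]

# An early draft covering only three of the nine areas:
draft = {
    "activation": "HR approves request; MDM profile enrolled on day one",
    "acceptable use": "iOS and Android phones and tablets only",
    "privacy": "support staff cannot read the personal partition",
}

print(missing_areas(draft))  # the six areas still to be agreed
```

A check like this is deliberately dumb: it only proves every heading has been discussed, not that the answers are any good, but it keeps the policy conversation ahead of the tooling decisions.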

After all, technical controls are only one part of a viable BYOD strategy, as Johnson of Forrester Research makes clear. “Technology’s role is to help foster safe behaviours, control information access and verify ongoing compliance,” he writes – but, he adds, it should be able to achieve all that without “getting in the way of creativity, productivity, collaboration or other daily activities.”

Overview: Managing and securing mobile devices

25 Jul 2014
Jessica Twentyman

Easy to lose. Attractive to thieves. It’s no wonder that company bosses still have nightmares over the fact that employees are carrying around mobile devices holding a wealth of sensitive company information.

By 2017, around half of the world’s companies will have in place a ‘bring your own device’ (BYOD) policy, according to estimates released last year by IT market analyst company, Gartner.

Only 15 percent of companies will never move to BYOD, reckons Gartner analyst David Willis, with the remainder offering a choice between BYOD and employer-provided devices.

But whether devices are employee-owned or company-issued, the prospect of employees roaming the streets with insecure smartphones and tablets, packed with sensitive company information, is understandably enough to bring bosses out in a nervous rash.

In this chapter, we take a look at some of the practical and legal considerations that organisations embarking on a BYOD policy need to bear in mind when they seek to mitigate the risks involved. We also speak to one organisation, Solihull Metropolitan Borough Council, that has already taken the plunge. And we talk to leading IT expert, CA Technologies’ senior vice president and general manager Ram Varadarajan, about why companies need to focus on not just the risks, but also on the opportunities that increased mobility could bring.

That’s not to say, however, that the risks should be underestimated. They’re still the biggest concern among business leaders – and quite rightly so. Last year, almost ten million mobile devices holding such information were ‘mislaid’ by UK workers, according to a recent survey of 2,000 respondents, conducted by Vision Critical on behalf of mobile operator Everything Everywhere. Almost one in five (19 percent) admitted that they’d lost a device in the course of a work night out, while 16 percent confessed that their smartphone, tablet or laptop had continued a journey on public transport, long after their owners had alighted.

It’s these kinds of horror stories that have spurred uptake of mobile device management (MDM) technology in recent years. These products enable IT teams to manage access to information centrally, as well as confine company information to an encrypted ‘sandbox’ on the employee’s personal device.

Last year, Gartner predicted that the market for MDM products would grow to $1.6 billion in 2014, from around $784 million in 2013. But, in a hot market like that, consolidation is inevitable. Of the six companies that made it into the ‘leaders’ section of Gartner’s MDM Magic Quadrant in 2013 – AirWatch, Citrix, Fiberlink, Good Technology, MobileIron and SAP – two have already been snapped up, with Fiberlink being bought by IBM in November 2013 and VMware announcing its acquisition of AirWatch in January 2014.

At the same time, vendors are extending the capabilities of their products (albeit at varying rates) to incorporate functions for securing the applications and information those devices hold, too. That’s based on the thinking that, while there’s value in helping corporate customers to track, manage and, where necessary, remote-wipe devices that go astray, there’s even more value in helping them to manage the applications that employees use.

So for now, corporate IT teams can expect more market turbulence ahead – while the need to manage and secure mobile devices has never been more important, there’s still a great deal of uncertainty for this market.

IPv6 – Securing your network as you make the transition

24 Jul 2014
Stephane Perez
Stephane is a senior security engineer with Tufin, working with medium-sized to large companies in sectors such as banking, industry, security and telecommunications.

Take a sneak peek at the “to-do” list of many network architects and I’m sure you’ll see that “transition to IPv6” has remained on there for a while. Many have delayed because of the undoubted security challenges, and have continued to use IPv4 with NAT in the meantime. But with a little careful planning these concerns can be mitigated, and the task can finally be crossed off the list.

Initially it’s best to look at the most common misunderstandings surrounding IPv6. Networking professionals largely think of it as an advanced version of the familiar IPv4 protocol. This couldn’t be further from reality.

The IPv6 addressing scheme and address allocation mechanisms are fundamentally different from IPv4’s. It’s misleading to think of the two protocols as mere variants of IP when they only have the upper layers in common – the underlying mechanisms differentiate the two considerably. For example, it’s impossible to migrate from one protocol to the other without affecting the behaviour of the network, and a trivial one-to-one mapping of an IPv4 network onto IPv6 won’t yield the benefits IPv6 offers. That’s why it’s best to think of it as a transition rather than a simple migration.


Challenge #1: IPv6 multi-homing

The fundamental reason for implementing IPv6 was to relieve the scarcity of IPv4 address space.

IPv6 offers so many additional addresses that the space is virtually unlimited, and this requires special attention when designing your network security.

In a common IPv4 subnet you’ll find a /24 mask, which corresponds to a total of 254 usable host addresses. A typical IPv6 unicast subnet, however, has a /64 mask, which corresponds to 2^64 addresses. In short, a single /64 subnet is far larger than the entire IPv4 address space!
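The scale difference is easy to check with Python's standard `ipaddress` module. The two subnets below use reserved documentation ranges, not real allocations:

```python
# Compare the size of a typical IPv4 /24 subnet with an IPv6 /64 subnet.
import ipaddress

v4 = ipaddress.ip_network("192.0.2.0/24")   # IPv4 documentation range (RFC 5737)
v6 = ipaddress.ip_network("2001:db8::/64")  # IPv6 documentation range (RFC 3849)

v4_hosts = v4.num_addresses - 2  # minus the network and broadcast addresses
v6_hosts = v6.num_addresses      # 2**64 possible interface identifiers

print(v4_hosts)             # 254
print(v6_hosts)             # 18446744073709551616
print(v6_hosts > 2**32)     # one /64 exceeds the whole IPv4 space: True
```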

This additional space means that network design for IPv6 differs from its IPv4 predecessor. To reduce and manage the size of the IPv6 Internet routing table, telcos own IPv6 prefixes and assign them to clients. The prefixes are therefore easily aggregated and bound to an ISP or worldwide organisation.

For example, 2001::/23 may be allocated to one service provider and 2001:0200::/23 to another. A client served by both ISPs could therefore hold two IPv6 prefixes, one from each provider, and a specific host may likewise have two IPv6 addresses, one from each of the providers.
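The dual-provider situation can be sketched with Python's standard `ipaddress` module. The two /23 prefixes are the ones quoted above; the host addresses are invented, documentation-style examples:

```python
# A dual-homed host holding one address under each provider prefix.
import ipaddress

ISP_A = ipaddress.ip_network("2001::/23")      # first provider's allocation
ISP_B = ipaddress.ip_network("2001:200::/23")  # second provider's allocation

def provider_for(addr):
    """Name the provider whose prefix covers this address."""
    if addr in ISP_A:
        return "ISP A"
    if addr in ISP_B:
        return "ISP B"
    return "unknown"

# One address from each provider's prefix on the same host:
for host in ("2001:0:0:1::10", "2001:200:0:1::10"):
    addr = ipaddress.ip_address(host)
    print(host, "->", provider_for(addr))
```

Which source address the host picks determines which provider (and which firewall entry point) its traffic traverses, which is exactly the multi-homing wrinkle discussed next.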

From a security perspective, network architects need to factor in that this address scheme, with multiple prefixes, could mean that a single machine uses different routing paths in the network to and from the Internet. It could also mean traffic using different entry/exit points through the company’s perimeter firewalls! This can cause management issues, but by working with security teams when designing your network you’ll be able to spot holes in advance.


Challenge #2: Perimeter Security 

The next issue is setting out a clear separation between what is under the company’s responsibility and what’s not.

Under IPv4 there was a clear separation between private and public: RFC 1918 addressing (best practice for the Internet community) and NAT marked out what was inside. That distinction becomes more blurred for companies that use non-RFC-1918 IP addresses for their internal network, even if NAT is being used.

With IPv6, a host offers one (or many) routable IPv6 addresses, which are public and exposed to the world. To maintain privacy, the solution is tighter control at the perimeter firewall, so you can set a clear barrier between what’s inside and outside of your control.


Challenge #3: Maturity of the tool-set

The third challenge is that the IPv6 tool-set is not yet ready, and this is delaying the transition from IPv4. We’re in a situation where manufacturers aren’t developing full IPv6 support in their tools, while at the same time customers are waiting for the products to be mature enough to deploy.

The key to overcoming this is to plan a phased transition. The effort required to perform a full-scale transition from IPv4 to IPv6 is far too complex to be achieved in an uncontrolled manner. It makes much more sense to transition services one by one.

One way a phased transition can be achieved is by creating an “Internet-facing” IPv6 environment, which opens up the external resources to the world but keeps an internal network on a legacy IPv4 address space. This can be achieved by introducing a proxy-based or IPv6/IPv4 transition gateway as part of the security architecture, allowing internal IPv4 users to access the external IPv6 applications.
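The routing decision such a gateway implies can be reduced to a simple rule: traffic between like address families goes direct, while cross-family traffic is steered through the transition gateway. This is an illustrative model of that decision, not any vendor's configuration; all addresses are private or documentation ranges:

```python
# Sketch of the path decision behind an IPv6/IPv4 transition gateway:
# internal IPv4-only clients reach external IPv6 services via the gateway,
# while same-family traffic can go direct.
import ipaddress

def path_for(client, target):
    """Return 'direct' when client and target share an IP family,
    otherwise 'via transition gateway'."""
    c = ipaddress.ip_address(client)
    t = ipaddress.ip_address(target)
    return "direct" if c.version == t.version else "via transition gateway"

print(path_for("10.1.2.3", "192.0.2.10"))       # IPv4 to IPv4: direct
print(path_for("10.1.2.3", "2001:db8::80"))     # IPv4 to IPv6: via gateway
print(path_for("2001:db8::5", "2001:db8::80"))  # IPv6 to IPv6: direct
```

In practice the gateway is a proxy or translation device sitting in the security architecture, but the family check above is the decision every flow has to pass through.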

Undoubtedly, moving from IPv4 to IPv6 comes with major implications for network security. If you phase the transition you’ll be in a stronger position: Internet-facing environments first, then internal public networks, then internal private ones. Of course it will take longer (a few years) and will require monitoring and security tools capable of analysing threats on both IPv4 and IPv6 – but it can be done!

Will Iceland become the next global data centre hub?

23 Jul 2014
Puni Rajah

London, Amsterdam and Frankfurt may be big markets, but their high land values make further data centre expansion quite expensive. One alternative being mentioned increasingly often is Iceland, which on initial inspection appears very well placed to become the next global data centre hub.

Data Centre Hub Fundamentals: Low cost and high quality
Unlikely as it may sound, Iceland has huge advantages as a potential data centre hub. First of all, electricity is relatively cheap: most of Iceland’s electricity comes from hydro-electric or geothermal plants, using readily available natural resources. Early estimates suggest that the cost of a kilowatt of usable power is about $125 in Iceland, compared with $500 in Manhattan, which makes a huge difference to the running costs of data centres. The power-hungry aluminium smelting industry has traditionally taken advantage of this, and Icelanders are hoping that the almost equally power-hungry data centre sector might do so too.


Secondly, the climate is relatively cool all year round. There’s no need for air-conditioning to keep servers cool, as you can just open up the plant to the outside air. This also keeps costs down, as well as making facilities more environmentally friendly. The third advantage is location. Strategically placed in the middle of the Atlantic, Iceland is likely to attract customers from both Europe and North America. It also has relatively low crime and corruption, and a good supply of potential data centre employees and IT engineers. Taking all this together, PricewaterhouseCoopers concluded as far back as 2007 that these advantages mean Iceland could deliver relatively low-cost but high-quality data centre services.

Learn about Europe’s Data Centre Hubs at Data Centre EXPO, 8 – 9 October, London ExCel

Natural disadvantages
So what’s the problem? Why isn’t Iceland already a data centre hub? There are two main issues: bandwidth and taxes. Until quite recently, Iceland didn’t have the infrastructure to connect with data centres, but more suitable undersea cabling has now been installed. The previous 25% tax on imported servers has also been abolished, meaning that cost savings are realisable, rather than swallowed up by taxes.

There are other issues, of course, largely related to nature. Iceland is a fairly remote island, although some commentators have suggested that geographical isolation could be a plus from the physical security point of view. And then there are the volcanoes and earthquakes. Even if data centres are located well away from these areas, it’s still quite hard to persuade your potential customers that you won’t let them down when the earth moves until you’ve demonstrated it in practice.

The future for Icelandic data centres
The two companies already running data centres in Iceland, Verne and Advania Thor, report that customers are gradually starting to arrive. Many of them are local start-ups, such as Green Qloud in Reykjavik, but a steady trickle of customers from North America and Europe is starting to show an interest, including Datapipe, a large hosting and colocation provider. Google has established a company in Iceland, attributing the decision to Iceland’s location and its potential as a data centre hub.

The challenge now for Iceland is to expand before other locations like India and China get there first. And to do so, it needs to demonstrate that it’s got the infrastructure. Perhaps more importantly, it needs to demonstrate to potential customers that they won’t lose their data if there’s an earthquake, or one of the cables gets damaged. In these days of instant connectivity, everyone needs to know that there’s a back-up plan. And if your disaster recovery plan is the cloud, you need to know that your cloud-providing data centre has its own rock-solid back-up plan.

How the CIO, CISO and CSO roles are changing

21 Jul 2014
Paul Fisher
Paul Fisher is the founder of pfanda - the only content agency for the information security industry. He has worked in the technology media and communications business for the last 22 years. In that time he has worked for some of the world’s best technology media companies, including Dennis Publishing, IDG and VNU. ​ He edited two of the biggest-selling PC magazines during the PC boom of the 1990s; Personal Computer World and PC Advisor. He has also acted as a communications adviser to IBM in Paris and was the Editor-in-chief of (now and technology editor at AOL UK.  In 2006 he became the editor of SC Magazine in the UK and successfully repositioned its focus on information security as a business enabler. In June 2012 he founded pfanda as a dedicated marketing agency for the information security industry - with a focus on content creation, customer relationship management and social media Paul is the Editorial Programmer for Cyber Security EXPO which runs alongside IP EXPO Europe, 8-9 October, ExCel, London.

Rick Howard, CSO at Palo Alto Networks, will be appearing at Cyber Security EXPO in a keynote presentation that analyses how the CISO and CIO fit into the C-Suite.

As a taster, here are some of his thoughts on the changing roles of senior enterprise security people.

1. What has changed about the job description of the CISO in 2014 compared to recent years?

The job description for the people that are responsible for security within an organization has been in a state of flux for over a decade. Since Steve Katz became the first CISO back in 1995, the security industry specifically, and business leadership in general, have been thinking and rethinking the need for such a person and the responsibilities that they should have.

Citigroup became the first commercial company to recognize the need for the brand new corporate CISO role when they responded to a highly publicized Russian malware incident. As cyber threats continued to grow in terms of real risk to the business and in the minds of the general public, business leaders recognized the need to dedicate resources to manage that risk.

The first practitioners came out of the technical ranks; the IT shops. Vendor solutions to mitigate the cyber threat ran on networks and workstations. In order to manage those solutions, it was helpful to have people who understood that world. But this was a new thing for the techies; trying to translate technical risk to a business leader did not always go very well. It became convenient to tuck these kinds of people underneath the CIO organization.

CISOs began working for the CIO because, from the C-Suite perspective, all of that technical stuff belonged in one basket. As business leaders began applying resources to mitigate cyber risk, other areas of security risk started to emerge: physical security, compliance, fraud prevention, business continuity, safety, ethics, privacy, brand protection, etc.

The Chief Security Officer (CSO) role began to get popular with business leaders because they needed somebody to look at the entire business; not just cyber security risk to the business but general security risk to the business. CSO Magazine launched in 2002 to cater to that crowd. Since then, the industry has been in flux. Not every company organizes the same way. While the Chief Security Officer (CSO) has made its way to the executive suite in some companies (Intel Corp and McAfee to name two), that is by no means the norm.

2. Will the CISO (Chief Information Security Officer) become a distinct role? Will it become more or less common, and why? What does this role now encompass?
The CISO role has emerged in the last five years as the de facto role to manage cyber security. If there isn’t somebody in the organization with the title of CISO, there is somebody in charge of IT security. This person generally works for the CIO, but not in all cases.

From speaking with many CISOs, CSOs, and CIOs, it seems the community has decided that the IT groups handle the day-to-day IT operations while the security groups have much more of an oversight role: Risk Assessment, Incident Response, Policy, etc. This means that the IT groups keep the firewalls up and running while the security groups are monitoring the logs and advising the CIO on security architecture and policy. Let me just say that I don’t think this is the right model either.

In this modern world, I do not believe that security should be subservient to operations in all cases. Yes, the company has to keep its servers operational, but that does not imply that if push comes to shove, security is the first thing that we turn off in order to maintain operations. For companies that understand risk to the business, security and operations are peers.

3. Is it right that physical and digital security should be merged under one organizational umbrella or should they be kept separate?
I understand why organizations have these two separate security groups. Before the Internet days, there was no CISO function. There was a physical security function, but it was usually relegated to the bottom of the leadership chain.

You needed guards and fences and things like that, but those kinds of operations were more like commodity items, like power to the building or trash pickup. You needed them, but once you established them, they did not materially affect the business even if they failed for a day or two. Because of this, physical security tended to fall under the Facilities Management groups.

With the Internet of Things, though, the situation has changed. Everything is interconnected. Just like every other organization in the business, the physical security groups have a lot of IT security components (badges, surveillance cameras, etc.). These groups and their electronic tools could still operate by themselves, but it makes sense for business leadership to task somebody in the company with making sure that these tools are compatible with the approved security architecture plan.

In my mind, that is the CSO organization. Just like the idea that there is no such thing as cyber risk to the business, only risk to the business, I don’t think there is a need for separate cyber security and physical security teams. It is all security. Just for ease of management, it makes sense to keep it all under one umbrella. My perfect organization would have a CSO in charge of all security of the company. The CISO would work for him with a dotted line to the CIO. The Physical Security Director would also work for the CSO but would have a close working relationship with the CISO.

4. What skills and qualities should companies be looking for in a CSO going forward? Is the next generation about to enter the workforce going to be equipped for the role? Is the skill set broadening or narrowing?
I still believe the CSO should come up from the technical ranks. Today’s world is so complicated technically that if you do not have that background, you will be completely overrun by the latest security trend. The CSO skill that has to be learned though is how to translate that technical knowledge into something that a business leader will understand or care about.

Join Rick on Wednesday 8th October, 14:20 – 14:50, in the Cyber Security EXPO Keynote Theatre.



IP EXPO Europe Launches “Futures Den” Start-Up Fund

21 Jul 2014
Olivia Shannon
Olivia is an award-winning writer and technology PR and social media expert. She has worked on PR, social media and content marketing campaigns in multiple industries, but mostly information technology. American by birth, Olivia earned a bachelor's degree in Creative Writing from Beloit College, summa cum laude and with English departmental honours, and she is a member of the Phi Beta Kappa academic honorary society. Before launching her PR career, Olivia worked as a writing tutor and in the elections division of the Office of the Missouri Secretary of State. She is interested in writing about enterprise technology, start-ups, and the way technology transforms business and communication. Olivia Shannon is an editorial specialist and the co-director of Shannon Communications, an enterprise technology public relations firm.
Futures Den @ IP EXPO

For the first time, enterprise technology start-ups have a chance to exhibit at IP EXPO Europe free of charge. The organisers of Europe’s leading cloud and IT infrastructure event have created a “start-up fund” that will give early-stage businesses free exhibition and marketing packages in the start-up focused “Futures Den” feature at this year’s event, held on the 8th and 9th of October at the ExCeL Centre London. To apply, start-ups should enter by 31st July by filling out the start-up fund application form.

Altogether, the estimated prize value of IP EXPO Europe’s new start-up fund is nearly £60,000. Ten enterprise technology start-ups will receive complete exhibition and marketing packages valued at over £5,000 each. A further five start-ups will receive a free Futures Den speaking opportunity worth £1,500 each. Five more start-ups will receive free entries to the Tech Trailblazers Awards worth £175 each.

Access to the start-up fund is open to privately funded companies under five years old, whose products or services fall into one of the following enterprise technology sectors: big data, cloud, mobile, networking, security, storage, sustainable IT and virtualization.

Launched last year, IP EXPO Europe’s Futures Den gives enterprise technology start-ups opportunities “to connect with potential partners, distributors and end-users, and for IT decision-makers to gain insight into newly developed and future technology,” says a press release about the new start-up fund. According to the event organisers, the Futures Den puts start-ups in front of “over 15,000 visitors responsible for building, running and protecting IT infrastructures at European businesses and governments.”

This year’s Futures Den agenda will include panel discussions, open networking and five-minute start-up pitches, with panellists including experts from VCs and accelerators, successful enterprise technology start-ups, and marketing, legal and accounting firms.

To enter IP EXPO Europe’s start-up fund, please visit

IP EXPO Europe On Demand Panel: Public, Private or Hybrid Cloud

18 Jul 2014
Mike England
Mike is the Content Director at Imago Techmedia which runs IP EXPO Europe, Cyber Security EXPO and Data Centre EXPO
IP EXPO Europe On Demand Panel - Public, Private or Hybrid Cloud

In the age of ‘cloud-first’ IT strategies, how are businesses deciding where and how to host their workloads? How do they decide between public and private cloud – and how can they avoid making the wrong choice? Finding answers to these tricky questions looks set to be a popular theme at IP EXPO Europe 2014.

View the first in this exclusive series of IP EXPO Europe panel debates, in which Consulting Editor Jessica Twentyman is joined by Kate Craig-Wood, co-founder and managing director of cloud hosting company Memset, and Peter Mansell, sales manager for HP Helion at systems, software and services giant Hewlett-Packard as they explore the Public, Private or Hybrid Cloud.

Viewers will learn:

- the difference between public and private cloud computing and how organisations choose between them;
- why the hybrid cloud model may offer the best of both worlds, and how a cloud platform like HP Helion can help;
- how leading cloud hosting companies such as Memset are able to advise customers on the best cloud model for their own workloads – and offer all three;
- how smart companies are tackling the cloud integration challenge, stitching cloud systems together to create a coherent whole.

Is it time to think about disaster recovery as a service?

16 Jul 2014
Jessica Twentyman
Jessica Twentyman is an experienced journalist with a 16 year track record as both a writer and editor for some of the UK's major business and trade titles, including the Financial Times, Sunday Telegraph, Director, Computer Weekly and Personnel Today.
Disaster recovery via Hybrid Cloud

Continuing the series on Building the Hybrid Cloud, we speak to IP EXPO Europe 2014 speaker, Joe Baguley, chief technology officer for EMEA at VMware, about why organisations are increasingly turning to hybrid cloud technology for disaster recovery.

In mid-April, virtualisation specialist VMware announced VMware vCloud Hybrid Service – Disaster Recovery, a service that provides customers with a continuously available recovery site if their own, on-premise VMware-based environments run into trouble.

The new disaster recovery service offers a recovery point objective (RPO) of 15 minutes, at prices starting at $835 per month. The aim, according to VMware executives, is to provide customers with “a simple, automated process for replicating and recovering critical applications and data in a warm standby environment at a fraction of the cost of duplicating infrastructure or maintaining an active tertiary data centre.”
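For readers less familiar with the metric, an RPO caps worst-case data loss: a 15-minute RPO means replication must run frequently enough that no more than 15 minutes of changes can ever be lost. A minimal sketch of that relationship (the function names here are ours for illustration, not part of VMware's service):

```python
# Illustrative sketch: an RPO (recovery point objective) bounds data loss by
# capping the time between replication snapshots of the protected environment.
def max_data_loss_minutes(replication_interval_min: int) -> int:
    """Worst case: disaster strikes just before the next snapshot runs."""
    return replication_interval_min

def meets_rpo(replication_interval_min: int, rpo_min: int = 15) -> bool:
    """A replication schedule satisfies an RPO if its worst-case loss fits within it."""
    return max_data_loss_minutes(replication_interval_min) <= rpo_min

print(meets_rpo(10))  # True  - replicating every 10 minutes fits a 15-minute RPO
print(meets_rpo(60))  # False - hourly replication could lose up to an hour of changes
```

In other words, the advertised 15-minute RPO is a promise about replication frequency, not about how quickly systems come back online (that is the separate recovery time objective, or RTO).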

For some customers, it may also represent an opportunity to protect applications that have previously been omitted from disaster recovery plans for reasons of cost and/or complexity. As we heard in Article 1 of this chapter, disaster recovery is fast emerging as an important hybrid-cloud use case. We sat down with Joe Baguley, chief technology officer for EMEA at VMware, to discuss the new service and to ask him: is the way that companies look at hybrid cloud deployment starting to mature?

Q: So, Joe, what’s the thinking behind VMware vCloud Hybrid Service – Disaster Recovery?

A: Our thinking here is simple: hybrid cloud should be seamless. It shouldn’t be a migration – just a seamless click within your existing environment. The whole point with this service is to provide an easy and simple way to back up an existing VMware environment to the cloud, so that in the event of a disaster hitting their own data centre, a customer can quickly spin up those environments in the cloud.

Q: And where exactly will those systems run – in VMware’s own data centres?

A: Sort of. What we do as a company is rent data centre space from the likes of Equinix and Savvis, but we use that space to provide a managed service to VMware customers that is entirely operated by us. Plus, there’ll also be a wide range of VMware partners offering the service to customers.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

Q: So what does the introduction of this new disaster recovery service tell us about wider market demand for hybrid cloud services?

A: I think it’s fair to say that, initially, hybrid cloud was largely seen as a good place for dev/test environments [development and testing environments where software developers would build and try out new applications before deploying them in-house]. But many of our customers have quickly identified the hybrid cloud’s potential for disaster recovery (DR), too. We’ve actually found some interesting cases where customers come and stand up DR in our hybrid cloud, do a failover test and then realise that the system runs better in our hybrid cloud than it does on their own premises! A few have even switched to using our cloud as their primary site and their own for Disaster Recovery.
Others have said they were interested in the idea of ‘cloud-bursting’, so that on-premise apps can take advantage of our capacity during periods of peak demand – but many have quickly found that an awful lot of recoding of applications was needed in order to enable them to do that. But where we are now with hybrid cloud is that, because we’re standing up exactly the same technology stack in our data centres as we sell to customers to run in theirs, it’s a relatively trivial thing to migrate workloads between the two – they’re effectively the same environment. As a company, VMware has around 45 million VMs [virtual machines] running in customer sites today – and we’re giving those customers a place where they can ‘drag and drop’ those VMs if they need to do so. As we discussed in this chapter’s article on Hybrid Cloud Considerations, cloud bursting is another emerging hybrid-cloud use case.

Q: So where does that leave the whole issue of cloud interoperability and the industry standards effort that’s going on around hybrid cloud computing?

A: Well, it’s an interesting question. My answer would be that there are still many cloud interoperability issues to tackle and several major standards efforts underway – but when I sit down day to day with our customers, a lot of them say to me: “We’d love to pick an industry standard and go with it – but we can’t choose one yet, it’s just far too early.”
The one standard they do know – or at least, a de facto standard – is that they already run VMware today, so for now, that’s the technology they’ll stick with. They don’t want to pick an industry standard that’s not going to win the race in the long term.
But that’s not to say that VMware isn’t part of the wider IT industry standards effort. We’re proud to say we have a very high commit-rate to OpenStack and we are 100% backing OpenStack as we go forward. Customers will see us develop more and more interoperabilities as that platform develops but, at the moment, it’s still early days. That’s just the way it is in this industry: technology development moves faster than any standards body can.

Q: But isn’t that luring customers into putting all their eggs in one basket – a VMware branded basket?

A: It’s also about having ‘one throat to choke’: customers can buy their licenses for on-premise and their credits for hybrid cloud in a single transaction, get the same support line and avoid the need to make changes in personnel and skills. We’re working to make hybrid cloud a natural extension of all the stuff they already do.

Joe Baguley will be presenting as part of the IP EXPO Europe seminar programme.

Action for Children cuts IT costs through hybrid cloud

15 Jul 2014
Jessica Twentyman
Action for Children

Using a mix of private and public-cloud resources enables charity to achieve goals of data confidentiality and scalable IT.

Action for Children (AfC) has been in the news recently. The charity is campaigning for an update to UK child neglect legislation and the introduction of a so-called ‘Cinderella Law’.

Behind the scenes, meanwhile, and on a day-to-day basis, the charity handles some of the most confidential data imaginable, relating to some of the UK’s most vulnerable and disadvantaged children and young people and their families and carers.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

Much of that data – around 60 percent of AfC’s overall data storage – must, by law, be kept on the systems held on the charity’s own premises, explains Darren Robertson, data scientist and head of digital communications at AfC. As we discussed in the Hybrid Cloud Considerations article of this chapter on Building the Hybrid Cloud, this is an area of concern – and of hybrid-cloud potential – for organisations across a wide range of sectors, including charities and government departments.

Other kinds of data – details of donations, fundraising activities and projects underway around the country – must still be kept safe, but can at least be hosted by a trusted third party. Here, AfC uses a private-cloud environment provided by Rackspace, which also hosts AfC’s website on its public-cloud infrastructure. Many of the databases held in the private cloud environment, Robertson says, feed the public website, allowing visitors to look up, for example, the locations of children’s centres and project locations around the country.

This hybrid environment enables AfC to balance its need to maintain confidentiality with its focus on costs.

“As a charity, we have to keep a very close eye on costs – and, in this sector, we’re far from alone in that. It’s become quite apparent to charities that internal servers are expensive to run – so why would we want to do that, when it’s not always necessary?”

“By working with a provider to host certain types of information, we don’t have to worry whether a crowded server room is running at the right temperature, are systems patched regularly, does a particular component need replacing? We only need to ask those questions about the systems that host data that we’re absolutely obliged to keep in-house. We can devolve responsibility to Rackspace for the rest.”

AfC began looking at options for a hosting environment in April 2012 and completed its migration to Rackspace’s data centre in October of that year. Using an entirely public-cloud environment, says Robertson, was out of the question: “There’s a lot of nervousness within the charity sector around the public cloud,” he says, “but a hybrid cloud environment enables us to address those concerns by mixing public and private cloud.”

It also means that, if traffic to AfC’s public-facing website suddenly spikes – at times when it is actively campaigning for changes in legislation, for example, or if a celebrity tweets about its work – it can quickly tap into Rackspace’s extra hardware resources for that period, paying only for the extra capacity it consumes, rather than lifting the whole website to a larger dedicated server. This is what it previously had to do, Robertson says, and it meant that the charity was unable to update the website during those peak periods.

As for the charity’s on-premise IT investment, “it’s still pretty similar for now, but things are winding down,” says Robertson. “We have over 72 data systems within the organisation and you need to remember that, as an organisation, we’re 150 years old, so some of it is pretty antiquated.”

The goal now, he says, is to get some of those on-premise systems into a sufficiently stable state to migrate them to the Rackspace private-cloud environment, “but the idea is that, over time, now we’ve proven that hybrid cloud works for us, we can start to move more and more stuff across. Looking after it ourselves involves too much time and cost, and as a charity, we will never be able to employ the same numbers or quality of staff that Rackspace has on its server team – so why would we not let them take the strain?”

“I strongly believe that it’s our early use of hybrid cloud that will allow us to transform our IT environment over the next few years, freeing up time and money at Action for Children that might be better spent on changing children’s lives for the better.”

Read on to find out: Why organisations are increasingly turning to hybrid cloud technology to address their disaster recovery needs

Hybrid cloud deployment – considerations

14 Jul 2014
Jessica Twentyman
Planning for a Hybrid Cloud Deployment

Following on from our introduction to Building the Hybrid Cloud, before identifying use cases for hybrid cloud deployment, IT teams need to take security, connectivity and portability into account.

It’s now almost a year since Microsoft executive Marco Limena, vice president of the software giant’s hosting service providers business, declared that 2013 and 2014 would be the “era of the hybrid cloud”.

And in March this year, the company unveiled a survey of over 2,000 IT decision-makers worldwide that seems to validate his claim. According to the study, conducted on Microsoft’s behalf by IT analyst firm the 451 Group, 49 percent of respondents said that they had already implemented a hybrid cloud deployment.

Of these, 60 percent had integrated an on-premise private cloud with a hosted private cloud. Forty-two percent, meanwhile, had combined an on-premise private cloud with a public cloud, and 40 percent had combined a hosted private cloud with public cloud resources.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

According to Limena, “hosted private cloud is a gateway to hybrid cloud environments for many customers.” The study shows, he added, that “it’s clear we’ve reached a tipping point where most companies have moved beyond the discovery phase and are now moving forward with cloud deployments.”

So what do organisations need to bear in mind when considering a hybrid cloud deployment? According to IT experts, executing a successful deployment is all about identifying the specific workloads that might benefit most from the hybrid cloud. But first, three considerations need to be taken into account.

The first is security: IT organisations should ensure that the security extended to a workload running in a private cloud can be replicated in the public cloud.

The second is connectivity: data flowing between private and public cloud resources should be kept confidential. A typical hybrid cloud deployment would achieve this using a VPN [virtual private network] connection or dedicated WAN link.

The third is portability – and in the absence of truly mature standards in hybrid cloud computing, this may prove the toughest nut to crack. It’s about ensuring technological compatibility between private and public cloud environments. In other words, organisations must be able to ‘lift and shift’ workloads between the two, knowing that they share the same operating systems and application programming interfaces [APIs], for example.
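The portability consideration above can be made concrete with a pre-migration check: before 'lifting and shifting' a workload, confirm that every attribute it depends on matches in both environments. This is a generic sketch, not any vendor's tooling; the environment attributes and the `portable` helper are invented for the example:

```python
# Hypothetical pre-migration check: a workload is 'lift and shift'-portable
# only if both cloud environments agree on every attribute it depends on.
def portable(workload_needs: dict, private_env: dict, public_env: dict) -> bool:
    """True if each required attribute matches in both environments."""
    return all(
        private_env.get(key) == public_env.get(key) == value
        for key, value in workload_needs.items()
    )

private_env = {"os": "linux", "api": "v2", "hypervisor": "esxi"}
public_env = {"os": "linux", "api": "v2", "hypervisor": "esxi"}
print(portable({"os": "linux", "api": "v2"}, private_env, public_env))  # True

public_env["api"] = "v1"  # the public provider exposes an older API version
print(portable({"os": "linux", "api": "v2"}, private_env, public_env))  # False
```

In practice the attributes being compared would be things like hypervisor version, machine image format and API surface, which is exactly why running the same stack on both sides (as some vendors advocate) makes the check trivially pass.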

Current use cases for hybrid cloud deployment, meanwhile, include:

1. Development/testing workloads

Early stage applications are a great use case for hybrid cloud deployment, says Stuart Bernard, European cloud sales leader at IT services company CSC. “You may have an application development team that needs an environment it can spin up, use, consume and spin down quite quickly,” he says. “The frustration they have with traditional IT services is that, by the time they’ve requested that environment and it’s been made available to them, there’s been a considerable delay – and the one thing that application developers hate is a delay.” That, he adds, leads to ‘shadow IT’ procurement, as development teams get their credit cards out and buy the environment they need in the public cloud. But once they’re completely satisfied with the application they’ve built there, they then face the challenge of shifting it back into the private cloud, in order to give that application the governance and control in production that the organisation requires. A hybrid cloud environment makes it possible to use public cloud resources for a new, untested application, before migrating it back into a private cloud, once a steady-state workload pattern has been established.

2. Disaster recovery

Duplicating a private cloud environment in a secondary data centre comes with considerable costs, requiring at least twice the expenditure on IT equipment and data-centre space. According to Joe Baguley, EMEA chief technology officer at VMware, more companies are now turning to hybrid cloud for disaster recovery, so that if their on-premise private-cloud environment hits the skids, they can quickly migrate virtual machines from that environment to run in a geo-redundant cloud set-up delivered by a third-party provider.

3. Cloudbursting

The term ‘cloudbursting’ is increasingly used to refer to a situation where workloads are migrated to a different cloud environment to meet capacity demands. This might happen to an ecommerce shopping app in the run-up to Christmas, for example, or a charity running a high-profile campaign that gets a lot of media coverage. In a hybrid cloud environment, the steady-state application would be handled by a private-cloud environment, with spikes in processing requirement passed to on-demand resources located in the public cloud.
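The routing decision behind cloudbursting can be sketched in a few lines. This is a deliberately simplified illustration (the capacity numbers and the `route_requests` function are hypothetical, and real implementations balance at the level of VMs or containers, not raw request counts):

```python
# Simplified cloudbursting sketch: the private cloud absorbs steady-state
# demand; anything above its capacity overflows ('bursts') to the public cloud.
def route_requests(demand: int, private_capacity: int) -> dict:
    """Split incoming demand between private and public cloud resources."""
    private = min(demand, private_capacity)      # baseline stays on-premise
    public = max(0, demand - private_capacity)   # overflow goes to public cloud
    return {"private": private, "public": public}

# Quiet period: everything fits in the private cloud.
print(route_requests(800, 1000))   # {'private': 800, 'public': 0}
# Seasonal spike: the excess bursts to on-demand public capacity.
print(route_requests(2500, 1000))  # {'private': 1000, 'public': 1500}
```

The economic appeal is visible even in this toy version: the organisation pays for public-cloud capacity only when `public` is non-zero, instead of sizing its private environment for the Christmas peak.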

4. Meeting regulatory requirements

Many organisations handle data that must be kept in-house, according to legal or regulatory requirements – but parts of the application that collect and/or process that data (online forms, for example) can run in a public cloud to improve performance and scalability, while keeping costs low.

As we see in the next article, this is an issue that charity Action for Children tackled head-on in their hybrid cloud deployment.

Sir Tim Berners-Lee to keynote IP EXPO Europe

10 Jul 2014
Mike England
Sir Tim Berners-Lee

Sir Tim Berners-Lee to open IP EXPO Europe

In his first address to the core IT community of IP EXPO Europe, Sir Tim will outline his 2050 vision for the web and how businesses will use it for competitive advantage.

Above all, he’ll look at how businesses – and the people who lead them – will shape the next phase of the web’s remarkable development.

The speech will mark the 25th anniversary of the first draft of a proposal for what would become the World Wide Web. To mark this milestone, Sir Tim Berners-Lee will share his vision for successful business on the Web – from predicted challenges and the technology businesses will use to overcome them, through to the key innovations that will help drive future success, improve customer experience and create new markets.

Sir Tim comments: “I am greatly looking forward to addressing key decision makers in European business at IP EXPO Europe. The issues that will shape the future of the web – from privacy and data regulation, to sustainability and responsibility – don’t just touch our businesses, they touch our lives, and it’s thrilling that we are all at the very epicentre.”

Sir Tim’s opening keynote will be followed by an impressive programme from leading technology influencers, including Cloudera’s Doug Cutting, creator of Hadoop, and Mark Russinovich, Technical Fellow at Microsoft.

To share in the excitement and to mark the 25th anniversary of the World Wide Web, REGISTER NOW for this free-to-attend one-off session to hear Sir Tim Berners-Lee’s keynote.

About Sir Tim Berners-Lee

Sir Tim Berners-Lee invented the World Wide Web in 1989 while working as a software engineer at CERN, the large particle physics laboratory near Geneva, Switzerland. With many scientists participating in experiments at CERN and returning to their laboratories around the world, these scientists were eager to exchange data and results but had difficulties doing so. Tim understood this need, and understood the unrealized potential of millions of computers connected together through the Internet.

Tim documented what was to become the World Wide Web with the submission of a proposal specifying a set of technologies that would make the Internet truly accessible and useful to people. Despite initial setbacks and with perseverance, by October of 1990, he had specified the three fundamental technologies that remain the foundation of today’s Web (and which you may have seen appear on parts of your Web browser): HTML, URI, and HTTP.

He also wrote the first Web page editor/browser (“WorldWideWeb”) and the first Web server (“httpd”). By the end of 1990, the first Web page was served. By 1991, people outside of CERN joined the new Web community, and in April 1993, CERN announced that the World Wide Web technology would be available for anyone to use on a royalty-free basis.

Since that time, the Web has changed the world, arguably becoming the most powerful communication medium the world has ever known. Whereas only roughly one-third of the people on the planet are currently using the Web (and the Web Foundation aims to accelerate this growth substantially), the Web has fundamentally altered the way we teach and learn, buy and sell, inform and are informed, agree and disagree, share and collaborate, meet and love, and tackle problems ranging from putting food on our tables to curing cancer.

In 2007, Tim recognized that the Web’s potential to empower people to bring about positive change remained unrealized by billions around the world. Announcing the formation of the World Wide Web Foundation, he once again confirmed his commitment to ensuring an open, free Web accessible and meaningful to all where people can share knowledge, access services, conduct commerce, participate in good governance and communicate in creative ways.

A graduate of Oxford University, Tim teaches at Massachusetts Institute of Technology as a 3Com Founders Professor of Engineering and in a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL. He is a professor in the Electronics and Computer Science Department at the University of Southampton, UK, Director of the World Wide Web Consortium (W3C), and author of Weaving the Web and many other publications.

Sir Tim Berners-Lee will present the opening keynote at IP EXPO Europe on Wednesday 8th October at 10am.

Building the Hybrid Cloud

10 Jul 2014
Jessica Twentyman
Hybrid Cloud

In this chapter on Building the Hybrid Cloud, we take a look at some of the practical considerations that organisations embarking on the hybrid cloud journey need to bear in mind.


We also speak to one organisation, charity Action for Children, that has already taken the plunge. And we talk to leading IT expert, VMware’s EMEA CTO Joe Baguley, about the conversations he’s having with customers around hybrid cloud.

Could combining private- and public-cloud resources into a hybrid cloud prove the best recipe for companies looking to achieve the goal of optimal scalability, flexibility and IT efficiency?

First they went public, then they went private. Now, organisations are looking to hybrid cloud deployment as a way to mix both public and private cloud resources to get the maximum benefits of almost limitless scalability, increased flexibility and the most efficient use of IT.

Learn more about building the Hybrid Cloud at IP EXPO Europe 2014

According to a recent Gartner study, nearly half of large enterprises will have deployed a hybrid cloud by the end of 2017. The growth of hybrid cloud computing, say analysts at the firm, will only serve to fuel more general adoption of cloud computing as a model for IT deployment – and they expect this overall cloud spend to account for the bulk of IT spending by 2016.

In the US, the National Institute of Standards and Technology (NIST) defines hybrid cloud as a composition of at least one private cloud and at least one public cloud. At most companies, this involves integrating a private-cloud deployment (located on their own premises or hosted on their behalf by a third-party provider) with the resources of a public-cloud provider.

The market itself, then, is a complex mix, largely comprising the major IT vendors that provide companies with data-centre infrastructure; hosting providers; and systems integration firms that help them knit private and public cloud resources together.

That third group is important: while a hybrid approach promises “cost savings and significant gains in IT and business flexibility”, according to a 2013 report from analysts at Forrester Research, “some concerns remain around how to manage and integrate on-premises infrastructure with cloud services in a hybrid cloud architecture.”

In terms of the challenges that IT decision-makers face, Forrester’s survey of over 300 companies in the US and Europe shows that two security challenges are ‘top of mind’ for respondents: ensuring the consistency of security policies between the on-premises environment and the service provider (cited by 46 percent) and securing communication and data-sharing between the two (cited by 45 percent). In addition, some applications may need to be re-architected to run in a hybrid environment – and the challenges around the connectivity needed to link public and private clouds are substantial.

“IT decision-makers will look to find solutions to these challenges with existing tools and skills — or explore new offerings that make it easier to address the challenges of using a hybrid cloud strategy,” the Forrester analysts predict.

At Gartner, meanwhile, analyst Dave Bartoletti remains adamant: “Hybrid is indeed the cloud architecture that will dominate… there will likely be very few private clouds that don’t have a hybrid component.”

Read on to find out: What considerations do organisations need to take into account before identifying suitable use cases for hybrid cloud deployment?

Big Data: Time for a new approach to analysis

13 Jun 2014
Kevin Spurway

The Big Data problem is accelerating as companies get better at collecting and storing information that might yield business value through insight or improved customer experiences. It used to be a small, specialist group of analysts that was responsible for extracting that insight, but this is no longer the case. We are standing at a nexus between Big Data and the demands of thousands of users – something we call “global scale analytics” at MicroStrategy. The old architectural approaches are no longer up to the task, and this new problem needs radically new technology. If companies continue with the old approach, Big Data will fail to reach its true potential and simply become a big problem.

Analytics applications now regularly serve the needs of thousands of employees, and a single employee may need access to hundreds of visualisations, reports and dashboards. The application must be ready for a query at any time, from any location, and the results must be served to the user with ‘Google-like’ response times; employees’ experience of the web is the benchmark by which they judge application responsiveness at work.

With this huge rise in data and user demands, the traditional technology stack simply can’t cope: it has become too slow and expensive to build and maintain an analytics application environment. Sure, there are some great point solutions, but the problem lies in the integration between every part of the stack – the stack only performs as well as its weakest link.

The industry has been working to solve only half the problem – data collection and storage – rather than looking at the full picture, which also includes analytics and visualisation. Loosely coupled stacks scale poorly and impose a huge management and resource overhead on IT departments, making them uneconomical and inflexible.

Tackling the end-to-end Big Data analytics problem requires an architecture that tightly integrates each level of the analytics stack and takes advantage of the commoditisation of computing hardware, so that analytics can scale with near-perfect linearity and economies of scale, delivering sub-second response times on multi-terabyte datasets.

MicroStrategy has become the first, and currently only, company to make this approach a commercial reality, tightly integrating data, analytics and visualisation. PRIME is a massively parallel, distributed, in-memory architecture with a tightly integrated dashboard engine. Companies like Facebook are using PRIME to analyse billions of rows in real time, proving that this new approach is pushing the point solutions of the past out of the spotlight.

Regardless of your application, if you have thousands of users, exploding data collection, highly dimensional data, complex visualisations or a globally distributed user base, then the Big Data problem will keep getting bigger. Every day it grows, the returns on the old approach diminish further. Businesses need to make their analysis as efficient as their data gathering. We are in a new era of data exploration that demands a step change in the scale and performance of analytics applications to achieve global scale analytics – what’s the point in collecting all that data if you can’t use it?

Kevin Spurway is Senior Vice President of Marketing at MicroStrategy