Category Archives: Guest Post

Big Data Without Security = Big Risk

Guest Post by C.J. Radford, VP of Cloud for Vormetric

Big Data initiatives are heating up. From financial services and government to healthcare, retail and manufacturing, organizations across most verticals are investing in Big Data to improve the quality and speed of decision making as well as enable better planning, forecasting, marketing and customer service. It’s clear to virtually everyone that Big Data represents a tremendous opportunity for organizations to increase both their productivity and financial performance.

According to Wipro, the leading regions taking on Big Data implementations are North America, Europe and Asia. To date, organizations in North America have amassed over 3,500 petabytes (PBs) of Big Data, organizations in Europe over 2,000 PBs, and organizations in Asia over 800 PBs. And we are still in the early days of Big Data – last year was all about investigation and this year is about execution; given this, the global stockpile of data used for Big Data is widely expected to keep growing exponentially.

Despite all the goodness that can stem from Big Data, one has to consider the risks as well. Big Data confers enormous competitive advantage on organizations able to quickly analyze vast data sets and turn them into business value, yet it can also put sensitive data at risk of a breach or of violating privacy and compliance requirements. Big Data security is fast becoming a front-burner issue for organizations of all sizes. Why? Because Big Data without security = Big Risk.

The fact is, today’s cyber attacks are getting more sophisticated and attackers are changing their tactics in real time to get access to sensitive data in organizations around the globe. The barbarians have already breached your perimeter defenses and are inside the gates. For these advanced threat actors, Big Data represents an opportunity to steal an organization’s most sensitive business data, intellectual property and trade secrets for significant economic gain.

One approach used by these malicious actors to steal valuable data is by way of an Advanced Persistent Threat (APT). APTs are network attacks in which an unauthorized actor gains access to information by slipping in “under the radar” somehow. (Yes, legacy approaches like perimeter security are failing.) These attackers typically reside inside the firewall undetected for long periods of time (an average of 243 days, according to Mandiant’s most recent Threat Landscape Report), slowly gaining access to and stealing sensitive data.

Given that advanced attackers are already using APTs to target the most sensitive data within organizations, it’s only a matter of time before they start targeting Big Data implementations. Since data is the new currency, it simply makes sense for attackers to go after Big Data implementations, because that’s where the big value is.

So, what does all this mean for today’s business and security professionals? It means that when implementing Big Data, they need to take a holistic approach and ensure the organization can benefit from the results of Big Data in a manner that doesn’t negatively affect its risk posture.

The best way to mitigate the risk of a Big Data breach is to reduce the attack surface and take a data-centric approach to securing Big Data implementations. These are the key steps:

Lock down sensitive data no matter the location.

The concept is simple: ensure your data is locked down regardless of whether it’s in your own data center or hosted in the cloud. This means using advanced file-level encryption for structured and unstructured data with integrated key management. If you’re relying upon a cloud service provider (CSP) and consuming Big Data as a service, it’s critical to ensure that your CSP is taking the necessary precautions to lock down sensitive data. If your cloud provider doesn’t have the capabilities in place, or considers data security to be your responsibility, ensure your encryption and key management solution is architecturally flexible enough to protect data both on-premises and in the cloud.
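
As a minimal illustration of the principle only – encrypted file content with the key held by a separate key manager – the sketch below uses the open-source Python cryptography library; it is not Vormetric’s product or any particular vendor’s API, and the file name is hypothetical.

```python
# Minimal sketch of file-level encryption with the key kept outside the data store.
# This illustrates the principle only; it is not any vendor's product or API.
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a file; the key lives in a separate key-management system."""
    f = Fernet(key)
    with open(path, "rb") as fh:
        plaintext = fh.read()
    with open(path + ".enc", "wb") as fh:
        fh.write(f.encrypt(plaintext))

# In practice the key would come from an integrated key manager (a KMIP server,
# an HSM, or a cloud KMS), never from a file sitting next to the data.
key = Fernet.generate_key()            # stand-in for a key fetched from a key manager
encrypt_file("customer_records.csv", key)   # hypothetical file name
```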

Manage access through strong policies.

Access to Big Data should only be granted to the authorized end users and business processes that absolutely need to view it. If the data is particularly sensitive, it is a business imperative to have strong policies in place to tightly govern access. Fine-grained access control is essential, including the ability to block access even by IT system administrators: they may need to perform tasks such as backing up the data, but they don’t need full access to that data as part of their jobs. Blocking administrator access to data becomes even more crucial when the data is located in the cloud and is not under the organization’s direct control.
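
A deliberately simplified sketch of such a policy, in Python, might look like the following; the roles and operations are hypothetical, and a real deployment would enforce this inside the data platform rather than in application code.

```python
# Conceptual sketch of a fine-grained access policy: roles are allowed specific
# operations only. The system administrator can run backups but cannot read the
# data itself. Roles and operations here are illustrative, not a product schema.
POLICY = {
    ("analyst", "read"): True,
    ("etl_process", "read"): True,
    ("etl_process", "write"): True,
    ("sysadmin", "backup"): True,   # operational duty
    ("sysadmin", "read"): False,    # no clear-text access to the data
}

def is_allowed(role: str, operation: str) -> bool:
    """Deny by default; permit only what the policy explicitly grants."""
    return POLICY.get((role, operation), False)

assert is_allowed("sysadmin", "backup")
assert not is_allowed("sysadmin", "read")
```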

Ensure ongoing visibility into user access to the data and IT processes.

Security intelligence is a “must have” when defending against APTs and other security threats. The intelligence gained can inform what actions to take in order to safeguard what matters most – an organization’s sensitive data. End-user and IT processes that access Big Data should be logged, and the logs reported to the organization on a regular basis. This level of visibility must be in place whether your Big Data implementation runs on your own infrastructure or in the cloud.
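
For illustration, an access record fed into such reporting might look like the following Python sketch; the field names and logger configuration are assumptions, not a specific product’s schema.

```python
# Minimal sketch of the kind of access record that should feed security
# intelligence reporting. Field names are illustrative only.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("bigdata.audit")
logging.basicConfig(level=logging.INFO)

def log_access(user: str, process: str, resource: str, action: str, allowed: bool) -> None:
    """Emit one structured audit record per access attempt."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "process": process,
        "resource": resource,
        "action": action,
        "allowed": allowed,
    }))

log_access("jsmith", "hive-query", "hdfs://finance/q3_forecasts", "read", True)
```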

To effectively manage that risk, the bottom line is that you need to lock down your sensitive data, manage access to it through policy, and ensure ongoing visibility into both user and IT processes that access your sensitive data. Big Data is a tremendous opportunity for organizations like yours to reap big benefits, as long as you proactively manage the business risks.


You can follow C.J. Radford on Twitter @CJRad.

Locking Down the Cloud

Guest Post by Pontus Noren, director and co-founder, Cloudreach.

The good news for cloud providers is that forward-thinking CIOs are rushing to embrace all things ‘cloud’, realising that it provides a flexible and cost-effective option for IT infrastructure, data storage and software applications. The bad news is that the most significant obstacle to implementation could be internal: coming from other parts of the organisation where enduring myths about legal implications, security and privacy issues remain. The reality is that today such fears are largely unfounded. CIOs need help in communicating this to their more reluctant colleagues if they want to make the move to the cloud a success.

Myth No 1: The Security Scare

In many cases, moving to the cloud can in fact represent a security upgrade for the organisation. Since the introduction of cloud-based computing and data storage around ten years ago, the issue of security has been so high profile that reputable cloud providers have made vast investments in their security set-ups – investments an individual organisation would be unable to match cost-effectively, given the very different scale on which it operates.

For example, data stored in the cloud is backed up, encrypted and replicated across multiple geographically distributed data centres in order to protect it from the impact of natural disasters or physical breaches. All this takes place under the watchful eyes of dedicated data centre security experts. If you compare this to the traditional in-house approach – which all too frequently sees data stored on a single server located somewhere in the basement of an office – it is not difficult to see which is the more secure option. By working with an established and respected cloud provider, such as Google or Amazon Web Services, businesses can benefit from such comprehensive security measures without having to make the investment themselves.

Myth No 2: Data in Danger

Security and data privacy are closely related, but different issues. Security is mainly about physical measures taken to mitigate risks, while ‘privacy’ is more of a legal issue about who can access sensitive data, how it is processed, whether or not it is being moved and where it is at any moment in time.

Concerns around compliance with in-country data protection regulations are rife, especially when dealing with other countries. Across Europe, for example, data protection laws vary from country to country, with very strict guidelines about where data can be stored. A substantial amount of data cannot be moved across geographical boundaries, so the security practice of replicating data across the globe has far-reaching compliance implications for data protection. However, data protection legislation states that there is always a data processor and a data controller, and a customer never actually ‘hands over’ its data. This doesn’t change when the cloud is involved – all large and reputable cloud services providers are only ever the data processor: the provider processes data solely on behalf of its customer, and the customer always retains ownership of its data and the role of data controller.

However, much of data protection law predates the cloud and is taking a while to catch up. Change is most definitely on its way. Proposed European legislation aims to make data protection laws consistent across Europe, and with highly data-restricted industries such as financial services now starting to move beyond private clouds into public cloud adoption, further change is likely to follow as organisations start to feel reassured.

So what can CIOs do to change perceptions? It comes down to three simple steps:

  • Be Specific – Identify your organisation’s top ten queries and concerns and address these clearly.
  • Be Bold – Cloud computing is a well-trodden path and should be seen not as the future, but as the now. Having tackled company concerns head on, it is important to make the jump and not just dip a toe in the water.
  • Be Early – Engage reluctant individuals early on in the implementation process, making them part of the change. This way CIOs can fend off ill-informed efforts to derail cloud plans and ensure buy-in from the people who will be using the new systems and services.

The cloud has been around for a while now and is a trusted and secure option for businesses of all sizes and across all sectors. In fact, there are more than 50 million business users of Google Apps worldwide. It can hold its own in the face of security and privacy concerns. CIOs have an important role to play in reassuring and informing colleagues so that the firm can harness the many benefits of the cloud, future-proof the business and release IT expertise to add value across the organisation. Don’t let fear leave your organisation on the sidelines.

Pontus Noren is director and co-founder of Cloudreach.

 

Real-Time Processing Solutions for Big Data Application Stacks – Integration of GigaSpaces XAP, Cassandra DB

Guest post by Yaron Parasol, Director of Product Management, GigaSpaces

GigaSpaces Technologies has developed infrastructure solutions for more than a decade and in recent years has been enabling Big Data solutions as well. The company’s latest platform release – XAP 9.5 – helps organizations that need to process Big Data fast. XAP harnesses the power of in-memory computing to enable enterprise applications to function better, whether in terms of speed, reliability, scalability or other business-critical requirements. With the new version of XAP, increased focus has been placed on real-time processing of big data streams, through improved data grid performance, better manageability and end-user visibility, and integration with other parts of your Big Data stack – in this version, integration with Cassandra.

XAP-Cassandra Integration

To build a real-time Big Data application, you need to consider several factors.

First – can you process your Big Data in actual real time, in order to get instant, relevant business insights? Batch processing can take too long for transactional data. This doesn’t mean that you don’t still rely on your batch processing in many ways…

Second – can you preprocess and transform your data as it flows into the system, so that the relevant data is made digestible and routed to your batch processor, making batch processing more efficient as well? Finally, you also want to make sure the huge amounts of data you send to long-term storage are available for both batch processing and ad hoc querying, as needed.

XAP and Cassandra DB together can easily enable all of the above. With built-in event processing capabilities, full data consistency, and high-speed in-memory data access and local caching, XAP handles the real-time aspect with ease. Cassandra, meanwhile, is perfect for storing massive volumes of data, querying them ad hoc, and processing them offline.
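
As a rough sketch of that division of labour – the real-time tier handing processed events to Cassandra for long-term storage and later ad hoc queries – the following uses the open-source DataStax Python driver; the keyspace, table and host are hypothetical and the schema is assumed to exist.

```python
# Illustrative only: pushing processed events from the real-time tier into
# Cassandra for long-term storage and ad hoc querying.
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])            # hypothetical Cassandra node
session = cluster.connect("analytics")      # hypothetical keyspace

# Store an event produced by the real-time processing tier.
session.execute(
    "INSERT INTO events (event_id, user_id, event_type, payload) VALUES (%s, %s, %s, %s)",
    ("evt-001", "user-42", "page_view", '{"page": "/pricing"}'),
)

# Ad hoc query over the stored history, independent of the real-time path.
rows = session.execute(
    "SELECT event_type, payload FROM events WHERE event_id = %s", ("evt-001",)
)
for row in rows:
    print(row.event_type, row.payload)
```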

Several hurdles had to be overcome to make the integration truly seamless and easy for end users. XAP’s document-oriented model had to be reconciled with Cassandra’s columnar data model so that data can move between the two smoothly, and the two systems treat consistency differently: XAP offers immediate consistency together with high performance, while Cassandra trades off between performance and consistency. With Cassandra as the Big Data store behind XAP processing, both consistency and performance are maintained.

Together with the Cassandra integration, XAP offers further enhancements. These include:

Data Grid Enhancements

To further optimize your queries over the data grid, XAP now includes compound indices, which enable you to index multiple attributes together. This way the grid scans one index instead of multiple indices to find query result candidates faster.
On the query side, new projections support enables you to query only for the attributes you’re interested in instead of whole objects/documents. All of these optimizations dramatically reduce latency and increase the throughput of the data grid in common scenarios.
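
The following plain-Python sketch (not XAP’s actual API) illustrates why this helps: a single lookup on a compound key replaces scanning and intersecting two single-attribute indexes, and the projection step returns only the requested attributes rather than whole objects.

```python
# Conceptual illustration of a compound index and projections (plain Python).
orders = [
    {"id": 1, "customer": "acme",   "status": "open",   "total": 120.0},
    {"id": 2, "customer": "acme",   "status": "closed", "total": 80.0},
    {"id": 3, "customer": "zenith", "status": "open",   "total": 300.0},
]

# Build a compound index on (customer, status).
compound_index = {}
for o in orders:
    compound_index.setdefault((o["customer"], o["status"]), []).append(o)

# One index lookup instead of two single-attribute scans plus an intersection.
matches = compound_index.get(("acme", "open"), [])

# "Projection": return only the attributes the query asked for.
projected = [{"id": o["id"], "total": o["total"]} for o in matches]
print(projected)
```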

The enhanced change API includes the ability to change multiple objects using a SQL query or POJO template. Replication of change operations over the WAN has also been streamlined, and it now replicates only the change commands instead of whole objects. Finally, a hook in the Space Data Persister interface enables you to optimize your DB SQL statements or ORM configuration for partial updates.

Visibility and Manageability Enhancements

A new web UI gives XAP users deep visibility into important aspects of the data grid, including event containers, client-side caches, and multi-site replication gateways.

Managing a low latency, high throughput, distributed application is always a challenge due to the amount of moving parts. The new enhanced UI helps users to maintain agility when managing their application.

The result is a powerful platform that offers the best of all worlds, while maintaining ease of use and simplicity.

Yaron Parasol is Director of Product Management for GigaSpaces, a provider of end-to-end scaling solutions for distributed, mission-critical application environments, and cloud enabling technologies.

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.

So what do we want to control?  We can divide it into three: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world.  The balance of importance of those will differ between all organisations.

There are all sorts of types of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises. Professional data centre operators focus hard on the energy efficiency of the total facility. The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation. This is simple: the total energy going into the facility divided by the energy used to power the electronic equipment. Although it is sometimes abused – a nice example being the data centre that powered its facility lighting over PoE (Power over Ethernet), thus making the lighting part of the ‘electronic equipment’ – it is widely understood and used worldwide. It provides visibility and focus for the process of continuous improvement. It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
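
A worked example of the calculation, with illustrative meter readings:

```python
# Worked example of the PUE calculation described above (numbers are illustrative).
total_facility_power_kw = 1200.0   # metered at the mains feeds into the building
it_equipment_power_kw = 800.0      # metered at the UPS outputs

pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE = {pue:.2f}")          # 1.50: every 1 kW of IT load costs 1.5 kW overall
```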

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’. This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means. It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations. Whatever it is, it has to be properly defined and measured before it can be controlled.

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.
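
A back-of-the-envelope estimate shows why; every figure below other than the 10% ghost-server rate is an assumption for illustration.

```python
# Rough illustration of why ghost servers alone can justify DCIM (figures assumed).
servers = 5000
ghost_fraction = 0.10              # ~10% ghost servers, as noted above
avg_draw_kw = 0.3                  # assumed average draw per server
pue = 1.6                          # assumed facility PUE
price_per_kwh = 0.12               # assumed energy price

ghost_kw = servers * ghost_fraction * avg_draw_kw * pue
annual_cost = ghost_kw * 24 * 365 * price_per_kwh
print(f"{ghost_kw:.0f} kW wasted, roughly {annual_cost:,.0f} per year in energy alone")
```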

Virtualisation brings its own challenges. Virtualisation has taken us away from the days when a typical server operated at 10-15% utilisation, but we are still a long way from most data centres operating efficiently with virtualisation. Often users will over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can. Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it. This creates ‘VM sprawl’. The way out of this is to measure, quote and charge. If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, leading to inefficient stranded capacity. And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Rain From the Cloud (and Some Sun At the End)

Guest Post by Roger Keenan, Managing Director of City Lifeline

Cloud computing is changing the way in which computing and data communications operate. The availability of high-speed, low-cost communications through fibre optics means that remote hosting of computing and IT applications is economically possible, and there are clear cost benefits for both users and providers. Yet the migration from in-house computing to cloud has not been as fast as expected. Putting aside the usual over-optimism of marketing spreadsheets, what holds users back when they think about cloud adoption?

Firstly, there is much conflicting hype in the market and many variations on which type of cloud – public, private, bare-metal, hybrid and so on – and the user must first find his way through all the hype. Then he must decide which applications to migrate. In general, applications with low communications requirements, important but not mission-critical security needs and a low impact on the business if they go wrong are a good place to start.

Security is always the first concern of users when asked about the cloud.  When an organisation has its intellectual property and critical business operations in-house, its management (rightly or wrongly) feels secure.  When those are outside and controlled by someone else who may not share the management’s values of urgency about problems or confidentiality, management feels insecure.  When critical and confidential data is sent out over an internet connection, no matter how secure the supplier claims it is, management feels insecure.  There are battles going on in parliament at the moment about how much access the British security services should have to user data via “deep packet inspection” – in other words spying on users’ confidential information when it has left the user’s premises, even when it is encrypted.  The “Independent” newspaper in London recently reported that “US law allows American agencies to access all private information stored by foreign nationals with firms falling within Washington’s jurisdiction if the information concerns US interests.”  Consider that for a moment and note that it says nothing about the information being on US territory.  Any IT manager considering cloud would be well advised not to put forward proposals to management that involve critical confidential information moving to the cloud.  There are easier migrations to do.

Regulatory and compliance issues are barriers to adoption.  For example, EU laws require that certain confidential information supplied by users be retained inside EU borders.  If it is held on-site, there is no problem.  If it is in a cloud store, then a whole set of compliance issues arise and need to be addressed, consuming time and resources and creating risk.

Geographic considerations are important. For a low-bandwidth application with few transactions per user in any given period and limited user sensitivity to delays, it may be possible to host the application on a different continent to the user. A CRM application such as salesforce.com is an example where that works. For many other applications, the delays introduced and the differences in presentation to the user of identical transactions may not be acceptable. As a rule of thumb, applications for a user in London should be hosted in London and applications for a user in Glasgow should be hosted in Glasgow.
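
A rough, illustrative calculation shows why this rule of thumb bites for chatty applications; all figures below are assumptions.

```python
# Back-of-the-envelope illustration of the geography rule of thumb (figures assumed).
# A chatty application that makes many sequential round trips amplifies network latency.
round_trips_per_page = 40          # assumed sequential request/response exchanges
scenarios = {
    "same city":           5,      # assumed round-trip time in ms
    "across the Atlantic": 80,     # assumed round-trip time in ms
}

for label, rtt_ms in scenarios.items():
    added_delay_s = round_trips_per_page * rtt_ms / 1000
    print(f"Hosted {label}: roughly {added_delay_s:.1f}s of network delay per page")
```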

When applications are hosted on-site, management feels in control.  If management gives its critical data to someone else, it risks lock-in – in other words, it becomes difficult for management to get its data back again or to move its outsourced operations to another supplier.  Different providers have different ethics and processes around this, but there are some real horror stories around and management’s fears are not always misplaced.

Where cloud implementations involve standard general IT functions provided by standard software optimised for cloud, the user can have confidence it will all work. Where special-purpose software is integrated with them, life can get very complicated. Things designed for in-house use are not usually designed to be exported. There will be unexpected, undocumented dependencies, and the complexity of the integration grows geometrically as the number of dependencies grows. Cloud has different interfaces, controls and ways of doing things, and the organisation may not have those skills internally.

Like the introduction of any new way of working, cloud throws up unexpected problems, challenges the old order and challenges the people whose jobs are secure in the old order.  The long term benefits of cloud are sufficiently high for both users and providers that, over time, most of the objections and barriers will be overcome.

The way in which organizations employ people has changed over the last thirty years or so, from a model where everyone was a full-time employee to one where the business is run by a small, tight team pulling in subcontractors and self-employed specialists only when needed. Perhaps the future model for IT is the same – a small core of IT in-house handling the mission-critical operations, guarding corporate intellectual property and critical data, and drawing in less critical or specialised services remotely from cloud providers when needed.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Project Management and the Cloud

A Guest Post by Joel Parkinson, a writer for projectmanager.com

In the world of information technology, the “cloud” has paved the way for a new method of managing things on the Internet. In a cloud environment, computing “takes place” on the World Wide Web, taking the place of the software you use on your desktop. Cloud computing is also hosted on the Web, on servers installed in a “data center”, which is usually staffed and managed by people who are experts at technology management. What does the cloud mean to project management? Here’s an overview of what cloud project management is.

What Cloud Computing Means to Project Managers

Project management is defined as the “set” of activities and processes carried out to execute and complete a task that’s outsourced by one party to another. Project management ensures a high probability of success for a project, through the efficient use and management of resources. So what does cloud computing mean to project managers? According to PM veterans, cloud computing offers a greener and more sustainable project management environment, lowers cost, eliminates the use of unnecessary software and hardware, improves scalability, and eases the process of information-sharing between team managers and staff, customers and executive management.

Benefits of Cloud Project Management

In a project management environment, the cloud speeds up the whole process. As cloud services are available anytime, any day, the cloud can help a project management team hasten execution and deliver improved results and outputs. With the cloud, project managers and staff can also monitor easily and act without delay, as information is delivered in real time. Let’s look at the other benefits of the cloud for project managers.

Improved Resource Management

The cloud’s centralized nature also allows for improved utilization, allocation and release of resources, with status updates and real-time information provided to help optimize utilization. The cloud also helps control the cost of resource use, whether it involves machines, capital or human resources.

Enhanced Integration Management

With the cloud, different processes and methods are integrated and combined to create a collaborative approach to performing projects. The use of cloud-based software can also aid in the mapping and monitoring of different processes, improving overall project management efficiency.

Overall, the cloud platform reduces gridlock and smooths the project management process, making the whole project team more productive and efficient in terms of quality of service for the customer, while also enhancing the revenues of the organization.

But does the cloud project management model mean a more carefree and less costly environment? We could say it makes the whole process less costly, but not entirely carefree. Despite the perks provided by the cloud, everything still needs to be tested and monitored; every member of the project management team must still do the work after deployment, and each of them should still be fully supported by the project managers and the clients. The cloud is perhaps the biggest innovation in the IT industry because it “optimizes” the utilization of resources within an enterprise.

Alcatel-Lucent, GigaSpaces Partner for Delivery of Carrier Cloud PaaS

Guest post by Adi Paz, Executive VP, Marketing and Business Development, GigaSpaces

GigaSpaces’ Cloudify solution enables the on-boarding of applications onto any cloud. For several months now, GigaSpaces has been working with Alcatel-Lucent (ALU) on the use of Cloudify in a carrier cloud service environment. Together with Alcatel-Lucent’s CloudBand™ solution, Cloudify is a fundamental building block in the technological backbone of ALU’s carrier-grade Platform-as-a-Service (CPaaS).

Dor Skuler, Vice President & General Manager of the CloudBand Business Unit at Alcatel-Lucent, has said that, “Offering CPaaS as part of the CloudBand solution enables service providers to make a smooth migration to the carrier cloud and quickly deploy value-added services with improved quality and scalability, without the need for dedicated equipment.”

This new class of carrier cloud services brings the benefits of the cloud to the carrier environment without sacrificing the security, reliability or quality of applications. CPaaS enables the on-boarding of mission-critical applications on a massive scale, including both legacy and new carrier cloud services. Integration with carrier networks is a factor in meeting the requirements of many customers’ Service Level Agreements (SLAs).

Unlike regular cloud environments, where an application needs to handle multi-zone deployments explicitly, CPaaS enables the application’s workload and availability to be handled through a policy-driven approach. The policy describes the desired application SLA, while the carrier CPaaS maps the deployment of the application’s resources onto the cloud to best satisfy latency, load or availability requirements.

Additionally, the integration will enable the creation of network-aware CPaaS services, simplified on-boarding to ALU’s CloudBand platform, multi-site application deployment and simpler management of latency- and location-sensitive applications. The ability to comply with five-nines reliability, security and disaster recovery requirements ensures peace of mind for enterprises choosing to on-board mission-critical applications to the carrier network.

The Cloudify Approach

Cloudify manages applications at the process level, and as such uses the same underlying architecture for any application regardless of the language or technology stack that comprises the application. That said, working at the process level is often not enough, because not all processes are made the same: databases, for example, behave quite differently from web containers and load balancers. In order to still gain in-depth knowledge about the managed application’s processes, Cloudify uses a recipe-based approach. A recipe describes the elements that are specific to an individual process, such as its configuration, its dependencies on other processes, the specific key performance indicators that tell whether the process’s behavior is aligned with its SLA, and so on.
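
Purely as a conceptual sketch – plain Python data rather than Cloudify’s actual recipe syntax, with illustrative names throughout – a recipe for two processes might capture something like this:

```python
# Conceptual sketch of what a recipe captures for each process: configuration,
# dependencies on other processes, lifecycle scripts and SLA-related KPIs.
# Not Cloudify's real recipe DSL; all names and thresholds are illustrative.
cassandra_recipe = {
    "name": "cassandra",
    "configuration": {"cluster_name": "rt-analytics", "listen_port": 9042},
    "depends_on": [],                          # started before services that need it
    "lifecycle": {"install": "install_cassandra.sh", "start": "start_cassandra.sh"},
    "kpis": {                                  # thresholds that express the SLA
        "read_latency_ms": {"max": 20},
        "pending_compactions": {"max": 50},
    },
}

web_tier_recipe = {
    "name": "web",
    "configuration": {"port": 8080},
    "depends_on": ["cassandra"],               # dependency on another process
    "lifecycle": {"install": "install_web.sh", "start": "start_web.sh"},
    "kpis": {"requests_per_sec": {"min": 500}},
}

print([r["name"] for r in (cassandra_recipe, web_tier_recipe)])
```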

Working on the process level makes it possible to plug into a large variety of infrastructures, whether they happen to be public, private, or bare-metal environments. Cloudify uses an abstraction layer known as the Cloud Driver that interfaces with the cloud infrastructure to provide on-demand compute resources for running applications.

The Cloudify approach can be implemented on individual clouds from HP, Microsoft, IBM, CloudStack and others, or in the carrier network infrastructure of a company like Alcatel-Lucent.

Adi Paz is responsible for developing and communicating GigaSpaces’ strategy, and managing the company’s go-to-market activities and strategic alliances.

 

High Street and Main Street 2013: Business Failure or Rejuvenation?

Guest Post by Pontus Noren, director and co-founder, Cloudreach.

Since Woolworths stores disappeared from the physical high street and main street in January 2009, bricks-and-mortar retailers have been falling apart. More than 27,000 people were out of work when its 800 stores closed, consigning a century of trading to the history books. An alarming number of traditional big names have sunk since: already this year we have seen Jessops, HMV and Blockbuster Video enter administration or bankruptcy.

It is upsetting to see these names disappear from view, but do not believe the headlines. The high street is not dying, it is changing.

Most recently, Blockbuster Video encountered trouble because people were ditching the traditional movie rental model in favour of internet streaming services. Blockbuster’s model involved leaving the comfort of your sofa, walking to a video rental store, then scouring the shelves for something you wanted to watch. You’d even face a fine if it was not returned on time. In contrast, the likes of LOVEFiLM and Netflix charge a monthly subscription fee and allow members to browse extensive video libraries online before streaming an unlimited amount of content.

Being successful in business is all about changing the game – looking beyond what is already out there and offering something new. This shift in the key players of the movie rental market has been facilitated by cloud computing technology. The emergence of the cloud makes it much easier for businesses to grow rapidly, as you only pay for the server capacity you use with the likes of Amazon Web Services.

That ability to scale up and down quickly contrasts with the traditional IT model, where businesses purchase physical servers and maintain them in-house.

When technology changes, it can have a radical effect on an industry, altering the way in which things are delivered and consumed. However, the level of spend in the economy stays the same, so although these shops are closing, the economy shouldn’t suffer at all. The general public will always have a certain amount of money to spend – they just spend it in different ways depending on trends and what’s available, for example spending three pounds on a coffee from Costa rather than a DVD from HMV.

That has been reflected in the phoenix Woolworths business. Shop Direct acquired the brand, and its new Woolworths website now offers half a million products. This new trading status reflects the change that has taken place: where people once browsed shelves of goods in shops, they now browse the web for a bargain. People are voting with their virtual feet and it is obvious that everything is heading online.

Not only is online more convenient, and often cheaper, but people can also have richer interactions with brands online, and can benefit from items tailored to their individual specifications – something that is difficult for high street retailers to do well. The term Web 3.0 is being coined at the moment, with streaming and personalisation coming to the fore more than ever before.

Web 3.0 and cloud have the potential to form a strong partnership. This force has already transformed the greetings card industry, with the likes of Funky Pigeon and Moonpig using the power of cloud to produce and deliver completely personalised greetings cards. Traditional market leader Clintons Cards closed half of its stores after entering administration in December, having taken a big hit from the success of its online competitors. Online retailing has advanced from being able to offer cheaper products to ones that are also completely tailored to customers’ wishes.

The bricks and mortar high street of the future will be filled with outlets, boutiques, restaurants and coffee shops, which all inspire physical interactions – service-based offerings will be prevalent. However, the most successful businesses will have a solid online strategy supported by cloud technology to deliver a personalised, richer experience for the customer and scalable operations to meet demand. For example, retailers should look at the likes of grab-and-go food outlet Eat, which plans store portfolio growth using cloud.

The cloud changes everything. Retailers must make the most of the tools and technologies at their disposal or they risk falling behind their competitors – or worse, risk being the next big name to hit the headlines for the wrong reasons.


Pontus Noren is director and co-founder, Cloudreach.

Five IT Security Predictions for 2013

Guest Post by Rick Dakin, CEO and co-founder of Coalfire, an independent IT GRC auditor

Last year was a very active year in the cybersecurity world. The Secretary of Defense announced that the threat level has escalated to the point where protection of the cyber assets used for critical infrastructure is vital. Banks and payment processors came under direct and targeted attack, from denial-of-service campaigns as well as next-generation worms.

What might 2013 have in store? Some predictions:

1. The migration to mobile computing will accelerate, and the features of mobile operating systems will come to be seen as vulnerabilities by the IT security industry.

Look out for Windows 95-level security on iOS, Android 4 and even Windows 8 as we continue to connect to our bank and investment accounts – as well as other important personal and professional data – on smartphones and tablets.

As of today, there is no way to secure an unsecured mobile operating system (OS). Some risks can be mitigated, but many vulnerabilities remain. This lack of mobile device and mobile network security will drive protection to the data level. Expect to see a wide range of data and communication encryption solutions before you see a secure mobile OS.

The lack of security, combined with the ever-growing adoption of smartphones and tablets for increasingly sensitive data access, will result in a systemic loss for some unlucky merchant, bank or service provider in 2013. Coalfire predicts more than 1 million users will be impacted and the loss will exceed $10 million.

2. Government will lead the way in the enterprise migration to “secure” cloud computing.

No entity has more to gain by migrating to the inherent efficiencies of cloud computing than our federal government. Since many agencies are still operating in 1990s-era infrastructure, the payback for adopting shared applications in shared hosting facilities with shared services will be too compelling to delay any longer, especially with ever-increasing pressure to reduce spending.

As a result, Coalfire believes the fledgling FedRAMP program will continue to gain momentum and we will see more than 50 enterprise applications hosted in secure federal clouds by the end of 2013. Additionally, commercial cloud adoption will have to play catch-up to the new benchmark that the government is setting for cloud security and compliance. It is expected that more cloud consumers will want increased visibility into the security and compliance posture of commercially available clouds.

3. Lawyers will discover a new revenue source – suing negligent companies over data breaches.

Plaintiff attorneys will drive companies to separate the cozy compliance and security connection. It will no longer be acceptable to obtain an IT audit or assessment from the same company that is managing an organization’s security programs. The risk of being found negligent or legally liable in any area of digital security will drive the need for independent assessment.

The definition of cyber negligence will expand, and the range of monetary damages will become clearer, as class action lawsuits are filed against organizations that experience data breaches.

4. Critical Infrastructure Protection (CIP) will replace the Payment Card Industry (PCI) standard as the white-hot tip of the compliance security sword.

Banks, payment processors and other financial institutions are becoming much more mature in their ability to protect critical systems and sensitive data.  However, critical infrastructure organizations like electric utilities, water distribution and transportation remain softer targets for international terrorists.

As the front lines of terrorist activities shift to the virtual world, national security analysts are already seeing a dramatic uptick in surveillance on those systems. Expect a serious cyber attack on critical infrastructure in 2013 that will dramatically change the national debate from one of avoidance of cyber controls to one of significantly increased regulatory oversight.

5. Security technology will start to streamline compliance management.

Finally, the cost of IT compliance will start to drop for the more mature industries such as healthcare, banking, payment processing and government. Continuous monitoring and reporting systems will be deployed to more efficiently collect compliance evidence and auditors will be able to more thoroughly and effectively complete an assessment with reduced time on site and less time organizing evidence to validate controls.

Since the cost of noncompliance will increase, organizations will demand and get more routine methods to validate compliance between annual assessment reports.

Rick Dakin is CEO and co-founder of Coalfire, an independent information technology Governance, Risk and Compliance (IT GRC) firm that provides IT audit, risk assessment and compliance management solutions. Founded in 2001, Coalfire has offices in Dallas, Denver, Los Angeles, New York, San Francisco, Seattle and Washington, D.C., and completes thousands of projects annually in retail, financial services, healthcare, government and utilities. Coalfire’s solutions are adapted to requirements under emerging data privacy legislation, the PCI DSS, GLBA, FFIEC, HIPAA/HITECH, HITRUST, NERC CIP, Sarbanes-Oxley, FISMA and FedRAMP.

To Cloud, or Not: Getting Started

Guest Post by Gina Smith

Many small business owners are still apprehensive about utilizing cloud options. While it can be a big step, there are significant long-term benefits to utilizing this expanding innovation, including:

  • Enhanced Security – Cloud providers go to great lengths to protect client data, often implementing security protocols which are much more advanced than those on most “hard” networks.
  • Emergency Backup – No need to worry in the event of a fire, earthquake, flood, storm or other natural disaster. Your data and files are safe and being backed up in the “cloud”.
  • Remote Access – You and your employees can gain access to company data at any time from anywhere in the world.
  • Easily Upgrade or Replace Computers – Quickly and painlessly replace obsolete or faulty computers by connecting the new machine(s) and remotely accessing and/or transferring any data needed directly from the cloud!

Once a business decides to take that step into the “cloud”, many get “stuck” trying to figure out which options will work best for their needs. Amazon is considered by many to be a pioneer in the world of so-called “remote computing” services. And now, Internet giant Google has thrown its hat into the ring, launching its “Google Cloud” platform earlier this year.

Amazon AWS (Amazon Web Services)

Amazon was one of the first companies to develop a remote access/cloud computing product geared toward the general public, and it still offers the most extensive options for both users and developers. The Amazon Elastic Compute Cloud (EC2) is attractive to many companies because it offers “pay-as-you-go” pricing with no upfront expenses or long-term commitments required. Amazon Simple Storage Service (S3) is also very flexible, offering storage options in different regions around the world. Some companies choose to store their data in a lower-priced region to reduce storage costs, or in a region different from where their company is located for disaster recovery purposes. Amazon still offers the most versatile services and options. Some claim the system can be difficult to learn initially, but it is fairly easy to get around once you get the hang of it.
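
For example, choosing an S3 storage region is a one-line decision with the AWS SDK for Python (boto3); the bucket name below is hypothetical and AWS credentials are assumed to be configured.

```python
# Minimal sketch of choosing an S3 storage region with the AWS SDK for Python.
# Pick a region for lower storage prices or for disaster-recovery distance.
import boto3  # pip install boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="example-dr-backups-2013",                      # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```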

Google Cloud Services

There is no doubt that Google has made a permanent mark in history. The Internet giant has revolutionized our lives and made a significant impact on modern society. The company’s launch of its Google Cloud Platform got people who had previously discounted the cloud to seriously begin considering it again. Why? Well, it’s simple. Google has already developed applications with which people are comfortable and familiar. This, of course, makes the entire thought of cloud conversion and eventual immersion much less intimidating. Google’s cloud platform is still in its early stages and does not yet offer quite the flexibility and options of Amazon AWS. Its data centers are secure and well managed, and its interface and applications are fairly easy to learn and navigate.


While this article offers a good general overview of each system, it is always advisable to conduct your own research to determine which provider will best suit your needs. Both Amazon AWS and Google Cloud provide reliable, secure, cost-saving options for businesses. Also consider utilizing companies specializing in cloud management and backup, such as www.spanning.com. And, as your business grows and your cloud use increases, don’t forget that Cloudyn can use its Cloud Intelligence and other advanced tools to analyze your usage; it can be a tremendous asset in helping you manage and optimize your costs.

Gina Smith writes freelance articles for magazines, online outlets and publications. Smith covers the latest topics in the business, golf, tourism, technology and entertainment industries.