All entries by Guest Author

The Future of Tech Companies, the NSA, and Your Information

Guest Post by Lewis Jacobs

Verizon and the NSA

Last week, the technology world was turned upside down when the Guardian broke the news that the National Security Agency had directed telecommunications company Verizon to release customer call records and metadata on an “ongoing daily basis.”

Though the metadata doesn’t include the audio content of calls, it does include the phone numbers on both ends of calls, the devices and location of both parties involved, and the time and duration of calls.

The order was leaked by Edward Snowden, an analyst working at the NSA for defense contractor Booz Allen Hamilton. The order targets both international and domestic calls, and it does not specify who can see the data or whether the data will be destroyed after NSA use.

Though the White House and the NSA say that the data will only be used for counter-terrorism efforts and other national security measures, the order nonetheless gives the federal government access to data from all of Verizon’s more than 100 million customers.

Since the story broke, there has been significant debate over whether the NSA is working within the regulations of the First and Fourth Amendments or whether it is violating citizens’ rights to free speech and privacy. The White House has defended the order as a necessary measure for national security. But critics, including the American Civil Liberties Union and several U.S. lawmakers, disagree.

What it means for the future

The controversy raises the question of whether or not other technology and telecommunications companies will be required to follow suit—or whether they already have. Amy Davidson at the New Yorker speculates that the leaked Verizon order is “simply one of a type—the one that fell off the truck.” Adam Banner at the Huffington Post wonders, “How many other ‘top secret’ court orders are currently in action with countless other information providers?”

The NSA is said to have been monitoring and collecting customer data from some of the world’s largest technology companies with the help of surveillance program PRISM. But many companies, including Google, Facebook, Microsoft, Yahoo, Apple, and AOL, have denied providing the government direct access to their users’ information. Google, one of the companies to deny any knowledge of PRISM, wrote an open letter to the Attorney General and the FBI requesting to make public any federal requests for data.

In any case, it’s unlikely that the NSA demanded customer information only from Verizon, meaning that the federal government could be (and probably is) accessing information about citizens through their phone providers, their email services, and their search engines. Faced with federal orders, there’s not much that technology companies can do in opposition.

The future of NSA technology surveillance will depend, of course, on its legality, which is yet to be determined. It’s unclear whether or not the NSA’s actions fall within the provisions of the Patriot Act, the FISA Amendments Act, the Constitution, and the federal government’s system of checks and balances.

The American Civil Liberties Union recently announced its plan to sue the administration for violating the privacy rights of Americans. On the other side, whistleblower Edward Snowden is currently under investigation for the disclosure of classified information, an offense that could result in life in prison.

This article was submitted by Lewis Jacobs, an avid blogger and tech enthusiast. He enjoys fixing computers and writing about internet trends. Currently he is writing about an “internet in my area” campaign for local internet providers.

Sources:

http://www.newyorker.com/online/blogs/closeread/2013/06/the-nsa-verizon-scandal.html

http://www.huffingtonpost.com/adam-banner/the-nsa-and-verizon

http://money.cnn.com/2013/06/11/technology/security/google-government-data/

http://money.cnn.com/2013/06/07/technology/security/nsa-data-prism/

http://www.washingtonpost.com/blogs/wonkblog/wp/2013/06/06/everything-you-need-to-know-about-the-nsa-scandal/

Mobile Payment Future Is Tied to Services

Guest Post by Nick Nayfack, Director of Payment Solutions, Mercury Payment Systems

Consumers are already using their smartphones when they shop. They just need the incentive to take the next step of making a purchase with their phone. According to Google, some 79 percent of consumers today can be considered “mobile shoppers” because they use their smartphones to browse for product information, search for product reviews or look for offers and promotions. Today’s merchants see their customers browsing their stores with smartphones and know that mobile marketing is no longer an option; it’s an imperative.

There is a clear opportunity to target avid smartphone users, as well as to provide merchants with the ability to turn their point of sale system into a marketing engine simply by capturing their customers’ phone numbers. By creating a point of sale environment where processing becomes prospecting, mobile and alternative payments become a natural extension of the convenience and value that merchants and consumers are looking for. Not only can consumers use their phones in store to gain product information or exclusive offers, they can also skip the checkout line by paying with their phone. In this environment, mobile payments gain adoption because of the valuable service they provide to both the merchant and the consumer.

What is driving merchants to adopt mobile point of sale (POS) systems (implementations have doubled in the past year) and consumers to adopt smartphones so rapidly, while mobile payments have yet to experience the same growth curve? The slow pace of adoption can be tied to two gaps in the current payment landscape: convenience and value. Merchants are adopting mobile POS systems because of their affordable pricing, their ease of use, and the ability to tie value-added services like loyalty programs and gift options to their customers’ checkout experience. Consumers are looking for more value for their money and are more likely to sign up for opt-in marketing at the cash register or for loyalty programs if they feel they are getting something in return.

Where is the value in Mobile Payments today?

1. Information is Still Key

Consumers now use their phones mostly to find product information, restaurant reviews, and discount offers. Ninety percent of smartphone shoppers use their phone for “pre-shopping” activities; the most common are price comparisons (53 percent), finding offers and promotions (39 percent), finding locations of other stores (36 percent) and finding store hours (35 percent). In contrast, consumer in-store purchases from a mobile device are still in the minority (~16 percent), but they show promise for fast, exponential growth. As such, if you want consumers to use your mobile payment application, it must align tightly with other frequently used mobile applications (e.g., mobile search).

2. Remember Your Basics

Key players in the mobile payments space need to improve the user experience (UX) by applying principles learned from the web years ago: mobile-specific design, clear calls to action and one shopping experience across all platforms. Beyond the UX, there needs to be clear and repeatable value to the consumer. Special offers or incentives could be paired with a customer’s purchase history to make one-click purchases attractive from mobile devices. From a historical perspective, Amazon introduced this concept years ago in the e-commerce world with links that suggested purchases based on the buyer’s current purchase (e.g., customers who bought this book also bought the following). While m-commerce has different considerations, such as users’ limited time and high level of distraction, there are lessons to be learned from the past.
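To make the “also bought” idea concrete, here is a minimal sketch of the co-occurrence counting behind that kind of suggestion. The order data, item names and logic are hypothetical illustrations of the concept, not any particular retailer’s system.

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history: each order is the set of items bought together.
orders = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_c"},
    {"book_b", "book_d"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def also_bought(item, top_n=3):
    """Return the items most frequently bought together with `item`."""
    related = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            related[b] += count
        elif item == b:
            related[a] += count
    return [other for other, _ in related.most_common(top_n)]

print(also_bought("book_a"))  # e.g. ['book_b', 'book_c']
```

In an m-commerce context, the same signal could drive the special offers or one-click suggestions described above, surfaced at the moment of payment.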

3. Find Today’s Value

POS developers will succeed today, and in the future, by helping merchants obtain and analyze information about their business and customers. This requires coordinating with an acquirer or processor that has rich historical data to help analyze transaction history and other data. In this way, merchants can personalize the consumer experience for new benefits or improve operations for cost savings.

Lastly, as mobile evolves, new data points will provide richer context (e.g. location, social context, SKU data) and merchants will have even more reference points to deliver a personal consumer experience. In this way, personalization is the key value that is coupled with convenience.

Nick Nayfack

Nick Nayfack is the director of product for Mercury Payment Systems. He is responsible for developing best practices in mobile commerce with industry peers in order to help merchants and consumers navigate technological “ease of use.” Nick is also a member of the Electronic Transactions Association’s (ETA) mobile payments committee.

The Rise in Popularity of Hybrid Cloud Infrastructure

Guest Post by Paul Vian of  Internap

Organizations are increasingly choosing to outsource business-critical applications and content to third-party providers. But with this shift comes a long list of questions about determining the right mix of IT infrastructure services to meet specific scalability, control, performance and cost requirements. Although a shared public cloud can offer the convenience of easily scaling infrastructure up and down on demand, many organizations are still hesitant due to concerns about privacy and security within a shared-tenancy arrangement. Another complication is that the virtualization layer typically consumes around 10 per cent of the resources. Accordingly, dedicated physical infrastructure is often preferable purely for performance reasons.

Which cloud environments are businesses considering?

If a business has a fluctuating workload that has ever-changing demands and requires more resources in the short term, a cloud environment is often still the preferred choice, but this does tend to become more expensive for applications that are always on, such as databases or other highly resource-intensive applications. The reality is that organizations often require something in between, which is driving demand for flexible, hybrid cloud infrastructure solutions that can easily adapt and scale to meet a wide range of use cases and application needs.

What are the benefits of a hybrid cloud infrastructure?

Taking a tailored approach can enable businesses to scale their infrastructures ‘on demand’. It is also now possible for companies utilising physical servers to gain the flexibility and benefits they have been enjoying within a highly virtualized cloud environment in recent years. We are in an age where physical servers can be instantly spun up or down as global demand dictates, so there is no reason why organizations can’t gain the convenience and agility of the cloud with the reliability and performance of physical servers.

How can companies achieve a hybrid cloud infrastructure tailored to their specific needs?

Ideally, companies should look to work with a third-party provider that can provide access to a broader mix of services to meet these emerging demands around scalability in particular. Through working with a provider that takes a consultative sales approach, businesses can benefit from a tailored service that allows them to seamlessly mix, provision and manage both their virtual cloud and physical IT infrastructure – whether this is legacy hardware or back-up equipment. With this approach, businesses can not only meet their diverse application requirements, but also easily address changing global business needs.
We are now seeing things coming full circle: from physical networks, through virtualization and the cloud, to today’s move towards a hybrid approach. This is in response to the ever-growing sophistication of automation and self-service capabilities, and it is the way forward for any forward-thinking organization with a complex list of requirements.


Paul Vian is Internap’s director of business development for Europe, the Middle East and Africa.

New Features, Bells and Whistles: Google I/O Conference

Guest Post by Paul Williams, a copywriter with InternetProviders.com

The Google I/O 2013 conference started with a bang on May 15th. Developers, tech journalists and venture capitalists crowded the Moscone Center in San Francisco, where CEO Larry Page and VP Amit Singhal delivered masterful keynotes that set the tone for the rest of the event.

Although Google I/O events are mostly for developers, the conference thus far has produced many interesting items for users to dissect and marvel at. In fact, the buzz surrounding the I/O conference has mostly been focused on developments and new features that will soon be ready to enhance the Google user experience. The major announcements are related to maps, music, finances, pictures, education, games, social networking, and search.

Providing Instant Answers with Conversation and Learning

Google is leaning on its Knowledge Graph to deliver a rich search experience that draws from a massive relational database storing 570 million entries. According to Amit Singhal, Knowledge Graph will progressively learn from the queries entered by hundreds of millions of users. As a result, a film enthusiast searching for information about director Kathryn Bigelow will instantly see highlights from her filmography, biographical data, reviews for Zero Dark Thirty, discussions about the possible remake of Point Break, and even more nuggets of information right on Google’s search engine results page (SERP).

Google is moving beyond the traditional keyboard-mouse-screen input methods of Internet search. “OK Google” is the new approach to conversational search. In this regard, Google’s plans for voice search have already impressed users and developers alike with an interface that will surely rival Apple’s Siri. The Google Now voice-activated personal assistant is also becoming smarter with reminders, recommendations and alerts that conform to each user’s search history and preferences.

Mapping and Finance

A revamped Google Maps for mobile devices will serve as a full-fledged handheld or in-vehicle navigator, while the Maps version for tablets will feature an interface that encourages exploration. Google Wallet no longer seems to be pursuing a debit-card strategy, although it intends to take on rival PayPal with an electronic funds transfer system powered by Gmail.

Advanced Social Networking

More than a dozen new features have been added to Google Plus (G+), the search giant’s promising social network. One of the most significant upgrades is Babel, a communication tool that integrates G+ Hangouts with other messaging applications such as Voice, Talk, Gmail, and the G+ Messenger.

Google is borrowing a page from Twitter with its own set of hashtags for G+. These smart tags search across the G+ network for user-generated content and organize it under hashtags that can be clicked and expanded to reveal related content. This is similar to the discontinued Google Sparks feature of G+.

The most visible G+ upgrade can be appreciated in its user interface. Multiple columns that stream updates with animated transitions, and photos retouched with Google’s signature “I’m Feeling Lucky” style of image editing, make for a much more visually pleasing experience on G+.

Streaming Music and Game Services

Google Play is no longer limited to serving as a marketplace for Android apps. For less than $10 per month, users can listen to unlimited tracks streamed from Google Play’s vast online music library. Users will be able to listen from their Android mobile devices or from compatible Web browsers.

Gamers will now be able to begin playing a game on their smartphones or tablets and later resume playing on a different device or Web browser. This is similar to the popular Xbox Live online gaming service from Microsoft, although Google plans to let developers come up with third-party gaming apps on Apple iOS and non-Chrome browsers.


Paul Williams is a part-time tech blogger and full-time copywriter with InternetProviders.com. You can contact him via email.

File Shares & Microsoft SharePoint: Collaboration Without Limitations

Guest Post by Eric Burniche of AvePoint.

File Shares can be a blessing and a curse when it comes to storing large quantities of data for business use. Yes, you enable a large number of users to access the data as if it were on their local machines, without actually having the data stored where disc space may be at a premium. But native management capabilities of file shares aren’t always ideal, so a third-party solution is necessary to fully optimize your file shares.

The primary benefit of file shares is simple, quick, and easy access to large volumes of data for large numbers of users at marginal infrastructure cost. With little or no training required, users can easily access file shares that consist of anything from individual documents to large files and rich media like videos, audio and other formats that can range up to gigabytes (GB) in size.

The Simple Truth: Organizations are quickly realizing native file share limitations, including notoriously poor content management capabilities for search, permissions, metadata, and remote access. As a result, many have turned to Microsoft SharePoint to manage and collaborate on their most business-critical information and valued data.

The Problem: Organizations have various types of unstructured content on their file servers, that is, data characterized as non-relational, such as Binary Large Objects (BLOBs), which, when uploaded into SharePoint, is stored by default within the platform’s Microsoft SQL Server database. Once file share content is uploaded, retrieving unstructured content from a structured database is inefficient, resulting in poor performance for SharePoint end users and exponential storage cost increases for IT administrators.

Difficulty often arises when determining what content is business critical and should be integrated with SharePoint, as compared to what content should be left alone in file shares, decommissioned, or archived according to business need. File types and sizes also create difficulty when integrating file share content with SharePoint, because SharePoint itself blocks certain content types (such as Microsoft Access project files, .exe, .msi, and .chm help files), and files exceeding 2 GB violate SharePoint’s software boundaries and limitations.

The Main Questions: How can my organization use SharePoint to retire our legacy file share networks while avoiding migration projects and performance issues? How can my organization use SharePoint’s full content management functionality if my business-critical assets are blocked file types or larger than the 2 GB limit covered by Microsoft’s support contracts?

One Solution: Enter DocAve File Share Navigator 3.0 from AvePoint. DocAve File Share Navigator 3.0 enables organizations to increase file share activity and take full advantage of SharePoint’s content management capabilities, all while avoiding costs and disruptions associated with migration plans.
With DocAve File Share Navigator, organizations can:

  • Expose large files and rich media in SharePoint via list links, including blocked file types and files larger than 2 GB, without violating Microsoft support contracts, to truly consolidate access to all enterprise-wide content
  • Decrease costs associated with migrating file share content into SharePoint’s SQL Server content databases by accessing file share content through SharePoint
  • Allow remote users to view, access, and manage network files through SharePoint without requiring a VPN connection
  • Provide direct access to local file servers through SharePoint without burdening web front-end servers
  • Increase file share content discoverability by utilizing SharePoint’s full metadata-based search across multiple, distributed file servers
  • Allow read-only previews of documents for read-only file servers

The native capabilities of file shares are unlikely to improve, but fortunately there are third-party solutions such as DocAve File Share Navigator that can help turn your file share from a headache to an asset, allowing you to continue to collaborate with confidence.


Eric Burniche is a Product Marketing Manager at AvePoint.

Big Data Without Security = Big Risk

Guest Post by C.J. Radford, VP of Cloud for Vormetric

Big Data initiatives are heating up. From financial services and government to healthcare, retail and manufacturing, organizations across most verticals are investing in Big Data to improve the quality and speed of decision making as well as enable better planning, forecasting, marketing and customer service. It’s clear to virtually everyone that Big Data represents a tremendous opportunity for organizations to increase both their productivity and financial performance.

According to Wipro, the leading regions taking on Big Data implementations are North America, Europe and Asia. To date, organizations in North America have amassed over 3,500 petabytes (PBs) of Big Data, organizations in Europe over 2,000 PBs, and organizations in Asia over 800 PBs. And we are still in the early days of Big Data – last year was all about investigation and this year is about execution; given this, it’s widely expected that the global stockpile of data used for Big Data will continue to grow exponentially.

Despite all the goodness that can stem from Big Data, one has to consider the risks as well. Big Data confers enormous competitive advantage on organizations able to quickly analyze vast data sets and turn them into business value, yet it can also put sensitive data at risk of a breach or of violating privacy and compliance requirements. Big Data security is fast becoming a front-burner issue for organizations of all sizes. Why? Because Big Data without security = Big Risk.

The fact is, today’s cyber attacks are getting more sophisticated and attackers are changing their tactics in real time to get access to sensitive data in organizations around the globe. The barbarians have already breached your perimeter defenses and are inside the gates. For these advanced threat actors, Big Data represents an opportunity to steal an organization’s most sensitive business data, intellectual property and trade secrets for significant economic gain.

One approach used by these malicious actors to steal valuable data is by way of an Advanced Persistent Threat (APT). APTs are network attacks in which an unauthorized actor gains access to information by slipping in “under the radar” somehow. (Yes, legacy approaches like perimeter security are failing.) These attackers typically reside inside the firewall undetected for long periods of time (an average of 243 days, according to Mandiant’s most recent Threat Landscape Report), slowly gaining access to and stealing sensitive data.

Given that advanced attackers are already using APTs to target the most sensitive data within organizations, it’s only a matter of time before attackers will start targeting Big Data implementations. Since data is the new currency, it just makes sense for attackers to go after Big Data implementations because that’s where big value is.
So, what does all this mean for today’s business and security professionals? It means that when implementing Big Data, they need to take a holistic approach and ensure the organization can benefit from the results of Big Data in a manner that doesn’t negatively affect the risk posture of the organization.
The best way to mitigate risk of a Big Data breach is by reducing the attack surface, and taking a data-centric approach to securing Big Data implementations. These are the key steps:

Lock down sensitive data no matter the location.

The concept is simple; ensure your data is locked down regardless of whether it’s in your own data center or hosted in the cloud. This means you should use advanced file-level encryption for structured and unstructured data with integrated key management. If you’re relying upon a cloud service provider (CSP) and consuming Big Data as a service, it’s critical to ensure that your CSP is taking the necessary precautions to lock down sensitive data. If your cloud provider doesn’t have the capabilities in place or feels data security is your responsibility, ensure your encryption and key management solution is architecturally flexible in order to accommodate protecting data both on-premise and in the cloud.
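As a rough illustration of what file-level encryption with managed keys looks like in code, here is a minimal sketch using the open-source Python cryptography package. That package is an assumption made for the example rather than anything named in the post, and a real deployment would keep keys in an enterprise key-management system or HSM rather than in process memory.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key management: in production the key would live in a KMS/HSM,
# not in process memory or on local disk; this is purely illustrative.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"ssn=123-45-6789,name=Jane Doe"   # stand-in for a sensitive record
ciphertext = cipher.encrypt(record)          # what actually lands in storage
plaintext = cipher.decrypt(ciphertext)       # only key holders can do this
assert plaintext == record
```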

Manage access through strong policies.

Access to Big Data should only be granted to those authorized end users and business processes that absolutely need to view it. If the data is particularly sensitive, it is a business imperative to have strong policies in place to tightly govern access. Fine-grained access control is essential, including things like the ability to block access by even IT system administrators (they may have the need to do things like back up the data, but they don’t need full access to that data as part of their jobs). Blocking access to data by IT system administrators becomes even more crucial when the data is located in the cloud and is not under an organization’s direct control.
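The kind of fine-grained, deny-by-default access control described above can be sketched as follows. The policy structure, roles and operations are hypothetical, and in practice enforcement would live in the data platform or an access broker rather than in application code.

```python
# Hypothetical policy: which roles may perform which operations on a dataset.
POLICY = {
    "customer_pii": {
        "analyst":  {"read"},
        "etl_job":  {"read", "write"},
        "sysadmin": {"backup"},   # can back up, but cannot read the contents
    }
}

def is_allowed(role: str, operation: str, dataset: str) -> bool:
    """Return True only if the policy explicitly grants the operation."""
    return operation in POLICY.get(dataset, {}).get(role, set())

assert is_allowed("etl_job", "write", "customer_pii")
assert not is_allowed("sysadmin", "read", "customer_pii")  # admins blocked from data
```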

Ensure ongoing visibility into user access to the data and IT processes.

Security intelligence is a “must have” when defending against APTs and other security threats. The intelligence gained can inform the actions needed to safeguard what matters most – an organization’s sensitive data. End-user and IT processes that access Big Data should be logged and reported to the organization on a regular basis. And this level of visibility must apply whether your Big Data implementation is within your own infrastructure or in the cloud.
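At its simplest, that visibility starts with an audit trail of every end user and IT process that touches sensitive data. The sketch below uses Python’s standard logging module; the field names and the local log file are illustrative assumptions, and in practice the events would feed a security-intelligence or SIEM platform.

```python
import logging

# In practice these events would ship to a SIEM or security-intelligence
# platform; writing to a local file is just for illustration.
logging.basicConfig(filename="data_access_audit.log",
                    format="%(asctime)s %(message)s",
                    level=logging.INFO)
audit = logging.getLogger("audit")

def log_access(user: str, process: str, dataset: str, operation: str) -> None:
    """Record who (or what process) touched which dataset and how."""
    audit.info("user=%s process=%s dataset=%s operation=%s",
               user, process, dataset, operation)

log_access("jdoe", "nightly_etl", "customer_pii", "read")
```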

To effectively manage that risk, the bottom line is that you need to lock down your sensitive data, manage access to it through policy, and ensure ongoing visibility into both user and IT processes that access your sensitive data. Big Data is a tremendous opportunity for organizations like yours to reap big benefits, as long as you proactively manage the business risks.


You can follow C.J. Radford on Twitter @CJRad.

Locking Down the Cloud

Guest Post by Pontus Noren, director and co-founder, Cloudreach.

The good news for cloud providers is that forward-thinking CIOs are rushing to embrace all things ‘cloud’, realising that it provides a flexible and cost-effective option for IT infrastructure, data storage and software applications. The bad news is that the most significant obstacle to implementation could be internal: coming from other parts of the organisation where enduring myths about legal implications, security and privacy issues remain. The reality is that today such fears are largely unfounded. CIOs need help in communicating this to their more reluctant colleagues if they want to make the move to the cloud a success.

Myth No 1: The Security Scare

In many cases, moving to the cloud can in fact represent a security upgrade for the organisation. Since the introduction of cloud-based computing and data storage around ten years ago, the issue of security has been so high profile that reputable cloud providers have made vast investments in their security set-ups – investments that an individual organisation would be unable to match cost-effectively, given the far smaller scale on which it operates.

For example, data stored in the cloud is backed up, encrypted and replicated across multiple geographically distributed data centres in order to protect it from the impact of natural disasters or physical breaches. All this takes place under the watchful eyes of dedicated data centre security experts. If you compare this to the traditional in-house approach – which all too frequently sees data stored on a single server located somewhere in the basement of an office – it is not difficult to see which is the more secure option. By working with an established and respected cloud provider, such as Google or Amazon Web Services, businesses can benefit from such comprehensive security measures without having to make the investment themselves.

Myth No 2: Data in Danger

Security and data privacy are closely related, but different issues. Security is mainly about physical measures taken to mitigate risks, while ‘privacy’ is more of a legal issue about who can access sensitive data, how it is processed, whether or not it is being moved and where it is at any moment in time.

Concerns around compliance with in-country data protection regulations are rife, especially when dealing with other countries. Across Europe, for example, data protection laws vary from country to country, with very strict guidelines about where data can be stored. A substantial amount of data cannot be moved across geographical boundaries, so the security practice of replicating data across the globe has far-reaching compliance implications for data protection. However, data protection legislation states that there is always a data processor and a data controller, and a customer never actually ‘hands over’ its data. This doesn’t change when the cloud is involved – all large and reputable cloud services providers are only ever the data processor. In practice, the provider only ever processes data on behalf of its customer, and the customer always retains ownership of its data and the role of data controller.

However, much of data protection law predates the cloud and is taking a while to catch up. Change is most definitely on its way. Proposed European legislation aims to make data protection laws consistent across Europe, and with highly data-restricted industries such as financial services now starting to move beyond private clouds into public cloud adoption, further change is likely to follow as organisations start to feel reassured.

So what can CIOs do to change perceptions? It comes down to three simple steps:

  • Be Specific – Identify your organization’s top ten queries and concerns and address these clearly.
  • Be Bold – Cloud computing is a well-trodden path and should be seen not as the future but as the now. Having tackled company concerns head on, it is important to make the jump and not just dip a toe in the water.
  • Be Early – Engage reluctant individuals early on in the implementation process, making them part of the change. This way CIOs can fend off ill-informed efforts to derail cloud plans and ensure buy-in from the people who will be using the new systems and services.

The cloud has been around for a while now and is a trusted and secure option for businesses of all sizes and across all sectors. In fact, there are more than 50 million business users of Google Apps worldwide. It can hold its own in the face of security and privacy concerns. CIOs have an important role to play in reassuring and informing colleagues so that the firm can harness the many benefits of the cloud, future-proof the business and free IT expertise to add value across the organisation. Don’t let fear leave your organisation on the sidelines.

Pontus Noren is director and co-founder of Cloudreach.


Real-Time Processing Solutions for Big Data Application Stacks – Integration of GigaSpaces XAP, Cassandra DB

Guest post by Yaron Parasol, Director of Product Management, GigaSpaces

GigaSpaces Technologies has developed infrastructure solutions for more than a decade and in recent years has been enabling Big Data solutions as well. The company’s latest platform release – XAP 9.5 – helps organizations that need to process Big Data fast. XAP harnesses the power of in-memory computing to enable enterprise applications to function better, whether in terms of speed, reliability, scalability or other business-critical requirements. With the new version of XAP, increased focus has been placed on real-time processing of big data streams, through improved data grid performance, better manageability and end-user visibility, and integration with other parts of your Big Data stack – in this version, integration with Cassandra.

XAP-Cassandra Integration

To build a real-time Big Data application, you need to consider several factors.

First – can you process your Big Data in actual real time, in order to get instant, relevant business insights? Batch processing can take too long for transactional data. That doesn’t mean you no longer rely on batch processing in many ways.

Second – can you preprocess and transform your data as it flows into the system, so that the relevant data is made digestible and routed to your batch processor, making batch processing more efficient as well? Finally, you also want to make sure the huge amounts of data you send to long-term storage are available for both batch processing and ad hoc querying, as needed.

XAP and Cassandra DB together can easily enable all the above to happen. With built-in event processing capabilities, full data consistency, and high-speed in-memory data access and local caching – XAP handles the real-time aspect with ease. Whereas, Cassandra is perfect for storing massive volumes of data, querying them ad hoc, and processing them offline.
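As a rough illustration of the Cassandra side of such a stack, the sketch below stores events and queries them back ad hoc using the open-source DataStax Python driver. The driver, keyspace and table are assumptions made for this example; the post itself describes the integration as happening from within XAP.

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

# Connect to a local Cassandra node (hypothetical address and keyspace).
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.events (
        device_id text, ts timestamp, payload text,
        PRIMARY KEY (device_id, ts)
    )
""")

# Write an event, then query it back ad hoc by partition key.
session.execute(
    "INSERT INTO demo.events (device_id, ts, payload) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-42", "temp=21.5"),
)
rows = session.execute(
    "SELECT ts, payload FROM demo.events WHERE device_id = %s", ("sensor-42",)
)
for row in rows:
    print(row.ts, row.payload)

cluster.shutdown()
```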

Several hurdles had to be overcome to make the integration truly seamless and easy for end users, including reconciling XAP’s document-oriented model with Cassandra’s columnar data model (data must be able to move between the models smoothly) and their differing consistency guarantees: XAP offers immediate consistency together with performance, while Cassandra trades off between performance and consistency. With Cassandra as the Big Data store behind XAP processing, both consistency and performance are maintained.

Together with the Cassandra integration, XAP offers further enhancements. These include:

Data Grid Enhancements

To further optimize your queries over the data grid, XAP now includes compound indices, which enable you to index multiple attributes. This way the grid scans one index instead of multiple indices to get query result candidates faster.
On the query side, new projections support enables you to query only for the attributes you’re interested in instead of whole objects/documents. All of these optimizations dramatically reduce latency and increase the throughput of the data grid in common scenarios.

The enhanced change API includes the ability to change multiple objects using a SQL query or POJO template. Replication of change operations over the WAN has also been streamlined, and it now replicates only the change commands instead of whole objects. Finally, a hook in the Space Data Persister interface enables you to optimize your DB SQL statements or ORM configuration for partial updates.
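The payoff of projections is easy to see in miniature: return only the attributes a query asks for instead of whole objects or documents, so less data is serialized and shipped. The toy Python sketch below illustrates the idea only; it is not the XAP API.

```python
# Toy in-memory "grid" of documents (dicts), purely to illustrate projections.
grid = [
    {"id": 1, "symbol": "ACME",   "price": 101.5, "history": [...]},  # big payload
    {"id": 2, "symbol": "GLOBEX", "price": 87.2,  "history": [...]},
]

def query(predicate, projection=None):
    """Return matching documents; with a projection, only the named fields."""
    for doc in grid:
        if predicate(doc):
            yield {k: doc[k] for k in projection} if projection else doc

# Without a projection the whole document (including 'history') travels back;
# with one, only the two fields of interest do.
print(list(query(lambda d: d["price"] > 90, projection=["symbol", "price"])))
```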

Visibility and Manageability Enhancements

A new web UI gives XAP users deep visibility into important aspects of the data grid, including event containers, client-side caches, and multi-site replication gateways.

Managing a low latency, high throughput, distributed application is always a challenge due to the amount of moving parts. The new enhanced UI helps users to maintain agility when managing their application.

The result is a powerful platform that offers the best of all worlds, while maintaining ease of use and simplicity.

Yaron Parasol is Director of Product Management for GigaSpaces, a provider of end-to-end scaling solutions for distributed, mission-critical application environments, and cloud enabling technologies.

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.

So what do we want to control?  We can divide it into three: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world.  The balance of importance of those will differ between all organisations.

There are all sorts of types of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises. Professional data centre operators focus hard on the energy efficiency of the total facility. The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation. This is simple: the energy going into the facility divided by the energy used to power the electronic equipment. Although it is often abused (a nice example is the data centre that powered its facility lighting over PoE, Power over Ethernet, thus making the lighting part of the ‘electronic equipment’), it is widely understood and used worldwide. It provides visibility and focus for the process of continuous improvement. It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
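As a worked example of that definition, PUE is simply the total energy entering the facility divided by the energy consumed by the electronic (IT) equipment; the meter readings in the sketch below are invented for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings: mains feed vs. UPS output.
print(round(pue(total_facility_kwh=450_000, it_equipment_kwh=300_000), 2))  # 1.5
```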

Power efficiency can be managed at multiple levels:  at the facility level, at the cabinet level and at the level of ‘useful work’.  This last is difficult to define, let alone measure and there are various working groups around the world trying to decide what ‘useful work’ means.  It may be compute cycles per KW, revenue generated within the organisation per KW or application run time per KW and it may be different for different organisations.  Whatever it is, it has to be properly defined and measured before it can be controlled.

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.

Virtualisation brings its own challenges. Virtualisation has taken us away from the days when a typical server operated at 10-15% efficiency, but we are still a long way from most data centres operating efficiently with virtualisation. Often users will over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can. Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it. This creates ‘VM sprawl’. The way out of this is to measure, quote and charge. If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, leading to inefficient stranded capacity. And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
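The ‘measure, quote and charge’ approach can be made concrete with a trivial chargeback calculator; the rates and resource units below are invented purely for illustration.

```python
# Hypothetical internal chargeback rates (per hour).
RATES = {"vcpu": 0.03, "gb_ram": 0.01, "gb_storage": 0.0002}

def quote(vcpus: int, gb_ram: int, gb_storage: int, hours: float) -> float:
    """Give the user a cost estimate before capacity is committed."""
    hourly = (vcpus * RATES["vcpu"]
              + gb_ram * RATES["gb_ram"]
              + gb_storage * RATES["gb_storage"])
    return round(hourly * hours, 2)

# An over-specified VM 'just in case' vs. what the application really needs:
print(quote(vcpus=16, gb_ram=64, gb_storage=500, hours=720))  # 878.40
print(quote(vcpus=4,  gb_ram=16, gb_storage=100, hours=720))  # 216.00
```

Showing the two quotes side by side before provisioning is exactly the feedback loop the author describes: measurement leads to better decisions about how much capacity is really needed.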

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Rain From the Cloud (and Some Sun At the End)

Guest Post by Roger Keenan, Managing Director of City Lifeline

Cloud computing is changing the way in which computing and data communications operate.  The availability of high speed low cost communications through fibre optics means that remote hosting of computing and IT applications is economically possible, and there are clear cost benefits for both users and providers.  The migration from in-house computing to cloud has not been as fast as expected.  Putting aside the usual over-optimism of marketing spread-sheets, what holds users back when they think about cloud adoption?

Firstly, there is much conflicting hype in the market and many variations on which type of cloud – public, private, bare-metal, hybrid and so on, and the user must first find his way through all the hype.  Then he must decide which applications to migrate.  In general, applications with low communications requirements, important but not mission critical security needs and a low impact on the business if they go wrong are a good place to start.

Security is always the first concern of users when asked about the cloud.  When an organisation has its intellectual property and critical business operations in-house, its management (rightly or wrongly) feels secure.  When those are outside and controlled by someone else who may not share the management’s values of urgency about problems or confidentiality, management feels insecure.  When critical and confidential data is sent out over an internet connection, no matter how secure the supplier claims it is, management feels insecure.  There are battles going on in parliament at the moment about how much access the British security services should have to user data via “deep packet inspection” – in other words spying on users’ confidential information when it has left the user’s premises, even when it is encrypted.  The “Independent” newspaper in London recently reported that “US law allows American agencies to access all private information stored by foreign nationals with firms falling within Washington’s jurisdiction if the information concerns US interests.”  Consider that for a moment and note that it says nothing about the information being on US territory.  Any IT manager considering cloud would be well advised not to put forward proposals to management that involve critical confidential information moving to the cloud.  There are easier migrations to do.

Regulatory and compliance issues are barriers to adoption.  For example, EU laws require that certain confidential information supplied by users be retained inside EU borders.  If it is held on-site, there is no problem.  If it is in a cloud store, then a whole set of compliance issues arise and need to be addressed, consuming time and resources and creating risk.

Geographic considerations are important.  For a low-bandwidth application with few transactions per user in any given period and limited user sensitivity to delays, it may be possible to host the application on a different continent from the user.  A CRM application such as salesforce.com is an example where that works.  For many other applications, the delays introduced and the differences in presentation to the user of identical transactions may not be acceptable.  As a rule of thumb, applications for a user in London should be hosted in London and applications for a user in Glasgow should be hosted in Glasgow.
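To see why that rule of thumb holds, here is a rough back-of-the-envelope sketch of propagation delay alone, assuming light travels at roughly two-thirds of its vacuum speed in optical fibre and ignoring routing, queuing and protocol overhead.

```python
SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_FACTOR = 2 / 3  # light in fibre travels at roughly two-thirds of c

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fibre, in milliseconds."""
    return 2 * distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR) * 1000

print(round(round_trip_ms(550), 1))    # London-Glasgow (~550 km): ~5.5 ms
print(round(round_trip_ms(5_600), 1))  # London-New York (~5,600 km): ~56 ms
```

Real round trips are usually several times higher once routing and congestion are included, which is why chatty or interactive applications suffer when hosted on another continent.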

When applications are hosted on-site, management feels in control.  If management gives its critical data to someone else, it risks lock-in – in other words, it becomes difficult for management to get its data back again or to move its outsourced operations to another supplier.  Different providers have different ethics and processes around this, but there are some real horror stories around and management’s fears are not always misplaced.

Where cloud implementations involve standard general IT functions provided by standard software optimised for cloud, the user can have confidence it will all work.  Where there is special purpose software integrated with them, life can get very complicated.  Things designed for in-house are not usually designed to be exported.  There will be unexpected undocumented dependencies and the complexity of the integration grows geometrically as the number of dependencies grows.  Cloud has different interfaces and controls and ways of doing things and the organisation may not have those skills internally.

Like the introduction of any new way of working, cloud throws up unexpected problems, challenges the old order and challenges the people whose jobs are secure in the old order.  The long term benefits of cloud are sufficiently high for both users and providers that, over time, most of the objections and barriers will be overcome.

The way in which organizations employ people has changed over the last thirty years or so, from a model where everyone was a full-time employee to one where the business is run by a small, tight team pulling in subcontractors and self-employed specialists only when needed.  Perhaps the future model for IT is the same – a small core of IT in-house handling the mission-critical operations, guarding corporate intellectual property and critical data, and drawing in less critical or specialised services remotely from cloud providers when needed.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.