IBM Exec: “Big Data Is the Phenomenon of Our Time”

The agreement between IBM and Apple to collaborate on mobility and Big Data will certainly rank as one of the big stories of the year in global enterprise IT. In addition to Apple’s devices and IBM’s custom apps, an absolute key to this deal will be telecommunications.
As we all know, what we call Information Technology (IT) in the US is generally referred to as Information and Communications Technology (ICT) throughout the world, demonstrating telco’s importance to the whole enchilada.
Thus, we were fortunate to speak about the IBM/Apple deal recently with Robert Fox, IBM’s Global Industry Leader for Telecommunications, Media & Entertainment.
Here’s what we asked, and what he had to say:
Big Data Journal: Apple CEO Tim Cook mentioned “Big Data Analytics” as a key reason to do business with IBM. From your point of view, and IBM’s, what strengths do you bring in this area?

Bob Fox: The Apple and IBM partnership is all about combining IBM’s Big Data and Analytics capabilities with Apple’s legendary consumer experience, hardware and software integration and developer platform.
IBM is the proven leader in Big Data and Analytics, with more than 40,000 data and analytics client engagements that span research and development, solutions, software and hardware. The analytics portfolio is backed by more than 15,000 analytics consultants, 4,000 analytics patents, 6,000 industry solution business partners, and 400 IBM mathematicians who are helping clients use Big Data to transform their businesses.
Over the last ten years, we have been applying these resources to solve mission critical challenges in sales, marketing, operations, fraud, security, and many other functions across the 17 industries on which we focus.

BDJ: Big Data is, obviously, nothing without strong telco to deliver it throughout enterprises and the world. What is IBM’s vision and execution in the telco aspect of the IBM/Apple agreement?

Bob: While communications service providers (CSPs) are rethinking how new networks will be provisioned and managed in order to meet new traffic demands, they are also faced with the need to radically change the way networks are maintained and customers are serviced.
Some 76% of CSP enterprise customers report that they are not satisfied and are demanding faster and more efficient service. In the consumer segment, CSPs rank among the lowest in traditional measures of customer satisfaction, including NPS and advocacy.

BDJ: So how do you improve this?
Bob: Customer satisfaction can be drastically improved in this industry by giving mobile workers in the field access to real-time ticket management, service history or parts inventories in the palm of their hand.
To help restore telco’s customer service reputation, IBM and Apple will develop more than 100 enterprise solutions, starting with apps for telecommunications field service personnel. These applications will allow CSPs to deliver the right services the first time, all at lower costs.
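To make the field-service idea concrete, here is a minimal sketch of the kind of data model such an app might sit on. Everything here, the Ticket class, its fields, and the assignable helper, is a hypothetical illustration, not anything published from the IBM/Apple agreement:

```python
# Hypothetical sketch of a field-service ticket model; none of these names
# come from the IBM/Apple announcement.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ticket:
    ticket_id: str
    customer: str
    opened_at: datetime
    service_history: list[str]  # prior visits, shown to the technician in the field
    parts_needed: list[str]     # checked against inventory before dispatch

def assignable(ticket: Ticket, inventory: set[str]) -> bool:
    """A technician can close the ticket on the first visit only if every
    required part is already in stock."""
    return all(part in inventory for part in ticket.parts_needed)
```

The assignable check mirrors the “right services the first time” goal: don’t dispatch a technician until the inventory on hand can actually close the ticket.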

BDJ: Do you have an estimate or projection of the amount of Big Data that an individual enterprise may be collecting and analyzing? And do you have a global estimate for the growth of Big Data over the next few years?

Bob: Thanks to a proliferation of devices and the infusion of technology into all things and processes, the world is generating more than 2.5 billion gigabytes of data every day, and 80 percent of it is unstructured—everything from images, video and audio to social media and a blizzard of impulses from embedded sensors and distributed devices.
It is not atypical for a single CSP to collect data on tens of billions of events, yielding a petabyte or more of data to store and analyze, every day!
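As a quick sanity check on those figures, one can solve for the per-event size they imply; treating “tens of billions” as roughly 20 billion is my assumption:

```python
# Back-of-envelope: what per-event size do the quoted figures imply?
events_per_day = 20e9               # "tens of billions of events" (assumed midpoint)
petabytes_per_day = 1.0             # "a petabyte or more"
bytes_per_event = petabytes_per_day * 1e15 / events_per_day
print(f"~{bytes_per_event / 1e3:.0f} KB per event")  # ~50 KB per stored record
```

Roughly 50 KB per stored event is plausible once raw network events are enriched and indexed for analysis, though the exact figure will vary widely by CSP.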
With the market for data and analytics estimated to reach $187 billion by 2015, organizations spanning many industries have become increasingly dependent on data—for recording their business transactions, managing their production lines and defining their growth strategies.
The emergence of Big Data is the phenomenon of our time; it is a new natural resource. It is fueled by the proliferation of devices, the rise of social media and the infusion of technology into all things and processes.


HIPAA, cloud, and your business: What you need to know

By David Linthicum

When it comes to HIPAA-compliant solutions, security, and cloud adoption, what most find frustrating is sorting the myths from reality. The “addressable” requirements of the Security Rule tend to be the most difficult to meet. They therefore have a tendency to fall off the radar, which can create compliance issues.

Under the HIPAA Omnibus Rule, business associates, which include many public cloud computing providers, are now directly liable for HIPAA compliance. The rule also covers what business associate agreements need to be in place, with clear responsibility outlined for who will protect the data.

So those charged with HIPAA security and policy have begun to rethink the role of cloud computing. At its essence, this means understanding the existing requirements, and then understanding how the emerging use of cloud computing can provide compliant and secure HIPAA solutions.

Cloud computing has the potential to improve upon the best practices and technology that exist today.  Those healthcare organizations that have been reluctant to move IT assets to public cloud, or managed services providers, now see a day when there will be little option but to leverage these services.  Budgets are always tight, and the practice of building new data centers as healthcare organizations expand is becoming a bit tiresome to the boards of directors that pay the bills.

So, consider the next few years to be a bit of a forced marriage between cloud computing, managed services providers, and the need to deal with healthcare compliance issues such as HIPAA. Both the regulators and the healthcare organizations need to work closely together to ensure that the resulting solutions don’t place patient data at risk or run afoul of the law.

Things are certainly scary. Last year, breaches at Oregon Health & Science University involved the illegal storage of unencrypted patient information with a public cloud provider. Events like these focus attention on how emerging regulations, such as the HIPAA Omnibus Rule, affect cloud vendor compliance.

So, what’s an underfunded healthcare IT shop supposed to do to ensure that it remains HIPAA-compliant while bringing both agility and efficiency to the organization through the use of cloud computing? Here are a few suggestions:

– First, create a HIPAA cloud strategy that defines the approaches, agreements, and target technology providers that you would like to leverage.  Make sure to note costs, as well as do a quick business case study.

– Second, make sure to understand the risk, and the need for both security and governance.  Many healthcare organizations think that technology will save them.  However, it’s more about the people and processes, and then the technology.

– Finally, make sure to build outside validation and auditing into the process, to make certain all of the agreements and technologies are up to date and that the risk is as low as you can reasonably make it.
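As a minimal sketch of that last suggestion, here is one way a recurring validation checklist might be encoded. The control names, dates, and 90-day cadence are illustrative assumptions, not language from the HIPAA Security Rule:

```python
# Illustrative only: a recurring outside-validation checklist for a HIPAA
# cloud strategy. Control names and the audit cadence are assumptions.
from datetime import date, timedelta

LAST_VALIDATED = {
    "encryption_at_rest": date(2014, 6, 1),
    "business_associate_agreement": date(2013, 11, 15),
    "access_log_review": date(2014, 7, 20),
}

MAX_AGE = timedelta(days=90)  # assumed audit cadence

def overdue(today: date) -> list[str]:
    """Controls whose last outside validation is older than the audit window."""
    return [name for name, last in LAST_VALIDATED.items() if today - last > MAX_AGE]

print(overdue(date(2014, 9, 1)))
# -> ['encryption_at_rest', 'business_associate_agreement']
```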

This is actually not that difficult to figure out once you dig deeper into the issues. However, like any other technical change that requires an assessment of legal issues, it’s a bit nerve-racking at first.


@ThingsExpo Insight: Passive Data & The IoT

Within the world of Big Data & the IoT, there’s a significant sub-culture of what @CloudExpo & @DevOpsSummit faculty member Lori MacVittie has recently termed “passive” data. Her example was a little barcode reader she carries around that scans food items for gluten. The device hooks into databases “out there,” as she puts it—probably in the cloud somewhere out there.

Let’s take a look at how this device interacts among the dimensions of Big Data and the IoT that I outlined the other day. These dimensions are Urgency, Importance, Frequency, Consequences, Remedy & Cost.

The final analysis also includes size, i.e., dataflow, of course. I leave size out of the group of dimensions because I see it as a function of them, particularly frequency. Also, it seems to be a habit to focus on the size of the problem (e.g., how many cloud instances do we need to buy?) rather than the data’s real significance to the organization.

In Lori’s use case, the dataflow from even millions of such devices will not be particularly large; it’s the other dimensions that matter more. To the shopper, the information is critical to buying a particular food item. To the merchant and supplier, it’s critical to the sale. The urgency of this information is thus very high as well.

The consequence of failure is a lost sale; the remedy is an awareness that people will carry this device, and presumably many others, checking every conceivable aspect of what’s in the food they’re buying.

I could see the activity or passivity of data as fitting in nicely with the frequency dimension.

The bottom line here seems to be that the actual cost of deploying and serving this application and others like it won’t be particularly large per se, but the potential opportunity cost will be very high.
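Putting the dimensions together for this use case, here is a simple scoring sketch; the 1-to-5 scale and the particular scores are my own assumptions for illustration:

```python
# Hypothetical scoring of the gluten-scanner data source along the six
# dimensions. The 1-5 scale and these scores are assumptions for illustration.
DIMENSIONS = ("urgency", "importance", "frequency", "consequences", "remedy", "cost")

gluten_scanner = {
    "urgency": 5,        # the shopper is standing in the aisle right now
    "importance": 5,     # critical to the purchase decision
    "frequency": 3,      # a handful of scans per shopping trip
    "consequences": 4,   # a lost sale, or a sick customer
    "remedy": 2,         # merchants can stock and label alternatives
    "cost": 1,           # tiny payloads, modest serving cost
}
assert set(gluten_scanner) == set(DIMENSIONS)

def estimated_dataflow(profile: dict, bytes_per_event: int = 200) -> int:
    """Rough per-user daily bytes; size treated as a function of frequency,
    assuming ~10 events and ~200 bytes per event per frequency point."""
    return profile["frequency"] * 10 * bytes_per_event

print(estimated_dataflow(gluten_scanner))  # 6000 bytes/day per user: small, as argued
```

The low cost score sitting next to the high urgency and importance scores is exactly the opportunity-cost asymmetry described above.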

In my particular case, not that anyone wants to know other than for use-case reasons, sugars are the big killer. I’d like my smart watch, smart belt, or smart ring to let me know, when I prompt it, whenever something has less than X milligrams of sugar. I’m sure we can think of at least 100 other things, and perhaps will see the Swiss Army knife approach to such devices in the future.
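For what it’s worth, the prompt-driven sugar check could be as simple as the following sketch; the product database and its fields are hypothetical stand-ins for the databases “out there” that such a device would query:

```python
# Hypothetical barcode-to-nutrition lookup; PRODUCTS stands in for the cloud
# databases such a device would actually query.
PRODUCTS = {
    "0123456789012": {"name": "Oat bar", "sugar_mg": 4000},
    "0987654321098": {"name": "Sparkling water", "sugar_mg": 0},
}

def sugar_ok(barcode: str, limit_mg: int) -> bool:
    """True only if the scanned item is known and falls under the user's
    sugar threshold."""
    item = PRODUCTS.get(barcode)
    return item is not None and item["sugar_mg"] < limit_mg

print(sugar_ok("0123456789012", limit_mg=500))  # False: the oat bar is over
print(sugar_ok("0987654321098", limit_mg=500))  # True: under the threshold
```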


Calling for a common cloud architecture

By Kathy L. Grise

The theme of cloud computing is truly ubiquitous, reaching across not just the computing industry and its professionals, but academia, government, industry at large, and the average consumer.

A successful cloud implementation happens when it all functions effectively and is transparent to the user. The user should not have to worry about the where, how, or what behind the cloud; issues like privacy, security, reliability, and accessibility should simply be taken care of. Naturally, that success rests on a sound architecture (or architectures) behind cloud computing.

There are numerous pieces and parts that host, drive, and support cloud computing, ranging from the SaaS and PaaS service layers down to the basic and fundamental physical components.

To use the “drive” analogy, let’s think about what drives an automobile. (For the purpose of this analogy, if you collect cars for display only, stop reading; if it’s important that your car runs, read on.) A functional automobile has to be usable, e.g., drivable, which normally requires an engine. An engine, in turn, needs cylinders and pistons, a crankshaft, and connecting rods. In terms of driving the cloud, the cloud engine needs software, services, networking, storage, and platform(s) to operate seamlessly.

The cloud provider should ensure that all these pieces and parts fit nicely together and that all issues are covered. A single cloud provider need not carry the full burden of providing and servicing every piece and part, but it does have to ensure that each piece or part can communicate and function with the others. This drives demand for a common platform that is interchangeable, interoperable, and interconnected.


For example, a cloud provider could develop and offer a competitive cloud security solution that differentiates it from its competitors, pulling together an overall package from other specialized providers. As a result of that shared burden, the provider can minimize its overall costs and advance in its field of security.

This common platform has enabled the rapid startup of literally hundreds of new companies advancing cloud security for a multi-billion dollar industry, resulting in the creation of new jobs, opportunities, and advancements in technology.

The IEEE has a global initiative to develop interoperability between cloud to cloud, and its federation. Its P2302 draft standard defines topology, functions, and governance for cloud to cloud interoperability and federation. Topological elements include clouds, roots, exchanges (which mediate governance between clouds), and gateways (which mediate data exchange between clouds). Functional elements include name spaces, presence, messaging, resource ontologies (including standardized units of measurement), and trust infrastructure. Governance elements include registration, geo-independence, trust anchor, and potentially compliance and audit. IEEE’s Intercloud Testbed provides for a practical application and verification of P2302.
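A rough sketch of that topology as plain data types may help; the field names below are my reading of the summary above, not the P2302 draft’s actual schema:

```python
# Rough data model of the Intercloud topology as summarized above; field
# names are my own reading of that summary, not the draft standard's schema.
from dataclasses import dataclass, field

@dataclass
class Cloud:
    name: str
    resources: dict = field(default_factory=dict)  # described via resource ontologies

@dataclass
class Exchange:
    """Mediates governance (e.g., registration, trust) between member clouds."""
    name: str
    members: list = field(default_factory=list)

@dataclass
class Gateway:
    """Mediates data exchange between a pair of clouds."""
    source: Cloud
    target: Cloud

a, b = Cloud("telco-cloud"), Cloud("analytics-cloud")
ex = Exchange("regional-exchange")
ex.members.extend([a, b])  # governance path runs through the exchange
link = Gateway(a, b)       # data path runs through the gateway
```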

Overall, it is critical to have common architectures that are interchangeable, interoperable, and interconnected for successful cloud services and applications. Common architecture translates into real business, real jobs, real dollars, real advancements in technology, and ultimately benefits the end consumer. So let us all move towards a more interchangeable, interoperable, and interconnected environment for cloud computing.

About the author

Kathy Grise, IEEE Future Directions Program Director, works directly with IEEE volunteers, IEEE staff, and consultants in support of new initiatives, and is the IEEE staff program director for the IEEE Cloud Computing Initiative, Big Data Initiative, Green ICT Initiative, and the IEEE Technology Navigator. Prior to joining the IEEE staff, Ms. Grise held numerous positions at IBM, and most recently was a Senior Engineering Manager for Process Design Kit Enablement in the IBM Semiconductor Research and Development Center.

Unified, Comprehensive and Easy-to-Use IT Systems Monitoring

This one-hour webinar will cover the core benefits and features of up.time, including how up.time proactively monitors, alerts and reports on the performance, availability, and capacity of all physical servers, virtual machines, network devices, applications, and services. We’ll take you through the up.time Dashboards, show you how alerting and action profiles work, and dive into up.time’s deep reporting capabilities, including SLA reports. In the end, you’ll know how up.time works and what it can do for your company.
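As a generic illustration of the threshold-and-alert pattern such tools implement (this is not up.time’s actual API; the metric names and limits are assumptions):

```python
# Generic threshold-alert loop in the spirit of what the webinar covers.
# This is NOT up.time's API; metric names and thresholds are assumptions.
THRESHOLDS = {"cpu_percent": 90, "disk_percent": 85}

def collect_metrics() -> dict:
    """Stand-in for an agent poll; a real monitor queries each host or VM."""
    return {"cpu_percent": 72, "disk_percent": 91}

def check_once(alert=print) -> None:
    metrics = collect_metrics()
    for name, limit in THRESHOLDS.items():
        if metrics[name] > limit:
            alert(f"ALERT: {name}={metrics[name]} exceeds limit {limit}")

check_once()  # -> ALERT: disk_percent=91 exceeds limit 85
```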


Data Privacy Perspectives: United States vs Europe & Reality vs Illusion

It’s time to face reality: “Americans are from Mars, Europeans are from Venus,” and in today’s increasingly connected world, understanding “inter-planetary” alignments and deviations is mission-critical for cloud.
In her session at 15th Cloud Expo, Evelyn de Souza, Data Privacy and Compliance Strategy Leader at Cisco Systems, will discuss cultural expectations of privacy based on new research across these elements.


Blog Update: New URL, Same Great Content!

By Ben Stephenson, Emerging Media Specialist, GreenPages Technology Solutions

 

Very quick update: Our Journey to the Cloud blog has been transferred over to GreenPages.com. It’s the same great content, just hosted in a new place. All Journey to the Cloud links will be redirected to the new URL http://blog.greenpages.com/. If you’re currently subscribed to receive Journey to the Cloud posts, you will continue to get them delivered. If you would like to subscribe to get our posts delivered via email, you can do so here!

Why’d we do it? At GreenPages, we believe that it’s no longer about the Journey to the Cloud. The cloud is already here. Now it’s about managing your hybrid cloud environment. As we enter the second wave of virtualization, the conversation becomes more about deciding which applications you should run in the cloud…and which cloud you should run them in. We wanted the blog to reflect where the industry currently is.

No one panic, you still get to hear about cloud management from John Dixon, storage and data management from Randy Weis, software defined networking from Nick Phelps, and advanced virtualization from Chris Ward.

So, be sure to come back and visit http://blog.greenpages.com/ for more great content from our experts!

The Answer for Government Applications Migrating to the Cloud: Visibility

Recently, I’ve had several conversations with US Federal Government Agencies about monitoring applications moving to FedRAMP (The Federal Risk and Authorization Management Program) data centers. Because of the Government’s Cloud First policy, which mandates that agencies take full advantage of cloud computing benefits, agencies are increasingly forced to move applications outside of their own data centers. With less control over the infrastructure, the new focus is on the performance and availability of their applications running in the cloud. Agencies want assurance that their applications will run at the same level of performance (or better) once they make the move. This is where I believe an APM solution like AppDynamics is a perfect fit to mitigate risk by providing agencies 100% visibility into their application performance.
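As a minimal illustration of the before-and-after visibility being described (this is not how AppDynamics instruments applications; the endpoints are hypothetical):

```python
# Minimal sketch: compare application response time before and after a
# migration. Illustrative only; not AppDynamics' instrumentation approach.
import time
import urllib.request

def average_response_time(url: str, samples: int = 5) -> float:
    """Average wall-clock seconds to fetch the URL across a few requests."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        total += time.perf_counter() - start
    return total / samples

# Hypothetical endpoints for the pre- and post-migration baselines:
# before = average_response_time("https://agency.example.gov/app/health")
# after  = average_response_time("https://app.fedramp-host.example.com/health")
# assert after <= before * 1.05, "performance regressed after the move"
```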


Ten Answers Regarding Mobile App Testing

This white paper digs deep into the reasons testing mobile apps is fundamentally harder than testing traditional web or desktop applications. In this collaboration, Tina Zhuo and Dennis Schultz from IBM, along with Yoram Mizrachi from Perfecto Mobile and John Montgomery from uTest, explore the complexities of mobile test environments, the value of the mobile device cloud, the unique role crowdsourcing can play, and how teams can leverage automation to help deliver quality apps.
