SDDC: Product or Project By @SuperNap | @CloudExpo [#Cloud #SDN]

Simply defined, the SDDC promises that you’ll be able to treat “all” of your IT infrastructure as if it’s completely malleable: there are no restrictions on how you use and assign everything from border controls to VM size, so long as you stay within the technical capabilities of the devices. The promise is great, but the reality is still a dream for the majority of enterprises.

In his session at the 15th Cloud Expo, Mark Thiele, EVP of Ecosystem Evangelism at SUPERNAP, will cover where and how a business might benefit from SDDC, and why it should or shouldn’t attempt adoption today.
Mark Thiele’s responsibilities at SUPERNAP include evaluating new data center technologies, developing partners and providing industry thought leadership. His insights on the next generation of technological innovations and how these technologies speak to client needs and solutions are invaluable. He shares his enthusiasm and passion for technology and how it impacts daily life and business on local, national and world stages.

read more

Announcing @SAP “Gold Sponsor” of @CloudExpo | [@SAPInMemory]

SAP HANA combines database, data processing, and application platform capabilities in-memory. The platform provides libraries for predictive, planning, text processing, spatial, and business analytics. This new architecture enables converged OLTP and OLAP data processing within a single in-memory, column-based data store with ACID compliance, while eliminating data redundancy and latency. By providing advanced capabilities, such as predictive text analytics, spatial processing, and data virtualization, on the same architecture, it further simplifies application development and processing across big data sources and structures. This makes SAP HANA the most suitable platform for building and deploying next-generation, real-time applications and analytics.
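For the technically curious, here is a minimal sketch of what that converged OLTP/OLAP model can look like from Python, using SAP’s hdbcli driver. The host, credentials and SALES column table are placeholders assumed to exist; this is an illustration, not an official SAP example.

```python
# Minimal sketch: one HANA column table serving both transactional writes
# and analytical reads, with no separate warehouse or ETL step.
# Assumes a reachable HANA instance, the hdbcli driver (pip install hdbcli),
# and an existing SALES column table; connection details are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(address="hana.example.com", port=30015,
                     user="DEMO", password="secret")
cur = conn.cursor()

# OLTP-style write straight into the column store.
cur.execute("INSERT INTO SALES (REGION, AMOUNT) VALUES (?, ?)", ("EMEA", 1250.00))
conn.commit()

# OLAP-style aggregate over the very same table, moments later.
cur.execute("SELECT REGION, SUM(AMOUNT) FROM SALES GROUP BY REGION")
for region, total in cur.fetchall():
    print(region, total)

conn.close()
```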

read more

IBM Watson lands in Thailand, South Africa and Australia

Picture credit: ChrisDag/Flickr

IBM has announced that its supercomputer Watson is being rolled out in a variety of locations, including Australia, Thailand, South Africa, and Spain.

The global expansion has come about after the IBM Watson Group was formed in January. IBM also announced various collaborations with companies, as well as startups creating apps that are powered by Watson.

Watson is being trialled in Spain with CaixaBank, which is developing a cognitive system to teach Watson Spanish. Similarly, ANZ Global Wealth is working with Watson to analyse and observe the types of questions coming from both customers and financial advisors, in order to offer an improved advice process.

Watson’s credentials in healthcare will also be tested. The supercomputer is being deployed in Bumrungrad International Hospital in Thailand, to improve the quality of cancer care, as well as at Metropolitan Health in South Africa, to provide personalised, outcome-based health services. The latter will be the first commercial application of Watson on the African continent.

Deakin University in Australia is also trialling Watson to develop an online student engagement advisor, building profiles for the university’s 50,000 students and assisting them with everything from finding buildings on campus to choosing which careers to pursue.

Evidently, the emphasis is on improving the client experience and Watson’s capabilities alike. One of the key tenets of Watson is that it continually learns from its mistakes.

CloudTech was treated to a demonstration of Watson Analytics last month, and found some interesting insights. Users have three ways of starting a project: with a question, from a use case, or from the data itself.

Ask Watson Analytics a question (in the case of the demo, ‘why do people purchase?’) and it returns results organised around drivers rather than raw figures. And because Watson Analytics is based in the cloud, it offers great flexibility in adding use cases.

Read more: Watson Analytics: How it makes sales and marketing’s jobs easier

Here’s proof Oracle is now taking cloud computing very seriously

Picture credit: Peter Kaminski/Flickr

It’s taken a long time for Oracle, and former CEO Larry Ellison in particular, to embrace cloud computing. But the tide has now truly turned: the software giant has hired Google App Engine mastermind Peter Magnusson into a senior VP role.

Magnusson, who was most recently VP of engineering at Snapchat, was previously engineering director at Google, responsible for its platform-as-a-service offering App Engine, as well as working on strategy for Cloud Platform.

Little is known about the role Magnusson will undertake at Oracle; his LinkedIn page confirms he’s now working at the company’s Redwood Shores headquarters, but the job description coyly states merely ‘Oracle public cloud’.

The company has been undergoing a strategic shift over the past couple of months, with founder Ellison stepping sideways to assume the role of chief technology officer and Safra Catz and Mark Hurd appointed as co-CEOs.

At Oracle OpenWorld last month, Ellison was effusive about Oracle’s new cloud capabilities, announcing expansions in IaaS, PaaS and SaaS, as well as a cloud-capable version of Java.

“Extreme performance has always been part of the Oracle brand,” he told delegates. “What has not always been a part of the Oracle brand is the notion of extreme ease of use and extreme low cost.” The Oracle Cloud platform, he said, changes that.

“Our cloud is bigger than people think, and it’s going to get a lot bigger,” he added.

Magnusson’s move from Google to Snapchat was an interesting one at the time, but given Snapchat is one of Google’s biggest cloud customers – a fact the search giant occasionally likes to slip into conversation – the defection made sense. We’ll just have to wait and see what prompted this switch.

Oracle didn’t have any comment to make when CloudTech enquired.

Announcing All Star @SoftLayer Faculty at @CloudExpo Silicon Valley [#Cloud #PaaS]

As Platform as a Service (PaaS) matures as a category, developers should be able to use the programming language of their choice to build applications and have access to a wide array of services. Bluemix is IBM’s open cloud development platform that enables users to easily build cloud-based, creative mobile and web applications without having to spend large amounts of time and resources on configuring infrastructure and multiple software licenses. In this track, you will learn about the array of services that support and accelerate application development, as well as how to build applications on Bluemix using Java and Node.js. Learn more about Bluemix at www.bluemix.net.
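The track itself focuses on Java and Node.js, but the PaaS model is language-agnostic. As a rough illustration, here is a minimal Python web app of the kind the Cloud Foundry runtime underneath Bluemix can host; the PORT and VCAP_SERVICES environment variables are standard Cloud Foundry conventions, while the route and response shape are invented for the example.

```python
# Minimal sketch of a web app as Cloud Foundry (the platform under Bluemix)
# might run it. Cloud Foundry injects the listening port via PORT and any
# bound service credentials via VCAP_SERVICES; the rest is illustrative.
import json
import os

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # List the names of any services bound to the app (e.g. a database).
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    return jsonify({"message": "hello from the platform",
                    "bound_services": sorted(services)})

if __name__ == "__main__":
    # The platform tells the app which port to bind; 8080 is a local fallback.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```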

read more

CIOs: How to procure Web-scale IT infrastructure expertise

Picture credit: Michael Heiss/Flickr

Here’s the challenge: your CEO won’t accept anything less than a world-class IT infrastructure that meets the expectations of the most informed line-of-business leaders. If you’re the CIO and you don’t want to rely upon public cloud service providers, are you ready to deliver the caliber of IT services that they can provision?

Take a moment. Consider the implications. Ponder the impact of your actions.

All the leading public cloud service providers routinely design and assemble their own data center infrastructure components, owing to their extreme needs for scale and cost control. Whichever cloud services company you look at, a common element is that these devices run an open-source operating system, such as Linux, alongside various other purpose-built open-source software components. Gartner, Inc. refers to this trend as the Web-scale IT phenomenon.

Why you need to adopt Web-scale IT practices

What is Web-scale IT? It’s all of the things happening at large cloud services firms – such as Google, Amazon and Facebook – that enable them to achieve extreme levels of compute and storage delivery. The methodology includes industrial data center components, web-oriented architectures, programmable management, agile processes, a collaborative organization style and a learning culture.

By 2017, Web-scale IT will be an architectural approach found operating in 50 percent of all global enterprises – that’s up from less than 10 percent in 2013, according to Gartner. However, Gartner also predicts that most corporate IT organizations have a significant expertise shortfall, creating huge demand for long-term technical staff training and near-term consulting guidance.

Gartner reports that the legacy capacity planning and performance management skills within typical enterprise IT teams are no longer sufficient to meet the needs of today’s rapidly evolving multinational businesses. By 2016, according to Gartner’s assessment, the lack of required skills will be a major constraint to growth for 80 percent of all large companies.

Gartner also believes that Web-scale IT organizations are very different from conventional IT teams – in particular, they proactively learn from one another. Furthermore, Web-scale organizations extend the data center virtualization concept by architecting applications to be stateless wherever possible.
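As a rough illustration of that stateless pattern, here is a minimal Python sketch in which session state lives in an external Redis store rather than in the web process, so any instance in any data center can serve any request. The host name and key scheme are hypothetical.

```python
# Minimal sketch of a stateless request handler: no session data is kept in
# the process itself, so instances can be added, killed or moved between
# data centers freely. Assumes a reachable Redis (pip install redis);
# the host and key naming are illustrative.
import redis

store = redis.Redis(host="sessions.example.com", port=6379, decode_responses=True)

def add_to_cart(session_id: str, item: str) -> list:
    """Append an item to a user's cart and return the full cart."""
    key = "cart:" + session_id
    store.rpush(key, item)            # state goes to the shared store
    store.expire(key, 3600)           # bound the session's lifetime
    return store.lrange(key, 0, -1)   # read it back from the same store
```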

“While major organizations continue to maintain and sustain their conventional capacity-planning skills and tools, they need to regularly re-evaluate the tools available and develop the capacity and performance management skills present in the Web-scale IT community,” said Ian Head, research director at Gartner.

Hybrid cloud computing services constructed in this way are better equipped to scale geographically and share multiple data centers with limited impact on user performance. This approach also blurs the lines between capacity planning, fault-tolerant designs, and disaster recovery.

When to plan for large-scale web systems

Demand shaping uses various techniques to adjust the quantity of resources required by any one service so that the infrastructure does not become overloaded. Gartner predicts that through 2017, 25 percent of large organizations will use demand shaping to plan and manage capacity – up from less than 1 percent in 2014. So you’ll need a plan of action to take you from your current scale to Web-scale.
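To make the idea concrete, here is one classic demand-shaping technique sketched in Python: a token bucket that admits requests only at a rate the infrastructure can sustain, shedding the excess. The rates are illustrative, and this is just one of many shaping approaches.

```python
# Minimal token-bucket sketch: requests are admitted while tokens remain,
# so bursts are smoothed to a sustainable rate. Numbers are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # steady-state admission rate
        self.capacity = burst             # how big a burst we tolerate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # shed or queue instead of overloading

bucket = TokenBucket(rate_per_sec=100, burst=20)
if not bucket.allow():
    pass  # e.g. return HTTP 429 and let the client retry later
```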

Gartner says that CIOs and other IT leaders must plan both the application and infrastructure architecture carefully. Infrastructure and product teams must work together to build application functionality that allows an orderly degradation of service, reducing non-essential features and functions when necessary.
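A minimal sketch of what that orderly degradation might look like in code, with hypothetical feature names and utilization thresholds:

```python
# Minimal sketch of orderly degradation: as measured utilization rises,
# non-essential features are switched off in a fixed order, protecting the
# core service. Feature names and thresholds are hypothetical.
FEATURES_BY_PRIORITY = [
    ("recommendations", 0.70),  # first to go, once utilization passes 70%
    ("activity_feed",   0.80),
    ("rich_previews",   0.90),  # last non-essential feature to be dropped
]

def enabled_features(utilization: float) -> set:
    """Return the feature set to serve at the current utilization (0.0-1.0)."""
    return {name for name, cutoff in FEATURES_BY_PRIORITY if utilization < cutoff}

# At 75% utilization, recommendations are dropped; everything else stays up.
assert enabled_features(0.75) == {"activity_feed", "rich_previews"}
```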

Moreover, the different architectures and the sheer scale of Web-scale IT organizations make traditional capacity planning tools of limited utility. In-memory computing and deep analytics tools are generally used to extract the required data directly from the infrastructure and from reporting capabilities built into software applications. This information informs real-time decisions to allocate resources and manage potential bottlenecks.

“These operational skills and tools are currently unique to each Web-scale organization and are not yet available in most end-user organizations,” said Mr. Head. “However they will be in increasingly high demand as large organizations of all types begin to pursue the tangible business benefits of a Web-scale approach to IT infrastructure.”

Getting ready to scale-out your infrastructure

So, if your IT organization isn’t prepared for this transition, what are your options? Your search for hybrid cloud training services and consulting guidance should start with a requirement for proven OpenStack expertise. OpenStack is the leading open-source cloud Infrastructure-as-a-Service platform, and you’re likely to need IT talent that’s already experienced with OpenStack deployments.

That said, choose wisely, and keep in mind that few suppliers will have Web-scale infrastructure experience. As you prepare your list of qualified vendors, ask for customer case studies and use case scenarios. To help reduce the risk of procurement remorse, take all the time you need to perform due diligence.

Regardless of your IT operational budget, familiarize yourself with the OpenStack trailblazers, such as eNovance, and become versed in the language and processes of the Web-scale infrastructure deployment pioneers. Now you’ll be ready to embark upon your scale-out infrastructure journey.

How to Lock Down Sensitive Data By @Motorola | @CloudExpo

For retailers everywhere, it’s a challenging new day. Security threats are a constant, both inside their four walls and out. We hear about the big security breaches on the news; the smaller ones sometimes go unreported, but their impact remains costly to us all. The need for mobility, rapidly evolving technology and growing customer expectations for network access continue to complicate matters for retail IT, and have set the stage for increased risk of over-the-air breaches using rogue Bluetooth® devices.
Bluetooth technology has transformed how we connect, both at home and at work. It was designed for consumer product connectivity, but it can be misused to gain entry into enterprise applications. The technology is often used for payment card readers, yet it is also being exploited by hackers as an unsecured way into retailers’ networks. Once inside, they can install malware on point-of-sale (POS) devices and plant a rogue Bluetooth transmitter. Then, using their own mobile devices, they can download the collected information as they walk by undetected, from up to 300 feet away.
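As one illustration of a defensive countermeasure, the sketch below uses the third-party bleak library to scan for nearby Bluetooth LE devices and flag any address that is not on an approved list of known hardware. It is a simplification: the allowlist is hypothetical, and a rogue transmitter using classic (non-LE) Bluetooth would require a different scanner.

```python
# Minimal sketch: scan for Bluetooth LE devices and flag anything not on an
# approved list (e.g. the store's known card readers). Assumes the
# third-party bleak library (pip install bleak); addresses are placeholders.
import asyncio

from bleak import BleakScanner

APPROVED_DEVICES = {"AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"}  # known POS kit

async def scan_for_rogues():
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        if device.address not in APPROVED_DEVICES:
            # In production this would raise an alert for investigation.
            print("Unrecognised Bluetooth device:", device.address, device.name)

asyncio.run(scan_for_rogues())
```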

read more

Getting the Most Out of Your SDN By @Riverbed | @CloudExpo [#Cloud]

Fundamentally, SDN is still mostly about network plumbing. While plumbing may be useful to tinker with, what you can do with your plumbing is far more intriguing. A rigid interpretation of SDN confines it to Layers 2 and 3, and that’s reasonable. But SDN opens opportunities for novel constructions in Layers 4 to 7 that solve real operational problems in data centers. “Data center,” in fact, might become anachronistic – data is everywhere, constantly on the move, seemingly always overflowing. Networks move data, but not all networks are suitable for all data.
In his session at 15th Cloud Expo, Steve Riley, Technical Leader in the Office of the CTO at Riverbed Technology, will discuss how finding (or building) the right network, with the right applications, is still a labor-intensive task. Must it always be this way? No: for networks will soon be expressed as code. Finally, the data, the applications that process it, the networks that move it and the objects that store it can all be described by software constructs – let’s call this collection a super-blob – in the hands of skilled developers. Freed from their dependence on any given location, super-blobs can move around as necessary, resting on any physical fabric that can satisfy their requirements. As requirements change, locations may change – while preserving all application states. Location-independent computing is within our grasp.
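To make the “super-blob” idea concrete, here is a purely hypothetical Python sketch of data, application and network requirements expressed as one software construct. Every class and field is invented for illustration; no real SDN API is implied.

```python
# Hypothetical sketch of a "super-blob": data, the application that processes
# it, and the network requirements that move it, described together as code.
# Everything here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class NetworkRequirements:
    min_bandwidth_mbps: int
    max_latency_ms: int
    encrypted_transport: bool = True

@dataclass
class SuperBlob:
    name: str
    data_uri: str
    app_image: str
    network: NetworkRequirements
    locations: list = field(default_factory=list)  # fabrics it currently rests on

    def can_run_on(self, bandwidth_mbps: int, latency_ms: int) -> bool:
        # Location independence: the blob may move to any fabric that
        # satisfies its requirements, preserving application state.
        return (bandwidth_mbps >= self.network.min_bandwidth_mbps
                and latency_ms <= self.network.max_latency_ms)

blob = SuperBlob(
    name="orders-pipeline",
    data_uri="s3://orders/2014-q3",
    app_image="registry.example.com/orders:1.2",
    network=NetworkRequirements(min_bandwidth_mbps=500, max_latency_ms=20),
)
print(blob.can_run_on(bandwidth_mbps=1000, latency_ms=10))  # True
```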

read more

Cloud underpins majority of tech trends for 2015, Gartner analysts find

Picture credit: Pam Broviak/Flickr

Cloud computing is one of the 10 strategic technology trends for 2015, according to analysis from Gartner.

The findings were presented by analysts at the Gartner Symposium/ITxpo earlier this week. Gartner defines a strategic technology trend as one “with the potential for significant impact on the organisation in the next three years.”

David Cearley, vice president and Gartner Fellow, says there are three main themes in the 2015 tech trends: the merging of the real and virtual worlds; the technology impact of the digital business shift; and ‘intelligence everywhere’.

The latter point can be seen in several of the trends: computing everywhere, the Internet of Things, and smart machines. In other words, computational power is moving away from the device.

Naturally, cloud and client computing will be a key element of this. For 2015, Gartner argues, the focus will be on promoting centrally coordinated applications that can port across multiple devices.

“Cloud is the new style of elastically scalable, self-service computing, and both internal applications and external applications will be built on this new style,” said Cearley.

“While network and bandwidth costs may continue to favour apps that use the intelligence and storage of the client device effectively, coordination and management will be based in the cloud.”

This more sophisticated definition of cloud computing is in stark contrast to cloud’s position in the latest Gartner hype cycle, where it was stuck firmly in the ‘trough of disillusionment’.

Gartner has made various predictions about cloud computing in the past, with varying degrees of success. By 2015, the analyst house predicts the death of the traditional IT sourcing model, as well as a move to cloud office systems.

Other trends include software-defined apps and infrastructure, web-scale IT, advanced analytics, and 3D printing.

Read more about Gartner’s prognosis here.

Scaling for the Cloud: Interview with Seth Proctor of @NuoDB | @CloudExpo

NuoDB is all about getting customers to the cloud. It describes itself as offering “a distributed database offering a rich SQL implementation (that is) designed for the modern datacenter, and as a scale-out cloud database.”

We talked to company CTO Seth Proctor, who is also on the faculty of @CloudExpo, about the challenges the company faces in its quest. Here is what he had to say:

Cloud Computing Journal: What have you found to be the biggest challenges in migrating customers to the cloud? How much of the challenge is technical and how much is psychological?

Seth Proctor: I’d say it’s a mix. There is definitely a psychological hurdle when you’re talking about giving up control over the infrastructure in a move to the public cloud. Even if the “migration to cloud” really means moving to a service model on-premise, there’s still detail that’s harder to keep clear, especially around security and audit.

I think this is one of the scariest things about “cloud” to enterprises and it’s both psychological and technical. On the purely technical front, one of the biggest challenges is around scaling the data model. Another is how you migrate operational tasks like monitoring, backup and upgrade. Concerns about how all these pieces will work together often leave would-be migrators frozen, or upgrading in only the most minimal ways.

CCJ: One fear I’ve heard a lot about is when a company acquires another company. Just when they thought it was safe to get back in the water, a whole new legacy integration comes into their world. To what degree have your customers faced this challenge?

Seth: Our customers certainly have many technologies they have to integrate, either from acquisition or simply because of the scope and scale of their projects. One of the great things about building on a standard SQL database is that it’s a common platform that simplifies operations and integration. I think this is one of the key enablers you need if you’re going to scale out a cloud.

CCJ: Why do your customers do business with you?

Seth: Whether it’s a public cloud, on-premise services or some other view of deployment patterns, our customers are all trying to get to a cloud model.

In other words, they’re all trying to simplify, scale, automate and build on a provisioning & SLA-centric view of the world. They want distribution that guarantees resiliency and they want to run in-memory and on-demand for significant efficiencies.

At the same time they need the fundamentals that we’ve relied on for decades: transactions, security, standard APIs and portability. Basically, they need an enterprise database that takes their existing applications and experiences and scales them for the cloud. That’s what NuoDB provides them.

read more