How to get public cloud right first time and achieve hyperscale greatness


Public cloud is increasingly attractive because of its scale, global reach, agility and efficiency, making it the optimal deployment option for a growing number of workloads, applications and business solutions.

But what businesses gain in flexibility could come back to haunt them in other ways: overall cost, durability, security and supportability as the solution matures.

Getting the right people

Even if the business is notionally ready for hyperscale, organisations still have to find the right people to support their move into the public cloud. Creating environments and instances in the public cloud may be simple, but finding the right people to create secure, highly available, integrated and well-supported software solutions is not.

The temptation is to assume an organisation’s existing IT team will be able to handle any hyperscale transformation and ongoing management, but it is rarely that straightforward. It can be hard to keep skills relevant when the capabilities and services offered by hyperscalers change constantly, with over 1,000 updates and features released a year. In addition, those skills are in high demand and attract premium salaries, so businesses are frequently at risk of having their talent poached by rivals. There’s also a danger that the skills of the people who develop the solutions are misapplied to ongoing support afterwards.

Getting enough people

Even if businesses have the right people and skills, the other question they need to ask is: do they have enough of them? Hyperscale does not sit easily alongside the traditional model of a static technical workforce with siloed specialisms. Instead, organisations may need to introduce lightweight, flatter structures and processes that favour collaboration and cross-functional working. These are better suited to a DevOps approach, which goes hand in hand with hyperscale. With technical resources tuned to support the business in an increasingly fluid way, the most effective teams are frequently multi-disciplinary.

Getting them in the right place

Businesses need to be sure that, like their applications and workloads, their valuable people are also in the most appropriate ‘execution venue’ for their skillsets. Internal experts should not be diluted by ongoing infrastructure management when they are better suited to driving innovation that creates competitive edge or enhances the customer experience.

Getting a managed cloud service

The promise of hyperscale is undoubtedly extremely compelling, but many businesses find their ambitions thwarted by issues such as a lack of budget, time, technical skills, resources, confidence or vision.

A managed cloud provider (MCP) can help businesses bypass the need to create and manage an internal talent pool, and reduce the expense and overhead of ongoing training and of providing monitoring, alerting, authentication, backup and restore. The MCP can also provide best-practice architectures to accelerate solution development, along with the skills and insight to enhance custom development.
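To make the monitoring and alerting point concrete, here is a minimal sketch of the kind of health-check-and-alert loop an MCP automates on a customer’s behalf, at far greater scale and sophistication; the endpoint and webhook URLs are hypothetical placeholders, not any provider’s real API.

```python
# A minimal sketch (illustrative only) of a health-check-and-alert loop.
# ENDPOINT and ALERT_WEBHOOK are hypothetical placeholders.
import json
import time
import urllib.request

ENDPOINT = "https://app.example.com/health"    # hypothetical service to watch
ALERT_WEBHOOK = "https://alerts.example.com"   # hypothetical alert receiver


def endpoint_is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError and timeouts
        return False


def raise_alert(message: str) -> None:
    """POST a JSON alert payload to the (hypothetical) webhook."""
    req = urllib.request.Request(
        ALERT_WEBHOOK,
        data=json.dumps({"alert": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5.0)


if __name__ == "__main__":
    while True:
        if not endpoint_is_healthy(ENDPOINT):
            raise_alert(f"{ENDPOINT} failed its health check")
        time.sleep(60)  # poll once a minute
```

The value an MCP adds is running this class of tooling continuously, across many services, with someone on call to act on the alerts.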

MCPs can be especially valuable where tools and automation processes are complex, deep expertise is relatively scarce, and the price of management failure (typically around solution durability and cost control) is high and well documented.

Obligatory analogy

To use a building analogy, even with limited DIY skills, it is relatively easy to construct a garden shed (your quick-start hyperscale solution), or transfer an existing one from a neighbour’s garden (think technical lift and shift). The structure can then be used to hold gardening tools and the lawnmower (read application code and data).

Sheds are useful and fulfil a temporary or ‘good enough’ requirement for contents not important enough to merit their own space in the house. When it comes to building something long-lasting or habitable, most people would probably consider engaging an estate agent, or an architect and builder, to deliver a home with running water, electricity (monitoring and alerting), a number of separate rooms and a hall for welcoming visitors (semi-segregated areas for security).

Very few people would consider designing and building a house on their own – most don’t want to source, contract and manage all the different professionals, trades and suppliers involved. Instead, engaging an expert third party who has all the necessary technical qualifications and experience ensures things are put together correctly and operate efficiently.

It may be a laboured analogy – and it’s probably also a renter’s market – but public cloud is becoming a viable option for an ever-wider range of scenarios, and an increasing number of businesses are moving their entire digital footprint into hyperscale. So, for organisations trying to work out why their shed can’t be more like a permanent residence, it might be time to call in the professionals – before that leaky tap has a real-world impact.

IoT revenues grow to $6.7bn in Q4 2015

A new study from Technology Business Research (TBR) has found revenues in the IoT market grew to $6.7 billion over the course of Q4 2015.

The research, which focused on the industry’s largest IoT players, including AWS, GE, Google, Intel and Microsoft amongst others, highlighted strong year-on-year growth as tier one vendors aim to drive profits in a relatively open marketplace. A lack of competition, high profits and immature regulations and standards are driving IoT up the priority list for tier one vendors.

“Effectively, every type of IT and operational technology (OT) vendor will have a stake in the growing commercial IoT market, as IoT solutions will drive increased use of diverse IT and OT products and services,” said TBR Devices and IoT Analyst Dan Callahan. “In addition to building interest in established IT products, commercial IoT will create growth in specialized business consulting, hardware, network, development, management and security components.

“IT and OT vendors that are quick to capture IoT opportunities within their current customer base, and attract new ones through developer programs and investing in growing mindshare, will enjoy additional, immediate, revenue opportunities.”

The ongoing adoption of cloud computing and the increasing pressure to capitalize on the growing amount of data available to organizations were highlighted as drivers for the adoption of the technology, as customers aim to increase operational efficiency and the effectiveness of decision making. TBR believes the 21 benchmarked companies are gaining an advantage in the attractive IoT market mainly due to minimized competition. A lack of standards, together with security concerns around the technology, has set a high barrier to entry for tech companies, though there is a healthy value chain in which smaller organizations can capitalize.

North America is seen as the leading region to integrate IoT and develop an early adopter community, accounting for just over 40% of the activity. APAC and CALA represented 24.8% and 5.5% of the market, respectively, whereas EMEA accounted for the majority of the remainder.
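As a quick sanity check on those figures (treating “just over 40%” as 40% for the arithmetic), the share left over for EMEA and any remaining regions works out at just under 30%:

```python
# Quick arithmetic on TBR's regional shares, treating "just over 40%" as 40%.
north_america = 40.0
apac = 24.8
cala = 5.5

remainder = 100.0 - (north_america + apac + cala)
print(f"Left for EMEA and any other regions: {remainder:.1f}%")  # 29.7%
```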

Understanding the difference between DRaaS and backup for your business


According to a report by one of the big four auditors, KPMG, more than 40% of companies that suffer a major business disruption are unable to recover from the long-term impact of the failure and go out of business within two years.

To avoid this fate, organisations must have a solid disaster recovery strategy in place. Cloud-based disaster recovery and backup solutions are giving companies of all sizes access to business continuity capabilities that were once only available to enterprises with large IT footprints. However, many teams are confused about the difference between disaster recovery as a service (DRaaS) and backup. As a result, companies are settling for inadequate protection or spending more money than necessary on solutions they do not actually need.

The bank account analogy

I’ve found it’s helpful to use an analogy to explain the difference. Let’s pretend your IT systems are actually a bank account. Now, if you find yourself with a zero balance in your bank account you’re in a dark place. However, let’s pretend you didn’t know it and wrote one more cheque to, let’s say, the paper boy (it’s the 90s in this analogy so I can say paper boy).

The backup system is like having £1000 under your mattress. This is what happens. The cheque bounces; you incur a £30 fee; the paper boy is annoyed; you hand him some cash next time he delivers the paper and he recovers.

So, you are safe. You keep getting your ever-critical morning paper but it hurt a bit and took a few days to resolve.

If this were an IT system, maybe you’d have to find/buy new hardware, set it up, find the backup and bring the systems up from the backup copy. All this would take a few days, at a minimum, and all the while you’d lose some revenue due to systems being down and a bunch of people would be pretty peeved. But, at the end of the day, you wouldn’t lose all your data. It was backed up.

A disaster recovery system, on the other hand, is like an overdraft account. This is what happens. The cheque clears and the paper boy is none the wiser. Your bank extends a ‘loan’ to you in the form of an overdraft account, for which you may pay a bit of interest. You can rectify the situation with funds from elsewhere at your earliest convenience.

So, you are also safe and there was no discontinuity of service. It may have cost a bit, but not a lot. By and large, no one really knew anything went wrong.

If this were an IT system, your failover would happen in seconds or minutes to a cloud-based target environment where the workloads would hum along as though nothing happened. Until you were ready to fail-back you would pay a relatively small fee for the resources you used. No one would be angry about lost revenue or broken systems.
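As a rough illustration only (not any vendor’s actual DRaaS mechanism), the watchdog logic behind that fast failover looks something like the sketch below; the primary endpoint and the activate_replica step are hypothetical placeholders for whatever the provider’s tooling actually does.

```python
# Illustrative-only failover watchdog: if the primary site stops answering,
# switch traffic to a standing cloud replica. A real DRaaS product handles
# replication, DNS/network switching and fail-back for you; the names here
# are hypothetical.
import time
import urllib.request

PRIMARY = "https://primary.example.com/health"  # hypothetical primary site
FAILURES_BEFORE_FAILOVER = 3                    # avoid flapping on one blip


def primary_is_up(timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(PRIMARY, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def activate_replica() -> None:
    # Placeholder: in a real DRaaS setup this is the provider's failover
    # API call or a DNS switch to the cloud target environment.
    print("Failing over to the cloud replica...")


failures = 0
while failures < FAILURES_BEFORE_FAILOVER:
    failures = 0 if primary_is_up() else failures + 1
    time.sleep(10)  # check every ten seconds

activate_replica()
```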

Making the right choice for your business

Now that the difference between the two is clear, let’s come back to the present day. Which solution do you go with to protect your business? For workloads where downtime is deadly, DRaaS is the way to go. For workloads where you’d hate to lose the data, but it isn’t critical stuff (like development systems), backup may be sufficient. For many operational and regulatory compliance reasons, customers may do both.
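That decision rule can be sketched as follows; the 60-minute downtime threshold is an illustrative assumption rather than an industry standard, and real choices also weigh cost and regulatory factors.

```python
# Toy decision rule for the choice described above. The 60-minute threshold
# is an illustrative assumption, not an industry standard.
def protection_for(max_downtime_minutes: float, business_critical: bool) -> str:
    """Suggest a protection approach for a workload."""
    if business_critical and max_downtime_minutes < 60:
        return "DRaaS (downtime is deadly)"
    if not business_critical:
        return "backup (losing data hurts, but downtime is tolerable)"
    return "blended: DRaaS for continuity plus backup for compliance"


print(protection_for(max_downtime_minutes=5, business_critical=True))
print(protection_for(max_downtime_minutes=1440, business_critical=False))
```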

Increasingly we are finding that many of our customers are looking at a blended solution. The nice thing is that you can easily configure and deploy both, while making use of an exceptional global cloud infrastructure that provides access to off-site alternatives at a fraction of the cost of building another data centre. Additionally, with bonuses like free bandwidth, no setup fees, encrypted storage and included technical support, this is a solution that you can easily sell to your business.

Any extended loss of productivity can lead to a reduced cash flow through late invoicing, lost orders and increased costs as staff work extra hours to recover from the downtime, missed delivery dates and so on. After all, to go back to my analogy, no one wants to deal with an irate paper boy – or even worse – an irate executive.

Rackspace extends Azure Fanatical Support footprint to Europe

Rackspace has announced the general availability of its Fanatical Support services for Microsoft Azure customers in the UK, Benelux and DACH regions, as well as two new service levels, Navigator and Aviator.

Fanatical Support was previously available in US markets, though the expansion brings the Azure service in line with the company’s other offerings, such as that for Amazon Web Services. The Navigator service level offers access to tools and automation, whereas Aviator goes further to offer a fully managed Azure experience, providing increased man-hours, custom architecture design and year-round support, as well as performing environment build and deployment activities.

“It’s been nearly a year since Rackspace announced Fanatical Support for Microsoft Azure, which we launched to assist customers who want to run IaaS workloads on the powerful Azure cloud, but prefer not to architect, secure and operate them first-hand,” said Jeff DeVerter, Chief Technologist for Microsoft Technology at Rackspace.

“Our launch of this offering marked an important expansion of our strategy to offer the world’s best expertise and service on industry-leading technologies, and is a natural progression of our 14-year relationship with Microsoft.”

As part of the announcement, Rackspace confirmed Help for Heroes would be one of the first UK organizations to utilize the new offering. The charity has been using the Azure platform for some time as a means to counter website downtime during periods of high traffic volume around fundraising campaigns.

“Being able to scale up quickly is important, but so is scaling down during times that are quieter,” said Charles Bikhazi, Head of Application Services at Help for Heroes. “As with any charity, we’re always looking to make cost savings where possible and that’s exactly what this solution has delivered. Now, we only pay for infrastructure that’s actually being used which ensures that costs don’t spiral out of control. The new offering gives us access to this much needed scalability and resilience without the burden of having to run the platform ourselves.”
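The scale-up, scale-down behaviour Bikhazi describes comes down to rules like the sketch below; Azure’s autoscale settings express this declaratively, and the thresholds and instance bounds here are illustrative assumptions.

```python
# Illustrative autoscaling rule of the kind Azure autoscale settings apply
# declaratively: add capacity during fundraising traffic spikes, shed it in
# quiet periods so you only pay for what is used. Thresholds and bounds are
# illustrative assumptions.
def desired_instances(current: int, cpu_percent: float,
                      minimum: int = 2, maximum: int = 20) -> int:
    if cpu_percent > 75 and current < maximum:
        return current + 1  # scale out under load
    if cpu_percent < 25 and current > minimum:
        return current - 1  # scale in when quiet, saving cost
    return current          # steady state


print(desired_instances(current=2, cpu_percent=90))  # -> 3 during a campaign
print(desired_instances(current=6, cpu_percent=10))  # -> 5 once it quietens
```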

Accenture and IPsoft team up to launch AI initiative

Accenture has expanded its partnership with IPsoft to accelerate the adoption and implementation of artificial intelligence technologies.

As part of the relationship, the team will launch the Accenture Amelia Practice, a new consulting arm for Accenture which will develop go-to-market strategies using IPsoft’s product offering to build virtual agent technology for customers. In the first instance, the team will target the banking, insurance and travel industries.

“Artificial intelligence is maturing rapidly and offers great potential to reshape the way that organisations conduct business and interact with their customers and employees,” said Paul Daugherty, Accenture’s CTO. “At the same time, executives are overwhelmed by the plethora of technologies and many products that are advertising AI or cognitive capabilities.”

“With our new Accenture Amelia practice, we are taking an important step forward in advancing the business potential of artificial intelligence by combining IPsoft’s world-class virtual agent platform with Accenture’s broad technology capabilities and industry experience to help clients transform their business and operations.”

The extended partnership will focus on creating practical implementations of AI within the current business world, using automation at scale to increase organizational efficiencies. The IPsoft team has implemented the same concept with a number of customers, including programs to answer invoicing queries from suppliers and front-line customer service bots.

Artificial intelligence is seemingly one of a number of new areas being prioritized by the Accenture team, as the industry continues to trend towards a more digitally enabled ecosystem. Recent research highlighted that the digital economy accounted for roughly 22% of the world’s total economy, with this figure predicted to rise to 25% by 2020; it was as low as 15% in 2005. The same research also predicts that the growth of new technology will continue to accelerate, with 28% of respondents believing the pace of change will increase “at an unprecedented rate”.

While Accenture’s business has predominantly been focused on traditional IT to date, the team’s future business will shift slightly towards disruptive technologies, building on its new business mantra ‘Every Business is a Digital Business’. AI is one of those prioritized disruptions; Accenture has described artificial intelligence and intelligent automation as the “essential new co-worker for the digital age”.

It would appear Accenture is betting heavily on these new technologies: it claims 70% of executives are making significantly more investments in artificial intelligence technologies than they did in 2013, and 55% state that they plan to use machine learning and embedded AI solutions (like Amelia) extensively.

The Dangers of Cloud Storage By @tofly4wifi | @CloudExpo #Cloud

Today nearly all of us have our information stored in the cloud. It’s a very easy solution that allows users to seamlessly create back-ups of photos, contacts and other personal information, giving users access to their accounts anywhere from any device. Perhaps its most prized feature is that it has no storage limits, unlike mobile devices and PCs.

There is, however, a downside to cloud services. Although the cloud is useful for storing data, it can also be the reason data is lost or exposed. Recently, the celebrity iCloud hack went to trial. The hacker admitted he acquired the credentials by spear phishing his victims, and once he had them, all the data they stored in the cloud, whether intentionally or not, was exposed and later posted online.


Cloud Native Applications | @CloudExpo #BigData #DataLake #Microservices

As the cloud becomes more of a norm in enterprise computing, enterprises now have to deal with the issue of how to ensure that their applications effectively use the attributes of the cloud. Monolithic applications of the previous era continue to be migrated to the cloud using a lift-and-shift approach or with minimal changes; they do benefit from certain attributes of the cloud, such as availability and manageability, but a new set of application architectures is also emerging: ‘cloud native applications.’


Struggling to Scale Agile? | @DevOpsSummit #DevOps #Agile #DigitalTransformation

Small teams are more effective. The general agreement is that anything from 5 to 12 is the ‘right’ small. But of course small teams will also have ‘small’ throughput – relatively speaking. So if your demand is X and the throughput of a small team is X/10, you probably need 10 teams to meet that demand. But more teams also mean more effort to coordinate and align their efforts in the same direction. So, the challenge is how to harness the power of small teams and yet orchestrate multiples of them to get higher throughput.

In the context of enterprise Agile, this is especially critical.
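A back-of-the-envelope version of that trade-off, following the X and X/10 example above: team count scales delivery linearly, while the coordination paths between teams grow roughly quadratically.

```python
# Back-of-the-envelope arithmetic for scaling small teams, following the
# X / (X/10) example above: throughput adds up linearly, but pairwise
# coordination paths between teams grow roughly quadratically.
import math


def teams_needed(demand: float, team_throughput: float) -> int:
    return math.ceil(demand / team_throughput)


def coordination_paths(teams: int) -> int:
    return teams * (teams - 1) // 2  # every pair of teams may need aligning


X = 100.0                      # total demand, in the article's terms
n = teams_needed(X, X / 10)    # each small team delivers X/10
print(n, "teams,", coordination_paths(n), "pairwise coordination paths")
# -> 10 teams, 45 pairwise coordination paths
```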


Six DevOps Case Studies | @DevOpsSummit #Agile #DevOps #Microservices

Admittedly, two years ago I was a bulk contributor to the DevOps noise, with conversations rooted in the movement around culture, principles, and goals. And while all of these elements of DevOps environments are important, I’ve found that the biggest challenge now is a lack of understanding as to why DevOps is beneficial. It’s getting the wheels going, or just taking the next step. The best way to start on the road to change is to take a look at the companies that have already made great headway into modern software delivery. There is no one-size-fits-all DevOps, but there are existing implementations which contain a treasure trove of tips and tricks, and sometimes even direct implementation strategies for rugged DevOps.


Tips for Data Scientists | @CloudExpo #BigData #IoT #DigitalTransformation

I spend a lot of time helping organizations to “think like a data scientist.” My book “Big Data MBA: Driving Business Strategies with Data Science” has several chapters devoted to helping business leaders embrace the power of data scientist thinking. My Big Data MBA class at the University of San Francisco School of Management focuses on teaching tomorrow’s business executives the power of analytics and data science to optimize key business processes, uncover new monetization opportunities and create more compelling customer and channel engagement.
