AWS turns 10: A dominant market share, but does that tell the full story?


Today marks a notable milestone in the history of cloud computing, as Amazon Web Services (AWS) turns 10 years old.

On March 14, 2006, a press release went out on the wire from Amazon announcing “a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low costs.” A decade on, AWS continues to hold a significant lead over the competition in infrastructure as a service (IaaS), and while it is moving into other areas, such as gaming with Lumberyard and (if a recent job posting is to be believed) virtual reality, its core cloud operation shows no signs of slowing.

Werner Vogels, Amazon CTO, described some of the lessons AWS has learned over 10 years in a blog post. Building evolvable systems, expecting the unexpected – everything will fail over time, despite the best laid plans – and building software services for automation are all key, he argued.

“There are hundreds of lessons that we’ve learned about building and operating services that need to be secure, reliable, scalable, with predictable performance at the lowest possible cost,” he wrote. “With over a million active customers per month, who in turn may serve hundreds of millions of their own customers, there is no lack of opportunities to gain more experience and perhaps no better environment for continuous improvement in the way we serve our customers.”

James Hamilton, AWS senior principal engineer, moved to Amazon in 2008 after a “game changing” experience with AWS at his previous employer. Citing Netflix’s decision to go all-in on cloud in 2010 as key, Hamilton wrote: “The best proof of innovation is customer commitment and, without a doubt, the highest form of customer commitment is to decide to run the entire company on cloud infrastructure.”

So how does the rest of the industry see the history of AWS? Ian Moyse, a regular contributor to CloudTech and board member of Eurocloud and the Cloud Industry Forum, argues AWS has not only disrupted the landscape, it has dragged other companies – such as Microsoft – along with it more quickly than expected.

“In recent years AWS has really disrupted the cloud world, driving Microsoft into cloud deeper and quicker than they likely would have done and driving pricing down and functionality up in what has come to be known as the race to zero,” he said. “For providers and customers, AWS has driven more affordable compute, enabled innovation and empowered faster DevOps. Many apps we have today may not have existed if it were not for the change AWS has brought upon the IT sector.”

Research on the state of the cloud infrastructure market has, unsurprisingly, treated AWS as the darling of the industry. Synergy Research publishes a quarterly analysis of the market, and the current trend is that while other vendors, such as Microsoft and Google, are growing slightly faster than Amazon year over year, they are barely making a dent in Amazon’s 30% global market share.

Yet fellow analyst house Cloud Spectator argues AWS ranks poorly compared to smaller, more niche players. “We see many smaller players find an advantage by offering high-performance infrastructure at a very competitive price,” CEO Kenny Li told CloudTech. “The [larger] providers offer the size and additional services for handling massive customer volume, but the volume also comes with additional performance considerations which may result in lower performance on cloud servers.”

For Moyse, the Cloud Spectator report represents just one dynamic of the market, and there’s room for vendors of all shapes and sizes. “A wide range of cloud providers will continue to be selected by customers, some offering local relationship to the smaller business and feeling the goliath brands leave them uncomfortable, some choosing for support reasons where they lack in-house skills in the new services and are unable to afford them against higher paying enterprises sapping the talent pool,” he said.

“AWS and Azure, whilst very attractive commercially and easy to spin up, do need some knowledge and experience to gain the outcome and quality businesses require,” Moyse added. “Much like Salesforce has done, we can expect a services ecosystem to develop around these platforms to advise and support smaller businesses in utilising them.”

Vogels argued there will inevitably be the occasional bump in the road. But, as he finished his missive, “remember it is still day one.” For AWS, the question now, with a burgeoning ecosystem, is whether the coming decade can be as successful as the first.

Head in the clouds? What to consider when selecting a hybrid cloud partner

The benefits of any cloud solution rely heavily on how well it’s built and how much advance planning goes into the design. Developing any organisation’s hybrid cloud infrastructure is no small feat, as there are many facets, from hardware selection to resource allocation, at play. So how do you get the most from your hybrid cloud provider?

Here are six important considerations to make when designing and building out your hybrid cloud:

  1. Right-sizing workloads

One of the biggest advantages of a hybrid cloud service is the ability to match IT workloads to the environments that best suit them. You can build out hybrid cloud solutions with incredible hardware and impressive infrastructure, but if you don’t tailor your IT infrastructure to the specific demands of your workloads, you may end up with performance snags, improper capacity allocation, poor availability or wasted resources. Dynamic or more volatile workloads are well suited to the hyper-scalability and speedy provisioning of hybrid cloud hosting, as are any cloud-native apps your business relies on. Performance workloads that require higher IOPS (input/output operations per second) and CPU utilisation are typically much better suited to a private cloud infrastructure if they have elastic qualities or self-service requirements. More persistent workloads almost always deliver greater value and efficiency on dedicated servers in a managed hosting or colocation environment. Another key benefit of choosing a hybrid cloud configuration is that the organisation only pays for extra compute resources as required.
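These rules of thumb can be captured in a short sketch. The workload attributes and environment labels below are illustrative assumptions, not a prescriptive model; real right-sizing would weigh measured utilisation, cost and availability data:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    volatile: bool            # bursty, unpredictable demand
    cloud_native: bool
    high_iops: bool           # performance-sensitive storage/CPU profile
    needs_self_service: bool
    persistent: bool          # steady, always-on demand

def recommend_environment(w: Workload) -> str:
    """Map a workload profile to a hosting environment, paraphrasing
    the article's guidance. Illustrative only."""
    if w.high_iops and w.needs_self_service:
        return "private cloud"           # performance plus elasticity/self-service
    if w.volatile or w.cloud_native:
        return "public cloud"            # hyper-scalability, speedy provisioning
    if w.persistent:
        return "dedicated / colocation"  # steady workloads: best value on dedicated kit
    return "private cloud"               # default: review case by case
```

For example, a bursty cloud-native web tier lands in public cloud, while an always-on line-of-business system lands on dedicated servers.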

  2. Security and compliance: securing data in a hybrid cloud

Different workloads may also have different security or compliance requirements, which dictate a certain type of IT infrastructure hosting environment. For example, your most confidential data shouldn’t be hosted in a multi-tenant environment, especially if your business is subject to Health Insurance Portability and Accountability Act (HIPAA) or PCI compliance requirements. This might seem obvious, but when right-sizing your workloads, don’t overlook what data must be isolated, and be sure to encrypt any data you opt to host in the cloud. Whilst cloud hosting providers can’t provide your compliance for you, most offer an array of managed IT security solutions. Some even offer a third-party-audited Attestation of Compliance to help you document for auditors how their best practices validate against your organisation’s compliance needs.

  3. Data centre footprint: important considerations

There are myriad reasons an organisation may wish to outsource its IT infrastructure: shrinking its IT footprint, driving greater efficiencies, securing capacity for future growth, or simply streamlining core business functions. The bottom line is that data centres require massive capital expenditure to build and maintain, and legacy infrastructure becomes obsolete over time. This can place a huge upfront capital strain on any mid-to-large-sized business’s expenditure planning.

But data centre consolidation takes discipline, prioritisation and solid growth planning. The ability to migrate workloads to a single, unified platform consisting of a mix of cloud, hosting and data centre colocation provides your IT Ops with greater flexibility and control, enabling a company to migrate workloads on its own terms and with a central partner answerable for the result.

  4. Hardware needs

For larger workloads, should you host on-premises, in a private cloud, or through colocation, and what performance do you need from your hardware suppliers? A truly hybrid IT outsourcing solution enables you to deploy the best mix of enterprise-class, brand-name hardware that you either manage yourself or consume fully managed from a cloud hosting service provider. Performance requirements, configuration characteristics, your organisation’s access to specific domain expertise (in storage, networking, virtualisation, etc.) and the state of your current hardware often dictate the infrastructure mix you adopt. It may be the right time to review your inventory and decommission any hardware reaching end of life. Document the server decommissioning and migration process thoroughly to ensure no data is lost mid-migration, and follow your lifecycle plan through to completion.

  5. Personnel requirements

When designing and building any new IT infrastructure, it’s easy to get so caught up in the technology that you forget about the people who manage it. With cloud and managed hosting, you benefit from your provider’s expertise and their SLAs – so you don’t have to dedicate your own IT resources to maintaining those particular servers. This frees up valuable bandwidth, so your staff can focus on tasks core to business growth, or train for the skills they’ll need to handle the trickier configuration issues you introduce to your IT infrastructure.

  6. When to implement disaster recovery

A recent study by Databarracks found that 73% of UK SMEs have no proper disaster recovery plan in place in the event of data loss, so it’s well worth considering what your business continuity plan is in the event of a sustained outage. Building in redundancy and failover as part of your cloud environment is an essential part of any defined disaster recovery service.

For instance, you might wish to mirror a dedicated server environment on cloud virtual machines – paying a small storage fee to house the redundant environment, but only paying for compute if you actually have to fail over. That’s just one of the ways a truly hybrid solution can work for you. When updating your disaster recovery plans to accommodate your new infrastructure, it’s essential to determine your Recovery Point Objective and Recovery Time Objective (RPO/RTO) on a workload-by-workload basis, and to design your solution with those priorities in mind.
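One way to make the workload-by-workload RPO/RTO exercise concrete is a simple tier table. The tiers, figures and strategies below are placeholder assumptions for illustration, not recommendations:

```python
# Hypothetical DR tiers: each maps to an (RPO, RTO, strategy) triple.
# The values are illustrative placeholders only.
DR_TIERS = {
    "mission-critical": ("near-zero", "< 1 hour", "active mirror on cloud VMs"),
    "important":        ("4 hours",   "8 hours",  "replicated storage, failover compute"),
    "standard":         ("24 hours",  "72 hours", "nightly backup restore"),
}

def dr_plan(workloads: dict) -> dict:
    """Assign each workload its (RPO, RTO, strategy) triple by tier.

    `workloads` maps a workload name to one of the tier keys above.
    """
    return {name: DR_TIERS[tier] for name, tier in workloads.items()}
```

A billing system tiered as mission-critical would get the mirrored-VM strategy, while an intranet tiered as standard would rely on nightly backups.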

Written by Annette Murphy, Commercial Director for Northern Europe at Zayo Group

Google’s AlphaGo publicity stunt raises profile of AI and machine learning

World Go champion Lee Se-dol beat AlphaGo, an AI program developed by Google’s DeepMind unit, this weekend, though he still trails the program 3-1 in the series.

Google’s publicity stunt highlights the progress which has been made in the world of artificial intelligence and machine learning, as commentators had predicted a runaway victory for Se-dol.

DeepMind founder Demis Hassabis commented on Twitter: “Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now…” That confusion allowed Se-dol to win the fourth game of the five-game series. While the stunt demonstrates the potential of machine learning, Se-dol’s consolation victory proves that the technology is still capable of making mistakes.

The complexity of the game presented a number of problems for the DeepMind team, as traditional machine learning techniques would not enable the program to be successful. Traditional AI methods, which construct a search tree over all possible positions, would have required too much compute power due to the vast number of permutations within the game. The game is played primarily through intuition and feel, presenting a complex challenge for AI researchers.
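A quick back-of-the-envelope calculation, using commonly cited rough figures of about 250 legal moves per position over a roughly 150-move game, shows why an exhaustive search tree is infeasible:

```python
# Rough branching-factor arithmetic for Go: ~250 legal moves per
# position, over a game of ~150 moves. The count of continuations is
# astronomically beyond what any search tree could enumerate.
positions = 250 ** 150
print(len(str(positions)))  # -> 360: a 360-digit number of continuations
```

For comparison, the number of atoms in the observable universe is usually put at around 10^80, a number with a mere 81 digits.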

The DeepMind team created a program that combined an advanced tree search with deep neural networks, enabling the program to play thousands of games against itself. These games allowed the machine to readjust its behaviour – a technique called reinforcement learning – and improve its performance day by day. This technique allows the machine to play human opponents in its own right, as opposed to merely mimicking other players it has studied. Commentators who have watched all four games have repeatedly questioned whether some of the moves put forward by AlphaGo were mistakes or simply unconventional strategies devised through reinforcement learning.
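As a toy illustration of the self-play idea (not AlphaGo’s actual algorithm, which pairs Monte Carlo tree search with deep networks), the sketch below learns a value table for a trivial game of Nim purely by playing against itself, nudging its estimates after every move:

```python
import random

def self_play_nim(episodes=5000, epsilon=0.1, alpha=0.2, max_pile=10, seed=0):
    """Learn simple Nim (take 1 or 2 stones; taking the last stone wins)
    by self-play. V[n] estimates the win probability for the player to
    move with n stones left. Illustrative reinforcement learning only."""
    rng = random.Random(seed)
    V = {n: 0.5 for n in range(1, max_pile + 1)}
    V[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        n = rng.randint(1, max_pile)
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < epsilon:            # occasional exploration
                move = rng.choice(moves)
            else:                                 # greedy: leave opponent the worst position
                move = min(moves, key=lambda m: V[n - m])
            target = 1.0 - V[n - move]            # opponent's prospects, flipped
            V[n] += alpha * (target - V[n])       # incremental value update
            n -= move
    return V

def best_move(V, n):
    """Pick the move leaving the opponent the lowest-valued position."""
    return min((m for m in (1, 2) if m <= n), key=lambda m: V[n - m])
```

After a few thousand self-play games the table recovers the classic Nim strategy: leave your opponent a multiple of three stones.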

Although the AlphaGo program demonstrates progress as well as an alternative means to build machine learning techniques, the defeat highlights that AI is still fallible; there is still some way to go before AI will become the norm in the business world.

In other AI news, Microsoft has also launched its own publicity stunt, through Minecraft. The AIX platform allows computer scientists to use the world of Minecraft as a test bed to improve their own artificial intelligence projects. The platform is currently available to a small number of academic researchers, though it will be made available via an open-source licence during 2016.

Minecraft appeals to the mass market due to the endless possibilities offered to users; however, the open-ended nature of the game also lends itself to artificial intelligence research. From searching an unknown environment to building structures, the platform offers researchers an open playing field to build custom scenarios and challenges for an artificial intelligence agent.

Aside from the limitless environment, Minecraft also offers researchers a cheaper alternative. In a real-world environment, researchers may deploy a robot in the field, where any mishap may damage the robot itself. For example, should the robot be unable to navigate around a ditch, the result could be costly repairs or even replacing the robot entirely. Falling into a ditch in Minecraft simply means restarting the game and the experiment.

“Minecraft is the perfect platform for this kind of research because it’s this very open world,” said Katja Hofmann, lead researcher at the Machine Learning and Perception group at Microsoft Research Cambridge. “You can do survival mode, you can do ‘build battles’ with your friends, you can do courses, you can implement your own games. This is really exciting for artificial intelligence because it allows us to create games that stretch beyond current abilities.”

One of the main challenges the Microsoft team are aiming to address is the process of learning and addressing problems. Scientists have become very efficient at teaching machines to do specific tasks, though decision making in new situations is the next step in the journey. This “General Intelligence” is more similar to the complex manner in which humans learn and make decisions every day. “A computer algorithm may be able to take one task and do it as well or even better than an average adult, but it can’t compete with how an infant is taking in all sorts of inputs – light, smell, touch, sound, discomfort – and learning that if you cry chances are good that Mom will feed you,” Microsoft highlighted in its blog.

The key steps when migrating from a physical environment to the cloud


As companies continue to step up the pace of migration to the cloud, many find themselves having to bridge the gap between their physical and cloud infrastructures, which brings new challenges. IT has applications it regards as critical, but lines of business have their own application requirements that they regard as equally critical. As a result, IT often ends up overloaded with requests, and as it attempts to satisfy demand with more resources, this can lead to over-provisioning of cloud infrastructure and runaway costs.

As more workloads move to the cloud, IT management must evolve and adopt a new approach to delivering infrastructure, applications and end user access. Indeed, IT needs to shed its attachment to physical infrastructure and start presenting itself as a service provider, giving internal customers access to flexible resources on demand to support digital business initiatives. That, however, is easier said than done, and for many organisations migrating a physical environment into the cloud is extremely daunting.

In our experience, a good way to begin cloud migration is to use the cloud for net new workloads. By running new, often non-mission-critical workloads in the cloud, the operational team can grow accustomed to managing the cloud, understand performance metrics, and gain confidence in estimating costs. Moving workloads gradually helps IT acclimatise to the new way of working and ensures it understands the environment it is working in before migrating major workloads.

We often get asked what type of applications organisations should move to the cloud. This very much depends on the organisation, but our advice is to take an inventory of your data and applications, and then decide which are most important to host in the cloud. Assess and tier them according to business criticality, considering how much of your environment is virtual and how much is physical, and then identify any critical components, such as specific networking requirements or physical systems. Because of the complexity of IT infrastructure, many companies lack knowledge and visibility of the assets they have and use, so understanding what you have before deciding what to move is really important.
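The inventory-and-tier advice can be sketched as a simple grouping of applications into migration waves. The wave rules below are illustrative assumptions only (virtual, less-critical apps first; physical systems last), not a definitive methodology:

```python
def plan_migration_waves(inventory):
    """Group an application inventory into migration waves.

    `inventory` is a list of dicts with 'name', 'virtual' (bool) and
    'criticality' (1 = most critical). Illustrative only: a real plan
    would also weigh networking dependencies and compliance needs.
    """
    waves = {1: [], 2: [], 3: []}
    for app in inventory:
        if app["virtual"] and app["criticality"] >= 3:
            waves[1].append(app["name"])   # easy wins: virtual, low criticality
        elif app["virtual"]:
            waves[2].append(app["name"])   # virtual but business-critical
        else:
            waves[3].append(app["name"])   # physical systems need special handling
    return waves
```

Running the plan over a small inventory puts a low-criticality virtual blog in wave 1 and a physical billing system in the final wave.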

In many cases, migrating virtual and less mission-critical applications first feels like the swiftest path to initial success. However, understanding the full range of applications you’re ultimately migrating will help you select a provider best positioned to address your needs. Using cloud-based disaster recovery services is also a good way to start out and become comfortable with cloud operations – particularly if your cloud service provider enables self-service DR management and testing.

Physical systems are often the most challenging to migrate to the cloud. They are usually holdovers from an earlier era, kept in the IT environment because they are necessary and critical to business operations. There are, however, instances when moving these legacy systems to the cloud benefits the business.

Alongside this, moving workloads to the cloud can mean losing visibility into performance metrics, long-term history and even costs. This can greatly increase the burden of managing your cloud workloads and introduce fear around billing, costs and performance. While cloud migration is considered to have low set-up costs, ongoing management is equally critical to your cloud decision. Working with your cloud provider to maintain transparency and visibility of these systems is important to keep business costs and the IT budget running as normal.

Your cloud service provider should provide support and training to ease cloud migration. Companies considering cloud need advisory and architectural services to help them through this transition. However, many providers fall short on the basic onboarding and support processes they offer customers as part of cloud deployments.

You should ensure you understand the levels of onboarding training and ongoing customer support for your cloud offering. This goes beyond self-help resources, knowledge bases or message boards. Make sure you understand the additional costs and the SLAs you are entering into. Equally, as you look to grow your cloud services, look for a provider that offers support over the phone.

In summary, cloud-based applications offer many benefits, including the ability to scale IT resources when needed, quickly launch new apps and ensure a high level of performance at all times – not to mention, if you have a disaster recovery programme in place, keeping the business running if disaster strikes. Applications in the cloud provide a stable, scalable infrastructure on demand and give IT employees the freedom to focus on more strategic initiatives that drive digital transformation across the business.

Read more: Analysing security and regulatory concerns with cloud app migration

IBM announces $200 million Indosat Ooredoo cloud deal

IBM has announced a five-year, $200 million contract with Indosat Ooredoo to develop and deliver solutions on IBM’s cloud platform, Bluemix.

As part of the deal, IBM and Indosat Ooredoo will build an integrated command centre to serve local clients of both organizations. The move forms part of IBM’s expansion plans for the cloud business, which coincides with the company’s recent win in South Africa, where it will open its first cloud data centre in the country.

“This collaboration shows how IBM’s expertise, technology and services can help Indosat Ooredoo and Lintasarta lead market change in Indonesia while also transforming their existing operations,” said Martin Jetter, SVP, IBM Global Technology Services.

Indonesia’s telco and technology market has been growing rapidly in recent years. Smartphone growth has been healthy in the world’s fourth most populous country, with smartphone penetration as a share of total mobile phones expected to reach 53% in 2017, up from an estimated 24% in 2013. There is still huge potential for growth: smartphone penetration in China was already estimated at around 71% in 2013.

Outside of Indonesia, the Asia-Pacific region is expected to be a significant growth area for the cloud industry. Market research firm IDC estimates that by 2018, more than 70% of enterprise organizations in the region will access public cloud IaaS and SaaS capabilities via aggregation hubs.

Jakarta has regularly been cited as the city producing the largest number of tweets per day, though this is not solely down to consumers. Businesses regularly use social media, most notably Twitter, to communicate with their customers, more so than in western markets.

“Use of smart mobile devices is becoming pervasive, opening up enormous opportunities for local businesses – so we are excited to be working with Indosat Ooredoo and Lintasarta to help clients tap into the power and flexibility of cloud-based solutions and digitally transform their businesses,” Jetter said.

Indosat Ooredoo’s subsidiary, Lintasarta, will jointly develop and deliver cloud-based solutions with IBM, accelerating collaboration and automation of software delivery and infrastructure changes. Customers of the telco will also have access to IBM’s cloud-based enterprise mobility management platform.

“We will be able to bring a greater range of higher value services to market more rapidly, with the confidence of knowing that we are collaborating with one of the world’s largest and most innovative technology companies,” said Alexander Rusli, President and CEO of Indosat Ooredoo. “This landmark alliance will reshape the local market and help Indonesian customers and organizations tap into the most advanced technology available anywhere in the world.”

Alongside the deal, IBM has also announced that it will open its first cloud data centre in South Africa. Working in collaboration with Gijima and Vodacom, the move aims to support cloud adoption and customer demand across the African continent.

“Our new Cloud Data Center gives customers a local onramp to IBM Cloud services including moving mission critical SAP workloads to the cloud with ease,” said Hamilton Ratshefola, IBM Country GM in South Africa. “It also gives customers the added flexibility of keeping data within country which is a key differentiator for IBM.”

The announcement adds to IBM’s growth on the continent, where it currently has a presence in at least 24 countries. IBM has highlighted that Africa is a substantial market for future international growth of its cloud business.

Rackspace updates OpenStack-powered cloud server, OnMetal


Rackspace has updated its OpenStack-powered cloud server, OnMetal, focusing its new features on building connectivity between public cloud and dedicated hardware.

The company highlighted that OnMetal delivers enhanced compute power and is designed for customers aiming to run workloads such as Cassandra, Docker and Spark, which require intensive data processing as well as the ability to scale and deploy quickly.

“With the combination of new features and performance capabilities in the next generation of OnMetal, it can be a solution for many customers seeking OpenStack as the platform to run their most demanding workloads,” said Paul Voccio, VP Software Development at Rackspace.

The new servers, designed from Open Compute Project specs, feature Intel Xeon E5-2600 v3 processors and build on Rackspace’s push to lead the OpenStack market. Last month, Rackspace added an OpenStack-as-a-Service option, in partnership with Red Hat, to its proposition while highlighting its ambitions “to deliver the most reliable and easy-to-use OpenStack private and hybrid clouds in the world.”

Rackspace claims app performance and reliability indicators improve with OnMetal cloud servers. The bare metal offering, generally associated with increased security, has helped its customer Brigade avoid the performance limitations common in virtualised environments.

“OnMetal has played a significant role in our ability to deliver the Brigade app with optimal uptime, and to innovate and grow the application with the performance of a dedicated environment,” said John Thrall, CTO of Brigade.

Alert Logic Expands EMEA Presence with Belfast Office | @CloudExpo @AlertLogic #Cloud

SYS-CON Events announced today that Alert Logic, Inc., the leading provider of Security-as-a-Service solutions for the cloud, will exhibit at SYS-CON’s 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY.
Alert Logic has announced that it received a grant from Invest Northern Ireland in support of its security research and technology development center located in Belfast. The £572,000 grant will enable the continued growth of its Belfast office by supporting approximately 90 jobs.


First Open Source GPU Could Change Future of Computing | @CloudExpo #Cloud

Researchers at Binghamton University recently became the first to create an open source graphics processing unit (GPU). The GPU they created, called Nyami, is appropriate for general-purpose as well as graphics-specific work.

Nyami is significant in the research, computing and open source communities because it marks the first time open source has been used to design a GPU, as well as the first time a research team was able to test how different hardware and software configurations affect GPU performance. The results of the experiments the researchers performed are now part of the open source community, and that work will help others follow in the original research team’s footsteps. According to Timothy Miller, a computer science assistant professor at Binghamton, as others create their own GPUs using open source, it will push computing power to the next level.


Developing a Microservices Pipeline | @DevOpsSummit #DevOps #Microservices

The rise of microservices has enabled teams to branch out from developing for innately complicated, monolithic applications and work with small, flexible and comparatively simple components. But when these components (numbering anywhere from the hundreds to the thousands for one application) need to work together, traditional tools and release processes can be lacking. Organizations need to adopt new delivery models, release strategies and tooling that can handle this new multitude of services and their dependencies.


Parallels at UCISA 16: Learning Made Practical

Members of the Parallels team will be attending and presenting at UCISA16 in Manchester from March 16–18. Universities and Colleges Information Systems Association (UCISA) is an organization designed to increase the value of education and educational initiatives by offering a more comprehensive way for students to learn, giving them access to a higher educational standard through […]

The post Parallels at UCISA 16: Learning Made Practical appeared first on Parallels Blog.