Business Cloud News magazine Issue 2 | March/April 2015

Business Cloud News is proud to announce that the second issue of BCN is now available online.

In this issue we focus on two interrelated trends – which create equally entangled issues and questions – and the cloud’s role therein: Big Data and the Internet of Things.

We asked over 700 senior IT decision makers globally about their big data rollout plans in order to get a better sense of their views on where the challenges and bottlenecks lie, and crucially, what components of their data systems will move to the cloud, when, and why.

Also in this issue, we look at the role open source cloud technologies are playing in the rapid transformation of the film and TV industries; the IT strategy driving the innovative vehicle manufacturer Nissan; and the growing presence of cloud computing in what is often thought to be a technologically conservative industry – the financial services sector.

Elsewhere in this issue we look at the shifts in the core capabilities that underpin and to some extent enable cloud computing: the emergence and impact of software-defined networking, and computational heterogeneity in the cloud.

We hope you enjoy issue #2 of BCN!


Five Reasons APM Fails By @AppDynamics | @DevOpsSummit [#DevOps]

The first cause of failure is the silos in many of today’s organizations. There are often too many stakeholders involved in APM decision-making, ranging from application support, server, network, and database teams (DBAs) to application developers and various architects across the organization. We’re also seeing more non-technical users, such as the business owner of an application who wants to see usage and performance data on critical Business Transactions within the application; these business users will become more central users of APM in the future. It’s critical to identify the primary users of the product and determine requirements focused on those primary users. Secondary users can have input but should not be the ones driving the key decision points. As products mature, they can sell into multiple areas or even cross-sell across teams, but that shouldn’t be the focus of the initial implementation.


Upcoming Live Events – Windows Server 2003…Does the Cloud Make Sense for Your Migration?

I just wanted to take a quick minute to let the readers of our blog know that GreenPages is holding a series of live events around migrating Windows Server 2003 Workloads. The events are free and will be held in Cambridge, MA, Portland, ME, Tampa, FL, and Alpharetta, GA. David Barter, our Practice Manager of Microsoft Technologies, will be hosting the events.

We decided to put these events together because of the impact Windows Server 2003 End-of-Life is having on IT professionals across the globe. As you are probably already aware, the End-of-Life date is July 14th. Needless to say, that is coming up pretty quickly. There are perceived, and often real, challenges involved in upgrading applications. However, there are some serious drawbacks if you do not migrate. First, no new updates will be developed or released after end of support. Not migrating could also cause compliance issues for various regulatory and industry standards. Furthermore, staying put will cost more in the end. Maintenance costs for outdated hardware will increase and there will be additional costs for security measures that need to be taken.

On the flip side, the benefits of migrating include reduced operational costs and increased efficiency, improved employee productivity, cloud readiness, and greater business agility. There are different paths you can take, such as migrating to Windows Server 2012, or moving to Azure or Office 365, either as individual products or as a Platform as a Service.

During the events, David will cover:

  • Developing an action plan and ways Azure and Office 365 can be part of it
  • Potential migration pitfalls
  • Determining which applications will run “as is” on new platforms and which won’t
  • The areas of your infrastructure that will be affected by End of Life
  • Examples of GreenPages’ customers going to the cloud, including how they approached the decision process and what their experiences were like

You can register here. If there isn’t an event near you but you’re interested in learning more on the topic, I would highly recommend downloading David’s whitepaper. These should be great events (plus you get a free lunch and are entered to win an Xbox One)! Below is some more information on event locations.

Portland, Maine

  • March 26th from 10am-11am at the Portland Harbor Hotel

Tampa, Florida

  • April 1st from 10am-2pm at the Microsoft Campus

Alpharetta, Georgia

  • April 2nd from 10am-2pm at the Microsoft Campus

Cambridge, Massachusetts

  • April 7th from 10am-2pm at the Microsoft Campus


If you have any specific questions about event logistics, feel free to reach out to Kelsey Barrett, our Marketing and Event Coordinator.


By Ben Stephenson, Emerging Media Specialist


Severalnines: Mixing DevOps with database management and going its own way


Sweden-based database management provider Severalnines is an interesting company. Its name suggests SLAs, but the firm, with approximately 7,000 users, supports MySQL, MariaDB and MongoDB among others, enabling businesses to run both SQL and NoSQL open source databases in the cloud or on-premises.

Yet the way Severalnines has built its success is of most interest. Instead of going cap in hand to VCs and building up cash without coming close to making a profit, Severalnines has done it the hard way, relying predominantly on the success of its ClusterControl product to expand, with customers such as BT, Orange and Cisco on board. And with 100% growth in the past 12 months, things are certainly on track.

Vinay Joosery is CEO and co-founder of Severalnines. He argues startups today are approaching their goals from the wrong angle.

He tells CloudTech: “Five or 10 years ago, I didn’t see as many young people really dreaming about starting their own company, but now it seems like today, when you go to some of these networking events, the dream is [to] have a good idea, build the prototype with your friends, and then go and contact the VCs.

“Basically you get excited about the wrong things. Solve a real problem and find real customers who are willing to pay for that problem – I think that’s probably where the focus should be.”

Severalnines has done that. The company’s focus is on architecting for failover – a big problem when managing databases, particularly when the majority of a firm’s staff doesn’t know how to set things up. Add cloud failures and multiple data centres to the mix and it all gets very complicated.

Covering both SQL and NoSQL use cases, Joosery argues Severalnines doesn’t push its customers in any direction – “they are the experts in what they want” – but adds: “What they don’t know though is – does this database work for me? We think that the technology is not enough. It needs to be maintained, it needs to be scaled, if there are issues you need to know how to fix it. These are the kinds of things that we address with the product.”

Plenty of complexity gets thrown in with the working environments in large enterprises, which will typically have an army of Oracle database admins in one room, an army of sysadmins in another, and armies of developers elsewhere. The growing DevOps movement is changing this landscape, and for Severalnines – which includes support for Puppet and Chef – it’s a key trend to play into.

“Our product has been addressing that angle from day one in a way,” Joosery says. “You have people managing the database, they’re doing some programming, they’re building web services, they know a bit of everything. What you will see in ClusterControl today, it’s doing a number of things addressing mainly on the sysadmin point of view, all the database aspects. The stuff it does is a combination of sysadmin tasks that you need to do on a database, as well as pure database administration tasks.”

Joosery adds that another benefit of DevOps is writing infrastructure as code. In the future, you won’t need to put boxes together, screw in cables, or do anything manually; instead, you write a program to spin up VMs in the cloud and set up the connections between them.
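
As a rough illustration of that idea – a sketch only, not Severalnines’ own tooling – the few lines of Python below use the AWS SDK (boto3) to launch a VM and open a database port programmatically; the AMI ID, security group ID and CIDR range are placeholders.

```python
# Minimal "infrastructure as code" sketch with boto3. The AMI ID, security
# group ID, and CIDR range are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Spin up a small virtual machine from a machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# "Set up connections": allow MySQL traffic into the node's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",    # hypothetical security group
    IpProtocol="tcp",
    FromPort=3306,
    ToPort=3306,
    CidrIp="10.0.0.0/16",
)

print("Launched", instance_id)
```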

Watch this space for further developments on that front. Yet there is a common theme between pursuing a DevOps architecture and building distributed systems that are architected for failover.

“We are helping people with their databases, but we are not doing it as the other tools vendors are,” Joosery explains. “They are taking an approach where they are looking at single instances. What we do is focus from day one to build tools for systems, and the system for us today is a distributed system, which can be distributed across multiple machines.

“In a distributed system you assume there will be failures. The shit will hit the fan, that’s just by default. Our software is built from the ground up to make sure that these things are easy.

“Let’s say you have a bunch of services, you’re running it in a couple of Amazon regions and you want to have the same data in your own private data centre as well. If for example the Irish region of Amazon gets knocked out, [you] failover to Singapore or failover to the US or failover to your own data centre.

“This is why we are called Severalnines – people need more availability.”

Building Trust From the Bottom Up By @AppDynamics | @DevOpsSummit [#DevOps]

There are lots of ways DevOps can fail. For all the revolutionary promise of the idea, it takes a tricky cultural shift to get Development and Operations working together. Many companies—especially big ones—take a top-down approach. C-suite execs trumpet a Brand New DevOps Initiative, which everyone else resists, undermines, or ignores.

As a developer at a SaaS company, my success depends on Ops. Too often, Dev and Ops are divided by mistrust and rarely talk between releases. Bridging this gap requires finding a way to dial down the tensions that spring from differences in status and divergent incentives.


How to achieve success in the cloud

To cloud or not to cloud? With the right strategy, it need not be the question.

There are two sides to the cloud coin: one positive, the other negative. Too many people focus on one at the expense of the other, for reasons ranging from ignorance to wilful misdirection. But ultimately, success resides in embracing both sides and pulling together the capabilities of both enterprises and their suppliers to make the most of the positive and limit the negative.

Cloud services can either alleviate or compound the business challenges identified by Ovum’s annual ICT Enterprise Insights program, which is based on interviews with 6,500 senior IT executives. On the positive side, both public and private clouds, and everything in between, help:

Boost ROI at various levels: From squeezing more utilization from the underlying infrastructure to making it easier to launch new projects with the extra resources exposed as a result.

Deal with the trauma of major organisational/structural changes, since cloud services can adapt to the ups and downs of evolving requirements.

Improve customer/citizen experience, and therefore satisfaction: This has been one of the top drivers for cloud adoption. Cloud computing is at its heart user experience-centric. Unfortunately many forget this, preferring instead to approach cloud computing from a technical perspective.

Deal with security, security compliance, and regulatory compliance: An increasing number of companies acknowledge that public cloud security and compliance credentials are at least as good as, if not better than, their own, particularly in a world where security and compliance challenges are evolving so rapidly. Similarly, private clouds require security to shift from reactive and static to proactive and dynamic, with workloads and data secured as they move in and out of internal IT’s boundaries.

On the other hand, cloud services have the potential to compound business challenges. For instance, the rise of public cloud adoption contributes to challenges related to increasing levels of outsourcing. It is all about relationship management, and therefore relates to another business challenge: improving supplier relationships.

In addition, enterprises have to adapt to new public cloud offerings (rather than the other way round), and once the right contract is signed (another challenging task), they need to proactively manage not only their use of the service but also their relationship with the service provider, if only to keep up with providers’ fast-evolving offerings.

Similarly, cloud computing adds to the age-old challenge of aligning business and IT at two levels: cloud-enabling IT, and cloud-centric business transformation.

From a cloud-enabling IT perspective, the challenge is to understand, manage, and bridge a variety of internal divides and convergences, including consumer versus enterprise IT, developers versus IT operations, and virtualisation ops people versus network and storage ops. As the pace of software delivery accelerates, developers and administrators need not only to learn from and collaborate with one another, but also to deliver the right user experience – not just the right business outcomes. Virtualisation ops people tend to be much more in favour of software-defined datacentre, storage, and networking (SDDC, SDS, SDN) than network and storage ops people, with a view to increasingly taking control of datacentre and network resources. The storage and network ops people, however, are not so keen on letting the virtualisation people in.

When it comes to cloud-centric business transformation, IT is increasingly defined in terms of business outcomes within the context of its evolution from application siloes to standardised, shared, and metered IT resources, from a push to a pull provisioning model, and more importantly, from a cost centre to an innovation engine.

The challenge, then, is to understand, manage, and bridge a variety of internal divides and convergences including:

Outside-in (public clouds for green-field application development) versus inside-out (private cloud for legacy application modernization) perspectives. Supporters of the two approaches can be found on both the business and IT sides of the enterprise.

Line-of-business executives (CFO, CMO, CSO) versus CIOs regarding cloud-related roles, budgets, and strategies: The up-and-coming role of chief digital officer (CDO) exemplifies the convergence between technology and business C-level executives. All CxOs can potentially fulfil this role, with CDOs increasingly regarded as “CEOs in waiting”. In this context, there is a tendency to describe the role as the object of a war between CIOs and other CxOs. But what digital enterprises need is not CxOs battling each other, but CxOs coordinating their IT investments and strategies. That is easier said than done since, beyond the usual political struggles, there is a disparity between all sides in terms of knowledge, priorities, and concerns.

Top executives versus middle management: Top executives are broadly in favour of cloud computing in all its guises, while middle managers are much less eager to take it on board but need to be won over, since they are critical to cloud strategy execution.


Shadow IT versus Official IT: IT increasingly acknowledges both the benefits of shadow IT (it makes an organisation more responsive and capable of delivering products and services that IT cannot currently support) and its shortcomings (in terms of costs, security, and lack of coordination, for example). However, too much focus on control at the expense of user experience and empowerment only perpetuates shadow IT.

Only by understanding, managing, and bridging these divides will your organisation manage to balance both sides of the cloud coin.

Laurent Lachal leads Ovum Software Group’s cloud computing research. Besides Ovum, where he has spent most of his 20-year career as an analyst, Laurent has also been European software market group manager at Gartner Ltd.

Migrating Your Windows PC to Mac in Parallels Desktop

Guest blog by Manoj Dhanasekar, Parallels Support Team. Have you recently switched to Mac but still need some of your Windows programs? Are you about to ditch your old PC? Wait! Give it one last chance. You can actually migrate your Windows – including all of your programs and files – to your Mac. How? Read on: To import your […]

The post Migrating Your Windows PC to Mac in Parallels Desktop appeared first on Parallels Blog.

LeShop taps OpenShift for hybrid cloud app development and management

LeShop has deployed OpenShift to support the company’s hybrid cloud strategy

LeShop.ch, one of Switzerland’s largest online supermarkets, has selected Red Hat’s commercial OpenShift distribution in a bid to improve how it develops and deploys applications in the cloud.

The company, which uses a combination of its own datacentres and the public cloud to host its applications and consumer-facing websites, was looking to deploy a platform-as-a-service because it wanted to increase the performance of its apps and ease their management in a hybrid environment.

Last year the e-retailer migrated its applications to a tightly linked microservices-oriented architecture in order to make its online platforms more scalable, and said it selected OpenShift after considering a number of options including Cloud Foundry and Docker-based platforms.

“It’s not going to be a problem to complete the project on time and on budget,” said Raphaël Anthamatten, head of infrastructure and operations at LeShop.ch. “OpenShift Enterprise provides all of the functions we need to implement the highly flexible micro-services architecture in development and operation.”

Ashesh Badani, vice president and general manager, OpenShift at Red Hat said: “Quickly and reliably launching innovative solutions to market, while leveraging new technologies and application architectures, is one of the key challenges for any online retailer. With OpenShift Enterprise, we support LeShop.ch in developing innovative new services for their online customers in order to become an even more prominent leader in the Swiss market.”

Cisco to open Internet of Things innovation centre in Australia

Cisco will open an Internet of Things innovation centre in Australia this year

Networking giant Cisco plans to open an Internet of Everything Innovation Centre in Australia this year, which the company said will house experts in the Internet of Things and help catalyse IoT innovation in the region.

The $15m centre, one of eight planned globally (the others being in Rio de Janeiro, Toronto, Songdo, Berlin, Barcelona, Tokyo and London), will include locations in Sydney at Sirca and in Perth at Curtin University. Perth-based energy firm Woodside Energy will also contribute resources to the centre.

The centres include dedicated space to demonstrate Internet of Things platforms, and are being pitched as areas where customers, startups and researchers can come together to prototype and test out their ideas.

“Australia is a sophisticated market with a high level of innovation and an early adopter of new technology. Australia is already highly regarded globally for its resources and agriculture sectors and is well-placed to serve the rapidly growing Asian markets, and the Australian government has prioritised these sectors accordingly,” said Irving Tan, senior vice president Asia Pacific and Japan at Cisco.

“The aim now with Cisco IoE Innovation Centre, Australia and its ecosystem of partners is to accelerate innovation and the adoption of the IoE in Australia,” Tan said.

The announcement comes the same week Cisco published a report claiming UK Internet of Things startups could generate more than £100bn over the decade as their offerings catch on in industries like healthcare, retail, transport and energy.

The company also said large firms, SMEs, and government organisations in the UK need to cultivate more joint innovation partnerships if any industry stakeholders are to reap the financial benefits of such a proliferation in internet-connected devices.

Three steps to resilient AWS deployments


Hardware fails. Versions expire. Storms happen. An ideal infrastructure is fault-tolerant, so even the failure of an entire datacenter – or Availability Zone in AWS – does not affect the availability of the application.

In traditional IT environments, engineers might duplicate mission-critical tiers to achieve resiliency. This can cost thousands or hundreds of thousands of dollars to maintain and is not even the most effective way to achieve resiliency. On an IaaS platform like Amazon Web Services, it is possible to design failover systems with lower fixed costs and zero single points of failure, using a custom mix of AWS and third-party tools.

Hundreds of small activities contribute to the overall resiliency of the system, but below are the most important foundational principles and strategies.

1. Create a loosely coupled, lean system

This basic system design principle bears repeating: decouple components such that each has little or no knowledge of other components. The more loosely coupled the system is, the better it will scale.

Loose coupling isolates the components of your system and eliminates internal dependencies, so that the failure of a single component goes unnoticed by the other components. This creates a series of agnostic black boxes that do not care whether they serve data from EC2 instance A or B, resulting in a more resilient system should A, B, or another related component fail.

Best practices:

– Deploy Vanilla Templates. At Logicworks, our standard practice for Managed AWS hosting is to use a “vanilla template” and configure it at deployment time through Puppet and configuration management. This gives us fine-grained control over instances at the time of deployment, so that if, for example, we need to deploy a security update to our instance configuration, we only touch the code once in the Puppet manifest rather than manually patching every instance deployed from a golden template. By eliminating your new instances’ dependency on a golden template, you reduce the failure risk of the system and allow instances to be spun up more quickly.

– Simple Queue Service or Simple Workflow Service. When you use a queue or buffer to relate components, the system can support spillover during load spikes by distributing requests to other components. Put SQS between layers so that the number of instances can scale on its own as needed, based on the length of the queue. If everything were to be lost, a new instance would pick up queued requests when your application recovers (a minimal sketch of this pattern appears after this list).

– Make your applications as stateless as possible. Application developers have long employed a variety of methods to store session data for users. This almost always makes the scalability of the application suffer, particularly if session state is stored in the database. If you must store state, saving it on the client reduces database load and eliminates server-side dependencies.

– Minimise interaction with the environment using CI tools, like Jenkins.

– Elastic Load Balancers. Distribute instances across multiple Availability Zones (AZs) in Auto Scaling groups. Elastic Load Balancers (ELBs) should distribute traffic among healthy instances, based on frequent health checks whose criteria you control.

– Store static assets on S3. On the web serving front, best practice is to store static assets on S3 instead of serving them from the EC2 nodes themselves. Putting CloudFront in front of S3 lets you deploy static assets so that the throughput of those assets never touches your application. This not only decreases the likelihood that your EC2 nodes will fail, but also reduces cost by allowing you to run leaner EC2 instance types that do not have to handle content delivery load (a short upload sketch follows this list).
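
To make the queue-between-tiers practice concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The queue URL, region and the process function are placeholder assumptions rather than anything prescribed above; treat it as an illustration of the pattern, not a production worker.

```python
# Sketch of the queue-between-tiers pattern: the web tier enqueues work and a
# worker tier polls for it. The queue URL and message contents are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical


def enqueue_order(order):
    """Web tier: hand the work item to the queue and return immediately."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))


def process(order):
    """Placeholder for your business logic."""
    print("processing", order)


def worker_loop():
    """Worker tier: poll the queue; if a worker dies, messages simply wait."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling
        )
        for msg in resp.get("Messages", []):
            process(json.loads(msg["Body"]))
            # Delete only after successful processing, so failed work is retried.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Auto Scaling can then key off queue depth (for example, the ApproximateNumberOfMessagesVisible CloudWatch metric) so the worker tier grows and shrinks with the backlog.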
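
The static-assets practice can be equally simple in code: upload build artefacts to S3 with cache headers so CloudFront and browsers can cache them. A hedged sketch, with a hypothetical bucket name and file paths:

```python
# Upload a build artefact to S3 with cache headers so CloudFront and browsers
# can cache it. Bucket name and paths are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    "build/css/site.css",          # local asset produced by your build
    "example-static-assets",       # bucket that CloudFront points at
    "css/site.css",
    ExtraArgs={
        "ContentType": "text/css",
        "CacheControl": "public, max-age=86400",
    },
)
```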

2. Automate your infrastructure

Human intervention is itself a single point of failure. To eliminate this, we create a self-healing, auto scaling infrastructure that dynamically creates and destroys instances and gives them the appropriate roles and resources with custom scripts. This often requires a significant upfront engineering investment.

However, automating your environment before build significantly cuts development and maintenance costs later. An environment that is fully optimised for automation can mean the difference between hours and weeks to deploy instances in new regions or create development environments.

Best practices:

– The infrastructure in action. In the case of the failure of any instance, it is removed from the Auto Scaling group and another instance is spun up to replace it (a minimal sketch of such a group appears after this list).

  • CloudWatch triggers the replacement: the new instance is spun up from an AMI in S3, copied to a hard drive about to be brought up.
  • The CloudFormation template automatically sets up a VPC, a NAT Gateway, and basic security, and creates the tiers of the application and the relationships between them. The goal of the template is to minimally configure the tiers and then get them connected to the Puppet master. The template can then be held in a repository, from which it can be checked out as needed, by version (or branch), making it reproducible and easily deployable as new instances when needed – i.e., when existing applications fail or experience degraded performance.
  • This minimal configuration lets the tiers be configured by Puppet, a fully expressive language that allows for close control of the machine. Configuring Puppet manifests and making sure the Puppet master knows what each instance it spins up is supposed to do is one of the more time-consuming and custom solutions a managed service provider can architect.

– Simple failover with RDS. RDS offers a simple option for multi-Availability-Zone failover during disaster recovery. It also attaches the SQL Server instance to an Elastic Block Store volume with provisioned IOPS for higher performance.
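
As a rough sketch of the self-healing layer described in this list – instances spread across Availability Zones behind a load balancer with health checks, replaced automatically when they fail – the boto3 snippet below configures a classic ELB health check and creates an Auto Scaling group. All names (load balancer, launch configuration, AZs) are placeholders, and the environments described above are actually built through CloudFormation and Puppet rather than ad-hoc scripts.

```python
# Sketch of a self-healing web tier: an Auto Scaling group spread across
# Availability Zones behind a classic ELB with health checks. All names are
# placeholders and assume the ELB and launch configuration already exist.
import boto3

elb = boto3.client("elb", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Health-check the web tier frequently; unhealthy instances stop receiving traffic.
elb.configure_health_check(
    LoadBalancerName="web-elb",
    HealthCheck={
        "Target": "HTTP:80/healthcheck",
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)

# Keep 2-6 instances alive across two AZs; failed instances are terminated
# and replaced automatically, with no human intervention.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",   # built from a vanilla template
    MinSize=2,
    MaxSize=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["web-elb"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```

With HealthCheckType set to “ELB”, instances that fail the load balancer’s health check are terminated and replaced by the group – the self-healing behaviour described above.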

3. Break and Destroy

If you know that things will fail, you can build mechanisms to ensure your system persists no matter what happens. In order to create a resilient application, cloud engineers must anticipate what could possibly develop a bug or be destroyed and eliminate those weaknesses.

This principle is so crucial to the creation of resilient deployments that Netflix – true innovators in resiliency testing – has created an entire squadron of Chaos Engineers “entirely focused on controlled failure injection.” Implementing best practices and then constantly monitoring and updating your system is only the first step to creating a fail-proof environment.

Best practices:

– Performance testing. In software engineering, as in IaaS, performance testing is often the last and most frequently ignored phase of testing. Subjecting your database or web tier to stress or performance tests from the very beginning of the design phase – and not just from a single location inside a firewall – will allow you to measure how your system will perform in the real world.

– Unleash the Simian Army. If you survive a Simian Army attack on your production environment with zero downtime or latency, it is proof that your system is truly resilient. Netflix’s open-source suite of chaotic destroyers is available on GitHub. Induced failures prevent future failures (a toy failure-injection sketch follows below).
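
For a feel of what controlled failure injection looks like before adopting the Simian Army itself, the toy boto3 script below – a sketch in the spirit of Chaos Monkey, not the Netflix tooling – terminates one random in-service instance from a hypothetical Auto Scaling group so you can check that the group heals itself.

```python
# Toy failure-injection script in the spirit of Chaos Monkey (not the Netflix
# tooling itself, which lives on GitHub). It terminates one random in-service
# instance from a named Auto Scaling group so you can verify the group heals.
import random
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")


def inject_failure(group_name="web-asg"):    # hypothetical group name
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )["AutoScalingGroups"]
    instances = [
        i["InstanceId"]
        for g in groups
        for i in g["Instances"]
        if i["LifecycleState"] == "InService"
    ]
    if not instances:
        print("nothing to break")
        return
    victim = random.choice(instances)
    print("terminating", victim)
    ec2.terminate_instances(InstanceIds=[victim])
    # A healthy deployment should replace the victim and keep serving traffic.


if __name__ == "__main__":
    inject_failure()
```

Run it against a staging copy of your environment first; if the application keeps serving traffic while the group replaces the terminated node, you have evidence the tier is genuinely resilient.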

Unfortunately, deploying resilient infrastructure is not just a set of to-dos. It requires a constant focus, throughout the AWS deployment, on optimising for automatic failover and on the precise configuration of various native and third-party tools.

The post 3 Steps to Resilient AWS Deployments appeared first on Cloud Computing News.