Turn your DevOps volume up to 11 and seize the business opportunity

(c)iStock.com/gbrundin

“These go to 11.” One of the great lines from the classic mockumentary This Is Spinal Tap, spoken by fictional rock guitarist Nigel Tufnel in the wonderful scene where he proudly shows off an amplifier whose volume dial runs from 0 to 11. Still hilarious after 30-odd years.

With DevOps and continuous delivery, organisations can also shoot for 11 – striving to take automation to its ultimate limit and then exceeding it. One hundred software deployments today, why not thousands tomorrow? Heck, let’s crank up the release volume to the max and push every code commit all the way to production.

But just being able to turn up the deployment dial is child’s play compared to what DevOps can deliver. True practitioners understand implicitly that volume counts for nothing if application quality or supportability suffers. Not content with velocity alone, expert practitioners ramp up all dials to 11 on the business amplifier – concurrently – including quality, resilience, supportability and compliance.

This takes some seriously effective collaboration supported by advanced cross-functional toolset integrations. Here are ‘11’ good ones to consider:

  • Automatically convert user stories from agile planning into the smallest set of test cases which cover 100 percent of the functionality in the user stories, linked to the right data and expected results.
  • Provide a real-time dashboard for managing and monitoring multi-application release content (user stories, features, bug fixes) through the entire release pipeline. This enables teams to gain visibility of release progress, more easily reconcile dependencies and map to business requirements.
  • Automatically attach test data criteria to test cases produced with agile planning tools. As features get promoted from dev to test to pre-production this allows execution of test cases with data that already exists in the target environment.
  • Integrate application performance management (APM) into continuous integration to check software builds against pass/fail conditions. Top solutions let developers invoke this quality check right from the tools they already use (e.g. the Jenkins dashboard) and jump straight into the relevant APM data in context; a minimal sketch of such a build gate appears after this list.
  • Leverage service virtualization to generate realistic services and inject referentially correct data with fully integrated test data management. This significantly improves the efficiency and quality of testing, while reducing compliance risk.
  • Speed the test bed preparation process by automating test data generation and reservation services as part of each deployment workflow.
  • Automatically initiate test case processes and tie the results back into releases to determine go/no go for automated promotion, enabling faster, higher quality deployments.
  • Provision virtual services and execute test suites on multiple virtual environments directly within a deployment workflow. By deploying into any testing environment, teams no longer have to wait for hardware environments to be built and become ready for testing.
  • Automatically deploy and initiate application performance monitoring as part of a deployment workflow, with metric capture feeding back critical performance information before and after release promotion.
  • Integrate API management with APM for in-depth visibility into problematic APIs that impact the customer experience and application performance. Capabilities should extend to proactive alerting on emerging issues and the ability to trace the interaction of APIs related to specific business transactions.
  • Enable APM users to incorporate load test scenarios with key performance metrics into their business analysis. This supports faster detection of issues and more opportunities to improve application quality.
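
Most of these integrations are wired up through vendor APIs and pipeline plugins rather than hand-written scripts, but the build gate mentioned above (APM checks driving an automated go/no-go decision) can be sketched in a few lines of Python. Everything in the snippet is an illustrative assumption: the metrics endpoint, the threshold values and the build identifier stand in for whatever your APM and CI tools actually expose.

    import sys

    import requests  # assumed to be available on the CI runner

    # Illustrative endpoint and thresholds -- substitute your APM vendor's real API.
    APM_METRICS_URL = "https://apm.example.com/api/builds/{build_id}/metrics"
    THRESHOLDS = {
        "avg_response_ms": 250,   # fail the build if average response time exceeds this
        "error_rate_pct": 1.0,    # fail the build if the error rate exceeds one percent
    }

    def fetch_build_metrics(build_id):
        """Pull the metrics recorded for this build's test run from the APM system."""
        resp = requests.get(APM_METRICS_URL.format(build_id=build_id), timeout=30)
        resp.raise_for_status()
        return resp.json()

    def quality_gate(build_id):
        """Return True only if every metric sits inside its pass/fail threshold."""
        metrics = fetch_build_metrics(build_id)
        failures = [
            f"{name}={metrics.get(name)} (limit {limit})"
            for name, limit in THRESHOLDS.items()
            if metrics.get(name, float("inf")) > limit
        ]
        for failure in failures:
            print(f"QUALITY GATE FAILED: {failure}")
        return not failures

    if __name__ == "__main__":
        # Run as a post-build step in CI; the exit code drives go/no-go promotion.
        build_id = sys.argv[1] if len(sys.argv) > 1 else "local"
        sys.exit(0 if quality_gate(build_id) else 1)

Wired into a Jenkins job or any other CI tool, a non-zero exit code simply fails the build and blocks promotion to the next environment.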

There will be many more toolchain integrations to consider, but these 11 are great ones to start with. They help ensure quality is injected into every release, while addressing key issues associated with maintaining compliance and improving application supportability.

Also of note is how essential data, metrics and workflows are integrated seamlessly in the context of roles, functions and processes across the entire software lifecycle. This way the whole pipeline moves with purpose, while individual disciplines, whether development, testing or IT operations, are never impaired.

So go ahead, crank up your DevOps practices to 11 and fully amplify the incredible digital business opportunity.

Third-Party Problem | @DevOpsSummit @Catchpoint #DevOps #WebPerf

In the digital world, performance, availability, and reliability are key pillars of business that can impact revenue, reputation, and operational excellence. We often write about the impact that third-party services have on performance on our blog, but sometimes that impact is felt in our personal lives as well. Every year, we monitor major retailers’ performance and availability during Black Friday and Cyber Monday, testing to capture the inevitable failures and outages as traffic volumes climb to record highs. This year, however, most of these retailers did amazingly well and I am very happy for them.

read more

HPE Buys SimpliVity

The cloud market is booming, and every company wants to expand its presence. Hewlett Packard Enterprise (HPE) is no exception, as it plans to build a larger footprint in this sector within the next few years. To this end, it has acquired a company called SimpliVity for $650 million in cash.

SimpliVity was founded in 2009 and is based in Westborough, Massachusetts. Over its eight years of operations, the company raised $276 million across four rounds of funding, according to Crunchbase. It specializes in hyper-converged infrastructure (HCI), a $2.4 billion market growing at more than 25 percent a year. HCI saves companies billions of dollars a year in technology and infrastructure costs because it combines computing, storage, and networking into a single component. To cash in on this trend, SimpliVity created its own HCI product, OmniCube, which runs on hardware from vendors such as Lenovo, Dell, Cisco, HPE, and Huawei.

OmniCube, the company’s flagship product, provides a simple, scalable architecture designed for high performance and data protection. Users can start with a single node and expand to a global network of nodes that moves and protects data across different physical locations.

With HCI being such a hot market, SimpliVity had big plans for its future. In March 2015 it raised $175 million from investors at a valuation of $1 billion. However, those investors are taking a beating now, because HPE is paying only $650 million for the company.

This raises an obvious question: why did SimpliVity accept an offer almost 35 percent below that valuation? The tech IPO market has been at a standstill since 2015. Nutanix, a competitor, took the IPO route but was not as successful as expected, and experts do not expect the tech IPO market to improve any time soon; if anything, it is likely to get worse. Given this scenario, SimpliVity’s directors decided the offer was a good one and took it, even though it meant the most recent round of investors took a $350 million hit against that $1 billion valuation.

For HPE, this is a sweet deal that expands its existing capabilities. It also fits HPE’s strategy of making hybrid IT simple for its customers, as more companies look for ways to build secure, resilient IT infrastructure at affordable prices. With this acquisition, HPE aims to take a larger share of the hybrid cloud platform market, meaning platforms that run applications partly on clients’ private servers and partly on public cloud servers.

In addition, HPE’s sales are expected to get a big boost from OmniCube’s revenue, making the deal a double treat for the company. Combining SimpliVity’s product and market reach with HPE’s existing product line and market share could prove a formidable combination. The best part is that customers get to enjoy these added benefits while HPE can hope for more revenue, even if it was a bittersweet end for SimpliVity.

Google launches cloud-based key management with new service

(c)iStock.com/tarik kizilkiya

Google has announced the launch of Cloud Key Management Service (KMS), which enables admins to manage their encryption keys in Google Cloud Platform without maintaining an on-premise management system.

The news marks Google’s entry into this particular security arena, following Amazon Web Services (AWS) and Microsoft, which launched similar services in 2014 and 2015 respectively.

“Customers in regulated industries, such as financial services and healthcare, value hosted key management services for the ease of use and peace of mind that they provide,” wrote Maya Kaczorowski, Google Cloud Platform product manager, in a blog post. “Cloud KMS offers a cloud-based root of trust that you can monitor and audit.

“As an alternative to custom-built or ad-hoc key management systems, which are difficult to scale and maintain, Cloud KMS makes it easy to keep your keys safe,” Kaczorowski added.
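
For developers, key operations are exposed through the standard Google Cloud APIs and client libraries. As a rough illustration only (the project, key ring and key names below are placeholders, and the snippet assumes the google-cloud-kms Python client library is installed with application credentials already configured), encrypting and decrypting a small payload with a hosted key looks something like this:

    from google.cloud import kms  # assumes the google-cloud-kms client library is installed

    # Placeholder identifiers -- replace with your own project, location, key ring and key.
    PROJECT, LOCATION, KEY_RING, KEY = "my-project", "global", "my-key-ring", "my-key"

    client = kms.KeyManagementServiceClient()
    key_name = client.crypto_key_path(PROJECT, LOCATION, KEY_RING, KEY)

    # Encrypt a small secret with the hosted key; the key material never leaves Cloud KMS.
    plaintext = b"database password"
    encrypt_response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
    ciphertext = encrypt_response.ciphertext

    # Decrypt it again when the application needs it back.
    decrypt_response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})
    assert decrypt_response.plaintext == plaintext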

Alongside this, the company has published a whitepaper which doubles down on its security efforts and details ‘how security is designed into [Google’s] infrastructure from the ground up’, in the words of Google Security distinguished engineer Niels Provos.

The paper, which can be read here, explains how the security of Google’s infrastructure is designed in progressive layers, from the data centre, to the hardware and software which underpins the infrastructure, and the processes put in place to support operational security.

“Google Cloud’s global infrastructure provides security through the entire information processing lifecycle,” wrote Provos. “This infrastructure provides secure deployment of services, secure storage of data with end-user privacy safeguards, secure communications between services, secure and private communication with customers over the internet and safe operation by administrators.”

Regarding the KMS service, Leonard Austin, CTO at Google customer Ravelin, notes the cloud firm is “transparent about how it does its encryption by default…and Cloud KMS makes it easy to implement best practices.”

You can read more about KMS here.

Languages for 2017 | @DevOpsSummit @AppDynamics #DevOps #JavaScript

It’s hard to believe that it’s already 2017. But with the new year comes new challenges, new opportunities—and, of course—new software projects. One of the most important questions beginner, intermediate, and advanced coders all have to answer before they begin their next project is which programming language to use. Instead of reaching for an old favorite, pause for a moment to consider the options.

read more

Accenture to Present at @DevOpsSummit NY | @Geek_King #AI #CD #DevOps

All organizations that did not originate in this moment have a pre-existing culture, as well as legacy technology and processes, that can be more or less amenable to DevOps implementation. That organizational culture is influenced by the personalities and management styles of Executive Management, the wider culture in which the organization is situated, and the personalities of key team members at all levels of the organization. This culture and these entrenched interests usually throw a wrench in the works because of misaligned incentives.

read more

[session] Serverless – The Next Major Shift in Cloud By @LinuxAcademyCOM | @CloudExpo #Cloud #Serverless

In 2014, Amazon announced a new form of compute called Lambda. We didn’t know it at the time, but this represented a fundamental shift in what we expect from cloud computing. Now, all of the major cloud computing vendors want to take part in this disruptive technology.
In his session at 20th Cloud Expo, John Jelinek IV, a web developer at Linux Academy, will discuss why major players like AWS, Microsoft Azure, IBM Bluemix, and Google Cloud Platform are all trying to sidestep VMs and containers with heavy investments in serverless computing, when most of the industry has its eyes on Docker and containers.
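
To make the shift concrete: in the serverless model the unit of deployment is a single function rather than a server or container. A minimal AWS Lambda handler in Python looks like the sketch below (the ‘name’ field in the event payload is purely illustrative; real event shapes depend on whichever service triggers the function):

    import json

    def handler(event, context):
        """Entry point invoked by AWS Lambda; there is no server or container to manage."""
        # 'event' carries the trigger payload; the 'name' field is an illustrative assumption.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

The provider takes care of provisioning, scaling and per-invocation billing, which is the shift in expectations the abstract above describes.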

read more

Confessions of #DevOps: Fannie Mae, Liberty Mutual, Capital One | @CloudExpo

True story. Over the past few years, Fannie Mae transformed the way in which it delivered software. Deploys increased from 1,200/month to 15,000/month. At the same time, productivity increased by 28% while costs fell by 30%. But how did they do it? During the All Day DevOps conference, over 13,500 practitioners from around the world gathered to learn from their peers in the industry. Barry Snyder, Senior Manager of DevOps at Fannie Mae, was one of 57 practitioners who shared his real-world journey through his enterprise transformation.

read more

Oracle announces new UK, US and Turkey cloud regions, adds product enhancements

(c)iStock.com/maybefalse

Software giant Oracle has made a series of announcements at its Oracle CloudWorld event in New York, with the standout being the launch of three new cloud regions, including one in the UK.

The new regions, located in Virginia, London, and Turkey, are expected to go live by the middle of 2017, and the company expects further regions in APAC, the Middle East, and North America to launch a year later.

Oracle adds that each new region will comprise at least three high-bandwidth, low-latency sites, which the company calls ‘availability domains’, located several miles from each other and designed so that a fault in one site is isolated from the others.

Back in September, Oracle co-founder and CTO Larry Ellison spoke of these fledgling next-generation data centres to delegates at their OpenWorld event. “We have a modern architecture for infrastructure where there’s no single point of failure,” he said. “Faults are isolated, therefore faults are tolerated. If we lose the data centre, then you won’t even know about it.” At the time, Ellison told delegates that “Amazon’s lead is over” in infrastructure as a service.

“Oracle is committed to building the most differentiated cloud platform that delivers on the requirements of a wide array of customer workloads,” said Deepak Patil, vice president of development at Oracle Cloud Platform in a statement. “This regional expansion underscores our commitment to making the engineering and capital investments required to continue to be a global large scale cloud platform leader.”

Elsewhere, the company announced expansions of the Oracle Cloud Platform with what it described as an industry first: the Oracle Database Cloud Service is now available on bare metal compute, alongside new virtual machine, compute, load balancing, and storage capabilities for the platform. Oracle says its Database Cloud Service is suited to development, testing, and deployment of enterprise workloads, while the advancements to the overall platform give it ‘differentiated database performance at every scale, and deeply integrated IaaS capabilities for customers of any size’.

“These latest investments in the Oracle Cloud Platform provide a clear path to develop, test, and scale applications – with the Oracle Database or third-party databases,” said Thomas Kurian, Oracle president of product development. “We offer customers the most comprehensive approach to moving to the cloud and accelerating their business strategies.”

In November, a study from Oracle argued various barriers remain for an enterprise IT cloud model to succeed, with proving return on investment and discord between infrastructures the key stumbling blocks.