The glitch economy: Counting the cost of software failures

In today’s increasingly digitalised world, the effect of a software glitch can be dramatic. Take an example from July this year, when a glitch caused the stock prices of well-known Nasdaq companies such as Amazon, Apple, Alphabet, eBay and Microsoft to be listed inaccurately on websites well after that day’s closing bell.

Even though the actual prices of the stocks were unchanged, the sites showed some had plummeted in price and others had nearly doubled. Unsurprisingly, many people were fooled and took to social media to discuss the false listings, dragging company names into a controversy over something that had never happened.

Software glitches happen every day. Recent research by Tricentis revealed that software failures in the US cost the economy $1.1 trillion in assets in 2016. In total, software failures at 363 companies affected 4.4 billion customers and caused more than three and a half years of lost time.

Ever-decreasing development cycles

Software releases are the single biggest factor contributing to downtime (and glitches) across all industries. As organisations are relentlessly forced to trade off agility against risk, many of the quality checks and governance balances put in place to prevent glitches are coming under severe pressure.

With the move to more agile methodologies, long software development lifecycles are becoming a thing of the past. Timelines are frequently measured in weeks or days, and projects are often in a perpetual state of release and testing. Frequently, the next version of the code is ready for testing and staging even as teams deploy the previous version to production.

Mobile development is another factor driving relentless development cycles. For example, a customer-facing application may suddenly need to access a new location service, or require a location service to connect its users to one another. The difference between a mobile app being adopted or ignored can come down to how rapidly it can adapt and add new features. And with mobile projects targeting multiple platforms simultaneously, it can be very difficult to fix a point at which the app can stop responding to platform changes.

Timelines are also shortened because cloud computing and DevOps infrastructure-as-code approaches can radically reduce the set-up time for infrastructure and the effort required to configure it. Code can ship as soon as it is ready to go.
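As a rough illustration of the infrastructure-as-code idea, here is a minimal Python sketch using the AWS boto3 library to launch a tagged staging server from a scripted definition; the region, AMI ID and tag values are placeholder assumptions rather than details from any real environment.

```python
# Minimal infrastructure-as-code sketch: the environment is defined in
# version-controlled code, so standing it up is repeatable and fast.
# Assumes AWS credentials are already configured; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "env", "Value": "staging"}],
    }],
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

Because the definition lives in code, the same script can recreate an identical environment on demand, which is what collapses the set-up time described above.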

The shift from quality assurance to quality engineering is also blurring the line between application development and the teams that test and certify that software is ready for release to production. Quality engineers are often embedded with application developers, so acceptance testing begins with the first continuous integration build.
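As a minimal sketch of what that can look like in practice, the pytest example below encodes acceptance criteria as tests that run from the very first continuous integration build; the create_order function and its contract are invented for illustration.

```python
# Acceptance criteria written as executable tests, committed alongside the
# first feature code so they run in the first CI build.
import pytest

def create_order(items):
    """Toy stand-in for the application code under test (hypothetical)."""
    if not items:
        raise ValueError("order must contain at least one item")
    return {"status": "accepted", "count": len(items)}

def test_valid_order_is_accepted():
    # Acceptance criterion: a valid order is accepted.
    assert create_order(["widget"])["status"] == "accepted"

def test_empty_order_is_rejected():
    # Acceptance criterion: an empty order fails with a clear error.
    with pytest.raises(ValueError):
        create_order([])
```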

What can be done?

Short of developing a software system that never needs to change – an impossibility – businesses need to consider other ways to reduce the potential for glitches to emerge later and cause disruption (as well as embarrassment).

Adopting a streamlined approach built on predictable, high-quality, automated enterprise software delivery would help cut delivery delays and glitches. And by implementing end-to-end release management, organisations can manage software delivery across the entire enterprise release portfolio and throughout the complete lifecycle of each release, including planning, approval and execution.
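As a toy illustration of that lifecycle, the sketch below models a release as a small state machine that can only move from planning to approval to execution; the stage names are assumptions for illustration, not any particular vendor’s API.

```python
# A release lifecycle as a forward-only state machine: nothing is executed
# without passing through an explicit approval stage first.
from enum import Enum, auto

class Stage(Enum):
    PLANNING = auto()
    APPROVED = auto()
    EXECUTED = auto()

NEXT = {Stage.PLANNING: Stage.APPROVED, Stage.APPROVED: Stage.EXECUTED}

class Release:
    def __init__(self, name):
        self.name = name
        self.stage = Stage.PLANNING

    def advance(self, target):
        # Only the single permitted forward transition is allowed.
        if NEXT.get(self.stage) is not target:
            raise RuntimeError(f"{self.name}: cannot move {self.stage.name} -> {target.name}")
        self.stage = target

release = Release("payments-2.3")
release.advance(Stage.APPROVED)   # planning -> approved
release.advance(Stage.EXECUTED)   # approved -> executed
print(release.name, release.stage.name)
```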

Another integral part of the process is testing. Modern enterprise test management tools can support the complete software testing process across all types of development methodologies. A single instance can serve all projects by consolidating test design, planning, manual and automated execution, defect tracking and progress reporting.

Matching development and delivery

By implementing enterprise release management technology, organisations can enable existing systems to support the adaptive environment that more accelerated methodologies are creating in many Global 1000 companies. Enterprise release management tools can accelerate delivery while maintaining software integrity and stability, without compromising the speed of production. With an enterprise release management platform, companies can scale to meet their ever-increasing release demands.

The software development cycle is becoming increasingly agile, but for many organisations the delivery management process is struggling to achieve the continuous delivery approach that is required. Modern enterprise release management technology can help businesses better match development to delivery. Unless enterprises take steps to align the two more effectively, the risk of software glitches and downtime could well increase.

Google and Cisco join hands for hybrid cloud

In today’s age of intense competition, partnerships seem to be the best way to increase market share. That’s probably why we’ve seen so many companies team up to create win-win partnerships for everyone involved. The latest in this trend is the partnership between two IT giants: Google and Cisco.

The two companies announced that they would work together to help their respective customers create efficient hybrid cloud solutions. Through this partnership, both want to bring the power and advantages of the cloud to their customers’ on-premise environments. To do this, they will use Kubernetes, the container orchestration platform created by Google. It also looks like they will use the Istio service mesh to connect microservices across clouds.
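As a minimal sketch of what a single control surface across clouds can look like, the snippet below uses the official Kubernetes Python client to query two kubeconfig contexts, one on-premise and one in Google’s cloud; the context names are hypothetical, and Istio’s cross-cloud service routing would be configured separately.

```python
# Query an on-premise cluster and a cloud cluster through the same
# Kubernetes API, illustrating the hybrid-cloud premise of the partnership.
from kubernetes import client, config

for context in ("on-prem-cluster", "gke-cluster"):  # hypothetical context names
    config.load_kube_config(context=context)        # assumes contexts exist in kubeconfig
    pods = client.CoreV1Api().list_pod_for_all_namespaces(watch=False)
    print(f"{context}: {len(pods.items)} pods running")
```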

Though the companies did not go into detail about how they would implement this idea, we understand that their solutions will take into account the security and policy configurations of their respective enterprise environments while providing the necessary networking and performance data in real time.

We also know that Apigee, the company Google acquired last year, will act as the medium through which legacy applications can connect to modern applications, and perhaps even tap into the power of the cloud. This is a good move: there are plenty of legacy applications in on-premise environments, so it is important that they get a fair amount of support as well.
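As a rough sketch of the gateway pattern being described, the toy Flask service below exposes a modern JSON endpoint and forwards requests to a legacy back end, reshaping the response on the way out. The URLs and field names are hypothetical; in this partnership, Apigee would provide that mediation as a managed product rather than hand-written code.

```python
# A thin facade: modern REST endpoint in front, legacy service behind.
from flask import Flask, jsonify
import requests

app = Flask(__name__)
LEGACY_URL = "http://legacy.internal:8080/customer"  # hypothetical legacy endpoint

@app.route("/api/v1/customers/<cust_id>")
def get_customer(cust_id):
    # Fetch from the legacy service, then reshape its response for modern clients.
    legacy = requests.get(LEGACY_URL, params={"id": cust_id}, timeout=5).json()
    return jsonify({"id": cust_id, "name": legacy.get("CUST_NAME")})

if __name__ == "__main__":
    app.run(port=8000)
```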

This partnership has been in the pipeline for some time. Engineers from both companies have been working together over the past few months on the feasibility and planning required to create a combined hybrid cloud. As of now, the plan is to roll out the combined service to a limited set of customers by the first week of 2018 and make it available to the general public by the second half of 2018.

Both companies have a considerable stake in this partnership. Google is going all out to catch up with its competitors Microsoft and AWS, which seem to pull further ahead every quarter, while Cisco wants to redefine its business and stay relevant in a changing tech world.

Let’s wait and see how this partnership plays out for both tech giants, as well as for the public at large.


[session] A Well-Behaved Network | @CloudExpo @Infoblox #SDN #SDS #SDDC

As you move to the cloud, your network should be efficient, secure, and easy to manage. An enterprise adopting a hybrid or public cloud needs systems and tools that provide:
Agility: ability to deliver applications and services faster, even in complex hybrid environments
Easier manageability: enable reliable connectivity with complete oversight as the data center network evolves
Greater efficiency: eliminate wasted effort while reducing errors and optimize asset utilization
Security: implement always-vigilant DNS security


Assessing the evolution of the managed services and MSP space

Managed services are dead – long live the managed services provider!

A bold statement, and one that some people may believe to be true; I would like to argue that this thinking is wrong. One thing is for sure, though: the way the industry views managed service providers is changing.

I would categorise managed service providers’ reactions to this change into three core groups: those that aren’t; those that are; and those that want to.

Those that aren’t

This category is typically reserved for extremely large companies. Their lack of desire to change is rooted either in arrogance – in other words, ‘we are too big and it will be too difficult for our customers to change anyway’ – or in a lack of capability to make fundamental changes to a business that is too large to control.

Those that are

As a rule, these are companies that are smaller than ‘those that aren’t’ but take a very similar approach to their own business model. They find themselves in a position where change is achievable, but it is enforced rather than driven by a desire to adapt.

Those that want to

These are the companies that are not only making changes, but are doing so to effect positive change within their own business as well as their customers’. They more typically refer to their customers as ‘partners’ and have the credentials to demonstrate that they can truly work together. These companies are likely to be small, and are often startups or firms with a short history.

Of course, those from a managed service provider background will insist that I am wrong or, naturally, that they are in the ‘those that want to’ category. I would argue that fewer than 20% of companies operating in the true MSP space are in that category. The other 80% are in sharp decline and, whilst they may be too big to go completely out of business, their market share is likely to shrink drastically over the next decade as existing contracts come to a close.

So why has the industry started taking a different view? The truth is, it hasn’t. This is a view the industry has long held but has done nothing about or, more importantly, has not had the opportunity to do anything about. Now the tide is turning.

From network infrastructure to end-user compute, from applications to print, more services are available in more ways than ever before. No longer is a circuit something that can only be delivered by a telco. No longer is a hosting solution something that can only be delivered by a data centre provider. The flexibility to select the technologies that best suit the business, in the most cost-effective manner and with a service wrap that meets business requirements, is powerful and achievable.

So surely the correct answer is to take everything in-house and let the business manage everything itself, right? Managed service providers really are dead? Wrong. This is an approach that some ideologues are looking to entertain, but it is a dangerous one. The flexibility offered by cloud computing has given businesses choice. It has not given them a mandate to follow a different path that must be adhered to at all costs.

A collaborative approach to service and its provision will be the new norm. Managed service providers are being forced to step up and change, and this is to the benefit of businesses, a reform that is long overdue. It may not be met with good grace. Again, I argue that there are those that aren’t, those that are, and those that want to. Lip service will not be tolerated, and businesses’ flexibility to change is not limited to technology; the same applies to service provision. It is ‘those that want to’ who will flourish as managed service providers in this new landscape.

[session] Storage-as-a-Service | @CloudExpo @NetApp #SDN #DX #DataCenter

In his session at the 21st Cloud Expo, Michael Burley, a Senior Business Development Executive in IT Services at NetApp, will describe how NetApp designed a three-year program of work to migrate 25PB of a major telco’s enterprise data to a new STaaS platform, and then secured a long-term contract to manage and operate the platform. This significant program blended the best of NetApp’s solutions and services capabilities to enable the telco’s successful adoption of private cloud storage and the launch of virtual storage services to its enterprise market.


[session] Enterprise #DigitalTransformation | @CloudExpo #AI #DX #FinTech #SmartCities

Digital Transformation (DX) is not a "one-size-fits-all" strategy. Each organization needs to develop its own unique, long-term DX plan. It must do so by realizing that we now live in a data-driven age, and that technologies such as Cloud Computing, Big Data, the IoT, Cognitive Computing, and Blockchain are only tools. In her general session at 21st Cloud Expo, Rebecca Wanta will explain how the strategy must focus on DX and include a commitment from top management to create great IT jobs, monitor progress, and never forget that their enterprise is in a day-to-day battle for survival.


T-Mobile at @CloudExpo New York | @TMobile #Mobile #IoT #DX #SmartHome #SmartCities

SYS-CON Events announced today that T-Mobile will exhibit at SYS-CON’s 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. As America’s Un-carrier, T-Mobile US, Inc., is redefining the way consumers and businesses buy wireless services through leading product and service innovation. The Company’s advanced nationwide 4G LTE network delivers outstanding wireless experiences to 67.4 million customers who are unwilling to compromise on quality and value.

Based in Bellevue, Washington, T-Mobile US provides services through its subsidiaries and operates its flagship brands, T-Mobile and MetroPCS.

For more information, visit https://www.t-mobile.com.


[slides] Multi-Cloud #DevOps | @CloudExpo @NetApp @SolidFire #Serverless

All clouds are not equal. To succeed in a DevOps context, organizations should plan to develop and deploy apps across a choice of on-premise and public clouds simultaneously, depending on business needs. This is where the concept of the Lean Cloud comes in, resting on the idea that you often need to relocate your app modules over their life cycles for both innovation and operational efficiency in the cloud.
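A toy sketch of that placement idea, with the clouds, attributes and prices invented for illustration: each module declares hard requirements, and the cheapest cloud that satisfies them is chosen, so a module can be relocated as its needs change over its life cycle.

```python
# Pick a deployment target per module from declared requirements,
# choosing the cheapest cloud that meets them all.
CLOUDS = {
    "on-prem":  {"cost_per_hour": 0.09, "data_residency": True,  "burst_capacity": False},
    "public-a": {"cost_per_hour": 0.12, "data_residency": False, "burst_capacity": True},
}

def place(module_needs):
    candidates = [
        name for name, cloud in CLOUDS.items()
        if all(cloud.get(req) for req in module_needs)
    ]
    # Among clouds meeting the hard requirements, choose the cheapest.
    return min(candidates, key=lambda n: CLOUDS[n]["cost_per_hour"]) if candidates else None

print(place(["data_residency"]))   # -> on-prem
print(place(["burst_capacity"]))   # -> public-a
```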


Tech News Recap for the Week of 10/23/17

If you had a busy week and need to catch up, here’s a tech news recap of articles you may have missed for the week of 10/23/2017!

Crucial strategies for strengthening network security. How digital transformation is reshaping the IT budget. Why network nerds are excited about SD-WAN. Microsoft Azure gets managed Kubernetes services. Cisco scoops up BroadSoft to boost its communications tools portfolio. Bad Rabbit ransomware emerges. And more top news this week you may have missed! Remember, to stay up to date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap

IT Operations

  • How digital transformation is reshaping the IT budget: The journey of 3 CIOs
  • Why network nerds are so excited about SD-WAN
  • GE adds edge analytics, AI capabilities to Predix industrial IoT suite
  • Solve the mystery of VDI licenses
  • How virtualization continues to redefine IT by extending beyond VMs
  • Field of digital dreams: Why MLB is betting its future on big data, Wi-Fi, apps, and AR

[Interested in learning more about SD-WAN? Download What to Look For When Considering an SD-WAN Solution.]

Cisco

  • Cisco scoops up BroadSoft for $1.9 billion to boost communications tools portfolio
  • Cisco, Google partner to simplify hybrid cloud deployments
  • Cisco rolls out new storage networking telemetry capabilities

Cloud

  • Fidelity Investments’ key to hybrid cloud: Application flexibility
  • Trusting the cloud? Trust yourself more

Thanks for checking out our tech news recap!

By Jake Cryan, Digital Marketing Specialist

While you’re here, check out this white paper on how to rethink your IT security, especially when it comes to financial services.