Organisations need ‘reality check’ on cloud costs, research advocates

How do you balance your cloud and on-premises budget – and how do you get the most out of it? It has long been a problem for organisations once they decide they want to move their systems to the cloud – and according to new research from SoftwareONE, companies continue to suffer from high costs and low visibility.

The study, which polled 300 C-level and IT decision makers in North America, had some interesting data points alongside some less surprising results. More than half (53%) of those polled said they were looking at a hybrid approach to IT – a stat that falls into the latter category. Yet a comparable proportion (45%) said they were either increasing or maintaining their on-prem investments in the coming year.

The problem is naturally built around cost and management. Even organisational budgets show discrepancies depending on who you talk to. On average, IT respondents put their annual budget at $5.05 million, while the C-suite puts it at $6.3m. What’s more, the C-level expects 43% of that perceived budget to go on cloud services this year, whereas IT sees it closer to a third.

On the management side, 42% of firms polled said they rely on external, third-party software to manage cloud deployments, while a quarter (26%) said cloud pricing models were more complex than their on-prem equivalents.

A hybrid approach is therefore here to stay, with a four-phase application plan – retire, retain, re-host and re-platform – advocated. To avoid major cost headaches, the report argues that a fine-grained approach to application architecture is required – examining ‘how all aspects of the cloud can be used to finely engineer the on-premises applications to realise the maximum benefits.’

Regular industry watchers will be aware that a sub-genre of companies has sprung up with the goal of giving organisations better visibility into their cloud spend. With the most popular cloud providers, such as Amazon Web Services (AWS) and Microsoft Azure, the sheer breadth of resources and tools on offer means it can take a lot of expertise to use their products efficiently.

As this publication reported last year, the sector was becoming especially hot with M&A and funding activity ramping up. CloudCheckr, a Rochester-based cloud management platform, secured $50 million in series A funding last March, while Boston-based CloudHealth Technologies raised $46m a few months later in a series D – with European expansion plans coming to fruition.

“Challenges remain in migrating high availability applications to the cloud, and hybrid and multi-cloud deployments are only adding to that complexity,” the report concludes. “Organisations succeeding with the cloud are conducting full, purpose-built migrations and relying on third party tools to better manage and fully utilise their investments in cloud.

“Organisations must have a clear vision and strategy for governing, managing and optimising their IT investments – on-premises and in the cloud – especially as they embrace the hybrid cloud,” the report adds. “To fully reap the benefits of the hybrid cloud, organisations must have complete transparency from on-premises to the cloud in order to maximise the value of their IT investments.”

You can find out more and read the report here.

10 charts that will change your perspective of big data’s growth

  • Worldwide big data market revenues for software and services are projected to increase from $42bn in 2018 to $103bn in 2027, attaining a Compound Annual Growth Rate (CAGR) of 10.48% according to Wikibon
  • Forrester predicts the global big data software market will be worth $31bn this year, growing 14% from the previous year. The entire global software market is forecast to be worth $628bn in revenue, with $302bn from applications
  • According to an Accenture study, 79% of enterprise executives agree that companies that do not embrace big data will lose their competitive position and could face extinction. Even more, 83%, have pursued big data projects to seize a competitive edge
  • 59% of executives say big data at their company would be improved through the use of AI according to PwC

Sales and marketing, research & development (R&D), supply chain management (SCM) including distribution, workplace management and operations are where advanced analytics including big data are making the greatest contributions to revenue growth today. McKinsey Analytics’ study Analytics Comes of Age, published in January 2018 (PDF, 100 pp., no opt-in) is a comprehensive overview of how analytics technologies and big data are enabling entirely new ecosystems, serving as a foundational technology for artificial intelligence (AI).

McKinsey finds that analytics and big data are making the most valuable contributions in the basic materials and high tech industries. The first chart in the following series of ten is from the McKinsey Analytics study, highlighting how analytics and big data are revolutionizing many of the foundational business processes of sales and marketing.

The following ten charts provide insights into big data’s growth:

Nearly 50% of respondents to a recent McKinsey Analytics survey say analytics and Big Data have fundamentally changed business practices in their sales and marketing functions

Also, more than 30% say the same about R&D across industries, with respondents in High Tech and Basic Materials & Energy reporting the greatest number of functions being transformed by analytics and big data. Source: Analytics Comes of Age, published in January 2018 (PDF, 100 pp., no opt-in).

Worldwide big data market revenues for software and services are projected to increase from $42bn in 2018 to $103bn in 2027, attaining a Compound Annual Growth Rate (CAGR) of 10.48%

As part of this forecast, Wikibon estimates the worldwide big data market will grow at an 11.4% CAGR between 2017 and 2027, from $35bn to $103bn. Source: Wikibon and reported by Statista.
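
As a quick sanity check, both quoted growth rates follow directly from the start and end values and the number of annual periods. The short Python snippet below is an illustrative calculation, not from either source; it reproduces the 10.48% figure for 2018–2027 and the 11.4% figure for 2017–2027.

```python
# Sanity-check the quoted CAGRs from the start/end market sizes above.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` annual periods."""
    return (end_value / start_value) ** (1 / years) - 1

# $42bn (2018) -> $103bn (2027): nine annual periods
print(f"{cagr(42, 103, 2027 - 2018):.2%}")   # ~10.48%

# $35bn (2017) -> $103bn (2027): ten annual periods
print(f"{cagr(35, 103, 2027 - 2017):.2%}")   # ~11.40%
```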

According to NewVantage Venture Partners, big data is delivering the most value to enterprises by decreasing expenses (49.2%) and creating new avenues for innovation and disruption (44.3%)

Discovering new opportunities to reduce costs by combining advanced analytics and big data delivers the most measurable results, which helps explain why this category is the most prevalent in the study. Meanwhile, 69.4% have started using big data to create a data-driven culture, with 27.9% reporting results. Source: NewVantage Venture Partners, Big Data Executive Survey 2017 (PDF, 16 pp.)

The Hadoop and big data markets are projected to grow from $17.1bn in 2017 to $99.31bn in 2022 attaining a 28.5% CAGR

The greatest period of projected growth is in 2021 and 2022 when the market is projected to jump $30bn in value in one year. Source: StrategyMRC and reported by Statista.

Big data applications and analytics spending is projected to grow from $5.3bn in 2018 to $19.4bn in 2026, attaining a CAGR of 15.49%

The professional services segment of the worldwide big data market is projected to grow from $16.5bn in 2018 to $21.3bn in 2026. Source: Wikibon and reported by Statista.

Comparing the worldwide demand for advanced analytics and big data-related hardware, services and software, the software category’s dominance becomes clear

The software segment is projected to grow the fastest of all categories, increasing from $14bn in 2018 to $46bn in 2027 and attaining a CAGR of 12.6%. Sources: Wikibon/SiliconANGLE and Statista estimates, reported by Statista.

Advanced analytics and big data revenue in China are projected to be worth ¥57.8bn ($9bn) by 2020

The Chinese market is predicted to be one of the fastest growing globally, growing at a CAGR of 31.72% in the forecast period. Sources: Social Sciences Academic Press (China) and Statista.

Non-relational analytic data stores are projected to be the fastest growing technology category in big data, growing at a CAGR of 38.6% between 2015 and 2020

Cognitive software platforms (23.3% CAGR) and Content Analytics (17.3%) round out the top three fastest growing technologies between 2015 and 2020. Source: Statista.

A decentralized general-merchandise retailer that used big data to create performance group clusters saw sales grow 3% to 4%

Big data is the catalyst of a retailing industry makeover, bringing greater precision to localization than has been possible before. Big data is being used today to increase the ROI of endcap promotions, optimize planograms, help to improve upsell and cross-sell sales performance and optimize prices on items that drive the greatest amount of foot traffic. Source: Use Big Data to Give Local Shoppers What They Want, Boston Consulting Group, February 8, 2018.

84% of enterprises have launched advanced analytics and big data initiatives to bring greater accuracy and accelerate their decision-making

Big data initiatives focused on this area also have the greatest success rate (69%) according to the most recent NewVantage Venture Partners Survey. Over a third of enterprises, 36%, say this area is their top priority for advanced analytics and Big Data investment. Sources: NewVantage Venture Partners Survey and Statista.

Additional big data information sources

4 Pain Points of Big Data and how to solve them, Digital McKinsey via Medium, November 10, 2017

53% Of Companies Are Adopting Big Data Analytics, Forbes, December 24, 2017

6 Predictions For The $203 Billion Big Data Analytics Market, Forbes, Gil Press, January 20, 2017

Analytics Comes of Age, McKinsey Analytics, January 2018 (PDF, 100 pp.)

Big Data & Analytics Is The Most Wanted Expertise By 75% Of IoT Providers, Forbes, August 21, 2017

Big Data 2017 – Market Statistics, Use Cases, and Trends, Calsoft (36 pp., PDF)

Big Data and Business Analytics Revenues Forecast to Reach $150.8 Billion This Year, Led by Banking and Manufacturing Investments, According to IDC, March 14, 2017

Big Data Executive Survey 2018, Data and Innovation – How Big Data and AI are Driving Business Innovation, NewVantage Venture Partners, January 2018 (PDF, 18 pp.)

Big Data Tech Hadoop and Spark Get Slow Start in Enterprise, Information Week, March 20, 2018

Big Success With Big Data, Accenture (PDF, 12 pp.)

Gartner Survey Shows Organizations Are Slow to Advance in Data and Analytics, Gartner, February 5, 2018

How Big Data and AI Are Driving Business Innovation in 2018, MIT Sloan Management Review, February 5, 2018

IDC forecasts big growth for Big Data, Analytics Magazine, April 2018

IDC Worldwide Big Data Technology and Services 2012 – 2015 Forecast, Courtesy of EC Europa (PDF, 34 pp.)

Midyear Global Tech Market Outlook For 2017 To 2018, Forrester, September 25, 2017 (client access reqd.)

Oracle Industry Analyst Reports – Data-rich website of industry analyst reports

Ten Ways Big Data Is Revolutionizing Marketing And Sales, Forbes, May 9, 2016

The Big Data Payoff: Turning Big Data into Business Value, CAP Gemini & Informatica Study, (PDF, 12 pp.)

The Forrester Wave™: Enterprise BI Platforms With Majority Cloud Deployments, Q3 2017 courtesy of Oracle

Haley Fung Joins @DevOpsSUMMIT NY Faculty | @IBMDevOps #Serverless #DevOps #APM #Monitoring #ContinuousDelivery

DevOps with IBM Z? You heard right. Maybe you’re wondering what a developer can do to speed up the entire development cycle – coding, testing, source code management, and deployment. In this session you will learn how to integrate z application assets into a DevOps pipeline using familiar tools like Jenkins and UrbanCode Deploy, plus z/OSMF workflows, all of which can increase deployment speed while simultaneously improving reliability. You will also learn how to provision a mainframe system as a cloud-like service.

Blockchain/Crypto Bubble: Dot-Com Bubble All Over Again? | @CloudEXPO #FinTech #Blockchain #Bitcoin

Today, the entire blockchain/cryptocurrency hairball is itself in a massive bubble. Rather than speculation in cryptos driving the market over the cliff, however, it’s speculative interest in initial coin offerings (ICOs).

This is no mere currency play. Deep pockets with more money than sense are betting on an entire market full of startups, largely because of FOMO – ‘fear of missing out.’

All this hullabaloo is giving me a serious case of déjà vu. I’ve lived through such a bubble before – the dot-com bubble of the turn of the century.

Unlike most of the blockchain/crypto players out there who were children at the time, I saw the craziness of the dot-com runup and bust from the inside. Similarities to the current bubble abound.

Lest we make the mistakes of the past, however, it’s also important to point out the differences. In truth, the two bubbles only have superficial similarities. We can only gain wisdom by understanding both how they are alike – and how they are different.

Designing new cloud architectures: Exploring CI/CD – from data centre to cloud

Today, most companies are using continuous integration and delivery (CI/CD) in one form or another – and this matters for several reasons:

  • It increases the quality of the code base and the testing of that code base
  • It greatly increases team collaboration
  • It reduces the time in which new features reach the production environment
  • It reduces the number of bugs that in turn reach the production environment

Granted, these reasons apply if – and only if – CI/CD is applied with more than 70% correctness. Although there is no single perfect way of doing CI/CD, there are best practices to follow, as well as caveats to avoid in order to prevent unwanted scenarios.

Some of the problems that might otherwise arise include: the build being broken frequently; the velocity at which new features are pushed creating havoc for the testing teams or even the client acceptance team; features being pushed to production without proper or sufficient testing; difficulty in tracking, and even separating, big releases; and old-school engineers struggling to adapt to the style.

Infrastructure as code (IaC)

A few years ago, the prevailing thinking was that CI/CD was only useful for the product itself; that it would only affect the development team, and that operations teams were there merely to support the development lifecycle. This development-centric approach came to an abrupt end when a new set of technologies appeared and captivated the IT market: the technologies that allow infrastructure to be created as code.

CI/CD is no longer exclusive to development teams. Its umbrella has expanded across the entirety of engineering: software engineers, infrastructure, network and systems engineers, and so forth.
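
To illustrate what infrastructure as code looks like in practice, here is a minimal sketch using Pulumi’s Python SDK – one of several options alongside Terraform, CloudFormation and the like. The tool choice, resource names and settings are illustrative assumptions, not a prescription.

```python
# Minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# Assumes an AWS account and a configured Pulumi project; names are illustrative.
import pulumi
import pulumi_aws as aws

# Declare a versioned bucket for build artefacts. The definition lives in the
# same repository as the application code, so it flows through the same
# review process and CI/CD pipeline as any other change.
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the generated bucket name so later pipeline stages can reference it.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```

Because the bucket is declared in code rather than clicked together in a console, the same pipeline that tests and ships the application can also preview and apply infrastructure changes.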

DevOps

Nobody knows what DevOps really is, but if you are not doing, using, breathing, dreaming – being? – DevOps, you’re doing it wrong. All teasing aside, with the advent of DevOps the gap that existed between development teams and operations teams has narrowed, to the extent that some companies have merged the teams. Others have gone further, building multidisciplinary teams in which engineers work on the product throughout its lifecycle – coding, testing and deploying – on occasion including security as well, an approach now called DevOpsSec.

As the DevOps movement becomes more popular, CI/CD does as well, since it is a major component. Not doing CI/CD means not doing DevOps.

From data centre to cloud

Having unpacked these terms and concepts, it is clear why CI/CD is so important. Since architectures and abstraction levels change when migrating a product from the data centre to the cloud, it becomes necessary to evaluate what is needed in the new ecosystem, for two reasons:

  • To take advantage of what the cloud has to offer, in terms of the new paradigm and the plethora of options
  • To avoid making the mistake of treating the cloud as a data centre and building everything from scratch

Necessary considerations

The CI/CD implementation to use in the cloud must fulfil the majority of the following:

  • Provided as a service: The cloud is XaaS-centric, and avoiding building things from scratch is a must. If something is being built from scratch and it is neither an in-house component nor a value-added product feature, I would suggest a review of the architecture in addition to a solid business justification
  • Easy to get in, easy to get out: A straightforward on-boarding and off-boarding process suggests that the inner workings of the implementation are likely to be uncomplicated as well. And if the tool does not work as expected, an easy way out is always a necessity
  • Portable configuration: This is a nice-to-have: avoiding reinventing the wheel and having to learn a given implementation’s details in depth makes it easier to move from one system to another. Typical configurations use YAML or JSON formats – however, many providers allow the use of a familiar language such as Python, Java or JavaScript to fit the customer (see the sketch after this list)
  • Integration with VCS as a service: This is practically a given. As an example, Bitbucket provides pipelines within a repository. AWS does it differently with CodeCommit, which provides Git repositories as a service within. Different cloud providers will employ different ways and some will integrate with external repositories as well
  • Artifact store: It depends on the type of application, but having an artefact store to store the output of the build is often a good idea. Once the delivery part is done, deploying to production is significantly easier if everything is packaged neatly
  • Statistics and metric visualisation: Visibility into what is occurring throughout the entire pipeline – which tests are failing, which features are ready, which pipeline is having problems – and, analogously, into the code base, not to mention the staging/testing/UAT or similar systems prior to production
  • No hidden fees: Although the technological part is important, the financial and economic part will be too. In the cloud, the majority of spend turns into OpEx, and things that are running but unused can have a major cost impact. In terms of pipelines, it is important to focus on the cost of build minutes per month, the cost of storage per GB for the VCS and artefact store, the cost per parallel pipeline, and the cost of the testing infrastructure used for the purpose, among other things. Being fully aware of the minutiae and reading the fine print pays off
  • Alerts and notifications: Mainly in case of failure, but also setting minimum and maximum thresholds for the number of commits, for example, can yield useful information; no-one committing frequently to the code base may mean the DevOps chain is breaking down
  • Test environments easy to create/destroy: The less manual integration, the better. This needs to be automated and integrated
  • Easy ‘delivery to deployment’ integration: The signoff after the delivery stage will be a manual step, but only to afterwards trigger a set of automated steps. Long gone are the days in which an operator ran a code upgrade manually
  • Fast, error-free rollback: When problems arise after a deployment, the rollback must be easy, fast and, above all, automatic or at least semi-automatic. Human intervention at this stage is a recipe for disaster
  • Branched testing: Having a single pipeline and only performing CI/CD on the master branch is an unpopular idea – not to mention that if that is the case, breaking the build would mean affecting everyone else’s job
  • Extensive testing suite: This may not be necessarily cloud-only, but it is of significance. At minimum, four of the following must exist: unit testing, integration testing, acceptance, smoke, capacity, performance, UI/UX
  • Build environment as a service: Some cloud providers allow for virtualised environments; Bitbucket pipelines allow for integration with Docker and Docker Hub for the build environment
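
As a minimal illustration of the portable-configuration point above, the sketch below loads a provider-agnostic pipeline definition in Python and checks it for the handful of keys most CI/CD services share, before it is translated into a specific vendor’s format. The file name, keys and stage names are assumptions for the example, not any provider’s schema.

```python
# Sketch: validate a provider-agnostic pipeline definition before translating
# it to a specific CI/CD service. File name, keys and stage names are
# illustrative assumptions, not any vendor's schema.
import sys
import yaml  # PyYAML

REQUIRED_KEYS = {"image", "stages"}            # build environment + ordered stages
KNOWN_STAGES = {"build", "test", "deliver", "deploy"}

def load_pipeline(path: str) -> dict:
    with open(path) as handle:
        config = yaml.safe_load(handle)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"pipeline config missing keys: {sorted(missing)}")
    unknown = {stage["name"] for stage in config["stages"]} - KNOWN_STAGES
    if unknown:
        raise ValueError(f"unknown stages: {sorted(unknown)}")
    return config

if __name__ == "__main__":
    pipeline = load_pipeline(sys.argv[1] if len(sys.argv) > 1 else "pipeline.yml")
    print(f"{len(pipeline['stages'])} stages defined, build image: {pipeline['image']}")
```

Keeping the definition in a neutral format like this makes switching providers – the “easy to get out” criterion – far less painful.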

Monitoring, metrics, and continuous tracking of the production environment

The show is not over once deployment happens. It is at that moment, and afterwards, that it becomes critical to keep track of what is occurring. Any glitch or problem can potentially snowball into an outage; thus it is important to extract as many metrics and monitor as many sensors as possible without losing track of the important things. By this, I mean establishing priorities to avoid generating chaos among the engineers on call and at their desks.

Most cloud providers offer monitoring, metrics, logs and alerts as a service, plus integration with other external systems. For instance, AWS provides CloudWatch, which delivers all of this as an integrated service. Google Cloud provides Stackdriver, a similar offering; Microsoft has a slightly more basic service in Azure Monitor. Another giant, Alibaba, provides Cloud Monitor at a similar level to the competition. Needless to say, every major cloud provides this as a service at one level or another.
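
To make that concrete, here is a small sketch that pushes a custom post-deployment metric to AWS CloudWatch with boto3 and attaches an alarm to it. The namespace, metric name, threshold and SNS topic are illustrative assumptions, and the other providers named above expose equivalent capabilities through their own APIs.

```python
# Sketch: publish a custom post-deployment metric and alarm on it with
# AWS CloudWatch via boto3. Namespace, metric, threshold and SNS topic
# are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Emit one data point per deployment, e.g. from the last pipeline stage.
cloudwatch.put_metric_data(
    Namespace="MyApp/Deployments",
    MetricData=[{
        "MetricName": "FailedHealthChecks",
        "Value": 0,           # number of failed post-deploy health checks
        "Unit": "Count",
    }],
)

# Alarm if health checks start failing after a release, so on-call engineers
# are paged before a glitch snowballs into an outage.
cloudwatch.put_metric_alarm(
    AlarmName="myapp-post-deploy-health",
    Namespace="MyApp/Deployments",
    MetricName="FailedHealthChecks",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:oncall-topic"],  # placeholder topic
)
```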

This is an essential component and must not go unnoticed – I cannot emphasise this enough. Even if a cloud does not provide such a service itself, it must offer integration with monitoring services from cloud-oriented providers such as Dynatrace, which integrates with the most popular enterprise clouds.

Conclusion

CI/CD is a major component of the technology process. It can make or break your product, in the cloud and in the data centre; evaluating the list above when designing a new cloud architecture can save significant time, money and effort.

When designing a cloud architecture, it is fundamentally important to avoid copying the current architecture wholesale, and instead to approach the design as if the application were cloud-native – born to run in the cloud, along with its entire lifecycle. As mentioned previously, once a first architecture has been proposed and peer reviewed, a list of important caveats such as the one above must be considered before moving on to a more solid version of the architecture.

As a final comment, doing CI/CD halfway is better than not doing it at all. Some engineers and authors may argue that it is a binary decision – either there is CI/CD or there is not. I would rather argue that every small improvement gained by adopting CI/CD – or even CI or CD alone, in stages – is a win. In racing, whether it is by a mile or a metre, a win is a win.

Happy architecting and let us explore the cloud in depth.